Unnamed: 0 (int64, 0-16k) | text_prompt (string, lengths 110-62.1k) | code_prompt (string, lengths 37-152k) |
---|---|---|
4,400 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a id='step1'></a>
22 - Example Simulation
Step1: <a id='step2'></a>
Step2: 1. Create your module and evaluate irradiance without the mirror element
Step3: 2. Add Mirror
Approach 1
Step4: We calculate the displacement of the mirror as per the equations shown in the image at the beginning of the tutorial
Step5: Use rvu in the terminal, or uncomment the cell below, to view the generated geometry; it should look like this
Step6: Just as a sanity check, we could sample the mirror...
Step7: And we can calculate the increase in front irradiance from the mirror
Step8: Approach 2 | Python Code:
import os
from pathlib import Path
testfolder = str(Path().resolve().parent.parent / 'bifacial_radiance' / 'TEMP' / 'Tutorial_22')
if not os.path.exists(testfolder):
os.makedirs(testfolder)
print ("Your simulation will be stored in %s" % testfolder)
import bifacial_radiance
import numpy as np
import pprint
import pandas as pd
Explanation: <a id='step1'></a>
22 - Example Simulation: Mirrors and Modules
An example tutorial for the case brought up in Issue #372
End of explanation
demo = bifacial_radiance.RadianceObj('tutorial_22', path=testfolder) # Adding a simulation name. This is optional.
demo.setGround(0.2)
epwfile = demo.getEPW(lat=37.5, lon=-77.6)
metdata = demo.readWeatherFile(weatherFile=epwfile, coerce_year=2021)
timeindex = metdata.datetime.index(pd.to_datetime('2021-01-01 12:0:0 -5'))
demo.gendaylit(timeindex) # Choosing a winter timestamp (noon on January 1) when the sun is low on the horizon
Explanation: <a id='step2'></a>
End of explanation
tilt = 75
sceneDict1 = {'tilt':tilt,'pitch':5,'clearance_height':0.05,'azimuth':180,
'nMods': 1, 'nRows': 1, 'originx': 0, 'originy': 0, 'appendRadfile':True}
mymodule1 = demo.makeModule(name='test-module',x=2,y=1, numpanels=1)
sceneObj1 = demo.makeScene(mymodule1, sceneDict1)
octfile = demo.makeOct(demo.getfilelist())
analysis = bifacial_radiance.AnalysisObj(octfile, demo.basename)
frontscan, backscan = analysis.moduleAnalysis(sceneObj1, sensorsy=1)
results = analysis.analysis(octfile, demo.basename, frontscan, backscan)
withoutMirror = bifacial_radiance.load.read1Result('results/irr_tutorial_22.csv')
withoutMirror
Explanation: 1. Create your module and evaluate irradiance without the mirror element
End of explanation
demo.addMaterial(material='testmirror', Rrefl=0.94, Grefl=0.96, Brefl=0.96,
materialtype = 'mirror') # specularity and roughness not needed for mirrors or glass.
mymodule2 = demo.makeModule(name='test-mirror',x=2,y=1, numpanels=1, modulematerial='testmirror')
Explanation: 2. Add Mirror
Approach 1: Pretend the mirror is another module.
We start by creating the mirror material in our ground.rad file, in case it is not there. For mirror or glass primitives (material classes), specularity and roughness are not needed.
You could alternatively use a plastic material, increasing the specularity and lowering the roughness to get a very reflective surface.
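As a rough, hedged sketch of that alternative (the specularity and roughness keyword names are an assumption based on the comment in the cell above, not something this tutorial confirms):
# demo.addMaterial(material='testshinyplastic', Rrefl=0.94, Grefl=0.96, Brefl=0.96,
#                  materialtype='plastic', specularity=0.9, roughness=0.02)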
End of explanation
originy = -(0.5*mymodule2.sceney + 0.5*mymodule1.sceney*np.cos(np.radians(tilt)))
sceneDict2 = {'tilt':0,'pitch':0.00001,'clearance_height':0.05,'azimuth':180,
'nMods': 1, 'nRows': 1, 'originx': 0, 'originy': originy, 'appendRadfile':True}
sceneObj2 = demo.makeScene(mymodule2, sceneDict2)
octfile = demo.makeOct(demo.getfilelist())
Explanation: We calculate the displacement of the mirror as per the equations shown in the image at the beginning of the tutorial
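As a quick numeric sanity check of that displacement formula (assuming, as in this tutorial, a single-panel module with y = 1 m so that sceney is about 1 m, and tilt = 75 degrees):
import numpy as np
print(-(0.5*1.0 + 0.5*1.0*np.cos(np.radians(75))))  # approximately -0.63 m, matching the originy computed above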
End of explanation
## Uncomment the line below to run rvu from the Jupyter notebook instead of your terminal.
## Execution will pause until you close the rvu window.
# !rvu -vf views\front.vp -e .01 -vp 4 -0.6 1 -vd -0.9939 0.1104 0.0 tutorial_22.oct
analysis = bifacial_radiance.AnalysisObj(octfile, demo.basename)
frontscan, backscan = analysis.moduleAnalysis(sceneObj1, sensorsy=1)
results = analysis.analysis(octfile, name=demo.basename+'_withMirror', frontscan=frontscan, backscan=backscan)
withMirror = bifacial_radiance.load.read1Result('results/irr_tutorial_22_withMirror.csv')
withMirror
Explanation: Use rvu in the terminal, or uncomment the cell below, to view the generated geometry; it should look like this:
End of explanation
frontscan, backscan = analysis.moduleAnalysis(sceneObj2, sensorsy=1)
results = analysis.analysis(octfile, name=demo.basename+'_Mirroritself', frontscan=frontscan, backscan=backscan)
bifacial_radiance.load.read1Result('results/irr_tutorial_22_Mirroritself.csv')
Explanation: Just as a sanity check, we could sample the mirror...
End of explanation
print("Gain from mirror:", round((withMirror.Wm2Front[0] - withoutMirror.Wm2Front[0] )*100/withoutMirror.Wm2Front[0],1 ), "%" )
Explanation: And we can calculate the increase in front irradiance from the mirror:
End of explanation
# name='Mirror1'
# text='! genbox black cuteMirror 2 1 0.02 | xform -t -1 -0.5 0 -t 0 {} 0'.format(originy)
# customObject = demo.makeCustomObject(name,text)
# demo.appendtoScene(radfile=scene.radfiles, customObject=customObject, text="!xform -rz 0")
# sceneObj2 = demo.makeScene(mymodule2, sceneDict2)
Explanation: Approach 2:
Create mirrors as their own objects and append them to the scene with appendtoScene, as in tutorial 5. Sample code below:
End of explanation |
4,401 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
word2vec
This notebook is equivalent to demo-word.sh, demo-analogy.sh, demo-phrases.sh and demo-classes.sh from Google.
Training
Download some data, for example
Step1: Run word2phrase to group up similar words "Los Angeles" to "Los_Angeles"
Step2: This will create a text8-phrases file that we can use as a better input for word2vec.
Note that you could easily skip this previous step and use the original data as input for word2vec.
Train the model using the word2phrase output.
Step3: That generated a text8.bin file containing the word vectors in a binary format.
Do the clustering of the vectors based on the trained model.
Step4: That created a text8-clusters.txt with the cluster for every word in the vocabulary
Predictions
Step5: Import the word2vec binary file created above
Step6: We can take a look at the vocabulary as a numpy array
Step7: Or take a look at the whole matrix
Step8: We can retrieve the vector of individual words
Step9: We can do simple queries to retrieve words similar to "socks" based on cosine similarity
Step10: This returned a tuple with 2 items
Step11: There is a helper function to create a combined response
Step12: It is easy to make that numpy array a pure Python response
Step13: Phrases
Since we trained the model with the output of word2phrase we can ask for similarity of "phrases"
Step14: Analogies
It's possible to do more complex queries, like analogies such as
Step15: Clusters
Step16: We can get the cluster number for individual words
Step17: We can get all the words grouped in a specific cluster
Step18: We can add the clusters to the word2vec model and generate a response that includes the clusters | Python Code:
import word2vec
Explanation: word2vec
This notebook is equivalent to demo-word.sh, demo-analogy.sh, demo-phrases.sh and demo-classes.sh from Google.
Training
Download some data, for example: http://mattmahoney.net/dc/text8.zip
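A minimal, hedged sketch for fetching that corpus (file locations are an assumption; adjust them to wherever you keep your data):
import urllib.request
import zipfile
urllib.request.urlretrieve('http://mattmahoney.net/dc/text8.zip', 'text8.zip')
with zipfile.ZipFile('text8.zip') as z:
    z.extractall('.')  # produces a plain-text file named 'text8'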
End of explanation
word2vec.word2phrase('/Users/drodriguez/Downloads/text8', '/Users/drodriguez/Downloads/text8-phrases', verbose=True)
Explanation: Run word2phrase to group up similar words "Los Angeles" to "Los_Angeles"
End of explanation
word2vec.word2vec('/Users/drodriguez/Downloads/text8-phrases', '/Users/drodriguez/Downloads/text8.bin', size=100, verbose=True)
Explanation: This will create a text8-phrases file that we can use as a better input for word2vec.
Note that you could easily skip this previous step and use the original data as input for word2vec.
Train the model using the word2phrase output.
End of explanation
word2vec.word2clusters('/Users/drodriguez/Downloads/text8', '/Users/drodriguez/Downloads/text8-clusters.txt', 100, verbose=True)
Explanation: That generated a text8.bin file containing the word vectors in a binary format.
Do the clustering of the vectors based on the trained model.
End of explanation
import word2vec
Explanation: That created a text8-clusters.txt with the cluster for every word in the vocabulary
Predictions
End of explanation
model = word2vec.load('/Users/drodriguez/Downloads/text8.bin')
Explanation: Import the word2vec binary file created above
End of explanation
model.vocab
Explanation: We can take a look at the vocabulary as a numpy array
End of explanation
model.vectors.shape
model.vectors
Explanation: Or take a look at the whole matrix
End of explanation
model['dog'].shape
model['dog'][:10]
Explanation: We can retrieve the vector of individual words
End of explanation
indexes, metrics = model.cosine('socks')
indexes, metrics
Explanation: We can do simple queries to retrieve words similar to "socks" based on cosine similarity:
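To make the cosine-similarity claim concrete, here is a hedged sketch of what such a query computes, using only the vocab and vectors arrays shown above ('socks' is just the example word):
import numpy as np
v = model['socks']
sims = model.vectors.dot(v) / (np.linalg.norm(model.vectors, axis=1) * np.linalg.norm(v))
top = np.argsort(-sims)[1:11]  # skip index 0, which is 'socks' itself
list(zip(model.vocab[top], sims[top]))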
End of explanation
model.vocab[indexes]
Explanation: This returned a tuple with 2 items:
1. numpy array with the indexes of the similar words in the vocabulary
2. numpy array with cosine similarity to each word
It's possible to get the words for those indexes
End of explanation
model.generate_response(indexes, metrics)
Explanation: There is a helper function to create a combined response: a numpy record array
End of explanation
model.generate_response(indexes, metrics).tolist()
Explanation: It is easy to make that numpy array a pure Python response:
End of explanation
indexes, metrics = model.cosine('los_angeles')
model.generate_response(indexes, metrics).tolist()
Explanation: Phrases
Since we trained the model with the output of word2phrase we can ask for similarity of "phrases"
End of explanation
indexes, metrics = model.analogy(pos=['king', 'woman'], neg=['man'], n=10)
indexes, metrics
model.generate_response(indexes, metrics).tolist()
Explanation: Analogies
It's possible to do more complex queries, like analogies such as: king - man + woman = queen
This method returns the same as cosine: the indexes of the words in the vocab and the metric
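A hedged illustration of the vector arithmetic behind that analogy (a manual approximation, not necessarily the library's exact implementation):
import numpy as np
target = model['king'] - model['man'] + model['woman']
target = target / np.linalg.norm(target)
sims = model.vectors.dot(target)
[model.vocab[i] for i in np.argsort(-sims)[:5]]  # 'queen' should rank near the top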
End of explanation
clusters = word2vec.load_clusters('/Users/drodriguez/Downloads/text8-clusters.txt')
Explanation: Clusters
End of explanation
clusters['dog']
Explanation: We can get the cluster number for individual words
End of explanation
clusters.get_words_on_cluster(90).shape
clusters.get_words_on_cluster(90)[:10]
Explanation: We can get all the words grouped in a specific cluster
End of explanation
model.clusters = clusters
indexes, metrics = model.analogy(pos=['paris', 'germany'], neg=['france'], n=10)
model.generate_response(indexes, metrics).tolist()
Explanation: We can add the clusters to the word2vec model and generate a response that includes the clusters
End of explanation |
4,402 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
Step2: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following
Step5: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
Step8: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint
Step10: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
Step12: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step17: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note
Step20: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling
Step23: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option
Step26: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step29: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step32: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model
Step35: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following
Step37: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
Step38: Hyperparameters
Tune the following parameters
Step40: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
Step42: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
Step45: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
x_norm = x.reshape(x.size)
x_norm = (x_norm - min(x_norm))/(max(x_norm)-min(x_norm))
return x_norm.reshape(x.shape)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
one_hot_array = np.zeros((len(x), 10))
for index in range(len(x)):
val_index = x[index]
one_hot_array[index][val_index] = 1
return one_hot_array
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
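One hedged way to follow that hint, sketched with scikit-learn (this assumes scikit-learn is available in the environment; the manual implementation above works just as well):
from sklearn import preprocessing
lb = preprocessing.LabelBinarizer()
lb.fit(list(range(10)))  # fix the encoding for labels 0-9 once, outside the function
def one_hot_encode_sklearn(x):
    # Returns the same (len(x), 10) one-hot array as the manual version above.
    return lb.transform(x)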
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
return tf.placeholder(tf.float32, shape=(None, image_shape[0], image_shape[1], image_shape[2]), name='x')
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
return tf.placeholder(tf.uint8, (None, n_classes), name='y')
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
return tf.placeholder(tf.float32, None, name='keep_prob')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernal size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
x_tensor_dims = x_tensor._shape.ndims
channel_num = x_tensor._shape.dims[x_tensor_dims - 1].value
mu = 0
sigma = 0.1
conv_weight = tf.Variable(tf.truncated_normal(shape=(conv_ksize[0], conv_ksize[1], channel_num, conv_num_outputs), mean=mu, stddev=sigma))
conv_bias = tf.Variable(tf.zeros(conv_num_outputs))
conv = tf.nn.conv2d(x_tensor, conv_weight, strides=[1, conv_strides[0], conv_strides[1], 1], padding='SAME') + conv_bias
conv = tf.nn.relu(conv)
return tf.nn.max_pool(conv, ksize=[1, pool_ksize[0], pool_ksize[1], 1], strides=[1, pool_strides[0], pool_strides[1], 1], padding='SAME')  # ksize uses both pooling kernel dimensions
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
shaped = x_tensor.get_shape().as_list()
reshaped = tf.reshape(x_tensor, [-1, shaped[1] * shaped[2] * shaped[3]])
return reshaped
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
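For reference, the shortcut option mentioned above could look like the hedged sketch below (it relies on the TF 1.x contrib package, which only exists in TensorFlow versions that still ship tf.contrib):
def flatten_shortcut(x_tensor):
    # Same (Batch Size, Flattened Image Size) output as the manual reshape above.
    return tf.contrib.layers.flatten(x_tensor)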
End of explanation
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
weight = tf.Variable(tf.truncated_normal(shape=[x_tensor.get_shape().as_list()[1], num_outputs], mean=0.0, stddev=0.1))
bias = tf.Variable(tf.zeros(shape=num_outputs))
return tf.nn.relu(tf.matmul(x_tensor, weight) + bias)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
weight = tf.Variable(tf.truncated_normal(shape=[x_tensor.get_shape().as_list()[1], num_outputs], mean=0.0, stddev=0.1))
bias = tf.Variable(tf.zeros(shape=num_outputs))
return tf.matmul(x_tensor, weight) + bias
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_num_outputs = 10
conv_ksize = (3, 3)
conv_strides = (1, 1)
pool_ksize = (2, 2)
pool_strides = (2, 2)
x_tensor = conv2d_maxpool(x, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
# Function Definition from Above:
# flatten(x_tensor)
x_tensor = flatten(x_tensor)
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
num_outputs = 100
x_tensor = fully_conn(x_tensor, num_outputs)
x_tensor = tf.nn.dropout(x_tensor, keep_prob)
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
num_outputs = 10
model = output(x_tensor, num_outputs)
# TODO: return output
return model
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
feed_dict = {
x: feature_batch,
y: label_batch,
keep_prob: keep_probability}
session.run(optimizer, feed_dict=feed_dict)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
current_cost = session.run(cost,feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.})
valid_accuracy = session.run(accuracy,feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.})
print('Loss: {:<8.3} Valid Accuracy: {:<5.3}'.format(current_cost,valid_accuracy))
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
# TODO: Tune Parameters
epochs = 20
batch_size = 128
keep_probability = 0.5
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or start overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation |
4,403 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Pairs Trading
By Delaney Mackenzie and Maxwell Margenot
Part of the Quantopian Lecture Series
Step1: Generating Two Fake Securities
We model X's daily returns by drawing from a normal distribution. Then we perform a cumulative sum to get the value of X on each day.
Step2: Now we generate Y. Remember that Y is supposed to have a deep economic link to X, so the price of Y should vary pretty similarly. We model this by taking X, shifting it up and adding some random noise drawn from a normal distribution.
Step3: Cointegration
We've constructed an example of two cointegrated series. Cointegration is a more subtle relationship than correlation. If two time series are cointegrated, there is some linear combination between them that will vary around a mean. At all points in time, the combination between them is related to the same probability distribution.
For more details on how we formally define cointegration and how to understand it, please see the Integration, Cointegration, and Stationarity lecture from the Quantopian Lecture Series.
We'll plot the difference between the two now so we can see how this looks.
Step4: Testing for Cointegration
That's an intuitive definition, but how do we test for this statistically? There is a convenient cointegration test that lives in statsmodels.tsa.stattools. Let's say that our confidence level is $0.05$. We should see a p-value below our cutoff, as we've artificially created two series that are the textbook definition of cointegration.
Step5: Correlation vs. Cointegration
Correlation and cointegration, while theoretically similar, are not the same. To demonstrate this, we'll show examples of series that are correlated, but not cointegrated, and vice versa. To start let's check the correlation of the series we just generated.
Step6: That's very high, as we would expect. But how would two series that are correlated but not cointegrated look?
Correlation Without Cointegration
A simple example is two series that just diverge.
Step7: Cointegration Without Correlation
A simple example of this case is a normally distributed series and a square wave.
Step8: Sure enough, the correlation is incredibly low, but the p-value shows that these are cointegrated.
Hedging
Because you'd like to protect yourself from bad markets, often times short sales will be used to hedge long investments. Because a short sale makes money if the security sold loses value, and a long purchase will make money if a security gains value, one can long parts of the market and short others. That way if the entire market falls off a cliff, we'll still make money on the shorted securities and hopefully break even. In the case of two securities we'll call it a hedged position when we are long on one security and short on the other.
The Trick
Step9: Looking for Cointegrated Pairs of Alternative Energy Securities
We are looking through a set of solar company stocks to see if any of them are cointegrated. We'll start by defining the list of securities we want to look through. Then we'll get the pricing data for each security for the year of 2014.
Our approach here is somewhere in the middle of the spectrum that we mentioned before. We have formulated an economic hypothesis that there is some sort of link between a subset of securities within the energy sector and we want to test whether there are any cointegrated pairs. This incurs significantly less multiple comparisons bias than searching through hundreds of securities and slightly more than forming a hypothesis for an individual test.
NOTE
Step10: Example of how to get all the prices of all the stocks loaded using get_pricing() above in one pandas dataframe object
Step11: Example of how to get just the prices of a single stock that was loaded using get_pricing() above
Step12: Now we'll run our method on the list and see if any pairs are cointegrated.
Step13: Looks like 'ABGB' and 'FSLR' are cointegrated. Let's take a look at the prices to make sure there's nothing weird going on.
Step14: Calculating the Spread
Now we will plot the spread of the two series. In order to actually calculate the spread, we use a linear regression to get the coefficient for the linear combination to construct between our two securities, as shown in the stationarity lecture. Using a linear regression to estimate the coefficient is known as the Engle-Granger method.
Step15: Alternatively, we could examine the ratio between the two series.
Step16: Examining the price ratio of a trading pair is a traditional way to handle pairs trading. Part of why this works as a signal is based in our assumptions of how stock prices move, specifically because stock prices are typically assumed to be log-normally distributed. What this implies is that by taking a ratio of the prices, we are taking a linear combination of the returns associated with them (since prices are just the exponentiated returns).
This can be a little irritating to deal with for our purposes as purchasing the precisely correct ratio of a trading pair may not be practical. We choose instead to move forward with simply calculating the spread between the cointegrated stocks using linear regression. This is a very simple way to handle the relationship, however, and is likely not feasible for non-toy examples. There are other potential methods for estimating the spread listed at the bottom of this lecture. If you want to get more into the theory of why having cointegrated stocks matters for pairs trading, again, please see the Integration, Cointegration, and Stationarity Lecture from the Quantopian Lecture Series.
So, back to our example. The absolute spread isn't very useful in statistical terms. It is more helpful to normalize our signal by treating it as a z-score.
WARNING
In practice this is usually done to try to give some scale to the data, but this assumes some underlying distribution. Usually normal. Under a normal distribution, we would know that approximately 84% of all spread values will be smaller. However, much financial data is not normally distributed, and one must be very careful not to assume normality, nor any specific distribution when generating statistics. It could be the case that the true distribution of spreads was very fat-tailed and prone to extreme values. This could mess up our model and result in large losses.
Step17: Simple Strategy
Step18: We can use the moving averages to compute the z-score of the spread at each given time. This will tell us how extreme the spread is and whether it's a good idea to enter a position at this time. Let's take a look at the z-score now.
Step19: The z-score doesn't mean much out of context, let's plot it next to the prices to get an idea of what it looks like. We'll take the negative of the z-score because the spreads were all negative and that is a little counterintuitive to trade on.
Step20: Out of Sample Test
Now that we have constructed our spread appropriately and have an idea of how we will go about making trades, it is time to conduct some out of sample testing. Our whole model is based on the premise that these securities are cointegrated, but we built it on information from a certain time period. If we actually want to implement this model, we need to conduct an out of sample test to confirm that the principles of our model are still valid going forward.
Since we initially built the model on the 2014 - 2015 year, let's see if this cointegrated relationship holds for 2015 - 2016. Historical results do not guarantee future results so this is a sanity check to see if the work we have done holds strong. | Python Code:
import numpy as np
import pandas as pd
import statsmodels
import statsmodels.api as sm
from statsmodels.tsa.stattools import coint
# just set the seed for the random number generator
np.random.seed(107)
import matplotlib.pyplot as plt
Explanation: Introduction to Pairs Trading
By Delaney Mackenzie and Maxwell Margenot
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
Notebook released under the Creative Commons Attribution 4.0 License.
Pairs trading is a classic example of a strategy based on mathematical analysis. The principle is as follows. Let's say you have a pair of securities X and Y that have some underlying economic link. An example might be two companies that manufacture the same product, or two companies in one supply chain. If we can model this economic link with a mathematical model, we can make trades on it. We'll start by constructing a toy example.
Before we proceed, note that the content in this lecture depends heavily on the Stationarity, Integration, and Cointegration lecture in order to properly understand the mathematical basis for the methodology that we employ here. It is recommended that you go through that lecture before this continuing.
End of explanation
X_returns = np.random.normal(0, 1, 100) # Generate the daily returns
# sum them and shift all the prices up into a reasonable range
X = pd.Series(np.cumsum(X_returns), name='X') + 50
X.plot();
Explanation: Generating Two Fake Securities
We model X's daily returns by drawing from a normal distribution. Then we perform a cumulative sum to get the value of X on each day.
End of explanation
some_noise = np.random.normal(0, 1, 100)
Y = X + 5 + some_noise
Y.name = 'Y'
pd.concat([X, Y], axis=1).plot();
Explanation: Now we generate Y. Remember that Y is supposed to have a deep economic link to X, so the price of Y should vary pretty similarly. We model this by taking X, shifting it up and adding some random noise drawn from a normal distribution.
End of explanation
(Y - X).plot() # Plot the spread
plt.axhline((Y - X).mean(), color='red', linestyle='--') # Add the mean
plt.xlabel('Time')
plt.legend(['Price Spread', 'Mean']);
Explanation: Cointegration
We've constructed an example of two cointegrated series. Cointegration is a more subtle relationship than correlation. If two time series are cointegrated, there is some linear combination between them that will vary around a mean. At all points in time, the combination between them is related to the same probability distribution.
For more details on how we formally define cointegration and how to understand it, please see the Integration, Cointegration, and Stationarity lecture from the Quantopian Lecture Series.
We'll plot the difference between the two now so we can see how this looks.
End of explanation
# compute the p-value of the cointegration test
# will inform us as to whether the spread between the 2 timeseries is stationary
# around its mean
score, pvalue, _ = coint(X,Y)
print pvalue
Explanation: Testing for Cointegration
That's an intuitive definition, but how do we test for this statistically? There is a convenient cointegration test that lives in statsmodels.tsa.stattools. Let's say that our confidence level is $0.05$. We should see a p-value below our cutoff, as we've artificially created two series that are the textbook definition of cointegration.
End of explanation
X.corr(Y)
Explanation: Correlation vs. Cointegration
Correlation and cointegration, while theoretically similar, are not the same. To demonstrate this, we'll show examples of series that are correlated, but not cointegrated, and vice versa. To start let's check the correlation of the series we just generated.
End of explanation
X_returns = np.random.normal(1, 1, 100)
Y_returns = np.random.normal(2, 1, 100)
X_diverging = pd.Series(np.cumsum(X_returns), name='X')
Y_diverging = pd.Series(np.cumsum(Y_returns), name='Y')
pd.concat([X_diverging, Y_diverging], axis=1).plot();
print 'Correlation: ' + str(X_diverging.corr(Y_diverging))
score, pvalue, _ = coint(X_diverging,Y_diverging)
print 'Cointegration test p-value: ' + str(pvalue)
Explanation: That's very high, as we would expect. But how would two series that are correlated but not cointegrated look?
Correlation Without Cointegration
A simple example is two series that just diverge.
End of explanation
Y2 = pd.Series(np.random.normal(0, 1, 1000), name='Y2') + 20
Y3 = Y2.copy()
# Y2 = Y2 + 10
Y3[0:100] = 30
Y3[100:200] = 10
Y3[200:300] = 30
Y3[300:400] = 10
Y3[400:500] = 30
Y3[500:600] = 10
Y3[600:700] = 30
Y3[700:800] = 10
Y3[800:900] = 30
Y3[900:1000] = 10
Y2.plot()
Y3.plot()
plt.ylim([0, 40]);
# correlation is nearly zero
print 'Correlation: ' + str(Y2.corr(Y3))
score, pvalue, _ = coint(Y2,Y3)
print 'Cointegration test p-value: ' + str(pvalue)
Explanation: Cointegration Without Correlation
A simple example of this case is a normally distributed series and a square wave.
End of explanation
def find_cointegrated_pairs(data):
n = data.shape[1]
score_matrix = np.zeros((n, n))
pvalue_matrix = np.ones((n, n))
keys = data.keys()
pairs = []
for i in range(n):
for j in range(i+1, n):
S1 = data[keys[i]]
S2 = data[keys[j]]
result = coint(S1, S2)
score = result[0]
pvalue = result[1]
score_matrix[i, j] = score
pvalue_matrix[i, j] = pvalue
if pvalue < 0.05:
pairs.append((keys[i], keys[j]))
return score_matrix, pvalue_matrix, pairs
Explanation: Sure enough, the correlation is incredibly low, but the p-value shows that these are cointegrated.
Hedging
Because you'd like to protect yourself from bad markets, often times short sales will be used to hedge long investments. Because a short sale makes money if the security sold loses value, and a long purchase will make money if a security gains value, one can long parts of the market and short others. That way if the entire market falls off a cliff, we'll still make money on the shorted securities and hopefully break even. In the case of two securities we'll call it a hedged position when we are long on one security and short on the other.
The Trick: Where it all comes together
Because the securities drift towards and apart from each other, there will be times when the distance is high and times when the distance is low. The trick of pairs trading comes from maintaining a hedged position across X and Y. If both securities go down, we neither make nor lose money, and likewise if both go up. We make money on the spread of the two reverting to the mean. In order to do this we'll watch for when X and Y are far apart, then short Y and long X. Similarly we'll watch for when they're close together, and long Y and short X.
Going Long the Spread
This is when the spread is small and we expect it to become larger. We place a bet on this by longing Y and shorting X.
Going Short the Spread
This is when the spread is large and we expect it to become smaller. We place a bet on this by shorting Y and longing X.
Specific Bets
One important concept here is that we are placing a bet on one specific thing, and trying to reduce our bet's dependency on other factors such as the market.
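To make the long/short logic above concrete, here is a hedged sketch of a signal rule based on the z-scored spread constructed later in this lecture (the entry and exit thresholds are illustrative choices, not recommendations):
def spread_signal(z):
    # z is the z-score of the spread between the two legs.
    if z > 1.0:
        return 'short the spread: short Y, long X'
    elif z < -1.0:
        return 'long the spread: long Y, short X'
    elif abs(z) < 0.5:
        return 'exit: close both legs'
    return 'hold'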
Finding real securities that behave like this
The best way to do this is to start with securities you suspect may be cointegrated and perform a statistical test. If you just run statistical tests over all pairs, you'll fall prey to multiple comparison bias.
Here's a method to look through a list of securities and test for cointegration between all pairs. It returns a cointegration test score matrix, a p-value matrix, and any pairs for which the p-value was less than $0.05$.
WARNING: This will incur a large amount of multiple comparisons bias.
The methods for finding viable pairs all live on a spectrum. At one end there is the formation of an economic hypothesis for an individual pair. You have some extra knowledge about an economic link that leads you to believe that the pair is cointegrated, so you go out and test for the presence of cointegration. In this case you will incur no multiple comparisons bias. At the other end of the spectrum, you perform a search through hundreds of different securities for any viable pairs according to your test. In this case you will incur a very large amount of multiple comparisons bias.
Multiple comparisons bias is the increased chance to incorrectly generate a significant p-value when many tests are run. If 100 tests are run on random data, we should expect to see 5 p-values below $0.05$ on expectation. Because we will perform $n(n-1)/2$ comparisons, we should expect to see many incorrectly significant p-values. For the sake of example will will ignore this and continue. In practice a second verification step would be needed if looking for pairs this way. Another approach is to pick a small number of pairs you have reason to suspect might be cointegrated and test each individually. This will result in less exposure to multiple comparisons bias. You can read more about multiple comparisons bias here.
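One hedged way to account for those $n(n-1)/2$ comparisons is a Bonferroni-style adjustment of the cutoff (a deliberately simple correction; stricter or less conservative methods exist):
n = 6  # number of securities screened below
n_tests = n * (n - 1) / 2
adjusted_cutoff = 0.05 / n_tests  # use this instead of 0.05 when screening pairs
print adjusted_cutoff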
End of explanation
symbol_list = ['ABGB', 'ASTI', 'CSUN', 'DQ', 'FSLR','SPY']
prices_df = get_pricing(symbol_list, fields=['price']
, start_date='2014-01-01', end_date='2015-01-01')['price']
prices_df.columns = map(lambda x: x.symbol, prices_df.columns)
Explanation: Looking for Cointegrated Pairs of Alternative Energy Securities
We are looking through a set of solar company stocks to see if any of them are cointegrated. We'll start by defining the list of securities we want to look through. Then we'll get the pricing data for each security for the year of 2014.
Our approach here is somewhere in the middle of the spectrum that we mentioned before. We have formulated an economic hypothesis that there is some sort of link between a subset of securities within the energy sector and we want to test whether there are any cointegrated pairs. This incurs significantly less multiple comparisons bias than searching through hundreds of securities and slightly more than forming a hypothesis for an individual test.
NOTE: We include the market in our data. This is because the market drives the movement of so many securities that you oftentimes might find two seemingly cointegrated securities, but in reality they are not cointegrated with each other and are just both cointegrated with the market. This is known as a confounding variable and it is important to check for market involvement in any relationship you find.
get_pricing() is a Quantopian method that pulls in stock data, and loads it into a Python Pandas DataPanel object. Available fields are 'price', 'open_price', 'high', 'low', 'volume'. But for this example we will just use 'price' which is the daily closing price of the stock.
End of explanation
prices_df.head()
Explanation: Example of how to get all the prices of all the stocks loaded using get_pricing() above in one pandas dataframe object
End of explanation
prices_df['SPY'].head()
Explanation: Example of how to get just the prices of a single stock that was loaded using get_pricing() above
End of explanation
# Heatmap to show the p-values of the cointegration test between each pair of
# stocks. Only show the value in the upper-diagonal of the heatmap
scores, pvalues, pairs = find_cointegrated_pairs(prices_df)
import seaborn
seaborn.heatmap(pvalues, xticklabels=symbol_list, yticklabels=symbol_list, cmap='RdYlGn_r'
, mask = (pvalues >= 0.05)
)
print pairs
Explanation: Now we'll run our method on the list and see if any pairs are cointegrated.
End of explanation
S1 = prices_df['ABGB']
S2 = prices_df['FSLR']
score, pvalue, _ = coint(S1, S2)
pvalue
Explanation: Looks like 'ABGB' and 'FSLR' are cointegrated. Let's take a look at the prices to make sure there's nothing weird going on.
End of explanation
S1 = sm.add_constant(S1)
results = sm.OLS(S2, S1).fit()
S1 = S1['ABGB']
b = results.params['ABGB']
spread = S2 - b * S1
spread.plot()
plt.axhline(spread.mean(), color='black')
plt.legend(['Spread']);
Explanation: Calculating the Spread
Now we will plot the spread of the two series. In order to actually calculate the spread, we use a linear regression to get the coefficient for the linear combination to construct between our two securities, as shown in the stationarity lecture. Using a linear regression to estimate the coefficient is known as the Engle-Granger method.
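Since the whole point of this construction is that the spread should be stationary, one extra sanity check (a sketch, not part of the original notebook) is an Augmented Dickey-Fuller test on the spread:
Python
from statsmodels.tsa.stattools import adfuller
# a small p-value suggests the spread is stationary (mean-reverting)
adf_stat, adf_pvalue = adfuller(spread.dropna())[:2]
adf_pvalue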
End of explanation
ratio = S1/S2
ratio.plot()
plt.axhline(ratio.mean(), color='black')
plt.legend(['Price Ratio']);
Explanation: Alternatively, we could examine the ratio between the two series.
End of explanation
def zscore(series):
return (series - series.mean()) / np.std(series)
zscore(spread).plot()
plt.axhline(zscore(spread).mean(), color='black')
plt.axhline(1.0, color='red', linestyle='--')
plt.axhline(-1.0, color='green', linestyle='--')
plt.legend(['Spread z-score', 'Mean', '+1', '-1']);
Explanation: Examining the price ratio of a trading pair is a traditional way to handle pairs trading. Part of why this works as a signal is based in our assumptions of how stock prices move, specifically because stock prices are typically assumed to be log-normally distributed. What this implies is that by taking a ratio of the prices, we are taking a linear combination of the returns associated with them (since prices are just the exponentiated returns).
This can be a little irritating to deal with for our purposes as purchasing the precisely correct ratio of a trading pair may not be practical. We choose instead to move forward with simply calculating the spread between the cointegrated stocks using linear regression. This is a very simple way to handle the relationship, however, and is likely not feasible for non-toy examples. There are other potential methods for estimating the spread listed at the bottom of this lecture. If you want to get more into the theory of why having cointegrated stocks matters for pairs trading, again, please see the Integration, Cointegration, and Stationarity Lecture from the Quantopian Lecture Series.
So, back to our example. The absolute spread isn't very useful in statistical terms. It is more helpful to normalize our signal by treating it as a z-score.
WARNING
In practice this is usually done to try to give some scale to the data, but it assumes an underlying distribution, usually normal. Under a normal distribution, a spread with a z-score of 1.0 would be larger than approximately 84% of all spread values. However, much financial data is not normally distributed, and one must be very careful not to assume normality, nor any specific distribution, when generating statistics. It could be the case that the true distribution of spreads is very fat-tailed and prone to extreme values. This could mess up our model and result in large losses.
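If fat tails are a concern, one robust alternative (shown only as a sketch, not part of the original lecture) is to scale by the median absolute deviation instead of the standard deviation:
Python
def robust_zscore(series):
    # 1.4826 * MAD matches the standard deviation for normally distributed data
    med = series.median()
    mad = np.median(np.abs(series - med))
    return (series - med) / (1.4826 * mad)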
End of explanation
# Get the spread between the 2 stocks
# Calculate rolling beta coefficient
rolling_beta = pd.ols(y=S1, x=S2, window_type='rolling', window=30)
spread = S2 - rolling_beta.beta['x'] * S1
spread.name = 'spread'
# Get the 1 day moving average of the price spread
spread_mavg1 = pd.rolling_mean(spread, window=1)
spread_mavg1.name = 'spread 1d mavg'
# Get the 30 day moving average
spread_mavg30 = pd.rolling_mean(spread, window=30)
spread_mavg30.name = 'spread 30d mavg'
plt.plot(spread_mavg1.index, spread_mavg1.values)
plt.plot(spread_mavg30.index, spread_mavg30.values)
plt.legend(['1 Day Spread MAVG', '30 Day Spread MAVG'])
plt.ylabel('Spread');
Explanation: Simple Strategy:
Go "Long" the spread whenever the z-score is below -1.0
Go "Short" the spread when the z-score is above 1.0
Exit positions when the z-score approaches zero
This is just the tip of the iceberg, and only a very simplistic example to illustrate the concepts. In practice you would want to compute a more optimal weighting for how many shares to hold for S1 and S2. Some additional resources on pair trading are listed at the end of this notebook.
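As an illustration only (the thresholds and unit sizing below are assumptions, not an optimized strategy), the rules above could be turned into a simple position series:
Python
zs = zscore(spread)
# +1 = long the spread (buy S2, sell b*S1), -1 = short the spread, 0 = flat
positions = np.where(zs < -1.0, 1, np.where(zs > 1.0, -1, 0))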
Trading using constantly updating statistics
In general taking a statistic over your whole sample size can be bad. For example, if the market is moving up, and both securities with it, then your average price over the last 3 years may not be representative of today. For this reason traders often use statistics that rely on rolling windows of the most recent data.
Moving Averages
A moving average is just an average over the last $n$ datapoints for each given time. It will be undefined for the first $n$ datapoints in our series. Shorter moving averages will be more jumpy and less reliable, but respond to new information quickly. Longer moving averages will be smoother, but take more time to incorporate new information.
We also need to use a rolling beta, a rolling estimate of how our spread should be calculated, in order to keep all of our parameters up to date.
End of explanation
# Take a rolling 30 day standard deviation
std_30 = pd.rolling_std(spread, window=30)
std_30.name = 'std 30d'
# Compute the z score for each day
zscore_30_1 = (spread_mavg1 - spread_mavg30)/std_30
zscore_30_1.name = 'z-score'
zscore_30_1.plot()
plt.axhline(0, color='black')
plt.axhline(1.0, color='red', linestyle='--');
Explanation: We can use the moving averages to compute the z-score of the spread at each given time. This will tell us how extreme the spread is and whether it's a good idea to enter a position at this time. Let's take a look at the z-score now.
End of explanation
# Plot the prices scaled down along with the negative z-score
# just divide the stock prices by 10 to make viewing it on the plot easier
plt.plot(S1.index, S1.values/10)
plt.plot(S2.index, S2.values/10)
plt.plot(zscore_30_1.index, zscore_30_1.values)
plt.legend(['S1 Price / 10', 'S2 Price / 10', 'Price Spread Rolling z-Score']);
Explanation: The z-score doesn't mean much out of context; let's plot it next to the prices to get an idea of what it looks like. We'll take the negative of the z-score because the spreads were all negative, and that is a little counterintuitive to trade on.
End of explanation
symbol_list = ['ABGB', 'FSLR']
prices_df = get_pricing(symbol_list, fields=['price']
, start_date='2015-01-01', end_date='2016-01-01')['price']
prices_df.columns = map(lambda x: x.symbol, prices_df.columns)
S1 = prices_df['ABGB']
S2 = prices_df['FSLR']
score, pvalue, _ = coint(S1, S2)
print 'p-value: ', pvalue
Explanation: Out of Sample Test
Now that we have constructed our spread appropriately and have an idea of how we will go about making trades, it is time to conduct some out of sample testing. Our whole model is based on the premise that these securities are cointegrated, but we built it on information from a certain time period. If we actually want to implement this model, we need to conduct an out of sample test to confirm that the principles of our model are still valid going forward.
Since we initially built the model on the 2014 - 2015 year, let's see if this cointegrated relationship holds for 2015 - 2016. Historical results do not guarantee future results so this is a sanity check to see if the work we have done holds strong.
End of explanation |
4,404 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploratory Analysis Round 2
Now that we have looked at the data in an unfiltered way, we take into account information we know about the data: for example, the channels, channel types, and proteins we know.
Reference
Step1: Correlation Matrix of All Protein Expressions For Each Measurement
Step2: Log-Transform Data
Step3: Kernel Density Estimation
? How to run kernel density estimation ? | Python Code:
# Import Necessary Libraries
import numpy as np
import os, csv, json
from matplotlib import *
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import scipy
from scipy.cluster.hierarchy import dendrogram, linkage
from sklearn.neighbors import KernelDensity
# pretty charting
import seaborn as sns
sns.set_palette('muted')
sns.set_style('darkgrid')
%matplotlib inline
# channel = ['Synap','Synap','VGlut1','VGlut1','VGlut2','Vglut3',
# 'psd','glur2','nmdar1','nr2b','gad','VGAT', 'PV','Gephyr',
# 'GABAR1','GABABR','CR1','5HT1A', 'NOS','TH','VACht',
# 'Synapo','tubuli','DAPI']
channel = ['Synap_01','Synap_02','VGlut1_01','VGlut1_02','VGlut2','Vglut3',
'psd','glur2','nmdar1','nr2b','gad','VGAT', 'PV','Gephyr',
'GABAR1','GABABR','CR1','5HT1A', 'NOS','TH','VACht',
'Synapo','tubuli','DAPI']
channeltype = ['ex.pre','ex.pre','ex.pre','ex.pre','ex.pre','in.pre.small',
'ex.post','ex.post','ex.post','ex.post','in.pre','in.pre',
'in.pre','in.post','in.post','in.post','in.pre.small','other',
'ex.post','other','other','ex.post','none','none']
print channel
print channeltype
# load in volume data
list_of_locations = []
with open('data/synapsinR_7thA.tif.Pivots.txt') as file:
for line in file:
inner_list = [float(elt.strip()) for elt in line.split(',')]
# create list of features
list_of_locations.append(inner_list)
# convert to a numpy matrix
list_of_locations = np.array(list_of_locations)
#### RUN AT BEGINNING AND TRY NOT TO RUN AGAIN - TAKES WAY TOO LONG ####
# write new list_of_features to new txt file
csvfile = "data_normalized/shortenedFeatures_normalized.txt"
# load in the feature data
list_of_features = []
with open(csvfile) as file:
for line in file:
inner_list = [float(elt.strip()) for elt in line.split(',')]
# create list of features
list_of_features.append(inner_list)
# convert to a numpy matrix
list_of_features = np.array(list_of_features)
# for i in range(0, len(list_of_locations)):
print min(list_of_locations[:,0]), " ", max(list_of_locations[:,0])
print min(list_of_locations[:,1]), " ", max(list_of_locations[:,1])
print min(list_of_locations[:,2]), " ", max(list_of_locations[:,2])
print abs(min(list_of_locations[:,0]) - max(list_of_locations[:,0]))
print abs(min(list_of_locations[:,1]) - max(list_of_locations[:,1]))
print abs(min(list_of_locations[:,2]) - max(list_of_locations[:,2]))
# Make a feature dictionary for all the different protein expressions
features = {}
for idx, chan in enumerate(channel):
indices = [0+idx, 24+idx, 48+idx, 72+idx]
features[chan] = list_of_features[:,indices]
print "The number of protein expressions are:"
print "This number should be 24: ", len(features.keys())
#
print "The number of unique channel types are: ", len(np.unique(channeltype))
print np.unique(channeltype)
Explanation: Exploratory Analysis Round 2
Now that we have looked at the data in an unfiltered way, we take into account information we know about the data: for example, the channels, channel types, and proteins we know.
Reference: http://www.nature.com/articles/sdata201446
Co-localization:
"Examining the cross-correlations at small 2d shifts between images reveals that pairs of antibodies which are expected to colocalize within either pre- or postsynaptic compartments (for example, Synapsin1 and vGluT1 or PSD95 and GluR2, respectively) have sharp peaks of correlation, while pairs of antibodies which represent associated pre- and postsynaptic compartments (for example, Synapsin1 and PSD95) have broader, more diffuse cross-correlation peaks"
End of explanation
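As a rough illustration of the co-localization idea quoted above (the raw channel images are not loaded in this notebook, so img_a and img_b below are hypothetical co-registered 2D arrays), a normalized 2D cross-correlation between two channels could be computed like this:
Python
from scipy.signal import fftconvolve
def xcorr2d(img_a, img_b):
    # normalize each image, then correlate; a sharp central peak suggests co-localization
    a = (img_a - img_a.mean()) / img_a.std()
    b = (img_b - img_b.mean()) / img_b.std()
    return fftconvolve(a, b[::-1, ::-1], mode='same') / a.size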
fig = plt.figure(figsize=(20,30))
index = 1
for key in features.keys():
# Compute feature correlation matrix
R = np.corrcoef(features[key],rowvar=0)
# R_normalize = np.corrcoef(normalize_features, rowvar=0)
# fig = plt.figure(figsize=(10,10))
plt.subplot(len(features.keys())/2, 2, index)
plt.imshow(R, cmap=plt.get_cmap('jet'), interpolation='none')
plt.title("Correlation plot of all features f0, f1, f2, f3")
plt.colorbar()
plt.xticks(np.arange(0,4, 1), ['f0', 'f1', 'f2', 'f3'])
plt.yticks(np.arange(0,4, 1), ['f0', 'f1', 'f2', 'f3'])
plt.title(key)
ax = plt.gca()
ax.grid(False)
xmin = ax.get_xlim
index += 1
plt.tight_layout()
Explanation: Correlation Matrix of All Protein Expressions For Each Measurement
End of explanation
f0_list_of_features = list_of_features[:,0:24]
new_list_of_features = np.log(f0_list_of_features)
print new_list_of_features.shape
new_list_of_features = np.array(new_list_of_features)
# write new list_of_features to new txt file
csvfile = "data/f0_logtransformed.txt"
#Assuming res is a flat list
with open(csvfile, "w") as output:
# write to new file the data
writer = csv.writer(output, lineterminator='\n')
for row in range(0, len(new_list_of_features)):
writer.writerow(new_list_of_features[row,:])
Explanation: Log-Transform Data
End of explanation
fig = plt.figure(figsize=(10,10))
# KernelDensity expects 2D arrays of shape (n_samples, n_features)
x_grid = np.linspace(-5, 5, 1000)[:, np.newaxis]
for i in range(0, new_list_of_features.shape[1]):
    kde = KernelDensity(kernel='gaussian', bandwidth=0.5).fit(new_list_of_features[:, i][:, np.newaxis])
    log_dens = kde.score_samples(x_grid)
    plt.plot(x_grid[:, 0], np.exp(log_dens), label=channel[i])
plt.legend(loc='best', fontsize='small')
from scipy.stats import norm
# Plot a 1D density example
N = 100
np.random.seed(1)
X = np.concatenate((np.random.normal(0, 1, int(0.3 * N)),
                    np.random.normal(5, 1, int(0.7 * N))))[:, np.newaxis]
print X.shape
X_plot = np.linspace(-5, 10, 1000)[:, np.newaxis]
print X_plot.shape
true_dens = (0.3 * norm(0, 1).pdf(X_plot[:, 0])
+ 0.7 * norm(5, 1).pdf(X_plot[:, 0]))
fig, ax = plt.subplots()
ax.fill(X_plot[:, 0], true_dens, fc='black', alpha=0.2,
label='input distribution')
for kernel in ['gaussian']:
kde = KernelDensity(kernel=kernel, bandwidth=0.5).fit(X)
log_dens = kde.score_samples(X_plot)
ax.plot(X_plot[:, 0], np.exp(log_dens), '-',
label="kernel = '{0}'".format(kernel))
ax.text(6, 0.38, "N={0} points".format(N))
ax.legend(loc='upper left')
ax.plot(X[:, 0], -0.005 - 0.01 * np.random.random(X.shape[0]), '+k')
ax.set_xlim(-4, 9)
ax.set_ylim(-0.02, 0.4)
plt.show()
Explanation: Kernel Density Estimation
? How to run kernel density estimation ?
End of explanation |
4,405 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simplified Detection Efficiency Model
Update to
Step1: Station coordinates and thresholds from a set of log files
Specify
Step2: Station coordinates from csv file
Input network title and csv file here
Step3: Converting and checking station locations
Step4: Inclusive function for detection calculations
Make sure the input array of station information matches the given dimensions
Will check for solution in the line of sight for each station within 300 km of the network at the chosen altitude (default = 7 km) and grid spacing (default = 5 km)
Minimum number of stations required to participate in solutions can be set (default = 6)
Step5: Detection efficiency plots
Step6: Minimum Detectable Power plotting
Step7: Additional functions
Want to use cartopy to plot instead?
Start here
Step8: Want to try multiple network calculations at once?
Start here | Python Code:
%pylab inline
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import parsed_functions as pf
from mpl_toolkits.basemap import Basemap
from coordinateSystems import TangentPlaneCartesianSystem, GeographicSystem, MapProjection
c0 = 3.0e8 # m/s
dt_rms = 23.e-9 # seconds
sq = np.load('source_quantiles',fix_imports=True, encoding='latin1') # in Watts
fde = 100-np.load('fde.csv',fix_imports=True, encoding='latin1') # Corresponding flash DE
Explanation: Simplified Detection Efficiency Model
Update to: V. C. Chmielewski and E. C. Bruning (2016), Lightning Mapping Array flash detection performance with variable receiver thresholds, J. Geophys. Res. Atmos., 121, 8600-8614, doi:10.1002/2016JD025159
Description: Instead of propogating random sources to the network, this calculates the minimum power that a given number of stations can sense at each grid point. These minimum powers are then related to the distribution of source powers as described in the paper above to estimate the detection efficincy.
Contact:
[email protected]
End of explanation
# import read_logs
# import os
# import datetime
# # start_time = datetime.datetime(2014,5,26,2) #25 set
# # end_time = datetime.datetime(2014,5,26,3,50)
# useddir = '/Users/Vanna/Documents/logs/'
# exclude = np.array(['W','A',])
# days = np.array([start_time+datetime.timedelta(days=i) for i in range((end_time-start_time).days+1)])
# days_string = np.array([i.strftime("%y%m%d") for i in days])
# logs = pd.DataFrame()
# dir = os.listdir(useddir)
# for file in dir:
# if np.any(file[2:] == days_string) & np.all(exclude!=file[1]):
# print file
# logs = logs.combine_first(read_logs.parsing(useddir+file,T_set='True'))
# aves = logs[start_time:end_time].mean()
# aves = np.array(aves).reshape(4,len(aves)/4).T
Explanation: Station coordinates and thresholds from a set of log files
Specify:
start time
end time
the directory holding the log files
any stations you wish to exclude from the analysis
End of explanation
Network = 'grid_LMA' # name of network in the csv file
# network csv file with one or multiple networks
stations = pd.read_csv('network.csv')
aves = np.array(stations.set_index('network').loc[Network])[:,:-1].astype('float')
Explanation: Station coordinates from csv file
Input network title and csv file here
End of explanation
center = (np.mean(aves[:,1]), np.mean(aves[:,2]), np.mean(aves[:,0]))
geo = GeographicSystem()
tanp = TangentPlaneCartesianSystem(center[0], center[1], center[2])
mapp = MapProjection
projl = MapProjection(projection='laea', lat_0=center[0], lon_0=center[1])
alt, lat, lon = aves[:,:3].T
plt.scatter(lon, lat, c=aves[:,3])
plt.colorbar(label='Station Threshold (dBm)')
plt.show()
Explanation: Converting and checking station locations
End of explanation
latp, lonp, sde, fde_a, minp = pf.quick_method(
# input array must be in N x (lat, lon, alt, threshold)
np.array([aves[:,1],aves[:,2],aves[:,0],aves[:,3]]).transpose(),
sq, fde,
xint=5000, # Grid spacing
altitude=7000, # Altitude of grid MSL
station_requirement=6, # Minimum number of stations required to trigger
)
Explanation: Inclusive function for detection calculations
Make sure the input array of station information matches the given dimensions
Will check for solution in the line of sight for each station within 300 km of the network at the chosen altitude (default = 7 km) and grid spacing (default = 5 km)
Minimum number of stations required to participate in solutions can be set (default = 6)
End of explanation
domain = 197.5*1000
maps = Basemap(projection='laea', resolution='i',
lat_0=center[0], lon_0=center[1], width=domain*2, height=domain*2)
x, y = maps(lonp, latp)
# Source detection efficiency
s = plt.pcolormesh(x,y,sde,cmap = 'magma')
plt.colorbar(label='Source Detection Efficiency')
s.set_clim(vmin=0,vmax=100)
# Draw flash detection efficiency contours
CS = plt.contour(x,y,fde_a, colors='k',levels=(20,40,60,70,80,85,90,95,99))
plt.clabel(CS, inline=1, fontsize=10,fmt='%3.0f')
# Overlay station locations
xs, ys = maps(lon,lat)
plt.scatter(xs,ys, color='k',s=5)
maps.drawstates()
maps.drawcoastlines()
# maps.drawcounties()
plt.show()
Explanation: Detection efficiency plots
End of explanation
minp=np.ma.masked_where(minp==999,minp) # Undetected sources given value of 999
domain = 197.5*1000
maps = Basemap(projection='laea', resolution='i',
lat_0=center[0], lon_0=center[1], width=domain*2, height=domain*2)
x, y = maps(lonp, latp)
# Source detection efficiency
s = plt.pcolormesh(x,y,minp,cmap = 'viridis_r')
plt.colorbar(label='Minimum Detectable Power (dBW)')
# Overlay station locations
xs, ys = maps(lon,lat)
plt.scatter(xs,ys, color='k',s=5)
maps.drawstates()
maps.drawcoastlines()
# maps.drawcounties()
plt.show()
Explanation: Minimum Detectable Power plotting
End of explanation
import cartopy.crs as ccrs
import cartopy.feature as cfeature
plt.figure(figsize=(10,5))
ax = plt.axes(projection=ccrs.PlateCarree())
plt.contourf(lonp, latp, np.ma.masked_where(sde<1, sde),
levels=np.arange(0,100,1), cmap='magma', transform=ccrs.PlateCarree())
plt.colorbar(label='Source Detection Efficiency')
ax.set_extent((-110, -73, 26, 45))
states_provinces = cfeature.NaturalEarthFeature(
category='cultural',
name='admin_1_states_provinces_lines',
scale='110m',
facecolor='none')
lakes = cfeature.NaturalEarthFeature(
category='physical',
name='lakes',
scale='110m',
facecolor='none')
ax.add_feature(lakes, edgecolor='black', linewidth=1)
ax.add_feature(cfeature.BORDERS, edgecolor='black', linewidth=1)
ax.add_feature(states_provinces, edgecolor='black', linewidth=1)
ax.coastlines()
plt.show()
Explanation: Additional functions
Want to use cartopy to plot instead?
Start here
End of explanation
stations = pd.read_csv('network_full.csv') # network csv file with one or multiple networks
names = [['OKLMA_DC3','OKLMA_DC3sw','WTLMA'],
'COLMA_DC3',
'NALMA_DC3',
]
lats = [0]*len(names)
lons = [0]*len(names)
sdes = [0]*len(names)
fdes = [0]*len(names)
powr = [0]*len(names)
for i in range(len(names)):
aves = np.array(stations.set_index('network').loc[names[i]])[:,:-1].astype('float')
# aves[:,-1] = -78
lats[i],lons[i],sdes[i],fdes[i],powr[i] = pf.quick_method(
np.array([aves[:,1],aves[:,2],aves[:,0],aves[:,3]]).transpose(),
sq,
fde,
xint=5000,
altitude=7000,
station_requirement=6,
)
plt.figure(figsize=(10,5))
ax = plt.axes(projection=ccrs.PlateCarree())
for i in range(len(lats)):
# plt.contourf(lons[i], lats[i], np.ma.masked_where(fdes[i]<50, fdes[i]),
plt.contourf(lons[i], lats[i], np.ma.masked_where(sdes[i]<1, sdes[i]),
levels=np.arange(0,100,1), cmap='magma', transform=ccrs.PlateCarree())
plt.colorbar(label='Source Detection Efficiency')
ax.set_extent((-110, -73, 26, 45))
states_provinces = cfeature.NaturalEarthFeature(
category='cultural',
name='admin_1_states_provinces_lines',
scale='110m',
facecolor='none')
lakes = cfeature.NaturalEarthFeature(
category='physical',
name='lakes',
scale='110m',
facecolor='none')
ax.add_feature(lakes, edgecolor='black', linewidth=1)
ax.add_feature(cfeature.BORDERS, edgecolor='black', linewidth=1)
ax.add_feature(states_provinces, edgecolor='black', linewidth=1)
ax.coastlines()
plt.show()
Explanation: Want to try multiple network calculations at once?
Start here
End of explanation |
4,406 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a id="top"></a>
Db2 11 Time and Date Functions
There are plenty of new date and time functions found in Db2 11. These functions allow you to extract portions from a date
and format the date in a variety of different ways. While Db2 already has a number of date and time functions, these new
functions allow for greater compatibility with other database implementations, making it easier to port applications to Db2.
Step1: Table of Contents
Extract Function
DATE_PART Function
DATE_TRUNC Function
Extracting Specific Days from a Month
Date Addition
Extracting Weeks, Months, Quarters, and Years
Next Day Function
Between Date/Time Functions
Months Between
Date Duration
Overlaps Predicate
UTC Time Conversions
Back to Top
<a id='extract'></a>
Extract Function
The EXTRACT function extracts and element from a date/time value. The syntax of the EXTRACT command is
Step2: This SQL will return every possible extract value from the current date.the SQL standard.
Step3: Back to Top
<a id='part'></a>
DATE_PART Function
DATE_PART is similar to the EXTRACT function but it uses the more familiar syntax
Step4: Back to Top
<a id='trunc'></a>
DATE_TRUNC Function
DATE_TRUNC computes the same results as the DATE_PART function but then truncates the value down. Note that not all values can be truncated. The function syntax is
Step5: Back to Top
<a id='month'></a>
Extracting Specific Days from a Month
There are three functions that retrieve day information from a date. These functions include
Step6: This expression (DAYOFMONTH) returns the day of the month.
Step7: FIRST_DAY will return the first day of the month. You could probably compute this with standard SQL date functions, but it is a lot easier just to use this builtin function.
Step8: Finally, DAYS_TO_END_OF_MONTH will return the number of days to the end of the month. A zero would be returned if you are on the last day of the month.
Step9: Back to Top
<a id='add'></a>
Date Addition Functions
The date addition functions will add or subtract days from a current timestamp. The functions that
are available are
Step10: A negative number can be used to subtract values from the current date.
Step11: Back to Top
<a id='extract'></a>
Extracting Weeks, Months, Quarters, and Years from a Date
There are four functions that extract different values from a date. These functions include
Step12: There is also a NEXT function for each of these. The NEXT function will return the next week, month, quarter,
or year given a current date.
Step13: Back to Top
<a id='nextday'></a>
Next Day Function
The previous set of functions returned a date value for the current week, month, quarter, or year (or the next one
if you used the NEXT function). The NEXT_DAY function returns the next day (after the date you supply)
based on the string representation of the day. The date string will be dependent on the codepage that you are using for the database.
The date (from an English perspective) can be
Step14: Back to Top
<a id='between'></a>
Between Date/Time Functions
These date functions compute the number of full seconds, minutes, hours, days, weeks, and years between
two dates. If there isn't a full value between the two objects (like less than a day), a zero will be
returned. These new functions are
Step15: Back to Top
<a id='mbetween'></a>
MONTHS_BETWEEN Function
You may have noticed that the MONTHS_BETWEEN function was not in the previous list of functions. The
reason for this is that the value returned for MONTHS_BETWEEN is different from the other functions. The MONTHS_BETWEEN
function returns a DECIMAL value rather than an integer value. The reason for this is that the duration of a
month is not as precise as a day, week or year. The following example will show how the duration is
a decimal value rather than an integer. You could always truncate the value if you want an integer.
Step16: Back to Top
<a id='duration'></a>
Date Duration Functions
An alternate way of representing date durations is through the use of an integer with the format YYYYMMDD where
the YYYY represents the year, MM for the month and DD for the day. Date durations are easier to manipulate than
timestamp values and take up substantially less storage.
There are two new functions.
YMD_BETWEEN returns a numeric value that specifies the number of full years, full months, and full days between two datetime values
AGE returns a numeric value that represents the number of full years, full months, and full days between the current timestamp and the argument
This SQL statement will return various AGE calculations based on the current timestamp.
Step17: The YMD_BETWEEN function is similar to the AGE function except that it takes two date arguments. We can
simulate the AGE function by supplying the NOW function to the YMD_BETWEEN function.
Step18: Back to Top
<a id='overlaps'></a>
OVERLAPS Predicate
The OVERLAPS predicate is used to determine whether two chronological periods overlap. This is not a
function within DB2, but rather a special SQL syntax extension.
A chronological period is specified by a pair of date-time expressions. The first expression specifies
the start of a period; the second specifies its end.
Python
(start1,end1) OVERLAPS (start2, end2)
The beginning and end values are not included in the periods. The following
summarizes the overlap logic. For example, the periods 2016-10-19 to 2016-10-20
and 2016-10-20 to 2016-10-21 do not overlap.
For instance, the following interval does not overlap.
Step19: If the first date range is extended by one day then the range will overlap.
Step20: Identical date ranges will overlap.
Step21: Back to Top
<a id='utc'></a>
UTC Time Conversions
Db2 has two functions that allow you to translate timestamps to and from UTC (Coordinated Universal Time).
The FROM_UTC_TIMESTAMP scalar function returns a TIMESTAMP that is converted from Coordinated Universal Time
to the time zone specified by the time zone string.
The TO_UTC_TIMESTAMP scalar function returns a TIMESTAMP that is converted to Coordinated Universal Time
from the timezone that is specified by the timezone string.
The format of the two functions is
Step22: Convert the Coordinated Universal Time timestamp '2014-11-02 06
Step23: Convert the Coordinated Universal Time timestamp '2015-03-02 06
Step24: Convert the timestamp '1970-01-01 00
Step25: Using UTC Functions
One of the applications for using the UTC is to take the transaction timestamp and normalize it across
all systems that access the data. You can convert the timestamp to UTC on insert and then when it is
retrieved, it can be converted to the local timezone.
This example will use a number of techniques to hide the complexity of changing timestamps to local timezones.
The following SQL will create our base transaction table (TXS_BASE) that will be used throughout the
example.
Step26: The UTC functions will be written to take advantage of a local timezone variable called TIME_ZONE. This
variable will contain the timezone of the server (or user) that is running the transaction. In this
case we are using the timezone in Toronto, Canada.
Step27: The SET Command can be used to update the TIME_ZONE to the current location we are in.
Step28: In order to retrieve the value of the current timezone, we take advantage of a simple user-defined function
called GET_TIMEZONE. It just retrieves the contents of the current TIME_ZONE variable that we set up.
Step29: The TXS view is used by all SQL statements rather than the TXS_BASE table. The reason for this is to
take advantage of INSTEAD OF triggers that can manipulate the UTC without modifying the original SQL.
Note that when the data is returned from the view that the TXTIME field is converted from UTC to the current
TIMEZONE that we are in.
Step30: An INSTEAD OF trigger (INSERT, UPDATE, and DELETE) is created against the TXS view so that any insert or
update on a TXTIME column will be converted back to the UTC value. From an application perspective,
we are using the local time, not the UTC time.
Step31: At this point in time(!) we can start inserting records into our table. We have already set the timezone
to be Toronto, so the next insert statement will take the current time (NOW) and insert it into the table.
For reference, here is the current time.
Step32: We will insert one record into the table and immediately retrieve the result.
Step33: Note that the timsstamp appears to be the same as what we insert (plus or minus a few seconds). What actually
sits in the base table is the UTC time.
Step34: We can modify the time that is returned to us by changing our local timezone. The statement will make
the system think we are in Vancouver.
Step35: Retrieving the results will show that the timestamp has shifted by 3 hours (Vancouver is 3 hours behind
Toronto).
Step36: So what happens if we insert a record into the table now that we are in Vancouver?
Step37: The data retrieved reflects the fact that we are now in Vancouver from an application perspective. Looking at the
base table and you will see that everything has been converted to UTC time.
Step38: Finally, we can switch back to Toronto time and see when the transactions were done. You will see that from a
Toronto perspetive tht the transactions were done three hours later because of the timezone differences. | Python Code:
%run db2.ipynb
Explanation: <a id="top"></a>
Db2 11 Time and Date Functions
There are plenty of new date and time functions found in Db2 11. These functions allow you to extract portions from a date
and format the date in a variety of different ways. While Db2 already has a number of date and time functions, these new
functions allow for greater compatibility with other database implementations, making it easier to port applications to Db2.
End of explanation
%sql VALUES NOW
Explanation: Table of Contents
Extract Function
DATE_PART Function
DATE_TRUNC Function
Extracting Specific Days from a Month
Date Addition
Extracting Weeks, Months, Quarters, and Years
Next Day Function
Between Date/Time Functions
Months Between
Date Duration
Overlaps Predicate
UTC Time Conversions
Back to Top
<a id='extract'></a>
Extract Function
The EXTRACT function extracts and element from a date/time value. The syntax of the EXTRACT command is:
Python
EXTRACT( element FROM expression )
This is a slightly different format from most functions that you see in the DB2. Element must be one of the following values:
|Element Name | Description
|:---------------- | :-----------------------------------------------------------------------------------------
|EPOCH | Number of seconds since 1970-01-01 00:00:00.00. The value can be positive or negative.
|MILLENNIUM(S) | The millennium is to be returned.
|CENTURY(CENTURIES)| The number of full 100-year periods represented by the year.
|DECADE(S) | The number of full 10-year periods represented by the year.
|YEAR(S) | The year portion is to be returned.
|QUARTER | The quarter of the year (1 - 4) is to be returned.
|MONTH | The month portion is to be returned.
|WEEK | The number of the week of the year (1 - 53) that the specified day is to be returned.
|DAY(S) | The day portion is to be returned.
|DOW | The day of the week that is to be returned. Note that "1" represents Sunday.
|DOY | The day (1 - 366) of the year that is to be returned.
|HOUR(S) | The hour portion is to be returned.
|MINUTE(S) | The minute portion is to be returned.
|SECOND(S) | The second portion is to be returned.
|MILLISECOND(S) | The second of the minute, including fractional parts to one thousandth of a second
|MICROSECOND(S) | The second of the minute, including fractional parts to one millionth of a second
The synonym NOW is going to be used in the next example. NOW is a synonym for CURRENT TIMESTAMP.
End of explanation
%%sql -a
WITH DATES(FUNCTION, RESULT) AS (
VALUES
('EPOCH', EXTRACT( EPOCH FROM NOW )),
('MILLENNIUM(S)', EXTRACT( MILLENNIUM FROM NOW )),
('CENTURY(CENTURIES)', EXTRACT( CENTURY FROM NOW )),
('DECADE(S)', EXTRACT( DECADE FROM NOW )),
('YEAR(S)', EXTRACT( YEAR FROM NOW )),
('QUARTER', EXTRACT( QUARTER FROM NOW )),
('MONTH', EXTRACT( MONTH FROM NOW )),
('WEEK', EXTRACT( WEEK FROM NOW )),
('DAY(S)', EXTRACT( DAY FROM NOW )),
('DOW', EXTRACT( DOW FROM NOW )),
('DOY', EXTRACT( DOY FROM NOW )),
('HOUR(S)', EXTRACT( HOURS FROM NOW )),
('MINUTE(S)', EXTRACT( MINUTES FROM NOW )),
('SECOND(S)', EXTRACT( SECONDS FROM NOW )),
('MILLISECOND(S)', EXTRACT( MILLISECONDS FROM NOW )),
('MICROSECOND(S)', EXTRACT( MICROSECONDS FROM NOW ))
)
SELECT FUNCTION, CAST(RESULT AS BIGINT) FROM DATES
Explanation: This SQL will return every possible extract value from the current date.the SQL standard.
End of explanation
%%sql -a
WITH DATES(FUNCTION, RESULT) AS (
VALUES
('EPOCH', DATE_PART('EPOCH' ,NOW )),
('MILLENNIUM(S)', DATE_PART('MILLENNIUM' ,NOW )),
('CENTURY(CENTURIES)', DATE_PART('CENTURY' ,NOW )),
('DECADE(S)', DATE_PART('DECADE' ,NOW )),
('YEAR(S)', DATE_PART('YEAR' ,NOW )),
('QUARTER', DATE_PART('QUARTER' ,NOW )),
('MONTH', DATE_PART('MONTH' ,NOW )),
('WEEK', DATE_PART('WEEK' ,NOW )),
('DAY(S)', DATE_PART('DAY' ,NOW )),
('DOW', DATE_PART('DOW' ,NOW )),
('DOY', DATE_PART('DOY' ,NOW )),
('HOUR(S)', DATE_PART('HOURS' ,NOW )),
('MINUTE(S)', DATE_PART('MINUTES' ,NOW )),
('SECOND(S)', DATE_PART('SECONDS' ,NOW )),
('MILLISECOND(S)', DATE_PART('MILLISECONDS' ,NOW )),
('MICROSECOND(S)', DATE_PART('MICROSECONDS' ,NOW ))
)
SELECT FUNCTION, CAST(RESULT AS BIGINT) FROM DATES;
Explanation: Back to Top
<a id='part'></a>
DATE_PART Function
DATE_PART is similar to the EXTRACT function but it uses the more familiar syntax:
Python
DATE_PART(element, expression)
In the case of the function, the element must be placed in quotes, rather than as a keyword in the EXTRACT function. in addition, the DATE_PART always returns a BIGINT, while the EXTRACT function will return a different data type depending on the element being returned. For instance, compare the SECONDs option for both functions. In the case of EXTRACT you get a DECIMAL result while for the DATE_PART you get a truncated BIGINT.
End of explanation
%%sql -a
WITH DATES(FUNCTION, RESULT) AS (
VALUES
('MILLENNIUM(S)', DATE_TRUNC('MILLENNIUM' ,NOW )),
('CENTURY(CENTURIES)', DATE_TRUNC('CENTURY' ,NOW )),
('DECADE(S)', DATE_TRUNC('DECADE' ,NOW )),
('YEAR(S)', DATE_TRUNC('YEAR' ,NOW )),
('QUARTER', DATE_TRUNC('QUARTER' ,NOW )),
('MONTH', DATE_TRUNC('MONTH' ,NOW )),
('WEEK', DATE_TRUNC('WEEK' ,NOW )),
('DAY(S)', DATE_TRUNC('DAY' ,NOW )),
('HOUR(S)', DATE_TRUNC('HOURS' ,NOW )),
('MINUTE(S)', DATE_TRUNC('MINUTES' ,NOW )),
('SECOND(S)', DATE_TRUNC('SECONDS' ,NOW )),
('MILLISECOND(S)', DATE_TRUNC('MILLISECONDS' ,NOW )),
('MICROSECOND(S)', DATE_TRUNC('MICROSECONDS' ,NOW ))
)
SELECT FUNCTION, RESULT FROM DATES
Explanation: Back to Top
<a id='trunc'></a>
DATE_TRUNC Function
DATE_TRUNC computes the same results as the DATE_PART function but then truncates the value down. Note that not all values can be truncated. The function syntax is:
Python
DATE_TRUNC(element, expression)
The element must be placed in quotes, rather than as a keyword in the EXTRACT function.
Note that DATE_TRUNC always returns a BIGINT.
The elements that can be truncated are:
|Element Name |Description
|:---------------- |:------------------------------------------------------------------------------
|MILLENNIUM(S) |The millennium is to be returned.
|CENTURY(CENTURIES) |The number of full 100-year periods represented by the year.
|DECADE(S) |The number of full 10-year periods represented by the year.
|YEAR(S) |The year portion is to be returned.
|QUARTER |The quarter of the year (1 - 4) is to be returned.
|MONTH |The month portion is to be returned.
|WEEK |The number of the week of the year (1 - 53) that the specified day is to be returned.
|DAY(S) |The day portion is to be returned.
|HOUR(S) |The hour portion is to be returned.
|MINUTE(S) |The minute portion is to be returned.
|SECOND(S) |The second portion is to be returned.
|MILLISECOND(S) |The second of the minute, including fractional parts to one thousandth of a second
|MICROSECOND(S) |The second of the minute, including fractional parts to one millionth of a secondry data types.
End of explanation
%sql VALUES NOW
Explanation: Back to Top
<a id='month'></a>
Extracting Specific Days from a Month
There are three functions that retrieve day information from a date. These functions include:
DAYOFMONTH - returns an integer between 1 and 31 that represents the day of the argument
FIRST_DAY - returns a date or timestamp that represents the first day of the month of the argument
DAYS_TO_END_OF_MONTH - returns the number of days to the end of the month
This is the current date so that you know what all of the calculations are based on.
End of explanation
%sql VALUES DAYOFMONTH(NOW)
Explanation: This expression (DAYOFMONTH) returns the day of the month.
End of explanation
%sql VALUES FIRST_DAY(NOW)
Explanation: FIRST_DAY will return the first day of the month. You could probably compute this with standard SQL date functions, but it is a lot easier just to use this builtin function.
End of explanation
%sql VALUES DAYS_TO_END_OF_MONTH(NOW)
Explanation: Finally, DAYS_TO_END_OF_MONTH will return the number of days to the end of the month. A zero would be returned if you are on the last day of the month.
End of explanation
%%sql
WITH DATES(FUNCTION, RESULT) AS
(
VALUES
('CURRENT DATE ',NOW),
('ADD_YEARS ',ADD_YEARS(NOW,1)),
('ADD_MONTHS ',ADD_MONTHS(NOW,1)),
('ADD_DAYS ',ADD_DAYS(NOW,1)),
('ADD_HOURS ',ADD_HOURS(NOW,1)),
('ADD_MINUTES ',ADD_MINUTES(NOW,1)),
('ADD_SECONDS ',ADD_SECONDS(NOW,1))
)
SELECT * FROM DATES
Explanation: Back to Top
<a id='add'></a>
Date Addition Functions
The date addition functions will add or subtract days from a current timestamp. The functions that
are available are:
ADD_YEARS - Add years to a date
ADD_MONTHS - Add months to a date
ADD_DAYS - Add days to a date
ADD_HOURS - Add hours to a date
ADD_MINUTES - Add minutes to a date
ADD_SECONDS - Add seconds to a date
The format of the function is:
Python
ADD_DAYS ( expression, numeric expression )
The following SQL will add one "unit" to the current date.
End of explanation
%%sql
WITH DATES(FUNCTION, RESULT) AS
(
VALUES
('CURRENT DATE ',NOW),
('ADD_YEARS ',ADD_YEARS(NOW,-1)),
('ADD_MONTHS ',ADD_MONTHS(NOW,-1)),
('ADD_DAYS ',ADD_DAYS(NOW,-1)),
('ADD_HOURS ',ADD_HOURS(NOW,-1)),
('ADD_MINUTES ',ADD_MINUTES(NOW,-1)),
('ADD_SECONDS ',ADD_SECONDS(NOW,-1))
)
SELECT * FROM DATES
Explanation: A negative number can be used to subtract values from the current date.
End of explanation
%%sql
WITH DATES(FUNCTION, RESULT) AS
(
VALUES
('CURRENT DATE ',NOW),
('THIS_WEEK ',THIS_WEEK(NOW)),
('THIS_MONTH ',THIS_MONTH(NOW)),
('THIS_QUARTER ',THIS_QUARTER(NOW)),
('THIS_YEAR ',THIS_YEAR(NOW))
)
SELECT * FROM DATES
Explanation: Back to Top
<a id='extract'></a>
Extracting Weeks, Months, Quarters, and Years from a Date
There are four functions that extract different values from a date. These functions include:
THIS_QUARTER - returns the first day of the quarter
THIS_WEEK - returns the first day of the week (Sunday is considered the first day of that week)
THIS_MONTH - returns the first day of the month
THIS_YEAR - returns the first day of the year
End of explanation
%%sql
WITH DATES(FUNCTION, RESULT) AS
(
VALUES
('CURRENT DATE ',NOW),
('NEXT_WEEK ',NEXT_WEEK(NOW)),
('NEXT_MONTH ',NEXT_MONTH(NOW)),
('NEXT_QUARTER ',NEXT_QUARTER(NOW)),
('NEXT_YEAR ',NEXT_YEAR(NOW))
)
SELECT * FROM DATES
Explanation: There is also a NEXT function for each of these. The NEXT function will return the next week, month, quarter,
or year given a current date.
End of explanation
%%sql
WITH DATES(FUNCTION, RESULT) AS
(
VALUES
('CURRENT DATE ',NOW),
('Monday ',NEXT_DAY(NOW,'Monday')),
('Tuesday ',NEXT_DAY(NOW,'TUE')),
('Wednesday ',NEXT_DAY(NOW,'Wednesday')),
('Thursday ',NEXT_DAY(NOW,'Thursday')),
('Friday ',NEXT_DAY(NOW,'FRI')),
('Saturday ',NEXT_DAY(NOW,'Saturday')),
('Sunday ',NEXT_DAY(NOW,'Sunday'))
)
SELECT * FROM DATES
Explanation: Back to Top
<a id='nextday'></a>
Next Day Function
The previous set of functions returned a date value for the current week, month, quarter, or year (or the next one
if you used the NEXT function). The NEXT_DAY function returns the next day (after the date you supply)
based on the string representation of the day. The date string will be dependent on the codepage that you are using for the database.
The date (from an English perspective) can be:
|Day |Short form
|:-------- |:---------
|Monday |MON
|Tuesday |TUE
|Wednesday |WED
|Thursday |THU
|Friday |FRI
|Saturday |SAT
|Sunday |SUN
The following SQL will show you the "day" after the current date that is Monday through Sunday.
End of explanation
%%sql -q
DROP VARIABLE FUTURE_DATE;
CREATE VARIABLE FUTURE_DATE TIMESTAMP DEFAULT(NOW + 1 SECOND + 1 MINUTE + 1 HOUR + 8 DAYS + 1 YEAR);
WITH DATES(FUNCTION, RESULT) AS (
VALUES
('SECONDS_BETWEEN',SECONDS_BETWEEN(FUTURE_DATE,NOW)),
('MINUTES_BETWEEN',MINUTES_BETWEEN(FUTURE_DATE,NOW)),
('HOURS_BETWEEN ',HOURS_BETWEEN(FUTURE_DATE,NOW)),
('DAYS BETWEEN ',DAYS_BETWEEN(FUTURE_DATE,NOW)),
('WEEKS_BETWEEN ',WEEKS_BETWEEN(FUTURE_DATE,NOW)),
('YEARS_BETWEEN ',YEARS_BETWEEN(FUTURE_DATE,NOW))
)
SELECT * FROM DATES;
Explanation: Back to Top
<a id='between'></a>
Between Date/Time Functions
These date functions compute the number of full seconds, minutes, hours, days, weeks, and years between
two dates. If there isn't a full value between the two objects (like less than a day), a zero will be
returned. These new functions are:
HOURS_BETWEEN - returns the number of full hours between two arguments
MINUTES_BETWEEN - returns the number of full minutes between two arguments
SECONDS_BETWEEN - returns the number of full seconds between two arguments
DAYS_BETWEEN - returns the number of full days between two arguments
WEEKS_BETWEEN - returns the number of full weeks between two arguments
YEARS_BETWEEN - returns the number of full years between two arguments
The format of the function is:
Python
DAYS_BETWEEN( expression1, expression2 )
The following SQL will use a date that is in the future with exactly one extra second, minute, hour, day,
week and year added to it.
End of explanation
%%sql
WITH DATES(FUNCTION, RESULT) AS (
VALUES
('0 MONTH ',MONTHS_BETWEEN(NOW, NOW)),
('1 MONTH ',MONTHS_BETWEEN(NOW + 1 MONTH, NOW)),
('1 MONTH + 1 DAY',MONTHS_BETWEEN(NOW + 1 MONTH + 1 DAY, NOW)),
('LEAP YEAR ',MONTHS_BETWEEN('2016-02-01','2016-03-01')),
('NON-LEAP YEAR ',MONTHS_BETWEEN('2015-02-01','2015-03-01'))
)
SELECT * FROM DATES
Explanation: Back to Top
<a id='mbetween'></a>
MONTHS_BETWEEN Function
You may have noticed that the MONTHS_BETWEEN function was not in the previous list of functions. The
reason for this is that the value returned for MONTHS_BETWEEN is different from the other functions. The MONTHS_BETWEEN
function returns a DECIMAL value rather than an integer value. The reason for this is that the duration of a
month is not as precise as a day, week or year. The following example will show how the duration is
a decimal value rather than an integer. You could always truncate the value if you want an integer.
End of explanation
%%sql
WITH DATES(FUNCTION, RESULT) AS (
VALUES
('AGE + 1 DAY ',AGE(NOW - 1 DAY)),
('AGE + 1 MONTH ',AGE(NOW - 1 MONTH)),
('AGE + 1 YEAR ',AGE(NOW - 1 YEAR)),
('AGE + 1 DAY + 1 MONTH ',AGE(NOW - 1 DAY - 1 MONTH)),
('AGE + 1 DAY + 1 YEAR ',AGE(NOW - 1 DAY - 1 YEAR)),
('AGE + 1 DAY + 1 MONTH + 1 YEAR',AGE(NOW - 1 DAY - 1 MONTH - 1 YEAR))
)
SELECT * FROM DATES
Explanation: Back to Top
<a id='duration'></a>
Date Duration Functions
An alternate way of representing date durations is through the use of an integer with the format YYYYMMDD where
the YYYY represents the year, MM for the month and DD for the day. Date durations are easier to manipulate than
timestamp values and take up substantially less storage.
There are two new functions.
YMD_BETWEEN returns a numeric value that specifies the number of full years, full months, and full days between two datetime values
AGE returns a numeric value that represents the number of full years, full months, and full days between the current timestamp and the argument
This SQL statement will return various AGE calculations based on the current timestamp.
End of explanation
%%sql
WITH DATES(FUNCTION, RESULT) AS (
VALUES
('1 DAY ',YMD_BETWEEN(NOW,NOW - 1 DAY)),
('1 MONTH ',YMD_BETWEEN(NOW,NOW - 1 MONTH)),
('1 YEAR ',YMD_BETWEEN(NOW,NOW - 1 YEAR)),
('1 DAY + 1 MONTH ',YMD_BETWEEN(NOW,NOW - 1 DAY - 1 MONTH)),
('1 DAY + 1 YEAR ',YMD_BETWEEN(NOW,NOW - 1 DAY - 1 YEAR)),
('1 DAY + 1 MONTH + 1 YEAR',YMD_BETWEEN(NOW,NOW - 1 DAY - 1 MONTH - 1 YEAR))
)
SELECT * FROM DATES
Explanation: The YMD_BETWEEN function is similar to the AGE function except that it takes two date arguments. We can
simulate the AGE function by supplying the NOW function to the YMD_BETWEEN function.
End of explanation
%%sql
VALUES
CASE
WHEN
(NOW, NOW + 1 DAY) OVERLAPS (NOW + 1 DAY, NOW + 2 DAYS) THEN 'Overlaps'
ELSE
'No Overlap'
END
Explanation: Back to Top
<a id='overlaps'></a>
OVERLAPS Predicate
The OVERLAPS predicate is used to determine whether two chronological periods overlap. This is not a
function within DB2, but rather a special SQL syntax extension.
A chronological period is specified by a pair of date-time expressions. The first expression specifies
the start of a period; the second specifies its end.
Python
(start1,end1) OVERLAPS (start2, end2)
The beginning and end values are not included in the periods. The following
summarizes the overlap logic. For example, the periods 2016-10-19 to 2016-10-20
and 2016-10-20 to 2016-10-21 do not overlap.
For instance, the following interval does not overlap.
End of explanation
%%sql
VALUES
CASE
WHEN
(NOW, NOW + 2 DAYS) OVERLAPS (NOW + 1 DAY, NOW + 2 DAYS) THEN 'Overlaps'
ELSE
'No Overlap'
END
Explanation: If the first date range is extended by one day then the range will overlap.
End of explanation
%%sql
VALUES
CASE
WHEN
(NOW, NOW + 1 DAY) OVERLAPS (NOW, NOW + 1 DAY) THEN 'Overlaps'
ELSE
'No Overlap'
END
Explanation: Identical date ranges will overlap.
End of explanation
%%sql
VALUES FROM_UTC_TIMESTAMP(TIMESTAMP '2011-12-25 09:00:00.123456', 'Asia/Tokyo');
Explanation: Back to Top
<a id='utc'></a>
UTC Time Conversions
Db2 has two functions that allow you to translate timestamps to and from UTC (Coordinated Universal Time).
The FROM_UTC_TIMESTAMP scalar function returns a TIMESTAMP that is converted from Coordinated Universal Time
to the time zone specified by the time zone string.
The TO_UTC_TIMESTAMP scalar function returns a TIMESTAMP that is converted to Coordinated Universal Time
from the timezone that is specified by the timezone string.
The format of the two functions is:
Python
FROM_UTC_TIMESTAMP( expression, timezone )
TO_UTC_TIMESTAMP( expression, timezone)
The return value from each of these functions is a timestamp. The "expression" is a timestamp that
you want to convert to the local timezone (or convert to UTC). The timezone is
an expression that specifies the time zone that the expression is to be adjusted to.
The value of the timezone-expression must be a time zone name from the Internet Assigned Numbers Authority (IANA)
time zone database. The standard format for a time zone name in the IANA database is Area/Location, where:
Area is the English name of a continent, ocean, or the special area 'Etc'
Location is the English name of a location within the area; usually a city, or small island
Examples:
"America/Toronto"
"Asia/Sakhalin"
"Etc/UTC" (which represents Coordinated Universal Time)
For complete details on the valid set of time zone names and the rules that are associated with those time zones,
refer to the IANA time zone database. The database server uses version 2010c of the IANA time zone database.
The result is a timestamp, adjusted from/to the Coordinated Universal Time time zone to the time zone
specified by the timezone-expression. If the timezone-expression returns a value that is not a time zone
in the IANA time zone database, then the value of expression is returned without being adjusted.
The timestamp adjustment is done by first applying the raw offset from Coordinated Universal Time of the
timezone-expression. If Daylight Saving Time is in effect at the adjusted timestamp for the time zone
that is specified by the timezone-expression, then the Daylight Saving Time offset is also applied
to the timestamp.
Time zones that use Daylight Saving Time have ambiguities at the transition dates. When a time zone
changes from standard time to Daylight Saving Time, a range of time does not occur as it is skipped
during the transition. When a time zone changes from Daylight Saving Time to standard time,
a range of time occurs twice. Ambiguous timestamps are treated as if they occurred when standard time
was in effect for the time zone.
Convert the Coordinated Universal Time timestamp '2011-12-25 09:00:00.123456' to the 'Asia/Tokyo' time zone.
The following returns a TIMESTAMP with the value '2011-12-25 18:00:00.123456'.
End of explanation
%%sql
VALUES FROM_UTC_TIMESTAMP(TIMESTAMP'2014-11-02 06:55:00', 'America/Toronto');
Explanation: Convert the Coordinated Universal Time timestamp '2014-11-02 06:55:00' to the 'America/Toronto' time zone.
The following returns a TIMESTAMP with the value '2014-11-02 01:55:00'.
End of explanation
%%sql
VALUES FROM_UTC_TIMESTAMP(TIMESTAMP'2015-03-02 06:05:00', 'America/Toronto');
Explanation: Convert the Coordinated Universal Time timestamp '2015-03-02 06:05:00' to the 'America/Toronto'
time zone. The following returns a TIMESTAMP with the value '2015-03-02 01:05:00'.
End of explanation
%%sql
VALUES TO_UTC_TIMESTAMP(TIMESTAMP'1970-01-01 00:00:00', 'America/Denver');
Explanation: Convert the timestamp '1970-01-01 00:00:00' to the Coordinated Universal Time timezone from the 'America/Denver'
timezone. The following returns a TIMESTAMP with the value '1970-01-01 07:00:00'.
End of explanation
%%sql -q
DROP TABLE TXS_BASE;
CREATE TABLE TXS_BASE
(
ID INTEGER NOT NULL,
CUSTID INTEGER NOT NULL,
TXTIME_UTC TIMESTAMP NOT NULL
);
Explanation: Using UTC Functions
One of the applications for using the UTC is to take the transaction timestamp and normalize it across
all systems that access the data. You can convert the timestamp to UTC on insert and then when it is
retrieved, it can be converted to the local timezone.
This example will use a number of techniques to hide the complexity of changing timestamps to local timezones.
The following SQL will create our base transaction table (TXS_BASE) that will be used throughout the
example.
End of explanation
%%sql
CREATE OR REPLACE VARIABLE TIME_ZONE VARCHAR(255) DEFAULT('America/Toronto');
Explanation: The UTC functions will be written to take advantage of a local timezone variable called TIME_ZONE. This
variable will contain the timezone of the server (or user) that is running the transaction. In this
case we are using the timezone in Toronto, Canada.
End of explanation
%sql SET TIME_ZONE = 'America/Toronto'
Explanation: The SET Command can be used to update the TIME_ZONE to the current location we are in.
End of explanation
%%sql
CREATE OR REPLACE FUNCTION GET_TIMEZONE()
RETURNS VARCHAR(255)
LANGUAGE SQL CONTAINS SQL
RETURN (TIME_ZONE)
Explanation: In order to retrieve the value of the current timezone, we take advantage of a simple user-defined function
called GET_TIMEZONE. It just retrieves the contents of the current TIME_ZONE variable that we set up.
End of explanation
%%sql
CREATE OR REPLACE VIEW TXS AS
(
SELECT
ID,
CUSTID,
FROM_UTC_TIMESTAMP(TXTIME_UTC,GET_TIMEZONE()) AS TXTIME
FROM
TXS_BASE
)
Explanation: The TXS view is used by all SQL statements rather than the TXS_BASE table. The reason for this is to
take advantage of INSTEAD OF triggers that can manipulate the UTC without modifying the original SQL.
Note that when the data is returned from the view that the TXTIME field is converted from UTC to the current
TIMEZONE that we are in.
End of explanation
%%sql -d
CREATE OR REPLACE TRIGGER I_TXS
INSTEAD OF INSERT ON TXS
REFERENCING NEW AS NEW_TXS
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
INSERT INTO TXS_BASE VALUES (
NEW_TXS.ID,
NEW_TXS.CUSTID,
TO_UTC_TIMESTAMP(NEW_TXS.TXTIME,GET_TIMEZONE())
);
END
@
CREATE OR REPLACE TRIGGER U_TXS
INSTEAD OF UPDATE ON TXS
REFERENCING NEW AS NEW_TXS OLD AS OLD_TXS
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
UPDATE TXS_BASE
SET (ID, CUSTID, TXTIME_UTC) =
(NEW_TXS.ID,
NEW_TXS.CUSTID,
TO_UTC_TIMESTAMP(NEW_TXS.TXTIME,TIME_ZONE)
)
WHERE
TXS_BASE.ID = OLD_TXS.ID
;
END
@
CREATE OR REPLACE TRIGGER D_TXS
INSTEAD OF DELETE ON TXS
REFERENCING OLD AS OLD_TXS
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
DELETE FROM TXS_BASE
WHERE
TXS_BASE.ID = OLD_TXS.ID
;
END
@
Explanation: An INSTEAD OF trigger (INSERT, UPDATE, and DELETE) is created against the TXS view so that any insert or
update on a TXTIME column will be converted back to the UTC value. From an application perspective,
we are using the local time, not the UTC time.
End of explanation
%sql VALUES NOW
Explanation: At this point in time(!) we can start inserting records into our table. We have already set the timezone
to be Toronto, so the next insert statement will take the current time (NOW) and insert it into the table.
For reference, here is the current time.
End of explanation
%%sql
INSERT INTO TXS VALUES(1,1,NOW);
SELECT * FROM TXS;
Explanation: We will insert one record into the table and immediately retrieve the result.
End of explanation
%sql SELECT * FROM TXS_BASE
Explanation: Note that the timestamp appears to be the same as what we inserted (plus or minus a few seconds). What actually
sits in the base table is the UTC time.
End of explanation
%sql SET TIME_ZONE = 'America/Vancouver'
Explanation: We can modify the time that is returned to us by changing our local timezone. The statement will make
the system think we are in Vancouver.
End of explanation
%sql SELECT * FROM TXS
Explanation: Retrieving the results will show that the timestamp has shifted by 3 hours (Vancouver is 3 hours behind
Toronto).
End of explanation
%%sql
INSERT INTO TXS VALUES(2,2,NOW);
SELECT * FROM TXS;
Explanation: So what happens if we insert a record into the table now that we are in Vancouver?
End of explanation
%sql SELECT * FROM TXS_BASE
Explanation: The data retrieved reflects the fact that we are now in Vancouver from an application perspective. Looking at the
base table, you will see that everything has been converted to UTC time.
End of explanation
%%sql
SET TIME_ZONE = 'America/Toronto';
SELECT * FROM TXS;
Explanation: Finally, we can switch back to Toronto time and see when the transactions were done. You will see that, from a
Toronto perspective, the transactions appear three hours later because of the timezone difference.
End of explanation |
4,407 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
String Methods
In this lecture we are going to be looking at string methods in more detail. To simplify, a method is a function that are bound to a particular type of object. String methods, for example, are functions that only work on strings, List methods are a set of functions that only work on lists and so on. If you read the OOP lecture this should make sense to you already.
To call these methods, I have to introduce you to a new syntax
Step1: So the method 'upper' capitalises a string; "hi" becomes "HI".
Now, remember in the 'Calling Functions' lecture I talking about functions that take zero arguments? Well, at first glance it might look like upper() takes zero arguments, but actually this isn’t true, object methods take themselves as an argument.
So
Step2: Talking of syntax, addition, multiplication and so on are also methods and they can be called using two different bits of syntax
Step3: From the above three lines you can probably get a good idea what the "isdigit" method is doing; This method checks checks to see if each character in the string is a digit (i.e 1,2,3,4,5,6,7,8,9,0) or not. If every single character is a digit the method returns True, False otherwise.
Just because a string methods take strings as input DOES NOT mean they must output strings as well.
Count Method
Step4: The count method shown above can be pretty useful, it takes two arguments and returns the total number of times the second string appears in the first. Do note that it is case-sensitive and it looks for an EXACT match. For example
Step5: In the code cell above you may notice a new concept, "input". Input basically asks the user (yes thats you) to type in a message. In this particular case we call input three times and store the result in three seperate variables. Once thats done we take the text and replace character X with character Y.
For example
Step6: However, concatenation can become a little cumbersome when we start trying to create strings with several moving parts and/or with different data-types
Step7: I think you will all agree that the string 's' is getting clunky right now. Its so large that we have to scroll sideways just to see the end of it.
Format to the rescue!
“I have {x} cats and {y}{z}”.format (x, y, z)
Normally when I write syntax I use {} for my own commentary, but on this occasion you need to be aware '{}' is literally what you type in.
Python will then replace the {} with a value, which happens to be the value you give as an argument to format(). So format(x) will insert 'X' into the string. Maybe this is easier understood with actual examples
Step8: Let's quickly go back to my 203 pet wombat example. But this time instead of using concatination we shall use the format method.
Step9: Notice that this time instead of empty brackets '{}' we have numbers inside them (e.g '{3}'). This relates to something called indexing (more on this later), but for now, let's just say {3} means Python replaces {3} with the third argument parsed to the format method, in this case it is the variable called 'pet'.
As a minor technical detail, Python counts from zero so the 'third item' is actually the fourth item, if that makes sense.
Here's another example
Step10: Homework Assignment #1
Your first homework assignment for this week is to take the variable named "text" (this has been defined for you) and count the number of times "z" occurs AND the letter "k" occurs. Add those numbers up and print the result.
As a further complication, we DO NOT care about case (e.g. ‘z’ and ‘Z’ should both be included in the count).
Don’t feel bad if you struggle, this homework is a step up in difficulty compared to normal. Oh and I have also included a few test cases below that should help you figure out what to do (just in case my instructions were not clear enough).
For bonus difficulty, make your code work for any letter (e.g. "a", "b" returns the count of 'a' + 'A' + 'b' + 'B').
Step11: Homework Assignment #2
You are working on a program that has a greeting message and a goodbye message for several languages. Your job is to simply make the code more elegant and readable. Do whatever you think needs doing (note, there isn't really a right/wrong answer here, the aim of this homework is just to make you think about style and readability). | Python Code:
print("hello".upper()) # This works
print(str.upper()) # This returns an error, upper needs an argument
Explanation: String Methods
In this lecture we are going to be looking at string methods in more detail. To simplify, a method is a function that is bound to a particular type of object. String methods, for example, are functions that only work on strings, list methods are a set of functions that only work on lists, and so on. If you read the OOP lecture this should make sense to you already.
To call these methods, I have to introduce you to a new syntax:
{Object}.{method name}({arguments, if any})
The syntax above is called ‘dot notation’ and this is one of the main ways we can call an objects method.
Let’s look at an example:
End of explanation
print("HELLO".lower()) # This works!
print(str.lower("HELLO")) # This also works!
Explanation: So the method 'upper' capitalises a string; "hi" becomes "HI".
Now, remember in the 'Calling Functions' lecture when I talked about functions that take zero arguments? Well, at first glance it might look like upper() takes zero arguments, but actually this isn’t true: object methods take the object itself as an argument.
So:
“hello”.upper() ---> "HELLO"
Would look like this (if expressed as a function):
upper(“hello”) ---> "HELLO"
The syntax is a bit different, but semantically these two things function the same.
End of explanation
print("Hello".isdigit())
print("99".isdigit())
print("103.2".isdigit())
Explanation: Talking of syntax, addition, multiplication and so on are also methods and they can be called using two different bits of syntax:
"Hello" + "World"
"Hello".__add__("World")
Just as before these two ways of doing things produce the same result. Anyway, in the rest of this lecture we will be going over some of the various string methods in more detail...
Isdigit method
End of explanation
print("nananananananaBATMAN".count("A")) # Note count is case sensitive.
print("nananananananaBATMAN".count("BATMAN"))
print("nananananananaBATMAN".count("ROBIN"))
Explanation: From the above three lines you can probably get a good idea of what the "isdigit" method is doing; this method checks to see if each character in the string is a digit (i.e. 0,1,2,...,9) or not. If every single character is a digit the method returns True, and False otherwise.
Just because string methods take strings as input DOES NOT mean they must output strings as well.
Count Method
End of explanation
text = input("Feed me characters...FEED ME NOW GRRRR!!! ")
replace_this = input("Now give me a single character to change in the text ")
replace_with = input("What should we change that character to ? ")
print("")
print("============ RESULT =================")
print(text.replace(replace_this, replace_with))
Explanation: The count method shown above can be pretty useful: it takes two arguments and returns the total number of times the second string appears in the first. Do note that it is case-sensitive and it looks for an EXACT match. For example:
"abc".count("ab") --> 1
"acb".count("ab") --> 0
In short, count is looking for instances of the whole pattern and NOT how many times a and b appear individually.
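One more detail worth knowing: count only counts non-overlapping matches, for example:
"aaaa".count("aa") --> 2 (not 3)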
Replace method
End of explanation
name = "chris"
greeting = "hi, " + name
print(greeting)
Explanation: In the code cell above you may notice a new concept, "input". Input basically asks the user (yes, that's you) to type in a message. In this particular case we call input three times and store the results in three separate variables. Once that's done we take the text and replace character X with character Y.
For example:
starting text = “BATMAN”
replace_this = “A”
Replace_with = “Z”
Returns: “BZTMZN”
Go ahead, why not play with this code for a bit.
Getting side-tracked with Style
Now these variable names are pretty good overall, but readability isn't just about having good individual names, it is also about creating names that ‘fit’ together, that is, a naming structure consistent throughout the code.
After a little bit of thought I came up with a much better name than "old_sequence": I swapped it to "replace_this". This new name is not any better in and of itself, but when we juxtapose it alongside "replace_with" it is obviously a more elegant name.
replace_this
replace_with
‘old_sequence’, although a perfectly reasonable variable name by itself, doesn’t show the reader that these two variables are related to one another. Changing ‘old_sequence’ to ‘replace_this’ makes the relationship clearer and as an extra bonus it means our code could almost pass for normal English.
text.replace(replace_this, replace_with)
text.replace(old_sequence, replace_with)
In the grand scheme of things we are making tiny little changes here, but I’d argue the first example is better. This tiny little change makes my code more beautiful, more elegant, and above all, more readable.
Hopefully this week's homework (#2) will make these concepts more clear to you.
Format
If you read my code snippets you will see that I use format a lot. At heart, format is a way to create strings with 'moving parts' inside them.
For example, if I want to greet the user I probably want some code that returns “hello, {user’s name}”.
We can do this with concatenation like so:
End of explanation
name = "chris"
age = 29
no_of_pets = 203
pet = "wombat"
s = "Hi, " + name + " your age is " + str(age) + " and you have " + str(no_of_pets) + " " + pet + "'s. Wow, thats a lot of " + pet + "'s"
print(s)
Explanation: However, concatenation can become a little cumbersome when we start trying to create strings with several moving parts and/or with different data-types:
End of explanation
s = "I have {} cats and {} {}.".format(2, 3, "dogs")
print(s)
Explanation: I think you will all agree that the string 's' is getting clunky right now. It's so large that we have to scroll sideways just to see the end of it.
Format to the rescue!
“I have {x} cats and {y} {z}”.format(x, y, z)
Normally when I write syntax I use {} for my own commentary, but on this occasion you need to be aware '{}' is literally what you type in.
Python will then replace each {} with a value, which happens to be the value you give as an argument to format(). So format(x) will insert the value of x into the string. Maybe this is easier understood with actual examples:
End of explanation
name = "chris"
age = 29
no_of_pets = 203
pet = "wombat"
s = "Hi, {0} your age is {1} and you have {2} {3}'s. Wow, thats a lot of {3}s".format(name, age, no_of_pets, pet)
print(s)
Explanation: Let's quickly go back to my 203 pet wombat example. But this time instead of using concatenation we shall use the format method.
End of explanation
"{2} {0} {1} {0} {1} {2} {0} {1} {0} {1} {2}.".format("help","me", "please")
# {0} ==> "help"
# {1} ==> "me"
# {2} ==> "please"
Explanation: Notice that this time instead of empty brackets '{}' we have numbers inside them (e.g. '{3}'). This relates to something called indexing (more on this later), but for now, let's just say {3} means Python replaces {3} with the third argument passed to the format method, in this case the variable called 'pet'.
As a minor technical detail, Python counts from zero so the 'third item' is actually the fourth item, if that makes sense.
Here's another example:
End of explanation
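As a final aside on format (a small extra example, not covered in the lecture text above), the placeholders can also be named, which often reads more clearly than numbers:
print("Hi, {name}, you have {n} {pet}s".format(name="chris", n=203, pet="wombat"))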
text = "zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzZZZZeeeewwwwwwwwkKewewe2324____23!!!!!fsdffskdsdzzzzZZZZZZZZZZZZZZZZroiooioi"
# Simple Examples (string --> total you should return)
# "zZ" --> 2
# "kK" --> 2
# "KZ" --> 2
# "1aZZZabc" --> 3
# "hello" --> 0
# "ZzKk" --> 4
# Your code goes here...
Explanation: Homework Assignment #1
Your first homework assignment for this week is to take the variable named "text" (this has been defined for you) and count the number of times "z" occurs AND the letter "k" occurs. Add those numbers up and print the result.
As a further complication, we DO NOT care about case (e.g. ‘z’ and ‘Z’ should both be included in the count).
Don’t feel bad if you struggle, this homework is a step up in difficulty compared to normal. Oh and I have also included a few test cases below that should help you figure out what to do (just in case my instructions were not clear enough).
For bonus difficulty, make your code work for any letter (e.g. "a", "b" returns the count of 'a' + 'A' + 'b' + 'B').
End of explanation
# Make this more readable...
bye_spain = "buenas noches"
english_greeting = "hello"
english_goodbye = "sod off, lad"
greeting_japanese = "konichiwa"
spanish_greeting = "hola"
hello_in_french = "bonjour"
japanese_bye = "sayonara"
Explanation: Homework Assignment #2
You are working on a program that has a greeting message and a goodbye message for several languages. Your job is to simply make the code more elegant and readable. Do whatever you think needs doing (note, there isn't really a right/wrong answer here, the aim of this homework is just to make you think about style and readability).
End of explanation |
4,408 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
BC Grid Extrapolation
Testing errors generated by grid extrapolation for extremely cool spot bolometric corrections. A first test of this will be to use a more extensive Phoenix color grid to explore effects that may be missing from MARCS (aka
Step1: Setting up Phoenix Grid Interpolation
First, load required color tables (1 optical, 1 NIR).
Step2: Generate (linear) interpolation surfaces as a function of $\log g$ and $T_{\rm eff}$.
Step3: BT-Settl Colorize a Dartmouth Isochrone
Load a standard isochrone, with MARCS colors.
Step4: Compute colors using Phoenix BT-Settl models using CIFIST 2015 color tables. Colors were shown to be compatible with MARCS colors in another note.
Step5: Convert from surface magnitudes to absolute magnitudes.
Step6: Stack colors with stellar properties to form a new isochrone.
Step7: Load Spotted Isochrone(s)
There are two types of spotted isochrones, one with magnitudes and colors with average surface properties, the other has more detailed information about spot temperatures and luminosities.
Step8: Compute colors for photospheric and spot components.
Step9: Convert surface magnitudes to absolute magnitudes.
Step10: Compute luminosity fractions for spots and photosphere for use in combining the two contributions.
Step11: Now combine spot properties with the photospheric properties to derive properties for spotted stars.
Step12: Stack with average surface properties to form a spotted isochrone.
Step13: Isochrone Comparisons
We may now compare morphologies of spotted isochrones computed using Phoenix and MARCS color tables.
Step14: Optical CMDs appear to be in good order, even though some of the spot properties may extend beyond the formal MARCS grid. At high temperatures, the Phoenix models cut out before the MARCS models, with the maximum temperature in the Phoenix models at 7000 K.
Now we may check NIR CMDs.
Step15: Things look good!
Sanity Checks
Before moving on and accepting that, down to $\varpi = 0.50$, our models produce reliable results, with possible difference in $(J-K)$ colors below the M dwarf boundary, we should confirm that all star have actual values for their spot colors. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.interpolate import LinearNDInterpolator
Explanation: BC Grid Extrapolation
Testing errors generated by grid extrapolation for extremely cool spot bolometric corrections. A first test of this will be to use a more extensive Phoenix color grid to explore effects that may be missing from MARCS (aka: condensates).
End of explanation
phx_col_dir = '/Users/grefe950/Projects/starspot/starspot/color/tab/phx/CIFIST15'
opt_table = np.genfromtxt('{0}/colmag.BT-Settl.server.COUSINS.Vega'.format(phx_col_dir), comments='!')
nir_table = np.genfromtxt('{0}/colmag.BT-Settl.server.2MASS.Vega'.format(phx_col_dir), comments='!')
Explanation: Setting up Phoenix Grid Interpolation
First, load required color tables (1 optical, 1 NIR).
End of explanation
opt_surface = LinearNDInterpolator(opt_table[:, :2], opt_table[:, 4:8])
nir_surface = LinearNDInterpolator(nir_table[:, :2], nir_table[:, 4:7])
Explanation: Generate (linear) interpolation surfaces as a function of $\log g$ and $T_{\rm eff}$.
End of explanation
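As a quick sanity check, the surfaces can be evaluated at a single point; the argument order is assumed to be (Teff, log g), matching the calls further below, and NaNs come back outside the tabulated grid:
print(opt_surface(3500., 5.0))  # Cousins BVRI values at Teff = 3500 K, log g = 5.0
print(nir_surface(3500., 5.0))  # 2MASS JHK values at the same point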
iso = np.genfromtxt('data/dmestar_00120.0myr_z+0.00_a+0.00_marcs.iso')
Explanation: BT-Settl Colorize a Dartmouth Isochrone
Load a standard isochrone, with MARCS colors.
End of explanation
phx_opt_mags = opt_surface(10.0**iso[:, 1], iso[:, 2])
phx_nir_mags = nir_surface(10.0**iso[:, 1], iso[:, 2])
Explanation: Compute colors using Phoenix BT-Settl models using CIFIST 2015 color tables. Colors were shown to be compatible with MARCS colors in another note.
End of explanation
for i in range(phx_opt_mags.shape[1]):
phx_opt_mags[:, i] = phx_opt_mags[:, i] - 5.0*np.log10(10**iso[:, 4]*6.956e10/3.086e18) + 5.0
for i in range(phx_nir_mags.shape[1]):
phx_nir_mags[:, i] = phx_nir_mags[:, i] - 5.0*np.log10(10**iso[:, 4]*6.956e10/3.086e18) + 5.0
Explanation: Convert from surface magnitudes to absolute magnitudes.
End of explanation
phx_iso = np.column_stack((iso[:, :6], phx_opt_mags)) # stack props with BVRI
phx_iso = np.column_stack((phx_iso, phx_nir_mags)) # stack props/BVRI with JHK
Explanation: Stack colors with stellar properties to form a new isochrone.
End of explanation
orig_iso = np.genfromtxt('/Users/grefe950/Projects/starspot/models/age_120.0+z_0.00/isochrone_120.0myr_z+0.00_a+0.00_marcs.iso')
spot_mags = np.genfromtxt('/Users/grefe950/Projects/starspot/models/age_120.0+z_0.00/sts/mag_zet+0.62_eps+1.00_rho+0.40_pi+0.50.dat')
spot_prop = np.genfromtxt('/Users/grefe950/Projects/starspot/models/age_120.0+z_0.00/sts/spots_zet+0.62_eps+1.00_rho+0.40_pi+0.50.dat')
Explanation: Load Spotted Isochrone(s)
There are two types of spotted isochrones, one with magnitudes and colors with average surface properties, the other has more detailed information about spot temperatures and luminosities.
End of explanation
phx_opt_phot = opt_surface(10**spot_prop[:, 1], spot_mags[:, 2])
phx_opt_spot = opt_surface(10**spot_prop[:, 2], spot_mags[:, 2])
phx_nir_phot = nir_surface(10**spot_prop[:, 1], spot_mags[:, 2])
phx_nir_spot = nir_surface(10**spot_prop[:, 2], spot_mags[:, 2])
Explanation: Compute colors for photospheric and spot components.
End of explanation
for i in range(phx_opt_phot.shape[1]):
phx_opt_phot[:, i] = phx_opt_phot[:, i] - 5.0*np.log10(10**spot_mags[:, 4]*6.956e10/3.086e18) + 5.0
phx_opt_spot[:, i] = phx_opt_spot[:, i] - 5.0*np.log10(10**spot_mags[:, 4]*6.956e10/3.086e18) + 5.0
for i in range(phx_nir_phot.shape[1]):
phx_nir_phot[:, i] = phx_nir_phot[:, i] - 5.0*np.log10(10**spot_mags[:, 4]*6.956e10/3.086e18) + 5.0
phx_nir_spot[:, i] = phx_nir_spot[:, i] - 5.0*np.log10(10**spot_mags[:, 4]*6.956e10/3.086e18) + 5.0
Explanation: Convert surface magnitudes to absolute magnitudes.
End of explanation
L_spot = 10**spot_prop[:, 4]/10**orig_iso[:, 3]
L_phot = 10**spot_prop[:, 3]/10**orig_iso[:, 3]
Explanation: Compute luminosity fractions for spots and photosphere for use in combining the two contributions.
End of explanation
phx_opt_spot_mags = np.empty(phx_opt_phot.shape)
phx_nir_spot_mags = np.empty(phx_nir_phot.shape)
for i in range(phx_opt_phot.shape[1]):
phx_opt_spot_mags[:,i] = -2.5*np.log10(0.6*10**(-phx_opt_phot[:,i]/2.5)
+ 0.4*10**(-phx_opt_spot[:,i]/2.5))
for i in range(phx_nir_phot.shape[1]):
phx_nir_spot_mags[:,i] = -2.5*np.log10(0.6*10**(-phx_nir_phot[:,i]/2.5)
+ 0.4*10**(-phx_nir_spot[:,i]/2.5))
Explanation: Now combine spot properties with the photospheric properties to derive properties for spotted stars.
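For reference, the loop above performs a flux-weighted combination of the two components; with a spot covering fraction of $\rho = 0.4$ (matching the rho+0.40 model files and the hard-coded 0.6/0.4 weights), it reads
$$ m_{\rm comb} = -2.5 \log_{10}\left[(1-\rho)\,10^{-m_{\rm phot}/2.5} + \rho\,10^{-m_{\rm spot}/2.5}\right]. $$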
End of explanation
spt_iso = np.column_stack((spot_mags[:, :6], phx_opt_spot_mags))
spt_iso = np.column_stack((spt_iso, phx_nir_spot_mags))
Explanation: Stack with average surface properties to form a spotted isochrone.
End of explanation
fig, ax = plt.subplots(1, 3, figsize=(18., 8.), sharey=True)
for axis in ax:
axis.grid(True)
axis.set_ylim(17., 2.)
axis.tick_params(which='major', axis='both', labelsize=16., length=15.)
# V/(B-V)
ax[0].set_xlim(-0.5, 2.0)
ax[0].plot(iso[:, 6] - iso[:, 7], iso[:, 7], lw=3, c='#b22222')
ax[0].plot(spot_mags[:, 7] - spot_mags[:, 8], spot_mags[:, 8], dashes=(20., 5.), lw=3, c='#b22222')
ax[0].plot(phx_iso[:, 6] - phx_iso[:, 7], phx_iso[:, 7], lw=3, c='#555555')
ax[0].plot(spt_iso[:, 6] - spt_iso[:, 7], spt_iso[:, 7], dashes=(20., 5.), lw=3, c='#555555')
# V/(V-Ic)
ax[1].set_xlim(0.0, 4.0)
ax[1].plot(iso[:, 7] - iso[:, 8], iso[:, 7], lw=3, c='#b22222')
ax[1].plot(spot_mags[:, 8] - spot_mags[:,10], spot_mags[:, 8], dashes=(20., 5.), lw=3, c='#b22222')
ax[1].plot(phx_iso[:, 7] - phx_iso[:, 9], phx_iso[:, 7], lw=3, c='#555555')
ax[1].plot(spt_iso[:, 7] - spt_iso[:, 9], spt_iso[:, 7], dashes=(20., 5.), lw=3, c='#555555')
# V/(V-K)
ax[2].set_xlim(0.0, 7.0)
ax[2].plot(iso[:, 7] - iso[:,10], iso[:, 7], lw=3, c='#b22222')
ax[2].plot(spot_mags[:, 8] - spot_mags[:,13], spot_mags[:, 8], dashes=(20., 5.), lw=3, c='#b22222')
ax[2].plot(phx_iso[:, 7] - phx_iso[:,12], phx_iso[:, 7], lw=3, c='#555555')
ax[2].plot(spt_iso[:, 7] - spt_iso[:,12], spt_iso[:, 7], dashes=(20., 5.), lw=3, c='#555555')
Explanation: Isochrone Comparisons
We may now compare morphologies of spotted isochrones computed using Phoenix and MARCS color tables.
End of explanation
fig, ax = plt.subplots(1, 3, figsize=(18., 8.), sharey=True)
for axis in ax:
axis.grid(True)
axis.set_ylim(10., 2.)
axis.tick_params(which='major', axis='both', labelsize=16., length=15.)
# K/(Ic-K)
ax[0].set_xlim(0.0, 3.0)
ax[0].plot(iso[:, 8] - iso[:, 10], iso[:, 10], lw=3, c='#b22222')
ax[0].plot(spot_mags[:, 10] - spot_mags[:, 13], spot_mags[:, 13], dashes=(20., 5.), lw=3, c='#b22222')
ax[0].plot(phx_iso[:, 9] - phx_iso[:, 12], phx_iso[:, 12], lw=3, c='#555555')
ax[0].plot(spt_iso[:, 9] - spt_iso[:, 12], spt_iso[:, 12], dashes=(20., 5.), lw=3, c='#555555')
# K/(J-K)
ax[1].set_xlim(0.0, 1.0)
ax[1].plot(iso[:, 9] - iso[:, 10], iso[:, 10], lw=3, c='#b22222')
ax[1].plot(spot_mags[:, 11] - spot_mags[:,13], spot_mags[:, 13], dashes=(20., 5.), lw=3, c='#b22222')
ax[1].plot(phx_iso[:, 10] - phx_iso[:, 12], phx_iso[:, 12], lw=3, c='#555555')
ax[1].plot(spt_iso[:, 10] - spt_iso[:, 12], spt_iso[:, 12], dashes=(20., 5.), lw=3, c='#555555')
# K/(V-K)
ax[2].set_xlim(0.0, 7.0)
ax[2].plot(iso[:, 7] - iso[:,10], iso[:, 10], lw=3, c='#b22222')
ax[2].plot(spot_mags[:, 8] - spot_mags[:,13], spot_mags[:, 13], dashes=(20., 5.), lw=3, c='#b22222')
ax[2].plot(phx_iso[:, 7] - phx_iso[:,12], phx_iso[:, 12], lw=3, c='#555555')
ax[2].plot(spt_iso[:, 7] - spt_iso[:,12], spt_iso[:, 12], dashes=(20., 5.), lw=3, c='#555555')
Explanation: Optical CMDs appear to be in good order, even though some of the spot properties may extend beyond the formal MARCS grid. At high temperatures, the Phoenix models cut out before the MARCS models, with the maximum temperature in the Phoenix models at 7000 K.
Now we may check NIR CMDs.
End of explanation
10**spot_prop[0, 1], phx_opt_phot[0], 10**spot_prop[0, 2], phx_opt_spot[0]
10**spot_prop[0, 1], phx_nir_phot[0], 10**spot_prop[0, 2], phx_nir_spot[0]
Explanation: Things look good!
Sanity Checks
Before moving on and accepting that, down to $\varpi = 0.50$, our models produce reliable results, with possible differences in $(J-K)$ colors below the M dwarf boundary, we should confirm that all stars have actual values for their spot colors.
End of explanation |
4,409 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classwork 3
Michael Seaman, Chinmai Raman, Austin Ayers, Taylor Patti
Organized by Andrew Malfavon
Excercise A.2
Step1: Chinmai Raman
Classwork 3
5.49 Experience Overflow in a Function
Calculates an exponential function and returns the numerator, denominator and the fraction as a 3-tuple
Step2: Austin Ayers
Classwork 3
A.1 Determine the limit of a sequence
Computes and returns the following sequence for N = 100
$$a_n = \frac{7+1/(n+1)}{3-1/(n+1)^2}, \qquad n=0,1,2,\ldots,N$$
Step3: diffeq_midpoint
Taylor Patti
Uses the midpoint integration rule along with numpy vectors to produce a continuous vector which gives integral data for an array of prespecified points.
Here we use it to integrate sin from 0 to pi.
Step4: Observe the close adherance to the actual value of this cannonical value.
We can also call it at a different value of x. Let's look at the value of this integral from 0 to pi over 2. Again, the result will have strikingly close adherance to the analytical value of this integral. | Python Code:
n = 30
plt.plot([x for x in range(n)],p3.pi_sequence(n, p3.fa),'g.')
plt.show()
plt.plot([x for x in range(n)],p3.pi_sequence(n, p3.fb) ** .5 ,'b.')
plt.show()
plt.plot([x for x in range(n)],p3.pi_sequence(n, p3.fc) ** .25 ,'y.')
plt.show()
plt.plot([x for x in range(n)],p3.pi_sequence(n, p3.fd),'r.')
plt.show()
plt.plot([x for x in range(n)],p3.pi_sequence(n, p3.fd),'c.')
plt.show()
n = 10
plt.plot([x + 20 for x in range(n)],p3.pi_sequence(n + 20, p3.fa)[-n:],'g.')
plt.plot([x + 20 for x in range(n)],(p3.pi_sequence(n + 20, p3.fb) ** .5)[-n:] ,'b.')
plt.plot([x + 20 for x in range(n)],(p3.pi_sequence(n + 20, p3.fc) ** .25)[-n:] ,'y.')
plt.plot([x + 20 for x in range(n)],p3.pi_sequence(n + 20, p3.fd)[-n:],'r.')
plt.plot([x + 20 for x in range(n)],p3.pi_sequence(n + 20, p3.fd)[-n:],'c.')
plt.plot((20, 30), (math.pi, math.pi), 'b')
plt.show()
Explanation: Classwork 3
Michael Seaman, Chinmai Raman, Austin Ayers, Taylor Patti
Organized by Andrew Malfavon
Exercise A.2: Computing $\pi$ via sequences
Michael Seaman
The following sequences all converge to pi, although at different rates.
In order:
$$a_n = 4\sum_{k=1}^{n}\frac{(-1)^{k+1}}{2k-1}$$
$$b_n = (6\sum_{k=1}^{n}k^{-2})^{1/2} $$
$$c_n = (90\sum_{k=1}^{n}k^{-4})^{1/4} $$
$$d_n = \frac{6}{\sqrt{3}}\sum_{k=0}^{n}\frac{(-1)^{k}}{3^k(2k+1)}$$
$$e_n = 16\sum_{k=0}^{n}\frac{(-1)^{k}}{5^{2k+1}(2k+1)} - 4\sum_{k=0}^{n}\frac{(-1)^{k}}{239^{2k+1}(2k+1)}$$
End of explanation
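The p3 helper module is not shown in this notebook, so the following is only a rough standalone sketch of the first (Leibniz) sequence $a_n$ (the function name is illustrative):
import numpy as np
def leibniz_pi(n):
    # Partial sums of 4 * sum_{k=1}^{n} (-1)**(k+1) / (2k - 1)
    k = np.arange(1, n + 1)
    return np.cumsum(4.0 * (-1.0) ** (k + 1) / (2.0 * k - 1.0))
print(leibniz_pi(30)[-1])  # slowly approaches pi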
x = np.linspace(0,1,10000)
y1 = p1.v(x, 1, np.exp)[2]
y2 = p1.v(x, 0.1, np.exp)[2]
y3 = p1.v(x, 0.01, np.exp)[2]
fig = plt.figure(1)
plt.plot(x, y1, 'b')
plt.plot(x, y2, 'r')
plt.plot(x, y3, 'g')
plt.xlabel('x')
plt.ylabel('v(x)')
plt.legend(['(1 - exp(x / mu)) / (1 - exp(1 / mu))'])
plt.axis([x[0], x[-1], min(y3), max(y3)])
plt.title('Math Function')
plt.show(fig)
Explanation: Chinmai Raman
Classwork 3
5.49 Experience Overflow in a Function
Calculates an exponential function and returns the numerator, denominator and the fraction as a 3-tuple
End of explanation
p2.part_a()
p2.part_b()
p2.part_c()
p2.part_d()
p2.part_e()
p2.part_f()
Explanation: Austin Ayers
Classwork 3
A.1 Determine the limit of a sequence
Computes and returns the following sequence for N = 100
$$a_n = \frac{7+1/(n+1)}{3-1/(n+1)^2}, \qquad n=0,1,2,\ldots,N$$
End of explanation
function_call = p4.vector_midpoint(p4.np.sin, 0, p4.np.pi, 10000)
print function_call[1][-1]
Explanation: diffeq_midpoint
Taylor Patti
Uses the midpoint integration rule along with NumPy vectors to produce a vector of cumulative integral values at an array of prespecified points.
Here we use it to integrate sin from 0 to pi.
End of explanation
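The p4 module is likewise not included here, so the following is only a sketch of what such a vectorized midpoint integrator could look like (names are illustrative, not the actual implementation):
import numpy as np
def midpoint_cumulative(f, a, b, n):
    # Midpoint rule on n subintervals; returns the grid and the running integral of f from a
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    mids = (x[:-1] + x[1:]) / 2.0
    return x, np.concatenate(([0.0], np.cumsum(f(mids) * h)))
x_grid, F = midpoint_cumulative(np.sin, 0.0, np.pi, 10000)
print(F[-1])  # close to 2, the exact integral of sin over [0, pi]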
print function_call[1][5000]
Explanation: Observe the close agreement with the true value of this canonical integral.
We can also call it at a different value of x. Let's look at the value of this integral from 0 to pi over 2. Again, the result agrees closely with the analytical value of this integral.
End of explanation |
4,410 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.
73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450
Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?
<!-- TEASER_END -->
Step1: Version 1 - Sliding Window
Let's first set a variable to the number
Step2: Now let's form an array with elements representing the digits of the number.
Step3: This solution is kind of hacky and relies on specific characteristics of these Python built-in functions. Ideally, we'd build an array from the integer itself without all this type casting. But this will do for now.
Step4: Now we need to iterate over this array with some kind of sliding "window" of length $L=13$.
Step5: Example
Step6: Example
Step7: Given the frequency of 0's when applying the sliding window, the next optimization to consider is skipping over windows containing at least one zero. | Python Code:
from six.moves import map, range, reduce
Explanation: The four adjacent digits in the 1000-digit number that have the greatest product are 9 × 9 × 8 × 9 = 5832.
73167176531330624919225119674426574742355349194934
96983520312774506326239578318016984801869478851843
85861560789112949495459501737958331952853208805511
12540698747158523863050715693290963295227443043557
66896648950445244523161731856403098711121722383113
62229893423380308135336276614282806444486645238749
30358907296290491560440772390713810515859307960866
70172427121883998797908792274921901699720888093776
65727333001053367881220235421809751254540594752243
52584907711670556013604839586446706324415722155397
53697817977846174064955149290862569321978468622482
83972241375657056057490261407972968652414535100474
82166370484403199890008895243450658541227588666881
16427171479924442928230863465674813919123162824586
17866458359124566529476545682848912883142607690042
24219022671055626321111109370544217506941658960408
07198403850962455444362981230987879927244284909188
84580156166097919133875499200524063689912560717606
05886116467109405077541002256983155200055935729725
71636269561882670428252483600823257530420752963450
Find the thirteen adjacent digits in the 1000-digit number that have the greatest product. What is the value of this product?
<!-- TEASER_END -->
End of explanation
n = \
73167176531330624919225119674426574742355349194934\
96983520312774506326239578318016984801869478851843\
85861560789112949495459501737958331952853208805511\
12540698747158523863050715693290963295227443043557\
66896648950445244523161731856403098711121722383113\
62229893423380308135336276614282806444486645238749\
30358907296290491560440772390713810515859307960866\
70172427121883998797908792274921901699720888093776\
65727333001053367881220235421809751254540594752243\
52584907711670556013604839586446706324415722155397\
53697817977846174064955149290862569321978468622482\
83972241375657056057490261407972968652414535100474\
82166370484403199890008895243450658541227588666881\
16427171479924442928230863465674813919123162824586\
17866458359124566529476545682848912883142607690042\
24219022671055626321111109370544217506941658960408\
07198403850962455444362981230987879927244284909188\
84580156166097919133875499200524063689912560717606\
05886116467109405077541002256983155200055935729725\
71636269561882670428252483600823257530420752963450
n
Explanation: Version 1 - Sliding Window
Let's first set a variable to the number
End of explanation
num_to_list = lambda n: list(map(int, list(str(n))))
Explanation: Now let's form an array with elements representing the digits of the number.
End of explanation
num_to_list(n)[:10]
Explanation: This solution is kind of hacky and relies on specific characteristics of these Python built-in functions. Ideally, we'd build an array from the integer itself without all this type casting. But this will do for now.
End of explanation
window = lambda lst, n: map(list, zip(*[lst[i:len(lst)-n+1+i] for i in range(n)]))  # n staggered slices; zip stitches them into every length-n window
Explanation: Now we need to iterate over this array with some kind of sliding "window" of length $L=13$.
End of explanation
pairwise = lambda lst: window(lst, 2)
[x+y for x, y in pairwise(range(10))]
list(window(num_to_list(n), 13))[:10]
prod = lambda iterable: reduce(lambda x, y: x*y, iterable)
Explanation: Example: Partial function application and creating a list of odd numbers
End of explanation
prod(range(1, 9))
list(map(prod, window(num_to_list(n), 13)))[:20]
max(map(prod, window(num_to_list(n), 13)))
max(map(prod, window(num_to_list(n), 4)))
Explanation: Example: Compute $8!$
End of explanation
from collections import Counter
Counter(map(prod, window(num_to_list(n), 13))).most_common(10)
Explanation: Given the frequency of 0's when applying the sliding window, the next optimization to consider is skipping over windows containing at least one zero.
End of explanation |
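A sketch of that zero-skipping idea, reusing the helpers defined above:
no_zero_windows = (w for w in window(num_to_list(n), 13) if 0 not in w)
print(max(map(prod, no_zero_windows)))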
4,411 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Inspired by the work of
Step1: 2. Project 5 data set import
This cell
Step2: 3. Print a dataset summary
This cell
Step3: 4. Exploratory visualization of the dataset
This cell
Step4: 5. Get Model
As we have seen in the "intro to convolutionnal network" lesson, a nice property of a convolutional filter is that is reuses the weights gwhile slidding through the image and feature maps, the weight number is thus not dependent on the input image size. Therefore, it is possible to train a full ConvNet to classify small size images (64x64) as an image classifier (like we have done for Project 2) and output the result on one neuron.
In our case the output will be either there is a car in the image or not (tanh=1 or tanh=-1). The weights resulting from the training can then be reused on the same full ConvNet to build an output feature map from larger images. This feature map can be seen as a heatmap in which each pixel represents the output of the original trained ConvNet for a section of the input image. These pixels thus give the "car probability" for a specific location in the input image.
This cell
Step5: 6. Declare generators/load dataset
This cell
Step6: 7. Train Model and save the best weights
This cell
Step7: Testing the classifier
8. Load weights
This cell
Step8: 9. Test the classifier on random images
This cell | Python Code:
# Visualisation parameters
display_output = 1
train_verbose_style = 2 # 1 every training image, 2 once very epoch
# Training parameters
use_generator = 0
epoch_num = 30
train_model = 1
Explanation: Inspired by the work of:
https://medium.com/@tuennermann/convolutional-neural-networks-to-find-cars-43cbc4fb713
https://github.com/HTuennermann/Vehicle-Detection-and-Tracking
https://github.com/heuritech/convnets-keras
https://github.com/maxritter/SDC-Vehicle-Lane-Detection
1. Config
This cell:
* Defines configuration variables for this IPython notebook.
End of explanation
import glob
import numpy as np
from sklearn.model_selection import train_test_split
cars = glob.glob("./dataset/vehicles/*/*.png")
non_cars = glob.glob("./dataset/non-vehicles/*/*.png")
# feature list
X = []
# Append cars and non-cars image file paths to the feature list
for car in cars:
X.append(car)
for non_car in non_cars:
X.append(non_car)
X = np.array(X)
# Generate y Vector (Cars = 1, Non-Cars = -1)
y = np.concatenate([np.ones(len(cars)), np.zeros(len(non_cars))-1])
# Randomly split the file paths in a validation set and a training set
X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.1)
print("Loading done!")
Explanation: 2. Project 5 data set import
This cell:
* Creates a feature file name list of the car/non-car supplied files
* Creates a label (y) vector
* Randomly split the dataset into a train and a validation dataset
End of explanation
import matplotlib.image as mpimg
%matplotlib inline
if display_output == 1:
# Load the 1rst image to get its size
train_img = mpimg.imread(X_train[0])
# Dataset image shape
image_shape = train_img.shape
# Number of unique classes/labels there are in the dataset.
n_classes = np.unique(y_train).size
print("Number of training examples =", X_train.shape[0])
print("Number of validation examples =", X_valid.shape[0])
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
Explanation: 3. Print a dataset summary
This cell:
* Prints a summary of the dataset
End of explanation
import matplotlib.pyplot as plt
import random
def show_dataset_classes_histogram(labels_train, labels_valid):
f, ax = plt.subplots(figsize=(5, 5))
# Generate histogram and bins
hist_train, bins = np.histogram(labels_train, 2)
hist_valid, bins = np.histogram(labels_valid, 2)
# Bar width
width = 1.0 * (bins[1] - bins[0])
ax.bar([-1, 1], hist_train, width=width, label="Train")
ax.bar([-1, 1], hist_valid, width=width, label="Valid")
ax.set_xlabel('Classes')
ax.set_ylabel('Number of occurence')
ax.set_title('Histogram of the data set')
ax.legend(bbox_to_anchor=(1.01, 1), loc="upper left")
f.tight_layout()
plt.savefig("./output_images/histogram_dataset.png")
def show_sample(features, labels, preprocess=0, sample_num=1, sample_index=-1):
col_num = 2
# Create training sample + histogram plot
f, axarr = plt.subplots(sample_num, col_num, figsize=(col_num * 4, sample_num * 3))
index = sample_index - 1
for i in range(0, sample_num, 1):
if sample_index == -1:
index = random.randint(0, len(features))
else:
index = index + 1
if labels[index] == 1:
label_str = "Car"
else:
label_str = "Non-Car"
image = (mpimg.imread(features[index]) * 255).astype(np.uint8)
if preprocess == 1:
image = image_preprocessing(image)
axarr[i, 0].set_title('%s' % label_str)
axarr[i, 0].imshow(image)
hist, bins = np.histogram(image.flatten(), 256, [0, 256])
cdf = hist.cumsum()
cdf_normalized = cdf * hist.max()/ cdf.max()
axarr[i, 1].plot(cdf_normalized, color='b')
axarr[i, 1].plot(hist, color='r')
axarr[i, 1].legend(('cdf', 'histogram'), loc='upper left')
axarr[i, 0].axis('off')
# Tweak spacing to prevent clipping of title labels
f.tight_layout()
if preprocess == 1:
plt.savefig("./output_images/dataset_sample_preprocessed.png")
else:
plt.savefig("./output_images/dataset_sample.png")
if display_output == 1:
show_dataset_classes_histogram(y_train, y_valid)
show_sample(X_train, y_train, sample_num=6, sample_index=110)
Explanation: 4. Exploratory visualization of the dataset
This cell:
* Defines the show_dataset_classes_histogram function
* Defines the show_sample() function
* Shows a histogram distribution of the dataset classes
* Shows a random sample of the dataset
End of explanation
from model import get_model
from keras.layers import Flatten
# Get the "base" ConvNet Model
model = get_model()
# Flat out the last layer for training
model.add(Flatten())
# Print out model summary
if display_output == 1:
model.summary()
Explanation: 5. Get Model
As we have seen in the "intro to convolutionnal network" lesson, a nice property of a convolutional filter is that is reuses the weights gwhile slidding through the image and feature maps, the weight number is thus not dependent on the input image size. Therefore, it is possible to train a full ConvNet to classify small size images (64x64) as an image classifier (like we have done for Project 2) and output the result on one neuron.
In our case the output will be either there is a car in the image or not (tanh=1 or tanh=-1). The weights resulting from the training can then be reused on the same full ConvNet to build an output feature map from larger images. This feature map can be seen as a heatmap in which each pixel represents the output of the original trained ConvNet for a section of the input image. These pixels thus give the "car probability" for a specific location in the input image.
This cell:
* Gets an all-convolutional Keras model to train from the file model.py.
End of explanation
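model.py is not included in this notebook, so the following Keras sketch only illustrates the kind of all-convolutional classifier described above (layer counts and sizes are assumptions, not the actual architecture):
from keras.models import Sequential
from keras.layers import Conv2D, Dropout
def get_model_sketch():
    # A 64x64x3 crop shrinks to a single 1x1 tanh "car score" in [-1, 1];
    # a larger image would instead produce a heat map of scores.
    model = Sequential()
    model.add(Conv2D(16, (3, 3), strides=(2, 2), activation='relu', input_shape=(None, None, 3)))
    model.add(Conv2D(32, (3, 3), strides=(2, 2), activation='relu'))
    model.add(Conv2D(64, (3, 3), strides=(2, 2), activation='relu'))
    model.add(Dropout(0.5))
    model.add(Conv2D(1, (7, 7), activation='tanh'))
    return model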
from sklearn.utils import shuffle
def keras_generator(features, labels, batch_size=32):
num_features = len(features)
# Loop forever so the generator never terminates
while 1:
# shuffles the input sample
shuffle(features, labels)
for offset in range(0, num_features, batch_size):
# File path subset
batch_features = features[offset:offset + batch_size]
batch_labels = labels[offset:offset + batch_size]
imgs = []
for feature in batch_features:
image = (mpimg.imread(feature) * 255).astype(np.uint8)
# Image preprocessing
# none..
imgs.append(image)
# Convert images to numpy arrays
X = np.array(imgs, dtype=np.uint8)
y = np.array(batch_labels)
yield shuffle(X, y)
def loader(features, labels):
for iterable in keras_generator(features, labels, batch_size=len(features)):
return iterable
# Prepare generator functions /dataset
if use_generator == 1:
# Use the generator function
train_generator = keras_generator(X_train, y_train, batch_size=32)
validation_generator = keras_generator(X_valid, y_valid, batch_size=32)
else:
# Load all the preprocessed images in memory
train_set = loader(X_train, y_train)
validation_set = loader(X_valid, y_valid)
Explanation: 6. Declare generators/load dataset
This cell:
* Declares a Keras generator (keras_generator()) to enable training on low end hardware
* Declares a loader (loader()) to load all the dataset in memory (faster training, higher end hardware required)
End of explanation
from keras.callbacks import ModelCheckpoint
from keras.optimizers import Adam
def plot_train_results(history_object):
f, ax = plt.subplots(figsize=(10, 5))
ax.plot(history_object.history['acc'])
ax.plot(history_object.history['val_acc'])
ax.set_ylabel('Model accuracy')
ax.set_xlabel('Epoch')
ax.set_title('Model accuracy vs epochs')
plt.legend(['training accuracy', 'validation accuracy'], bbox_to_anchor=(1.01, 1.0))
f.tight_layout()
plt.savefig("./output_images/accuracy_over_epochs.png")
if train_model == 1:
# Compile the model using an Adam optimizer
model.compile(optimizer=Adam(), loss='mse', metrics=['accuracy'])
# saves the model weights after each epoch if the validation loss decreased
filepath = './weights/best-weights.hdf5'
checkpointer = ModelCheckpoint(filepath=filepath,
monitor='val_loss',
verbose=1,
save_best_only=True,
mode='min')
# Train the model, with or without a generator
if use_generator == 1:
history_object = model.fit_generator(train_generator,
steps_per_epoch=len(X_train),
epochs=epoch_num,
verbose=train_verbose_style,
callbacks=[checkpointer],
validation_data=validation_generator,
validation_steps=len(X_valid))
else:
history_object = model.fit(train_set[0],
train_set[1],
batch_size=64,
epochs=epoch_num,
verbose=train_verbose_style,
callbacks=[checkpointer],
validation_data=(validation_set[0], validation_set[1]))
if display_output == 1:
plot_train_results(history_object)
Explanation: 7. Train Model and save the best weights
This cell:
* Defines a training visualization function (plot_train_results())
* Compiles the Keras model. The Adam optimizer is chosen and Mean Squared Error is used as the loss function
* A Keras checkpointer is declared and configured to save the weights if the validation loss becomes lower than the lowest value to date. The checkpointer is called via the callbacks parameter and is executed after each epoch.
* Trains the model either with a generator or not.
* Outputs a figure of the Training/Validation accuracy vs the epochs.
End of explanation
# Load the weight
model.load_weights('./weights/best-weights.hdf5')
print("Weights loaded!")
Explanation: Testing the classifier
8. Load weights
This cell:
* Loads the best weight saved by the Keras checkpointer (code cell #7 above).
End of explanation
if display_output == 1:
import matplotlib.pyplot as plt
%matplotlib inline
import time
import numpy as np
sample_num = 12
col_num = 4
row_num = int(sample_num/col_num)
# Create training sample + histogram plot
f, axarr = plt.subplots(row_num, col_num, figsize=(col_num * 4, row_num * 3))
for i in range(sample_num):
# Pick a random image from the validation set
index = np.random.randint(validation_set[0].shape[0])
# Add one dimension to the image to fit the CNN input shape...
sample = np.reshape(validation_set[0][index], (1, 64,64,3))
# Record starting time
start_time = time.time()
# Infer the label
inference = model.predict(sample, batch_size=64, verbose=0)
# Print time difference...
print("Image %2d inference time : %.4f s" % (i, time.time() - start_time))
# Extract inference value
inference = inference[0][0]
# Show the image
color_str = 'green'
if inference >= 0.0:
title_str = "Car: {:4.2f}" .format(inference)
if validation_set[1][index] != 1:
color_str = 'red'
else:
title_str = "No Car: {:4.2f}" .format(inference)
if validation_set[1][index] != -1:
color_str = 'red'
axarr[int(i/col_num), i % col_num].imshow(validation_set[0][index])
axarr[int(i/col_num), i % col_num].set_title(title_str, color = color_str)
axarr[int(i/col_num), i % col_num].axis('off')
f.tight_layout()
plt.savefig("./output_images/inference.png")
Explanation: 9. Test the classifier on random images
This cell:
* Picks random images from the dataset
* Infers the label using the trained model
* Measures the inference time
* Shows a figure of the images and predicted label, title color changes according to correctness of the inference (green = correct, red = incorrect).
End of explanation |
4,412 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 3
Imports
Step2: Using interact for animation with data
A soliton is a constant velocity wave that maintains its shape as it propagates. They arise from non-linear wave equations, such has the Korteweg–de Vries equation, which has the following analytical solution
Step3: To create an animation of a soliton propagating in time, we are going to precompute the soliton data and store it in a 2d array. To set this up, we create the following variables and arrays
Step4: Compute a 2d NumPy array called phi
Step5: Write a plot_soliton_data(i) function that plots the soliton wave $\phi(x, t[i])$. Customize your plot to make it effective and beautiful.
Step6: Use interact to animate the plot_soliton_data function versus time. | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
Explanation: Interact Exercise 3
Imports
End of explanation
def soliton(x, t, c, a):
Return phi(x, t) for a soliton wave with constants c and a.
return 0.5*c*(1/(np.cosh((c**(1/2)/2)*(x-c*t-a))**2))
assert np.allclose(soliton(np.array([0]),0.0,1.0,0.0), np.array([0.5]))
Explanation: Using interact for animation with data
A soliton is a constant velocity wave that maintains its shape as it propagates. They arise from non-linear wave equations, such has the Korteweg–de Vries equation, which has the following analytical solution:
$$
\phi(x,t) = \frac{1}{2} c \mathrm{sech}^2 \left[ \frac{\sqrt{c}}{2} \left(x - ct - a \right) \right]
$$
The constant c is the velocity and the constant a is the initial location of the soliton.
Define soliton(x, t, c, a) function that computes the value of the soliton wave for the given arguments. Your function should work when the postion x or t are NumPy arrays, in which case it should return a NumPy array itself.
End of explanation
tmin = 0.0
tmax = 10.0
tpoints = 100
t = np.linspace(tmin, tmax, tpoints)
xmin = 0.0
xmax = 10.0
xpoints = 200
x = np.linspace(xmin, xmax, xpoints)
c = 1.0
a = 0.0
Explanation: To create an animation of a soliton propagating in time, we are going to precompute the soliton data and store it in a 2d array. To set this up, we create the following variables and arrays:
End of explanation
assert phi.shape==(xpoints, tpoints)
assert phi.ndim==2
assert phi.dtype==np.dtype(float)
assert phi[0,0]==soliton(x[0],t[0],c,a)
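The original exercise leaves the construction of phi to the student; a minimal construction that satisfies the asserts above (and would sit in a cell before them) is:
phi = np.zeros((xpoints, tpoints), dtype=float)
for j in range(tpoints):
    phi[:, j] = soliton(x, t[j], c, a)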
Explanation: Compute a 2d NumPy array called phi:
It should have a dtype of float.
It should have a shape of (xpoints, tpoints).
phi[i,j] should contain the value $\phi(x[i],t[j])$.
End of explanation
def plot_soliton_data(i=0):
    plt.plot(x, phi[:, i])  # minimal body; the original exercise leaves labelling and styling to the student
plot_soliton_data(0)
assert True # leave this for grading the plot_soliton_data function
Explanation: Write a plot_soliton_data(i) function that plots the soliton wave $\phi(x, t[i])$. Customize your plot to make it effective and beautiful.
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
assert True # leave this for grading the interact with plot_soliton_data cell
Explanation: Use interact to animate the plot_soliton_data function versus time.
End of explanation |
4,413 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Andrea Manzini, Lorenzo Lazzara, Martin Josifoski, Mazen Fouad A-wali Mahdi
1. Introduction and General Information
In this project, we analyze the LastFM dataset released in the framework of HetRec 2011 (http
Step1: Data Exploration
In this section we analyze our dataset that has the following structure
Step2: Tag analysis
Step3: It can be observed that there are no artists with zero tags and most of the artists have more than 5 tags.
Step4: From the above plot we can observe that for each artist we have a certain number of different tags but since LastFM tags are freeform a part of them can have the same meaning while written in different form, e.g., 90 -> 90' -> 90s or can have meaning that is identifiable only by the user itself.
Listening count analysis
Step5: We can see from the plot above that most of the users have a maximum listening count of less than 50,000, while some of them are significantly higher. Observing such high listening counts for 1 artist seems highly unprobable and such activity looks suspicious, possibly motivated by commercial purposes, hence excluded.
Step6: A function that prunes the listening counts.
Step7: The plot shows that most of the users have around 50 artist connections. We decided to remove the users who listened to less than 10 different artists. (The motivation to keep all the users with artist connections above 10 will be shown later, since the distribution of artists changes slightly after a small reduction in the artist-artist similarity network construction.)
User-User network
In this section we analyze the lastFm social network.
Step8: The User social network graph is composed of 29 disconnected subgraphs. Fortunately most of the user (~ 97.5 %) are inside one subgraph, so we can discard all the others in order to have a connected graph.
Step9: As we can observe the degree distribution is mostly unchanged, we only have a little decrease of the number of user with very small connections since the subgraphs we deleted were composed of few users. The function below updates the dataset by reducing the dataset to its largest connected component.
Step10: Weight Normalization
We need to build a similarity graph for collaborative filtering. We need to understand which artists users listen to and not necessarely how much. Even though taking into account the absolute value of listening counts could have been a measure of how much we can trust a user preferences, we decided to normalize separately for each user in order to keep the weights simple and clear. In particular we model the normalized weight as the ratio the listening count and the biggest listening count for each user. Thus, all the weights of a user are less or equal to 1.
Step11: 1. Friendship inference from music taste
These are the steps we followed to build the user-user comparison network
Step12: Checking symmetry of the friendship matrix.
Step13: Creating the friendship graph.
Step14: Creating the artist-artist network
Step15: Checking symmetry of the friendship matrix.
Step16: Creating the artist-artist graph.
Step17: Creating the user-artist network
Step18: Inferring the New User-User Network
According to the artist similarity network and the user-artist listening counts, friendship between users is infered as describe in the beggining of the section.
Step19: For comparison a random network that follows the degree distribution of the artist similarity network is generated.
Step20: We check whether any stubs left unconnected (we shouldn't have any if the configuration works correctly).
Step21: Analogously we infer user friendships, based on the random network.
Step22: At this point we have a friendship network that is generated based on the similarity between artists, and another one generated based on a random network. We want to test whether users who tend to listen to similar artists and hence have similar taste of music are friends in the social network, compared to the scenario that their music taste is irrelevant for infering friendships between users.
Step23: Based on the results above, we can deduce that existence of friendship does not indicate same taste of music, since the inference based on the similarity of favourite artists between users had comparable, even slightly inferior results to the inferences based on a random network (when the taste in music is not taken into consideration at all).
2. Recommender System
In order to evaluate the artists network and users network information about users' preferencies, we tried to exploit them in a reccomender system, to see if they could give some improvements. This would allow us also to get another point of view over the user-user network nature. The data was splitted in 6 folds and cross validation was done to evaluate prediction scores. The code we used is contained in recommender.py source file.
Smooth Matrix Factorization
Introduction | Python Code:
%load_ext autoreload
%autoreload 1
import numpy as np
import pickle
import matplotlib.pyplot as plt
import scipy as sp
import pandas as pd
import os.path
import networkx as nx
from scipy.sparse import csr_matrix
from Dataset import Dataset
from plots import *
import os
from helpers import *
%matplotlib inline
Explanation: Andrea Manzini, Lorenzo Lazzara, Martin Josifoski, Mazen Fouad A-wali Mahdi
1. Introduction and General Information
In this project, we analyze the LastFM dataset released in the framework of HetRec 2011 (http://ir.ii.uam.es/hetrec2011). This dataset contains an underlying social network, artists tags given by users, and user-artist listening count from a set of 1892 users and 17632 artists. Different type of files are given, describing the interactions between user-user, user-artist and tags. Our aim is to test whether friendship implies similar taste in music, in other words whether friends do tend to listen to the same music.
We approach the problem in two different ways:
- We construst a network based on the similarity between artists and from it we try to infer user-user connections.
- We build a recommender system based on only the listening counts and then try to improve it using the underlying friendship network or the artist-artist similarity network.
End of explanation
# Loading the data
data = Dataset()
data_folder = os.path.join('.','data')
ratings_path = os.path.join(data_folder,'user_artists.dat')
ratings = pd.read_csv(ratings_path, sep='\t', header=0, skipinitialspace=True)
data.artists.head()
Explanation: Data Exploration
In this section we analyze our dataset, which has the following structure: for each user there is a list of his/her favorite artists, and each listened artist has a record denoting the listening count; for each artist there is a list of genres, i.e. a list of tags given by users that have listened to that artist. In addition, we have for each user a list of his/her friends.
The goal is to show the initial insights and the motivation, but the final processing is contained in the Dataset module, whose functions are called in each section to produce the dataset that is used throughout the notebook.
End of explanation
group = data.tags_assign[['artistID','tagID']].groupby(['artistID'])
group = group.size()
group.sort_values(ascending=False,inplace=True)
plot_tags_statistics(group)
small = group.loc[group<5]
big = group.loc[group>5]
plot_separate_small_artist(small, big)
Explanation: Tag analysis
End of explanation
group = data.tags_assign[['artistID','tagID']].groupby(['artistID'])
group = group.nunique().tagID.sort_values(ascending=False)
plot_unique_tags(group)
Explanation: It can be observed that there are no artists with zero tags and most of the artists have more than 5 tags.
End of explanation
max_user_weight = data.ratings.groupby('userID').max()
plot_listenig_count_frequency(max_user_weight)
max_user_weight = data.ratings.groupby('userID').max()
fig, ax = plt.subplots(nrows=1, ncols=2,figsize=(12, 6))
plt.subplot(1, 2, 1)
max_user_weight.boxplot(column='weight')
plt.subplot(1,2,2)
max_user_weight.loc[max_user_weight['weight']<=50000].boxplot(column='weight');
Explanation: From the above plot we can observe that for each artist we have a certain number of different tags, but since LastFM tags are freeform, some of them can have the same meaning while being written in different forms (e.g., 90 -> 90' -> 90s) or can have a meaning that is identifiable only by the user who created them.
Listening count analysis
End of explanation
print("The number of users with less than a maximum of 50,000 listening count:",\
len(max_user_weight.loc[max_user_weight['weight']<=50000]))
percentage = len(max_user_weight.loc[max_user_weight['weight']<=50000])/ len(max_user_weight)*100
print("Percentage of total users:", \
percentage ,"%")
Explanation: We can see from the plot above that most of the users have a maximum listening count of less than 50,000, while for some of them it is significantly higher. Observing such a high listening count for a single artist seems highly improbable and such activity looks suspicious, possibly motivated by commercial purposes, hence these users are excluded.
End of explanation
data.prune_ratings()
number_user_artist =ratings.groupby('userID').nunique().artistID.to_frame()
plot_artist_per_user(number_user_artist)
Explanation: A function that prunes the listening counts.
End of explanation
#G is the total graph with all users ,except the one already pruned, and multiple component
G = nx.Graph(data.build_friend_friend())
print('My network has {} nodes.'.format(len(G.nodes())))
print('My network has {} edges.'.format(G.size()))
nx.is_connected(G)
connected_components = nx.connected_components(G)
for i, subgraph in enumerate(sorted(connected_components, key = len, reverse=True)):
print("Subgraph {} has {} nodes" .format(i, len(subgraph)))
Explanation: The plot shows that most of the users have around 50 artist connections. We decided to remove the users who listened to less than 10 different artists. (The motivation to keep all the users with artist connections above 10 will be shown later, since the distribution of artists changes slightly after a small reduction in the artist-artist similarity network construction.)
User-User network
In this section we analyze the lastFm social network.
End of explanation
degree_distribution(G.degree())
graphs = list(nx.connected_component_subgraphs(G))
#giant component is the graph which contains most of the users
giant_component = graphs[0]
degree_distribution(giant_component.degree())
print('My network has {} nodes.'.format(len(giant_component.nodes())))
print('My network has {} edges.'.format(giant_component.size()))
Explanation: The user social network graph is composed of 29 disconnected subgraphs. Fortunately, most of the users (~97.5%) are inside one subgraph, so we can discard all the others in order to have a connected graph.
End of explanation
data.prune_friends()
Explanation: As we can observe, the degree distribution is mostly unchanged; there is only a small decrease in the number of users with very few connections, since the subgraphs we deleted were composed of few users. The function below updates the dataset by reducing it to its largest connected component.
End of explanation
data.normalize_weights()
# Plot distribution of weights for a random user
user_weight_distribution(data.ratings, seed=1)
Explanation: Weight Normalization
We need to build a similarity graph for collaborative filtering. We need to understand which artists users listen to and not necessarily how much. Even though taking into account the absolute value of the listening counts could have been a measure of how much we can trust a user's preferences, we decided to normalize separately for each user in order to keep the weights simple and clear. In particular, we model the normalized weight as the ratio of the listening count to the largest listening count for each user. Thus, all the weights of a user are less than or equal to 1.
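A minimal sketch of this per-user normalization, assuming a ratings DataFrame with userID and weight columns (the actual implementation lives in the Dataset module):
# every user's largest listening count maps to 1
ratings['weight'] = ratings['weight'] / ratings.groupby('userID')['weight'].transform('max')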
End of explanation
data = Dataset()
data.prune_ratings()
data.prune_friends()
data.normalize_weights()
friendship = data.build_friend_friend()
Explanation: 1. Friendship inference from music taste
These are the steps we followed to build the user-user comparison network:
* Construct an artist similarity graph and connect the users to artists using the normalized listening count.
For each user consider all the possible direct paths $u_i \rightarrow a_k \rightarrow u_j$ and 3-hop paths $u_i \rightarrow a_k \rightarrow a_l \rightarrow u_j$, where $u_i \rightarrow a_k$ and $a_l \rightarrow u_j$ are weighted with the respective normalized weights (call them $C_{ik}$ and $C_{jl}$), while the weight between the two artists represents the strength of their similarity taken from the constructed similarity graph (call it $S_{kl}$).
Add an edge between each pair $u_i$ and $u_j$ with a weight equal to: $F_{ij} = \sum\limits_{k,l \in \Omega} S_{kl}\min\{C_{ik}, C_{jl}\} + \sum\limits_{k \in \Theta}\min\{C_{ik}, C_{jk}\}$ where $\Omega$ is the set of pairs of similar artists of which one is connected to $u_i$ and the other to $u_j$, and $\Theta$ is the set of all artists that are connected to both users.
To inspect the connection between the taste in music (listening counts) and the friendships in the underlying network, a random network following the degree distribution of the artist similarity network is constructed, and the user-user network generated as previously described from this random network is compared to the output generated from the meaningful artist similarity network.
This article describes the reason why LastFM enforces their tagging policy and why the tags generated in that way are a meaningful source of information.
Since the data exploration of the tags dataset showed that all of the artists have been tagged we were motivated to use the tags assigned to each artist to infer similarity between them. Before acting, it was noticed that their api offers a function that retrieves similar artists for a given mbid unique code that identifies an artist. All the details are contained in the api notebook.
Creating the friendship network
End of explanation
np.nonzero(friendship-friendship.transpose())
friendship.shape
Explanation: Checking symmetry of the friendship matrix.
End of explanation
friendship_graph = nx.Graph(friendship)
Explanation: Creating the friendship graph.
End of explanation
artist_artist_matrix = data.build_art_art()
Explanation: Creating the artist-artist network
End of explanation
np.nonzero(artist_artist_matrix-artist_artist_matrix.transpose())
artist_artist_matrix.shape
Explanation: Checking symmetry of the friendship matrix.
End of explanation
artist_artist_graph = nx.Graph(artist_artist_matrix)
Explanation: Creating the artist-artist graph.
End of explanation
user_artist_matrix = pickle.load(open('data/art_user.pickle', 'rb'))
user_artist_matrix = csr_matrix.todense(user_artist_matrix)
user_artist_matrix = np.array(user_artist_matrix.T)
user_artist_matrix.shape
Explanation: Creating the user-artist network
End of explanation
generated_user_user_matrix = generate_user_user_matrix_from_artist_artist_matrix(user_artist_matrix, artist_artist_matrix)
generated_user_user_graph = nx.Graph(generated_user_user_matrix)
Explanation: Inferring the New User-User Network
According to the artist similarity network and the user-artist listening counts, friendship between users is inferred as described in the beginning of the section.
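For reference, a direct (unoptimized) reimplementation of the $F_{ij}$ formula could look like the sketch below; it is only an illustration and is not the helper called above:
import numpy as np

def infer_user_user(C, S):
    # C: (n_users, n_artists) normalized listening weights; S: (n_artists, n_artists) artist similarity
    # F_ij = sum_{k,l} S_kl * min(C_ik, C_jl) + sum_k min(C_ik, C_jk); zero entries take care of the Omega/Theta restrictions
    n_users = C.shape[0]
    F = np.zeros((n_users, n_users))
    for i in range(n_users):
        for j in range(i + 1, n_users):
            pair_min = np.minimum(C[i][:, None], C[j][None, :])  # min(C_ik, C_jl) for every artist pair (k, l)
            F[i, j] = F[j, i] = (S * pair_min).sum() + np.minimum(C[i], C[j]).sum()
    return F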
End of explanation
artist_artist_random_graph, stub_pairs = greedy_configuration(artist_artist_graph)
artist_artist_random_matrix = np.array(csr_matrix.todense(nx.adjacency_matrix(artist_artist_random_graph)))
Explanation: For comparison a random network that follows the degree distribution of the artist similarity network is generated.
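As a hedged alternative to the project's own greedy_configuration helper, a degree-preserving random graph can also be drawn with networkx's built-in configuration model:
degree_sequence = [d for _, d in artist_artist_graph.degree()]
random_multigraph = nx.configuration_model(degree_sequence, seed=42)
random_simple = nx.Graph(random_multigraph)  # collapse parallel edges
random_simple.remove_edges_from([(u, v) for u, v in random_simple.edges() if u == v])  # drop self-loops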
End of explanation
sum(stub_pairs.values()) # should be equal to zero if configuration was successful
Explanation: We check whether any stubs were left unconnected (we shouldn't have any if the configuration works correctly).
End of explanation
generated_user_user_matrix_from_random_aa = generate_user_user_matrix_from_artist_artist_matrix(user_artist_matrix, artist_artist_random_matrix)
generated_user_user_graph_from_random_aa = nx.Graph(generated_user_user_matrix_from_random_aa)
Explanation: Analogously we infer user friendships, based on the random network.
End of explanation
print("The number of friendships in the underlying social network is %d." % (int(friendship.sum()/2)))
plt.hist(list(dict(friendship_graph.degree()).values()), log=True);
plt.title('Degree distribution of true user friendships');
plt.xlabel('Degrees');
plt.ylabel('Frequency');
threshold = 0.4
reduced_matrix = np.zeros(generated_user_user_matrix.shape)
reduced_matrix[generated_user_user_matrix > threshold] = 1
reduced_matrix[generated_user_user_matrix <= threshold] = 0
reduced_graph = nx.Graph(reduced_matrix)
compare_networks(friendship_graph, reduced_graph)
print("The number of infered friendships is %d." % int((generated_user_user_matrix.flatten()[generated_user_user_matrix.flatten() > threshold].shape)[0]/2))
plt.hist(list(dict(reduced_graph.degree()).values()), log=True);
plt.title('Degree distribution of predicted user friendships');
plt.xlabel('Degrees');
plt.ylabel('Frequency');
threshold = 0.3
reduced_matrix[generated_user_user_matrix_from_random_aa > threshold] = 1
reduced_matrix[generated_user_user_matrix_from_random_aa <= threshold] = 0
reduced_graph = nx.Graph(reduced_matrix)
compare_networks(friendship_graph, reduced_graph)
print("The number of infered friendships is %d." % int((generated_user_user_matrix_from_random_aa.flatten()[generated_user_user_matrix_from_random_aa.flatten() > threshold].shape)[0]/2))
plt.hist(list(dict(reduced_graph.degree()).values()), log=True);
plt.title('Degree distribution of predicted user friendships based on the random network');
plt.xlabel('Degrees');
plt.ylabel('Frequency');
Explanation: At this point we have a friendship network that is generated based on the similarity between artists, and another one generated based on a random network. We want to test whether users who tend to listen to similar artists, and hence have a similar taste in music, are friends in the social network, compared to the scenario in which their music taste is irrelevant for inferring friendships between users.
End of explanation
plot_rmse()
Explanation: Based on the results above, we can deduce that the existence of friendship does not indicate the same taste in music, since the inference based on the similarity of favourite artists between users gave comparable, even slightly inferior, results to the inference based on a random network (where taste in music is not taken into consideration at all).
2. Recommender System
In order to evaluate the information that the artist network and the user network carry about users' preferences, we tried to exploit them in a recommender system, to see if they could give some improvement. This would also allow us to get another point of view on the nature of the user-user network. The data was split into 6 folds and cross-validation was done to evaluate prediction scores. The code we used is contained in the recommender.py source file.
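For the classic SVD baseline (scenario 3 in the results below), the evaluation loop can be reproduced with Surprise's own utilities; this is only a sketch, and the column names and rating scale are assumptions:
from surprise import Dataset as SurpriseDataset, Reader, SVD
from surprise.model_selection import cross_validate

reader = Reader(rating_scale=(0, 1))  # weights were normalized to [0, 1]
surprise_data = SurpriseDataset.load_from_df(data.ratings[['userID', 'artistID', 'weight']], reader)
cross_validate(SVD(), surprise_data, measures=['RMSE'], cv=6, verbose=True)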
Smooth Matrix Factorization
Introduction:
We tried to implement an algorithm that uses the implicit ratings extracted from the user/artist network structure in an efficient way. We started from one of the most famous recommender algorithms, which became popular during the Netflix prize. It is usually referred to as SVD and it is a matrix factorization extended with user and item baseline learning [Koren, Factorization meets the neighborhood: a multifaceted collaborative filtering model, 2008]. We substitute the traditional regularization term (the $L_2$ norm of all the parameters) with a term that penalizes solutions that are not smooth over the given graph. The reason for this is that we expect similar artists to receive similar ratings from the same user and, on the other hand, we expect similar users to give similar ratings to similar artists. Thus the predicted ratings should be smooth over both graphs.
We will derive the SGD update of the algorithm only in the case of a smoothing over the user-user network. In order to get the formulas for the artist-artist network we just need to swap users with artists.
Notation:
$F$: loss function
$r_{ui}$: real ratings(weight) given by user $u$ to artist $i$
$\hat{r_{ui}}$: estimated ratings(weight) given by user $u$ to artist $i$
$b_u$: baseline of the user $u$
$b_i$: baseline of the artist $i$
$\mu$: global mean of ratings
$w_u$: vector of features associated with user $u$
$z_i$: vector of features associated with artist $i$
$L$ : laplacian matrix of the user network
$|U|$: total number of users in the network
Formulas:
$$ \hat{r}_{ui} = \mu + b_u + b_i + w_u^T z_i $$
$$ F_i = \|R_{:i}-\hat{R}_{:i}\|^2 + \alpha \hat{R}_{:i}^T L \hat{R}_{:i}$$
$$ F_{ui} = (r_{ui} - \hat{r_{ui}})^2 + \alpha \sum_{k=1}^{|U|} L_{uk} \hat{r_{ui}} \hat{r_{ki}} $$
$$ \frac{\partial{F_{ui}}}{\partial{b_u}} = -2(r_{ui} - \hat{r_{ui}}) + \alpha \sum_{k=1}^{|U|} L_{uk} (\hat{r_{ki}} + \hat{r_{ui}}) $$
$$ \frac{\partial{F_{ui}}}{\partial{b_i}} = -2(r_{ui} - \hat{r_{ui}}) + \alpha \sum_{k=1}^{|U|} L_{uk} (\hat{r_{ki}} + \hat{r_{ui}}) $$
$$ \frac{\partial{F_{ui}}}{\partial{w_u}} = -2(r_{ui} - \hat{r_{ui}})z_i + \alpha z_i \sum_{k=1}^{|U|} L_{uk} (\hat{r_{ki}} + \hat{r_{ui}}) $$
$$ \frac{\partial{F_{ui}}}{\partial{z_i}} = -2(r_{ui} - \hat{r_{ui}})w_u + \alpha \sum_{k=1}^{|U|} L_{uk} (w_u \hat{r_{ki}} + w_k \hat{r_{ui}}) $$
From this point the SGD update rule can be easily obtained by following the gradient direction. A constant $\gamma$, called the learning rate, should also be added in front of the gradient.
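As an illustration, one per-rating update step following the gradients above could be sketched as below; the variable names are assumptions and this is not the Cython code from recommender.py:
import numpy as np

def sgd_step(u, i, r_ui, mu, b_u, b_i, W, Z, L, alpha, gamma):
    # predictions of every user k for item i: r_hat_ki = mu + b_u[k] + b_i[i] + w_k^T z_i
    r_hat_k = mu + b_u + b_i[i] + W @ Z[i]
    pred = r_hat_k[u]                            # \hat{r}_{ui}
    err = r_ui - pred
    smooth = alpha * (L[u] @ (r_hat_k + pred))   # shared smoothness term for b_u, b_i and w_u
    grad_b = -2.0 * err + smooth
    grad_w = (-2.0 * err + smooth) * Z[i]
    grad_z = -2.0 * err * W[u] + alpha * (W[u] * (L[u] @ r_hat_k) + pred * (L[u] @ W))
    b_u[u] -= gamma * grad_b
    b_i[i] -= gamma * grad_b
    W[u] -= gamma * grad_w
    Z[i] -= gamma * grad_z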
Implementation
The algorithm was implemented using the Surprise library [https://github.com/NicolasHug/Surprise] as a framework, which is specifically designed for recommendations. We link here a GitHub fork that was customized to include the new algorithm [https://github.com/manzo94/Surprise/tree/laplacian_smooth]. We also included it in the submission folder. This customized library must be installed in order to run the code. We expected the smooth matrix factorization to be very computationally heavy, so we decided to implement it using the Cython compiler [https://github.com/cython/cython]. This compiler, together with a numpy wrapper and carefully designed code, can achieve high performance. In particular we tried to follow some rules to get further improvements:
* For the variables we use Cython fixed types instead of Python dynamic types.
* Indexing of numpy arrays is done without slicing, inside for loops over the single elements.
* We use memoization when possible, avoiding to compute the same quantities more than once.
Results
We present the results of three different scenarios:
1) Smooth MF over artist-artist network
2) Smooth MF over social network
3) Classic SVD with regularizer
In particular we show plots of the RMSE score obtained with 6-fold cross-validation over the regularization coefficient ($\alpha$ in the smooth MF) and the best score obtained for each method. As a reference we also report the score obtained using the global mean as a constant estimator (all predictions equal to the global mean).
End of explanation |
4,414 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The famous Monty Hall brain teaser
Step2: setting up a game
There are many ways to do this, but to keep it simple and human comprehensible I'm going to do it one game at a time.
First up, a helper function which takes the door number guessed and the door opened up the host to reveal a goat, and returns the switched door
Step4: Now the actual monty hall function - it takes in a guess and whether you want to switch your guess, and returns True or False depending on whether you win
Step5: Now to run through a bunch of monty hall games
Step6: Not switching doors wins a third of the time, which makes intuitive sense, since we are choosing one door out of three.
Step7: This is the suprising result, since switching our guess increases the win rate to two third! To put it more graphically
Step9: So our chances of winning essentially double if we switch our guess once a goat door has been opened.
A good monty hall infographic
Step11: Then I removed the revealing the goat door code from the original monty hall function above
Step12: Now to run some sims | Python Code:
import random
import numpy as np
# for plots, cause visuals
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
Explanation: The famous Monty Hall brain teaser:
Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, "Do you want to pick door No. 2?" Is it to your advantage to switch your choice?
There is a really fun discussion over at Marilyn vos Savant's site.
Ok, now to setup the problem, along with some kind of visuals and what not.
End of explanation
def switch_door(guess, goat_door_opened):
    """takes in the guessed door and the goat door opened
    and returns the switched door number"""
doors = [0,1,2]
doors.remove(goat_door_opened)
doors.remove(guess)
return doors[0]
Explanation: setting up a game
There are many ways to do this, but to keep it simple and human-comprehensible I'm going to do it one game at a time.
First up, a helper function which takes the door number guessed and the door opened by the host to reveal a goat, and returns the switched door:
End of explanation
def monty_hall(guess=0, switch_guess=False, open_goat_door=True):
    """Sets up 3 doors 0-2, one of which has a prize, and 2 have goats.
    Takes in the door number guessed by the player and whether he/she switched doors
    after one goat door is revealed."""
doors = [door for door in range(3)]
np.random.shuffle(doors)
prize_door = doors.pop()
goat_door_opened = doors[0]
if goat_door_opened == guess:
goat_door_opened = doors[1]
if switch_guess:
return switch_door(guess, goat_door_opened) == prize_door
else:
return guess == prize_door
Explanation: Now the actual monty hall function - it takes in a guess and whether you want to switch your guess, and returns True or False depending on whether you win
End of explanation
no_switch = np.mean([monty_hall(random.randint(0,2), False) for _ in range(100000)])
no_switch
Explanation: Now to run through a bunch of monty hall games:
End of explanation
yes_switch = np.mean([monty_hall(random.randint(0,2), True) for _ in range(100000)])
yes_switch
Explanation: Not switching doors wins a third of the time, which makes intuitive sense, since we are choosing one door out of three.
End of explanation
plt.pie([yes_switch, no_switch], labels=["Switching win %", "Not switching win %"],
autopct='%1.1f%%', explode=(0, 0.05));
Explanation: This is the surprising result, since switching our guess increases the win rate to two thirds! To put it more graphically:
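As a cross-check on the Monte Carlo estimate, the same two probabilities follow from exhaustively enumerating every prize-door/guess combination (a small sketch):
wins_stay, wins_switch, total = 0, 0, 0
for prize in range(3):
    for guess in range(3):
        total += 1
        wins_stay += (guess == prize)    # staying wins only if the first guess was right
        wins_switch += (guess != prize)  # after a goat door opens, switching wins exactly when it was wrong
print(wins_stay / total, wins_switch / total)  # 1/3 vs 2/3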
End of explanation
def switch_door_no_revel(guess):
    """takes in the guessed door
    and returns the switched door number"""
doors = [0,1,2]
doors.remove(guess)
np.random.shuffle(doors)
return doors[0]
Explanation: So our chances of winning essentially double if we switch our guess once a goat door has been opened.
A good monty hall infographic:
<img src="images/monty-hall.png" width="75%">.
The no-reveal Monty
So what if Monty never opens a goat door, and just gives us a chance to switch the guessed door? Does the winning % still change?
So first we change the switch door function to remove the reveal option:
End of explanation
def monty_hall_no_reveal(guess=0, switch_guess=False):
    """Sets up 3 doors 0-2, one of which has a prize, and 2 have goats.
    Takes in the door number guessed by the player and whether he/she switched doors."""
doors = [door for door in range(3)]
np.random.shuffle(doors)
prize_door = doors.pop()
if switch_guess:
return switch_door_no_revel(guess) == prize_door
else:
return guess == prize_door
Explanation: Then I removed the goat-door-revealing code from the original monty hall function above:
End of explanation
no_switch_no_reveal = np.mean([monty_hall_no_reveal(random.randint(0,2), False) for _ in range(100000)])
yes_switch_no_reveal = np.mean([monty_hall_no_reveal(random.randint(0,2), True) for _ in range(100000)])
plt.bar([0,1], [yes_switch_no_reveal, no_switch_no_reveal], tick_label=["Switched Guess","Didn't Switch"],
color=["blue","red"], alpha=0.7);
Explanation: Now to run some sims:
End of explanation |
4,415 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generate new columns with average block info
Take average values over two time horizons
6 blocks (~1 min) -> represents the current state (short frequency view)
60 blocks (~10 min) -> represents the long term view
Step1: Merge data with new columns
Step2: Create a label
What are we predicting?
A hindsight estimate of what the price should be, given knowledge about previous blocks
Develop a summary statistic about the distribution of prices over previous blocks
Our target
Step3: There are no zero values, but many values close to zero
Step4: There are only on average 96 samples in each block
Step5: So we create groupings of 6 blocks to increase sample size
Compute the summary statistic mu
given the distribution of mv values, fit a statistical model to the data
use this fit model to compute the 25th percentile of the distribution
Step6: Trying a Gaussian distribution
Step7: Set this value to mu
Step8: Trying a Gamma distribution
Step9: The gamma distribution appears to fit the empirical data better but we get zero for the 25th percentile
Compute the label, p, given mu
knowing mu, how do we obtain our hindsight recommendation?
using our definition of mu, we solve an equation to obtain p (price)
p = (mu x gweiPaid_b) / gasUsed_b
this will serve as our label and thus recommendation for how much to pay per unit gas for a transation to successfully commence
it tells us what price we need to set in order to force mv for that bid to be mu
Step10: If mu is higher around 0.01 we get a normal distribution
Write training set and labels to a csv file for modeling
Step11: Model for first group of 6 | Python Code:
df['txcnt_second'] = df['tx_count'].values / df['blockTime'].values
df['avg_gasUsed_t_perblock'] = df.groupby('block_id')['gasUsed_t'].transform('mean')
df['avg_price_perblock'] = df.groupby('block_id')['price_gwei'].transform('mean')
def rolling_avg(window_size):
price = df[['block_id', 'avg_price_perblock']].drop_duplicates().sort_values(
'block_id', ascending=True)
gasUsed_t = df[['block_id', 'avg_gasUsed_t_perblock']].drop_duplicates().sort_values(
'block_id', ascending=True)
txcnt_second = df[['block_id', 'txcnt_second']].drop_duplicates().sort_values(
'block_id', ascending=True)
tx_count = df[['block_id', 'tx_count']].drop_duplicates().sort_values(
'block_id', ascending=True)
gasUsed_b = df[['block_id', 'gasUsed_b']].drop_duplicates().sort_values(
'block_id', ascending=True)
uncle_count = df[['block_id', 'uncle_count']].drop_duplicates().sort_values(
'block_id', ascending=True)
difficulty = df[['block_id', 'difficulty']].drop_duplicates().sort_values(
'block_id', ascending=True)
blocktime = df[['block_id', 'blockTime']].drop_duplicates().sort_values(
'block_id', ascending=True)
# create new pandas dataframe with average values
rolling_avg = pd.DataFrame()
# calculate rolling averages
rolling_avg['avg_blocktime'] = blocktime['blockTime'].rolling(window=window_size).mean()
rolling_avg['avg_gasUsed_b'] = gasUsed_b['gasUsed_b'].rolling(window=window_size).mean()
rolling_avg['avg_tx_count'] = tx_count['tx_count'].rolling(window=window_size).mean()
rolling_avg['avg_uncle_count'] = uncle_count['uncle_count'].rolling(window=window_size).mean()
rolling_avg['avg_difficulty'] = difficulty['difficulty'].rolling(window=window_size).mean()
rolling_avg['avg_txcnt_second'] = txcnt_second['txcnt_second'].rolling(window=window_size).mean()
rolling_avg['avg_gasUsed_t'] = gasUsed_t['avg_gasUsed_t_perblock'].rolling(window=window_size).mean()
rolling_avg['avg_price'] = price['avg_price_perblock'].rolling(window=window_size).mean()
# insert blockids to merge on
rolling_avg['blockids'] = df['block_id'].drop_duplicates().sort_values(ascending=True)
return rolling_avg
num_blocks = [6, 60]
for num in num_blocks:
df_rolling_avg = rolling_avg(num)
df_rolling_avg.to_csv('./../data/block_avg_{}.csv'.format(num))
df_rolling_avg_6 = rolling_avg(6)
df_rolling_avg_60 = rolling_avg(60)
Explanation: Generate new columns with average block info
Take average values over two time horizons
6 blocks (~1 min) -> represents the current state (short frequency view)
60 blocks (~10 min) -> represents the long term view
End of explanation
merged1 = pd.merge(df, df_rolling_avg_6, left_on='block_id', right_on='blockids')
merged2 = pd.merge(merged1, df_rolling_avg_60, left_on='block_id', right_on='blockids', suffixes=('_6', '_60'))
merged2.columns
Explanation: Merge data with new columns
End of explanation
merged2['mv'] = merged2.gweiShare / merged2.gasShare
merged2['mv'].isnull().sum()
merged2['mv'].describe()
Explanation: Create a label
What are we predicting?
A hindsight estimate of what the price should be, given knowledge about previous blocks
Develop a summary statistic about the distribution of prices over previous blocks
Our target: the 25th percentile of the distribution (gweiShare / gasShare)
Definitions
gasUsed_t -> the amount of gas consumed on a transaction
gasUsed_b -> the amount of gas consumed in an entire block
gweiPaid -> the total amount paid (Gwei) for a transaction (= gasUsed_t x price_gwei)
gweiPaid_b -> the total amount paid in a block
gweiShare -> the fraction of gwei paid w.r.t. the entire block
gasShare -> the fraction of gas consumed w.r.t. the entire block
Define "miner value" – mv
the ratio of a transaction's share of the block's fees (gweiShare) to its share of the block's gas (gasShare)
mv = gweiShare / gasShare
local parameter (per transaction)
Define mu
mu is a summary statistic of mv (global parameter)
a measure of how likely a transaction is to be "picked up" by a miner for completion (risk factor)
our target/goal is for mu to be the 25th percentile of mv (gweiShare / gasShare)
mu = percentile(mv, 25) over the entire distribution of mv values
we can tune this parameter to increase or decrease the desired percentile
it is a pre-emptive statistical calculation based on our hindsight knowledge
The "price" predicted with hindsight
knowing mu, how do we obtain our hindsight recommendation?
using our definition of mu, we solve an equation to obtain p (price)
p = (mu x gweiPaid_b) / gasUsed_b
this will serve as our label and thus recommendation for how much to pay per unit gas for a transaction to successfully commence
it tells us what price we need to set in order to force mv for that bid to be mu
Calculate miner value (mv) for every datapoint in our dataset
price / gas or gweiShare / gasShare
End of explanation
merged2.groupby('block_id')['mv'].count().head(6)
merged2.groupby('block_id')['mv'].count().mean()
Explanation: There are no zero values, but many values close to zero
End of explanation
print('max tx in block: {}, min tx in block: {}'.format(
merged2.groupby('block_id')['mv'].count().max(),
merged2.groupby('block_id')['mv'].count().min()))
Explanation: There are only on average 96 samples in each block
End of explanation
merged2['mv'].hist(bins=10000, label='Miner Values', histtype='stepfilled')
plt.xlim(-2, 10)
plt.xlabel('Miner Value')
plt.legend()
# compute mean, variance, standard deviation
mu_hat = np.mean(merged2['mv'])
sigma_sq_hat = np.var(merged2['mv'])
sigma_hat = np.std(merged2['mv'])
print("Sample Mean: {0:1.3f}".format(mu_hat))
print("Sample Variance: {0:1.3f}".format(sigma_sq_hat))
print("Sample Standard Dev: {0:1.3f}".format(sigma_hat))
Explanation: So we create groupings of 6 blocks to increase sample size
Compute the summary statistic mu
given the distribution of mv values, fit a statistical model to the data
use this fit model to compute the 25th percentile of the distribution
End of explanation
x = np.linspace(-10, 15, num=1000)
fig, ax = plt.subplots(1, 1, figsize=(8, 5))
ax.hist(merged2['mv'], normed=True, bins=10000, histtype='stepfilled', label='Samples')
ax.plot(x, norm.pdf(x, mu_hat,sigma_hat), 'r-', lw=3, label='PDF')
ax.axvline(x=np.percentile(norm.pdf(x, mu_hat,sigma_hat), 25), linestyle='--', label='25th percentile')
ax.set_xlim(-5,10)
ax.set_xlabel('Miner Values')
ax.legend()
# compute 25th percentile
np.percentile(norm.pdf(x, mu_hat,sigma_hat), 25)
Explanation: Trying a Gaussian distribution
End of explanation
mu_normal = np.percentile(norm.pdf(x, mu_hat,sigma_hat), 25)
Explanation: Set this value to mu
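Note that np.percentile over norm.pdf(x, ...) returns the 25th percentile of the evaluated density heights rather than of the fitted distribution itself; if the intent were the distribution's own 25th percentile, scipy's quantile function would be the direct route (shown only as a hedged aside, not as a change to the analysis):
from scipy.stats import norm
norm.ppf(0.25, loc=mu_hat, scale=sigma_hat)  # 25th percentile of the fitted normal
np.percentile(merged2['mv'], 25)             # empirical 25th percentile of mv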
End of explanation
alpha = float(mu_hat ** 2) / sigma_sq_hat
beta = float(mu_hat) / sigma_sq_hat
x = np.linspace(-5, 10, num=1000)
fig, ax = plt.subplots(1, 1, figsize=(8, 5))
ax.hist(merged2['mv'], normed=True, bins=10000, histtype='stepfilled', label='Samples')
ax.plot(x, gamma.pdf(x, alpha), 'g-', lw=3, label='Gamma PDF')
ax.axvline(x=np.percentile(gamma.pdf(x, alpha), 25), linestyle='--', label='25th percentile')
ax.set_xlim(-5,10)
ax.set_xlabel('Miner Values')
ax.legend()
# compute 25th percentile
np.percentile(gamma.pdf(x, alpha), 25)
Explanation: Trying a Gamma distribution
End of explanation
mu_normal
merged2['p_label'] = mu_normal * (merged2.gweiPaid_b / merged2.gasUsed_b)
merged2['p_label'].hist(bins=3000)
plt.xlim(-0.1,1)
Explanation: The gamma distribution appears to fit the empirical data better but we get zero for the 25th percentile
Compute the label, p, given mu
knowing mu, how do we obtain our hindsight recommendation?
using our definition of mu, we solve an equation to obtain p (price)
p = (mu x gweiPaid_b) / gasUsed_b
this will serve as our label and thus recommendation for how much to pay per unit gas for a transaction to successfully commence
it tells us what price we need to set in order to force mv for that bid to be mu
End of explanation
merged2.columns
# select candidate features for modeling
sel_cols = ['gasLimit_t',
'gasUsed_t',
'newContract',
'blockTime',
'difficulty',
'gasLimit_b',
'gasUsed_b',
'reward',
'size',
'totalFee',
'amount_gwei',
'gasShare',
'gweiPaid',
'gweiPaid_b',
'gweiShare',
'free_t',
'day',
'hour',
'dayofweek',
'txcnt_second',
'avg_blocktime_6',
'avg_gasUsed_b_6',
'avg_tx_count_6',
'avg_uncle_count_6',
'avg_difficulty_6',
'avg_txcnt_second_6',
'avg_gasUsed_t_6',
'avg_price_6',
'avg_blocktime_60',
'avg_gasUsed_b_60',
'avg_tx_count_60',
'avg_uncle_count_60',
'avg_difficulty_60',
'avg_txcnt_second_60',
'avg_gasUsed_t_60',
'avg_price_60',
'mv']
features = merged2[sel_cols]
features.to_csv('./../data/training.csv')
labels = merged2['p_label']
labels.to_csv('./../data/labels.csv')
Explanation: If mu is higher around 0.01 we get a normal distribution
Write training set and labels to a csv file for modeling
End of explanation
# compute mean, variance, standard deviation
mu_hat = np.mean(samples)
sigma_sq_hat = np.var(samples)
sigma_hat = np.std(samples)
print("Sample Mean: {0:1.3f}".format(mu_hat))
print("Sample Variance: {0:1.3f}".format(sigma_sq_hat))
print("Sample Standard Dev: {0:1.3f}".format(sigma_hat))
x = np.linspace(-5, 8, num=250)
fig, ax = plt.subplots(1, 1, figsize=(8, 5))
ax.hist(samples, normed=True, bins=25, histtype='stepfilled', label='Samples')
ax.plot(x, norm.pdf(x, mu_hat,sigma_hat), 'r-', lw=3, label='PDF')
ax.axvline(x=np.percentile(norm.pdf(x, mu_hat,sigma_hat), 25), linestyle='--', label='25th percentile')
ax.set_xlim(-2,6)
ax.set_xlabel('Miner Values')
ax.legend()
# compute 25th percentile
np.percentile(norm.pdf(x, mu_hat,sigma_hat), 25)
Explanation: Model for first group of 6
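The samples array used in this last cell is not defined in the excerpt; presumably it holds the mv values of a single 6-block group, for example (an assumption for illustration):
# hypothetical: mv values for the first group of six blocks
first_six = merged2['block_id'].drop_duplicates().sort_values().iloc[:6]
samples = merged2.loc[merged2['block_id'].isin(first_six), 'mv'].values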
End of explanation |
4,416 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Self-Driving Car Engineer Nanodegree
Project
Step1: Read in an Image
Step9: Ideas for Lane Detection Pipeline
Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are
Step10: Test Images
Build your pipeline to work on the images in the directory "test_images"
You should make sure your pipeline works well on these images before you try the videos.
Step11: Build a Lane Finding Pipeline
Build the pipeline and run your solution on all test_images. Make copies into the test_images_output directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
Step12: Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos
Step13: Let's try the one with the solid white lane on the right first ...
Step15: Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
Step17: Improve the draw_lines() function
At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".
Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.
Now for the one with the solid yellow lane on the left. This one's more tricky!
Step19: Writeup and Submission
If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a link to the writeup template file.
Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project! | Python Code:
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
Explanation: Self-Driving Car Engineer Nanodegree
Project: Finding Lane Lines on the Road
In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a write up template that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the rubric points for this project.
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".
The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Tranform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.
<figure>
<img src="examples/line-segments-example.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
</figcaption>
</figure>
<p></p>
<figure>
<img src="examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
<figcaption>
<p></p>
<p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
</figcaption>
</figure>
Run the cell below to import some packages. If you get an import error for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, see this forum post for more troubleshooting tips.
Import Packages
End of explanation
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
Explanation: Read in an Image
End of explanation
import math
def grayscale(img):
    """Applies the Grayscale transform.
    This will return an image with only one color channel
    but NOTE: to see the returned image as grayscale
    (assuming your grayscaled image is called 'gray')
    you should call plt.imshow(gray, cmap='gray')"""
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Or use BGR2GRAY if you read an image with cv2.imread()
# return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
def rgbtohsv(img):
"Applies rgb to hsv transform"
return cv2.cvtColor(img, cv2.COLOR_RGB2HSV)
def canny(img, low_threshold, high_threshold):
    """Applies the Canny transform"""
return cv2.Canny(img, low_threshold, high_threshold)
def gaussian_blur(img, kernel_size):
    """Applies a Gaussian Noise kernel"""
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def region_of_interest(img, vertices):
    """Applies an image mask.
    Only keeps the region of the image defined by the polygon
    formed from `vertices`. The rest of the image is set to black."""
#defining a blank mask to start with
mask = np.zeros_like(img)
#defining a 3 channel or 1 channel color to fill the mask with depending on the input image
if len(img.shape) > 2:
channel_count = img.shape[2] # i.e. 3 or 4 depending on your image
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
#filling pixels inside the polygon defined by "vertices" with the fill color
cv2.fillPoly(mask, vertices, ignore_mask_color)
#returning the image only where mask pixels are nonzero
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def draw_lines(img, lines, color=[200, 0, 0], thickness = 10):
    """
    NOTE: this is the function you might want to use as a starting point once you want to
    average/extrapolate the line segments you detect to map out the full
    extent of the lane (going from the result shown in raw-lines-example.mp4
    to that shown in P1_example.mp4).
    Think about things like separating line segments by their
    slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
    line vs. the right line. Then, you can average the position of each of
    the lines and extrapolate to the top and bottom of the lane.
    This function draws `lines` with `color` and `thickness`.
    Lines are drawn on the image inplace (mutates the image).
    If you want to make the lines semi-transparent, think about combining
    this function with the weighted_img() function below.
    """
x_left = []
y_left = []
x_right = []
y_right = []
    imshape = img.shape  # use the image passed in rather than the global `image`
ysize = imshape[0]
ytop = int(0.6*ysize) # need y coordinates of the top and bottom of left and right lane
ybtm = int(ysize) # to calculate x values once a line is found
for line in lines:
for x1,y1,x2,y2 in line:
slope = float(((y2-y1)/(x2-x1)))
if (slope > 0.5): # if the line slope is greater than tan(26.52 deg), it is the left line
x_left.append(x1)
x_left.append(x2)
y_left.append(y1)
y_left.append(y2)
if (slope < -0.5): # if the line slope is less than tan(153.48 deg), it is the right line
x_right.append(x1)
x_right.append(x2)
y_right.append(y1)
y_right.append(y2)
# only execute if there are points found that meet criteria, this eliminates borderline cases i.e. rogue frames
if (x_left!=[]) & (x_right!=[]) & (y_left!=[]) & (y_right!=[]):
left_line_coeffs = np.polyfit(x_left, y_left, 1)
left_xtop = int((ytop - left_line_coeffs[1])/left_line_coeffs[0])
left_xbtm = int((ybtm - left_line_coeffs[1])/left_line_coeffs[0])
right_line_coeffs = np.polyfit(x_right, y_right, 1)
right_xtop = int((ytop - right_line_coeffs[1])/right_line_coeffs[0])
right_xbtm = int((ybtm - right_line_coeffs[1])/right_line_coeffs[0])
cv2.line(img, (left_xtop, ytop), (left_xbtm, ybtm), color, thickness)
cv2.line(img, (right_xtop, ytop), (right_xbtm, ybtm), color, thickness)
def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
    """`img` should be the output of a Canny transform.
    Returns an image with hough lines drawn."""
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
draw_lines(line_img, lines)
return line_img
# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., λ=0.):
    """`img` is the output of the hough_lines(), an image with lines drawn on it.
    Should be a blank image (all black) with lines drawn on it.
    `initial_img` should be the image before any processing.
    The result image is computed as follows:
    initial_img * α + img * β + λ
    NOTE: initial_img and img must be the same shape!"""
return cv2.addWeighted(initial_img, α, img, β, λ)
Explanation: Ideas for Lane Detection Pipeline
Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:
cv2.inRange() for color selection
cv2.fillPoly() for regions selection
cv2.line() to draw lines on an image given endpoints
cv2.addWeighted() to coadd / overlay two images
cv2.cvtColor() to grayscale or change color
cv2.imwrite() to output images to file
cv2.bitwise_and() to apply a mask to an image
Check out the OpenCV documentation to learn about these and discover even more awesome functionality!
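For instance, cv2.inRange() can isolate white and yellow pixels before edge detection; a hedged sketch (the threshold values are illustrative and not part of the pipeline below):
hsv = cv2.cvtColor(image, cv2.COLOR_RGB2HSV)
white_mask = cv2.inRange(image, np.array([200, 200, 200]), np.array([255, 255, 255]))
yellow_mask = cv2.inRange(hsv, np.array([15, 80, 100]), np.array([35, 255, 255]))
color_mask = cv2.bitwise_or(white_mask, yellow_mask)
color_filtered = cv2.bitwise_and(image, image, mask=color_mask)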
Helper Functions
Below are some helper functions to help get you started. They should look familiar from the lesson!
End of explanation
import os
test_images_list = os.listdir("test_images/") # modified a little to save filenames of test images
Explanation: Test Images
Build your pipeline to work on the images in the directory "test_images"
You should make sure your pipeline works well on these images before you try the videos.
End of explanation
# define parameters needed for helper functions (given inline)
kernel_size = 5 # gaussian blur
low_threshold = 60 # canny edge detection
high_threshold = 180 # canny edge detection
# Define the Hough transform parameters
rho = 1 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 20 # minimum number of votes (intersections in Hough grid cell)
min_line_length = 40 # minimum number of pixels making up a line
max_line_gap = 25 # maximum gap in pixels between connectable line segments
for test_image in test_images_list: # iterating through the images in test_images folder
image = mpimg.imread('test_images/' + test_image) # reading in an image
gray = grayscale(image) # convert to grayscale
blur_gray = gaussian_blur(gray, kernel_size) # add gaussian blur to remove noise
edges = canny(blur_gray, low_threshold, high_threshold) # perform canny edge detection
# extract image size and define vertices of the four sided polygon for masking
imshape = image.shape
xsize = imshape[1]
ysize = imshape[0]
vertices = np.array([[(0.05*xsize, ysize ),(0.44*xsize, 0.6*ysize),\
(0.55*xsize, 0.6*ysize), (0.95*xsize, ysize)]], dtype=np.int32) #
masked_edges = region_of_interest(edges, vertices) # retain information only in the region of interest
line_image = hough_lines(masked_edges, rho, theta, threshold,\
min_line_length, max_line_gap) # perform hough transform and retain lines with specific properties
lines_edges = weighted_img(line_image, image, α=0.8, β=1., λ=0.) # Draw the lines on the edge image
plt.imshow(lines_edges) # Display the image
plt.show()
mpimg.imsave('test_images_output/' + test_image, lines_edges) # save the resulting image
Explanation: Build a Lane Finding Pipeline
Build the pipeline and run your solution on all test_images. Make copies into the test_images_output directory, and you can use the images in your writeup report.
Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
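One simple way to explore those parameters is a small grid sweep that writes one annotated image per setting for side-by-side comparison (a sketch reusing the variables left over from the last loop iteration above; the file names are illustrative):
import itertools
for low, high in itertools.product([40, 60, 80], [120, 180, 240]):
    sweep_edges = canny(blur_gray, low, high)
    sweep_masked = region_of_interest(sweep_edges, vertices)
    sweep_lines = hough_lines(sweep_masked, rho, theta, threshold, min_line_length, max_line_gap)
    mpimg.imsave('test_images_output/sweep_{}_{}.png'.format(low, high), weighted_img(sweep_lines, image))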
End of explanation
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(image):
# NOTE: The output you return should be a color image (3 channel) for processing video below
gray = grayscale(image) # convert to grayscale
blur_gray = gaussian_blur(gray, kernel_size) # add gaussian blur to remove noise
edges = canny(blur_gray, low_threshold, high_threshold) # perform canny edge detection
# extract image size and define vertices of the four sided polygon for masking
imshape = image.shape
xsize = imshape[1]
ysize = imshape[0]
vertices = np.array([[(0.05*xsize, ysize ),(0.44*xsize, 0.6*ysize),\
(0.55*xsize, 0.6*ysize), (0.95*xsize, ysize)]], dtype=np.int32) #
masked_edges = region_of_interest(edges, vertices) # retain information only in the region of interest
line_image = hough_lines(masked_edges, rho, theta, threshold,\
min_line_length, max_line_gap) # perform hough transform and retain lines with specific properties
lines_edges = weighted_img(line_image, image, α=0.8, β=1., λ=0.) # Draw the lines on the edge image
return lines_edges
Explanation: Test on Videos
You know what's cooler than drawing lanes over images? Drawing lanes over video!
We can test our solution on two provided videos:
solidWhiteRight.mp4
solidYellowLeft.mp4
Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, check out this forum post for more troubleshooting tips.
If you get an error that looks like this:
NeedDownloadError: Need ffmpeg exe.
You can download it by calling:
imageio.plugins.ffmpeg.download()
Follow the instructions in the error message and check out this forum post for more troubleshooting tips across operating systems.
End of explanation
white_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
Explanation: Let's try the one with the solid white lane on the right first ...
End of explanation
HTML(
"""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
Explanation: Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
End of explanation
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
HTML(
"""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
Explanation: Improve the draw_lines() function
At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".
Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.
Now for the one with the solid yellow lane on the left. This one's more tricky!
End of explanation
challenge_output = 'test_videos_output/challenge.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)
clip3 = VideoFileClip('test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
HTML(
"""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
Explanation: Writeup and Submission
If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a link to the writeup template file.
Optional Challenge
Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
End of explanation |
4,417 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Begin testing here
Step1: As given by newport (https
Step2: So we should expect about 0.25 maximum intensity through our 780 waveplate.
tinkering below here | Python Code:
qwp = np.matrix([[1, 0],[0, -1j]])
R(-np.pi/4)*qwp*R(np.pi/4)
qwp45 = wp(np.pi/2, np.pi/4)
qwp45
wp(np.pi/2, 0)
vpol = np.matrix([[0,0],[0,1]])
vpol
np.exp(1j*np.pi/4)
horiz = np.matrix([[1],[0]])
output = qwp*horiz
intensity(output)
before_cell = wp(np.pi/2,np.pi/4)*wp(np.pi,np.pi/10)*horiz
output = vpol*wp(np.pi/2,-np.pi/4)*before_cell
intensity(output)
Explanation: Begin testing here
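The helpers R, wp, and intensity are not defined in this excerpt; a plausible set of definitions consistent with how they are used above (rotation matrix, waveplate Jones matrix with retardance and fast-axis angle, total intensity) would be:
import numpy as np

def R(theta):
    # rotation matrix (assumed convention)
    return np.matrix([[np.cos(theta), np.sin(theta)],
                      [-np.sin(theta), np.cos(theta)]])

def wp(phase, theta):
    # Jones matrix of a waveplate with retardance `phase` and fast axis at angle `theta`
    # (assumed convention; wp(np.pi/2, 0) reproduces the qwp matrix defined above)
    return R(-theta) * np.matrix([[1, 0], [0, np.exp(-1j * phase)]]) * R(theta)

def intensity(jones):
    # total intensity |Ex|^2 + |Ey|^2 of a Jones vector
    return float(np.sum(np.abs(np.asarray(jones)) ** 2))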
End of explanation
ivals = []
thetas = np.linspace(0,np.pi)
for theta in thetas:
# vpol quarter quarter half input
output = vpol*wp(np.pi/2-0.5,theta)*horiz
ivals.append(intensity(output))
plt.plot(thetas,ivals)
plt.title("rotating qwp w/error between crossed pols")
ivals = []
thetas = np.linspace(0,np.pi/2)
for phi in [0.3,0.4,0.5,0.6,0.7]: # try a range of phase errors to compare
ivals = []
for theta in thetas:
# vpol quarter quarter half input
output = vpol*wp(np.pi/2 - phi,-theta)*wp(np.pi/2 - phi,theta)*wp(np.pi - phi,np.pi/19)*horiz
ivals.append(intensity(output))
plt.plot(thetas,ivals,label=phi)
plt.legend()
plt.ylabel("I output")
plt.xlabel("qwp1 angle (rad)")
# Rotating 780 QWP between crossed pols
# at 795 nm
ivals = []
thetas = np.linspace(0,np.pi)
for phi in [0.0,0.3,0.4,0.5,0.6,0.7]:
ivals = []
for theta in thetas:
output = vpol*wp(np.pi/2 - phi,theta)*horiz
ivals.append(intensity(output))
plt.plot(thetas,ivals,label=phi)
plt.legend()
plt.ylabel("I output")
plt.xlabel("qwp angle (rad)")
plt.title("QWP w/ phase error")
Explanation: As given by newport (https://www.newport.com/f/quartz-zero-order-waveplates), the wave error at 795 nm (vs 780 nm) corresponds to a normalized wavelength of 1.02, giving a wave error of -0.08 waves. That corresponds to a phase error of 2π×(-0.08) ≈ 0.5 radians in magnitude.
We'll explore the effect of this phase error below:
End of explanation
# try to plot vectors for the polarization components
fig = plt.figure()
ax = fig.add_axes([0.1, 0.1, 0.9, 0.9], polar=True)
r = np.arange(0, 3.0, 0.01)
theta = 2*np.pi*r
ax.set_rmax(1.2)
#plt.grid(True)
# arrow at 0
arr1 = plt.arrow(0, 0, 0, 1, alpha = 0.5, width=0.03, length_includes_head=True,
edgecolor = 'black', facecolor = 'red', zorder = 5)
# arrow at 45 degree
arr2 = plt.arrow(np.pi/4, 0, 0, 1, alpha = 0.5, width=0.03, length_includes_head=True,
edgecolor = 'black', facecolor = 'blue', zorder = 5)
plt.show()
from qutip import *
%matplotlib notebook
# Start horizontal pol, propagate through system:
phi = 0.03
theta = pi/4
out = wp(np.pi/2 - phi,theta)*wp(np.pi,pi/19)*horiz
state = Qobj(out)
b = Bloch()
b.set_label_convention("polarization jones")
b.add_states(state)
b.show()
2*pi*0.25 - 2*pi*0.245
Explanation: So we should expect about 0.25 maximum intensity through our 780 waveplate.
tinkering below here
End of explanation |
4,418 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Batch Normalization – Practice
Batch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now.
This is not a good network for classifying MNIST digits. You could create a much simpler network and get better results. However, to give you hands-on experience with batch normalization, we had to make an example that was
Step3: Batch Normalization using tf.layers.batch_normalization<a id="example_1"></a>
This version of the network uses tf.layers for almost everything, and expects you to implement batch normalization using tf.layers.batch_normalization
We'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function.
This version of the function does not include batch normalization.
Step6: We'll use the following function to create convolutional layers in our network. They are very basic
Step8: Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions).
This cell builds the network without batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training.
Step10: With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Using batch normalization, you'll be able to train this same network to over 90% in that same number of batches.
Add batch normalization
We've copied the previous three cells to get you started. Edit these cells to add batch normalization to the network. For this exercise, you should use tf.layers.batch_normalization to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference.
If you get stuck, you can check out the Batch_Normalization_Solutions notebook to see how we did things.
TODO
Step12: TODO
Step13: TODO
Step15: With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output
Step17: TODO
Step18: TODO | Python Code:
import tensorflow as tf
import numpy as np
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True, reshape=False)
Explanation: Batch Normalization – Practice
Batch normalization is most useful when building deep neural networks. To demonstrate this, we'll create a convolutional neural network with 20 convolutional layers, followed by a fully connected layer. We'll use it to classify handwritten digits in the MNIST dataset, which should be familiar to you by now.
This is not a good network for classifying MNIST digits. You could create a much simpler network and get better results. However, to give you hands-on experience with batch normalization, we had to make an example that was:
1. Complicated enough that training would benefit from batch normalization.
2. Simple enough that it would train quickly, since this is meant to be a short exercise just to give you some practice adding batch normalization.
3. Simple enough that the architecture would be easy to understand without additional resources.
This notebook includes two versions of the network that you can edit. The first uses higher level functions from the tf.layers package. The second is the same network, but uses only lower level functions in the tf.nn package.
Batch Normalization with tf.layers.batch_normalization
Batch Normalization with tf.nn.batch_normalization
The following cell loads TensorFlow, downloads the MNIST dataset if necessary, and loads it into an object named mnist. You'll need to run this cell before running anything else in the notebook.
End of explanation
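# Before adding batch normalization with TensorFlow, it may help to see the core
# computation in plain numpy: normalize each unit's activations with the batch
# statistics, then apply a learned scale (gamma) and shift (beta). This is only a
# sketch for intuition; the exercise below uses TensorFlow's own implementations.
import numpy as np

def batch_norm_sketch(x, gamma, beta, epsilon=1e-3):
    mean = x.mean(axis=0)                              # per-unit mean over the batch
    variance = x.var(axis=0)                           # per-unit variance over the batch
    x_hat = (x - mean) / np.sqrt(variance + epsilon)   # normalized activations
    return gamma * x_hat + beta                        # learned scale and shift

batch = np.random.randn(64, 100) * 5 + 3               # activations with large mean/variance
normalized = batch_norm_sketch(batch, gamma=np.ones(100), beta=np.zeros(100))
print(normalized.mean(), normalized.std())             # approximately 0 and 1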
DO NOT MODIFY THIS CELL
def fully_connected(prev_layer, num_units):
    Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer
Explanation: Batch Normalization using tf.layers.batch_normalization<a id="example_1"></a>
This version of the network uses tf.layers for almost everything, and expects you to implement batch normalization using tf.layers.batch_normalization
We'll use the following function to create fully connected layers in our network. We'll create them with the specified number of neurons and a ReLU activation function.
This version of the function does not include batch normalization.
End of explanation
DO NOT MODIFY THIS CELL
def conv_layer(prev_layer, layer_depth):
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu)
return conv_layer
Explanation: We'll use the following function to create convolutional layers in our network. They are very basic: we're always using a 3x3 kernel, ReLU activation functions, strides of 1x1 on layers with odd depths, and strides of 2x2 on layers with even depths. We aren't bothering with pooling layers at all in this network.
This version of the function does not include batch normalization.
End of explanation
DO NOT MODIFY THIS CELL
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
    # Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
Explanation: Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions).
This cell builds the network without batch normalization, then trains it on the MNIST dataset. It displays loss and accuracy data periodically while training.
End of explanation
def fully_connected(prev_layer, num_units, is_training):
    Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
layer = tf.layers.dense(prev_layer, num_units, use_bias=False, activation=None)
layer = tf.layers.batch_normalization(layer, training=is_training)
layer = tf.nn.relu(layer)
return layer
Explanation: With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Using batch normalization, you'll be able to train this same network to over 90% in that same number of batches.
Add batch normalization
We've copied the previous three cells to get you started. Edit these cells to add batch normalization to the network. For this exercise, you should use tf.layers.batch_normalization to handle most of the math, but you'll need to make a few other changes to your network to integrate batch normalization. You may want to refer back to the lesson notebook to remind yourself of important things, like how your graph operations need to know whether or not you are performing training or inference.
If you get stuck, you can check out the Batch_Normalization_Solutions notebook to see how we did things.
TODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
End of explanation
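# The TODO above boils down to two pieces of wiring when using
# tf.layers.batch_normalization (a minimal TF 1.x sketch; the variable names here
# are illustrative only and not part of the exercise):
#   1) each batch-normalized layer must be told whether it is training
#      (use batch statistics) or doing inference (use population statistics);
#   2) the ops that update the population statistics live in UPDATE_OPS and must
#      be forced to run together with the training step.
import tensorflow as tf

is_training_sketch = tf.placeholder(tf.bool, name='is_training_sketch')
x_sketch = tf.placeholder(tf.float32, [None, 100])
y_sketch = tf.placeholder(tf.float32, [None, 10])

hidden = tf.layers.dense(x_sketch, 256, use_bias=False, activation=None)
hidden = tf.layers.batch_normalization(hidden, training=is_training_sketch)   # (1)
hidden = tf.nn.relu(hidden)
logits_sketch = tf.layers.dense(hidden, 10)

loss_sketch = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits_sketch, labels=y_sketch))
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):     # (2)
    train_op_sketch = tf.train.AdamOptimizer(0.001).minimize(loss_sketch)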
def conv_layer(prev_layer, layer_depth, is_training):
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=None, use_bias=False)
conv_layer = tf.layers.batch_normalization(conv_layer, training=is_training)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer
Explanation: TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.
End of explanation
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
is_training = tf.placeholder(tf.bool, name='is_training')
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i, is_training)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100, is_training)
    # Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs,
labels: batch_ys,
is_training: False})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels,
is_training: False})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
is_training: False})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
Explanation: TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training, and you'll need to make sure it updates and uses its population statistics correctly.
End of explanation
def fully_connected(prev_layer, num_units, is_training):
    Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
prev_shape = prev_layer.get_shape().as_list()
weights = tf.Variable(tf.random_normal([int(prev_shape[-1]), num_units], stddev=0.05))
layer = tf.matmul(prev_layer, weights)
    beta = tf.Variable(tf.zeros([num_units]))   # learned shift, initialized to 0
    gamma = tf.Variable(tf.ones([num_units]))   # learned scale, initialized to 1
pop_mean = tf.Variable(tf.zeros([num_units]), trainable=False)
pop_variance = tf.Variable(tf.zeros([num_units]), trainable=False)
epsilon = 1e-3
def batch_norm_training():
decay = 0.99
batch_mean, batch_variance = tf.nn.moments(layer, [0])
train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))
with tf.control_dependencies([train_mean, train_variance]):
batch_norm_layer = (layer - batch_mean) / tf.sqrt(batch_variance + epsilon)
return batch_norm_layer
def batch_norm_inference():
batch_norm_layer = (layer - pop_mean) / tf.sqrt(pop_variance + epsilon)
return batch_norm_layer
layer = tf.cond(is_training, batch_norm_training, batch_norm_inference)
layer = gamma * layer + beta
layer = tf.nn.relu(layer)
return layer
Explanation: With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output: Accuracy on 100 samples. If this value is low while everything else looks good, that means you did not implement batch normalization correctly. Specifically, it means you either did not calculate the population mean and variance while training, or you are not using those values during inference.
Batch Normalization using tf.nn.batch_normalization<a id="example_2"></a>
Most of the time you will be able to use higher level functions exclusively, but sometimes you may want to work at a lower level. For example, if you ever want to implement a new feature – something new enough that TensorFlow does not already include a high-level implementation of it, like batch normalization in an LSTM – then you may need to know these sorts of things.
This version of the network uses tf.nn for almost everything, and expects you to implement batch normalization using tf.nn.batch_normalization.
Optional TODO: You can run the next three cells before you edit them just to see how the network performs without batch normalization. However, the results should be pretty much the same as you saw with the previous example before you added batch normalization.
TODO: Modify fully_connected to add batch normalization to the fully connected layers it creates. Feel free to change the function's parameters if it helps.
Note: For convenience, we continue to use tf.layers.dense for the fully_connected layer. By this point in the class, you should have no problem replacing that with matrix operations between the prev_layer and explicit weights and biases variables.
End of explanation
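# For reference, this is what the lower-level call named above looks like; the
# implementation below ends up writing the same arithmetic out by hand. A minimal
# TF 1.x sketch (shapes and epsilon are illustrative):
import tensorflow as tf

inputs_sketch = tf.placeholder(tf.float32, [None, 100])
gamma_sketch = tf.Variable(tf.ones([100]))     # scale
beta_sketch = tf.Variable(tf.zeros([100]))     # offset
batch_mean_sketch, batch_var_sketch = tf.nn.moments(inputs_sketch, axes=[0])

# tf.nn.batch_normalization computes (x - mean) / sqrt(variance + eps) * scale + offset
normalized_sketch = tf.nn.batch_normalization(inputs_sketch,
                                              batch_mean_sketch, batch_var_sketch,
                                              offset=beta_sketch, scale=gamma_sketch,
                                              variance_epsilon=1e-3)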
def conv_layer(prev_layer, layer_depth, is_training):
Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
strides = 2 if layer_depth % 3 == 0 else 1
in_channels = prev_layer.get_shape().as_list()[3]
out_channels = layer_depth*4
weights = tf.Variable(
tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05))
beta = tf.Variable(tf.zeros([out_channels]))
gamma = tf.Variable(tf.ones([out_channels]))
pop_mean = tf.Variable(tf.zeros([out_channels]), trainable=False)
pop_variance = tf.Variable(tf.zeros([out_channels]), trainable=False)
#bias = tf.Variable(tf.zeros(out_channels))
conv_layer = tf.nn.conv2d(prev_layer, weights, strides=[1,strides, strides, 1], padding='SAME')
epsilon = 1e-3
def batch_norm_training():
decay = 0.99
batch_mean, batch_variance = tf.nn.moments(conv_layer, [0,1,2], keep_dims=False)
        train_mean = tf.assign(pop_mean, pop_mean * decay + batch_mean * (1 - decay))
        train_variance = tf.assign(pop_variance, pop_variance * decay + batch_variance * (1 - decay))
with tf.control_dependencies([train_mean, train_variance]):
batch_norm_layer = (conv_layer - batch_mean) / tf.sqrt(batch_variance + epsilon)
return batch_norm_layer
    def batch_norm_inference():
        batch_norm_layer = (conv_layer - pop_mean) / tf.sqrt(pop_variance + epsilon)
        return batch_norm_layer
    conv_layer = tf.cond(is_training, batch_norm_training, batch_norm_inference)
conv_layer = gamma * conv_layer + beta
#conv_layer = tf.nn.bias_add(conv_layer, bias)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer
# Setting initial conditions
layer = tf.placeholder(tf.float32, [None, 28, 28, 1])
print('Simulating parameter propagation over 5 steps:')
for layer_i in range(1,5):
strides = 2 if layer_i % 3 == 0 else 1
input_shape = layer.get_shape().as_list()
in_channels = layer.get_shape().as_list()[3]
out_channels = layer_i*4
weights = tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05)
layer = tf.nn.conv2d(layer, weights, strides=[1,strides, strides, 1], padding='SAME')
print('-----------------------------------------------')
print('strides:{0}'.format(strides))
print('Input layer shape:{0}'.format(input_shape))
print('in_channels:{0}'.format(in_channels))
print('out_channels:{0}'.format(out_channels))
print('Truncated normal output:{0}'.format(weights))
Explanation: TODO: Modify conv_layer to add batch normalization to the convolutional layers it creates. Feel free to change the function's parameters if it helps.
Note: Unlike in the previous example that used tf.layers, adding batch normalization to these convolutional layers does require some slight differences to what you did in fully_connected.
End of explanation
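# The "slight difference" mentioned above is mostly about which axes the batch
# statistics are taken over: for a 4-D convolutional output you want one
# mean/variance per feature map, computed over the batch and both spatial axes.
# A minimal TF 1.x sketch (shapes are illustrative):
import tensorflow as tf

dense_out = tf.placeholder(tf.float32, [None, 100])         # (batch, units)
conv_out = tf.placeholder(tf.float32, [None, 28, 28, 16])   # (batch, height, width, channels)

dense_mean, dense_var = tf.nn.moments(dense_out, axes=[0])       # one value per unit
conv_mean, conv_var = tf.nn.moments(conv_out, axes=[0, 1, 2])    # one value per channel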
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
is_training = tf.placeholder(tf.bool)
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i, is_training)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100, is_training)
    # Create the output layer with 1 node for each class
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training:True})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training: False})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels,
is_training: False})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
is_training: False})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
Explanation: TODO: Edit the train function to support batch normalization. You'll need to make sure the network knows whether or not it is training.
End of explanation |
4,419 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Say I have these 2D arrays A and B. | Problem:
import numpy as np
A=np.asarray([[1,1,1], [1,1,2], [1,1,3], [1,1,4]])
B=np.asarray([[0,0,0], [1,0,2], [1,0,3], [1,0,4], [1,1,0], [1,1,1], [1,1,4]])
# Collapse each row to a single linear index so whole rows can be compared with np.in1d
dims = np.maximum(B.max(0),A.max(0))+1
# Rows of A that do not appear in B ...
result = A[~np.in1d(np.ravel_multi_index(A.T,dims),np.ravel_multi_index(B.T,dims))]
# ... followed by rows of B that do not appear in A
output = np.append(result, B[~np.in1d(np.ravel_multi_index(B.T,dims),np.ravel_multi_index(A.T,dims))], axis = 0)
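# With the A and B above, `output` holds the rows that occur in exactly one of the
# two arrays (A-only rows first, then B-only rows):
print(output)
# [[1 1 2]
#  [1 1 3]
#  [0 0 0]
#  [1 0 2]
#  [1 0 3]
#  [1 0 4]
#  [1 1 0]]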
4,420 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1A.e - Correction de l'interrogation écrite du 26 septembre 2015
tests, boucles, fonctions
Step1: Enoncé 1
Q1
Le programme suivant provoque une erreur pourquoi ?
Step2: On découvre le problème en ajoutant des affichages intermédiaires
Step3: A la dernière itération, $i+1$ dévient égal à la longueur de la liste tab or le dernier indice d'un tableau est len(tab)-1.
Q2
Où est l'erreur de syntaxe ?
Step4: Le test d'égalité s'écrit ==.
Q3
On associe la valeur 1 à la lettre a, 2 à b et ainsi de suite. Ecrire une fonction qui fait la somme de ces valeurs pour une chaîne de caractères.
Exemple
Step5: On peut l'écrire de façon plus courte
Step6: Enoncé 2
Q1
Barrez les lignes qui produiraient une erreur à l'exécution et dire pourquoi ?
Step7: Les deux premières lignes sont incorrects car on essaye d'ajouter une chaîne de caractères à un nombre. La première opération est correcte "a" * 3. Dans un sens comme dans l'autre, elle donne "aaa". Mais on ne peut ajouter 1 à "aaa".
Q2
Que vaut l à la fin du programme ?
Step8: Il ne faut pas confondre la méthode append et extend.
Step9: Q3
Ecrire une fonction qui prend une chaîne de caractères et qui lui enlève une lettre sur 2.
Step10: Ou plus court encore | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: 1A.e - Solutions to the written quiz of September 26, 2015
tests, loops, functions
End of explanation
tab = [1, 3]
for i in range(0, len(tab)):
print(tab[i] + tab[i+1])
Explanation: Exercise 1
Q1
Why does the following program raise an error?
End of explanation
tab = [1, 3]
for i in range(0, len(tab)):
print(i, i+1, len(tab))
print(tab[i] + tab[i+1])
Explanation: We can pinpoint the problem by adding some intermediate print statements:
End of explanation
n = 1
if n = 1:
y = 0
else:
y = 1
Explanation: On the last iteration, $i+1$ becomes equal to the length of the list tab, whereas the last valid index of a list is len(tab)-1.
Q2
Where is the syntax error?
End of explanation
def somme_caracteres(mot):
s = 0
for c in mot :
s += ord(c) - ord("a") + 1
return s
somme_caracteres("elu")
Explanation: The equality test is written ==.
Q3
Assign the value 1 to the letter a, 2 to b, and so on. Write a function that computes the sum of these values for a character string.
Example: elu $\rightarrow$ 5 + 12 + 21 = 38
End of explanation
def somme_caracteres(mot):
return sum(ord(c) - ord("a") + 1 for c in mot)
somme_caracteres("elu")
Explanation: It can be written more concisely:
End of explanation
y = "a" * 3 + 1
z = 3 * "a" + 1
print(y,z)
Explanation: Exercise 2
Q1
Cross out the lines that would raise an error at runtime and explain why.
End of explanation
l = []
for i in range(0, 10):
l.append([i])
print(l)
Explanation: The first two lines are both incorrect because they try to add a string to a number. The first operation, "a" * 3, is valid: in either order it gives "aaa". But 1 cannot be added to "aaa".
Q2
What is the value of l at the end of the program?
End of explanation
l = []
for i in range(0, 10):
l.extend([i])
print(l)
Explanation: Do not confuse the append and extend methods: append adds its argument as a single element, while extend adds each of the argument's elements one by one.
End of explanation
def un_sur_deux(mot):
s = ""
for i,c in enumerate(mot):
if i % 2 == 0:
s += c
return s
un_sur_deux("python")
Explanation: Q3
Write a function that takes a character string and removes every other letter from it.
End of explanation
def un_sur_deux(mot):
return "".join( c for i,c in enumerate(mot) if i % 2 == 0 )
un_sur_deux("python")
Explanation: Or, even shorter:
End of explanation |
4,421 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Manuscript3 - Analysis of Intrinsic Network FC Properties for Fig. 3
Master code for Ito et al., 2017¶
Takuya Ito ([email protected])
Step1: 1.0 Basic parameters
Step2: 2.0 Compute out-of-network intrinsic FC for each network (rest)
Step3: Interim analysis - visualize MultRegFC matrix using Glasser networks
Step4: Visualize Regular FC matrix using Glasser networks
Step5: Visualize data
Step6: Compute BGC
Step7: Compute statistics using FWER - see if FPN is greater than all other networks | Python Code:
import sys
sys.path.append('utils/')
import numpy as np
import loadGlasser as lg
import scipy.stats as stats
from IPython.display import display, HTML
import matplotlib.pyplot as plt
import statsmodels.sandbox.stats.multicomp as mc
import sys
import multiprocessing as mp
import pandas as pd
import multregressionconnectivity as mreg
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
import permutationTesting as pt
import os
os.environ['OMP_NUM_THREADS'] = str(1)
Explanation: Manuscript3 - Analysis of Intrinsic Network FC Properties for Fig. 3
Master code for Ito et al., 2017¶
Takuya Ito ([email protected])
End of explanation
# Set basic parameters
basedir = '/projects2/ModalityControl2/'
datadir = basedir + 'data/'
resultsdir = datadir + 'resultsMaster/'
runLength = 4648
subjNums = ['032', '033', '037', '038', '039', '045',
'013', '014', '016', '017', '018', '021',
'023', '024', '025', '026', '027', '031',
'035', '046', '042', '028', '048', '053',
'040', '049', '057', '062', '050', '030', '047', '034']
glasserparcels = lg.loadGlasserParcels()
networkdef = lg.loadGlasserNetworks()
networkmappings = {'fpn':7, 'vis':1, 'smn':2, 'con':3, 'dmn':6, 'aud1':8, 'aud2':9, 'dan':11}
# Force aud2 key to be the same as aud1
aud2_ind = np.where(networkdef==networkmappings['aud2'])[0]
networkdef[aud2_ind] = networkmappings['aud1']
# Define new network mappings with no aud1/aud2 distinction
networkmappings = {'fpn':7, 'vis':1, 'smn':2, 'con':3, 'dmn':6, 'aud':8, 'dan':11}
# Define new network mappings with no aud1/aud2 distinction
networkmappings = {'fpn':7, 'vis':1, 'smn':2, 'con':3, 'dmn':6, 'aud':8, 'dan':11,
'prem':5, 'pcc':10, 'none':12, 'hipp':13, 'pmulti':14}
# Import network reordering
networkorder = np.asarray(sorted(range(len(networkdef)), key=lambda k: networkdef[k]))
order = networkorder
order.shape = (len(networkorder),1)
# Construct xticklabels and xticks
networks = networkmappings.keys()
xticks = {}
reorderednetworkaffil = networkdef[networkorder]
for net in networks:
netNum = networkmappings[net]
netind = np.where(reorderednetworkaffil==netNum)[0]
tick = np.max(netind)
xticks[tick] = net
from matplotlib.colors import Normalize
class MidpointNormalize(Normalize):
def __init__(self, vmin=None, vmax=None, midpoint=None, clip=False):
self.midpoint = midpoint
Normalize.__init__(self, vmin, vmax, clip)
def __call__(self, value, clip=None):
# I'm ignoring masked values and all kinds of edge cases to make a
# simple example...
x, y = [self.vmin, self.midpoint, self.vmax], [0, 0.5, 1]
return np.ma.masked_array(np.interp(value, x, y))
networks = networkmappings.keys()
for net in networks:
ind = np.sum(networkdef==networkmappings[net])
print net, ind
netkeys = {0:'fpn', 1:'dan', 2:'con', 3:'dmn', 4:'vis', 5:'aud', 6:'smn'}
print netkeys
Explanation: 1.0 Basic parameters
End of explanation
# Compute rsfcMRI using multiple linear regression on Glasser parcels
fcmat_multreg = np.zeros((360,360,len(subjNums)))
scount = 0
for subj in subjNums:
outdir = '/projects2/ModalityControl2/data/resultsMaster/MultRegConnRestFC_GlasserParcels/'
outfile = subj + '_multregconn_restfc.csv'
fcmat_multreg[:,:,scount] = np.loadtxt(outdir + outfile, delimiter=',')
scount += 1
Explanation: 2.0 Compute out-of-network intrinsic FC for each network (rest)
End of explanation
tmp = np.mean(fcmat_multreg,axis=2)
plt.figure()
norm = MidpointNormalize(midpoint=0)
plt.imshow(tmp[order,order.T], norm=norm, origin='lower', cmap='bwr', vmin=-.1, vmax=.1)
plt.colorbar(fraction=0.046)
plt.title('Averaged MultReg FC Matrix',
fontsize=16, y=1.04)
plt.xlabel('Regions',fontsize=12)
plt.ylabel('Regions', fontsize=12)
plt.xticks(xticks.keys(),xticks.values(), rotation=-45)
plt.yticks(xticks.keys(),xticks.values())
plt.grid(linewidth=1)
plt.tight_layout()
# plt.savefig('Group_MultRegFC.pdf')
Explanation: Interim analysis - visualize MultRegFC matrix using Glasser networks
End of explanation
timeseries = np.zeros((360,360,len(subjNums)))
scount = 0
for subj in subjNums:
indir = '/projects2/ModalityControl2/data/resultsMaster/glmRest_GlasserParcels/'
filename = indir + subj + '_rest_nuisanceResids_Glasser.csv'
tmp = np.loadtxt(filename, delimiter=',')
timeseries[:,:,scount] = np.corrcoef(tmp)
scount += 1
Explanation: Visualize Regular FC matrix using Glasser networks
End of explanation
mat = np.mean(timeseries,axis=2)
np.fill_diagonal(mat,0)
mat = mat[order,order.T]
norm = MidpointNormalize(midpoint=0)
plt.imshow(mat,origin='lower',norm=norm, cmap='bwr', interpolation='none')
plt.colorbar(fraction=0.046)
plt.title('Averaged Pearson FC Matrix',
fontsize=16, y=1.04)
plt.xlabel('Regions',fontsize=12)
plt.ylabel('Regions', fontsize=12)
plt.xticks(xticks.keys(),xticks.values(), rotation=-45)
plt.yticks(xticks.keys(),xticks.values())
plt.grid(linewidth=1)
plt.tight_layout()
# plt.savefig('Group_PearsonFC.pdf')
Explanation: Visualize data
End of explanation
network_gbc = {}
outofnet_gbc = {}
for net in netkeys.keys():
network_gbc[net] = []
outofnet_gbc[net] = []
net_ind = np.where(networkdef==networkmappings[netkeys[net]])[0]
outofnet_ind = np.where(networkdef!=networkmappings[netkeys[net]])[0]
scount = 0
for subj in subjNums:
tmp_outofnet = []
tmp_gbc = []
for roi in net_ind:
tmp_outofnet.append(np.mean(fcmat_multreg[roi,outofnet_ind,scount]))
# tmp_outofnet.append(np.mean(fcmat_multreg[roi,:,scount]))
tmp_gbc.append(np.mean(fcmat_multreg[roi,:,scount]))
outofnet_gbc[net].append(np.mean(tmp_outofnet))
network_gbc[net].append(np.mean(tmp_gbc))
scount += 1
outofnet_avg = {}
outofnet_sem = {}
network_avg = {}
network_sem = {}
for net in netkeys.keys():
outofnet_avg[net] = np.mean(outofnet_gbc[net])
outofnet_sem[net] = np.std(outofnet_gbc[net])/np.sqrt(len(subjNums))
network_avg[net] = np.mean(network_gbc[net])
network_sem[net] = np.std(network_gbc[net])/np.sqrt(len(subjNums))
## Plot average Out-of-Network FC
# width = .35
nbars = len(netkeys)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.bar(np.arange(nbars), outofnet_avg.values(), align='center',
yerr=outofnet_sem.values(), color='b', error_kw=dict(ecolor='black'))
ax.set_title('Average Out-of-Network (or BGC) Rest FC (MultRegFC)',
y=1.04, fontsize=16)
ax.set_ylabel('Average FC',fontsize=12)
ax.set_xlabel('Networks', fontsize=12)
ax.set_xticks(np.arange(nbars))
ax.set_xticklabels(netkeys.values(),rotation=-45)
# ax.set_ylim([0.0025,.003])
plt.tight_layout()
# plt.savefig('Fig1_OutOfNetworkRestFC_MultReg.pdf')
Explanation: Compute BGC
End of explanation
netkeys
outofnet_avg
outofnet_stats = {}
ps = []
diff_gbc = {}
ccns = ['fpn','dan','con']
ccnkeys = {'fpn':0, 'dan':1, 'con':2}
for ccn in ccns:
ccnkey = ccnkeys[ccn]
diff_gbc[ccn] = np.zeros((len(netkeys)-len(ccns),len(subjNums)))
outofnet_stats[ccn] = {}
count = 0
for net in netkeys.keys():
outofnet_stats[ccn][net] = {}
if net==ccn: continue
outofnet_stats[ccn][net]['T-value'], outofnet_stats[ccn][net]['p-value'] = stats.ttest_rel(outofnet_gbc[ccnkey],
outofnet_gbc[net])
if netkeys[net] not in ccns:
diff_gbc[ccn][count,:] = np.asarray(outofnet_gbc[ccnkey]) - np.asarray(outofnet_gbc[net])
count += 1
# Correct to 1-sided t-test
if outofnet_stats[ccn][net]['T-value'] < 0:
outofnet_stats[ccn][net]['p-value'] = 1-(outofnet_stats[ccn][net]['p-value']/2.0)
else:
outofnet_stats[ccn][net]['p-value'] = outofnet_stats[ccn][net]['p-value']/2.0
ps.append(outofnet_stats[ccn][net]['p-value'])
# We know that FPN is the 0th network key. So exclude this network when running multiple comparsisons
# tmp = mc.fdrcorrection0(ps)[1]
for ccn in ccns:
ccnkey = ccnkeys[ccn]
avg_t = []
count = 0
tmp = np.delete(diff_gbc,ccnkey,axis=0)
t_fwe, p_fwe = pt.permutationFWE(diff_gbc[ccn], nullmean=0, permutations=1000, nproc=10)
p_fwe = 1.0 - p_fwe
for net in outofnet_avg.keys():
if netkeys[net] == ccn: continue
j = 0
if count <=1:
outofnet_stats[ccn][net]['p-value (FWE-corrected)'] = np.nan
else:
outofnet_stats[ccn][net]['p-value (FWE-corrected)'] = round(p_fwe[j],3)
j += 1
outofnet_stats[ccn][net]['p-value'] = round(outofnet_stats[ccn][net]['p-value'],4)
outofnet_stats[ccn][net]['T-value'] = round(outofnet_stats[ccn][net]['T-value'],4)
if netkeys[net] not in ccns:
avg_t.append(round(outofnet_stats[ccn][net]['T-value'],4))
count += 1
print 'Average t of', ccn, 'greater than non-ccn:', np.mean(avg_t)
print 'Average effect size of', ccn, ':', np.mean(outofnet_gbc[ccnkey])
df_outofnet_stats = pd.DataFrame(outofnet_stats[ccn])
display(df_outofnet_stats)
print '\n'
Explanation: Compute statistics using FWER - see if FPN is greater than all other networks
End of explanation |
4,422 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simple example of local processing
In this first tutorial we will show a complete example of usage of the library using some example datasets provided with it.
Importing the library
Step1: Loading datasets
PyGMQL can work with BED and GTF files with arbitrary fields and schemas. In order to load a dataset into Python the user can use the following functions
Step2: The GMQLDataset
The dataset variable defined above is a GMQLDataset, which represents a GMQL variable and on which it is possible to apply GMQL operators. It must be noticed that no data has been loaded in memory yet and the computation will only start when the query is triggered. We will see how to start the execution of a query in the following steps.
We can inspect the schema of the dataset with the following
Step3: Filtering the dataset regions based on a predicate
The first operation we will do on dataset will be selecting only the genomic regions on the 3rd chromosome and with a start position greater than 30000.
Step4: From this operation we can learn several things about the GMQLDataset data structure. Each GMQLDataset has a set of methods and fields which can be used to build GMQL queries. For example, in the previous statement we have
Step5: Notice that the notation for selecting the samples using metadata is the same as the one for filtering Pandas DataFrames.
Joining two datasets
It is not the focus of this tutorial to show all the possible operations which can be done on a GMQLDataset, they can be seen on the documentation page of the library.
For the sake of this example, let's show the JOIN operation between the two filtered datasets defined in the previous two steps.
The JOIN operation semantics relies on the concept of reference and experiment datasets. The reference dataset is the one 'calling' the join function while the experiment dataset is the one 'on which' the function is called. The semantics of the function is
resulting_dataset = <reference>.join(<experiment>, <genometric predicate>, ...)
Step6: To understand the concept of genometric predicate please visit the documentation of the library.
Materialization of the results
As we have already said, no operation has actually been executed up to this point. What we did up to now is to define the sequence of operations to apply to the data. In order to trigger the execution we have to apply the materialize function on the variable we want to compute.
Step7: The GDataframe
The query_result variable holds the result of the previous GMQL query in the form of a GDataframe data structure. It holds the information about the regions and the metadata of the result, which can be respectively accessed through the regs and meta attributes.
Regions
Step8: Metadata | Python Code:
import gmql as gl
Explanation: Simple example of local processing
In this first tutorial we will show a complete example of usage of the library using some example datasets provided with it.
Importing the library
End of explanation
dataset1 = gl.get_example_dataset("Example_Dataset_1")
dataset2 = gl.get_example_dataset("Example_Dataset_2")
Explanation: Loading datasets
PyGMQL can work with BED and GTF files with arbitrary fields and schemas. In order to load a dataset into Python the user can use the following functions:
- load_from_path: lazily loads a dataset into a GMQLDataset variable from the local file system
- load_from_remote: lazily loads a dataset into a GMQLDataset variable from a remote GMQL service
- from_pandas: lazily loads a dataset into a GMQLDataset variable from a Pandas DataFrame having at least the chromosome, start and stop columns
In addition to these functions we also provide a function called get_example_dataset which enables the user to load a sample dataset and play with it in order to get familiar with the library. Currently we provide two example datasets: Example_Dataset_1 and Example_Dataset_2.
In the following we will load two example datasets and play with them.
End of explanation
dataset1.schema
dataset2.schema
Explanation: The GMQLDataset
The dataset variable defined above is a GMQLDataset, which represents a GMQL variable and on which it is possible to apply GMQL operators. It must be noticed that no data has been loaded in memory yet and the computation will only start when the query is triggered. We will see how to start the execution of a query in the following steps.
We can inspect the schema of the dataset with the following:
End of explanation
filtered_dataset1 = dataset1.reg_select((dataset1.chr == 'chr3') & (dataset1.start >= 30000))
Explanation: Filtering the dataset regions based on a predicate
The first operation we will do on dataset will be selecting only the genomic regions on the 3rd chromosome and with a start position greater than 30000.
End of explanation
filtered_dataset_2 = dataset2[dataset2['antibody_target'] == 'CTCF']
Explanation: From this operation we can learn several things about the GMQLDataset data structure. Each GMQLDataset has a set of methods and fields which can be used to build GMQL queries. For example, in the previous statement we have:
- the reg_select method, which enables us to filter the datasets on the basis of a predicate on the region positions and features
- the chr and start fields, which enable the user to build predicates on the fields of the dataset.
Every GMQL operator has a relative method accessible from the GMQLDataset data structure, as well as any other field of the dataset.
Filtering a dataset based on a predicate on metadata
The Genomic Data Model enables us to work both with genomic regions and their relative metadata. Therefore we can filter dataset samples on the basis of predicates on metadata attributes. This can be done as follows:
End of explanation
dataset_join = dataset1.join(dataset2, [gl.DLE(0)])
Explanation: Notice that the notation for selecting the samples using metadata is the same as the one for filtering Pandas DataFrames.
Joining two datasets
It is not the focus of this tutorial to show all the possible operations which can be done on a GMQLDataset, they can be seen on the documentation page of the library.
For the sake of this example, let's show the JOIN operation between the two filtered datasets defined in the previous two steps.
The JOIN operation semantics relies on the concept of reference and experiment datasets. The reference dataset is the one 'calling' the join function while the experiment dataset is the one 'on which' the function is called. The semantics of the function is
resulting_dataset = <reference>.join(<experiment>, <genometric predicate>, ...)
End of explanation
query_result = dataset_join.materialize()
Explanation: To understand the concept of genometric predicate please visit the documentation of the library.
Materialization of the results
As we have already said, no operation has actually been executed up to this point. What we did up to now is to define the sequence of operations to apply to the data. In order to trigger the execution we have to apply the materialize function on the variable we want to compute.
End of explanation
query_result.regs.head()
Explanation: The GDataframe
The query_result variable holds the result of the previous GMQL query in the form of a GDataframe data structure. It holds the information about the regions and the metadata of the result, which can be respectively accessed through the regs and meta attributes.
Regions
End of explanation
query_result.meta.head()
Explanation: Metadata
End of explanation |
4,423 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Hierarchical model for Rugby prediction
Step2: This is a Rugby prediction exercise. So we'll input some data
Step3: The model.
<p>The league is made up of a total of T = 6 teams, playing each other once
in a season. We indicate the number of points scored by the home and the away team in the g-th game of the season (15 games) as $y_{g1}$ and $y_{g2}$ respectively. </p>
<p>The vector of observed counts $\mathbb{y} = (y_{g1}, y_{g2})$ is modelled as independent Poisson
Step4: We specified the model and the likelihood function
Now we need to fit our model using the Maximum A Posteriori algorithm to decide where to start our No-U-Turn Sampler | Python Code:
!date
import numpy as np
import pandas as pd
try:
from StringIO import StringIO
except ImportError:
from io import StringIO
%matplotlib inline
import pymc3 as pm3, theano.tensor as tt
Explanation: A Hierarchical model for Rugby prediction
End of explanation
data_csv = StringIO(home_team,away_team,home_score,away_score
Wales,Italy,23,15
France,England,26,24
Ireland,Scotland,28,6
Ireland,Wales,26,3
Scotland,England,0,20
France,Italy,30,10
Wales,France,27,6
Italy,Scotland,20,21
England,Ireland,13,10
Ireland,Italy,46,7
Scotland,France,17,19
England,Wales,29,18
Italy,England,11,52
Wales,Scotland,51,3
France,Ireland,20,22)
Explanation: This is a Rugby prediction exercise. So we'll input some data
End of explanation
df = pd.read_csv(data_csv)
teams = df.home_team.unique()
teams = pd.DataFrame(teams, columns=['team'])
teams['i'] = teams.index
df = pd.merge(df, teams, left_on='home_team', right_on='team', how='left')
df = df.rename(columns = {'i': 'i_home'}).drop('team', 1)
df = pd.merge(df, teams, left_on='away_team', right_on='team', how='left')
df = df.rename(columns = {'i': 'i_away'}).drop('team', 1)
observed_home_goals = df.home_score.values
observed_away_goals = df.away_score.values
home_team = df.i_home.values
away_team = df.i_away.values
num_teams = len(df.i_home.drop_duplicates())
num_games = len(home_team)
g = df.groupby('i_away')
att_starting_points = np.log(g.away_score.mean())
g = df.groupby('i_home')
def_starting_points = -np.log(g.away_score.mean())
model = pm3.Model()
with pm3.Model() as model:
# global model parameters
home = pm3.Normal('home', 0, .0001)
tau_att = pm3.Gamma('tau_att', .1, .1)
tau_def = pm3.Gamma('tau_def', .1, .1)
intercept = pm3.Normal('intercept', 0, .0001)
# team-specific model parameters
atts_star = pm3.Normal("atts_star",
mu =0,
tau =tau_att,
shape=num_teams)
defs_star = pm3.Normal("defs_star",
mu =0,
tau =tau_def,
shape=num_teams)
atts = pm3.Deterministic('atts', atts_star - tt.mean(atts_star))
defs = pm3.Deterministic('defs', defs_star - tt.mean(defs_star))
    home_theta = tt.exp(intercept + home + atts[home_team] + defs[away_team])
away_theta = tt.exp(intercept + atts[away_team] + defs[home_team])
# likelihood of observed data
home_points = pm3.Poisson('home_points', mu=home_theta, observed=observed_home_goals)
away_points = pm3.Poisson('away_points', mu=away_theta, observed=observed_away_goals)
Explanation: The model.
<p>The league is made up of a total of T = 6 teams, playing each other once
in a season. We indicate the number of points scored by the home and the away team in the g-th game of the season (15 games) as $y_{g1}$ and $y_{g2}$ respectively. </p>
<p>The vector of observed counts $\mathbb{y} = (y_{g1}, y_{g2})$ is modelled as independent Poisson:
$y_{gi} | \theta_{gj} \sim Poisson(\theta_{gj})$
where the theta parameters represent the scoring intensity in the g-th game for the team playing at home (j=1) and away (j=2), respectively.</p>
<p>We model these parameters according to a formulation that has been used widely in the statistical literature, assuming a log-linear random effect model:
$$\log \theta_{g1} = home + att_{h(g)} + def_{a(g)}$$
$$\log \theta_{g2} = att_{a(g)} + def_{h(g)}$$
the parameter home represents the advantage for the team hosting the game
and we assume that this effect is constant for all the teams and
throughout the season.
End of explanation
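# To make the log-linear formulation above concrete: the expected score of the
# home side is exp(intercept + home + att_home + def_away). The numbers below are
# made up purely for illustration and are not fitted values.
import numpy as np
toy_intercept, toy_home = 2.5, 0.3
toy_att_home, toy_def_away = 0.2, -0.1
print(np.exp(toy_intercept + toy_home + toy_att_home + toy_def_away))   # ~18.2 expected points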
with model:
start = pm3.find_MAP()
step = pm3.NUTS(state=start)
trace = pm3.sample(2000, step, start=start, progressbar=True)
pm3.traceplot(trace)
Explanation: We specified the model and the likelihood function
Now we need to fit our model using the Maximum A Posteriori algorithm to decide where to start our No-U-Turn Sampler
End of explanation |
4,424 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sun-Earth System
NOTE
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: Setting Parameters
Step3: Running Compute
Step4: We'll have the sun follow a roche potential and the earth follow a rotating sphere (rotstar).
NOTE
Step5: The temperatures of earth will fall far out of bounds for any atmosphere model, so let's set the earth to be a blackbody and use a supported limb-darkening model (the default 'interp' is not valid for blackbody atmospheres). | Python Code:
!pip install -I "phoebe>=2.1,<2.2"
Explanation: Sun-Earth System
NOTE: planets are currently under testing and not yet supported
Setup
Let's first make sure we have the latest version of PHOEBE 2.1 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
%matplotlib inline
import phoebe
from phoebe import u # units
from phoebe import c # constants
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary(starA='sun', starB='earth', orbit='earthorbit')
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
b.set_value('teff@sun', 1.0*u.solTeff)
b.set_value('requiv@sun', 1.0*u.solRad)
b.flip_constraint('period@sun', solve_for='syncpar')
b.set_value('period@sun', 24.47*u.d)
#b.set_value('incl', 23.5*u.deg)
b.set_value('teff@earth', 252*u.K)
b.set_value('requiv@earth', 1.0*c.R_earth)
b.flip_constraint('period@earth', solve_for='syncpar')
b.set_value('period@earth', 1*u.d)
b.set_value('sma@earthorbit', 1*u.AU)
b.set_value('period@earthorbit', 1*u.yr)
b.set_value('q@earthorbit', c.M_earth/c.M_sun)
#b.set_value('ecc@earthorbit')
print("Msun: {}".format(b.get_quantity('mass@sun@component', unit=u.solMass)))
print("Mearth: {}".format(b.get_quantity('mass@earth@component', unit=u.solMass)))
Explanation: Setting Parameters
End of explanation
b.add_dataset('mesh', times=[0.5], dataset='mesh01')
b.add_dataset('lc', times=np.linspace(-0.5,0.5,51), dataset='lc01')
b.set_value('ld_func@earth', 'logarithmic')
b.set_value('ld_coeffs@earth', [0.0, 0.0])
Explanation: Running Compute
End of explanation
b['distortion_method@earth'] = 'rotstar'
Explanation: We'll have the sun follow a roche potential and the earth follow a rotating sphere (rotstar).
NOTE: this doesn't work yet because the rpole<->potential is still being defined by roche, giving the earth a polar radius way too small.
End of explanation
b['atm@earth'] = 'blackbody'
b.set_value_all('ld_func@earth', 'logarithmic')
b.set_value_all('ld_coeffs@earth', [0, 0])
b.run_compute()
axs, artists = b.plot(dataset='mesh01', show=True)
axs, artists = b.plot(dataset='mesh01', component='sun', show=True)
axs, artists = b.plot(dataset='mesh01', component='earth', show=True)
b['requiv@earth@component']
axs, artists = b.plot(dataset='lc01', show=True)
Explanation: The temperatures of earth will fall far out of bounds for any atmosphere model, so let's set the earth to be a blackbody and use a supported limb-darkening model (the default 'interp' is not valid for blackbody atmospheres).
End of explanation |
4,425 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters
Step8: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
Step9: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
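A hedged sketch of a single SGD pass (variable names mirror the training loop shown later in this notebook; .loc is used here for label-based selection):
batch = np.random.choice(train_features.index, size=128)
X_batch, y_batch = train_features.loc[batch].values, train_targets.loc[batch]['cnt']
network.train(X_batch, y_batch)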
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
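To make the effect concrete, here is a minimal illustrative sketch of how the learning rate scales a weight step (numbers are arbitrary):
learning_rate = 0.1
delta_weight = 0.5                   # accumulated gradient for one weight
step = learning_rate * delta_weight  # the actual change applied to the weight: 0.05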
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
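As a rough illustration (assuming the losses dictionary filled by the training loop further below), you can compare settings like this:
print('final training loss:  ', losses['train'][-1])
print('best validation loss: ', min(losses['validation']))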
Step10: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
self.activation_function = lambda x : 1/(1 + np.exp(-x)) # Replace 0 with your sigmoid calculation.
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
#def sigmoid(x):
# return 0 # Replace 0 with your sigmoid calculation here
#self.activation_function = sigmoid
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# Output layer: this project uses the identity activation f(x) = x on the single output node
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer (no squashing for the regression output)
#### Implement the backward pass here ####
### Backward pass ###
# Output error
error = y - final_outputs # Output layer error is the difference between desired target and actual output.
# Backpropagated error terms: the derivative of f(x) = x is 1, so the output error term equals the error
output_error_term = error
# Hidden layer's contribution to the error
hidden_error = np.dot(self.weights_hidden_to_output, output_error_term)
hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs) # apply the sigmoid derivative
# Weight step (input to hidden); the learning rate is applied once in the update below
delta_weights_i_h += hidden_error_term * X[:, None]
# Weight step (hidden to output)
delta_weights_h_o += output_error_term * hidden_outputs[:, None]
# Update the weights with the averaged gradient descent step
self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights
self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# Output layer: the identity activation f(x) = x is used for the regression output
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
self.activation_function = lambda x : 1/(1 + np.exp(-x)) # Replace 0 with your sigmoid calculation.
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
#def sigmoid(x):
# return 0 # Replace 0 with your sigmoid calculation here
#self.activation_function = sigmoid
# Ok, let's see if this part works as planned.
bob = NeuralNetwork(3, 2, 1, 0.5)
bob.activation_function(0.5)
1/(1+np.exp(-0.5))
# Cool. Everything works there. Now, to figure out what the hell is going on with the train function.
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# Output layer: identity activation f(x) = x on the single output node
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer (no squashing for the regression output)
#### Implement the backward pass here ####
### Backward pass ###
# Output error
error = y - final_outputs # Output layer error is the difference between desired target and actual output.
# Backpropagated error terms: the derivative of f(x) = x is 1, so the output error term equals the error
output_error_term = error
# Hidden layer's contribution to the error
hidden_error = np.dot(self.weights_hidden_to_output, output_error_term)
hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs) # apply the sigmoid derivative
# Weight step (input to hidden); the learning rate is applied once in the update below
delta_weights_i_h += hidden_error_term * X[:, None]
# Weight step (hidden to output)
delta_weights_h_o += output_error_term * hidden_outputs[:, None]
# Update the weights with the averaged gradient descent step
self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights
self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights
features = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
n_records = features.shape[0]
bob.weights_input_to_hidden
bob.weights_input_to_hidden.shape
delta_weights_i_h = np.zeros(bob.weights_input_to_hidden.shape)
delta_weights_i_h
bob.weights_hidden_to_output
bob.weights_hidden_to_output.shape
delta_weights_h_o = np.zeros(bob.weights_hidden_to_output.shape)
delta_weights_h_o
jim = zip(features, targets)
features
targets
X = features
y = targets
#for X, y in zip(features, targets):
hidden_inputs = np.dot(X, bob.weights_input_to_hidden) # signals into hidden layer
X
bob.weights_input_to_hidden
hidden_inputs
hidden_outputs = bob.activation_function(hidden_inputs) # signals from hidden layer
hidden_outputs
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
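For reference, a minimal sketch of the two backpropagated error terms implied by the hint above (the output activation f(x) = x has derivative 1; the names refer to the local variables inside the train() method, so this fragment is illustrative rather than standalone):
output_error_term = error                                                  # f'(x) = 1 for the identity output
hidden_error = np.dot(weights_hidden_to_output, output_error_term)        # push the error back through the weights
hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)  # apply the sigmoid derivative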
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
End of explanation
import sys
### Set the hyperparameters here ###
iterations = 100
learning_rate = 0.1
hidden_nodes = 2
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
End of explanation
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation |
4,426 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
eDisGo basic example
This example shows you the first steps with eDisGo. Grid expansion costs for an example distribution grid are calculated assuming renewable and conventional power plant capacities as stated in the scenario framework of the German Grid Development Plan 2015 (Netzentwicklungsplan) for the year 2035 (scenario B2). Through this, the data structure used in eDisGo is explained and it is shown how to get distribution grid data, how to use the automatic grid reinforcement methodology to determine grid expansion needs and costs and how to evaluate your results.
Learn more about eDisGo
eDisGo Source Code
eDisGo Documentation
Table of Contents
Installation
Settings
eDisGo data structure
Future generator scenario
Grid reinforcement
Results evaluation
References
Installation <a class="anchor" id="installation"></a>
This notebook requires a working installation of eDisGo as well as jupyter notebook to run the example and contextily and geopandas to view the grid topology on a map. You can install all of these as follows
Step1: Settings <a class="anchor" id="settings"></a>
The class EDisGo serves as the top-level API for
setting up your scenario, invocation of data import, power flow analysis, grid reinforcement and flexibility measures. It also provides access to all relevant data. See the class documentation for more information.
To set up a scenario to do a worst-case analysis that considers the heavy load flow and reverse power flow cases used in distribution grid planning, you simply have to provide a grid and set the 'worst_case_analysis' parameter, both of which are explained in the following two sections.
Distribution grid data
Currently, synthetic grid data generated with the python project
ding0
is the only supported data source for distribution grid data. ding0 provides the grid topology data in the form of csv files, with separate files for buses, lines, loads, generators, etc. You can retrieve ding0 data from
Zenodo
(make sure you choose latest data) or check out the
Ding0 documentation
on how to generate grids yourself. A ding0 example grid can be viewed here. It is possible to provide your own grid data if it is in the same format as the ding0 grid data.
This example works with any ding0 grid data. If you don't have grid data yet, you can execute the following to download the example grid data mentioned above.
Step2: The ding0 grid you want to use in your analysis is specified through the input parameter 'ding0_grid' of the EDisGo class. The following assumes you want to use the ding0 example grid downloaded above. To use a different ding0 grid, just change the path below.
Step3: Specifying worst-cases
In conventional grid expansion planning worst-cases, the heavy load flow and the reverse power flow, are used to determine grid expansion needs. eDisGo allows you to analyze these cases separately or together. Choose between the following options
Step4: Now we are ready to initialize the edisgo object.
Step5: eDisGo data structure <a class="anchor" id="network"></a>
As stated above, the EDisGo class serves as the top-level API and provides access to all relevant data. It also enables plotting of the grid topology. In order to have a look at the MV grid topology, you can use the following plot.
Step6: Here, red nodes stand for the substation's secondary side, light blue nodes for distribution substation's primary sides, green nodes for nodes fluctuating generators are connected to, grey nodes for disconnecting points and dark blue nodes show branch tees.
Underlying LV grids are not georeferenced in ding0, which is why a plot of the LV grids analogous to the one shown above is not provided. A different possibility to get a graphical representation of LV grids is shown later in this example. Let's first get into eDisGo's data structure.
Grid data is stored in the Topology class.
Time series data can be found in the TimeSeries class. Results data holding results e.g. from the power flow analysis and grid expansion is stored in the Results class.
Configuration data from the config files (see default_configs) is stored
in the Config class.
All these can be accessed as follows
Step7: The grids can also be accessed individually. The MV grid is stored in an MVGrid object and each LV grid in an
LVGrid object.
The MV grid topology can be accessed through
Step8: A list of all LV grids can be retrieved through
Step9: Access to a single LV grid's components can be obtained analogously to what was shown above for
the whole topology and the MV grid
Step10: A single grid's generators, loads, storage units and switches can also be
retrieved as Generator object,
Load object, Storage object, and
Switch objects, respectively
Step11: For some applications it is helpful to get a graph representation of the grid,
e.g. to find the path from the station to a generator. The graph representation
of the whole topology or each single grid can be retrieved as follows
Step12: In case of the LV grids, the graph can be used to get a rudimentary graphical representation
Step13: Future generator scenario <a class="anchor" id="generator_scenario"></a>
eDisGo was originally developed in the open_eGo research project. In the open_eGo project two future scenarios were developed: the 'NEP 2035' and the 'ego 100' scenario. The 'NEP 2035' scenario closely follows the B2-Scenario 2035 from the German network development plan (Netzentwicklungsplan NEP) 2015. The share of renewables is 65.8%, and electricity demand is assumed to stay the same as in the status quo. The 'ego 100' scenario is based on the e-Highway 2050 scenario and assumes a share of renewables of 100% and again an equal electricity demand as in the status quo.
As mentioned earlier, ding0 grids represent status quo networks with status quo generator capacities (base year is the year 2015). In order to analyse future scenarios future generators have to be imported into the network.
Step14: Let's have a look at the MV grid topology in the NEP 2035 scenario
Step15: Grid reinforcement <a class="anchor" id="grid_reinforcement"></a>
Now we can calculate grid expansion costs that arise from the integration of the new generators.
The grid expansion methodology is based on the distribution grid study of dena [1] and Baden-Wuerttemberg [2]. The order in which grid expansion measures are conducted is as follows
Step16: Let's check voltages and line loadings before the reinforcement.
Step17: Reinforcement is invoked doing the following
Step18: Let's check voltages and line loadings again
Step19: Evaluate results <a class="anchor" id="evaluation"></a>
Results such as voltages at nodes and line loading from the power flow analysis as well as
grid expansion costs are provided through the Results class. Above it was already shown how to access
the results
Step20: An overview of the assumptions used to calculate grid expansion costs can be found in the documentation.
You can also view grid expansion costs for equipment in the MV using the following plot
Step21: Results can be saved to csv files with | Python Code:
import os
import sys
import pandas as pd
from edisgo import EDisGo
Explanation: eDisGo basic example
This example shows you the first steps with eDisGo. Grid expansion costs for an example distribution grid are calculated assuming renewable and conventional power plant capacities as stated in the scenario framework of the German Grid Development Plan 2015 (Netzentwicklungsplan) for the year 2035 (scenario B2). Through this, the data structure used in eDisGo is explained and it is shown how to get distribution grid data, how to use the automatic grid reinforcement methodology to determine grid expansion needs and costs and how to evaluate your results.
Learn more about eDisGo
eDisGo Source Code
eDisGo Documentation
Table of Contents
Installation
Settings
eDisGo data structure
Future generator scenario
Grid reinforcement
Results evaluation
References
Installation <a class="anchor" id="installation"></a>
This notebook requires a working installation of eDisGo as well as jupyter notebook to run the example and contextily and geopandas to view the grid topology on a map. You can install all of these as follows:
python
pip install eDisGo[examples,geoplot]
Check out the eDisGo documentation on how to install eDisGo for more information.
Import packages
End of explanation
import requests
def download_ding0_example_grid():
# create directories to save ding0 example grid into
ding0_example_grid_path = os.path.join(
os.path.expanduser("~"),
".edisgo",
"ding0_test_network")
os.makedirs(
ding0_example_grid_path,
exist_ok=True)
# download files
filenames = [
"buses", "generators", "lines", "loads", "network",
"switches", "transformers", "transformers_hvmv"]
for file in filenames:
req = requests.get(
"https://raw.githubusercontent.com/openego/eDisGo/dev/tests/ding0_test_network_2/{}.csv".format(file))
filename = os.path.join(ding0_example_grid_path, "{}.csv".format(file))
with open(filename, "wb") as fout:
fout.write(req.content)
download_ding0_example_grid()
Explanation: Settings <a class="anchor" id="settings"></a>
The class EDisGo serves as the top-level API for
setting up your scenario, invocation of data import, power flow analysis, grid reinforcement and flexibility measures. It also provides access to all relevant data. See the class documentation for more information.
To set up a scenario to do a worst-case analysis that considers the heavy load flow and reverse power flow cases used in distribution grid planning, you simply have to provide a grid and set the 'worst_case_analysis' parameter, both of which are explained in the following two sections.
Distribution grid data
Currently, synthetic grid data generated with the python project
ding0
is the only supported data source for distribution grid data. ding0 provides the grid topology data in the form of csv files, with separate files for buses, lines, loads, generators, etc. You can retrieve ding0 data from
Zenodo
(make sure you choose latest data) or check out the
Ding0 documentation
on how to generate grids yourself. A ding0 example grid can be viewed here. It is possible to provide your own grid data if it is in the same format as the ding0 grid data.
This example works with any ding0 grid data. If you don't have grid data yet, you can execute the following to download the example grid data mentioned above.
End of explanation
ding0_grid = os.path.join(
os.path.expanduser("~"),
".edisgo",
"ding0_test_network")
Explanation: The ding0 grid you want to use in your analysis is specified through the input parameter 'ding0_grid' of the EDisGo class. The following assumes you want to use the ding0 example grid downloaded above. To use a different ding0 grid, just change the path below.
End of explanation
worst_case_analysis = 'worst-case'
Explanation: Specifying worst-cases
In conventional grid expansion planning worst-cases, the heavy load flow and the reverse power flow, are used to determine grid expansion needs. eDisGo allows you to analyze these cases separately or together. Choose between the following options:
’worst-case-feedin’
Feed-in and demand for the worst-case scenario "reverse power flow" are generated. Demand is by default set to 15% of maximum demand for loads connected to the MV grid and 10% for loads connected to the LV grid. Feed-in of all generators is set to the nominal power of the generator, except for PV systems where it is by default set to 85% of the nominal power.
’worst-case-load’
Feed-in and demand for the worst-case scenario "heavy load flow" are generated. Demand of all loads is by default set to maximum demand; feed-in of all generators is set to zero.
’worst-case’
Feed-in and demand for the two worst-case scenarios "reverse power flow" and "heavy load flow" are generated.
Feed-in and demand in the two worst-cases are defined in the config file 'config_timeseries.cfg' and can be changed by setting different values in the config file.
Instead of doing a worst-case analysis you can also provide your own timeseries for demand and feed-in and use those in the power flow analysis. EDisGo also offers methods to generate load and feed-in time series. Check out the EDisGo class documentation and examples in the getting started documentation section for more information.
End of explanation
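For illustration only, restricting the analysis to a single worst case would just change the parameter value (this extra object is not needed for the rest of the example):
# e.g. only the "reverse power flow" case
edisgo_feedin_only = EDisGo(ding0_grid=ding0_grid, worst_case_analysis='worst-case-feedin')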
edisgo = EDisGo(ding0_grid=ding0_grid,
worst_case_analysis=worst_case_analysis)
Explanation: Now we are ready to initialize the edisgo object.
End of explanation
edisgo.plot_mv_grid_topology(technologies=True)
Explanation: eDisGo data structure <a class="anchor" id="network"></a>
As stated above, the EDisGo class serves as the top-level API and provides access to all relevant data. It also enables plotting of the grid topology. In order to have a look at the MV grid topology, you can use the following plot.
End of explanation
# Access all buses in MV grid and underlying LV grids
# .head() enables only viewing the first entries of the dataframe
edisgo.topology.buses_df.head()
# Access all lines in MV grid and underlying LV grids
edisgo.topology.mv_grid.lines_df.head()
# Access all generators in MV grid and underlying LV grids
edisgo.topology.generators_df.head()
Explanation: Here, red nodes stand for the substation's secondary side, light blue nodes for distribution substation's primary sides, green nodes for nodes fluctuating generators are connected to, grey nodes for disconnecting points and dark blue nodes show branch tees.
Underlying LV grids are not georeferenced in ding0, which is why a plot of the LV grids analogous to the one shown above is not provided. A different possibility to get a graphical representation of LV grids is shown later in this example. Let's first get into eDisGo's data structure.
Grid data is stored in the Topology class.
Time series data can be found in the TimeSeries class. Results data holding results e.g. from the power flow analysis and grid expansion is stored in the Results class.
Configuration data from the config files (see default_configs) is stored
in the Config class.
All these can be accessed as follows:
python
edisgo.topology
edisgo.timeseries
edisgo.results
edisgo.config
The grid data in the Topology object is stored in pandas DataFrames.
There are extra data frames for all grid elements (buses, lines, switches, transformers), as well as generators, loads and storage units.
You can access those dataframes as follows:
End of explanation
# Access all buses in MV grid
edisgo.topology.mv_grid.buses_df.head()
# Access all generators in MV grid
edisgo.topology.mv_grid.generators_df.head()
Explanation: The grids can also be accessed individually. The MV grid is stored in an MVGrid object and each LV grid in an
LVGrid object.
The MV grid topology can be accessed through:
python
edisgo.topology.mv_grid
Its components can be accessed analog to those of the whole grid topology as shown above.
End of explanation
# Get list of all underlying LV grids
# (Note that MVGrid.lv_grids returns a generator object that must first be
# converted to a list in order to view the LVGrid objects)
list(edisgo.topology.mv_grid.lv_grids)
Explanation: A list of all LV grids can be retrieved through:
End of explanation
# Get single LV grid
lv_grid = list(edisgo.topology.mv_grid.lv_grids)[0]
# Access all buses in that LV grid
lv_grid.buses_df
# Access all loads in that LV grid
lv_grid.loads_df
Explanation: Access to a single LV grid's components can be obtained analogously to what was shown above for
the whole topology and the MV grid:
End of explanation
# Get all switch disconnectors in MV grid as Switch objects
# (Note that objects are returned as a python generator object that must
# first be converted to a list in order to view the Load objects)
list(edisgo.topology.mv_grid.switch_disconnectors)
# Have a look at the state (open or closed) of one of the switch disconnectors
switch = list(edisgo.topology.mv_grid.switch_disconnectors)[0]
switch.state
# Get all loads in LV grid as Load objects
list(lv_grid.loads)
# Have a look at the load time series of one of the loads
load = list(lv_grid.loads)[0]
load.active_power_timeseries
Explanation: A single grid's generators, loads, storage units and switches can also be
retrieved as Generator object,
Load object, Storage object, and
Switch objects, respectively:
End of explanation
edisgo.to_graph()
Explanation: For some applications it is helpful to get a graph representation of the grid,
e.g. to find the path from the station to a generator. The graph representation
of the whole topology or each single grid can be retrieved as follows:
```python
Get graph representation of whole topology
edisgo.to_graph()
Get graph representation for MV grid
edisgo.topology.mv_grid.graph
Get graph representation for LV grid
lv_grid.graph
```
The returned graph is a networkx.Graph, where lines are represented
by edges in the graph, and buses and transformers are represented by nodes.
End of explanation
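A small sketch of how such a graph could be used with standard networkx functions, assuming the graph nodes are the bus names from buses_df (the two names in the commented call are placeholders):
import networkx as nx
graph = edisgo.to_graph()
# nx.shortest_path(graph, source='BusBar_MVGrid_1_MV', target='Bus_GeneratorFluctuating_2')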
# draw graph of one of the LV grids
import networkx as nx
lv_grid = list(edisgo.topology.mv_grid.lv_grids)[5]
nx.draw(lv_grid.graph)
Explanation: In case of the LV grids, the graph can be used to get a rudimentary graphical representation:
End of explanation
# Get installed capacity in Status Quo
edisgo.topology.generators_df.p_nom.sum()
# Import generators
scenario = 'nep2035'
edisgo.import_generators(generator_scenario=scenario)
# Get installed capacity in NEP 2035 scenario
edisgo.topology.generators_df.p_nom.sum()
Explanation: Future generator scenario <a class="anchor" id="generator_scenario"></a>
eDisGo was originally developed in the open_eGo research project. In the open_eGo project two future scenarios were developed: the 'NEP 2035' and the 'ego 100' scenario. The 'NEP 2035' scenario closely follows the B2-Scenario 2035 from the German network development plan (Netzentwicklungsplan NEP) 2015. The share of renewables is 65.8%, and electricity demand is assumed to stay the same as in the status quo. The 'ego 100' scenario is based on the e-Highway 2050 scenario and assumes a share of renewables of 100% and again an equal electricity demand as in the status quo.
As mentioned earlier, ding0 grids represent status quo networks with status quo generator capacities (base year is the year 2015). In order to analyse future scenarios future generators have to be imported into the network.
End of explanation
edisgo.plot_mv_grid_topology(technologies=True)
Explanation: Let's have a look at the MV grid topology in the NEP 2035 scenario:
End of explanation
# Do non-linear power flow analysis with PyPSA
edisgo.analyze()
# feed-in case
edisgo.plot_mv_line_loading(
node_color='voltage_deviation',
timestep=edisgo.timeseries.timeindex[0])
# load case
edisgo.plot_mv_line_loading(
node_color='voltage_deviation',
timestep=edisgo.timeseries.timeindex[1])
Explanation: Grid reinforcement <a class="anchor" id="grid_reinforcement"></a>
Now we can calculate grid expansion costs that arise from the integration of the new generators.
The grid expansion methodology is based on the distribution grid study of dena [1] and Baden-Wuerttemberg [2]. The order in which grid expansion measures are conducted is as follows:
Reinforce transformers and lines due to overloading issues
Reinforce lines in MV grid due to voltage issues
Reinforce distribution substations due to voltage issues
Reinforce lines in LV grid due to voltage issues
Reinforce transformers and lines due to overloading issues
Reinforcement of transformers and lines due to overloading issues is performed twice, once in the beginning and again after fixing voltage problems, as the changed power flows after reinforcing the grid may lead to new overloading issues. (For further explanation see the documentation.)
After each reinforcement step a non-linear power flow analysis is conducted using PyPSA. Let's do a power flow analysis before the reinforcement to see how many over-loading and voltage issues there are.
End of explanation
edisgo.histogram_voltage(binwidth=0.005)
edisgo.histogram_relative_line_load(binwidth=0.2)
Explanation: Let's check voltages and line loadings before the reinforcement.
End of explanation
# Do grid reinforcement
edisgo.reinforce()
Explanation: Reinforcement is invoked doing the following:
End of explanation
# load and feed-in case
edisgo.plot_mv_line_loading(
node_color='voltage_deviation')
edisgo.histogram_voltage(binwidth=0.005)
edisgo.histogram_relative_line_load(binwidth=0.2)
Explanation: Let's check voltages and line loadings again:
End of explanation
# Get voltages at nodes from last power flow analysis
edisgo.results.v_res
# View reinforced equipment
edisgo.results.equipment_changes.head()
# Get costs in kEUR for reinforcement per equipment
costs = edisgo.results.grid_expansion_costs
costs.head()
# Group costs by voltage level
costs_grouped_nep = costs.groupby(['voltage_level']).sum()
costs_grouped_nep.loc[:, ['total_costs']]
Explanation: Evaluate results <a class="anchor" id="evaluation"></a>
Results such as voltages at nodes and line loading from the power flow analysis as well as
grid expansion costs are provided through the Results class. Above it was already shown how to access
the results:
python
edisgo.results
Get voltages at nodes through v_res attribute and line loading through s_res or i_res attribute.
The equipment_changes attribute holds details about measures performed during grid expansion. Associated costs can be obtained through the grid_expansion_costs attribute.
End of explanation
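A short sketch of the line-loading result attributes mentioned above (the comments only restate the description given in the text):
edisgo.results.s_res   # line loading (apparent power) from the last power flow analysis
edisgo.results.i_res   # current results from the last power flow analysis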
edisgo.plot_mv_grid_expansion_costs()
Explanation: An overview of the assumptions used to calculate grid expansion costs can be found in the documentation.
You can also view grid expansion costs for equipment in the MV using the following plot:
End of explanation
# initialize new EDisGo object with 'ego 100' scenario
edisgo_ego100 = EDisGo(ding0_grid=ding0_grid,
worst_case_analysis=worst_case_analysis,
generator_scenario='ego100')
# conduct grid reinforcement
edisgo_ego100.reinforce()
# get grouped costs
costs_grouped_ego100 = edisgo_ego100.results.grid_expansion_costs.groupby(['voltage_level']).sum()
costs_grouped_ego100.loc[:, ['total_costs']]
# compare expansion costs for both scenarios in a plot
import matplotlib.pyplot as plt
# set up dataframe to plot
costs_df = costs_grouped_nep.loc[:, ['total_costs']].join(costs_grouped_ego100.loc[:, ['total_costs']], rsuffix='_ego100', lsuffix='_nep2035').rename(
columns={'total_costs_ego100': 'ego100',
'total_costs_nep2035': 'NEP2035'}).T
# plot
costs_df.plot(kind='bar', stacked=True)
plt.xticks(rotation=0)
plt.ylabel('Grid reinforcement costs in k€');
Explanation: Results can be saved to csv files with:
python
edisgo.results.save('path/to/results/directory/')
Now let's compare the grid expansion costs for the 'NEP 2035' scenario with grid expansion costs for the 'ego 100' scenario. To do so, we first have to set up the new scenario and calculate grid expansion costs.
End of explanation |
4,427 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Acceptance conditions
The acceptance condition of an automaton specifies which of its paths are accepting.
The way acceptance conditions are stored in Spot is derived from the way acceptance conditions are specified in the HOA format. In HOA, acceptance conditions are given as a line of the form
Step1: As seen above, the sequence of set numbers can be specified using a list or a tuple. While from the Python language point of view, using a tuple is faster than using a list, the overhead of converting all the arguments from Python to C++ and then converting the result back from C++ to Python makes this difference completely negligible. In the following, we opted to use lists, because brackets are more readable than nested parentheses.
Step2: The bits can be set, cleared, and tested using the set(), clear(), and has() methods
Step3: Left-shifting will increment all set numbers.
This operation is useful when building the product of two automata
Step4: Internally, the mark_t stores the bit-vector as an integer. This also implies that we currently do not support more than 32 acceptance sets. The underlying integer can be retrieved using .id.
Step5: mark_t can also be initialized using an integer
Step6: The different sets can be iterated over with the sets() method, which returns a tuple with the indices of all bits set.
Step7: count() returns the number of sets in a mark_t
Step8: lowest() returns a mark_t containing only the lowest set number. This provides another way to iterate over all set numbers in cases where you need the result as a mark_t.
Step9: max_set() returns the number of the highest set plus one. This is usually used to figure out how many sets we need to declare on the Acceptance
Step10: acc_code
acc_code encodes the formula of the acceptance condition using a kind of bytecode that basically corresponds to an encoding in reverse Polish notation in which conjunctions of Inf(n) terms, and disjunctions of Fin(n) terms are grouped. In particular, the frequently-used generalized-Büchi acceptance conditions (like Inf(0)&Inf(1)&Inf(2)) are always encoded as a single term (like Inf({0,1,2})).
The simplest way to construct an acc_code is by passing a string that represents the formula to build.
Step11: You may also use a named acceptance condition
Step12: The recognized names are the valid values for acc-name
Step13: It may also be convenient to generate a random acceptance condition
Step14: The to_cnf() and to_dnf() functions can be used to rewrite the formula into Conjunctive or Disjunctive normal forms. These functions will simplify the resulting formulas to make them irredundant.
Step15: The manipulation of acc_code objects is quite rudimentary at the moment
Step16: The complement() method returns the complemented acceptance condition
Step17: Instead of using acc_code('string'), it is also possible to build an acceptance formula from atoms like Inf({...}), Fin({...}), t, or f.
Remember that in our encoding for the formula, terms like Inf(1)&Inf(2) and Fin(3)|Fin(4)|Fin(5) are actually stored as Inf({1,2}) and Fin({3,4,5}), where {1,2} and {3,4,5} are instances of mark_t. These terms can be generated with the
functions spot.acc_code.inf(mark) and spot.acc_code.fin(mark).
Inf({}) is equivalent to t, and Fin({}) is equivalent to f, but it's better to use the functions spot.acc_code.t() or spot.acc_code.f() directly.
Step18: To evaluate an acceptance condition formula on a run, build a mark_t containing all the acceptance sets that are seen infinitely often along this run, and call the accepting() method.
Step19: Finally the method used_sets() returns a mark_t with all the sets appearing in the formula
Step20: acc_cond
Automata store their acceptance condition as an instance of the acc_cond class.
This class can be thought of as a pair (n, code), where n is an integer that tells how many acceptance sets are used, while the code is an instance of acc_code and encodes the formula over a subset of these acceptance sets. We usually have n == code.used_sets().max_set(), but n can be larger.
It is OK if an automaton declares that it uses 3 sets, even if the acceptance condition formula only uses set number 1.
The acc_cond objects are usually not created by hand
Step21: For convenience, you can pass the string directly
Step22: The acc_cond object can also be constructed using only a number of sets. In that case, the acceptance condition defaults to t, and it can be changed to something else later (using set_acceptance()). The number of acceptance sets can also be augmented with add_sets().
Step23: Calling the constructor of acc_cond by passing just an instance of acc_code (or a string that will be passed to the acc_code constructor) will automatically set the number of acceptance sets to the minimum needed by the formula
Step24: The above is in fact just syntactic sugar for
Step25: The common scenario of setting generalized Büchi acceptance can be achieved more efficiently by first setting the number of acceptance sets, and then requiring generalized Büchi acceptance
Step26: The acc_cond class has several methods for detecting acceptance conditions that match the named acceptance conditions of the HOA format. Note that in the HOA format, Inf(0)&Inf(1)&Inf(2)&Inf(3) is only called generalized Büchi if exactly 4 acceptance sets are used. So the following behavior should not be surprising
Step27: Similar methods like is_t(), is_f(), is_buchi(), is_co_buchi(), is_generalized_co_buchi() all return a Boolean.
The is_rabin() and is_streett() methods, however, return a number of pairs. The number of pairs is always num_sets()/2 on success, or -1 on failure.
Step28: The check for parity acceptance returns three Booleans in a list of the form [matched, max?, odd?]. If matched is False, the other values should be ignored.
Step29: acc_cond contains a few functions for manipulating mark_t instances; these are typically functions that require knowing the total number of accepting sets declared.
For instance complementing a mark_t
Step30: all_sets() returns a mark_t listing all the declared sets
Step31: For convenience, the accepting() method of acc_cond delegates to that of the acc_code.
Any set passed to accepting() that is not used by the acceptance formula has no influence.
Step32: Finally the unsat_mark() method of acc_cond computes an instance of mark_t that is unaccepting (i.e., passing this value to acc.accepting(...) will return False) when such a value exists. Not all acceptance conditions have a satisfiable mark. Obviously the t acceptance is always satisfiable, and so are all equivalent acceptances (for instance Fin(1)|Inf(1)).
For this reason, unsat_mark() actually returns a pair | Python Code:
spot.mark_t()
spot.mark_t([0, 2, 3])
spot.mark_t((0, 2, 3))
Explanation: Acceptance conditions
The acceptance condition of an automaton specifies which of its paths are accepting.
The way acceptance conditions are stored in Spot is derived from the way acceptance conditions are specified in the HOA format. In HOA, acceptance conditions are given as a line of the form:
Acceptance: 3 (Inf(0)&Fin(1))|Inf(2)
The number 3 gives the number of acceptance sets used (numbered from 0 to 2 in that case), while the rest of the line is a positive Boolean formula over terms of the form:
- Inf(n), that is true if and only if the set n is seen infinitely often,
- Fin(n), that is true if and only if the set n should be seen finitely often,
- t, always true,
- f, always false.
The HOA specification additionally allows terms of the form Inf(!n) or Fin(!n) but Spot automatically rewrites those away when reading an HOA file.
Note that the number of sets given can be larger than what is actually needed by the acceptance formula.
Transitions in automata can be tagged as being part of some member sets, and a path in the automaton is accepting if the set of acceptance sets visited along this path satisfies the acceptance condition.
Defining acceptance conditions in Spot involves three different types of C++ objects:
spot::acc_cond is used to represent an acceptance condition, that is: a number of sets and a formula.
spot::acc_cond::acc_code, is used to represent Boolean formula for the acceptance condition using a kind of byte code (hence the name)
spot::acc_cond::mark_t, is a type of bit-vector used to represent membership to acceptance sets.
Because Swig's support for nested classes is limited, these types are available respectively as spot.acc_cond, spot.acc_code, and spot.mark_t in Python.
mark_t
Let's start with the simpler of these three objects. mark_t is a type of bit vector. Its main constructor takes a sequence of set numbers.
End of explanation
x = spot.mark_t([0, 2, 3])
y = spot.mark_t([0, 4])
print(x | y)
print(x & y)
print(x - y)
Explanation: As seen above, the sequence of set numbers can be specified using a list or a tuple. While from the Python language point of view, using a tuple is faster than using a list, the overhead of converting all the arguments from Python to C++ and then converting the result back from C++ to Python makes this difference completely negligible. In the following, we opted to use lists, because brackets are more readable than nested parentheses.
End of explanation
x.set(5)
print(x)
x.clear(3)
print(x)
print(x.has(2))
print(x.has(3))
Explanation: The bits can be set, cleared, and tested using the set(), clear(), and has() methods:
End of explanation
x << 2
Explanation: Left-shifting will increment all set numbers.
This operation is useful when building the product of two automata: all the set numbers of one automaton have to be shifted by the number of sets used in the other automaton.
End of explanation
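A small sketch of the product use case described above (assuming spot is imported as elsewhere in this notebook and that the first automaton uses two acceptance sets):
left = spot.mark_t([0, 1])       # marks coming from the first automaton
right = spot.mark_t([0]) << 2    # the second automaton's set 0 is renumbered to set 2
print(left | right)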
print(x)
print(x.id)
print(bin(x.id))
Explanation: Internally, the mark_t stores the bit-vector as an integer. This also implies that we currently do not support more than 32 acceptance sets. The underlying integer can be retrieved using .id.
End of explanation
# compare
print(spot.mark_t([5]))
# with
print(spot.mark_t(5))
print(spot.mark_t(0b10101))
Explanation: mark_t can also be initialized using an integer: in that case the integer is interpreted as a bit vector.
A frequent error is to use mark_t(n) when we really mean mark_t([n]) or mark_t((n,)).
End of explanation
print(x)
print(x.sets())
for s in x.sets():
print(s)
Explanation: The different sets can be iterated over with the sets() method, which returns a tuple with the indices of all bits set.
End of explanation
x.count()
Explanation: count() returns the number of sets in a mark_t:
End of explanation
spot.mark_t([1,3,5]).lowest()
v = spot.mark_t([1, 3, 5])
while v: # this stops once v is empty
b = v.lowest()
v -= b
print(b)
Explanation: lowest() returns a mark_t containing only the lowest set number. This provides another way to iterate over all set numbers in cases where you need the result as a mark_t.
End of explanation
spot.mark_t([1, 3, 5]).max_set()
Explanation: max_set() returns the number of the highest set plus one. This is usually used to figure out how many sets we need to declare on the Acceptance: line of the HOA format:
End of explanation
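For instance, a sketch of how max_set() could feed the Acceptance: line (the formula part of the line is deliberately elided here):
m = spot.mark_t([1, 3, 5])
print('Acceptance: {} ...'.format(m.max_set()))   # 6 sets (numbered 0..5) would have to be declared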
spot.acc_code('(Inf(0)&Fin(1))|Inf(2)')
Explanation: acc_code
acc_code encodes the formula of the acceptance condition using a kind of bytecode that basically corresponds to an encoding in reverse Polish notation, in which conjunctions of Inf(n) terms and disjunctions of Fin(n) terms are grouped. In particular, the frequently-used generalized-Büchi acceptance conditions (like Inf(0)&Inf(1)&Inf(2)) are always encoded as a single term (like Inf({0,1,2})).
The simplest way to construct an acc_code is by passing a string that represents the formula to build.
End of explanation
spot.acc_code('Rabin 2')
Explanation: You may also use a named acceptance condition:
End of explanation
print(spot.acc_code('Streett 2..4'))
print(spot.acc_code('Streett 2..4'))
Explanation: The recognized names are the valid values for acc-name: in the HOA format. Additionally, numbers may be replaced by ranges of the form n..m, in which case a random number is selected in that range.
End of explanation
spot.acc_code('random 3..5')
Explanation: It may also be convenient to generate a random acceptance condition:
End of explanation
a = spot.acc_code('parity min odd 5')
a
a.to_cnf()
a.to_dnf()
Explanation: The to_cnf() and to_dnf() functions can be used to rewrite the formula into Conjunctive or Disjunctive normal form. These functions will simplify the resulting formulas to make them irredundant.
End of explanation
x = spot.acc_code('Rabin 2')
y = spot.acc_code('Rabin 2') << 4
print(x)
print(y)
print(x | y)
print(x & y)
Explanation: The manipulation of acc_code objects is quite rudimentary at the moment: they are easy to build, but harder to take apart. In fact we won't attempt to disassemble an acc_code object in Python: those things are better done in C++.
Operators |, |=, &, &=, <<, and <<= can be used with their obvious semantics.
Whenever possible, the in-place versions (|=, &=, <<=) should be preferred, because they create fewer temporary acceptance conditions.
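For instance, a formula can be grown in place like this:
c = spot.acc_code('Inf(0)')
c &= spot.acc_code('Fin(1)')   # in-place conjunction
c |= spot.acc_code('Inf(2)')   # in-place disjunction
print(c)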
End of explanation
print(x)
print(x.complement())
Explanation: The complement() method returns the complemented acceptance condition:
End of explanation
spot.acc_code.inf([1,2]) & spot.acc_code.fin([3,4,5])
spot.acc_code.inf([])
spot.acc_code.t()
spot.acc_code.fin([])
spot.acc_code.f()
Explanation: Instead of using acc_code('string'), it is also possible to build an acceptance formula from atoms like Inf({...}), Fin({...}), t, or f.
Remember that in our encoding for the formula, terms like Inf(1)&Inf(2) and Fin(3)|Fin(4)|Fin(5) are actually stored as Inf({1,2}) and Fin({3,4,5}), where {1,2} and {3,4,5} are instances of mark_t. These terms can be generated with the
functions spot.acc_code.inf(mark) and spot.acc_code.fin(mark).
Inf({}) is equivalent to t, and Fin({}) is equivalent to f, but it's better to use the functions spot.acc_code.t() or spot.acc_code.f() directly.
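For example, a single Streett-like pair Fin(0)|Inf(1) can be assembled from these atoms:
pair = spot.acc_code.fin([0]) | spot.acc_code.inf([1])
print(pair)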
End of explanation
acc = spot.acc_code('Fin(0) & Inf(1) | Inf(2)')
print("acc =", acc)
for x in ([0, 1, 2], [1, 2], [0, 1], [0, 2], [0], [1], [2], []):
print("acc.accepting({}) = {}".format(x, acc.accepting(x)))
Explanation: To evaluate an acceptance condition formula on a run, build a mark_t containing all the acceptance sets that are seen infinitely often along this run, and call the accepting() method.
End of explanation
acc = spot.acc_code('Fin(0) & Inf(2)')
print(acc)
print(acc.used_sets())
print(acc.used_sets().max_set())
Explanation: Finally the method used_sets() returns a mark_t with all the sets appearing in the formula:
End of explanation
acc = spot.acc_cond(4, spot.acc_code('Rabin 2'))
acc
Explanation: acc_cond
Automata store their acceptance condition as an instance of the acc_cond class.
This class can be thought of as a pair (n, code), where n is an integer that tells how many acceptance sets are used, while code is an instance of acc_code and encodes the formula over a subset of these acceptance sets. We usually have n == code.used_sets().max_set(), but n can be larger.
It is OK if an automaton declares that it uses 3 sets, even if the acceptance condition formula only uses set number 1.
The acc_cond objects are usually not created by hand: automata have dedicated methods for that. But for the purpose of this notebook, let's do it:
End of explanation
acc = spot.acc_cond(4, 'Rabin 2')
acc
acc.num_sets()
acc.get_acceptance()
Explanation: For convenience, you can pass the string directly:
End of explanation
acc = spot.acc_cond(4)
acc
acc.add_sets(2)
acc
acc.set_acceptance('Streett 2')
acc
Explanation: The acc_cond object can also be constructed using only a number of sets. In that case, the acceptance condition defaults to t, and it can be changed to something else later (using set_acceptance()). The number of acceptance sets can also be augmented with add_sets().
End of explanation
acc = spot.acc_cond('Streett 2')
acc
Explanation: Calling the constructor of acc_cond by passing just an instance of acc_code (or a string that will be passed to the acc_code constructor) will automatically set the number of acceptance sets to the minimum needed by the formula:
End of explanation
code = spot.acc_code('Streett 2')
acc = spot.acc_cond(code.used_sets().max_set(), code)
acc
Explanation: The above is in fact just syntactic sugar for:
End of explanation
acc = spot.acc_cond(4)
acc.set_generalized_buchi()
acc
Explanation: The common scenario of setting generalized Büchi acceptance can be achieved more efficiently by first setting the number of acceptance sets, and then requiring generalized Büchi acceptance:
End of explanation
print(acc)
print(acc.is_generalized_buchi())
acc.add_sets(1)
print(acc)
print(acc.is_generalized_buchi())
Explanation: The acc_cond class has several methods for detecting acceptance conditions that match the named acceptance conditions of the HOA format. Note that in the HOA format, Inf(0)&Inf(1)&Inf(2)&Inf(3) is only called generalized Büchi if exactly 4 acceptance sets are used. So the following behavior should not be surprising:
End of explanation
acc = spot.acc_cond('Rabin 2')
print(acc)
print(acc.is_rabin())
print(acc.is_streett())
Explanation: Similar methods like is_t(), is_f(), is_buchi(), is_co_buchi(), is_generalized_co_buchi() all return a Boolean.
The is_rabin() and is_streett() methods, however, return a number of pairs. The number of pairs is always num_sets()/2 on success, or -1 on failure.
End of explanation
acc = spot.acc_cond('parity min odd 4')
print(acc)
print(acc.is_parity())
acc.set_generalized_buchi()
print(acc)
print(acc.is_parity())
Explanation: The check for parity acceptance returns three Booleans in a list of the form [matched, max?, odd?]. If matched is False, the other values should be ignored.
End of explanation
m = spot.mark_t([1, 3])
print(acc.comp(m))
Explanation: acc_cond contains a few functions for manipulating mark_t instances; these are typically functions that require knowing the total number of acceptance sets declared.
For instance complementing a mark_t:
End of explanation
acc.all_sets()
Explanation: all_sets() returns a mark_t listing all the declared sets:
End of explanation
print("acc =", acc)
for x in ([0, 1, 2, 3, 10], [1, 2]):
print("acc.accepting({}) = {}".format(x, acc.accepting(x)))
Explanation: For convenience, the accepting() method of acc_cond delegates to that of the acc_code.
Any set passed to accepting() that is not used by the acceptance formula has no influence.
End of explanation
print(acc)
print(acc.unsat_mark())
acc = spot.acc_cond(0) # use 0 acceptance sets, and the default formula (t)
print(acc)
print(acc.unsat_mark())
acc = spot.acc_cond('Streett 2')
print(acc)
print(acc.unsat_mark())
Explanation: Finally, the unsat_mark() method of acc_cond computes an instance of mark_t that is unaccepting (i.e., passing this value to acc.accepting(...) will return False), when such a value exists. Not all acceptance conditions have an unsatisfiable mark. Obviously the t acceptance is always satisfiable, and so are all equivalent acceptances (for instance Fin(1)|Inf(1)).
For this reason, unsat_mark() actually returns a pair: (bool, mark_t) where the Boolean is False iff the acceptance is always satisfiable. When the Boolean is True, then the second element of the pair gives a non-accepting mark.
End of explanation |
4,428 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2A.ml - Binary classification with text features
This notebook shows how to incorporate features in order to see the performance improvement on a binary classification task.
Step1: Retrieve the data
The data can be downloaded from "Compétition 2017 - additifs alimentaires" (2017 competition - food additives) or with the code | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: 2A.ml - Binary classification with text features
This notebook shows how to incorporate features in order to see the performance improvement on a binary classification task.
End of explanation
from pyensae.datasource import download_data
data_train = download_data("off_train_all.zip",
url="https://raw.githubusercontent.com/sdpython/data/master/OpenFoodFacts/")
data_test = download_data("off_test_all.zip",
url="https://raw.githubusercontent.com/sdpython/data/master/OpenFoodFacts/")
import pandas
df = pandas.read_csv("off_test_all.txt", sep="\t", encoding="utf8", low_memory=False)
df.head()
df.head(n=2).T[:50]
df.head(n=2).T[50:100]
df.head(n=2).T[100:150]
df.head(n=2).T[150:]
Explanation: Retrieve the data
The data can be downloaded from "Compétition 2017 - additifs alimentaires" (2017 competition - food additives) or with the code:
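As a follow-up (assuming the training archive unpacks to off_train_all.txt, mirroring the test file above), the training set can be read the same way:
df_train = pandas.read_csv("off_train_all.txt", sep="\t", encoding="utf8", low_memory=False)
df_train.shape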
End of explanation |
4,429 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 Google LLC.
Step1: <a href="https
Step2: Explore checkpoints
This section shows how to use the index.csv table for model
selection.
See
vit_jax.checkpoint.get_augreg_df()
for a detailed description of the individual columns
Step4: Load a checkpoint
Step6: Using timm
If you know PyTorch, you're probably already familiar with timm.
If not yet - it's your lucky day! Please check out their docs here
Step7: Fine-tune
You want to be connected to a TPU or GPU runtime for fine-tuning.
Note that here we're just calling into the code. For more details see the
annotated Colab
https
Step8: From tfds
Step9: From JPG files
The codebase supports training directly from JPG files on the local filesystem
instead of tfds datasets. Note that the throughput is somewhat reduced, but
that only is noticeable for very small models.
The main advantage of tfds datasets is that they are versioned and available
globally. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 Google LLC.
End of explanation
# Fetch vision_transformer repository.
![ -d vision_transformer ] || git clone --depth=1 https://github.com/google-research/vision_transformer
# Install dependencies.
!pip install -qr vision_transformer/vit_jax/requirements.txt
# Import files from repository.
import sys
if './vision_transformer' not in sys.path:
sys.path.append('./vision_transformer')
%load_ext autoreload
%autoreload 2
from vit_jax import checkpoint
from vit_jax import models
from vit_jax import train
from vit_jax.configs import augreg as augreg_config
from vit_jax.configs import models as models_config
# Connect to TPUs if runtime type is of type TPU.
import os
if 'google.colab' in str(get_ipython()) and 'COLAB_TPU_ADDR' in os.environ:
import jax
import jax.tools.colab_tpu
jax.tools.colab_tpu.setup_tpu()
print('Connected to TPU.')
else:
# Otherwise print information about GPU.
!nvidia-smi
# Some more imports used in this Colab.
import glob
import os
import random
import shutil
import time
from absl import logging
import pandas as pd
import seaborn as sns
import tensorflow as tf
import tensorflow_datasets as tfds
from matplotlib import pyplot as plt
pd.options.display.max_colwidth = None
logging.set_verbosity(logging.INFO) # Shows logs during training.
Explanation: <a href="https://colab.research.google.com/github/google-research/vision_transformer/blob/master/vit_jax_augreg.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers
Model repository published with the paper
How to train your ViT? Data, Augmentation, and Regularization in Vision
Transformers
This Colab shows how to
find checkpoints
in the repository, how to
select and load a model
form the repository and use it for inference
(also with PyTorch),
and how to
fine-tune on a dataset.
For more details, please refer to the repository:
https://github.com/google-research/vision_transformer/
Note that this Colab directly uses the unmodified code from the repository. If
you want to modify the modules and persist your changes, you can do all that
using free GPUs and TPUs without leaving the Colab environment - see
https://colab.research.google.com/github/google-research/vision_transformer/blob/master/vit_jax.ipynb
Imports
End of explanation
# Load master table from Cloud.
with tf.io.gfile.GFile('gs://vit_models/augreg/index.csv') as f:
df = pd.read_csv(f)
# This is a pretty large table with lots of columns:
print(f'loaded {len(df):,} rows')
df.columns
# Number of distinct checkpoints
len(tf.io.gfile.glob('gs://vit_models/augreg/*.npz'))
# Any column prefixed with "adapt_" pertains to the fine-tuned checkpoints.
# Any column without that prefix pertains to the pre-trained checkpoints.
len(set(df.filename)), len(set(df.adapt_filename))
df.name.unique()
# Upstream AugReg parameters (section 3.3):
(
df.groupby(['ds', 'name', 'wd', 'do', 'sd', 'aug']).filename
.count().unstack().unstack().unstack()
.dropna(1, 'all').fillna(0).astype(int)
.iloc[:7] # Just show beginning of a long table.
)
# Downstream parameters (table 4)
# (Imbalance in 224 vs. 384 is due to recently added B/8 checkpoints)
(
df.groupby(['adapt_resolution', 'adapt_ds', 'adapt_lr', 'adapt_steps']).filename
.count().astype(str).unstack().unstack()
.dropna(1, 'all').fillna('')
)
# Let's first select the "best checkpoint" for every model. We show in the
# paper (section 4.5) that one can get a good performance by simply choosing the
# best model by final pre-train validation accuracy ("final-val" column).
# Pre-training with imagenet21k 300 epochs (ds=="i21k") gives the best
# performance in almost all cases (figure 6, table 5).
best_filenames = set(
df.query('ds=="i21k"')
.groupby('name')
.apply(lambda df: df.sort_values('final_val').iloc[-1])
.filename
)
# Select all finetunes from these models.
best_df = df.loc[df.filename.apply(lambda filename: filename in best_filenames)]
# Note: 9 * 68 == 612
len(best_filenames), len(best_df)
best_df.columns
# Note that this dataframe contains the models from the "i21k_300" column of
# table 3:
best_df.query('adapt_ds=="imagenet2012"').groupby('name').apply(
lambda df: df.sort_values('adapt_final_val').iloc[-1]
)[[
# Columns from upstream
'name', 'ds', 'filename',
# Columns from downstream
'adapt_resolution', 'infer_samples_per_sec','adapt_ds', 'adapt_final_test', 'adapt_filename',
]].sort_values('infer_samples_per_sec')
# Visualize the 2 (resolution) * 9 (models) * 8 (lr, steps) finetunings for a
# single dataset (Pets37).
# Note how larger models get better scores up to B/16 @384 even on this tiny
# dataset, if pre-trained sufficiently.
sns.relplot(
data=best_df.query('adapt_ds=="oxford_iiit_pet"'),
x='infer_samples_per_sec',
y='adapt_final_val',
hue='name',
style='adapt_resolution'
)
plt.gca().set_xscale('log');
# More details for a single pre-trained checkpoint.
best_df.query('name=="R26+S/32" and adapt_ds=="oxford_iiit_pet"')[[
col for col in best_df.columns if col.startswith('adapt_')
]].sort_values('adapt_final_val')
Explanation: Explore checkpoints
This section shows how to use the index.csv table for model
selection.
See
vit_jax.checkpoint.get_augreg_df()
for a detailed description of the individual columns
End of explanation
# Select a value from "adapt_filename" above that is a fine-tuned checkpoint.
filename = 'R26_S_32-i21k-300ep-lr_0.001-aug_light1-wd_0.1-do_0.0-sd_0.0--oxford_iiit_pet-steps_0k-lr_0.003-res_384'
tfds_name = filename.split('--')[1].split('-')[0]
model_config = models_config.AUGREG_CONFIGS[filename.split('-')[0]]
resolution = int(filename.split('_')[-1])
path = f'gs://vit_models/augreg/{filename}.npz'
print(f'{tf.io.gfile.stat(path).length / 1024 / 1024:.1f} MiB - {path}')
# Fetch dataset that the checkpoint was finetuned on.
# (Note that automatic download does not work with imagenet2012)
ds, ds_info = tfds.load(tfds_name, with_info=True)
ds_info
# Get model instance - no weights are initialized yet.
model = models.VisionTransformer(
num_classes=ds_info.features['label'].num_classes, **model_config)
# Load a checkpoint from cloud - for large checkpoints this can take a while...
params = checkpoint.load(path)
# Get a single example from dataset for inference.
d = next(iter(ds['test']))
def pp(img, sz):
  """Simple image preprocessing."""
img = tf.cast(img, float) / 255.0
img = tf.image.resize(img, [sz, sz])
return img
plt.imshow(pp(d['image'], resolution));
# Inferance on batch with single example.
logits, = model.apply({'params': params}, [pp(d['image'], resolution)], train=False)
# Plot logits (you can use tf.nn.softmax() to show probabilities instead).
plt.figure(figsize=(10, 4))
plt.bar(list(map(ds_info.features['label'].int2str, range(len(logits)))), logits)
plt.xticks(rotation=90);
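# Optional follow-up (not part of the original Colab): turn the logits into
# probabilities and list the top-3 predicted classes.
import numpy as np
logits_np = np.asarray(logits)
probs = np.exp(logits_np - logits_np.max())
probs /= probs.sum()                      # softmax
for i in np.argsort(probs)[-3:][::-1]:
    print(ds_info.features['label'].int2str(int(i)), float(probs[i]))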
Explanation: Load a checkpoint
End of explanation
# Checkpoints can also be loaded directly into timm...
!pip install timm
import timm
import torch
# For available model names, see here:
# https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py
# https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer_hybrid.py
timm_model = timm.create_model(
'vit_small_r26_s32_384', num_classes=ds_info.features['label'].num_classes)
# Non-default checkpoints need to be loaded from local files.
if not tf.io.gfile.exists(f'{filename}.npz'):
tf.io.gfile.copy(f'gs://vit_models/augreg/{filename}.npz', f'{filename}.npz')
timm.models.load_checkpoint(timm_model, f'{filename}.npz')
def pp_torch(img, sz):
  """Simple image preprocessing for PyTorch."""
img = pp(img, sz)
img = img.numpy().transpose([2, 0, 1]) # PyTorch expects NCHW format.
return torch.tensor(img[None])
with torch.no_grad():
logits, = timm_model(pp_torch(d['image'], resolution)).detach().numpy()
# Same results as above (since we loaded the same checkpoint).
plt.figure(figsize=(10, 4))
plt.bar(list(map(ds_info.features['label'].int2str, range(len(logits)))), logits)
plt.xticks(rotation=90);
Explanation: Using timm
If you know PyTorch, you're probably already familiar with timm.
If not yet - it's your lucky day! Please check out their docs here:
https://rwightman.github.io/pytorch-image-models/
End of explanation
# Launch tensorboard before training - maybe click "reload" during training.
%load_ext tensorboard
%tensorboard --logdir=./workdirs
Explanation: Fine-tune
You want to be connected to a TPU or GPU runtime for fine-tuning.
Note that here we're just calling into the code. For more details see the
annotated Colab
https://colab.research.google.com/github/google-research/vision_transformer/blob/linen/vit_jax.ipynb
Also note that Colab GPUs and TPUs are not very powerful. To run this code on
more powerful machines, see:
https://github.com/google-research/vision_transformer/#running-on-cloud
In particular, note that due to the Colab "TPU Node" setup, transferring data to
the TPUs is relatively slow (for example the smallest R+Ti/16 model trains
faster on a single GPU than on 8 TPUs...)
TensorBoard
End of explanation
# Create a new temporary workdir.
workdir = f'./workdirs/{int(time.time())}'
workdir
# Get config for specified model.
# Note that we can specify simply the model name (in which case the recommended
# checkpoint for that model is taken), or it can be specified by its full
# name.
config = augreg_config.get_config('R_Ti_16')
# A very small tfds dataset that only has a "train" split. We use this single
# split both for training & evaluation by splitting it further into 90%/10%.
config.dataset = 'tf_flowers'
config.pp.train = 'train[:90%]'
config.pp.test = 'train[90%:]'
# tf_flowers only has 3670 images - so the 10% evaluation split will contain
# 360 images. We specify batch_eval=120 so we evaluate on all but 7 of those
# images (remainder is dropped).
config.batch_eval = 120
# Some more parameters that you will often want to set manually.
# For example for VTAB we used steps={500, 2500} and lr={.001, .003, .01, .03}
config.base_lr = 0.01
config.shuffle_buffer = 1000
config.total_steps = 100
config.warmup_steps = 10
config.accum_steps = 0 # Not needed with R+Ti/16 model.
config.pp['crop'] = 224
# Call main training loop. See repository and above Colab for details.
state = train.train_and_evaluate(config, workdir)
Explanation: From tfds
End of explanation
base = '.' # Store data on VM (ephemeral).
# Uncomment below lines if you want to download & persist files in your Google
# Drive instead. Note that Colab VMs are reset (i.e. files are deleted) after
# some time of inactivity. Storing data to Google Drive guarantees that it is
# still available next time you connect from a new VM.
# Note that this is significantly slower than reading from the VMs locally
# attached file system!
# from google.colab import drive
# drive.mount('/gdrive')
# base = '/gdrive/My Drive/vision_transformer_images'
# Download some dataset & unzip.
! rm -rf '$base/flower_photos'; mkdir -p '$base'
! (cd '$base' && curl https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz | tar xz)
# Since the default file format of above "tf_flowers" dataset is
# flower_photos/{class_name}/{filename}.jpg
# we first need to split it into a "train" (90%) and a "test" (10%) set:
# flower_photos/train/{class_name}/{filename}.jpg
# flower_photos/test/{class_name}/{filename}.jpg
def split(base_dir, test_ratio=0.1):
paths = glob.glob(f'{base_dir}/*/*.jpg')
random.shuffle(paths)
counts = dict(test=0, train=0)
for i, path in enumerate(paths):
split = 'test' if i < test_ratio * len(paths) else 'train'
*_, class_name, basename = path.split('/')
dst = f'{base_dir}/{split}/{class_name}/{basename}'
if not os.path.isdir(os.path.dirname(dst)):
os.makedirs(os.path.dirname(dst))
shutil.move(path, dst)
counts[split] += 1
print(f'Moved {counts["train"]:,} train and {counts["test"]:,} test images.')
split(f'{base}/flower_photos')
# Create a new temporary workdir.
workdir = f'./workdirs/{int(time.time())}'
workdir
# Read data from directory containing files.
# (See cell above for more config settings)
config.dataset = f'{base}/flower_photos'
# And fine-tune on images provided
opt = train.train_and_evaluate(config, workdir)
Explanation: From JPG files
The codebase supports training directly from JPG files on the local filesystem
instead of tfds datasets. Note that the throughput is somewhat reduced, but
that only is noticeable for very small models.
The main advantage of tfds datasets is that they are versioned and available
globally.
End of explanation |
4,430 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lecture 11. Going fast
Step1: Question
What is the typical size of particle system?
Millenium run
One of the most famous N-body computations is the Millenium run
More than 10 billion particles ($2000^3$)
$>$ 1 month of computations, 25 Terabytes of storage
Each "particle" represents approximately a billion solar masses of dark matter
Study how the matter is distributed through the Universe (cosmology)
Step2: Smoothed particle hydrodynamics
The particle systems can be used to model a lot of things.
For nice examples, see the website of Ron Fedkiw
Step3: Applications
The N-body problem arises in different problems with long-range interactions:
- Cosmology (interacting masses)
- Electrostatics (interacting charges)
- Molecular dynamics (more complicated interactions, maybe even 3-4 body terms).
- Particle modelling (smoothed particle hydrodynamics)
Fast computation
$$
V_i = \sum_{j} \frac{q_j}{\Vert x_i - y_j \Vert}
$$
Direct computation takes $\mathcal{O}(N^2)$ operations.
How to compute it fast?
The core idea | Python Code:
import numpy as np
import math
from numba import jit
N = 10000
x = np.random.randn(N, 2);
y = np.random.randn(N, 2);
charges = np.ones(N)
res = np.zeros(N)
@jit
def compute_nbody_direct(N, x, y, charges, res):
    for i in range(N):
        res[i] = 0.0
        for j in range(N):
            dist = (x[i, 0] - y[j, 0]) ** 2 + (x[i, 1] - y[j, 1]) ** 2
dist = math.sqrt(dist)
res[i] += charges[j] / dist
%timeit compute_nbody_direct(N, x, y, charges, res)
Explanation: Lecture 11. Going fast: the Barnes-Hut algorithm
Previous lecture
Discretization of the integral equations, Galerkin methods
Computation of singular integrals
Idea of the Barnes-Hut method
Todays lecture
Barnes-Hut in details
The road to the FMM
Algebraic versions of the FMM/Fast Multipole
The discretization of the integral equation leads to dense matrices.
The main question is how to compute the matrix-by-vector product,
i.e. the summation of the form:
$$\sum_{j=1}^M A_{ij} q_j = V_i, \quad i = 1, \ldots, N.$$
The matrix $A$ is dense, i.e. its elements cannot be omitted. The complexity is $\mathcal{O}(N^2)$.
Can we make it faster?
The simplest case is the computation of the potentials from the system of charges
$$V_i = \sum_{j} \frac{q_j}{\Vert r_i - r_j \Vert}$$
This summation appears in:
Modelling of large systems of charges
Astronomy (where instead of $q_j$ we have masses, i.e. stars)
It is called <font color='red'> the N-body problem </font>.
There is no problem with memory, since you only have two nested loops.
End of explanation
from IPython.display import YouTubeVideo
YouTubeVideo('UC5pDPY5Nz4')
Explanation: Question
What is the typical size of particle system?
Millenium run
One of the most famous N-body computations is the Millenium run
More than 10 billion particles ($2000^3$)
$>$ 1 month of computations, 25 Terabytes of storage
Each "particle" represents approximately a billion solar masses of dark matter
Study how the matter is distributed through the Universe (cosmology)
End of explanation
from IPython.display import YouTubeVideo
YouTubeVideo('6bdIHFTfTdU')
Explanation: Smoothed particle hydrodynamics
The particle systems can be used to model a lot of things.
For nice examples, see the website of Ron Fedkiw
End of explanation
from IPython.core.display import HTML
def css_styling():
styles = open("./styles/alex.css", "r").read()
return HTML(styles)
css_styling()
Explanation: Applications
The N-body problem arises in different problems with long-range interactions:
- Cosmology (interacting masses)
- Electrostatics (interacting charges)
- Molecular dynamics (more complicated interactions, maybe even 3-4 body terms).
- Particle modelling (smoothed particle hydrodynamics)
Fast computation
$$
V_i = \sum_{j} \frac{q_j}{\Vert x_i - y_j \Vert}
$$
Direct computation takes $\mathcal{O}(N^2)$ operations.
How to compute it fast?
The core idea: Barnes, Hut (Nature, 1986)
Use clustering of particles!
Idea on one slide
The idea was simple:
If a charge is far from a cluster of sources, they are seen as one big "particle".
<img src="earth-andromeda.jpeg" width = 70%>
Barnes-Hut
$$\sum_j q_j F(x, y_j) \approx Q F(x, y_C)$$
$$Q = \sum_j q_j, \quad y_C = \frac{1}{J} \sum_{j} y_j$$
To compute the interaction, it is sufficient to replace the cluster by its center-of-mass and total mass!
The idea of Barnes and Hut was to split the <font color='red'> sources </font> into big blocks using the <font color='red'> cluster tree </font>
<img width=90% src='clustertree.png'>
The algorithm is recursive.
Let $\mathcal{T}$ be the tree, and $x$ is the point where we need to
compute the potential.
Set $N$ to the <font color='red'> root node </font>
If $x$ and $N$ <font color='red'> are separated </font> , then set $V(x) = Q V(y_{\mathrm{center}})$
If $x$ and $N$ are not separated, compute $V(x) = \sum_{C \in
\mathrm{sons}(N)} V(C, x)$ <font color='red'> recursion </font>
The complexity is $\mathcal{O}(\log N)$ for 1 point!
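A minimal sketch of this recursion in Python (this is not the lecture's code; it assumes each tree node stores its total charge, center of mass, size, list of children and, for leaves, its list of (charge, position) pairs):
import numpy as np

def potential(node, x, theta=0.5):
    r = np.linalg.norm(x - node.center)
    if node.size < theta * r:                 # node is well separated: treat as one big particle
        return node.total_charge / r
    if not node.children:                     # leaf: sum over its few particles directly
        return sum(q / np.linalg.norm(x - y) for q, y in node.points)
    return sum(potential(child, x, theta) for child in node.children)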
Trees
There are many options for the tree construction.
Quadtree/Octree
KD-tree
Recursive intertial bisection
Octtree
The simplest one: the quadtree/octree, where you split the square into 4 squares and do that until the number of points is less than a parameter.
It leads to an unbalanced tree; adding points is simple (but can unbalance it more).
KD-tree
Another popular choice of the tree is the KD-tree
The construction is simple as well:
Split along x-axis, then y-axis in such a way that the tree is balanced (i.e. the number of points in the left child/right child is similar).
The tree is always balanced, but biased towards the coordinate axis.
Recursive inertial bisection
Compute the center-of-mass and select a hyperplane such that sum of squares of distances to it is minimal.
$$\sum_{j} \rho^2(x_j, \Pi) \rightarrow \min.$$
Often gives best complexity, but adding/removing points can be difficult.
The scheme
You can actually code it from this description!
Construct the cluster tree
Fill the tree with charges
For any point we now can compute the potential in $\mathcal{O}(\log N)$ flops (instead of $\mathcal{O}(N)$).
Notes on the complexity
For each node of the tree, we need to compute its total mass and its center-of-mass. If we do it naively in a loop over all particles for every node, then the complexity will be $\mathcal{O}(N^2)$ for the tree construction.
However, it is easy to construct it in a smarter way.
Start from the children (which contain only a few particles) and fill them first
Bottom-to-top graph traversal: if we know the charges for the children, we can cheaply compute the total charge/center of mass for the parent
Now you can actually code this (minor things remaining are the bounding box and separation criteria).
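A sketch of that bottom-to-top pass (assuming each node stores .children, .points as (charge, position) pairs, .total_charge and .center):
def aggregate(node):
    if not node.children:                     # leaf: use its own particles
        node.total_charge = sum(q for q, y in node.points)
        node.center = sum(q * y for q, y in node.points) / node.total_charge
    else:                                     # internal node: combine the children
        for child in node.children:
            aggregate(child)
        node.total_charge = sum(c.total_charge for c in node.children)
        node.center = sum(c.total_charge * c.center for c in node.children) / node.total_charge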
Problems with Barnes-Hut
What are the problems with Barnes-Hut?
Well, there are several:
- The logarithmic term
- Low accuracy $\varepsilon = 10^{-2}$ is ok, but if we want $\varepsilon=10^{-5}$
we have to take larger <font color='red'> separation criteria </font>
Solving problems with Barnes-Hut
Complexity: To avoid the logarithmic term, we need to store two trees: one for the sources and one for the receivers
Accuracy: instead of the <font color='red'> piecewise-constant approximation </font> which is inherent in the BH algorithm, use more accurate representations.
Double tree Barnes-Hut
Principal scheme of the Double-tree BH:
Construct two trees for sources & receivers
Fill the tree for sources with charges (bottom-to-top)
Compute the interaction between nodes of the trees
Fill the tree for receivers with potentials (top-to-bottom)
The original BH method has low accuracy, and is based on the expansion
$$f(x, y) \approx f(x_0, y_0)$$
What to do?
Answer: Use higher-order expansions!
$$
f(x + \delta x, y + \delta y) \approx f(x, y) + \sum_{k, l=0}^p
\frac{1}{k! \, l!} \, (D^{k}_x D^{l}_y f) \, \delta x^k \, \delta y^l + \ldots
$$
For the Coulomb interaction $\frac{1}{r}$ we have the multipole expansion
$$
v(R) = \frac{Q}{R} + \frac{1}{R^3} \sum_{\alpha} P_{\alpha} R_{\alpha} + \frac{1}{6R^5} \sum_{\alpha, \beta} Q_{\alpha \beta} (3R_{\alpha} R_{\beta} - \delta_{\alpha \beta}R^2) + \ldots,
$$
where $P_{\alpha}$ is the dipole moment and $Q_{\alpha \beta}$ is the quadrupole moment (this is actually nothing more than a Taylor series expansion).
Fast multipole method
This combination is very powerful, and
<font color='red' size=6.0> Double tree + multipole expansion $\approx$ the Fast Multipole Method (FMM). </font>
FMM
We will talk about the exact implementation and the complexity issues in the next lecture.
Problems with FMM
FMM has problems:
- It relies on analytic expansions, which may be difficult to obtain for integral equations
- the higher the order of the expansion, the larger the complexity.
- That is why the algebraic interpretation (or kernel-independent FMM) is of great importance.
FMM hardware
For cosmology this problem is so important that special hardware, the Gravity Pipe (GRAPE), has been built for solving the N-body problem
FMM software
Sidenote: when you Google for "FMM", you will also encounter the fast marching method (even in the scikit).
Everyone uses their own in-house software, so a good Python open-source package is yet to be written.
This is also a perfect test for GPU programming (you can try to take such a project in the App Period, by the way).
Overview of todays lecture
The cluster tree
Barnes-Hut and its problems
Double tree / fast multipole method
Important difference: element evaluation is fast. In integral equations, it is slow.
Next lecture
More detailed overview of the FMM algorithm, along with complexity estimates.
Algebraic interpretation of the FMM
Application of the FMM to the solution of integral equations
End of explanation |
4,431 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LAB 3b
Step1: Lab Task #1
Step2: Create two SQL statements to evaluate the model.
Step3: Lab Task #2
Step4: Create three SQL statements to EVALUATE the model.
Let's now retrieve the training statistics and evaluate the model.
Step5: We now evaluate our model on our eval dataset
Step6: Let's select the mean_squared_error from the evaluation table we just computed and square it to obtain the rmse.
Step7: Lab Task #3
Step8: Let's retrieve the training statistics
Step9: We now evaluate our model on our eval dataset
Step10: Let's select the mean_squared_error from the evaluation table we just computed and square it to obtain the rmse. | Python Code:
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_train
LIMIT 0
%%bigquery
-- LIMIT 0 is a free query; this allows us to check that the table exists.
SELECT * FROM babyweight.babyweight_data_eval
LIMIT 0
Explanation: LAB 3b: BigQuery ML Model Linear Feature Engineering/Transform.
Learning Objectives
Create and evaluate linear model with BigQuery's ML.FEATURE_CROSS
Create and evaluate linear model with BigQuery's ML.FEATURE_CROSS and ML.BUCKETIZE
Create and evaluate linear model with ML.TRANSFORM
Introduction
In this notebook, we will create multiple linear models to predict the weight of a baby before it is born, using increasing levels of feature engineering using BigQuery ML. If you need a refresher, you can go back and look how we made a baseline model in the previous notebook BQML Baseline Model.
We will create and evaluate a linear model using BigQuery's ML.FEATURE_CROSS, create and evaluate a linear model using BigQuery's ML.FEATURE_CROSS and ML.BUCKETIZE, and create and evaluate a linear model using BigQuery's ML.TRANSFORM.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Verify tables exist
Run the following cells to verify that we previously created the dataset and data tables. If not, go back to lab 1b_prepare_data_babyweight to create them.
End of explanation
%%bigquery
CREATE OR REPLACE MODEL
babyweight.model_1
OPTIONS (
MODEL_TYPE="LINEAR_REG",
INPUT_LABEL_COLS=["weight_pounds"],
L2_REG=0.1,
DATA_SPLIT_METHOD="NO_SPLIT") AS
SELECT
# TODO: Add base features and label
ML.FEATURE_CROSS(
# TODO: Cross categorical features
) AS gender_plurality_cross
FROM
babyweight.babyweight_data_train
Explanation: Lab Task #1: Model 1: Apply the ML.FEATURE_CROSS clause to categorical features
BigQuery ML now has ML.FEATURE_CROSS, a pre-processing clause that performs a feature cross, with syntax ML.FEATURE_CROSS(STRUCT(features), degree) where features are comma-separated categorical columns and degree is the highest degree of all combinations.
Create model with feature cross.
End of explanation
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL babyweight.model_1,
(
SELECT
# TODO: Add same features and label as training
FROM
babyweight.babyweight_data_eval
))
%%bigquery
SELECT
# TODO: Select just the calculated RMSE
FROM
ML.EVALUATE(MODEL babyweight.model_1,
(
SELECT
# TODO: Add same features and label as training
FROM
babyweight.babyweight_data_eval
))
Explanation: Create two SQL statements to evaluate the model.
End of explanation
%%bigquery
CREATE OR REPLACE MODEL
babyweight.model_2
OPTIONS (
MODEL_TYPE="LINEAR_REG",
INPUT_LABEL_COLS=["weight_pounds"],
L2_REG=0.1,
DATA_SPLIT_METHOD="NO_SPLIT") AS
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
ML.FEATURE_CROSS(
STRUCT(
is_male,
ML.BUCKETIZE(
# TODO: Bucketize mother_age
) AS bucketed_mothers_age,
plurality,
ML.BUCKETIZE(
# TODO: Bucketize gestation_weeks
) AS bucketed_gestation_weeks
)
) AS crossed
FROM
babyweight.babyweight_data_train
Explanation: Lab Task #2: Model 2: Apply the BUCKETIZE Function
Bucketize is a pre-processing function that creates "buckets" (i.e. bins): it bucketizes a continuous numerical feature into a string feature with bucket names as the value, with syntax ML.BUCKETIZE(feature, split_points), where split_points is an array of numerical points that determine the bucket bounds.
Apply the BUCKETIZE function within FEATURE_CROSS.
Hint: Create a model_2.
End of explanation
%%bigquery
SELECT * FROM ML.TRAINING_INFO(MODEL babyweight.model_2)
Explanation: Create three SQL statements to EVALUATE the model.
Let's now retrieve the training statistics and evaluate the model.
End of explanation
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL babyweight.model_2,
(
SELECT
# TODO: Add same features and label as training
FROM
babyweight.babyweight_data_eval))
Explanation: We now evaluate our model on our eval dataset:
End of explanation
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL babyweight.model_2,
(
SELECT
# TODO: Add same features and label as training
FROM
babyweight.babyweight_data_eval))
Explanation: Let's select the mean_squared_error from the evaluation table we just computed and square it to obtain the rmse.
End of explanation
%%bigquery
CREATE OR REPLACE MODEL
babyweight.model_3
TRANSFORM(
# TODO: Add base features and label as you would in select
# TODO: Add transformed features as you would in select
)
OPTIONS (
MODEL_TYPE="LINEAR_REG",
INPUT_LABEL_COLS=["weight_pounds"],
L2_REG=0.1,
DATA_SPLIT_METHOD="NO_SPLIT") AS
SELECT
*
FROM
babyweight.babyweight_data_train
Explanation: Lab Task #3: Model 3: Apply the TRANSFORM clause
Before we perform our prediction, we should encapsulate the entire feature set in a TRANSFORM clause. This way we can have the same transformations applied for training and prediction without modifying the queries.
Let's apply the TRANSFORM clause to the model_3 and run the query.
End of explanation
%%bigquery
SELECT * FROM ML.TRAINING_INFO(MODEL babyweight.model_3)
Explanation: Let's retrieve the training statistics:
End of explanation
%%bigquery
SELECT
*
FROM
ML.EVALUATE(MODEL babyweight.model_3,
(
SELECT
*
FROM
babyweight.babyweight_data_eval
))
Explanation: We now evaluate our model on our eval dataset:
End of explanation
%%bigquery
SELECT
SQRT(mean_squared_error) AS rmse
FROM
ML.EVALUATE(MODEL babyweight.model_3,
(
SELECT
*
FROM
babyweight.babyweight_data_eval
))
Explanation: Let's select the mean_squared_error from the evaluation table we just computed and square it to obtain the rmse.
End of explanation |
4,432 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Grove Light Sensor 1.1
This example shows how to use the Grove Light Sensor v1.1. You will also see how to plot a graph using matplotlib.
The Grove Light Sensor produces an analog signal which requires an ADC.
The Grove Light Sensor, PYNQ Grove Adapter, and Grove I2C ADC are used for this example.
When the ambient light intensity increases, the resistance of the LDR or Photoresistor will decrease. This means that the output signal from this module will be HIGH in bright light, and LOW in the dark. Values for the sensor ranges from ~5.0 (bright) to >35.0 (dark).
1. Load overlay
Download base overlay.
Step1: 2. Read single luminance value
Now read from the Grove Light sensor which is connected to the ADC. In this example, the PYNQ Grove Adapter is connected to PMODA interface on the board.
The Grove I2C ADC is used as a bridge between G4 on the Grove Adapter and the Grove Light Sensor.
Step2: 3. Plot the light intensity over time
This Python code will do multiple light measurements over a 10 second period.
To change the light intensity, cover and uncover the light sensor. In typical ambient light, there is no need to provide an external light source, as the sensor is already reading at full scale. | Python Code:
from pynq.overlays.base import BaseOverlay
base = BaseOverlay("base.bit")
Explanation: Grove Light Sensor 1.1
This example shows how to use the Grove Light Sensor v1.1. You will also see how to plot a graph using matplotlib.
The Grove Light Sensor produces an analog signal which requires an ADC.
The Grove Light Sensor, PYNQ Grove Adapter, and Grove I2C ADC are used for this example.
When the ambient light intensity increases, the resistance of the LDR or Photoresistor will decrease. This means that the output signal from this module will be HIGH in bright light, and LOW in the dark. Values for the sensor range from ~5.0 (bright) to >35.0 (dark).
1. Load overlay
Download base overlay.
End of explanation
from pynq.lib.pmod import Grove_Light
from pynq.lib.pmod import PMOD_GROVE_G4
lgt = Grove_Light(base.PMODA,PMOD_GROVE_G4)
sensor_val = lgt.read()
print(sensor_val)
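# A small illustrative check (not part of the original notebook) using the rough
# scale quoted above: ~5.0 in bright light, >35.0 in the dark.
if sensor_val < 10:
    print("bright")
elif sensor_val > 35:
    print("dark")
else:
    print("intermediate light level")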
Explanation: 2. Read single luminance value
Now read from the Grove Light sensor which is connected to the ADC. In this example, the PYNQ Grove Adapter is connected to PMODA interface on the board.
The Grove I2C ADC is used as a bridge between G4 on the Grove Adapter and the Grove Light Sensor.
End of explanation
import time
%matplotlib inline
import matplotlib.pyplot as plt
lgt.set_log_interval_ms(100)
lgt.start_log()
# Change input during this time
time.sleep(10)
r_log = lgt.get_log()
plt.plot(range(len(r_log)), r_log, 'ro')
plt.title('Grove Light Plot')
min_r_log = min(r_log)
max_r_log = max(r_log)
plt.axis([0, len(r_log), min_r_log, max_r_log])
plt.show()
Explanation: 3. Plot the light intensity over time
This Python code will do multiple light measurements over a 10 second period.
To change the light intensity, cover and uncover the light sensor. In typical ambient light, there is no need to provide an external light source, as the sensor is already reading at full scale.
End of explanation |
4,433 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Target selection bits and bitmasks
Author
Step1: The mask contains the name of the target bit (e.g. ELG) the bit value to which that name corresponds (e.g. 1, meaning 2-to-the-power-1), a description of the target (e.g. "ELG") and a dictionary of values that contain information for fiber assignment, such as the observing conditions allowed for the target, the initial priority with which the target class should be observed, and the initial number of observations for the target class. Note that these bits of information can be accessed individually in a number of ways
Step2: There are corresponding masks for the BGS and MWS, which can be accessed in the same way, e.g.
Step3: The Commissioning and Survey Validation bitmasks
In addition to the DESI Main Survey, desitarget produces targets for Commissioning ("CMX") and Survey Validation ("SV"). The CMX and SV bitmasks can be obtained and examined as follows (other manipulation of these masks is similar to the previous sub-section, above)
Step4: Using the bitmasks to understand a file of targets
Important aside!!!
Target classes have evolved throughout the history of the desitarget code, and the bits that correspond to those targets have thus occasionally changed. It is therefore critical that you use the same version of desitarget when working with bits in a target file as was used to create that target file!
For example, say you are working with commissioning targets that were created with version 0.X.X of desitarget. The correct version of Git to use to study this file can be obtained via
Step5: Note that if you took the file from my the examples directory, then you're using an example file that only contains a subset of columns.
Step6: Let's consider the value of DESI_TARGET for the forty-second target
Step7: What does this number mean? Well, let's see which target classes are defined by this integer
Step8: Now let's see what target classes are include for the first 10 targets
Step9: So far, we've looked at the target class for each target. Now, let's just extract target classes that correspond to a certain bit. For example, which of the first 10 targets have the 'BGS_ANY' bit set?
Step10: Which of all of the targets are both ELG and quasar targets?
Step11: Alternatively, more compactly"
Step12: You should note that the forty-second target studied above pops up in these lists!
Note that desi_mask contains a couple of special bits that simply denote whether a target is a BGS or MWS target. These are called BGS_ANY and MWS_ANY. For example
Step13: Bits representing targets for the Bright Galaxy Survey and Milky Way Survey can be manipulated in the same way as previous examples in this section. The relevant columns and masks are BGS_TARGET and bgs_mask, and MWS_TARGET and mws_mask respectively. For example
Step14: Working with target files for CMX or SV
As noted in the previous section, Commissioning and SV have different bitmasks. Conveniently, commissioning and SV also have different _TARGET column names, allowing a user to easily distinguish which "flavor" of file they are using
Step15: Let's see what would happen if our targets file was actually an SV file
Step16: An advanced example
As a challenge, let's try to find all quasar targets that are close to an LRG target using our example file of targets.
First, let's retrieve all LRG and QSO targets from our file.
Step17: We'll need the astropy spatial matching functions
Step18: Convert the lrgs and quasars to SkyCoord objects
Step19: Perform the match. Let's choose a radius of 1 arcminute
Step20: Finally, write out the matching lrgs and quasars, and the distance between them | Python Code:
from desitarget.targets import desi_mask, bgs_mask, mws_mask
print(desi_mask)
Explanation: Target selection bits and bitmasks
Author: Adam D. Myers, University of Wyoming
This Notebook describes how to work with target selection bitmasks for DESI.
Setting up your environment
First, ensure that your environment matches a standard DESI environment. For example:
module unload desimodules
source /project/projectdirs/desi/software/desi_environment.sh 18.7
desitarget relies on desiutil and desimodel, so you may also need to set up a wider DESI environment, as detailed at:
https://desi.lbl.gov/trac/wiki/Pipeline/GettingStarted/Laptop/JuneMeeting
It may also be useful to set up some additional environment variables that are used in some versions of the desitarget code (you could also place these in your .bash_profile.ext file):
export DESIMODEL=$HOME/git/desimodel
export DUST_DIR=/project/projectdirs/desi/software/edison/dust/v0_1/maps
export GAIA_DIR=/project/projectdirs/desi/target/gaia_dr2
Here, I've set DESIMODEL to a reasonable location. For a more detailed description of checking out the desimodel data files from svn see:
https://desi.lbl.gov/trac/wiki/Pipeline/GettingStarted/Laptop/JuneMeeting#Datafilesfordesimodel
Understanding the desitarget bitmasks
Main Survey bitmasks
The critical values that select_targets produces are the DESI_TARGET, BGS_TARGET and MWS_TARGET bit masks, which contain the target bits for the DESI main (or "dark time") survey and the Bright Galaxy Survey and Milky Way Survey respectively. Let's examine the masks that correspond to these surveys.
End of explanation
desi_mask["QSO"], desi_mask.QSO # ADM different ways of accessing the bit values.
desi_mask.names() # ADM the names of each target type.
desi_mask.names(7) # ADM the names of target classes that correspond to an integer value of 7.
# ADM note that 7 is 2**0 + 2**1 + 2**2.
desi_mask.bitnum("SKY") # ADM the integer value that corresponds to the "SKY" bit.
names = desi_mask.names()
bitnums = [desi_mask.bitnum(name) for name in names]
bitvals = [desi_mask[name] for name in names]
list(zip(names,bitnums,bitvals)) # ADM the bit and integer value for each defined name.
desi_mask["LRG"].priorities # ADM a dictionary of initial priorities for the LRG target class.
desi_mask["LRG"].obsconditions, desi_mask["LRG"].numobs, desi_mask["LRG"].priorities["MORE_ZGOOD"]
Explanation: The mask contains the name of the target bit (e.g. ELG) the bit value to which that name corresponds (e.g. 1, meaning 2-to-the-power-1), a description of the target (e.g. "ELG") and a dictionary of values that contain information for fiber assignment, such as the observing conditions allowed for the target, the initial priority with which the target class should be observed, and the initial number of observations for the target class. Note that these bits of information can be accessed individually in a number of ways:
End of explanation
bgs_mask.names()
mws_mask.names()
Explanation: There are corresponding masks for the BGS and MWS, which can be accessed in the same way, e.g.:
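(The exact bit names depend on the desitarget version you are using; BGS_BRIGHT is assumed here as a typical example.)
bgs_mask["BGS_BRIGHT"], bgs_mask.bitnum("BGS_BRIGHT")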
End of explanation
from desitarget.cmx.cmx_targetmask import cmx_mask
print(cmx_mask)
from desitarget.sv1.sv1_targetmask import desi_mask, bgs_mask, mws_mask
desi_mask.names()
Explanation: The Commissioning and Survey Validation bitmasks
In addition to the DESI Main Survey, desitarget produces targets for Commissioning ("CMX") and Survey Validation ("SV"). The CMX and SV bitmasks can be obtained and examined as follows (other manipulation of these masks is similar to the previous sub-section, above):
End of explanation
import os
from glob import glob
from astropy.io.fits import getdata
import numpy as np
# ADM replace this with any directory you know of that holds targets.
targdir = "/project/projectdirs/desi/target/catalogs/examples"
# ADM replace this with the name of any target file.
targfile = 'targets.fits'
targfile = os.path.join(targdir, targfile)
targs = getdata(targfile)
Explanation: Using the bitmasks to understand a file of targets
Important aside!!!
Target classes have evolved throughout the history of the desitarget code, and the bits that correspond to those targets have thus occasionally changed. It is therefore critical that you use the same version of desitarget when working with bits in a target file as was used to create that target file!
For example, say you are working with commissioning targets that were created with version 0.X.X of desitarget. The correct version of Git to use to study this file can be obtained via:
git checkout 0.X.X
and the corresponding .yaml file online on GitHub would be:
https://github.com/desihub/desitarget/blob/0.X.X/py/desitarget/cmx/data/cmx_targetmask.yaml
For example, for version 0.31.1 of desitarget issue:
git checkout 0.31.1
or look at:
https://github.com/desihub/desitarget/blob/0.31.1/py/desitarget/cmx/data/cmx_targetmask.yaml
Equivalently, for version 0.31.1 of desitarget for SV or the Main Survey:
https://github.com/desihub/desitarget/blob/0.31.1/py/desitarget/sv1/data/sv1_targetmask.yaml
https://github.com/desihub/desitarget/blob/0.31.1/py/desitarget/data/targetmask.yaml
Commissioning targeting bits are expected to be final as of version 0.32.0 of desitarget. SV and Main Survey bits may not yet be final.
Working with target files
The target files produced by select_targets contain many quantities from the Legacy Surveys data model sweeps files at, e.g.:
http://www.legacysurvey.org/dr7/files/#sweep-7-0-sweep-brickmin-brickmax-fits
The main columns added by select_targets are DESI_TARGET, BGS_TARGET and MWS_TARGET, which contain the output bitmasks from target selection. Let's take a closer look at how these columns can be used in conjunction with the bitmasks.
First, enter the Python prompt. Now, let's read in a file of targets. I'll assume you're working at NERSC, but set targdir, below, to wherever you have a targets- file.
End of explanation
print(targs.dtype)
Explanation: Note that if you took the file from my the examples directory, then you're using an example file that only contains a subset of columns.
End of explanation
targ = targs[41]
print(targ["DESI_TARGET"])
Explanation: Let's consider the value of DESI_TARGET for the forty-second target:
End of explanation
from desitarget.targets import desi_mask
desi_mask.names(targ["DESI_TARGET"])
Explanation: What does this number mean? Well, let's see which target classes are defined by this integer:
End of explanation
bitnames = np.array(desi_mask.names()) # ADM note the array conversion to help manipulation.
bitvals = [desi_mask[name] for name in bitnames]
for targ in targs[:10]:
w = np.where( (targ["DESI_TARGET"] & bitvals) != 0)[0]
print(targ["DESI_TARGET"], bitnames[w])
Explanation: Now let's see what target classes are include for the first 10 targets:
End of explanation
np.where((targs[:10]["DESI_TARGET"] & desi_mask["BGS_ANY"]) != 0)[0]
Explanation: So far, we've looked at the target class for each target. Now, let's just extract target classes that correspond to a certain bit. For example, which of the first 10 targets have the 'BGS_ANY' bit set?
End of explanation
isELG = (targs["DESI_TARGET"] & desi_mask["ELG"]) != 0
isQSO = (targs["DESI_TARGET"] & desi_mask["QSO"]) != 0
np.where(isELG & isQSO)[0]
Explanation: Which of all of the targets are both ELG and quasar targets?
End of explanation
bitvalboth = desi_mask["ELG"] + desi_mask["QSO"]
np.where(targs["DESI_TARGET"] & bitvalboth == bitvalboth)[0]
Explanation: Alternatively, more compactly"
End of explanation
print((targs[:10]["DESI_TARGET"] & desi_mask["BGS_ANY"]) != 0)
print(targs[:10]["BGS_TARGET"] != 0)
Explanation: You should note that the forty-second target studied above pops up in these lists!
Note that desi_mask contains a couple of special bits that simply denote whether a target is a BGS or MWS target. These are called BGS_ANY and MWS_ANY. For example:
End of explanation
from desitarget.targets import bgs_mask, mws_mask
bitnames = np.array(bgs_mask.names()) # ADM note the array conversion to help manipulation.
bitvals = [bgs_mask[name] for name in bitnames]
for targ in targs[:10]:
w = np.where( (targ["BGS_TARGET"] & bitvals) != 0)[0]
print(targ["BGS_TARGET"], bitnames[w])
Explanation: Bits representing targets for the Bright Galaxy Survey and Milky Way Survey can be manipulated in the same way as previous examples in this section. The relevant columns and masks are BGS_TARGET and bgs_mask, and MWS_TARGET and mws_mask respectively. For example:
End of explanation
import os, fitsio
# ADM replace this with any directory you know of that holds targets.
targdir = "/project/projectdirs/desi/target/catalogs/examples"
# ADM replace this with the name of any target file.
targfile = 'targets.fits'
targfile = os.path.join(targdir, targfile)
targs = fitsio.read(targfile)
# ADM load the convenient utility and use it.
from desitarget.targets import main_cmx_or_sv
[desi_target, bgs_target, mws_target], [desi_mask, bgs_mask, mws_mask], surv = main_cmx_or_sv(targs)
print(desi_target, mws_target)
print(surv)
print(bgs_mask)
Explanation: Working with target files for CMX or SV
As noted in the previous section, Commissioning and SV have different bitmasks. Conveniently, commissioning and SV also have different _TARGET column names, allowing a user to easily distinguish which "flavor" of file they are using:
Main Survey files have the columns DESI_TARGET, BGS_TARGET and MWS_TARGET.
Commissioning files have the column CMX_TARGET.
SV files have the columns SV1_DESI_TARGET, SV1_BGS_TARGET and SV1_MWS_TARGET.
A convenient utility is desitarget.targets.main_cmx_or_sv which will use the differing column names to load the appropriate mask or masks. For example, using our Main Survey example file:
End of explanation
import numpy.lib.recfunctions as rfn
sv1_targs = targs
for col in [desi_target, bgs_target, mws_target]:
sv1_targs = rfn.rename_fields(sv1_targs, {col: 'SV1_'+col})  # rename each column in turn so all three end up prefixed
[desi_target, bgs_target, mws_target], [desi_mask, bgs_mask, mws_mask], surv = main_cmx_or_sv(sv1_targs)
print(bgs_target, mws_target)
print(surv)
print(desi_mask)
Explanation: Let's see what would happen if our targets file was actually an SV file:
End of explanation
from desitarget.targets import desi_mask
isLRG = (targs["DESI_TARGET"] & desi_mask["LRG"]) != 0
isQSO = (targs["DESI_TARGET"] & desi_mask["QSO"]) != 0
lrgs, qsos = targs[isLRG], targs[isQSO]
# ADM a sanity check.
for qso in qsos[:10]:
print(desi_mask.names(qso["DESI_TARGET"]))
Explanation: An advanced example
As a challenge, let's try to find all quasar targets that are close to an LRG target using our example file of targets.
First, let's retrieve all LRG and QSO targets from our file.
End of explanation
from astropy.coordinates import SkyCoord
from astropy import units as u
Explanation: We'll need the astropy spatial matching functions:
End of explanation
clrgs = SkyCoord(lrgs["RA"], lrgs["DEC"], unit='degree')
cqsos = SkyCoord(qsos["RA"], qsos["DEC"], unit='degree')
Explanation: Convert the lrgs and quasars to SkyCoord objects:
End of explanation
matchrad = 20*u.arcsec
idlrgs, idqsos, sep, _ = cqsos.search_around_sky(clrgs, matchrad)
Explanation: Perform the match. Let's choose a radius of 20 arcseconds:
End of explanation
lrgmatch, qsomatch = lrgs[idlrgs], qsos[idqsos]
for i in range(len(lrgmatch)):
print("LRG coordinates: {:.4f} deg, {:.4f} deg".format(lrgmatch[i]["RA"], lrgmatch[i]["DEC"]))
print("QSO coordinates: {:.4f} deg, {:.4f} deg".format(qsomatch[i]["RA"], qsomatch[i]["DEC"]))
print("Angular separation: {:.4f} arcsec".format(sep.value[i]*3600))
Explanation: Finally, write out the matching lrgs and quasars, and the distance between them:
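As a small aside, the hand-coded factor of 3600 can be left to astropy's unit handling; this is equivalent to the sep.value[i]*3600 used above:
```python
# sep is an astropy Angle array, so converting it to arcseconds is explicit.
sep_arcsec = sep.to(u.arcsec)
for s in sep_arcsec[:10]:
    print("Angular separation: {:.4f} arcsec".format(s.value))
```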
End of explanation |
4,434 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1.x to 3.x Rule Migration Guide
This guide describes changes needed for rules to run under Insights Core 3.x.
It covers the following topics:
Step1: @rule Example
Step2: <a id="filtering"></a>
Filtering
Filters are now applied to datasources instead of certain Parser classes.
```python
from insights.core.filters import add_filter
like this
add_filter("messages", "KEEP_ME")
add_filter("messages", ["KEEUP_US", "KEEP_US_TOO"])
instead of this
Messages.filters.append("KEEP_ME")
Messages.filters.extend(["KEEP_US", "KEEP_US_TOO"])
```
<a id="cluster_rules"></a>
Cluster Rules
Cluster rules are the same as before. Just add cluster=True to the @rule decoration.
<a id="testing"></a>
Testing
Unit tests need to reflect the new rule function signatures.
@archive_provider calls should now pass the rule function instead of the rule module.
```python
from insights.plugins import vulnerable_kernel
like this
@archive_provider(vulnerable_kernel.report)
def integration_tests()
Step3: This allows data sources to generate content using the full power of python. Almost anything can go in the function body of a data source.
Directly defining data sources is powerful, but it's tedious when you just want to collect files or execute simple commands. The SpecFactory class streamlines those use cases by creating @datasource decorated functions for you.
Step4: Pass the name keyword to ensure the functions returned by SpecFactory have a sensible name and are attached to the defining module. | Python Code:
# Boilerplate used in later cells
# Not necessary for new rules.
from pprint import pprint
from insights.core import dr
from insights.core.filters import add_filter, get_filters
from insights.core.context import HostContext
from insights.core.plugins import make_response, rule
dr.load_components("insights.specs")
dr.load_components("insights.parsers")
dr.load_components("insights.combiners")
def run_component(component, broker=None):
graph = dr.get_dependency_graph(component)
if not broker:
broker = dr.Broker()
broker[HostContext] = HostContext()
return dr.run(graph, broker=broker)
Explanation: 1.x to 3.x Rule Migration Guide
This guide describes changes needed for rules to run under Insights Core 3.x.
It covers the following topics:
- @rule interface
- function signatures
- filtering
- cluster rules
- testing
- new style specs
<a id="rule_interface"></a>
@rule Interface
The requires keyword is gone, and required dependencies are no longer lists.
```python
@rule(requires=[InstalledRpms, PsAuxcww])
```
is now
```python
requires InstalledRpms and PsAuxcww
@rule(InstalledRpms, PsAuxcww)
```
If a rule requires at least one of a set of dependencies, they are specified in a list like before.
```python
requires InstalledRpms and at least one of ChkConfig or UnitFiles
@rule(InstalledRpms, [ChkConfig, UnitFiles])
```
And optional dependencies haven't changed.
```python
requires InstalledRpms and PsAuxcww. Will use NetstatS if it's available
@rule(InstalledRpms, PsAuxcww, optional=[NetstatS])
```
<a id="component_signature"></a>
Component Signature
The local and shared parameters are gone. Instead, component signatures should define parameters matching the dependencies in their @rule decorators.
```python
Requires InstalledRpms and PsAuxcww.
@rule(InstalledRpms, PsAuxcww)
def report_thing(rpms, ps):
pass
Requires InstalledRpms and at least one of ChkConfig or UnitFiles.
Both ChkConfig and UnitFiles may be populated, but only one of them is required.
If one of them isn't available, None is passed as its value.
@rule(InstalledRpms, [ChkConfig, UnitFiles])
def report_something(rpms, cfg, uf):
pass
Requires InstalledRpms, at least one of ChkConfig or UnitFiles, and will use NetstatS
if it's available. Notice how the order of report_something_else's parameter list
matches the order of the dependencies even when the dependency specification is
complicated.
@rule(InstalledRpms, [ChkConfig, UnitFiles], optional=[NetstatS])
def report_something_else(rpms, cfg, uf, netstat):
pass
```
End of explanation
from insights.parsers.installed_rpms import InstalledRpms
from insights.parsers.ps import PsAuxcww
@rule(InstalledRpms, PsAuxcww)
def report(rpms, ps):
rpm_name = "google-chrome-stable"
if rpm_name in rpms and "chrome" in ps:
rpm = rpms.get_max(rpm_name)
return make_response("CHROME_RUNNING",
version=rpm.version,
release=rpm.release,
arch=rpm.arch
)
broker = run_component(component=report)
pprint(broker[report])
Explanation: @rule Example
End of explanation
from insights.core.plugins import datasource
from insights.core.spec_factory import TextFileProvider
@datasource()
def release(broker):
return TextFileProvider("etc/redhat-release")
broker = run_component(release)
print(broker[release].content)
Explanation: <a id="filtering"></a>
Filtering
Filters are now applied to datasources instead of certain Parser classes.
```python
from insights.core.filters import add_filter
like this
add_filter("messages", "KEEP_ME")
add_filter("messages", ["KEEUP_US", "KEEP_US_TOO"])
instead of this
Messages.filters.append("KEEP_ME")
Messages.filters.extend(["KEEP_US", "KEEP_US_TOO"])
```
<a id="cluster_rules"></a>
Cluster Rules
Cluster rules are the same as before. Just add cluster=True to the @rule decoration.
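A minimal sketch, reusing a dependency from the earlier examples; only the cluster=True flag is the point here — the rule name and body are placeholders:
```python
@rule(InstalledRpms, cluster=True)
def cluster_report(rpms):
    pass
```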
<a id="testing"></a>
Testing
Unit tests need to reflect the new rule function signatures.
@archive_provider calls should now pass the rule function instead of the rule module.
```python
from insights.plugins import vulnerable_kernel
like this
@archive_provider(vulnerable_kernel.report)
def integration_tests():
...
instead of this
@archive_provider(vulnerable_kernel)
def integration_tests():
...
```
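Rules can also be unit tested by calling them directly with whatever their new signature expects. A hedged sketch against the report() example from this guide, using made-up stand-ins rather than real parser objects:
```python
# The doubles below are stand-ins, not real parsers; they only provide the
# behaviour report() relies on ("in" checks and get_max()).
class FakeRpm(object):
    version, release, arch = "57.0", "1", "x86_64"

class FakeInstalledRpms(dict):
    def get_max(self, name):
        return self[name]

def test_report_fires():
    rpms = FakeInstalledRpms({"google-chrome-stable": FakeRpm()})
    ps = ["chrome"]              # report() only checks "chrome" in ps
    result = report(rpms, ps)
    assert result is not None    # make_response(...) produced a hit
```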
<a id="new_style_specs"></a>
New Style Specs
Specs in 3.x are called "data sources", and they're functions like rules and other components. However, they are special because they get passed an object called a broker instead of directly getting their dependencies, and they're meant to execute on the machine you want to analyze. The broker is like the shared object in 1.x.
End of explanation
from insights.core.spec_factory import SpecFactory
sf = SpecFactory()
hosts = sf.simple_file("/etc/hosts", name="hosts")
uptime = sf.simple_command("/bin/uptime", name="uptime")
print(hosts)
print(uptime)
Explanation: This allows data sources to generate content using the full power of python. Almost anything can go in the function body of a data source.
Directly defining data sources is powerful, but it's tedious when you just want to collect files or execute simple commands. The SpecFactory class streamlines those use cases by creating @datasource decorated functions for you.
End of explanation
broker = run_component(hosts)
broker = run_component(uptime, broker=broker)
pprint(broker[hosts].content)
pprint(broker[uptime].content)
Explanation: Pass the name keyword to ensure the functions returned by SpecFactory have a sensible name and are attached to the defining module.
End of explanation |
4,435 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Automating image building
We saw in the last notebook how we can build images of our funwave-tvd code and use Agave to make the process a bit easier. We can take some lessons learned from the devops community to automate the building of our images and implement basic benchmarking and testing.
While the Agave fork app we created is handy, it doesn't provide particularly good visibility, let alone security. We certainly do not want to share an app like that for others to use. So, let's start by creating a new Agave app that will build our Docker container. First up, our updated app assets.
Creating more meaningful Dockerfiles
While functional, our previous Dockerfile didn't give us much info we could use for things like attribution, discovery, etc. Let's add in some additional fields to give our Dockerfile meaning. We will use a couple new Dockerfile directives to do this
ARG a runtime argument supplied to the docker build command
LABEL one or more terms applied to the image as metadata
Step3: Single purpose wrapper scripts
In our previous wrapper script, we simply took whatever was given to us and ran it. Here we will restrict the wrapper to run a specific build command.
Note that we mix and match a couple of variable types. The AGAVE_* variables are template variables resolved by Agave at runtime with the values from the job details. The version variable is a parameter we will define in our app description.
Step5: More descriptive apps
Now we need to create some JSON to tell Agave how to run and advertise our app. This app definition will look a lot like the fork app definition with a few changes. First, we are updating the app id so a new app will be created. Second we change the parameter.
code_version is a string parameter describing the version of the code.
We have also removed the data file input from the previous app description. This is because our deployment folder contains the Dockerfile to build our image. No other info is needed to run our build app.
Step7: Here is our default test file
Step9: Running the build
Step11: Now we'll run our build using the following job request. This is very similar to before.
Step13: Because the setvar() command can evalute $() style bash shell substitutions, we will use it to submit our job. This will capture the output of the submit command, and allow us to parse it for the JOB_ID. We'll use the JOB_ID in several subsequent steps.
Step15: To rebuild our Docker image, we can rerun our job submission command, or simply resumbit the previous job. | Python Code:
writefile("funwave-tvd-docker-automation/Dockerfile",
FROM stevenrbrandt/science-base
MAINTAINER Steven R. Brandt <[email protected]>
ARG BUILD_DATE
ARG VERSION
LABEL org.agaveplatform.ax.architecture="x86_64" \
org.agaveplatform.ax.build-date="\$BUILD_DATE" \
org.agaveplatform.ax.version="\$VERSION" \
org.agaveplatform.ax.name="${AGAVE_USERNAME}/funwave-tvd" \
org.agaveplatform.ax.summary="Funwave-TVD is a code to simulate the shallow water and Boussinesq equations written by Dr. Fengyan Shi." \
org.agaveplatform.ax.vcs-type="git" \
org.agaveplatform.ax.vcs-url="https://github.com/fengyanshi/FUNWAVE-TVD" \
org.agaveplatform.ax.license="BSD 3-clause"
USER root
RUN mkdir -p /home/install
RUN chown jovyan /home/install
USER jovyan
RUN cd /home/install && \
git clone https://github.com/fengyanshi/FUNWAVE-TVD && \
cd FUNWAVE-TVD/src && \
perl -p -i -e 's/FLAG_8 = -DCOUPLING/#$&/' Makefile && \
make
WORKDIR /home/install/FUNWAVE-TVD/src
RUN mkdir -p /home/jovyan/rundir
WORKDIR /home/jovyan/rundir
)
Explanation: Automating image building
We saw in the last notebook how we can build images of our funwave-tvd code and use Agave to make the process a bit easier. We can take some lessons learned from the devops community to automate the building of our images and implement basic benchmarking and testing.
While the Agave fork app we created is handy, it doesn't provide particularly good visibility, let alone security. We certainly do not want to share an app like that for others to use. So, let's start by creating a new Agave app that will build our Docker container. First up, our updated app assets.
Creating more meaningful Dockerfiles
While functional, our previous Dockerfile didn't give us much info we could use for things like attribution, discovery, etc. Let's add in some additional fields to give our Dockerfile meaning. We will use a couple of new Dockerfile directives to do this:
ARG a runtime argument supplied to the docker build command
LABEL one or more terms applied to the image as metadata
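Once the image has been built somewhere, those labels can be read back for discovery. A hedged sketch using the Docker CLI from Python; the funwave-tvd:latest tag is an assumption matching the wrapper script's funwave-tvd:<code_version>:
```python
import json
import subprocess

# Sketch only: read back the LABEL metadata after a build.
image = "funwave-tvd:latest"
raw = subprocess.check_output(["docker", "inspect", image])
labels = json.loads(raw)[0]["Config"]["Labels"] or {}
for key, value in sorted(labels.items()):
    print("{} = {}".format(key, value))
```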
End of explanation
writefile("funwave-tvd-docker-automation/funwave-build-wrapper.txt",
sudo docker build \
--build-arg "BUILD_DATE=\${AGAVE_JOB_SUBMIT_TIME}" \
--build-arg "VERSION=\${code_version}" \
--rm -t funwave-tvd:\${code_version} .
docker inspect funwave-tvd:\${code_version}
)
Explanation: Single purpose wrapper scripts
In our previous wrapper script, we simply took whatever was given to us and ran it. Here we will restrict the wrapper to run a specific build command.
Note that we mix and match a couple of variable types. The AGAVE_* variables are template variables resolved by Agave at runtime with the values from the job details. The version variable is a parameter we will define in our app description.
End of explanation
writefile("funwave-tvd-docker-automation/funwave-build-app.txt",
{
"name":"${AGAVE_USERNAME}-${MACHINE_NAME}-funwave-dbuild",
"version":"1.0",
"label":"Builds the funwave docker image",
"shortDescription":"Funwave docker build",
"longDescription":"",
"deploymentSystem":"${AGAVE_STORAGE_SYSTEM_ID}",
"deploymentPath":"automation/funwave-tvd-docker-automation",
"templatePath":"funwave-build-wrapper.txt",
"testPath":"test.txt",
"executionSystem":"${AGAVE_EXECUTION_SYSTEM_ID}",
"executionType":"CLI",
"parallelism":"SERIAL",
"modules":[],
"inputs":[],
"parameters":[{
"id" : "code_version",
"value" : {
"visible":true,
"required":true,
"type":"string",
"order":0,
"enquote":false,
"default":"latest"
},
"details":{
"label": "Version of the code",
"description": "If true, output will be packed and compressed",
"argument": null,
"showArgument": false,
"repeatArgument": false
},
"semantics":{
"argument": null,
"showArgument": false,
"repeatArgument": false
}
}],
"outputs":[]
}
)
Explanation: More descriptive apps
Now we need to create some JSON to tell Agave how to run and advertise our app. This app definition will look a lot like the fork app definition with a few changes. First, we are updating the app id so a new app will be created. Second, we change the parameter.
code_version is a string parameter describing the version of the code.
We have also removed the data file input from the previous app description. This is because our deployment folder contains the Dockerfile to build our image. No other info is needed to run our build app.
End of explanation
writefile("funwave-tvd-docker-automation/test.txt",
code_version=latest
)
!files-mkdir -S ${AGAVE_STORAGE_SYSTEM_ID} -N automation
!files-upload -S ${AGAVE_STORAGE_SYSTEM_ID} -F funwave-tvd-docker-automation automation
!apps-addupdate -F funwave-tvd-docker-automation/funwave-build-app.txt
Explanation: Here is our default test file
End of explanation
requestbin_url = !requestbin-create
os.environ['REQUESTBIN_URL'] = requestbin_url[0]
setvar(
WEBHOOK_URL=${REQUESTBIN_URL}
)
Explanation: Running the build
End of explanation
writefile("funwave-tvd-docker-automation/job.json",
{
"name":"funwave-build",
"appId": "${AGAVE_USERNAME}-${MACHINE_NAME}-funwave-dbuild-1.0",
"maxRunTime":"00:10:00",
"archive": false,
"notifications": [
{
"url":"${WEBHOOK_URL}",
"event":"*",
"persistent":"true"
}
],
"parameters": {
"code_version":"latest"
}
}
)
Explanation: Now we'll run our build using the following job request. This is very similar to before.
End of explanation
setvar(
# Capture the output of the job submit command
OUTPUT=$(jobs-submit -F funwave-tvd-docker-automation/job.json)
# Parse out the job id from the output
JOB_ID=$(echo $OUTPUT | cut -d' ' -f4)
)
for iter in range(20):
setvar("STAT=$(jobs-status $JOB_ID)")
stat = os.environ["STAT"]
sleep(5.0)
if stat == "FINISHED" or stat == "FAILED":
break
!jobs-output-get -P ${JOB_ID} funwave-build.out
Explanation: Because the setvar() command can evaluate $() style bash shell substitutions, we will use it to submit our job. This will capture the output of the submit command, and allow us to parse it for the JOB_ID. We'll use the JOB_ID in several subsequent steps.
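Roughly what that cut -d' ' -f4 does, expressed in Python for clarity (assuming setvar() has exported OUTPUT into the environment):
```python
import os

output = os.environ.get("OUTPUT", "")
parts = output.split()
job_id = parts[3] if len(parts) > 3 else None
print(job_id)
```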
End of explanation
setvar(
# Capture the output of the job submit command
OUTPUT=$(jobs-resubmit ${JOB_ID})
# Parse out the job id from the output
JOB_ID=$(echo $OUTPUT | cut -d' ' -f4)
)
Explanation: To rebuild our Docker image, we can rerun our job submission command, or simply resubmit the previous job.
End of explanation |
4,436 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div style="width
Step1: Import Python libraries
Step2: Display options
Step3: Set directories
Step4: Chromedriver
If you want to download from sources which require scraping, download the appropriate version of Chromedriver for your platform, name it chromedriver, create folder chromedriver in the working directory, and move the driver to it. It is used by Selenium to scrape the links from web pages.
The current list of sources which require scraping (as of December 2018)
Step5: Set up a log
Step6: Execute for more detailed logging message (May slow down computation).
Step7: Select timerange
This section
Step8: Select download source
Instead of downloading from the sources, the complete raw data can be downloaded as a zip file from the OPSD Server. Advantages are
Step9: Select subset
Read in the configuration file which contains all the required infos for the download.
Step10: The next cell prints the available sources and datasets.<br>
Copy from its output and paste into the following cell to get the right format.<br>
Step11: Optionally, specify a subset to download/read.<br>
Type subset = None to include all data.
Step12: Now eliminate sources and datasets not in subset.
Step13: Download
This section
Step14: Automatic download (for most sources)
Step15: Manual download
Energinet.dk
Go to http
Step16: Read a prepared table containing meta data on the geographical areas
Step17: View the areas table
Step18: Reading loop
Loop through sources and datasets to do the reading.
First read the original CSV, Excel etc. files into pandas DataFrames.
Step19: Then combine the DataFrames that have the same temporal resolution
Step20: Display some rows of the dataframes to get a first impression of the data.
Step21: Save raw data
Save the DataFrames created by the read function to disk. This way you have the raw data to fall back to if something goes wrong in the remainder of this notebook without having to repeat the previous steps.
Step22: Load the DataFrames saved above
Step23: Processing
This section
Step24: Execute this to see an example of where the data has been patched.
Display the table of regions of missing values
Step25: You can export the NaN-tables to Excel in order to inspect where there are NaNs
Step26: Save/Load the patched data sets
Step27: Some of the following operations require the Dataframes to be lexsorted in the columns
Step28: Aggregate wind offshore + onshore
Step29: Country specific calculations - not used in this release
Germany
Aggregate German data from individual TSOs
The wind and solar in-feed data for the 4 German control areas is summed up and stored in a new column. The column headers are created in the fashion introduced in the read script. Takes 5 seconds to run.
Step30: Italy
Generation data for Italy come by region (North, Central North, Sicily, etc.) and separately for DSO and TSO, so they need to be aggregated in order to get values for the whole country. In the next cell, we sum up the data by region and for each variable-attribute pair present in the Terna dataset header.
Step31: Great Britain / United Kingdom
Data for Great Britain (without Northern Ireland) are disaggregated for DSO and TSO connected generators. We calculate aggregate values.
Step32: Calculate availabilities/profiles
Calculate profiles, that is, the share of wind/solar capacity producing at a given time.
Step33: Some of the following operations require the Dataframes to be lexsorted in the columns
Step34: Another savepoint
Step35: Resample higher frequencies to 60'
Some data comes in 15 or 30-minute intervals (i.e. German or British renewable generation), other in 60-minutes (i.e. load data from ENTSO-E and Prices). We resample the 15 and 30-minute data to hourly resolution and append it to the 60-minutes dataset.
The .resample('H').mean() method calculates the means from the values for 4 quarter hours [
Step36: Fill columns not retrieved directly from TSO websites with ENTSO-E Transparency data
Step37: Insert a column with Central European (Summer-)time
The index column of the data sets defines the start of the time period represented by each row of that data set in UTC time. We include an additional column for the CE(S)T Central European (Summer-) Time, as this might help aligning the output data with other data sources.
Step38: Create a final savepoint
Step39: Show the column names contained in the final DataFrame in a table
Step40: Write data to disk
This section
Step41: Different shapes
Data are provided in three different "shapes"
Step42: Write to SQLite-database
This file format is required for the filtering function on the OPSD website. This takes ~3 minutes to complete.
Step43: Write to Excel
Writing the full tables to Excel takes extremely long. As a workaround, only the timestamp-columns are exported. The rest of the data can then be inserted manually from the _multindex.csv files.
Step44: Write to CSV
This takes about 10 minutes to complete.
Step45: Create metadata
This section
Step46: Write checksums.txt
We publish SHA-checksums for the outputfiles on GitHub to allow verifying the integrity of outputfiles on the OPSD server. | Python Code:
version = '2020-10-06'
changes = '''Yearly update'''
Explanation: <div style="width:100%; background-color: #D9EDF7; border: 1px solid #CFCFCF; text-align: left; padding: 10px;">
<b>Time series: Processing Notebook</b>
<ul>
<li><a href="main.ipynb">Main Notebook</a></li>
<li>Processing Notebook</li>
</ul>
<br>This Notebook is part of the <a href="http://data.open-power-system-data.org/time_series">Time series Data Package</a> of <a href="http://open-power-system-data.org">Open Power System Data</a>.
</div>
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Introductory-Notes" data-toc-modified-id="Introductory-Notes-1"><span class="toc-item-num">1 </span>Introductory Notes</a></span></li><li><span><a href="#Settings" data-toc-modified-id="Settings-2"><span class="toc-item-num">2 </span>Settings</a></span><ul class="toc-item"><li><span><a href="#Set-version-number-and-recent-changes" data-toc-modified-id="Set-version-number-and-recent-changes-2.1"><span class="toc-item-num">2.1 </span>Set version number and recent changes</a></span></li><li><span><a href="#Import-Python-libraries" data-toc-modified-id="Import-Python-libraries-2.2"><span class="toc-item-num">2.2 </span>Import Python libraries</a></span></li><li><span><a href="#Display-options" data-toc-modified-id="Display-options-2.3"><span class="toc-item-num">2.3 </span>Display options</a></span></li><li><span><a href="#Set-directories" data-toc-modified-id="Set-directories-2.4"><span class="toc-item-num">2.4 </span>Set directories</a></span></li><li><span><a href="#Chromedriver" data-toc-modified-id="Chromedriver-2.5"><span class="toc-item-num">2.5 </span>Chromedriver</a></span></li><li><span><a href="#Set-up-a-log" data-toc-modified-id="Set-up-a-log-2.6"><span class="toc-item-num">2.6 </span>Set up a log</a></span></li><li><span><a href="#Select-timerange" data-toc-modified-id="Select-timerange-2.7"><span class="toc-item-num">2.7 </span>Select timerange</a></span></li><li><span><a href="#Select-download-source" data-toc-modified-id="Select-download-source-2.8"><span class="toc-item-num">2.8 </span>Select download source</a></span></li><li><span><a href="#Select-subset" data-toc-modified-id="Select-subset-2.9"><span class="toc-item-num">2.9 </span>Select subset</a></span></li></ul></li><li><span><a href="#Download" data-toc-modified-id="Download-3"><span class="toc-item-num">3 </span>Download</a></span><ul class="toc-item"><li><span><a href="#Automatic-download-(for-most-sources)" data-toc-modified-id="Automatic-download-(for-most-sources)-3.1"><span class="toc-item-num">3.1 </span>Automatic download (for most sources)</a></span></li><li><span><a href="#Manual-download" data-toc-modified-id="Manual-download-3.2"><span class="toc-item-num">3.2 </span>Manual download</a></span><ul class="toc-item"><li><span><a href="#Energinet.dk" data-toc-modified-id="Energinet.dk-3.2.1"><span class="toc-item-num">3.2.1 </span>Energinet.dk</a></span></li><li><span><a href="#CEPS" data-toc-modified-id="CEPS-3.2.2"><span class="toc-item-num">3.2.2 </span>CEPS</a></span></li><li><span><a href="#ENTSO-E-Power-Statistics" data-toc-modified-id="ENTSO-E-Power-Statistics-3.2.3"><span class="toc-item-num">3.2.3 </span>ENTSO-E Power Statistics</a></span></li></ul></li></ul></li><li><span><a href="#Read" data-toc-modified-id="Read-4"><span class="toc-item-num">4 </span>Read</a></span><ul class="toc-item"><li><span><a href="#Preparations" data-toc-modified-id="Preparations-4.1"><span class="toc-item-num">4.1 </span>Preparations</a></span></li><li><span><a href="#Reading-loop" data-toc-modified-id="Reading-loop-4.2"><span class="toc-item-num">4.2 </span>Reading loop</a></span></li><li><span><a href="#Save-raw-data" data-toc-modified-id="Save-raw-data-4.3"><span class="toc-item-num">4.3 </span>Save raw data</a></span></li></ul></li><li><span><a href="#Processing" data-toc-modified-id="Processing-5"><span class="toc-item-num">5 </span>Processing</a></span><ul class="toc-item"><li><span><a href="#Missing-data-handling" 
data-toc-modified-id="Missing-data-handling-5.1"><span class="toc-item-num">5.1 </span>Missing data handling</a></span><ul class="toc-item"><li><span><a href="#Interpolation" data-toc-modified-id="Interpolation-5.1.1"><span class="toc-item-num">5.1.1 </span>Interpolation</a></span></li></ul></li><li><span><a href="#Aggregate-wind-offshore-+-onshore" data-toc-modified-id="Aggregate-wind-offshore-+-onshore-5.2"><span class="toc-item-num">5.2 </span>Aggregate wind offshore + onshore</a></span></li><li><span><a href="#Country-specific-calculations---not-used-in-this-release" data-toc-modified-id="Country-specific-calculations---not-used-in-this-release-5.3"><span class="toc-item-num">5.3 </span>Country specific calculations - not used in this release</a></span><ul class="toc-item"><li><span><a href="#Germany" data-toc-modified-id="Germany-5.3.1"><span class="toc-item-num">5.3.1 </span>Germany</a></span><ul class="toc-item"><li><span><a href="#Aggregate-German-data-from-individual-TSOs" data-toc-modified-id="Aggregate-German-data-from-individual-TSOs-5.3.1.1"><span class="toc-item-num">5.3.1.1 </span>Aggregate German data from individual TSOs</a></span></li></ul></li><li><span><a href="#Italy" data-toc-modified-id="Italy-5.3.2"><span class="toc-item-num">5.3.2 </span>Italy</a></span></li><li><span><a href="#Great-Britain-/-United-Kingdom" data-toc-modified-id="Great-Britain-/-United-Kingdom-5.3.3"><span class="toc-item-num">5.3.3 </span>Great Britain / United Kingdom</a></span></li></ul></li><li><span><a href="#Calculate-availabilities/profiles" data-toc-modified-id="Calculate-availabilities/profiles-5.4"><span class="toc-item-num">5.4 </span>Calculate availabilities/profiles</a></span></li><li><span><a href="#Resample-higher-frequencies-to-60'" data-toc-modified-id="Resample-higher-frequencies-to-60'-5.5"><span class="toc-item-num">5.5 </span>Resample higher frequencies to 60'</a></span></li><li><span><a href="#Fill-columns-not-retrieved-directly-from-TSO-webites-with--ENTSO-E-Transparency-data" data-toc-modified-id="Fill-columns-not-retrieved-directly-from-TSO-webites-with--ENTSO-E-Transparency-data-5.6"><span class="toc-item-num">5.6 </span>Fill columns not retrieved directly from TSO webites with ENTSO-E Transparency data</a></span></li><li><span><a href="#Insert-a-column-with-Central-European-(Summer-)time" data-toc-modified-id="Insert-a-column-with-Central-European-(Summer-)time-5.7"><span class="toc-item-num">5.7 </span>Insert a column with Central European (Summer-)time</a></span></li></ul></li><li><span><a href="#Create-a-final-savepoint" data-toc-modified-id="Create-a-final-savepoint-6"><span class="toc-item-num">6 </span>Create a final savepoint</a></span></li><li><span><a href="#Write-data-to-disk" data-toc-modified-id="Write-data-to-disk-7"><span class="toc-item-num">7 </span>Write data to disk</a></span><ul class="toc-item"><li><span><a href="#Limit-time-range" data-toc-modified-id="Limit-time-range-7.1"><span class="toc-item-num">7.1 </span>Limit time range</a></span></li><li><span><a href="#Different-shapes" data-toc-modified-id="Different-shapes-7.2"><span class="toc-item-num">7.2 </span>Different shapes</a></span></li><li><span><a href="#Write-to-SQLite-database" data-toc-modified-id="Write-to-SQLite-database-7.3"><span class="toc-item-num">7.3 </span>Write to SQLite-database</a></span></li><li><span><a href="#Write-to-Excel" data-toc-modified-id="Write-to-Excel-7.4"><span class="toc-item-num">7.4 </span>Write to Excel</a></span></li><li><span><a href="#Write-to-CSV" 
data-toc-modified-id="Write-to-CSV-7.5"><span class="toc-item-num">7.5 </span>Write to CSV</a></span></li><li><span><a href="#Create-metadata" data-toc-modified-id="Create-metadata-7.6"><span class="toc-item-num">7.6 </span>Create metadata</a></span></li><li><span><a href="#Write-checksums.txt" data-toc-modified-id="Write-checksums.txt-7.7"><span class="toc-item-num">7.7 </span>Write checksums.txt</a></span></li></ul></li></ul></div>
Introductory Notes
This Notebook handles missing data, performs calculations and aggragations and creates the output files.
Settings
This section performs some preparatory steps.
Set version number and recent changes
Executing this script till the end will create a new version of the data package.
The Version number specifies the local directory for the data <br>
We include a note on what has been changed.
End of explanation
# Python modules
from datetime import datetime, date, timedelta, time
import pandas as pd
import numpy as np
import logging
import logging.handlers  # needed explicitly for TimedRotatingFileHandler below
import json
import sqlite3
import yaml
import itertools
import os
import pytz
from shutil import copyfile
import pickle
# Skripts from time-series repository
from timeseries_scripts.read import read
from timeseries_scripts.download import download
from timeseries_scripts.imputation import find_nan, mark_own_calc
from timeseries_scripts.make_json import make_json, get_sha_hash
# Reload modules with execution of any code, to avoid having to restart
# the kernel after editing timeseries_scripts
%load_ext autoreload
%autoreload 2
# speed up tab completion in Jupyter Notebook
%config Completer.use_jedi = False
Explanation: Import Python libraries
End of explanation
# Allow pretty-display of multiple variables
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
# Adjust the way pandas DataFrames a re displayed to fit more columns
pd.reset_option('display.max_colwidth')
pd.options.display.max_columns = 60
# pd.options.display.max_colwidth=5
Explanation: Display options
End of explanation
# make sure the working directory is this file's directory
try:
os.chdir(home_path)
except NameError:
home_path = os.path.realpath('.')
# optionally, set a different directory to store outputs and raw data,
# which will take up around 15 GB of disk space
#Milos: save_path is None <=> use_external_dir == False
use_external_dir = True
if use_external_dir:
save_path = os.path.join('C:', os.sep, 'OPSD_time_series_data')
else:
save_path = home_path
input_path = os.path.join(home_path, 'input')
sources_yaml_path = os.path.join(home_path, 'input', 'sources.yml')
areas_csv_path = os.path.join(home_path, 'input', 'areas.csv')
data_path = os.path.join(save_path, version, 'original_data')
out_path = os.path.join(save_path, version)
temp_path = os.path.join(save_path, 'temp')
parsed_path = os.path.join(save_path, 'parsed')
chromedriver_path = os.path.join(home_path, 'chromedriver', 'chromedriver')
for path in [data_path, out_path, temp_path, parsed_path]:
os.makedirs(path, exist_ok=True)
# change to temp directory
os.chdir(temp_path)
os.getcwd()
Explanation: Set directories
End of explanation
# Deciding whether to use the provided database of Terna links
extract_new_terna_urls = False
# Saving the choice
f = open("extract_new_terna_urls.pickle", "wb")
pickle.dump(extract_new_terna_urls, f)
f.close()
Explanation: Chromedriver
If you want to download from sources which require scraping, download the appropriate version of Chromedriver for your platform, name it chromedriver, create folder chromedriver in the working directory, and move the driver to it. It is used by Selenium to scrape the links from web pages.
The current list of sources which require scraping (as of December 2018):
- Terna
- Note that the package contains a database of Terna links up to 20 December 2018. By default, the links are first looked up in this database, so if the end date of your query is not after 20 December 2018, you won't need Selenium. In the case that you need later dates, you have two options. If you set the variable extract_new_terna_urls to True, then Selenium will be used to download the files for those later dates. If you set extract_new_terna_urls to False (which is the default value), only the recorded links will be consulted and Selenium will not be used.
- Note: Make sure that the database file, recorded_terna_urls.csv, is located in the working directory.
End of explanation
# Configure the display of logs in the notebook and attach it to the root logger
logstream = logging.StreamHandler()
logstream.setLevel(logging.INFO) #threshold for log messages displayed in here
logging.basicConfig(level=logging.INFO, handlers=[logstream])
# Set up an additional logger for debug messages from the scripts
script_logger = logging.getLogger('timeseries_scripts')
script_logger.setLevel(logging.DEBUG)
formatter = logging.Formatter(fmt='%(asctime)s %(name)s %(levelname)s %(message)s',
datefmt='%Y-%m-%d %H:%M:%S',)
# Set up a logger for logs from the notebook
logger = logging.getLogger('notebook')
# Set up a logfile and attach it to both loggers
logfile = logging.handlers.TimedRotatingFileHandler(os.path.join(temp_path, 'logfile.log'), when='midnight')
logfile.setFormatter(formatter)
logfile.setLevel(logging.DEBUG) #threshold for log messages in logfile
script_logger.addHandler(logfile)
logger.addHandler(logfile)
Explanation: Set up a log
End of explanation
logstream.setLevel(logging.DEBUG)
Explanation: Execute for more detailed logging messages (may slow down computation).
End of explanation
start_from_user = date(2015, 1, 1)
end_from_user = date(2020, 9, 30)
Explanation: Select timerange
This section: select the time range and the data sources for download and read. Default: all data sources implemented, full time range available.
Source parameters are specified in input/sources.yml, which describes, for each source, the datasets (such as wind and solar generation) alongside all the parameters necessary to execute the downloads.
The option to perform downloading and reading of subsets is for testing only. To be able to run the script successfully until the end, all sources have to be included, or otherwise the script will run into errors (i.e. the step where aggregate German timeseries are calculated requires data from all four German TSOs to be loaded).
In order to do this, specify the beginning and end of the interval for which to attempt the download.
Type None to download all available data.
End of explanation
archive_version = None # i.e. '2016-07-14'
Explanation: Select download source
Instead of downloading from the sources, the complete raw data can be downloaded as a zip file from the OPSD Server. Advantages are:
- much faster download
- back up of raw data in case it is deleted from the server at the original source
In order to do this, specify an archive version to use the raw data from that version that has been cached on the OPSD server as input. All data from that version will be downloaded - timerange and subset will be ignored.
Type None to download directly from the original sources.
End of explanation
with open(sources_yaml_path, 'r', encoding='UTF-8') as f:
sources = yaml.full_load(f.read())
Explanation: Select subset
Read in the configuration file which contains all the required infos for the download.
End of explanation
for k, v in sources.items():
print(yaml.dump({k: list(v.keys())}, default_flow_style=False))
Explanation: The next cell prints the available sources and datasets.<br>
Copy from its output and paste into the following cell to get the right format.<br>
End of explanation
subset = yaml.full_load('''
ENTSO-E Transparency FTP:
- Actual Generation per Production Type
- Actual Total Load
- Day-ahead Total Load Forecast
- Day-ahead Prices
OPSD:
- capacity
''')
exclude=None
Explanation: Optionally, specify a subset to download/read.<br>
Type subset = None to include all data.
End of explanation
with open(sources_yaml_path, 'r', encoding='UTF-8') as f:
sources = yaml.full_load(f.read())
if subset: # eliminate sources and datasets not in subset
sources = {source_name:
{k: v for k, v in sources[source_name].items()
if k in dataset_list}
for source_name, dataset_list in subset.items()}
if exclude: # eliminate sources and variables in exclude
sources = {source_name: dataset_dict
for source_name, dataset_dict in sources.items()
if not source_name in exclude}
# Printing the selected sources (all of them or just a subset)
print("Selected sources: ")
for k, v in sources.items():
print(yaml.dump({k: list(v.keys())}, default_flow_style=False))
Explanation: Now eliminate sources and datasets not in subset.
End of explanation
auth = yaml.full_load('''
ENTSO-E Transparency FTP:
username: your email
password: your password
Elexon:
username: your email
password: your password
''')
Explanation: Download
This section: download data. Takes about 1 hour to run for the complete data set (subset=None).
First, a data directory is created on your local computer. Then, download parameters for each data source are defined, including the URL. These parameters are then turned into a YAML-string. Finally, the download is executed file by file.
Each file is saved under its original filename. Note that the original file names are often not self-explanatory (called "data" or "January"). The file's content is revealed by its place in the directory structure.
Some sources (currently only ENTSO-E Transparency) require an account to allow downloading. For ENTSO-E Transparency, set up an account here.
End of explanation
download(sources, data_path, input_path, chromedriver_path, auth,
archive_version=archive_version,  # use the archive selection made above (None = original sources)
start_from_user=start_from_user,
end_from_user=end_from_user,
testmode=False)
Explanation: Automatic download (for most sources)
End of explanation
headers = ['region', 'variable', 'attribute', 'source', 'web', 'unit']
Explanation: Manual download
Energinet.dk
Go to http://osp.energinet.dk/_layouts/Markedsdata/framework/integrations/markedsdatatemplate.aspx.
Check The Boxes as specified below:
- Periode
- Hent udtræk fra perioden: 01-01-2005 Til: 01-01-2019
- Select all months
- Datakolonner
- Elspot Pris, Valutakode/MWh: Select all
- Produktion og forbrug, MWh/h: Select all
- Udtræksformat
- Valutakode: EUR
- Decimalformat: Engelsk talformat (punktum som decimaltegn
- Datoformat: Andet datoformat (ÅÅÅÅ-MM-DD)
- Hent Udtræk: Til Excel
Click Hent Udtræk
You will receive a file Markedsata.xls of about 50 MB. Open the file in Excel. There will be a warning from Excel saying that file extension and content are in conflict. Select "open anyways" and and save the file as .xlsx.
In order to be found by the read-function, place the downloaded file in the following subdirectory:
{{data_path}}{{os.sep}}Energinet.dk{{os.sep}}prices_wind_solar{{os.sep}}2005-01-01_2019-01-01
CEPS
Go to http://www.ceps.cz/en/all-data#GenerationRES
check boxes as specified below:
DISPLAY DATA FOR: Generation RES
TURN ON FILTER checked
FILTER SETTINGS:
- Set the date range
- interval
- from: 2012 to: 2019
- Agregation and data version
- Aggregation: Hour
- Agregation function: average (AVG)
- Data version: real data
- Filter
- Type of power plant: ALL
- Click USE FILTER
- DOWNLOAD DATA: DATA V TXT
You will receive a file data.txt of about 1.5 MB.
In order to be found by the read-function, place the downloaded file in the following subdirectory:
{{data_path}}{{os.sep}}CEPS{{os.sep}}wind_pv{{os.sep}}2012-01-01_2019-01-01
ENTSO-E Power Statistics
Go to https://www.entsoe.eu/data/statistics/Pages/monthly_hourly_load.aspx
check boxes as specified below:
Date From: 01-01-2016 Date To: 28-02-2019
Country: (Select All)
Scale values to 100% using coverage ratio: YES
View Report
Click the Save symbol and select Excel
You will receive a file MHLV.xlsx of about 8 MB.
In order to be found by the read-function, place the downloaded file in the following subdirectory:
{{os.sep}}original_data{{os.sep}}ENTSO-E Power Statistics{{os.sep}}load{{os.sep}}2016-01-01_2016-04-30
The data covers the period from 01-01-2016 up to the present, but 4 months of data seems to be the maximum that the interface supports for a single download request, so you have to repeat the download procedure for 4-month periods to cover the whole period until the present.
Read
This section: Read each downloaded file into a pandas-DataFrame and merge data from different sources if it has the same time resolution. Takes ~15 minutes to run.
Preparations
Set the title of the rows at the top of the data used to store metadata internally. The order of this list determines the order of the levels in the resulting output.
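This list becomes the level names of the six-level column MultiIndex used throughout the rest of the notebook. A minimal sketch (the column tuple below is made up for illustration):
```python
import pandas as pd

cols = pd.MultiIndex.from_tuples(
    [('DE', 'solar', 'generation_actual',
      'ENTSO-E Transparency FTP', 'https://transparency.entsoe.eu', 'MW')],
    names=['region', 'variable', 'attribute', 'source', 'web', 'unit'])
df_example = pd.DataFrame(columns=cols)
print(df_example.columns.names)
```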
End of explanation
areas = pd.read_csv(areas_csv_path)
Explanation: Read a prepared table containing meta data on the geographical areas
End of explanation
areas.loc[areas['area ID'].notnull(), :'EIC'].fillna('')
Explanation: View the areas table
End of explanation
areas = pd.read_csv(areas_csv_path)
read(sources, data_path, parsed_path, areas, headers,
start_from_user=start_from_user, end_from_user=end_from_user,
testmode=False)
Explanation: Reading loop
Loop through sources and datasets to do the reading.
First read the original CSV, Excel etc. files into pandas DataFrames.
End of explanation
# Create a dictionary of empty DataFrames to be populated with data
data_sets = {'15min': pd.DataFrame(),
'30min': pd.DataFrame(),
'60min': pd.DataFrame()}
entso_e = {'15min': pd.DataFrame(),
'30min': pd.DataFrame(),
'60min': pd.DataFrame()}
for filename in os.listdir(parsed_path):
res_key, source_name, dataset_name, = filename.split('_')[:3]
if subset and not source_name in subset.keys():
continue
logger.info('include %s', filename)
df_portion = pd.read_pickle(os.path.join(parsed_path, filename))
#if source_name == 'ENTSO-E Transparency FTP':
# dfs = entso_e
#else:
dfs = data_sets
if dfs[res_key].empty:
dfs[res_key] = df_portion
elif not df_portion.empty:
dfs[res_key] = dfs[res_key].combine_first(df_portion)
else:
logger.warning(filename + ' WAS EMPTY')
for res_key, df in data_sets.items():
logger.info(res_key + ': %s', df.shape)
#for res_key, df in entso_e.items():
# logger.info('ENTSO-E ' + res_key + ': %s', df.shape)
Explanation: Then combine the DataFrames that have the same temporal resolution
End of explanation
data_sets['60min']
Explanation: Display some rows of the dataframes to get a first impression of the data.
End of explanation
os.chdir(temp_path)
data_sets['15min'].to_pickle('raw_data_15.pickle')
data_sets['30min'].to_pickle('raw_data_30.pickle')
data_sets['60min'].to_pickle('raw_data_60.pickle')
entso_e['15min'].to_pickle('raw_entso_e_15.pickle')
entso_e['30min'].to_pickle('raw_entso_e_30.pickle')
entso_e['60min'].to_pickle('raw_entso_e_60.pickle')
Explanation: Save raw data
Save the DataFrames created by the read function to disk. This way you have the raw data to fall back to if something goes wrong in the remainder of this notebook without having to repeat the previous steps.
End of explanation
os.chdir(temp_path)
data_sets = {}
data_sets['15min'] = pd.read_pickle('raw_data_15.pickle')
data_sets['30min'] = pd.read_pickle('raw_data_30.pickle')
data_sets['60min'] = pd.read_pickle('raw_data_60.pickle')
entso_e = {}
entso_e['15min'] = pd.read_pickle('raw_entso_e_15.pickle')
entso_e['30min'] = pd.read_pickle('raw_entso_e_30.pickle')
entso_e['60min'] = pd.read_pickle('raw_entso_e_60.pickle')
Explanation: Load the DataFrames saved above
End of explanation
nan_tables = {}
overviews = {}
for res_key, df in data_sets.items():
data_sets[res_key], nan_tables[res_key], overviews[res_key] = find_nan(
df, res_key, headers, patch=True)
for res_key, df in entso_e.items():
entso_e[res_key], nan_tables[res_key + ' ENTSO-E'], overviews[res_key + ' ENTSO-E'] = find_nan(
df, res_key, headers, patch=True)
Explanation: Processing
This section: missing data handling, aggregation of sub-national to national data, aggregate 15'-data to 60'-resolution. Takes 30 minutes to run.
Missing data handling
Interpolation
Patch missing data. At this stage, only small gaps (up to 2 hours) are filled by linear interpolation. This catches most of the missing data due to daylight savings time transitions, while leaving bigger gaps untouched.
The exact locations of missing data are stored in the nan_table DataFrames.
Patch the datasets and display the location of missing Data in the original data. Takes ~5 minutes to run.
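To picture the gap-filling, a toy sketch with plain pandas; the actual logic (including the check of each gap's length) lives in timeseries_scripts.imputation.find_nan:
```python
import numpy as np
import pandas as pd

# A single missing quarter-hour is filled linearly; the values are made up.
idx = pd.date_range('2020-01-01', periods=5, freq='15min')
s = pd.Series([100.0, 110.0, np.nan, 130.0, 140.0], index=idx)
print(s.interpolate(method='linear'))
```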
End of explanation
nan_tables['60min']
Explanation: Execute this to see an example of where the data has been patched.
Display the table of regions of missing values
End of explanation
os.chdir(temp_path)
writer = pd.ExcelWriter('NaN_table.xlsx')
for res_key, df in nan_tables.items():
df.to_excel(writer, res_key)
writer.save()
writer = pd.ExcelWriter('Overview.xlsx')
for res_key, df in overviews.items():
df.to_excel(writer, res_key)
writer.save()
Explanation: You can export the NaN-tables to Excel in order to inspect where there are NaNs
End of explanation
os.chdir(temp_path)
data_sets['15min'].to_pickle('patched_15.pickle')
data_sets['30min'].to_pickle('patched_30.pickle')
data_sets['60min'].to_pickle('patched_60.pickle')
entso_e['15min'].to_pickle('patched_entso_e_15.pickle')
entso_e['30min'].to_pickle('patched_entso_e_30.pickle')
entso_e['60min'].to_pickle('patched_entso_e_60.pickle')
os.chdir(temp_path)
data_sets = {}
data_sets['15min'] = pd.read_pickle('patched_15.pickle')
data_sets['30min'] = pd.read_pickle('patched_30.pickle')
data_sets['60min'] = pd.read_pickle('patched_60.pickle')
entso_e = {}
entso_e['15min'] = pd.read_pickle('patched_entso_e_15.pickle')
entso_e['30min'] = pd.read_pickle('patched_entso_e_30.pickle')
entso_e['60min'] = pd.read_pickle('patched_entso_e_60.pickle')
Explanation: Save/Load the patched data sets
End of explanation
for res_key, df in data_sets.items():
df.sort_index(axis='columns', inplace=True)
Explanation: Some of the following operations require the Dataframes to be lexsorted in the columns
End of explanation
for res_key, df in data_sets.items():
for geo in df.columns.get_level_values(0).unique():
# we could also include 'generation_forecast'
for attribute in ['generation_actual']:
df_wind = df.loc[:, (geo, ['wind_onshore', 'wind_offshore'], attribute)]
if ('wind_onshore' in df_wind.columns.get_level_values('variable') and
'wind_offshore' in df_wind.columns.get_level_values('variable')):
logger.info(f'aggregate onhore + offshore for {res_key} {geo}')
# skipna=False, otherwise NAs will become zeros after summation
sum_col = df_wind.sum(axis='columns', skipna=False).to_frame()
# Create a new MultiIndex
new_col_header = {
'region': geo,
'variable': 'wind',
'attribute': 'generation_actual',
'source': 'own calculation based on ENTSO-E Transparency',
'web': '',
'unit': 'MW'
}
new_col_header = tuple(new_col_header[level] for level in headers)
df[new_col_header] = sum_col
#df[new_col_header].describe()
dfi = data_sets['15min'].copy()
dfi.columns = [' '.join(col[:3]).strip() for col in dfi.columns.values]
dfi.info(verbose=True, null_counts=True)
Explanation: Aggregate wind offshore + onshore
End of explanation
df = data_sets['15min']
control_areas_DE = ['DE_50hertz', 'DE_amprion', 'DE_tennet', 'DE_transnetbw']
for variable in ['solar', 'wind', 'wind_onshore', 'wind_offshore']:
# we could also include 'generation_forecast'
for attribute in ['generation_actual']:
# Calculate aggregate German generation
sum_frame = df.loc[:, (control_areas_DE, variable, attribute)]
sum_frame.head()
sum_col = sum_frame.sum(axis='columns', skipna=False).to_frame().round(0)
# Create a new MultiIndex
new_col_header = {
'region': 'DE',
'variable': variable,
'attribute': attribute,
'source': 'own calculation based on German TSOs',
'web': '',
'unit': 'MW'
}
new_col_header = tuple(new_col_header[level] for level in headers)
data_sets['15min'][new_col_header] = sum_col
data_sets['15min'][new_col_header].describe()
Explanation: Country specific calculations - not used in this release
Germany
Aggregate German data from individual TSOs
The wind and solar in-feed data for the 4 German control areas is summed up and stored in a new column. The column headers are created in the fashion introduced in the read script. Takes 5 seconds to run.
End of explanation
bidding_zones_IT = ['IT_CNOR', 'IT_CSUD', 'IT_NORD', 'IT_SARD', 'IT_SICI', 'IT_SUD']
attributes = ['generation_actual', 'generation_actual_dso', 'generation_actual_tso']
for variable in ['solar', 'wind_onshore']:
sum_col = (
data_sets['60min']
.loc[:, (bidding_zones_IT, variable, attributes)]
.sum(axis='columns', skipna=False))
# Create a new MultiIndex
new_col_header = {
'region': 'IT',
'variable': variable,
'attribute': 'generation_actual',
'source': 'own calculation based on Terna',
'web': 'https://www.terna.it/SistemaElettrico/TransparencyReport/Generation/Forecastandactualgeneration.aspx',
'unit': 'MW'
}
new_col_header = tuple(new_col_header[level] for level in headers)
data_sets['60min'][new_col_header] = sum_col
data_sets['60min'][new_col_header].describe()
Explanation: Italy
Generation data for Italy come by region (North, Central North, Sicily, etc.) and separately for DSO and TSO, so they need to be aggregated in order to get values for the whole country. In the next cell, we sum up the data by region and for each variable-attribute pair present in the Terna dataset header.
End of explanation
for variable in ['solar', 'wind']:
sum_col = (data_sets['30min']
.loc[:, ('GB_GBN', variable, ['generation_actual_dso', 'generation_actual_tso'])]
.sum(axis='columns', skipna=False))
# Create a new MultiIndex
new_col_header = {
'region' : 'GB_GBN',
'variable' : variable,
'attribute' : 'generation_actual',
'source': 'own calculation based on Elexon and National Grid',
'web': '',
'unit': 'MW'
}
new_col_header = tuple(new_col_header[level] for level in headers)
data_sets['30min'][new_col_header] = sum_col
data_sets['30min'][new_col_header].describe()
Explanation: Great Britain / United Kingdom
Data for Great Britain (without Northern Ireland) are disaggregated for DSO and TSO connected generators. We calculate aggregate values.
End of explanation
for res_key, df in data_sets.items():
#if res_key == '60min':
# continue
for col_name, col in df.loc[:,(slice(None), slice(None), 'capacity')].iteritems():
# Get the generation data for the selected capacity column
kwargs = {
'key': (col_name[0], col_name[1], 'generation_actual'),
'level': ['region', 'variable', 'attribute'],
'axis': 'columns', 'drop_level': False}
generation_col = df.xs(**kwargs)
# take ENTSO-E transparency data if there is none from TSO
if generation_col.size == 0:
try:
generation_col = entso_e[res_key].xs(**kwargs)
except KeyError:
continue
if generation_col.size == 0:
continue
# Calculate the profile column
profile_col = generation_col.divide(col, axis='index').round(4)
# Create a new MultiIndex
new_col_header = {
'region': '{region}',
'variable': '{variable}',
'attribute': 'profile',
'source': 'own calculation based on {source}',
'web': '',
'unit': 'fraction'
}
source_capacity = col_name[3]
source_generation = generation_col.columns.get_level_values('source')[0]
if source_capacity == source_generation:
source = source_capacity
else:
source = (source_generation + ' and ' + source_capacity).replace('own calculation based on ', '')
new_col_header = tuple(new_col_header[level].format(region=col_name[0], variable=col_name[1], source=source)
for level in headers)
data_sets[res_key][new_col_header] = profile_col
data_sets[res_key][new_col_header].describe()
# Append profile to the dataset
df = df.combine_first(profile_col)
new_col_header
Explanation: Calculate availabilities/profiles
Calculate profiles, that is, the share of wind/solar capacity producing at a given time.
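In other words, a profile is generation divided by installed capacity; with made-up numbers:
```python
import pandas as pd

generation = pd.Series([1500.0, 2400.0, 900.0])   # MW fed in
capacity = 5000.0                                  # MW installed
print((generation / capacity).round(4))            # -> 0.30, 0.48, 0.18
```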
End of explanation
for res_key, df in data_sets.items():
df.sort_index(axis='columns', inplace=True)
Explanation: Some of the following operations require the Dataframes to be lexsorted in the columns
End of explanation
os.chdir(temp_path)
data_sets['15min'].to_pickle('calc_15.pickle')
data_sets['30min'].to_pickle('calc_30.pickle')
data_sets['60min'].to_pickle('calc_60.pickle')
os.chdir(temp_path)
data_sets = {}
data_sets['15min'] = pd.read_pickle('calc_15.pickle')
data_sets['30min'] = pd.read_pickle('calc_30.pickle')
data_sets['60min'] = pd.read_pickle('calc_60.pickle')
entso_e = {}
entso_e['15min'] = pd.read_pickle('patched_entso_e_15.pickle')
entso_e['30min'] = pd.read_pickle('patched_entso_e_30.pickle')
entso_e['60min'] = pd.read_pickle('patched_entso_e_60.pickle')
Explanation: Another savepoint
End of explanation
for ds in [data_sets]:#, entso_e]:
for res_key, df in ds.items():
if res_key == '60min':
continue
# # Resample first the marker column
# marker_resampled = df['interpolated_values'].groupby(
# pd.Grouper(freq='60Min', closed='left', label='left')
# ).agg(resample_markers, drop_region='DE_AT_LU')
# marker_resampled = marker_resampled.reindex(ds['60min'].index)
# # Glue condensed 15/30 min marker onto 60 min marker
# ds['60min'].loc[:, 'interpolated_values'] = glue_markers(
# ds['60min']['interpolated_values'],
# marker_resampled.reindex(ds['60min'].index))
# # Drop DE_AT_LU bidding zone data from the 15 minute resolution data to
# # be resampled since it is already provided in 60 min resolution by
# # ENTSO-E Transparency
# df = df.drop('DE_AT_LU', axis=1, errors='ignore')
# Do the resampling
resampled = df.resample('H').mean()
resampled.columns = resampled.columns.map(mark_own_calc)
resampled.columns.names = headers
# filter out columns already represented in hourly data
data_cols = ds['60min'].columns.droplevel(['source', 'web', 'unit'])
tuples = [col for col in resampled.columns if not col[:3] in data_cols]
add_cols = pd.MultiIndex.from_tuples(tuples, names=headers)
resampled = resampled[add_cols]
# Round the resampled columns
for col in resampled.columns:
if col[2] == 'profile':
resampled.loc[:, col] = resampled.loc[:, col].round(4)
else:
resampled.loc[:, col] = resampled.loc[:, col].round(0)
ds['60min'] = ds['60min'].combine_first(resampled)
Explanation: Resample higher frequencies to 60'
Some data come in 15- or 30-minute intervals (e.g. German or British renewable generation), others in 60-minute intervals (e.g. load data from ENTSO-E and prices). We resample the 15- and 30-minute data to hourly resolution and append them to the 60-minute dataset.
The .resample('H').mean() method calculates the mean of the four quarter-hour values [:00, :15, :30, :45] of each hour, inserts that value at the :00 timestamp and drops the other 3 entries. Takes about 15 seconds to run.
End of explanation
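Not part of the original pipeline: a minimal, self-contained example of what .resample('H').mean() does to 15-minute data (the four quarter-hour values of each hour are averaged and labelled with the :00 timestamp).
import pandas as pd

idx = pd.date_range('2015-01-01 00:00', periods=8, freq='15min')
quarter_hourly = pd.Series([1.0, 2.0, 3.0, 4.0, 10.0, 20.0, 30.0, 40.0], index=idx)

hourly = quarter_hourly.resample('H').mean()
print(hourly)  # 00:00 -> 2.5 and 01:00 -> 25.0, one row per hour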
data_cols = data_sets['60min'].columns.droplevel(['source', 'web', 'unit'])
for res_key, df in entso_e.items():
# Combine with TSO data
# # Copy entire 30min data from ENTSO-E if there is no data from TSO
if data_sets[res_key].empty:
data_sets[res_key] = df
else:
# Keep only region, variable, attribute in MultiIndex for comparison
# Compare columns from ENTSO-E against TSO's; keep those we don't have yet
cols = [col for col in df.columns if not col[:3] in data_cols]
add_cols = pd.MultiIndex.from_tuples(cols, names=headers)
data_sets[res_key] = data_sets[res_key].combine_first(df[add_cols])
# # Add the ENTSO-E markers (but only for the columns actually copied)
# add_cols = ['_'.join(col[:3]) for col in tuples]
# # Spread marker column out over a DataFrame for easier comparison
# # Filter out every second column, which contains the delimiter " | "
# # from the marker
# marker_table = (df['interpolated_values'].str.split(' | ', expand=True)
# .filter(regex='^\d*[02468]$', axis='columns'))
# # Replace cells with markers marking columns not copied with NaNs
# marker_table[~marker_table.isin(add_cols)] = np.nan
# for col_name, col in marker_table.iteritems():
# if col_name == 0:
# marker_entso_e = col
# else:
# marker_entso_e = glue_markers(marker_entso_e, col)
# # Glue ENTSO-E marker onto our old marker
# marker = data_sets[res_key]['interpolated_values']
# data_sets[res_key].loc[:, 'interpolated_values'] = glue_markers(
# marker, df['interpolated_values'].reindex(marker.index))
Explanation: Fill columns not retrieved directly from TSO websites with ENTSO-E Transparency data
End of explanation
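Not part of the original pipeline: a toy example of the combine_first semantics relied on above. Values already present in the first frame are kept; gaps and missing columns are filled from the second frame.
import numpy as np
import pandas as pd

tso = pd.DataFrame({'wind': [1.0, np.nan, 3.0]})
entsoe = pd.DataFrame({'wind': [10.0, 20.0, 30.0], 'solar': [0.1, 0.2, 0.3]})

merged = tso.combine_first(entsoe)
print(merged)  # wind becomes [1.0, 20.0, 3.0]; the solar column is taken over from entsoe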
info_cols = {'utc': 'utc_timestamp',
'cet': 'cet_cest_timestamp'}
for ds in [data_sets]: #, entso_e]:
for res_key, df in ds.items():
if df.empty:
continue
df.index.rename(info_cols['utc'], inplace=True)
df.insert(0, info_cols['cet'],
df.index.tz_localize('UTC').tz_convert('CET'))
Explanation: Insert a column with Central European (Summer-)time
The index column of the data sets defines the start of the time period represented by each row of that data set in UTC time. We include an additional column for the CE(S)T Central European (Summer-) Time, as this might help to align the output data with other data sources.
End of explanation
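Not part of the original pipeline: the same UTC-to-CE(S)T conversion applied to a tiny toy frame, to show what the inserted column looks like.
import pandas as pd

df = pd.DataFrame({'load': [100, 110]},
                  index=pd.to_datetime(['2015-06-01 10:00', '2015-12-01 10:00']))
df.index.rename('utc_timestamp', inplace=True)
df.insert(0, 'cet_cest_timestamp',
          df.index.tz_localize('UTC').tz_convert('CET'))
print(df)  # 12:00+02:00 in summer (CEST), 11:00+01:00 in winter (CET)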
data_sets['15min'].to_pickle('final_15.pickle')
data_sets['30min'].to_pickle('final_30.pickle')
data_sets['60min'].to_pickle('final_60.pickle')
#entso_e['15min'].to_pickle('final_entso_e_15.pickle')
#entso_e['30min'].to_pickle('final_entso_e_30.pickle')
#entso_e['60min'].to_pickle('final_entso_e_60.pickle')
os.chdir(temp_path)
data_sets = {}
data_sets['15min'] = pd.read_pickle('final_15.pickle')
data_sets['30min'] = pd.read_pickle('final_30.pickle')
data_sets['60min'] = pd.read_pickle('final_60.pickle')
#entso_e = {}
#entso_e['15min'] = pd.read_pickle('final_entso_e_15.pickle')
#entso_e['30min'] = pd.read_pickle('final_entso_e_30.pickle')
#entso_e['60min'] = pd.read_pickle('final_entso_e_60.pickle')
combined = data_sets
Explanation: Create a final savepoint
End of explanation
col_info = pd.DataFrame()
df = combined['60min']
for level in df.columns.names:
col_info[level] = df.columns.get_level_values(level)
col_info
Explanation: Show the column names contained in the final DataFrame in a table
End of explanation
for res_key, df in combined.items():
# In order to make sure that the respective time period is covered in both
# UTC and CE(S)T, we set the start in CE(S)T, but the end in UTC
if start_from_user:
start_from_user = (pytz.timezone('Europe/Brussels')
.localize(datetime.combine(start_from_user, time()))
.astimezone(pytz.timezone('UTC'))
.replace(tzinfo=None))
if end_from_user:
end_from_user = (pytz.timezone('UTC')
.localize(datetime.combine(end_from_user, time()))
.replace(tzinfo=None)
# Appropriate offset to include the end of period
+ timedelta(days=1, minutes=-int(res_key[:2])))
# Then cut off the data_set
data_sets[res_key] = df.loc[start_from_user:end_from_user, :]
Explanation: Write data to disk
This section: Save as Data Package (data in CSV, metadata in JSON file). All files are saved in the directory of this notebook. Alternative file formats (SQL, XLSX) are also exported. Takes about 1 hour to run.
Limit time range
Cut off the data outside of [start_from_user:end_from_user]
End of explanation
combined_singleindex = {}
combined_multiindex = {}
combined_stacked = {}
for res_key, df in combined.items():
if df.empty:
continue
# # Round floating point numbers to 2 digits
# for col_name, col in df.iteritems():
# if col_name[0] in info_cols.values():
# pass
# elif col_name[2] == 'profile':
# df[col_name] = col.round(4)
# else:
# df[col_name] = col.round(3)
# MultIndex
combined_multiindex[res_key + '_multiindex'] = df
# SingleIndex
df_singleindex = df.copy()
# use first 3 levels of multiindex to create singleindex
df_singleindex.columns = [
col_name[0] if col_name[0] in info_cols.values()
else '_'.join([level for level in col_name[0:3] if not level == ''])
for col_name in df.columns.values]
combined_singleindex[res_key + '_singleindex'] = df_singleindex
# Stacked
stacked = df.copy().drop(columns=info_cols['cet'], level=0)
stacked.columns = stacked.columns.droplevel(['source', 'web', 'unit'])
# Concatenate all columns below each other (="stack").
# df.transpose().stack() is faster than stacking all column levels
# separately
stacked = stacked.transpose().stack(dropna=True).to_frame(name='data')
combined_stacked[res_key + '_stacked'] = stacked
Explanation: Different shapes
Data are provided in three different "shapes":
- SingleIndex (easy to read for humans, compatible with datapackage standard, small file size)
- Fileformat: CSV, SQLite
- MultiIndex (easy to read into GAMS, not compatible with datapackage standard, small file size)
- Fileformat: CSV, Excel
- Stacked (compatible with data package standard, large file size, many rows, too many for Excel)
- Fileformat: CSV
The different shapes need to be created internally before they can be saved to files. Takes about 1 minute to run.
End of explanation
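Not part of the original pipeline: a toy MultiIndex frame reshaped into the SingleIndex and Stacked forms described above, just to make the difference in shape concrete.
import pandas as pd

cols = pd.MultiIndex.from_tuples(
    [('DE', 'wind', 'generation'), ('FR', 'solar', 'generation')],
    names=['region', 'variable', 'attribute'])
df = pd.DataFrame([[1.0, 2.0], [3.0, 4.0]],
                  index=pd.date_range('2015-01-01', periods=2, freq='H'),
                  columns=cols)

# SingleIndex: join the column levels with "_"
singleindex = df.copy()
singleindex.columns = ['_'.join(col) for col in df.columns.values]

# Stacked: one long frame with the former column levels as index levels
stacked = df.transpose().stack(dropna=True).to_frame(name='data')

print(singleindex.columns.tolist())  # ['DE_wind_generation', 'FR_solar_generation']
print(stacked.shape)                 # (4, 1): one row per (region, variable, attribute, timestamp)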
os.chdir(out_path)
for res_key, df in combined_singleindex.items():
table = 'time_series_' + res_key
df = df.copy()
df.index = df.index.strftime('%Y-%m-%dT%H:%M:%SZ')
cet_col_name = info_cols['cet']
df[cet_col_name] = (df[cet_col_name].dt.strftime('%Y-%m-%dT%H:%M:%S%z'))
df.to_sql(table, sqlite3.connect('time_series.sqlite'),
if_exists='replace', index_label=info_cols['utc'])
Explanation: Write to SQLite-database
This file format is required for the filtering function on the OPSD website. This takes ~3 minutes to complete.
End of explanation
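As an aside (not in the original notebook), the exported database can be read back with pandas to spot-check the result; the table name below follows the naming scheme used in the loop above.
import sqlite3
import pandas as pd

conn = sqlite3.connect('time_series.sqlite')
df_check = pd.read_sql('SELECT * FROM time_series_60min_singleindex LIMIT 5', conn,
                       index_col='utc_timestamp')
print(df_check.head())
conn.close()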
os.chdir(out_path)
writer = pd.ExcelWriter('time_series.xlsx')
for res_key, df in data_sets.items():
# Need to convert CE(S)T-timestamps to tz-naive, otherwise Excel converts
# them back to UTC
df.loc[:,(info_cols['cet'], '', '', '', '', '')].dt.tz_localize(None).to_excel(writer, res_key)
filename = 'tsos_' + res_key + '.csv'
df.to_csv(filename, float_format='%.4f', date_format='%Y-%m-%dT%H:%M:%SZ')
#for res_key, df in entso_e.items():
# df.loc[:,(info_cols['cet'], '', '', '', '', '')].dt.tz_localize(None).to_excel(writer, res_key+ ' ENTSO-E')
# filename = 'entso_e_' + res_key + '.csv'
# df.to_csv(filename, float_format='%.4f', date_format='%Y-%m-%dT%H:%M:%SZ')
# Save the workbook to disk only after all sheets have been written
writer.save()
Explanation: Write to Excel
Writing the full tables to Excel takes extremely long. As a workaround, only the timestamp columns are exported. The rest of the data can then be inserted manually from the _multiindex.csv files.
End of explanation
os.chdir(out_path)
# itertoools.chain() allows iterating over multiple dicts at once
for res_stacking_key, df in itertools.chain(
combined_singleindex.items(),
combined_multiindex.items(),
combined_stacked.items()):
df = df.copy()
# convert the format of the cet_cest-timestamp to ISO-8601
if not res_stacking_key.split('_')[1] == 'stacked':
df.iloc[:, 0] = df.iloc[:, 0].dt.strftime('%Y-%m-%dT%H:%M:%S%z') # https://frictionlessdata.io/specs/table-schema/#date
filename = 'time_series_' + res_stacking_key + '.csv'
df.to_csv(filename, float_format='%.4f',
date_format='%Y-%m-%dT%H:%M:%SZ')
Explanation: Write to CSV
This takes about 10 minutes to complete.
End of explanation
os.chdir(out_path)
make_json(combined, info_cols, version, changes, headers, areas,
start_from_user, end_from_user)
Explanation: Create metadata
This section: create the metadata, both general and column-specific. All metadata will be stored as a JSON file. Takes 10s to run.
End of explanation
os.chdir(out_path)
files = os.listdir(out_path)
# Create checksums.txt in the output directory
with open('checksums.txt', 'w') as f:
for file_name in files:
if file_name.split('.')[-1] in ['csv', 'sqlite', 'xlsx']:
file_hash = get_sha_hash(file_name)
f.write('{},{}\n'.format(file_name, file_hash))
# Copy the file to root directory from where it will be pushed to GitHub,
# leaving a copy in the version directory for reference
copyfile('checksums.txt', os.path.join(home_path, 'checksums.txt'))
Explanation: Write checksums.txt
We publish SHA checksums for the output files on GitHub to allow verifying the integrity of the output files on the OPSD server.
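The helper get_sha_hash used in the cell above is defined elsewhere in the processing scripts; purely as an illustration (and assuming a plain SHA-256 file digest is what is wanted), such a helper could look like this:
import hashlib

def get_sha_hash(path, blocksize=65536):
    # Hash the file in chunks so large CSV/SQLite files need not fit into memory.
    sha = hashlib.sha256()
    with open(path, 'rb') as f:
        for block in iter(lambda: f.read(blocksize), b''):
            sha.update(block)
    return sha.hexdigest()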
End of explanation |
4,437 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting started
Step1: Creating the model
First, we need to decide on the model type. For the given example, we are working with a continuous model.
Step2: The model is based on the assumption that we have additive process and/or measurement noise
Step3: Model measurements
This step is essential for the state estimation task
Step4: Model parameters
Next we define parameters. The MHE allows to estimate parameters as well as states. Note that not all parameters must be estimated (as shown in the MHE setup below). We can also hardcode parameters (such as the spring constants c).
Step5: Right-hand-side equation
Finally, we set the right-hand-side of the model by calling model.set_rhs(var_name, expr) with the var_name from the state variables defined above and an expression in terms of $x, u, z, p$.
Note that we can decide whether the individual states experience process noise.
In this example we choose that the system model is perfect.
This is the default setting, so we don't need to pass this parameter explicitly.
Step6: The model setup is completed by calling model.setup()
Step7: After calling model.setup() we cannot define further variables etc.
Configuring the moving horizon estimator
The first step of configuring the moving horizon estimator is to call the class with a list of all parameters to be estimated. An empty list (default value) means that no parameters are estimated.
The list of estimated parameters must be a subset (or all) of the previously defined parameters.
<div class="alert alert-info">
**Note**
So why did we define ``Theta_2`` and ``Theta_3`` if we do not estimate them?
In many cases we will use the same model for (robust) control and MHE estimation. In that case it is possible to have some external parameters (e.g. weather prediction) that are uncertain but cannot be estimated.
</div>
Step8: MHE parameters
Step9: Objective function
The most important step of the configuration is to define the objective function for the MHE problem
Step10: Fixed parameters
If the model contains parameters and if we estimate only a subset of these parameters, it is required to pass a function that returns the value of the remaining parameters at each time step.
Furthermore, this function must return a specific structure, which is first obtained by calling
Step11: Using this structure, we then formulate the following function for the remaining (not estimated) parameters
Step12: This function is finally passed to the mhe instance
Step13: Bounds
The MHE implementation also supports bounds for states, inputs, parameters which can be set as shown below.
For the given example, it is especially important to set realistic bounds on the estimated parameter. Otherwise the MHE solution is a poor fit.
Step14: Setup
Similar to the controller, simulator and model, we finalize the MHE configuration by calling
Step15: Configuring the Simulator
In many cases, a developed control approach is first tested on a simulated system. do-mpc responds to this need with the do_mpc.simulator class. The simulator uses state-of-the-art DAE solvers, e.g. Sundials CVODE to solve the DAE equations defined in the supplied do_mpc.model. This will often be the same model as defined for the optimizer but it is also possible to use a more complex model of the same system.
In this section we demonstrate how to setup the simulator class for the given example. We initialize the class with the previously defined model
Step16: Simulator parameters
Next, we need to parametrize the simulator. Please see the API documentation for simulator.set_param() for a full description of available parameters and their meaning. Many parameters already have suggested default values. Most importantly, we need to set t_step. We choose the same value as for the optimizer.
Step17: Parameters
In the model we have defined the inertia of the masses as parameters. The simulator is now parametrized to simulate using the "true" values at each timestep. In the most general case, these values can change, which is why we need to supply a function that can be evaluated at each time to obtain the current values.
do-mpc requires this function to have a specific return structure which we obtain first by calling
Step18: We need to define a function which returns this structure with the desired numerical values. For our simple case
Step19: This function is now supplied to the simulator in the following way
Step20: Setup
Finally, we call
Step21: Creating the loop
While the full loop should also include a controller, we are currently only interested in showcasing the estimator. We therefore estimate the states for an arbitrary initial condition and some random control inputs (shown below).
Step22: To make things more interesting we pass the estimator a perturbed initial state
Step23: and use the x0 property of the simulator and estimator to set the initial state
Step24: It is also advised to create an initial guess for the MHE optimization problem. The simplest way is to base that guess on the initial state, which is done automatically when calling
Step25: Setting up the Graphic
We are again using the do-mpc graphics module. This versatile tool allows us to conveniently configure a user-defined plot based on Matplotlib and visualize the results stored in the mhe.data, simulator.data objects.
We start by importing matplotlib
Step26: And initializing the graphics module with the data object of interest.
In this particular example, we want to visualize both the mpc.data as well as the simulator.data.
Step27: Next, we create a figure and obtain its axis object. Matplotlib offers multiple alternative ways to obtain an axis object, e.g. subplots, subplot2grid, or simply gca. We use subplots
Step28: Most important API element for setting up the graphics module is graphics.add_line, which mimics the API of model.add_variable, except that we also need to pass an axis.
We want to show both the simulator and MHE results on the same axis, which is why we configure both of them identically
Step29: Before we show any results, we further configure the graphic by changing the appearance of the simulated lines. We can obtain line objects from any graphics instance with the result_lines property
Step30: We obtain a structure that can be queried conveniently as follows
Step31: In this particular case we want to change all result_lines with
Step32: We furthermore use this property to create a legend
Step33: and another legend for the parameter plot
Step34: Running the loop
We investigate the closed-loop MHE performance by alternating a simulation step (y0=simulator.make_step(u0)) and an estimation step (x0=mhe.make_step(y0)). Since we are lacking the controller which would close the loop (u0=mpc.make_step(x0)), we define a random control input function
Step35: The function holds the current input value with 80% chance or switches to a new random input value.
We can now run the loop. At each iteration, we perturb our measurements,
for a more realistic scenario.
This can be done by calling the simulator with a value for the measurement noise, which we defined in the model above.
Step36: We can visualize the resulting trajectory with the pre-defined graphic
Step37: Parameter estimation
Step38: MHE Advantages
One of the main advantages of moving horizon estimation is the possibility to set bounds for states, inputs and estimated parameters. As mentioned above, this is crucial in the presented example. Let's see how the MHE behaves without realistic bounds for the estimated mass inertia of disc one.
We simply reconfigure the bounds
Step39: And setup the MHE again. The backend is now recreating the optimization problem, taking into consideration the currently saved bounds.
Step40: We reset the history of the estimator and simulator (to clear their data objects and start "fresh").
Step41: Finally, we run the exact same loop again obtaining new results.
Step42: These results now look quite terrible
Step43: Clearly, the main problem is a faulty parameter estimation, which is off by orders of magnitude | Python Code:
import numpy as np
from casadi import *
# Add do_mpc to path. This is not necessary if it was installed via pip.
import sys
sys.path.append('../../')
# Import do_mpc package:
import do_mpc
Explanation: Getting started: MHE
Open an interactive online Jupyter Notebook with this content on Binder:
In this Jupyter Notebook we illustrate the application of the do-mpc moving horizon estimation module.
Please first follow the general Getting Started guide, as we cover the same example here and skip over some previously explained details.
End of explanation
model_type = 'continuous' # either 'discrete' or 'continuous'
model = do_mpc.model.Model(model_type)
Explanation: Creating the model
First, we need to decide on the model type. For the given example, we are working with a continuous model.
End of explanation
phi = model.set_variable(var_type='_x', var_name='phi', shape=(3,1))
dphi = model.set_variable(var_type='_x', var_name='dphi', shape=(3,1))
# Two states for the desired (set) motor position:
phi_m_set = model.set_variable(var_type='_u', var_name='phi_m_set', shape=(2,1))
# Two additional states for the true motor position:
phi_m = model.set_variable(var_type='_x', var_name='phi_m', shape=(2,1))
Explanation: The model is based on the assumption that we have additive process and/or measurement noise:
\begin{align}
\dot{x}(t) &= f(x(t),u(t),z(t),p(t),p_{\text{tv}}(t))+w(t), \\
y(t) &= h(x(t),u(t),z(t),p(t),p_{\text{tv}}(t))+v(t),
\end{align}
we are free to choose which states and which measurements experience additive noise.
Model variables
The next step is to define the model variables. It is important to define the variable type, name and optionally shape (default is scalar variable).
In contrast to the previous example, we now use vectors for all variables.
End of explanation
# State measurements
phi_meas = model.set_meas('phi_1_meas', phi, meas_noise=True)
# Input measurements
phi_m_set_meas = model.set_meas('phi_m_set_meas', phi_m_set, meas_noise=False)
Explanation: Model measurements
This step is essential for the state estimation task: We must define a measurable output.
Typically, this is a subset of states (or a transformation thereof) as well as the inputs.
Note that some MHE implementations consider inputs separately.
As mentioned above, we need to define for each measurement whether additive noise is present.
In our case we assume noisy state measurements ($\phi$) but perfect input measurements.
End of explanation
Theta_1 = model.set_variable('parameter', 'Theta_1')
Theta_2 = model.set_variable('parameter', 'Theta_2')
Theta_3 = model.set_variable('parameter', 'Theta_3')
c = np.array([2.697, 2.66, 3.05, 2.86])*1e-3
d = np.array([6.78, 8.01, 8.82])*1e-5
Explanation: Model parameters
Next we define parameters. The MHE allows to estimate parameters as well as states. Note that not all parameters must be estimated (as shown in the MHE setup below). We can also hardcode parameters (such as the spring constants c).
End of explanation
model.set_rhs('phi', dphi)
dphi_next = vertcat(
-c[0]/Theta_1*(phi[0]-phi_m[0])-c[1]/Theta_1*(phi[0]-phi[1])-d[0]/Theta_1*dphi[0],
-c[1]/Theta_2*(phi[1]-phi[0])-c[2]/Theta_2*(phi[1]-phi[2])-d[1]/Theta_2*dphi[1],
-c[2]/Theta_3*(phi[2]-phi[1])-c[3]/Theta_3*(phi[2]-phi_m[1])-d[2]/Theta_3*dphi[2],
)
model.set_rhs('dphi', dphi_next, process_noise = False)
tau = 1e-2
model.set_rhs('phi_m', 1/tau*(phi_m_set - phi_m))
Explanation: Right-hand-side equation
Finally, we set the right-hand-side of the model by calling model.set_rhs(var_name, expr) with the var_name from the state variables defined above and an expression in terms of $x, u, z, p$.
Note that we can decide whether the inidividual states experience process noise.
In this example we choose that the system model is perfect.
This is the default setting, so we don't need to pass this parameter explictly.
End of explanation
model.setup()
Explanation: The model setup is completed by calling model.setup():
End of explanation
mhe = do_mpc.estimator.MHE(model, ['Theta_1'])
Explanation: After calling model.setup() we cannot define further variables etc.
Configuring the moving horizon estimator
The first step of configuring the moving horizon estimator is to call the class with a list of all parameters to be estimated. An empty list (default value) means that no parameters are estimated.
The list of estimated parameters must be a subset (or all) of the previously defined parameters.
<div class="alert alert-info">
**Note**
So why did we define ``Theta_2`` and ``Theta_3`` if we do not estimate them?
In many cases we will use the same model for (robust) control and MHE estimation. In that case it is possible to have some external parameters (e.g. weather prediction) that are uncertain but cannot be estimated.
</div>
End of explanation
setup_mhe = {
't_step': 0.1,
'n_horizon': 10,
'store_full_solution': True,
'meas_from_data': True
}
mhe.set_param(**setup_mhe)
Explanation: MHE parameters:
Next, we pass MHE parameters. Most importantly, we need to set the time step and the horizon.
We also choose to obtain the measurement from the MHE data object.
Alternatively, we are able to set a user defined measurement function that is called at each timestep and returns the N previous measurements for the estimation step.
End of explanation
P_v = np.diag(np.array([1,1,1]))
P_x = np.eye(8)
P_p = 10*np.eye(1)
mhe.set_default_objective(P_x, P_v, P_p)
Explanation: Objective function
The most important step of the configuration is to define the objective function for the MHE problem:
\begin{align}
\underset{
\begin{array}{c}
\mathbf{x}_{0:N+1}, \mathbf{u}_{0:N}, p,\\
\mathbf{w}_{0:N}, \mathbf{v}_{0:N}
\end{array}
}{\mathrm{min}}
&\frac{1}{2}\|x_0-\tilde{x}_0\|_{P_x}^2+\frac{1}{2}\|p-\tilde{p}\|_{P_p}^2
+\sum_{k=0}^{N-1} \left(\frac{1}{2}\|v_k\|_{P_{v,k}}^2
+ \frac{1}{2}\|w_k\|_{P_{w,k}}^2\right),\\
&\left.\begin{aligned}
\mathrm{s.t.}\quad
x_{k+1} &= f(x_k,u_k,z_k,p,p_{\text{tv},k})+ w_k,\\
y_k &= h(x_k,u_k,z_k,p,p_{\text{tv},k}) + v_k, \\
&g(x_k,u_k,z_k,p_k,p_{\text{tv},k}) \leq 0
\end{aligned}\right\} \quad k=0,\dots, N
\end{align}
We typically consider the formulation shown above, where the user has to pass the weighting matrices P_x, P_v, P_p and P_w.
In our concrete example, we assume a perfect model without process noise and thus P_w is not required.
We set the objective function with the weighting matrices shown below:
End of explanation
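Not part of the original notebook: a quick numerical reminder of the weighted-norm notation used in the cost above, $\|v\|_{P}^2 = v^\top P v$, which is what the matrices P_x, P_v and P_p weight.
import numpy as np

P_v_demo = np.diag([1.0, 1.0, 1.0])   # same shape as the measurement-noise weight used above
v = np.array([0.1, -0.2, 0.05])       # hypothetical measurement residual

weighted_norm_sq = v @ P_v_demo @ v
print(weighted_norm_sq)               # 0.1**2 + 0.2**2 + 0.05**2 = 0.0525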
p_template_mhe = mhe.get_p_template()
Explanation: Fixed parameters
If the model contains parameters and if we estimate only a subset of these parameters, it is required to pass a function that returns the value of the remaining parameters at each time step.
Furthermore, this function must return a specific structure, which is first obtained by calling:
End of explanation
def p_fun_mhe(t_now):
p_template_mhe['Theta_2'] = 2.25e-4
p_template_mhe['Theta_3'] = 2.25e-4
return p_template_mhe
Explanation: Using this structure, we then formulate the following function for the remaining (not estimated) parameters:
End of explanation
mhe.set_p_fun(p_fun_mhe)
Explanation: This function is finally passed to the mhe instance:
End of explanation
mhe.bounds['lower','_u', 'phi_m_set'] = -2*np.pi
mhe.bounds['upper','_u', 'phi_m_set'] = 2*np.pi
mhe.bounds['lower','_p_est', 'Theta_1'] = 1e-5
mhe.bounds['upper','_p_est', 'Theta_1'] = 1e-3
Explanation: Bounds
The MHE implementation also supports bounds for states, inputs, parameters which can be set as shown below.
For the given example, it is especially important to set realistic bounds on the estimated parameter. Otherwise the MHE solution is a poor fit.
End of explanation
mhe.setup()
Explanation: Setup
Similar to the controller, simulator and model, we finalize the MHE configuration by calling:
End of explanation
simulator = do_mpc.simulator.Simulator(model)
Explanation: Configuring the Simulator
In many cases, a developed control approach is first tested on a simulated system. do-mpc responds to this need with the do_mpc.simulator class. The simulator uses state-of-the-art DAE solvers, e.g. Sundials CVODE to solve the DAE equations defined in the supplied do_mpc.model. This will often be the same model as defined for the optimizer but it is also possible to use a more complex model of the same system.
In this section we demonstrate how to setup the simulator class for the given example. We initialize the class with the previously defined model:
End of explanation
# Instead of supplying a dict with the splat operator (**), as with the optimizer.set_param(),
# we can also use keywords (and call the method multiple times, if necessary):
simulator.set_param(t_step = 0.1)
Explanation: Simulator parameters
Next, we need to parametrize the simulator. Please see the API documentation for simulator.set_param() for a full description of available parameters and their meaning. Many parameters already have suggested default values. Most importantly, we need to set t_step. We choose the same value as for the optimizer.
End of explanation
p_template_sim = simulator.get_p_template()
Explanation: Parameters
In the model we have defined the inertia of the masses as parameters. The simulator is now parametrized to simulate using the "true" values at each timestep. In the most general case, these values can change, which is why we need to supply a function that can be evaluated at each time to obtain the current values.
do-mpc requires this function to have a specific return structure which we obtain first by calling:
End of explanation
def p_fun_sim(t_now):
p_template_sim['Theta_1'] = 2.25e-4
p_template_sim['Theta_2'] = 2.25e-4
p_template_sim['Theta_3'] = 2.25e-4
return p_template_sim
Explanation: We need to define a function which returns this structure with the desired numerical values. For our simple case:
End of explanation
simulator.set_p_fun(p_fun_sim)
Explanation: This function is now supplied to the simulator in the following way:
End of explanation
simulator.setup()
Explanation: Setup
Finally, we call:
End of explanation
x0 = np.pi*np.array([1, 1, -1.5, 1, -5, 5, 0, 0]).reshape(-1,1)
Explanation: Creating the loop
While the full loop should also include a controller, we are currently only interested in showcasing the estimator. We therefore estimate the states for an arbitrary initial condition and some random control inputs (shown below).
End of explanation
x0_mhe = x0*(1+0.5*np.random.randn(8,1))
Explanation: To make things more interesting we pass the estimator a perturbed initial state:
End of explanation
simulator.x0 = x0
mhe.x0_mhe = x0_mhe
mhe.p_est0 = 1e-4
Explanation: and use the x0 property of the simulator and estimator to set the initial state:
End of explanation
mhe.set_initial_guess()
Explanation: It is also advised to create an initial guess for the MHE optimization problem. The simplest way is to base that guess on the initial state, which is done automatically when calling:
End of explanation
import matplotlib.pyplot as plt
import matplotlib as mpl
# Customizing Matplotlib:
mpl.rcParams['font.size'] = 18
mpl.rcParams['lines.linewidth'] = 3
mpl.rcParams['axes.grid'] = True
Explanation: Setting up the Graphic
We are again using the do-mpc graphics module. This versatile tool allows us to conveniently configure a user-defined plot based on Matplotlib and visualize the results stored in the mhe.data, simulator.data objects.
We start by importing matplotlib:
End of explanation
mhe_graphics = do_mpc.graphics.Graphics(mhe.data)
sim_graphics = do_mpc.graphics.Graphics(simulator.data)
Explanation: And initializing the graphics module with the data object of interest.
In this particular example, we want to visualize both the mpc.data as well as the simulator.data.
End of explanation
%%capture
# We just want to create the plot and not show it right now. This "inline magic" suppresses the output.
fig, ax = plt.subplots(3, sharex=True, figsize=(16,9))
fig.align_ylabels()
# We create another figure to plot the parameters:
fig_p, ax_p = plt.subplots(1, figsize=(16,4))
Explanation: Next, we create a figure and obtain its axis object. Matplotlib offers multiple alternative ways to obtain an axis object, e.g. subplots, subplot2grid, or simply gca. We use subplots:
End of explanation
%%capture
for g in [sim_graphics, mhe_graphics]:
# Plot the angle positions (phi_1, phi_2, phi_2) on the first axis:
g.add_line(var_type='_x', var_name='phi', axis=ax[0])
ax[0].set_prop_cycle(None)
g.add_line(var_type='_x', var_name='dphi', axis=ax[1])
ax[1].set_prop_cycle(None)
# Plot the set motor positions (phi_m_1_set, phi_m_2_set) on the second axis:
g.add_line(var_type='_u', var_name='phi_m_set', axis=ax[2])
ax[2].set_prop_cycle(None)
g.add_line(var_type='_p', var_name='Theta_1', axis=ax_p)
ax[0].set_ylabel('angle position [rad]')
ax[1].set_ylabel('angular \n velocity [rad/s]')
ax[2].set_ylabel('motor angle [rad]')
ax[2].set_xlabel('time [s]')
Explanation: Most important API element for setting up the graphics module is graphics.add_line, which mimics the API of model.add_variable, except that we also need to pass an axis.
We want to show both the simulator and MHE results on the same axis, which is why we configure both of them identically:
End of explanation
sim_graphics.result_lines
Explanation: Before we show any results, we further configure the graphic by changing the appearance of the simulated lines. We can obtain line objects from any graphics instance with the result_lines property:
End of explanation
# First element for state phi:
sim_graphics.result_lines['_x', 'phi', 0]
Explanation: We obtain a structure that can be queried conveniently as follows:
End of explanation
for line_i in sim_graphics.result_lines.full:
line_i.set_alpha(0.4)
line_i.set_linewidth(6)
Explanation: In this particular case we want to change all result_lines with:
End of explanation
ax[0].legend(sim_graphics.result_lines['_x', 'phi'], '123', title='Sim.', loc='center right')
ax[1].legend(mhe_graphics.result_lines['_x', 'phi'], '123', title='MHE', loc='center right')
Explanation: We furthermore use this property to create a legend:
End of explanation
ax_p.legend(sim_graphics.result_lines['_p', 'Theta_1']+mhe_graphics.result_lines['_p', 'Theta_1'], ['True','Estim.'])
Explanation: and another legend for the parameter plot:
End of explanation
def random_u(u0):
# Hold the current value with 80% chance or switch to new random value.
u_next = (0.5-np.random.rand(2,1))*np.pi # New candidate value.
switch = np.random.rand() >= 0.8 # switching? 0 or 1.
u0 = (1-switch)*u0 + switch*u_next # Old or new value.
return u0
Explanation: Running the loop
We investigate the closed-loop MHE performance by alternating a simulation step (y0=simulator.make_step(u0)) and an estimation step (x0=mhe.make_step(y0)). Since we are lacking the controller which would close the loop (u0=mpc.make_step(x0)), we define a random control input function:
End of explanation
%%capture
np.random.seed(999) #make it repeatable
u0 = np.zeros((2,1))
for i in range(50):
u0 = random_u(u0) # Control input
v0 = 0.1*np.random.randn(model.n_v,1) # measurement noise
y0 = simulator.make_step(u0, v0=v0)
x0 = mhe.make_step(y0) # MHE estimation step
Explanation: The function holds the current input value with 80% chance or switches to a new random input value.
We can now run the loop. At each iteration, we perturb our measurements,
for a more realistic scenario.
This can be done by calling the simulator with a value for the measurement noise, which we defined in the model above.
End of explanation
sim_graphics.plot_results()
mhe_graphics.plot_results()
# Reset the limits on all axes in graphic to show the data.
mhe_graphics.reset_axes()
# Mark the time after a full horizon is available to the MHE.
ax[0].axvline(1)
ax[1].axvline(1)
ax[2].axvline(1)
# Show the figure:
fig
Explanation: We can visualize the resulting trajectory with the pre-defined graphic:
End of explanation
ax_p.set_ylim(1e-4, 4e-4)
ax_p.set_ylabel('mass inertia')
ax_p.set_xlabel('time [s]')
fig_p
Explanation: Parameter estimation:
End of explanation
mhe.bounds['lower','_p_est', 'Theta_1'] = -np.inf
mhe.bounds['upper','_p_est', 'Theta_1'] = np.inf
Explanation: MHE Advantages
One of the main advantages of moving horizon estimation is the possibility to set bounds for states, inputs and estimated parameters. As mentioned above, this is crucial in the presented example. Let's see how the MHE behaves without realistic bounds for the estimated mass inertia of disc one.
We simply reconfigure the bounds:
End of explanation
mhe.setup()
Explanation: And setup the MHE again. The backend is now recreating the optimization problem, taking into consideration the currently saved bounds.
End of explanation
mhe.reset_history()
simulator.reset_history()
Explanation: We reset the history of the estimator and simulator (to clear their data objects and start "fresh").
End of explanation
%%capture
np.random.seed(999) #make it repeatable
u0 = np.zeros((2,1))
for i in range(50):
u0 = random_u(u0) # Control input
v0 = 0.1*np.random.randn(model.n_v,1) # measurement noise
y0 = simulator.make_step(u0, v0=v0)
x0 = mhe.make_step(y0) # MHE estimation step
Explanation: Finally, we run the exact same loop again obtaining new results.
End of explanation
sim_graphics.plot_results()
mhe_graphics.plot_results()
# Reset the limits on all axes in graphic to show the data.
mhe_graphics.reset_axes()
# Mark the time after a full horizon is available to the MHE.
ax[0].axvline(1)
ax[1].axvline(1)
ax[2].axvline(1)
# Show the figure:
fig
Explanation: These results now look quite terrible:
End of explanation
ax_p.set_ylabel('mass inertia')
ax_p.set_xlabel('time [s]')
fig_p
Explanation: Clearly, the main problem is a faulty parameter estimation, which is off by orders of magnitude:
End of explanation |
4,438 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: Linear Mixed-Effects Regression in {TF Probability, R, Stan}
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step3: 2 Hierarchical Linear Model
To compare R, Stan, and TFP, we will fit a Hierarchical Linear Model (HLM) to the radon dataset made famous in Bayesian Data Analysis by Gelman et al. (page 559, second edition; page 250, third edition).
We assume the following generative model:
$$\begin{align} \text{for } & c=1\ldots \text{NumCounties}:\\ & \beta_c \sim \text{Normal}\left(\text{loc}=0, \text{scale}=\sigma_C \right) \\ \text{for } & i=1\ldots \text{NumSamples}:\\ &\eta_i = \underbrace{\omega_0 + \omega_1 \text{Floor}_i}_\text{fixed effects} + \underbrace{\beta_{\text{County}_i} \log(\text{UraniumPPM}_{\text{County}_i})}_\text{random effects} \\ &\log(\text{Radon}_i) \sim \text{Normal}(\text{loc}=\eta_i, \text{scale}=\sigma_N) \end{align}$$
Step4: 3.1 Explore the data
In this section we explore the radon dataset to get a better feel for why the proposed model is reasonable.
Step5: Conclusions
Step6: 5 Fit the HLM with Stan
In this section we use rstanarm to fit a Stan model using the same formula/syntax as the lme4 model above.
Unlike lme4 and the TF model below, rstanarm is a fully Bayesian model, i.e. all parameters are presumed drawn from a Normal distribution, with the parameters themselves estimated as coming from a distribution.
Note: To run this section you must switch to an R colab runtime.
Step7: Note
Step8: Note
Step9: For later visualization, we retrieve the point estimates and conditional standard deviations of the group random effects from lme4.
Step10: We draw samples of the county weights using the lme4 estimated means and standard deviations.
Step11: We also retrieve the posterior samples of the county weights from the Stan fit.
Step12: This Stan example shows how one would implement LMER in a style closer to TFP, i.e. by directly specifying the probabilistic model.
6 Fit the HLM with TF Probability
In this section we use low-level TensorFlow Probability primitives (Distributions) to specify the hierarchical linear model and fit the unknown parameters.
Step13: 6.1 Specify the model
In this section we specify the radon linear mixed-effects model using TFP primitives. To do so, we specify two functions which produce two TFP distributions:
make_weights_prior
Step15: The following function constructs the prior $p(\beta|\sigma_C)$, where $\beta$ denotes the random-effect weights and $\sigma_C$ the standard deviation.
We use tf.make_template to ensure that the first call to this function instantiates the TF variables it uses, while all subsequent calls reuse the variables' most recent values.
Step16: The following function constructs the likelihood $p(y|x,\omega,\beta,\sigma_N)$, where $y,x$ denote the response and evidence, $\omega,\beta$ denote the fixed- and random-effect weights, and $\sigma_N$ the standard deviation.
Here again we use tf.make_template to ensure the TF variables are reused across calls.
Step17: Finally, we use the prior and likelihood generators to construct the joint log-density.
Step18: 6.2 Training (stochastic approximation of expectation maximization)
To fit the linear mixed-effects regression model, we use a stochastic approximation version of the expectation-maximization algorithm (SAEM). The basic idea is to use samples from the posterior to approximate the expected joint log-density (E-step), then find the parameters which maximize that quantity (M-step). Concretely, the fixed-point iteration is given by:
$$\begin{align} \text{E}[ \log p(x, Z | \theta) | \theta_0] &\approx \frac{1}{M} \sum_{m=1}^M \log p(x, z_m | \theta), \quad Z_m\sim p(Z | x, \theta_0) && \text{E-step}\\ &=: Q_M(\theta, \theta_0) \\ \theta_0 &= \theta_0 - \eta \left.\nabla_\theta Q_M(\theta, \theta_0)\right|_{\theta=\theta_0} && \text{M-step} \end{align}$$
Step19: We now complete the E-step setup by creating the HMC transition kernel.
Notes
Step20: Now we set up the M-step. This is essentially the same optimization one might do in TF.
Step21: Finally, we perform some housekeeping tasks. We must tell TF that all variables are initialized. We also create handles to the TF variables so we can print their values at each iteration of the procedure.
Step22: 6.3 Execute
In this section we execute the SAEM TF graph. The main trick here is to feed the last draw from the HMC kernel into the next iteration. This is achieved by using feed_dict in the sess.run call.
Step23: After about 1500 steps, the parameter estimates have stabilized.
6.4 Results
Now that we have fit the parameters, let's generate a large number of posterior samples and study the results.
Step24: Next we construct a box-and-whiskers plot of the $\beta_c \log(\text{UraniumPPM}_c)$ random effects, ordered by decreasing county frequency.
Step25: From this box-and-whiskers plot we observe that the variance of the county-level $\log(\text{UraniumPPM})$ random effect increases as a county has less data in the dataset. Intuitively this makes sense: with less evidence we should be less certain about a particular county's impact.
7 Side-by-side comparison
We now compare the results of all three procedures. To do so, we compute non-parametric estimates of the posterior samples generated by Stan and TFP, and compare them against the parametric (approximate) estimates produced by R's lme4 package.
The following plot depicts the posterior distribution of each weight for each county in Minnesota. We show results for Stan (red), TFP (blue) and R's lme4 (orange). We shade the Stan and TFP results, so expect to see purple where the two agree. For simplicity we do not shade the results from R. Each subplot represents a single county, ordered by descending frequency in raster-scan order (i.e. left to right, then top to bottom). | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
%matplotlib inline
import os
from six.moves import urllib
import numpy as np
import pandas as pd
import warnings
from matplotlib import pyplot as plt
import seaborn as sns
from IPython.core.pylabtools import figsize
figsize(11, 9)
import tensorflow.compat.v1 as tf
import tensorflow_datasets as tfds
import tensorflow_probability as tfp
Explanation: Linear Mixed-Effects Regression in {TF Probability, R, Stan}
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/probability/examples/HLM_TFP_R_Stan"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/probability/examples/HLM_TFP_R_Stan.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/probability/examples/HLM_TFP_R_Stan.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/probability/examples/HLM_TFP_R_Stan.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
1 Introduction
In this colab we fit a linear mixed-effects regression model to a popular toy dataset. We make this fit three times, using R's lme4, Stan's mixed-effects package, and TensorFlow Probability (TFP) primitives, and show that all three yield roughly the same fitted parameters and posterior distributions.
Our main conclusion is that TFP has the general pieces necessary to fit HLM-like models and that it produces results consistent with other software packages such as lme4 and rstanarm. This colab is not an accurate reflection of the computational efficiency of any of the packages compared.
End of explanation
def load_and_preprocess_radon_dataset(state='MN'):
Preprocess Radon dataset as done in "Bayesian Data Analysis" book.
We filter to Minnesota data (919 examples) and preprocess to obtain the
following features:
- `log_uranium_ppm`: Log of soil uranium measurements.
- `county`: Name of county in which the measurement was taken.
- `floor`: Floor of house (0 for basement, 1 for first floor) on which the
measurement was taken.
The target variable is `log_radon`, the log of the Radon measurement in the
house.
ds = tfds.load('radon', split='train')
radon_data = tfds.as_dataframe(ds)
radon_data.rename(lambda s: s[9:] if s.startswith('feat') else s, axis=1, inplace=True)
df = radon_data[radon_data.state==state.encode()].copy()
# For any missing or invalid activity readings, we'll use a value of `0.1`.
df['radon'] = df.activity.apply(lambda x: x if x > 0. else 0.1)
# Make county names look nice.
df['county'] = df.county.apply(lambda s: s.decode()).str.strip().str.title()
# Remap categories to start from 0 and end at max(category).
county_name = sorted(df.county.unique())
df['county'] = df.county.astype(
pd.api.types.CategoricalDtype(categories=county_name)).cat.codes
county_name = list(map(str.strip, county_name))
df['log_radon'] = df['radon'].apply(np.log)
df['log_uranium_ppm'] = df['Uppm'].apply(np.log)
df = df[['idnum', 'log_radon', 'floor', 'county', 'log_uranium_ppm']]
return df, county_name
radon, county_name = load_and_preprocess_radon_dataset()
# We'll use the following directory to store our preprocessed dataset.
CACHE_DIR = os.path.join(os.sep, 'tmp', 'radon')
# Save processed data. (So we can later read it in R.)
if not tf.gfile.Exists(CACHE_DIR):
tf.gfile.MakeDirs(CACHE_DIR)
with tf.gfile.Open(os.path.join(CACHE_DIR, 'radon.csv'), 'w') as f:
radon.to_csv(f, index=False)
Explanation: 2 Hierarchical Linear Model
To compare R, Stan, and TFP, we will fit a Hierarchical Linear Model (HLM) to the radon dataset made famous in Bayesian Data Analysis by Gelman et al. (page 559, second edition; page 250, third edition).
We assume the following generative model:
$$\begin{align} \text{for } & c=1\ldots \text{NumCounties}:\\ & \beta_c \sim \text{Normal}\left(\text{loc}=0, \text{scale}=\sigma_C \right) \\ \text{for } & i=1\ldots \text{NumSamples}:\\ &\eta_i = \underbrace{\omega_0 + \omega_1 \text{Floor}_i}_\text{fixed effects} + \underbrace{\beta_{\text{County}_i} \log(\text{UraniumPPM}_{\text{County}_i})}_\text{random effects} \\ &\log(\text{Radon}_i) \sim \text{Normal}(\text{loc}=\eta_i, \text{scale}=\sigma_N) \end{align}$$
In R's lme4 "tilde notation", this model is equivalent to:
log_radon ~ 1 + floor + (0 + log_uranium_ppm | county)
We will find the MLE for $\omega, \sigma_C, \sigma_N$ using the posterior distribution (conditioned on evidence) of ${\beta_c}_{c=1}^\text{NumCounties}$.
For essentially the same model but with a random intercept, see Appendix A.
For a more general specification of HLMs, see Appendix B.
3 Data Munging
In this section we obtain the <code>radon</code> dataset and do some minimal preprocessing to make it comply with our assumed model.
End of explanation
radon.head()
fig, ax = plt.subplots(figsize=(22, 5));
county_freq = radon['county'].value_counts()
county_freq.plot(kind='bar', color='#436bad');
plt.xlabel('County index')
plt.ylabel('Number of radon readings')
plt.title('Number of radon readings per county', fontsize=16)
county_freq = np.array(zip(county_freq.index, county_freq.values)) # We'll use this later.
fig, ax = plt.subplots(ncols=2, figsize=[10, 4]);
radon['log_radon'].plot(kind='density', ax=ax[0]);
ax[0].set_xlabel('log(radon)')
radon['floor'].value_counts().plot(kind='bar', ax=ax[1]);
ax[1].set_xlabel('Floor');
ax[1].set_ylabel('Count');
fig.subplots_adjust(wspace=0.25)
Explanation: 3.1 Explore the data
In this section we explore the radon dataset to get a better feel for why the proposed model is reasonable.
End of explanation
suppressMessages({
library('bayesplot')
library('data.table')
library('dplyr')
library('gfile')
library('ggplot2')
library('lattice')
library('lme4')
library('plyr')
library('rstanarm')
library('tidyverse')
RequireInitGoogle()
})
data = read_csv(gfile::GFile('/tmp/radon/radon.csv'))
head(data)
# https://github.com/stan-dev/example-models/wiki/ARM-Models-Sorted-by-Chapter
radon.model <- lmer(log_radon ~ 1 + floor + (0 + log_uranium_ppm | county), data = data)
summary(radon.model)
qqmath(ranef(radon.model, condVar=TRUE))
write.csv(as.data.frame(ranef(radon.model, condVar = TRUE)), '/tmp/radon/lme4_fit.csv')
Explanation: Conclusions:
There is a long tail of 85 counties (a common occurrence in GLMMs).
$\log(\text{Radon})$ is unconstrained (so linear regression might make sense).
Readings are mostly made on floor $0$; no readings were made above floor $1$ (so our fixed effects will only have two weights).
4 Fit the HLM with R
In this section we use R's lme4 package to fit the probabilistic model described above.
Note: To run this section, you must switch to an R colab runtime.
End of explanation
fit <- stan_lmer(log_radon ~ 1 + floor + (0 + log_uranium_ppm | county), data = data)
Explanation: 5 Fit the HLM with Stan
In this section we use rstanarm to fit a Stan model using the same formula/syntax as the lme4 model above.
Unlike lme4 and the TF model below, rstanarm is a fully Bayesian model, i.e. all parameters are presumed drawn from a Normal distribution, with the parameters themselves estimated as coming from a distribution.
Note: To run this section, you must switch to an R colab runtime.
End of explanation
fit
color_scheme_set("red")
ppc_dens_overlay(y = fit$y,
yrep = posterior_predict(fit, draws = 50))
color_scheme_set("brightblue")
ppc_intervals(
y = data$log_radon,
yrep = posterior_predict(fit),
x = data$county,
prob = 0.8
) +
labs(
x = "County",
y = "log radon",
title = "80% posterior predictive intervals \nvs observed log radon",
subtitle = "by county"
) +
panel_bg(fill = "gray95", color = NA) +
grid_lines(color = "white")
# Write the posterior samples (4000 for each variable) to a CSV.
write.csv(tidy(as.matrix(fit)), "/tmp/radon/stan_fit.csv")
Explanation: Note: The runtimes reported are from a single CPU core. (This colab is not intended to be a faithful representation of Stan or TFP runtime.)
End of explanation
with tf.gfile.Open('/tmp/radon/lme4_fit.csv', 'r') as f:
lme4_fit = pd.read_csv(f, index_col=0)
lme4_fit.head()
Explanation: Note: Switch back to the Python TF kernel runtime.
End of explanation
posterior_random_weights_lme4 = np.array(lme4_fit.condval, dtype=np.float32)
lme4_prior_scale = np.array(lme4_fit.condsd, dtype=np.float32)
print(posterior_random_weights_lme4.shape, lme4_prior_scale.shape)
Explanation: For later visualization, we retrieve the point estimates and conditional standard deviations of the group random effects from lme4.
End of explanation
with tf.Session() as sess:
lme4_dist = tfp.distributions.Independent(
tfp.distributions.Normal(
loc=posterior_random_weights_lme4,
scale=lme4_prior_scale),
reinterpreted_batch_ndims=1)
posterior_random_weights_lme4_final_ = sess.run(lme4_dist.sample(4000))
posterior_random_weights_lme4_final_.shape
Explanation: We draw samples of the county weights using the lme4 estimated means and standard deviations.
End of explanation
with tf.gfile.Open('/tmp/radon/stan_fit.csv', 'r') as f:
samples = pd.read_csv(f, index_col=0)
samples.head()
posterior_random_weights_cols = [
col for col in samples.columns if 'b.log_uranium_ppm.county' in col
]
posterior_random_weights_final_stan = samples[
posterior_random_weights_cols].values
print(posterior_random_weights_final_stan.shape)
Explanation: We also retrieve the posterior samples of the county weights from the Stan fit.
End of explanation
# Handy snippet to reset the global graph and global session.
with warnings.catch_warnings():
warnings.simplefilter('ignore')
tf.reset_default_graph()
try:
sess.close()
except:
pass
sess = tf.InteractiveSession()
Explanation: This Stan example shows how one would implement LMER in a style closer to TFP, i.e. by directly specifying the probabilistic model.
6 Fit the HLM with TF Probability
In this section we use low-level TensorFlow Probability primitives (Distributions) to specify the hierarchical linear model and fit the unknown parameters.
End of explanation
inv_scale_transform = lambda y: np.log(y) # Not using TF here.
fwd_scale_transform = tf.exp
Explanation: 6.1 Specify the model
In this section we specify the radon linear mixed-effects model using TFP primitives. To do so, we specify two functions which produce two TFP distributions:
make_weights_prior: a multivariate Normal prior for the random weights (which are multiplied by $\log(\text{UraniumPPM}_{c_i})$ to compute the linear predictor).
make_log_radon_likelihood: a batch of Normal distributions over each observed $\log(\text{Radon}_i)$ dependent variable.
Since we will be fitting the parameters of each of these distributions, we must use TF variables (tf.get_variable). However, since we wish to use unconstrained optimization, we must find a way to constrain real values to achieve the required semantics (e.g. positive values representing standard deviations).
End of explanation
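Not part of the original notebook: a tiny NumPy-only illustration (no TF session needed) of why the log/exp pair above lets us optimize a positive scale parameter without constraints; any real-valued raw variable maps to a strictly positive scale, and the transform round-trips exactly.
import numpy as np

inv_scale_transform_np = lambda y: np.log(y)   # positive scale -> unconstrained raw value
fwd_scale_transform_np = np.exp                # unconstrained raw value -> positive scale

raw = inv_scale_transform_np(1.0)              # 0.0, the initializer used above
print(fwd_scale_transform_np(raw))             # 1.0: round-trips exactly
print(fwd_scale_transform_np(-5.0))            # ~0.0067: still strictly positive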
def _make_weights_prior(num_counties, dtype):
Returns a `len(log_uranium_ppm)` batch of univariate Normal.
raw_prior_scale = tf.get_variable(
name='raw_prior_scale',
initializer=np.array(inv_scale_transform(1.), dtype=dtype))
return tfp.distributions.Independent(
tfp.distributions.Normal(
loc=tf.zeros(num_counties, dtype=dtype),
scale=fwd_scale_transform(raw_prior_scale)),
reinterpreted_batch_ndims=1)
make_weights_prior = tf.make_template(
name_='make_weights_prior', func_=_make_weights_prior)
Explanation: The following function constructs our prior, $p(\beta|\sigma_C)$, where $\beta$ denotes the random-effect weights and $\sigma_C$ the standard deviation.
We use tf.make_template to ensure that the first call to this function instantiates the TF variables it uses, while all subsequent calls reuse the variables' most recent values.
End of explanation
def _make_log_radon_likelihood(random_effect_weights, floor, county,
log_county_uranium_ppm, init_log_radon_stddev):
raw_likelihood_scale = tf.get_variable(
name='raw_likelihood_scale',
initializer=np.array(
inv_scale_transform(init_log_radon_stddev), dtype=dtype))
fixed_effect_weights = tf.get_variable(
name='fixed_effect_weights', initializer=np.array([0., 1.], dtype=dtype))
fixed_effects = fixed_effect_weights[0] + fixed_effect_weights[1] * floor
random_effects = tf.gather(
random_effect_weights * log_county_uranium_ppm,
indices=tf.to_int32(county),
axis=-1)
linear_predictor = fixed_effects + random_effects
return tfp.distributions.Normal(
loc=linear_predictor, scale=fwd_scale_transform(raw_likelihood_scale))
make_log_radon_likelihood = tf.make_template(
name_='make_log_radon_likelihood', func_=_make_log_radon_likelihood)
Explanation: The following function constructs our likelihood, $p(y|x,\omega,\beta,\sigma_N)$, where $y,x$ denote the response and evidence, $\omega,\beta$ denote the fixed- and random-effect weights, and $\sigma_N$ the standard deviation.
Here again we use tf.make_template to ensure the TF variables are reused across calls.
End of explanation
def joint_log_prob(random_effect_weights, log_radon, floor, county,
log_county_uranium_ppm, dtype):
num_counties = len(log_county_uranium_ppm)
rv_weights = make_weights_prior(num_counties, dtype)
rv_radon = make_log_radon_likelihood(
random_effect_weights,
floor,
county,
log_county_uranium_ppm,
init_log_radon_stddev=radon.log_radon.values.std())
return (rv_weights.log_prob(random_effect_weights)
+ tf.reduce_sum(rv_radon.log_prob(log_radon), axis=-1))
Explanation: Finally, we use the prior and likelihood generators to construct the joint log-density.
End of explanation
# Specify unnormalized posterior.
dtype = np.float32
log_county_uranium_ppm = radon[
['county', 'log_uranium_ppm']].drop_duplicates().values[:, 1]
log_county_uranium_ppm = log_county_uranium_ppm.astype(dtype)
def unnormalized_posterior_log_prob(random_effect_weights):
return joint_log_prob(
random_effect_weights=random_effect_weights,
log_radon=dtype(radon.log_radon.values),
floor=dtype(radon.floor.values),
county=np.int32(radon.county.values),
log_county_uranium_ppm=log_county_uranium_ppm,
dtype=dtype)
Explanation: 6.2 Training (stochastic approximation of expectation maximization)
To fit our linear mixed-effects regression model, we use a stochastic approximation version of the expectation-maximization algorithm (SAEM). The basic idea is to use samples from the posterior to approximate the expected joint log-density (E-step), then find the parameters which maximize that quantity (M-step). Somewhat more concretely, the fixed-point iteration is given by:
$$\begin{align} \text{E}[ \log p(x, Z | \theta) | \theta_0] &\approx \frac{1}{M} \sum_{m=1}^M \log p(x, z_m | \theta), \quad Z_m\sim p(Z | x, \theta_0) && \text{E-step}\\ &=: Q_M(\theta, \theta_0) \\ \theta_0 &= \theta_0 - \eta \left.\nabla_\theta Q_M(\theta, \theta_0)\right|_{\theta=\theta_0} && \text{M-step} \end{align}$$
where $x$ denotes the evidence, $Z$ the latent variables which need to be marginalized out, and $\theta,\theta_0$ possible parameterizations.
For a more thorough explanation, see Convergence of a stochastic approximation version of the EM algorithm by Bernard Delyon, Marc Lavielle, Eric Moulines (Ann. Statist., 1999).
To compute the E-step, we need to sample from the posterior. Since sampling from the posterior is not easy, we use Hamiltonian Monte Carlo (HMC). HMC is a Markov chain Monte Carlo procedure which uses gradients of the unnormalized posterior log-density (with respect to the state, not the parameters) to propose new samples.
Specifying the unnormalized posterior log-density is simple: it is merely the joint log-density "pinned" at whatever we wish to condition on.
End of explanation
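Not part of the original notebook: a self-contained NumPy toy (a made-up conjugate model, not the radon HLM) showing the SAEM fixed-point idea, where the posterior over the latent variables can be sampled exactly instead of with HMC.
# Toy SAEM sketch (illustration only): latent Z_i ~ Normal(mu, 1), observed x_i ~ Normal(Z_i, 1).
# Marginally x_i ~ Normal(mu, 2), so SAEM should drive mu towards x.mean().
import numpy as np

rng = np.random.RandomState(0)
x = rng.normal(loc=3.0, scale=np.sqrt(2.), size=500)   # synthetic evidence

mu = 0.0                    # current parameter estimate (theta_0)
learning_rate = 0.5
num_mc_samples = 10         # M in the E-step average
for _ in range(200):
    # E-step: sample Z from its exact posterior p(Z | x, mu) = Normal((x + mu) / 2, 1/2).
    z = rng.normal(loc=(x + mu) / 2., scale=np.sqrt(0.5), size=(num_mc_samples, x.size))
    # M-step: the gradient of Q_M w.r.t. mu is the MC average of sum_i (z_i - mu);
    # take one (scaled) ascent step on it.
    grad = np.mean(np.sum(z - mu, axis=1))
    mu = mu + learning_rate * grad / x.size
print(mu, x.mean())          # the two should be close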
# Set-up E-step.
step_size = tf.get_variable(
'step_size',
initializer=np.array(0.2, dtype=dtype),
trainable=False)
hmc = tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn=unnormalized_posterior_log_prob,
num_leapfrog_steps=2,
step_size=step_size,
step_size_update_fn=tfp.mcmc.make_simple_step_size_update_policy(
num_adaptation_steps=None),
state_gradients_are_stopped=True)
init_random_weights = tf.placeholder(dtype, shape=[len(log_county_uranium_ppm)])
posterior_random_weights, kernel_results = tfp.mcmc.sample_chain(
num_results=3,
num_burnin_steps=0,
num_steps_between_results=0,
current_state=init_random_weights,
kernel=hmc)
Explanation: This completes the E-step setup: we have created the HMC transition kernel.
Notes:
We use state_stop_gradient=True to prevent the M-step from backpropagating through draws from the MCMC. (Recall that we need not backprop through the E-step since it is intentionally parameterized at the previous best-known estimators.)
We use tf.placeholder so that when we eventually execute the TF graph, we can feed the previous iteration's random MCMC sample in as the next iteration's chain value.
We use TFP's adaptive step_size heuristic, tfp.mcmc.hmc_step_size_update_fn.
End of explanation
# Set-up M-step.
loss = -tf.reduce_mean(kernel_results.accepted_results.target_log_prob)
global_step = tf.train.get_or_create_global_step()
learning_rate = tf.train.exponential_decay(
learning_rate=0.1,
global_step=global_step,
decay_steps=2,
decay_rate=0.99)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss, global_step=global_step)
Explanation: Now we set up the M-step. This is essentially the same optimization one might do in TF.
End of explanation
# Initialize all variables.
init_op = tf.initialize_all_variables()
# Grab variable handles for diagnostic purposes.
with tf.variable_scope('make_weights_prior', reuse=True):
prior_scale = fwd_scale_transform(tf.get_variable(
name='raw_prior_scale', dtype=dtype))
with tf.variable_scope('make_log_radon_likelihood', reuse=True):
likelihood_scale = fwd_scale_transform(tf.get_variable(
name='raw_likelihood_scale', dtype=dtype))
fixed_effect_weights = tf.get_variable(
name='fixed_effect_weights', dtype=dtype)
Explanation: Finally, we perform some housekeeping tasks. We must tell TF that all variables are initialized. We also create handles to the TF variables so we can print their values at each iteration of the procedure.
End of explanation
init_op.run()
w_ = np.zeros([len(log_county_uranium_ppm)], dtype=dtype)
%%time
maxiter = int(1500)
num_accepted = 0
num_drawn = 0
for i in range(maxiter):
[
_,
global_step_,
loss_,
posterior_random_weights_,
kernel_results_,
step_size_,
prior_scale_,
likelihood_scale_,
fixed_effect_weights_,
] = sess.run([
train_op,
global_step,
loss,
posterior_random_weights,
kernel_results,
step_size,
prior_scale,
likelihood_scale,
fixed_effect_weights,
], feed_dict={init_random_weights: w_})
w_ = posterior_random_weights_[-1, :]
num_accepted += kernel_results_.is_accepted.sum()
num_drawn += kernel_results_.is_accepted.size
acceptance_rate = num_accepted / num_drawn
if i % 100 == 0 or i == maxiter - 1:
print('global_step:{:>4} loss:{: 9.3f} acceptance:{:.4f} '
'step_size:{:.4f} prior_scale:{:.4f} likelihood_scale:{:.4f} '
'fixed_effect_weights:{}'.format(
global_step_, loss_.mean(), acceptance_rate, step_size_,
prior_scale_, likelihood_scale_, fixed_effect_weights_))
Explanation: 6.3 Execute
In this section we execute our SAEM TF graph. The main trick here is to feed the last draw from the HMC kernel into the next iteration. This is achieved by using feed_dict in the sess.run call.
End of explanation
%%time
posterior_random_weights_final, kernel_results_final = tfp.mcmc.sample_chain(
num_results=int(15e3),
num_burnin_steps=int(1e3),
current_state=init_random_weights,
kernel=tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn=unnormalized_posterior_log_prob,
num_leapfrog_steps=2,
step_size=step_size))
[
posterior_random_weights_final_,
kernel_results_final_,
] = sess.run([
posterior_random_weights_final,
kernel_results_final,
], feed_dict={init_random_weights: w_})
print('prior_scale: ', prior_scale_)
print('likelihood_scale: ', likelihood_scale_)
print('fixed_effect_weights: ', fixed_effect_weights_)
print('acceptance rate final: ', kernel_results_final_.is_accepted.mean())
Explanation: After about 1500 steps, our parameter estimates have stabilized.
6.4 Results
Now that we have fit the parameters, let's generate a large number of posterior samples and study the results.
End of explanation
x = posterior_random_weights_final_ * log_county_uranium_ppm
I = county_freq[:, 0]
x = x[:, I]
cols = np.array(county_name)[I]
pw = pd.DataFrame(x)
pw.columns = cols
fig, ax = plt.subplots(figsize=(25, 4))
ax = pw.boxplot(rot=80, vert=True);
Explanation: Next, we construct a box-and-whiskers plot of the $\beta_c \log(\text{UraniumPPM}_c)$ random effects, ordering them by decreasing county frequency.
End of explanation
nrows = 17
ncols = 5
fig, ax = plt.subplots(nrows, ncols, figsize=(18, 21), sharey=True, sharex=True)
with warnings.catch_warnings():
warnings.simplefilter('ignore')
ii = -1
for r in range(nrows):
for c in range(ncols):
ii += 1
idx = county_freq[ii, 0]
sns.kdeplot(
posterior_random_weights_final_[:, idx] * log_county_uranium_ppm[idx],
color='blue',
alpha=.3,
shade=True,
label='TFP',
ax=ax[r][c])
sns.kdeplot(
posterior_random_weights_final_stan[:, idx] *
log_county_uranium_ppm[idx],
color='red',
alpha=.3,
shade=True,
label='Stan/rstanarm',
ax=ax[r][c])
sns.kdeplot(
posterior_random_weights_lme4_final_[:, idx] *
log_county_uranium_ppm[idx],
color='#F4B400',
alpha=.7,
shade=False,
label='R/lme4',
ax=ax[r][c])
ax[r][c].vlines(
posterior_random_weights_lme4[idx] * log_county_uranium_ppm[idx],
0,
5,
color='#F4B400',
linestyle='--')
ax[r][c].set_title(county_name[idx] + ' ({})'.format(idx), y=.7)
ax[r][c].set_ylim(0, 5)
ax[r][c].set_xlim(-1., 1.)
ax[r][c].get_yaxis().set_visible(False)
if ii == 2:
ax[r][c].legend(bbox_to_anchor=(1.4, 1.7), fontsize=20, ncol=3)
else:
ax[r][c].legend_.remove()
fig.subplots_adjust(wspace=0.03, hspace=0.1)
Explanation: From this box-and-whiskers plot, we see that the variance of the county-level $\log(\text{UraniumPPM})$ random effect increases as a county is less represented in the dataset. Intuitively this makes sense: with less evidence, we should be less certain about the effect of a particular county.
7 Side-by-side comparison
Next, we compare the results of all three procedures. To do this, we compute non-parametric estimates of the posterior samples generated by Stan and by TFP, and we also compare against the parametric (approximate) estimates produced by R's lme4 package.
The following plot shows the posterior distribution of each weight for each county in Minnesota, for Stan (red), TFP (blue), and R's lme4 (orange). Because the Stan and TFP results are shaded, we expect to see purple where the two agree; for simplicity, the results from R are not shaded. Each subplot represents a single county, arranged in raster-scan order by descending frequency (i.e., left to right, then top to bottom).
End of explanation |
4,439 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Préambule
Step2: Exercice Robozzle | Python Code:
if (simple case):
(immediate solution)
else:
(recursive solution,
involving a simpler case than the original problem)
Explanation: Preamble: we started with a reminder about recursion, by re-writing the behaviour of factorial on the board and unrolling the algorithm by hand.
Recall that a recursive function is a function that calls itself, and that each call to the function is independent of the others. The easiest way to write a recursive function successfully is to always start by expressing the simple case (the immediate solution), and then to write the recursive case (the solution that relies on the solution of a simpler problem).
Below you will find the skeleton of a typical recursive function.
End of explanation
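As a small added illustration of the same pattern (not part of the original lesson), here is a hedged sketch of another recursive function, written with the simple case first and a print to make the call stack visible:

def recursive_sum(n):
    # simple case: the recursion stops here
    if n == 1:
        s = 1
    # recursive case: rely on the solution of a smaller problem
    else:
        s = recursive_sum(n - 1) + n
    print("--- recursive_sum({}) = {}".format(n, s))
    return s

print(recursive_sum(4))  # the prints show the calls unwinding from 1 up to 4, then 10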
def fact(n):
:input n: int
:pre-condition: n > 0
:output f: int
:post-condition: f = n * (n-1) * ... * 1
if n == 1:
f = 1
else:
f = fact(n-1)*n
print("--- fact({}) = {}".format(n,f))
return f
print(fact(6))
Explanation: Robozzle exercise: we then solved Robozzle puzzle no. 656; the goal was to help you understand how the stack of recursive calls works. http://robozzle.com/js/play.aspx?puzzle=656
End of explanation |
4,440 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Record, Save, and Play Moves on a Poppy Creature
This notebook is still work in progress! Feedbacks are welcomed!
In this tutorial we will show how to
Step1: Import the Move, Recorder and Player
Step2: Create a Recorder for the robot Poppy
Step3: Start the recording
First, turn the recorded motors compliant, so you can freely move them
Step4: Starts the recording when you are ready!
Step5: Stop the recording
Stop it when you are done demonstrating the movement.
Step6: Turn back off the compliance.
Step7: Get the recorder Move and store it on the disk
Save the recorded move on the text file named 'mymove.json'.
Step8: Load a saved Move
Re-load it from the file jsut as an example purpose.
Step9: Create a Move Player and Play Back a Recorded Move
First, create the object used to re-play a recorded Move.
Step10: You can start the play back whenever you want
Step11: You can play your move as many times as you want. Note, that we use the wait_to_stop method to wait for the first play abck to end before running it again. | Python Code:
from pypot.creatures import PoppyErgo
poppy = PoppyErgo()
for m in poppy.motors:
m.compliant = False
m.goal_position = 0.0
Explanation: Record, Save, and Play Moves on a Poppy Creature
This notebook is still a work in progress! Feedback is welcome!
In this tutorial we will show how to:
* record moves by direct demonstration on a Poppy Creature
* save them to the disk - and re-load them
* play, and re-play the best moves
To follow this notebook, you should already have installed everything needed to control a Poppy Creature. The examples below use a Poppy Ergo, but they can easily be transposed to a Poppy Humanoid or to any other creature.
Connect to your Poppy Creature
First, connect to your Poppy Creature and put it in its "base" position so you can easily record motions.
Here we use a Poppy Ergo, but you can replace it with a Poppy Humanoid.
End of explanation
# Import everything you need for recording, playing, saving, and loading Moves
# Move: object used to represent a movement
# MoveRecorder: object used to record a Move
# MovePlayer: object used to play (and re-play) a Move
from pypot.primitive.move import Move, MoveRecorder, MovePlayer
Explanation: Import the Move, Recorder and Player
End of explanation
record_frequency = 50.0 # This means that a new position will be recorded 50 times per second.
recorded_motors = [poppy.m4, poppy.m5, poppy.m6] # We will record the position of the 3 last motors of the Ergo
# You can also use alias for the recorded_motors
# e.g. recorder = MoveRecorder(poppy, record_frequency, poppy.tip)
# or even to record all motors position
# recorder = MoveRecorder(poppy, record_frequency, poppy.motors)
recorder = MoveRecorder(poppy, record_frequency, recorded_motors)
Explanation: Create a Recorder for the robot Poppy
End of explanation
for m in recorded_motors:
m.compliant = True
Explanation: Start the recording
First, turn the recorded motors compliant, so you can freely move them:
End of explanation
recorder.start()
Explanation: Starts the recording when you are ready!
End of explanation
recorder.stop()
Explanation: Stop the recording
Stop it when you are done demonstrating the movement.
End of explanation
for m in recorded_motors:
m.compliant = False
Explanation: Turn back off the compliance.
End of explanation
recorded_move = recorder.move
with open('mymove.json', 'w') as f:
recorded_move.save(f)
Explanation: Get the recorder's Move and store it on disk
Save the recorded move to the text file named 'mymove.json'.
End of explanation
with open('mymove.json') as f:
loaded_move = Move.load(f)
Explanation: Load a saved Move
Re-load it from the file, just for example purposes.
End of explanation
player = MovePlayer(poppy, loaded_move)
Explanation: Create a Move Player and Play Back a Recorded Move
First, create the object used to re-play a recorded Move.
End of explanation
player.start()
Explanation: You can start the playback whenever you want:
End of explanation
for _ in range(3):
player.start()
player.wait_to_stop()
Explanation: You can play your move as many times as you want. Note that we use the wait_to_stop method to wait for the first playback to end before running it again.
End of explanation |
4,441 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exact solution used in MES runs
We would like to MES the operation
$$
J\nabla \cdot \mathbf{f}_\perp
$$
Using cylindrical geometry.
Step1: Initialize
Step2: Define the variables
Step3: Define manifactured solutions
Due to orthogonality we have that
$$
S = J\nabla \cdot \mathbf{f}\perp
= J\nabla\perp \cdot \mathbf{f}_\perp
= \partial_i \left(Jf^i\right)
= \partial_x \left(Jf^x\right) + \partial_z \left(Jf^z\right)
$$
In cylindrical coordinates $J=\rho$, so this gives
$$
f = \partial_\rho \left(\rho f^\rho\right) + \partial_\theta \left(\rho f^\theta\right)
= \rho\partial_\rho f^\rho + f^\rho + \rho\partial_\theta f^\theta
$$
NOTE
Step4: Calculate the solution
Step5: Plot
Step6: Print the variables in BOUT++ format | Python Code:
%matplotlib notebook
from sympy import init_printing
from sympy import S
from sympy import sin, cos, tanh, exp, pi, sqrt
from boutdata.mms import x, y, z, t
from boutdata.mms import Delp2, DDX, DDY, DDZ
import os, sys
# If we add to sys.path, then it must be an absolute path
common_dir = os.path.abspath('./../../../../common')
# Sys path is a list of system paths
sys.path.append(common_dir)
from CELMAPy.MES import get_metric, make_plot, BOUT_print
init_printing()
Explanation: Exact solution used in MES runs
We would like to MES the operation
$$
J\nabla \cdot \mathbf{f}_\perp
$$
Using cylindrical geometry.
End of explanation
folder = '../twoGaussians/'
metric = get_metric()
Explanation: Initialize
End of explanation
# Initialization
the_vars = {}
Explanation: Define the variables
End of explanation
# We need Lx
from boututils.options import BOUTOptions
myOpts = BOUTOptions(folder)
Lx = eval(myOpts.geom['Lx'])
# Two gaussians
# NOTE: S actually looks good
# The skew sinus
# In cartesian coordinates we would like a sinus with with a wave-vector in the direction
# 45 degrees with respect to the first quadrant. This can be achieved with a wave vector
# k = [1/sqrt(2), 1/sqrt(2)]
# sin((1/sqrt(2))*(x + y))
# We would like 2 nodes, so we may write
# sin((1/sqrt(2))*(x + y)*(2*pi/(2*Lx)))
# Rewriting this to cylindrical coordinates, gives
# sin((1/sqrt(2))*(x*(cos(z)+sin(z)))*(2*pi/(2*Lx)))
# The gaussian
# In cartesian coordinates we would like
# f = exp(-(1/(2*w^2))*((x-x0)^2 + (y-y0)^2))
# In cylindrical coordinates, this translates to
# f = exp(-(1/(2*w^2))*(x^2 + y^2 + x0^2 + y0^2 - 2*(x*x0+y*y0) ))
# = exp(-(1/(2*w^2))*(rho^2 + rho0^2 - 2*rho*rho0*(cos(theta)*cos(theta0)+sin(theta)*sin(theta0)) ))
# = exp(-(1/(2*w^2))*(rho^2 + rho0^2 - 2*rho*rho0*(cos(theta - theta0)) ))
# A parabola
# In cartesian coordinates, we have
# ((x-x0)/Lx)^2
# Chosing this function to have a zero value at the edge yields in cylindrical coordinates
# ((x*cos(z)+Lx)/(2*Lx))^2
w = 0.8*Lx
rho0 = 0.3*Lx
theta0 = 5*pi/4
the_vars['f^x'] = sin((1/sqrt(2))*(x*(cos(z)+sin(z)))*(2*pi/(2*Lx)))*\
exp(-(1/(2*w**2))*(x**2 + rho0**2 - 2*x*rho0*(cos(z - theta0)) ))*\
((x*cos(z)+Lx)/(2*Lx))**2
# The gaussian
# In cartesian coordinates we would like
# f = exp(-(1/(2*w^2))*((x-x0)^2 + (y-y0)^2))
# In cylindrical coordinates, this translates to
# f = exp(-(1/(2*w^2))*(x^2 + y^2 + x0^2 + y0^2 - 2*(x*x0+y*y0) ))
# = exp(-(1/(2*w^2))*(rho^2 + rho0^2 - 2*rho*rho0*(cos(theta)*cos(theta0)+sin(theta)*sin(theta0)) ))
# = exp(-(1/(2*w^2))*(rho^2 + rho0^2 - 2*rho*rho0*(cos(theta - theta0)) ))
w = 0.5*Lx
rho0 = 0.2*Lx
theta0 = pi
the_vars['f^z'] = exp(-(1/(2*w**2))*(x**2 + rho0**2 - 2*x*rho0*(cos(z - theta0)) ))
Explanation: Define manufactured solutions
Due to orthogonality we have that
$$
S = J\nabla \cdot \mathbf{f}_\perp
= J\nabla_\perp \cdot \mathbf{f}_\perp
= \partial_i \left(Jf^i\right)
= \partial_x \left(Jf^x\right) + \partial_z \left(Jf^z\right)
$$
In cylindrical coordinates $J=\rho$, so this gives
$$
f = \partial_\rho \left(\rho f^\rho\right) + \partial_\theta \left(\rho f^\theta\right)
= \rho\partial_\rho f^\rho + f^\rho + \rho\partial_\theta f^\theta
$$
NOTE:
z must be periodic
The field $f(\rho, \theta)$ must be of class infinity in $z=0$ and $z=2\pi$
The field $f(\rho, \theta)$ must be single valued when $\rho\to0$
The field $f(\rho, \theta)$ must be continuous in the $\rho$ direction with $f(\rho, \theta + \pi)$
Eventual BC in $\rho$ must be satisfied
End of explanation
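As an added sanity check (not part of the original notebook), the cylindrical-divergence identity above can be verified symbolically with plain sympy, using generic functions of rho and theta:

import sympy as sp

rho, theta = sp.symbols('rho theta', positive=True)
f_rho = sp.Function('f_rho')(rho, theta)
f_theta = sp.Function('f_theta')(rho, theta)

lhs = sp.diff(rho * f_rho, rho) + sp.diff(rho * f_theta, theta)
rhs = rho * sp.diff(f_rho, rho) + f_rho + rho * sp.diff(f_theta, theta)
print(sp.simplify(lhs - rhs))  # prints 0, so the two forms of the divergence agree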
the_vars['S'] = DDX(metric.J*the_vars['f^x'], metric=metric)\
+ 0\
+ DDZ(metric.J*the_vars['f^z'], metric=metric)
Explanation: Calculate the solution
End of explanation
make_plot(folder=folder, the_vars=the_vars, plot2d=True, include_aux=False)
Explanation: Plot
End of explanation
BOUT_print(the_vars, rational=False)
Explanation: Print the variables in BOUT++ format
End of explanation |
4,442 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Optimisation
Step1: As before, we can define an error function and minimise it
Step2: We could also however define the inference problem statistically, by specifying a likelihood.
Step3: Then use an optimisation routine to find the maximum likelihood estimates. (Note that the below is supposed to fail.)
Step4: Uh oh! What's happened here?
The likelihood function we used requires a sigma parameter, an extra parameter it adds to the inference problem that describes the estimated noise level in the data.
This means the number of parameters in our log-likelihood has gone up by one from the problem's number of parameters
Step5: As a result, we need to update our initial point (and boundaries) with a guess for what sigma may be.
In a realistic situtation, we could try to find a flat bit of signal to obtain a first estimate. In this example, we'll just start off by guessing sigma=1
Step6: Note that the noise has introduced a slight bias into the outcome, and the estimated sigma is different to the true value of 3.
As before, we can now plot a simulation with the obtained parameters, and see how it matches the data
Step7: We can now estimate profile likelihood confidence intervals for the carrying capacity parameter by fixing the other parameters at their maximum likelihood estimates. In classical statistics, it is assumed that the log-likelihood near the maximum likelihood estimates is well approximated by a normal distribution. When this assumption breaks down, because the likelihood distribution is skewed, some prefer to use profile likelihood approaches to construct confidence intervals (others prefer Bayesian approaches which don't rely on such assumptions).
To start, let's plot the profile log-likelihood.
Step8: To construct a profile likelihood $(1-\alpha)$% confidence interval we determine the region in parameter space that satisfies
Step9: Next we find the bounds of this region, which yields the 95% confidence interval on this parameter. | Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pints
import pints.toy as toy
# Create a model
model = toy.LogisticModel()
# Set some parameters
real_parameters = [0.1, 50]
# Create fake data
times = model.suggested_times()
values = model.simulate(real_parameters, times)
sigma = 3
noisy_values = values + np.random.normal(0, sigma, times.shape)
# Create an inference problem
problem = pints.SingleOutputProblem(model, times, noisy_values)
# Show the generated data
plt.figure()
plt.xlabel('Time')
plt.ylabel('Values')
plt.plot(times, noisy_values)
plt.plot(times, values)
plt.show()
Explanation: Optimisation: Maximising a log-likelihood
As well as minimising error functions, PINTS optimisation can be used to find the maximum of a loglikelihood (or of any pints.LogPDF object).
Following on from the first example, we can define an inference problem using the logistic model:
End of explanation
score = pints.SumOfSquaresError(problem)
boundaries = pints.RectangularBoundaries([0, 5], [1, 500])
x0 = np.array([0.5, 200])
opt = pints.OptimisationController(score, x0, method=pints.XNES)
opt.set_log_to_screen(False)
x1, f1 = opt.run()
print('Estimated parameters:')
print(x1)
Explanation: As before, we can define an error function and minimise it:
End of explanation
log_likelihood = pints.GaussianLogLikelihood(problem)
Explanation: We could also however define the inference problem statistically, by specifying a likelihood.
End of explanation
opt = pints.OptimisationController(log_likelihood, x0, method=pints.XNES)
x2, f2 = opt.run()
Explanation: Then use an optimisation routine to find the maximum likelihood estimates. (Note that the below is supposed to fail.)
End of explanation
print(model.n_parameters())
print(problem.n_parameters())
print(log_likelihood.n_parameters())
Explanation: Uh oh! What's happened here?
The likelihood function we used requires a sigma parameter, an extra parameter it adds to the inference problem that describes the estimated noise level in the data.
This means the number of parameters in our log-likelihood has gone up by one from the problem's number of parameters:
End of explanation
y0 = np.array([0.5, 200, 1])
boundaries_3d = pints.RectangularBoundaries([0, 5, 1e-3], [1, 500, 10])
opt = pints.OptimisationController(log_likelihood, y0, boundaries=boundaries_3d, method=pints.XNES)
opt.set_log_to_screen(False)
y1, g1 = opt.run()
print('Estimated parameters:')
print(y1)
Explanation: As a result, we need to update our initial point (and boundaries) with a guess for what sigma may be.
In a realistic situation, we could try to find a flat bit of signal to obtain a first estimate. In this example, we'll just start off by guessing sigma=1:
End of explanation
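If you do want a data-driven starting guess instead of sigma=1, one common trick (an addition here, not part of the original tutorial) is to estimate the noise level from the first differences of the signal, since differencing removes most of the slowly varying model component:

# rough noise estimate: std of successive differences, scaled back by sqrt(2)
sigma_guess = np.std(np.diff(noisy_values)) / np.sqrt(2)
print(sigma_guess)  # should land reasonably close to the true value of 3

y0_alt = np.array([0.5, 200, sigma_guess])  # alternative starting point for the optimiser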
# Show the generated data
simulated_values = problem.evaluate(y1[:2])
plt.figure()
plt.xlabel('Time')
plt.ylabel('Values')
plt.plot(times, noisy_values)
plt.fill_between(times, simulated_values - sigma, simulated_values + sigma, alpha=0.2)
plt.plot(times, simulated_values)
plt.show()
Explanation: Note that the noise has introduced a slight bias into the outcome, and the estimated sigma is different to the true value of 3.
As before, we can now plot a simulation with the obtained parameters, and see how it matches the data:
End of explanation
kappa = np.linspace(30, 70, 100)
log_prob = [log_likelihood([y1[0], k, y1[2]]) for k in kappa]
plt.plot(kappa, log_prob)
plt.vlines(y1[1], ymin=-2700, ymax=log_likelihood(y1), linestyles='dashed')
plt.xlabel('Carrying capacity')
plt.ylabel('Log-likelihood')
plt.show()
Explanation: We can now estimate profile likelihood confidence intervals for the carrying capacity parameter by fixing the other parameters at their maximum likelihood estimates. In classical statistics, it is assumed that the log-likelihood near the maximum likelihood estimates is well approximated by a normal distribution. When this assumption breaks down, because the likelihood distribution is skewed, some prefer to use profile likelihood approaches to construct confidence intervals (others prefer Bayesian approaches which don't rely on such assumptions).
To start, let's plot the profile log-likelihood.
End of explanation
import scipy.stats
chi2 = scipy.stats.chi2.ppf(0.95, df=1)
log_likelihood_min = log_likelihood(y1) - chi2 / 2
print(log_likelihood_min)
Explanation: To construct a profile likelihood $(1-\alpha)$% confidence interval we determine the region in parameter space that satisfies:
$$\text{log } p(X|\theta, \hat{\phi}) \geq \text{log } p(X|\hat{\theta}, \hat{\phi}) - \frac{1}{2}\chi(1)^2_{1-\alpha},$$
where $\theta$ is the parameter we are seeking a confidence interval for and $\phi$ is a vector of other parameters; the $(\hat{\theta},\hat{\phi})$ variables indicate the maximum likelihood estimates; and $\chi(1)^2_{1-\alpha}$ represents the $\alpha$% critical values of a chi-squared distribution with one degree of freedom.
First we obtain the threshold value of log-likelihood to construct a 95% confidence interval.
End of explanation
import scipy.optimize
def log_likelihood_bounds(k):
return (log_likelihood([y1[0], k, y1[2]]) - log_likelihood_min)**2
res = scipy.optimize.minimize(log_likelihood_bounds, 40)
kappa_min = res.x[0]
res = scipy.optimize.minimize(log_likelihood_bounds, 60)
kappa_max = res.x[0]
print('Confidence interval = ' + str([kappa_min, kappa_max]))
Explanation: Next we find the bounds of this region, which yields the 95% confidence interval on this parameter.
End of explanation |
4,443 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Energy Meter Examples
BayLibre's ACME Cape and IIOCapture
More information can be found at https
Step1: Import required modules
Step2: Target Configuration
The target configuration is used to describe and configure your test environment.
You can find more details in examples/utils/testenv_example.ipynb.
Step3: Workload Execution and Power Consumptions Samping
Detailed information on RTApp can be found in examples/wlgen/rtapp_example.ipynb.
Each EnergyMeter derived class has two main methods
Step4: Power Measurements Data | Python Code:
import logging
from conf import LisaLogging
LisaLogging.setup()
Explanation: Energy Meter Examples
BayLibre's ACME Cape and IIOCapture
More information can be found at https://github.com/ARM-software/lisa/wiki/Energy-Meters-Requirements#iiocapture---baylibre-acme-cape.
End of explanation
# Generate plots inline
%matplotlib inline
import os
# Support to access the remote target
import devlib
from env import TestEnv
# RTApp configurator for generation of PERIODIC tasks
from wlgen import RTA, Ramp
Explanation: Import required modules
End of explanation
# Setup target configuration
my_conf = {
# Target platform and board
"platform" : 'linux',
"board" : 'juno',
"host" : '192.168.0.1',
# Folder where all the results will be collected
"results_dir" : "EnergyMeter_IIOCapture",
# Define devlib modules to load
"exclude_modules" : [ 'hwmon' ],
# Energy Meters Configuration for BayLibre's ACME Cape
"emeter" : {
"instrument" : "acme",
"conf" : {
#'iio-capture' : '/usr/bin/iio-capture',
#'ip_address' : 'baylibre-acme.local',
},
'channel_map' : {
'Device0' : 0,
'Device1' : 1,
}
},
# Tools required by the experiments
"tools" : [ 'trace-cmd', 'rt-app' ],
# Comment this line to calibrate RTApp in your own platform
"rtapp-calib" : {"0": 360, "1": 142, "2": 138, "3": 352, "4": 352, "5": 353},
}
# Initialize a test environment using:
te = TestEnv(my_conf, wipe=False, force_new=True)
target = te.target
Explanation: Target Configuration
The target configuration is used to describe and configure your test environment.
You can find more details in examples/utils/testenv_example.ipynb.
End of explanation
# Create and RTApp RAMP task
rtapp = RTA(te.target, 'ramp', calibration=te.calibration())
rtapp.conf(kind='profile',
params={
'ramp' : Ramp(
start_pct = 60,
end_pct = 20,
delta_pct = 5,
time_s = 0.5).get()
})
# EnergyMeter Start
te.emeter.reset()
rtapp.run(out_dir=te.res_dir)
# EnergyMeter Stop and samples collection
nrg_report = te.emeter.report(te.res_dir)
logging.info("Collected data:")
!tree $te.res_dir
Explanation: Workload Execution and Power Consumption Sampling
Detailed information on RTApp can be found in examples/wlgen/rtapp_example.ipynb.
Each EnergyMeter derived class has two main methods: reset and report.
- The reset method will reset the energy meter and start sampling from channels specified in the target configuration.
- The report method will stop capture and will retrieve the energy consumption data. This returns an EnergyReport composed of the measured channels energy and the report file. Each of the samples can also be obtained, as you can see below.
End of explanation
logging.info("Measured channels energy:")
logging.info("%s", nrg_report.channels)
logging.info("Returned energy file:")
logging.info(" %s", nrg_report.report_file)
!cat $nrg_report.report_file
stats_file = nrg_report.report_file.replace('.json', '_stats.json')
logging.info("Complete energy stats:")
logging.info(" %s", stats_file)
!cat $stats_file
logging.info("Device0 stats (head)")
samples_file = os.path.join(te.res_dir, 'samples_Device0.csv')
!head $samples_file
logging.info("Device1 stats (head)")
samples_file = os.path.join(te.res_dir, 'samples_Device1.csv')
!head $samples_file
Explanation: Power Measurements Data
End of explanation |
4,444 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook is a simulation of 5000 ms of 400 independent descending commands following a gamma distribution with mean of 12 ms and order 10 and the Soleus muscle (800 motoneurons). Each descending command connects to approximately 30 % of the motor units. Also, a pool of 350 Renshaw cells is present.
Step1: The spike times of all descending commands along the 5000 ms of simulation is shown in Fig. \ref{fig
Step2: The spike times of the MNs along the 5000 ms of simulation is shown in Fig. \ref{fig
Step3: The spike times of the Renshaw cells along the 5000 ms of simulation is shown in Fig. \ref{fig
Step4: The muscle force during the simulation \ref{fig | Python Code:
import sys
sys.path.insert(0, '..')
import time
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('pdf', 'png')
plt.rcParams['savefig.dpi'] = 75
plt.rcParams['figure.autolayout'] = False
plt.rcParams['figure.figsize'] = 10, 6
plt.rcParams['axes.labelsize'] = 18
plt.rcParams['axes.titlesize'] = 20
plt.rcParams['font.size'] = 16
plt.rcParams['lines.linewidth'] = 2.0
plt.rcParams['lines.markersize'] = 8
plt.rcParams['legend.fontsize'] = 14
plt.rcParams['text.usetex'] = True
plt.rcParams['font.family'] = "serif"
plt.rcParams['font.serif'] = "cm"
plt.rcParams['text.latex.preamble'] = "\usepackage{subdepth}, \usepackage{type1cm}"
import numpy as np
from Configuration import Configuration
from MotorUnitPool import MotorUnitPool
from InterneuronPool import InterneuronPool
from NeuralTract import NeuralTract
from SynapsesFactory import SynapsesFactory
conf = Configuration('confMNPoolWithRenshawCells.rmto')
conf.simDuration_ms = 5000 # Here I change simulation duration without changing the Configuration file.
# Time vector for the simulation
t = np.arange(0.0, conf.simDuration_ms, conf.timeStep_ms)
membPotential = np.zeros_like(t, dtype = 'd')
pools = dict()
pools[0] = MotorUnitPool(conf, 'SOL')
pools[1] = NeuralTract(conf, 'CMExt')
pools[2] = InterneuronPool(conf, 'RC', 'ext')
Syn = SynapsesFactory(conf, pools)
GammaOrder = 10
FR = 1000/12.0
tic = time.time()
for i in xrange(0, len(t)-1):
pools[1].atualizePool(t[i], FR, GammaOrder) # NeuralTract
pools[0].atualizeMotorUnitPool(t[i]) # MN pool
pools[3].atualizePool(t[i]) # RC synaptic Noise
pools[2].atualizeInterneuronPool(t[i]) # RC pool
toc = time.time()
print str(toc - tic) + ' seconds'
pools[0].listSpikes()
pools[1].listSpikes()
pools[2].listSpikes()
Explanation: This notebook is a simulation of 5000 ms of 400 independent descending commands following a gamma distribution with mean of 12 ms and order 10 and the Soleus muscle (800 motoneurons). Each descending command connects to approximately 30 % of the motor units. Also, a pool of 350 Renshaw cells is present.
End of explanation
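As an extra, hedged bit of analysis (not in the original notebook), the mean firing rate of the motoneuron pool can be estimated directly from the spike list gathered above, assuming poolTerminalSpikes stores spike times in ms in its first column and motor-unit indices in its second column:

spikes = pools[0].poolTerminalSpikes
duration_s = conf.simDuration_ms / 1000.0
active_units = np.unique(spikes[:, 1])
mean_rate = len(spikes) / (len(active_units) * duration_s)
print("Active motor units:", len(active_units))
print("Mean firing rate per active unit: %.1f Hz" % mean_rate)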
plt.figure()
plt.plot(pools[1].poolTerminalSpikes[:, 0],
pools[1].poolTerminalSpikes[:, 1]+1, '.')
plt.xlabel('t (ms)')
plt.ylabel('Descending Command index')
Explanation: The spike times of all descending commands along the 5000 ms of simulation are shown in Fig. \ref{fig:spikesDescRenshaw}.
End of explanation
plt.figure()
plt.plot(pools[0].poolTerminalSpikes[:, 0],
pools[0].poolTerminalSpikes[:, 1]+1, '.')
plt.xlabel('t (ms)')
plt.ylabel('Motor Unit index')
Explanation: The spike times of the MNs along the 5000 ms of simulation are shown in Fig. \ref{fig:spikesMNRenshaw}.
End of explanation
plt.figure()
plt.plot(pools[2].poolSomaSpikes[:, 0],
pools[2].poolSomaSpikes[:, 1]+1, '.')
plt.xlabel('t (ms)')
plt.ylabel('Renshaw cell index')
Explanation: The spike times of the Renshaw cells along the 5000 ms of simulation are shown in Fig. \ref{fig:spikesRenshawRenshaw}.
End of explanation
plt.figure()
plt.plot(t, pools[0].Muscle.force, '-')
plt.xlabel('t (ms)')
plt.ylabel('Muscle force (N)')
Explanation: The muscle force during the simulation is shown in Fig. \ref{fig:forceRenshaw}.
End of explanation |
4,445 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Superradiance in the open Dicke model
Step1: Wigner Function
Below we calculate the Wigner function of the photonic part of the steady state. It shows two displaced squeezed states in the reciprocal photonic space. The result is in agreement with the findings of Ref [2].
Step2: Time Evolution
Here we calculate the time evolution of a state initialized in the most excited spin state with no photons in the cavity. We calculate the full density matrix evolution as well as spin and photon operator mean values.
Step3: Plots
Step4: References
[1] E.G. Dalla Torre et al., Phys Rev. A 94, 061802(R) (2016)
[2] P. Kirton and J. Keeling, , Phys. Rev. Lett. 118, 123602 (2017)
[3] N. Shammah, S. Ahmed, N. Lambert, S. De Liberato, and F. Nori, to be submitted.
[4] J. R. Johansson, P. D. Nation, and F. Nori, Comp. Phys. Comm. 183, 1760 (2012). http | Python Code:
import matplotlib as mpl
from matplotlib import cm
import matplotlib.pyplot as plt
import numpy as np
from qutip import *
from qutip.piqs import *
#TLS parameters
N = 6
ntls = N
nds = num_dicke_states(ntls)
[jx, jy, jz] = jspin(N)
jp = jspin(N,"+")
jm = jp.dag()
w0 = 1
gE = 0.1
gD = 0.01
h = w0 * jz
#photonic parameters
nphot = 20
wc = 1
kappa = 1
ratio_g = 2
g = ratio_g/np.sqrt(N)
a = destroy(nphot)
#TLS liouvillian
system = Dicke(N = N)
system.hamiltonian = h
system.emission = gE
system.dephasing = gD
liouv = system.liouvillian()
#photonic liouvilian
h_phot = wc * a.dag() * a
c_ops_phot = [np.sqrt(kappa) * a]
liouv_phot = liouvillian(h_phot, c_ops_phot)
#identity operators
id_tls = to_super(qeye(nds))
id_phot = to_super(qeye(nphot))
#light-matter superoperator and total liouvillian
liouv_sum = super_tensor(liouv_phot, id_tls) + super_tensor(id_phot, liouv)
h_int = g * tensor(a + a.dag(), jx)
liouv_int = -1j* spre(h_int) + 1j* spost(h_int)
liouv_tot = liouv_sum + liouv_int
#total operators
jz_tot = tensor(qeye(nphot), jz)
jpjm_tot = tensor(qeye(nphot), jp*jm)
nphot_tot = tensor(a.dag()*a, qeye(nds))
rho_ss = steadystate(liouv_tot, method="eigen")
jz_ss = expect(jz_tot, rho_ss)
jpjm_ss = expect(jpjm_tot, rho_ss)
nphot_ss = expect(nphot_tot, rho_ss)
psi = rho_ss.ptrace(0)
xvec = np.linspace(-6, 6, 100)
W = wigner(psi, xvec, xvec)
Explanation: Superradiance in the open Dicke model: $N$ qubits in a bosonic cavity
Author: Nathan Shammah ([email protected])
We consider a system of $N$ two-level systems (TLSs) coupled to a cavity mode. This is known as the Dicke model
\begin{eqnarray}
H &=&\omega_{0}J_z + \omega_{c}a^\dagger a + g\left(a^\dagger + a\right)\left(J_{+} + J_{-}\right)
\end{eqnarray}
where each TLS has identical frequency $\omega_{0}$. The light matter coupling can be in the ultrastrong coupling (USC) regime, $g/\omega_{0}>0.1$.
If we study this model as an open quantum system, the cavity can leak photons and the TLSs are subject to local processes. For example, the system can be incoherently pumped at a rate $\gamma_\text{P}$, the TLSs are subject to dephasing at a rate $\gamma_\text{D}$, and local incoherent emission occurs at a rate $\gamma_\text{E}$. The dynamics of the coupled light-matter system is governed by
\begin{eqnarray}
\dot{\rho} &=&
-i\lbrack \omega_{0}J_z + \omega_{c}a^\dagger a + g\left(a^\dagger + a\right)\left(J_{+} + J_{-}\right),\rho \rbrack
+\frac{\kappa}{2}\mathcal{L}_{a}[\rho]
+\sum_{n=1}^{N}\left(\frac{\gamma_\text{P}}{2}\mathcal{L}_{J_{+,n}}[\rho]
+\frac{\gamma_\text{E}}{2}\mathcal{L}_{J_{-,n}}[\rho]
+\frac{\gamma_\text{D}}{2}\mathcal{L}_{J_{z,n}}[\rho]\right)
\end{eqnarray}
When only the dissipation of the cavity is present, beyond a critical value of the coupling $g$, the steady state of the system becomes superradiant. This is visible by looking at the Wigner function of the photonic part of the density matrix, which displays two displaced lobes in the $x$ and $p$ plane.
As it has been shown in Ref. [1], the presence of dephasing suppresses the superradiant phase transition, while the presence of local emission restores it [2].
In order to study this system using QuTiP and $PIQS$, we will first build the TLS Liouvillian, then we will build the photonic Liouvillian and finally we will build the light-matter interaction. The total dynamics of the system is thus defined in a Liouvillian space that has both TLS and photonic degrees of freedom.
This is a driven-dissipative system displaying an out-of-equilibrium quantum phase transition.
End of explanation
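To make the transition more explicit, a short (hedged) extension of the code above sweeps the light-matter coupling and records the steady-state photon number, reusing the operators and the uncoupled Liouvillian liouv_sum already built; note that repeated steadystate calls can be slow for these dimensions:

g_values = np.linspace(0.1, 2.5, 10) / np.sqrt(N)
nphot_vs_g = []
for g_val in g_values:
    h_int_g = g_val * tensor(a + a.dag(), jx)
    liouv_g = liouv_sum - 1j * spre(h_int_g) + 1j * spost(h_int_g)
    rho_g = steadystate(liouv_g, method="eigen")
    nphot_vs_g.append(expect(nphot_tot, rho_g))

plt.plot(g_values * np.sqrt(N), nphot_vs_g, 'o-')
plt.xlabel(r'$g\sqrt{N}$')
plt.ylabel(r'$\langle a^\dagger a\rangle_{ss}$')
plt.show()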
jmax = (0.5 * N)
j2max = (0.5 * N + 1) * (0.5 * N)
plt.rc('text', usetex = True)
label_size = 20
plt.rc('xtick', labelsize=label_size)
plt.rc('ytick', labelsize=label_size)
wmap = wigner_cmap(W) # Generate Wigner colormap
nrm = mpl.colors.Normalize(0, W.max())
max_cb =np.max(W)
min_cb =np.min(W)
fig2 = plt.figure(2)
plotw = plt.contourf(xvec, xvec, W, 100, cmap=wmap, norm=nrm)
plt.title(r"Wigner Function", fontsize=label_size);
plt.xlabel(r'$x$', fontsize = label_size)
plt.ylabel(r'$p$', fontsize = label_size)
cb = plt.colorbar()
cb.set_ticks( [min_cb, max_cb])
cb.set_ticklabels([r'$0$',r'max'])
plt.show()
plt.close()
Explanation: Wigner Function
Below we calculate the Wigner function of the photonic part of the steady state. It shows two displaced squeezed states in the reciprocal photonic space. The result is in agreement with the findings of Ref [2].
End of explanation
#set initial conditions for spins and cavity
tmax = 40
nt = 1000
t = np.linspace(0, tmax, nt)
rho0 = dicke(N, N/2, N/2)
rho0_phot = ket2dm(basis(nphot,0))
rho0_tot = tensor(rho0_phot, rho0)
result = mesolve(liouv_tot, rho0_tot, t, [], e_ops = [jz_tot, jpjm_tot, nphot_tot])
rhot_tot = result.states
jzt_tot = result.expect[0]
jpjmt_tot = result.expect[1]
adagat_tot = result.expect[2]
Explanation: Time Evolution
Here we calculate the time evolution of a state initialized in the most excited spin state with no photons in the cavity. We calculate the full density matrix evolution as well as spin and photon operator mean values.
End of explanation
jmax = (N/2)
j2max = N/2*(N/2+1)
fig1 = plt.figure(1)
plt.plot(t, jzt_tot/jmax, 'k-', label='time evolution')
plt.plot(t, t*0+jz_ss/jmax, 'g--', label='steady state')
plt.title('Total inversion', fontsize = label_size)
plt.xlabel(r'$t$', fontsize = label_size)
plt.ylabel(r'$\langle J_z\rangle (t)$', fontsize = label_size)
plt.legend(fontsize = label_size)
plt.show()
plt.close()
fig2 = plt.figure(2)
plt.plot(t, jpjmt_tot/j2max, 'k-', label='time evolution')
plt.plot(t, t*0+jpjm_ss/j2max, 'g--', label='steady state')
plt.xlabel(r'$t$', fontsize = label_size)
plt.ylabel(r'$\langle J_{+}J_{-}\rangle (t)$', fontsize = label_size)
plt.title('Light emission', fontsize = label_size)
plt.xlabel(r'$t$', fontsize = label_size)
plt.legend(fontsize = label_size)
plt.show()
plt.close()
fig3 = plt.figure(3)
plt.plot(t, adagat_tot, 'k-', label='time evolution')
plt.plot(t, t*0 + nphot_ss, 'g--', label='steady state')
plt.title('Cavity photons', fontsize = label_size)
plt.xlabel(r'$t$', fontsize = label_size)
plt.ylabel(r'$\langle a^\dagger a \rangle (t)$', fontsize = label_size)
plt.legend(fontsize = label_size)
plt.show()
plt.close()
Explanation: Plots
End of explanation
qutip.about()
Explanation: References
[1] E.G. Dalla Torre et al., Phys Rev. A 94, 061802(R) (2016)
[2] P. Kirton and J. Keeling, , Phys. Rev. Lett. 118, 123602 (2017)
[3] N. Shammah, S. Ahmed, N. Lambert, S. De Liberato, and F. Nori, to be submitted.
[4] J. R. Johansson, P. D. Nation, and F. Nori, Comp. Phys. Comm. 183, 1760 (2012). http://qutip.org
End of explanation |
4,446 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Collaborative filtering example
collab models use data in a DataFrame of user, items, and ratings.
Step1: That's all we need to create and train a model
Step2: Movielens 100k
Let's try with the full Movielens 100k data dataset, available from http
Step3: Here's some benchmarks on the same dataset for the popular Librec system for collaborative filtering. They show best results based on RMSE of 0.91, which corresponds to an MSE of 0.91**2 = 0.83.
Interpretation
Setup
Step4: Movie bias
Step5: Movie weights | Python Code:
user,item,title = 'userId','movieId','title'
path = untar_data(URLs.ML_SAMPLE)
path
ratings = pd.read_csv(path/'ratings.csv')
ratings.head()
Explanation: Collaborative filtering example
collab models use data in a DataFrame of users, items, and ratings.
End of explanation
dls = CollabDataLoaders.from_df(ratings, bs=64, seed=42)
y_range = [0,5.5]
learn = collab_learner(dls, n_factors=50, y_range=y_range)
learn.fit_one_cycle(3, 5e-3)
Explanation: That's all we need to create and train a model:
End of explanation
path=Config().data/'ml-100k'
ratings = pd.read_csv(path/'u.data', delimiter='\t', header=None,
names=[user,item,'rating','timestamp'])
ratings.head()
movies = pd.read_csv(path/'u.item', delimiter='|', encoding='latin-1', header=None,
names=[item, 'title', 'date', 'N', 'url', *[f'g{i}' for i in range(19)]])
movies.head()
len(ratings)
rating_movie = ratings.merge(movies[[item, title]])
rating_movie.head()
dls = CollabDataLoaders.from_df(rating_movie, seed=42, valid_pct=0.1, bs=64, item_name=title, path=path)
dls.show_batch()
y_range = [0,5.5]
learn = collab_learner(dls, n_factors=40, y_range=y_range)
learn.lr_find()
learn.fit_one_cycle(5, 5e-3, wd=1e-1)
learn.save('dotprod')
Explanation: Movielens 100k
Let's try with the full Movielens 100k data dataset, available from http://files.grouplens.org/datasets/movielens/ml-100k.zip
End of explanation
learn.load('dotprod');
learn.model
g = rating_movie.groupby('title')['rating'].count()
top_movies = g.sort_values(ascending=False).index.values[:1000]
top_movies[:10]
Explanation: Here are some benchmarks on the same dataset for the popular Librec system for collaborative filtering. They show best results based on an RMSE of 0.91, which corresponds to an MSE of 0.91**2 = 0.83.
Interpretation
Setup
End of explanation
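As a rough, hedged cross-check against that benchmark (not part of the original notebook), the validation RMSE of the model trained above can be computed from its predictions; this assumes learn.get_preds() returns (predictions, targets) for the validation set:

preds, targs = learn.get_preds()
val_mse = float(((preds.squeeze() - targs.squeeze()) ** 2).mean())
print(f'valid MSE: {val_mse:.3f}, valid RMSE: {val_mse ** 0.5:.3f}')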
movie_bias = learn.model.bias(top_movies, is_item=True)
movie_bias.shape
mean_ratings = rating_movie.groupby('title')['rating'].mean()
movie_ratings = [(b, i, mean_ratings.loc[i]) for i,b in zip(top_movies,movie_bias)]
item0 = lambda o:o[0]
sorted(movie_ratings, key=item0)[:15]
sorted(movie_ratings, key=lambda o: o[0], reverse=True)[:15]
Explanation: Movie bias
End of explanation
movie_w = learn.model.weight(top_movies, is_item=True)
movie_w.shape
movie_pca = movie_w.pca(3)
movie_pca.shape
fac0,fac1,fac2 = movie_pca.t()
movie_comp = [(f, i) for f,i in zip(fac0, top_movies)]
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
movie_comp = [(f, i) for f,i in zip(fac1, top_movies)]
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
idxs = np.random.choice(len(top_movies), 50, replace=False)
idxs = list(range(50))
X = fac0[idxs]
Y = fac2[idxs]
plt.figure(figsize=(15,15))
plt.scatter(X, Y)
for i, x, y in zip(top_movies[idxs], X, Y):
plt.text(x,y,i, color=np.random.rand(3)*0.7, fontsize=11)
plt.show()
Explanation: Movie weights
End of explanation |
4,447 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
통계적 사고 (2판) 연습문제 (thinkstats2.com, think-stat.xwmooc.org)<br>
Allen Downey / 이광춘(xwMOOC)
Step3: 연습문제 12.1
이번 장에서 저자가 사용한 선형모형은 선형이라는 명백한 결점이 있고, 가격이 시간에 따라 선형으로 변할 것이라고 예측할 이유는 없다. 11.3 절에서 했던 것처럼, 2차항을 추가해서 모형에 유연성을 더할 수 있다.
2차 모형을 사용해서 시계열 일별가격을 적합할 수 있고, 모형을 사용해서 예측값도 생성할 수 있다. 2차 모형을 돌리는 RunLinearModel 버젼을 작성해야할 것이다. 하지만, 예측을 생성하는데 timeseries.py에 나온 코드를 재사용할 수도 있다.
Step8: 연습문제 12.2
9.2 절에 나온 HypothesisTest을 확장하는 클래스를 정의하는데 명칭은 SerialCorrelationTest이다. 데이터로 시계열과 시차(lag)를 받아서, 주어진 시차를 갖는 시계열 데이터의 계열상관을 계산하고 나서, 관측된 상관에 대한 p-값을 계산한다.
이 클래스를 사용해서 원가격 데이터에 나온 계열 상관이 통계적으로 유의적인지 검정한다. 또한, 선형모형과 (만약 이전 예제를 수행했다면) 2차 모형의 잔차를 검정한다.
Step10: 연습문제 12.3
예측을 만들어 내는데, EWMA 모형을 확장하는 몇가지 방식이 있다. 가장 단순한 방법중의 하나는 다음과 같다 | Python Code:
from __future__ import print_function
import pandas
import numpy as np
import statsmodels.formula.api as smf
import thinkplot
import thinkstats2
import regression
import timeseries
%matplotlib inline
Explanation: Think Stats (2nd edition) exercises (thinkstats2.com, think-stat.xwmooc.org)
Allen Downey / 이광춘 (xwMOOC)
End of explanation
def RunQuadraticModel(daily):
Runs a linear model of prices versus years.
daily: DataFrame of daily prices
returns: model, results
daily['years2'] = daily.years**2
model = smf.ols('ppg ~ years + years2', data=daily)
results = model.fit()
return model, results
def PlotQuadraticModel(daily, name):
model, results = RunQuadraticModel(daily)
regression.SummarizeResults(results)
timeseries.PlotFittedValues(model, results, label=name)
thinkplot.Save(root='timeseries11',
title='fitted values',
xlabel='years',
xlim=[-0.1, 3.8],
ylabel='price per gram ($)')
timeseries.PlotResidualPercentiles(model, results)
thinkplot.Show(title='residuals',
xlabel='years',
ylabel='price per gram ($)')
years = np.linspace(0, 5, 101)
thinkplot.Scatter(daily.years, daily.ppg, alpha=0.1, label=name)
timeseries.PlotPredictions(daily, years, func=RunQuadraticModel)
thinkplot.Show(title='predictions',
xlabel='years',
xlim=[years[0]-0.1, years[-1]+0.1],
ylabel='price per gram ($)')
transactions = timeseries.ReadData()
dailies = timeseries.GroupByQualityAndDay(transactions)
name = 'high'
daily = dailies[name]
PlotQuadraticModel(daily, name)
Explanation: Exercise 12.1
The linear model the author used in this chapter has the obvious drawback that it is linear, and there is no reason to expect prices to change linearly over time. As we did in Section 11.3, we can add flexibility to the model by adding a quadratic term.
Use a quadratic model to fit the time series of daily prices, and use the model to generate predictions. You will have to write a version of RunLinearModel that runs the quadratic model, but you can reuse the code in timeseries.py to generate the predictions.
End of explanation
class SerialCorrelationTest(thinkstats2.HypothesisTest):
Tests serial correlations by permutation.
def TestStatistic(self, data):
Computes the test statistic.
data: tuple of xs and ys
series, lag = data
test_stat = abs(thinkstats2.SerialCorr(series, lag))
return test_stat
def RunModel(self):
Run the model of the null hypothesis.
returns: simulated data
series, lag = self.data
permutation = series.reindex(np.random.permutation(series.index))
return permutation, lag
def TestSerialCorr(daily):
Tests serial correlations in daily prices and their residuals.
daily: DataFrame of daily prices
# test the correlation between consecutive prices
series = daily.ppg
test = SerialCorrelationTest((series, 1))
pvalue = test.PValue()
print(test.actual, pvalue)
# test for serial correlation in residuals of the linear model
_, results = timeseries.RunLinearModel(daily)
series = results.resid
test = SerialCorrelationTest((series, 1))
pvalue = test.PValue()
print(test.actual, pvalue)
# test for serial correlation in residuals of the quadratic model
_, results = RunQuadraticModel(daily)
series = results.resid
test = SerialCorrelationTest((series, 1))
pvalue = test.PValue()
print(test.actual, pvalue)
TestSerialCorr(daily)
Explanation: Exercise 12.2
Define a class named SerialCorrelationTest that extends HypothesisTest from Section 9.2. It should take a series and a lag as data, compute the serial correlation of the series with the given lag, and then compute the p-value of the observed correlation.
Use this class to test whether the serial correlation in the raw price data is statistically significant. Also test the residuals of the linear model and (if you did the previous exercise) the quadratic model.
End of explanation
def PlotEwmaPredictions(daily, name):
# use EWMA to estimate slopes
filled = timeseries.FillMissing(daily)
filled['slope'] = pandas.ewma(filled.ppg.diff(), span=180)
filled[-1:]
# extract the last inter and slope
start = filled.index[-1]
inter = filled.ewma[-1]
slope = filled.slope[-1]
# reindex the DataFrame, adding a year to the end
dates = pandas.date_range(filled.index.min(),
filled.index.max() + np.timedelta64(365, 'D'))
predicted = filled.reindex(dates)
# generate predicted values and add them to the end
predicted['date'] = predicted.index
one_day = np.timedelta64(1, 'D')
predicted['days'] = (predicted.date - start) / one_day
predict = inter + slope * predicted.days
predicted.ewma.fillna(predict, inplace=True)
# plot the actual values and predictions
thinkplot.Scatter(daily.ppg, alpha=0.1, label=name)
thinkplot.Plot(predicted.ewma)
thinkplot.Show(legend=False)
PlotEwmaPredictions(daily, name)
Explanation: Exercise 12.3
There are several ways to extend the EWMA model to generate predictions. One of the simplest is something like this:
Compute the EWMA of the time series and use the last point as the intercept, inter.
Compute the EWMA of differences between successive elements in the time series and use the last point as the slope, slope.
To predict values at future times, compute inter + slope * dt, where dt is the difference between the time of the prediction and the time of the last observation.
Use this method to generate predictions for a year after the last observation.
A few hints:
Use timeseries.FillMissing to fill in missing values before running this analysis, so that the time between consecutive elements is consistent.
Use Series.diff to compute differences between successive elements.
Use reindex to extend the DataFrame index into the future.
Use fillna to put the predicted values into the DataFrame.
End of explanation |
4,448 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Elemento Truss
El elemento Truss plano es un elemento finito con coordenadas locales y globales, tiene un modulo de elasticidad $E$, una sección transversal $A$ y una longitud $L$. Cada elemento tiene dos nodos y un ángulo de inclinación $\theta$ medido en sentido antihorario desde el eje $X$ global, como se muestra en la figura. Sean $C=\cos(\theta)$ y $S=\sin(\theta)$, entonces la matriz de rigidez por elemento está dada por
Step1: Ejemplo 2.
<img src="src/truss-element/example_02.png" width="300px">
Step2: Ejemplo 3
<figure>
<img src="src/truss-element/example_03.png" width="350px">
<center><figcaption>Fuente | Python Code:
%matplotlib inline
from nusa import * # Importando nusa
E,A = 210e9, 3.1416*(10e-3)**2
n1 = Node((0,0))
n2 = Node((2,0))
n3 = Node((0,2))
e1 = Truss((n1,n2),E,A)
e2 = Truss((n1,n3),E,A)
e3 = Truss((n2,n3),E,A)
m = TrussModel()
for n in (n1,n2,n3): m.add_node(n)
for e in (e1,e2,e3): m.add_element(e)
m.add_constraint(n1, ux=0, uy=0)
m.add_constraint(n2, uy=0)
m.add_force(n3, (500,0))
m.plot_model()
m.solve()
m.plot_deformed_shape()
m.plot_deformed_shape()
m.simple_report()
Explanation: Truss element
The plane truss element is a finite element with local and global coordinates. It has a modulus of elasticity $E$, a cross-sectional area $A$ and a length $L$. Each element has two nodes and an inclination angle $\theta$ measured counterclockwise from the global $X$ axis, as shown in the figure. Let $C=\cos(\theta)$ and $S=\sin(\theta)$; then the element stiffness matrix is given by:
$$
k = \frac{EA}{L}
\begin{bmatrix}
C^2 & CS & -C^2 & -CS \\
CS & S^2 & -CS & -S^2 \\
-C^2 & -CS & C^2 & CS \\
-CS & -S^2 & CS & S^2
\end{bmatrix}
$$
<img src="src/truss-element/truss_element.PNG" width="200px">
The truss element has two degrees of freedom at each node: the displacements in X and Y.
The force in each element is computed as follows:
$$
f = \frac{EA}{L} \begin{bmatrix} -C & -S & C & S \end{bmatrix} \left\{ u \right\}
$$
where $f$ is the (scalar) force in the element and $\left\{u\right\}$ is the vector of element displacements. A negative force indicates that the element is in compression.
The stress in the element is obtained by dividing the force $f$ by the cross-sectional area, that is:
$$
\sigma = \frac{f}{A}
$$
Example 1. A simple three-element structure
As a first example we will solve a simple three-element structure, with a pinned support at A, a roller support at C and a horizontal force of 500 N at B, as shown in the figure.
<figure>
<img src="src/truss-element/example_01.png" width="250px">
<center><figcaption>Fuente: [1]</figcaption></center>
</figure>
End of explanation
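To make the formulas above concrete, here is a small, self-contained NumPy sketch (an addition, not part of the nusa example) that builds the element stiffness matrix and evaluates the element force and stress for an assumed E, A, L, angle and nodal displacement vector:

import numpy as np

def truss_element(E, A, L, theta_deg, u):
    t = np.radians(theta_deg)
    C, S = np.cos(t), np.sin(t)
    k = (E * A / L) * np.array([[ C*C,  C*S, -C*C, -C*S],
                                [ C*S,  S*S, -C*S, -S*S],
                                [-C*C, -C*S,  C*C,  C*S],
                                [-C*S, -S*S,  C*S,  S*S]])
    f = (E * A / L) * np.array([-C, -S, C, S]) @ u   # element axial force
    sigma = f / A                                    # element stress
    return k, f, sigma

# hypothetical numbers, just to exercise the formulas
k, f, sigma = truss_element(E=210e9, A=3.1416e-4, L=2.0, theta_deg=45.0,
                            u=np.array([0.0, 0.0, 1e-4, 2e-4]))
print(f, sigma)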
E,A = 200e9, 0.01
n1 = Node((0,0))
n2 = Node((6,0))
n3 = Node((6,4))
n4 = Node((3,4))
e1 = Truss((n1,n2),E,A)
e2 = Truss((n2,n3),E,A)
e3 = Truss((n4,n3),E,A)
e4 = Truss((n1,n4),E,A)
e5 = Truss((n2,n4),E,A)
m = TrussModel()
for n in (n1,n2,n3,n4): m.add_node(n)
for e in (e1,e2,e3,e4,e5): m.add_element(e)
m.add_constraint(n1, uy=0)
m.add_constraint(n3, ux=0, uy=0)
m.add_force(n2, (600,0))
m.add_force(n4, (0,-400))
m.plot_model()
m.solve()
m.plot_deformed_shape()
m.simple_report()
Explanation: Example 2.
<img src="src/truss-element/example_02.png" width="300px">
End of explanation
E,A = 29e6, 0.1
n1 = Node((0,0)) # A
n2 = Node((8*12,6*12)) # B
n3 = Node((8*12,0)) # C
n4 = Node((16*12,8*12+4)) # D
n5 = Node((16*12,0)) # E
n6 = Node((24*12,6*12)) # F
n7 = Node((24*12,0)) # G
n8 = Node((32*12,0)) # H
e1 = Truss((n1,n2),E,A)
e2 = Truss((n1,n3),E,A)
e3 = Truss((n2,n3),E,A)
e4 = Truss((n2,n4),E,A)
e5 = Truss((n2,n5),E,A)
e6 = Truss((n3,n5),E,A)
e7 = Truss((n5,n4),E,A)
e8 = Truss((n4,n6),E,A)
e9 = Truss((n5,n6),E,A)
e10 = Truss((n5,n7),E,A)
e11 = Truss((n6,n7),E,A)
e12 = Truss((n6,n8),E,A)
e13 = Truss((n7,n8),E,A)
m = TrussModel("Gambrel Roof")
for n in (n1,n2,n3,n4,n5,n6,n7,n8): m.add_node(n)
for e in (e1,e2,e3,e4,e5,e6,e7,e8,e9,e10,e11,e12,e13): m.add_element(e)
m.add_constraint(n1, uy=0)
m.add_constraint(n8, ux=0, uy=0)
m.add_force(n2, (0,-600))
m.add_force(n4, (0,-600))
m.add_force(n6, (0,-600))
m.add_force(n8, (0,-300))
m.add_force(n1, (0,-300))
m.plot_model()
m.solve()
m.plot_deformed_shape()
m.simple_report()
Explanation: Example 3
<figure>
<img src="src/truss-element/example_03.png" width="350px">
<center><figcaption>Fuente: [2]</figcaption></center>
</figure>
End of explanation |
4,449 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Análise Nodal com Fontes de Tensão
Jupyter Notebook desenvolvido por Gustavo S.S.
Um supernó é formado envolvendo-se uma fonte de tensão (dependente
ou independente) conectada entre dois nós que não são de referência e
quaisquer elementos conectados em paralelo com ele.
Um supernó pode ser
considerado uma superfície que
engloba a fonte de tensão e seus
dois nós.
Observe as seguintes propriedades de um supernó
Step1: Problema Prático 3.3
Calcule v e i no circuito da Figura 3.11.
Step2: Exemplo 3.4
Determine as tensões nodais no circuito da Figura 3.12.
Step3: Problema Prático 3.4
Determine v1, v2 e v3 no circuito da Figura 3.14 usando análise nodal. | Python Code:
print("Exemplo 3.3")
import numpy as np
from sympy import *
Vsource = 2
Csource1 = 2
Csource2 = 7
R1 = 2
R2 = 4
R3 = 10
#i1 = v1/R1 = v1/2
#i2 = v2/R2 = v2/4
#i1 + i2 + 7 = 2 => i1 + i2 = -5
#v2 - v1 = 2
#v1/2 + v2/4 = -5 => (v2 - 2)/2 + v2/4 = - 5
#3v2/4 = -4
v2 = -16/3
v1 = v2 - 2
print("V1:", v1, "V")
print("V2:", v2, "V")
Explanation: Nodal Analysis with Voltage Sources
Jupyter Notebook developed by Gustavo S.S.
A supernode is formed by enclosing a voltage source (dependent or independent) connected between two non-reference nodes, together with any elements connected in parallel with it.
A supernode can be regarded as a surface that encloses the voltage source and its two nodes.
Note the following properties of a supernode:
The voltage source inside the supernode provides the constraint equation needed to find the node voltages.
A supernode has no voltage of its own.
A supernode requires the application of both KCL and KVL.
Example 3.3
For the circuit shown in Figure 3.9, determine the node voltages.
End of explanation
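A small, hedged alternative to the by-hand algebra above is to let sympy solve the supernode equations of Example 3.3 directly (KCL at the supernode plus the voltage constraint); this only relies on the standard symbols/Eq/solve calls already imported:

v1_s, v2_s = symbols('v1 v2')
sol = solve([Eq(v1_s/2 + v2_s/4, -5),   # KCL at the supernode: i1 + i2 = -5
             Eq(v2_s - v1_s, 2)],       # constraint from the 2 V source
            [v1_s, v2_s])
print(sol)  # {v1: -22/3, v2: -16/3}, matching the values computed above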
print("Problema Prático 3.3")
Vsource1 = 14
Vsource2 = 6
#v2 - v = 6
#i1 = i2 + i + i3
#i1 = (14 - v)/4
#i2 = v/3
#i = v2/2
#i3 = v2/6
#7/2 - v/4 = v/3 + 3 + v/2 + 1 + v/6 => 13v/12
v = (-1/2)*12/13
v2 = v + 6
i = v2/2
print("Valor de v:",v,"V")
print("Valor de i:",i,"A")
Explanation: Practice Problem 3.3
Calculate v and i in the circuit of Figure 3.11.
End of explanation
print("Exemplo 3.4")
import numpy as np
R1 = 2
R2 = 6
R3 = 4
R4 = 1
Rx = 3
#i1 = v1/R1 = v1/2
#i2 = (v2 - v3)/R2 = (v2 - v3)/6
#i3 = v3/R3 = v3/4
#i4 = v4/R4 = v4
#ix = vx/Rx = vx/3
#i1 + i2 + ix = 10
#i2 + ix = i3 + i4
#(v1 - v2) = 20
#(v3 - v4) = 3vx
#(v1 - v4) = vx
#(v2 - v3) = vx - 3vx - 20 = -2vx - 20
#v1/2 + (-2vx - 20)/6 + vx/3 = 10 => v1/2 = 40/3
v1 = 80/3
v2 = v1 - 20
#v3 - v4 -3vx = 0
#-v4 - vx = -80/3
#-3v4 -3vx = -80
#-v3 + 2vx = - 80/3
#-3v3 + 6vx = -80
#i2 + ix = i3 + i4
#=> (v2 - v3)/6 + vx/3 = v3/4 + v4
#=> -5v3/12 -v4 + vx/3 = -10/9
#=> -15v3 -36v4 + 12vx = -40
coef = np.matrix('1 -1 -3;0 -3 -3;-15 -36 12')
res = np.matrix('0;-80;-40')
V = np.linalg.inv(coef)*res
#10/9 - (20/3 + 2vx + 20)/6 + vx/3 = (20/3 + 2vx + 20)/4 + 80/3 - vx
#7vx/6 = -10/3 + 5/3 + 5 + 80/3
#7vx/6 = 30
vx = 180/7
v4 = v1 - vx
v3 = v2 + 2*vx + 20
print("V1:", v1, "V")
print("V2:", v2, "V")
print("V3:", float(V[0]), "V")
print("V4:", float(V[1]), "V")
print("Vx:", float(V[2]), "V")
Explanation: Example 3.4
Determine the node voltages in the circuit of Figure 3.12.
End of explanation
print("Problema Prático 3.4")
#i = v1/2
#i2 = v2/4
#i3 = v3/3
#i4 = (v1 - v3)/6
#(v1 - v3) = 25 - 5i
#(v1 - v3) = 25 - 5v1/2
#7v1/2 - v3 = 25
#7v1 - 2v3 = 50
#(v1 - v2) = 25
#(v3 - v2) = 5i = 5v1/2
#-5v1/2 -v2 + v3 = 0
#-5v1 -2v2 + 2v3 = 0
#organizando
#7v1 - 2v3 = 50
#v1 - v2 = 25
#-5v1 -2v2 + 2v3 = 0
#i + i2 + i4 = 0
#=> v1/2 + v2/4 + (v1 - v3)/6 = 0
#=>2v1/3 + v2/4 - v3/6 = 0
#=> 8v1 + 3v2 - 2v3 = 0
#i2 + i3 = i4
#=> v2/4 + v3/3 = (v1 - v3)/6
#=>-v1/6 + v2/4 + v3/3 = 0
#=> -2v1 + 3v2 + 4v3 = 0
#i + i2 + i3 = 0
#=>v1/2 + v2/4 + v3/3 = 0
#=>6v1 + 3v2 + 4v3 = 0
coef = np.matrix('1 -1 0;6 3 4;-5 -2 2')
res = np.matrix('25; 0; 0')
V = np.linalg.inv(coef)*res
print("Valor de v1:",float(V[0]),"V")
print("Valor de v2:",float(V[1]),"V")
print("Valor de v3:",float(V[2]),"V")
Explanation: Practice Problem 3.4
Determine v1, v2, and v3 in the circuit of Figure 3.14 using nodal analysis.
End of explanation |
4,450 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Simple Image Classification Problem using Keras (dog_vs_cat)
Step1: Data Preprocessing
Step2: Define an architecture - > Feed Forward Network of dimension "3072-768-384-2" | Python Code:
# import the necessary packages
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Activation
from keras.optimizers import SGD
from keras.layers import Dense
from keras.utils import np_utils
from imutils import paths
import numpy as np
import argparse
import os
import cv2
import pandas as pd
import numpy as np
def image_to_feature_vector(image, size=(32, 32)):
# resize the image to a fixed size, then flatten the image into
# a list of raw pixel intensities
return cv2.resize(image, size).flatten()
import glob
print("[INFO] describing images...")
train_image_path = "data/train/"
image_paths = glob.glob(os.path.join(train_image_path, '*.jpg'))
# initialize the data matrix and labels list
data = []
labels = []
# loop over the input images
for (i, imagePath) in enumerate(image_paths):
# load the image and extract the class label (assuming that our
# path as the format: /path/to/dataset/{class}.{image_num}.jpg
image = cv2.imread(imagePath)
label = imagePath.split(os.path.sep)[-1].split(".")[0]
# construct a feature vector raw pixel intensities, then update
# the data matrix and labels list
features = image_to_feature_vector(image)
data.append(features)
labels.append(label)
# show an update every 1,000 images
if i > 0 and i % 1000 == 0:
print("[INFO] processed {}/{}".format(i, len(image_paths)))
Explanation: A Simple Image Classification Problem using Keras (dog_vs_cat)
End of explanation
# encode the labels, converting them from strings to integers
le = LabelEncoder()
encoded_labels = le.fit_transform(labels)
pd.DataFrame(encoded_labels).head(5)
print(pd.DataFrame(labels).describe())
normalized_data = np.array(data) / 255.0
categorical_labels = np_utils.to_categorical(encoded_labels, 2)
# partition the data into training and testing splits, using 75%
# of the data for training and the remaining 25% for testing
print("[INFO] constructing training/testing split...")
labels = categorical_labels.tolist()
(trainData, testData, trainLabels, testLabels) = train_test_split(data, categorical_labels, test_size=0.25, random_state=42)
Explanation: Data Preprocessing
End of explanation
model = Sequential()
model.add(Dense(768, input_dim=3072, kernel_initializer="uniform", activation="relu"))
model.add(Dense(384, kernel_initializer="uniform", activation="relu"))
model.add(Dense(2))
model.add(Activation("softmax"))
# train the model using SGD
print("[INFO] compiling model...")
sgd = SGD(lr=0.001)
model.compile(loss="binary_crossentropy", optimizer=sgd, metrics=["accuracy"])
model.fit(np.array(trainData), np.array(trainLabels), epochs=50, batch_size=128)
# show the accuracy on the testing set
print("[INFO] evaluating on testing set...")
(loss, accuracy) = model.evaluate(np.array(testData), np.array(testLabels), batch_size=150, verbose=1)
print("[INFO] loss={:.4f}, accuracy: {:.4f}%".format(loss, accuracy * 100))
Explanation: Define an architecture -> Feed Forward Network of dimension "3072-768-384-2"
End of explanation |
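# Added sketch (not part of the original notebook): classifying one new image
# with the trained network. The file path below is a made-up placeholder.
sample = cv2.imread("data/test/1.jpg")
sample_features = image_to_feature_vector(sample).astype("float") / 255.0
probabilities = model.predict(np.array([sample_features]), batch_size=1)[0]
print("[INFO] predicted: {} (p={:.2f})".format(
    le.classes_[np.argmax(probabilities)], probabilities.max()))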
4,451 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mahalanobis Distance
This notebook shows how I think we should do Mahalanobis distance for the SECIM project. From JMP Website
Step1: Generation of a simulated data set
As an example I will simulate a data set
Step2: Calculate Mahalanobis distance
There is a Mahalanobis function in scipy, but from what I can tell it only works on 1d arrays and not matrices. I think it will be easiest to do the calculation by hand. | Python Code:
import pandas as pd
import numpy as np
import scipy as sp
import scipy.stats as stats
import matplotlib.pyplot as plt
import cPickle as pickle
import os
%matplotlib inline
Explanation: Mahalanobis Distance
This notebook shows how I think we should do Mahalanobis distance for the SECIM project. From JMP Website:
The Mahalanobis distance takes into account the correlation structure of the data and the individual scales. For each value, the Mahalanobis distance is denoted $M_i$ and is computed as
$M_i = \sqrt{(Y_i - \bar Y)^T S^{-1}(Y_i - \bar Y)}$
where:
* $Y_i$ is the data for the ith row
* $\bar Y$ is the row of column means
* S is the estimated covariance matrix for the data
End of explanation
covSim = np.array([[1.0, .8, .2, .2],
[.8, 1.0, .3, .3],
[.3, .3, 1.0, .8],
[.2, .2, .8, 1.0]])
np.random.seed(111)
datSim = np.random.multivariate_normal([2, 3, 8, 9], covSim, size=1000)
dfSim = pd.DataFrame(data=datSim, columns=['sample1', 'sample2', 'sample3', 'sample4'])
# Save for comparing in sas
dfSim.to_csv('/home/jfear/tmp/dfSim.csv', index=False)
dfSim.head()
Explanation: Generation of a simulated data set
As an example I will simulate a data set
End of explanation
# Calculate the covaranice matrix from the data
covHat = dfSim.cov()
covHat
# Get the inverse of the covarance matrix
covHatInv = np.linalg.inv(covHat)
covHatInv
# Calculate the column means
colMean = dfSim.mean(axis=0)
colMean
# Subtract the mean from each value
dfSimCenter = (dfSim - colMean).T
dfSimCenter.head()
# Calculate the mahalanobis distance
MD = np.dot(np.dot(dfSimCenter.T, covHatInv), dfSimCenter)
MDval = np.sqrt(np.diag(MD))  # take the diagonal before the square root to avoid NaNs from negative cross terms
plt.scatter(x=range(len(MDval)), y=MDval)
plt.axhline(np.percentile(MDval, 95), ls='--', lw=2)
plt.show()
Explanation: Calculate Mahalanobis distance
There is a Mahalanobis function in scipy, but from what I can tell it only works on 1d arrays and not matrices. I think it will be easiest to do the calculation by hand.
End of explanation |
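# Added sanity check (not in the original notebook): scipy's per-vector
# mahalanobis() should agree with the hand calculation for any single row.
from scipy.spatial import distance
print("scipy:", distance.mahalanobis(dfSim.iloc[0], colMean, covHatInv),
      "by hand:", MDval[0])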
4,452 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute MNE inverse solution on evoked data in a mixed source space
Create a mixed source space and compute MNE inverse solution on evoked dataset.
Step1: Set up our source space.
Step2: We could write the mixed source space with
Step3: Average the source estimates within each label of the cortical parcellation
and each sub structure contained in the src space | Python Code:
# Author: Annalisa Pascarella <[email protected]>
#
# License: BSD (3-clause)
import os.path as op
import matplotlib.pyplot as plt
from nilearn import plotting
import mne
from mne.minimum_norm import make_inverse_operator, apply_inverse
# Set dir
data_path = mne.datasets.sample.data_path()
subject = 'sample'
data_dir = op.join(data_path, 'MEG', subject)
subjects_dir = op.join(data_path, 'subjects')
bem_dir = op.join(subjects_dir, subject, 'bem')
# Set file names
fname_mixed_src = op.join(bem_dir, '%s-oct-6-mixed-src.fif' % subject)
fname_aseg = op.join(subjects_dir, subject, 'mri', 'aseg.mgz')
fname_model = op.join(bem_dir, '%s-5120-bem.fif' % subject)
fname_bem = op.join(bem_dir, '%s-5120-bem-sol.fif' % subject)
fname_evoked = data_dir + '/sample_audvis-ave.fif'
fname_trans = data_dir + '/sample_audvis_raw-trans.fif'
fname_fwd = data_dir + '/sample_audvis-meg-oct-6-mixed-fwd.fif'
fname_cov = data_dir + '/sample_audvis-shrunk-cov.fif'
Explanation: Compute MNE inverse solution on evoked data in a mixed source space
Create a mixed source space and compute MNE inverse solution on evoked dataset.
End of explanation
# List substructures we are interested in. We select only the
# sub structures we want to include in the source space
labels_vol = ['Left-Amygdala',
'Left-Thalamus-Proper',
'Left-Cerebellum-Cortex',
'Brain-Stem',
'Right-Amygdala',
'Right-Thalamus-Proper',
'Right-Cerebellum-Cortex']
# Get a surface-based source space, here with few source points for speed
# in this demonstration, in general you should use oct6 spacing!
src = mne.setup_source_space(subject, spacing='oct5',
add_dist=False, subjects_dir=subjects_dir)
# Now we create a mixed src space by adding the volume regions specified in the
# list labels_vol. First, read the aseg file and the source space bounds
# using the inner skull surface (here using 10mm spacing to save time,
# we recommend something smaller like 5.0 in actual analyses):
vol_src = mne.setup_volume_source_space(
subject, mri=fname_aseg, pos=10.0, bem=fname_model,
volume_label=labels_vol, subjects_dir=subjects_dir,
add_interpolator=False, # just for speed, usually this should be True
verbose=True)
# Generate the mixed source space
src += vol_src
# Visualize the source space.
src.plot(subjects_dir=subjects_dir)
n = sum(src[i]['nuse'] for i in range(len(src)))
print('the src space contains %d spaces and %d points' % (len(src), n))
Explanation: Set up our source space.
End of explanation
nii_fname = op.join(bem_dir, '%s-mixed-src.nii' % subject)
src.export_volume(nii_fname, mri_resolution=True)
plotting.plot_img(nii_fname, cmap='nipy_spectral')
# Compute the fwd matrix
fwd = mne.make_forward_solution(
fname_evoked, fname_trans, src, fname_bem,
mindist=5.0, # ignore sources<=5mm from innerskull
meg=True, eeg=False, n_jobs=1)
leadfield = fwd['sol']['data']
print("Leadfield size : %d sensors x %d dipoles" % leadfield.shape)
src_fwd = fwd['src']
n = sum(src_fwd[i]['nuse'] for i in range(len(src_fwd)))
print('the fwd src space contains %d spaces and %d points' % (len(src_fwd), n))
# Load data
condition = 'Left Auditory'
evoked = mne.read_evokeds(fname_evoked, condition=condition,
baseline=(None, 0))
noise_cov = mne.read_cov(fname_cov)
# Compute inverse solution and for each epoch
snr = 3.0 # use smaller SNR for raw data
inv_method = 'dSPM' # sLORETA, MNE, dSPM
parc = 'aparc' # the parcellation to use, e.g., 'aparc' 'aparc.a2009s'
lambda2 = 1.0 / snr ** 2
# Compute inverse operator
inverse_operator = make_inverse_operator(evoked.info, fwd, noise_cov,
depth=None, fixed=False)
stc = apply_inverse(evoked, inverse_operator, lambda2, inv_method,
pick_ori=None)
# Get labels for FreeSurfer 'aparc' cortical parcellation with 34 labels/hemi
labels_parc = mne.read_labels_from_annot(
subject, parc=parc, subjects_dir=subjects_dir)
Explanation: We could write the mixed source space with::
write_source_spaces(fname_mixed_src, src, overwrite=True)
We can also export source positions to a NIfTI file and visualize it again:
End of explanation
src = inverse_operator['src']
label_ts = mne.extract_label_time_course(
[stc], labels_parc, src, mode='mean', allow_empty=True)
# plot the times series of 2 labels
fig, axes = plt.subplots(1)
axes.plot(1e3 * stc.times, label_ts[0][0, :], 'k', label='bankssts-lh')
axes.plot(1e3 * stc.times, label_ts[0][71, :].T, 'r', label='Brain-stem')
axes.set(xlabel='Time (ms)', ylabel='MNE current (nAm)')
axes.legend()
mne.viz.tight_layout()
Explanation: Average the source estimates within each label of the cortical parcellation
and each sub structure contained in the src space
End of explanation |
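# Added sketch (not part of the original MNE example): persist the extracted
# label time courses and the time axis for later analysis. The output file
# name is an arbitrary choice for illustration.
import numpy as np
np.savez('mixed_src_label_time_courses.npz',
         label_ts=label_ts[0], times=stc.times)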
4,453 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: How to use the GFSA layer for new tasks
This notebook describes the high-level process for using the GFSA layer in a new task, specifically focusing on how to represent a new type of graph as an MDP so that you can use the GFSA layer.
Setup and imports
Step2: Defining your graph domain
The first step in using the GFSA layer for a new task is to specify how to interpret the graphs in your domain as MDPs. Specifically, you must define a set of node types, and then for each node type, define the set of possible actions ("out edges") the agent can take at that node, and the set of observations ("in edges") the agent can receive when it arrives at the node. In the codebase, this is referred to as a "graph schema".
For the dataset of simple Python functions, the graph schema is derived from a simpler "AST specification"
Step3: It's possible to infer the AST specification from a dataset of ASTs using ast_spec_inference.py. In the paper, we use two different AST specifications, one for the synthetic Python examples (shown above), and one for the Python examples written by humans (since these use many additional types of AST nodes).
For the maze dataset, the node types are determined by the shape of the grid cell, and the graph schema determines which actions are valid
Step4: As a toy example of how you might encode a new graph domain, suppose we have a network of houses connected by directed roads. Each house is adjacent to exactly one road, and each road has at least one entry and exit point but may have more.
We can encode this structure using the following schema
Step6: Building MDP graphs
Before running the GFSA layer on a specific input graph, you need to specify the result of taking each of the actions defined in the schema.
For AST graphs, these transitions can be automatically computed based on the AST and its specification
Step7: In the maze dataset, we precompute the destination of taking each possible action at each possible node
Step8: For the toy houses-and-roads example above, we might have a graph that looks something like this
Step9: Note that every action must have at least one destination and associated observation! This is the case even for roads with no house, for which the "to_house" action results in staying in place and getting a special sentinel observation.
Encoding MDP graphs into GFSA-compatible NDArrays
Once you have an MDP graph that conforms to a schema, you can use an automaton builder object to encode that graph into a set of NDArrays | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
!git clone https://github.com/google-research/google-research.git --depth=1
import os
os.chdir("google-research")
!pip install flax
import gast
import numpy as np
from gfsa import automaton_builder
from gfsa import generic_ast_graphs
from gfsa import graph_types
from gfsa import py_ast_graphs
from gfsa import schema_util
from gfsa.datasets.mazes import maze_schema
from gfsa.datasets.mazes import maze_task
from gfsa.visualization.pprint import pprint
from gfsa.visualization.pytrees import summarize_tree
Explanation: How to use the GFSA layer for new tasks
This notebook describes the high-level process for using the GFSA layer in a new task, specifically focusing on how to represent a new type of graph as an MDP so that you can use the GFSA layer.
Setup and imports
End of explanation
# AST specification:
py_ast_graphs.PY_AST_SPECS
# Derived schema (using `generic_ast_graphs.build_ast_graph_schema`)
py_ast_graphs.SCHEMA
Explanation: Defining your graph domain
The first step in using the GFSA layer for a new task is to specify how to interpret the graphs in your domain as MDPs. Specifically, you must define a set of node types, and then for each node type, define the set of possible actions ("out edges") the agent can take at that node, and the set of observations ("in edges") the agent can receive when it arrives at the node. In the codebase, this is referred to as a "graph schema".
For the dataset of simple Python functions, the graph schema is derived from a simpler "AST specification":
End of explanation
maze_task.SCHEMA
Explanation: It's possible to infer the AST specification from a dataset of ASTs using ast_spec_inference.py. In the paper, we use two different AST specifications, one for the synthetic Python examples (shown above), and one for the Python examples written by humans (since these use many additional types of AST nodes).
For the maze dataset, the node types are determined by the shape of the grid cell, and the graph schema determines which actions are valid:
End of explanation
road_network_schema = {
# Houses are only connected to roads, so the only movement action
# available is to go to the road; likewise the only observation we receive
# after moving is the observation that we arrived from a road.
"house": graph_types.NodeSchema(
in_edges=["from_road"],
out_edges=["to_road"]),
# Roads are more complex. We can always move to a random previous or next
# road in the road network. We can also try to move to a house, but if there
# is no house, we will have to stay on the road. We denote this with a
# special observation (in `in_edges`).
"road": graph_types.NodeSchema(
in_edges=["from_next", "from_prev", "from_house", "no_house_here"],
out_edges=["to_next", "to_prev", "to_house"]),
}
road_network_schema
Explanation: As a toy example of how you might encode a new graph domain, suppose we have a network of houses connected by directed roads. Each house is adjacent to exactly one road, and each road has at least one entry and exit point but may have more.
We can encode this structure using the following schema:
End of explanation
the_ast = gast.parse("""
def test_function(foo):
  if foo:
    return
  pass
""")
generic_ast = py_ast_graphs.py_ast_to_generic(the_ast)
mdp_graph, id_conversion_map = generic_ast_graphs.ast_to_graph(generic_ast, ast_spec=py_ast_graphs.PY_AST_SPECS)
schema_util.assert_conforms_to_schema(mdp_graph, py_ast_graphs.SCHEMA)
mdp_graph
Explanation: Building MDP graphs
Before running the GFSA layer on a specific input graph, you need to specify the result of taking each of the actions defined in the schema.
For AST graphs, these transitions can be automatically computed based on the AST and its specification:
End of explanation
the_maze_raw = [
"███████ ████",
"████ █ █ █",
"████ ███████",
]
the_maze = np.array([[c != " " for c in r] for r in the_maze_raw])
mdp_graph, coordinates = maze_schema.encode_maze(the_maze)
schema_util.assert_conforms_to_schema(mdp_graph, maze_task.SCHEMA)
mdp_graph
Explanation: In the maze dataset, we precompute the destination of taking each possible action at each possible node:
End of explanation
GraphNode = graph_types.GraphNode
InputTaggedNode = graph_types.InputTaggedNode
mdp_graph = {
'R0': GraphNode(node_type='road',
out_edges={
'to_next': [InputTaggedNode(node_id='R1', in_edge='from_prev')],
'to_prev': [InputTaggedNode(node_id='R1', in_edge='from_next')],
'to_house': [InputTaggedNode(node_id='H0', in_edge='from_road')]
}),
'R1': GraphNode(node_type='road',
out_edges={
'to_next': [InputTaggedNode(node_id='R0', in_edge='from_prev')],
'to_prev': [InputTaggedNode(node_id='R0', in_edge='from_next'),
InputTaggedNode(node_id='R3', in_edge='from_next')],
'to_house': [InputTaggedNode(node_id='H1', in_edge='from_road')]
}),
'R2': GraphNode(node_type='road',
out_edges={
'to_next': [InputTaggedNode(node_id='R3', in_edge='from_prev'),
InputTaggedNode(node_id='R4', in_edge='from_prev')],
'to_prev': [InputTaggedNode(node_id='R4', in_edge='from_next')],
'to_house': [InputTaggedNode(node_id='R2', in_edge='no_house_here')]
}),
'R3': GraphNode(node_type='road',
out_edges={
'to_next': [InputTaggedNode(node_id='R1', in_edge='from_prev')],
'to_prev': [InputTaggedNode(node_id='R2', in_edge='from_next')],
'to_house': [InputTaggedNode(node_id='H2', in_edge='from_road')]
}),
'R4': GraphNode(node_type='road',
out_edges={
'to_next': [InputTaggedNode(node_id='R2', in_edge='from_prev')],
'to_prev': [InputTaggedNode(node_id='R2', in_edge='from_next')],
'to_house': [InputTaggedNode(node_id='R4', in_edge='no_house_here')]
}),
'H0': GraphNode(node_type='house',
out_edges={
'to_road': [InputTaggedNode(node_id='R0', in_edge='from_house')]
}),
'H1': GraphNode(node_type='house',
out_edges={
'to_road': [InputTaggedNode(node_id='R1', in_edge='from_house')]
}),
'H2': GraphNode(node_type='house',
out_edges={
'to_road': [InputTaggedNode(node_id='R3', in_edge='from_house')]
}),
}
schema_util.assert_conforms_to_schema(mdp_graph, road_network_schema)
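# Added sketch (not in the original colab): a plain-Python sanity check over
# the hand-written graph, complementing the schema assertion above. It only
# relies on the GraphNode / InputTaggedNode fields used in this example.
for node_id, node in mdp_graph.items():
    expected_actions = road_network_schema[node.node_type].out_edges
    assert set(node.out_edges) == set(expected_actions), node_id
    for action, destinations in node.out_edges.items():
        for dest in destinations:
            assert dest.node_id in mdp_graph, (node_id, action, dest.node_id)
print("houses-and-roads graph looks consistent:", len(mdp_graph), "nodes")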
Explanation: For the toy houses-and-roads example above, we might have a graph that looks something like this:
End of explanation
road_builder = automaton_builder.AutomatonBuilder(road_network_schema)
pprint(summarize_tree(road_builder.encode_graph(mdp_graph, as_jax=False)))
Explanation: Note that every action must have at least one destination and associated observation! This is the case even for roads with no house, for which the "to_house" action results in staying in place and getting a special sentinel observation.
Encoding MDP graphs into GFSA-compatible NDArrays
Once you have an MDP graph that conforms to a schema, you can use an automaton builder object to encode that graph into a set of NDArrays:
End of explanation |
4,454 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Model understanding and interpretability
In this colab, we will
- Learn how to interpret model results and reason about the features
- Visualize the model results
Please complete the exercises and answer the questions tagged ???.
Step1: Below we demonstrate both local and global model interpretability for gradient boosted trees.
Local interpretability refers to an understanding of a model’s predictions at the individual example level, while global interpretability refers to an understanding of the model as a whole.
For local interpretability, we show how to create and visualize per-instance contributions using the technique outlined in Palczewska et al and by Saabas in Interpreting Random Forests (this method is also available in scikit-learn for Random Forests in the treeinterpreter package). To distinguish this from feature importances, we refer to these values as directional feature contributions (DFCs).
For global interpretability we show how to retrieve and visualize gain-based feature importances, permutation feature importances and also show aggregated DFCs.
Setup
Load dataset
We will be using the titanic dataset, where the goal is to predict passenger survival given characteristics such as gender, age, class, etc.
Step2: Interpret model
Local interpretability
Output directional feature contributions (DFCs) to explain individual predictions, using the approach outlined in Palczewska et al and by Saabas in Interpreting Random Forests. The DFCs are generated with
Step4: Local interpretability
Next you will output the directional feature contributions (DFCs) to explain individual predictions using the approach outlined in Palczewska et al and by Saabas in Interpreting Random Forests (this method is also available in scikit-learn for Random Forests in the treeinterpreter package). The DFCs are generated with
Step5: Plot results
Exercise
Step9: Prettier plotting
Color codes based on directionality and adds feature values on figure. Please do not worry about the details of the plotting code
Step10: Global feature importances
Gain-based feature importances using est.experimental_feature_importances
Aggregate DFCs using est.experimental_predict_with_explanations
Permutation importances
Gain-based feature importances measure the loss change when splitting on a particular feature, while permutation feature importances are computed by evaluating model performance on the evaluation set by shuffling each feature one-by-one and attributing the change in model performance to the shuffled feature.
In general, permutation feature importance are preferred to gain-based feature importance, though both methods can be unreliable in situations where potential predictor variables vary in their scale of measurement or their number of categories and when features are correlated (source). Check out this article for an in-depth overview and great discussion on different feature importance types.
1. Gain-based feature importances
Step11: ??? What does the x axis represent?
??? Can we completely trust these results and the magnitudes?
2. Average absolute DFCs
We can also average the absolute values of DFCs to understand impact at a global level.
Step12: We can also see how DFCs vary as a feature value varies.
Step13: Visualizing the model's prediction surface
Lets first simulate/create training data using the following formula
Step15: We can visualize our function
Step16: First let's try to fit a linear model to the data.
Step17: Not very good at all...
??? Why is the linear model not performing well for this problem? Can you think of how to improve it just using a linear model?
Next let's try to fit a GBDT model to it and try to understand what the model does | Python Code:
import time
# We will use some np and pandas for dealing with input data.
import numpy as np
import pandas as pd
# And of course, we need tensorflow.
import tensorflow as tf
from matplotlib import pyplot as plt
from IPython.display import clear_output
tf.__version__
Explanation: Model understanding and interpretability
In this colab, we will
- Learn how to interpret model results and reason about the features
- Visualize the model results
Please complete the exercises and answer the questions tagged ???.
End of explanation
tf.logging.set_verbosity(tf.logging.ERROR)
tf.set_random_seed(123)
# Load dataset.
dftrain = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/train.csv')
dfeval = pd.read_csv('https://storage.googleapis.com/tf-datasets/titanic/eval.csv')
y_train = dftrain.pop('survived')
y_eval = dfeval.pop('survived')
# Feature columns.
fcol = tf.feature_column
CATEGORICAL_COLUMNS = ['sex', 'n_siblings_spouses', 'parch', 'class', 'deck',
'embark_town', 'alone']
NUMERIC_COLUMNS = ['age', 'fare']
def one_hot_cat_column(feature_name, vocab):
return fcol.indicator_column(
fcol.categorical_column_with_vocabulary_list(feature_name,
vocab))
fc = []
for feature_name in CATEGORICAL_COLUMNS:
# Need to one-hot encode categorical features.
vocabulary = dftrain[feature_name].unique()
fc.append(one_hot_cat_column(feature_name, vocabulary))
for feature_name in NUMERIC_COLUMNS:
fc.append(fcol.numeric_column(feature_name,
dtype=tf.float32))
# Input functions.
def make_input_fn(X, y, n_epochs=None):
def input_fn():
dataset = tf.data.Dataset.from_tensor_slices((X.to_dict(orient='list'), y))
# For training, cycle thru dataset as many times as need (n_epochs=None).
dataset = (dataset
.repeat(n_epochs)
.batch(len(y))) # Use entire dataset since this is such a small dataset.
return dataset
return input_fn
# Training and evaluation input functions.
train_input_fn = make_input_fn(dftrain, y_train)
eval_input_fn = make_input_fn(dfeval, y_eval, n_epochs=1)
Explanation: Below we demonstrate both local and global model interpretability for gradient boosted trees.
Local interpretability refers to an understanding of a model’s predictions at the individual example level, while global interpretability refers to an understanding of the model as a whole.
For local interpretability, we show how to create and visualize per-instance contributions using the technique outlined in Palczewska et al and by Saabas in Interpreting Random Forests (this method is also available in scikit-learn for Random Forests in the treeinterpreter package). To distinguish this from feature importances, we refer to these values as directional feature contributions (DFCs).
For global interpretability we show how to retrieve and visualize gain-based feature importances, permutation feature importances and also show aggregated DFCs.
Setup
Load dataset
We will be using the titanic dataset, where the goal is to predict passenger survival given characteristics such as gender, age, class, etc.
End of explanation
params = {
'n_trees': 50,
'max_depth': 3,
'n_batches_per_layer': 1,
# You must enable center_bias = True to get DFCs. This will force the model to
# make an initial prediction before using any features (e.g. use the mean of
# the training labels for regression or log odds for classification when
# using cross entropy loss).
'center_bias': True
}
est = tf.estimator.BoostedTreesClassifier(fc, **params)
# Train model.
est.train(train_input_fn)
# Evaluation.
results = est.evaluate(eval_input_fn)
clear_output()
pd.Series(results).to_frame()
Explanation: Interpret model
Local interpretability
Output directional feature contributions (DFCs) to explain individual predictions, using the approach outlined in Palczewska et al and by Saabas in Interpreting Random Forests. The DFCs are generated with:
pred_dicts = list(est.experimental_predict_with_explanations(pred_input_fn))
End of explanation
import matplotlib.pyplot as plt
import seaborn as sns
sns_colors = sns.color_palette('colorblind')
pred_dicts = list(est.experimental_predict_with_explanations(eval_input_fn))
def clean_feature_names(df):
Boilerplate code to clean up feature names -- this is unneeded in TF 2.0
df.columns = [v.split(':')[0].split('_indi')[0] for v in df.columns.tolist()]
df = df.T.groupby(level=0).sum().T
return df
# Create DFC Pandas dataframe.
labels = y_eval.values
probs = pd.Series([pred['probabilities'][1] for pred in pred_dicts])
df_dfc = pd.DataFrame([pred['dfc'] for pred in pred_dicts])
df_dfc.columns = est._names_for_feature_id
df_dfc = clean_feature_names(df_dfc)
df_dfc.describe()
# Sum of DFCs + bias == probability.
bias = pred_dicts[0]['bias']
dfc_prob = df_dfc.sum(axis=1) + bias
np.testing.assert_almost_equal(dfc_prob.values,
probs.values)
Explanation: Local interpretability
Next you will output the directional feature contributions (DFCs) to explain individual predictions using the approach outlined in Palczewska et al and by Saabas in Interpreting Random Forests (this method is also available in scikit-learn for Random Forests in the treeinterpreter package). The DFCs are generated with:
pred_dicts = list(est.experimental_predict_with_explanations(pred_input_fn))
(Note: The method is named experimental as we may modify the API before dropping the experimental prefix.)
End of explanation
import seaborn as sns # Make plotting nicer.
sns_colors = sns.color_palette('colorblind')
def plot_dfcs(example_id):
  label, prob = labels[example_id], probs[example_id]
  example = df_dfc.iloc[example_id]  # Choose the example_id-th example from the evaluation set.
  TOP_N = 8  # View top 8 features.
  sorted_ix = example.abs().sort_values()[-TOP_N:].index
  ax = example[sorted_ix].plot(kind='barh', color='g', figsize=(10, 5))
  ax.grid(False, axis='y')
  plt.title('Feature contributions for example {}\n pred: {:1.2f}; label: {}'.format(example_id, prob, label))
  plt.xlabel('Contribution to predicted probability')
ID = 102 # Change this.
plot_dfcs(ID)
Explanation: Plot results
Exercise: Plot figures for multiple examples. How would you explain each plot in plain english?
End of explanation
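# One way to approach the exercise above: plot the DFCs for a handful of
# evaluation examples and compare them. This block is an addition, not part
# of the original notebook, and the chosen IDs are arbitrary.
for example_id in [10, 25, 102]:
  plot_dfcs(example_id)
  plt.show()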
def plot_example_pretty(example):
Boilerplate code for better plotting :)
def _get_color(value):
To make positive DFCs plot green, negative DFCs plot red.
green, red = sns.color_palette()[2:4]
if value >= 0: return green
return red
def _add_feature_values(feature_values, ax):
Display feature's values on left of plot.
x_coord = ax.get_xlim()[0]
OFFSET = 0.15
for y_coord, (feat_name, feat_val) in enumerate(feature_values.items()):
t = plt.text(x_coord, y_coord - OFFSET, '{}'.format(feat_val), size=12)
t.set_bbox(dict(facecolor='white', alpha=0.5))
from matplotlib.font_manager import FontProperties
font = FontProperties()
font.set_weight('bold')
t = plt.text(x_coord, y_coord + 1 - OFFSET, 'feature\nvalue',
fontproperties=font, size=12)
TOP_N = 8 # View top 8 features.
sorted_ix = example.abs().sort_values()[-TOP_N:].index # Sort by magnitude.
example = example[sorted_ix]
colors = example.map(_get_color).tolist()
ax = example.to_frame().plot(kind='barh',
color=[colors],
legend=None,
alpha=0.75,
figsize=(10,6))
ax.grid(False, axis='y')
ax.set_yticklabels(ax.get_yticklabels(), size=14)
_add_feature_values(dfeval.iloc[ID].loc[sorted_ix], ax)
ax.set_title('Feature contributions for example {}\n pred: {:1.2f}; label: {}'.format(ID, probs[ID], labels[ID]))
ax.set_xlabel('Contribution to predicted probability', size=14)
plt.show()
return ax
# Plot results.
ID = 102
example = df_dfc.iloc[ID] # Choose ith example from evaluation set.
ax = plot_example_pretty(example)
Explanation: Prettier plotting
Color codes based on directionality and adds feature values on figure. Please do not worry about the details of the plotting code :)
End of explanation
features, importances = est.experimental_feature_importances(normalize=True)
df_imp = pd.DataFrame(importances, columns=['importances'], index=features)
# For plotting purposes. This is not needed in TF 2.0.
df_imp = clean_feature_names(df_imp.T).T.sort_values('importances', ascending=False)
# Visualize importances.
N = 8
ax = df_imp.iloc[0:N][::-1]\
.plot(kind='barh',
color=sns_colors[0],
title='Gain feature importances',
figsize=(10, 6))
ax.grid(False, axis='y')
plt.tight_layout()
Explanation: Global feature importances
Gain-based feature importances using est.experimental_feature_importances
Aggregate DFCs using est.experimental_predict_with_explanations
Permutation importances
Gain-based feature importances measure the loss change when splitting on a particular feature, while permutation feature importances are computed by evaluating model performance on the evaluation set by shuffling each feature one-by-one and attributing the change in model performance to the shuffled feature.
In general, permutation feature importances are preferred to gain-based feature importances, though both methods can be unreliable in situations where potential predictor variables vary in their scale of measurement or their number of categories and when features are correlated (source). Check out this article for an in-depth overview and great discussion on different feature importance types.
1. Gain-based feature importances
End of explanation
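# A rough sketch of the permutation feature importances mentioned above but
# not implemented in this notebook (this block is an addition): shuffle one
# feature at a time in the evaluation set, re-evaluate the model, and
# attribute the drop in accuracy to the shuffled feature. Treat it as an
# illustration rather than the canonical implementation.
def permutation_importances(est, X_eval, y_eval, metric='accuracy'):
  baseline = est.evaluate(make_input_fn(X_eval, y_eval, n_epochs=1))[metric]
  scores = {}
  for col in X_eval.columns:
    shuffled = X_eval.copy()
    shuffled[col] = np.random.permutation(shuffled[col].values)
    score = est.evaluate(make_input_fn(shuffled, y_eval, n_epochs=1))[metric]
    scores[col] = baseline - score
  return pd.Series(scores).sort_values()

perm_imp = permutation_importances(est, dfeval, y_eval)
perm_imp.plot(kind='barh', color=sns_colors[2], figsize=(10, 6),
              title='Permutation feature importances (drop in accuracy)');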
# Plot.
dfc_mean = df_dfc.abs().mean()
sorted_ix = dfc_mean.abs().sort_values()[-8:].index # Average and sort by absolute.
ax = dfc_mean[sorted_ix].plot(kind='barh',
color=sns_colors[1],
title='Mean |directional feature contributions|',
figsize=(10, 6))
ax.grid(False, axis='y')
Explanation: ??? What does the x axis represent?
??? Can we completely trust these results and the magnitudes?
2. Average absolute DFCs
We can also average the absolute values of DFCs to understand impact at a global level.
End of explanation
age = pd.Series(df_dfc.age.values, index=dfeval.age.values).sort_index()
sns.jointplot(age.index.values, age.values);
Explanation: We can also see how DFCs vary as a feature value varies.
End of explanation
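# The same kind of plot for another numeric feature (an added illustration;
# it assumes the cleaned DFC column is simply named 'fare', mirroring 'age').
fare = pd.Series(df_dfc.fare.values, index=dfeval.fare.values).sort_index()
sns.jointplot(fare.index.values, fare.values);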
from numpy.random import uniform, seed
from matplotlib.mlab import griddata
# Create fake data
seed(0)
npts = 5000
x = uniform(-2, 2, npts)
y = uniform(-2, 2, npts)
z = x*np.exp(-x**2 - y**2)
# Prep data for training.
df = pd.DataFrame({'x': x, 'y': y, 'z': z})
xi = np.linspace(-2.0, 2.0, 200)
yi = np.linspace(-2.1, 2.1, 210)
xi,yi = np.meshgrid(xi, yi)
df_predict = pd.DataFrame({
'x' : xi.flatten(),
'y' : yi.flatten(),
})
predict_shape = xi.shape
def plot_contour(x, y, z, **kwargs):
# Grid the data.
plt.figure(figsize=(10, 8))
# Contour the gridded data, plotting dots at the nonuniform data points.
CS = plt.contour(x, y, z, 15, linewidths=0.5, colors='k')
CS = plt.contourf(x, y, z, 15,
vmax=abs(zi).max(), vmin=-abs(zi).max(), cmap='RdBu_r')
plt.colorbar() # Draw colorbar.
# Plot data points.
plt.xlim(-2, 2)
plt.ylim(-2, 2)
Explanation: Visualizing the model's prediction surface
Let's first simulate/create training data using the following formula
$z=x* e^{-x^2 - y^2}$
Where $z$ is the dependent variable we are trying to predict and $x$ and $y$ are the features.
End of explanation
zi = griddata(x, y, z, xi, yi, interp='linear')
plot_contour(xi, yi, zi)
plt.scatter(df.x, df.y, marker='.')
plt.title('Contour on training data')
plt.show()
def predict(est):
Predictions from a given estimator.
predict_input_fn = lambda: tf.data.Dataset.from_tensors(dict(df_predict))
preds = np.array([p['predictions'][0] for p in est.predict(predict_input_fn)])
return preds.reshape(predict_shape)
Explanation: We can visualize our function:
End of explanation
fc = [tf.feature_column.numeric_column('x'),
tf.feature_column.numeric_column('y')]
train_input_fn = make_input_fn(df, df.z)
est = tf.estimator.LinearRegressor(fc)
est.train(train_input_fn, max_steps=500);
plot_contour(xi, yi, predict(est))
Explanation: First let's try to fit a linear model to the data.
End of explanation
for n_trees in [1,2,3,10,30,50,100,200]:
est = tf.estimator.BoostedTreesRegressor(fc,
n_batches_per_layer=1,
max_depth=4,
n_trees=n_trees)
est.train(train_input_fn)
plot_contour(xi, yi, predict(est))
plt.text(-1.8, 2.1, '# trees: {}'.format(n_trees), color='w', backgroundcolor='black', size=20)
Explanation: Not very good at all...
??? Why is the linear model not performing well for this problem? Can you think of how to improve it just using a linear model?
Next let's try to fit a GBDT model to it and try to understand what the model does
End of explanation |
4,455 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Statistics
There are many specialized packages for dealing with data analysis and statistical programming. One very important code that you will see in MATH1024, Introduction to Probability and Statistics, is R. A Python package for performing similar analysis of large data sets is pandas. However, simple statistical tasks on simple data sets can be tackled using numpy and scipy.
Getting data in
A data file containing the monthly rainfall for Southampton, taken from the Met Office data can be downloaded from this link. We will save that file locally, and then look at the data.
The first few lines of the file are
Step1: We can use numpy to load this data into a variable, where we can manipulate it. This is not ideal
Step2: We see that the first column - the year - has been converted to a floating point number, which is not helpful. However, we can now split the data using standard numpy operations
Step3: We can now plot, for example, the rainfall in January for all years
Step4: Basic statistical functions
numpy contains a number of basic statistical functions, such as min, max and mean. These will act on entire arrays to give the "all time" minimum, maximum, and average rainfall
Step5: Of more interest would be either
the mean (min/max) rainfall in a given month for all years, or
the mean (min/max) rainfall in a given year for all months.
So the mean rainfall in the first year, 1855, would be
Step6: Whilst the mean rainfall in January, averaging over all years, would be
Step7: If we wanted to plot the mean rainfall per year, across all years, this would be tedious - there are 145 years of data in the file. Even computing the mean rainfall in each month, across all years, would be bad with 12 months. We could write a loop. However, numpy allows us to apply a function along an axis of the array, which does this in one operation
Step8: The axis argument gives the direction we want to keep - that we do not apply the operation to. For this data set, each row contains a year and each column a month. To find the mean in a given month we want to keep the row information (axis 0) and take the mean over the column. To find the mean in a given year we want to keep the column information (axis 1) and take the mean over the row.
We can now plot how the mean varies with each year.
Step9: We can also compute the standard deviation
Step10: We can then add confidence intervals to the plot
Step11: This isn't particularly pretty or clear
Step12: Categorical data
Looking at the means by month, it would be better to give them names rather than numbers. We will also summarize the available information using a boxplot
Step13: Much better ways of working with categorical data are available through more specialized packages.
Regression
We can go beyond the basic statistical functions in numpy and look at other standard tasks. For example, we can look for simple trends in our data with a linear regression. There is a function to compute the linear regression in scipy we can use. We will use this to see if there is a trend in the mean yearly rainfall
Step14: It looks like there's a good chance that the slight decrease in mean rainfall with time is a real effect.
Random numbers
Random processes and random variables may be at the heart of probability and statistics, but computers cannot generate anything "truly" random. Instead they can generate pseudo-random numbers using random number generators (RNGs). Constructing a random number generator is a hard problem and wherever possible you should use a well-tested RNG rather than attempting to write your own.
Python has many ways of generating random numbers. Perhaps the most useful are given by the numpy.random module, which can generate a numpy array filled with random numbers from various distributions. For example
Step15: More distributions
Whilst the standard distributions are given by the convenience functions above, the full documentation of numpy.random shows many other distributions available. For example, we can draw $10,000$ samples from the Beta distribution using the parameters $\alpha = 1/2 = \beta$ as
Step16: We can do this $5,000$ times and compute the mean of each set of samples | Python Code:
!head southampton_precip.txt
Explanation: Statistics
There are many specialized packages for dealing with data analysis and statistical programming. One very important code that you will see in MATH1024, Introduction to Probability and Statistics, is R. A Python package for performing similar analysis of large data sets is pandas. However, simple statistical tasks on simple data sets can be tackled using numpy and scipy.
Getting data in
A data file containing the monthly rainfall for Southampton, taken from the Met Office data can be downloaded from this link. We will save that file locally, and then look at the data.
The first few lines of the file are:
End of explanation
import numpy
data = numpy.loadtxt('southampton_precip.txt')
data
Explanation: We can use numpy to load this data into a variable, where we can manipulate it. This is not ideal: it will lose the information in the header, and that the first column corresponds to years. However, it is simple to use.
End of explanation
years = data[:, 0]
rainfall = data[:, 1:]
Explanation: We see that the first column - the year - has been converted to a floating point number, which is not helpful. However, we can now split the data using standard numpy operations:
End of explanation
%matplotlib inline
from matplotlib import rcParams
rcParams['figure.figsize']=(12,9)
from matplotlib import pyplot
pyplot.plot(years, rainfall[:,0])
pyplot.xlabel('Year')
pyplot.ylabel('Rainfall in January');
Explanation: We can now plot, for example, the rainfall in January for all years:
End of explanation
print("Minimum rainfall: {}".format(rainfall.min()))
print("Maximum rainfall: {}".format(rainfall.max()))
print("Mean rainfall: {}".format(rainfall.mean()))
Explanation: Basic statistical functions
numpy contains a number of basic statistical functions, such as min, max and mean. These will act on entire arrays to give the "all time" minimum, maximum, and average rainfall:
End of explanation
print ("Mean rainfall in 1855: {}".format(rainfall[0, :].mean()))
Explanation: Of more interest would be either
the mean (min/max) rainfall in a given month for all years, or
the mean (min/max) rainfall in a given year for all months.
So the mean rainfall in the first year, 1855, would be
End of explanation
print ("Mean rainfall in January: {}".format(rainfall[:, 0].mean()))
Explanation: Whilst the mean rainfall in January, averaging over all years, would be
End of explanation
mean_rainfall_in_month = rainfall.mean(axis=0)
mean_rainfall_per_year = rainfall.mean(axis=1)
Explanation: If we wanted to plot the mean rainfall per year, across all years, this would be tedious - there are 145 years of data in the file. Even computing the mean rainfall in each month, across all years, would be bad with 12 months. We could write a loop. However, numpy allows us to apply a function along an axis of the array, which does this in one operation:
End of explanation
pyplot.plot(years, mean_rainfall_per_year)
pyplot.xlabel('Year')
pyplot.ylabel('Mean rainfall');
Explanation: The axis argument gives the direction we want to keep - that we do not apply the operation to. For this data set, each row contains a year and each column a month. To find the mean in a given month we want to keep the row information (axis 0) and take the mean over the column. To find the mean in a given year we want to keep the column information (axis 1) and take the mean over the row.
We can now plot how the mean varies with each year.
End of explanation
std_rainfall_per_year = rainfall.std(axis=1)
Explanation: We can also compute the standard deviation:
End of explanation
pyplot.errorbar(years, mean_rainfall_per_year, yerr = std_rainfall_per_year)
pyplot.xlabel('Year')
pyplot.ylabel('Mean rainfall');
Explanation: We can then add confidence intervals to the plot:
End of explanation
pyplot.plot(years, mean_rainfall_per_year)
pyplot.fill_between(years, mean_rainfall_per_year - std_rainfall_per_year,
mean_rainfall_per_year + std_rainfall_per_year,
alpha=0.25, color=None)
pyplot.xlabel('Year')
pyplot.ylabel('Mean rainfall');
Explanation: This isn't particularly pretty or clear: a nicer example would use better packages, but a quick fix uses an alternative matplotlib approach:
End of explanation
months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec']
pyplot.boxplot(rainfall, labels=months)
pyplot.xlabel('Month')
pyplot.ylabel('Mean rainfall');
Explanation: Categorical data
Looking at the means by month, it would be better to give them names rather than numbers. We will also summarize the available information using a boxplot:
End of explanation
from scipy import stats
slope, intercept, r_value, p_value, std_err = stats.linregress(years, mean_rainfall_per_year)
pyplot.plot(years, mean_rainfall_per_year, 'b-', label='Data')
pyplot.plot(years, intercept + slope*years, 'k-', label='Linear Regression')
pyplot.xlabel('Year')
pyplot.ylabel('Mean rainfall')
pyplot.legend();
print("The change in rainfall (the slope) is {}.".format(slope))
print("However, the error estimate is {}.".format(std_err))
print("The correlation coefficient between rainfall and year"
" is {}.".format(r_value))
print("The probability that the slope is zero is {}.".format(p_value))
Explanation: Much better ways of working with categorical data are available through more specialized packages.
Regression
We can go beyond the basic statistical functions in numpy and look at other standard tasks. For example, we can look for simple trends in our data with a linear regression. There is a function to compute the linear regression in scipy we can use. We will use this to see if there is a trend in the mean yearly rainfall:
End of explanation
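# Added illustration (not in the original notebook): use the fitted line to
# extrapolate the trend, e.g. to the year 2050. Extrapolating far beyond the
# data should be treated with caution.
print("Trend prediction for 2050: {:.1f}".format(intercept + slope * 2050))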
from numpy import random
uniform = random.rand(10000)
normal = random.randn(10000)
fig = pyplot.figure()
ax1 = fig.add_subplot(1,2,1)
ax2 = fig.add_subplot(1,2,2)
ax1.hist(uniform, 20)
ax1.set_title('Uniform data')
ax2.hist(normal, 20)
ax2.set_title('Normal data')
fig.tight_layout()
fig.show();
Explanation: It looks like there's a good chance that the slight decrease in mean rainfall with time is a real effect.
Random numbers
Random processes and random variables may be at the heart of probability and statistics, but computers cannot generate anything "truly" random. Instead they can generate pseudo-random numbers using random number generators (RNGs). Constructing a random number generator is a hard problem and wherever possible you should use a well-tested RNG rather than attempting to write your own.
Python has many ways of generating random numbers. Perhaps the most useful are given by the numpy.random module, which can generate a numpy array filled with random numbers from various distributions. For example:
End of explanation
beta_samples = random.beta(0.5, 0.5, 10000)
pyplot.hist(beta_samples, 20)
pyplot.title('Beta data')
pyplot.show();
Explanation: More distributions
Whilst the standard distributions are given by the convenience functions above, the full documentation of numpy.random shows many other distributions available. For example, we can draw $10,000$ samples from the Beta distribution using the parameters $\alpha = 1/2 = \beta$ as
End of explanation
n_trials = 5000
beta_means = numpy.zeros((n_trials,))
for trial in range(n_trials):
beta_samples = random.beta(0.5, 0.5, 10000)
beta_means[trial] = numpy.mean(beta_samples)
pyplot.hist(beta_means, 20)
pyplot.title('Mean of Beta trials')
pyplot.show();
Explanation: We can do this $5,000$ times and compute the mean of each set of samples:
End of explanation |
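# Added illustration (not in the original notebook): the distribution of the
# sample means looks approximately normal, as the Central Limit Theorem
# predicts. Overlay a normal density with the same mean and standard
# deviation to compare (density=True assumes a reasonably recent matplotlib).
grid = numpy.linspace(beta_means.min(), beta_means.max(), 200)
pyplot.hist(beta_means, 20, density=True, alpha=0.5)
pyplot.plot(grid, stats.norm.pdf(grid, beta_means.mean(), beta_means.std()), 'k-')
pyplot.title('Means of Beta trials with fitted normal density')
pyplot.show();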
4,456 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Statistics Fundamentals
Statistics is primarily about analyzing data samples, and that starts with understanding the distribution of data in a sample.
Analyzing Data Distribution
A great deal of statistical analysis is based on the way that data values are distributed within the dataset. In this section, we'll explore some statistics that you can use to tell you about the values in a dataset.
Measures of Central Tendency
The term measures of central tendency sounds a bit grand, but really it's just a fancy way of saying that we're interested in knowing where the middle value in our data is. For example, suppose you decide to conduct a study into the comparative salaries of people who graduated from the same school. You might record the results like this
Step1: So, is 71,000 really the central value? Or put another way, would it be reasonable for a graduate of this school to expect to earn $71,000? After all, that's the average salary of a graduate from this school.
If you look closely at the salaries, you can see that out of the seven former students, six earn less than the mean salary. The data is skewed by the fact that Rosie has clearly managed to find a much higher-paid job than her classmates.
Median
OK, let's see if we can find another definition for the central value that more closely reflects the expected earning potential of students attending our school. Another measure of central tendency we can use is the median. To calculate the median, we need to sort the values into ascending order and then find the middle-most value. When there are an odd number of observations, you can find the position of the median value using this formula (where n is the number of observations)
Step2: Mode
Another related statistic is the mode, which indicates the most frequently occurring value. If you think about it, this is potentially a good indicator of how much a student might expect to earn when they graduate from the school; out of all the salaries that are being earned by former students, the mode is earned by more than any other.
Looking at our list of salaries, there are two instances of former students earning 50,000, but only one instance each for all other salaries
Step3: Multimodal Data
It's not uncommon for a set of data to have more than one value as the mode. For example, suppose Ethan receives a raise that takes his salary to 59,000
Step4: Distribution and Density
Now we know something about finding the center, we can start to explore how the data is distributed around it. What we're interested in here is understanding the general "shape" of the data distribution so that we can begin to get a feel for what a 'typical' value might be expected to be.
We can start by finding the extremes - the minimum and maximum. In the case of our salary data, the lowest paid graduate from our school is Vicky, with a salary of 40,000; and the highest-paid graduate is Rosie, with 189,000.
The pandas.dataframe class has min and max functions to return these values.
Run the following code to compare the minimum and maximum salaries to the central measures we calculated previously
Step5: We can examine these values, and get a sense for how the data is distributed - for example, we can see that the mean is closer to the max than the median, and that both are closer to the min than to the max.
However, it's generally easier to get a sense of the distribution by visualizing the data. Let's start by creating a histogram of the salaries, highlighting the mean and median salaries (the min, max are fairly self-evident, and the mode is wherever the highest bar is)
Step6: The mean and median are shown as colored lines on the histogram.
Step7: Note that the density line takes the form of an asymmetric curve that has a "peak" on the left and a long tail on the right. We describe this sort of data distribution as being skewed; that is, the data is not distributed symmetrically but "bunched together" on one side. In this case, the data is bunched together on the left, creating a long tail on the right; and is described as being right-skewed because some infrequently occurring high values are pulling the mean to the right.
Let's take a look at another set of data. We know how much money our graduates make, but how many hours per week do they need to work to earn their salaries? Here's the data
Step8: Once again, the distribution is skewed, but this time it's left-skewed. Note that the curve is asymmetric, with the mean pulled to the left of the median by the infrequent low values in the long tail.
Step9: This time, the distribution is symmetric, forming a "bell-shaped" curve. The mean and median are both at the center of the distribution.
Step10: Now let's look at the distribution of a real dataset - let's see how the heights of the fathers measured in Galton's study of parent and child heights are distributed
Step11: As you can see, the fathers' height measurements are approximately normally distributed - in other words, they form a more or less normal distribution that is symmetric around the mean.
Measures of Variance
We can see from the distribution plots of our data that the values in our dataset can vary quite widely. We can use various measures to quantify this variance.
Range
A simple way to quantify the variance in a dataset is to identify the difference between the lowest and highest values. This is called the range, and is calculated by subtracting the minimum value from the maximum value.
The following Python code creates a single Pandas dataframe for our school graduate data, and calculates the range for each of the numeric features
Step12: Percentiles and Quartiles
The range is easy to calculate, but it's not a particularly useful statistic. For example, a range of 149,000 between the lowest and highest salary does not tell us which value within that range a graduate is most likely to earn - it tells us nothing about how the salaries are distributed around the mean within that range. The range also tells us very little about the comparative position of an individual value within the distribution - for example, Frederic scored 57 in his final grade at school, which is a pretty good score (it's more than all but one of his classmates); but this isn't immediately apparent from a score of 57 and a range of 90.
Percentiles
A percentile tells us where a given value is ranked in the overall distribution. For example, 25% of the data in a distribution has a value lower than the 25th percentile; 75% of the data has a value lower than the 75th percentile, and so on. Note that half of the data has a value lower than the 50th percentile - so the 50th percentile is also the median!
Let's examine Frederic's grade using this approach. We know he scored 57, but how does he rank compared to his fellow students?
Well, there are seven students in total, and five of them scored less than Frederic; so we can calculate the percentile for Frederic's grade like this
Step13: We've used the strict definition of percentile; but sometimes it's calculated as being the percentage of values that are less than or equal to the value you're comparing. In this case, the calculation for Frederic's percentile would include his own score
Step14: We've considered the percentile of Frederic's grade, and used it to rank him compared to his fellow students. So what about Dan, Joann, and Ethan? How do they compare to the rest of the class? They scored the same grade (50), so in a sense they share a percentile.
To deal with this grouped scenario, we can average the percentage rankings for the matching scores. We treat half of the scores matching the one we're ranking as if they are below it, and half as if they are above it. In this case, there were three matching scores of 50, and for each of these we calculate the percentile as if 1 was below and 1 was above. So the calculation for a percentile for Joann based on scores being less than or equal to 50 is
Step15: Quartiles
Rather than using individual percentiles to compare data, we can consider the overall spread of the data by dividing those percentiles into four quartiles. The first quartile contains the values from the minimum to the 25th percentile, the second from the 25th percentile to the 50th percentile (which is the median), the third from the 50th percentile to the 75th percentile, and the fourth from the 75th percentile to the maximum.
In Python, you can use the quantile function of the pandas.dataframe class to find the threshold values at the 25th, 50th, and 75th percentiles (quantile is a generic term for a ranked position, such as a percentile or quartile).
Run the following code to find the quartile thresholds for the weekly hours worked by our former students
Step16: It's usually easier to understand how data is distributed across the quartiles by visualizing it. You can use a histogram, but many data scientists use a kind of visualization called a box plot (or a box and whiskers plot).
Let's create a box plot for the weekly hours
Step17: The box plot consists of
Step18: So what's going on here?
Well, as we've already noticed, Rosie earns significantly more than her former classmates. So much more, in fact, that her salary has been identified as an outlier. An outlier is a value that is so far from the center of the distribution compared to other values that it skews the distribution by affecting the mean. There are all sorts of reasons that you might have outliers in your data, including data entry errors, failures in sensors or data-generating equipment, or genuinely anomalous values.
So what should we do about it?
This really depends on the data, and what you're trying to use it for. In this case, let's assume we're trying to figure out what's a reasonable expectation of salary for a graduate of our school to earn. Ignoring for the moment that we have an extremely small dataset on which to base our judgement, it looks as if Rosie's salary could be either an error (maybe she mis-typed it in the form used to collect data) or a genuine anomaly (maybe she became a professional athlete or landed some other extremely highly paid job). Either way, it doesn't seem to represent a salary that a typical graduate might earn.
Let's see what the distribution of the data looks like without the outlier
Step19: Now it looks like there's a more even distribution of salaries. It's still not quite symmetrical, but there's much less overall variance. There's potentially some cause here to disregard Rosie's salary data when we compare the salaries, as it is tending to skew the analysis.
So is that OK? Can we really just ignore a data value we don't like?
Again, it depends on what you're analyzing. Let's take a look at the distribution of final grades
Step20: Once again there are outliers, this time at both ends of the distribution. However, think about what this data represents. If we assume that the grade for the final test is based on a score out of 100, it seems reasonable to expect that some students will score very low (maybe even 0) and some will score very well (maybe even 100); but most will get a score somewhere in the middle. The reason that the low and high scores here look like outliers might just be because we have so few data points. Let's see what happens if we include a few more students in our data
Step21: With more data, there are some more high and low scores; so we no longer consider the isolated cases to be outliers.
The key point to take away here is that you need to really understand the data and what you're trying to do with it, and you need to ensure that you have a reasonable sample size, before determining what to do with outlier values.
Variance and Standard Deviation
We've seen how to understand the spread of our data distribution using the range, percentiles, and quartiles; and we've seen the effect of outliers on the distribution. Now it's time to look at how to measure the amount of variance in the data.
Variance
Variance is measured as the average of the squared difference from the mean. For a full population, it's indicated by a squared Greek letter sigma (σ<sup>2</sup>) and calculated like this
Step22: Standard Deviation
To calculate the variance, we squared the difference of each value from the mean. If we hadn't done this, the numerator of our fraction would always end up being zero (because the mean is at the center of our values). However, this means that the variance is not in the same unit of measurement as our data - in our case, since we're calculating the variance for grade points, it's in grade points squared; which is not very helpful.
To get the measure of variance back into the same unit of measurement, we need to find its square root
Step23: Standard Deviation in a Normal Distribution
In statistics and data science, we spend a lot of time considering normal distributions, because they occur so frequently. The standard deviation plays an important role in a normal distribution.
Run the following cell to show a histogram of a standard normal distribution (which is a distribution with a mean of 0 and a standard deviation of 1)
Step24: The horizontal colored lines show the percentage of data within 1, 2, and 3 standard deviations of the mean (plus or minus).
In any normal distribution | Python Code:
import pandas as pd
df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],
'Salary':[50000,54000,50000,189000,55000,40000,59000]})
print (df['Salary'].mean())
Explanation: Statistics Fundamentals
Statistics is primarily about analyzing data samples, and that starts with understanding the distribution of data in a sample.
Analyzing Data Distribution
A great deal of statistical analysis is based on the way that data values are distributed within the dataset. In this section, we'll explore some statistics that you can use to tell you about the values in a dataset.
Measures of Central Tendency
The term measures of central tendency sounds a bit grand, but really it's just a fancy way of saying that we're interested in knowing where the middle value in our data is. For example, suppose you decide to conduct a study into the comparative salaries of people who graduated from the same school. You might record the results like this:
| Name | Salary |
|----------|-------------|
| Dan | 50,000 |
| Joann | 54,000 |
| Pedro | 50,000 |
| Rosie | 189,000 |
| Ethan | 55,000 |
| Vicky | 40,000 |
| Frederic | 59,000 |
Now, some of the former-students may earn a lot, and others may earn less; but what's the salary in the middle of the range of all salaries?
Mean
A common way to define the central value is to use the mean, often called the average. This is calculated as the sum of the values in the dataset, divided by the number of observations in the dataset. When the dataset consists of the full population, the mean is represented by the Greek symbol μ (mu), and the formula is written like this:
\begin{equation}\mu = \frac{\displaystyle\sum_{i=1}^{N}X_{i}}{N}\end{equation}
More commonly, when working with a sample, the mean is represented by x̄ (x-bar), and the formula is written like this (note the lower case letters used to indicate values from a sample):
\begin{equation}\bar{x} = \frac{\displaystyle\sum_{i=1}^{n}x_{i}}{n}\end{equation}
In the case of our list of salaries, this can be calculated as:
\begin{equation}\bar{x} = \frac{50000+54000+50000+189000+55000+40000+59000}{7}\end{equation}
Which is 71,000.
In technical terminology, x̄ is a statistic (an estimate based on a sample of data) and μ is a parameter (a true value based on the entire population). A lot of the time, the parameters for the full population will be impossible (or at the very least, impractical) to measure; so we use statistics obtained from a representative sample to approximate them. In this case, we can use the sample mean of salary for our selection of surveyed students to try to estimate the actual average salary of all students who graduate from our school.
In Python, when working with data in a pandas.dataframe, you can use the mean function, like this:
End of explanation
import pandas as pd
df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],
'Salary':[50000,54000,50000,189000,55000,40000,59000]})
print (df['Salary'].median())
Explanation: So, is 71,000 really the central value? Or put another way, would it be reasonable for a graduate of this school to expect to earn $71,000? After all, that's the average salary of a graduate from this school.
If you look closely at the salaries, you can see that out of the seven former students, six earn less than the mean salary. The data is skewed by the fact that Rosie has clearly managed to find a much higher-paid job than her classmates.
Median
OK, let's see if we can find another definition for the central value that more closely reflects the expected earning potential of students attending our school. Another measure of central tendency we can use is the median. To calculate the median, we need to sort the values into ascending order and then find the middle-most value. When there is an odd number of observations, you can find the position of the median value using this formula (where n is the number of observations):
\begin{equation}\frac{n+1}{2}\end{equation}
Remember that this formula returns the position of the median value in the sorted list; not the value itself.
If the number of observations is even, then things are a little (but not much) more complicated. In this case you calculate the median as the average of the two middle-most values, which are found like this:
\begin{equation}\frac{n}{2} \;\;\;\;and \;\;\;\; \frac{n}{2} + 1\end{equation}
So, for our graduate salaries, first let's sort the dataset:
| Salary |
|-------------|
| 40,000 |
| 50,000 |
| 50,000 |
| 54,000 |
| 55,000 |
| 59,000 |
| 189,000 |
There's an odd number of observations (7), so the median value is at position (7 + 1) ÷ 2; in other words, position 4:
| Salary |
|-------------|
| 40,000 |
| 50,000 |
| 50,000 |
|>54,000 |
| 55,000 |
| 59,000 |
| 189,000 |
So the median salary is 54,000.
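If the class had an even number of observations, you would average the two middle-most values instead. Here's a minimal sketch using a hypothetical eighth salary of 62,000 (not part of the survey data, just an illustration) to show that the manual calculation and pandas agree:
```python
import pandas as pd

# Hypothetical eighth salary added purely to illustrate the even-count case
salaries = pd.Series([40000, 50000, 50000, 54000, 55000, 59000, 189000, 62000])

# With n = 8, the median is the average of the values at sorted positions n/2 and n/2 + 1
sorted_salaries = salaries.sort_values().reset_index(drop=True)
manual_median = (sorted_salaries[3] + sorted_salaries[4]) / 2

print(manual_median)        # 54500.0
print(salaries.median())    # pandas handles the even case the same way
```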
The pandas.dataframe class in Python has a median function to find the median:
End of explanation
import pandas as pd
df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],
'Salary':[50000,54000,50000,189000,55000,40000,59000]})
print (df['Salary'].mode())
Explanation: Mode
Another related statistic is the mode, which indicates the most frequently occurring value. If you think about it, this is potentially a good indicator of how much a student might expect to earn when they graduate from the school; out of all the salaries that are being earned by former students, the mode is earned by more than any other.
Looking at our list of salaries, there are two instances of former students earning 50,000, but only one instance each for all other salaries:
| Salary |
|-------------|
| 40,000 |
|>50,000|
|>50,000|
| 54,000 |
| 55,000 |
| 59,000 |
| 189,000 |
The mode is therefore 50,000.
As you might expect, the pandas.dataframe class has a mode function to return the mode:
End of explanation
import pandas as pd
df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],
'Salary':[50000,54000,50000,189000,59000,40000,59000]})
print (df['Salary'].mode())
Explanation: Multimodal Data
It's not uncommon for a set of data to have more than one value as the mode. For example, suppose Ethan receives a raise that takes his salary to 59,000:
| Salary |
|-------------|
| 40,000 |
|>50,000|
|>50,000|
| 54,000 |
|>59,000|
|>59,000|
| 189,000 |
Now there are two values with the highest frequency. This dataset is bimodal. More generally, when there is more than one mode value, the data is considered multimodal.
The pandas.dataframe.mode function returns all of the modes:
End of explanation
import pandas as pd
df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],
'Salary':[50000,54000,50000,189000,55000,40000,59000]})
print ('Min: ' + str(df['Salary'].min()))
print ('Mode: ' + str(df['Salary'].mode()[0]))
print ('Median: ' + str(df['Salary'].median()))
print ('Mean: ' + str(df['Salary'].mean()))
print ('Max: ' + str(df['Salary'].max()))
Explanation: Distribution and Density
Now that we know something about finding the center, we can start to explore how the data is distributed around it. What we're interested in here is understanding the general "shape" of the data distribution so that we can begin to get a feel for what a 'typical' value might be expected to be.
We can start by finding the extremes - the minimum and maximum. In the case of our salary data, the lowest paid graduate from our school is Vicky, with a salary of 40,000; and the highest-paid graduate is Rosie, with 189,000.
The pandas.dataframe class has min and max functions to return these values.
Run the following code to compare the minimum and maximum salaries to the central measures we calculated previously:
End of explanation
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],
'Salary':[50000,54000,50000,189000,55000,40000,59000]})
salary = df['Salary']
salary.plot.hist(title='Salary Distribution', color='lightblue', bins=25)
plt.axvline(salary.mean(), color='magenta', linestyle='dashed', linewidth=2)
plt.axvline(salary.median(), color='green', linestyle='dashed', linewidth=2)
plt.show()
Explanation: We can examine these values, and get a sense for how the data is distributed - for example, we can see that the mean is closer to the max than the median, and that both are closer to the min than to the max.
However, it's generally easier to get a sense of the distribution by visualizing the data. Let's start by creating a histogram of the salaries, highlighting the mean and median salaries (the min, max are fairly self-evident, and the mode is wherever the highest bar is):
End of explanation
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as stats
df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],
'Salary':[50000,54000,50000,189000,55000,40000,59000]})
salary = df['Salary']
density = stats.gaussian_kde(salary)
n, x, _ = plt.hist(salary, histtype='step', density=True, bins=25)
plt.plot(x, density(x)*5)
plt.axvline(salary.mean(), color='magenta', linestyle='dashed', linewidth=2)
plt.axvline(salary.median(), color='green', linestyle='dashed', linewidth=2)
plt.show()
Explanation: The <span style="color:magenta">mean</span> and <span style="color:green">median</span> are shown as dashed lines. Note the following:
- Salary is a continuous data value - graduates could potentially earn any value along the scale, even down to a fraction of a cent.
- The number of bins in the histogram determines the size of each salary band for which we're counting frequencies. Fewer bins means merging more individual salaries together to be counted as a group.
- The majority of the data is on the left side of the histogram, reflecting the fact that most graduates earn between 40,000 and 55,000
- The mean is a higher value than the median and mode.
- There are gaps in the histogram for salary bands that nobody earns.
The histogram shows the relative frequency of each salary band, based on the number of bins. It also gives us a sense of the density of the data for each point on the salary scale. With enough data points, and small enough bins, we could view this density as a line that shows the shape of the data distribution.
Run the following cell to show the density of the salary data as a line on top of the histogram:
End of explanation
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as stats
df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],
'Hours':[41,40,36,30,35,39,40]})
hours = df['Hours']
density = stats.gaussian_kde(hours)
n, x, _ = plt.hist(hours, histtype='step', density=True, bins=25)
plt.plot(x, density(x)*7)
plt.axvline(hours.mean(), color='magenta', linestyle='dashed', linewidth=2)
plt.axvline(hours.median(), color='green', linestyle='dashed', linewidth=2)
plt.show()
Explanation: Note that the density line takes the form of an asymmetric curve that has a "peak" on the left and a long tail on the right. We describe this sort of data distribution as being skewed; that is, the data is not distributed symmetrically but "bunched together" on one side. In this case, the data is bunched together on the left, creating a long tail on the right; and is described as being right-skewed because some infrequently occurring high values are pulling the mean to the right.
Let's take a look at another set of data. We know how much money our graduates make, but how many hours per week do they need to work to earn their salaries? Here's the data:
| Name | Hours |
|----------|-------|
| Dan | 41 |
| Joann | 40 |
| Pedro | 36 |
| Rosie | 30 |
| Ethan | 35 |
| Vicky | 39 |
| Frederic | 40 |
Run the following code to show the distribution of the hours worked:
End of explanation
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as stats
df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],
'Grade':[50,50,46,95,50,5,57]})
grade = df['Grade']
density = stats.gaussian_kde(grade)
n, x, _ = plt.hist(grade, histtype='step', density=True, bins=25)
plt.plot(x, density(x)*7.5)
plt.axvline(grade.mean(), color='magenta', linestyle='dashed', linewidth=2)
plt.axvline(grade.median(), color='green', linestyle='dashed', linewidth=2)
plt.show()
Explanation: Once again, the distribution is skewed, but this time it's left-skewed. Note that the curve is asymmetric with the <span style="color:magenta">mean</span> to the left of the <span style="color:green">median</span> and the mode; and the average weekly working hours skewed to the lower end.
Once again, Rosie seems to be getting the better of the deal. She earns more than her former classmates for working fewer hours. Maybe a look at the test scores the students achieved on their final grade at school might help explain her success:
| Name | Grade |
|----------|-------|
| Dan | 50 |
| Joann | 50 |
| Pedro | 46 |
| Rosie | 95 |
| Ethan | 50 |
| Vicky | 5 |
| Frederic | 57 |
Let's take a look at the distribution of these grades:
End of explanation
%matplotlib inline
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
import scipy.stats as stats
df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],
'Salary':[50000,54000,50000,189000,55000,40000,59000],
'Hours':[41,40,36,30,35,39,40],
'Grade':[50,50,46,95,50,5,57]})
numcols = ['Salary', 'Hours', 'Grade']
for col in numcols:
print(df[col].name + ' skewness: ' + str(df[col].skew()))
print(df[col].name + ' kurtosis: ' + str(df[col].kurt()))
density = stats.gaussian_kde(df[col])
n, x, _ = plt.hist(df[col], histtype='step', density=True, bins=25)
plt.plot(x, density(x)*6)
plt.show()
print('\n')
Explanation: This time, the distribution is symmetric, forming a "bell-shaped" curve. The <span style="color:magenta">mean</span>, <span style="color:green">median</span>, and mode are at the same location, and the data tails off evenly on both sides from a central peak.
Statisticians call this a normal distribution (or sometimes a Gaussian distribution), and it occurs quite commonly in many scenarios due to something called the Central Limit Theorem, which reflects the way continuous probability works - more about that later.
Skewness and Kurtosis
You can measure skewness (in which direction the data is skewed and to what degree) and kurtosis (how "peaked" the data is) to get an idea of the shape of the data distribution. In Python, you can use the skew and kurt functions to find this:
End of explanation
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as stats
import statsmodels.api as sm
df = sm.datasets.get_rdataset('GaltonFamilies', package='HistData').data
fathers = df['father']
density = stats.gaussian_kde(fathers)
n, x, _ = plt.hist(fathers, histtype='step', density=True, bins=50)
plt.plot(x, density(x)*2.5)
plt.axvline(fathers.mean(), color='magenta', linestyle='dashed', linewidth=2)
plt.axvline(fathers.median(), color='green', linestyle='dashed', linewidth=2)
plt.show()
Explanation: Now let's look at the distribution of a real dataset - let's see how the heights of the fathers measured in Galton's study of parent and child heights are distributed:
End of explanation
import pandas as pd
df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],
'Salary':[50000,54000,50000,189000,55000,40000,59000],
'Hours':[41,40,36,30,35,39,40],
'Grade':[50,50,46,95,50,5,57]})
numcols = ['Salary', 'Hours', 'Grade']
for col in numcols:
print(df[col].name + ' range: ' + str(df[col].max() - df[col].min()))
Explanation: As you can see, the fathers' height measurements are approximately normally distributed - in other words, they form a more or less normal distribution that is symmetric around the mean.
Measures of Variance
We can see from the distribution plots of our data that the values in our dataset can vary quite widely. We can use various measures to quantify this variance.
Range
A simple way to quantify the variance in a dataset is to identify the difference between the lowest and highest values. This is called the range, and is calculated by subtracting the minimum value from the maximum value.
The following Python code creates a single Pandas dataframe for our school graduate data, and calculates the range for each of the numeric features:
End of explanation
import pandas as pd
from scipy import stats
df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],
'Salary':[50000,54000,50000,189000,55000,40000,59000],
'Hours':[41,40,36,30,35,39,40],
'Grade':[50,50,46,95,50,5,57]})
print(stats.percentileofscore(df['Grade'], 57, 'strict'))
Explanation: Percentiles and Quartiles
The range is easy to calculate, but it's not a particularly useful statistic. For example, a range of 149,000 between the lowest and highest salary does not tell us which value within that range a graduate is most likely to earn - it tells us nothing about how the salaries are distributed around the mean within that range. The range tells us very little about the comparative position of an individual value within the distribution - for example, Frederic scored 57 in his final grade at school, which is a pretty good score (it's more than all but one of his classmates), but this isn't immediately apparent from a score of 57 and a range of 90.
Percentiles
A percentile tells us where a given value is ranked in the overall distribution. For example, 25% of the data in a distribution has a value lower than the 25th percentile; 75% of the data has a value lower than the 75th percentile, and so on. Note that half of the data has a value lower than the 50th percentile - so the 50th percentile is also the median!
Let's examine Frederic's grade using this approach. We know he scored 57, but how does he rank compared to his fellow students?
Well, there are seven students in total, and five of them scored less than Frederic; so we can calculate the percentile for Frederic's grade like this:
\begin{equation}\frac{5}{7} \times 100 \approx 71.4\end{equation}
So Frederic's score puts him at the 71.4th percentile in his class.
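That calculation is easy to reproduce directly, just by counting how many scores fall below Frederic's. A quick sketch:
```python
import pandas as pd

grades = pd.Series([50, 50, 46, 95, 50, 5, 57])

# Strict percentile: the percentage of scores lower than Frederic's 57
below = (grades < 57).sum()
percentile = below / len(grades) * 100

print(percentile)   # approximately 71.4
```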
In Python, you can use the percentileofscore function in the scipy.stats package to calculate the percentile for a given value in a set of values:
End of explanation
import pandas as pd
from scipy import stats
df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],
'Salary':[50000,54000,50000,189000,55000,40000,59000],
'Hours':[41,40,36,30,35,39,40],
'Grade':[50,50,46,95,50,5,57]})
print(stats.percentileofscore(df['Grade'], 57, 'weak'))
Explanation: We've used the strict definition of percentile; but sometimes it's calculated as being the percentage of values that are less than or equal to the value you're comparing. In this case, the calculation for Frederic's percentile would include his own score:
\begin{equation}\frac{6}{7} \times 100 \approx 85.7\end{equation}
You can calculate this way in Python by using the weak mode of the percentileofscore function:
End of explanation
import pandas as pd
from scipy import stats
df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],
'Salary':[50000,54000,50000,189000,55000,40000,59000],
'Hours':[41,40,36,30,35,39,40],
'Grade':[50,50,46,95,50,5,57]})
print(stats.percentileofscore(df['Grade'], 50, 'rank'))
Explanation: We've considered the percentile of Frederic's grade, and used it to rank him compared to his fellow students. So what about Dan, Joann, and Ethan? How do they compare to the rest of the class? They scored the same grade (50), so in a sense they share a percentile.
To deal with this grouped scenario, we can average the percentage rankings for the matching scores. We treat half of the scores matching the one we're ranking as if they are below it, and half as if they are above it. In this case, there were three matching scores of 50, and for each of these we calculate the percentile as if 1 was below and 1 was above. So the calculation for a percentile for Joann based on scores being less than or equal to 50 is:
\begin{equation}(\frac{4}{7}) \times 100 \approx 57.14\end{equation}
The value of 4 consists of the two scores that are below Joann's score of 50, Joann's own score, and half of the scores that are the same as Joann's (of which there are two, so we count one).
In Python, the percentileofscore function has a rank function that calculates grouped percentiles like this:
End of explanation
# Quartiles
import pandas as pd
df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],
'Salary':[50000,54000,50000,189000,55000,40000,59000],
'Hours':[41,40,36,17,35,39,40],
'Grade':[50,50,46,95,50,5,57]})
print(df['Hours'].quantile([0.25, 0.5, 0.75]))
Explanation: Quartiles
Rather than using individual percentiles to compare data, we can consider the overall spread of the data by dividing those percentiles into four quartiles. The first quartile contains the values from the minimum to the 25th percentile, the second from the 25th percentile to the 50th percentile (which is the median), the third from the 50th percentile to the 75th percentile, and the fourth from the 75th percentile to the maximum.
In Python, you can use the quantile function of the pandas.dataframe class to find the threshold values at the 25th, 50th, and 75th percentiles (quantile is a generic term for a ranked position, such as a percentile or quartile).
Run the following code to find the quartile thresholds for the weekly hours worked by our former students:
End of explanation
%matplotlib inline
import pandas as pd
from matplotlib import pyplot as plt
df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],
'Salary':[50000,54000,50000,189000,55000,40000,59000],
'Hours':[41,40,36,30,35,39,40],
'Grade':[50,50,46,95,50,5,57]})
# Plot a box-whisker chart
df['Hours'].plot(kind='box', title='Weekly Hours Distribution', figsize=(10,8))
plt.show()
Explanation: It's usually easier to understand how data is distributed across the quartiles by visualizing it. You can use a histogram, but many data scientists use a kind of visualization called a box plot (or a box and whiskers plot).
Let's create a box plot for the weekly hours:
End of explanation
%matplotlib inline
import pandas as pd
from matplotlib import pyplot as plt
df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],
'Salary':[50000,54000,50000,189000,55000,40000,59000],
'Hours':[41,40,36,30,35,39,40],
'Grade':[50,50,46,95,50,5,57]})
# Plot a box-whisker chart
df['Salary'].plot(kind='box', title='Salary Distribution', figsize=(10,8))
plt.show()
Explanation: The box plot consists of:
- A rectangular box that shows where the data between the 25th and 75th percentiles (the second and third quartiles) lie. This part of the distribution is often referred to as the interquartile range - it contains the middle 50% of the data values.
- Whiskers that extend from the box to the bottom of the first quartile and the top of the fourth quartile to show the full range of the data.
- A line in the box that shows the location of the median (the 50th percentile, which is also the threshold between the second and third quartiles)
In this case, you can see that the interquartile range is between 35 and 40, with the median nearer the top of that range. The range of the first quartile is from around 30 to 35, and the fourth quartile is from 40 to 41.
Outliers
Let's take a look at another box plot - this time showing the distribution of the salaries earned by our former classmates:
End of explanation
%matplotlib inline
import pandas as pd
from matplotlib import pyplot as plt
df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],
'Salary':[50000,54000,50000,189000,55000,40000,59000],
'Hours':[41,40,36,17,35,39,40],
'Grade':[50,50,46,95,50,5,57]})
# Plot a box-whisker chart
df['Salary'].plot(kind='box', title='Salary Distribution', figsize=(10,8), showfliers=False)
plt.show()
Explanation: So what's going on here?
Well, as we've already noticed, Rosie earns significantly more than her former classmates. So much more in fact, that her salary has been identified as an outlier. An outlier is a value that is so far from the center of the distribution compared to other values that it skews the distribution by affecting the mean. There are all sorts of reasons that you might have outliers in your data, including data entry errors, failures in sensors or data-generating equipment, or genuinely anomalous values.
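A box plot typically flags a point as an outlier when it falls more than 1.5 × IQR beyond the first or third quartile (1.5 is the default whisker setting matplotlib uses, and pandas plotting calls matplotlib under the hood). A minimal sketch of that rule applied to the salary data:
```python
import pandas as pd

salaries = pd.Series([50000, 54000, 50000, 189000, 55000, 40000, 59000])

q1 = salaries.quantile(0.25)
q3 = salaries.quantile(0.75)
iqr = q3 - q1

# Values beyond 1.5 x IQR from the quartiles are drawn as individual outlier points
lower_bound = q1 - 1.5 * iqr
upper_bound = q3 + 1.5 * iqr

print(salaries[(salaries < lower_bound) | (salaries > upper_bound)])   # flags the 189,000 salary
```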
So what should we do about it?
This really depends on the data, and what you're trying to use it for. In this case, let's assume we're trying to figure out what's a reasonable expectation of salary for a graduate of our school to earn. Ignoring for the moment that we have an extremely small dataset on which to base our judgement, it looks as if Rosie's salary could be either an error (maybe she mistyped it in the form used to collect data) or a genuine anomaly (maybe she became a professional athlete or took some other extremely highly paid job). Either way, it doesn't seem to represent a salary that a typical graduate might earn.
Let's see what the distribution of the data looks like without the outlier:
End of explanation
%matplotlib inline
import pandas as pd
from matplotlib import pyplot as plt
df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],
'Salary':[50000,54000,50000,189000,55000,40000,59000],
'Hours':[41,40,36,17,35,39,40],
'Grade':[50,50,46,95,50,5,57]})
# Plot a box-whisker chart
df['Grade'].plot(kind='box', title='Grade Distribution', figsize=(10,8))
plt.show()
Explanation: Now it looks like there's a more even distribution of salaries. It's still not quite symmetrical, but there's much less overall variance. There's potentially some cause here to disregard Rosie's salary data when we compare the salaries, as it is tending to skew the analysis.
So is that OK? Can we really just ignore a data value we don't like?
Again, it depends on what you're analyzing. Let's take a look at the distribution of final grades:
End of explanation
%matplotlib inline
import pandas as pd
from matplotlib import pyplot as plt
df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic', 'Jimmie', 'Rhonda', 'Giovanni', 'Francesca', 'Rajab', 'Naiyana', 'Kian', 'Jenny'],
'Grade':[50,50,46,95,50,5,57,42,26,72,78,60,40,17,85]})
# Plot a box-whisker chart
df['Grade'].plot(kind='box', title='Grade Distribution', figsize=(10,8))
plt.show()
Explanation: Once again there are outliers, this time at both ends of the distribution. However, think about what this data represents. If we assume that the grade for the final test is based on a score out of 100, it seems reasonable to expect that some students will score very low (maybe even 0) and some will score very well (maybe even 100); but most will get a score somewhere in the middle. The reason that the low and high scores here look like outliers might just be because we have so few data points. Let's see what happens if we include a few more students in our data:
End of explanation
import pandas as pd
df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],
'Salary':[50000,54000,50000,189000,55000,40000,59000],
'Hours':[41,40,36,17,35,39,40],
'Grade':[50,50,46,95,50,5,57]})
print(df['Grade'].var())
Explanation: With more data, there are some more high and low scores; so we no longer consider the isolated cases to be outliers.
The key point to take away here is that you need to really understand the data and what you're trying to do with it, and you need to ensure that you have a reasonable sample size, before determining what to do with outlier values.
Variance and Standard Deviation
We've seen how to understand the spread of our data distribution using the range, percentiles, and quartiles; and we've seen the effect of outliers on the distribution. Now it's time to look at how to measure the amount of variance in the data.
Variance
Variance is measured as the average of the squared difference from the mean. For a full population, it's indicated by a squared Greek letter sigma (σ<sup>2</sup>) and calculated like this:
\begin{equation}\sigma^{2} = \frac{\displaystyle\sum_{i=1}^{N} (X_{i} -\mu)^{2}}{N}\end{equation}
For a sample, it's indicated as s<sup>2</sup> calculated like this:
\begin{equation}s^{2} = \frac{\displaystyle\sum_{i=1}^{n} (x_{i} -\bar{x})^{2}}{n-1}\end{equation}
In both cases, we sum the difference between the individual data values and the mean and square the result. Then, for a full population we just divide by the number of data items to get the average. When using a sample, we divide by the total number of items minus 1 to correct for sample bias.
Let's work this out for our student grades (assuming our data is a sample from the larger student population).
First, we need to calculate the mean grade:
\begin{equation}\bar{x} = \frac{50+50+46+95+50+5+57}{7}\approx 50.43\end{equation}
Then we can plug that into our formula for the variance:
\begin{equation}s^{2} = \frac{(50-50.43)^{2}+(50-50.43)^{2}+(46-50.43)^{2}+(95-50.43)^{2}+(50-50.43)^{2}+(5-50.43)^{2}+(57-50.43)^{2}}{7-1}\end{equation}
So:
\begin{equation}s^{2} = \frac{0.185+0.185+19.625+1986.485+0.185+2063.885+43.165}{6}\end{equation}
Which simplifies to:
\begin{equation}s^{2} = \frac{4113.715}{6}\end{equation}
Giving the result:
\begin{equation}s^{2} \approx 685.619\end{equation}
The higher the variance, the more spread your data is around the mean.
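To double-check the arithmetic above, here's a short numpy sketch that reproduces the sample variance from first principles:
```python
import numpy as np

grades = np.array([50, 50, 46, 95, 50, 5, 57])

mean = grades.mean()
squared_diffs = (grades - mean) ** 2

# Sample variance: divide the sum of squared differences by n - 1
sample_variance = squared_diffs.sum() / (len(grades) - 1)

print(sample_variance)   # approximately 685.62
```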
In Python, you can use the var function of the pandas.dataframe class to calculate the variance of a column in a dataframe:
End of explanation
import pandas as pd
df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],
'Salary':[50000,54000,50000,189000,55000,40000,59000],
'Hours':[41,40,36,17,35,39,40],
'Grade':[50,50,46,95,50,5,57]})
print(df['Grade'].std())
Explanation: Standard Deviation
To calculate the variance, we squared the difference of each value from the mean. If we hadn't done this, the numerator of our fraction would always end up being zero (because the mean is at the center of our values). However, this means that the variance is not in the same unit of measurement as our data - in our case, since we're calculating the variance for grade points, it's in grade points squared; which is not very helpful.
To get the measure of variance back into the same unit of measurement, we need to find its square root:
\begin{equation}s = \sqrt{685.619} \approx 26.184\end{equation}
So what does this value represent?
It's the standard deviation for our grades data. More formally, it's calculated like this for a full population:
\begin{equation}\sigma = \sqrt{\frac{\displaystyle\sum_{i=1}^{N} (X_{i} -\mu)^{2}}{N}}\end{equation}
Or like this for a sample:
\begin{equation}s = \sqrt{\frac{\displaystyle\sum_{i=1}^{n} (x_{i} -\bar{x})^{2}}{n-1}}\end{equation}
Note that in both cases, it's just the square root of the corresponding variance formula!
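As a quick cross-check of that relationship (a small sketch; the built-in std function does the same calculation for you):
```python
import numpy as np
import pandas as pd

grades = pd.Series([50, 50, 46, 95, 50, 5, 57])

# The standard deviation is simply the square root of the sample variance
print(np.sqrt(grades.var()))   # approximately 26.184
print(grades.std())            # same value
```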
In Python, you can calculate it using the std function:
End of explanation
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as stats
# Create a random standard normal distribution
df = pd.DataFrame(np.random.randn(100000, 1), columns=['Grade'])
# Plot the distribution as a histogram with a density curve
grade = df['Grade']
density = stats.gaussian_kde(grade)
n, x, _ = plt.hist(grade, color='lightgrey', density=True, bins=100)
plt.plot(x, density(x))
# Get the mean and standard deviation
s = df['Grade'].std()
m = df['Grade'].mean()
# Annotate 1 stdev
x1 = [m-s, m+s]
y1 = [0.25, 0.25]
plt.plot(x1,y1, color='magenta')
plt.annotate('1s (68.26%)', (x1[1],y1[1]))
# Annotate 2 stdevs
x2 = [m-(s*2), m+(s*2)]
y2 = [0.05, 0.05]
plt.plot(x2,y2, color='green')
plt.annotate('2s (95.45%)', (x2[1],y2[1]))
# Annotate 3 stdevs
x3 = [m-(s*3), m+(s*3)]
y3 = [0.005, 0.005]
plt.plot(x3,y3, color='orange')
plt.annotate('3s (99.73%)', (x3[1],y3[1]))
# Show the location of the mean
plt.axvline(grade.mean(), color='grey', linestyle='dashed', linewidth=1)
plt.show()
Explanation: Standard Deviation in a Normal Distribution
In statistics and data science, we spend a lot of time considering normal distributions, because they occur so frequently. The standard deviation plays an important role in a normal distribution.
Run the following cell to show a histogram of a standard normal distribution (which is a distribution with a mean of 0 and a standard deviation of 1):
End of explanation
import pandas as pd
df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],
'Salary':[50000,54000,50000,189000,55000,40000,59000],
'Hours':[41,40,36,17,35,39,40],
'Grade':[50,50,46,95,50,5,57]})
print(df.describe())
Explanation: The horizontal colored lines show the percentage of data within 1, 2, and 3 standard deviations of the mean (plus or minus).
In any normal distribution:
- Approximately 68.26% of values fall within one standard deviation from the mean.
- Approximately 95.45% of values fall within two standard deviations from the mean.
- Approximately 99.73% of values fall within three standard deviations from the mean.
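You can verify those percentages empirically on a large random sample - a quick sketch:
```python
import numpy as np

np.random.seed(1)
sample = np.random.randn(100000)   # standard normal: mean 0, standard deviation 1

# Count the proportion of values within 1, 2, and 3 standard deviations of the mean
for k in [1, 2, 3]:
    within = np.abs(sample) <= k
    print('within', k, 'standard deviation(s):', round(within.mean() * 100, 2), '%')
```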
Z Score
So in a normal (or close to normal) distribution, standard deviation provides a way to evaluate how far from a mean a given range of values falls, allowing us to compare where a particular value lies within the distribution. For example, suppose Rosie tells you she was the highest scoring student among her friends - that doesn't really help us assess how well she scored. She may have scored only a fraction of a point above the second-highest scoring student. Even if we know she was in the top quartile; if we don't know how the rest of the grades are distributed it's still not clear how well she performed compared to her friends.
However, if she tells you how many standard deviations higher than the mean her score was, this will help you compare her score to that of her classmates.
So how do we know how many standard deviations above or below the mean a particular value is? We call this a Z Score, and it's calculated like this for a full population:
\begin{equation}Z = \frac{x - \mu}{\sigma}\end{equation}
or like this for a sample:
\begin{equation}Z = \frac{x - \bar{x}}{s}\end{equation}
So, let's examine Rosie's grade of 95. Now that we know the mean grade is 50.43 and the standard deviation is 26.184, we can calculate the Z Score for this grade like this:
\begin{equation}Z = \frac{95 - 50.43}{26.184} = 1.702\end{equation}.
So Rosie's grade is 1.702 standard deviations above the mean.
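A quick sketch that computes the same z score (and the z score of every other student) with pandas:
```python
import pandas as pd

df = pd.DataFrame({'Name': ['Dan', 'Joann', 'Pedro', 'Rosie', 'Ethan', 'Vicky', 'Frederic'],
                   'Grade': [50, 50, 46, 95, 50, 5, 57]})

# z score: how many (sample) standard deviations each grade lies from the mean
df['Grade_Z'] = (df['Grade'] - df['Grade'].mean()) / df['Grade'].std()

print(df[['Name', 'Grade_Z']])   # Rosie's z score is about 1.70
```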
Summarizing Data Distribution in Python
We've seen how to obtain individual statistics in Python, but you can also use the describe function to retrieve summary statistics for all numeric columns in a dataframe. These summary statistics include many of the statistics we've examined so far (though it's worth noting that the median is not included):
End of explanation |
4,457 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TensorFlow Tutorial
Welcome to this week's programming assignment. Until now, you've always used numpy to build neural networks. Now we will step you through a deep learning framework that will allow you to build neural networks more easily. Machine learning frameworks like TensorFlow, PaddlePaddle, Torch, Caffe, Keras, and many others can speed up your machine learning development significantly. All of these frameworks also have a lot of documentation, which you should feel free to read. In this assignment, you will learn to do the following in TensorFlow
Step1: Now that you have imported the library, we will walk you through its different applications. You will start with an example, where we compute for you the loss of one training example.
$$loss = \mathcal{L}(\hat{y}, y) = (\hat y^{(i)} - y^{(i)})^2 \tag{1}$$
Step2: Writing and running programs in TensorFlow has the following steps
Step3: As expected, you will not see 20! You got a tensor saying that the result is a tensor that does not have the shape attribute, and is of type "int32". All you did was put in the 'computation graph', but you have not run this computation yet. In order to actually multiply the two numbers, you will have to create a session and run it.
Step4: Great! To summarize, remember to initialize your variables, create a session and run the operations inside the session.
Next, you'll also have to know about placeholders. A placeholder is an object whose value you can specify only later.
To specify values for a placeholder, you can pass in values by using a "feed dictionary" (feed_dict variable). Below, we created a placeholder for x. This allows us to pass in a number later when we run the session.
Step6: When you first defined x you did not have to specify a value for it. A placeholder is simply a variable that you will assign data to only later, when running the session. We say that you feed data to these placeholders when running the session.
Here's what's happening
Step8: Expected Output
Step10: Expected Output
Step12: Expected Output
Step14: Expected Output
Step15: Expected Output
Step16: Change the index below and run the cell to visualize some examples in the dataset.
Step17: As usual you flatten the image dataset, then normalize it by dividing by 255. On top of that, you will convert each label to a one-hot vector as shown in Figure 1. Run the cell below to do so.
Step19: Note that 12288 comes from $64 \times 64 \times 3$. Each image is square, 64 by 64 pixels, and 3 is for the RGB colors. Please make sure all these shapes make sense to you before continuing.
Your goal is to build an algorithm capable of recognizing a sign with high accuracy. To do so, you are going to build a tensorflow model that is almost the same as one you have previously built in numpy for cat recognition (but now using a softmax output). It is a great occasion to compare your numpy implementation to the tensorflow one.
The model is LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX. The SIGMOID output layer has been converted to a SOFTMAX. A SOFTMAX layer generalizes SIGMOID to when there are more than two classes.
2.1 - Create placeholders
Your first task is to create placeholders for X and Y. This will allow you to later pass your training data in when you run your session.
Exercise
Step21: Expected Output
Step23: Expected Output
Step25: Expected Output
Step27: Expected Output
Step28: Run the following cell to train your model! On our machine it takes about 5 minutes. Your "Cost after epoch 100" should be 1.016458. If it's not, don't waste time; interrupt the training by clicking on the square (⬛) in the upper bar of the notebook, and try to correct your code. If it is the correct cost, take a break and come back in 5 minutes!
Step29: Expected Output | Python Code:
import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.python.framework import ops
from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict
%matplotlib inline
np.random.seed(1)
Explanation: TensorFlow Tutorial
Welcome to this week's programming assignment. Until now, you've always used numpy to build neural networks. Now we will step you through a deep learning framework that will allow you to build neural networks more easily. Machine learning frameworks like TensorFlow, PaddlePaddle, Torch, Caffe, Keras, and many others can speed up your machine learning development significantly. All of these frameworks also have a lot of documentation, which you should feel free to read. In this assignment, you will learn to do the following in TensorFlow:
Initialize variables
Start your own session
Train algorithms
Implement a Neural Network
Programing frameworks can not only shorten your coding time, but sometimes also perform optimizations that speed up your code.
1 - Exploring the Tensorflow Library
To start, you will import the library:
End of explanation
y_hat = tf.constant(36, name='y_hat') # Define y_hat constant. Set to 36.
y = tf.constant(39, name='y') # Define y. Set to 39
loss = tf.Variable((y - y_hat)**2, name='loss') # Create a variable for the loss
init = tf.global_variables_initializer() # When init is run later (session.run(init)),
# the loss variable will be initialized and ready to be computed
with tf.Session() as session: # Create a session and print the output
session.run(init) # Initializes the variables
print(session.run(loss)) # Prints the loss
Explanation: Now that you have imported the library, we will walk you through its different applications. You will start with an example, where we compute for you the loss of one training example.
$$loss = \mathcal{L}(\hat{y}, y) = (\hat y^{(i)} - y^{(i)})^2 \tag{1}$$
End of explanation
a = tf.constant(2)
b = tf.constant(10)
c = tf.multiply(a,b)
print(c)
Explanation: Writing and running programs in TensorFlow has the following steps:
Create Tensors (variables) that are not yet executed/evaluated.
Write operations between those Tensors.
Initialize your Tensors.
Create a Session.
Run the Session. This will run the operations you'd written above.
Therefore, when we created a variable for the loss, we simply defined the loss as a function of other quantities, but did not evaluate its value. To evaluate it, we had to run init=tf.global_variables_initializer(). That initialized the loss variable, and in the last line we were finally able to evaluate the value of loss and print its value.
Now let us look at an easy example. Run the cell below:
End of explanation
sess = tf.Session()
print(sess.run(c))
Explanation: As expected, you will not see 20! You got a tensor saying that the result is a tensor that does not have the shape attribute, and is of type "int32". All you did was put in the 'computation graph', but you have not run this computation yet. In order to actually multiply the two numbers, you will have to create a session and run it.
End of explanation
# Change the value of x in the feed_dict
x = tf.placeholder(tf.int64, name = 'x')
print(sess.run(2 * x, feed_dict = {x: 3}))
sess.close()
Explanation: Great! To summarize, remember to initialize your variables, create a session and run the operations inside the session.
Next, you'll also have to know about placeholders. A placeholder is an object whose value you can specify only later.
To specify values for a placeholder, you can pass in values by using a "feed dictionary" (feed_dict variable). Below, we created a placeholder for x. This allows us to pass in a number later when we run the session.
End of explanation
# GRADED FUNCTION: linear_function
def linear_function():
Implements a linear function:
Initializes W to be a random tensor of shape (4,3)
Initializes X to be a random tensor of shape (3,1)
Initializes b to be a random tensor of shape (4,1)
Returns:
result -- runs the session for Y = WX + b
np.random.seed(1)
### START CODE HERE ### (4 lines of code)
X = tf.constant(np.random.randn(3,1), name = "X")
W = tf.constant(np.random.randn(4,3), name = "W")
b = tf.constant(np.random.randn(4,1), name = "b")
Y = tf.add(tf.matmul(W, X), b)
### END CODE HERE ###
# Create the session using tf.Session() and run it with sess.run(...) on the variable you want to calculate
### START CODE HERE ###
sess = tf.Session()
result = sess.run(Y)
### END CODE HERE ###
# close the session
sess.close()
return result
print( "result = " + str(linear_function()))
Explanation: When you first defined x you did not have to specify a value for it. A placeholder is simply a variable that you will assign data to only later, when running the session. We say that you feed data to these placeholders when running the session.
Here's what's happening: When you specify the operations needed for a computation, you are telling TensorFlow how to construct a computation graph. The computation graph can have some placeholders whose values you will specify only later. Finally, when you run the session, you are telling TensorFlow to execute the computation graph.
1.1 - Linear function
Let's start this programming exercise by computing the following equation: $Y = WX + b$, where $W$ and $X$ are random matrices and b is a random vector.
Exercise: Compute $WX + b$ where $W, X$, and $b$ are drawn from a random normal distribution. W is of shape (4, 3), X is (3,1) and b is (4,1). As an example, here is how you would define a constant X that has shape (3,1):
```python
X = tf.constant(np.random.randn(3,1), name = "X")
```
You might find the following functions helpful:
- tf.matmul(..., ...) to do a matrix multiplication
- tf.add(..., ...) to do an addition
- np.random.randn(...) to initialize randomly
End of explanation
# GRADED FUNCTION: sigmoid
def sigmoid(z):
Computes the sigmoid of z
Arguments:
z -- input value, scalar or vector
Returns:
results -- the sigmoid of z
### START CODE HERE ### ( approx. 4 lines of code)
# Create a placeholder for x. Name it 'x'.
x = tf.placeholder(tf.float32, name = "x")
# compute sigmoid(x)
sigmoid = tf.sigmoid(x)
# Create a session, and run it. Please use the method 2 explained above.
# You should use a feed_dict to pass z's value to x.
with tf.Session() as sess:
# Run session and call the output "result"
result = sess.run(sigmoid, feed_dict = {x: z})
### END CODE HERE ###
return result
print ("sigmoid(0) = " + str(sigmoid(0)))
print ("sigmoid(12) = " + str(sigmoid(12)))
Explanation: Expected Output :
<table>
<tr>
<td>
**result**
</td>
<td>
[[-2.15657382]
[ 2.95891446]
[-1.08926781]
[-0.84538042]]
</td>
</tr>
</table>
1.2 - Computing the sigmoid
Great! You just implemented a linear function. Tensorflow offers a variety of commonly used neural network functions like tf.sigmoid and tf.softmax. For this exercise let's compute the sigmoid function of an input.
You will do this exercise using a placeholder variable x. When running the session, you should use the feed dictionary to pass in the input z. In this exercise, you will have to (i) create a placeholder x, (ii) define the operations needed to compute the sigmoid using tf.sigmoid, and then (iii) run the session.
Exercise : Implement the sigmoid function below. You should use the following:
tf.placeholder(tf.float32, name = "...")
tf.sigmoid(...)
sess.run(..., feed_dict = {x: z})
Note that there are two typical ways to create and use sessions in tensorflow:
Method 1:
```python
sess = tf.Session()
# Run the variables initialization (if needed), run the operations
result = sess.run(..., feed_dict = {...})
sess.close() # Close the session
```
Method 2:
```python
with tf.Session() as sess:
    # run the variables initialization (if needed), run the operations
    result = sess.run(..., feed_dict = {...})
    # This takes care of closing the session for you :)
```
End of explanation
# GRADED FUNCTION: cost
def cost(logits, labels):
Computes the cost using the sigmoid cross entropy
Arguments:
logits -- vector containing z, output of the last linear unit (before the final sigmoid activation)
labels -- vector of labels y (1 or 0)
Note: What we've been calling "z" and "y" in this class are respectively called "logits" and "labels"
in the TensorFlow documentation. So logits will feed into z, and labels into y.
Returns:
cost -- runs the session of the cost (formula (2))
### START CODE HERE ###
# Create the placeholders for "logits" (z) and "labels" (y) (approx. 2 lines)
z = tf.placeholder(tf.float32, name = "z")
y = tf.placeholder(tf.float32, name = "y")
# Use the loss function (approx. 1 line)
cost = tf.nn.sigmoid_cross_entropy_with_logits(logits = z, labels = y)
# Create a session (approx. 1 line). See method 1 above.
sess = tf.Session()
# Run the session (approx. 1 line).
cost = sess.run(cost, feed_dict = {z: logits,y: labels})
# Close the session (approx. 1 line). See method 1 above.
sess.close()
### END CODE HERE ###
return cost
logits = sigmoid(np.array([0.2,0.4,0.7,0.9]))
cost = cost(logits, np.array([0,0,1,1]))
print ("cost = " + str(cost))
Explanation: Expected Output :
<table>
<tr>
<td>
**sigmoid(0)**
</td>
<td>
0.5
</td>
</tr>
<tr>
<td>
**sigmoid(12)**
</td>
<td>
0.999994
</td>
</tr>
</table>
<font color='blue'>
To summarize, you now know how to:
1. Create placeholders
2. Specify the computation graph corresponding to operations you want to compute
3. Create the session
4. Run the session, using a feed dictionary if necessary to specify placeholder variables' values.
1.3 - Computing the Cost
You can also use a built-in function to compute the cost of your neural network. So instead of needing to write code to compute this as a function of $a^{[2](i)}$ and $y^{(i)}$ for i=1...m:
$$ J = - \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log a^{ [2] (i)} + (1-y^{(i)})\log (1-a^{ [2] (i)} )\large )\small\tag{2}$$
you can do it in one line of code in tensorflow!
Exercise: Implement the cross entropy loss. The function you will use is:
tf.nn.sigmoid_cross_entropy_with_logits(logits = ..., labels = ...)
Your code should input z, compute the sigmoid (to get a) and then compute the cross entropy cost $J$. All this can be done using one call to tf.nn.sigmoid_cross_entropy_with_logits, which computes
$$- \frac{1}{m} \sum_{i = 1}^m \large ( \small y^{(i)} \log \sigma(z^{[2](i)}) + (1-y^{(i)})\log (1-\sigma(z^{[2](i)}))\large )\small\tag{2}$$
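For intuition, here's roughly the same element-wise quantity computed in plain numpy on the same inputs used in the cost cell above (a sketch only; tensorflow's built-in uses a more numerically stable formulation internally):
```python
import numpy as np

def np_sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Same inputs as the cost cell: the "logits" passed in are sigmoid(0.2), sigmoid(0.4), ...
logits = np_sigmoid(np.array([0.2, 0.4, 0.7, 0.9]))
labels = np.array([0, 0, 1, 1])

a = np_sigmoid(logits)
loss = -(labels * np.log(a) + (1 - labels) * np.log(1 - a))

print(loss)   # approximately [1.005  1.037  0.414  0.400]
```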
End of explanation
# GRADED FUNCTION: one_hot_matrix
def one_hot_matrix(labels, C):
Creates a matrix where the i-th row corresponds to the ith class number and the jth column
corresponds to the jth training example. So if example j had a label i. Then entry (i,j)
will be 1.
Arguments:
labels -- vector containing the labels
C -- number of classes, the depth of the one hot dimension
Returns:
one_hot -- one hot matrix
### START CODE HERE ###
# Create a tf.constant equal to C (depth), name it 'C'. (approx. 1 line)
C = tf.constant(C)
# Use tf.one_hot, be careful with the axis (approx. 1 line)
one_hot_matrix = tf.one_hot(labels, C, axis=0)
# Create the session (approx. 1 line)
sess = tf.Session()
# Run the session (approx. 1 line)
one_hot = sess.run(one_hot_matrix)
# Close the session (approx. 1 line). See method 1 above.
sess.close()
### END CODE HERE ###
return one_hot
labels = np.array([1,2,3,0,2,1])
one_hot = one_hot_matrix(labels, C = 4)
print ("one_hot = " + str(one_hot))
Explanation: Expected Output :
<table>
<tr>
<td>
**cost**
</td>
<td>
[ 1.00538719 1.03664088 0.41385433 0.39956614]
</td>
</tr>
</table>
1.4 - Using One Hot encodings
Many times in deep learning you will have a y vector with numbers ranging from 0 to C-1, where C is the number of classes. If C is for example 4, then you might have the following y vector which you will need to convert as follows:
<img src="images/onehot.png" style="width:600px;height:150px;">
This is called a "one hot" encoding, because in the converted representation exactly one element of each column is "hot" (meaning set to 1). To do this conversion in numpy, you might have to write a few lines of code. In tensorflow, you can use one line of code:
tf.one_hot(labels, depth, axis)
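For comparison, those few lines of numpy might look roughly like this (a sketch, separate from the graded tensorflow exercise):
```python
import numpy as np

labels = np.array([1, 2, 3, 0, 2, 1])
C = 4

# One row per class, one column per training example
one_hot = np.zeros((C, labels.size))
one_hot[labels, np.arange(labels.size)] = 1

print(one_hot)
```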
Exercise: Implement the function below to take one vector of labels and the total number of classes $C$, and return the one hot encoding. Use tf.one_hot() to do this.
End of explanation
# GRADED FUNCTION: ones
def ones(shape):
Creates an array of ones of dimension shape
Arguments:
shape -- shape of the array you want to create
Returns:
ones -- array containing only ones
### START CODE HERE ###
# Create "ones" tensor using tf.ones(...). (approx. 1 line)
ones = tf.ones(shape)
# Create the session (approx. 1 line)
sess = tf.Session()
# Run the session to compute 'ones' (approx. 1 line)
ones = sess.run(ones)
# Close the session (approx. 1 line). See method 1 above.
sess.close()
### END CODE HERE ###
return ones
print ("ones = " + str(ones([3])))
Explanation: Expected Output:
<table>
<tr>
<td>
**one_hot**
</td>
<td>
[[ 0. 0. 0. 1. 0. 0.]
[ 1. 0. 0. 0. 0. 1.]
[ 0. 1. 0. 0. 1. 0.]
[ 0. 0. 1. 0. 0. 0.]]
</td>
</tr>
</table>
1.5 - Initialize with zeros and ones
Now you will learn how to initialize a vector of zeros and ones. The function you will be calling is tf.ones(). To initialize with zeros you could use tf.zeros() instead. These functions take in a shape and return an array of dimension shape full of zeros and ones respectively.
Exercise: Implement the function below to take in a shape and return an array of ones with that shape.
tf.ones(shape)
End of explanation
# Loading the dataset
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
Explanation: Expected Output:
<table>
<tr>
<td>
**ones**
</td>
<td>
[ 1. 1. 1.]
</td>
</tr>
</table>
2 - Building your first neural network in tensorflow
In this part of the assignment you will build a neural network using tensorflow. Remember that there are two parts to implement a tensorflow model:
Create the computation graph
Run the graph
Let's delve into the problem you'd like to solve!
2.0 - Problem statement: SIGNS Dataset
One afternoon, with some friends we decided to teach our computers to decipher sign language. We spent a few hours taking pictures in front of a white wall and came up with the following dataset. It's now your job to build an algorithm that would facilitate communications from a speech-impaired person to someone who doesn't understand sign language.
Training set: 1080 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (180 pictures per number).
Test set: 120 pictures (64 by 64 pixels) of signs representing numbers from 0 to 5 (20 pictures per number).
Note that this is a subset of the SIGNS dataset. The complete dataset contains many more signs.
Here are examples for each number, and an explanation of how we represent the labels. These are the original pictures, before we lowered the image resolution to 64 by 64 pixels.
<img src="images/hands.png" style="width:800px;height:350px;"><caption><center> <u><font color='purple'> Figure 1</u><font color='purple'>: SIGNS dataset <br> <font color='black'> </center>
Run the following code to load the dataset.
End of explanation
# Example of a picture
index = 0
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index])))
Explanation: Change the index below and run the cell to visualize some examples in the dataset.
End of explanation
# Flatten the training and test images
X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T
X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T
# Normalize image vectors
X_train = X_train_flatten/255.
X_test = X_test_flatten/255.
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6)
Y_test = convert_to_one_hot(Y_test_orig, 6)
print ("number of training examples = " + str(X_train.shape[1]))
print ("number of test examples = " + str(X_test.shape[1]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
Explanation: As usual you flatten the image dataset, then normalize it by dividing by 255. On top of that, you will convert each label to a one-hot vector as shown in Figure 1. Run the cell below to do so.
End of explanation
# GRADED FUNCTION: create_placeholders
def create_placeholders(n_x, n_y):
Creates the placeholders for the tensorflow session.
Arguments:
n_x -- scalar, size of an image vector (num_px * num_px = 64 * 64 * 3 = 12288)
n_y -- scalar, number of classes (from 0 to 5, so -> 6)
Returns:
X -- placeholder for the data input, of shape [n_x, None] and dtype "float"
Y -- placeholder for the input labels, of shape [n_y, None] and dtype "float"
Tips:
- You will use None because it lets us be flexible about the number of examples fed through the placeholders.
In fact, the number of examples during test/train is different.
### START CODE HERE ### (approx. 2 lines)
X = tf.placeholder(tf.float32, shape = [n_x, None], name = "X")
Y = tf.placeholder(tf.float32, shape = [n_y, None], name = "Y")
### END CODE HERE ###
return X, Y
X, Y = create_placeholders(12288, 6)
print ("X = " + str(X))
print ("Y = " + str(Y))
Explanation: Note that 12288 comes from $64 \times 64 \times 3$. Each image is square, 64 by 64 pixels, and 3 is for the RGB colors. Please make sure all these shapes make sense to you before continuing.
Your goal is to build an algorithm capable of recognizing a sign with high accuracy. To do so, you are going to build a tensorflow model that is almost the same as one you have previously built in numpy for cat recognition (but now using a softmax output). It is a great occasion to compare your numpy implementation to the tensorflow one.
The model is LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX. The SIGMOID output layer has been converted to a SOFTMAX. A SOFTMAX layer generalizes SIGMOID to when there are more than two classes.
2.1 - Create placeholders
Your first task is to create placeholders for X and Y. This will allow you to later pass your training data in when you run your session.
Exercise: Implement the function below to create the placeholders in tensorflow.
End of explanation
# GRADED FUNCTION: initialize_parameters
def initialize_parameters():
Initializes parameters to build a neural network with tensorflow. The shapes are:
W1 : [25, 12288]
b1 : [25, 1]
W2 : [12, 25]
b2 : [12, 1]
W3 : [6, 12]
b3 : [6, 1]
Returns:
parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3
tf.set_random_seed(1) # so that your "random" numbers match ours
### START CODE HERE ### (approx. 6 lines of code)
W1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer())
W2 = tf.get_variable("W2", [12,25], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b2 = tf.get_variable("b2", [12,1], initializer = tf.zeros_initializer())
W3 = tf.get_variable("W3", [6,12], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b3 = tf.get_variable("b3", [6,1], initializer = tf.zeros_initializer())
### END CODE HERE ###
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2,
"W3": W3,
"b3": b3}
return parameters
tf.reset_default_graph()
with tf.Session() as sess:
parameters = initialize_parameters()
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Explanation: Expected Output:
<table>
<tr>
<td>
**X**
</td>
<td>
Tensor("Placeholder_1:0", shape=(12288, ?), dtype=float32) (not necessarily Placeholder_1)
</td>
</tr>
<tr>
<td>
**Y**
</td>
<td>
Tensor("Placeholder_2:0", shape=(10, ?), dtype=float32) (not necessarily Placeholder_2)
</td>
</tr>
</table>
2.2 - Initializing the parameters
Your second task is to initialize the parameters in tensorflow.
Exercise: Implement the function below to initialize the parameters in tensorflow. You are going use Xavier Initialization for weights and Zero Initialization for biases. The shapes are given below. As an example, to help you, for W1 and b1 you could use:
python
W1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer())
Please use seed = 1 to make sure your results match ours.
End of explanation
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX
Arguments:
X -- input dataset placeholder, of shape (input size, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
the shapes are given in initialize_parameters
Returns:
Z3 -- the output of the last LINEAR unit
# Retrieve the parameters from the dictionary "parameters"
W1 = parameters['W1']
b1 = parameters['b1']
W2 = parameters['W2']
b2 = parameters['b2']
W3 = parameters['W3']
b3 = parameters['b3']
### START CODE HERE ### (approx. 5 lines) # Numpy Equivalents:
Z1 = tf.add(tf.matmul(W1, X), b1) # Z1 = np.dot(W1, X) + b1
A1 = tf.nn.relu(Z1) # A1 = relu(Z1)
Z2 = tf.add(tf.matmul(W2, A1), b2) # Z2 = np.dot(W2, a1) + b2
A2 = tf.nn.relu(Z2) # A2 = relu(Z2)
Z3 = tf.add(tf.matmul(W3, A2), b3)                                             # Z3 = np.dot(W3, A2) + b3
### END CODE HERE ###
return Z3
tf.reset_default_graph()
with tf.Session() as sess:
X, Y = create_placeholders(12288, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
print("Z3 = " + str(Z3))
Explanation: Expected Output:
<table>
<tr>
<td>
**W1**
</td>
<td>
< tf.Variable 'W1:0' shape=(25, 12288) dtype=float32_ref >
</td>
</tr>
<tr>
<td>
**b1**
</td>
<td>
< tf.Variable 'b1:0' shape=(25, 1) dtype=float32_ref >
</td>
</tr>
<tr>
<td>
**W2**
</td>
<td>
< tf.Variable 'W2:0' shape=(12, 25) dtype=float32_ref >
</td>
</tr>
<tr>
<td>
**b2**
</td>
<td>
< tf.Variable 'b2:0' shape=(12, 1) dtype=float32_ref >
</td>
</tr>
</table>
As expected, the parameters haven't been evaluated yet.
2.3 - Forward propagation in tensorflow
You will now implement the forward propagation module in tensorflow. The function will take in a dictionary of parameters and it will complete the forward pass. The functions you will be using are:
tf.add(...,...) to do an addition
tf.matmul(...,...) to do a matrix multiplication
tf.nn.relu(...) to apply the ReLU activation
Question: Implement the forward pass of the neural network. We have added the numpy equivalents as comments so that you can compare the tensorflow implementation to numpy. It is important to note that the forward propagation stops at Z3. The reason is that in tensorflow the last linear layer output is given as input to the function computing the loss. Therefore, you don't need A3!
End of explanation
# GRADED FUNCTION: compute_cost
def compute_cost(Z3, Y):
Computes the cost
Arguments:
Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
Y -- "true" labels vector placeholder, same shape as Z3
Returns:
cost - Tensor of the cost function
# to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...)
logits = tf.transpose(Z3)
labels = tf.transpose(Y)
### START CODE HERE ### (1 line of code)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = Z3, labels = Y))
### END CODE HERE ###
return cost
tf.reset_default_graph()
with tf.Session() as sess:
X, Y = create_placeholders(12288, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
cost = compute_cost(Z3, Y)
print("cost = " + str(cost))
Explanation: Expected Output:
<table>
<tr>
<td>
**Z3**
</td>
<td>
Tensor("Add_2:0", shape=(6, ?), dtype=float32)
</td>
</tr>
</table>
You may have noticed that the forward propagation doesn't output any cache. You will understand why below, when we get to backpropagation.
2.4 Compute cost
As seen before, it is very easy to compute the cost using:
python
tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = ..., labels = ...))
Question: Implement the cost function below.
- It is important to know that the "logits" and "labels" inputs of tf.nn.softmax_cross_entropy_with_logits are expected to be of shape (number of examples, num_classes). We have thus transposed Z3 and Y for you.
- Besides, tf.reduce_mean takes the average of the per-example losses.
End of explanation
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001,
num_epochs = 1500, minibatch_size = 32, print_cost = True):
Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX.
Arguments:
X_train -- training set, of shape (input size = 12288, number of training examples = 1080)
Y_train -- training set labels, of shape (output size = 6, number of training examples = 1080)
X_test -- test set, of shape (input size = 12288, number of test examples = 120)
Y_test -- test set, of shape (output size = 6, number of test examples = 120)
learning_rate -- learning rate of the optimization
num_epochs -- number of epochs of the optimization loop
minibatch_size -- size of a minibatch
print_cost -- True to print the cost every 100 epochs
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables
tf.set_random_seed(1) # to keep consistent results
seed = 3 # to keep consistent results
(n_x, m) = X_train.shape # (n_x: input size, m : number of examples in the train set)
n_y = Y_train.shape[0] # n_y : output size
costs = [] # To keep track of the cost
# Create Placeholders of shape (n_x, n_y)
### START CODE HERE ### (1 line)
X, Y = create_placeholders(n_x, n_y)
### END CODE HERE ###
# Initialize parameters
### START CODE HERE ### (1 line)
parameters = initialize_parameters()
### END CODE HERE ###
# Forward propagation: Build the forward propagation in the tensorflow graph
### START CODE HERE ### (1 line)
Z3 = forward_propagation(X, parameters)
### END CODE HERE ###
# Cost function: Add cost function to tensorflow graph
### START CODE HERE ### (1 line)
cost = compute_cost(Z3, Y)
### END CODE HERE ###
# Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer.
### START CODE HERE ### (1 line)
optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(cost)
### END CODE HERE ###
# Initialize all the variables
init = tf.global_variables_initializer()
# Start the session to compute the tensorflow graph
with tf.Session() as sess:
# Run the initialization
sess.run(init)
# Do the training loop
for epoch in range(num_epochs):
epoch_cost = 0. # Defines a cost related to an epoch
num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
seed = seed + 1
minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# IMPORTANT: The line that runs the graph on a minibatch.
# Run the session to execute the "optimizer" and the "cost", the feedict should contain a minibatch for (X,Y).
### START CODE HERE ### (1 line)
_ , minibatch_cost = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})
### END CODE HERE ###
epoch_cost += minibatch_cost / num_minibatches
# Print the cost every 100 epochs and record it every 5 epochs for the plot
if print_cost == True and epoch % 100 == 0:
print ("Cost after epoch %i: %f" % (epoch, epoch_cost))
if print_cost == True and epoch % 5 == 0:
costs.append(epoch_cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per tens)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
# lets save the parameters in a variable
parameters = sess.run(parameters)
print ("Parameters have been trained!")
# Calculate the correct predictions
correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y))
# Calculate accuracy on the test set
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print ("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train}))
print ("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test}))
return parameters
Explanation: Expected Output:
<table>
<tr>
<td>
**cost**
</td>
<td>
Tensor("Mean:0", shape=(), dtype=float32)
</td>
</tr>
</table>
2.5 - Backward propagation & parameter updates
This is where you become grateful to programming frameworks. All the backpropagation and the parameter updates are taken care of in 1 line of code. It is very easy to incorporate this line in the model.
After you compute the cost function, you will create an "optimizer" object. You have to call this object along with the cost when running the tf.session. When called, it will perform an optimization on the given cost with the chosen method and learning rate.
For instance, for gradient descent the optimizer would be:
python
optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost)
To make the optimization you would do:
python
_ , c = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})
This computes the backpropagation by passing through the tensorflow graph in the reverse order. From cost to inputs.
Note: When coding, we often use _ as a "throwaway" variable to store values that we won't need to use later. Here, _ takes on the evaluated value of optimizer, which we don't need (and c takes the value of the cost variable).
2.6 - Building the model
Now, you will bring it all together!
Exercise: Implement the model. You will be calling the functions you had previously implemented.
End of explanation
parameters = model(X_train, Y_train, X_test, Y_test)
Explanation: Run the following cell to train your model! On our machine it takes about 5 minutes. Your "Cost after epoch 100" should be 1.016458. If it's not, don't waste time; interrupt the training by clicking on the square (⬛) in the upper bar of the notebook, and try to correct your code. If it is the correct cost, take a break and come back in 5 minutes!
End of explanation
import scipy
from PIL import Image
from scipy import ndimage
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "thumbs_up.jpg"
## END CODE HERE ##
# We preprocess your image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(64,64)).reshape((1, 64*64*3)).T
my_image_prediction = predict(my_image, parameters)
plt.imshow(image)
print("Your algorithm predicts: y = " + str(np.squeeze(my_image_prediction)))
Explanation: Expected Output:
<table>
<tr>
<td>
**Train Accuracy**
</td>
<td>
0.999074
</td>
</tr>
<tr>
<td>
**Test Accuracy**
</td>
<td>
0.716667
</td>
</tr>
</table>
Amazing, your algorithm can recognize a sign representing a figure between 0 and 5 with 71.7% accuracy.
Insights:
- Your model seems big enough to fit the training set well. However, given the difference between train and test accuracy, you could try to add L2 or dropout regularization to reduce overfitting (see the sketch just after these insights).
- Think about the session as a block of code to train the model. Each time you run the session on a minibatch, it trains the parameters. In total you have run the session a large number of times (1500 epochs) until you obtained well trained parameters.
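As a sketch of the first insight (one possible approach, not part of the graded assignment): L2 regularization can be added in this TF1 setup by folding an L2 penalty on the weight matrices into the cost before it is handed to the optimizer. The value of lambd below is an arbitrary placeholder to be tuned.
python
# Sketch only: parameters, Z3 and Y are the objects built inside model() above.
lambd = 0.01
l2_penalty = lambd * (tf.nn.l2_loss(parameters["W1"]) +
                      tf.nn.l2_loss(parameters["W2"]) +
                      tf.nn.l2_loss(parameters["W3"]))
cost_with_l2 = compute_cost(Z3, Y) + l2_penalty
optimizer = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(cost_with_l2)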
2.7 - Test with your own image (optional / ungraded exercise)
Congratulations on finishing this assignment. You can now take a picture of your hand and see the output of your model. To do that:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Write your image's name in the following code
4. Run the code and check if the algorithm is right!
End of explanation |
4,458 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: ¿Por qué PyCUDA?
Hasta ahora hemos visto que si bien CUDA no es un lenguaje imposible de aprender, puede llegar a ser un dolor de cabeza el tener muchos apuntadores y manejar la memoria de un modo tan rudimentario.
Sin embargo hay alternativas que nos permiten trabajar en entornos más agradables, un ejemplo de ellos es PyCUDA creado con Andreas Klöckner. Básicamente PyCUDA se encarga de mapear todo CUDA dentro de Python.
Por poner un ejemplo, un código simple sería el siguiente
Step2: Al correr este programa vamos a obtener un montón de ceros; algo no muy interesante. Sin embargo detrás de escenas sí pasó algo interesante.
PyCUDA compiló el código fuente y lo cargó a la tarjeta.
Se asignó memoria automáticamente, además de copiar las cosas de CPU a GPU y de vuelta.
Por último la limpieza (liberación de memoria) se hace sola.
Útil ¿cierto?
Usando PyCUDA
Para empezar debemos importar e incializar PyCUDA
Step3: Transferir datos
El siguiente paso es transferir datos al GPU. Principalmente arreglos de numpy. Por ejemplo, tomemos un arreglo de números aleatorios de $4 \times 4$
Step4: sin embargo nuestro arreglo a consiste en números de doble precisión, dado que no todos los GPU de NVIDIA cuentan con doble precisión es que hacemos lo siguiente
Step5: finalmente, necesitmos un arreglo hacia el cuál transferir la información, así que deberíamos guardar la memoria en el dispositivo
Step6: como último paso, necesitamos tranferir los datos al GPU
Step8: Ejecutando kernels
Durante este capítulo nos centraremos en un ejemplo muy simple. Escribir un código para duplicar cada una de las entradas en un arreglo, en seguida escribimos el kernel en CUDA C, y se lo otorgamos al constructor de pycuda.compiler.SourceModule
Step9: Si no hay errores, el código ahora ha sido compilado y cargado en el dispositivo. Encontramos una referencia a nuestra pycuda.driver.Function y la llamamos, especificando a_gpu como el argumento, y un tamaño de bloque de $4\times 4$
Step10: Finalmente recogemos la información del GPU y la mostramos con el a original | Python Code:
import pycuda.autoinit
import pycuda.driver as drv
import numpy
from pycuda.compiler import SourceModule
mod = SourceModule(
__global__ void multiplicar(float *dest, float *a, float *b)
{
const int i = threadIdx.x;
dest[i] = a[i] * b[i];
}
)
multiplicar = mod.get_function("multiplicar")
a = numpy.random.randn(400).astype(numpy.float32)
b = numpy.random.randn(400).astype(numpy.float32)
dest = numpy.zeros_like(a)
print dest
multiplicar(
drv.Out(dest), drv.In(a), drv.In(b),
block=(400,1,1), grid=(1,1))
print dest
print dest-a*b
Explanation: ¿Por qué PyCUDA?
Hasta ahora hemos visto que si bien CUDA no es un lenguaje imposible de aprender, puede llegar a ser un dolor de cabeza el tener muchos apuntadores y manejar la memoria de un modo tan rudimentario.
Sin embargo hay alternativas que nos permiten trabajar en entornos más agradables, un ejemplo de ellos es PyCUDA creado con Andreas Klöckner. Básicamente PyCUDA se encarga de mapear todo CUDA dentro de Python.
Por poner un ejemplo, un código simple sería el siguiente
End of explanation
import pycuda.driver as cuda
import pycuda.autoinit
from pycuda.compiler import SourceModule
Explanation: Al correr este programa vamos a obtener un montón de ceros; algo no muy interesante. Sin embargo detrás de escenas sí pasó algo interesante.
PyCUDA compiló el código fuente y lo cargó a la tarjeta.
Se asignó memoria automáticamente, además de copiar las cosas de CPU a GPU y de vuelta.
Por último la limpieza (liberación de memoria) se hace sola.
Útil ¿cierto?
Usando PyCUDA
Para empezar debemos importar e incializar PyCUDA
End of explanation
import numpy
a = numpy.random.randn(4,4)
Explanation: Transferir datos
El siguiente paso es transferir datos al GPU. Principalmente arreglos de numpy. Por ejemplo, tomemos un arreglo de números aleatorios de $4 \times 4$
End of explanation
a = a.astype(numpy.float32)
Explanation: sin embargo nuestro arreglo a consiste en números de doble precisión, dado que no todos los GPU de NVIDIA cuentan con doble precisión es que hacemos lo siguiente
End of explanation
a_gpu = cuda.mem_alloc(a.nbytes)
Explanation: finalmente, necesitmos un arreglo hacia el cuál transferir la información, así que deberíamos guardar la memoria en el dispositivo:
End of explanation
cuda.memcpy_htod(a_gpu, a)
Explanation: como último paso, necesitamos tranferir los datos al GPU
End of explanation
mod = SourceModule(
__global__ void duplicar(float *a)
{
int idx = threadIdx.x + threadIdx.y*4;
a[idx] *= 2;
}
)
Explanation: Ejecutando kernels
Durante este capítulo nos centraremos en un ejemplo muy simple. Escribir un código para duplicar cada una de las entradas en un arreglo, en seguida escribimos el kernel en CUDA C, y se lo otorgamos al constructor de pycuda.compiler.SourceModule
End of explanation
mod
func = mod.get_function("duplicar")
func(a_gpu, block=(4,4,1))
func
Explanation: Si no hay errores, el código ahora ha sido compilado y cargado en el dispositivo. Encontramos una referencia a nuestra pycuda.driver.Function y la llamamos, especificando a_gpu como el argumento, y un tamaño de bloque de $4\times 4$:
End of explanation
a_duplicado = numpy.empty_like(a)
cuda.memcpy_dtoh(a_duplicado, a_gpu)
print a_duplicado
print a
print(type(a))
print(type(a_gpu))
print(type(a_duplicado))
a_duplicado
Explanation: Finalmente recogemos la información del GPU y la mostramos con el a original
End of explanation |
4,459 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Clase 4
Step1: 2. Uso de Pandas para descargar datos de precios de cierre
Ahora, en forma de función
Step2: Una vez cargados los paquetes, es necesario definir los tickers de las acciones que se usarán, la fuente de descarga (Yahoo en este caso, pero también se puede desde Google) y las fechas de interés. Con esto, la función DataReader del paquete pandas_datareader bajará los precios solicitados.
Nota
Step3: Nota
Step5: 4. Optimización de portafolios
Step6: 5. ETF | Python Code:
#importar los paquetes que se van a usar
import pandas as pd
import pandas_datareader.data as web
import numpy as np
import datetime
from datetime import datetime
import scipy.stats as stats
import scipy as sp
import scipy.optimize as scopt
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
#algunas opciones para Python
pd.set_option('display.notebook_repr_html', True)
pd.set_option('display.max_columns', 6)
pd.set_option('display.max_rows', 10)
pd.set_option('display.width', 78)
pd.set_option('precision', 3)
def def_portafolio(tickers, participacion=None):
if (participacion is None):
part = np.ones(len(tickers))/len(tickers)
portfolio = pd.DataFrame({'Tickers': tickers, 'Participacion': participacion}, index=tickers)
return portfolio
portafolio = def_portafolio(['Acción A', 'Acción B'], [1, 1])
portafolio
rendimientos = pd.DataFrame({'Acción A': [0.1, 0.24, 0.05, -0.02, 0.2],
'Acción B': [-0.15, -0.2, -0.01, 0.04, -0.15]})
rendimientos
def valor_portafolio_ponderado(portafolio, rendimientos, name='Valor'):
total_participacion = portafolio.Participacion.sum()
ponderaciones=portafolio.Participacion/total_participacion
rendimientos_ponderados = rendimientos*ponderaciones
return pd.DataFrame({name: rendimientos_ponderados.sum(axis=1)})
rend_portafolio=valor_portafolio_ponderado(portafolio, rendimientos, 'Valor')
rend_portafolio
total_rend=pd.concat([rendimientos, rend_portafolio], axis=1)
total_rend
total_rend.std()
rendimientos.corr()
total_rend.plot(figsize=(8,6));
def plot_portafolio_rend(rend, title=None):
rend.plot(figsize=(8,6))
plt.xlabel('Año')
plt.ylabel('Rendimientos')
if (title is not None): plt.title(title)
plt.show()
plot_portafolio_rend(total_rend);
Explanation: Clase 4: Portafolios y riesgo
Juan Diego Sánchez Torres,
Profesor, MAF ITESO
Departamento de Matemáticas y Física
[email protected]
Tel. 3669-34-34 Ext. 3069
Oficina: Cubículo 4, Edificio J, 2do piso
1. Motivación
En primer lugar, para poder bajar precios y información sobre opciones de Yahoo, es necesario cargar algunos paquetes de Python. En este caso, el paquete principal será Pandas. También, se usarán el Scipy y el Numpy para las matemáticas necesarias y, el Matplotlib y el Seaborn para hacer gráficos de las series de datos.
End of explanation
def get_historical_closes(ticker, start_date, end_date):
p = web.DataReader(ticker, "yahoo", start_date, end_date).sort_index('major_axis')
d = p.to_frame()['Adj Close'].reset_index()
d.rename(columns={'minor': 'Ticker', 'Adj Close': 'Close'}, inplace=True)
pivoted = d.pivot(index='Date', columns='Ticker')
pivoted.columns = pivoted.columns.droplevel(0)
return pivoted
Explanation: 2. Uso de Pandas para descargar datos de precios de cierre
Ahora, en forma de función
End of explanation
closes=get_historical_closes(['AA','AAPL','MSFT','KO'], '2010-01-01', '2016-12-31')
closes
closes.plot(figsize=(8,6));
Explanation: Una vez cargados los paquetes, es necesario definir los tickers de las acciones que se usarán, la fuente de descarga (Yahoo en este caso, pero también se puede desde Google) y las fechas de interés. Con esto, la función DataReader del paquete pandas_datareader bajará los precios solicitados.
Nota: Usualmente, las distribuciones de Python no cuentan, por defecto, con el paquete pandas_datareader. Por lo que será necesario instalarlo aparte. El siguiente comando instala el paquete en Anaconda:
*conda install -c conda-forge pandas-datareader *
End of explanation
def calc_daily_returns(closes):
return np.log(closes/closes.shift(1))[1:]
daily_returns=calc_daily_returns(closes)
daily_returns.plot(figsize=(8,6));
daily_returns.corr()
def calc_annual_returns(daily_returns):
grouped = np.exp(daily_returns.groupby(lambda date: date.year).sum())-1
return grouped
annual_returns = calc_annual_returns(daily_returns)
annual_returns
def calc_portfolio_var(returns, weights=None):
if (weights is None):
weights = np.ones(returns.columns.size)/returns.columns.size
sigma = np.cov(returns.T,ddof=0)
var = (weights * sigma * weights.T).sum()
return var
calc_portfolio_var(annual_returns)
def sharpe_ratio(returns, weights = None, risk_free_rate = 0.015):
n = returns.columns.size
if weights is None: weights = np.ones(n)/n
var = calc_portfolio_var(returns, weights)
means = returns.mean()
return (means.dot(weights) - risk_free_rate)/np.sqrt(var)
sharpe_ratio(annual_returns)
Explanation: Nota: Para descargar datos de la bolsa mexicana de valores (BMV), el ticker debe tener la extensión MX.
Por ejemplo: MEXCHEM.MX, LABB.MX, GFINBURO.MX y GFNORTEO.MX.
3. Formulación del riesgo de un portafolio
End of explanation
def f(x): return 2+x**2
scopt.fmin(f, 10)
def negative_sharpe_ratio_n_minus_1_stock(weights,returns,risk_free_rate):
Given n-1 weights, return a negative sharpe ratio
weights2 = sp.append(weights, 1-np.sum(weights))
return -sharpe_ratio(returns, weights2, risk_free_rate)
def optimize_portfolio(returns, risk_free_rate):
w0 = np.ones(returns.columns.size-1, dtype=float) * 1.0 / returns.columns.size
w1 = scopt.fmin(negative_sharpe_ratio_n_minus_1_stock, w0, args=(returns, risk_free_rate))
final_w = sp.append(w1, 1 - np.sum(w1))
final_sharpe = sharpe_ratio(returns, final_w, risk_free_rate)
return (final_w, final_sharpe)
optimize_portfolio(annual_returns, 0.0003)
def objfun(W, R, target_ret):
stock_mean = np.mean(R,axis=0)
port_mean = np.dot(W,stock_mean)
cov=np.cov(R.T)
port_var = np.dot(np.dot(W,cov),W.T)
penalty = 2000*abs(port_mean-target_ret)
return np.sqrt(port_var) + penalty
def calc_efficient_frontier(returns):
result_means = []
result_stds = []
result_weights = []
means = returns.mean()
min_mean, max_mean = means.min(), means.max()
nstocks = returns.columns.size
for r in np.linspace(min_mean, max_mean, 150):
weights = np.ones(nstocks)/nstocks
bounds = [(0,1) for i in np.arange(nstocks)]
constraints = ({'type': 'eq', 'fun': lambda W: np.sum(W) - 1})
results = scopt.minimize(objfun, weights, (returns, r), method='SLSQP', constraints = constraints, bounds = bounds)
if not results.success: # handle error
raise Exception(results.message)
result_means.append(np.round(r,4)) # 4 decimal places
std_=np.round(np.std(np.sum(returns*results.x,axis=1)),6)
result_stds.append(std_)
result_weights.append(np.round(results.x, 5))
return {'Means': result_means, 'Stds': result_stds, 'Weights': result_weights}
frontier_data = calc_efficient_frontier(annual_returns)
def plot_efficient_frontier(ef_data):
plt.figure(figsize=(12,8))
plt.title('Efficient Frontier')
plt.xlabel('Standard Deviation of the porfolio (Risk))')
plt.ylabel('Return of the portfolio')
plt.plot(ef_data['Stds'], ef_data['Means'], '--');
plot_efficient_frontier(frontier_data)
Explanation: 4. Optimización de portafolios
End of explanation
etf=get_historical_closes(['PICK','IBB','XBI','MLPX','AMLP','VGT','RYE','IEO','AAPL'], '2014-01-01', '2014-12-31')
etf.plot(figsize=(8,6));
daily_returns_etf=calc_daily_returns(etf)
daily_returns_etf
daily_returns_etf_mean=1000*daily_returns_etf.mean()
daily_returns_etf_mean
daily_returns_etf_std=daily_returns_etf.std()
daily_returns_etf_std
daily_returns_ms=pd.concat([daily_returns_etf_mean, daily_returns_etf_std], axis=1)
daily_returns_ms
from sklearn.cluster import KMeans
random_state = 10
y_pred = KMeans(n_clusters=4, random_state=random_state).fit_predict(daily_returns_ms)
plt.scatter(daily_returns_etf_mean, daily_returns_etf_std, c=y_pred);
plt.axis([-1, 1, 0.01, 0.03]);
import scipy.cluster.hierarchy as hac
daily_returns_etf.corr()
Z = hac.linkage(daily_returns_etf.corr(), 'single')
# Plot the dendogram
plt.figure(figsize=(25, 10))
plt.title('Hierarchical Clustering Dendrogram')
plt.xlabel('sample index')
plt.ylabel('distance')
hac.dendrogram(
Z,
leaf_rotation=90., # rotates the x axis labels
leaf_font_size=8., # font size for the x axis labels
)
plt.show()
Explanation: 5. ETF
End of explanation |
4,460 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute
Now that we have datasets added to our Bundle, our next step is to run the forward model and compute a synthetic model for each of these datasets.
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
Step1: Now we'll import our packages and initialize the default PHOEBE bundle.
Step2: And we'll attach some dummy datasets. See the datasets tutorial for more details.
Step3: Default Compute Options
Any default Bundle already has a set of default compute options to run the backend for PHOEBE 2. In most cases, you can just edit the options in this default set of compte options.
Step4: Adding Compute Options
In other cases, we may want to manually add additional sets of compute options.
This syntax should look very familiar by now, it takes a function (or the name of a recognized function in phoebe.parameters.compute) and then any
kwargs to set in that ParameterSet, passed to b.add_compute.
Let's say that we want to create two sets of compute options - in this example, we'll create one called 'preview' which will cut some corners to quickly get us a model, and one called 'detailed' which will get a much more precise model but likely take longer. As with other tags, the string you provide for the compute tag is up to you (so long as it doesn't raise an error because it conflicts with other tags).
Step5: Editing Compute Options
Backend-Specific Compute Options
Most of the parameters in the compute options are specific to the backend being used. Here, of course, we're using the PHOEBE 2.0 backend - but for details on other backends see the Advanced
Step6: as you can see, there is a copy for both of our compute options ('preview' and 'detailed').
If we know which set of compute options we'll be using, or only want to enable/disable for a given set, then we can do that (we could also use b.disable_dataset and b.enable_dataset
Step7: or to enable/disable a dataset for all sets of compute options, we can use the set_value_all method
Step8: If the enabled parameter is missing for a set of compute options - it is likely that that particular backend does not support that dataset type.
Running Compute
run_compute takes arguments for the compute tag as well as the model tag for the resulting synthetic model(s).
You do not need to provide the compute tag if only 0 or 1 set of compute options exist in the Bundle. If there are no compute options, the default PHOEBE 2 options will be added on your behalf and used. If there is a single set of compute options, those will be assumed. In our case, we have two compute options in the Bundle (with tags 'preview' and 'detailed') so we must provide an argument for compute.
If you do not provide a tag for the model, one will be created for you called 'latest'. Note that the 'latest' model will be overwritten without throwing any errors, whereas other named models can only be overwritten if you pass overwrite=True (see the run_compute API docs for details). In general, though, if you want to maintain the results from previous calls to run_compute, you must provide a NEW model tag.
Step9: Storing/Tagging Models
Now let's compute models for three different 'versions' of parameters. By providing a model tag, we can keep the synthetics for each of these different runs in the bundle - which will be handy later on for plotting and comparing models.
Step10: We will now have three new sets of synthetics which can be compared, plotted, or removed.
Step11: To remove a model, call remove_model.
Step12: Accessing Synthetics from Models
The synthetics can be accessed by their dataset and model tags. | Python Code:
#!pip install -I "phoebe>=2.4,<2.5"
Explanation: Compute
Now that we have datasets added to our Bundle, our next step is to run the forward model and compute a synthetic model for each of these datasets.
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: Now we'll import our packages and initialize the default PHOEBE bundle.
End of explanation
b.add_dataset('orb',
compute_times=phoebe.linspace(0,10,10),
dataset='orb01')
b.add_dataset('lc',
compute_times=phoebe.linspace(0,1,101),
dataset='lc01')
Explanation: And we'll attach some dummy datasets. See the datasets tutorial for more details.
End of explanation
print(b.computes)
print(b.filter(context='compute'))
b.set_value(qualifier='irrad_method', value='none')
Explanation: Default Compute Options
Any default Bundle already has a set of default compute options to run the backend for PHOEBE 2. In most cases, you can just edit the options in this default set of compte options.
End of explanation
b.add_compute(phoebe.compute.phoebe, compute='preview', irrad_method='none')
print(b.filter(compute='preview', context='compute'))
b.add_compute('phoebe', compute='detailed', irrad_method='wilson')
print(b.get_compute('detailed'))
Explanation: Adding Compute Options
In other cases, we may want to manually add additional sets of compute options.
This syntax should look very familiar by now, it takes a function (or the name of a recognized function in phoebe.parameters.compute) and then any
kwargs to set in that ParameterSet, passed to b.add_compute.
Let's say that we want to create two sets of compute options - in this example, we'll create one called 'preview' which will cut some corners to quickly get us a model, and one called 'detailed' which will get a much more precise model but likely take longer. As with other tags, the string you provide for the compute tag is up to you (so long as it doesn't raise an error because it conflicts with other tags).
End of explanation
print(b.filter(qualifier='enabled', dataset='lc01'))
Explanation: Editing Compute Options
Backend-Specific Compute Options
Most of the parameters in the compute options are specific to the backend being used. Here, of course, we're using the PHOEBE 2.0 backend - but for details on other backends see the Advanced: Alternate Backends Tutorial.
The PHOEBE compute options are described in the tutorial on their relevant dataset types:
Light Curves/Fluxes (lc)
Radial Velocities (rv)
Line Profiles (lp)
Orbits (orb)
Meshes (mesh)
Enabling/Disabling Datasets
By default, synthetic models will be created for all datasets in the Bundle when run_compute is called. But you can disable a dataset to have run_compute ignore that dataset. This is handled by a BoolParameter with the qualifier 'enabled' - and has a copy that lives in each set of compute options
Let's say we wanted to compute the orbit but not light curve - so we want to set enabled@lc01:
End of explanation
b.set_value(qualifier='enabled', dataset='lc01', compute='preview', value=False)
print(b.filter(qualifier='enabled', dataset='lc01'))
Explanation: as you can see, there is a copy for both of our compute options ('preview' and 'detailed').
If we know which set of compute options we'll be using, or only want to enable/disable for a given set, then we can do that (we could also use b.disable_dataset and b.enable_dataset:
End of explanation
b.set_value_all('enabled@lc01', True)
print(b.filter(qualifier='enabled', dataset='lc01'))
Explanation: or to enable/disable a dataset for all sets of compute options, we can use the set_value_all method:
End of explanation
b.run_compute(compute='preview')
print(b.models)
Explanation: If the enabled parameter is missing for a set of compute options - it is likely that that particular backend does not support that dataset type.
Running Compute
run_compute takes arguments for the compute tag as well as the model tag for the resulting synthetic model(s).
You do not need to provide the compute tag if only 0 or 1 set of compute options exist in the Bundle. If there are no compute options, the default PHOEBE 2 options will be added on your behalf and used. If there is a single set of compute options, those will be assumed. In our case, we have two compute options in the Bundle (with tags 'preview' and 'detailed') so we must provide an argument for compute.
If you do not provide a tag for the model, one will be created for you called 'latest'. Note that the 'latest' model will be overwritten without throwing any errors, whereas other named models can only be overwritten if you pass overwrite=True (see the run_compute API docs for details). In general, though, if you want to maintain the results from previous calls to run_compute, you must provide a NEW model tag.
End of explanation
b.set_value(qualifier='incl', kind='orbit', value=90)
b.run_compute(compute='preview', model='run_with_incl_90')
b.set_value(qualifier='incl', kind='orbit', value=85)
b.run_compute(compute='preview', model='run_with_incl_85')
b.set_value(qualifier='incl', kind='orbit', value=80)
b.run_compute(compute='preview', model='run_with_incl_80')
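If we later want to redo one of these runs with a tweaked parameter while keeping the same model tag, run_compute must be told to overwrite it explicitly (a sketch of the call, following the overwrite behaviour described above; the inclination value is arbitrary):

b.set_value(qualifier='incl', kind='orbit', value=80.5)
b.run_compute(compute='preview', model='run_with_incl_80', overwrite=True)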
Explanation: Storing/Tagging Models
Now let's compute models for three different 'versions' of parameters. By providing a model tag, we can keep the synthetics for each of these different runs in the bundle - which will be handy later on for plotting and comparing models.
End of explanation
print(b.models)
Explanation: We will now have three new sets of synthetics which can be compared, plotted, or removed.
End of explanation
b.remove_model('latest')
print(b.models)
Explanation: To remove a model, call remove_model.
End of explanation
b.filter(model='run_with_incl_90')
b.filter(component='primary', model='run_with_incl_90')
b.get_parameter(qualifier='us', component='primary', model='run_with_incl_90')
b.get_value(qualifier='us', dataset='orb01', component='primary', model='run_with_incl_90')[:10]
Explanation: Accessing Synthetics from Models
The synthetics can be accessed by their dataset and model tags.
End of explanation |
4,461 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a id='beginning'></a> <!--\label{beginning}-->
* Outline
* Glossary
* 4. The Visibility Space
* Previous
Step1: Import section specific modules
Step2: 4.5.1 UV coverage
Step3: Let's express the corresponding physical baseline in ENU coordinates.
Step4: Let's place the interferometer at a latitude $L_a=+45^\circ00'00''$.
Step5: Figure 4.5.1
Step6: 4.5.1.1.3 Computing of the projected baselines in ($u$,$v$,$w$) coordinates as a function of time
As seen previously, we convert the baseline coordinates using the previous matrix transformation.
Step7: As the $u$, $v$, $w$ coordinates explicitly depend on $H$, we must evaluate them for each observational time step. We will use the equations defined in $\S$ 4.2.2 ➞
Step8: We now have everything that describes the $uvw$-track of the baseline (over an 8-hour observational period). It is hard to predict which locus the $uvw$ track traverses given only the three mathematical equations from above. Let's plot it in $uvw$ space and its projection in $uv$ space.
Step9: Figure 4.5.2
Step10: Figure 4.5.3
Step11: Let's compute the $uv$ tracks of an observation of the NCP ($\delta=90^\circ$)
Step12: Let's compute the uv tracks when observing a source at $\delta=30^\circ$
Step13: Figure 4.5.4
Step14: Figure 4.5.5
Step15: <span style="background-color
Step16: We then convert the ($\alpha$,$\delta$) to $l,m$
Step17: The source and phase centre coordinates are now given in degrees.
Step18: Figure 4.5.6
Step19: We create the dimensions of our visibility plane.
Step20: We create our fully-filled visibility plane. With a "perfect" interferometer, we could sample the entire $uv$-plane. Since we only have a finite amount of antennas, this is never possible in practice. Recall that our sky brightness $I(l,m)$ is related to our visibilites $V(u,v)$ via the Fourier transform. For a bunch of point sources we can therefore write
Step21: Below we sample our visibility plane on the $uv$-track derived in the first section, i.e. $V(u_t,v_t)$.
Step22: Figure 4.5.7
Step23: Figure 4.5.8
Step24: Figure 4.5.9 | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
Explanation: <a id='beginning'></a> <!--\label{beginning}-->
* Outline
* Glossary
* 4. The Visibility Space
* Previous: 4.4 The Visibility Function
* Next: 4.5.2 UV Coverage: Improving Your Coverage
Import standard modules:
End of explanation
from mpl_toolkits.mplot3d import Axes3D
import plotBL
HTML('../style/code_toggle.html')
Explanation: Import section specific modules:
End of explanation
ant1 = np.array([-500e3,500e3,0]) # in m
ant2 = np.array([500e3,-500e3,+10]) # in m
Explanation: 4.5.1 UV coverage : UV tracks
The objective of $\S$ 4.5.1 ⤵ and $\S$ 4.5.2 ➞ is to give you a glimpse into the process of aperture synthesis. <span style="background-color:cyan">TLG:GM: Check if the italic words are in the glossary. </span> An interferometer measures components of the Fourier Transform of the sky by sampling the visibility function, $\mathcal{V}$. This collection of samples lives in ($u$, $v$, $w$) space, and are often projected onto the so-called $uv$-plane.
In $\S$ 4.5.1 ⤵, we will focus on the way the visibility function is sampled. This sampling is a function of the interferometer's configuration, the direction of the source and the observation time.
In $\S$ 4.5.2 ➞, we will see how this sampling can be improved by using certain observing techniques.
4.5.1.1 The projected baseline with time: the $uv$ track
A projected baseline depends on a baseline's coordinates, and the direction being observed in the sky. It corresponds to the baseline as seen from the source. The projected baseline is what determines the spatial frequency of the sky that the baseline will measure. As the Earth rotates, the projected baseline and its corresponding spatial frequency (defined by the baseline's ($u$, $v$)-coordinates) vary slowly in time, generating a path in the $uv$-plane.
We will now generate test cases to see what locus the path takes, and how it can be predicted depending on the baseline's geometry.
4.5.1.1.1 Baseline projection as seen from the source
Let's generate one baseline from two antennas Ant$_1$ and Ant$_2$.
End of explanation
b_ENU = ant2-ant1 # baseline
D = np.sqrt(np.sum((b_ENU)**2)) # |b|
print str(D/1000)+" km"
Explanation: Let's express the corresponding physical baseline in ENU coordinates.
End of explanation
L = (np.pi/180)*(45+0./60+0./3600) # Latitude in radians
A = np.arctan2(b_ENU[0],b_ENU[1])
print "Baseline Azimuth="+str(np.degrees(A))+"°"
E = np.arcsin(b_ENU[2]/D)
print "Baseline Elevation="+str(np.degrees(E))+"°"
%matplotlib nbagg
plotBL.sphere(ant1,ant2,A,E,D,L)
Explanation: Let's place the interferometer at a latitude $L_a=+45^\circ00'00''$.
End of explanation
# Observation parameters
c = 3e8 # Speed of light
f = 1420e9 # Frequency
lam = c/f # Wavelength
dec = (np.pi/180)*(-30-43.0/60-17.34/3600) # Declination
time_steps = 600 # Time Steps
h = np.linspace(-4,4,num=time_steps)*np.pi/12 # Hour angle window
Explanation: Figure 4.5.1: A baseline located at +45$^\circ$ as seen from the sky. This plot is interactive and can be rotated in 3D to see different baseline projections, depending on the position of the source w.r.t. the physical baseline.
On the interactive plot above, we represent a baseline located at +45$^\circ$. It is aligned with the local south-west/north-east axis, as seen from the sky frame of reference. By rotating the sphere westward, you can simulate the variation of the projected baseline as seen from a source in apparent motion on the celestial sphere.
4.5.1.1.2 Coordinates of the baseline in the ($u$,$v$,$w$) plane
We will now simulate an observation to study how a projected baseline will change with time. We will position this baseline at a South African latitude. We first need the expression of the physical baseline in a convenient reference frame, attached to the source in the sky.
In $\S$ 4.2 ➞, we linked the equatorial coordinates of the baseline to the ($u$,$v$,$w$) coordinates through the transformation matrix:
\begin{equation}
\begin{pmatrix}
u\
v\
w
\end{pmatrix}
=
\frac{1}{\lambda}
\begin{pmatrix}
\sin H_0 & \cos H_0 & 0\
-\sin \delta_0 \cos H_0 & \sin\delta_0\sin H_0 & \cos\delta_0\
\cos \delta_0 \cos H_0 & -\cos\delta_0\sin H_0 & \sin\delta_0\
\end{pmatrix}
\begin{pmatrix}
X\
Y\
Z
\end{pmatrix}
\end{equation}
<a id="vis:eq:451"></a> <!---\label{vis:eq:451}--->
\begin{equation}
\begin{bmatrix}
X\
Y\
Z
\end{bmatrix}
=|\mathbf{b}|
\begin{bmatrix}
\cos L_a \sin \mathcal{E} - \sin L_a \cos \mathcal{E} \cos \mathcal{A}\nonumber\
\cos \mathcal{E} \sin \mathcal{A} \nonumber\
\sin L_a \sin \mathcal{E} + \cos L_a \cos \mathcal{E} \cos \mathcal{A}\
\end{bmatrix}
\end{equation}
Equation 4.5.1
This expression of $\mathbf{b}$ is a function of ($\mathcal{A}$,$\mathcal{E}$), and therefore of ($X$,$Y$,$Z$) in the equatorial frame of reference.
4.5.1.1.2 Observation parameters
Let's define an arbitrary set of observation parameters to mimic a real observation.
Latitude of the baseline: $L_a=-30^\circ43'17.34''$
Declination of the observation: $\delta=-74^\circ39'37.481''$
Duration of the observation: $\Delta \text{HA}=[-4^\text{h},4^\text{h}]$
Time steps: 600
Frequency: 1420 MHz
End of explanation
ant1 = np.array([25.095,-9.095,0.045])
ant2 = np.array([90.284,26.380,-0.226])
b_ENU = ant2-ant1
D = np.sqrt(np.sum((b_ENU)**2))
L = (np.pi/180)*(-30-43.0/60-17.34/3600)
A=np.arctan2(b_ENU[0],b_ENU[1])
print "Azimuth=",A*(180/np.pi)
E=np.arcsin(b_ENU[2]/D)
print "Elevation=",E*(180/np.pi)
X = D*(np.cos(L)*np.sin(E)-np.sin(L)*np.cos(E)*np.cos(A))
Y = D*np.cos(E)*np.sin(A)
Z = D*(np.sin(L)*np.sin(E)+np.cos(L)*np.cos(E)*np.cos(A))
Explanation: 4.5.1.1.3 Computing of the projected baselines in ($u$,$v$,$w$) coordinates as a function of time
As seen previously, we convert the baseline coordinates using the previous matrix transformation.
End of explanation
u = lam**(-1)*(np.sin(h)*X+np.cos(h)*Y)/1e3
v = lam**(-1)*(-np.sin(dec)*np.cos(h)*X+np.sin(dec)*np.sin(h)*Y+np.cos(dec)*Z)/1e3
w = lam**(-1)*(np.cos(dec)*np.cos(h)*X-np.cos(dec)*np.sin(h)*Y+np.sin(dec)*Z)/1e3
Explanation: As the $u$, $v$, $w$ coordinates explicitly depend on $H$, we must evaluate them for each observational time step. We will use the equations defined in $\S$ 4.2.2 ➞:
$\lambda u = X \sin H + Y \cos H$
$\lambda v= -X \sin \delta \cos H + Y \sin\delta\sin H + Z \cos\delta$
$\lambda w= X \cos \delta \cos H -Y \cos\delta\sin H + Z \sin\delta$
End of explanation
%matplotlib nbagg
plotBL.UV(u,v,w)
Explanation: We now have everything that describes the $uvw$-track of the baseline (over an 8-hour observational period). It is hard to predict which locus the $uvw$ track traverses given only the three mathematical equations from above. Let's plot it in $uvw$ space and its projection in $uv$ space.
End of explanation
%matplotlib inline
from matplotlib.patches import Ellipse
# parameters of the UVtrack as an ellipse
a=np.sqrt(X**2+Y**2)/lam/1e3 # major axis
b=a*np.sin(dec) # minor axis
v0=Z/lam*np.cos(dec)/1e3 # center of ellipse
plotBL.UVellipse(u,v,w,a,b,v0)
Explanation: Figure 4.5.2: $uvw$ track derived from the simulation and projection in the $uv$-plane.
The track in $uvw$ space are curves and the projection in the $uv$ plane are arcs. Let us focus on the track's projection in this plane. To get observation-independent knowledge of the track we can try to combine the three equations of $u$, $v$ and $w$, the aim being to eliminate $H$ from the equation. We end up with an equation linking $u$, $v$, $X$ and $Y$ (the full derivation can be found in $\S$ A.3 ➞):
$$\boxed{u^2 + \left[ \frac{v -\frac{Z}{\lambda} \cos \delta}{\sin \delta} \right]^2 = \left[ \frac{X}{\lambda} \right]^2 + \left[ \frac{Y}{\lambda} \right]^2}$$
One can note that in this particular case, the $uv$ track takes on the form of an ellipse.
<span style="background-color:cyan">TLG:GM: Check if the italic words are in the glossary. </span>
This ellipse is centered at $(0,\frac{Z}{\lambda} \cos \delta)$ in the ($u$,$v$) plane.
The major axis is $a=\frac{\sqrt{X^2 + Y^2}}{\lambda}$.
The minor axis (along the axis $v$) will be a function of $Z$, $\delta$ and $a$.
We can check this by plotting the theoretical ellipse over the observed portion of the track. (You can fall back to the duration of the observation to see that the track is mapping this ellipse exactly).
End of explanation
L=np.radians(90.)
ant1 = np.array([25.095,-9.095,0.045])
ant2 = np.array([90.284,26.380,-0.226])
b_ENU = ant2-ant1
D = np.sqrt(np.sum((b_ENU)**2))
A=np.arctan2(b_ENU[0],b_ENU[1])
print "Azimuth=",A*(180/np.pi)
E=np.arcsin(b_ENU[2]/D)
print "Elevation=",E*(180/np.pi)
X = D*(np.cos(L)*np.sin(E)-np.sin(L)*np.cos(E)*np.cos(A))
Y = D*np.cos(E)*np.sin(A)
Z = D*(np.sin(L)*np.sin(E)+np.cos(L)*np.cos(E)*np.cos(A))
Explanation: Figure 4.5.3: The blue (resp. the red) curve is the $uv$ track of the baseline $\mathbf{b}{12}$ (resp. $\mathbf{b}{21}$). As $I_\nu$ is real, the real part of the visibility $\mathcal{V}$ is even and the imaginary part is odd making $\mathcal{V}(-u,-v)=\mathcal{V}^*$. It implies that one baseline automatically provides a measurement of a visibility and its complex conjugate at ($-u$,$-v$).
4.5.1.2 Special cases
4.5.1.2.1 The Polar interferometer
Let settle one baseline at the North pole. The local zenith corresponds to the North Celestial Pole (NCP) at $\delta=90^\circ$. As seen from the NCP, the baseline will rotate and the projected baseline will correspond to the physical baseline. This configuration is the only case where this happens.
If $\mathbf{b}$ rotates, we can guess that the $uv$ tracks will be perfect circles. Let's check:
End of explanation
dec=np.radians(90.)
uNCP = lam**(-1)*(np.sin(h)*X+np.cos(h)*Y)/1e3
vNCP = lam**(-1)*(-np.sin(dec)*np.cos(h)*X+np.sin(dec)*np.sin(h)*Y+np.cos(dec)*Z)/1e3
wNCP = lam**(-1)*(np.cos(dec)*np.cos(h)*X-np.cos(dec)*np.sin(h)*Y+np.sin(dec)*Z)/1e3
# parameters of the UVtrack as an ellipse
aNCP=np.sqrt(X**2+Y**2)/lam/1e3 # major axis
bNCP=aNCP*np.sin(dec) # minor axi
v0NCP=Z/lam*np.cos(dec)/1e3 # center of ellipse
Explanation: Let's compute the $uv$ tracks of an observation of the NCP ($\delta=90^\circ$):
End of explanation
dec=np.radians(30.)
u30 = lam**(-1)*(np.sin(h)*X+np.cos(h)*Y)/1e3
v30 = lam**(-1)*(-np.sin(dec)*np.cos(h)*X+np.sin(dec)*np.sin(h)*Y+np.cos(dec)*Z)/1e3
w30 = lam**(-1)*(np.cos(dec)*np.cos(h)*X-np.cos(dec)*np.sin(h)*Y+np.sin(dec)*Z)/1e3
a30=np.sqrt(X**2+Y**2)/lam/1e3 # major axis
b30=a30*np.sin(dec) # minor axis
v030=Z/lam*np.cos(dec)/1e3 # center of ellipse
%matplotlib inline
plotBL.UVellipse(u30,v30,w30,a30,b30,v030)
plotBL.UVellipse(uNCP,vNCP,wNCP,aNCP,bNCP,v0NCP)
Explanation: Let's compute the uv tracks when observing a source at $\delta=30^\circ$:
End of explanation
L=np.radians(90.)
X = D*(np.cos(L)*np.sin(E)-np.sin(L)*np.cos(E)*np.cos(A))
Y = D*np.cos(E)*np.sin(A)
Z = D*(np.sin(L)*np.sin(E)+np.cos(L)*np.cos(E)*np.cos(A))
# At local zenith == Celestial Equator
dec=np.radians(0.)
uEQ = lam**(-1)*(np.sin(h)*X+np.cos(h)*Y)/1e3
vEQ = lam**(-1)*(-np.sin(dec)*np.cos(h)*X+np.sin(dec)*np.sin(h)*Y+np.cos(dec)*Z)/1e3
wEQ = lam**(-1)*(np.cos(dec)*np.cos(h)*X-np.cos(dec)*np.sin(h)*Y+np.sin(dec)*Z)/1e3
# parameters of the UVtrack as an ellipse
aEQ=np.sqrt(X**2+Y**2)/lam/1e3 # major axis
bEQ=aEQ*np.sin(dec) # minor axi
v0EQ=Z/lam*np.cos(dec)/1e3 # center of ellipse
# Close to Zenith
dec=np.radians(10.)
u10 = lam**(-1)*(np.sin(h)*X+np.cos(h)*Y)/1e3
v10 = lam**(-1)*(-np.sin(dec)*np.cos(h)*X+np.sin(dec)*np.sin(h)*Y+np.cos(dec)*Z)/1e3
w10 = lam**(-1)*(np.cos(dec)*np.cos(h)*X-np.cos(dec)*np.sin(h)*Y+np.sin(dec)*Z)/1e3
a10=np.sqrt(X**2+Y**2)/lam/1e3 # major axis
b10=a10*np.sin(dec) # minor axis
v010=Z/lam*np.cos(dec)/1e3 # center of ellipse
%matplotlib inline
plotBL.UVellipse(u10,v10,w10,a10,b10,v010)
plotBL.UVellipse(uEQ,vEQ,wEQ,aEQ,bEQ,v0EQ)
Explanation: Figure 4.5.4: $uv$ track for a baseline at the pole observing at $\delta=90^\circ$ (NCP) and at $\delta=30^\circ$ with the same color conventions as the previous figure.
When observing a source at declination $\delta$, we still have an elliptical shape but centered at (0,0). In the case of a polar interferometer, the full $uv$ track can be covered in 12 hours only due to the symmetry of the baseline.
4.5.1.2.2 The Equatorial interferometer
Let's consider the other extreme scenario: this time, we position the interferometer at the equator. The local zenith is crossed by the Celestial Equator at $\delta=0^\circ$. As seen from the celestial equator, the baseline will not rotate and the projected baseline will no longer correspond to the physical baseline. This configuration is the only case where this happens.
If $\mathbf{b}$ is not rotating, we can intuitively guess that the $uv$ tracks will be straight lines.
End of explanation
H = np.linspace(-6,6,600)*(np.pi/12) #Hour angle in radians
d = 100 #We assume that we have already divided by wavelength
delta = 60*(np.pi/180) #Declination in degrees
u_60 = d*np.cos(H)
v_60 = d*np.sin(H)*np.sin(delta)
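As a quick numerical sanity check (a sketch using the arrays just defined), every sampled point of this EW track should satisfy the ellipse relation $u^2 + (v/\sin\delta_0)^2 = |\mathbf{b}_\lambda|^2$ discussed in this section:

# every sampled (u,v) point lies on the circle/ellipse of radius d
radius = np.sqrt(u_60**2 + (v_60/np.sin(delta))**2)
print(np.allclose(radius, d))   # expected: True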
Explanation: Figure 4.5.5: $uv$ track for a baseline at the equator observing at $\delta=0^\circ$ and at $\delta=10^\circ$, with the same color conventions as the previous figure.
An equatorial interferometer observing its zenith will see radio sources crossing the sky on straight, linear paths. Therefore, they will produce straight $uv$ coordinates.
4.5.1.1.3 The East-West array <a id='vis:sec:ew'></a> <!--\label{vis:sec:ew}-->
The East-West array is the special case of an interferometer with physical baselines aligned with the East-West direction in the ground-based frame of reference. They have the convenient property of giving a $uv$ coverage which lies entirely on a plane.
If the baseline is aligned with the East-West direction, then the Elevation $\mathcal{E}$ of the baseline is zero and the Azimuth $\mathcal{A}$ is $\frac{\pi}{2}$. Eq. 4.5.1 ⤵ then simplifies considerably:
The only non-zero component of the baseline will be its $Y$-component.
\begin{equation}
\frac{1}{\lambda}
\begin{bmatrix}
X\\
Y\\
Z
\end{bmatrix}
=
|\mathbf{b_\lambda}|
\begin{bmatrix}
\cos L_a \sin 0 - \sin L_a \cos 0 \cos \frac{\pi}{2}\\
\cos 0 \sin \frac{\pi}{2} \\
\sin L_a \sin 0 + \cos L_a \cos 0 \cos \frac{\pi}{2}
\end{bmatrix}
=
\begin{bmatrix}
0\\
|\mathbf{b_\lambda}|\\
0
\end{bmatrix}
\end{equation}
If we observe a source at declination $\delta_0$ with varying Hour Angle, $H$, we obtain:
\begin{equation}
\begin{pmatrix}
u\\
v\\
w
\end{pmatrix}
=
\begin{pmatrix}
\sin H & \cos H & 0\\
-\sin \delta_0 \cos H & \sin\delta_0\sin H & \cos\delta_0\\
\cos \delta_0 \cos H & -\cos\delta_0\sin H & \sin\delta_0
\end{pmatrix}
\begin{pmatrix}
0\\
|\mathbf{b_\lambda}| \\
0
\end{pmatrix}
\end{equation}
\begin{equation}
\begin{pmatrix}
u\\
v\\
w
\end{pmatrix}
=
\begin{pmatrix}
|\mathbf{b_\lambda}| \cos H \\
|\mathbf{b_\lambda}| \sin\delta_0 \sin H\\
-|\mathbf{b_\lambda}|\cos\delta_0\sin H
\end{pmatrix}
\end{equation}
when $H = 6^\text{h}$ (West)
\begin{equation}
\begin{pmatrix}
u\\
v\\
w
\end{pmatrix}
=
\begin{pmatrix}
0 \\
|\mathbf{b_\lambda}|\sin\delta_0\\
|\mathbf{b_\lambda}|\cos\delta_0
\end{pmatrix}
\end{equation}
when $H = 0^\text{h}$ (South)
\begin{equation}
\begin{pmatrix}
u\\
v\\
w
\end{pmatrix}
=
\begin{pmatrix}
|\mathbf{b_\lambda}| \\
0\\
0
\end{pmatrix}
\end{equation}
when $H = -6^\text{h}$ (East)
\begin{equation}
\begin{pmatrix}
u\\
v\\
w
\end{pmatrix}
=
\begin{pmatrix}
0 \\
-|\mathbf{b_\lambda}|\sin\delta_0\\
-|\mathbf{b_\lambda}|\cos\delta_0
\end{pmatrix}
\end{equation}
In this case, one can notice that we always have a relationship between $u$, $v$ and $|\mathbf{b_\lambda}|$:
$$ u^2+\left( \frac{v}{\sin\delta_0}\right) ^2=|\mathbf{b_\lambda}|^2$$
<div class=warn>
<b>Warning:</b> The $\sin\delta_0$ factor, appearing in the previous equation, can be interpreted as a compression factor.
</div>
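As a quick numerical check of this relation (an added sketch, reusing the H, d, delta, u_60 and v_60 arrays defined in the code cell above), the simulated track satisfies it to floating-point precision:
# u_60 = d*cos(H) and v_60 = d*sin(H)*sin(delta), so u^2 + (v/sin(delta))^2 = d^2
residual = u_60**2 + (v_60/np.sin(delta))**2 - d**2
print(np.abs(residual).max())  # should be ~0, up to rounding error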
4.5.1.3 Sampling the visibility plane with $uv$-tracks
4.5.1.3.1 Simulating a baseline
When we have an EW baseline, some equations simplify.
Firstly, $XYZ = [0~d~0]^T$, where $d$ is the baseline length measured in wavelengths.
Secondly, we have the following relationships: $u = d\cos(H)$, $v = d\sin(H)\sin(\delta)$,
where $H$ is the hour angle of the field center and $\delta$ its declination.
In this section, we will plot the $uv$-coverage of an EW-baseline whose field center is at two different declinations.
End of explanation
RA_sources = np.array([5+30.0/60,5+32.0/60+0.4/3600,5+36.0/60+12.8/3600,5+40.0/60+45.5/3600])
DEC_sources = np.array([60,60+17.0/60+57.0/3600,61+12.0/60+6.9/3600,61+56.0/60+34.0/3600])
Flux_sources_labels = np.array(["","1 Jy","0.5 Jy","0.2 Jy"])
Flux_sources = np.array([1,0.5,0.2]) #in Jy
step_size = 200
print "Phase center Source 1 Source 2 Source3"
print repr("RA="+str(RA_sources)).ljust(2)
print "DEC="+str(DEC_sources)
Explanation: <span style="background-color:red">TLG:AC: Add the following figures. This is specifically for an EW array. They will add some more insight. </span>
<img src='figures/EW_1_d.svg' width=40%>
<img src='figures/EW_2_d.svg' width=40%>
<img src='figures/EW_3_d.svg' width=40%>
4.5.1.3.2 Simulating the sky
Let us populate our sky with three sources, with positions given in RA ($\alpha$) and DEC ($\delta$):
* Source 1: (5h 32m 0.4s,60$^{\circ}$-17' 57'') - 1 Jy
* Source 2: (5h 36m 12.8s,-61$^{\circ}$ 12' 6.9'') - 0.5 Jy
* Source 3: (5h 40m 45.5s,-61$^{\circ}$ 56' 34'') - 0.2 Jy
We place the field center at $(\alpha_0,\delta_0) = $ (5h 30m,60$^{\circ}$).
End of explanation
RA_rad = np.array(RA_sources)*(np.pi/12)
DEC_rad = np.array(DEC_sources)*(np.pi/180)
RA_delta_rad = RA_rad-RA_rad[0]
l = np.cos(DEC_rad)*np.sin(RA_delta_rad)
m = (np.sin(DEC_rad)*np.cos(DEC_rad[0])-np.cos(DEC_rad)*np.sin(DEC_rad[0])*np.cos(RA_delta_rad))
print "l=",l*(180/np.pi)
print "m=",m*(180/np.pi)
point_sources = np.zeros((len(RA_sources)-1,3))
point_sources[:,0] = Flux_sources
point_sources[:,1] = l[1:]
point_sources[:,2] = m[1:]
Explanation: We then convert the ($\alpha$,$\delta$) to $l,m$: <span style="background-color:red">TLG:AC:Point to Chapter 3.</span>
* $l = \cos \delta \sin \Delta \alpha$
* $m = \sin \delta\cos\delta_0 -\cos \delta\sin\delta_0\cos\Delta \alpha$
* $\Delta \alpha = \alpha - \alpha_0$
End of explanation
%matplotlib inline
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
plt.xlim([-4,4])
plt.ylim([-4,4])
plt.xlabel("$l$ [degrees]")
plt.ylabel("$m$ [degrees]")
plt.plot(l[0],m[0],"bx")
plt.hold("on")
plt.plot(l[1:]*(180/np.pi),m[1:]*(180/np.pi),"ro")
counter = 1
for xy in zip(l[1:]*(180/np.pi)+0.25, m[1:]*(180/np.pi)+0.25):
ax.annotate(Flux_sources_labels[counter], xy=xy, textcoords='offset points',horizontalalignment='right',
verticalalignment='bottom')
counter = counter + 1
plt.grid()
Explanation: The source and phase centre coordinates are now given in degrees.
End of explanation
u = np.linspace(-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10, num=step_size, endpoint=True)
v = np.linspace(-1*(np.amax(abs(v_60)))-10, np.amax(abs(v_60))+10, num=step_size, endpoint=True)
uu, vv = np.meshgrid(u, v)
zz = np.zeros(uu.shape).astype(complex)
Explanation: Figure 4.5.6: Distribution of the simulated sky in the $l$,$m$ plane.
4.5.1.3.3 Simulating an observation
We will now create a fully-filled $uv$-plane, and sample it using the EW-baseline track we created in the first section. We will be ignoring the $w$-term for the sake of simplicity.
End of explanation
s = point_sources.shape
for counter in xrange(1, s[0]+1):
A_i = point_sources[counter-1,0]
l_i = point_sources[counter-1,1]
m_i = point_sources[counter-1,2]
zz += A_i*np.exp(-2*np.pi*1j*(uu*l_i+vv*m_i))
zz = zz[:,::-1]
Explanation: We create the dimensions of our visibility plane.
End of explanation
u_track = u_60
v_track = v_60
z = np.zeros(u_track.shape).astype(complex)
s = point_sources.shape
for counter in xrange(1, s[0]+1):
A_i = point_sources[counter-1,0]
l_i = point_sources[counter-1,1]
m_i = point_sources[counter-1,2]
z += A_i*np.exp(-1*2*np.pi*1j*(u_track*l_i+v_track*m_i))
Explanation: We create our fully-filled visibility plane. With a "perfect" interferometer, we could sample the entire $uv$-plane. Since we only have a finite amount of antennas, this is never possible in practice. Recall that our sky brightness $I(l,m)$ is related to our visibilites $V(u,v)$ via the Fourier transform. For a bunch of point sources we can therefore write:
$$V(u,v)=\mathcal{F}\{I(l,m)\} = \mathcal{F}\left\{\sum_k A_k \delta(l-l_k,m-m_k)\right\} = \sum_k A_k e^{-2\pi i (ul_k+vm_k)}$$
Let's compute the total visibilities for our simulated sky.
End of explanation
plt.figure(figsize=(12,6))
plt.subplot(121)
plt.imshow(zz.real,extent=[-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10,-1*(np.amax(abs(v_60)))-10, \
np.amax(abs(v_60))+10])
plt.plot(u_60,v_60,"k")
plt.xlim([-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10])
plt.ylim(-1*(np.amax(abs(v_60)))-10, np.amax(abs(v_60))+10)
plt.xlabel("u")
plt.ylabel("v")
plt.title("Real part of visibilities")
plt.subplot(122)
plt.imshow(zz.imag,extent=[-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10,-1*(np.amax(abs(v_60)))-10, \
np.amax(abs(v_60))+10])
plt.plot(u_60,v_60,"k")
plt.xlim([-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10])
plt.ylim(-1*(np.amax(abs(v_60)))-10, np.amax(abs(v_60))+10)
plt.xlabel("u")
plt.ylabel("v")
plt.title("Imaginary part of visibilities")
Explanation: Below we sample our visibility plane on the $uv$-track derived in the first section, i.e. $V(u_t,v_t)$.
End of explanation
plt.figure(figsize=(12,6))
plt.subplot(121)
plt.plot(z.real)
plt.xlabel("Timeslots")
plt.ylabel("Jy")
plt.title("Real: sampled visibilities")
plt.subplot(122)
plt.plot(z.imag)
plt.xlabel("Timeslots")
plt.ylabel("Jy")
plt.title("Imag: sampled visibilities")
Explanation: Figure 4.5.7: Real and imaginary parts of the visibility function. The black curve is the portion of the $uv$ track crossing the visibility.
We now plot the sampled visibilities as a function of time-slots, i.e. $V(u_t(t_s),v_t(t_s))$.
End of explanation
plt.figure(figsize=(12,6))
plt.subplot(121)
plt.imshow(abs(zz),
extent=[-1*(np.amax(np.abs(u_60)))-10,
np.amax(np.abs(u_60))+10,
-1*(np.amax(abs(v_60)))-10,
np.amax(abs(v_60))+10])
plt.plot(u_60,v_60,"k")
plt.xlim([-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10])
plt.ylim(-1*(np.amax(abs(v_60)))-10, np.amax(abs(v_60))+10)
plt.xlabel("u")
plt.ylabel("v")
plt.title("Amplitude of visibilities")
plt.subplot(122)
plt.imshow(np.angle(zz),
extent=[-1*(np.amax(np.abs(u_60)))-10,
np.amax(np.abs(u_60))+10,
-1*(np.amax(abs(v_60)))-10,
np.amax(abs(v_60))+10])
plt.plot(u_60,v_60,"k")
plt.xlim([-1*(np.amax(np.abs(u_60)))-10, np.amax(np.abs(u_60))+10])
plt.ylim(-1*(np.amax(abs(v_60)))-10, np.amax(abs(v_60))+10)
plt.xlabel("u")
plt.ylabel("v")
plt.title("Phase of visibilities")
Explanation: Figure 4.5.8: Real and imaginary parts of the visibility sampled by the black curve in Fig. 4.5.7, plotted as a function of time.
End of explanation
plt.figure(figsize=(12,6))
plt.subplot(121)
plt.plot(abs(z))
plt.xlabel("Timeslots")
plt.ylabel("Jy")
plt.title("Abs: sampled visibilities")
plt.subplot(122)
plt.plot(np.angle(z))
plt.xlabel("Timeslots")
plt.ylabel("Jy")
plt.title("Phase: sampled visibilities")
Explanation: Figure 4.5.9: Amplitude and Phase of the visibility function. The black curve is the portion of the $uv$ track crossing the visibility.
End of explanation |
4,462 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plotting topographic maps of evoked data
Load evoked data and plot topomaps for selected time points using multiple
additional options.
Step1: Basic
Step2: If times is set to None at most 10 regularly spaced topographies will be
shown
Step3: We can use nrows and ncols parameter to create multiline plots
with more timepoints.
Step4: Instead of showing topographies at specific time points we can compute
averages of 50 ms bins centered on these time points to reduce the noise in
the topographies
Step5: We can plot gradiometer data (plots the RMS for each pair of gradiometers)
Step6: Additional
Step7: If you look at the edges of the head circle of a single topomap you'll see
the effect of extrapolation. There are three extrapolation modes
Step8: More advanced usage
Now we plot magnetometer data as topomap at a single time point
Step9: Animating the topomap
Instead of using a still image we can plot magnetometer data as an animation,
which animates properly only in matplotlib interactive mode. | Python Code:
# Authors: Christian Brodbeck <[email protected]>
# Tal Linzen <[email protected]>
# Denis A. Engeman <[email protected]>
# Mikołaj Magnuski <[email protected]>
# Eric Larson <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
from mne.datasets import sample
from mne import read_evokeds
print(__doc__)
path = sample.data_path()
fname = path + '/MEG/sample/sample_audvis-ave.fif'
# load evoked corresponding to a specific condition
# from the fif file and subtract baseline
condition = 'Left Auditory'
evoked = read_evokeds(fname, condition=condition, baseline=(None, 0))
Explanation: Plotting topographic maps of evoked data
Load evoked data and plot topomaps for selected time points using multiple
additional options.
End of explanation
times = np.arange(0.05, 0.151, 0.02)
evoked.plot_topomap(times, ch_type='mag', time_unit='s')
Explanation: Basic :func:~mne.viz.plot_topomap options
We plot evoked topographies using :func:mne.Evoked.plot_topomap. The first
argument, times allows to specify time instants (in seconds!) for which
topographies will be shown. We select timepoints from 50 to 150 ms with a
step of 20ms and plot magnetometer data:
End of explanation
evoked.plot_topomap(ch_type='mag', time_unit='s')
Explanation: If times is set to None at most 10 regularly spaced topographies will be
shown:
End of explanation
all_times = np.arange(-0.2, 0.5, 0.03)
evoked.plot_topomap(all_times, ch_type='mag', time_unit='s',
ncols=8, nrows='auto')
Explanation: We can use nrows and ncols parameter to create multiline plots
with more timepoints.
End of explanation
evoked.plot_topomap(times, ch_type='mag', average=0.05, time_unit='s')
Explanation: Instead of showing topographies at specific time points we can compute
averages of 50 ms bins centered on these time points to reduce the noise in
the topographies:
End of explanation
evoked.plot_topomap(times, ch_type='grad', time_unit='s')
Explanation: We can plot gradiometer data (plots the RMS for each pair of gradiometers)
End of explanation
evoked.plot_topomap(times, ch_type='mag', cmap='Spectral_r', res=32,
outlines='skirt', contours=4, time_unit='s')
Explanation: Additional :func:~mne.viz.plot_topomap options
We can also use a range of various :func:mne.viz.plot_topomap arguments
that control how the topography is drawn. For example:
cmap - to specify the color map
res - to control the resolution of the topographies (lower resolution
means faster plotting)
outlines='skirt' to see the topography stretched beyond the head circle
contours to define how many contour lines should be plotted
End of explanation
extrapolations = ['local', 'head', 'box']
fig, axes = plt.subplots(figsize=(7.5, 4.5), nrows=2, ncols=3)
# Here we look at magnetometer and EEG channels, and use a custom head sphere to get all the
# sensors to be well within the drawn head surface
for axes_row, ch_type in zip(axes, ('mag', 'eeg')):
for ax, extr in zip(axes_row, extrapolations):
evoked.plot_topomap(0.1, ch_type=ch_type, size=2, extrapolate=extr,
axes=ax, show=False, colorbar=False,
sphere=(0., 0., 0., 0.09))
ax.set_title('%s %s' % (ch_type.upper(), extr), fontsize=14)
fig.tight_layout()
Explanation: If you look at the edges of the head circle of a single topomap you'll see
the effect of extrapolation. There are three extrapolation modes:
extrapolate='local' extrapolates only to points close to the sensors.
extrapolate='head' extrapolates out to the head circle.
extrapolate='box' extrapolates to a large box stretching beyond the
head circle.
The default value extrapolate='auto' will use 'local' for MEG sensors
and 'head' otherwise. Here we show each option:
End of explanation
evoked.plot_topomap(0.1, ch_type='mag', show_names=True, colorbar=False,
size=6, res=128, title='Auditory response',
time_unit='s')
plt.subplots_adjust(left=0.01, right=0.99, bottom=0.01, top=0.88)
Explanation: More advanced usage
Now we plot magnetometer data as topomap at a single time point: 100 ms
post-stimulus, add channel labels, title and adjust plot margins:
End of explanation
times = np.arange(0.05, 0.151, 0.01)
fig, anim = evoked.animate_topomap(
times=times, ch_type='mag', frame_rate=2, time_unit='s', blit=False)
Explanation: Animating the topomap
Instead of using a still image we can plot magnetometer data as an animation,
which animates properly only in matplotlib interactive mode.
End of explanation |
4,463 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Acoustic system calibration
Since the calibration measurements may be dealing with very small values, there's potential for running into the limitations of <a href="https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html">floating-point arithmetic</a>. When implementing the computational algorithms, using dB is recommended to avoid floating-point errors.
Step1: Calculating the frequency response
Using a hamming window for the signal is strongly recommended. The only exception is when measuring the sensitivity of the calibration microphone using a standard (e.g. a pistonphone that generates 114 dB SPL at 1 kHz). When you're using a single-tone calibration, a flattop window is best.
Speaker output
Output of speaker in Pa, $O(\omega)$, can be measured by playing a signal with known RMS voltage, $V_{speaker}(\omega)$ and measuring the voltage of a calibration microphone, $V_{cal}(\omega)$, with a known sensitivity, $S_{cal} = \frac{V_{rms}}{Pa}$.
$O(\omega) = \frac{V_{cal}(\omega)}{S_{cal}}$
Alternatively, the output can be specified in dB
$O_{dB}(\omega) = 20 \times log_{10}(\frac{V_{cal}(\omega)}{S_{cal}})$
$O_{dB}(\omega) = 20 \times log_{10}(V_{cal}(\omega))-20 \times log_{10}(S_{cal})$
Experiment microphone sensitivity
If we wish to calibrate an experiment microphone, we will record the voltage, $V_{exp}(\omega)$, at the same time we measure the speaker's output in the previous exercise. Using the known output of the speaker, we can then determine the experiment microphone sensitivity, $S_{exp}(\omega)$.
$S_{exp}(\omega) = \frac{V_{exp}(\omega)}{O(\omega)}$
$S_{exp}(\omega) = \frac{V_{exp}(\omega)}{\frac{V_{cal}(\omega)}{S_{cal}}}$
$S_{exp}(\omega) = \frac{V_{exp}(\omega) \times S_{cal}}{V_{cal}(\omega)}$
The resulting sensitivity is in $\frac{V}{Pa}$. Alternatively the sensitivity can be expressed in dB, which gives us sensitivity as dB re Pa.
$S_{exp_{dB}}(\omega) = 20 \times log_{10}(V_{exp})+20 \times log_{10}(S_{cal})-20 \times log_{10}(V_{cal})$
In-ear speaker calibration
Since the acoustics of the system will change once the experiment microphone is inserted in the ear (e.g. the ear canal acts as a compliance which alters the harmonics of the system), we need to recalibrate each time we reposition the experiment microphone while it's in the ear of an animal. We need to compute the speaker transfer function, $S_{s}(\omega)$, in units of $\frac{V_{rms}}{Pa}$ which will be used to compute the actual voltage needed to drive the speaker at a given level. To compute the calibration, we generate a stimulus via the digital to analog converter (DAC) with known frequency content, $V_{DAC}(\omega)$, in units of $V_{RMS}$.
The output of the speaker is measured using the experiment microphone and can be determined using the experiment microphone sensitivity
$O(\omega) = \frac{V_{PT}(\omega)}{S_{PT}(\omega)}$
The sensitivity of the speaker can then be calculated as
$S_{s}(\omega) = \frac{V_{DAC}(\omega)}{O(\omega)}$
$S_{s}(\omega) = \frac{V_{DAC}(\omega)}{\frac{V_{PT}(\omega)}{S_{PT}(\omega)}}$
$S_{s}(\omega) = \frac{V_{DAC}(\omega) \times S_{PT}(\omega)}{V_{PT}(\omega)}$
Alternatively, we can express the sensitivity as dB
$S_{s_{dB}}(\omega) = 20 \times log_{10}(V_{DAC}(\omega))+20 \times log_{10}(S_{PT}(\omega))-20 \times log_{10}(V_{PT}(\omega))$
$S_{s_{dB}}(\omega) = 20 \times log_{10}(V_{DAC}(\omega))+S_{PT_{dB}}(\omega)-20 \times log_{10}(V_{PT}(\omega))$
Generating a tone at a specific level
Given the speaker sensitivity, $S_{s}(\omega)$, we can compute the voltage at the DAC required to generate a tone at a specific amplitude in Pa, $O$.
$V_{DAC}(\omega) = S_{s}(\omega) \times O$
Usually, however, we generally prefer to express the amplitude in dB SPL.
$O_{dB SPL} = 20 \times log_{10}(\frac{O}{20 \times 10^{-6}})$
Solving for $O$.
$O = 10^{\frac{O_{dB SPL}}{20}} \times 20 \times 10^{-6}$
Substituting $O$.
$V_{DAC}(\omega) = S_{s}(\omega) \times 10^{\frac{O_{dB SPL}}{20}} \times 20 \times 10^{-6}$
Expressed in dB
$V_{DAC_{dB}}(\omega) = 20 \times log_{10}(S_{s}(\omega)) + 20 \times log_{10}(10^{\frac{O_{dB SPL}}{20}}) + 20 \times log_{10}(20 \times 10^{-6})$
$V_{DAC_{dB}}(\omega) = 20 \times log_{10}(S_{s}(\omega)) + O_{dB SPL} + 20 \times log_{10}(20 \times 10^{-6})$
$V_{DAC_{dB}}(\omega) = S_{s_{dB}}(\omega) + O_{dB SPL} + 20 \times log_{10}(20 \times 10^{-6})$
We can use the last equation to compute the voltage since it expresses the speaker calibration in units that we have calculated. However, we need to convert the voltage back to a linear scale.
$V_{DAC}(\omega) = 10^{\frac{S_{s_{dB}}(\omega) + O_{dB SPL} + 20 \times log_{10}(20 \times 10^{-6})}{20}}$
Estimating output at a specific $V_{rms}$
Taking the equation above and solving for $O_{dB SPL}(\omega)$
$O_{dB SPL}(\omega) = 20 \times log_{10}(V_{DAC}) - S_{s_{dB}}(\omega) - 20 \times log_{10}(20 \times 10^{-6})$
Or, if we want to compute in Pa
$O(\omega) = \frac{V_{DAC}}{S_{s}(\omega)}$
Common calculations based on $S_{s_{dB}}(\omega)$ and $S_{PT_{dB}}(\omega)$
To estimate the voltage required at the DAC for a given dB SPL
$V_{DAC}(\omega) = 10^{\frac{S_{s_{dB}}(\omega) + O_{dB SPL} + 20 \times log_{10}(20 \times 10^{-6})}{20}}$
To convert the microphone voltage measurement to dB SPL
$O_{dB SPL} = V_{DAC_{dB}}(\omega) - S_{PT_{dB}}(\omega) - 20 \times log_{10}(20 \times 10^{-6})$
Given the dB SPL, $O_{dB SPL}(\omega)$ at 1 VRMS
$S(\omega) = (10^{\frac{O_{dB SPL}(\omega)}{20}} \times 20 \times 10^{-6})^{-1}$
$S_{dB}(\omega) = - [O_{dB SPL}(\omega) + 20 \times log_{10}(20 \times 10^{-6})]$
Less common calculations
Given sensitivity calculated using a different $V_{rms}$, $x$, (e.g. $10 V_{rms}$), compute the sensitivity at $1 V_{rms}$ (used by the attenuation calculation in the neurogen package).
$S_{dB}(\omega) = S_{dB_{1V}}(\omega) = S_{dB_{x}}(\omega) - 20 \times log_{10}x$
Estimating the PSD
Applying a window to the signal is not always a good idea.
Step2: Designing an output circuit
Speaker sensitivity is typically reported in $\frac{dB}{W}$ at a distance of 1 meter. For an $8\Omega$ speaker, $2.83V$ produces exactly $1W$. We know this because $P = I^2 \times R$ and $V = I \times R$. Solving for $I$
Step3: Let's say we have an $8\Omega$ speaker with a handling capacity is $0.5W$. If we want to achieve the maximum (i.e. $0.5W$), then we need to determine the voltage that will achieve that wattage given the speaker rating.
$V = R \times \sqrt{\frac{P}{R}}$
$V = 8\Omega \times \sqrt{\frac{0.5W}{8\Omega}}$
$V = 2V$
Even if your system can generate larger values, there is no point in driving the speaker at values greater than 1V. It will simply distort or get damaged. However, your system needs to be able to provide the appropriate current to drive the speaker.
$I = \sqrt{\frac{P}{R}}$
$I = \sqrt{\frac{0.5W}{8\Omega}}$
$I = 0.25A$
This is based on nominal specs.
So, what is the maximum output in dB SPL? Assume that the spec sheet reports $92dB$ at $0.3W$.
$10 \times log_{10}(0.5W/0.3W) = 2.2 dB$
This means that we will get only $2.2dB$ more for a total of $94.2 dB SPL$.
$10 \times log_{10}(0.1W/0.3W) = -4.7 dB$
Step4: Now that you've figured out the specs of your speaker, you need to determine whether you need a voltage divider to bring output voltage down to a safe level (especially if you are trying to use the full range of your DAC).
$V_{speaker} = V_{out} \times \frac{R_{speaker}}{R+R_{speaker}}$
Don't forget to compensate for any gain you may have built into the op-amp and buffer circuit.
$R = \frac{R_{speaker} \times (V_{out}-V_{speaker})}{V_{speaker}}$
Step5: Good details here http://www.dspguide.com/ch9/1.htm
Step6: Size of the FFT
Step7: Ensuring reproducible generation of bandpass filtered noise
Step8: Computing noise power
Step9: Analysis of grounding
Signal cables resonate when physical length is a quarter wavelength.
Step10: Resonance of acoustic tube
Step11: chirps
Step12: Converting band level to spectrum level
$BL = 10 \times log{\frac{I_{tot}}{I_{ref}}}$ where $I_{tot} = I_{SL}\times\Delta f$. Using the multiplication rule for logarithms, $BL = 10 \times log{\frac{I_{SL} \times 1 Hz}{I_{ref}}} + 10 \times log \frac{\Delta f}{1 Hz}$, which simplifies to $BL = ISL_{ave} + 10\times log(\Delta f)$
Equalizing a signal using the impulse response | Python Code:
%matplotlib inline
from scipy import signal
from scipy import integrate
import pylab as pl
import numpy as np
Explanation: Acoustic system calibration
Since the calibration measurements may be dealing with very small values, there's potential for running into the limitations of <a href="https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html">floating-point arithmetic</a>. When implementing the computational algorithms, using dB is recommended to avoid floating-point errors.
Throughout this description, we express sensitivity (e.g. of the microphone or speaker) in units of $\frac{V}{Pa}$ (which is commonly used throughout the technical literature) rather than the notation used in the EPL cochlear function test suite which are $\frac{Pa}{V}$. Sensitivity in the context of microphones is the voltage generated by the microphone in response to a given pressure. In the context of speakers, sensitivity is the output, in Pa, produced by a given voltage. We assume that the sensitivity of the calibration microphone is uniform across all frequencies (and it generally is if you spend enough money on the microphone). Sometimes you may wish to use a cheaper microphone to record audio during experiments. Since this microphone is cheap, sensitivity will vary as a function of frequency.
End of explanation
fs = 10e3
t = np.arange(fs)/fs
frequency = 500
tone_waveform = np.sin(2*np.pi*frequency*t)
chirp_waveform = signal.chirp(t, 100, 1, 900)
clipped_waveform = np.clip(tone_waveform, -0.9, 0.9)
ax = pl.subplot(131)
ax.plot(t, tone_waveform)
ax = pl.subplot(132, sharex=ax, sharey=ax)
ax.plot(t, chirp_waveform)
ax = pl.subplot(133, sharex=ax, sharey=ax)
ax.plot(t, clipped_waveform)
ax.axis(xmin=0, xmax=0.01)
pl.tight_layout()
s = tone_waveform
for window in ('flattop', 'boxcar', 'blackman', 'hamming', 'hanning'):
w = signal.get_window(window, len(s))
csd = np.fft.rfft(s*w/w.mean())
psd = np.real(csd*np.conj(csd))/len(s)
p = 20*np.log10(psd)
f = np.fft.rfftfreq(len(s), fs**-1)
pl.plot(f, p, label=window)
pl.axis(xmin=490, xmax=520)
pl.legend()
def plot_fft_windows(s):
for window in ('flattop', 'boxcar', 'blackman', 'hamming', 'hanning'):
w = signal.get_window(window, len(s))
csd = np.fft.rfft(s*w/w.mean())
psd = np.real(csd*np.conj(csd))/len(s)
p = 20*np.log10(psd)
f = np.fft.rfftfreq(len(s), fs**-1)
pl.plot(f, p, label=window)
pl.legend()
pl.figure(); plot_fft_windows(tone_waveform); pl.axis(xmin=490, xmax=510)
pl.figure(); plot_fft_windows(chirp_waveform); pl.axis(xmin=0, xmax=1500, ymin=-100)
pl.figure(); plot_fft_windows(clipped_waveform);
Explanation: Calculating the frequency response
Using a hamming window for the signal is strongly recommended. The only exception is when measuring the sensitivity of the calibration microphone using a standard (e.g. a pistonphone that generates 114 dB SPL at 1 kHz). When you're using a single-tone calibration, a flattop window is best.
Speaker output
Output of speaker in Pa, $O(\omega)$, can be measured by playing a signal with known RMS voltage, $V_{speaker}(\omega)$ and measuring the voltage of a calibration microphone, $V_{cal}(\omega)$, with a known sensitivity, $S_{cal} = \frac{V_{rms}}{Pa}$.
$O(\omega) = \frac{V_{cal}(\omega)}{S_{cal}}$
Alternatively, the output can be specified in dB
$O_{dB}(\omega) = 20 \times log_{10}(\frac{V_{cal}(\omega)}{S_{cal}})$
$O_{dB}(\omega) = 20 \times log_{10}(V_{cal}(\omega))-20 \times log_{10}(S_{cal})$
Experiment microphone sensitivity
If we wish to calibrate an experiment microphone, we will record the voltage, $V_{exp}(\omega)$, at the same time we measure the speaker's output in the previous exercise. Using the known output of the speaker, we can then determine the experiment microphone sensitivity, $S_{exp}(\omega)$.
$S_{exp}(\omega) = \frac{V_{exp}(\omega)}{O(\omega)}$
$S_{exp}(\omega) = \frac{V_{exp}(\omega)}{\frac{V_{cal}(\omega)}{S_{cal}}}$
$S_{exp}(\omega) = \frac{V_{exp}(\omega) \times S_{cal}}{V_{cal}(\omega)}$
The resulting sensitivity is in $\frac{V}{Pa}$. Alternatively the sensitivity can be expressed in dB, which gives us sensitivity as dB re Pa.
$S_{exp_{dB}}(\omega) = 20 \times log_{10}(V_{exp})+20 \times log_{10}(S_{cal})-20 \times log_{10}(V_{cal})$
In-ear speaker calibration
Since the acoustics of the system will change once the experiment microphone is inserted in the ear (e.g. the ear canal acts as a compliance which alters the harmonics of the system), we need to recalibrate each time we reposition the experiment microphone while it's in the ear of an animal. We need to compute the speaker transfer function, $S_{s}(\omega)$, in units of $\frac{V_{rms}}{Pa}$ which will be used to compute the actual voltage needed to drive the speaker at a given level. To compute the calibration, we generate a stimulus via the digital to analog converter (DAC) with known frequency content, $V_{DAC}(\omega)$, in units of $V_{RMS}$.
The output of the speaker is measured using the experiment microphone and can be determined using the experiment microphone sensitivity
$O(\omega) = \frac{V_{PT}(\omega)}{S_{PT}(\omega)}$
The sensitivity of the speaker can then be calculated as
$S_{s}(\omega) = \frac{V_{DAC}(\omega)}{O(\omega)}$
$S_{s}(\omega) = \frac{V_{DAC}(\omega)}{\frac{V_{PT}(\omega)}{S_{PT}(\omega)}}$
$S_{s}(\omega) = \frac{V_{DAC}(\omega) \times S_{PT}(\omega)}{V_{PT}(\omega)}$
Alternatively, we can express the sensitivity as dB
$S_{s_{dB}}(\omega) = 20 \times log_{10}(V_{DAC}(\omega))+20 \times log_{10}(S_{PT}(\omega))-20 \times log_{10}(V_{PT}(\omega))$
$S_{s_{dB}}(\omega) = 20 \times log_{10}(V_{DAC}(\omega))+S_{PT_{dB}}(\omega)-20 \times log_{10}(V_{PT}(\omega))$
Generating a tone at a specific level
Given the speaker sensitivity, $S_{s}(\omega)$, we can compute the voltage at the DAC required to generate a tone at a specific amplitude in Pa, $O$.
$V_{DAC}(\omega) = S_{s}(\omega) \times O$
Usually, however, we generally prefer to express the amplitude in dB SPL.
$O_{dB SPL} = 20 \times log_{10}(\frac{O}{20 \times 10^{-6}})$
Solving for $O$.
$O = 10^{\frac{O_{dB SPL}}{20}} \times 20 \times 10^{-6}$
Substituting $O$.
$V_{DAC}(\omega) = S_{s}(\omega) \times 10^{\frac{O_{dB SPL}}{20}} \times 20 \times 10^{-6}$
Expressed in dB
$V_{DAC_{dB}}(\omega) = 20 \times log_{10}(S_{s}(\omega)) + 20 \times log_{10}(10^{\frac{O_{dB SPL}}{20}}) + 20 \times log_{10}(20 \times 10^{-6})$
$V_{DAC_{dB}}(\omega) = 20 \times log_{10}(S_{s}(\omega)) + O_{dB SPL} + 20 \times log_{10}(20 \times 10^{-6})$
$V_{DAC_{dB}}(\omega) = S_{s_{dB}}(\omega) + O_{dB SPL} + 20 \times log_{10}(20 \times 10^{-6})$
We can use the last equation to compute the voltage since it expresses the speaker calibration in units that we have calculated. However, we need to convert the voltage back to a linear scale.
$V_{DAC}(\omega) = 10^{\frac{S_{s_{dB}}(\omega) + O_{dB SPL} + 20 \times log_{10}(20 \times 10^{-6})}{20}}$
Estimating output at a specific $V_{rms}$
Taking the equation above and solving for $O_{dB SPL}(\omega)$
$O_{dB SPL}(\omega) = 20 \times log_{10}(V_{DAC}) - S_{s_{dB}}(\omega) - 20 \times log_{10}(20 \times 10^{-6})$
Or, if we want to compute in Pa
$O(\omega) = \frac{V_{DAC}}{S_{s}(\omega)}$
Common calculations based on $S_{s_{dB}}(\omega)$ and $S_{PT_{dB}}(\omega)$
To estimate the voltage required at the DAC for a given dB SPL
$V_{DAC}(\omega) = 10^{\frac{S_{s_{dB}}(\omega) + O_{dB SPL} + 20 \times log_{10}(20 \times 10^{-6})}{20}}$
To convert the microphone voltage measurement to dB SPL
$O_{dB SPL} = V_{DAC_{dB}}(\omega) - S_{PT_{dB}}(\omega) - 20 \times log_{10}(20 \times 10^{-6})$
Given the dB SPL, $O_{dB SPL}(\omega)$ at 1 VRMS
$S(\omega) = (10^{\frac{O_{dB SPL}(\omega)}{20}} \times 20 \times 10^{-6})^{-1}$
$S_{dB}(\omega) = - [O_{dB SPL}(\omega) + 20 \times log_{10}(20 \times 10^{-6})]$
Less common calculations
Given sensitivity calculated using a different $V_{rms}$, $x$, (e.g. $10 V_{rms}$), compute the sensitivity at $1 V_{rms}$ (used by the attenuation calculation in the neurogen package).
$S_{dB}(\omega) = S_{dB_{1V}}(\omega) = S_{dB_{x}}(\omega) - 20 \times log_{10}x$
Estimating the PSD
Applying a window to the signal is not always a good idea.
End of explanation
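As a minimal sketch (not part of the original notebook), the two common calculations above can be wrapped as helper functions; the sensitivity arguments are assumed to already be expressed in dB at the frequency of interest:
def dac_voltage(speaker_sens_db, target_db_spl):
    "Voltage to request from the DAC: 10**((S_s_dB + O_dBSPL + 20*log10(20e-6))/20)."
    return 10**((speaker_sens_db + target_db_spl + 20*np.log10(20e-6))/20)
def mic_voltage_to_db_spl(v_rms, mic_sens_db):
    "Convert a measured microphone voltage to dB SPL: 20*log10(V) - S_PT_dB - 20*log10(20e-6)."
    return 20*np.log10(v_rms) - mic_sens_db - 20*np.log10(20e-6)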
print(0.5**2*8)
R = 8
P = 1
V = 2.83
print('Voltage is', R*np.sqrt(P/R))
print('Power is', V**2/R)
Explanation: Designing an output circuit
Speaker sensitivity is typically reported in $\frac{dB}{W}$ at a distance of 1 meter. For an $8\Omega$ speaker, $2.83V$ produces exactly $1W$. We know this because $P = I^2 \times R$ and $V = I \times R$. Solving for $I$:
$I = \sqrt{\frac{P}{R}}$ and $I = \frac{V}{R}$
$\sqrt{\frac{P}{R}} = \frac{V}{R}$
$P = \frac{V^2}{R}$
$V = R \times \sqrt{\frac{P}{R}}$
End of explanation
P = 0.5
R = 8
print('Voltage is', R*np.sqrt(P/R))
print('Current is', np.sqrt(P/R))
P_test = 0.1
P_max = 1
O_test = 90
dB_incr = 10*np.log10(P_max/P_test)
O_max = O_test+dB_incr
print('{:0.2f} dB increase giving {:0.2f} max output'.format(dB_incr, O_max))
Explanation: Let's say we have an $8\Omega$ speaker with a handling capacity is $0.5W$. If we want to achieve the maximum (i.e. $0.5W$), then we need to determine the voltage that will achieve that wattage given the speaker rating.
$V = R \times \sqrt{\frac{P}{R}}$
$V = 8\Omega \times \sqrt{\frac{0.5W}{8\Omega}}$
$V = 2V$
Even if your system can generate larger values, there is no point in driving the speaker at values greater than 1V. It will simply distort or get damaged. However, your system needs to be able to provide the appropriate current to drive the speaker.
$I = \sqrt{\frac{P}{R}}$
$I = \sqrt{\frac{0.5W}{8\Omega}}$
$I = 0.25A$
This is based on nominal specs.
So, what is the maximum output in dB SPL? Assume that the spec sheet reports $92dB$ at $0.3W$.
$10 \times log_{10}(0.5W/0.3W) = 2.2 dB$
This means that we will get only $2.2dB$ more for a total of $94.2 dB SPL$.
$10 \times log_{10}(0.1W/0.3W) = -4.7 dB$
End of explanation
P_max = 0.3 # rated long-term capacity of the speaker
R = 8 #
V = R * np.sqrt(P_max/R)
print('{:0.2f} max safe long-term voltage'.format(V))
P_max = 0.5 # rated long-term capacity of the speaker
R = 8 #
V = R * np.sqrt(P_max/R)
print('{:0.2f} max safe short-term voltage'.format(V))
R_speaker = 8
V_speaker = 2
V_out = 10
R = (R_speaker*(V_out-V_speaker))/V_speaker
print('Series divider resistor is {:.2f}'.format(R))
Explanation: Now that you've figured out the specs of your speaker, you need to determine whether you need a voltage divider to bring output voltage down to a safe level (especially if you are trying to use the full range of your DAC).
$V_{speaker} = V_{out} \times \frac{R_{speaker}}{R+R_{speaker}}$
Don't forget to compensate for any gain you may have built into the op-amp and buffer circuit.
$R = \frac{R_{speaker} \times (V_{out}-V_{speaker})}{V_{speaker}}$
End of explanation
def plot_fft_windows(s):
for window in ('flattop', 'boxcar', 'hamming'):
w = signal.get_window(window, len(s))
csd = np.fft.rfft(s*w/w.mean())
psd = np.real(csd*np.conj(csd))/len(s)
p = 20*np.log10(psd)
f = np.fft.rfftfreq(len(s), fs**-1)
pl.plot(f, p, label=window)
pl.legend()
fs = 100e3
duration = 50e-3
t = np.arange(int(duration*fs))/fs
f1 = 500
f2 = f1/1.2
print(duration*f1)
print(duration*f2)
coerced_f2 = np.round(duration*f2)/duration
print(f2, coerced_f2)
t1 = np.sin(2*np.pi*f1*t)
t2 = np.sin(2*np.pi*f2*t)
t2_coerced = np.sin(2*np.pi*coerced_f2*t)
pl.figure(); plot_fft_windows(t1); pl.axis(xmax=f1*2)
pl.figure(); plot_fft_windows(t2); pl.axis(xmax=f1*2)
pl.figure(); plot_fft_windows(t2_coerced); pl.axis(xmax=f1*2)
pl.figure(); plot_fft_windows(t1+t2); pl.axis(xmax=f1*2)
pl.figure(); plot_fft_windows(t1+t2_coerced); pl.axis(xmax=f1*2)
Explanation: Good details here http://www.dspguide.com/ch9/1.htm
End of explanation
n = 50e3
npow2 = 2**np.ceil(np.log2(n))
s = np.random.uniform(-1, 1, size=n)
spow2 = np.random.uniform(-1, 1, size=npow2)
%timeit np.fft.fft(s)
%timeit np.fft.fft(spow2)
Explanation: Size of the FFT
End of explanation
rs = np.random.RandomState(seed=1)
a1 = rs.uniform(-1, 1, 5000)
a2 = rs.uniform(-1, 1, 5000)
rs = np.random.RandomState(seed=1)
b1 = rs.uniform(-1, 1, 3330)
b2 = rs.uniform(-1, 1, 3330)
b3 = rs.uniform(-1, 1, 10000-6660)
np.equal(np.concatenate((a1, a2)), np.concatenate((b1, b2, b3))).all()
b, a = signal.iirfilter(7, (1e3/5000, 2e3/5000), rs=85, rp=0.3, ftype='ellip', btype='band')
zi = signal.lfilter_zi(b, a)
a1f, azf1 = signal.lfilter(b, a, a1, zi=zi)
a2f, azf2 = signal.lfilter(b, a, a2, zi=azf1)
b1f, bzf1 = signal.lfilter(b, a, b1, zi=zi)
b2f, bzf2 = signal.lfilter(b, a, b2, zi=bzf1)
b3f, bzf3 = signal.lfilter(b, a, b3, zi=bzf2)
print(np.equal(np.concatenate((a1f, a2f)), np.concatenate((b1f, b2f, b3f))).all())
pl.plot(np.concatenate((b1f, b2f, b3f)))
zi = signal.lfilter_zi(b, a)
a1f = signal.lfilter(b, a, a1)
a2f = signal.lfilter(b, a, a2)
b1f = signal.lfilter(b, a, b1)
b2f = signal.lfilter(b, a, b2)
b3f = signal.lfilter(b, a, b3)
print(np.equal(np.concatenate((a1f, a2f)), np.concatenate((b1f, b2f, b3f))).all())
pl.plot(np.concatenate((b1f, b2f, b3f)))
Explanation: Ensuring reproducible generation of bandpass filtered noise
End of explanation
frequency = np.fft.rfftfreq(int(200e3), 1/200e3)
flb, fub = 4e3, 64e3
mask = (frequency >= flb) & (frequency < fub)
noise_floor = 0
for sl in (56, 58, 60, 62, 64, 66, 96, 98):
power_db = np.ones_like(frequency)*noise_floor
power_db[mask] = sl
power = (10**(power_db/20.0))*20e-6
#power_sum = integrate.trapz(power**2, frequency)**0.5
power_sum = np.sum(power**2)**0.5
total_db = 20*np.log10(power_sum/20e-6)
pl.semilogx(frequency, power_db)
print(f'{total_db:.2f}dB with spectrum level at {sl:.2f}dB, expected {sl+10*np.log10(fub-flb):0.2f}dB')
frequency = np.fft.rfftfreq(int(100e3), 1/100e3)
mask = (frequency >= 4e3) & (frequency < 8e3)
for noise_floor in (-20, -10, 0, 10, 20, 30, 40, 50, 60):
power_db = np.ones_like(frequency)*noise_floor
power_db[mask] = 65
power = (10**(power_db/20.0))*20e-6
#power_sum = integrate.trapz(power**2, frequency)**0.5
power_sum = np.sum(power**2)**0.5
total_db = 20*np.log10(power_sum/20e-6)
print('{}dB SPL with noise floor at {}dB SPL'.format(int(total_db), noise_floor))
# Compute power in dB then convert to power in volts
power_db = np.ones_like(frequency)*30
power_db[mask] = 65
power = (10**(power_db/20.0))*20e-6
psd = power/2*len(power)*np.sqrt(2)
phase = np.random.uniform(0, 2*np.pi, len(psd))
csd = psd*np.exp(-1j*phase)
signal = np.fft.irfft(csd)
pl.plot(signal)
rms = np.mean(signal**2)**0.5
print(rms)
print('RMS power, dB SPL', 20*np.log10(rms/20e-6))
signal = np.random.uniform(-1, 1, len(power))
rms = np.mean(signal**2)**0.5
20*np.log10(rms/20e-6)
csd = np.fft.rfft(signal)
psd = np.real(csd*np.conj(csd))**2
print(psd[:5])
psd = np.abs(csd)**2
print(psd[:5])
Explanation: Computing noise power
End of explanation
flb, fub = 100, 100e3
# resonant frequency of cable
c = 299792458 # speed of light in m/s
l = 3 # length of cable in meters
resonant_frequency = 1/(l*4/c)
flb, fub = 100, 100e3
llb = c/flb/4
lub = c/fub/4
print(llb, lub)
# As shown here, since we're not running cables for 750 meters,
# we don't have an issue.
c/resonant_frequency/4.0
Explanation: Analysis of grounding
Signal cables resonate when physical length is a quarter wavelength.
End of explanation
f = 14000.0 # Hz, cps
w = (1/f)*340.0
w*1e3 # resonance in mm assuming quarter wavelength is what's important
length = 20e-3
period = length/340.0
frequency = 1.0/period
frequency
import numpy as np
def exp_ramp_v1(f0, k, t):
return f0*k**t
def exp_ramp_v2(f0, f1, t):
k = np.exp(np.log(f1/f0)/t[-1])
return exp_ramp_v1(f0, k, t)
t = np.arange(10e3)/10e3
f0 = 0.5e3
f1 = 50e3
e1 = exp_ramp_v2(50e3, 200e3, t)
e2 = exp_ramp_v2(0.5e3, 200e3, t)
pl.plot(t, e1)
pl.plot(t, e2)
Explanation: Resonance of acoustic tube
End of explanation
fs = 1000.0
f = np.linspace(1, 200, int(fs))
t = np.arange(fs)/fs
pl.plot(t, np.sin(f.cumsum()/fs))
(2*np.pi*f[-1]*t[-1]) % 2*np.pi
(f.cumsum()[-1]/fs) % 2*np.pi
Explanation: chirps
End of explanation
signal.iirfilter?
signal.freqs?
from scipy import signal
fs = 100e3
kwargs = dict(N=1, Wn=1e3/(2*fs), rp=0.4, rs=50, btype='highpass', ftype='ellip')
b, a = signal.iirfilter(analog=False, **kwargs)
ba, aa = signal.iirfilter(analog=True, **kwargs)
t, ir = signal.impulse([ba, aa], 50)
w, h = signal.freqz(b, a)
pl.figure()
pl.plot(t, ir)
pl.figure()
pl.plot(w, h)
rs = np.random.RandomState(seed=1)
noise = rs.uniform(-1, 1, 5000)
f = np.linspace(100, 25000, int(fs))
t = np.arange(fs)/fs
chirp = np.sin(f.cumsum()/fs)
psd = np.abs(np.fft.rfft(chirp)**2)
freq = np.fft.rfftfreq(len(chirp), fs**-1)
pl.semilogx(freq, 20*np.log10(psd), 'k')
chirp_ir = signal.lfilter(b, a, chirp)
psd_ir = np.abs(np.fft.rfft(chirp_ir)**2)
pl.semilogx(freq, 20*np.log10(psd_ir), 'r')
#pl.axis(ymin=40, xmin=10, xmax=10000)
chirp_eq = signal.lfilter(ir**-1, 1, chirp_ir)
psd_eq = np.abs(np.fft.rfft(chirp_eq)**2)
pl.semilogx(freq, 20*np.log10(psd_eq), 'g')
Explanation: Converting band level to spectrum level
$BL = 10 \times log{\frac{I_{tot}}{I_{ref}}}$ where $I_{tot} = I_{SL}\times\Delta f$. Using the multiplication rule for logarithms, $BL = 10 \times log{\frac{I_{SL} \times 1 Hz}{I_{ref}}} + 10 \times log \frac{\Delta f}{1 Hz}$, which simplifies to $BL = ISL_{ave} + 10\times log(\Delta f)$
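A quick worked example (added sketch with assumed numbers: a 65 dB spectrum level over a 4 kHz band):
isl_db = 65.0                  # assumed average spectrum level
delta_f = 4e3                  # assumed band width in Hz
print(isl_db + 10*np.log10(delta_f))   # band level, roughly 101 dB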
Equalizing a signal using the impulse response
End of explanation |
4,464 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning
Step1: Writing the objective function
We can decompose the objective function as the sum of a least squares loss function and an $\ell_1$ regularizer.
Step2: Generating data
We generate training examples and observations that are linearly related; we make the relationship sparse, and we'll see how lasso will approximately recover it.
Step3: Fitting the model
All we need to do to fit the model is create a CVXPY problem where the objective is to minimize the objective function defined above. We make $\lambda$ a CVXPY parameter, so that we can use a single CVXPY problem to obtain estimates for many values of $\lambda$.
Step4: Evaluating the model
Just as we saw for ridge regression, regularization improves generalizability.
Step5: Regularization path and feature selection
As $\lambda$ increases, the parameters are driven to $0$. By $\lambda \approx 10$, approximately 80 percent of the coefficients are exactly zero. This parallels the fact that $\beta^*$ was generated such that 80 percent of its entries were zero. The features corresponding to the slowest decaying coefficients can be interpreted as the most important ones.
Qualitatively, lasso differs from ridge in that the former often drives parameters to exactly zero, whereas the latter shrinks parameters but does not usually zero them out. That is, lasso results in sparse models; ridge (usually) does not. | Python Code:
import cvxpy as cp
import numpy as np
import matplotlib.pyplot as plt
Explanation: Machine Learning: Lasso Regression
Lasso regression is, like ridge regression, a shrinkage method. It differs from ridge regression in its choice of penalty: lasso imposes an $\ell_1$ penalty on the parameters $\beta$. That is, lasso finds an assignment to $\beta$ that minimizes the function
$$f(\beta) = \|X\beta - Y\|_2^2 + \lambda \|\beta\|_1,$$
where $\lambda$ is a hyperparameter and, as usual, $X$ is the training data and $Y$ the observations. The $\ell_1$ penalty encourages sparsity in the learned parameters, and, as we will see, can drive many coefficients to zero. In this sense, lasso is a continuous feature selection method.
In this notebook, we show how to fit a lasso model using CVXPY, how to evaluate the model, and how to tune the hyperparameter $\lambda$.
End of explanation
def loss_fn(X, Y, beta):
return cp.norm2(cp.matmul(X, beta) - Y)**2
def regularizer(beta):
return cp.norm1(beta)
def objective_fn(X, Y, beta, lambd):
return loss_fn(X, Y, beta) + lambd * regularizer(beta)
def mse(X, Y, beta):
return (1.0 / X.shape[0]) * loss_fn(X, Y, beta).value
Explanation: Writing the objective function
We can decompose the objective function as the sum of a least squares loss function and an $\ell_1$ regularizer.
End of explanation
def generate_data(m=100, n=20, sigma=5, density=0.2):
"Generates data matrix X and observations Y."
np.random.seed(1)
beta_star = np.random.randn(n)
idxs = np.random.choice(range(n), int((1-density)*n), replace=False)
for idx in idxs:
beta_star[idx] = 0
X = np.random.randn(m,n)
Y = X.dot(beta_star) + np.random.normal(0, sigma, size=m)
return X, Y, beta_star
m = 100
n = 20
sigma = 5
density = 0.2
X, Y, _ = generate_data(m, n, sigma)
X_train = X[:50, :]
Y_train = Y[:50]
X_test = X[50:, :]
Y_test = Y[50:]
Explanation: Generating data
We generate training examples and observations that are linearly related; we make the relationship sparse, and we'll see how lasso will approximately recover it.
End of explanation
beta = cp.Variable(n)
lambd = cp.Parameter(nonneg=True)
problem = cp.Problem(cp.Minimize(objective_fn(X_train, Y_train, beta, lambd)))
lambd_values = np.logspace(-2, 3, 50)
train_errors = []
test_errors = []
beta_values = []
for v in lambd_values:
lambd.value = v
problem.solve()
train_errors.append(mse(X_train, Y_train, beta))
test_errors.append(mse(X_test, Y_test, beta))
beta_values.append(beta.value)
Explanation: Fitting the model
All we need to do to fit the model is create a CVXPY problem where the objective is to minimize the objective function defined above. We make $\lambda$ a CVXPY parameter, so that we can use a single CVXPY problem to obtain estimates for many values of $\lambda$.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
def plot_train_test_errors(train_errors, test_errors, lambd_values):
plt.plot(lambd_values, train_errors, label="Train error")
plt.plot(lambd_values, test_errors, label="Test error")
plt.xscale("log")
plt.legend(loc="upper left")
plt.xlabel(r"$\lambda$", fontsize=16)
plt.title("Mean Squared Error (MSE)")
plt.show()
plot_train_test_errors(train_errors, test_errors, lambd_values)
Explanation: Evaluating the model
Just as we saw for ridge regression, regularization improves generalizability.
End of explanation
def plot_regularization_path(lambd_values, beta_values):
num_coeffs = len(beta_values[0])
for i in range(num_coeffs):
plt.plot(lambd_values, [wi[i] for wi in beta_values])
plt.xlabel(r"$\lambda$", fontsize=16)
plt.xscale("log")
plt.title("Regularization Path")
plt.show()
plot_regularization_path(lambd_values, beta_values)
Explanation: Regularization path and feature selection
As $\lambda$ increases, the parameters are driven to $0$. By $\lambda \approx 10$, approximately 80 percent of the coefficients are exactly zero. This parallels the fact that $\beta^*$ was generated such that 80 percent of its entries were zero. The features corresponding to the slowest decaying coefficients can be interpreted as the most important ones.
Qualitatively, lasso differs from ridge in that the former often drives parameters to exactly zero, whereas the latter shrinks parameters but does not usually zero them out. That is, lasso results in sparse models; ridge (usually) does not.
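To make the sparsity statement concrete, the zeroed coefficients can be counted directly (a small added check; beta_values and lambd_values are the lists computed above, and the 1e-4 threshold is an arbitrary numerical tolerance):
sparsity = [np.mean(np.abs(b) <= 1e-4) for b in beta_values]
plt.plot(lambd_values, sparsity)
plt.xscale("log")
plt.xlabel(r"$\lambda$", fontsize=16)
plt.title("Fraction of (near-)zero coefficients")
plt.show()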
End of explanation |
4,465 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Heap
The heap structure ("tas" in French) is used for sorting. It can also be used to retrieve the top k elements of a list.
Step1: A heap can be seen as an array $T$ that satisfies a fairly simple condition: for every index $i$, $T[i] \geqslant \max(T[2i+1], T[2i+2])$. It follows that the first element of the array is necessarily the largest. Now, how do we transform an array into one that satisfies this constraint?
Step2: Building a heap
Step3: Since it is not easy to check that this is a heap, we draw it.
Drawing a heap
Step4: The number in brackets is the position, the other number is the value at that position. This representation reveals a binary tree structure.
First version
Step5: Same thing with indices instead of values
Step6: Cost of the algorithm
Step7: Roughly linear, as expected.
Step8: Hard to say: $O(n\ln n)$ at worst, $O(n)$ at best.
Simplified version
Do we really need _heapify_max_bottom_position? | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: Heap
The heap structure ("tas" in French) is used for sorting. It can also be used to retrieve the top k elements of a list.
End of explanation
%matplotlib inline
Explanation: A heap can be seen as an array $T$ that satisfies a fairly simple condition: for every index $i$, $T[i] \geqslant \max(T[2i+1], T[2i+2])$. It follows that the first element of the array is necessarily the largest. Now, how do we transform an array into one that satisfies this constraint?
End of explanation
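Before building a heap, a small helper that checks the property $T[i] \geqslant \max(T[2i+1], T[2i+2])$ can be handy (an added sketch, not in the original notebook):
def is_heap(tab):
    "Checks the max-heap property tab[i] >= tab[2*i+1] and tab[i] >= tab[2*i+2]."
    return all(tab[i] >= tab[c]
               for i in range(len(tab))
               for c in (2*i + 1, 2*i + 2)
               if c < len(tab))
is_heap([10, 7, 9, 1, 2, 3]), is_heap([1, 2, 3])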
def swap(tab, i, j):
"Echange deux éléments."
tab[i], tab[j] = tab[j], tab[i]
def entas(heap):
"Organise un ensemble selon un tas."
modif = 1
while modif > 0:
modif = 0
i = len(heap) - 1
while i > 0:
root = (i-1) // 2
if heap[root] < heap[i]:
swap(heap, root, i)
modif += 1
i -= 1
return heap
ens = [1,2,3,4,7,10,5,6,11,12,3]
entas(ens)
Explanation: Building a heap
End of explanation
from pyensae.graphhelper import draw_diagram
def dessine_tas(heap):
rows = ["blockdiag {"]
for i, v in enumerate(heap):
if i*2+1 < len(heap):
rows.append('"[{}]={}" -> "[{}]={}";'.format(
i, heap[i], i * 2 + 1, heap[i*2+1]))
if i*2+2 < len(heap):
rows.append('"[{}]={}" -> "[{}]={}";'.format(
i, heap[i], i * 2 + 2, heap[i*2+2]))
rows.append("}")
return draw_diagram("\n".join(rows))
ens = [1,2,3,4,7,10,5,6,11,12,3]
dessine_tas(entas(ens))
Explanation: Since it is not easy to check that this is a heap, we draw it.
Drawing a heap
End of explanation
def swap(tab, i, j):
"Echange deux éléments."
tab[i], tab[j] = tab[j], tab[i]
def _heapify_max_bottom(heap):
"Organise un ensemble selon un tas."
modif = 1
while modif > 0:
modif = 0
i = len(heap) - 1
while i > 0:
root = (i-1) // 2
if heap[root] < heap[i]:
swap(heap, root, i)
modif += 1
i -= 1
def _heapify_max_up(heap):
"Organise un ensemble selon un tas."
i = 0
while True:
left = 2*i + 1
right = left+1
if right < len(heap):
if heap[left] > heap[i] >= heap[right]:
swap(heap, i, left)
i = left
elif heap[right] > heap[i]:
swap(heap, i, right)
i = right
else:
break
elif left < len(heap) and heap[left] > heap[i]:
swap(heap, i, left)
i = left
else:
break
def topk_min(ens, k):
"Retourne les k plus petits éléments d'un ensemble."
heap = ens[:k]
_heapify_max_bottom(heap)
for el in ens[k:]:
if el < heap[0]:
heap[0] = el
_heapify_max_up(heap)
return heap
ens = [1,2,3,4,7,10,5,6,11,12,3]
for k in range(1, len(ens)-1):
print(k, topk_min(ens, k))
Explanation: The number in brackets is the position, the other number is the value at that position. This representation reveals a binary tree structure.
First version
End of explanation
def _heapify_max_bottom_position(ens, pos):
"Organise un ensemble selon un tas."
modif = 1
while modif > 0:
modif = 0
i = len(pos) - 1
while i > 0:
root = (i-1) // 2
if ens[pos[root]] < ens[pos[i]]:
swap(pos, root, i)
modif += 1
i -= 1
def _heapify_max_up_position(ens, pos):
"Organise un ensemble selon un tas."
i = 0
while True:
left = 2*i + 1
right = left+1
if right < len(pos):
if ens[pos[left]] > ens[pos[i]] >= ens[pos[right]]:
swap(pos, i, left)
i = left
elif ens[pos[right]] > ens[pos[i]]:
swap(pos, i, right)
i = right
else:
break
elif left < len(pos) and ens[pos[left]] > ens[pos[i]]:
swap(pos, i, left)
i = left
else:
break
def topk_min_position(ens, k):
"Retourne les positions des k plus petits éléments d'un ensemble."
pos = list(range(k))
_heapify_max_bottom_position(ens, pos)
for i, el in enumerate(ens[k:]):
if el < ens[pos[0]]:
pos[0] = k + i
_heapify_max_up_position(ens, pos)
return pos
ens = [1,2,3,7,10,4,5,6,11,12,3]
for k in range(1, len(ens)-1):
pos = topk_min_position(ens, k)
print(k, pos, [ens[i] for i in pos])
import numpy.random as rnd
X = rnd.randn(10000)
%timeit topk_min(X, 20)
%timeit topk_min_position(X, 20)
Explanation: Same thing with indices instead of values
End of explanation
from cpyquickhelper.numbers import measure_time
from tqdm import tqdm
from pandas import DataFrame
rows = []
for n in tqdm(list(range(1000, 20001, 1000))):
X = rnd.randn(n)
res = measure_time('topk_min_position(X, 100)',
{'X': X, 'topk_min_position': topk_min_position},
div_by_number=True,
number=10)
res["size"] = n
rows.append(res)
df = DataFrame(rows)
df.head()
import matplotlib.pyplot as plt
df[['size', 'average']].set_index('size').plot()
plt.title("Coût topk en fonction de la taille du tableau");
Explanation: Coût de l'algorithme
End of explanation
rows = []
X = rnd.randn(10000)
for k in tqdm(list(range(500, 2001, 150))):
res = measure_time('topk_min_position(X, k)',
{'X': X, 'topk_min_position': topk_min_position, 'k': k},
div_by_number=True,
number=5)
res["k"] = k
rows.append(res)
df = DataFrame(rows)
df.head()
df[['k', 'average']].set_index('k').plot()
plt.title("Coût topk en fonction de k");
Explanation: Roughly linear, as expected.
End of explanation
def _heapify_max_up_position_simple(ens, pos, first):
"Organise un ensemble selon un tas."
i = first
while True:
left = 2*i + 1
right = left+1
if right < len(pos):
if ens[pos[left]] > ens[pos[i]] >= ens[pos[right]]:
swap(pos, i, left)
i = left
elif ens[pos[right]] > ens[pos[i]]:
swap(pos, i, right)
i = right
else:
break
elif left < len(pos) and ens[pos[left]] > ens[pos[i]]:
swap(pos, i, left)
i = left
else:
break
def topk_min_position_simple(ens, k):
"Retourne les positions des k plus petits éléments d'un ensemble."
pos = list(range(k))
pos[k-1] = 0
for i in range(1, k):
pos[k-i-1] = i
_heapify_max_up_position_simple(ens, pos, k-i-1)
for i, el in enumerate(ens[k:]):
if el < ens[pos[0]]:
pos[0] = k + i
_heapify_max_up_position_simple(ens, pos, 0)
return pos
ens = [1,2,3,7,10,4,5,6,11,12,3]
for k in range(1, len(ens)-1):
pos = topk_min_position_simple(ens, k)
print(k, pos, [ens[i] for i in pos])
X = rnd.randn(10000)
%timeit topk_min_position_simple(X, 20)
Explanation: Hard to say: $O(n\ln n)$ at worst, $O(n)$ at best.
Simplified version
Do we really need _heapify_max_bottom_position?
End of explanation |
4,466 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this notebook I retrieve a list of Tore Supra pulses obtained with Ged.
For each of these pulses, I retrieve the FCI (ICRH) power signals and derive the maximum coupled power and the duration at maximum RF power. The goal is to build a small database of the performance of the Tore Supra FCI (ICRH) system.
Tore Supra database
Step1: JET database
Step2: LHD
Step3: EAST
Step4: Plot | Python Code:
from pywed import * # Tore Supra database library
%pylab inline
pulse_list = np.loadtxt('data/liste_choc_fci.txt', dtype=int)
pulse_list = np.arange(44092, 48311, dtype='int')
ts_max_power = []
ts_max_duration = []
for pulse in pulse_list:
#print('Retrieve date for pulse {}'.format(pulse))
# retrieve ICRH power from Tore Supra database
try:
data = tsbase(int(pulse), 'GPUIFCI')
# Check the case when power is alway nul during shot
non_zero_values = np.squeeze(np.nonzero(data[0][:,3]))
if non_zero_values.size>1:
# Max power in MW, addition of all launchers
# (4th columns of Gpuifci power signal)
ts_max_power.append(np.max(data[0][:,3], axis=0))
# Max duration : take the max power
# and calculates the time duration btw first and last non-zero values
t_start = data[1][non_zero_values[0],0]
t_end = data[1][non_zero_values[-1],0]
t_duration = t_end - t_start
ts_max_duration.append(t_duration)
except PyWEDException:
pass
#print('no data')
pulse
np.save('TS_data_44092-48310.npy', np.vstack([np.array(ts_max_power), np.array(ts_max_duration)]))
ts_data_35000_44091.shape
ts_data_35000_44091 = np.load('TS_data_35000-44092.npy')
ts_data_44092_48310 = np.load('TS_data_44092-48310.npy')
ts_max_power = np.concatenate((ts_data_35000_44091[0,:],ts_data_44092_48310[0,:]))
ts_max_duration = np.concatenate((ts_data_35000_44091[1,:],ts_data_44092_48310[1,:]))
scatter(ts_max_power, ts_max_duration, alpha=0.2)
ylim(1,1.1*60*60)
xlim(0,10)
xlabel('RF Max. Coupled Power [MW]', fontsize=14)
ylabel('RF Max. Duration [s]', fontsize=14)
yscale('log')
yticks([1, 10, 100, 1000], ['1', '10', '100', '1000'], fontsize=14)
xticks(fontsize=14)
Explanation: In this notebook I retrieve a list of Tore Supra pulses obtained with Ged.
For each of these pulses, I retrieve the FCI (ICRH) power signals and derive the maximum coupled power and the duration at maximum RF power. The goal is to build a small database of the performance of the Tore Supra FCI (ICRH) system.
Tore Supra database
End of explanation
import MDSplus as mds
conx = mds.Connection('mdsplus.jet.efda.org')
print(conx.hostspec)
jet_pulse_list = [68752, 68809, 68110, 65947, 78069, 73520,77894,78125,77404,78070,77293,76721,76722]
jet_pulse_list = range(68955, 76723) # CW
jet_pulse_list = range(80000, 87944) # ILW
jet_max_power = []
jet_max_duration = []
for pulse in jet_pulse_list:
try:
y = np.array(conx.get('_sig=jet("ppf/icrh/ptot", '+str(pulse)+')')) / 1e6 # total ICRH power in MW
t = np.array(conx.get('dim_of(_sig)')) # time vector
non_zero_values = np.squeeze(np.nonzero(y))
# continue only if the y vector is not 0
if non_zero_values.size:
jet_max_power.append(np.max(y))
t_start = t[non_zero_values[0]]
t_end = t[non_zero_values[-1]]
t_duration = t_end - t_start
jet_max_duration.append(t_duration)
except KeyError :
pass#print('no data')
np.save('JET_power_ILW.npy', np.array(jet_max_power))
np.save('JET_duration_ILW.npy', np.array(jet_max_duration))
JET_max_power = np.load('JET_power_ILW.npy')
JET_max_power.size
Explanation: JET database
End of explanation
# references
# Seki 2013
# Kasahara 2010 _Study of High power ICRF antenna design in LHD
lhd_power = [0.55, 0.52, 0.23, 0.49, 0.24, 0.7, 0.9, 0.4, 3, 3.5, 4.5, 0.96]
lhd_duration = [1*60*60, 0.5*60*60, 1*60*60, 0.5*60*60, 1*60*60, 1135, 48*60, 54*60, 2, 2, 2, 2859]
Explanation: LHD
End of explanation
# references:
# B.Wan NF 2013
# Y.P.Zhao FED 2014
east_power = [0.6, 1.6, 2, 0.8]
east_duration = [5, 6, 4, 30]
Explanation: EAST
End of explanation
import matplotlib as mpl
#To make sure we have always the same matplotlib settings
#(the ones in comments are the ipython notebook settings)
#mpl.rcParams['figure.figsize']=(8.0,6.0) #(6.0,4.0)
#mpl.rcParams['font.size']=12 #10
mpl.rcParams['savefig.dpi']=100 #72
#mpl.rcParams['figure.subplot.bottom']=.1 #.125
jet_max_power = np.load('JET_power_ILW.npy')
jet_max_duration = np.load('JET_duration_ILW.npy')
scatter(ts_max_power, ts_max_duration, marker='.', s=30, color=(31/255, 119/255, 180/255), alpha=0.8)
scatter(jet_max_power, jet_max_duration, marker='.', s=30, color=(214/255, 39/255, 40/255), alpha=0.3)
scatter(lhd_power, lhd_duration, s=30, marker='s', color='k')
scatter(east_power, east_duration, marker='D', s=30, color='#FFB800')
ylim(1,1.5*60*60)
xlim(0,10)
xlabel('RF Max. Coupled Power [MW]', fontsize=14)
ylabel('RF Max. Duration [s]', fontsize=14)
yscale('log')
yticks([1, 10, 100, 1000], ['1', '10', '100', '1000'], fontsize=14)
xticks(fontsize=14)
grid(True, axis='y')
# Put a legend to the right of the current axis
lgd = legend(('Tore Supra', 'JET-ILW', 'LHD', 'EAST'), loc=5, bbox_to_anchor=(1.02, 0, 0.5, 1),
ncol=1, mode="expand", borderaxespad=0., frameon=False, fontsize=14, scatterpoints=1)
# Remove the plot frame lines. They are unnecessary chartjunk.
gca().spines["top"].set_visible(False)
gca().spines["bottom"].set_visible(False)
gca().spines["right"].set_visible(False)
gca().spines["left"].set_visible(False)
# Ensure that the axis ticks only show up on the bottom and left of the plot.
# Ticks on the right and top of the plot are generally unnecessary chartjunk.
gca().get_xaxis().tick_bottom()
gca().get_yaxis().tick_left()
# Remove the tick marks; they are unnecessary with the tick lines we just plotted.
tick_params(axis="x", which="both", bottom="off", top="off",
labelbottom="on", left="off", right="off", labelleft="on")
gcf().set_size_inches(5,3)
savefig('ICRF_Power-vs-duration.png', dpi=120, bbox_inches='tight', pad_inches=0)
Explanation: Plot
End of explanation |
4,467 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
What is the true normal human body temperature?
Background
The mean normal body temperature was held to be 37$^{\circ}$C or 98.6$^{\circ}$F for more than 120 years since it was first conceptualized and reported by Carl Wunderlich in a famous 1868 book. In 1992, this value was revised to 36.8$^{\circ}$C or 98.2$^{\circ}$F.
Exercise
In this exercise, you will analyze a dataset of human body temperatures and employ the concepts of hypothesis testing, confidence intervals, and statistical significance.
Answer the following questions in this notebook below and submit to your Github account.
Is the distribution of body temperatures normal?
Remember that this is a condition for the CLT, and hence the statistical tests we are using, to apply.
Is the true population mean really 98.6 degrees F?
Bring out the one sample hypothesis test! In this situation, is it appropriate to apply a z-test or a t-test? How will the result be different?
At what temperature should we consider someone's temperature to be "abnormal"?
Start by computing the margin of error and confidence interval.
Is there a significant difference between males and females in normal temperature?
Set up and solve for a two sample hypothesis testing.
You can include written notes in notebook cells using Markdown
Step1: Is the distribution of body temperatures normal?
The data are slightly right-skewed but nearly normal. We can rely on the CLT for hypothesis testing because the sample size (n = 130) is larger than 30.
Step2: Is the true population mean really 98.6 degrees F?
Bring out the one sample hypothesis test! In this situation, is it appropriate to apply a z-test or a t-test? How will the result be different?
For z-test Vs t-test
Step3: Since the p-value is far below 5%, we can reject the null hypothesis that the true population mean is 98.6 degrees Fahrenheit.
Testing for t-test
Step4: The t-test p-value differs slightly from the z-test p-value, but the evidence is still strong enough to reject the null hypothesis.
==================================================================================================================
At what temperature should we consider someone's temperature to be "abnormal"?
A 95% confidence interval is a reasonable basis for this assessment.
Margin of Error (M.E) = (critical value * standard error)
Critical value for confidence interval 95% = 1.96
Confidence interval = (Mean - Margin of Error, Mean + Margin of Error)
Step5: A temperature outside this range could be considered abnormal.
=======================================================================================================================
Is there a significant difference between males and females in normal temperature?
Step6: Again sample size is large enough to test using z-test. | Python Code:
import pandas as pd
%matplotlib inline
df = pd.read_csv('data/human_body_temperature.csv')
Explanation: What is the true normal human body temperature?
Background
The mean normal body temperature was held to be 37$^{\circ}$C or 98.6$^{\circ}$F for more than 120 years since it was first conceptualized and reported by Carl Wunderlich in a famous 1868 book. In 1992, this value was revised to 36.8$^{\circ}$C or 98.2$^{\circ}$F.
Exercise
In this exercise, you will analyze a dataset of human body temperatures and employ the concepts of hypothesis testing, confidence intervals, and statistical significance.
Answer the following questions in this notebook below and submit to your Github account.
Is the distribution of body temperatures normal?
Remember that this is a condition for the CLT, and hence the statistical tests we are using, to apply.
Is the true population mean really 98.6 degrees F?
Bring out the one sample hypothesis test! In this situation, is it appropriate to apply a z-test or a t-test? How will the result be different?
At what temperature should we consider someone's temperature to be "abnormal"?
Start by computing the margin of error and confidence interval.
Is there a significant difference between males and females in normal temperature?
Set up and solve for a two sample hypothesis testing.
You can include written notes in notebook cells using Markdown:
- In the control panel at the top, choose Cell > Cell Type > Markdown
- Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet
Resources
Information and data sources: http://www.amstat.org/publications/jse/datasets/normtemp.txt, http://www.amstat.org/publications/jse/jse_data_archive.htm
Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet
End of explanation
df.hist()
df.describe()
Explanation: Is the distribution of body temperatures normal?
The data are slightly right-skewed but nearly normal. We can rely on the CLT for hypothesis testing because the sample size (n = 130) is larger than 30.
End of explanation
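Beyond eyeballing the histogram, a quick sketch of formal normality checks (using scipy.stats; these tests are not part of the original write-up) could look like this:
import scipy.stats as stats

# Hypothetical cross-check of normality, complementing the visual inspection above.
stat_dag, p_dag = stats.normaltest(df['temperature'])   # D'Agostino K^2 test
stat_sw, p_sw = stats.shapiro(df['temperature'])        # Shapiro-Wilk test
print('normaltest p-value:', p_dag)
print('shapiro p-value:', p_sw)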
import scipy.special
n = df.count()['temperature']
sigma = df['temperature'].std()
x_bar = df['temperature'].mean()
standard_error = sigma/((n)**(1.0/2))
z_score = ( x_bar - 98.6)/standard_error
p_values = 2*scipy.special.ndtr(z_score)
p_values
Explanation: Is the true population mean really 98.6 degrees F?
Bring out the one sample hypothesis test! In this situation, is it appropriate to apply a z-test or a t-test? How will the result be different?
z-test vs. t-test:
Strictly speaking, a t-test is needed when the sample size is below 30; since the sample size here is above 30, a z-test is appropriate. The results will be almost the same because the data are not extremely skewed and the sample is large enough.
Sample Mean = 98.249231
Sample stddev = 0.733183
n = 130
One-sample hypothesis test:
Step 1:
- Null Hypothesis : Mean = 98.6
- Alternative Hypothesis : Mean != 98.6
Step 2:
- Point estimate: sample mean = 98.249 (the null value is 98.6)
- Calculate Standard Error (SE)
Step 3:
- Check condition
-- Independence ==> True
-- If Sample is skewed then sample size > 30 ==> True
Step 4:
- Calculate z score and pvalue
Step 5:
- Based on p-value check if Null can be rejected.
End of explanation
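As an optional cross-check of the hand-computed z statistic (statsmodels is used later in this notebook for the two-sample test, so it is assumed to be available here as well):
from statsmodels.stats.weightstats import ztest

# One-sample z-test against the null value of 98.6; should match the manual calculation.
tstat_check, pval_check = ztest(df['temperature'], value=98.6)
print(tstat_check, pval_check)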
import scipy.stats as stats
stats.ttest_1samp(df.temperature,98.6)
Explanation: Since the p-value is far below 5%, we can reject the null hypothesis that the true population mean is 98.6 degrees Fahrenheit.
Testing for t-test
End of explanation
margin_of_error = 1.96*standard_error
confidence_interval = [x_bar - margin_of_error, x_bar + margin_of_error]
confidence_interval
Explanation: The t-test p-value differs slightly from the z-test p-value, but the evidence is still strong enough to reject the null hypothesis.
==================================================================================================================
At what temperature should we consider someone's temperature to be "abnormal"?
A 95% confidence interval is a reasonable basis for this assessment.
Margin of Error (M.E) = (critical value * standard error)
Critical value for confidence interval 95% = 1.96
Confidence interval = (Mean - Margin of Error, Mean + Margin of Error)
End of explanation
import numpy as np
female_temprature = np.array(df.temperature[df.gender=='F'])
len(female_temprature)
male_temprature = np.array(df.temperature[df.gender=='M'])
len(male_temprature)
Explanation: A temperature outside this range could be considered abnormal.
=======================================================================================================================
Is there a significant difference between males and females in normal temperature?
End of explanation
from statsmodels.stats.weightstats import ztest
tstat,p_val = ztest(female_temprature, male_temprature)
p_val_percent = p_val*100
if p_val_percent < 5:
print ("p-value is less then 5% so null hypothesis should be rejected.\n"
"There a significant difference between males and females in normal temperature ")
Explanation: Again, the sample sizes are large enough to use a z-test.
End of explanation |
4,468 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Week 4
Step1: Load in house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
Step2: If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the first notebook of Week 2. For this notebook, however, we will work with the existing features.
Import useful functions from previous notebook
As in Week 2, we convert the SFrame into a 2D Numpy array. Copy and paste get_numpy_data() from the second notebook of Week 2.
Step3: Also, copy and paste the predict_output() function to compute the predictions for an entire matrix of features given the matrix and the weights
Step4: Computing the Derivative
We are now going to move to computing the derivative of the regression cost function. Recall that the cost function is the sum over the data points of the squared difference between an observed output and a predicted output, plus the L2 penalty term.
Cost(w)
= SUM[ (prediction - output)^2 ]
+ l2_penalty*(w[0]^2 + w[1]^2 + ... + w[k]^2).
Since the derivative of a sum is the sum of the derivatives, we can take the derivative of the first part (the RSS) as we did in the notebook for the unregularized case in Week 2 and add the derivative of the regularization part. As we saw, the derivative of the RSS with respect to w[i] can be written as
Step5: To test your feature derivative run the following
Step6: Gradient Descent
Now we will write a function that performs a gradient descent. The basic premise is simple. Given a starting point we update the current weights by moving in the negative gradient direction. Recall that the gradient is the direction of increase and therefore the negative gradient is the direction of decrease and we're trying to minimize a cost function.
The amount by which we move in the negative gradient direction is called the 'step size'. We stop when we are 'sufficiently close' to the optimum. Unlike in Week 2, this time we will set a maximum number of iterations and take gradient steps until we reach this maximum number. If no maximum number is supplied, the maximum should be set 100 by default. (Use default parameter values in Python.)
With this in mind, complete the following gradient descent function below using your derivative function above. For each step in the gradient descent, we update the weight for each feature before computing our stopping criteria.
Step7: Visualizing effect of L2 penalty
The L2 penalty gets its name because it causes weights to have small L2 norms than otherwise. Let's see how large weights get penalized. Let us consider a simple model with 1 feature
Step8: Let us split the dataset into training set and test set. Make sure to use seed=0
Step9: In this part, we will only use 'sqft_living' to predict 'price'. Use the get_numpy_data function to get a Numpy versions of your data with only this feature, for both the train_data and the test_data.
Step10: Let's set the parameters for our optimization
Step11: First, let's consider no regularization. Set the l2_penalty to 0.0 and run your ridge regression algorithm to learn the weights of your model. Call your weights
Step12: Next, let's consider high regularization. Set the l2_penalty to 1e11 and run your ridge regression algorithm to learn the weights of your model. Call your weights
Step13: This code will plot the two learned models. (The blue line is for the model with no regularization and the red line is for the one with high regularization.)
Step14: Compute the RSS on the TEST data for the following three sets of weights
Step15: QUIZ QUESTIONS
1. What is the value of the coefficient for sqft_living that you learned with no regularization, rounded to 1 decimal place? What about the one with high regularization?
263.0
124.6
<br/>
2. Comparing the lines you fit with the with no regularization versus high regularization, which one is steeper?
no regularization
<br/>
3. What are the RSS on the test data for each of the set of weights above (initial, no regularization, high regularization)?
initial == 1.78427328252e+15
no regularization == 2.75723634598e+14
high regularization == 6.94642100914e+14
<br/>
Running a multiple regression with L2 penalty
Let us now consider a model with 2 features
Step16: We need to re-inialize the weights, since we have one extra parameter. Let us also set the step size and maximum number of iterations.
Step17: First, let's consider no regularization. Set the l2_penalty to 0.0 and run your ridge regression algorithm to learn the weights of your model. Call your weights
Step18: Next, let's consider high regularization. Set the l2_penalty to 1e11 and run your ridge regression algorithm to learn the weights of your model. Call your weights
Step19: Compute the RSS on the TEST data for the following three sets of weights
Step20: Predict the house price for the 1st house in the test set using the no regularization and high regularization models. (Remember that python starts indexing from 0.) How far is the prediction from the actual price? Which weights perform best for the 1st house? | Python Code:
import graphlab
Explanation: Regression Week 4: Ridge Regression (gradient descent)
In this notebook, you will implement ridge regression via gradient descent. You will:
* Convert an SFrame into a Numpy array
* Write a Numpy function to compute the derivative of the regression weights with respect to a single feature
* Write gradient descent function to compute the regression weights given an initial weight vector, step size, tolerance, and L2 penalty
Fire up graphlab create
Make sure you have the latest version of GraphLab Create (>= 1.7)
End of explanation
sales = graphlab.SFrame('../Data/kc_house_data.gl/')
Explanation: Load in house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
End of explanation
import numpy as np # note this allows us to refer to numpy as np instead
def get_numpy_data(data_sframe, features, output):
data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame
# add the column 'constant' to the front of the features list so that we can extract it along with the others:
features = ['constant'] + features # this is how you combine two lists
# select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant):
features_sframe = data_sframe[features]
# the following line will convert the features_SFrame into a numpy matrix:
feature_matrix = features_sframe.to_numpy()
# assign the column of data_sframe associated with the output to the SArray output_sarray
output_sarray = data_sframe[output]
# the following will convert the SArray into a numpy array by first converting it to a list
output_array = output_sarray.to_numpy()
return(feature_matrix, output_array)
Explanation: If we want to do any "feature engineering" like creating new features or adjusting existing ones we should do this directly using the SFrames as seen in the first notebook of Week 2. For this notebook, however, we will work with the existing features.
Import useful functions from previous notebook
As in Week 2, we convert the SFrame into a 2D Numpy array. Copy and paste get_numpy_data() from the second notebook of Week 2.
End of explanation
def predict_output(feature_matrix, weights):
# assume feature_matrix is a numpy matrix containing the features as columns and weights is a corresponding numpy array
# create the predictions vector by using np.dot()
predictions = np.dot(feature_matrix, weights)
return(predictions)
Explanation: Also, copy and paste the predict_output() function to compute the predictions for an entire matrix of features given the matrix and the weights:
End of explanation
def feature_derivative_ridge(errors, feature, weight, l2_penalty, feature_is_constant):
# If feature_is_constant is True, derivative is twice the dot product of errors and feature
# Otherwise, derivative is twice the dot product plus 2*l2_penalty*weight
derivative = 2 * np.dot(errors,feature)
if not feature_is_constant:
derivative += 2 * l2_penalty * weight
return( derivative )
Explanation: Computing the Derivative
We are now going to move to computing the derivative of the regression cost function. Recall that the cost function is the sum over the data points of the squared difference between an observed output and a predicted output, plus the L2 penalty term.
Cost(w)
= SUM[ (prediction - output)^2 ]
+ l2_penalty*(w[0]^2 + w[1]^2 + ... + w[k]^2).
Since the derivative of a sum is the sum of the derivatives, we can take the derivative of the first part (the RSS) as we did in the notebook for the unregularized case in Week 2 and add the derivative of the regularization part. As we saw, the derivative of the RSS with respect to w[i] can be written as:
2*SUM[ error*[feature_i] ].
The derivative of the regularization term with respect to w[i] is:
2*l2_penalty*w[i].
Summing both, we get
2*SUM[ error*[feature_i] ] + 2*l2_penalty*w[i].
That is, the derivative for the weight for feature i is the sum (over data points) of 2 times the product of the error and the feature itself, plus 2*l2_penalty*w[i].
We will not regularize the constant. Thus, in the case of the constant, the derivative is just twice the sum of the errors (without the 2*l2_penalty*w[0] term).
Recall that twice the sum of the product of two vectors is just twice the dot product of the two vectors. Therefore the derivative for the weight for feature_i is just two times the dot product between the values of feature_i and the current errors, plus 2*l2_penalty*w[i].
With this in mind complete the following derivative function which computes the derivative of the weight given the value of the feature (over all data points) and the errors (over all data points). To decide when to we are dealing with the constant (so we don't regularize it) we added the extra parameter to the call feature_is_constant which you should set to True when computing the derivative of the constant and False otherwise.
End of explanation
(example_features, example_output) = get_numpy_data(sales, ['sqft_living'], 'price')
my_weights = np.array([1., 10.])
test_predictions = predict_output(example_features, my_weights)
errors = test_predictions - example_output # prediction errors
# next two lines should print the same values
print feature_derivative_ridge(errors, example_features[:,1], my_weights[1], 1, False)
print np.sum(errors*example_features[:,1])*2+20.
print ''
# next two lines should print the same values
print feature_derivative_ridge(errors, example_features[:,0], my_weights[0], 1, True)
print np.sum(errors)*2.
Explanation: To test your feature derivative run the following:
End of explanation
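As an extra sanity check (not part of the assignment), the analytical derivative can also be compared against a finite-difference approximation of the regularized cost; the helper names below are made up for this sketch:
def ridge_cost(feature_matrix, output, weights, l2_penalty):
    # RSS plus the L2 penalty, leaving the constant (weights[0]) unpenalized
    errors = predict_output(feature_matrix, weights) - output
    return np.dot(errors, errors) + l2_penalty * np.sum(weights[1:] ** 2)

def numerical_derivative(feature_matrix, output, weights, l2_penalty, i, eps=1e-3):
    # central finite difference of the cost with respect to weights[i]
    w_plus = np.array(weights, dtype=float)
    w_plus[i] += eps
    w_minus = np.array(weights, dtype=float)
    w_minus[i] -= eps
    return (ridge_cost(feature_matrix, output, w_plus, l2_penalty)
            - ridge_cost(feature_matrix, output, w_minus, l2_penalty)) / (2 * eps)

# the two numbers below should agree closely
print numerical_derivative(example_features, example_output, my_weights, 1, 1)
print feature_derivative_ridge(errors, example_features[:, 1], my_weights[1], 1, False)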
def ridge_regression_gradient_descent( feature_matrix, output, initial_weights, step_size, l2_penalty, max_iterations=100 ):
weights = np.array( initial_weights ) # make sure it's a numpy array
iteration = 0
#while not reached maximum number of iterations:
for j in xrange( max_iterations ):
# compute the predictions based on feature_matrix and weights using your predict_output() function
# compute the errors as predictions - output
errors = predict_output( feature_matrix, weights ) - output
for i in xrange( weights.size ): # loop over each weight
# Recall that feature_matrix[:,i] is the feature column associated with weights[i]
# compute the derivative for weight[i].
#(Remember: when i=0, you are computing the derivative of the constant!)
# subtract the step size times the derivative from the current weight
weights[i] -= step_size * feature_derivative_ridge( errors, feature_matrix[:, i], weights[i],
l2_penalty, i == 0 )
return weights
Explanation: Gradient Descent
Now we will write a function that performs a gradient descent. The basic premise is simple. Given a starting point we update the current weights by moving in the negative gradient direction. Recall that the gradient is the direction of increase and therefore the negative gradient is the direction of decrease and we're trying to minimize a cost function.
The amount by which we move in the negative gradient direction is called the 'step size'. We stop when we are 'sufficiently close' to the optimum. Unlike in Week 2, this time we will set a maximum number of iterations and take gradient steps until we reach this maximum number. If no maximum number is supplied, the maximum should be set 100 by default. (Use default parameter values in Python.)
With this in mind, complete the following gradient descent function below using your derivative function above. For each step in the gradient descent, we update the weight for each feature before computing our stopping criteria.
End of explanation
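As another optional check (not part of the assignment), on small problems the ridge weights can also be obtained in closed form from the penalized normal equations, with the intercept left unpenalized as in the notebook; the helper name is made up for this sketch:
def ridge_closed_form(feature_matrix, output, l2_penalty):
    # solve (X^T X + l2 * D) w = X^T y, with D = diag(0, 1, ..., 1)
    penalty = l2_penalty * np.identity(feature_matrix.shape[1])
    penalty[0, 0] = 0.0  # do not regularize the constant term
    A = np.dot(feature_matrix.T, feature_matrix) + penalty
    b = np.dot(feature_matrix.T, output)
    return np.linalg.solve(A, b)

# e.g. ridge_closed_form(example_features, example_output, 1.0) can be compared
# against the gradient descent result on a small example.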
simple_features = ['sqft_living']
my_output = 'price'
Explanation: Visualizing effect of L2 penalty
The L2 penalty gets its name because it causes weights to have small L2 norms than otherwise. Let's see how large weights get penalized. Let us consider a simple model with 1 feature:
End of explanation
train_data,test_data = sales.random_split(.8,seed=0)
Explanation: Let us split the dataset into training set and test set. Make sure to use seed=0:
End of explanation
(simple_feature_matrix, output) = get_numpy_data(train_data, simple_features, my_output)
(simple_test_feature_matrix, test_output) = get_numpy_data(test_data, simple_features, my_output)
Explanation: In this part, we will only use 'sqft_living' to predict 'price'. Use the get_numpy_data function to get a Numpy versions of your data with only this feature, for both the train_data and the test_data.
End of explanation
initial_weights = np.array([0., 0.])
step_size = 1e-12
max_iterations=1000
Explanation: Let's set the parameters for our optimization:
End of explanation
simple_weights_0_penalty = ridge_regression_gradient_descent(
simple_feature_matrix, output, initial_weights, step_size, 0.0, max_iterations )
Explanation: First, let's consider no regularization. Set the l2_penalty to 0.0 and run your ridge regression algorithm to learn the weights of your model. Call your weights:
simple_weights_0_penalty
we'll use them later.
End of explanation
simple_weights_high_penalty = ridge_regression_gradient_descent(
simple_feature_matrix, output, initial_weights, step_size, 1e11, max_iterations )
Explanation: Next, let's consider high regularization. Set the l2_penalty to 1e11 and run your ridge regression algorithm to learn the weights of your model. Call your weights:
simple_weights_high_penalty
we'll use them later.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(simple_feature_matrix,output,'k.',
simple_feature_matrix,predict_output(simple_feature_matrix, simple_weights_0_penalty),'b-',
simple_feature_matrix,predict_output(simple_feature_matrix, simple_weights_high_penalty),'r-')
Explanation: This code will plot the two learned models. (The blue line is for the model with no regularization and the red line is for the one with high regularization.)
End of explanation
test_simple_predictions_initial = predict_output(simple_test_feature_matrix, initial_weights)
test_simple_residuals_initial = test_simple_predictions_initial - test_data['price']
test_simple_rss_initial = sum(pow(test_simple_residuals_initial,2))
print test_simple_rss_initial
test_simple_predictions_0_penalty = predict_output(simple_test_feature_matrix, simple_weights_0_penalty)
test_simple_residuals_0_penalty = test_simple_predictions_0_penalty - test_data['price']
test_simple_rss_0_penalty = sum(pow(test_simple_residuals_0_penalty,2))
print test_simple_rss_0_penalty
test_simple_predictions_high_penalty = predict_output(simple_test_feature_matrix, simple_weights_high_penalty)
test_simple_residuals_high_penalty = test_simple_predictions_high_penalty - test_data['price']
test_simple_rss_high_penalty = sum(pow(test_simple_residuals_high_penalty,2))
print test_simple_rss_high_penalty
print simple_weights_0_penalty
print round(simple_weights_0_penalty[1],1)
print simple_weights_high_penalty
print round(simple_weights_high_penalty[1],1)
Explanation: Compute the RSS on the TEST data for the following three sets of weights:
1. The initial weights (all zeros)
2. The weights learned with no regularization
3. The weights learned with high regularization
Which weights perform best?
End of explanation
model_features = ['sqft_living', 'sqft_living15'] # sqft_living15 is the average squarefeet for the nearest 15 neighbors.
my_output = 'price'
(train_feature_matrix, train_output) = get_numpy_data(train_data, model_features, my_output)
(test_feature_matrix, test_output) = get_numpy_data(test_data, model_features, my_output)
Explanation: QUIZ QUESTIONS
1. What is the value of the coefficient for sqft_living that you learned with no regularization, rounded to 1 decimal place? What about the one with high regularization?
263.0
124.6
<br/>
2. Comparing the lines you fit with the with no regularization versus high regularization, which one is steeper?
no regularization
<br/>
3. What are the RSS on the test data for each of the set of weights above (initial, no regularization, high regularization)?
initial == 1.78427328252e+15
no regularization == 2.75723634598e+14
high regularization == 6.94642100914e+14
<br/>
Running a multiple regression with L2 penalty
Let us now consider a model with 2 features: ['sqft_living', 'sqft_living15'].
First, create Numpy versions of your training and test data with these two features.
End of explanation
initial_weights = np.array([0.0,0.0,0.0])
step_size = 1e-12
max_iterations = 1000
Explanation: We need to re-initialize the weights, since we have one extra parameter. Let us also set the step size and maximum number of iterations.
End of explanation
multiple_weights_0_penalty = ridge_regression_gradient_descent(
train_feature_matrix, train_output, initial_weights, step_size, 0.0, max_iterations )
Explanation: First, let's consider no regularization. Set the l2_penalty to 0.0 and run your ridge regression algorithm to learn the weights of your model. Call your weights:
multiple_weights_0_penalty
End of explanation
multiple_weights_high_penalty = ridge_regression_gradient_descent(
train_feature_matrix, train_output, initial_weights, step_size, 1e11, max_iterations )
Explanation: Next, let's consider high regularization. Set the l2_penalty to 1e11 and run your ridge regression algorithm to learn the weights of your model. Call your weights:
multiple_weights_high_penalty
End of explanation
test_multi_predictions_initial = predict_output(test_feature_matrix, initial_weights)
test_multi_residuals_initial = test_multi_predictions_initial - test_data['price']
test_multi_rss_initial = sum(pow(test_multi_residuals_initial,2))
print test_multi_rss_initial
test_multi_predictions_0_penalty = predict_output(test_feature_matrix, multiple_weights_0_penalty)
test_multi_residuals_0_penalty = test_multi_predictions_0_penalty - test_data['price']
test_multi_rss_0_penalty = sum(pow(test_multi_residuals_0_penalty,2))
print test_multi_rss_0_penalty
test_multi_predictions_high_penalty = predict_output(test_feature_matrix, multiple_weights_high_penalty)
test_multi_residuals_high_penalty = test_multi_predictions_high_penalty - test_data['price']
test_multi_rss_high_penalty = sum(pow(test_multi_residuals_high_penalty,2))
print test_multi_rss_high_penalty
Explanation: Compute the RSS on the TEST data for the following three sets of weights:
1. The initial weights (all zeros)
2. The weights learned with no regularization
3. The weights learned with high regularization
Which weights perform best?
End of explanation
print test_multi_predictions_0_penalty[0]
print test_data[0]['price']
print test_multi_predictions_0_penalty[0] - test_data[0]['price']
print test_multi_predictions_high_penalty[0]
print test_data[0]['price']
print test_multi_predictions_high_penalty[0] - test_data[0]['price']
print multiple_weights_0_penalty
print round(multiple_weights_0_penalty[1],1)
print multiple_weights_high_penalty
print round(multiple_weights_high_penalty[1],1)
Explanation: Predict the house price for the 1st house in the test set using the no regularization and high regularization models. (Remember that python starts indexing from 0.) How far is the prediction from the actual price? Which weights perform best for the 1st house?
End of explanation |
4,469 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Random forest parameter-tuning
Table of contents
Data preprocessing
Validation curves
KS-test tuning
Step1: Data preprocessing
Load simulation dataframe and apply specified quality cuts
Extract desired features from dataframe
Get separate testing and training datasets
Step2: Validation curves
(10-fold CV)
Maximum depth
Step3: Max features
Step4: Minimum samples in leaf node
Step5: KS-test tuning
Maximum depth
Step6: Minimum samples in leaf node
Step7: Maximum depth for various minimum samples in leaf node | Python Code:
import sys
sys.path.append('/home/jbourbeau/cr-composition')
print('Added to PYTHONPATH')
from __future__ import division, print_function
from collections import defaultdict
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
import seaborn.apionly as sns
import scipy.stats as stats
from sklearn.metrics import accuracy_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import validation_curve, GridSearchCV, cross_val_score, ParameterGrid, KFold, ShuffleSplit
import composition as comp
# Plotting-related
sns.set_palette('muted')
sns.set_color_codes()
color_dict = defaultdict()
for i, composition in enumerate(['light', 'heavy', 'total']):
color_dict[composition] = sns.color_palette('muted').as_hex()[i]
%matplotlib inline
Explanation: Random forest parameter-tuning
Table of contents
Data preprocessing
Validation curves
KS-test tuning
End of explanation
sim_train, sim_test = comp.preprocess_sim(return_energy=True)
X_test_data, energy_test_data = comp.preprocess_data(return_energy=True)
Explanation: Data preprocessing
Load simulation dataframe and apply specified quality cuts
Extract desired features from dataframe
Get separate testing and training datasets
End of explanation
pipeline = comp.get_pipeline('xgboost')
param_range = np.arange(1, 212, 20)
train_scores, test_scores = validation_curve(
estimator=pipeline,
X=sim_train.X,
y=sim_train.y,
param_name='classifier__n_estimators',
param_range=param_range,
cv=3,
scoring='accuracy',
verbose=2,
n_jobs=15)
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
plt.plot(param_range, train_mean,
color='b', marker='o',
markersize=5, label='training accuracy')
plt.fill_between(param_range,
train_mean + train_std,
train_mean - train_std,
alpha=0.15, color='b')
plt.plot(param_range, test_mean,
color='g', linestyle='None',
marker='s', markersize=5,
label='validation accuracy')
plt.fill_between(param_range,
test_mean + test_std,
test_mean - test_std,
alpha=0.15, color='g')
plt.grid()
plt.legend(loc='lower right')
plt.xlabel('Number of estimators')
plt.ylabel('Accuracy')
# plt.ylim([0.7, 0.8])
# plt.savefig('/home/jbourbeau/public_html/figures/composition/parameter-tuning/RF-validation_curve_min_samples_leaf.png', dpi=300)
plt.show()
diff = train_mean-test_mean
diff_std = np.sqrt(train_std**2 + test_std**2)
plt.plot(param_range, diff,
color='b', marker='.',
markersize=5)
plt.fill_between(param_range,
diff + diff_std,
diff - diff_std,
alpha=0.15, color='b')
plt.grid()
plt.xlabel('Number of estimators')
plt.ylabel('Overtraining')
# plt.ylim([0.7, 0.8])
# plt.savefig('/home/jbourbeau/public_html/figures/composition/parameter-tuning/RF-validation_curve_min_samples_leaf.png', dpi=300)
plt.show()
Explanation: Validation curves
(10-fold CV)
Number of estimators
End of explanation
pipeline = comp.get_pipeline('RF')
param_range = np.arange(1, X_train.shape[1])
train_scores, test_scores = validation_curve(
estimator=pipeline,
X=X_train,
y=y_train,
param_name='classifier__max_features',
param_range=param_range,
cv=10,
verbose=2,
n_jobs=20)
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
plt.plot(param_range, train_mean,
color='b', marker='o',
markersize=5, label='training accuracy')
plt.fill_between(param_range, train_mean + train_std,
train_mean - train_std, alpha=0.15,
color='b')
plt.plot(param_range, test_mean,
color='g', linestyle='None',
marker='s', markersize=5,
label='validation accuracy')
plt.fill_between(param_range,
test_mean + test_std,
test_mean - test_std,
alpha=0.15, color='g')
plt.grid()
# plt.xscale('log')
plt.legend(loc='lower right')
plt.xlabel('Maximum features')
plt.ylabel('Accuracy')
# plt.ylim([0.8, 1.0])
# plt.savefig('/home/jbourbeau/public_html/figures/composition/parameter-tuning/RF-validation_curve_min_samples_leaf.png', dpi=300)
plt.show()
Explanation: Max features
End of explanation
pipeline = comp.get_pipeline('RF')
param_range = np.arange(1, 400, 25)
train_scores, test_scores = validation_curve(
estimator=pipeline,
X=X_train,
y=y_train,
param_name='classifier__min_samples_leaf',
param_range=param_range,
cv=10,
verbose=2,
n_jobs=20)
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
plt.plot(param_range, train_mean,
color='b', marker='o',
markersize=5, label='training accuracy')
plt.fill_between(param_range, train_mean + train_std,
train_mean - train_std, alpha=0.15,
color='b')
plt.plot(param_range, test_mean,
color='g', linestyle='None',
marker='s', markersize=5,
label='validation accuracy')
plt.fill_between(param_range,
test_mean + test_std,
test_mean - test_std,
alpha=0.15, color='g')
plt.grid()
# plt.xscale('log')
plt.legend()
plt.xlabel('Minimum samples in leaf node')
plt.ylabel('Accuracy')
# plt.ylim([0.8, 1.0])
# plt.savefig('/home/jbourbeau/public_html/figures/composition/parameter-tuning/RF-validation_curve_min_samples_leaf.png', dpi=300)
plt.show()
Explanation: Minimum samples in leaf node
End of explanation
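The two validation curves above look at one parameter at a time; a small, illustrative grid search over both (the parameter values below are arbitrary placeholders, not tuned choices from this analysis) could look like this:
# Hypothetical joint search over the two parameters explored above.
param_grid = {'classifier__max_depth': [4, 6, 8, 10],
              'classifier__min_samples_leaf': [1, 50, 150, 300]}
grid_search = GridSearchCV(comp.get_pipeline('RF'), param_grid=param_grid,
                           scoring='accuracy', cv=10, n_jobs=20, verbose=1)
grid_search.fit(X_train, y_train)
print(grid_search.best_params_, grid_search.best_score_)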
comp_list = ['light', 'heavy']
max_depth_list = np.arange(1, 16)
pval_comp = defaultdict(list)
ks_stat = defaultdict(list)
kf = KFold(n_splits=10)
fold_num = 0
for train_index, test_index in kf.split(X_train):
fold_num += 1
print('\r')
print('Fold {}: '.format(fold_num), end='')
X_train_fold, X_test_fold = X_train[train_index], X_train[test_index]
y_train_fold, y_test_fold = y_train[train_index], y_train[test_index]
pval_maxdepth = defaultdict(list)
print('max_depth = ', end='')
for max_depth in max_depth_list:
print('{}...'.format(max_depth), end='')
pipeline = comp.get_pipeline('RF')
pipeline.named_steps['classifier'].set_params(max_depth=max_depth)
pipeline.fit(X_train_fold, y_train_fold)
test_probs = pipeline.predict_proba(X_test_fold)
train_probs = pipeline.predict_proba(X_train_fold)
for class_ in pipeline.classes_:
pval_maxdepth[le.inverse_transform(class_)].append(stats.ks_2samp(test_probs[:, class_], train_probs[:, class_])[1])
for composition in comp_list:
pval_comp[composition].append(pval_maxdepth[composition])
pval_sys_err = {key: np.std(pval_comp[key], axis=0) for key in pval_comp}
pval = {key: np.mean(pval_comp[key], axis=0) for key in pval_comp}
comp_list = ['light']
fig, ax = plt.subplots()
for composition in comp_list:
upper_err = np.copy(pval_sys_err[composition])
upper_err = [val if ((pval[composition][i] + val) < 1) else 1-pval[composition][i] for i, val in enumerate(upper_err)]
lower_err = np.copy(pval_sys_err[composition])
lower_err = [val if ((pval[composition][i] - val) > 0) else pval[composition][i] for i, val in enumerate(lower_err)]
if composition == 'light':
ax.errorbar(max_depth_list -0.25/2, pval[composition],
yerr=[lower_err, upper_err],
marker='.', linestyle=':',
label=composition, alpha=0.75)
if composition == 'heavy':
ax.errorbar(max_depth_list + 0.25/2, pval[composition],
yerr=[lower_err, upper_err],
marker='.', linestyle=':',
label=composition, alpha=0.75)
plt.ylabel('KS-test p-value')
plt.xlabel('Maximum depth')
plt.ylim([-0.1, 1.1])
# plt.legend()
plt.grid()
plt.show()
pval
Explanation: KS-test tuning
Maximum depth
End of explanation
comp_list = np.unique(df['MC_comp_class'])
min_samples_list = np.arange(1, 400, 25)
pval = defaultdict(list)
ks_stat = defaultdict(list)
print('min_samples_leaf = ', end='')
for min_samples_leaf in min_samples_list:
print('{}...'.format(min_samples_leaf), end='')
pipeline = comp.get_pipeline('RF')
params = {'max_depth': 4, 'min_samples_leaf': min_samples_leaf}
pipeline.named_steps['classifier'].set_params(**params)
pipeline.fit(X_train, y_train)
test_probs = pipeline.predict_proba(X_test)
train_probs = pipeline.predict_proba(X_train)
for class_ in pipeline.classes_:
pval[le.inverse_transform(class_)].append(stats.ks_2samp(test_probs[:, class_], train_probs[:, class_])[1])
fig, ax = plt.subplots()
for composition in pval:
ax.plot(min_samples_list, pval[composition], linestyle='-.', label=composition)
plt.ylabel('KS-test p-value')
plt.xlabel('Minimum samples leaf node')
plt.legend()
plt.grid()
plt.show()
Explanation: Minimum samples in leaf node
End of explanation
# comp_list = np.unique(df['MC_comp_class'])
comp_list = ['light']
min_samples_list = [1, 25, 50, 75]
min_samples_list = [1, 100, 200, 300]
fig, axarr = plt.subplots(2, 2, sharex=True, sharey=True)
print('min_samples_leaf = ', end='')
for min_samples_leaf, ax in zip(min_samples_list, axarr.flatten()):
print('{}...'.format(min_samples_leaf), end='')
max_depth_list = np.arange(1, 16)
pval = defaultdict(list)
ks_stat = defaultdict(list)
for max_depth in max_depth_list:
pipeline = comp.get_pipeline('RF')
params = {'max_depth': max_depth, 'min_samples_leaf': min_samples_leaf}
pipeline.named_steps['classifier'].set_params(**params)
pipeline.fit(X_train, y_train)
test_probs = pipeline.predict_proba(X_test)
train_probs = pipeline.predict_proba(X_train)
for class_ in pipeline.classes_:
pval[le.inverse_transform(class_)].append(stats.ks_2samp(test_probs[:, class_], train_probs[:, class_])[1])
for composition in pval:
ax.plot(max_depth_list, pval[composition], linestyle='-.', label=composition)
ax.set_ylabel('KS-test p-value')
ax.set_xlabel('Maximum depth')
ax.set_title('min samples = {}'.format(min_samples_leaf))
ax.set_ylim([0, 0.5])
ax.legend()
ax.grid()
plt.tight_layout()
plt.show()
Explanation: Maximum depth for various minimum samples in leaf node
End of explanation |
4,470 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multi-label data stratification
With the development of more complex multi-label transformation methods the community realizes how much the quality of classification depends on how the data is split into train/test sets or into folds for parameter estimation. More questions appear on stackoverflow or crossvalidated concerning methods for multi-label stratification.
For many reasons, described here and here traditional single-label approaches to stratifying data fail to provide balanced data set divisions which prevents classifiers from generalizing information.
Some train/test splits don't include evidence for a given label at all in the train set. others disproportionately put even as much as 70% of label pair evidence in the test set, leaving the train set without proper evidence for generalizing conditional probabilities for label relations.
You can also watch a great video presentation from ECML 2011 which explains this in depth
Step1: Let's look at how many examples are available per label combination
Step2: Let's load up the original division, to see how the set was split into train/test data in 2004, before multi-label stratification methods appeared.
Step3: We can see that the split sizes are nearly identical, yet the evidence for some label combinations is not well balanced between the splits. While this is a toy case on a small data set, such phenomena are common in larger datasets. We would like to fix this.
Let's load the iterative stratifier and divided the set again. | Python Code:
from skmultilearn.dataset import load_dataset
X,y, _, _ = load_dataset('scene', 'undivided')
Explanation: Multi-label data stratification
With the development of more complex multi-label transformation methods the community realizes how much the quality of classification depends on how the data is split into train/test sets or into folds for parameter estimation. More questions appear on stackoverflow or crossvalidated concerning methods for multi-label stratification.
For many reasons, described here and here traditional single-label approaches to stratifying data fail to provide balanced data set divisions which prevents classifiers from generalizing information.
Some train/test splits don't include evidence for a given label at all in the train set. others disproportionately put even as much as 70% of label pair evidence in the test set, leaving the train set without proper evidence for generalizing conditional probabilities for label relations.
You can also watch a great video presentation from ECML 2011 which explains this in depth:
<blockquote>
<a href='http://videolectures.net/ecmlpkdd2011_tsoumakas_stratification/'>
<img src='http://videolectures.net/ecmlpkdd2011_tsoumakas_stratification/thumb.jpg' border=0 />
<br/>On the Stratification of Multi-Label Data</a><br/>
Grigorios Tsoumakas
</blockquote>
Scikit-multilearn provides an implementation of iterative stratification which aims to provide well-balanced distribution of evidence of label relations up to a given order. To see what it means, let's load up some data. We'll be using the scene data set, both in divided and undivided variants, to illustrate the problem.
End of explanation
from collections import Counter
from skmultilearn.model_selection.measures import get_combination_wise_output_matrix
Counter(combination for row in get_combination_wise_output_matrix(y.A, order=2) for combination in row)
Explanation: Let's look at how many examples are available per label combination:
End of explanation
_, original_y_train, _, _ = load_dataset('scene', 'train')
_, original_y_test, _, _ = load_dataset('scene', 'test')
import pandas as pd
pd.DataFrame({
'train': Counter(str(combination) for row in get_combination_wise_output_matrix(original_y_train.A, order=2) for combination in row),
'test' : Counter(str(combination) for row in get_combination_wise_output_matrix(original_y_test.A, order=2) for combination in row)
}).T.fillna(0.0)
original_y_train.shape[0], original_y_test.shape[0]
Explanation: Let's load up the original division, to see how the set was split into train/test data in 2004, before multi-label stratification methods appeared.
End of explanation
from skmultilearn.model_selection import iterative_train_test_split
X_train, y_train, X_test, y_test = iterative_train_test_split(X, y, test_size = 0.5)
pd.DataFrame({
'train': Counter(str(combination) for row in get_combination_wise_output_matrix(y_train.A, order=2) for combination in row),
'test' : Counter(str(combination) for row in get_combination_wise_output_matrix(y_test.A, order=2) for combination in row)
}).T.fillna(0.0)
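# A minimal, illustrative sketch (not part of the original walkthrough): the same
# iterative approach is also available as a k-fold splitter for parameter estimation;
# the fold count and order below are arbitrary choices.
from skmultilearn.model_selection import IterativeStratification

k_fold = IterativeStratification(n_splits=3, order=2)
for train_idx, test_idx in k_fold.split(X, y):
    print(train_idx.shape[0], test_idx.shape[0])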
Explanation: We can see that the split sizes are nearly identical, yet the evidence for some label combinations is not well balanced between the splits. While this is a toy case on a small data set, such phenomena are common in larger datasets. We would like to fix this.
Let's load the iterative stratifier and divide the set again.
End of explanation |
4,471 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First Step
Step1: Now, we have the data stored as a DataFrame titled "imdb". As a simple first step, we'd like to see the structure of this DataFrame. We'll use different ways to do this ("df" is the name of an imaginary dataframe)
Step2: But I want to look at the Data...not just funny numbers!
* Use the head method
Step3: Please use the tail method to exhibit the last 3 rows of the imdb Dataframe
Step4: Individual columns of the dataframe can be accessed by df.column_name or df['column_name']. However, the result is not a DataFrame, but a Series structure.
Step5: Slicing/Filtering data
Culling the data based on some condition.
Running a relational query (<, >, ==, ...) on any column(s) returns a boolean vector. This boolean vector can be used to filter the data.
Step6: Exercise 1
Find all the movies in the data made after the year 1950.
Bonus question
Step7: If using multiple conditions in the filter, separate each condition in brackets and use the logical operators
Step8: This output can be sorted using the sort method.
df.sort(column_name)
Step9: Exercise 2
How many movies are there in the top 250 list?
Step10: Exercise 3
What are the 3 top rated tv series in the top 250 list?
Step11: Exercise 4
How many movies or tv series from the 80's are there in the list?
Step12: Dealing with nulls/NAs/NANs
Real data is always full of missing or bad entries. On a Series, a , we can use 2 methods
* a.isnull()
Step13: Exercise 5
How many movies or tv series do not have a properly entered "top 250 rank" attribute?
Step14: Text Mining using Dataframes
The string methods from python can be applied to Series, with the prefix "str".
So, for a Series, a, we can quiery
Step15: Exercise 6
How many movies or tv series have names that start with "The"? | Python Code:
import pandas as pd
imdb = pd.read_csv('imdb.csv')
Explanation: First Step: Get the data from storage into the dataframe.
Simple and easy method: pd.read_csv
* CSV: Comma Separated Values, used in spreadsheets. Popular for import & export of smaller datasets.
* Arguments: path-location of data file, index_col-Column to use as the row labels of the DataFrame.
End of explanation
len(imdb)
imdb.shape
Explanation: Now, we have the data stored as a DataFrame titled "imdb". As a simple first step, we'd like to see the structure of this DataFrame. We'll use different ways to do this ("df" is the name of an imaginary dataframe):
* Using the length function on the dataframe: "len(df)" will return the number of rows in the dataframe.
* Using the .shape property of DataFrames: "df.shape" will return a tuple showing the dimensions of the dataframe df (number of rows,number of columns).
End of explanation
imdb.head(5)
Explanation: But I want to look at the Data...not just funny numbers!
* Use the head method: "df.head(n)" will exhibit the first n rows of the dataframe. This is the easiest manner to get to see the structure, the names of the cloumns etc. (Highly recommended)
* Use the tail method: "df.tail(n)" will exhibit the last n rows of the dataframe.
* Enter the Dataframe's name. An excerpt of the data will be rendered in the notebook. (Not recommended)
End of explanation
#Enter your code here
Explanation: Please use the tail method to exhibit the last 3 rows of the imdb Dataframe
End of explanation
imdb_top5=imdb.head(5) #smaller dataframe with only top 5 rows.
imdb_top5['title'] #The 'title' column of this smaller dataframe
a=imdb_top5.title
type(a)
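# Side note (not part of the original exercises): selecting with a list of column
# names returns a one-column DataFrame instead of a Series.
imdb_top5[['title']]
type(imdb_top5[['title']])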
Explanation: Individual columns of the dataframe can be accessed by df.column_name or df['column_name']. However, the result is not a DataFrame, but a Series structure.
End of explanation
top_years=imdb_top5.year
top_years
top_years>1950 #get boolean vector. So, only 2 movies in the top 5 were made after 1950
#smh
imdb_top5[imdb_top5.year>1950] #passing this boolean vector to the dataframe, filters the data
#rows which meet the condition (have a "True") are retained.
#rows not meeting the condition(have a "False) are removed.
Explanation: Slicing/Filtering data
Culling the data based on some condition.
Running a relational query (<, >, ==, ...) on any column(s) returns a boolean vector. This boolean vector can be used to filter the data.
End of explanation
#type your code here
len(imdb[imdb.year>1950])
Explanation: Exercise 1
Find all the movies in the data made after the year 1950.
Bonus question: how many such movies are there?
End of explanation
imdb[(imdb.year>1950) & (imdb.rating>8.8)]
#Filters all movies/shows made after 1950 AND having a rating of over 8.8
Explanation: If using multiple conditions in the filter, separate each condition in brackets and use the logical operators:
* & (AND)
* | (or)
End of explanation
imdb[(imdb.year>1950) & (imdb.rating>8.8)].sort_values('year')
#Filters all movies/shows made after 1950 AND having a rating of over 8.8,
#then sort this Dataframe based on the year
Explanation: This output can be sorted using the sort_values method.
df.sort_values(column_name)
End of explanation
#type your code here
Explanation: Exercise 2
How many movies are there in the top 250 list?
End of explanation
#type your code here
Explanation: Exercise 3
What are the 3 top rated tv series in the top 250 list?
End of explanation
#type your code here
Explanation: Exercise 4
How many movies or tv series from the 80's are there in the list?
End of explanation
imdb_top5=imdb.head(5)
a=imdb_top5.top_250_rank
a
a.isnull()
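# Side note (not part of the original exercises): chaining .sum() onto the boolean
# vector counts the missing entries, since True is treated as 1.
a.isnull().sum()
imdb_top5.notnull().sum() # non-missing counts for every column of the top-5 slice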
Explanation: Dealing with nulls/NAs/NANs
Real data is always full of missing or bad entries. On a Series, a, we can use two methods:
* a.isnull(): returns a boolean vector indicating whether each entry is null/NA
* a.notnull(): returns a boolean vector indicating whether each entry is not null/NA
End of explanation
#type your code here
temp=imdb
temp.fillna(0)
Explanation: Exercise 5
How many movies or tv series do not have a properly entered "top 250 rank" attribute?
End of explanation
imdb[imdb.title.str.contains("Files")]
Explanation: Text Mining using Dataframes
The string methods from python can be applied to Series, with the prefix "str".
So, for a Series, a, we can quiery:
* a.str.contains("target")
* a.str.startswith("target")
End of explanation
#Enter your code here
Explanation: Exercise 6
How many movies or tv series have names that start with "The"?
End of explanation |
4,472 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step1: Using TPUs
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: TPU initialization
TPUs are typically on Cloud TPU workers, which are different from the local process running the user's Python program. Some initialization work is therefore needed to connect to the remote cluster and initialize the TPUs. The tpu argument to TPUClusterResolver is a special address just for Colab. If you are running on Google Compute Engine (GCE), you should instead pass in the name of your Cloud TPU.
Note
Step3: Manual device placement
After the TPU is initialized, you can use manual device placement to place the computation on a single TPU device.
Step4: Distribution strategies
Most of the time, users want to run the model on multiple TPUs in a data-parallel way. A distribution strategy is an abstraction that can be used to drive models on CPUs, GPUs or TPUs. Simply swap out the distribution strategy and the model will run on the given device. See the distribution strategy guide for more information.
First, create the TPUStrategy object.
Step5: To replicate a computation so it can run in all TPU cores, you can simply pass it to the strategy.run API. Below is an example where all the cores obtain the same inputs (a, b) and perform the matmul on each core independently. The outputs will be the values from all the replicas.
Step6: Classification on TPUs
Now that you have covered the basic concepts, it is time to look at a more concrete example. This guide demonstrates how to use the distribution strategy tf.distribute.experimental.TPUStrategy to drive a Cloud TPU and train a Keras model.
Define a Keras model
Below is the definition of an MNIST model using Keras, unchanged from what you would use on CPU or GPU. Note that Keras model creation needs to be inside strategy.scope, so that the variables can be created on each TPU device. Other parts of the code do not need to be inside the strategy scope.
Step7: Input datasets
Efficient use of the tf.data.Dataset API is critical when using a Cloud TPU, as it is impossible to use the Cloud TPUs unless you can feed them data quickly enough. See the input pipeline performance guide for details on dataset performance.
For all but the simplest experimentation (using tf.data.Dataset.from_tensor_slices or other in-graph data), you will need to store all data files read by the Dataset in Google Cloud Storage (GCS) buckets.
For most use cases, it is recommended to convert your data into the TFRecord format and use a tf.data.TFRecordDataset to read it. See the TFRecord and tf.Example tutorial for details on how to do this. This is not a hard requirement and you can use other dataset readers (FixedLengthRecordDataset or TextLineDataset) if you prefer.
Small datasets can be loaded entirely into memory using tf.data.Dataset.cache.
Regardless of the data format used, it is strongly recommended to use large files, on the order of 100MB. This is especially important in this networked setting, as the overhead of opening a file is significantly higher.
Here the tensorflow_datasets module should be used to get a copy of the MNIST training data. Note that try_gcs is specified to use a copy that is available in a public GCS bucket; if you don't specify this, the TPU will not be able to access the downloaded data.
Step8: Train a model using the Keras high-level API
You can train a model simply with the Keras fit/compile APIs. Nothing here is TPU-specific; you would write the same code below if you had multiple GPUs and were using a MirroredStrategy rather than a TPUStrategy. To learn more, check out the Distributed training with Keras tutorial.
Step9: To reduce python overhead, and maximize the performance of your TPU, try out the experimental experimental_steps_per_execution argument to Model.compile. Here it increases throughput by about 50%
Step12: Train a model using a custom training loop
You can also create and train your model using the tf.function and tf.distribute APIs directly. The strategy.experimental_distribute_datasets_from_function API is used to distribute the dataset given a dataset function. Note that the batch size passed into the dataset will be the per-replica batch size instead of the global batch size in this case. To learn more, check out the Custom training with tf.distribute.Strategy tutorial.
First, build the model, datasets and tf.functions.
Step13: Then run the training loop.
Step16: Improving performance with multiple steps inside tf.function
You can improve the performance by running multiple steps inside a tf.function. This is achieved by wrapping the strategy.run call with a tf.range inside tf.function; AutoGraph will convert it to a tf.while_loop on the TPU worker.
더 나은 성능을 제공하지만 tf.function 내의 단일 단계와 비교하여 보완해야 할 점이 있습니다. tf.function에서 여러 단계를 실행하면 유연성이 떨어지므로 단계 내에서 즉시 또는 임의의 파이썬 코드를 실행할 수 없습니다. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
import tensorflow as tf
import os
import tensorflow_datasets as tfds
Explanation: Using TPUs
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/guide/tpu"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org에서 보기</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/guide/tpu.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab에서 실행</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/guide/tpu.ipynb"> <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"> GitHub에서 소스 보기</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/guide/tpu.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">노트북 다운로드</a></td>
</table>
Cloud TPU에 대한 실험적인 지원은 현재 Keras 및 Google Colab에 제공됩니다. Colab 노트북을 실행하기 전에 노트북 설정(Runtime > Change runtime type > Hardware accelerator > TPU)을 확인하여 하드웨어 가속기가 TPU인지 확인하세요.
설정
End of explanation
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
# This is the TPU initialization code that has to be at the beginning.
tf.tpu.experimental.initialize_tpu_system(resolver)
print("All devices: ", tf.config.list_logical_devices('TPU'))
Explanation: TPU initialization
TPUs are typically Cloud TPU workers, which are different from the local process running the user's Python program. Some initialization work therefore needs to be done to connect to the remote cluster and initialize the TPU. The tpu argument to TPUClusterResolver is a special address just for Colab. If you are running on Google Compute Engine (GCE), you should instead pass in the name of your Cloud TPU.
Note: the TPU initialization code has to be at the beginning of your program.
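If you are on GCE rather than Colab, the resolver call above would simply name your TPU instead; the name below is a hypothetical placeholder.
```python
# resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='my-tpu-name')  # GCE: use your Cloud TPU name
```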
End of explanation
a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
with tf.device('/TPU:0'):
c = tf.matmul(a, b)
print("c device: ", c.device)
print(c)
Explanation: Manual device placement
After the TPU is initialized, you can use manual device placement to place the computation on a single TPU device.
End of explanation
strategy = tf.distribute.experimental.TPUStrategy(resolver)
Explanation: Distribution strategies
Most of the time, users want to run the model on multiple TPUs in a data-parallel way. A distribution strategy is an abstraction that can be used to drive models on CPUs, GPUs or TPUs. Simply swap out the distribution strategy and the model will run on the given device. See the distribution strategy guide for more information.
First, create a TPUStrategy object.
End of explanation
@tf.function
def matmul_fn(x, y):
z = tf.matmul(x, y)
return z
z = strategy.run(matmul_fn, args=(a, b))
print(z)
Explanation: To replicate a computation so it can run in all TPU cores, you can simply pass it to the strategy.run API. Below is an example in which all the cores receive the same inputs (a, b) and perform the matmul on each core independently. The outputs will be the values from all the replicas.
End of explanation
def create_model():
return tf.keras.Sequential(
[tf.keras.layers.Conv2D(256, 3, activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(256, 3, activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(256, activation='relu'),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)])
Explanation: Classification on TPUs
Now that you have learned the basic concepts, it is time to look at a more concrete example. This guide demonstrates how to use the distribution strategy tf.distribute.experimental.TPUStrategy to drive a Cloud TPU and train a Keras model.
Define a Keras model
Below is a definition of an MNIST model using Keras, unchanged from what you would use on a CPU or GPU. Note that Keras model creation needs to be inside strategy.scope, so the variables can be created on each TPU device. Other parts of the code do not need to be inside the strategy scope.
End of explanation
def get_dataset(batch_size, is_training=True):
split = 'train' if is_training else 'test'
dataset, info = tfds.load(name='mnist', split=split, with_info=True,
as_supervised=True, try_gcs=True)
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255.0
return image, label
dataset = dataset.map(scale)
# Only shuffle and repeat the dataset in training. The advantage to have a
# infinite dataset for training is to avoid the potential last partial batch
# in each epoch, so users don't need to think about scaling the gradients
# based on the actual batch size.
if is_training:
dataset = dataset.shuffle(10000)
dataset = dataset.repeat()
dataset = dataset.batch(batch_size)
return dataset
Explanation: Input datasets
Efficient use of the tf.data.Dataset API is critical when using a Cloud TPU: it is impossible to keep the Cloud TPUs busy unless you can feed them data quickly enough. See the input pipeline performance guide for details on dataset performance.
For all but the simplest experimentation (using tf.data.Dataset.from_tensor_slices or other in-graph data), you will need to store all data files read by the Dataset in Google Cloud Storage (GCS) buckets.
For most use cases, it is recommended to convert your data into the TFRecord format and use a tf.data.TFRecordDataset to read it. See the TFRecord and tf.Example tutorial for details on how to do this. It is not a hard requirement, and you can use other dataset readers (FixedLengthRecordDataset or TextLineDataset) if you prefer.
Small datasets can be loaded entirely into memory using tf.data.Dataset.cache.
Regardless of the data format used, it is strongly recommended that you use large files, on the order of 100MB. This is especially important in this networked setting, as the overhead of opening a file is significantly higher.
Here the tensorflow_datasets module is used to get a copy of the MNIST training data. Note that try_gcs is specified to use a copy that is available in a public GCS bucket. If you don't specify this, the TPU will not be able to access the downloaded data.
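The TFRecord recommendation above is not demonstrated elsewhere in this notebook, so here is a minimal, hedged sketch of what such a reader could look like. The GCS path and the feature names ('image', 'label') are hypothetical and would have to match how your records were actually written.
```python
# Sketch only: assumes TFRecords containing raw 28x28 uint8 images and int64 labels.
def make_tfrecord_dataset(file_pattern="gs://your-bucket/mnist-train-*.tfrecord", batch_size=200):
  feature_spec = {
      "image": tf.io.FixedLenFeature([], tf.string),
      "label": tf.io.FixedLenFeature([], tf.int64),
  }

  def parse(example_proto):
    parsed = tf.io.parse_single_example(example_proto, feature_spec)
    image = tf.io.decode_raw(parsed["image"], tf.uint8)
    image = tf.cast(tf.reshape(image, [28, 28, 1]), tf.float32) / 255.0
    return image, parsed["label"]

  files = tf.data.Dataset.list_files(file_pattern)
  dataset = tf.data.TFRecordDataset(files).map(parse)
  # A small dataset could instead be held in memory with dataset.cache().
  return dataset.shuffle(10000).repeat().batch(batch_size)
```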
End of explanation
with strategy.scope():
model = create_model()
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['sparse_categorical_accuracy'])
batch_size = 200
steps_per_epoch = 60000 // batch_size
validation_steps = 10000 // batch_size
train_dataset = get_dataset(batch_size, is_training=True)
test_dataset = get_dataset(batch_size, is_training=False)
model.fit(train_dataset,
epochs=5,
steps_per_epoch=steps_per_epoch,
validation_data=test_dataset,
validation_steps=validation_steps)
Explanation: Train a model using Keras high-level APIs
You can train a model simply with the Keras fit/compile APIs. Nothing here is specific to TPUs; you would write the same code if you had multiple GPUs and were using a MirroredStrategy rather than a TPUStrategy. To learn more, check out the Distributed training with Keras tutorial.
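For comparison, a minimal multi-GPU sketch (not run in this notebook) would only swap the strategy object; everything inside the scope stays the same.
```python
# Sketch: identical training code, different strategy (assumes one or more local GPUs).
gpu_strategy = tf.distribute.MirroredStrategy()
with gpu_strategy.scope():
  gpu_model = create_model()
  gpu_model.compile(optimizer='adam',
                    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                    metrics=['sparse_categorical_accuracy'])
```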
End of explanation
with strategy.scope():
model = create_model()
model.compile(optimizer='adam',
# Anything between 2 and `steps_per_epoch` could help here.
experimental_steps_per_execution = 50,
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['sparse_categorical_accuracy'])
model.fit(train_dataset,
epochs=5,
steps_per_epoch=steps_per_epoch,
validation_data=test_dataset,
validation_steps=validation_steps)
Explanation: To reduce Python overhead and maximize the performance of your TPU, try out the experimental argument experimental_steps_per_execution to Model.compile. Here it increases throughput by about 50%:
End of explanation
# Create the model, optimizer and metrics inside strategy scope, so that the
# variables can be mirrored on each device.
with strategy.scope():
model = create_model()
optimizer = tf.keras.optimizers.Adam()
training_loss = tf.keras.metrics.Mean('training_loss', dtype=tf.float32)
training_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
'training_accuracy', dtype=tf.float32)
# Calculate per replica batch size, and distribute the datasets on each TPU
# worker.
per_replica_batch_size = batch_size // strategy.num_replicas_in_sync
train_dataset = strategy.experimental_distribute_datasets_from_function(
lambda _: get_dataset(per_replica_batch_size, is_training=True))
@tf.function
def train_step(iterator):
  """The step function for one training step."""
def step_fn(inputs):
    """The computation to run on each TPU device."""
images, labels = inputs
with tf.GradientTape() as tape:
logits = model(images, training=True)
loss = tf.keras.losses.sparse_categorical_crossentropy(
labels, logits, from_logits=True)
loss = tf.nn.compute_average_loss(loss, global_batch_size=batch_size)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(list(zip(grads, model.trainable_variables)))
training_loss.update_state(loss * strategy.num_replicas_in_sync)
training_accuracy.update_state(labels, logits)
strategy.run(step_fn, args=(next(iterator),))
Explanation: Train a model using a custom training loop
You can also create and train your model using the tf.function and tf.distribute APIs directly. The strategy.experimental_distribute_datasets_from_function API is used to distribute the dataset given a dataset function. Note that the batch size passed into the dataset in this case is the per-replica batch size instead of the global batch size. To learn more, check out the Custom training with tf.distribute.Strategy tutorial.
First, create the model, datasets and tf.functions.
End of explanation
steps_per_eval = 10000 // batch_size
train_iterator = iter(train_dataset)
for epoch in range(5):
print('Epoch: {}/5'.format(epoch))
for step in range(steps_per_epoch):
train_step(train_iterator)
print('Current step: {}, training loss: {}, accuracy: {}%'.format(
optimizer.iterations.numpy(),
round(float(training_loss.result()), 4),
round(float(training_accuracy.result()) * 100, 2)))
training_loss.reset_states()
training_accuracy.reset_states()
Explanation: Then run the training loop.
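Note that steps_per_eval is computed above but never used in the loop shown. A hedged sketch of how an evaluation pass could use it, built only from pieces already defined in this notebook (model, strategy, get_dataset, batch_size) plus a new accuracy metric, is:
```python
# Sketch only: an evaluation pass that would make use of `steps_per_eval`.
with strategy.scope():
  test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy('test_accuracy', dtype=tf.float32)

per_replica_batch_size = batch_size // strategy.num_replicas_in_sync
test_dist_dataset = strategy.experimental_distribute_datasets_from_function(
    lambda _: get_dataset(per_replica_batch_size, is_training=False))

@tf.function
def test_step(iterator):
  def step_fn(inputs):
    images, labels = inputs
    logits = model(images, training=False)
    test_accuracy.update_state(labels, logits)
  strategy.run(step_fn, args=(next(iterator),))

test_iterator = iter(test_dist_dataset)
for step in range(steps_per_eval):
  test_step(test_iterator)
print('Test accuracy: {}%'.format(round(float(test_accuracy.result()) * 100, 2)))
```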
End of explanation
@tf.function
def train_multiple_steps(iterator, steps):
  """The step function for one training step."""
def step_fn(inputs):
    """The computation to run on each TPU device."""
images, labels = inputs
with tf.GradientTape() as tape:
logits = model(images, training=True)
loss = tf.keras.losses.sparse_categorical_crossentropy(
labels, logits, from_logits=True)
loss = tf.nn.compute_average_loss(loss, global_batch_size=batch_size)
grads = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(list(zip(grads, model.trainable_variables)))
training_loss.update_state(loss * strategy.num_replicas_in_sync)
training_accuracy.update_state(labels, logits)
for _ in tf.range(steps):
strategy.run(step_fn, args=(next(iterator),))
# Convert `steps_per_epoch` to `tf.Tensor` so the `tf.function` won't get
# retraced if the value changes.
train_multiple_steps(train_iterator, tf.convert_to_tensor(steps_per_epoch))
print('Current step: {}, training loss: {}, accuracy: {}%'.format(
optimizer.iterations.numpy(),
round(float(training_loss.result()), 4),
round(float(training_accuracy.result()) * 100, 2)))
Explanation: Improving performance with multiple steps inside tf.function
Performance can be improved by running multiple steps within a tf.function. This is achieved by wrapping the strategy.run call with a tf.range inside the tf.function; AutoGraph will convert it to a tf.while_loop on the TPU worker.
Although this delivers better performance, there are tradeoffs compared with running a single step inside a tf.function. Running multiple steps in a tf.function is less flexible: you cannot run things eagerly or arbitrary Python code within the steps.
End of explanation |
4,473 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 2
Imports
Step1: Plotting with parameters
Write a plot_sin1(a, b) function that plots $sin(ax+b)$ over the interval $[0,4\pi]$.
Customize your visualization to make it effective and beautiful.
Customize the box, grid, spines and ticks to match the requirements of this data.
Use enough points along the x-axis to get a smooth plot.
For the x-axis tick locations use integer multiples of $\pi$.
For the x-axis tick labels use multiples of pi using LaTeX
Step2: Then use interact to create a user interface for exploring your function
Step3: In matplotlib, the line style and color can be set with a third argument to plot. Examples of this argument
Step4: Use interact to create a UI for plot_sine2.
Use a slider for a and b as above.
Use a drop down menu for selecting the line style between a dotted blue line line, black circles and red triangles. | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
Explanation: Interact Exercise 2
Imports
End of explanation
t=np.linspace(0,4*np.pi,250)
def plot_sine1(a, b):
plt.figure(figsize=(6+a,6))
plt.plot(t, np.sin(t*a+b))
plt.xticks([0,np.pi,2*np.pi,3*np.pi,4*np.pi], [0,r'$\pi$',r'$2\pi$',r'$3\pi$',r'$4\pi$'])
    plt.tight_layout()
plt.ylabel('sin(ax+b)')
plt.xlabel('x')
plt.title('Sin(ax+b) vs x')
plt.ylim(-1.25,1.25)
plt.yticks([-1.0,0,1.0], [-1,0,1])
plot_sine1(5, 3.4)
Explanation: Plotting with parameters
Write a plot_sin1(a, b) function that plots $sin(ax+b)$ over the interval $[0,4\pi]$.
Customize your visualization to make it effective and beautiful.
Customize the box, grid, spines and ticks to match the requirements of this data.
Use enough points along the x-axis to get a smooth plot.
For the x-axis tick locations use integer multiples of $\pi$.
For the x-axis tick labels use multiples of pi using LaTeX: $3\pi$.
End of explanation
interact(plot_sine1, a=(0.0,5.0,0.1), b=(-5.0,5.0,0.1))
assert True # leave this for grading the plot_sine1 exercise
Explanation: Then use interact to create a user interface for exploring your function:
a should be a floating point slider over the interval $[0.0,5.0]$ with steps of $0.1$.
b should be a floating point slider over the interval $[-5.0,5.0]$ with steps of $0.1$.
End of explanation
def plot_sine2(a, b, style):
plt.figure(figsize=(6+a,6))
plt.plot(t, np.sin(t*a+b), style)
plt.xticks([0,np.pi,2*np.pi,3*np.pi,4*np.pi], [0,r'$\pi$',r'$2\pi$',r'$3\pi$',r'$4\pi$'])
    plt.tight_layout()
plt.ylabel('sin(ax+b)')
plt.xlabel('x')
plt.title('Sin(ax+b) vs x')
plt.ylim(-1.25,1.25)
plt.yticks([-1.0,0,1.0], [-1,0,1])
plot_sine2(4.0, -1.0, 'r--')
Explanation: In matplotlib, the line style and color can be set with a third argument to plot. Examples of this argument:
dashed red: r--
blue circles: bo
dotted black: k.
Write a plot_sine2(a, b, style) function that has a third style argument that allows you to set the line style of the plot. The style should default to a blue line.
End of explanation
interact(plot_sine2, a=(0.0,5.0,0.1), b=(-5.0,5.0,0.1), style=('b','ko','r^'))
assert True # leave this for grading the plot_sine2 exercise
Explanation: Use interact to create a UI for plot_sine2.
Use a slider for a and b as above.
Use a drop down menu for selecting the line style between a dotted blue line, black circles and red triangles.
End of explanation |
4,474 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Calculating Ground Motion Intensity Measures
The SMTK contains two modules for the characterisation of ground motion
Step1: Get Response Spectrum - Nigam & Jennings
Step2: Plot Time Series
Step3: Intensity Measures
Get PGA, PGV and PGD
Step4: Get Durations
Step5: Get Arias Intensity, CAV, CAV5 and rms acceleration
Step6: Spectrum Intensities
Step7: Get the response spectrum pair from two records
Step8: Get Geometric Mean Spectrum
Step9: Get Envelope Spectrum
Step10: Rotationally Dependent and Independent IMs
GMRotD50 and GMRotI50
Step11: Fourier Spectra, Smoothing and HVSR
Show the Fourier Spectrum
Step12: Smooth the Fourier Spectrum Using the Konno & Omachi (1998) Method
Step13: Get the HVSR
Load in the Time Series
Step14: Look at the Fourier Spectra
Step15: Calculate the Horizontal To Vertical Spectral Ratio | Python Code:
# Import modules
%matplotlib inline
import numpy as np # Numerical Python package
import matplotlib.pyplot as plt # Python plotting package
# Import
import smtk.response_spectrum as rsp # Response Spectra tools
import smtk.intensity_measures as ims # Intensity Measure Tools
periods = np.array([0.01, 0.02, 0.03, 0.04, 0.05, 0.075, 0.1, 0.11, 0.12, 0.13, 0.14, 0.15, 0.16, 0.17, 0.18, 0.19,
0.20, 0.22, 0.24, 0.26, 0.28, 0.30, 0.32, 0.34, 0.36, 0.38, 0.40, 0.42, 0.44, 0.46, 0.48, 0.5,
0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95, 1.0, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8,
1.9, 2.0, 2.2, 2.4, 2.6, 2.8, 3.0, 3.2, 3.4, 3.6, 3.8, 4.0, 4.2, 4.4, 4.6, 4.8, 5.0, 5.5, 6.0,
6.5, 7.0,7.5, 8.0, 8.5, 9.0, 9.5, 10.0], dtype=float)
number_periods = len(periods)
# Load record pair from files
x_record = np.genfromtxt("data/sm_record_x.txt")
y_record = np.genfromtxt("data/sm_record_y.txt")
x_time_step = 0.002 # Record sampled at 0.002 s
y_time_step = 0.002
Explanation: Calculating Ground Motion Intensity Measures
The SMTK contains two modules for the characterisation of ground motion:
1) smtk.response_spectrum
This module contains methods for calculation of the response of a set of single degree-of-freedom (SDOF) oscillators using an input time series. Two methods are currently supported:
i) Newmark-Beta
ii) Nigam & Jennings (1969) {Preferred}
The module also includes functions for plotting the response spectra and time series
2) smtk.intensity_measures
This module contains a set of functions for deriving different intensity measures from a strong motion record
i) get_peak_measures(...) - returns PGA, PGV and PGD
ii) get_response_spectrum(...) - returns the response spectrum
iii) get_response_spectrum_pair(...) - returns a response spectrum pair
iv) geometric_mean_spectrum(...) - returns the geometric mean of a pair of records
v) arithmetic_mean_spectrum(...) - returns the arithmetic mean of a pair of records
vi) geometric_mean_spectrum(...) - returns the envelope spectrum of a pair of records
vii) larger_pga(...) - Returns the spectrum with the larger PGA
viii) rotate_horizontal(...) - rotates a record pair through angle theta
ix) gmrotdpp(...) - Returns the rotationally-dependent geometric fractile (pp) of a pair of records
x) gmrotipp(...) - Returns the rotationally-independent geometric fractile (pp) of a pair of records
Example Usage of the Response Spectrum
End of explanation
# Create an instance of the Nigam-Jennings response spectrum class
nigam_jennings = rsp.NigamJennings(x_record, x_time_step, periods, damping=0.05, units="cm/s/s")
sax, time_series, acc, vel, dis = nigam_jennings.evaluate()
# Plot Response Spectrum
rsp.plot_response_spectra(sax, axis_type="semilogx", filename="images/response_nigam_jennings.pdf",filetype="pdf")
Explanation: Get Response Spectrum - Nigam & Jennings
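The module overview above also lists a Newmark-Beta solver. A sketch of the equivalent call is below; it assumes the class is exposed as rsp.NewmarkBeta with the same constructor and evaluate() interface, which you should confirm against your smtk version before relying on it.
```python
# Hedged sketch of the Newmark-Beta alternative (class name assumed):
# newmark_beta = rsp.NewmarkBeta(x_record, x_time_step, periods, damping=0.05, units="cm/s/s")
# sax_nb, time_series_nb, acc_nb, vel_nb, dis_nb = newmark_beta.evaluate()
```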
End of explanation
rsp.plot_time_series(time_series["Acceleration"],
x_time_step,
time_series["Velocity"],
time_series["Displacement"])
Explanation: Plot Time Series
End of explanation
pga_x, pgv_x, pgd_x, _, _ = ims.get_peak_measures(0.002, x_record, True, True)
print "PGA = %10.4f cm/s/s, PGV = %10.4f cm/s, PGD = %10.4f cm" % (pga_x, pgv_x, pgd_x)
pga_y, pgv_y, pgd_y, _, _ = ims.get_peak_measures(0.002, y_record, True, True)
print "PGA = %10.4f cm/s/s, PGV = %10.4f cm/s, PGD = %10.4f cm" % (pga_y, pgv_y, pgd_y)
Explanation: Intensity Measures
Get PGA, PGV and PGD
End of explanation
print "Bracketed Duration (> 5 cm/s/s) = %9.3f s" % ims.get_bracketed_duration(x_record, x_time_step, 5.0)
print "Uniform Duration (> 5 cm/s/s) = %9.3f s" % ims.get_uniform_duration(x_record, x_time_step, 5.0)
print "Significant Duration (5 - 95 Arias ) = %9.3f s" % ims.get_significant_duration(x_record, x_time_step, 0.05, 0.95)
Explanation: Get Durations: Bracketed, Uniform, Significant
End of explanation
print "Arias Intensity = %12.4f cm-s" % ims.get_arias_intensity(x_record, x_time_step)
print "Arias Intensity (5 - 95) = %12.4f cm-s" % ims.get_arias_intensity(x_record, x_time_step, 0.05, 0.95)
print "CAV = %12.4f cm-s" % ims.get_cav(x_record, x_time_step)
print "CAV5 = %12.4f cm-s" % ims.get_cav(x_record, x_time_step, threshold=5.0)
print "Arms = %12.4f cm-s" % ims.get_arms(x_record, x_time_step)
Explanation: Get Arias Intensity, CAV, CAV5 and rms acceleration
End of explanation
# Get response spectrum
sax = ims.get_response_spectrum(x_record, x_time_step, periods)[0]
print "Velocity Spectrum Intensity (cm/s/s) = %12.5f" % ims.get_response_spectrum_intensity(sax)
print "Acceleration Spectrum Intensity (cm-s) = %12.5f" % ims.get_acceleration_spectrum_intensity(sax)
Explanation: Spectrum Intensities: Housner Intensity, Acceleration Spectrum Intensity
End of explanation
sax, say = ims.get_response_spectrum_pair(x_record, x_time_step,
y_record, y_time_step,
periods,
damping=0.05,
units="cm/s/s",
method="Nigam-Jennings")
Explanation: Get the response spectrum pair from two records
End of explanation
sa_gm = ims.geometric_mean_spectrum(sax, say)
rsp.plot_response_spectra(sa_gm, "semilogx", filename="images/geometric_mean_spectrum.pdf", filetype="pdf")
Explanation: Get Geometric Mean Spectrum
End of explanation
sa_env = ims.envelope_spectrum(sax, say)
rsp.plot_response_spectra(sa_env, "semilogx", filename="images/envelope_spectrum.pdf", filetype="pdf")
Explanation: Get Envelope Spectrum
End of explanation
gmrotd50 = ims.gmrotdpp(x_record, x_time_step, y_record, y_time_step, periods, percentile=50.0,
damping=0.05, units="cm/s/s")
gmroti50 = ims.gmrotipp(x_record, x_time_step, y_record, y_time_step, periods, percentile=50.0,
damping=0.05, units="cm/s/s")
# Plot all of the rotational angles!
plt.figure(figsize=(8, 6))
for row in gmrotd50["GeoMeanPerAngle"]:
plt.semilogx(periods, row, "-", color="LightGray")
plt.semilogx(periods, gmrotd50["GMRotDpp"], 'b-', linewidth=2, label="GMRotD50")
plt.semilogx(periods, gmroti50["Pseudo-Acceleration"], 'r-', linewidth=2, label="GMRotI50")
plt.xlabel("Period (s)", fontsize=18)
plt.ylabel("Acceleration (cm/s/s)", fontsize=18)
plt.legend(loc=0)
plt.savefig("images/rotational_spectra.pdf", dpi=300, format="pdf")
Explanation: Rotationally Dependent and Independent IMs
GMRotD50 and GMRotI50
End of explanation
ims.plot_fourier_spectrum(x_record, x_time_step,
filename="images/fourier_spectrum.pdf", filetype="pdf")
Explanation: Fourier Spectra, Smoothing and HVSR
Show the Fourier Spectrum
End of explanation
from smtk.smoothing.konno_ohmachi import KonnoOhmachi
# Get the original Fourier spectrum
freq, amplitude = ims.get_fourier_spectrum(x_record, x_time_step)
# Configure Smoothing Parameters
smoothing_config = {"bandwidth": 40, # Size of smoothing window (lower = more smoothing)
"count": 1, # Number of times to apply smoothing (may be more for noisy records)
"normalize": True}
# Apply the Smoothing
smoother = KonnoOhmachi(smoothing_config)
smoothed_spectra = smoother.apply_smoothing(amplitude, freq)
# Compare the Two Spectra
plt.figure(figsize=(7,5))
plt.loglog(freq, amplitude, "k-", lw=1.0,label="Original")
plt.loglog(freq, smoothed_spectra, "r", lw=2.0, label="Smoothed")
plt.xlabel("Frequency (Hz)", fontsize=14)
plt.xlim(0.05, 200)
plt.ylabel("Fourier Amplitude", fontsize=14)
plt.tick_params(labelsize=12)
plt.legend(loc=0, fontsize=14)
plt.grid(True)
plt.savefig("images/SmoothedFourierSpectra.pdf", format="pdf", dpi=300)
Explanation: Smooth the Fourier Spectrum Using the Konno & Omachi (1998) Method
End of explanation
# Load in a three component data set
record_file = "data/record_3component.csv"
record_3comp = np.genfromtxt(record_file, delimiter=",")
time_vector = record_3comp[:, 0]
x_record = record_3comp[:, 1]
y_record = record_3comp[:, 2]
v_record = record_3comp[:, 3]
time_step = 0.002
# Plot the records
fig = plt.figure(figsize=(8,12))
fig.set_tight_layout(True)
ax = plt.subplot(311)
ax.plot(time_vector, x_record)
ax.set_ylim(-80., 80.)
ax.set_xlim(0., 10.5)
ax.grid(True)
ax.set_xlabel("Time (s)", fontsize=14)
ax.set_ylabel("Acceleration (cm/s/s)", fontsize=14)
ax.tick_params(labelsize=12)
ax.set_title("EW", fontsize=16)
ax = plt.subplot(312)
ax.plot(time_vector, y_record)
ax.set_xlim(0., 10.5)
ax.set_ylim(-80., 80.)
ax.grid(True)
ax.set_xlabel("Time (s)", fontsize=14)
ax.set_ylabel("Acceleration (cm/s/s)", fontsize=14)
ax.set_title("NS", fontsize=16)
ax.tick_params(labelsize=12)
ax = plt.subplot(313)
ax.plot(time_vector, v_record)
ax.set_xlim(0., 10.5)
ax.set_ylim(-40., 40.)
ax.grid(True)
ax.set_xlabel("Time (s)", fontsize=14)
ax.set_ylabel("Acceleration (cm/s/s)", fontsize=14)
ax.set_title("Vertical", fontsize=16)
ax.tick_params(labelsize=12)
plt.savefig("images/3component_timeseries.pdf", format="pdf", dpi=300)
Explanation: Get the HVSR
Load in the Time Series
End of explanation
x_freq, x_four = ims.get_fourier_spectrum(x_record, time_step)
y_freq, y_four = ims.get_fourier_spectrum(y_record, time_step)
v_freq, v_four = ims.get_fourier_spectrum(v_record, time_step)
plt.figure(figsize=(7, 5))
plt.loglog(x_freq, x_four, "k-", lw=1.0, label="EW")
plt.loglog(y_freq, y_four, "b-", lw=1.0, label="NS")
plt.loglog(v_freq, v_four, "r-", lw=1.0, label="V")
plt.xlim(0.05, 200.)
plt.tick_params(labelsize=12)
plt.grid(True)
plt.xlabel("Frequency (Hz)", fontsize=16)
plt.ylabel("Fourier Amplitude", fontsize=16)
plt.legend(loc=3, fontsize=16)
plt.savefig("images/3component_fas.pdf", format="pdf", dpi=300)
Explanation: Look at the Fourier Spectra
End of explanation
# Setup parameters
params = {"Function": "KonnoOhmachi",
"bandwidth": 40.0,
"count": 1.0,
"normalize": True
}
# Returns
# 1. Horizontal to Vertical Spectral Ratio
# 2. Frequency
# 3. Maximum H/V
# 4. Period of Maximum H/V
hvsr, freq, max_hv, t_0 = ims.get_hvsr(x_record, time_step, y_record, time_step, v_record, time_step, params)
plt.figure(figsize=(7,5))
plt.semilogx(freq, hvsr, 'k-', lw=2.0)
# Show T0
t_0_line = np.array([[t_0, 0.0],
[t_0, 1.1 * max_hv]])
plt.semilogx(1.0 / t_0_line[:, 0], t_0_line[:, 1], "r--", lw=1.5)
plt.xlabel("Frequency (Hz)", fontsize=14)
plt.ylabel("H / V", fontsize=14)
plt.tick_params(labelsize=14)
plt.xlim(0.1, 10.0)
plt.grid(True)
plt.title(r"$T_0 = %.4f s$" % t_0, fontsize=16)
plt.savefig("images/hvsr_example1.pdf", format="pdf", dpi=300)
Explanation: Calculate the Horizontal To Vertical Spectral Ratio
End of explanation |
4,475 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Alternative More Detailed Solution for Lab01 - Task C
Step1: First try - Computing the probability using for loops
The following was not clear in the instructions
Step2: And it works pretty well. Comparing with our solution, the difference is very small, around $10^{-15}$
Step3: Computing the log-probability directly
Why log? - Our goal is to compare probabilities to see to which of the two stars a point belongs.
But the formula for the probability is a bit heavy, with multiplications and exponents.
By applying a log transform, we get additions and multiplications, which is easer to handle, and does not impact the comparison - if a > b, log(a) > log(b).
(If it does not make sense - don't worry - you'll see this in the coming lectures)
Simplifying the equation
Notation
Step4: Matrix formulation - using no for loops
Now, how can we use numpy to avoid using for loops?
This is firstly an algebra problem, not a programming one - the programming is just translation. Our main tools are
- Traditional matrix multiplication
- Addition and multiplication by scalars
- Elementwise Addition and multiplication by matrices
- Summation and product over rows or columns
- ...
But first, we need to know what we are after. The expression for the log is
$$\log p(x | \mu, \Sigma) = - \frac{1}{2}(x - \mu)^T\Sigma^{-1}(x-\mu) + c(\mu,\Sigma),$$
Where $c(\mu, \Sigma) = -\log \left[ (2\pi)^{d/2} |\Sigma|^{1/2} \right] = - \frac{1}{2}\left[d\log(2 \pi) + \log(|\Sigma|)\right]$
And our function compute_log_p should return a vector of N elements, which is
$$
\texttt{compute_log_p}(X, \mu, \Sigma) =
\begin{pmatrix}
\log p(x_1 | \mu, \Sigma)\
...\
\log p(x_n | \mu, \Sigma)\
\end{pmatrix}
=
-\frac{1}{2}
\begin{pmatrix}
(x_1 - \mu)^T\Sigma^{-1}(x_1-\mu)\
...\
(x_N - \mu)^T\Sigma^{-1}(x_N-\mu)\
\end{pmatrix}
+c(\mu, \Sigma)
$$
Let us focus on the matrix part of the formula, which we'll call $M$, after some simplification
Step5: More bruteforce approach
If your $\Sigma$ matrix is less nice, you might need to take some more steps. Again, looking at a single sample, we have
$$M_i = A_i^T \Sigma^{-1} A_i =
\begin{pmatrix}
A_{i1} & ... & A_{iD}
\end{pmatrix}
\begin{pmatrix}
\Sigma_{11} & ... & \Sigma_{1D} \
\vdots & \ddots & \vdots \
\Sigma_{D1} & ... & \Sigma_{DD} \
\end{pmatrix}^{-1}
\begin{pmatrix}
A_{i1} \ ... \ A_{iD}
\end{pmatrix}
$$
Using $\Sigma' = \Sigma^{-1}$ to avoid confusion between the matrix inverse and the elementwise inverse, this gives
$$M_n =
\begin{pmatrix}
A_{n1} & ... & A_{nD}
\end{pmatrix}
\begin{pmatrix}
\sum_{i=1}^D a_{ni} \Sigma'{1i} \
\vdots \
\sum{i=1}^D a_{ni} \Sigma'{Di} \
\end{pmatrix}
= \sum{i=1}^D \sum_{j=1}^D a_{ni} a_{nj} \Sigma'_{ij}
$$
In last section we found a nice way to split the one sample the expression to a [1 x D][D x 1] matrix multiplication, with the inputs on the left [1 x D] matrix and the transformation being the [D x 1] matrix; extanding to a [N x D] input is easy. Here, we are not assuming that $\Sigma$ is diagonal, which complicates a little bit the system. We will need to bring two tools out of the box
Step6: Why do we even care?
Time.
For loops are much more expensive than lingear algebra - for which we have specialized libraries, Code that correctly uses linear algebra can run 10x to 100x faster than a for-loop program for the same output.
Here is a comparison of the different solutions,
* Using for loops
* Using numpy to do the heavy lifting
* Using the fact that $\Sigma$ is a diagonal
Most of the benefit comes from using numpy correctly | Python Code:
%matplotlib inline
import numpy as np
from numpy.random import rand, randn
import matplotlib.pyplot as plt
%load_ext autoreload
%autoreload 2
# Data generation
n, d, k = 100, 2, 2
np.random.seed(20)
X = rand(n, d)
means = [rand(d) * 0.5 + 0.5 , - rand(d) * 0.5 + 0.5] # for better plotting when k = 2
S = np.diag(rand(d))
sigmas = [S]*k
Explanation: Alternative More Detailed Solution for Lab01 - Task C
End of explanation
def compute_p_forloop(X, mean, sigma):
[n, d] = np.shape(X)
# The constant (same for all samples) term
    c = np.power(2*np.pi, d/2)*np.power(np.linalg.det(sigma), 0.5)  # (2*pi)^(d/2) * |Sigma|^(1/2)
invSigma = np.linalg.inv(sigma)
result = np.zeros((n,))
for i in range(n):
xmu = X[i] - mean
result[i] = 1/c * np.exp( - 0.5 * (xmu).T.dot(invSigma).dot(xmu))
return result
Explanation: First try - Computing the probability using for loops
The following was not clear in the instructions: The function you had to implement was compute_log_p, so you should have transformed the expression by applying a log first.
It wouldn't have changed the result of the graph though.
If you skipped this step and completed this exercise using for loops, you probably did something like the next cell.
The function to implement:
$$p(x | \mu, \Sigma) = \frac{1}{(2\pi)^{d/2} |\Sigma|^{1/2}} \exp\left(-\frac{1}{2}(x - \mu)^T\Sigma^{-1}(x-\mu)\right)$$
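If you want an independent sanity check of any of the implementations in this notebook, SciPy's multivariate normal gives the same density. A sketch, assuming scipy is installed:
```python
# from scipy.stats import multivariate_normal
# ref = multivariate_normal(mean=means[0], cov=sigmas[0]).pdf(X)
# print(np.linalg.norm(ref - compute_p_forloop(X, means[0], sigmas[0])))
```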
End of explanation
### -----
### Applying log to our function + Solution
def compute_log_p_forloop(X, mean, sigma):
return np.log(compute_p_forloop(X, mean, sigma))
def compute_log_p_solution(X, mean, sigma):
dxm = X - mean
return -0.5 * np.sum(dxm * np.dot(dxm, np.linalg.inv(sigma)), axis=1) - np.log(2 * np.pi) * (d / 2) - 0.5 * np.log(np.linalg.det(sigma))
### -----
### Difference between solution and this implementation
a = compute_log_p_forloop(X, means[0], sigmas[0])
b = compute_log_p_solution(X, means[0], sigmas[0])
print("|a-b|_2 =", np.linalg.norm(a-b))
### -----
### Print the graphs
def makeGraph(function, X, means, sigmas):
log_ps = [function(X, m, s) for m, s in zip(means, sigmas)]
assignments = np.argmax(log_ps, axis=0)
colors = np.array(['red', 'green'])[assignments]
plt.title(function.__name__)
plt.scatter(X[:, 0], X[:, 1], c=colors, s=100)
plt.scatter(np.array(means)[:, 0], np.array(means)[:, 1], marker='*', s=200)
plt.show()
makeGraph(compute_log_p_forloop, X, means, sigmas)
makeGraph(compute_log_p_solution, X, means, sigmas)
Explanation: And it works pretty well. Comparing with our solution, the difference is very small, around $10^{-15}$
End of explanation
def compute_log_p_forloop(X, mean, sigma):
[n, d] = np.shape(X)
result = np.zeros((n,))
constant = -0.5 * (d*np.log(2*np.pi) + np.log(np.linalg.det(sigma)))
invSigma = np.linalg.inv(sigma)
for i in range(n):
xmu = X[i] - mean
result[i] = -(1/2) * (xmu).T.dot(invSigma).dot(xmu) + constant
return result
### -----
### Difference between solution and this implementation
a = compute_log_p_forloop(X, means[0], sigmas[0])
b = compute_log_p_solution(X, means[0], sigmas[0])
print("|a-b|_2 =", np.linalg.norm(a-b))
### -----
### Print the graphs
makeGraph(compute_log_p_forloop, X, means, sigmas)
makeGraph(compute_log_p_solution, X, means, sigmas)
Explanation: Computing the log-probability directly
Why log? - Our goal is to compare probabilities to see to which of the two stars a point belongs.
But the formula for the probability is a bit heavy, with multiplications and exponents.
By applying a log transform, we get additions and multiplications, which is easer to handle, and does not impact the comparison - if a > b, log(a) > log(b).
(If it does not make sense - don't worry - you'll see this in the coming lectures)
Simplifying the equation
Notation: $x$ is a sample, so $[D \times 1]$, $X$ is the matrix, so $[N \times D]$
$$p(x | \mu, \Sigma) = \frac{1}{(2\pi)^{d/2} |\Sigma|^{1/2}} \exp\left(-\frac{1}{2}(x - \mu)^T\Sigma^{-1}(x-\mu)\right)$$
$$\log p(x | \mu, \Sigma) = \log \left[ \frac{1}{(2\pi)^{d/2} |\Sigma|^{1/2}} \exp\left(-\frac{1}{2}(x - \mu)^T\Sigma^{-1}(x-\mu)\right) \right]$$
$$\log p(x | \mu, \Sigma) = \log \left[ \frac{1}{(2\pi)^{d/2} |\Sigma|^{1/2}} \right] + \log \left[\exp\left(-\frac{1}{2}(x - \mu)^T\Sigma^{-1}(x-\mu)\right) \right]$$
$$\log p(x | \mu, \Sigma) = -\log \left[ (2\pi)^{d/2} |\Sigma|^{1/2} \right] - \frac{1}{2}(x - \mu)^T\Sigma^{-1}(x-\mu)$$
This gives us the following expression,
$$\log p(x | \mu, \Sigma) = - \frac{1}{2}(x - \mu)^T\Sigma^{-1}(x-\mu) + c(\mu,\Sigma),$$
Where $c(\mu, \Sigma) = -\log \left[ (2\pi)^{d/2} |\Sigma|^{1/2} \right] = - \frac{1}{2}\left[d\log(2 \pi) + \log(|\Sigma|)\right]$
Steps used:
- $\log(ab) = \log(a) + \log(b)$
- $\log(1/a) = -\log(a)$
- $\log(\exp(a)) = a$
Implementing this function
End of explanation
def compute_log_p_sol1(X, mean, sigma):
[n, d] = np.shape(X)
constant = - 0.5 * (d * np.log(2 * np.pi) + np.log(np.linalg.det(sigma)))
diagInvSigma = np.diag(np.linalg.inv(sigma))
xmu = X - mean
xmu2 = xmu*xmu
return -0.5 * xmu2.dot(diagInvSigma) + constant
#return result
### -----
### Difference between solution and this implementation
a = compute_log_p_sol1(X, means[0], sigmas[0])
b = compute_log_p_solution(X, means[0], sigmas[0])
print("|a-b|_2 =", np.linalg.norm(a-b))
### -----
### Print the graphs
makeGraph(compute_log_p_sol1, X, means, sigmas)
makeGraph(compute_log_p_solution, X, means, sigmas)
Explanation: Matrix formulation - using no for loops
Now, how can we use numpy to avoid using for loops?
This is firstly an algebra problem, not a programming one - the programming is just translation. Our main tools are
- Traditional matrix multiplication
- Addition and multiplication by scalars
- Elementwise Addition and multiplication by matrices
- Summation and product over rows or columns
- ...
But first, we need to know what we are after. The expression for the log is
$$\log p(x | \mu, \Sigma) = - \frac{1}{2}(x - \mu)^T\Sigma^{-1}(x-\mu) + c(\mu,\Sigma),$$
Where $c(\mu, \Sigma) = -\log \left[ (2\pi)^{d/2} |\Sigma|^{1/2} \right] = - \frac{1}{2}\left[d\log(2 \pi) + \log(|\Sigma|)\right]$
And our function compute_log_p should return a vector of N elements, which is
$$
\texttt{compute_log_p}(X, \mu, \Sigma) =
\begin{pmatrix}
\log p(x_1 | \mu, \Sigma)\
...\
\log p(x_n | \mu, \Sigma)\
\end{pmatrix}
=
-\frac{1}{2}
\begin{pmatrix}
(x_1 - \mu)^T\Sigma^{-1}(x_1-\mu)\
...\
(x_N - \mu)^T\Sigma^{-1}(x_N-\mu)\
\end{pmatrix}
+c(\mu, \Sigma)
$$
Let us focus on the matrix part of the formula, which we'll call $M$, after some simplification:
- substitue $A = X - \mu$ (a is a [N x D] matrix).
- $A_i$ is the first row, a D-elements vector
- $A_{ij}$ is the element at cell $i,j$.
We have
$$
M=
\begin{pmatrix}
A_1^T\Sigma^{-1}A_1\
...\
A_N^T\Sigma^{-1}A_N\
\end{pmatrix}
$$
Can we simplify the expression? For a single row, we have
$$M_i = A_i^T \Sigma^{-1} A_i =
\begin{pmatrix}
A_{i1} & ... & A_{iD}
\end{pmatrix}
\begin{pmatrix}
\Sigma_{11} & ... & \Sigma_{1D} \
\vdots & \ddots & \vdots \
\Sigma_{D1} & ... & \Sigma_{DD} \
\end{pmatrix}^{-1}
\begin{pmatrix}
A_{i1} \ ... \ A_{iD}
\end{pmatrix}
\text{ - Dimensions: } [1 x D] [D x D] [D x 1] \text{ }
$$
From here, there are two path - simplification using the properties of the matrices we have, and a more brute force approach.
Simplifying using the properties of the matrices
The thing to note is that $\Sigma$ is a diagonal matrix (Data loading code: S = np.diag(rand(d))). Therefore, we have
$$M_i = A_i^T \Sigma^{-1} A_i =
\begin{pmatrix}
A_{i1} & ... & A_{iD}
\end{pmatrix}
\begin{pmatrix}
1/\Sigma_{11} & 0 & ... & 0 \
0 & \ddots & \ddots & \vdots \
\vdots & \ddots & \ddots & \vdots \
0 & ... & ... & 1/\Sigma_{DD} \
\end{pmatrix}
\begin{pmatrix}
A_{i1} \ ... \ A_{iD}
\end{pmatrix}
$$
$$
\begin{pmatrix}
A_{i1} & ... & A_{iD}
\end{pmatrix}
\begin{pmatrix}
A_{i1}/\Sigma_{11} \ ... \ A_{iD}/\Sigma_{DD}
\end{pmatrix}
=
\sum_{j=1}^D A_{ij}^2 /\Sigma_{jj}
=
\begin{pmatrix}
A_{i1}^2 & ... & A_{iD}^2
\end{pmatrix}
\begin{pmatrix}
1/\Sigma_{11} \ ... \ 1/\Sigma_{DD}
\end{pmatrix}
$$
Notice that on those last formulations, we have a [1 x D] [D x 1] system. The [D x 1] matrix is a transformation we apply to the [1 x D] input, and we can apply it to all samples by providing a [N x D] input, as follow.
$$
=
\begin{pmatrix}
A_{11}^2 & ... & A_{1D}^2 \
\vdots & \ddots & \vdots \
A_{N1}^2 & ... & A_{ND}^2 \
\end{pmatrix}
\begin{pmatrix}
1/\Sigma_{11} \ ... \ 1/\Sigma_{DD}
\end{pmatrix}
$$
Or, in (pseudo) code,
A = X - mu
A2 = A * A # element-wise multiplication
invSigma = np.linalg.inverse(Sigma)
diagInvSigma = np.diag(invSigma)
M = A2.dot(diagInvSigma)
compute_log_p(X, mu, sigma) = - 0.5 * M + c(mu, sigma)
End of explanation
def compute_log_p_solution(X, mean, sigma):
c = - np.log(2 * np.pi) * (d / 2) - 0.5 * np.log(np.linalg.det(sigma))
A = X - mean
invSigma = np.linalg.inv(sigma)
return -0.5 * np.sum(A * (A.dot(invSigma)), axis=1) + c
makeGraph(compute_log_p_solution, X, means, sigmas)
Explanation: More bruteforce approach
If your $\Sigma$ matrix is less nice, you might need to take some more steps. Again, looking at a single sample, we have
$$M_i = A_i^T \Sigma^{-1} A_i =
\begin{pmatrix}
A_{i1} & ... & A_{iD}
\end{pmatrix}
\begin{pmatrix}
\Sigma_{11} & ... & \Sigma_{1D} \
\vdots & \ddots & \vdots \
\Sigma_{D1} & ... & \Sigma_{DD} \
\end{pmatrix}^{-1}
\begin{pmatrix}
A_{i1} \ ... \ A_{iD}
\end{pmatrix}
$$
Using $\Sigma' = \Sigma^{-1}$ to avoid confusion between the matrix inverse and the elementwise inverse, this gives
$$M_n =
\begin{pmatrix}
A_{n1} & ... & A_{nD}
\end{pmatrix}
\begin{pmatrix}
\sum_{i=1}^D a_{ni} \Sigma'{1i} \
\vdots \
\sum{i=1}^D a_{ni} \Sigma'{Di} \
\end{pmatrix}
= \sum{i=1}^D \sum_{j=1}^D a_{ni} a_{nj} \Sigma'_{ij}
$$
In the last section we found a nice way to split the one-sample expression into a [1 x D][D x 1] matrix multiplication, with the inputs in the left [1 x D] matrix and the transformation being the [D x 1] matrix; extending to a [N x D] input is easy. Here, we are not assuming that $\Sigma$ is diagonal, which complicates the system a little bit. We will need to bring two tools out of the box:
Column summation - transforms a [A x B] matrix into a [A x 1] matrix by summing all columns, for each row, as follow:
$$
\texttt{column summation of }
\begin{pmatrix}
y_{11} & y_{12} & ... & y_{1B} \
y_{21} & y_{22} & ... & y_{2B}
\end{pmatrix}
=
\begin{pmatrix}
\sum_{i=1}^B y_{1i} \
\sum_{i=1}^B y_{2i}
\end{pmatrix}
$$
Element-wise matrix multiplication - What the * operator does on numpy matrices - written $\odot$ here,
$$
\begin{pmatrix}
y_{1} & y_{2} & ... & y_{B}
\end{pmatrix}
\odot
\begin{pmatrix}
z_{1} & z_{2} & ... & z_{B}
\end{pmatrix}
=
\begin{pmatrix}
y_{1}z_{1} & y_{2}z_{2} & ... & y_{B}z_{B}
\end{pmatrix}
$$
Using those tools, we can work out a better representation for our formula. We first expand one of the summation by doing a reverse-column-summation to go from our scalar result to a [D x 1] matrix.
$$
\sum_{i=1}^D \sum_{j=1}^D a_{ni} a_{nj} \Sigma'{ij}
=
\begin{pmatrix}
A{n1} \sum_{j=1}^D A_{nj} \Sigma'{1j} &
A{n2} \sum_{j=1}^D A_{nj} \Sigma'{2j} &
...
A{nD} \sum_{j=1}^D A_{nj} \Sigma'{Dj}
\end{pmatrix}
$$
We now can separate the matrix into two parts using element-wise multiplication.
$$
=
\begin{pmatrix}
A{n1} &
A_{n2} &
...
A_{nD}
\end{pmatrix}
\odot
\begin{pmatrix}
\sum_{j=1}^D A_{nj} \Sigma'{1j} &
\sum{j=1}^D A_{nj} \Sigma'{2j} &
...
\sum{j=1}^D A_{nj} \Sigma'{Dj}
\end{pmatrix}
$$
$$
=
A_n
\odot
\begin{pmatrix}
\sum{j=1}^D A_{nj} \Sigma'{1j} &
\sum{j=1}^D A_{nj} \Sigma'{2j} &
...
\sum{j=1}^D A_{nj} \Sigma'_{Dj}
\end{pmatrix}
$$
Now, we can see that the [1 x D] matrix on the right is the result of $A_n\Sigma'$ ([1 x D][D x D]).
$$
=
A_n \odot (A_n \Sigma')
$$
And this is it. If instead of $A_n$, we plug $A$ into the system, we get a [N x D] $\odot$ [N x D][D x D] system, the solution for every sample in one line,
$$A \odot (A \Sigma')$$
End of explanation
import time
def generateData(n, d, k):
X = rand(n, d)
means = [rand(d) * 0.5 + 0.5 , - rand(d) * 0.5 + 0.5] # for better plotting when k = 2
S = np.diag(rand(d))
sigmas = [S]*k
return X, means, sigmas
matrix_time = np.zeros(10,)
forloop_time = np.zeros(10,)
i = 0
Ns = np.logspace(0, 7, num=10)
for N in Ns:
X_n, means_n, sigmas_n = generateData(np.floor(N), 2, 2)
start_time = time.time()
compute_log_p_solution(X_n, means_n[0], sigmas_n[0])
matrix_time[i] = time.time() - start_time
start_time = time.time()
compute_log_p_forloop(X_n, means_n[0], sigmas_n[0])
forloop_time[i] = time.time() - start_time
i += 1
plt.title('Computation time comparison\n Assuming diagonal matrix vs. no assumption')
h1 = plt.plot(np.floor(Ns), forloop_time, label='For loop')
h2 = plt.plot(np.floor(Ns), matrix_time, label='Matrix')
plt.xscale("log", nonposx='clip')
plt.yscale("log", nonposx='clip')
plt.ylabel('Time (s)')
plt.legend(['For loop', 'Matrix'])
plt.grid()
plt.show()
import datetime
import timeit
def generateData(n, d, k):
X = rand(n, d)
means = [rand(d) * 0.5 + 0.5 , - rand(d) * 0.5 + 0.5] # for better plotting when k = 2
S = np.diag(rand(d))
sigmas = [S]*k
return X, means, sigmas
matrix_time = np.zeros(10,)
diag_time = np.zeros(10,)
i = 0
Ns = np.logspace(0, 7, num=10)
for N in Ns:
X_n, means_n, sigmas_n = generateData(np.floor(N), 2, 2)
start_time = time.time()
compute_log_p_solution(X_n, means_n[0], sigmas_n[0])
matrix_time[i] = time.time() - start_time
start_time = datetime.datetime.now()
compute_log_p_sol1(X_n, means_n[0], sigmas_n[0])
diag_time[i] = (datetime.datetime.now() - start_time).total_seconds()
i += 1
plt.title('Computation time comparison\n Assuming diagonal matrix vs. no assumption')
h1 = plt.plot(np.floor(Ns), diag_time, label='Assuming diagonal covariance')
h2 = plt.plot(np.floor(Ns), matrix_time, label='Matrix')
plt.xscale("log", nonposx='clip')
plt.yscale("log", nonposx='clip')
plt.ylabel('Time (s)')
plt.legend(['Assuming diagonal covariance', 'No assumption'])
plt.grid()
plt.show()
Explanation: Why do we even care?
Time.
For loops are much more expensive than linear algebra, for which we have specialized libraries. Code that correctly uses linear algebra can run 10x to 100x faster than a for-loop program for the same output.
Here is a comparison of the different solutions,
* Using for loops
* Using numpy to do the heavy lifting
* Using the fact that $\Sigma$ is a diagonal
Most of the benefit comes from using numpy correctly
End of explanation |
4,476 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="http
Step1: From Landlab, we'll need a grid on which to plot data, and a plotting function. We'll start with just imshow_grid, but be aware that similar but more specifically named functions like imshow_grid_at_node are also available. These all wrap the same basic Landlab functionality, so we're taking the most general method.
Note that you can use imshow_grid as a function or as a method of any landlab grid. For example, the following two usages can be used interchangeably,
```python
grid.imshow(values)
imshow_grid(grid, values)
```
Step2: We'll also need some functions from matplotlib proper to help us handle our graphical output
Step3: Plotting in 2D
The imshow plotter method is Landlab's primary function for plotting data distributed across the grid. It's pretty powerful, and comes with a fairly extensive suite of options to control the appearance of your output. You can see the full list of options in the imshow_grid documentation.
However, most simply, it just takes grid.imshow(data). Data can be either a field name string, or an array of the data itself.
Step4: Those units for the axis are taken from the grid property axis_units, which is a tuple that we can set. Alternatively, pass a tuple directly to the plotter with the keyword grid_units.
While we're at it, let's plot from a field instead of an array, and also mix up the default color scheme. The cmap keyword can take any input that you could also supply to matplotlib; see, e.g., http
Step5: The plotter works just fine with both raster grids and irregular grids. Name a plot with the var_name keyword.
Step6: Now, let's look at some of the other more advanced options imshow can provide.
imshow offers plenty of keyword options for modifying the colorbar, including var_name, var_units, symmetric_cbar, vmin, vmax, and shrink. We've already seen allow_colorbar, which lets you suppress the bar entirely. Let's see some in action.
Step7: Now let's explore color control. The grid takes the keyword color_for_background, which as you'd expect, colors any exposed part of the frame without cells over it. It knows the same color representations as matplotlib, e.g., (0., 0., 0.5), '0.3', 'b', 'yellow'.
Step8: The plotter knows about boundary condition status, and we can control the colour of such nodes as well. This is useful if plotting an irregular watershed on a raster, for example. Here, None means transparent, as we will see in the next example.
Step9: Finally, note that the plotter recognises any masked node in a masked array as a closed node. This can be used as a convenient way to make grid overlays, as follows
Step10: Plotting in 1D
Landlab basically lets you get on with it for yourself if plotting cross sections, or otherwise in 1D. We recommend the basic matplotlib plotting suite. Often plot() is totally adequate.
For a simple grid cross section, just reshape the data array back to a raster and take a slice
Step11: Additionally, Landlab makes available a stream profiler tool. It finds the highest drainage area node in a landscape whenever it's called, then follows the drainage structure back upstream from that node, always choosing the upstream node with the highest drainage area. This means we can do things like this | Python Code:
import numpy as np
Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../landlab_header.png"></a>
Plotting grid data with Landlab
<hr>
<small>For more Landlab tutorials, click here: <a href="https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html">https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html</a></small>
<hr>
This tutorial illustrates how you can plot spatial data in Landlab, focusing in particular on Landlab's imshow_grid plotter and associated functions. Landlab's plotters are built onto the widely used Matplotlib Python package.
We start by importing the NumPy library, which we'll use in producing some data to plot:
End of explanation
from landlab import RasterModelGrid, RadialModelGrid, imshow_grid
Explanation: From Landlab, we'll need a grid on which to plot data, and a plotting function. We'll start with just imshow_grid, but be aware that similar but more specifically named functions like imshow_grid_at_node are also available. These all wrap the same basic Landlab functionality, so we're taking the most general method.
Note that you can use imshow_grid as a function or as a method of any landlab grid. For example, the following two usages can be used interchangably,
```python
grid.imshow(values)
python
imshow_grid(grid, values)
```
End of explanation
import matplotlib.pyplot as plt
Explanation: We'll also need some functions from matplotlib proper to help us handle our graphical output:
End of explanation
%matplotlib inline
rmg = RasterModelGrid((50, 50), 1.0)
rmg.imshow(rmg.x_of_node) # plot the x distances at nodes
plt.show()
Explanation: Plotting in 2D
The imshow plotter method is Landlab's primary function for plotting data distributed across the grid. It's pretty powerful, and comes with a fairly extensive suite of options to control the appearance of your output. You can see the full list of options in the imshow_grid documentation.
However, most simply, it just takes grid.imshow(data). Data can be either a field name string, or an array of the data itself.
End of explanation
rmg.axis_units = ("km", "km")
_ = rmg.add_field(
"myfield", (rmg.x_of_node**2 + rmg.y_of_node**2) ** 0.5, at="node", clobber=True
)
rmg.imshow("myfield", cmap="bone")
plt.show()
Explanation: Those units for the axis are taken from the grid property axis_units, which is a tuple that we can set. Alternatively, pass a tuple directly to the plotter with the keyword grid_units.
While we're at it, let's plot from a field instead of an array, and also mix up the default color scheme. The cmap keyword can take any input that you could also supply to matplotlib; see, e.g., http://matplotlib.org/examples/color/colormaps_reference.html.
End of explanation
radmg = RadialModelGrid(n_rings=10, spacing=10.0)
plt.subplot(121)
rmg.imshow(rmg.y_of_node, allow_colorbar=False, plot_name="regular grid")
plt.subplot(122)
radmg.imshow(radmg.x_of_node, allow_colorbar=False, plot_name="irregular grid")
plt.show()
Explanation: The plotter works just fine with both raster grids and irregular grids. Name a plot with the var_name keyword.
End of explanation
radz = (radmg.x_of_node**2 + radmg.y_of_node**2) ** 0.5
radz = radz.max() - radz - 0.75 * radz.mean()
# let's plot these elevations truncated at radz >= 0
radmg.imshow(
radz,
grid_units=("m", "m"),
vmin=0.0,
shrink=0.75,
var_name="radz",
var_units="no units",
)
plt.show()
Explanation: Now, let's look at some of the other more advanced options imshow can provide.
imshow offers plenty of keyword options for modifying the colorbar, including var_name, var_units, symmetric_cbar, vmin, vmax, and shrink. We've already seen allow_colorbar, which lets you suppress the bar entirely. Let's see some in action.
End of explanation
radmg.imshow(radmg.y_of_node, color_for_background="0.3")
plt.show()
Explanation: Now let's explore color control. The grid takes the keyword color_for_background, which as you'd expect, colors any exposed part of the frame without cells over it. It knows the same color representations as matplotlib, e.g., (0., 0., 0.5), '0.3', 'b', 'yellow'.
End of explanation
rmg2 = RasterModelGrid((50, 50), (1.0, 2.0))
myvals = ((rmg2.x_of_node - 50.0) ** 2 + (rmg2.y_of_node - 25.0) ** 2) ** 0.5
rmg2.status_at_node[myvals > 30.0] = rmg2.BC_NODE_IS_CLOSED
rmg2.imshow(myvals, color_for_closed="blue", shrink=0.6)
Explanation: The plotter knows about boundary condition status, and we can control the colour of such nodes as well. This is useful if plotting an irregular watershed on a raster, for example. Here, None means transparent, as we will see in the next example.
End of explanation
mymask_1stcondition = np.logical_or(rmg.x_of_node < 15, rmg.x_of_node > 35)
mymask_2ndcondition = np.logical_or(rmg.y_of_node < 15, rmg.y_of_node > 35)
mymask = np.logical_or(mymask_1stcondition, mymask_2ndcondition)
overlay_data = np.ma.array(rmg.y_of_node, mask=mymask, copy=False)
rmg.imshow(rmg.x_of_node)
rmg.imshow(overlay_data, color_for_closed=None, cmap="winter")
plt.show()
Explanation: Finally, note that the plotter recognises any masked node in a masked array as a closed node. This can be used as a convenient way to make grid overlays, as follows:
End of explanation
# Note, in Landlab 2.0 a new component that will permit profiles based on endpoints
# will be added to the component library.
mg = RasterModelGrid((30, 30))
z = (mg.x_of_node**2 + mg.y_of_node**2) ** 0.5
z = z.max() - z
z_raster = z.reshape(mg.shape)
x_raster = mg.x_of_node.reshape(mg.shape)
for i in range(0, 30, 5):
plt.plot(x_raster[i, :], z_raster[i, :])
plt.title("east-west cross sections though z")
plt.xlabel("x (m)")
plt.ylabel("z (m)")
plt.show()
Explanation: Plotting in 1D
Landlab basically lets you get on with it for yourself if plotting cross sections, or otherwise in 1D. We recommend the basic matplotlib plotting suite. Often plot() is totally adequate.
For a simple grid cross section, just reshape the data array back to a raster and take a slice:
End of explanation
from landlab.components import FlowAccumulator, FastscapeEroder, ChannelProfiler
mg = RasterModelGrid((100, 100), 1000.0)
mg.axis_units = ("m", "m")
z = mg.add_zeros("topographic__elevation", at="node")
z += np.random.rand(mg.number_of_nodes) # roughen the initial surface
fr = FlowAccumulator(mg)
sp = FastscapeEroder(mg, K_sp=1.0e-5)
dt = 50000.0
for ndt in range(100):
z[mg.core_nodes] += 10.0
fr.run_one_step()
sp.run_one_step(dt)
if ndt % 5 == 0:
print(ndt)
prf = ChannelProfiler(
mg, number_of_watersheds=4, main_channel_only=False, minimum_channel_threshold=1e7
)
prf.run_one_step()
plt.figure(1)
prf.plot_profiles()
plt.show()
plt.figure(1)
prf.plot_profiles_in_map_view()
plt.show()
Explanation: Additionally, Landlab makes available a stream profiler tool. It finds the highest drainage area node in a landscape whenever it's called, then follows the drainage structure back upstream from that node, always choosing the upstream node with the highest drainage area. This means we can do things like this:
End of explanation |
4,477 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MachineLearningWorkShop at UCSC
Aug 18th - Learning with TESS Simulated data
Last month we explored all type of learning algorithms with simulated light curves including
Step1: Let's first look at what the TESS light curves
Step2: The combined feature files contain features from Box Least Squal measurements, and 20 PCA components from the light curve. Later on hopefully we can explore how to create new features from the light curves.
Let's first examine the columns in the combined feature files
Step3: Columns Ids, Catalog_Period, Depth, Catalog_Epoch records the information we have regarding the injected transits. Anything with period smaller than 0 is not a transit.
SNR is the signal to noise calculated for the transits using the catalog value.
SNR=\sqrt{Ntransit}*Depth/200mmag
There are three type of Y values included in this feature file
Step4: We show the results from some standard algorithms here
Step5: We can compare the prediction with the Manuel selection and the Catalog selection as the following
Step6: Feature Selection | Python Code:
import sklearn
from sklearn.linear_model import LogisticRegression
from sklearn.cross_validation import train_test_split
from sklearn.utils import shuffle
from sklearn import metrics
from sklearn.metrics import roc_curve
from sklearn.metrics import classification_report
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.cross_validation import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.grid_search import GridSearchCV
import matplotlib
from matplotlib import pyplot as plt
import pandas as pd
import numpy as np
%matplotlib inline
def make_ROC_curve(testY, predY, name):
fig2 = plt.figure()
ax= fig2.add_subplot(1,1,1)
fpr, tpr, _ = roc_curve(testY, predY)
ax.plot(fpr, tpr, label = name)
ax.set_title(('ROC Curve for %s') % name)
ax.set_ylabel('True Positive Rate')
ax.set_xlabel('False Positive Rate')
def collect_lc_feature(idlist):
LCfeature=np.zeros([len(idlist),481])
count=0
for i in idlist:
#print i
infile="LTFsmall/"+str(i)+".ltf"
lc=np.loadtxt(infile)[:,1]
LCfeature[count,0]=i
LCfeature[count,1:]=lc
count+=1
return LCfeature
def plot_confusion_matrix(cm, title='Confusion matrix', cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(2)
plt.xticks(tick_marks, ['false positives', 'transits'], rotation=45)
plt.yticks(tick_marks, ['false positives', 'transits'])
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
def fit(model,name,data,cv=True):
trainX,trainY,testX,testY,X,Y=data
model.fit(trainX, trainY)
predY = model.predict(testX)
f1score = metrics.f1_score(testY, predY)
cm = metrics.confusion_matrix(testY, predY)
plot_confusion_matrix(cm)
predY=model.predict_proba(testX)[:,1]
rocscore = metrics.roc_auc_score(testY, predY)
precision, recall, thresholds = metrics.precision_recall_curve(testY, predY)
aucscore=metrics.auc(precision,recall,reorder=True)
print "#####################################"
print "Result using",model
print "f1 score from train test split %f" % f1score
print "roc score from train test split %f" % rocscore
print "auc score from train test split %f" % aucscore
if cv:
#cvscore= cross_val_score(model, X, Y, cv = 5, scoring = 'f1')
cvscore= cross_val_score(model, X, Y, cv = 5, scoring = 'roc_auc')
print "f1 score from CV5 %f" % np.mean(cvscore)
print cm
make_ROC_curve(testY,predY,name)
return
Explanation: MachineLearningWorkShop at UCSC
Aug 18th - Learning with TESS Simulated data
Last month we explored all types of learning algorithms with simulated light curves including:
- different planet sizes
- various periods
- various white/red noise levels
- different baselines
While the result looked promising, we need to extend our experiments to more realistic data.
The data set we are using today is created from SPyFFI, an image simulator created by Zack Berta and his undergrad student Jacobi Kosiarok.
The ingredients included by SPyFFI are:
- catalogs of real stars
- somewhat realistic camera and CCD effects resembling the TESS telescope, such as PRF variation and readout smear
- spacecraft effects such as jitter/focusing change
- somewhat realistic noise budget
- transits and stellar variability (sine curves) drawn from Kepler
SPyFFI outputs image time series like this:
We process the images from 10 days of TESS observations with a standard photometry pipeline, and create light curves for all the stars with TESS magnitude brighter than 14.
For the region of sky we simulated (a 6 by 6 degree field), this results in 16279 stars. To make our tasks today simpler, we are going to work with only ~4000 stars.
End of explanation
#df=pd.read_csv("TESS_simulated_10day_small.csv",index_col=0)
#df=pd.read_csv("TESS_simulateddata_combinedfeatures.csv",index_col=0)
df=pd.read_csv("TESSfield_19h_44d_combinedfeatures.csv")
#LCfeature=pd.DataFrame(collect_lc_feature(df['Ids']),columns=['Ids']+list(np.arange(480)))
#LCfeature.to_csv
#LCfeature=pd.read_csv("TESS_simulated_lc_small.csv",index_col=0)
#plt.plot(LCfeature.iloc[-1,1:],'.')
#plt.plot(LCfeature.iloc[0,:],'.')
Explanation: Let's first look at what the TESS light curves look like:
TESS_simulated_10day_small.csv is the combined feature file.
TESS_simulated_lc_small.csv is the light curve file.
End of explanation
df.columns
Explanation: The combined feature files contain features from Box Least Squares (BLS) measurements, and 20 PCA components from the light curve. Later on hopefully we can explore how to create new features from the light curves.
Let's first examine the columns in the combined feature files:
End of explanation
X=df.drop(['Ids','CatalogY','ManuleY','CombinedY','Catalog_Period','Depth','Catalog_Epoch','SNR'],axis=1)
#print X.isnull().any()
Y=df['CombinedY']
trainX, testX, trainY, testY= train_test_split(X, Y,test_size = 0.2)
data=[trainX,trainY,testX,testY,X,Y]
print X.shape, Y[Y==1].shape
Explanation: Columns Ids, Catalog_Period, Depth, Catalog_Epoch record the information we have regarding the injected transits. Anything with period smaller than 0 is not a transit.
SNR is the signal to noise calculated for the transits using the catalog value:
SNR = \sqrt{N_{transit}} * Depth / 200 mmag
There are three types of Y values included in this feature file:
- CatalogY marks True for all the transit planets with SNR larger than 8.5 and BLS_SignaltoPinkNoise_1_0 larger than 7.
- ManuleY marks True for all the transits identified by eye.
- CombinedY marks True if either CatalogY or ManuleY is True.
The signals identified by manual effort but not by the catalog are due to blending. The signals missed by manual effort are either because of low signal to noise or due to cuts in Ntransit and Q value (standard practice before manual inspection).
Let's drop the irrelevant columns before training:
End of explanation
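# Sanity-check sketch (assumes the catalog columns discussed above exist in df under these
# exact names; kept commented out so it does not affect the run):
#catalog_cut = (df['SNR'] > 8.5) & (df['BLS_SignaltoPinkNoise_1_0'] > 7)
#print (catalog_cut == df['CatalogY']).mean()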
model=RandomForestClassifier(n_estimators=1000,class_weight={0:10,1:1},n_jobs=-1)
name="RFC"
fit(model,name,data,cv=False)
model=GradientBoostingClassifier(n_estimators=1000)
name="GBC"
fit(model,name,data,cv=False)
from xgboost import XGBClassifier
model = XGBClassifier(n_estimators=1000)
#model=XGBClassifier(learning_rate=0.1,
# n_estimators=1000,
# max_depth=5,
# min_child_weight=1,
# gamma=0,
# subsample=0.8,
# colsample_bytree=0.8,
# objective='binary:logistic')
model.fit(trainX,trainY)
#model.plot_importance(bst)
#name="XGBoost"
fit(model,name,data,cv=False)
Explanation: We show the results from some standard algorithms here:
End of explanation
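# The GridSearchCV import above could be used to tune these models; the parameter grid
# below is illustrative only (commented out because the search is slow):
#param_grid = {'n_estimators': [300, 1000], 'max_depth': [None, 10]}
#grid = GridSearchCV(RandomForestClassifier(n_jobs=-1), param_grid, scoring='roc_auc', cv=3)
#grid.fit(trainX, trainY)
#print grid.best_params_, grid.best_score_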
from sklearn.cross_validation import StratifiedKFold
model=RandomForestClassifier(n_estimators=3000,n_jobs=-1,class_weight='balanced_subsample',oob_score=True)
skf=StratifiedKFold(Y,n_folds=4)
i=1
for train_index,test_index in skf:
trainX=X.iloc[train_index];testX=X.iloc[test_index]
trainY=np.array(Y)[train_index];testY=np.array(Y)[test_index]
#print train_index
traincatY=np.array(df['CatalogY'])[train_index];testcatY=np.array(df['CatalogY'])[test_index]
trainmanY=np.array(df['ManuleY'])[train_index];testmanY=np.array(df['ManuleY'])[test_index]
model.fit(trainX,trainY)
predY=model.predict_proba(testX)[:,1]
rocscore = metrics.roc_auc_score(testY, predY)
precision, recall, thresholds = metrics.precision_recall_curve(testY, predY)
aucscore=metrics.auc(precision,recall,reorder=True)
predY=model.predict(testX)
f1score = metrics.f1_score(testY, predY)
print "#####################################"
print "fold %d:" % i
print "f1 score from train test split %f" % f1score
print "roc score from train test split %f" % rocscore
print "auc score from train test split %f" % aucscore
print "oob score from RF %f" % model.oob_score_
flag1=(predY==1)*(predY==np.array(testY))
flag2=(predY==1)*(predY==np.array(testmanY))
print "predict Transit %d" % len(predY[predY==1])
print "real Transit %d" % len(testY[testY==1])
print "real Transit selected by eye %d" % len(testmanY[testmanY==1])
print "predicted Transit that's real %d" % len(predY[flag1])
print "predicted Transits selected by eye %d" % len(predY[flag2])
i+=1
model=GradientBoostingClassifier(n_estimators=3000)
skf=StratifiedKFold(Y,n_folds=4)
i=1
for train_index,test_index in skf:
trainX=X.iloc[train_index];testX=X.iloc[test_index]
trainY=np.array(Y)[train_index];testY=np.array(Y)[test_index]
#print train_index
traincatY=np.array(df['CatalogY'])[train_index];testcatY=np.array(df['CatalogY'])[test_index]
trainmanY=np.array(df['ManuleY'])[train_index];testmanY=np.array(df['ManuleY'])[test_index]
model.fit(trainX,trainY)
predY=model.predict(testX)
f1score = metrics.f1_score(testY, predY)
probY=model.predict_proba(testX)[:,1]  # keep class labels in predY for the flag counts below
rocscore = metrics.roc_auc_score(testY, probY)
print "#####################################"
print "fold %d:" % i
print "f1 score from train test split %f" % f1score
print "roc score from train test split %f" % rocscore
flag1=(predY==1)*(predY==np.array(testY))
flag2=(predY==1)*(predY==np.array(testmanY))
print "predict Transit %d" % len(predY[predY==1])
print "real Transit %d" % len(testY[testY==1])
print "real Transit selected by eye %d" % len(testmanY[testmanY==1])
print "predicted Transit that's real %d" % len(predY[flag1])
print "predicted Transits selected by eye %d" % len(predY[flag2])
i+=1
model=XGBClassifier(n_estimators=3000)
#GradientBoostingClassifier(n_estimators=3000)
skf=StratifiedKFold(Y,n_folds=4)
i=1
for train_index,test_index in skf:
trainX=X.iloc[train_index];testX=X.iloc[test_index]
trainY=np.array(Y)[train_index];testY=np.array(Y)[test_index]
#print train_index
traincatY=np.array(df['CatalogY'])[train_index];testcatY=np.array(df['CatalogY'])[test_index]
trainmanY=np.array(df['ManuleY'])[train_index];testmanY=np.array(df['ManuleY'])[test_index]
model.fit(trainX,trainY)
predY=model.predict(testX)
f1score = metrics.f1_score(testY, predY)
probY=model.predict_proba(testX)[:,1]  # keep class labels in predY for the flag counts below
rocscore = metrics.roc_auc_score(testY, probY)
print "#####################################"
print "fold %d:" % i
print "f1 score from train test split %f" % f1score
print "roc score from train test split %f" % rocscore
flag1=(predY==1)*(predY==np.array(testY))
flag2=(predY==1)*(predY==np.array(testmanY))
print "predict Transit %d" % len(predY[predY==1])
print "real Transit %d" % len(testY[testY==1])
print "real Transit selected by eye %d" % len(testmanY[testmanY==1])
print "predicted Transit that's real %d" % len(predY[flag1])
print "predicted Transits selected by eye %d" % len(predY[flag2])
i+=1
Explanation: We can compare the prediction with the manual selection and the catalog selection as follows:
End of explanation
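# The classification_report imported above gives another summary of agreement with the
# manual labels, using the variables left over from the last fold of the loop above:
print(classification_report(testmanY, predY))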
featurelist=X.columns
rfc= RandomForestClassifier(n_estimators=1000)
rfc.fit(trainX, trainY)
importances = rfc.feature_importances_
std = np.std([tree.feature_importances_ for tree in rfc.estimators_],
axis=0)
indices = np.argsort(importances)[::-1]
# Print the feature ranking
print("Feature ranking:")
threshold=0.02
droplist=[]
for f in range(X.shape[1]):
if importances[indices[f]]<threshold:
droplist.append(featurelist[indices[f]])
print("%d. feature %d (%s %f)" % (f + 1, indices[f], featurelist[indices[f]],importances[indices[f]]))
X_selected=X.drop(droplist,axis=1)
X_selected.head()
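# Note: the fit() call below still uses the full-feature split stored in `data`.
# To evaluate the selected features instead, one could rebuild the split (sketch):
#trainXs, testXs, trainYs, testYs = train_test_split(X_selected, Y, test_size=0.2)
#fit(model, name, [trainXs, trainYs, testXs, testYs, X_selected, Y])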
model=RandomForestClassifier(n_estimators=1000,n_jobs=-1,class_weight='balanced_subsample')
name="RFC"
fit(model,name,data)
Explanation: Feature Selection
End of explanation |
4,478 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Process Discovery
by
Step1: Observe that the process model that we discovered, describes the same behavior as the model that we have shown above.
As indicated, the algorithm used in this example actually discovers a Process Tree.
Such a process tree is, mathematically speaking, a rooted tree, annotated with ‘control-flow’ information.
We’ll first use the following code snippet to discover a process tree based on the running example, and, afterwards shortly analyze the model.
Step2: We'll analyze the process tree model from top to bottom.
The first circle, i.e., the ‘root’ of the process tree, describes a ‘->’ symbol.
This means that, when scrolling further down, the process described by the model executes the ‘children’ of the root from left to right.
Hence, first “register request” is executed, followed by the circle node with the ‘*’ symbol, finally to be followed by the node with the ‘X’ symbol.
The node with the ‘*’ represents ‘repeated behavior’, i.e., the possibility to repeat the behavior.
When scrolling further down, the left-most ‘subtree’ of the ‘*’-operator is always executed, the right-most child (in this case, “reinitiate request”) triggers a repeated execution of the left-most child.
Observe that this is in line with the process models we have seen before, i.e., the “reinitiate request” activity allows us to repeat the behavior regarding examinations and checking the ticket.
When we go further down below in the subtree of the ‘*’-operator, we again observe a ‘->’ node.
Hence, its left-most child is executed first, followed by its right-most child (“decide”).
The left-most child of the ‘->’ node has a ‘+’ symbol.
This represents concurrent behavior; hence, its children can be executed simultaneously or in any order.
Its left-most child is the “check ticket” activity.
Its right-most child is a node with an ‘X’ symbol (just like the right-most child of the tree's root).
This represents an exclusive choice, i.e., one of the children is executed (either “examine casually” or “examine thoroughly”).
Observe that the process tree describes the exact same behavior as the BPMN models shown before.
There are different ways to obtain a Petri net
Step3: Observe that both functions return three arguments, i.e., the Petri net, an initial and a final marking.
Unsurprisingly, the two models are the same (i.e., the pm4py.discover_petri_net_inductive(df) function applies the conversion internally).
However, there are alternative algorithms implemented in pm4py that allow you to obtain a Petri net based on an event log.
These algorithms are
Step4: Note that, by definition, the alpha miner variants cannot discover invisible transitions (black boxes).
Additionally, these algorithms have no form of formal quality guarantees w.r.t. the resulting process models.
As such, we strongly discourage the use of the alpha miners in practice, apart from educational purposes.
Obtaining a Process Map
Many commercial process mining solutions do not provide extended support for discovering process models.
Often, as a main visualization of processes, process maps are used.
A process map contains activities and connections (by means of arcs) between them.
A connection between two activities usually means that there is some form of precedence relation.
In its simplest form, it means that the ‘source’ activity directly precedes the ‘target’ activity.
Let’s quickly take a look at a concrete example!
Consider the following code snippet, in which we learn a ‘Directly Follows Graph’ (DFG)-based process map.
Step5: The pm4py.discover_dfg(log) function returns a triple.
The first result, i.e., called dfg in this example, is a dictionary mapping pairs of activities that follow each other directly, to the number of corresponding observations.
The second and third arguments are the start and end activities observed in the event log (again counters).
In the visualization, the green circle represents the start of any observed process instance.
The orange circle represents the end of an observed process instance.
In 6 cases, the register request is the first activity observed (represented by the arc labeled with value 6).
In the event log, the check ticket activity is executed directly after the register request activity.
The examine thoroughly activity follows registration once; examine casually follows 3 times.
Note that, indeed, in total, the register activity is followed by 6 different events, i.e., there are 6 traces in the running example event log.
However, note that there are typically many more relations observable compared to the number of cases in an event log.
Even using this simple event data, the DFG-based process map of the process is much more complex than the process models learned earlier.
Furthermore, it is much more difficult to infer the actual execution of the process based on the process map.
Hence, when using process maps, one should be very careful when trying to comprehend the actual process.
In PM4Py, we also implemented the Heuristics Miner, a more advanced process map discovery algorithm, compared to its DFG-based alternative.
We won’t go into the algorithmic details here, however, in a HM-based process map, the arcs between activities represent observed concurrency.
For example, the algorithm is able to detect that the ticket check and examination are concurrent.
Hence, these activities will not be connected in the process map.
As such, a HM-based process map is typically simpler compared to a DFG-based process map.
Step6: Advanced Discovery | Python Code:
import pandas as pd
import pm4py
df = pm4py.format_dataframe(pd.read_csv('data/running_example.csv', sep=';'), case_id='case_id',activity_key='activity',
timestamp_key='timestamp')
bpmn_model = pm4py.discover_bpmn_inductive(df)
pm4py.view_bpmn(bpmn_model)
Explanation: Process Discovery
by: Sebastiaan J. van Zelst
Since we have studied basic conceptual knowledge of process mining and event data munging and crunching, we focus on process discovery.
Here, the goal is to discover, primarily in a completely automated and algorithmic way, a process model that accurately describes the process as observed in the event data.
For example, given the running example event data, we aim to discover the process model that we have used to explain the running example's process behavior.
For example, when using the sample event log we have seen before, we aim to discover:
This section briefly explains what modeling formalisms exist in PM4Py while applying different process discovery algorithms.
Secondly, we give an overview of the implemented process discovery algorithms, their output type(s), and how we can invoke them.
Finally, we discuss the challenges of applying process discovery in practice.
Note that, we will not explain the internal workings of the algorithms presented here.
For more information regarding the algorithmic details, consider the Coursera MOOC, the papers/articles/web pages we refer to in the notebook, or, contact us for in-depth algorithmic training :-).
Obtaining a Process Model
There are three different process modeling notations that are currently supported in PM4Py.
These notations are: BPMN, i.e., models such as the ones shown earlier in this tutorial, Process Trees and Petri nets.
A Petri net is a more mathematical modeling representation compared to BPMN.
Often, the behavior of a Petri net is more difficult to comprehend compared to BPMN models.
However, due to their mathematical nature, Petri nets are typically less ambiguous (i.e., confusion about their described behavior is not possible).
Process Trees represent a strict subset of Petri nets and describe process behavior in a hierarchical manner.
In this tutorial, we will focus primarily on BPMN models and process trees.
For more information about Petri nets and their application to (business) process modeling (from a ‘workflow’ perspective), we refer to this article.
Interestingly, none of the algorithms implemented in PM4Py directly discovers a BPMN model.
However, any process tree can easily be translated to a BPMN model.
Since we have already discussed the basic operators of BPMN models, we will start with the discovery of a process tree, which we convert to a BPMN model.
Later, we will study the ‘underlying’ process tree.
The algorithm that we are going to use is the ‘Inductive Miner’;
More details about the (inner workings of the) algorithm can be found in this presentation and in this article.
Consider the following code snippet showing how to obtain a BPMN model from an event log.
End of explanation
process_tree = pm4py.discover_process_tree_inductive(df)
pm4py.view_process_tree(process_tree)
Explanation: Observe that the process model that we discovered, describes the same behavior as the model that we have shown above.
As indicated, the algorithm used in this example actually discovers a Process Tree.
Such a process tree is, mathematically speaking, a rooted tree, annotated with ‘control-flow’ information.
We’ll first use the following code snippet to discover a process tree based on the running example, and, afterwards shortly analyze the model.
End of explanation
net1, im1, fm1 = pm4py.convert_to_petri_net(process_tree)
pm4py.view_petri_net(net1,im1,fm1)
net2, im2, fm2 = pm4py.discover_petri_net_inductive(df)
pm4py.view_petri_net(net2, im2, fm2)
Explanation: We'll analyze the process tree model from top to bottom.
The first circle, i.e., the ‘root’ of the process tree, describes a ‘->’ symbol.
This means that, when scrolling further down, the process described by the model executes the ‘children’ of the root from left to right.
Hence, first “register request” is executed, followed by the circle node with the ‘*’ symbol, finally to be followed by the node with the ‘X’ symbol.
The node with the ‘*’ represents ‘repeated behavior’, i.e., the possibility to repeat the behavior.
When scrolling further down, the left-most ‘subtree’ of the ‘*’-operator is always executed, the right-most child (in this case, “reinitiate request”) triggers a repeated execution of the left-most child.
Observe that this is in line with the process models we have seen before, i.e., the “reinitiate request” activity allows us to repeat the behavior regarding examinations and checking the ticket.
When we go further down below in the subtree of the ‘*’-operator, we again observe a ‘->’ node.
Hence, its left-most child is executed first, followed by its right-most child (“decide”).
The left-most child of the ‘->’ node has a ‘+’ symbol.
This represents concurrent behavior; hence, its children can be executed simultaneously or in any order.
Its left-most child is the “check ticket” activity.
Its right-most child is a node with an ‘X’ symbol (just like the right-most child of the tree's root).
This represents an exclusive choice, i.e., one of the children is executed (either “examine casually” or “examine thoroughly”).
Observe that the process tree describes the exact same behavior as the BPMN models shown before.
There are different ways to obtain a Petri net:
- Let the algorithm directly return a Petri net.
- Convert the obtained process tree to a Petri net (recall that process trees are a strict sub-class of Petri nets).
For example:
End of explanation
net3, im3, fm3 = pm4py.discover_petri_net_alpha(df)
pm4py.view_petri_net(net3, im3, fm3)
net4, im4, fm4 = pm4py.discover_petri_net_alpha_plus(df)
pm4py.view_petri_net(net4, im4, fm4)
Explanation: Observe that both functions return three arguments, i.e., the Petri net, an initial and a final marking.
Unsurprisingly, the two models are the same (i.e., the pm4py.discover_petri_net_inductive(df) function applies the conversion internally).
However, there are alternative algorithms implemented in pm4py that allow you to obtain a Petri net based on an event log.
These algorithms are:
* The alpha miner; One of the first process discovery algorithms
* The alpha+ miner; Extension of the alpha miner that handles length-one-loops and short loops.
Invocation of the aforementioned algorithms is straightforward:
End of explanation
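# Quick inspection of the returned objects; the PetriNet class is assumed (as in pm4py 2.x)
# to expose its places and transitions as collections:
print(len(net2.places), len(net2.transitions))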
dfg, start_activities, end_activities = pm4py.discover_dfg(df)
pm4py.view_dfg(dfg, start_activities, end_activities)
Explanation: Note that, by definition, the alpha miner variants cannot discover invisible transitions (black boxes).
Additionally, these algorithms have no form of formal quality guarantees w.r.t. the resulting process models.
As such, we strongly discourage the use of the alpha miners in practice, apart from educational purposes.
Obtaining a Process Map
Many commercial process mining solutions do not provide extended support for discovering process models.
Often, as a main visualization of processes, process maps are used.
A process map contains activities and connections (by means of arcs) between them.
A connection between two activities usually means that there is some form of precedence relation.
In its simplest form, it means that the ‘source’ activity directly precedes the ‘target’ activity.
Let’s quickly take a look at a concrete example!
Consider the following code snippet, in which we learn a ‘Directly Follows Graph’ (DFG)-based process map.
End of explanation
map = pm4py.discover_heuristics_net(df)
pm4py.view_heuristics_net(map)
Explanation: The pm4py.discover_dfg(log) function returns a triple.
The first result, i.e., called dfg in this example, is a dictionary mapping pairs of activities that follow each other directly, to the number of corresponding observations.
The second and third arguments are the start and end activities observed in the event log (again counters).
In the visualization, the green circle represents the start of any observed process instance.
The orange circle represents the end of an observed process instance.
In 6 cases, the register request is the first activity observed (represented by the arc labeled with value 6).
In the event log, the check ticket activity is executed directly after the register request activity.
The examine thoroughly activity follows registration once; examine casually follows 3 times.
Note that, indeed, in total, the register activity is followed by 6 different events, i.e., there are 6 traces in the running example event log.
However, note that there are typically many more relations observable compared to the number of cases in an event log.
Even using this simple event data, the DFG-based process map of the process is much more complex than the process models learned earlier.
Furthermore, it is much more difficult to infer the actual execution of the process based on the process map.
Hence, when using process maps, one should be very careful when trying to comprehend the actual process.
In PM4Py, we also implemented the Heuristics Miner, a more advanced process map discovery algorithm, compared to its DFG-based alternative.
We won’t go into the algorithmic details here, however, in a HM-based process map, the arcs between activities represent observed concurrency.
For example, the algorithm is able to detect that the ticket check and examination are concurrent.
Hence, these activities will not be connected in the process map.
As such, a HM-based process map is typically simpler compared to a DFG-based process map.
End of explanation
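# The dfg returned above is a plain dictionary, so the most frequent directly-follows
# relations can be listed directly; discover_heuristics_net also accepts tuning thresholds
# (the dependency_threshold keyword is assumed from pm4py 2.x):
for pair, frequency in sorted(dfg.items(), key=lambda kv: kv[1], reverse=True)[:5]:
    print(pair, frequency)
#map_sparse = pm4py.discover_heuristics_net(df, dependency_threshold=0.9)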
df_broken = pd.read_csv('data/running_example_broken.csv', sep=';')
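# If the broken CSV has the same raw columns as running_example.csv, it likely needs the
# same pm4py.format_dataframe(...) call as above before discovery.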
bpmn_unfiltered = pm4py.discover_bpmn_inductive(df_broken)
pm4py.view_bpmn(bpmn_unfiltered)
bpmn_filtered = pm4py.discover_bpmn_inductive(df_broken, 0.8)
pm4py.view_bpmn(bpmn_filtered)
Explanation: Advanced Discovery: Handling Noise
In the previous tutorial, we have already seen some generic ways of data filtering.
However, most of the functionalities presented there are useful for preprocessing the event data.
After preprocessing, it often happens that event data still contains various 'problematic cases' that were hard to filter out.
We won't go into too much detail here, however, various causes of outliers and noise in cleaned event data exist.
Some algorithms have built-in filtering mechanisms, allowing you to filter the cleaned event data internally.
Consider the following example, in which we use a 'broken' variant of the running example data (some events are missing), with and without filtering.
End of explanation |
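# The same noise filtering is exposed by the other inductive-miner entry points
# (the noise_threshold keyword is assumed from pm4py 2.x):
#tree_filtered = pm4py.discover_process_tree_inductive(df_broken, noise_threshold=0.8)
#net_f, im_f, fm_f = pm4py.discover_petri_net_inductive(df_broken, noise_threshold=0.8)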
4,479 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'awi', 'sandbox-3', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: AWI
Source ID: SANDBOX-3
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:38
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
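# Illustrative example only (the actual entries must reflect the documented model), e.g.:
# DOC.set_value("Sea ice temperature")
# DOC.set_value("Sea ice concentration")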
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
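# Illustrative example only, using one of the valid choices listed above:
# DOC.set_value("TEOS-10")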
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontal discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD, but fluxes are computed according to an assumed distribution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities, one for thermodynamic calculations and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
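Because this property has cardinality 1.N (note the plural "PROPERTY VALUE(S)" header), more than one impact can be recorded. Assuming that repeated DOC.set_value calls each record an additional value (an assumption based on that header, not on documented behaviour), a hypothetical completed cell could look like this:
# Hypothetical example only: record every impact that applies to your model
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
DOC.set_value("Albedo")
DOC.set_value("Freshwater")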
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
4,480 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data and Data Visualization
Machine learning, and therefore a large part of AI, is based on statistical analysis of data. In this notebook, you'll examine some fundamental concepts related to data and data visualization.
Introduction to Data
Statistics are based on data, which consist of a collection of pieces of information about things you want to study. This information can take the form of descriptions, quantities, measurements, and other observations. Typically, we work with related data items in a dataset, which often consists of a collection of observations or cases. Most commonly, we thing about this dataset as a table that consists of a row for each observation, and a column for each individual data point related to that observation - we variously call these data points attributes or features, and they each describe a specific characteristic of the thing we're observing.
Let's take a look at a real example. In 1886, Francis Galton conducted a study into the relationship between heights of parents and their (adult) children. Run the Python code below to view the data he collected (you can safely ignore a deprecation warning if it is displayed)
Step1: Types of Data
Now, let's take a closer look at this data (you can click the left margin next to the dataset to toggle between full height and a scrollable pane). There are 933 observations, each one recording information pertaining to an individual child. The information recorded consists of the following features
Step2: From this chart, you can see that there are slightly more male children than female children; but the data is reasonably evenly split between the two genders.
Bar charts are typically used to compare categorical (qualitative) data values; but in some cases you might treat a discrete quantitative data value as a category. For example, in the Galton dataset the number of children in each family could be used as a way to categorize families. We might want to see how many familes have one child, compared to how many have two children, etc.
Here's some Python code to create a bar chart showing family counts based on the number of children in the family.
Step3: Note that the code sorts the data so that the categories on the x axis are in order - attention to this sort of detail can make your charts easier to read. In this case, we can see that the most common number of children per family is 1, followed by 5 and 6. Comparatively fewer families have more than 8 children.
Histograms
Bar charts work well for comparing categorical or discrete numeric values. When you need to compare continuous quantitative values, you can use a similar style of chart called a histogram. Histograms differ from bar charts in that they group the continuous values into ranges or bins - so the chart doesn't show a bar for each individual value, but rather a bar for each range of binned values. Because these bins represent continuous data rather than discrete data, the bars aren't separated by a gap. Typically, a histogram is used to show the relative frequency of values in the dataset.
Here's some Python code to create a histogram of the father values in the Galton dataset, which record the father's height
Step4: The histogram shows that the most frequently occuring heights tend to be in the mid-range. There are fewer extremely short or exteremely tall fathers.
In the histogram above, the number of bins (and their corresponding ranges, or bin widths) was determined automatically by Python. In some cases you may want to explicitly control the number of bins, as this can help you see detail in the distribution of data values that otherwise you might miss. The following code creates a histogram for the same father's height values, but explicitly distributes them over 20 bins (19 are specified, and Python adds one)
Step5: We can still see that the most common heights are in the middle, but there's a notable drop in the number of fathers with a height between 67.5 and 70.
Pie Charts
Pie charts are another way to compare relative quantities of categories. They're not commonly used by data scientists, but they can be useful in many business contexts with manageable numbers of categories because they not only make it easy to compare relative quantities by categories; they also show those quantities as a proportion of the whole set of data.
Here's some Python to show the gender counts as a pie chart
Step6: Note that the chart includes a legend to make it clear what category each colored area in the pie chart represents. From this chart, you can see that males make up slightly more than half of the overall number of children; with females accounting for the rest.
Scatter Plots
Often you'll want to compare quantative values. This can be especially useful in data science scenarios where you are exploring data prior to building a machine learning model, as it can help identify apparent relationships between numeric features. Scatter plots can also help identify potential outliers - values that are significantly outside of the normal range of values.
The following Python code creates a scatter plot that plots the intersection points for midparentHeight on the x axis, and childHeight on the y axis
Step7: In a scatter plot, each dot marks the intersection point of the two values being plotted. In this chart, most of the heights are clustered around the center; which indicates that most parents and children tend to have a height that is somewhere in the middle of the range of heights observed. At the bottom left, there's a small cluster of dots that show some parents from the shorter end of the range who have children that are also shorter than their peers. At the top right, there are a few extremely tall parents who have extremely tall children. It's also interesting to note that the top left and bottom right of the chart are empty - there aren't any cases of extremely short parents with extremely tall children or vice-versa.
Line Charts
Line charts are a great way to see changes in values along a series - usually (but not always) based on a time period. The Galton dataset doesn't include any data of this type, so we'll use a different dataset that includes observations of sea surface temperature between 1950 and 2010 for this example | Python Code:
import statsmodels.api as sm
df = sm.datasets.get_rdataset('GaltonFamilies', package='HistData').data
df
Explanation: Data and Data Visualization
Machine learning, and therefore a large part of AI, is based on statistical analysis of data. In this notebook, you'll examine some fundamental concepts related to data and data visualization.
Introduction to Data
Statistics are based on data, which consist of a collection of pieces of information about things you want to study. This information can take the form of descriptions, quantities, measurements, and other observations. Typically, we work with related data items in a dataset, which often consists of a collection of observations or cases. Most commonly, we think about this dataset as a table that consists of a row for each observation, and a column for each individual data point related to that observation - we variously call these data points attributes or features, and they each describe a specific characteristic of the thing we're observing.
Let's take a look at a real example. In 1886, Francis Galton conducted a study into the relationship between heights of parents and their (adult) children. Run the Python code below to view the data he collected (you can safely ignore a deprecation warning if it is displayed):
End of explanation
import statsmodels.api as sm
df = sm.datasets.get_rdataset('GaltonFamilies', package='HistData').data
# Create a data frame of gender counts
genderCounts = df['gender'].value_counts()
# Plot a bar chart
%matplotlib inline
from matplotlib import pyplot as plt
genderCounts.plot(kind='bar', title='Gender Counts')
plt.xlabel('Gender')
plt.ylabel('Number of Children')
plt.show()
Explanation: Types of Data
Now, let's take a closer look at this data (you can click the left margin next to the dataset to toggle between full height and a scrollable pane). There are 933 observations, each one recording information pertaining to an individual child. The information recorded consists of the following features:
- family: An identifier for the family to which the child belongs.
- father: The height of the father.
- mother: The height of the mother.
- midparentHeight: The mid-point between the father and mother's heights (calculated as (father + 1.08 x mother) ÷ 2)
- children: The total number of children in the family.
- childNum: The number of the child to whom this observation pertains (Galton numbered the children in descending order of height, with male children listed before female children)
- gender: The gender of the child to whom this observation pertains.
- childHeight: The height of the child to whom this observation pertains.
It's worth noting that there are several distinct types of data recorded here. To begin with, there are some features that represent qualities, or characteristics of the child - for example, gender. Other features represent a quantity or measurement, such as the child's height. So broadly speaking, we can divide data into qualitative and quantitative data.
Qualitative Data
Let's take a look at qualitative data first. This type of data is categorical - it is used to categorize or identify the entity being observed. Sometimes you'll see features of this type described as factors.
Nominal Data
In his observations of children's height, Galton assigned an identifier to each family and he recorded the gender of each child. Note that even though the family identifier is a number, it is not a measurement or quantity. Family 002 is not "greater" than family 001, just as a gender value of "male" does not indicate a larger or smaller value than "female". These are simply named values for some characteristic of the child, and as such they're known as nominal data.
Ordinal Data
So what about the childNum feature? It's not a measurement or quantity - it's just a way to identify individual children within a family. However, the number assigned to each child has some additional meaning - the numbers are ordered. You can find similar data that is text-based; for example, data about training courses might include a "level" attribute that indicates the level of the course as "basic", "intermediate", or "advanced". This type of data, where the value is not itself a quantity or measurement, but it indicates some sort of inherent order or hierarchy, is known as ordinal data.
Quantitative Data
Now let's turn our attention to the features that indicate some kind of quantity or measurement.
Discrete Data
Galton's observations include the number of children in each family. This is a discrete quantitative data value - it's something we count rather than measure. You can't, for example, have 2.33 children!
Continuous Data
The data set also includes height values for father, mother, midparentHeight, and childHeight. These are measurements along a scale, and as such they're described as continuous quantitative data values that we measure rather than count.
Sample vs Population
Galton's dataset includes 933 observations. It's safe to assume that this does not account for every person in the world, or even just the UK, in 1886 when the data was collected. In other words, Galton's data represents a sample of a larger population. It's worth pausing to think about this for a few seconds, because there are some implications for any conclusions we might draw from Galton's observations.
Think about how many times you see a claim such as "one in four Americans enjoys watching football". How do the people who make this claim know that this is a fact? Have they asked everyone in the US about their football-watching habits? Well, that would be a bit impractical, so what usually happens is that a study is conducted on a subset of the population, and (assuming that this is a well-conducted study), that subset will be a representative sample of the population as a whole. If the survey was conducted at the stadium where the Super Bowl is being played, then the results are likely to be skewed because of a bias in the study participants.
Similarly, we might look at Galton's data and assume that the heights of the people included in the study bears some relation to the heights of the general population in 1886; but if Galton specifically selected abnormally tall people for his study, then this assumption would be unfounded.
When we deal with statistics, we usually work with a sample of the data rather than a full population. As you'll see later, this affects the way we use notation to indicate statistical measures; and in some cases we calculate statistics from a sample differently than from a full population to account for bias in the sample.
Visualizing Data
Data visualization is one of the key ways in which we can examine data and get insights from it. If a picture is worth a thousand words, then a good graph or chart is worth any number of tables of data.
Let's examine some common kinds of data visualization:
Bar Charts
A bar chart is a good way to compare numeric quantities or counts across categories. For example, in the Galton dataset, you might want to compare the number of female and male children.
Here's some Python code to create a bar chart showing the number of children of each gender.
End of explanation
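The feature list above gives an explicit formula for midparentHeight. As an optional sanity check that is not part of the original walkthrough, you can recompute it from the father and mother columns and also ask pandas which columns it treats as numeric versus text:
import statsmodels.api as sm
df = sm.datasets.get_rdataset('GaltonFamilies', package='HistData').data
# Recompute the mid-parent height from the stated formula and compare with the stored column
recomputed = (df['father'] + 1.08 * df['mother']) / 2
print('Max absolute difference:', (recomputed - df['midparentHeight']).abs().max())
# Numeric (quantitative) columns vs object/categorical (qualitative) columns
print(df.dtypes)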
import statsmodels.api as sm
df = sm.datasets.get_rdataset('GaltonFamilies', package='HistData').data
# Create a data frame of child counts
# there's a row for each child, so we need to filter to one row per family to avoid over-counting
families = df[['family', 'children']].drop_duplicates()
# Now count number of rows for each 'children' value, and sort by the index (children)
childCounts = families['children'].value_counts().sort_index()
# Plot a bar chart
%matplotlib inline
from matplotlib import pyplot as plt
childCounts.plot(kind='bar', title='Family Size')
plt.xlabel('Number of Children')
plt.ylabel('Families')
plt.show()
Explanation: From this chart, you can see that there are slightly more male children than female children; but the data is reasonably evenly split between the two genders.
Bar charts are typically used to compare categorical (qualitative) data values; but in some cases you might treat a discrete quantitative data value as a category. For example, in the Galton dataset the number of children in each family could be used as a way to categorize families. We might want to see how many families have one child, compared to how many have two children, etc.
Here's some Python code to create a bar chart showing family counts based on the number of children in the family.
End of explanation
import statsmodels.api as sm
df = sm.datasets.get_rdataset('GaltonFamilies', package='HistData').data
# Plot a histogram of father heights
%matplotlib inline
from matplotlib import pyplot as plt
df['father'].plot.hist(title='Father Heights')
plt.xlabel('Height')
plt.ylabel('Frequency')
plt.show()
Explanation: Note that the code sorts the data so that the categories on the x axis are in order - attention to this sort of detail can make your charts easier to read. In this case, we can see that the most common number of children per family is 1, followed by 5 and 6. Comparatively fewer families have more than 8 children.
Histograms
Bar charts work well for comparing categorical or discrete numeric values. When you need to compare continuous quantitative values, you can use a similar style of chart called a histogram. Histograms differ from bar charts in that they group the continuous values into ranges or bins - so the chart doesn't show a bar for each individual value, but rather a bar for each range of binned values. Because these bins represent continuous data rather than discrete data, the bars aren't separated by a gap. Typically, a histogram is used to show the relative frequency of values in the dataset.
Here's some Python code to create a histogram of the father values in the Galton dataset, which record the father's height:
End of explanation
import statsmodels.api as sm
df = sm.datasets.get_rdataset('GaltonFamilies', package='HistData').data
# Plot a histogram of father heights
%matplotlib inline
from matplotlib import pyplot as plt
df['father'].plot.hist(title='Father Heights', bins=19)
plt.xlabel('Height')
plt.ylabel('Frequency')
plt.show()
Explanation: The histogram shows that the most frequently occurring heights tend to be in the mid-range. There are fewer extremely short or extremely tall fathers.
In the histogram above, the number of bins (and their corresponding ranges, or bin widths) was determined automatically by Python. In some cases you may want to explicitly control the number of bins, as this can help you see detail in the distribution of data values that otherwise you might miss. The following code creates a histogram for the same father's height values, but explicitly distributes them over 20 bins (19 are specified, and Python adds one):
End of explanation
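Beyond simply choosing the number of bins, the plotting functions also accept an explicit sequence of bin edges. The snippet below is a small optional variation on the histogram above (the one-inch bin edges are an arbitrary illustrative choice, not part of the original notebook):
import statsmodels.api as sm
import numpy as np
df = sm.datasets.get_rdataset('GaltonFamilies', package='HistData').data
%matplotlib inline
from matplotlib import pyplot as plt
# Use explicit one-inch-wide bins instead of a bin count
df['father'].plot.hist(title='Father Heights', bins=np.arange(60, 81, 1))
plt.xlabel('Height')
plt.ylabel('Frequency')
plt.show()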
import statsmodels.api as sm
df = sm.datasets.get_rdataset('GaltonFamilies', package='HistData').data
# Create a data frame of gender counts
genderCounts = df['gender'].value_counts()
# Plot a pie chart
%matplotlib inline
from matplotlib import pyplot as plt
genderCounts.plot(kind='pie', title='Gender Counts', figsize=(6,6))
plt.legend()
plt.show()
Explanation: We can still see that the most common heights are in the middle, but there's a notable drop in the number of fathers with a height between 67.5 and 70.
Pie Charts
Pie charts are another way to compare relative quantities of categories. They're not commonly used by data scientists, but they can be useful in many business contexts with manageable numbers of categories because they not only make it easy to compare relative quantities by categories; they also show those quantities as a proportion of the whole set of data.
Here's some Python to show the gender counts as a pie chart:
End of explanation
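Because a pie chart is really displaying proportions, it can also be helpful to print the underlying percentages; this is an optional aside that is not part of the original notebook:
import statsmodels.api as sm
df = sm.datasets.get_rdataset('GaltonFamilies', package='HistData').data
genderCounts = df['gender'].value_counts()
# Express each gender count as a percentage of all children
print(genderCounts / genderCounts.sum() * 100)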
import statsmodels.api as sm
df = sm.datasets.get_rdataset('GaltonFamilies', package='HistData').data
# Create a data frame of heights (midparent vs child)
parentHeights = df[['midparentHeight', 'childHeight']]
# Plot a scatter plot chart
%matplotlib inline
from matplotlib import pyplot as plt
parentHeights.plot(kind='scatter', title='Parent vs Child Heights', x='midparentHeight', y='childHeight')
plt.xlabel('Avg Parent Height')
plt.ylabel('Child Height')
plt.show()
Explanation: Note that the chart includes a legend to make it clear what category each colored area in the pie chart represents. From this chart, you can see that males make up slightly more than half of the overall number of children; with females accounting for the rest.
Scatter Plots
Often you'll want to compare quantitative values. This can be especially useful in data science scenarios where you are exploring data prior to building a machine learning model, as it can help identify apparent relationships between numeric features. Scatter plots can also help identify potential outliers - values that are significantly outside of the normal range of values.
The following Python code creates a scatter plot that plots the intersection points for midparentHeight on the x axis, and childHeight on the y axis:
End of explanation
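Since the scatter plot is being used to look for an apparent relationship between the two height variables, a quick numeric companion check (not in the original notebook) is the Pearson correlation coefficient:
import statsmodels.api as sm
df = sm.datasets.get_rdataset('GaltonFamilies', package='HistData').data
# Correlation between mid-parent height and child height
print(df['midparentHeight'].corr(df['childHeight']))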
import statsmodels.api as sm
df = sm.datasets.elnino.load_pandas().data
# Average the monthly temperature columns, excluding YEAR so it doesn't skew the mean
df['AVGSEATEMP'] = df.drop('YEAR', axis=1).mean(axis=1)
# Plot a line chart
%matplotlib inline
from matplotlib import pyplot as plt
df.plot(title='Average Sea Temperature', x='YEAR', y='AVGSEATEMP')
plt.xlabel('Year')
plt.ylabel('Average Sea Temp')
plt.show()
Explanation: In a scatter plot, each dot marks the intersection point of the two values being plotted. In this chart, most of the heights are clustered around the center; which indicates that most parents and children tend to have a height that is somewhere in the middle of the range of heights observed. At the bottom left, there's a small cluster of dots that show some parents from the shorter end of the range who have children that are also shorter than their peers. At the top right, there are a few extremely tall parents who have extremely tall children. It's also interesting to note that the top left and bottom right of the chart are empty - there aren't any cases of extremely short parents with extremely tall children or vice-versa.
Line Charts
Line charts are a great way to see changes in values along a series - usually (but not always) based on a time period. The Galton dataset doesn't include any data of this type, so we'll use a different dataset that includes observations of sea surface temperature between 1950 and 2010 for this example:
End of explanation |
4,481 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center">Registration Settings
Step1: Read the RIRE data and generate a larger point set as a reference
Step2: Initial Alignment
We use the CenteredTransformInitializer. Should we use the GEOMETRY based version or the MOMENTS based one?
Step3: Registration
Possible choices for simple rigid multi-modality registration framework (<b>300</b> component combinations, in addition to parameter settings for each of the components)
Step4: In some cases visual comparison of the registration errors using the same scale is not informative, as seen above [all points are grey/black]. We therefor set the color scale to the min-max error range found in the current data and not the range from the previous stage.
Step5: Now using the built in multi-resolution framework
Perform registration using the same settings as above, but take advantage of the multi-resolution framework which provides a significant speedup with minimal effort (3 lines of code).
It should be noted that when using this framework the similarity metric value will not necessarily decrease between resolutions, we are only ensured that it decreases per resolution. This is not an issue, as we are actually observing the values of a different function at each resolution.
The example below shows that registration is improving even though the similarity value increases when changing resolution levels.
Step6: Sufficient accuracy <u>inside</u> the ROI
Up to this point our accuracy evaluation has ignored the content of the image and is likely overly conservative. We have been looking at the registration errors inside the volume, but not necesserily in the smaller ROI.
To see the difference you will have to <b>comment out the timeit magic in the code above</b>, run it again, and then run the following cell. | Python Code:
import SimpleITK as sitk
# Utility method that either downloads data from the network or
# if already downloaded returns the file name for reading from disk (cached data).
from downloaddata import fetch_data as fdata
# Always write output to a separate directory, we don't want to pollute the source directory.
OUTPUT_DIR = 'Output'
import registration_callbacks as rc
import registration_utilities as ru
%matplotlib inline
Explanation: <h1 align="center">Registration Settings: Choices, Choices, Choices</h1>
The performance of most registration algorithms is dependent on a large number of parameter settings. For optimal performance you will need to customize your settings, turning all the knobs to their "optimal" position:<br>
<img src="knobs.jpg" style="width:700px"/>
<font size="1"> [This image was originally posted to Flickr and downloaded from wikimedia commons https://commons.wikimedia.org/wiki/File:TASCAM_M-520_knobs.jpg]</font>
This notebook illustrates the use of reference data (a.k.a "gold" standard) to empirically tune a registration framework for specific usage. This is dependent on the characteristics of your images (anatomy, modality, image's physical spacing...) and on the clinical needs.
Also keep in mind that the definition of optimal settings does not necessarily correspond to those that provide the most accurate results.
The optimal settings are task specific and should provide:
<ul>
<li>Sufficient accuracy in the Region Of Interest (ROI).</li>
<li>Complete the computation in the allotted time.</li>
</ul>
We will be using the training data from the Retrospective Image Registration Evaluation (<a href="http://www.insight-journal.org/rire/">RIRE</a>) project.
End of explanation
fixed_image = sitk.ReadImage(fdata("training_001_ct.mha"), sitk.sitkFloat32)
moving_image = sitk.ReadImage(fdata("training_001_mr_T1.mha"), sitk.sitkFloat32)
fixed_fiducial_points, moving_fiducial_points = ru.load_RIRE_ground_truth(fdata("ct_T1.standard"))
# Estimate the reference_transform defined by the RIRE fiducials and check that the FRE makes sense (low)
R, t = ru.absolute_orientation_m(fixed_fiducial_points, moving_fiducial_points)
reference_transform = sitk.Euler3DTransform()
reference_transform.SetMatrix(R.flatten())
reference_transform.SetTranslation(t)
reference_errors_mean, reference_errors_std, _, reference_errors_max,_ = ru.registration_errors(reference_transform, fixed_fiducial_points, moving_fiducial_points)
print('Reference data errors (FRE) in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(reference_errors_mean, reference_errors_std, reference_errors_max))
# Generate a reference dataset from the reference transformation
# (corresponding points in the fixed and moving images).
fixed_points = ru.generate_random_pointset(image=fixed_image, num_points=100)
moving_points = [reference_transform.TransformPoint(p) for p in fixed_points]
# Compute the TRE prior to registration.
pre_errors_mean, pre_errors_std, pre_errors_min, pre_errors_max, _ = ru.registration_errors(sitk.Euler3DTransform(), fixed_points, moving_points, display_errors = True)
print('Before registration, errors (TRE) in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(pre_errors_mean, pre_errors_std, pre_errors_max))
Explanation: Read the RIRE data and generate a larger point set as a reference
End of explanation
initial_transform = sitk.CenteredTransformInitializer(sitk.Cast(fixed_image,moving_image.GetPixelIDValue()),
moving_image,
sitk.Euler3DTransform(),
sitk.CenteredTransformInitializerFilter.GEOMETRY)
initial_errors_mean, initial_errors_std, initial_errors_min, initial_errors_max, _ = ru.registration_errors(initial_transform, fixed_points, moving_points, min_err=pre_errors_min, max_err=pre_errors_max, display_errors=True)
print('After initialization, errors (TRE) in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(initial_errors_mean, initial_errors_std, initial_errors_max))
Explanation: Initial Alignment
We use the CenteredTransformInitializer. Should we use the GEOMETRY based version or the MOMENTS based one?
End of explanation
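The cell above uses the GEOMETRY variant of the initializer. To compare it against the MOMENTS based variant mentioned in the question, only the last argument changes; the sketch below assumes the same images and reference points as the surrounding cells and is not part of the original notebook:
# Sketch: moments-based initialization for comparison, evaluated the same way as above
moments_transform = sitk.CenteredTransformInitializer(sitk.Cast(fixed_image, moving_image.GetPixelIDValue()),
                                                      moving_image,
                                                      sitk.Euler3DTransform(),
                                                      sitk.CenteredTransformInitializerFilter.MOMENTS)
ru.registration_errors(moments_transform, fixed_points, moving_points, display_errors=True)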
#%%timeit -r1 -n1
# to time this cell uncomment the line above
#the arguments to the timeit magic specify that this cell should only be run once. running it multiple
#times to get performance statistics is also possible, but takes time. if you want to analyze the accuracy
#results from multiple runs you will have to modify the code to save them instead of just printing them out.
registration_method = sitk.ImageRegistrationMethod()
registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
registration_method.SetMetricSamplingPercentage(0.01)
registration_method.SetInterpolator(sitk.sitkNearestNeighbor) #2. Replace with sitkLinear
registration_method.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100) #1. Increase to 1000
registration_method.SetOptimizerScalesFromPhysicalShift()
# Don't optimize in-place, we would like to run this cell multiple times
registration_method.SetInitialTransform(initial_transform, inPlace=False)
# Add callbacks which will display the similarity measure value and the reference data during the registration process
registration_method.AddCommand(sitk.sitkStartEvent, rc.metric_and_reference_start_plot)
registration_method.AddCommand(sitk.sitkEndEvent, rc.metric_and_reference_end_plot)
registration_method.AddCommand(sitk.sitkIterationEvent, lambda: rc.metric_and_reference_plot_values(registration_method, fixed_points, moving_points))
final_transform_single_scale = registration_method.Execute(sitk.Cast(fixed_image, sitk.sitkFloat32),
sitk.Cast(moving_image, sitk.sitkFloat32))
print('Final metric value: {0}'.format(registration_method.GetMetricValue()))
print('Optimizer\'s stopping condition, {0}'.format(registration_method.GetOptimizerStopConditionDescription()))
final_errors_mean, final_errors_std, _, final_errors_max,_ = ru.registration_errors(final_transform_single_scale, fixed_points, moving_points, min_err=initial_errors_min, max_err=initial_errors_max, display_errors=True)
print('After registration, errors in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(final_errors_mean, final_errors_std, final_errors_max))
Explanation: Registration
Possible choices for simple rigid multi-modality registration framework (<b>300</b> component combinations, in addition to parameter settings for each of the components):
<ul>
<li>Similarity metric, 2 options (Mattes MI, JointHistogram MI):
<ul>
<li>Number of histogram bins.</li>
<li>Sampling strategy, 3 options (NONE, REGULAR, RANDOM)</li>
<li>Sampling percentage.</li>
</ul>
</li>
<li>Interpolator, 10 options (sitkNearestNeighbor, sitkLinear, sitkGaussian, sitkBSpline,...)</li>
<li>Optimizer, 5 options (GradientDescent, GradientDescentLineSearch, RegularStepGradientDescent...):
<ul>
<li>Number of iterations.</li>
<li>learning rate (step size along parameter space traversal direction).</li>
</ul>
</li>
</ul>
In this example we will plot the similarity metric's value and more importantly the TREs for our reference data. A good choice for the former should be reflected by the later. That is, the TREs should go down as the similarity measure value goes down (not necessarily at the same rates).
Finally, we are also interested in timing our registration. Ipython allows us to do this with minimal effort using the <a href="http://ipython.org/ipython-doc/stable/interactive/magics.html?highlight=timeit#magic-timeit">timeit</a> cell magic (Ipython has a set of predefined functions that use a command line syntax, and are referred to as magic functions).
End of explanation
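Exploring the component combinations listed above only requires swapping the corresponding setter calls. The fragment below is one hypothetical alternative configuration, shown purely as a sketch of where each choice plugs in; it is not the configuration used in this notebook:
# Sketch of an alternative metric / sampling / interpolator / optimizer combination
registration_method = sitk.ImageRegistrationMethod()
registration_method.SetMetricAsJointHistogramMutualInformation(numberOfHistogramBins=50)
registration_method.SetMetricSamplingStrategy(registration_method.REGULAR)
registration_method.SetMetricSamplingPercentage(0.01)
registration_method.SetInterpolator(sitk.sitkBSpline)
registration_method.SetOptimizerAsRegularStepGradientDescent(learningRate=1.0, minStep=1e-4, numberOfIterations=100)
registration_method.SetOptimizerScalesFromPhysicalShift()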
final_errors_mean, final_errors_std, _, final_errors_max,_ = ru.registration_errors(final_transform_single_scale, fixed_points, moving_points, display_errors=True)
Explanation: In some cases visual comparison of the registration errors using the same scale is not informative, as seen above [all points are grey/black]. We therefore set the color scale to the min-max error range found in the current data and not the range from the previous stage.
End of explanation
%%timeit -r1 -n1
#the arguments to the timeit magic specify that this cell should only be run once. running it multiple
#times to get performance statistics is also possible, but takes time. if you want to analyze the accuracy
#results from multiple runs you will have to modify the code to save them instead of just printing them out.
registration_method = sitk.ImageRegistrationMethod()
registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
registration_method.SetMetricSamplingPercentage(0.1)
registration_method.SetInterpolator(sitk.sitkLinear) #2. Replace with sitkLinear
registration_method.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100)
registration_method.SetOptimizerScalesFromPhysicalShift()
# Don't optimize in-place, we would like to run this cell multiple times
registration_method.SetInitialTransform(initial_transform, inPlace=False)
# Add callbacks which will display the similarity measure value and the reference data during the registration process
registration_method.AddCommand(sitk.sitkStartEvent, rc.metric_and_reference_start_plot)
registration_method.AddCommand(sitk.sitkEndEvent, rc.metric_and_reference_end_plot)
registration_method.AddCommand(sitk.sitkIterationEvent, lambda: rc.metric_and_reference_plot_values(registration_method, fixed_points, moving_points))
registration_method.SetShrinkFactorsPerLevel(shrinkFactors = [4,2,1])
registration_method.SetSmoothingSigmasPerLevel(smoothingSigmas=[2,1,0])
registration_method.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn()
final_transform = registration_method.Execute(sitk.Cast(fixed_image, sitk.sitkFloat32),
sitk.Cast(moving_image, sitk.sitkFloat32))
print('Final metric value: {0}'.format(registration_method.GetMetricValue()))
print('Optimizer\'s stopping condition, {0}'.format(registration_method.GetOptimizerStopConditionDescription()))
final_errors_mean, final_errors_std, _, final_errors_max,_ = ru.registration_errors(final_transform, fixed_points, moving_points, True)
print('After registration, errors in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(final_errors_mean, final_errors_std, final_errors_max))
Explanation: Now using the built-in multi-resolution framework
Perform registration using the same settings as above, but take advantage of the multi-resolution framework which provides a significant speedup with minimal effort (3 lines of code).
It should be noted that when using this framework the similarity metric value will not necessarily decrease between resolutions, we are only ensured that it decreases per resolution. This is not an issue, as we are actually observing the values of a different function at each resolution.
The example below shows that registration is improving even though the similarity value increases when changing resolution levels.
End of explanation
# Threshold the original fixed, CT, image at 0HU (water), resulting in a binary labeled [0,1] image.
roi = fixed_image> 0
# Our ROI consists of all voxels with a value of 1, now get the bounding box surrounding the head.
label_shape_analysis = sitk.LabelShapeStatisticsImageFilter()
label_shape_analysis.SetBackgroundValue(0)
label_shape_analysis.Execute(roi)
bounding_box = label_shape_analysis.GetBoundingBox(1)
# Bounding box in physical space.
sub_image_min = fixed_image.TransformIndexToPhysicalPoint((bounding_box[0],bounding_box[1], bounding_box[2]))
sub_image_max = fixed_image.TransformIndexToPhysicalPoint((bounding_box[0]+bounding_box[3]-1,
bounding_box[1]+bounding_box[4]-1,
bounding_box[2]+bounding_box[5]-1))
# Only look at the points inside our bounding box.
sub_fixed_points = []
sub_moving_points = []
for fixed_pnt, moving_pnt in zip(fixed_points, moving_points):
if sub_image_min[0]<=fixed_pnt[0]<=sub_image_max[0] and \
sub_image_min[1]<=fixed_pnt[1]<=sub_image_max[1] and \
sub_image_min[2]<=fixed_pnt[2]<=sub_image_max[2] :
sub_fixed_points.append(fixed_pnt)
sub_moving_points.append(moving_pnt)
final_errors_mean, final_errors_std, _, final_errors_max,_ = ru.registration_errors(final_transform, sub_fixed_points, sub_moving_points, True)
print('After registration, errors in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(final_errors_mean, final_errors_std, final_errors_max))
Explanation: Sufficient accuracy <u>inside</u> the ROI
Up to this point our accuracy evaluation has ignored the content of the image and is likely overly conservative. We have been looking at the registration errors inside the volume, but not necessarily in the smaller ROI.
To see the difference you will have to <b>comment out the timeit magic in the code above</b>, run it again, and then run the following cell.
End of explanation |
4,482 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Widgets in a Jupyter Notebook
An example of using version 4.x widgets in a Jupyter Notebook.
Reference
Step1: Define a sine wave
Step2: Plot the sine wave
Step3: Define a function that allows a user to vary the wavenumber of the sine wave
Step4: For example
Step5: Use a slider widget to call sine_plotter and interactively change the wave number of the sine wave | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import interact
Explanation: Widgets in a Jupyter Notebook
An example of using version 4.x widgets in a Jupyter Notebook.
Reference: https://ipywidgets.readthedocs.io/en/latest/.
Depending on the current state of your Python installation, you may have to install the ipywidgets package, have notebook>=4.2.0, and enable widgets.
$ pip install ipywidgets
$ jupyter nbextension enable --py --sys-prefix widgetsnbextension
All of the imports:
End of explanation
x = np.linspace(0, 1, 101)
k = 2
f = np.sin(2*np.pi * k * x)
Explanation: Define a sine wave:
End of explanation
plt.plot(x, f)
Explanation: Plot the sine wave:
End of explanation
def sine_plotter(wave_number):
plt.plot(x, np.sin(2*np.pi * x * wave_number), 'r')
Explanation: Define a function that allows a user to vary the wavenumber of the sine wave:
End of explanation
sine_plotter(5)
Explanation: For example:
End of explanation
interact(sine_plotter, wave_number=(1, 10, 0.5))
Explanation: Use a slider widget to call sine_plotter and interactively change the wave number of the sine wave:
End of explanation |
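interact builds a slider automatically from the (min, max, step) tuple above; for finer control over the initial value and label you can pass an explicit slider widget instead. This is an optional variation and not part of the original example:
import ipywidgets as widgets
# Explicit slider: same range, but with a starting value and a label
interact(sine_plotter,
         wave_number=widgets.FloatSlider(min=1, max=10, step=0.5, value=2, description='k'))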
4,483 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The purpose of this notebook is twofold. First, it demonstrates the basic functionality of PyLogit for estimating nested logit models. Secondly, it compares the nested logit capabilities of PyLogit with Python Biogeme. The dataset used is the SwissMetro dataset from <a href="http
Step1: 1. Load the Swissmetro Dataset
Step2: 2. Clean the dataset
Note that the 09NestedLogit.py file provided is an example from Python Biogeme (see
Step3: 3. Create an id column that ignores the repeat observations per individual
In the simple example given on the Python Biogeme website for 09NestedLogit.py, the repeated observations per individual are treated as separate and independent observations. We will do the same.
Step4: 4. Convert the data from 'wide' format to 'long' format
4a. Determine the 'type' of each column in the dataset.
Step5: 4b. Actually perform the conversion from wide to long formats
Step6: 5. Create the variables used in the Python Biogeme Nested Logit Model Example
In 09NestedLogit.py, the travel time and travel cost variables are scaled for ease of numeric optimization. We will do the same such that our estimated coefficients are comparable.
Step7: 6. Specify and Estimate the Python Biogeme Nested Logit Model Example
6a. Specify the Model
Step9: 6b. Estimate the model
One main difference between the nested logit implementation in PyLogit and in Python Biogeme or mLogit in R is that PyLogit reparameterizes the 'standard' nested logit model. In particular, one standard representation of the nested logit model is in terms of the inverse of the 'scale' parameter for each nest (see for example the representation given by Kenneth Train in section 4.2 <a href="http
Step10: Also, note that the functionality of using parameter constraints is restricted to the Mixed Logit and Nested Logit models at the moment. Moreover, this functionality is only relevant when using optimization methods that make use of gradient information. Gradient-free estimation methods such as 'powell's' method or 'nelder-mead' will not make use of the constrained_pos keyword argument.
6.c Compare the model output with that of Python Biogeme
Step11: Compare with PythonBiogeme
Step12: Summary
My parameter estimates match those of Python Biogeme. <br>
The Python Biogeme log-likelihood is -5,236.900 and their estimated parameters are
Step13: Python Biogeme Output
<pre>
Name Value Std err t-test p-value
ASC_CAR -0.167 0.0371 -4.50 0.00
ASC_TRAIN -0.512 0.0452 -11.33 0.00
B_COST -0.857 0.0463 -18.51 0.00
B_TIME -0.899 0.0570 -15.77 0.00
MU 2.05 0.118 17.45 0.00
</pre>
From above, we see that for the index coefficients, the standard errors that are calculated using the numeric approximation of the hessian match the standard errors returned by Python Biogeme. This suggests that the standard errors of Python Biogeme, for the nested logit model, are based on a numeric differentiation approximation to the Hessian.
Below, we investigate whether the numeric approximation of the gradient via numeric differentiation is a close approximation to the analytic gradient. The premise is that if the numeric gradient does not adequately approximate the analytic gradient, then what chance does the numeric hessian have of adequately approximating the analytic hessian? | Python Code:
from collections import OrderedDict # For recording the model specification
import pandas as pd # For file input/output
import numpy as np # For vectorized math operations
import statsmodels.tools.numdiff as numdiff # For numeric hessian
import scipy.linalg # For matrix inversion
import pylogit as pl # For choice model estimation
from pylogit import nested_logit as nl # For nested logit convenience funcs
Explanation: The purpose of this notebook is twofold. First, it demonstrates the basic functionality of PyLogit for estimating nested logit models. Secondly, it compares the nested logit capabilities of PyLogit with Python Biogeme. The dataset used is the SwissMetro dataset from <a href="http://biogeme.epfl.ch/examples_swissmetro.html">http://biogeme.epfl.ch/examples_swissmetro.html</a>. For an explanation of the variables in the dataset, see http://www.strc.ch/conferences/2001/bierlaire1.pdf
End of explanation
# Load the raw swiss metro data
# Note the .dat files are tab delimited text files
swissmetro_wide = pd.read_table("../data/swissmetro.dat", sep='\t')
Explanation: 1. Load the Swissmetro Dataset
End of explanation
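Before any filtering, it can be worth confirming that the tab-delimited file was parsed as expected; this quick inspection is optional and not part of the original notebook:
# Optional check of the raw data's shape and first few rows
print(swissmetro_wide.shape)
swissmetro_wide.head()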
# Select observations whose choice is known (i.e. CHOICE != 0)
# **AND** whose PURPOSE is either 1 or 3
include_criteria = (swissmetro_wide.PURPOSE.isin([1, 3]) &
(swissmetro_wide.CHOICE != 0))
# Use ".copy()" so that later on, we avoid performing operations
# on a view of a dataframe as opposed to on an actual dataframe
clean_sm_wide = swissmetro_wide.loc[include_criteria].copy()
# Look at how many observations we have after removing unwanted
# observations
final_num_obs = clean_sm_wide.shape[0]
num_obs_statement = "The cleaned number of observations is {:,.0f}."
print(num_obs_statement.format(final_num_obs))
Explanation: 2. Clean the dataset
Note that the 09NestedLogit.py file provided is an example from Python Biogeme (see: <a href="http://biogeme.epfl.ch/examples_swissmetro.html">http://biogeme.epfl.ch/examples_swissmetro.html</a>). The 09NestedLogit.py file excludes observations meeting the following critera:
<pre>
exclude = (( PURPOSE != 1 ) * ( PURPOSE != 3 ) + ( CHOICE == 0 )) > 0
</pre>
As a result, their dataset has 6,768 observations. Below, I make the same exclusions.
End of explanation
# Create a custom id column that ignores the fact that this is a
# panel/repeated-observations dataset, and start the "custom_id" from 1
clean_sm_wide["custom_id"] = np.arange(clean_sm_wide.shape[0], dtype=int) + 1
Explanation: 3. Create an id column that ignores the repeat observations per individual
In the simple example given on the Python Biogeme website for 09NestedLogit.py, the repeated observations per individual are treated as separate and independent observations. We will do the same.
End of explanation
# Look at the columns of the swissmetro data
clean_sm_wide.columns
# Create the list of individual specific variables
ind_variables = clean_sm_wide.columns.tolist()[:15]
# Specify the variables that vary across individuals **AND**
# across some or all alternatives
alt_varying_variables = {u'travel_time': dict([(1, 'TRAIN_TT'),
(2, 'SM_TT'),
(3, 'CAR_TT')]),
u'travel_cost': dict([(1, 'TRAIN_CO'),
(2, 'SM_CO'),
(3, 'CAR_CO')]),
u'headway': dict([(1, 'TRAIN_HE'),
(2, 'SM_HE')]),
u'seat_configuration': dict([(2, "SM_SEATS")])}
# Specify the availability variables
availability_variables = dict(zip(range(1, 4), ['TRAIN_AV', 'SM_AV', 'CAR_AV']))
# Determine the columns that will denote the
# new column of alternative ids, and the columns
# that denote the custom observation ids and the
# choice column
new_alt_id = "mode_id"
obs_id_column = "custom_id"
choice_column = "CHOICE"
Explanation: 4. Convert the data from 'wide' format to 'long' format
4a. Determine the 'type' of each column in the dataset.
End of explanation
# Perform the desired conversion
long_swiss_metro = pl.convert_wide_to_long(clean_sm_wide,
ind_variables,
alt_varying_variables,
availability_variables,
obs_id_column,
choice_column,
new_alt_id_name=new_alt_id)
# Look at the first 9 rows of the long-format dataframe
long_swiss_metro.head(9).T
Explanation: 4b. Actually perform the conversion from wide to long formats
End of explanation
# Scale both the travel time and travel cost by 100
long_swiss_metro["travel_time_hundredth"] = (long_swiss_metro["travel_time"] /
100.0)
# Figure out which rows correspond to train or swiss metro
# alternatives for individuals with GA passes. These individuals face no
# marginal costs for a trip
train_pass_train_alt = ((long_swiss_metro["GA"] == 1) *
(long_swiss_metro["mode_id"].isin([1, 2]))).astype(int)
# Note that the (train_pass_train_alt == 0) term accounts for the
# fact that those with a GA pass have no marginal cost for the trip
long_swiss_metro["travel_cost_hundredth"] = (long_swiss_metro["travel_cost"] *
(train_pass_train_alt == 0) /
100.0)
Explanation: 5. Create the variables used in the Python Biogeme Nested Logit Model Example
In 09NestedLogit.py, the travel time and travel cost variables are scaled for ease of numeric optimization. We will do the same such that our estimated coefficients are comparable.
End of explanation
# Specify the nesting values
nest_membership = OrderedDict()
nest_membership["Future Modes"] = [2]
nest_membership["Existing Modes"] = [1, 3]
# Create the model's specification dictionary and variable names dictionary
# NOTE: - Keys should be variables within the long format dataframe.
# The sole exception to this is the "intercept" key.
# - For the specification dictionary, the values should be lists
# or lists of lists. Within a list, or within the inner-most
# list should be the alternative ID's of the alternative whose
# utility specification the explanatory variable is entering.
example_specification = OrderedDict()
example_names = OrderedDict()
# Note that 1 is the id for the Train and 3 is the id for the Car.
# The next two lines are placing alternative specific constants in
# the utility equations for the Train and for the Car. The order
# in which these variables are placed is chosen so the summary
# dataframe which is returned will match that shown in the HTML
# file of the python biogeme example.
example_specification["intercept"] = [3, 1]
example_names["intercept"] = ['ASC Car', 'ASC Train']
# Note that the names used below are simply for consistency with
# the coefficient names given in the Python Biogeme example.
# example_specification["travel_cost_hundredth"] = [[1, 2, 3]]
# example_names["travel_cost_hundredth"] = ['B_COST']
example_specification["travel_cost_hundredth"] = [[1, 2, 3]]
example_names["travel_cost_hundredth"] = ['B_COST']
example_specification["travel_time_hundredth"] = [[1, 2, 3]]
example_names["travel_time_hundredth"] = ['B_TIME']
Explanation: 6. Specify and Estimate the Python Biogeme Nested Logit Model Example
6a. Specify the Model
End of explanation
# Define a function that calculates the "logit" transformation of values
# between 0.0 and 1.0.
def logit(x):
Parameters
----------
x : int, float, or 1D ndarray.
If an array, all elements should be ints or floats. All
elements should be between zero and one, exclusive of 1.0.
Returns
-------
The logit of x: `np.log(x / (1.0 - x))`.
return np.log(x/(1.0 - x))
# Provide the module with the needed input arguments to create
# an instance of the MNL model class
example_nested = pl.create_choice_model(data=long_swiss_metro,
alt_id_col=new_alt_id,
obs_id_col=obs_id_column,
choice_col=choice_column,
specification=example_specification,
model_type="Nested Logit",
names=example_names,
nest_spec=nest_membership)
# Specify the initial nesting parameter values
# Note: This should be in terms of the reparameterized values used
# by PyLogit.
# Note: The '40' corresponds to scale parameter that is numerically
# indistinguishable from 1.0
# Note: 2.05 is the scale parameter that is estimated by PythonBiogeme
# so we invert it, then take the logit of this inverse to get the
# corresponding starting value to be used by PyLogit.
# Note the first value corresponds to the first nest in 'nest_spec'
# and the second value corresponds to the second nest in 'nest_spec'.
init_nests = np.array([40, logit(2.05**-1)])
# Specify the initial index coefficients used by PythonBiogeme
init_coefs = np.array([-0.167, -0.512, -0.899, -0.857])
# Create a single array of the initial values
init_values = np.concatenate((init_nests, init_coefs), axis=0)
# Start the model estimation from the pythonbiogeme initial values
# Note that the first value, in the initial values, is constrained
# to remain constant through the estimation process. This is because
# the first nest in nest_spec is a 'degenerate' nest with only one
# alternative, and the nest parameter of degenerate nests is not
# identified.
example_nested.fit_mle(init_values,
constrained_pos=[0])
Explanation: 6b. Estimate the model
One main difference between the nested logit implementation in PyLogit and in Python Biogeme or mLogit in R is that PyLogit reparameterizes the 'standard' nested logit model. In particular, one standard representation of the nested logit model is in terms of the inverse of the 'scale' parameter for each nest (see for example the representation given by Kenneth Train in section 4.2 <a href="http://eml.berkeley.edu/books/choice2nd/Ch04_p76-96.pdf">here</a>). The 'scale' parameter has domain from zero to infinity, therefore the inverse of the scale parameter has the same domain.
However, for econometric purposes (such as conforming to the assumptions that individuals are making choices through a utility maximizing decision protocol), the scale parameter of a 'lower level nest' is constrained to be greater than or equal to 1 (assuming that the 'upper level nest' is constrained to 1.0 for identification purposes). The inverse of the scale parameter would then be constrained to be between 0.0 and 1.0 in this case. In order to make use of unconstrained optimization algorithms, we therefore estimate the logit ( i.e. $\ln \left[ \frac{\textrm{scale}^{-1}}{1.0 - \textrm{scale}^{-1}} \right]$) of the inverse of the scale parameter, assuming that the inverse of the scale parameter will lie between zero and one (and accordingly that the scale parameter be greater than or equal to one).
End of explanation
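As a quick numeric illustration of this reparameterization (a sketch added here for clarity, not part of the original analysis; the helper names are hypothetical), the conversion between a nest scale parameter and the value PyLogit estimates can be written as:
# Illustrative round-trip between a nest 'scale' parameter and the
# reparameterized value estimated by PyLogit (assumes scale >= 1).
import numpy as np
def scale_to_reparam(scale):
    # logit of the inverse of the scale parameter
    return np.log((1.0 / scale) / (1.0 - 1.0 / scale))
def reparam_to_scale(reparam):
    # invert the transformation: scale = 1 + exp(-reparam)
    return 1.0 + np.exp(-reparam)
# Example: Python Biogeme's Mu of 2.05 corresponds to roughly -0.049
print(scale_to_reparam(2.05))
print(reparam_to_scale(scale_to_reparam(2.05)))  # recovers 2.05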
# Look at the estimated coefficients and goodness-of-fit statistics
example_nested.get_statsmodels_summary()
Explanation: Also, note that the functionality of using parameter constraints is restricted to the Mixed Logit and Nested Logit models at the moment. Moreover, this functionality is only relevant when using optimization methods that make use of gradient information. Gradient-free estimation methods such as Powell's method or Nelder-Mead will not make use of the constrained_pos keyword argument.
6.c Compare the model output with that of Python Biogeme
End of explanation
# Note that the Mu (i.e the scale parameter) estimated by python biogeme is
# 1.0 / nest_coefficient where
# nest_coefficient = 1.0 / (1.0 + exp[-1 * estimated_nest_param])
pylogit_mu = 1.0 + np.exp(-1 * example_nested.params["Existing Modes"])
print("PyLogit's estimated Mu is: {:,.4f}".format(pylogit_mu))
Explanation: Compare with PythonBiogeme
End of explanation
# Create objects for all of the necessary arguments that are
# needed to compute the log-likelihood of the nested logit model
# given the data used in this example
nested_design = example_nested.design
mapping_res = example_nested.get_mappings_for_fit()
choice_array = long_swiss_metro["CHOICE"].values
# Create a nested logit estimation object
est_object = nl.NestedEstimator(example_nested,
mapping_res,
None,
np.zeros(example_nested.params.size),
nl.split_param_vec,
constrained_pos=[0])
# Create a 'convenience' function that simply returns the log-likelihood
# given a vector of coefficients
def convenient_log_likelihood(all_coefs):
return est_object.convenience_calc_log_likelihood(all_coefs)
# Calculate the numeric hessian
numeric_hess = numdiff.approx_hess(example_nested.params.values,
convenient_log_likelihood)
# Account for the fact that the first param is constrained
numeric_hess[0, :] = 0
numeric_hess[:, 0] = 0
numeric_hess[0, 0] = -1
# Calculate the asymptotic covariance with the numeric hessian
numeric_cov = -1 * scipy.linalg.inv(numeric_hess)
# Get the numeric standard errors
numeric_std_errs = pd.Series(np.sqrt(np.diag(numeric_cov)),
index=example_nested.params.index)
# Make sure the Future Modes Nest param has a standard error of np.nan
numeric_std_errs.loc["Future Modes"] = np.nan
# Order the numeric standard errors according to the Python Biogeme
# output
numeric_std_errs = pd.concat([numeric_std_errs[example_nested.params.index[2:]],
numeric_std_errs[example_nested.params.index[:2]]],
axis=0)
# Display the numeric standard errors
numeric_std_errs
Explanation: Summary
My parameter estimates match those of Python Biogeme. <br>
The Python Biogeme log-likelihood is -5,236.900 and their estimated parameters are:
<pre>
ASC Car: -0.167
ASC Train: -0.512
B_COST: -0.857
B_TIME: -0.899
Mu: 2.05
</pre>
As shown above, my log-likelihood is -5,236.900, and my estimated parameters are:
<pre>
ASC Car: -0.1672
ASC Train: -0.5119
B_COST: -0.8567
B_TIME: -0.8987
Existing Modes Nest Param: 2.0541
</pre>
PyLogit's covariance estimates for the Nested Logit model are currently based on the BHHH approximation to the Fisher Information Matrix. This is the same procedure used by mlogit. However, based on the disagreement between PyLogit's standard errors and those of Python Biogeme, Python Biogeme is clearly not using the BHHH approximation to the Fisher Information Matrix to calculate its standard errors. How does Python Biogeme calculate its standard errors?
Investigate the use of numeric approximations to the Hessian
End of explanation
# Approximate the gradient using numeric differentiation
numeric_grad = numdiff.approx_fprime(example_nested.params.values,
convenient_log_likelihood)
pd.DataFrame([numeric_grad,
example_nested.gradient.values],
index=["Numeric Differentiation", "Analytic"],
columns=example_nested.params.index).T
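As an optional, more compact check of the comparison above (a sketch using only objects already defined in this notebook), one can summarize the deviation between the numeric and analytic gradients:
# Optional sanity check: largest absolute and relative deviation between
# the numeric and analytic gradients computed above.
abs_diff = np.abs(numeric_grad - example_nested.gradient.values)
print("Max absolute difference:", abs_diff.max())
print("Max relative difference:",
      (abs_diff / np.maximum(np.abs(example_nested.gradient.values), 1e-12)).max())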
Explanation: Python Biogeme Output
<pre>
Name Value Std err t-test p-value
ASC_CAR -0.167 0.0371 -4.50 0.00
ASC_TRAIN -0.512 0.0452 -11.33 0.00
B_COST -0.857 0.0463 -18.51 0.00
B_TIME -0.899 0.0570 -15.77 0.00
MU 2.05 0.118 17.45 0.00
</pre>
From above, we see that for the index coefficients, the standard errors that are calculated using the numeric approximation of the hessian match the standard errors returned by Python Biogeme. This suggests that the standard errors of Python Biogeme, for the nested logit model, are based on a numeric differentiation approximation to the Hessian.
Below, we investigate whether the numeric approximation of the gradient via numeric differentiation is a close approximation to the analytic gradient. The premise is that if the numeric gradient does not adequately approximate the analytic gradient, then what chance does the numeric hessian have of adequately approximating the analytic hessian?
End of explanation |
4,484 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing
Step8: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step10: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
Step15: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step18: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
Step21: Encoding
Implement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn().
Step24: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
Step27: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
Step30: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using a lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note
Step33: Build the Neural Network
Apply the functions you implemented above to
Step34: Neural Network Training
Hyperparameters
Tune the following parameters
Step36: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save the batch_size and save_path parameters for inference.
Step43: Checkpoint
Step46: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
Step48: Translate
This will translate translate_sentence from English to French. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# DONE: Implement Function
eos = target_vocab_to_int['<EOS>']
source_ids = [[source_vocab_to_int[w] for w in s.split()] for s in source_text.split('\n')]
target_ids = [[target_vocab_to_int[w] for w in s.split()] + [eos] for s in target_text.split('\n')]
return source_ids, target_ids
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
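For intuition, a toy sketch of the expected behaviour (the mini-vocabularies below are made up for illustration and are not part of the project data):
# Toy illustration of text_to_ids with hypothetical mini-vocabularies
toy_source_vocab = {'new': 0, 'jersey': 1}
toy_target_vocab = {'<EOS>': 0, 'new': 1, 'jersey': 2}
src_ids, tgt_ids = text_to_ids('new jersey', 'new jersey', toy_source_vocab, toy_target_vocab)
print(src_ids)   # [[0, 1]]
print(tgt_ids)   # [[1, 2, 0]]  <- note the appended <EOS> id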
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
# DONE: Implement Function
input = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None])
learning_rate = tf.placeholder(tf.float32, None)
keep_prob = tf.placeholder(tf.float32, None, name='keep_prob')
return input, targets, learning_rate, keep_prob
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoding_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Return the placeholders in the following tuple (Input, Targets, Learning Rate, Keep Probability)
End of explanation
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for decoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# DONE: Implement Function
last_removed = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
decoding_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), last_removed], 1)
return decoding_input
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_decoding_input(process_decoding_input)
Explanation: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
End of explanation
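As a toy illustration of what this transformation does (hypothetical ids; assume the <GO> token has id 1):
import numpy as np
# Batch of target ids; drop the last id of each row and prepend <GO>
toy_targets = np.array([[10, 11, 12],
                        [20, 21, 22]])
go_id = 1
toy_dec_input = np.concatenate(
    [np.full((toy_targets.shape[0], 1), go_id), toy_targets[:, :-1]], axis=1)
print(toy_dec_input)
# [[ 1 10 11]
#  [ 1 20 21]]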
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
# DONE: Implement Function
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
cell = tf.contrib.rnn.MultiRNNCell([lstm] * num_layers)
drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
_, final_state = tf.nn.dynamic_rnn(drop, rnn_inputs, dtype=tf.float32)
return final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn().
End of explanation
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
# DONE: Implement Function
dec_fn_train = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
outputs_train, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
dec_cell, dec_fn_train, dec_embed_input, sequence_length, scope=decoding_scope)
logits = tf.contrib.layers.dropout(output_fn(outputs_train), keep_prob)
return logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
End of explanation
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: The maximum allowed time steps to decode
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
# DONE: Implement Function
dec_fn_inference = tf.contrib.seq2seq.simple_decoder_fn_inference(
output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size)
logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, dec_fn_inference, scope=decoding_scope)
return logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
End of explanation
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
# DONE: Implement Function
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
cell = tf.contrib.rnn.MultiRNNCell([lstm] * num_layers)
output_fn = lambda x: tf.contrib.layers.fully_connected(
x, vocab_size, None, weights_initializer=tf.truncated_normal_initializer())
with tf.variable_scope("decoding") as decoding_scope:
logits_train = decoding_layer_train(
encoder_state, cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob)
with tf.variable_scope("decoding", reuse=True) as decoding_scope:
logits_infer = decoding_layer_infer(
encoder_state, cell, dec_embeddings, target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'],
sequence_length, vocab_size, decoding_scope, output_fn, keep_prob)
return logits_train, logits_infer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using a lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
# DONE: Implement Function
enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)
enc_state = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob)
dec_input = process_decoding_input(target_data, target_vocab_to_int, batch_size)
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
logits_train, logits_infer = decoding_layer(
dec_embed_input, dec_embeddings, enc_state,
target_vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)
return logits_train, logits_infer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Apply embedding to the input data for the encoder.
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
Apply embedding to the target data for the decoder.
Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
End of explanation
# Number of Epochs
epochs = 3
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 512
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 256
decoding_embedding_size = 256
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 1.0
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import time
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# DONE: Implement Function
sentence = sentence.lower()
ids = []
for word in sentence.split():
try:
ids.append(vocab_to_int[word])
except KeyError:
ids.append(vocab_to_int['<UNK>'])
return ids
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
End of explanation
translate_sentence = "her favorite april fruit is my freezing french banana"
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation |
4,485 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Radiation Forces on Circumplanetary Dust
This example shows how to integrate circumplanetary dust particles under the action of radiation forces. We use Saturn's Phoebe ring as an example, a distant ring of debris.
We have to make sure we add all quantities in the same units. Here we choose to use SI units. We begin by adding the Sun and Saturn, and use Saturn's orbital plane as the reference plane
Step1: Now let's set up REBOUNDx and add radiation_forces. We also have to set the speed of light in the units we want to use.
Step2: By default, the radiation_forces effect assumes the particle at index 0 is the source of the radiation. If you'd like to use a different one, or it's possible that the radiation source might move to a different index (e.g. with a custom merger routine), you can add a radiation_source flag to the appropriate particle like this
Step3: Here we show how to add two dust grains to the simulation in different ways. Let's first initialize their orbits. In both cases we use the orbital elements of Saturn's irregular satellite Phoebe, which the dust grains will inherit upon release (Tamayo et al. 2011). Since the dust grains don't interact with one another, putting them on top of each other is OK.
Step4: Now we add the grains' physical properties. In order for particles to feel radiation forces, we have to set their beta parameter. $\beta$ is the ratio of the radiation force to the gravitational force from the star (Burns et al. 1979). One can either set it directly
Step5: or we can calculate it from more fundamental parameters. REBOUNDx has a convenience function that takes the gravitational constant, speed of light, radiation source's mass and luminosity, and then the grain's physical radius, bulk density, and radiation pressure coefficient Q_pr (Burns et al. 1979, equals 1 in the limit that the grain size is >> the radiation's wavelength).
Step6: Now let's run for 100 years (about 3 Saturn orbits), and look at how the eccentricity varies over a Saturn year | Python Code:
import rebound
import reboundx
import numpy as np
sim = rebound.Simulation()
sim.G = 6.674e-11 # SI units
sim.dt = 1.e4 # Initial timestep in sec.
sim.N_active = 2 # Make it so dust particles don't interact with one another gravitationally
sim.add(m=1.99e30, hash="Sun") # add Sun with mass in kg
sim.add(m=5.68e26, a=1.43e12, e=0.056, pomega = 0., f=0., hash="Saturn") # Add Saturn at pericenter
ps = sim.particles
Explanation: Radiation Forces on Circumplanetary Dust
This example shows how to integrate circumplanetary dust particles under the action of radiation forces. We use Saturn's Phoebe ring as an example, a distant ring of debris.
We have to make sure we add all quantities in the same units. Here we choose to use SI units. We begin by adding the Sun and Saturn, and use Saturn's orbital plane as the reference plane:
End of explanation
rebx = reboundx.Extras(sim)
rf = rebx.load_force("radiation_forces")
rebx.add_force(rf)
rf.params["c"] = 3.e8
Explanation: Now let's set up REBOUNDx and add radiation_forces. We also have to set the speed of light in the units we want to use.
End of explanation
ps["Sun"].params["radiation_source"] = 1
Explanation: By default, the radiation_forces effect assumes the particle at index 0 is the source of the radiation. If you'd like to use a different one, or it's possible that the radiation source might move to a different index (e.g. with a custom merger routine), you can add a radiation_source flag to the appropriate particle like this:
End of explanation
a = 1.3e10 # in meters
e = 0.16
inc = 175*np.pi/180.
Omega = 0. # longitude of node
omega = 0. # argument of pericenter
f = 0. # true anomaly
# Add two dust grains with the same orbit
sim.add(primary=ps["Saturn"], a=a, e=e, inc=inc, Omega=Omega, omega=omega, f=f, hash="p1")
sim.add(primary=ps["Saturn"], a=a, e=e, inc=inc, Omega=Omega, omega=omega, f=f, hash="p2")
Explanation: Here we show how to add two dust grains to the simulation in different ways. Let's first initialize their orbits. In both cases we use the orbital elements of Saturn's irregular satellite Phoebe, which the dust grains will inherit upon release (Tamayo et al. 2011). Since the dust grains don't interact with one another, putting them on top of each other is OK.
End of explanation
ps["p1"].params["beta"] = 0.01
Explanation: Now we add the grains' physical properties. In order for particles to feel radiation forces, we have to set their beta parameter. $\beta$ is the ratio of the radiation force to the gravitational force from the star (Burns et al. 1979). One can either set it directly:
End of explanation
grain_radius = 1.e-5 # grain radius in m
density = 1000. # kg/m^3 = 1g/cc
Q_pr = 1.
luminosity = 3.85e26 # Watts
ps["p2"].params["beta"] = rebx.rad_calc_beta(sim.G, rf.params["c"], ps[0].m, luminosity, grain_radius, density, Q_pr)
print("Particle 2's beta parameter = {0}".format(ps["p2"].params["beta"]))
Explanation: or we can calculate it from more fundamental parameters. REBOUNDx has a convenience function that takes the gravitational constant, speed of light, radiation source's mass and luminosity, and then the grain's physical radius, bulk density, and radiation pressure coefficient Q_pr (Burns et al. 1979; it equals 1 in the limit that the grain size is >> the radiation's wavelength).
End of explanation
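As a cross-check, a sketch computing $\beta$ by hand from the expression in Burns et al. (1979), $\beta = 3LQ_{pr}/(16\pi G M c \rho s)$ — treat this formula as an assumption added here rather than a statement about REBOUNDx's internals:
# Hand calculation of beta, to compare with rad_calc_beta above
import numpy as np
G = sim.G
c = rf.params["c"]
M_sun = ps["Sun"].m
beta_by_hand = (3.*luminosity*Q_pr) / (16.*np.pi*G*M_sun*c*density*grain_radius)
print("beta by hand           = {0}".format(beta_by_hand))
print("beta from rad_calc_beta = {0}".format(ps["p2"].params["beta"]))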
yr = 365*24*3600 # s
Noutput = 1000
times = np.linspace(0,100.*yr, Noutput)
e1, e2 = np.zeros(Noutput), np.zeros(Noutput)
sim.move_to_com() # move to center of mass frame first
for i, time in enumerate(times):
sim.integrate(time)
e1[i] = ps["p1"].calculate_orbit(primary=ps["Saturn"]).e
e2[i] = ps["p2"].calculate_orbit(primary=ps["Saturn"]).e
%matplotlib inline
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(15,5))
ax.plot(times/yr, e1, label=r"$\beta$={0:.1e}".format(ps["p1"].params["beta"]))
ax.plot(times/yr, e2, label=r"$\beta$={0:.1e}".format(ps["p2"].params["beta"]))
ax.set_xlabel('Time (yrs)', fontsize=24)
ax.set_ylabel('Eccentricity', fontsize=24)
plt.legend(fontsize=24)
Explanation: Now let's run for 100 years (about 3 Saturn orbits), and look at how the eccentricity varies over a Saturn year:
End of explanation |
4,486 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Stacks Getting Started
Welcome to the Subinitial Stacks Getting Started Guide!
This document will guide you through setup of the Stacks and your first script.
Useful Links
Official Hardware Getting Started Guide
Official Software Getting Started Guide
All Official Documentation
Subinitial Python Library Documentation
Installing Python and Required Libraries
Python 3.5+ is required to use the Stacks. Install Python by following this link
Step1: Run the following code to make sure the library installed correctly.
Step2: You should see the following output
Step3: If you are able to ping it successfully, great! The Stacks is now communicating with your computer.
* If the ping failed, try assigning a static IP to your computer.
* Assign IP | Python Code:
!pip3 install --user git+https://bitbucket.org/subinitial/subinitial.git
Explanation: Stacks Getting Started
Welcome to the Subinitial Stacks Getting Started Guide!
This document will guide you through setup of the Stacks and your first script.
Useful Links
Official Hardware Getting Started Guide
Official Software Getting Started Guide
All Official Documentation
Subinitial Python Library Documentation
Installing Python and Required Libraries
Python 3.5+ is required to use the Stacks. Install Python by following this link:
Python Download
Install git if you haven't(it's a good idea!).
Git
Then, install the Subinitial Python Library by running the following command in your command line:
End of explanation
import subinitial.stacks as stacks
print("Stacks Library Major Version:", stacks.VERSION_STACKS[0])
Explanation: Run the following code to make sure the library installed correctly.
End of explanation
!ping 192.168.1.49
Explanation: You should see the following output:
Stacks Library Major Version: 1
Setting up your Stacks
For best results, use the Stacks with a laptop with a Wi-Fi connection.
Connect an Ethernet cable from the Stacks to your computer. Verify that the second light from the top in the light bank
is lit.
* If this light is not lit, verify the connection.
Open your terminal or command prompt, and type the following command:
End of explanation
import subinitial.stacks1 as stacks
core = stacks.Core(host="192.168.1.49")
core.print_console("id")
Explanation: If you are able to ping it successfully, great! The Stacks is now communicating with your computer.
* If the ping failed, try assigning a static IP to your computer.
* Assign IP: 192.168.1.40 and Subnet mask: 255.255.255.0
+ On Windows 10, right-click on the Wi-Fi icon, and click on Open Network and Sharing Center.
+ Click on Change adapter settings.
+ Right-click on your Ethernet connection(the one with a cable), and select Properties.
+ Select Internet Protocol Version 4(TCP/IPv4).
+ Click the Properties button.
+ When the window pops up, click on Use the following IP address, and enter the following information:
- IP: 192.168.1.40
- Subnet mask: 255.255.255.0
Run the following script to verify that everything works:
End of explanation |
4,487 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction GPU
Chainer とはニューラルネットの実装を簡単にしたフレームワークです。
今回は言語の分野でニューラルネットを適用してみました。
今回は言語モデルを作成していただきます。
言語モデルとはある単語が来たときに次の単語に何が来やすいかを予測するものです。
言語モデルにはいくつか種類があるのでここでも紹介しておきます。
n-グラム言語モデル
単語の数を単純に数え挙げて作成されるモデル。考え方としてはデータにおけるある単語の頻度に近い
ニューラル言語モデル
単語の辞書ベクトルを潜在空間ベクトルに落とし込み、ニューラルネットで次の文字を学習させる手法
リカレントニューラル言語モデル
基本的なアルゴリズムはニューラル言語モデルと同一だが過去に使用した単語を入力に加えることによって文脈を考慮した言語モデルの学習が可能となる。ニューラル言語モデルとは異なり、より古い情報も取得可能
以下では、このChainerを利用しデータを準備するところから実際に言語モデルを構築し学習・評価を行うまでの手順を解説します。
各種ライブラリ導入
初期設定
データ入力
リカレントニューラル言語モデル設定
学習を始める前の設定
パラメータ更新方法(確率的勾配法)
言語の予測
もしGPUを使用したい方は、以下にまとめてあるのでご参考ください。
Chainer を用いてリカレントニューラル言語モデル作成のサンプルコードを解説してみた
1.各種ライブラリ導入
Chainerの言語処理では多数のライブラリを導入します。
Step1: `導入するライブラリの代表例は下記です。
numpy
Step2: 3.データ入力
学習用にダウンロードしたファイルをプログラムに読ませる処理を関数化しています
学習データをバイナリ形式で読み込んでいます。
文字データを確保するための行列を定義しています。
データを単語をキー、長さを値とした辞書データにして行列データセットに登録しています。
学習データ、単語の長さ、語彙数を取得しています。
上記をそれぞれ行列データとして保持しています。
Step3: 4.リカレントニューラル言語モデル設定
RNNLM(リカレントニューラル言語モデルの設定を行っています)
EmbedIDで行列変換を行い、疎なベクトルを密なベクトルに変換しています。
出力が4倍の理由は入力層、出力層、忘却層、前回の出力をLSTMでは入力に使用するためです。
隠れ層に前回保持した隠れ層の状態を入力することによってLSTMを実現しています。
ドロップアウトにより過学習するのを抑えています。
予測を行なうメソッドも実装しており、入力されたデータ、状態を元に次の文字列と状態を返すような関数になっています。
モデルの初期化を行なう関数もここで定義しています。
Step4: RNNLM(リカレントニューラル言語モデルの設定を行っています)
作成したリカレントニューラル言語モデルを導入しています。
最適化の手法はRMSpropを使用
http
Step5: 5.学習を始める前の設定
学習データのサイズを取得
ジャンプの幅を設定(順次学習しない)
パープレキシティを0で初期化
最初の時間情報を取得
初期状態を現在の状態に付与
状態の初期化
損失を0で初期化
Step6: 6.パラメータ更新方法(ミニバッチ)
確率的勾配法を用いて学習している。
一定のデータを選択し損失計算をしながらパラメータ更新をしている。
逐次尤度の計算も行っている。
適宜学習データのパープレキシティも計算している
バックプロパゲーションでパラメータを更新する。
truncateはどれだけ過去の履歴を見るかを表している。
optimizer.clip_gradsの部分でL2正則化をかけている。
過学習を抑えるために学習効率を徐々に下げている。
Step7: 7.言語の予測
学習したモデルを取得
モデルからユニット数を取得
最初の空文字を設定
Step8: 学習したモデルを利用して文字の予測を行なう。
予測で出力された文字と状態を次の入力に使用する。 | Python Code:
import time
import math
import sys
import pickle
import copy
import os
import re
import numpy as np
from chainer import cuda, Variable, FunctionSet, optimizers
import chainer.functions as F
Explanation: Introduction GPU
Chainer is a framework that makes it easy to implement neural networks.
Here we apply a neural network to the language domain.
In this exercise you will build a language model.
A language model predicts which word is likely to come next, given a word.
There are several kinds of language models, so we briefly introduce them here.
n-gram language model
A model built by simply counting up words; conceptually it is close to the frequency of a word in the data.
Neural language model
A method that maps the one-hot dictionary vector of a word into a latent-space vector and trains a neural network to predict the next token.
Recurrent neural language model
The basic algorithm is the same as the neural language model, but by feeding previously used words back into the input, a language model that takes context into account can be learned. Unlike the plain neural language model, older information can also be captured.
Below, we walk through the procedure with Chainer, from preparing the data to actually building, training, and evaluating the language model.
Importing the libraries
Initial settings
Data input
Setting up the recurrent neural language model
Settings before starting training
Parameter update method (stochastic gradient descent)
Predicting language
If you would like to use a GPU, the details are summarized in the following (Japanese) article:
Chainer を用いてリカレントニューラル言語モデル作成のサンプルコードを解説してみた
1. Importing the libraries
For language processing with Chainer we import a number of libraries.
End of explanation
#-------------Explain7 in the Qiita-------------
n_epochs = 30
n_units = 641
batchsize = 200
bprop_len = 40
grad_clip = 0.3
gpu_ID = 0
data_dir = "data_hands_on"
checkpoint_dir = "cv"
xp = cuda.cupy if gpu_ID >= 0 else np
#-------------Explain7 in the Qiita-------------
Explanation: Representative libraries imported above are:
numpy: a library for matrix computation and other numerical work
chainer: the Chainer framework itself
2. Initial settings
Here we set the number of training epochs, the number of units, the number of examples per batch used in stochastic gradient descent, the length of the string used per training step, the threshold used for gradient clipping, the location of the training data, and the output location for the model.
End of explanation
# input data
#-------------Explain1 in the Qiita-------------
def source_to_words(source):
line = source.replace("¥n", " ").replace("¥t", " ")
for spacer in ["(", ")", "{", "}", "[", "]", ",", ";", ":", "++", "!", "$", '"', "'"]:
line = line.replace(spacer, " " + spacer + " ")
words = [w.strip() for w in line.split()]
return words
def load_data():
vocab = {}
print ('%s/angular.js'% data_dir)
source = open('%s/angular_full_remake.js' % data_dir, 'r').read()
words = source_to_words(source)
freq = {}
dataset = np.ndarray((len(words),), dtype=np.int32)
for i, word in enumerate(words):
if word not in vocab:
vocab[word] = len(vocab)
freq[word] = 0
dataset[i] = vocab[word]
freq[word] += 1
print('corpus length:', len(words))
print('vocab size:', len(vocab))
return dataset, words, vocab, freq
#-------------Explain1 in the Qiita-------------
if not os.path.exists(checkpoint_dir):
os.mkdir(checkpoint_dir)
train_data, words, vocab, freq = load_data()
for f in ["frequent", "rarely"]:
print("{0} words".format(f))
print(sorted(freq.items(), key=lambda i: i[1], reverse=True if f == "frequent" else False)[:50])
Explanation: 3. Data input
The processing that makes the program read the downloaded training file is wrapped in a function.
The training data is loaded from the file.
An array is defined to hold the token data.
The data is turned into dictionary data keyed by word and registered into the dataset array.
The training data, the number of words, and the vocabulary size are obtained.
Each of the above is kept as array data.
End of explanation
#-------------Explain2 in the Qiita-------------
class CharRNN(FunctionSet):
def __init__(self, n_vocab, n_units):
super(CharRNN, self).__init__(
embed = F.EmbedID(n_vocab, n_units),
l1_x = F.Linear(n_units, 4*n_units),
l1_h = F.Linear(n_units, 4*n_units),
l2_h = F.Linear(n_units, 4*n_units),
l2_x = F.Linear(n_units, 4*n_units),
l3 = F.Linear(n_units, n_vocab),
)
for param in self.parameters:
param[:] = np.random.uniform(-0.08, 0.08, param.shape)
def forward_one_step(self, x_data, y_data, state, train=True, dropout_ratio=0.7):
x = Variable(x_data, volatile=not train)
t = Variable(y_data, volatile=not train)
h0 = self.embed(x)
h1_in = self.l1_x(F.dropout(h0, ratio=dropout_ratio, train=train)) + self.l1_h(state['h1'])
c1, h1 = F.lstm(state['c1'], h1_in)
h2_in = self.l2_x(F.dropout(h1, ratio=dropout_ratio, train=train)) + self.l2_h(state['h2'])
c2, h2 = F.lstm(state['c2'], h2_in)
y = self.l3(F.dropout(h2, ratio=dropout_ratio, train=train))
state = {'c1': c1, 'h1': h1, 'c2': c2, 'h2': h2}
return state, F.softmax_cross_entropy(y, t)
def predict(self, x_data, state):
x = Variable(x_data, volatile=True)
h0 = self.embed(x)
h1_in = self.l1_x(h0) + self.l1_h(state['h1'])
c1, h1 = F.lstm(state['c1'], h1_in)
h2_in = self.l2_x(h1) + self.l2_h(state['h2'])
c2, h2 = F.lstm(state['c2'], h2_in)
y = self.l3(h2)
state = {'c1': c1, 'h1': h1, 'c2': c2, 'h2': h2}
return state, F.softmax(y)
def make_initial_state(n_units, batchsize=50, train=True):
return {name: Variable(np.zeros((batchsize, n_units), dtype=np.float32),
volatile=not train)
for name in ('c1', 'h1', 'c2', 'h2')}
#-------------Explain2 in the Qiita-------------
Explanation: 4. Setting up the recurrent neural language model
RNNLM (this sets up the recurrent neural language model).
EmbedID performs the matrix lookup that converts sparse (one-hot) vectors into dense vectors.
The output is 4x n_units because the LSTM uses the input gate, output gate, forget gate, and the previous output as its pre-activation inputs.
The LSTM is realized by feeding the previously stored hidden-layer state back into the hidden layer.
Dropout is used to suppress overfitting.
A predict method is also implemented; given the input data and a state, it returns the next-token distribution and the new state.
A function that creates the initial state of the model is also defined here.
End of explanation
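For intuition, a rough NumPy sketch of how a 4*n_units pre-activation is split into the four LSTM gate blocks (illustrative only; the exact slicing and ordering inside chainer.functions.lstm may differ):
import numpy as np
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))
def toy_lstm_gate_split(h_in, c_prev):
    # h_in: (batch, 4*n_units) pre-activation; c_prev: (batch, n_units)
    a, i, f, o = np.split(h_in, 4, axis=1)       # candidate, input, forget, output
    c = np.tanh(a) * sigmoid(i) + c_prev * sigmoid(f)
    h = np.tanh(c) * sigmoid(o)
    return c, h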
# Prepare RNNLM model
model = CharRNN(len(vocab), n_units)
if gpu_ID >= 0:
cuda.check_cuda_available()
cuda.get_device(gpu_ID).use()
model.to_gpu()
optimizer = optimizers.RMSprop(lr=2e-3, alpha=0.95, eps=1e-8)
optimizer.setup(model)
Explanation: RNNLM (setting up the recurrent neural language model)
The recurrent neural language model created above is instantiated.
RMSprop is used as the optimization method:
http://qiita.com/skitaoka/items/e6afbe238cd69c899b2a
The initial parameters are given uniformly at random in a small range (roughly -0.1 to 0.1).
End of explanation
whole_len = train_data.shape[0]
jump = whole_len // batchsize
epoch = 0
start_at = time.time()
cur_at = start_at
state = make_initial_state(n_units, batchsize=batchsize)
cur_log_perp = 0
if gpu_ID >= 0:
accum_loss = Variable(cuda.zeros(()))
for key, value in state.items():
value.data = cuda.to_gpu(value.data)
else:
accum_loss = Variable(xp.zeros((), dtype=np.float32))
Explanation: 5. Settings before starting training
Get the size of the training data.
Set the jump width (the data is not traversed strictly sequentially).
Initialize the perplexity accumulator to 0.
Record the starting time.
Assign the initial state as the current state.
Initialize the state.
Initialize the loss to 0.
End of explanation
for i in range(int(jump * n_epochs)):
#-------------Explain4 in the Qiita-------------
x_batch = np.array([train_data[(jump * j + i) % whole_len]
for j in range(batchsize)])
y_batch = np.array([train_data[(jump * j + i + 1) % whole_len]
for j in range(batchsize)])
if gpu_ID >= 0:
x_batch = cuda.to_gpu(x_batch)
y_batch = cuda.to_gpu(y_batch)
state, loss_i = model.forward_one_step(x_batch, y_batch, state, dropout_ratio=0.7)
accum_loss += loss_i
cur_log_perp += loss_i.data
if (i + 1) % bprop_len == 0: # Run truncated BPTT
now = time.time()
cur_at = now
print('{}/{}, train_loss = {}, time = {:.2f}'.format((i + 1)/bprop_len, jump, accum_loss.data / bprop_len, now-cur_at))
optimizer.zero_grads()
accum_loss.backward()
accum_loss.unchain_backward() # truncate
accum_loss = Variable(np.zeros((), dtype=np.float32))
if gpu_ID >= 0:
accum_loss = Variable(cuda.zeros(()))
else:
accum_loss = Variable(np.zeros((), dtype=np.float32))
optimizer.clip_grads(grad_clip)
optimizer.update()
if (i + 1) % 10000 == 0:
perp = math.exp(cuda.to_cpu(cur_log_perp) / 10000)
print('iter {} training perplexity: {:.2f} '.format(i + 1, perp))
fn = ('%s/charrnn_epoch_%i.chainermodel' % (checkpoint_dir, epoch))
pickle.dump(copy.deepcopy(model).to_cpu(), open(fn, 'wb'))
cur_log_perp = 0
if (i + 1) % jump == 0:
epoch += 1
#-------------Explain4 in the Qiita-------------
sys.stdout.flush()
Explanation: 6. Parameter update method (mini-batches)
Training uses stochastic gradient descent.
A fixed-size batch of data is selected, and the parameters are updated while the loss is computed.
The running log-likelihood is accumulated as well.
The perplexity on the training data is computed periodically.
Parameters are updated by backpropagation.
The truncation length (bprop_len) controls how far back in the history gradients are propagated.
The optimizer.clip_grads call clips the L2 norm of the gradients.
The learning efficiency is gradually lowered to curb overfitting.
End of explanation
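For reference, clip_grads rescales the whole gradient when its L2 norm exceeds the threshold; a minimal NumPy sketch of that idea (not Chainer's actual implementation):
import numpy as np
def clip_by_global_l2_norm(grads, threshold):
    # grads: list of numpy arrays; rescale all of them if the combined
    # L2 norm exceeds the threshold.
    total_norm = np.sqrt(sum(float(np.sum(g ** 2)) for g in grads))
    if total_norm > threshold:
        scale = threshold / total_norm
        grads = [g * scale for g in grads]
    return grads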
# load model
#-------------Explain6 in the Qiita-------------
model = pickle.load(open("cv/charrnn_epoch_22.chainermodel", 'rb'))
#-------------Explain6 in the Qiita-------------
n_units = model.embed.W.shape[1]
if gpu_ID >= 0:
cuda.check_cuda_available()
cuda.get_device(gpu_ID).use()
model.to_gpu()
# initialize generator
state = make_initial_state(n_units, batchsize=1, train=False)
if gpu_ID >= 0:
for key, value in state.items():
value.data = cuda.to_gpu(value.data)
# show vocababury
ivocab = {}
ivocab = {v:k for k, v in vocab.items()}
Explanation: 7. Predicting language
Load the trained model.
Get the number of units from the model.
Set the initial empty character.
End of explanation
# initialize generator
index = np.random.randint(0, len(vocab), 1)[0]
sampling_range = 5
prev_char = np.array([0], dtype=np.int32)
if gpu_ID >= 0:
prev_char = cuda.to_gpu(prev_char)
for i in range(1000):
if ivocab[index] in ["}", ";"]:
sys.stdout.write(ivocab[index] + "\n")
else:
sys.stdout.write(ivocab[index] + " ")
#-------------Explain7 in the Qiita-------------
state, prob = model.predict(prev_char, state)
index = np.argmax(cuda.to_cpu(prob.data))
#index = np.random.choice(prob.data.argsort()[0,-sampling_range:][::-1], 1)[0]
#-------------Explain7 in the Qiita-------------
prev_char = np.array([index], dtype=np.int32)
if gpu_ID >= 0:
prev_char = cuda.to_gpu(prev_char)
print()
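As an optional variation (a sketch; the greedy argmax above and the commented-out top-k line are the original approaches), the next token could instead be sampled with a temperature:
# Optional: temperature sampling instead of greedy argmax (illustrative only)
def sample_with_temperature(prob_vector, temperature=0.8):
    p = np.log(prob_vector + 1e-12) / temperature
    p = np.exp(p - p.max())
    p /= p.sum()
    return np.random.choice(len(p), p=p)
# e.g. index = sample_with_temperature(cuda.to_cpu(prob.data)[0])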
Explanation: Use the trained model to predict characters.
The character and state produced by each prediction are used as the next input.
End of explanation |
4,488 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GPFlow first approximation
Step1: Simulating Data
Simulate random uniform 4-d vector. Give N of this.
Step2: Calculate distance
X can be interpreted as a covariate matrix in which the first two columns are the longitude and latitude.
GPFlow requires that all the covariates (including spatio-temporal coordinates) are in X.
Step3: Simulate the Gaussian Process $S$
Remember that for a stationary Gaussian Process, the value at a location z is independent of the betas (covariate weights).
Mean 0's $\Sigma$ Correlation matrix
$$S = MVN(0,\Sigma) + \epsilon$$
$ \epsilon \sim N(0,\sigma^{2}) $
S is a realization of a spatial process.
Step4: Simulate the Response Variable $y$
$$y_1(x_1,x_2) = S(x_1,x_2) $$
$$y_2(x_1,x_2) = \beta_0 + x_3\beta_1 + x_4\beta_2 + S(x_1,x_2)$$
Step5: GP Model !
This model is without covariates
Step6: Model for $y_1$
Step7: Like in tensorflow, m is a graph and has at least three nodes
Step8: compare with original parameters (made from the simulation)
Step9: it was close enough
GAUSSIAN PROCESS WITH LINEAR TREND
Defining the model
Step10: Original parameters
phi = 0.05 ---> lengthscale
sigma2 = 1.0 ---> variance transform
nugget = 0.03 ---> likelihood variance
beta_0 = 10.0 ---> mean_function b
beta_1 = 1.5 ---> mean_fucntionA [2]
beta_2 = -1.0 ---> mean_functionA [3]
mean_functionA[0] and mean_functionA[1] are the betas for x and y (the coordinates, respectively)
Without spatial coordinates as covariates
Step13: Custom made mean function (Erick Chacón )
Step14: Now we can use the special mean function without the coordinates (covariates).
Step15: Only 2 parameters now!
Step16: Original parameters
phi = 0.05 ---> lengthscale
sigma2 = 1.0 ---> variance transform
nugget = 0.03 ---> likelihood variance
beta_0 = 10.0 ---> mean_function b
beta_1 = 1.5 ---> mean_fucntionA [2]
beta_2 = -1.0 ---> mean_functionA [3] | Python Code:
## Import modules
import numpy as np
import scipy.spatial.distance as sp
from matplotlib import pyplot as plt
plt.style.use('ggplot')
## Parameter definitions
N = 1000
phi = 0.05
sigma2 = 1.0
beta_0 = 10.0
beta_1 = 1.5
beta_2 = -1.0
# nugget (measurement-noise term)
nugget = 0.03
Explanation: GPFlow first approximation
End of explanation
X = np.random.rand(N,4)
Explanation: Simulating Data
Simulate N draws of a random uniform 4-d vector.
End of explanation
points = X[:,0:2]
dist_points = sp.pdist(points)
## Reshape the vector to square matrix
distance_matrix = sp.squareform(dist_points)
correlation_matrix = np.exp(- distance_matrix / phi)
covariance_matrix = correlation_matrix * sigma2
plt.imshow(covariance_matrix)
Explanation: Calculate distance
X can be interpreted as a covariate matrix in which the first two columns are the longitude and latitude.
GPFlow requires that all the covariates (including spatio-temporal coordinates) are in X.
End of explanation
S = np.random.multivariate_normal(np.zeros(N), correlation_matrix) +\
np.random.normal(size = N) * nugget
S.shape
# We convert to Matrix [1 column]
S = S.reshape(N,1)
## Plot x, y using as color the Gaussian process
plt.scatter(X[:, 0], X[:, 1], c = S)
plt.colorbar()
Explanation: Simulate the Gaussian Process $S$
Remember that for a stationary Gaussian Process, the value at Z is independent of the betas (covariate weights).
Mean: a vector of 0's; $\Sigma$: the correlation matrix.
$$S = MVN(0,\Sigma) + \epsilon$$
$ \epsilon \sim N(0,\sigma^{2}) $
S is a realization of a spatial process.
End of explanation
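# Added illustration (not in the original notebook): the exponential correlation
# function implied by the kernel above, corr(d) = exp(-d / phi). With phi = 0.05
# the correlation falls below ~0.05 beyond a distance of roughly 3 * phi = 0.15.
d_grid = np.linspace(0.0, 1.0, 200)
plt.plot(d_grid, np.exp(-d_grid / phi))
plt.axvline(3 * phi, color='gray', linestyle='--', linewidth=1)
plt.xlabel('distance')
plt.ylabel('correlation')
plt.title('Exponential correlation function (phi = %.2f)' % phi)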
# remember: indexing starts at 0, so X[:, 2] is x_3 and X[:, 3] is x_4
mu = beta_0 + beta_1 * X[:, 2] + beta_2 * X[:, 3]
mu = mu.reshape(N,1)
Y1 = S
Y2 = mu + S
plt.scatter(X[:, 0], X[:, 1], c = Y2)
plt.colorbar()
S.shape
Explanation: Simulate the Response Variable $y$
$$y_1(x_1,x_2) = S(x_1,x_2) $$
$$y_2(x_1,x_2) = \beta_0 + x_3\beta_1 + x_4\beta_2 + S(x_1,x_2)$$
End of explanation
# Import GPFlow
import GPflow as gf
# Defining the model Matern function with \kappa = 0.5
k = gf.kernels.Matern12(2, lengthscales=1, active_dims = [0,1] )
type(k)
Explanation: GP Model !
This model is without covariates
End of explanation
m = gf.gpr.GPR(points, Y1, k)
## First guess
init_nugget = 0.001
m.likelihood.variance = init_nugget
print(m)
Explanation: Model for $y_1$
End of explanation
# Estimation using symbolic gradient descent
m.optimize()
print(m)
Explanation: Like in tensorflow, m is a graph and has at least three nodes: lengthscale, kern variance and likelihood variance
End of explanation
print(phi,sigma2,nugget)
Explanation: compare with original parameters (made from the simulation)
End of explanation
k = gf.kernels.Matern12(2, lengthscales=1, active_dims = [0,1] )
gf.mean_functions.Linear()
meanf = gf.mean_functions.Linear(np.ones((4,1)), np.ones(1))
m = gf.gpr.GPR(X, Y2, k, meanf)
m.likelihood.variance = init_nugget
print(m)
# Estimation
m.optimize()
print(m)
Explanation: it was close enough
GAUSSIAN PROCESS WITH LINEAR TREND
Defining the model
End of explanation
# Defining the model
k = gf.kernels.Matern12(2, lengthscales=1, active_dims = [0,1])
Explanation: Original parameters
phi = 0.05 ---> lengthscale
sigma2 = 1.0 ---> variance transform
nugget = 0.03 ---> likelihood variance
beta_0 = 10.0 ---> mean_function b
beta_1 = 1.5 ---> mean_functionA [2]
beta_2 = -1.0 ---> mean_functionA [3]
mean_functionA[0] and mean_functionA[1] are the betas for x and y (coordinates respectively)
Without spatial coordinates as covariates
End of explanation
from GPflow.mean_functions import MeanFunction, Param
import tensorflow as tf
class LinearG(MeanFunction):
y_i = A x_i + b
def __init__(self, A=None, b=None):
A is a matrix which maps each element of X to Y, b is an additive
constant.
If X has N rows and D columns, and Y is intended to have Q columns,
then A must be D x Q, b must be a vector of length Q.
A = np.ones((1, 1)) if A is None else A
b = np.zeros(1) if b is None else b
MeanFunction.__init__(self)
self.A = Param(np.atleast_2d(A))
self.b = Param(b)
def __call__(self, X):
Anew = tf.concat([np.zeros((2,1)),self.A],0)
return tf.matmul(X, Anew) + self.b
Explanation: Custom made mean function (Erick Chacón )
End of explanation
meanf = LinearG(np.ones((2,1)), np.ones(1))
m = gf.gpr.GPR(X, Y2, k, meanf)
m.likelihood.variance = 0.1
print(m)
Explanation: Now we can use the special mean function without the coordinates (covariates).
End of explanation
# Estimation
m.optimize()
print(m)
Explanation: Only 2 parameters now!
End of explanation
predicted_x = np.linspace(0.0,1.0,100)
from external_plugins.spystats.models import makeDuples
predsX = makeDuples(predicted_x)
pX = np.array(predsX)
tt = np.ones((10000,2)) *0.5
## Concatenate with horizontal stack
SuperX = np.hstack((pX,tt))
SuperX.shape
mean, variance = m.predict_y(SuperX)
minmean = min(mean)
maxmean = max(mean)
#plt.figure(figsize=(12, 6))
plt.scatter(pX[:,0], pX[:,1])
Xx, Yy = np.meshgrid(predicted_x,predicted_x)
plt.pcolor(Xx,Yy,mean.reshape(100,100),cmap=plt.cm.Accent)
Nn = 300
predicted_x = np.linspace(0.0,1.0,Nn)
Xx, Yy = np.meshgrid(predicted_x,predicted_x)
## Predict
pX = np.array(predsX)
tt = np.ones((Nn**2,2)) *0.5
from external_plugins.spystats.models import makeDuples
predsX = makeDuples(predicted_x)
pX = np.array(predsX)   # recompute pX after predsX has been rebuilt on the finer Nn-point grid
SuperX = np.hstack((pX,tt))
mean, variance = m.predict_y(SuperX)
minmean = min(mean)
maxmean = max(mean)
width = 12
height = 8
minz = minmean
maxz = maxmean
plt.figure(figsize=(width, height))
plt.subplot(1,2,1)
scat = plt.scatter(X[:, 0], X[:, 1], c = Y2)
#plt.axis('equal')
plt.xlim((0,1))
plt.ylim((0,1))
plt.clim(minz,maxz)
#plt.colorbar()
plt.subplot(1,2,2)
#field = plt.imshow(mean.reshape(100,100).transpose().transpose(),interpolation=None)
plt.pcolor(Xx,Yy,mean.reshape(Nn,Nn).transpose())
plt.colorbar()
plt.clim(minz,maxz)
fig, axes = plt.subplots(nrows=1, ncols=2)
scat = plt.scatter(X[:, 0], X[:, 1], c = Y2)
field = plt.imshow(mean.reshape(Nn,Nn),interpolation=None)
#fig.subplots_adjust(right=0.8)
#cbar_ax = fig.add_axes([0.85, 0.05])
fig.colorbar(field, ax=axes.ravel().tolist())
plt.imshow(mean.reshape(Nn,Nn),interpolation=None)
plt.colorbar()
# The lines below are left over from GPflow's 1-D regression example; xx, var and
# plot() are not defined in this notebook, so they are kept only as commented-out reference.
#plt.plot(xx, mean, 'b', lw=2)
#plt.fill_between(xx[:,0], mean[:,0] - 2*np.sqrt(var[:,0]), mean[:,0] + 2*np.sqrt(var[:,0]), color='blue', alpha=0.2)
#plt.xlim(-0.1, 1.1)
#plot(m)
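# Added sketch (not in the original notebook): the `variance` array returned by
# m.predict_y above is never visualised, so plot the predictive uncertainty on the
# same Nn x Nn grid used for the predicted mean.
plt.figure(figsize=(6, 5))
plt.pcolor(Xx, Yy, variance.reshape(Nn, Nn).transpose())
plt.colorbar(label='predictive variance')
plt.title('GP predictive variance over the prediction grid')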
Explanation: Original parameters
phi = 0.05 ---> lengthscale
sigma2 = 1.0 ---> variance transform
nugget = 0.03 ---> likelihood variance
beta_0 = 10.0 ---> mean_function b
beta_1 = 1.5 ---> mean_functionA [2]
beta_2 = -1.0 ---> mean_functionA [3]
End of explanation |
4,489 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
Properties of gas exchange in ocean biogeochemistry
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cas', 'sandbox-1', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: CAS
Source ID: SANDBOX-1
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:45
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe transport scheme if different from that of the ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
Properties of gas exchange in ocean biogeochemistry
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particules
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation |
4,490 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Running a Tapas fine-tuned checkpoint
This notebook shows how to load and make predictions with TAPAS model, which was introduced in the paper
Step2: Fetch models from Google Storage
Next we can get pretrained checkpoint from Google Storage. For the sake of speed, this is a medium sized model trained on WTQ. Note that best results in the paper were obtained with a large model.
Step3: Imports
Step5: Load checkpoint for prediction
Here's the prediction code, which will create an interaction_pb2.Interaction protobuf object, the data structure we use to store examples, and then call the prediction script.
Step7: Predict | Python Code:
# Copyright 2019 The Google AI Language Team Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: <a href="https://colab.research.google.com/github/google-research/tapas/blob/master/notebooks/wtq_predictions.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2020 The Google AI Language Team Authors
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
! pip install tapas-table-parsing
Explanation: Running a Tapas fine-tuned checkpoint
This notebook shows how to load and make predictions with TAPAS model, which was introduced in the paper: TAPAS: Weakly Supervised Table Parsing via Pre-training
Clone and install the repository
First, let's install the code.
End of explanation
! gsutil cp "gs://tapas_models/2020_08_05/tapas_wtq_wikisql_sqa_masklm_medium_reset.zip" "tapas_model.zip" && unzip tapas_model.zip
! mv tapas_wtq_wikisql_sqa_masklm_medium_reset tapas_model
Explanation: Fetch models from Google Storage
Next we can get pretrained checkpoint from Google Storage. For the sake of speed, this is a medium sized model trained on WTQ. Note that best results in the paper were obtained with a large model.
End of explanation
import tensorflow.compat.v1 as tf
import os
import shutil
import csv
import pandas as pd
import IPython
tf.get_logger().setLevel('ERROR')
from tapas.utils import tf_example_utils
from tapas.protos import interaction_pb2
from tapas.utils import number_annotation_utils
from tapas.scripts import prediction_utils
Explanation: Imports
End of explanation
os.makedirs('results/wtq/tf_examples', exist_ok=True)
os.makedirs('results/wtq/model', exist_ok=True)
with open('results/wtq/model/checkpoint', 'w') as f:
f.write('model_checkpoint_path: "model.ckpt-0"')
for suffix in ['.data-00000-of-00001', '.index', '.meta']:
shutil.copyfile(f'tapas_model/model.ckpt{suffix}', f'results/wtq/model/model.ckpt-0{suffix}')
max_seq_length = 512
vocab_file = "tapas_model/vocab.txt"
config = tf_example_utils.ClassifierConversionConfig(
vocab_file=vocab_file,
max_seq_length=max_seq_length,
max_column_id=max_seq_length,
max_row_id=max_seq_length,
strip_column_names=False,
add_aggregation_candidates=False,
)
converter = tf_example_utils.ToClassifierTensorflowExample(config)
def convert_interactions_to_examples(tables_and_queries):
Calls Tapas converter to convert interaction to example.
for idx, (table, queries) in enumerate(tables_and_queries):
interaction = interaction_pb2.Interaction()
for position, query in enumerate(queries):
question = interaction.questions.add()
question.original_text = query
question.id = f"{idx}-0_{position}"
for header in table[0]:
interaction.table.columns.add().text = header
for line in table[1:]:
row = interaction.table.rows.add()
for cell in line:
row.cells.add().text = cell
number_annotation_utils.add_numeric_values(interaction)
for i in range(len(interaction.questions)):
try:
yield converter.convert(interaction, i)
except ValueError as e:
print(f"Can't convert interaction: {interaction.id} error: {e}")
def write_tf_example(filename, examples):
with tf.io.TFRecordWriter(filename) as writer:
for example in examples:
writer.write(example.SerializeToString())
def aggregation_to_string(index):
if index == 0:
return "NONE"
if index == 1:
return "SUM"
if index == 2:
return "AVERAGE"
if index == 3:
return "COUNT"
raise ValueError(f"Unknown index: {index}")
def predict(table_data, queries):
table = [list(map(lambda s: s.strip(), row.split("|")))
for row in table_data.split("\n") if row.strip()]
examples = convert_interactions_to_examples([(table, queries)])
write_tf_example("results/wtq/tf_examples/test.tfrecord", examples)
write_tf_example("results/wtq/tf_examples/random-split-1-dev.tfrecord", [])
! python -m tapas.run_task_main \
--task="WTQ" \
--output_dir="results" \
--noloop_predict \
--test_batch_size={len(queries)} \
--tapas_verbosity="ERROR" \
--compression_type= \
--reset_position_index_per_cell \
--init_checkpoint="tapas_model/model.ckpt" \
--bert_config_file="tapas_model/bert_config.json" \
--mode="predict" 2> error
results_path = "results/wtq/model/test.tsv"
all_coordinates = []
df = pd.DataFrame(table[1:], columns=table[0])
display(IPython.display.HTML(df.to_html(index=False)))
print()
with open(results_path) as csvfile:
reader = csv.DictReader(csvfile, delimiter='\t')
for row in reader:
coordinates = sorted(prediction_utils.parse_coordinates(row["answer_coordinates"]))
all_coordinates.append(coordinates)
answers = ', '.join([table[row + 1][col] for row, col in coordinates])
position = int(row['position'])
aggregation = aggregation_to_string(int(row["pred_aggr"]))
print(">", queries[position])
answer_text = str(answers)
if aggregation != "NONE":
answer_text = f"{aggregation} of {answer_text}"
print(answer_text)
return all_coordinates
Explanation: Load checkpoint for prediction
Here's the prediction code, which will create an interaction_pb2.Interaction protobuf object, the data structure we use to store examples, and then call the prediction script.
End of explanation
# Based on SQA example nu-1000-0
result = predict(
Pos | No | Driver | Team | Laps | Time/Retired | Grid | Points
1 | 32 | Patrick Carpentier | Team Player's | 87 | 1:48:11.023 | 1 | 22
2 | 1 | Bruno Junqueira | Newman/Haas Racing | 87 | +0.8 secs | 2 | 17
3 | 3 | Paul Tracy | Team Player's | 87 | +28.6 secs | 3 | 14
4 | 9 | Michel Jourdain, Jr. | Team Rahal | 87 | +40.8 secs | 13 | 12
5 | 34 | Mario Haberfeld | Mi-Jack Conquest Racing | 87 | +42.1 secs | 6 | 10
6 | 20 | Oriol Servia | Patrick Racing | 87 | +1:00.2 | 10 | 8
7 | 51 | Adrian Fernandez | Fernandez Racing | 87 | +1:01.4 | 5 | 6
8 | 12 | Jimmy Vasser | American Spirit Team Johansson | 87 | +1:01.8 | 8 | 5
9 | 7 | Tiago Monteiro | Fittipaldi-Dingman Racing | 86 | + 1 Lap | 15 | 4
10 | 55 | Mario Dominguez | Herdez Competition | 86 | + 1 Lap | 11 | 3
11 | 27 | Bryan Herta | PK Racing | 86 | + 1 Lap | 12 | 2
12 | 31 | Ryan Hunter-Reay | American Spirit Team Johansson | 86 | + 1 Lap | 17 | 1
13 | 19 | Joel Camathias | Dale Coyne Racing | 85 | + 2 Laps | 18 | 0
14 | 33 | Alex Tagliani | Rocketsports Racing | 85 | + 2 Laps | 14 | 0
15 | 4 | Roberto Moreno | Herdez Competition | 85 | + 2 Laps | 9 | 0
16 | 11 | Geoff Boss | Dale Coyne Racing | 83 | Mechanical | 19 | 0
17 | 2 | Sebastien Bourdais | Newman/Haas Racing | 77 | Mechanical | 4 | 0
18 | 15 | Darren Manning | Walker Racing | 12 | Mechanical | 7 | 0
19 | 5 | Rodolfo Lavin | Walker Racing | 10 | Mechanical | 16 | 0
, ["Who are the drivers with 87 laps?", "Sum of laps for team Walker Racing?", "Average grid for the drivers with less than 80 laps?",])
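# Additional usage example (added; not part of the original notebook). The same
# `predict` helper works on any small pipe-separated table; the toy table and
# question below are invented purely for illustration.
toy_table = """
Name | Country | Medals
Alice | Portugal | 3
Bob | Spain | 1
Carol | Portugal | 2
"""
_ = predict(toy_table, ["What is the sum of medals for Portugal?"])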
Explanation: Predict
End of explanation |
4,491 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
From here it seems clear that the criteria necessary to satisfy the "suspect" condition are relatively narrow, and the real issue is whether or not poor photometry significantly affects the model.
Nearly 24% of all sources in the HST COSMOS-PS1 cross match have poor photometry. This has the potential to seriously jeopardize the efficacy of the model.
Step1: On the other hand, only 6% of the sources that have detections in the mean catalog have poor photometry.
As shown below, these sources begin to dominate at wwKronMag ~ 23.7 mag, meaning they will dominate classifications for sources this faint [there will be few sources this faint with nDetections > 0]. | Python Code:
print("Of the {} mean det sources, {} have suspect, and {} have poor photometry".format(sum(mean_det), sum(mean_det & suspect_phot), sum(mean_det & poor_phot)))
Explanation: From here it seems clear that the criteria necessary to satisfy the "suspect" condition are relatively narrow, and the real issue is whether or not poor photometry significantly affects the model.
Nearly 24% of all sources in the HST COSMOS-PS1 cross match have poor photometry. This has the potential to seriously jeopardize the efficacy of the model.
End of explanation
whiteKronMag = -2.5*np.log10(ts["wwKronFlux"]/3631)
plt.hist(whiteKronMag, bins=np.arange(14,25.5,0.25),alpha=0.5)
plt.hist(whiteKronMag[mean_det], bins=np.arange(14,25.5,0.25), alpha=0.5)
plt.hist(whiteKronMag[poor_phot], bins=np.arange(14,25.5,0.25), alpha=0.5)
plt.hist(whiteKronMag[poor_phot & mean_det], bins=np.arange(14,25.5,0.25), alpha=0.8)
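# Added check (not in the original analysis): estimate the magnitude bin where
# poor-photometry sources become the majority of the mean-detection sample,
# using the same 0.25 mag bins as the histograms above.
bins = np.arange(14, 25.5, 0.25)
n_det, _ = np.histogram(whiteKronMag[mean_det], bins=bins)
n_poor, _ = np.histogram(whiteKronMag[poor_phot & mean_det], bins=bins)
with np.errstate(divide='ignore', invalid='ignore'):
    poor_frac = n_poor / n_det
dominated = np.where(poor_frac > 0.5)[0]
if len(dominated):
    print("Poor photometry dominates mean-detection sources beyond ~{:.2f} mag".format(bins[dominated[0]]))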
Explanation: On the other hand, only 6% of the sources that have detections in the mean catalog have poor photometry.
As shown below, these sources begin to dominate at wwKronMag ~ 23.7 mag, meaning they will dominate classifications for sources this faint [there will be few sources this faint with nDetections > 0].
End of explanation |
4,492 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Molecular Dynamics
Step1: Basics of Molecular Dynamics | Python Code:
from IPython.core.display import HTML
css_file = 'https://raw.githubusercontent.com/ngcm/training-public/master/ipython_notebook_styles/ngcmstyle.css'
HTML(url=css_file)
Explanation: Molecular Dynamics: Lab 1
In part based on Fortran code from Furio Ercolessi.
End of explanation
%matplotlib inline
import numpy
from matplotlib import pyplot
from mpl_toolkits.mplot3d.axes3d import Axes3D
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size'] = 16
rcParams['figure.figsize'] = (12,6)
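# Added warm-up sketch (not from the original lab sheet): Ercolessi-style MD codes
# commonly use the Lennard-Jones pair potential, so plot it here in reduced units.
# Whether this lab uses exactly this potential is an assumption.
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    # V(r) = 4*eps*[(sigma/r)**12 - (sigma/r)**6]
    sr6 = (sigma / r)**6
    return 4.0 * epsilon * (sr6**2 - sr6)

r = numpy.linspace(0.9, 3.0, 200)
pyplot.plot(r, lennard_jones(r))
pyplot.axhline(0.0, color='gray', linewidth=1)
pyplot.xlabel(r'$r/\sigma$')
pyplot.ylabel(r'$V(r)/\epsilon$')
pyplot.title('Lennard-Jones pair potential (reduced units)')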
Explanation: Basics of Molecular Dynamics
End of explanation |
4,493 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
7 - Advanced topics - Multiple SceneObjects Example
This journal shows how to
Step1: <a id='step1a'></a>
A. Generating the first scene object
This is a standard fixed-tilt setup for one hour. Gencumsky could be used too for the whole year.
The key here is that we are setting in sceneDict the variable appendRadfile to true.
Step2: Checking values after Scene for the scene Object created
Step3: <a id='step1b'></a>
B. Generating the second scene object.
Creating a different Scene. Same Module, different values.
Notice we are passing a different originx and originy to displace the center of this new sceneObj to that location.
Step4: <a id='step2'></a>
2. Add a Marker at the Origin (coordinates 0,0) for help with visualization
Creating "markers" for the geometry is useful to orient oneself when doing sanity-checks (for example, to mark where 0,0 is, or where the 5,0 coordinate is).
<div class="alert alert-warning">
Note that if you analyze the module that intersects with the marker, some of the sensors will be wrong. To perform valid analysis, do so without markers, as they are 'real' objects on your scene.
</div>
Step5: <a id='step3'></a>
3. Combine all scene Objects into one OCT file & Visualize
Marking this as its own step because this is the step that joins our Scene Objects 1, 2 and the appended Post.
Run makeOCT to make the scene with both scene objects AND the marker in it, the ground and the skies.
Step6: At this point you should be able to go into a command window (cmd.exe) and check the geometry. Example
Step7: It should look something like this
Step8: Let's do a Sanity check for first object
Step9: Let's analyze a module in sceneobject 2 now. Remember we can specify which module/row we want. We only have one row in this Object though.
Step10: Sanity check for the second object. Here we explicitly passed modWanted=4 and rowWanted=1; with a single row that is row 0, and module 4 ~ indexed at 0 appears as a3.0.a0.Longi... and a3.0.a1.Longi since it is a 2-UP system. | Python Code:
import os
import numpy as np
import pandas as pd
from pathlib import Path
testfolder = str(Path().resolve().parent.parent / 'bifacial_radiance' / 'TEMP' / 'Tutorial_07')
if not os.path.exists(testfolder):
os.makedirs(testfolder)
print ("Your simulation will be stored in %s" % testfolder)
from bifacial_radiance import RadianceObj, AnalysisObj
Explanation: 7 - Advanced topics - Multiple SceneObjects Example
This journal shows how to:
<ul>
<li> Create multiple scene objects in the same scene. </li>
<li> Analyze multiple scene objects in the same scene </li>
<li> Add a marker to find the origin (0,0) on a scene (for sanity-checks/visualization). </li>
A scene Object is defined as an array of modules, with whatever parameters you want to give it. In this case, we are modeling one array of 2 rows of 5 modules in landscape, and one array of 1 row of 5 modules in 2-UP, portrait configuration, as the image below:

### Steps:
<ol>
<li> <a href='#step1'> Generating the setups</a></li>
<ol type='A'>
<li> <a href='#step1a'> Generating the first scene object</a></li>
<li> <a href='#step1b'> Generating the second scene object.</a></li>
</ol>
<li> <a href='#step2'> Add a Marker at the Origin (coordinates 0,0) for help with visualization </a></li>
<li> <a href='#step3'> Combine all scene Objects into one OCT file & Visualize </a></li>
<li> <a href='#step4'> Analysis for Each sceneObject </a></li>
</ol>
<a id='step1'></a>
### 1. Generating the Setups
End of explanation
demo = RadianceObj("tutorial_7", path = testfolder)
demo.setGround(0.62)
epwfile = demo.getEPW(lat = 37.5, lon = -77.6)
metdata = demo.readWeatherFile(epwfile, coerce_year=2001)
fullYear = True
timestamp = metdata.datetime.index(pd.to_datetime('2001-06-17 13:0:0 -5')) # Noon, June 17th
demo.gendaylit(timestamp)
module_type = 'test-moduleA'
mymodule = demo.makeModule(name=module_type,y=1,x=1.7)
sceneDict = {'tilt':10,'pitch':1.5,'clearance_height':0.2,'azimuth':180, 'nMods': 5, 'nRows': 2, 'appendRadfile':True}
sceneObj1 = demo.makeScene(mymodule, sceneDict)
Explanation: <a id='step1a'></a>
A. Generating the first scene object
This is a standard fixed-tilt setup for one hour. Gencumsky could be used too for the whole year.
The key here is that we are setting in sceneDict the variable appendRadfile to true.
End of explanation
print ("SceneObj1 modulefile: %s" % sceneObj1.modulefile)
print ("SceneObj1 SceneFile: %s" %sceneObj1.radfiles)
print ("SceneObj1 GCR: %s" % round(sceneObj1.gcr,2))
print ("FileLists: \n %s" % demo.getfilelist())
Explanation: Checking values after Scene for the scene Object created
End of explanation
sceneDict2 = {'tilt':30,'pitch':5,'clearance_height':1,'azimuth':180,
'nMods': 5, 'nRows': 1, 'originx': 0, 'originy': 3.5, 'appendRadfile':True}
module_type2='test-moduleB'
mymodule2 = demo.makeModule(name=module_type2,x=1,y=1.6, numpanels=2, ygap=0.15)
sceneObj2 = demo.makeScene(mymodule2, sceneDict2)
# Checking values for both scenes after creating new SceneObj
print ("SceneObj1 modulefile: %s" % sceneObj1.modulefile)
print ("SceneObj1 SceneFile: %s" %sceneObj1.radfiles)
print ("SceneObj1 GCR: %s" % round(sceneObj1.gcr,2))
print ("\nSceneObj2 modulefile: %s" % sceneObj2.modulefile)
print ("SceneObj2 SceneFile: %s" %sceneObj2.radfiles)
print ("SceneObj2 GCR: %s" % round(sceneObj2.gcr,2))
#getfilelist should have info for the rad file created by BOTH scene objects.
print ("NEW FileLists: \n %s" % demo.getfilelist())
Explanation: <a id='step1b'></a>
B. Generating the second scene object.
Creating a different Scene. Same Module, different values.
Notice we are passing a different originx and originy to displace the center of this new sceneObj to that location.
End of explanation
# NOTE: offsetting translation by 0.1 so the center of the marker (with sides of 0.2) is at the desired coordinate.
name='Post1'
text='! genbox black originMarker 0.2 0.2 1 | xform -t -0.1 -0.1 0'
customObject = demo.makeCustomObject(name,text)
demo.appendtoScene(sceneObj1.radfiles, customObject, '!xform -rz 0')
Explanation: <a id='step2'></a>
2. Add a Marker at the Origin (coordinates 0,0) for help with visualization
Creating "markers" for the geometry is useful to orient oneself when doing sanity-checks (for example, to mark where 0,0 is, or where the 5,0 coordinate is).
<div class="alert alert-warning">
Note that if you analyze the module that intersects with the marker, some of the sensors will be wrong. To perform valid analysis, do so without markers, as they are 'real' objects on your scene.
</div>
End of explanation
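For example, a second marker at the 5,0 coordinate mentioned above could be appended the same way. This is my own sketch (it reuses only the calls already shown; remember that markers should be removed before doing any real analysis):
name='Post2'
text='! genbox black marker50 0.2 0.2 1 | xform -t 4.9 -0.1 0'
customObject = demo.makeCustomObject(name,text)
demo.appendtoScene(sceneObj1.radfiles, customObject, '!xform -rz 0')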
octfile = demo.makeOct(demo.getfilelist())
Explanation: <a id='step3'></a>
3. Combine all scene Objects into one OCT file & Visualize
Marking this as its own step because this is the step that joins our Scene Objects 1, 2 and the appended Post.
Run makeOCT to make the scene with both scene objects AND the marker in it, the ground and the skies.
End of explanation
## Comment the ! line below to run rvu from the Jupyter notebook instead of your terminal.
## Simulation will stop until you close the rvu window
#!rvu -vf views\front.vp -e .01 -pe 0.3 -vp 1 -7.5 12 tutorial_7.oct
Explanation: At this point you should be able to go into a command window (cmd.exe) and check the geometry. Example:
rvu -vf views\front.vp -e .01 -pe 0.3 -vp 1 -7.5 12 tutorial_7.oct
End of explanation
sceneObj1.sceneDict
sceneObj2.sceneDict
analysis = AnalysisObj(octfile, demo.basename)
frontscan, backscan = analysis.moduleAnalysis(sceneObj1)
frontdict, backdict = analysis.analysis(octfile, "FirstObj", frontscan, backscan) # compare the back vs front irradiance
print('Annual bifacial ratio First Set of Panels: %0.3f ' %( np.mean(analysis.Wm2Back) / np.mean(analysis.Wm2Front)) )
Explanation: It should look something like this:
<a id='step4'></a>
4. Analysis for Each sceneObject
a sceneDict is saved for each scene. When calling the Analysis, you should reference the scene object you want.
End of explanation
print (frontdict['x'])
print ("")
print (frontdict['y'])
print ("")
print (frontdict['mattype'])
Explanation: Let's do a sanity check for the first object:
Since we didn't pass any desired module, it should grab the center module of the center row (rounding down). For 2 rows and 5 modules, that is row 1, module 3; counted from 0 this corresponds to material names beginning with "a2.0.a0".
End of explanation
analysis2 = AnalysisObj(octfile, demo.basename)
modWanted = 4
rowWanted = 1
sensorsy=4
frontscan, backscan = analysis2.moduleAnalysis(sceneObj2, modWanted = modWanted, rowWanted = rowWanted, sensorsy=sensorsy)
frontdict2, backdict2 = analysis2.analysis(octfile, "SecondObj", frontscan, backscan)
print('Annual bifacial ratio Second Set of Panels: %0.3f ' %( np.mean(analysis2.Wm2Back) / np.mean(analysis2.Wm2Front)) )
Explanation: Let's analyze a module in sceneobject 2 now. Remember we can specify which module/row we want. We only have one row in this Object though.
End of explanation
print ("x coordinate points:" , frontdict2['x'])
print ("")
print ("y coordinate points:", frontdict2['y'])
print ("")
print ("Elements intersected at each point: ", frontdict2['mattype'])
Explanation: Sanity check for the second object. Here we explicitly requested module 4 of the single row; counted from 0 that is module index 3, so the sensors should land on the a3.0.a0... and a3.0.a1... surfaces, two panel surfaces because it is a 2-UP system.
End of explanation |
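A quick programmatic version of that check, as a sketch of my own (it assumes the frontdict2 returned above and the aN.0.aM naming pattern shown in the output):
# every front sensor should have landed on module index 3 (the 4th module) of the single row
assert all('a3.0' in mat for mat in frontdict2['mattype']), 'a sensor missed the requested module'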
4,494 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Optimisation in a transformed parameter space
This example shows you how to run an optimisation in a transformed parameter space, using a pints.Transformation object.
Parameter transformations can often significantly improve the performance and robustness of an optimisation (see e.g. [1]).
In addition, some methods have requirements (e.g. that all parameters are unconstrained, or that all parameters have similar magnitudes) that prevent them from being used on certain models in their untransformed form.
[1] Whittaker, DG, Clerx, M, Lei, CL, Christini, DJ, Mirams, GR. Calibration of ionic and cellular cardiac electrophysiology models. WIREs Syst Biol Med. 2020; 12
Step1: We then define some parameters and set up the problem for the optimisation.
The parameter vector for the toy logistic model is $\theta_\text{original} = [r, K]$, where $r$ is the growth rate and $K$ is called the carrying capacity.
Step2: In this example, we will pick some difficult starting points for the optimisation
Step3: Now we run a Nelder-Mead optimisation without doing any parameter transformation to check its performance.
Step4: As we can see, the optimiser made some initial improvements, but then got stuck somewhere in $[r, K]$ space, and failed to converge to the true parameters.
We can improve its performance by defining a parameter transformation so that it searches in $\theta = [r, \log(K)]$ space instead.
To do this, we'll create a pints.Transformation object, that leaves $r$ alone, but applies a log-transformation to $K$.
This is implemented by defining an IdentityTransformation for $r$, a LogTransformation for $K$, and then creating a ComposedTransformation for the full parameter vector $\theta$
Step5: The resulting Transformation object can be passed in the optimise method, as shown below, but can also be used in combination with Controller classes such as the pints.OptimisationController or pints.MCMCController. | Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pints
import pints.toy as toy
# Set some random seed so this notebook can be reproduced
np.random.seed(10)
# Load a logistic forward model
model = toy.LogisticModel()
Explanation: Optimisation in a transformed parameter space
This example shows you how to run an optimisation in a transformed parameter space, using a pints.Transformation object.
Parameter transformations can often significantly improve the performance and robustness of an optimisation (see e.g. [1]).
In addition, some methods have requirements (e.g. that all parameters are unconstrained, or that all parameters have similar magnitudes) that prevent them from being used on certain models in their untransformed form.
[1] Whittaker, DG, Clerx, M, Lei, CL, Christini, DJ, Mirams, GR. Calibration of ionic and cellular cardiac electrophysiology models. WIREs Syst Biol Med. 2020; 12:e1482. https://doi.org/10.1002/wsbm.1482
We start by loading a pints.Forwardmodel implementation, in this case a logistic model.
End of explanation
# Create some toy data
real_parameters = [0.015, 400] # [r, K]
times = np.linspace(0, 1000, 1000)
values = model.simulate(real_parameters, times)
# Add noise
values += np.random.normal(0, 10, values.shape)
# Create an object with links to the model and time series
problem = pints.SingleOutputProblem(model, times, values)
# Select a score function
score = pints.SumOfSquaresError(problem)
Explanation: We then define some parameters and set up the problem for the optimisation.
The parameter vector for the toy logistic model is $\theta_\text{original} = [r, K]$, where $r$ is the growth rate and $K$ is called the carrying capacity.
End of explanation
x0 = [0.5, 0.1] # [r, K]
sigma0 = [0.01, 2.0]
Explanation: In this example, we will pick some difficult starting points for the optimisation:
End of explanation
found_parameters, found_value = pints.optimise(
score,
x0,
sigma0,
method=pints.NelderMead,
transformation=None,
)
# Show score of true solution
print('Score at true solution: ')
print(score(real_parameters))
# Compare parameters with original
print('Found solution: True parameters:' )
for k, x in enumerate(found_parameters):
print(pints.strfloat(x) + ' ' + pints.strfloat(real_parameters[k]))
# Show quality of fit
plt.figure()
plt.xlabel('Time')
plt.ylabel('Value')
plt.plot(times, values, alpha=0.25, label='Noisy data')
plt.plot(times, problem.evaluate(found_parameters), label='Fit without transformation')
plt.legend()
plt.show()
Explanation: Now we run a Nelder-Mead optimisation without doing any parameter transformation to check its performance.
End of explanation
# No transformation: [r] -> [r]
transform_r = pints.IdentityTransformation(n_parameters=1)
# Log-transformation: [K] -> [log(K)]
transform_K = pints.LogTransformation(n_parameters=1)
# The full transformation: [r, K] -> [r, log(K)]
transformation = pints.ComposedTransformation(transform_r, transform_K)
Explanation: As we can see, the optimiser made some initial improvements, but then got stuck somewhere in $[r, K]$ space, and failed to converge to the true parameters.
We can improve its performance by defining a parameter transformation so that it searches in $\theta = [r, \log(K)]$ space instead.
To do this, we'll create a pints.Transformation object, that leaves $r$ alone, but applies a log-transformation to $K$.
This is implemented by defining an IdentityTransformation for $r$, a LogTransformation for $K$, and then creating a ComposedTransformation for the full parameter vector $\theta$:
End of explanation
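As a quick check of what this composed object does, a small sketch (it assumes the to_search/to_model methods described in the PINTS transformation documentation):
q = transformation.to_search([0.015, 400])   # roughly [0.015, log(400)]
print(q)
print(transformation.to_model(q))            # back to the original [r, K] space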
found_parameters_trans, found_value_trans = pints.optimise(
score,
x0,
sigma0,
method=pints.NelderMead,
transformation=transformation, # Pass the transformation to the optimiser
)
# Show score of true solution
print('Score at true solution: ')
print(score(real_parameters))
# Compare parameters with original
print('Found solution: True parameters:' )
for k, x in enumerate(found_parameters_trans):
print(pints.strfloat(x) + ' ' + pints.strfloat(real_parameters[k]))
# Show quality of fit
plt.figure()
plt.xlabel('Time')
plt.ylabel('Value')
plt.plot(times, values, alpha=0.25, label='Noisy data')
plt.plot(times, problem.evaluate(found_parameters), label='Fit without transformation')
plt.plot(times, problem.evaluate(found_parameters_trans), label='Fit with transformation')
plt.legend()
plt.show()
Explanation: The resulting Transformation object can be passed in the optimise method, as shown below, but can also be used in combination with Controller classes such as the pints.OptimisationController or pints.MCMCController.
End of explanation |
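For completeness, here is a sketch of the controller-based route mentioned above (the keyword names mirror the optimise call; treat the exact controller options as assumptions to verify against the PINTS docs):
opt = pints.OptimisationController(
    score, x0, sigma0,
    transformation=transformation,
    method=pints.NelderMead,
)
opt.set_max_iterations(2000)
xbest, fbest = opt.run()
print(xbest, fbest)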
4,495 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Convolution Theorem
Periodic convolution
Before talking about the Convolution Theorem, we need to understand periodic convolution (pconv). So far we have seen linear convolution (conv or scipy.signal.convolve2d), where the kernel $h$ has its origin at its center and the image $f$ has its origin at the top-left corner. In periodic convolution, the origin of the kernel $h$ coincides with the origin of the image $f$. Both the kernel and the image are periodic, with the same period. Since the kernel $h$ is usually much smaller than the image $f$, it is zero-padded to the size of $f$.
Step1: The convolution theorem
The convolution theorem states that
$$ F(f * g) = F(f) \cdot F(g) $$
$$ F(f\cdot g) = F(f) * F(g) $$
where $F$ denotes the Fourier transform operator, i.e., $F(f)$ and $F(g)$ are the transforms of $f$ and $g$. It is important to note that the convolution used here is the periodic convolution.
Let us illustrate the Convolution Theorem with a numerical example. First, we compute the periodic convolution of an image $f$ with a kernel $h$
Step2: Now we compute the Fourier transform $F(f)$ of the image and $F(h)$ of the kernel. First of all, we need to make sure that the image $f$ and the kernel $h$ are periodic and have the same size.
Step3: By the convolution theorem, gg and g should be equal | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
from numpy.fft import *
import sys,os
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
f = np.array([[1,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0],
[0,0,0,1,0,0,0,0,0],
[0,0,0,0,0,0,0,0,1],
[0,0,0,0,0,0,0,0,0]])
print("Image (f):")
print(f)
h = np.array([[1,2,3],
[4,5,6]])
print("\n Image Kernel (h):")
print(h)
g1 = ia.pconv(f,h)
print("Image Output (pconv):")
print(g1)
Explanation: The Convolution Theorem
Periodic convolution
Before talking about the Convolution Theorem, we need to understand periodic convolution (pconv). So far we have seen linear convolution (conv or scipy.signal.convolve2d), where the kernel $h$ has its origin at its center and the image $f$ has its origin at the top-left corner. In periodic convolution, the origin of the kernel $h$ coincides with the origin of the image $f$. Both the kernel and the image are periodic, with the same period. Since the kernel $h$ is usually much smaller than the image $f$, it is zero-padded to the size of $f$.
End of explanation
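To make the wrap-around explicit, here is a deliberately naive sketch of periodic convolution (an illustration only, not the ia.pconv implementation; it reuses numpy as np and the f and h arrays from the cell above):
def pconv_naive(f, h):
    "Periodic (circular) convolution with the kernel origin at (0, 0)."
    H, W = f.shape
    hp = np.zeros(f.shape, dtype=float)
    hp[:h.shape[0], :h.shape[1]] = h          # zero-pad the kernel to the image size
    g = np.zeros(f.shape, dtype=float)
    for i in range(H):
        for j in range(W):
            for k in range(H):
                for l in range(W):
                    g[i, j] += hp[k, l] * f[(i - k) % H, (j - l) % W]   # indices wrap around
    return g

print(pconv_naive(f, h))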
fr = np.linspace(-1,1,6)
f = np.array([fr,2*fr,fr,fr])
print(f)
hh = np.array([-1,0,+1])
h = np.array([hh,2*hh,hh])
print(h)
g = ia.pconv(f,h)
print(g)
Explanation: The convolution theorem
The convolution theorem states that
$$ F(f * g) = F(f) \cdot F(g) $$
$$ F(f\cdot g) = F(f) * F(g) $$
where $F$ denotes the Fourier transform operator, i.e., $F(f)$ and $F(g)$ are the transforms of $f$ and $g$. It is important to note that the convolution used here is the periodic convolution.
Let us illustrate the Convolution Theorem with a numerical example. First, we compute the periodic convolution of an image $f$ with a kernel $h$
End of explanation
# Pad h with zeros to the size of f
aux = np.zeros(f.shape)
r,c = h.shape
aux[:r,:c] = h
# Compute the Fourier transforms of f and h
F = fft2(f)
H = fft2(aux)
# Multiply the transforms
G = F * H
# Compute the inverse transform
gg = ifft2(G)
print("Result gg: \n",np.around(gg))
Explanation: Now we compute the Fourier transform $F(f)$ of the image and $F(h)$ of the kernel. First of all, we need to make sure that the image $f$ and the kernel $h$ are periodic and have the same size.
End of explanation
print('Did the convolution theorem work?', np.allclose(gg.real,g))
Explanation: By the convolution theorem, gg and g should be equal
End of explanation |
4,496 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Contact Binary with Spots
Setup
Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
Step1: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
Step2: Model without Spots
Step3: Adding Spots
Let's add a spot to the primary component in our binary. Note that if you attempt to attach to the 'contact_envelope' component, an error will be raised. Spots can only be attached to star components.
The 'colat' parameter defines the latitude on the star measured from its North (spin) Pole. The 'long' parameter measures the longitude of the spot, with longitude = 0 being defined as pointing towards the other star at t0. See the spots tutorial for more details.
Step4: Comparing Light Curves
Step5: Spots near the "neck"
Since the spots are still defined with the coordinate system of the individual star components, this can result in spots that are distorted and even "cropped" at the neck. Furthermore, spots with long=0 could be completely "hidden" by the neck or result in a ring around the neck.
To see this, let's plot our mesh with teff as the facecolor.
Step6: Now if we set the long closer to the neck, we'll see it get cropped by the boundary between the two components. If we need a spot that crosses between the two "halves" of the contact, we'd have to add separate spots to each component, with each getting cropped at the boundary.
Step7: If we set long to zero, the spot completely disappears (as there is nowhere in the neck that is still on the surface).
Step8: But if we increase the radius large enough, we'll get a ring. | Python Code:
!pip install -I "phoebe>=2.2,<2.3"
%matplotlib inline
Explanation: Contact Binary with Spots
Setup
Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
import phoebe
from phoebe import u # units
logger = phoebe.logger()
b = phoebe.default_binary(contact_binary=True)
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
b.add_dataset('lc', times=phoebe.linspace(0,0.5,101))
b.run_compute(irrad_method='none', model='no_spot')
Explanation: Model without Spots
End of explanation
b.add_feature('spot', component='primary', feature='spot01', relteff=0.9, radius=20, colat=90, long=-45)
b.run_compute(irrad_method='none', model='with_spot')
Explanation: Adding Spots
Let's add a spot to the primary component in our binary. Note that if you attempt to attach to the 'contact_envelope' component, an error will be raised. Spots can only be attached to star components.
The 'colat' parameter defines the latitude on the star measured from its North (spin) Pole. The 'long' parameter measures the longitude of the spot, with longitude = 0 being defined as pointing towards the other star at t0. See the spots tutorial for more details.
End of explanation
afig, mplfig = b.plot(show=True, legend=True)
Explanation: Comparing Light Curves
End of explanation
b.remove_dataset(kind='lc')
b.remove_model(model=['with_spot', 'no_spot'])
b.add_dataset('mesh', compute_times=b.to_time(0.25), columns='teffs')
b.run_compute(irrad_method='none')
afig, mplfig = b.plot(fc='teffs', ec='face', fcmap='plasma', show=True)
Explanation: Spots near the "neck"
Since the spots are still defined with the coordinate system of the individual star components, this can result in spots that are distorted and even "cropped" at the neck. Furthermore, spots with long=0 could be completely "hidden" by the neck or result in a ring around the neck.
To see this, let's plot our mesh with teff as the facecolor.
End of explanation
b.set_value('long', value=-30)
b.run_compute(irrad_method='none')
afig, mplfig = b.plot(fc='teffs', ec='face', fcmap='plasma', show=True)
Explanation: Now if we set the long closer to the neck, we'll see it get cropped by the boundary between the two components. If we need a spot that crosses between the two "halves" of the contact, we'd have to add separate spots to each component, with each getting cropped at the boundary.
End of explanation
b.set_value('long', value=0.0)
b.run_compute(irrad_method='none')
afig, mplfig = b.plot(fc='teffs', ec='face', fcmap='plasma', show=True)
Explanation: If we set long to zero, the spot completely disappears (as there is nowhere in the neck that is still on the surface).
End of explanation
b.set_value('radius', value=40)
b.run_compute(irrad_method='none')
afig, mplfig = b.plot(fc='teffs', ec='face', fcmap='plasma', show=True)
Explanation: But if we increase the radius large enough, we'll get a ring.
End of explanation |
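If a spot really needs to appear on both halves of the contact, the text above suggests adding separate spots to each component. A rough sketch of that idea (my addition, not from the original tutorial; it reuses the bundle state from the cells above, and the secondary's longitude convention should be double-checked):
b.add_feature('spot', component='secondary', feature='spot01_sec',
              relteff=0.9, radius=40, colat=90, long=0)
b.run_compute(irrad_method='none')
afig, mplfig = b.plot(fc='teffs', ec='face', fcmap='plasma', show=True)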
4,497 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Updating Plots
bqplot is an interactive plotting library. Attributes of plots can be updated in place without recreating the whole figure and marks. Let's look at idiomatic ways of updating plots in bqplot
Step1: To update the attributes of the plot(x, y, color etc.) the correct way to do it is to update the attributes of the mark objects in place. Recreating figure or mark objects is not recommended
Step2: We can update multiple attributes of the mark object simultaneously by using the hold_sync method like so. (This makes only one round trip from the python kernel to front end)
Step3: We can also animate the changes to the x, y and other data attributes by setting the animation_duration property on the figure object. More examples of animations can be found in the Animations notebook
Step4: Let's look at an example to update a scatter plot | Python Code:
import numpy as np
import bqplot.pyplot as plt
x = np.linspace(-10, 10, 100)
y = np.sin(x)
fig = plt.figure()
line = plt.plot(x=x, y=y)
fig
Explanation: Updating Plots
bqplot is an interactive plotting library. Attributes of plots can be updated in place without recreating the whole figure and marks. Let's look at idiomatic ways of updating plots in bqplot
End of explanation
# update y attribute of the line object
line.y = np.tan(x)
Explanation: To update the attributes of the plot(x, y, color etc.) the correct way to do it is to update the attributes of the mark objects in place. Recreating figure or mark objects is not recommended
End of explanation
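Other attributes of the same mark can be updated in place too; a small sketch (attribute names taken from bqplot's Lines mark):
# restyle the existing line without recreating it
line.colors = ['red']
line.stroke_width = 2
line.line_style = 'dashed'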
# update both x and y together
with line.hold_sync():
line.x = np.arange(100)
line.y = x ** 3 - x
Explanation: We can update multiple attributes of the mark object simultaneously by using the hold_sync method like so. (This makes only one round trip from the python kernel to front end)
End of explanation
fig.animation_duration = 1000
line.y = np.cos(x)
Explanation: We can also animate the changes to the x, y and other data attributes by setting the animation_duration property on the figure object. More examples of animations can be found in the Animations notebook
End of explanation
x, y = np.random.rand(2, 10)
fig = plt.figure(animation_duration=1000)
scat = plt.scatter(x=x, y=y)
fig
# update the x and y attributes in place using hold_sync
with scat.hold_sync():
scat.x, scat.y = np.random.rand(2, 10)
Explanation: Let's look at an example to update a scatter plot
End of explanation |
4,498 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Java vs Python
Who am I?
Cesare Placanica
Cisco Photonics NMTG group.
Java developer since 2010 (Java 1.5/1.7, Spring, Hibernate and Tomcat).
Previously C/C++ embedded RTOSes.
Python Wannabe since 2004.
Agenda
Round 1 AOP vs decorators.
Round 2 Generics vs Duck Typing.
Round 1 Aspect Oriented Programming vs Decorators
http
Step1: The former was both a Closure and a Higher-Order Function. Have you heard about Functional Programming?
Step2: The previous was the built-in "Decorator" syntax.
Batteries included!!
Round 2 Generics vs Duck Typing
Problem definition
Write "Generic" code.
OK, less naively: in Java, write code in terms of types to be specified later that are checked at compile time when a "parametrized" class is instantiated by its client.
Duck Typing is a sort of Generic Programming where an object's suitability is determined by the presence of certain methods and properties, rather than by the actual type of the object. | Python Code:
"Elapsed decorator."
import datetime
def elapsed(func):
"Elapsed decorator"
def _wrapper(*args, **kwargs):
"Decoration function"
start = datetime.datetime.now()
ret = func(*args, **kwargs)
print("Elapsed time", datetime.datetime.now() - start)
return ret
return _wrapper
Explanation: Java vs Python
Who am I?
Cesare Placanica
Cisco Photonics NMTG group.
Java developer since 2010 (Java 1.5/1.7, Spring, Hibernate and Tomcat).
Previously C/C++ embedded RTOSes.
Python Wannabe since 2004.
Agenda
Round 1 AOP vs decorators.
Round 2 Generics vs Duck Typing.
Round 1 Aspect Oriented Programming vs Decorators
http://stackoverflow.com/questions/4551457/python-like-decorators-in-java
Problem Definition
Encapsulation of cross-cutting concerns. Logging, AAA, Caching, Profiling.
<img src="burger.png" alt="Drawing" style="width: 400px;">
Java
A Spring Aspect Oriented Programming example.
https://github.com/keobox/dojokaffe/tree/master/fp/spring-elapsed
Code walkthrough in Eclipse.
Python
End of explanation
# Usage:
import time
@elapsed
def task():
"A difficult task."
print("Processing...")
time.sleep(2)
task()
Explanation: The former was both a Closure and a Higher-Order Function. Have you heard about Functional Programming?
End of explanation
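A common refinement (my addition, not part of the talk): use functools.wraps so the decorated function keeps its name and docstring. It reuses the datetime and time imports from the cells above.
import functools

def elapsed(func):
    "Elapsed decorator that preserves the wrapped function's metadata."
    @functools.wraps(func)
    def _wrapper(*args, **kwargs):
        start = datetime.datetime.now()
        ret = func(*args, **kwargs)
        print("Elapsed time", datetime.datetime.now() - start)
        return ret
    return _wrapper

@elapsed
def quick_task():
    "A quick task."
    time.sleep(0.5)

quick_task()
print(quick_task.__name__, '-', quick_task.__doc__)   # metadata survives the wrapping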
class Duck:
def quack(self):
print("Just a crazy, darn fool duck!")
class Man:
def quack(self):
print("Are you crazy?!")
def porky_pig_shoots_a(quacker):
quacker.quack()
duffy = Duck()
cesare = Man()
porky_pig_shoots_a(duffy)
porky_pig_shoots_a(cesare)
Explanation: The previous was the built-in "Decorator" syntax.
Batteries included!!
Round 2 Generics vs Duck Typing
Problem definition
Write "Generic" code.
OK, less naively: in Java, write code in terms of types to be specified later that are checked at compile time when a "parametrized" class is instantiated by its client.
Duck Typing is a sort of Generic Programming where an object's suitability is determined by the presence of certain methods and properties, rather than by the actual type of the object.
End of explanation |
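To show what happens when an object cannot quack, a small EAFP-style sketch (my addition, not from the talk):
class Dog:
    def bark(self):
        print("Woof!")

def porky_pig_carefully_shoots_a(quacker):
    # EAFP: just try it and handle the failure, no isinstance checks needed
    try:
        quacker.quack()
    except AttributeError:
        print("That's no duck!")

porky_pig_carefully_shoots_a(Dog())
porky_pig_carefully_shoots_a(Duck())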
4,499 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
What's New in Marvin 2.2!
Lots of things are new in Marvin 2.2.0. See the list with links to individual sections here http
Step1: Smarter handling of inputs
You can still specify plateifu, mangaid, or filename but now Marvin will try to guess your input type if you do not specify an input keyword argument.
Step2: Fuzzy indexing and extraction
Marvin now includes fuzzy lists and dictionaries in the Maps and Datamodels. This means Marvin will try to guess what you mean by what you type. For example, all of these methods grab the H-alpha flux map.
Step3: New DRP, DAP and Query Datamodels
There are new datamodels representing the MaNGA data for DRP, DAP and Query parameters. The datamodel is attached to every object you instantiate, or it can be accessed independently. For example, the Maps datamodel will list all the available map properties. See http
Step4: Each Property contains a name, a channel, the unit of the property, and a description
Step5: The full datamodel is available as a parent attribute or you can import it directly
Step6: Cubes, Maps, ModelCubes now utilize Quantity-based Objects
Most Marvin Tools now use new objects to represent their data. DataCubes represent 3-d data, while a Spectrum represents a 1-d array of data. These sub-class from Astropy Quantities. This means now most properties have associated units. We also now track and propagate inverse variances and masks.
Step7: The cube flux is now a DataCube, has proper units, has an ivar, mask, and wavelength attached to it
Step8: Slicing a Datacube in 2-d will return a new DataCube, while slicing in 3-d will return a Spectrum
Step9: Maskbits
There is a new Maskbit class for improved maskbit handling. All objects now include new Maskbit versions of the DRP/DAP quality flag (quality_flag), targeting bits (target_flags), and pixel masks (pixmask). Now you can easily look up the labels for bits and create custom masks. See http
Step10: Improved Query and Results Handling
The handling of Queries and Results has been improved to provide better means of retrieving all the results of a query, extracting columns of parameters, and quickly plotting results.
See http
Step11: The Query Datamodel shows you every parameter that is available to search on. It groups parameters together into common types. | Python Code:
%matplotlib inline
from marvin import config
config.switchSasUrl('local')
config.forceDbOff()
from marvin.tools.cube import Cube
plateifu='8485-1901'
cube = Cube(plateifu=plateifu)
print(cube)
maps = cube.getMaps(bintype='HYB10')
print(maps)
Explanation: What's New in Marvin 2.2!
Lots of things are new in Marvin 2.2.0. See the list with links to individual sections here http://sdss-marvin.readthedocs.io/en/latest/whats-new.html
Marvin now includes MPL-6 data
End of explanation
from marvin.tools.maps import Maps
maps = Maps(plateifu)
# or a filename
maps = Maps('/Users/Brian/Work/Manga/analysis/v2_3_1/2.1.3/SPX-GAU-MILESHC/8485/1901/manga-8485-1901-MAPS-SPX-GAU-MILESHC.fits.gz')
print(maps)
Explanation: Smarter handling of inputs
You can still specify plateifu, mangaid, or filename but now Marvin will try to guess your input type if you do not specify an input keyword argument.
End of explanation
# grab an H-alpha flux map
ha = maps['emline_gflux_ha_6564']
# fuzzy name indexing
ha = maps['gflux_ha']
# all map properties are available as class attributes. If using iPython, you can tab complete to see them all.
ha = maps.emline_gflux_ha_6564
Explanation: Fuzzy indexing and extraction
Marvin now includes fuzzy lists and dictionaries in the Maps and Datamodels. This means Marvin will try to guess what you mean by what you type. For example, all of these methods grab the H-alpha flux map.
End of explanation
# see the datamodel on maps
maps.datamodel
Explanation: New DRP, DAP and Query Datamodels
There are new datamodels representing the MaNGA data for DRP, DAP and Query parameters. The datamodel is attached to every object you instantiate, or it can be accessed independently. For example, the Maps datamodel will list all the available map properties. See http://sdss-marvin.readthedocs.io/en/latest/datamodel/datamodels.html for details.
End of explanation
haew_prop = maps.datamodel['emline_gew_ha']
haew_prop
print(haew_prop.name, haew_prop.unit, haew_prop.description)
Explanation: Each Property contains a name, a channel, the unit of the property, and a description
End of explanation
dapdm = maps.datamodel.parent
print(dapdm)
# get a list of all available DAP datamodels
from marvin.utils.datamodel.dap import datamodel
print(datamodel)
# let's get the MPL-6 datamodel
dapdm = datamodel['MPL-6']
print(dapdm)
Explanation: The full datamodel is available as a parent attribute or you can import it directly
End of explanation
# the cube datamodel shows the available datacubes
cube.datamodel.datacubes
# and spectra
cube.datamodel.spectra
Explanation: Cubes, Maps, ModelCubes now utilize Quantity-based Objects
Most Marvin Tools now use new objects to represent their data. DataCubes represent 3-d data, while a Spectrum represents a 1-d array of data. These sub-class from Astropy Quantities. This means now most properties have associated units. We also now track and propagate inverse variances and masks.
End of explanation
print(type(cube.flux))
print('flux', cube.flux)
print('mask', cube.flux.mask)
print('wavelength', cube.flux.wavelength)
Explanation: The cube flux is now a DataCube, has proper units, has an ivar, mask, and wavelength attached to it
End of explanation
spec = cube.flux[:,17,17]
print(type(spec))
print(spec)
print(spec.unit)
spec.plot()
Explanation: Slicing a Datacube in 2-d will return a new DataCube, while slicing in 3-d will return a Spectrum
End of explanation
# H-alpha DAP quality flag
ha.quality_flag
ha.target_flags
ha.pixmask
# bits for mask value 1027
print('bits', ha.pixmask.values_to_bits(1027))
print('labels', ha.pixmask.values_to_labels(1027))
# convert the H-alpha mask into a list of labels
ha.pixmask.labels
Explanation: Maskbits
There is a new Maskbit class for improved maskbit handling. All objects now include new Maskbit versions of the DRP/DAP quality flag (quality_flag), targeting bits (target_flags), and pixel masks (pixmask). Now you can easily look up the labels for bits and create custom masks. See http://sdss-marvin.readthedocs.io/en/latest/utils/maskbit.html for details
End of explanation
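As a small illustration of building a custom mask from these pieces (a sketch; it assumes ha.pixmask.mask is the integer mask array and ha.value the underlying data array, which should be checked against the Marvin docs):
import numpy as np
# hide every spaxel that has any DAP pixmask bit set
bad = ha.pixmask.mask > 0
masked_flux = np.ma.array(ha.value, mask=bad)
print(masked_flux.count(), 'unmasked spaxels')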
from marvin.tools.query import Query
config.setRelease('MPL-4')
q = Query(search_filter='nsa.z < 0.1', return_params=['cube.ra', 'cube.dec', 'absmag_g_r', 'nsa.elpetro_ba'])
r = q.run()
# your results are now in Sets
r.results
# see the available columns
r.columns
# quickly plot the redshift vs g-r color
output = r.plot('nsa.z', 'absmag_g_r')
# or a histogram of the elpetro b/a axis ratio
output=r.hist('elpetro_ba')
# get all of the g-r colors as a list
gr = r.getListOf('absmag_g_r', return_all=True)
gr
# the results currently only have 100 out of some total
print(r.count, r.totalcount)
# let's extend our result set by the next chunk of 100
r.extendSet()
print(r.count, r.totalcount)
print(r.results)
Explanation: Improved Query and Results Handling
The handling of Queries and Results has been improved to provide better means of retrieving all the results of a query, extracting columns of parameters, and quickly plotting results.
See http://sdss-marvin.readthedocs.io/en/latest/query.html for Query handling
See http://sdss-marvin.readthedocs.io/en/latest/results.html for Results handling
See http://sdss-marvin.readthedocs.io/en/latest/datamodel/query_dm.html for how to use the Query Datamodel
See http://sdss-marvin.readthedocs.io/en/latest/utils/plot-scatter.html for quick scatter plotting
See http://sdss-marvin.readthedocs.io/en/latest/utils/plot-hist.html for quick histogram plotting
End of explanation
qdm = q.datamodel
qdm
qdm.groups
# look at all the available NSA parameters
qdm.groups['nsa'].parameters
Explanation: The Query Datamodel shows you every parameter that is available to search on. It groups parameters together into common types.
End of explanation |