6,900 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http
Step1: Universal Array Functions
Numpy comes with many universal array functions, which are essentially just mathematical operations that are applied element-wise across the array. Let's show some common ones | Python Code:
import numpy as np
arr = np.arange(0, 10)
arr + arr
arr * arr
arr - arr
# Warning on division by zero, but not an error!
# Just replaced with nan
arr / arr
# Also a warning, but not an error; the result is infinity (inf)
1 / arr
arr ** 3
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
<center>Copyright Pierian Data 2017</center>
<center>For more information, visit us at www.pieriandata.com</center>
NumPy Operations
Arithmetic
You can easily perform array-with-array arithmetic, or scalar-with-array arithmetic. Let's see some examples:
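As an aside (not part of the original lesson), the division-by-zero warnings mentioned in the comments above can be silenced, or inspected explicitly, with numpy's errstate context manager; a minimal sketch:
import numpy as np
arr = np.arange(0, 10)
with np.errstate(divide='ignore', invalid='ignore'):
    print(arr / arr)  # the 0/0 entry becomes nan
    print(1 / arr)    # the 1/0 entry becomes inf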
End of explanation
# Taking Square Roots
np.sqrt(arr)
# Calculating the exponential (e^x)
np.exp(arr)
np.max(arr) #same as arr.max()
np.sin(arr)
np.log(arr)
Explanation: Universal Array Functions
Numpy comes with many universal array functions, which are essentially just mathematical operations that are applied element-wise across the array. Let's show some common ones:
End of explanation |
6,901 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cascade (HD-CNN Model Derivative)
Objective
This notebook demonstrates building a hierarchical image classifier based on an HD-CNN derivative which uses cascading classifiers to predict the class of a label from coarse to finer classes.
In this demonstration, we have two classes in the hierarchy
Step1: Getting Started
We will be using the following frameworks and Python modules
Step2: Make Datasets
Make Coarse Category Dataset
This makes the by fruit type dataset.
Step3: Make Finer Category Datasets
This makes the by Fruit Variety datasets
Step4: Generate the preprocessed Coarse Dataset
Step5: Split Coarse Dataset (by Fruit) into Train, Validation and Test
First split into train and test. Then split out 10% of train to use for validation during training.
- Train
Step6: Make Trainers
Create the routines we will use for training.
Make Feeder
Prepare the Feeder mechanism for training the neural network using ImageDataGenerator.
Add image augmentation for
Step7: Make Trainer
Prepare a training session
Step8: Make Model
Stem Convolutional Block (Base Model)
We will use this base model as the stem convolutional block of the cascading model
Step9: Simple ConvNet
The stem convolutional block consists of a mini-VGG, which consists of
Step10: Start Training
1. Train the Coarse Classifier
2. Add Finer Classifiers
3. Train the Finer Classifiers
Generate Coarse Model
Choose between
Step11: Train the Coarse Model
Step12: Save the Coarse Model
Step13: Prepare Coarse CNN for cascade training
1. Freeze all layers
2. Find bottleneck layer
Step14: Generate the preprocessed Finer Datasets
Split Finer (by Variety) Datasets into Train, Validation and Test
1. For each fruit type, split the corresponding variety images into train, validation and test.
2. Save each split dataset in a dictionary, using the fruit name as the key.
Step15: Add Each Cascade (Finer) Classifier
1. Get the bottleneck layer for the coarse CNN
2. Add an independent finer classifier per fruit from the bottleneck layer
Step16: Compile each finer classifier
Step17: Train the finer classifiers
Step18: Evaluate the Model
1. Evaluate the Model for each finer classifier.
Step19: Save the Finer Models
Step20: Let's do some cascading predictions
We will take one randomly selected image per type of fruit, and
Step21: End of Notebook | Python Code:
!gsutil cp gs://cloud-samples-data/air/fruits360/fruits360-combined.zip .
!ls
!unzip -qn fruits360-combined.zip
Explanation: Cascade (HD-CNN Model Derivative)
Objective
This notebook demonstrates building a hierarchical image classifier based on an HD-CNN derivative which uses cascading classifiers to predict the class of a label from coarse to finer classes.
In this demonstration, we have two classes in the hierarchy: fruits and varieties of fruit. The model will first predict the coarse class (type of fruit) and then, within that class of fruit, the variety. For example, if given an image of Apple Granny Smith, it would first predict 'Apple' (fruit) and then predict 'Apple Granny Smith'.
This derivative of the HD-CNN is designed to demonstrate both the methodology of hierarchical classification and design improvements not available at the time (2014) when the model was first published by Zhicheng Yan.
General Approach
Our HD-CNN derivative architecture consists of:
1. A stem convolutional block.
- The output from the stem convolutional head is shared with the coarse and finer classifiers
(referred to as the shared layers in the paper).
2. A coarse classifier.
- Convolution and Dense layers for classifying the coarse level class.
3. A set of finer classifiers, one per coarse level class.
- Convolution and Dense layers per coarse level class for classifying the corresponding finer
level class.
4. A conditional execution step that selects a specific finer classifier based on the output of the
coarse classifier.
- The coarse level classifier is predicted.
- The index of the prediction is used to select a finer classifier.
- An in-memory copy of the shared bottleneck layer (i.e., the last convolution layer in the stem) is passed as the
input to the finer level classifier.
Our HD-CNN derivative is trained as follows:
1. Train the coarse level classifier using the coarse level labels in the dataset.
<img src='arch-1.png'>
2. Train the finer level classifier per coarse level class, using the corresponding subset (with finer
labels) from the dataset.
<img src='arch-2.png'>
<br/>
Dataset
We will be using the Fruits-360 dataset, which was formerly a Kaggle competition. It consists of images of fruit labeled by fruit type and the variety.
1. There are a total of 47 types of fruit (e.g., Apple, Orange, Pear, etc) and 81 varieties.
2. On average, there are 656 images per variety.
3. Each image is 128x128 RGB.
<div>
<img src='Training/Apple/Apple Golden 2/0_100.jpg' style='float: left'>
<img src='Training/Apple/Apple Red 1/0_100.jpg' style='float: left'>
<img src='Training/Apple/Apple Red 1/0_100.jpg' style='float: left'>
<img src='Training/Orange/Orange/0_100.jpg' style = 'float: left'>
<img src='Training/Pear/Pear/0_100.jpg' style = 'float: left'>
</div>
Objective
The objective is to train a hierarchical image classifier (coarse and then finer label) using a cascading layer architecture. First, the shared layers and coarse classifier are trained. Then the cascading finer classifiers are trained.
For prediction, the outcome (softmax) of the coarse classifier will conditionally execute the corresponding finer classifier and reuse the feature maps from the shared layers.
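As a rough sketch of that prediction flow (the names coarse_model, finer_models and image_batch are placeholders for illustration, not the exact objects defined later in this notebook):
import numpy as np
coarse_probs = coarse_model.predict(image_batch)          # softmax over fruit types
coarse_label = int(np.argmax(coarse_probs, axis=-1)[0])
finer_model = finer_models[coarse_label]                  # one finer classifier per coarse class
finer_probs = finer_model.predict(image_batch)            # softmax over that fruit's varieties
finer_label = int(np.argmax(finer_probs, axis=-1)[0])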
Costs
This notebook requires 17GB of memory. It will not run on a Standard TF JaaS instance (15GB). You will need to select an instance with memory > 17GB.
Prerequisites
Download the Fruits 360 dataset from GCS public bucket into this JaaS instance.
Some of the cells in the notebook display images. The images will not appear until the cell for copying the training data/misc from GCS into the JaaS instance is executed.
End of explanation
import os
from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import GlobalAveragePooling2D, Dense
from keras import Sequential, Model, Input
from keras.layers import Conv2D, Flatten, MaxPooling2D, Dense, Dropout, BatchNormalization, ReLU
from keras import Model, optimizers
from keras.models import load_model
from keras.utils import to_categorical
import keras.layers as layers
from sklearn.model_selection import train_test_split
import tensorflow as tf
import numpy as np
import cv2
Explanation: Getting Started
We will be using the following frameworks and Python modules:
1. Keras framework for building and training models.
2. Keras builtin models (resnet50).
3. Keras preprocessing for feeding and augmenting the dataset during training.
4. Gap data engineering framework for preprocessing the image data.
5. Numpy for general image/matrix manipulation.
End of explanation
def Fruits(root):
n_label = 0
images = []
labels = []
classes = {}
os.chdir(root)
classes_ = os.scandir('./')
for class_ in classes_:
print(class_.name)
os.chdir(class_.name)
classes[class_.name] = n_label
# Finer Level Subdirectories per Coarse Level
subclasses = os.scandir('./')
for subclass in subclasses:
os.chdir(subclass.name)
files = os.listdir('./')
for file in files:
image = cv2.imread(file)
images.append(image)
labels.append(n_label)
os.chdir('../')
os.chdir('../')
n_label += 1
os.chdir('../')
images = np.asarray(images)
images = (images / 255.0).astype(np.float32)
labels = to_categorical(labels, n_label)
print("Images", images.shape, "Labels", labels.shape, "Classes", classes)
# Split the processed image dataset into training and test data
x_train, x_test, y_train, y_test = train_test_split(images, labels, test_size=0.20, shuffle=True)
return x_train, x_test, y_train, y_test, classes
Explanation: Make Datasets
Make Coarse Category Dataset
This makes the by fruit type dataset.
End of explanation
def Varieties(root):
''' Generate Cascade (Finer) Level Dataset for Fruit Varieties'''
datasets = {}
os.chdir(root)
fruits = os.scandir('./')
for fruit in fruits:
n_label = 0
images = []
labels = []
classes = {}
print('FRUIT', fruit.name)
os.chdir(fruit.name)
varieties = os.scandir('./')
for variety in varieties:
print('VARIETY', variety.name)
classes[variety.name] = n_label
os.chdir(variety.name)
files = os.listdir('./')
for file in files:
image = cv2.imread(file)
images.append(image)
labels.append(n_label)
os.chdir('../')
n_label += 1
images = np.asarray(images)
images = (images / 255.0).astype(np.float32)
labels = to_categorical(labels, n_label)
x_train, x_test, y_train, y_test = train_test_split(images, labels, test_size=0.20, shuffle=True)
datasets[fruit.name] = (x_train, x_test, y_train, y_test, classes)
os.chdir('../')
print("IMAGES", x_train.shape, y_train.shape, "CLASSES", classes)
os.chdir('../')
return datasets
Explanation: Make Finer Category Datasets
This makes the by Fruit Variety datasets
End of explanation
!free -m
x_train, x_test, y_train, y_test, fruits_classes = Fruits('Training')
!free -m
Explanation: Generate the preprocessed Coarse Dataset
End of explanation
# Split out 10% of Train to use for Validation
pivot = int(len(x_train) * 0.9)
x_val = x_train[pivot:]
y_val = y_train[pivot:]
x_train = x_train[:pivot]
y_train = y_train[:pivot]
print("train", x_train.shape, y_train.shape)
print("val ", x_val.shape, y_val.shape)
print("test ", x_test.shape, y_test.shape)
!free -m
Explanation: Split Coarse Dataset (by Fruit) into Train, Validation and Test
First split into train and test. Then split out 10% of train to use for validation during training.
- Train: 80% (of which 90% is used for fitting and 10% for validation during training)
- Test: 20%
End of explanation
def Feeder():
datagen = ImageDataGenerator(horizontal_flip=True, vertical_flip=True, rotation_range=30)
return datagen
Explanation: Make Trainers
Create the routines we will use for training.
Make Feeder
Prepare the Feeder mechanism for training the neural network using ImageDataGenerator.
Add image augmentation for:
1. Horizontal Flip
2. Vertical Flip
3. Random Rotation +/- 30 degrees
End of explanation
def Train(model, datagen, x_train, y_train, x_test, y_test, epochs=10, batch_size=32):
model.fit_generator(datagen.flow(x_train, y_train, batch_size=batch_size, shuffle=True),
steps_per_epoch=len(x_train) / batch_size, epochs=epochs, verbose=1, validation_data=(x_test, y_test))
scores = model.evaluate(x_train, y_train, verbose=1)
print("Train", scores)
Explanation: Make Trainer
Prepare a training session:
1. Epochs defaults to 10
2. Batch size defaults to 32
3. Train with validation data
4. Final evaluation on the training data (the held-out test set is evaluated separately after training).
End of explanation
def ResNet(shape=(128, 128, 3), nclasses=47, optimizer='adam', weights=None):
base_model = ResNet50(weights=weights, include_top=False, input_shape=shape)
for i, layer in enumerate(base_model.layers):
# first: train only the top layers (which were randomly initialized) for Transfer Learning
if weights is not None:
layer.trainable = False
# label the last layer in the base model as the bottleneck
base_model.layers[-1].name = 'bottleneck'
# Get the last convolutional layer of the ResNet base model
x = base_model.output
# add a global spatial average pooling layer
x = GlobalAveragePooling2D()(x)
# let's add a fully-connected layer
#x = Dense(1024, activation='relu')(x)
# and a logistic layer
predictions = Dense(nclasses, activation='softmax')(x)
# this is the model we will train
model = Model(inputs=base_model.input, outputs=predictions)
# compile the model (should be done *after* setting layers to non-trainable)
model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])
model.summary()
return model
Explanation: Make Model
Stem Convolutional Block (Base Model)
We will use this base model as the stem convolutional block of the cascading model:
1. The output of this model are a set of pooled feature maps.
2. The last layer that produces this set of pooled feature maps is referred to as the bottleneck layer.
Coarse Classifier
The coarse classifier is an independent block layer for classifying the coarse level label:
1. Input is the bottleneck layer from the stem convolutional block.
2. Layer consists of a convolution layer and a dense layer, where the dense layer is the classifier.
Finer Classifier
The finer classifiers are a set of independent block layers for classifying the finer label. There is one finer classifier per unique coarse level label.
1. Input is the bottleneck layer from the stem convolutional block.
2. Layer consists of a convolution layer and a dense layer, where the dense layer is the classifier.
3. The finer classifier is conditionally executed based on the softmax output from the coarse classifier.
ResNet for Transfer Learning
Use a prebuilt Keras model (ResNet 50). Either as:
1. Transfer Learning: The layers are pretrained with imagenet weights.
2. Full Training: layers are not pretrained (weights = None)
End of explanation
def ConvNet(shape=(128, 128, 3), nclasses=47, optimizer='adam'):
model = Sequential()
# stem convolutional group
model.add(Conv2D(16, (3,3), padding='same', activation='relu', input_shape=shape))
# conv block - double filters
model.add(Conv2D(32, (3,3), padding='same'))
model.add(ReLU())
model.add(Dropout(0.50))
model.add(MaxPooling2D((2,2)))
# conv block - double filters
model.add(Conv2D(64, (3,3), padding='same'))
model.add(ReLU())
model.add(MaxPooling2D((2,2)))
# conv block - double filters + bottleneck layer
model.add(Conv2D(128, (3,3), padding='same', activation='relu'))
model.add(MaxPooling2D((2,2), name="bottleneck"))
# dense block
model.add(Flatten())
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.25))
# classifier
model.add(Dense(nclasses, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])
model.summary()
return model
Explanation: Simple ConvNet
The stem convolutional block consists of a mini-VGG, which consists of:
1. A convolutional input (stem)
2. Three convolutional groups, each doubling the number of filters.
3. Each convolutional group consists of one convolutional block.
4. A dropout of 50% is added to the first convolutional group.
The coarse classifier consists of:
1. A 1024-node dense layer
2. A 47 node dense layer for classification.
End of explanation
# Select the model for the stem convolutional group (shared layers)
stem = 'ConvNet'
if stem == 'ConvNet':
model = ConvNet(shape=(100, 100, 3))
elif stem == 'ResNet-imagenet':
model = ResNet(weights='imagenet', optimizer='adagrad')
elif stem == 'ResNet':
model = ResNet()
# load previously stored model
else:
model = load_model('model.h5')
Explanation: Start Training
1. Train the Coarse Classifier
2. Add Finer Classifiers
3. Train the Finer Classifiers
Generate Coarse Model
Choose between:
1. A untrained simple VGG CovNet as Stem Convolution Group, or
2. Pre-trained ResNet50 (imagenet weights) for Transfer Learning
End of explanation
datagen = Feeder()
Train(model, datagen, x_train, y_train, x_val, y_val, 5)
scores = model.evaluate(x_test, y_test, verbose=1)
print("Test", scores)
Explanation: Train the Coarse Model
End of explanation
# Save the model and weights
model.save("model-coarse.h5")
Explanation: Save the Coarse Model
End of explanation
def Bottleneck(model):
for layer in model.layers:
layer.trainable = False
if layer.name == 'bottleneck':
bottleneck = layer
print("BOTTLENECK", bottleneck.output.shape)
return bottleneck
Explanation: Prepare Coarse CNN for cascade training
1. Freeze all layers
2. Find bottleneck layer
End of explanation
# Conserve memory by releasing training data for the coarse model
import gc
x_train = y_train = x_val = y_val = x_test = y_test = None
gc.collect()
varieties_datasets = Varieties('Training')
for key, dataset in varieties_datasets.items():
_x_train, _x_test, _y_train, _y_test, classes = dataset
# Separate out 10% of train for validation
pivot = int(len(_x_train) * 0.9)
_x_val = _x_train[pivot:]
_y_val = _y_train[pivot:]
_x_train = _x_train[:pivot]
_y_train = _y_train[:pivot]
# save the dataset for this fruit (key)
varieties_datasets[key] = { 'classes': classes, 'train': (_x_train, _y_train), 'val': (_x_val, _y_val), 'test': (_x_test, _y_test) }
!free -m
Explanation: Generate the preprocessed Finer Datasets
Split Finer (by Variety) Datasets into Train, Validation and Test
1. For each fruit type, split the corresponding variety images into train, validation and test.
2. Save each split dataset in a dictionary, using the fruit name as the key.
End of explanation
bottleneck = Bottleneck(model)
cascades = []
for key, val in varieties_datasets.items():
classes = val['classes']
print("KEY", key, classes)
# if only one subclassifier, then skip (i.e., coarse == finer)
if len(classes) == 1:
continue
x = layers.Conv2D(128, (3,3), padding='same', activation='relu')(bottleneck.output)
x = BatchNormalization()(x)
x = MaxPooling2D((2,2))(x)
x = layers.Flatten()(x)  # flatten the finer conv block output (not the raw bottleneck)
x = layers.Dense(1024, activation='relu')(x)
x = layers.Dense(len(classes), activation='softmax', name=key.replace(' ', ''))(x)
cascades.append(x)
Explanation: Add Each Cascade (Finer) Classifier
1. Get the bottleneck layer for the coarse CNN
2. Add an independent finer classifier per fruit from the bottleneck layer
End of explanation
classifiers = []
for cascade in cascades:
_model = Model(model.input, cascade)
_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
_model.summary()
classifiers.append(_model)
Explanation: Compile each finer classifier
End of explanation
for classifier in classifiers:
# get the output layer for this subclassifier
last = classifier.layers[len(classifier.layers)-1]
print(last, last.name)
# find the corresponding variety dataset
for key, dataset in varieties_datasets.items():
if key == last.name:
x_train, y_train = dataset['train']
x_val, y_val = dataset['val']
datagen = Feeder()
Train(classifier, datagen, x_train, y_train, x_val, y_val, 5)
Explanation: Train the finer classifiers
End of explanation
for classifier in classifiers:
# get the output layer for this subclassifier
last = classifier.layers[len(classifier.layers)-1]
print(last, last.name)
# find the corresponding variety dataset
for key, dataset in varieties_datasets.items():
if key == last.name:
x_test, y_test = dataset['test']
scores = classifier.evaluate(x_test, y_test, verbose=1)
print("Test", scores)
Explanation: Evaluate the Model
1. Evaluate the Model for each finer classifier.
End of explanation
n = 0
for classifier in classifiers:
classifier.save('model-finer-' + str(n) + '.h5')
n += 1
Explanation: Save the Finer Models
End of explanation
import random
# Let's make a prediction for each type of fruit
for key, dataset in varieties_datasets.items():
# Get the variety test data for this type of fruit
x_test, y_test = dataset['test']
# pick a random image in the variety dataset
index = random.randint(0, len(x_test) - 1)  # randint is inclusive on both ends
# use the coarse model to predict the type of fruit
yhat = np.argmax( model.predict(x_test[index:index+1]) )
# let's find the class name (type of fruit) for this predicted label
for fruit, label in fruits_classes.items():
if label == yhat:
break
print("Yhat", yhat, "Coarse Prediction", key, "=", fruit)
# Prediction was correct
if key == fruit:
if len(dataset['classes']) == 1:
print("No Finer Classifier")
continue
# find the corresponding finer classifier for this type of fruit
for classifier in classifiers:
# get the output layer for this subclassifier
last = classifier.layers[len(classifier.layers)-1]
if last.name == fruit:
# use the finer model to predict the variety of this type of fruit
yhat = np.argmax(classifier.predict(x_test[index:index+1]))
for variety, value in dataset['classes'].items():
if value == np.argmax(y_test[index]):
break
for yhat_variety, value in dataset['classes'].items():
if value == yhat:
break
print("Yhat", yhat, "Finer Prediction", variety, "=", yhat_variety)
break
Explanation: Let's do some cascading predictions
We will take one randomly selected image per type of fruit, and:
1. Run the image through the coarse classifier (by fruit).
2. Based on the predicted output, select the corresponding finer classifier (by variety).
3. Run the image through the corresponding finer classifier.
End of explanation
# extractfeatures = Model(input=model.input, output=model.get_layer('bottleneck').output)
Explanation: End of Notebook
End of explanation |
6,902 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Siu expression autocompletion
Step2: In this ADR, I will review how we can find the right DataFrame to autocomplete, the state of autocompletion in IPython, and three potential solutions.
Key questions
I'll review each of these questions below.
framing
Step3: The tree is traversed depth first, and can be dumped out for inspection. See greentreesnakes for a nice python AST primer.
Step4: Just knowing the variable names is not enough. We also need to know which ones are DataFrames. For our guess, we can use what type of object a variable is at the time the user pressed tab (may differ from when the code is run!).
Here is an example of one way that can be done in IPython.
Step5: Last DataFrame defined or used seems ideal!
Once we know the DataFrame the user has in mind, we need to work it into the autocompletion machinery somehow, so that _.<tab> returns the same results as if that DataFrame were being autocompleted.
IPython Autocompletion
This section will go into great detail about how IPython's autocomplete works, to set the stage for technical solutions. Essentially, when a user interacts with autocompletion, there are 3 main libraries involved
Step6: Notice that it knows the suggestion mad is a function! For a column of the data, it knows that it's not a function, but an instance.
The IPython shell has an instance of its IPCompleter class, and its _jedi_matches method is responsible for doing the jedi stuff.
Step7: While this simple description captures the main thrust of how autocomplete works, the full dynamics include some more features such as entry hooks, and some shuffling things around (since the IPCompleter is deprecating its old methods for completing).
The sequence diagrams below show how the kernel sets up autocomplete, and how a specific autocomplete event is run.
Links to code used for diagrams
Step8: This lets you set hooks that only fire when a specific match is in the code being completed. For example...
Step9: For example, the code below should make _.a<tab> complete to _.ab.
Step10: This would be a really promising avenue. However, as I'll show in the next section, hooks must return a list of strings, so cannot give the nice color info with completions, even if they use jedi under the hood.
IPython _jedi_matches
The following diagrams illustrate what the path through a single autocompletion event (e.g. pressing tab) looks like. Note that because IPCompleter is transitioning to a new setup, there is some shuffling around that goes on (e.g. do_complete calls _experimental_do_complete, etc..).
Intriguingly, ipykernel also jumps over InteractiveShell, accessing the shell's Completer instance directly. Then, essentially 3 critical steps are run
Step11: <div style="width
Step12: However, a problem here is that when Jedi completes on a DataFrame (vs something with a __dir__ method that spits out DataFrame info), it can add type information. With the __dir__ method, Jedi does not know we want it to think of Symbolic2 as a DataFrame.
Step13: This is why in the output above it doesn't know that abs is a function, so reports it as an instance.
Option 3 | Python Code:
from siuba.siu import _
dir(_)[:6]
Explanation: Siu expression autocompletion: _.cyl.\<tab>
Note: this document is based on PR 248 by @tmastny, and all the discussion there!
(Drafted on 7 August 2020)
tl;dr. Implementing autocompletion requires 3 components: identifying the DataFrame to complete, understanding IPython autocompletion, and plugging in to it. The approach we took is to use a user's execution history to identify the DataFrame, and to modify IPCompleter._jedi_matches. As discussed in this PR, a useful approach in the future would be to use a simple regex, like RStudio does.
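To make that last point concrete, a rough sketch of the regex idea (purely illustrative; this is not what the PR implements):
import re
code_so_far = "(mtcars\n  >> filter(_."
match = re.search(r"\(\s*([A-Za-z_]\w*)", code_so_far)
print(match.group(1) if match else None)  # 'mtcars' -- the name that opens the pipe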
Problem
The _ is meant as a lazy way of representing your data.
Currently, a user autocompleting with _.\<tab> will not receive suggestions for the data they have in mind.
After importing siuba's mtcars data, a user might want to filter by cylinder, but forget its exact name.
Autocomplete to the rescue! They'd be able to press tab to receive handy suggestions, including column names.
While an exciting feature, this requires solving hard problems. There are significant technical challenges related to (1) getting the DataFrame to complete, and (2) plugging into the autocomplete architecture.
For example, the most common approach used for autocompletion--and one used by pandas--is to define a __dir__ method.
This method then lists out everything you want the user to see when they autocomplete.
However, because the _ object doesn't know anything about DataFrames, it doesn't return anything useful.
End of explanation
import ast
class FrameDetector(ast.NodeVisitor):
def __init__(self):
self.results = []
super().__init__()
def visit_Name(self, node):
# visit any children
self.generic_visit(node)
# store name as a result
self.results.append(node.id)
visitor = FrameDetector()
visitor.visit(ast.parse("""
from siuba.data import mtcars
cars2 = mtcars + 1
"""))
visitor.results
Explanation: In this ADR, I will review how we can find the right DataFrame to autocomplete, the state of autocompletion in IPython, and three potential solutions.
Key questions
I'll review each of these questions below.
framing: How do we know what DataFrame (e.g. mtcars) the user wants completions for?
IPython autocompletion: What are the key technical hurdles in the existing autocomplete API?
glory road: What are three ways to get completions like in the gif?
Framing: what DataFrame are users looking for?
Two possibilities come to mind:
The DataFrame being used at the start of a pipe.
The last DataFrame they defined or used.
Start of a pipe
```python
(mtcars
  >> filter(_.<tab> == 6) # note the tab!
  >> mutate(hp2 = _.hp*2)
)
```
A big challenge here is that this code is not valid python (since it has _. == 6). We would likely need to use regex to analyze it. Alternatively, looking at the code they've already run, rather than the code they're on, might be a better place to start.
Last defined or used
The last defined or used DataFrame is likely impossible to identify, since it'd require knowing the order variables get defined and accessed. However, static analysis of code history would let us take a guess. For example, the code below shows some different cases. In each case, we could pick out whether mtcars or cars2 is being used.
```python
# import mtcars
from siuba.data import mtcars
# assign cars2
cars2 = mtcars
# attribute access cars2
cars2.cyl + 1
```
End of explanation
ast.dump(ast.parse("cars2 = mtcars"))
Explanation: The tree is traversed depth first, and can be dumped out for inspection. See greentreesnakes for a nice python AST primer.
End of explanation
import pandas as pd
shell = get_ipython()
[k for k,v in shell.user_ns.items() if isinstance(v, pd.DataFrame)]
Explanation: Just knowing the variable names is not enough. We also need to know which ones are DataFrames. For our guess, we can use what type of object a variable is at the time the user pressed tab (may differ from when the code is run!).
Here is an example of one way that can be done in IPython.
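Combining this with the FrameDetector above gives a hypothetical sketch (not the PR's actual implementation) for guessing the most recently mentioned DataFrame from the shell's input history:
def guess_last_dataframe(shell):
    names = []
    for cell in shell.user_ns.get("In", []):  # raw source of previously executed cells
        try:
            visitor = FrameDetector()
            visitor.visit(ast.parse(cell))
            names.extend(visitor.results)
        except SyntaxError:
            continue  # skip cells that don't parse
    for name in reversed(names):  # most recent mention first
        if isinstance(shell.user_ns.get(name), pd.DataFrame):
            return shell.user_ns[name]
    return None
guess_last_dataframe(get_ipython())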
End of explanation
import jedi
from siuba.data import mtcars
interpreter = jedi.Interpreter('zzz.m', [{'zzz': mtcars}])
completions = list(interpreter.complete())
entry = completions[0]
entry, entry.type
Explanation: Last DataFrame defined or used seems ideal!
Once we know the DataFrame the user has in mind, we need to work it into the autocompletion machinery somehow, so that _.<tab> returns the same results as if that DataFrame were being autocompleted.
IPython Autocompletion
This section will go into great detail about how IPython's autocomplete works, to set the stage for technical solutions. Essentially, when a user interacts with autocompletion, there are 3 main libraries involved: ipykernel, IPython, and jedi. This is shown in the dependency graph below.
Essentially, our challenge is figuring how where autocomplete could fit in. Just to set the stage, the IPython IPCompleter uses some of its own useful completion strategies, but the bulk of where we benefit comes from its use of the library jedi.
In the sections below, I'll first give a quick preview of how jedi works, followed by two sequence diagrams of how it's intergrated into the ipykernel.
Jedi completion
At it's core, jedi is easy to use, and does a mix of static analysis and object evaluation. It's super handy!
The code below shows how it might autocomplete a DataFrame called zzz, where we define zzz to really be the mtcars data.
End of explanation
from siuba.data import mtcars
shell = get_ipython()
df_auto = list(shell.Completer._jedi_matches(7, 0, "mtcars."))
df_auto[:5]
Explanation: Notice that it knows the suggestion mad is a function! For a column of the data, it knows that it's not a function, but an instance.
The IPython shell has an instance of its IPCompleter class, and its _jedi_matches method is responsible for doing the jedi stuff.
End of explanation
from IPython.utils.strdispatch import StrDispatch
dis = StrDispatch()
dis.add_s('hei', lambda: 1)
dis.add_re('_\\..*', lambda: 2)
# must be exactly hei
list(dis.flat_matches('hei'))
Explanation: While this simple description captures the main thrust of how autocomplete works, the full dynamics include some more features such as entry hooks, and some shuffling things around (since the IPCompleter is deprecating its old methods for completing).
The sequence diagrams below show how the kernel sets up autocomplete, and how a specific autocomplete event is run.
Links to code used for diagrams:
ipykernel 5.3.4
IPython 7.17.0 - completer.py
IPython interactive shell
IPython hooks
ipykernel sets everything up, and also exposes methods for using IPCompleter hooks.
(Note that InteractiveShell and IPCompleter come from IPython)
A key here is that one hook, set by the set_hooks method is configured using something called StrDispatch
End of explanation
# needs to match regex abc.*
list(dis.flat_matches('_.abc'))
Explanation: This lets you set hooks that only fire when a specific match is in the code being completed. For example...
End of explanation
shell = get_ipython()
shell.set_hook('complete_command', lambda shell, event: ['_.ab'], re_key = '_\\.a.*')
Explanation: For example, the code below should make _.a<tab> complete to _.ab.
End of explanation
#TODO: make workable
from siuba.data import mtcars
from siuba import _
import sys
# will use zzz.<tab> for this example
zzz = _
def hook(shell, event):
# change the completers namespace, then change it back at the end
# would likely need to be done in a context manager, down the road!
old_ns = shell.Completer.namespace
target_df = shell.user_ns["mtcars"]
shell.Completer.namespace = {**old_ns, "zzz": target_df}
# then, run completion method
col_num, line_num = len(event.symbol), 0
completions = shell.Completer._jedi_matches(col_num, line_num, event.symbol)
# change namespace back
shell.Completer.namespace = old_ns
# get suggestions
suggestions = [event.command + x.name for x in completions]
# should be able to see these in the terminal for debugging
with open('/dev/stdout', 'w') as f:
print(suggestions, file = f)
return suggestions
shell = get_ipython()
shell.set_hook('complete_command', hook, re_key = '.*zzz.*')
# uncomment and press tab
#zzz.
Explanation: This would be a really promising avenue. However, as I'll show in the next section, hooks must return a list of strings, so cannot give the nice color info with completions, even if they use jedi under the hood.
IPython _jedi_matches
The following diagrams illustrate what the path through a single autocompletion event (e.g. pressing tab) looks like. Note that because IPCompleter is transitioning to a new setup, there is some shuffling around that goes on (e.g. do_complete calls _experimental_do_complete, etc..).
Intriguingly, ipykernel also jumps over InteractiveShell, accessing the shell's Completer instance directly. Then, essentially 3 critical steps are run: jedi completions, two kinds of hooks, and wrapping each result in a simple Completion class.
Glory road: three technical solutions
Essentially, the dynamics described above leave us with three potential solutions for autocomplete:
hooks (without type info)
modify siu's Symbolic.__dir__ method
monkey patch Completer's _jedi_matches method
To foreshadow, the last is the only one that will give us those sweet colored type annotations, so is preferred!
Option 1: IPython.Completer hooks
While hooks are an interesting approach, they currently require you to return a list of strings. Only Completer._jedi_matches can return the enriched suggestions, and it requires strings from hooks.
(NOTE: if you make changes to the code below, you may need to restart your kernel and re-run the cell's code.)
End of explanation
from siuba.siu import Symbolic
from siuba.data import mtcars
class Symbolic2(Symbolic):
def __dir__(self):
return dir(mtcars)
Explanation: <div style="width: 200px;">

</div>
Option 2: monkey patching siuba.siu.Symbolic
Finally, you could imagine that we replace some part of the Symbolic class, so that it does the autocomplete. This is shown below (using a new class rather than monkey patching).
End of explanation
bbb = Symbolic2()
from siuba import _
import jedi
interpreter = jedi.Interpreter('bbb.', [{'bbb': bbb, 'mtcars': mtcars}])
completions = list(interpreter.complete())
entry = completions[0]
entry.name, entry.type
Explanation: However, a problem here is that when Jedi completes on a DataFrame (vs something with a __dir__ method that spits out DataFrame info), it can add type information. With the __dir__ method, Jedi does not know we want it to think of Symbolic2 as a DataFrame.
End of explanation
import types
from functools import wraps
from siuba.data import mtcars
from siuba import _
# using aaa for this example
aaa = _
def _jedi_matches_wrapper(obj):
f = obj._jedi_matches
@wraps(f)
def wrapper(self, *args, **kwargs):
# store old namespace (should be context manager)
old_ns = self.namespace
target_df = self.namespace["mtcars"]
self.namespace = {**old_ns, "aaa": target_df}
res = f(*args, **kwargs)
# set namespace back
self.namespace = old_ns
# return results
return res
return types.MethodType(wrapper, obj)
#shell = get_ipython()
#shell.Completer._jedi_matches = _jedi_matches_wrapper(shell.Completer)
from IPython.core.completer import IPCompleter, provisionalcompleter
shell = get_ipython()
completer = IPCompleter(shell, shell.user_ns)
completer._jedi_matches = _jedi_matches_wrapper(shell.Completer)
with provisionalcompleter():
completions = list(completer.completions('aaa.', 4))
completions[:3]
Explanation: This is why in the output above it doesn't know that abs is a function, so reports it as an instance.
Option 3: monkey patching IPython.Completer._jedi_matches
This approach is similar to the above, where we replace _ in the Completer's namespace with the target DataFrame. However, we do the replacement by manually copying the code of the _jedi_matches method, and making the replacement at the very beginning.
Alternatively, you could just wrap _jedi_matches to change shell.Completer.namespace as in the hook example.
End of explanation |
6,903 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cross-Validation and scoring methods
In the previous sections and notebooks, we split our dataset into two parts, a training set and a test set. We used the training set to fit our model, and we used the test set to evaluate its generalization performance -- how well it performs on new, unseen data.
<img src="figures/train_test_split.svg" width="100%">
However, often (labeled) data is precious, and this approach lets us only use ~ 3/4 of our data for training. On the other hand, we only ever evaluate our model on 1/4 of the data.
A common way to use more of the data to build a model, but also get a more robust estimate of the generalization performance, is cross-validation.
In cross-validation, the data is split repeatedly into training and non-overlapping test sets, with a separate model built for every pair. The test-set scores are then aggregated for a more robust estimate.
The most common way to do cross-validation is k-fold cross-validation, in which the data is first split into k (often 5 or 10) equal-sized folds, and then for each iteration, one of the k folds is used as test data, and the rest as training data
Step1: The labels in iris are sorted, which means that if we split the data as illustrated above, the first fold will only have the label 0 in it, while the last one will only have the label 2
Step2: To avoid this problem in evaluation, we first shuffle our data
Step3: Now implementing cross-validation is easy
Step4: Let's check that our test mask does the right thing
Step5: And now let's look at the scores we computed
Step6: As you can see, there is a rather wide spectrum of scores from 90% correct to 100% correct. If we only did a single split, we might have gotten either answer.
As cross-validation is such a common pattern in machine learning, there are functions to do the above for you with much more flexibility and less code.
The sklearn.model_selection module has all functions related to cross-validation. The easiest function is cross_val_score, which takes an estimator and a dataset, and will do all of the splitting for you
Step7: As you can see, the function uses three folds by default. You can change the number of folds using the cv argument
Step8: There are also helper objects in the cross-validation module that will generate indices for you for all kinds of different cross-validation methods, including k-fold
Step9: By default, cross_val_score will use StratifiedKFold for classification, which ensures that the class proportions in the dataset are reflected in each fold. If you have a binary classification dataset with 90% of data points belonging to class 0, that would mean that in each fold, 90% of data points would belong to class 0.
If you would just use KFold cross-validation, it is likely that you would generate a split that only contains class 0.
It is generally a good idea to use StratifiedKFold whenever you do classification.
StratifiedKFold would also remove our need to shuffle iris.
Let's see what kinds of folds it generates on the unshuffled iris dataset.
Each cross-validation class is a generator of sets of training and test indices
Step10: As you can see, there are a couple of samples from the beginning, then from the middle, and then from the end, in each of the folds.
This way, the class ratios are preserved. Let's visualize the split
Step11: For comparison, again the standard KFold, that ignores the labels
Step12: Keep in mind that increasing the number of folds will give you a larger training dataset, but will lead to more repetitions, and therefore a slower evaluation
Step13: Another helpful cross-validation generator is ShuffleSplit. This generator simply splits off a random portion of the data repeatedly. This allows the user to specify the number of repetitions and the training set size independently
Step14: If you want a more robust estimate, you can just increase the number of iterations
Step15: You can use all of these cross-validation generators with the cross_val_score method | Python Code:
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
iris = load_iris()
X, y = iris.data, iris.target
classifier = KNeighborsClassifier()
Explanation: Cross-Validation and scoring methods
In the previous sections and notebooks, we split our dataset into two parts, a training set and a test set. We used the training set to fit our model, and we used the test set to evaluate its generalization performance -- how well it performs on new, unseen data.
<img src="figures/train_test_split.svg" width="100%">
However, often (labeled) data is precious, and this approach lets us only use ~ 3/4 of our data for training. On the other hand, we only ever evaluate our model on 1/4 of the data.
A common way to use more of the data to build a model, but also get a more robust estimate of the generalization performance, is cross-validation.
In cross-validation, the data is split repeatedly into training and non-overlapping test sets, with a separate model built for every pair. The test-set scores are then aggregated for a more robust estimate.
The most common way to do cross-validation is k-fold cross-validation, in which the data is first split into k (often 5 or 10) equal-sized folds, and then for each iteration, one of the k folds is used as test data, and the rest as training data:
<img src="figures/cross_validation.svg" width="100%">
This way, each data point will be in the test-set exactly once, and we can use all but a k'th of the data for training.
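As a tiny illustration (not part of the original notebook), here is what ten samples split into k=5 contiguous folds of two test samples each looks like:
n_samples, k = 10, 5
fold_size = n_samples // k
folds = [list(range(i * fold_size, (i + 1) * fold_size)) for i in range(k)]
print(folds)  # [[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]]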
Let us apply this technique to evaluate the KNeighborsClassifier algorithm on the Iris dataset:
End of explanation
y
Explanation: The labels in iris are sorted, which means that if we split the data as illustrated above, the first fold will only have the label 0 in it, while the last one will only have the label 2:
End of explanation
import numpy as np
rng = np.random.RandomState(0)
permutation = rng.permutation(len(X))
X, y = X[permutation], y[permutation]
print(y)
Explanation: To avoid this problem in evaluation, we first shuffle our data:
End of explanation
k = 5
n_samples = len(X)
fold_size = n_samples // k
scores = []
masks = []
for fold in range(k):
# generate a boolean mask for the test set in this fold
test_mask = np.zeros(n_samples, dtype=bool)
test_mask[fold * fold_size : (fold + 1) * fold_size] = True
# store the mask for visualization
masks.append(test_mask)
# create training and test sets using this mask
X_test, y_test = X[test_mask], y[test_mask]
X_train, y_train = X[~test_mask], y[~test_mask]
# fit the classifier
classifier.fit(X_train, y_train)
# compute the score and record it
scores.append(classifier.score(X_test, y_test))
Explanation: Now implementing cross-validation is easy:
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
plt.matshow(masks)
Explanation: Let's check that our test mask does the right thing:
End of explanation
print(scores)
print(np.mean(scores))
Explanation: And now let's look at the scores we computed:
End of explanation
from sklearn.model_selection import cross_val_score
scores = cross_val_score(classifier, X, y)
print(scores)
print(np.mean(scores))
Explanation: As you can see, there is a rather wide spectrum of scores from 90% correct to 100% correct. If we only did a single split, we might have gotten either answer.
As cross-validation is such a common pattern in machine learning, there are functions to do the above for you with much more flexibility and less code.
The sklearn.model_selection module has all functions related to cross-validation. The easiest function is cross_val_score, which takes an estimator and a dataset, and will do all of the splitting for you:
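Since this notebook is about cross-validation and scoring methods, it is worth noting (as a brief sketch not shown in the original cells) that cross_val_score also accepts a scoring argument to change the evaluation metric:
from sklearn.model_selection import cross_val_score
cross_val_score(classifier, X, y, cv=5, scoring="accuracy")  # the default metric for classifiers
cross_val_score(classifier, X, y, cv=5, scoring="f1_macro")  # macro-averaged F1 instead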
End of explanation
cross_val_score(classifier, X, y, cv=5)
Explanation: As you can see, the function uses three folds by default. You can change the number of folds using the cv argument:
End of explanation
from sklearn.model_selection import KFold, StratifiedKFold, ShuffleSplit
Explanation: There are also helper objects in the cross-validation module that will generate indices for you for all kinds of different cross-validation methods, including k-fold:
End of explanation
cv = StratifiedKFold(n_splits=5)
for train, test in cv.split(X, y):
print(test)
Explanation: By default, cross_val_score will use StratifiedKFold for classification, which ensures that the class proportions in the dataset are reflected in each fold. If you have a binary classification dataset with 90% of data points belonging to class 0, that would mean that in each fold, 90% of data points would belong to class 0.
If you would just use KFold cross-validation, it is likely that you would generate a split that only contains class 0.
It is generally a good idea to use StratifiedKFold whenever you do classification.
StratifiedKFold would also remove our need to shuffle iris.
Let's see what kinds of folds it generates on the unshuffled iris dataset.
Each cross-validation class is a generator of sets of training and test indices:
End of explanation
def plot_cv(cv, y):
masks = []
X = np.ones((len(y), 1))
for train, test in cv.split(X, y):
mask = np.zeros(len(y), dtype=bool)
mask[test] = 1
masks.append(mask)
plt.matshow(masks)
plot_cv(StratifiedKFold(n_splits=5), iris.target)
Explanation: As you can see, there are a couple of samples from the beginning, then from the middle, and then from the end, in each of the folds.
This way, the class ratios are preserved. Let's visualize the split:
End of explanation
plot_cv(KFold(n_splits=5), iris.target)
Explanation: For comparison, again the standard KFold, that ignores the labels:
End of explanation
plot_cv(KFold(n_splits=10), iris.target)
Explanation: Keep in mind that increasing the number of folds will give you a larger training dataset, but will lead to more repetitions, and therefore a slower evaluation:
End of explanation
plot_cv(ShuffleSplit(n_splits=5, test_size=.2), iris.target)
Explanation: Another helpful cross-validation generator is ShuffleSplit. This generator simply splits off a random portion of the data repeatedly. This allows the user to specify the number of repetitions and the training set size independently:
End of explanation
plot_cv(ShuffleSplit(n_splits=10, test_size=.2), iris.target)
Explanation: If you want a more robust estimate, you can just increase the number of iterations:
End of explanation
cv = ShuffleSplit(n_splits=5, test_size=.2)
cross_val_score(classifier, X, y, cv=cv)
Explanation: You can use all of these cross-validation generators with the cross_val_score method:
End of explanation |
6,904 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intro to Thinc's Model class, model definition and methods
Thinc follows a functional-programming approach to model definition. Its approach is especially effective for complicated network architectures, and use cases where different data types need to be passed through the network to reach specific subcomponents. This notebook shows how to compose Thinc models and how to use the Model class and its methods.
Step1: Thinc provides a variety of layers, functions that create Model instances. Thinc tries to avoid inheritance, preferring function composition. The Linear function gives you a model that computes Y = X @ W.T + b (the function is defined in thinc.layers.linear.forward).
Step2: Models support dimension inference from data. You can defer some or all of the dimensions.
Step3: The chain function wires two model instances together, with a feed-forward relationship. Dimension inference is especially helpful here.
Step4: We call functions like chain combinators. Combinators take one or more models as arguments, and return another model instance, without introducing any new weight parameters. Another useful combinator is concatenate
Step5: The concatenate function produces a layer that runs the child layers separately, and then concatenates their outputs together. This is often useful for combining features from different sources. For instance, we use this all the time to build spaCy's embedding layers.
Some combinators work on a layer and a numeric argument. For instance, the clone combinator creates a number of copies of a layer, and chains them together into a deep feed-forward network. The shape inference is especially handy here
Step6: We can apply clone to model instances that have child layers, making it easy to define more complex architectures. For instance, we often want to attach an activation function and dropout to a linear layer, and then repeat that substructure a number of times. Of course, you can make whatever intermediate functions you find helpful.
Step7: Some combinators are unary functions
Step8: The combinator system makes it easy to wire together complex models very concisely. A concise notation is a huge advantage, because it lets you read and review your model with less clutter – making it easy to spot mistakes, and easy to make changes. For the ultimate in concise notation, you can also take advantage of Thinc's operator overloading, which lets you use an infix notation. Operator overloading can lead to unexpected results, so you have to enable the overloading explicitly in a contextmanager. This also lets you control how the operators are bound, making it easy to use the feature with your own combinators. For instance, here is a definition for a text classification network
Step9: The network above will expect a list of arrays as input, where each array should have two columns with different numeric identifier features. The two features will be embedded using separate embedding tables, and the two vectors added and passed through a Maxout layer with layer normalization and dropout. The sequences then pass through two pooling functions, and the concatenated results are passed through 2 Relu layers with dropout and residual connections. Finally, the sequence vectors are passed through an output layer, which has a Softmax activation.
Using a model
Define the model
Step10: Initialize the model with a sample of the data
Step11: Run the model over some data
Step12: Get a callback to backpropagate
Step13: Run the callback to calculate the gradient with respect to the inputs. If the model has trainable parameters, gradients for the parameters are accumulated internally, as a side-effect.
Step14: The backprop() callback only increments the parameter gradients, it doesn't actually change the weights. To increment the weights, call model.finish_update(), passing it an optimizer
Step15: You can get and set dimensions, parameters and attributes by name
Step16: You can also retrieve parameter gradients, and increment them explicitly
Step17: Finally, you can serialize models using the model.to_bytes and model.to_disk methods, and load them back with from_bytes and from_disk. | Python Code:
!pip install "thinc>=8.0.0"
Explanation: Intro to Thinc's Model class, model definition and methods
Thinc follows a functional-programming approach to model definition. Its approach is especially effective for complicated network architectures, and use cases where different data types need to be passed through the network to reach specific subcomponents. This notebook shows how to compose Thinc models and how to use the Model class and its methods.
End of explanation
import numpy
from thinc.api import Linear, zero_init
n_in = numpy.zeros((128, 16), dtype="f")
n_out = numpy.zeros((128, 10), dtype="f")
model = Linear(nI=n_in.shape[1], nO=n_out.shape[1], init_W=zero_init)
nI = model.get_dim("nI")
nO = model.get_dim("nO")
print(f"Initialized model with input dimension nI={nI} and output dimension nO={nO}.")
Explanation: Thinc provides a variety of layers, functions that create Model instances. Thinc tries to avoid inheritance, preferring function composition. The Linear function gives you a model that computes Y = X @ W.T + b (the function is defined in thinc.layers.linear.forward).
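As a quick check (not part of the original guide), the formula can be verified against the layer's own parameters once the model above has been initialized:
X = numpy.random.uniform(size=(8, 16)).astype("f")
model.initialize(X=X)
W, b = model.get_param("W"), model.get_param("b")
assert numpy.allclose(model.predict(X), X @ W.T + b, atol=1e-5)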
End of explanation
model = Linear(init_W=zero_init)
print(f"Initialized model with no input/output dimensions.")
X = numpy.zeros((128, 16), dtype="f")
Y = numpy.zeros((128, 10), dtype="f")
model.initialize(X=X, Y=Y)
nI = model.get_dim("nI")
nO = model.get_dim("nO")
print(f"Initialized model with input dimension nI={nI} and output dimension nO={nO}.")
Explanation: Models support dimension inference from data. You can defer some or all of the dimensions.
End of explanation
from thinc.api import chain, glorot_uniform_init
n_hidden = 128
X = numpy.zeros((128, 16), dtype="f")
Y = numpy.zeros((128, 10), dtype="f")
model = chain(Linear(n_hidden, init_W=glorot_uniform_init), Linear(init_W=zero_init),)
model.initialize(X=X, Y=Y)
nI = model.get_dim("nI")
nO = model.get_dim("nO")
nO_hidden = model.layers[0].get_dim("nO")
print(f"Initialized model with input dimension nI={nI} and output dimension nO={nO}.")
print(f"The size of the hidden layer is {nO_hidden}.")
Explanation: The chain function wires two model instances together, with a feed-forward relationship. Dimension inference is especially helpful here.
End of explanation
from thinc.api import concatenate
model = concatenate(Linear(n_hidden), Linear(n_hidden))
model.initialize(X=X)
nO = model.get_dim("nO")
print(f"Initialized model with output dimension nO={nO}.")
Explanation: We call functions like chain combinators. Combinators take one or more models as arguments, and return another model instance, without introducing any new weight parameters. Another useful combinator is concatenate:
End of explanation
from thinc.api import clone
model = clone(Linear(), 5)
model.layers[0].set_dim("nO", n_hidden)
model.initialize(X=X, Y=Y)
nI = model.get_dim("nI")
nO = model.get_dim("nO")
print(f"Initialized model with input dimension nI={nI} and output dimension nO={nO}.")
Explanation: The concatenate function produces a layer that runs the child layers separately, and then concatenates their outputs together. This is often useful for combining features from different sources. For instance, we use this all the time to build spaCy's embedding layers.
Some combinators work on a layer and a numeric argument. For instance, the clone combinator creates a number of copies of a layer, and chains them together into a deep feed-forward network. The shape inference is especially handy here: we want the first and last layers to have different shapes, so we can avoid providing any dimensions into the layer we clone. We then just have to specify the first layer's output size, and we can let the rest of the dimensions be inferred from the data.
End of explanation
from thinc.api import Relu, Dropout
def Hidden(dropout=0.2):
return chain(Linear(), Relu(), Dropout(dropout))
model = clone(Hidden(0.2), 5)
Explanation: We can apply clone to model instances that have child layers, making it easy to define more complex architectures. For instance, we often want to attach an activation function and dropout to a linear layer, and then repeat that substructure a number of times. Of course, you can make whatever intermediate functions you find helpful.
End of explanation
from thinc.api import with_array
model = with_array(Linear(4, 2))
Xs = [model.ops.alloc2f(10, 2, dtype="f")]
model.initialize(X=Xs)
Ys = model.predict(Xs)
print(f"Prediction shape: {Ys[0].shape}.")
Explanation: Some combinators are unary functions: they take only one model. These are usually input and output transformations. For instance, the with_array combinator produces a model that flattens lists of arrays into a single array, and then calls the child layer to get the flattened output. It then reverses the transformation on the output.
End of explanation
from thinc.api import add, chain, concatenate, clone
from thinc.api import with_array, reduce_max, reduce_mean, residual
from thinc.api import Model, Embed, Maxout, Softmax
nH = 5
with Model.define_operators({">>": chain, "|": concatenate, "+": add, "**": clone}):
model = (
with_array(
(Embed(128, column=0) + Embed(64, column=1))
>> Maxout(nH, normalize=True, dropout=0.2)
)
>> (reduce_max() | reduce_mean())
>> residual(Relu() >> Dropout(0.2)) ** 2
>> Softmax()
)
Explanation: The combinator system makes it easy to wire together complex models very concisely. A concise notation is a huge advantage, because it lets you read and review your model with less clutter – making it easy to spot mistakes, and easy to make changes. For the ultimate in concise notation, you can also take advantage of Thinc's operator overloading, which lets you use an infix notation. Operator overloading can lead to unexpected results, so you have to enable the overloading explicitly in a contextmanager. This also lets you control how the operators are bound, making it easy to use the feature with your own combinators. For instance, here is a definition for a text classification network:
End of explanation
from thinc.api import Linear, Adam
import numpy
X = numpy.zeros((128, 10), dtype="f")
dY = numpy.zeros((128, 10), dtype="f")
model = Linear(10, 10)
Explanation: The network above will expect a list of arrays as input, where each array should have two columns with different numeric identifier features. The two features will be embedded using separate embedding tables, and the two vectors added and passed through a Maxout layer with layer normalization and dropout. The sequences then pass through two pooling functions, and the concatenated results are passed through 2 Relu layers with dropout and residual connections. Finally, the sequence vectors are passed through an output layer, which has a Softmax activation.
Using a model
Define the model:
End of explanation
model.initialize(X=X, Y=dY)
Explanation: Initialize the model with a sample of the data:
End of explanation
Y = model.predict(X)
Y
Explanation: Run the model over some data
End of explanation
Y, backprop = model.begin_update(X)
Y, backprop
Explanation: Get a callback to backpropagate:
End of explanation
dX = backprop(dY)
dX
Explanation: Run the callback to calculate the gradient with respect to the inputs. If the model has trainable parameters, gradients for the parameters are accumulated internally, as a side-effect.
End of explanation
optimizer = Adam()
model.finish_update(optimizer)
Explanation: The backprop() callback only increments the parameter gradients, it doesn't actually change the weights. To increment the weights, call model.finish_update(), passing it an optimizer:
End of explanation
dim = model.get_dim("nO")
W = model.get_param("W")
model.attrs["hello"] = "world"
model.attrs.get("foo", "bar")
Explanation: You can get and set dimensions, parameters and attributes by name:
End of explanation
dW = model.get_grad("W")
model.inc_grad("W", dW * 0.1)
Explanation: You can also retrieve parameter gradients, and increment them explicitly:
End of explanation
model_bytes = model.to_bytes()
Explanation: Finally, you can serialize models using the model.to_bytes and model.to_disk methods, and load them back with from_bytes and from_disk.
End of explanation |
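To close the loop, here is a minimal round-trip sketch (not part of the original notebook): rebuild a model with the same architecture, then restore the trained weights from the serialized bytes. The name model_reloaded is introduced here for illustration only.
# Sketch: deserialize the weights back into a freshly built model of the same shape
model_reloaded = Linear(10, 10)
model_reloaded.initialize(X=X, Y=dY)
model_reloaded = model_reloaded.from_bytes(model_bytes)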
6,905 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h3 STYLE="background
Step1: <h3 STYLE="background
Step2: <h3 STYLE="background
Step3: <h3 STYLE="background
Step4: You can use matplotlib's predefined colormaps for coloring. In the following example, points are colored by quality according to coolwarm. Examples of other colormaps can be found at http
Step5: There are several ways to draw a similar figure; for example, the following produces a subtly different result.
Step6: Since quality here is a discrete value rather than a continuous one, a plot like the following may be better.
Step7: If none of the built-in colormaps suits you, you can also create your own, as shown below.
Step8: <h3 STYLE="background
Step9: You can use matplotlib's predefined colormaps for coloring. In the following example, points are colored by quality according to coolwarm. Examples of other colormaps can be found at http
Step10: As before, you can also use a custom colormap of your own.
Step11: <h3 STYLE="background
Step12: A table full of numbers like the one above makes it hard to grasp the overall picture, so let's render it as a colormap.
Step13: You can see that quality is positively correlated with alcohol and negatively correlated with volatile acidity, among other things.
<h3 STYLE="background
Step14: We perform principal component analysis using PCA from the machine-learning library sklearn.
Step15: Principal component analysis works with linear combinations of the individual variables as principal components, so we need a measure of how much of the original data each principal component explains. This is called the contribution ratio (explained variance ratio). Accumulating the contribution ratios from the first principal component onward gives the cumulative contribution ratio.
Step16: This, too, can be colored with any colors you like.
Step17: By transposing the matrix with .T, you can swap rows and columns and perform principal component analysis on the transposed data.
Step18: <h4 style="padding | Python Code:
# Import libraries for numerical computation and DataFrame manipulation
import numpy as np
import pandas as pd
# Import the library that provides access to resources by URL.
# import urllib # for Python 2
import urllib.request # for Python 3
# Import libraries for drawing figures and graphs.
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from matplotlib.colors import LinearSegmentedColormap
from sklearn.decomposition import PCA # principal component analysis
Explanation: <h3 STYLE="background: #c2edff;padding: 0.5em;">Step 3. From loading real data to getting an overview</h3>
<ol>
<li><a href="#1">Loading the "wine quality" data</a>
<li><a href="#2">Histograms</a>
<li><a href="#3">Scatter plots</a>
<li><a href="#4">Scatter plot matrix</a>
<li><a href="#5">Correlation matrix</a>
<li><a href="#7">Principal component analysis</a>
<li><a href="#6">Exercises</a>
</ol>
<h4 style="border-bottom: solid 1px black;">Goal of Step 3</h4>
Visualize and survey real multivariate data with principal component analysis and other methods.
<img src="fig/pca.png">
End of explanation
# Specify the resource on the web
url = 'https://raw.githubusercontent.com/chemo-wakate/tutorial-6th/master/beginner/data/winequality-red.txt'
# Download the resource from the specified URL and give it a local name.
# urllib.urlretrieve(url, 'winequality-red.csv') # for Python 2
urllib.request.urlretrieve(url, 'winequality-red.txt') # for Python 3
# Load the data
df1 = pd.read_csv('winequality-red.txt', sep='\t', index_col=0)
df1 # check the contents
df1.T # .T transposes the matrix
Explanation: <h3 STYLE="background: #c2edff;padding: 0.5em;"><a name="1">1. Loading the "wine quality" data</a></h3>
The data were obtained from the <a href="http://archive.ics.uci.edu/ml/index.php" target="_blank">UC Irvine Machine Learning Repository</a> and slightly modified.
Red wine https://raw.githubusercontent.com/chemo-wakate/tutorial-6th/master/beginner/data/winequality-red.txt
White wine https://raw.githubusercontent.com/chemo-wakate/tutorial-6th/master/beginner/data/winequality-white.txt
<h4 style="border-bottom: solid 1px black;"> <a href="http://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality.names">Details</a></h4>
<ol>
<li>fixed acidity : fixed (non-volatile) acid concentration (mostly tartaric acid)
<li>volatile acidity : volatile acid concentration (mostly acetic acid)
<li>citric acid : citric acid concentration
<li>residual sugar : residual sugar concentration
<li>chlorides : chloride concentration
<li>free sulfur dioxide : free sulfur dioxide concentration
<li>total sulfur dioxide : total sulfur dioxide concentration
<li>density : density
<li>pH : pH
<li>sulphates : sulphate concentration
<li>alcohol : alcohol content
<li>quality (score between 0 and 10) : quality score from 0 to 10
</ol>
End of explanation
# Import libraries for drawing figures and graphs.
import matplotlib.pyplot as plt
%matplotlib inline
df1['fixed acidity'].hist()
df1['fixed acidity'].hist(figsize=(5, 5), bins=20) # increase the number of bins
# All columns can also be plotted at once
df1.hist(figsize=(20, 20), bins=20)
plt.show()
Explanation: <h3 STYLE="background: #c2edff;padding: 0.5em;"><a name="2">2. Histograms</a></h3>
End of explanation
df1.plot(kind='scatter', x=u'pH', y=u'alcohol', grid=True)
Explanation: <h3 STYLE="background: #c2edff;padding: 0.5em;"><a name="3">3. Scatter plots</a></h3>
You can pick any two columns and draw a scatter plot.
End of explanation
df1.plot(kind='scatter', x=u'pH', y=u'alcohol', \
c=df1['quality'], cmap='coolwarm', grid=True)
Explanation: You can use matplotlib's predefined colormaps for coloring. In the following example, points are colored by quality according to coolwarm. For examples of other colormaps, see http://www.scipy-lectures.org/intro/matplotlib/matplotlib.html.
End of explanation
plt.scatter(df1['pH'], df1['alcohol'], alpha=0.5, \
c=df1['quality'], cmap='coolwarm')
plt.colorbar(label='quality')
plt.xlabel('pH')
plt.ylabel('alcohol')
plt.grid()
Explanation: There are several ways to draw a similar figure; for example, the following produces a subtly different result.
End of explanation
cmap = plt.get_cmap('coolwarm')
colors = [cmap(c / 5) for c in np.arange(1, 6)]
fig, ax = plt.subplots(1, 1)
for i, (key, group) in enumerate(df1.groupby('quality')):
group.plot(kind='scatter', x=u'pH', y=u'alcohol', color=cmap(i / 5), ax=ax, label=key, alpha=0.5, grid=True)
Explanation: Since quality here is a discrete value rather than a continuous one, a plot like the following may be better.
End of explanation
dic = {'red': ((0, 0, 0), (0.5, 1, 1), (1, 1, 1)),
'green': ((0, 0, 0), (0.5, 1, 1), (1, 0, 0)),
'blue': ((0, 1, 1), (0.5, 0, 0), (1, 0, 0))}
tricolor_cmap = LinearSegmentedColormap('tricolor', dic)
plt.scatter(df1['pH'], df1['alcohol'], alpha=0.5, \
c=df1['quality'], cmap=tricolor_cmap)
plt.colorbar(label='quality')
plt.xlabel('pH')
plt.ylabel('alcohol')
plt.grid()
cmap = tricolor_cmap
colors = [cmap(c / 5) for c in np.arange(1, 6)]
fig, ax = plt.subplots(1, 1)
for i, (key, group) in enumerate(df1.groupby('quality')):
group.plot(kind='scatter', x=u'pH', y=u'alcohol', color=cmap(i / 5), ax=ax, label=key, alpha=0.5, grid=True)
Explanation: If none of the built-in colormaps suits you, you can also create your own, as shown below.
End of explanation
pd.plotting.scatter_matrix(df1.dropna(axis=1)[df1.columns[:]], figsize=(20, 20))
plt.show()
Explanation: <h3 STYLE="background: #c2edff;padding: 0.5em;"><a name="4">4. Scatter plot matrix</a></h3>
A scatter plot matrix is very convenient for surveying the relationships among many variables at once.
End of explanation
cmap = plt.get_cmap('coolwarm')
colors = [cmap((c - 3)/ 5) for c in df1['quality'].tolist()]
pd.plotting.scatter_matrix(df1.dropna(axis=1)[df1.columns[:]], figsize=(20, 20), color=colors)
plt.show()
Explanation: You can use matplotlib's predefined colormaps for coloring. In the following example, points are colored by quality according to coolwarm. For examples of other colormaps, see http://www.scipy-lectures.org/intro/matplotlib/matplotlib.html.
End of explanation
cmap = tricolor_cmap
colors = [cmap((c - 3)/ 5) for c in df1['quality'].tolist()]
pd.plotting.scatter_matrix(df1.dropna(axis=1)[df1.columns[:]], figsize=(20, 20), color=colors)
plt.show()
Explanation: As before, you can also use a custom colormap of your own.
End of explanation
pd.DataFrame(np.corrcoef(df1.T.dropna().iloc[:, :].as_matrix().tolist()),
columns=df1.columns, index=df1.columns)
Explanation: <h3 STYLE="background: #c2edff;padding: 0.5em;"><a name="5">5. Correlation matrix</a></h3>
For an overview of the relationships between variables, a correlation matrix showing all pairwise correlation coefficients is also convenient.
End of explanation
corrcoef = np.corrcoef(df1.dropna().iloc[:, :].T.as_matrix().tolist())
#plt.figure(figsize=(8, 8))
plt.imshow(corrcoef, interpolation='nearest', cmap=plt.cm.coolwarm)
plt.colorbar(label='correlation coefficient')
tick_marks = np.arange(len(corrcoef))
plt.xticks(tick_marks, df1.columns, rotation=90)
plt.yticks(tick_marks, df1.columns)
plt.tight_layout()
Explanation: A table full of numbers like the one above makes it hard to grasp the overall picture, so let's render it as a colormap.
End of explanation
dfs = df1.apply(lambda x: (x-x.mean())/x.std(), axis=0).fillna(0)
dfs.head() # show only the first 5 rows
Explanation: You can see that quality is positively correlated with alcohol and negatively correlated with volatile acidity, among other things.
<h3 STYLE="background: #c2edff;padding: 0.5em;"><a name="7">8. Principal component analysis</a></h3>
Before performing principal component analysis, it is common to normalize the data. A widely used normalization transforms each column so that it has mean 0 and variance 1, as follows.
End of explanation
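A quick check (a sketch, not part of the original notebook) confirms that the standardization behaved as expected: every column of dfs should now have a mean of roughly 0 and a standard deviation of roughly 1.
# Sketch: verify the standardization (values are ~0 and ~1 up to floating-point error)
print(dfs.mean().round(6).head())
print(dfs.std().round(6).head())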
pca = PCA()
pca.fit(dfs.iloc[:, :10])
# Project the data onto the principal component space (dimensionality reduction)
feature = pca.transform(dfs.iloc[:, :10])
#plt.figure(figsize=(6, 6))
plt.scatter(feature[:, 0], feature[:, 1], alpha=0.5)
plt.title('Principal Component Analysis')
plt.xlabel('The first principal component')
plt.ylabel('The second principal component')
plt.grid()
plt.show()
Explanation: We perform principal component analysis using PCA from the machine-learning library sklearn.
End of explanation
# Plot the cumulative contribution ratio
plt.gca().get_xaxis().set_major_locator(ticker.MaxNLocator(integer=True))
plt.plot([0] + list(np.cumsum(pca.explained_variance_ratio_)), '-o')
plt.xlabel('Number of principal components')
plt.ylabel('Cumulative contribution ratio')
plt.grid()
plt.show()
Explanation: Principal component analysis works with linear combinations of the individual variables as principal components, so we need a measure of how much of the original data each principal component explains. This is called the contribution ratio (explained variance ratio). Accumulating the contribution ratios from the first principal component onward gives the cumulative contribution ratio.
End of explanation
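To see the individual contribution ratios behind the cumulative curve, a short sketch like the following can be run; these are sklearn's explained_variance_ratio_ values, sorted in decreasing order and summing to 1 over all components.
# Sketch: per-component contribution ratios and their total
print(np.round(pca.explained_variance_ratio_, 3))
print(np.round(pca.explained_variance_ratio_.sum(), 3))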
pca = PCA()
pca.fit(dfs.iloc[:, :10])
# Project the data onto the principal component space (dimensionality reduction)
feature = pca.transform(dfs.iloc[:, :10])
#plt.figure(figsize=(6, 6))
plt.scatter(feature[:, 0], feature[:, 1], alpha=0.5, color=colors)
plt.title('Principal Component Analysis')
plt.xlabel('The first principal component')
plt.ylabel('The second principal component')
plt.grid()
plt.show()
Explanation: This, too, can be colored with any colors you like.
End of explanation
pca = PCA()
pca.fit(dfs.iloc[:, :10].T)
# Project the data onto the principal component space (dimensionality reduction)
feature = pca.transform(dfs.iloc[:, :10].T)
#plt.figure(figsize=(6, 6))
for x, y, name in zip(feature[:, 0], feature[:, 1], dfs.columns[:10]):
plt.text(x, y, name, alpha=0.8, size=8)
plt.scatter(feature[:, 0], feature[:, 1], alpha=0.5)
plt.title('Principal Component Analysis')
plt.xlabel('The first principal component')
plt.ylabel('The second principal component')
plt.grid()
plt.show()
# Plot the cumulative contribution ratio
plt.gca().get_xaxis().set_major_locator(ticker.MaxNLocator(integer=True))
plt.plot([0] + list(np.cumsum(pca.explained_variance_ratio_)), '-o')
plt.xlabel('Number of principal components')
plt.ylabel('Cumulative contribution ratio')
plt.grid()
plt.show()
Explanation: By transposing the matrix with .T, you can swap rows and columns and perform principal component analysis on the transposed data.
End of explanation
# Exercise 3.1
Explanation: <h4 style="padding: 0.25em 0.5em;color: #494949;background: transparent;border-left: solid 5px #7db4e6;"><a name="6">Exercise 3.1</a></h4>
Load the white wine data (https://raw.githubusercontent.com/chemo-wakate/tutorial-6th/master/beginner/data/winequality-white.txt) and draw histograms, a scatter plot matrix, and a correlation matrix.
End of explanation |
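One possible solution sketch for Exercise 3.1, reusing the red-wine code above and assuming the white-wine file is tab-separated in the same way; url_white, df2 and corrcoef2 are names introduced here for illustration.
# Sketch of a solution to Exercise 3.1: repeat the analysis on the white wine data
url_white = 'https://raw.githubusercontent.com/chemo-wakate/tutorial-6th/master/beginner/data/winequality-white.txt'
urllib.request.urlretrieve(url_white, 'winequality-white.txt')
df2 = pd.read_csv('winequality-white.txt', sep='\t', index_col=0)
# Histograms
df2.hist(figsize=(20, 20), bins=20)
plt.show()
# Scatter plot matrix
pd.plotting.scatter_matrix(df2.dropna(axis=1), figsize=(20, 20))
plt.show()
# Correlation matrix rendered as a colormap
corrcoef2 = np.corrcoef(df2.dropna().T.values.tolist())
plt.imshow(corrcoef2, interpolation='nearest', cmap=plt.cm.coolwarm)
plt.colorbar(label='correlation coefficient')
tick_marks = np.arange(len(corrcoef2))
plt.xticks(tick_marks, df2.columns, rotation=90)
plt.yticks(tick_marks, df2.columns)
plt.tight_layout()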
6,906 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Appendix D – Autodiff
This notebook contains toy implementations of various autodiff techniques, to explain how they work.
<table align="left">
<td>
<a target="_blank" href="https
Step1: Introduction
Suppose we want to compute the gradients of the function $f(x,y)=x^2y + y + 2$ with regards to the parameters x and y
Step2: One approach is to solve this analytically
Step3: So for example $\dfrac{\partial f}{\partial x}(3,4) = 24$ and $\dfrac{\partial f}{\partial y}(3,4) = 10$.
Step4: Perfect! We can also find the equations for the second order derivatives (also called Hessians)
Step5: Perfect, but this requires some mathematical work. It is not too hard in this case, but for a deep neural network, it is practically impossible to compute the derivatives this way. So let's look at various ways to automate this!
Numeric differentiation
Here, we compute an approximation of the gradients using the equation
Step6: It works well!
The good news is that it is pretty easy to compute the Hessians. First let's create functions that compute the first order derivatives (also called Jacobians)
Step7: Now we can simply apply the gradients() function to these functions
Step8: So everything works well, but the result is approximate, and computing the gradients of a function with regards to $n$ variables requires calling that function $n$ times. In deep neural nets, there are often thousands of parameters to tweak using gradient descent (which requires computing the gradients of the loss function with regards to each of these parameters), so this approach would be much too slow.
Implementing a Toy Computation Graph
Rather than this numerical approach, let's implement some symbolic autodiff techniques. For this, we will need to define classes to represent constants, variables and operations.
Step9: Good, now we can build a computation graph to represent the function $f$
Step10: And we can run this graph to compute $f$ at any point, for example $f(3, 4)$.
Step11: Perfect, it found the ultimate answer.
Computing gradients
The autodiff methods we will present below are all based on the chain rule.
Suppose we have two functions $u$ and $v$, and we apply them sequentially to some input $x$, and we get the result $z$. So we have $z = v(u(x))$, which we can rewrite as $z = v(s)$ and $s = u(x)$. Now we can apply the chain rule to get the partial derivative of the output $z$ with regards to the input $x$
Step12: Look good. Now let's do the same thing using reverse mode autodiff. This time the algorithm would start from the right hand side so it would compute $\dfrac{\partial z}{\partial s_1} = \dfrac{\partial \sin(s_1)}{\partial s_1}=\cos(s_1)=\cos(3^2)\approx -0.91$. Next it would compute $\dfrac{\partial z}{\partial x}=\dfrac{\partial s_1}{\partial x}\cdot\dfrac{\partial z}{\partial s_1} \approx \dfrac{\partial s_1}{\partial x} \cdot -0.91 = \dfrac{\partial x^2}{\partial x} \cdot -0.91=2x \cdot -0.91 = 6\cdot-0.91=-5.46$.
Of course both approaches give the same result (except for rounding errors), and with a single input and output they involve the same number of computations. But when there are several inputs or outputs, they can have very different performance. Indeed, if there are many inputs, the right-most terms will be needed to compute the partial derivatives with regards to each input, so it is a good idea to compute these right-most terms first. That means using reverse-mode autodiff. This way, the right-most terms can be computed just once and used to compute all the partial derivatives. Conversely, if there are many outputs, forward-mode is generally preferable because the left-most terms can be computed just once to compute the partial derivatives of the different outputs. In Deep Learning, there are typically thousands of model parameters, meaning there are lots of inputs, but few outputs. In fact, there is generally just one output during training
Step13: Since the output of the gradient() method is fully symbolic, we are not limited to the first order derivatives, we can also compute second order derivatives, and so on
Step14: Note that the result is now exact, not an approximation (up to the limit of the machine's float precision, of course).
Forward mode autodiff using dual numbers
A nice way to apply forward mode autodiff is to use dual numbers. In short, a dual number $z$ has the form $z = a + b\epsilon$, where $a$ and $b$ are real numbers, and $\epsilon$ is an infinitesimal number, positive but smaller than all real numbers, and such that $\epsilon^2=0$.
It can be shown that $f(x + \epsilon) = f(x) + \dfrac{\partial f}{\partial x}\epsilon$, so simply by computing $f(x + \epsilon)$ we get both the value of $f(x)$ and the partial derivative of $f$ with regards to $x$.
Dual numbers have their own arithmetic rules, which are generally quite natural. For example
Step15: $3 + (3 + 4 \epsilon) = 6 + 4\epsilon$
Step16: $(3 + 4ε)\times(5 + 7ε)$ = $3 \times 5 + 3 \times 7ε + 4ε \times 5 + 4ε \times 7ε$ = $15 + 21ε + 20ε + 28ε^2$ = $15 + 41ε + 28 \times 0$ = $15 + 41ε$
Step17: Now let's see if the dual numbers work with our toy computation framework
Step18: Yep, sure works. Now let's use this to compute the partial derivatives of $f$ with regards to $x$ and $y$ at x=3 and y=4
Step19: Great! However, in this implementation we are limited to first order derivatives.
Now let's look at reverse mode.
Reverse mode autodiff
Let's rewrite our toy framework to add reverse mode autodiff
Step20: Again, in this implementation the outputs are just numbers, not symbolic expressions, so we are limited to first order derivatives. However, we could have made the backpropagate() methods return symbolic expressions rather than values (e.g., return Add(2,3) rather than 5). This would make it possible to compute second order gradients (and beyond). This is what TensorFlow does, as do all the major libraries that implement autodiff.
Reverse mode autodiff using TensorFlow
Step21: Since everything is symbolic, we can compute second order derivatives, and beyond. However, when we compute the derivative of a tensor with regards to a variable that it does not depend on, instead of returning 0.0, the gradients() function returns None, which cannot be evaluated by sess.run(). So beware of None values. Here we just replace them with zero tensors. | Python Code:
# To support both python 2 and python 3
from __future__ import absolute_import, division, print_function, unicode_literals
Explanation: Appendix D – Autodiff
This notebook contains toy implementations of various autodiff techniques, to explain how they work.
<table align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/ageron/handson-ml/blob/master/extra_autodiff.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
</table>
Warning: this is the code for the 1st edition of the book. Please visit https://github.com/ageron/handson-ml2 for the 2nd edition code, with up-to-date notebooks using the latest library versions. In particular, the 1st edition is based on TensorFlow 1, while the 2nd edition uses TensorFlow 2, which is much simpler to use.
Setup
First, let's make sure this notebook works well in both python 2 and 3:
End of explanation
def f(x,y):
return x*x*y + y + 2
Explanation: Introduction
Suppose we want to compute the gradients of the function $f(x,y)=x^2y + y + 2$ with regards to the parameters x and y:
End of explanation
def df(x,y):
return 2*x*y, x*x + 1
Explanation: One approach is to solve this analytically:
$\dfrac{\partial f}{\partial x} = 2xy$
$\dfrac{\partial f}{\partial y} = x^2 + 1$
End of explanation
df(3, 4)
Explanation: So for example $\dfrac{\partial f}{\partial x}(3,4) = 24$ and $\dfrac{\partial f}{\partial y}(3,4) = 10$.
End of explanation
def d2f(x, y):
return [2*y, 2*x], [2*x, 0]
d2f(3, 4)
Explanation: Perfect! We can also find the equations for the second order derivatives (also called Hessians):
$\dfrac{\partial^2 f}{\partial x \partial x} = \dfrac{\partial (2xy)}{\partial x} = 2y$
$\dfrac{\partial^2 f}{\partial x \partial y} = \dfrac{\partial (2xy)}{\partial y} = 2x$
$\dfrac{\partial^2 f}{\partial y \partial x} = \dfrac{\partial (x^2 + 1)}{\partial x} = 2x$
$\dfrac{\partial^2 f}{\partial y \partial y} = \dfrac{\partial (x^2 + 1)}{\partial y} = 0$
At x=3 and y=4, these Hessians are respectively 8, 6, 6, 0. Let's use the equations above to compute them:
End of explanation
def gradients(func, vars_list, eps=0.0001):
partial_derivatives = []
base_func_eval = func(*vars_list)
for idx in range(len(vars_list)):
tweaked_vars = vars_list[:]
tweaked_vars[idx] += eps
tweaked_func_eval = func(*tweaked_vars)
derivative = (tweaked_func_eval - base_func_eval) / eps
partial_derivatives.append(derivative)
return partial_derivatives
def df(x, y):
return gradients(f, [x, y])
df(3, 4)
Explanation: Perfect, but this requires some mathematical work. It is not too hard in this case, but for a deep neural network, it is practically impossible to compute the derivatives this way. So let's look at various ways to automate this!
Numeric differentiation
Here, we compute an approximation of the gradients using the equation: $\dfrac{\partial f}{\partial x} = \displaystyle{\lim_{\epsilon \to 0}}\dfrac{f(x+\epsilon, y) - f(x, y)}{\epsilon}$ (and there is a similar definition for $\dfrac{\partial f}{\partial y}$).
End of explanation
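A hedged aside (not in the original notebook): a central-difference variant of the same idea usually approximates the gradient more accurately for the same step size, with error on the order of eps**2 instead of eps. The helper below is just a sketch.
# Sketch: central-difference numeric gradients
def gradients_central(func, vars_list, eps=1e-4):
    partial_derivatives = []
    for idx in range(len(vars_list)):
        plus_vars, minus_vars = vars_list[:], vars_list[:]
        plus_vars[idx] += eps
        minus_vars[idx] -= eps
        partial_derivatives.append((func(*plus_vars) - func(*minus_vars)) / (2 * eps))
    return partial_derivatives

gradients_central(f, [3., 4.])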
def dfdx(x, y):
return gradients(f, [x,y])[0]
def dfdy(x, y):
return gradients(f, [x,y])[1]
dfdx(3., 4.), dfdy(3., 4.)
Explanation: It works well!
The good news is that it is pretty easy to compute the Hessians. First let's create functions that compute the first order derivatives (also called Jacobians):
End of explanation
def d2f(x, y):
    return [gradients(dfdx, [x, y]), gradients(dfdy, [x, y])]
d2f(3, 4)
Explanation: Now we can simply apply the gradients() function to these functions:
End of explanation
class Const(object):
def __init__(self, value):
self.value = value
def evaluate(self):
return self.value
def __str__(self):
return str(self.value)
class Var(object):
def __init__(self, name, init_value=0):
self.value = init_value
self.name = name
def evaluate(self):
return self.value
def __str__(self):
return self.name
class BinaryOperator(object):
def __init__(self, a, b):
self.a = a
self.b = b
class Add(BinaryOperator):
def evaluate(self):
return self.a.evaluate() + self.b.evaluate()
def __str__(self):
return "{} + {}".format(self.a, self.b)
class Mul(BinaryOperator):
def evaluate(self):
return self.a.evaluate() * self.b.evaluate()
def __str__(self):
return "({}) * ({})".format(self.a, self.b)
Explanation: So everything works well, but the result is approximate, and computing the gradients of a function with regards to $n$ variables requires calling that function $n$ times. In deep neural nets, there are often thousands of parameters to tweak using gradient descent (which requires computing the gradients of the loss function with regards to each of these parameters), so this approach would be much too slow.
Implementing a Toy Computation Graph
Rather than this numerical approach, let's implement some symbolic autodiff techniques. For this, we will need to define classes to represent constants, variables and operations.
End of explanation
x = Var("x")
y = Var("y")
f = Add(Mul(Mul(x, x), y), Add(y, Const(2))) # f(x,y) = x²y + y + 2
Explanation: Good, now we can build a computation graph to represent the function $f$:
End of explanation
x.value = 3
y.value = 4
f.evaluate()
Explanation: And we can run this graph to compute $f$ at any point, for example $f(3, 4)$.
End of explanation
from math import sin
def z(x):
return sin(x**2)
gradients(z, [3])
Explanation: Perfect, it found the ultimate answer.
Computing gradients
The autodiff methods we will present below are all based on the chain rule.
Suppose we have two functions $u$ and $v$, and we apply them sequentially to some input $x$, and we get the result $z$. So we have $z = v(u(x))$, which we can rewrite as $z = v(s)$ and $s = u(x)$. Now we can apply the chain rule to get the partial derivative of the output $z$ with regards to the input $x$:
$ \dfrac{\partial z}{\partial x} = \dfrac{\partial s}{\partial x} \cdot \dfrac{\partial z}{\partial s}$
Now if $z$ is the output of a sequence of functions which have intermediate outputs $s_1, s_2, ..., s_n$, the chain rule still applies:
$ \dfrac{\partial z}{\partial x} = \dfrac{\partial s_1}{\partial x} \cdot \dfrac{\partial s_2}{\partial s_1} \cdot \dfrac{\partial s_3}{\partial s_2} \cdot \dots \cdot \dfrac{\partial s_{n-1}}{\partial s_{n-2}} \cdot \dfrac{\partial s_n}{\partial s_{n-1}} \cdot \dfrac{\partial z}{\partial s_n}$
In forward mode autodiff, the algorithm computes these terms "forward" (i.e., in the same order as the computations required to compute the output $z$), that is from left to right: first $\dfrac{\partial s_1}{\partial x}$, then $\dfrac{\partial s_2}{\partial s_1}$, and so on. In reverse mode autodiff, the algorithm computes these terms "backwards", from right to left: first $\dfrac{\partial z}{\partial s_n}$, then $\dfrac{\partial s_n}{\partial s_{n-1}}$, and so on.
For example, suppose you want to compute the derivative of the function $z(x)=\sin(x^2)$ at x=3, using forward mode autodiff. The algorithm would first compute the partial derivative $\dfrac{\partial s_1}{\partial x}=\dfrac{\partial x^2}{\partial x}=2x=6$. Next, it would compute $\dfrac{\partial z}{\partial x}=\dfrac{\partial s_1}{\partial x}\cdot\dfrac{\partial z}{\partial s_1}= 6 \cdot \dfrac{\partial \sin(s_1)}{\partial s_1}=6 \cdot \cos(s_1)=6 \cdot \cos(3^2)\approx-5.46$.
Let's verify this result using the gradients() function defined earlier:
End of explanation
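As a quick sanity check (a sketch): the analytic derivative of sin(x**2) is 2*x*cos(x**2), which at x=3 gives roughly -5.467, matching the numeric estimate above.
# Sketch: analytic derivative of sin(x**2) at x=3
from math import cos
2 * 3 * cos(3**2)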
Const.gradient = lambda self, var: Const(0)
Var.gradient = lambda self, var: Const(1) if self is var else Const(0)
Add.gradient = lambda self, var: Add(self.a.gradient(var), self.b.gradient(var))
Mul.gradient = lambda self, var: Add(Mul(self.a, self.b.gradient(var)), Mul(self.a.gradient(var), self.b))
x = Var(name="x", init_value=3.)
y = Var(name="y", init_value=4.)
f = Add(Mul(Mul(x, x), y), Add(y, Const(2))) # f(x,y) = x²y + y + 2
dfdx = f.gradient(x) # 2xy
dfdy = f.gradient(y) # x² + 1
dfdx.evaluate(), dfdy.evaluate()
Explanation: Look good. Now let's do the same thing using reverse mode autodiff. This time the algorithm would start from the right hand side so it would compute $\dfrac{\partial z}{\partial s_1} = \dfrac{\partial \sin(s_1)}{\partial s_1}=\cos(s_1)=\cos(3^2)\approx -0.91$. Next it would compute $\dfrac{\partial z}{\partial x}=\dfrac{\partial s_1}{\partial x}\cdot\dfrac{\partial z}{\partial s_1} \approx \dfrac{\partial s_1}{\partial x} \cdot -0.91 = \dfrac{\partial x^2}{\partial x} \cdot -0.91=2x \cdot -0.91 = 6\cdot-0.91=-5.46$.
Of course both approaches give the same result (except for rounding errors), and with a single input and output they involve the same number of computations. But when there are several inputs or outputs, they can have very different performance. Indeed, if there are many inputs, the right-most terms will be needed to compute the partial derivatives with regards to each input, so it is a good idea to compute these right-most terms first. That means using reverse-mode autodiff. This way, the right-most terms can be computed just once and used to compute all the partial derivatives. Conversely, if there are many outputs, forward-mode is generally preferable because the left-most terms can be computed just once to compute the partial derivatives of the different outputs. In Deep Learning, there are typically thousands of model parameters, meaning there are lots of inputs, but few outputs. In fact, there is generally just one output during training: the loss. This is why reverse mode autodiff is used in TensorFlow and all major Deep Learning libraries.
There's one additional complexity in reverse mode autodiff: the value of $s_i$ is generally required when computing $\dfrac{\partial s_{i+1}}{\partial s_i}$, and computing $s_i$ requires first computing $s_{i-1}$, which requires computing $s_{i-2}$, and so on. So basically, a first pass forward through the network is required to compute $s_1$, $s_2$, $s_3$, $\dots$, $s_{n-1}$ and $s_n$, and then the algorithm can compute the partial derivatives from right to left. Storing all the intermediate values $s_i$ in RAM is sometimes a problem, especially when handling images, and when using GPUs which often have limited RAM: to limit this problem, one can reduce the number of layers in the neural network, or configure TensorFlow to make it swap these values from GPU RAM to CPU RAM. Another approach is to only cache every other intermediate value, $s_1$, $s_3$, $s_5$, $\dots$, $s_{n-4}$, $s_{n-2}$ and $s_n$. This means that when the algorithm computes the partial derivatives, if an intermediate value $s_i$ is missing, it will need to recompute it based on the previous intermediate value $s_{i-1}$. This trades off CPU for RAM (if you are interested, check out this paper).
Forward mode autodiff
End of explanation
d2fdxdx = dfdx.gradient(x) # 2y
d2fdxdy = dfdx.gradient(y) # 2x
d2fdydx = dfdy.gradient(x) # 2x
d2fdydy = dfdy.gradient(y) # 0
[[d2fdxdx.evaluate(), d2fdxdy.evaluate()],
[d2fdydx.evaluate(), d2fdydy.evaluate()]]
Explanation: Since the output of the gradient() method is fully symbolic, we are not limited to the first order derivatives, we can also compute second order derivatives, and so on:
End of explanation
class DualNumber(object):
def __init__(self, value=0.0, eps=0.0):
self.value = value
self.eps = eps
def __add__(self, b):
return DualNumber(self.value + self.to_dual(b).value,
self.eps + self.to_dual(b).eps)
def __radd__(self, a):
return self.to_dual(a).__add__(self)
def __mul__(self, b):
return DualNumber(self.value * self.to_dual(b).value,
self.eps * self.to_dual(b).value + self.value * self.to_dual(b).eps)
def __rmul__(self, a):
return self.to_dual(a).__mul__(self)
def __str__(self):
if self.eps:
return "{:.1f} + {:.1f}ε".format(self.value, self.eps)
else:
return "{:.1f}".format(self.value)
def __repr__(self):
return str(self)
@classmethod
def to_dual(cls, n):
if hasattr(n, "value"):
return n
else:
return cls(n)
Explanation: Note that the result is now exact, not an approximation (up to the limit of the machine's float precision, of course).
Forward mode autodiff using dual numbers
A nice way to apply forward mode autodiff is to use dual numbers. In short, a dual number $z$ has the form $z = a + b\epsilon$, where $a$ and $b$ are real numbers, and $\epsilon$ is an infinitesimal number, positive but smaller than all real numbers, and such that $\epsilon^2=0$.
It can be shown that $f(x + \epsilon) = f(x) + \dfrac{\partial f}{\partial x}\epsilon$, so simply by computing $f(x + \epsilon)$ we get both the value of $f(x)$ and the partial derivative of $f$ with regards to $x$.
Dual numbers have their own arithmetic rules, which are generally quite natural. For example:
Addition
$(a_1 + b_1\epsilon) + (a_2 + b_2\epsilon) = (a_1 + a_2) + (b_1 + b_2)\epsilon$
Subtraction
$(a_1 + b_1\epsilon) - (a_2 + b_2\epsilon) = (a_1 - a_2) + (b_1 - b_2)\epsilon$
Multiplication
$(a_1 + b_1\epsilon) \times (a_2 + b_2\epsilon) = (a_1 a_2) + (a_1 b_2 + a_2 b_1)\epsilon + b_1 b_2\epsilon^2 = (a_1 a_2) + (a_1b_2 + a_2b_1)\epsilon$
Division
$\dfrac{a_1 + b_1\epsilon}{a_2 + b_2\epsilon} = \dfrac{a_1 + b_1\epsilon}{a_2 + b_2\epsilon} \cdot \dfrac{a_2 - b_2\epsilon}{a_2 - b_2\epsilon} = \dfrac{a_1 a_2 + (b_1 a_2 - a_1 b_2)\epsilon - b_1 b_2\epsilon^2}{{a_2}^2 + (a_2 b_2 - a_2 b_2)\epsilon - {b_2}^2\epsilon^2} = \dfrac{a_1}{a_2} + \dfrac{b_1 a_2 - a_1 b_2}{{a_2}^2}\epsilon$
Power
$(a + b\epsilon)^n = a^n + (n a^{n-1}b)\epsilon$
etc.
Let's create a class to represent dual numbers, and implement a few operations (addition and multiplication). You can try adding some more if you want.
End of explanation
3 + DualNumber(3, 4)
Explanation: $3 + (3 + 4 \epsilon) = 6 + 4\epsilon$
End of explanation
DualNumber(3, 4) * DualNumber(5, 7)
Explanation: $(3 + 4ε)\times(5 + 7ε)$ = $3 \times 5 + 3 \times 7ε + 4ε \times 5 + 4ε \times 7ε$ = $15 + 21ε + 20ε + 28ε^2$ = $15 + 41ε + 28 \times 0$ = $15 + 41ε$
End of explanation
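As invited above, here is one possible extension (a sketch): subtraction, following the rule (a1 + b1ε) - (a2 + b2ε) = (a1 - a2) + (b1 - b2)ε. It is attached to the class after the fact, in the same monkey-patching style used later for the gradient methods.
# Sketch: add subtraction support to DualNumber
def _dual_sub(self, b):
    b = DualNumber.to_dual(b)
    return DualNumber(self.value - b.value, self.eps - b.eps)

def _dual_rsub(self, a):
    return DualNumber.to_dual(a).__sub__(self)

DualNumber.__sub__ = _dual_sub
DualNumber.__rsub__ = _dual_rsub

DualNumber(3, 4) - DualNumber(5, 7) # expected: -2.0 + -3.0ε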
x.value = DualNumber(3.0)
y.value = DualNumber(4.0)
f.evaluate()
Explanation: Now let's see if the dual numbers work with our toy computation framework:
End of explanation
x.value = DualNumber(3.0, 1.0) # 3 + ε
y.value = DualNumber(4.0) # 4
dfdx = f.evaluate().eps
x.value = DualNumber(3.0) # 3
y.value = DualNumber(4.0, 1.0) # 4 + ε
dfdy = f.evaluate().eps
dfdx
dfdy
Explanation: Yep, sure works. Now let's use this to compute the partial derivatives of $f$ with regards to $x$ and $y$ at x=3 and y=4:
End of explanation
class Const(object):
def __init__(self, value):
self.value = value
def evaluate(self):
return self.value
def backpropagate(self, gradient):
pass
def __str__(self):
return str(self.value)
class Var(object):
def __init__(self, name, init_value=0):
self.value = init_value
self.name = name
self.gradient = 0
def evaluate(self):
return self.value
def backpropagate(self, gradient):
self.gradient += gradient
def __str__(self):
return self.name
class BinaryOperator(object):
def __init__(self, a, b):
self.a = a
self.b = b
class Add(BinaryOperator):
def evaluate(self):
self.value = self.a.evaluate() + self.b.evaluate()
return self.value
def backpropagate(self, gradient):
self.a.backpropagate(gradient)
self.b.backpropagate(gradient)
def __str__(self):
return "{} + {}".format(self.a, self.b)
class Mul(BinaryOperator):
def evaluate(self):
self.value = self.a.evaluate() * self.b.evaluate()
return self.value
def backpropagate(self, gradient):
self.a.backpropagate(gradient * self.b.value)
self.b.backpropagate(gradient * self.a.value)
def __str__(self):
return "({}) * ({})".format(self.a, self.b)
x = Var("x", init_value=3)
y = Var("y", init_value=4)
f = Add(Mul(Mul(x, x), y), Add(y, Const(2))) # f(x,y) = x²y + y + 2
result = f.evaluate()
f.backpropagate(1.0)
print(f)
result
x.gradient
y.gradient
Explanation: Great! However, in this implementation we are limited to first order derivatives.
Now let's look at reverse mode.
Reverse mode autodiff
Let's rewrite our toy framework to add reverse mode autodiff:
End of explanation
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 1.x
except Exception:
pass
import tensorflow as tf
tf.reset_default_graph()
x = tf.Variable(3., name="x")
y = tf.Variable(4., name="y")
f = x*x*y + y + 2
jacobians = tf.gradients(f, [x, y])
init = tf.global_variables_initializer()
with tf.Session() as sess:
init.run()
f_val, jacobians_val = sess.run([f, jacobians])
f_val, jacobians_val
Explanation: Again, in this implementation the outputs are just numbers, not symbolic expressions, so we are limited to first order derivatives. However, we could have made the backpropagate() methods return symbolic expressions rather than values (e.g., return Add(2,3) rather than 5). This would make it possible to compute second order gradients (and beyond). This is what TensorFlow does, as do all the major libraries that implement autodiff.
Reverse mode autodiff using TensorFlow
End of explanation
hessians_x = tf.gradients(jacobians[0], [x, y])
hessians_y = tf.gradients(jacobians[1], [x, y])
def replace_none_with_zero(tensors):
return [tensor if tensor is not None else tf.constant(0.)
for tensor in tensors]
hessians_x = replace_none_with_zero(hessians_x)
hessians_y = replace_none_with_zero(hessians_y)
init = tf.global_variables_initializer()
with tf.Session() as sess:
init.run()
hessians_x_val, hessians_y_val = sess.run([hessians_x, hessians_y])
hessians_x_val, hessians_y_val
Explanation: Since everything is symbolic, we can compute second order derivatives, and beyond. However, when we compute the derivative of a tensor with regards to a variable that it does not depend on, instead of returning 0.0, the gradients() function returns None, which cannot be evaluated by sess.run(). So beware of None values. Here we just replace them with zero tensors.
End of explanation |
6,907 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Parsing events from raw data
This tutorial describes how to read experimental events from raw recordings,
and how to convert between the two different representations of events within
MNE-Python (Events arrays and Annotations objects).
Step1: The Events and Annotations data structures
Generally speaking, both the Events and
Step2: You can see that STI 014 (the summation channel) contains pulses of
different magnitudes whereas pulses on other channels have consistent
magnitudes. You can also see that every time there is a pulse on one of the
other STIM channels, there is a corresponding pulse on STI 014.
.. TODO
Step3: .. sidebar
Step4: The core data within an
Step5: More information on working with
Step6: If you want to control which integers are mapped to each unique description
value, you can pass a
Step7: To make the opposite conversion (from Events array to
Step8: Now, the annotations will appear automatically when plotting the raw data,
and will be color-coded by their label value
Step9: Making multiple events per annotation
As mentioned above, you can generate equally-spaced events from an
Step10: Now we can check that our events indeed fall in the ranges 5-21 seconds and
41-52 seconds, and are ~1.5 seconds apart (modulo some jitter due to the
sampling frequency). Here are the event times rounded to the nearest
millisecond | Python Code:
import os
import numpy as np
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file)
raw.crop(tmax=60).load_data()
Explanation: Parsing events from raw data
This tutorial describes how to read experimental events from raw recordings,
and how to convert between the two different representations of events within
MNE-Python (Events arrays and Annotations objects).
In the introductory tutorial <overview-tut-events-section> we saw an
example of reading experimental events from a :term:"STIM" channel <stim
channel>; here we'll discuss :term:events and :term:annotations more
broadly, give more detailed information about reading from STIM channels, and
give an example of reading events that are in a marker file or included in the
data file as an embedded array. The tutorials tut-event-arrays and
tut-annotate-raw discuss how to plot, combine, load, save, and
export :term:events and :class:~mne.Annotations (respectively), and the
latter tutorial also covers interactive annotation of :class:~mne.io.Raw
objects.
We'll begin by loading the Python modules we need, and loading the same
example data <sample-dataset> we used in the introductory tutorial
<tut-overview>, but to save memory we'll crop the :class:~mne.io.Raw object
to just 60 seconds before loading it into RAM:
End of explanation
raw.copy().pick_types(meg=False, stim=True).plot(start=3, duration=6)
Explanation: The Events and Annotations data structures
Generally speaking, both the Events and :class:~mne.Annotations data
structures serve the same purpose: they provide a mapping between times
during an EEG/MEG recording and a description of what happened at those
times. In other words, they associate a when with a what. The main
differences are:
Units: the Events data structure represents the when in terms of
samples, whereas the :class:~mne.Annotations data structure represents
the when in seconds.
Limits on the description: the Events data structure represents the
what as an integer "Event ID" code, whereas the
:class:~mne.Annotations data structure represents the what as a
string.
How duration is encoded: Events in an Event array do not have a
duration (though it is possible to represent duration with pairs of
onset/offset events within an Events array), whereas each element of an
:class:~mne.Annotations object necessarily includes a duration (though
the duration can be zero if an instantaneous event is desired).
Internal representation: Events are stored as an ordinary
:class:NumPy array <numpy.ndarray>, whereas :class:~mne.Annotations is
a :class:list-like class defined in MNE-Python.
What is a STIM channel?
A :term:stim channel (short for "stimulus channel") is a channel that does
not receive signals from an EEG, MEG, or other sensor. Instead, STIM channels
record voltages (usually short, rectangular DC pulses of fixed magnitudes
sent from the experiment-controlling computer) that are time-locked to
experimental events, such as the onset of a stimulus or a button-press
response by the subject (those pulses are sometimes called TTL_ pulses,
event pulses, trigger signals, or just "triggers"). In other cases, these
pulses may not be strictly time-locked to an experimental event, but instead
may occur in between trials to indicate the type of stimulus (or experimental
condition) that is about to occur on the upcoming trial.
The DC pulses may be all on one STIM channel (in which case different
experimental events or trial types are encoded as different voltage
magnitudes), or they may be spread across several channels, in which case the
channel(s) on which the pulse(s) occur can be used to encode different events
or conditions. Even on systems with multiple STIM channels, there is often
one channel that records a weighted sum of the other STIM channels, in such a
way that voltage levels on that channel can be unambiguously decoded as
particular event types. On older Neuromag systems (such as that used to
record the sample data) this "summation channel" was typically STI 014;
on newer systems it is more commonly STI101. You can see the STIM
channels in the raw data file here:
End of explanation
events = mne.find_events(raw, stim_channel='STI 014')
print(events[:5]) # show the first 5
Explanation: You can see that STI 014 (the summation channel) contains pulses of
different magnitudes whereas pulses on other channels have consistent
magnitudes. You can also see that every time there is a pulse on one of the
other STIM channels, there is a corresponding pulse on STI 014.
.. TODO: somewhere in prev. section, link out to a table of which systems
have STIM channels vs. which have marker files or embedded event arrays
(once such a table has been created).
Converting a STIM channel signal to an Events array
If your data has events recorded on a STIM channel, you can convert them into
an events array using :func:mne.find_events. The sample number of the onset
(or offset) of each pulse is recorded as the event time, the pulse magnitudes
are converted into integers, and these pairs of sample numbers plus integer
codes are stored in :class:NumPy arrays <numpy.ndarray> (usually called
"the events array" or just "the events"). In its simplest form, the function
requires only the :class:~mne.io.Raw object, and the name of the channel(s)
from which to read events:
End of explanation
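find_events also takes several optional parameters. As a quick sketch (the exact counts depend on the recording), requiring a minimum pulse duration in seconds skips spurious one-sample glitches on the STIM channel:
# Sketch: ignore pulses shorter than ~2 ms
events_min = mne.find_events(raw, stim_channel='STI 014', min_duration=0.002)
print(len(events), len(events_min))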
testing_data_folder = mne.datasets.testing.data_path()
eeglab_raw_file = os.path.join(testing_data_folder, 'EEGLAB', 'test_raw.set')
eeglab_raw = mne.io.read_raw_eeglab(eeglab_raw_file)
print(eeglab_raw.annotations)
Explanation: .. sidebar:: The middle column of the Events array
MNE-Python events are actually *three* values: in between the sample
number and the integer event code is a value indicating what the event
code was on the immediately preceding sample. In practice, that value is
almost always `0`, but it can be used to detect the *endpoint* of an
event whose duration is longer than one sample. See the documentation of
:func:`mne.find_events` for more details.
If you don't provide the name of a STIM channel, :func:~mne.find_events
will first look for MNE-Python config variables <tut-configure-mne>
for variables MNE_STIM_CHANNEL, MNE_STIM_CHANNEL_1, etc. If those are
not found, channels STI 014 and STI101 are tried, followed by the
first channel with type "STIM" present in raw.ch_names. If you regularly
work with data from several different MEG systems with different STIM channel
names, setting the MNE_STIM_CHANNEL config variable may not be very
useful, but for researchers whose data is all from a single system it can be
a time-saver to configure that variable once and then forget about it.
:func:~mne.find_events has several options, including options for aligning
events to the onset or offset of the STIM channel pulses, setting the minimum
pulse duration, and handling of consecutive pulses (with no return to zero
between them). For example, you can effectively encode event duration by
passing output='step' to :func:mne.find_events; see the documentation
of :func:~mne.find_events for details. More information on working with
events arrays (including how to plot, combine, load, and save event arrays)
can be found in the tutorial tut-event-arrays.
Reading embedded events as Annotations
Some EEG/MEG systems generate files where events are stored in a separate
data array rather than as pulses on one or more STIM channels. For example,
the EEGLAB format stores events as a collection of arrays in the :file:.set
file. When reading those files, MNE-Python will automatically convert the
stored events into an :class:~mne.Annotations object and store it as the
:attr:~mne.io.Raw.annotations attribute of the :class:~mne.io.Raw object:
End of explanation
print(len(eeglab_raw.annotations))
print(set(eeglab_raw.annotations.duration))
print(set(eeglab_raw.annotations.description))
print(eeglab_raw.annotations.onset[0])
Explanation: The core data within an :class:~mne.Annotations object is accessible
through three of its attributes: onset, duration, and
description. Here we can see that there were 154 events stored in the
EEGLAB file, they all had a duration of zero seconds, there were two
different types of events, and the first event occurred about 1 second after
the recording began:
End of explanation
events_from_annot, event_dict = mne.events_from_annotations(eeglab_raw)
print(event_dict)
print(events_from_annot[:5])
Explanation: More information on working with :class:~mne.Annotations objects, including
how to add annotations to :class:~mne.io.Raw objects interactively, and how
to plot, concatenate, load, save, and export :class:~mne.Annotations
objects can be found in the tutorial tut-annotate-raw.
Converting between Events arrays and Annotations objects
Once your experimental events are read into MNE-Python (as either an Events
array or an :class:~mne.Annotations object), you can easily convert between
the two formats as needed. You might do this because, e.g., an Events array
is needed for epoching continuous data, or because you want to take advantage
of the "annotation-aware" capability of some functions, which automatically
omit spans of data if they overlap with certain annotations.
To convert an :class:~mne.Annotations object to an Events array, use the
function :func:mne.events_from_annotations on the :class:~mne.io.Raw file
containing the annotations. This function will assign an integer Event ID to
each unique element of raw.annotations.description, and will return the
mapping of descriptions to integer Event IDs along with the derived Event
array. By default, one event will be created at the onset of each annotation;
this can be modified via the chunk_duration parameter of
:func:~mne.events_from_annotations to create equally spaced events within
each annotation span (see chunk-duration, below, or see
fixed-length-events for direct creation of an Events array of
equally-spaced events).
End of explanation
custom_mapping = {'rt': 77, 'square': 42}
(events_from_annot,
event_dict) = mne.events_from_annotations(eeglab_raw, event_id=custom_mapping)
print(event_dict)
print(events_from_annot[:5])
Explanation: If you want to control which integers are mapped to each unique description
value, you can pass a :class:dict specifying the mapping as the
event_id parameter of :func:~mne.events_from_annotations; this
:class:dict will be returned unmodified as the event_dict.
.. TODO add this when the other tutorial is nailed down:
Note that this event_dict can be used when creating
:class:~mne.Epochs from :class:~mne.io.Raw objects, as demonstrated
in :doc:epoching_tutorial_whatever_its_name_is.
End of explanation
mapping = {1: 'auditory/left', 2: 'auditory/right', 3: 'visual/left',
4: 'visual/right', 5: 'smiley', 32: 'buttonpress'}
onsets = events[:, 0] / raw.info['sfreq']
durations = np.zeros_like(onsets) # assumes instantaneous events
descriptions = [mapping[event_id] for event_id in events[:, 2]]
annot_from_events = mne.Annotations(onset=onsets, duration=durations,
description=descriptions,
orig_time=raw.info['meas_date'])
raw.set_annotations(annot_from_events)
Explanation: To make the opposite conversion (from Events array to
:class:~mne.Annotations object), you can create a mapping from integer
Event ID to string descriptions, and use the :class:~mne.Annotations
constructor to create the :class:~mne.Annotations object, and use the
:meth:~mne.io.Raw.set_annotations method to add the annotations to the
:class:~mne.io.Raw object. Because the sample data <sample-dataset>
was recorded on a Neuromag system (where sample numbering starts when the
acquisition system is initiated, not when the recording is initiated), we
also need to pass in the orig_time parameter so that the onsets are
properly aligned relative to the start of recording:
End of explanation
raw.plot(start=5, duration=5)
Explanation: Now, the annotations will appear automatically when plotting the raw data,
and will be color-coded by their label value:
End of explanation
# create the REM annotations
rem_annot = mne.Annotations(onset=[5, 41],
duration=[16, 11],
description=['REM'] * 2)
raw.set_annotations(rem_annot)
(rem_events,
rem_event_dict) = mne.events_from_annotations(raw, chunk_duration=1.5)
Explanation: Making multiple events per annotation
As mentioned above, you can generate equally-spaced events from an
:class:~mne.Annotations object using the chunk_duration parameter of
:func:~mne.events_from_annotations. For example, suppose we have an
annotation in our :class:~mne.io.Raw object indicating when the subject was
in REM sleep, and we want to perform a resting-state analysis on those spans
of data. We can create an Events array with a series of equally-spaced events
within each "REM" span, and then use those events to generate (potentially
overlapping) epochs that we can analyze further.
End of explanation
print(np.round((rem_events[:, 0] - raw.first_samp) / raw.info['sfreq'], 3))
Explanation: Now we can check that our events indeed fall in the ranges 5-21 seconds and
41-52 seconds, and are ~1.5 seconds apart (modulo some jitter due to the
sampling frequency). Here are the event times rounded to the nearest
millisecond:
End of explanation |
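A possible next step (a sketch; the tmin/tmax/baseline choices here are illustrative only, not prescribed by the tutorial) is to build fixed-length epochs from the REM events for further analysis:
# Sketch: epoch the raw data around the equally-spaced REM events
rem_epochs = mne.Epochs(raw, rem_events, event_id=rem_event_dict,
                        tmin=0, tmax=1.5, baseline=None, preload=True)
print(rem_epochs)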
6,908 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Running and Weight Data Cleaning
Four disparate sources
Each has different features, formats, units, etc.
Work through to figure out requisite steps for processing and combining
Once done, wrap all steps up into concise functions for future usage
Step1: Load data
Weight
Myfitnesspal
Weightgurus
Running
Runkeeper
Strava
Step2: Convert dates to datetime
Step3: Combine weight data sources
Step6: Combine running data sources
Strava has elapsed time in seconds and distance in km
Calculate distance in miles
Calculate pace and duration in decimal minutes (for tying to plot attributes)
Calculate pace and duration as string MM
Step7: Encapsulate procedure into functions
Give your code a home! Put this into a file for posterity! | Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
Explanation: Running and Weight Data Cleaning
Four disparate sources
Each has different features, formats, units, etc.
Work through to figure out requisite steps for processing and combining
Once done, wrap all steps up into concise functions for future usage
End of explanation
weight_gurus = pd.read_csv('data/raw/weight-gurus-history.csv')
weight_gurus.head()
weight_gurus['Weight (lb)'].plot()
mfp = pd.read_csv('data/raw/myfitnesspal-export.csv')
mfp = mfp.dropna(subset=['Weight'])
mfp.head()
mfp.Weight.plot()
strava = pd.read_csv('data/raw/strava-activities.csv')
strava = strava[strava['Activity Type'] == "Run"]
strava.head()
runkeeper = pd.read_csv('data/raw/runkeeper-activities.csv')
runkeeper = runkeeper[runkeeper['Type'] == "Running"]
runkeeper.head()
Explanation: Load data
Weight
Myfitnesspal
Weightgurus
Running
Runkeeper
Strava
End of explanation
from datetime import datetime
from datetime import time
weight_gurus_dt_format = "%b %d %Y %I:%M:%S %p"
mfp_dt_format = "%Y-%m-%d"
strava_dt_format = "%b %d, %Y, %I:%M:%S %p"
runkeeper_dt_format = "%Y-%m-%d %H:%M:%S"
weight_gurus = weight_gurus.rename(columns={'Date/Time': 'Date'})
weight_gurus['Date'] = weight_gurus['Date'].apply(lambda x: datetime.strptime(x, weight_gurus_dt_format))
mfp['Date'] = mfp['Date'].apply(lambda x: datetime.strptime(x, mfp_dt_format))
strava = strava.rename(columns={'Activity Date': 'Date'})
strava['Date'] = strava['Date'].apply(lambda x: datetime.strptime(x, strava_dt_format))
runkeeper['Date'] = runkeeper['Date'].apply(lambda x: datetime.strptime(x, runkeeper_dt_format))
Explanation: Convert dates to datetime
End of explanation
weight_gurus = weight_gurus.rename(columns={'Weight (lb)': 'Weight'})
weight_cols = ['Date', 'Weight']
weight_df = pd.concat([
mfp[weight_cols],
weight_gurus[weight_cols]
])
weight_df = weight_df.sort_values('Date')
weight_df.head()
Explanation: Combine weight data sources
End of explanation
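A quick look at the result (a sketch): plot the merged weight history over time.
# Sketch: plot the combined weight history
weight_df.plot(x='Date', y='Weight', figsize=(10, 4))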
# Convert km -> mi
strava['Distance'] = strava['Distance'] * 0.621371
# Calculate pace (in decimal minutes)
strava['Pace_min'] = strava['Elapsed Time'] / (60*strava['Distance'])
# Calculate duration (in decimal minutes)
strava['Duration_min'] = strava['Elapsed Time']/60.0
from math import floor
def decimal_minute_to_time(dec_minutes):
    """Converts decimal minutes to an "MM:SS" string (or "H:MM:SS" for an hour or more).

    Parameters
    ----------
    dec_minutes : float
        Time in minutes

    Returns
    -------
    str
        Time formatted as "MM:SS" or "H:MM:SS".
    """
hour = floor(dec_minutes / 60)
minute = int(dec_minutes % 60)
sec = int(60 * (dec_minutes - int(dec_minutes)))
time_str = ""
if hour > 0:
time_str = "{}:{:02}:{:02}".format(hour, minute, sec)
else:
time_str = "{}:{:02}".format(minute, sec)
return time_str
def time_to_decimal_minute(time_str):
    """Converts an "MM:SS" or "HH:MM:SS" string to decimal minutes.

    Parameters
    ----------
    time_str : str
        Time in "MM:SS" or "HH:MM:SS" format

    Returns
    -------
    float

    Raises
    ------
    ValueError
        For a poorly formatted string.
    """
time_list = time_str.split(":")
minute, second = int(time_list[-2]), int(time_list[-1])
if len(time_list) == 3:
minute = minute + 60.0 * int(time_list[0])
if second >= 60:
raise ValueError("Bad time string format. More than 60s: %s", second)
dec_minute = minute + second/60.0
return dec_minute
decimal_minute_to_time(125.5)
time_to_decimal_minute("2:05:30")
# Convert decimal minute to MM:SS
strava['Pace'] = strava['Pace_min'].apply(decimal_minute_to_time)
strava['Duration'] = strava['Duration_min'].apply(decimal_minute_to_time)
strava = strava.rename(columns={'Activity Name': 'Name',
'Activity Description': 'Description'})
strava['Tracker'] = 'Strava'
strava.head()
runkeeper = runkeeper.rename(columns={'Distance (mi)': 'Distance',
'Notes': 'Name',
'Average Pace': 'Pace'})
runkeeper['Pace_min'] = runkeeper['Pace'].apply(time_to_decimal_minute)
runkeeper['Duration_min'] = runkeeper['Duration'].apply(time_to_decimal_minute)
runkeeper['Description'] = None
runkeeper['Tracker'] = "Runkeeper"
runkeeper.head()
run_cols = ['Date', 'Name', 'Description', 'Distance', 'Pace',
'Duration', 'Pace_min', 'Duration_min', 'Tracker']
run_df = pd.concat([strava[run_cols], runkeeper[run_cols]])
run_df.head()
Explanation: Combine running data sources
Strava has elapsed time in seconds and distance in km
Calculate distance in miles
Calculate pace and duration in decimal minutes (for tying to plot attributes)
Calculate pace and duration as string MM:SS (minutes per mile) for display
End of explanation
WG_DT_FORMAT = "%b %d %Y %I:%M:%S %p"
MFP_DT_FORMAT = "%Y-%m-%d"
RUNKEEPER_DT_FORMAT = "%Y-%m-%d %H:%M:%S"
STRAVA_DT_FORMAT = "%b %d, %Y, %I:%M:%S %p"
WEIGHT_COLS = ["Date", "Weight"]
RUN_COLS = ['Date', 'Name', 'Description', 'Distance', 'Pace',
'Duration', 'Pace_min', 'Duration_min', 'Tracker']
def process_weight_gurus(wg_filename):
weight_gurus = pd.read_csv(wg_filename)
weight_gurus = weight_gurus.rename(columns={'Date/Time': 'Date'})
weight_gurus['Date'] = weight_gurus['Date'].apply(
lambda x: datetime.strptime(x, WG_DT_FORMAT)
)
weight_gurus = weight_gurus.rename(
columns={'Weight (lb)': 'Weight'}
)
return weight_gurus
def process_mfp_weight(mfp_filename):
mfp = pd.read_csv(mfp_filename)
mfp = mfp.dropna(subset=['Weight'])
mfp['Date'] = mfp['Date'].apply(
lambda x: datetime.strptime(x, MFP_DT_FORMAT)
)
return mfp
def process_runkeeper(runkeeper_filename):
runkeeper = pd.read_csv(runkeeper_filename)
runkeeper = runkeeper[runkeeper['Type'] == "Running"]
runkeeper['Date'] = runkeeper['Date'].apply(
lambda x: datetime.strptime(x, RUNKEEPER_DT_FORMAT)
)
runkeeper = runkeeper.rename(columns={'Distance (mi)': 'Distance',
'Notes': 'Name',
'Average Pace': 'Pace'})
runkeeper['Pace_min'] = runkeeper['Pace'].apply(time_to_decimal_minute)
runkeeper['Duration_min'] = runkeeper['Duration'].apply(time_to_decimal_minute)
runkeeper['Description'] = None
runkeeper['Tracker'] = "Runkeeper"
return runkeeper
def process_strava(strava_filename, dt_format=STRAVA_DT_FORMAT):
# Load and filter to only running activities
strava = pd.read_csv(strava_filename)
strava = strava[strava['Activity Type'] == "Run"]
# Rename the features for consistency
strava = strava.rename(columns={'Activity Date': 'Date',
'Activity Name': 'Name',
'Activity Description': 'Description'})
# Turn Date into datetime type
strava['Date'] = strava['Date'].apply(
lambda x: datetime.strptime(x, dt_format)
)
# Convert km -> mi
strava['Distance'] = strava['Distance'] * 0.621371
# Calculate pace (in decimal minutes)
strava['Pace_min'] = strava['Elapsed Time'] / (60 * strava['Distance'])
# Calculate duration (in decimal minutes)
strava['Duration_min'] = strava['Elapsed Time']/60.0
# Convert decimal minute to MM:SS
strava['Pace'] = strava['Pace_min'].apply(decimal_minute_to_time)
strava['Duration'] = strava['Duration_min'].apply(decimal_minute_to_time)
# Tag each row with the tracker it came from
strava['Tracker'] = 'Strava'
return strava
def combine_weights(df_list, weight_cols=WEIGHT_COLS):
weight_df = pd.concat([df[weight_cols] for df in df_list])
weight_df = weight_df.sort_values('Date')
return weight_df
def combine_runs(df_list, run_cols=RUN_COLS):
run_df = pd.concat([df[run_cols] for df in df_list])
run_df = run_df.sort_values('Date')
return run_df
def main():
strava = process_strava('data/raw/strava-activities.csv')
runkeeper = process_runkeeper('data/raw/runkeeper-activities.csv')
mfp = process_mfp_weight('data/raw/myfitnesspal-export.csv')
weight_gurus = process_weight_gurus('data/raw/weight-gurus-history.csv')
run_df = combine_runs([strava, runkeeper])
weight_df = combine_weights([mfp, weight_gurus])
run_df.to_csv('data/processed/run.csv')
weight_df.to_csv('data/processed/weight.csv')
if __name__ == "__main__":
print("processing data beep boop bonk")
main()
Explanation: Encapsulate procedure into functions
Give your code a home! Put this into a file for posterity!
End of explanation |
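# A possible follow-up (added sketch, not part of the original walkthrough): read the
# processed files back in for analysis, parsing the Date column so it comes back as a
# datetime rather than a string.
run_df = pd.read_csv('data/processed/run.csv', index_col=0, parse_dates=['Date'])
weight_df = pd.read_csv('data/processed/weight.csv', index_col=0, parse_dates=['Date'])
weight_df.set_index('Date')['Weight'].plot()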
6,909 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Two Layer QG Model Example
Here is a quick overview of how to use the two-layer model. See the
Step1: Initialize and Run the Model
Here we set up a model which will run for 10 years and start averaging
after 5 years. There are lots of parameters that can be specified as
keyword arguments but we are just using the defaults.
Step2: Visualize Output
We access the actual pv values through the attribute m.q. The first axis
of q corresponds with the layer number. (Remember that in Python, numbering
starts at 0.)
Step3: Plot Diagnostics
The model automatically accumulates averages of certain diagnostics. We can
find out what diagnostics are available by calling
Step4: To look at the wavenumber energy spectrum, we plot the KEspec diagnostic.
(Note that summing along the l-axis, as in this example, does not give us
a true isotropic wavenumber spectrum.)
Step5: We can also plot the spectral fluxes of energy. | Python Code:
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
import pyqg
Explanation: Two Layer QG Model Example
Here is a quick overview of how to use the two-layer model. See the
:py:class:pyqg.QGModel api documentation for further details.
First import numpy, matplotlib, and pyqg:
End of explanation
year = 24*60*60*360.
m = pyqg.QGModel(tmax=10*year, twrite=10000, tavestart=5*year)
m.run()
Explanation: Initialize and Run the Model
Here we set up a model which will run for 10 years and start averaging
after 5 years. There are lots of parameters that can be specified as
keyword arguments but we are just using the defaults.
End of explanation
q_upper = m.q[0] + m.Qy[0]*m.y
plt.contourf(m.x, m.y, q_upper, 12, cmap='RdBu_r')
plt.xlabel('x'); plt.ylabel('y'); plt.title('Upper Layer PV')
plt.colorbar();
Explanation: Visualize Output
We access the actual pv values through the attribute m.q. The first axis
of q corresponds with the layer number. (Remember that in Python, numbering
starts at 0.)
End of explanation
m.describe_diagnostics()
Explanation: Plot Diagnostics
The model automatically accumulates averages of certain diagnostics. We can
find out what diagnostics are available by calling
End of explanation
kespec_u = m.get_diagnostic('KEspec')[0].sum(axis=0)
kespec_l = m.get_diagnostic('KEspec')[1].sum(axis=0)
plt.loglog( m.kk, kespec_u, '.-' )
plt.loglog( m.kk, kespec_l, '.-' )
plt.legend(['upper layer','lower layer'], loc='lower left')
plt.ylim([1e-9,1e-3]); plt.xlim([m.kk.min(), m.kk.max()])
plt.xlabel(r'k (m$^{-1}$)'); plt.grid()
plt.title('Kinetic Energy Spectrum');
Explanation: To look at the wavenumber energy spectrum, we plot the KEspec diagnostic.
(Note that summing along the l-axis, as in this example, does not give us
a true isotropic wavenumber spectrum.)
End of explanation
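# Added sketch of a more isotropic view: instead of summing over l, bin the 2-D KE
# spectrum by total wavenumber magnitude. Assumption: the model also exposes its
# meridional wavenumbers as m.ll (1-D), mirroring m.kk -- check your pyqg version
# (pyqg also ships a diagnostic helper for this if available).
kespec2d = m.get_diagnostic('KEspec')[0]                       # upper layer, shape (n_l, n_k)
wv = np.sqrt(m.kk[np.newaxis, :]**2 + m.ll[:, np.newaxis]**2)  # total wavenumber magnitude
ispec = np.array([kespec2d[(wv >= k0) & (wv < k1)].sum()
                  for k0, k1 in zip(m.kk[:-1], m.kk[1:])])
plt.loglog(0.5*(m.kk[:-1] + m.kk[1:]), ispec, '.-')
plt.xlabel(r'$|k|$ (m$^{-1}$)'); plt.title('Binned (approximately isotropic) KE spectrum');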
ebud = [ -m.get_diagnostic('APEgenspec').sum(axis=0),
-m.get_diagnostic('APEflux').sum(axis=0),
-m.get_diagnostic('KEflux').sum(axis=0),
-m.rek*m.del2*m.get_diagnostic('KEspec')[1].sum(axis=0)*m.M**2 ]
ebud.append(-np.vstack(ebud).sum(axis=0))
ebud_labels = ['APE gen','APE flux','KE flux','Diss.','Resid.']
[plt.semilogx(m.kk, term) for term in ebud]
plt.legend(ebud_labels, loc='upper right')
plt.xlim([m.kk.min(), m.kk.max()])
plt.xlabel(r'k (m$^{-1}$)'); plt.grid()
plt.title('Spectral Energy Transfers');
Explanation: We can also plot the spectral fluxes of energy.
End of explanation |
6,910 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Finding the right capcha with Keras
Step1: We first define a function to prepare the datas in the format of keras (theano). The function also reduces the size of the imagesfrom 100X100 to 32X32.
Step2: We then load the training set and the test set and prepare them with the function prep_datas.
Step3: Image before/after compression
Step4: Lenet neural network
Step5: We build the neural network and fit it on the training set
Step6: We now compare with the real world images (with the deshear method)
Step7: with the labels of Peter | Python Code:
import os
import numpy as np
import tools as im
from matplotlib import pyplot as plt
from skimage.transform import resize
%matplotlib inline
path=os.getcwd()+'/' # finds the path of the folder in which the notebook is
path_train=path+'images/train/'
path_test=path+'images/test/'
path_real=path+'images/real_world/'
Explanation: Finding the right captcha with Keras
End of explanation
def prep_datas(xset,xlabels):
X=list(xset)
for i in range(len(X)):
X[i]=resize(X[i],(32,32,1)) #reduce the size of the image from 100X100 to 32X32. Also flattens the color levels
    X=np.reshape(X,(len(X),1,32,32)) # reshape the list to have the form required by keras (theano), i.e. (1,32,32)
X=np.array(X) #transforms it into an array
Y = np.eye(2, dtype='uint8')[xlabels] # generates vectors, here of two elements as required by keras (number of classes)
return X,Y
Explanation: We first define a function to prepare the data in the format required by keras (theano). The function also reduces the size of the images from 100X100 to 32X32.
End of explanation
training_set, training_labels = im.load_images(path_train)
test_set, test_labels = im.load_images(path_test)
X_train,Y_train=prep_datas(training_set,training_labels)
X_test,Y_test=prep_datas(test_set,test_labels)
Explanation: We then load the training set and the test set and prepare them with the function prep_datas.
End of explanation
i=11
plt.subplot(1,2,1)
plt.imshow(training_set[i],cmap='gray')
plt.subplot(1,2,2)
plt.imshow(X_train[i][0],cmap='gray')
Explanation: Image before/after compression
End of explanation
# import the necessary packages
from keras.models import Sequential
from keras.layers.convolutional import Convolution2D
from keras.layers.convolutional import MaxPooling2D
from keras.layers.core import Activation
from keras.layers.core import Flatten
from keras.layers.core import Dense
from keras.optimizers import SGD
# this code comes from http://www.pyimagesearch.com/2016/08/01/lenet-convolutional-neural-network-in-python/
class LeNet:
@staticmethod
def build(width, height, depth, classes, weightsPath=None):
# initialize the model
model = Sequential()
# first set of CONV => RELU => POOL
model.add(Convolution2D(20, 5, 5, border_mode="same",input_shape=(depth, height, width)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
# second set of CONV => RELU => POOL
model.add(Convolution2D(50, 5, 5, border_mode="same"))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
# set of FC => RELU layers
model.add(Flatten())
model.add(Dense(500))
model.add(Activation("relu"))
# softmax classifier
model.add(Dense(classes))
model.add(Activation("softmax"))
# return the constructed network architecture
return model
Explanation: Lenet neural network
End of explanation
model = LeNet.build(width=32, height=32, depth=1, classes=2)
opt = SGD(lr=0.01)#Sochastic gradient descent with learning rate 0.01
model.compile(loss="categorical_crossentropy", optimizer=opt,metrics=["accuracy"])
model.fit(X_train, Y_train, batch_size=10, nb_epoch=300,verbose=1)
y_pred = model.predict_classes(X_test)
print(y_pred)
print(test_labels)
Explanation: We build the neural network and fit it on the training set
End of explanation
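# Added sanity check (not in the original notebook): score the trained network on the
# held-out test split before moving on to real-world images. model.evaluate returns the
# loss and the accuracy metric compiled above.
test_loss, test_acc = model.evaluate(X_test, Y_test, verbose=0)
print('test accuracy: {:.3f}'.format(test_acc))
print('fraction of correct predictions: {:.3f}'.format(np.mean(np.array(y_pred) == np.array(test_labels))))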
real_world_set=[]
for i in np.arange(1,73):
filename=path+'images/real_world/'+str(i)+'.png'
real_world_set.append(im.deshear(filename))
fake_label=np.ones(len(real_world_set),dtype='int32')
X_real,Y_real=prep_datas(real_world_set,fake_label)
y_pred = model.predict_classes(X_real)
Explanation: We now compare with the real world images (with the deshear method)
End of explanation
f=open(path+'images/real_world/labels.txt',"r")
lines=f.readlines()
result=[]
for x in lines:
result.append((x.split(' ')[1]).replace('\n',''))
f.close()
result=np.array([int(x) for x in result])
result[result>1]=1
plt.plot(y_pred,'o')
plt.plot(2*result,'o')
plt.ylim(-0.5,2.5);
Explanation: with the labels of Peter
End of explanation |
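# Added check: overall agreement between the network's predictions and the (binarized)
# human labels on the real-world images.
print('agreement with labels: {:.1%}'.format(np.mean(np.array(y_pred) == result)))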
6,911 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Test Access to Earth Engine
Run the code blocks below to test if the notebook server is authorized to communicate with the Earth Engine backend servers.
First, check if the IPython Widgets library is available on the server.
Step1: Next, check if the Earth Engine API is available on the server.
Step2: Finally, check if the notebook server is authorized to access the Earth Engine backend servers.
Step3: Once the server is authorized, you can retrieve data from Earth Engine and use it in the notebook. | Python Code:
# Code to check the IPython Widgets library.
try:
import ipywidgets
print('The IPython Widgets library (version {0}) is available on this server.'.format(
ipywidgets.__version__
))
except ImportError:
print('The IPython Widgets library is not available on this server.\n'
'Please see https://github.com/jupyter-widgets/ipywidgets '
'for information on installing the library.')
raise
Explanation: Test Access to Earth Engine
Run the code blocks below to test if the notebook server is authorized to communicate with the Earth Engine backend servers.
First, check if the IPython Widgets library is available on the server.
End of explanation
# Code to check the Earth Engine API library.
try:
import ee
print('The Earth Engine Python API (version {0}) is available on this server.'.format(
ee.__version__
))
except ImportError:
print('The Earth Engine Python API library is not available on this server.\n'
'Please see https://developers.google.com/earth-engine/python_install '
'for information on installing the library.')
raise
Explanation: Next, check if the Earth Engine API is available on the server.
End of explanation
# Code to check if authorized to access Earth Engine.
import os
def isAuthorized():
try:
ee.Initialize()
return True
except:
return False
form_item_layout = ipywidgets.Layout(width="100%", align_items='center')
if isAuthorized():
def revoke_credentials(sender):
credentials = ee.oauth.get_credentials_path()
if os.path.exists(credentials):
os.remove(credentials)
print('Credentials have been revoked.')
# Define widgets that may be displayed.
auth_status_button = ipywidgets.Button(
layout=form_item_layout,
disabled = True,
description = 'The server is authorized to access Earth Engine',
button_style = 'success',
icon = 'check'
)
instructions = ipywidgets.Button(
layout=form_item_layout,
description = 'Click here to revoke authorization',
button_style = 'danger',
disabled = False,
)
instructions.on_click(revoke_credentials)
else:
def save_credentials(sender):
try:
token = ee.oauth.request_token(get_auth_textbox.value.strip())
except Exception as e:
print(e)
return
ee.oauth.write_token(token)
get_auth_textbox.value = '' # Clear the textbox.
print('Successfully saved authorization token.')
# Define widgets that may be displayed.
get_auth_textbox = ipywidgets.Text(
placeholder='Paste authorization code here',
description='Authentication Code:'
)
get_auth_textbox.on_submit(save_credentials)
auth_status_button = ipywidgets.Button(
layout=form_item_layout,
button_style = 'danger',
description = 'The server is not authorized to access Earth Engine',
disabled = True
)
instructions = ipywidgets.VBox(
[
ipywidgets.HTML(
'Click on the link below to start the authentication and authorization process. '
'Once you have received an authorization code, paste it in the box below and press return.'
),
ipywidgets.HTML(
'<a href="{url}" target="auth">Open Authentication Tab</a><br/>'.format(
url=ee.oauth.get_authorization_url()
)
),
get_auth_textbox
],
layout=form_item_layout
)
# Display the form.
form = ipywidgets.VBox([
auth_status_button,
instructions
])
form
Explanation: Finally, check if the notebook server is authorized to access the Earth Engine backend servers.
End of explanation
# Code to display an Earth Engine generated image.
from IPython.display import Image
url = ee.Image("CGIAR/SRTM90_V4").getThumbUrl({'min':0, 'max':3000})
Image(url=url)
Explanation: Once the server is authorized, you can retrieve data from Earth Engine and use it in the notebook.
End of explanation |
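# A small added sketch: pull a numeric value out of Earth Engine rather than a thumbnail.
# The dataset ID is the same SRTM image used above; the point coordinates are arbitrary
# example values (roughly Mount Everest, as lon/lat).
point = ee.Geometry.Point([86.925, 27.988])
mean_elev = ee.Image("CGIAR/SRTM90_V4").reduceRegion(
    reducer=ee.Reducer.mean(), geometry=point, scale=90).getInfo()
print(mean_elev)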
6,912 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
I. Basics
All the toolbox is a package tree.
You need to import the __init__.py file at the root of each package add the toolbox toplevel to your path.
Step1: Database
The voc database is instanciated with a given voc stored in a file.
Load database
Step2: Ther are 2 saved database format.
Text file
Step3: Which is quite slow to load for large voc. Due to performance issues, we need to use numpy format for large voc.
Npy matrix and an association dictionary
The voc is then splited in 2 files
Step4: Extract concept
We will refer as concept for a word and his associated vector.
Load an existing concept
Step5: Load a missing concept
Step6: Check if a concept is in the database
Step7: Extract a random sample
Step8: Find concept
A common operation is to find the closest word for a given concept.
You can do this according to several metrics
Cosine similarity
Step9: Euclidean distance
Step10: Manhattan distance
Step11: Operations
You can apply several operations between concepts to build new ones.
Created concept names are in reverse polish notation.
Step12: a. Add and substract
Step13: b. Transform
Normalized concept
Step14: Polar coordinate
Transform the carthesian coordinate into hyperspherical ones.
First value is the norm, the other values are angles in rad.
Step15: Angular coordinates
Polar transformation without the norm.
Step16: Concept feature
Since this toolbox is designed in a first place for machine learning activies, we provide some feature extraction functions
Step17: Identity vector
Identity vector is the raw vector in the vector space in carthesian coordinates.
Step18: Polar vector
We can also transform this carthesian coordinates into hyperspherical ones.
Step19: Angular vector
And remove the norm to keep only the angle.
Step20: In practice, we discovered the semantic meaning of the norm tends to be the 'specialisation' of the concept. and the angle the field of application.
Thus
Step21: To keep a trace of the feature transformation used and keep a high level manipulation, we'll adopt the following operation for a conceptPair
Step22: a. Classic feature
These are simple operations
Step23: b. Projection features
We also introduced another type of concept relation.
Based on the idea it would be usefull to compare similarity between 2 concepts for each dimension, we introduced some 'projection metrics' features.
Advantage
Step24: Projection similarity
The idea is to use a metric on the projected vectors for each dimension of the vector
We could introduce it as
Step25: The last boolean argument allow to try to compose concept for unknown words based on existing vocabulary.
Step26: Concept pair
Almost the same function are exposed for for building concept pairs
Step27: To build a negative sample in concept pair, you can shuffle an existing pair list | Python Code:
import __init__
Explanation: I. Basics
The whole toolbox is organized as a package tree.
You need to import the __init__.py file at the root of each package to add the toolbox top level to your path.
End of explanation
import cpLib.conceptDB as db
Explanation: Database
The voc database is instantiated with a given voc stored in a file.
Load database
End of explanation
d = db.DB('../data/voc/txt/googleNews_mini.txt', verbose=False)
Explanation: There are 2 saved database formats.
Text file
End of explanation
d = db.DB('../data/voc/npy/googleNews_mini.npy')
Explanation: Which is quite slow to load for a large voc. Due to performance issues, we need to use the numpy format for a large voc.
Npy matrix and an association dictionary
The voc is then split in 2 files: one containing the matrix (in npy format), the other for the word associations (a dict in json format)
NB: for both loading approaches, you can enable verbose output or not; it is useful for dealing with files created by std redirection
End of explanation
v1 = d.get('king')
print v1, type(v1.vect), len(v1.vect)
Explanation: Extract concept
We will refer as concept for a word and his associated vector.
Load an existing concept
End of explanation
v2 = d.get('toto')
print v2
Explanation: Load a missing concept
End of explanation
print d.has('king')
print d.has('toto')
Explanation: Check if a concept is in the database
End of explanation
conceptList = d.getSample(5)
print len(conceptList)
print conceptList[0]
Explanation: Extract a random sample
End of explanation
king = d.get('king')
print d.find_cosSim(king)
Explanation: Find concept
A common operation is to find the closest word for a given concept.
You can do this according to several metrics
Cosine similarity
End of explanation
print d.find_euclDist(king)
Explanation: Euclidean distance
End of explanation
print d.find_manaDist(king)
Explanation: Manhattan distance
End of explanation
import cpLib.concept as cp
Explanation: Operations
You can apply several operations between concepts to build new ones.
Created concept names are in reverse polish notation.
End of explanation
v1 = cp.add(d.get('king'), d.get('man'))
v1 = cp.sub(v1, d.get('queen'))
v2 = cp.addSub([d.get('king'), d.get('man')], [d.get('queen')], normalized=True)
print v1, ' ~ ', d.find_cosSim(v1)[0][1]
print v2, ' ~ ', d.find_cosSim(v2)[0][1]
Explanation: a. Add and subtract
End of explanation
k = d.get('king')
print k.normalized()
Explanation: b. Transform
Normalized concept
End of explanation
k = d.get('king')
print k.polarized()
print 'norm =', k.polarized().vect[0]
print '1st angle =', k.polarized().vect[1]
Explanation: Polar coordinate
Transform the carthesian coordinate into hyperspherical ones.
First value is the norm, the other values are angles in rad.
End of explanation
k = d.get('king')
print k.angularized()
print 'vector dimension =', len(k.angularized().vect)
print '1st angle =', k.angularized().vect[0]
Explanation: Angular coordinates
Polar transformation without the norm.
End of explanation
import mlLib.conceptFeature as cpf
Explanation: Concept feature
Since this toolbox is designed in a first place for machine learning activies, we provide some feature extraction functions:
End of explanation
k = d.get('king')
print len(cpf.identity(k))
Explanation: Identity vector
The identity vector is the raw vector in the vector space, in Cartesian coordinates.
End of explanation
k = d.get('king')
print len(cpf.polar(k))
Explanation: Polar vector
We can also transform these Cartesian coordinates into hyperspherical ones.
End of explanation
k = d.get('king')
print len(cpf.angular(k))
Explanation: Angular vector
And remove the norm to keep only the angle.
End of explanation
import mlLib.conceptPairFeature as cppf
Explanation: In practice, we discovered that the semantic meaning of the norm tends to be the 'specialisation' of the concept, and the angle the field of application.
Thus:
* Angular and Polar features will be more adapted to classifying domains
* Cartesian features are useful when we need to access the 'deepness' of the concept
You can check the dataExploration folder notebooks for more details
Concept pair feature
A common use case for supervised learning would be to detect the relation between 2 concepts.
We also provide some comparison features for this.
End of explanation
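# Quick numeric illustration (added example) of the remark above that the norm tends to
# track how specialised a concept is. It uses the .vect attribute shown earlier; the
# actual values depend on the loaded voc, and the chosen words are arbitrary.
import numpy as np
for w in ['animal', 'cat', 'siamese']:
    if d.has(w):
        print('{}: {}'.format(w, np.linalg.norm(d.get(w).vect)))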
conceptPair = (d.get('king'), 'relation', d.get('queen'))
conceptPair
Explanation: To keep a trace of the feature transformation used and keep a high level manipulation, we'll adopt the following operation for a conceptPair:
End of explanation
import pandas as pd

conceptPair = (d.get('king'), 'relation', d.get('queen'))
featureDimDf = pd.DataFrame(index=['subtraction', 'concatenation'])
featureDimDf['carthesian'] = [len(feature(conceptPair)) for feature in [cppf.subCarth, cppf.concatCarth]]
featureDimDf['polar'] = [len(feature(conceptPair)) for feature in [cppf.subPolar, cppf.concatPolar]]
featureDimDf['angular'] = [len(feature(conceptPair)) for feature in [cppf.subAngular, cppf.concatAngular]]
print 'feature dimension depending of the used function'
featureDimDf
Explanation: a. Classic feature
These are simple operations: substraction and concatenation of 2 concept features presented in the previous part.
End of explanation
cppf.pCosSim(conceptPair)
cppf.pEuclDist(conceptPair)
print 'feature dimension:', len(cppf.pManaDist(conceptPair))
cppf.pdCosSim(conceptPair)
cppf.pdEuclDist(conceptPair)
print 'feature dimension:', len(cppf.pdManaDist(conceptPair))
Explanation: b. Projection features
We also introduced another type of concept relation.
Based on the idea that it would be useful to compare the similarity between 2 concepts for each dimension, we introduced some 'projection metrics' features.
Advantage: a feature to compare 2 concepts' similarity according to each dimension.
Drawback: commutative, so not useful for 'ordered' pairs
So far, we provide the projection features for the following metrics:
* Cosine similarity
* Euclidean distance
* Manhattan distance
End of explanation
import cpLib.conceptExtraction as cpe
conceptStrList = ['king', 'queen', 'cat', 'bird', 'king bird']
cpe.buildConceptList(d, conceptStrList, True)
Explanation: Projection similarity
The idea is to use a metric on the projected vectors for each dimension of the vector.
We could introduce it as:
"Setting aside dimension $i$, how similar are A and B?"
Formal approach:
$E$: the word vector space, $E = \mathbb{R}^{n}$
$a, b \in E$
$m$: a metric $m: E \times E \to \mathbb{R}$
Given a projection operator on dimension $i$:
$P_{i}(a) = (a_{j})_{j \neq i}$
We define the projection similarity for metric $m$:
$P_{m, i}(a, b) = m(P_{i}(a), P_{i}(b))$
We apply it to each dimension and get the feature vector:
$\vec{P_{m}}(a, b) = \sum \limits_{i=1}^n P_{m, i}(a, b) \vec{e_{i}}$
Projection dissimilarity
We introduced the projection dissimilarity as the difference between a defined metric and the projected one for each dimension.
We could translate it as:
"How important is dimension $i$ for measuring the similarity between A and B?"
Formal approach:
We use the same notation as in the previous section to define the projection dissimilarity:
$Pd_{m, i}(a, b) = m(a, b) - m(P_{i}(a), P_{i}(b))$
Same, same but different =), we also apply it to each dimension to get the feature vector:
$\vec{Pd_{m}}(a, b) = \sum \limits_{i=1}^n Pd_{m, i}(a, b) \vec{e_{i}}$
II. Classification
Build learning sample
For supervised learning tasks, this toolbox proposes a high level solution:
Provide the dataset and the feature extraction function to an overlay classifier.
The dataset is either a list of concepts or a list of concept pairs as described above.
The overlay classifier is built with a model
Concept
End of explanation
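# Added sketch: the projection cosine similarity defined above, written out with plain
# numpy for two toy vectors (independent of the toolbox API).
import numpy as np

def proj_cos_sim(a, b):
    # for each dimension i, cosine similarity of a and b with dimension i removed
    sims = np.empty(len(a))
    for i in range(len(a)):
        ai, bi = np.delete(a, i), np.delete(b, i)
        sims[i] = ai.dot(bi) / (np.linalg.norm(ai) * np.linalg.norm(bi))
    return sims

proj_cos_sim(np.array([1.0, 2.0, 3.0]), np.array([2.0, 1.0, 3.0]))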
cpe.buildConceptList(d, conceptStrList, False)
Explanation: The last boolean argument allows trying to compose concepts for unknown words based on the existing vocabulary.
End of explanation
conceptPairStrList = [('king', 'relation', 'queen'),
('man', 'relation', 'woman'),
('bird', 'relation', 'cat')]
conceptPairList = cpe.buildConceptPairList(d, conceptPairStrList, True)
conceptPairList[0]
Explanation: Concept pair
Almost the same functions are exposed for building concept pairs
End of explanation
cpe.shuffledConceptPairList(conceptPairList)
Explanation: To build a negative sample of concept pairs, you can shuffle an existing pair list:
End of explanation |
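# Added end-to-end sketch of the 'overlay classifier' idea described above: build positive
# (true) and shuffled (negative) pairs, extract a pair feature, and fit a scikit-learn
# classifier. The feature and shuffling functions are the toolbox ones used earlier; the
# choice of LogisticRegression is an arbitrary example.
import numpy as np
from sklearn.linear_model import LogisticRegression

pos_pairs = conceptPairList
neg_pairs = cpe.shuffledConceptPairList(conceptPairList)
X = np.array([cppf.subCarth(pair) for pair in pos_pairs + neg_pairs])
y = np.array([1] * len(pos_pairs) + [0] * len(neg_pairs))
clf = LogisticRegression().fit(X, y)
print('training accuracy on this toy sample: {}'.format(clf.score(X, y)))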
6,913 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Training Keras model on Cloud AI Platform</h1>
<h2>Learning Objectives</h2>
<ol>
<li>Create model arguments for hyperparameter tuning</li>
<li>Create the model and specify checkpoints during training</li>
<li>Train the keras model using model.fit</li>
</ol>
Note
Step1: Now that we have the Keras wide-and-deep code working on a subset of the data, we can package the TensorFlow code up as a Python module and train it on Cloud AI Platform.
<p>
<h2> Train on Cloud AI Platform</h2>
<p>
Training on Cloud AI Platform requires
Step2: Lab Task 2
The following code edits babyweight_tf2/trainer/model.py.
Step3: Lab Task 3
After moving the code to a package, make sure it works standalone. (Note the --pattern and --train_examples lines so that I am not trying to boil the ocean on my laptop). Even then, this takes about <b>3 minutes</b> in which you won't see any output ...
Step4: Lab Task 4
Since we are using TensorFlow 2.0 preview, we will use a container image to run the code on AI Platform.
Once TensorFlow 2.0 is released, you will be able to simply do (without having to build a container)
<pre>
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=$(pwd)/babyweight/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs
Step5: Note
Step6: Lab Task 5
Once the code works in standalone mode, you can run it on Cloud AI Platform. Because this is on the entire dataset, it will take a while. The training run took about <b> two hours </b> for me. You can monitor the job from the GCP console in the Cloud AI Platform section.
Step7: When I ran it, I used train_examples=2000000. When training finished, I filtered in the Stackdriver log on the word "dict" and saw that the last line was
Step8: <h2> Repeat training </h2>
<p>
This time with tuned parameters (note last line) | Python Code:
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '2.0' # not used in this notebook
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/babyweight/preproc; then
gsutil mb -l ${REGION} gs://${BUCKET}
# copy canonical set of preprocessed files if you didn't do previous notebook
gsutil -m cp -R gs://cloud-training-demos/babyweight gs://${BUCKET}
fi
%%bash
gsutil ls gs://${BUCKET}/babyweight/preproc/*-00000*
Explanation: <h1>Training Keras model on Cloud AI Platform</h1>
<h2>Learning Objectives</h2>
<ol>
<li>Create model arguments for hyperparameter tuning</li>
<li>Create the model and specify checkpoints during training</li>
<li>Train the keras model using model.fit</li>
</ol>
Note: This notebook requires TensorFlow 2.0 as we are creating a model using Keras.
TODO: Complete the lab notebook #TODO sections. You can refer to the solutions/ notebook for reference.
This notebook illustrates distributed training and hyperparameter tuning on Cloud AI Platform (formerly known as Cloud ML Engine). This uses Keras and requires TensorFlow 2.0
End of explanation
!mkdir -p babyweight_tf2/trainer
%%writefile babyweight_tf2/trainer/task.py
import argparse
import json
import os
from . import model
import tensorflow as tf
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument(
'--bucket',
help = 'GCS path to data. We assume that data is in gs://BUCKET/babyweight/preproc/',
required = True
)
parser.add_argument(
'--output_dir',
help = 'GCS location to write checkpoints and export models',
required = True
)
parser.add_argument(
'--batch_size',
help = 'Number of examples to compute gradient over.',
type = int,
default = 512
)
parser.add_argument(
'--job-dir',
help = 'this model ignores this field, but it is required by gcloud',
default = 'junk'
)
parser.add_argument(
'--nnsize',
help = 'Hidden layer sizes to use for DNN feature columns -- provide space-separated layers',
nargs = '+',
type = int,
default=[128, 32, 4]
)
parser.add_argument(
'--nembeds',
help = 'Embedding size of a cross of n key real-valued parameters',
type = int,
default = 3
)
## TODO 1: add the new arguments here
    parser.add_argument(
        '--train_examples',
        help = 'Number of examples (in thousands) to run the training job over.',
        type = int,
        default = 5000
    )
    parser.add_argument(
        '--pattern',
        help = 'Specify a pattern that has to be in input files. For example 00000-of- will process only one shard',
        default = 'of'
    )
    parser.add_argument(
        '--eval_steps',
        help = 'Positive number of steps for which to evaluate model. Defaults to None, which means to evaluate until the input is exhausted',
        type = int,
        default = None
    )
## parse all arguments
args = parser.parse_args()
arguments = args.__dict__
# unused args provided by service
arguments.pop('job_dir', None)
arguments.pop('job-dir', None)
## assign the arguments to the model variables
output_dir = arguments.pop('output_dir')
model.BUCKET = arguments.pop('bucket')
model.BATCH_SIZE = arguments.pop('batch_size')
model.TRAIN_EXAMPLES = arguments.pop('train_examples') * 1000
model.EVAL_STEPS = arguments.pop('eval_steps')
print ("Will train on {} examples using batch_size={}".format(model.TRAIN_EXAMPLES, model.BATCH_SIZE))
model.PATTERN = arguments.pop('pattern')
model.NEMBEDS= arguments.pop('nembeds')
model.NNSIZE = arguments.pop('nnsize')
print ("Will use DNN size of {}".format(model.NNSIZE))
# Append trial_id to path if we are doing hptuning
# This code can be removed if you are not using hyperparameter tuning
output_dir = os.path.join(
output_dir,
json.loads(
os.environ.get('TF_CONFIG', '{}')
).get('task', {}).get('trial', '')
)
# Run the training job
model.train_and_evaluate(output_dir)
Explanation: Now that we have the Keras wide-and-deep code working on a subset of the data, we can package the TensorFlow code up as a Python module and train it on Cloud AI Platform.
<p>
<h2> Train on Cloud AI Platform</h2>
<p>
Training on Cloud AI Platform requires:
<ol>
<li> Making the code a Python package
<li> Using gcloud to submit the training code to Cloud AI Platform
</ol>
Ensure that the AI Platform API is enabled by going to this [link](https://console.developers.google.com/apis/library/ml.googleapis.com).
## Lab Task 1
The following code edits babyweight_tf2/trainer/task.py.
End of explanation
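# The trainer directory must be an importable Python package ("from . import model" in
# task.py). If your checkout does not already contain one, create an empty __init__.py
# (assumption: it is not created elsewhere in this notebook).
!touch babyweight_tf2/trainer/__init__.py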
%%writefile babyweight_tf2/trainer/model.py
import shutil, os, datetime
import numpy as np
import tensorflow as tf
BUCKET = None # set from task.py
PATTERN = 'of' # gets all files
# Determine CSV, label, and key columns
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,plurality,gestation_weeks,key'.split(',')
LABEL_COLUMN = 'weight_pounds'
KEY_COLUMN = 'key'
# Set default values for each CSV column
DEFAULTS = [[0.0], ['null'], [0.0], ['null'], [0.0], ['nokey']]
# Define some hyperparameters
TRAIN_EXAMPLES = 1000 * 1000
EVAL_STEPS = None
NUM_EVALS = 10
BATCH_SIZE = 512
NEMBEDS = 3
NNSIZE = [64, 16, 4]
# Create an input function reading a file using the Dataset API
def features_and_labels(row_data):
for unwanted_col in ['key']:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
# load the training data
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
dataset = (tf.data.experimental.make_csv_dataset(pattern, batch_size, CSV_COLUMNS, DEFAULTS)
.map(features_and_labels) # features, label
)
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(1000).repeat()
dataset = dataset.prefetch(1) # take advantage of multi-threading; 1=AUTOTUNE
return dataset
## Build a Keras wide-and-deep model using its Functional API
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
# Helper function to handle categorical columns
def categorical_fc(name, values):
orig = tf.feature_column.categorical_column_with_vocabulary_list(name, values)
wrapped = tf.feature_column.indicator_column(orig)
return orig, wrapped
def build_wd_model(dnn_hidden_units = [64, 32], nembeds = 3):
# input layer
deep_inputs = {
colname : tf.keras.layers.Input(name=colname, shape=(), dtype='float32')
for colname in ['mother_age', 'gestation_weeks']
}
wide_inputs = {
colname : tf.keras.layers.Input(name=colname, shape=(), dtype='string')
for colname in ['is_male', 'plurality']
}
inputs = {**wide_inputs, **deep_inputs}
# feature columns from inputs
deep_fc = {
colname : tf.feature_column.numeric_column(colname)
for colname in ['mother_age', 'gestation_weeks']
}
wide_fc = {}
is_male, wide_fc['is_male'] = categorical_fc('is_male', ['True', 'False', 'Unknown'])
plurality, wide_fc['plurality'] = categorical_fc('plurality',
['Single(1)', 'Twins(2)', 'Triplets(3)',
'Quadruplets(4)', 'Quintuplets(5)','Multiple(2+)'])
# bucketize the float fields. This makes them wide
age_buckets = tf.feature_column.bucketized_column(deep_fc['mother_age'],
boundaries=np.arange(15,45,1).tolist())
wide_fc['age_buckets'] = tf.feature_column.indicator_column(age_buckets)
gestation_buckets = tf.feature_column.bucketized_column(deep_fc['gestation_weeks'],
boundaries=np.arange(17,47,1).tolist())
wide_fc['gestation_buckets'] = tf.feature_column.indicator_column(gestation_buckets)
# cross all the wide columns. We have to do the crossing before we one-hot encode
crossed = tf.feature_column.crossed_column(
[is_male, plurality, age_buckets, gestation_buckets], hash_bucket_size=20000)
deep_fc['crossed_embeds'] = tf.feature_column.embedding_column(crossed, nembeds)
# the constructor for DenseFeatures takes a list of numeric columns
# The Functional API in Keras requires that you specify: LayerConstructor()(inputs)
wide_inputs = tf.keras.layers.DenseFeatures(wide_fc.values(), name='wide_inputs')(inputs)
deep_inputs = tf.keras.layers.DenseFeatures(deep_fc.values(), name='deep_inputs')(inputs)
# hidden layers for the deep side
layers = [int(x) for x in dnn_hidden_units]
deep = deep_inputs
for layerno, numnodes in enumerate(layers):
deep = tf.keras.layers.Dense(numnodes, activation='relu', name='dnn_{}'.format(layerno+1))(deep)
deep_out = deep
# linear model for the wide side
wide_out = tf.keras.layers.Dense(10, activation='relu', name='linear')(wide_inputs)
# concatenate the two sides
both = tf.keras.layers.concatenate([deep_out, wide_out], name='both')
# final output is a linear activation because this is regression
output = tf.keras.layers.Dense(1, activation='linear', name='weight')(both)
model = tf.keras.models.Model(inputs, output)
model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])
return model
# The main function
def train_and_evaluate(output_dir):
model = build_wd_model(NNSIZE, NEMBEDS)
print("Here is our Wide-and-Deep architecture so far:\n")
print(model.summary())
train_file_path = 'gs://{}/babyweight/preproc/{}*{}*'.format(BUCKET, 'train', PATTERN)
eval_file_path = 'gs://{}/babyweight/preproc/{}*{}*'.format(BUCKET, 'eval', PATTERN)
trainds = load_dataset('train*', BATCH_SIZE, tf.estimator.ModeKeys.TRAIN)
evalds = load_dataset('eval*', 1000, tf.estimator.ModeKeys.EVAL)
if EVAL_STEPS:
evalds = evalds.take(EVAL_STEPS)
steps_per_epoch = TRAIN_EXAMPLES // (BATCH_SIZE * NUM_EVALS)
checkpoint_path = os.path.join(output_dir, 'checkpoints/babyweight')
    # Create a checkpoint to save the model weights after every epoch
    # https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/ModelCheckpoint
    cp_callback = tf.keras.callbacks.ModelCheckpoint(
        checkpoint_path, verbose=1, save_weights_only=True)

    # Train the model, evaluating on the eval dataset after every epoch
    # https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit
    history = model.fit(
        trainds,
        validation_data=evalds,
        epochs=NUM_EVALS,
        steps_per_epoch=steps_per_epoch,
        verbose=2,  # one line of output per epoch
        callbacks=[cp_callback])
EXPORT_PATH = os.path.join(output_dir, datetime.datetime.now().strftime('%Y%m%d%H%M%S'))
tf.saved_model.save(model, EXPORT_PATH) # with default serving function
print("Exported trained model to {}".format(EXPORT_PATH))
Explanation: Lab Task 2
The following code edits babyweight_tf2/trainer/model.py.
End of explanation
%%bash
echo "bucket=${BUCKET}"
rm -rf babyweight_trained
export PYTHONPATH=${PYTHONPATH}:${PWD}/babyweight_tf2
python3 -m trainer.task \
--bucket=${BUCKET} \
--output_dir=babyweight_trained \
--job-dir=./tmp \
--pattern="00000-of-" --train_examples=1 --eval_steps=1 --batch_size=10
Explanation: Lab Task 3
After moving the code to a package, make sure it works standalone. (Note the --pattern and --train_examples lines so that I am not trying to boil the ocean on my laptop). Even then, this takes about <b>3 minutes</b> in which you won't see any output ...
End of explanation
%%writefile babyweight_tf2/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-cpu
COPY trainer /babyweight_tf2/trainer
RUN apt update && \
apt install --yes python3-pip && \
pip3 install --upgrade --quiet tf-nightly-2.0-preview
ENV PYTHONPATH ${PYTHONPATH}:/babyweight_tf2
CMD ["python3", "-m", "trainer.task"]
%%writefile babyweight_tf2/push_docker.sh
export PROJECT_ID=$(gcloud config list project --format "value(core.project)")
export IMAGE_REPO_NAME=babyweight_training_container
#export IMAGE_TAG=$(date +%Y%m%d_%H%M%S)
#export IMAGE_URI=gcr.io/$PROJECT_ID/$IMAGE_REPO_NAME:$IMAGE_TAG
export IMAGE_URI=gcr.io/$PROJECT_ID/$IMAGE_REPO_NAME
echo "Building $IMAGE_URI"
docker build -f Dockerfile -t $IMAGE_URI ./
echo "Pushing $IMAGE_URI"
docker push $IMAGE_URI
Explanation: Lab Task 4
Since we are using TensorFlow 2.0 preview, we will use a container image to run the code on AI Platform.
Once TensorFlow 2.0 is released, you will be able to simply do (without having to build a container)
<pre>
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=$(pwd)/babyweight/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=STANDARD_1 \
--runtime-version=$TFVERSION \
-- \
--bucket=${BUCKET} \
--output_dir=${OUTDIR} \
--train_examples=200000
</pre>
End of explanation
%%bash
cd babyweight_tf2
bash push_docker.sh
Explanation: Note: If you get a permissions/stat error when running push_docker.sh from Notebooks, do it from CloudShell:
Open CloudShell on the GCP Console
* git clone https://github.com/GoogleCloudPlatform/training-data-analyst
* cd training-data-analyst/courses/machine_learning/deepdive/06_structured/babyweight_tf2/containers
* bash push_docker.sh
This step takes 5-10 minutes to run
End of explanation
%%bash
OUTDIR=gs://${BUCKET}/babyweight/trained_model
JOBID=babyweight_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBID
gsutil -m rm -rf $OUTDIR
#IMAGE=gcr.io/deeplearning-platform-release/tf2-cpu
IMAGE=gcr.io/$PROJECT/$IMAGE_REPO_NAME
gcloud beta ai-platform jobs submit training $JOBID \
--staging-bucket=gs://$BUCKET --region=$REGION \
--master-image-uri=$IMAGE \
--master-machine-type=n1-standard-4 --scale-tier=CUSTOM \
-- \
--bucket=${BUCKET} \
--output_dir=${OUTDIR} \
--train_examples=200000
Explanation: Lab Task 5
Once the code works in standalone mode, you can run it on Cloud AI Platform. Because this is on the entire dataset, it will take a while. The training run took about <b> two hours </b> for me. You can monitor the job from the GCP console in the Cloud AI Platform section.
End of explanation
%%writefile hyperparam.yaml
trainingInput:
scaleTier: STANDARD_1
hyperparameters:
hyperparameterMetricTag: rmse
goal: MINIMIZE
maxTrials: 20
maxParallelTrials: 5
enableTrialEarlyStopping: True
params:
- parameterName: batch_size
type: INTEGER
minValue: 8
maxValue: 512
scaleType: UNIT_LOG_SCALE
- parameterName: nembeds
type: INTEGER
minValue: 3
maxValue: 30
scaleType: UNIT_LINEAR_SCALE
- parameterName: nnsize
type: INTEGER
minValue: 64
maxValue: 512
scaleType: UNIT_LOG_SCALE
%%bash
OUTDIR=gs://${BUCKET}/babyweight/hyperparam
JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=$(pwd)/babyweight/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=STANDARD_1 \
--config=hyperparam.yaml \
--runtime-version=$TFVERSION \
-- \
--bucket=${BUCKET} \
--output_dir=${OUTDIR} \
--eval_steps=10 \
--train_examples=20000
Explanation: When I ran it, I used train_examples=2000000. When training finished, I filtered in the Stackdriver log on the word "dict" and saw that the last line was:
<pre>
Saving dict for global step 5714290: average_loss = 1.06473, global_step = 5714290, loss = 34882.4, rmse = 1.03186
</pre>
The final RMSE was 1.03 pounds.
<h2> Hyperparameter tuning </h2>
<p>
All of these are command-line parameters to my program. To do hyperparameter tuning, create hyperparam.yaml and pass it via the --config flag.
This step will take <b>up to 2 hours</b> -- you can increase maxParallelTrials or reduce maxTrials to get it done faster. Since maxParallelTrials is the number of initial seeds to start searching from, you don't want it to be too large; otherwise, all you have is a random search.
End of explanation
%%bash
OUTDIR=gs://${BUCKET}/babyweight/trained_model_tuned
JOBNAME=babyweight_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ai-platform jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=$(pwd)/babyweight/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=STANDARD_1 \
--runtime-version=$TFVERSION \
-- \
--bucket=${BUCKET} \
--output_dir=${OUTDIR} \
--train_examples=20000 --batch_size=35 --nembeds=16 --nnsize=281
Explanation: <h2> Repeat training </h2>
<p>
This time with tuned parameters (note last line)
End of explanation |
6,914 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center>
<h1> ILI285 - Computación Científica I / INF285 - Computación Científica </h1>
<h2> Newton's Method in $\mathbb{R}^n$ </h2>
<h2> <a href="#acknowledgements"> [S]cientific [C]omputing [T]eam </a> </h2>
<h2> Version
Step1: <div id='newton' />
Newton's method | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from ipywidgets import interact
Explanation: <center>
<h1> ILI285 - Computación Científica I / INF285 - Computación Científica </h1>
<h2> Newton's Method in $\mathbb{R}^n$ </h2>
<h2> <a href="#acknowledgements"> [S]cientific [C]omputing [T]eam </a> </h2>
<h2> Version: 1.01</h2>
</center>
Table of Contents
Newton's method
Python Modules and Functions
Acknowledgements
End of explanation
f1= lambda x,y: x**2+y**2-1
f2= lambda x,y: y-x**2
J = lambda x,y: np.array([[2*x, 2*y],[-2*x, 1]])
Newton = lambda x,y: np.array([[x],[y]])-np.linalg.solve(J(x,y),np.array([[f1(x,y)],[f2(x,y)]]))
delta = 0.025
x = np.arange(-1.5, 1.5, delta)
y = np.arange(-1.5, 1.5, delta)
X, Y = np.meshgrid(x, y)
Z1 = f1(X,Y)
Z2 = f2(X,Y)
plt.figure()
CS1 = plt.contour(X, Y, Z1,levels=[0])
CS2 = plt.contour(X, Y, Z2,levels=[0])
#plt.clabel(CS1, inline=1, fontsize=10)
#plt.clabel(CS2, inline=1, fontsize=10)
plt.grid()
plt.axis('equal')
plt.title(r'Newton $\mathbb{R}^n$')
plt.show()
def Show_Newton(x0=1.2,y0=0.3,n=0):
plt.figure()
CS1 = plt.contour(X, Y, Z1,levels=[0])
CS2 = plt.contour(X, Y, Z2,levels=[0])
plt.grid()
plt.axis('equal')
plt.title(r'Newton $\mathbb{R}^n$')
plt.plot(x0,y0,'rx')
print(x0,y0)
for i in np.arange(n):
xout=Newton(x0,y0)
x1=float(xout[0])
y1=float(xout[1])
plt.plot(x1,y1,'rx')
plt.plot([x0, x1],[y0, y1],'r')
x0=x1
y0=y1
print(x0,y0)
plt.show()
interact(Show_Newton,x0=(-1.4,1.4,0.1),y0=(-1.4,1.4,0.1), n=(0,100,1))
Explanation: <div id='newton' />
Newton's method
End of explanation |
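# Added convergence check: iterate the Newton map from a fixed starting point and print
# the residual norm ||F(x_k)||; the roughly squared decrease per step illustrates the
# quadratic convergence of Newton's method.
x0, y0 = 1.2, 0.3
for k in range(6):
    res = np.linalg.norm([f1(x0, y0), f2(x0, y0)])
    print('iteration {}: residual = {:.2e}'.format(k, res))
    x0, y0 = [float(v) for v in Newton(x0, y0)]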
6,915 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Scikit-Learn
scikit-learn is a Python library that provides many machine learning algorithms via a consistent API known as the estimator.
Step1: Validation Data
Using validation data, we avoid model overfitting. In general, we split our data set into two partitions
Step2: Estimators
An Estimator can be seen as a base class for any algorithm that learns from data. It can be a classification, regression or clustering algorithm or a transformer that extracts useful features from raw data.
Hyperparameters and Parameters
In the documentation, there is a distinction between hyperparameters and parameters. Hyperparameters refers to algorithm settings that is used to tune the algorithm itself. They are usually set when an estimator is initialised. Parameters on the other hand refers to the coefficients found by the learning algorithm. | Python Code:
import numpy as np
Explanation: Scikit-Learn
scikit-learn is a Python library that provides many machine learning algorithms via a consistent API known as the estimator.
End of explanation
from sklearn.model_selection import train_test_split
# Let X be our input data consisting of
# 5 samples and 2 features
X = np.arange(10).reshape(5, 2)
# Let y be the target feature
y = [0, 1, 2, 3, 4]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
Explanation: Validation Data
Using validation data, we avoid model overfitting. In general, we split our data set into two partitions:
training: used to construct the model.
test: represents the future data.
The test data is only used to make a final estimate of the generalisation error. It is never used for fine-tuning the model. We can use the train_test_split() method to randomly split our data into training and test sets.
End of explanation
from sklearn.linear_model import LinearRegression
lr = LinearRegression(normalize=True)
print(lr) # outputs the name of the estimator and its hyperparameters
Explanation: Estimators
An Estimator can be seen as a base class for any algorithm that learns from data. It can be a classification, regression or clustering algorithm or a transformer that extracts useful features from raw data.
Hyperparameters and Parameters
In the documentation, there is a distinction between hyperparameters and parameters. Hyperparameters refers to algorithm settings that is used to tune the algorithm itself. They are usually set when an estimator is initialised. Parameters on the other hand refers to the coefficients found by the learning algorithm.
End of explanation |
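# Added illustration of the distinction above: hyperparameters are inspectable before
# fitting via get_params(), while parameters such as coef_ and intercept_ only exist
# after fit() has been run on data.
print(lr.get_params())          # hyperparameters set at construction
lr.fit(X_train, y_train)
print(lr.coef_, lr.intercept_)  # parameters learned from the data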
6,916 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Neverending Search for Periodicity
Step1: Problem 1b
Create a function gen_periodic_data that returns
$$y = C + A\cos\left(\frac{2\pi x}{P}\right) + \sigma_y$$
where $C$, $A$, and $P$ are constants, $x$ is input data and $\sigma_y$ represents Gaussian noise.
Hint - this should only require a minor adjustment to your function from lecture 1.
Step2: Problem 1c
Later, we will be using MCMC. Execute the following cell which will plot the chains from emcee to follow the MCMC walkers.
Step3: Problem 1d
Using gen_periodic_data generate 250 observations taken at random times between 0 and 10, with $C = 10$, $A = 2$, $P = 0.4$, and variance of the noise = 0.1. Create an uncertainty array dy with the same length as y and each value equal to $\sqrt{0.1}$.
Plot the resulting data over the exact (noise-free) signal.
Step4: Problem 2) Maximum-Likelihood Optimization
A common approach$^\dagger$ in the literature for problems where there is good reason to place a strong prior on the signal (i.e. to only try and fit a single model) is maximum likelihood optimization [this is sometimes also called $\chi^2$ minimization].
$^\dagger$The fact that this approach is commonly used does not mean it should be commonly used.
In this case, where we are fitting for a known signal in simulated data, we are justified in assuming an extremely strong prior and fitting a sinusoidal model to the data.
Problem 2a
Write a function, correct_model, that returns the expected signal for our data given input time $t$
Step5: For these data the log likelihood of the data can be written as
Step6: Problem 2c
Use the minimize function from scipy.optimize to determine maximum likelihood estimates for the model parameters for the data simulated in problem 1d. What is the best fit period?
The optimization routine requires an initial guess for the model parameters, use 10 for the offset, 1 for the amplitude of variations, and 0.39 for the period.
Hint - as arguments, minimize takes the function, nll, the initial guess, and optional keyword args, which should be (x, y, dy) in this case.
Step7: Problem 2d
Plot the input model, the noisy data, and the maximum likelihood model.
How does the model fit look?
Step8: Problem 2e
Repeat the maximum likelihood optimization, but this time use an initial guess of 10 for the offset, 1 for the amplitude of variations, and 0.393 for the period.
Step9: Given the lecture order this is a little late, but we have now identified the fundamental challenge in identifying periodic signals in astrophysical observations
Step10: Problem 3b
Write a function lnprob1 to calculate the log of the posterior probability. This function should take $\theta$ and x, y, dy as inputs.
Step11: Problem 3c
Initialize the walkers for emcee, which we will use to draw samples from the posterior. Like before, we need to include an initial guess (the parameters of which don't matter much beyond the period). Start with a guess of 0.6 for the period.
As a quick reminder, emcee is a pure python implementation of Goodman & Weare's affine Invariant Markov Chain Monte Carlo (MCMC) Ensemble sampler. emcee seeds several "walkers" which are members of the ensemble. You can think of each walker as its own Metropolis-Hastings chain, but the key detail is that the chains are not independent. Thus, the proposal distribution for each new step in the chain is dependent upon the position of all the other walkers in the chain.
Choosing the initial position for each of the walkers does not significantly affect the final results (though it will affect the burn in time). Standard procedure is to create several walkers in a small ball around a reasonable guess [the samplers will quickly explore beyond the extent of the initial ball].
Step12: Problem 3d
Run the walkers through 1000 steps.
Hint - The run_mcmc method on the sampler object may be useful.
Step13: Problem 3e
Use the previous created plot_chains helper funtion to plot the chains from the MCMC sampling. Note - you may need to adjust nburn after examining the chains.
Have your chains converged? Will extending the chains improve this?
Step14: Problem 3f
Make a corner plot (use corner) to examine the post burn-in samples from the MCMC chains.
Step15: As you can see - force feeding this problem into a Bayesian framework does not automatically generate more reasonable answers. While some of the chains appear to have identified periods close to the correct period most of them are suck in local minima.
There are sampling techniques designed to handle multimodal posteriors, but the non-linear nature of this problem makes it difficult for the various walkers to explore the full parameter space in the way that we would like.
Problem 4) GPs and MCMC to identify a best-fit period
We will now attempt to model the data via a Gaussian Process (GP). As a very brief reminder, a GP is a collection of random variables, in which any finite subset has a multivariate gaussian distribution.
A GP is fully specified by a mean function and a covariance matrix $K$. In this case, we wish to model the simulated data from problem 1. If we specify a cosine kernel for the covariance
Step16: To model the GP in this problem we will use the george package (first introduced during session 4) written by Dan Foreman-Mackey. george is a fast and flexible tool for GP regression in python. It includes several built-in kernel functions, which we will take advantage of.
Problem 4b
Write a function lnlike2 to calculate the likelihood for the GP model assuming a cosine kernel, and mean model defined by model2.
Note - george takes $\ln P$ as an argument and not $P$. We will see why this is useful later.
Hint - there isn't a lot you need to do for this one! But pay attention to the functional form of the model.
Step17: Problem 4c
Write a function lnprior2 to calculte $\ln P(\theta)$, the log prior for the model parameters. Use a wide flat prior for the parameters.
Note - a flat prior in log space is not flat in the parameters.
Step18: Problem 4d
Write a function lnprob2 to calculate the log posterior given the model parameters and data.
Step19: Problem 4e
Intialize 100 walkers in an emcee.EnsembleSampler variable called sampler. For you initial guess at the parameter values set $\ln a = 1$, $\ln P = 1$, and $b = 8$.
Note - this is very similar to what you did previously.
Step20: Problem 4f
Run the chains for 200 steps.
Hint - you'll notice these are shorter chains than we previously used. That is because the computational time is longer, as will be the case for this and all the remaining problems.
Step21: Problem 4g
Plot the chains from the MCMC.
Step22: It should be clear that the chains have not, in this case, converged. This will be true even if you were to continue to run them for a very long time.
Nevertheless, if we treat this entire run as a burn in, we can actually extract some useful information from this initial run. In particular, we will look at the posterior values for the different walkers at the end of their chains. From there we will re-initialize our walkers.
We are actually free to initialize the walkers at any location we choose, so this approach is not cheating. However, one thing that should make you a bit uneasy about the way in which we are re-initializing the walkers is that we have no guarantee that the initial run that we just performed found a global maximum for the posterior. Thus, it may be the case that our continued analysis in this case is not "right."
Problem 4h
Below you are given two arrays, chain_lnp_end and chain_lnprob_end, that contain the final $\ln P$ and log posterior, respectively, for each of the walkers.
Plot these two arrays against each other, to get a sense of what period is "best."
Step23: Problem 4i
Reinitialize the walkers in a ball around the maximum log posterior value from the walkers in the previous burn in. Then run the MCMC sampler for 200 steps.
Hint - you'll want to run sampler.reset() prior to the running the MCMC, but after selecting the new starting point for the walkers.
Step24: Problem 4j
Plot the chains. Have they converged?
Step25: Problem 4k
Make a corner plot of the samples. Does the marginalized distribution on $P$ make sense?
Step26: If you run the cell below, you will see random samples from the posterior overplotted on the data. Do the posterior samples seem reasonable in this case?
Step27: Problem 4l
What is the marginalized best period estimate, including uncertainties?
Step28: In this way - it is possible to use GPs + MCMC to determine the period in noisy irregular data. Furthermore, unlike with LS, we actually have a direct estimate on the uncertainty for that period.
As I previously alluded to, however, the solution does depend on how we initialize the walkers. Because this is simulated data, we know that the correct period has been estimated in this case, but there's no guarantee of that once we start working with astronomical sources. This is something to keep in mind if you plan on using GPs to search for periodic signals...
Problem 5) The Quasi-Periodic Kernel
As we saw in the first lecture, there are many sources with periodic light curves that are not strictly sinusoidal. Thus, the use of the cosine kernel (on its own) may not be sufficient to model the signal. As Suzanne told us during the session, the quasi-periodic kernel
Step29: Problem 5b
Initialize 100 walkers around a reasonable starting point. Be sure that $\ln P = 0$ in this initialization.
Run the MCMC for 200 steps.
Hint - it may be helpful to run this second step in a separate cell.
Step30: Problem 5c
Plot the chains from the MCMC. Did the chains converge?
Step31: Problem 5d
Plot the final $\ln P$ vs. log posterior for each of the walkers. Do you notice anything interesting?
Hint - recall that you are plotting the log posterior, and not the posterior.
Step32: Problem 5e
Re-initialize the walkers around the chain with the maximum log posterior value.
Run the MCMC for 500 steps.
Step33: Problem 5f
Plot the chains for the MCMC.
Hint - you may need to adjust the length of the burn in.
Step34: Problem 5g
Make a corner plot for the samples.
Is the marginalized estimate for the period reasonable?
Step35: Problem 6) GPs + MCMC for actual astronomical data
We will now apply this model to the same light curve that we studied in the LS lecture.
In this case we do not know the actual period (that's only sorta true), so we will have to be even more careful about initializing the walkers and performing burn in than we were previously.
Problem 6a
Read in the data for the light curve stored in example_asas_lc.dat.
Step36: Problem 6b
Adjust the prior from problem 5 to be appropriate for this data set.
Step37: Because we have no idea where to initialize our walkers in this case, we are going to use an ad hoc common sense + brute force approach.
Problem 6c
Run LombScargle on the data and determine the top three peaks in the periodogram. Set nterms = 2, and the maximum frequency to 5 (this is arbitrary but sufficient in this case).
Hint - you may need to search more than the top 3 periodogram values to find the 3 peaks.
Step38: Problem 6d
Initialize one third of your 100 walkers around each of the periods identified in the previous problem (note - the total number of walkers must be an even number, so use 34 walkers around one of the top 3 frequency peaks).
Run the MCMC for 500 steps following this initialization.
Step39: Problem 6e
Plot the chains.
Step40: Problem 6f
Plot $\ln P$ vs. log posterior.
Step41: Problem 6g
Reinitialize the walkers around the previous walker with the maximum posterior value.
Run the MCMC for 500 steps. Plot the chains. Have they converged?
Step42: Problem 6h
Make a corner plot of the samples. What is the marginalized estimate for the period of this source?
How does this estimate compare to LS?
Step43: The cell below shows marginalized samples overplotted on the actual data. How well does the model perform? | Python Code:
ncores = # adjust to number of CPUs on your machine
np.random.seed(23)
Explanation: The Neverending Search for Periodicity: Techniques Beyond Lomb-Scargle
Version 0.1
By AA Miller 28 Apr 2018
In this lecture we will examine alternative methods to search for periodic signals in astronomical time series. The problems will provide a particular focus on a relatively new technique, which is to model the periodic behavior as a Gaussian Process, and then sample the posterior to identify the optimal period via Markov Chain Monte Carlo analysis. A lot of this work has been pioneered by previous DSFP lecturer Suzanne Aigrain.
For a refresher on GPs, see Suzanne's previous lectures: part 1 & part 2. For a refresher on MCMC, see Andy Connolly's previous lectures: part 1, part 2, & part 3.
An Incomplete Whirlwind Tour
In addition to LS, the following techniques are employed to search for periodic signals:
String Length
The string length method (Dworetsky 1983) phase folds the data at trial periods and then minimizes the distance required to connect the phase-ordered observations.
<img style="display: block; margin-left: auto; margin-right: auto" src="./images/StringLength.png" align="middle">
<div align="right"> <font size="-3">(credit: Gaveen Freer - http://slideplayer.com/slide/4212629/#) </font></div>
Phase Dispersion Minimization
Phase Dispersion Minimization (PDM; Jurkevich 1971, Stellingwerf 1978), like LS, folds the data at a large number of trial frequencies $f$.
The phased data are then binned, and the variance is calculated in each bin, combined, and compared to the overall variance of the signal. No functional form of the signal is assumed, and thus, non-sinusoidal signals can be found.
Challenge: how to select the number of bins?
<img style="display: block; margin-left: auto; margin-right: auto" src="./images/PDM.jpg" align="middle">
<div align="right"> <font size="-3">(credit: Gaveen Freer - http://slideplayer.com/slide/4212629/#) </font></div>
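To make the idea concrete, here is a rough sketch of a PDM-style statistic. This is a simplified illustration (not the exact weighting of the original PDM paper), and it assumes numpy is available as np:

def pdm_theta(t, y, period, nbins=10):
    # fold the observation times at the trial period and assign each point to a phase bin
    phase = (t / period) % 1
    bin_id = np.digitize(phase, np.linspace(0, 1, nbins + 1)) - 1
    # pooled variance within the bins, compared to the total variance of the signal
    bin_vars = [np.var(y[bin_id == i], ddof=1) for i in range(nbins) if np.sum(bin_id == i) > 1]
    return np.mean(bin_vars) / np.var(y, ddof=1)  # small values indicate good trial periods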
Analysis of Variance
Analysis of Variance (AOV; Schwarzenberg-Czerny 1989) is similar to PDM. Optimal periods are defined via hypothesis testing, and these methods are found to perform best for certain types of astronomical signals.
Supersmoother
Supersmoother (Reimann) is a least-squares approach wherein a flexible, non-parametric model is fit to the folded observations at many trial frequencies. The use of this flexible model reduces aliasing issues relative to models that assume a sinusoidal shape; however, this comes at the cost of requiring considerable computational time.
Conditional Entropy
Conditional Entropy (CE; Graham et al. 2013), and other entropy-based methods, aim to minimize the entropy in binned (normalized magnitude, phase) space. CE, in particular, is good at suppressing signal due to the window function.
When tested on real observations, CE outperforms most of the alternatives (e.g., LS, PDM, etc).
<img style="display: block; margin-left: auto; margin-right: auto" src="./images/CE.png" align="middle">
<div align="right"> <font size="-3">(credit: Graham et al. 2013) </font></div>
Bayesian Methods
There have been some efforts to frame the period-finding problem in a Bayesian framework. Bretthorst 1988 developed Bayesian generalized LS models, while Gregory & Loredo 1992 applied Bayesian techniques to phase-binned models.
More recently, efforts to use Gaussian processes (GPs) to model and extract a period from the light curve have been developed (Wang et al. 2012). These methods have proved to be especially useful for detecting stellar rotation in Kepler light curves (Angus et al. 2018).
[Think of Suzanne's lectures during session 4]
For this lecture we will focus on the use of GPs, combined with an MCMC analysis (and we will take some shortcuts in the interest of time), to identify periodic signals in astronomical data.
Problem 1) Helper Functions
We are going to create a few helper functions, similar to the previous lecture, that will help minimize repetition for some common tasks in this notebook.
Problem 1a
Adjust the variable ncores to match the number of CPUs on your machine.
End of explanation
def gen_periodic_data( # complete
y = # complete
return y
Explanation: Problem 1b
Create a function gen_periodic_data that returns
$$y = C + A\cos\left(\frac{2\pi x}{P}\right) + \sigma_y$$
where $C$, $A$, and $P$ are constants, $x$ is input data and $\sigma_y$ represents Gaussian noise.
Hint - this should only require a minor adjustment to your function from lecture 1.
End of explanation
def plot_chains(sampler, nburn, paramsNames):
Nparams = len(paramsNames) # + 1
fig, ax = plt.subplots(Nparams,1, figsize = (8,2*Nparams), sharex = True)
fig.subplots_adjust(hspace = 0)
ax[0].set_title('Chains')
xplot = range(len(sampler.chain[0,:,0]))
for i,p in enumerate(paramsNames):
for w in range(sampler.chain.shape[0]):
ax[i].plot(xplot[:nburn], sampler.chain[w,:nburn,i], color="0.5", alpha = 0.4, lw = 0.7, zorder = 1)
ax[i].plot(xplot[nburn:], sampler.chain[w,nburn:,i], color="k", alpha = 0.4, lw = 0.7, zorder = 1)
ax[i].set_ylabel(p)
fig.tight_layout()
return ax
Explanation: Problem 1c
Later, we will be using MCMC. Execute the following cell, which defines a helper function to plot the chains from emcee so that you can follow the MCMC walkers.
End of explanation
x = # complete
y = # complete
dy = # complete
# complete
fig, ax = plt.subplots()
ax.errorbar( # complete
ax.plot( # complete
# complete
# complete
fig.tight_layout()
Explanation: Problem 1d
Using gen_periodic_data generate 250 observations taken at random times between 0 and 10, with $C = 10$, $A = 2$, $P = 0.4$, and variance of the noise = 0.1. Create an uncertainty array dy with the same length as y and each value equal to $\sqrt{0.1}$.
Plot the resulting data over the exact (noise-free) signal.
End of explanation
def correct_model( # complete
# complete
return # complete
Explanation: Problem 2) Maximum-Likelihood Optimization
A common approach$^\dagger$ in the literature for problems where there is good reason to place a strong prior on the signal (i.e. to only try and fit a single model) is maximum likelihood optimization [this is sometimes also called $\chi^2$ minimization].
$^\dagger$The fact that this approach is commonly used, does not mean it should be commonly used.
In this case, where we are fitting for a known signal in simulated data, we are justified in assuming an extremely strong prior and fitting a sinusoidal model to the data.
Problem 2a
Write a function, correct_model, that returns the expected signal for our data given input time $t$:
$$f(t) = a + b\cos\left(\frac{2\pi t}{c}\right)$$
where $a, b, c$ are model parameters.
Hint - store the model parameters in a single variable (this will make things easier later).
End of explanation
def lnlike1( # complete
return # complete
def nll( # complete
return # complete
Explanation: For these data the log likelihood of the data can be written as:
$$\ln \mathcal{L} = -\frac{1}{2} \sum \left(\frac{y - f(t)}{\sigma_y}\right)^2$$
Ultimately, it is easier to minimize the negative log likelihood, so we will do that.
Problem 2b
Write a function, lnlike1, that returns the log likelihood for the data given model parameters $\theta$, and $t, y, \sigma_y$.
Write a second function, nll, that returns the negative log likelihood.
End of explanation
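One possible way to fill in the lnlike1 and nll stubs above. This sketch assumes correct_model is called as correct_model(theta, x); the exact signatures are left for you to choose:

def lnlike1(theta, x, y, dy):
    # Gaussian log likelihood, up to an additive constant
    return -0.5 * np.sum(((y - correct_model(theta, x)) / dy) ** 2)

def nll(theta, x, y, dy):
    # scipy.optimize.minimize minimizes, so return the negative log likelihood
    return -1 * lnlike1(theta, x, y, dy)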
initial_theta = # complete
res = minimize( # complete
print("The maximum likelihood estimate for the period is: {:.5f}".format( # complete
Explanation: Problem 2c
Use the minimize function from scipy.optimize to determine maximum likelihood estimates for the model parameters for the data simulated in problem 1d. What is the best fit period?
The optimization routine requires an initial guess for the model parameters, use 10 for the offset, 1 for the amplitude of variations, and 0.39 for the period.
Hint - as arguments, minimize takes the function, nll, the initial guess, and optional keyword args, which should be (x, y, dy) in this case.
End of explanation
fig, ax = plt.subplots()
ax.errorbar( # complete
ax.plot( # complete
ax.plot( # complete
ax.set_xlabel('x')
ax.set_ylabel('y')
fig.tight_layout()
Explanation: Problem 2d
Plot the input model, the noisy data, and the maximum likelihood model.
How does the model fit look?
End of explanation
initial_theta = # complete
res = minimize( # complete
print("The ML estimate for a, b, c is: {:.5f}, {:.5f}, {:.5f}".format( # complete
Explanation: Problem 2e
Repeat the maximum likelihood optimization, but this time use an initial guess of 10 for the offset, 1 for the amplitude of variations, and 0.393 for the period.
End of explanation
def lnprior1( # complete
a, b, c = # complete
if # complete
return 0.0
return -np.inf
Explanation: Given the lecture order this is a little late, but we have now identified the fundamental challenge in identifying periodic signals in astrophysical observations:
periodic models are highly non-linear!
This can easily be seen in the LS periodograms from the previous lecture: period estimates essentially need to be perfect to properly identify the signal. Take for instance the previous example, where adjusting the initial guess for the period by less than 1% made the difference between a correct estimate and a catastrophic error.
This also means that classic optimization procedures (e.g., gradient descent) are helpless for this problem. If you guess the wrong period there is no obvious way to know whether the subsequent guess should use a larger or smaller period.
Problem 3) Sampling Techniques
Given our lack of success with maximum likelihood techniques, we will now attempt a Bayesian approach. As a brief reminder, Bayes theorem tells us that:
$$P(\theta|X) \propto P(X|\theta) P(\theta).$$
In words, the posterior probability is proportional to the likelihood multiplied by the prior. We will use sampling techniques, MCMC, to estimate the posterior.
Remember - we already calculated the likelihood above.
Problem 3a
Write a function lnprior1 to calculate the log of the prior on $\theta$. Use a reasonable, wide and flat prior for all the model parameters.
Hint - for emcee the log prior should return 0 within the prior and $-\infty$ otherwise.
End of explanation
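For reference, a wide flat prior might look like the sketch below; the bounds are arbitrary choices, not values required by the problem:

def lnprior1(theta):
    a, b, c = theta
    if 0 < a < 100 and 0 < b < 100 and 0 < c < 100:
        return 0.0
    return -np.inf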
def lnprob1( # complete
lp = lnprior1(theta)
if np.isfinite(lp):
return # complete
return -np.inf
Explanation: Problem 3b
Write a function lnprob1 to calculate the log of the posterior probability. This function should take $\theta$ and x, y, dy as inputs.
End of explanation
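The log posterior is just the sum of the log prior and the log likelihood; one possible completion of the stub above:

def lnprob1(theta, x, y, dy):
    lp = lnprior1(theta)
    if np.isfinite(lp):
        return lp + lnlike1(theta, x, y, dy)
    return -np.inf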
guess = [10, 1, 0.6]
ndim = len(guess)
nwalkers = 100
p0 = [np.array(guess) + 1e-8 * np.random.randn(ndim)
for i in range(nwalkers)]
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob1, args=(x, y, dy), threads = ncores)
Explanation: Problem 3c
Initialize the walkers for emcee, which we will use to draw samples from the posterior. Like before, we need to include an initial guess (the parameters of which don't matter much beyond the period). Start with a guess of 0.6 for the period.
As a quick reminder, emcee is a pure Python implementation of Goodman & Weare's affine-invariant Markov Chain Monte Carlo (MCMC) ensemble sampler. emcee seeds several "walkers" which are members of the ensemble. You can think of each walker as its own Metropolis-Hastings chain, but the key detail is that the chains are not independent. Thus, the proposal distribution for each new step in the chain is dependent upon the position of all the other walkers in the chain.
Choosing the initial position for each of the walkers does not significantly affect the final results (though it will affect the burn in time). Standard procedure is to create several walkers in a small ball around a reasonable guess [the samplers will quickly explore beyond the extent of the initial ball].
End of explanation
sampler.run_mcmc( # complete
Explanation: Problem 3d
Run the walkers through 1000 steps.
Hint - The run_mcmc method on the sampler object may be useful.
End of explanation
params_names = # complete
nburn = # complete
plot_chains( # complete
Explanation: Problem 3e
Use the previously created plot_chains helper function to plot the chains from the MCMC sampling. Note - you may need to adjust nburn after examining the chains.
Have your chains converged? Will extending the chains improve this?
End of explanation
samples = sampler.chain[:, nburn:, :].reshape((-1, ndim))
fig = # complete
Explanation: Problem 3f
Make a corner plot (use corner) to examine the post burn-in samples from the MCMC chains.
End of explanation
def model2( # complete
# complete
return # complete
Explanation: As you can see - force feeding this problem into a Bayesian framework does not automatically generate more reasonable answers. While some of the chains appear to have identified periods close to the correct period, most of them are stuck in local minima.
There are sampling techniques designed to handle multimodal posteriors, but the non-linear nature of this problem makes it difficult for the various walkers to explore the full parameter space in the way that we would like.
Problem 4) GPs and MCMC to identify a best-fit period
We will now attempt to model the data via a Gaussian Process (GP). As a very brief reminder, a GP is a collection of random variables, in which any finite subset has a multivariate gaussian distribution.
A GP is fully specified by a mean function and a covariance matrix $K$. In this case, we wish to model the simulated data from problem 1. If we specify a cosine kernel for the covariance:
$$K_{ij} = k(x_i - x_j) = \cos\left(\frac{2\pi \left|x_i - x_j\right|}{P}\right)$$
then the mean function is simply the offset, b.
Problem 4a
Write a function model2 that returns the mean function for the GP given input parameters $\theta$.
Hint - no significant computation is required to complete this task.
End of explanation
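One possible sketch of model2, assuming the parameters are ordered (ln P, ln a, b) as in the lnlike2 function below:

def model2(theta, t):
    # the mean model for the GP is just the constant offset b
    _, _, b = theta
    return b * np.ones_like(t)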
def lnlike2(theta, t, y, yerr):
lnper, lna = theta[:2]
gp = george.GP(np.exp(lna) * kernels.CosineKernel(lnper))
gp.compute(t, yerr)
return gp.lnlikelihood(y - model2(theta, t), quiet=True)
Explanation: To model the GP in this problem we will use the george package (first introduced during session 4) written by Dan Foreman-Mackey. george is a fast and flexible tool for GP regression in python. It includes several built-in kernel functions, which we will take advantage of.
Problem 4b
Write a function lnlike2 to calculate the likelihood for the GP model assuming a cosine kernel, and mean model defined by model2.
Note - george takes $\ln P$ as an argument and not $P$. We will see why this is useful later.
Hint - there isn't a lot you need to do for this one! But pay attention to the functional form of the model.
End of explanation
def lnprior2( # complete
# complete
# complete
# complete
# complete
# complete
# complete
Explanation: Problem 4c
Write a function lnprior2 to calculate $\ln P(\theta)$, the log prior for the model parameters. Use a wide flat prior for the parameters.
Note - a flat prior in log space is not flat in the parameters.
End of explanation
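A sketch of one possible wide, flat prior on the sampled parameters (ln P, ln a, b); the ranges below are arbitrary choices:

def lnprior2(theta):
    lnper, lna, b = theta
    if -10 < lnper < 10 and -10 < lna < 10 and 0 < b < 100:
        return 0.0
    return -np.inf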
def lnprob2(# complete
# complete
# complete
# complete
# complete
# complete
# complete
Explanation: Problem 4d
Write a function lnprob2 to calculate the log posterior given the model parameters and data.
End of explanation
initial = # complete
ndim = len(initial)
p0 = [np.array(initial) + 1e-4 * np.random.randn(ndim)
for i in range(nwalkers)]
sampler = emcee.EnsembleSampler( # complete
Explanation: Problem 4e
Initialize 100 walkers in an emcee.EnsembleSampler variable called sampler. For your initial guess at the parameter values set $\ln a = 1$, $\ln P = 1$, and $b = 8$.
Note - this is very similar to what you did previously.
End of explanation
p0, _, _ = sampler.run_mcmc( # complete
Explanation: Problem 4f
Run the chains for 200 steps.
Hint - you'll notice these are shorter chains than we previously used. That is because the computational time is longer, as will be the case for this and all the remaining problems.
End of explanation
params_names = ['ln(P)', 'ln(a)', 'b']
nburn = # complete
plot_chains( # complete
Explanation: Problem 4g
Plot the chains from the MCMC.
End of explanation
chain_lnp_end = sampler.chain[:,-1,0]
chain_lnprob_end = sampler.lnprobability[:,-1]
fig, ax = plt.subplots()
ax.scatter( # complete
# complete
# complete
fig.tight_layout()
Explanation: It should be clear that the chains have not, in this case, converged. This will be true even if you were to continue to run them for a very long time.
Nevertheless, if we treat this entire run as a burn in, we can actually extract some useful information from this initial run. In particular, we will look at the posterior values for the different walkers at the end of their chains. From there we will re-initialize our walkers.
We are actually free to initialize the walkers at any location we choose, so this approach is not cheating. However, one thing that should make you a bit uneasy about the way in which we are re-initializing the walkers is that we have no guarantee that the initial run that we just performed found a global maximum for the posterior. Thus, it may be the case that our continued analysis in this case is not "right."
Problem 4h
Below you are given two arrays, chain_lnp_end and chain_lnprob_end, that contain the final $\ln P$ and log posterior, respectively, for each of the walkers.
Plot these two arrays against each other, to get a sense of what period is "best."
End of explanation
p = # complete
sampler.reset()
p0 = # complete
p0, _, _ = sampler.run_mcmc( # complete
Explanation: Problem 4i
Reinitialize the walkers in a ball around the maximum log posterior value from the walkers in the previous burn in. Then run the MCMC sampler for 200 steps.
Hint - you'll want to run sampler.reset() prior to the running the MCMC, but after selecting the new starting point for the walkers.
End of explanation
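One way to do this, using the emcee attributes already used elsewhere in this notebook (sampler.chain and sampler.lnprobability); the size of the re-initialization ball is an arbitrary choice:

best = np.argmax(sampler.lnprobability[:, -1])   # walker with the largest final log posterior
p = sampler.chain[best, -1, :]
sampler.reset()
p0 = [p + 1e-5 * np.random.randn(ndim) for i in range(nwalkers)]
p0, _, _ = sampler.run_mcmc(p0, 200)             # 200 steps, as requested in the problem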
paramsNames = ['ln(P)', 'ln(a)', 'b']
nburn = # complete
plot_chains( # complete
Explanation: Problem 4j
Plot the chains. Have they converged?
End of explanation
fig = # complete
Explanation: Problem 4k
Make a corner plot of the samples. Does the marginalized distribution on $P$ make sense?
End of explanation
fig, ax = plt.subplots()
ax.errorbar(x, y, dy, fmt='o')
ax.set_xlabel('x')
ax.set_ylabel('y')
for s in samples[np.random.randint(len(samples), size=5)]:
# Set up the GP for this sample.
lnper, lna = s[:2]
gp = george.GP(np.exp(lna) * kernels.CosineKernel(lnper))
gp.compute(x, dy)
# Compute the prediction conditioned on the observations and plot it.
m = gp.sample_conditional(y - model2(s, x), x_grid) + model2(s, x_grid)
ax.plot(x_grid, m, color="0.2", alpha=0.3)
fig.tight_layout()
Explanation: If you run the cell below, you will see random samples from the posterior overplotted on the data. Do the posterior samples seem reasonable in this case?
End of explanation
# complete
print('ln(P) = {:.6f} +{:.6f} -{:.6f}'.format( # complete
print('True period = 0.4, GP Period = {:.4f}'.format( # complete
Explanation: Problem 4l
What is the marginalized best period estimate, including uncertainties?
End of explanation
# complete
# complete
# complete
def lnprob3( # complete
# complete
# complete
Explanation: In this way - it is possible to use GPs + MCMC to determine the period in noisy irregular data. Furthermore, unlike with LS, we actually have a direct estimate on the uncertainty for that period.
As I previously alluded to, however, the solution does depend on how we initialize the walkers. Because this is simulated data, we know that the correct period has been estimated in this case, but there's no guarantee of that once we start working with astronomical sources. This is something to keep in mind if you plan on using GPs to search for periodic signals...
Problem 5) The Quasi-Periodic Kernel
As we saw in the first lecture, there are many sources with periodic light curves that are not strictly sinusoidal. Thus, the use of the cosine kernel (on its own) may not be sufficient to model the signal. As Suzanne told us during the session, the quasi-periodic kernel:
$$K_{ij} = k(x_i - x_j) = \exp \left(-\Gamma \sin^2\left[\frac{\pi}{P} \left|x_i - x_j\right|\right]\right)$$
is useful for non-sinusoidal signals. We will now use this kernel to model the variations in the simulated data.
Problem 5a
Write a function lnprob3 to calculate log posterior given model parameters $\theta$ and data x, y, dy.
Hint - it may be useful to write this out as multiple functions.
End of explanation
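A sketch of how two of the pieces might look, mirroring lnlike2 above and using the ExpSine2 kernel that appears in the plotting cell at the end of this notebook. The parameter ordering (ln P, ln a, b, ln gamma) matches the paramsNames used below, and model3 is assumed to be the constant-offset mean model analogous to model2:

def model3(theta, t):
    # mean model: constant offset b
    return theta[2] * np.ones_like(t)

def lnlike3(theta, t, y, yerr):
    lnper, lna, b, lngamma = theta
    gp = george.GP(np.exp(lna) * kernels.ExpSine2Kernel(np.exp(lngamma), lnper))
    gp.compute(t, yerr)
    return gp.lnlikelihood(y - model3(theta, t), quiet=True)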
# complete
# complete
# complete
sampler = emcee.EnsembleSampler( # complete
p0, _, _ = sampler.run_mcmc( # complete
Explanation: Problem 5b
Initialize 100 walkers around a reasonable starting point. Be sure that $\ln P = 0$ in this initialization.
Run the MCMC for 200 steps.
Hint - it may be helpful to run this second step in a separate cell.
End of explanation
paramsNames = ['ln(P)', 'ln(a)', 'b', '$ln(\gamma)$']
nburn = # complete
plot_chains( # complete
Explanation: Problem 5c
Plot the chains from the MCMC. Did the chains converge?
End of explanation
# complete
# complete
# complete
# complete
# complete
# complete
Explanation: Problem 5d
Plot the final $\ln P$ vs. log posterior for each of the walkers. Do you notice anything interesting?
Hint - recall that you are plotting the log posterior, and not the posterior.
End of explanation
p = # complete
sampler.reset()
# complete
sampler.run_mcmc( # complete
Explanation: Problem 5e
Re-initialize the walkers around the chain with the maximum log posterior value.
Run the MCMC for 500 steps.
End of explanation
paramsNames = ['ln(P)', 'ln(a)', 'b', '$ln(\gamma)$']
nburn = # complete
plot_chains( # complete
Explanation: Problem 5f
Plot the chains for the MCMC.
Hint - you may need to adjust the length of the burn in.
End of explanation
# complete
fig = # complete
Explanation: Problem 5g
Make a corner plot for the samples.
Is the marginalized estimate for the period reasonable?
End of explanation
# complete
Explanation: Problem 6) GPs + MCMC for actual astronomical data
We will now apply this model to the same light curve that we studied in the LS lecture.
In this case we do not know the actual period (that's only sorta true), so we will have to be even more careful about initializing the walkers and performing burn in than we were previously.
Problem 6a
Read in the data for the light curve stored in example_asas_lc.dat.
End of explanation
def lnprior3( # complete
# complete
# complete
# complete
# complete
# complete
# complete
# complete
Explanation: Problem 6b
Adjust the prior from problem 5 to be appropriate for this data set.
End of explanation
from astropy.stats import LombScargle
frequency, power = # complete
print('Top LS period is {}'.format(# complete
print( # complete
Explanation: Because we have no idea where to initialize our walkers in this case, we are going to use an ad hoc common sense + brute force approach.
Problem 6c
Run LombScargle on the data and determine the top three peaks in the periodogram. Set nterms = 2, and the maximum frequency to 5 (this is arbitrary but sufficient in this case).
Hint - you may need to search more than the top 3 periodogram values to find the 3 peaks.
End of explanation
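One possible way to do this; the lc column names follow the plotting cell at the end of this notebook, and inspecting the ten largest periodogram values is an arbitrary choice:

frequency, power = LombScargle(lc['hjd'], lc['mag'], lc['mag_unc'],
                               nterms=2).autopower(maximum_frequency=5)
top = np.argsort(power)[::-1]
print(1 / frequency[top[:10]])   # inspect these by eye to pick out 3 distinct peaks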
initial1 = # complete
# complete
# complete
initial2 = # complete
# complete
# complete
initial3 = # complete
# complete
# complete
# complete
sampler = emcee.EnsembleSampler( # complete
p0, _, _ = sampler.run_mcmc( # complete
Explanation: Problem 6d
Initialize one third of your 100 walkers around each of the periods identified in the previous problem (note - the total number of walkers must be an even number, so use 34 walkers around one of the top 3 frequency peaks).
Run the MCMC for 500 steps following this initialization.
End of explanation
paramsNames = ['ln(P)', 'ln(a)', 'b', '$ln(\gamma)$']
nburn = # complete
plot_chains( # complete
Explanation: Problem 6e
Plot the chains.
End of explanation
# complete
# complete
# complete
# complete
# complete
# complete
Explanation: Problem 6f
Plot $\ln P$ vs. log posterior.
End of explanation
# complete
sampler.reset()
# complete
# complete
sampler.run_mcmc( # complete
paramsNames = ['ln(P)', 'ln(a)', 'b', '$ln(\gamma)$']
nburn = # complete
plot_chains( # complete
Explanation: Problem 6g
Reinitialize the walkers around the previous walker with the maximum posterior value.
Run the MCMC for 500 steps. Plot the chains. Have they converged?
End of explanation
# complete
fig = corner.corner( # complete
# complete
print('ln(P) = {:.6f} +{:.6f} -{:.6f}'.format( # complete
print('GP Period = {:.6f}'.format( # complete
Explanation: Problem 6h
Make a corner plot of the samples. What is the marginalized estimate for the period of this source?
How does this estimate compare to LS?
End of explanation
fig, ax = plt.subplots()
ax.errorbar(lc['hjd'], lc['mag'], lc['mag_unc'], fmt='o')
ax.set_xlabel('HJD (d)')
ax.set_ylabel('mag')
hjd_grid = np.linspace(2800, 3000,3000)
for s in samples[np.random.randint(len(samples), size=5)]:
# Set up the GP for this sample.
lnper, lna, b, lngamma = s
gp = george.GP(np.exp(lna) * kernels.ExpSine2Kernel(np.exp(lngamma), lnper))
gp.compute(lc['hjd'], lc['mag_unc'])
# Compute the prediction conditioned on the observations and plot it.
m = gp.sample_conditional(lc['mag'] - model3(s, lc['hjd']), hjd_grid) + model3(s, hjd_grid)
ax.plot(hjd_grid, m, color="0.2", alpha=0.3)
fig.tight_layout()
Explanation: The cell below shows marginalized samples overplotted on the actual data. How well does the model perform?
End of explanation |
6,917 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: For this problem set, we'll be using the Jupyter notebook
Step4: Your function should print [1, 4, 9, 16, 25, 36, 49, 64, 81, 100] for $n=10$. Check that it does
Step6: Part B (1 point)
Using your squares function, write a function that computes the sum of the squares of the numbers from 1 to $n$. Your function should call the squares function -- it should NOT reimplement its functionality.
Step9: The sum of squares from 1 to 10 should be 385. Verify that this is the answer you get
Step11: Part C (1 point)
Using LaTeX math notation, write out the equation that is implemented by your sum_of_squares function.
$\sum_{i=1}^n i^2$
Part D (2 points)
Find a usecase for your sum_of_squares function and implement that usecase in the cell below. | Python Code:
def squares(n):
    """Compute the squares of numbers from 1 to n, such that the
    ith element of the returned list equals i^2."""
### BEGIN SOLUTION
if n < 1:
raise ValueError("n must be greater than or equal to 1")
return [i ** 2 for i in range(1, n + 1)]
### END SOLUTION
Explanation: For this problem set, we'll be using the Jupyter notebook:
Part A (2 points)
Write a function that returns a list of numbers, such that $x_i=i^2$, for $1\leq i \leq n$. Make sure it handles the case where $n<1$ by raising a ValueError.
End of explanation
squares(10)
"""Check that squares returns the correct output for several inputs"""
assert squares(1) == [1]
assert squares(2) == [1, 4]
assert squares(10) == [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
assert squares(11) == [1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121]
"""Check that squares raises an error for invalid inputs"""
try:
squares(0)
except ValueError:
pass
else:
raise AssertionError("did not raise")
try:
squares(-4)
except ValueError:
pass
else:
raise AssertionError("did not raise")
Explanation: Your function should print [1, 4, 9, 16, 25, 36, 49, 64, 81, 100] for $n=10$. Check that it does:
End of explanation
def sum_of_squares(n):
    """Compute the sum of the squares of numbers from 1 to n."""
### BEGIN SOLUTION
return sum(squares(n))
### END SOLUTION
Explanation: Part B (1 point)
Using your squares function, write a function that computes the sum of the squares of the numbers from 1 to $n$. Your function should call the squares function -- it should NOT reimplement its functionality.
End of explanation
sum_of_squares(10)
"""Check that sum_of_squares returns the correct answer for various inputs."""
assert sum_of_squares(1) == 1
assert sum_of_squares(2) == 5
assert sum_of_squares(10) == 385
assert sum_of_squares(11) == 506
"""Check that sum_of_squares relies on squares."""
orig_squares = squares
del squares
try:
sum_of_squares(1)
except NameError:
pass
else:
raise AssertionError("sum_of_squares does not use squares")
finally:
squares = orig_squares
Explanation: The sum of squares from 1 to 10 should be 385. Verify that this is the answer you get:
End of explanation
def pyramidal_number(n):
    """Returns the n^th pyramidal number"""
return sum_of_squares(n)
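# A quick sanity check of the usecase above against the standard closed form
# n*(n+1)*(2n+1)/6; the value 385 matches the earlier test for sum_of_squares(10).
assert pyramidal_number(10) == 10 * 11 * 21 // 6 == 385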
Explanation: Part C (1 point)
Using LaTeX math notation, write out the equation that is implemented by your sum_of_squares function.
$\sum_{i=1}^n i^2$
Part D (2 points)
Find a usecase for your sum_of_squares function and implement that usecase in the cell below.
End of explanation |
6,918 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: TensorFlow graph optimization with Grappler
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Create a context manager to easily toggle optimizer states.
Step3: Compare execution performance with and without Grappler
TensorFlow 2 and beyond executes eagerly by default. Use tf.function to switch the default execution to Graph mode. Grappler runs automatically in the background to apply the graph optimizations above and improve execution performance.
Constant folding optimizer
As a preliminary example, consider a function which performs operations on constants and returns an output.
Step4: Turn off the constant folding optimizer and execute the function
Step5: Enable the constant folding optimizer and execute the function again to observe a speed-up in function execution.
Step6: Debug stripper optimizer
Consider a simple function that checks the numeric value of its input argument and returns it.
Step7: First, execute the function with the debug stripper optimizer turned off.
Step8: tf.debugging.check_numerics raises an invalid argument error because of the Inf argument to test_func.
Enable the debug stripper optimizer and execute the function again. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import numpy as np
import timeit
import traceback
import contextlib
import tensorflow as tf
Explanation: TensorFlow graph optimization with Grappler
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/graph_optimization"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/graph_optimization.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/graph_optimization.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/graph_optimization.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Overview
TensorFlow uses both graph and eager executions to execute computations. A tf.Graph contains a set of tf.Operation objects (ops) which represent units of computation and tf.Tensor objects which represent the units of data that flow between ops.
Grappler is the default graph optimization system in the TensorFlow runtime. Grappler applies optimizations in graph mode (within tf.function) to improve the performance of your TensorFlow computations through graph simplifications and other high-level optimizations such as inlining function bodies to enable inter-procedural optimizations. Optimizing the tf.Graph also reduces the device peak memory usage and improves hardware utilization by optimizing the mapping of graph nodes to compute resources.
Use tf.config.optimizer.set_experimental_options() for finer control over your tf.Graph optimizations.
Available graph optimizers
Grappler performs graph optimizations through a top-level driver called the MetaOptimizer. The following graph optimizers are available with TensorFlow:
Constant folding optimizer - Statically infers the value of tensors when possible by folding constant nodes in the graph and materializes the result using constants.
Arithmetic optimizer - Simplifies arithmetic operations by eliminating common subexpressions and simplifying arithmetic statements.
Layout optimizer - Optimizes tensor layouts to execute data format dependent operations such as convolutions more efficiently.
Remapper optimizer - Remaps subgraphs onto more efficient implementations by replacing commonly occurring subgraphs with optimized fused monolithic kernels.
Memory optimizer - Analyzes the graph to inspect the peak memory usage for each operation and inserts CPU-GPU memory copy operations for swapping GPU memory to CPU to reduce the peak memory usage.
Dependency optimizer - Removes or rearranges control dependencies to shorten the critical path for a model step or enables other
optimizations. Also removes nodes that are effectively no-ops such as Identity.
Pruning optimizer - Prunes nodes that have no effect on the output from the graph. It is usually run first to reduce the size of the graph and speed up processing in other Grappler passes.
Function optimizer - Optimizes the function library of a TensorFlow program and inlines function bodies to enable other inter-procedural optimizations.
Shape optimizer - Optimizes subgraphs that operate on shape and shape related information.
Autoparallel optimizer - Automatically parallelizes graphs by splitting along the batch dimension. This optimizer is turned OFF by default.
Loop optimizer - Optimizes the graph control flow by hoisting loop-invariant subgraphs out of loops and by removing redundant stack operations in loops. Also optimizes loops with statically known trip counts and removes statically known dead branches in conditionals.
Scoped allocator optimizer - Introduces scoped allocators to reduce data movement and to consolidate some operations.
Pin to host optimizer - Swaps small operations onto the CPU. This optimizer is turned OFF by default.
Auto mixed precision optimizer - Converts data types to float16 where applicable to improve performance. Currently applies only to GPUs.
Debug stripper - Strips nodes related to debugging operations such as tf.debugging.Assert, tf.debugging.check_numerics, and tf.print from the graph. This optimizer is turned OFF by default.
Setup
End of explanation
@contextlib.contextmanager
def options(options):
old_opts = tf.config.optimizer.get_experimental_options()
tf.config.optimizer.set_experimental_options(options)
try:
yield
finally:
tf.config.optimizer.set_experimental_options(old_opts)
Explanation: Create a context manager to easily toggle optimizer states.
End of explanation
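As a quick illustration of how the context manager can be used (the specific options toggled here are arbitrary):

with options({'constant_folding': False, 'debug_stripper': True}):
    print(tf.config.optimizer.get_experimental_options())   # temporary settings inside the block
print(tf.config.optimizer.get_experimental_options())       # previous settings restored on exit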
def test_function_1():
@tf.function
def simple_function(input_arg):
print('Tracing!')
a = tf.constant(np.random.randn(2000,2000), dtype = tf.float32)
c = a
for n in range(50):
c = c@a
return tf.reduce_mean(c+input_arg)
return simple_function
Explanation: Compare execution performance with and without Grappler
TensorFlow 2 and beyond executes eagerly by default. Use tf.function to switch the default execution to Graph mode. Grappler runs automatically in the background to apply the graph optimizations above and improve execution performance.
Constant folding optimizer
As a preliminary example, consider a function which performs operations on constants and returns an output.
End of explanation
with options({'constant_folding': False}):
print(tf.config.optimizer.get_experimental_options())
simple_function = test_function_1()
# Trace once
x = tf.constant(2.2)
simple_function(x)
print("Vanilla execution:", timeit.timeit(lambda: simple_function(x), number = 1), "s")
Explanation: Turn off the constant folding optimizer and execute the function:
End of explanation
with options({'constant_folding': True}):
print(tf.config.optimizer.get_experimental_options())
simple_function = test_function_1()
# Trace once
x = tf.constant(2.2)
simple_function(x)
print("Constant folded execution:", timeit.timeit(lambda: simple_function(x), number = 1), "s")
Explanation: Enable the constant folding optimizer and execute the function again to observe a speed-up in function execution.
End of explanation
def test_function_2():
@tf.function
def simple_func(input_arg):
output = input_arg
tf.debugging.check_numerics(output, "Bad!")
return output
return simple_func
Explanation: Debug stripper optimizer
Consider a simple function that checks the numeric value of its input argument and returns it.
End of explanation
test_func = test_function_2()
p1 = tf.constant(float('inf'))
try:
test_func(p1)
except tf.errors.InvalidArgumentError as e:
traceback.print_exc(limit=2)
Explanation: First, execute the function with the debug stripper optimizer turned off.
End of explanation
with options({'debug_stripper': True}):
test_func2 = test_function_2()
p1 = tf.constant(float('inf'))
try:
test_func2(p1)
except tf.errors.InvalidArgumentError as e:
traceback.print_exc(limit=2)
Explanation: tf.debugging.check_numerics raises an invalid argument error because of the Inf argument to test_func.
Enable the debug stripper optimizer and execute the function again.
End of explanation |
6,919 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
notation for differentiation
We'll mostly use Lagrange's notation: the first three derivatives of a function $f$ are denoted $f'$, $f''$ and $f'''$. After that we'll use $f^{(4)}, f^{(5)}, \ldots, f^{(n)}$.
limits
Below is a function $f$ with two cases.
$
f(x) = \begin{cases}
x + 1, & \text{if $x \gt 0$} \
-x + 2, & \text{if $x \lt 0$}
\end{cases}
$
Notice that for this function, the value of $f$ for when $x = 0$ is undefined. Also note that this is a discontinuous function.
Step1: Even though the value of $y$ for when $x = 0$ is undefined we can say something about the limits of this function.
right-hand limit
If we approach from a position where $x \gt 0$ we can say that as $x$ approaches 0 the limit of the function is $x + 1$.
$$\lim_{x^+\to 0}f(x) = \lim_{x\to 0}x + 1 = 1$$
left-hand limit
On the other hand, if we approach from a position where $x \lt 0$ then we see that as $x$ approaches 0 the limit of the function is $-x + 2$.
$$\lim_{x^-\to 0}f(x) = \lim_{x\to 0}-x + 2 = 2$$
And now that we have defined limits we can also define what it means for a function to be continuous at a certain value. A function $f$ is continuous at $x_0$ when $\lim_{x\to x_0}f(x) = f(x_0)$
discontinuous functions
jump discontinuity
The limits of a function $f(x)$ at $x_0$ exist but are not equal. This is basically the example from above.
$$\lim_{x^+\to x_0}f(x) \neq \lim_{x^-\to x_0}f(x)$$
removable discontinuity
Let $g(x) = \frac{\sin x}{x}$ and $h(x) = \frac{1 - \cos x}{x}$
Step2: Note that dividing by zero is an undefined operation, so both of these functions are undefined when $x = 0$ and we'll have two little circles in the plot. However we can see that $\lim_{x^+\to 0}g(x) = 1$ and that $\lim_{x^-\to 0}g(x) = 1$ so generally we can say that $\lim_{x\to 0}g(x) = 1$. We can also see that $\lim_{x^+\to 0}h(x) = 0$ and $\lim_{x^-\to 0}h(x) = 0$ so $\lim_{x\to 0}h(x) = 0$.
Because for both functions $\lim_{x^+\to 0} = \lim_{x^-\to 0}$ we can say that these functions have a removable discontinuity at $x = 0$
infinite discontinuity
This time we'll use $y = f(x) = \frac{1}{x}$
Step3: Now we see that $\lim_{x^+\to 0}\frac{1}{x} = \infty$ and $\lim_{x^-\to 0}\frac{1}{x} = -\infty$ and even though some people might say that these limits are undefined they are going in a definite direction so if able we should specify what they are.
However we cannot say that $\lim_{x\to 0}\frac{1}{x} = \infty$ even though this is sometimes done it's usually because people are sloppy and only considering $y = \frac{1}{x}$ for when $x \gt 0$.
There's an interesting thing we can observe when we plot the derivative of this function.
Step4: If we take the derivative of an odd function we get an even function.
other (ugly) discontinuities
Take for example the function $y = \sin \frac{1}{x}$ as $x\to 0$
Step5: As we approach $x = 0$ it will oscillate into infinity. There is no left or right limit in this case.
rate of change
Let's start with the question of what is a derivative? We'll look at a few different aspects
Step6: We can now define: $f'(x_0)$, the derivative of $f$ at $x_0$, is the slope of the tangent line to $y = f(x)$ at the point $P$. The tangent line is equal to the limit of secant lines $PQ$ as $Q\to P$ where $P$ is fixed. In the picture above we can see that the slope of our secant line $PQ$ is simply defined as $\frac{\Delta{f}}{\Delta{x}}$. However we can now define the slope $m$ of our tangent line as
$$m = \lim_{\Delta{x}\to 0}\frac{\Delta{f}}{\Delta{x}}$$
The next thing we want to do is to write $\Delta{f}$ more explicitly. We already have $P = (x_0, f(x_0))$ and $Q = (x_0 + \Delta{x}, f(x_0 + \Delta{x}))$. With this information we can write down
Step7: Now let's compute $f'(x)$.
$$
\begin{align}
f'(x) & = \lim_{\Delta{x}\to 0}\frac{f(x + \Delta{x}) - f(x)}{\Delta{x}} \
& = \lim_{\Delta{x}\to 0}\frac{\frac{1}{1 + (x + \Delta{x})^2} - \frac{1}{1 + x^2}}{\Delta{x}} \
& = \lim_{\Delta{x}\to 0}\frac{1}{\Delta{x}}\frac{1 + x^2 - (1 + (x + \Delta{x})^2)}{(1 + (x + \Delta{x})^2)(1 + x^2)} \
& = \lim_{\Delta{x}\to 0}\frac{1}{\Delta{x}}\frac{1 + x^2 -1 - x^2 - 2x\Delta{x} - \Delta{x}^2}{(1 + (x + \Delta{x})^2)(1 + x^2)} \
& = \lim_{\Delta{x}\to 0}\frac{-2x - \Delta{x}}{(1 + (x + \Delta{x})^2)(1 + x^2)} \
& = \frac{-2x}{(1 + x^2)^2}
\end{align}
$$ | Python Code:
c1 = lambda x: x + 1
c2 = lambda x: -x + 2
x1 = np.linspace(0.01, 2, 10)
x2 = np.linspace(-2, -0.01, 10)
plt.plot(x1, c1(x1), label=r"$y = x + 1$")
plt.plot(x2, c2(x2), label=r"$y = -x + 2$")
plt.plot(0, 2, 'wo', markersize=7)
plt.plot(0, 1, 'wo', markersize=7)
ax = plt.axes()
ax.set_ylim(0, 4)
plt.legend(loc=3)
Explanation: notation for differentiation
We'll mostly use Lagrange's notation: the first three derivatives of a function $f$ are denoted $f'$, $f''$ and $f'''$. After that we'll use $f^{(4)}, f^{(5)}, \ldots, f^{(n)}$.
limits
Below is a function $f$ with two cases.
$
f(x) = \begin{cases}
x + 1, & \text{if $x \gt 0$} \
-x + 2, & \text{if $x \lt 0$}
\end{cases}
$
Notice that for this function, the value of $f$ for when $x = 0$ is undefined. Also note that this is a discontinuous function.
End of explanation
g = lambda x: np.sin(x) / x
h = lambda x: (1 - np.cos(x)) / x
x = np.linspace(-3 * np.pi, 3 * np.pi, 100)
ax = plt.axes()
ax.set_xlim(-3 * np.pi, 3 * np.pi)
ax.set_ylim(-1, 1.25)
plt.plot(x, g(x), label=r"$y = g(x) = \frac{\sin x}{x}$")
plt.plot(x, h(x), label=R"$y = h(x) = \frac{1 - \cos x}{x}$")
plt.plot(0, 1, 'wo', markersize=7)
plt.plot(0, 0, 'wo', markersize=7)
plt.legend(loc=4)
Explanation: Even though the value of $y$ for when $x = 0$ is undefined we can say something about the limits of this function.
right-hand limit
If we approach from a position where $x \gt 0$ we can say that as $x$ approaches 0 the limit of the function is $x + 1$.
$$\lim_{x^+\to 0}f(x) = \lim_{x\to 0}x + 1 = 1$$
left-hand limit
On the other hand, if we approach from a position where $x \lt 0$ then we see that as $x$ approaches 0 the limit of the function is $-x + 2$.
$$\lim_{x^-\to 0}f(x) = \lim_{x\to 0}-x + 2 = 2$$
And now that we have defined limits we can also define what it means for a function to be continuous at a certain value. A function $f$ is continuous at $x_0$ when $\lim_{x\to x_0}f(x) = f(x_0)$
discontinuous functions
jump discontinuity
The limits of a function $f(x)$ at $x_0$ exist but are not equal. This is basically the example from above.
$$\lim_{x^+\to x_0}f(x) \neq \lim_{x^-\to x_0}f(x)$$
removable discontinuity
Let $g(x) = \frac{\sin x}{x}$ and $h(x) = \frac{1 - \cos x}{x}$
End of explanation
f = lambda x: 1/x
x1 = np.linspace(-0.5, -0.01, 1000)
x2 = np.linspace(0.01, 0.5, 1000)
ax = plt.axes()
#ax.spines['left'].set_position(('data', 0))
#ax.spines['bottom'].set_position(('data', 0))
ax.set_xlim(-0.1, 0.1)
plt.plot(x1, f(x1), 'b')
plt.plot(x2, f(x2), 'b')
Explanation: Note that dividing by zero is an undefined operation, so both of these functions are undefined when $x = 0$ and we'll have two little circles in the plot. However we can see that $\lim_{x^+\to 0}g(x) = 1$ and that $\lim_{x^-\to 0}g(x) = 1$ so generally we can say that $\lim_{x\to 0}g(x) = 1$. We can also see that $\lim_{x^+\to 0}h(x) = 0$ and $\lim_{x^-\to 0}h(x) = 0$ so $\lim_{x\to 0}h(x) = 0$.
Because for both functions $\lim_{x^+\to 0} = \lim_{x^-\to 0}$ we can say that these functions have a removable discontinuity at $x = 0$
infinite discontinuity
This time we'll use $y = f(x) = \frac{1}{x}$
End of explanation
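Before moving on to $1/x$, the removable limits discussed above can also be checked symbolically; a small sketch using sympy (which is not used elsewhere in this notebook):

import sympy as sp
xs = sp.symbols('x')
print(sp.limit(sp.sin(xs) / xs, xs, 0))          # 1
print(sp.limit((1 - sp.cos(xs)) / xs, xs, 0))    # 0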
f0 = lambda x: 1/x
f1 = lambda x: -1/x**2
x1 = np.linspace(-0.5, -0.01, 1000)
x2 = np.linspace(0.01, 0.5, 1000)
p1 = plt.subplot(211)
p1.set_xlim(-0.1, 0.1)
plt.plot(x1, f0(x1), 'b', label=r"$y = 1/x$")
plt.plot(x2, f0(x2), 'b')
plt.legend(loc=4)
p2 = plt.subplot(212)
p2.set_xlim(-0.1, 0.1)
p2.set_ylim(-2000, 0)
plt.plot(x1, f1(x1), 'g', label=r"$y = -1/x^2$")
plt.plot(x2, f1(x2), 'g')
plt.legend(loc=4)
Explanation: Now we see that $\lim_{x^+\to 0}\frac{1}{x} = \infty$ and $\lim_{x^-\to 0}\frac{1}{x} = -\infty$ and even though some people might say that these limits are undefined they are going in a definite direction so if able we should specify what they are.
However we cannot say that $\lim_{x\to 0}\frac{1}{x} = \infty$ even though this is sometimes done it's usually because people are sloppy and only considering $y = \frac{1}{x}$ for when $x \gt 0$.
There's an interesting thing we can observe when we plot the derivative of this function.
End of explanation
f = lambda x: np.sin(1/x)
x1 = np.linspace(-0.1, -0.01, 100)
x2 = np.linspace(0.01, 0.1, 100)
ax = plt.axes()
ax.set_xlim(-0.1, 0.1)
ax.set_ylim(-1.2, 1.2)
plt.plot(x1, f(x1))
plt.plot(x2, f(x2))
Explanation: If we take the derivative of an odd function we get an even function.
other (ugly) discontinuities
Take for example the function $y = \sin \frac{1}{x}$ as $x\to 0$
End of explanation
f = lambda x: x**2
fig, ax = plt.subplots()
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_position(('data', 0))
ax.spines['left'].set_position(('data', 0))
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticklabels(['$x_0$', '$x$'])
ax.yaxis.set_ticklabels(['$y_0$', '$y$'])
ax.xaxis.set_ticks([1, 1.5])
ax.yaxis.set_ticks([1, f(1.5)])
ax.set_xlim(-1, 2)
ax.set_ylim(-1, 3)
x = np.linspace(-1, 2, 100)
plt.plot(x, f(x))
plt.plot(1, f(1), 'ko')
plt.plot(1.5, f(1.5), 'ko')
plt.plot([1, 1.5], [f(1), f(1)], 'k--')
plt.plot([1.5, 1.5], [f(1), f(1.5)], 'k--')
plt.plot([1, 1.5], [f(1), f(1.5)], 'k--')
plt.annotate('$P$', (0.8, 1))
plt.annotate('$Q$', (1.3, f(1.5)))
plt.annotate('$\Delta{x}$', (1.25, 0.75))
plt.annotate('$\Delta{f}$', (1.55, 1.5))
Explanation: As we approach $x = 0$ it will oscillate into infinity. There is no left or right limit in this case.
rate of change
Let's start with the question of what is a derivative? We'll look at a few different aspects:
Geometric interpretation
Physical interpretation
Importance to measurements
We'll start with the geometric interpretation.
geometric interpretation
Find the tangent line to the graph of some function $y = f(x)$ at some point $P = (x_0, y_0)$. We also know this line can be written as the equation $y - y_0 = m(x - x_0)$. In order to figure out this equation we need to know two things, point $P$ which is $(x_0, y_0)$ where $y_0 = f(x_0)$ and the value of $m$ which is the slope of the line. In calculus we also call this the derivative or $f'(x)$.
End of explanation
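A quick numerical illustration of the secant slopes approaching the tangent slope, using the same $f(x) = x^2$ and $x_0 = 1$ as in the figure above:

def secant_slope(func, x0, dx):
    # slope of the secant line through (x0, f(x0)) and (x0 + dx, f(x0 + dx))
    return (func(x0 + dx) - func(x0)) / dx

g = lambda x: x**2
for dx in (0.5, 0.1, 0.01, 0.001):
    print(dx, secant_slope(g, 1.0, dx))   # approaches the tangent slope f'(1) = 2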
f = lambda x: 1 / (1 + x**2)
x = np.linspace(-2, 2, 100)
y = f(x)
ax = plt.axes()
ax.set_ylim(0, 1.25)
plt.plot(x, y, label=r"$y = \frac{1}{1 + x^2}$")
plt.legend()
Explanation: We can now define: $f'(x_0)$, the derivative of $f$ at $x_0$, is the slope of the tangent line to $y = f(x)$ at the point $P$. The tangent line is equal to the limit of secant lines $PQ$ as $Q\to P$ where $P$ is fixed. In the picture above we can see that the slope of our secant line $PQ$ is simply defined as $\frac{\Delta{f}}{\Delta{x}}$. However we can now define the slope $m$ of our tangent line as
$$m = \lim_{\Delta{x}\to 0}\frac{\Delta{f}}{\Delta{x}}$$
The next thing we want to do is to write $\Delta{f}$ more explicitly. We already have $P = (x_0, f(x_0))$ and $Q = (x_0 + \Delta{x}, f(x_0 + \Delta{x}))$. With this information we can write down:
$$f'(x_0) = m = \lim_{\Delta{x}\to 0}\frac{f(x_0 + \Delta{x}) - f(x_0)}{\Delta{x}}$$
recital
Let $f(x) = \frac{1}{1 + x^2}$. Graph $y = f(x)$ and compute $f'(x)$.
End of explanation
f_acc = lambda x: (-2 * x) / ((1 + x**2)**2)
x = np.linspace(-2, 2, 100)
plt.plot(x, f_acc(x))
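# Symbolic double-check of the closed form plotted above (a sketch; sympy is not
# used elsewhere in this notebook).
import sympy as sp
xs = sp.symbols('x')
print(sp.simplify(sp.diff(1 / (1 + xs**2), xs)))   # -2*x/(x**2 + 1)**2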
Explanation: Now let's compute $f'(x)$.
$$
\begin{align}
f'(x) & = \lim_{\Delta{x}\to 0}\frac{f(x + \Delta{x}) - f(x)}{\Delta{x}} \
& = \lim_{\Delta{x}\to 0}\frac{\frac{1}{1 + (x + \Delta{x})^2} - \frac{1}{1 + x^2}}{\Delta{x}} \
& = \lim_{\Delta{x}\to 0}\frac{1}{\Delta{x}}\frac{1 + x^2 - (1 + (x + \Delta{x})^2)}{(1 + (x + \Delta{x})^2)(1 + x^2)} \
& = \lim_{\Delta{x}\to 0}\frac{1}{\Delta{x}}\frac{1 + x^2 -1 - x^2 - 2x\Delta{x} - \Delta{x}^2}{(1 + (x + \Delta{x})^2)(1 + x^2)} \
& = \lim_{\Delta{x}\to 0}\frac{-2x - \Delta{x}}{(1 + (x + \Delta{x})^2)(1 + x^2)} \
& = \frac{-2x}{(1 + x^2)^2}
\end{align}
$$
End of explanation |
6,920 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: TensorBoard Scalars
Step2: Set up data for a simple regression
You're now going to use Keras to calculate a regression, i.e., find the best line of fit for a paired data set. (While using neural networks and gradient descent is overkill for this kind of problem, it does make for a very easy to understand example.)
You're going to use TensorBoard to observe how training and test loss change across epochs. Hopefully, you'll see training and test loss decrease over time and then remain steady.
First, generate 1000 data points roughly along the line y = 0.5x + 2. Split these data points into training and test sets. Your hope is that the neural net learns this relationship.
Step3: Training the model and logging loss
You're now ready to define, train and evaluate your model.
To log the loss scalar as you train, you'll do the following
Step4: Examining loss using TensorBoard
Now, start TensorBoard, specifying the root log directory you used above.
Wait a few seconds for TensorBoard's UI to spin up.
Step5: <!-- <img class="tfo-display-only-on-site" src="https
Step7: Not bad!
Logging custom scalars
What if you want to log custom values, such as a dynamic learning rate? To do that, you need to use the TensorFlow Summary API.
Retrain the regression model and log a custom learning rate. Here's how
Step8: Let's look at TensorBoard again.
Step9: <!-- <img class="tfo-display-only-on-site" src="https | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
# Load the TensorBoard notebook extension.
%load_ext tensorboard
from datetime import datetime
from packaging import version
import tensorflow as tf
from tensorflow import keras
import numpy as np
print("TensorFlow version: ", tf.__version__)
assert version.parse(tf.__version__).release[0] >= 2, \
"This notebook requires TensorFlow 2.0 or above."
Explanation: TensorBoard Scalars: Logging training metrics in Keras
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tensorboard/scalars_and_keras"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorboard/blob/master/docs/scalars_and_keras.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/tensorboard/blob/master/docs/scalars_and_keras.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Overview
Machine learning invariably involves understanding key metrics such as loss and how they change as training progresses. These metrics can help you understand if you're overfitting, for example, or if you're unnecessarily training for too long. You may want to compare these metrics across different training runs to help debug and improve your model.
TensorBoard's Scalars Dashboard allows you to visualize these metrics using a simple API with very little effort. This tutorial presents very basic examples to help you learn how to use these APIs with TensorBoard when developing your Keras model. You will learn how to use the Keras TensorBoard callback and TensorFlow Summary APIs to visualize default and custom scalars.
Setup
End of explanation
data_size = 1000
# 80% of the data is for training.
train_pct = 0.8
train_size = int(data_size * train_pct)
# Create some input data between -1 and 1 and randomize it.
x = np.linspace(-1, 1, data_size)
np.random.shuffle(x)
# Generate the output data.
# y = 0.5x + 2 + noise
y = 0.5 * x + 2 + np.random.normal(0, 0.05, (data_size, ))
# Split into test and train pairs.
x_train, y_train = x[:train_size], y[:train_size]
x_test, y_test = x[train_size:], y[train_size:]
Explanation: Set up data for a simple regression
You're now going to use Keras to calculate a regression, i.e., find the best line of fit for a paired data set. (While using neural networks and gradient descent is overkill for this kind of problem, it does make for a very easy to understand example.)
You're going to use TensorBoard to observe how training and test loss change across epochs. Hopefully, you'll see training and test loss decrease over time and then remain steady.
First, generate 1000 data points roughly along the line y = 0.5x + 2. Split these data points into training and test sets. Your hope is that the neural net learns this relationship.
End of explanation
logdir = "logs/scalars/" + datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)
model = keras.models.Sequential([
keras.layers.Dense(16, input_dim=1),
keras.layers.Dense(1),
])
model.compile(
loss='mse', # keras.losses.mean_squared_error
optimizer=keras.optimizers.SGD(learning_rate=0.2),
)
print("Training ... With default parameters, this takes less than 10 seconds.")
training_history = model.fit(
x_train, # input
y_train, # output
batch_size=train_size,
verbose=0, # Suppress chatty output; use Tensorboard instead
epochs=100,
validation_data=(x_test, y_test),
callbacks=[tensorboard_callback],
)
print("Average test loss: ", np.average(training_history.history['loss']))
Explanation: Training the model and logging loss
You're now ready to define, train and evaluate your model.
To log the loss scalar as you train, you'll do the following:
Create the Keras TensorBoard callback
Specify a log directory
Pass the TensorBoard callback to Keras' Model.fit().
TensorBoard reads log data from the log directory hierarchy. In this notebook, the root log directory is logs/scalars, suffixed by a timestamped subdirectory. The timestamped subdirectory enables you to easily identify and select training runs as you use TensorBoard and iterate on your model.
End of explanation
%tensorboard --logdir logs/scalars
Explanation: Examining loss using TensorBoard
Now, start TensorBoard, specifying the root log directory you used above.
Wait a few seconds for TensorBoard's UI to spin up.
End of explanation
print(model.predict([60, 25, 2]))
# True values to compare predictions against:
# [[32.0]
# [14.5]
# [ 3.0]]
Explanation: <!-- <img class="tfo-display-only-on-site" src="https://github.com/tensorflow/tensorboard/blob/master/docs/images/scalars_loss.png?raw=1"/> -->
You may see TensorBoard display the message "No dashboards are active for the current data set". That's because initial logging data hasn't been saved yet. As training progresses, the Keras model will start logging data. TensorBoard will periodically refresh and show you your scalar metrics. If you're impatient, you can tap the Refresh arrow at the top right.
As you watch the training progress, note how both training and validation loss rapidly decrease, and then remain stable. In fact, you could have stopped training after 25 epochs, because the training didn't improve much after that point.
Hover over the graph to see specific data points. You can also try zooming in with your mouse, or selecting part of them to view more detail.
Notice the "Runs" selector on the left. A "run" represents a set of logs from a round of training, in this case the result of Model.fit(). Developers typically have many, many runs, as they experiment and develop their model over time.
Use the Runs selector to choose specific runs, or choose from only training or validation. Comparing runs will help you evaluate which version of your code is solving your problem better.
Ok, TensorBoard's loss graph demonstrates that the loss consistently decreased for both training and validation and then stabilized. That means that the model's metrics are likely very good! Now see how the model actually behaves in real life.
Given the input data (60, 25, 2), the line y = 0.5x + 2 should yield (32, 14.5, 3). Does the model agree?
End of explanation
logdir = "logs/scalars/" + datetime.now().strftime("%Y%m%d-%H%M%S")
file_writer = tf.summary.create_file_writer(logdir + "/metrics")
file_writer.set_as_default()
def lr_schedule(epoch):
  """Returns a custom learning rate that decreases as epochs progress."""
learning_rate = 0.2
if epoch > 10:
learning_rate = 0.02
if epoch > 20:
learning_rate = 0.01
if epoch > 50:
learning_rate = 0.005
tf.summary.scalar('learning rate', data=learning_rate, step=epoch)
return learning_rate
lr_callback = keras.callbacks.LearningRateScheduler(lr_schedule)
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)
model = keras.models.Sequential([
keras.layers.Dense(16, input_dim=1),
keras.layers.Dense(1),
])
model.compile(
loss='mse', # keras.losses.mean_squared_error
optimizer=keras.optimizers.SGD(),
)
training_history = model.fit(
x_train, # input
y_train, # output
batch_size=train_size,
verbose=0, # Suppress chatty output; use Tensorboard instead
epochs=100,
validation_data=(x_test, y_test),
callbacks=[tensorboard_callback, lr_callback],
)
Explanation: Not bad!
Logging custom scalars
What if you want to log custom values, such as a dynamic learning rate? To do that, you need to use the TensorFlow Summary API.
Retrain the regression model and log a custom learning rate. Here's how:
Create a file writer, using tf.summary.create_file_writer().
Define a custom learning rate function. This will be passed to the Keras LearningRateScheduler callback.
Inside the learning rate function, use tf.summary.scalar() to log the custom learning rate.
Pass the LearningRateScheduler callback to Model.fit().
In general, to log a custom scalar, you need to use tf.summary.scalar() with a file writer. The file writer is responsible for writing data for this run to the specified directory and is implicitly used when you use the tf.summary.scalar().
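A minimal standalone sketch of that pattern (added for illustration; the directory name and metric name below are arbitrary choices):
import tensorflow as tf
writer = tf.summary.create_file_writer("logs/scalars/manual")
with writer.as_default():                       # route tf.summary calls to this writer
    for step in range(5):
        tf.summary.scalar("my_metric", data=0.1 * step, step=step)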
End of explanation
%tensorboard --logdir logs/scalars
Explanation: Let's look at TensorBoard again.
End of explanation
print(model.predict([60, 25, 2]))
# True values to compare predictions against:
# [[32.0]
# [14.5]
# [ 3.0]]
Explanation: <!-- <img class="tfo-display-only-on-site" src="https://github.com/tensorflow/tensorboard/blob/master/docs/images/scalars_custom_lr.png?raw=1"/> -->
Using the "Runs" selector on the left, notice that you have a <timestamp>/metrics run. Selecting this run displays a "learning rate" graph that allows you to verify the progression of the learning rate during this run.
You can also compare this run's training and validation loss curves against your earlier runs.
You might also notice that the learning rate schedule returned discrete values, depending on epoch, but the learning rate plot may appear smooth. TensorBoard has a smoothing parameter that you may need to turn down to zero to see the unsmoothed values.
How does this model do?
End of explanation |
6,921 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
4.a Plot type - xy
Our first plot example is a simple xy-plot and the graphics output format is PNG.
Step1: To use Numpy arrays we need to import the module.
Define x- and y-values
Step2: Hm, we created the plot but where is it? Unlike matplotlib PyNGL can't display inline plots but IPython provides a solution for us.
retina=True --> half of the size of the plot
Step3: That's really sparse. Next, we want to add a title above the plot and add the axis titles, too.
To do that we use the plot resources and add the resources to the plot function.
NOTICE
Step4: Display the grid lines of the coordinate system, too.
Step5: Change the line settings
line color from black to red
line pattern from solid pattern to dashed pattern
line width thicker
Step6: To display two datasets in one xy-plot.
Step7: Uh, that's not what we want! We want to have two lines with different colors, lets say red and blue. And while we're at it, they should have two different dash pattern types.
Step8: NOTICE
Step9: We can distinguish them now but don't know which line is y1, y2 or y3. It is always good to have legend and that's what we want to do next.
Step10: That is the default. Doesn't look very nice, does it? So, we should fix it up a bit.
use the correct dataset names
make the legend smaller
move it to the upper left inside the plot
Step11: The next example shows how to read a dataset from file.
Dataset
Step12: The variable time is x and the variable tsurf is y.
Note, that we have to use tsurf.values because Ngl.xy needs to get a numpy array.
Step13: Hm, I would like to have the x-axis labels as dates and not as indices.
Convert the time values to date strings using the Python module datetime. | Python Code:
import Ngl
wks = Ngl.open_wks('png', 'plot_xy')
Explanation: 4.a Plot type - xy
Our first plot example is a simple xy-plot and the graphics output format is PNG.
End of explanation
import numpy as np
x = np.arange(0,5)
y = np.arange(0,10,2)
plot = Ngl.xy(wks, x, y)
Explanation: To use Numpy arrays we need to import the module.
Define x- and y-values:
End of explanation
from IPython.display import Image
Image(filename='plot_xy.png', retina=True)
Explanation: Hm, we created the plot but where is it? Unlike matplotlib PyNGL can't display inline plots but IPython provides a solution for us.
retina=True --> half of the size of the plot
End of explanation
Ngl.delete_wks(wks)
wks = Ngl.open_wks('png', 'plot_xy.png')
res = Ngl.Resources()
res.tiMainString = 'This is the title string'
res.tiXAxisString = 'x-axis title string'
res.tiYAxisString = 'y-axis title string'
plot = Ngl.xy(wks, x, y, res)
Image(filename='plot_xy.png', retina=True)
Explanation: That's really sparse. Next, we want to add a title above the plot and add the axis titles, too.
To do that we use the plot resources and add the resources to the plot function.
NOTICE:
The first plot call created a file called plot_xy.png. If we call plot again it will create additional files with the names plot_xy.000001.png, plot_xy.000002.png, and so on. The first one is the plot above, the second will be the plot below. If we make changes it will keep increasing the number of plots on disk, and it is hard to display the correct plot in the notebook. That's why we delete the workstation and create just one single plot.
If you use a script and run it at a terminal the workstation will be closed when the script exits, and you can rerun it without producing multiple files.
<br>
End of explanation
Ngl.delete_wks(wks)
wks = Ngl.open_wks('png', 'plot_xy.png')
res = Ngl.Resources()
res.tiMainString = 'This is the title string'
res.tiXAxisString = 'x-axis title string'
res.tiYAxisString = 'y-axis title string'
res.tmXMajorGrid = True
res.tmYMajorGrid = True
plot = Ngl.xy(wks, x, y, res)
Image(filename='plot_xy.png', retina=True)
Explanation: Display the grid lines of the coordinate system, too.
End of explanation
Ngl.delete_wks(wks)
wks = Ngl.open_wks('png', 'plot_xy.png')
res = Ngl.Resources()
res.tiMainString = 'This is the title string'
res.tiXAxisString = 'x-axis title string'
res.tiYAxisString = 'y-axis title string'
res.tmXMajorGrid = True
res.tmYMajorGrid = True
res.xyLineColor = 'red'
res.xyDashPattern = 3 # -- - -- - --
res.xyLineThicknessF = 5
plot = Ngl.xy(wks, x, y, res)
Image(filename='plot_xy.png', retina=True)
Explanation: Change the line settings
line color from black to red
line pattern from solid pattern to dashed pattern
line width thicker
End of explanation
y1 = np.array([0,3,6,1,4])
y2 = np.array([2,5,3,2,7])
data = np.array([y1,y2])
Ngl.delete_wks(wks)
wks = Ngl.open_wks('png', 'plot_xy.png')
res = Ngl.Resources()
res.tiMainString = 'This is the title string'
plot = Ngl.xy(wks, x, data, res)
Image(filename='plot_xy.png', retina=True)
Explanation: To display two datasets in one xy-plot.
End of explanation
Ngl.delete_wks(wks)
wks = Ngl.open_wks('png', 'plot_xy.png')
res = Ngl.Resources()
res.tiMainString = 'This is the title string'
res.xyLineColors = ['red','blue']
res.xyDashPatterns = [3,16]
res.xyLineThicknesses = [5,3]
plot = Ngl.xy(wks, x, data, res)
Image(filename='plot_xy.png', retina=True)
Explanation: Uh, that's not what we want! We want to have two lines with different colors, let's say red and blue. And while we're at it, they should have two different dash pattern types.
End of explanation
y1 = np.array([0,3,6,1,4])
y2 = np.array([2,5,3,2,7])
y3 = np.array([1,1,2,3,5])
data = np.array([y1,y2,y3])
Ngl.delete_wks(wks)
wks = Ngl.open_wks('png', 'plot_xy.png')
res = Ngl.Resources()
res.tiMainString = 'This is the title string'
res.xyLineColors = ['red','blue','green']
res.xyDashPatterns = [3,16,0]
res.xyLineThicknesses = [5,3,5]
plot = Ngl.xy(wks, x, data, res)
Image(filename='plot_xy.png', retina=True)
Explanation: NOTICE:
The resources used for more than one color, line thickness and line dash pattern are the plural forms of the single-line resources.
Set the same color, dash pattern and thickness for one or multiple lines:
res.xyLineColor
res.xyDashPattern
res.xyLineThicknessF
Set different colors, dash pattern and thickness for each line:
res.xyLineColors
res.xyDashPatterns
res.xyLineThicknesses
End of explanation
Ngl.delete_wks(wks)
wks = Ngl.open_wks('png', 'plot_xy.png')
res = Ngl.Resources()
res.tiMainString = 'This is the title string'
res.xyLineColors = ['red','blue','green']
res.xyDashPatterns = [3,16,0]
res.xyLineThicknesses = [5,3,5]
res.pmLegendDisplayMode = "Always" #-- turn on the drawing
plot = Ngl.xy(wks, x, data, res)
Image(filename='plot_xy.png', retina=True)
Explanation: We can distinguish them now but don't know which line is y1, y2 or y3. It is always good to have a legend, and that's what we want to do next.
End of explanation
Ngl.delete_wks(wks)
wks = Ngl.open_wks('png', 'plot_xy.png')
res = Ngl.Resources()
res.tiMainString = 'This is the title string'
res.xyLineColors = ['red','blue','green']
res.xyDashPatterns = [3,16,0]
res.xyLineThicknesses = [5,3,5]
res.xyExplicitLegendLabels = ["y1","y2","y3"]
res.pmLegendDisplayMode = "Always" #-- turn on the legend drawing
res.pmLegendOrthogonalPosF = -1.0 #-- move the legend upward
res.pmLegendParallelPosF = 0.17 #-- move the legend rightward
res.pmLegendWidthF = 0.15 #-- increase width
res.pmLegendHeightF = 0.10 #-- increase height
res.lgPerimOn = False #-- turn off the perimeter
plot = Ngl.xy(wks, x, data, res)
Image(filename='plot_xy.png', retina=True)
Explanation: That is the default. Doesn't look very nice, does it? So, we should fix it up a bit.
use the correct dataset names
make the legend smaller
move it to the upper left inside the plot
End of explanation
import xarray as xr
ds = xr.open_dataset('./data/tsurf_fldmean.nc')
tsurf = ds.tsurf
time = np.arange(0,len(ds.time),1)
Explanation: The next example shows how to read a dataset from file.
Dataset: ./data/tsurf_fldmean.nc
Variable: tsurf
End of explanation
Ngl.delete_wks(wks)
wks = Ngl.open_wks('png', 'plot_xy_tsurf.png')
res = Ngl.Resources()
res.tiMainString = 'Variable tsurf'
res.tiXAxisString = 'time'
res.tiYAxisString = tsurf.long_name
plot = Ngl.xy(wks, time, tsurf[:,0,0].values, res)
Image(filename='plot_xy_tsurf.png', retina=True)
Explanation: The variable time is x and the variable tsurf is y.
Note, that we have to use tsurf.values because Ngl.xy needs to get a numpy array.
End of explanation
import datetime
ntime = len(ds.time)
years = ds.time.dt.year.values
months = ds.time.dt.month.values
days = ds.time.dt.day.values
date_labels = [datetime.date(years[i],months[i],days[i]) for i in range(0,ntime)]
date_labels = list(np.array(date_labels,dtype='str'))
Ngl.delete_wks(wks)
wks = Ngl.open_wks('png', 'plot_xy_tsurf.png')
res = Ngl.Resources()
res.tiMainString = 'Variable tsurf'
res.tiXAxisString = 'time'
res.tiYAxisString = tsurf.long_name
res.tmXBMode = 'Explicit' #-- use explicit values
res.tmXBValues = time[::4] #-- use the new x-values array
res.tmXBLabels = date_labels[::4] #-- use the new x-values array as labels
res.tmXBLabelFontHeightF = 0.008
res.tmXBLabelAngleF = 45
res.tmXBMinorOn = False #-- turn off minor tickmark
plot = Ngl.xy(wks, time, tsurf[:,0,0].values, res)
Image(filename='plot_xy_tsurf.png', retina=True)
Explanation: Hm, I would like to have the x-axis labels as dates and not as indices.
Convert the time values to date strings using the Python module datetime.
End of explanation |
6,922 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>PDBbind Database</h1>
Step1: Download a dataset from PDBbind and unpack (I used core-set 2016).
Step2: We will use the pdbbind class.
Step3: You can get one target or iterate over all of them. To do it you can use the PDB ID of a target or an index from the list in the INDEX file (INDEX_refined_data.2016).
The class has two properties
Step4: Let's choose one target.
Step5: You can always check PDB ID.
Step6: Target has three properties
Step7: If you want to check activity, you can use the sets dict. | Python Code:
from __future__ import print_function, division, unicode_literals
import oddt
from oddt.datasets import pdbbind
oddt.toolkit.image_size = (400, 400)
print(oddt.__version__)
Explanation: <h1>PDBbind Database</h1>
End of explanation
%%bash
wget -qO- http://www.pdbbind.org.cn/download/pdbbind_v2016_core.tar.gz | tar xz
directory = './core-set/'
Explanation: Download a dataset from PDBbind and unpack (I used core-set 2016).
End of explanation
pdbbind_database = pdbbind(home=directory,
version='2016',
default_set='core') # Available sets in wrapper: core, refined, general_PL (general for 2007)
Explanation: We will use the pdbbind class.
End of explanation
all_ids = pdbbind_database.ids
print('Number of targets:', len(all_ids))
print('First ten targets:', all_ids[:10])
all_activities = pdbbind_database.activities
print('First ten activities:', all_activities[:10])
Explanation: You can get one target or iterate over all of them. To do it you can use the PDB ID of a target or an index from the list in the INDEX file (INDEX_refined_data.2016).
The class has two properties: ids and activities.
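For example, a small illustrative sketch (it assumes the two lists are index-aligned, which the shared INDEX file suggests):
for pdb_id, activity in list(zip(pdbbind_database.ids, pdbbind_database.activities))[:5]:
    print(pdb_id, activity)  # PDB code and its tabulated binding activity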
End of explanation
target = pdbbind_database[0]
Explanation: Let's choose one target.
End of explanation
target.id
Explanation: You can always check PDB ID.
End of explanation
max_atoms = 0
for target in pdbbind_database:
if max_atoms < len(target.ligand.atoms):
max_atoms = len(target.ligand.atoms)
largest = target
print('Target ID:', largest.id, '\nNumber of atoms:', max_atoms)
largest_ligand = largest.ligand
largest_ligand.removeh()
largest_ligand
Explanation: Target has three properties: protein, pocket and ligand. All of them are instances of the oddt.toolkit.Molecule class.
Let's find the largest ligand.
End of explanation
pdbbind_database.sets['core'][largest.id]
Explanation: If you want to check activity, you can use the sets dict.
End of explanation |
6,923 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Phoenix BT-Settl Bolometric Corrections
Figuring out the best method of handling Phoenix bolometric correction files.
Step1: Change to directory containing bolometric correction files.
Step2: Load a bolometric correction table, say for the Cousins AB photometric system.
Step3: Now, the structure of the file is quite irregular. The grid is not rectangular, which is not an immediate problem. The table is strucutred such that column 0 contains Teff in increasing order, followed by logg in column 1 in increasing order. However, metallicities in column 2 appear to be in decreasing order, which may be a problem for simple interpolation routines. Alpha abundances follow and are in increasing order, but since this is a "standard" grid, whereby alpha enrichment is a function of metallicity, we can ignore it for the moment.
Let's take a first swing at the problem by using the LinearND Interpolator from SciPy.
Step4: The surface compiled, but that is not a guarantee that the interpolation will work successfully. Some tests are required to confirm this is the case. Let's try a few Teffs at logg = 5 with solar metallicity.
Step5: This agrees with data in the bolometric correction table.
Teff logg [Fe/H] [a/Fe] B V R I
1500.00 5.00 0.00 0.00 -15.557 -16.084 -11.560 -9.291
Now, let's raise the temperature.
Step6: Again, we have a good match to tabulated values,
Teff logg [Fe/H] [a/Fe] B V R I
3000.00 5.00 0.00 0.00 -6.603 -5.641 -4.566 -3.273
However, since we are using a tabulated metallicity, the interpolation may proceed without too much trouble. If we select a metallicity between grid points, how do we fare?
Step7: This appears consistent. What about progressing to lower metallicity values?
Step8: For reference, at [Fe/H] = $-0.5$ dex, we have
Teff logg [Fe/H] [a/Fe] B V R I
3000.00 5.00 -0.50 0.20 -6.533 -5.496 -4.424 -3.154
The interpolation routine has seemingly handled the non-monotonic nature of the metallicity column, as all interpolated values lie between the values at the two respective nodes.
Now let's import an isochrone and calculate colors for stellar models for comparison against MARCS bolometric corrections.
Step9: Make sure there are magnitudes and colors associated with this isochrone.
Step10: A standard isochrone would only have 6 columns, so 11 indicates this isochrone does have photometric magnitudes computed, likely BV(Ic) (JK)2MASS.
Step11: For each Teff and logg combination we now have BCs for BV(RI)c from BT-Settl models. Now we need to convert the bolometric corrections to absolute magnitudes.
Step12: Let's try something different
Step13: Create an interpolation surface from the magnitude table.
Step14: Compute magnitudes for a Dartmouth isochrone.
Step15: Convert surface magnitudes to absolute magnitudes using the distance modulus and the radius of the star.
Step16: Now compare against MARCS values.
Step17: Load an isochrone from the Lyon-Phoenix series.
Step18: Export a new isochrone with colors from AGSS09 (PHX)
Step19: Separate Test Case
These are clearly not correct and are between 1 and 2 magnitudes off from expected values. Need to reproduce the Phoenix group's results, first. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.interpolate as scint
Explanation: Phoenix BT-Settl Bolometric Corrections
Figuring out the best method of handling Phoenix bolometric correction files.
End of explanation
cd /Users/grefe950/Projects/starspot/starspot/color/tab/phx/CIFIST15/
Explanation: Change to directory containing bolometric correction files.
End of explanation
bc_table = np.genfromtxt('colmag.BT-Settl.server.JOHNSON.Vega', comments='!')
Explanation: Load a bolometric correction table, say for the Cousins AB photometric system.
End of explanation
test_surface = scint.LinearNDInterpolator(bc_table[:, :2], bc_table[:, 4:])
Explanation: Now, the structure of the file is quite irregular. The grid is not rectangular, which is not an immediate problem. The table is structured such that column 0 contains Teff in increasing order, followed by logg in column 1 in increasing order. However, metallicities in column 2 appear to be in decreasing order, which may be a problem for simple interpolation routines. Alpha abundances follow and are in increasing order, but since this is a "standard" grid, whereby alpha enrichment is a function of metallicity, we can ignore it for the moment.
Let's take a first swing at the problem by using the LinearND Interpolator from SciPy.
End of explanation
test_surface(np.array([1500., 5.0]))
Explanation: The surface compiled, but that is not a guarantee that the interpolation will work successfully. Some tests are required to confirm this is the case. Let's try a few Teffs at logg = 5 with solar metallicity.
End of explanation
test_surface(np.array([3000., 5.0]))
Explanation: This agrees with data in the bolometric correction table.
Teff logg [Fe/H] [a/Fe] B V R I
1500.00 5.00 0.00 0.00 -15.557 -16.084 -11.560 -9.291
Now, let's raise the temperature.
End of explanation
test_surface(np.array([3000., 5.0]))
Explanation: Again, we have a good match to tabulated values,
Teff logg [Fe/H] [a/Fe] B V R I
3000.00 5.00 0.00 0.00 -6.603 -5.641 -4.566 -3.273
However, since we are using a tabulated metallicity, the interpolation may proceed without too much trouble. If we select a metallicity between grid points, how do we fare?
End of explanation
test_surface(np.array([3000., 5.0]))
Explanation: This appears consistent. What about progressing to lower metallicity values?
End of explanation
iso = np.genfromtxt('/Users/grefe950/evolve/dmestar/iso/dmestar_00120.0myr_z+0.00_a+0.00_marcs.iso')
Explanation: For reference, at [Fe/H] = $-0.5$ dex, we have
Teff logg [Fe/H] [a/Fe] B V R I
3000.00 5.00 -0.50 0.20 -6.533 -5.496 -4.424 -3.154
The interpolation routine has seemingly handled the non-monotonic nature of the metallicity column, as all interpolated values lie between the values at the two respective nodes.
Now let's import an isochrone and calculate colors for stellar models for comparison against MARCS bolometric corrections.
End of explanation
iso.shape
Explanation: Make sure there are magnitudes and colors associated with this isochrone.
End of explanation
test_bcs = test_surface(10**iso[:,1], iso[:, 2])
test_bcs.shape
Explanation: A standard isochrone would only have 6 columns, so 11 indicates this isochrone does have photometric magnitudes computed, likely BV(Ic) (JK)2MASS.
End of explanation
bol_mags = 4.74 - 2.5*iso[:, 3]
for i in range(test_bcs.shape[1]):
bcs = -1.0*np.log10(10**iso[:, 1]/5777.) + test_bcs[:, i] - 5.0*iso[:, 4]
if i == 0:
test_mags = bol_mags - bcs
else:
test_mags = np.column_stack((test_mags, bol_mags - bcs))
iso[50, 0:4], iso[50, 6:], test_mags[50]
Explanation: For each Teff and logg combination we now have BCs for BV(RI)c from BT-Settl models. Now we need to convert the bolometric corrections to absolute magnitudes.
End of explanation
col_table = np.genfromtxt('colmag.BT-Settl.server.COUSINS.Vega', comments='!')
Explanation: Let's try something different: using the color tables provided by the Phoenix group, from which the bolometric corrections are calculated.
End of explanation
col_surface = scint.LinearNDInterpolator(col_table[:, :2], col_table[:, 4:8])
Explanation: Create an interpolation surface from the magnitude table.
End of explanation
phx_mags = col_surface(10.0**iso[:, 1], iso[:, 2])
Explanation: Compute magnitudes for a Dartmouth isochrone.
End of explanation
for i in range(phx_mags.shape[1]):
phx_mags[:, i] = phx_mags[:, i] - 5.0*np.log10(10**iso[:, 4]*6.956e10/3.086e18) + 5.0
Explanation: Convert surface magnitudes to absolute magnitudes using the distance modulus and the radius of the star.
End of explanation
iso[40, :5], iso[40, 6:], phx_mags[40]
Explanation: Now compare against MARCS values.
End of explanation
phx_iso = np.genfromtxt('/Users/grefe950/Notebook/Projects/ngc2516_spots/data/phx_isochrone_120myr.txt')
fig, ax = plt.subplots(1, 2, figsize=(12., 8.), sharey=True)
ax[0].set_xlim(0.0, 2.0)
ax[1].set_xlim(0.0, 4.0)
ax[0].set_ylim(16, 2)
ax[0].plot(iso[:, 6] - iso[:, 7], iso[:, 7], lw=3, c="#b22222")
ax[0].plot(phx_mags[:, 0] - phx_mags[:, 1], phx_mags[:, 1], lw=3, c="#1e90ff")
ax[0].plot(phx_iso[:, 7] - phx_iso[:, 8], phx_iso[:, 8], dashes=(20., 5.), lw=3, c="#555555")
ax[1].plot(iso[:, 7] - iso[:, 8], iso[:, 7], lw=3, c="#b22222")
ax[1].plot(phx_mags[:, 1] - phx_mags[:, 3], phx_mags[:, 1], lw=3, c="#1e90ff")
ax[1].plot(phx_iso[:, 8] - phx_iso[:, 10], phx_iso[:, 8], dashes=(20., 5.), lw=3, c="#555555")
Explanation: Load an isochrone from the Lyon-Phoenix series.
End of explanation
new_isochrone = np.column_stack((iso[:, :6], phx_mags))
np.savetxt('/Users/grefe950/Notebook/Projects/pleiades_colors/data/dmestar_00120.0myr_z+0.00_a+0.00_mixed.iso',
new_isochrone, fmt='%16.8f')
Explanation: Export a new isochrone with colors from AGSS09 (PHX)
End of explanation
tmp = -10.*np.log10(3681./5777.) + test_surface(3681., 4.78, 0.0) #+ 5.0*np.log10(0.477)
tmp
4.74 - 2.5*(-1.44) - tmp
Explanation: Separate Test Case
These are clearly not correct and are between 1 and 2 magnitudes off from expected values. Need to reproduce the Phoenix group's results, first.
End of explanation |
6,924 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Seismic NMO Widget
Using the Notebook
This is the <a href="https
Step1: Two common-mid-point (CMP) gathers
Step2: Step 2
Step3: Step 3
Step4: Step 4 | Python Code:
%pylab inline
from geoscilabs.seismic.NMOwidget import ViewWiggle, InteractClean, InteractNosiy, NMOstackthree
from SimPEG.utils import download
# Define path to required data files
synDataFilePath = 'http://github.com/geoscixyz/geosci-labs/raw/master/assets/seismic/syndata1.npy'
obsDataFilePath = 'https://github.com/geoscixyz/geosci-labs/raw/master/assets/seismic/obsdata1.npy'
timeFilePath= 'https://github.com/geoscixyz/geosci-labs/raw/master/assets/seismic/time1.npy'
# Download the data
synData = download(synDataFilePath,overwrite=True,verbose=False)
obsData = download(obsDataFilePath,overwrite=True,verbose=False)
timeData = download(timeFilePath,overwrite=True,verbose=False)
Explanation: Seismic NMO Widget
Using the Notebook
This is the <a href="https://jupyter.org/">Jupyter Notebook</a>, an interactive coding and computation environment. For this lab, you do not have to write any code, you will only be running it.
To use the notebook:
- "Shift + Enter" runs the code within the cell (so does the forward arrow button near the top of the document)
- You can alter variables and re-run cells
- If you want to start with a clean slate, restart the Kernel either by going to the top, clicking on Kernel: Restart, or by "esc + 00" (if you do this, you will need to re-run Step 0 before running any other cells in the notebook)
Instructions as to how to set up Python and the iPython notebook on your personal computer are attached in the appendix of the lab
Step 0: Import Necessary Packages
End of explanation
# Plot the data
ViewWiggle(synData, obsData)
Explanation: Two common-mid-point (CMP) gathers: Clean and Noisy
We have two CMP gathers generated from different geologic models. One data set is clean and the other is contaminated with noise. The seismic data were adapted from SeismicLab (http://seismic-lab.physics.ualberta.ca/).
In this notebook, we will walk through how to construct a normal incidence seismogram from these data sets.
We will do this in the following steps:
- Plot the data
- Fit a hyperbola to the reflection event in the data
- Perform the NMO correction and stack
Step 1: Plot the data
As you can see from the clean CMP gather, you can recognize that we have only one reflector, meaning there is a single interface separating two geologic units visible in these data.
(Note: The direct and any refracted arrivals have been removed).
It is difficult to distinguish any reflectors in the noisy data. However, there is a single reflector in these data, and we will perform normal moveout (NMO) and stacking operations to construct a normal-incidence seismogram where this reflector is visible.
End of explanation
# Fit hyperbola to clean data
clean = InteractClean(synData,timeData)
clean
Explanation: Step 2: Fit A Hyperbola to the Data
Each reflection event in a CMP gather has a travel time that corresponds to a hyperbola:
$$ t(x) = \sqrt{\frac{x^2}{v^2_{stacking}} + t_0^2}$$
where $x$ is offset between source and receiver, $v_{stacking}$ is stacking velocity, and $t_0$ is the intercept time:
$$ t_0 = \sqrt{\frac{4d^2}{v^2_{stacking}}}$$
where $d$ is the thickness of the first layer.
For each reflection event hyperbola, perform a velocity analysis to find $v_{stacking}$. This is done by first choosing $t_0$. Then choose a trial value of velocity. <img src="http://www.eos.ubc.ca/courses/eosc350/content/methods/meth_10d/assets/kearey_fig4_21.gif"></img>
Calculate the Normal Moveout Correction: Using the hyperbola corresponding to $v_{stacking}$, compute the normal moveout for each trace and then adjust the reflection time by the amount $\triangle T$: $$ \triangle T = t_0 - t(x) $$ <img src="http://www.eos.ubc.ca/courses/eosc350/content/methods/meth_10d/assets/ch1_fig8.gif"></img>
Estimate $t_0$ and a plausible $v_{stack}$ by altering t0 and v using the widget below. This hyperbola will be drawn as a red hyperbola on the middle panel. On the right panel we apply the stack with the velocity that you fit, and provide the stacked trace.
Parameters of the below widget to fit observed reflection event are:
t0: intercept time of the hyperbola
v: velocity of the hyperbola
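A small added sketch (values are illustrative only) showing how the traveltime hyperbola above is evaluated for a trial t0 and v:
import numpy as np
t0 = 0.2                            # trial intercept time in seconds (illustrative)
v = 1500.0                          # trial stacking velocity in m/s (illustrative)
x = np.linspace(0., 1000., 5)       # source-receiver offsets in meters
t = np.sqrt(x**2 / v**2 + t0**2)    # t(x) from the hyperbola equation above
print(t)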
End of explanation
noisy = InteractNosiy(obsData,timeData)
noisy
Explanation: Step 3: Applying NMO correction to the Noisy Data
Compared to the previous data set, this one is quite noisy. There is a reflector in the data, and your goal is to construct a stacked trace where this reflection is visible.
Estimate $t_0$ and a plausible $v_{stack}$ by altering t0 and v using the widget below. This hyperbola will be drawn as a red hyperbola on the middle panel. On the right panel we apply the stack with the velocity that you fit, and provide the stacked trace.
End of explanation
NMOstackthree(obsData, noisy.kwargs["t0"], noisy.kwargs["v"]-200., noisy.kwargs["v"], noisy.kwargs["v"]+200.,timeData)
Explanation: Step 4: Apply CMP stack with estimated $v_{stack}$ (For noisy CMP gather)
In the previous step, you chose an intercept time (t0) and a stacking velocity (v). Running the cell below will generate three stacked traces:
- Left: using t0 and v-200 m/s that we fit from Step 3
- Middle: using t0 and v that we fit from Step 3
- Right: using t0 and v+200 m/s that we fit from Step 3
End of explanation |
6,925 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
XML
The Extensible Markup Language (XML) is a format and a metalanguage for (primarily) hierarchical languages. Since XML is used in several other courses, I will not go into detail here, but only briefly show how XML can be processed and generated with Python.
XML libraries in Python
The XML libraries available in the standard library live in the xml package. They are
Step1: Then we read in the file and assign the resulting ElementTree to the variable tree
Step2: To navigate the tree, we need its root
Step3: Then we can take a closer look at root
Step4: The Element object
root is therefore an Element object. It has three important properties
Step5: So our root element contains no real text; we will look at this property later on another element.
Step6: As we can see, no attribute is defined for the recipes element, so the attrib property is an empty dictionary.
Child elements
From any element we can access its child elements. The getchildren() method returns a list of Element objects. If the element has no child elements, the method returns an empty list.
Step7: We see that the recipes element contains 6 recipe elements.
Since Element objects are iterable, we can simply use the in operator instead of getchildren() to access one child element after another
Step8: Let us now look at the first recipe more closely by setting a reference to the first child element, starting from the root element. Conveniently, we can use an index number for this, just like with a list
Step9: recipe1 also has attributes, which we can access via the attrib property
Step10: To access the value of a specific attribute, Element provides the get method
Step11: Accessing a non-existent attribute makes get() return None
Step12: Accessing specific elements
find()
Let us take a look at the child elements of recipe1
Step13: So the element has four child elements. If we want to access a specific child element, we can use the find(<tag_to_search>) method. It returns the first immediate child element with the given tag name
Step14: We can also access the text property of the found element directly
Step15: With this we can already get an overview of the available recipes
Step16: findall()
While find() only returns the first matching child element, findall() finds all direct child elements with a given tag.
Step17: Since there is only one title element per recipe, we get a list with just one entry. It gets more interesting when we apply findall() to the ingredients element
Step18: So ingredients has 4 child elements of type ingredient.
Instead of just printing the elements, we can use this to build a list of all ingredients
Step19: iter()
The methods find() and findall() only search the direct child elements. If we use them from the recipe1 element to search for ingredient, nothing is found, because the ingredients element lies in between
Step20: If we want to search deeper than one level, we need the iter() method. It returns an iterator that yields one element after another
Step21: With this we can also print all ingredients for all recipes very easily, because iter() finds all elements arbitrarily deep in the structure
Step22: XPath
XPath is a language that allows you to define access paths to one or more XML elements. ElementTree supports XPath, although incompletely (more on this here
Step23: or shorter (but slower) like this
Step24: If we are only interested in the ingredients for desserts, we can filter on the value of the type attribute of the recipe element
Step25: Of course, we can continue navigating the path from any found element. Here we first use XPath to find all recipes of type dessert. From each recipe element found, we use a further XPath expression to print the recipe's title and a second XPath to navigate to the recipe's ingredients.
Step26: Besides XPath, lxml also supports further methods such as getprevious(), getnext() or getparent() to navigate through the tree.
Modifying data
Modifying attributes and text
ElementTree of course also supports modifying data. For example, we can change the label 'grams' to 'Gramm'. Let us first use XPath to find all ingredient elements in which this value occurs
Step27: As a check, we can print the whole XML document
Step28: or write it to a file
Step29: Just like the attributes of an element, we can modify the text property
Step30: Deleting and adding new elements
Step31: SAX and MiniDOM
As already described, there are further ways to process XML with Python. Here we take a very brief look at two of them.
SAX
SAX is event-based and, because the whole document never has to be kept in memory, it is easy on memory. You create a parser and assign it a self-written ContentHandler. Then you hand the document to be parsed to the parser
Step32: This code does not work yet, because we first have to write the ContentHandler
Step33: Now the code written earlier works as well
Step34: MiniDOM
With MiniDOM, Python offers a view of XML data that largely conforms to the DOM (Document Object Model) defined by the World Wide Web Consortium, which is widely adopted and supported by many programming languages. Compared to ElementTree, DOM is relatively cumbersome and, especially compared to lxml, very slow for large documents.
Let us look at a small example | Python Code:
import xml.etree.ElementTree as ET
Explanation: XML
The Extensible Markup Language (XML) is a format and a metalanguage for (primarily) hierarchical languages. Since XML is used in several other courses, I will not go into detail here, but only briefly show how XML can be processed and generated with Python.
XML libraries in Python
The XML libraries available in the standard library live in the xml package. They are:
xml.sax - reads XML as a data stream and generates events, e.g. to react to specific tags. This module is mainly used to process huge XML documents without using much RAM.
xml.dom.minidom - makes an XML document available as a Document Object Model. The DOM is an abstract view of, and API for, an XML document that is supported by many programming languages. DOM plays a major role, for example, when JavaScript accesses HTML.
xml.dom.pulldom is a somewhat exotic middle ground between SAX and DOM, with a very limited range of use cases.
xml.etree.ElementTree is a very "pythonic" way of processing XML. The idea is comparable to DOM, but it offers a simpler interface.
Additional libraries
Above all, lxml (http://lxml.de/) deserves mention. lxml is something like the bigger, smarter and stronger brother of xml.etree.ElementTree. The two are so similar, however, that if you have used xml.etree.ElementTree you can switch to lxml with minimal code changes. The main differences are the processing speed and the support for XPath, which, unlike in etree, is fully implemented in lxml. Beyond that, lxml offers a number of further capabilities, for example for parsing HTML.
In what follows we will focus on ElementTree, because it is the most common way of processing XML data with Python and is used largely in the same way as the more powerful lxml.
In the appendix I will give short examples for the other options.
The example data
We use very simple data here that is hopefully easy to understand: a small recipe collection.
In XML, data is organized as a tree. A root element (here: <recipes>) contains one or more child elements, which in turn can contain child elements. In the example, the root element <recipes> can contain any number of <recipe> elements. Each <recipe> element in turn contains these elements: <title>, <coocingTime>, <ingredients> and <instructions>, etc.
You can think of this structure as a set of containers nested inside one another:
<img src="img/recipes_box.png" style="height: 200px;"/>
or as a tree, similar to a directory structure:
<img src="img/recipes_tree.png" style="width: 500px;"/>
It is also important that elements can have attributes. For example, this is used to assign each recipe a type and a language (xml:lang):
~~~
<recipe type="soup" xml:lang="de">
~~~
You can find the complete XML file here: data/recipes.xml
Reading the XML file
In the following examples we will content ourselves with etree. In principle, however, the examples should also work with lxml.
First we have to import the module. To avoid typing the long module name every time, we import it under the name ET:
End of explanation
tree = ET.parse('data/recipes.xml')
Explanation: Then we read in the file and assign the resulting ElementTree to the variable tree:
End of explanation
root = tree.getroot()
Explanation: To navigate the tree, we need its root:
End of explanation
root
Explanation: Then we can take a closer look at root:
End of explanation
root.tag
root.text
Explanation: The Element object
root is therefore an Element object. It has three important properties:
tag represents the tag name of the element
text represents a text node possibly nested under the element
attrib is a dictionary with all attributes of the XML element.
End of explanation
root.attrib
Explanation: So our root element contains no real text; we will look at this property later on another element.
End of explanation
root.getchildren()
Explanation: As we can see, no attribute is defined for the recipes element, so the attrib property is an empty dictionary.
Child elements
From any element we can access its child elements. The getchildren() method returns a list of Element objects. If the element has no child elements, the method returns an empty list.
End of explanation
for child in root:
print(child)
Explanation: We see that the recipes element contains 6 recipe elements.
Since Element objects are iterable, we can simply use the in operator instead of getchildren() to access one child element after another:
End of explanation
recipe1 = root[0]
recipe1
Explanation: Let us now look at the first recipe more closely by setting a reference to the first child element, starting from the root element. Conveniently, we can use an index number for this, just like with a list:
End of explanation
recipe1.attrib
Explanation: recipe1 also has attributes, which we can access via the attrib property:
End of explanation
recipe1.get('type')
Explanation: To access the value of a specific attribute, Element provides the get method:
End of explanation
print(recipe1.get('hudriwurdri'))
Explanation: Accessing a non-existent attribute makes get() return None:
End of explanation
for child in recipe1:
print(child.tag)
Explanation: Accessing specific elements
find()
Let us take a look at the child elements of recipe1:
End of explanation
recipe1.find('title')
Explanation: So the element has four child elements. If we want to access a specific child element, we can use the find(<tag_to_search>) method. It returns the first immediate child element with the given tag name:
End of explanation
recipe1.find('title').text
Explanation: We can also access the text property of the found element directly:
End of explanation
for recipe in root:
print(recipe.find('title').text)
Explanation: With this we can already get an overview of the available recipes:
End of explanation
recipe1.findall('title')
Explanation: findall()
While find() only returns the first matching child element, findall() finds all direct child elements with a given tag.
End of explanation
recipe1.find('ingredients').findall('ingredient')
Explanation: Since there is only one title element per recipe, we get a list with just one entry. It gets more interesting when we apply findall() to the ingredients element:
End of explanation
for ingredient in recipe1.find('ingredients').findall('ingredient'):
print('{} {} {}'.format(
ingredient.get('quantity', ''),
ingredient.get('unit', ''),
ingredient.text))
Explanation: So ingredients has 4 child elements of type ingredient.
Instead of just printing the elements, we can use this to build a list of all ingredients:
End of explanation
recipe1.findall('ingredient')
Explanation: iter()
The methods find() and findall() only search the direct child elements. If we use them from the recipe1 element to search for ingredient, nothing is found, because the ingredients element lies in between:
End of explanation
for ingredient in recipe1.iter('ingredient'):
print(ingredient.text)
Explanation: If we want to search deeper than one level, we need the iter() method. It returns an iterator that yields one element after another:
End of explanation
for ingredient in root.iter('ingredient'):
print(ingredient.text)
Explanation: With this we can also print all ingredients for all recipes very easily, because iter() finds all elements arbitrarily deep in the structure:
End of explanation
for ingredient in root.findall('./recipe/ingredients/ingredient'):
print(ingredient.text)
Explanation: XPath
XPath is a language that allows you to define access paths to one or more XML elements. ElementTree supports XPath, although incompletely (more on this here: https://docs.python.org/3/library/xml.etree.elementtree.html#elementtree-xpath). Hence, once again, the pointer to lxml for more complex XML projects.
The last example could also be written like this using XPath:
End of explanation
for ingredient in root.findall('.//ingredient'):
print(ingredient.text)
Explanation: or shorter (but slower) like this:
End of explanation
for ingredient in root.findall('./recipe[@type="dessert"]/ingredients/ingredient'):
print(ingredient.text)
Explanation: If we are only interested in the ingredients for desserts, we can filter on the value of the type attribute of the recipe element:
End of explanation
for recipe in root.findall('./recipe[@type="dessert"]'):
print(recipe.find('./title').text)
for ingredient in recipe.findall('./ingredients/ingredient'):
print('\t{}'.format(ingredient.text))
Explanation: Of course, we can continue navigating the path from any found element. Here we first use XPath to find all recipes of type dessert. From each recipe element found, we use a further XPath expression to print the recipe's title and a second XPath to navigate to the recipe's ingredients.
End of explanation
for ingredient in root.findall('./recipe/ingredients/ingredient[@unit="grams"]'):
print(ingredient.attrib)
for ingredient in root.findall('./recipe/ingredients/ingredient[@unit="grams"]'):
ingredient.set('unit', 'Gramm')
Explanation: Besides XPath, lxml also supports further methods such as getprevious(), getnext() or getparent() to navigate through the tree.
Modifying data
Modifying attributes and text
ElementTree of course also supports modifying data. For example, we can change the label 'grams' to 'Gramm'. Let us first use XPath to find all ingredient elements in which this value occurs:
End of explanation
ET.tostring(root, encoding='utf-8')
Explanation: As a check, we can print the whole XML document:
End of explanation
tree.write('rezepte_de.xml', encoding="utf-8")
Explanation: or write it to a file:
End of explanation
for ingredient in root.findall('./recipe/ingredients/ingredient'):
ingredient.text = ingredient.text.replace('Erdäpfel', 'Kartoffel')
tree.write('rezepte_de.xml', encoding="utf-8")
Explanation: Just like the attributes of an element, we can modify the text property:
End of explanation
new_recipe = ET.SubElement(root, 'recipe')
new_recipe.set('type', 'mainDish')
new_recipe.set('xml:lang', 'de')
title = ET.SubElement(new_recipe, 'title')
title.text = 'Ravioli aus der Dose'
ingredients = ET.SubElement(new_recipe, 'ingredients')
ingredient = ET.SubElement(ingredients, 'ingredient')
ingredient.set('quantity', '1')
ingredient.set('unit', 'pieces')
ingredient.text = 'Dose Ravioli'
ET.dump(new_recipe)
Explanation: Deleting and adding new elements
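The cell above only demonstrates adding elements; as a small added sketch, deleting works through the direct parent element:
# Remove the recipe we just added; remove() must be called on the element's direct parent.
root.remove(new_recipe)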
End of explanation
import xml.sax as sax
parser = sax.make_parser()
parser.setContentHandler(RecipeHandler())
parser.parse('data/recipes.xml')
Explanation: SAX and MiniDOM
As already described, there are further ways to process XML with Python. Here we take a very brief look at two of them.
SAX
SAX is event-based and, because the whole document never has to be kept in memory, it is easy on memory. You create a parser and assign it a self-written ContentHandler. Then you hand the document to be parsed to the parser:
End of explanation
class RecipeHandler(sax.handler.ContentHandler):
def __init__(self):
self.in_title = False # set to True if we are inside a <title> Tag
self.in_ingredient = False
self.content = ''
def startElement(self, name, attrs):
"This method is called for each opening tag."
if name == 'title':
self.in_title = True
if name == 'ingredient':
self.in_ingredient = True
def characters(self, content):
"Content within tag markers"
if self.in_title or self.in_ingredient:
self.content = content
def endElement(self, name):
if name == 'title':
self.in_title = False
print(self.content)
elif name == 'ingredient':
self.in_ingredient = False
print("\t{}".format(self.content))
Explanation: This code does not work yet, because we first have to write the ContentHandler:
End of explanation
parser = sax.make_parser()
parser.setContentHandler(RecipeHandler())
parser.parse('data/recipes.xml')
Explanation: Now the code written earlier works as well:
End of explanation
import xml.dom.minidom as minidom
tree = minidom.parse('data/recipes.xml')
for recipe in tree.getElementsByTagName('recipe'):
title = recipe.getElementsByTagName('title')[0]
print(title.firstChild.data)
for ingredient in recipe.getElementsByTagName('ingredient'):
print("\t{}".format(ingredient.firstChild.data))
Explanation: MiniDOM
With MiniDOM, Python offers a view of XML data that largely conforms to the DOM (Document Object Model) defined by the World Wide Web Consortium, which is widely adopted and supported by many programming languages. Compared to ElementTree, DOM is relatively cumbersome and, especially compared to lxml, very slow for large documents.
Let us look at a small example:
End of explanation |
6,926 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
You have the tools to obtain data from a single table in whatever format you want it. But what if the data you want is spread across multiple tables?
That's where JOIN comes in! JOIN is incredibly important in practical SQL workflows. So let's get started.
Example
We'll use our imaginary pets table, which has three columns
Step1: The second table is the sample_files table, which provides, among other information, the GitHub repo that each file belongs to (in the repo_name column). The first several rows of this table are printed below.
Step3: Next, we write a query that uses information in both tables to determine how many files are released in each license.
Step4: It's a big query, and so we'll investigate each piece separately.
We'll begin with the JOIN (highlighted in blue above). This specifies the sources of data and how to join them. We use ON to specify that we combine the tables by matching the values in the repo_name columns in the tables.
Next, we'll talk about SELECT and GROUP BY (highlighted in yellow). The GROUP BY breaks the data into a different group for each license, before we COUNT the number of rows in the sample_files table that corresponds to each license. (Remember that you can count the number of rows with COUNT(1).)
Finally, the ORDER BY (highlighted in purple) sorts the results so that licenses with more files appear first.
It was a big query, but it gave us a nice table summarizing how many files have been committed under each license | Python Code:
#$HIDE_INPUT$
from google.cloud import bigquery
# Create a "Client" object
client = bigquery.Client()
# Construct a reference to the "github_repos" dataset
dataset_ref = client.dataset("github_repos", project="bigquery-public-data")
# API request - fetch the dataset
dataset = client.get_dataset(dataset_ref)
# Construct a reference to the "licenses" table
licenses_ref = dataset_ref.table("licenses")
# API request - fetch the table
licenses_table = client.get_table(licenses_ref)
# Preview the first five lines of the "licenses" table
client.list_rows(licenses_table, max_results=5).to_dataframe()
Explanation: Introduction
You have the tools to obtain data from a single table in whatever format you want it. But what if the data you want is spread across multiple tables?
That's where JOIN comes in! JOIN is incredibly important in practical SQL workflows. So let's get started.
Example
We'll use our imaginary pets table, which has three columns:
- ID - ID number for the pet
- Name - name of the pet
- Animal - type of animal
We'll also add another table, called owners. This table also has three columns:
- ID - ID number for the owner (different from the ID number for the pet)
- Name - name of the owner
- Pet_ID - ID number for the pet that belongs to the owner (which matches the ID number for the pet in the pets table)
To get information that applies to a certain pet, we match the ID column in the pets table to the Pet_ID column in the owners table.
For example,
- the pets table shows that Dr. Harris Bonkers is the pet with ID 1.
- The owners table shows that Aubrey Little is the owner of the pet with ID 1.
Putting these two facts together, Dr. Harris Bonkers is owned by Aubrey Little.
Fortunately, we don't have to do this by hand to figure out which owner goes with which pet. In the next section, you'll learn how to use JOIN to create a new table combining information from the pets and owners tables.
JOIN
Using JOIN, we can write a query to create a table with just two columns: the name of the pet and the name of the owner.
We combine information from both tables by matching rows where the ID column in the pets table matches the Pet_ID column in the owners table.
In the query, ON determines which column in each table to use to combine the tables. Notice that since the ID column exists in both tables, we have to clarify which one to use. We use p.ID to refer to the ID column from the pets table, and o.Pet_ID refers to the Pet_ID column from the owners table.
In general, when you're joining tables, it's a good habit to specify which table each of your columns comes from. That way, you don't have to pull up the schema every time you go back to read the query.
The type of JOIN we're using today is called an INNER JOIN. That means that a row will only be put in the final output table if the value in the columns you're using to combine them shows up in both the tables you're joining. For example, if Tom's ID number of 4 didn't exist in the pets table, we would only get 3 rows back from this query. There are other types of JOIN, but an INNER JOIN is very widely used, so it's a good one to start with.
Example: How many files are covered by each type of software license?
GitHub is the most popular place to collaborate on software projects. A GitHub repository (or repo) is a collection of files associated with a specific project.
Most repos on GitHub are shared under a specific legal license, which determines the legal restrictions on how they are used. For our example, we're going to look at how many different files have been released under each license.
We'll work with two tables in the database. The first table is the licenses table, which provides the name of each GitHub repo (in the repo_name column) and its corresponding license. Here's a view of the first five rows.
End of explanation
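As a sketch, the pets/owners query described above could look like this (the dataset path your_project.pet_records is a hypothetical placeholder, not a real BigQuery public dataset):
# a sketch of the pets/owners JOIN described above;
# `your_project.pet_records` is a hypothetical placeholder path
pets_owners_query = """
                    SELECT p.Name AS Pet_Name, o.Name AS Owner_Name
                    FROM `your_project.pet_records.pets` AS p
                    INNER JOIN `your_project.pet_records.owners` AS o
                        ON p.ID = o.Pet_ID
                    """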
#$HIDE_INPUT$
# Construct a reference to the "sample_files" table
files_ref = dataset_ref.table("sample_files")
# API request - fetch the table
files_table = client.get_table(files_ref)
# Preview the first five lines of the "sample_files" table
client.list_rows(files_table, max_results=5).to_dataframe()
Explanation: The second table is the sample_files table, which provides, among other information, the GitHub repo that each file belongs to (in the repo_name column). The first several rows of this table are printed below.
End of explanation
# Query to determine the number of files per license, sorted by number of files
query = """
        SELECT L.license, COUNT(1) AS number_of_files
        FROM `bigquery-public-data.github_repos.sample_files` AS sf
        INNER JOIN `bigquery-public-data.github_repos.licenses` AS L
            ON sf.repo_name = L.repo_name
        GROUP BY L.license
        ORDER BY number_of_files DESC
        """
# Set up the query (cancel the query if it would use too much of
# your quota, with the limit set to 10 GB)
safe_config = bigquery.QueryJobConfig(maximum_bytes_billed=10**10)
query_job = client.query(query, job_config=safe_config)
# API request - run the query, and convert the results to a pandas DataFrame
file_count_by_license = query_job.to_dataframe()
Explanation: Next, we write a query that uses information in both tables to determine how many files are released in each license.
End of explanation
# Print the DataFrame
file_count_by_license
Explanation: It's a big query, and so we'll investigate each piece separately.
We'll begin with the JOIN (highlighted in blue above). This specifies the sources of data and how to join them. We use ON to specify that we combine the tables by matching the values in the repo_name columns in the tables.
Next, we'll talk about SELECT and GROUP BY (highlighted in yellow). The GROUP BY breaks the data into a different group for each license, before we COUNT the number of rows in the sample_files table that corresponds to each license. (Remember that you can count the number of rows with COUNT(1).)
Finally, the ORDER BY (highlighted in purple) sorts the results so that licenses with more files appear first.
It was a big query, but it gave us a nice table summarizing how many files have been committed under each license:
End of explanation |
6,927 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I am using KMeans in sklearn on a data set which has more than 5000 samples, and I want to get the 50 samples (not just the indexes but the full data) closest to "p" (e.g. p=2), a cluster center, as an output; here "p" means the p^th center.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
p, X = load_data()
assert type(X) == np.ndarray
km = KMeans()
km.fit(X)
d = km.transform(X)[:, p]
indexes = np.argsort(d)[::][:50]
closest_50_samples = X[indexes] |
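An equivalent way to get the same ordering, as a sketch, is to compute the Euclidean distances to the chosen center directly with NumPy instead of going through km.transform:
# equivalent sketch: distances to the p-th center computed directly with NumPy
d_alt = np.linalg.norm(X - km.cluster_centers_[p], axis=1)
closest_50_alt = X[np.argsort(d_alt)[:50]]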
6,928 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reading and Writing Models
Cobrapy supports reading and writing models in SBML (with and without FBC), JSON, MAT, and pickle formats. Generally, SBML with FBC version 2 is the preferred format for general use. The JSON format may be more useful for cobrapy-specific functionality.
The package also ships with test models in various formats for testing purposes.
Step1: SBML
The Systems Biology Markup Language is an XML-based standard format for distributing models which has support for COBRA models through the FBC extension version 2.
Cobrapy has native support for reading and writing SBML with FBCv2. Please note that all IDs in the model must conform to the SBML SID requirements in order to generate a valid SBML file.
Step2: There are other dialects of SBML prior to FBC 2 which have previously been used to encode COBRA models. The primary one is the "COBRA" dialect which used the "notes" fields in SBML files.
Cobrapy can use libsbml, which must be installed separately (see installation instructions) to read and write these files. When reading in a model, it will automatically detect whether fbc was used or not. When writing a model, the use_fbc_package flag can be used to write files in this legacy "cobra" format.
Step3: JSON
cobrapy models have a JSON (JavaScript Object Notation) representation. This format was created for interoperability with escher.
Step4: MATLAB
Often, models may be imported and exported solely for the purposes of working with the same models in cobrapy and the MATLAB cobra toolbox. MATLAB has its own ".mat" format for storing variables. Reading and writing to these mat files from python requires scipy.
A mat file can contain multiple MATLAB variables. Therefore, the variable name of the model in the MATLAB file can be passed into the reading function
Step5: If the mat file contains only a single model, cobra can figure out which variable to read from, and the variable_name parameter is unnecessary.
Step6: Saving models to mat files is also relatively straightforward | Python Code:
import cobra.test
import os
from os.path import join
data_dir = cobra.test.data_directory
print("mini test files: ")
print(", ".join(i for i in os.listdir(data_dir)
if i.startswith("mini")))
textbook_model = cobra.test.create_test_model("textbook")
ecoli_model = cobra.test.create_test_model("ecoli")
salmonella_model = cobra.test.create_test_model("salmonella")
Explanation: Reading and Writing Models
Cobrapy supports reading and writing models in SBML (with and without FBC), JSON, MAT, and pickle formats. Generally, SBML with FBC version 2 is the preferred format for general use. The JSON format may be more useful for cobrapy-specific functionality.
The package also ships with test models in various formats for testing purposes.
End of explanation
cobra.io.read_sbml_model(join(data_dir, "mini_fbc2.xml"))
cobra.io.write_sbml_model(textbook_model, "test_fbc2.xml")
Explanation: SBML
The Systems Biology Markup Language is an XML-based standard format for distributing models which has support for COBRA models through the FBC extension version 2.
Cobrapy has native support for reading and writing SBML with FBCv2. Please note that all IDs in the model must conform to the SBML SID requirements in order to generate a valid SBML file.
End of explanation
cobra.io.read_sbml_model(join(data_dir, "mini_cobra.xml"))
cobra.io.write_sbml_model(textbook_model, "test_cobra.xml",
use_fbc_package=False)
Explanation: There are other dialects of SBML prior to FBC 2 which have previously been used to encode COBRA models. The primary one is the "COBRA" dialect which used the "notes" fields in SBML files.
Cobrapy can use libsbml, which must be installed separately (see installation instructions) to read and write these files. When reading in a model, it will automatically detect whether fbc was used or not. When writing a model, the use_fbc_package flag can be used to write files in this legacy "cobra" format.
End of explanation
cobra.io.load_json_model(join(data_dir, "mini.json"))
cobra.io.save_json_model(textbook_model, "test.json")
Explanation: JSON
cobrapy models have a JSON (JavaScript Object Notation) representation. This format was created for interoperability with escher.
End of explanation
cobra.io.load_matlab_model(join(data_dir, "mini.mat"),
variable_name="mini_textbook")
Explanation: MATLAB
Often, models may be imported and exported solely for the purposes of working with the same models in cobrapy and the MATLAB cobra toolbox. MATLAB has its own ".mat" format for storing variables. Reading and writing to these mat files from python requires scipy.
A mat file can contain multiple MATLAB variables. Therefore, the variable name of the model in the MATLAB file can be passed into the reading function:
End of explanation
cobra.io.load_matlab_model(join(data_dir, "mini.mat"))
Explanation: If the mat file contains only a single model, cobra can figure out which variable to read from, and the variable_name parameter is unnecessary.
End of explanation
cobra.io.save_matlab_model(textbook_model, "test.mat")
Explanation: Saving models to mat files is also relatively straightforward
End of explanation |
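The pickle format mentioned at the top of this entry can be handled with plain Python serialization; a minimal sketch (standard-library pickle, not a cobra-specific function) would be:
# a minimal sketch using plain Python pickling (not a cobra-specific API)
import pickle
with open("test.pickle", "wb") as outfile:
    pickle.dump(textbook_model, outfile)
with open("test.pickle", "rb") as infile:
    reloaded_model = pickle.load(infile)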
6,929 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center>
<img src="https
Step1: 1.4 Creating cells
To create a new code cell, click "Insert > Insert Cell [Above or Below]". A code cell will automatically be created.
To create a new markdown cell, first follow the process above to create a code cell, then change the type from "Code" to "Markdown" using the dropdown next to the run, stop, and restart buttons.
You can also use the + button to create a cell and the scissors buttom to cut it out.
1.5 Re-running cells
If you find a bug in your code, you can always update the cell and re-run it. However, any cells that come afterward won't be automatically updated. Try it out below. First run each of the three cells. The first two don't have any output, but you will be able to tell they've run because a number will appear next to them, for example, "In [5]". The third cell should output the message "Intro to Data Analysis is awesome!"
Step2: Once you've run all three cells, try modifying the first one to set who to your anything else.
Rerun the first and third cells without rerunning the second and see what happens. It seems that you need to run the three cells to have the expected result.
Often, after changing a cell, you'll want to rerun all the cells below it. You can do that quickly by clicking "Cell > Run All Below".
1.6 Autocompletion
Use TAB to autocomplete Python methods and variables.
Step3: 1.7 Final tips
If the output is too long click on the left part of the result
Step4: 2.3 Load .csv in a DataFrame with the data
We have to import pandas library.
Step6: We create a Pandas DataFrame from a csv file using the read_csv method. The name of the columns are going to be those of the CSVs columns.
Check Pandas Input/Output commands to check other available read methods read_json or read_excel.
Step8: To explore the DataFrame we can use head() method that results by default in the first 5 lines of the DataFrame
Step9: The attribute columns returns a list of the column names.
Step10: 2.4 Concatenate several DataFrames
If we want to have the historical data to see the evolution of the disease we need to create a new DataFrame containing the content of the different csv files. We use pd.concat() to concatenate one DataFrame per csv file.
Step11: 2.5 Arrange columns
2.5.1 Processing dates
We want to process the report_date column, being able to extract the year or the month or any other part of the date
Step12: 2.5.2 New column -- 'week'
Now we are going to create a column week from the column report_date using the apply method
Step13: 2.5.3 New column -- 'state'
Now we are going to create a column state from the column location
Step14: 2.5.4 Deleting columns
We also want to delete the columns that we don't need. We use the drop method with
Step15: NOTE
Step16: 2.5.6 Solution to the previous exercise
Step17: 2.6 Select and filter data
2.6.1 Selectors
Step18: Creating a new DataFrame from a fragment of other.
It is a good practice to use the copy() method to avoid later problems when creating a new DataFrame out of a piece of another one. Pandas cannot assure the piece you are taking is a view or a copy of the original DataFrame and we want to be sure its a copy, so we do not modify the original one if we change the new one. Explanation
Step19: 2.6.2 Applying conditions
Which are the zika infection cases in the District of Columbia?
Step20: But we should store the value of the latest week, to know which are the latest results if we need them.
Step21: Which are the current zika infection cases in the District of Columbia?
Step22: Which states or territories have zika cases due to local infection?
Step23: 2.6.3 Exercise - try it on your own!
Try to do this exercise without having a look to the solution in the next paragraph
Use year_df dataframe and conditions to create a new df called syear_df where the state column contains only state names, not territory names
To ensure that we are making a copy use the copy() method
Step24: 2.6.4 Solution to exercise 2.6.3
Step25: 2.7 Grouping and ordering
2.7.1 Which states have more zika cases?
First we selected the chunk that we want to use, the latest report data
Step26: Now we group the data by state to sum both local and travel cases
Step27: We order the df by the value columns
Step29: 2.8 Plotting Data
matplotlib is one of the most common libraries for making 2D plots of arrays in Python; the Pandas library has a plot() method that wraps the plotting functionality of matplotlib. We'll try to use this method when possible, for simplicity.
Even so, in order to see the plots immediately in our script we need to use the matplotlib magic line as we'll see in the next paragraph. More about magic lines here.
2.8.1 Bar graph with the aggregated cases of zika in the US
Step30: Let's draw it horizontally
Step31: Wouldn't it be clearer if the highest value is at the top of the plot?
2.8.2 Exercise - try it on your own!
Try to do this exercise without having a look to the solution in the next paragraph
Draw the same horizontal bar graph that we've done before but ordered from the top to the bottom
You need to re-write the code we've done for the graph, read the instructions in the following cell
Step32: 2.8.3 Solution to 2.8.2
Step33: Hint
Step35: 2.8.5 Add some labels!! (extra)
Step36: 2.8.6 Stacked bar plots (extra)
We want to show in the same bar plot which cases of zika were transmitted locally and which cases were due to travelling to affected areas.
In order to plot two different variables travel and local we need to create two new columns for these variables.
Step37: 2.8.7 Arranging the axes for dates (extra)
This example is a little bit more complex: in this case we have to go down to the original matplotlib library and use its plot_date method, which allows us to configure the axes according to our needs.
Step38: 2.9 Store data in a file
Again, consult Pandas Input/Output commands to check other file formats available. | Python Code:
# Hit shift + enter or use the run button to run this cell and see the results
print 'Hello PyLadies'
# The last line of every code cell will be displayed by default,
# even if you don't print it. Run this cell to see how this works.
2 + 2 # The result of this line will not be displayed
3 + 3 # The result of this line will be displayed, because it is the last line of the cell
Explanation: <center>
<img src="https://raw.githubusercontent.com/pyladies-bcn/pyladies_latex_template/master/pyladies.png" WIDTH=600>
<h1>
WORKSHOP<br>
"Python for Journalists and Data Nerds"<br>
</h1>
<dl>
<dt><br></dt>
<dt>Marta Alonso @malonfe</dt>
<dt>Kathleen Burnett @kmb232</dt>
</dl>
Based on an idea from <a href="https://twitter.com/crisodisy">@crisodisy</a> from <a href="http://www.meetup.com/PyLadies-BCN/">PyLadiesBCN</a>
</center>
1. INTRO TO JUPYTER NOTEBOODK
1.1 Jupyter Kernel
Be sure you are using the appropriate kernel.
Go to the menu "Kernel > Change Kernel" and select the one that refers to your virtual environment.
1.2 Text cells
Double click on this cell, you will see the text without formatting.
This allows you to edit this block of text which is written using Markdown. But you can also use <strong>html</strong> to edit it.
Hit shift + enter or shift + return on your keyboard to show the formatted text again. This is called running the cell, and you can also do it using the run button in the toolbar.
1.3 Code cells
One great advantage of Jupyter notebooks is that you can show your Python code alongside the results, add comments to the code, or even add blocks of text using Markdown. The following cell is a code cell.
End of explanation
who = "Python"
who
message = "We love " + who + "!"
message
Explanation: 1.4 Creating cells
To create a new code cell, click "Insert > Insert Cell [Above or Below]". A code cell will automatically be created.
To create a new markdown cell, first follow the process above to create a code cell, then change the type from "Code" to "Markdown" using the dropdown next to the run, stop, and restart buttons.
You can also use the + button to create a cell and the scissors buttom to cut it out.
1.5 Re-running cells
If you find a bug in your code, you can always update the cell and re-run it. However, any cells that come afterward won't be automatically updated. Try it out below. First run each of the three cells. The first two don't have any output, but you will be able to tell they've run because a number will appear next to them, for example, "In [5]". The third cell should output the message "Intro to Data Analysis is awesome!"
End of explanation
print len(message)
print len(message)
Explanation: Once you've run all three cells, try modifying the first one to set who to anything else.
Rerun the first and third cells without rerunning the second and see what happens. It seems that you need to run the three cells to have the expected result.
Often, after changing a cell, you'll want to rerun all the cells below it. You can do that quickly by clicking "Cell > Run All Below".
1.6 Autocompletion
Use TAB to autocomplete Python methods and variables.
End of explanation
import os
print os.getcwd()
print os.listdir('.')
print os.listdir('data')
Explanation: 1.7 Final tips
If the output is too long click on the left part of the result:
* One click will confine the result to a box
* Double click will hide the result
One final thing to remember: if you shut down the kernel after saving your notebook, the cells' output will still show up as you left it at the end of your session when you start the notebook back up. However, the state of the kernel will be reset. If you are actively working on a notebook, remember to re-run your cells to set up your working environment to really pick up where you last left off.
2. WORKSHOP
Handy documentation:
Pandas Docs
Pandas API Reference
Pandas DataFrames
THE "ZIKA REPORTS"
2.1 Obtaining and Preparing data
For this exercise we are going to use a real data set of Zika infection cases. The original data is provided by the Centers for Disease Control and Prevention (CDC) in raw HTML. You can find it on the CDC site, from where it is scraped and formatted as CSV files.
The CSVs are available from:
* Epidemic Prediction Initiative github: https://github.com/cdcepi/zika
* Our github: https://github.com/PyLadiesDC/python-for-journalists
You've already downloaded all the data, so you should have everything you need to start right now.
2.2 Check that we have all necessary files in current directory
First of all, we have to ensure that all the files we downloaded are in the working directory and all data files are available.
In this exercise we are going to work with data stored in the data folder. Check that you have that folder by running the following cell.
End of explanation
import pandas as pd
#from now on we use pd as an abbreviation for pandas
Explanation: 2.3 Load .csv in a DataFrame with the data
We have to import the pandas library.
End of explanation
# shape is an attribute (invoked without parentheses);
# it tells us how many rows and columns our DataFrame has
august_df = pd.read_csv("data/CDC_Report-2016-08-31.csv")
august_df.shape
Explanation: We create a Pandas DataFrame from a csv file using the read_csv method. The names of the columns are going to be those of the CSV's columns.
See the Pandas Input/Output commands to check other available read methods such as read_json or read_excel.
End of explanation
# head() is a method; to call it you use parentheses
august_df.head()
Explanation: To explore the DataFrame we can use the head() method, which returns by default the first 5 lines of the DataFrame
End of explanation
august_df.columns
Explanation: The attribute columns returns a list of the column names.
End of explanation
import glob
#glob finds all the pathnames matching a specified pattern
csv_list = glob.glob("data/*.csv")
df_list = []
for f in csv_list:
df = pd.read_csv(f)
df_list.append(df)
year_df = pd.concat(df_list, ignore_index=True)
# NOTE: a more pythonic way of doing the last five lines would be:
# year_df = pd.concat((pd.read_csv(f) for f in all_files), ignore_index=True)
year_df.shape
year_df.head(2)
Explanation: 2.4 Concatenate several DataFrames
If we want to have the historical data to see the evolution of the disease we need to create a new DataFrame containing the content of the different csv files. We use pd.concat() to concatenate one DataFrame per csv file.
End of explanation
import datetime as dt
#creates a datetime object from a string representing a date and time and a corresponding format string.
my_date = dt.datetime.strptime('2010-06-17', '%Y-%m-%d')
# from the datetime object we can extract the following attributes and functions
print my_date.year
print my_date.month
print my_date.day
print my_date.hour
print my_date.minute
print my_date.isocalendar() #returns a tuple (ISO year, ISO week number, ISO weekday)
Explanation: 2.5 Arrange columns
2.5.1 Processing dates
We want to process the report_date column, being able to extract the year or the month or any other part of the date:
* we need to import the datetime library
* using strptime to create a datetime object according to a format (see the link to understand the format directives)
End of explanation
def get_week_number(any_date):
return dt.datetime.strptime(any_date,'%Y-%m-%d').isocalendar()[1]
# we apply the function to each of the elements of the column "report_date"
year_df['week'] = year_df['report_date'].apply(get_week_number)
year_df.head()
Explanation: 2.5.2 New column -- 'week'
Now we are going to create a column week from the column report_date using the apply method:
* We first define a function that extracts the week number for each of the dates.
* Then we need to apply this function to all the elements in the report_date column.
End of explanation
def get_state(location):
return location.split("-")[1]
year_df['state'] = year_df['location'].apply(get_state)
year_df.head(10)
Explanation: 2.5.3 New column -- 'state'
Now we are going to create a column state from the column location:
* We first define a function that extracts the state name from each location value.
* Then we need to apply this function to all the elements in the location column.
End of explanation
year_df.drop('time_period', axis=1, inplace=True)
year_df.head()
Explanation: 2.5.4 Deleting columns
We also want to delete the columns that we don't need. We use the drop method with:
* axis = 1 specifying that we are deleting a column (0 for deleting rows)
* inplace = True specifies that we are deleting the column in our object, with inplace = False we are creating a new DataFrame without that column.
End of explanation
# your solution goes here
Explanation: NOTE: If you try to execute the drop twice you'll get an error because that column doesn't exist anymore. If you want to reset the kernel and start from scratch, you can go to the menu "Kernel" and select any of the "Restart" options, and afterwards go to the menu "Cell" and run the cells above or below the focused cell.
2.5.5 Exercise - try it on your own!
Try to do this exercise without having a look at the solution in the next paragraph
Create a column country similar to the one you created before
Delete the columns time_period_type, unit, data_field_code and location that you are not using
End of explanation
# Adding "country" column
def get_country(location):
return location.split("-")[0]
year_df['country'] = year_df['location'].apply(get_country)
# Deleting extra columns
year_df.drop('time_period_type', axis=1, inplace=True)
year_df.drop('unit', axis=1, inplace=True)
year_df.drop('data_field_code', axis=1, inplace=True)
year_df.drop('location', axis=1, inplace=True)
year_df.head()
Explanation: 2.5.6 Solution to the previous exercise
End of explanation
year_df.ix[:3,'report_date':'data_field']
Explanation: 2.6 Select and filter data
2.6.1 Selectors
End of explanation
two_columns_df = year_df[["week","value"]].copy()
two_columns_df.head()
# Another way of copying columns selecting all the rows for certain columns, no need to add the copy() method
# two_columns_df = year_df.loc[:,["week","value"]]
Explanation: Creating a new DataFrame from a fragment of another one.
It is a good practice to use the copy() method to avoid later problems when creating a new DataFrame out of a piece of another one. Pandas cannot assure the piece you are taking is a view or a copy of the original DataFrame and we want to be sure its a copy, so we do not modify the original one if we change the new one. Explanation: here
End of explanation
year_df[(year_df["state"] == "District_of_Columbia") & (year_df["week"] == 30)]
Explanation: 2.6.2 Applying conditions
Which are the zika infection cases in the District of Columbia?
End of explanation
max_week = year_df["week"].max()
print max_week
Explanation: But we should store the value of the latest week, to know which are the latest results if we need them.
End of explanation
year_df[(year_df["state"] == "District_of_Columbia") & (year_df["week"] == max_week)]
Explanation: Which are the current zika infection cases in the District of Columbia?
End of explanation
year_df[(year_df["value"] != 0) & (year_df["week"] == max_week) & (year_df["data_field"] == "zika_reported_local") ]
Explanation: Which states or territories have zika cases due to local infection?
End of explanation
# Your solution goes here
Explanation: 2.6.3 Exercise - try it on your own!
Try to do this exercise without having a look at the solution in the next paragraph
Use the year_df dataframe and conditions to create a new df called syear_df where the state column contains only state names, not territory names
To ensure that we are making a copy use the copy() method
End of explanation
syear_df = year_df[year_df['location_type'] == 'state'].copy()
syear_df.head()
Explanation: 2.6.4 Solution to exercise 2.6.3
End of explanation
latest_df = syear_df[syear_df['week'] == max_week].copy()
latest_df.head()
Explanation: 2.7 Grouping and ordering
2.7.1 Which states have more zika cases?
First we select the chunk that we want to use, the latest report data
End of explanation
sum_df = latest_df.groupby('state').sum()
sum_df.head()
# see how all the numerical columns ar added up, although the **week** column added doesn't make any sense
# pay attention to how the resulted DF has states as indexes
Explanation: Now we group the data by state to sum both local and travel cases
End of explanation
sorted_df = sum_df.sort_values(by='value', ascending=False)
sorted_df[:10]
Explanation: We order the df by the value column
End of explanation
# This is the matplotlib magic line; by default matplotlib defers drawing until the end
# of the script, but we need matplotlib to work interactively and draw the plots right away
%matplotlib inline
# Seaborn is a library that makes your plots prettier
import seaborn as sns
# we are drawing a plot bar with values in axis Y
# by default the indexes of the DF (states) are used for axis X
sorted_df[:10].plot.bar(y='value', figsize=(8,4), title='zika cases')
Explanation: 2.8 Plotting Data
matplotlib is one of the most common libraries for making 2D plots of arrays in Python; the Pandas library has a plot() method that wraps the plotting functionality of matplotlib. We'll try to use this method when possible, for simplicity.
Even so, in order to see the plots immediately in our script we need to use the matplotlib magic line as we'll see in the next paragraph. More about magic lines here.
2.8.1 Bar graph with the aggregated cases of zika in the US
End of explanation
# Remember how we sorted:
#sorted_df = sum_df.sort_values(by='value', ascending=False)
sorted_df[:10].plot.barh(y='value', figsize=(8,4), title='zika cases')
Explanation: Let's draw it horizontally:
End of explanation
# This is the code we used:
# sorted_df = sum_df.sort_values(by='value', ascending=False)
# sorted_df[:10].plot.barh(y='value', figsize=(8,4), title='zika cases')
# 1. sort the dataframe in an ascending way
# 2. get the last 10 positions of the dataframe instead of the first ten
Explanation: Wouldn't it be clearer if the highest value is at the top of the plot?
2.8.2 Exercise - try it on your own!
Try to do this exercise without having a look to the solution in the next paragraph
Draw the same horizontal bar graph that we've done before but ordered from the top to the bottom
You need to re-write the code we've done for the graph, read the instructions in the following cell
End of explanation
sorted_df = sum_df.sort_values(by='value', ascending=True)
sorted_df[-10:].plot.barh(y='value', figsize=(8,4), title='zika cases',color='sandybrown')
Explanation: 2.8.3 Solution to 2.8.2
End of explanation
# Remember that syear_df is our dataframe with the annual data for the states
weekly_df = syear_df.groupby('week').sum()
weekly_df.head()
# now the weeks are the new indexes
# again the X axis takes by default the indexes of the DF (the weeks)
weekly_df.plot.line(y='value',figsize=(8,4), title='zika evolution in the US', color='orange')
Explanation: Hint: color names for plotting:
http://matplotlib.org/mpl_examples/color/named_colors.hires.png
2.8.4 Linegraph with the evolution of cases through 2016 in the US
End of explanation
# Our plotting function returns the axes object, which allows us to access
# the rectangles and add text labels to them
# first reorder the data descendingly
sorted_df = sum_df.sort_values(by='value', ascending=False)
# access the axes object when creating the bar plot
axes = sorted_df[:10].plot.bar(y='value', figsize=(8,4), title='zika cases')
# loop over the rectangles and add a tex
for p in axes.patches:
axes.text(p.get_x() + p.get_width()/2, # x positions
p.get_height()-35, # y position
int(p.get_height()), # label text
ha='center', va='bottom', color='white', fontsize=9) # fontdict with font alignment and properties
Explanation: 2.8.5 Add some labels!! (extra)
End of explanation
# We'll use latest_df dataframe which contains the most recent values, already filtered only for states.
latest_df['travel'] = latest_df.loc[latest_df['data_field'] == 'zika_reported_travel','value']
latest_df['local'] = latest_df.loc[latest_df['data_field'] == 'zika_reported_local','value']
latest_df.head()
group_df = latest_df.groupby('state').sum()
group_df.sort_values(by="value", ascending = False, inplace = True)
group_df[:10].plot.bar(y=['travel','local'], stacked = True, figsize=(8,4), title='zika cases')
Explanation: 2.8.6 Stacked bar plots (extra)
We want to show in the same bar plot which cases of zika were transmitted locally and which cases were due to travelling to affected areas.
In order to plot two different variables travel and local we need to create two new columns for these variables.
End of explanation
import matplotlib.pyplot as plt
import matplotlib.dates as dates
# group by date returns a df with report_date as indexes
bydate_df = syear_df.groupby('report_date').sum()
# Series with the values to plot
y_values = bydate_df['value']
# Get the values of the indexes (our dates) and returns a list of Datetime objects
my_date_index = pd.DatetimeIndex(bydate_df.index.values).to_pydatetime()
f = plt.figure(figsize=(8, 4))
ax = f.gca() # get current axes
# set ticks location in the X axis, in the 14th day of each month (to see Sept)
ax.xaxis.set_major_locator(dates.MonthLocator(bymonthday=14))
# set ticks format in the X axis => (%b is abbreviated month)+new line+ year
ax.xaxis.set_major_formatter(dates.DateFormatter('%b\n%Y'))
plt.plot_date(
my_date_index, # x values
y_values, # y values
fmt='-', # format of the line
xdate=True, # x-axis will be labeled with dates
ydate=False, # y-axis don't
color='red')
Explanation: 2.8.7 Arranging the axes for dates (extra)
This example is a little bit more complex: in this case we have to go down to the original matplotlib library and use its plot_date method, which allows us to configure the axes according to our needs.
End of explanation
group_df.to_csv("output.csv")
Explanation: 2.9 Store data in a file
Again, consult the Pandas Input/Output commands to see the other available file formats.
End of explanation |
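For instance, the same DataFrame could also be written to other formats; a quick sketch (to_excel may require an extra engine such as openpyxl to be installed):
# a quick sketch of other output formats supported by pandas
group_df.to_json("output.json")
group_df.to_excel("output.xlsx")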
6,930 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Well near a straight river
Step1: Consider a well in the middle aquifer of a three aquifer system located at $(x,y)=(0,0)$. The well starts pumping at time $t=0$ at a discharge of $Q=1000$ m$^3$/d. Aquifer properties are the shown in table 3 (same as exercise 2). A stream runs North-South along the line $x=50$. The head along the stream is fixed.
Table 3 - Aquifer properties for exercise 3.
| Layer | $k$ (m/d) | $c$ (d) | $S$ | $S_s$ | $z_t$ (m) | $z_b$ (m)|
|---------------| ---------
Step2: Exercise 3b
Compute the discharge of the stream section (the stream depletion) as a function of time from $t=0.1$ till $t=1000$ days.
Step3: Exercise 3c
Make a contour plot of each layer after 100 days of pumping. Use 20 grid points in each direction (this may take a little time).
Step4: Exercise 3d
The discharge of the well is $Q=1000$ m$^3$/d for 100 days every summer. Compute the stream depletion for a five year period. | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from ttim import *
Explanation: Well near a straight river
End of explanation
ml = ModelMaq(kaq=[1, 20, 2], z=[25, 20, 18, 10, 8, 0], c=[1000, 2000],
Saq=[0.1, 1e-4, 1e-4], Sll=[0, 0], phreatictop=True,
tmin=0.1, tmax=1000)
w = Well(ml, xw=0, yw=0, rw=0.2, tsandQ=[(0, 1000)], layers=1, label='well 1')
yls = [-500, -300, -200, -100, -50, 0, 50, 100, 200, 300, 500]
xls = 50 * np.ones(len(yls))
ls1 = HeadLineSinkString(ml, list(zip(xls, yls)), tsandh='fixed', layers=0, label='river')
ml.solve()
ml.xsection(x1=-200, x2=200, npoints=100, t=100, layers=[0, 1, 2], sstart=-200)
Explanation: Consider a well in the middle aquifer of a three aquifer system located at $(x,y)=(0,0)$. The well starts pumping at time $t=0$ at a discharge of $Q=1000$ m$^3$/d. Aquifer properties are shown in table 3 (same as exercise 2). A stream runs North-South along the line $x=50$. The head along the stream is fixed.
Table 3 - Aquifer properties for exercise 3.
| Layer | $k$ (m/d) | $c$ (d) | $S$ | $S_s$ | $z_t$ (m) | $z_b$ (m)|
|---------------| ---------:| -------:| -----:| -----:| ---------:| --------:|
|Aquifer 0 | 1 | | 0.1 | | 25 | 20|
|Leaky layer 1 | | 1000 | |0 | 20 | 18|
|Aquifer 1 | 20 | | |0.0001 | 18 | 10|
|Leaky layer 2 | | 2000 | |0 | 10 | 8|
|Aquifer 2 | 2 | | |0.0001 | 8 | 0|
Exercise 3a
Model a 1000 m long section of the stream using 12 linesinks with $y$-endpoints at [-500,-300,-200,-100,-50,0,50,100,200,300,500]. Create a cross-section of the head along $y=0$ from $x=-200$ to $x=200$ in all 3 layers.
End of explanation
t = np.logspace(-1, 3, 100)
Q = ls1.discharge(t)
plt.semilogx(t, Q[0])
plt.ylabel('Q [m$^3$/d]')
plt.xlabel('time [days]');
Explanation: Exercise 3b
Compute the discharge of the stream section (the stream depletion) as a function of time from $t=0.1$ till $t=1000$ days.
End of explanation
ml.contour(win=[-200, 200, -200, 200], ngr=[20, 20], t=100, layers=0,
levels=20, color='b', labels='True', decimals=2, figsize=(8, 8))
Explanation: Exercise 3c
Make a contour plot of each layer after 100 days of pumping. Use 20 grid points in each direction (this may take a little time).
End of explanation
ml = ModelMaq(kaq=[1, 20, 2], z=[25, 20, 18, 10, 8, 0], c=[1000, 2000],
Saq=[0.1, 1e-4, 1e-4], Sll=[0, 0], phreatictop=True,
tmin=0.1, tmax=2000)
tsandQ=[(0, 1000), (100, 0), (365, 1000), (465, 0),
(730, 1000), (830, 0), (1095, 1000), (1195, 0),
(1460, 1000), (1560, 0)]
w = Well(ml, xw=0, yw=0, rw=0.2, tsandQ=tsandQ, layers=1, label='well 1')
yls = [-500, -300, -200, -100, -50, 0, 50, 100, 200, 300, 500]
xls = 50 * np.ones(len(yls))
ls1 = HeadLineSinkString(ml, list(zip(xls, yls)), tsandh='fixed', layers=0, label='river')
ml.solve()
t = np.linspace(0.1, 2000, 200)
Q = ls1.discharge(t)
plt.plot(t, Q[0])
plt.ylabel('Q [m$^3$/d]')
plt.xlabel('time [days]');
Explanation: Exercise 3d
The discharge of the well is $Q=1000$ m$^3$/d for 100 days every summer. Compute the stream depletion for a five year period.
End of explanation |
6,931 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Standard pandas imports
Step1: We'll be working with data from the bike rental setup
Step2: Let's inspect it
Step3: That must be the only numeric value, let's try again
Step4: Wow, those season_desc values look wrong! We'll come back to that in a minute.
Now let's look at some of the columns
Step5: Sanity checking the weather data
Step6: The season codes look OK (1 = Winter, 2 = Spring ...), but the season descriptions look wrong. Let's fix them
Step7: That looks right, but we would rather the date be our index
Step8: Let's start asking some questions
How many unique bikes are there?
Step9: How many unique stations are there?
Step10: How many of those actually appear in our usage data?
Step11: Let's look for correlations in the weather
Step12: When temperature is higher, there are more riders, and when windspeed is higher there are fewer riders. Maybe these trends are different at different times of the year?
Step13: So in winter higher temperatures mean more riders, but in summer higher temperatures mean fewer riders. This makes a good bit of sense.
Station Success
We'll measure a station's success by its average number of daily rentals. We can take a couple of different approaches.
The first is to use groupby. We'll start by adding a date column
Step14: We can also use a pivot table
Step15: There are invariably other ways to do this
Joins
We'll want to look at the avg daily trips by geographic location, which is in the stations data frame. Let's pull it out into its own
Step16: And then we need to make trips into a data frame
Step17: Getting Our Data Ready
Before we merge, we'd like to aggregate the usage data to the daily level | Python Code:
from pandas import DataFrame, Series
import pandas as pd
import numpy as np
Explanation: Standard pandas imports
End of explanation
weather = pd.read_table('daily_weather.tsv', parse_dates=['date'])
stations = pd.read_table('stations.tsv')
usage = pd.read_table('usage_2012.tsv', parse_dates=['time_start', 'time_end'])
Explanation: We'll be working with data from the bike rental setup:
End of explanation
usage.describe()
Explanation: Let's inspect it
End of explanation
usage
weather.describe()
weather
Explanation: That must be the only numeric value, let's try again:
End of explanation
weather.columns
weather['is_holiday'].value_counts()
weather['temp'].describe()
Explanation: Wow, those season_desc values look wrong! We'll come back to that in a minute.
Now let's look at some of the columns
End of explanation
weather.groupby(['season_code', 'season_desc'])['date'].agg([min, max])
Explanation: Sanity checking the weather data
End of explanation
weather.season_desc = weather.season_desc.map({'Spring' : 'Winter', 'Winter' : 'Fall', 'Fall' : 'Summer', 'Summer' : 'Spring' })
weather.season_desc
Explanation: The season codes look OK (1 = Winter, 2 = Spring ...), but the season descriptions look wrong. Let's fix them:
End of explanation
weather.index = pd.DatetimeIndex(weather['date'])
Explanation: That looks right, but we would rather the date be our index:
End of explanation
usage.columns
usage['bike_id'].describe()
usage['bike_id'].nunique()
Explanation: Let's start asking some questions
How many unique bikes are there?
End of explanation
stations.shape
len(stations)
Explanation: How many unique stations are there?
End of explanation
usage['station_start'].nunique()
usage['station_end'].nunique()
usage['station_start'].unique()
Explanation: How many of those actually appear in our usage data?
End of explanation
weather[['temp', 'subjective_temp', 'humidity', 'windspeed', 'total_riders']].corr()
Explanation: Let's look for correlations in the weather:
End of explanation
weather[weather.season_desc=='Winter'][['temp', 'subjective_temp', 'humidity', 'windspeed', 'total_riders']].corr()
weather[weather.season_desc=='Summer'][['temp', 'subjective_temp', 'humidity', 'windspeed', 'total_riders']].corr()
Explanation: When temperature is higher, there are more riders, and when windspeed is higher there are fewer riders. Maybe these trends are different at different times of the year?
End of explanation
usage['date'] = usage.time_start.dt.date
usage.groupby('date')
station_counts = usage.groupby(['station_start']).size() / 366
station_counts.sort()
station_counts
Explanation: So in winter higher temperatures mean more riders, but in summer higher temperatures mean fewer riders. This makes a good bit of sense.
Station Success
We'll measure a station's success by its average number of daily rentals. We can take a couple of different approaches.
The first is to use groupby. We'll start by adding a date column:
End of explanation
pivot = pd.pivot_table(usage, index='date', columns='station_start', values='bike_id', aggfunc=len, fill_value=0)
pivot
avg_daily_trips = pivot.mean()
avg_daily_trips.sort()
avg_daily_trips.index.name = 'station'
avg_daily_trips
Explanation: We can also use a pivot table:
End of explanation
station_geos = stations[['lat','long']]
station_geos.index = stations['station']
station_geos
Explanation: There are invariably other ways to do this
Joins
We'll want to look at the avg daily trips by geographic location, which is in the stations data frame. Let's pull it out into its own:
End of explanation
trips = DataFrame({ 'avg_daily_trips' : avg_daily_trips})
trips
trips_by_geo = station_geos.join(trips, how='inner')
trips_by_geo
Explanation: And then we need to make trips into a data frame
End of explanation
daily_usage = usage.groupby(['date', 'station_start', 'cust_type'], as_index=False)['duration_mins'].agg(['mean', len])
daily_usage.columns = ['avg_trip_duration', 'num_trips']
daily_usage
daily_usage = daily_usage.reset_index()
daily_usage.columns
daily_usage.index
stations.columns
weather.columns
weather_rentals = daily_usage.merge(weather, left_on='date', right_on='date')
weather_rentals
weather['date'] = weather['date'].dt.date
usage_weather = daily_usage.merge(weather)
usage_weather
uws = usage_weather.merge(stations, left_on='station_start', right_on='station')
uws
sorted(uws.columns)
uws['crossing']
Explanation: Getting Our Data Ready
Before we merge, we'd like to aggregate the usage data to the daily level:
End of explanation |
6,932 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pandas offers a powerful interface for data
manipulation and analysis, but the dataframe can be an opaque object that’s
hard to reason about in terms of its data types and other properties. Often
times you’ll have to manually inspect a dataframe at runtime to confirm its contents.
The careful reader may even be able to infer the datatypes of its columns by the kinds
of functions you apply to it, but at the cost of considerable cognitive overhead.
This problem is magnified in complex ETL pipelines that may involve many
transformations and joins, or collaborative contexts in which teams of data
scientists/engineers need to maintain pandas code in production. Even in
research or reporting settings, maintaining reproducible code can be a challenge
if the underlying dataset is corrupted or otherwise changed unexpectedly,
especially if the findings of an analysis leads to business-critical decisions.
Pandera is a validation toolkit to make
pandas data structures more transparent so it’s easier to reason about the
underlying schema of pandas data structures as they undergo various transformations.
In this post I’ll sketch out a situation that you may find yourself in where
using Pandera may save you and your team from a lot of headaches.
Update 12/31/2018
Step1: Simple enough! We can see that each row in this dataset is a service request record containing
metadata about different aspects of the request, like which borough the call came from, and which
agency responded to the call.
One thing we can do to make this code more readable would be to
explicitly specify the columns we want to use, and what type we expect them to be.
Step2: Wait, but we can do even better! Based on the project requirements and what we know about
these data either by reading the documentation or doing some
exploratory data analysis, we can make some stricter assertions
about them. We can do this very simply with pandera.
Defining a DataFrameSchema
Beyond the column presence and data type checks, we can make assertions about
the properties that the dataset must have in order to be considered valid.
We first define a DataFrameSchema, feeding it a dictionary where keys are
column names and values are Column objects, which are initialized with the
data type of the column and a Check or a list of Checks.
Step3: A Check takes a function as an argument with the signature x -> Bool
where x is a particular value in the column. In the code below you can
see that the status and borough column checks assert that all the values in
the column are in a pre-specified set of categories.
python
Check(lambda x
Step4: Multiple columns can also use the same Check objects. In the code snippet
below I've defined a date_min_check object that are used to verify the
due_date, and closed_date columns, along with the df_311_schema that
specifies the schema for the 311 data.
Step5: Once we've defined the DataFrameSchema, we can use it to verify the data.
I usually take this opportunity to create a preprocessing function that does some basic
filtering/transformations. In this case I'm going to assume that records with
closed_date < created_date are malformed data. There may some good reason the data is
this way, but for now so I'll be removing them from the analysis.
Step6: With a DataFrameSchema, not only can we see what to expect from
our input data, pandera also verifies that they fulfill these expectations
at runtime.
Suppose that for some unknown reason these data are corrupted at a future date.
pandera gives us useful error messages based on whether a column is missing, a
column has the incorrect data type, or whether a Check assertion failed.
For example, if some of the created_date values somehow fell out of the expected date
range due to a datetime parsing error, we receive a useful error message.
Step7: Or if a column isn't the expected type.
Step8: Or if the column is somehow not present in the dataframe.
Step9: Note that calling schema.validate(df) will return the validated dataframe,
so you would be able to easily refactor an existing function to perform schema
validation
Step10: Adding Guardrails around your Data Munging Pipeline
To obtain the three insights that we need to create our monthly report, we need
to manipulate the data. There's no single workflow for adding guard rails around your
data manipulation code, but a good rule of thumb is to compose a sequence of functions
together to do it. We can then use these functions as scaffolding to verify the
dataframe inputs/outputs of a function before they’re passed onto the next one.
Cleaning up Complaints
First we clean up the complaint_type column in order to address the first
question
Step11: Creating Derived Data
Next, we create a new column closed_lte_due which is a boolean column
where True indicates that the service request was closed before or at
the due_date. We'll need this derived data when answering the second question
Step12: Usage Note
Step13: Usage Note
Step14: Now we can pipe these functions in sequence to obtain our cleaned data.
Step16: Reproducible Reports
Step17: Proportion of Service Requests Closed on or Before the Due Date
For this question we'll compute the proportion of requests that were closed
on or before the due_date by agency_name, where we'll remove entries
that have null values or where the proportion is 0.
Step18: Daily Complaints per Borough
Here we have to count up all number of service requests per day by borough,
so we'll want to make sure that the number_of_complaints is a positive number
and that the borough values are in the BOROUGHS global variable that we defined
earlier.
Here we also normalize the per-borough counts by the respective population (per 1K). | Python Code:
import logging
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from collections import OrderedDict
from IPython.display import display, Markdown
from sodapy import Socrata
logging.disable(logging.WARNING)
# utility function to print python output as markdown snippets
def print_output(s):
display(Markdown("```python\n{}\n```".format(s)))
plt.style.use('seaborn-white')
%matplotlib inline
# define date range
DATE_RANGE = ("2018/12/01", "2018/12/31")
client = Socrata("data.cityofnewyork.us", None)
# get data from the beginning of the month
df_311 = pd.DataFrame.from_records(
client.get(
"erm2-nwe9",
# use socrata SoQL query clauses: https://dev.socrata.com/docs/queries/
where="created_date >= '%s' and created_date <= '%s'" % DATE_RANGE))
df_311.head(3)
Explanation: Pandas offers a powerful interface for data
manipulation and analysis, but the dataframe can be an opaque object that’s
hard to reason about in terms of its data types and other properties. Often
times you’ll have to manually inspect a dataframe at runtime to confirm its contents.
The careful reader may even be able to infer the datatypes of its columns by the kinds
of functions you apply to it, but at the cost of considerable cognitive overhead.
This problem is magnified in complex ETL pipelines that may involve many
transformations and joins, or collaborative contexts in which teams of data
scientists/engineers need to maintain pandas code in production. Even in
research or reporting settings, maintaining reproducible code can be a challenge
if the underlying dataset is corrupted or otherwise changed unexpectedly,
especially if the findings of an analysis leads to business-critical decisions.
Pandera is a validation toolkit to make
pandas data structures more transparent so it’s easier to reason about the
underlying schema of pandas data structures as they undergo various transformations.
In this post I’ll sketch out a situation that you may find yourself in where
using Pandera may save you and your team from a lot of headaches.
Update 12/31/2018: service request absolute counts are misleading,
since the underlying population of each borough varies. I updated the
transformation and plotting functions to normalize the counts by
population size.
Case Study: New York 311 Data
Suppose that you run a small data science shop, and one of your clients is the
New York mayor’s office. They’ve tasked you with creating monthly reports of
New York’s 311 calls containing insights about:
The most common complaints/descriptors by borough.
The proportion of service requests that are closed on or before the due date
by responding agency.
The number of complaints per day by complaint type and borough.
For the purposes of this exercise, let’s assume that this dataset is
periodically updated on the official data portal. Every month you need to
generate a new report (an html file) with some plots showing the relevant
summary statistics.
Dataset Quality Validation
The first thing we need to do is read the data into memory from
nycopendata.
End of explanation
# specify column names and types
usecols = OrderedDict([
("unique_key", str),
("borough", str),
("agency_name", str),
("created_date", "datetime64[ns]"),
("due_date", "datetime64[ns]"),
("closed_date", "datetime64[ns]"),
("complaint_type", str),
])
cols = list(usecols.keys())
# page through the results
MAX_PAGES = 500
LIMIT = 10000
records = []
print("fetching 311 data:")
for i in range(MAX_PAGES):
results = client.get(
"erm2-nwe9",
select=",".join(cols),
where="created_date >= '%s' and created_date <= '%s'" % DATE_RANGE,
order="created_date",
limit=LIMIT,
offset=LIMIT * i)
print(".", end="", flush=True)
records.extend(results)
if len(results) < LIMIT:
break
df_311 = pd.DataFrame.from_records(records)[cols]
df_311 = df_311.astype(usecols)
display(df_311.head(3))
Explanation: Simple enough! We can see that each row in this dataset is a service request record containing
metadata about different aspects of the request, like which borough the call came from, and which
agency responded to the call.
One thing we can do to make this code more readable would be to
explicitly specify the columns we want to use, and what type we expect them to be.
End of explanation
from pandera import DataFrameSchema, Check, Column, Index, Bool, \
DateTime, Float, Int, String
schema = DataFrameSchema({
"column1": Column(String),
"column2": Column(Int, Check(lambda x: x > 0)),
"column3": Column(Float, [
Check(lambda x: x > 0.),
Check(lambda s: s.mean() > 0.5, element_wise=False)])
})
Explanation: Wait, but we can do even better! Based on the project requirements and what we know about
these data either by reading the documentation or doing some
exploratory data analysis, we can make some stricter assertions
about them. We can do this very simply with pandera.
Defining a DataFrameSchema
Beyond the column presence and data type checks, we can make assertions about
the properties that the dataset must have in order to be considered valid.
We first define a DataFrameSchema, feeding it a dictionary where keys are
column names and values are Column objects, which are initialized with the
data type of the column and a Check or a list of Checks.
End of explanation
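To make the schema contract concrete, here is a minimal sanity check against the toy schema defined above. The three-row frame is made-up illustrative data, and validate() simply returns the dataframe when every check passes (as this post notes later).

```python
# Hypothetical data that satisfies all three column checks above.
df_ok = pd.DataFrame({
    "column1": ["a", "b", "c"],
    "column2": [1, 2, 3],
    "column3": [0.9, 0.8, 0.7],
})
# validate() returns the (unchanged) dataframe when validation succeeds.
print(schema.validate(df_ok).shape)  # (3, 3)
```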
from pandera import SeriesSchema
s = pd.Series([1, 1, 2, 3])
series_schema = SeriesSchema(
Int, Check(lambda s: s.duplicated().sum() == 0, element_wise=False,
error="failed uniqueness check"))
try:
series_schema.validate(s)
except Exception as e:
print_output(e)
Explanation: A Check takes a function as an argument with the signature x -> Bool
where x is a particular value in the column. In the code below you can
see that the status and borough column checks assert that all the values in
the column are in a pre-specified set of categories.
python
Check(lambda x: x in ["category_1", "category_2"]);
You can create vectorized checks by specifying element_wise=False (True by default), which
changes the expected function signature to s -> Bool|Series[Bool], where s is a pandas
Series and the return value can either be Bool or a boolean Series.
```python
# the checking function resolves to a boolean
Check(lambda s: s.mean() > 0.5, element_wise=False);
# the checking function can also resolve to a boolean Series
Check(lambda s: s > 0, element_wise=False);
```
For human-friendly error messages, you can also supply an error argument with the message
to raise if the check fails. We can check this functionality out with a SeriesSchema, which
has a similar API to the DataFrameSchema.
End of explanation
# define a date range checker
date_range_check = Check(
lambda s: (s >= pd.Timestamp(DATE_RANGE[0])) &
(s <= pd.Timestamp(DATE_RANGE[1])),
element_wise=False)
date_min_check = Check(
lambda s: s >= pd.Timestamp(DATE_RANGE[0]),
element_wise=False)
BOROUGHS = [
"BROOKLYN",
"QUEENS",
"BRONX",
"MANHATTAN",
"STATEN ISLAND",
"Unspecified"]
# constructing a schema should feel familiar for pandas users
df_311_schema = DataFrameSchema({
# make sure unique_key is unique
"unique_key": Column(String, Check(lambda s: s.duplicated().sum() == 0,
element_wise=False,
error="column is not unique")),
# assert borough column contain proper values
"borough": Column(String, Check(lambda x: x in BOROUGHS,
error="borough check failed")),
"agency_name": Column(String),
# assert that records are within the date range
"created_date": Column(DateTime, date_range_check),
"due_date": Column(DateTime, date_min_check, nullable=True),
"closed_date": Column(DateTime, date_min_check, nullable=True),
"complaint_type": Column(String),
})
Explanation: Multiple columns can also use the same Check objects. In the code snippet
below I've defined a date_min_check object that is used to verify the
due_date and closed_date columns, along with the df_311_schema that
specifies the schema for the 311 data.
End of explanation
def preprocess_data(df):
# remove records where closed_date occurs before created_date
df = df[~(df.closed_date < df.created_date)]
return df
preprocessed_df_311 = df_311_schema.validate(preprocess_data(df_311))
Explanation: Once we've defined the DataFrameSchema, we can use it to verify the data.
I usually take this opportunity to create a preprocessing function that does some basic
filtering/transformations. In this case I'm going to assume that records with
closed_date < created_date are malformed data. There may be some good reason the data is
this way, but for now I'll be removing them from the analysis.
End of explanation
df_311_corrupt = df_311.copy()
df_311_corrupt["created_date"].iloc[:5] = df_311_corrupt[
"created_date"].head(5) - pd.Timedelta(weeks=10)
try:
df_311_schema.validate(df_311_corrupt)
except Exception as e:
print_output(e.code)
Explanation: With a DataFrameSchema, not only can we see what to expect from
our input data, pandera also verifies that they fulfill these expectations
at runtime.
Suppose that for some unknown reason these data are corrupted at a future date.
pandera gives us useful error messages based on whether a column is missing, a
column has the incorrect data type, or whether a Check assertion failed.
For example, if some of the created_date values somehow fell out of the expected date
range due to a datetime parsing error, we receive a useful error message.
End of explanation
df_311_corrupt = df_311.copy().assign(
unique_key=df_311.unique_key.astype(int))
try:
df_311_schema.validate(df_311_corrupt)
except Exception as e:
print_output(e.code)
Explanation: Or if a column isn't the expected type.
End of explanation
df_311_corrupt = df_311.copy().drop("complaint_type", axis=1)
try:
df_311_schema.validate(df_311_corrupt)
except Exception as e:
print_output(e.code)
Explanation: Or if the column is somehow not present in the dataframe.
End of explanation
def processing_function(df):
# do something
...
return processed_df
def processing_function(df):
# do something
...
# validate the output
return schema.validate(processed_df)
def processing_function(df):
# validate the input
df = schema.validate(df)
# do something
...
return processed_df
Explanation: Note that calling schema.validate(df) will return the validated dataframe,
so you would be able to easily refactor an existing function to perform schema
validation:
End of explanation
REPLACE_DICT = {
"Noise - Residential": "Noise",
"Noise - Street/Sidewalk": "Noise",
"Noise - Commercial": "Noise",
"Noise - Park": "Noise",
"Noise - Helicopter": "Noise",
"Noise - Vehicle": "Noise",
}
clean_complaint_schema = DataFrameSchema({
"complaint_type_clean": Column(String, [
Check(lambda x: x not in REPLACE_DICT),
Check(lambda s: (s == "Noise").any(), element_wise=False)
])
})
def clean_complaint_type(df):
clean_df = (
df.assign(complaint_type_clean=df.complaint_type)
.replace({"complaint_type_clean": REPLACE_DICT})
)
return clean_complaint_schema.validate(clean_df)
clean_complaint_type(df_311).head(3)
Explanation: Adding Guardrails around your Data Munging Pipeline
To obtain the three insights that we need to create our monthly report, we need
to manipulate the data. There's no single workflow for adding guard rails around your
data manipulation code, but a good rule of thumb is to compose a sequence of functions
together to do it. We can then use these functions as scaffolding to verify the
dataframe inputs/outputs of a function before they’re passed onto the next one.
Cleaning up Complaints
First we clean up the complaint_type column in order to address the first
question:
The most common complaints by borough.
In this case we'll be re-mapping a few of the values in the complaint_type column
and then validating the output of the function with a DataFrameSchema.
End of explanation
from pandera import check_output
@check_output(DataFrameSchema({"closed_lte_due": Column(Bool, nullable=True)}))
def add_closed_lte_due(df):
return df.assign(
closed_lte_due=(
(df.closed_date <= df.due_date)
.where(df.due_date.notnull(), pd.NaT))
)
add_closed_lte_due(df_311).head(3)
Explanation: Creating Derived Data
Next, we create a new column closed_lte_due which is a boolean column
where True indicates that the service request was closed before or at
the due_date. We'll need this derived data when answering the second question:
The proportion of service requests that are closed on or before the due date
by responding agency.
In this case, we'll use the check_output decorator as a convenience to validate
the output of the function (which is assumed to be a dataframe).
End of explanation
from pandera import check_input
@check_input(DataFrameSchema({"created_date": Column(DateTime)}))
@check_output(DataFrameSchema({"created_date_clean": Column(DateTime)}))
def clean_created_date(df):
return (
df.assign(created_date_clean=(
df.created_date
            .dt.strftime("%Y-%m-%d")
.astype("datetime64[ns]")))
)
clean_created_date(df_311).head(3)
Explanation: Usage Note:
You can specify where the dataframe is in the output structure, where the
default assumes a single dataframe as an output.
python
@check_output(schema)
def my_function(df):
# do stuff
return df
Or supply an integer, indexing where the dataframe is in a tuple output.
python
@check_output(schema, 2)
def my_function(df):
...
return x, y, df
And for more complex outputs, supply a lambda function to specify how to
pull the dataframe from python objects.
python
@check_output(schema, lambda out: out[2]["df_key"])
def my_function(df):
...
return x, y, {"df_key": df}
Cleaning Created Date
The following transformation cleans up the created_date column and creates a new column
created_date_clean with the format YYYY-MM-DD. We'll need this in order
to count up the number of records created per day for the last question:
Number of complaints recorded per day by complaint type and borough.
For this last function, we'll be validating both the inputs and outputs of
our function with check_input and check_output, respectively. Checking
the input is probably not necessary at this point, but it just illustrates
how one can define validation points at the input or output level.
End of explanation
BOROUGH_POPULATION_MAP = {
"BROOKLYN": 2648771,
"QUEENS": 2358582,
"BRONX": 1471160,
"MANHATTAN": 1664727,
"STATEN ISLAND": 479458,
}
@check_output(DataFrameSchema({
"borough_population": Column(
Float, Check(lambda x: x > 0),nullable=True)
}))
def add_borough_population(df):
return df.assign(
borough_population=df.borough.map(BOROUGH_POPULATION_MAP))
add_borough_population(df_311).head(3)
Explanation: Usage Note:
Using @check_input, you can specify which positional or key-word argument
references the dataframe, where the default assumes the first argument is the
dataframe/series to check.
```python
@check_input(schema, 1)
def my_function(x, dataframe):
...
@check_input(schema, "dataframe")
def my_function(x, dataframe):
...
```
Joining with External Data Sources
Since our analysis involves counting up complaints by borough, we'll need to normalize
the counts by dividing them by borough population estimates.
You can imagine that your script calls some API that sends these estimates for you, but
for now we're going to hard-code them here. These numbers are taken from
NYC.gov.
End of explanation
clean_df_311 = (
df_311
.pipe(clean_complaint_type)
.pipe(add_closed_lte_due)
.pipe(clean_created_date)
.pipe(add_borough_population))
Explanation: Now we can pipe these functions in sequence to obtain our cleaned data.
End of explanation
complaint_by_borough_schema = DataFrameSchema({
"borough": Column(String),
"borough_population": Column(Float, nullable=True),
"complaint_type_clean": Column(String),
# make sure count column contains positive integers
"count": Column(Int, Check(lambda x: x > 0)),
"complaints_per_pop": Column(Float)
})
TOP_N = 12
COMPLAINT_TYPE_TITLE = \
"%s ( %s - %s )" % (
"Number of New York 311 service requests by borough and complaint type",
DATE_RANGE[0], DATE_RANGE[1])
def normalize_by_population(
df: pd.DataFrame,
count: str,
population: str,
scale: float) -> pd.Series:
    """Normalize counts by population at the given scale."""
return df[count] / (df[population] / scale)
@check_output(complaint_by_borough_schema)
def agg_complaint_types_by_borough(clean_df):
plot_df = (
clean_df
.groupby(["borough", "borough_population", "complaint_type_clean"])
.unique_key.count()
.rename("count")
.reset_index()
.assign(complaints_per_pop=lambda df: (
normalize_by_population(df, "count", "borough_population", 10**6)))
)
# select only the top 12 complaint types (across all boroughs)
top_complaints = (
clean_df.complaint_type_clean
.value_counts()
.sort_values(ascending=False)
.head(TOP_N).index.tolist())
return plot_df[plot_df.complaint_type_clean.isin(top_complaints)]
# this is probably overkill, but this illustrates that you can
# add schema checks at the interface of two functions.
@check_input(complaint_by_borough_schema)
def plot_complaint_types_by_borough(complaint_by_borough_df):
g = sns.catplot(
x="complaints_per_pop",
y="borough",
col="complaint_type_clean",
col_wrap=3,
data=complaint_by_borough_df,
kind="bar",
height=3,
aspect=1.4,
sharex=False,
)
g.set_titles(template="{col_name}")
g.set_ylabels("")
g.set_xlabels("n complaints / 1M people")
g.fig.suptitle(COMPLAINT_TYPE_TITLE, y=1.05, fontweight="bold", fontsize=18)
plt.tight_layout()
plt.subplots_adjust(hspace=0.6, wspace=0.4)
sns.despine(left=True, bottom=True)
for ax in g.axes.ravel():
ax.tick_params(left=False)
return g
with sns.plotting_context(context="notebook", font_scale=1.2):
g = agg_complaint_types_by_borough(clean_df_311).pipe(
plot_complaint_types_by_borough)
Explanation: Reproducible Reports: Validate Analysis and Plotting Code
Now that we have all the derived data we need to produce our report, we can now
compute summary statistics and create plots for the final product.
Here it’s useful to think of our data manipulation code as the “backend” data
and our insight-generating code as the “frontend” data.
So at this point we need to reshape our “backend” data into the appropriate
aggregated form that can be easily plotted. Here pandera can help by
clarifying what our aggregation and plotting functions can expect.
Count of the Most Common Complaints by Borough
First we select records belonging to the top 12 complaint types and
count them up by borough and complaint_type_clean. These aggregated
data can then be used to produce a plot of the count of complaints in
the last quarter vs. borough, faceted by complaint_type_clean.
Note that here we normalize the per-borough counts by the respective population, so the normalized count interpretation would be "number of complaints per 1
million people".
End of explanation
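As a quick sanity check on the per-1M normalization described above, here is a worked example using the Brooklyn population hard-coded earlier (the complaint count is made up for illustration):

```python
# 500 complaints in Brooklyn (pop. 2,648,771) ~= 189 complaints per 1M people.
example = pd.DataFrame({"count": [500], "borough_population": [2648771.0]})
print(normalize_by_population(example, "count", "borough_population", 10**6).iloc[0])  # ~188.77
```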
proportion_by_agency_schema = DataFrameSchema({
"agency_name": Column(String),
"proportion_closed_on_time": Column(
Float, Check(lambda x: 0 <= x <= 1), nullable=True)
})
PROPORTION_BY_AGENCY_TITLE = \
"%s ( %s - %s )" % (
"Proportion of New York 311 requests closed on time by Responding Agency",
DATE_RANGE[0], DATE_RANGE[1])
@check_output(proportion_by_agency_schema)
def agg_proportion_by_agency(clean_df):
return (
clean_df.groupby("agency_name")
.closed_lte_due.apply(lambda s: s.mean() if s.count() > 0 else np.nan)
.dropna()
.rename("proportion_closed_on_time")
.reset_index("agency_name")
.query("proportion_closed_on_time > 0")
)
@check_input(proportion_by_agency_schema)
def plot_proportion_by_agency(proportion_by_agency_df):
g = sns.catplot(
x="proportion_closed_on_time", y="agency_name",
order=proportion_by_agency_df.sort_values(
"proportion_closed_on_time", ascending=False).agency_name,
data=proportion_by_agency_df,
kind="bar",
height=8,
aspect=1.4)
sns.despine(left=True, bottom=True)
g.set_ylabels("")
g.set_xlabels("proportion closed on time")
for ax in g.axes.ravel():
ax.tick_params(left=False)
g.fig.suptitle(PROPORTION_BY_AGENCY_TITLE, y=1.03, fontweight="bold", fontsize=14)
return g
with sns.plotting_context(context="notebook", font_scale=1.1):
axes = plot_proportion_by_agency(agg_proportion_by_agency(clean_df_311))
Explanation: Proportion of Service Requests Closed on or Before the Due Date
For this question we'll compute the proportion of requests that were closed
on or before the due_date by agency_name, where we'll remove entries
that have null values or where the proportion is 0.
End of explanation
daily_complaints_schema = DataFrameSchema({
"created_date_clean": Column(DateTime, Check(lambda x: x >= pd.Timestamp(DATE_RANGE[0]))),
"borough": Column(String, Check(lambda x: x in BOROUGHS)),
"borough_population": Column(Float, nullable=True),
"count": Column(Int, Check(lambda x: x > 0)),
"complaints_per_pop": Column(Float, nullable=True)
})
DAILY_COMPLAINTS_TITLE = \
"%s ( %s - %s )" % (
"Number of daily New York 311 requests by borough",
DATE_RANGE[0], DATE_RANGE[1])
@check_output(daily_complaints_schema)
def agg_daily_complaints(clean_df):
return (
clean_df
.groupby(["borough", "borough_population", "created_date_clean"])
.unique_key.count().rename("count")
.reset_index()
.assign(complaints_per_pop=lambda df: (
normalize_by_population(df, "count", "borough_population", 10**3))))
@check_input(daily_complaints_schema)
def plot_daily_complaints(daily_complaints_df):
fig, ax = plt.subplots(1, figsize=(12, 6))
ax = sns.lineplot(
x="created_date_clean", y="complaints_per_pop", hue="borough",
data=daily_complaints_df, ax=ax)
sns.despine()
ax.set_ylabel("n complaints / 1K people")
ax.set_xlabel("created on")
fig.suptitle(DAILY_COMPLAINTS_TITLE, y=0.99, fontweight="bold", fontsize=16)
return ax
with sns.plotting_context(context="notebook", font_scale=1.2):
plot_daily_complaints(agg_daily_complaints(clean_df_311))
Explanation: Daily Complaints per Borough
Here we have to count up the number of service requests per day by borough,
so we'll want to make sure that the number_of_complaints is a positive number
and that the borough values are in the BOROUGHS global variable that we defined
earlier.
Here we also normalize the per-borough counts by the respective population (per 1K).
End of explanation |
6,933 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
3章 ニューラルネットワーク
パーセプトロンは複雑な関数を表現できるが、重みは人力で設定する必要があった。
ニューラルネットワークでは適切な重みパラメータをデータから自動で学習できる性質が備わっている
3.1 パーセプトロンからニューラルネットワークへ
3.1.1 ニューラルネットワークの例
例として下図のようなネットワークがある。中間層は隠れ層とも呼ばれる。入力層から出力層へ向かい第0層、第1層、第2層と呼ぶ。(重みを持つ層は実質2層なので2層のネットワークと呼ぶ。書籍によっては異なるので注意。)
Step1: 3.1.2 パーセプトロンの復習
パーセプトロンは以下式で表すことができた。
$$
y
= \begin{cases}
& \ 0 \; (b + w_{1}x_{1} + w_{2}x_{2} \leq 0) \
& \ 1 \; (b + w_{1}x_{1} + w_{2}x_{2} > 0)
\end{cases}
$$
バイアスはニューロンの発火のしやすさ、重みは各信号の重要性をコントロールしている。
上記の式を簡略化すると以下2つの式となる。
$$
y = h(b + w_{1}x_{1} + w_{2}x_{2})
$$
$$
h(x)
= \begin{cases}
& \ 0 \; (x \leq 0) \
& \ 1 \; (x > 0)
\end{cases}
$$
入力信号の総和がh(x)という関数で変換されて出力yとなる。
3.1.3 活性化関数の登場
h(x)は一般的に活性化関数(activation function)と呼ばれる。
以前の活性化関数を含む式を変形すると以下といえる。
$$
a = b + w_{1}x_{1} + w_{2}x_{2}
$$
$$
y
= h(a)
$$
これを図示すると以下となる。
Step2: 3.2 活性化関数
閾値を堺にして出力が切り替わる関数は「ステップ関数」、「階段関数」と呼ばれる。
パーセプトロンでは活性化関数にステップ関数を用いていた。
ニューラルネットワークでは活性化関数を別の関数に変更する。
3.2.1 シグモイド関数
活性化関数としてシグモイド関数を用いる
$$
h(x) = \frac{1}{1+exp(-x)}
$$
exp(-x)は$e^{-x}$を表す。
3.2.2 ステップ関数の実装
Step3: 3.2.3 ステップ関数のグラフ
Step4: 3.2.4 シグモイド関数の実装
Step5: 3.2.5 シグモイド関数とステップ関数の比較
シグモイド関数はステップ関数と比べるとなめらかなグラフになる。(実践がシグモイド、破線がステップ)
ステップ関数は0or1を返却するが、シグモイド関数は0~1の間の実数を返す。共通の性質としては入力が以下に小さいもしくは大きくてもその値域は[0,1]となる。
Step6: 3.2.6 非線形関数
ステップ関数とシグモイド関数の共通点としては、非線形関数であるということ。(線形は直線、非線形は曲線を描く)
ニューラルネットワークでは活性化関数に線形関数を用いてはいけない。それは層を深くすることの意味がなくなってしまうため。
3.2.7 ReLU関数
活性化関数にReLU(Rectified Linear Unit)が用いられる事がある。数式で表すと以下。
$$
y
= \begin{cases}
& \ x \; (x > 0) \
& \ 0 \; (x \leq 0)
\end{cases}
$$
Step7: 3.3 多次元配列の計算
3.3.1 多次元配列
Step8: 3.3.2 行列の内積
Step9: 3.3.3 ニューラルネットワークの内積
バイアスと活性化関数を省略し、重みだけのニューラルネットワークの実装を行なう。
行列の内積を用いるとループなどを用いずに一度に計算することができる。
Step10: 3.4 3層ニューラルネットワークの実装
3.4.1 記号の確認
ここでは以下のように重みを表現する。
$$
w_{1 \; 2}^{(1)}
$$
上段((1))は第1層目の重み
下段左(1)は次層の1番目のニューロン、
下段右(2)は前層の2番目のニューロンを表す。
3.4.2 角層における信号伝達の実装
第1層目の1番目のニューロンへ信号伝達は以下式となる。
$$
a_{1}^{(1)} = w_{1\;1}^{(1)} x_{1} + w_{1\;2}^{(1)} x_{2} + b_{1}^{(1)}
$$
行列の内積を用いると以下式で表される。
$$
A_{1}^{(1)} = XW^{(1)} + B^{(1)}
$$
$$
X = (x1 \; x2)
$$
$$
B^{(1)} = (b_{1}^{(1)} \; b_{2}^{(1)} \; b_{3}^{(1)})
$$
$$
W_{1} = \begin{pmatrix}
w_{1\;1}^{(1)} & w_{2\;1}^{(1)} & w_{3\;1}^{(1)} \
w_{1\;2}^{(1)} & w_{2\;2}^{(1)} & w_{3\;2}^{(1)} \
\end{pmatrix}
$$
実装は以下。
Step11: 活性化関数としてシグモイド関数を利用した場合は以下となる。
重み付き和はa、活性化関数で変換された信号をzとする。活性化関数はh()とする。
Step12: 3.4.3 実装のまとめ
これまで行った実装をまとめると以下となる。ニューラルネットワークの慣例として重みだけは大文字をもちいてWを使用する。
Step13: 3.5 出力層の設計
分類問題と回帰問題のどちらに用いるかで、出力層の活性化関数を変更する必要がある
例)
* 分類
各クラスに属する確率
回帰
数値
3.5.1 恒等関数とソフトマックス関数
恒等関数は入力をそのまま出力する関数。(回帰で利用)
ソフトマックス関数は以下で表される。(分類で利用)
出力層がn個あった場合に、k番目の出力$y_{k}$を求める式。
$$
y_{k} = \frac{exp(a_{k})}{\sum_{i=1}^{n}exp(a_{i})}
$$
Step14: 3.5.3 ソフトマックス関数の実装上の注意
指数関数の計算を行なう際、大きな値になってしまいオーバーフローを起こすおそれがある。ソフトマックス中の分子分母両者の指数関数の計算において定数を減算することによって桁あふれを防ぐ。(結果は変わらない)
定数は入力信号の最大値を用いることが一般的。
$$
y_{k} = \frac{exp(a_{k})}{\sum_{i=1}^{n}exp(a_{i})} \
= \frac{exp(a_{k})}{\sum_{i=1}^{n}exp(a_{i})} \
= \frac{Cexp(a_{k})}{C\sum_{i=1}^{n}exp(a_{i})} \
= \frac{exp(a_{k} + logC)}{\sum_{i=1}^{n}exp(a_{i} + logC)} \
= \frac{exp(a_{k} + C')}{\sum_{i=1}^{n}exp(a_{i} + C')} \
$$
Step15: 3.5.3 ソフトマックス関数の特徴
ソフトマックス関数は0~1の間の実数を返却する。また、ソフトマックス関数の出力の総和は1となる。これを「確率」として解釈することで分類に利用できる。
ただし、ソフトマックス関数を適用しても入力値各要素の大小関係は変わらない。(指数関数が単調増加する性質であるため)
よって、ニューラルネットワークのクラス分類の推論ではソフトマックス関数を省略し、最も値の大きいニューロンに相当するクラスを推定クラスとして用いる。
(出力層にソフトマックス関数を用いる理由は学習時に関係する。)
3.5.4 出力層のニューロンの数
出力層のニューロンの数は解くべき問題に応じて定める。クラス分類では分類したいクラスの数に設定するのが一般的。
3.6 手書き数字認識
Step16: 3.6.2 ニューラルネットワークの推論処理
推論を行なうニューラルネットワークを実装する。入力層を784個(画像サイズより)、出力層を10個(推定する数字のクラス0~9)、隠れ層を2層(第一層を50個、第二層を100個のニューロン。50、100個は任意に設定した)とする。
load_mnist中でnormalizeを行っており、画像の各ピクセル値を255で除算している。その結果、0.0~1.0の範囲に収まるように変換されている。
データをある決まった範囲に変換する処理を正規化(normalization)と言う。このようなニューラルネットワークの入力データに対して決まった変換を行なうことは前処理(pre-processiong)と呼ばれる。(入力画像データに対して前処理として正規化を行った)
Step17: 3.6.3 バッチ処理
入力データと重みパラメータの形状に注目してみると、次元の要素数の変遷は以下になる。
隣接する次元が一致している。
X → W1 → W2 → W3 → Y
784 → 784×50 → 50×100 → 100×10 → 10
画像を複数枚まとめて処理する場合は以下。(100枚分を処理する場合)
X → W1 → W2 → W3 → Y
100×784 → 784×50 → 50×100 → 100×10 → 100×10
これで一度に100枚分の入力データの推論が出来る。この入力データのまとまりをバッチ(batch)と呼ぶ。これにより処理時間の短縮が図れる。 | Python Code:
import matplotlib.pyplot as plt
from matplotlib.image import imread
from graphviz import Digraph
f = Digraph(format="png")
f.attr(rankdir='LR')
f.attr('node', shape='circle')
f.node('x1','')
f.node('x2','')
f.node('s1','')
f.node('s2','')
f.node('s3','')
f.node('y1','')
f.node('y2','')
with f.subgraph(name='cluster_1') as c:
c.node('x1')
c.node('x2')
c.attr(color='white')
c.attr(label='入力層')
with f.subgraph(name='cluster_2') as c:
c.node('s1')
c.node('s2')
c.node('s3')
c.attr(color='white')
c.attr(label='中間層')
with f.subgraph(name='cluster_3') as c:
c.node('y1')
c.node('y2')
c.attr(color='white')
c.attr(label='出力層')
f.edge('x1', 's1', len='1.00')
f.edge('x1', 's2', len='10')
f.edge('x1', 's3', len='10')
f.edge('x2', 's1', len='10')
f.edge('x2', 's2', len='10')
f.edge('x2', 's3', len='10')
f.edge('s1', 'y1', len='10')
f.edge('s2', 'y1', len='10')
f.edge('s3', 'y1', len='10')
f.edge('s1', 'y2', len='10')
f.edge('s2', 'y2', len='10')
f.edge('s3', 'y2', len='10')
f.render("../docs/neural_network")
img = imread('../docs/neural_network.png')
plt.figure(figsize=(6,4))
plt.imshow(img)
ax = plt.gca() # get current axis
# 枠線非表示
#ax.spines["right"].set_color("none") # 右消し
#ax.spines["left"].set_color("none") # 左消し
#ax.spines["top"].set_color("none") # 上消し
#ax.spines["bottom"].set_color("none") # 下消し
## 目盛を非表示にする
ax.tick_params(axis='x', which='both', top='off', bottom='off', labelbottom='off')
ax.tick_params(axis='y', which='both', left='off', right='off', labelleft='off')
plt.show()
Explanation: 3章 ニューラルネットワーク
パーセプトロンは複雑な関数を表現できるが、重みは人力で設定する必要があった。
ニューラルネットワークでは適切な重みパラメータをデータから自動で学習できる性質が備わっている
3.1 パーセプトロンからニューラルネットワークへ
3.1.1 ニューラルネットワークの例
例として下図のようなネットワークがある。中間層は隠れ層とも呼ばれる。入力層から出力層へ向かい第0層、第1層、第2層と呼ぶ。(重みを持つ層は実質2層なので2層のネットワークと呼ぶ。書籍によっては異なるので注意。)
End of explanation
import matplotlib.pyplot as plt
from matplotlib.image import imread
from graphviz import Digraph
f = Digraph(format="png")
f.attr(rankdir='LR')
f.attr('node', shape='circle')
with f.subgraph(name='cluster_0') as c:
c.edge('a', 'y', label='h()')
c.attr(style='rounded')
f.edge('1', 'a', label='b')
f.edge('x1', 'a', label='w1')
f.edge('x2', 'a', label='w2')
f.render("../docs/activation_function")
img = imread('../docs/activation_function.png')
plt.figure(figsize=(6,4))
plt.imshow(img)
ax = plt.gca() # get current axis
# 枠線非表示
#ax.spines["right"].set_color("none") # 右消し
#ax.spines["left"].set_color("none") # 左消し
#ax.spines["top"].set_color("none") # 上消し
#ax.spines["bottom"].set_color("none") # 下消し
## 目盛を非表示にする
ax.tick_params(axis='x', which='both', top='off', bottom='off', labelbottom='off')
ax.tick_params(axis='y', which='both', left='off', right='off', labelleft='off')
plt.show()
Explanation: 3.1.2 パーセプトロンの復習
パーセプトロンは以下式で表すことができた。
$$
y
= \begin{cases}
& \ 0 \; (b + w_{1}x_{1} + w_{2}x_{2} \leq 0) \
& \ 1 \; (b + w_{1}x_{1} + w_{2}x_{2} > 0)
\end{cases}
$$
バイアスはニューロンの発火のしやすさ、重みは各信号の重要性をコントロールしている。
上記の式を簡略化すると以下2つの式となる。
$$
y = h(b + w_{1}x_{1} + w_{2}x_{2})
$$
$$
h(x)
= \begin{cases}
& \ 0 \; (x \leq 0) \
& \ 1 \; (x > 0)
\end{cases}
$$
入力信号の総和がh(x)という関数で変換されて出力yとなる。
3.1.3 活性化関数の登場
h(x)は一般的に活性化関数(activation function)と呼ばれる。
以前の活性化関数を含む式を変形すると以下といえる。
$$
a = b + w_{1}x_{1} + w_{2}x_{2}
$$
$$
y
= h(a)
$$
これを図示すると以下となる。
End of explanation
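A minimal numeric sketch of the two formulas above, a = b + w1*x1 + w2*x2 followed by y = h(a) with a step activation; the weights and bias here are arbitrary example values, not taken from the text.

```python
import numpy as np

x = np.array([1.0, 0.5])       # inputs x1, x2
w = np.array([0.5, 0.5])       # weights w1, w2 (example values)
b = -0.7                       # bias (example value)
a = b + np.sum(w * x)          # weighted sum plus bias
y = int(a > 0)                 # step activation h(a)
print(a, y)                    # ~0.05 1
```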
# xは実数のみ
import numpy as np
def step_function_from_num(x):
if x > 0:
return 1
else:
return 0
# 配列対応の場合
def step_function_from_array(x):
y = x > 0
return y.astype(np.int)
print(step_function_from_array(np.array([-0.1, 0, 1])))
Explanation: 3.2 活性化関数
閾値を堺にして出力が切り替わる関数は「ステップ関数」、「階段関数」と呼ばれる。
パーセプトロンでは活性化関数にステップ関数を用いていた。
ニューラルネットワークでは活性化関数を別の関数に変更する。
3.2.1 シグモイド関数
活性化関数としてシグモイド関数を用いる
$$
h(x) = \frac{1}{1+exp(-x)}
$$
exp(-x)は$e^{-x}$を表す。
3.2.2 ステップ関数の実装
End of explanation
import numpy as np
import matplotlib.pylab as plt
def step_function(x):
return np.array(x > 0, dtype=np.int)
x = np.arange(-5.0, 5.0, 0.1)
y = step_function(x)
plt.plot(x, y)
plt.ylim(-0.1, 1.1)
plt.show()
Explanation: 3.2.3 ステップ関数のグラフ
End of explanation
def sigmoid(x):
return 1 / (1 + np.exp(-x))
x = np.array([-1.0, 1.0, 2.0])
print(sigmoid(x))
x = np.arange(-5.0, 5.0, 0.1)
y = sigmoid(x)
plt.plot(x, y)
plt.ylim(-0.1, 1.1)
plt.show()
Explanation: 3.2.4 シグモイド関数の実装
End of explanation
x = np.arange(-5.0, 5.0, 0.1)
y_step = step_function(x)
y_sig = sigmoid(x)
plt.plot(x, y_step, '--')
plt.plot(x, y_sig)
plt.ylim(-0.1, 1.1)
plt.show()
Explanation: 3.2.5 シグモイド関数とステップ関数の比較
シグモイド関数はステップ関数と比べるとなめらかなグラフになる。(実践がシグモイド、破線がステップ)
ステップ関数は0or1を返却するが、シグモイド関数は0~1の間の実数を返す。共通の性質としては入力が以下に小さいもしくは大きくてもその値域は[0,1]となる。
End of explanation
def relu(x):
return np.maximum(0, x)
print(relu(np.array([-2, 0, 5])))
x = np.arange(-6.0, 6.0, 0.1)
y = relu(x)
plt.plot(x, y)
plt.ylim(-0.5, 5)
plt.show()
Explanation: 3.2.6 非線形関数
ステップ関数とシグモイド関数の共通点としては、非線形関数であるということ。(線形は直線、非線形は曲線を描く)
ニューラルネットワークでは活性化関数に線形関数を用いてはいけない。それは層を深くすることの意味がなくなってしまうため。
3.2.7 ReLU関数
活性化関数にReLU(Rectified Linear Unit)が用いられる事がある。数式で表すと以下。
$$
y
= \begin{cases}
& \ x \; (x > 0) \
& \ 0 \; (x \leq 0)
\end{cases}
$$
End of explanation
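A small check of the claim above that a linear activation makes depth meaningless: two linear "layers" collapse into a single matrix product (the matrices below are random and used only for illustration).

```python
import numpy as np

rng = np.random.RandomState(0)
x = rng.rand(4)
W1, W2 = rng.rand(4, 3), rng.rand(3, 2)
two_layers = np.dot(np.dot(x, W1), W2)     # "deep" network with identity activation
one_layer = np.dot(x, np.dot(W1, W2))      # equivalent single linear layer
print(np.allclose(two_layers, one_layer))  # True
```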
import numpy as np
A = np.array([1, 2, 3, 4])
print(A)
print(np.ndim(A)) # 次元数
print(A.shape) # 4行,
print(A.shape[0]) # 添字0次元のデータ数
import numpy as np
B = np.array([[1, 2], [3, 4], [5, 6]])
print(B)
print(np.ndim(B)) # 次元数
print(B.shape) # 3行2列
print(B.shape[0]) # 添字0次元のデータ数
Explanation: 3.3 多次元配列の計算
3.3.1 多次元配列
End of explanation
# 行列の内積
A = np.array([[1,2], [3,4]])
print(A.shape)
B = np.array([[5,6], [7,8]])
print(B.shape)
print(np.dot(A, B))
# 行列の内積
A = np.array([[1,2,3], [4,5,6]])
print(A.shape)
B = np.array([[1,2], [3,4], [5,6]])
print(B.shape)
print(np.dot(A, B))
# 行列の内積
A = np.array([[1,2], [3,4], [5,6]])
print(A.shape)
B = np.array([7,8])
print(B.shape)
print(np.dot(A, B))
Explanation: 3.3.2 行列の内積
End of explanation
# 入力
X = np.array([1, 2])
print(X.shape)
# 重み
W = np.array([[1,3,5], [2,4,6]])
print(W)
print(W.shape)
# ニューラルネットワークの内積
Y = np.dot(X, W)
print(Y)
Explanation: 3.3.3 ニューラルネットワークの内積
バイアスと活性化関数を省略し、重みだけのニューラルネットワークの実装を行なう。
行列の内積を用いるとループなどを用いずに一度に計算することができる。
End of explanation
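For comparison, a sketch of the same product written with explicit loops (using the X and W defined in the cell above); np.dot computes it in a single call.

```python
# Loop-based version of the weighted sums; np.dot(X, W) replaces both loops.
Y_loop = np.zeros(W.shape[1])
for j in range(W.shape[1]):
    for i in range(X.shape[0]):
        Y_loop[j] += X[i] * W[i, j]
print(Y_loop)                              # same values as np.dot(X, W)
print(np.allclose(Y_loop, np.dot(X, W)))   # True
```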
# 入力層から第1層
import numpy as np
X = np.array([1.0, 0.5])
W1 = np.array([[0.1, 0.3, 0.5], [0.2, 0.4, 0.6]])
B1 = np.array([0.1, 0.2, 0.3])
print(W1.shape)
print(X.shape)
print(B1.shape)
A1 = np.dot(X, W1) + B1
print(A1)
Explanation: 3.4 3層ニューラルネットワークの実装
3.4.1 記号の確認
ここでは以下のように重みを表現する。
$$
w_{1 \; 2}^{(1)}
$$
上段((1))は第1層目の重み
下段左(1)は次層の1番目のニューロン、
下段右(2)は前層の2番目のニューロンを表す。
3.4.2 角層における信号伝達の実装
第1層目の1番目のニューロンへ信号伝達は以下式となる。
$$
a_{1}^{(1)} = w_{1\;1}^{(1)} x_{1} + w_{1\;2}^{(1)} x_{2} + b_{1}^{(1)}
$$
行列の内積を用いると以下式で表される。
$$
A_{1}^{(1)} = XW^{(1)} + B^{(1)}
$$
$$
X = (x1 \; x2)
$$
$$
B^{(1)} = (b_{1}^{(1)} \; b_{2}^{(1)} \; b_{3}^{(1)})
$$
$$
W_{1} = \begin{pmatrix}
w_{1\;1}^{(1)} & w_{2\;1}^{(1)} & w_{3\;1}^{(1)} \
w_{1\;2}^{(1)} & w_{2\;2}^{(1)} & w_{3\;2}^{(1)} \
\end{pmatrix}
$$
実装は以下。
End of explanation
Z1 = sigmoid(A1)
print(A1)
print(Z1)
# 第1層から第2層
W2 = np.array([[0.1, 0.4], [0.2, 0.5], [0.3, 0.6]])
B2 = np.array([0.1, 0.2])
print(Z1.shape)
print(W2.shape)
print(B2.shape)
A2 = np.dot(Z1, W2) + B2
Z2 = sigmoid(A2)
# 第2層から出力層
def identity_function(x):
return x
W3 = np.array([[0.1, 0.3], [0.2, 0.4]])
B3 = np.array([0.1, 0.2])
A3 = np.dot(Z2, W3) + B3
Y = identity_function(A3)
print(Y)
Explanation: 活性化関数としてシグモイド関数を利用した場合は以下となる。
重み付き和はa、活性化関数で変換された信号をzとする。活性化関数はh()とする。
End of explanation
def init_network():
network = {}
network['W1'] = np.array([[0.1, 0.3, 0.5], [0.2, 0.4, 0.6]])
network['b1'] = np.array([0.1, 0.2, 0.3])
network['W2'] = np.array([[0.1, 0.4], [0.2, 0.5], [0.3, 0.6]])
network['b2'] = np.array([0.1, 0.2])
network['W3'] = np.array([[0.1, 0.3], [0.2, 0.4]])
network['b3'] = np.array([0.1, 0.2])
return network
def forward(network, x):
W1, W2, W3 = network['W1'], network['W2'], network['W3']
b1, b2, b3 = network['b1'], network['b2'], network['b3']
a1 = np.dot(x, W1) + b1
z1 = sigmoid(a1)
a2 = np.dot(z1, W2) + b2
z2 = sigmoid(a2)
a3 = np.dot(z2, W3) + b3
y = identity_function(a3)
return y
network = init_network()
x = np.array([1.0, 0.5])
y = forward(network, x)
print(y)
Explanation: 3.4.3 実装のまとめ
これまで行った実装をまとめると以下となる。ニューラルネットワークの慣例として重みだけは大文字をもちいてWを使用する。
End of explanation
# ソフトマックス関数
import matplotlib.pyplot as plt
from matplotlib.image import imread
from graphviz import Digraph
f = Digraph(format="png")
f.attr(rankdir='LR')
f.attr('node', shape='circle')
f.edge('a1', 'y1', len='10')
f.edge('a2', 'y1', len='10')
f.edge('a3', 'y1', len='10')
f.edge('a1', 'y2', len='10')
f.edge('a2', 'y2', len='10')
f.edge('a3', 'y2', len='10')
f.edge('a1', 'y3', len='10')
f.edge('a2', 'y3', len='10')
f.edge('a3', 'y3', len='10')
f.render("../docs/neural_network")
img = imread('../docs/neural_network.png')
plt.figure(figsize=(6,4))
plt.imshow(img)
ax = plt.gca() # get current axis
# 枠線非表示
#ax.spines["right"].set_color("none") # 右消し
#ax.spines["left"].set_color("none") # 左消し
#ax.spines["top"].set_color("none") # 上消し
#ax.spines["bottom"].set_color("none") # 下消し
## 目盛を非表示にする
ax.tick_params(axis='x', which='both', top='off', bottom='off', labelbottom='off')
ax.tick_params(axis='y', which='both', left='off', right='off', labelleft='off')
plt.show()
# ソフトマックス関数の実装
a = np.array([0.3, 2.9, 4.0])
exp_a = np.exp(a)
print(exp_a)
sum_exp_a = np.sum(exp_a)
print(sum_exp_a)
y = exp_a / sum_exp_a
print(y)
# 関数として定義
def softmax(a):
exp_a = np.exp(a)
sum_exp_a = np.sum(exp_a)
y = exp_a / sum_exp_a
return y
Explanation: 3.5 出力層の設計
分類問題と回帰問題のどちらに用いるかで、出力層の活性化関数を変更する必要がある
例)
* 分類
各クラスに属する確率
回帰
数値
3.5.1 恒等関数とソフトマックス関数
恒等関数は入力をそのまま出力する関数。(回帰で利用)
ソフトマックス関数は以下で表される。(分類で利用)
出力層がn個あった場合に、k番目の出力$y_{k}$を求める式。
$$
y_{k} = \frac{exp(a_{k})}{\sum_{i=1}^{n}exp(a_{i})}
$$
End of explanation
# オーバーフローの再現と改善の検証
a = np.array([1010, 1000, 990])
print(np.exp(a) / np.sum(np.exp(a)))
c = np.max(a)
print(a - c)
print(np.exp(a - c) / np.sum(np.exp(a - c)))
# ソフトマックス関数(改善)の実装
def softmax(a):
c = np.max(a)
exp_a = np.exp(a - c)
sum_exp_a = np.sum(exp_a)
y = exp_a / sum_exp_a
return y
Explanation: 3.5.3 ソフトマックス関数の実装上の注意
指数関数の計算を行なう際、大きな値になってしまいオーバーフローを起こすおそれがある。ソフトマックス中の分子分母両者の指数関数の計算において定数を減算することによって桁あふれを防ぐ。(結果は変わらない)
定数は入力信号の最大値を用いることが一般的。
$$
y_{k} = \frac{exp(a_{k})}{\sum_{i=1}^{n}exp(a_{i})} \
= \frac{exp(a_{k})}{\sum_{i=1}^{n}exp(a_{i})} \
= \frac{Cexp(a_{k})}{C\sum_{i=1}^{n}exp(a_{i})} \
= \frac{exp(a_{k} + logC)}{\sum_{i=1}^{n}exp(a_{i} + logC)} \
= \frac{exp(a_{k} + C')}{\sum_{i=1}^{n}exp(a_{i} + C')} \
$$
End of explanation
import sys, os
sys.path.append(os.pardir)
from src.mnist import load_mnist
# normalize:入力画像の正規化(255から0~1)
# flatten:入力画像を平らに変換(1×28×28から784個の配列)
# one_hot_label:ラベルのone_hot表現(2であれば[0,0,1,0,0,0,0,0,0,0])
(x_train, t_train), (x_test, t_test) = \
load_mnist(flatten=True, normalize=False)
print(x_train.shape) # 訓練画像
print(t_train.shape) # テスト画像
print(x_test.shape) # 訓練ラベル
print(t_test.shape) # テストラベル
# MNIST画像の表示
import sys, os
sys.path.append(os.pardir)
import numpy as np
from src.mnist import load_mnist
from PIL import Image
def img_show(img):
pil_img = Image.fromarray(np.uint8(img))
pil_img.show()
(x_train, t_train), (x_test, t_test) = \
load_mnist(flatten=True, normalize=False)
img = x_train[0]
label = t_train[0]
print(label)
print(img.shape)
img = img.reshape(28, 28)
print(img.shape)
img_show(img)
Explanation: 3.5.3 ソフトマックス関数の特徴
ソフトマックス関数は0~1の間の実数を返却する。また、ソフトマックス関数の出力の総和は1となる。これを「確率」として解釈することで分類に利用できる。
ただし、ソフトマックス関数を適用しても入力値各要素の大小関係は変わらない。(指数関数が単調増加する性質であるため)
よって、ニューラルネットワークのクラス分類の推論ではソフトマックス関数を省略し、最も値の大きいニューロンに相当するクラスを推定クラスとして用いる。
(出力層にソフトマックス関数を用いる理由は学習時に関係する。)
3.5.4 出力層のニューロンの数
出力層のニューロンの数は解くべき問題に応じて定める。クラス分類では分類したいクラスの数に設定するのが一般的。
3.6 手書き数字認識
End of explanation
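A quick verification of the two properties stated above — the softmax outputs sum to 1 and the ordering of the inputs (argmax) is unchanged — using the softmax defined earlier and the same example input.

```python
# Check: outputs sum to 1 (up to floating point) and argmax is preserved.
a = np.array([0.3, 2.9, 4.0])
y = softmax(a)
print(np.sum(y))                     # ~1.0
print(np.argmax(a) == np.argmax(y))  # True
```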
import pickle
def get_data():
(x_train, t_train), (x_test, t_test) = \
load_mnist(normalize=True, flatten=True, one_hot_label=False)
return x_test, t_test
# 学習結果のサンプルパラメータの読込
def init_network():
with open("../src/sample_weight.pkl", 'rb') as f:
network = pickle.load(f)
return network
def predict(network, x):
W1, W2, W3 = network['W1'], network['W2'], network['W3']
b1, b2, b3 = network['b1'], network['b2'], network['b3']
a1 = np.dot(x, W1) + b1
z1 = sigmoid(a1)
a2 = np.dot(z1, W2) + b2
z2 = sigmoid(a2)
a3 = np.dot(z2, W3) + b3
y = softmax(a3)
return y
# 推定
x, t = get_data()
# 学習済みパラメータ取得
network = init_network()
accuracy_cnt = 0
for i in range(len(x)):
# 推定
y = predict(network, x[i])
# 最も確率が高いクラスを推定ラベルとして取得
p = np.argmax(y)
# 推定ラベルと正解ラベルの突き合わせ
if p == t[i]:
accuracy_cnt += 1
# 正解率の表示
print("Accuracy:" + str(float(accuracy_cnt) / len(x)))
Explanation: 3.6.2 ニューラルネットワークの推論処理
推論を行なうニューラルネットワークを実装する。入力層を784個(画像サイズより)、出力層を10個(推定する数字のクラス0~9)、隠れ層を2層(第一層を50個、第二層を100個のニューロン。50、100個は任意に設定した)とする。
load_mnist中でnormalizeを行っており、画像の各ピクセル値を255で除算している。その結果、0.0~1.0の範囲に収まるように変換されている。
データをある決まった範囲に変換する処理を正規化(normalization)と言う。このようなニューラルネットワークの入力データに対して決まった変換を行なうことは前処理(pre-processiong)と呼ばれる。(入力画像データに対して前処理として正規化を行った)
End of explanation
# バッチによる実装
x, t = get_data()
# 学習済みパラメータ取得
network = init_network()
batch_size = 100
accuracy_cnt = 0
for i in range(0, len(x), batch_size):
x_batch = x[i:i+batch_size]
y_batch = predict(network, x_batch)
p = np.argmax(y_batch, axis=1)
accuracy_cnt += np.sum(p == t[i:i+batch_size])
# 正解率の表示
print("Accuracy:" + str(float(accuracy_cnt) / len(x)))
Explanation: 3.6.3 バッチ処理
入力データと重みパラメータの形状に注目してみると、次元の要素数の変遷は以下になる。
隣接する次元が一致している。
X → W1 → W2 → W3 → Y
784 → 784×50 → 50×100 → 100×10 → 10
画像を複数枚まとめて処理する場合は以下。(100枚分を処理する場合)
X → W1 → W2 → W3 → Y
100×784 → 784×50 → 50×100 → 100×10 → 100×10
これで一度に100枚分の入力データの推論が出来る。この入力データのまとまりをバッチ(batch)と呼ぶ。これにより処理時間の短縮が図れる。
End of explanation |
6,934 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Django-Geo-SPaaS - GeoDjango framework for Satellite Data Management
First of all we need to initialize Django to work. Let's do some 'magic'
Step1: Now we can import our models
Step2: Now we can use the model Dataset to search for datasets
Step3: What is happening
Step4: Use the complex structure of the catalog models
Step5: Search for data
Step6: Finally, get data | Python Code:
import os, sys
os.environ['DJANGO_SETTINGS_MODULE'] = 'geospaas_project.settings'
sys.path.insert(0, '/vagrant/shared/course_vm/geospaas_project/')
import django
django.setup()
from django.conf import settings
Explanation: Django-Geo-SPaaS - GeoDjango framework for Satellite Data Management
First of all we need to initialize Django to work. Let's do some 'magic'
End of explanation
from geospaas.catalog.models import Dataset
from geospaas.catalog.models import DatasetURI
Explanation: Now we can import our models
End of explanation
# find all images
datasets = Dataset.objects.all()
Explanation: Now we can use the model Dataset to search for datasets
End of explanation
print datasets.count()
# print info about each image
for ds in datasets:
print ds
Explanation: What is happening:
A SQL query is generated
The query is sent to the database (local database driven by SpatiaLite)
The query is executed by the database engine
The result is sent back to Python
The result is wrapped in a QuerySet object
End of explanation
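A minimal sketch illustrating the first point: the SQL is generated lazily and can be inspected on the QuerySet before any rows are fetched.

```python
# str(queryset.query) shows the SELECT statement Django generates;
# the query itself only runs when the results are actually needed.
print str(datasets.query)
```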
# get just one Dataset
ds0 = datasets.first()
print ds0.time_coverage_start
# print joined fields (Foreign key)
for ds in datasets:
print ds.source.instrument.short_name,
print ds.source.platform.short_name
# get information from Foreign key in the opposite direction
print ds0.dataseturi_set.first().uri
Explanation: Use the complex structure of the catalog models
End of explanation
# search by time
ds = Dataset.objects.filter(time_coverage_start='2012-03-03 09:38:10.423969')
print ds
ds = Dataset.objects.filter(time_coverage_start__gte='1900-03-01')
print ds
# search by instrument
ds = Dataset.objects.filter(source__instrument__short_name='MODIS')
print ds
# search by spatial location
ds0 = Dataset.objects.first()
ds0_geom = ds0.geographic_location.geometry
ds_ovlp = Dataset.objects.filter(
geographic_location__geometry__intersects=ds0_geom,
time_coverage_start__gte='2015-05-02',
source__platform__short_name='AQUA')
print ds_ovlp
Explanation: Search for data
End of explanation
dsovlp0 = ds_ovlp.first()
uri0 = dsovlp0.dataseturi_set.first().uri
print uri0
from nansat import Nansat
n = Nansat(uri0.replace('file://localhost', ''))
print n[1]
Explanation: Finally, get data
End of explanation |
6,935 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="http
Step1: Second, the instantiation of the class.
Step2: The following is an example list object containing datetime objects.
Step3: The call of the method get_forward_reates() yields the above time_list object and the simulated forward rates. In this case, 10 simulations.
Step4: Accordingly, the call of the get_discount_factors() method yields simulated zero-coupon bond prices for the time grid.
Step5: Stochstic Drifts
Let us value use the stochastic short rate model to simulate a geometric Brownian motion with stochastic short rate. Define the market environment as follows
Step6: Then add the stochastic_short_rate object as discount curve.
Step7: Finally, instantiate the geometric_brownian_motion object.
Step8: We get simulated instrument values as usual via the get_instrument_values() method.
Step9: Visualization of Simulated Stochastic Short Rate | Python Code:
from dx import *
me = market_environment(name='me', pricing_date=dt.datetime(2015, 1, 1))
me.add_constant('initial_value', 0.01)
me.add_constant('volatility', 0.1)
me.add_constant('kappa', 2.0)
me.add_constant('theta', 0.05)
me.add_constant('paths', 1000)
me.add_constant('frequency', 'M')
me.add_constant('starting_date', me.pricing_date)
me.add_constant('final_date', dt.datetime(2015, 12, 31))
me.add_curve('discount_curve', 0.0) # dummy
me.add_constant('currency', 0.0) # dummy
Explanation: <img src="http://hilpisch.com/tpq_logo.png" alt="The Python Quants" width="45%" align="right" border="4">
Stochastic Short Rates
This brief section illustrates the use of stochastic short rate models for simulation and (risk-neutral) discounting. The class used is called stochastic_short_rate.
The Modelling
First, the market environment. As a stochastic short rate model the square_root_diffusion class is (currently) available. We therefore need to define the respective parameters for this class in the market environment.
End of explanation
ssr = stochastic_short_rate('sr', me)
Explanation: Second, the instantiation of the class.
End of explanation
time_list = [dt.datetime(2015, 1, 1),
dt.datetime(2015, 4, 1),
dt.datetime(2015, 6, 15),
dt.datetime(2015, 10, 21)]
Explanation: The following is an example list object containing datetime objects.
End of explanation
ssr.get_forward_rates(time_list, 10)
Explanation: The call of the method get_forward_reates() yields the above time_list object and the simulated forward rates. In this case, 10 simulations.
End of explanation
ssr.get_discount_factors(time_list, 10)
Explanation: Accordingly, the call of the get_discount_factors() method yields simulated zero-coupon bond prices for the time grid.
End of explanation
me.add_constant('initial_value', 36.)
me.add_constant('volatility', 0.2)
# time horizon for the simulation
me.add_constant('currency', 'EUR')
me.add_constant('frequency', 'M')
# monthly frequency; paramter accorind to pandas convention
me.add_constant('paths', 10)
# number of paths for simulation
Explanation: Stochstic Drifts
Let us value use the stochastic short rate model to simulate a geometric Brownian motion with stochastic short rate. Define the market environment as follows:
End of explanation
me.add_curve('discount_curve', ssr)
Explanation: Then add the stochastic_short_rate object as discount curve.
End of explanation
gbm = geometric_brownian_motion('gbm', me)
Explanation: Finally, instantiate the geometric_brownian_motion object.
End of explanation
gbm.get_instrument_values()
Explanation: We get simulated instrument values as usual via the get_instrument_values() method.
End of explanation
from pylab import plt
plt.style.use('seaborn')
%matplotlib inline
# short rate paths
plt.figure(figsize=(10, 6))
plt.plot(ssr.process.instrument_values[:, :10]);
Explanation: Visualization of Simulated Stochastic Short Rate
End of explanation |
6,936 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
Simply the first step to prepare the data for the following notebooks
Step1: Data source is http
Step2: Can also migrate it to a sqlite database
Step3: Can perform queries | Python Code:
import Quandl
import pandas as pd
import numpy as np
import blaze as bz
Explanation: Introduction
Simply the first step to prepare the data for the following notebooks
End of explanation
with open('../.quandl_api_key.txt', 'r') as f:
api_key = f.read()
db = Quandl.get("EOD/DB", authtoken=api_key)
bz.odo(db['Close'].reset_index(), '../data/db.bcolz')  # EOD data exposes Close, which the later queries (d.Close) use
fx = Quandl.get("CURRFX/EURUSD", authtoken=api_key)
bz.odo(fx['Rate'].reset_index(), '../data/eurusd.bcolz')
Explanation: Data source is http://www.quandl.com.
We use blaze to store data.
End of explanation
bz.odo('../data/db.bcolz', 'sqlite:///osqf.db::db')
%load_ext sql
%%sql sqlite:///osqf.db
select * from db
Explanation: Can also migrate it to a sqlite database
End of explanation
d = bz.Data('../data/db.bcolz')
d.Close.max()
Explanation: Can perform queries
End of explanation |
6,937 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PCA主元分析
假设数据符合高斯分布,目标是得到一组正交基,使得数据在这组正交基上分布放方差最大,PCA最大的用途是数据降维。
已知一组数据$(X_1, X_2, ..., X_N)$,其中每个数据$X_i$都是n维列向量,利用矩阵分解求解PCA的步骤如下
1. 计算均值
$X_{mean} = \frac{1}{N}\sum_{i=1}^N X_i$
2. 去中心化
$\bar X_i = X_i - X_{mean}$
3. 计算协方差矩阵 (特征的协方差)
$C = \bar X \bar X^T$
4. 矩阵分解,求$C$的特征值和特征向量
$(\lambda _i,V_i) \ \ \ i = 1,2,...,N$
其中$\lambda _i$是特征值,$V_i$是特征向量,其中$V_i$就是要求取的正交基,$\lambda _i$表示数据在$V_i$的分布,和数据在该方向上投影
的方差正相关
特征值和特征向量
向量X的特征值和特征向量的定义如下
$\lambda X = V^TX$
这个公式说明$X$乘以一个标量$\lambda$的结果和乘以另一个向量$V$一样,其中$\lambda$和$V$就是特征值和特征向量。 上式等号左右两侧同时右乘$X^T$得到
$\lambda X X^T = V^TXX^T$
如果$X$均值为零,令$C=\bar X \bar X^T$表示协方差矩阵,则上式得到
$\lambda C = V^T C$
即$\lambda$和$V^T$也是$C$的特征值和特征向量
Step1: 计算矩阵$C$的特征值和特征向量有两个方法
1. 直接计算特征值,特征向量
2. 利用SVD
在数据降维中,需要按照方差(特征值)选择若干维度,抛弃其他维度,达到降维目的。
Step2: 不同方法得到的特征向量的方向可能不同,需要取模后再比较
Step3: PCA空间可视化
Step4: PCA重建
去中心化
投影到PCA空间,获得pca系数
用pca系数乘以对应的特征向量,结果向量取和 | Python Code:
import cv2
import sys,os
import numpy as np
sample_size = (64//2,64//2)
smallset_size = 10 #每类下采样,方便调试
flag_debug = True
def load_mnist(num_per_class, dataset_root="C:/dataset/mnist/",resize=sample_size):
data_pairs = []
labeldict = {}
ds_root = os.path.join(dataset_root,'train')
for rdir, pdirs, names in os.walk(ds_root):
for name in names:
basename,ext = os.path.splitext(name)
if ext != ".jpg":
continue
fullpath = os.path.join(rdir,name)
label = fullpath.split('\\')[-2]
label = int(label)
if num_per_class > 0 and ( label in labeldict.keys() ) and labeldict[label] >= num_per_class:
continue
data_pairs.append((label,fullpath))
if label in labeldict:
labeldict[label] += 1
else:
labeldict[label] = 1
data = np.zeros((resize[0]*resize[1],len(data_pairs)))
labels = np.zeros(len(data_pairs))
for col,(label, path) in enumerate(data_pairs):
img = cv2.imread(path,0)
img = cv2.resize(img,resize)
img = (img / 255.0).flatten()
data[:,col] = img
labels[col] = label
return (data,labels)
X,Y = load_mnist(smallset_size)
print('data shape: {}'.format(X.shape))
Xmean = np.reshape( X.mean(axis=1),(-1,1))
Xmean = np.tile(Xmean,(1,X.shape[1]))
print('mean shape: {}'.format(Xmean.shape))
Xbar = X - Xmean
C = Xbar.dot(Xbar.transpose())
print("conv shape: {}".format(C.shape))
Explanation: PCA主元分析
假设数据符合高斯分布,目标是得到一组正交基,使得数据在这组正交基上分布放方差最大,PCA最大的用途是数据降维。
已知一组数据$(X_1, X_2, ..., X_N)$,其中每个数据$X_i$都是n维列向量,利用矩阵分解求解PCA的步骤如下
1. 计算均值
$X_{mean} = \frac{1}{N}\sum_{i=1}^N X_i$
2. 去中心化
$\bar X_i = X_i - X_{mean}$
3. 计算协方差矩阵 (特征的协方差)
$C = \bar X \bar X^T$
4. 矩阵分解,求$C$的特征值和特征向量
$(\lambda _i,V_i) \ \ \ i = 1,2,...,N$
其中$\lambda _i$是特征值,$V_i$是特征向量,其中$V_i$就是要求取的正交基,$\lambda _i$表示数据在$V_i$的分布,和数据在该方向上投影
的方差正相关
特征值和特征向量
向量X的特征值和特征向量的定义如下
$\lambda X = V^TX$
这个公式说明$X$乘以一个标量$\lambda$的结果和乘以另一个向量$V$一样,其中$\lambda$和$V$就是特征值和特征向量。 上式等号左右两侧同时右乘$X^T$得到
$\lambda X X^T = V^TXX^T$
如果$X$均值为零,令$C=\bar X \bar X^T$表示协方差矩阵,则上式得到
$\lambda C = V^T C$
即$\lambda$和$V^T$也是$C$的特征值和特征向量
End of explanation
import pdb
def get_top_idx(lam, ratio):
#pdb.set_trace()
for k in range(1,len(lam)):
if np.sum(lam[:k]) > ratio * np.sum(lam):
#print('{} {}'.format(lam[:k].sum(), lam.sum()))
return k
return len(lam)
def calc_inv_basic(C,ratio=-1,N=-1):
print(C.shape)
if ratio < 0 and N < 0:
return C.shape[1]
U,V = np.linalg.eigh(C)
U = U[::-1]
for k in range(C.shape[1]):
V[k,:] = V[k,:][::-1]
if ratio > 0:
idx = get_top_idx(U,ratio)
else:
idx = N
topV = V[:,:idx]
return topV
def calc_inv_svd(C, ratio=-1,N=-1):
if ratio < 0 and N < 0:
return C.shape[1]
U,D,V = np.linalg.svd(C)
if ratio > 0:
idx = get_top_idx(D,ratio)
else:
idx = N
topU = U[:,:idx]
return topU
Explanation: 计算矩阵$C$的特征值和特征向量有两个方法
1. 直接计算特征值,特征向量
2. 利用SVD
在数据降维中,需要按照方差(特征值)选择若干维度,抛弃其他维度,达到降维目的。
End of explanation
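A tiny illustration of the two routes on a 2x2 symmetric matrix: for a covariance-like (symmetric positive semi-definite) matrix the SVD singular values coincide with the eigenvalues; only the ordering conventions differ (eigh ascending, svd descending).

```python
# Hypothetical small symmetric matrix, for illustration only.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
eigvals, _ = np.linalg.eigh(M)
_, svals, _ = np.linalg.svd(M)
print(eigvals[::-1])   # [3. 1.]
print(svals)           # [3. 1.]
```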
#mat = [[-1,-1,0,2,1],[2,0,0,-1,-1],[2,0,1,1,0]]
#C = np.asarray(mat)
#C = C.transpose().dot(C)
ratio = 0.9
N = -1
topV_from_origin = calc_inv_basic(C,N=N, ratio=ratio)
topV_from_svd = calc_inv_svd(C,N=N, ratio=ratio)
print('topV_from_origin: {}'.format(topV_from_origin.shape))
print('topV_from_svd: {}'.format(topV_from_svd.shape))
if flag_debug:
#print(topV_from_origin[0,:])
# print(topV_from_svd[0,:])
df = np.abs(np.abs(topV_from_origin) - np.abs(topV_from_svd)).max()
print('topV_from_origin - topV_from_svd = {}'.format(df))
%matplotlib inline
import matplotlib.pyplot as plt
def show_eigen_vector(name,m, size, N = 10):
fig = plt.figure()
plt.title(name)
plt.axis('off')
if N > m.shape[1]:
N = m.shape[1]
for c in range(N):
data = np.reshape(m[:,c],size)
ax = fig.add_subplot(1, N, c+1)
ax.axis('off')
ax.imshow(data,cmap='gray')
show_eigen_vector('origin topV',topV_from_origin, sample_size)
show_eigen_vector('svd topV',topV_from_svd, sample_size)
Explanation: 不同方法得到的特征向量的方向可能不同,需要取模后再比较
End of explanation
pcs = topV_from_svd
cValue = [(1,0,0), (0,1,0), (0,0,1), \
(0.5, 0, 0), (0,0.5,0), (0,0,0.5),\
(1.0,1.0,0), (1.0,0,1.0), (0,1,1),\
(0,0,0)]
Xpca = pcs.transpose().dot(Xbar)
Lx,Ly,Lc = [],[],[]
for c in range(Xpca.shape[1]):
Lx.append(Xpca[0,c])
Ly.append(Xpca[1,c])
Lc.append(cValue[Y[c].astype(int)])
fig = plt.figure()
plt.xlabel('pc-1')
plt.ylabel('pc-2')
ax = fig.add_subplot(111)
ax.scatter(Lx,Ly,c = Lc, marker='s')
plt.show()
Explanation: PCA空间可视化
End of explanation
pcs = topV_from_svd
Xpca = pcs.transpose().dot(Xbar)
Xnew = pcs.dot(Xpca) + Xmean
import random
idxs = [k for k in range(X.shape[1])]
random.shuffle(idxs)
num = 10
fig = plt.figure()
plt.title('rebuild vs source')
plt.axis('off')
for n in range(num):
c = idxs[n]
img = np.reshape(Xnew[:,c],sample_size)
ax = fig.add_subplot(num,2,n*2+1)
ax.axis('off')
ax.imshow(img,cmap='gray')
img = np.reshape(X[:,c],sample_size)
ab = fig.add_subplot(num,2,n*2+2)
ab.axis('off')
ab.imshow(img,cmap='gray')
plt.show()
Explanation: PCA重建
去中心化
投影到PCA空间,获得pca系数
用pca系数乘以对应的特征向量,结果向量取和
End of explanation |
6,938 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
HTML Bias CV view
showing ["institution", "institute_id", "bc_method", "bc_method_id",
"institute_id"-"bc_method_id", "terms_of_use", "CORDEX_domain",
"reference", "package" ]
Step1: HTML bias CV view separated in 2 tables | Python Code:
result = web.jsonfile_to_dict("/home/stephan/Repos/ENES-EUDAT/cordex/CORDEX_adjust_register.json")
html_out = web.generate_bias_table(result)
HTML(html_out)
Explanation: HTML Bias CV view
showing ["institution", "institute_id", "bc_method", "bc_method_id",
"institute_id"-"bc_method_id", "terms_of_use", "CORDEX_domain",
"reference", "package" ]
End of explanation
html_out = web.generate_bias_table(result)
HTML(html_out)
html_out = web.generate_bias_table_add(result)
HTML(html_out)
Explanation: HTML bias CV view separated in 2 tables
End of explanation |
6,939 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Return of the Functions
1) Rappels sur les fonctions
Quand a t'on besoin d'une fonction ?
dB or not dB ?
Définir ou Utiliser ?
1.1) Quand à t'on besoin d'ecrire une fonction ?
Pas toujours, mais presque...
Appel d'une fonction du système
Cet exemple marche bien avec les systèmes baseés sur unix, qui ont une horloge
Step1: Utilisation des methodes associés aux listes (les methodes sont des fonctions)
Step2: 1.2) Quand n'a t'on pas besoin d'ecrire une fonction ?
Pas besoin de définir de nouvelles fonctions si celles-ci existent déja.
Python dispose d'un grand nombre de librairies de fonctions
mais il se peux que l'on n'ai pas besoin de toutes les fonctions
de cette librairie tout au long du programme....
Donc on a TOUJOURS besoin de connaître les fonctions
Step3: 1.1) Quand doit t'on impérativement utiliser une fonction ?
les fonctions recursives (qui s'appellent elles mêmes)
Voici les 3 lois de la recursivité
Step5: Il existe des fonction primitives propres a python 3
En anglais Build-In Functions
...
On ne doit jamais redéfinir une fonction qui est déja dans le sytème d'ou l'intérêt de savoir ce que contiens une fonction
...
Ce qui suit est un exemple de fonction "dangeureuse"
Step7: 2) Reprise des exemples de l'introduction en fonctions
Palindrome
La racine carrée
Anneés Bissextiles
Nous alons reprendre les exemples du premier cours mais cette fois nous transformerons tout en fonctions. L'objectif est de montrer la rapidité d'execution, et la modularité que la programmation à l'aide des fonctions est seulle à permettre.
2.1) Le Palindrome
Step8: 2.2) Le test d'Héron d'Alexandrie
Step9: 2.3) Les annees bisextiles | Python Code:
'''
De nombreuses fonction existent déjà
Python permet aussi d'appeller des fonction nouvelles
D'abord nous allons voire les appels au sytème,
puis des fonctions comme print que nous utilisions toujours
print(VARIABLE)
'''
A=!date
# ici A est à un type "IPython.utils.text.SList" c'est une liste
print(A)
Explanation: The Return of the Functions
1) Rappels sur les fonctions
Quand a t'on besoin d'une fonction ?
dB or not dB ?
Définir ou Utiliser ?
1.1) Quand à t'on besoin d'ecrire une fonction ?
Pas toujours, mais presque...
Appel d'une fonction du système
Cet exemple marche bien avec les systèmes baseés sur unix, qui ont une horloge
End of explanation
# On peux ici se servir le la methode "list" qui va
date=A.list[0]
# On peux ici se servir le la methode "split" qui va separer la chaine de charactères
#(jour,mois,date,heure,fuseau,annee)=date.split() # Dans certains sytèmes
(jour,date,mois,annee,heure,fuseau)=date.split() # Dans d'autres sytèmes
# Ici je travaille en python 3.5
print('En l\'an de grâces %s les étudiants de l\'ISEP on étudié les Fonctions, c\'était le %s du mois \'%s\'.' % (annee, date, mois))
# Comme on voit c'est pas franchement lisible en Français, donc on va traduire...
# On peux utiliser les types que sont les tableaux associatifs appelés "dicts"
#pour associer un mois a son abreviation
mois_Month = {'janvier': 'de janvier',
'fevrier': 'de février',
'mars': 'de mars',
'avril': 'd\'avril',
'mai':'de mai',
'juin':'de juin',
'juillet':'de juillet',
'août':'d\'août',
'septembre':'de septembre',
'octobre':'d\' octobre',
'novembre':'de novembre',
'décembre':'de décembre'}
print("En l\'an de grâces {0} les étudiants on étudié les Fonctions, c\'était le {1} du mois {2}." .format(annee, date, mois_Month[mois]))
Explanation: Utilisation des methodes associés aux listes (les methodes sont des fonctions)
End of explanation
# Exemple d'appel d'une fonction dans une librairie
from math import log10
# Ici je déclare les variables
signal_power = 50 #Watts
noise_power = 5 #Watts
# Là j'utilise une division entre les variables déclareés
ratio = signal_power / noise_power
# Nous allons utiliser une fonction logarithme
decibels = 10 * log10(ratio)
# voila la sortie
print(decibels)
# Cette operation peux être transformée en fonction
def dB(signal_power,noise_power):
ratio = signal_power / noise_power
decibels = 10 * log10(ratio)
return decibels
#Warning: au moment de l'appeler if faut que le log10 soit défini
from math import log10
signal_power = 50
noise_power = 5
dB(signal_power,noise_power)
Explanation: 1.2) Quand n'a t'on pas besoin d'ecrire une fonction ?
Pas besoin de définir de nouvelles fonctions si celles-ci existent déja.
Python dispose d'un grand nombre de librairies de fonctions
mais il se peux que l'on n'ai pas besoin de toutes les fonctions
de cette librairie tout au long du programme....
Donc on a TOUJOURS besoin de connaître les fonctions
End of explanation
# On va déclarer une fonction qui renvoie la valeur de la puissance entière d'un nombre
# Cette solution utilise le fait que les nombres peuvent être pairs ou impairs
def expo(a, n):
if n == 0:
return 1
else:
if n%2 == 0:
# on utilise la puissance de 2 directerment pour les nombres pairs
return expo(a, n//2)**2
# on utilise pour les impairs, la même methode mais appliquée
else:
return a*(expo(a, (n-1)//2)**2)
# Par exemple...
print("Cette fonction peut s'avèrer utile pour ecrire une suite de nombres bien connue :")
print(expo(2,0),expo(2,1),expo(2,2),expo(2,3),expo(2,4))
print('...')
print(expo(2,11),'\n')
# On peux invoquer une puissance directement grace à son return
print('Entrez le Nombre a que l\'on veux élever à la puissance n')
a=eval(input('a = '))
n=eval(input('n = '))
print("La puissance au {} de {} est egale à {} ".format(n,a,expo(a,n)))
# OK... pour les fortiches en math il existe déjà une fonction puissance
x=pow(4,3)
print(x)
Explanation: 1.3) When MUST we use a function?
Recursive functions (functions that call themselves).
Here are the 3 laws of recursion:
A recursive function contains a base case
A recursive function must change its state so as to move towards the base case
A recursive function must call itself
For example:
$$ x^0 = 1 \quad \text{and} \quad x^n = x \cdot x^{n-1} \quad \text{if } n \geq 1 $$
End of explanation
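As an extra illustration of the three laws (a hypothetical sketch, not part of the original course code), a recursive factorial makes the base case, the state change and the self-call easy to spot:
def factorial(n):
    # Law 1: base case, the recursion stops here
    if n == 0:
        return 1
    # Law 2: each call moves towards the base case (n decreases)
    # Law 3: the function calls itself
    return n * factorial(n - 1)

print(factorial(5))  # 120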
def jetter_les_des_jusquau_double_six():
    """This function uses the "random" library.
    It keeps rolling two dice until a double six is thrown.
    Good luck!
    """
import random
throw_1 = random.randint(1, 6)
throw_2 = random.randint(1, 6)
while not (throw_1 == 6 and throw_2 == 6):
total = throw_1 + throw_2
print(total)
throw_1 = random.randint(1, 6)
throw_2 = random.randint(1, 6)
print('12 !!! Double Six thrown!')
jetter_les_des_jusquau_double_six()
Explanation: Python 3 has its own primitive functions,
known in English as built-in functions.
...
You should never redefine a function that is already in the system, hence the value of knowing what a function contains.
...
What follows is an example of a "dangerous" function
End of explanation
"""
Program : estPalindrome(unMot)
Purpose : palindrome test
Input   : unMot, a STR containing a character string
Output  : True or False (Boolean)
@author : VF
"""
def estPalindrome(unMot):
# Declaration des Variables locales
estPal = True
l = len(unMot)
d = 0
# On compare les lettres symétriques par rapport au milieu du mot de longueur l
# qui ont pour indices [d] et [l-1-d], le parcours s'effectue tant que les lettres
# comparées sont égales et que le milieu n'est pas atteint
while estPal and d < l/2:
if unMot[d] == unMot[l-1-d]: d += 1
else: estPal = False
return estPal
####################################################
# Test de la fonction
mot = input("Donner un mot : ")
# Boucle conditionnelle
if estPalindrome(mot):
print(mot, "est un palindrome")
else:
print(mot, "n'est pas un palindrome")
Explanation: 2) Revisiting the examples from the introduction, as functions
Palindrome
Square root
Leap years
We will take the examples from the first lecture again, but this time everything is turned into functions. The goal is to show the speed of execution and the modularity that only function-based programming allows.
2.1) The palindrome
End of explanation
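A shorter, more idiomatic check is possible with slicing; this is only an alternative sketch, not the course's reference solution:
def est_palindrome_slice(un_mot):
    # A word is a palindrome when it equals its own reverse
    return un_mot == un_mot[::-1]

print(est_palindrome_slice("radar"))   # True
print(est_palindrome_slice("python"))  # False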
def RacineCarree(unNombre_Carre,unNombre_Estimateur):
# Declaration des Variables locales
EPS = 0.0000000000001 # précision
val = unNombre_Carre
racCarreeVal = unNombre_Estimateur
while racCarreeVal * racCarreeVal - val < -EPS or \
racCarreeVal * racCarreeVal - val > EPS:
# la valeur absolue de la différence est supérieure à un seuil
racCarreeVal = (racCarreeVal + val / racCarreeVal) / 2
return racCarreeVal
####################################################
# Test de la fonction
unNombre_Carre = 42
unNombre_Carre=float(unNombre_Carre)
unNombre_Estimateur = input("Première estimation de la racine de {} : ".format(unNombre_Carre))
unNombre_Estimateur=float(unNombre_Estimateur)
print(RacineCarree(unNombre_Carre,unNombre_Estimateur))
42**(1/2)
from math import sqrt
sqrt(42)
Explanation: 2.2) Heron of Alexandria's square-root method
End of explanation
def Annees_Bissextiles(annee_debut,annee_fin):
bissextile=[]
for b in range(annee_debut,annee_fin+1):
if ((b % 4 == 0) and (b % 100 != 0)) or (b % 400 == 0):
bissextile.append(b)
return bissextile
####################################################
annees_a_28_fevrier=Annees_Bissextiles(1992,2021)
print(annees_a_28_fevrier)
Explanation: 2.3) Leap years
End of explanation |
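As a quick sanity check (an optional sketch that reuses the Annees_Bissextiles function defined above), the standard library's calendar.isleap should agree with it:
import calendar

bissextiles = Annees_Bissextiles(1992, 2021)
for annee in range(1992, 2022):
    # both implementations must classify every year in the range the same way
    assert calendar.isleap(annee) == (annee in bissextiles)
print("Both implementations agree")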
6,940 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Response Distributions
Let's look at the answer distributions for each of the 3 questions in our survey.
Step1: Load and Prep Data
Step2: Q1 Information Depth
I am reading this article to ... [Information Depth]
* look up a specific fact or to get a quick answer. [fact]
* get an overview of the topic. [overview]
* get an in-depth understanding of the topic. [in-depth]
Information Depth Histogram (Desktop vs Mobile)
Step3: The distributions are relatively similar across mobile and desktop. The most common use case is to get an overview, then look up a fact, then get an in-depth understanding.
Q2 Familiarity
Prior to visiting this article ... [Prior Knowledge]
* I was already familiar with the topic. [familiar]
* I was not familiar with the topic and I am learning about it for the first time. [unfamiliar]
Prior Knowledge Histogram (Desktop vs Mobile)
Step4: Desktop and mobile are basically identical. People are slightly more likely to be familiar with the topic they are reading about.
Q3 Motivation
I am reading this article because ... [Motivation]
* I have a work or school-related assignment. [work/school]
* I need to make a personal decision based on this topic (e.g., to buy a book or game, to choose a travel destination). [personal-decision]
* I want to know more about a current event (e.g. Black Friday, a soccer game, a recent earthquake, somebody's death). [current event]
* the topic was referenced in a piece of media (e.g. TV, radio, article, film, book). [media]
* the topic came up in a conversation. [conversation]
* I am bored or randomly exploring Wikipedia for fun. [bored/random]
* this topic is important to me and I want to learn more about it. (e.g., to learn about a culture). [intrinsic_learning]
Number of Motivations Distribution
Subjects were allowed to select multiple reasons. How many motivations do people select?
Step5: 30% of respondents listed more than one motivation.
Single Motivation Histogram
For responses with only a single motivation, what is the distribution over motivations.
Step6: Media and work/school are the most popular
Single Motivation Histogram (Desktop vs Mobile)
Step7: For Desktop, the most common motivation is work/school. For Mobile, it is media. Also, for mobile users, conversation is more likely compared to desktop.
Motivation Histogram
For each motivation let's count how often it was chosen as at least one of the motivations.
Step8: Suddenly intrinsic learning features much more prominently. It must be a common occurrence in multi-choice answers.
Double Motivation Co-occurrence Heatmaps
For users who chose 2 motivations, which motivations co-occur?
Step9: Since some motivations are more popular than others, the color coding can be misleading. Let's look at the conditional distributions instead.
Step10: Given that work/school is a motivation, the most common other motivation is intrinsic_learning by a long shot. It seems like the people in our survey who choose 2 motivations like their job/studies.
The pattern is similar for personal decisions.
Given that people are bored/randomly exploring, their most likely other motivation is media; the next most likely is intrinsic_learning.
The pattern is similar for current events.
Response Co-occurrence
Information Depth and Prior Knowledge
Step11: When seeking in-depth information or looking up a fact, readers are more likely to be familiar with the topic. When they are seeking an overview, they are more likely to be unfamiliar.
Step12: Readers familiar with the topic are most likely to be looking up a fact. Unfamiliar users are the most likely to be getting an overview.
Prior Knowledge and Motivation
Step13: When people come for intrinsic learning, they tend to be familiar with the topic already. When people come because of a reference in the media, they tend to be unfamiliar with the topic.
Step14: Bad Visualization
Information Depth and Motivation
Step15: Bored/random users are interested in getting an overview. Users in a conversation are looking up a fact... | Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
import inspect, os
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
parentdir = os.path.dirname(currentdir)
os.sys.path.insert(0,parentdir)
from data_generation.join_traces_and_survey import load_survey_dfs
from response_distributions_util import *
import copy
Explanation: Response Distributions
Let's look at the answer distributions for each of the 3 questions in our survey.
End of explanation
d_survey, d_joined = load_survey_dfs()
d_desktop = d_joined[d_joined['host'] == 'desktop']
d_mobile = d_joined[d_joined['host'] == 'mobile']
d_single_motivation = d_joined[d_joined['motivation'].apply(lambda x: len(x.split('|')) == 1)]
print('Num Responses: ', d_survey.shape[0])
print('Num in EL: ', d_joined.shape[0])
print('Num Mobile Responses in EL: ', d_mobile.shape[0])
print('Num Desktop Responses in EL: ', d_desktop.shape[0])
Explanation: Load and Prep Data
End of explanation
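The 'motivation' column stores multi-select answers separated by '|' (as the split('|') above suggests). A small self-contained sketch on a toy DataFrame, not the real survey data, shows how such a column can be expanded into indicator columns:
import pandas as pd

toy = pd.DataFrame({'motivation': ['media', 'work/school|intrinsic learning', 'bored/random|media']})
# str.get_dummies splits on the separator and builds one 0/1 column per motivation
indicators = toy['motivation'].str.get_dummies(sep='|')
print(indicators.sum())  # how often each motivation appears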
x = 'information depth'
hue = 'host'
title = 'Mobile vs Desktop Information Depth Distribution'
xorder = ['in-depth', 'overview', 'fact']
plot_proportion(d_joined, x, hue, title, xorder = xorder)
Explanation: Q1 Information Depth
I am reading this article to ... [Information Depth]
* look up a specific fact or to get a quick answer. [fact]
* get an overview of the topic. [overview]
* get an in-depth understanding of the topic. [in-depth]
Information Depth Histogram (Desktop vs Mobile)
End of explanation
x = 'prior knowledge'
hue = 'host'
title = 'Mobile vs Desktop'
plot_proportion(d_joined, x, hue, title)
Explanation: The distributions are relatively similar across mobile and desktop. The most common use case is to get an overview, then look up a fact, then get an in-depth understanding.
Q2 Familiarity
Prior to visiting this article ... [Prior Knowledge]
* I was already familiar with the topic. [familiar]
* I was not familiar with the topic and I am learning about it for the first time. [unfamiliar]
Prior Knowledge Histogram (Desktop vs Mobile)
End of explanation
d_in = pd.DataFrame()
d_in['counts'] = d_joined['motivation'].apply(lambda x: len(x.split('|'))).value_counts()
d_in['Proportion'] = d_in['counts'] / d_in['counts'].sum()
d_in['# of Reasons Given'] = d_in.index
fig = sns.barplot(y="Proportion",
x = '# of Reasons Given',
data=d_in,
color = (0.54308344686732579, 0.73391773700714114, 0.85931565621319939)
)
Explanation: Desktop and mobile are basically identical. People are slightly more likely to be familiar with the topic they are reading about.
Q3 Motivation
I am reading this article because ... [Motivation]
* I have a work or school-related assignment. [work/school]
* I need to make a personal decision based on this topic (e.g., to buy a book or game, to choose a travel destination). [personal-decision]
* I want to know more about a current event (e.g. Black Friday, a soccer game, a recent earthquake, somebody's death). [current event]
* the topic was referenced in a piece of media (e.g. TV, radio, article, film, book). [media]
* the topic came up in a conversation. [conversation]
* I am bored or randomly exploring Wikipedia for fun. [bored/random]
* this topic is important to me and I want to learn more about it. (e.g., to learn about a culture). [intrinsic_learning]
Number of Motivations Distribution
Subjects were allowed to select multiple reasons. How many motivations do people select?
End of explanation
d_in = pd.DataFrame()
d_in['counts'] = d_single_motivation['motivation'].value_counts()
d_in['proportion'] = d_in['counts'] / d_in['counts'].sum()
d_in['motivation'] = d_in.index
fig = sns.barplot(y="proportion",
x = 'motivation',
data=d_in,
color = (0.54308344686732579, 0.73391773700714114, 0.85931565621319939),
)
plt.ylabel('Proportion')
plt.xlabel('Motivation')
for item in fig.get_xticklabels():
item.set_rotation(45)
Explanation: 30% of respondents listed more than one motivation.
Single Motivation Histogram
For responses with only a single motivation, what is the distribution over motivations.
End of explanation
x = 'motivation'
hue = 'host'
title = 'Mobile vs Desktop'
order = ['media', 'work/school','intrinsic learning', 'bored/random', 'conversation', 'other','current event', 'personal decision', ]
plot_proportion(d_single_motivation, x, hue, title, xorder = order, rotate=True)
Explanation: Media and work/school are the most popular
Single Motivation Histogram (Desktop vs Mobile)
End of explanation
d_in = pd.DataFrame(columns = ['motivation', 'counts'])
ms = [
'work/school',
'personal decision',
'current event',
'media',"conversation",
'bored/random',
'no response',
'intrinsic learning',
'other'
]
for i, m in enumerate(ms):
d_in.loc[i] = [m, d_joined['motivation'].apply(lambda x: m in x).sum()]
d_in['proportion'] = d_in['counts'] / d_in['counts'].sum()
d_in.sort_values(by = 'counts', inplace = True, ascending = False)
fig = sns.barplot(y="proportion",
x = 'motivation',
data=d_in,
color = (0.54308344686732579, 0.73391773700714114, 0.85931565621319939),
)
plt.ylabel('Proportion')
plt.xlabel('Motivation')
for item in fig.get_xticklabels():
item.set_rotation(45)
Explanation: For Desktop, the most common motivation is work/school. For Mobile, it is media. Also, for mobile users, conversation is more likely compared to desktop.
Motivation Histogram
For each motivation let's count how often it was chosen as at least one of the motivations.
End of explanation
df = copy.deepcopy(d_joined[d_joined['motivation'].apply(lambda x: len(x.split('|')) == 2)])
df['pm'] = df['motivation'].apply(lambda x: '|'.join(sorted(x.split('|'))))
df_joint = pd.DataFrame()
df_joint['count'] = df['pm'].value_counts()
df_joint['pm'] = df_joint.index
df_joint.index = range(0, df_joint.shape[0])
df_joint['m1'] = df_joint['pm'].apply(lambda x: x.split('|')[0])
df_joint['m2'] = df_joint['pm'].apply(lambda x: x.split('|')[1])
df_joint['count'] = df_joint['count'].apply(int)
df_joint2 = copy.deepcopy(df_joint)
df_joint2['pm'] = df_joint2['pm'].apply(lambda x: '|'.join(sorted(x.split('|'), reverse = True)))
df_joint2['m1'] = df_joint2['pm'].apply(lambda x: x.split('|')[0])
df_joint2['m2'] = df_joint2['pm'].apply(lambda x: x.split('|')[1])
df_joint2.index = range(df_joint.shape[0], 2 * df_joint.shape[0])
df_joint12 = pd.concat([df_joint, df_joint2]).pivot("m1", "m2", "count")
#ax = sns.heatmap(df_joint12, annot=True, fmt="0.0f")
#plt.ylabel('Motivation 1')
#plt.xlabel('Motivation 2')
#plt.title('Raw Co-occurence counts')
Explanation: Suddenly intrinsic learning features much more prominently. It must be a common occurrence in multi-choice answers.
Double Motivation Co-occurrence Heatmaps
For users who chose 2 motivations, which motivations co-occur?
End of explanation
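An alternative way to count the unordered pairs (a sketch on toy data; the notebook itself builds the same information with the pivot below):
from collections import Counter
from itertools import combinations

toy_answers = ['work/school|intrinsic learning', 'media|bored/random', 'work/school|intrinsic learning']
pair_counts = Counter()
for answer in toy_answers:
    motivations = sorted(answer.split('|'))
    # count each unordered pair of motivations once per respondent
    for pair in combinations(motivations, 2):
        pair_counts[pair] += 1
print(pair_counts.most_common())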
df_joint12_norm = df_joint12.div(df_joint12.sum(axis=1), axis=0)
ax = sns.heatmap(df_joint12_norm, annot=True, fmt="0.2f")
plt.ylabel('Motivation 1')
plt.xlabel('P(Motivation 2 | Motivation 1)')
plt.title('Conditional Distributions')
Explanation: Since some motivations are more popular than others, the color coding can be misleading. Let's look at the conditional distributions instead.
End of explanation
x = 'information depth'
hue = 'prior knowledge'
title = 'P(Prior Knowledge | Information Depth = x) '
xorder = order = ['in-depth', 'overview', 'fact']
plot_proportion(d_joined, x, hue, title, xorder = xorder, normx=False)
Explanation: Given that work/school is a motivation, the most common other motivation is intrinsic_learning by a long shot. It seems like the people in our survey who choose 2 motivations like their job/studies.
The pattern is similar for personal decisions.
Given that people are bored/randomly exploring, their most likely other motivation is media; the next most likely is intrinsic_learning.
The pattern is similar for current events.
Response Co-occurrence
Information Depth and Prior Knowledge
End of explanation
hue = 'information depth'
x = 'prior knowledge'
title = 'P(Information Depth | Prior Knowledge = x)'
xorder = order = ['familiar', 'unfamiliar']
plot_proportion(d_joined, x, hue, title, xorder = xorder, normx=False)
Explanation: When seeking in-depth information or looking up a fact, readers are more likely to be familiar with the topic. When they are seeking an overview, they are more likely to be unfamiliar.
End of explanation
hue = 'prior knowledge'
x = 'motivation'
title = 'P(Prior Knowledge | Motivation = x)'
plot_proportion(d_single_motivation, x, hue, title, rotate=True, normx=False)
Explanation: Readers familiar with the topic are most likely to be looking up a fact. Unfamiliar users are the most likely to be getting an overview.
Prior Knowledge and Motivation
End of explanation
x = 'prior knowledge'
hue = 'motivation'
title = 'P(Motivation | Prior Knowledge = x)'
plot_proportion(d_single_motivation, x, hue, title, rotate=True, normx=False)
Explanation: When people come for intrinsic learning, they tend to be familiar with the topic already. When people come because of a reference in the media, they tend to be unfamiliar with the topic.
End of explanation
hue = 'information depth'
x = 'motivation'
title = 'P(Information Depth | Motivation = x)'
plot_proportion(d_single_motivation, x, hue, title, rotate=True, normx=False)
Explanation: Bad Visualization
Information Depth and Motivation
End of explanation
x = 'information depth'
hue = 'motivation'
title = 'P(Motivation | Information Depth = x)'
plot_proportion(d_single_motivation, x, hue, title, rotate=True, normx=False)
Bad Visualization
Explanation: Bored/random users are interested in getting an overview. Users in a conversation are looking up a fact...
End of explanation |
6,941 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
author = "Peter J Usherwood"
Estas tutorias são um introdução para Python no contexto de ciência de dados. Elas não assumem conhecimento prévio de Python ou programação de computadors, comecando na começa. Os dois primeiro notebooks chamado "Conceitos Básicos" são os fundamentais de Python e são relevantes para qualquer pessoa querando a aprender Python.
Variáveis - Estruturas de dados
Variáveis com um valor
As variáveis são as representações dos nossos informações
Em idiomas de programação de computadores um variável tem dois propatidades, um tipo, e um valor. Em Python voce só precisa atribuir um valor, Python vai infere o tipo.
Criando as variávels
Quando nós criamos um variável com um valor (e tipo) nós "instanciamos" a variavel
Para instanciar uma variável nós precisamos duas coisas
Step1: Há mais tipos dos variávels então isso, vamos encontrar mais mais tarde.
Advançado!
Usando a função type(x) poderia ser muito util quando voce tem um erro no seu codigo de tipo "Type Error". Acontece muito em Python porque variaveis não são fixado para um tipo.
Mudando os valores
Quando nós temos uma variável nós podemos mudar o valor com operações
Step2: Variáveis com vários valores
As listas são um tipo de variável com vários valores.
Estes tem um conjunto das variávels em uma lista, você busca sobre dando o índice da variável voce quer
O índice é um inteiro
O valor é qualquer tipo de variável
As listas tem ordem
Os valores podem ter tipos das variáveis diferentes em uma lista
Step3: Os dicionários são um outro tipo de variável com vários valores. Nota | Python Code:
lista = [1,2,23,4,2]
lista.sort()
lista
# Instanciando
# A variável "a" é um numero com valor 7
a = 7
# A variável "name" é cadeia, ele pode tem qualquer caracteres no teclado
nome = 'Felipe'
print('O valor da "a" é:', a) # Aqui "print()" e "type()" são funções, vou explicar sobre elas mais tarde
print('O tipo da "a" é:', type(a))
print('O valor da "nome" é:', nome)
print('O tipo da "nome" é:', type(nome))
Explanation: author = "Peter J Usherwood"
Estas tutorias são um introdução para Python no contexto de ciência de dados. Elas não assumem conhecimento prévio de Python ou programação de computadors, comecando na começa. Os dois primeiro notebooks chamado "Conceitos Básicos" são os fundamentais de Python e são relevantes para qualquer pessoa querando a aprender Python.
Variáveis - Estruturas de dados
Variáveis com um valor
As variáveis são as representações dos nossos informações
Em idiomas de programação de computadores um variável tem dois propatidades, um tipo, e um valor. Em Python voce só precisa atribuir um valor, Python vai infere o tipo.
Criando as variávels
Quando nós criamos um variável com um valor (e tipo) nós "instanciamos" a variavel
Para instanciar uma variável nós precisamos duas coisas: um nome pela variável (isso pode ser quase qualquer string de texto, mas nao pode ter espaços ou começa com um numero), e o valor, nos atribuimos o valor para a variável usando o '=' operador.
End of explanation
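A small additional sketch (added for illustration) of why type() helps when a "TypeError" appears in dynamically typed code:
a = 7
nome = 'Felipe'
print(type(a), type(nome))   # <class 'int'> <class 'str'>

try:
    resultado = a + nome     # mixing int and str raises TypeError
except TypeError as erro:
    # checking the types of the operands explains the error immediately
    print('TypeError:', erro, '| types:', type(a), type(nome))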
# Operações numéricas
a = 6
b = 10
c = a*b # multiplicação
print('"a" is', a,'\n"b" is', b, '\n"c (a*b)" is', c)
print('\n') # \n é uma caracter de linha nova
a = a*b # multiplicação e substituindo para "a" novamente
print('"a" is', a,'\n"b" is', b)
print('\n')
a = a/10 # Aqui nós usamos uma variável sem atribuindo ela um nome (10)
print('"a" is', a)
print('\n')
# Operações de strings
a = 'Olá'
b = 'Felipe'
c = a + ' ' + b # Aqui nós usamos vários operações em uma linha
print('"a" is', a,'\n"b" is', b, '\nc (a+b) is', c)
print('\n')
Explanation: Há mais tipos dos variávels então isso, vamos encontrar mais mais tarde.
Advançado!
Usando a função type(x) poderia ser muito util quando voce tem um erro no seu codigo de tipo "Type Error". Acontece muito em Python porque variaveis não são fixado para um tipo.
Mudando os valores
Quando nós temos uma variável nós podemos mudar o valor com operações:
- A operaçâo mais basica é o '=' nós usamos antes para instanciar, nós podemos usar isso para mudar o valor directamente (você pode pensar sobre isso como criando uma variável nova e substituindo a variável antiga)
O de cima é um caso especial, normalmente você vai usar outras operações e com elas o comportamento é diferente
- Nós precisamos dois variávels e uma operação, a combinação vai dar uma variável nova terçia (normalmente)
- Esta variável terça pode ser atribuido para uma variável usado na operação
- As duas variávels usado dever ser o mesmo tipo
- As operações vão funcaionar diferente dependente no tipo da variável
End of explanation
# Instanciando
lista_eg = [6,10,'Paulo']
print('Os valores do "lista_eg" estão:', lista_eg)
print('O tipo do "lista_eg" está:', type(lista_eg))
print('\n')
# Chamando
print('O valor do segundo índice está:', lista_eg[1]) # Nota nas listas de Python o primeiro índice é 0 (não 1)
print('\n')
# Substiuindo/Mudando os valores
lista_eg[1] = lista_eg[1]*4
print('O valor do segundo índice está:', lista_eg[1])
print('\n')
lista_eg
Explanation: Variáveis com vários valores
As listas são um tipo de variável com vários valores.
Estes tem um conjunto das variávels em uma lista, você busca sobre dando o índice da variável voce quer
O índice é um inteiro
O valor é qualquer tipo de variável
As listas tem ordem
Os valores podem ter tipos das variáveis diferentes em uma lista
End of explanation
# Instanciando
dict_eg = {'cebolas':6, 'cenuras':10, 'nome':'Sopa'}
print('Os valores do "dict_eg" é:', dict_eg)
print('O tipo do "dict_eg" é:', type(dict_eg))
print('\n')
# Chamando
print('O valor pelas "cebolas" é:', dict_eg['cebolas'])
print('\n')
# Substiuindo/Mudando os valores
dict_eg['cebolas'] = 50
print('O valor pelas "cebolas" é:', dict_eg['cebolas'])
print('\n')
Explanation: Os dicionários são um outro tipo de variável com vários valores. Nota: dicionários de Python são dados na forma de JSON
Estes tem um conjunto de "pares de valores-chaves", você busca sobre dado usando uma "chave" (única)
A chave é uma referencia única
O valor poder ser qualquer tipo de variável
Os dicionários não tem ordem
Os valores podem ter tipos das variáveis diferentes em um dicionário
End of explanation |
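A short extra sketch of two common dictionary operations (iterating over key/value pairs and testing whether a key exists), reusing the dict_eg names from above:
dict_eg = {'cebolas': 50, 'cenuras': 10, 'nome': 'Sopa'}

# iterate over key/value pairs
for chave, valor in dict_eg.items():
    print(chave, '->', valor)

# membership tests look at the keys, not the values
print('cebolas' in dict_eg)   # True
print('Sopa' in dict_eg)      # False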
6,942 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Handwritten Number Recognition with TFLearn and MNIST
In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9.
This kind of neural network is used in a variety of real-world applications including
Step1: Retrieving training and test data
The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.
Each MNIST data point has
Step2: Visualize the training data
Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with its corresponding label in the title.
Step3: Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define
Step4: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.
Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
Step5: Testing
After you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.
A good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy! | Python Code:
# Import Numpy, TensorFlow, TFLearn, and MNIST data
import numpy as np
import tensorflow as tf
import tflearn
import tflearn.datasets.mnist as mnist
Explanation: Handwritten Number Recognition with TFLearn and MNIST
In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9.
This kind of neural network is used in a variety of real-world applications including: recognizing phone numbers and sorting postal mail by address. To build the network, we'll be using the MNIST data set, which consists of images of handwritten numbers and their correct labels 0-9.
We'll be using TFLearn, a high-level library built on top of TensorFlow to build the neural network. We'll start off by importing all the modules we'll need, then load the data, and finally build the network.
End of explanation
# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
Explanation: Retrieving training and test data
The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.
Each MNIST data point has:
1. an image of a handwritten digit and
2. a corresponding label (a number 0-9 that identifies the image)
We'll call the images, which will be the input to our neural network, X and their corresponding labels Y.
We're going to want our labels as one-hot vectors, which are vectors that hold mostly 0's and one 1. It's easiest to see this in an example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0].
Flattened data
For this example, we'll be using flattened data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values.
Flattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network.
End of explanation
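A minimal NumPy sketch (independent of TFLearn, added for illustration) of the two transformations described above: one-hot encoding a label and flattening a 28x28 image:
import numpy as np

label = 4
one_hot = np.zeros(10)
one_hot[label] = 1          # [0 0 0 0 1 0 0 0 0 0]

image = np.random.rand(28, 28)
flat = image.reshape(784)   # 1-D vector of 784 pixel values
print(one_hot, flat.shape)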
# Visualizing the data
import matplotlib.pyplot as plt
%matplotlib inline
# Function for displaying a training image by it's index in the MNIST set
def show_digit(index):
label = trainY[index].argmax(axis=0)
# Reshape 784 array into 28x28 image
image = trainX[index].reshape([28,28])
plt.title('Training data, index: %d, Label: %d' % (index, label))
plt.imshow(image, cmap='gray_r')
plt.show()
# Display the first (index 0) training image
show_digit(0)
Explanation: Visualize the training data
Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with its corresponding label in the title.
End of explanation
len(trainY[0])
# Define the neural network
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
# Include the input layer, hidden layer(s), and set how you want to train the model
#Input layer
net = tflearn.input_data([None, len(trainX[0])])
#Hidden layer
net = tflearn.fully_connected(net, n_units=100, activation='ReLU')
net = tflearn.fully_connected(net, n_units=20, activation='ReLU')
#Output layer
net = tflearn.fully_connected(net, n_units=len(trainY[0]), activation='softmax')
#Training
net = tflearn.regression(net,
optimizer='sgd',
learning_rate=0.1,
loss='categorical_crossentropy'
)
# This model assumes that your network is named "net"
model = tflearn.DNN(net, tensorboard_verbose=3, tensorboard_dir='mnist_model')
return model
# Build the model
model = build_model()
Explanation: Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define:
The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data.
Hidden layers, which recognize patterns in data and connect the input to the output layer, and
The output layer, which defines how the network learns and outputs a label for a given image.
Let's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784 element long vectors to encode our input data, so we need 784 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call, it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling tflearn.fully_connected(net, n_units).
Then, to set how you train the network, use:
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with categorical cross-entropy.
Finally, you put all this together to create the model with tflearn.DNN(net).
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
Hint: The final output layer must have 10 output nodes (one for each digit 0-9). It's also recommended to use a softmax activation layer as your final output layer.
End of explanation
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=20)
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.
Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
End of explanation
# Compare the labels that our model predicts with the actual labels
# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.
predictions = np.array(model.predict(testX)).argmax(axis=1)
# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels
actual = testY.argmax(axis=1)
test_accuracy = np.mean(predictions == actual, axis=0)
# Print out the result
print("Test accuracy: ", test_accuracy)
Explanation: Testing
After you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.
A good result will be higher than 95% accuracy. Some simple models have been known to get up to 99.7% accuracy!
End of explanation |
6,943 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 9 Convolutional Networks
Convolution is a specialized kind of linear operation.
9.1 The Convolution Operation
\begin{align}
s(t) &= \int x(a) w(t-a) \mathrm{d}a \\
&= (x \ast w)(t)
\end{align}
where $x$ is the input, $w$ is the kernel, and $s(t)$ is referred to as the feature map.
Since convolution is commutative,
\begin{align}
S(i, j) &= (I \ast K)(i, j) &= \sum_m \sum_n I(m, n) K(i - m, j - n) \\
&= (K \ast I)(i, j) &= \sum_m \sum_n I(i - m, j - n) K(m, n)
\end{align}
Many machine learning libraries implement cross-correlation but call it convolution.
Discrete convolution can be viewed as multiplication by a matrix.
Step1: 9.2 Motivation
Convolution leverages three important ideas
Step2: 9.3 Pooling
A pooling function replaces the output of the net at a certain location with a summary statistic of the nearby outputs.
popular pooling functions
Step3: 9.4 Convolution and Pooling as an Infinitely Strong Prior
Prior
Step4: Comparison of local connections, convolution, and full connections.
convolutional layers $\to$ tiled convolution $\to$ locally connected layer | Python Code:
show_image("fig9_1.png", figsize=(8, 8))
Explanation: Chapter 9 Convolutional Networks
Convolution is a specialized kind of linear operation.
9.1 The Convolution Operation
\begin{align}
s(t) &= \int x(a) w(t-a) \mathrm{d}a \\
&= (x \ast w)(t)
\end{align}
where $x$ is the input, $w$ is the kernel, and $s(t)$ is referred to as the feature map.
Since convolution is commutative,
\begin{align}
S(i, j) &= (I \ast K)(i, j) &= \sum_m \sum_n I(m, n) K(i - m, j - n) \\
&= (K \ast I)(i, j) &= \sum_m \sum_n I(i - m, j - n) K(m, n)
\end{align}
Many machine learning libraries implement cross-correlation but call it convolution.
Discrete convolution can be viewed as multiplication by a matrix.
End of explanation
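A quick numerical sketch of discrete convolution and its commutativity using NumPy (added for illustration, not taken from the book's code):
import numpy as np

x = np.array([1.0, 2.0, 3.0])   # input
w = np.array([0.5, 0.25])       # kernel
print(np.convolve(x, w))        # x * w
print(np.convolve(w, x))        # w * x, identical because convolution commutes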
show_image("fig9_5.png", figsize=(8, 5))
Explanation: 9.2 Motivation
Convolution leverages three important ideas:
sparse interactions: fewer parameters
parameter sharing: tied weights
equivariant representations:
a function $f(x)$ is equivariant to a function $g$ if $f(g(x)) = g(f(x))$.
End of explanation
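A small sketch (added for illustration) checking equivariance to translation numerically: delaying the input and then convolving gives the same result as convolving and then delaying, so f(g(x)) = g(f(x)) when g is a shift:
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
w = np.array([1.0, -1.0, 0.5])

def g(signal):
    # translation: delay the signal by one sample (prepend a zero)
    return np.concatenate(([0.0], signal))

def f(signal):
    # full discrete convolution with the kernel w
    return np.convolve(signal, w)

print(np.allclose(f(g(x)), g(f(x))))   # True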
show_image("fig9_7.png", figsize=(10, 8))
show_image("fig9_9.png", figsize=(10, 8))
Explanation: 9.3 Pooling
A pooling function replaces the output of the net at a certain location with a summary statistic of the nearby outputs.
popular pooling functions:
max
average
L2 norm
weighted average
strong prior: function must be invariant to small translations.
End of explanation
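A compact NumPy sketch (added for illustration) of 2x2 max pooling with stride 2 on a small feature map:
import numpy as np

feature_map = np.arange(16, dtype=float).reshape(4, 4)
# group the 4x4 map into non-overlapping 2x2 blocks and take the max of each block
pooled = feature_map.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(feature_map)
print(pooled)   # 2x2 output; each entry summarises one 2x2 neighbourhood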
show_image("matlab_conv_2d.png", figsize=(12, 8))
A = np.random.rand(3, 3)
A
B = np.random.rand(4, 4)
B
def gen_kernel_fn(kernel):
def kernel_fn(a, x_start, y_start):
x_size, y_size = kernel.shape
a_slice = a[x_start:x_start+x_size, y_start:y_start+y_size]
return (a_slice * kernel).sum()
return kernel_fn
def calc_conv2d_res_size(a, kernel):
res_x_size = a.shape[0] - kernel.shape[0] + 1
res_y_size = a.shape[1] - kernel.shape[1] + 1
return res_x_size, res_y_size
def conv2d(a, kernel):
kernel_fn = gen_kernel_fn(kernel)
res_x_size, res_y_size = calc_conv2d_res_size(a, kernel)
res = np.zeros((res_x_size, res_y_size))
for x in range(res_x_size):
for y in range(res_y_size):
res[x, y] = kernel_fn(a, x, y)
return res
# valid convolution
conv2d(B, A)
def calc_2d_pad_width(target_size, real_size):
pad_x_width = (target_size[0] - real_size[0]) / 2
pad_y_width = (target_size[1] - real_size[1]) / 2
return np.array([[pad_x_width] * 2, [pad_y_width] * 2], dtype=np.int)
def zero_pad_and_conv2d(a, kernel, target_size):
res_size = calc_conv2d_res_size(a, kernel)
pad_width = calc_2d_pad_width(target_size, res_size)
a_pad = np.pad(a, pad_width, 'constant', constant_values=0)
return conv2d(a_pad, kernel)
# same convolution
same_conv_size = B.shape
zero_pad_and_conv2d(B, A, same_conv_size)
# full convolution
full_conv_size = [x1 + x2 - 1 for (x1, x2) in zip(B.shape, A.shape)]  # full output size is m + k - 1
print("full convolution size: {}".format(full_conv_size))
zero_pad_and_conv2d(B, A, full_conv_size)
Explanation: 9.4 Convolution and Pooling as an Infinitely Strong Prior
Prior: weak or strong <== how concentrated the probability density in the prior is.
We can imagine a convolutional net as being similar to a fully connected net, but with an infinitely strong prior over its weights: the weights for one hidden unit must be identical to the weights of its neighbor, but shifted in space.
convolution and pooling can cause underfitting.
9.5 Variants of the Basic Convolution Function
4-D tensors: (batch_size, height, width, channels)
Three zero-padding strategies:
valid convolution: m - k + 1 = m - (k - 1), no padding
same convolution: output size m, with just enough zero-padding that the kernel center visits every input position
full convolution: m + k - 1, with enough zero-padding that the kernel reaches the input's corners
see details below.
End of explanation
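If SciPy is available, the three padding variants can be cross-checked against scipy.signal.convolve2d, whose mode argument uses the same names (a sketch; the chapter's own implementation is the pure-NumPy code above, which does not flip the kernel):
import numpy as np
from scipy.signal import convolve2d

B = np.random.rand(4, 4)   # input (m = 4)
A = np.random.rand(3, 3)   # kernel (k = 3)
print(convolve2d(B, A, mode='valid').shape)  # (2, 2)  -> m - k + 1
print(convolve2d(B, A, mode='same').shape)   # (4, 4)  -> m
print(convolve2d(B, A, mode='full').shape)   # (6, 6)  -> m + k - 1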
show_image("fig9_16.png", figsize=(10, 8))
Explanation: Comparison of local connections, convolution, and full connections.
convolutional layers $\to$ tiled convolution $\to$ locally connected layer
End of explanation |
6,944 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Semana 1
Step1: Este código faz com que primeiramente toda a primeira linha seja preenchida, em seguida a segunda e assim sucessivamente. Se nós quiséssemos que a primeira coluna fosse preenchida e em seguida a segunda coluna e assim por diante, como ficaria o código?
Um exemplo
Step2: Exercício 1
Step3: Exercício 2
Step4: Praticar tarefa de programação
Step5: Exercício 2
Step6: Semana 2
Step7: Exercício
Escrever uma função que recebe uma lista de Strings contendo nomes de pessoas como parâmetro e devolve o nome mais curto. A função deve ignorar espaços antes e depois do nome e deve devolver o nome com a primeira letra maiúscula.
Step8: Exercício
Escreva uma função que recebe um array de strings como parâmetro e devolve o primeiro string na ordem lexicográfica, ignorando-se maiúsculas e minúsculas
Step9: Incluindo <pre>print(__name__)</pre> na última linha de fibo.py, ao fazer a importação import fibo no shell do Python, imprime 'fibo', que é o nome do programa.
Ao incluir
<pre>
if __name__ == "__main__"
Step10: Exercício 1
Step11: Exercício 2
Step12: Exercícios adicionais
Exercício 1
Step13: Exercício 2
Step14: Semana 3 - POO – Programação Orientada a Objetos
Step15: POO
Step16: Testes para praticar
Step17: POO – Programação Orientada a Objetos – Parte 2
Step18: TESTE PARA PRATICAR POO – Programação Orientada a Objetos – Parte 2
Step19: Códigos Testáveis
Step20: Fixture
Step21: Parametrização
Step22: Exercícios
Escreva uma versão do TestaBhaskara usando @pytest.mark.parametrize
Escreva uma bateria de testes para o seu código preferido
Tarefa de programação
Step23: Exercício 2
Step24: Exercício 2
Step25: Week 4
Busca Sequencial
Step26: Complexidade Computacional
Análise matemática do desempenho de um algoritmo
Estudo analítico de
Step27: Tarefa de programação
Step28: Exercício 2
Step29: Praticar tarefa de programação
Step30: Exercício 2
Step31: Week 5 - Algoritmo de Ordenação da Bolha - Bubblesort
Lista como um tubo de ensaio vertical, os elementos mais leves sobem à superfície como uma bolha, os mais pesados afundam.
Percorre a lista múltiplas vezes; a cada passagem, compara todos os elementos adjacentes e troca de lugar os que estiverem fora de ordem
Step32: Exemplo do algoritmo bubblesort em ação
Step33: Comparação de Desempenho
Módulo time
Step34: Melhoria no Algoritmo de Ordenação da Bolha
Percorre a lista múltiplas vezes; a cada passagem, compara todos os elementos adjacentes e troca de lugar os que estiverem fora de ordem.
Melhoria
Step35: Site com algoritmos de ordenação http
Step36: Busca Binária
Objetivo
Step37: Complexidade da Busca Binária
Dado uma lista de n elementos
No pior caso, teremos que efetuar
Step38: Exercício 2
Step39: Praticar tarefa de programação
Step40: Week 6
Recursão (Definição. Como resolver um problema recursivo. Exemplos. Implementações.)
Step41: Mergesort
Ordenação por Intercalação
Step42: Base da recursão é a condição que faz o problema ser definitivamente resolvido. Caso essa condição, essa base da recursão, não seja satisfeita, o problema continua sendo reduzido em instâncias menores até que a condição passe a ser satisfeita.
Chamada recursiva é a linha onde a função faz uma chamada a ela mesma.
Função recursiva é a função que chama ela mesma.
A linha 2 tem a condição que é a base da recursão
A linha 5 tem a chamada recursiva
Para o algoritmo funcionar corretamente, é necessário trocar a linha 3 por “return 1”
if (n < 2)
Step43: Tarefa de programação
Step44: Exercício 2
Step45: Exercício 3 | Python Code:
def cria_matriz(tot_lin, tot_col, valor):
matriz = [] #lista vazia
for i in range(tot_lin):
linha = []
for j in range(tot_col):
linha.append(valor)
matriz.append(linha)
return matriz
x = cria_matriz(2, 3, 99)
x
Explanation: Semana 1
End of explanation
def cria_matriz(num_linhas, num_colunas):
matriz = [] #lista vazia
for i in range(num_linhas):
linha = []
for j in range(num_colunas):
linha.append(0)
matriz.append(linha)
for i in range(num_colunas):
for j in range(num_linhas):
matriz[j][i] = int(input("Digite o elemento [" + str(j) + "][" + str(i) + "]: "))
return matriz
x = cria_matriz(2, 3)
x
def tarefa(mat):
dim = len(mat)
for i in range(dim):
print(mat[i][dim-1-i], end=" ")
mat = [[1,2,3],[4,5,6],[7,8,9]]
tarefa(mat)
# Observação: o trecho do print (end = " ") irá mudar a finalização padrão do print
# que é pular para a próxima linha. Com esta mudança, o cursor permanecerá na mesma
# linha aguardando a impressão seguinte.
Explanation: Este código faz com que primeiramente toda a primeira linha seja preenchida, em seguida a segunda e assim sucessivamente. Se nós quiséssemos que a primeira coluna fosse preenchida e em seguida a segunda coluna e assim por diante, como ficaria o código?
Um exemplo: se o usuário digitasse o seguinte comando “x = cria_matriz(2,3)” e em seguida informasse os seis números para serem armazenados na matriz, na seguinte ordem: 1, 2, 3, 4, 5, 6; o x teria ao final da função a seguinte matriz: [[1, 3, 5], [2, 4, 6]].
End of explanation
def dimensoes(A):
'''Função que recebe uma matriz como parâmetro e imprime as dimensões da matriz recebida, no formato iXj.
Obs: i = número de linhas, j = número de colunas
Exemplo:
>>> minha_matriz = [[1],
[2],
[3]
]
>>> dimensoes(minha_matriz)
>>> 3X1
'''
lin = len(A)
col = len(A[0])
return print("%dX%d" % (lin, col))
matriz1 = [[1], [2], [3]]
dimensoes(matriz1)
matriz2 = [[1, 2, 3], [4, 5, 6]]
dimensoes(matriz2)
Explanation: Exercício 1: Tamanho da matriz
Escreva uma função dimensoes(matriz) que recebe uma matriz como parâmetro e imprime as dimensões da matriz recebida, no formato iXj.
Exemplos:
minha_matriz = [[1], [2], [3]]
dimensoes(minha_matriz)
3X1
minha_matriz = [[1, 2, 3], [4, 5, 6]]
dimensoes(minha_matriz)
2X3
End of explanation
def soma_matrizes(m1, m2):
def dimensoes(A):
lin = len(A)
col = len(A[0])
return ((lin, col))
if dimensoes(m1) != dimensoes(m2):
return False
else:
matriz = []
for i in range(len(m1)):
linha = []
for j in range(len(m1[0])):
linha.append(m1[i][j] + m2[i][j])
matriz.append(linha)
return matriz
m1 = [[1, 2, 3], [4, 5, 6]]
m2 = [[2, 3, 4], [5, 6, 7]]
soma_matrizes(m1, m2)
m1 = [[1], [2], [3]]
m2 = [[2, 3, 4], [5, 6, 7]]
soma_matrizes(m1, m2)
Explanation: Exercício 2: Soma de matrizes
Escreva a função soma_matrizes(m1, m2) que recebe 2 matrizes e devolve uma matriz que represente sua soma caso as matrizes tenham dimensões iguais. Caso contrário, a função deve devolver False.
Exemplos:
m1 = [[1, 2, 3], [4, 5, 6]]
m2 = [[2, 3, 4], [5, 6, 7]]
soma_matrizes(m1, m2) => [[3, 5, 7], [9, 11, 13]]
m1 = [[1], [2], [3]]
m2 = [[2, 3, 4], [5, 6, 7]]
soma_matrizes(m1, m2) => False
End of explanation
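An alternative, more compact sketch of the same idea using zip (just an extra illustration, not the course's expected answer):
def soma_matrizes_zip(m1, m2):
    # devolve False se as dimensões forem diferentes
    if len(m1) != len(m2) or len(m1[0]) != len(m2[0]):
        return False
    return [[a + b for a, b in zip(lin1, lin2)] for lin1, lin2 in zip(m1, m2)]

print(soma_matrizes_zip([[1, 2, 3], [4, 5, 6]], [[2, 3, 4], [5, 6, 7]]))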
def imprime_matriz(A):
for i in range(len(A)):
for j in range(len(A[i])):
print(A[i][j])
minha_matriz = [[1], [2], [3]]
imprime_matriz(minha_matriz)
minha_matriz = [[1, 2, 3], [4, 5, 6]]
imprime_matriz(minha_matriz)
Explanation: Praticar tarefa de programação: Exercícios adicionais (opcionais)
Exercício 1: Imprimindo matrizes
Como proposto na primeira vídeo-aula da semana, escreva uma função imprime_matriz(matriz), que recebe uma matriz como parâmetro e imprime a matriz, linha por linha. Note que NÃO se deve imprimir espaços após o último elemento de cada linha!
Exemplos:
minha_matriz = [[1], [2], [3]]
imprime_matriz(minha_matriz)
1
2
3
minha_matriz = [[1, 2, 3], [4, 5, 6]]
imprime_matriz(minha_matriz)
1 2 3
4 5 6
End of explanation
def sao_multiplicaveis(m1, m2):
'''Recebe duas matrizes como parâmetros e devolve True se as matrizes forem multiplicáveis (número de colunas
da primeira é igual ao número de linhs da segunda). False se não forem
'''
if len(m1) == len(m2[0]):
return True
else:
return False
m1 = [[1, 2, 3], [4, 5, 6]]
m2 = [[2, 3, 4], [5, 6, 7]]
sao_multiplicaveis(m1, m2)
m1 = [[1], [2], [3]]
m2 = [[1, 2, 3]]
sao_multiplicaveis(m1, m2)
Explanation: Exercício 2: Matrizes multiplicáveis
Duas matrizes são multiplicáveis se o número de colunas da primeira é igual ao número de linhas da segunda. Escreva a função sao_multiplicaveis(m1, m2) que recebe duas matrizes como parâmetro e devolve True se as matrizes forem multiplicavéis (na ordem dada) e False caso contrário.
Exemplos:
m1 = [[1, 2, 3], [4, 5, 6]]
m2 = [[2, 3, 4], [5, 6, 7]]
sao_multiplicaveis(m1, m2) => False
m1 = [[1], [2], [3]]
m2 = [[1, 2, 3]]
sao_multiplicaveis(m1, m2) => True
End of explanation
"áurea gosta de coentro".capitalize()
"AQUI".capitalize()
# função para remover espaços em branco
" [email protected] ".strip()
"o abecedário da Xuxa é didático".count("a")
"o abecedário da Xuxa é didático".count("á")
"o abecedário da Xuxa é didático".count("X")
"o abecedário da Xuxa é didático".count("x")
"o abecedário da Xuxa é didático".count("z")
"A vida como ela seje".replace("seje", "é")
"áurea gosta de coentro".capitalize().center(80) #80 caracteres de largura, no centro apareça este texto
texto = "Ao que se percebe, só há o agora"
texto
texto.find("q")
texto.find('se')
texto[7] + texto[8]
texto.find('w')
fruta = 'amora'
fruta[:4] # desde o começo até a posição TRÊS!
fruta[1:] # desde a posição 1 (começa no zero) até o final
fruta[2:4] # desde a posição 2 até a posição 3
Explanation: Semana 2
End of explanation
def mais_curto(lista_de_nomes):
    menor = lista_de_nomes[0].strip()  # considera o primeiro nome (sem espaços) como o menor
    for i in lista_de_nomes:
        nome = i.strip()               # ignora espaços antes e depois do nome
        if len(nome) < len(menor):
            menor = nome
    return menor.capitalize()
lista = ['carlos', 'césar', 'ana', 'vicente', 'maicon', 'washington']
mais_curto(lista)
ord('a')
ord('A')
ord('b')
ord('m')
ord('M')
ord('AA')
'maçã' > 'banana'
'Maçã' > 'banana'
'Maçã'.lower() > 'banana'.lower()
txt = 'José'
txt = txt.lower()
txt
lista = ['ana', 'maria', 'José', 'Valdemar']
len(lista)
lista[3].lower()
lista[2]
lista[2] = lista[2].lower()
lista
for i in lista:
print(i)
lista[0][0]
Explanation: Exercício
Escrever uma função que recebe uma lista de Strings contendo nomes de pessoas como parâmetro e devolve o nome mais curto. A função deve ignorar espaços antes e depois do nome e deve devolver o nome com a primeira letra maiúscula.
End of explanation
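A one-line alternative using min with a key function (a sketch, not the course's reference answer):
def mais_curto_min(lista_de_nomes):
    # min() with key=len picks the shortest name after stripping surrounding spaces
    return min((nome.strip() for nome in lista_de_nomes), key=len).capitalize()

print(mais_curto_min(['carlos', ' ana ', 'washington']))  # Ana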
def menor_string(array_string):
for i in range(len(array_string)):
array_string[i] = array_string[i].lower()
menor = array_string[0] # considera o primeiro como o menor
for i in array_string:
        if i < menor:  # compara as strings inteiras (ordem lexicográfica), não só a primeira letra
menor = i
return menor
lista = ['maria', 'José', 'Valdemar']
menor_string(lista)
# Código para inverter string e deixa maiúsculo
def fazAlgo(string):
pos = len(string)-1
string = string.upper()
while pos >= 0:
print(string[pos],end = "")
pos = pos - 1
fazAlgo("paralelepipedo")
# Código que deixa maiúsculo as letras de ordem ímpar:
def fazAlgo(string):
pos = 0
string1 = ""
string = string.lower()
stringMa = string.upper()
while pos < len(string):
if pos % 2 == 0:
string1 = string1 + stringMa[pos]
else:
string1 = string1 + string[pos]
pos = pos + 1
return string1
print(fazAlgo("paralelepipedo"))
# Código que tira os espaços em branco
def fazAlgo(string):
pos = 0
string1 = ""
while pos < len(string):
if string[pos] != " ":
string1 = string1 + string[pos]
pos = pos + 1
return string1
print(fazAlgo("ISTO É UM TESTE"))
# e para retornar "Istoéumteste", ou seja, só deixar a primeira letra maiúscula...
def fazAlgo(string):
pos = 0
string1 = ""
while pos < len(string):
if string[pos] != " ":
string1 = string1 + string[pos]
pos = pos + 1
string1 = string1.capitalize()
return string1
print(fazAlgo("ISTO É UM TESTE"))
x, y = 10, 20
x, y
x
y
def peso_altura():
return 77, 1.83
peso_altura()
peso, altura = peso_altura()
peso
altura
# Atribuição múltipla em C (vacas magras...)
'''
int a, b, temp
a = 10
b = 20
temp = a
a = b
b = temp
'''
a, b = 10, 20
a, b = b, a
a, b
# Atribuição aumentada
x = 10
x = x + 10
x
x = 10
x += 10
x
x = 3
x *= 2
x
x = 2
x **= 10
x
x = 100
x /= 3
x
def pagamento_semanal(valor_por_hora, num_horas = 40):
return valor_por_hora * num_horas
pagamento_semanal(10)
pagamento_semanal(10, 20) # aceita, mesmo assim, o segundo parâmetro.
# Asserção de Invariantes
def pagamento_semanal(valor_por_hora, num_horas = 40):
assert valor_por_hora >= 0 and num_horas > 0
return valor_por_hora * num_horas
pagamento_semanal(30, 10)
pagamento_semanal(10, -10)
x, y = 10, 12
x, y = y, x
print("x = ",x,"e y = ",y)
x = 10
x += 10
x /= 2
x //= 3
x %= 2
x *= 9
print(x)
def calculo(x, y = 10, z = 5):
return x + y * z;
calculo(1, 2, 3)
calculo(1, 2) # 2 entra em y.
def calculo(x, y = 10, z = 5):
return x + y * z;
print(calculo(1, 2, 3))
# calculo()                 # TypeError: falta o argumento obrigatório x
# print(calculo( ,12, 10))  # SyntaxError: não é possível omitir um argumento posicional no meio
def horario_em_segundos(h, m, s):
assert h >= 0 and m >= 0 and s >= 0
return h * 3600 + m * 60 + s
print(horario_em_segundos (3,0,50))
print(horario_em_segundos(1,2,3))
print(horario_em_segundos (-1,20,30))
# Módulos em Python
def fib(n): # escreve a série de Fibonacci até n
a, b = 0, 1
while b < n:
print(b, end = ' ')
a, b = b, a + b
print()
def fib2(n):
result = []
a, b = 0, 1
while b < n:
result.append(b)
a, b = b, a + b
return result
'''
E no shell do Python (chamado na pasta que contém o arquivo fibo.py)
>>> import fibo
>>> fibo.fib(100)
1 1 2 3 5 8 13 21 34 55 89
>>> fibo.fib2(100)
[1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
>>> fibo.fib2(1000)
[1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987]
>>> meuFib = fibo.fib
>>> meuFib(20)
1 1 2 3 5 8 13
'''
Explanation: Exercício
Escreva uma função que recebe um array de strings como parâmetro e devolve o primeiro string na ordem lexicográfica, ignorando-se maiúsculas e minúsculas
End of explanation
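The same exercise can also be solved with min and a key that ignores case (an alternative sketch):
def menor_string_min(array_string):
    # compara as strings inteiras lexicograficamente, ignorando maiúsculas/minúsculas
    return min(array_string, key=str.lower)

print(menor_string_min(['maria', 'José', 'Valdemar']))  # José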
def fazAlgo(string): # inverte a string e deixa as vogais maiúsculas
pos = len(string)-1 # define a variável posição do array
stringMi = string.lower() # aqui estão todas minúsculas
string = string.upper() # aqui estão todas maiúsculas
stringRe = "" # string de retorno
while pos >= 0:
if string[pos] == 'A' or string[pos] == 'E' or string[pos] == 'I' or string[pos] == 'O' or string[pos] == 'U':
stringRe = stringRe + string[pos]
else:
stringRe = stringRe + stringMi[pos]
pos = pos - 1
return stringRe
if __name__ == "__main__":
print(fazAlgo("teste"))
print(fazAlgo("o ovo do avestruz"))
print(fazAlgo("A CASA MUITO ENGRAÇADA"))
print(fazAlgo("A TELEvisão queBROU"))
print(fazAlgo("A Vaca Amarela"))
Explanation: Incluindo <pre>print(__name__)</pre> na última linha de fibo.py, ao fazer a importação import fibo no shell do Python, imprime 'fibo', que é o nome do programa.
Ao incluir
<pre>
if __name__ == "__main__":
import sys
fib(int(sys.argv[1]))
</pre>
podemos ver se está sendo executado como script (com o if do jeito que está) ou como módulo dentro de outro código (se o nome não for main, está sendo importado pra usar alguma função lá dentro).
End of explanation
maiusculas('Programamos em python 2?')
# deve devolver 'P'
maiusculas('Programamos em Python 3.')
# deve devolver 'PP'
maiusculas('PrOgRaMaMoS em python!')
# deve devolver 'PORMMS'
def maiusculas(frase):
listRe = [] # lista de retorno vazia
stringRe = '' # string de retorno vazia
for ch in frase:
        if ord(ch) >= 65 and ord(ch) <= 90:  # letras maiúsculas vão de 'A' (65) até 'Z' (90)
listRe.append(ch)
# retornando a lista para string
stringRe = ''.join(listRe)
return stringRe
maiusculas('Programamos em python 2?')
maiusculas('Programamos em Python 3.')
maiusculas('PrOgRaMaMoS em python!')
x = ord('A')
y = ord('a')
x, y
ord('B')
ord('Z')
Explanation: Exercício 1: Letras maiúsculas
Escreva a função maiusculas(frase) que recebe uma frase (uma string) como parâmetro e devolve uma string com as letras maiúsculas que existem nesta frase, na ordem em que elas aparecem.
Para resolver este exercício, pode ser útil verificar uma tabela ASCII, que contém os valores de cada caractere. Ver http://equipe.nce.ufrj.br/adriano/c/apostila/tabascii.htm
Note que para simplificar a solução do exercício, as frases passadas para a sua função não possuirão caracteres que não estejam presentes na tabela ASCII apresentada, como ç, á, É, ã, etc.
Dica: Os valores apresentados na tabela são os mesmos devolvidos pela função ord apresentada nas aulas.
Exemplos:
End of explanation
menor_nome(['maria', 'josé', 'PAULO', 'Catarina'])
# deve devolver 'José'
menor_nome(['maria', ' josé ', ' PAULO', 'Catarina '])
# deve devolver 'José'
menor_nome(['Bárbara', 'JOSÉ ', 'Bill'])
# deve devolver José
def menor_nome(nomes):
tamanho = len(nomes) # pega a quantidade de nomes na lista
menor = '' # variável para escolher o menor nome
lista_limpa = [] # lista de nomes sem os espaços em branco
# ignora espaços em branco
for str in nomes:
lista_limpa.append(str.strip())
# verifica o menor nome
menor = lista_limpa[0] # considera o primeiro como menor
for str in lista_limpa:
if len(str) < len(menor): # não deixei <= senão pegará um segundo menor de mesmo tamanho
menor = str
return menor.capitalize() # deixa a primeira letra maiúscula
menor_nome(['maria', 'josé', 'PAULO', 'Catarina'])
# deve devolver 'José'
menor_nome(['maria', ' josé ', ' PAULO', 'Catarina '])
# deve devolver 'José'
menor_nome(['Bárbara', 'JOSÉ ', 'Bill'])
# deve devolver José
menor_nome(['Bárbara', 'JOSÉ ', 'Bill', ' aDa '])
Explanation: Exercício 2: Menor nome
Como pedido no primeiro vídeo desta semana, escreva uma função menor_nome(nomes) que recebe uma lista de strings com nome de pessoas como parâmetro e devolve o nome mais curto presente na lista.
A função deve ignorar espaços antes e depois do nome e deve devolver o menor nome presente na lista. Este nome deve ser devolvido com a primeira letra maiúscula e seus demais caracteres minúsculos, independente de como tenha sido apresentado na lista passada para a função.
Quando houver mais de um nome com o menor comprimento dentre os nomes na lista, a função deve devolver o primeiro nome com o menor comprimento presente na lista.
Exemplos:
End of explanation
def conta_letras(frase, contar = 'vogais'):
pos = len(frase) - 1 # atribui na variável pos (posição) a posição do array
count = 0 # define o contador de vogais
while pos >= 0: # conta as vogais
if frase[pos] == 'a' or frase[pos] == 'e' or frase[pos] == 'i' or frase[pos] == 'o' or frase[pos] == 'u':
count += 1
pos = pos - 1
if contar == 'consoantes':
frase = frase.replace(' ', '') # retira espaços em branco
return len(frase) - count # subtrai do total as vogais
else:
return count
conta_letras('programamos em python')
conta_letras('programamos em python', 'vogais')
conta_letras('programamos em python', 'consoantes')
conta_letras('bcdfghjklmnpqrstvxywz', 'consoantes')
len('programamos em python')
frase = 'programamos em python'
frase.replace(' ', '')
frase
Explanation: Exercícios adicionais
Exercício 1: Contando vogais ou consoantes
Escreva a função conta_letras(frase, contar="vogais"), que recebe como primeiro parâmetro uma string contendo uma frase e como segundo parâmetro uma outra string. Este segundo parâmetro deve ser opcional.
Quando o segundo parâmetro for definido como "vogais", a função deve devolver o numero de vogais presentes na frase. Quando ele for definido como "consoantes", a função deve devolver o número de consoantes presentes na frase. Se este parâmetro não for passado para a função, deve-se assumir o valor "vogais" para o parâmetro.
Exemplos:
conta_letras('programamos em python')
6
conta_letras('programamos em python', 'vogais')
6
conta_letras('programamos em python', 'consoantes')
13
End of explanation
def primeiro_lex(lista):
resposta = lista[0] # define o primeiro item da lista como a resposta...mas verifica depois.
for str in lista:
if ord(str[0]) < ord(resposta[0]):
resposta = str
return resposta
assert primeiro_lex(['oĺá', 'A', 'a', 'casa']) == 'A'
assert primeiro_lex(['AAAAAA', 'b']) == 'AAAAAA'
primeiro_lex(['casa', 'a', 'Z', 'A'])
primeiro_lex(['AAAAAA', 'b'])
Explanation: Exercício 2: Ordem lexicográfica
Como pedido no segundo vídeo da semana, escreva a função primeiro_lex(lista) que recebe uma lista de strings como parâmetro e devolve o primeiro string na ordem lexicográfica. Neste exercício, considere letras maiúsculas e minúsculas.
Dica: revise a segunda vídeo-aula desta semana.
Exemplos:
primeiro_lex(['oĺá', 'A', 'a', 'casa'])
'A'
primeiro_lex(['AAAAAA', 'b'])
'AAAAAA'
End of explanation
def cria_matriz(tot_lin, tot_col, valor):
matriz = [] #lista vazia
for i in range(tot_lin):
linha = []
for j in range(tot_col):
linha.append(valor)
matriz.append(linha)
return matriz
# import matriz # descomentar apenas no arquivo .py
def soma_matrizes(A, B):
num_lin = len(A)
num_col = len(A[0])
C = cria_matriz(num_lin, num_col, 0) # matriz com zeros
for lin in range(num_lin): # percorre as linhas da matriz
for col in range(num_col): # percorre as colunas da matriz
C[lin][col] = A[lin][col] + B[lin][col]
return C
if __name__ == '__main__':
A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
B = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]
print(soma_matrizes(A, B))
# No arquivo matriz.py
def cria_matriz(tot_lin, tot_col, valor):
matriz = [] #lista vazia
for i in range(tot_lin):
linha = []
for j in range(tot_col):
linha.append(valor)
matriz.append(linha)
return matriz
# E no arquivo soma_matrizes.py
import matriz
def soma_matrizes(A, B):
num_lin = len(A)
num_col = len(A[0])
C = matriz.cria_matriz(num_lin, num_col, 0) # matriz com zeros
for lin in range(num_lin): # percorre as linhas da matriz
for col in range(num_col): # percorre as colunas da matriz
C[lin][col] = A[lin][col] + B[lin][col]
return C
if __name__ == '__main__':
A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
B = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]
print(soma_matrizes(A, B))
'''
Multiplicação de matrizes:
1 2 3 1 2 22 28
4 5 6 * 3 4 = 49 64
5 6
1*1 + 2*3 + 3*5 = 22
1*2 + 2*4 + 3*6 = 28
4*1 + 5*3 + 6*5 = 49
4*2 + 5*4 + 6*6 = 64
c11 = a11*b11 + a12*b21 + a13*b31
c12 = a11*b12 + a12*b22 + a13*b32
c21 = a21*b11 + a22*b21 + a23*b31
c22 = a21*b12 + a22*b22 + a23*b32
'''
def multiplica_matrizes (A, B):
num_linA, num_colA = len(A), len(A[0])
num_linB, num_colB = len(B), len(B[0])
assert num_colA == num_linB
C = []
for lin in range(num_linA): # percorre as linhas da matriz A
# começando uma nova linha
C.append([])
for col in range(num_colB): # percorre as colunas da matriz B
# Adicionando uma nova coluna na linha
C[lin].append(0)
for k in range(num_colA):
C[lin][col] += A[lin][k] * B[k][col]
return C
if __name__ == '__main__':
A = [[1, 2, 3], [4, 5, 6]]
B = [[1, 2], [3, 4], [5, 6]]
print(multiplica_matrizes(A, B))
Explanation: Semana 3 - POO – Programação Orientada a Objetos
End of explanation
class Carro:
pass
meu_carro = Carro()
meu_carro
carro_do_trabalho = Carro()
carro_do_trabalho
meu_carro.ano = 1968
meu_carro.modelo = 'Fusca'
meu_carro.cor = 'azul'
meu_carro.ano
meu_carro.cor
carro_do_trabalho.ano = 1981
carro_do_trabalho.modelo = 'Brasília'
carro_do_trabalho.cor = 'amarela'
carro_do_trabalho.ano
novo_fusca = meu_carro # duas variáveis apontando para o mesmo objeto
novo_fusca #repare que é o mesmo end. de memória
novo_fusca.ano += 10
novo_fusca.ano
novo_fusca
Explanation: POO
End of explanation
class Pato:
pass
pato = Pato()
patinho = Pato()
if pato == patinho:
print("Estamos no mesmo endereço!")
else:
print("Estamos em endereços diferentes!")
class Carro:
def __init__(self, modelo, ano, cor): # init é o Construtor da classe
self.modelo = modelo
self.ano = ano
self.cor = cor
carro_do_meu_avo = Carro('Ferrari', 1980, 'vermelha')
carro_do_meu_avo
carro_do_meu_avo.cor
Explanation: Testes para praticar
End of explanation
def main():
carro1 = Carro('Brasília', 1968, 'amarela', 80)
carro2 = Carro('Fuscão', 1981, 'preto', 95)
carro1.acelere(40)
carro2.acelere(50)
carro1.acelere(80)
carro1.pare()
carro2.acelere(100)
class Carro:
def __init__(self, modelo, ano, cor, vel_max):
self.modelo = modelo
self.ano = ano
self.cor = cor
self.vel = 0
self.maxV = vel_max # velocidade máxima
def imprima(self):
if self.vel == 0: # parado dá para ver o ano
print('%s %s %d' % (self.modelo, self.cor, self.ano))
elif self.vel < self.maxV:
print('%s %s indo a %d km/h' % (self.modelo, self.cor, self.vel))
else:
print('%s %s indo muito rapido!' % (self.modelo, self.cor))
def acelere(self, velocidade):
self.vel = velocidade
if self.vel > self.maxV:
self.vel = self.maxV
self.imprima()
def pare(self):
self.vel = 0
self.imprima()
main()
Explanation: POO – Programação Orientada a Objetos – Parte 2
End of explanation
class Cafeteira:
def __init__(self, marca, tipo, tamanho, cor):
self.marca = marca
self.tipo = tipo
self.tamanho = tamanho
self.cor = cor
class Cachorro:
def __init__(self, raça, idade, nome, cor):
self.raça = raça
self.idade = idade
self.nome = nome
self.cor = cor
rex = Cachorro('vira-lata', 2, 'Bobby', 'marrom')
'vira-lata' == rex.raça
rex.idade > 2
rex.idade == '2'
rex.nome == 'rex'
# Bobby.cor == 'marrom' # NameError: o objeto foi atribuído à variável rex, não a Bobby
rex.cor == 'marrom'
class Lista:
def append(self, elemento):
return "Oops! Este objeto não é uma lista"
lista = []
a = Lista()
b = a.append(7)
lista.append(b)
a
b
lista
Explanation: TESTE PARA PRATICAR POO – Programação Orientada a Objetos – Parte 2
End of explanation
import math
class Bhaskara:
def delta(self, a, b, c):
return b ** 2 - 4 * a * c
def main(self):
a_digitado = float(input("Digite o valor de a:"))
b_digitado = float(input("Digite o valor de b:"))
c_digitado = float(input("Digite o valor de c:"))
print(self.calcula_raizes(a_digitado, b_digitado, c_digitado))
def calcula_raizes(self, a, b, c):
d = self.delta(a, b, c) # self já é passado implicitamente na chamada do método
if d == 0:
raiz1 = (-b + math.sqrt(d)) / (2 * a)
return 1, raiz1 # indica que tem uma raiz e o valor dela
else:
if d < 0:
return 0
else:
raiz1 = (-b + math.sqrt(d)) / (2 * a)
raiz2 = (-b - math.sqrt(d)) / (2 * a)
return 2, raiz1, raiz2
b = Bhaskara()
b.main()
import Bhaskara
class TestBhaskara:
def testa_uma_raiz(self):
b = Bhaskara.Bhaskara()
assert b.calcula_raizes(1, 0, 0) == (1, 0)
def testa_duas_raizes(self):
b = Bhaskara.Bhaskara()
assert b.calcula_raizes(1, -5, 6) == (2, 3, 2)
def testa_zero_raizes(self):
b = Bhaskara.Bhaskara()
assert b.calcula_raizes(10, 10, 10) == 0
def testa_raiz_negativa(self):
b = Bhaskara.Bhaskara()
assert b.calcula_raizes(10, 20, 10) == (1, -1)
Explanation: Códigos Testáveis
End of explanation
# Nos estudos ficou pytest_bhaskara.py
import Bhaskara
import pytest
class TestBhaskara:
@pytest.fixture
def b(self):
return Bhaskara.Bhaskara()
def testa_uma_raiz(self, b):
assert b.calcula_raizes(1, 0, 0) == (1, 0)
def testa_duas_raizes(self, b):
assert b.calcula_raizes(1, -5, 6) == (2, 3, 2)
def testa_zero_raizes(self, b):
assert b.calcula_raizes(10, 10, 10) == 0
def testa_raiz_negativa(self, b):
assert b.calcula_raizes(10, 20, 10) == (1, -1)
Explanation: Fixture: valor fixo para um conjunto de testes
@pytest.fixture
End of explanation
def fatorial(n):
if n < 0:
return 0
i = fat = 1
while i <= n:
fat = fat * i
i += 1
return fat
import pytest
@pytest.mark.parametrize("entrada, esperado", [
(0, 1),
(1, 1),
(-10, 0),
(4, 24),
(5, 120)
])
def testa_fatorial(entrada, esperado):
assert fatorial(entrada) == esperado
Explanation: Parametrização
End of explanation
class Triangulo:
def __init__(self, a, b, c):
self.a = a
self.b = b
self.c = c
def perimetro(self):
return self.a + self.b + self.c
t = Triangulo(1, 1, 1)
t.a
t.b
t.c
t.perimetro()
Explanation: Exercícios
Escreva uma versão do TestaBhaskara usando @pytest.mark.parametrize
Escreva uma bateria de testes para o seu código preferido
Tarefa de programação: Lista de exercícios - 3
Exercício 1: Uma classe para triângulos
Defina a classe Triangulo cujo construtor recebe 3 valores inteiros correspondentes aos lados a, b e c de um triângulo.
A classe triângulo também deve possuir um método perimetro, que não recebe parâmetros e devolve um valor inteiro correspondente ao perímetro do triângulo.
t = Triangulo(1, 1, 1)
deve atribuir uma referência para um triângulo de lados 1, 1 e 1 à variável t
Um objeto desta classe deve responder às seguintes chamadas:
t.a
deve devolver o valor do lado a do triângulo
t. b
deve devolver o valor do lado b do triângulo
t.c
deve devolver o valor do lado c do triângulo
t.perimetro()
deve devolver um inteiro correspondente ao valor do perímetro do triângulo.
End of explanation
class Triangulo:
def __init__(self, a, b, c):
self.a = a
self.b = b
self.c = c
def tipo_lado(self):
if self.a == self.b and self.a == self.c:
return 'equilátero'
elif self.a != self.b and self.a != self.c and self.b != self.c:
return 'escaleno'
else:
return 'isósceles'
t = Triangulo(4, 4, 4)
t.tipo_lado()
u = Triangulo(3, 4, 5)
u.tipo_lado()
v = Triangulo(1, 3, 3)
v.tipo_lado()
t = Triangulo(5, 8, 5)
t.tipo_lado()
t = Triangulo(5, 5, 6)
t.tipo_lado()
'''
Exercício 1: Triângulos retângulos
Escreva, na classe Triangulo, o método retangulo() que devolve
True se o triângulo for retângulo, e False caso contrário.
Exemplos:
t = Triangulo(1, 3, 5)
t.retangulo()
# deve devolver False
u = Triangulo(3, 4, 5)
u.retangulo()
# deve devolver True
'''
class Triangulo:
def __init__(self, a, b, c):
self.a = a
self.b = b
self.c = c
def retangulo(self):
if self.a > self.b and self.a > self.c:
if self.a ** 2 == self.b ** 2 + self.c ** 2:
return True
else:
return False
elif self.b > self.a and self.b > self.c:
if self.b ** 2 == self.c ** 2 + self.a ** 2:
return True
else:
return False
else:
if self.c ** 2 == self.a ** 2 + self.b ** 2:
return True
else:
return False
t = Triangulo(1, 3, 5)
t.retangulo()
t = Triangulo(3, 1, 5)
t.retangulo()
t = Triangulo(5, 1, 3)
t.retangulo()
u = Triangulo(3, 4, 5)
u.retangulo()
u = Triangulo(4, 5, 3)
u.retangulo()
u = Triangulo(5, 3, 4)
u.retangulo()
Explanation: Exercício 2: Tipos de triângulos
Na classe triângulo, definida na Questão 1, escreva o metodo tipo_lado() que devolve uma string dizendo se o triângulo é:
isóceles (dois lados iguais)
equilátero (todos os lados iguais)
escaleno (todos os lados diferentes)
Note que se o triângulo for equilátero, a função não deve devolver isóceles.
Exemplos:
t = Triangulo(4, 4, 4)
t.tipo_lado()
deve devolver 'equilátero'
u = Triangulo(3, 4, 5)
.tipo_lado()
deve devolver 'escaleno'
End of explanation
class Triangulo:
'''
Triângulo de lados a, b e c. O método semelhantes(outro) verifica a
semelhança comparando as razões entre os lados correspondentes ordenados.
'''
def __init__(self, a, b, c):
self.a = a
self.b = b
self.c = c
def semelhantes(self, outro):
    # coloca os lados de cada triângulo em uma lista ordenada, como sugere a dica
    lados1 = sorted([self.a, self.b, self.c])
    lados2 = sorted([outro.a, outro.b, outro.c])
    # semelhantes se as razões entre os lados correspondentes forem todas iguais
    return lados1[0] / lados2[0] == lados1[1] / lados2[1] == lados1[2] / lados2[2]
t1 = Triangulo(2, 2, 2)
t2 = Triangulo(4, 4, 4)
t1.semelhantes(t2)
Explanation: Exercício 2: Triângulos semelhantes
Ainda na classe Triangulo, escreva um método semelhantes(triangulo)
que recebe um objeto do tipo Triangulo como parâmetro e verifica
se o triângulo atual é semelhante ao triângulo passado como parâmetro.
Caso positivo, o método deve devolver True. Caso negativo,
deve devolver False.
Verifique a semelhança dos triângulos através do comprimento
dos lados.
Dica: você pode colocar os lados de cada um dos triângulos em uma
lista diferente e ordená-las.
Exemplo:
t1 = Triangulo(2, 2, 2)
t2 = Triangulo(4, 4, 4)
t1.semelhantes(t2)
deve devolver True
'''
End of explanation
def busca_sequencial(seq, x):
'''(list, bool) -> bool'''
for i in range(len(seq)):
if seq[i] == x:
return True
return False
# código com cara de C =\
list = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
busca_sequencial(list, 3)
list = ['casa', 'texto', 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
busca_sequencial(list, 'texto')
class Musica:
def __init__(self, titulo, interprete, compositor, ano):
self.titulo = titulo
self.interprete = interprete
self.compositor = compositor
self.ano = ano
class Buscador:
def busca_por_titulo(self, playlist, titulo):
for i in range(len(playlist)):
if playlist[i].titulo == titulo:
return i
return -1
def vamos_buscar(self):
playlist = [Musica("Ponta de Areia", "Milton Nascimento", "Milton Nascimento", 1975),
Musica("Podres Poderes", "Caetano Veloso", "Caetano Veloso", 1984),
Musica("Baby", "Gal Costa", "Caetano Veloso", 1969)]
onde_achou = self.busca_por_titulo(playlist, "Baby")
if onde_achou == -1:
print("A música buscada não está na playlist")
else:
preferida = playlist[onde_achou]
print(preferida.titulo, preferida.interprete, preferida.compositor, preferida.ano, sep = ', ')
b = Buscador()
b.vamos_buscar()
Explanation: Week 4
Busca Sequencial
End of explanation
class Ordenador:
def selecao_direta(self, lista):
fim = len(lista)
for i in range(fim - 1):
# Inicialmente o menor elemento já visto é o i-ésimo
posicao_do_minimo = i
for j in range(i + 1, fim):
if lista[j] < lista[posicao_do_minimo]: # encontrou um elemento menor...
posicao_do_minimo = j # ...substitui.
# Coloca o menor elemento encontrado no início da sub-lista
# Para isso, troca de lugar os elementos nas posições i e posicao_do_minimo
lista[i], lista[posicao_do_minimo] = lista[posicao_do_minimo], lista[i]
lista = [10, 3, 8, -10, 200, 17, 32]
o = Ordenador()
o.selecao_direta(lista)
lista
lista_nomes = ['maria', 'carlos', 'wilson', 'ana']
o.selecao_direta(lista_nomes)
lista_nomes
import random
print(random.randint(1, 10))
from random import shuffle
x = [i for i in range(100)]
shuffle(x)
x
o.selecao_direta(x)
x
def comprova_ordem(list):
flag = True
for i in range(len(list) - 1):
if list[i] > list[i + 1]:
flag = False
return flag
comprova_ordem(x)
list = [1, 2, 3, 4, 5]
list2 = [1, 3, 2, 4, 5]
comprova_ordem(list)
comprova_ordem(list2)
def busca_sequencial(seq, x):
for i in range(len(seq)):
if seq[i] == x:
return True
return False
def selecao_direta(lista):
fim = len(lista)
for i in range(fim-1):
pos_menor = i
for j in range(i+1,fim):
if lista[j] < lista[pos_menor]:
pos_menor = j
lista[i],lista[pos_menor] = lista[pos_menor],lista[i]
return lista
numeros = [55,33,0,900,-432,10,77,2,11]
Explanation: Complexidade Computacional
Análise matemática do desempenho de um algoritmo
Estudo analítico de:
Quantas operações um algoritmo requer para que ele seja executado
Quanto tempo ele vai demorar para ser executado
Quanto de memória ele vai ocupar
Análise da Busca Sequencial
Exemplo:
Lista telefônica de São Paulo, supondo 2 milhões de telefones fixos.
Supondo que cada iteração do for comparação de string dure 1 milissegundo.
Pior caso: 2000s = 33,3 minutos
Caso médio (1 milhão): 1000s = 16,6 minutos
Complexidade Computacional da Busca Sequencial
Dada uma lista de tamanho n
A complexidade computacional da busca sequencial é:
n, no pior caso
n/2, no caso médio
Conclusão
Busca sequencial é boa pois é bem simples
Funciona bem quando a busca é feita num volume pequeno de dados
Sua Complexidade Computacional é muito alta
É muito lenta quando o volume de dados é grande
Portanto, dizemos que é um algoritmo ineficiente
Algoritmo de Ordenação Seleção Direta
Seleção Direta
A cada passo, busca pelo menor elemento do pedaço ainda não ordenado da lista e o coloca no início da lista
No 1º passo, busca o menor elemento de todos e coloca na posição inicial da lista.
No 2º passo, busca o 2º menor elemento da lista e coloca na 2ª posição da lista.
No 3º passo, busca o 3º menor elemento da lista e coloca na 3ª posição da lista.
Repete até terminar a lista
End of explanation
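Um esboço ilustrativo (não faz parte do material original) para conferir empiricamente a análise acima: a versão abaixo da seleção direta apenas conta as comparações feitas, que para uma lista de n elementos devem totalizar n*(n-1)/2.
def selecao_direta_contando(lista):
    comparacoes = 0
    fim = len(lista)
    for i in range(fim - 1):
        pos_menor = i
        for j in range(i + 1, fim):
            comparacoes += 1  # uma comparação por par (i, j)
            if lista[j] < lista[pos_menor]:
                pos_menor = j
        lista[i], lista[pos_menor] = lista[pos_menor], lista[i]
    return comparacoes

print(selecao_direta_contando(list(range(100, 0, -1))))  # 4950 == 100*99/2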
def ordenada(list):
flag = True
for i in range(len(list) - 1):
if list[i] > list[i + 1]:
flag = False
return flag
Explanation: Tarefa de programação: Lista de exercícios - 4
Exercício 1: Lista ordenada
Escreva a função ordenada(lista), que recebe uma lista com números inteiros como parâmetro e devolve o booleano True se a lista estiver ordenada e False se a lista não estiver ordenada.
End of explanation
def busca(lista, elemento):
for i in range(len(lista)):
if lista[i] == elemento:
return i
return False
busca(['a', 'e', 'i'], 'e')
busca([12, 13, 14], 15)
Explanation: Exercício 2: Busca sequencial
Implemente a função busca(lista, elemento), que busca um determinado elemento em uma lista e devolve o índice correspondente à posição do elemento encontrado. Utilize o algoritmo de busca sequencial. Nos casos em que o elemento buscado não existir na lista, a função deve devolver o booleano False.
busca(['a', 'e', 'i'], 'e')
deve devolver => 1
busca([12, 13, 14], 15)
deve devolver => False
End of explanation
def lista_grande(n):
import random
return random.sample(range(1, 1000), n)
lista_grande(10)
Explanation: Praticar tarefa de programação: Exercícios adicionais (opcionais)
Exercício 1: Gerando listas grandes
Escreva a função lista_grande(n), que recebe como parâmetro um número inteiro n e devolve uma lista contendo n números inteiros aleatórios.
End of explanation
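Observação: random.sample devolve n valores distintos (e exige n menor que o tamanho do intervalo). Se repetições forem aceitáveis, uma alternativa simples à solução acima seria:
import random

def lista_grande(n):
    # n inteiros aleatórios entre 1 e 1000, possivelmente com repetições
    return [random.randint(1, 1000) for _ in range(n)]

lista_grande(10)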
def ordena(lista):
fim = len(lista)
for i in range(fim - 1):
min = i
for j in range(i + 1, fim):
if lista[j] < lista[min]:
min = j
lista[i], lista[min] = lista[min], lista[i]
return lista
lista = [10, 3, 8, -10, 200, 17, 32]
ordena(lista)
lista
Explanation: Exercício 2: Ordenação com selection sort
Implemente a função ordena(lista), que recebe uma lista com números inteiros como parâmetro e devolve esta lista ordenada. Utilize o algoritmo selection sort.
End of explanation
class Ordenador:
def selecao_direta(self, lista):
fim = len(lista)
for i in range(fim - 1):
# Inicialmente o menor elemento já visto é o i-ésimo
posicao_do_minimo = i
for j in range(i + 1, fim):
if lista[j] < lista[posicao_do_minimo]: # encontrou um elemento menor...
posicao_do_minimo = j # ...substitui.
# Coloca o menor elemento encontrado no início da sub-lista
# Para isso, troca de lugar os elementos nas posições i e posicao_do_minimo
lista[i], lista[posicao_do_minimo] = lista[posicao_do_minimo], lista[i]
def bolha(self, lista):
fim = len(lista)
for i in range(fim - 1, 0, -1):
for j in range(i):
if lista[j] > lista[j + 1]:
lista[j], lista[j + 1] = lista[j + 1], lista[j]
Explanation: Week 5 - Algoritmo de Ordenação da Bolha - Bubblesort
Lista como um tubo de ensaio vertical, os elementos mais leves sobem à superfície como uma bolha, os mais pesados afundam.
Percorre a lista múltiplas vezes; a cada passagem, compara todos os elementos adjacentes e troca de lugar os que estiverem fora de ordem
End of explanation
lista = [10, 3, 8, -10, 200, 17, 32]
o = Ordenador()
o.bolha(lista)
lista
Explanation: Exemplo do algoritmo bubblesort em ação:
Inicial:
5 1 7 3 2
1 5 7 3 2
1 5 3 7 2
1 5 3 2 7 (fim da primeira iteração)
1 3 5 2 7
1 3 2 5 7 (fim da segunda iteração)
1 2 3 5 7
End of explanation
class Ordenador:
def selecao_direta(self, lista):
fim = len(lista)
for i in range(fim - 1):
# Inicialmente o menor elemento já visto é o i-ésimo
posicao_do_minimo = i
for j in range(i + 1, fim):
if lista[j] < lista[posicao_do_minimo]: # encontrou um elemento menor...
posicao_do_minimo = j # ...substitui.
# Coloca o menor elemento encontrado no início da sub-lista
# Para isso, troca de lugar os elementos nas posições i e posicao_do_minimo
lista[i], lista[posicao_do_minimo] = lista[posicao_do_minimo], lista[i]
def bolha(self, lista):
fim = len(lista)
for i in range(fim - 1, 0, -1):
for j in range(i):
if lista[j] > lista[j + 1]:
lista[j], lista[j + 1] = lista[j + 1], lista[j]
import random
import time
class ContaTempos:
def lista_aleatoria(self, n): # n = número de elementos da lista
from random import randrange
lista = [0 for x in range(n)] # lista com n elementos, todos sendo zero
for i in range(n):
lista[i] = random.randrange(1000) # inteiros entre 0 e 999
return lista
def compara(self, n):
lista1 = self.lista_aleatoria(n)
lista2 = lista1[:] # cópia independente; sem isso a seleção direta receberia a lista já ordenada pela bolha
o = Ordenador()
antes = time.time()
o.bolha(lista1)
depois = time.time()
print("Bolha demorou", depois - antes, "segundos")
antes = time.time()
o.selecao_direta(lista2)
depois = time.time()
print("Seleção direta demorou", depois - antes, "segundos")
c = ContaTempos()
c.compara(1000)
print("Diferença de", 0.16308164596557617 - 0.05245494842529297)
c.compara(5000)
Explanation: Comparação de Desempenho
Módulo time:
função time()
devolve o tempo decorrido (em segundos) desde 1/1/1970 (no Unix)
Para medir um intervalo de tempo
import time
antes = time.time()
algoritmo_a_ser_cronometrado()
depois = time.time()
print("A execução do algoritmo demorou ", depois - antes, "segundos")
End of explanation
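Um exemplo mínimo de medição, supondo as classes Ordenador e ContaTempos definidas acima; time.time() funciona, mas time.perf_counter() tem resolução melhor para intervalos curtos:
import time

c = ContaTempos()
lista = c.lista_aleatoria(2000)
antes = time.perf_counter()
Ordenador().bolha(lista)
depois = time.perf_counter()
print("A execução do algoritmo demorou", depois - antes, "segundos")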
class Ordenador:
def selecao_direta(self, lista):
fim = len(lista)
for i in range(fim - 1):
# Inicialmente o menor elemento já visto é o i-ésimo
posicao_do_minimo = i
for j in range(i + 1, fim):
if lista[j] < lista[posicao_do_minimo]: # encontrou um elemento menor...
posicao_do_minimo = j # ...substitui.
# Coloca o menor elemento encontrado no início da sub-lista
# Para isso, troca de lugar os elementos nas posições i e posicao_do_minimo
lista[i], lista[posicao_do_minimo] = lista[posicao_do_minimo], lista[i]
def bolha(self, lista):
fim = len(lista)
for i in range(fim - 1, 0, -1):
for j in range(i):
if lista[j] > lista[j + 1]:
lista[j], lista[j + 1] = lista[j + 1], lista[j]
def bolha_curta(self, lista):
fim = len(lista)
for i in range(fim - 1, 0, -1):
trocou = False
for j in range(i):
if lista[j] > lista[j + 1]:
lista[j], lista[j + 1] = lista[j + 1], lista[j]
trocou = True
if not trocou: # que é igual a if trocou == False
return
import random
import time
class ContaTempos:
def lista_aleatoria(self, n): # n = número de elementos da lista
from random import randrange
lista = [random.randrange(1000) for x in range(n)] # lista com n elementos, todos sendo aleatórios de 0 a 999
return lista
def lista_quase_ordenada(self, n):
lista = [x for x in range(n)] # lista ordenada
lista[n//10] = -500 # localizou o -500 no primeiro décimo da lista
return lista
def compara(self, n):
lista1 = self.lista_aleatoria(n)
lista2 = lista1[:] # cópias independentes, para que cada algoritmo ordene a mesma lista original
lista3 = lista1[:]
o = Ordenador()
print("Comparando lista aleatórias")
antes = time.time()
o.bolha(lista1)
depois = time.time()
print("Bolha demorou", depois - antes, "segundos")
antes = time.time()
o.selecao_direta(lista2)
depois = time.time()
print("Seleção direta demorou", depois - antes, "segundos")
antes = time.time()
o.bolha_curta(lista3)
depois = time.time()
print("Bolha otimizada", depois - antes, "segundos")
print("\nComparando lista quase ordenadas")
lista1 = self.lista_quase_ordenada(n)
lista2 = lista1[:] # novamente, cópias independentes
lista3 = lista1[:]
antes = time.time()
o.bolha(lista1)
depois = time.time()
print("Bolha demorou", depois - antes, "segundos")
antes = time.time()
o.selecao_direta(lista2)
depois = time.time()
print("Seleção direta demorou", depois - antes, "segundos")
antes = time.time()
o.bolha_curta(lista3)
depois = time.time()
print("Bolha otimizada", depois - antes, "segundos")
c = ContaTempos()
c.compara(1000)
c.compara(5000)
Explanation: Melhoria no Algoritmo de Ordenação da Bolha
Percorre a lista múltiplas vezes; a cada passagem, compara todos os elementos adjacentes e troca de lugar os que estiverem fora de ordem.
Melhoria: se em uma das iterações, nenhuma troca é realizada, isso significa que a lista já está ordenada e podemos finalizar o algoritmo.
End of explanation
class Ordenador:
def selecao_direta(self, lista):
fim = len(lista)
for i in range(fim - 1):
posicao_do_minimo = i
for j in range(i + 1, fim):
if lista[j] < lista[posicao_do_minimo]:
posicao_do_minimo = j
lista[i], lista[posicao_do_minimo] = lista[posicao_do_minimo], lista[i]
def bolha(self, lista):
fim = len(lista)
for i in range(fim - 1, 0, -1):
for j in range(i):
if lista[j] > lista[j + 1]:
lista[j], lista[j + 1] = lista[j + 1], lista[j]
def bolha_curta(self, lista):
fim = len(lista)
for i in range(fim - 1, 0, -1):
trocou = False
for j in range(i):
if lista[j] > lista[j + 1]:
lista[j], lista[j + 1] = lista[j + 1], lista[j]
trocou = True
if not trocou:
return
import random
import time
class ContaTempos:
def lista_aleatoria(self, n):
from random import randrange
lista = [random.randrange(1000) for x in range(n)]
return lista
def lista_quase_ordenada(self, n):
lista = [x for x in range(n)]
lista[n//10] = -500
return lista
import pytest
class TestaOrdenador:
@pytest.fixture
def o(self):
return Ordenador()
@pytest.fixture
def l_quase(self):
c = ContaTempos()
return c.lista_quase_ordenada(100)
@pytest.fixture
def l_aleatoria(self):
c = ContaTempos()
return c.lista_aleatoria(100)
def esta_ordenada(self, l):
for i in range(len(l) - 1):
if l[i] > l[i+1]:
return False
return True
def test_bolha_curta_aleatoria(self, o, l_aleatoria):
o.bolha_curta(l_aleatoria)
assert self.esta_ordenada(l_aleatoria)
def test_selecao_direta_aleatoria(self, o, l_aleatoria):
o.selecao_direta(l_aleatoria)
assert self.esta_ordenada(l_aleatoria)
def test_bolha_curta_quase(self, o, l_quase):
o.bolha_curta(l_quase)
assert self.esta_ordenada(l_quase)
def test_selecao_direta_quase(self, o, l_quase):
o.selecao_direta(l_quase)
assert self.esta_ordenada(l_quase)
[5, 2, 1, 3, 4]
2 5 1 3 4
2 1 5 3 4
2 1 3 5 4
2 1 3 4 5
[2, 3, 4, 5, 1]
2 3 4 1 5
2 3 1 4 5
2 1 3 4 5
1 2 3 4 5
Explanation: Site com algoritmos de ordenação http://nicholasandre.com.br/sorting/
Testes automatizados dos algoritmos de ordenação
End of explanation
class Buscador:
def busca_por_titulo(self, playlist, titulo):
for i in range(len(playlist)):
if playlist[i].titulo == titulo:
return i
return -1
def busca_binaria(self, lista, x):
primeiro = 0
ultimo = len(lista) - 1
while primeiro <= ultimo:
meio = (primeiro + ultimo) // 2
if lista[meio] == x:
return meio
else:
if x < lista[meio]: # busca na primeira metade da lista
ultimo = meio - 1 # já foi visto que não está no elemento meio, então vai um a menos
else:
primeiro = meio + 1
return -1
lista = [-100, 0, 20, 30, 50, 100, 3000, 5000]
b = Buscador()
b.busca_binaria(lista, 30)
Explanation: Busca Binária
Objetivo: localizar o elemento x em uma lista
Considere o elemento m do meio da lista
se x == m ==> encontrou!
se x < m ==> procure apenas na 1ª metade (da esquerda)
se x > m ==> procure apenas na 2ª metade (da direita),
repetir o processo até que o x seja encontrado ou que a sub-lista em questão esteja vazia
End of explanation
def busca(lista, elemento):
primeiro = 0
ultimo = len(lista) - 1
while primeiro <= ultimo:
meio = (primeiro + ultimo) // 2
if lista[meio] == elemento:
print(meio)
return meio
else:
if elemento < lista[meio]: # busca na primeira metade da lista
ultimo = meio - 1 # já foi visto que não está no elemento meio, então vai um a menos
print(meio) # função deve imprimir cada um dos índices testados pelo algoritmo.
else:
primeiro = meio + 1
print(meio)
return False
busca(['a', 'e', 'i'], 'e')
busca([1, 2, 3, 4, 5], 6)
busca([1, 2, 3, 4, 5, 6], 4)
Explanation: Complexidade da Busca Binária
Dado uma lista de n elementos
No pior caso, teremos que efetuar:
$$log_2n$$ comparações
No exemplo da lista telefônica (com 2 milhões de números):
$$log_2(2 milhões) = 20,9$$
Portanto: resposta em menos de 21 milissegundos!
Conclusão
Busca Binária é um algoritmo bastante eficiente
Ao estudar a eficiência de um algoritmo é interessante:
Analisar a complexidade computacional
Realizar experimentos medindo o desempenho
Tarefa de programação: Lista de exercícios - 5
Exercício 1: Busca binária
Implemente a função busca(lista, elemento), que busca um determinado elemento em uma lista e devolve o índice correspondente à posição do elemento encontrado. Utilize o algoritmo de busca binária. Nos casos em que o elemento buscado não existir na lista, a função deve devolver o booleano False.
Além de devolver o índice correspondente à posição do elemento encontrado, sua função deve imprimir cada um dos índices testados pelo algoritmo.
Exemplo:
busca(['a', 'e', 'i'], 'e')
1
deve devolver => 1
busca([1, 2, 3, 4, 5], 6)
2
3
4
deve devolver => False
busca([1, 2, 3, 4, 5, 6], 4)
2
4
3
deve devolver => 3
End of explanation
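A conta de log2 feita acima para a lista telefônica pode ser conferida diretamente (esboço ilustrativo):
import math

n = 2000000  # telefones fixos do exemplo
print(math.log2(n))             # aproximadamente 20.9
print(math.ceil(math.log2(n)))  # no máximo 21 comparações no pior caso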
def bubble_sort(lista):
    fim = len(lista)
    for i in range(fim - 1, 0, -1):
        trocou = False
        for j in range(i):
            if lista[j] > lista[j + 1]:
                lista[j], lista[j + 1] = lista[j + 1], lista[j]
                trocou = True
        print(lista)  # resultado parcial ao fim de cada passagem
        if not trocou:  # a última passagem apenas confirma que a lista está ordenada
            return lista
    print(lista)
    return lista
bubble_sort([5, 1, 4, 2, 8])
#[1, 4, 2, 5, 8]
#[1, 2, 4, 5, 8]
#[1, 2, 4, 5, 8]
#deve devolver [1, 2, 4, 5, 8]
bubble_sort([1, 3, 4, 2, 0, 5])
#Esperado:
#[1, 3, 2, 0, 4, 5]
#[1, 2, 0, 3, 4, 5]
#[1, 0, 2, 3, 4, 5]
#[0, 1, 2, 3, 4, 5]
#[0, 1, 2, 3, 4, 5]
Explanation: Exercício 2: Ordenação com bubble sort
Implemente a função bubble_sort(lista), que recebe uma lista com números inteiros como parâmetro e devolve esta lista ordenada. Utilize o algoritmo bubble sort.
Além de devolver uma lista ordenada, sua função deve imprimir os resultados parciais da ordenação ao fim de cada iteração do algoritmo ao longo da lista. Observe que, como a última iteração do algoritmo apenas verifica que a lista está ordenada, o último resultado deve ser impresso duas vezes. Portanto, se seu algoritmo precisa de duas passagens para ordenar a lista, e uma terceira para verificar que a lista está ordenada, 3 resultados parciais devem ser impressos.
bubble_sort([5, 1, 4, 2, 8])
[1, 4, 2, 5, 8]
[1, 2, 4, 5, 8]
[1, 2, 4, 5, 8]
deve devolver [1, 2, 4, 5, 8]
End of explanation
def insertion_sort(lista):
    # insertion sort: insere cada elemento na posição correta da parte já ordenada
    for i in range(1, len(lista)):
        valor = lista[i]
        j = i - 1
        while j >= 0 and lista[j] > valor:
            lista[j + 1] = lista[j]  # desloca para a direita os elementos maiores que valor
            j -= 1
        lista[j + 1] = valor
    return lista
Explanation: Praticar tarefa de programação: Exercício adicional (opcional)
Exercício 1: Ordenação com insertion sort
Implemente a função insertion_sort(lista), que recebe uma lista com números inteiros como parâmetro e devolve esta lista ordenada. Utilize o algoritmo insertion sort.
End of explanation
def fatorial(n):
if n <= 1: # base da recursão
return 1
else:
return n * fatorial(n - 1) # chamada recursiva
import pytest
@pytest.mark.parametrize("entrada, esperado", [
(0, 1),
(1, 1),
(2, 2),
(3, 6),
(4, 24),
(5, 120)
])
def testa_fatorial(entrada, esperado):
assert fatorial(entrada) == esperado
#fatorial.py
def fatorial(n):
if n <= 1: # base da recursão
return 1
else:
return n * fatorial(n - 1) # chamada recursiva
import pytest
@pytest.mark.parametrize("entrada, esperado", [
(0, 1),
(1, 1),
(2, 2),
(3, 6),
(4, 24),
(5, 120)
])
def testa_fatorial(entrada, esperado):
assert fatorial(entrada) == esperado
# fibonacci.py
# Fn = 0 if n = 0
# Fn = 1 if n = 1
# Fn+1 + Fn-2 if n > 1
def fibonacci(n):
if n < 2:
return n
else:
return fibonacci(n - 1) + fibonacci(n - 2)
import pytest
@pytest.mark.parametrize("entrada, esperado", [
(0, 0),
(1, 1),
(2, 1),
(3, 2),
(4, 3),
(5, 5),
(6, 8),
(7, 13)
])
def testa_fibonacci(entrada, esperado):
assert fibonacci(entrada) == esperado
# busca binária
def busca_binaria(lista, elemento, min = 0, max = None):
if max == None: # se nada for passado, o tamanho máximo é o tamanho da lista
max = len(lista) - 1
if max < min: # situação que não encontrou o elemento
return False
else:
meio = min + (max - min) // 2
if lista[meio] > elemento:
return busca_binaria(lista, elemento, min, meio - 1)
elif lista[meio] < elemento:
return busca_binaria(lista, elemento, meio + 1, max)
else:
return meio
a = [10, 20, 30, 40, 50, 60]
import pytest
@pytest.mark.parametrize("lista, valor, esperado", [
(a, 10, 0),
(a, 20, 1),
(a, 30, 2),
(a, 40, 3),
(a, 50, 4),
(a, 60, 5),
(a, 70, False),
(a, 70, False),
(a, 15, False),
(a, -10, False)
])
def testa_busca_binaria(lista, valor, esperado):
assert busca_binaria(lista, valor) == esperado
Explanation: Week 6
Recursão (Definição. Como resolver um problema recursivo. Exemplos. Implementações.)
End of explanation
def merge_sort(lista):
if len(lista) <= 1:
return lista
meio = len(lista) // 2
lado_esquerdo = merge_sort(lista[:meio])
lado_direito = merge_sort(lista[meio:])
return merge(lado_esquerdo, lado_direito) # intercala os dois lados
def merge(lado_esquerdo, lado_direito):
if not lado_esquerdo: # se o lado esquerdo for uma lista vazia...
return lado_direito
if not lado_direito: # se o lado direito for uma lista vazia...
return lado_esquerdo
if lado_esquerdo[0] < lado_direito[0]: # compara o primeiro elemento da posição do lado esquerdo com o primeiro do lado direito
return [lado_esquerdo[0]] + merge(lado_esquerdo[1:], lado_direito) # merge(lado_esquerdo[1:]) ==> pega o lado esquerdo, menos o primeiro elemento
return [lado_direito[0]] + merge(lado_esquerdo, lado_direito[1:])
Explanation: Mergesort
Ordenação por Intercalação:
Divida a lista na metade recursivamente, até que cada sublista contenha apenas 1 elemento (portanto, já ordenada).
Repetidamente, intercale as sublistas para produzir novas listas ordenadas.
Repita até que tenhamos apenas 1 lista no final (que estará ordenada).
Ex:
6 5 3 1 8 7 2 4
5 6 1 3 7 8 2 4
1 3 5 6 2 4 7 8
1 2 3 4 5 6 7 8
End of explanation
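Aplicando a função acima à lista usada no exemplo da intercalação:
merge_sort([6, 5, 3, 1, 8, 7, 2, 4])
# deve devolver [1, 2, 3, 4, 5, 6, 7, 8]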
def x(n):
if n == 0:
#<espaço A>
print(n)
else:
#<espaço B>
x(n-1)
print(n)
#<espaço C>
#<espaço D>
#<espaço E>
x(10)
def x(n):
if n >= 0 or n <= 2:
print(n)
# return n
else:
print(n-1)
print(n-2)
print(n-3)
#return x(n-1) + x(n-2) + x(n-3)
print(x(6))
def busca_binaria(lista, elemento, min=0, max=None):
if max == None:
max = len(lista)-1
if max < min:
return False
else:
meio = min + (max-min)//2
print(lista[meio])
if lista[meio] > elemento:
return busca_binaria(lista, elemento, min, meio - 1)
elif lista[meio] < elemento:
return busca_binaria(lista, elemento, meio + 1, max)
else:
return meio
a = [-10, -2, 0, 5, 66, 77, 99, 102, 239, 567, 875, 934]
a
busca_binaria(a, 99)
Explanation: Base da recursão é a condição que faz o problema ser definitivamente resolvido. Caso essa condição, essa base da recursão, não seja satisfeita, o problema continua sendo reduzido em instâncias menores até que a condição passe a ser satisfeita.
Chamada recursiva é a linha onde a função faz uma chamada a ela mesma.
Função recursiva é a função que chama ela mesma.
A linha 2 tem a condição que é a base da recursão
A linha 5 tem a chamada recursiva
Para o algoritmo funcionar corretamente, é necessário trocar a linha 3 por “return 1”
if (n < 2):
if (n <= 1):
No <espaço A> e no <espaço C>
looping infinito
Resultado: 6. Chamadas recursivas: nenhuma.
Resultado: 20. Chamadas recursivas: 24
1
End of explanation
def soma_lista_tradicional_way(lista):
soma = 0
for i in range(len(lista)):
soma += lista[i]
return soma
a = [-10, -2, 0, 5, 66, 77, 99, 102, 239, 567, 875, 934]
soma_lista_tradicional_way(a)
b = [-10, -2, 0, 5]
soma_lista_tradicional_way(b)
def soma_lista(lista):
if len(lista) == 1:
return lista[0]
else:
return lista[0] + soma_lista(lista[1:])
a = [-10, -2, 0, 5, 66, 77, 99, 102, 239, 567, 875, 934]
soma_lista(a) # retorna 2952
b = [-10, -2, 0, 5]
soma_lista(b)
Explanation: Tarefa de programação: Lista de exercícios - 6
Exercício 1: Soma dos elementos de uma lista
Implemente a função soma_lista(lista), que recebe como parâmetro uma lista de números inteiros e devolve um número inteiro correspondente à soma dos elementos desta lista.
Sua solução deve ser implementada utilizando recursão.
End of explanation
def encontra_impares_tradicional_way(lista):
lista_impares = []
for i in lista:
if i % 2 != 0: # é impar!
lista_impares.append(i)
return lista_impares
a = [5, 66, 77, 99, 102, 239, 567, 875, 934]
encontra_impares_tradicional_way(a)
b = [2, 5, 34, 66, 100, 102, 999]
encontra_impares_tradicional_way(b)
stack = ['a','b']
stack.extend(['g','h'])
stack
def encontra_impares(lista):
if len(lista) == 0:
return []
if lista[0] % 2 != 0: # se o elemento é impar
return [lista[0]] + encontra_impares(lista[1:])
else:
return encontra_impares(lista[1:])
a = [5, 66, 77, 99, 102, 239, 567, 875, 934]
encontra_impares(a)
encontra_impares([5])
encontra_impares([1, 2, 3])
encontra_impares([2, 4, 6, 8])
encontra_impares([9])
encontra_impares([4, 11])
encontra_impares([2, 10, 20, 7, 30, 12, 6, 6])
encontra_impares([])
encontra_impares([4, 331, 1001, 4])
Explanation: Exercício 2: Encontrando ímpares em uma lista
Implemente a função encontra_impares(lista), que recebe como parâmetro uma lista de números inteiros e devolve uma outra lista apenas com os números ímpares da lista dada.
Sua solução deve ser implementada utilizando recursão.
Dica: você vai precisar do método extend() que as listas possuem.
End of explanation
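A dica do enunciado menciona extend(); uma variação equivalente da mesma recursão, usando extend() em vez da concatenação com +, seria algo assim (esboço):
def encontra_impares_com_extend(lista):
    if len(lista) == 0:
        return []
    resultado = []
    if lista[0] % 2 != 0:
        resultado.append(lista[0])
    resultado.extend(encontra_impares_com_extend(lista[1:]))
    return resultado

encontra_impares_com_extend([2, 10, 20, 7, 30, 12, 6, 6])  # deve devolver [7]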
def incomodam(n):
if type(n) != int or n <= 0:
return ''
else:
s1 = 'incomodam '
return s1 + incomodam(n - 1)
incomodam('-1')
incomodam(2)
incomodam(3)
incomodam(8)
incomodam(-3)
incomodam(1)
incomodam(7)
def incomodam(n):
if type(n) != int or n <= 0:
return ''
else:
s1 = 'incomodam '
return s1 + incomodam(n - 1)
def elefantes(n):
if type(n) != int or n <= 0:
return ''
if n == 1:
return "Um elefante incomoda muita gente"
else:
return elefantes(n - 1) + str(n) + " elefantes " + incomodam(n) + ("muita gente" if n % 2 > 0 else "muito mais") + "\r\n"
elefantes(1)
print(elefantes(3))
elefantes(2)
elefantes(3)
print(elefantes(4))
type(str(3))
# Versão corrigida: as funções devolvem strings (print devolve None, o que quebrava a concatenação)
def incomodam(n):
    if type(n) != int or n <= 0:
        return ''
    return 'incomodam ' + incomodam(n - 1)

def elefantes(n):
    if type(n) != int or n <= 1:
        return ''
    if n == 2:
        return ('Um elefante incomoda muita gente\n' +
                '2 elefantes ' + incomodam(2) + 'muito mais')
    return (elefantes(n - 1) + '\n' +
            str(n - 1) + ' elefantes ' + incomodam(n - 1) + 'muita gente\n' +
            str(n) + ' elefantes ' + incomodam(n) + 'muito mais')
elefantes(1)
elefantes(2)
Explanation: Exercício 3: Elefantes
Este exercício tem duas partes:
Implemente a função incomodam(n) que devolve uma string contendo "incomodam " (a palavra seguida de um espaço) n vezes. Se n não for um inteiro estritamente positivo, a função deve devolver uma string vazia. Essa função deve ser implementada utilizando recursão.
Utilizando a função acima, implemente a função elefantes(n) que devolve uma string contendo a letra de "Um elefante incomoda muita gente..." de 1 até n elefantes. Se n não for maior que 1, a função deve devolver uma string vazia. Essa função também deve ser implementada utilizando recursão.
Observe que, para um elefante, você deve escrever por extenso e no singular ("Um elefante..."); para os demais, utilize números e o plural ("2 elefantes...").
Dica: lembre-se que é possível juntar strings com o operador "+". Lembre-se também que é possível transformar números em strings com a função str().
Dica: Será que neste caso a base da recursão é diferente de n==1?
Por exemplo, uma chamada a elefantes(4) deve devolver uma string contendo:
Um elefante incomoda muita gente
2 elefantes incomodam incomodam muito mais
2 elefantes incomodam incomodam muita gente
3 elefantes incomodam incomodam incomodam muito mais
3 elefantes incomodam incomodam incomodam muita gente
4 elefantes incomodam incomodam incomodam incomodam muito mais
End of explanation |
6,945 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This is a notebook to explore opSim outputs in different ways, mostly useful to supernova analysis. We will look at the opsim output called Enigma_1189
Step1: Read in OpSim output for modern versions
Step2: Read in the OpSim DataBase into a pandas dataFrame
Step3: The opsim database is a large file (approx 4.0 GB), but still possible to read into memory on new computers. You usually only need the Summary Table, which is about 900 MB. If you are only interested in the Deep Drilling Fields, you can use the read_sql_query to only select information pertaining to Deep Drilling Observations. This has a memory footprint of about 40 MB.
Obviously, you can reduce this further by narrowing down the columns to those of interest only. For the entire Summary Table, this step takes a few minutes on my computer.
If you are going to do the read from disk step very often, you can further reduce the time used by storing the output on disk as a hdf5 file and reading that into memory
We will look at three different Summaries of OpSim Runs. A summary of the
1. Deep Drilling fields
Step4: Some properties of the OpSim Outputs
Step5: Construct our Summary
Step6: First Season
We can visualize the cadence during the first season using the cadence plot for a particular field
Step7: Example to obtain the observations of in a 100 day period in a field
First find the fieldID witha center closest to your coordinates. The fields are of radial size about 1.75 degrees. I would suggest just going to a fieldID, as you probably don't care about the coordinates. Then the following query would get this done. Alternatively, you could also achieve some of these goals using sql queries on the opsim database.
Step8: Plots
Step9: This is a DDF.
- Many observations per night
- Often about 20-25 per filter per night
Step10: WFD field | Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
# Required packages sqlachemy, pandas (both are part of anaconda distribution, or can be installed with a python installer)
# One step requires the LSST stack, can be skipped for a particular OPSIM database in question
import OpSimSummary.summarize_opsim as so
from sqlalchemy import create_engine
import pandas as pd
print so.__file__
# This step requires LSST SIMS package MAF. The main goal of this step is to set DD and WFD to integer keys that
# label an observation as Deep Drilling or for Wide Fast Deep.
# If you want to skip this step, you can use the next cell by uncommenting it, and commenting out this cell, if all you
# care about is the database used in this example. But there is no guarantee that the numbers in the cell below will work
# on other versions of opsim database outputs
#from lsst.sims.maf import db
#from lsst.sims.maf.utils import opsimUtils
# DD = 366
# WFD = 364
Explanation: This is a notebook to explore opSim outputs in different ways, mostly useful to supernova analysis. We will look at the opsim output called Enigma_1189
End of explanation
# Change dbname to point at your own location of the opsim output
dbname = '/Users/rbiswas/data/LSST/OpSimData/enigma_1189_sqlite.db'
#opsdb = db.OpsimDatabase(dbname)
#propID, propTags = opsdb.fetchPropInfo()
#DD = propTags['DD'][0]
#WFD = propTags['WFD'][0]
Explanation: Read in OpSim output for modern versions: (sqlite formats)
Descriptions of OpSim outputs are available at https://confluence.lsstcorp.org/display/SIM/OpSim+Datasets+for+Cadence+Workshop+LSST2015 and at http://tusken.astro.washington.edu:8080
Here we will use the opsim output http://ops2.tuc.noao.edu/runs/enigma_1189/data/enigma_1189_sqlite.db.gz
I have downloaded this database, unzipped it, and use the variable dbname to point to its location.
End of explanation
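A minimal sketch of that download step (not part of the original notebook; it assumes the URL above is still reachable and that this runs under Python 2, as the rest of the notebook does):
import urllib
import gzip
import shutil

url = 'http://ops2.tuc.noao.edu/runs/enigma_1189/data/enigma_1189_sqlite.db.gz'
urllib.urlretrieve(url, 'enigma_1189_sqlite.db.gz')
with gzip.open('enigma_1189_sqlite.db.gz', 'rb') as f_in:
    with open('enigma_1189_sqlite.db', 'wb') as f_out:
        shutil.copyfileobj(f_in, f_out)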
engine = create_engine('sqlite:///' + dbname)
Explanation: Read in the OpSim DataBase into a pandas dataFrame
End of explanation
# Load to a dataframe
# Summary = pd.read_hdf('storage.h5', 'table')
Summary = pd.read_sql_table('Summary', engine, index_col='obsHistID')
# EnigmaDeep = pd.read_sql_query('SELECT * FROM SUMMARY WHERE PROPID is 366', engine)
# EnigmaD = pd.read_sql_query('SELECT * FROM SUMMARY WHERE PROPID is 366', engine)
EnigmaCombined = Summary.query('propID == [364, 366]')# & (fieldID == list(EnigmaDeep.fieldID.unique().values)')
EnigmaCombined.propID.unique()
Explanation: The opsim database is a large file (approx 4.0 GB), but still possible to read into memory on new computers. You usually only need the Summary Table, which is about 900 MB. If you are only interested in the Deep Drilling Fields, you can use the read_sql_query to only select information pertaining to Deep Drilling Observations. This has a memory footprint of about 40 MB.
Obviously, you can reduce this further by narrowing down the columns to those of interest only. For the entire Summary Table, this step takes a few minutes on my computer.
If you are going to do the read from disk step very often, you can further reduce the time used by storing the output on disk as a hdf5 file and reading that into memory
We will look at three different Summaries of OpSim Runs. A summary of the
1. Deep Drilling fields: These are the observations corresponding to propID of the variable DD above, and are restricted to a handful of fields
2. WFD (Main) Survey: These are the observations corresponding to the propID of the variables WFD
3. Combined Survey: These are observations combining DEEP and WFD in the DDF. Note that this leads to duplicate observations which must be subsequently dropped.
End of explanation
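For example, a one-off caching step along those lines (a sketch; the key 'table' matches the commented-out read_hdf call above):
# Cache the Summary table once after the slow read_sql_table step
Summary.to_hdf('storage.h5', 'table')
# Later sessions can then skip the SQL read entirely:
# Summary = pd.read_hdf('storage.h5', 'table')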
EnigmaCombined.fieldID.unique().size
Explanation: Some properties of the OpSim Outputs
End of explanation
Full = so.SummaryOpsim(EnigmaCombined)
fig = plt.figure(figsize=(10, 5))
ax = fig.add_subplot(111, projection='mollweide');
fig = Full.showFields(ax=fig.axes[0], marker='o', s=1)
Explanation: Construct our Summary
End of explanation
fieldList = Full.fieldIds
len(fieldList)
Explanation: First Season
We can visualize the cadence during the first season using the cadence plot for a particular field: The following plot shows how many visits we have in different filters on a particular night:
End of explanation
selected = Full.df.query('fieldID == 290 and expMJD > 49490 and expMJD < 49590')
selected.head()
# write to disk in ascii file
selected.to_csv('selected_obs.csv', index='obsHistID')
# write to disk in ascii file with selected columns
selected[['expMJD', 'night', 'filter', 'fiveSigmaDepth', 'filtSkyBrightness', 'finSeeing']].to_csv('selected_cols.csv', index='obsHistID')
Explanation: Example to obtain the observations in a 100-day period in a field
First find the fieldID with a center closest to your coordinates. The fields have a radial size of about 1.75 degrees. I would suggest just going by fieldID, as you probably don't care about the exact coordinates. Then the following query gets this done. Alternatively, you could achieve some of these goals using SQL queries on the OpSim database.
End of explanation
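The same 100-day selection can also be written as a SQL query against the engine defined above (a sketch of the alternative mentioned):
query = ("SELECT * FROM Summary "
         "WHERE fieldID = 290 AND expMJD > 49490 AND expMJD < 49590")
selected_sql = pd.read_sql_query(query, engine, index_col='obsHistID')
selected_sql.head()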
fig_firstSeason, firstSeasonCadence = Full.cadence_plot(fieldList[0], observedOnly=False, sql_query='night < 366')
Explanation: Plots
End of explanation
fig_firstSeason_1, firstSeasonCadence_1 = Full.cadence_plot(fieldList[0], observedOnly=True, sql_query='night < 366')
Explanation: This is a DDF.
- Many observations per night
- Often about 20-25 per filter per night
End of explanation
fig_firstSeason_main, firstSeasonCadence_main = Full.cadence_plot(fieldList[1], observedOnly=False, sql_query='night < 366')
fig_long, figCadence_long = Full.cadence_plot(fieldList[0], observedOnly=False, sql_query='night < 3655', nightMax=3655)
fig_2, figCadence_2 = Full.cadence_plot(fieldList[0], observedOnly=False,
sql_query='night < 720', nightMax=720, nightMin=365)
fig_SN, SN_matrix = Full.cadence_plot(fieldList[0], observedOnly=False, mjd_center=49540., mjd_range=[-30., 50.])
Explanation: WFD field
End of explanation |
6,946 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear Shell solution
Init symbols for sympy
Step1: Tymoshenko theory
$u_1 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u\left( \alpha_1 \right)+\alpha_3\gamma \left( \alpha_1 \right) $
$u_2 \left( \alpha_1, \alpha_2, \alpha_3 \right)=0 $
$u_3 \left( \alpha_1, \alpha_2, \alpha_3 \right)=w\left( \alpha_1 \right) $
$ \left(
\begin{array}{c}
u_1 \
\frac { \partial u_1 } { \partial \alpha_1} \
\frac { \partial u_1 } { \partial \alpha_2} \
\frac { \partial u_1 } { \partial \alpha_3} \
u_2 \
\frac { \partial u_2 } { \partial \alpha_1} \
\frac { \partial u_2 } { \partial \alpha_2} \
\frac { \partial u_2 } { \partial \alpha_3} \
u_3 \
\frac { \partial u_3 } { \partial \alpha_1} \
\frac { \partial u_3 } { \partial \alpha_2} \
\frac { \partial u_3 } { \partial \alpha_3} \
\end{array}
\right) = T \cdot
\left(
\begin{array}{c}
u \
\frac { \partial u } { \partial \alpha_1} \
\gamma \
\frac { \partial \gamma } { \partial \alpha_1} \
w \
\frac { \partial w } { \partial \alpha_1} \
\end{array}
\right) $
Step2: Cartesian coordinates | Python Code:
from sympy import *
from geom_util import *
from sympy.vector import CoordSys3D
import matplotlib.pyplot as plt
import sys
sys.path.append("../")
%matplotlib inline
%reload_ext autoreload
%autoreload 2
%aimport geom_util
# Any tweaks that normally go in .matplotlibrc, etc., should explicitly go here
%config InlineBackend.figure_format='retina'
plt.rcParams['figure.figsize'] = (12, 12)
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
init_printing()
N = CoordSys3D('N')
alpha1, alpha2, alpha3 = symbols("alpha_1 alpha_2 alpha_3", real = True, positive=True)
A,K,rho = symbols("A K rho")
Explanation: Linear Shell solution
Init symbols for sympy
End of explanation
T=zeros(12,6)
T[0,0]=1
T[0,2]=alpha3
T[1,1]=1
T[1,3]=alpha3
T[3,2]=1
T[8,4]=1
T[9,5]=1
T
B=Matrix([[0, 1/(A*(K*alpha3 + 1)), 0, 0, 0, 0, 0, 0, K/(K*alpha3 + 1), 0, 0, 0], [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 1/(A*(K*alpha3 + 1)), 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0], [-K/(K*alpha3 + 1), 0, 0, 0, 0, 0, 0, 0, 0, 1/(A*(K*alpha3 + 1)), 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1]])
B
E=zeros(6,9)
E[0,0]=1
E[1,4]=1
E[2,8]=1
E[3,1]=1
E[3,3]=1
E[4,2]=1
E[4,6]=1
E[5,5]=1
E[5,7]=1
E
simplify(E*B*T)
mu = Symbol('mu')
la = Symbol('lambda')
C_tensor = getIsotropicStiffnessTensor(mu, la)
C = convertStiffnessTensorToMatrix(C_tensor)
C
S=T.T*B.T*E.T*C*E*B*T*A*(1+alpha3*K)**2
S=simplify(S)
S
h=Symbol('h')
S_in = integrate(S*(1-alpha3*K+(alpha3**2)*K),(alpha3, -h/2, h/2))
S_in
E,nu=symbols('E nu')
lambda_elastic=E*nu/((1+nu)*(1-2*nu))
mu_elastic=E/(2*(1+nu))
S_ins=simplify(S_in.subs(A,1).subs(la,lambda_elastic).subs(mu,mu_elastic))
S_ins
a11=E/(1-nu**2)
a44=5*E/(12*(1+nu))
AM=Matrix([[a11,0],[0,a44]])
strainT=Matrix([[1,alpha3,0],[0,0,1]])
AT=strainT.T*AM*strainT
integrate(AT,(alpha3, -h/2, h/2))
M=Matrix([[rho, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, rho, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, rho, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])
M=T.T*M*T*A*(1+alpha3*K)
M
M_in = integrate(M,(alpha3, -h/2, h/2))
M_in
Explanation: Tymoshenko theory
$u_1 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u\left( \alpha_1 \right)+\alpha_3\gamma \left( \alpha_1 \right) $
$u_2 \left( \alpha_1, \alpha_2, \alpha_3 \right)=0 $
$u_3 \left( \alpha_1, \alpha_2, \alpha_3 \right)=w\left( \alpha_1 \right) $
$ \left(
\begin{array}{c}
u_1 \
\frac { \partial u_1 } { \partial \alpha_1} \
\frac { \partial u_1 } { \partial \alpha_2} \
\frac { \partial u_1 } { \partial \alpha_3} \
u_2 \
\frac { \partial u_2 } { \partial \alpha_1} \
\frac { \partial u_2 } { \partial \alpha_2} \
\frac { \partial u_2 } { \partial \alpha_3} \
u_3 \
\frac { \partial u_3 } { \partial \alpha_1} \
\frac { \partial u_3 } { \partial \alpha_2} \
\frac { \partial u_3 } { \partial \alpha_3} \
\end{array}
\right) = T \cdot
\left(
\begin{array}{c}
u \
\frac { \partial u } { \partial \alpha_1} \
\gamma \
\frac { \partial \gamma } { \partial \alpha_1} \
w \
\frac { \partial w } { \partial \alpha_1} \
\end{array}
\right) $
End of explanation
import fem.geometry as g
import fem.model as m
import fem.material as mat
import fem.shell.shellsolver as s
import fem.shell.mesh1D as me
import plot
stiffness_matrix_func = lambdify([A, K, mu, la, h], S_in, "numpy")
mass_matrix_func = lambdify([A, K, rho, h], M_in, "numpy")
def stiffness_matrix(material, geometry, x1, x2, x3):
A,K = geometry.get_A_and_K(x1,x2,x3)
return stiffness_matrix_func(A, K, material.mu(), material.lam(), thickness)
def mass_matrix(material, geometry, x1, x2, x3):
A,K = geometry.get_A_and_K(x1,x2,x3)
return mass_matrix_func(A, K, material.rho, thickness)
def generate_layers(thickness, layers_count, material):
layer_top = thickness / 2
layer_thickness = thickness / layers_count
layers = set()
for i in range(layers_count):
layer = m.Layer(layer_top - layer_thickness, layer_top, material, i)
layers.add(layer)
layer_top -= layer_thickness
return layers
def solve(geometry, thickness, linear, N_width, N_height):
layers_count = 1
layers = generate_layers(thickness, layers_count, mat.IsotropicMaterial.steel())
model = m.Model(geometry, layers, m.Model.FIXED_BOTTOM_LEFT_RIGHT_POINTS)
mesh = me.Mesh1D.generate(width, layers, N_width, m.Model.FIXED_BOTTOM_LEFT_RIGHT_POINTS)
lam, vec = s.solve(model, mesh, stiffness_matrix, mass_matrix)
return lam, vec, mesh, geometry
width = 2
curvature = 0.8
thickness = 0.05
corrugation_amplitude = 0.05
corrugation_frequency = 20
# geometry = g.CorrugatedCylindricalPlate(width, curvature, corrugation_amplitude, corrugation_frequency)
geometry = g.CylindricalPlate(width, curvature)
# geometry = g.Plate(width)
N_width = 100
N_height = 4
lam, vec, mesh, geometry = solve(geometry, thickness, False, N_width, N_height)
results = s.convert_to_results(lam, vec, mesh, geometry)
results_index = 0
plot.plot_init_and_deformed_geometry_in_cartesian(results[results_index], 0, width, -thickness / 2, thickness / 2, 0, geometry.to_cartesian_coordinates)
to_print = 20
if (len(results) < to_print):
to_print = len(results)
for i in range(to_print):
print(results[i].rad_per_sec_to_Hz(results[i].freq))
Explanation: Cartesian coordinates
End of explanation |
6,947 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Team members have produced a list of know database tables.
I'm going to try to represent those in machine-readable format, and run tests against the API for existence and row-count
Table Names Document
https
Step1: For each table, I want to
* assert that it actually exists
* get a rowcount | Python Code:
import requests
import io
import pandas
from itertools import chain
def makeurl(tablename,start,end):
return "https://iaspub.epa.gov/enviro/efservice/{tablename}/JSON/rows/{start}:{end}".format_map(locals())
def table_count(tablename):
url= "https://iaspub.epa.gov/enviro/efservice/{tablename}/COUNT/JSON".format_map(locals())
out=requests.get(url)
try:
return out.json()[0]['TOTALQUERYRESULTS']
except Exception as e:
print(e)
print(out.text)
return -1
table_names=[
"BREPORT_CYCLE",
"RCR_HHANDLER",
"RCR_BGM_BASIC",
"PUB_DIM_FACILITY",
"PUB_FACTS_SUBP_GHG_EMISSION",
"PUB_FACTS_SECTOR_GHG_EMISSION",
"PUB_DIM_SUBPART",
"PUB_DIM_GHG",
"PUB_DIM_SECTOR",
"PUB_DIM_SUBSECTOR",
"PUB_DIM_FACILITY",
"AA_MAKEUP_CHEMICAL_INFO",
"AA_SUBPART_LEVEL_INFORMATION",
"AA_SPENT_LIQUOR_INFORMATION",
"AA_FOSSIL_FUEL_INFORMATION",
"AA_FOSSIL_FUEL_TIER_2_INFO",
"AA_CEMS_DETAILS",
"AA_TIER_4_CEMS_QUARTERLY_CO2",
"PUB_DIM_FACILITY",
"EE_CEMS_DETAILS",
"EE_CEMS_INFO",
"EE_FACILITY_INFO",
"EE_NOCEMS_MONTHLYDETAILS",
"EE_NOCEMSTIO2DETAILS",
"EE_SUBPART_LEVEL_INFORMATION",
"EE_TIER4CEMS_QTRDTLS",
"PUB_DIM_FACILITY",
"GG_FACILITY_INFO",
"GG_NOCEMS_ZINC_DETAILS",
"GG_SUBPART_LEVEL_INFORMATION",
"PUB_DIM_FACILITY",
"II_BIOGAS_REC_PROC",
"II_CH4_GEN_PROCESS",
"II_EQU_II1_OR_II2",
"II_EQU_II4_INPUT",
"II_EQUATION_II3",
"II_EQUATION_II6",
"II_EQUATION_II7",
"II_SUBPART_LEVEL_INFORMATION",
"II_PROCESS_DETAILS",
"PUB_DIM_FACILITY",
"NN_SUBPART_LEVEL_INFORMATION",
"NN_NGL_FRACTIONATOR_METHODS",
"NN_LDC_NAT_GAS_DELIVERIES",
"NN_LDC_DETAILS",
"PUB_DIM_FACILITY",
"R_SUBPART_LEVEL_INFORMATION",
"R_FACILITY_INFO",
"R_SMELTING_FURNACE_INFO",
"R_FEEDSTOCK_INFO",
"PUB_DIM_FACILITY",
"TT_SUBPART_GHG_INFO",
"TT_LANDFILL_DETAILS",
"TT_LF_GAS_COLL_DETAILS",
"TT_WASTE_DEPTH_DETAILS",
"TT_WASTESTREAM_DETLS",
"TT_HIST_WASTE_METHOD",
"PUB_DIM_FACILITY",
"W_SUBPART_LEVEL_INFORMATION",
"W_LIQUIDS_UNLOADING",
"W_TRANSMISSION_TANKS",
"W_PNEUMATIC_DEVICES",
"W_WELL_COMPLETION_HYDRAULIC",
"W_WELL_TESTING",
]
Explanation: Team members have produced a list of known database tables.
I'm going to try to represent those in machine-readable format, and run tests against the API for existence and row-count
Table Names Document
https://docs.google.com/spreadsheets/d/1LDDH-qxJunBqqkS1EfG2mhwgwFi7PylXtz3GYsGjDzA/edit#gid=933879858
End of explanation
table_count(table_names[0])
%%time
table_counts={
table_name:table_count(table_name)
for table_name in table_names
}
pandas.Series(table_counts)
len(table_counts)
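# A minimal follow-up sketch (an addition, not in the original notebook): use
# the collected counts to flag tables that appear to be missing. table_count()
# returns -1 above when the count query fails, which doubles as a rough
# existence check for each table name.
missing = [name for name, count in table_counts.items() if count < 0]
print("Tables whose count query failed:", missing)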
Explanation: For each table, I want to
* assert that it actually exists
* get a rowcount
End of explanation |
6,948 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Generate Features And Target Data
Step2: Create Logistic Regression
Step3: Cross-Validate Model Using Recall | Python Code:
# Load libraries
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification
Explanation: Title: Recall
Slug: recall
Summary: How to evaluate a Python machine learning model using recall.
Date: 2017-09-15 12:00
Category: Machine Learning
Tags: Model Evaluation
Authors: Chris Albon
Recall is the proportion of truly positive observations that the model correctly identifies as positive; it measures the model's ability to find every observation of the positive class. Models with high recall are optimistic in that they have a low bar for predicting that an observation is in the positive class.
$$\displaystyle \mathrm {Recall}=\frac {TP}{TP + FN}$$
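For example, a model that correctly flags 80 of 100 truly positive observations (TP = 80, FN = 20) has a recall of 80 / (80 + 20) = 0.80.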
Preliminaries
End of explanation
# Generate features matrix and target vector
X, y = make_classification(n_samples = 10000,
n_features = 3,
n_informative = 3,
n_redundant = 0,
n_classes = 2,
random_state = 1)
Explanation: Generate Features And Target Data
End of explanation
# Create logistic regression
logit = LogisticRegression()
Explanation: Create Logistic Regression
End of explanation
# Cross-validate model using recall
cross_val_score(logit, X, y, scoring="recall")
Explanation: Cross-Validate Model Using Recall
End of explanation |
6,949 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SBML Parsing Example
Step1: Biomodels repository hosts a number of published models.
We can download one of them to our working directory
Step2: This model can be parsed into MEANS Model object using means.io.read_sbml function. When parsing the SBML format, compartments of species are neglected, as the species names are assumed to be compartment-specific.
The values for the constants and initial amounts of all species are also retrieved for making it easier to simulate the trajectories later.
WARNING
Step3: To view the model, simply output it
Step4: Note that a set of parameters and initial conditions are also parsed from the SBML file directly, let's view them | Python Code:
import means
Explanation: SBML Parsing Example
End of explanation
import urllib
__ = urllib.urlretrieve("http://www.ebi.ac.uk/biomodels/models-main/publ/"
"BIOMD0000000010/BIOMD0000000010.xml.origin",
filename="autoreg.xml")
Explanation: Biomodels repository hosts a number of published models.
We can download one of them to our working directory:
End of explanation
# Requires: libsbml
autoreg_model, autoreg_parameters, autoreg_initial_conditions \
= means.io.read_sbml('autoreg.xml')
Explanation: This model can be parsed into MEANS Model object using means.io.read_sbml function. When parsing the SBML format, compartments of species are neglected, as the species names are assumed to be compartment-specific.
The values for the constants and initial amounts of all species are also retrieved for making it easier to simulate the trajectories later.
WARNING: Please note that in order to run this example
one will have to have libsbml installed together with its Python bindings.
Consult SBML website for more information on how to do this.
End of explanation
autoreg_model
Explanation: To view the model, simply output it:
End of explanation
print autoreg_parameters[:3], '.. snip ..', autoreg_parameters[-3:]
print autoreg_initial_conditions
Explanation: Note that a set of parameters and initial conditions are also parsed from the SBML file directly, let's view them:
End of explanation |
6,950 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Word Embeddings and Sentiment
Step2: Get the dataset
We're going to use a dataset containing Amazon and Yelp reviews, with their related sentiment (1 for positive, 0 for negative). This dataset was originally extracted from here.
Step3: Tokenize the dataset
Tokenize the dataset, including padding and OOV
Step4: Review a Sequence
Let's quickly take a look at one of the padded sequences to ensure everything above worked appropriately.
Step5: Train a Basic Sentiment Model with Embeddings
Step6: Get files for visualizing the network
The code below will download two files for visualizing how your network "sees" the sentiment related to each word. Head to http://projector.tensorflow.org/ and load these files, then click the "Sphereize" checkbox.
Step7: Predicting Sentiment in New Reviews
Now that you've trained and visualized your network, take a look below at how we can predict sentiment in new reviews the network has never seen before. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
Explanation: Word Embeddings and Sentiment
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l09c04_nlp_embeddings_and_sentiment.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l09c04_nlp_embeddings_and_sentiment.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
In this colab, you'll work with word embeddings and train a basic neural network to predict text sentiment. At the end, you'll be able to visualize how the network sees the related sentiment of each word in the dataset.
Import TensorFlow and related functions
End of explanation
!wget --no-check-certificate \
-O /tmp/sentiment.csv https://drive.google.com/uc?id=13ySLC_ue6Umt9RJYSeM2t-V0kCv-4C-P
import numpy as np
import pandas as pd
dataset = pd.read_csv('/tmp/sentiment.csv')
sentences = dataset['text'].tolist()
labels = dataset['sentiment'].tolist()
# Separate out the sentences and labels into training and test sets
training_size = int(len(sentences) * 0.8)
training_sentences = sentences[0:training_size]
testing_sentences = sentences[training_size:]
training_labels = labels[0:training_size]
testing_labels = labels[training_size:]
# Make labels into numpy arrays for use with the network later
training_labels_final = np.array(training_labels)
testing_labels_final = np.array(testing_labels)
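# Optional sanity check (an addition, not part of the original notebook):
# peek at the raw data and confirm the 80/20 train/test split sizes.
print(dataset.head())
print(len(training_sentences), "training /", len(testing_sentences), "testing examples")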
Explanation: Get the dataset
We're going to use a dataset containing Amazon and Yelp reviews, with their related sentiment (1 for positive, 0 for negative). This dataset was originally extracted from here.
End of explanation
vocab_size = 1000
embedding_dim = 16
max_length = 100
trunc_type='post'
padding_type='post'
oov_tok = "<OOV>"
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
tokenizer = Tokenizer(num_words = vocab_size, oov_token=oov_tok)
tokenizer.fit_on_texts(training_sentences)
word_index = tokenizer.word_index
sequences = tokenizer.texts_to_sequences(training_sentences)
padded = pad_sequences(sequences,maxlen=max_length, padding=padding_type,
truncating=trunc_type)
testing_sequences = tokenizer.texts_to_sequences(testing_sentences)
testing_padded = pad_sequences(testing_sequences,maxlen=max_length,
padding=padding_type, truncating=trunc_type)
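# Optional check (an addition, not part of the original notebook): how many
# distinct tokens the tokenizer actually found versus the vocab_size cap used
# above, and the shape of the padded training matrix.
print("Unique tokens found:", len(word_index))
print("Padded training shape:", padded.shape)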
Explanation: Tokenize the dataset
Tokenize the dataset, including padding and OOV
End of explanation
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
def decode_review(text):
return ' '.join([reverse_word_index.get(i, '?') for i in text])
print(decode_review(padded[1]))
print(training_sentences[1])
Explanation: Review a Sequence
Let's quickly take a look at one of the padded sequences to ensure everything above worked appropriately.
End of explanation
# Build a basic sentiment network
# Note the embedding layer is first,
# and the output is only 1 node as it is either 0 or 1 (negative or positive)
model = tf.keras.Sequential([
tf.keras.layers.Embedding(vocab_size, embedding_dim, input_length=max_length),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(6, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy',optimizer='adam',metrics=['accuracy'])
model.summary()
num_epochs = 10
model.fit(padded, training_labels_final, epochs=num_epochs, validation_data=(testing_padded, testing_labels_final))
Explanation: Train a Basic Sentiment Model with Embeddings
End of explanation
# First get the weights of the embedding layer
e = model.layers[0]
weights = e.get_weights()[0]
print(weights.shape) # shape: (vocab_size, embedding_dim)
import io
# Write out the embedding vectors and metadata
out_v = io.open('vecs.tsv', 'w', encoding='utf-8')
out_m = io.open('meta.tsv', 'w', encoding='utf-8')
for word_num in range(1, vocab_size):
word = reverse_word_index[word_num]
embeddings = weights[word_num]
out_m.write(word + "\n")
out_v.write('\t'.join([str(x) for x in embeddings]) + "\n")
out_v.close()
out_m.close()
# Download the files
try:
from google.colab import files
except ImportError:
pass
else:
files.download('vecs.tsv')
files.download('meta.tsv')
Explanation: Get files for visualizing the network
The code below will download two files for visualizing how your network "sees" the sentiment related to each word. Head to http://projector.tensorflow.org/ and load these files, then click the "Sphereize" checkbox.
End of explanation
# Use the model to predict a review
fake_reviews = ['I love this phone', 'I hate spaghetti',
'Everything was cold',
'Everything was hot exactly as I wanted',
'Everything was green',
'the host seated us immediately',
'they gave us free chocolate cake',
'not sure about the wilted flowers on the table',
'only works when I stand on tippy toes',
'does not work when I stand on my head']
print(fake_reviews)
# Create the sequences
padding_type='post'
sample_sequences = tokenizer.texts_to_sequences(fake_reviews)
fakes_padded = pad_sequences(sample_sequences, padding=padding_type, maxlen=max_length)
print('\nHOT OFF THE PRESS! HERE ARE SOME NEWLY MINTED, ABSOLUTELY GENUINE REVIEWS!\n')
classes = model.predict(fakes_padded)
# The closer the class is to 1, the more positive the review is deemed to be
for x in range(len(fake_reviews)):
print(fake_reviews[x])
print(classes[x])
print('\n')
# Try adding reviews of your own
# Add some negative words (such as "not") to the good reviews and see what happens
# For example:
# they gave us free chocolate cake and did not charge us
Explanation: Predicting Sentiment in New Reviews
Now that you've trained and visualized your network, take a look below at how we can predict sentiment in new reviews the network has never seen before.
End of explanation |
6,951 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Names
Step1: 1. Background Information
1.1 Introduction to the Second Half of the Class
The remainder of this course will be divided into three two week modules, each dealing with a different dataset. During the first week of each module, you will complete a (two class) lab in which you are introduced to the dataset and various techniques that you need to use to explore it.
At the end of Week 1, you and your lab partner will write a brief (1 paragraph) proposal to Professor Follette detailing an investigation that you would like to complete using that dataset in Week 2. You and your partner will complete this investigation and write it up as your lab the following week. Detailed instructions for submitting your proposal are at the end of this lab. Detailed instructions for the lab writeups will be provided next week.
1.2. Introduction to the QuaRCS Dataset
The Quantitative Reasoning for College Science (QuaRCS) assessment is an assessment instrument that Professor Follette has been administering in general education science classes across the country since 2012. It consists of 25 quantitative questions involving "real world" mathematical skills plus 24 attitudinal and demographic questions. It has been administered to more than 5000 students at eleven institutions. You will be reading the published results of this study for class on Thursday, and exploring the data in class this week and next.
A description of all of the variables (pandas dataframe columns) in the QuaRCS dataset and what each numerical answer choice "stands for" is in the file QuaRCS_descriptions.pdf.
2. Investigating Tabular Data with Pandas
2.1 Reading In and Cleaning Data
Step2: Read in the QuaRCS data as a pandas dataframe called "data".
Step3: Once a dataset has been read in as a pandas dataframe, several useful built-in pandas methods are made available to us. Recall that you call methods with data.method. Check out each of the following
Step4: 2.2 The describe() method
There are also a whole bunch of built in functions that can operate on a pandas dataframe that become available once you've defined it. To see a full list type data. in an empty frame and then hit tab.
An especially useful one is dataframe.describe() method, which creates a summary table with some common statistics for all of the columns in the dataframe.
In our case here there are a number of NaNs in our table (cases where an answer was left blank), and the describe method ignores them for mean, standard deviation (std), min and max. However, there is a known bug in the pandas module that cause NaNs to break the quartiles in the describe method, so these will always be NaN for any column that has a NaN anywhere in it, rendering them mostly useless here. Still, this is a nice quick way to get descriptive statistics for a table.
Step5: 2.3. Computing Descriptive Statistics
You can also of course compute descriptive statistics for columns in a pandas dataframe individually. Examples of each one applied to a single column - student scores on the assessment (PRE_SCORE) are shown below.
Step6: <div class=hw>
### Exercise 1
------------------
Choose one categorical (answer to any demographic or attitudinal question) and one continuous variable (e.g. PRE_TIME, ZPR_1) and compute all of the statistics from the list above ***in one code cell*** (use print statements) for each variable. Write a paragraph describing all of the statistics that are informative for that variable in words. An example is given below for PRE_SCORE. Because score is numerical ***and*** discrete, all of the statistics above are informative. In your two cases, fewer statistics will be informative, so your explanations may be shorter, though you should challenge yourselves to go beyond merely reporting the statistcs, and should interpret them as well, as below.
*QuaRCS score can take discrete integer values between 0 and 25. The minimum score for this dataset is 1 and the maximum is 25. There are 2,777 valid entries for score in this QuaRCS dataset, for which the mean is 13.9 and the median is 14 (both 56\% of the maximum score). These are very close together, suggesting a reasonably centrally-concentrated score distrubution, and the low skewness value of 0.1 supports this. The kurtosis of the distribution is negative (platykurtic), which tells us that the distribution of scores is flat rather than peaky. The most common score ("mode") is 10, with 197 (~7%) of participants getting this score, however all score values from 7-21 have counts of greater than 100, supporting the flat nature of the distribution suggested by the negative kurtosis. The interquartile range (25-75 percentiles) is 8 points, and the standard deviation is 5.3. These represent a large fraction (20 and 32\%) of the entire available score range, respectively, making the distribution quite wide.
*Your description of categorical distribution here*
*Your description of continuous distribution here*
Step7: 2.4. Creating Statistical Graphics
<div class=hw>
### Exercise 2 - Summary plots for distributions
*Warning
Step8: Your explanation here, with figures
<div class=hw>
### 2b - Box plot
The syntax for creating a box plot for a pair of pandas dataframe columns is
Step9: Your explanation here
<div class=hw>
### 2c - Pie Chart
The format for making the kind of pie chart that might be useful in this context is as follows
Step10: Your explanation here
<div class=hw>
### 2d - Scatter Plot
The syntax for creating a scatter plot is
Step11: Your explanation here
2.5. Selecting a Subset of Data
<div class=hw>
### Exercise 3
--------------
Write a function called "filter" that takes a dataframe, column name, and value for that column as input and returns a new dataframe containing only those rows where column name = value. For example filter(data, "PRE_GENDER", 1) should return a dataframe about half the size of the original dataframe where all values in the PRE_GENDER column are 1.
Step12: If you get to this point during lab time on Tuesday, stop here
3. Testing Differences Between Datasets
3.1 Computing Confidence Intervals
Now that we have a mechanism for filtering the dataset, we can test differences between groups with confidence intervals. The syntax for computing the confidence interval on a mean for a given variable is as follows.
variable1 = st.t.interval(conf_level,n,loc=np.nanmean(variable2), scale=st.sem(variable2))
where conf_level is the confidence level you with to calculate (e.g. 0.95 is 95% confidence, 0.98 is 98%, etc.)
n is the number of samples and should generally be set to the number of valid entries in variable2 -1.
An example can be found below.
Step13: <div class=hw>
### Exercise 4
------------------
Choose a categorical variable (any demographic or attitudinal variable) that you find interesting and that has at least four possible values and calculate the confidence intervals on the mean score for each group. Then write a paragraph describing the results. Are the differences between the groups significant according to your data? Would they still be significant if you were to compute the 98% confidence intervals?
Step14: explanatory text
3.2 Visualizing Differences with Overlapping Plots
<div class=hw>
### Exercise 5
---------------
Make another dataframe consisting only of students who "devoted effort" to the assessment, meaning their answer for PRE_EFFORT was EITHER a 4 or a 5 (you may have to modify your filter function to accept more than one value for "value").
Make overlapping histograms showing (a) scores for the entire student population and (b) scores for this "high effort" subset. The "alpha" keyword inside the plot commands will set the transparency of your histogram so that you can see both. Play around with it until it looks good. Make sure your chart includes a legend, and describe what conclusions you can draw from the result in a paragraph below the final chart.
Step15: explanatory text here
4. Data Investigation - Week 2 Instructions
Now that you are familar with the QuaRCS dataset, you and your partner must come up with an investigation that you would like to complete using this data. For the next two modules, this will be more open, but for this first investigation, I will suggest the following three options, of which each group will need to pick one (we will divide in class) | Python Code:
#various things that we will need
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import scipy.stats as st
Explanation: Names: [Insert Your Names Here]
Lab 9 - Data Investigation 1 (Week 1) - Educational Research Data
Lab 9 Contents
Background Information
Intro to the Second Half of the Class
Intro to Dataset 1: The Quantitative Reasoning for College Science Assessment
Investigating Tabular Data with Pandas
Reading in and Cleaning Data
The describe() Method
Computing Descriptive Statistics
Creating Statistical Graphics
Selecting a Subset of Data
Testing Differences Between Datasets
Computing Confidence Intervals
Visualizing Differences with Overlapping Plots
Data Investigation 1 - Week 2 Instructions
End of explanation
# these set the pandas defaults so that it will print ALL values, even for very long lists and large dataframes
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
Explanation: 1. Background Information
1.1 Introduction to the Second Half of the Class
The remainder of this course will be divided into three two week modules, each dealing with a different dataset. During the first week of each module, you will complete a (two class) lab in which you are introduced to the dataset and various techniques that you need to use to explore it.
At the end of Week 1, you and your lab partner will write a brief (1 paragraph) proposal to Professor Follette detailing an investigation that you would like to complete using that dataset in Week 2. You and your partner will complete this investigation and write it up as your lab the following week. Detailed instructions for submitting your proposal are at the end of this lab. Detailed instructions for the lab writeups will be provided next week.
1.2. Introduction to the QuaRCS Dataset
The Quantitative Reasoning for College Science (QuaRCS) assessment is an assessment instrument that Professor Follette has been administering in general education science classes across the country since 2012. It consists of 25 quantitative questions involving "real world" mathematical skills plus 24 attitudinal and demographic questions. It has been administered to more than 5000 students at eleven institutions. You will be reading the published results of this study for class on Thursday, and exploring the data in class this week and next.
A description of all of the variables (pandas dataframe columns) in the QuaRCS dataset and what each numerical answer choice "stands for" is in the file QuaRCS_descriptions.pdf.
2. Investigating Tabular Data with Pandas
2.1 Reading In and Cleaning Data
End of explanation
data=pd.read_csv('AST200_data_anonymized.csv', encoding="ISO-8859-1")
mask = np.where(data == 999)
data = data.replace(999,np.nan)
Explanation: Read in the QuaRCS data as a pandas dataframe called "data".
End of explanation
# the * is a trick to print without the ...s for an ordinary python object
print(*data.columns)
data.dtypes
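# Another built-in worth a quick look (an optional addition, not required by
# the lab): the first few rows of the table.
data.head()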
Explanation: Once a dataset has been read in as a pandas dataframe, several useful built-in pandas methods are made available to us. Recall that you call methods with data.method. Check out each of the following
End of explanation
data.describe()
Explanation: 2.2 The describe() method
There are also a whole bunch of built in functions that can operate on a pandas dataframe that become available once you've defined it. To see a full list type data. in an empty frame and then hit tab.
An especially useful one is dataframe.describe() method, which creates a summary table with some common statistics for all of the columns in the dataframe.
In our case here there are a number of NaNs in our table (cases where an answer was left blank), and the describe method ignores them for mean, standard deviation (std), min and max. However, there is a known bug in the pandas module that cause NaNs to break the quartiles in the describe method, so these will always be NaN for any column that has a NaN anywhere in it, rendering them mostly useless here. Still, this is a nice quick way to get descriptive statistics for a table.
End of explanation
np.mean(data["PRE_SCORE"])
#or
data["PRE_SCORE"].mean()
np.nanmedian(data["PRE_SCORE"])
#or
data["PRE_SCORE"].median()
data["PRE_SCORE"].max()
data["PRE_SCORE"].min()
data["PRE_SCORE"].mode()
#where the first number is the index (should be zero unless the column has multiple dimensions)
# and the second number is the mode
#not super useful for continuous variables for example, if you put in a continuous variable (like ZPR_1) it won't
#return anything because there are no repeat values
#perhaps equally useful is the value_counts method, which will tell you how many times each value appears int he column
data["PRE_SCORE"].value_counts()
#and to count all of the non-zero values
data["PRE_SCORE"].count()
#generally different from len(dataframe["column name"]) because len will count NaNs
# but the Score column has no NaNs, so swap this cell and the one before out with
#a column that does have NaNs to verify
len(data["PRE_SCORE"])
#standard deviation
data["PRE_SCORE"].std()
#variance
data["PRE_SCORE"].var()
#verify relationship between variance and standard deviation
np.sqrt(data["PRE_SCORE"].var())
#quantiles
data["PRE_SCORE"].quantile(0.5) # should return the median!
data["PRE_SCORE"].quantile(0.25)
data["PRE_SCORE"].quantile(0.75)
#interquartile range
data["PRE_SCORE"].quantile(0.75)-data["PRE_SCORE"].quantile(0.25)
data["PRE_SCORE"].skew()
data["PRE_SCORE"].kurtosis()
Explanation: 2.3. Computing Descriptive Statistics
You can also of course compute descriptive statistics for columns in a pandas dataframe individually. Examples of each one applied to a single column - student scores on the assessment (PRE_SCORE) are shown below.
End of explanation
#your code computing all descriptive statistics for your categorical variable here
#your code computing all descriptive statistics for your categorical variable here
Explanation: <div class=hw>
### Exercise 1
------------------
Choose one categorical (answer to any demographic or attitudinal question) and one continuous variable (e.g. PRE_TIME, ZPR_1) and compute all of the statistics from the list above ***in one code cell*** (use print statements) for each variable. Write a paragraph describing all of the statistics that are informative for that variable in words. An example is given below for PRE_SCORE. Because score is numerical ***and*** discrete, all of the statistics above are informative. In your two cases, fewer statistics will be informative, so your explanations may be shorter, though you should challenge yourselves to go beyond merely reporting the statistcs, and should interpret them as well, as below.
*QuaRCS score can take discrete integer values between 0 and 25. The minimum score for this dataset is 1 and the maximum is 25. There are 2,777 valid entries for score in this QuaRCS dataset, for which the mean is 13.9 and the median is 14 (both 56\% of the maximum score). These are very close together, suggesting a reasonably centrally-concentrated score distrubution, and the low skewness value of 0.1 supports this. The kurtosis of the distribution is negative (platykurtic), which tells us that the distribution of scores is flat rather than peaky. The most common score ("mode") is 10, with 197 (~7%) of participants getting this score, however all score values from 7-21 have counts of greater than 100, supporting the flat nature of the distribution suggested by the negative kurtosis. The interquartile range (25-75 percentiles) is 8 points, and the standard deviation is 5.3. These represent a large fraction (20 and 32\%) of the entire available score range, respectively, making the distribution quite wide.
*Your description of categorical distribution here*
*Your description of continuous distribution here*
End of explanation
#this cell is for playing around with histograms
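# One possible warm-up (illustrative only -- swap in any column you like;
# PRE_SCORE is simply the column used in the instructor's worked example above):
data["PRE_SCORE"].hist(bins=25)
plt.xlabel("PRE_SCORE")
plt.ylabel("Number of students")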
Explanation: 2.4. Creating Statistical Graphics
<div class=hw>
### Exercise 2 - Summary plots for distributions
*Warning: Although you will be using QuaRCS data to investigate and experiment with each type of plot below, when you write up your descriptions, they should refer to the **general properties** of the plots, and not to the QuaRCS data specifically. In other words, your descriptions should be general descriptions of the plot types that could be applied to any dataset.*
### 2a - Histogram
The syntax for creating a histogram for a pandas dataframe column is:
dataframe["Column Name"].hist(bins=nbins)
Play around with the column name and bins and refer to the docstring as needed until you understand thoroughly what is being shown. Describe what this ***type of plot*** (not any individual plot that you've made) shows in words and describe when you think it might be useful.
Play around with inputs (e.g. column name) until you find a case (dataframe column) where you think the histogram tells you something important and use it as an example to inform your answer. Inputs that do not produce informative histograms should also help to inform your answer. Save a couple of representative histograms (good and bad, use plt.savefig("figure name")) and integrate them into your written (markdown) explanation to support your argument.
End of explanation
#your sample boxplot code here
Explanation: Your explanation here, with figures
<div class=hw>
### 2b - Box plot
The syntax for creating a box plot for a pair of pandas dataframe columns is:
dataframe.boxplot(column="column name 1", by="column name 2")
Play around with the column and by variables and refer to the docstring as needed until you understand thoroughly what is being shown. Describe what this ***type of plot*** (not any individual plot that you've made) shows in words and describe when you think it might be useful.
Play around with inputs (e.g. column names) until you find a case that you think is well-described by a box and whisker plot and use it as an example to inform your answer. Inputs that do not produce informative box plots should also help to inform your answer. Save a couple of representative box plots (good and bad) and integrate them into your written explanation.
End of explanation
#your sample pie chart code here
Explanation: Your explanation here
<div class=hw>
### 2c - Pie Chart
The format for making the kind of pie chart that might be useful in this context is as follows:
newdataframe = dataframe["column name"].value_counts()
newdataframe.plot.pie(figsize=(6,6))
Play around with the column and refer to the docstring as needed until you understand thoroughly what is being shown. Describe what this ***type of plot*** (not any individual plot that you've made) shows in words and describe when you think it might be useful. In your explanation here, focus on how a bar chart compares to a histogram, and when you think one or the other might be useful.
Play around with inputs (e.g. column names) until you find a case that you think is well-described by a pie chart and use it as an example to inform your answer. Inputs that do not produce informative pie charts should also help to inform your answer. Save a couple of representative pie charts (good and bad) and integrate them into your written explanation.
End of explanation
#your sample scatter plot code here
Explanation: Your explanation here
<div class=hw>
### 2d - Scatter Plot
The syntax for creating a scatter plot is:
dataframe.plot.scatter(x='column name',y='column name')
Play around with the column and refer to the docstring as needed until you understand thoroughly what is being shown. Describe what this ***type of plot*** (not any individual plot that you've made) shows in words and describe when you think it might be useful.
Play around with inputs (e.g. column names) until you find a case that you think is well-described by a scatter plot and use it as an example to inform your answer. Inputs that do not produce informative scatter plots should also help to inform your answer. Save a couple of representative pie charts (good and bad) and integrate them into your written explanation.
End of explanation
#your function here
#your tests here
Explanation: Your explanation here
2.5. Selecting a Subset of Data
<div class=hw>
### Exercise 3
--------------
Write a function called "filter" that takes a dataframe, column name, and value for that column as input and returns a new dataframe containing only those rows where column name = value. For example filter(data, "PRE_GENDER", 1) should return a dataframe about half the size of the original dataframe where all values in the PRE_GENDER column are 1.
End of explanation
## apply filter to select only men from data, and pull the scores from this group into a variable
df2=filter(data,'PRE_GENDER',1)
men_scores=df2['PRE_SCORE']
#compute 95% confidence intervals on the mean (low and high)
men_conf=st.t.interval(0.95, len(men_scores)-1, loc=np.mean(men_scores), scale=st.sem(men_scores))
men_conf
Explanation: If you get to this point during lab time on Tuesday, stop here
3. Testing Differences Between Datasets
3.1 Computing Confidence Intervals
Now that we have a mechanism for filtering the dataset, we can test differences between groups with confidence intervals. The syntax for computing the confidence interval on a mean for a given variable is as follows.
variable1 = st.t.interval(conf_level,n,loc=np.nanmean(variable2), scale=st.sem(variable2))
where conf_level is the confidence level you with to calculate (e.g. 0.95 is 95% confidence, 0.98 is 98%, etc.)
n is the number of samples and should generally be set to the number of valid entries in variable2 -1.
An example can be found below.
End of explanation
#code to filter data and compute confidence intervals for each answer choice
Explanation: <div class=hw>
### Exercise 4
------------------
Choose a categorical variable (any demographic or attitudinal variable) that you find interesting and that has at least four possible values and calculate the confidence intervals on the mean score for each group. Then write a paragraph describing the results. Are the differences between the groups significant according to your data? Would they still be significant if you were to compute the 98% confidence intervals?
End of explanation
#modified filter function here
#define your new high effort dataframe using the filter
#plot two overlapping histograms
Explanation: explanatory text
3.2 Visualizing Differences with Overlapping Plots
<div class=hw>
### Exercise 5
---------------
Make another dataframe consisting only of students who "devoted effort" to the assessment, meaning their answer for PRE_EFFORT was EITHER a 4 or a 5 (you may have to modify your filter function to accept more than one value for "value").
Make overlapping histograms showing (a) scores for the entire student population and (b) scores for this "high effort" subset. The "alpha" keyword inside the plot commands will set the transparency of your histogram so that you can see both. Play around with it until it looks good. Make sure your chart includes a legend, and describe what conclusions you can draw from the result in a paragraph below the final chart.
End of explanation
from IPython.core.display import HTML
def css_styling():
styles = open("../custom.css", "r").read()
return HTML(styles)
css_styling()
Explanation: explanatory text here
4. Data Investigation - Week 2 Instructions
Now that you are familar with the QuaRCS dataset, you and your partner must come up with an investigation that you would like to complete using this data. For the next two modules, this will be more open, but for this first investigation, I will suggest the following three options, of which each group will need to pick one (we will divide in class):
Design visualizations that compare student attitudes pre and post-semester
Design visualizations that compare student skills (by topical area) pre and post semester
Design visualizations that compare students' awareness of their own skills pre and post semester
Before 5pm next Monday evening (3/27), you must send Professor Follette a brief e-mail (that you write together, one e-mail per group) describing a plan for how you will approach the problem you've been assigned. What do you need to know that you don't know already? What kind of plots will you make and what kinds of statistics will you compute? What is your first thought for what your final data representations will look like (histograms? box and whisker plots? overlapping plots or side by side?).
End of explanation |
6,952 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Using pre-trained embeddings with TensorFlow Hub</h1>
This notebook illustrates
Step1: Install the TensorFlow Hub library
Step2: <h2>TensorFlow Hub Concepts</h2>
TensorFlow Hub is a library for the publication, discovery, and consumption of reusable parts of machine learning models. A module is a self-contained piece of a TensorFlow graph, along with its weights and assets, that can be reused across different tasks in a process known as transfer learning, which we covered as part of the course on Image Models.
To download and use a module, it's as easy as
Step3: When I completed this exercise, I got a vector that looked like
Step4: Now, we'll use the same process of using our Hub module to generate embeddings but instead of printing the embeddings, capture them in a variable called 'my_embeddings'.
Step5: Now, we'll use Seaborn's heatmap function to see how the vectors compare to each other. I've written the shell of a function that you'll need to complete that will generate a heatmap. The one piece that's missing is how we'll compare each pair of vectors. Note that because we are computing a score for every pair of vectors, we should have len(my_embeddings)^2 scores. There are many valid ways of comparing vectors. Generality, similarity scores are symmetric. The simplest is to take their dot product. For extra credit, implement a more complicated vector comparison function.
Step6: What you should observe is that, trivially, all words are identical to themselves, and, more interestingly, that the two more similar words have more similar embeddings than the third word.
<h2>Task 3
Step7: Which is cat more similar to, "The cat sat on the mat" or "dog"? Is this desireable?
Think back to how an RNN scans a sequence and maintains its state. Naive methods of embedding composition (mapping many to one) can't possibly compete with a network trained for this very purpose!
<h2>Task 4
Step8: <h3>Build the Evaluation Graph</h3>
Next, we need to build the evaluation graph.
Step10: <h3>Evaluate Sentence Embeddings</h3>
Finally, we need to create a session and run our evaluation. | Python Code:
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
Explanation: <h1>Using pre-trained embeddings with TensorFlow Hub</h1>
This notebook illustrates:
<ol>
<li>How to instantiate a TensorFlow Hub module</li>
<li>How to find pre-trained TensorFlow Hub modules for a variety of purposes</li>
<li>How to examine the embeddings of a Hub module</li>
<li>How one Hub module composes representations of sentences from individual words</li>
<li>How to assess word embeddings using a semantic similarity test</li>
</ol>
End of explanation
!pip install -q tensorflow-hub
import tensorflow as tf
import tensorflow_hub as hub
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import re
import seaborn as sns
import scipy
import math
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
os.environ['TFVERSION'] = '1.8'
import tensorflow as tf
print(tf.__version__)
Explanation: Install the TensorFlow Hub library
End of explanation
# Task 1
embed = ...
Explanation: <h2>TensorFlow Hub Concepts</h2>
TensorFlow Hub is a library for the publication, discovery, and consumption of reusable parts of machine learning models. A module is a self-contained piece of a TensorFlow graph, along with its weights and assets, that can be reused across different tasks in a process known as transfer learning, which we covered as part of the course on Image Models.
To download and use a module, it's as easy as:
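A minimal sketch of that pattern looks something like the two lines below (the module URL is only a placeholder for whichever module you pick, and the call assumes the TF1-style hub.Module API used throughout this notebook):
module = hub.Module("https://tfhub.dev/google/<some-module>/<version>")
embeddings = module(["cat", "dog"])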
However, because modules are self-contained parts of a TensorFlow graph, in order to actually collect values from a module, you'll need to evaluate it in the context of a session.
First, let's explore what hub modules there are. Go to the documentation page and explore a bit.
Note that TensorFlow Hub has modules for Images, Text, and Other. In this case, we're interested in a Text module, so navigate to the Text section.
Within the Text section, there are a number of modules. If you click on a link, you'll be taken to a page that describes the module and links to the original paper where the model was proposed. Click on a model in the Word2Vec section of the page.
Note the details section, which describes what the module expects as input, how it preprocesses data, what it does when it encounters a word it hasn't seen before (OOV means "out of vocabulary") and in this case, how word embeddings can be composed to form sentence embeddings.
Finally, note the URL of the page. This is the URL you can copy to instantiate your module.
<h2>Task 1: Create an embedding using the NNLM model</h2>
To complete this task:
<ol>
<li>Find the module URL for the NNLM 50 dimensional English model</li>
<li>Use it to instantiate a module as 'embed'</li>
<li>Print the embedded representation of "cat"</li>
</ol>
NOTE: downloading hub modules requires downloading a lot of data. Instantiating the module will take a few minutes.
End of explanation
word_1 = #
word_2 = #
word_3 = #
Explanation: When I completed this exercise, I got a vector that looked like:
[[ 0.11233182 -0.3176392 -0.01661182...]]
<h2>Task 2: Assess the Embeddings Informally</h2>
<ol>
<li>Identify some words to test</li>
<li>Retrieve the embeddings for each word</li>
<li>Determine what method to use to compare each pair of embeddings</li>
</ol>
So, now we have some vectors but the question is, are they any good? One way of testing whether they are any good is to try them for your task. But, first, let's just take a peek.
For our test, we'll need three common words such that two of the words are much closer in meaning than the third.
End of explanation
# Task 2b
Explanation: Now, we'll use the same process of using our Hub module to generate embeddings but instead of printing the embeddings, capture them in a variable called 'my_embeddings'.
End of explanation
def plot_similarity(labels, embeddings):
corr = # ... TODO: fill out a len(embeddings) x len(embeddings) array
sns.set(font_scale=1.2)
g = sns.heatmap(
corr,
xticklabels=labels,
yticklabels=labels,
vmin=0,
vmax=1,
cmap="YlOrRd")
g.set_xticklabels(labels, rotation=90)
g.set_title("Semantic Textual Similarity")
plot_similarity([word_1, word_2, word_3], my_embeddings)
Explanation: Now, we'll use Seaborn's heatmap function to see how the vectors compare to each other. I've written the shell of a function that you'll need to complete that will generate a heatmap. The one piece that's missing is how we'll compare each pair of vectors. Note that because we are computing a score for every pair of vectors, we should have len(my_embeddings)^2 scores. There are many valid ways of comparing vectors. Generally, similarity scores are symmetric. The simplest is to take their dot product. For extra credit, implement a more complicated vector comparison function.
End of explanation
# Task 3
Explanation: What you should observe is that, trivially, all words are identical to themselves, and, more interestingly, that the two more similar words have more similar embeddings than the third word.
<h2>Task 3: From Words to Sentences</h2>
Up until now, we've used our module to produce representations of words. But, in fact, if we want to, we can also use it to construct representations of sentences. The methods used by the module to compose a representation of a sentence won't be as nuanced as what an RNN might do, but they are still worth examining because they are so convenient.
<ol>
<li> Examine the documentation for our hub module and determine how to ask it to construct a representation of a sentence</li>
<li> Figure out how the module takes word embeddings and uses them to construct sentence embeddings </li>
<li> Construct a embeddings of a "cat", "The cat sat on the mat", "dog" and "The cat sat on the dog" and plot their similarity
</ol>
End of explanation
def load_sts_dataset(filename):
# Loads a subset of the STS dataset into a DataFrame. In particular both
# sentences and their human rated similarity score.
sent_pairs = []
with tf.gfile.GFile(filename, "r") as f:
for line in f:
ts = line.strip().split("\t")
# (sent_1, sent_2, similarity_score)
sent_pairs.append((ts[5], ts[6], float(ts[4])))
return pd.DataFrame(sent_pairs, columns=["sent_1", "sent_2", "sim"])
def download_and_load_sts_data():
sts_dataset = tf.keras.utils.get_file(
fname="Stsbenchmark.tar.gz",
origin="http://ixa2.si.ehu.es/stswiki/images/4/48/Stsbenchmark.tar.gz",
extract=True)
sts_dev = load_sts_dataset(
os.path.join(os.path.dirname(sts_dataset), "stsbenchmark", "sts-dev.csv"))
sts_test = load_sts_dataset(
os.path.join(
os.path.dirname(sts_dataset), "stsbenchmark", "sts-test.csv"))
return sts_dev, sts_test
sts_dev, sts_test = download_and_load_sts_data()
sts_dev.head()
Explanation: Which is "cat" more similar to, "The cat sat on the mat" or "dog"? Is this desirable?
Think back to how an RNN scans a sequence and maintains its state. Naive methods of embedding composition (mapping many to one) can't possibly compete with a network trained for this very purpose!
<h2>Task 4: Assessing the Embeddings Formally</h2>
Of course, it's great to know that our embeddings match our intuitions to an extent, but it'd be better to have a formal, data-driven measure of the quality of the representation.
Researchers have developed standard benchmarks for exactly this purpose.
The STS Benchmark provides an intrinsic evaluation of the degree to which similarity scores computed using sentence embeddings align with human judgements. The benchmark requires systems to return similarity scores for a diverse selection of sentence pairs. Pearson correlation is then used to evaluate the quality of the machine similarity scores against human judgements.
End of explanation
sts_input1 = tf.placeholder(tf.string, shape=(None))
sts_input2 = tf.placeholder(tf.string, shape=(None))
# For evaluation we use exactly normalized rather than
# approximately normalized.
sts_encode1 = tf.nn.l2_normalize(embed(sts_input1), axis=1)
sts_encode2 = tf.nn.l2_normalize(embed(sts_input2), axis=1)
cosine_similarities = tf.reduce_sum(tf.multiply(sts_encode1, sts_encode2), axis=1)
clip_cosine_similarities = tf.clip_by_value(cosine_similarities, -1.0, 1.0)
sim_scores = 1.0 - tf.acos(clip_cosine_similarities)
Explanation: <h3>Build the Evaluation Graph</h3>
Next, we need to build the evaluation graph.
End of explanation
sts_data = sts_dev #@param ["sts_dev", "sts_test"] {type:"raw"}
text_a = sts_data['sent_1'].tolist()
text_b = sts_data['sent_2'].tolist()
dev_scores = sts_data['sim'].tolist()
def run_sts_benchmark(session):
Returns the similarity scores
emba, embb, scores = session.run(
[sts_encode1, sts_encode2, sim_scores],
feed_dict={
sts_input1: text_a,
sts_input2: text_b
})
return scores
with tf.Session() as session:
session.run(tf.global_variables_initializer())
session.run(tf.tables_initializer())
scores = run_sts_benchmark(session)
pearson_correlation = scipy.stats.pearsonr(scores, dev_scores)
print('Pearson correlation coefficient = {0}\np-value = {1}'.format(
pearson_correlation[0], pearson_correlation[1]))
Explanation: <h3>Evaluate Sentence Embeddings</h3>
Finally, we need to create a session and run our evaluation.
End of explanation |
6,953 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
7. Fixed Loops
In the previous lesson we studied conditional loops. Now it is time to see fixed loops.
What's the difference?
With a fixed loop, you know how many times you are going to repeat the loop in advance. This is not the case with conditional loops as you have seen.
Run this code. It should produce 5 stars, each on a separate line.
Then change it to produce just 3 stars, each on a separate line.
Step1: If you've done it right, your output should now look like this
Step2: The variable number is called a 'dummy variable'. It's not that we are insulting it, rather think of number as a placeholder. In the copy of the same loop below, you should change both instances of number to a variable name of your own choice. I recommend beeblebrox, but it's up to you. Make sure the code still works as before.
Step3: Ranges can start with other numbers, but stop before the second number.
Run this code to obtain a list from 10 to 19
Step4: You should now change the following program so that it shows a list of the numbers from 20 to 39.
Please see how we now put a comma after each number as well as a space.
Step5: If you've done it right, your output should now look like this
Step6: Alter this code to produce the stations of the 6 times table | Python Code:
for star in range(5):
print("*")
Explanation: 7. Fixed Loops
In the previous lesson we studied conditional loops. Now it is time to see fixed loops.
What's the difference?
With a fixed loop, you know how many times you are going to repeat the loop in advance. This is not the case with conditional loops as you have seen.
Run this code. It should produce 5 stars, each on a separate line.
Then change it to produce just 3 stars, each on a separate line.
End of explanation
for number in range(5):
print(number)
Explanation: If you've done it right, your output should now look like this:
*
*
*
The function range() will produce a series of numbers for you, starting with zero by default.
Technically, range(n) produces a zero-indexed iterator 0, 1, 2, ..., n-1; it stops just before n.
Run this code to produce a count from zero to four:
End of explanation
for number in range(5):
print(number)
Explanation: The variable number is called a 'dummy variable'. It's not that we are insulting it, rather think of number as a placeholder. In the copy of the same loop below, you should change both instances of number to a variable name of your own choice. I recommend beeblebrox, but it's up to you. Make sure the code still works as before.
End of explanation
for number in range(10,20):
print(number, end=' ')
Explanation: Ranges can start with other numbers, but stop before the second number.
Run this code to obtain a list from 10 to 19
End of explanation
for number in range(10,20): #change the range!
print(number, end=' ') #change the end to end=','
Explanation: You should now change the following program so that it shows a list of the numbers from 20 to 39.
Please see how we now put a comma after each number as well as a space.
End of explanation
for number in range(1,20,2):
print(number, end=' ')
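# An extra illustration (not part of the original lesson): a range can also
# count downwards by giving it a negative step.
for number in range(10, 0, -2):
    print(number, end=' ')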
Explanation: If you've done it right, your output should now look like this:
20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,
Ranges can go up (or down) by different numbers.
Run this example to produce the first few odd numbers.
End of explanation
for number in range(0,73,3):
print(number, end=' ')
Explanation: Alter this code to produce the stations of the 6 times table:
End of explanation |
6,954 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examples and Exercises from Think Stats, 2nd Edition
http://thinkstats2.com
Step2: The estimation game
Root mean squared error is one of several ways to summarize the average error of an estimation process.
Step4: The following function simulates experiments where we try to estimate the mean of a population based on a sample with size n=7. We run iters=1000 experiments and collect the mean and median of each sample.
Step6: Using $\bar{x}$ to estimate the mean works a little better than using the median; in the long run, it minimizes RMSE. But using the median is more robust in the presence of outliers or large errors.
Estimating variance
The obvious way to estimate the variance of a population is to compute the variance of the sample, $S^2$, but that turns out to be a biased estimator; that is, in the long run, the average error doesn't converge to 0.
The following function computes the mean error for a collection of estimates.
Step7: The following function simulates experiments where we try to estimate the variance of a population based on a sample with size n=7. We run iters=1000 experiments and two estimates for each sample, $S^2$ and $S_{n-1}^2$.
Step8: The mean error for $S^2$ is non-zero, which suggests that it is biased. The mean error for $S_{n-1}^2$ is close to zero, and gets even smaller if we increase iters.
The sampling distribution
The following function simulates experiments where we estimate the mean of a population using $\bar{x}$, and returns a list of estimates, one from each experiment.
Step9: Here's the "sampling distribution of the mean" which shows how much we should expect $\bar{x}$ to vary from one experiment to the next.
Step10: The mean of the sample means is close to the actual value of $\mu$.
Step11: An interval that contains 90% of the values in the sampling disrtribution is called a 90% confidence interval.
Step12: And the RMSE of the sample means is called the standard error.
Step13: Confidence intervals and standard errors quantify the variability in the estimate due to random sampling.
Estimating rates
The following function simulates experiments where we try to estimate the mean of an exponential distribution using the mean and median of a sample.
Step16: The RMSE is smaller for the sample mean than for the sample median.
But neither estimator is unbiased.
Exercises
Exercise
Step18: Exercise
Step20: Exercise | Python Code:
from __future__ import print_function, division
%matplotlib inline
import numpy as np
import brfss
import thinkstats2
import thinkplot
Explanation: Examples and Exercises from Think Stats, 2nd Edition
http://thinkstats2.com
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
def RMSE(estimates, actual):
Computes the root mean squared error of a sequence of estimates.
estimate: sequence of numbers
actual: actual value
returns: float RMSE
e2 = [(estimate-actual)**2 for estimate in estimates]
mse = np.mean(e2)
return np.sqrt(mse)
Explanation: The estimation game
Root mean squared error is one of several ways to summarize the average error of an estimation process.
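In symbols (matching the RMSE function defined in this notebook), $\mathrm{RMSE} = \sqrt{\tfrac{1}{m}\sum_{i=1}^{m}(\hat{x}_i - x)^2}$, where $\hat{x}_1, \dots, \hat{x}_m$ are the estimates and $x$ is the actual value.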
End of explanation
import random
def Estimate1(n=7, iters=1000):
Evaluates RMSE of sample mean and median as estimators.
n: sample size
iters: number of iterations
mu = 0
sigma = 1
means = []
medians = []
for _ in range(iters):
xs = [random.gauss(mu, sigma) for _ in range(n)]
xbar = np.mean(xs)
median = np.median(xs)
means.append(xbar)
medians.append(median)
print('Experiment 1')
print('rmse xbar', RMSE(means, mu))
print('rmse median', RMSE(medians, mu))
Estimate1()
Explanation: The following function simulates experiments where we try to estimate the mean of a population based on a sample with size n=7. We run iters=1000 experiments and collect the mean and median of each sample.
End of explanation
def MeanError(estimates, actual):
Computes the mean error of a sequence of estimates.
estimate: sequence of numbers
actual: actual value
returns: float mean error
errors = [estimate-actual for estimate in estimates]
return np.mean(errors)
Explanation: Using $\bar{x}$ to estimate the mean works a little better than using the median; in the long run, it minimizes RMSE. But using the median is more robust in the presence of outliers or large errors.
Estimating variance
The obvious way to estimate the variance of a population is to compute the variance of the sample, $S^2$, but that turns out to be a biased estimator; that is, in the long run, the average error doesn't converge to 0.
The following function computes the mean error for a collection of estimates.
End of explanation
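As a sanity check on the bias: for a sample of size n, $E[S^2] = \frac{n-1}{n}\sigma^2$, so with n=7 and $\sigma^2=1$ the mean error of the biased estimator should converge to about -1/7.
n, sigma2 = 7, 1.0
(n - 1) / n * sigma2 - sigma2   # expected bias of S^2, about -0.143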
def Estimate2(n=7, iters=1000):
mu = 0
sigma = 1
estimates1 = []
estimates2 = []
for _ in range(iters):
xs = [random.gauss(mu, sigma) for i in range(n)]
biased = np.var(xs)
unbiased = np.var(xs, ddof=1)
estimates1.append(biased)
estimates2.append(unbiased)
print('mean error biased', MeanError(estimates1, sigma**2))
print('mean error unbiased', MeanError(estimates2, sigma**2))
Estimate2()
Explanation: The following function simulates experiments where we try to estimate the variance of a population based on a sample with size n=7. We run iters=1000 experiments and compute two estimates for each sample, $S^2$ and $S_{n-1}^2$.
End of explanation
def SimulateSample(mu=90, sigma=7.5, n=9, iters=1000):
xbars = []
for j in range(iters):
xs = np.random.normal(mu, sigma, n)
xbar = np.mean(xs)
xbars.append(xbar)
return xbars
xbars = SimulateSample()
Explanation: The mean error for $S^2$ is non-zero, which suggests that it is biased. The mean error for $S_{n-1}^2$ is close to zero, and gets even smaller if we increase iters.
The sampling distribution
The following function simulates experiments where we estimate the mean of a population using $\bar{x}$, and returns a list of estimates, one from each experiment.
End of explanation
cdf = thinkstats2.Cdf(xbars)
thinkplot.Cdf(cdf)
thinkplot.Config(xlabel='Sample mean',
ylabel='CDF')
Explanation: Here's the "sampling distribution of the mean" which shows how much we should expect $\bar{x}$ to vary from one experiment to the next.
End of explanation
np.mean(xbars)
Explanation: The mean of the sample means is close to the actual value of $\mu$.
End of explanation
ci = cdf.Percentile(5), cdf.Percentile(95)
ci
Explanation: An interval that contains 90% of the values in the sampling distribution is called a 90% confidence interval.
End of explanation
stderr = RMSE(xbars, 90)
stderr
Explanation: And the RMSE of the sample means is called the standard error.
End of explanation
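For comparison, the analytic standard error of the mean is $\sigma/\sqrt{n}$, which should be close to the simulated value above.
7.5 / np.sqrt(9)   # analytic standard error for sigma=7.5, n=9, i.e. 2.5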
def Estimate3(n=7, iters=1000):
lam = 2
means = []
medians = []
for _ in range(iters):
xs = np.random.exponential(1.0/lam, n)
L = 1 / np.mean(xs)
Lm = np.log(2) / thinkstats2.Median(xs)
means.append(L)
medians.append(Lm)
print('rmse L', RMSE(means, lam))
print('rmse Lm', RMSE(medians, lam))
print('mean error L', MeanError(means, lam))
print('mean error Lm', MeanError(medians, lam))
Estimate3()
Explanation: Confidence intervals and standard errors quantify the variability in the estimate due to random sampling.
Estimating rates
The following function simulates experiments where we try to estimate the mean of an exponential distribution using the mean and median of a sample.
End of explanation
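Note that the median of an exponential distribution with rate lam is log(2)/lam, which is why Lm = log(2)/median is used as an estimator. A quick numerical check with lam=2:
np.log(2) / np.median(np.random.exponential(1.0/2, 100000))   # should be close to 2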
# Solution
def Estimate4(n=7, iters=100000):
Mean error for xbar and median as estimators of population mean.
n: sample size
iters: number of iterations
mu = 0
sigma = 1
means = []
medians = []
for _ in range(iters):
xs = [random.gauss(mu, sigma) for i in range(n)]
xbar = np.mean(xs)
median = np.median(xs)
means.append(xbar)
medians.append(median)
print('Experiment 1')
print('mean error xbar', MeanError(means, mu))
print('mean error median', MeanError(medians, mu))
Estimate4()
# Solution
def Estimate5(n=7, iters=100000):
RMSE for biased and unbiased estimators of population variance.
n: sample size
iters: number of iterations
mu = 0
sigma = 1
estimates1 = []
estimates2 = []
for _ in range(iters):
xs = [random.gauss(mu, sigma) for i in range(n)]
biased = np.var(xs)
unbiased = np.var(xs, ddof=1)
estimates1.append(biased)
estimates2.append(unbiased)
print('Experiment 2')
print('RMSE biased', RMSE(estimates1, sigma**2))
print('RMSE unbiased', RMSE(estimates2, sigma**2))
Estimate5()
# Solution
# My conclusions:
# 1) xbar and median yield lower mean error as m increases, so neither
# one is obviously biased, as far as we can tell from the experiment.
# 2) The biased estimator of variance yields lower RMSE than the unbiased
# estimator, by about 10%. And the difference holds up as m increases.
Explanation: The RMSE is smaller for the sample mean than for the sample median.
But neither estimator is unbiased.
Exercises
Exercise: In this chapter we used $\bar{x}$ and median to estimate µ, and found that $\bar{x}$ yields lower MSE. Also, we used $S^2$ and $S_{n-1}^2$ to estimate σ, and found that $S^2$ is biased and $S_{n-1}^2$ unbiased.
Run similar experiments to see if $\bar{x}$ and median are biased estimates of µ. Also check whether $S^2$ or $S_{n-1}^2$ yields a lower MSE.
End of explanation
# Solution
def SimulateSample(lam=2, n=10, iters=1000):
Sampling distribution of L as an estimator of exponential parameter.
lam: parameter of an exponential distribution
n: sample size
iters: number of iterations
def VertLine(x, y=1):
thinkplot.Plot([x, x], [0, y], color='0.8', linewidth=3)
estimates = []
for _ in range(iters):
xs = np.random.exponential(1.0/lam, n)
lamhat = 1.0 / np.mean(xs)
estimates.append(lamhat)
stderr = RMSE(estimates, lam)
print('standard error', stderr)
cdf = thinkstats2.Cdf(estimates)
ci = cdf.Percentile(5), cdf.Percentile(95)
print('confidence interval', ci)
VertLine(ci[0])
VertLine(ci[1])
# plot the CDF
thinkplot.Cdf(cdf)
thinkplot.Config(xlabel='estimate',
ylabel='CDF',
title='Sampling distribution')
return stderr
SimulateSample()
# Solution
# My conclusions:
# 1) With sample size 10:
# standard error 0.762510819389
# confidence interval (1.2674054394352277, 3.5377353792673705)
# 2) As sample size increases, standard error and the width of
# the CI decrease:
# 10 0.90 (1.3, 3.9)
# 100 0.21 (1.7, 2.4)
# 1000 0.06 (1.9, 2.1)
# All three confidence intervals contain the actual value, 2.
Explanation: Exercise: Suppose you draw a sample with size n=10 from an exponential distribution with λ=2. Simulate this experiment 1000 times and plot the sampling distribution of the estimate L. Compute the standard error of the estimate and the 90% confidence interval.
Repeat the experiment with a few different values of n and make a plot of standard error versus n.
End of explanation
def SimulateGame(lam):
Simulates a game and returns the estimated goal-scoring rate.
lam: actual goal scoring rate in goals per game
goals = 0
t = 0
while True:
time_between_goals = random.expovariate(lam)
t += time_between_goals
if t > 1:
break
goals += 1
# estimated goal-scoring rate is the actual number of goals scored
L = goals
return L
# Solution
# The following function simulates many games, then uses the
# number of goals scored as an estimate of the true long-term
# goal-scoring rate.
def Estimate6(lam=2, m=1000000):
estimates = []
for i in range(m):
L = SimulateGame(lam)
estimates.append(L)
print('Experiment 4')
print('rmse L', RMSE(estimates, lam))
print('mean error L', MeanError(estimates, lam))
pmf = thinkstats2.Pmf(estimates)
thinkplot.Hist(pmf)
thinkplot.Config(xlabel='Goals scored', ylabel='PMF')
Estimate6()
# Solution
# My conclusions:
# 1) RMSE for this way of estimating lambda is 1.4
# 2) The mean error is small and decreases with m, so this estimator
# appears to be unbiased.
# One note: If the time between goals is exponential, the distribution
# of goals scored in a game is Poisson.
# See https://en.wikipedia.org/wiki/Poisson_distribution
Explanation: Exercise: In games like hockey and soccer, the time between goals is roughly exponential. So you could estimate a team’s goal-scoring rate by observing the number of goals they score in a game. This estimation process is a little different from sampling the time between goals, so let’s see how it works.
Write a function that takes a goal-scoring rate, lam, in goals per game, and simulates a game by generating the time between goals until the total time exceeds 1 game, then returns the number of goals scored.
Write another function that simulates many games, stores the estimates of lam, then computes their mean error and RMSE.
Is this way of making an estimate biased?
End of explanation |
6,955 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working with EPA CEMS data
CEMS or <a href='https://www.epa.gov/emc/emc-continuous-emission-monitoring-systems'>Continuous Emissions Monitoring Systems</a> are used to track power plants' compliance with EPA emission standards.
Step1: The following settings and variables may be changed to impact the processing of this notebook
Step2: <a id='access'></a>
Accessing CEMS data
The CEMS dataset is enormous! It contains emissions data at an hourly resolution between 1995 and 2020, meaning that the full dataset is close to a billion rows and takes up 100 GB of space. That's a lot to process when you may only need a fraction of it for analysis. The following steps will help you access and work with CEMS efficiently.
<a id='1subset'></a>
1. Select a subset of raw data using Dask
Dask is a python package that parallelizes pandas dataframes so that you can access larger-than-memory data. With Dask, you can select the subset of CEMS data that you'd like to analyse before loading the data into a dataframe. While in Dask, you can interact with the data as if it were in a pandas dataframe.
We'll start with a single year and offer an option to integrate a range of dates below.
Step3: With a Dask dataframe you can learn things about the data such as column names and datatypes without having to load all of the data. If you take a look at the length of the Dask dataframe, you'll understand why we're not in pandas yet.
Step4: Take a look at the CEMS fields available below. The records are organized by the date and time of their measurement as well as the EIA plant id (plant_id_eia) and EPA unit id (unitid) they correspond to. The EPA unit id is EPA's most granular level of emissions tracing; it represents a singular "smokestack" where emissions data are monitored and recorded. Depending on the unit in question, this unit may reflect a single generator (in the case of a combustion gas turbine where emissions are directly associated with generation), or a group of inter-operating boilers and generators (such as with steam powered generators where one or more boilers, possibly with differing fuel types, provide mechanical power to turbines). As a result, the EPA unit id does not map directly onto EIA's generator id; rather, it serves as its own unique grouping.
The EPA is in the process of publishing a "crosswalk" spreadsheet that links EPA units to EIA's more granular boiler and generator ids. This information is forthcoming and will be integrated into pudl as soon as possible.
For more information on the individual fields, refer to the metadata.
Step5: Now that you know what's available, pick the columns you'd like to work with, and aggregate rows as necessary. Note that the state and measurement_code columns are categorical datatypes, meaning that they will overwhelm your computer memory if included in the list of columns you'd like to groupby. In pandas, this is solved by including the statement observed=True in the groupby, but with Dask we'll solve this by changing the datatype to string. As mentioned previously, the dataset is very large. If the Dask dataframe you construct is too similar to the original dataset -- imagine the example below without the groupby -- the client will be unable to load it in pandas and the kernel will attempt to run indefinitely (or until it crashes your computer). The dataset below should load in a couple of minutes when transfered to pandas.
Step6: <a id='2transfer'></a>
2. Transfer desired data to pandas
Now that you've selected the data you want to work with, we'll transfer it to pandas so that all rows are accessible. It'll take a moment to run because there are so many rows. If it takes longer than a couple of minutes, check to see that your Dask dataset is altered enough from its original form. Remember, it's in Dask because the data is bigger than your computer's memory! You'll have to do some grouping or paring down in order to access its entirety in pandas.
Step7: To get data from multiple years, run the following code block. It's commented out because it takes a while to run and isn't required to run the full notebook.
Step8: <a id='3pickle'></a>
3. Store custom CEMS dataframe as a pickle
Because CEMS takes a while to run, it may be in your best interest to save your finalized CEMS dataframes as pickle files. This will prevent you from having to run the entire Dask-to-pandas process over and over if you restart the notebook and you want to access your carve-out of CEMS. Rather, it will save a local copy of your CEMS dataframe that it can access in a matter of seconds. Uncomment the following to set up a pickle file
Step9: <a id='manipulating'></a>
Manipulating & Visualizing CEMS data
Now that we have access to CEMS in pandas, let's see what we can do!
<a id='emap'></a>
1. Simple Choropleth
Visualizing CEMS data
Let's start by mapping which states have the highest CO2 emissions from power plants in 2018. States with darker colors will indicate higher CO2 emissions. To do this, we'll need to merge a geodataframe of the US with the desired emissions data from each state.
Prep US geospatial data
Step10: Prep CEMS data
Step11: Combine with Geo-data and Plot
Step12: <a id='pcem'></a>
2. Proportional Coordinates Map
Integrate CEMS emission quantities with EIA plant location data
In order to integrate CEMS with other datasets in pudl, you'll need to start by integrating CEMS with a dataset that also has a field for plant_id_eia. If you want to integrate with FERC later on, you'll also want a dataset that has a field for plant_id_pudl. Integrating CEMS with EIA860 data will provide coordinates for each plant.
Prep CEMS data
Step13: Prep EIA data
Step14: Combine EIA and CEMS data
Step15: Overlay Coordinates on Base Map
Step16: <a id='glc'></a>
3. State-to-State Gross Load and Emissions Comparison
Compare state load and emissions profiles
Not all states use locally sourced electricity, however, looking at load profiles of plants in a given state can provide a glimpse of who is responsible for the greatest fossil peak load. This allocation can also be done by utility, but requires a table that maps plant or unit ownership percentage by utility.
Prep CEMS data (you'll have to re-load from Dask for this new data arrangement)
Step17: Select state subset
Step18: Plot load and emissions comparison
Step19: Plot CO2 to gross load comparison | Python Code:
%load_ext autoreload
%autoreload 2
# Standard libraries
import logging
import sys
import os
import pathlib
# 3rd party libraries
import geopandas as gpd
import dask.dataframe as dd
from dask.distributed import Client
import matplotlib.pyplot as plt
import matplotlib as mpl
import numpy as np
import pandas as pd
import seaborn as sns
import sqlalchemy as sa
# Local libraries
import pudl
# Enable viewing of logging outputs
logger=logging.getLogger()
logger.setLevel(logging.INFO)
handler = logging.StreamHandler(stream=sys.stdout)
formatter = logging.Formatter('%(message)s')
handler.setFormatter(formatter)
logger.handlers = [handler]
# Establish connection to pudl database
pudl_settings = pudl.workspace.setup.get_defaults()
pudl_engine = sa.create_engine(pudl_settings['pudl_db'])
pudl_out = pudl.output.pudltabl.PudlTabl(pudl_engine, freq='AS') #annual frequency
Explanation: Working with EPA CEMS data
CEMS or <a href='https://www.epa.gov/emc/emc-continuous-emission-monitoring-systems'>Continuous Emissions Monitoring Systems</a> are used to track power plants' compliance with EPA emission standards. Included in the data are hourly measurements of gross load, SO2, CO2, and NOx emissions associated with a given point source. The EPA's <a href='https://www.epa.gov/airmarkets'>Clean Air Markets Division</a> has collected CEMS data stretching back to 1995 and publicized it in their <a href='https://ampd.epa.gov/ampd/'>data portal</a>. Combining the CEMS data with geospatial, EIA and FERC data can enable greater and more specific analysis of utilities and their generation facilities. This notebook provides examples of working with the CEMS data in pudl.
*NOTE: This Notebook presupposes access to the parquet files where the full CEMS data are stored.
Notebook Contents:
<a href='#setup'>Setup</a>
<a href='#access'>Accessing CEMS data</a>
<a href='#1subset'>1. Select a subset of raw data using Dask</a>
<a href='#2transfer'>2. Transfer desired data to pandas</a>
<a href='#3pickle'>3. Store custom CEMS dataframe as a pickle</a>
<a href='#manipulating'>Manipulating & Visualizing CEMS data</a>
<a href='#emap'>1. Simple Choropleth</a>
<a href='#pcem'>2. Proportional Coordinates Map</a>
<a href='#glc'>3. State-to-State Gross Load Comparison</a>
<a id='setup'></a>
Setup
The following kernels enable interaction with the CEMS dataset through pudl.
End of explanation
# Display settings
sns.set()
%matplotlib inline
mpl.rcParams['figure.dpi'] = 150
pd.options.display.max_columns = 100
pd.options.display.max_rows = 5
# CEMS dates
CEMS_year = 2018
CEMS_year_range = range(2010, 2012)
# State selection
state_subset = ['CO', 'TX', 'WY', 'MN', 'OH', 'PA', 'WV', 'FL', 'GE', 'CA']
Explanation: The following settings and variables may be changed to impact the processing of this notebook
End of explanation
# Locate the data for the given year/s on your hard drive.
epacems_path = (pudl_settings['parquet_dir'] + f'/epacems/year={CEMS_year}')
# Create a Dask object for preliminary data interaction
cems_dd = dd.read_parquet(epacems_path)
Explanation: <a id='access'></a>
Accessing CEMS data
The CEMS dataset is enormous! It contains emissions data at an hourly resolution between 1995 and 2020, meaning that the full dataset is close to a billion rows and takes up 100 GB of space. That's a lot to process when you may only need a fraction of it for analysis. The following steps will help you access and work with CEMS efficiently.
<a id='1subset'></a>
1. Select a subset of raw data using Dask
Dask is a python package that parallelizes pandas dataframes so that you can access larger-than-memory data. With Dask, you can select the subset of CEMS data that you'd like to analyse before loading the data into a dataframe. While in Dask, you can interact with the data as if it were in a pandas dataframe.
We'll start with a single year and offer an option to integrate a range of dates below.
End of explanation
len(cems_dd) # This shows how many rows there are for one year!!
Explanation: With a Dask dataframe you can learn things about the data such as column names and datatypes without having to load all of the data. If you take a look at the length of the Dask dataframe, you'll understand why we're not in pandas yet.
End of explanation
cems_dd.columns.tolist()
Explanation: Take a look at the CEMS fields available below. The records are organized by the date and time of their measurement as well as the EIA plant id (plant_id_eia) and EPA unit id (unitid) they correspond to. The EPA unit id is EPA's most granular level of emissions tracing; it represents a singular "smokestack" where emissions data are monitored and recorded. Depending on the unit in question, this unit may reflect a single generator (in the case of a combustion gas turbine where emissions are directly associated with generation), or a group of inter-operating boilers and generators (such as with steam powered generators where one or more boilers, possibly with differing fuel types, provide mechanical power to turbines). As a result, the EPA unit id does not map directly onto EIA's generator id; rather, it serves as its own unique grouping.
The EPA is in the process of publishing a "crosswalk" spreadsheet that links EPA units to EIA's more granular boiler and generator ids. This information is forthcoming and will be integrated into pudl as soon as possible.
For more information on the individual fields, refer to the metadata.
End of explanation
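Optionally, you can preview a handful of rows without loading the full dataset; Dask's .head() only computes a small piece of the data.
# Optional: peek at a few records from the first partition
cems_dd[['plant_id_eia', 'unitid', 'operating_datetime_utc', 'gross_load_mw']].head()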
# A list of the columns you'd like to include in your analysis
my_cols = [
'state',
'plant_id_eia',
'unitid',
'so2_mass_lbs',
'nox_mass_lbs',
'co2_mass_tons'
]
# Select emissions data are grouped by state, plant_id and unit_id
# Remember to change the datatype for 'state' from category to string
my_cems_dd = (
dd.read_parquet(epacems_path, columns=my_cols)
.assign(state=lambda x: x['state'].astype('string'))
.groupby(['plant_id_eia', 'unitid', 'state'])[
['so2_mass_lbs', 'nox_mass_lbs', 'co2_mass_tons']]
.sum()
)
Explanation: Now that you know what's available, pick the columns you'd like to work with, and aggregate rows as necessary. Note that the state and measurement_code columns are categorical datatypes, meaning that they will overwhelm your computer memory if included in the list of columns you'd like to groupby. In pandas, this is solved by including the statement observed=True in the groupby, but with Dask we'll solve this by changing the datatype to string. As mentioned previously, the dataset is very large. If the Dask dataframe you construct is too similar to the original dataset -- imagine the example below without the groupby -- the client will be unable to load it in pandas and the kernel will attempt to run indefinitely (or until it crashes your computer). The dataset below should load in a couple of minutes when transfered to pandas.
End of explanation
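For reference, the pandas-only pattern mentioned above keeps the categorical dtype and passes observed=True so that only observed category combinations are materialized; a commented sketch (df here stands for any pandas DataFrame of the raw records):
# df.groupby(['plant_id_eia', 'unitid', 'state'], observed=True)[
#     ['so2_mass_lbs', 'nox_mass_lbs', 'co2_mass_tons']].sum()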
# Create a pandas dataframe out of your Dask dataframe and add a column to
# indicate the year the data are coming from.
client = Client()
my_cems_df = (
client.compute(my_cems_dd)
.result()
.assign(year=CEMS_year)
).reset_index()
my_cems_df
Explanation: <a id='2transfer'></a>
2. Transfer desired data to pandas
Now that you've selected the data you want to work with, we'll transfer it to pandas so that all rows are accessible. It'll take a moment to run because there are so many rows. If it takes longer than a couple of minutes, check to see that your Dask dataset is altered enough from its original form. Remember, it's in Dask because the data is bigger than your computer's memory! You'll have to do some grouping or paring down in order to access its entirety in pandas.
End of explanation
# years = CEMS_years
# multi_year_cems_df = pd.DataFrame()
# for yr in years:
# epacems_path = (pudl_settings['parquet_dir'] + f'/epacems/year={yr}')
# cems_dd = (
# dd.read_parquet(epacems_path, columns=my_cols)
# .assign(state=lambda x: x['state'].astype('string'))
# .groupby(['plant_id_eia', 'unitid', 'state'])[
# ['so2_mass_lbs', 'nox_mass_lbs', 'co2_mass_tons']]
# .sum())
# cems_df = (
# client.compute(cems_dd)
# .result()
# .assign(year=yr))
# multi_year_cems_df = pd.concat([multi_year_cems_df, cems_df])
# multi_year_cems_df
Explanation: To get data from multiple years, run the following code block. It's commented out because it takes a while to run and isn't required to run the full notebook.
End of explanation
# Savings CEMS as a pickle file
#path = os.getcwd()
#my_cems_df.to_pickle(path + '/MY_CEMS_DF.pkl')
# Accessing CEMS as a pickle file
#my_cems_df = pd.read_pickle(path + '/MY_CEMS_DF.pkl')
#my_cems_df
Explanation: <a id='3pickle'></a>
3. Store custom CEMS dataframe as a pickle
Because CEMS takes a while to run, it may be in your best interest to save your finalized CEMS dataframes as pickle files. This will prevent you from having to run the entire Dask-to-pandas process over and over if you restart the notebook and you want to access your carve-out of CEMS. Rather, it will save a local copy of your CEMS dataframe that it can access in a matter of seconds. Uncomment the following to set up a pickle file
End of explanation
# Use pre-existing pudl shapefile for state outlines
us_map_df = (
pudl.analysis.service_territory.get_census2010_gdf(pudl_settings, 'state')
.rename({'STUSPS10': 'state'}, axis=1)
.to_crs("EPSG:3395") # Change the projection
)
Explanation: <a id='manipulating'></a>
Manipulating & Visualizing CEMS data
Now that we have access to CEMS in pandas, let's see what we can do!
<a id='emap'></a>
1. Simple Choropleth
Visualizing CEMS data
Let's start by mapping which states have the highest CO2 emissions from power plants in 2018. States with darker colors will indicate higher CO2 emissions. To do this, we'll need to merge a geodataframe of the US with the desired emissions data from each state.
Prep US geospatial data:
End of explanation
# Convert lbs to tons for so2 and nox and remove old columns
# Aggregate CEMS emissions data to the state level
cems_map_df = (
my_cems_df.assign(
so2_mass_tons=lambda x: x.so2_mass_lbs * 0.0005,
nox_mass_tons=lambda x: x.nox_mass_lbs * 0.0005
).drop(columns=['so2_mass_lbs', 'nox_mass_lbs', 'plant_id_eia'], axis=1)
.groupby(['state', 'year']).sum(min_count=1)
.reset_index()
)
Explanation: Prep CEMS data:
End of explanation
# Combine CEMS and map dataframes
states_cems_gdf = pd.merge(us_map_df, cems_map_df, on='state', how='outer')
# Add plots for the US, HI, and AK
# The column on which to base the choroplath
choro_col = 'co2_mass_tons'
us_fig, us_ax = plt.subplots(figsize=(15, 10))
#ak_hi_fig, (ak_ax, hi_ax) = plt.subplots(ncols=2)
states_cems_gdf.plot(column=choro_col, cmap='Reds', linewidth=0.8, edgecolor='black', ax=us_ax)
#states_cems_gdf.plot(column=choro_col, cmap='Reds', linewidth=0.8, edgecolor='black', ax=ak_ax)
#states_cems_gdf.plot(column=choro_col, cmap='Reds', linewidth=0.8, edgecolor='black', ax=hi_ax)
us_ax.set_xlim(-1.45e7, -0.7e7) # Used to position US in center of the graph
us_ax.set_ylim(0.25e7, 0.65e7) # Used to position US in center of the graph
us_ax.set_title('CO2 Emissions from Power Plants in 2018 (Megatons)', fontdict={'fontsize': '18'})
us_ax.axis('off') # Remove lat and long tick marks
#ak_ax.set_xlim(1.9e7, 6.7e6) #(-2e7, -1.4e7)
#ak_ax.set_ylim(0.6e7, 1.2e7)
#hi_ax.set_xlim(-1.71e7, -1.8e7)
#hi_ax.set_ylim(2e6, 2.6e6)
# Add a legend
vmax = states_cems_gdf[f'{choro_col}'].max() / 1000000 # (convert from tons to megatons)
sm = plt.cm.ScalarMappable(cmap='Reds', norm=plt.Normalize(vmin=0, vmax=vmax))
sm._A = []
cbar = us_fig.colorbar(sm, orientation="horizontal", pad=0, aspect = 50, label='CO2 Emissions (MT)')
from matplotlib import axes
axes.Axes.mouseover
plt.show()
Explanation: Combine with Geo-data and Plot:
End of explanation
# Aggregate CEMS data to the plant level, adjust units for visualization purposes
cems_df = (
my_cems_df
.copy()
.assign(
co2_mass_mt=lambda df: df.co2_mass_tons / 10000 # measure in 10K tons
).drop(columns=['co2_mass_tons'], axis=1)
.groupby(['plant_id_eia', 'state', 'year'])
.sum(min_count=1)
.reset_index()
)
Explanation: <a id='pcem'></a>
2. Proportional Coordinates Map
Integrate CEMS emission quantities with EIA plant location data
In order to integrate CEMS with other datasets in pudl, you'll need to start by integrating CEMS with a dataset that also has a field for plant_id_eia. If you want to integrate with FERC later on, you'll also want a dataset that has a field for plant_id_pudl. Integrating CEMS with EIA860 data will provide coordinates for each plant.
Prep CEMS data:
End of explanation
# Grab EIA 860 plant data that matched the year selected for CEMS
plants_eia860 = (
pudl_out.plants_eia860()
.assign(year=lambda df: df.report_date.dt.year)
.query("year==@CEMS_year")
)
Explanation: Prep EIA data:
End of explanation
# Combine CEMS and EIA on plant_id_eia, state, and year
eia860_cems_df = (
pd.merge(plants_eia860, cems_df, on=['plant_id_eia', 'state', 'year'], how='inner')
)
Explanation: Combine EIA and CEMS data:
End of explanation
# Make lat and long data cols into plotable points in geopandas
# Make CRS compatile with base map
eia860_cems_gdf = (
gpd.GeoDataFrame(
eia860_cems_df, geometry=gpd.points_from_xy(
eia860_cems_df.longitude, eia860_cems_df.latitude))
.set_crs(epsg=4326, inplace=True) # necessary step before to_crs(epsg=3395)
.to_crs(epsg=3395)
)
# Make a base map
us_fig, us_ax = plt.subplots(figsize=(15, 10))
base = us_map_df.plot(color='white', edgecolor='black', ax=us_ax)
us_ax.set_xlim(-1.45e7, -0.7e7) # Used to position US in center of the graph
us_ax.set_ylim(0.25e7, 0.65e7) # Used to position US in center of the graph
us_ax.set_title('CO2 Emissions from Power Plants in 2018 (10K tons)', fontdict={'fontsize': '20'})
us_ax.axis('off') # Remove lat and long tick marks
# Plot the coordinates on top of the base map
eia860_cems_df['alpha_co2'] = eia860_cems_df['co2_mass_mt']
eia860_cems_gdf.plot(ax=base, marker='o', color='red', markersize=eia860_cems_df['co2_mass_mt'], alpha=0.1)
plt.show()
Explanation: Overlay Coordinates on Base Map:
End of explanation
year = 2018
# A list of the columns you'd like to include in your analysis
my_cols = [
'state',
'plant_id_eia',
'unitid',
'operating_datetime_utc',
'co2_mass_tons',
'gross_load_mw',
]
my_cems_dd = (
dd.read_parquet(epacems_path, columns=my_cols)
.assign(
state=lambda x: x['state'].astype('string'),
month=lambda x: x['operating_datetime_utc'].dt.month)
.groupby(['state', 'month'])['gross_load_mw', 'co2_mass_tons'].sum()
.reset_index()
)
# Create a pandas dataframe out of your Dask dataframe and add a column to
# indicate the year the data are coming from.
client = Client()
my_cems_gl = (
client.compute(my_cems_dd)
.result()
)
Explanation: <a id='glc'></a>
3. State-to-State Gross Load and Emissions Comparison
Compare state load and emissions profiles
Not all states use locally sourced electricity, however, looking at load profiles of plants in a given state can provide a glimpse of who is responsible for the greatest fossil peak load. This allocation can also be done by utility, but requires a table that maps plant or unit ownership percentage by utility.
Prep CEMS data (you'll have to re-load from Dask for this new data arrangement):
End of explanation
gl_piv = my_cems_gl.pivot(columns='state', index=['month'], values=['gross_load_mw'])
gl_piv_subset = gl_piv.iloc[:, gl_piv.columns.get_level_values(1).isin(state_subset)].copy()
co2_piv = my_cems_gl.pivot(columns='state', index=['month'], values=['co2_mass_tons'])
co2_piv_subset = co2_piv.iloc[:, co2_piv.columns.get_level_values(1).isin(state_subset)].copy()
Explanation: Select state subset:
End of explanation
fig, (gl_ax, co2_ax) = plt.subplots(1,2)
gl_piv_subset.plot(
figsize=(15,8),
xticks=gl_piv_subset.index,
ylabel='Gross Load MW',
xlabel='Months',
ax=gl_ax
)
co2_piv_subset.plot(
figsize=(15,8),
xticks=gl_piv_subset.index,
ylabel='Gross Load MW',
xlabel='Months',
ax=co2_ax
)
gl_ax.set_title('CEMS State-Level Gross Load 2018',fontsize= 18, pad=20)
co2_ax.set_title('CEMS State-Level CO2 Emissions 2018', fontsize=18, pad=20)
plt.show()
Explanation: Plot load and emissions comparison:
End of explanation
# Add field for comparison
my_cems_gl['co2/load'] = my_cems_gl.co2_mass_tons / my_cems_gl.gross_load_mw
# Pivot table around comparison field
gl_co2_piv = my_cems_gl.pivot(columns='state', index=['month'], values=['co2/load'])
gl_co2_piv_subset = gl_co2_piv.iloc[:, gl_co2_piv.columns.get_level_values(1).isin(state_subset)].copy()
# Create a figure to plot the different values
fig, gl_co2_ax = plt.subplots()
gl_co2_piv_subset.plot(
figsize=(15,8),
xticks=gl_co2_piv_subset.index,
xlabel='Months',
ylabel='CO2 Emissions (Tons)',
ax=gl_co2_ax
)
gl_co2_ax.set_title('State CO2 Emissions / Gross Load in 2018',fontsize= 18, pad=20)
plt.show()
Explanation: Plot CO2 to gross load comparison:
End of explanation |
6,956 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Eye traces in pipeline
Step1: Eye tracking traces are in EyeTracking and in its part table EyeTracking.Frame. EyeTracking is a grouping table that refers to one scan and one eye video, whereas EyeTracking.Frame contains the single frames. The table BehaviorSync is used to synchronize the behavior measurements (Treadmill, Eyetracking) to the scan frames.
Step2: In this notebook, we'll fetch the pupil radius and position, and plot it along with a calcium trace, all on the behavior clock. The relative times of the eye, treadmill, and trace are precise, but the clock itself starts at some arbitrary offset.
Step3: Eye
Step4: Calcium Traces
Step5: Join the trace and segmentation tables to get more info about this trace and the mask used to generate it
Step6: ...and fetch the trace and slice number for the single trace from the joined tables using fetch1
Step7: Fetch the imaging frame times on the behavior clock and the number of slices per scan
Step8: In a single scan with 3 slices, imaging frames are collected from slice 1, 2, 3, 1, 2, 3...
So there are nslices * length(tr) frame times
Step9: Visual stimulus | Python Code:
%pylab inline
pylab.rcParams['figure.figsize'] = (6, 6)
%matplotlib inline
import datajoint as dj
from pipeline import vis, preprocess
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
Explanation: Eye traces in pipeline
End of explanation
(dj.ERD.from_sequence([preprocess.EyeTracking,preprocess.BehaviorSync]) - 1 + \
dj.ERD(preprocess.EyeTracking).add_parts()).draw()
Explanation: Eye tracking traces are in EyeTracking and in its part table EyeTracking.Frame. EyeTracking is a grouping table that refers to one scan and one eye video, whereas EyeTracking.Frame contains the single frames. The table BehaviorSync is used to synchronize the behavior measurements (Treadmill, Eyetracking) to the scan frames.
End of explanation
# choose an arbitrary scan
key = dict(animal_id=8804, session=1, scan_idx=3)
Explanation: In this notebook, we'll fetch the pupil radius and position, and plot it along with a calcium trace, all on the behavior clock. The relative times of the eye, treadmill, and trace are precise, but the clock itself starts at some arbitrary offset.
End of explanation
# Fetch the pupil radius trace and the centers
r, center = (preprocess.EyeTracking.Frame() & key).fetch['major_r', 'center']
# undetected frames are nans in the radius trace
detectedFrames = ~np.isnan(r)
# convert positions to a 2d numpy array
xy = np.full((len(r),2),np.nan)
xy[detectedFrames, :] = np.vstack(center[detectedFrames])
# get pupil tracking times on the behavior clock
et = (preprocess.Eye() & key).fetch1['eye_time']
# plot xy position and radius
plt.plot(et,r)
plt.plot(et,xy)
Explanation: Eye
End of explanation
# choose an arbitrary calcium trace
trace_key = dict(key, extract_method=2, trace_id=256)
# ...and fetch the trace
tr = (preprocess.ComputeTraces.Trace() & trace_key).fetch1['trace']
Explanation: Calcium Traces
End of explanation
tr_info = preprocess.ComputeTraces.Trace() * preprocess.ExtractRaw.GalvoROI() & trace_key
tr_info
Explanation: Join the trace and segmentation tables to get more info about this trace and the mask used to generate it
End of explanation
tr, slice_no = (preprocess.ComputeTraces.Trace() * preprocess.ExtractRaw.GalvoROI()
& trace_key).fetch1['trace','slice']
Explanation: ...and fetch the trace and slice number for the single trace from the joined tables using fetch1
End of explanation
ft, nslices = (preprocess.BehaviorSync() * preprocess.Prepare.Galvo()
& key).fetch1['frame_times','nslices']
Explanation: Fetch the imaging frame times on the behavior clock and the number of slices per scan
End of explanation
assert nslices*len(tr)==len(ft),\
'You should never see this message unless the scan was aborted'
# get the frame times for this slice
ft_slice = ft[slice_no-1::nslices] # slices are numbered 1 based
# Plot the trace to the pupil plot with some scaling
plt.plot(et,r)
plt.plot(et,xy)
plt.plot(ft_slice,tr/tr.min()*20-60)
Explanation: In a single scan with 3 slices, imaging frames are collected from slice 1, 2, 3, 1, 2, 3...
So there are nslices * length(tr) frame times
End of explanation
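A tiny illustration of the slice indexing used above: with 3 slices, the frame times for slice 2 are every third entry starting at index 1.
demo = np.arange(9)
demo[2 - 1::3]   # -> array([1, 4, 7])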
# fetch the frame times on the visual stimulus clock
vt = (preprocess.Sync() & key).fetch1['frame_times'].squeeze()
vt_slice = vt[slice_no-1::nslices]
# get the trials and for this scan and their flip times
flip_times = (vis.Trial() * preprocess.Sync() & key
& 'trial_idx > first_trial and trial_idx < last_trial').fetch['flip_times']
plt.plot(et,r)
plt.plot(et,xy)
plt.plot(ft_slice,tr/tr.min()*20-60)
for flip_time in flip_times:
# Get the imaging frame where the vis stim trial started
start_idx = np.where(vt_slice > flip_time[0,0])[0][0]
# Use that frame to index into the times on the behavior clock
plt.plot(ft_slice[start_idx],150,'ok', mfc='orange', ms=4)
plt.legend(['Pupil Radius (pxls)', 'Pupil X (pxls)','Pupil Y (pxls)',
'dF/F (scaled)', 'Vis Trial Start'], bbox_to_anchor=(1.4,1),
bbox_transform=plt.gca().transAxes)
plt.xlabel('time on behavior clock (s)')
Explanation: Visual stimulus
End of explanation |
6,957 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
As of this writing, the L1 h(t) data has a very large dynamic range at low frequencies. We need to remove this before doing anything.
Step1: Below are three options for bandpasses.
The first one isolates the caduceus lines, and also notches the 60 Hz and 180 Hz which are very close.
The second one is similar, but removes the first harmonic to give a clearer sound from the ones that are more audible.
The third just isolates the violin modes. | Python Code:
data_dt=1.e20*data.astype(float64).detrend()
filt=sig.firwin(int(8*srate)-1,9./nyquist,pass_zero=False,window='hann')
data_hp=fir_filter(data_dt,filt)
Explanation: As of this writing, the L1 h(t) data has a very large dynamic range at low frequencies. We need to remove this before doing anything.
End of explanation
freqs=[52,59.8,60.2,64,112,124,171,179.5,180.5,183,230,242]
#freqs=[110,124,171,179.5,180.5,183,230,242]
#freqs=[480,530,980,1040,1460,1530]
srate=data.sample_rate.value
nyquist=srate/2.
filt=sig.firwin(32*srate,[ff/nyquist for ff in freqs],pass_zero=False,window='hann')
data_bp=fir_filter(data_hp,filt)
from scipy.io import wavfile
output=data_bp.value[int(16*srate):int(46*srate)]
output=output/max(abs(output))
wavfile.write('fir_bp_caduceus.wav',rate=srate,data=output)
p1=data_bp.asd(16,12).plot()
Explanation: Below are three options for bandpasses.
The first one isolates the caduceus lines, and also notches the 60 Hz and 180 Hz which are very close.
The second one is similar, but removes the first harmonic to give a clearer sound from the ones that are more audible.
The third just isolates the violin modes.
End of explanation |
6,958 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Naive Bayes
Let's create the NaiveBayes class to represent our algorithm.
The init method is the constructor; it initializes our model's variables.
The model we build basically consists of word frequencies, which in our case represent the possible values of each feature and label.
defaultdict is used to initialize our dictionary with default values, 0 (int) in this case, for keys that we try to access but have not been added yet.
Step1: Model
Since the model is basically represented by word frequencies, we need to enumerate the possible values of the features. After that, we do the counting.
countFrequencies: counts how often each feature value and each label appears across the whole training set, independently.
countCondFrequencies: counts how often each feature value appears for each possible label.
Step2: Training
Let's train our model. What should go into the train function?
Step3: Classification
With the model in hand, we can now classify our dataset. Below are some hints for handling the data better in our function.
Step4: Preprocessing
Below is a helper function for reading our dataset, followed by a split of the data into training and test sets.
from collections import defaultdict
from functools import reduce
import math
class NaiveBayes:
def __init__(self):
self.freqFeature = defaultdict(int)
self.freqLabel = defaultdict(int)
# condFreqFeature[label][feature]
self.condFreqFeature = defaultdict(lambda: defaultdict(int))
Explanation: Naive Bayes
Let's create the NaiveBayes class to represent our algorithm.
The init method is the constructor; it initializes our model's variables.
The model we build basically consists of word frequencies, which in our case represent the possible values of each feature and label.
defaultdict is used to initialize our dictionary with default values, 0 (int) in this case, for keys that we try to access but have not been added yet.
End of explanation
def countFrequencies(self):
    pass  # TODO: count how often each feature value and each label appears in the training set
def countCondFrequencies(self):
    pass  # TODO: count how often each feature value appears for each possible label
Explanation: Model
Since the model is basically represented by word frequencies, we need to enumerate the possible values of the features. After that, we do the counting.
countFrequencies: counts how often each feature value and each label appears across the whole training set, independently.
countCondFrequencies: counts how often each feature value appears for each possible label.
End of explanation
def train(self, dataSet_x, dataSet_y):
    pass  # TODO: build the frequency tables from the training data
Explanation: Training
Let's train our model. What should go into the train function?
End of explanation
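The cell below is a sketch of one possible answer (not the only valid one); it assumes train() stores the training data on the instance, has the two counting methods walk over it, and attaches the implementations to the NaiveBayes class defined above.
def _countFrequencies(self):
    # count each label and each feature value over the whole training set
    for features, label in zip(self.dataSet_x, self.dataSet_y):
        self.freqLabel[label] += 1
        for f in features:
            self.freqFeature[f] += 1

def _countCondFrequencies(self):
    # count each feature value conditioned on its label
    for features, label in zip(self.dataSet_x, self.dataSet_y):
        for f in features:
            self.condFreqFeature[label][f] += 1

def _train(self, dataSet_x, dataSet_y):
    self.dataSet_x, self.dataSet_y = dataSet_x, dataSet_y
    self.countFrequencies()
    self.countCondFrequencies()

NaiveBayes.countFrequencies = _countFrequencies
NaiveBayes.countCondFrequencies = _countCondFrequencies
NaiveBayes.train = _train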
def predict(self, dataSet_x):
# Laplace correction
# P( f | l) = (freq( f | l ) + laplace*) / ( freq(l)** + qnt(distinct(f))*** )
#
# * -> Laplace smoothing: add 1
# ** -> frequency of the label value
# *** -> number of distinct feature values
#
# Because of the possibility of floating-point underflow, it is better to compute
# P(x1|l)*P(x2|l) ... -> exp(Log(P(x1|l)) + Log(P(x2|l))) ...
    pass  # TODO: return, for each sample, a dict mapping each label to its score
Explanation: Classification
With the model in hand, we can now classify our dataset. Below are some hints for handling the data better in our function.
End of explanation
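The cell below is a sketch of one possible predict() implementation following the hints above (Laplace smoothing and summing logs instead of multiplying probabilities); the helper name _predict and the dict-of-scores return format are choices made here, matching how main() consumes the results.
def _predict(self, dataSet_x):
    nSamples = sum(self.freqLabel.values())
    nDistinctFeatures = len(self.freqFeature)
    results = []
    for features in dataSet_x:
        scores = {}
        for label, labelCount in self.freqLabel.items():
            logProb = math.log(float(labelCount) / nSamples)  # log prior
            for f in features:
                num = self.condFreqFeature[label][f] + 1.0      # Laplace smoothing
                den = labelCount + nDistinctFeatures
                logProb += math.log(num / den)
            scores[label] = logProb
        results.append(scores)
    return results

NaiveBayes.predict = _predict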
import random
# Car dataset
# Attribute Information:
#
# Class Values:
#
# unacc, acc, good, vgood
#
# Attributes:
#
# buying: vhigh, high, med, low.
# maint: vhigh, high, med, low.
# doors: 2, 3, 4, 5more.
# persons: 2, 4, more.
# lug_boot: small, med, big.
# safety: low, med, high.
#Retur dataset
def readFile(path):
rawDataset = open(path, 'r')
suffix = ['_buy', '_maint', '_doors', '_pers', '_lug', '_safety', '_class']
dataset = []
rawDataset.seek(0)
for line in rawDataset:
l = line.split(',')
l[-1] = l[-1].replace("\n", "")
newTuple = map(lambda (x,y): x+y, zip( l , suffix))
dataset.append( newTuple )
return dataset
def main():
preparedDataset = readFile('carData.txt')
random.shuffle(preparedDataset)
dataset = []
#Features
dataset.append([])
#Label
dataset.append([])
for t in preparedDataset:
dataset[0].append(t[:-1])
dataset[1].append(t[-1])
dataSet_x = dataset[0]
dataSet_y = dataset[1]
nTuples = len(dataSet_x)
nToTrain = int(nTuples * 0.7)
dataSet_x_train = dataSet_x[:nToTrain]
dataSet_y_train = dataSet_y[:nToTrain]
dataSet_x_test = dataSet_x[nToTrain:]
dataSet_y_test = dataSet_y[nToTrain:]
naive = NaiveBayes()
naive.train(dataSet_x_train, dataSet_y_train)
accuracy = 0.0
results = naive.predict(dataSet_x_test)
for index, r in enumerate(results):
yPredicted = max(r, key=r.get)
y = dataSet_y_test[index]
if(y == yPredicted):
accuracy += 1.0
print accuracy / len(dataSet_y_test)
main()
Explanation: Preprocessing
Below is a helper function for reading our dataset, followed by a split of the data into training and test sets.
End of explanation |
6,959 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
sat-search
This notebook is a tutorial on how to use sat-search to search STAC APIs, save the results, and download assets.
Sat-search is built using sat-stac which provides the core Python classes used to represent STAC catalogs: Collection, Item, and Items. It is recommended to review the tutorial on STAC Classes for more information on how to use these objects returned from searching.
Step1: Complex query
Now we combine all these filters and add in a sort filter to order the results (which will be shown further below).
Step2: Intersects query
The intersects query works the same way, except a geometry is provided.
Step3: Alternate search syntax
This all works fine, except the syntax for creating queries is a bit verbose, so sat-search allows an alternate syntax using simple strings of the property and equality symbols.
A typical query is shown below for eo:cloud_cover, along with the alternate versions that use the alternate syntax.
Step4: Fetching results
The examples above use the Search::found() function, but this only returns the total number of hits by performing a fast query with limit=0 (returns no items). To fetch the actual Items use the Search::items() function. This returns a sat-stac Items object.
Step5: Limit
The search.items() function does take 1 argument, limit. This is the total number of items that will be returned. Behind the scenes sat-search may make multiple queries to the API, up until either the limit, or the total number of hits, whichever is greater.
Step6: Returned Items
The returned Items object has several useful functions and is covered in detail in the sat-stac STAC classes tutorial. The Items object contains all the returned Items (Items._items), along with any Collections referenced by those Items (Items._collections), and the search parameters used (Items._search). Below are some examples.
from satsearch import Search
search = Search(bbox=[-110, 39.5, -105, 40.5])
print('bbox search: %s items' % search.found())
search = Search(datetime='2018-02-12T00:00:00Z/2018-03-18T12:31:12Z')
print('time search: %s items' % search.found())
search = Search(query={'eo:cloud_cover': {'lt': 10}})
print('cloud_cover search: %s items' % search.found())
Explanation: sat-search
This notebook is a tutorial on how to use sat-search to search STAC APIs, save the results, and download assets.
Sat-search is built using sat-stac which provides the core Python classes used to represent STAC catalogs: Collection, Item, and Items. It is recommended to review the tutorial on STAC Classes for more information on how to use these objects returned from searching.
Only the search module in sat-search is used as a library, and it contains a single class, Search. The parser module is used for creating a CLI parser, and main contains the main function used in the CLI.
API endpoint: Sat-search requires an endpoint to be passed in or defined by the STAC_API_URL environment variable. This tutorial uses https://earth-search.aws.element84.com/v0 but any STAC endpoint can be used.
Initializing a Search object
The first step in performing a search is to create a Search object with all the desired query parameters. Query parameters need to follow the querying as provided in the STAC specification, although an abbreviated form is also supported (see below).
Another place to look at the STAC query format is in the sat-api docs, specifically see the section on full-features querying which is what sat-search uses to POST queries to an API. Any field that can be provided in the searchBody can be provided as a keyword parameter when creating the search. These fields include:
bbox: bounding box of the form [minlon, minlat, maxlon, maxlat]
intersects: A GeoJSON geometry
time: A single date-time, a period string, or a range (separated by /)
sort: A dictionary of fields to sort along with ascending/descending
query: Dictionary of properties to query on, supports eq, lt, gt, lte, gte
Examples of queries are in the sat-api docs, but an example JSON query that would be POSTed might be:
{
"bbox": [
-110,
39.5,
-105,
40.5
],
"time": "2018-02-12T00:00:00Z/2018-03-18T12:31:12Z",
"query": {
"eo:cloud_cover": {
"lt": 10
}
},
"sort": [
{
"field": "eo:cloud_cover",
"direction": "desc"
}
]
}
Simple queries
In sat-search, each of the fields in the query is simply provided as a keyword argument
End of explanation
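For reference, the JSON body above maps directly onto keyword arguments of Search; a commented sketch of the equivalent call (using the datetime keyword as in the code examples elsewhere in this notebook):
# Search(bbox=[-110, 39.5, -105, 40.5],
#        datetime='2018-02-12T00:00:00Z/2018-03-18T12:31:12Z',
#        query={'eo:cloud_cover': {'lt': 10}},
#        sort=[{'field': 'eo:cloud_cover', 'direction': 'desc'}])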
search = Search(bbox=[-110, 39.5, -105, 40.5],
datetime='2018-02-12T00:00:00Z/2018-03-18T12:31:12Z',
query={'eo:cloud_cover': {'lt': 10}},
collections=['sentinel-s2-l2a'])
print('%s items' % search.found())
Explanation: Complex query
Now we combine all these filters and add in a sort filter to order the results (which will be shown further below).
End of explanation
geom = {
"type": "Polygon",
"coordinates": [
[
[
-66.3958740234375,
43.305193797650546
],
[
-64.390869140625,
43.305193797650546
],
[
-64.390869140625,
44.22945656830167
],
[
-66.3958740234375,
44.22945656830167
],
[
-66.3958740234375,
43.305193797650546
]
]
]
}
search = Search(intersects=geom)
print('intersects search: %s items' % search.found())
Explanation: Intersects query
The intersects query works the same way, except a geometry is provided.
End of explanation
query = {
"eo:cloud_cover": {
"lt": 10
}
}
search = Search(query=query)
print('%s items found' % search.found())
search = Search.search(query=["eo:cloud_cover<10"])
print('%s items found' % search.found())
Explanation: Alternate search syntax
This all works fine, except the syntax for creating queries is a bit verbose, so sat-search allows an alternate syntax using simple strings of the property and equality symbols.
A typical query is shown below for eo:cloud_cover, along with the alternate versions that use the alternate syntax.
End of explanation
search = Search(bbox=[-110, 39.5, -105, 40.5],
datetime='2018-02-01/2018-02-04',
property=["eo:cloud_cover<5"])
print('%s items' % search.found())
items = search.items()
print('%s items' % len(items))
print('%s collections' % len(items._collections))
print(items._collections)
for item in items:
print(item)
Explanation: Fetching results
The examples above use the Search::found() function, but this only returns the total number of hits by performing a fast query with limit=0 (returns no items). To fetch the actual Items use the Search::items() function. This returns a sat-stac Items object.
End of explanation
items = search.items(limit=2)
print(items.summary())
Explanation: Limit
The search.items() function does take 1 argument, limit. This is the total number of items that will be returned. Behind the scenes sat-search may make multiple queries to the API, up until either the limit, or the total number of hits, whichever is greater.
End of explanation
print(items.summary())
from satstac import ItemCollection
search = Search.search(bbox=[-110, 39.5, -105, 40.5],
datetime='2018-02-01/2018-02-10',
query=["eo:cloud_cover<25"],
collections=['sentinel-s2-l2a'])
items = search.items()
print(items.summary())
items.save('test.json')
items2 = ItemCollection.open('test.json')
print(items2.summary(['date', 'id', 'eo:cloud_cover']))
# download a specific asset from all items and put in a directory by date in 'downloads'
filenames = items.download('metadata', filename_template='downloads/${date}/${id}')
print(filenames)
Explanation: Returned Items
The returned Items object has several useful functions and is covered in detail in the sat-stac STAC classes tutorial. The Items object contains all the returned Items (Items._items), along with any Collections referenced by those Items (Items._collections), and the search parameters used (Items._search). Below are some examples.
End of explanation |
6,960 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Updated Voce-Chaboche Model Fitting Example 1
An example of fitting the updated Voce-Chaboche (UVC) model to a set of test data is provided.
Documentation for all the functions used in this example can be found by looking at the docstrings for any of the functions.
Step1: Run optimization with single test data set
This is a simple example for fitting the UVC model to a set of test data.
We only use one backstress in this model; additional backstresses can be specified by adding pairs of 0.1's to the list of x_0. E.g., three backstresses would be
x_0 = [200000., 355., 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]
Likewise, two backstresses can be specified by removing two pairs of 0.1's from the list below.
The overall steps to calibrate the model parameters are as follows:
1. Load the set of test data
2. Choose a starting point
3. Set the location to save the analysis history
4. Run the analysis
Step 4. from above is slightly more complicated for the updated model than it is for the original model.
This step is divided into two parts:
a) Run the original model with the same number of backstresses to obtain an initial set of parameters (without the updated parameters)
b) Run the updated model from the point found in 4a.
If you already have an initial set of parameters you can skip substep 4a by setting find_initial_point=False.
Step2: The minimization problem in 4b above is solved in multiple steps because it is typically difficult to find a minimum to the UVC problem with a strict tolerance.
Each step successively relaxes the tolerance on the norm of the gradient of the Lagrangian.
The first step is 30 iterations at 1e-8, then 30 iterations at 1e-2, then a maximum of 50 iterations at 5e-2.
Confidence in the solution point can be gained using the visualization tools shown in the Visualization_Example_1 Notebook.
In the case shown above, the analysis exits during the third step.
Plot results
After the analysis is finished we can plot the test data versus the fitted model.
If we set output_dir='./output/' instead of output_dir='' the uvc_data_plotter function will save pdf's of all the plots instead of displaying them below. | Python Code:
import RESSPyLab as rpl
import numpy as np
Explanation: Updated Voce-Chaboche Model Fitting Example 1
An example of fitting the updated Voce-Chaboche (UVC) model to a set of test data is provided.
Documentation for all the functions used in this example can be found by looking at the docstrings for any of the functions.
End of explanation
# Specify the true stress-strain to be used in the calibration
# Only one test used, see the VC_Calibration_Example_1 example for multiple tests
data_files = ['example_1.csv']
# Set initial parameters for the UVC model with one backstresses
# [E, \sigma_{y0}, Q_\infty, b, D_\infty, a, C_1, \gamma_1]
x_0 = np.array([200000., 355., 0.1, 0.1, 0.1, 0.1, 0.1, 0.1])
# Log files for the parameters at each step, and values of the objective function at each step
# The logs are only kept for step 4b, the result of 4a will be the first entry of the log file
x_log = './output/x_log_upd.txt'
fxn_log = './output/fxn_log_upd.txt'
# (Optional) Set the number of iterations to run in step 4b
# The recommended number of iterations is its = [300, 1000, 3000]
# For the purpose of this example less iterations are run
its = [30, 30, 40]
# Run the calibration
# Set filter_data=True if you have NOT already filtered/reduced the data
# We recommend that you filter/reduce the data beforehand
x_sol = rpl.uvc_param_opt(x_0, data_files, x_log, fxn_log, find_initial_point=True, filter_data=False,
step_iterations=its)
Explanation: Run optimization with single test data set
This is a simple example for fitting the UVC model to a set of test data.
We only use one backstress in this model; additional backstresses can be specified by adding pairs of 0.1's to the list of x_0. E.g., three backstresses would be
x_0 = [200000., 355., 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1]
Likewise, two backstresses can be specified by removing two pairs of 0.1's from the list below.
The overall steps to calibrate the model parameters are as follows:
1. Load the set of test data
2. Choose a starting point
3. Set the location to save the analysis history
4. Run the analysis
Step 4. from above is slightly more complicated for the updated model than it is for the original model.
This step is divided into two parts:
a) Run the original model with the same number of backstresses to obtain an initial set of parameters (without the updated parameters)
b) Run the updated model from the point found in 4a.
If you already have an initial set of parameters you can skip substep 4a by setting find_initial_point=False.
End of explanation
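For reference, a two-backstress starting point would simply append one more (C, gamma) pair of 0.1's to the one-backstress vector used above, e.g.:
# x_0_two = np.array([200000., 355., 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1])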
data = rpl.load_data_set(data_files)
rpl.uvc_data_plotter(x_sol[0], data, output_dir='', file_name='uvc_example_plots', plot_label='Fitted-UVC')
Explanation: The minimization problem in 4b above is solved in multiple steps because it is typically difficult to find a minimum to the UVC problem with a strict tolerance.
Each step successively relaxes the tolerance on the norm of the gradient of the Lagrangian.
The first step is 30 iterations at 1e-8, then 30 iterations at 1e-2, then a maximum of 50 iterations at 5e-2.
Confidence in the solution point can be gained using the visualization tools shown in the Visualization_Example_1 Notebook.
In the case shown above, the analysis exits during the third step.
Plot results
After the analysis is finished we can plot the test data versus the fitted model.
If we set output_dir='./output/' instead of output_dir='' the uvc_data_plotter function will save pdf's of all the plots instead of displaying them below.
End of explanation |
6,961 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Rossiter-McLaughlin Effect
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
Step1: As always, let's do imports and initialize a logger and a new bundle.
Step2: Now we'll try to exaggerate the effect by spinning up the secondary component.
Step3: Adding Datasets
We'll add radial velocity, line profile, and mesh datasets. We'll compute the rvs through the whole orbit, but the mesh and line profiles right around the eclipse - just at the times that we want to plot for an animation.
Step4: We'll add two identical datasets, one where we compute only dynamical RVs (won't include Rossiter-McLaughlin) and another where we compute flux-weighted RVs (will include Rossiter-McLaughlin).
Step5: For the mesh, we'll save some time by only exposing plane-of-sky coordinates and the 'rvs' column.
Step6: And for the line-profile, we'll expose the line-profile for both of our stars separately, instead of for the entire system.
Step7: Running Compute
Step8: Plotting
Throughout all of these plots, we'll color the components green and magenta (to differentiate them from the red and blue of the RV mapping).
Step9: First let's compare between the dynamical and numerical RVs.
The dynamical RVs show the velocity of the center of each star along the line of sight. But the numerical method integrates over the visible surface elements, giving us what we'd observe if deriving RVs from observed spectra of the binary. Here we do see the Rossiter McLaughlin effect. You'll also notice that RVs are not available for the secondary star when its completely occulted (they're nans in the array).
Step10: Now let's make a plot of the line profiles and mesh during ingress to visualize what's happening.
Let's go through these options (see the plot API docs for more details)
Step11: Here we can see that star in front (green) is eclipsing more of the blue-shifted part of the back star (magenta), distorting the line profile, causing the apparent center of the line profile to be shifted to the right/red, and therefore the radial velocities to be articially increased as compared to the dynamical RVs.
Now let's animate the same figure in time. We'll use the same arguments as the static plot above, with the following exceptions | Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
Explanation: Rossiter-McLaughlin Effect
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
import phoebe
import numpy as np
b = phoebe.default_binary()
Explanation: As always, let's do imports and initialize a logger and a new bundle.
End of explanation
b.set_value('q', value=0.7)
b.set_value('incl', component='binary', value=87)
b.set_value('requiv', component='primary', value=0.8)
b.set_value('teff', component='secondary', value=6500)
b.set_value('syncpar', component='secondary', value=1.5)
Explanation: Now we'll try to exaggerate the effect by spinning up the secondary component.
End of explanation
anim_times = phoebe.arange(0.44, 0.56, 0.002)
Explanation: Adding Datasets
We'll add radial velocity, line profile, and mesh datasets. We'll compute the rvs through the whole orbit, but the mesh and line profiles right around the eclipse - just at the times that we want to plot for an animation.
End of explanation
b.add_dataset('rv',
times=phoebe.linspace(0,1,201),
dataset='dynamicalrvs')
b.set_value_all('rv_method', dataset='dynamicalrvs', value='dynamical')
b.add_dataset('rv',
times=phoebe.linspace(0,1,201),
dataset='numericalrvs')
b.set_value_all('rv_method', dataset='numericalrvs', value='flux-weighted')
Explanation: We'll add two identical datasets, one where we compute only dynamical RVs (won't include Rossiter-McLaughlin) and another where we compute flux-weighted RVs (will include Rossiter-McLaughlin).
End of explanation
b.add_dataset('mesh',
compute_times=anim_times,
coordinates='uvw',
columns=['rvs@numericalrvs'],
dataset='mesh01')
Explanation: For the mesh, we'll save some time by only exposing plane-of-sky coordinates and the 'rvs' column.
End of explanation
b.add_dataset('lp',
compute_times=anim_times,
component=['primary', 'secondary'],
wavelengths=phoebe.linspace(549.5,550.5,101),
profile_rest=550)
Explanation: And for the line-profile, we'll expose the line-profile for both of our stars separately, instead of for the entire system.
End of explanation
b.run_compute(irrad_method='none')
Explanation: Running Compute
End of explanation
colors = {'primary': 'green', 'secondary': 'magenta'}
Explanation: Plotting
Throughout all of these plots, we'll color the components green and magenta (to differentiate them from the red and blue of the RV mapping).
End of explanation
afig, mplfig = b.plot(kind='rv',
c=colors,
ls={'numericalrvs': 'solid', 'dynamicalrvs': 'dotted'},
show=True)
Explanation: First let's compare between the dynamical and numerical RVs.
The dynamical RVs show the velocity of the center of each star along the line of sight. But the numerical method integrates over the visible surface elements, giving us what we'd observe if deriving RVs from observed spectra of the binary. Here we do see the Rossiter-McLaughlin effect. You'll also notice that RVs are not available for the secondary star when it's completely occulted (they're nans in the array).
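As a quick check of those nans (a sketch, assuming the usual twig-style access via get_value with qualifier/dataset/component/context; adjust to your PHOEBE version if needed):
rvs_sec = b.get_value(qualifier='rvs', dataset='numericalrvs', component='secondary', context='model')
print(np.sum(np.isnan(rvs_sec)), 'of', len(rvs_sec), 'RV samples are nan for the secondary')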
End of explanation
afig, mplfig= b.plot(time=0.46,
fc='rvs@numericalrvs', ec='face',
c=colors,
ls={'numericalrvs': 'solid', 'dynamicalrvs': 'dotted'},
highlight={'numericalrvs': True, 'dynamicalrvs': False},
axpos={'mesh': 211, 'rv': 223, 'lp': 224},
xlim={'rv': (0.4, 0.6)}, ylim={'rv': (-80, 80)},
tight_layout=True,
show=True)
Explanation: Now let's make a plot of the line profiles and mesh during ingress to visualize what's happening.
Let's go through these options (see the plot API docs for more details):
* time: make the plot at this single time
* fc: (will be ignored by everything but the mesh): set the facecolor to the rvs column. This will automatically apply a red-blue color mapping.
* ec: disable drawing the edges of the triangles in a separate color. We could also set this to 'none', but then we'd be able to "see-through" the triangle edges.
* c: set the colors as defined in our dictionary above. This will apply to the rv, lp, and horizon datasets, but will be ignored by the mesh.
* ls: set the linestyle to differentiate between numerical and dynamical rvs.
* highlight: highlight the current time on the numerical rvs only.
* axpos: define the layout of the axes so the mesh plot takes up the horizontal space it needs.
* xlim: "zoom-in" on the RM effect in the RVs, allow the others to fallback on automatic limits.
* tight_layout: use matplotlib's tight layout to ensure we have enough padding between axes to see the labels.
End of explanation
afig, mplanim = b.plot(times=anim_times,
fc='rvs@numericalrvs', ec='face',
c=colors,
ls={'numericalrvs': 'solid', 'dynamicalrvs': 'dotted'},
highlight={'numericalrvs': True, 'dynamicalrvs': False},
pad_aspect=False,
axpos={'mesh': 211, 'rv': 223, 'lp': 224},
xlim={'rv': (0.4, 0.6)}, ylim={'rv': (-80, 80)},
animate=True,
save='rossiter_mclaughlin.gif',
save_kwargs={'writer': 'imagemagick'})
Explanation: Here we can see that the star in front (green) is eclipsing more of the blue-shifted part of the back star (magenta), distorting the line profile, causing the apparent center of the line profile to be shifted to the right/red, and therefore the radial velocities to be artificially increased as compared to the dynamical RVs.
Now let's animate the same figure in time. We'll use the same arguments as the static plot above, with the following exceptions:
times: pass our array of times that we want the animation to loop over.
pad_aspect: pad_aspect doesn't work with animations, so we'll disable to avoid the warning messages.
animate: self-explanatory.
save: we could use show=True, but that doesn't always play nice with jupyter notebooks
save_kwargs: may need to change these for your setup, to create a gif, passing {'writer': 'imagemagick'} is often useful.
End of explanation |
6,962 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Load Image As Greyscale
Step2: Apply Adaptive Thresholding
Step3: View Image | Python Code:
# Load image
import cv2
import numpy as np
from matplotlib import pyplot as plt
Explanation: Title: Binarize Images
Slug: binarize_image
Summary: How to binarize images using OpenCV in Python.
Date: 2017-09-11 12:00
Category: Machine Learning
Tags: Preprocessing Images
Authors: Chris Albon
Preliminaries
End of explanation
# Load image as greyscale
image_grey = cv2.imread('images/plane_256x256.jpg', cv2.IMREAD_GRAYSCALE)
Explanation: Load Image As Greyscale
End of explanation
# Apply adaptive thresholding
max_output_value = 255     # value assigned to pixels that pass the threshold
neighborhood_size = 99     # size of the pixel neighborhood used to compute each threshold
subtract_from_mean = 10    # constant subtracted from the weighted neighborhood mean
image_binarized = cv2.adaptiveThreshold(image_grey,
max_output_value,
cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
cv2.THRESH_BINARY,
neighborhood_size,
subtract_from_mean)
Explanation: Apply Adaptive Thresholding
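For comparison, a sketch of the same call with a mean-based (rather than Gaussian-weighted) neighborhood threshold, reusing the parameters defined above:
image_binarized_mean = cv2.adaptiveThreshold(image_grey,
max_output_value,
cv2.ADAPTIVE_THRESH_MEAN_C,
cv2.THRESH_BINARY,
neighborhood_size,
subtract_from_mean)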
End of explanation
# Show image
plt.imshow(image_binarized, cmap='gray'), plt.axis("off")
plt.show()
Explanation: View Image
End of explanation |
6,963 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
Step1: First we'll load the text file and convert it into integers for our network to use.
Step3: Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.
The idea here is to make a 2D matrix where the number of rows is equal to the number of batches. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
Step4: I'll write another function to grab batches out of the arrays made by split data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.
Step5: Hyperparameters
Here I'm defining the hyperparameters for the network. The two you probably haven't seen before are lstm_size and num_layers. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability.
Step6: Write out the graph for TensorBoard
Step7: Training
Time for training which is is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.
Step8: Sampling
Now that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters. | Python Code:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
text[:100]
chars[:100]
Explanation: First we'll load the text file and convert it into integers for our network to use.
End of explanation
def split_data(chars, batch_size, num_steps, split_frac=0.9):
"""Split character data into training and validation sets, inputs and targets for each set.
Arguments
---------
chars: character array
batch_size: Size of examples in each of batch
num_steps: Number of sequence steps to keep in the input and pass to the network
split_frac: Fraction of batches to keep in the training set
Returns train_x, train_y, val_x, val_y
"""
slice_size = batch_size * num_steps
n_batches = int(len(chars) / slice_size)
# Drop the last few characters to make only full batches
x = chars[: n_batches*slice_size]
y = chars[1: n_batches*slice_size + 1]
# Split the data into batch_size slices, then stack them into a 2D matrix
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))
# Now x and y are arrays with dimensions batch_size x n_batches*num_steps
# Split into training and validation sets, keep the first split_frac batches for training
split_idx = int(n_batches*split_frac)
train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
return train_x, train_y, val_x, val_y
train_x, train_y, val_x, val_y = split_data(chars, 10, 200)
train_x.shape
train_x[:,:10]
Explanation: Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.
The idea here is to make a 2D matrix where the number of rows is equal to the number of batches. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
End of explanation
def get_batch(arrs, num_steps):
batch_size, slice_size = arrs[0].shape
n_batches = int(slice_size/num_steps)
for b in range(n_batches):
yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
learning_rate=0.001, grad_clip=5, sampling=False):
if sampling == True:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')
y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Build the RNN layers
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
initial_state = cell.zero_state(batch_size, tf.float32)
# Run the data through the RNN layers
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=initial_state)
final_state = state
# Reshape output so it's a bunch of rows, one row for each cell output
seq_output = tf.concat(outputs, axis=1,name='seq_output')
output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')
# Now connect the RNN outputs to a softmax layer and calculate the cost
softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),
name='softmax_w')
softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')
logits = tf.matmul(output, softmax_w) + softmax_b
preds = tf.nn.softmax(logits, name='predictions')
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')
cost = tf.reduce_mean(loss, name='cost')
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
# Export the nodes
export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
'keep_prob', 'cost', 'preds', 'optimizer']
Graph = namedtuple('Graph', export_nodes)
local_dict = locals()
graph = Graph(*[local_dict[each] for each in export_nodes])
return graph
Explanation: I'll write another function to grab batches out of the arrays made by split data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.
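As a quick, illustrative sanity check of the generator (using the earlier split_data(chars, 10, 200) call, so each yielded batch should be 10 x 200):
xb, yb = next(get_batch([train_x, train_y], 200))
print(xb.shape, yb.shape)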
End of explanation
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 1
learning_rate = 0.001
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network. The two you probably haven't seen before are lstm_size and num_layers. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability.
End of explanation
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
file_writer = tf.summary.FileWriter('./logs/1', sess.graph)
Explanation: Write out the graph for TensorBoard
End of explanation
!mkdir -p checkpoints/anna
epochs = 1
save_every_n = 200
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/anna20.ckpt')
n_batches = int(train_x.shape[1]/num_steps)
iterations = n_batches * epochs
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
iteration = e*n_batches + b
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 0.5,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer],
feed_dict=feed)
loss += batch_loss
end = time.time()
print('Epoch {}/{} '.format(e+1, epochs),
'Iteration {}/{}'.format(iteration, iterations),
'Training loss: {:.4f}'.format(loss/b),
'{:.4f} sec/batch'.format((end-start)))
if (iteration%save_every_n == 0) or (iteration == iterations):
# Check performance, notice dropout has been set to 1
val_loss = []
new_state = sess.run(model.initial_state)
for x, y in get_batch([val_x, val_y], num_steps):
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 1.,
model.initial_state: new_state}
batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)
val_loss.append(batch_loss)
print('Validation loss:', np.mean(val_loss),
'Saving checkpoint!')
saver.save(sess, "checkpoints/anna/i{}_l{}_{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
tf.train.get_checkpoint_state('checkpoints/anna')
Explanation: Training
Time for training, which is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.
End of explanation
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
prime = "Far"
samples = [c for c in prime]
model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
checkpoint = "checkpoints/anna/i3560_l512_1.122.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i200_l512_2.432.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i600_l512_1.750.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i1000_l512_1.484.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation |
6,964 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
<p><div class="lev1 toc-item"><a href="#Overview" data-toc-modified-id="Overview-1"><span class="toc-item-num">1 </span>Overview</a></div><div class="lev2 toc-item"><a href="#pwd---Print-Working-Directory" data-toc-modified-id="pwd---Print-Working-Directory-11"><span class="toc-item-num">1.1 </span>pwd - Print Working Directory</a></div><div class="lev2 toc-item"><a href="#ls---List-files-and-directory-names,-attributes" data-toc-modified-id="ls---List-files-and-directory-names,-attributes-12"><span class="toc-item-num">1.2 </span>ls - List files and directory names, attributes</a></div><div class="lev2 toc-item"><a href="#mkdir---Make-a-new-directory" data-toc-modified-id="mkdir---Make-a-new-directory-13"><span class="toc-item-num">1.3 </span>mkdir - Make a new directory</a></div><div class="lev2 toc-item"><a href="#cd---Change-to-a-particular-directory" data-toc-modified-id="cd---Change-to-a-particular-directory-14"><span class="toc-item-num">1.4 </span>cd - Change to a particular directory</a></div><div class="lev2 toc-item"><a href="#rmdir---Remove-a-directory" data-toc-modified-id="rmdir---Remove-a-directory-15"><span class="toc-item-num">1.5 </span>rmdir - Remove a directory</a></div><div class="lev2 toc-item"><a href="#cp---Copy-Files" data-toc-modified-id="cp---Copy-Files-16"><span class="toc-item-num">1.6 </span>cp - Copy Files</a></div><div class="lev2 toc-item"><a href="#rm---Remove-files" data-toc-modified-id="rm---Remove-files-17"><span class="toc-item-num">1.7 </span>rm - Remove files</a></div><div class="lev2 toc-item"><a href="#mv-
Step1: ls - List files and directory names, attributes
Some commonly used commands are below
Step2: mkdir - Make a new directory
Step3: cd - Change to a particular directory
Step4: rmdir - Remove a directory
If the folder is not empty, it need the "-r" flag.
Example
Step5: cp - Copy Files
Careful with the filenames! Will be overwritten without warning.
Step6: rm - Remove files
Note that this is different to rmdir, which exists to remove a directory
Step7: mv
Step8: CURL - Getting Data from the Command Line
Let's begin by copying a simple tab-separated file.
The format is as below
Step9: In case your system doesn't have jq, you can follow the instructions here.
* https
Step10: Register for the Mashape API Market here
Step11: Note
Step12: grep
Step13: More options for grep
Step14: Redirection (or Downloading)
This is really useful to quickly download a dataset using what is called an API Endpoint.
Let's download the 'Times Square Entertainment Venues' dataset from New York City's Open Data Portal to demonstrate this.
https | Python Code:
!pwd
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Overview" data-toc-modified-id="Overview-1"><span class="toc-item-num">1 </span>Overview</a></div><div class="lev2 toc-item"><a href="#pwd---Print-Working-Directory" data-toc-modified-id="pwd---Print-Working-Directory-11"><span class="toc-item-num">1.1 </span>pwd - Print Working Directory</a></div><div class="lev2 toc-item"><a href="#ls---List-files-and-directory-names,-attributes" data-toc-modified-id="ls---List-files-and-directory-names,-attributes-12"><span class="toc-item-num">1.2 </span>ls - List files and directory names, attributes</a></div><div class="lev2 toc-item"><a href="#mkdir---Make-a-new-directory" data-toc-modified-id="mkdir---Make-a-new-directory-13"><span class="toc-item-num">1.3 </span>mkdir - Make a new directory</a></div><div class="lev2 toc-item"><a href="#cd---Change-to-a-particular-directory" data-toc-modified-id="cd---Change-to-a-particular-directory-14"><span class="toc-item-num">1.4 </span>cd - Change to a particular directory</a></div><div class="lev2 toc-item"><a href="#rmdir---Remove-a-directory" data-toc-modified-id="rmdir---Remove-a-directory-15"><span class="toc-item-num">1.5 </span>rmdir - Remove a directory</a></div><div class="lev2 toc-item"><a href="#cp---Copy-Files" data-toc-modified-id="cp---Copy-Files-16"><span class="toc-item-num">1.6 </span>cp - Copy Files</a></div><div class="lev2 toc-item"><a href="#rm---Remove-files" data-toc-modified-id="rm---Remove-files-17"><span class="toc-item-num">1.7 </span>rm - Remove files</a></div><div class="lev2 toc-item"><a href="#mv-:-Move-a-file" data-toc-modified-id="mv-:-Move-a-file-18"><span class="toc-item-num">1.8 </span>mv : Move a file</a></div><div class="lev2 toc-item"><a href="#CURL---Getting-Data-from-the-Command-Line" data-toc-modified-id="CURL---Getting-Data-from-the-Command-Line-19"><span class="toc-item-num">1.9 </span>CURL - Getting Data from the Command Line</a></div><div class="lev2 toc-item"><a href="#head/tail" data-toc-modified-id="head/tail-110"><span class="toc-item-num">1.10 </span>head/tail</a></div><div class="lev2 toc-item"><a href="#grep:" data-toc-modified-id="grep:-111"><span class="toc-item-num">1.11 </span>grep:</a></div><div class="lev2 toc-item"><a href="#Redirection-(or-Downloading)" data-toc-modified-id="Redirection-(or-Downloading)-112"><span class="toc-item-num">1.12 </span>Redirection (or Downloading)</a></div>
# Overview
This is by no means an exhaustive list, the point is just to give you a feeler for what's possible. If you have used Linux or Mac, or have written code in Ruby, chances are you have used Unix commands already. If you're a Windows user, here are two good resources:
* https://www.howtogeek.com/249966/how-to-install-and-use-the-linux-bash-shell-on-windows-10/
* https://www.howtogeek.com/howto/41382/how-to-use-linux-commands-in-windows-with-cygwin/
Another great resource in general on the basics of unix commands:
* http://matt.might.net/articles/basic-unix/
## pwd - Print Working Directory
End of explanation
ls
!ls
ls -al
ls -Al
Explanation: ls - List files and directory names, attributes
Some commonly used commands are below:
* -A: list all of the contents of the queried directory, even hidden files.
* -l: detailed format, display additional info for all files and directories.
* -R: recursively list the contents of any subdirectories.
* -t: sort files by the time of the last modification.
* -S: sort files by size.
* -r: reverse any sort order.
* -h: when used in conjunction with -l, gives a more human-readable output.
You can also combine the commands/flags.
For example:
* -al
* -Al
Read more on this topic here: https://www.mkssoftware.com/docs/man1/ls.1.asp
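A couple of the flags above combined (illustrative; the output depends on your directory): -lhS gives a detailed, human-readable listing sorted by size, and -ltr lists by modification time in reverse order.
!ls -lhS
!ls -ltr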
End of explanation
!mkdir NewFolder
ls
Explanation: mkdir - Make a new directory
End of explanation
cd NewFolder
!pwd
cd ..
!pwd
ls
Explanation: cd - Change to a particular directory
End of explanation
rmdir NewFolder
ls
Explanation: rmdir - Remove a directory
If the folder is not empty, rmdir will fail; use rm with the "-r" flag instead (as done later in this notebook).
Example:
rm -r NewFolder
End of explanation
ls
# Copy in the same directory
!cp 01.Unix_and_Shell_Command_Basics.ipynb Notebook01.ipynb
ls
rm Notebook01.ipynb
ls
# Copy to another directory
!mkdir TempFolder
!cp 01.Unix_and_Shell_Command_Basics.ipynb TempFolder/File01.ipynb
!ls
cd TempFolder
ls
Explanation: cp - Copy Files
Careful with the filenames! Existing files will be overwritten without warning.
End of explanation
pwd
!rm File01.ipynb
!ls
!pwd
!ls -al
!ls
cd ..
ls
Explanation: rm - Remove files
Note that this is different to rmdir, which exists to remove a directory
End of explanation
pwd
ls
rm -r TempFolder
ls
cp 01.Unix_and_Shell_Command_Basics.ipynb NewFile01.ipynb
ls
mkdir TempFolder02
ls
mv NewFile01.ipynb TempFolder02
ls
cd TempFolder02
ls
cd ..
rm -r TempFolder02
Explanation: mv : Move a file
This is close to the 'cut' function available for files on Windows.
When you use the 'mv' command, a file is copied to a new location and removed from its original location.
End of explanation
!curl -L 'https://dl.dropboxusercontent.com/s/j2yh7nvlli1nsa5/gdp.txt'
!curl -L 'https://dl.dropboxusercontent.com/s/eqyhkf3tpgre0jb/foo.txt'
!curl -s "http://freegeoip.net/json/" | jq .
Explanation: CURL - Getting Data from the Command Line
Let's begin by copying a simple tab-separated file.
The format is as below:
!curl -OptionalFlag 'http://url'
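For the flags used in the cells above: -L follows redirects and -s silences the progress meter, while -o picks the output filename. An illustrative combination (placeholder URL):
!curl -sL 'http://example.com/data.txt' -o data.txt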
End of explanation
!curl -s "http://api.open-notify.org/iss-now.json"
!curl -s "http://api.open-notify.org/astros.json"
Explanation: In case your system doesn't have jq, you can follow the instructions here.
* https://stedolan.github.io/jq/download/
End of explanation
!curl -X POST --include 'https://community-sentiment.p.mashape.com/text/' \
-H 'X-Mashape-Key: YFWRiIyfNemshsFin8iTJy0XFUjNp1rXoY7jsnoPlVphvWnKY6' \
-H 'Content-Type: application/x-www-form-urlencoded' \
-H 'Accept: application/json' \
-d 'txt=My team lost badly! I am sad :('
Explanation: Register for the Mashape API Market here: https://market.mashape.com
End of explanation
pwd
ls
cd Data
ls
!head -n 3 sample.txt
!tail -n 4 sample.txt
!cat sample.txt
# Selecting specific fields
!cut -f2,3 sample.txt
!sort sample.txt
!sort -k 2 sample.txt
!cat nyt.txt
!wc nyt.txt
# Where 21 is the number of lines, 245 is the number of words, and 1515 is the number of characters.
!wc -w nyt.txt
Explanation: Note: This is a free API, so I have exposed my API key in the code. In practice, if you are ever sharing code, please take adequate precautions, and never expose your private key.
head/tail
End of explanation
pwd
ls
!cat nyt.txt
# Count the number of matches
!grep -c 'Kennedy' nyt.txt
!grep -o 'Kennedy' nyt.txt
Explanation: grep:
Grep is a pattern matching utility built into Unix and its flavors. The typical format is:
grep [option] [pattern] [file/s]
End of explanation
!curl -s 'http://freegeoip.net/json/' > location.json
!jq . location.json
!curl -s 'http://freegeoip.net/json/' | jq .
Explanation: More options for grep:
* -c Print only a count of matched lines.
* -l List only filenames
* -i Ignore lowercase and uppercase distinctions
* -o prints only the matching part of the line
* -n Print matching line with its line number
* -v Negate matches; print lines that do not match the regex
* -r Recursively Search subdirectories listed
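A couple of these combined on the earlier nyt.txt file (illustrative): the first is a case-insensitive search that prints matching lines with line numbers, the second counts the lines that do not mention 'Kennedy'.
!grep -in 'kennedy' nyt.txt
!grep -vc 'Kennedy' nyt.txt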
End of explanation
!curl "https://data.cityofnewyork.us/resource/2pc8-n4xe.json" > venues.json
!cat venues.json
!grep 'Ripley' venues.json
!grep -i 'Theater' venues.json
# Multiple flags, and multiple conditions
!grep -v -e 'Theater' -e 'Theatre' venues.json
Explanation: Redirection (or Downloading)
This is really useful to quickly download a dataset using what is called an API Endpoint.
Let's download the 'Times Square Entertainment Venues' dataset from New York City's Open Data Portal to demonstrate this.
https://data.cityofnewyork.us/Business/Times-Square-Entertainment-Venues/jxdc-hnze
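Once the response is saved, jq can be used on the file as well, for example to count the records and inspect the first one (illustrative):
!jq 'length' venues.json
!jq '.[0]' venues.json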
End of explanation |
6,965 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cortical Signal Suppression (CSS) for removal of cortical signals
This script shows an example of how to use CSS
Step1: Load sample subject data
Step2: Find patches (labels) to activate
Step5: Simulate one cortical dipole (40 Hz) and one subcortical (239 Hz)
Step6: Process with CSS and plot PSD of EEG data before and after processing | Python Code:
# Author: John G Samuelsson <[email protected]>
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
from mne.simulation import simulate_sparse_stc, simulate_evoked
Explanation: Cortical Signal Suppression (CSS) for removal of cortical signals
This script shows an example of how to use CSS
:footcite:`Samuelsson2019`. CSS suppresses the cortical contribution
to the signal subspace in EEG data using MEG data, facilitating
detection of subcortical signals. We will illustrate how it works by
simulating one cortical and one subcortical oscillation at different
frequencies; 40 Hz and 239 Hz for cortical and subcortical activity,
respectively, then process it with CSS and look at the power spectral
density of the raw and processed data.
End of explanation
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
fwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
ave_fname = data_path + '/MEG/sample/sample_audvis-no-filter-ave.fif'
cov_fname = data_path + '/MEG/sample/sample_audvis-cov.fif'
trans_fname = data_path + '/MEG/sample/sample_audvis_raw-trans.fif'
bem_fname = subjects_dir + '/sample' + '/bem' + '/sample-5120-bem-sol.fif'
raw = mne.io.read_raw_fif(data_path + '/MEG/sample/sample_audvis_raw.fif')
fwd = mne.read_forward_solution(fwd_fname)
fwd = mne.convert_forward_solution(fwd, force_fixed=True, surf_ori=True)
fwd = mne.pick_types_forward(fwd, meg=True, eeg=True, exclude=raw.info['bads'])
cov = mne.read_cov(cov_fname)
Explanation: Load sample subject data
End of explanation
all_labels = mne.read_labels_from_annot(subject='sample',
subjects_dir=subjects_dir)
labels = []
for select_label in ['parahippocampal-lh', 'postcentral-rh']:
labels.append([lab for lab in all_labels if lab.name in select_label][0])
hiplab, postcenlab = labels
Explanation: Find patches (labels) to activate
End of explanation
def cortical_waveform(times):
"""Create a cortical waveform."""
return 10e-9 * np.cos(times * 2 * np.pi * 40)
def subcortical_waveform(times):
"""Create a subcortical waveform."""
return 10e-9 * np.cos(times * 2 * np.pi * 239)
times = np.linspace(0, 0.5, int(0.5 * raw.info['sfreq']))
stc = simulate_sparse_stc(fwd['src'], n_dipoles=2, times=times,
location='center', subjects_dir=subjects_dir,
labels=[postcenlab, hiplab],
data_fun=cortical_waveform)
stc.data[np.where(np.isin(stc.vertices[0], hiplab.vertices))[0], :] = \
subcortical_waveform(times)
evoked = simulate_evoked(fwd, stc, raw.info, cov, nave=15)
Explanation: Simulate one cortical dipole (40 Hz) and one subcortical (239 Hz)
End of explanation
evoked_subcortical = mne.preprocessing.cortical_signal_suppression(evoked,
n_proj=6)
chs = mne.pick_types(evoked.info, meg=False, eeg=True)
psd = np.mean(np.abs(np.fft.rfft(evoked.data))**2, axis=0)
psd_proc = np.mean(np.abs(np.fft.rfft(evoked_subcortical.data))**2, axis=0)
freq = np.arange(0, stop=int(evoked.info['sfreq'] / 2),
step=evoked.info['sfreq'] / (2 * len(psd)))
fig, ax = plt.subplots()
ax.plot(freq, psd, label='raw')
ax.plot(freq, psd_proc, label='processed')
ax.text(.2, .7, 'cortical', transform=ax.transAxes)
ax.text(.8, .25, 'subcortical', transform=ax.transAxes)
ax.set(ylabel='EEG Power spectral density', xlabel='Frequency (Hz)')
ax.legend()
# References
# ^^^^^^^^^^
#
# .. footbibliography::
Explanation: Process with CSS and plot PSD of EEG data before and after processing
End of explanation |
6,966 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial 2
Step1: These variables do not change anything in the simulation engine, but
are just standard Python variables. They are used to increase the
readability and flexibility of the script. The box length is not a
parameter of this simulation, it is calculated from the number of
particles and the system density. This allows to change the parameters
later easily, e.g. to simulate a bigger system.
We use dictionaries for all particle related parameters, which is less error-prone and
readable as we will see later when we actually need the values. The parameters here define a purely repulsive,
equally sized, monovalent salt.
The simulation engine itself is modified by changing the
espressomd.System() properties. We create an instance <tt>system</tt> and
set the box length, periodicity and time step. The skin depth <tt>skin</tt>
is a parameter for the link--cell system which tunes its
performance, but shall not be discussed here. Further, we activate the Langevin thermostat
for our NVT ensemble with temperature <tt>temp</tt> and friction coefficient <tt>gamma</tt>.
Step2: We now fill this simulation box with particles at random positions, using type and charge from our dictionaries.
Using the length of the particle list <tt>system.part</tt> for the id, we make sure that our particles are numbered consecutively.
The particle type is used to link non-bonded interactions to a certain group of particles.
Step3: Before we can really start the simulation, we have to specify the
interactions between our particles. We already defined the Lennard-Jones parameters at the beginning,
what is left is to specify the combination rule and to iterate over particle type pairs. For simplicity,
we implement only the Lorentz-Berthelot rules.
We pass our interaction pair to <tt>system.non_bonded_inter[*,*]</tt> and set the
pre-calculated LJ parameters <tt>epsilon</tt>, <tt>sigma</tt> and <tt>cutoff</tt>. With <tt>shift="auto"</tt>,
we shift the interaction potential to the cutoff so that $U_\mathrm{LJ}(r_\mathrm{cutoff})=0$.
Step4: 3 Equilibration
With randomly positioned particles, we most likely have huge overlap and the strong repulsion will
cause the simulation to crash. The next step in our script therefore is a suitable LJ equilibration.
This is known to be a tricky part of a simulation and several approaches exist to reduce the particle overlap.
Here, we use a highly damped system (large gamma in the thermostat) and cap the forces of the LJ interaction.
We use <tt>system.analysis.mindist</tt> to get the minimal distance between all particles pairs. This value
is used to progressively increase the force capping. This results in a slow increase of the force capping at
strong overlap. At the end, we reset our thermostat to the target values and deactivate the force cap by setting
it to zero.
Step5: ESPResSo uses so-called <tt>actors</tt> for electrostatics, magnetostatics and hydrodynamics. This ensures that unphysical combinations of algorithms are
avoided, for example simultaneous usage of two electrostatic interactions.
Adding an actor to the system also activates the method and calls necessary
initialization routines. Here, we define a P$^3$M object with parameters Bjerrum
length and rms force error . This automatically starts a
tuning function which tries to find optimal parameters for P$^3$M and prints them
to the screen
Step6: Before the production part of the simulation, we do a quick temperature
equilibration. For the output, we gather all energies with <tt>system.analysis.energy()</tt>, calculate the "current" temperature from the ideal part and print it to the screen along with the total and Coulomb energies. Note that for the ideal gas the temperature is given via $1/2 m \sqrt{\langle v^2 \rangle}=3/2 k_BT$, where $\langle \cdot \rangle$ denotes the ensemble average. Calculating some kind of "current temperature" via $T_\text{cur}=\frac{m}{3 k_B} \sqrt{ v^2 }$ you do not obtain the temperature in the system. Only when averaging the squared velocities first one would obtain the temperature for the ideal gas. $T$ is a fixed quantity and does not fluctuate in the canonical ensemble.
We integrate for a certain amount of steps with <tt>system.integrator.run(100)</tt>.
Step7: <figure>
<img src='figures/salt.png' alt='missing' style="width
Step8: Additionally, we append all particle configurations in the core with <tt>system.analysis.append()</tt> for a very convenient analysis later on.
5 Analysis
Now, we want to calculate the averaged radial distribution functions
$g_{++}(r)$ and $g_{+-}(r)$ with the <tt>rdf()</tt> command from <tt>system.analysis</tt>
Step9: The shown <tt>rdf()</tt> commands return the radial distribution functions for
equally and oppositely charged particles for specified radii and number of bins.
In this case, we calculate the averaged rdf of the stored
configurations, denoted by the chevrons in <tt>rdf_type='$<\mathrm{rdf}>$'</tt>. Using <tt>rdf_type='rdf'</tt> would simply calculate the rdf of the current particle
configuration. The results are two NumPy arrays containing the $r$ and $g(r)$
values. We can then write the data into a file with standard python output routines.
Step10: Finally we can plot the two radial distribution functions using pyplot. | Python Code:
from __future__ import print_function
from espressomd import System, electrostatics, features
import espressomd
import numpy
import matplotlib.pyplot as plt
plt.ion()
# Print enabled features
required_features = ["EXTERNAL_FORCES", "MASS", "ELECTROSTATICS", "LENNARD_JONES"]
espressomd.assert_features(required_features)
print(espressomd.features())
# System Parameters
n_part = 200
n_ionpairs = n_part/2
density = 0.5
time_step = 0.01
temp = 1.0
gamma = 1.0
l_bjerrum = 7.0
num_steps_equilibration = 1000
num_configs = 500
integ_steps_per_config = 1000
# Particle Parameters
types = {"Anion": 0, "Cation": 1}
numbers = {"Anion": n_ionpairs, "Cation": n_ionpairs}
charges = {"Anion": -1.0, "Cation": 1.0}
lj_sigmas = {"Anion": 1.0, "Cation": 1.0}
lj_epsilons = {"Anion": 1.0, "Cation": 1.0}
WCA_cut = 2.**(1. / 6.)
lj_cuts = {"Anion": WCA_cut * lj_sigmas["Anion"],
"Cation": WCA_cut * lj_sigmas["Cation"]}
Explanation: Tutorial 2: A Simple Charged System, Part 1
1 Introduction
This tutorial introduces some of the basic features of ESPResSo for charged systems by constructing a simulation script for a simple salt crystal. In the subsequent task, we use a more realistic force-field for a NaCl crystal. Finally, we introduce constraints and 2D-Electrostatics to simulate a molten salt in a parallel plate capacitor. We assume that the reader is familiar with the basic concepts of Python and MD simulations. Compile espresso with the following features in your myconfig.hpp to be set throughout the whole tutorial:
```
#define EXTERNAL_FORCES
#define MASS
#define ELECTROSTATICS
#define LENNARD_JONES
```
2 Basic Set Up
The script for the tutorial can be found in your build directory at <tt>/doc/tutorials/02-charged_system/scripts/nacl.py</tt>.
We start with importing numpy, pyplot, and the espressomd features and setting up all
the relevant simulation parameters in one place:
End of explanation
# Setup System
box_l = (n_part / density)**(1. / 3.)
system = System(box_l = [box_l, box_l, box_l])
system.seed=42
system.periodicity = [1, 1, 1]
system.time_step = time_step
system.cell_system.skin = 0.3
system.thermostat.set_langevin(kT=temp, gamma=gamma)
Explanation: These variables do not change anything in the simulation engine, but
are just standard Python variables. They are used to increase the
readability and flexibility of the script. The box length is not a
parameter of this simulation, it is calculated from the number of
particles and the system density. This allows to change the parameters
later easily, e.g. to simulate a bigger system.
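Concretely, the box length follows from the number density as $L = (N_\mathrm{part}/\rho)^{1/3}$, which is what box_l = (n_part / density)**(1. / 3.) computes in the script.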
We use dictionaries for all particle related parameters, which is less error-prone and
readable as we will see later when we actually need the values. The parameters here define a purely repulsive,
equally sized, monovalent salt.
The simulation engine itself is modified by changing the
espressomd.System() properties. We create an instance <tt>system</tt> and
set the box length, periodicity and time step. The skin depth <tt>skin</tt>
is a parameter for the link--cell system which tunes its
performance, but shall not be discussed here. Further, we activate the Langevin thermostat
for our NVT ensemble with temperature <tt>temp</tt> and friction coefficient <tt>gamma</tt>.
End of explanation
for i in range(int(n_ionpairs)):
system.part.add(
id=len(system.part),
type=types["Anion"],
pos=numpy.random.random(3) * box_l,
q=charges["Anion"])
for i in range(int(n_ionpairs)):
system.part.add(
id=len(system.part),
type=types["Cation"],
pos=numpy.random.random(3) * box_l,
q=charges["Cation"])
Explanation: We now fill this simulation box with particles at random positions, using type and charge from our dictionaries.
Using the length of the particle list <tt>system.part</tt> for the id, we make sure that our particles are numbered consecutively.
The particle type is used to link non-bonded interactions to a certain group of particles.
End of explanation
def combination_rule_epsilon(rule, eps1, eps2):
if rule=="Lorentz":
return (eps1*eps2)**0.5
else:
raise ValueError("No combination rule defined")
def combination_rule_sigma(rule, sig1, sig2):
if rule=="Berthelot":
return (sig1+sig2)*0.5
else:
raise ValueError("No combination rule defined")
# Lennard-Jones interactions parameters
for s in [["Anion", "Cation"], ["Anion", "Anion"], ["Cation", "Cation"]]:
lj_sig = combination_rule_sigma("Berthelot",lj_sigmas[s[0]], lj_sigmas[s[1]])
lj_cut = combination_rule_sigma("Berthelot", lj_cuts[s[0]], lj_cuts[s[1]])
lj_eps = combination_rule_epsilon("Lorentz", lj_epsilons[s[0]],lj_epsilons[s[1]])
system.non_bonded_inter[types[s[0]], types[s[1]]].lennard_jones.set_params(
epsilon=lj_eps, sigma=lj_sig, cutoff=lj_cut, shift="auto")
Explanation: Before we can really start the simulation, we have to specify the
interactions between our particles. We already defined the Lennard-Jones parameters at the beginning,
what is left is to specify the combination rule and to iterate over particle type pairs. For simplicity,
we implement only the Lorentz-Berthelot rules.
We pass our interaction pair to <tt>system.non_bonded_inter[*,*]</tt> and set the
pre-calculated LJ parameters <tt>epsilon</tt>, <tt>sigma</tt> and <tt>cutoff</tt>. With <tt>shift="auto"</tt>,
we shift the interaction potential to the cutoff so that $U_\mathrm{LJ}(r_\mathrm{cutoff})=0$.
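For reference, the Lorentz-Berthelot combination rules implemented by the two helper functions are $\sigma_{ij} = \tfrac{1}{2}\,(\sigma_i + \sigma_j)$ and $\epsilon_{ij} = \sqrt{\epsilon_i\,\epsilon_j}$.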
End of explanation
# Lennard Jones Equilibration
max_sigma = max(lj_sigmas.values())
min_dist = 0.0
cap = 10.0
# Warmup Helper: Cold, highly damped system
system.thermostat.set_langevin(kT=temp*0.1, gamma=gamma*50.0)
while min_dist < max_sigma:
#Warmup Helper: Cap max. force, increase slowly for overlapping particles
min_dist = system.analysis.min_dist([types["Anion"],types["Cation"]],[types["Anion"],types["Cation"]])
cap += min_dist
#print min_dist, cap
system.force_cap=cap
system.integrator.run(10)
# Don't forget to reset thermostat, timestep and force cap
system.thermostat.set_langevin(kT=temp, gamma=gamma)
system.force_cap=0
Explanation: 3 Equilibration
With randomly positioned particles, we most likely have huge overlap and the strong repulsion will
cause the simulation to crash. The next step in our script therefore is a suitable LJ equilibration.
This is known to be a tricky part of a simulation and several approaches exist to reduce the particle overlap.
Here, we use a highly damped system (large gamma in the thermostat) and cap the forces of the LJ interaction.
We use <tt>system.analysis.min_dist</tt> to get the minimal distance between all particle pairs. This value
is used to progressively increase the force capping. This results in a slow increase of the force capping at
strong overlap. At the end, we reset our thermostat to the target values and deactivate the force cap by setting
it to zero.
End of explanation
p3m = electrostatics.P3M(prefactor=l_bjerrum*temp,
accuracy=1e-3)
system.actors.add(p3m)
Explanation: ESPResSo uses so-called <tt>actors</tt> for electrostatics, magnetostatics and hydrodynamics. This ensures that unphysical combinations of algorithms are
avoided, for example simultaneous usage of two electrostatic interactions.
Adding an actor to the system also activates the method and calls necessary
initialization routines. Here, we define a P$^3$M object with parameters Bjerrum
length and rms force error. This automatically starts a
tuning function which tries to find optimal parameters for P$^3$M and prints them
to the screen:
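Note that the electrostatic prefactor passed here is $l_\mathrm{B}\,k_\mathrm{B}T$ (prefactor=l_bjerrum*temp in reduced units), with the Bjerrum length $l_\mathrm{B}=7.0$ set at the top of the script.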
End of explanation
# Temperature Equilibration
system.time = 0.0
for i in range(int(num_steps_equilibration/50)):
energy = system.analysis.energy()
temp_measured = energy['kinetic'] / ((3.0 / 2.0) * n_part)
print("t={0:.1f}, E_total={1:.2f}, E_coulomb={2:.2f},T={3:.4f}".format(system.time, energy['total'],
energy['coulomb'], temp_measured), end='\r')
system.integrator.run(200)
Explanation: Before the production part of the simulation, we do a quick temperature
equilibration. For the output, we gather all energies with <tt>system.analysis.energy()</tt>, calculate the "current" temperature from the ideal part and print it to the screen along with the total and Coulomb energies. Note that for the ideal gas the temperature is given via $\frac{1}{2} m \langle v^2 \rangle = \frac{3}{2} k_B T$, where $\langle \cdot \rangle$ denotes the ensemble average. Calculating some kind of "current temperature" via $T_\text{cur}=\frac{m}{3 k_B} v^2$ you do not obtain the temperature in the system. Only when averaging the squared velocities first would one obtain the temperature of the ideal gas. $T$ is a fixed quantity and does not fluctuate in the canonical ensemble.
We integrate for a certain number of steps with <tt>system.integrator.run(200)</tt>.
End of explanation
# Integration
system.time = 0.0
for i in range(num_configs):
energy = system.analysis.energy()
temp_measured = energy['kinetic'] / ((3.0 / 2.0) * n_part)
print("t={0:.1f}, E_total={1:.2f}, E_coulomb={2:.2f}, T={3:.4f}".format(system.time, energy['total'],
energy['coulomb'], temp_measured), end='\r')
system.integrator.run(integ_steps_per_config)
# Internally append particle configuration
system.analysis.append()
Explanation: <figure>
<img src='figures/salt.png' alt='missing' style="width: 300px;"/>
<center>
<figcaption>Figure 1: VMD Snapshot of the Salt System</figcaption>
</figure>
4 Running the Simulation
Now we can integrate the particle trajectories for a couple of time
steps. Our integration loop basically looks like the equilibration:
End of explanation
# Analysis
# Calculate the averaged rdfs
rdf_bins = 100
r_min = 0.0
r_max = system.box_l[0]/2.0
r,rdf_00 = system.analysis.rdf(rdf_type='<rdf>',
type_list_a=[types["Anion"]],
type_list_b=[types["Anion"]],
r_min=r_min,
r_max=r_max,
r_bins=rdf_bins)
r,rdf_01 = system.analysis.rdf(rdf_type='<rdf>',
type_list_a=[types["Anion"]],
type_list_b=[types["Cation"]],
r_min=r_min, r_max=r_max, r_bins=rdf_bins)
Explanation: Additionally, we append all particle configurations in the core with <tt>system.analysis.append()</tt> for a very convenient analysis later on.
5 Analysis
Now, we want to calculate the averaged radial distribution functions
$g_{++}(r)$ and $g_{+-}(r)$ with the <tt>rdf()</tt> command from <tt>system.analysis</tt>:
End of explanation
with open('rdf.data', 'w') as rdf_fp:
for i in range(rdf_bins):
rdf_fp.write("%1.5e %1.5e %1.5e\n" %
(r[i], rdf_00[i], rdf_01[i]))
Explanation: The shown <tt>rdf()</tt> commands return the radial distribution functions for
equally and oppositely charged particles for specified radii and number of bins.
In this case, we calculate the averaged rdf of the stored
configurations, denoted by the chevrons in <tt>rdf_type='$<\mathrm{rdf}>$'</tt>. Using <tt>rdf_type='rdf'</tt> would simply calculate the rdf of the current particle
configuration. The results are two NumPy arrays containing the $r$ and $g(r)$
values. We can then write the data into a file with standard python output routines.
End of explanation
# Plot the distribution functions
plt.figure(figsize=(10,6), dpi=80)
plt.plot(r[:],rdf_00[:], label='$g(r)_{++}$')
plt.plot(r[:],rdf_01[:], label='$g(r)_{+-}$')
plt.xlabel('$r$', fontsize=20)
plt.ylabel('$g(r)$', fontsize=20)
plt.legend(fontsize=20)
plt.show()
Explanation: Finally we can plot the two radial distribution functions using pyplot.
End of explanation |
6,967 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedfoward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on it's own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
Step1: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combine all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
Step2: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise
Step3: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise
Step4: If you built labels correctly, you should see the next output.
Step5: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters.
Exercise
Step6: Exercise
Step7: If you build features correctly, it should look like that cell output below.
Step8: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise
Step9: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like
Step10: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise
Step11: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise
Step12: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation
Step13: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise
Step14: Output
We only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.
Step15: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
Step16: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
Step17: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
Step18: Testing | Python Code:
import numpy as np
import tensorflow as tf
with open('reviews.txt', 'r') as f:
reviews = f.read()
with open('labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
Explanation: Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedforward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
End of explanation
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
Explanation: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combine all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
End of explanation
# Create your dictionary that maps vocab words to integers here
from collections import Counter
counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}  # start at 1; 0 is reserved for padding
# Convert the reviews to integers, same shape as reviews list, but with integers
reviews_ints = [[vocab_to_int[word] for word in review.split()] for review in reviews]
Explanation: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.
Also, convert the reviews to integers and store the reviews in a new list called reviews_ints.
End of explanation
# Convert labels to 1s and 0s for 'positive' and 'negative'
labels = labels.split('\n')
labels = np.array([1 if label == 'positive' else 0 for label in labels])
Explanation: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise: Convert labels from positive and negative to 1 and 0, respectively.
End of explanation
from collections import Counter

review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
Explanation: If you built labels correctly, you should see the next output.
End of explanation
# Filter out that review with 0 length
non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) != 0]
reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]
labels = np.array([labels[ii] for ii in non_zero_idx])
Explanation: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 words.
Exercise: First, remove the review with zero length from the reviews_ints list.
End of explanation
seq_len = 200
# Left-pad shorter reviews with 0s and keep only the first seq_len words of longer ones
features = np.zeros((len(reviews_ints), seq_len), dtype=int)
for i, row in enumerate(reviews_ints):
    features[i, -len(row):] = np.array(row)[:seq_len]
Explanation: Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from reviews_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use only the first 200 words as the feature vector.
This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
End of explanation
features[:10,:100]
Explanation: If you build features correctly, it should look like that cell output below.
End of explanation
split_frac = 0.8
split_idx = int(len(features) * split_frac)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]
test_idx = int(len(val_x) * 0.5)
val_x, test_x = val_x[:test_idx], val_x[test_idx:]
val_y, test_y = val_y[:test_idx], val_y[test_idx:]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
Explanation: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
End of explanation
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
Explanation: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like:
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2501, 200)
Build the graph
Here, we'll build the graph. First up, defining the hyperparameters.
lstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.
lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.
batch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.
learning_rate: Learning rate
End of explanation
n_words = len(vocab_to_int) + 1  # +1 because index 0 is reserved for padding
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
    inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
    labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')
Explanation: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise: Create the inputs_, labels_, and drop out keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.
End of explanation
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
    embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
    embed = tf.nn.embedding_lookup(embedding, inputs_)
Explanation: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer has 200 units, the function will return a tensor with size [batch_size, 200].
End of explanation
with graph.as_default():
# Your basic LSTM cell
    lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
    # Add dropout to the cell
    drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
    # Stack up multiple LSTM layers, for deep learning
    cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
Explanation: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:
tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=<function tanh at 0x109f1ef28>)
you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like
drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
Most of the time, you're network will have better performance with more layers. That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.
So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an architectural viewpoint, just a more complicated graph in the cell.
Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add drop out to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.
Here is a tutorial on building RNNs that will help you out.
End of explanation
with graph.as_default():
    outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state)
Explanation: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.
End of explanation
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
Explanation: Output
We only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.
End of explanation
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
Explanation: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
End of explanation
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
Explanation: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
End of explanation
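# Quick sanity check of get_batches on toy arrays (the values are made up for illustration):
# with batch_size=4 we get two full batches and the trailing two samples are dropped.
xs = np.arange(10)
ys = np.arange(10) * 10
for bx, by in get_batches(xs, ys, batch_size=4):
    print(bx, by)

# The training cell below saves to "checkpoints/sentiment.ckpt", so create the directory
# first; a small convenience helper so saver.save does not fail on a missing folder.
import os
os.makedirs('checkpoints', exist_ok=True)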
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
Explanation: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
End of explanation
test_acc = []
with tf.Session(graph=graph) as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
Explanation: Testing
End of explanation |
6,968 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial-1
The first thing to do to use the python wrappers is to import the package. PyDealII is only a shell and importing it will only allow you to call
python
help(PyDealII)
PyDealII is composed of two libraries
Step1: We start by creating a 2D Triangulation of an hyper cube and we globally refine it twice. You can read the documention of Triangulation by typing
Step2: Now we would like to visualize the mesh that has been created. We can output a vtu file using
python
triangulation.write('triangulation.vtu', 'vtu')
and then use VisIt or Paraview. This is probably what you want to do for a 3D Triangulation. However, in this tutorial, we will create a function to plot the result using matplotlib. This will allow us to look at the mesh inside the notebook. To do that we will use matplotlib and numpy.
Step3: The function below takes as input a Triangulation and a function that is used to define the color scheme. In this function, we loop over all the active cells, get the coordinates of the vertices, and use these coordinates to create polygons that we plot. We can loop over the active cells using
python
for cell in triangulation.active_cells()
Once we have a cell, we can get any vertex using
python
vertex = cell.get_vertex(i)
Since a vertex is a Point, we can get the coordinates of the Point using
python
x = vertex.x
y = vertex.y
z = vertex.z
Step4: We know define a color scheme function and plot the Triangulation
Step5: Now let's assume that the left half of the domain is composed of a different material than the right half of the domain. We will loop over all the cells and if the abscissa of the cell barycenter is less than 0.5, we will assign zero to the material_id of the cell. The others cells will be assigned a material_id of one.
Step6: We will refine isotropically the cells that have a material_id equal to zero and plot the Triangulation.
Step7: We will now show how to merge two Triangulations. In order to merge the two Triangulations, we will need to move (shift) the second Triangulation such that it doesn't overlap with the first one.
Step8: We are now almost ready to merge the Triangulations. However, deal.II does not allow us to merge Triangulations that have been refined. We can use the flatten_triangulation function to create new Triangulations that are not refined but this function does not work if the mesh contains hanging nodes. Thus, we need to modify the first triangulation.
Step9: Like expected the second Triangulation is moved too far on the right. This mistake can easily fixed by moving flatten_triangulation_2 to the left and merging the Triangulations once again. We can see the advantage of using python over C++. In C++, we would have to recompile and rerun the code while in python we can very easily fix our mistake.
Step10: Now that we are done generating the grid, we need to save it in a format that will make it easy to load in our C++ code. This can be done using the save function python and then using the Triangulation
Step11: The C++ code to load the mesh is
C++
triangulation.load('merged_triangulation');
NOTE
If the C++ code throws an exception, the error message will be shown. However, if the code segfaults then the kernel will simply be killed. In this case, the easiest way to debug your code is to use gdb to find the problem. This can be done by exporting your notebook as a python code and then typing | Python Code:
%matplotlib inline
import PyDealII.Debug as dealii
Explanation: Tutorial-1
The first thing to do to use the python wrappers is to import the package. PyDealII is only a shell and importing it will only allow you to call
python
help(PyDealII)
PyDealII is composed of two libraries:
- PyDealII.Debug which uses the debug version of deal.II
- PyDealII.Release which uses the release version of deal.II
In this tutorial, we import the debug version of the library as dealii.
End of explanation
triangulation = dealii.Triangulation('2D')
triangulation.generate_hyper_cube()
triangulation.refine_global(2)
Explanation: We start by creating a 2D Triangulation of an hyper cube and we globally refine it twice. You can read the documention of Triangulation by typing:
python
help(dealii.Triangulation)
This will show you all the functions associated with Triangulation and for each one, you will get a short explanation.
End of explanation
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon
from matplotlib.collections import PatchCollection
import numpy as np
Explanation: Now we would like to visualize the mesh that has been created. We can output a vtu file using
python
triangulation.write('triangulation.vtu', 'vtu')
and then use VisIt or Paraview. This is probably what you want to do for a 3D Triangulation. However, in this tutorial, we will create a function to plot the result using matplotlib. This will allow us to look at the mesh inside the notebook. To do that we will use matplotlib and numpy.
End of explanation
def plot_triangulation(triangulation, color_scheme):
fig, ax = plt.subplots()
patches = []
colors = []
cell_id = 0
for cell in triangulation.active_cells():
quad_vertices = np.zeros((4,2))
# The shift variable is used to reorder the vertices because
# deal.II and matplotlib require different ordering
shift = [0,1,3,2]
for i in range(4):
vertex = cell.get_vertex(i)
quad_vertices[shift[i]][0] = vertex.x
quad_vertices[shift[i]][1] = vertex.y
quad = Polygon(quad_vertices, closed=True)
patches.append(quad)
colors.append(color_scheme(cell_id, cell))
cell_id += 1
p = PatchCollection(patches)
p.set_array(np.array(colors))
ax.add_collection(p, autolim=True)
ax.autoscale_view()
plt.show()
Explanation: The function below takes as input a Triangulation and a function that is used to define the color scheme. In this function, we loop over all the active cells, get the coordinates of the vertices, and use these coordinates to create polygons that we plot. We can loop over the active cells using
python
for cell in triangulation.active_cells()
Once we have a cell, we can get any vertex using
python
vertex = cell.get_vertex(i)
Since a vertex is a Point, we can get the coordinates of the Point using
python
x = vertex.x
y = vertex.y
z = vertex.z
End of explanation
def color_sc(cell_id, cell):
return cell_id
plot_triangulation(triangulation, color_sc)
Explanation: We now define a color scheme function and plot the Triangulation
End of explanation
for cell in triangulation.active_cells():
if cell.barycenter().x < 0.5:
cell.material_id = 0
else:
cell.material_id = 1
plot_triangulation(triangulation, lambda cell_id,cell : cell.material_id)
Explanation: Now let's assume that the left half of the domain is composed of a different material than the right half of the domain. We will loop over all the cells and if the abscissa of the cell barycenter is less than 0.5, we will assign zero to the material_id of the cell. The others cells will be assigned a material_id of one.
End of explanation
for cell in triangulation.active_cells():
if cell.material_id == 0:
cell.refine_flag ='isotropic'
triangulation.execute_coarsening_and_refinement()
plot_triangulation(triangulation, color_sc)
Explanation: We will refine isotropically the cells that have a material_id equal to zero and plot the Triangulation.
End of explanation
triangulation_2 = dealii.Triangulation('2D')
triangulation_2.generate_hyper_cube()
triangulation_2.refine_global(2)
triangulation_2.shift([2.,0.])
plot_triangulation(triangulation_2, color_sc)
Explanation: We will now show how to merge two Triangulations. In order to merge the two Triangulations, we will need to move (shift) the second Triangulation such that it doesn't overlap with the first one.
End of explanation
flatten_triangulation_1 = dealii.Triangulation('2D')
triangulation.generate_hyper_cube()
triangulation.refine_global(2)
triangulation.flatten_triangulation(flatten_triangulation_1)
flatten_triangulation_2 = dealii.Triangulation('2D')
triangulation_2.flatten_triangulation(flatten_triangulation_2)
triangulation_3 = dealii.Triangulation('2D')
triangulation_3.merge_triangulations(flatten_triangulation_1, flatten_triangulation_2)
plot_triangulation(triangulation_3, color_sc)
Explanation: We are now almost ready to merge the Triangulations. However, deal.II does not allow us to merge Triangulations that have been refined. We can use the flatten_triangulation function to create new Triangulations that are not refined but this function does not work if the mesh contains hanging nodes. Thus, we need to modify the first triangulation.
End of explanation
flatten_triangulation_2.shift([-1.,0])
triangulation_3.merge_triangulations(flatten_triangulation_1, flatten_triangulation_2)
plot_triangulation(triangulation_3, color_sc)
Explanation: As expected, the second Triangulation is moved too far to the right. This mistake can easily be fixed by moving flatten_triangulation_2 to the left and merging the Triangulations once again. We can see the advantage of using python over C++. In C++, we would have to recompile and rerun the code while in python we can very easily fix our mistake.
End of explanation
triangulation_3.save('merged_triangulation')
Explanation: Now that we are done generating the grid, we need to save it in a format that will make it easy to load in our C++ code. This can be done using the save function in python and then using the Triangulation::load() function in C++. The only caveat is that parallel::distributed::Triangulation cannot load a grid which has refined cells. Once again this can be fixed by flattening the Triangulation (this is not necessary here).
End of explanation
for cell in triangulation.active_cells():
vertex = cell.get_vertex(5)
Explanation: The C++ code to load the mesh is
C++
triangulation.load('merged_triangulation');
NOTE
If the C++ code throws an exception, the error message will be shown. However, if the code segfaults then the kernel will simply be killed. In this case, the easiest way to debug your code is to use gdb to find the problem. This can be done by exporting your notebook as a python script and then typing:
bash
gdb python
run my_program.py
Below, we show an example of an error message coming from the C++ code.
End of explanation |
6,969 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Filters
By Evgenia "Jenny" Nitishinskaya, Dr. Aidan O'Mahony, and Delaney Granizo-Mackenzie. Algorithms by David Edwards.
Kalman Filter Beta Estimation Example from Dr. Aidan O'Mahony's blog.
Part of the Quantopian Lecture Series
Step1: Toy example
Step2: At each point in time we plot the state estimate <i>after</i> accounting for the most recent measurement, which is why we are not at position 30 at time 0. The filter's attentiveness to the measurements allows it to correct for the initial bogus state we gave it. Then, by weighing its model and knowledge of the physical laws against new measurements, it is able to filter out much of the noise in the camera data. Meanwhile the confidence in the estimate increases with time, as shown by the graph below
Step3: The Kalman filter can also do <i>smoothing</i>, which takes in all of the input data at once and then constructs its best guess for the state of the system in each period post factum. That is, it does not provide online, running estimates, but instead uses all of the data to estimate the historical state, which is useful if we only want to use the data after we have collected all of it.
Step4: Example
Step5: This is a little hard to see, so we'll plot a subsection of the graph. | Python Code:
from SimPEG import *
%pylab inline
# Import a Kalman filter and other useful libraries
from pykalman import KalmanFilter
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy import poly1d
Explanation: Filters
By Evgenia "Jenny" Nitishinskaya, Dr. Aidan O'Mahony, and Delaney Granizo-Mackenzie. Algorithms by David Edwards.
Kalman Filter Beta Estimation Example from Dr. Aidan O'Mahony's blog.
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
Notebook released under the Creative Commons Attribution 4.0 License.
End of explanation
tau = 0.1
# Set up the filter
kf = KalmanFilter(n_dim_obs=1, n_dim_state=2, # position is 1-dimensional, (x,v) is 2-dimensional
initial_state_mean=[30,10],
initial_state_covariance=np.eye(2),
transition_matrices=[[1,tau], [0,1]],
observation_matrices=[[1,0]],
observation_covariance=3,
transition_covariance=np.zeros((2,2)),
transition_offsets=[-4.9*tau**2, -9.8*tau])
# Create a simulation of a ball falling for 40 units of time (each of length tau)
times = np.arange(40)
actual = -4.9*tau**2*times**2
# Simulate the noisy camera data
sim = actual + 3*np.random.randn(40)
# Run filter on camera data
state_means, state_covs = kf.filter(sim)
plt.plot(times, state_means[:,0])
plt.plot(times, sim)
plt.plot(times, actual)
plt.legend(['Filter estimate', 'Camera data', 'Actual'])
plt.xlabel('Time')
plt.ylabel('Height');
print(times)
print(state_means[:,0])
Explanation: Toy example: falling ball
Imagine we have a falling ball whose motion we are tracking with a camera. The state of the ball consists of its position and velocity. We know that we have the relationship $x_t = x_{t-1} + v_{t-1}\tau - \frac{1}{2} g \tau^2$, where $\tau$ is the time (in seconds) elapsed between $t-1$ and $t$ and $g$ is gravitational acceleration. Meanwhile, our camera can tell us the position of the ball every second, but we know from the manufacturer that the camera accuracy, translated into the position of the ball, implies variance in the position estimate of about 3 meters.
In order to use a Kalman filter, we need to give it transition and observation matrices, transition and observation covariance matrices, and the initial state. The state of the system is (position, velocity), so it follows the transition matrix
$$ \left( \begin{array}{cc}
1 & \tau \\
0 & 1 \end{array} \right) $$
with offset $(-\tau^2 \cdot g/2, -\tau\cdot g)$. The observation matrix just extracts the position coordinate, (1 0), since we are measuring position. We know that the observation variance is 3, and transition covariance is 0 since we will be simulating the data the same way we specified our model. For the initial state, let's feed our model something bogus like (30, 10) and see how our system evolves.
End of explanation
# Plot variances of x and v, extracting the appropriate values from the covariance matrix
plt.plot(times, state_covs[:,0,0])
plt.plot(times, state_covs[:,1,1])
plt.legend(['Var(x)', 'Var(v)'])
plt.ylabel('Variance')
plt.xlabel('Time');
Explanation: At each point in time we plot the state estimate <i>after</i> accounting for the most recent measurement, which is why we are not at position 30 at time 0. The filter's attentiveness to the measurements allows it to correct for the initial bogus state we gave it. Then, by weighing its model and knowledge of the physical laws against new measurements, it is able to filter out much of the noise in the camera data. Meanwhile the confidence in the estimate increases with time, as shown by the graph below:
End of explanation
# Use smoothing to estimate what the state of the system has been
smoothed_state_means, _ = kf.smooth(sim)
# Plot results
plt.plot(times, smoothed_state_means[:,0])
plt.plot(times, sim)
plt.plot(times, actual)
plt.legend(['Smoothed estimate', 'Camera data', 'Actual'])
plt.xlabel('Time')
plt.ylabel('Height');
Explanation: The Kalman filter can also do <i>smoothing</i>, which takes in all of the input data at once and then constructs its best guess for the state of the system in each period post factum. That is, it does not provide online, running estimates, but instead uses all of the data to estimate the historical state, which is useful if we only want to use the data after we have collected all of it.
End of explanation
df = pd.read_csv("../data/ChungCheonDC/CompositeETCdata.csv")
df_DC = pd.read_csv("../data/ChungCheonDC/CompositeDCdata.csv")
df_DCstd = pd.read_csv("../data/ChungCheonDC/CompositeDCstddata.csv")
ax1 = plt.subplot(111)
ax1_1 = ax1.twinx()
df.plot(figsize=(12,3), x='date', y='reservoirH', ax=ax1_1, color='k', linestyle='-', lw=2)
# Load pricing data for a security
# start = '2013-01-01'
# end = '2015-01-01'
#x = get_pricing('reservoirH', fields='price', start_date=start, end_date=end)
x= df.reservoirH
# Construct a Kalman filter
kf = KalmanFilter(transition_matrices = [1],
observation_matrices = [1],
initial_state_mean = 39.3,
initial_state_covariance = 1,
observation_covariance=1,
transition_covariance=1)
# Use the observed values of the price to get a rolling mean
state_means, _ = kf.filter(x.values)
# Compute the rolling mean with various lookback windows
mean10 = pd.rolling_mean(x, 6)
mean20 = pd.rolling_mean(x, 20)
mean30 = pd.rolling_mean(x, 30)
# Plot original data and estimated mean
plt.plot(state_means)
plt.plot(x, 'k.', ms=2)
plt.plot(mean10)
plt.plot(mean20)
plt.plot(mean30)
plt.title('Kalman filter estimate of average')
plt.legend(['Kalman Estimate', 'Reservoir H', '6-day Moving Average', '20-day Moving Average', '30-day Moving Average'])
plt.xlabel('Day')
plt.ylabel('Reservoir Level');
plt.plot(state_means)
plt.plot(x)
plt.title('Kalman filter estimate of average')
plt.legend(['Kalman Estimate', 'Reseroir H'])
plt.xlabel('Day')
plt.ylabel('Reservoir Level');
Explanation: Example: moving average
Because the Kalman filter updates its estimates at every time step and tends to weigh recent observations more than older ones, a particularly useful application is estimation of rolling parameters of the data. When using a Kalman filter, there's no window length that we need to specify. This is useful for computing the moving average if that's what we are interested in, or for smoothing out estimates of other quantities. For instance, if we have already computed the moving Sharpe ratio, we can smooth it using a Kalman filter.
Below, we'll use both a Kalman filter and an n-day moving average to estimate the rolling mean of a dataset. We hope that the mean describes our observations well, so it shouldn't change too much when we add an observation; therefore, we assume that it evolves as a random walk with a small error term. The mean is the model's guess for the mean of the distribution from which measurements are drawn, so our prediction of the next value is simply equal to our estimate of the mean. We assume that the observations have variance 1 around the rolling mean, for lack of a better estimate. Our initial guess for the mean is 0, but the filter quickly realizes that that is incorrect and adjusts.
End of explanation
plt.plot(state_means[-400:])
plt.plot(x[-400:])
plt.plot(mean10[-400:])
plt.title('Kalman filter estimate of average')
plt.legend(['Kalman Estimate', 'Reservoir H', '6-day Moving Average'])
plt.xlabel('Day')
plt.ylabel('Reservoir Level');
# Load pricing data for a security
# start = '2013-01-01'
# end = '2015-01-01'
#x = get_pricing('reservoirH', fields='price', start_date=start, end_date=end)
xH= df.upperH_med
# Construct a Kalman filter
kf = KalmanFilter(transition_matrices = [1],
observation_matrices = [1],
initial_state_mean = 35.5,
initial_state_covariance = 1,
observation_covariance=1,
transition_covariance=.01)
# Use the observed values of the price to get a rolling mean
state_means, _ = kf.filter(xH.values)
# Compute the rolling mean with various lookback windows
mean10 = pd.rolling_mean(xH, 10)
mean20 = pd.rolling_mean(xH, 20)
mean30 = pd.rolling_mean(xH, 30)
# Plot original data and estimated mean
plt.plot(state_means)
plt.plot(xH)
plt.plot(mean10)
plt.plot(mean20)
plt.plot(mean30)
plt.title('Kalman filter estimate of average')
# plt.legend(['Kalman Estimate', 'upperH_med', '10-day Moving Average', '20-day Moving Average','30-day Moving Average'])
plt.xlabel('Day')
plt.ylabel('upperH_med');
txrxID = df_DC.keys()[1:-1]
xmasking = lambda x: np.ma.masked_where(np.isnan(x.values), x.values)
x= df_DC[txrxID[2]]
median10 = pd.rolling_median(x, 6)
mean10 = pd.rolling_max(x, 10)
x1 = median10
x2 = mean10
# Masking array having NaN
xm = xmasking(x)
# Construct a Kalman filter
kf = KalmanFilter(transition_matrices = [1],
observation_matrices = [1],
initial_state_mean = 67.6,
initial_state_covariance = 1,
observation_covariance=1,
transition_covariance=1)
# Use the observed values of the price to get a rolling mean
state_means, _ = kf.filter(xm)
#plt.plot(x1)
plt.plot(x)
plt.plot(x1)
plt.plot(x2)
plt.plot(state_means)
plt.legend([ 'origin x','median x1','mean x2', 'Kalman Estimate'])
plt.plot(x)
plt.plot(state_means)
upperH_med = xmasking(df.upperH_med)
state_means, _ = kf.filter(upperH_med)
plt.plot(df.upperH_med)
plt.plot(state_means)
# plt.plot(xH)
# plt.plot(mean10)
plt.title('Kalman filter estimate of average')
plt.legend(['Kalman Estimate', 'Reservoir H', '10-day Moving Average'])
plt.xlabel('Day')
plt.ylabel('upperH_med');
# Import libraries
%matplotlib inline
import pandas as pd
import sys
import matplotlib.pyplot as plt
import numpy as np
import scipy as sc
plt.style.use('ggplot')
np.random.seed(20)
x= df.reservoirH
print(x)
#-------------------------------------------------------------------------------
# Set up
# Time
t = np.linspace(0, 1, len(x))  # one sample per data point so t matches the length of the signal used below
# Frequencies in the signal
f1 = 20
f2 = 30
# Some random noise to add to the signal
noise = np.random.random_sample(len(t))
# Complete signal
y = x #2*np.sin(2*np.pi*f1*t+0.2) + 3*np.cos(2*np.pi*f2*t+0.3) + noise*5
# The part of the signal we want to isolate
y1 = x #2*np.sin(2*np.pi*f1*t+0.2)
y
# FFT of the signal
F = sc.fft(y)
# Other specs
N = len(t) # number of samples
dt = 0.001 # inter-sample time difference
w = np.fft.fftfreq(N, dt) # list of frequencies for the FFT
pFrequency = np.where(w>=0)[0] # we only positive frequencies
magnitudeF = abs(F[:len(pFrequency)]) # magnitude of F for the positive frequencies
#-------------------------------------------------------------------------------
# Some functions we will need
# Plots the FFT
def pltfft():
plt.plot(pFrequency,magnitudeF)
plt.xlabel('Hz')
plt.ylabel('Magnitude')
plt.title('FFT of the full signal')
plt.grid(True)
plt.show()
# Plots the full signal
def pltCompleteSignal():
plt.plot(t,y,'b')
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
plt.title('Full signal')
plt.grid(True)
plt.show()
# Filter function:
# blocks higher frequency than fmax, lower than fmin and returns the cleaned FT
def blockHigherFreq(FT,fmin,fmax,plot=False):
for i in range(len(F)):
if (i>= fmax) or (i<=fmin):
FT[i] = 0
if plot:
plt.plot(pFrequency,abs(FT[:len(pFrequency)]))
plt.xlabel('Hz')
plt.ylabel('Magnitude')
plt.title('Cleaned FFT')
plt.grid(True)
plt.show()
return FT
# Normalising function (gets the signal in a scale from 0 to 1)
def normalise(signal):
M = max(signal)
normalised = signal/M
return normalised
print(y.head())  # quick look at the signal we are about to filter
plt.plot(y)
#plt.plot(y1)
#-------------------------------------------------------------------------------
# Processing
# Cleaning the FT by selecting only frequencies between 18 and 22
newFT = blockHigherFreq(F,18,22)
# Getting back the cleaned signal
cleanedSignal = sc.ifft(F)
# Error
error = normalise(y1) - normalise(cleanedSignal)
#-------------------------------------------------------------------------------
# Plot the findings
#pltCompleteSignal() #Plot the full signal
#pltfft() #Plot fft
plt.figure()
plt.subplot(3,1,1) #Subplot 1
plt.title('Original signal')
plt.plot(t,y,'g')
plt.subplot(3,1,2) #Subplot 2
plt.plot(t,normalise(cleanedSignal),label='Cleaned signal',color='b')
plt.plot(t,normalise(y1),label='Signal to find',ls='-',color='r')
plt.title('Cleaned signal and signal to find')
plt.legend()
plt.subplot(3,1,3) #Subplot 3
plt.plot(t,error,color='r',label='error')
plt.show()
Explanation: This is a little hard to see, so we'll plot a subsection of the graph.
End of explanation |
6,970 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Model Selection Tutorial with Yellowbrick
In this tutorial, we are going to look at scores for a variety of scikit-learn models and compare them using visual diagnostic tools from Yellowbrick in order to select the best model for our data.
The Model Selection Triple
Discussions of machine learning are frequently characterized by a singular focus on model selection. Be it logistic regression, random forests, Bayesian methods, or artificial neural networks, machine learning practitioners are often quick to express their preference. The reason for this is mostly historical. Though modern third-party machine learning libraries have made the deployment of multiple models appear nearly trivial, traditionally the application and tuning of even one of these algorithms required many years of study. As a result, machine learning practitioners tended to have strong preferences for particular (and likely more familiar) models over others.
However, model selection is a bit more nuanced than simply picking the "right" or "wrong" algorithm. In practice, the workflow includes
Step2: Feature Extraction
Our data, including the target, is categorical. We will need to change these values to numeric ones for machine learning. In order to extract this from the dataset, we'll have to use scikit-learn transformers to transform our input dataset into something that can be fit to a model. Luckily, scikit-learn does provide transformers for converting categorical labels into numeric integers
Step4: Preliminary Model Evaluation
Based on the results from the F1 scores above, which model is performing the best?
Visual Model Evaluation
Now let's refactor our model evaluation function to use Yellowbrick's ClassificationReport class, a model visualizer that displays the precision, recall, and F1 scores. This visual model analysis tool integrates numerical scores as well color-coded heatmap in order to support easy interpretation and detection, particularly the nuances of Type I and Type II error, which are very relevant (lifesaving, even) to our use case!
Type I error (or a "false positive") is detecting an effect that is not present (e.g. determining a mushroom is poisonous when it is in fact edible).
Type II error (or a "false negative") is failing to detect an effect that is present (e.g. believing a mushroom is edible when it is in fact poisonous). | Python Code:
from yellowbrick.datasets import load_mushroom
X, y = load_mushroom()
print(X[:5]) # inspect the first five rows
Explanation: Model Selection Tutorial with Yellowbrick
In this tutorial, we are going to look at scores for a variety of scikit-learn models and compare them using visual diagnostic tools from Yellowbrick in order to select the best model for our data.
The Model Selection Triple
Discussions of machine learning are frequently characterized by a singular focus on model selection. Be it logistic regression, random forests, Bayesian methods, or artificial neural networks, machine learning practitioners are often quick to express their preference. The reason for this is mostly historical. Though modern third-party machine learning libraries have made the deployment of multiple models appear nearly trivial, traditionally the application and tuning of even one of these algorithms required many years of study. As a result, machine learning practitioners tended to have strong preferences for particular (and likely more familiar) models over others.
However, model selection is a bit more nuanced than simply picking the "right" or "wrong" algorithm. In practice, the workflow includes:
selecting and/or engineering the smallest and most predictive feature set
choosing a set of algorithms from a model family, and
tuning the algorithm hyperparameters to optimize performance.
The model selection triple was first described in a 2015 SIGMOD paper by Kumar et al. In their paper, which concerns the development of next-generation database systems built to anticipate predictive modeling, the authors cogently express that such systems are badly needed due to the highly experimental nature of machine learning in practice. "Model selection," they explain, "is iterative and exploratory because the space of [model selection triples] is usually infinite, and it is generally impossible for analysts to know a priori which [combination] will yield satisfactory accuracy and/or insights."
Recently, much of this workflow has been automated through grid search methods, standardized APIs, and GUI-based applications. In practice, however, human intuition and guidance can more effectively hone in on quality models than exhaustive search. By visualizing the model selection process, data scientists can steer towards final, explainable models and avoid pitfalls and traps.
The Yellowbrick library is a diagnostic visualization platform for machine learning that allows data scientists to steer the model selection process. Yellowbrick extends the scikit-learn API with a new core object: the Visualizer. Visualizers allow visual models to be fit and transformed as part of the scikit-learn Pipeline process, providing visual diagnostics throughout the transformation of high dimensional data.
About the Data
This tutorial uses the mushrooms data from the Yellowbrick datasets module.
NOTE: The YB version of the mushrooms data differs from the mushroom dataset from the UCI Machine Learning Repository. The Yellowbrick version has been deliberately modified to make modeling a bit more of a challenge.
The data include descriptions of hypothetical samples corresponding to 23 species of gilled mushrooms in the Agaricus and Lepiota Family. Each species was identified as definitely edible, definitely poisonous, or of unknown edibility and not recommended (this latter class was combined with the poisonous one).
Our file, "agaricus-lepiota.txt," contains information for 3 nominally valued attributes and a target value from 8124 instances of mushrooms (4208 edible, 3916 poisonous).
Let's load the data:
End of explanation
from sklearn.metrics import f1_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, LabelEncoder
def score_model(X, y, estimator, **kwargs):
    """Test various estimators."""
y = LabelEncoder().fit_transform(y)
model = Pipeline([
('one_hot_encoder', OneHotEncoder()),
('estimator', estimator)
])
# Instantiate the classification model and visualizer
model.fit(X, y, **kwargs)
expected = y
predicted = model.predict(X)
# Compute and return F1 (harmonic mean of precision and recall)
print("{}: {}".format(estimator.__class__.__name__, f1_score(expected, predicted)))
# Try them all!
from sklearn.svm import LinearSVC, NuSVC, SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegressionCV, LogisticRegression, SGDClassifier
from sklearn.ensemble import BaggingClassifier, ExtraTreesClassifier, RandomForestClassifier
models = [
SVC(gamma='auto'), NuSVC(gamma='auto'), LinearSVC(),
SGDClassifier(max_iter=100, tol=1e-3), KNeighborsClassifier(),
LogisticRegression(solver='lbfgs'), LogisticRegressionCV(cv=3),
BaggingClassifier(), ExtraTreesClassifier(n_estimators=100),
RandomForestClassifier(n_estimators=100)
]
for model in models:
score_model(X, y, model)
Explanation: Feature Extraction
Our data, including the target, is categorical. We will need to change these values to numeric ones for machine learning. In order to extract this from the dataset, we'll have to use scikit-learn transformers to transform our input dataset into something that can be fit to a model. Luckily, scikit-learn does provide transformers for converting categorical labels into numeric integers: sklearn.preprocessing.LabelEncoder and sklearn.preprocessing.OneHotEncoder.
We'll use a combination of scikit-learn's Pipeline object (here's a great post on using pipelines by Zac Stewart), OneHotEncoder, and LabelEncoder:
```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, LabelEncoder
y = LabelEncoder().fit_transform(y) # Label-encode targets before modeling
model = Pipeline([
('one_hot_encoder', OneHotEncoder()), # One-hot encode columns before modeling
('estimator', estimator)
])
```
Modeling and Evaluation
Common metrics for evaluating classifiers
Precision is the number of correct positive results divided by the number of all positive results (e.g. How many of the mushrooms we predicted would be edible actually were?).
Recall is the number of correct positive results divided by the number of positive results that should have been returned (e.g. How many of the mushrooms that were poisonous did we accurately predict were poisonous?).
The F1 score is a measure of a test's accuracy. It considers both the precision and the recall of the test to compute the score. The F1 score can be interpreted as a weighted average of the precision and recall, where an F1 score reaches its best value at 1 and worst at 0.
precision = true positives / (true positives + false positives)
recall = true positives / (false negatives + true positives)
F1 score = 2 * ((precision * recall) / (precision + recall))
Now we're ready to make some predictions!
Let's build a way to evaluate multiple estimators — first using traditional numeric scores (which we'll later compare to some visual diagnostics from the Yellowbrick library).
End of explanation
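# A quick, self-contained sanity check of the formulas above (the label arrays are made up
# purely for illustration): the hand-computed F1 should match sklearn's f1_score.
from sklearn.metrics import precision_score, recall_score
toy_true = [1, 1, 1, 0, 0, 0, 1, 0]   # 1 = poisonous, 0 = edible (hypothetical labels)
toy_pred = [1, 0, 1, 0, 1, 0, 1, 0]
p = precision_score(toy_true, toy_pred)   # TP / (TP + FP) = 3/4
r = recall_score(toy_true, toy_pred)      # TP / (TP + FN) = 3/4
print(p, r, 2 * p * r / (p + r))          # manual F1
print(f1_score(toy_true, toy_pred))       # should print the same value (0.75)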
from sklearn.pipeline import Pipeline
from yellowbrick.classifier import ClassificationReport
def visualize_model(X, y, estimator):
    """Test various estimators."""
y = LabelEncoder().fit_transform(y)
model = Pipeline([
('one_hot_encoder', OneHotEncoder()),
('estimator', estimator)
])
# Instantiate the classification model and visualizer
visualizer = ClassificationReport(
model, classes=['edible', 'poisonous'],
cmap="Reds", size=(600, 360)
)
visualizer.fit(X, y)
visualizer.score(X, y)
visualizer.poof()
for model in models:
visualize_model(X, y, model)
Explanation: Preliminary Model Evaluation
Based on the results from the F1 scores above, which model is performing the best?
Visual Model Evaluation
Now let's refactor our model evaluation function to use Yellowbrick's ClassificationReport class, a model visualizer that displays the precision, recall, and F1 scores. This visual model analysis tool integrates numerical scores as well color-coded heatmap in order to support easy interpretation and detection, particularly the nuances of Type I and Type II error, which are very relevant (lifesaving, even) to our use case!
Type I error (or a "false positive") is detecting an effect that is not present (e.g. determining a mushroom is poisonous when it is in fact edible).
Type II error (or a "false negative") is failing to detect an effect that is present (e.g. believing a mushroom is edible when it is in fact poisonous).
End of explanation |
6,971 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: Exploring the TF-Hub CORD-19 Swivel Embeddings
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Analyze the embeddings
Let's start off by analyzing the embedding by calculating and plotting a correlation matrix between different terms. If the embedding learned to successfully capture the meaning of different words, the embedding vectors of semantically similar words should be close together. Let's take a look at some COVID-19 related terms.
Step5: We can see that the embedding successfully captured the meaning of the different terms. Each word is similar to the other words of its cluster (i.e. "coronavirus" highly correlates with "SARS" and "MERS"), while they are different from terms of other clusters (i.e. the similarity between "SARS" and "Spain" is close to 0).
Now let's see how we can use these embeddings to solve a specific task.
SciCite
Step6: Training a citaton intent classifier
We'll train a classifier on the SciCite dataset using an Estimator. Let's set up the input_fns to read the dataset into the model
Step7: Let's build a model which use the CORD-19 embeddings with a classification layer on top.
Step8: Train and evaluate the model
Let's train and evaluate the model to see the performance on the SciCite task
Step9: We can see that the loss quickly decreases while especially the accuracy rapidly increases. Let's plot some examples to check how the prediction relates to the true labels | Python Code:
# Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: Copyright 2019 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
import functools
import itertools
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import pandas as pd
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()
tf.logging.set_verbosity('ERROR')
import tensorflow_datasets as tfds
import tensorflow_hub as hub
try:
from google.colab import data_table
def display_df(df):
return data_table.DataTable(df, include_index=False)
except ModuleNotFoundError:
# If google-colab is not available, just display the raw DataFrame
def display_df(df):
return df
Explanation: Exploring the TF-Hub CORD-19 Swivel Embeddings
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/hub/tutorials/cord_19_embeddings"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/cord_19_embeddings.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/hub/blob/master/examples/colab/cord_19_embeddings.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/hub/examples/colab/cord_19_embeddings.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/tensorflow/cord-19/swivel-128d/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a>
</td>
</table>
The CORD-19 Swivel text embedding module from TF-Hub (https://tfhub.dev/tensorflow/cord-19/swivel-128d/1)
was built to support researchers analyzing natural language text related to COVID-19.
These embeddings were trained on the titles, authors, abstracts, body texts, and
reference titles of articles in the CORD-19 dataset.
In this colab we will:
- Analyze semantically similar words in the embedding space
- Train a classifier on the SciCite dataset using the CORD-19 embeddings
Setup
End of explanation
# Use the inner product between two embedding vectors as the similarity measure
def plot_correlation(labels, features):
corr = np.inner(features, features)
corr /= np.max(corr)
sns.heatmap(corr, xticklabels=labels, yticklabels=labels)
with tf.Graph().as_default():
# Load the module
query_input = tf.placeholder(tf.string)
module = hub.Module('https://tfhub.dev/tensorflow/cord-19/swivel-128d/1')
embeddings = module(query_input)
with tf.train.MonitoredTrainingSession() as sess:
# Generate embeddings for some terms
queries = [
# Related viruses
"coronavirus", "SARS", "MERS",
# Regions
"Italy", "Spain", "Europe",
# Symptoms
"cough", "fever", "throat"
]
features = sess.run(embeddings, feed_dict={query_input: queries})
plot_correlation(queries, features)
Explanation: Analyze the embeddings
Let's start off by analyzing the embedding by calculating and plotting a correlation matrix between different terms. If the embedding learned to successfully capture the meaning of different words, the embedding vectors of semantically similar words should be close together. Let's take a look at some COVID-19 related terms.
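As a small extension of the same idea (a sketch that reuses the queries list and features matrix computed above), the closest terms to any query can be ranked by cosine similarity:
def most_similar(term, queries, features, top_k=3):
    # cosine similarity between one embedding and all the others
    idx = queries.index(term)
    norms = np.linalg.norm(features, axis=1)
    sims = features @ features[idx] / (norms * norms[idx])
    ranked = np.argsort(-sims)
    return [(queries[i], float(sims[i])) for i in ranked if i != idx][:top_k]
print(most_similar("coronavirus", queries, features))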
End of explanation
#@title Set up the dataset from TFDS
class Dataset:
  """Build a dataset from a TFDS dataset."""
def __init__(self, tfds_name, feature_name, label_name):
self.dataset_builder = tfds.builder(tfds_name)
self.dataset_builder.download_and_prepare()
self.feature_name = feature_name
self.label_name = label_name
def get_data(self, for_eval):
splits = THE_DATASET.dataset_builder.info.splits
if tfds.Split.TEST in splits:
split = tfds.Split.TEST if for_eval else tfds.Split.TRAIN
else:
SPLIT_PERCENT = 80
split = "train[{}%:]".format(SPLIT_PERCENT) if for_eval else "train[:{}%]".format(SPLIT_PERCENT)
return self.dataset_builder.as_dataset(split=split)
def num_classes(self):
return self.dataset_builder.info.features[self.label_name].num_classes
def class_names(self):
return self.dataset_builder.info.features[self.label_name].names
def preprocess_fn(self, data):
return data[self.feature_name], data[self.label_name]
def example_fn(self, data):
feature, label = self.preprocess_fn(data)
return {'feature': feature, 'label': label}, label
def get_example_data(dataset, num_examples, **data_kw):
  """Show example data."""
with tf.Session() as sess:
batched_ds = dataset.get_data(**data_kw).take(num_examples).map(dataset.preprocess_fn).batch(num_examples)
it = tf.data.make_one_shot_iterator(batched_ds).get_next()
data = sess.run(it)
return data
TFDS_NAME = 'scicite' #@param {type: "string"}
TEXT_FEATURE_NAME = 'string' #@param {type: "string"}
LABEL_NAME = 'label' #@param {type: "string"}
THE_DATASET = Dataset(TFDS_NAME, TEXT_FEATURE_NAME, LABEL_NAME)
#@title Let's take a look at a few labeled examples from the training set
NUM_EXAMPLES = 20 #@param {type:"integer"}
data = get_example_data(THE_DATASET, NUM_EXAMPLES, for_eval=False)
display_df(
pd.DataFrame({
TEXT_FEATURE_NAME: [ex.decode('utf8') for ex in data[0]],
LABEL_NAME: [THE_DATASET.class_names()[x] for x in data[1]]
}))
Explanation: We can see that the embedding successfully captured the meaning of the different terms. Each word is similar to the other words of its cluster (i.e. "coronavirus" highly correlates with "SARS" and "MERS"), while they are different from terms of other clusters (i.e. the similarity between "SARS" and "Spain" is close to 0).
Now let's see how we can use these embeddings to solve a specific task.
SciCite: Citation Intent Classification
This section shows how one can use the embedding for downstream tasks such as text classification. We'll use the SciCite dataset from TensorFlow Datasets to classify citation intents in academic papers. Given a sentence with a citation from an academic paper, classify whether the main intent of the citation is as background information, use of methods, or comparing results.
End of explanation
def preprocessed_input_fn(for_eval):
data = THE_DATASET.get_data(for_eval=for_eval)
data = data.map(THE_DATASET.example_fn, num_parallel_calls=1)
return data
def input_fn_train(params):
data = preprocessed_input_fn(for_eval=False)
data = data.repeat(None)
data = data.shuffle(1024)
data = data.batch(batch_size=params['batch_size'])
return data
def input_fn_eval(params):
data = preprocessed_input_fn(for_eval=True)
data = data.repeat(1)
data = data.batch(batch_size=params['batch_size'])
return data
def input_fn_predict(params):
data = preprocessed_input_fn(for_eval=True)
data = data.batch(batch_size=params['batch_size'])
return data
Explanation: Training a citation intent classifier
We'll train a classifier on the SciCite dataset using an Estimator. Let's set up the input_fns to read the dataset into the model
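As an optional sanity check (a sketch, not part of the original tutorial), one batch from input_fn_train can be materialized to confirm the shapes and types fed to the Estimator:
with tf.Session() as sess:
  batch = tf.data.make_one_shot_iterator(input_fn_train({'batch_size': 4})).get_next()
  batch_features, batch_labels = sess.run(batch)
  print(batch_features['feature'][:2])
  print(batch_labels[:2])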
End of explanation
def model_fn(features, labels, mode, params):
# Embed the text
embed = hub.Module(params['module_name'], trainable=params['trainable_module'])
embeddings = embed(features['feature'])
# Add a linear layer on top
logits = tf.layers.dense(
embeddings, units=THE_DATASET.num_classes(), activation=None)
predictions = tf.argmax(input=logits, axis=1)
if mode == tf.estimator.ModeKeys.PREDICT:
return tf.estimator.EstimatorSpec(
mode=mode,
predictions={
'logits': logits,
'predictions': predictions,
'features': features['feature'],
'labels': features['label']
})
# Set up a multi-class classification head
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
labels=labels, logits=logits)
loss = tf.reduce_mean(loss)
if mode == tf.estimator.ModeKeys.TRAIN:
optimizer = tf.train.GradientDescentOptimizer(learning_rate=params['learning_rate'])
train_op = optimizer.minimize(loss, global_step=tf.train.get_or_create_global_step())
return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)
elif mode == tf.estimator.ModeKeys.EVAL:
accuracy = tf.metrics.accuracy(labels=labels, predictions=predictions)
precision = tf.metrics.precision(labels=labels, predictions=predictions)
recall = tf.metrics.recall(labels=labels, predictions=predictions)
return tf.estimator.EstimatorSpec(
mode=mode,
loss=loss,
eval_metric_ops={
'accuracy': accuracy,
'precision': precision,
'recall': recall,
})
#@title Hyperparameters { run: "auto" }
EMBEDDING = 'https://tfhub.dev/tensorflow/cord-19/swivel-128d/1' #@param {type: "string"}
TRAINABLE_MODULE = False #@param {type: "boolean"}
STEPS = 8000#@param {type: "integer"}
EVAL_EVERY = 200 #@param {type: "integer"}
BATCH_SIZE = 10 #@param {type: "integer"}
LEARNING_RATE = 0.01 #@param {type: "number"}
params = {
'batch_size': BATCH_SIZE,
'learning_rate': LEARNING_RATE,
'module_name': EMBEDDING,
'trainable_module': TRAINABLE_MODULE
}
Explanation: Let's build a model which uses the CORD-19 embeddings with a classification layer on top.
End of explanation
estimator = tf.estimator.Estimator(functools.partial(model_fn, params=params))
metrics = []
for step in range(0, STEPS, EVAL_EVERY):
estimator.train(input_fn=functools.partial(input_fn_train, params=params), steps=EVAL_EVERY)
step_metrics = estimator.evaluate(input_fn=functools.partial(input_fn_eval, params=params))
print('Global step {}: loss {:.3f}, accuracy {:.3f}'.format(step, step_metrics['loss'], step_metrics['accuracy']))
metrics.append(step_metrics)
global_steps = [x['global_step'] for x in metrics]
fig, axes = plt.subplots(ncols=2, figsize=(20,8))
for axes_index, metric_names in enumerate([['accuracy', 'precision', 'recall'],
['loss']]):
for metric_name in metric_names:
axes[axes_index].plot(global_steps, [x[metric_name] for x in metrics], label=metric_name)
axes[axes_index].legend()
axes[axes_index].set_xlabel("Global Step")
Explanation: Train and evaluate the model
Let's train and evaluate the model to see the performance on the SciCite task
End of explanation
predictions = estimator.predict(functools.partial(input_fn_predict, params))
first_10_predictions = list(itertools.islice(predictions, 10))
display_df(
pd.DataFrame({
TEXT_FEATURE_NAME: [pred['features'].decode('utf8') for pred in first_10_predictions],
LABEL_NAME: [THE_DATASET.class_names()[pred['labels']] for pred in first_10_predictions],
'prediction': [THE_DATASET.class_names()[pred['predictions']] for pred in first_10_predictions]
}))
Explanation: We can see that the loss quickly decreases and, in particular, the accuracy rapidly increases. Let's look at a few examples to check how the predictions relate to the true labels:
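For a more aggregate view than ten rows (a sketch reusing estimator, input_fn_predict and params from above), predictions over the whole eval split can be tabulated against the true labels:
from collections import Counter
pair_counts = Counter(
    (THE_DATASET.class_names()[p['labels']], THE_DATASET.class_names()[p['predictions']])
    for p in estimator.predict(functools.partial(input_fn_predict, params)))
for (true_label, predicted_label), count in sorted(pair_counts.items()):
  print('true={:12s} predicted={:12s} count={}'.format(true_label, predicted_label, count))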
End of explanation |
6,972 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting Started
This notebook gives a whirlwind overview of the ionchannelABC library and can be used for testing purposes of a first installation. The notebook follows the workflow for parameter inference of a generic T-type Ca2+ channel model.
It is recommended to have some understanding of ion channel models, voltage clamp protocols and fundamentals of the Approximate Bayesian Computation algorithm before working through this notebook. Wikipedia and the pyabc documentation will likely be sufficient.
Step1: Setting up an ion channel model and experiments
First we need to load in a cell model. We use IonChannelModel, which is a wrapper around the myokit simulation functionality which handles compilation of the model for use with the pyabc library. The model loads a MMT file which is a description of the mathematics behind the opening/closing of activation/inactivation gates in myokit format (see https
Step2: Now that we have loaded a cell model, we need to specify how we will test it to compare with experimental data. We use the ExperimentData and ExperimentStimProtocol classes to specify the experimental dataset and experimental protocol respectively. These are then combined in the Experiment class. The data is specified in a separate .py file with functions to return the x, y and, if available, error bars extracted from graphs.
We show an example using T-type Ca2+ channel peak current density at a range of activating voltage steps in HL-1 myocytes from Nguyen et al, STIM1 participates in the contractile rhythmicity of HL-1 cells by moderating T-type Ca(2+) channel activity, 2013.
Step3: The stimulation protocol is defined from the experimental methods of the data source. It should be replicated as close as possible to reproduce experimental conditions. This example shows a standard 'I-V curve' testing peak current density at different voltage steps from a resting potential. The transmembrane potential is held at a resting potential of -75mV for sufficient time for the channel to reach its steady-state (we assume 5000ms here), it is stepped to each test potential for 300ms and then returned to the resting potential.
Step4: Having defined what we are doing with the model, we need to define what we do with the simulation data and which part of the protocol (i.e. index of stim_times and stim_levels) we are interested in extracting the data from. The simulation will return a list of pandas.Dataframe containing each of logvars defined in the ion channel model declaration. Here, we want to reduce this data to just the peak current density at the step potential (i.e. index 1 in stim_times and stim_levels). Our list will only have length 1 because we are only interested in data from this point in the protocol, but more complex protocols may return longer lists.
Step5: The final key part of defining the experiment is the experimental conditions, which includes extra/intracellular ion concentrations and temperature reported in the data source. Here, the dictionary keys refer to variables in the [membrane] field of the MMT ion channel definition file.
We can then combine the previous steps in a single Experiment.
Step6: We then add the experiment to the IonChannelModel defined previously. We can test it runs using the sample method with default parameters to debug any problems at this stage.
Step7: The plot_sim_results function makes it easy to plot the output of simulations.
Step8: Clearly the default parameters in the MMT file are not quite right, but we are able to run the simulation and compare to the results.
In practice, the ion channel setup and model experiments can be defined in a separate .py file and loaded in a single step, which we will do below for the next step. Examples are contained in the channel examples folder. By plotting, we can see that 6 separate experiments have been defined.
Step9: Setting up parameter inference for the defined model
Next we need to specify which parameters in our ion channel model should be varied during the parameter inference step. We do this by defining a prior distribution for each parameter in the MMT file we want to vary. The width of the prior distribution should be sufficient to reduce bias while incorporating specific knowledge about the model structure (i.e. if a parameter should be defined positive or in a reasonable range). A good rule-of-thumb is to use an order of magnitude around a parameter value in a previously published model of the channel, but the width can be increased in future runs of the ABC algorithm.
Step10: We can now define additional requirements for the ABC-SMC algorithm. We need a distance function to measure how well our model can approximate experimental data.
The IonChannelDistance class implements a weighted Euclidean distance function. The weight assigned to each data point accounts for the separate experiments (i.e. we do not want to over-fit to behaviour of an experiment just because it has a greater number of data points), the scale of the dependent variable in each experiment, and the size of errors bars in the experimental data (i.e. if we prefer the model to reproduce more closely data points with a lower level of uncertainty).
We can see how this corresponds to the data we are using in this example by plotting the data points using plot_distance_weights.
Step11: We also need to assign a database file for the pyabc implementation of the ABC-SMC algorithm to store information about the ABC particles at intermediate steps as it runs. A temporary location with sufficient storage is a good choice as these files can become quite large for long ABC runs. This can be defined by setting the $TMPDIR environment variable as described in the installation instructions.
The "sqlite
Step12: Running the ABC algorithm
We are now ready to run parameter inference on our ion channel model.
Before starting the algorithm, it is good practice to enable logging options to help any debugging which may be necessary. The default options below should be sufficient.
Step13: ABCSMC from the pyabc library is the main class used for the algorithm. It initialises with a number of options which are well described in the pyabc documentation. Note we initialize some of the passed objects at this stage and do not pass in pre-initialised variables, particulary for the distance function.
A brief description is given below to key options
Step14: The algorithm is initialised and run as specified in pyabc documentation. These lines are not set to run as the algorithm can take several hours to days to finish for large models. Following steps will use a previous run example.
abc_id = abc.new(db_path, obs)
history = abc.run(minimum_epsilon=0.1,
max_nr_populations=20,
min_acceptance_rate=0.01)
Analysing the results
Once the ABC run is complete, we have a number of custom plotting function to analyse the History object output from the running the ABC algorithm.
A compressed example database file is can be found here. On Linux, this can be extracted to the original .db format using tar -xcvf hl-1_icat-generic.tgz.
Firstly, we can load a previously run example file.
Step15: First we can check the convergence of the epsilon value over iterations of the ABC algorithm.
Step16: We can check the posterior distribution of parameters for this model using the plot_parameters_kde function. This can highlight any parameters which were unidentifiable given the available experimental data.
Step17: We can generate some samples of model output using the posterior distribution of parameters to observe the effect on model output. We first create a sampling dataset then use the plot_sim_results function.
Step18: In this example, we see low variation of the model output around the experimental data across experiments. However, are all parameters well identified? (Consider the KDE posterior parameter distribution plot).
Finally, if we want to output quantitative measurements of the channel model we can interrogate out sampled dataset. For example, we can find the peak current density from the first experiment.
Step19: Or if we are interested in the voltage at which the peak current occurs.
Step20: That concludes the main portion of this introduction. Further functionality is included below. For further examples of using the library, see the additional notebooks included for multiple HL-1 cardiac myocyte ion channels in the docs/examples folder.
Extra
Step21: The calculate_parameter_sensitivity function carries out the calculations, and the output can be analysed using the plot_parameter_sensitivity and plot_regression_fit functions.
Step22: See Sobie et al, 2009 for an interpretation of the beta values and goodness-of-fit plots. In summary, a high beta value indicates the model has high sensitivity to changes in that parameter for a particular experiment protocol. However, this is conditional on a reasonable goodness-of-fit indicating the multivariable regression model is valid within this small pertubation space. | Python Code:
# Importing standard libraries
import numpy as np
import pandas as pd
Explanation: Getting Started
This notebook gives a whirlwind overview of the ionchannelABC library and can be used for testing purposes of a first installation. The notebook follows the workflow for parameter inference of a generic T-type Ca2+ channel model.
It is recommended to have some understanding of ion channel models, voltage clamp protocols and fundamentals of the Approximate Bayesian Computation algorithm before working through this notebook. Wikipedia and the pyabc documentation will likely be sufficient.
End of explanation
from ionchannelABC import IonChannelModel
icat = IonChannelModel('icat',
'models/Generic_iCaT.mmt',
vvar='membrane.V',
logvars=['environment.time',
'icat.G_CaT',
'icat.i_CaT'])
Explanation: Setting up an ion channel model and experiments
First we need to load in a cell model. We use IonChannelModel, which is a wrapper around the myokit simulation functionality which handles compilation of the model for use with the pyabc library. The model loads a MMT file which is a description of the mathematics behind the opening/closing of activation/inactivation gates in myokit format (see https://myokit.readthedocs.io/syntax/model.html). We also need to specify the independent variable name in the MMT file (generally transmembrane voltage) and a list of variables we want to log from simulations.
End of explanation
import data.icat.data_icat as data
from ionchannelABC import (Experiment,
ExperimentData,
ExperimentStimProtocol)
vsteps, peak_curr, errs, N = data.IV_Nguyen()
nguyen_data = ExperimentData(x=vsteps, y=peak_curr,
N=N, errs=errs,
err_type='SEM') # this flag is currently not used but may change in future version
Explanation: Now that we have loaded a cell model, we need to specify how we will test it to compare with experimental data. We use the ExperimentData and ExperimentStimProtocol classes to specify the experimental dataset and experimental protocol respectively. These are then combined in the Experiment class. The data is specified in a separate .py file with functions to return the x, y and, if available, error bars extracted from graphs.
We show an example using T-type Ca2+ channel peak current density at a range of activating voltage steps in HL-1 myocytes from Nguyen et al, STIM1 participates in the contractile rhythmicity of HL-1 cells by moderating T-type Ca(2+) channel activity, 2013.
End of explanation
stim_times = [5000, 300, 500] # describes the course of one voltage step in time
stim_levels = [-75, vsteps, -75] # each entry of levels corresponds to the time above
Explanation: The stimulation protocol is defined from the experimental methods of the data source. It should be replicated as close as possible to reproduce experimental conditions. This example shows a standard 'I-V curve' testing peak current density at different voltage steps from a resting potential. The transmembrane potential is held at a resting potential of -75mV for sufficient time for the channel to reach its steady-state (we assume 5000ms here), it is stepped to each test potential for 300ms and then returned to the resting potential.
End of explanation
def max_icat(data):
return max(data[0]['icat.i_CaT'], key=abs)
nguyen_protocol = ExperimentStimProtocol(stim_times,
stim_levels,
measure_index=1, # index from `stim_times` and `stim_levels`
measure_fn=max_icat)
Explanation: Having defined what we are doing with the model, we need to define what we do with the simulation data and which part of the protocol (i.e. index of stim_times and stim_levels) we are interested in extracting the data from. The simulation will return a list of pandas.Dataframe containing each of logvars defined in the ion channel model declaration. Here, we want to reduce this data to just the peak current density at the step potential (i.e. index 1 in stim_times and stim_levels). Our list will only have length 1 because we are only interested in data from this point in the protocol, but more complex protocols may return longer lists.
End of explanation
nguyen_conditions = dict(Ca_o=5000, # extracellular Ca2+ concentration of 5000uM
Ca_subSL=0.2, # sub-sarcolemmal (i.e. intracellular) Ca2+ concentration of 0.2uM
T=295) # experiment temperature of 295K
nguyen_experiment = Experiment(nguyen_protocol, nguyen_data, nguyen_conditions)
Explanation: The final key part of defining the experiment is the experimental conditions, which includes extra/intracellular ion concentrations and temperature reported in the data source. Here, the dictionary keys refer to variables in the [membrane] field of the MMT ion channel definition file.
We can then combine the previous steps in a single Experiment.
End of explanation
icat.add_experiments([nguyen_experiment])
test = icat.sample({}) # empty dictionary as we are not overwriting any of the parameters in the model definition yet
Explanation: We then add the experiment to the IonChannelModel defined previously. We can test it runs using the sample method with default parameters to debug any problems at this stage.
End of explanation
import matplotlib.pyplot as plt
import seaborn as sns
from ionchannelABC import plot_sim_results
%matplotlib inline
plot_sim_results(test, obs=icat.get_experiment_data())
Explanation: The plot_sim_results function makes it easy to plot the output of simulations.
End of explanation
from channels.icat_generic import icat as model
test = model.sample({})
plot_sim_results(test, obs=model.get_experiment_data())
Explanation: Clearly the default parameters in the MMT file are not quite right, but we are able to run the simulation and compare to the results.
In practice, the ion channel setup and model experiments can be defined in a separate .py file and loaded in a single step, which we will do below for the next step. Examples are contained in the channel examples folder. By plotting, we can see that 6 separate experiments have been defined.
End of explanation
from pyabc import (RV, Distribution) # we use two classes from the pyabc library for this definition
limits = dict(g_CaT=(0, 2), # these parameter keys are specific to the icat model being investigated
v_offset=(0, 500),
Vhalf_b=(-100, 100),
k_b=(0, 10),
c_bb=(0, 10),
c_ab=(0, 100),
sigma_b=(0, 100),
Vmax_b=(-100, 100),
Vhalf_g=(-100, 100),
k_g=(-10, 0),
c_bg=(0, 50),
c_ag=(0, 500),
sigma_g=(0, 100),
Vmax_g=(-100, 100))
prior = Distribution(**{key: RV("uniform", a, b - a)
for key, (a,b) in limits.items()})
Explanation: Setting up parameter inference for the defined model
Next we need to specify which parameters in our ion channel model should be varied during the parameter inference step. We do this by defining a prior distribution for each parameter in the MMT file we want to vary. The width of the prior distribution should be sufficient to reduce bias while incorporating specific knowledge about the model structure (i.e. if a parameter should be defined positive or in a reasonable range). A good rule-of-thumb is to use an order of magnitude around a parameter value in a previously published model of the channel, but the width can be increased in future runs of the ABC algorithm.
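For example (a sketch with a made-up published value, separate from the limits dictionary used here), a parameter reported as 0.4 in the literature could be given a uniform prior spanning roughly an order of magnitude around it:
published_value = 0.4  # hypothetical literature value for g_CaT
lower, upper = 0.1 * published_value, 10 * published_value
example_prior = Distribution(g_CaT=RV("uniform", lower, upper - lower))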
End of explanation
from ionchannelABC import (IonChannelDistance, plot_distance_weights)
measurements = model.get_experiment_data()
obs = measurements.to_dict()['y']
exp = measurements.to_dict()['exp']
errs = measurements.to_dict()['errs']
distance_fn = IonChannelDistance(obs=obs, exp_map=exp, err_bars=errs, err_th=0.1)
plot_distance_weights(model, distance_fn)
Explanation: We can now define additional requirements for the ABC-SMC algorithm. We need a distance function to measure how well our model can approximate experimental data.
The IonChannelDistance class implements a weighted Euclidean distance function. The weight assigned to each data point accounts for the separate experiments (i.e. we do not want to over-fit to behaviour of an experiment just because it has a greater number of data points), the scale of the dependent variable in each experiment, and the size of errors bars in the experimental data (i.e. if we prefer the model to reproduce more closely data points with a lower level of uncertainty).
We can see how this corresponds to the data we are using in this example by plotting the data points using plot_distance_weights.
End of explanation
import tempfile, os
db_path = ("sqlite:///" +
os.path.join(tempfile.gettempdir(), "example.db"))
print(db_path)
Explanation: We also need to assign a database file for the pyabc implementation of the ABC-SMC algorithm to store information about the ABC particles at intermediate steps as it runs. A temporary location with sufficient storage is a good choice as these files can become quite large for long ABC runs. This can be defined by setting the $TMPDIR environment variable as described in the installation instructions.
The "sqlite:///" at the start of the path is necessary for database access.
End of explanation
import logging
logging.basicConfig()
abc_logger = logging.getLogger('ABC')
abc_logger.setLevel(logging.DEBUG)
eps_logger = logging.getLogger('Epsilon')
eps_logger.setLevel(logging.DEBUG)
cv_logger = logging.getLogger('CV Estimation')
cv_logger.setLevel(logging.DEBUG)
Explanation: Running the ABC algorithm
We are now ready to run parameter inference on our ion channel model.
Before starting the algorithm, it is good practice to enable logging options to help any debugging which may be necessary. The default options below should be sufficient.
End of explanation
from pyabc import ABCSMC
from pyabc.epsilon import MedianEpsilon
from pyabc.populationstrategy import ConstantPopulationSize
from pyabc.sampler import MulticoreEvalParallelSampler
from ionchannelABC import (ion_channel_sum_stats_calculator,
IonChannelAcceptor,
IonChannelDistance,
EfficientMultivariateNormalTransition)
abc = ABCSMC(models=model,
parameter_priors=prior,
distance_function=IonChannelDistance(
obs=obs,
exp_map=exp,
err_bars=errs,
err_th=0.1),
population_size=ConstantPopulationSize(1000),
summary_statistics=ion_channel_sum_stats_calculator,
transitions=EfficientMultivariateNormalTransition(),
eps=MedianEpsilon(),
sampler=MulticoreEvalParallelSampler(n_procs=12),
acceptor=IonChannelAcceptor())
Explanation: ABCSMC from the pyabc library is the main class used for the algorithm. It initialises with a number of options which are well described in the pyabc documentation. Note we initialize some of the passed objects at this stage and do not pass in pre-initialised variables, particulary for the distance function.
A brief description is given below to key options:
* population_size: Number of particles to use in the ABC algorithm. pyabc ConstantPopulationSize and AdaptivePopulationSize have been tested. Unless adaptive population size is explicitly required, it is recommended to use a constant particle population with sufficient population for the size of the model being tested to avoid parameter distributions collapsing on single point estimates. For this example, we will use 2000, however up to 5000 particles has been tested on more complex models. Larger particle populations will increase algorithm run times.
* summary_statistics: Function to convert raw output from the model into an appropriate format for calculating distance. Use the custom implementation of ion_channel_sum_stats_calculator.
* transitions: pyabc Transition object for pertubation of particles at each algorithm step. Use custom implementation of EfficientMultivariateNormalTransition.
* eps: pyabc Epsilon object defining how acceptance threshold is adapted over iterations. Generally use MedianEpsilon for the median distance of the previous iterations accepted particles.
* sampler: Can be used to specify the number of parallel processes to initiate. Only pyabc MulticoreEvalParallelSampler has been tested. If on local machine, initiate with default parameters. If using computing cluster, the parameter n_procs can specify how many processes to initiate (12 is a good starting point). Warning: increasing the number of processes will not necessarily speed up the algorithm.
* acceptor: pyabc Acceptor object decides which particles to allow to pass to the next iteration. Use custom implementation IonChannelAcceptor.
End of explanation
from pyabc import History
history = History('sqlite:///results/icat-generic/hl-1_icat-generic.db')
history.all_runs()
df, w = history.get_distribution(m=0)
Explanation: The algorithm is initialised and run as specified in pyabc documentation. These lines are not set to run as the algorithm can take several hours to days to finish for large models. Following steps will use a previous run example.
abc_id = abc.new(db_path, obs)
history = abc.run(minimum_epsilon=0.1,
max_nr_populations=20,
min_acceptance_rate=0.01)
Analysing the results
Once the ABC run is complete, we have a number of custom plotting function to analyse the History object output from the running the ABC algorithm.
A compressed example database file is can be found here. On Linux, this can be extracted to the original .db format using tar -xcvf hl-1_icat-generic.tgz.
Firstly, we can load a previously run example file.
End of explanation
evolution = history.get_all_populations()
sns.relplot(x='t', y='epsilon', size='samples', data=evolution[evolution.t>=0])
Explanation: First we can check the convergence of the epsilon value over iterations of the ABC algorithm.
End of explanation
from ionchannelABC import plot_parameters_kde
plot_parameters_kde(df, w, limits, aspect=12, height=0.8)
Explanation: We can check the posterior distribution of parameters for this model using the plot_parameters_kde function. This can highlight any parameters which were unidentifiable given the available experimental data.
End of explanation
n_samples = 10 # increasing this number will produce a better approximation to the true output, recommended: >= 100
# we keep 10 to keep running time low
parameter_samples = df.sample(n=n_samples, weights=w, replace=True)
parameter_samples.head()
parameter_samples = parameter_samples.to_dict(orient='records')
samples = pd.DataFrame({})
for i, theta in enumerate(parameter_samples):
output = model.sample(pars=theta, n_x=50) # n_x changes the resolution of the independent variable
# sometimes this can cause problems with output tending to zero/inf at
# (e.g.) exact reversal potential of the channel model
output['sample'] = i
output['distribution'] = 'posterior'
samples = samples.append(output, ignore_index=True)
g = plot_sim_results(samples, obs=measurements)
xlabels = ["voltage, mV", "voltage, mV", "voltage, mV", "time, ms", "time, ms","voltage, mV"]
ylabels = ["current density, pA/pF", "activation", "inactivation", "recovery", "normalised current","current density, pA/pF"]
for ax, xl in zip(g.axes.flatten(), xlabels):
ax.set_xlabel(xl)
for ax, yl in zip(g.axes.flatten(), ylabels):
ax.set_ylabel(yl)
Explanation: We can generate some samples of model output using the posterior distribution of parameters to observe the effect on model output. We first create a sampling dataset then use the plot_sim_results function.
End of explanation
peak_curr_mean = np.mean(samples[samples.exp==0].groupby('sample').min()['y'])
peak_curr_std = np.std(samples[samples.exp==0].groupby('sample').min()['y'])
print('Peak current density: {0:4.2f} +/- {1:4.2f} pA/pF'.format(peak_curr_mean, peak_curr_std))
Explanation: In this example, we see low variation of the model output around the experimental data across experiments. However, are all parameters well identified? (Consider the KDE posterior parameter distribution plot).
Finally, if we want to output quantitative measurements of the channel model we can interrogate our sampled dataset. For example, we can find the peak current density from the first experiment.
End of explanation
peak_curr_V_indices = samples[samples.exp==0].groupby('sample').idxmin()['y']
peak_curr_V_mean = np.mean(samples.iloc[peak_curr_V_indices]['x'])
peak_curr_V_std = np.std(samples.iloc[peak_curr_V_indices]['x'])
print('Voltage of peak current density: {0:4.2f} +/- {1:4.2f} mV'.format(peak_curr_V_mean, peak_curr_V_std))
Explanation: Or if we are interested in the voltage at which the peak current occurs.
End of explanation
distance_fn = IonChannelDistance(obs=obs,
exp_map=exp,
err_bars=errs,
err_th=0.1)
parameters = ['icat.'+k for k in limits.keys()]
print(parameters)
Explanation: That concludes the main portion of this introduction. Further functionality is included below. For further examples of using the library, see the additional notebooks included for multiple HL-1 cardiac myocyte ion channels in the docs/examples folder.
Extra: Parameter sensitivity
The ionchannelABC library also includes functionality to test the sensitivity of a model to its parameters. This could be used to test which parameters we may expect to be unidentifiable in the ABC algorithm and would generally be carried out before the ABC algorithm is run.
The parameter sensitivity analysis is based on Sobie et al, Parameter sensitivity analysis in electrophysiological models using multivariable regression, 2009.
First, we need to define the distance function used and a list of the full name (including field in the MMT file) of parameters being passed to ABC.
End of explanation
from ionchannelABC import (calculate_parameter_sensitivity,
plot_parameter_sensitivity,
plot_regression_fit)
fitted, regression_fit, r2 = calculate_parameter_sensitivity(
model,
parameters,
distance_fn,
sigma=0.05, # affects how far parameters are perturbed from original values to test sensitivity
n_samples=20) # set to reduced value for demonstration, typically around 1000 in practical use
Explanation: The calculate_parameter_sensitivity function carries out the calculations, and the output can be analysed using the plot_parameter_sensitivity and plot_regression_fit functions.
End of explanation
plot_parameter_sensitivity(fitted, plot_cutoff=0.05)
plot_regression_fit(regression_fit, r2)
Explanation: See Sobie et al, 2009 for an interpretation of the beta values and goodness-of-fit plots. In summary, a high beta value indicates the model has high sensitivity to changes in that parameter for a particular experiment protocol. However, this is conditional on a reasonable goodness-of-fit indicating the multivariable regression model is valid within this small pertubation space.
End of explanation |
6,973 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Neural Networks
Lecture 3.
Augmenting the width (number of neurons) of the network allows to take more different combinations of the inputs, so somehow increases the dimensionality of the of the inputs. On the other hand, increasing the depth of the NN (number of layers) allows to take more different trasformations of the inputs.
Step1: Demo of world's simplest NN
Step2: Ingredients for backpropagation
Implement two simple modules that know how to compute their "local" gradients. Then build up
a more complicated module that contains those two. We can treat each successive module
as a black-box that just "magically" knows how to analytically compute the gradients.
Below we implement $f(x, y, z) = (x+y)z$ and compute the gradient.
Step3: We can now zoom out, make a "f module", that contains some differentiable magic on the inside and stop caring how it works
Step4: Neural networks in scikit-learn
scikit-learn contains a simple neural network implementation. It is not meant to serve the needs of deep learning, but works well for small problems and is easy to use.
Step5: For any serious Neural Networking use keras
https
Step6: Keras examples
Step7: Comparing a fully connected layer and a Conv layer | Python Code:
%config InlineBackend.figure_format='retina'
%matplotlib inline
# Silence warnings
import warnings
warnings.simplefilter(action="ignore", category=FutureWarning)
warnings.simplefilter(action="ignore", category=UserWarning)
warnings.simplefilter(action="ignore", category=RuntimeWarning)
import numpy as np
np.random.seed(123)
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (8, 8)
plt.rcParams["font.size"] = 14
Explanation: Neural Networks
Lecture 3.
Augmenting the width (number of neurons) of the network lets it form more combinations of the inputs, effectively increasing the dimensionality of the representation. On the other hand, increasing the depth of the NN (number of layers) lets it apply more successive transformations of the inputs.
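As a rough illustration of the two choices (a sketch, not from the lecture itself), scikit-learn expresses them through the hidden_layer_sizes tuple used later in this notebook:
from sklearn.neural_network import MLPRegressor
wider  = MLPRegressor(hidden_layer_sizes=(128,))        # one hidden layer with many neurons
deeper = MLPRegressor(hidden_layer_sizes=(16, 16, 16))  # several hidden layers with fewer neurons each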
End of explanation
from sklearn.neural_network import MLPRegressor
np.random.seed(12345)
X = np.random.normal(scale=2, size=(4*400,1))
y = X[:, 0]
clf = MLPRegressor(hidden_layer_sizes=(1,),
validation_fraction=0.2, tol=1e-9, max_iter=3200,
solver='sgd',
learning_rate_init=0.001,
#learning_rate_init=0.01,
#learning_rate_init=0.5,
momentum=0,
activation='tanh',
verbose=False, random_state=2)
# hidden_layer_sizes=(40, 40, 20) is a tuple whose length is the number of INNER layers; each entry specifies the number of neurons in that INNER layer.
clf.fit(X, y)
# The total number of layers is len(hidden_layer_sizes) + 2, since the first layer is the input layer and the last is the output layer.
print(clf.n_layers_, clf.n_outputs_)
print(clf.loss_)
plt.plot(clf.loss_curve_)
plt.xlabel('Iteration')
plt.ylabel('Loss');
clf.predict([[0.], [1.5], [-1.4]])
Explanation: Demo of world's simplest NN
End of explanation
class Multiply:
def forward(self, x, y):
self.x = x
self.y = y
return x * y
def backward(self, dLdz):
dzdx = self.y
dLdx = dLdz * dzdx
dzdy = self.x
dLdy = dLdz * dzdy
return [dLdx, dLdy]
class Add:
def forward(self, x, y):
self.x = x
self.y = y
return x + y
def backward(self, dLdz):
dzdy = 1
dzdx = 1
return [dLdz * dzdy, dLdz * dzdx]
def f_with_gradients(x, y, z):
# create our operators
q = Add()
f = Multiply()
# feed inputs into the summer first, then do multiplication
# this builds our computational graph
q_out = q.forward(x, y)
f_out = f.forward(q_out, z)
# this one is somehow weird ... but hey.
# step backwards through our graph to compute the gradients
grad_f = f.backward(1.)
grad_q = q.backward(grad_f[0])
# sort our gradients so we have [df/dx, df/dy, df/dz]
gradients = [grad_q[0], grad_q[1], grad_f[1]]
return f_out, gradients
f_with_gradients(-2, 5, -4)
Explanation: Ingredients for backpropagation
Implement two simple modules that know how to compute their "local" gradients. Then build up
a more complicated module that contains those two. We can treat each successive module
as a black-box that just "magically" knows how to analytically compute the gradients.
Below we implement $f(x, y, z) = (x+y)z$ and compute the gradient.
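For reference, the analytic gradients worked out by hand are $\partial f/\partial x = z$, $\partial f/\partial y = z$ and $\partial f/\partial z = x + y$, so at $(x, y, z) = (-2, 5, -4)$ the expected gradient is $[-4, -4, 3]$, which is exactly what f_with_gradients(-2, 5, -4) returns.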
End of explanation
class F:
def forward(self, x, y, z):
self.q = Add()
self.f = Multiply()
self.q_out = self.q.forward(x, y)
self.f_out = self.f.forward(self.q_out, z)
return self.f_out
def backward(self, dfdz):
grad_f = self.f.backward(dfdz)
grad_q = self.q.backward(grad_f[0])
return [grad_q[0], grad_q[1], grad_f[1]]
f = F()
print('f(x, y, z) = ', f.forward(-2, 5, -4))
print('[df/dx, df/dy, df/dz] = ', f.backward(1))
Explanation: We can now zoom out, make an "f module" that contains some differentiable magic on the inside, and stop caring how it works:
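One optional way to gain confidence in the backward pass (a sketch, not in the original lecture) is to compare it against central finite differences:
def numerical_gradient(func, inputs, eps=1e-6):
    # central finite differences, perturbing one input at a time
    grads = []
    for i in range(len(inputs)):
        plus, minus = list(inputs), list(inputs)
        plus[i] += eps
        minus[i] -= eps
        grads.append((func(*plus) - func(*minus)) / (2 * eps))
    return grads
check = F()
check.forward(-2, 5, -4)
print('analytic :', check.backward(1))
print('numerical:', numerical_gradient(lambda x, y, z: (x + y) * z, [-2, 5, -4]))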
End of explanation
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_blobs
from utils import plot_surface
labels = ["b", "r"]
X, y = make_blobs(n_samples=400, centers=23, random_state=42)
y = np.take(labels, (y < 10))
clf = MLPClassifier(hidden_layer_sizes=(40, 40, 20), early_stopping=True,
validation_fraction=0.2,
activation='relu')
clf.fit(X, y)
plot_surface(clf, X, y)
Explanation: Neural networks in scikit-learn
scikit-learn contains a simple neural network implementation. It is not meant to serve the needs of deep learning, but works well for small problems and is easy to use.
End of explanation
## world's simplest NN with keras
from keras.models import Sequential
from keras.losses import mean_squared_error
from keras.layers import Dense, Activation
from keras.optimizers import SGD
from keras.initializers import RandomUniform
# construct a model as close as possible to the one scikit-learn uses
model = Sequential()
model.add(Dense(units=1, input_dim=1,
bias_initializer=RandomUniform(minval=-3**0.5, maxval=3**0.5)
))
model.add(Activation('tanh'))
# for regression the last layer in sklearn is the identity
model.add(Dense(units=1))
model.compile(loss=mean_squared_error,
optimizer=SGD(lr=0.001))
np.random.seed(12345)
X = np.random.normal(scale=2, size=(1600,1))
y = X[:, 0]
history = model.fit(X, y, epochs=3200, batch_size=200, validation_split=0.2, verbose=False)
print('minimum loss:', np.min(history.history['val_loss']))
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['val_loss'], label='validation loss');
plt.legend(loc='best');
model.predict([[0.], [1.5], [-1.4]])
Explanation: For any serious Neural Networking use keras
https://keras.io/ is a library that implements all the cool and useful layers and optimizers
that are used in today's deep learning. Importantly it uses code written in C instead of
python so it is fast (and if you have a GPU it will run on that which is even faster).
End of explanation
model.count_params()
model.summary()
Explanation: Keras examples:
* digit recognition with a fully connected NN: https://github.com/fchollet/keras/blob/master/examples/mnist_mlp.py
* digit recognition with a ConvNet: https://github.com/fchollet/keras/blob/master/examples/mnist_cnn.py
* ConvNet for Cifar10 https://github.com/fchollet/keras/blob/master/examples/cifar10_cnn.py
Counting parameters
Because I am too lazy to do it in my head.
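A small helper (a sketch added here, not part of the lecture) makes the by-hand counting explicit for any stack of Dense layers: each layer contributes inputs * units weights plus units biases:
def dense_param_count(layer_sizes):
    # layer_sizes includes the input size, e.g. [32*32, 4] for the fully connected example in the next cell
    return sum(n_in * n_out + n_out for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:]))
print(dense_param_count([32 * 32, 4]))  # 4100, matching the manual count below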
End of explanation
from keras.layers import Conv2D
fc = Sequential()
fc.add(Dense(units=4, input_dim=32*32))
fc.add(Activation('relu'))
fc.summary()
nodes = 4
32*32*nodes + nodes
cnn = Sequential()
cnn.add(Conv2D(4, (3, 3), input_shape=(32, 32, 1))) # 32x32 picture with one channel
# 4 -> filters: the number of convolution filters (output channels)
# (3, 3) -> kernel_size: the height and width of the 2D convolution window (this argument is the kernel size, not the strides)
cnn.add(Activation('relu'))
cnn.summary()
n_filters = 4
n_filters * 3*3 + n_filters
Explanation: Comparing a fully connected layer and a Conv layer
End of explanation |
6,974 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Import required python package and set the Cloudant credentials
flightPredict is a helper package used to train and run Spark MLlib models for predicting flight delays based on weather data
Step1: load data from training data set and print the schema
Step2: Visualize classes in scatter plot based on 2 features
Step3: Load the training data as an RDD of LabeledPoint
Step4: Train multiple classification models
Step5: Load Test data from Cloudant database and compute accuracy metrics
Step6: Accuracy analysis and model refinement
Run Histogram to refine classification
Step7: Customize classification using Training Handler class extension
Add new features to the model
Re-build the models
Re-compute accuracy metrics
Step8: Run the predictive model
runModel(departureAirportCode, departureDateTime, arrivalAirportCode, arrivalDateTime)
Note | Python Code:
sc.addPyFile("https://github.com/ibm-watson-data-lab/simple-data-pipe-connector-flightstats/raw/master/flightPredict/training.py")
sc.addPyFile("https://github.com/ibm-watson-data-lab/simple-data-pipe-connector-flightstats/raw/master/flightPredict/run.py")
import training
import run
%matplotlib inline
from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.linalg import Vectors
from numpy import array
import numpy as np
import math
from datetime import datetime
from dateutil import parser
sqlContext=SQLContext(sc)
training.sqlContext = sqlContext
training.cloudantHost='dtaieb.cloudant.com'
training.cloudantUserName='weenesserliffircedinvers'
training.cloudantPassword='72a5c4f939a9e2578698029d2bb041d775d088b5'
training.weatherUrl='https://4b88408f-11e5-4ddc-91a6-fbd442e84879:[email protected]'
Explanation: Import required python package and set the Cloudant credentials
flightPredict is a helper package used to train and run Spark MLlib models for predicting flight delays based on weather data
End of explanation
dbName="pycon_flightpredict_training_set"
%time cloudantdata = training.loadDataSet(dbName,"training")
%time cloudantdata.printSchema()
%time cloudantdata.count()
Explanation: load data from training data set and print the schema
End of explanation
training.scatterPlotForFeatures(cloudantdata, \
"departureWeather.temp","arrivalWeather.temp","Departure Airport Temp", "Arrival Airport Temp")
training.scatterPlotForFeatures(cloudantdata,\
"departureWeather.pressure","arrivalWeather.pressure","Departure Airport Pressure", "Arrival Airport Pressure")
training.scatterPlotForFeatures(cloudantdata,\
"departureWeather.wspd","arrivalWeather.wspd","Departure Airport Wind Speed", "Arrival Airport Wind Speed")
Explanation: Visualize classes in scatter plot based on 2 features
End of explanation
trainingData = training.loadLabeledDataRDD("training")
trainingData.take(5)
Explanation: Load the training data as an RDD of LabeledPoint
End of explanation
from pyspark.mllib.classification import LogisticRegressionWithLBFGS
logRegModel = LogisticRegressionWithLBFGS.train(trainingData.map(lambda lp: LabeledPoint(lp.label,\
np.fromiter(map(lambda x: 0.0 if np.isnan(x) else x,lp.features.toArray()),dtype=np.double )))\
, iterations=1000, validateData=False, intercept=False)
print(logRegModel)
from pyspark.mllib.classification import NaiveBayes
# NaiveBayes requires non-negative features, so negative values are set to 0 for now
modelNaiveBayes = NaiveBayes.train(trainingData.map(lambda lp: LabeledPoint(lp.label, \
np.fromiter(map(lambda x: x if x>0.0 else 0.0,lp.features.toArray()),dtype=np.int)\
))\
)
print(modelNaiveBayes)
from pyspark.mllib.tree import DecisionTree
modelDecisionTree = DecisionTree.trainClassifier(trainingData.map(lambda lp: LabeledPoint(lp.label,\
np.fromiter(map(lambda x: 0.0 if np.isnan(x) else x,lp.features.toArray()),dtype=np.double )))\
, numClasses=training.getNumClasses(), categoricalFeaturesInfo={})
print(modelDecisionTree)
from pyspark.mllib.tree import RandomForest
modelRandomForest = RandomForest.trainClassifier(trainingData.map(lambda lp: LabeledPoint(lp.label,\
np.fromiter(map(lambda x: 0.0 if np.isnan(x) else x,lp.features.toArray()),dtype=np.double )))\
, numClasses=training.getNumClasses(), categoricalFeaturesInfo={},numTrees=100)
print(modelRandomForest)
Explanation: Train multiple classification models
End of explanation
dbTestName="pycon_flightpredict_test_set"
testCloudantdata = training.loadDataSet(dbTestName,"test")
testCloudantdata.count()
testData = training.loadLabeledDataRDD("test")
training.displayConfusionTable=True
training.runMetrics(testData,modelNaiveBayes,modelDecisionTree,logRegModel,modelRandomForest)
Explanation: Load Test data from Cloudant database and compute accuracy metrics
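For a single model, the same kind of accuracy figure can also be computed by hand (a sketch using the standard MLlib evaluation pattern; runMetrics above already reports this for all four models):
predictions = modelDecisionTree.predict(testData.map(lambda lp: lp.features))
labels_and_preds = testData.map(lambda lp: lp.label).zip(predictions)
accuracy = labels_and_preds.filter(lambda lp: lp[0] == lp[1]).count() / float(testData.count())
print("Decision tree accuracy: {:.3f}".format(accuracy))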
End of explanation
rdd = sqlContext.sql("select deltaDeparture from training").map(lambda s: s.deltaDeparture)\
.filter(lambda s: s < 50 and s > 12)
print(rdd.count())
histo = rdd.histogram(50)
#print(histo[0])
#print(histo[1])
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
bins = [i for i in histo[0]]
params = plt.gcf()
plSize = params.get_size_inches()
params.set_size_inches( (plSize[0]*2.5, plSize[1]*2) )
plt.ylabel('Number of records')
plt.xlabel('Bin')
plt.title('Histogram')
intervals = [abs(j-i) for i,j in zip(bins[:-1], bins[1:])]
values=[sum(intervals[:i]) for i in range(0,len(intervals))]
plt.bar(values, histo[1], intervals, color='b', label = "Bins")
plt.xticks(bins[:-1],[int(i) for i in bins[:-1]])
plt.legend()
plt.show()
Explanation: Accuracy analysis and model refinement
Run Histogram to refine classification
End of explanation
class customTrainingHandler(training.defaultTrainingHandler):
def getClassLabel(self, value):
if ( int(value)==0 ):
return "Delayed less than 13 minutes"
elif (int(value)==1 ):
return "Delayed between 13 and 41 minutes"
elif (int(value) == 2 ):
return "Delayed more than 41 minutes"
return value
def numClasses(self):
return 3
def computeClassification(self, s):
return 0 if s.deltaDeparture<13 else (1 if s.deltaDeparture < 41 else 2)
def customTrainingFeaturesNames(self ):
return ["departureTime"]
def customTrainingFeatures(self, s):
dt=parser.parse(s.departureTime)
print(dt)
features=[]
for i in range(0,7):
features.append(1 if dt.weekday()==i else 0)
return features
training.customTrainingHandler=customTrainingHandler()
#reload the training labeled data RDD
trainingData = training.loadLabeledDataRDD("training")
#recompute the models
logRegModel = LogisticRegressionWithLBFGS.train(trainingData.map(lambda lp: LabeledPoint(lp.label,\
np.fromiter(map(lambda x: 0.0 if np.isnan(x) else x,lp.features.toArray()),dtype=np.double )))\
, iterations=1000, validateData=False, intercept=False)
modelNaiveBayes = NaiveBayes.train(trainingData.map(lambda lp: LabeledPoint(lp.label, \
np.fromiter(map(lambda x: x if x>0.0 else 0.0,lp.features.toArray()),dtype=np.int)\
))\
)
modelDecisionTree = DecisionTree.trainClassifier(trainingData.map(lambda lp: LabeledPoint(lp.label,\
np.fromiter(map(lambda x: 0.0 if np.isnan(x) else x,lp.features.toArray()),dtype=np.double )))\
, numClasses=training.getNumClasses(), categoricalFeaturesInfo={})
modelRandomForest = RandomForest.trainClassifier(trainingData.map(lambda lp: LabeledPoint(lp.label,\
np.fromiter(map(lambda x: 0.0 if np.isnan(x) else x,lp.features.toArray()),dtype=np.double )))\
, numClasses=training.getNumClasses(), categoricalFeaturesInfo={},numTrees=100)
#reload the test labeled data
testData = training.loadLabeledDataRDD("test")
#recompute the accuracy metrics
training.displayConfusionTable=True
training.runMetrics(testData,modelNaiveBayes,modelDecisionTree,logRegModel,modelRandomForest)
Explanation: Customize classification using Training Handler class extension
Add new features to the model
Re-build the models
Re-compute accuracy metrics
End of explanation
run.useModels(modelNaiveBayes,modelDecisionTree,logRegModel,modelRandomForest)
run.runModel('BOS', "2016-05-18 20:15-0500", 'AUS', "2016-05-18 22:30-0800" )
Explanation: Run the predictive model
runModel(departureAirportCode, departureDateTime, arrivalAirportCode, arrivalDateTime)
Note: all DateTime must use UTC format
End of explanation |
6,975 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Chemistry Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 1.8. Coupling With Chemical Reactivity
Is Required
Step12: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step13: 2.2. Code Version
Is Required
Step14: 2.3. Code Languages
Is Required
Step15: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required
Step16: 3.2. Split Operator Advection Timestep
Is Required
Step17: 3.3. Split Operator Physical Timestep
Is Required
Step18: 3.4. Split Operator Chemistry Timestep
Is Required
Step19: 3.5. Split Operator Alternate Order
Is Required
Step20: 3.6. Integrated Timestep
Is Required
Step21: 3.7. Integrated Scheme Type
Is Required
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required
Step23: 4.2. Convection
Is Required
Step24: 4.3. Precipitation
Is Required
Step25: 4.4. Emissions
Is Required
Step26: 4.5. Deposition
Is Required
Step27: 4.6. Gas Phase Chemistry
Is Required
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required
Step30: 4.9. Photo Chemistry
Is Required
Step31: 4.10. Aerosols
Is Required
Step32: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required
Step33: 5.2. Global Mean Metrics Used
Is Required
Step34: 5.3. Regional Metrics Used
Is Required
Step35: 5.4. Trend Metrics Used
Is Required
Step36: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required
Step37: 6.2. Matches Atmosphere Grid
Is Required
Step38: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required
Step39: 7.2. Canonical Horizontal Resolution
Is Required
Step40: 7.3. Number Of Horizontal Gridpoints
Is Required
Step41: 7.4. Number Of Vertical Levels
Is Required
Step42: 7.5. Is Adaptive Grid
Is Required
Step43: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required
Step44: 8.2. Use Atmospheric Transport
Is Required
Step45: 8.3. Transport Details
Is Required
Step46: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required
Step47: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required
Step48: 10.2. Method
Is Required
Step49: 10.3. Prescribed Climatology Emitted Species
Is Required
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step51: 10.5. Interactive Emitted Species
Is Required
Step52: 10.6. Other Emitted Species
Is Required
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required
Step54: 11.2. Method
Is Required
Step55: 11.3. Prescribed Climatology Emitted Species
Is Required
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step57: 11.5. Interactive Emitted Species
Is Required
Step58: 11.6. Other Emitted Species
Is Required
Step59: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required
Step60: 12.2. Prescribed Upper Boundary
Is Required
Step61: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required
Step62: 13.2. Species
Is Required
Step63: 13.3. Number Of Bimolecular Reactions
Is Required
Step64: 13.4. Number Of Termolecular Reactions
Is Required
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required
Step67: 13.7. Number Of Advected Species
Is Required
Step68: 13.8. Number Of Steady State Species
Is Required
Step69: 13.9. Interactive Dry Deposition
Is Required
Step70: 13.10. Wet Deposition
Is Required
Step71: 13.11. Wet Oxidation
Is Required
Step72: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required
Step73: 14.2. Gas Phase Species
Is Required
Step74: 14.3. Aerosol Species
Is Required
Step75: 14.4. Number Of Steady State Species
Is Required
Step76: 14.5. Sedimentation
Is Required
Step77: 14.6. Coagulation
Is Required
Step78: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required
Step79: 15.2. Gas Phase Species
Is Required
Step80: 15.3. Aerosol Species
Is Required
Step81: 15.4. Number Of Steady State Species
Is Required
Step82: 15.5. Interactive Dry Deposition
Is Required
Step83: 15.6. Coagulation
Is Required
Step84: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required
Step85: 16.2. Number Of Reactions
Is Required
Step86: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required
Step87: 17.2. Environmental Conditions
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nerc', 'ukesm1-0-mmh', 'atmoschem')
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: NERC
Source ID: UKESM1-0-MMH
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:27
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
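As a minimal illustration of the call pattern (the name and email below are placeholders, not the real document authors):
# Illustrative placeholder values only
DOC.set_author("Jane Doe", "jane.doe@example.org")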
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
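As an illustrative sketch only (not a statement about this model), a single applicable domain from the Valid Choices list would be recorded as:
# Illustrative only -- pick the domain(s) actually covered by the model;
# for list-valued (1.N) properties, repeating the call for each choice is the assumed usage
DOC.set_value("stratosphere")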
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
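For example, with a purely illustrative tracer count (not the real value for this model):
# Illustrative placeholder value only
DOC.set_value(87)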
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
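Boolean properties are set with an unquoted True or False; the value below is only a placeholder:
# Illustrative placeholder value only
DOC.set_value(False)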
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmospheric chemistry transport scheme turbulence coupled with chemical reactivity?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
* Does the atmospheric chemistry grid match the atmosphere grid?*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of stratospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tropospheric heterogeneous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation |
6,976 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center">API Examples</h1>
<h3 align="center">Author
Step1: 2. Create CFNCluster
Notice
Step2: After you have verified the project information, you can execute the pipeline. When the job is done, you will see the log information returned from the cluster.
Checking the disease names
Step3: Run the pipeline with the specific operation.
Step4: To delete the cluster, you just need to set the cluster name and call the below function. | Python Code:
import os
import sys
sys.path.append(os.getcwd().replace("notebooks", "cfncluster"))
## S3 input and output address.
s3_input_files_address = "s3://path/to/input folder"
s3_output_files_address = "s3://path/to/output folder"
## CFNCluster name
your_cluster_name = "testonco"
## The private key pair for accessing cluster.
private_key = "/path/to/private_key.pem"
## If delete cfncluster after job is done.
delete_cfncluster = False
Explanation: <h1 align="center">API Examples</h1>
<h3 align="center">Author: Guorong Xu</h3>
<h3 align="center">2016-09-19</h3>
The notebook is an example that tells you how to calculate correlation, annotate gene clusters and generate JSON files on AWS.
<font color='red'>Notice: Please open the notebook under /notebooks/BasicCFNClusterSetup.ipynb to install CFNCluster package on your Jupyter-notebook server before running the notebook.</font>
1. Configure AWS key pair, data location on S3 and the project information
End of explanation
import CFNClusterManager, ConnectionManager
## Create a new cluster
master_ip_address = CFNClusterManager.create_cfn_cluster(cluster_name=your_cluster_name)
ssh_client = ConnectionManager.connect_master(hostname=master_ip_address,
username="ec2-user",
private_key_file=private_key)
Explanation: 2. Create CFNCluster
Notice: The CFNCluster package can only be installed on a Linux box that supports pip installation.
End of explanation
import PipelineManager
## You can call this function to check the disease names included in the annotation.
PipelineManager.check_disease_name()
## Define the disease name from the below list of disease names.
disease_name = "BreastCancer"
Explanation: After you have verified the project information, you can execute the pipeline. When the job is done, you will see the log information returned from the cluster.
Checking the disease names
End of explanation
import PipelineManager
## define operation
## calculate: calculate correlation;"
## oslom_cluster: clustering the gene modules;"
## print_oslom_cluster_json: print json files;"
## all: run all operations;"
operation = "all"
## run the pipeline
PipelineManager.run_analysis(ssh_client, disease_name, operation, s3_input_files_address, s3_output_files_address)
Explanation: Run the pipeline with the specific operation.
End of explanation
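As a sketch (not part of the original workflow), the same call can be reused to run a single stage by choosing one of the operation strings listed in the comments above:
## e.g. run only the correlation calculation
operation = "calculate"
PipelineManager.run_analysis(ssh_client, disease_name, operation, s3_input_files_address, s3_output_files_address)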
import CFNClusterManager
if delete_cfncluster == True:
CFNClusterManager.delete_cfn_cluster(cluster_name=your_cluster_name)
Explanation: To delete the cluster, you just need to set the cluster name and call the below function.
End of explanation |
6,977 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img style='float
Step1: Connect to server
Step2: <hr> Sizing individual plots
The Lightning client lets you easily control plot size by specifying the width in pixels. Let's try a few sizes for the same plot.
Step3: <hr> Global size
Especially when working in the notebook, it can be useful to specify a size globally. We've predefined four sizes | Python Code:
import os
from lightning import Lightning
from numpy import random
Explanation: <img style='float: left' src="http://lightning-viz.github.io/images/logo.png"> <br> <br> Controlling size in <a href='http://lightning-viz.github.io/'><font color='#9175f0'>Lightning</font></a>
<hr> Setup
End of explanation
lgn = Lightning(ipython=True, host='http://public.lightning-viz.org')
Explanation: Connect to server
End of explanation
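If you run your own Lightning server, the same constructor accepts its address in place of the public host; the URL below is only an illustrative assumption:
# Illustrative only -- point at your own server if you have one
# lgn = Lightning(ipython=True, host='http://localhost:3000')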
x = random.rand(100)
y = random.rand(100)
mat = random.rand(100,100)
mat[mat<0.99] = 0
lgn.graph(x, y, mat, width=400)
lgn.graph(x, y, mat, width=800)
Explanation: <hr> Sizing individual plots
The Lightning client lets you easily control plot size by specifying the width in pixels. Let's try a few sizes for the same plot.
End of explanation
series = random.randn(5,50)
lgn.set_size('small')
lgn.line(series)
lgn.set_size('medium')
lgn.line(series)
lgn.set_size('large')
lgn.line(series)
lgn.set_size('full')
lgn.line(series)
Explanation: <hr> Global size
Especially when working in the notebook, it can be useful to specify a size globally. We've predefined four sizes: small, medium, large, and full. Generally, full will be the largest, but full is also adaptive and will match the size of the enclosing div, whereas the others correspond to fixed pixel widths of 400, 600, and 800. The default for notebooks is medium, but for some plot types and use cases you may prefer others.
End of explanation |
6,978 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dark Matter Substructure from Strong Lenses
Contact
Step3: Constants and Defaults
We start by defining several constants and defaults. Specifically, we are interested in the following parameters
Step6: Subhalo Mass Function
The true subhalo mass function is assumed to be a normalized power-law with index $\alpha$
\begin{equation}
\left.\frac{dP}{dm}\right|_{true}=\left\{
\begin{array}{cc}
\frac{\left(1-\alpha\right)~m^{-\alpha}}{ \left(M_{\rm{max}}^{1-\alpha}~-~M_{\rm{min}}^{1 -\alpha}\right)}& \alpha \neq 1\\
\\
\frac{m^{-\alpha}}{ \log{\left(M_{\rm{max}}/M_{\rm{min}}\right)}}& \alpha = 1
\end{array}
\right.
\tag{6}
\end{equation}
VK09 convolve this true mass function with a Gaussian to account for "the presence of noise on the data and the statistical uncertainty with which masses are measured". They define their convolved mass function as
\begin{equation}
\left. \frac{dP}{dm} \right|_{conv} = \int_{M_{\rm{min}}}^{M_{\rm{max}}} {\left.\frac{dP}{dm}\right|_{true}} \frac{e^{-\left(m-m^\prime\right)^2/2{\sigma^2_m}}}{\sqrt{2\pi}\sigma_m} ~dm
\end{equation}
NOTE
Step10: Substructure Likelihood Function
VK09 define the likelihood of detecting $n_s$ substructures, each with mass $m_i$, in a single galaxy as the product of the Poisson probability of detecting $n_s$ substructures times the normalized probability density of observing a substructure with mass $m_i$ within a radius $R$,
\begin{equation}
{\cal L}\left( n_s,\vec{m}~|~\alpha,f,\vec{p}\right) = \frac{e^{-\mu(\alpha,f,< R)}~{\mu(\alpha,f,< R)}^{n_s}}{n_s!} \times \prod_{i=1}^{n_s}P\left(m_i,R~|~\vec{p},\alpha\right)\,.
\tag{1}
\end{equation}
Assuming that the probability of observing a substructure is independent of $R$, the normalized probability density of observing the masses $m_i$ can be expressed as
\begin{equation}
P\left(m_i~|~\vec{p},\alpha\right) = \frac{\int_{M_{\rm{min}}}^{M_{\rm{max}}}{\left. \frac{dP}{dm}\right|_{true}\frac{e^{-\left(m-m_i\right)^2/2{\sigma_m}^2}}{\sqrt{2\pi}\sigma_m} dm}}{\int_{M_{\rm{low}}} ^{M_{\rm{high}}} { \int_{M_{\rm{min}}} ^{M_{\rm{max}}} {\left.\frac{dP}{dm}\right|_{true}}\frac{e^{-\left(m-m^\prime\right)^2/2{\sigma_m}^2}}{\sqrt{2\pi}\sigma_m} ~dm~dm^\prime}}\,.
\tag{2}
\end{equation}
In turn, the probability of measuring $n_s$ substructures in each of $n_l$ lenses, is the product of the likelihoods for each individual lens,
\begin{equation}
{\cal L}\left( {n_s,\vec{m}}~|~\alpha,f,\vec{p} \right) = \prod_{k=1}^{n_l}{\cal L}\left( n_{s,k},\vec{m}_k~|~\alpha,f,\vec{p} \right)\,.
\tag{8}
\end{equation}
It is easier (and more numerically accurate) to work with the logarithm of the likelihood
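Explicitly, taking the logarithm of Eq. 1 for a single lens gives
\begin{equation}
\log {\cal L}\left( n_s,\vec{m}~|~\alpha,f,\vec{p}\right) = -\mu(\alpha,f,<R) + n_s \log \mu(\alpha,f,<R) - \log n_s! + \sum_{i=1}^{n_s} \log P\left(m_i~|~\vec{p},\alpha\right)\,,
\end{equation}
and Eq. 8 turns the product over lenses into a sum of such terms.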
Step12: Mass Probability
The second term is a Gaussian probability for measuring a subhalo with mass $m_i$
\begin{align}
Prob(m) = \sum_{k=1}^{n_l} \sum_{i=1}^{n_s} \log P\left(m_{k,i} ~|~\vec{p},\alpha \right)
\end{align}
The denominator in Eq 2 does not depend on any specific substructure, $m_i$, and can therefore be extracted from the sum. Additionally, since $\alpha$ and $f$ are assumed to be the same in all lenses, the double sum can be simplified to a single sum over all substructures. The mass probability function therefore becomes
\begin{align}
Prob(m) &= \sum_{j=1}^{n_l \times n_s} \log \int_{M_{\rm{min}}}^{M_{\rm{max}}}{\left. \frac{dP}{dm}\right|_{true}\frac{e^{-\left(m-m_j\right)^2/2{\sigma_m}^2}}{\sqrt{2\pi}\sigma_m} dm}
- n_l n_s \log {\int_{M_{\rm{low}}} ^{M_{\rm{high}}} { \int_{M_{\rm{min}}} ^{M_{\rm{max}}} {\left.\frac{dP}{dm}\right|_{true}}\frac{e^{-\left(m-m^\prime\right)^2/2{\sigma_m}^2}}{\sqrt{2\pi}\sigma_m} ~dm~dm^\prime}}
\end{align}
Since we've already defined the integrand as the convolved mass function, the above equation can be simplified to
\begin{align}
LogProb(m) &= \sum_{k,i=1}^{n_l,n_s} \log {\left. \frac{dP}{dm_{k,i}}\right|_{conv}}
- n_l n_s \log {\int_{M_{\rm{low}}} ^{M_{\rm{high}}} {{\left. \frac{dP}{dm}\right|_{conv}} dm}}
\end{align}
Step14: Likelihood Function
We can assemble everything back into the logarithm of the likelihood function
\begin{equation}
\log {\cal L} = LogProb(N) + LogProb(m)
\end{equation}
Step17: Simulated Data
To test the likelihood framework we need a simulated data set. First, we define a function to sample the mass function (we choose a general inverse-cdf method, though for a power-law mass function this could be done analytically). Then we define a wrapper to simulate the numbers and masses of specific simulated lenses.
Step18: Validation Results
We now have everything in place to replicate the results from VK09. We start by generating the top right panel of Figure 1. This is a realization with
Step19: As an extended validation, we attempt to reproduce Figure 3 from VK09. | Python Code:
# General imports
%matplotlib inline
import logging
import numpy as np
import pylab as plt
from scipy import stats
from scipy import integrate
from scipy.integrate import simps,trapz,quad,nquad
from scipy.interpolate import interp1d
from scipy.misc import factorial
Explanation: Dark Matter Substructure from Strong Lenses
Contact: Alex Drlica-Wagner
This notebook is an attempt to replicate the likelihood analysis framework of Vegetti & Koopmans, 2009 (VK09) to constrain the dark matter substructure fraction, $f$, and the index of the dark matter power spectrum, $\alpha$, from a statistical sample of strong lens systems. The analysis of VK09 focuses on building a joint likelihood function for analysing a sample of strong lense systems sim
The substructure mass fraction and the index of the substructure mass function are the same in all lens systems. This assumption is manifest by tying the $\alpha$ and $f$ parameters across all lenses.
The characteristics of the dark matter substructure are independent of radial position in the lens. In reality, we are only measuring the substructure characteristics close to the Einstein radius, $R_{E}$.
Substructures are independent random variates within each lens (i.e., no 2-halo term) and between lenses.
All equation numbers in this document correspond to the equation numbering in the published version of VK09.
End of explanation
# Constants
MMIN,MMAX = 4e6,4e9
MLOW,MHIGH = 0.3e8,4e9
P = (MMIN,MMAX,MLOW,MHIGH)
NSTEPS=(1000,1000)
# Defaults
ALPHA=1.9
FRAC=0.02
MHALO=1e11
SIGMA = 0
# Utility functions
def create_mass_array(log=True, nsteps=(1500,1300)):
    """Create an array spanning the true and observable mass ranges.

    Parameters:
    -----------
    nsteps : Number of steps to span the ranges (NTRUE, NCONV)
    log    : Sample in log or linear space

    Returns:
    --------
    m,mp,mm,mmp : The 1D true and observable mass arrays and their 2D meshgrids
    """
    nsteps = [int(n) for n in nsteps]
    if log:
        m  = np.logspace(np.log10(MMIN), np.log10(MMAX), nsteps[0])
        mp = np.logspace(np.log10(MLOW), np.log10(MHIGH), nsteps[1])
    else:
        m  = np.linspace(MMIN, MMAX, nsteps[0])
        mp = np.linspace(MLOW, MHIGH, nsteps[1])
    mm, mmp = np.meshgrid(m, mp)
    return m, mp, mm, mmp
def mhalo(radius=None):
    """Return the halo mass as a function of maximum radius.

    WARNING: Returns constant MHALO independent of R!

    Parameters:
    -----------
    radius : Maximum radius for enclosed halo mass

    Returns:
    --------
    mhalo : Enclosed halo mass
    """
    return MHALO
Explanation: Constants and Defaults
We start by defining several constants and defaults. Specifically, we are interested in the following parameters:
$(M_{\rm min},\ M_{\rm max}$ = (MMIN,MMAX): The minimum and maximum masses of the underlying true substructure mass function.
$(M_{\rm low},\ M_{\rm high})$ = (MLOW,MHIGH): The low and high mass ends of the observable section of the substructure mass function.
$\vec{p}$ = P = (MMIN,MMAX,MLOW,MHIGH): The set of nuisance parameters describing the mass limits.
$\alpha$ = ALPHA: The slope of the substructure mass function (default: 1.9)
$f$ = FRAC: The substructure mass fraction (default: 0.02)
$M(<R)$ = MHALO: The host halo mass (default: 1e11)
$\sigma_{m}$ = SIGMA: Mass measurement error (or intrinsic spread)
End of explanation
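As a quick illustrative check of the grids defined above (an addition, not part of the original analysis), the mass arrays can be built and inspected:
# Illustrative sanity check of the mass grids
m, mp, mm, mmp = create_mass_array()
print(m.min(), m.max())    # spans [MMIN, MMAX]
print(mp.min(), mp.max())  # spans [MLOW, MHIGH]
print(mm.shape, mmp.shape) # 2D meshgrids used in the convolution integrals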
def dP_dm_true(m, alpha):
    """True mass function (Eqn. 6) normalized over full mass range [MMIN,MMAX].

    Parameters:
    -----------
    m     : True mass of subhalo
    alpha : Power-law index of subhalo mass function

    Returns:
    --------
    dP_dm_true : Normalized pdf
    """
    m = np.atleast_1d(m)
    ret = ((1-alpha)*m**(-alpha))/(MMAX**(1-alpha)-MMIN**(1-alpha))
    ret = np.where(alpha==1, (m**-alpha)/np.log(MMAX/MMIN), ret)
    return np.where(np.isfinite(ret), ret, np.nan)
def dP_dm_conv(m, mp, alpha, sigma=SIGMA):
    """The convolved mass function.

    Parameters:
    -----------
    m  : The range of true masses
    mp : The range of observed masses

    Returns:
    --------
    dP_dm_conv : The integrated convolved mass function
    """
    if sigma == 0:
        # Convolution replaced with delta function when sigma == 0
        return dP_dm_true(np.atleast_2d(mp.T)[0], alpha)
    else:
        return simps(dP_dm_true(m, alpha)*stats.norm.pdf(m, loc=mp, scale=sigma), m)
Explanation: Subhalo Mass Function
The true subhalo mass function is assumed to be a normalized power-law with index $\alpha$
\begin{equation}
\left.\frac{dP}{dm}\right|_{true}=\left\{
\begin{array}{cc}
\frac{\left(1-\alpha\right)~m^{-\alpha}}{ \left(M_{\rm{max}}^{1-\alpha}~-~M_{\rm{min}}^{1 -\alpha}\right)}& \alpha \neq 1\\
\\
\frac{m^{-\alpha}}{ \log{\left(M_{\rm{max}}/M_{\rm{min}}\right)}}& \alpha = 1
\end{array}
\right.
\tag{6}
\end{equation}
VK09 convolve this true mass function with a Gaussian to account for "the presence of noise on the data and the statistical uncertainty with which masses are measured". They define their convolved mass function as
\begin{equation}
\left. \frac{dP}{dm} \right|_{conv} = \int_{M_{\rm{min}}}^{M_{\rm{max}}} {\left.\frac{dP}{dm}\right|_{true}} \frac{e^{-\left(m-m^\prime\right)^2/2{\sigma^2_m}}}{\sqrt{2\pi}\sigma_m} ~dm
\end{equation}
NOTE: I'm not sure I completely believe this motivation for the convolved mass function. What they have done is to incorporate an intrinsic scatter in the true mass function. This has the effect that occasionally masses below the detection threshold will scatter up to detectability. I don't think that this is an accurate representation of the effect of measurement error, but I'd need to think about it a bit more.
NOTE: When the mass uncertainty is zero, $\sigma_m = 0$, the Gaussian convolution is replaced with a $\delta$-function and the convolved mass function reverts to the true mass function.
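A minimal sanity check of both statements (a sketch that assumes the cells above have been run, so that create_mass_array, dP_dm_true, dP_dm_conv, np and simps are all available):
m, mp, mm, mmp = create_mass_array()
# Eqn. 6 should integrate to ~1 over the full range [MMIN, MMAX]
print(simps(dP_dm_true(m, ALPHA), m))
# with sigma_m = 0 the "convolved" pdf is just the true pdf evaluated at mp
print(np.allclose(dP_dm_conv(mm, mmp, ALPHA, sigma=0), dP_dm_true(mp, ALPHA)))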
End of explanation
def mu0(alpha, frac, radius=None):
    """Expected number of substructures from the true mass function (Eq. 5).

    Parameters:
    -----------
    alpha  : Slope of the substructure mass function
    frac   : Substructure mass fraction
    radius : Enclosed radius

    Returns:
    --------
    mu0 : Predicted number of substructures for the true mass function
    """
alpha = np.atleast_1d(alpha)
integral = ( (2-alpha)*(MMAX**(1-alpha) - MMIN**(1-alpha))) / \
( (1-alpha)*(MMAX**(2-alpha) - MMIN**(2-alpha)))
integral = np.where(alpha==2,-(MMAX**-1 - MMIN**-1)/np.log(MMAX/MMIN),integral)
integral = np.where(alpha==1,np.log(MMAX/MMIN)/(MMAX - MMIN),integral)
return frac * mhalo(radius) * integral
def mu(alpha, frac, radius=None, sigma=SIGMA):
    """Expected number of substructures from the observable mass function (Eq. 4).

    Parameters:
    -----------
    alpha  : Slope of the substructure mass function
    frac   : Substructure mass fraction
    radius : Enclosed radius
    sigma  : Substructure mass error

    Returns:
    --------
    mu : Predicted number of substructures for the observable mass function
    """
m,mp,mm,mmp = create_mass_array()
_mu0 = mu0(alpha, frac, radius)
_integral = simps(dP_dm_conv(mm,mmp,alpha,sigma=sigma),mp)
return _mu0 * _integral
def LogProbNumber(data, alpha, frac, R=1, sigma=SIGMA):
    """Logarithm of the joint probability for the number of substructures.

    Parameters:
    -----------
    data  : Input data
    alpha : Index of the mass function
    frac  : Substructure mass fraction

    Returns:
    --------
    prob : Logarithm of the joint Poisson probability
    """
logging.debug(' LogProbNumber: %s'%len(data))
nsrc = data['nsrc']
_mu = mu(alpha,frac,R,sigma=sigma)
return np.sum(stats.poisson.logpmf(nsrc[:,np.newaxis],_mu),axis=0)
Explanation: Substructure Likelihood Function
VK09 define the likelihood of detecting $n_s$ substructures, each with mass $m_i$, in a single galaxy as the product of the Poisson probability of detecting $n_s$ substructures times the normalized probability density of observing a substructure with mass $m_i$ within a radius $R$,
\begin{equation}
{\cal L}\left( n_s,\vec{m}~|~\alpha,f,\vec{p}\right) = \frac{e^{-\mu(\alpha,f,< R)}~{\mu(\alpha,f,< R)}^{n_s}}{n_s!} \times \prod_{i=1}^{n_s}P\left(m_i,R~|~\vec{p},\alpha\right)\,.
\tag{1}
\end{equation}
Assuming that the probability of observing a substructure is independent of $R$, the normalized probability density of observing the masses $m_i$ can be expressed as
\begin{equation}
P\left(m_i~|~\vec{p},\alpha\right) = \frac{\int_{M_{\rm{min}}}^{M_{\rm{max}}}{\left. \frac{dP}{dm}\right|_{true}\frac{e^{-\left(m-m_i\right)^2/2{\sigma_m}^2}}{\sqrt{2\pi}\sigma_m} dm}}{\int_{M_{\rm{low}}}^{M_{\rm{high}}} { \int_{M_{\rm{min}}}^{M_{\rm{max}}} {\left.\frac{dP}{dm}\right|_{true}}\frac{e^{-\left(m-m^\prime\right)^2/2{\sigma_m}^2}}{\sqrt{2\pi}\sigma_m} ~dm~dm^\prime}}\,.
\tag{2}
\end{equation}
In turn, the probability of measuring $n_s$ substructures in each of $n_l$ lenses, is the product of the likelihoods for each individual lens,
\begin{equation}
{\cal L}\left( {n_s,\vec{m}}~|~\alpha,f,\vec{p} \right) = \prod_{k=1}^{n_l}{\cal L}\left( n_{s,k},\vec{m}_k~|~\alpha,f,\vec{p} \right)\,.
\tag{8}
\end{equation}
It is easier (and more numerically accurate) to work with the logarithm of the likelihood:
\begin{align}
\log{\cal L}\left( {n_s,\vec{m}}~|~\alpha,f,\vec{p} \right) &= \sum_{k=1}^{n_l}{\log \cal L}\left( n_{s,k},\vec{m}_k~|~\alpha,f,\vec{p} \right)\\
&= \sum_{k=1}^{n_l} \log \left( \frac{e^{-\mu(\alpha,f,< R)}~{\mu(\alpha,f,< R)}^{n_s}}{n_s!} \right) + \sum_{k=1}^{n_l} \sum_{i=1}^{n_s} \log P\left(m_i,R~|~\vec{p},\alpha\right)
\end{align}
The likelihood is clearly separable into two terms:
1. The first term represents the probability of having the measured abundance of subhalos. We call this term the "abundance probability" or $LogProb(N)$.
2. The second term represents the probability of measuring a given mass for each subhalo. We call this term the "mass probability" or $LogProb(m)$.
We describe the two terms in more detail below.
Abundance Probability
The first term represents the Poisson probability of detecting $n_s$ substructures when $\mu(\alpha,f,< R)$ are expected.
\begin{equation}
LogProb(N) = \sum_{k=1}^{n_l} \log \left( \frac{e^{-\mu(\alpha,f,< R)}~{\mu(\alpha,f,< R)}^{n_s}}{n_s!} \right)
\end{equation}
The expected number of substructures is calculated from the mass function and the detectable range. First we define the expectation over the full mass range
\begin{align}
\mu_0(\alpha,f,<R,\vec{p}) &= \frac{ f(<R)~ M_{\rm{DM}}(<R)} {\int_{M_{\rm{min}}}^{M_{\rm{max}}}{m~\left.\frac{dP}{dm}\right|_{true}~dm}} \\
&= f(<R)~M_{\rm{DM}}(<R)\left\{
\begin{array}{ccc}
\frac{ \left(2 -\alpha\right)~ \left(M_{\rm{max}}^{1-\alpha}~-~M_{\rm{min}}^{1 -\alpha}\right)} {\left(1 -\alpha \right)~ \left(M_{\rm{max}}^{2 -\alpha}~-~M_{\rm{min}}^{2 -\alpha}\right)} & \alpha \neq 2,~ \alpha \neq 1\\
\\
-\frac{\left(M_{\rm{max}}^{-1}~-~M_{\rm{min}}^{-1}\right)} {\log\left(M_{\rm{max}}~/~M_{\rm{min}}\right)} & \alpha = 2\\
\\
\frac{\log{\left(M_{\rm{max}} / M_{\rm{min}}\right)}} { \left(M_{\rm{max}}~-~M_{\rm{min}}\right)} & \alpha = 1 &
\end{array}
\right.
\tag{5}
\end{align}
Then we calculate the expectation value for the convolved mass function over the detectable range
\begin{align}
\mu(\alpha,f,<R,\vec{p}) &= \mu_0(\alpha,f,<R,\vec{p}) \int_{M_{\rm{low}}}^{M_{\rm{high}}} { \left. \frac{dP}{dm}\right|_{conv}~dm} \\
&= \mu_0(\alpha,f,<R,\vec{p}) \int_{M_{\rm{low}}}^{M_{\rm{high}}} { \int_{M_{\rm{min}}}^{M_{\rm{max}}} {\left.\frac{dP}{dm}\right|_{true}}\frac{e^{-\left(m-m^\prime\right)^2/2{\sigma^2_m}}}{\sqrt{2\pi}\sigma_m} ~dm~dm^\prime}\,.
\tag{7}
\end{align}
We can then put everything together to define the abundance probability function, $LogProb(N)$.
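As a small numerical sketch of these expressions (assuming the functions defined above; the printed values are not quoted from VK09), we can evaluate the expected counts for the default parameters. With $\sigma_m = 0$ the ratio $\mu/\mu_0$ is simply the fraction of substructures that falls in the observable window $[M_{\rm low}, M_{\rm high}]$:
mu_full = mu0(ALPHA, FRAC)          # Eq. 5: expectation over the full mass range
mu_obs = mu(ALPHA, FRAC, sigma=0)   # Eq. 7: expectation over the observable window
print(mu_full, mu_obs, mu_obs / mu_full)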
End of explanation
def LogProbMass(data, alpha, sigma=SIGMA):
    """Logarithm of the joint probability for mass of substructures.

    Parameters:
    -----------
    data  : Input data
    alpha : Index of the mass function

    Returns:
    --------
    prob : Logarithm of the joint spectral probability
    """
logging.debug(' LogProbMass: %s'%len(data))
m,mp,mm,mmp = create_mass_array()
masses = np.concatenate(data['mass'])
top = np.sum(np.log([dP_dm_conv(m,mi,alpha,sigma=sigma) for mi in masses]))
bottom = len(masses)*np.log(simps(dP_dm_conv(mm,mmp,alpha,sigma=sigma),mp))
return top - bottom
Explanation: Mass Probability
The second term is a Gaussian probability for measuring a subhalo with mass $m_i$
\begin{align}
Prob(m) = \sum_{k=1}^{n_l} \sum_{i=1}^{n_s} \log P\left(m_{k,i} ~|~\vec{p},\alpha \right)
\end{align}
The denominator in Eq 2 does not depend on any specific substructure, $m_i$, and can therefore be extracted from the sum. Additionally, since $\alpha$ and $f$ are assumed to be the same in all lenses, the double sum can be simplified to a single sum over all substructures. The mass probability function therefore becomes
\begin{align}
Prob(m) &= \sum_{j=1}^{n_l \times n_s} \log \int_{M_{\rm{min}}}^{M_{\rm{max}}}{\left. \frac{dP}{dm}\right|_{true}\frac{e^{-\left(m-m_j\right)^2/2{\sigma_m}^2}}{\sqrt{2\pi}\sigma_m} dm} \\
&\quad - n_l n_s \log {\int_{M_{\rm{low}}}^{M_{\rm{high}}} { \int_{M_{\rm{min}}}^{M_{\rm{max}}} {\left.\frac{dP}{dm}\right|_{true}}\frac{e^{-\left(m-m^\prime\right)^2/2{\sigma_m}^2}}{\sqrt{2\pi}\sigma_m} ~dm~dm^\prime}}
\end{align}
Since we've already defined the integrand as the convolved mass function, the above equation can be simplified to
\begin{align}
LogProb(m) &= \sum_{k,i=1}^{n_l,n_s} \log {\left. \frac{dP}{dm_{k,i}}\right|_{conv}} \\
&\quad - n_l n_s \log {\int_{M_{\rm{low}}}^{M_{\rm{high}}} {{\left. \frac{dP}{dm}\right|_{conv}} dm}}
\end{align}
End of explanation
def LogLike(data, alpha, frac, sigma=SIGMA):
    """Logarithm of the joint likelihood over all lens systems."""
logging.debug('LogLike: %s'%len(data))
logpois = LogProbNumber(data, alpha, frac, sigma=sigma)
logprob = LogProbMass(data, alpha, sigma=sigma)
return logpois + logprob
Explanation: Likelihood Function
We can assemble everything back into the logarithm of the likelihood function
\begin{equation}
\log {\cal L} = LogProb(N) + LogProb(m)
\end{equation}
End of explanation
def sample(size,alpha=ALPHA):
    """Random samples of the mass function.

    Parameters:
    -----------
    size  : Number of samples to make
    alpha : Index of the mass function

    Returns:
    --------
    mass : Random samples of the mass function
    """
x = create_mass_array(log=False,nsteps=(1e4,1e1))[0]
pdf = dP_dm_true(x,alpha)
size = int(size)
cdf = np.cumsum(pdf)
cdf = np.insert(cdf, 0, 0.)
cdf /= cdf[-1]
icdf = interp1d(cdf, range(0, len(cdf)), bounds_error=False, fill_value=-1)
u = np.random.uniform(size=size)
index = np.floor(icdf(u)).astype(int)
index = index[index >= 0]
masses = x[index]
return masses
def simulate(nlens=1, alpha=ALPHA, frac=FRAC, sigma=SIGMA):
    """Generate the simulated data set of lens, sources, and masses.

    Parameters:
    -----------
    nlens : Number of lenses to generate.
    alpha : Index of the substructure mass function
    frac  : Substructure mass fraction

    Returns:
    --------
    data : Array of output lenses and substructures
    """
# First, figure out how many lenses we are sampling
m,mp,mm,mmp = create_mass_array()
pdf = dP_dm_true(m,alpha)
_mu = mu0(alpha,frac)
lenses = stats.poisson.rvs(_mu,size=nlens)
out = []
for i,l in enumerate(lenses):
masses = sample(l,alpha=alpha)
if sigma != 0:
masses += stats.norm.rvs(size=len(masses),scale=sigma)
sel = (masses > MLOW) & (masses < MHIGH)
mass = masses[sel]
out += [(i,len(mass),mass)]
names = ['lens','nsrc','mass']
return np.rec.fromrecords(out,names=names)
# Simulate a large set of lenses
data = simulate(1000, alpha=ALPHA, frac=FRAC, sigma=SIGMA)
# Plot a histogram of the masses
bins = np.logspace(np.log10(MLOW),np.log10(MHIGH),50)
masses = np.concatenate(data['mass'])
n,b,p = plt.hist(masses,bins=bins,log=True,normed=True, label='Samples'); plt.gca().set_xscale('log')
# Plot the pdf normalized over the observable mass range
m,mp,mm,mmp = create_mass_array()
norm = simps(dP_dm_conv(mm,mmp,ALPHA,sigma=SIGMA),mp)
plt.plot(b,dP_dm_true(b,alpha=ALPHA)/norm,label='Normalized PDF')
plt.legend(loc='upper right')
plt.xlabel(r"Mass ($M_\odot$)"); plt.ylabel("Normalized Counts")
Explanation: Simulated Data
To test the likelihood framework we need a simulated data set. First, we define a function to sample the mass function (we choose a general inverse-cdf method, though for a power-law mass function this could be done analytically). Then we define a wrapper to simulate the numbers and masses of substructures for a set of simulated lenses.
End of explanation
FRAC=0.005; ALPHA=1.9; MLOW=0.3e8; SIGMA=0
nlens=10; seed = 1
np.random.seed(seed)
fracs = np.linspace(0.001,0.03,151)
alphas = np.linspace(1.0,3.0,51)
data = simulate(nlens,alpha=ALPHA, frac=FRAC, sigma=SIGMA)
loglikes = np.array([LogLike(data,a,fracs) for a in alphas])
loglikes -= loglikes.max()
loglikes = loglikes.T
# Note the typo in VK09's definition of the 3 sigma p-value
levels = -stats.chi2.isf([0.0028,0.05,0.32,1.0],2)/2.
plt.contourf(alphas,fracs,loglikes,levels=levels,cmap='binary')
plt.axvline(ALPHA,ls='--',c='dodgerblue')
plt.axhline(FRAC,ls='--',c='dodgerblue')
plt.colorbar(label=r'$\Delta \log {\cal L}$')
plt.xlabel(r'Slope ($\alpha$)')
plt.ylabel(r'Mass Fraction ($f$)')
Explanation: Validation Results
We now have everything in place to replicate the results from VK09. We start by generating the top right panel of Figure 1. This is a realization with:
* $n_l = 10$
* $f = 0.025$
* $\alpha = 1.9$
* $M_{\rm low} = 3\times 10^7$
* $\sigma_m = 0$
Rather than doing a full MCMC chain, we choose just to scan the likelihood.
End of explanation
FRAC=0.005; ALPHA=1.9; SIGMA=0
nlens=200; seed = 0
fracs = np.linspace(0.001,0.03,151)
alphas = np.linspace(1.0,3.0,51)
fig,axes = plt.subplots(1,3,figsize=(10,3),sharey=True)
for i,m in enumerate([0.3e8,1.0e8,3e8]):
MLOW = m
data = simulate(nlens,alpha=ALPHA, frac=FRAC, sigma=SIGMA)
loglikes = np.array([LogLike(data,a,fracs) for a in alphas])
loglikes -= loglikes.max()
loglikes = loglikes.T
plt.sca(axes[i])
plt.contourf(alphas,fracs,loglikes,levels=levels,cmap='binary')
plt.axvline(ALPHA,ls='--',c='dodgerblue')
plt.axhline(FRAC,ls='--',c='dodgerblue')
plt.xlabel(r'Slope ($\alpha$)')
if i == 0: plt.ylabel(r'Mass Fraction ($f$)')
Explanation: As an extended validation, we attempt to reproduce Figure 3 from VK09.
End of explanation |
6,979 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reading TSV files
Step1: transforming the "Pvalue_MMAP_V2_..." into danger score
Testing the function danger_score
Step3: QUESTION for Guillaume
Step5: To be or not to be a CNV
Step6: Replace the zero score by the maximum score
Step7: Transforms the scores into P(cnv is real)
Step8: Finally, putting things together
Step9: Create a dict of the cnv
Step10: Create a dictionary of the subjects -
Step12: Histogram of the number of cnv used to compute dangerosity
Step13: Testing dangerosity
Step14: Printing out results | Python Code:
CWD = osp.join(osp.expanduser('~'), 'documents','grants_projects','roberto_projects', \
'guillaume_huguet_CNV','File_OK')
filename = 'Imagen_QC_CIA_MMAP_V2_Annotation.tsv'
fullfname = osp.join(CWD, filename)
arr = np.loadtxt(fullfname, dtype='str', comments=None, delimiter='\Tab',
converters=None, skiprows=0, usecols=None, unpack=False, ndmin=0)
EXPECTED_LINES = 19542
expected_nb_values = EXPECTED_LINES - 1
assert arr.shape[0] == EXPECTED_LINES
line0 = arr[0].split('\t')
print(line0)
danger = 'Pvalue_MMAP_V2_sans_intron_and_Intergenic'
score = 'SCORE'
i_danger = line0.index(danger)
i_score = line0.index(score)
print(i_danger, i_score)
# check that all lines have the same number of tab separated elements
larr = np.asarray([len(arr[i].split('\t')) for i in range(arr.shape[0])])
assert not (larr - larr[0]).any() # all element have the same value
dangers = np.asarray([line.split('\t')[i_danger] for line in arr[1:]])
scores = np.asarray([line.split('\t')[i_score] for line in arr[1:]])
# print(np.unique(scores))
assert len(dangers) == expected_nb_values
assert len(scores) == expected_nb_values
Explanation: Reading TSV files
End of explanation
assert util._test_danger_score_1()
assert util._test_danger_score()
Explanation: transforming the "Pvalue_MMAP_V2_..." into danger score
Testing the function danger_score
End of explanation
danger_not_empty = dangers != ''
danger_scores = dangers[danger_not_empty]
danger_scores = np.asarray([util.danger_score(pstr, util.pH1_with_apriori)
for pstr in danger_scores])
;
Explanation: QUESTION for Guillaume:
what do the empty '' entries in the "Pvalue_MMAP_V2_sans_intron_and_Intergenic" (danger) column correspond to?
Answer: CNVs for which we have no dangerosity information
End of explanation
reload(util)
#get the scores
scores = np.asarray([line.split('\t')[i_score] for line in arr[1:]])
assert len(scores) == expected_nb_values
print(len(np.unique(scores)))
#tmp_score = np.asarray([util.str2floats(s, comma2point=True, sep=' ')[0] for s in scores])
assert scores.shape[0] == EXPECTED_LINES - 1
# h = plt.hist(tmp[tmp > sst.scoreatpercentile(tmp, 99)], bins=100)
# h = plt.hist(tmp[tmp < 50], bins=100)
print("# CNV with score == 0.: ", (tmp==0.).sum())
print("# CNV with score >=15 < 17.5 : ", np.logical_and(tmp >= 15., tmp < 17.5).sum())
tmp.max()
;
Explanation: To be or not to be a CNV: p value from the 'SCORE' column
End of explanation
scoresf = np.asarray([util.str2floats(s, comma2point=True, sep=' ')[0]
for s in scores])
print(scoresf.max(), scoresf.min(),(scoresf==0).sum())
#clean_score = util.process_scores(scores)
#h = plt.hist(clean_score[clean_score < 60], bins=100)
#h = plt.hist(scoresf[scoresf < 60], bins=100)
h = plt.hist(scoresf, bins=100, range=(0,150))
Explanation: Replace the zero score by the maximum score: cf Guillaume's procedure
End of explanation
# Creating a function from score to proba from Guillaume's values
p_cnv = util._build_dict_prob_cnv()
#print(p_cnv.keys())
#print(p_cnv.values())
#### Definition with a piecewise linear function
#score2prob = util.create_score2prob_lin_piecewise(p_cnv)
#scores = np.arange(15,50,1)
#probs = [score2prob(sc) for sc in scores]
#plt.plot(scores, probs)
#### Definition with a corrected regression line
score2prob = util.create_score2prob_lin(p_cnv)
#x = np.arange(0,50,1)
#plt.plot(x, [score2prob(_) for _ in x], '-', p_cnv.keys(), p_cnv.values(), '+')
p_scores = [score2prob(sc) for sc in clean_score]
assert len(p_scores) == EXPECTED_LINES -1
Explanation: Transforms the scores into P(cnv is real)
End of explanation
# re-loading
reload(util)
CWD = osp.join(osp.expanduser('~'), 'documents','grants_projects','roberto_projects', \
'guillaume_huguet_CNV','File_OK')
filename = 'Imagen_QC_CIA_MMAP_V2_Annotation.tsv'
fullfname = osp.join(CWD, filename)
# in numpy array
arr = np.loadtxt(fullfname, dtype='str', comments=None, delimiter='\Tab',
converters=None, skiprows=0, usecols=None, unpack=False, ndmin=0)
line0 = arr[0].split('\t')
i_DANGER = line0.index('Pvalue_MMAP_V2_sans_intron_and_Intergenic')
i_SCORE = line0.index('SCORE')
i_START = line0.index('START')
i_STOP = line0.index('STOP')
i_5pGENE = line0.index("5'gene")
i_3pGENE = line0.index("3'gene")
i_5pDIST = line0.index("5'dist(kb)")
i_3pDIST = line0.index("3'dist(kb)")
#i_LOC = line0.index('Location')
scores = np.asarray([line.split('\t')[i_SCORE] for line in arr[1:]])
clean_score = util.process_scores(scores)
max_score = clean_score.max()
print(line0)
#names_from = ['START', 'STOP', "5'gene", "3'gene", "5'dist(kb)", "3'dist(kb)"]
#---------- ligne uniques:
names_from = ['IID_projet', 'IID_genotype', "CHR de Merge_CIA_610_660_QC", 'START', 'STOP']
cnv_names = util.make_uiid(arr, names_from)
print("with names from: ", names_from)
print("we have {} unique elements out of {} rows in the tsv".format(
len(np.unique(cnv_names)), len(cnv_names)))
#---------- CNV uniques ?
names_from = ["CHR de Merge_CIA_610_660_QC", 'START', 'STOP']
cnv_names = util.make_uiid(arr, names_from)
print("with names from: ", names_from)
print("we have {} unique elements out of {} rows in the tsv".format(
len(np.unique(cnv_names)), len(cnv_names)))
#---------- sujets uniques ?
names_from = ['IID_projet'] # , 'IID_genotype']
cnv_names = util.make_uiid(arr, names_from)
print("with names from: ", names_from)
print("we have {} unique elements out of {} rows in the tsv".format(
len(np.unique(cnv_names)), len(cnv_names)))
dangers = np.asarray([line.split('\t')[i_DANGER] for line in arr[1:]])
scores = np.asarray([line.split('\t')[i_SCORE] for line in arr[1:]])
#danger_not_empty = dangers != ''
#print(danger_not_empty.sum())
#print(len(np.unique(cnv_name)))
#print(cnv_name[:10])
Explanation: Finally, putting things together
End of explanation
from collections import OrderedDict
cnv = OrderedDict()
names_from = ["CHR de Merge_CIA_610_660_QC", 'START', 'STOP']
#, "5'gene", "3'gene", "5'dist(kb)", "3'dist(kb)"]
blank_dgr = 0
for line in arr[1:]:
lline = line.split('\t')
dgr = lline[i_DANGER]
scr = lline[i_SCORE]
cnv_iid = util.make_uiid(line, names_from, arr[0])
if dgr != '':
add_cnv = (util.danger_score(lline[i_DANGER], util.pH1_with_apriori),
score2prob(util.process_one_score(lline[i_SCORE], max_score)))
if cnv_iid in cnv.keys():
cnv[cnv_iid].append(add_cnv)
else:
cnv[cnv_iid] = [add_cnv]
else:
blank_dgr += 1
print(len(cnv), (blank_dgr))
print([k for k in cnv.keys()[:5]])
print([k for k in cnv.values()[:5]])
for k in cnv.keys()[3340:3350]:
print(k,': ',cnv[k])
Explanation: Create a dict of the cnv
End of explanation
cnv = OrderedDict({})
#names_from = ['START', 'STOP', "5'gene", "3'gene", "5'dist(kb)", "3'dist(kb)"]
names_from = ['IID_projet']
for line in arr[1:]:
lline = line.split('\t')
dgr = lline[i_DANGER]
scr = lline[i_SCORE]
sub_iid = util.make_uiid(line, names_from, arr[0])
if dgr != '':
add_cnv = (util.danger_score(lline[i_DANGER], util.pH1_with_apriori),
score2prob(util.process_one_score(lline[i_SCORE], max_score)))
if sub_iid in cnv.keys():
cnv[sub_iid].append(add_cnv)
else:
cnv[sub_iid] = [add_cnv]
Explanation: Create a dictionary of the subjects -
End of explanation
print(len(cnv))
nbcnv = [len(cnv[sb]) for sb in cnv]
hist = plt.hist(nbcnv, bins=50)
print(np.max(np.asarray(nbcnv)))
# definition of dangerosity from a list of cnv
def dangerosity(listofcnvs):
    """inputs: list tuples (danger_score, proba_cnv)
    returns: a dangerosity score
    """
last = -1 #slicing the last
tmp = [np.asarray(t) for t in zip(*listofcnvs)]
return tmp[0].dot(tmp[1])
# or: return np.asarray([dgr*prob for (dgr,prob) in listofcnvs]).cumsum()[last]
Explanation: Histogram of the number of cnv used to compute dangerosity
End of explanation
for k in range(1,30, 30):
print(cnv[cnv.keys()[k]], ' yields ', dangerosity(cnv[cnv.keys()[k]]))
test_dangerosity_input = [[(1., .5), (1., .5), (1., .5), (1., .5)],
[(2., 1.)],
[(10000., 0.)]]
test_dangerosity_output = [2., 2., 0]
#print( [dangerosity(icnv) for icnv in test_dangerosity_input]) # == test_dangerosity_output
assert( [dangerosity(icnv) for icnv in test_dangerosity_input] == test_dangerosity_output)
Explanation: Testing dangerosity
End of explanation
dtime = datetime.now().strftime("%y-%m-%d_h%H-%M")
outfile = dtime+'dangerosity_cnv.txt'
fulloutfile = osp.join(CWD, outfile)
with open(fulloutfile, 'w') as outf:
for sub in cnv:
outf.write("\t".join([sub, str(dangerosity(cnv[sub]))]) + "\n")
Explanation: Printing out results
End of explanation |
6,980 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to artifacts and artifact detection
Since MNE supports the data of many different acquisition systems, the
particular artifacts in your data might behave very differently from the
artifacts you can observe in our tutorials and examples.
Therefore you should be aware of the different approaches and of
the variability of artifact rejection (automatic/manual) procedures described
onwards. At the end consider always to visually inspect your data
after artifact rejection or correction.
Background
Step1: Low frequency drifts and line noise
Step2: we see high amplitude undulations in low frequencies, spanning across tens of
seconds
Step3: On MEG sensors we see narrow frequency peaks at 60, 120, 180, 240 Hz,
related to line noise.
But also some high amplitude signals between 25 and 32 Hz, hinting at other
biological artifacts such as ECG. These can be most easily detected in the
time domain using MNE helper functions
See tut-filter-resample.
ECG
finds ECG events, creates epochs, averages and plots
Step4: we can see typical time courses and non dipolar topographies
note the order of magnitude of the average artifact related signal and
compare this to what you observe for brain signals
EOG | Python Code:
import numpy as np
import mne
from mne.datasets import sample
from mne.preprocessing import create_ecg_epochs, create_eog_epochs
# getting some data ready
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
raw = mne.io.read_raw_fif(raw_fname, preload=True)
Explanation: Introduction to artifacts and artifact detection
Since MNE supports the data of many different acquisition systems, the
particular artifacts in your data might behave very differently from the
artifacts you can observe in our tutorials and examples.
Therefore you should be aware of the different approaches and of
the variability of artifact rejection (automatic/manual) procedures described
onwards. At the end consider always to visually inspect your data
after artifact rejection or correction.
Background: what is an artifact?
Artifacts are signal interference that can be
endogenous (biological) and exogenous (environmental).
Typical biological artifacts are head movements, eye blinks
or eye movements, heart beats. The most common environmental
artifact is due to the power line, the so-called line noise.
How to handle artifacts?
MNE deals with artifacts by first identifying them, and subsequently removing
them. Detection of artifacts can be done visually, or using automatic routines
(or a combination of both). After you know what the artifacts are, you need to
remove them. This can be done by:
- *ignoring* the piece of corrupted data
- *fixing* the corrupted data
For the artifact detection the functions MNE provides depend on whether
your data is continuous (Raw) or epoch-based (Epochs) and depending on
whether your data is stored on disk or already in memory.
Detecting the artifacts without reading the complete data into memory allows
you to work with datasets that are too large to fit in memory all at once.
Detecting the artifacts in continuous data allows you to apply filters
(e.g. a band-pass filter to zoom in on the muscle artifacts on the temporal
channels) without having to worry about edge effects due to the filter
(i.e. filter ringing). Having the data in memory after segmenting/epoching is
however a very efficient way of browsing through the data which helps
in visualizing. So to conclude, there is not a single most optimal manner
to detect the artifacts: it just depends on the data properties and your
own preferences.
In this tutorial we show how to detect artifacts visually and automatically.
For how to correct artifacts by rejection see tut-artifact-rejection.
To discover how to correct certain artifacts by filtering see
tut-filter-resample and to learn how to correct artifacts
with subspace methods like SSP and ICA see
tut-artifact-ssp and tut-artifact-ica.
Artifacts Detection
This tutorial discusses a couple of major artifacts that most analyses
have to deal with and demonstrates how to detect them.
End of explanation
(raw.copy().pick_types(meg='mag')
.del_proj(0)
.plot(duration=60, n_channels=100, remove_dc=False))
Explanation: Low frequency drifts and line noise
End of explanation
raw.plot_psd(tmax=np.inf, fmax=250)
Explanation: we see high amplitude undulations in low frequencies, spanning across tens of
seconds
End of explanation
average_ecg = create_ecg_epochs(raw).average()
print('We found %i ECG events' % average_ecg.nave)
joint_kwargs = dict(ts_args=dict(time_unit='s'),
topomap_args=dict(time_unit='s'))
average_ecg.plot_joint(**joint_kwargs)
Explanation: On MEG sensors we see narrow frequency peaks at 60, 120, 180, 240 Hz,
related to line noise.
But also some high amplitude signals between 25 and 32 Hz, hinting at other
biological artifacts such as ECG. These can be most easily detected in the
time domain using MNE helper functions
See tut-filter-resample.
ECG
finds ECG events, creates epochs, averages and plots
End of explanation
average_eog = create_eog_epochs(raw).average()
print('We found %i EOG events' % average_eog.nave)
average_eog.plot_joint(**joint_kwargs)
Explanation: we can see typical time courses and non dipolar topographies
note the order of magnitude of the average artifact related signal and
compare this to what you observe for brain signals
EOG
End of explanation |
6,981 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Process the global variogram
Step1: STOP HERE This will calculate the variogram with chunks
Step2: Now the global variogram
For doing this I need to take a weighted average.
Or you can run it on the HEC machine! (as I did)
Step3: Gaussian semivariogram
$\gamma (h)=(s-n)\left(1-\exp \left(-{\frac {h^{2}}{r^{2}a}}\right)\right)+n1_{{(0,\infty )}}(h)$ | Python Code:
# Load Biospytial modules and etc.
%matplotlib inline
import sys
sys.path.append('/apps')
import django
django.setup()
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
## Use the ggplot style
plt.style.use('ggplot')
from external_plugins.spystats import tools
%run ../testvariogram.py
%time vg = tools.Variogram(new_data,'residuals1',using_distance_threshold=500000)
### Test creation of chunks
chunks = tools.PartitionDataSet(new_data,namecolumnx='newLon',namecolumny='newLat',n_chunks=3)
sizes = map(lambda c : c.shape[0],chunks)
vg0 = tools.Variogram(chunks[0],response_variable_name='residuals1',using_distance_threshold=500000)
vg1 = tools.Variogram(chunks[1],response_variable_name='residuals1',using_distance_threshold=500000)
vg2 = tools.Variogram(chunks[2],response_variable_name='residuals1',using_distance_threshold=500000)
vg3 = tools.Variogram(chunks[3],response_variable_name='residuals1',using_distance_threshold=500000)
Explanation: Process the global variogram
End of explanation
%time vg0.plot(num_iterations=50,with_envelope=True)
chunks[0].plot(column='residuals1')
%time vg1.plot(num_iterations=50,with_envelope=True)
chunks[1].plot(column='residuals1')
%time vg2.plot(num_iterations=50,with_envelope=True)
chunks[2].plot(column='residuals1')
%time vg3.plot(num_iterations=50,with_envelope=True)
chunks[3].plot(column='residuals1')
envelopes = map(lambda c : c.envelope,chunks)
c = chunks[0]
variograms = [vg0,vg1,vg2,vg3]
envelopes = map(lambda v : v.envelope,variograms)
colors = plt.rcParams['axes.prop_cycle']
colors = ['red','green','grey','orange']
plt.figure(figsize=(12, 6))
for i,envelope in enumerate(envelopes):
plt.plot(envelope.lags,envelope.envhigh,'k--')
plt.plot(envelope.lags,envelope.envlow,'k--')
plt.fill_between(envelope.lags,envelope.envlow,envelope.envhigh,alpha=0.5,color=colors[i])
plt.plot(envelope.lags,envelope.variogram,'o--',lw=2.0,color=colors[i])
plt.legend(labels=['97.5%','emp. varig','2.5%'])
Explanation: STOP HERE This will calculate the variogram with chunks
End of explanation
filename = "../HEC_runs/results/low_q/data_envelope.csv"
envelope_data = pd.read_csv(filename)
Explanation: Now the global variogram
For doing this I need to take a weighted average.
Or you can run it on the HEC machine! (as I did)
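A rough sketch of that weighted average (my own assumption about how the per-chunk variograms would be combined: weights proportional to the number of points in each chunk, one fitted variogram per chunk, and all chunks sharing the same lags; chunks and variograms come from the cells above):
weights = np.array([c.shape[0] for c in chunks], dtype=float)
weights /= weights.sum()
# lag-by-lag weighted mean of the per-chunk empirical variograms
emp = np.vstack([np.asarray(v.envelope.variogram) for v in variograms])
global_gamma = np.average(emp, axis=0, weights=weights)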
End of explanation
def gaussianVariogram(h,sill=0,range_a=0,nugget=0):
g_h = ((sill - nugget)*(1 - np.exp(-(h**2 / range_a**2)))) + nugget
return g_h
hx = np.linspace(0,600000,100)
vg = tools.Variogram(new_data,'residuals1',using_distance_threshold=500000)
vg.envelope = envelope_data
vg.empirical = vg.envelope.variogram
vg.lags = vg.envelope.lags
vdata = vg.envelope.dropna()
from scipy.optimize import curve_fit
s = 0.345
r = 100000.0
nugget = 0.33
init_vals = [0.34, 50000, 0.33] # for [amp, cen, wid]
best_vals, covar = curve_fit(gaussianVariogram, xdata=vdata.lags, ydata=vdata.variogram, p0=init_vals)
s =best_vals[0]
r = best_vals[1]
nugget = best_vals[2]
fitted_gaussianVariogram = lambda x : gaussianVariogram(x,sill=s,range_a=r,nugget=nugget)
gammas = pd.DataFrame(map(fitted_gaussianVariogram,hx))
import functools
fitted_gaussian2 = functools.partial(gaussianVariogram,s,r,nugget)
print(s)
print(r)
print(nugget)
vg.plot(refresh=False)
plt.plot(hx,gammas,'green',lw=2)
Mdist = vg.distance_coordinates.flatten()
## Let's do a small subset
ch = Mdist[0:10000000]
#%time covariance_matrix = map(fitted_gaussianVariogram,Mdist)
%time vars = np.array(map(fitted_gaussianVariogram,ch))
plt.imshow(vars.reshape(1000,1000))
## Save it in redis
import redis
con = redis.StrictRedis(host='redis')
con.set('small_dist_mat1',vars)
import multiprocessing as multi
from multiprocessing import Manager
manager = Manager()
p=multi.Pool(processes=4)
%time vars = p.map(fitted_gaussian2,ch,chunksize=len(ch)/3)
%time vars = np.array(map(fitted_gaussianVariogram,ch))
88.36*30
Explanation: Gaussian semivariogram
$\gamma (h)=(s-n)\left(1-\exp \left(-{\frac {h^{2}}{r^{2}a}}\right)\right)+n1_{{(0,\infty )}}(h)$
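A quick check of the limiting behaviour of the fitted curve (a sketch using gaussianVariogram and the fitted s, r, nugget from the cells above; note that the code drops the extra scale factor a that appears in the formula):
print(gaussianVariogram(0.0, sill=s, range_a=r, nugget=nugget))     # h -> 0 gives the nugget
print(gaussianVariogram(10 * r, sill=s, range_a=r, nugget=nugget))  # h >> r approaches the sill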
End of explanation |
6,982 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: Android Management API - Quickstart
If you have not yet read the Android Management API Codelab we recommend that you do so before using this notebook. If you opened this notebook from the Codelab then follow the next instructions on the Codelab.
In order to run this notebook, you need
Step2: Select an enterprise
An Enterprise resource binds an organization to your Android Management solution.
Devices and Policies both belong to an enterprise. Typically, a single enterprise
resource is associated with a single organization. However, you can create multiple
enterprises for the same organization based on their needs. For example, an
organization may want separate enterprises for its different departments or regions.
For this Codelab we have already created an enterprise for you. Run the next cell to select it.
Step3: Create a policy
A Policy is a group of settings that determine the behavior of a managed device
and the apps installed on it. Each Policy resource represents a unique group of device
and app settings and can be applied to one or more devices. Once a device is linked to
a policy, any updates to the policy are automatically applied to the device.
To create a basic policy, run the cell below. You'll see how to create more advanced policies later in this guide.
Step4: Provision a device
Provisioning refers to the process of enrolling a device with an enterprise, applying the appropriate policies to the device, and guiding the user to complete the set up of their device in accordance with those policies. Before attempting to provision a device, ensure that the device is running Android 6.0 or above.
You need an enrollment token for each device that you want to provision (you can use the same token for multiple devices), when creating a token you can specify a policy that will be applied to the device.
Step5: Embed your enrollment token in either an enrollment link or a QR code, and then follow the provisioning instructions below. | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# https://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 Google LLC.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
from apiclient.discovery import build
from google_auth_oauthlib.flow import InstalledAppFlow
from random import randint
# This is a public OAuth config, you can use it to run this guide but please use
# different credentials when building your own solution.
CLIENT_CONFIG = {
'installed': {
'client_id':'882252295571-uvkkfelq073vq73bbq9cmr0rn8bt80ee.apps.googleusercontent.com',
'client_secret': 'S2QcoBe0jxNLUoqnpeksCLxI',
'auth_uri':'https://accounts.google.com/o/oauth2/auth',
'token_uri':'https://accounts.google.com/o/oauth2/token'
}
}
SCOPES = ['https://www.googleapis.com/auth/androidmanagement']
# Run the OAuth flow.
flow = InstalledAppFlow.from_client_config(CLIENT_CONFIG, SCOPES)
credentials = flow.run_console()
# Create the API client.
androidmanagement = build('androidmanagement', 'v1', credentials=credentials)
print('\nAuthentication succeeded.')
Explanation: Android Management API - Quickstart
If you have not yet read the Android Management API Codelab we recommend that you do so before using this notebook. If you opened this notebook from the Codelab then follow the next instructions on the Codelab.
In order to run this notebook, you need:
An Android 6.0+ device.
Setup
The base resource of your Android Management solution is a Google Cloud Platform project. All other resources (Enterprises, Devices, Policies, etc) belong to the project and the project controls access to these resources. A solution is typically associated with a single project, but you can create multiple projects if you want to restrict access to resources.
For this Codelab we have already created a project for you (project ID: android-management-io-codelab).
To create and access resources, you need to authenticate with an account that has edit rights over the project. The account running this Codelab has been given rights over the project above. To start the authentication flow, run the cell below.
To run a cell:
Click anywhere in the code block.
Click the ▶ button in the top-left of the code block.
When you build a server-based solution, you should create a
service account
so you don't need to authorize the access every time.
End of explanation
enterprise_name = 'enterprises/LC02de1hmx'
Explanation: Select an enterprise
An Enterprise resource binds an organization to your Android Management solution.
Devices and Policies both belong to an enterprise. Typically, a single enterprise
resource is associated with a single organization. However, you can create multiple
enterprises for the same organization based on their needs. For example, an
organization may want separate enterprises for its different departments or regions.
For this Codelab we have already created an enterprise for you. Run the next cell to select it.
End of explanation
import json
# Create a random policy name to avoid collision with other Codelabs
if 'policy_name' not in locals():
policy_name = enterprise_name + '/policies/' + str(randint(1, 1000000000))
policy_json = '''
{
"applications": [
{
"packageName": "com.google.samples.apps.iosched",
"installType": "FORCE_INSTALLED"
}
],
"debuggingFeaturesAllowed": true
}
'''
androidmanagement.enterprises().policies().patch(
name=policy_name,
body=json.loads(policy_json)
).execute()
Explanation: Create a policy
A Policy is a group of settings that determine the behavior of a managed device
and the apps installed on it. Each Policy resource represents a unique group of device
and app settings and can be applied to one or more devices. Once a device is linked to
a policy, any updates to the policy are automatically applied to the device.
To create a basic policy, run the cell below. You'll see how to create more advanced policies later in this guide.
End of explanation
enrollment_token = androidmanagement.enterprises().enrollmentTokens().create(
parent=enterprise_name,
body={"policyName": policy_name}
).execute()
Explanation: Provision a device
Provisioning refers to the process of enrolling a device with an enterprise, applying the appropriate policies to the device, and guiding the user to complete the set up of their device in accordance with those policies. Before attempting to provision a device, ensure that the device is running Android 6.0 or above.
You need an enrollment token for each device that you want to provision (you can use the same token for multiple devices), when creating a token you can specify a policy that will be applied to the device.
End of explanation
from urllib.parse import urlencode
image = {
'cht': 'qr',
'chs': '500x500',
'chl': enrollment_token['qrCode']
}
qrcode_url = 'https://chart.googleapis.com/chart?' + urlencode(image)
print('Please visit this URL to scan the QR code:', qrcode_url)
enrollment_link = 'https://enterprise.google.com/android/enroll?et=' + enrollment_token['value']
print('Please open this link on your device:', enrollment_link)
Explanation: Embed your enrollment token in either an enrollment link or a QR code, and then follow the provisioning instructions below.
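To confirm that the device actually enrolled, you can list the enterprise's devices. The snippet below is a sketch (it assumes the androidmanagement client and enterprise_name from the cells above, and uses the enterprises().devices().list method of the v1 API):
result = androidmanagement.enterprises().devices().list(
    parent=enterprise_name,
    pageSize=100
).execute()
for device in result.get('devices', []):
    print(device['name'], device.get('state'))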
End of explanation |
6,983 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 7
1) Write a program that, every x seconds, where x is randomly chosen at each iteration (from the interval [a, b], where a and b are given as arguments), prints how many minutes the program has been running (in minutes, with two decimals). The program runs forever.
Step1: 2) Write two functions that check whether a number is prime, and determine which of them is more time-efficient.
3) Find all duplicate files in a directory given as an argument and print the running time. The paths of the groups of duplicate files are written to a file output.txt. (duplicates are determined by content)
4) Write a script that takes a directory as an argument and creates a JSON file with data about all the files in that directory. For each file the following information will be displayed
Step2: 5) Create two scripts that communicate with each other through serialized data. The first script periodically saves a list of all the files in a directory, and the second script adds to an archive all files smaller than 100kb that were modified at most 5 minutes ago (the same file is not added twice).
6) Write a script that prints on which day of the week New Year's Day falls, for the last x years (x is given as an argument).
import time
import random
#import sys
#a = int(sys.argv[1])
#b = int(sys.argv[2])
def wait(x):
time.sleep(x)
def time_cron(a,b):
time_interval = random.uniform(a,b)
# while(1):
# measure process time
t0 = time.clock()
wait(time_interval)
print time.clock() - t0, "seconds process time"
# measure wall time
t0 = time.time()
wait(time_interval)
print time.time() - t0, "seconds wall time"
time_cron(0,2)
Explanation: Lab 7
1) Write a program that, every x seconds, where x is randomly chosen at each iteration (from the interval [a, b], where a and b are given as arguments), prints how many minutes the program has been running (in minutes, with two decimals). The program runs forever.
End of explanation
import os
import json
import hashlib
import time
def get_file_md5(filePath):
h = hashlib.md5()
h.update(open(filePath,"rb").read())
return h.hexdigest()
def get_file_sha256(filePath):
h = hashlib.sha256()
h.update(open(filePath,"rb").read())
return h.hexdigest()
def get_dir_data(dir_path):
json_data = {}
dir_path = os.path.realpath(dir_path)
json_file = open(os.path.basename(dir_path) + '.json', 'w')
print next(os.walk(dir_path))[2]
#print os.path.basename(dir_path)
for dir_file in next(os.walk(dir_path))[2]:
file_data = {}
#file_data["file_name"] = dir_file
file_data[dir_file] = {}
file_data[dir_file]["file_md5"] = get_file_md5(dir_file)
file_data[dir_file]["file_sha256"] = get_file_sha256(dir_file)
file_data[dir_file]["file_size"] = os.path.getsize(dir_file)
file_time = time.gmtime(os.path.getctime(dir_file))
file_data[dir_file]["file_time"] = time.strftime("%Y-%m-%d %I:%M:%S %p", file_time)
file_data[dir_file]["file_path"] = os.path.realpath(dir_file)
#print file_data
json_data.update(file_data)
#print json_data
#print json_data
json_data = json.dumps(json_data, sort_keys = True, indent=4, separators=(',', ': '))
json_file.write( json_data )
json_file.close()
get_dir_data('./')
Explanation: 2) Write two functions that check whether a number is prime, and determine which of them is more time-efficient (a timing sketch follows after this list).
3) Find all duplicate files in a directory given as an argument and print the running time. The paths of the groups of duplicate files are written to a file output.txt. (duplicates are determined by content)
4) Write a script that takes a directory as an argument and creates a JSON file with data about all the files in that directory. For each file the following information is reported: file name, file md5, file sha256, file size (in bytes), when the file was created (in human-readable format) and the absolute path to the file.
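A minimal sketch for exercise 2 (the function names and the use of the timeit module are my own choices, not part of the lab hand-out):
import timeit
def is_prime_naive(n):
    # try every possible divisor below n
    if n < 2:
        return False
    for d in range(2, n):
        if n % d == 0:
            return False
    return True
def is_prime_sqrt(n):
    # only test divisors up to sqrt(n), skipping even numbers
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True
for name in ('is_prime_naive', 'is_prime_sqrt'):
    t = timeit.timeit('%s(104729)' % name, setup='from __main__ import %s' % name, number=100)
    print name, ':', t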
End of explanation
import datetime as dt
def weekday_new_year(x):
today = dt.datetime.today()
current_year = today.year
#print today, '::', current_year
for i in range(0, x):
        print current_year-i, ': ', dt.date(current_year-i, 1, 1).strftime("%A")  # Jan 1 is New Year's Day; .weekday() shows only a number
weekday_new_year(5)
Explanation: 5) Create two scripts that communicate with each other through serialized data. The first script periodically saves a list of all the files in a directory, and the second script adds to an archive all files smaller than 100kb that were modified at most 5 minutes ago (the same file is not added twice); a pickle-based sketch follows below.
6) Write a script that prints on which day of the week New Year's Day falls, for the last x years (x is given as an argument).
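A rough sketch for exercise 5 (the file names, the pickle format and the duplicate-tracking via the archive's namelist are my own assumptions):
# script_a.py -- periodically dump the list of files from a directory
import os, time, pickle, zipfile
def dump_file_list(directory, out_path='file_list.pkl'):
    paths = [os.path.join(directory, f) for f in os.listdir(directory)]
    with open(out_path, 'wb') as out:
        pickle.dump(paths, out)
# script_b.py -- archive small files modified in the last 5 minutes, each at most once
def archive_recent(list_path='file_list.pkl', archive_path='recent.zip'):
    with open(list_path, 'rb') as inp:
        paths = pickle.load(inp)
    already = set()
    if os.path.exists(archive_path):
        already = set(zipfile.ZipFile(archive_path).namelist())
    with zipfile.ZipFile(archive_path, 'a') as zf:
        for p in paths:
            if not os.path.isfile(p):
                continue
            small = os.path.getsize(p) < 100 * 1024
            recent = time.time() - os.path.getmtime(p) <= 5 * 60
            if small and recent and os.path.basename(p) not in already:
                zf.write(p, os.path.basename(p))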
End of explanation |
6,984 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hour of Code 2015
For Mr. Clifford's Class (5C)
Perry Grossman
December 2015
Introduction
From the Hour of Code to the Power of Code
How to use programming skills for data analysis, or "data science," the new, hot term
<img src="http
Step1: Some Basic Things
Leveraging a tutorial by David Beazley, Ian Stokes-Rees and Continuum Analytics
Step2: Floor numbering is the numbering scheme used for a building's floors. There are two major schemes in use across the world. In one system, used in the majority of Europe, the ground floor is the floor on the ground and often has no number or is assigned the number zero. Therefore, the next floor up is assigned the number 1 and is the first floor.
The other system, used primarily in the United States and Canada, counts the bottom floor as number 1 or first floor.
https | Python Code:
# you can also access this directly:
from PIL import Image
im = Image.open("DataScienceProcess.jpg")
im
#path=\'DataScienceProcess.jpg'
#image=Image.open(path)
Explanation: Hour of Code 2015
For Mr. Clifford's Class (5C)
Perry Grossman
December 2015
Introduction
From the Hour of Code to the Power of Code
How to use programming skills for data analysis, or "data science," the new, hot term
<img src="http://www.niemanlab.org/images/drew-conway-data-science-venn-diagram.jpg">
<img src="http://qph.is.quoracdn.net/main-qimg-3504cc03d0a1581096eba9ef97cfd7eb?convert_to_webp=true">
End of explanation
# Comments
# ls list of the files in this folder. See below.
This line will make an error because this line is not python code and this is a code cell.
# Leveraging
#http://localhost:8888/notebooks/Dropbox/Python/Harvard%20SEAS%20Tutorial/python-mastery-isr19-master/1-PythonReview.ipynb
ls # NOT PYTHON! command line
pwd # ALSO NOT PYTHON! Shows what folder you are in.
# math
1+2
4000*3
import math
math.sqrt(2)
2 ** (0.5)
637*532.6
from __future__ import division
1/2
(8+5)*4
# Create a variable
name = 'Perry Grossman'
# Print the variable
name
name[6]
Explanation: Some Basic Things
Leveraging a tutorial by David Beazley, Ian Stokes-Rees and Continuum Analytics:
http://localhost:8888/notebooks/Dropbox/Python/Harvard%20SEAS%20Tutorial/python-mastery-isr19-master/1-PythonReview.ipynb
and other resources
End of explanation
from functools import partial
# https://docs.python.org/2/library/functools.html
from random import choice, randint
choice('yes no maybe'.split()) # split is a method
for i in range(10):
print("Call me " + choice('yes no maybe'.split()))
randint(1, 6)
# If you need dice, try this:
roll = partial(randint, 1, 20)
roll()
# how would you make 20 sided dice?
# Create a list of numbers
vals = [3, -8, 2, 7, 6, 2, 5, 12, 4, 9]
#Find the even numbers
evens = []
for v in vals:
if v%2 == 0:
evens.append(v)
#How is this working?
evens
squares = []
for v in vals:
squares.append(v*v)
squares
bigsquares = []
for v in vals:
s = v*v
if s > 10:
bigsquares.append(s)
bigsquares
Explanation: Floor numbering is the numbering scheme used for a building's floors. There are two major schemes in use across the world. In one system, used in the majority of Europe, the ground floor is the floor on the ground and often has no number or is assigned the number zero. Therefore, the next floor up is assigned the number 1 and is the first floor.
The other system, used primarily in the United States and Canada, counts the bottom floor as number 1 or first floor.
https://en.wikipedia.org/wiki/Storey
End of explanation |
6,985 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examples and Exercises from Think Stats, 2nd Edition
http
Step1: Hypothesis testing
The following is a version of thinkstats2.HypothesisTest with just the essential methods
Step2: And here's an example that uses it to compute the p-value of an experiment where we toss a coin 250 times and get 140 heads.
Step3: The p-value turns out to be about 7%, which is considered on the border of statistical significance.
Step4: Permutation test
To compute the p-value of an observed difference in means, we can assume that there is no difference between the groups and generate simulated results by shuffling the data.
Step5: Here's an example where we test the observed difference in pregnancy length for first babies and others.
Step6: The p-value is about 17%, which means it is plausible that the observed difference is just the result of random sampling, and might not be generally true in the population.
Step7: Here's the distribution of the test statistic (the difference in means) over many simulated samples
Step8: Under the null hypothesis, we often see differences bigger than the observed difference.
Step9: If the hypothesis under test is that first babies come late, the appropriate test statistic is the raw difference between first babies and others, rather than the absolute value of the difference. In that case, the p-value is smaller, because we are testing a more specific hypothesis.
Step10: But in this example, the result is still not statistically significant.
Difference in standard deviation
In this framework, it is easy to use other test statistics. For example, if we think the variance for first babies might be higher, we can run this test
Step11: But that's not statistically significant either.
Testing correlation
To check whether an observed correlation is statistically significant, we can run a permutation test with a different test statistic.
Step12: Here's an example testing the correlation between birth weight and mother's age.
Step13: The reported p-value is 0, which means that in 1000 trials we didn't see a correlation, under the null hypothesis, that exceeded the observed correlation. That means that the p-value is probably smaller than $1/1000$, but it is not actually 0.
To get a sense of how unexpected the observed value is under the null hypothesis, we can compare the actual correlation to the largest value we saw in the simulations.
Step14: Testing proportions
Here's an example that tests whether the outcome of rolling a six-sided die is suspicious, where the test statistic is the total absolute difference between the observed outcomes and the expected long-term averages.
Step15: Here's an example using the data from the book
Step16: The observed deviance from the expected values is not statistically significant.
By convention, it is more common to test data like this using the chi-squared statistic
Step17: Using this test, we get a smaller p-value
Step18: Taking this result at face value, we might consider the data statistically significant, but considering the results of both tests, I would not draw any strong conclusions.
Chi-square test of pregnancy length
Step19: If we specifically test the deviations of first babies and others from the expected number of births in each week of pregnancy, the results are statistically significant with a very small p-value. But at this point we have run so many tests, we should not be surprised to find at least one that seems significant.
Step21: Power
Here's the function that estimates the probability of a non-significant p-value even is there really is a difference between the groups.
Step22: In this example, the false negative rate is 70%, which means that the power of the test (probability of statistical significance if the actual difference is 0.078 weeks) is only 30%.
Exercises
Exercise
Step23: Exercise | Python Code:
from __future__ import print_function, division
%matplotlib inline
import numpy as np
import random
import thinkstats2
import thinkplot
Explanation: Examples and Exercises from Think Stats, 2nd Edition
http://thinkstats2.com
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
class HypothesisTest(object):
def __init__(self, data):
self.data = data
self.MakeModel()
self.actual = self.TestStatistic(data)
def PValue(self, iters=1000):
self.test_stats = [self.TestStatistic(self.RunModel())
for _ in range(iters)]
count = sum(1 for x in self.test_stats if x >= self.actual)
return count / iters
def TestStatistic(self, data):
raise UnimplementedMethodException()
def MakeModel(self):
pass
def RunModel(self):
raise UnimplementedMethodException()
Explanation: Hypothesis testing
The following is a version of thinkstats2.HypothesisTest with just the essential methods:
End of explanation
class CoinTest(HypothesisTest):
def TestStatistic(self, data):
heads, tails = data
test_stat = abs(heads - tails)
return test_stat
def RunModel(self):
heads, tails = self.data
n = heads + tails
sample = [random.choice('HT') for _ in range(n)]
hist = thinkstats2.Hist(sample)
data = hist['H'], hist['T']
return data
Explanation: And here's an example that uses it to compute the p-value of an experiment where we toss a coin 250 times and get 140 heads.
End of explanation
ct = CoinTest((140, 110))
pvalue = ct.PValue()
pvalue
Explanation: The p-value turns out to be about 7%, which is considered on the border of statistical significance.
End of explanation
class DiffMeansPermute(thinkstats2.HypothesisTest):
def TestStatistic(self, data):
group1, group2 = data
test_stat = abs(group1.mean() - group2.mean())
return test_stat
def MakeModel(self):
group1, group2 = self.data
self.n, self.m = len(group1), len(group2)
self.pool = np.hstack((group1, group2))
def RunModel(self):
np.random.shuffle(self.pool)
data = self.pool[:self.n], self.pool[self.n:]
return data
Explanation: Permutation test
To compute the p-value of an observed difference in means, we can assume that there is no difference between the groups and generate simulated results by shuffling the data.
End of explanation
import first
live, firsts, others = first.MakeFrames()
data = firsts.prglngth.values, others.prglngth.values
Explanation: Here's an example where we test the observed difference in pregnancy length for first babies and others.
End of explanation
ht = DiffMeansPermute(data)
pvalue = ht.PValue()
pvalue
Explanation: The p-value is about 17%, which means it is plausible that the observed difference is just the result of random sampling, and might not be generally true in the population.
End of explanation
ht.PlotCdf()
thinkplot.Config(xlabel='test statistic',
ylabel='CDF')
Explanation: Here's the distribution of the test statistic (the difference in means) over many simulated samples:
End of explanation
class DiffMeansOneSided(DiffMeansPermute):
def TestStatistic(self, data):
group1, group2 = data
test_stat = group1.mean() - group2.mean()
return test_stat
Explanation: Under the null hypothesis, we often see differences bigger than the observed difference.
End of explanation
ht = DiffMeansOneSided(data)
pvalue = ht.PValue()
pvalue
Explanation: If the hypothesis under test is that first babies come late, the appropriate test statistic is the raw difference between first babies and others, rather than the absolute value of the difference. In that case, the p-value is smaller, because we are testing a more specific hypothesis.
End of explanation
class DiffStdPermute(DiffMeansPermute):
def TestStatistic(self, data):
group1, group2 = data
test_stat = group1.std() - group2.std()
return test_stat
ht = DiffStdPermute(data)
pvalue = ht.PValue()
pvalue
Explanation: But in this example, the result is still not statistically significant.
Difference in standard deviation
In this framework, it is easy to use other test statistics. For example, if we think the variance for first babies might be higher, we can run this test:
End of explanation
class CorrelationPermute(thinkstats2.HypothesisTest):
def TestStatistic(self, data):
xs, ys = data
test_stat = abs(thinkstats2.Corr(xs, ys))
return test_stat
def RunModel(self):
xs, ys = self.data
xs = np.random.permutation(xs)
return xs, ys
Explanation: But that's not statistically significant either.
Testing correlation
To check whether an observed correlation is statistically significant, we can run a permutation test with a different test statistic.
End of explanation
cleaned = live.dropna(subset=['agepreg', 'totalwgt_lb'])
data = cleaned.agepreg.values, cleaned.totalwgt_lb.values
ht = CorrelationPermute(data)
pvalue = ht.PValue()
pvalue
Explanation: Here's an example testing the correlation between birth weight and mother's age.
End of explanation
ht.actual, ht.MaxTestStat()
Explanation: The reported p-value is 0, which means that in 1000 trials we didn't see a correlation, under the null hypothesis, that exceeded the observed correlation. That means that the p-value is probably smaller than $1/1000$, but it is not actually 0.
To get a sense of how unexpected the observed value is under the null hypothesis, we can compare the actual correlation to the largest value we saw in the simulations.
End of explanation
class DiceTest(thinkstats2.HypothesisTest):
def TestStatistic(self, data):
observed = data
n = sum(observed)
expected = np.ones(6) * n / 6
test_stat = sum(abs(observed - expected))
return test_stat
def RunModel(self):
n = sum(self.data)
values = [1, 2, 3, 4, 5, 6]
rolls = np.random.choice(values, n, replace=True)
hist = thinkstats2.Hist(rolls)
freqs = hist.Freqs(values)
return freqs
Explanation: Testing proportions
Here's an example that tests whether the outcome of a rolling a six-sided die is suspicious, where the test statistic is the total absolute difference between the observed outcomes and the expected long-term averages.
End of explanation
data = [8, 9, 19, 5, 8, 11]
dt = DiceTest(data)
pvalue = dt.PValue(iters=10000)
pvalue
Explanation: Here's an example using the data from the book:
End of explanation
class DiceChiTest(DiceTest):
def TestStatistic(self, data):
observed = data
n = sum(observed)
expected = np.ones(6) * n / 6
test_stat = sum((observed - expected)**2 / expected)
return test_stat
Explanation: The observed deviance from the expected values is not statistically significant.
By convention, it is more common to test data like this using the chi-squared statistic:
End of explanation
dt = DiceChiTest(data)
pvalue = dt.PValue(iters=10000)
pvalue
Explanation: Using this test, we get a smaller p-value:
End of explanation
class PregLengthTest(thinkstats2.HypothesisTest):
def MakeModel(self):
firsts, others = self.data
self.n = len(firsts)
self.pool = np.hstack((firsts, others))
pmf = thinkstats2.Pmf(self.pool)
self.values = range(35, 44)
self.expected_probs = np.array(pmf.Probs(self.values))
def RunModel(self):
np.random.shuffle(self.pool)
data = self.pool[:self.n], self.pool[self.n:]
return data
def TestStatistic(self, data):
firsts, others = data
stat = self.ChiSquared(firsts) + self.ChiSquared(others)
return stat
def ChiSquared(self, lengths):
hist = thinkstats2.Hist(lengths)
observed = np.array(hist.Freqs(self.values))
expected = self.expected_probs * len(lengths)
stat = sum((observed - expected)**2 / expected)
return stat
Explanation: Taking this result at face value, we might consider the data statistically significant, but considering the results of both tests, I would not draw any strong conclusions.
Chi-square test of pregnancy length
End of explanation
data = firsts.prglngth.values, others.prglngth.values
ht = PregLengthTest(data)
p_value = ht.PValue()
print('p-value =', p_value)
print('actual =', ht.actual)
print('ts max =', ht.MaxTestStat())
Explanation: If we specifically test the deviations of first babies and others from the expected number of births in each week of pregnancy, the results are statistically significant with a very small p-value. But at this point we have run so many tests, we should not be surprised to find at least one that seems significant.
End of explanation
def FalseNegRate(data, num_runs=1000):
"""Computes the chance of a false negative based on resampling.
data: pair of sequences
num_runs: how many experiments to simulate
returns: float false negative rate
"""
group1, group2 = data
count = 0
for i in range(num_runs):
sample1 = thinkstats2.Resample(group1)
sample2 = thinkstats2.Resample(group2)
ht = DiffMeansPermute((sample1, sample2))
p_value = ht.PValue(iters=101)
if p_value > 0.05:
count += 1
return count / num_runs
neg_rate = FalseNegRate(data)
neg_rate
Explanation: Power
Here's the function that estimates the probability of a non-significant p-value even if there really is a difference between the groups.
End of explanation
# Solution goes here
def tests(live, iterations=1000):
    # permutation test on pregnancy length for a (sub)sample of the live births
    firsts = live[live.birthord == 1]
    others = live[live.birthord != 1]
    data = firsts.prglngth.values, others.prglngth.values
    hyp_test = DiffMeansPermute(data)
    p_val1 = hyp_test.PValue(iters=iterations)
    return p_val1

tests(thinkstats2.SampleRows(live, 1000))
# Solution goes here
# Solution goes here
Explanation: In this example, the false negative rate is 70%, which means that the power of the test (probability of statistical significance if the actual difference is 0.078 weeks) is only 30%.
Exercises
Exercise: As sample size increases, the power of a hypothesis test increases, which means it is more likely to be positive if the effect is real. Conversely, as sample size decreases, the test is less likely to be positive even if the effect is real.
To investigate this behavior, run the tests in this chapter with different subsets of the NSFG data. You can use thinkstats2.SampleRows to select a random subset of the rows in a DataFrame.
What happens to the p-values of these tests as sample size decreases? What is the smallest sample size that yields a positive test?
End of explanation
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
Explanation: Exercise: In Section 9.3, we simulated the null hypothesis by permutation; that is, we treated the observed values as if they represented the entire population, and randomly assigned the members of the population to the two groups.
An alternative is to use the sample to estimate the distribution for the population, then draw a random sample from that distribution. This process is called resampling. There are several ways to implement resampling, but one of the simplest is to draw a sample with replacement from the observed values, as in Section 9.10.
Write a class named DiffMeansResample that inherits from DiffMeansPermute and overrides RunModel to implement resampling, rather than permutation.
Use this model to test the differences in pregnancy length and birth weight. How much does the model affect the results?
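One possible sketch of the resampling variant described above (an illustration only, reusing the DiffMeansPermute class and the numpy import from earlier cells; not necessarily the book's official solution):
class DiffMeansResample(DiffMeansPermute):
    # draw both groups with replacement from the pooled values instead of permuting
    def RunModel(self):
        group1 = np.random.choice(self.pool, self.n, replace=True)
        group2 = np.random.choice(self.pool, self.m, replace=True)
        return group1, group2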
End of explanation |
6,986 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This model builds a simple hierarchical mixed-effect model to look at dose response from 5 clinical trials.
In this example we model the mean response from 5 different clinical trials. The modelling examines the inter-trial variation as well as the residual variation in the data.
The response data is the percentage change in LDL following dosing with a statin.
The trials vary in their size and in the dose levels given.
This example follows the video on YouTube
Step1: Load in data and view
Step2: Manipulate data
Note that the data passed into the model must be a NumPy array. Although pandas Series work OK in this example, in more complex examples I have found that pandas Series sometimes throw indexing errors
Step3: Now build the model
The model is a simple emax model.
The independent variable is dose; the dependent variable is Mean_response, with log-normal residual variation.
The log of the effect is normally distributed and the variance is adjusted for sample size.
We place inter-trial variation on emax and on the baseline e0.
Step4: Initiate the Bayesian sampling
Step5: Plot the traces and take a look
Note I could not get a similar output to the video, although the estimates of e0, emax and ed50 look pretty good. Perhaps there is an issue with the model. If you come across this work and recognise my errors please let me know. Thanks!
Step6: Plot the predicted values from trace on top of the original data
Step7: Create dataframe to plot each study separately
Step8: And now plot individual studies using seaborn | Python Code:
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
from pymc3 import Model, Normal, Lognormal, Uniform, trace_to_dataframe, df_summary
Explanation: This model builds a simple hierarchical mixed-effect model to look at dose response from 5 clinical trials.
In this example we model the mean response from 5 different clinical trials. The modelling examines the inter-trial variation as well as the residual variation in the data.
The response data is the percentage change in LDL following dosing with a statin.
The trials vary in their size and in the dose levels given.
This example follows the video on YouTube:
https://youtu.be/U9Nf-ZYHRQA?list=PLvLDbH2lpyXNGV8mpBdF7EFK9LQJzGL-Y
Beginning around 40mins
Load in required libraries
End of explanation
data = pd.read_csv('/5studies.csv')
data.head()
plt.figure(figsize =(10,10))
for study in data.Study.unique():
cols = ['red', 'black', 'blue', 'brown', 'green']
x = data.Dose[data.Study ==study]
y = data.Mean_response[data.Study ==study]
col = max(data.Study)
plt.scatter(x, y, c=cols[study-1])
plt.plot(x,y, c=cols[study-1])
plt.xlabel('Dose')
plt.ylabel('Mean_respnse')
Explanation: Load in data and view
End of explanation
mean_response = np.array(data.Mean_response)
dose = np.array(data.Dose)
# Since we are interested in modelling the inter study variation
# we must create some variables to pass into the model parameters
# How many studies...
n_studies = len(data.Study.unique())
# An array that is used to index the studies, reduced by -1 as the index starts at 0 not 1
study = np.array(data.Study.values-1)
# array to adjust sigma for sample size
n= np.array(data.n)
Explanation: Manipulate data
Note that the data passed into the model must be a NumPy array. Although pandas Series work OK in this example, in more complex examples I have found that pandas Series sometimes throw indexing errors
End of explanation
pkpd_model = Model()
with pkpd_model:
# Hyperparameter Priors
# for the uniform values, as they are passed in to a Lognormal distribution
# as the spread variable, they reflect a logged value, so upper =4 is equivalent to
# tau = 10000
mu_e0 = Normal('mu_e0', mu=0, sd=100)
omega_e0 = Uniform('omega_e0', lower=0, upper =4)
mu_emax = Normal('mu_emax', mu=0, sd=100)
omega_emax = Uniform('omega_emax', lower=0, upper=4)
# Note how the n_studies variable is passed in with the shape argument
# for e0 and emax
e0 = Lognormal('e0', mu = mu_e0, tau= omega_e0, shape=n_studies)
emax= Lognormal('emax', mu = mu_emax, tau = omega_emax, shape=n_studies)
ed50 = Lognormal('ed50', mu=0, tau=4)
# Normalise sigma for sample size
sigma = np.sqrt(np.square(Uniform('sigma', lower = 0, upper = 10000 ))/n)
# Expected value of outcome
# Note how the study index variable is applied with e0 and emax
resp_median = np.log(e0[study] + (emax[study]*dose)/(ed50+dose))
# Likelihood (sampling distribution) of observations and
resp = Lognormal('resp', mu=resp_median, tau =sigma, observed =mean_response)
resp_pred = Lognormal('resp_pred', mu=resp_median, tau =sigma, shape =len(dose))
Explanation: Now build the model
The model is a simple emax model.
The independent variable is dose; the dependent variable is Mean_response, with log-normal residual variation.
The log of the effect is normally distributed and the variance is adjusted for sample size.
We place inter-trial variation on emax and on the baseline e0.
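For reference, the dose-response relationship encoded in resp_median above is the standard Emax curve, which in LaTeX reads:
E(d) = e_0 + \frac{e_{max} \, d}{ed_{50} + d}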
End of explanation
import scipy
from pymc3 import find_MAP, NUTS, sample
with pkpd_model:
# obtain starting values via MAP
start = find_MAP(fmin=scipy.optimize.fmin_powell)
# draw 2000 posterior samples
trace = sample(2000, start=start)
Explanation: Initiate the Bayesian sampling
End of explanation
from pymc3 import traceplot
t =traceplot(trace, lines={k: v['mean'] for k, v in df_summary(trace).iterrows()})
Explanation: Plot the traces and take a look
Note I could not get a similar output to the video, although the estimates of e0, emax and ed50 look pretty good. Perhaps there is an issue with the model. If you come across this work and recognise my errors please let me know. Thanks!
End of explanation
t_df = trace_to_dataframe(trace)
filter_col = [col for col in list(t_df) if col.startswith('resp_pred__')]
col= pd.DataFrame()
to_col =pd.DataFrame()
for n, cols in enumerate(filter_col):
to_col['resp_pred']=t_df[cols]
to_col['dose'] = dose[n]
col = pd.concat([col, to_col])
plt.figure(figsize=(6,6))
plt.scatter(col['dose'], col['resp_pred'], alpha =0.02, s= 15 ,color ='grey')
plt.scatter(data.Dose, data.Mean_response, alpha =1, color='red')
means = col.groupby('dose', as_index=False).aggregate(np.mean)
plt.plot(means.dose, means.resp_pred)
plt.axis([-10, 100, 0, 15])
Explanation: Plot the predicted values from trace on top of the original data
End of explanation
col= np.empty([1,5])
for n, cols in enumerate(filter_col):
a = study[n]+1
b = dose[n]
c = t_df[cols].quantile(q=0.5)
d = t_df[cols].quantile(q=0.95)
e = t_df[cols].quantile(q=0.05)
f = np.array([a,b,c,d,e]).reshape(1,5)
col = np.concatenate((col,f))
col = np.delete(col, (0), axis=0)
col = pd.DataFrame(col, columns=['study', 'dose', 'mean', 'max', 'min'])
col = col.sort_index(by=['study'])
col.head()
Explanation: Create dataframe to plot each study separately
End of explanation
effect= sns.FacetGrid(col, col="study",hue ="study" ,col_wrap=3, size=3, sharex=True)
effect.map(plt.plot, "dose", "mean", marker="o", ms=4)
effect.map(plt.plot, "dose", "max", linestyle ='--')
effect.map(plt.plot, "dose", "min", linestyle ='--')
Explanation: And now plot individual studies using seaborn
End of explanation |
6,987 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook was created by Sergey Tomin for Workshop
Step1: Outline
Preliminaries
Step2: <a id="tutorial1"></a>
Tutorial N1. Double Bend Achromat.
We designed a simple lattice to demonstrate the basic concepts and syntax of the optics functions calculation.
Also, we chose DBA to demonstrate the periodic solution for the optical functions calculation.
Step3: Creating lattice
Ocelot has following elements
Step4: hint
Step5: Optical function calculation
Uses | Python Code:
from IPython.display import Image
#Image(filename='gui_example.png')
Explanation: This notebook was created by Sergey Tomin for Workshop: Designing future X-ray FELs. Source and license info is on GitHub. August 2016.
An Introduction to Ocelot
Ocelot is a multiphysics simulation toolkit designed for studying FEL and storage ring based light sources. Ocelot is written in Python. Its central concept is the writing of python's scripts for simulations with the usage of Ocelot's modules and functions and the standard Python libraries.
Ocelot includes following main modules:
* Charged particle beam dynamics module (CPBD)
- optics
- tracking
- matching
- collective effects
- Space Charge (true 3D Laplace solver)
- CSR (Coherent Synchrotron Radiation) (1D model with arbitrary number of dipole) (under development).
- Wakefields (Taylor expansion up to second order for arbitrary geometry).
- MOGA (Multi Objective Genetics Algorithm). (under development, but we have already applied it to a storage ring application)
* Native module for spontaneous radiation calculation
* FEL calculations: interface to GENESIS and pre/post-processing
* Modules for online beam control and online optimization of accelerator performances. Work1, work2, work3.
Ocelot extensively uses Python's NumPy (Numerical Python) and SciPy (Scientific Python) libraries, which enable efficient in-core numerical and scientific computation within Python and give you access to various mathematical and optimization techniques and algorithms. To produce high quality figures Python's matplotlib library is used.
It is an open source project and it is being developed by physicists from The European XFEL, DESY (Germany), NRC Kurchatov Institute (Russia).
We still have no documentation but you can find a lot of examples in ocelot/demos/
Ocelot user profile
Ocelot is designed for researchers who want to have the flexibility that is given by high-level languages such as Matlab, Python (with Numpy and SciPy) or Mathematica.
However, if someone needs a GUI, it can be developed using Python libraries such as PyQtGraph or PyQt.
For example, you can see GUI for SASE optimization (uncomment and run next block)
End of explanation
import IPython
print('IPython:', IPython.__version__)
import numpy
print('numpy:', numpy.__version__)
import scipy
print('scipy:', scipy.__version__)
import matplotlib
print('matplotlib:', matplotlib.__version__)
import ocelot
print('ocelot:', ocelot.__version__)
Explanation: Outline
Preliminaries: Setup & introduction
Beam dynamics
Tutorial N1. Linear optics.. Web version.
Linear optics. DBA.
Tutorial N2. Tracking.. Web version.
Linear optics of the European XFEL Injector
Tracking. First and second order.
Tutorial N3. Space Charge.. Web version.
Tracking with SC effects.
Tutorial N4. Wakefields.. Web version.
Tracking with Wakefields
FEL calculation
Tutorial N5: Genesis preprocessor. Web version.
Tutorial N6. Genesis postprocessor. Web version.
All IPython (jupyter) notebooks (.ipynb) have analogues in the form of python scripts (.py).
All these notebooks as well as additional files (beam distribution, wakes, ...) you can download here.
Preliminaries
The tutorial includes 4 simple examples dedicated to beam dynamics. However, you should have a basic understanding of computer programming terminology. A basic understanding of the Python language is a plus.
This tutorial requires the following packages:
Python version 2.7 or 3.4-3.5
numpy version 1.8 or later: http://www.numpy.org/
scipy version 0.15 or later: http://www.scipy.org/
matplotlib version 1.5 or later: http://matplotlib.org/
ipython version 2.4 or later, with notebook support: http://ipython.org
The easiest way to get these is to download and install the (very large) Anaconda software distribution.
Alternatively, you can download and install miniconda.
The following command will install all required packages:
$ conda install numpy scipy matplotlib ipython-notebook
Ocelot installation
you have to download from GitHub zip file.
Unzip ocelot-master.zip to your working folder ../your_working_dir/.
Rename folder ../your_working_dir/ocelot-master to ../your_working_dir/ocelot.
Add ../your_working_dir/ to PYTHONPATH
Windows 7: go to Control Panel -> System and Security -> System -> Advance System Settings -> Environment Variables.
and in User variables add ../your_working_dir/ to PYTHONPATH. If variable PYTHONPATH does not exist, create it
Variable name: PYTHONPATH
Variable value: ../your_working_dir/
- Linux:
$ export PYTHONPATH=../your_working_dir/:$PYTHONPATH
To launch "ipython notebook" or "jupyter notebook"
in command line run following commands:
$ ipython notebook
or
$ ipython notebook --notebook-dir="path_to_your_directory"
or
$ jupyter notebook --notebook-dir="path_to_your_directory"
Checking your installation
You can run the following code to check the versions of the packages on your system:
(in IPython notebook, press shift and return together to execute the contents of a cell)
End of explanation
from __future__ import print_function
# the output of plotting commands is displayed inline within frontends,
# directly below the code cell that produced it
%matplotlib inline
# import from Ocelot main modules and functions
from ocelot import *
# import from Ocelot graphical modules
from ocelot.gui.accelerator import *
Explanation: <a id="tutorial1"></a>
Tutorial N1. Double Bend Achromat.
We designed a simple lattice to demonstrate the basic concepts and syntax of the optics functions calculation.
Also, we chose DBA to demonstrate the periodic solution for the optical functions calculation.
End of explanation
# defining of the drifts
D1 = Drift(l=2.)
D2 = Drift(l=0.6)
D3 = Drift(l=0.3)
D4 = Drift(l=0.7)
D5 = Drift(l=0.9)
D6 = Drift(l=0.2)
# defining of the quads
Q1 = Quadrupole(l=0.4, k1=-1.3)
Q2 = Quadrupole(l=0.8, k1=1.4)
Q3 = Quadrupole(l=0.4, k1=-1.7)
Q4 = Quadrupole(l=0.5, k1=1.3)
# defining of the bending magnet
B = Bend(l=2.7, k1=-.06, angle=2*pi/16., e1=pi/16., e2=pi/16.)
# defining of the sextupoles
SF = Sextupole(l=0.01, k2=1.5) #random value
SD = Sextupole(l=0.01, k2=-1.5) #random value
# cell creating
cell = (D1, Q1, D2, Q2, D3, Q3, D4, B, D5, SD, D5, SF, D6, Q4, D6, SF, D5, SD, D5, B, D4, Q3, D3, Q2, D2, Q1, D1)
Explanation: Creating lattice
Ocelot has the following elements: Drift, Quadrupole, Sextupole, Octupole, Bend, SBend, RBend, Edge, Multipole, Hcor, Vcor, Solenoid, Cavity, Monitor, Marker, Undulator.
End of explanation
lat = MagneticLattice(cell)
# to see total lenth of the lattice
print("length of the cell: ", lat.totalLen, "m")
Explanation: hint: to see a short description of a function, put the cursor inside its parentheses and press Shift-Tab, or type ? before the function. To expand the dialog window, press the + button.
The cell is a list of simple objects which contain the physical information of the lattice elements, such as length, strength, voltage and so on. In order to create a transport map for every element and bind it to the lattice, we have to create a new Ocelot object, MagneticLattice(), which does these things automatically.
MagneticLattice(sequence, start=None, stop=None, method=MethodTM()):
* sequence - list of the elements,
other parameters we will consider in tutorial N2.
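For example, a hypothetical sub-lattice built from the elements defined above (purely an illustration of the start/stop arguments, assuming they accept element objects from the sequence as in the Ocelot demos):
lat_part = MagneticLattice(cell, start=Q1, stop=Q3)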
End of explanation
tws=twiss(lat, nPoints=1000)
# to see twiss paraments at the begining of the cell, uncomment next line
# print(tws[0])
# to see twiss paraments at the end of the cell, uncomment next line
# print(tws[-1])
# plot optical functions.
plot_opt_func(lat, tws, top_plot = ["Dx", "Dy"], legend=False, font_size=10)
plt.show()
# you also can use standard matplotlib functions for plotting
#s = [tw.s for tw in tws]
#bx = [tw.beta_x for tw in tws]
#plt.plot(s, bx)
#plt.show()
# you can play with quadrupole strength and try to make achromat
Q4.k1 = 1.18
# to make achromat uncomment next line
# Q4.k1 = 1.18543769836
# To use matching function, please see ocelot/demos/ebeam/dba.py
# updating transfer maps after changing element parameters.
lat.update_transfer_maps()
# recalculate twiss parameters
tws=twiss(lat, nPoints=1000)
plot_opt_func(lat, tws, legend=False)
plt.show()
Explanation: Optical function calculation
Uses:
* twiss() function and,
* Twiss() object contains twiss parameters and other information at one certain position (s) of lattice
To calculate twiss parameters you have to run the twiss(lattice, tws0=None, nPoints=None) function. If you want to get a periodic solution, leave tws0 at its default.
You can change the number of points over the cell. If nPoints=None, then twiss parameters are calculated at the end of each element.
twiss() function returns list of Twiss() objects.
You will see the Twiss object contains more information than just twiss parameters.
End of explanation |
6,988 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
OLGA tpl files, examples and howto
For a tpl file the following methods are available
Step1: Trend selection
A tpl file may contain hundreds of trends, in particular for complex networks. For this reason a filtering method is quite useful. A trend can be specified in an OLGA input file in different ways, so the identification of a single trend may not be trivial.
The easiest way is to filter all the trends using patters, the command tpl.filter_trends("PT") filters all the pressure trends (or better, all the trends with "PT" in the description, if you have defined a temperature trend in the position "PTTOPSIDE", for example, this trend will be selected too).
The resulting python dictionaly will have a unique index for each filtered trend that can be used to identify the interesting trend(s).
In case of an emply pattern all the available trends will be reported.
Step2: or
Step3: The same outpout can be reported as a pandas dataframe
Step4: The view_trends method provides the same info better arranged
Step5: Dump to excel
To dump all the variables in an excel file use tpl.to_excel()
If no path is provided an excel file with the same name of the tpl file is generated in the working folder. Depending on the tpl size this may take a while.
Extract a specific variable
Once you know the variable(s) index you are interested in (see the filtering paragraph above for more info) you can extract it (or them) and use the data directly in python.
Let's assume you are interested in the inlet pressure and the outlet temperature
Step6: Our targets are
Step7: The tpl object now has the four trends available in the data attribute
Step8: while the label attibute stores the variable type as a dictionary
Step9: Data processing
The results available in the data attribute are numpy arrays and can be easily manipulated and plotted | Python Code:
import pyfas as fa  # assumed import for the Tpl class used below; not shown in this excerpt
tpl_path = '../../pyfas/test/test_files/'
fname = '11_2022_BD.tpl'
tpl = fa.Tpl(tpl_path+fname)
Explanation: OLGA tpl files, examples and howto
For a tpl file the following methods are available:
<b>filter_data</b> - return a filtered subset of trends
<b>extract</b> - extract a single trend variable
<b>to_excel</b> - dump all the data to an excel file
The usual workflow should be:
Load the correct tpl
Select the desired variable(s)
Extract the results or dump all the variables to an excel file
Post-process your data in Excel or in the notebook itself
Tpl loading
To load a specific tpl file the correct path and filename have to be provided:
End of explanation
tpl.filter_data('PT')
Explanation: Trend selection
A tpl file may contain hundreds of trends, in particular for complex networks. For this reason a filtering method is quite useful. A trend can be specified in an OLGA input file in different ways, so the identification of a single trend may not be trivial.
The easiest way is to filter all the trends using patters, the command tpl.filter_trends("PT") filters all the pressure trends (or better, all the trends with "PT" in the description, if you have defined a temperature trend in the position "PTTOPSIDE", for example, this trend will be selected too).
The resulting python dictionaly will have a unique index for each filtered trend that can be used to identify the interesting trend(s).
In case of an emply pattern all the available trends will be reported.
End of explanation
tpl.filter_data("'POSITION:' 'EXIT'")
Explanation: or
End of explanation
pd.DataFrame(tpl.filter_data('PT'), index=("Trends",)).T
Explanation: The same output can be reported as a pandas dataframe:
End of explanation
tpl.view_trends('PT')
Explanation: The view_trends method provides the same info better arranged:
End of explanation
tpl.view_trends('TM')
tpl.view_trends('PT')
Explanation: Dump to excel
To dump all the variables in an excel file use tpl.to_excel()
If no path is provided, an excel file with the same name as the tpl file is generated in the working folder. Depending on the tpl size this may take a while.
Extract a specific variable
Once you know the variable(s) index you are interested in (see the filtering paragraph above for more info) you can extract it (or them) and use the data directly in python.
Let's assume you are interested in the inlet pressure and the outlet temperature:
End of explanation
# single trend extraction
tpl.extract(11)
tpl.extract(38)
# multiple trends extraction
tpl.extract(12, 37)
Explanation: Our targets are:
<i>variable 11</i> - TM 'POSITION:' 'EXIT' '(C)' 'Fluid temperature'
and
<i>variable 38</i> - PT 'POSITION:' 'TUBINGHEAD' '(PA)' 'Pressure'\
Now we can proceed with the data extraction:
End of explanation
tpl.data.keys()
Explanation: The tpl object now has the four trends available in the data attribute:
End of explanation
tpl.label
Explanation: while the label attribute stores the variable type as a dictionary:
End of explanation
%matplotlib inline
pt_inlet = tpl.data[38]
tm_outlet = tpl.data[11]
fig, ax1 = plt.subplots(figsize=(12, 7));
ax1.grid(True)
p0, = ax1.plot(tpl.time/3600, tm_outlet)
ax1.set_ylabel("Outlet T [C]", fontsize=16)
ax1.set_xlabel("Time [h]", fontsize=16)
ax2 = ax1.twinx()
p1, = ax2.plot(tpl.time/3600, pt_inlet/1e5, 'r')
ax2.grid(False)
ax2.set_ylabel("Inlet P [bara]", fontsize=16)
ax1.tick_params(axis="both", labelsize=16)
ax2.tick_params(axis="both", labelsize=16)
plt.legend((p0, p1), ("Outlet T", "Inlet P"), loc=4, fontsize=16)
plt.title("Inlet P and Outlet T for case FC1", size=20);
Explanation: Data processing
The results available in the data attribute are numpy arrays and can be easily manipulated and plotted:
End of explanation |
6,989 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
.. _tut_viz_raw
Step1: The visualization module (
Step2: The channels are color coded by channel type. Generally MEG channels are
colored in different shades of blue, whereas EEG channels are black. The
channels are also sorted by channel type by default. If you want to use a
custom order for the channels, you can use order parameter of
Step3: Now let's add some ssp projectors to the raw data. Here we read them from a
file and plot them.
Step4: The first three projectors that we see are the SSP vectors from empty room
measurements to compensate for the noise. The fourth one is the average EEG
reference. These are already applied to the data and can no longer be
removed. The next six are the EOG projections that we added. Every data
channel type has two projection vectors each. Let's try the raw browser
again.
Step5: Now click the proj button at the lower right corner of the browser
window. A selection dialog should appear, where you can toggle the projectors
on and off. Notice that the first four are already applied to the data and
toggling them does not change the data. However the newly added projectors
modify the data to get rid of the EOG artifacts. Note that toggling the
projectors here doesn't actually modify the data. This is purely for visually
inspecting the effect. See
Step6: Plotting channel wise power spectra is just as easy. The layout is inferred
from the data by default when plotting topo plots. This works for most data,
but it is also possible to define the layouts by hand. Here we select a
layout with only magnetometer channels and plot it. Then we plot the channel
wise spectra of first 30 seconds of the data. | Python Code:
import os.path as op
import mne
data_path = op.join(mne.datasets.sample.data_path(), 'MEG', 'sample')
raw = mne.io.read_raw_fif(op.join(data_path, 'sample_audvis_raw.fif'))
events = mne.read_events(op.join(data_path, 'sample_audvis_raw-eve.fif'))
Explanation: .. _tut_viz_raw:
Visualize Raw data
End of explanation
raw.plot(block=True, events=events)
Explanation: The visualization module (:mod:mne.viz) contains all the plotting functions
that work in combination with MNE data structures. Usually the easiest way to
use them is to call a method of the data container. All of the plotting
method names start with plot. If you're using the IPython console, you can
just write raw.plot and ask the interpreter for suggestions with a
tab key.
To visually inspect your raw data, you can use the python equivalent of
mne_browse_raw.
End of explanation
raw.plot_sensors(kind='3d', ch_type='mag')
Explanation: The channels are color coded by channel type. Generally MEG channels are
colored in different shades of blue, whereas EEG channels are black. The
channels are also sorted by channel type by default. If you want to use a
custom order for the channels, you can use order parameter of
:func:raw.plot. The scrollbar on right side of the browser window also
tells us that two of the channels are marked as bad. Bad channels are
color coded gray. By clicking the lines or channel names on the left, you can
mark or unmark a bad channel interactively. You can use +/- keys to adjust
the scale (also = works for magnifying the data). Note that the initial
scaling factors can be set with parameter scalings. If you don't know the
scaling factor for channels, you can automatically set them by passing
scalings='auto'. With pageup/pagedown and home/end keys you can
adjust the amount of data viewed at once. To see all the interactive
features, hit ? or click help in the lower left corner of the
browser window.
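A quick, purely illustrative call combining the order and scalings parameters mentioned above (the channel indices here are arbitrary):
raw.plot(order=[0, 1, 2, 3], scalings='auto', duration=10.)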
We read the events from a file and passed it as a parameter when calling the
method. The events are plotted as vertical lines so you can see how they
align with the raw data.
We can check where the channels reside with plot_sensors. Notice that
this method (along with many other MNE plotting functions) is callable using
any MNE data container where the channel information is available.
End of explanation
projs = mne.read_proj(op.join(data_path, 'sample_audvis_eog-proj.fif'))
raw.add_proj(projs)
raw.plot_projs_topomap()
Explanation: Now let's add some ssp projectors to the raw data. Here we read them from a
file and plot them.
End of explanation
raw.plot()
Explanation: The first three projectors that we see are the SSP vectors from empty room
measurements to compensate for the noise. The fourth one is the average EEG
reference. These are already applied to the data and can no longer be
removed. The next six are the EOG projections that we added. Every data
channel type has two projection vectors each. Let's try the raw browser
again.
End of explanation
raw.plot_psd()
Explanation: Now click the proj button at the lower right corner of the browser
window. A selection dialog should appear, where you can toggle the projectors
on and off. Notice that the first four are already applied to the data and
toggling them does not change the data. However the newly added projectors
modify the data to get rid of the EOG artifacts. Note that toggling the
projectors here doesn't actually modify the data. This is purely for visually
inspecting the effect. See :func:mne.io.Raw.del_proj to actually remove the
projectors.
Raw container also lets us easily plot the power spectra over the raw data.
See the API documentation for more info.
End of explanation
layout = mne.channels.read_layout('Vectorview-mag')
layout.plot()
raw.plot_psd_topo(tmax=30., fmin=5., fmax=60., n_fft=1024, layout=layout)
Explanation: Plotting channel wise power spectra is just as easy. The layout is inferred
from the data by default when plotting topo plots. This works for most data,
but it is also possible to define the layouts by hand. Here we select a
layout with only magnetometer channels and plot it. Then we plot the channel
wise spectra of first 30 seconds of the data.
End of explanation |
6,990 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Doc2Vec Model
Introduces Gensim's Doc2Vec model and demonstrates its use on the
Lee Corpus <https
Step1: Doc2Vec is a core_concepts_model that represents each
core_concepts_document as a core_concepts_vector. This
tutorial introduces the model and demonstrates how to train and assess it.
Here's a list of what we'll be doing
Step2: Define a Function to Read and Preprocess Text
Below, we define a function to
Step3: Let's take a look at the training corpus
Step4: And the testing corpus looks like this
Step5: Notice that the testing corpus is just a list of lists and does not contain
any tags.
Training the Model
Now, we'll instantiate a Doc2Vec model with a vector size of 50 dimensions and
iterate over the training corpus 40 times. We set the minimum word count to
2 in order to discard words with very few occurrences. (Without a variety of
representative examples, retaining such infrequent words can often make a
model worse!) Typical iteration counts in the published Paragraph Vector paper <https
Step6: Build a vocabulary
Step7: Essentially, the vocabulary is a list (accessible via
model.wv.index_to_key) of all of the unique words extracted from the training corpus.
Additional attributes for each word are available using the model.wv.get_vecattr() method,
For example, to see how many times penalty appeared in the training corpus
Step8: Next, train the model on the corpus.
If optimized Gensim (with BLAS library) is being used, this should take no more than 3 seconds.
If the BLAS library is not being used, this should take no more than 2
minutes, so use optimized Gensim with BLAS if you value your time.
Step9: Now, we can use the trained model to infer a vector for any piece of text
by passing a list of words to the model.infer_vector function. This
vector can then be compared with other vectors via cosine similarity.
Step10: Note that infer_vector() does not take a string, but rather a list of
string tokens, which should have already been tokenized the same way as the
words property of original training document objects.
Also note that because the underlying training/inference algorithms are an
iterative approximation problem that makes use of internal randomization,
repeated inferences of the same text will return slightly different vectors.
Assessing the Model
To assess our new model, we'll first infer new vectors for each document of
the training corpus, compare the inferred vectors with the training corpus,
and then return the rank of the document based on self-similarity.
Basically, we're pretending as if the training corpus is some new unseen data
and then seeing how they compare with the trained model. The expectation is
that we've likely overfit our model (i.e., all of the ranks will be less than
2) and so we should be able to find similar documents very easily.
Additionally, we'll keep track of the second ranks for a comparison of less
similar documents.
Step11: Let's count how each document ranks with respect to the training corpus
NB. Results vary between runs due to random seeding and very small corpus
Step12: Basically, greater than 95% of the inferred documents are found to be most
similar to themselves, and about 5% of the time a document is mistakenly most similar to
another document. Checking the inferred-vector against a
training-vector is a sort of 'sanity check' as to whether the model is
behaving in a usefully consistent manner, though not a real 'accuracy' value.
This is great and not entirely surprising. We can take a look at an example
Step13: Notice above that the most similar document (usually the same text) is has a
similarity score approaching 1.0. However, the similarity score for the
second-ranked documents should be significantly lower (assuming the documents
are in fact different) and the reasoning becomes obvious when we examine the
text itself.
We can run the next cell repeatedly to see a sampling other target-document
comparisons.
Step14: Testing the Model
Using the same approach above, we'll infer the vector for a randomly chosen
test document, and compare the document to our model by eye. | Python Code:
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
Explanation: Doc2Vec Model
Introduces Gensim's Doc2Vec model and demonstrates its use on the
Lee Corpus <https://hekyll.services.adelaide.edu.au/dspace/bitstream/2440/28910/1/hdl_28910.pdf>__.
End of explanation
import os
import gensim
# Set file names for train and test data
test_data_dir = os.path.join(gensim.__path__[0], 'test', 'test_data')
lee_train_file = os.path.join(test_data_dir, 'lee_background.cor')
lee_test_file = os.path.join(test_data_dir, 'lee.cor')
Explanation: Doc2Vec is a core_concepts_model that represents each
core_concepts_document as a core_concepts_vector. This
tutorial introduces the model and demonstrates how to train and assess it.
Here's a list of what we'll be doing:
Review the relevant models: bag-of-words, Word2Vec, Doc2Vec
Load and preprocess the training and test corpora (see core_concepts_corpus)
Train a Doc2Vec core_concepts_model model using the training corpus
Demonstrate how the trained model can be used to infer a core_concepts_vector
Assess the model
Test the model on the test corpus
Review: Bag-of-words
.. Note:: Feel free to skip these review sections if you're already familiar with the models.
You may be familiar with the bag-of-words model
<https://en.wikipedia.org/wiki/Bag-of-words_model>_ from the
core_concepts_vector section.
This model transforms each document to a fixed-length vector of integers.
For example, given the sentences:
John likes to watch movies. Mary likes movies too.
John also likes to watch football games. Mary hates football.
The model outputs the vectors:
[1, 2, 1, 1, 2, 1, 1, 0, 0, 0, 0]
[1, 1, 1, 1, 0, 1, 0, 1, 2, 1, 1]
Each vector has 10 elements, where each element counts the number of times a
particular word occurred in the document.
The order of elements is arbitrary.
In the example above, the order of the elements corresponds to the words:
["John", "likes", "to", "watch", "movies", "Mary", "too", "also", "football", "games", "hates"].
Bag-of-words models are surprisingly effective, but have several weaknesses.
First, they lose all information about word order: "John likes Mary" and
"Mary likes John" correspond to identical vectors. There is a solution: bag
of n-grams <https://en.wikipedia.org/wiki/N-gram>__
models consider word phrases of length n to represent documents as
fixed-length vectors to capture local word order but suffer from data
sparsity and high dimensionality.
Second, the model does not attempt to learn the meaning of the underlying
words, and as a consequence, the distance between vectors doesn't always
reflect the difference in meaning. The Word2Vec model addresses this
second problem.
Review: Word2Vec Model
Word2Vec is a more recent model that embeds words in a lower-dimensional
vector space using a shallow neural network. The result is a set of
word-vectors where vectors close together in vector space have similar
meanings based on context, and word-vectors distant to each other have
differing meanings. For example, strong and powerful would be close
together and strong and Paris would be relatively far.
Gensim's :py:class:~gensim.models.word2vec.Word2Vec class implements this model.
With the Word2Vec model, we can calculate the vectors for each word in a document.
But what if we want to calculate a vector for the entire document?
We could average the vectors for each word in the document - while this is quick and crude, it can often be useful.
However, there is a better way...
Introducing: Paragraph Vector
.. Important:: In Gensim, we refer to the Paragraph Vector model as Doc2Vec.
Le and Mikolov in 2014 introduced the Doc2Vec algorithm <https://cs.stanford.edu/~quocle/paragraph_vector.pdf>__,
which usually outperforms such simple-averaging of Word2Vec vectors.
The basic idea is: act as if a document has another floating word-like
vector, which contributes to all training predictions, and is updated like
other word-vectors, but we will call it a doc-vector. Gensim's
:py:class:~gensim.models.doc2vec.Doc2Vec class implements this algorithm.
There are two implementations:
Paragraph Vector - Distributed Memory (PV-DM)
Paragraph Vector - Distributed Bag of Words (PV-DBOW)
.. Important::
Don't let the implementation details below scare you.
They're advanced material: if it's too much, then move on to the next section.
PV-DM is analogous to Word2Vec CBOW. The doc-vectors are obtained by training
a neural network on the synthetic task of predicting a center word based an
average of both context word-vectors and the full document's doc-vector.
PV-DBOW is analogous to Word2Vec SG. The doc-vectors are obtained by training
a neural network on the synthetic task of predicting a target word just from
the full document's doc-vector. (It is also common to combine this with
skip-gram testing, using both the doc-vector and nearby word-vectors to
predict a single target word, but only one at a time.)
Prepare the Training and Test Data
For this tutorial, we'll be training our model using the Lee Background
Corpus
<https://hekyll.services.adelaide.edu.au/dspace/bitstream/2440/28910/1/hdl_28910.pdf>_
included in gensim. This corpus contains 314 documents selected from the
Australian Broadcasting Corporation’s news mail service, which provides text
e-mails of headline stories and covers a number of broad topics.
And we'll test our model by eye using the much shorter Lee Corpus
<https://hekyll.services.adelaide.edu.au/dspace/bitstream/2440/28910/1/hdl_28910.pdf>_
which contains 50 documents.
End of explanation
import smart_open
def read_corpus(fname, tokens_only=False):
with smart_open.open(fname, encoding="iso-8859-1") as f:
for i, line in enumerate(f):
tokens = gensim.utils.simple_preprocess(line)
if tokens_only:
yield tokens
else:
# For training data, add tags
yield gensim.models.doc2vec.TaggedDocument(tokens, [i])
train_corpus = list(read_corpus(lee_train_file))
test_corpus = list(read_corpus(lee_test_file, tokens_only=True))
Explanation: Define a Function to Read and Preprocess Text
Below, we define a function to:
open the train/test file (with latin encoding)
read the file line-by-line
pre-process each line (tokenize text into individual words, remove punctuation, set to lowercase, etc)
The file we're reading is a corpus.
Each line of the file is a document.
.. Important::
To train the model, we'll need to associate a tag/number with each document
of the training corpus. In our case, the tag is simply the zero-based line
number.
End of explanation
print(train_corpus[:2])
Explanation: Let's take a look at the training corpus
End of explanation
print(test_corpus[:2])
Explanation: And the testing corpus looks like this:
End of explanation
model = gensim.models.doc2vec.Doc2Vec(vector_size=50, min_count=2, epochs=40)
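As a side note linking back to the PV-DM / PV-DBOW review above, the dm argument selects the algorithm (the model above keeps the default, PV-DM); a purely illustrative comparison:
model_pvdm = gensim.models.doc2vec.Doc2Vec(dm=1, vector_size=50, min_count=2, epochs=40)    # PV-DM (default)
model_pvdbow = gensim.models.doc2vec.Doc2Vec(dm=0, vector_size=50, min_count=2, epochs=40)  # PV-DBOW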
Explanation: Notice that the testing corpus is just a list of lists and does not contain
any tags.
Training the Model
Now, we'll instantiate a Doc2Vec model with a vector size of 50 dimensions and
iterate over the training corpus 40 times. We set the minimum word count to
2 in order to discard words with very few occurrences. (Without a variety of
representative examples, retaining such infrequent words can often make a
model worse!) Typical iteration counts in the published Paragraph Vector paper <https://cs.stanford.edu/~quocle/paragraph_vector.pdf>__
results, using 10s-of-thousands to millions of docs, are 10-20. More
iterations take more time and eventually reach a point of diminishing
returns.
However, this is a very very small dataset (300 documents) with shortish
documents (a few hundred words). Adding training passes can sometimes help
with such small datasets.
End of explanation
model.build_vocab(train_corpus)
Explanation: Build a vocabulary
End of explanation
print(f"Word 'penalty' appeared {model.wv.get_vecattr('penalty', 'count')} times in the training corpus.")
Explanation: Essentially, the vocabulary is a list (accessible via
model.wv.index_to_key) of all of the unique words extracted from the training corpus.
Additional attributes for each word are available using the model.wv.get_vecattr() method,
For example, to see how many times penalty appeared in the training corpus:
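Similarly (just a quick illustration), the first few vocabulary entries can be listed directly:
print(model.wv.index_to_key[:10])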
End of explanation
model.train(train_corpus, total_examples=model.corpus_count, epochs=model.epochs)
Explanation: Next, train the model on the corpus.
If optimized Gensim (with BLAS library) is being used, this should take no more than 3 seconds.
If the BLAS library is not being used, this should take no more than 2
minutes, so use optimized Gensim with BLAS if you value your time.
End of explanation
vector = model.infer_vector(['only', 'you', 'can', 'prevent', 'forest', 'fires'])
print(vector)
Explanation: Now, we can use the trained model to infer a vector for any piece of text
by passing a list of words to the model.infer_vector function. This
vector can then be compared with other vectors via cosine similarity.
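For instance, a hand-rolled cosine similarity between two inferred vectors (an illustration only; the second sentence is made up):
import numpy as np
v1 = model.infer_vector(['only', 'you', 'can', 'prevent', 'forest', 'fires'])
v2 = model.infer_vector(['forest', 'fires', 'can', 'be', 'prevented'])
print(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))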
End of explanation
ranks = []
second_ranks = []
for doc_id in range(len(train_corpus)):
inferred_vector = model.infer_vector(train_corpus[doc_id].words)
sims = model.dv.most_similar([inferred_vector], topn=len(model.dv))
rank = [docid for docid, sim in sims].index(doc_id)
ranks.append(rank)
second_ranks.append(sims[1])
Explanation: Note that infer_vector() does not take a string, but rather a list of
string tokens, which should have already been tokenized the same way as the
words property of original training document objects.
Also note that because the underlying training/inference algorithms are an
iterative approximation problem that makes use of internal randomization,
repeated inferences of the same text will return slightly different vectors.
Assessing the Model
To assess our new model, we'll first infer new vectors for each document of
the training corpus, compare the inferred vectors with the training corpus,
and then return the rank of the document based on self-similarity.
Basically, we're pretending as if the training corpus is some new unseen data
and then seeing how they compare with the trained model. The expectation is
that we've likely overfit our model (i.e., all of the ranks will be less than
2) and so we should be able to find similar documents very easily.
Additionally, we'll keep track of the second ranks for a comparison of less
similar documents.
End of explanation
import collections
counter = collections.Counter(ranks)
print(counter)
Explanation: Let's count how each document ranks with respect to the training corpus
NB. Results vary between runs due to random seeding and very small corpus
End of explanation
print('Document ({}): «{}»\n'.format(doc_id, ' '.join(train_corpus[doc_id].words)))
print(u'SIMILAR/DISSIMILAR DOCS PER MODEL %s:\n' % model)
for label, index in [('MOST', 0), ('SECOND-MOST', 1), ('MEDIAN', len(sims)//2), ('LEAST', len(sims) - 1)]:
print(u'%s %s: «%s»\n' % (label, sims[index], ' '.join(train_corpus[sims[index][0]].words)))
Explanation: Basically, greater than 95% of the inferred documents are found to be most
similar to themselves, and about 5% of the time a document is mistakenly most similar to
another document. Checking the inferred-vector against a
training-vector is a sort of 'sanity check' as to whether the model is
behaving in a usefully consistent manner, though not a real 'accuracy' value.
This is great and not entirely surprising. We can take a look at an example:
End of explanation
# Pick a random document from the corpus and infer a vector from the model
import random
doc_id = random.randint(0, len(train_corpus) - 1)
# Compare and print the second-most-similar document
print('Train Document ({}): «{}»\n'.format(doc_id, ' '.join(train_corpus[doc_id].words)))
sim_id = second_ranks[doc_id]
print('Similar Document {}: «{}»\n'.format(sim_id, ' '.join(train_corpus[sim_id[0]].words)))
Explanation: Notice above that the most similar document (usually the same text) has a
similarity score approaching 1.0. However, the similarity score for the
second-ranked documents should be significantly lower (assuming the documents
are in fact different) and the reasoning becomes obvious when we examine the
text itself.
We can run the next cell repeatedly to see a sampling other target-document
comparisons.
End of explanation
# Pick a random document from the test corpus and infer a vector from the model
doc_id = random.randint(0, len(test_corpus) - 1)
inferred_vector = model.infer_vector(test_corpus[doc_id])
sims = model.dv.most_similar([inferred_vector], topn=len(model.dv))
# Compare and print the most/median/least similar documents from the train corpus
print('Test Document ({}): «{}»\n'.format(doc_id, ' '.join(test_corpus[doc_id])))
print(u'SIMILAR/DISSIMILAR DOCS PER MODEL %s:\n' % model)
for label, index in [('MOST', 0), ('MEDIAN', len(sims)//2), ('LEAST', len(sims) - 1)]:
print(u'%s %s: «%s»\n' % (label, sims[index], ' '.join(train_corpus[sims[index][0]].words)))
Explanation: Testing the Model
Using the same approach above, we'll infer the vector for a randomly chosen
test document, and compare the document to our model by eye.
End of explanation |
6,991 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise 1
In this exercise, we will create a csv file containing two columns. The first holds the file name and the second its identifier. In a first step we will walk through all the files in a folder, and in a second step we will extract the identifier from the name of each file found.
The Python program contains a single function that takes as input the path of the folder containing the files and produces as output a csv file with two columns. We will use the os, csv and re modules together with the open() and write() functions.
We first import the modules we need.
Step2: Nous créeons ensuite notre fonction que nous appellerons fromFileToCSV. Cette fonction prend deux arguements
Step4: La variable files est une liste qui le nom de tous les fichiers sous le chemin stocker dans la variable folderpath.
Nous utilisons random.shuffle(files) pour mélanger aléatoirement la position de chaque nom de fichiers dans la liste files.
Nous parcourons la liste files avec un boucle for et pour chaque fichier avec l'extention .png nous récupérons dans la variable label le premier caractére numérique qui est présent dans le nom du fichier filepath.
Nous initialisons la variable csvLine avec le nom du fichier et le caractére numérique récupéré avec l'expression réculière "^(\d+)_".
Nous ouvrons un fichier csv fourni en argument à la fonction et écrivant la ligne cotenu dans csvLine.
Step5: Exercice 2
Dans cet exercice, nous allons construire un corpus ou une collection de documents à partir d'un fichier texte. Ce fichier contient plusieurs lignes qui correspondent à des tweets. D'abord, et après avoir ouvert le fichier, pour chaque ligne dans ce dernier nous allons créer un nouveau fichier. Cette étape nous donnera un dossier contenant un nombre de fichiers égal au nombre de ligne dans le fichier d'origine. Ensuite, et suivant une certaine proportion que nous allons fournir comme paramètre d'entrée nous allons diviser l'ensemble de fichiers en trois dossiers.
Le programme contenient deux fonctions
Step10: Nous vérifions d'abord avec la fonction os.path.exists si le dossier dans lequel nous allons mettre chaque fichier contenant chaque ligne di fichier donné en argument.
Nous initialisons un compteur de lignes avec une variable numérique que nous appelons file_counter. Nous nous servirons de la valeur de cette variable pour donner un nom unique aux fichiers fraîchement créer.
Nous ouvrons ensuite le fichier original_file_path et parcourons ligne par ligne. À chaque ligne nous créerons un nouveau fichier et écrivons la ligne que nous venons de lire dans le fichier.
La fonction retourne le nom du dossier qui contient tous les fichiers que nous venons de créer.
Step11: Le but de la fonction from_folder_to_folders est de parcourir le dossier retourner par la fonction file_to_files et de copier l'ensemble des fichiers en trois dossiers selon trois proportions données en arguments et représentés par des pourcentages.
D'abord nous créons une liste qui contient tous les fichiers dans le dossier retourné par la fonction file_to_files.
Ensuite et pour chaque fichier dans cette liste nous allons récupéré le chemin relatif du fichier et vérifier s'il s'agit bien d'un fichier texte.
Nous calculons le nombre de fichiers qui doit être mis dans chacun des trois dossiers avec l'équation suivante | Python Code:
import sys, os
import re
import random  # needed for random.shuffle below
from os import listdir
from os.path import isfile, join
Explanation: Exercise 1
In this exercise, we will create a csv file containing two columns. The first holds the file name and the second its identifier. In a first step we will walk through all the files in a folder, and in a second step we will extract the identifier from the name of each file found.
The Python program contains a single function that takes as input the path of the folder containing the files and produces as output a csv file with two columns. We will use the os, csv and re modules together with the open() and write() functions.
We first import the modules we need.
End of explanation
def fromFileToCSV (folderpath,csvfilename) :
files = [f for f in listdir(folderpath) if isfile(join(folderpath, f))]
random.shuffle(files)
for filepath in files:
if filepath.endswith(".png"):
label = re.findall("^(\d+)_",filepath)
csvLine = filepath+","+str(label[0])
print csvLine
            # The with open block below replaces all of these older lines:
            # myfile = open(join(folderpath,csvfilename), "a")
            # content = myfile.read()
            # content = content + "\n" + csvLine
            # myfile.write(content)
            # myfile.write("\n")
            # myfile.close()
with open(join(folderpath,csvfilename), "a") as myfile:
myfile.write(csvLine)
myfile.write("\n")
Explanation: We then create our function, which we call fromFileToCSV. This function takes two arguments: the path to the folder and the name of the csv file. The signature of the function is as follows, fromFileToCSV(folderpath, csvfilename)
End of explanation
def main ():
fromFileToCSV("./lines","fichier_auteur.csv")
if __name__ == '__main__':
    # If I want to run a python file from the console, taking into account the arguments given on the console.
    # For example: >> python nomfichier.py nomDossier nomFichierCSV
if len(sys.argv) == 3:
fromFileToCSV(sys.argv[1],sys.argv[2])
main()
Explanation: The variable files is a list containing the names of all the files under the path stored in the variable folderpath.
We use random.shuffle(files) to randomly shuffle the position of each file name in the files list.
We iterate over the files list with a for loop and, for each file with the .png extension, we store in the variable label the first numeric token found in the file name filepath.
We initialize the variable csvLine with the file name and the numeric token extracted with the regular expression "^(\d+)_".
We open a csv file passed as an argument to the function and write the line contained in csvLine.
End of explanation
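As a quick illustration of the labelling step described above, the sketch below (using a hypothetical file name that follows the "<id>_<rest>.png" convention) shows what re.findall returns for the pattern "^(\d+)_":
import re

# hypothetical file name following the "<id>_<rest>.png" convention
filepath = "42_sample_line.png"
label = re.findall("^(\d+)_", filepath)    # -> ['42']
csvLine = filepath + "," + str(label[0])   # -> "42_sample_line.png,42"
print(csvLine)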
import sys, os
import shutil
import re
import random
from os import listdir
from os.path import isfile, join
def file_to_files (original_file_path):
if not os.path.exists("lines_folder"):
os.makedirs("lines_folder")
file_counter = 0
my_file = open(original_file_path)
lines_liste = my_file.readlines()
for line in lines_liste:
file_counter += 1
my_new_file = open("lines_folder/"+str(file_counter)+'_processed_tweet.txt', 'a')
my_new_file.write(line)
my_new_file.close()
my_file.close()
new_folder = "lines_folder"
return new_folder
Explanation: Exercise 2
In this exercise, we will build a corpus, or a collection of documents, from a text file. This file contains several lines, each corresponding to a tweet. First, after opening the file, we will create a new file for each line in it. This step gives us a folder containing as many files as there are lines in the original file. Then, following proportions supplied as input parameters, we will split the set of files into three folders.
The program contains two functions: the first takes the original file as input and produces a folder with as many files as there are lines in the original file. The second function takes as input the relative or absolute path of the freshly created folder together with three proportions. In other words, the second function produces three folders where, for example, 20% of the files are copied into the first folder, 30% into the second folder and 50% into the third folder.
End of explanation
def from_folder_to_folders (original_folder_path, percentageFolder1, percentageFolder2, percentageFolder3):
    list_fichiers_dans_dossier = listdir(original_folder_path)
files = [f for f in list_fichiers_dans_dossier if isfile(join(original_folder_path,f))]
    # These instructions are equivalent to the creation of the files list above:
    # for f in list_fichiers_dans_dossier:
    #     if isfile(join(original_folder_path, f)):
    #         files.append(f)
    # Documentation for random: https://docs.python.org/2/library/random.html
    # We shuffle the order of the files in the list to get more diversity in each folder.
random.shuffle(files)
nbFilesFolder1 = int((float(percentageFolder1)/100)*len(files))
nbFilesFolder2 = int((float(percentageFolder2)/100)*len(files))
nbFilesFolder3 = int((float(percentageFolder3)/100)*len(files))
if not os.path.exists(join(original_folder_path,"Folder1")):
os.makedirs(join(original_folder_path,"Folder1"))
if not os.path.exists(join(original_folder_path,"Folder2")):
os.makedirs(join(original_folder_path,"Folder2"))
if not os.path.exists(join(original_folder_path,"Folder3")):
os.makedirs(join(original_folder_path,"Folder3"))
    # enumerate returns the index together with the content of the files list.
for j,filepath in enumerate(files):
# e.g. sourceFolder = lines_folder/11314_processed_tweet.txt
# "lines_folder/Folder2/"
sourceFolder = os.path.join(original_folder_path,filepath)
if (j > nbFilesFolder1 and j < nbFilesFolder1+nbFilesFolder2):
print "copying the files to folder 2"
if filepath.endswith(".txt"):
shutil.copy2(sourceFolder,join(original_folder_path,"Folder2/"))
elif (j > nbFilesFolder1+nbFilesFolder2 and j < len(files)):
print "copying the files to folder 3"
if filepath.endswith(".txt"):
shutil.copy2(sourceFolder,join(original_folder_path,"Folder3/"))
else:
print "copytin the files to folder 1"
if filepath.endswith(".txt"):
shutil.copy2(sourceFolder, join(original_folder_path,"Folder1/"))
Explanation: We first check with the os.path.exists function whether the folder in which we will put each file (one file per line of the file given as argument) already exists.
We initialize a line counter with a numeric variable that we call file_counter. We will use the value of this variable to give a unique name to the freshly created files.
We then open the file original_file_path and go through it line by line. For each line we create a new file and write the line we have just read into it.
The function returns the name of the folder that contains all the files we have just created.
End of explanation
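For instance, under the naming scheme described above, a three-line input would produce one file per line. A small sketch with a hypothetical input file:
# hypothetical three-line input written to a small file
with open("tiny_example.txt", "w") as f:
    f.write("first tweet\nsecond tweet\nthird tweet\n")
folder = file_to_files("tiny_example.txt")
print(folder)                          # lines_folder
print(sorted(listdir(folder))[:3])     # 1_processed_tweet.txt, 2_..., 3_...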
def main():
file_to_files("data/preprocessedP.txt")
#from_folder_to_folders(file_to_files("data/preprocessedP.txt"), 50, 30, 20)
if __name__ == '__main__':
main()
Explanation: The purpose of the from_folder_to_folders function is to go through the folder returned by the file_to_files function and to copy the set of files into three folders according to three proportions given as arguments and expressed as percentages.
First we create a list that contains all the files in the folder returned by the file_to_files function.
Then, for each file in this list, we retrieve the relative path of the file and check that it really is a text file.
We compute the number of files that must be placed in each of the three folders with the following equation:
\begin{equation}
\text{files per folder} = \frac{\text{percentage}}{100} \times \text{total number of files}
\end{equation}
Once we have the list of files, we go through the list according to the proportions given as arguments and use the copy2 function, which takes as input the relative path of the source and the relative path of the destination to which the file will be copied.
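As a small numeric illustration of that formula (assuming, say, 200 files and the 50/30/20 split used in main above):
total_files = 200                      # hypothetical corpus size
for pct in (50, 30, 20):
    print(int((float(pct) / 100) * total_files))   # 100, 60 and 40 files per folder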
End of explanation |
6,992 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img alt="sbmlutils logo" src="./images/sbmlutils-logo-small.png" style="height
Step1: SBML model creator
helper functions for generation of SBML models
constructors with all fields
patterns (like generate Parameters for AssignmentRules)
fbc and comp helpers
unit support
https | Python Code:
from sbmlutils.report import sbmlreport
sbmlreport.create_sbml_report('./examples/glucose/Hepatic_glucose_3.xml',
out_dir='./examples/glucose', validate=True)
Explanation: <img alt="sbmlutils logo" src="./images/sbmlutils-logo-small.png" style="height: 60px;" />
sbmlutils: Python utilities for SBML
<br />
sbmlutils is a collection of python utilities for working with SBML models
implemented on top of the libSBML python bindings.
Features among others
HTML reports of SBML models
helpers for model creation, manipulation, and annotation
interpolation functions to add experimental data to models
dynamic flux balance analysis (DFBA)
file converters (XPP)
The project code is available from https://github.com/matthiaskoenig/sbmlutils
Slides available at http://bit.ly/sbmlutils-flash
Installation
pip install sbmlutils
SBML report
create HTML reports of SBML files
easy navigation and filtering
fbc and comp support
./examples/glucose/Hepatic_glucose_3.html
End of explanation
from __future__ import absolute_import, print_function
import os
import tempfile
import sbmlutils
from sbmlutils import dfba
from sbmlutils.dfba import utils
from sbmlutils.dfba.toy_wholecell import settings as toysettings
from sbmlutils.dfba.toy_wholecell import model_factory as toyfactory
from sbmlutils.dfba.toy_wholecell import simulate as toysimulate
test_dir = tempfile.mkdtemp()
# create toy model
toyfactory.create_model(test_dir)
sbml_path = os.path.join(utils.versioned_directory(test_dir, toyfactory.version),
toysettings.top_file)
print(sbml_path)
from IPython.display import display, HTML
# simulate
dfs = toysimulate.simulate_toy(sbml_path, test_dir, dts=[1.0], figures=False)
display(dfs[0].head(10))
toysimulate.print_species(dfs=dfs)
toysimulate.print_fluxes(dfs=dfs)
Explanation: SBML model creator
helper functions for generation of SBML models
constructors with all fields
patterns (like generate Parameters for AssignmentRules)
fbc and comp helpers
unit support
https://sbmlutils.readthedocs.io/en/latest/notebooks/modelcreator.html#Create-FBA-Model
Misc (small helpers)
Data to splines & piecewise functions
Model annotation based on flat files and regular expressions
converters (XPP -> SBML)
<img alt="sbmlutils logo" src="./images/interpolation_constant.png"/>
<img alt="sbmlutils logo" src="./images/interpolation_linear.png"/>
<img alt="sbmlutils logo" src="./images/interpolation_cubic.png"/>
Dynamic Flux Balance Analysis (DFBA)
DFBA model creation & simulation
SBML encoding
implementation of DFBA based on SBML core, comp and fbc
Proposed encoding: http://bit.ly/dfba-guidelines
2 implementations: sbmlutils & iBioSim
Discussed/presented: Thursday, 11.00
End of explanation |
6,993 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> Lax–Wendroff Method
The scalar advection equation
\begin{equation}
u_t+au_x=0
\end{equation}
has the standard Lax–Wendroff method
\begin{equation}
U^{n+1}_j = U_j^n - \frac{ak}{2h}\left(U_{j+1}^n-U_{j-1}^n\right) + \frac{a^2k^2}{2h^2}\left(U^n_{j-1} -2U^n_j + U^n_{j+1}\right).
\end{equation}
<h4> Essential Libraries </h4>
Step1: <h4> Lax–Wendroff Advection </h4>
Step2: <h4> Plots | Python Code:
# --------------------/
%matplotlib inline
# --------------------/
import math
import numpy as np
import matplotlib.pyplot as plt
from pylab import *
from scipy import *
from ipywidgets import *
Explanation: <h1> Lax–Wendroff Method
The scalar advection equation
\begin{equation}
u_t+au_x=0
\end{equation}
has the standard Lax–Wendroff method
\begin{equation}
U^{n+1}_j = U_j^n - \frac{ak}{2h}\left(U_{j+1}^n-U_{j-1}^n\right) + \frac{a^2k^2}{2h^2}\left(U^n_{j-1} -2U^n_j + U^n_{j+1}\right).
\end{equation}
<h4> Essential Libraries </h4>
End of explanation
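The update above maps directly onto a one-line stencil. A minimal sketch of a single Lax–Wendroff step as a standalone helper (the function name and the periodic ghost-cell convention are my own choice, mirroring the solver below):
def lax_wendroff_step(u, nu):
    # u: numpy array with one ghost cell at each end, nu = a*k/h (Courant number)
    u[1:-1] = (u[1:-1]
               - 0.5 * nu * (u[2:] - u[:-2])
               + 0.5 * nu**2 * (u[:-2] - 2.0 * u[1:-1] + u[2:]))
    u[0], u[-1] = u[-2], u[1]   # periodic boundary conditions
    return u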
# --------------------/
# domains and
# parameters
c = 2.0 # velocity
a = -1.0
b = 1.0
T = 1.0
# --------------------/
# mesh and grid points
# nodes
n = 200
# tolerance
tol = 1e-5
h = (b - a) / (1.0 + n)
k = 0.25 * h
m = int(round(T / k))
t = np.linspace(0, T, m)
x = np.linspace(a, b, n + 2)
tt, xx = meshgrid(t,x)
# --------------------/
if abs( k * m - T) > tol:
print 'instability'
# --------------------/
# Courant-Friedrichs-Lewis
CFL = c * k / h
# additional ghost cell
u = np.zeros(n + 3)
U = np.zeros((n + 2,m))
# true solution
f = lambda x: np.exp(-50*x**2)*np.cos(x)
# initial conditions u(x,t0)
u = f(x)
# periodic boundary conditions
u[0], u[-1] = u[-2], u[1]
# --------------------/
# Lax--Wendroff algorithm
for i in range(m):
u[1:-1] = (u[1:-1] -
0.5 * CFL * (u[2:] - u[0:-2]) +
0.5 * CFL**2 * (u[0:-2] - 2*u[1:-1] + u[2:])
)
u[0], u[-1] = u[-2], u[1]
U[:,i] = u
Explanation: <h4> Lax–Wendroff Advection </h4>
End of explanation
# --------------------/
# plots
def evolution(step):
plt.figure(figsize=(5,5))
plt.plot(x, U[:,step], lw=4, alpha=0.5, color='dodgerblue')
plt.grid(color='lightgray', alpha=0.75)
plt.xlim(x.min() - 0.125, x.max() + 0.125)
plt.ylim(U[:,step].min() - 0.125, U[:,step].max() + 0.125)
# --------------------/
# interactive plot
time = widgets.IntSlider(min=1, max=m-1, description='step')
interact(evolution, step=time)
Explanation: <h4> Plots
End of explanation |
6,994 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Initial Data Analysis
Step 1
Step1: 1. What does the data describe?
The data describes SAT scores for verbal and math sections in 2001 across the US. It does appear to be complete, except for the issue I'm having with the median score for math. When I ran the median function for sat_scores.math, it returned a value of 521. However, I could not find that value in the dataset. Below are some other observations I made.
2. Does the data look complete? Are there any obvious issues with the observations?
Overall, the data does look complete, but while doing my EDA I noticed that the median value computed for Math, 521, does not actually appear in the list of Math scores. There must be an issue with the data.
6. Extract a list of the labels from the data, and remove them from the data.
Step2: 3. Create a data dictionary for the dataset.
Step3: 7. Create a list of State names extracted from the data. (Hint
Step4: 8. Print the types of each column
Step5: 9. Do any types need to be reassigned? If so, go ahead and do it.
Step6: 10. Create a dictionary for each column mapping the State to its respective value for that column.
Step7: 11. Create a dictionary with the values for each of the numeric columns
Step8: # Step 3
Step9: The minimum rate is 4, found in North Dakota, South Dakota, and Mississippi, and the maximum rate is 82 found in Connecticut.
The minimum verbal score is 482 in D.C., and the maximum is 593 in Iowa.
The median verbal score is 526 in Oregon.
The minimum math score is 439 in Ohio, and the maximum is 603, which is interestingly also in Iowa.
The median math score is 521.
Iowa has the highest SAT Scores in the country overall.
13. Write a function using only list comprehensions, no loops, to compute Standard Deviation. Print the Standard Deviation of each numeric column.
Step10: Mean, Median and Mode in NumPy and SciPy
Step11: The median Verbal SAT score is 526, its mean is approximately 532, and its mode is above its mean at 562 (appears 3 times).
The median Math SAT score is 521, its mean is 531.5, and its mode is below its mean at 499 (appears 6 times).
Step 4
Step12: 19. Plot some scatterplots. BONUS
Step13: Scatter Plotting
Step14: 20. Are there any interesting relationships to note?
Both Verbal and Math scores are highly correlated with each other, whichever way you plot them, with Math appearing to affect Verbal at a faster rate than the other way around.
Step15: 9. Do any types need to be reassigned? If so, go ahead and do it.
Step16: 21. Create box plots for each variable.
Step17: 14. Using MatPlotLib and PyPlot, plot the distribution of the Rate using histograms
Histograms
15. Plot the Math distribution
Step18: 16. Plot the Verbal distribution
Step19: 16. Plot the Rate distribution
Step20: 17. What is the typical assumption for data distribution? and 18. Does that distribution hold true for our data?
The typical assumption of data distribution is that it should follow a normal distribution, with standard deviations being relatively equal on both sides of the mean. Neither of the histograms appears to follow a normal distribution, with the Verbal scores in particular showing a right/positive skew. But I need to properly check for normal distribution, and find a way to overlay the fitted distribution onto the histograms. Perhaps Seaborn has a function that can help me with that.
Seaborn Plotting for Histograms and Fitting a Distribution | Python Code:
import scipy as sci
import pandas as pd
from scipy import stats
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# Remember that for specific functions, the array function in numpy
# can be useful in listing out the elements in a list (example would
# be for finding the mode.)
with open('./data/sat_scores.csv', 'r') as f:
data = [i.split(",") for i in f.read().split()]
print data
Explanation: Initial Data Analysis
Step 1: Open the sat_scores.csv file. Investigate the data, and answer the questions below.
AND
Step 2: Load the data.
4. Load the data into a list of lists and 5. Print the data
End of explanation
header = data[0]
data = data[1:]
print(header)
Explanation: 1. What does the data describe?
The data describes SAT scores for verbal and math sections in 2001 across the US. It does appear to be complete, except for the issue I'm having with the median score for math. When I ran the median function for sat_scores.math, it returned a value of 521. However, I could not find that value in the dataset. Below are some other observations I made.
2. Does the data look complete? Are there any obvious issues with the observations?
Overall, the data does look complete, but while doing my EDA I noticed that the median value computed for Math, 521, does not actually appear in the list of Math scores. There must be an issue with the data.
6. Extract a list of the labels from the data, and remove them from the data.
End of explanation
sat_data = {}
for index, column_name in enumerate(header):
sat_data[column_name] = []
for row in data:
sat_data[column_name].append(row[index])
Explanation: 3. Create a data dictionary for the dataset.
End of explanation
state_names = sat_data['State']
print state_names
Explanation: 7. Create a list of State names extracted from the data. (Hint: use the list of labels to index on the State column)
End of explanation
print 'The type of the State column is' + ' ' + str(type (sat_data['State'][2]))
print 'The type of the Math column is' + ' ' + str(type (sat_data['Math'][2]))
print 'The type of the Verbal column is' + ' ' + str(type (sat_data['Verbal'][2]))
print 'The type of the Rate column is' + ' ' + str(type (sat_data['Rate'][2]))
Explanation: 8. Print the types of each column
End of explanation
#Math, Verbal, and Rate need to be reassigned to integers.
# Rebinding the loop variable does not change the stored strings, so rebuild each list instead.
sat_data['Math'] = [int(item) for item in sat_data['Math']]
sat_data['Verbal'] = [int(item) for item in sat_data['Verbal']]
sat_data['Rate'] = [int(item) for item in sat_data['Rate']]
Explanation: 9. Do any types need to be reassigned? If so, go ahead and do it.
End of explanation
verbal_values = dict(zip(state_names, sat_data['Verbal']))
math_values = dict(zip(state_names, sat_data['Math']))
rate_values = dict(zip(state_names, sat_data['Rate']))
Explanation: 10. Create a dictionary for each column mapping the State to its respective value for that column.
End of explanation
SAT_values = {col: sat_data[col] for col in ['Verbal', 'Math', 'Rate']}
Explanation: 11. Create a dictionary with the values for each of the numeric columns
End of explanation
#Convert to a pandas dataframe to perform functions.
SAT_scores = pd.DataFrame(sat_data)
SAT_scores['Math'] = SAT_scores.Math.astype(int)
SAT_scores['Verbal'] = SAT_scores.Verbal.astype(int)
SAT_scores['Rate'] = SAT_scores.Rate.astype(int)
print 'The minimum Verbal score is' + ' ' + str(min(SAT_scores.Verbal))
print 'The maximum Verbal score is' + ' ' + str(max(SAT_scores.Verbal))
print 'The minimum Math score is' + ' ' + str(min(SAT_scores.Math))
print 'The maximum Math score is' + ' ' + str(max(SAT_scores.Math))
print 'The minimum Rate is' + ' ' + str(min(SAT_scores.Rate))
print 'The maximum Rate is' + ' ' + str(max(SAT_scores.Rate))
Explanation: # Step 3: Describe the data
12. Print the min and max of each column
End of explanation
#Standard Deviation function.
from math import sqrt
def standard_deviation(column):
    n = len(column)
    mean = float(sum(column)) / n
    differences = [x - mean for x in column]
    sq_diff = [t ** 2 for t in differences]
    var = sum(sq_diff) / float(n - 1)
print sqrt(var)
standard_deviation(SAT_scores['Math'])
standard_deviation(SAT_scores['Verbal'])
standard_deviation(SAT_scores['Rate'])
#Check to see the standard deviations are right.
print SAT_scores.describe()
#Approximately on point.
Explanation: The minimum rate is 4, found in North Dakota, South Dakota, and Mississippi, and the maximum rate is 82 found in Connecticut.
The minimum verbal score is 482 in D.C., and the maximum is 593 in Iowa.
The median verbal score is 526 in Oregon.
The minimum math score is 439 in Ohio, and the maximum is 603, which is interestingly also in Iowa.
The median math score is 521.
Iowa has the highest SAT Scores in the country overall.
13. Write a function using only list comprehensions, no loops, to compute Standard Deviation. Print the Standard Deviation of each numeric column.
End of explanation
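To double-check statements like the ones above programmatically, one option (a small sketch using the SAT_scores frame defined earlier) is to look up the state attained at each extreme:
# which states attain the extremes (sanity check for the claims above)
print SAT_scores.loc[SAT_scores['Verbal'].idxmax(), 'State']
print SAT_scores.loc[SAT_scores['Math'].idxmax(), 'State']
print SAT_scores.loc[SAT_scores['Rate'].idxmin(), 'State']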
# Find the mean, median, and mode for the set of verbal scores and the set of math scores.
import numpy as np
print np.median(SAT_scores.Verbal)
print np.median(SAT_scores.Math)
#Numpy doesn't have a built in function for mode. However, stats does;
#its function returns the mode, and how many times the mode appears.
verb_mode = stats.mode(SAT_scores.Verbal)
math_mode = stats.mode(SAT_scores.Math)
print verb_mode
print math_mode
Explanation: Mean, Median and Mode in NumPy and SciPy
End of explanation
#Will be using Pandas dataframe for plotting.
Explanation: The median Verbal SAT score is 526, its mean is approximately 532, and its mode is above its mean at 562 (appears 3 times).
The median Math SAT score is 521, its mean is 531.5, and its mode is below its mean at 499 (appears 6 times).
Step 4: Visualize the data
End of explanation
import seaborn as sns
import matplotlib.pyplot as plt
Explanation: 19. Plot some scatterplots. BONUS: Use a PyPlot figure to present multiple plots at once.
End of explanation
sns.pairplot(SAT_scores)
plt.show()
Explanation: Scatter Plotting
End of explanation
# Not really. I had already assigned the Verbal, Math, and Rate columns to integers,
# so no conversion is needed there.
Explanation: 20. Are there any interesting relationships to note?
Both Verbal and Math scores are highly correlated with each other, whichever way you plot them, with Math appearing to affect Verbal at a faster rate than the other way around.
End of explanation
SAT_scores['Verbal'] = SAT_scores['Verbal'].apply(pd.to_numeric)
SAT_scores['Math'] = SAT_scores['Math'].apply(pd.to_numeric)
SAT_scores['Rate'] = SAT_scores['Rate'].apply(pd.to_numeric)
SAT_scores.dtypes
Explanation: 9. Do any types need to be reassigned? If so, go ahead and do it.
End of explanation
# Display box plots to visualize the distribution of the datasets.
# Recall the median verbal score is 526, the mean is 532, the max is 593, the min is 482,
# and the std. deviation is 33.236.
ax = sns.boxplot(y=SAT_scores.Verbal, saturation=0.75, width=0.1, fliersize=5)
ax.set(xlabel = 'SAT Verbal Scores', ylabel = 'Range of Scores')
ax.set_title('2001 SAT Verbal Scores Distribution', fontsize = 15)
plt.show()
sns.boxplot(data = SAT_scores, y=SAT_scores.Math, saturation=0.75, width=0.1, fliersize=5)
plt.xlabel('SAT Math Scores')
plt.ylabel('Range of Scores')
plt.show()
sns.boxplot(data = SAT_scores, y=SAT_scores.Rate, saturation=0.75, width=0.1, fliersize=5)
plt.xlabel('SAT Rates')
plt.ylabel('Range of Rates')
plt.show()
Explanation: 21. Create box plots for each variable.
End of explanation
SAT_scores.Math.plot (kind='hist', bins=15)
plt.xlabel('SAT Math Scores')
plt.ylabel('Frequency')
plt.show()
Explanation: 14. Using MatPlotLib and PyPlot, plot the distribution of the Rate using histograms
Histograms
15. Plot the Math distribution
End of explanation
SAT_scores.Verbal.plot (kind='hist', bins=15)
plt.xlabel('SAT Verbal Scores')
plt.ylabel('Frequency')
plt.show()
Explanation: 16. Plot the Verbal distribution
End of explanation
SAT_scores.Rate.plot (kind='hist', bins=15)
plt.xlabel('SAT Rates')
plt.ylabel('Frequency')
plt.show()
Explanation: 16. Plot the Rate distribution
End of explanation
# Used seaborn website as guidance: http://seaborn.pydata.org/tutorial/distributions.html
# I used a feature called Kernel Density Estimation (KDE) to
# visualize a distribution of the data.
# KDE is an estimator that uses each data point to make an estimate of the distribution and attempts to
# smooth it out on the histogram.
# This resulting curve has an area below it equal to one, hence the decimal units for frequency.
sns.distplot(SAT_scores.Verbal, bins=15)
plt.xlabel('SAT Verbal Scores')
plt.ylabel('Frequency (KDE)')
plt.show()
sns.distplot(SAT_scores.Math, bins=15)
plt.xlabel('SAT Math Scores')
plt.ylabel('Frequency (KDE)')
plt.show()
sns.distplot(SAT_scores.Rate, bins=15)
plt.xlabel('SAT Rates')
plt.ylabel('Frequency (KDE)')
plt.show()
sns.kdeplot(SAT_scores.Verbal)
plt.xlabel('SAT Verbal Scores')
plt.ylabel('Frequency (KDE)')
plt.show()
sns.kdeplot(SAT_scores.Math)
plt.xlabel('SAT Math Scores')
plt.ylabel('Frequency (KDE)')
plt.show()
sns.kdeplot(SAT_scores.Rate)
plt.xlabel('SAT Rates')
plt.ylabel('Frequency (KDE)')
plt.show()
Explanation: 17. What is the typical assumption for data distribution? and 18. Does that distribution hold true for our data?
The typical assumption of data distribution is that it should follow a normal distribution, with standard deviations being relatively equal on both sides of the mean. Neither of the histograms appears to follow a normal distribution, with the Verbal scores in particular showing a right/positive skew. But I need to properly check for normal distribution, and find a way to overlay the fitted distribution onto the histograms. Perhaps Seaborn has a function that can help me with that.
Seaborn Plotting for Histograms and Fitting a Distribution
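One way to do exactly that (a sketch, assuming scipy.stats.norm as the candidate distribution) is to pass a fit argument to distplot, which overlays the fitted normal curve on the histogram:
from scipy.stats import norm
sns.distplot(SAT_scores.Verbal, bins=15, fit=norm, kde=False)
plt.xlabel('SAT Verbal Scores')
plt.ylabel('Frequency (normalized)')
plt.show()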
End of explanation |
6,995 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<!-- dom
Step1: With $\boldsymbol{\beta}\in {\mathbb{R}}^{p\times 1}$, it means that we will hereafter write our equations for the approximation as
$$
\boldsymbol{\tilde{y}}= \boldsymbol{X}\boldsymbol{\beta},
$$
throughout these lectures.
Optimizing our parameters, more details
With the above we use the design matrix to define the approximation $\boldsymbol{\tilde{y}}$ via the unknown quantity $\boldsymbol{\beta}$ as
$$
\boldsymbol{\tilde{y}}= \boldsymbol{X}\boldsymbol{\beta},
$$
and in order to find the optimal parameters $\beta_i$ instead of solving the above linear algebra problem, we define a function which gives a measure of the spread between the values $y_i$ (which represent hopefully the exact values) and the parameterized values $\tilde{y}_i$, namely
$$
C(\boldsymbol{\beta})=\frac{1}{n}\sum_{i=0}^{n-1}\left(y_i-\tilde{y}_i\right)^2=\frac{1}{n}\left{\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)^T\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)\right},
$$
or using the matrix $\boldsymbol{X}$ and in a more compact matrix-vector notation as
$$
C(\boldsymbol{\beta})=\frac{1}{n}\left{\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)^T\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)\right}.
$$
This function is one possible way to define the so-called cost function.
It is also common to define
the function $C$ as
$$
C(\boldsymbol{\beta})=\frac{1}{2n}\sum_{i=0}^{n-1}\left(y_i-\tilde{y}_i\right)^2,
$$
since when taking the first derivative with respect to the unknown parameters $\beta$, the factor of $2$ cancels out.
Interpretations and optimizing our parameters
The function
$$
C(\boldsymbol{\beta})=\frac{1}{n}\left{\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)^T\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)\right},
$$
can be linked to the variance of the quantity $y_i$ if we interpret the latter as the mean value.
When linking (see the discussion below) with the maximum likelihood approach below, we will indeed interpret $y_i$ as a mean value
$$
y_{i}=\langle y_i \rangle = \beta_0x_{i,0}+\beta_1x_{i,1}+\beta_2x_{i,2}+\dots+\beta_{n-1}x_{i,n-1}+\epsilon_i,
$$
where $\langle y_i \rangle$ is the mean value. Keep in mind also that
till now we have treated $y_i$ as the exact value. Normally, the
response (dependent or outcome) variable $y_i$ is the outcome of a
numerical experiment or another type of experiment and is thus only an
approximation to the true value. It is then always accompanied by an
error estimate, often limited to a statistical error estimate given by
the standard deviation discussed earlier. In the discussion here we
will treat $y_i$ as our exact value for the response variable.
In order to find the parameters $\beta_i$ we will then minimize the spread of $C(\boldsymbol{\beta})$, that is we are going to solve the problem
$$
{\displaystyle \min_{\boldsymbol{\beta}\in
{\mathbb{R}}^{p}}}\frac{1}{n}\left{\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)^T\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)\right}.
$$
In practical terms it means we will require
$$
\frac{\partial C(\boldsymbol{\beta})}{\partial \beta_j} = \frac{\partial }{\partial \beta_j}\left[ \frac{1}{n}\sum_{i=0}^{n-1}\left(y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{n-1}x_{i,n-1}\right)^2\right]=0,
$$
which results in
$$
\frac{\partial C(\boldsymbol{\beta})}{\partial \beta_j} = -\frac{2}{n}\left[ \sum_{i=0}^{n-1}x_{ij}\left(y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{n-1}x_{i,n-1}\right)\right]=0,
$$
or in a matrix-vector form as
$$
\frac{\partial C(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{X}^T\left( \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right).
$$
Interpretations and optimizing our parameters
We can rewrite
$$
\frac{\partial C(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{X}^T\left( \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right),
$$
as
$$
\boldsymbol{X}^T\boldsymbol{y} = \boldsymbol{X}^T\boldsymbol{X}\boldsymbol{\beta},
$$
and if the matrix $\boldsymbol{X}^T\boldsymbol{X}$ is invertible we have the solution
$$
\boldsymbol{\beta} =\left(\boldsymbol{X}^T\boldsymbol{X}\right)^{-1}\boldsymbol{X}^T\boldsymbol{y}.
$$
We note also that since our design matrix is defined as $\boldsymbol{X}\in
{\mathbb{R}}^{n\times p}$, the product $\boldsymbol{X}^T\boldsymbol{X} \in
{\mathbb{R}}^{p\times p}$. In the above case we have that $p \ll n$,
in our case $p=5$ meaning that we end up with inverting a small
$5\times 5$ matrix. This is a rather common situation, in many cases we end up with low-dimensional
matrices to invert. The methods discussed here and for many other
supervised learning algorithms like classification with logistic
regression or support vector machines, exhibit dimensionalities which
allow for the usage of direct linear algebra methods such as LU decomposition or Singular Value Decomposition (SVD) for finding the inverse of the matrix
$\boldsymbol{X}^T\boldsymbol{X}$.
Small question
Step2: Alternatively, you can use the least squares functionality in Numpy as
Step3: And finally we plot our fit with and compare with data
Step4: Adding error analysis and training set up
We can easily test our fit by computing the $R2$ score that we discussed in connection with the functionality of Scikit-Learn in the introductory slides.
Since we are not using Scikit-Learn here we can define our own $R2$ function as
Step5: and we would be using it as
Step6: We can easily add our MSE score as
Step7: and finally the relative error as
Step8: The $\chi^2$ function
Normally, the response (dependent or outcome) variable $y_i$ is the
outcome of a numerical experiment or another type of experiment and is
thus only an approximation to the true value. It is then always
accompanied by an error estimate, often limited to a statistical error
estimate given by the standard deviation discussed earlier. In the
discussion here we will treat $y_i$ as our exact value for the
response variable.
Introducing the standard deviation $\sigma_i$ for each measurement
$y_i$, we define now the $\chi^2$ function (omitting the $1/n$ term)
as
$$
\chi^2(\boldsymbol{\beta})=\frac{1}{n}\sum_{i=0}^{n-1}\frac{\left(y_i-\tilde{y}_i\right)^2}{\sigma_i^2}=\frac{1}{n}\left{\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)^T\frac{1}{\boldsymbol{\Sigma^2}}\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)\right},
$$
where the matrix $\boldsymbol{\Sigma}$ is a diagonal matrix with $\sigma_i$ as matrix elements.
The $\chi^2$ function
In order to find the parameters $\beta_i$ we will then minimize the spread of $\chi^2(\boldsymbol{\beta})$ by requiring
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_j} = \frac{\partial }{\partial \beta_j}\left[ \frac{1}{n}\sum_{i=0}^{n-1}\left(\frac{y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{n-1}x_{i,n-1}}{\sigma_i}\right)^2\right]=0,
$$
which results in
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_j} = -\frac{2}{n}\left[ \sum_{i=0}^{n-1}\frac{x_{ij}}{\sigma_i}\left(\frac{y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{n-1}x_{i,n-1}}{\sigma_i}\right)\right]=0,
$$
or in a matrix-vector form as
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{A}^T\left( \boldsymbol{b}-\boldsymbol{A}\boldsymbol{\beta}\right).
$$
where we have defined the matrix $\boldsymbol{A} =\boldsymbol{X}/\boldsymbol{\Sigma}$ with matrix elements $a_{ij} = x_{ij}/\sigma_i$ and the vector $\boldsymbol{b}$ with elements $b_i = y_i/\sigma_i$.
The $\chi^2$ function
We can rewrite
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{A}^T\left( \boldsymbol{b}-\boldsymbol{A}\boldsymbol{\beta}\right),
$$
as
$$
\boldsymbol{A}^T\boldsymbol{b} = \boldsymbol{A}^T\boldsymbol{A}\boldsymbol{\beta},
$$
and if the matrix $\boldsymbol{A}^T\boldsymbol{A}$ is invertible we have the solution
$$
\boldsymbol{\beta} =\left(\boldsymbol{A}^T\boldsymbol{A}\right)^{-1}\boldsymbol{A}^T\boldsymbol{b}.
$$
The $\chi^2$ function
If we then introduce the matrix
$$
\boldsymbol{H} = \left(\boldsymbol{A}^T\boldsymbol{A}\right)^{-1},
$$
we have then the following expression for the parameters $\beta_j$ (the matrix elements of $\boldsymbol{H}$ are $h_{ij}$)
$$
\beta_j = \sum_{k=0}^{p-1}h_{jk}\sum_{i=0}^{n-1}\frac{y_i}{\sigma_i}\frac{x_{ik}}{\sigma_i} = \sum_{k=0}^{p-1}h_{jk}\sum_{i=0}^{n-1}b_ia_{ik}
$$
We state without proof the expression for the uncertainty in the parameters $\beta_j$ as (we leave this as an exercise)
$$
\sigma^2(\beta_j) = \sum_{i=0}^{n-1}\sigma_i^2\left( \frac{\partial \beta_j}{\partial y_i}\right)^2,
$$
resulting in
$$
\sigma^2(\beta_j) = \left(\sum_{k=0}^{p-1}h_{jk}\sum_{i=0}^{n-1}a_{ik}\right)\left(\sum_{l=0}^{p-1}h_{jl}\sum_{m=0}^{n-1}a_{ml}\right) = h_{jj}!
$$
The $\chi^2$ function
The first step here is to approximate the function $y$ with a first-order polynomial, that is we write
$$
y=y(x) \rightarrow y(x_i) \approx \beta_0+\beta_1 x_i.
$$
By computing the derivatives of $\chi^2$ with respect to $\beta_0$ and $\beta_1$ show that these are given by
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_0} = -2\left[ \frac{1}{n}\sum_{i=0}^{n-1}\left(\frac{y_i-\beta_0-\beta_1x_{i}}{\sigma_i^2}\right)\right]=0,
$$
and
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_1} = -\frac{2}{n}\left[ \sum_{i=0}^{n-1}x_i\left(\frac{y_i-\beta_0-\beta_1x_{i}}{\sigma_i^2}\right)\right]=0.
$$
The $\chi^2$ function
For a linear fit (a first-order polynomial) we don't need to invert a matrix!!
Defining
$$
\gamma = \sum_{i=0}^{n-1}\frac{1}{\sigma_i^2},
$$
$$
\gamma_x = \sum_{i=0}^{n-1}\frac{x_{i}}{\sigma_i^2},
$$
$$
\gamma_y = \sum_{i=0}^{n-1}\left(\frac{y_i}{\sigma_i^2}\right),
$$
$$
\gamma_{xx} = \sum_{i=0}^{n-1}\frac{x_ix_{i}}{\sigma_i^2},
$$
$$
\gamma_{xy} = \sum_{i=0}^{n-1}\frac{y_ix_{i}}{\sigma_i^2},
$$
we obtain
$$
\beta_0 = \frac{\gamma_{xx}\gamma_y-\gamma_x\gamma_{xy}}{\gamma\gamma_{xx}-\gamma_x^2},
$$
$$
\beta_1 = \frac{\gamma_{xy}\gamma-\gamma_x\gamma_y}{\gamma\gamma_{xx}-\gamma_x^2}.
$$
This approach (different linear and non-linear regression) suffers
often from both being underdetermined and overdetermined in the
unknown coefficients $\beta_i$. A better approach is to use the
Singular Value Decomposition (SVD) method discussed below. Or using
Lasso and Ridge regression. See below.
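A compact sketch of those closed-form expressions for a straight-line fit (hypothetical x, y and sigma arrays; this simply mirrors the formulas above rather than any library call):
import numpy as np

def weighted_linear_fit(x, y, sigma):
    # gamma sums defined in the text above
    w = 1.0 / sigma**2
    g, gx, gy = np.sum(w), np.sum(x * w), np.sum(y * w)
    gxx, gxy = np.sum(x * x * w), np.sum(x * y * w)
    det = g * gxx - gx**2
    beta0 = (gxx * gy - gx * gxy) / det
    beta1 = (g * gxy - gx * gy) / det
    return beta0, beta1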
Fitting an Equation of State for Dense Nuclear Matter
Before we continue, let us introduce yet another example. We are going to fit the
nuclear equation of state using results from many-body calculations.
The equation of state we have made available here, as function of
density, has been derived using modern nucleon-nucleon potentials with
the addition of three-body
forces. This
time the file is presented as a standard csv file.
The beginning of the Python code here is similar to what you have seen
before, with the same initializations and declarations. We use also
pandas again, rather extensively in order to organize our data.
The difference now is that we use Scikit-Learn's regression tools
instead of our own matrix inversion implementation. Furthermore, we
sneak in Ridge regression (to be discussed below) which includes a
hyperparameter $\lambda$, also to be explained below.
The code
Step9: The above simple polynomial in density $\rho$ gives an excellent fit
to the data.
We note also that there is a small deviation between the
standard OLS and the Ridge regression at higher densities. We discuss this in more detail
below.
Splitting our Data in Training and Test data
It is normal in essentially all Machine Learning studies to split the
data in a training set and a test set (sometimes also an additional
validation set). Scikit-Learn has an own function for this. There
is no explicit recipe for how much data should be included as training
data and say test data. An accepted rule of thumb is to use
approximately $2/3$ to $4/5$ of the data as training data. We will
postpone a discussion of this splitting to the end of these notes and
our discussion of the so-called bias-variance tradeoff. Here we
limit ourselves to repeat the above equation of state fitting example
but now splitting the data into a training set and a test set.
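A minimal sketch of that split with Scikit-Learn's helper (the 80/20 ratio here is just one choice within the rule of thumb quoted above; X and Energies are the design matrix and targets built in the code below):
from sklearn.model_selection import train_test_split
# keep 4/5 of the data for training, 1/5 for testing
X_train, X_test, y_train, y_test = train_test_split(X, Energies, test_size=0.2)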
Step10: <!-- !split -->
The Boston housing data example
The Boston housing
data set was originally a part of UCI Machine Learning Repository
and has been removed now. The data set is now included in Scikit-Learn's
library. There are 506 samples and 13 feature (predictor) variables
in this data set. The objective is to predict the value of prices of
the house using the features (predictors) listed here.
The features/predictors are
1. CRIM
Step11: and load the Boston Housing DataSet from Scikit-Learn
Step12: Then we invoke Pandas
Step13: and preprocess the data
Step14: We can then visualize the data
Step15: It is now useful to look at the correlation matrix
Step16: From the above correlation plot we can see that MEDV is strongly correlated with LSTAT and RM. We see also that RAD and TAX are strongly correlated, but we don't include both of them in our features together, to avoid multicollinearity
Step17: Now we start training our model
Step18: We split the data into training and test sets
Step19: Then we use the linear regression functionality from Scikit-Learn
Step20: Reducing the number of degrees of freedom, overarching view
Many Machine Learning problems involve thousands or even millions of
features for each training instance. Not only does this make training
extremely slow, it can also make it much harder to find a good
solution, as we will see. This problem is often referred to as the
curse of dimensionality. Fortunately, in real-world problems, it is
often possible to reduce the number of features considerably, turning
an intractable problem into a tractable one.
Later we will discuss some of the most popular dimensionality reduction
techniques | Python Code:
%matplotlib inline
# Common imports
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from IPython.display import display
import os
# Where to save the figures and data files
PROJECT_ROOT_DIR = "Results"
FIGURE_ID = "Results/FigureFiles"
DATA_ID = "DataFiles/"
if not os.path.exists(PROJECT_ROOT_DIR):
os.mkdir(PROJECT_ROOT_DIR)
if not os.path.exists(FIGURE_ID):
os.makedirs(FIGURE_ID)
if not os.path.exists(DATA_ID):
os.makedirs(DATA_ID)
def image_path(fig_id):
return os.path.join(FIGURE_ID, fig_id)
def data_path(dat_id):
return os.path.join(DATA_ID, dat_id)
def save_fig(fig_id):
plt.savefig(image_path(fig_id) + ".png", format='png')
infile = open(data_path("MassEval2016.dat"),'r')
# Read the experimental data with Pandas
Masses = pd.read_fwf(infile, usecols=(2,3,4,6,11),
names=('N', 'Z', 'A', 'Element', 'Ebinding'),
widths=(1,3,5,5,5,1,3,4,1,13,11,11,9,1,2,11,9,1,3,1,12,11,1),
header=39,
index_col=False)
# Extrapolated values are indicated by '#' in place of the decimal place, so
# the Ebinding column won't be numeric. Coerce to float and drop these entries.
Masses['Ebinding'] = pd.to_numeric(Masses['Ebinding'], errors='coerce')
Masses = Masses.dropna()
# Convert from keV to MeV.
Masses['Ebinding'] /= 1000
# Group the DataFrame by nucleon number, A.
Masses = Masses.groupby('A')
# Find the rows of the grouped DataFrame with the maximum binding energy.
Masses = Masses.apply(lambda t: t[t.Ebinding==t.Ebinding.max()])
A = Masses['A']
Z = Masses['Z']
N = Masses['N']
Element = Masses['Element']
Energies = Masses['Ebinding']
# Now we set up the design matrix X
X = np.zeros((len(A),5))
X[:,0] = 1
X[:,1] = A
X[:,2] = A**(2.0/3.0)
X[:,3] = A**(-1.0/3.0)
X[:,4] = A**(-1.0)
# Then nice printout using pandas
DesignMatrix = pd.DataFrame(X)
DesignMatrix.index = A
DesignMatrix.columns = ['1', 'A', 'A^(2/3)', 'A^(-1/3)', '1/A']
display(DesignMatrix)
Explanation: <!-- dom:TITLE: Week 35: Linear Regression and Review of Statistical Analysis and Probability Theory -->
Week 35: Linear Regression and Review of Statistical Analysis and Probability Theory
<!-- dom:AUTHOR: Morten Hjorth-Jensen at Department of Physics, University of Oslo & Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University -->
<!-- Author: -->
Morten Hjorth-Jensen, Department of Physics, University of Oslo and Department of Physics and Astronomy and National Superconducting Cyclotron Laboratory, Michigan State University
Date: Sep 16, 2020
Copyright 1999-2020, Morten Hjorth-Jensen. Released under CC Attribution-NonCommercial 4.0 license
Plans for week 35, August 24-28
Thursday: Introduction to ordinary Least Squares and derivation of basic equation
Friday: Linear regression and statistical analysis and probability theory
Thursday August 27
Video of Lecture.
Why Linear Regression (aka Ordinary Least Squares and family)
Fitting a continuous function with linear parameterization in terms of the parameters $\boldsymbol{\beta}$.
* Method of choice for fitting a continuous function!
Gives an excellent introduction to central Machine Learning features with understandable pedagogical links to other methods like Neural Networks, Support Vector Machines etc
Analytical expression for the fitting parameters $\boldsymbol{\beta}$
Analytical expressions for statistical properties like mean values, variances, confidence intervals and more
Analytical relation with probabilistic interpretations
Easy to introduce basic concepts like bias-variance tradeoff, cross-validation, resampling and regularization techniques and many other ML topics
Easy to code! And links well with classification problems and logistic regression and neural networks
Allows for easy hands-on understanding of gradient descent methods
and many more features
For more discussions of Ridge and Lasso regression, Wessel van Wieringen's article is highly recommended.
Similarly, Mehta et al's article is also recommended.
Regression analysis, overarching aims
Regression modeling deals with the description of the sampling distribution of a given random variable $y$ and how it varies as function of another variable or a set of such variables $\boldsymbol{x} =[x_0, x_1,\dots, x_{n-1}]^T$.
The first variable is called the dependent, the outcome or the response variable while the set of variables $\boldsymbol{x}$ is called the independent variable, or the predictor variable or the explanatory variable.
A regression model aims at finding a likelihood function $p(\boldsymbol{y}\vert \boldsymbol{x})$, that is the conditional distribution for $\boldsymbol{y}$ with a given $\boldsymbol{x}$. The estimation of $p(\boldsymbol{y}\vert \boldsymbol{x})$ is made using a data set with
* $n$ cases $i = 0, 1, 2, \dots, n-1$
Response (target, dependent or outcome) variable $y_i$ with $i = 0, 1, 2, \dots, n-1$
$p$ so-called explanatory (independent or predictor) variables $\boldsymbol{x}_i=[x_{i0}, x_{i1}, \dots, x_{ip-1}]$ with $i = 0, 1, 2, \dots, n-1$ and explanatory variables running from $0$ to $p-1$. See below for more explicit examples.
The goal of the regression analysis is to extract/exploit the relationship between $\boldsymbol{y}$ and $\boldsymbol{x}$ in order to infer causal dependencies, approximations to the likelihood functions, functional relationships and to make predictions, making fits and many other things.
Regression analysis, overarching aims II
Consider an experiment in which $p$ characteristics of $n$ samples are
measured. The data from this experiment, for various explanatory variables $p$ are normally represented by a matrix
$\mathbf{X}$.
The matrix $\mathbf{X}$ is called the design
matrix. Additional information of the samples is available in the
form of $\boldsymbol{y}$ (also as above). The variable $\boldsymbol{y}$ is
generally referred to as the response variable. The aim of
regression analysis is to explain $\boldsymbol{y}$ in terms of
$\boldsymbol{X}$ through a functional relationship like $y_i =
f(\mathbf{X}_{i,\ast})$. When no prior knowledge on the form of
$f(\cdot)$ is available, it is common to assume a linear relationship
between $\boldsymbol{X}$ and $\boldsymbol{y}$. This assumption gives rise to
the linear regression model where $\boldsymbol{\beta} = [\beta_0, \ldots,
\beta_{p-1}]^{T}$ are the regression parameters.
Linear regression gives us a set of analytical equations for the parameters $\beta_j$.
Examples
In order to understand the relation among the predictors $p$, the set of data $n$ and the target (outcome, output etc) $\boldsymbol{y}$,
consider the model we discussed for describing nuclear binding energies.
There we assumed that we could parametrize the data using a polynomial approximation based on the liquid drop model.
Assuming
$$
BE(A) = a_0+a_1A+a_2A^{2/3}+a_3A^{-1/3}+a_4A^{-1},
$$
we have five predictors, that is the intercept, the $A$ dependent term, the $A^{2/3}$ term and the $A^{-1/3}$ and $A^{-1}$ terms.
This gives $p=0,1,2,3,4$. Furthermore we have $n$ entries for each predictor. It means that our design matrix is a
$p\times n$ matrix $\boldsymbol{X}$.
Here the predictors are based on a model we have made. A popular data set which is widely encountered in ML applications is the
so-called credit card default data from Taiwan. The data set contains data on $n=30000$ credit card holders with predictors like gender, marital status, age, profession, education, etc. In total there are $24$ such predictors or attributes leading to a design matrix of dimensionality $24 \times 30000$. This is however a classification problem and we will come back to it when we discuss Logistic Regression.
General linear models
Before we proceed let us study a case from linear algebra where we aim at fitting a set of data $\boldsymbol{y}=[y_0,y_1,\dots,y_{n-1}]$. We could think of these data as a result of an experiment or a complicated numerical experiment. These data are functions of a series of variables $\boldsymbol{x}=[x_0,x_1,\dots,x_{n-1}]$, that is $y_i = y(x_i)$ with $i=0,1,2,\dots,n-1$. The variables $x_i$ could represent physical quantities like time, temperature, position etc. We assume that $y(x)$ is a smooth function.
Since obtaining these data points may not be trivial, we want to use these data to fit a function which can allow us to make predictions for values of $y$ which are not in the present set. The perhaps simplest approach is to assume we can parametrize our function in terms of a polynomial of degree $n-1$ with $n$ points, that is
$$
y=y(x) \rightarrow y(x_i)=\tilde{y}_i+\epsilon_i=\sum_{j=0}^{n-1} \beta_j x_i^j+\epsilon_i,
$$
where $\epsilon_i$ is the error in our approximation.
Rewriting the fitting procedure as a linear algebra problem
For every set of values $y_i,x_i$ we have thus the corresponding set of equations
$$
\begin{align}
y_0&=\beta_0+\beta_1x_0^1+\beta_2x_0^2+\dots+\beta_{n-1}x_0^{n-1}+\epsilon_0\
y_1&=\beta_0+\beta_1x_1^1+\beta_2x_1^2+\dots+\beta_{n-1}x_1^{n-1}+\epsilon_1\
y_2&=\beta_0+\beta_1x_2^1+\beta_2x_2^2+\dots+\beta_{n-1}x_2^{n-1}+\epsilon_2\
\dots & \dots \
y_{n-1}&=\beta_0+\beta_1x_{n-1}^1+\beta_2x_{n-1}^2+\dots+\beta_{n-1}x_{n-1}^{n-1}+\epsilon_{n-1}.\
\end{align}
$$
Rewriting the fitting procedure as a linear algebra problem, more details
Defining the vectors
$$
\boldsymbol{y} = [y_0,y_1, y_2,\dots, y_{n-1}]^T,
$$
and
$$
\boldsymbol{\beta} = [\beta_0,\beta_1, \beta_2,\dots, \beta_{n-1}]^T,
$$
and
$$
\boldsymbol{\epsilon} = [\epsilon_0,\epsilon_1, \epsilon_2,\dots, \epsilon_{n-1}]^T,
$$
and the design matrix
$$
\boldsymbol{X}=
\begin{bmatrix}
1& x_{0}^1 &x_{0}^2& \dots & \dots &x_{0}^{n-1}\
1& x_{1}^1 &x_{1}^2& \dots & \dots &x_{1}^{n-1}\
1& x_{2}^1 &x_{2}^2& \dots & \dots &x_{2}^{n-1}\
\dots& \dots &\dots& \dots & \dots &\dots\
1& x_{n-1}^1 &x_{n-1}^2& \dots & \dots &x_{n-1}^{n-1}\
\end{bmatrix}
$$
we can rewrite our equations as
$$
\boldsymbol{y} = \boldsymbol{X}\boldsymbol{\beta}+\boldsymbol{\epsilon}.
$$
The above design matrix is called a Vandermonde matrix.
Generalizing the fitting procedure as a linear algebra problem
We are obviously not limited to the above polynomial expansions. We
could replace the various powers of $x$ with elements of Fourier
series or instead of $x_i^j$ we could have $\cos{(j x_i)}$ or $\sin{(j
x_i)}$, or time series or other orthogonal functions. For every set
of values $y_i,x_i$ we can then generalize the equations to
$$
\begin{align}
y_0&=\beta_0x_{00}+\beta_1x_{01}+\beta_2x_{02}+\dots+\beta_{n-1}x_{0n-1}+\epsilon_0\
y_1&=\beta_0x_{10}+\beta_1x_{11}+\beta_2x_{12}+\dots+\beta_{n-1}x_{1n-1}+\epsilon_1\
y_2&=\beta_0x_{20}+\beta_1x_{21}+\beta_2x_{22}+\dots+\beta_{n-1}x_{2n-1}+\epsilon_2\
\dots & \dots \
y_{i}&=\beta_0x_{i0}+\beta_1x_{i1}+\beta_2x_{i2}+\dots+\beta_{n-1}x_{in-1}+\epsilon_i\
\dots & \dots \
y_{n-1}&=\beta_0x_{n-1,0}+\beta_1x_{n-1,2}+\beta_2x_{n-1,2}+\dots+\beta_{n-1}x_{n-1,n-1}+\epsilon_{n-1}.\
\end{align}
$$
Note that we have $p=n$ here, so the matrix is square. This is generally not the case!
Generalizing the fitting procedure as a linear algebra problem
We redefine in turn the matrix $\boldsymbol{X}$ as
$$
\boldsymbol{X}=
\begin{bmatrix}
x_{00}& x_{01} &x_{02}& \dots & \dots &x_{0,n-1}\
x_{10}& x_{11} &x_{12}& \dots & \dots &x_{1,n-1}\
x_{20}& x_{21} &x_{22}& \dots & \dots &x_{2,n-1}\
\dots& \dots &\dots& \dots & \dots &\dots\
x_{n-1,0}& x_{n-1,1} &x_{n-1,2}& \dots & \dots &x_{n-1,n-1}\
\end{bmatrix}
$$
and without loss of generality we rewrite again our equations as
$$
\boldsymbol{y} = \boldsymbol{X}\boldsymbol{\beta}+\boldsymbol{\epsilon}.
$$
The left-hand side of this equation is known. Our error vector $\boldsymbol{\epsilon}$ and the parameter vector $\boldsymbol{\beta}$ are our unknown quantities. How can we obtain the optimal set of $\beta_i$ values?
Optimizing our parameters
We have defined the matrix $\boldsymbol{X}$ via the equations
$$
\begin{align}
y_0&=\beta_0x_{00}+\beta_1x_{01}+\beta_2x_{02}+\dots+\beta_{n-1}x_{0n-1}+\epsilon_0\
y_1&=\beta_0x_{10}+\beta_1x_{11}+\beta_2x_{12}+\dots+\beta_{n-1}x_{1n-1}+\epsilon_1\
y_2&=\beta_0x_{20}+\beta_1x_{21}+\beta_2x_{22}+\dots+\beta_{n-1}x_{2n-1}+\epsilon_2\
\dots & \dots \
y_{i}&=\beta_0x_{i0}+\beta_1x_{i1}+\beta_2x_{i2}+\dots+\beta_{n-1}x_{in-1}+\epsilon_i\
\dots & \dots \
y_{n-1}&=\beta_0x_{n-1,0}+\beta_1x_{n-1,2}+\beta_2x_{n-1,2}+\dots+\beta_{n-1}x_{n-1,n-1}+\epsilon_{n-1}.\
\end{align}
$$
As we noted above, we stayed with a system with the design matrix
$\boldsymbol{X}\in {\mathbb{R}}^{n\times n}$, that is we have $p=n$. For reasons to come later (algorithmic arguments) we will hereafter define
our matrix as $\boldsymbol{X}\in {\mathbb{R}}^{n\times p}$, with the predictors referring to the column numbers and the entries $n$ being the row elements.
Our model for the nuclear binding energies
In our introductory notes we looked at the so-called liquid drop model. Let us remind ourselves about what we did by looking at the code.
We restate the parts of the code we are most interested in.
End of explanation
# matrix inversion to find beta
beta = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(Energies)
# and then make the prediction
ytilde = X @ beta
Explanation: With $\boldsymbol{\beta}\in {\mathbb{R}}^{p\times 1}$, it means that we will hereafter write our equations for the approximation as
$$
\boldsymbol{\tilde{y}}= \boldsymbol{X}\boldsymbol{\beta},
$$
throughout these lectures.
Optimizing our parameters, more details
With the above we use the design matrix to define the approximation $\boldsymbol{\tilde{y}}$ via the unknown quantity $\boldsymbol{\beta}$ as
$$
\boldsymbol{\tilde{y}}= \boldsymbol{X}\boldsymbol{\beta},
$$
and in order to find the optimal parameters $\beta_i$ instead of solving the above linear algebra problem, we define a function which gives a measure of the spread between the values $y_i$ (which represent hopefully the exact values) and the parameterized values $\tilde{y}_i$, namely
$$
C(\boldsymbol{\beta})=\frac{1}{n}\sum_{i=0}^{n-1}\left(y_i-\tilde{y}_i\right)^2=\frac{1}{n}\left{\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)^T\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)\right},
$$
or using the matrix $\boldsymbol{X}$ and in a more compact matrix-vector notation as
$$
C(\boldsymbol{\beta})=\frac{1}{n}\left{\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)^T\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)\right}.
$$
This function is one possible way to define the so-called cost function.
It is also common to define
the function $C$ as
$$
C(\boldsymbol{\beta})=\frac{1}{2n}\sum_{i=0}^{n-1}\left(y_i-\tilde{y}_i\right)^2,
$$
since when taking the first derivative with respect to the unknown parameters $\beta$, the factor of $2$ cancels out.
Interpretations and optimizing our parameters
The function
$$
C(\boldsymbol{\beta})=\frac{1}{n}\left{\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)^T\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)\right},
$$
can be linked to the variance of the quantity $y_i$ if we interpret the latter as the mean value.
When linking (see the discussion below) with the maximum likelihood approach below, we will indeed interpret $y_i$ as a mean value
$$
y_{i}=\langle y_i \rangle = \beta_0x_{i,0}+\beta_1x_{i,1}+\beta_2x_{i,2}+\dots+\beta_{n-1}x_{i,n-1}+\epsilon_i,
$$
where $\langle y_i \rangle$ is the mean value. Keep in mind also that
till now we have treated $y_i$ as the exact value. Normally, the
response (dependent or outcome) variable $y_i$ is the outcome of a
numerical experiment or another type of experiment and is thus only an
approximation to the true value. It is then always accompanied by an
error estimate, often limited to a statistical error estimate given by
the standard deviation discussed earlier. In the discussion here we
will treat $y_i$ as our exact value for the response variable.
In order to find the parameters $\beta_i$ we will then minimize the spread of $C(\boldsymbol{\beta})$, that is we are going to solve the problem
$$
{\displaystyle \min_{\boldsymbol{\beta}\in
{\mathbb{R}}^{p}}}\frac{1}{n}\left{\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)^T\left(\boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)\right}.
$$
In practical terms it means we will require
$$
\frac{\partial C(\boldsymbol{\beta})}{\partial \beta_j} = \frac{\partial }{\partial \beta_j}\left[ \frac{1}{n}\sum_{i=0}^{n-1}\left(y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{n-1}x_{i,n-1}\right)^2\right]=0,
$$
which results in
$$
\frac{\partial C(\boldsymbol{\beta})}{\partial \beta_j} = -\frac{2}{n}\left[ \sum_{i=0}^{n-1}x_{ij}\left(y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{n-1}x_{i,n-1}\right)\right]=0,
$$
or in a matrix-vector form as
$$
\frac{\partial C(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{X}^T\left( \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right).
$$
Interpretations and optimizing our parameters
We can rewrite
$$
\frac{\partial C(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{X}^T\left( \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right),
$$
as
$$
\boldsymbol{X}^T\boldsymbol{y} = \boldsymbol{X}^T\boldsymbol{X}\boldsymbol{\beta},
$$
and if the matrix $\boldsymbol{X}^T\boldsymbol{X}$ is invertible we have the solution
$$
\boldsymbol{\beta} =\left(\boldsymbol{X}^T\boldsymbol{X}\right)^{-1}\boldsymbol{X}^T\boldsymbol{y}.
$$
We note also that since our design matrix is defined as $\boldsymbol{X}\in
{\mathbb{R}}^{n\times p}$, the product $\boldsymbol{X}^T\boldsymbol{X} \in
{\mathbb{R}}^{p\times p}$. In the above case we have that $p \ll n$,
in our case $p=5$ meaning that we end up with inverting a small
$5\times 5$ matrix. This is a rather common situation, in many cases we end up with low-dimensional
matrices to invert. The methods discussed here and for many other
supervised learning algorithms like classification with logistic
regression or support vector machines, exhibit dimensionalities which
allow for the usage of direct linear algebra methods such as LU decomposition or Singular Value Decomposition (SVD) for finding the inverse of the matrix
$\boldsymbol{X}^T\boldsymbol{X}$.
Small question: Do you think the example we have at hand here (the nuclear binding energies) can lead to problems in inverting the matrix $\boldsymbol{X}^T\boldsymbol{X}$? What kind of problems can we expect?
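A quick numerical way to probe this question is to inspect the condition number of $\boldsymbol{X}^T\boldsymbol{X}$ and, if it is very large, switch to the SVD-based pseudoinverse. The short sketch below is illustrative only and assumes the design matrix X and the vector Energies from the cells above are still in memory.
import numpy as np
XTX = X.T @ X
# A very large condition number signals nearly collinear columns, which makes
# the explicit inverse of X^T X numerically unreliable.
print("Condition number of X^T X:", np.linalg.cond(XTX))
# The pseudoinverse (computed via the SVD) is a safer alternative to np.linalg.inv here.
beta_pinv = np.linalg.pinv(X) @ Energies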
Some useful matrix and vector expressions
The following matrix and vector relation will be useful here and for the rest of the course. Vectors are always written as boldfaced lower case letters and
matrices as upper case boldfaced letters.
$$
\frac{\partial (\boldsymbol{b}^T\boldsymbol{a})}{\partial \boldsymbol{a}} = \boldsymbol{b},
$$
$$
\frac{\partial (\boldsymbol{a}^T\boldsymbol{A}\boldsymbol{a})}{\partial \boldsymbol{a}} = \boldsymbol{a}^T\left(\boldsymbol{A}+\boldsymbol{A}^T\right),
$$
$$
\frac{\partial \mathrm{tr}(\boldsymbol{B}\boldsymbol{A})}{\partial \boldsymbol{A}} = \boldsymbol{B}^T,
$$
$$
\frac{\partial \log{\vert\boldsymbol{A}\vert}}{\partial \boldsymbol{A}} = (\boldsymbol{A}^{-1})^T.
$$
Interpretations and optimizing our parameters
The residuals $\boldsymbol{\epsilon}$ are in turn given by
$$
\boldsymbol{\epsilon} = \boldsymbol{y}-\boldsymbol{\tilde{y}} = \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta},
$$
and with
$$
\boldsymbol{X}^T\left( \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)= 0,
$$
we have
$$
\boldsymbol{X}^T\boldsymbol{\epsilon}=\boldsymbol{X}^T\left( \boldsymbol{y}-\boldsymbol{X}\boldsymbol{\beta}\right)= 0,
$$
meaning that the solution for $\boldsymbol{\beta}$ is the one which minimizes the residuals. Later we will link this with the maximum likelihood approach.
Let us now return to our nuclear binding energies and simply code the above equations.
Own code for Ordinary Least Squares
It is rather straightforward to implement the matrix inversion and obtain the parameters $\boldsymbol{\beta}$. After having defined the matrix $\boldsymbol{X}$ we simply need to
write
End of explanation
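For completeness, a minimal sketch of that matrix-inversion step is shown here (it assumes the design matrix X and the target vector Energies defined in the cells above; the commented pinv variant is a hedge against a nearly singular $\boldsymbol{X}^T\boldsymbol{X}$):
import numpy as np
# Solve the normal equations: beta = (X^T X)^{-1} X^T y
beta = np.linalg.inv(X.T @ X) @ (X.T @ Energies)
# beta = np.linalg.pinv(X) @ Energies   # numerically safer alternative
ytilde = X @ beta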
fit = np.linalg.lstsq(X, Energies, rcond =None)[0]
ytildenp = np.dot(fit,X.T)
Explanation: Alternatively, you can use the least squares functionality in Numpy as
End of explanation
Masses['Eapprox'] = ytilde
# Generate a plot comparing the experimental with the fitted values values.
fig, ax = plt.subplots()
ax.set_xlabel(r'$A = N + Z$')
ax.set_ylabel(r'$E_\mathrm{bind}\,/\mathrm{MeV}$')
ax.plot(Masses['A'], Masses['Ebinding'], alpha=0.7, lw=2,
label='Ame2016')
ax.plot(Masses['A'], Masses['Eapprox'], alpha=0.7, lw=2, c='m',
label='Fit')
ax.legend()
save_fig("Masses2016OLS")
plt.show()
Explanation: And finally we plot our fit with and compare with data
End of explanation
def R2(y_data, y_model):
return 1 - np.sum((y_data - y_model) ** 2) / np.sum((y_data - np.mean(y_data)) ** 2)
Explanation: Adding error analysis and training set up
We can easily test our fit by computing the $R2$ score that we discussed in connection with the functionality of Scikit-Learn in the introductory slides.
Since we are not using Scikit-Learn here we can define our own $R2$ function as
End of explanation
print(R2(Energies,ytilde))
Explanation: and we would be using it as
End of explanation
def MSE(y_data,y_model):
n = np.size(y_model)
return np.sum((y_data-y_model)**2)/n
print(MSE(Energies,ytilde))
Explanation: We can easily add our MSE score as
End of explanation
def RelativeError(y_data,y_model):
return abs((y_data-y_model)/y_data)
print(RelativeError(Energies, ytilde))
Explanation: and finally the relative error as
End of explanation
# Common imports
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sklearn.linear_model as skl
from sklearn.metrics import mean_squared_error, r2_score, mean_absolute_error
# Where to save the figures and data files
PROJECT_ROOT_DIR = "Results"
FIGURE_ID = "Results/FigureFiles"
DATA_ID = "DataFiles/"
if not os.path.exists(PROJECT_ROOT_DIR):
os.mkdir(PROJECT_ROOT_DIR)
if not os.path.exists(FIGURE_ID):
os.makedirs(FIGURE_ID)
if not os.path.exists(DATA_ID):
os.makedirs(DATA_ID)
def image_path(fig_id):
return os.path.join(FIGURE_ID, fig_id)
def data_path(dat_id):
return os.path.join(DATA_ID, dat_id)
def save_fig(fig_id):
plt.savefig(image_path(fig_id) + ".png", format='png')
infile = open(data_path("EoS.csv"),'r')
# Read the EoS data as csv file and organize the data into two arrays with density and energies
EoS = pd.read_csv(infile, names=('Density', 'Energy'))
EoS['Energy'] = pd.to_numeric(EoS['Energy'], errors='coerce')
EoS = EoS.dropna()
Energies = EoS['Energy']
Density = EoS['Density']
# The design matrix now as function of various polytrops
X = np.zeros((len(Density),4))
X[:,3] = Density**(4.0/3.0)
X[:,2] = Density
X[:,1] = Density**(2.0/3.0)
X[:,0] = 1
# We use now Scikit-Learn's linear regressor and ridge regressor
# OLS part
clf = skl.LinearRegression().fit(X, Energies)
ytilde = clf.predict(X)
EoS['Eols'] = ytilde
# The mean squared error
print("Mean squared error: %.2f" % mean_squared_error(Energies, ytilde))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % r2_score(Energies, ytilde))
# Mean absolute error
print('Mean absolute error: %.2f' % mean_absolute_error(Energies, ytilde))
print(clf.coef_, clf.intercept_)
# The Ridge regression with a hyperparameter lambda = 0.1
_lambda = 0.1
clf_ridge = skl.Ridge(alpha=_lambda).fit(X, Energies)
yridge = clf_ridge.predict(X)
EoS['Eridge'] = yridge
# The mean squared error
print("Mean squared error: %.2f" % mean_squared_error(Energies, yridge))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % r2_score(Energies, yridge))
# Mean absolute error
print('Mean absolute error: %.2f' % mean_absolute_error(Energies, yridge))
print(clf_ridge.coef_, clf_ridge.intercept_)
fig, ax = plt.subplots()
ax.set_xlabel(r'$\rho[\mathrm{fm}^{-3}]$')
ax.set_ylabel(r'Energy per particle')
ax.plot(EoS['Density'], EoS['Energy'], alpha=0.7, lw=2,
label='Theoretical data')
ax.plot(EoS['Density'], EoS['Eols'], alpha=0.7, lw=2, c='m',
label='OLS')
ax.plot(EoS['Density'], EoS['Eridge'], alpha=0.7, lw=2, c='g',
label='Ridge $\lambda = 0.1$')
ax.legend()
save_fig("EoSfitting")
plt.show()
Explanation: The $\chi^2$ function
Normally, the response (dependent or outcome) variable $y_i$ is the
outcome of a numerical experiment or another type of experiment and is
thus only an approximation to the true value. It is then always
accompanied by an error estimate, often limited to a statistical error
estimate given by the standard deviation discussed earlier. In the
discussion here we will treat $y_i$ as our exact value for the
response variable.
Introducing the standard deviation $\sigma_i$ for each measurement
$y_i$, we define now the $\chi^2$ function (omitting the $1/n$ term)
as
$$
\chi^2(\boldsymbol{\beta})=\frac{1}{n}\sum_{i=0}^{n-1}\frac{\left(y_i-\tilde{y}_i\right)^2}{\sigma_i^2}=\frac{1}{n}\left\{\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)^T\frac{1}{\boldsymbol{\Sigma^2}}\left(\boldsymbol{y}-\boldsymbol{\tilde{y}}\right)\right\},
$$
where the matrix $\boldsymbol{\Sigma}$ is a diagonal matrix with $\sigma_i$ as matrix elements.
The $\chi^2$ function
In order to find the parameters $\beta_i$ we will then minimize the spread of $\chi^2(\boldsymbol{\beta})$ by requiring
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_j} = \frac{\partial }{\partial \beta_j}\left[ \frac{1}{n}\sum_{i=0}^{n-1}\left(\frac{y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{n-1}x_{i,n-1}}{\sigma_i}\right)^2\right]=0,
$$
which results in
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_j} = -\frac{2}{n}\left[ \sum_{i=0}^{n-1}\frac{x_{ij}}{\sigma_i}\left(\frac{y_i-\beta_0x_{i,0}-\beta_1x_{i,1}-\beta_2x_{i,2}-\dots-\beta_{n-1}x_{i,n-1}}{\sigma_i}\right)\right]=0,
$$
or in a matrix-vector form as
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{A}^T\left( \boldsymbol{b}-\boldsymbol{A}\boldsymbol{\beta}\right).
$$
where we have defined the matrix $\boldsymbol{A} =\boldsymbol{X}/\boldsymbol{\Sigma}$ with matrix elements $a_{ij} = x_{ij}/\sigma_i$ and the vector $\boldsymbol{b}$ with elements $b_i = y_i/\sigma_i$.
The $\chi^2$ function
We can rewrite
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \boldsymbol{\beta}} = 0 = \boldsymbol{A}^T\left( \boldsymbol{b}-\boldsymbol{A}\boldsymbol{\beta}\right),
$$
as
$$
\boldsymbol{A}^T\boldsymbol{b} = \boldsymbol{A}^T\boldsymbol{A}\boldsymbol{\beta},
$$
and if the matrix $\boldsymbol{A}^T\boldsymbol{A}$ is invertible we have the solution
$$
\boldsymbol{\beta} =\left(\boldsymbol{A}^T\boldsymbol{A}\right)^{-1}\boldsymbol{A}^T\boldsymbol{b}.
$$
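In code, this weighted fit is a one-line change from ordinary least squares: divide each row of the design matrix and each data point by its $\sigma_i$ before solving. A minimal hedged sketch (X, y and the per-point uncertainties sigma are assumed to be given NumPy arrays; they are not taken from this notebook):
import numpy as np
A = X / sigma[:, None]   # a_ij = x_ij / sigma_i
b = y / sigma            # b_i  = y_i / sigma_i
beta = np.linalg.pinv(A.T @ A) @ (A.T @ b)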
The $\chi^2$ function
If we then introduce the matrix
$$
\boldsymbol{H} = \left(\boldsymbol{A}^T\boldsymbol{A}\right)^{-1},
$$
we have then the following expression for the parameters $\beta_j$ (the matrix elements of $\boldsymbol{H}$ are $h_{ij}$)
$$
\beta_j = \sum_{k=0}^{p-1}h_{jk}\sum_{i=0}^{n-1}\frac{y_i}{\sigma_i}\frac{x_{ik}}{\sigma_i} = \sum_{k=0}^{p-1}h_{jk}\sum_{i=0}^{n-1}b_ia_{ik}
$$
We state without proof the expression for the uncertainty in the parameters $\beta_j$ as (we leave this as an exercise)
$$
\sigma^2(\beta_j) = \sum_{i=0}^{n-1}\sigma_i^2\left( \frac{\partial \beta_j}{\partial y_i}\right)^2,
$$
resulting in
$$
\sigma^2(\beta_j) = \left(\sum_{k=0}^{p-1}h_{jk}\sum_{i=0}^{n-1}a_{ik}\right)\left(\sum_{l=0}^{p-1}h_{jl}\sum_{m=0}^{n-1}a_{ml}\right) = h_{jj}!
$$
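Continuing the sketch above, the parameter uncertainties follow directly from the diagonal of $\boldsymbol{H}$:
H = np.linalg.pinv(A.T @ A)
# sigma^2(beta_j) = h_jj, so the one-standard-deviation errors are
beta_err = np.sqrt(np.diag(H))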
The $\chi^2$ function
The first step here is to approximate the function $y$ with a first-order polynomial, that is we write
$$
y=y(x) \rightarrow y(x_i) \approx \beta_0+\beta_1 x_i.
$$
By computing the derivatives of $\chi^2$ with respect to $\beta_0$ and $\beta_1$ show that these are given by
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_0} = -2\left[ \frac{1}{n}\sum_{i=0}^{n-1}\left(\frac{y_i-\beta_0-\beta_1x_{i}}{\sigma_i^2}\right)\right]=0,
$$
and
$$
\frac{\partial \chi^2(\boldsymbol{\beta})}{\partial \beta_1} = -\frac{2}{n}\left[ \sum_{i=0}^{n-1}x_i\left(\frac{y_i-\beta_0-\beta_1x_{i}}{\sigma_i^2}\right)\right]=0.
$$
The $\chi^2$ function
For a linear fit (a first-order polynomial) we don't need to invert a matrix!!
Defining
$$
\gamma = \sum_{i=0}^{n-1}\frac{1}{\sigma_i^2},
$$
$$
\gamma_x = \sum_{i=0}^{n-1}\frac{x_{i}}{\sigma_i^2},
$$
$$
\gamma_y = \sum_{i=0}^{n-1}\left(\frac{y_i}{\sigma_i^2}\right),
$$
$$
\gamma_{xx} = \sum_{i=0}^{n-1}\frac{x_ix_{i}}{\sigma_i^2},
$$
$$
\gamma_{xy} = \sum_{i=0}^{n-1}\frac{y_ix_{i}}{\sigma_i^2},
$$
we obtain
$$
\beta_0 = \frac{\gamma_{xx}\gamma_y-\gamma_x\gamma_{xy}}{\gamma\gamma_{xx}-\gamma_x^2},
$$
$$
\beta_1 = \frac{\gamma_{xy}\gamma-\gamma_x\gamma_y}{\gamma\gamma_{xx}-\gamma_x^2}.
$$
This approach (for both linear and non-linear regression) often
suffers from the problem of being either underdetermined or overdetermined in the
unknown coefficients $\beta_i$. A better approach is to use the
Singular Value Decomposition (SVD) method discussed below, or
Lasso and Ridge regression (see below).
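The closed-form expressions above are also easy to code directly. A small hedged sketch (x, y and sigma are assumed to be one-dimensional NumPy arrays, not data from this notebook):
import numpy as np
w = 1.0 / sigma**2
g, gx, gy = np.sum(w), np.sum(w * x), np.sum(w * y)
gxx, gxy = np.sum(w * x * x), np.sum(w * x * y)
denom = g * gxx - gx**2
beta0 = (gxx * gy - gx * gxy) / denom   # intercept
beta1 = (g * gxy - gx * gy) / denom     # slope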
Fitting an Equation of State for Dense Nuclear Matter
Before we continue, let us introduce yet another example. We are going to fit the
nuclear equation of state using results from many-body calculations.
The equation of state we have made available here, as function of
density, has been derived using modern nucleon-nucleon potentials with
the addition of three-body
forces. This
time the file is presented as a standard csv file.
The beginning of the Python code here is similar to what you have seen
before, with the same initializations and declarations. We use also
pandas again, rather extensively in order to organize our data.
The difference now is that we use Scikit-Learn's regression tools
instead of our own matrix inversion implementation. Furthermore, we
sneak in Ridge regression (to be discussed below) which includes a
hyperparameter $\lambda$, also to be explained below.
The code
End of explanation
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
# Where to save the figures and data files
PROJECT_ROOT_DIR = "Results"
FIGURE_ID = "Results/FigureFiles"
DATA_ID = "DataFiles/"
if not os.path.exists(PROJECT_ROOT_DIR):
os.mkdir(PROJECT_ROOT_DIR)
if not os.path.exists(FIGURE_ID):
os.makedirs(FIGURE_ID)
if not os.path.exists(DATA_ID):
os.makedirs(DATA_ID)
def image_path(fig_id):
return os.path.join(FIGURE_ID, fig_id)
def data_path(dat_id):
return os.path.join(DATA_ID, dat_id)
def save_fig(fig_id):
plt.savefig(image_path(fig_id) + ".png", format='png')
def R2(y_data, y_model):
return 1 - np.sum((y_data - y_model) ** 2) / np.sum((y_data - np.mean(y_data)) ** 2)
def MSE(y_data,y_model):
n = np.size(y_model)
return np.sum((y_data-y_model)**2)/n
infile = open(data_path("EoS.csv"),'r')
# Read the EoS data as csv file and organized into two arrays with density and energies
EoS = pd.read_csv(infile, names=('Density', 'Energy'))
EoS['Energy'] = pd.to_numeric(EoS['Energy'], errors='coerce')
EoS = EoS.dropna()
Energies = EoS['Energy']
Density = EoS['Density']
# The design matrix now as function of various polytrops
X = np.zeros((len(Density),5))
X[:,0] = 1
X[:,1] = Density**(2.0/3.0)
X[:,2] = Density
X[:,3] = Density**(4.0/3.0)
X[:,4] = Density**(5.0/3.0)
# We split the data in test and training data
X_train, X_test, y_train, y_test = train_test_split(X, Energies, test_size=0.2)
# matrix inversion to find beta
beta = np.linalg.inv(X_train.T.dot(X_train)).dot(X_train.T).dot(y_train)
# and then make the prediction
ytilde = X_train @ beta
print("Training R2")
print(R2(y_train,ytilde))
print("Training MSE")
print(MSE(y_train,ytilde))
ypredict = X_test @ beta
print("Test R2")
print(R2(y_test,ypredict))
print("Test MSE")
print(MSE(y_test,ypredict))
Explanation: The above simple polynomial in density $\rho$ gives an excellent fit
to the data.
We note also that there is a small deviation between the
standard OLS and the Ridge regression at higher densities. We discuss this in more detail
below.
Splitting our Data in Training and Test data
It is normal in essentially all Machine Learning studies to split the
data in a training set and a test set (sometimes also an additional
validation set). Scikit-Learn has its own function for this. There
is no explicit recipe for how much data should be included as training
data and say test data. An accepted rule of thumb is to use
approximately $2/3$ to $4/5$ of the data as training data. We will
postpone a discussion of this splitting to the end of these notes and
our discussion of the so-called bias-variance tradeoff. Here we
limit ourselves to repeat the above equation of state fitting example
but now splitting the data into a training set and a test set.
End of explanation
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
Explanation: <!-- !split -->
The Boston housing data example
The Boston housing
data set was originally a part of UCI Machine Learning Repository
and has been removed now. The data set is now included in Scikit-Learn's
library. There are 506 samples and 13 feature (predictor) variables
in this data set. The objective is to predict the value of prices of
the house using the features (predictors) listed here.
The features/predictors are
1. CRIM: Per capita crime rate by town
2. ZN: Proportion of residential land zoned for lots over 25000 square feet
3. INDUS: Proportion of non-retail business acres per town
4. CHAS: Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
5. NOX: Nitric oxide concentration (parts per 10 million)
6. RM: Average number of rooms per dwelling
7. AGE: Proportion of owner-occupied units built prior to 1940
8. DIS: Weighted distances to five Boston employment centers
9. RAD: Index of accessibility to radial highways
10. TAX: Full-value property tax rate per USD 10000
11. B: $1000(Bk - 0.63)^2$, where $Bk$ is the proportion of [people of African American descent] by town
12. LSTAT: Percentage of lower status of the population
13. MEDV: Median value of owner-occupied homes in USD 1000s
Housing data, the code
We start by importing the libraries
End of explanation
from sklearn.datasets import load_boston
boston_dataset = load_boston()
# boston_dataset is a dictionary
# let's check what it contains
boston_dataset.keys()
Explanation: and load the Boston Housing DataSet from Scikit-Learn
End of explanation
boston = pd.DataFrame(boston_dataset.data, columns=boston_dataset.feature_names)
boston.head()
boston['MEDV'] = boston_dataset.target
Explanation: Then we invoke Pandas
End of explanation
# check for missing values in all the columns
boston.isnull().sum()
Explanation: and preprocess the data
End of explanation
# set the size of the figure
sns.set(rc={'figure.figsize':(11.7,8.27)})
# plot a histogram showing the distribution of the target values
sns.distplot(boston['MEDV'], bins=30)
plt.show()
Explanation: We can then visualize the data
End of explanation
# compute the pair wise correlation for all columns
correlation_matrix = boston.corr().round(2)
# use the heatmap function from seaborn to plot the correlation matrix
# annot = True to print the values inside the square
sns.heatmap(data=correlation_matrix, annot=True)
Explanation: It is now useful to look at the correlation matrix
End of explanation
plt.figure(figsize=(20, 5))
features = ['LSTAT', 'RM']
target = boston['MEDV']
for i, col in enumerate(features):
plt.subplot(1, len(features) , i+1)
x = boston[col]
y = target
plt.scatter(x, y, marker='o')
plt.title(col)
plt.xlabel(col)
plt.ylabel('MEDV')
Explanation: From the above correlation plot we can see that MEDV is strongly correlated with LSTAT and RM. We also see that RAD and TAX are strongly correlated with each other, so we do not include both of them among our features, in order to avoid multicollinearity
End of explanation
X = pd.DataFrame(np.c_[boston['LSTAT'], boston['RM']], columns = ['LSTAT','RM'])
Y = boston['MEDV']
Explanation: Now we start training our model
End of explanation
from sklearn.model_selection import train_test_split
# splits the training and test data set in 80% : 20%
# assign random_state to any value.This ensures consistency.
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.2, random_state=5)
print(X_train.shape)
print(X_test.shape)
print(Y_train.shape)
print(Y_test.shape)
Explanation: We split the data into training and test sets
End of explanation
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
lin_model = LinearRegression()
lin_model.fit(X_train, Y_train)
# model evaluation for training set
y_train_predict = lin_model.predict(X_train)
rmse = (np.sqrt(mean_squared_error(Y_train, y_train_predict)))
r2 = r2_score(Y_train, y_train_predict)
print("The model performance for training set")
print("--------------------------------------")
print('RMSE is {}'.format(rmse))
print('R2 score is {}'.format(r2))
print("\n")
# model evaluation for testing set
y_test_predict = lin_model.predict(X_test)
# root mean square error of the model
rmse = (np.sqrt(mean_squared_error(Y_test, y_test_predict)))
# r-squared score of the model
r2 = r2_score(Y_test, y_test_predict)
print("The model performance for testing set")
print("--------------------------------------")
print('RMSE is {}'.format(rmse))
print('R2 score is {}'.format(r2))
# plotting the y_test vs y_pred
# ideally should have been a straight line
plt.scatter(Y_test, y_test_predict)
plt.show()
Explanation: Then we use the linear regression functionality from Scikit-Learn
End of explanation
# Common imports
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sklearn.linear_model as skl
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler, StandardScaler, Normalizer
# Where to save the figures and data files
PROJECT_ROOT_DIR = "Results"
FIGURE_ID = "Results/FigureFiles"
DATA_ID = "DataFiles/"
if not os.path.exists(PROJECT_ROOT_DIR):
os.mkdir(PROJECT_ROOT_DIR)
if not os.path.exists(FIGURE_ID):
os.makedirs(FIGURE_ID)
if not os.path.exists(DATA_ID):
os.makedirs(DATA_ID)
def image_path(fig_id):
return os.path.join(FIGURE_ID, fig_id)
def data_path(dat_id):
return os.path.join(DATA_ID, dat_id)
def save_fig(fig_id):
plt.savefig(image_path(fig_id) + ".png", format='png')
def FrankeFunction(x,y):
term1 = 0.75*np.exp(-(0.25*(9*x-2)**2) - 0.25*((9*y-2)**2))
term2 = 0.75*np.exp(-((9*x+1)**2)/49.0 - 0.1*(9*y+1))
term3 = 0.5*np.exp(-(9*x-7)**2/4.0 - 0.25*((9*y-3)**2))
term4 = -0.2*np.exp(-(9*x-4)**2 - (9*y-7)**2)
return term1 + term2 + term3 + term4
def create_X(x, y, n ):
if len(x.shape) > 1:
x = np.ravel(x)
y = np.ravel(y)
N = len(x)
l = int((n+1)*(n+2)/2) # Number of elements in beta
X = np.ones((N,l))
for i in range(1,n+1):
q = int((i)*(i+1)/2)
for k in range(i+1):
X[:,q+k] = (x**(i-k))*(y**k)
return X
# Making meshgrid of datapoints and compute Franke's function
n = 5
N = 1000
x = np.sort(np.random.uniform(0, 1, N))
y = np.sort(np.random.uniform(0, 1, N))
z = FrankeFunction(x, y)
X = create_X(x, y, n=n)
# split in training and test data
X_train, X_test, y_train, y_test = train_test_split(X,z,test_size=0.2)
clf = skl.LinearRegression().fit(X_train, y_train)
# The mean squared error and R2 score
print("MSE before scaling: {:.2f}".format(mean_squared_error(clf.predict(X_test), y_test)))
print("R2 score before scaling {:.2f}".format(clf.score(X_test,y_test)))
scaler = StandardScaler()
scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
print("Feature min values before scaling:\n {}".format(X_train.min(axis=0)))
print("Feature max values before scaling:\n {}".format(X_train.max(axis=0)))
print("Feature min values after scaling:\n {}".format(X_train_scaled.min(axis=0)))
print("Feature max values after scaling:\n {}".format(X_train_scaled.max(axis=0)))
clf = skl.LinearRegression().fit(X_train_scaled, y_train)
print("MSE after scaling: {:.2f}".format(mean_squared_error(clf.predict(X_test_scaled), y_test)))
print("R2 score for scaled data: {:.2f}".format(clf.score(X_test_scaled,y_test)))
Explanation: Reducing the number of degrees of freedom, overarching view
Many Machine Learning problems involve thousands or even millions of
features for each training instance. Not only does this make training
extremely slow, it can also make it much harder to find a good
solution, as we will see. This problem is often referred to as the
curse of dimensionality. Fortunately, in real-world problems, it is
often possible to reduce the number of features considerably, turning
an intractable problem into a tractable one.
Later we will discuss some of the most popular dimensionality reduction
techniques: the principal component analysis (PCA), Kernel PCA, and
Locally Linear Embedding (LLE).
Principal component analysis and its various variants deal with the
problem of fitting a low-dimensional affine
subspace to a set of
data points in a high-dimensional space. With its family of methods it
is one of the most used tools in data modeling, compression and
visualization.
Preprocessing our data
Before we proceed however, we will discuss how to preprocess our
data. Till now and in connection with our previous examples we have
not met so many cases where we are too sensitive to the scaling of our
data. Normally the data may need a rescaling and/or may be sensitive
to extreme values. Scaling the data renders our inputs much more
suitable for the algorithms we want to employ.
Scikit-Learn has several functions which allow us to rescale the
data, normally resulting in much better results in terms of various
accuracy scores. The StandardScaler function in Scikit-Learn
ensures that for each feature/predictor we study the mean value is
zero and the variance is one (every column in the design/feature
matrix). This scaling has the drawback that it does not ensure that
we have a particular maximum or minimum in our data set. Another
function included in Scikit-Learn is the MinMaxScaler which
ensures that all features are exactly between $0$ and $1$.
More preprocessing
The Normalizer scales each data
point such that the feature vector has a euclidean length of one. In other words, it
projects a data point on the circle (or sphere in the case of higher dimensions) with a
radius of 1. This means every data point is scaled by a different number (by the
inverse of its length).
This normalization is often used when only the direction (or angle) of the data matters,
not the length of the feature vector.
The RobustScaler works similarly to the StandardScaler in that it
ensures statistical properties for each feature that guarantee that
they are on the same scale. However, the RobustScaler uses the median
and quartiles, instead of mean and variance. This makes the
RobustScaler ignore data points that are very different from the rest
(like measurement errors). These odd data points are also called
outliers, and might often lead to trouble for other scaling
techniques.
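A compact, hedged illustration of how these four scalers differ on the same feature matrix (X_train here stands for any NumPy array of shape (n_samples, n_features); the snippet is not part of the original notebook):
from sklearn.preprocessing import StandardScaler, MinMaxScaler, RobustScaler, Normalizer
for scaler in (StandardScaler(), MinMaxScaler(), RobustScaler(), Normalizer()):
    Xs = scaler.fit_transform(X_train)
    # StandardScaler: zero mean and unit variance per column; MinMaxScaler: range [0, 1];
    # RobustScaler: median and quartiles; Normalizer: unit Euclidean length per row.
    print(type(scaler).__name__, Xs.min(), Xs.max())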
Simple preprocessing examples, Franke function and regression
End of explanation |
6,996 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Technical Specification Support
How imports work
Imports can be used in different ways depending on the use case and support levels.
People who want to support the latest version of STIX 2 without having to make changes, can implicitly use the latest version
Step1: or,
Step2: People who want to use an explicit version
Step3: or,
Step4: or even, (less preferred)
Step5: The last option makes it easy to update to a new version in one place per file, once you've made the deliberate action to do this.
People who want to use multiple versions in a single file
Step6: or,
Step7: or (less preferred)
Step9: How parsing works
If the version positional argument is not provided the library will make the best attempt using the "spec_version" property found on a Bundle, SDOs, SCOs, or SROs.
You can lock your parse() method to a specific STIX version by
Step10: In the example above if a 2.1 or higher object is parsed, the operation will fail.
How custom content works
CustomObject, CustomObservable, CustomMarking and CustomExtension must be registered explicitly by STIX version. This is a design decision since properties or requirements may change as the STIX Technical Specification advances.
You can perform this by | Python Code:
import stix2
stix2.Indicator()
Explanation: Technical Specification Support
How imports work
Imports can be used in different ways depending on the use case and support levels.
People who want to support the latest version of STIX 2 without having to make changes, can implicitly use the latest version:
Warning: The implicit import method can cause the code to break between major releases made to support a newer approved committee specification. Therefore, it is not recommended for large scale applications relying on specific object support.
End of explanation
from stix2 import Indicator
Indicator()
Explanation: or,
End of explanation
import stix2.v20
stix2.v20.Indicator()
Explanation: People who want to use an explicit version:
End of explanation
from stix2.v20 import Indicator
Indicator()
Explanation: or,
End of explanation
import stix2.v20 as stix2
stix2.Indicator()
Explanation: or even, (less preferred)
End of explanation
import stix2
stix2.v20.Indicator()
stix2.v21.Indicator()
Explanation: The last option makes it easy to update to a new version in one place per file, once you've made the deliberate action to do this.
People who want to use multiple versions in a single file:
End of explanation
from stix2 import v20, v21
v20.Indicator()
v21.Indicator()
Explanation: or,
End of explanation
from stix2.v20 import Indicator as Indicator_v20
from stix2.v21 import Indicator as Indicator_v21
Indicator_v20()
Indicator_v21()
Explanation: or (less preferred):
End of explanation
from stix2 import parse
indicator = parse({
"type": "indicator",
"id": "indicator--dbcbd659-c927-4f9a-994f-0a2632274394",
"created": "2017-09-26T23:33:39.829Z",
"modified": "2017-09-26T23:33:39.829Z",
"labels": [
"malicious-activity"
],
"name": "File hash for malware variant",
"pattern": "[file:hashes.md5 = 'd41d8cd98f00b204e9800998ecf8427e']",
"valid_from": "2017-09-26T23:33:39.829952Z"
}, version="2.0")
print(indicator.serialize(pretty=True))
Explanation: How parsing works
If the version positional argument is not provided the library will make the best attempt using the "spec_version" property found on a Bundle, SDOs, SCOs, or SROs.
You can lock your parse() method to a specific STIX version by:
End of explanation
import stix2
# Make my custom observable available in STIX 2.0
@stix2.v20.CustomObservable('x-new-object-type',
[("prop", stix2.properties.BooleanProperty())])
class NewObject2(object):
pass
# Make my custom observable available in STIX 2.1
@stix2.v21.CustomObservable('x-new-object-type',
[("prop", stix2.properties.BooleanProperty())])
class NewObject2(object):
pass
Explanation: In the example above if a 2.1 or higher object is parsed, the operation will fail.
How custom content works
CustomObject, CustomObservable, CustomMarking and CustomExtension must be registered explicitly by STIX version. This is a design decision since properties or requirements may change as the STIX Technical Specification advances.
You can perform this by:
End of explanation |
6,997 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Importing the large datasets to a postgresql server and computing their metrics
It is not possible to load the larger data sets in the memory of a local machine, therefore an alternative is to import them to a psql table and query them from there. By adding the right indices this can make the queries fast enough. After this import one can extract some basic statistics using sql and also export smaller portions of the data which can be handled by spark or pandas on a local machine.
Helper functions
Step1: Unzipping the data and converting it to csv format
Unfortunately psql does not support importing record-style (newline-delimited) JSON files directly, so we first need to convert the data sets to csv. We use the command line tool json2csv for this.
WARNING
Step2: Importing the data in psql
To import the data in psql we create a table with the appropriate shape and import from the csv files generated above.
Some preparation to run psql transactions and queries in python
Step3: Creating tables with indices for the large datasets
Step4: Importing the datasets to psql
WARNING
Step5: Querying the metrics | Python Code:
import timeit
def stopwatch(function):
start_time = timeit.default_timer()
result = function()
print('Elapsed time: %i sec' % int(timeit.default_timer() - start_time))
return result
Explanation: Importing the large datasets to a postgresql server and computing their metrics
It is not possible to load the larger data sets in the memory of a local machine, therefore an alternative is to import them to a psql table and query them from there. By adding the right indices this can make the queries fast enough. After this import one can extract some basic statistics using sql and also export smaller portions of the data which can be handled by spark or pandas on a local machine.
Helper functions
End of explanation
start_time = timeit.default_timer()
!ls ./data/large-datasets/*.gz | grep -Po '.*(?=.gz)' | xargs -I {} gunzip {}.gz
print('Elapsed time: %i sec' % int(timeit.default_timer() - start_time))
start_time = timeit.default_timer()
!ls ./data/large-datasets/*.json | xargs sed -i 's/|/?/g;s/\u0000/?/g'
print('Elapsed time: %i sec' % int(timeit.default_timer() - start_time))
start_time = timeit.default_timer()
!ls ./data/large-datasets/*.json | grep -Po '.*(?=.json)' | xargs -I {} json2csv -p -d '|' -k asin,helpful,overall,reviewText,reviewTime,reviewerID,reviewerName,summary,unixReviewTime -i {}.json -o {}.csv
!rm ./data/large-datasets/*.json
print('Elapsed time: %i sec' % int(timeit.default_timer() - start_time))
Explanation: Unzipping the data and converting it to csv format
Unfortunately psql does not support importing record-style (newline-delimited) JSON files directly, so we first need to convert the data sets to csv. We use the command line tool json2csv for this.
WARNING: The following two commands will run for a while, especially the second one. You can expect approximately 1 minute per GB of unzipped data.
End of explanation
import psycopg2 as pg
import pandas as pd
db_conf = {
'user': 'mariosk',
'database': 'amazon_reviews'
}
connection_factory = lambda: pg.connect(user=db_conf['user'], database=db_conf['database'])
def transaction(*statements):
try:
connection = connection_factory()
cursor = connection.cursor()
for statement in statements:
cursor.execute(statement)
connection.commit()
cursor.close()
except pg.DatabaseError as error:
print(error)
finally:
if connection is not None:
connection.close()
def query(statement):
try:
connection = connection_factory()
cursor = connection.cursor()
cursor.execute(statement)
header = [ description[0] for description in cursor.description ]
rows = cursor.fetchall()
cursor.close()
return pd.DataFrame.from_records(rows, columns=header)
except (Exception, pg.DatabaseError) as error:
print(error)
return None
finally:
if connection is not None:
connection.close()
Explanation: Importing the data in psql
To import the data in psql we create a table with the appropriate shape and import from the csv files generated above.
Some preparation to run psql transactions and queries in python
End of explanation
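Since the introduction also mentioned exporting smaller portions of the data for local processing, here is a hedged example using the query helper defined above (the table name and sample size are illustrative only):
sample = query('select * from Books order by random() limit 100000;')
sample.to_csv('./data/samples/books_sample.csv', index=False)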
import os
import re
table_names = [ re.search('reviews_(.*)_5.csv', filename).group(1)
for filename
in sorted(os.listdir('./data/large-datasets'))
if not filename.endswith('json') ]
def create_table(table_name):
transaction(
'create table %s (asin text, helpful text, overall double precision, reviewText text, reviewTime text, reviewerID text, reviewerName text, summary text, unixReviewTime int);' % table_name,
'create index {0}_asin ON {0} (asin);'.format(table_name),
'create index {0}_overall ON {0} (overall);'.format(table_name),
'create index {0}_reviewerID ON {0} (reviewerID);'.format(table_name),
'create index {0}_unixReviewTime ON {0} (unixReviewTime);'.format(table_name))
for table_name in table_names:
create_table(table_name)
Explanation: Creating tables with indices for the large datasets
End of explanation
start_time = timeit.default_timer()
!ls ./data/large-datasets | grep -Po '(?<=reviews_).*(?=_5.csv)' | xargs -I {} psql -U mariosk -d amazon_reviews -c "\copy {} from './data/large-datasets/reviews_{}_5.csv' with (format csv, delimiter '|', header true);"
print('Elapsed time: %i sec' % int(timeit.default_timer() - start_time))
Explanation: Importing the datasets to psql
WARNING: The following command will take long time to complete. Estimate ~1 minute for each GB of csv data.
End of explanation
def average_reviews_per_product(table_name):
return (query('''
with distinct_products as (select count(distinct asin) as products from {0}),
reviews_count as (select cast(count(*) as double precision) as reviews from {0})
select reviews / products as reviews_per_product
from distinct_products cross join reviews_count
'''.format(table_name))
.rename(index={0: table_name.replace('_', ' ')}))
def average_reviews_per_reviewer(table_name):
return (query('''
with distinct_reviewers as (select count(distinct reviewerID) as reviewers from {0}),
reviews_count as (select cast(count(*) as double precision) as reviews from {0})
select reviews / reviewers as reviews_per_reviewer
from distinct_reviewers cross join reviews_count
'''.format(table_name))
.rename(index={ 0: table_name.replace('_', ' ')}))
def percentages_per_rating(table_name):
return (query('''
with rating_counts as (select overall, count(overall) as rating_count from {0} group by overall),
reviews_count as (select cast(count(*) as double precision) as reviews from {0})
select cast(overall as int) as dataset_name, rating_count / reviews as row
from rating_counts cross join reviews_count
'''.format(table_name))
.set_index('dataset_name')
.sort_index()
.transpose()
.rename(index={'row': table_name.replace('_', ' ')}))
def number_of_reviews(table_name):
return (query('''
select count(*) as number_of_reviews from {0}
'''.format(table_name))
.rename(index={ 0: table_name.replace('_', ' ') }))
def all_metrics(table_name):
print(table_name)
return pd.concat(
[ f(table_name)
for f
in [ percentages_per_rating, number_of_reviews, average_reviews_per_product, average_reviews_per_reviewer ]],
axis=1)
metrics = stopwatch(lambda: pd.concat([ all_metrics(table) for table in table_names ]))
metrics.index.name = 'dataset_name'
metrics.to_csv('./metadata/large-datasets-evaluation-metrics.csv')
metrics
Explanation: Querying the metrics
End of explanation |
6,998 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Overview of the Settings Attribute
OpenPNM objects all include a settings attribute which contains certain information used by OpenPNM. The best example is the algorithm classes, which often require numerous settings such as number of iterations and tolerance for iterative calculations. This tutorial will provide an overview of how these settings work, both from the user perspective as well as for developers.
Step1: Normal Usage
This section is relevant to users of OpenPNM, while the next section is more relevant to developers
Let's look an algorithm that has numerous settings
Step2: We can see that many default settings are already present by printing the settings attribute
Step3: We can override these settings manually
Step4: We could also have updated these settings when creating the algorithm object by passing in a set of arguments. This can be in the form of a dictionary
Step5: Or as a 'dataclass' style, which is how things are done behind the scenes in OpenPNM as described in the section
Step6: One new feature on OpenPNM V3 is that the datatype of some settings is enforced. For instance the 'prefix' setting must be a str, otherwise an error is raised
Step7: OpenPNM uses the traits package to control this behavior, which will be explained in more detail in the next section.
Advanced Usage
The following sections are probably only relevant if you plan to do some development in OpenPN
In the previous section we saw how to define settings, as well as the data-type protections of some settings. In this section we'll demonstrate this mechanism in more detail.
OpenPNM has two settings related classes
Step8: Now we can print s to inspect the settings. We'll see some default values for things that were not initialized like a, while b is the specified value.
Step9: The traits package enforces the datatype of each of these attributes
Step10: Let's look at the attribute protection in action again
Step11: The traits package also enforces the type of values we can put into the list stored in d
Step12: The first one works because we specified a list of strings, while the second fails because it is attempting to write an integer.
Also, we can't accidentally overwrite an attribute that is supposed to be a list with a scalar
Step13: Gotcha With the HasTraits Class
When defining a set of custom settings using the HasTraits or SettingsData class, you MUST specify a type for each attribute value. If not then it is essentially ignored.
Step14: However, if you create a custom class from a basic python object it will work
Step15: The SettingsAttr Class
The problem with the HasTraits class is that there is are lot of helper methods attached to it. This means that when we use the autocomplete functionality of our favorite IDEs (spyder and jupyter), we will have a hard time finding the attributes we set amongst the noise. For this reason we have created a wrapper class called SettingsAttr which works as follows
Step16: Importantly only the the user-created attributes show up, which can be test using the dir() command
Step17: SettingsAttr has as few additional features. You can add a new batch of settings after instantiation as follows
Step18: We can see the updated value of a, as well as the newly added e. Because e contained an integer (6), the datatype of e will be forced to remain an integer
Step19: Note that the _update method begins with an underscore. This prevents it from appearing in the autocomplete menu to ensure it stays clean.
For the sake of completeness, it should also be mentioned that the CustomSettings object which was passed to the SettingsAttr constructor was stored under _settings. The SettingsAttr class has overloaded __getattr__ and __setattr__ methods which dispatch the values to the _settings attribute
Step20: Another aspect to keep in mind is that the _settings attribute is a HasTraits object. This means that all values added to the settings must have an enforced datatype. This is done on the fly, based on the type of value received. For instance, once you set an attribute to string for instance, its type is set
Step22: Adding Documentation to a SettingsData and SettingsAttr Class
One the main reasons for using a dataclass style object for holding settings is so that docstrings for each attribute can be defined and explained
Step23: Note that this docstring was written when we defined DocumentedSettingsData subclass and it attached to it, but we'll be interacting with the SettingsAttr class. When a SettingsAttr is created is adopts the docstring of the received settings object. This can be either a proper SettingsData/HasTraits class or a basic dataclass style object. The docstring can only be set on initialization though, so any new attributes that are created by adding values to the object (i.e. D.zz_top = 'awesome') will not be documented.
Step26: This machinery was designed with the idea of inheriting docstrings using the docrep package. The following illustrates not only how the SettingsData class can be subclassed to add new settings (e.g. from GenericTransport to ReactiveTransport), but also how to use the hightly under-rated docrep package to also inherit the docstrings
Step27: And we can also see that max_iter was added to the values of name and id_num on the parent class
Step28: Again, as mentioned above, this inherited docstring is adopted by the SettingsAttr
Step29: Attaching to an OpenPNM Object
The SettingsAttr wrapper class is so named because it is meant to be an attribute (i.e. attr) on OpenPNM objects. It is attached to the settings attribute
Step30: OpenPNM declares a SettingsData class in each file where a class is defined, then this is attached upon initialization. This is illustrated below
Step31: Or with some additional user-defined settings and overrides | Python Code:
import openpnm as op
pn = op.network.Cubic([4, 4,])
geo = op.geometry.SpheresAndCylinders(network=pn, pores=pn.Ps, throats=pn.Ts)
air = op.phases.Air(network=pn)
phys = op.physics.Basic(network=pn, phase=air, geometry=geo)
Explanation: Overview of the Settings Attribute
OpenPNM objects all include a settings attribute which contains certain information used by OpenPNM. The best example is the algorithm classes, which often require numerous settings such as number of iterations and tolerance for iterative calculations. This tutorial will provide an overview of how these settings work, both from the user perspective as well as for developers.
End of explanation
alg = op.algorithms.ReactiveTransport(network=pn, phase=air)
Explanation: Normal Usage
This section is relevant to users of OpenPNM, while the next section is more relevant to developers
Let's look at an algorithm that has numerous settings:
End of explanation
print(alg.sets)
Explanation: We can see that many default settings are already present by printing the settings attribute:
End of explanation
alg.sets.prefix = 'rxn'
print(alg.sets)
Explanation: We can override these settings manually:
End of explanation
s = {"prefix": "rxn"}
alg = op.algorithms.ReactiveTransport(network=pn, phase=air, settings=s)
print(alg.sets)
Explanation: We could also have updated these settings when creating the algorithm object by passing in a set of arguments. This can be in the form of a dictionary:
End of explanation
class MySettings:
prefix = 'rxn'
# alg = op.algorithms.ReactiveTransport(network=pn, phase=air, settings=MySettings())
# print(alg.sets)
Explanation: Or as a 'dataclass' style, which is how things are done behind the scenes in OpenPNM as described in the section:
End of explanation
from traits.api import TraitError
try:
alg.sets.phase = 1
except TraitError as e:
print(e)
Explanation: One new feature on OpenPNM V3 is that the datatype of some settings is enforced. For instance the 'prefix' setting must be a str, otherwise an error is raised:
End of explanation
from openpnm.utils import SettingsData, SettingsAttr
from traits.api import Int, Str, Float, List, Set
class CustomSettings(SettingsData):
a = Int()
b = Float(4.4)
c = Set()
d = List(Str)
s = CustomSettings()
Explanation: OpenPNM uses the traits package to control this behavior, which will be explained in more detail in the next section.
Advanced Usage
The following sections are probably only relevant if you plan to do some development in OpenPNM.
In the previous section we saw how to define settings, as well as the data-type protections of some settings. In this section we'll demonstrate this mechanism in more detail.
OpenPNM has two settings related classes: SettingsData and SettingsAttr. The first is a subclass of the HasTraits class from the traits package. It preceded the Python dataclass by many years and offers far more functionality. For our purposes the main difference is that dataclasses allow developers to specify the type of attributes (i.e. obj.a must be an int), but these are only enforced during object creation. Once the object is made, any value can be assigned to a. The traits package offers the same functionality but also enforces the type of a for all subsequent assignments. We saw this in action in the previous section when we tried to assign an integer to alg.sets.prefix.
The SettingsData and HasTraits Classes
Let's dissect this process:
End of explanation
print(s)
Explanation: Now we can print s to inspect the settings. We'll see some default values for things that were not initialized like a, while b is the specified value.
End of explanation
s.a = 2
s.b = 5.5
print(s)
Explanation: The traits package enforces the datatype of each of these attributes:
End of explanation
try:
s.a = 1.1
except TraitError as e:
print(e)
Explanation: Let's look at the attribute protection in action again:
End of explanation
s.d.append('item')
try:
s.d.append(100)
except TraitError as e:
print(e)
Explanation: The traits package also enforces the type of values we can put into the list stored in d:
End of explanation
try:
s.d = 5
except TraitError as e:
print(e)
Explanation: The first one works because we specified a list of strings, while the second fails because it is attempting to write an integer.
Also, we can't accidentally overwrite an attribute that is supposed to be a list with a scalar:
End of explanation
class MySettings(SettingsData):
a = Int(1)
b = 2
mysets = MySettings()
print(mysets)
Explanation: Gotcha With the HasTraits Class
When defining a set of custom settings using the HasTraits or SettingsData class, you MUST specify a type for each attribute value. If not then it is essentially ignored.
End of explanation
class MySettings:
a = 1
b = 2
mysets = MySettings()
print(mysets.a, mysets.b)
Explanation: However, if you create a custom class from a basic python object it will work:
End of explanation
S = SettingsAttr(s)
print(S)
Explanation: The SettingsAttr Class
The problem with the HasTraits class is that there are a lot of helper methods attached to it. This means that when we use the autocomplete functionality of our favorite IDEs (spyder and jupyter), we will have a hard time finding the attributes we set amongst the noise. For this reason we have created a wrapper class called SettingsAttr which works as follows:
End of explanation
dir(S)
Explanation: Importantly, only the user-created attributes show up, which can be tested using the dir() command:
End of explanation
s_new = {'a': 5, 'e': 6}
S._update(s_new)
print(S)
Explanation: SettingsAttr has a few additional features. You can add a new batch of settings after instantiation as follows:
End of explanation
try:
S.e = 5.5
except TraitError as e:
print(e)
Explanation: We can see the updated value of a, as well as the newly added e. Because e contained an integer (6), the datatype of e will be forced to remain an integer:
End of explanation
S.d is S._settings.d
Explanation: Note that the _update method begins with an underscore. This prevents it from appearing in the autocomplete menu to ensure it stays clean.
For the sake of completeness, it should also be mentioned that the CustomSettings object which was passed to the SettingsAttr constructor was stored under _settings. The SettingsAttr class has overloaded __getattr__ and __setattr__ methods which dispatch the values to the _settings attribute:
End of explanation
S.f = 'a string'
try:
S.f = 1.0
except TraitError as e:
print(e)
print(S)
Explanation: Another aspect to keep in mind is that the _settings attribute is a HasTraits object. This means that all values added to the settings must have an enforced datatype. This is done on the fly, based on the type of value received. For instance, once you set an attribute to a string, its type is set:
End of explanation
class DocumentedSettingsData(SettingsData):
r
A class that holds the following settings.
Parameters
----------
name : str
The name of the object
id_num : int
The id number of the object
name = Str('foo')
id_num = Int(0)
d = DocumentedSettingsData()
print(d.__doc__)
Explanation: Adding Documentation to a SettingsData and SettingsAttr Class
One of the main reasons for using a dataclass style object for holding settings is so that docstrings for each attribute can be defined and explained:
End of explanation
D = SettingsAttr(d)
print(D.__doc__)
Explanation: Note that this docstring was written when we defined the DocumentedSettingsData subclass and is attached to it, but we'll be interacting with the SettingsAttr class. When a SettingsAttr is created it adopts the docstring of the received settings object. This can be either a proper SettingsData/HasTraits class or a basic dataclass style object. The docstring can only be set on initialization though, so any new attributes that are created by adding values to the object (i.e. D.zz_top = 'awesome') will not be documented.
End of explanation
import docrep
docstr = docrep.DocstringProcessor()
# This docorator tells docrep to fetch the docstring from this class and make it available elsewhere:
@docstr.get_sections(base='DocumentSettingsData', sections=['Parameters'])
class DocumentedSettingsData(SettingsData):
r
A class that holds the following settings.
Parameters
----------
name : str
The name of the object
id_num : int
The id number of the object
name = Str('foo')
id_num = Int(0)
# This tells docrep to parse this docstring and insert text at the %
@docstr.dedent
class ChildSettingsData(DocumentedSettingsData):
r
A subclass of DocumentedSettingsData that holds some addtional settings
Parameters
----------
%(DocumentSettingsData.parameters)s
max_iter : int
The maximum number of iterations to do
max_iter = Int(10)
E = ChildSettingsData()
print(E.__doc__)
Explanation: This machinery was designed with the idea of inheriting docstrings using the docrep package. The following illustrates not only how the SettingsData class can be subclassed to add new settings (e.g. from GenericTransport to ReactiveTransport), but also how to use the highly under-rated docrep package to also inherit the docstrings:
End of explanation
E.visible_traits()
Explanation: And we can also see that max_iter was added to the values of name and id_num on the parent class:
End of explanation
S = SettingsAttr(E)
print(S.__doc__)
Explanation: Again, as mentioned above, this inherited docstring is adopted by the SettingsAttr:
End of explanation
isinstance(alg.sets, SettingsAttr)
Explanation: Attaching to an OpenPNM Object
The SettingsAttr wrapper class is so named because it is meant to be an attribute (i.e. attr) on OpenPNM objects. It is attached to the settings attribute:
End of explanation
class SpecificSettings(SettingsData):
a = Int(4)
class SomeAlg:
def __init__(self, settings={}, **kwargs):
self.settings = SettingsAttr(SpecificSettings())
self.settings._update(settings)
alg = SomeAlg()
print(alg.settings)
Explanation: OpenPNM declares a SettingsData class in each file where a class is defined, then this is attached upon initialization. This is illustrated below:
End of explanation
s = {'name': 'bob', 'a': 3}
alg2 = SomeAlg(settings=s)
print(alg2.settings)
Explanation: Or with some additional user-defined settings and overrides:
End of explanation |
6,999 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TRYING OUT DIFFERENT ITERATIONS TO FIND THE BEST ONE
Step1: FOUND THAT ACCURACY IS BETTER WITH ~26K ITERATIONS
Step2: PART B
Step3: WHEN WE ADD A HIDDEN LAYER WITH SAME NUMBER OF ITERATIONS,ACCURACY INCREASES TO 99% | Python Code:
#find out for different iterations to find out the optimal iterations
iter1=10000
iter2=15000
iter3=26000
learningRate = tf.train.exponential_decay(learning_rate=0.0008,
global_step= 1,
decay_steps=trainX.shape[0],
decay_rate= 0.95,
staircase=True)
#Define the placeholder variables
numfeatures=trainX.shape[1]
numlabels=trainY.shape[1]
X=tf.placeholder(tf.float32,shape=[None,numfeatures])
Y=tf.placeholder(tf.float32,shape=[None,numlabels])
#Define the weights and biases
#Define weights and biases as variables since it changes over the iterations
w=tf.Variable(tf.random_normal([numfeatures,numlabels],mean=0,
stddev=(np.sqrt(6/numfeatures+
numlabels+1))))
b=tf.Variable(tf.random_normal([1,numlabels],mean=0,
stddev=(np.sqrt(6/numfeatures+
numlabels+1))))
#Find out the predicted Y value
init=tf.initialize_all_variables()
Y_predicted=tf.nn.sigmoid(tf.add(tf.matmul(X,w),b))
#Define the loss function and optimizer
#We use a mean squared loss function
#There is a function in tensorflow tf.nn.l2_loss which finds the mean squared loss without square root
loss=tf.nn.l2_loss(Y_predicted-Y)
optimizer=tf.train.GradientDescentOptimizer(learningRate).minimize(loss)
#Define the session to compute the graph
errors=[]
with tf.Session() as sess:
sess.run(init)
prediction=tf.equal(tf.argmax(Y,1),tf.argmax(Y_predicted,1))
accuracy=tf.reduce_mean(tf.cast(prediction,"float"))
for i in range (iter1):
sess.run(optimizer,feed_dict={X:trainX,Y:trainY})
accuracy_value=accuracy.eval(feed_dict={X:trainX,Y:trainY})
errors.append(1-accuracy_value)
print("The error has been reduced to",errors[-1])
print(sess.run(accuracy,feed_dict={X:trainX,Y:trainY}))
plt.plot([np.mean(errors[i-50:i]) for i in range(len(errors))])
plt.show()
errors=[]
with tf.Session() as sess:
sess.run(init)
prediction=tf.equal(tf.argmax(Y,1),tf.argmax(Y_predicted,1))
accuracy=tf.reduce_mean(tf.cast(prediction,"float"))
for i in range (iter2):
sess.run(optimizer,feed_dict={X:trainX,Y:trainY})
accuracy_value=sess.run(accuracy,feed_dict={X:trainX,Y:trainY})
errors.append(1-accuracy_value)
print("The error has been reduced to",errors[-1])
print(sess.run(accuracy,feed_dict={X:trainX,Y:trainY}))
plt.plot([np.mean(errors[i-50:i]) for i in range(len(errors))])
plt.show()
Explanation: TRYING OUT DIFFERENT ITERATIONS TO FIND THE BEST ONE
End of explanation
errors=[]
with tf.Session() as sess:
sess.run(init)
prediction=tf.equal(tf.argmax(Y,1),tf.argmax(Y_predicted,1))
accuracy=tf.reduce_mean(tf.cast(prediction,"float"))
for i in range (iter3):
sess.run(optimizer,feed_dict={X:trainX,Y:trainY})
accuracy_value=sess.run(accuracy,feed_dict={X:trainX,Y:trainY})
errors.append(1-accuracy_value)
print("The error has been reduced to",errors[-1])
print(sess.run(accuracy,feed_dict={X:trainX,Y:trainY}))
plt.plot([np.mean(errors[i-50:i]) for i in range(len(errors))])
plt.show()
plt.plot([np.mean(errors[i-50:i]) for i in range(len(errors))])
plt.show()
Explanation: FOUND THAT ACCURACY IS BETTER WITH ~26K ITERATIONS
End of explanation
iter4=26000
learningRate = tf.train.exponential_decay(learning_rate=0.0008,
global_step= 1,
decay_steps=trainX.shape[0],
decay_rate= 0.95,
staircase=True)
#Define the weights and biases: a hidden layer with 4 nodes and an output layer
#(these definitions were missing in the original cell; the initialization scheme is a simple choice)
numhidden=4
w1=tf.Variable(tf.random_normal([numfeatures,numhidden],mean=0,stddev=0.1))
b1=tf.Variable(tf.random_normal([1,numhidden],mean=0,stddev=0.1))
w2=tf.Variable(tf.random_normal([numhidden,numlabels],mean=0,stddev=0.1))
b2=tf.Variable(tf.random_normal([1,numlabels],mean=0,stddev=0.1))
#Initialize only after all variables are defined so the new layer weights are included
init=tf.initialize_all_variables()
h1=tf.nn.sigmoid(tf.add(tf.matmul(X,w1),b1))
Y_predicted = tf.nn.sigmoid(tf.add(tf.matmul(h1, w2), b2))
#Define the loss function and optimizer
#We use a mean squared loss function
#There is a function in tensorflow tf.nn.l2_loss which finds the mean squared loss without square root
loss=tf.nn.l2_loss(Y_predicted-Y)
optimizer=tf.train.GradientDescentOptimizer(learningRate).minimize(loss)
errors=[]
with tf.Session() as sess:
sess.run(init)
prediction=tf.equal(tf.argmax(Y,1),tf.argmax(Y_predicted,1))
accuracy=tf.reduce_mean(tf.cast(prediction,"float"))
for i in range (iter4):
sess.run(optimizer,feed_dict={X:trainX,Y:trainY})
accuracy_value=accuracy.eval(feed_dict={X:trainX,Y:trainY})
errors.append(1-accuracy_value)
print("The error has been reduced to",errors[-1])
print(sess.run(accuracy,feed_dict={X:trainX,Y:trainY}))
Explanation: PART B:
WE ADD A HIDDEN LAYER WITH 4 NODES.
End of explanation
plt.plot([np.mean(errors[i-50:i]) for i in range(len(errors))])
plt.show()
Explanation: WHEN WE ADD A HIDDEN LAYER WITH SAME NUMBER OF ITERATIONS, ACCURACY INCREASES TO 99%
End of explanation |