Unnamed: 0 (int64, 0 to 16k) | text_prompt (string, 110 to 62.1k chars) | code_prompt (string, 37 to 152k chars)
---|---|---
3,000 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Distributed Training with Keras
Learning Objectives
How to define a distribution strategy and set up the input pipeline.
How to create the Keras model.
How to define the callbacks.
How to train and evaluate the model.
Introduction
The tf.distribute.Strategy API provides an abstraction for distributing your training
across multiple processing units. The goal is to allow users to enable distributed training using existing models and training code, with minimal changes.
This notebook uses the tf.distribute.MirroredStrategy, which
does in-graph replication with synchronous training on many GPUs on one machine.
Essentially, it copies all of the model's variables to each processor.
Then, it uses all-reduce to combine the gradients from all processors and applies the combined value to all copies of the model.
MirroredStrategy is one of several distribution strategies available in TensorFlow core. You can read about more strategies in the distribution strategy guide.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Keras API
This example uses the tf.keras API to build the model and training loop. For custom training loops, see the tf.distribute.Strategy with training loops tutorial.
Import dependencies
Step1: Download the dataset
Download the MNIST dataset and load it from TensorFlow Datasets. This returns a dataset in tf.data format.
Setting with_info to True includes the metadata for the entire dataset, which is being saved here to info.
Among other things, this metadata object includes the number of train and test examples.
Step2: Define distribution strategy
Create a MirroredStrategy object. This will handle distribution, and provides a context manager (tf.distribute.MirroredStrategy.scope) to build your model inside.
Step3: Setup input pipeline
When training a model with multiple GPUs, you can use the extra computing power effectively by increasing the batch size. In general, use the largest batch size that fits the GPU memory, and tune the learning rate accordingly.
Step4: Pixel values, which are 0-255, have to be normalized to the 0-1 range. Define this scale in a function.
Step5: Apply this function to the training and test data, shuffle the training data, and batch it for training. Notice we are also keeping an in-memory cache of the training data to improve performance.
Step6: Create the model
Create and compile the Keras model in the context of strategy.scope.
Step7: Define the callbacks
The callbacks used here are
Step8: Train and evaluate
Now, train the model in the usual way, calling fit on the model and passing in the dataset created at the beginning of the tutorial. This step is the same whether you are distributing the training or not.
Step9: As you can see below, the checkpoints are getting saved.
Step10: To see how the model performs, load the latest checkpoint and call evaluate on the test data.
Call evaluate as before using appropriate datasets.
Step11: To see the output, you can download and view the TensorBoard logs at the terminal.
$ tensorboard --logdir=path/to/log-directory
Step12: Export to SavedModel
Export the graph and the variables to the platform-agnostic SavedModel format. After your model is saved, you can load it with or without the scope.
Step13: Load the model without strategy.scope.
Step14: Load the model with strategy.scope. | Python Code:
# Import TensorFlow and TensorFlow Datasets
import tensorflow_datasets as tfds
import tensorflow as tf
import os
# Here we'll show the currently installed version of TensorFlow
print(tf.__version__)
Explanation: Distributed Training with Keras
Learning Objectives
How to define a distribution strategy and set up the input pipeline.
How to create the Keras model.
How to define the callbacks.
How to train and evaluate the model.
Introduction
The tf.distribute.Strategy API provides an abstraction for distributing your training
across multiple processing units. The goal is to allow users to enable distributed training using existing models and training code, with minimal changes.
This notebook uses the tf.distribute.MirroredStrategy, which
does in-graph replication with synchronous training on many GPUs on one machine.
Essentially, it copies all of the model's variables to each processor.
Then, it uses all-reduce to combine the gradients from all processors and applies the combined value to all copies of the model.
MirroredStrategy is one of several distribution strategies available in TensorFlow core. You can read about more strategies in the distribution strategy guide.
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Keras API
This example uses the tf.keras API to build the model and training loop. For custom training loops, see the tf.distribute.Strategy with training loops tutorial.
Import dependencies
End of explanation
# Loads the named dataset into a tf.data.Dataset
# TODO: Your code goes here
mnist_train, mnist_test = datasets['train'], datasets['test']
Explanation: Download the dataset
Download the MNIST dataset and load it from TensorFlow Datasets. This returns a dataset in tf.data format.
Setting with_info to True includes the metadata for the entire dataset, which is being saved here to info.
Among other things, this metadata object includes the number of train and test examples.
End of explanation
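A minimal sketch of how the TODO above might be completed (one option, using tfds.load with with_info and as_supervised; the solution notebook may differ):
# Possible completion (sketch): load MNIST together with its metadata, as (image, label) pairs
datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)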
# Synchronous training across multiple replicas on one machine.
# TODO: Your code goes here
print('Number of devices: {}'.format(strategy.num_replicas_in_sync))
Explanation: Define distribution strategy
Create a MirroredStrategy object. This will handle distribution, and provides a context manager (tf.distribute.MirroredStrategy.scope) to build your model inside.
End of explanation
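A minimal sketch of how the TODO above might be completed (not necessarily the solution notebook's exact code):
# Possible completion (sketch): the strategy object used throughout the rest of the notebook
strategy = tf.distribute.MirroredStrategy()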
# You can also do info.splits.total_num_examples to get the total
# number of examples in the dataset.
num_train_examples = info.splits['train'].num_examples
num_test_examples = info.splits['test'].num_examples
BUFFER_SIZE = 10000
BATCH_SIZE_PER_REPLICA = 64
BATCH_SIZE = BATCH_SIZE_PER_REPLICA * strategy.num_replicas_in_sync
Explanation: Setup input pipeline
When training a model with multiple GPUs, you can use the extra computing power effectively by increasing the batch size. In general, use the largest batch size that fits the GPU memory, and tune the learning rate accordingly.
End of explanation
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
Explanation: Pixel values, which are 0-255, have to be normalized to the 0-1 range. Define this scale in a function.
End of explanation
train_dataset = mnist_train.map(scale).cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
eval_dataset = mnist_test.map(scale).batch(BATCH_SIZE)
Explanation: Apply this function to the training and test data, shuffle the training data, and batch it for training. Notice we are also keeping an in-memory cache of the training data to improve performance.
End of explanation
with strategy.scope():
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(10)
])
# Configures the model for training.
# TODO: Your code goes here
Explanation: Create the model
Create and compile the Keras model in the context of strategy.scope.
End of explanation
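A minimal sketch of how the compile TODO above might be completed, mirroring the compile calls used later in this notebook when the saved model is reloaded; it belongs inside the strategy.scope() block:
# Possible completion (sketch): same loss/optimizer/metrics as the reload cells below
model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              optimizer=tf.keras.optimizers.Adam(),
              metrics=['accuracy'])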
# Define the checkpoint directory to store the checkpoints
# TODO: Your code goes here
# Name of the checkpoint files
# TODO: Your code goes here
# Function for decaying the learning rate.
# You can define any decay function you need.
def decay(epoch):
if epoch < 3:
return 1e-3
elif epoch >= 3 and epoch < 7:
return 1e-4
else:
return 1e-5
# Callback for printing the LR at the end of each epoch.
class PrintLR(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs=None):
print('\nLearning rate for epoch {} is {}'.format(epoch + 1,
model.optimizer.lr.numpy()))
callbacks = [
tf.keras.callbacks.TensorBoard(log_dir='./logs'),
tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_prefix,
save_weights_only=True),
tf.keras.callbacks.LearningRateScheduler(decay),
PrintLR()
]
Explanation: Define the callbacks
The callbacks used here are:
TensorBoard: This callback writes a log for TensorBoard which allows you to visualize the graphs.
Model Checkpoint: This callback saves the model after every epoch.
Learning Rate Scheduler: Using this callback, you can schedule the learning rate to change after every epoch/batch.
For illustrative purposes, add a print callback to display the learning rate in the notebook.
End of explanation
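One common way to fill the two checkpoint TODOs above; the directory name is a placeholder, while the variable names match the checkpoint_prefix and checkpoint_dir used elsewhere in this notebook:
# Possible completion (sketch); './training_checkpoints' is an arbitrary placeholder path
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt_{epoch}")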
# Train the model with the new callback
# TODO: Your code goes here
Explanation: Train and evaluate
Now, train the model in the usual way, calling fit on the model and passing in the dataset created at the beginning of the tutorial. This step is the same whether you are distributing the training or not.
End of explanation
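A minimal sketch of how the training TODO above might be completed; the epoch count is an assumption, not a value taken from this notebook:
# Possible completion (sketch); epochs=12 is an assumed value
model.fit(train_dataset, epochs=12, callbacks=callbacks)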
# check the checkpoint directory
!ls {checkpoint_dir}
Explanation: As you can see below, the checkpoints are getting saved.
End of explanation
# Loads the weights
model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))
eval_loss, eval_acc = model.evaluate(eval_dataset)
print('Eval loss: {}, Eval Accuracy: {}'.format(eval_loss, eval_acc))
Explanation: To see how the model performs, load the latest checkpoint and call evaluate on the test data.
Call evaluate as before using appropriate datasets.
End of explanation
!ls -sh ./logs
Explanation: To see the output, you can download and view the TensorBoard logs at the terminal.
$ tensorboard --logdir=path/to/log-directory
End of explanation
path = 'saved_model/'
# Save the entire model as a SavedModel.
# TODO: Your code goes here
Explanation: Export to SavedModel
Export the graph and the variables to the platform-agnostic SavedModel format. After your model is saved, you can load it with or without the scope.
End of explanation
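A minimal sketch of how the save TODO above might be completed, writing a SavedModel to the path defined above:
# Possible completion (sketch)
model.save(path, save_format='tf')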
unreplicated_model = tf.keras.models.load_model(path)
unreplicated_model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(),
metrics=['accuracy'])
eval_loss, eval_acc = unreplicated_model.evaluate(eval_dataset)
print('Eval loss: {}, Eval Accuracy: {}'.format(eval_loss, eval_acc))
Explanation: Load the model without strategy.scope.
End of explanation
# Recreate the exact same model, including its weights and the optimizer
with strategy.scope():
replicated_model = tf.keras.models.load_model(path)
replicated_model.compile(loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(),
metrics=['accuracy'])
eval_loss, eval_acc = replicated_model.evaluate(eval_dataset)
print ('Eval loss: {}, Eval Accuracy: {}'.format(eval_loss, eval_acc))
Explanation: Load the model with strategy.scope.
End of explanation |
3,001 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Chemistry Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 1.8. Coupling With Chemical Reactivity
Is Required
Step12: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step13: 2.2. Code Version
Is Required
Step14: 2.3. Code Languages
Is Required
Step15: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required
Step16: 3.2. Split Operator Advection Timestep
Is Required
Step17: 3.3. Split Operator Physical Timestep
Is Required
Step18: 3.4. Split Operator Chemistry Timestep
Is Required
Step19: 3.5. Split Operator Alternate Order
Is Required
Step20: 3.6. Integrated Timestep
Is Required
Step21: 3.7. Integrated Scheme Type
Is Required
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required
Step23: 4.2. Convection
Is Required
Step24: 4.3. Precipitation
Is Required
Step25: 4.4. Emissions
Is Required
Step26: 4.5. Deposition
Is Required
Step27: 4.6. Gas Phase Chemistry
Is Required
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required
Step30: 4.9. Photo Chemistry
Is Required
Step31: 4.10. Aerosols
Is Required
Step32: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required
Step33: 5.2. Global Mean Metrics Used
Is Required
Step34: 5.3. Regional Metrics Used
Is Required
Step35: 5.4. Trend Metrics Used
Is Required
Step36: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required
Step37: 6.2. Matches Atmosphere Grid
Is Required
Step38: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required
Step39: 7.2. Canonical Horizontal Resolution
Is Required
Step40: 7.3. Number Of Horizontal Gridpoints
Is Required
Step41: 7.4. Number Of Vertical Levels
Is Required
Step42: 7.5. Is Adaptive Grid
Is Required
Step43: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required
Step44: 8.2. Use Atmospheric Transport
Is Required
Step45: 8.3. Transport Details
Is Required
Step46: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required
Step47: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required
Step48: 10.2. Method
Is Required
Step49: 10.3. Prescribed Climatology Emitted Species
Is Required
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step51: 10.5. Interactive Emitted Species
Is Required
Step52: 10.6. Other Emitted Species
Is Required
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required
Step54: 11.2. Method
Is Required
Step55: 11.3. Prescribed Climatology Emitted Species
Is Required
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step57: 11.5. Interactive Emitted Species
Is Required
Step58: 11.6. Other Emitted Species
Is Required
Step59: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required
Step60: 12.2. Prescribed Upper Boundary
Is Required
Step61: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required
Step62: 13.2. Species
Is Required
Step63: 13.3. Number Of Bimolecular Reactions
Is Required
Step64: 13.4. Number Of Termolecular Reactions
Is Required
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required
Step67: 13.7. Number Of Advected Species
Is Required
Step68: 13.8. Number Of Steady State Species
Is Required
Step69: 13.9. Interactive Dry Deposition
Is Required
Step70: 13.10. Wet Deposition
Is Required
Step71: 13.11. Wet Oxidation
Is Required
Step72: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required
Step73: 14.2. Gas Phase Species
Is Required
Step74: 14.3. Aerosol Species
Is Required
Step75: 14.4. Number Of Steady State Species
Is Required
Step76: 14.5. Sedimentation
Is Required
Step77: 14.6. Coagulation
Is Required
Step78: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required
Step79: 15.2. Gas Phase Species
Is Required
Step80: 15.3. Aerosol Species
Is Required
Step81: 15.4. Number Of Steady State Species
Is Required
Step82: 15.5. Interactive Dry Deposition
Is Required
Step83: 15.6. Coagulation
Is Required
Step84: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required
Step85: 16.2. Number Of Reactions
Is Required
Step86: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required
Step87: 17.2. Environmental Conditions
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nims-kma', 'sandbox-1', 'atmoschem')
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: NIMS-KMA
Source ID: SANDBOX-1
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:28
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
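For illustration only, a call with hypothetical placeholder values (not real author details):
# Hypothetical example -- placeholder name and email
# DOC.set_author("Jane Doe", "jane.doe@example.org")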
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is atmospheric chemistry transport scheme turbulence coupled with chemical reactivity?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
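For illustration, an ENUM property such as this one is filled by passing one of the listed valid choices verbatim, e.g. (a sketch):
# Hypothetical example -- the string must match a listed choice exactly
# DOC.set_value("Operator splitting")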
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and the possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmospheric chemistry grid match the atmosphere grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogenous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogenous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation |
3,002 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Working with time series data
Some imports
Step1: Case study
Step2: I downloaded and preprocessed some of the data (python-airbase)
Step3: As you can see, the missing values are indicated by -9999. This can be recognized by read_csv by passing the na_values keyword
Step4: Exploring the data
Some useful methods
Step5: info()
Step6: Getting some basic summary statistics about the data with describe
Step7: Quickly visualizing the data
Step8: This does not say too much ..
We can select part of the data (eg the latest 500 data points)
Step9: Or we can use some more advanced time series features -> next section!
Working with time series data
When we ensure the DataFrame has a DatetimeIndex, time-series related functionality becomes available
Step10: Indexing a time series works with strings
Step11: A nice feature is "partial string" indexing, where we can do implicit slicing by providing a partial datetime string.
E.g. all data of 2012
Step12: Normally you would expect this to access a column named '2012', but for a DataFrame with a DatetimeIndex, pandas also tries to interpret it as a datetime slice.
Or all data of January up to March 2012
Step13: Time and date components can be accessed from the index
Step14: <div class="alert alert-success">
<b>EXERCISE</b>
Step15: <div class="alert alert-success">
<b>EXERCISE</b>
Step16: <div class="alert alert-success">
<b>EXERCISE</b>
Step17: <div class="alert alert-success">
<b>EXERCISE</b>
Step18: The power of pandas
Step19: By default, resample takes the mean as aggregation function, but other methods can also be specified
Step20: The string to specify the new time frequency
Step21: <div class="alert alert-success">
<b>QUESTION</b> | Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
try:
import seaborn
except:
pass
pd.options.display.max_rows = 8
Explanation: Working with time series data
Some imports:
End of explanation
from IPython.display import HTML
HTML('<iframe src=http://www.eea.europa.eu/data-and-maps/data/airbase-the-european-air-quality-database-8#tab-data-by-country width=900 height=350></iframe>')
Explanation: Case study: air quality data of European monitoring stations (AirBase)
AirBase (The European Air quality dataBase): hourly measurements of all air quality monitoring stations from Europe.
End of explanation
!head -5 data/airbase_data.csv
Explanation: I downloaded and preprocessed some of the data (python-airbase): data/airbase_data.csv. This file includes the hourly concentrations of NO2 for 4 different measurement stations:
FR04037 (PARIS 13eme): urban background site at Square de Choisy
FR04012 (Paris, Place Victor Basch): urban traffic site at Rue d'Alesia
BETR802: urban traffic site in Antwerp, Belgium
BETN029: rural background site in Houtem, Belgium
See http://www.eea.europa.eu/themes/air/interactive/no2
Importing the data
Import the csv file:
End of explanation
data = pd.read_csv('data/airbase_data.csv', index_col=0, parse_dates=True, na_values=[-9999])
Explanation: As you can see, the missing values are indicated by -9999. This can be recognized by read_csv by passing the na_values keyword:
End of explanation
data.head(3)
data.tail()
Explanation: Exploring the data
Some useful methods:
head and tail
End of explanation
data.info()
Explanation: info()
End of explanation
data.describe()
Explanation: Getting some basic summary statistics about the data with describe:
End of explanation
data.plot(kind='box', ylim=[0,250])
data['BETR801'].plot(kind='hist', bins=50)
data.plot(figsize=(12,6))
Explanation: Quickly visualizing the data
End of explanation
data[-500:].plot(figsize=(12,6))
Explanation: This does not say too much ..
We can select part of the data (eg the latest 500 data points):
End of explanation
data.index
Explanation: Or we can use some more advanced time series features -> next section!
Working with time series data
When we ensure the DataFrame has a DatetimeIndex, time-series related functionality becomes available:
End of explanation
data["2010-01-01 09:00": "2010-01-01 12:00"]
Explanation: Indexing a time series works with strings:
End of explanation
data['2012']
Explanation: A nice feature is "partial string" indexing, where we can do implicit slicing by providing a partial datetime string.
E.g. all data of 2012:
End of explanation
data['2012-01':'2012-03']
Explanation: Normally you would expect this to access a column named '2012', but for a DataFrame with a DatetimeIndex, pandas also tries to interpret it as a datetime slice.
Or all data of January up to March 2012:
End of explanation
data.index.hour
data.index.year
Explanation: Time and date components can be accessed from the index:
End of explanation
data = data['1999':]
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: select all data starting from 1999
</div>
End of explanation
data[data.index.month == 1]
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: select all data in January for all different years
</div>
End of explanation
data['months'] = data.index.month
data[data['months'].isin([1, 2, 3])]
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: select all data in January, February and March for all different years
</div>
End of explanation
data[(data.index.hour >= 8) & (data.index.hour < 20)]
data.between_time('08:00', '20:00')
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: select all 'daytime' data (between 8h and 20h) for all days
</div>
End of explanation
data.resample('D').head()
Explanation: The power of pandas: resample
A very powerful method is resample: converting the frequency of the time series (e.g. from hourly to daily data).
The time series has a frequency of 1 hour. I want to change this to daily:
End of explanation
data.resample('D', how='max').head()
Explanation: By default, resample takes the mean as aggregation function, but other methods can also be specified:
End of explanation
data.resample('M').plot() # 'A'
# data['2012'].resample('D').plot()
Explanation: The string to specify the new time frequency: http://pandas.pydata.org/pandas-docs/dev/timeseries.html#offset-aliases
These strings can also be combined with numbers, eg '10D'.
Further exploring the data:
End of explanation
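For instance, a minimal sketch of combining a number with an alias, reusing the same data DataFrame and the older resample(..., how=...) API that this notebook uses:
# Resample to 10-day bins, taking the mean of each bin
data.resample('10D', how='mean').head()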
data.groupby(data.index.year).mean().plot()
Explanation: <div class="alert alert-success">
<b>QUESTION</b>: plot the monthly mean and median concentration of the 'FR04037' station for the years 2009-2012
</div>
<div class="alert alert-success">
<b>QUESTION</b>: plot the monthly minimum and maximum daily concentration of the 'BETR801' station
</div>
<div class="alert alert-success">
<b>QUESTION</b>: make a bar plot of the mean of the stations in the year 2012
</div>
<div class="alert alert-success">
<b>QUESTION</b>: The evolution of the yearly averages, together with the overall mean of all stations?
</div>
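One possible sketch for the first question above (not the only valid answer; it reuses the data DataFrame and the older resample API from this notebook):
# Monthly mean and median NO2 concentration at FR04037 for 2009-2012
subset = data['FR04037']['2009':'2012']
subset.resample('M', how='mean').plot(label='monthly mean')
subset.resample('M', how='median').plot(label='monthly median')
plt.legend()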
Combination with groupby
resample can actually be seen as a specific kind of groupby. E.g. taking annual means with data.resample('A', 'mean') is equivalent to data.groupby(data.index.year).mean() (only the result of resample still has a DatetimeIndex).
End of explanation |
3,003 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lending Club Loan Data
Step1: 2. Loan Book Distribution across the U.S. States (D3 Choropleths by leveraging the "Bokeh" library)
Here, we provide two choropleth maps showing the distribution of Loan Book Value and Loan Book Volume across the U.S. States. To do so, we use the "Bokeh" Python library, a GeoJSON file defining the U.S. State boundaries (produced from a cartographic boundary shapefile provided by the official site of the U.S. Census Bureau), and the Pandas DataFrame grouped_agg_df, in which we aggregate the number and value of loans per U.S. State. "Bokeh" is a Python library for interactive D3 visualizations!
Step2: 2.1 Loan book Value by U.S. States
Step3: 2.2 Loan book Volume by U.S. States | Python Code:
# Required Libraries
import os
import pandas as pd
import numpy as np
# Path Definitions of Required Data Sets
loan_df_path = os.path.join('/media/ML_HOME/ML-Data_Repository/data', 'loan_df')
us_states_GeoJSON = os.path.join('/media/ML_HOME/ML-Data_Repository/maps', 'us_states-albersUSA-Geo.json')
Explanation: Lending Club Loan Data: Loan Book Distribution ("Bokeh" Viz)
Description: Analyze Lending Club's issued loans
These files contain complete loan data for all loans issued from 2007 through 2015, including the current loan status ('Current', 'Late', 'Fully Paid', etc.) and the latest payment information. Additional features include credit scores, number of finance inquiries, address information (zip code and state), and collections, among others. The file is a matrix of about 890 thousand observations and 75 variables. Here, we use a previously transformed data set, which is nevertheless a full copy of the original one. For more information, or if you want to download these data, consult:
Source
Lending Club - About
Lending Club Statistics - Download Data
kaggle Datasets
## 1. Loading Libraries and Data Sets
End of explanation
# Load the Data Set of interest
loan_df = pd.read_pickle(loan_df_path)
# A fast look in the available data set..
loan_df.info(null_counts=True)
# Compute the "Loan Book Amount & Volume" per "US State"
grouped = loan_df.groupby(by=['addr_state'])
grouped_agg = (grouped[['loan_amnt']].agg(np.sum)
.rename(columns={'loan_amnt': 'loanbook_amnt_per_state'}))
grouped_agg['loanbook_vol_per_state'] = grouped['loan_amnt'].agg(np.count_nonzero)
grouped_agg_df = grouped_agg.reset_index()
grouped_agg_df.head()
# Prepare the "grouped_agg_df" Data Frame as a JSON file...
# This JSON file has been appropriately joined into the GeoJSON Data Source, "us_states_GeoJSON", that we use here.
grouped_agg_df[:5].to_json(orient='records')
Explanation: 2. Loan Book Distribution across the U.S. States (D3 Choropleths by leveraging the "Bokeh" library)
Here, we provide two choropleth maps showing the distribution of Loan Book Value and Loan Book Volume across the U.S. States. To do so, we use the "Bokeh" Python library, a GeoJSON file defining the U.S. State boundaries (produced from a cartographic boundary shapefile provided by the official site of the U.S. Census Bureau), and the Pandas DataFrame grouped_agg_df, in which we aggregate the number and value of loans per U.S. State. "Bokeh" is a Python library for interactive D3 visualizations!
End of explanation
# Load the necessary libraries for the D3 Visualization
from bokeh.io import show, output_notebook
from bokeh.palettes import (
YlOrRd9 as palette1,
YlGnBu9 as palette2)
from bokeh.plotting import figure
from bokeh.models import (
GeoJSONDataSource,
LogColorMapper,
HoverTool,
LogTicker,
ColorBar)
# Load the enriched GeoJSON Data Source, with the loanbook measures of interest
with open(us_states_GeoJSON, 'r') as f:
geo_source = GeoJSONDataSource(geojson=f.read())
# Output the Choropleth Plots in Notebook
output_notebook()
# PROVIDE THE CHOROPLETH OF "LOAN BOOK AMOUNT PER STATE"
palette1.reverse()
color_mapper = LogColorMapper(palette=palette1,
low=grouped_agg_df['loanbook_amnt_per_state'].min(),
high=grouped_agg_df['loanbook_amnt_per_state'].max())
# Define the figure "Tools" we want to make available
TOOLS = "pan, wheel_zoom, reset, hover, save"
# Plot the figure
# Define the figure dimensions and its general details
p = figure(title="Loan Book Value by U.S. States", tools=TOOLS,
plot_width=960, plot_height=500,
x_range=(0, 960), y_range=(500, 0),
x_axis_location=None, y_axis_location=None)
# Render the "Bokeh" patches in Glyph
p.patches('xs', 'ys', source=geo_source,
fill_color={'field': "loanbook_amnt_per_state" ,'transform': color_mapper},
fill_alpha=0.7, line_color="white", line_width=0.5)
# Add a Hover Tools over the U.S. States
hover = p.select_one(HoverTool)
hover.point_policy = "follow_mouse"
hover.tooltips = [
("State", "@state"),
("Loan Book Amount", "@loanbook_amnt_per_state{,.2f} USD"),
("(Long, Lat)", "($x, $y)"),
]
# Add a ColorBar Legend
color_bar = ColorBar(color_mapper=color_mapper, ticker=LogTicker(),
background_fill_alpha=0.7,
label_standoff=5,
major_label_text_color='black',
major_tick_line_color='black', major_tick_line_width=1.3, major_tick_out=5,
border_line_color=None, location=(0,0),
orientation='horizontal', width=500)
p.add_layout(color_bar, 'above')
show(p)
Explanation: 2.1 Loan book Value by U.S. States
End of explanation
# PROVIDE THE CHOROPLETH OF "LOAN BOOK VOLUME PER STATE"
palette2.reverse()
color_mapper = LogColorMapper(palette=palette2,
low=grouped_agg_df['loanbook_vol_per_state'].min(),
high=grouped_agg_df['loanbook_vol_per_state'].max())
# Define the figure "Tools" we want to make available
TOOLS = "pan, wheel_zoom, reset, hover, save"
# Plot the figure
# Define the figure dimensions and its general details
p = figure(title="Loan Book Volume by U.S. States", tools=TOOLS,
plot_width=960, plot_height=500,
x_range=(0, 960), y_range=(500, 0),
x_axis_location=None, y_axis_location=None)
# Render the "Bokeh" patches in Glyph
p.patches('xs', 'ys', source=geo_source,
fill_color={'field': "loanbook_vol_per_state" ,'transform': color_mapper},
fill_alpha=0.7, line_color="white", line_width=0.5)
# Add a Hover Tools over the U.S. States
hover = p.select_one(HoverTool)
hover.point_policy = "follow_mouse"
hover.tooltips = [
("State", "@state"),
("Loan Book Volume", "@loanbook_vol_per_state{,}"),
("(Long, Lat)", "($x, $y)"),
]
# Add a ColorBar Legend
color_bar = ColorBar(color_mapper=color_mapper, ticker=LogTicker(),
background_fill_alpha=0.7,
label_standoff=5,
major_label_text_color='black',
major_tick_line_color='black', major_tick_line_width=1.3, major_tick_out=5,
border_line_color=None, location=(0,0),
orientation='horizontal', width=500)
p.add_layout(color_bar, 'above')
show(p)
Explanation: 2.2 Loan book Volume by U.S. States
End of explanation |
3,004 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Shear Wave Splitting for the Novice
When a shear wave encounters an anisotropic medium, it splits its energy into orthogonally polarised wave sheets. The effect is easily measured on waveforms with -- at least -- 2-component data (provided those 2 components are orthogonal to the wavefront vector, which can be different from the ray vector). The key parameters are the polarisation of the wave fronts (captured by the parameter $\phi$, which can be defined as a vector in 3 dimensions, but in practice is treated as an angle). This angle is measured relative to some well-defined direction, e.g. North, or upwards, in the plane normal to the wave propagation direction.
Splitting the signal
Let's start with two components. Put a pulse of energy and some noise on these components, with a polarisation of 40 degrees. Note the pulse of energy is centred in the middle of the trace -- this is deliberate -- it is a feature of this software that analysis is always done at the centre of traces.
Step1: Now let's add a bit of splitting. Note, this shortens trace length slightly. And the pulse is still at the centre.
Step2: Measuring shear wave splitting involves searching for the splitting parameters that, when removed from the data, best linearise the particle motion. We know the splitting parameters so no need to search. Let's just confirm that when we undo the splitting we get linearised particle motion. Again, this shortens the trace, and the pulse is still at the centre.
Step3: The window
The window should capture the power in the pulse of arriving energy in such a way as to maximise the signal to noise ratio. It should also be wide enough to account for pulse broadening when splitting operators are applied to the data.
Step4: The measurement | Python Code:
import sys
sys.path.append("..")
import splitwavepy as sw
import matplotlib.pyplot as plt
import numpy as np
data = sw.Pair(noise=0.05,pol=40,delta=0.1)
data.plot()
Explanation: Shear Wave Splitting for the Novice
When a shear wave encounters an anisotropic medium, it splits its energy into orthogonally polarised wave sheets. The effect is easily measured on waveforms with -- at least -- 2-component data (provided those 2 components are orthogonal to the wavefront vector, which can be different from the ray vector). The key parameters are the polarisation of the wave fronts (captured by the parameter $\phi$, which can be defined as a vector in 3 dimensions, but in practice is treated as an angle). This angle is measured relative to some well-defined direction, e.g. North, or upwards, in the plane normal to the wave propagation direction.
Splitting the signal
Let's start with two components. Put a pulse of energy and some noise on these components, with a polarisation of 40 degrees. Note the pulse of energy is centred in the middle of the trace -- this is deliberate -- it is a feature of this software that analysis is always done at the centre of traces.
End of explanation
data.split(40,1.6)
data.plot()
Explanation: Now let's add a bit of splitting. Note, this shortens trace length slightly. And the pulse is still at the centre.
End of explanation
data.unsplit(80,1.6)
data.plot()
Explanation: Measuring shear wave splitting involves searching for the splitting parameters that, when removed from the data, best linearise the particle motion. We know the splitting parameters so no need to search. Let's just confirm that when we undo the splitting we get linearised particle motion. Again, this shortens the trace, and the pulse is still at the centre.
End of explanation
# Let's start afresh, and this time put the splitting on straight away.
data = sw.Pair(delta=0.1,noise=0.01,pol=40,fast=80,lag=1.2)
# plot power in signal
fig, ax1 = plt.subplots()
ax1.plot(data.t(),data.power())
# generate a window
window = data.window(25,12,tukey=0.1)
# window = sw.Window(data.centre(),150)
ax2 = ax1.twinx()
ax2.plot(data.t(),window.asarray(data.t().size),'r')
plt.show()
data.plot(window=window)
# Now repeat but this time apply loads of splitting and see the energy broaden
data = sw.Pair(delta=0.1,noise=0.01,pol=40,fast=80,lag=5.2)
# plot power in signal
fig, ax1 = plt.subplots()
ax1.plot(data.t(),data.power())
# generate a window
window = data.window(25,12,tukey=0.1)
# window = sw.Window(data.centre(),150)
ax2 = ax1.twinx()
ax2.plot(data.t(),window.asarray(data.t().size),'r')
plt.show()
data.plot(window=window)
# large window
largewindow = data.window(23,24,tukey=0.1)
data.plot(window=largewindow)
Explanation: The window
The window should capture the power in the pulse of arriving energy in such a way as to maximise the signal to noise ratio. It should also be wide enough to account for pulse broadening when splitting operators are applied to the data.
End of explanation
# sparse search
tlags = np.linspace(0,7.0,60)
degs = np.linspace(-90,90,60)
M = sw.EigenM(tlags=tlags,degs=degs,noise=0.03,fast=112,lag=5.3,delta=0.2)
M.plot()
# dense search
# tlags = np.linspace(0.,7.0,200)
# degs = np.linspace(0,180,200)
# M = sw.EigenM(M.data,tlags=tlags,degs=degs)
# M.plot()
M.tlags
M = sw.EigenM(delta=0.1,noise=0.02,fast=60,lag=1.3)
M.plot()
np.linspace(0,0.5,15)
p = sw.Pair(delta=0.1,pol=30,fast=30,lag=1.2,noise=0.01)
p.plot()
p.angle
Explanation: The measurement
End of explanation |
3,005 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Random Sampling
Copyright 2015 Allen Downey
License
Step1: Suppose we want to estimate the average weight of men and women in the U.S.
And we want to quantify the uncertainty of the estimate.
One approach is to simulate many experiments and see how much the results vary from one experiment to the next.
I'll start with the unrealistic assumption that we know the actual distribution of weights in the population. Then I'll show how to solve the problem without that assumption.
Based on data from the BRFSS, I found that the distribution of weight in kg for women in the U.S. is well modeled by a lognormal distribution with the following parameters
Step2: Here's what that distribution looks like
Step3: make_sample draws a random sample from this distribution. The result is a NumPy array.
Step4: Here's an example with n=100. The mean and std of the sample are close to the mean and std of the population, but not exact.
Step5: We want to estimate the average weight in the population, so the "sample statistic" we'll use is the mean
Step6: One iteration of "the experiment" is to collect a sample of 100 women and compute their average weight.
We can simulate running this experiment many times, and collect a list of sample statistics. The result is a NumPy array.
Step7: The next line runs the simulation 1000 times and puts the results in
sample_means
Step8: Let's look at the distribution of the sample means. This distribution shows how much the results vary from one experiment to the next.
Remember that this distribution is not the same as the distribution of weight in the population. This is the distribution of results across repeated imaginary experiments.
Step9: The mean of the sample means is close to the actual population mean, which is nice, but not actually the important part.
Step10: The standard deviation of the sample means quantifies the variability from one experiment to the next, and reflects the precision of the estimate.
This quantity is called the "standard error".
Step11: We can also use the distribution of sample means to compute a "90% confidence interval", which contains 90% of the experimental results
Step12: The following function takes an array of sample statistics and prints the SE and CI
Step13: And here's what that looks like
Step14: Now we'd like to see what happens as we vary the sample size, n. The following function takes n, runs 1000 simulated experiments, and summarizes the results.
Step15: Here's a test run with n=100
Step16: Now we can use interact to run plot_sample_stats with different values of n. Note
Step17: This framework works with any other quantity we want to estimate. By changing sample_stat, you can compute the SE and CI for any sample statistic
Step24: So far we have shown that if we know the actual distribution of the population, we can compute the sampling distribution for any sample statistic, and from that we can compute SE and CI.
But in real life we don't know the actual distribution of the population. If we did, we wouldn't need to estimate it!
In real life, we use the sample to build a model of the population distribution, then use the model to generate the sampling distribution. A simple and popular way to do that is "resampling," which means we use the sample itself as a model of the population distribution and draw samples from it.
Before we go on, I want to collect some of the code from Part One and organize it as a class. This class represents a framework for computing sampling distributions.
Step25: The following function instantiates a Resampler and runs it.
Step26: Here's a test run with n=100
Step27: Now we can use plot_resampled_stats in an interaction
Step30: Now we can write a new class called StdResampler that inherits from Resampler and overrides sample_stat so it computes the standard deviation of the resampled data.
Step31: Here's how it works
Step32: When your StdResampler is working, you should be able to interact with it
Step33: We can extend this framework to compute SE and CI for a difference in means.
For example, men are heavier than women on average. Here's the women's distribution again (from BRFSS data)
Step34: And here's the men's distribution
Step35: I'll simulate a sample of 100 men and 100 women
Step36: The difference in means should be about 17 kg, but will vary from one random sample to the next
Step38: Here's the function that computes Cohen's $d$ again
Step39: The difference in weight between men and women is about 1 standard deviation
Step40: Now we can write a version of the Resampler that computes the sampling distribution of $d$.
Step41: Now we can instantiate a CohenResampler and plot the sampling distribution. | Python Code:
from __future__ import print_function, division
import numpy
import scipy.stats
import matplotlib.pyplot as pyplot
from IPython.html.widgets import interact, fixed
from IPython.html import widgets
# seed the random number generator so we all get the same results
numpy.random.seed(18)
# some nicer colors from http://colorbrewer2.org/
COLOR1 = '#7fc97f'
COLOR2 = '#beaed4'
COLOR3 = '#fdc086'
COLOR4 = '#ffff99'
COLOR5 = '#386cb0'
%matplotlib inline
Explanation: Random Sampling
Copyright 2015 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
weight = scipy.stats.lognorm(0.23, 0, 70.8)
weight.mean(), weight.std()
Explanation: Suppose we want to estimate the average weight of men and women in the U.S.
And we want to quantify the uncertainty of the estimate.
One approach is to simulate many experiments and see how much the results vary from one experiment to the next.
I'll start with the unrealistic assumption that we know the actual distribution of weights in the population. Then I'll show how to solve the problem without that assumption.
Based on data from the BRFSS, I found that the distribution of weight in kg for women in the U.S. is well modeled by a lognormal distribution with the following parameters:
End of explanation
xs = numpy.linspace(20, 160, 100)
ys = weight.pdf(xs)
pyplot.plot(xs, ys, linewidth=4, color=COLOR1)
pyplot.xlabel('weight (kg)')
pyplot.ylabel('PDF')
None
Explanation: Here's what that distribution looks like:
End of explanation
def make_sample(n=100):
sample = weight.rvs(n)
return sample
Explanation: make_sample draws a random sample from this distribution. The result is a NumPy array.
End of explanation
sample = make_sample(n=100)
sample.mean(), sample.std()
Explanation: Here's an example with n=100. The mean and std of the sample are close to the mean and std of the population, but not exact.
End of explanation
def sample_stat(sample):
return sample.mean()
Explanation: We want to estimate the average weight in the population, so the "sample statistic" we'll use is the mean:
End of explanation
def compute_sample_statistics(n=100, iters=1000):
stats = [sample_stat(make_sample(n)) for i in range(iters)]
return numpy.array(stats)
Explanation: One iteration of "the experiment" is to collect a sample of 100 women and compute their average weight.
We can simulate running this experiment many times, and collect a list of sample statistics. The result is a NumPy array.
End of explanation
sample_means = compute_sample_statistics(n=100, iters=1000)
Explanation: The next line runs the simulation 1000 times and puts the results in
sample_means:
End of explanation
pyplot.hist(sample_means, color=COLOR5)
pyplot.xlabel('sample mean (n=100)')
pyplot.ylabel('count')
None
Explanation: Let's look at the distribution of the sample means. This distribution shows how much the results vary from one experiment to the next.
Remember that this distribution is not the same as the distribution of weight in the population. This is the distribution of results across repeated imaginary experiments.
End of explanation
sample_means.mean()
Explanation: The mean of the sample means is close to the actual population mean, which is nice, but not actually the important part.
End of explanation
std_err = sample_means.std()
std_err
Explanation: The standard deviation of the sample means quantifies the variability from one experiment to the next, and reflects the precision of the estimate.
This quantity is called the "standard error".
End of explanation
conf_int = numpy.percentile(sample_means, [5, 95])
conf_int
Explanation: We can also use the distribution of sample means to compute a "90% confidence interval", which contains 90% of the experimental results:
End of explanation
def summarize_sampling_distribution(sample_stats):
print('SE', sample_stats.std())
print('90% CI', numpy.percentile(sample_stats, [5, 95]))
Explanation: The following function takes an array of sample statistics and prints the SE and CI:
End of explanation
summarize_sampling_distribution(sample_means)
Explanation: And here's what that looks like:
End of explanation
def plot_sample_stats(n, xlim=None):
sample_stats = compute_sample_statistics(n, iters=1000)
summarize_sampling_distribution(sample_stats)
pyplot.hist(sample_stats, color=COLOR2)
pyplot.xlabel('sample statistic')
pyplot.xlim(xlim)
Explanation: Now we'd like to see what happens as we vary the sample size, n. The following function takes n, runs 1000 simulated experiments, and summarizes the results.
End of explanation
plot_sample_stats(100)
Explanation: Here's a test run with n=100:
End of explanation
def sample_stat(sample):
return sample.mean()
slider = widgets.IntSliderWidget(min=10, max=1000, value=100)
interact(plot_sample_stats, n=slider, xlim=fixed([55, 95]))
None
Explanation: Now we can use interact to run plot_sample_stats with different values of n. Note: xlim sets the limits of the x-axis so the figure doesn't get rescaled as we vary n.
End of explanation
def sample_stat(sample):
return sample.std()
slider = widgets.IntSliderWidget(min=10, max=1000, value=100)
interact(plot_sample_stats, n=slider, xlim=fixed([0, 40]))
None
Explanation: This framework works with any other quantity we want to estimate. By changing sample_stat, you can compute the SE and CI for any sample statistic:
Standard deviation of the sample.
Coefficient of variation, which is the sample standard deviation divided by the sample mean.
Min or Max
Median (which is the 50th percentile)
10th or 90th percentile.
Interquartile range (IQR), which is the difference between the 75th and 25th percentiles.
NumPy array methods you might find useful include std, min, max, and percentile.
Depending on the results, you might want to adjust xlim.
End of explanation
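For example, a hedged sketch using the interquartile range as the sample statistic (any of the statistics listed above can be swapped in the same way):
def sample_stat(sample):
    # Interquartile range (IQR) of the sample
    return numpy.percentile(sample, 75) - numpy.percentile(sample, 25)

plot_sample_stats(100)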
class Resampler(object):
    """Represents a framework for computing sampling distributions."""
def __init__(self, sample, xlim=None):
        """Stores the actual sample."""
self.sample = sample
self.n = len(sample)
self.xlim = xlim
def resample(self):
        """Generates a new sample by choosing from the original
        sample with replacement."""
new_sample = numpy.random.choice(self.sample, self.n, replace=True)
return new_sample
def sample_stat(self, sample):
        """Computes a sample statistic using the original sample or a
        simulated sample."""
return sample.mean()
def compute_sample_statistics(self, iters=1000):
        """Simulates many experiments and collects the resulting sample
        statistics."""
stats = [self.sample_stat(self.resample()) for i in range(iters)]
return numpy.array(stats)
def plot_sample_stats(self):
        """Runs simulated experiments and summarizes the results."""
sample_stats = self.compute_sample_statistics()
summarize_sampling_distribution(sample_stats)
pyplot.hist(sample_stats, color=COLOR2)
pyplot.xlabel('sample statistic')
pyplot.xlim(self.xlim)
Explanation: So far we have shown that if we know the actual distribution of the population, we can compute the sampling distribution for any sample statistic, and from that we can compute SE and CI.
But in real life we don't know the actual distribution of the population. If we did, we wouldn't need to estimate it!
In real life, we use the sample to build a model of the population distribution, then use the model to generate the sampling distribution. A simple and popular way to do that is "resampling," which means we use the sample itself as a model of the population distribution and draw samples from it.
Before we go on, I want to collect some of the code from Part One and organize it as a class. This class represents a framework for computing sampling distributions.
End of explanation
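As a minimal illustration of the resampling idea (assuming sample is the array drawn earlier), a single resampled dataset is just a draw with replacement from the observed values:
# One bootstrap resample and its mean
resampled = numpy.random.choice(sample, len(sample), replace=True)
resampled.mean()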
def plot_resampled_stats(n=100):
sample = weight.rvs(n)
resampler = Resampler(sample, xlim=[55, 95])
resampler.plot_sample_stats()
Explanation: The following function instantiates a Resampler and runs it.
End of explanation
plot_resampled_stats(100)
Explanation: Here's a test run with n=100
End of explanation
slider = widgets.IntSliderWidget(min=10, max=1000, value=100)
interact(plot_resampled_stats, n=slider, xlim=fixed([1, 15]))
None
Explanation: Now we can use plot_resampled_stats in an interaction:
End of explanation
class StdResampler(Resampler):
    """Computes the sampling distribution of the standard deviation."""
def sample_stat(self, sample):
        """Computes a sample statistic using the original sample or a
        simulated sample."""
return sample.std()
Explanation: Now we can write a new class called StdResampler that inherits from Resampler and overrides sample_stat so it computes the standard deviation of the resampled data.
End of explanation
def plot_resampled_stats(n=100):
sample = weight.rvs(n)
resampler = StdResampler(sample, xlim=[0, 40])
resampler.plot_sample_stats()
plot_resampled_stats()
Explanation: Here's how it works:
End of explanation
slider = widgets.IntSliderWidget(min=10, max=1000, value=40)
interact(plot_resampled_stats, n=slider)
None
Explanation: When your StdResampler is working, you should be able to interact with it:
End of explanation
female_weight = scipy.stats.lognorm(0.23, 0, 70.8)
female_weight.mean(), female_weight.std()
Explanation: We can extend this framework to compute SE and CI for a difference in means.
For example, men are heavier than women on average. Here's the women's distribution again (from BRFSS data):
End of explanation
male_weight = scipy.stats.lognorm(0.20, 0, 87.3)
male_weight.mean(), male_weight.std()
Explanation: And here's the men's distribution:
End of explanation
female_sample = female_weight.rvs(100)
male_sample = male_weight.rvs(100)
Explanation: I'll simulate a sample of 100 men and 100 women:
End of explanation
male_sample.mean() - female_sample.mean()
Explanation: The difference in means should be about 17 kg, but will vary from one random sample to the next:
End of explanation
def CohenEffectSize(group1, group2):
    """Compute Cohen's d.
    group1: Series or NumPy array
    group2: Series or NumPy array
    returns: float
    """
diff = group1.mean() - group2.mean()
n1, n2 = len(group1), len(group2)
var1 = group1.var()
var2 = group2.var()
pooled_var = (n1 * var1 + n2 * var2) / (n1 + n2)
d = diff / numpy.sqrt(pooled_var)
return d
Explanation: Here's the function that computes Cohen's $d$ again:
End of explanation
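For reference, the quantity computed above is
$d = \dfrac{\bar{x}_1 - \bar{x}_2}{\sqrt{(n_1 s_1^2 + n_2 s_2^2) / (n_1 + n_2)}}$
i.e. the difference in means divided by the square root of the pooled variance.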
CohenEffectSize(male_sample, female_sample)
Explanation: The difference in weight between men and women is about 1 standard deviation:
End of explanation
class CohenResampler(Resampler):
def __init__(self, group1, group2, xlim=None):
self.group1 = group1
self.group2 = group2
self.xlim = xlim
def resample(self):
group1 = numpy.random.choice(self.group1, len(self.group1), replace=True)
group2 = numpy.random.choice(self.group2, len(self.group2), replace=True)
return group1, group2
def sample_stat(self, groups):
group1, group2 = groups
return CohenEffectSize(group1, group2)
# NOTE: The following functions are the same as the ones in Resampler,
# so I could just inherit them, but I'm including them for readability
def compute_sample_statistics(self, iters=1000):
stats = [self.sample_stat(self.resample()) for i in range(iters)]
return numpy.array(stats)
def plot_sample_stats(self):
sample_stats = self.compute_sample_statistics()
summarize_sampling_distribution(sample_stats)
pyplot.hist(sample_stats, color=COLOR2)
pyplot.xlabel('sample statistic')
pyplot.xlim(self.xlim)
Explanation: Now we can write a version of the Resampler that computes the sampling distribution of $d$.
End of explanation
resampler = CohenResampler(male_sample, female_sample)
resampler.plot_sample_stats()
Explanation: Now we can instantiate a CohenResampler and plot the sampling distribution.
End of explanation |
3,006 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Session 5
Step2: <a name="part-1---generative-adversarial-networks-gan--deep-convolutional-gan-dcgan"></a>
Part 1 - Generative Adversarial Networks (GAN) / Deep Convolutional GAN (DCGAN)
<a name="introduction"></a>
Introduction
Recall from the lecture that a Generative Adversarial Network is two networks, a generator and a discriminator. The "generator" takes a feature vector and decodes this feature vector to become an image, exactly like the decoder we built in Session 3's Autoencoder. The discriminator is exactly like the encoder of the Autoencoder, except it can only have 1 value in the final layer. We use a sigmoid to squash this value between 0 and 1, and then interpret the meaning of it as the probability that the input image came from the real training set rather than from the generator.
Step3: <a name="building-the-encoder"></a>
Building the Encoder
Let's build our encoder just like in Session 3. We'll create a function which accepts the input placeholder, a list of dimensions describing the number of convolutional filters in each layer, and a list of filter sizes to use for the kernel sizes in each convolutional layer. We'll also pass in a parameter for which activation function to apply.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step4: <a name="building-the-discriminator-for-the-training-samples"></a>
Building the Discriminator for the Training Samples
Finally, let's take the output of our encoder, and make sure it has just 1 value by using a fully connected layer. We can use the libs/utils module's, linear layer to do this, which will also reshape our 4-dimensional tensor to a 2-dimensional one prior to using the fully connected layer.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step5: Now let's create the discriminator for the real training data coming from X
Step6: And we can see what the network looks like now
Step7: <a name="building-the-decoder"></a>
Building the Decoder
Now we're ready to build the Generator, or decoding network. This network takes as input a vector of features and will try to produce an image that looks like our training data. We'll send this synthesized image to our discriminator which we've just built above.
Let's start by building the input to this network. We'll need a placeholder for the input features to this network. We have to be mindful of how many features we have. The feature vector for the Generator will eventually need to form an image. What we can do is create a 1-dimensional vector of values for each element in our batch, giving us [None, n_features]. We can then reshape this to a 4-dimensional Tensor so that we can build a decoder network just like in Session 3.
But how do we assign the values from our 1-d feature vector (or 2-d tensor with Batch number of them) to the 3-d shape of an image (or 4-d tensor with Batch number of them)? We have to go from the number of features in our 1-d feature vector, let's say n_latent to height x width x channels through a series of convolutional transpose layers. One way to approach this is think of the reverse process. Starting from the final decoding of height x width x channels, I will use convolution with a stride of 2, so downsample by 2 with each new layer. So the second to last decoder layer would be, height // 2 x width // 2 x ?. If I look at it like this, I can use the variable n_pixels denoting the height and width to build my decoder, and set the channels to whatever I want.
Let's start with just our 2-d placeholder which will have None x n_features, then convert it to a 4-d tensor ready for the decoder part of the network (a.k.a. the generator).
Step8: Now we'll build the decoder in much the same way as we built our encoder. And exactly as we've done in Session 3! This requires one additional parameter "channels" which is how many output filters we want for each net layer. We'll interpret the dimensions as the height and width of the tensor in each new layer, the channels is how many output filters we want for each net layer, and the filter_sizes is the size of the filters used for convolution. We'll default to using a stride of two which will downsample each layer. We're also going to collect each hidden layer h in a list. We'll end up needing this for Part 2 when we combine the variational autoencoder w/ the generative adversarial network.
Step9: <a name="building-the-generator"></a>
Building the Generator
Now we're ready to use our decoder to take in a vector of features and generate something that looks like our training images. We have to ensure that the last layer produces the same output shape as the discriminator's input. E.g. we used a [None, 64, 64, 3] input to the discriminator, so our generator needs to also output [None, 64, 64, 3] tensors. In other words, we have to ensure the last element in our dimensions list is 64, and the last element in our channels list is 3.
Step10: Now let's call the generator function with our input placeholder Z. This will take our feature vector and generate something in the shape of an image.
Step11: <a name="building-the-discriminator-for-the-generated-samples"></a>
Building the Discriminator for the Generated Samples
Lastly, we need another discriminator which takes as input our generated images. Recall the discriminator that we have made only takes as input our placeholder X which is for our actual training samples. We'll use the same function for creating our discriminator and reuse the variables we already have. This is the crucial part! We aren't making new trainable variables, but reusing the ones we have. We just create a new set of operations that takes as input our generated image. So we'll have a whole new set of operations exactly like the ones we have created for our first discriminator. But we are going to use the exact same variables as our first discriminator, so that we optimize the same values.
Step12: Now we can look at the graph and see the new discriminator inside the node for the discriminator. You should see the original discriminator and a new graph of a discriminator within it, but all the weights are shared with the original discriminator.
Step13: <a name="gan-loss-functions"></a>
GAN Loss Functions
We now have all the components to our network. We just have to train it. This is the notoriously tricky bit. We will have 3 different loss measures instead of our typical network with just a single loss. We'll later connect each of these loss measures to two optimizers, one for the generator and another for the discriminator, and then pin them against each other and see which one wins! Exciting times!
Recall from Session 3's Supervised Network, we created a binary classification task
Step14: What we've just written is a loss function for our generator. The generator is optimized when the discriminator for the generated samples produces all ones. In contrast to the generator, the discriminator will have 2 measures to optimize. One which is the opposite of what we have just written above, as well as 1 more measure for the real samples. Try writing these two losses and we'll combine them using their average. We want to optimize the Discriminator for the real samples producing all 1s, and the Discriminator for the fake samples producing all 0s
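As a rough sketch of what those three losses could look like in plain TensorFlow 1.x (this is not the course's exact solution; D_real_logits and D_fake_logits are assumed names for the discriminator outputs before the sigmoid):
# Generator wants the discriminator to call its samples real (all 1s)
loss_G = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
    logits=D_fake_logits, labels=tf.ones_like(D_fake_logits)))
# Discriminator wants 1s for real samples and 0s for generated samples
loss_D_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
    logits=D_real_logits, labels=tf.ones_like(D_real_logits)))
loss_D_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
    logits=D_fake_logits, labels=tf.zeros_like(D_fake_logits)))
loss_D = (loss_D_real + loss_D_fake) / 2.0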
Step15: With our loss functions, we can create an optimizer for the discriminator and generator
Step16: We can also apply regularization to our network. This will penalize weights in the network for growing too large.
Step17: The last thing you may want to try is creating a separate learning rate for each of your generator and discriminator optimizers like so
Step18: Now you can feed the placeholders to your optimizers. If you run into errors creating these, then you likely have a problem with your graph's definition! Be sure to go back and reset the default graph and check the sizes of your different operations/placeholders.
With your optimizers, you can now train the network by "running" the optimizer variables with your session. You'll need to set the var_list parameter of the minimize function to only train the variables for the discriminator and same for the generator's optimizer
Step19: <a name="loading-a-dataset"></a>
Loading a Dataset
Let's use the Celeb Dataset just for demonstration purposes. In Part 2, you can explore using your own dataset. This code is exactly the same as we did in Session 3's homework with the VAE.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step20: <a name="training"></a>
Training
We'll now go through the setup of training the network. We won't actually spend the time to train the network but just see how it would be done. This is because in Part 2, we'll see an extension to this network which makes it much easier to train.
Step21: <a name="equilibrium"></a>
Equilibrium
Equilibrium is at 0.693. Why? Consider what the cost is measuring, the binary cross entropy. If we have random guesses, then we have as many 0s as we have 1s. And on average, we'll be 50% correct. The binary cross entropy is
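Concretely, with 50/50 guesses the expected binary cross entropy works out to
$-\left[\tfrac{1}{2}\log\tfrac{1}{2} + \tfrac{1}{2}\log\tfrac{1}{2}\right] = \log 2 \approx 0.693$
which is why 0.693 marks the point where neither network is winning.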
Step22: When we go to train the network, we switch back and forth between each optimizer, feeding in the appropriate values for each optimizer. The opt_g optimizer only requires the Z and lr_g placeholders, while the opt_d optimizer requires the X, Z, and lr_d placeholders.
Don't train this network for very long because GANs are a huge pain to train and require a lot of fiddling. They very easily get stuck in their adversarial process, or get overtaken by one or the other, resulting in a useless model. What you need to develop is a steady equilibrium that optimizes both. That will likely take two weeks just trying to get the GAN to train and not have enough time for the rest of the assignment. They require a lot of memory/cpu and can take many days to train once you have settled on an architecture/training process/dataset. Just let it run for a short time and then interrupt the kernel (don't restart!), then continue to the next cell.
From there, we'll go over an extension to the GAN which uses a VAE like we used in Session 3. By using this extra network, we can actually train a better model in a fraction of the time and with much more ease! But the network's definition is a bit more complicated. Let's see how the GAN is trained first and then we'll train the VAE/GAN network instead. While training, the "real" and "fake" cost will be printed out. See how this cost wavers around the equilibrium and how we enforce it to try and stay around there by including a margin and some simple logic for updates. This is highly experimental and the research does not have a good answer for the best practice on how to train a GAN. I.e., some people will set the learning rate to some ratio of the performance between fake/real networks, others will have a fixed update schedule but train the generator twice and the discriminator only once.
Step23: <a name="part-2---variational-auto-encoding-generative-adversarial-network-vaegan"></a>
Part 2 - Variational Auto-Encoding Generative Adversarial Network (VAEGAN)
In our definition of the generator, we started with a feature vector, Z. This feature vector was not connected to anything before it. Instead, we had to randomly create its values using a random number generator of its n_latent values from -1 to 1, and this range was chosen arbitrarily. It could have been 0 to 1, or -3 to 3, or 0 to 100. In any case, the network would have had to learn to transform those values into something that looked like an image. There was no way for us to take an image, and find the feature vector that created it. In other words, it was not possible for us to encode an image.
The closest thing to an encoding we had was taking an image and feeding it to the discriminator, which would output a 0 or 1. But what if we had another network that allowed us to encode an image, and then we used this network for both the discriminator and generative parts of the network? That's the basic idea behind the VAEGAN
Step24: <a name="batch-normalization"></a>
Batch Normalization
You may have noticed from the VAE code that I've used something called "batch normalization". This is a pretty effective technique for regularizing the training of networks by "reducing internal covariate shift". The basic idea is that given a minibatch, we optimize the gradient for this small sample of the greater population. But this small sample may have different characteristics than the entire population's gradient. Consider the most extreme case, a minibatch of 1. In this case, we overfit our gradient to optimize the gradient of the single observation. If our minibatch is too large, say the size of the entire population, we aren't able to manuvuer the loss manifold at all and the entire loss is averaged in a way that doesn't let us optimize anything. What we want to do is find a happy medium between a too-smooth loss surface (i.e. every observation), and a very peaky loss surface (i.e. a single observation). Up until now we only used mini-batches to help with this. But we can also approach it by "smoothing" our updates between each mini-batch. That would effectively smooth the manifold of the loss space. Those of you familiar with signal processing will see this as a sort of low-pass filter on the gradient updates.
In order for us to use batch normalization, we need another placeholder which is a simple boolean
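A minimal sketch of that placeholder, and of wrapping a pre-activation tensor with tensorflow.contrib.layers.batch_norm as discussed next (assumes TensorFlow 1.x imported as tf; h is an illustrative pre-activation tensor):
is_training = tf.placeholder(tf.bool, name='istraining')
# Normalize the pre-activation, then apply the nonlinearity
h_bn = tf.contrib.layers.batch_norm(h, is_training=is_training)
h_out = tf.nn.elu(h_bn)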
Step25: The original paper that introduced the idea suggests to use batch normalization "pre-activation", meaning after the weight multipllication or convolution, and before the nonlinearity. We can use the tensorflow.contrib.layers.batch_norm module to apply batch normalization to any input tensor give the tensor and the placeholder defining whether or not we are training. Let's use this module and you can inspect the code inside the module in your own time if it interests you.
Step26: <a name="building-the-encoder-1"></a>
Building the Encoder
We can now change our encoder to accept the is_training placeholder and apply batch_norm just before the activation function is applied
Step27: Let's now create the input to the network using a placeholder. We can try a slightly larger image this time. But be careful experimenting with much larger images as this is a big network.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step28: And now we'll connect the input to an encoder network. We'll also use the tf.nn.elu activation instead. Explore other activations but I've found this to make the training much faster (e.g. 10x faster at least!). See the paper for more details
Step29: <a name="building-the-variational-layer"></a>
Building the Variational Layer
In Session 3, we introduced the idea of Variational Bayes when we used the Variational Auto Encoder. The variational bayesian approach requires a richer understanding of probabilistic graphical models and bayesian methods which we we're not able to go over in this course (it requires a few courses all by itself!). For that reason, please treat this as a "black box" in this course.
For those of you that are more familiar with graphical models, Variational Bayesian methods attempt to model an approximate joint distribution of $Q(Z)$ using some distance function to the true distribution $P(X)$. Kingma and Welling show how this approach can be used in a graphical model resembling an autoencoder and can be trained using KL-Divergence, or $KL(Q(Z) || P(X))$. The distribution Q(Z) is the variational distribution, and attempts to model the lower-bound of the true distribution $P(X)$ through the minimization of the KL-divergence. Another way to look at this is the encoder of the network is trying to model the parameters of a known distribution, the Gaussian Distribution, through a minimization of this lower bound. We assume that this distribution resembles the true distribution, but it is merely a simplification of the true distribution. To learn more about this, I highly recommend picking up the book by Christopher Bishop called "Pattern Recognition and Machine Learning" and reading the original Kingma and Welling paper on Variational Bayes.
Now back to coding, we'll create a general variational layer that does exactly the same thing as our VAE in session 3. Treat this as a black box if you are unfamiliar with the math. It takes an input encoding, h, and an integer, n_code defining how many latent Gaussians to use to model the latent distribution. In return, we get the latent encoding from sampling the Gaussian layer, z, the mean and log standard deviation, as well as the prior loss, loss_z.
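A hedged sketch of such a layer using the standard reparameterization trick (illustrative only -- the actual course utility may differ; assumes TensorFlow 1.x as tf):
def variational_layer(h, n_code):
    # Predict the mean and log standard deviation of the latent Gaussian
    flat = tf.contrib.layers.flatten(h)
    z_mu = tf.contrib.layers.fully_connected(flat, n_code, activation_fn=None)
    z_log_sigma = tf.contrib.layers.fully_connected(flat, n_code, activation_fn=None)
    # Reparameterization: z = mu + sigma * epsilon, with epsilon ~ N(0, 1)
    eps = tf.random_normal(tf.shape(z_mu))
    z = z_mu + tf.exp(z_log_sigma) * eps
    # KL divergence between N(mu, sigma^2) and the standard normal prior
    loss_z = -0.5 * tf.reduce_sum(
        1.0 + 2.0 * z_log_sigma - tf.square(z_mu) - tf.exp(2.0 * z_log_sigma), 1)
    return z, z_mu, z_log_sigma, loss_z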
Step30: Let's connect this layer to our encoding, and keep all the variables it returns. Treat this as a black box if you are unfamiliar with variational bayes!
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step31: <a name="building-the-decoder-1"></a>
Building the Decoder
In the GAN network, we built a decoder and called it the generator network. Same idea here. We can use these terms interchangeably. Before we connect our latent encoding, Z to the decoder, we'll implement batch norm in our decoder just like we did with the encoder. This is a simple fix
Step32: Now we'll build a decoder just like in Session 3, and just like our Generator network in Part 1. In Part 1, we created Z as a placeholder which we would have had to feed in as random values. However, now we have an explicit coding of an input image in X stored in Z by having created the encoder network.
Step33: Now we need to build our discriminators. We'll need to add a parameter for the is_training placeholder. We're also going to keep track of every hidden layer in the discriminator. Our encoder already returns the Hs of each layer. Alternatively, we could poll the graph for each layer in the discriminator and ask for the correspond layer names. We're going to need these layers when building our costs.
Step34: Recall the regular GAN and DCGAN required 2 discriminators
Step35: <a name="building-vaegan-loss-functions"></a>
Building VAE/GAN Loss Functions
Let's now see how we can compose our loss. We have 3 losses for our discriminator. Along with measuring the binary cross entropy between each of them, we're going to also measure each layer's loss from our two discriminators using an l2-loss, and this will form our loss for the log likelihood measure. The details of how these are constructed are explained in more details in the paper
Step36: <a name="creating-the-optimizers"></a>
Creating the Optimizers
We now have losses for our encoder, decoder, and discriminator networks. We can connect each of these to their own optimizer and start training! Just like with Part 1's GAN, we'll ensure each network's optimizer only trains its part of the network
Step37: <a name="loading-the-dataset"></a>
Loading the Dataset
We'll now load our dataset just like in Part 1. Here is where you should explore with your own data!
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step38: We'll also create a latent manifold just like we've done in Session 3 and Part 1. This is a random sampling of 4 points in the latent space of Z. We then interpolate between them to create a "hyper-plane" and show the decoding of 10 x 10 points on that hyperplane.
Step39: Now create a session and create a coordinator to manage our queues for fetching data from the input pipeline and start our queue runners
Step40: Load an existing checkpoint if it exists to continue training.
Step41: We'll also try resynthesizing a test set of images. This will help us understand how well the encoder/decoder network is doing
Step42: <a name="training-1"></a>
Training
Almost ready for training. Let's get some variables which we'll need. These are the same as Part 1's training process. We'll keep track of t_i which we'll use to create images of the current manifold and reconstruction every so many iterations. And we'll keep track of the current batch number within the epoch and the current epoch number.
Step43: Just like in Part 1, we'll train trying to maintain an equilibrium between our Generator and Discriminator networks. You should experiment with the margin depending on how the training proceeds.
Step44: Now we'll train! Just like Part 1, we measure the real_cost and fake_cost. But this time, we'll always update the encoder. Based on the performance of the real/fake costs, then we'll update generator and discriminator networks. This will take a long time to produce something nice, but not nearly as long as the regular GAN network despite the additional parameters of the encoder and variational networks. Be sure to monitor the reconstructions to understand when your network has reached the capacity of its learning! For reference, on Celeb Net, I would use about 5 layers in each of the Encoder, Generator, and Discriminator networks using as input a 100 x 100 image, and a minimum of 200 channels per layer. This network would take about 1-2 days to train on an Nvidia TITAN X GPU.
Step45: <a name="part-3---latent-space-arithmetic"></a>
Part 3 - Latent-Space Arithmetic
<a name="loading-the-pre-trained-model"></a>
Loading the Pre-Trained Model
We're now going to work with a pre-trained VAEGAN model on the Celeb Net dataset. Let's load this model
Step46: We'll load the graph_def contained inside this dictionary. It follows the same idea as the inception, vgg16, and i2v pretrained networks. It is a dictionary with the key graph_def defined, with the graph's pretrained network. It also includes labels and a preprocess key. We'll have to do one additional thing which is to turn off the random sampling from variational layer. This isn't really necessary but will ensure we get the same results each time we use the network. We'll use the input_map argument to do this. Don't worry if this doesn't make any sense, as we didn't cover the variational layer in any depth. Just know that this is removing a random process from the network so that it is completely deterministic. If we hadn't done this, we'd get slightly different results each time we used the network (which may even be desirable for your purposes).
Step47: Now let's get the relevant parts of the network
Step48: Let's get some data to play with
Step49: Now preprocess the image, and see what the generated image looks like (i.e. the lossy version of the image through the network's encoding and decoding).
Step50: So we lost a lot of details but it seems to be able to express quite a bit about the image. Our inner most layer, Z, is only 512 values yet our dataset was 200k images of 64 x 64 x 3 pixels (about 2.3 GB of information). That means we're able to express our nearly 2.3 GB of information with only 512 values! Having some loss of detail is certainly expected!
<a name="exploring-the-celeb-net-attributes"></a>
Exploring the Celeb Net Attributes
Let's now try and explore the attributes of our dataset. We didn't train the network with any supervised labels, but the Celeb Net dataset has 40 attributes for each of its 200k images. These are already parsed and stored for you in the net dictionary
Step51: Let's see what attributes exist for one of the celeb images
Step52: <a name="find-the-latent-encoding-for-an-attribute"></a>
Find the Latent Encoding for an Attribute
The Celeb Dataset includes attributes for each of its 200k+ images. This allows us to feed into the encoder some images that we know have a specific attribute, e.g. "smiling". We store what their encoding is and retain this distribution of encoded values. We can then look at any other image and see how it is encoded, and slightly change the encoding by adding the encoding of our smiling images to it! The result should be our image but with more smiling. That is just insane and we're going to see how to do it. First let's inspect our latent space
Step53: We have 512 features that we can encode any image with. Assuming our network is doing an okay job, let's try to find the Z of the first 100 images with the 'Bald' attribute
Step54: Let's get all the bald image indexes
Step55: Now let's just load 100 of their images
Step56: Let's see if the mean image looks like a good bald person or not
Step57: Yes that is definitely a bald person. Now we're going to try to find the encoding of a bald person. One method is to try and find every other possible image and subtract the "bald" person's latent encoding. Then we could add this encoding back to any new image and hopefully it makes the image look more bald. Or we can find a bunch of bald people's encodings and then average their encodings together. This should reduce the noise from having many different attributes, but keep the signal pertaining to the baldness.
Let's first preprocess the images
Step58: Now we can find the latent encoding of the images by calculating Z and feeding X with our bald_p images
Step59: Now let's calculate the mean encoding
Step60: Let's try and synthesize from the mean bald feature now and see how it looks
Step61: <a name="latent-feature-arithmetic"></a>
Latent Feature Arithmetic
Let's now try to write a general function for performing everything we've just done so that we can do this with many different features. We'll then try to combine them and synthesize people with the features we want them to have...
Step62: Let's try getting some attributes positive and negative features. Be sure to explore different attributes! Also try different values of n_imgs, e.g. 2, 3, 5, 10, 50, 100. What happens with different values?
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
Step63: Now let's interpolate between the "Male" and "Not Male" categories
Step64: And the same for smiling
Step65: There's also no reason why we have to be within the boundaries of 0-1. We can extrapolate beyond, in, and around the space.
Step67: <a name="extensions"></a>
Extensions
Tom White, Lecturer at Victoria University School of Design, also recently demonstrated an alternative way of interpolating using a sinusoidal interpolation. He's created some of the most impressive generative images out there and luckily for us he has detailed his process in the arxiv preprint
Step68: It's certainly worth trying especially if you are looking to explore your own model's latent space in new and interesting ways.
Let's try and load an image that we want to play with. We need an image as similar to the Celeb Dataset as possible. Unfortunately, we don't have access to the algorithm they used to "align" the faces, so we'll need to try and get as close as possible to an aligned face image. One way you can do this is to load up one of the celeb images and try and align an image to it using e.g. Photoshop or another photo editing software that lets you blend and move the images around. That's what I did for my own face...
Step69: Let's see how the network encodes it
Step70: Notice how blurry the image is. Tom White's preprint suggests one way to sharpen the image is to find the "Blurry" attribute vector
Step71: Notice that the image also gets brighter and perhaps other features than simply the bluriness of the image changes. Tom's preprint suggests that this is due to the correlation that blurred images have with other things such as the brightness of the image, possibly due biases in labeling or how photographs are taken. He suggests that another way to unblur would be to synthetically blur a set of images and find the difference in the encoding between the real and blurred images. We can try it like so
Step72: For some reason, it also doesn't like my glasses very much. Let's try and add them back.
Step73: Well, more like sunglasses then. Let's try adding everything in there now!
Step74: Well it was worth a try anyway. We can also try with a lot of images and create a gif montage of the result
Step75: Exploring multiple feature vectors and applying them to images from the celeb dataset to produce animations of a face, saving it as a GIF. Recall you can store each image frame in a list and then use the gif.build_gif function to create a gif. Explore your own syntheses and then include a gif of the different images you create as "celeb.gif" in the final submission. Perhaps try finding unexpected synthetic latent attributes in the same way that we created a blur attribute. You can check the documentation in scipy.ndimage for some other image processing techniques, for instance | Python Code:
# First check the Python version
import sys
if sys.version_info < (3,4):
print('You are running an older version of Python!\n\n',
'You should consider updating to Python 3.4.0 or',
'higher as the libraries built for this course',
'have only been tested in Python 3.4 and higher.\n')
    print('Try installing the Python 3.5 version of anaconda '
          'and then restart `jupyter notebook`:\n',
'https://www.continuum.io/downloads\n\n')
# Now get necessary libraries
try:
import os
import numpy as np
import matplotlib.pyplot as plt
from skimage.transform import resize
from skimage import data
from scipy.misc import imresize
from scipy.ndimage.filters import gaussian_filter
import IPython.display as ipyd
import tensorflow as tf
from libs import utils, gif, datasets, dataset_utils, nb_utils
except ImportError as e:
print("Make sure you have started notebook in the same directory",
"as the provided zip file which includes the 'libs' folder",
"and the file 'utils.py' inside of it. You will NOT be able",
"to complete this assignment unless you restart jupyter",
"notebook inside the directory created by extracting",
"the zip file or cloning the github repo.")
print(e)
# We'll tell matplotlib to inline any drawn figures like so:
%matplotlib inline
plt.style.use('ggplot')
# Bit of formatting because I don't like the default inline code style:
from IPython.core.display import HTML
HTML("""<style> .rendered_html code {
    padding: 2px 4px;
    color: #c7254e;
    background-color: #f9f2f4;
    border-radius: 4px;
} </style>""")
Explanation: Session 5: Generative Networks
Assignment: Generative Adversarial Networks and Recurrent Neural Networks
<p class="lead">
<a href="https://www.kadenze.com/courses/creative-applications-of-deep-learning-with-tensorflow/info">Creative Applications of Deep Learning with Google's Tensorflow</a><br />
<a href="http://pkmital.com">Parag K. Mital</a><br />
<a href="https://www.kadenze.com">Kadenze, Inc.</a>
</p>
Table of Contents
<!-- MarkdownTOC autolink="true" autoanchor="true" bracket="round" -->
Overview
Learning Goals
Part 1 - Generative Adversarial Networks (GAN) / Deep Convolutional GAN (DCGAN)
Introduction
Building the Encoder
Building the Discriminator for the Training Samples
Building the Decoder
Building the Generator
Building the Discriminator for the Generated Samples
GAN Loss Functions
Building the Optimizers w/ Regularization
Loading a Dataset
Training
Equilibrium
Part 2 - Variational Auto-Encoding Generative Adversarial Network (VAEGAN)
Batch Normalization
Building the Encoder
Building the Variational Layer
Building the Decoder
Building VAE/GAN Loss Functions
Creating the Optimizers
Loading the Dataset
Training
Part 3 - Latent-Space Arithmetic
Loading the Pre-Trained Model
Exploring the Celeb Net Attributes
Find the Latent Encoding for an Attribute
Latent Feature Arithmetic
Extensions
Part 4 - Character-Level Language Model
Part 5 - Pretrained Char-RNN of Donald Trump
Getting the Trump Data
Basic Text Analysis
Loading the Pre-trained Trump Model
Inference: Keeping Track of the State
Probabilistic Sampling
Inference: Temperature
Inference: Priming
Assignment Submission
<!-- /MarkdownTOC -->
<a name="overview"></a>
Overview
This is certainly the hardest session and will require a lot of time and patience to complete. Also, many elements of this session may require further investigation, including reading of the original papers and additional resources in order to fully grasp their understanding. The models we cover are state of the art and I've aimed to give you something between a practical and mathematical understanding of the material, though it is a tricky balance. I hope for those interested, that you delve deeper into the papers for more understanding. And for those of you seeking just a practical understanding, that these notebooks will suffice.
This session covered two of the most advanced generative networks: generative adversarial networks and recurrent neural networks. During the homework, we'll see how these work in more details and try building our own. I am not asking you train anything in this session as both GANs and RNNs take many days to train. However, I have provided pre-trained networks which we'll be exploring. We'll also see how a Variational Autoencoder can be combined with a Generative Adversarial Network to allow you to also encode input data, and I've provided a pre-trained model of this type of model trained on the Celeb Faces dataset. We'll see what this means in more details below.
After this session, you are also required to submit your final project which can combine any of the materials you have learned so far to produce a short 1 minute clip demonstrating any aspect of the course you want to investigate further or combine with anything else you feel like doing. This is completely open to you and is meant to encourage your peers to share something that demonstrates creative thinking. Be sure to keep the final project in mind while browsing through this notebook!
<a name="learning-goals"></a>
Learning Goals
Learn to build the components of a Generative Adversarial Network and how it is trained
Learn to combine the Variational Autoencoder with a Generative Adversarial Network
Learn to use latent space arithmetic with a pre-trained VAE/GAN network
Learn to build the components of a Character Recurrent Neural Network and how it is trained
Learn to sample from a pre-trained CharRNN model
End of explanation
# We'll keep a variable for the size of our image.
n_pixels = 32
n_channels = 3
input_shape = [None, n_pixels, n_pixels, n_channels]
# And then create the input image placeholder
X = tf.placeholder(name='X'...
Explanation: <a name="part-1---generative-adversarial-networks-gan--deep-convolutional-gan-dcgan"></a>
Part 1 - Generative Adversarial Networks (GAN) / Deep Convolutional GAN (DCGAN)
<a name="introduction"></a>
Introduction
Recall from the lecture that a Generative Adversarial Network is two networks, a generator and a discriminator. The "generator" takes a feature vector and decodes this feature vector to become an image, exactly like the decoder we built in Session 3's Autoencoder. The discriminator is exactly like the encoder of the Autoencoder, except it can only have 1 value in the final layer. We use a sigmoid to squash this value between 0 and 1, and then interpret the meaning of it as: 1, the image you gave me was real, or 0, the image you gave me was generated by the generator, it's a FAKE! So the discriminator is like an encoder which takes an image and then performs lie detection. Are you feeding me lies? Or is the image real?
Consider the AE and VAE we trained in Session 3. The loss function operated partly on the input space. It said, per pixel, what is the difference between my reconstruction and the input image? The l2-loss per pixel. Recall at that time we suggested that this wasn't the best idea because per-pixel differences aren't representative of our own perception of the image. One way to consider this is if we had the same image, and translated it by a few pixels. We would not be able to tell the difference, but the per-pixel difference between the two images could be enormously high.
The GAN does not use per-pixel difference. Instead, it trains a distance function: the discriminator. The discriminator takes in two images, the real image and the generated one, and learns what a similar image should look like! That is really the amazing part of this network and has opened up some very exciting potential future directions for unsupervised learning. Another network that also learns a distance function is known as the siamese network. We didn't get into this network in this course, but it is commonly used in facial verification, or asserting whether two faces are the same or not.
The GAN network is notoriously a huge pain to train! For that reason, we won't actually be training it. Instead, we'll discuss an extension to this basic network called the VAEGAN which uses the VAE we created in Session 3 along with the GAN. We'll then train that network in Part 2. For now, let's stick with creating the GAN.
Let's first create the two networks: the discriminator and the generator. We'll first begin by building a general purpose encoder which we'll use for our discriminator. Recall that we've already done this in Session 3. What we want is for the input placeholder to be encoded using a list of dimensions for each of our encoder's layers. In the case of a convolutional network, our list of dimensions should correspond to the number of output filters. We also need to specify the kernel heights and widths for each layer's convolutional network.
We'll first need a placeholder. This will be the "real" image input to the discriminator and the discriminator will encode this image into a single value, 0 or 1, saying, yes this is real, or no, this is not real.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
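# A minimal sketch of one way to complete the TODO placeholder above (not the
# only valid answer): a float32 placeholder for batches of real images, using
# the input_shape defined in the same cell.
X = tf.placeholder(name='X', shape=input_shape, dtype=tf.float32)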
def encoder(x, channels, filter_sizes, activation=tf.nn.tanh, reuse=None):
# Set the input to a common variable name, h, for hidden layer
h = x
# Now we'll loop over the list of dimensions defining the number
# of output filters in each layer, and collect each hidden layer
hs = []
for layer_i in range(len(channels)):
with tf.variable_scope('layer{}'.format(layer_i+1), reuse=reuse):
# Convolve using the utility convolution function
            # This requires the number of output filters,
            # and the size of the kernel in `k_h` and `k_w`.
# By default, this will use a stride of 2, meaning
# each new layer will be downsampled by 2.
h, W = utils.conv2d(...
# Now apply the activation function
h = activation(h)
# Store each hidden layer
hs.append(h)
# Finally, return the encoding.
return h, hs
Explanation: <a name="building-the-encoder"></a>
Building the Encoder
Let's build our encoder just like in Session 3. We'll create a function which accepts the input placeholder, a list of dimensions describing the number of convolutional filters in each layer, and a list of filter sizes to use for the kernel sizes in each convolutional layer. We'll also pass in a parameter for which activation function to apply.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
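# A hedged sketch of the blank utils.conv2d call inside the loop above, using
# the same signature this notebook uses later for the VAEGAN encoder (a stride
# of 2 is also the default):
#     h, W = utils.conv2d(h, channels[layer_i],
#                         k_h=filter_sizes[layer_i],
#                         k_w=filter_sizes[layer_i],
#                         d_h=2, d_w=2,
#                         reuse=reuse)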
def discriminator(X,
channels=[50, 50, 50, 50],
filter_sizes=[4, 4, 4, 4],
activation=utils.lrelu,
reuse=None):
# We'll scope these variables to "discriminator_real"
with tf.variable_scope('discriminator', reuse=reuse):
# Encode X:
H, Hs = encoder(X, channels, filter_sizes, activation, reuse)
# Now make one last layer with just 1 output. We'll
# have to reshape to 2-d so that we can create a fully
# connected layer:
shape = H.get_shape().as_list()
H = tf.reshape(H, [-1, shape[1] * shape[2] * shape[3]])
# Now we can connect our 2D layer to a single neuron output w/
# a sigmoid activation:
D, W = utils.linear(...
return D
Explanation: <a name="building-the-discriminator-for-the-training-samples"></a>
Building the Discriminator for the Training Samples
Finally, let's take the output of our encoder, and make sure it has just 1 value by using a fully connected layer. We can use the libs/utils module's, linear layer to do this, which will also reshape our 4-dimensional tensor to a 2-dimensional one prior to using the fully connected layer.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
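# A possible completion of the blank utils.linear call above, mirroring how the
# VAEGAN discriminator is written later in this notebook: flatten to 2-D, then
# a single sigmoid output.
#     D, W = utils.linear(x=H, n_output=1, activation=tf.nn.sigmoid,
#                         name='fc', reuse=reuse)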
D_real = discriminator(X)
Explanation: Now let's create the discriminator for the real training data coming from X:
End of explanation
graph = tf.get_default_graph()
nb_utils.show_graph(graph.as_graph_def())
Explanation: And we can see what the network looks like now:
End of explanation
# We'll need some variables first. This will be how many
# channels our generator's feature vector has. Experiment w/
# this if you are training your own network.
n_code = 16
# And in total how many feature it has, including the spatial dimensions.
n_latent = (n_pixels // 16) * (n_pixels // 16) * n_code
# Let's build the 2-D placeholder, which is the 1-d feature vector for every
# element in our batch. We'll then reshape this to 4-D for the decoder.
Z = tf.placeholder(name='Z', shape=[None, n_latent], dtype=tf.float32)
# Now we can reshape it to input to the decoder. Here we have to
# be mindful of the height and width as described before. We need
# to make the height and width a factor of the final height and width
# that we want. Since we are using strided convolutions of 2, then
# we can say with 4 layers, that first decoder's layer should be:
# n_pixels / 2 / 2 / 2 / 2, or n_pixels / 16:
Z_tensor = tf.reshape(Z, [-1, n_pixels // 16, n_pixels // 16, n_code])
Explanation: <a name="building-the-decoder"></a>
Building the Decoder
Now we're ready to build the Generator, or decoding network. This network takes as input a vector of features and will try to produce an image that looks like our training data. We'll send this synthesized image to our discriminator which we've just built above.
Let's start by building the input to this network. We'll need a placeholder for the input features to this network. We have to be mindful of how many features we have. The feature vector for the Generator will eventually need to form an image. What we can do is create a 1-dimensional vector of values for each element in our batch, giving us [None, n_features]. We can then reshape this to a 4-dimensional Tensor so that we can build a decoder network just like in Session 3.
But how do we assign the values from our 1-d feature vector (or 2-d tensor with Batch number of them) to the 3-d shape of an image (or 4-d tensor with Batch number of them)? We have to go from the number of features in our 1-d feature vector, let's say n_latent to height x width x channels through a series of convolutional transpose layers. One way to approach this is think of the reverse process. Starting from the final decoding of height x width x channels, I will use convolution with a stride of 2, so downsample by 2 with each new layer. So the second to last decoder layer would be, height // 2 x width // 2 x ?. If I look at it like this, I can use the variable n_pixels denoting the height and width to build my decoder, and set the channels to whatever I want.
Let's start with just our 2-d placeholder which will have None x n_features, then convert it to a 4-d tensor ready for the decoder part of the network (a.k.a. the generator).
End of explanation
def decoder(z, dimensions, channels, filter_sizes,
activation=tf.nn.relu, reuse=None):
h = z
hs = []
for layer_i in range(len(dimensions)):
with tf.variable_scope('layer{}'.format(layer_i+1), reuse=reuse):
h, W = utils.deconv2d(x=h,
n_output_h=dimensions[layer_i],
n_output_w=dimensions[layer_i],
n_output_ch=channels[layer_i],
k_h=filter_sizes[layer_i],
k_w=filter_sizes[layer_i],
reuse=reuse)
h = activation(h)
hs.append(h)
return h, hs
Explanation: Now we'll build the decoder in much the same way as we built our encoder. And exactly as we've done in Session 3! This requires one additional parameter, "channels", which is how many output filters we want for each net layer. We'll interpret the dimensions as the height and width of the tensor in each new layer, and the filter_sizes as the size of the filters used for convolution. We'll default to using a stride of two which will downsample each layer. We're also going to collect each hidden layer h in a list. We'll end up needing this for Part 2 when we combine the variational autoencoder w/ the generative adversarial network.
End of explanation
# Explore these parameters.
def generator(Z,
dimensions=[n_pixels//8, n_pixels//4, n_pixels//2, n_pixels],
channels=[50, 50, 50, n_channels],
filter_sizes=[4, 4, 4, 4],
activation=utils.lrelu):
with tf.variable_scope('generator'):
G, Hs = decoder(Z_tensor, dimensions, channels, filter_sizes, activation)
return G
Explanation: <a name="building-the-generator"></a>
Building the Generator
Now we're ready to use our decoder to take in a vector of features and generate something that looks like our training images. We have to ensure that the last layer produces the same output shape as the discriminator's input. E.g. with n_pixels = 32 we used a [None, 32, 32, 3] input to the discriminator, so our generator needs to also output [None, 32, 32, 3] tensors. In other words, we have to ensure the last element in our dimensions list is n_pixels, and the last element in our channels list is 3.
End of explanation
G = generator(Z)
graph = tf.get_default_graph()
nb_utils.show_graph(graph.as_graph_def())
Explanation: Now let's call the generator function with our input placeholder Z. This will take our feature vector and generate something in the shape of an image.
End of explanation
D_fake = discriminator(G, reuse=True)
Explanation: <a name="building-the-discriminator-for-the-generated-samples"></a>
Building the Discriminator for the Generated Samples
Lastly, we need another discriminator which takes as input our generated images. Recall the discriminator that we have made only takes as input our placeholder X which is for our actual training samples. We'll use the same function for creating our discriminator and reuse the variables we already have. This is the crucial part! We aren't making new trainable variables, but reusing the ones we have. We just create a new set of operations that takes as input our generated image. So we'll have a whole new set of operations exactly like the ones we have created for our first discriminator. But we are going to use the exact same variables as our first discriminator, so that we optimize the same values.
End of explanation
nb_utils.show_graph(graph.as_graph_def())
Explanation: Now we can look at the graph and see the new discriminator inside the node for the discriminator. You should see the original discriminator and a new graph of a discriminator within it, but all the weights are shared with the original discriminator.
End of explanation
with tf.variable_scope('loss/generator'):
loss_G = tf.reduce_mean(utils.binary_cross_entropy(D_fake, tf.ones_like(D_fake)))
Explanation: <a name="gan-loss-functions"></a>
GAN Loss Functions
We now have all the components to our network. We just have to train it. This is the notoriously tricky bit. We will have 3 different loss measures instead of our typical network with just a single loss. We'll later connect each of these loss measures to two optimizers, one for the generator and another for the discriminator, and then pin them against each other and see which one wins! Exciting times!
Recall from Session 3's Supervised Network, we created a binary classification task: music or speech. We again have a binary classification task: real or fake. So our loss metric will again use the binary cross entropy to measure the loss of our three different modules: the generator, the discriminator for our real images, and the discriminator for our generated images.
To find out the loss function for our generator network, answer the question, what makes the generator successful? Successfully fooling the discriminator. When does that happen? When the discriminator for the fake samples produces all ones. So our binary cross entropy measure will measure the cross entropy with our predicted distribution and the true distribution which has all ones.
End of explanation
with tf.variable_scope('loss/discriminator/real'):
loss_D_real = utils.binary_cross_entropy(D_real, ...
with tf.variable_scope('loss/discriminator/fake'):
loss_D_fake = utils.binary_cross_entropy(D_fake, ...
with tf.variable_scope('loss/discriminator'):
loss_D = tf.reduce_mean((loss_D_real + loss_D_fake) / 2)
nb_utils.show_graph(graph.as_graph_def())
Explanation: What we've just written is a loss function for our generator. The generator is optimized when the discriminator for the generated samples produces all ones. In contrast to the generator, the discriminator will have 2 measures to optimize. One which is the opposite of what we have just written above, as well as 1 more measure for the real samples. Try writing these two losses and we'll combine them using their average. We want to optimize the Discriminator for the real samples producing all 1s, and the Discriminator for the fake samples producing all 0s:
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
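# A possible completion of the two blanks above (ignoring the variable scopes
# for brevity): the discriminator should predict all 1s for real samples and
# all 0s for generated samples.
loss_D_real = utils.binary_cross_entropy(D_real, tf.ones_like(D_real))
loss_D_fake = utils.binary_cross_entropy(D_fake, tf.zeros_like(D_fake))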
# Grab just the variables corresponding to the discriminator
# and just the generator:
vars_d = [v for v in tf.trainable_variables()
if ...]
print('Training discriminator variables:')
[print(v.name) for v in tf.trainable_variables()
if v.name.startswith('discriminator')]
vars_g = [v for v in tf.trainable_variables()
if ...]
print('Training generator variables:')
[print(v.name) for v in tf.trainable_variables()
if v.name.startswith('generator')]
Explanation: With our loss functions, we can create an optimizer for the discriminator and generator:
<a name="building-the-optimizers-w-regularization"></a>
Building the Optimizers w/ Regularization
We're almost ready to create our optimizers. We just need to do one extra thing. Recall that our loss for our generator has a flow from the generator through the discriminator. If we are training both the generator and the discriminator, we have two measures which both try to optimize the discriminator, but in opposite ways: the generator's loss would try to optimize the discriminator to be bad at its job, and the discriminator's loss would try to optimize it to be good at its job. This would be counter-productive, trying to optimize opposing losses. What we want is for the generator to get better, and the discriminator to get better. Not for the discriminator to get better, then get worse, then get better, etc... The way we do this is when we optimize our generator, we let the gradient flow through the discriminator, but we do not update the variables in the discriminator. Let's try and grab just the discriminator variables and just the generator variables below:
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
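# One way to fill in the two blanks above, reusing the same name-prefix test
# that the print statements in that cell already use:
vars_d = [v for v in tf.trainable_variables() if v.name.startswith('discriminator')]
vars_g = [v for v in tf.trainable_variables() if v.name.startswith('generator')]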
d_reg = tf.contrib.layers.apply_regularization(
tf.contrib.layers.l2_regularizer(1e-6), vars_d)
g_reg = tf.contrib.layers.apply_regularization(
tf.contrib.layers.l2_regularizer(1e-6), vars_g)
Explanation: We can also apply regularization to our network. This will penalize weights in the network for growing too large.
End of explanation
learning_rate = 0.0001
lr_g = tf.placeholder(tf.float32, shape=[], name='learning_rate_g')
lr_d = tf.placeholder(tf.float32, shape=[], name='learning_rate_d')
Explanation: The last thing you may want to try is creating a separate learning rate for each of your generator and discriminator optimizers like so:
End of explanation
opt_g = tf.train.AdamOptimizer(learning_rate=lr_g).minimize(...)
opt_d = tf.train.AdamOptimizer(learning_rate=lr_d).minimize(loss_D + d_reg, var_list=vars_d)
Explanation: Now you can feed the placeholders to your optimizers. If you run into errors creating these, then you likely have a problem with your graph's definition! Be sure to go back and reset the default graph and check the sizes of your different operations/placeholders.
With your optimizers, you can now train the network by "running" the optimizer variables with your session. You'll need to set the var_list parameter of the minimize function to only train the variables for the discriminator and same for the generator's optimizer:
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
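# A hedged sketch of the generator's optimizer left blank above, mirroring
# opt_d: minimize the regularized generator loss with respect to the
# generator's variables only.
opt_g = tf.train.AdamOptimizer(learning_rate=lr_g).minimize(loss_G + g_reg, var_list=vars_g)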
# You'll want to change this to your own data if you end up training your own GAN.
batch_size = 64
n_epochs = 1
crop_shape = [n_pixels, n_pixels, 3]
crop_factor = 0.8
input_shape = [218, 178, 3]
files = datasets.CELEB()
batch = dataset_utils.create_input_pipeline(
files=files,
batch_size=batch_size,
n_epochs=n_epochs,
crop_shape=crop_shape,
crop_factor=crop_factor,
shape=input_shape)
Explanation: <a name="loading-a-dataset"></a>
Loading a Dataset
Let's use the Celeb Dataset just for demonstration purposes. In Part 2, you can explore using your own dataset. This code is exactly the same as we did in Session 3's homework with the VAE.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
ckpt_name = './gan.ckpt'
sess = tf.Session()
saver = tf.train.Saver()
sess.run(tf.global_variables_initializer())
coord = tf.train.Coordinator()
tf.get_default_graph().finalize()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
if os.path.exists(ckpt_name + '.index') or os.path.exists(ckpt_name):
saver.restore(sess, ckpt_name)
print("VAE model restored.")
n_examples = 10
zs = np.random.uniform(0.0, 1.0, [4, n_latent]).astype(np.float32)
zs = utils.make_latent_manifold(zs, n_examples)
Explanation: <a name="training"></a>
Training
We'll now go through the setup of training the network. We won't actually spend the time to train the network but just see how it would be done. This is because in Part 2, we'll see an extension to this network which makes it much easier to train.
End of explanation
equilibrium = 0.693
margin = 0.2
Explanation: <a name="equilibrium"></a>
Equilibrium
Equilibrium is at 0.693. Why? Consider what the cost is measuring, the binary cross entropy. If we have random guesses, then we have as many 0s as we have 1s. And on average, we'll be 50% correct. The binary cross entropy is:
\begin{align}
\sum_i \text{X}_i * \text{log}(\tilde{\text{X}}_i) + (1 - \text{X}_i) * \text{log}(1 - \tilde{\text{X}}_i)
\end{align}
Which is written out in tensorflow as:
python
(-(x * tf.log(z) + (1. - x) * tf.log(1. - z)))
Where x is the discriminator's prediction of the true distribution, in the case of GANs, the input images, and z is the discriminator's prediction of the generated images corresponding to the mathematical notation of $\tilde{\text{X}}$. We sum over all features, but in the case of the discriminator, we have just 1 feature, the guess of whether it is a true image or not. If our discriminator guesses at chance, i.e. 0.5, then we'd have something like:
\begin{align}
0.5 * \text{log}(0.5) + (1 - 0.5) * \text{log}(1 - 0.5) = -0.693
\end{align}
So this is what we'd expect at the start of learning and from a game theoretic point of view, where we want things to remain. So unlike our previous networks, where our loss continues to drop closer and closer to 0, we want our loss to waver around this value as much as possible, and hope for the best.
End of explanation
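# A quick numerical sanity check of the 0.693 equilibrium value described above
# (just a sketch, not part of the training code): the binary cross entropy of a
# chance-level 0.5 prediction is -ln(0.5).
import numpy as np
print(-(0.5 * np.log(0.5) + (1 - 0.5) * np.log(1 - 0.5)))  # ~0.693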
t_i = 0
batch_i = 0
epoch_i = 0
n_files = len(files)
if not os.path.exists('imgs'):
os.makedirs('imgs')
while epoch_i < n_epochs:
batch_i += 1
batch_xs = sess.run(batch) / 255.0
batch_zs = np.random.uniform(
0.0, 1.0, [batch_size, n_latent]).astype(np.float32)
real_cost, fake_cost = sess.run([
loss_D_real, loss_D_fake],
feed_dict={
X: batch_xs,
Z: batch_zs})
real_cost = np.mean(real_cost)
fake_cost = np.mean(fake_cost)
if (batch_i % 20) == 0:
print(batch_i, 'real:', real_cost, '/ fake:', fake_cost)
gen_update = True
dis_update = True
if real_cost > (equilibrium + margin) or \
fake_cost > (equilibrium + margin):
gen_update = False
if real_cost < (equilibrium - margin) or \
fake_cost < (equilibrium - margin):
dis_update = False
if not (gen_update or dis_update):
gen_update = True
dis_update = True
if gen_update:
sess.run(opt_g,
feed_dict={
Z: batch_zs,
lr_g: learning_rate})
if dis_update:
sess.run(opt_d,
feed_dict={
X: batch_xs,
Z: batch_zs,
lr_d: learning_rate})
if batch_i % (n_files // batch_size) == 0:
batch_i = 0
epoch_i += 1
print('---------- EPOCH:', epoch_i)
# Plot example reconstructions from latent layer
recon = sess.run(G, feed_dict={Z: zs})
recon = np.clip(recon, 0, 1)
m1 = utils.montage(recon.reshape([-1] + crop_shape),
'imgs/manifold_%08d.png' % t_i)
recon = sess.run(G, feed_dict={Z: batch_zs})
recon = np.clip(recon, 0, 1)
m2 = utils.montage(recon.reshape([-1] + crop_shape),
'imgs/reconstructions_%08d.png' % t_i)
fig, axs = plt.subplots(1, 2, figsize=(15, 10))
axs[0].imshow(m1)
axs[1].imshow(m2)
plt.show()
t_i += 1
# Save the variables to disk.
save_path = saver.save(sess, "./" + ckpt_name,
global_step=batch_i,
write_meta_graph=False)
print("Model saved in file: %s" % save_path)
# Tell all the threads to shutdown.
coord.request_stop()
# Wait until all threads have finished.
coord.join(threads)
# Clean up the session.
sess.close()
Explanation: When we go to train the network, we switch back and forth between each optimizer, feeding in the appropriate values for each optimizer. The opt_g optimizer only requires the Z and lr_g placeholders, while the opt_d optimizer requires the X, Z, and lr_d placeholders.
Don't train this network for very long because GANs are a huge pain to train and require a lot of fiddling. They very easily get stuck in their adversarial process, or get overtaken by one or the other, resulting in a useless model. What you need to develop is a steady equilibrium that optimizes both. That could easily take two weeks of fiddling just to get the GAN to train, leaving no time for the rest of the assignment. They require a lot of memory/cpu and can take many days to train once you have settled on an architecture/training process/dataset. Just let it run for a short time and then interrupt the kernel (don't restart!), then continue to the next cell.
From there, we'll go over an extension to the GAN which uses a VAE like we used in Session 3. By using this extra network, we can actually train a better model in a fraction of the time and with much more ease! But the network's definition is a bit more complicated. Let's see how the GAN is trained first and then we'll train the VAE/GAN network instead. While training, the "real" and "fake" cost will be printed out. See how this cost wavers around the equilibrium and how we enforce it to try and stay around there by including a margin and some simple logic for updates. This is highly experimental and the research does not have a good answer for the best practice on how to train a GAN. I.e., some people will set the learning rate to some ratio of the performance between fake/real networks, others will have a fixed update schedule but train the generator twice and the discriminator only once.
End of explanation
tf.reset_default_graph()
Explanation: <a name="part-2---variational-auto-encoding-generative-adversarial-network-vaegan"></a>
Part 2 - Variational Auto-Encoding Generative Adversarial Network (VAEGAN)
In our definition of the generator, we started with a feature vector, Z. This feature vector was not connected to anything before it. Instead, we had to randomly create its values using a random number generator of its n_latent values from -1 to 1, and this range was chosen arbitrarily. It could have been 0 to 1, or -3 to 3, or 0 to 100. In any case, the network would have had to learn to transform those values into something that looked like an image. There was no way for us to take an image, and find the feature vector that created it. In other words, it was not possible for us to encode an image.
The closest thing to an encoding we had was taking an image and feeding it to the discriminator, which would output a 0 or 1. But what if we had another network that allowed us to encode an image, and then we used this network for both the discriminator and generative parts of the network? That's the basic idea behind the VAEGAN: https://arxiv.org/abs/1512.09300. It is just like the regular GAN, except we also use an encoder to create our feature vector Z.
We then get the best of both worlds: a GAN that looks more or less the same, but uses the encoding from an encoder instead of an arbitrary feature vector; and an autoencoder that can model an input distribution using a trained distance function, the discriminator, leading to nicer encodings/decodings.
Let's try to build it! Refer to the paper for the intricacies and a great read. Luckily, by building the encoder and decoder functions, we're almost there. We just need a few more components and will change these slightly.
Let's reset our graph and recompose our network as a VAEGAN:
End of explanation
# placeholder for batch normalization
is_training = tf.placeholder(tf.bool, name='istraining')
Explanation: <a name="batch-normalization"></a>
Batch Normalization
You may have noticed from the VAE code that I've used something called "batch normalization". This is a pretty effective technique for regularizing the training of networks by "reducing internal covariate shift". The basic idea is that given a minibatch, we optimize the gradient for this small sample of the greater population. But this small sample may have different characteristics than the entire population's gradient. Consider the most extreme case, a minibatch of 1. In this case, we overfit our gradient to optimize the gradient of the single observation. If our minibatch is too large, say the size of the entire population, we aren't able to maneuver the loss manifold at all and the entire loss is averaged in a way that doesn't let us optimize anything. What we want to do is find a happy medium between a too-smooth loss surface (i.e. every observation), and a very peaky loss surface (i.e. a single observation). Up until now we only used mini-batches to help with this. But we can also approach it by "smoothing" our updates between each mini-batch. That would effectively smooth the manifold of the loss space. Those of you familiar with signal processing will see this as a sort of low-pass filter on the gradient updates.
In order for us to use batch normalization, we need another placeholder which is a simple boolean: True or False, denoting when we are training. We'll use this placeholder to conditionally update batch normalization's statistics required for normalizing our minibatches. Let's create the placeholder and then I'll get into how to use this.
End of explanation
from tensorflow.contrib.layers import batch_norm
help(batch_norm)
Explanation: The original paper that introduced the idea suggests using batch normalization "pre-activation", meaning after the weight multiplication or convolution, and before the nonlinearity. We can use the tensorflow.contrib.layers.batch_norm module to apply batch normalization to any input tensor given the tensor and the placeholder defining whether or not we are training. Let's use this module and you can inspect the code inside the module in your own time if it interests you.
End of explanation
def encoder(x, is_training, channels, filter_sizes, activation=tf.nn.tanh, reuse=None):
# Set the input to a common variable name, h, for hidden layer
h = x
print('encoder/input:', h.get_shape().as_list())
# Now we'll loop over the list of dimensions defining the number
# of output filters in each layer, and collect each hidden layer
hs = []
for layer_i in range(len(channels)):
with tf.variable_scope('layer{}'.format(layer_i+1), reuse=reuse):
# Convolve using the utility convolution function
            # This requires the number of output filters,
            # and the size of the kernel in `k_h` and `k_w`.
# By default, this will use a stride of 2, meaning
# each new layer will be downsampled by 2.
h, W = utils.conv2d(h, channels[layer_i],
k_h=filter_sizes[layer_i],
k_w=filter_sizes[layer_i],
d_h=2,
d_w=2,
reuse=reuse)
h = batch_norm(h, is_training=is_training)
# Now apply the activation function
h = activation(h)
print('layer:', layer_i, ', shape:', h.get_shape().as_list())
# Store each hidden layer
hs.append(h)
# Finally, return the encoding.
return h, hs
Explanation: <a name="building-the-encoder-1"></a>
Building the Encoder
We can now change our encoder to accept the is_training placeholder and apply batch_norm just before the activation function is applied:
End of explanation
n_pixels = 64
n_channels = 3
input_shape = [None, n_pixels, n_pixels, n_channels]
# placeholder for the input to the network
X = tf.placeholder(...)
Explanation: Let's now create the input to the network using a placeholder. We can try a slightly larger image this time. But be careful experimenting with much larger images as this is a big network.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
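# A sketch of the TODO above, following the same pattern as Part 1 but now with
# the 64 x 64 x 3 input_shape defined in this cell:
X = tf.placeholder(name='X', shape=input_shape, dtype=tf.float32)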
channels = [64, 64, 64]
filter_sizes = [5, 5, 5]
activation = tf.nn.elu
n_hidden = 128
with tf.variable_scope('encoder'):
H, Hs = encoder(...
Z = utils.linear(H, n_hidden)[0]
Explanation: And now we'll connect the input to an encoder network. We'll also use the tf.nn.elu activation instead. Explore other activations but I've found this to make the training much faster (e.g. 10x faster at least!). See the paper for more details: Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
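# A hedged sketch of the cell above with the blank filled in: connect X to the
# batch-normalized encoder (which now needs the is_training placeholder), then
# map the encoding down to n_hidden features, as in the original cell.
with tf.variable_scope('encoder'):
    H, Hs = encoder(X, is_training, channels, filter_sizes, activation)
    Z = utils.linear(H, n_hidden)[0]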
def variational_bayes(h, n_code):
# Model mu and log(\sigma)
z_mu = tf.nn.tanh(utils.linear(h, n_code, name='mu')[0])
z_log_sigma = 0.5 * tf.nn.tanh(utils.linear(h, n_code, name='log_sigma')[0])
# Sample from noise distribution p(eps) ~ N(0, 1)
epsilon = tf.random_normal(tf.stack([tf.shape(h)[0], n_code]))
# Sample from posterior
z = z_mu + tf.multiply(epsilon, tf.exp(z_log_sigma))
# Measure loss
loss_z = -0.5 * tf.reduce_sum(
1.0 + 2.0 * z_log_sigma - tf.square(z_mu) - tf.exp(2.0 * z_log_sigma),
1)
return z, z_mu, z_log_sigma, loss_z
Explanation: <a name="building-the-variational-layer"></a>
Building the Variational Layer
In Session 3, we introduced the idea of Variational Bayes when we used the Variational Auto Encoder. The variational bayesian approach requires a richer understanding of probabilistic graphical models and bayesian methods which we weren't able to go over in this course (it requires a few courses all by itself!). For that reason, please treat this as a "black box" in this course.
For those of you that are more familiar with graphical models, Variational Bayesian methods attempt to model an approximate joint distribution of $Q(Z)$ using some distance function to the true distribution $P(X)$. Kingma and Welling show how this approach can be used in a graphical model resembling an autoencoder and can be trained using KL-Divergence, or $KL(Q(Z) || P(X))$. The distribution Q(Z) is the variational distribution, and attempts to model the lower-bound of the true distribution $P(X)$ through the minimization of the KL-divergence. Another way to look at this is the encoder of the network is trying to model the parameters of a known distribution, the Gaussian Distribution, through a minimization of this lower bound. We assume that this distribution resembles the true distribution, but it is merely a simplification of the true distribution. To learn more about this, I highly recommend picking up the book by Christopher Bishop called "Pattern Recognition and Machine Learning" and reading the original Kingma and Welling paper on Variational Bayes.
Now back to coding, we'll create a general variational layer that does exactly the same thing as our VAE in session 3. Treat this as a black box if you are unfamiliar with the math. It takes an input encoding, h, and an integer, n_code defining how many latent Gaussians to use to model the latent distribution. In return, we get the latent encoding from sampling the Gaussian layer, z, the mean and log standard deviation, as well as the prior loss, loss_z.
End of explanation
# Experiment w/ values between 2 - 100
# depending on how difficult the dataset is
n_code = 32
with tf.variable_scope('encoder/variational'):
Z, Z_mu, Z_log_sigma, loss_Z = variational_bayes(h=Z, n_code=n_code)
Explanation: Let's connect this layer to our encoding, and keep all the variables it returns. Treat this as a black box if you are unfamiliar with variational bayes!
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
def decoder(z, is_training, dimensions, channels, filter_sizes,
activation=tf.nn.elu, reuse=None):
h = z
for layer_i in range(len(dimensions)):
with tf.variable_scope('layer{}'.format(layer_i+1), reuse=reuse):
h, W = utils.deconv2d(x=h,
n_output_h=dimensions[layer_i],
n_output_w=dimensions[layer_i],
n_output_ch=channels[layer_i],
k_h=filter_sizes[layer_i],
k_w=filter_sizes[layer_i],
reuse=reuse)
h = batch_norm(h, is_training=is_training)
h = activation(h)
return h
Explanation: <a name="building-the-decoder-1"></a>
Building the Decoder
In the GAN network, we built a decoder and called it the generator network. Same idea here. We can use these terms interchangeably. Before we connect our latent encoding, Z to the decoder, we'll implement batch norm in our decoder just like we did with the encoder. This is a simple fix: add a second argument for is_training and then apply batch normalization just after the deconv2d operation and just before the nonlinear activation.
End of explanation
dimensions = [n_pixels // 8, n_pixels // 4, n_pixels // 2, n_pixels]
channels = [30, 30, 30, n_channels]
filter_sizes = [4, 4, 4, 4]
activation = tf.nn.elu
n_latent = n_code * (n_pixels // 16)**2
with tf.variable_scope('generator'):
Z_decode = utils.linear(
Z, n_output=n_latent, name='fc', activation=activation)[0]
Z_decode_tensor = tf.reshape(
Z_decode, [-1, n_pixels//16, n_pixels//16, n_code], name='reshape')
G = decoder(
Z_decode_tensor, is_training, dimensions,
channels, filter_sizes, activation)
Explanation: Now we'll build a decoder just like in Session 3, and just like our Generator network in Part 1. In Part 1, we created Z as a placeholder which we would have had to feed in as random values. However, now we have an explicit coding of an input image in X stored in Z by having created the encoder network.
End of explanation
def discriminator(X,
is_training,
channels=[50, 50, 50, 50],
filter_sizes=[4, 4, 4, 4],
activation=tf.nn.elu,
reuse=None):
# We'll scope these variables to "discriminator_real"
with tf.variable_scope('discriminator', reuse=reuse):
H, Hs = encoder(
X, is_training, channels, filter_sizes, activation, reuse)
shape = H.get_shape().as_list()
H = tf.reshape(
H, [-1, shape[1] * shape[2] * shape[3]])
D, W = utils.linear(
x=H, n_output=1, activation=tf.nn.sigmoid, name='fc', reuse=reuse)
return D, Hs
Explanation: Now we need to build our discriminators. We'll need to add a parameter for the is_training placeholder. We're also going to keep track of every hidden layer in the discriminator. Our encoder already returns the Hs of each layer. Alternatively, we could poll the graph for each layer in the discriminator and ask for the corresponding layer names. We're going to need these layers when building our costs.
End of explanation
D_real, Hs_real = discriminator(X, is_training)
D_fake, Hs_fake = discriminator(G, is_training, reuse=True)
Explanation: Recall the regular GAN and DCGAN required 2 discriminators: one for the generated samples in Z, and one for the input samples in X. We'll do the same thing here. One discriminator for the real input data, X, which the discriminator will try to predict as 1s, and another discriminator for the generated samples that go from X through the encoder to Z, and finally through the decoder to G. The discriminator will be trained to try and predict these as 0s, whereas the generator will be trained to try and predict these as 1s.
End of explanation
with tf.variable_scope('loss'):
# Loss functions
loss_D_llike = 0
for h_real, h_fake in zip(Hs_real, Hs_fake):
loss_D_llike += tf.reduce_sum(tf.squared_difference(
utils.flatten(h_fake), utils.flatten(h_real)), 1)
eps = 1e-12
loss_real = tf.log(D_real + eps)
loss_fake = tf.log(1 - D_fake + eps)
loss_GAN = tf.reduce_sum(loss_real + loss_fake, 1)
gamma = 0.75
loss_enc = tf.reduce_mean(loss_Z + loss_D_llike)
loss_dec = tf.reduce_mean(gamma * loss_D_llike - loss_GAN)
loss_dis = -tf.reduce_mean(loss_GAN)
nb_utils.show_graph(tf.get_default_graph().as_graph_def())
Explanation: <a name="building-vaegan-loss-functions"></a>
Building VAE/GAN Loss Functions
Let's now see how we can compose our loss. We have 3 losses for our discriminator. Along with measuring the binary cross entropy between each of them, we're going to also measure each layer's loss from our two discriminators using an l2-loss, and this will form our loss for the log likelihood measure. The details of how these are constructed are explained in more detail in the paper: https://arxiv.org/abs/1512.09300 - please refer to this paper for more details that are way beyond the scope of this course! One parameter within this to pay attention to is gamma, which the authors of the paper suggest controls the weighting between content and style, just like in Session 4's Style Net implementation.
End of explanation
learning_rate = 0.0001
opt_enc = tf.train.AdamOptimizer(
learning_rate=learning_rate).minimize(
loss_enc,
var_list=[var_i for var_i in tf.trainable_variables()
if ...])
opt_gen = tf.train.AdamOptimizer(
learning_rate=learning_rate).minimize(
loss_dec,
var_list=[var_i for var_i in tf.trainable_variables()
if ...])
opt_dis = tf.train.AdamOptimizer(
learning_rate=learning_rate).minimize(
loss_dis,
var_list=[var_i for var_i in tf.trainable_variables()
if var_i.name.startswith('discriminator')])
Explanation: <a name="creating-the-optimizers"></a>
Creating the Optimizers
We now have losses for our encoder, decoder, and discriminator networks. We can connect each of these to their own optimizer and start training! Just like with Part 1's GAN, we'll ensure each network's optimizer only trains its part of the network: the encoder's optimizer will only update the encoder variables, the generator's optimizer will only update the generator variables, and the discriminator's optimizer will only update the discriminator variables.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
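# A hedged sketch of the two var_list blanks above, using the same prefix test
# as the discriminator's optimizer. The names vars_enc and vars_gen are
# introduced here just for illustration:
vars_enc = [var_i for var_i in tf.trainable_variables() if var_i.name.startswith('encoder')]
vars_gen = [var_i for var_i in tf.trainable_variables() if var_i.name.startswith('generator')]
# These lists would be passed as `var_list=vars_enc` for opt_enc and
# `var_list=vars_gen` for opt_gen.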
from libs import datasets, dataset_utils
batch_size = 64
n_epochs = 100
crop_shape = [n_pixels, n_pixels, n_channels]
crop_factor = 0.8
input_shape = [218, 178, 3]
# Try w/ CELEB first to make sure it works, then explore w/ your own dataset.
files = datasets.CELEB()
batch = dataset_utils.create_input_pipeline(
files=files,
batch_size=batch_size,
n_epochs=n_epochs,
crop_shape=crop_shape,
crop_factor=crop_factor,
shape=input_shape)
Explanation: <a name="loading-the-dataset"></a>
Loading the Dataset
We'll now load our dataset just like in Part 1. Here is where you should explore with your own data!
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
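# If you want to try your own data instead of CELEB, a hypothetical sketch (the
# folder name 'myimages' and the raw image size are placeholders to replace):
# my_files = [os.path.join('myimages', f)
#             for f in os.listdir('myimages') if f.endswith('.jpg')]
# batch = dataset_utils.create_input_pipeline(
#     files=my_files, batch_size=batch_size, n_epochs=n_epochs,
#     crop_shape=crop_shape, crop_factor=crop_factor,
#     shape=[218, 178, 3])  # set to your images' raw height, width, channels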
n_samples = 10
zs = np.random.uniform(
-1.0, 1.0, [4, n_code]).astype(np.float32)
zs = utils.make_latent_manifold(zs, n_samples)
Explanation: We'll also create a latent manifold just like we've done in Session 3 and Part 1. This is a random sampling of 4 points in the latent space of Z. We then interpolate between them to create a "hyper-plane" and show the decoding of 10 x 10 points on that hyperplane.
End of explanation
# We create a session to use the graph
sess = tf.Session()
init_op = tf.global_variables_initializer()
saver = tf.train.Saver()
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
sess.run(init_op)
Explanation: Now create a session and create a coordinator to manage our queues for fetching data from the input pipeline and start our queue runners:
End of explanation
if os.path.exists("vaegan.ckpt"):
saver.restore(sess, "vaegan.ckpt")
print("GAN model restored.")
Explanation: Load an existing checkpoint if it exists to continue training.
End of explanation
n_files = len(files)
test_xs = sess.run(batch) / 255.0
if not os.path.exists('imgs'):
os.mkdir('imgs')
m = utils.montage(test_xs, 'imgs/test_xs.png')
plt.imshow(m)
Explanation: We'll also try resynthesizing a test set of images. This will help us understand how well the encoder/decoder network is doing:
End of explanation
t_i = 0
batch_i = 0
epoch_i = 0
ckpt_name = './vaegan.ckpt'
Explanation: <a name="training-1"></a>
Training
Almost ready for training. Let's get some variables which we'll need. These are the same as Part 1's training process. We'll keep track of t_i which we'll use to create images of the current manifold and reconstruction every so many iterations. And we'll keep track of the current batch number within the epoch and the current epoch number.
End of explanation
equilibrium = 0.693
margin = 0.4
Explanation: Just like in Part 1, we'll train trying to maintain an equilibrium between our Generator and Discriminator networks. You should experiment with the margin depending on how the training proceeds.
End of explanation
while epoch_i < n_epochs:
if batch_i % (n_files // batch_size) == 0:
batch_i = 0
epoch_i += 1
print('---------- EPOCH:', epoch_i)
batch_i += 1
batch_xs = sess.run(batch) / 255.0
real_cost, fake_cost, _ = sess.run([
loss_real, loss_fake, opt_enc],
feed_dict={
X: batch_xs,
is_training: True})
real_cost = -np.mean(real_cost)
fake_cost = -np.mean(fake_cost)
gen_update = True
dis_update = True
if real_cost > (equilibrium + margin) or \
fake_cost > (equilibrium + margin):
gen_update = False
if real_cost < (equilibrium - margin) or \
fake_cost < (equilibrium - margin):
dis_update = False
if not (gen_update or dis_update):
gen_update = True
dis_update = True
if gen_update:
sess.run(opt_gen, feed_dict={
X: batch_xs,
is_training: True})
if dis_update:
sess.run(opt_dis, feed_dict={
X: batch_xs,
is_training: True})
if batch_i % 50 == 0:
print('real:', real_cost, '/ fake:', fake_cost)
# Plot example reconstructions from latent layer
recon = sess.run(G, feed_dict={
Z: zs,
is_training: False})
recon = np.clip(recon, 0, 1)
m1 = utils.montage(recon.reshape([-1] + crop_shape),
'imgs/manifold_%08d.png' % t_i)
# Plot example reconstructions
recon = sess.run(G, feed_dict={
X: test_xs,
is_training: False})
recon = np.clip(recon, 0, 1)
m2 = utils.montage(recon.reshape([-1] + crop_shape),
'imgs/reconstruction_%08d.png' % t_i)
fig, axs = plt.subplots(1, 2, figsize=(15, 10))
axs[0].imshow(m1)
axs[1].imshow(m2)
plt.show()
t_i += 1
if batch_i % 200 == 0:
# Save the variables to disk.
save_path = saver.save(sess, "./" + ckpt_name,
global_step=batch_i,
write_meta_graph=False)
print("Model saved in file: %s" % save_path)
# One of the threads has issued an exception. So let's tell all the
# threads to shutdown.
coord.request_stop()
# Wait until all threads have finished.
coord.join(threads)
# Clean up the session.
sess.close()
Explanation: Now we'll train! Just like Part 1, we measure the real_cost and fake_cost. But this time, we'll always update the encoder. Based on the performance of the real/fake costs, then we'll update generator and discriminator networks. This will take a long time to produce something nice, but not nearly as long as the regular GAN network despite the additional parameters of the encoder and variational networks. Be sure to monitor the reconstructions to understand when your network has reached the capacity of its learning! For reference, on Celeb Net, I would use about 5 layers in each of the Encoder, Generator, and Discriminator networks using as input a 100 x 100 image, and a minimum of 200 channels per layer. This network would take about 1-2 days to train on an Nvidia TITAN X GPU.
End of explanation
tf.reset_default_graph()
from libs import celeb_vaegan as CV
net = CV.get_celeb_vaegan_model()
Explanation: <a name="part-3---latent-space-arithmetic"></a>
Part 3 - Latent-Space Arithmetic
<a name="loading-the-pre-trained-model"></a>
Loading the Pre-Trained Model
We're now going to work with a pre-trained VAEGAN model on the Celeb Net dataset. Let's load this model:
End of explanation
sess = tf.Session()
g = tf.get_default_graph()
tf.import_graph_def(net['graph_def'], name='net', input_map={
'encoder/variational/random_normal:0': np.zeros(512, dtype=np.float32)})
names = [op.name for op in g.get_operations()]
print(names)
Explanation: We'll load the graph_def contained inside this dictionary. It follows the same idea as the inception, vgg16, and i2v pretrained networks. It is a dictionary with the key graph_def defined, with the graph's pretrained network. It also includes labels and a preprocess key. We'll have to do one additional thing which is to turn off the random sampling from variational layer. This isn't really necessary but will ensure we get the same results each time we use the network. We'll use the input_map argument to do this. Don't worry if this doesn't make any sense, as we didn't cover the variational layer in any depth. Just know that this is removing a random process from the network so that it is completely deterministic. If we hadn't done this, we'd get slightly different results each time we used the network (which may even be desirable for your purposes).
End of explanation
X = g.get_tensor_by_name('net/x:0')
Z = g.get_tensor_by_name('net/encoder/variational/z:0')
G = g.get_tensor_by_name('net/generator/x_tilde:0')
Explanation: Now let's get the relevant parts of the network: X, the input image to the network, Z, the input image's encoding, and G, the decoded image. In many ways, this is just like the Autoencoders we learned about in Session 3, except instead of Y being the output, we have G from our generator! And the way we train it is very different: we use an adversarial process between the generator and discriminator, and use the discriminator's own distance measure to help train the network, rather than pixel-to-pixel differences.
End of explanation
files = datasets.CELEB()
img_i = 50
img = plt.imread(files[img_i])
plt.imshow(img)
Explanation: Let's get some data to play with:
End of explanation
p = CV.preprocess(img)
synth = sess.run(G, feed_dict={X: p[np.newaxis]})
fig, axs = plt.subplots(1, 2, figsize=(10, 5))
axs[0].imshow(p)
axs[1].imshow(synth[0] / synth.max())
Explanation: Now preprocess the image, and see what the generated image looks like (i.e. the lossy version of the image through the network's encoding and decoding).
End of explanation
net.keys()
len(net['labels'])
net['labels']
Explanation: So we lost a lot of detail, but the network still seems able to express quite a bit about the image. Our innermost layer, Z, is only 512 values, yet our dataset was 200k images of 64 x 64 x 3 pixels (about 2.3 GB of information). That means we're able to express nearly 2.3 GB of information with only 512 values! Having some loss of detail is certainly expected!
<a name="exploring-the-celeb-net-attributes"></a>
Exploring the Celeb Net Attributes
Let's now try and explore the attributes of our dataset. We didn't train the network with any supervised labels, but the Celeb Net dataset has 40 attributes for each of its 200k images. These are already parsed and stored for you in the net dictionary:
End of explanation
plt.imshow(img)
[net['labels'][i] for i, attr_i in enumerate(net['attributes'][img_i]) if attr_i]
Explanation: Let's see what attributes exist for one of the celeb images:
End of explanation
Z.get_shape()
Explanation: <a name="find-the-latent-encoding-for-an-attribute"></a>
Find the Latent Encoding for an Attribute
The Celeb Dataset includes attributes for each of its 200k+ images. This allows us to feed into the encoder some images that we know have a specific attribute, e.g. "smiling". We store what their encoding is and retain this distribution of encoded values. We can then look at any other image and see how it is encoded, and slightly change the encoding by adding the encoding of our smiling images to it! The result should be our image but with more smiling. That is just insane and we're going to see how to do it. First let's inspect our latent space:
End of explanation
bald_label = net['labels'].index('Bald')
bald_label
Explanation: We have 512 features that we can encode any image with. Assuming our network is doing an okay job, let's try to find the Z of the first 100 images with the 'Bald' attribute:
End of explanation
bald_img_idxs = np.where(net['attributes'][:, bald_label])[0]
bald_img_idxs
Explanation: Let's get all the bald image indexes:
End of explanation
bald_imgs = [plt.imread(files[bald_img_i])[..., :3]
for bald_img_i in bald_img_idxs[:100]]
Explanation: Now let's just load 100 of their images:
End of explanation
plt.imshow(np.mean(bald_imgs, 0).astype(np.uint8))
Explanation: Let's see if the mean image looks like a good bald person or not:
End of explanation
bald_p = np.array([CV.preprocess(bald_img_i) for bald_img_i in bald_imgs])
Explanation: Yes that is definitely a bald person. Now we're going to try to find the encoding of a bald person. One method is to try and find every other possible image and subtract the "bald" person's latent encoding. Then we could add this encoding back to any new image and hopefully it makes the image look more bald. Or we can find a bunch of bald people's encodings and then average their encodings together. This should reduce the noise from having many different attributes, but keep the signal pertaining to the baldness.
Let's first preprocess the images:
End of explanation
bald_zs = sess.run(Z, feed_dict=...
Explanation: Now we can find the latent encoding of the images by calculating Z and feeding X with our bald_p images:
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
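One possible completion of the TODO above, feeding the preprocessed bald images in as X:
# Possible completion -- encode the preprocessed bald images into the latent space
bald_zs = sess.run(Z, feed_dict={X: bald_p})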
bald_feature = np.mean(bald_zs, 0, keepdims=True)
bald_feature.shape
Explanation: Now let's calculate the mean encoding:
End of explanation
bald_generated = sess.run(G, feed_dict=...
plt.imshow(bald_generated[0] / bald_generated.max())
Explanation: Let's try and synthesize from the mean bald feature now and see how it looks:
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
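One possible completion, decoding the averaged bald encoding back through the generator:
# Possible completion -- synthesize an image from the mean bald encoding
bald_generated = sess.run(G, feed_dict={Z: bald_feature})
plt.imshow(bald_generated[0] / bald_generated.max())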
def get_features_for(label='Bald', has_label=True, n_imgs=50):
label_i = net['labels'].index(label)
label_idxs = np.where(net['attributes'][:, label_i] == has_label)[0]
label_idxs = np.random.permutation(label_idxs)[:n_imgs]
imgs = [plt.imread(files[img_i])[..., :3]
for img_i in label_idxs]
preprocessed = np.array([CV.preprocess(img_i) for img_i in imgs])
zs = sess.run(Z, feed_dict={X: preprocessed})
return np.mean(zs, 0)
Explanation: <a name="latent-feature-arithmetic"></a>
Latent Feature Arithmetic
Let's now try to write a general function for performing everything we've just done so that we can do this with many different features. We'll then try to combine them and synthesize people with the features we want them to have...
End of explanation
# Explore different attributes
z1 = get_features_for('Male', True, n_imgs=10)
z2 = get_features_for('Male', False, n_imgs=10)
z3 = get_features_for('Smiling', True, n_imgs=10)
z4 = get_features_for('Smiling', False, n_imgs=10)
b1 = sess.run(G, feed_dict={Z: z1[np.newaxis]})
b2 = sess.run(G, feed_dict={Z: z2[np.newaxis]})
b3 = sess.run(G, feed_dict={Z: z3[np.newaxis]})
b4 = sess.run(G, feed_dict={Z: z4[np.newaxis]})
fig, axs = plt.subplots(1, 4, figsize=(15, 6))
axs[0].imshow(b1[0] / b1.max()), axs[0].set_title('Male'), axs[0].grid('off'), axs[0].axis('off')
axs[1].imshow(b2[0] / b2.max()), axs[1].set_title('Not Male'), axs[1].grid('off'), axs[1].axis('off')
axs[2].imshow(b3[0] / b3.max()), axs[2].set_title('Smiling'), axs[2].grid('off'), axs[2].axis('off')
axs[3].imshow(b4[0] / b4.max()), axs[3].set_title('Not Smiling'), axs[3].grid('off'), axs[3].axis('off')
Explanation: Let's try getting some attributes positive and negative features. Be sure to explore different attributes! Also try different values of n_imgs, e.g. 2, 3, 5, 10, 50, 100. What happens with different values?
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation
notmale_vector = z2 - z1
n_imgs = 5
amt = np.linspace(0, 1, n_imgs)
zs = np.array([z1 + notmale_vector*amt_i for amt_i in amt])
g = sess.run(G, feed_dict={Z: zs})
fig, axs = plt.subplots(1, n_imgs, figsize=(20, 4))
for i, ax_i in enumerate(axs):
ax_i.imshow(np.clip(g[i], 0, 1))
ax_i.grid('off')
ax_i.axis('off')
Explanation: Now let's interpolate between the "Male" and "Not Male" categories:
End of explanation
smiling_vector = z3 - z4
amt = np.linspace(0, 1, n_imgs)
zs = np.array([z4 + smiling_vector*amt_i for amt_i in amt])
g = sess.run(G, feed_dict={Z: zs})
fig, axs = plt.subplots(1, n_imgs, figsize=(20, 4))
for i, ax_i in enumerate(axs):
ax_i.imshow(np.clip(g[i] / g[i].max(), 0, 1))
ax_i.grid('off')
Explanation: And the same for smiling:
End of explanation
n_imgs = 5
amt = np.linspace(-1.5, 2.5, n_imgs)
zs = np.array([z4 + smiling_vector*amt_i for amt_i in amt])
g = sess.run(G, feed_dict={Z: zs})
fig, axs = plt.subplots(1, n_imgs, figsize=(20, 4))
for i, ax_i in enumerate(axs):
ax_i.imshow(np.clip(g[i], 0, 1))
ax_i.grid('off')
ax_i.axis('off')
Explanation: There's also no reason why we have to be within the boundaries of 0-1. We can extrapolate beyond, in, and around the space.
End of explanation
def slerp(val, low, high):
    """Spherical interpolation. val has a range of 0 to 1."""
if val <= 0:
return low
elif val >= 1:
return high
omega = np.arccos(np.dot(low/np.linalg.norm(low), high/np.linalg.norm(high)))
so = np.sin(omega)
return np.sin((1.0-val)*omega) / so * low + np.sin(val*omega)/so * high
amt = np.linspace(0, 1, n_imgs)
zs = np.array([slerp(amt_i, z1, z2) for amt_i in amt])
g = sess.run(G, feed_dict={Z: zs})
fig, axs = plt.subplots(1, n_imgs, figsize=(20, 4))
for i, ax_i in enumerate(axs):
ax_i.imshow(np.clip(g[i], 0, 1))
ax_i.grid('off')
ax_i.axis('off')
Explanation: <a name="extensions"></a>
Extensions
Tom White, Lecturer at Victoria University School of Design, also recently demonstrated an alternative way of interpolating using a sinusoidal interpolation. He's created some of the most impressive generative images out there and luckily for us he has detailed his process in the arxiv preprint: https://arxiv.org/abs/1609.04468 - as well, be sure to check out his twitter bot, https://twitter.com/smilevector - which adds smiles to people :) - Note that the network we're using is only trained on aligned faces that are frontally facing, though this twitter bot is capable of adding smiles to any face. I suspect that he is running a face detection algorithm such as AAM, CLM, or ASM, cropping the face, aligning it, and then running a similar algorithm to what we've done above. Or else, perhaps he has trained a new model on faces that are not aligned. In any case, it is well worth checking out!
Let's now try and use sinusoidal interpolation using his implementation in plat which I've copied below:
End of explanation
img = plt.imread('parag.png')[..., :3]
img = CV.preprocess(img, crop_factor=1.0)[np.newaxis]
Explanation: It's certainly worth trying especially if you are looking to explore your own model's latent space in new and interesting ways.
Let's try and load an image that we want to play with. We need an image as similar to the Celeb Dataset as possible. Unfortunately, we don't have access to the algorithm they used to "align" the faces, so we'll need to try and get as close as possible to an aligned face image. One way you can do this is to load up one of the celeb images and try and align an image to it using e.g. Photoshop or another photo editing software that lets you blend and move the images around. That's what I did for my own face...
End of explanation
img_ = sess.run(G, feed_dict={X: img})
fig, axs = plt.subplots(1, 2, figsize=(10, 5))
axs[0].imshow(img[0]), axs[0].grid('off')
axs[1].imshow(np.clip(img_[0] / np.max(img_), 0, 1)), axs[1].grid('off')
Explanation: Let's see how the network encodes it:
End of explanation
z1 = get_features_for('Blurry', True, n_imgs=25)
z2 = get_features_for('Blurry', False, n_imgs=25)
unblur_vector = z2 - z1
z = sess.run(Z, feed_dict={X: img})
n_imgs = 5
amt = np.linspace(0, 1, n_imgs)
zs = np.array([z[0] + unblur_vector * amt_i for amt_i in amt])
g = sess.run(G, feed_dict={Z: zs})
fig, axs = plt.subplots(1, n_imgs, figsize=(20, 4))
for i, ax_i in enumerate(axs):
ax_i.imshow(np.clip(g[i] / g[i].max(), 0, 1))
ax_i.grid('off')
ax_i.axis('off')
Explanation: Notice how blurry the image is. Tom White's preprint suggests one way to sharpen the image is to find the "Blurry" attribute vector:
End of explanation
from scipy.ndimage import gaussian_filter
idxs = np.random.permutation(range(len(files)))
imgs = [plt.imread(files[idx_i]) for idx_i in idxs[:100]]
blurred = []
for img_i in imgs:
img_copy = np.zeros_like(img_i)
for ch_i in range(3):
img_copy[..., ch_i] = gaussian_filter(img_i[..., ch_i], sigma=3.0)
blurred.append(img_copy)
# Now let's preprocess the original images and the blurred ones
imgs_p = np.array([CV.preprocess(img_i) for img_i in imgs])
blur_p = np.array([CV.preprocess(img_i) for img_i in blurred])
# And then compute each of their latent features
noblur = sess.run(Z, feed_dict={X: imgs_p})
blur = sess.run(Z, feed_dict={X: blur_p})
synthetic_unblur_vector = np.mean(noblur - blur, 0)
n_imgs = 5
amt = np.linspace(0, 1, n_imgs)
zs = np.array([z[0] + synthetic_unblur_vector * amt_i for amt_i in amt])
g = sess.run(G, feed_dict={Z: zs})
fig, axs = plt.subplots(1, n_imgs, figsize=(20, 4))
for i, ax_i in enumerate(axs):
ax_i.imshow(np.clip(g[i], 0, 1))
ax_i.grid('off')
ax_i.axis('off')
Explanation: Notice that the image also gets brighter, and perhaps features other than just the blurriness of the image change. Tom's preprint suggests that this is due to the correlation that blurred images have with other things such as the brightness of the image, possibly due to biases in labeling or how photographs are taken. He suggests that another way to unblur would be to synthetically blur a set of images and find the difference in the encoding between the real and blurred images. We can try it like so:
End of explanation
z1 = get_features_for('Eyeglasses', True)
z2 = get_features_for('Eyeglasses', False)
glass_vector = z1 - z2
z = sess.run(Z, feed_dict={X: img})
n_imgs = 5
amt = np.linspace(0, 1, n_imgs)
zs = np.array([z[0] + glass_vector * amt_i + unblur_vector * amt_i for amt_i in amt])
g = sess.run(G, feed_dict={Z: zs})
fig, axs = plt.subplots(1, n_imgs, figsize=(20, 4))
for i, ax_i in enumerate(axs):
ax_i.imshow(np.clip(g[i], 0, 1))
ax_i.grid('off')
ax_i.axis('off')
Explanation: For some reason, it also doesn't like my glasses very much. Let's try and add them back.
End of explanation
n_imgs = 5
amt = np.linspace(0, 1.0, n_imgs)
zs = np.array([z[0] + glass_vector * amt_i + unblur_vector * amt_i + amt_i * smiling_vector for amt_i in amt])
g = sess.run(G, feed_dict={Z: zs})
fig, axs = plt.subplots(1, n_imgs, figsize=(20, 4))
for i, ax_i in enumerate(axs):
ax_i.imshow(np.clip(g[i], 0, 1))
ax_i.grid('off')
ax_i.axis('off')
Explanation: Well, more like sunglasses then. Let's try adding everything in there now!
End of explanation
n_imgs = 5
amt = np.linspace(0, 1.5, n_imgs)
z = sess.run(Z, feed_dict={X: imgs_p})
imgs = []
for amt_i in amt:
zs = z + synthetic_unblur_vector * amt_i + amt_i * smiling_vector
g = sess.run(G, feed_dict={Z: zs})
m = utils.montage(np.clip(g, 0, 1))
imgs.append(m)
gif.build_gif(imgs, saveto='celeb.gif')
ipyd.Image(url='celeb.gif?i={}'.format(
np.random.rand()), height=1000, width=1000)
Explanation: Well it was worth a try anyway. We can also try with a lot of images and create a gif montage of the result:
End of explanation
imgs = []
... DO SOMETHING AWESOME ! ...
gif.build_gif(imgs=imgs, saveto='vaegan.gif')
Explanation: Exploring multiple feature vectors and applying them to images from the celeb dataset to produce animations of a face, saving it as a GIF. Recall you can store each image frame in a list and then use the gif.build_gif function to create a gif. Explore your own syntheses and then include a gif of the different images you create as "celeb.gif" in the final submission. Perhaps try finding unexpected synthetic latent attributes in the same way that we created a blur attribute. You can check the documentation in scipy.ndimage for some other image processing techniques, for instance: http://www.scipy-lectures.org/advanced/image_processing/ - and see if you can find the encoding of another attribute that you then apply to your own images. You can even try it with many images and use the utils.montage function to create a large grid of images that evolves over your attributes. Or create a set of expressions perhaps. Up to you just explore!
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
End of explanation |
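One possible starting point for the exercise (it reuses imgs_p and smiling_vector from the cells above; swap in any feature vector you discover):
# Minimal starting point -- sweep one feature vector over a batch of celeb images
imgs = []
z = sess.run(Z, feed_dict={X: imgs_p})
for amt_i in np.linspace(0, 1.5, 10):
    g = sess.run(G, feed_dict={Z: z + amt_i * smiling_vector})
    imgs.append(utils.montage(np.clip(g, 0, 1)))
gif.build_gif(imgs=imgs, saveto='vaegan.gif')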
3,007 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Personal implementation of arXiv
Step1: gathering_game class test
Step3: DQN class
Just take it from [2]
Step6: Experience replay memory
This will be used during the training when the loss function to be minimized will be averaged over a minibatch (sample) of experiences drawn randomly from the replay_memory .memory object using method .sample
Step9: Policy
Step11: Initialization
Step13: Optimize
Step15: Training loop | Python Code:
# General import
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from collections import namedtuple
from itertools import count
#from copy import deepcopy
#from PIL import Image
import math
import random
import torch
import torch.nn as nn
import torch.optim as optim
import torch.autograd as autograd
import torch.nn.functional as F
#import torchvision.transforms as T
# is_ipython = 'inline' in matplotlib.get_backend()
# if is_ipython:
# from IPython import display
Explanation: Personal implementation of arXiv:1702.03037 [cs.MA].
Refs:
[1] DQN paper
[2] An implementation of a simpler game in PyTorch at http://pytorch.org/tutorials/intermediate/reinforcement_q_learning.html
Language chosen: PyTorch, since it is new, Pythonic, and runs on the GPU.
End of explanation
from modules.gathering import gathering_game
# test gathering_game class. test init functions
game_pars={}
game_pars['gamma']=.99
game_pars['N_apples']=2
game_pars['N_tagged']=5
# local vars, should not be changed
game_pars['W'] = 33 # Width, always odd
game_pars['H'] = 11 # Height, always odd
game_pars['size_obs_ahead'] = 15 # number of sites the players can see in front of them
game_pars['size_obs_side'] = 10 # number of sites the players can see on their side
test = gathering_game(game_pars)
print('pars',test.pars)
print(test.dir)
print(test.s.shape)
test.show_screen()
test.reset()
# s_t, a_{0,t}, a_{1,t}, s_{t+1}
test.show_screen()
r0,r1=test.transition_and_get_reward(test.actions_dict['stand_still'], test.actions_dict['rotate_right'])
test.show_screen()
# test of observation functions
# test of obs_0
r0,r1=test.transition_and_get_reward(test.actions_dict['rotate_right'], test.actions_dict['rotate_left'])
test.show_screen()
#print('Reward', r0,r1)
obs_0_s=test.obs_0()
to_show = obs_0_s.transpose((2,1,0))
print(to_show.shape)
plt.imshow(to_show,origin='lower')
plt.show()
# test of obs_1
obs_1_s=test.obs_1()
to_show = obs_1_s.transpose((2,1,0))
print(to_show.shape)
plt.imshow(to_show,origin='lower')
plt.show()
test.reset()
test.show_screen()
for i in range(15):
test.transition_and_get_reward(test.actions_dict['step_forward'], test.actions_dict['step_forward'])
test.show_screen()
#r0,r1=test.transition_and_get_reward(test.actions_dict['stand_still'], test.actions_dict['stand_still'])
r0,r1=test.transition_and_get_reward(test.actions_dict['step_forward'], test.actions_dict['step_forward'])
#r0,r1=test.transition_and_get_reward(test.actions_dict['step_left'], test.actions_dict['step_right'])
test.show_screen()
print('Reward',r0,r1)
r0,r1=test.transition_and_get_reward(test.actions_dict['step_right'], test.actions_dict['step_right'])
test.show_screen()
print('Reward', r0,r1)
# test the transition functions by performing random moves:
import time
def random_actions():
# init
game = gathering_game(game_pars)
# play N random actions and show on screen
N = 5
for t in range(N):
print('Time',game.global_time)
a0,a1 = (8*np.random.random((2,))).astype(int)
for k,v in game.actions_dict.items():
if a0 == v:
print('Action 0:',k)
if a1 == v:
print('Action 1:',k)
game.transition_and_get_reward(a0, a1)
game.show_screen()
time.sleep(1)
random_actions()
Explanation: gathering_game class test
End of explanation
# Helper function that computes the output size of a cross-correlation
def dim_out(dim_in,ks,stride):
return math.floor((dim_in-ks)/stride+1)
class DQN(nn.Module):
def __init__(self, hp):
        """hp = hyperparameters, dictionary"""
super(DQN, self).__init__()
# Conv2D has arguments C_in, C_out, ... where C_in is the number of input channels and C_out that of
# output channels, not to be confused with the size of the image at input and output which is automatically
# computed given the input and the kernel_size.
# Further, in the help, (N,C,H,W) are resp. number of samples, number of channels, height, width.
# Note: that instead nn.Linear requires both number of input and output neurons. The reason is that
# conv2d only has parameters in the kernel, which is independent of the number of neurons.
# Note: we do not use any normalization layer
self.C_H = hp['C_H']
ks = hp['kernel_size']
stride = hp['stride']
self.conv1 = nn.Conv2d(hp['C_in'], self.C_H, kernel_size=ks, stride=stride)
self.H1 = dim_out(hp['obs_window_H'],ks,stride)
self.W1 = dim_out(hp['obs_window_W'],ks,stride)
in_size = self.C_H*self.W1*self.H1
self.lin1 = nn.Linear(in_size, in_size) #lots of parameters!
self.conv2 = nn.Conv2d(self.C_H, self.C_H, kernel_size=ks, stride=stride)
H2 = dim_out(self.H1,ks,stride)
W2 = dim_out(self.W1,ks,stride)
in_size = self.C_H*W2*H2
self.lin2 = nn.Linear(in_size, hp['C_out'])
def forward(self, x):
# Apply rectified unit (relu) after each layer
x = F.relu(self.conv1(x))
# to feed into self.lin. we reshape x has a (size(0), rest) tensor where size(0) is number samples.
# -1 tells it to infer size automatically.
x = x.view(x.size(0), -1)
x = F.relu(self.lin1(x))
# reshape to feed it into conv2, this time:
x = x.view(x.size(0), self.C_H, self.H1, self.W1)
x = F.relu(self.conv2(x))
# reshape to feed it into lin2, this time:
x = x.view(x.size(0), -1)
x = F.relu(self.lin2(x))
return x
# TEST of DQN
hp = {}
hp['C_in'] = 3 # for RGB
hp['C_H'] = 32 # number of hidden units (or channels)
hp['C_out'] = 8 # number of actions.
hp['kernel_size'] = 5
hp['stride'] = 2
# width and height of observation region
hp['obs_window_W'] = 21
hp['obs_window_H'] = 16
#print(dim_out(dim_out(30,5,2),5,2))
model_test = DQN(hp)
for p in model_test.parameters():
print(p.size())
# test with a random smaple (use unsqueeze to get extra batch dimension)
x_test = autograd.Variable(torch.randn(3, hp['obs_window_H'], hp['obs_window_W']).unsqueeze(0))
print('x',x_test.size(),type(x_test))
y_pred = model_test(x_test)
print(y_pred.data)
print(y_pred.data.max(1))
print(y_pred.data.max(1)[1])
#print("y : ",y_pred.data.size())
#print(y_pred[0,:])
Explanation: DQN class
Just take it from [2]
End of explanation
# namedtuple: tuple subclass with elements accessible by name with . operator (here name class=name instance)
# e_t = (s_t, a_t, r_t, s_{t+1})
# globally defined and used by replay_memory
experience = namedtuple('Experience',
('observation', 'action', 'reward', 'next_observation'))
class replay_memory(object):
    """A cyclic buffer of bounded size that holds the transitions observed recently.
    It also implements a .sample() method for selecting a random batch of transitions for training."""
def __init__(self, capacity):
self.capacity = capacity
self.memory = []
self.position = 0
def push(self, *args):
        """Saves a transition."""
if len(self.memory) < self.capacity:
self.memory.append(None)
self.memory[self.position] = experience(*args)
# cyclicity:
self.position = (self.position + 1) % self.capacity
def sample(self, batch_size):
return random.sample(self.memory, batch_size)
def __len__(self):
return len(self.memory)
# test namedtuple. all its members are torch tensors
# s = torch.randn(3,2,2).unsqueeze(0)
# a = torch.Tensor([1])
# sp = torch.randn(3,2,2).unsqueeze(0)
# r = torch.Tensor([0])
# test_exp = experience(s,a,r,sp)
# test_exp.action
# test of memory: OK
N=1
batch_size = 1
rm_test = replay_memory(N)
for i in range(N):
s = torch.randn(3,2,2).unsqueeze(0)
a = torch.floor(torch.rand(1)*8)
sp = torch.randn(3,2,2).unsqueeze(0)
# r = torch.randn(1)
r = torch.ByteTensor([1])
rm_test.push(s,a,r,sp)
# this is a list of namedtuples
sample_experience = rm_test.sample(batch_size)
# Transpose the batch (see http://stackoverflow.com/a/19343/3343043 for
# detailed explanation).
# This is a namedtuple of lists
minibatch = experience(*zip(*sample_experience))
# get obs,action,next_obs,reward batches in Variable
for s in minibatch.next_observation:
if s is None:
print('########### None')
next_obs_batch = autograd.Variable(torch.cat(minibatch.next_observation),
volatile=True)
obs_batch = autograd.Variable(torch.cat(minibatch.observation))
action_batch = autograd.Variable(torch.cat(minibatch.action))
reward_batch = autograd.Variable(torch.cat(minibatch.reward))
sample_experience[0].action
minibatch.action
Explanation: Experience replay memory
This will be used during the training when the loss function to be minimized will be averaged over a minibatch (sample) of experiences drawn randomly from the replay_memory .memory object using method .sample
End of explanation
def eps_decay(eps_start, eps_end, gamma, t):
    """Returns the value of eps at time t according to epsilon decay from eps_start
    to eps_end with decay rate gamma."""
ret = eps_end + \
(eps_start - eps_end) * np.exp(-1. * t / gamma)
return ret
def policy(model, obs, n_actions, eps):
    """epsilon-greedy policy. Input:
    model : nn approximator for Q,
    obs : an observation, tensor below promoted to autograd.Variable
    n_actions : the number of possible actions (gathering, = 8)
    eps : exploration probability in [0, 1].
    Returns an action.
    """
assert(0 <= eps <= 1)
random_num = random.random()
print('rand',random_num, 'eps',eps)
if random_num > eps:
# to be adjusted eventually.
# volatile: Boolean indicating that the Variable should be used in
# inference mode (forward), i.e. don't save the history. See
# :ref:`excluding-subgraphs` for more details.
# Can be changed only on leaf Variables.
print('In max policy')
y_pred = model(autograd.Variable(obs, volatile=True))
# data.max(1) returns an array with 0 component the maximum values for each sample in the batch
# and 1 component their indices, which is selected here, so giving which action maximizes the model for Q.
return y_pred.data.max(1)[1].cpu()
else:
print('In rand policy')
return torch.LongTensor([[random.randrange(n_actions)]])
Explanation: Policy: epsilon greedy.
End of explanation
# preprocess:
def get_preprocessed_obs(game,pl):
    """Preprocessed input observation window of player pl from game.
    Convert to float, convert to torch tensor (this doesn't require a copy)
    and add a batch dimension."""
assert(pl==0 or pl==1)
if pl == 0:
ret = game.obs_0()
else:
ret = game.obs_1()
ret = np.ascontiguousarray(ret, dtype=np.float32)
ret = torch.from_numpy(ret).unsqueeze(0)
#print('my_obs',my_obs.size(),type(my_obs))
return ret
# parameters
game_pars={}
game_pars['N_apples']=2
game_pars['N_tagged']=5
# local vars, should not be changed
game_pars['W'] = 33 # Width, always odd
game_pars['H'] = 11 # Height, always odd
game_pars['size_obs_ahead'] = 15 # number of sites the players can see in front of them
game_pars['size_obs_side'] = 10 # number of sites the players can see on their side
# and hyper-parameters
hp = {}
hp['C_in'] = 3 # for RGB
hp['C_H'] = 32 # number of hidden units (or channels)
hp['C_out'] = 8 # number of actions.
hp['kernel_size'] = 5
hp['stride'] = 2
# size of the observation window, related to output of obs_*
hp['obs_window_W'] = 21
hp['obs_window_H'] = 16
# for replay_memory
mem_pars = {}
mem_pars['capacity'] = 2
mem_pars['batch_size'] = 1
# gamma = discount of reward
gamma = .99
# eps for policy
eps_start = 0.9
eps_end = 0.05
decay_rate = 200
#
# Now init the variables
#
# Q function approximators for player 0 and 1
Q_0 = DQN(hp)
Q_1 = DQN(hp)
rpl_memory_0 = replay_memory(mem_pars['capacity'])
rpl_memory_1 = replay_memory(mem_pars['capacity'])
# game definition
game = gathering_game(game_pars)
obs_0 = get_preprocessed_obs(game,0)
obs_1 = get_preprocessed_obs(game,1)
# test of policy: OK
my_obs = obs_1
# nn:
my_model = Q_1
a=policy(my_model, my_obs, game.n_actions, 0.5)
type(a[0,0])
Explanation: Initialization
End of explanation
# Choose minimum square error loss function and SGD optimizer
loss_fn = torch.nn.MSELoss(size_average=False)
optimizer_0 = optim.SGD(Q_0.parameters(),lr=0.01)
optimizer_1 = optim.SGD(Q_1.parameters(),lr=0.01)
def optimize(model, loss_fn, optimizer, rpl_memory, batch_size, gamma):
    """TODO: understand issue with volatile..."""
# if the memory is smaller than wanted, don't do anything and keep building memory
    print('In optimize: len(rpl_memory), batch_size', len(rpl_memory), batch_size)
if len(rpl_memory) < batch_size:
return
#otherwise get minibatch of experiences
# this is a list of namedtuples
sample_experience = rpl_memory.sample(batch_size)
# Transpose the batch (see http://stackoverflow.com/a/19343/3343043 for
# detailed explanation).
# This is a namedtuple of lists
minibatch = experience(*zip(*sample_experience))
print('minibatch.reward:',minibatch.reward)
# get obs,action,next_obs,reward batches in Variable
for s in minibatch.next_observation:
if s is None:
print('########### None')
# Compute a mask of non-final states and concatenate the batch elements. This to get rid of None
#non_final_mask = torch.ByteTensor(
# tuple(map(lambda s: s is not None, minibatch.next_observation)))
next_obs_batch = autograd.Variable(torch.cat(minibatch.next_observation),
volatile=True)
obs_batch = autograd.Variable(torch.cat(minibatch.observation))
action_batch = autograd.Variable(torch.cat(minibatch.action))
reward_batch = autograd.Variable(torch.cat(minibatch.reward))
# Compute Q(obs, action) - the model computes Q(obs), then we select the
# columns of actions taken
print("In optimize: obs_batch", obs_batch.data.size())
obs_action_values = model(obs_batch).gather(1, action_batch)
# Compute V(obs')=max_a Q(obs, a) for all next states.
next_obs_values = model(next_obs_batch).max(1)[0]
# Now, we don't want to mess up the loss with a volatile flag, so let's
# clear it. After this, we'll just end up with a Variable that has
# requires_grad=False
next_obs_values.volatile = False
# Compute y
y = (next_obs_values * gamma) + reward_batch
# Compute loss
loss = loss_fn(obs_action_values, y)
# Optimize the model
optimizer.zero_grad()
loss.backward()
for param in model.parameters():
param.grad.data.clamp_(-1, 1)
optimizer.step()
Explanation: Optimize
End of explanation
# training loop over episodes
def train(M,T,eps_start,eps_end,decay_rate,Q_0,Q_1,obs_0,obs_1):
...
for episode in range(M):
for t in range(T):
# policy
eps = eps_decay(eps_start, eps_end, decay_rate, t)
a_0 = policy(Q_0, obs_0, game.n_actions, eps)
a_1 = policy(Q_1, obs_1, game.n_actions, eps)
print(a_0,a_1)
# execute action in emulator. (policy returns a 1x1 tensor)
r_0, r_1 = game.transition_and_get_reward(a_0[0,0], a_1[0,0])
obs_0_p = get_preprocessed_obs(game,0)
obs_1_p = get_preprocessed_obs(game,1)
# store experience (converting r: it is only 0,1 but treat as float since then added to return)
rpl_memory_0.push(obs_0, a_0, torch.FloatTensor([r_0]), obs_0_p)
rpl_memory_1.push(obs_1, a_1, torch.FloatTensor([r_1]), obs_1_p)
obs_0 = obs_0_p
obs_1 = obs_1_p
# optimize
optimize(Q_0, loss_fn, optimizer_0, rpl_memory_0, mem_pars['batch_size'], gamma)
optimize(Q_1, loss_fn, optimizer_1, rpl_memory_1, mem_pars['batch_size'], gamma)
M = 1
T = 2
train(M,T,eps_start,eps_end,decay_rate,Q_0,Q_1,obs_0,obs_1)
Explanation: Training loop
End of explanation |
3,008 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multiprocessing and scarplet
This simple example shows how to use the match_template and compare methods with a multiprocessing worker pool.
It is available as a Jupyter notebook (link) in the repository. Sample data is provided in the data folder.
Step1: For each set of input parameters, we can start a separate masking task. These can be run in parallel, which is what scarplet does by default.
Step2: To compare, we can a loop to fit the templates sequentially. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from functools import partial
from multiprocessing import Pool
import scarplet as sl
from scarplet.datasets import load_synthetic
from scarplet.WindowedTemplate import Scarp
data = load_synthetic()
# Define parameters for search
scale = 10
age = 10.
angles = np.linspace(-np.pi / 2, np.pi / 2, 181)
nprocs = 3
Explanation: Multiprocessing and scarplet
This simple example shows how to use the match_template and compare methods with a multiprocessing worker pool.
It is available as a Jupyter notebook (link) in the repository. Sample data is provided in the data folder.
End of explanation
# Start separate search tasks
pool = Pool(processes=nprocs)
wrapper = partial(sl.match_template, data, Scarp, scale, age)
results = pool.imap(wrapper, angles, chunksize=1)
%%time
# Reduce the final results as they are completed
ny, nx = data.shape
best = sl.compare(results, nx, ny)
Explanation: For each set of input parameters, we can start a separate masking task. These can be run in parallel, which is what scarplet does by default.
End of explanation
%%time
best = np.zeros((4, ny, nx))
for angle in angles:
results = sl.match_template(data, Scarp, scale, age, angle)
best = sl.compare([best, results], nx, ny)
Explanation: To compare, we can use a loop to fit the templates sequentially.
End of explanation |
3,009 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
IDX2016B - Week 2 presentations
Classifiers in machine learning - which should I choose and how do I use it?
Kyle Willett
14 June 2016
Step1: Logistic regression
Logistic regression is a method of fitting a regression model based on one dependent variable (DV) and one or more independent variables (IVs). The difference between logistic and linear regression is that logistic regression predicts results of discrete categories, meaning that $y | x$ is the result of a Bernoulli distribution rather than a Gaussian. Linear regression is more appropriate if the dependent variable is continuous.
Advantages of logistic regression
Step2: So, now you have a predictor for the class of any future object based on the input data (length and width of sepals). For example
Step3: Decision trees
Decision trees are a non-parametric, supervised method for learning both classification and regression. It works by creating series of increasingly deeper rules for separating the independent variables based on combinations of the dependent variables. Rules can include simple thresholds on the DVs, Gini coefficient, cross-entropy, or misclassification.
Advantages of decision trees
Step4: So this model has higher accuracy on the training set since it creates smaller niches and separated areas of different classes. However, this illustrates the danger of overfitting the model; further test sets will likely have poorer performance.
Like logistic regression, you can extract both discrete predictions and probabilities for each class
Step5: How could we not overfit? Maybe try trees with different depths.
Step6: Random forest
Random forest (RF) is an example of an ensemble method of classification; this takes many individual estimators that each have some element of randomness built into them, and then combines the individual results to reduce the total variance (although potentially with a small increase in bias).
Random forests are built on individual decision trees. For the construction of the classifier in each tree, rather than picking the best split among all features at each node in the tree, the algorithm will pick the best split for a random subset of the features and then continue constructing the classifier. This means that all the trees will have slightly different classifications even based on identical training sets. The scikit-learn implementation of RF combines the classifiers by averaging the probabilistic prediction in each tree.
Advantages
Step7: Support vector classification
Support vector machines are another class of supervised learning methods. They rely on finding the weights necessary to create a set of hypervectors that separate the classes in a set. Like decision trees, they can be used for both classification and regression.
Advantages | Python Code:
%matplotlib inline
# Setup - import some packages we'll need
import numpy as np
import matplotlib.pyplot as plt
Explanation: IDX2016B - Week 2 presentations
Classifiers in machine learning - which should I choose and how do I use it?
Kyle Willett
14 June 2016
End of explanation
from sklearn import datasets
# Import some data to fit. We'll use the iris data and fit only to the first two features.
iris = datasets.load_iris()
X = iris.data[:, :2]
Y = iris.target
n_classes = len(set(Y))
# Plot the training data and take a look at the classes
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(111)
md = {0:'o',1:'^',2:'s'}
cm = plt.cm.Set1
for i in range(n_classes):
inds = (Y==i)
ax.scatter(X[inds,0],X[inds,1],c=cm(int(i/float(n_classes-1) * 255)),
cmap=plt.cm.Set1,marker=md[i],s=50)
ax.set_xlabel(iris['feature_names'][0])
ax.set_ylabel(iris['feature_names'][1]);
# Train the logistic regression model
h = 0.02 # step size in the mesh
# Create an instance of the classifier
from sklearn import linear_model
logreg = linear_model.LogisticRegression(C=1e5)
# Fit the data with the classifier
logreg.fit(X, Y)
# Create a 2D grid to evaluate the classifier on
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
# Evaluate the classifier at every point in the grid
Z = logreg.predict(np.c_[xx.ravel(), yy.ravel()])
# Reshape the output so that it can be overplotted on our grid
Z = Z.reshape(xx.shape)
fig,(ax1,ax2) = plt.subplots(1,2,figsize=(12,6))
# Plot the training points
for i in range(n_classes):
inds = (Y==i)
ax1.scatter(X[inds,0],X[inds,1],c=cm(int(i/float(n_classes-1) * 255)),
cmap=plt.cm.Set1,marker=md[i],s=50)
# Plot the classifier with the training points on top
ax2.pcolormesh(xx, yy, Z, cmap=cm)
for i in range(n_classes):
inds = (Y==i)
ax2.scatter(X[inds,0],X[inds,1],c=cm(int(i/float(n_classes-1) * 255)),
cmap=cm,marker=md[i],s=50,edgecolor='k')
# Label the axes and remove the ticks
for ax in (ax1,ax2):
ax.set_xlabel(iris['feature_names'][0])
ax.set_ylabel(iris['feature_names'][1])
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max());
Explanation: Logistic regression
Logistic regression is a method of fitting a regression model based on one dependent variable (DV) and one or more independent variables (IVs). The difference between logistic and linear regression is that logistic regression predicts results of discrete categories, meaning that $y | x$ is the result of a Bernoulli distribution rather than a Gaussian. Linear regression is more appropriate if the dependent variable is continuous.
Advantages of logistic regression:
does not assume statistical independence of your IV(s)
does not assume a normal distribution of DV
returns a probabilistic interpretation as the model
model can be quickly updated (using gradient descent, for example)
assumes boundaries are linear, but do not have to be parallel to the IV axes
quite fast
Disadvantages of logistic regression:
does not predict continuous data
requires more data to get reasonable fit
assuming a single continuous boundary means that it does not handle local structure well
End of explanation
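To make the "quickly updated" point concrete, here is a rough sketch using scikit-learn's SGD-based logistic regression, which can absorb new batches with partial_fit instead of being refit from scratch (the split sizes are arbitrary):
# Sketch: a logistic model trained by SGD can be updated incrementally
from sklearn.linear_model import SGDClassifier
sgd_logreg = SGDClassifier(loss='log')                        # 'log' loss = logistic regression ('log_loss' on newer versions)
sgd_logreg.partial_fit(X[:75], Y[:75], classes=np.unique(Y))  # initial batch
sgd_logreg.partial_fit(X[75:], Y[75:])                        # cheap update with new data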
length = 6.0
width = 3.2
# Just the discrete answer
data = np.array([width,length]).reshape(1,-1)
pred_class = logreg.predict(data)[0]
target_name = iris['target_names'][pred_class]
print "Overall predicted class of the new flower is {0:}.\n".format(target_name)
# Probabilities for all the classes
pred_probs = logreg.predict_proba(data)
for name,prob in zip(iris['target_names'],pred_probs[0]):
print "\tProbability of class {0:12} is {1:.2f}%.".format(name,prob*100.)
Explanation: So, now you have a predictor for the class of any future object based on the input data (length and width of sepals). For example:
End of explanation
# Let's try it out again on the iris dataset.
from sklearn import tree
tree_classifier = tree.DecisionTreeClassifier()
tree_classifier.fit(X,Y)
# Evaluate the classifier at every point in the grid
Z = tree_classifier.predict(np.c_[xx.ravel(), yy.ravel()])
# Reshape the output so that it can be overplotted on our grid
Z = Z.reshape(xx.shape)
fig,(ax1,ax2) = plt.subplots(1,2,figsize=(12,6))
# Plot the training points
for i in range(n_classes):
inds = (Y==i)
ax1.scatter(X[inds,0],X[inds,1],c=cm(int(i/float(n_classes-1) * 255)),
cmap=plt.cm.Set1,marker=md[i],s=50)
# Plot the classifier with the training points on top
ax2.pcolormesh(xx, yy, Z, cmap=cm)
for i in range(n_classes):
inds = (Y==i)
ax2.scatter(X[inds,0],X[inds,1],c=cm(int(i/float(n_classes-1) * 255)),
cmap=cm,marker=md[i],s=50,edgecolor='k')
# Label the axes and remove the ticks
for ax in (ax1,ax2):
ax.set_xlabel(iris['feature_names'][0])
ax.set_ylabel(iris['feature_names'][1])
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max());
Explanation: Decision trees
Decision trees are a non-parametric, supervised method for learning both classification and regression. They work by building a series of increasingly deep rules that split on the independent variables in order to separate the values of the dependent variable. The splits are simple thresholds on the IVs, chosen with criteria such as the Gini coefficient, cross-entropy, or the misclassification rate.
Advantages of decision trees:
simple to interpret and robust against missing values
$\mathcal{O}(\log N)$ for $N$ data samples
can validate model with statistical tests
Disadvantages of decision trees:
fairly easily prone to over-fitting. To avoid this, use methods like pruning, limits on minimum samples per leaf node, or setting maximum depth of the tree
biased toward classes that are over-represented in the tree
single decision trees can be unstable; better performance by using many in an ensemble (ie, a random forest)
must be rebuilt if new features or training data are added
End of explanation
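To make those overfitting controls concrete, here is a small sketch (the particular values are arbitrary) that constrains the same kind of classifier with max_depth and min_samples_leaf:
# Sketch of the pruning-style controls: cap the depth and require larger leaves
pruned_tree = tree.DecisionTreeClassifier(max_depth=3, min_samples_leaf=5)
pruned_tree.fit(X, Y)
print pruned_tree.score(X, Y)   # training accuracy drops as the tree is constrained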
# Just the discrete answer
data = np.array([width,length]).reshape(1,-1)
pred_class = tree_classifier.predict(data)[0]
target_name = iris['target_names'][pred_class]
print "Overall predicted class of the new flower is {0:}.\n".format(target_name)
# Probabilities for all the classes
pred_probs = tree_classifier.predict_proba(data)
for name,prob in zip(iris['target_names'],pred_probs[0]):
print "\tProbability of class {0:12} is {1:.2f}%.".format(name,prob*100.)
Explanation: So this model has higher accuracy on the training set since it creates smaller niches and separated areas of different classes. However, this illustrates the danger of overfitting the model; further test sets will likely have poorer performance.
Like logistic regression, you can extract both discrete predictions and probabilities for each class:
End of explanation
fig,axarr = plt.subplots(2,3,figsize=(15,10))
for depth,ax in zip(range(1,7),axarr.ravel()):
tree_depthlim = tree.DecisionTreeClassifier(max_depth=depth)
tree_depthlim.fit(X,Y)
    # Evaluate the classifier at every point in the grid
Z = tree_depthlim.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
# Plot the classifier with the training points on top
ax.pcolormesh(xx, yy, Z, cmap=cm)
for i in range(n_classes):
inds = (Y==i)
ax.scatter(X[inds,0],X[inds,1],c=cm(int(i/float(n_classes-1) * 255)),
cmap=cm,marker=md[i],s=50,edgecolor='k')
# Label the axes and remove the ticks
ax.set_title('Max. depth = {0}'.format(depth))
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max());
Explanation: How could we not overfit? Maybe try trees with different depths.
End of explanation
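A less subjective way to pick the depth is to score each candidate with cross-validation rather than eyeballing the plots; a rough sketch (on older scikit-learn versions the import lives in sklearn.cross_validation instead):
# Sketch: pick max_depth by cross-validated accuracy instead of by eye
from sklearn.model_selection import cross_val_score
for depth in range(1, 7):
    scores = cross_val_score(tree.DecisionTreeClassifier(max_depth=depth), X, Y, cv=5)
    print depth, scores.mean()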
from sklearn.ensemble import RandomForestClassifier
rf_classifier = RandomForestClassifier(max_depth=5, n_estimators=10)
rf_classifier.fit(X,Y)
# Evaluate the classifier at every point in the grid
Z = rf_classifier.predict(np.c_[xx.ravel(), yy.ravel()])
# Reshape the output so that it can be overplotted on our grid
Z = Z.reshape(xx.shape)
fig,(ax1,ax2) = plt.subplots(1,2,figsize=(12,6))
# Plot the training points
for i in range(n_classes):
inds = (Y==i)
ax1.scatter(X[inds,0],X[inds,1],c=cm(int(i/float(n_classes-1) * 255)),
cmap=plt.cm.Set1,marker=md[i],s=50)
# Plot the classifier with the training points on top
ax2.pcolormesh(xx, yy, Z, cmap=cm)
for i in range(n_classes):
inds = (Y==i)
ax2.scatter(X[inds,0],X[inds,1],c=cm(int(i/float(n_classes-1) * 255)),
cmap=cm,marker=md[i],s=50,edgecolor='k')
# Label the axes and remove the ticks
for ax in (ax1,ax2):
ax.set_xlabel(iris['feature_names'][0])
ax.set_ylabel(iris['feature_names'][1])
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max());
Explanation: Random forest
Random forest (RF) is an example of an ensemble method of classification; this takes many individual estimators that each have some element of randomness built into them, and then combines the individual results to reduce the total variance (although potentially with a small increase in bias).
Random forests are built on individual decision trees. For the construction of the classifier in each tree, rather than picking the best split among all features at each node in the tree, the algorithm will pick the best split for a random subset of the features and then continue constructing the classifier. This means that all the trees will have slightly different classifications even based on identical training sets. The scikit-learn implementation of RF combines the classifiers by averaging the probabilistic prediction in each tree.
Advantages:
reduces variance in the model
fast and scalable
few parameters to tune (max depth, number of features, nature of randomness)
Disadvantages:
slightly increases bias, especially for non-balanced datasets
must be rebuilt if new features or training data are added
End of explanation
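A quick sketch of the knobs mentioned above (number of trees, features tried per split, and the random seed), plus feature_importances_ to see which inputs the forest leaned on; the values are arbitrary:
# Sketch: the main random forest knobs, and the per-feature importances
rf_tuned = RandomForestClassifier(n_estimators=50, max_features='sqrt', random_state=0)
rf_tuned.fit(X, Y)
print rf_tuned.feature_importances_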
from sklearn import svm
svm_classifier = svm.SVC()
svm_classifier.fit(X,Y)
# Evaluate the classifier at every point in the grid
Z = svm_classifier.predict(np.c_[xx.ravel(), yy.ravel()])
# Reshape the output so that it can be overplotted on our grid
Z = Z.reshape(xx.shape)
fig,(ax1,ax2) = plt.subplots(1,2,figsize=(12,6))
# Plot the training points
for i in range(n_classes):
inds = (Y==i)
ax1.scatter(X[inds,0],X[inds,1],c=cm(int(i/float(n_classes-1) * 255)),
cmap=plt.cm.Set1,marker=md[i],s=50)
# Plot the classifier with the training points on top
ax2.pcolormesh(xx, yy, Z, cmap=cm)
for i in range(n_classes):
inds = (Y==i)
ax2.scatter(X[inds,0],X[inds,1],c=cm(int(i/float(n_classes-1) * 255)),
cmap=cm,marker=md[i],s=50,edgecolor='k')
# Label the axes and remove the ticks
for ax in (ax1,ax2):
ax.set_xlabel(iris['feature_names'][0])
ax.set_ylabel(iris['feature_names'][1])
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max());
# Just the discrete answer
data = np.array([width,length]).reshape(1,-1)
pred_class = svm_classifier.predict(data)[0]
target_name = iris['target_names'][pred_class]
print "Overall predicted class of the new flower is {0:}.\n".format(target_name)
Explanation: Support vector classification
Support vector machines are another class of supervised learning methods. They rely on finding the weights necessary to create a set of separating hyperplanes between the classes in a set. Like decision trees, they can be used for both classification and regression.
Advantages:
work in high-dimensional spaces
can be used even if $\mathcal{N}>n$ (number of dimensions are greater than number of samples)
memory efficient
can tune the kernel that controls decision function
Disadvantages:
must tune the kernel that controls the decision function
many more parameters that can potentially be set
no direct probability estimates
The heart of an SVC is the kernel; this "mathematical trick" is what allows the algorithm to efficiently map coordinates into feature space by only computing the inner product on pairs of images, rather than a complete coordinate transformation. The shape of the kernel also determines the available shapes for the discriminating hyperplanes. In scikit-learn, there are four precompiled kernels available (you can also define your own):
linear
polynomial
rbf ("radial basis function"; default)
sigmoid
End of explanation |
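To illustrate the "define your own" option, any function that returns the Gram matrix between two sets of samples can be passed as the kernel; a minimal sketch:
# Sketch: a callable kernel just has to return the Gram matrix between two sample sets
def linear_gram(A, B):
    return np.dot(A, B.T)
custom_svm = svm.SVC(kernel=linear_gram)
custom_svm.fit(X, Y)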
3,010 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lesson 49
Step1: This lesson will control all the keyboard controlling functions in the module.
The typewrite() function will type text into a given textbox. It may be useful to use the mouse operators to navigate and click into a field before running.
Step2: Again, to simulate more human interaction, we can an interval parameter like duration before.
Step3: For more complex characters, we can pass a list of complex characters, like the arrow keys, shift, etc.
Step4: A list of keys are available in the KEYBOARD_KEYS
Step5: These are case-sensitive, but often map to the same function anyway.
Step6: We can also pass variables in hotkey mode, i.e. pressed together. | Python Code:
import pyautogui
Explanation: Lesson 49:
Controlling the Keyboard with Python
Python can be used to control the keyboard and mouse, which allows us to automate any program that uses these as inputs.
Graphical User Interface (GUI) Automation is particularly useful for repetative clicking or keyboard entry. The program's own module will probably deliver better programmatic performance, but GUI automation is more broadly applicable.
We will be using the pyautogui module. Lesson 48 details how to install this package.
End of explanation
# Writes to the cell right below (70 pixels down)
pyautogui.moveRel(0,70)
pyautogui.click()
pyautogui.typewrite('Hello world!')
Explanation: This lesson will control all the keyboard controlling functions in the module.
The typewrite() function will type text into a given textbox. It may be useful to use the mouse operators to navigate and click into a field before running.
End of explanation
# Writes to the cell right below (70 pixels down)
pyautogui.moveRel(0,70)
pyautogui.click()
pyautogui.typewrite('Hello world!', interval=0.2)
Explanation: Again, to simulate more human interaction, we can add an interval parameter, like duration before.
End of explanation
# Writes to the cell right below (70 pixels down)
pyautogui.moveRel(0,70)
pyautogui.click()
pyautogui.typewrite(['a','b','left','left','X','Y'], interval=1)
# The keystrokes above leave the text 'XYab' in the cell
Explanation: For more complex characters, we can pass a list of complex characters, like the arrow keys, shift, etc.
End of explanation
pyautogui.KEYBOARD_KEYS
Explanation: A list of keys is available in KEYBOARD_KEYS
End of explanation
pyautogui.typewrite('F1')
pyautogui.typewrite('f1')
Explanation: These are case-sensitive, but often map to the same function anyway.
End of explanation
# Simulates ctrl + alt + delete
pyautogui.hotkey('ctrl','alt','delete')
Explanation: We can also pass multiple keys in hotkey mode, i.e. pressed together.
End of explanation |
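Under the hood, hotkey() simply presses the keys down in order and releases them in reverse; the same effect can be built by hand with keyDown(), press(), and keyUp():
# Roughly equivalent to the hotkey() call above, spelled out with the lower-level functions
pyautogui.keyDown('ctrl')
pyautogui.keyDown('alt')
pyautogui.press('delete')
pyautogui.keyUp('alt')
pyautogui.keyUp('ctrl')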
3,011 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Iris Demo
Check any null and invalid values
Ensure the properties of features and labels
Convert the string value into computational forms
PCA -> Cluster Verification (optional)
Logistic Regreesion/SVM (optional)
Import Iris DataSet from sklearn
Step1: Convert data form into Pandas format
Step2: Check if there is any null values
Step3: 3 group to classify
Step4: 150 instances and 4 features
Step5: Convet all the unique string values into integers. Perform label encoding on the data
Step6: Check the encoded values
Step7: Plot boxplot to visualize the distribution of the data
Step8: Info of features
Step9: Standardising the features
Step10: PCA(optional)
Step11: The last 1 componen has less amount of variance of the data. The first 3 components retains more than 90% of the data.(Here, compared with only 4 features, there're enough instances to support the final results. We shall take all features into consideration)
Consider first 3 components and visualise it using K-means clustering
Step12: Using K-means, we are able to segregate 3 classes well using the first 3 components with maximum variance. (Don't mind the color type, which is meaningless in clustering).
You can apply PCA firstly before using machine learning in the next steps
Splitting the data into training and testing dataset
Step13: Default Logistic Regression(optional)
Step14: Tuned Logistic Regression(optional)
Step15: Search best combinations of parameter values based on the dataset.
+ "C"
Step16: SVM(optional) | Python Code:
from sklearn.datasets import load_iris
irisdata = load_iris()
Explanation: Iris Demo
Check any null and invalid values
Ensure the properties of features and labels
Convert the string value into computational forms
PCA -> Cluster Verification (optional)
Logistic Regreesion/SVM (optional)
Import Iris DataSet from sklearn
End of explanation
import pandas as pd
features = pd.DataFrame(irisdata['data'])
features.columns = irisdata['feature_names']
targets = pd.DataFrame(irisdata['target'])
targets = targets.replace([0,1,2],irisdata['target_names'])
Explanation: Convert data form into Pandas format
End of explanation
features.isnull().sum()
targets.isnull().sum()
Explanation: Check if there is any null values
End of explanation
targets[0].unique()
Explanation: 3 groups to classify
End of explanation
features.shape
Explanation: 150 instances and 4 features
End of explanation
from sklearn.preprocessing import LabelEncoder
labelencoder=LabelEncoder()
for col in targets.columns:
targets[col] = labelencoder.fit_transform(targets[col])
Explanation: Convert all the unique string values into integers. Perform label encoding on the data
End of explanation
targets[0].unique()
print(targets.groupby(0).size())
Explanation: Check the encoded values
End of explanation
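To see which integer was assigned to each species, a small optional check using the fitted encoder from above (classes_ and inverse_transform are standard LabelEncoder attributes):
# classes_ lists the original labels in the order of their integer codes
print(labelencoder.classes_)
# inverse_transform maps integer codes back to the original species names
print(labelencoder.inverse_transform([0, 1, 2]))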
import matplotlib.pyplot as plt
%matplotlib inline
fig,axes = plt.subplots(nrows=2,ncols=2,figsize=(9,9))
fig1 = axes[0,0].boxplot(features['sepal length (cm)'],patch_artist=True)
fig2 = axes[0,1].boxplot(features['sepal width (cm)'],patch_artist=True)
fig3 = axes[1,0].boxplot(features['petal length (cm)'],patch_artist=True)
fig4 = axes[1,1].boxplot(features['petal width (cm)'],patch_artist=True)
Explanation: Plot boxplot to visualize the distribution of the data
End of explanation
features.describe()
features.corr()
Explanation: Info of features
End of explanation
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X = scaler.fit_transform(features)
X
Explanation: Standardising the features
End of explanation
from sklearn.decomposition import PCA
pca = PCA()
pca.fit_transform(X)
covariance = pca.get_covariance()
explained_variance = pca.explained_variance_
explained_variance
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(6, 4))
plt.bar(range(4), explained_variance, alpha=0.5, align='center',
label='individual explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.legend(loc='best')
plt.tight_layout()
Explanation: PCA(optional)
End of explanation
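To back the variance claim made below with a number, a short optional sketch (numpy is imported here only for this check) prints the cumulative explained variance ratio of the fitted pca object:
import numpy as np
# cumulative share of variance captured by the first k components
print(np.cumsum(pca.explained_variance_ratio_))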
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
pca = PCA(n_components=3)
x_pca = pca.fit_transform(X)
kmeans = KMeans(n_clusters=3, random_state=5)
x_clustered = kmeans.fit_predict(x_pca)
y = targets.values
y = y.reshape(y.size)
import matplotlib.pyplot as plt
%matplotlib inline
LABEL_COLOR_MAP = {0 : 'g',
1 : 'y',
2 : 'r'
}
label_color = [LABEL_COLOR_MAP[i] for i in x_clustered]
y_color = [LABEL_COLOR_MAP[i] for i in y]
fig,axes = plt.subplots(nrows=1,ncols=2,figsize=(5,3))
axes[0].scatter(X[:,0],X[:,1], c= label_color)
axes[0].set_title('PCA')
axes[1].scatter(X[:,0],X[:,1], c= y_color)
axes[1].set_title('True Cluster');
Explanation: The last component carries the least variance; the first 3 components retain more than 90% of the variance. (Here, with only 4 features and enough instances to support the results, we keep all features in the analysis.)
Consider the first 3 components and visualise them using K-means clustering
End of explanation
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.2,random_state=4)
Explanation: Using K-means, we are able to segregate 3 classes well using the first 3 components with maximum variance. (Don't mind the color type, which is meaningless in clustering).
You can apply PCA firstly before using machine learning in the next steps
Splitting the data into training and testing dataset
End of explanation
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn import metrics
modelLR = LogisticRegression(n_jobs=-1)
modelLR.fit(X_train,y_train);
y_pred = modelLR.predict(X_test)
modelLR.score(X_test, y_test)
confusion_matrix=metrics.confusion_matrix(y_test,y_pred)
confusion_matrix
import matplotlib.pyplot as plt
%matplotlib inline
LABEL_COLOR_MAP = {0 : 'g',
1 : 'y',
2 : 'r'
}
pred_color = [LABEL_COLOR_MAP[i] for i in y_pred]
test_color = [LABEL_COLOR_MAP[i] for i in y_test]
fig,axes = plt.subplots(nrows=1,ncols=2,figsize=(5,2))
axes[0].scatter(X_test[:,0],X_test[:,1], c= pred_color)
axes[0].set_title('Predicted')
axes[1].scatter(X_test[:,0],X_test[:,1], c= test_color)
axes[1].set_title('True');
Explanation: Default Logistic Regression(optional)
End of explanation
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn import metrics
from sklearn.model_selection import GridSearchCV
LRs= LogisticRegression()
tuned_parameters = {'C': [0.001, 0.01, 0.1, 1, 10, 100, 1000] ,
'penalty':['l1','l2']
}
modelLR=GridSearchCV(LRs, tuned_parameters,cv=10)
Explanation: Tuned Logistic Regression(optional)
End of explanation
modelLR.fit(X_train,y_train)
print(modelLR.best_params_)
y_pred = modelLR.predict(X_test)
modelLR.score(X_test, y_test)
confusion_matrix=metrics.confusion_matrix(y_test,y_pred)
confusion_matrix
auc_roc=metrics.classification_report(y_test,y_pred)
auc_roc
import matplotlib.pyplot as plt
%matplotlib inline
LABEL_COLOR_MAP = {0 : 'g',
1 : 'y',
2 : 'r'
}
pred_color = [LABEL_COLOR_MAP[i] for i in y_pred]
test_color = [LABEL_COLOR_MAP[i] for i in y_test]
fig,axes = plt.subplots(nrows=1,ncols=2,figsize=(5,2))
axes[0].scatter(X_test[:,0],X_test[:,1], c= pred_color)
axes[0].set_title('Predicted')
axes[1].scatter(X_test[:,0],X_test[:,1], c= test_color)
axes[1].set_title('True');
Explanation: Search best combinations of parameter values based on the dataset.
+ "C": Inverse of regularization strength
+ "Penalty": The norm used in the penalization
End of explanation
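To see how every (C, penalty) combination scored during the search, an optional sketch using the cv_results_ attribute of the fitted GridSearchCV object modelLR:
cv_results = pd.DataFrame(modelLR.cv_results_)
print(cv_results[['param_C', 'param_penalty', 'mean_test_score']])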
from sklearn.svm import SVC
svm= SVC()
tuned_parameters = {
    'C': [1, 10, 100, 500, 1000],
    'gamma': [1, 0.1, 0.01, 0.001, 0.0001],
    'kernel': ['linear', 'rbf'],
    # 'degree': [2, 3, 4, 5, 6], 'kernel': ['poly']
}
from sklearn.model_selection import RandomizedSearchCV
modelsvm = RandomizedSearchCV(svm, tuned_parameters,cv=10,scoring='accuracy',n_iter=20)
modelsvm.fit(X_train, y_train)
print(modelsvm.best_score_)
modelsvm.cv_results_
print(modelsvm.best_params_)
y_pred= modelsvm.predict(X_test)
print(metrics.accuracy_score(y_pred,y_test))
confusion_matrix=metrics.confusion_matrix(y_test,y_pred)
confusion_matrix
auc_roc=metrics.classification_report(y_test,y_pred)
auc_roc
import matplotlib.pyplot as plt
%matplotlib inline
LABEL_COLOR_MAP = {0 : 'g',
1 : 'y',
2 : 'r'
}
pred_color = [LABEL_COLOR_MAP[i] for i in y_pred]
test_color = [LABEL_COLOR_MAP[i] for i in y_test]
fig,axes = plt.subplots(nrows=1,ncols=2,figsize=(5,2))
axes[0].scatter(X_test[:,0],X_test[:,1], c= pred_color)
axes[0].set_title('Predicted')
axes[1].scatter(X_test[:,0],X_test[:,1], c= test_color)
axes[1].set_title('True');
Explanation: SVM(optional)
End of explanation |
3,012 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using numpy
The foundation for numerical computation in Python is the numpy package, and essentially all scientific libraries in Python build on this - e.g. scipy, pandas, statsmodels, scikit-learn, cv2 etc. The basic data structure in numpy is the NDArray, and it is essential to become familiar with how to slice and dice this object.
Numpy also has the random, and linalg modules that we will discuss in later lectures.
Resources
Numpy for R users
NumPy
Step1: Array creation
Step2: Array manipulation
Step3: Array indexing
Step4: Boolean indexing
Step5: Fancy indexing
Step6: Calculations and broadcasting
Broadcasting refers to the set of rules that numpy uses to perform operations on arrays with different shapes. See official documentation for a clear explanation of the rules. Array shapes can be manipulated using the reshape method or by inserting a new axis with np.newaxis. Note that np.newaxis is an alias for None, which I sometimes use in my examples.
Step7: Combining and splitting arrays
Step8: Reductions
Step9: Standardize by column mean and standard deviation
Step10: Standardize by row mean and standard deviation
Step13: Example
Step14: Using broadcasting
Step15: We want to end up with a 4 by 4 matrix, so sum over the axis with dimension 2. This is axis=2, or axis=-1 since it is the first axis from the end.
Step16: Basically, the distance matrix can be calculated in one line of numpy code
Step17: Let's put them in functions and compare the time.
Step18: Check that the outputs are the same
Step19: But don't give up on loops yet
Step20: What is going on?
This is 3-5 times faster than the broadcasting version! We have just performed Just In Time (JIT) compilation of a function, which will be discussed in a later lecture.
Example
Step21: Use broadcasting to create a new index matrix
Step22: All but one
R uses negative indexing to mean delete the component at that index. Because Python uses negative indexing to mean count from the end, we have to do a little more work to get the same effect. Here are two ways of deleting one item from a vector.
Step23: Universal functions (Ufuncs)
Functions that work on both scalars and arrays are known as ufuncs. For arrays, ufuncs apply the function in an element-wise fashion. Use of ufuncs is an esssential aspect of vectorization and typically much more computationally efficient than using an explicit loop over each element.
Step24: Generalized ufuncs
A universal function performs vectorized looping over scalars. A generalized ufunc performs looping over vectors or arrays. Currently, numpy only ships with a single generalized ufunc. However, they play an important role for JIT compilation with numba, a topic we will cover in future lectures.
Step25: Saving and loading NDArrays
Saving to and loading from text files
Step26: Saving to and loading from binary files (much faster and also preserves dtype)
Step27: Version information | Python Code:
x = np.array([1,2,3,4,5,6])
print(x)
print('dtype', x.dtype)
print('shape', x.shape)
print('strides', x.strides)
x.shape = (2,3)
print(x)
print('dtype', x.dtype)
print('shape', x.shape)
print('strides', x.strides)
x = x.astype('complex')
print(x)
print('dtype', x.dtype)
print('shape', x.shape)
print('strides', x.strides)
Explanation: Using numpy
The foundation for numerical computation in Python is the numpy package, and essentially all scientific libraries in Python build on this - e.g. scipy, pandas, statsmodels, scikit-learn, cv2 etc. The basic data structure in numpy is the NDArray, and it is essential to become familiar with how to slice and dice this object.
Numpy also has the random, and linalg modules that we will discuss in later lectures.
Resources
Numpy for R users
NumPy: creating and manipulating numerical data
Advanced Numpy
100 Numpy Exercises
NDArray
The base structure in numpy is ndarray, used to represent vectors, matrices and higher-dimensional arrays. Each ndarray has the following attributes:
dtype = corresponds to data types in C
shape = dimensions of array
strides = number of bytes to step in each direction when traversing the array
End of explanation
np.array([1,2,3])
np.array([1,2,3], np.float64)
np.arange(3)
np.arange(3, 6, 0.5)
np.array([[1,2,3],[4,5,6]])
np.ones(3)
np.zeros((3,4))
np.eye(4)
np.diag([1,2,3,4])
np.fromfunction(lambda i, j: i**2+j**2, (4,5))
Explanation: Array creation
End of explanation
x = np.fromfunction(lambda i, j: i**2+j**2, (4,5))
x
x.shape
x.size
x.dtype
x.astype(np.int64)
x.T
x.reshape(2,-1)
Explanation: Array manipulation
End of explanation
x
x[0]
x[0,:]
x[:,0]
x[-1]
x[1,1]
x[:, 1:3]
Explanation: Array indexing
End of explanation
x >= 2
x[x > 2]
Explanation: Boolean indexing
End of explanation
x[0, [1,2]]
Explanation: Fancy indexing
End of explanation
x = np.fromfunction(lambda i, j: i**2+j**2, (2,3))
x
x * 5
x + x
x @ x.T
x.T @ x
np.log1p(x)
np.exp(x)
Explanation: Calculations and broadcasting
Broadcasting refers to the set of rules that numpy uses to perform operations on arrays with different shapes. See official documentation for a clear explanation of the rules. Array shapes can be manipulated using the reshape method or by inserting a new axis with np.newaxis. Note that np.newaxis is an alias for None, which I sometimes use in my examples.
End of explanation
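A minimal sketch of those rules in action: a (3, 1) column and a (1, 4) row are both stretched to a common (3, 4) shape before the addition:
a = np.arange(3)[:, np.newaxis]   # shape (3, 1)
b = np.arange(4)[np.newaxis, :]   # shape (1, 4)
(a + b).shape                     # (3, 4)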
x
np.r_[x, x]
np.vstack([x, x])
np.concatenate([x, x], axis=0)
np.c_[x,x]
np.hstack([x, x])
np.concatenate([x,x], axis=1)
y = np.r_[x, x]
y
a, b, c = np.hsplit(y, 3)
a
b
c
np.vsplit(y, [3])
np.split(y, [3], axis=0)
np.hstack(np.hsplit(y, 3))
Explanation: Combining and splitting arrays
End of explanation
y
y.sum()
y.sum(0) # column sum
y.sum(1) # row sum
Explanation: Reductions
End of explanation
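Other reductions follow the same axis convention; a brief sketch using the y defined above:
y.mean(0)     # column means
y.max(1)      # row maxima
y.argmax(1)   # index of each row's maximum
y.cumsum(0)   # running column sums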
z = (y - y.mean(0))/y.std(0)
z
z.mean(0), z.std(0)
Explanation: Standardize by column mean and standard deviation
End of explanation
z = (y - y.mean(1)[:,None])/y.std(1)[:,None]
z
z.mean(1), z.std(1)
Explanation: Standardize by row mean and standard deviation
End of explanation
def distance_matrix_py(pts):
    """Returns matrix of pairwise Euclidean distances. Pure Python version."""
n = len(pts)
p = len(pts[0])
m = np.zeros((n, n))
for i in range(n):
for j in range(n):
s = 0
for k in range(p):
s += (pts[i,k] - pts[j,k])**2
m[i, j] = s**0.5
return m
def distance_matrix_np(pts):
    """Returns matrix of pairwise Euclidean distances. Vectorized numpy version."""
return np.sum((pts[None,:] - pts[:, None])**2, -1)**0.5
pts = np.array([(0,0), (4,0), (4,3), (0,3)])
pts
pts.shape
n = pts.shape[0]
p = pts.shape[1]
dist = np.zeros((n, n))
for i in range(n):
for j in range(n):
s = 0
for k in range(p):
s += (pts[i, k] - pts[j, k])**2
dist[i, j] = np.sqrt(s)
dist
Explanation: Example: Calculating pairwise distance matrix using broadcasting and vectorization
Calculate the pairwise distance matrix between the following points
(0,0)
(4,0)
(4,3)
(0,3)
End of explanation
pts[None, :].shape
pts[:, None].shape
m = pts[None, :] - pts[:, None]
m
m**2
(m**2).shape
Explanation: Using broadcasting
End of explanation
np.sum((pts[None, :] - pts[:, None])**2, -1)
Explanation: We want to end up with a 4 by 4 matrix, so sum over the axis with dimension 2. This is axis=2, or axis=-1 since it is the first axis from the end.
End of explanation
np.sqrt(np.sum((pts[None, :] - pts[:, None])**2, -1))
Explanation: Basically, the distance matrix can be calculated in one line of numpy code
End of explanation
def pdist1(pts):
n = pts.shape[0]
p = pts.shape[1]
dist = np.zeros((n, n))
for i in range(n):
for j in range(n):
s = 0
for k in range(p):
s += (pts[i, k] - pts[j, k])**2
dist[i, j] = s
return np.sqrt(dist)
def pdist2(pts):
return np.sqrt(np.sum((pts[None, :] - pts[:, None])**2, -1))
Explanation: Let's put them in functions and compare the time.
End of explanation
np.alltrue(pdist1(pts) == pdist2(pts))
pts = np.random.random((1000, 2))
%timeit pdist1(pts)
%timeit pdist2(pts)
Explanation: Check that the outputs are the same
End of explanation
from numba import njit
@njit
def pdist3(pts):
n = pts.shape[0]
p = pts.shape[1]
dist = np.zeros((n, n))
for i in range(n):
for j in range(n):
s = 0
for k in range(p):
s += (pts[i, k] - pts[j, k])**2
dist[i, j] = s
return np.sqrt(dist)
%timeit pdist3(pts)
Explanation: But don't give up on loops yet
End of explanation
N = 5
np.tri(N)
np.tri(N, N-1)
np.tri(N, N-1, -1)
Explanation: What is going on?
This is 3-5 times faster than the broadcasting version! We have just performed Just In Time (JIT) compilation of a function, which will be discussed in a later lecture.
Example: Consructing leave-one-out arrays
Another example of numpy trickery is to construct a leave-one-out matrix of a vector of length k. In the matrix, each row is a vector of length k-1, with a different vector component dropped each time. This can be used for LOOCV to evalaute the out-of-sample accuracy of a predictive model.
For example, suppose you have data points [(1,4), (2,7), (3,11), (4,9), (5,15)] that you want to perfrom LOOCV on for a simple regression model. For each cross-validation, you use one point for testing, and the remaining 4 points for training. In other words, you want the training set to be:
[(2,7), (3,11), (4,9), (5,15)]
[(1,4), (3,11), (4,9), (5,15)]
[(1,4), (2,7), (4,9), (5,15)]
[(1,4), (2,7), (3,11), (5,15)]
[(1,4), (2,7), (3,11), (4,9)]
Here is one way to create the training set using numpy tricks.
Create a triangular matrix with N rows, N-1 columns, offset from the diagonal by -1
End of explanation
np.arange(1, N)
np.arange(1, N) - np.tri(N, N-1, -1)
idx = np.arange(1, N) - np.tri(N, N-1, -1).astype('int')
data = np.array([(1,4), (2,7), (3,11), (4,9), (5,15)])
data
data[idx]
Explanation: Use broadcasting to create a new index matrix
End of explanation
def f1(a, k):
idx = np.ones_like(a).astype('bool')
idx[k] = 0
return a[idx]
def f2(a, k):
return np.r_[a[:k], a[k+1:]]
a = np.arange(100)
k = 50
%timeit f1(a, k)
%timeit f2(a, k)
Explanation: All but one
R uses negative indexing to mean delete the component at that index. Because Python uses negative indexing to mean count from the end, we have to do a little more work to get the same effect. Here are two ways of deleting one item from a vector.
End of explanation
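For completeness (not part of the original timing comparison), numpy also ships a builtin that removes one element and returns a copy:
%timeit np.delete(a, k)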
import matplotlib.pyplot as plt
%matplotlib inline
xs = np.linspace(0, 2*np.pi, 100)
ys = np.sin(xs) # np.sin is a universal function
plt.plot(xs, ys);
Explanation: Universal functions (Ufuncs)
Functions that work on both scalars and arrays are known as ufuncs. For arrays, ufuncs apply the function in an element-wise fashion. Use of ufuncs is an essential aspect of vectorization and typically much more computationally efficient than using an explicit loop over each element.
End of explanation
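A quick sketch of the same ufunc applied to a scalar and to an array:
np.exp(1.0)                          # scalar in, scalar out
np.exp(np.array([0.0, 1.0, 2.0]))    # applied element-wise to the array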
from numpy.core.umath_tests import matrix_multiply
print(matrix_multiply.signature)
us = np.random.random((5, 2, 3)) # 5 2x3 matrics
vs = np.random.random((5, 3, 4)) # 5 3x4 matrices
us
vs
# perform matrix multiplication for each of the 5 sets of matrices
ws = matrix_multiply(us, vs)
ws.shape
ws
Explanation: Generalized ufuncs
A universal function performs vectorized looping over scalars. A generalized ufunc performs looping over vectors or arrays. Currently, numpy only ships with a single generalized ufunc. However, they play an important role for JIT compilation with numba, a topic we will cover in future lectures.
End of explanation
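As a point of comparison (a sketch, not part of the original notebook), the @ operator used earlier follows the same (n,k),(k,m)->(n,m) looping convention and produces the same batched product over the stacked matrices us and vs defined above:
ws2 = us @ vs        # five 2x4 products, one per stacked pair
np.allclose(ws, ws2)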
x1 = np.arange(1,10).reshape(3,3)
x1
np.savetxt('../data/x1.txt', x1)
!cat ../data/x1.txt
x2 = np.loadtxt('../data/x1.txt')
x2
Explanation: Saving and loading NDArrays
Saving to and loading from text files
End of explanation
np.save('../data/x1.npy', x1)
!cat ../data/x1.npy
x3 = np.load('../data/x1.npy')
x3
Explanation: Saving to and loading from binary files (much faster and also preserves dtype)
End of explanation
%load_ext version_information
%version_information numpy, numba, matplotlib
Explanation: Version information
End of explanation |
3,013 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Using CAD-Based Geometries
In this notebook we'll be exploring how to use CAD-based geometries in OpenMC via the DagMC toolkit. The models we'll be using in this notebook have already been created using Trelis and faceted into a surface mesh represented as .h5m files in the Mesh Oriented DatABase format. We'll be retrieving these files using the function below.
Step2: This notebook is intended to demonstrate how DagMC problems are run in OpenMC. For more information on how DagMC models are created, please refer to the DagMC User's Guide.
Step3: To start, we'll be using a simple U235 fuel pin surrounded by a water moderator, so let's create those materials.
Step4: Now let's get our DAGMC geometry. We'll be using prefabricated models in this notebook. For information on how to create your own DAGMC models, you can refer to the instructions here.
Let's download the DAGMC model. These models come in the form of triangle surface meshes stored using the Mesh Oriented datABase (MOAB) in an HDF5 file with the extension .h5m. An example of a coarse triangle mesh looks like
Step5: First we'll need to grab some pre-made DagMC models.
Step6: OpenMC expects that the model has the name "dagmc.h5m" so we'll name the file that and indicate to OpenMC that a DAGMC geometry is being used by setting the settings.dagmc attribute to True.
Step7: Unlike conventional geometries in OpenMC, we really have no way of knowing what our model looks like at this point. Thankfully DagMC geometries can be plotted just like any other OpenMC geometry to give us an idea of what we're now working with.
Note that material assignments have already been applied to this model. Materials can be assigned either using ids or names of materials in the materials.xml file. It is recommended that material names are used for assignment for readability.
Step8: Now that we've had a chance to examine the model a bit, we can finish applying our settings and add a source.
Step9: Tallies work in the same way when using DAGMC geometries too. We'll add a tally on the fuel cell here.
Step10: Note
Step11: More Complicated Geometry
Neat! But this pincell is something we could've done with CSG. Let's take a look at something more complex. We'll download a pre-built model of the Utah teapot and use it here.
Step12: Our teapot is made out of iron, so we'll want to create that material and make sure it is in our materials.xml file.
Step13: To make sure we've updated the file correctly, let's make a plot of the teapot.
Step14: Here we start to see some of the advantages CAD geometries provide. This particular file was pulled from the GrabCAD and pushed through the DAGMC workflow without modification (other than the addition of material assignments). It would take a considerable amount of time to create a model like this using CSG!
Step15: Now let's brew some tea! ... using a very hot neutron source. We'll use some well-placed point sources distributed throughout the model.
Step16: ...and set up a couple of mesh tallies. One for the kettle, and one for the water inside.
Step17: Note that the performance is significantly lower than our pincell model due to the increased complexity of the model, but it allows us to examine tally results like these | Python Code:
import urllib.request
fuel_pin_url = 'https://tinyurl.com/y3ugwz6w' # 1.2 MB
teapot_url = 'https://tinyurl.com/y4mcmc3u' # 29 MB
def download(url):
    """Helper function for retrieving dagmc models."""
u = urllib.request.urlopen(url)
if u.status != 200:
raise RuntimeError("Failed to download file.")
# save file as dagmc.h5m
with open("dagmc.h5m", 'wb') as f:
f.write(u.read())
Explanation: Using CAD-Based Geometries
In this notebook we'll be exploring how to use CAD-based geometries in OpenMC via the DagMC toolkit. The models we'll be using in this notebook have already been created using Trelis and faceted into a surface mesh represented as .h5m files in the Mesh Oriented DatABase format. We'll be retrieving these files using the function below.
End of explanation
%matplotlib inline
from IPython.display import Image
import openmc
Explanation: This notebook is intended to demonstrate how DagMC problems are run in OpenMC. For more information on how DagMC models are created, please refer to the DagMC User's Guide.
End of explanation
# materials
u235 = openmc.Material(name="fuel")
u235.add_nuclide('U235', 1.0, 'ao')
u235.set_density('g/cc', 11)
u235.id = 40
water = openmc.Material(name="water")
water.add_nuclide('H1', 2.0, 'ao')
water.add_nuclide('O16', 1.0, 'ao')
water.set_density('g/cc', 1.0)
water.add_s_alpha_beta('c_H_in_H2O')
water.id = 41
mats = openmc.Materials([u235, water])
mats.export_to_xml()
Explanation: To start, we'll be using a simple U235 fuel pin surrounded by a water moderator, so let's create those materials.
End of explanation
Image("./images/cylinder_mesh.png", width=350)
Explanation: Now let's get our DAGMC geometry. We'll be using prefabricated models in this notebook. For information on how to create your own DAGMC models, you can refer to the instructions here.
Let's download the DAGMC model. These models come in the form of triangle surface meshes stored using the the Mesh Oriented datABase (MOAB) in an HDF5 file with the extension .h5m. An example of a coarse triangle mesh looks like:
End of explanation
download(fuel_pin_url)
Explanation: First we'll need to grab some pre-made DagMC models.
End of explanation
settings = openmc.Settings()
settings.dagmc = True
settings.batches = 10
settings.inactive = 2
settings.particles = 5000
settings.export_to_xml()
Explanation: OpenMC expects that the model has the name "dagmc.h5m" so we'll name the file that and indicate to OpenMC that a DAGMC geometry is being used by setting the settings.dagmc attribute to True.
End of explanation
p = openmc.Plot()
p.width = (25.0, 25.0)
p.pixels = (400, 400)
p.color_by = 'material'
p.colors = {u235: 'yellow', water: 'blue'}
openmc.plot_inline(p)
Explanation: Unlike conventional geometries in OpenMC, we really have no way of knowing what our model looks like at this point. Thankfully DagMC geometries can be plotted just like any other OpenMC geometry to give us an idea of what we're now working with.
Note that material assignments have already been applied to this model. Materials can be assigned either using ids or names of materials in the materials.xml file. It is recommended that material names are used for assignment for readability.
End of explanation
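Since the plotting interface is identical to the CSG case, a small optional sketch looks at the same pin from another basis, reusing only the calls already shown in this notebook:
p2 = openmc.Plot()
p2.basis = 'yz'
p2.width = (25.0, 25.0)
p2.pixels = (400, 400)
p2.color_by = 'material'
p2.colors = {u235: 'yellow', water: 'blue'}
openmc.plot_inline(p2)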
settings.source = openmc.Source(space=openmc.stats.Box([-4., -4., -4.],
[ 4., 4., 4.]))
settings.export_to_xml()
Explanation: Now that we've had a chance to examine the model a bit, we can finish applying our settings and add a source.
End of explanation
tally = openmc.Tally()
tally.scores = ['total']
tally.filters = [openmc.CellFilter(1)]
tallies = openmc.Tallies([tally])
tallies.export_to_xml()
Explanation: Tallies work in the same way when using DAGMC geometries too. We'll add a tally on the fuel cell here.
End of explanation
openmc.run()
Explanation: Note: Applying tally filters in DagMC models requires prior knowledge of the model. Here, we know that the fuel cell's volume ID in the CAD sofware is 1. To identify cells without use of CAD software, load them into the OpenMC plotter where cell, material, and volume IDs can be identified for native both OpenMC and DagMC geometries.
Now we're ready to run the simulation just like any other OpenMC run.
End of explanation
download(teapot_url)
Image("./images/teapot.jpg", width=600)
Explanation: More Complicated Geometry
Neat! But this pincell is something we could've done with CSG. Let's take a look at something more complex. We'll download a pre-built model of the Utah teapot and use it here.
End of explanation
iron = openmc.Material(name="iron")
iron.add_nuclide("Fe54", 0.0564555822608)
iron.add_nuclide("Fe56", 0.919015287728)
iron.add_nuclide("Fe57", 0.0216036861685)
iron.add_nuclide("Fe58", 0.00292544384231)
iron.set_density("g/cm3", 7.874)
mats = openmc.Materials([iron, water])
mats.export_to_xml()
Explanation: Our teapot is made out of iron, so we'll want to create that material and make sure it is in our materials.xml file.
End of explanation
p = openmc.Plot()
p.basis = 'xz'
p.origin = (0.0, 0.0, 0.0)
p.width = (30.0, 20.0)
p.pixels = (450, 300)
p.color_by = 'material'
p.colors = {iron: 'gray', water: 'blue'}
openmc.plot_inline(p)
Explanation: To make sure we've updated the file correctly, let's make a plot of the teapot.
End of explanation
p.width = (18.0, 6.0)
p.basis = 'xz'
p.origin = (10.0, 0.0, 5.0)
p.pixels = (600, 200)
p.color_by = 'material'
openmc.plot_inline(p)
Explanation: Here we start to see some of the advantages CAD geometries provide. This particular file was pulled from GrabCAD and pushed through the DAGMC workflow without modification (other than the addition of material assignments). It would take a considerable amount of time to create a model like this using CSG!
End of explanation
settings = openmc.Settings()
settings.dagmc = True
settings.batches = 10
settings.particles = 5000
settings.run_mode = "fixed source"
src_locations = ((-4.0, 0.0, -2.0),
( 4.0, 0.0, -2.0),
( 4.0, 0.0, -6.0),
(-4.0, 0.0, -6.0),
(10.0, 0.0, -4.0),
(-8.0, 0.0, -4.0))
# we'll use the same energy for each source
src_e = openmc.stats.Discrete(x=[12.0,], p=[1.0,])
# create source for each location
sources = []
for loc in src_locations:
src_pnt = openmc.stats.Point(xyz=loc)
src = openmc.Source(space=src_pnt, energy=src_e)
sources.append(src)
src_str = 1.0 / len(sources)
for source in sources:
source.strength = src_str
settings.source = sources
settings.export_to_xml()
Explanation: Now let's brew some tea! ... using a very hot neutron source. We'll use some well-placed point sources distributed throughout the model.
End of explanation
mesh = openmc.RegularMesh()
mesh.dimension = (120, 1, 40)
mesh.lower_left = (-20.0, 0.0, -10.0)
mesh.upper_right = (20.0, 1.0, 4.0)
mesh_filter = openmc.MeshFilter(mesh)
pot_filter = openmc.CellFilter([1])
pot_tally = openmc.Tally()
pot_tally.filters = [mesh_filter, pot_filter]
pot_tally.scores = ['flux']
water_filter = openmc.CellFilter([5])
water_tally = openmc.Tally()
water_tally.filters = [mesh_filter, water_filter]
water_tally.scores = ['flux']
tallies = openmc.Tallies([pot_tally, water_tally])
tallies.export_to_xml()
openmc.run()
Explanation: ...and set up a couple of mesh tallies. One for the kettle, and one for the water inside.
End of explanation
sp = openmc.StatePoint("statepoint.10.h5")
water_tally = sp.get_tally(scores=['flux'], id=water_tally.id)
water_flux = water_tally.mean
water_flux.shape = (40, 120)
water_flux = water_flux[::-1, :]
pot_tally = sp.get_tally(scores=['flux'], id=pot_tally.id)
pot_flux = pot_tally.mean
pot_flux.shape = (40, 120)
pot_flux = pot_flux[::-1, :]
del sp
from matplotlib import pyplot as plt
fig = plt.figure(figsize=(18, 16))
sub_plot1 = plt.subplot(121, title="Kettle Flux")
sub_plot1.imshow(pot_flux)
sub_plot2 = plt.subplot(122, title="Water Flux")
sub_plot2.imshow(water_flux)
Explanation: Note that the performance is significantly lower than our pincell model due to the increased complexity of the model, but it allows us to examine tally results like these:
End of explanation |
3,014 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Evaluate impact of kernel activation and initialization
1. Generate random training data
Step3: 2. Build a simple fully connected model
Step4: 3. Weights initialization
http
Step5: b) Sigmoid
Step6: c) tanh
Step7: c) Lecun tanh
Step8: d) SNN - SeLU
Step9: Personal work
Self normalizing Exponential Unit (SExU)
Step10: Self normalizing Shifted Exponential Unit (SSExU)
The weights should be shifted by $\alpha$ to converge
Step11: Self normalizing Gated Exponential Neural Network (SGENN) | Python Code:
import numpy as np
import matplotlib.pyplot as plt
%pylab inline
%matplotlib inline
pylab.rcParams['figure.figsize'] = (5, 3)
# Create random train data
X_train = np.random.normal(size=(1000, 100))
Y_train = (X_train.sum(axis=1) > 0) * 1
print Y_train.mean()
print X_train.shape
print Y_train.shape
# Normalize it
X_train -= X_train.mean()
X_train /= X_train.std()
plt.hist(X_train.reshape(-1), 50)
plt.show()
Explanation: Evaluate impact of kernel activation and initialization
1. Generate random training data
End of explanation
import keras
import keras.backend as K
from keras.layers import Input, Dense, multiply, Lambda
from keras.models import Model
from keras.activations import tanh_perso, sig_perso
from keras.initializers import VarianceScaling
import shutil
import time
import os
def _func_to_str(func):
    """If func is a function, returns its string name."""
return func.func_name if callable(func) else str(func)
def simple_FC_model(activation, initializer):
# Define input tensor
input_tensor = Input(shape=(100,))
if callable(initializer) is True:
initializer = initializer()
# Propagate it through 10 fully connected layers
x = Dense(256,
activation=activation,
kernel_initializer=initializer)(input_tensor)
for _ in range(9):
x = Dense(256,
activation=activation,
kernel_initializer=initializer)(x)
x = Dense(1,
activation='sigmoid',
kernel_initializer='lecun_normal')(x)
# Build the keras model
model = Model(input_tensor, x, name='')
sgd = keras.optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(optimizer=sgd, loss='binary_crossentropy')
return model
def show_model(activations, initializers, func_model=None):
    """Shows prediction distribution for each pair of activation/initializer.

    Params:
        activations: a list of activations
        initializers: a list of initializers (same length as activations)
    """
start = time.time()
n_fig = len(activations)
is_gated = False if func_model is None else True
fig, axs = plt.subplots(2, n_fig)
for i in range(n_fig):
act, init = zip(activations, initializers)[i]
# Parameters to Strings
act_str = _func_to_str(act)
if is_gated is True:
act_str = 'gated_' + act_str
init_str = _func_to_str(init)
# Build the model and evaluate it
K.clear_session()
func_model = func_model or simple_FC_model
model = func_model(act, init)
get_activations = K.function([model.layers[0].input, K.learning_phase()],
[model.layers[-2].output] )
act_hist = get_activations([X_train, False])[0]
# Show the 1st results
axs[0, i].hist(act_hist.reshape(-1), 50)
axs[0, i].set_title(act_str + " - " + init_str)
# Show the 2nd results
log_dir = './logs/' + act_str + '-' + init_str
if os.path.isdir(log_dir):
shutil.rmtree(log_dir)
tensorboard = keras.callbacks.TensorBoard(histogram_freq=1,
log_dir=log_dir,
write_grads=True)
model.fit(X_train,
Y_train,
validation_data=(X_train, Y_train),
epochs=10,
batch_size=128,
verbose=False,
callbacks=[tensorboard, ])
pred2 = model.predict(X_train)
act_hist2 = get_activations([X_train, False])[0]
axs[1, i].hist(act_hist2.reshape(-1), 50)
# Write some debug
print "{} {} std: {:.4f}, mean: {:.3f}, acc: {}".format(
act_str,
init_str,
act_hist.std(),
act_hist.mean(),
(pred2.round().T == Y_train).mean())
K.clear_session()
end = time.time()
forward_pass_time = (end - start) / n_fig
print "\nTook and average of {:.3} sec. to perfom training".format(forward_pass_time)
plt.show()
Explanation: 2. Build a simple fully connected model
End of explanation
pylab.rcParams['figure.figsize'] = (15, 4)
activations = ['relu']*4
initializers = ['uniform', 'glorot_uniform', 'normal', 'glorot_normal']
show_model(activations, initializers)
Explanation: 3. Weights initialization
http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf
http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf
Normal: draws weights from a normal distribution with $\mu=0$ and $\sigma = 1$
Glorot normal initializer: draws weights from a truncated normal with $\mu=0$ and $\sigma = \sqrt{\frac{2}{\text{fan\_in} + \text{fan\_out}}}$
Lecun normal initializer: draws weights from a truncated normal with $\mu=0$ and $\sigma = \sqrt{\frac{1}{\text{fan\_in}}}$
Uniform: draws weights from a uniform distribution on $[-x_{max}, x_{max}]$ with $x_{max} = 0.05$
Glorot uniform initializer: draws weights from a uniform distribution with $x_{max} = \sqrt{\frac{6}{\text{fan\_in} + \text{fan\_out}}}$
Lecun uniform initializer: draws weights from a uniform distribution with $x_{max} = \sqrt{\frac{3}{\text{fan\_in}}}$
4. Show activation distributions
a) Relu
End of explanation
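A rough numerical sanity check of these formulas, sketched with the VarianceScaling class imported above (this parameterisation mirrors the Glorot normal rule; the truncation makes the empirical std slightly smaller than the formula):
init = VarianceScaling(scale=1.0, mode='fan_avg', distribution='normal')
w = K.eval(init(shape=(256, 256)))
print w.std(), np.sqrt(2.0 / (256 + 256))   # empirical std vs. the Glorot formula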
pylab.rcParams['figure.figsize'] = (15, 4)
activations = ['sigmoid']*6
initializers = ['uniform', 'glorot_uniform', 'lecun_uniform', 'normal', 'glorot_normal', 'lecun_normal']
show_model(activations, initializers)
Explanation: b) Sigmoid
End of explanation
pylab.rcParams['figure.figsize'] = (15, 4)
activations = ['tanh']*4
initializers = ['uniform', 'glorot_uniform', 'normal', 'glorot_normal']
show_model(activations, initializers)
Explanation: c) tanh
End of explanation
pylab.rcParams['figure.figsize'] = (15, 4)
def lecun_tanh(x):
return 1.7159 * K.tanh(2 * x / 3)
activations = [lecun_tanh]*6
initializers = ['uniform', 'glorot_uniform', 'lecun_uniform', 'normal', 'glorot_normal', 'lecun_normal']
show_model(activations, initializers)
Explanation: c) Lecun tanh
End of explanation
pylab.rcParams['figure.figsize'] = (15, 4)
activations = ['selu']*6
initializers = ['uniform', 'glorot_uniform', 'lecun_uniform', 'normal', 'glorot_normal', 'lecun_normal']
show_model(activations, initializers)
Explanation: d) SNN - SeLU
End of explanation
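For reference, a small numpy sketch of what the selu activation computes (the two constants come from the self-normalizing networks paper by Klambauer et al., 2017):
def selu_np(x, alpha=1.6732632423543772, scale=1.0507009873554805):
    # scaled exponential linear unit: linear for x > 0, scaled exponential below
    return scale * np.where(x > 0, x, alpha * (np.exp(x) - 1))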
pylab.rcParams['figure.figsize'] = (15, 4)
activations = [keras.activations.tanh_perso]*4
initializers = ['glorot_uniform', 'lecun_uniform', 'glorot_normal', 'lecun_normal']
show_model(activations, initializers)
Explanation: Personal work
Self normalizing Exponential Unit (SExU)
End of explanation
pylab.rcParams['figure.figsize'] = (15, 4)
activations = [keras.activations.sig_perso]*4
initializers = ['glorot_uniform', 'lecun_uniform', 'glorot_normal', 'lecun_normal']
show_model(activations, initializers)
Explanation: Self normalizing Shifted Exponential Unit (SSExU)
The weights should be shifted by $\alpha$ to converge
End of explanation
def gated_activation(n_units, activation=None, initializer=None):
def func(x):
alpha = 1.7580993408473768599402175208123
normalizer = np.sqrt(1 + alpha ** 2)
gate = Dense(n_units,
activation='linear',
kernel_initializer=initializer)(x)
gate = Lambda(lambda x: x + alpha)(gate)
gate = keras.layers.Activation(sig_perso)(gate)
act = Dense(n_units,
activation=activation,
kernel_initializer=initializer)(x)
gated_act = multiply([gate, act])
gated_act = Lambda(lambda x: x / normalizer)(gated_act)
return gated_act
return func
def simple_gated_model(activation, initializer):
# Define input tensor
input_tensor = Input(shape=(100,))
# Propagate it through 20 fully connected layers
x = gated_activation(256, activation, initializer)(input_tensor)
for _ in range(19):
x = gated_activation(256, activation, initializer)(x)
x = Dense(1,
activation='sigmoid',
kernel_initializer='lecun_normal')(x)
# Build the keras model
model = Model(input_tensor, x, name='')
sgd = keras.optimizers.SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(optimizer=sgd, loss='binary_crossentropy')
return model
pylab.rcParams['figure.figsize'] = (15, 4)
activations = [tanh_perso]*6
initializers = ['uniform', 'glorot_uniform', 'lecun_uniform', 'normal', 'glorot_normal', 'lecun_normal']
show_model(activations, initializers, func_model=simple_gated_model)
import numpy as np
import matplotlib.pyplot as plt
import keras
import keras.backend as K
from keras.layers import Input, Dense, multiply, Lambda, Dense_gated
from keras.models import Model
from keras.activations import tanh_perso, sig_perso
from keras.initializers import VarianceScaling
import shutil
import time
import os
# Create random train data
X_train = np.random.normal(size=(1000, 100))
Y_train = (X_train.sum(axis=1) > 0) * 1
print Y_train.mean()
print X_train.shape
print Y_train.shape
# Normalize it
X_train -= X_train.mean()
X_train /= X_train.std()
# Define input tensor
input_tensor = Input(shape=(100,))
my_dense_layer = lambda : Dense_gated(256,
activation1=tanh_perso,
kernel_initializer1='lecun_uniform',
activation2=tanh_perso,
kernel_initializer2='lecun_uniform',
shift=1.75809934084737685994,
normalizer=np.sqrt(1 + 1.75809934084737685994 ** 2) )
# Propagate it through 20 fully connected layers
x = my_dense_layer()(input_tensor)
for _ in range(30):
x = my_dense_layer()(x)
x = Dense(1,
activation='sigmoid',
kernel_initializer='lecun_normal')(x)
# Build the keras model
model = Model(input_tensor, x, name='')
sgd = keras.optimizers.SGD(lr=0.001, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(optimizer=sgd, loss='binary_crossentropy', metrics=['acc'])
model.fit(X_train,
Y_train,
validation_data=(X_train, Y_train),
epochs=10,
batch_size=128,
verbose=True )
Explanation: Self normalizing Gated Exponential Neural Network (SGENN)
End of explanation |
3,015 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multiclass Support Vector Machine exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
In this exercise you will
Step1: CIFAR-10 Data Loading and Preprocessing
Step2: SVM Classifier
As you can see, we have prefilled the function compute_loss_naive which uses for loops to evaluate the multiclass SVM loss function.
Step3: The grad returned from the function above is right now all zero. Derive and implement the gradient for the SVM cost function and implement it inline inside the function svm_loss_naive. You will find it helpful to interleave your new code inside the existing function.
To check that you have implemented the gradient correctly, you can numerically estimate the gradient of the loss function and compare the numeric estimate to the gradient that you computed. We have provided code that does this for you
Step4: Inline Question 1
Step5: Stochastic Gradient Descent
We now have vectorized and efficient expressions for the loss, the gradient and our gradient matches the numerical gradient. We are therefore ready to do SGD to minimize the loss. | Python Code:
import os
os.chdir(os.getcwd() + '/..')
# Run some setup code for this notebook
import random
import numpy as np
import matplotlib.pyplot as plt
from utils.data_utils import load_CIFAR10
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
Explanation: Multiclass Support Vector Machine exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
In this exercise you will:
implement a fully-vectorized loss function for the SVM
implement the fully-vectorized expression for its analytic gradient
check your implementation using numerical gradient
use a validation set to tune the learning rate and regularization strength
optimize the loss function with SGD
visualize the final learned weights
End of explanation
# Load the raw CIFAR-10 data
cifar10_dir = 'datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
print('Training data shape: ', X_train.shape)
print('Training labels shape: ', y_train.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y == y_train)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Split the data
num_training = 49000
num_validation = 1000
num_test = 1000
num_dev = 500
mask = range(num_training, num_training+num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = np.random.choice(num_training, num_dev, replace=False)
X_dev = X_train[mask]
y_dev = y_train[mask]
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
# Preprocessing: reshape the image data into rows
X_train = X_train.reshape(X_train.shape[0], -1)
X_val = X_val.reshape(X_val.shape[0], -1)
X_test = X_test.reshape(X_test.shape[0], -1)
X_dev = X_dev.reshape(X_dev.shape[0], -1)
print('Train data shape: ', X_train.shape)
print('Validation data shape: ', X_val.shape)
print('Test data shape: ', X_test.shape)
# Preprocessing: subtract the mean image
# first: compute the image mean based on the training data
mean_image = np.mean(X_train, axis=0)
print(mean_image[:10]) # print a few of the elements
plt.figure(figsize=(4, 4))
plt.imshow(mean_image.reshape((32, 32, 3)).astype('uint8'))
plt.show()
# second: subtract the mean image from train and test data
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
X_dev -= mean_image
# third: append the bias dimension of ones
X_train = np.hstack((X_train, np.ones((X_train.shape[0], 1))))
X_val = np.hstack((X_val, np.ones((X_val.shape[0], 1))))
X_test = np.hstack((X_test, np.ones((X_test.shape[0], 1))))
X_dev= np.hstack((X_dev, np.ones((X_dev.shape[0], 1))))
print(X_train.shape, X_val.shape, X_test.shape, X_dev.shape)
Explanation: CIFAR-10 Data Loading and Preprocessing
End of explanation
# Evaluate the naive implementation of the loss we provided for you:
from classifiers.linear_classifier import svm_loss_naive
import time
# generate a random SVM weight matrix of small numbers
W = np.random.randn(3073, 10) * 0.0001
loss, grad = svm_loss_naive(W, X_dev, y_dev, 0.000005)
print('loss: %f' % (loss, ))
Explanation: SVM Classifier
As you can see, we have prefilled the function compute_loss_naive which uses for loops to evaluate the multiclass SVM loss function.
End of explanation
# gradient check
loss, grad = svm_loss_naive(W, X_dev, y_dev, 0.0)
from utils.gradient_check import grad_check_sparse
f = lambda w: svm_loss_naive(w, X_dev, y_dev, 0.0)[0]
grad_numerical = grad_check_sparse(f, W, grad)
# with regularization
loss, grad = svm_loss_naive(W, X_dev, y_dev, 5e1)
f = lambda w: svm_loss_naive(w, X_dev, y_dev, 5e1)[0]
grad_numerical = grad_check_sparse(f, W, grad)
Explanation: The grad returned from the function above is right now all zero. Derive and implement the gradient for the SVM cost function and implement it inline inside the function svm_loss_naive. You will find it helpful to interleave your new code inside the existing function.
To check that you have implemented the gradient correctly, you can numerically estimate the gradient of the loss function and compare the numeric estimate to the gradient that you computed. We have provided code that does this for you:
End of explanation
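The numeric check above is just a centered finite difference; a one-dimensional sketch of the idea with a toy loss:
f = lambda w: (3.0 * w) ** 2                 # analytic gradient is 18 * w
w0, h = 2.0, 1e-5
print((f(w0 + h) - f(w0 - h)) / (2 * h))     # ~36.0, matching 18 * w0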
# implement the function svm_loss_vectorized
tic = time.time()
loss_naive, grad_naive = svm_loss_naive(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('Naive loss: %e computed in %fs' % (loss_naive, toc - tic))
from classifiers.linear_classifier import svm_loss_vectorized
tic = time.time()
loss_vectorized, grad_vectorized = svm_loss_vectorized(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('Vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic))
# The losses and grad should match but your vectorized implementation should be much faster.
print('loss difference: %f' % (loss_naive - loss_vectorized))
difference = np.linalg.norm(grad_naive - grad_vectorized, ord='fro')
print('grad difference: %f' % difference)
Explanation: Inline Question 1:
It is possible that once in a while a dimension in the gradcheck will not match exactly. What could such a discrepancy be caused by? Is it a reason for concern? What is a simple example in one dimension where a gradient check could fail? Hint: the SVM loss function is not strictly speaking differentiable
Your Answer: the SVM loss function is not strictly speaking differentiable
End of explanation
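A one-dimensional sketch of the kink issue: near the hinge point of max(0, x), the centered numeric estimate averages the two one-sided slopes and disagrees with the analytic (sub)gradient:
relu = lambda x: max(0.0, x)
x0, h = 1e-6, 1e-5
numeric = (relu(x0 + h) - relu(x0 - h)) / (2 * h)   # about 0.55
analytic = 1.0 if x0 > 0 else 0.0                   # 1.0
print(numeric, analytic)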
from classifiers.linear_classifier import LinearSVM
svm = LinearSVM()
tic = time.time()
loss_hist = svm.train(X_train, y_train, learning_rate=1e-7, reg=2.5e4, num_iters=1500, batch_size=200, verbose=True)
toc = time.time()
print('That took %fs' % (toc - tic))
# A useful debugging strategy is to plot the loss as a function of
# iteration number:
plt.plot(loss_hist)
plt.xlabel('Iteration number')
plt.ylabel('Loss value')
plt.show()
y_train_pred = svm.predict(X_train)
print('training accuracy: %f' % (np.mean(y_train == y_train_pred)))
y_val_pred = svm.predict(X_val)
print('validation accuracy: %f' % (np.mean(y_val == y_val_pred)))
# Use the validation set to tune hyperparameters (regularization strength and
# learning rate).
# accuracy of about 0.4 on the validation set
learning_rates = [7e-7, 8e-7, 9e-7]
regularization_strengths = [9e2, 1e3, 2e3]
# results[(learning_rate, reg)] = (train_accuracy, val_accuracy)
results = {}
best_val = -1
best_svm = None
for learning_rate in learning_rates:
for reg in regularization_strengths:
model = LinearSVM()
model.train(X_train, y_train, learning_rate=learning_rate, reg=reg, num_iters=5000,
batch_size=300, verbose=True)
y_train_pred = model.predict(X_train)
train_accuracy = np.mean(y_train == y_train_pred)
y_val_pred = model.predict(X_val)
val_accuracy = np.mean(y_val == y_val_pred)
results[(learning_rate, reg)] = (train_accuracy, val_accuracy)
if val_accuracy > best_val:
best_val = val_accuracy
best_svm = model
print('lr %e reg %e train_accuracy: %f val_accuracy: %f' % (learning_rate, reg, train_accuracy, val_accuracy))
print()
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train_accuracy: %f val_accuracy: %f' % (lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
for lr, reg in sorted(results):
train_accuracy, val_accuracy = results[(lr, reg)]
print('lr %e reg %e train_accuracy: %f val_accuracy: %f' % (lr, reg, train_accuracy, val_accuracy))
# Visualize the cross-validation results
import math
x_scatter = [math.log10(x[0]) for x in results]
y_scatter = [math.log10(x[1]) for x in results]
# plot training accuracy
marker_size = 100
colors = [results[x][0] for x in results]
plt.subplot(2, 1, 1)
plt.scatter(x_scatter, y_scatter, marker_size, c=colors)
plt.colorbar()
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.title('CIFAR-10 training accuracy')
# plot validation accuracy
colors = [results[x][1] for x in results]
plt.subplot(2, 1, 2)
plt.scatter(x_scatter, y_scatter, marker_size, c=colors)
plt.colorbar()
plt.xlabel('log learning rate')
plt.ylabel('log regularization strength')
plt.title('CIFAR-10 validation accuracy')
plt.show()
# Evaluate the best svm on test set
y_test_pred = best_svm.predict(X_test)
test_accuracy = np.mean(y_test == y_test_pred)
print('linear SVM on raw pixels final test set accuracy: %f' % test_accuracy)
# Visualize the learned weights for each class.
w = best_svm.W[:-1, :] # STRIP OUT THE BIAS
w = w.reshape(32, 32, 3, 10)
w_min, w_max = np.min(w), np.max(w)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for i in range(10):
plt.subplot(2, 5, i + 1)
#Rescale the weights to be between 0 and 255
wing = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min)
plt.imshow(wing.astype('uint8'))
plt.axis('off')
plt.title(classes[i])
Explanation: Stochastic Gradient Descent
We now have vectorized and efficient expressions for the loss, the gradient and our gradient matches the numerical gradient. We are therefore ready to do SGD to minimize the loss.
End of explanation |
3,016 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Conservative filtering of security events with Learninspy
Step1: Data loading
Step2: Data preprocessing and labeling
Step3: Model configuration and optimization setup
Step4: Pre-training
Step5: Fine-tuning
Step6: Results
Step7: Saving the model
Step8: KDDTest+ Dataset
Step9: Results
Step10: KDDTest -21 Dataset
Step11: Results
Step12: Plots | Python Code:
# Python libraries
import time
import copy
# Internal dependencies
from learninspy.core.autoencoder import StackedAutoencoder
from learninspy.core.model import NetworkParameters
from learninspy.core.optimization import OptimizerParameters
from learninspy.core.stops import criterion
from learninspy.utils.data import split_data, label_data
from learninspy.utils.data import StandardScaler, LocalLabeledDataSet
from learninspy.utils.evaluation import ClassificationMetrics
from learninspy.utils.plots import plot_neurons, plot_fitting, plot_confusion_matrix
# External dependencies
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Conservative filtering of security events with Learninspy
<img style="display: inline;" src="docs/img/Learninspy-logo_grande2.png" width="300" />
Methodology:
Model the baseline of the data to obtain a conservative filter
Highest possible recall while discarding normal events.
Highest possible precision while retaining anomalous events.
Data used: NSL-KDD dataset (network traffic)
References:
Tavallaee, M., Bagheri, E., Lu, W., and Ghorbani, A. A. (2009). A detailed analysis of the KDD CUP 99 data set. In Proceedings of the Second IEEE Symposium on Computational Intelligence for Security and Defence Applications 2009.
Dependencies
End of explanation
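To make the two goals concrete, a small generic sketch (the numbers are illustrative, not taken from NSL-KDD) of how precision, recall and the reduction figure used later fall out of a 2x2 confusion matrix, with rows as true class and columns as predicted class:
cm = np.array([[900, 100],    # normal: TN, FP
               [ 10, 490]])   # attack: FN, TP
print "attack recall: ", cm[1, 1] / float(cm[1, 0] + cm[1, 1])      # retain as many attacks as possible
print "attack precision: ", cm[1, 1] / float(cm[0, 1] + cm[1, 1])   # keep what is retained mostly attacks
print "reduction: ", 1. - (cm[0, 1] + cm[1, 1]) / float(cm.sum())   # share of events filtered out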
pathtrain = "/home/leeandro04/Documentos/Datos/KDD/NSL_KDD/20 Percent Training Set.csv"
pathtest = "/home/leeandro04/Documentos/Datos/KDD/NSL_KDD/KDDTest+.csv"
pathtest21 = "/home/leeandro04/Documentos/Datos/KDD/NSL_KDD/KDDTest-21.txt"
alltrain = pd.read_csv(pathtrain, header=None)
test = pd.read_csv(pathtest, header=None)
test21 = pd.read_csv(pathtest21, header=None)
alltrain
# Dropping
drop = [0, 1, 2, 3, 4, 5, 6, 7, 8, # Basic Features
9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, # Content Features
]
for d in drop:
alltrain.drop(d, axis=1, inplace=True)
test.drop(d, axis=1, inplace=True)
test21.drop(d, axis=1, inplace=True)
alltrain.describe()
# Breakdown of the attack categories in the labeled KDDTrain+ data
dict_attacks = {'dos': ['back', 'land', 'neptune', 'pod', 'smurf', 'teardrop'],
'u2r': ['buffer_overflow', 'loadmodule', 'perl', 'rootkit'],
'r2l': ['ftp_write', 'guess_passwd', 'imap', 'multihop', 'phf', 'spy', 'warezclient', 'warezmaster'],
'probe': ['ipsweep', 'nmap', 'portsweep', 'satan']}
Explanation: Data loading
End of explanation
# Separate normal traffic from attacks using the ground truth (label column)
normal = alltrain[alltrain[41] == 'normal']
attack = alltrain[alltrain[41] != 'normal']
# Drop the label columns
normal = normal.ix[:, :40]
attack = attack.ix[:, :40]
# Label the data
normal = label_data(normal.values, [0]*len(normal.values))
attack = label_data(attack.values, [1]*len(attack.values))
train, valid = split_data(normal, fractions=[0.7, 0.3])
print "Dimension de características: ", len(normal[0].features)
print "Cantidad de ejemplos normales: ", len(normal)
print "Cantidad de ejemplos de ataques: ", len(attack)
print "Cantidad de train: ", len(train)
print "Cantidad de valid: ", len(valid)
Explanation: Data preprocessing and labeling
End of explanation
# Define the Stacked AutoEncoder configuration and train it
net_params = NetworkParameters(units_layers=[19, 10, 2], activation='ReLU', classification=True,
dropout_ratios=[0.2, 0.0], strength_l1=1e-6, strength_l2=5e-5)
saekdd = StackedAutoencoder(net_params, dropout=[0.0, 0.0])
# For the pre-training stage
local_stops_sae = [criterion['MaxIterations'](30),
criterion['AchieveTolerance'](0.95, key='hits')]
global_stops_sae = [criterion['MaxIterations'](20),
criterion['AchieveTolerance'](0.95, key='hits')]
opt_params_sae = OptimizerParameters(algorithm='Adadelta',
options={'step-rate': 1, 'decay': 0.995, 'momentum': 0.7, 'offset': 1e-8},
stops=local_stops_sae, merge_criter='w_avg', merge_goal='hits')
# For the fine-tuning stage
local_stops_ft = [criterion['MaxIterations'](5),
criterion['AchieveTolerance'](0.9, key='hits')]
global_stops_ft = [criterion['MaxIterations'](20),
criterion['AchieveTolerance'](0.85, key='hits')]
opt_params_ft = OptimizerParameters(algorithm='GD',
options={'step-rate': 1e-3, 'momentum': 0.9, 'momentum_type': 'nesterov'},
stops=local_stops_ft, merge_criter='w_avg')
Explanation: Model configuration and optimization setup
End of explanation
hits_valid = saekdd.fit(train, valid, mini_batch=20, parallelism=10, valid_iters=1,
stops=global_stops_sae, optimizer_params=opt_params_sae, reproducible=True)
hits_attack, predictions = saekdd.evaluate(attack[1000:5000], predictions=True)
print "Hits de valid: ", hits_valid
print "Hits de ataques: ", hits_attack
print "Accuracy de ataques: ", len(filter(lambda (lp, p): lp.label == p, zip(attack[1000:5000], predictions))) / float(len(attack[1000:5000]))
Explanation: Pre-training
End of explanation
train2, valid2 = split_data(train+attack[:1000], fractions=[0.7, 0.3])
hits_total = saekdd.finetune(train2, valid2, mini_batch=20, parallelism=10, stops=global_stops_ft, valid_iters=1,
                             optimizer_params=opt_params_ft, keep_best=True)
Explanation: Fine-tuning
End of explanation
print "Metricas: "
hits, predictions = saekdd.evaluate(valid+attack[1000:], predictions=True)
labels = map(lambda lp: float(lp.label), valid+attack[1000:])
metrics = ClassificationMetrics(zip(predictions, labels), 2)
print "Total of normal events: ", len(valid)
print "Precision of normal: ", metrics.precision(label=0)
print "Recall of normal: ", metrics.recall(label=0)
print "F1-Score of normal: ", metrics.f_measure(label=0)
print "Accuracy of normal: ", metrics.accuracy(label=0)
print ""
print "Total of attack events: ", len(attack[1000:5000])
print "Precision of attacks: ", metrics.precision(label=1)
print "Recall of attacks: ", metrics.recall(label=1)
print "F1-Score of attacks: ", metrics.f_measure(label=1)
print "Accuracy of attacks: ", metrics.accuracy(label=1)
print ""
print "Precision of total: ", metrics.precision()
print "Recall of total: ", metrics.recall()
print "F1-Score of total: ", metrics.f_measure()
print "Accuracy of total: ", metrics.accuracy()
plot_confusion_matrix(metrics.confusion_matrix(), show=True)
reduction = 1. - (metrics.confusion_matrix()[0][1]+metrics.confusion_matrix()[1][1]) / float(sum(sum(metrics.confusion_matrix())))
print "Reduction of total: ", reduction * 100,"%"
Explanation: Results
End of explanation
filename = '/tmp/model/nsl-kdd_learninspy_conft'
saekdd.save(filename)
print "Modelo StackedAutoencoder:"
print str(saekdd.params)
print "Optimización no-supervisada:"
print str(opt_params_sae)
print "Fine-tuning supervisado:"
print str(opt_params_ft)
Explanation: Saving the model
End of explanation
test.describe()
# Separate normal traffic from attacks using the ground truth (label column)
normal = test[test[41] == 'normal']
anomal = test[test[41] != 'normal']
# Drop the label column
normal = normal.ix[:, :40]
anomal = anomal.ix[:, :40]
# Label the data
normal = label_data(normal.values, [0]*len(normal.values))
anomal = label_data(anomal.values, [1]*len(anomal.values))
Explanation: KDDTest+ Dataset
End of explanation
print "Metricas: "
hits, predictions = saekdd.evaluate(normal+anomal, predictions=True)
labels = map(lambda lp: float(lp.label), normal+anomal)
metrics = ClassificationMetrics(zip(predictions, labels), 2)
print "Precision of normal: ", metrics.precision(label=0)
print "Recall of normal: ", metrics.recall(label=0)
print "F1-Score of normal: ", metrics.f_measure(label=0)
print "Accuracy of normal: ", metrics.accuracy(label=0)
print ""
print "Precision of attacks: ", metrics.precision(label=1)
print "Recall of attacks: ", metrics.recall(label=1)
print "F1-Score of attacks: ", metrics.f_measure(label=1)
print "Accuracy of attacks: ", metrics.accuracy(label=1)
print ""
print "Precision of total: ", metrics.precision()
print "Recall of total: ", metrics.recall()
print "F1-Score of total: ", metrics.f_measure()
print "Accuracy of total: ", metrics.accuracy()
plot_confusion_matrix(metrics.confusion_matrix(), show=True)
reduction = 1. - (metrics.confusion_matrix()[0][1]+metrics.confusion_matrix()[1][1]) / float(sum(sum(metrics.confusion_matrix())))
print "Reduction of total: ", reduction * 100,"%"
Explanation: Results
End of explanation
test21.describe()
# Separate normal traffic from attacks using the ground truth (label column)
normal = test21[test21[41] == 'normal']
anomal = test21[test21[41] != 'normal']
# Drop the label column
normal = normal.ix[:, :40]
anomal = anomal.ix[:, :40]
# Label the data
normal = label_data(normal.values, [0]*len(normal.values))
anomal = label_data(anomal.values, [1]*len(anomal.values))
Explanation: KDDTest -21 Dataset
End of explanation
print "Metricas: "
hits, predictions = saekdd.evaluate(normal+anomal, predictions=True)
labels = map(lambda lp: float(lp.label), normal+anomal)
metrics = ClassificationMetrics(zip(predictions, labels), 2)
print "Precision of normal: ", metrics.precision(label=0)
print "Recall of normal: ", metrics.recall(label=0)
print "F1-Score of normal: ", metrics.f_measure(label=0)
print "Accuracy of normal: ", metrics.accuracy(label=0)
print ""
print "Precision of attacks: ", metrics.precision(label=1)
print "Recall of attacks: ", metrics.recall(label=1)
print "F1-Score of attacks: ", metrics.f_measure(label=1)
print "Accuracy of attacks: ", metrics.accuracy(label=1)
print ""
print "Precision of total: ", metrics.precision()
print "Recall of total: ", metrics.recall()
print "F1-Score of total: ", metrics.f_measure()
print "Accuracy of total: ", metrics.accuracy()
plot_confusion_matrix(metrics.confusion_matrix(), show=True)
reduction = 1. - (metrics.confusion_matrix()[0][1]+metrics.confusion_matrix()[1][1]) / float(sum(sum(metrics.confusion_matrix())))
print "Reduction of total: ", reduction * 100,"%"
Explanation: Results
End of explanation
from learninspy.utils.plots import plot_fitting
print "Desempeño del ajuste fino"
plot_fitting(saekdd)
print "Pesos sinápticos del AE"
plot_neurons(saekdd)
data = normal
x = data[100].features
en1 = saekdd.list_layers[0].encode(x).matrix
print "Patrón original: "
print x
print "Patrón codificado: "
print list(en1.T[0])
print ""
median_feat = np.median(map(lambda r: r.features, data), 0)
median_encod = np.median(map(lambda r: saekdd.list_layers[0].encode(r.features).matrix.T[0], data), 0)
print "Mediana de features originales"
plt.stem(median_feat)
plt.show()
print ""
print "Mediana de features codificadas"
plt.stem(median_encod)
plt.show()
Explanation: Plots
End of explanation |
3,017 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Functions
Functions are defined as
def function_name(parameters)
Step1: Recursive function
recursive function is an easy way to solve some mathemtical problems but performance varies. The following is an example of a recursive function calculating the nth number in the Fibonacci Sequence.
Step2: Class
Class is a blueprint defining the charactaristics and behaviors of an object.
python
class MyClass
Step3: This is a basic class definition, the age and salary are needed when creating this object. The new class can be invoked like this
Step4: The __init__ initilaze the variables stored in the class. When they are called inside the class, we should add a self. in front of the variable. The out(Self) method are arbitary functions that can be used by calling Yourclass.yourfunction(). The input to the functions can be added after the self input.
Scope of variables
Very important
Step5: So is it "call-by-value"? The value does not changed?
But Try this
Step6: Confused? Why the list is changeable but the string is not?
Step7: More Confused? Why the function return the same object, is it say that it is call-by-reference? | Python Code:
def hello(a,b):
return a+b
hello(1,1)
hello('a','b')
Explanation: Functions
Functions are defined as
def function_name(parameters):
End of explanation
def Fibonacci(n):
if n < 2:
return n
else:
return Fibonacci(n-1)+Fibonacci(n-2)
print Fibonacci(10)
def Fibonacci(n):
return n if n < 2 else Fibonacci(n-1)+Fibonacci(n-2)
print Fibonacci(10)
Explanation: Recursive function
A recursive function is an easy way to solve some mathematical problems, but performance varies. The following is an example of a recursive function calculating the nth number in the Fibonacci sequence.
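One way to address the performance caveat is memoization; a minimal sketch (not part of the original notebook) that keeps the runtime linear instead of exponential:
python
memo = {0: 0, 1: 1}
def fib_memo(n):
    # Reuse previously computed values instead of recomputing the whole subtree
    if n not in memo:
        memo[n] = fib_memo(n-1) + fib_memo(n-2)
    return memo[n]
print fib_memo(10)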
End of explanation
class Person:
def __init__(self,age,salary):
self.age = age
self.salary = salary
def out(self):
print self.age
print self.salary
Explanation: Class
A class is a blueprint defining the characteristics and behaviors of an object.
python
class MyClass:
...
...
For a simple class, one shall define an instance
python
__init__()
to handle variable when it created. Let's try the following example:
End of explanation
a = Person(30,10000)
a.out()
Explanation: This is a basic class definition; the age and salary are needed when creating this object. The new class can be instantiated like this:
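As a small illustrative sketch (not in the original), a subclass of Person with a method that takes an argument in addition to self:
python
class Employee(Person):
    def raise_salary(self, amount):
        # 'amount' is an ordinary parameter declared after self
        self.salary = self.salary + amount

b = Employee(25, 8000)
b.raise_salary(500)
b.out()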
End of explanation
a = 'Alice'
print a
def change_my_name(my_name):
my_name = 'Bob'
change_my_name(a)
print a
a = 'Alice'
print a
def change_my_name(my_name):
my_name = 'Bob'
return my_name
b = change_my_name(a)
print b
Explanation: The __init__ method initializes the variables stored in the class. When they are accessed inside the class, we add self. in front of the variable name. Methods such as out(self) are arbitrary functions that can be called as YourClass.your_function(). Extra inputs to a method can be added after the self parameter.
Scope of variables
Very important: is Python "call-by-value" or "call-by-reference"?
It is not a simple question, and it often confuses people. We now try some testing.
End of explanation
a_list_of_names = ['Alice','Bob','Christ','Dora']
print a_list_of_names
def change_a_value(something):
something[0] = 'Not Alice'
change_a_value(a_list_of_names)
print a_list_of_names
Explanation: So is it "call-by-value"? The value did not change.
But try this:
End of explanation
a_list_of_names = ['Alice','Bob','Christ','Dora']
print a_list_of_names
a_new_list_of_names = a_list_of_names
print a_new_list_of_names
def change_a_value(something):
something[0] = 'Not Alice'
change_a_value(a_list_of_names)
print "After change_a_value:"
print a_list_of_names
print a_new_list_of_names
print "Is 'a_new_list_of_names' same as 'a_list_of_names' ?"
print a_new_list_of_names is a_list_of_names
Explanation: Confused? Why is the list changed but the string is not?
End of explanation
some_guy = 'Alice'
a_list_of_names = []
a_list_of_names.append(some_guy)
print "Is 'some_guy' same as the first element in a_list_of_names ?"
print (some_guy is a_list_of_names[0])
another_list_of_names = a_list_of_names
print "Is 'a_list_of_names' same as the 'another_list_of_names' ?"
print (a_list_of_names is another_list_of_names)
some_guy = 'Bob'
another_list_of_names.append(some_guy)
print "We have added Bob to the list, now is 'a_list_of_names' same as the 'another_list_of_names' ? "
print (a_list_of_names is another_list_of_names)
print (some_guy,a_list_of_names,another_list_of_names)
some_guy = 'Christ'
print (some_guy,a_list_of_names,another_list_of_names)
Explanation: More confused? Why do both names refer to the same object, and does that mean it is call-by-reference?
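A short sketch (not in the original) that makes the behaviour explicit: Python passes references to objects, so mutating a shared object is visible to the caller, while rebinding a name inside a function only changes the local name.
python
a = 'Alice'
a_list = ['Alice']
def rebind(name):
    name = 'Bob'          # rebinds the local name only; caller's 'a' is untouched
def mutate(lst):
    lst[0] = 'Bob'        # mutates the shared list object in place
rebind(a)
mutate(a_list)
print a, a_list           # prints: Alice ['Bob']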
End of explanation |
3,018 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hypothesis testing for number of mixture components
Step1: Load data, downsample, and keep only first 96 features
Step2: Perform model selection using BIC to find the most likely number of mixture components $\hat{k}$
Step3: Statistical inference
Step 1
Step4: Step 4a
Step5: Step 4b
Step6: Step 5
Step7: Step 6 | Python Code:
import itertools
import csv
import numpy as np
from scipy import linalg
from scipy.stats import cumfreq
import matplotlib.pyplot as plt
import matplotlib as mpl
from sklearn import mixture
%matplotlib inline
np.random.seed(1)
Explanation: Hypothesis testing for number of mixture components
End of explanation
with open('/Users/Tyler/Google Drive/DataScience/synapse_features_z-score.csv','r') as csvfile:
reader = csv.reader(csvfile)
X = np.array([[float(e) for e in r] for r in reader])
# Keep only first 96 features
X = X[np.random.choice(range(X.shape[0]),size=100000,replace=False),0:24*4]
#X = X[:,0:24*4]
print 'data loaded'
Explanation: Load data, downsample, and keep only first 96 features
End of explanation
lowest_bic = np.infty
bic = []
n_components_range = range(1, 21)
cv_types = ['spherical', 'tied', 'diag', 'full']
for cv_type in cv_types:
for n_components in n_components_range:
# Fit a mixture of Gaussians with EM
gmm = mixture.GMM(n_components=n_components, covariance_type=cv_type)
gmm.fit(X)
bic.append(gmm.bic(X))
if bic[-1] < lowest_bic:
lowest_bic = bic[-1]
best_gmm = gmm
bic = np.array(bic)
color_iter = itertools.cycle(['k', 'r', 'g', 'b', 'c', 'm', 'y'])
clf = best_gmm
bars = []
# Plot the BIC scores
spl = plt.subplot(1, 1, 1)
for i, (cv_type, color) in enumerate(zip(cv_types, color_iter)):
xpos = np.array(n_components_range) + .2 * (i - 2)
bars.append(plt.bar(xpos, bic[i * len(n_components_range):
(i + 1) * len(n_components_range)],
width=.2, color=color))
plt.xticks(n_components_range)
plt.ylim([bic.min() * 1.01 - .01 * bic.max(), bic.max()])
plt.title('BIC score per model')
xpos = np.mod(bic.argmin(), len(n_components_range)) + .65 +\
.2 * np.floor(bic.argmin() / len(n_components_range))
plt.text(xpos, bic.min() * 0.97 + .03 * bic.max(), '*', fontsize=14)
spl.set_xlabel('Number of components')
spl.legend([b[0] for b in bars], cv_types)
plt.show()
Explanation: Perform model selection using BIC to find the most likely number of mixture components $\hat{k}$
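A small convenience sketch (not in the original notebook) to recover which covariance type and component count achieved the lowest BIC, assuming the grid above has been run:
python
best_idx = int(np.argmin(bic))
best_cv_type = cv_types[best_idx // len(n_components_range)]
best_k = n_components_range[best_idx % len(n_components_range)]
print 'Lowest BIC:', bic.min()
print 'Best covariance type:', best_cv_type
print 'Best number of components:', best_k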
End of explanation
def gmm_test(X,k0,k1,nboot):
nsample = X.shape[0]
gmm0 = mixture.GMM(n_components=k0, covariance_type='full')
gmm0.fit(X)
L0 = sum(gmm0.score(X))
gmm1 = mixture.GMM(n_components=k1, covariance_type='full')
gmm1.fit(X)
L1 = sum(gmm1.score(X))
LRstat = -2*(L1 - L0)
LRstat0 = []
for i in range(nboot):
Xboot = gmm0.sample(n_samples=nsample)
gmm0_boot = mixture.GMM(n_components=k0, covariance_type = 'full')
gmm0_boot.fit(Xboot)
L0_boot = sum(gmm0_boot.score(Xboot))
gmm1_boot = mixture.GMM(n_components=k1, covariance_type = 'full')
gmm1_boot.fit(Xboot)
L1_boot = sum(gmm1_boot.score(Xboot))
LRstat0.append(-2*(L1_boot - L0_boot))
ecdf, lowlim, binsize, extrapoints = cumfreq(LRstat0)
ecdf = ecdf/len(LRstat0)
bin = np.mean([lowlim,lowlim+binsize])
bins = []
for i in range(len(ecdf)):
bins.append(bin)
bin = bin + binsize
if min(bins) > LRstat:
p = 0
else:
        p = max(ecdf[np.array(bins) <= LRstat])
return p
Explanation: Statistical inference
Step 1: Define model and assumptions
$\vec{X} \sim f_{\vec{X}} \in \{f_{\vec{X}}(\cdot\,;\theta): \theta \in \Theta\}$
We assume $f$ is a GMM and $\theta = [\bf{\mu}, \bf{\Sigma}, \vec{\pi}, k]$, where k is the number of mixture components and $\vec{\pi}$ are the mixing weights of each mixture component.
Step 2: Formalize test
$H_0: k = k_0$
$H_1: k = k_1$
Step 3: Describe the test statistic
$\Lambda = \frac{L(\theta_1;X)}{L(\theta_0;X)}$
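For reference (not in the original), with log-likelihoods $\ell_0$ and $\ell_1$ the usual orientation of the statistic and of the parametric-bootstrap p-value is
$$ T = 2(\ell_1 - \ell_0), \qquad \hat{p} = \frac{1}{B}\sum_{b=1}^{B} \mathbf{1}\{T^{*}_{b} \ge T\}, $$
where $T^{*}_{b}$ is the statistic recomputed on data simulated from the fitted null model. The gmm_test function above uses the opposite sign, $-2(\ell_1-\ell_0)$, for both the observed and bootstrap statistics, so its left-tail ECDF comparison implements the same test.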
End of explanation
alpha = 0.05
k0 = 1
k1 = 3
nboot = 100
n_samples = np.array(range(1,101,5))*10
n_iterations = 100
pow_null = np.array((), dtype=np.dtype('float64'))
gmm0 = mixture.GMM(n_components=k0, covariance_type='full')
gmm0.means_ = np.array([[0]])
gmm0.covars_ = np.array([[[1]]])
gmm0.weights_ = np.array([1])
for n in n_samples:
p = np.array((), dtype=np.dtype('float64'))
for i in range(n_iterations):
X0 = gmm0.sample(n)
p = np.append(p,gmm_test(X0,k0,k1,nboot))
pow_null = np.append(pow_null, np.sum(1.0*(p < alpha))/n_iterations)
print 'finished sampling from null'
print n
print i
Explanation: Step 4a: Sample from the null
End of explanation
pow_alt = np.array((), dtype=np.dtype('float64'))
gmm1 = mixture.GMM(n_components=k1, covariance_type='full')
gmm1.means_ = np.array([[-2],[0],[2]])
gmm1.covars_ = np.array([[[1]],[[1]],[[1]]])
gmm1.weights_ = np.array([.4, .2, .4])
for n in n_samples:
p = np.array((), dtype=np.dtype('float64'))
for i in range(n_iterations):
X1 = gmm1.sample(n)
p = np.append(p,gmm_test(X1,k0,k1,nboot))
pow_alt = np.append(pow_alt, np.sum(1.0*(p < alpha))/n_iterations)
print 'finished sampling from alternative'
Explanation: Step 4b: Sample from the alternative
End of explanation
plt.scatter(n_samples, pow_null, hold=True, label='null')
plt.scatter(n_samples, pow_alt, color='green', hold=True, label='alt')
plt.xscale('log')
plt.xlabel('number of samples')
plt.ylabel('power')
plt.title('Power of likelihood ratio test under null model')
plt.axhline(alpha, color='red', linestyle='--', label='alpha')
plt.legend(loc=5)
plt.show()
Explanation: Step 5: Plot power vs n
End of explanation
k0 = 1
k1 = 17
nboot = 100
p = gmm_test(X,k0,k1,nboot)
print p
Explanation: Step 6: Apply test to actual data
End of explanation |
3,019 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Applying a support vector machine to counterfeit banknote detection
Import the required libraries.
Step1: The data were taken from the UCI Machine Learning Repository at http
Step2: The number of data points in the dataset under study
Step3: Split the loaded data into two samples
Step4: The number of observations in the training sample
Step5: Consider the SVM in the linearly non-separable case with an $L^1$ penalty on the slack variables $(\xi_i)_{i=1}^n$
Step6: The kernel type (and hence the feature map $\phi
Step7: the polynomial kernel
$$ K( x, y ) = \bigl( 1 + \langle x, y\rangle\bigr)^p \,, $$
Step8: and the linear kernel (in $\mathbb{R}^d$)
$$ K( x, y ) = \langle x, y\rangle \,,$$
Step9: The results of the search are shown below
Step10: Let's look at the test-set accuracy of the best model in each kernel class.
Linear kernel
Step11: Gaussian kernel
Step12: Polynomial kernel
Step13: Построим ROC-AUC кривую для лучшей моделей. | Python Code:
import numpy as np, pandas as pd
import matplotlib.pyplot as plt
from sklearn import *
%matplotlib inline
random_state = np.random.RandomState( None )
def collect_result( grid_, names = [ ] ) :
df = pd.DataFrame( { "2-Отклонение" : [ np.std(v_[ 2 ] ) for v_ in grid_.grid_scores_ ],
"1-Точность" : [ v_[ 1 ] for v_ in grid_.grid_scores_ ], },
index = pd.MultiIndex.from_tuples(
[ v_[ 0 ].values() for v_ in grid_.grid_scores_ ],
names = names ) )
df.sort_index( )
return df
Explanation: Applying a support vector machine to counterfeit banknote detection
Import the required libraries.
End of explanation
df = pd.read_csv( 'data_banknote_authentication.txt', sep = ",", decimal = ".", header = None,
names = [ "variance", "skewness", "curtosis", "entropy", "class" ] )
y = df.xs( "class", axis = 1 )
X = df.drop( "class", axis = 1 )
Explanation: The data were taken from the UCI Machine Learning Repository at http://archive.ics.uci.edu/ml/datasets/banknote+authentication.
The dataset was constructed by applying a wavelet transform to grayscale images of forged and genuine banknotes.
End of explanation
print len( X )
Explanation: The dataset contains the following number of data points:
End of explanation
X_train, X_test, y_train, y_test = cross_validation.train_test_split( X, y, test_size = 0.60,
random_state = random_state )
Explanation: We split the loaded data into two samples: a training set ($\text{_train}$) used to develop and tune the models, and a test set ($\text{_test}$) that will not be used during training.
The sample is split into training and test sets in a 2:3 ratio.
End of explanation
print len( X_train )
Explanation: The training sample contains this many observations:
End of explanation
svm_clf_ = svm.SVC( probability = True, max_iter = 100000 )
Explanation: Consider the SVM in the linearly non-separable case with an $L^1$ penalty on the slack variables $(\xi_i)_{i=1}^n$:
$$ \frac{1}{2} \|\beta\|^2 + C \sum_{i=1}^n \xi_i \to \min_{\beta, \beta_0, (\xi_i)_{i=1}^n} \,, $$
subject to, for every $i=1,\ldots,n$, $\xi_i \geq 0$ and
$$ \bigl( \beta' \phi(x_i) + \beta_0 \bigr) y_i \geq 1 - \xi_i \,.$$
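For reference (not in the original notebook), the corresponding dual problem, which is where the kernel $K(x_i,x_j)=\langle\phi(x_i),\phi(x_j)\rangle$ enters, is
$$ \max_{\alpha} \; \sum_{i=1}^n \alpha_i - \frac{1}{2}\sum_{i=1}^n\sum_{j=1}^n \alpha_i\alpha_j y_i y_j K(x_i,x_j) \quad \text{s.t.}\quad 0 \le \alpha_i \le C,\; \sum_{i=1}^n \alpha_i y_i = 0 \,. $$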
End of explanation
## Kernel type: Gaussian (RBF) kernel
grid_rbf_ = grid_search.GridSearchCV( svm_clf_, param_grid = {
        ## Regularization parameter: C = 0.0001, 0.001, 0.01, 0.1, 1, 10.
        "C" : np.logspace( -4, 1, num = 6 ),
        "kernel" : [ "rbf" ],
        ## "Concentration" parameter of the Gaussian kernel
        "gamma" : np.logspace( -2, 2, num = 10 ),
    }, cv = 5, n_jobs = -1, verbose = 0 ).fit( X_train, y_train )
df_rbf_ = collect_result( grid_rbf_, names = [ "Kernel", "C", "Parameter" ] )
Explanation: The kernel type (and hence the feature map $\phi:\mathcal{X}\to\mathcal{H}$) and the regularization parameter $C$ are chosen by exhaustive grid search with $5$-fold cross-validation on the training set $\text{X_train}$.
We consider three kernels: the Gaussian kernel
$$ K( x, y ) = \text{exp}\bigl\{ -\frac{1}{2\gamma^2} \|x-y\|^2 \bigr\} \,,$$
End of explanation
## Kernel type: polynomial kernel
grid_poly_ = grid_search.GridSearchCV( svm.SVC( probability = True, max_iter = 20000, kernel = "poly" ), param_grid = {
        ## Regularization parameter: C = 0.0001, 0.001, 0.01, 0.1, 1, 10.
        "C" : np.logspace( -4, 1, num = 6 ),
        "kernel" : [ "poly" ],
        ## Degree of the polynomial kernel
        "degree" : [ 2, 3, 5, 7 ],
    }, cv = 5, n_jobs = -1, verbose = 0 ).fit( X_train, y_train )
df_poly_ = collect_result( grid_poly_, names = [ "Kernel", "C", "Parameter" ] )
Explanation: the polynomial kernel
$$ K( x, y ) = \bigl( 1 + \langle x, y\rangle\bigr)^p \,, $$
End of explanation
## Kernel type: linear kernel
grid_linear_ = grid_search.GridSearchCV( svm_clf_, param_grid = {
        ## Regularization parameter: C = 0.0001, 0.001, 0.01, 0.1, 1, 10.
        "C" : np.logspace( -4, 1, num = 6 ),
        "kernel" : [ "linear" ],
        "degree" : [ 0 ]
    }, cv = 5, n_jobs = -1, verbose = 0 ).fit( X_train, y_train )
df_linear_ = collect_result( grid_linear_, names = [ "Kernel", "C", "Parameter" ] )
Explanation: and the linear kernel (in $\mathbb{R}^d$)
$$ K( x, y ) = \langle x, y\rangle \,,$$
End of explanation
pd.concat( [ df_linear_, df_poly_, df_rbf_ ], axis = 0 ).sort_index( )
Explanation: The results of the search are shown below:
End of explanation
print grid_linear_.best_estimator_
print "Accuracy: %0.3f%%" % ( grid_linear_.best_estimator_.score( X_test, y_test ) * 100, )
Explanation: Let's look at the test-set accuracy of the best model in each kernel class.
Linear kernel
End of explanation
print grid_rbf_.best_estimator_
print "Accuracy: %0.3f%%" % ( grid_rbf_.best_estimator_.score( X_test, y_test ) * 100, )
Explanation: Gaussian kernel
End of explanation
print grid_poly_.best_estimator_
print "Accuracy: %0.3f%%" % ( grid_poly_.best_estimator_.score( X_test, y_test ) * 100, )
Explanation: Polynomial kernel
End of explanation
result_ = { name_: metrics.roc_curve( y_test, estimator_.predict_proba( X_test )[:,1] )
for name_, estimator_ in {
"Linear": grid_linear_.best_estimator_,
"Polynomial": grid_poly_.best_estimator_,
"RBF": grid_rbf_.best_estimator_ }.iteritems( ) }
fig = plt.figure( figsize = ( 16, 9 ) )
ax = fig.add_subplot( 111 )
ax.set_ylim( -0.1, 1.1 ) ; ax.set_xlim( -0.1, 1.1 )
ax.set_xlabel( "FPR" ) ; ax.set_ylabel( u"TPR" )
ax.set_title( u"ROC-AUC" )
for name_, value_ in result_.iteritems( ) :
fpr, tpr, _ = value_
ax.plot( fpr, tpr, lw=2, label = name_ )
ax.legend( loc = "lower right" )
Explanation: Plot the ROC curves for the best models.
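To complement the plot with numbers (a sketch, not in the original notebook), the AUC of each best estimator can be computed with metrics.roc_auc_score:
python
for name_, estimator_ in { "Linear": grid_linear_.best_estimator_,
                           "Polynomial": grid_poly_.best_estimator_,
                           "RBF": grid_rbf_.best_estimator_ }.iteritems( ):
    auc_ = metrics.roc_auc_score( y_test, estimator_.predict_proba( X_test )[:,1] )
    print "%-12s AUC: %0.4f" % ( name_, auc_ )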
End of explanation |
3,020 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MNIST Convolutional Neural Network - 2nd model
This time we are going to implement a model similar to the one used by Dan Ciresan, Ueli Meier and Jurgen Schmidhuber in 2012. The model should have an error of 0.23% and it's quite similar to the previous one we implemented from Keras documentation. The network was not only one of the best for MNIST, ranking second best at the moment, but also very good on NIST SD 19 and NORB.
We are also going to use Keras checkpoints because of the many epochs required by the model and we're going to integrate some of the most recent techniques, like dropout.
Again for this notebook we are going to use TensorFlow with Keras.
Step1: We are using TensorFlow-GPU 0.12.1 on Python 3.5.2, running on Windows 10 with Cuda 8.0.
We have 3 machines with the same environment and 3 different GPUs, respectively with 384, 1024 and 1664 Cuda cores.
Imports
Step2: Definitions
Step3: Data load
Step4: Model definition
The model is structurally similar to the previous one, with 2 convolutional layers and 1 fully connected layer.
However, there are major differences in the values and sizes, there is one more intermediate max pooling layer, and the activation function is a scaled hyperbolic tangent, as described in the paper. Since Rectified Linear Units started spreading after 2015, we are going to compare two different CNNs, one using tanh (as in the paper) and the other using relu.
1x29x29-20C4-MP2-40C5-MP3-150N-10N DNN.
<img src="images/cvpr2012.PNG" alt="1x29x29-20C4-MP2-40C5-MP3-150N-10N DNN" style="width
Step5: Training and evaluation
Using non-verbose output for training, since we already get some information from the callback.
Step6: Inspecting the result
Step7: Examples of correct predictions (tanh)
Step8: Examples of incorrect predictions (tanh)
Step9: Examples of correct predictions (relu)
Step10: Examples of incorrect predictions (relu)
Step11: Confusion matrix (tanh)
Step12: Confusion matrix (relu) | Python Code:
import tensorflow as tf
# We don't really need to import TensorFlow here since it's handled by Keras,
# but we do it in order to output the version we are using.
tf.__version__
Explanation: MNIST Convolutional Neural Network - 2nd model
This time we are going to implement a model similar to the one used by Dan Ciresan, Ueli Meier and Jurgen Schmidhuber in 2012. The model should have an error of 0.23% and it's quite similar to the previous one we implemented from Keras documentation. The network was not only one of the best for MNIST, ranking second best at the moment, but also very good on NIST SD 19 and NORB.
We are also going to use Keras checkpoints because of the many epochs required by the model and we're going to integrate some of the most recent techniques, like dropout.
Again for this notebook we are going to use TensorFlow with Keras.
End of explanation
import os.path
from IPython.display import Image
from util import Util
u = Util()
import numpy as np
# Explicit random seed for reproducibility
np.random.seed(1337)
from keras.callbacks import ModelCheckpoint
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils
from keras import backend as K
from keras.datasets import mnist
Explanation: We are using TensorFlow-GPU 0.12.1 on Python 3.5.2, running on Windows 10 with Cuda 8.0.
We have 3 machines with the same environment and 3 different GPUs, respectively with 384, 1024 and 1664 Cuda cores.
Imports
End of explanation
batch_size = 512
nb_classes = 10
nb_epoch = 800
# checkpoint path
checkpoints_filepath_tanh = "checkpoints/02_MNIST_tanh_weights.best.hdf5"
checkpoints_filepath_relu = "checkpoints/02_MNIST_relu_weights.best.hdf5"
# model image path
model_image_path = 'images/model_02_MNIST.png' # saving only relu
# input image dimensions
img_rows, img_cols = 28, 28
# number of convolutional filters to use
nb_filters1 = 20
nb_filters2 = 40
# size of pooling area for max pooling
pool_size1 = (2, 2)
pool_size2 = (3, 3)
# convolution kernel size
kernel_size1 = (4, 4)
kernel_size2 = (5, 5)
# dense layer size
dense_layer_size1 = 150
# dropout rate
dropout = 0.15
Explanation: Definitions
End of explanation
# the data, shuffled and split between train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
u.plot_images(X_train[0:9], y_train[0:9])
if K.image_dim_ordering() == 'th':
X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)
X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print('X_train shape:', X_train.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)
Explanation: Data load
End of explanation
model_tanh = Sequential()
model_relu = Sequential()
def initialize_network_with_activation_function(model, activation, checkpoints_filepath):
model.add(Convolution2D(nb_filters1, kernel_size1[0], kernel_size1[1],
border_mode='valid',
input_shape=input_shape, name='covolution_1_' + str(nb_filters1) + '_filters'))
model.add(Activation(activation, name='activation_1_' + activation))
model.add(MaxPooling2D(pool_size=pool_size1, name='max_pooling_1_' + str(pool_size1) + '_pool_size'))
model.add(Convolution2D(nb_filters2, kernel_size2[0], kernel_size2[1]))
model.add(Activation(activation, name='activation_2_' + activation))
model.add(MaxPooling2D(pool_size=pool_size2, name='max_pooling_1_' + str(pool_size2) + '_pool_size'))
model.add(Dropout(dropout))
model.add(Flatten())
model.add(Dense(dense_layer_size1, name='fully_connected_1_' + str(dense_layer_size1) + '_neurons'))
model.add(Activation(activation, name='activation_3_' + activation))
model.add(Dropout(dropout))
model.add(Dense(nb_classes, name='output_' + str(nb_classes) + '_neurons'))
model.add(Activation('softmax', name='softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adadelta',
metrics=['accuracy', 'precision', 'recall', 'mean_absolute_error'])
# loading weights from checkpoints
if os.path.exists(checkpoints_filepath):
model.load_weights(checkpoints_filepath)
initialize_network_with_activation_function(model_tanh, 'tanh', checkpoints_filepath_tanh)
initialize_network_with_activation_function(model_relu, 'relu', checkpoints_filepath_relu)
Image(u.maybe_save_network(model_relu, model_image_path), width=300)
Explanation: Model definition
The model is structurally similar to the previous one, with 2 convolutional layers and 1 fully connected layer.
However, there are major differences in the values and sizes, there is one more intermediate max pooling layer, and the activation function is a scaled hyperbolic tangent, as described in the paper. Since Rectified Linear Units started spreading after 2015, we are going to compare two different CNNs, one using tanh (as in the paper) and the other using relu.
1x29x29-20C4-MP2-40C5-MP3-150N-10N DNN.
<img src="images/cvpr2012.PNG" alt="1x29x29-20C4-MP2-40C5-MP3-150N-10N DNN" style="width: 400px;"/>
The paper doesn't seem to use any dropout layer to avoid overfitting, so we're going to use a dropout of 0.15, way lower than we did before.
It is also worth mentioning that the authors of the paper have their methods to avoid overfitting, like dataset expansion by adding translations, rotations and deformations to the images of the training set.
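For comparison, a minimal sketch (not in the original notebook) of that kind of dataset expansion using Keras' ImageDataGenerator; training would then go through fit_generator instead of fit:
python
from keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(rotation_range=10,
                             width_shift_range=0.1,
                             height_shift_range=0.1)
# Hypothetical usage, not run here:
# model_relu.fit_generator(datagen.flow(X_train, Y_train, batch_size=batch_size),
#                          samples_per_epoch=len(X_train), nb_epoch=nb_epoch,
#                          validation_data=(X_test, Y_test))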
End of explanation
# checkpoint
checkpoint_tanh = ModelCheckpoint(checkpoints_filepath_tanh, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
callbacks_list_tanh = [checkpoint_tanh]
# training
print('training tanh model')
history_tanh = model_tanh.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
verbose=0, validation_data=(X_test, Y_test), callbacks=callbacks_list_tanh)
# evaluation
print('evaluating tanh model')
score = model_tanh.evaluate(X_test, Y_test, verbose=1)
print('Test score:', score[0])
print('Test accuracy:', score[1])
print('Test error:', (1-score[2])*100, '%')
u.plot_history(history_tanh)
u.plot_history(history_tanh, metric='loss', loc='upper left')
# checkpoint
checkpoint_relu = ModelCheckpoint(checkpoints_filepath_relu, monitor='val_acc', verbose=1, save_best_only=True, mode='max')
callbacks_list_relu = [checkpoint_relu]
# training
print('training relu model')
history_relu = model_relu.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
verbose=0, validation_data=(X_test, Y_test), callbacks=callbacks_list_relu)
# evaluation
print('evaluating relu model')
score = model_relu.evaluate(X_test, Y_test, verbose=1)
print('Test score:', score[0])
print('Test accuracy:', score[1])
print('Test error:', (1-score[2])*100, '%')
u.plot_history(history_relu)
u.plot_history(history_relu, metric='loss', loc='upper left')
Explanation: Training and evaluation
Using non-verbose output for training, since we already get some information from the callback.
End of explanation
# The predict_classes function outputs the highest probability class
# according to the trained classifier for each input example.
predicted_classes_tanh = model_tanh.predict_classes(X_test)
predicted_classes_relu = model_relu.predict_classes(X_test)
# Check which items we got right / wrong
correct_indices_tanh = np.nonzero(predicted_classes_tanh == y_test)[0]
incorrect_indices_tanh = np.nonzero(predicted_classes_tanh != y_test)[0]
correct_indices_relu = np.nonzero(predicted_classes_relu == y_test)[0]
incorrect_indices_relu = np.nonzero(predicted_classes_relu != y_test)[0]
Explanation: Inspecting the result
End of explanation
u.plot_images(X_test[correct_indices_tanh[:9]], y_test[correct_indices_tanh[:9]],
predicted_classes_tanh[correct_indices_tanh[:9]])
Explanation: Examples of correct predictions (tanh)
End of explanation
u.plot_images(X_test[incorrect_indices_tanh[:9]], y_test[incorrect_indices_tanh[:9]],
predicted_classes_tanh[incorrect_indices_tanh[:9]])
Explanation: Examples of incorrect predictions (tanh)
End of explanation
u.plot_images(X_test[correct_indices_relu[:9]], y_test[correct_indices_relu[:9]],
predicted_classes_relu[correct_indices_relu[:9]])
Explanation: Examples of correct predictions (relu)
End of explanation
u.plot_images(X_test[incorrect_indices_relu[:9]], y_test[incorrect_indices_relu[:9]],
predicted_classes_relu[incorrect_indices_relu[:9]])
Explanation: Examples of incorrect predictions (relu)
End of explanation
u.plot_confusion_matrix(y_test, nb_classes, predicted_classes_tanh)
Explanation: Confusion matrix (tanh)
End of explanation
u.plot_confusion_matrix(y_test, nb_classes, predicted_classes_relu)
Explanation: Confusion matrix (relu)
End of explanation |
3,021 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Messy modelling
Step1: Introducing kNN
Step2: Let's examine the shape of the dataset (the number of rows and columns), the types of features it contains, and some summary statistics for each feature.
Step3: Next up, let's convert the pandas dataframe into a numpy array and isolate the outcome variable we'd like to predict (here, 0 means 'non-spam', 1 means 'spam')
Step4: Next up, let's split the dataset into a training and test set. The training set will be used to develop and tune our predictive models. The test will be completely left alone until the very end, at which point you'll run your finished models on it. Having a test set will allow you to get a good estimate of how well our models would perform out in the wild on unseen data.
Step5: We are first going to try to predict spam emails with a random forest classifier. Chapter 8 of the Introduction to Statistical Learning book provides a truly excellent introduction to theory behind random forests. Briefly, random forests build a collection of classification trees, which each try to predict classes by recursively splitting the data on the features (and feature values) that split the classes best. Each tree is trained on bootstrapped data, and each split is only allowed to use certain variables. So, an element of randomness is introduced, a variety of different trees are built, and the 'random forest' ensembles these base learners together.
Out of the box, scikit's random forest classifier already performs quite well on the spam dataset
Step6: An overall accuracy of 0.95 is very good for a start, but keep in mind that this is a heavily idealized dataset. Next up, we are going to learn how to pick the best parameters for the random forest algorithm (as well as for an SVM and logistic regression classifier) in order to get better models with (hopefully!) improved accuracy.
The perils of overfitting
In order to build the best possible model that does a good job at describing the underlying trends in a dataset, we need to pick the right HP values. In the following example, we will introduce different strategies of searching for the set of HPs that define the best model, but we will first need to make a slight detour to explain how to avoid a major pitfall when it comes to tuning models - overfitting.
The hallmark of overfitting is good training performance and bad testing performance.
As we mentioned above, HPs are not optimised while a learning algorithm is learning. Hence, we need other strategies to optimise them. The most basic way would just to test different possible values for the HPs and see how the model performs. In a random forest, some hyperparameters we can optimise are n_estimators and max_features. n_estimators controls the number of trees in the forest - the more the better, but more trees comes at the expense of longer training time. max_features controls the size of the random selection of features the algorithm is allowed to consider when splitting a node.
Let's try out some HP values.
Step7: We can manually write a small loop to test out how well the different combinations of these fare (later, we'll find out better ways to do this) | Python Code:
import wget
import pandas as pd
# Import the dataset
data_url = 'https://raw.githubusercontent.com/nslatysheva/data_science_blogging/master/datasets/spam/spam_dataset.csv'
dataset = wget.download(data_url)
dataset = pd.read_csv(dataset, sep=",")
# Take a peek at the data
dataset.head()
Explanation: Messy modelling: overfitting, cross-validation, and the bias-variance trade-off
Introduction
In the next blog post, you will learn how to tune models. Other posts in this series will include random forests, naive Bayes, logistic regression and combining different models into an ensembled meta-model.
Loading and exploring the dataset
We start off by collecting the dataset. It can be found both online and (in a slightly nicer form) in our GitHub repository, so we just fetch it via wget (note: make sure you first type pip install wget into your Terminal since wget is not a preinstalled Python library). It will download a copy of the dataset to your current working directory.
End of explanation
# NOTE: this cell is a fragment from a longer kNN post; it assumes the spam data
# (X, y) and the split (XTrain, XTest, yTrain, yTest) from that post are available.
# The imports and the two kNN classifiers it relies on are added here explicitly.
import numpy as np
import matplotlib.pyplot as plt
from sklearn import metrics
from sklearn.neighbors import KNeighborsClassifier
from sklearn.cross_validation import train_test_split, cross_val_score
from sklearn.grid_search import GridSearchCV
knn3 = KNeighborsClassifier(n_neighbors = 3)
knn99 = KNeighborsClassifier(n_neighbors = 99)
knn3scores = cross_val_score(knn3, XTrain, yTrain, cv = 5)
print knn3scores
print "Mean of scores KNN3:", knn3scores.mean()
knn99scores = cross_val_score(knn99, XTrain, yTrain, cv = 5)
print knn99scores
print "Mean of scores KNN99:", knn99scores.mean()
XTrain, XTest, yTrain, yTest = train_test_split(X, y, random_state = 1) #seed 1
knn = KNeighborsClassifier()
n_neighbors = np.arange(3, 151, 2)
grid = GridSearchCV(knn, [{'n_neighbors':n_neighbors}], cv = 10)
grid.fit(XTrain, yTrain)
cv_scores = [x[1] for x in grid.grid_scores_]
train_scores = list()
test_scores = list()
for n in n_neighbors:
knn.n_neighbors = n
knn.fit(XTrain, yTrain)
train_scores.append(metrics.accuracy_score(yTrain, knn.predict(XTrain)))
test_scores.append(metrics.accuracy_score(yTest, knn.predict(XTest)))
plt.plot(n_neighbors, train_scores, c = "blue", label = "Training Scores")
plt.plot(n_neighbors, test_scores, c = "brown", label = "Test Scores")
plt.plot(n_neighbors, cv_scores, c = "black", label = "CV Scores")
plt.xlabel('Number of K nearest neighbors')
plt.ylabel('Classification Accuracy')
plt.gca().invert_xaxis()
plt.legend(loc = "upper left")
plt.show()
Explanation: Introducing kNN
End of explanation
# Examine shape of dataset and some column names
print (dataset.shape)
print (dataset.columns.values)
# Summarise feature values
dataset.describe()
Explanation: Let's examine the shape of the dataset (the number of rows and columns), the types of features it contains, and some summary statistics for each feature.
End of explanation
import numpy as np
# Convert dataframe to numpy array and split
# data into input matrix X and class label vector y
npArray = np.array(dataset)
X = npArray[:,:-1].astype(float)
y = npArray[:,-1]
Explanation: Next up, let's convert the pandas dataframe into a numpy array and isolate the outcome variable we'd like to predict (here, 0 means 'non-spam', 1 means 'spam'):
End of explanation
from sklearn.cross_validation import train_test_split
# Split into training and test sets
XTrain, XTest, yTrain, yTest = train_test_split(X, y, random_state=1)
Explanation: Next up, let's split the dataset into a training and test set. The training set will be used to develop and tune our predictive models. The test will be completely left alone until the very end, at which point you'll run your finished models on it. Having a test set will allow you to get a good estimate of how well our models would perform out in the wild on unseen data.
End of explanation
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics
rf = RandomForestClassifier()
rf.fit(XTrain, yTrain)
rf_predictions = rf.predict(XTest)
print (metrics.classification_report(yTest, rf_predictions))
print ("Overall Accuracy:", round(metrics.accuracy_score(yTest, rf_predictions),2))
Explanation: We are first going to try to predict spam emails with a random forest classifier. Chapter 8 of the Introduction to Statistical Learning book provides a truly excellent introduction to theory behind random forests. Briefly, random forests build a collection of classification trees, which each try to predict classes by recursively splitting the data on the features (and feature values) that split the classes best. Each tree is trained on bootstrapped data, and each split is only allowed to use certain variables. So, an element of randomness is introduced, a variety of different trees are built, and the 'random forest' ensembles these base learners together.
Out of the box, scikit's random forest classifier already performs quite well on the spam dataset:
End of explanation
n_estimators = np.array([5, 100])
max_features = np.array([10, 50])
Explanation: An overall accuracy of 0.95 is very good for a start, but keep in mind that this is a heavily idealized dataset. Next up, we are going to learn how to pick the best parameters for the random forest algorithm (as well as for an SVM and logistic regression classifier) in order to get better models with (hopefully!) improved accuracy.
The perils of overfitting
In order to build the best possible model that does a good job at describing the underlying trends in a dataset, we need to pick the right HP values. In the following example, we will introduce different strategies of searching for the set of HPs that define the best model, but we will first need to make a slight detour to explain how to avoid a major pitfall when it comes to tuning models - overfitting.
The hallmark of overfitting is good training performance and bad testing performance.
As we mentioned above, HPs are not optimised while a learning algorithm is learning. Hence, we need other strategies to optimise them. The most basic way would be simply to test different possible values for the HPs and see how the model performs. In a random forest, some hyperparameters we can optimise are n_estimators and max_features. n_estimators controls the number of trees in the forest - the more the better, but more trees comes at the expense of longer training time. max_features controls the size of the random selection of features the algorithm is allowed to consider when splitting a node.
Let's try out some HP values.
End of explanation
from itertools import product
# get grid of all possible combinations of hp values
hp_combinations = list(product(n_estimators, max_features))
for hp_combo in range(len(hp_combinations)):
print (hp_combinations[hp_combo])
# Train and output accuracies
rf = RandomForestClassifier(n_estimators=hp_combinations[hp_combo][0],
max_features=hp_combinations[hp_combo][1])
rf.fit(XTrain, yTrain)
RF_predictions = rf.predict(XTest)
print ("Overall Accuracy:", round(metrics.accuracy_score(yTest, RF_predictions),2))
Explanation: We can manually write a small loop to test out how well the different combinations of these fare (later, we'll find out better ways to do this):
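As a preview of the "better way" (a sketch, not in the original post), scikit-learn's GridSearchCV runs the same search with cross-validation on the training set:
python
from sklearn.grid_search import GridSearchCV
param_grid = {"n_estimators": [5, 100], "max_features": [10, 50]}
grid_rf = GridSearchCV(RandomForestClassifier(), param_grid, cv=5)
grid_rf.fit(XTrain, yTrain)
print (grid_rf.best_params_)
print ("Best cross-validated accuracy:", round(grid_rf.best_score_, 2))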
End of explanation |
3,022 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Check whether a SMIRNOFF-format force field is able to parametrize a dataset of interest
This notebook runs a quick initial analysis of whether a molecule set can be simulated by a given SMIRNOFF-format force field. While some attempt has been made to improve the speed of this code, it is not particularly high-performance in its current state, and may crash the notebook if you try to analyze more than 10,000 molecules.
First, we define global variables, and create a helper function to check for parameterization failures.
It is not important to understand or modify the following two notebook cells.
Step2: Loading the molecule dataset
There are several ways to load molecules into the Open Force Field Toolkit. Below, we show how to load molecule databases from .smi, .sdf, and .mol2 format files. It's important that these databases contain a complete representation of each molecule, including formal charge, stereochemistry, and protonation state.
Note that loading .mol2 files is currently only supported using the OpenEye Toolkit.
Option 1
Step3: Option 2
Step4: Option 3
Step5: Analyze all molecules in the data set
Option 1
Step6: Option 2
Step7: Write a report of parameterization failures
Since the results from above are only held in memory during this Python session, it can be helpful to save this analysis to disk.
The following code will write files containing a 2D image and a tagged SMILES for each unparameterizable motif found above. Results for each molecule are saved to different folders. If molecule names were not provided, these folders will be named molecule_N/, where N is the order in which the molecules were read. A single molecule may have multiple parameterization failures, and each one is written both as an image (eg. molecule_2/Bonds_1-2.png) and a tagged SMILES (eg. molecule_2/Bonds_1-2.smi).
Note that this does not clear previous outputs, so it is possible that running this script on several datasets will overwrite or mix data with previous runs. Run rm -r molecule_* between runs to prevent potential issues.
When a molecule contains even one instance of unparameterizable chemistry, it can result in a large number of ProperTorsions failures being reported. To help reduce redundancy in these cases, the code below groups ProperTorsions output such that the atom indices defining the central bond are written first in the file name, followed by the atom indices of the whole torsion. This way, lexical (alphabetical) displays of file names make it easier to identify possibly-redundant outputs.
Concretely, the second molecule in the SMILES set is an example of this issue, as 1-indexed atom in that molecule is an unparameterizable Hg. This leads to the following output | Python Code:
from openff.toolkit.topology import Molecule, Topology
from openff.toolkit.typing.engines.smirnoff import (ForceField,
UnassignedValenceParameterException, BondHandler, AngleHandler,
ProperTorsionHandler, ImproperTorsionHandler,
vdWHandler)
from simtk import unit
import numpy as np
from rdkit import Chem
from rdkit.Chem import Draw, AllChem
from rdkit.Chem.Draw import IPythonConsole
from IPython.display import display
from copy import deepcopy
import time
import os
# Define "super generics", which are parameters that will match
# each instance of a valence type. By adding these to a ForceField
# object, we ensure that a given ParameterHandler will not encounter
# a parameterization failure
super_generics = {'Bonds':
BondHandler.BondType(smirks='[*:1]~[*:2]',
k=0*unit.kilocalorie/unit.mole/unit.angstrom**2,
length=0*unit.angstrom
),
'Angles':
AngleHandler.AngleType(smirks='[*:1]~[*:2]~[*:3]',
angle=0*unit.degree,
k=0*unit.kilocalorie/unit.mole/unit.degree**2
),
'ProperTorsions':
ProperTorsionHandler.ProperTorsionType(smirks='[*:1]~[*:2]~[*:3]~[*:4]',
phase1=0*unit.degree,
periodicity1=0,
k1=0*unit.kilocalorie/unit.mole,
idivf1=1
),
'ImproperTorsions':
ImproperTorsionHandler.ImproperTorsionType(smirks='[*:1]~[*:2](~[*:3])~[*:4]',
phase1=0*unit.degree,
periodicity1=0,
k1=0*unit.kilocalorie/unit.mole,
idivf1=1
),
'vdW':
vdWHandler.vdWType(smirks='[*:1]',
rmin_half=0*unit.angstrom,
epsilon = 0*unit.kilocalorie/unit.mole
),
}
def report_missing_parameters(molecule, forcefield):
Analyze a molecule using a provided ForceField, generating a report of any
chemical groups in the molecule that are lacking parameters.
Parameters
----------
molecule : an openforcefield.topology.FrozenMolecule
The molecule to analyze
forcefield : an openforcefield.typing.engine.smirnoff.ForceField
The ForceField object to use
Returns
-------
missing_parameters : dict[tagname: list[dict[tagged_smiles:string, image:PIL.Image, atom indices:list[int]]]]
A hierarchical dictionary, with first level keys indicating ForceField tag
names (eg. "Bonds"), and first-level values which are lists of dictionaries.
Each dictionary in this list reflects one missing parameter, and contains the
following key:value pairs :
* "image": PIL.Image
* shows a 2D drawing, highlighting the feature that could not be parametrized
* "tagged_smiles": string
* SMILES of the whole molecule, tagging the atom indices which could not be
parametrized
* "atom_indices": tuple(int)
* The indices of atoms which could not be parametrized
highlight_color = (0.75, 0.75, 0.75)
# Make deepcopies of both inputs, since we may modify them in this function
forcefield = deepcopy(forcefield)
molecule = deepcopy(molecule)
# Set partial charges to placeholder values so that we can skip AM1-BCC
# during parameterization
molecule.partial_charges = (np.zeros(molecule.n_atoms) + 0.1) * unit.elementary_charge
# Prepare dictionary to catch parameterization failure info
success = False
missing_params = {}
while not success:
# Try to parameterize the system, catching the exception if there is one.
try:
forcefield.create_openmm_system(molecule.to_topology(),
charge_from_molecules=[molecule],
allow_nonintegral_charges=True)
success = True
except UnassignedValenceParameterException as e:
success = False
# Ensure that there is a list initialized for missing parameters
# under this tagname
handler_tagname = e.handler_class._TAGNAME
if handler_tagname not in missing_params:
missing_params[handler_tagname] = []
# Create a shortcut to the topology atom tuples attached to
# the parametrization error
top_atom_tuples = e.unassigned_topology_atom_tuples
# Make a summary of the missing parameters from this attempt and add it to
# the missing_params dict
rdmol = molecule.to_rdkit()
for top_atom_tuple in top_atom_tuples:
orig_atom_indices = [i.topology_atom_index for i in top_atom_tuple]
# Make a copy of the input RDMol so that we don't modify the original
this_rdmol = deepcopy(rdmol)
# Attach tags to relevant atoms so that a tagged SMILES can be written
orig_rdatoms = []
for tag_idx, atom_idx in enumerate(orig_atom_indices):
rdatom = this_rdmol.GetAtomWithIdx(atom_idx)
rdatom.SetAtomMapNum(tag_idx + 1)
orig_rdatoms.append(rdatom)
tagged_smiles = Chem.MolToSmiles(this_rdmol)
# Make tagged hydrogens into deuteriums so that RemoveHs doesn't get rid of them
for rdatom in orig_rdatoms:
if rdatom.GetAtomicNum() == 1:
rdatom.SetIsotope(2)
# Remove hydrogens, since they clutter up the 2D drawing
# (tagged Hs are not removed, since they were converted to deuterium)
h_less_rdmol = Chem.RemoveHs(this_rdmol)
# Generate 2D coords, since drawing from 3D can look really weird
Draw.rdDepictor.Compute2DCoords(h_less_rdmol)
# Search over the molecule to find the indices of the tagged atoms
# after hydrogen removal
h_less_atom_indices = [None for i in orig_atom_indices]
for rdatom in h_less_rdmol.GetAtoms():
# Convert deuteriums back into hydrogens
if rdatom.GetAtomicNum() == 1:
rdatom.SetIsotope(1)
atom_map_num = rdatom.GetAtomMapNum()
if atom_map_num == 0:
continue
h_less_atom_indices[atom_map_num-1] = rdatom.GetIdx()
# Once the new atom indices are found, use them to find the H-less
# bond indices
h_less_rdbonds = []
for i in range(len(h_less_atom_indices)-1):
rdbond = h_less_rdmol.GetBondBetweenAtoms(
h_less_atom_indices[i],
h_less_atom_indices[i+1])
h_less_rdbonds.append(rdbond)
h_less_bond_indices = [bd.GetIdx() for bd in h_less_rdbonds]
# Create a 2D drawing of the molecule, highlighting the
# parameterization failure
highlight_atom_colors = {idx:highlight_color for idx in h_less_atom_indices}
highlight_bond_colors = {idx:highlight_color for idx in h_less_bond_indices}
image = Draw.MolsToGridImage([h_less_rdmol],
highlightAtomLists=[h_less_atom_indices],
highlightBondLists=[h_less_bond_indices],
molsPerRow=1,
highlightAtomColors=[highlight_atom_colors],
highlightBondColors=[highlight_bond_colors],
subImgSize=(600,600),
returnPNG=False,
)
# Structure and append the relevant info to the missing_params dictionary
param_description = {'atom_indices': orig_atom_indices,
'image': image,
'tagged_smiles': tagged_smiles
}
missing_params[handler_tagname].append(param_description)
# Add a "super generic" parameter to the top of this handler's ParameterList,
# which will make it always find parameters for each term. This will prevent the same
# parameterization exception from being raised in the next attempt.
param_list = forcefield.get_parameter_handler(handler_tagname).parameters
param_list.insert(0, super_generics[handler_tagname])
return missing_params
Explanation: Check whether a SMIRNOFF-format force field is able to parametrize a dataset of interest
This notebook runs a quick initial analysis of whether a molecule set can be simulated by a given SMIRNOFF-format force field. While some attempt has been made to improve the speed of this code, it is not particularly high-performance in its current state, and may crash the notebook if you try to analyze more than 10,000 molecules.
First, we define global variables, and create a helper function to check for parameterization failures.
It is not important to understand or modify the following two notebook cells.
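Once those cells have been run, a quick sanity check on a single molecule might look like this (a sketch, not part of the original notebook; the SMILES below is just a hypothetical example):
python
ff = ForceField('openff-1.0.0.offxml')
test_mol = Molecule.from_smiles('CCO')
failures = report_missing_parameters(test_mol, ff)
print(f"Unparameterizable tags found: {list(failures.keys())}")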
End of explanation
molecules = Molecule.from_file('example_molecules.smi', allow_undefined_stereo=True)
# We also provide a SMILES dataset of ~1000 problematic molecules
#molecules = Molecule.from_file('problem_smiles.smi', allow_undefined_stereo=True)
print(f'Loaded {len(molecules)} molecules')
Explanation: Loading the molecule dataset
There are several ways to load molecules into the Open Force Field Toolkit. Below, we show how to load molecule databases from .smi, .sdf, and .mol2 format files. It's important that these databases contain a complete representation of each molecule, including formal charge, stereochemistry, and protonation state.
Note that loading .mol2 files is currently only supported using the OpenEye Toolkit.
Option 1: Load a SMILES dataset
End of explanation
molecules = Molecule.from_file('example_molecules.sdf', allow_undefined_stereo=True)
print(f'Loaded {len(molecules)} molecules')
Explanation: Option 2: Load a SDF dataset
End of explanation
try:
molecules = Molecule.from_file('example_molecules.mol2', allow_undefined_stereo=True)
print(f'Loaded {len(molecules)} molecules')
except NotImplementedError as e:
print(e)
print("Loading mol2 files requires the OpenEye Toolkits")
Explanation: Option 3: Load a mol2 dataset
This option requires the OpenEye Toolkit!
End of explanation
start_time = time.time()
forcefield = ForceField('openff-1.0.0.offxml')
results = {}
for mol_idx, molecule in enumerate(molecules):
# Prepare a title for this molecule
if molecule.name == '':
mol_name = f'molecule_{mol_idx+1}'
else:
mol_name = molecule.name
print('\n'*3)
print('=' * 60)
print('=' * 60)
print(f'Processing "{mol_name}" with smiles {molecule.to_smiles()}')
print('=' * 60)
print('=' * 60)
# Analyze missing parameters
time_i = time.time()
missing_params = report_missing_parameters(molecule, forcefield)
print(f'Molecule analysis took {time.time()-time_i} seconds')
results[mol_name] = missing_params
for tagname, missing_tag_params in missing_params.items():
print('~'*60)
print(tagname)
print('~'*60)
for missing_param in missing_tag_params:
print(missing_param['tagged_smiles'])
display(missing_param['image'])
print(f'Processing {len(molecules)} molecules took {time.time()-start_time} seconds')
Explanation: Analyze all molecules in the data set
Option 1: Live visualization (single thread: ~1 second per molecule)
Here, we run the above-defined function on all molecules in the dataset. The parameterization failures will be shown in the notebook as the data set is processed.
Note: If the dataset is large this will take a very long time, and displaying all parameterization failures may run into memory/output limits in the notebook. If you're analyzing more than ~1,000 molecules, use Option 2 below.
End of explanation
from multiprocessing import set_start_method, cpu_count, Pool
set_start_method('fork')
num_threads = max(1, int(cpu_count() * 0.75))
def check_molecule(inputs):
mol_idx = inputs[0]
molecule = inputs[1]
forcefield = ForceField('openff-1.0.0-RC1.offxml')
# Prepare a title for this molecule
if molecule.name == '':
mol_name = f'molecule_{mol_idx+1}'
else:
mol_name = molecule.name
print('\n'*3)
print('=' * 60)
print('=' * 60)
print(f'Processing "{mol_name}" with smiles {molecule.to_smiles()}')
print('=' * 60)
print('=' * 60)
# Analyze missing parameters
time_i = time.time()
missing_params = report_missing_parameters(molecule, forcefield)
print(f'Molecule analysis took {time.time()-time_i} seconds')
return (mol_name, missing_params)
start_time = time.time()
p = Pool(num_threads)
job_args = [(idx, molecule) for idx, molecule in enumerate(molecules)]
result_list = p.map(check_molecule, job_args)
results = dict(result_list)
print(f'Processing {len(molecules)} molecules took {time.time()-start_time} seconds')
Explanation: Option 2: No live visualization (multiple threads: ~(1/num_threads) seconds per molecule)
This method is faster than Option 1, but will not display unparameterizable chemistry in the notebook. 2D depictions and tagged SMILES of unparameterizable chemistry will be written to file in the final cell of the notebook.
This will by default use 75% of the system's CPUs. If this is not desired, manually set num_threads below.
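For example, to pin the pool size yourself, replace the automatic value with a fixed one (the number below is purely illustrative):
# num_threads = 4  # hypothetical manual override of the 75%-of-CPUs default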
End of explanation
# Iterate over all molecules, and create a folder for each
# one that experienced a parameterization failure
for mol_name, result_dict in results.items():
if result_dict == {}:
continue
if not os.path.exists(mol_name):
os.mkdir(mol_name)
# Write each parameterization failure to file
for tagname, missing_parm_dicts in result_dict.items():
elements = []
for missing_parm_dict in missing_parm_dicts:
inds = missing_parm_dict['atom_indices']
inds_str = '-'.join([str(i) for i in inds])
if tagname == 'ProperTorsions':
cent_atom_1 = min(inds[1], inds[2])
cent_atom_2 = max(inds[1], inds[2])
file_prefix = f'{tagname}__{cent_atom_1}-{cent_atom_2}__{inds_str}'
else:
file_prefix = f'{tagname}_{inds_str}'
png_file = os.path.join(mol_name, file_prefix+'.png')
smi_file = os.path.join(mol_name, file_prefix+'.smi')
missing_parm_dict['image'].save(png_file)
with open(smi_file, 'w') as of:
of.write(missing_parm_dict['tagged_smiles'])
! ls molecule_*/*
Explanation: Write a report of parameterization failures
Since the results from above are only held in memory during this Python session, it can be helpful to save this analysis to disk.
The following code will write files containing a 2D image and a tagged SMILES for each unparameterizable motif found above. Results for each molecule are saved to different folders. If molecule names were not provided, these folders will be named molecule_N/, where N is the order in which the molecules were read. A single molecule may have multiple parameterization failures, and each one is written both as an image (e.g. molecule_2/Bonds_1-2.png) and a tagged SMILES (e.g. molecule_2/Bonds_1-2.smi).
Note that this does not clear previous outputs, so it is possible that running this script on several datasets will overwrite or mix data with previous runs. Run rm -r molecule_* between runs to prevent potential issues.
When a molecule contains even one instance of unparameterizable chemistry, it can result in a large number of ProperTorsions failures being reported. To help reduce redundancy in these cases, the code below groups ProperTorsions output such that the atom indices defining the central bond are written first in the file name, followed by the atom indices of the whole torsion. This way, lexical (alphabetical) displays of file names make it easier to identify possibly-redundant outputs.
Concretely, the second molecule in the SMILES set is an example of this issue, as the atom with index 1 in that molecule is an unparameterizable Hg (mercury) atom. This leads to the following output:
```
molecule_2/ProperTorsions__0-1__2-1-0-23.png
molecule_2/ProperTorsions__0-1__2-1-0-24.png
molecule_2/ProperTorsions__0-1__2-1-0-25.png
molecule_2/ProperTorsions__1-2__0-1-2-3.png
molecule_2/ProperTorsions__1-2__0-1-2-4.png
molecule_2/ProperTorsions__1-2__0-1-2-5.png
```
Here, listing the central atoms early in the filename makes it easy to see that the 1-indexed atom is likely to be the cause of the error.
When reporting parameterization failures, note that the tagged SMILES contains the full identity of the molecule, and that it is not trivial to extract only the motif which caused the parameterization failure. To report a parameterization failure without revealing the identity of the entire molecule, consider cropping the molecule image to only show the tagged atoms and their first or second neighbors, and uploading it to the Open Force Field Toolkit issue tracker
End of explanation |
3,023 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab of data analysis with python
In this lab we will introduce some of the modules that we will use in the rest of the labs of the course.
The usual beginning of any python module is a list of import statements. In most of our files we will use the following modules
Step1: 1. NUMPY
The numpy module is useful for scientific computing in Python.
1.a Create numpy arrays
The main data structure in numpy is the n-dimensional array. You can define a numpy array from a list or a list of lists. Python will try to build it with the appropriate dimensions. You can check the dimensions of the array with shape()
Step2: Define a new 3x2 array named my_array2 with [1, 2, 3] in the first row and [4,5,6] in the second.
Check the dimensions of the array.
Step3: Until now, we have created arrays defining their elements. But you can also create one by defining a range
Step4: Check the functions np.linspace, np.logspace and np.meshgrid which let you create more sophisticated ranges
You can create numpy arrays in several ways. For example numpy provides a number of functions to create special types of matrices.
Create 3 arrays using ones, zeros and eye. If you have any doubt about the parameters of the functions have a look at the help with the function help( ).
Step5: 1.b Elementwise operations
One of the main advantages of numpy arrays is that operations are propagated to the individual elements of the array
Step6: Compare this with operations over python lists
Step7: 1.c Indexing numpy arrays
There are several operations you can do with numpy arrays similar to the ones you can do with matrices in Matlab. One of the most important is slicing (we saw it when we talked about lists). It consists in extracting some subarray from the array.
Step8: One important thing to consider when you do slicing are the dimensions of the output array. Run the following cell and check the shape of my_array3. Check also its dimension with ndim function
Step9: If you have correctly computed it you will see that my_array3 is one dimensional. Sometimes this can be a problem when you are working with 2D matrices (and vectors can be considered as 2D matrices with one of the sizes equal to 1). To solve this, numpy provides the newaxis constant.
Step10: Check again the shape and dimension of my_array3
Step11: When you try to index different rows and columns of a matrix you have to define it element by element. For example, consider that we want to select elements of rows [0, 3] and columns [0, 2], we have to define the row 0 index for each column to be selected....
Step12: To make this easier, we can use the ix_ function which automatically creates all the needed indexes
Step13: Another important array manipulation method is array concatenation or stacking. It is useful to always state explicitly in which direction we want to stack the arrays. For example, in the following example we are stacking the arrays horizontally (column-wise).
1.d Concatenate numpy arrays
Step14: EXERCISE
Step15: Numpy also includes the functions hstack() and vstack() to concatenate by columns or rows, respectively.
EXERCISE
Step16: 1.e Matrix multiplications
Finally numpy provides all the basic matrix operations
Step17: EXERCISE
Step18: 1.f Other useful functions
Some functions let you
Step19: Compute the maximum, minimum or, even, the positions of the maximum or minimum
Step20: Sort a vector
Step21: Calculate some statistical parameters
Step22: Obtain random numbers
Step23: In addition to numpy we have a more advanced library for scientific computing, scipy. Scipy includes modules for linear algebra, signal processing, fourier transform, ...
2. Matplotlib
One important step of data analysis is data visualization. In python the simplest plotting library is matplotlib and its syntax is similar to Matlab's plotting library. In the next example we plot two sinusoids with different symbols.
Step24: 3. Classification example
One of the main machine learning problems is classification. In the following example we will load and visualize a dataset that can be used in a classification problem.
The iris dataset is the most popular pattern recognition dataset. It consists of 150 instances with 4 features of iris flowers
Step25: In the previous code we have saved the features in matrix X and the class labels in the vector labels. Both are 2D numpy arrays.
We are also printing the shapes of each variable (see that we can also use array_name.shape to get the shape, apart from function shape( )). This shape checking is good to see if we are not making mistakes.
3.2 Visualizing the data
Extract the first two features of the data (sepal length and width) and plot the first versus the second in a figure, use a different color for the data corresponding to different classes.
First of all, you probably want to split the data according to each class label.
Step26: According to this plot, which classes seem more difficult to distinguish?
4. Regression example
Now that we know how to load some data and visualize it we will try to solve a simple regression task.
Our objective in this example is to predict the crime rates in different areas of the US using some socio-demographic data.
This dataset has 127 socioeconomic variables, of different nature
Step27: Take the columns (5,6,17) of the data and save them in a matrix X_com. This will be our input data. Convert this array into a float array. The shape should be (1994,3)
EXERCISE
Step28: EXERCISE
Step29: 4.3 Train/Test splitting
Now we are about to start doing machine learning. But, first of all, we have to separate our data between train and test.
The train data will be used to adjust the parameters of our model (train).
The test data will be used to evaluate our model.
EXERCISE
Step30: 4.4 Normalization
Most machine learning algorithms require that the data is standardized (mean=0, standard deviation=1). Scikit-learn provides a tool to do that in the object sklearn.preprocessing.StandardScaler
EXERCISE
Step31: 4.5 Training
We will use two different K-NN regressors for this example. One with K (n_neighbors) = 1 and the other with K=7.
Read the API and this example to understand how to fit the model.
EXERCISE
Step32: 4.6 Prediction and evaluation
Now use the two models you have trained to predict the test output y_test. Then evaluate it measuring the Mean-Square Error (MSE).
The formula of MSE is
$$\text{MSE}=\frac{1}{K}\sum_{k=1}^{K}(\hat{y}-y)^2$$
The answer should be
Step33: 4.7 Saving the results
Finally we will save all our prediction for the model with K=1 in a csv file. To do so you can use the following code Snippet, where y_pred are the predicted output values for test. | Python Code:
%matplotlib inline
# The line above is needed to include the figures in this notebook, you can remove it if you work with a normal script
import numpy as np
import csv
import matplotlib.pyplot as plt
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.cross_validation import train_test_split
Explanation: Lab of data analysis with python
In this lab we will introduce some of the modules that we will use in the rest of the labs of the course.
The usual beginning of any python module is a list of import statements. In most of our files we will use the following modules:
numpy: The basic scientific computing library.
csv: Used for input/output using comma separated values files, one of the standard formats in data management.
matplotlib: Used for plotting figures and graphs.
sklearn: Scikit-learn is the machine learning library for python.
End of explanation
my_array = np.array([[1, 2],[3, 4]])
print my_array
print np.shape(my_array)
Explanation: 1. NUMPY
The numpy module is useful for scientific computing in Python.
1.a Create numpy arrays
The main data structure in numpy is the n-dimensional array. You can define a numpy array from a list or a list of lists. Python will try to build it with the appropriate dimensions. You can check the dimensions of the array with shape()
End of explanation
my_array2 = np.array([[1, 2, 3],[4, 5, 6]])
print my_array2
print np.shape(my_array2)
Explanation: Define a new 3x2 array named my_array2 with [1, 2, 3] in the first row and [4,5,6] in the second.
Check the dimensions of the array.
End of explanation
my_new_array = np.arange(3,11,2)
print my_new_array
Explanation: Until now, we have created arrays defining their elements. But you can also create one by defining a range
End of explanation
A1 = np.zeros((3,4))
print A1
A2 = np.ones((2,6))
print A2
A3 = np.eye(5)
print A3
Explanation: Check the functions np.linspace, np.logspace and np.meshgrid which let you create more sophisticated ranges
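As a quick, purely illustrative sketch of those helpers:
print(np.linspace(0, 1, 5))   # 5 evenly spaced points between 0 and 1
print(np.logspace(0, 3, 4))   # 4 points spaced logarithmically between 10**0 and 10**3
gx, gy = np.meshgrid(np.arange(3), np.arange(2))  # coordinate grids for evaluating functions on a 2-D grid
print(gx)
print(gy)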
You can create numpy arrays in several ways. For example numpy provides a number of functions to create special types of matrices.
Create 3 arrays using ones, zeros and eye. If you have any doubt about the parameters of the functions have a look at the help with the function help( ).
End of explanation
a = np.array([0,1,2,3,4,5])
print a*2
print a**2
Explanation: 1.b Elementwise operations
One of the main advantages of numpy arrays is that operations are propagated to the individual elements of the array
End of explanation
[1,2,3,4,5]*2
Explanation: Compare this with operations over python lists:
End of explanation
x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
print x[1:7:2] # start:stop:step
print x[-2:10] # confusing, avoid negative values...
print x[8:10] # equivalent
print x[-3:3:-1] # confusing, avoid negative values...
print x[7:3:-1] # equivalent
print x[:7] # when start value is not indicated, it takes the first
print x[5:] # when stop value is not indicated, it takes the last
print x[:] # select "from first to last" == "all"
Explanation: 1.c Indexing numpy arrays
There are several operations you can do with numpy arrays similar to the ones you can do with matrices in Matlab. One of the most important is slicing (we saw it when we talked about lists). It consists in extracting some subarray from the array.
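Slicing works the same way in two dimensions; a small sketch with an arbitrary array:
M = np.arange(12).reshape(3, 4)
print(M[1:, :2])   # rows from index 1 onwards, first two columns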
End of explanation
my_array = np.array([[1, 2],[3, 4]])
my_array3 = my_array[:,1]
print my_array3
print my_array[1,0:2]
print my_array3.shape
print my_array3.ndim
Explanation: One important thing to consider when you do slicing are the dimensions of the output array. Run the following cell and check the shape of my_array3. Check also its dimension with ndim function:
End of explanation
my_array3 = my_array3[:,np.newaxis]
Explanation: If you have correctly computed it you will see that my_array3 is one dimensional. Sometimes this can be a problem when you are working with 2D matrices (and vectors can be considered as 2D matrices with one of the sizes equal to 1). To solve this, numpy provides the newaxis constant.
End of explanation
print my_array3.shape
print my_array3.ndim
Explanation: Check again the shape and dimension of my_array3
End of explanation
x = np.array([[ 0, 1, 2], [ 3, 4, 5], [ 6, 7, 8], [ 9, 10, 11]])
# We want to select elements of rows [0, 3] and columns [0, 2]
rows = np.array([[0, 0],[3, 3]], dtype=np.intp)
columns = np.array([[0, 2],[0, 2]], dtype=np.intp)
print x[rows, columns]
Explanation: When you try to index different rows and columns of a matrix you have to define it element by element. For example, consider that we want to select elements of rows [0, 3] and columns [0, 2], we have to define the row 0 index for each column to be selected....
End of explanation
# With ix_
rows = np.array([0, 3], dtype=np.intp)
columns = np.array([0, 2], dtype=np.intp)
print np.ix_(rows, columns)
print x[np.ix_(rows, columns)]
Explanation: To make this easier, we can use the ix_ function which automatically creates all the needed indexes
End of explanation
my_array = np.array([[1, 2],[3, 4]])
my_array2 = np.array([[11, 12],[13, 14]])
print np.concatenate( (my_array, my_array2) , axis=1) # columnwise concatenation
Explanation: Another important array manipulation method is array concatenation or stacking. It is useful to always state explicitly in which direction we want to stack the arrays. For example, in the following example we are stacking the arrays horizontally (column-wise).
1.d Concatenate numpy arrays
End of explanation
print <COMPLETAR>
Explanation: EXERCISE: Concatenate the first column of my_array and the second column of my_array2
The answer should be:
<pre><code>
[[ 1 12]
[ 3 14]]
</code></pre>
End of explanation
print <COMPLETAR>
Explanation: Numpy also includes the functions hstack() and vstack() to concatenate by columns or rows, respectively.
EXERCISE: Use these functions to concatenate my_array and my_array2 by rows and columns.
The answer should be:
<pre><code>
[[ 1  2 11 12]
 [ 3  4 13 14]]
[[ 1  2]
 [ 3  4]
 [11 12]
 [13 14]]
</code></pre>
End of explanation
x=np.array([1,2,3])
y=np.array([1,2,3])
print x*y #Element-wise
print np.multiply(x,y) #Element-wise
print sum(x*y) # dot product
print np.dot(x,y) #Fast matrix product (dot product)
Explanation: 1.e Matrix multiplications
Finally numpy provides all the basic matrix operations: multiplications, dot products, ...
You can find information about them in the Numpy manual
End of explanation
x=[1,2,3]
dot_product_x = <COMPLETAR>
print dot_product_x
Explanation: EXERCISE: Try to compute the dot product with python arrays:
The answer should be:
<pre><code>
14
</code></pre>
End of explanation
x = np.array([[ 0, 1, 2], [ 3, 4, 5], [ 6, 7, 8], [ 9, 10, 11]])
print x
print np.where(x>4)
print np.nonzero(x>4)
Explanation: 1.f Other useful functions
Some functions let you:
* Find elements holding a condition
End of explanation
print a.argmax(axis=0)
print a.max(axis=0)
# a.min(axis=0), a.argmin(axis=0)
Explanation: Compute the maximum, minimum or, even, the positions of the maximum or minimum
End of explanation
a = np.array([[1,4], [3,1]])
print a
a.sort(axis=1)
print a
a.sort(axis=0)
b = a
print b
Explanation: Sort a vector
End of explanation
x = np.array([[ 0, 1, 2], [ 3, 4, 5], [ 6, 7, 8], [ 9, 10, 11]])
print x.mean(axis=0)
print x.var(axis=0)
print x.std(axis=0)
Explanation: Calculate some statistical parameters
End of explanation
np.random.seed(0)
perm = np.random.permutation(100)
perm[:10]
Explanation: Obtain random numbers
End of explanation
t = np.arange(0.0, 1.0, 0.05)
a1 = np.sin(2*np.pi*t)
a2 = np.sin(4*np.pi*t)
plt.figure()
ax1 = plt.subplot(211)
ax1.plot(t,a1)
plt.xlabel('t')
plt.ylabel('a_1(t)')
ax2 = plt.subplot(212)
ax2.plot(t,a2, 'r.')
plt.xlabel('t')
plt.ylabel('a_2(t)')
plt.show()
Explanation: In addition to numpy we have a more advanced library for scientific computing, scipy. Scipy includes modules for linear algebra, signal processing, fourier transform, ...
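As a tiny illustration of the scipy linear algebra module (the matrix is arbitrary):
from scipy import linalg
M = np.array([[1.0, 2.0], [3.0, 4.0]])
print(linalg.inv(M))   # matrix inverse computed with scipy.linalg
print(linalg.det(M))   # determinant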
2. Matplotlib
One important step of data analysis is data visualization. In python the simplest plotting library is matplotlib and its syntax is similar to Matlab's plotting library. In the next example we plot two sinusoids with different symbols.
End of explanation
# Open up the csv file in to a Python object
csv_file_object = csv.reader(open('data/iris_data.csv', 'rb'))
datalist = [] # Create a variable called 'data'.
for row in csv_file_object: # Run through each row in the csv file,
datalist.append(row) # adding each row to the data variable
data = np.array(datalist) # Then convert from a list to an array
# Be aware that each item is currently
# a string in this format
print np.shape(data)
X = data[:,0:-1]
label = data[:,-1,np.newaxis]
print X.shape
print label.shape
Explanation: 3. Classification example
One of the main machine learning problems is classification. In the following example we will load and visualize a dataset that can be used in a classification problem.
The iris dataset is the most popular pattern recognition dataset. It consists of 150 instances with 4 features of iris flowers:
sepal length in cm
sepal width in cm
petal length in cm
petal width in cm
The objective is usually to distinguish three different classes of iris plant: Iris setosa, Iris versicolor and Iris virginica.
3.1 Loading the data
We give you the data in .csv format. In each line of the csv file we have the 4 real-valued features of each instance and then a string defining the class of that instance: Iris-setosa, Iris-versicolor or Iris-virginica. There are 150 instances of flowers (lines) in the csv file.
Let's se how we can load the data in an array.
End of explanation
x = X[:,0:2]
#print len(set(list(label)))
list_label = [l[0] for l in label]
labels = list(set(list_label))
colors = ['bo', 'ro', 'go']
#print list_label
plt.figure()
for i, l in enumerate(labels):
pos = np.where(np.array(list_label) == l)
plt.plot(x[pos,0], x[pos,1], colors[i])
plt.xlabel('Sepal length')
plt.ylabel('Sepal width')
Explanation: In the previous code we have saved the features in matrix X and the class labels in the vector labels. Both are 2D numpy arrays.
We are also printing the shapes of each variable (see that we can also use array_name.shape to get the shape, apart from function shape( )). This shape checking is good to see if we are not making mistakes.
3.2 Visualizing the data
Extract the first two features of the data (sepal length and width) and plot the first versus the second in a figure, use a different color for the data corresponding to different classes.
First of all, you probably want to split the data according to each class label.
End of explanation
csv_file_object = csv.reader(open('communities.csv', 'rb'))
datalist = []
for row in csv_file_object:
datalist.append(row)
data = np.array(datalist)
print np.shape(data)
Explanation: According to this plot, which classes seem more difficult to distinguish?
4. Regression example
Now that we know how to load some data and visualize it we will try to solve a simple regression task.
Our objective in this example is to predict the crime rates in different areas of the US using some socio-demographic data.
This dataset has 127 socioeconomic variables, of different nature: categorical, integer, real, and for some of them there are also missing data (check wikipedia). This is usually a problem when training machine learning models, but we will ignore that problem and take only a small number of variables that we think can be useful for regression and which have no missing values.
population: population for community
householdsize: mean people per household
medIncome: median household income
The objective in the regresion problem is another real value that contains the total number of violent crimes per 100K population.
4.1 Loading the data
First of all, load the data from file communities.csv in a new array. This array should have 1994 rows (instances) and 128 columns.
End of explanation
X_com = <COMPLETAR>
Nrow = np.shape(data)[0]
Ncol = np.shape(data)[1]
print X_com.shape
y_com = <COMPLETAR>
print y_com.shape
Explanation: Take the columns (5,6,17) of the data and save them in a matrix X_com. This will be our input data. Convert this array into a float array. The shape should be (1994,3)
EXERCISE: Get the last column of the data and save it in an array called y_com. Convert this matrix into a float array.
Check that the shape is (1994,1) .
End of explanation
plt.figure()
plt.plot(<COMPLETAR>, 'bo')
plt.xlabel('X_com[0]')
plt.ylabel('y_com')
plt.figure()
plt.plot(<COMPLETAR>, 'ro')
plt.xlabel('X_com[1]')
plt.ylabel('y_com')
plt.figure()
plt.plot(<COMPLETAR>, 'go')
plt.xlabel('X_com[2]')
plt.ylabel('y_com')
Explanation: EXERCISE: Plot each variable in X_com versus y_com to have a first (partial) view of the data.
End of explanation
from sklearn.cross_validation import train_test_split
Random_state = 131
X_train, X_test, y_train, y_test = train_test_split(<COMPLETAR>, <COMPLETAR>, test_size=<COMPLETAR>, random_state=Random_state)
print X_train.shape
print X_test.shape
print y_train.shape
print y_test.shape
Explanation: 4.3 Train/Test splitting
Now we are about to start doing machine learning. But, first of all, we have to separate our data between train and test.
The train data will be used to adjust the parameters of our model (train).
The test data will be used to evaluate our model.
EXERCISE: Use sklearn.cross_validation.train_test_split to split the data in train (60%) and test (40%). Save the results in variables named X_train, X_test, y_train, y_test.
End of explanation
print "Values before normalizing:\n"
print <COMPLETAR>.mean(axis=0)
print X_test.<COMPLETAR>
print <COMPLETAR>.std(axis=0)
print X_test.<COMPLETAR>
# from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(<COMPLETAR>) # computes mean and std using the train dataset
X_train_normalized = scaler.transform(<COMPLETAR>) # applies the normalization to train
X_test_normalized = scaler.transform(<COMPLETAR>) # applies the normalization to test
print "\nValues after normalizing:\n"
print <COMPLETAR>
print <COMPLETAR>
print <COMPLETAR>
print <COMPLETAR>
Explanation: 4.4 Normalization
Most machine learning algorithms require that the data is standardized (mean=0, standard deviation=1). Scikit-learn provides a tool to do that in the object sklearn.preprocessing.StandardScaler
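The general pattern is sketched below; it assumes the X_train and X_test arrays created in the splitting step above:
scaler_sketch = StandardScaler().fit(X_train.astype(np.float))
X_train_std = scaler_sketch.transform(X_train.astype(np.float))
X_test_std = scaler_sketch.transform(X_test.astype(np.float))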
EXERCISE: Compute and print the mean and standard deviation of the data. Then normalize the data, such that it has zero mean and unit standard deviation, and check the results.
The answer should be:
<pre><code>
Values before normalizing:
[ 0.06044314 0.46025084 0.36419732]
[ 0.0533208 0.46810777 0.35651629]
[ 0.13651131 0.16684793 0.21110026]
[ 0.11073518 0.15868603 0.20651214]
Values after normalizing:
[ -6.99180587e-16 -2.18145828e-17 1.69596778e-15]
[-0.052174 0.04709039 -0.03638571]
[ 1. 1. 1.]
[ 0.81117952 0.95108182 0.97826567]
</code></pre>
End of explanation
from sklearn import neighbors
knn1_model = neighbors.KNeighborsRegressor(<COMPLETAR>)
knn1_model.fit(<COMPLETAR>.astype(np.float), <COMPLETAR>.astype(np.float))
knn7_model = neighbors.KNeighborsRegressor(<COMPLETAR>)
knn7_model.fit(<COMPLETAR>.astype(np.float), <COMPLETAR>.astype(np.float))
print knn1_model
print knn7_model
Explanation: 4.5 Training
We will use two different K-NN regressors for this example. One with K (n_neighbors) = 1 and the other with K=7.
Read the API and this example to understand how to fit the model.
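For reference, the generic fit pattern looks like this; the sketch assumes the normalized arrays from the previous step and uses an illustrative number of neighbours:
knn_sketch = neighbors.KNeighborsRegressor(n_neighbors=5)
knn_sketch.fit(X_train_normalized, y_train.astype(np.float))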
EXERCISE: Train the two models described above with default parameters.
The answer should be:
<pre><code>
KNeighborsRegressor(algorithm='auto', leaf_size=30, metric='minkowski',
metric_params=None, n_jobs=1, n_neighbors=1, p=2,
weights='uniform')
KNeighborsRegressor(algorithm='auto', leaf_size=30, metric='minkowski',
metric_params=None, n_jobs=1, n_neighbors=7, p=2,
weights='uniform')
</code></pre>
End of explanation
y_predict_1 = knn1_model.predict(<COMPLETAR>.astype(np.float))
mse1 = <COMPLETAR>
print " The MSE value for model1 is %f\n " % mse1
y_predict_7 = knn7_model.predict(<COMPLETAR>.astype(np.float))
mse7 = <COMPLETAR>
print " The MSE value for model7 is %f\n " % mse7
print "First 5 prediction values with model 1:\n"
print <COMPLETAR>
print "\nFirst 5 prediction values with model 7:\n"
print <COMPLETAR>
Explanation: 4.6 Prediction and evaluation
Now use the two models you have trained to predict the test output y_test. Then evaluate it measuring the Mean-Square Error (MSE).
The formula of MSE is
$$\text{MSE}=\frac{1}{K}\sum_{k=1}^{K}(\hat{y}-y)^2$$
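One direct way to compute it with numpy, assuming the predictions obtained in the cell above, is sketched here:
mse_sketch = np.mean((y_test.astype(np.float) - y_predict_1)**2)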
The answer should be:
<pre><code>
The MSE value for model1 is 0.060090
The MSE value for model7 is 0.038202
First 5 prediction values with model 1:
[[ 0.51]
[ 0.17]
[ 0.46]
[ 0.2 ]
[ 0.34]]
First 5 prediction values with model 7:
[[ 0.40857143]
[ 0.21285714]
[ 0.27428571]
[ 0.32 ]
[ 0.36857143]]
</code></pre>
End of explanation
y_pred = y_predict_1.squeeze()
csv_file_object = csv.writer(open('output.csv', 'wb'))
for index, y_aux in enumerate(<COMPLETAR>): # Run through each row in the csv file,
csv_file_object.writerow([index,y_aux])
Explanation: 4.7 Saving the results
Finally we will save all our prediction for the model with K=1 in a csv file. To do so you can use the following code Snippet, where y_pred are the predicted output values for test.
End of explanation |
3,024 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Automatic Differentiation with autograd
Technically, autograd is layer that wraps and extends numpy. Hence it is most often imported as follows
Step1: The function sigmoid implements the sigmoid function, which is defined as
$$ \texttt{S}(x) = \frac{1}{1 + \mathrm{e}^{-x}}. $$
Step2: The function S_prime computes the derivative of the Sigmoid function. We implement it using automatic differentiation. This is the closest thing to magic I have seen yet.
Step3: In the lecture we have seen that the following identity holds for the derivative of the sigmoid function | Python Code:
import autograd
import autograd.numpy as np
Explanation: Automatic Differentiation with autograd
Technically, autograd is layer that wraps and extends numpy. Hence it is most often imported as follows:
End of explanation
def S(x):
return 1.0 / (1.0 + np.exp(-x))
def Q(x):
return np.multiply(x, x)
Q_grad = autograd.grad(Q)
Q_grad(1.0)
Explanation: The function sigmoid implements the sigmoid function, which is defined as
$$ \texttt{S}(x) = \frac{1}{1 + \mathrm{e}^{-x}}. $$
End of explanation
S_prime = autograd.grad(S)
Explanation: The function S_prime computes the derivative of the Sigmoid function. We implement it using automatic differentiation. This is the closest thing to magic I have seen yet.
End of explanation
for x in np.arange(-2.0, 2.0, 0.1):
print(S_prime(x)- S(x) * (1.0 - S(x)))
Explanation: In the lecture we have seen that the following identity holds for the derivative of the sigmoid function:
$$ S'(x) = S(x) \cdot \bigl(1 - S(x)\bigr) $$
Let's test this identity.
End of explanation |
3,025 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href='http
Step1: The Data
There are some fake data csv files you can read in as dataframes
Step2: Style Sheets
Matplotlib has style sheets you can use to make your plots look a little nicer. These style sheets include bmh, fivethirtyeight, ggplot and more. They basically create a set of style rules that your plots follow. I recommend using them, they make all your plots have the same look and feel more professional. You can even create your own if you want your company's plots to all have the same look (it is a bit tedious to create one though).
Here is how to use them.
Before plt.style.use() your plots look like this
Step3: Call the style
Step4: Now your plots look like this
Step5: Let's stick with the ggplot style and actually show you how to utilize pandas built-in plotting capabilities!
Plot Types
There are several plot types built-in to pandas, most of them statistical plots by nature
Step6: Barplots
Step7: Histograms
Step8: Line Plots
Step9: Scatter Plots
Step10: You can use c to color based off another column value
Use cmap to indicate colormap to use.
For all the colormaps, check out
Step11: Or use s to indicate size based off another column. s parameter needs to be an array, not just the name of a column
Step12: BoxPlots
Step13: Hexagonal Bin Plot
Useful for Bivariate Data, alternative to scatterplot
Step14: Kernel Density Estimation plot (KDE) | Python Code:
import numpy as np
import pandas as pd
%matplotlib inline
Explanation: <a href='http://www.pieriandata.com'> <img src='../../Pierian_Data_Logo.png' /></a>
Pandas Built-in Data Visualization
In this lecture we will learn about pandas built-in capabilities for data visualization! It's built-off of matplotlib, but it baked into pandas for easier usage!
Let's take a look!
Imports
End of explanation
df1 = pd.read_csv('df1',
index_col = 0)
df2 = pd.read_csv('df2')
Explanation: The Data
There are some fake data csv files you can read in as dataframes:
End of explanation
df1['A'].hist()
Explanation: Style Sheets
Matplotlib has style sheets you can use to make your plots look a little nicer. These style sheets include bmh, fivethirtyeight, ggplot and more. They basically create a set of style rules that your plots follow. I recommend using them, they make all your plots have the same look and feel more professional. You can even create your own if you want your company's plots to all have the same look (it is a bit tedious to create one though).
Here is how to use them.
Before plt.style.use() your plots look like this:
End of explanation
import matplotlib.pyplot as plt
plt.style.use('ggplot')
Explanation: Call the style:
End of explanation
df1['A'].hist()
plt.style.use('bmh')
df1['A'].hist()
plt.style.use('dark_background')
df1['A'].hist()
plt.style.use('fivethirtyeight')
df1['A'].hist()
plt.style.use('ggplot')
Explanation: Now your plots look like this:
End of explanation
df2.plot.area(alpha = 0.4)
Explanation: Let's stick with the ggplot style and actually show you how to utilize pandas built-in plotting capabilities!
Plot Types
There are several plot types built-in to pandas, most of them statistical plots by nature:
df.plot.area
df.plot.barh
df.plot.density
df.plot.hist
df.plot.line
df.plot.scatter
df.plot.bar
df.plot.box
df.plot.hexbin
df.plot.kde
df.plot.pie
You can also just call df.plot(kind='hist') or replace that kind argument with any of the key terms shown in the list above (e.g. 'box','barh', etc..)
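For example, the following two calls request the same area plot:
df2.plot(kind='area', alpha=0.4)
df2.plot.area(alpha=0.4)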
Let's start going through them!
Area
End of explanation
df2.head()
df2.plot.bar()
df2.plot.bar(stacked = True)
Explanation: Barplots
End of explanation
df1['A'].plot.hist(bins = 50)
Explanation: Histograms
End of explanation
df1.plot.line(x = df1.index,
y = 'B',
figsize = (12,3),
lw = 1)
Explanation: Line Plots
End of explanation
df1.plot.scatter(x = 'A',
y = 'B')
Explanation: Scatter Plots
End of explanation
df1.plot.scatter(x = 'A',
y = 'B',
c = 'C',
cmap = 'coolwarm')
Explanation: You can use c to color based off another column value
Use cmap to indicate colormap to use.
For all the colormaps, check out: http://matplotlib.org/users/colormaps.html
End of explanation
df1.plot.scatter(x = 'A',
y = 'B',
s = df1['C']*200)
Explanation: Or use s to indicate size based off another column. s parameter needs to be an array, not just the name of a column:
End of explanation
df2.plot.box() # Can also pass a by= argument for groupby
Explanation: BoxPlots
End of explanation
df = pd.DataFrame(np.random.randn(1000, 2),
columns = ['a', 'b'])
df.plot.hexbin(x = 'a',
y = 'b',
gridsize = 25,
cmap = 'Oranges')
Explanation: Hexagonal Bin Plot
Useful for Bivariate Data, alternative to scatterplot:
End of explanation
df2['a'].plot.kde()
df2.plot.density()
Explanation: Kernel Density Estimation plot (KDE)
End of explanation |
3,026 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Array manipulation routines
Step1: Q1. Let x be a ndarray [10, 10, 3] with all elements set to one. Reshape x so that the size of the second dimension equals 150.
Step2: Q2. Let x be array [[1, 2, 3], [4, 5, 6]]. Convert it to [1 4 2 5 3 6].
Step3: Q3. Let x be array [[1, 2, 3], [4, 5, 6]]. Get the 5th element.
Step4: Q4. Let x be an arbitrary 3-D array of shape (3, 4, 5). Permute the dimensions of x such that the new shape will be (4,3,5).
Step5: Q5. Let x be an arbitrary 2-D array of shape (3, 4). Permute the dimensions of x such that the new shape will be (4,3).
Step6: Q5. Let x be an arbitrary 2-D array of shape (3, 4). Insert a new axis such that the new shape will be (3, 1, 4).
Step7: Q6. Let x be an arbitrary 3-D array of shape (3, 4, 1). Remove single-dimensional entries such that the new shape will be (3, 4).
Step8: Q7. Let x be an array <br/>
[[ 1 2 3]<br/>
[ 4 5 6].<br/><br/>
and y be an array <br/>
[[ 7 8 9]<br/>
[10 11 12]].<br/>
Concatenate x and y so that a new array looks like <br/>[[1, 2, 3, 7, 8, 9], <br/>[4, 5, 6, 10, 11, 12]].
Step9: Q8. Let x be an array <br/>
[[ 1 2 3]<br/>
[ 4 5 6].<br/><br/>
and y be an array <br/>
[[ 7 8 9]<br/>
[10 11 12]].<br/>
Concatenate x and y so that a new array looks like <br/>[[ 1 2 3]<br/>
[ 4 5 6]<br/>
[ 7 8 9]<br/>
[10 11 12]]
Step10: Q8. Let x be an array [1 2 3] and y be [4 5 6]. Convert it to [[1, 4], [2, 5], [3, 6]].
Step11: Q9. Let x be an array [[1],[2],[3]] and y be [[4], [5], [6]]. Convert x to [[[1, 4]], [[2, 5]], [[3, 6]]].
Step12: Q10. Let x be an array [1, 2, 3, ..., 9]. Split x into 3 arrays, each of which has 4, 2, and 3 elements in the original order.
Step13: Q11. Let x be an array<br/>
[[[ 0., 1., 2., 3.],<br/>
[ 4., 5., 6., 7.]],<br/>
[[ 8., 9., 10., 11.],<br/>
[ 12., 13., 14., 15.]]].<br/>
Split it into two such that the first array looks like<br/>
[[[ 0., 1., 2.],<br/>
[ 4., 5., 6.]],<br/>
[[ 8., 9., 10.],<br/>
[ 12., 13., 14.]]].<br/>
and the second one look like
Step14: Q12. Let x be an array <br />
[[ 0., 1., 2., 3.],<br>
[ 4., 5., 6., 7.],<br>
[ 8., 9., 10., 11.],<br>
[ 12., 13., 14., 15.]].<br>
Split it into two arrays along the second axis.
Step15: Q13. Let x be an array <br />
[[ 0., 1., 2., 3.],<br>
[ 4., 5., 6., 7.],<br>
[ 8., 9., 10., 11.],<br>
[ 12., 13., 14., 15.]].<br>
Split it into two arrays along the first axis.
Step16: Q14. Let x be an array [0, 1, 2]. Convert it to <br/>
[[0, 1, 2, 0, 1, 2],<br/>
[0, 1, 2, 0, 1, 2]].
Step17: Q15. Let x be an array [0, 1, 2]. Convert it to <br/>
[0, 0, 1, 1, 2, 2].
Step18: Q16. Let x be an array [0, 0, 0, 1, 2, 3, 0, 2, 1, 0].<br/>
remove the leading the trailing zeros.
Step19: Q17. Let x be an array [2, 2, 1, 5, 4, 5, 1, 2, 3]. Get two arrays of unique elements and their counts.
Step20: Q18. Lex x be an array <br/>
[[ 1 2]<br/>
[ 3 4].<br/>
Flip x along the second axis.
Step21: Q19. Lex x be an array <br/>
[[ 1 2]<br/>
[ 3 4].<br/>
Flip x along the first axis.
Step22: Q20. Lex x be an array <br/>
[[ 1 2]<br/>
[ 3 4].<br/>
Rotate x 90 degrees counter-clockwise.
Step23: Q21 Lex x be an array <br/>
[[ 1 2 3 4]<br/>
[ 5 6 7 8].<br/>
Shift elements one step to right along the second axis. | Python Code:
import numpy as np
np.__version__
Explanation: Array manipulation routines
End of explanation
x = np.ones([10, 10, 3])
out = np.reshape(x, [-1, 150])
print out
assert np.allclose(out, np.ones([10, 10, 3]).reshape([-1, 150]))
Explanation: Q1. Let x be a ndarray [10, 10, 3] with all elements set to one. Reshape x so that the size of the second dimension equals 150.
End of explanation
x = np.array([[1, 2, 3], [4, 5, 6]])
out1 = np.ravel(x, order='F')
out2 = x.flatten(order="F")
assert np.allclose(out1, out2)
print out1
Explanation: Q2. Let x be array [[1, 2, 3], [4, 5, 6]]. Convert it to [1 4 2 5 3 6].
End of explanation
x = np.array([[1, 2, 3], [4, 5, 6]])
out1 = x.flat[4]
out2 = np.ravel(x)[4]
assert np.allclose(out1, out2)
print out1
Explanation: Q3. Let x be array [[1, 2, 3], [4, 5, 6]]. Get the 5th element.
End of explanation
x = np.zeros((3, 4, 5))
out1 = np.swapaxes(x, 1, 0)
out2 = x.transpose([1, 0, 2])
assert out1.shape == out2.shape
print out1.shape
Explanation: Q4. Let x be an arbitrary 3-D array of shape (3, 4, 5). Permute the dimensions of x such that the new shape will be (4,3,5).
End of explanation
x = np.zeros((3, 4))
out1 = np.swapaxes(x, 1, 0)
out2 = x.transpose()
out3 = x.T
assert out1.shape == out2.shape == out3.shape
print out1.shape
Explanation: Q5. Let x be an arbitrary 2-D array of shape (3, 4). Permute the dimensions of x such that the new shape will be (4,3).
End of explanation
x = np.zeros((3, 4))
print np.expand_dims(x, axis=1).shape
Explanation: Q5. Let x be an arbitrary 2-D array of shape (3, 4). Insert a new axis such that the new shape will be (3, 1, 4).
End of explanation
x = np.zeros((3, 4, 1))
print np.squeeze(x).shape
Explanation: Q6. Let x be an arbitrary 3-D array of shape (3, 4, 1). Remove single-dimensional entries such that the new shape will be (3, 4).
End of explanation
x = np.array([[1, 2, 3], [4, 5, 6]])
y = np.array([[7, 8, 9], [10, 11, 12]])
out1 = np.concatenate((x, y), 1)
out2 = np.hstack((x, y))
assert np.allclose(out1, out2)
print out2
Explanation: Q7. Let x be an array <br/>
[[ 1 2 3]<br/>
[ 4 5 6].<br/><br/>
and y be an array <br/>
[[ 7 8 9]<br/>
[10 11 12]].<br/>
Concatenate x and y so that a new array looks like <br/>[[1, 2, 3, 7, 8, 9], <br/>[4, 5, 6, 10, 11, 12]].
End of explanation
x = np.array([[1, 2, 3], [4, 5, 6]])
y = np.array([[7, 8, 9], [10, 11, 12]])
out1 = np.concatenate((x, y), 0)
out2 = np.vstack((x, y))
assert np.allclose(out1, out2)
print out2
Explanation: Q8. Let x be an array <br/>
[[ 1 2 3]<br/>
[ 4 5 6].<br/><br/>
and y be an array <br/>
[[ 7 8 9]<br/>
[10 11 12]].<br/>
Concatenate x and y so that a new array looks like <br/>[[ 1 2 3]<br/>
[ 4 5 6]<br/>
[ 7 8 9]<br/>
[10 11 12]]
End of explanation
x = np.array((1,2,3))
y = np.array((4,5,6))
out1 = np.column_stack((x, y))
out2 = np.squeeze(np.dstack((x, y)))
out3 = np.vstack((x, y)).T
assert np.allclose(out1, out2)
assert np.allclose(out2, out3)
print out1
Explanation: Q8. Let x be an array [1 2 3] and y be [4 5 6]. Convert it to [[1, 4], [2, 5], [3, 6]].
End of explanation
x = np.array([[1],[2],[3]])
y = np.array([[4],[5],[6]])
out = np.dstack((x, y))
print out
Explanation: Q9. Let x be an array [[1],[2],[3]] and y be [[4], [5], [6]]. Convert x to [[[1, 4]], [[2, 5]], [[3, 6]]].
End of explanation
x = np.arange(1, 10)
print np.split(x, [4, 6])
Explanation: Q10. Let x be an array [1, 2, 3, ..., 9]. Split x into 3 arrays, each of which has 4, 2, and 3 elements in the original order.
End of explanation
x = np.arange(16).reshape(2, 2, 4)
out1 = np.split(x, [3],axis=2)
out2 = np.dsplit(x, [3])
assert np.allclose(out1[0], out2[0])
assert np.allclose(out1[1], out2[1])
print out1
Explanation: Q11. Let x be an array<br/>
[[[ 0., 1., 2., 3.],<br/>
[ 4., 5., 6., 7.]],<br/>
[[ 8., 9., 10., 11.],<br/>
[ 12., 13., 14., 15.]]].<br/>
Split it into two such that the first array looks like<br/>
[[[ 0., 1., 2.],<br/>
[ 4., 5., 6.]],<br/>
[[ 8., 9., 10.],<br/>
[ 12., 13., 14.]]].<br/>
and the second one look like:<br/>
[[[ 3.],<br/>
[ 7.]],<br/>
[[ 11.],<br/>
[ 15.]]].<br/>
End of explanation
x = np.arange(16).reshape((4, 4))
out1 = np.hsplit(x, 2)
out2 = np.split(x, 2, 1)
assert np.allclose(out1[0], out2[0])
assert np.allclose(out1[1], out2[1])
print out1
Explanation: Q12. Let x be an array <br />
[[ 0., 1., 2., 3.],<br>
[ 4., 5., 6., 7.],<br>
[ 8., 9., 10., 11.],<br>
[ 12., 13., 14., 15.]].<br>
Split it into two arrays along the second axis.
End of explanation
x = np.arange(16).reshape((4, 4))
out1 = np.vsplit(x, 2)
out2 = np.split(x, 2, 0)
assert np.allclose(out1[0], out2[0])
assert np.allclose(out1[1], out2[1])
print out1
Explanation: Q13. Let x be an array <br />
[[ 0., 1., 2., 3.],<br>
[ 4., 5., 6., 7.],<br>
[ 8., 9., 10., 11.],<br>
[ 12., 13., 14., 15.]].<br>
Split it into two arrays along the first axis.
End of explanation
x = np.array([0, 1, 2])
out1 = np.tile(x, [2, 2])
out2 = np.resize(x, [2, 6])
assert np.allclose(out1, out2)
print out1
Explanation: Q14. Let x be an array [0, 1, 2]. Convert it to <br/>
[[0, 1, 2, 0, 1, 2],<br/>
[0, 1, 2, 0, 1, 2]].
End of explanation
x = np.array([0, 1, 2])
print np.repeat(x, 2)
Explanation: Q15. Let x be an array [0, 1, 2]. Convert it to <br/>
[0, 0, 1, 1, 2, 2].
End of explanation
x = np.array((0, 0, 0, 1, 2, 3, 0, 2, 1, 0))
out = np.trim_zeros(x)
print out
Explanation: Q16. Let x be an array [0, 0, 0, 1, 2, 3, 0, 2, 1, 0].<br/>
remove the leading the trailing zeros.
End of explanation
x = np.array([2, 2, 1, 5, 4, 5, 1, 2, 3])
u, indices = np.unique(x, return_counts=True)
print u, indices
Explanation: Q17. Let x be an array [2, 2, 1, 5, 4, 5, 1, 2, 3]. Get two arrays of unique elements and their counts.
End of explanation
x = np.array([[1,2], [3,4]])
out1 = np.fliplr(x)
out2 = x[:, ::-1]
assert np.allclose(out1, out2)
print out1
Explanation: Q18. Let x be an array <br/>
[[ 1 2]<br/>
[ 3 4].<br/>
Flip x along the second axis.
End of explanation
x = np.array([[1,2], [3,4]])
out1 = np.flipud(x)
out2 = x[::-1, :]
assert np.allclose(out1, out2)
print out1
Explanation: Q19. Let x be an array <br/>
[[ 1 2]<br/>
[ 3 4].<br/>
Flip x along the first axis.
End of explanation
x = np.array([[1,2], [3,4]])
out = np.rot90(x)
print out
Explanation: Q20. Let x be an array <br/>
[[ 1 2]<br/>
[ 3 4].<br/>
Rotate x 90 degrees counter-clockwise.
End of explanation
x = np.arange(1, 9).reshape([2, 4])
print np.roll(x, 1, axis=1)
Explanation: Q21. Let x be an array <br/>
[[ 1 2 3 4]<br/>
[ 5 6 7 8].<br/>
Shift elements one step to right along the second axis.
End of explanation |
3,027 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examples of Bayesian parameter estimation
Bayesian parameter estimation does not compute one particular number for the value of a parameter; instead it computes every possibility the parameter value could take, that is, the distribution of the parameter.
There are two ways to represent the parameter distribution computed this way.
Non-parametric methods
Samples are produced and an arbitrary distribution is represented with tools such as a histogram. This is the approach used by Monte Carlo methods such as MCMC (Markov chain Monte Carlo).
It gives us values that theta could take, and we draw a histogram from them; in other words, it gives candidate answers. Some distributions are hard to express with a formula, so we are handed all the possible values, draw the histogram, and see where they cluster; if they pile up around 0.5, the distribution is concentrated near 0.5.
Ideally we should find the mode, but the histogram is jagged and the mode is hard to locate, so it is more convenient to use the median or the mean, and that is what is usually taken as the representative value of theta.
Parametric methods
The parameter distribution is expressed with a well-known probability distribution model. The formula of that distribution then has parameters of its own, which are called hyper-parameters. A parametric method therefore boils down to computing numerical values for the hyper-parameters.
Here we show a few simple examples of the parametric approach.
The basic principle of Bayesian parameter estimation
Bayesian parameter estimation uses the following formula to update the parameter distribution $p(\theta)$ into $p(\theta \mid x_{1},\ldots,x_{N})$.
$$ p(\theta \mid x_{1},\ldots,x_{N}) = \dfrac{p(x_{1},\ldots,x_{N} \mid \theta) \cdot p(\theta)}{p(x_{1},\ldots,x_{N})} \propto p(x_{1},\ldots,x_{N} \mid \theta ) \cdot p(\theta) $$
In this formula,
$p(\theta)$ is called the prior distribution. The prior is the distribution of the parameter $\theta$ that was already known before the Bayesian estimation is carried out.
When there is no prior knowledge at all, a uniform distribution $\text{Beta}(1,1)$ or a normal distribution centered at 0, $\mathcal{N}(0, 1)$, is commonly used.
$p(\theta \mid x_{1},\ldots,x_{N})$ is called the posterior distribution. Mathematically it is the conditional probability distribution of $\theta$ given the data $x_{1},\ldots,x_{N}$. This posterior is exactly what we want to obtain through Bayesian parameter estimation.
$p(x_{1},\ldots,x_{N} \mid \theta)$ is called the likelihood. What we currently know are the data $x_{1},\ldots,x_{N}$, and $\theta$ is the unknown; the likelihood is, conversely, the conditional probability distribution of observing the data $x_{1},\ldots,x_{N}$ when $\theta$ is known.
Estimating the parameter of the Bernoulli distribution
We estimate the parameter $\theta$ of the simplest discrete probability distribution, the Bernoulli distribution, with the Bayesian approach.
Since the parameter of a Bernoulli distribution takes values between 0 and 1, the prior is taken to be a Beta distribution with hyper-parameters $a=b=1$.
$$ P(\theta) \propto \theta^{a−1}(1−\theta)^{b−1} \;\;\; (a=1, b=1)$$
Since the data are a product of independent Bernoulli terms, the likelihood becomes the following binomial form.
$$ p(x_{1},\ldots,x_{N} \mid \theta) = \prod_{i=1}^N \theta^{x_i} (1 - \theta)^{1-x_i} $$
Applying Bayes' rule, the posterior becomes a Beta distribution with updated hyper-parameters $a'$ and $b'$:
$$
\begin{eqnarray}
p(\theta \mid x_{1},\ldots,x_{N})
&\propto & p(x_{1},\ldots,x_{N} \mid \theta) P(\theta) \
&=& \prod_{i=1}^N \theta^{x_i} (1 - \theta)^{1-x_i} \cdot \theta^{a−1}(1−\theta)^{b−1} \
&=& \theta^{\sum_{i=1}^N x_i + a−1} (1 - \theta)^{\sum_{i=1}^N (1-x_i) + b−1 } \
&=& \theta^{N_1 + a−1} (1 - \theta)^{N_0 + b−1 } \
&=& \theta^{a'−1} (1 - \theta)^{b'−1 } \
\end{eqnarray}
$$
A prior that makes the posterior belong to the same distribution family as the prior, as it does here, is called a conjugate prior.
The updated hyper-parameter values are:
$$ a' = N_1 + a $$
$$ b' = N_0 + b $$
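For example, starting from the flat prior $\text{Beta}(1,1)$ and observing $N_1 = 6$ ones and $N_0 = 4$ zeros gives the posterior $\text{Beta}(7, 5)$, whose mode is $(7-1)/(7+5-2) = 0.6$.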
Step1: Conditional independence
Usually only conditional independence holds. For example, imagine school classes anywhere from 1st grade up to 12th grade, and we pick one class without knowing which; it could be a 12th-grade class or an elementary one. If the first student measured is 180 cm tall and the second is unknown, we would naturally guess something near 180 cm rather than 50 cm. So nothing we work with is unconditionally independent: earlier observations influence what we expect next. However, once the parameter is fixed, that is, under conditional independence (say, given that the class is 12th grade), the heights of the students in the class are independent.
Understanding conditional probability and conditional independence is understanding data analysis.
Conditional independence: in fact every independence is a conditional independence, and that is the right way to think about it. Take the dice example: the condition is that a particular die has been chosen.
Unconditionally, the next outcome can be predicted from the previous ones, so the throws are not independent. For instance, if there is a 1-6 die and a 1-2 die, and 99 throws produced only 1s and 2s, we would predict a 1 or a 2 on the 100th throw. So within the condition the throws are independent, while unconditionally they are not.
Data drawn from the same class and the same parameters are conditionally independent.
Estimating the parameters of the categorical distribution
Up to this point the material is rarely used in practice because it is too simple; understand the principle, since what is actually used starts from the next level on.
Next we estimate, with the Bayesian approach, the parameter vector $\theta$ of a categorical distribution with $K$ classes.
Since every element of the categorical distribution's parameter takes values between 0 and 1, the prior is taken to be a Dirichlet distribution with hyper-parameters $\alpha_i=\dfrac{1}{K}$.
$$ P(\theta) \propto \prod_{k=1}^K \theta_i^{\alpha_i - 1} \;\;\; (\alpha_i = 1/K , \; \text{ for all } i) $$
Since the data are a product of independent categorical terms, the likelihood becomes the following multinomial form.
$$ p(x_{1},\ldots,x_{N} \mid \theta) = \prod_{i=1}^N \prod_{k=1}^K \theta_k^{x_{i,k}} $$
Applying Bayes' rule, the posterior becomes a Dirichlet distribution with updated hyper-parameters $\alpha'_i$:
$$
\begin{eqnarray}
p(\theta \mid x_{1},\ldots,x_{N})
&\propto & p(x_{1},\ldots,x_{N} \mid \theta) P(\theta) \
&=& \prod_{i=1}^N \prod_{k=1}^K \theta_k^{x_{i,k}} \cdot \prod_{k=1}^K \theta_i^{\alpha_i - 1} \
&=& \prod_{k=1}^K \theta^{\sum_{i=1}^N x_i + \alpha_i − 1} \
&=& \prod_{k=1}^K \theta^{N_i + \alpha_i −1} \
&=& \prod_{k=1}^K \theta^{\alpha'_i −1} \
\end{eqnarray}
$$
In this case, too, the prior turns out to be a conjugate prior.
The updated hyper-parameter values are:
$$ \alpha'_i = N_i + \alpha_i $$
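For example, with $K=3$, a prior $\alpha = (1/3, 1/3, 1/3)$ and observed counts $N = (2, 5, 3)$, the posterior is a Dirichlet distribution with $\alpha' = (7/3, 16/3, 10/3)$.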
Step2: Estimating the mean parameter of the normal distribution
This time we estimate the mean parameter of a normal distribution with the Bayesian method, assuming that the variance parameter $\sigma^2$ is known.
Since the mean can be any number from $-\infty$ to $\infty$, a normal distribution is used as the prior for this parameter.
$$ P(\mu) = N(\mu_0, \sigma^2_0) = \dfrac{1}{\sqrt{2\pi\sigma_0^2}} \exp \left(-\dfrac{(\mu-\mu_0)^2}{2\sigma_0^2}\right)$$
Since the data are a product of independent normal terms, the likelihood becomes:
$$ P(x_{1},\ldots,x_{N} \mid \mu) = \prod_{i=1}^N N(x_i \mid \mu ) = \prod_{i=1}^N \dfrac{1}{\sqrt{2\pi\sigma^2}} \exp \left(-\dfrac{(x_i-\mu)^2}{2\sigma^2}\right) $$
$$
\begin{eqnarray}
P(\theta \mid x_{1},\ldots,x_{N})
&\propto & P(x_{1},\ldots,x_{N} \mid \theta) P(\theta) \
&\propto & \exp \left(-\dfrac{(\mu-\mu'_0)^2}{2\sigma_0^{'2}}\right) \
\end{eqnarray}
$$
Applying Bayes' rule, the posterior becomes a normal distribution with updated hyper-parameters as follows:
$$
\begin{eqnarray}
\mu'_0 &=& \dfrac{\sigma^2}{N\sigma_0^2 + \sigma^2}\mu_0 + \dfrac{N\sigma_0^2}{N\sigma_0^2 + \sigma^2} \dfrac{\sum x_i}{N} \
\dfrac{1}{\sigma_0^{'2}} &=& \dfrac{1}{\sigma_0^{2}} + \dfrac{N}{\sigma^{'2}}
\end{eqnarray}
$$ | Python Code:
theta0 = 0.6
a0, b0 = 1, 1
print("step 0: mode = unknown")
xx = np.linspace(0, 1, 1000)
plt.plot(xx, sp.stats.beta(a0, b0).pdf(xx), label="initial");
np.random.seed(0)
x = sp.stats.bernoulli(theta0).rvs(50)
N0, N1 = np.bincount(x, minlength=2)
a1, b1 = a0 + N1, b0 + N0
plt.plot(xx, sp.stats.beta(a1, b1).pdf(xx), label="1st");
print("step 1: mode =", (a1 - 1)/(a1 + b1 - 2))
x = sp.stats.bernoulli(theta0).rvs(50)
N0, N1 = np.bincount(x, minlength=2)
a2, b2 = a1 + N1, b1 + N0
plt.plot(xx, sp.stats.beta(a2, b2).pdf(xx), label="2nd");
print("step 2: mode =", (a2 - 1)/(a2 + b2 - 2))
x = sp.stats.bernoulli(theta0).rvs(50)
N0, N1 = np.bincount(x, minlength=2)
a3, b3 = a2 + N1, b2 + N0
plt.plot(xx, sp.stats.beta(a3, b3).pdf(xx), label="3rd");
print("step 3: mode =", (a3 - 1)/(a3 + b3 - 2))
x = sp.stats.bernoulli(theta0).rvs(50)
N0, N1 = np.bincount(x, minlength=2)
a4, b4 = a3 + N1, b3 + N0
plt.plot(xx, sp.stats.beta(a4, b4).pdf(xx), label="4th");
print("step 4: mode =", (a4 - 1)/(a4 + b4 - 2))
plt.legend()
plt.show()
Explanation: Examples of Bayesian parameter estimation
Bayesian parameter estimation does not compute one particular number for the value of a parameter; instead it computes every possibility the parameter value could take, that is, the distribution of the parameter.
There are two ways to represent the parameter distribution computed this way.
Non-parametric methods
Samples are produced and an arbitrary distribution is represented with tools such as a histogram. This is the approach used by Monte Carlo methods such as MCMC (Markov chain Monte Carlo).
It gives us values that theta could take, and we draw a histogram from them; in other words, it gives candidate answers. Some distributions are hard to express with a formula, so we are handed all the possible values, draw the histogram, and see where they cluster; if they pile up around 0.5, the distribution is concentrated near 0.5.
Ideally we should find the mode, but the histogram is jagged and the mode is hard to locate, so it is more convenient to use the median or the mean, and that is what is usually taken as the representative value of theta.
Parametric methods
The parameter distribution is expressed with a well-known probability distribution model. The formula of that distribution then has parameters of its own, which are called hyper-parameters. A parametric method therefore boils down to computing numerical values for the hyper-parameters.
Here we show a few simple examples of the parametric approach.
The basic principle of Bayesian parameter estimation
Bayesian parameter estimation uses the following formula to update the parameter distribution $p(\theta)$ into $p(\theta \mid x_{1},\ldots,x_{N})$.
$$ p(\theta \mid x_{1},\ldots,x_{N}) = \dfrac{p(x_{1},\ldots,x_{N} \mid \theta) \cdot p(\theta)}{p(x_{1},\ldots,x_{N})} \propto p(x_{1},\ldots,x_{N} \mid \theta ) \cdot p(\theta) $$
In this formula,
$p(\theta)$ is called the prior distribution. The prior is the distribution of the parameter $\theta$ that was already known before the Bayesian estimation is carried out.
When there is no prior knowledge at all, a uniform distribution $\text{Beta}(1,1)$ or a normal distribution centered at 0, $\mathcal{N}(0, 1)$, is commonly used.
$p(\theta \mid x_{1},\ldots,x_{N})$ is called the posterior distribution. Mathematically it is the conditional probability distribution of $\theta$ given the data $x_{1},\ldots,x_{N}$. This posterior is exactly what we want to obtain through Bayesian parameter estimation.
$p(x_{1},\ldots,x_{N} \mid \theta)$ is called the likelihood. What we currently know are the data $x_{1},\ldots,x_{N}$, and $\theta$ is the unknown; the likelihood is, conversely, the conditional probability distribution of observing the data $x_{1},\ldots,x_{N}$ when $\theta$ is known.
Estimating the parameter of the Bernoulli distribution
We estimate the parameter $\theta$ of the simplest discrete probability distribution, the Bernoulli distribution, with the Bayesian approach.
Since the parameter of a Bernoulli distribution takes values between 0 and 1, the prior is taken to be a Beta distribution with hyper-parameters $a=b=1$.
$$ P(\theta) \propto \theta^{a−1}(1−\theta)^{b−1} \;\;\; (a=1, b=1)$$
Since the data are a product of independent Bernoulli terms, the likelihood becomes the following binomial form.
$$ p(x_{1},\ldots,x_{N} \mid \theta) = \prod_{i=1}^N \theta^{x_i} (1 - \theta)^{1-x_i} $$
Applying Bayes' rule, the posterior becomes a Beta distribution with updated hyper-parameters $a'$ and $b'$:
$$
\begin{eqnarray}
p(\theta \mid x_{1},\ldots,x_{N})
&\propto & p(x_{1},\ldots,x_{N} \mid \theta) P(\theta) \
&=& \prod_{i=1}^N \theta^{x_i} (1 - \theta)^{1-x_i} \cdot \theta^{a−1}(1−\theta)^{b−1} \
&=& \theta^{\sum_{i=1}^N x_i + a−1} (1 - \theta)^{\sum_{i=1}^N (1-x_i) + b−1 } \
&=& \theta^{N_1 + a−1} (1 - \theta)^{N_0 + b−1 } \
&=& \theta^{a'−1} (1 - \theta)^{b'−1 } \
\end{eqnarray}
$$
A prior that makes the posterior belong to the same distribution family as the prior, as it does here, is called a conjugate prior.
The updated hyper-parameter values are:
$$ a' = N_1 + a $$
$$ b' = N_0 + b $$
End of explanation
def plot_dirichlet(alpha):
def project(x):
n1 = np.array([1, 0, 0])
n2 = np.array([0, 1, 0])
n3 = np.array([0, 0, 1])
n12 = (n1 + n2)/2
m1 = np.array([1, -1, 0])
m2 = n3 - n12
m1 = m1/np.linalg.norm(m1)
m2 = m2/np.linalg.norm(m2)
return np.dstack([(x-n12).dot(m1), (x-n12).dot(m2)])[0]
def project_reverse(x):
n1 = np.array([1, 0, 0])
n2 = np.array([0, 1, 0])
n3 = np.array([0, 0, 1])
n12 = (n1 + n2)/2
m1 = np.array([1, -1, 0])
m2 = n3 - n12
m1 = m1/np.linalg.norm(m1)
m2 = m2/np.linalg.norm(m2)
return x[:,0][:, np.newaxis] * m1 + x[:,1][:, np.newaxis] * m2 + n12
eps = np.finfo(float).eps * 10
X = project([[1-eps,0,0], [0,1-eps,0], [0,0,1-eps]])
import matplotlib.tri as mtri
triang = mtri.Triangulation(X[:,0], X[:,1], [[0, 1, 2]])
refiner = mtri.UniformTriRefiner(triang)
triang2 = refiner.refine_triangulation(subdiv=6)
XYZ = project_reverse(np.dstack([triang2.x, triang2.y, 1-triang2.x-triang2.y])[0])
pdf = sp.stats.dirichlet(alpha).pdf(XYZ.T)
plt.tricontourf(triang2, pdf)
plt.axis("equal")
plt.show()
theta0 = np.array([0.2, 0.6, 0.2])
np.random.seed(0)
x1 = np.random.choice(3, 20, p=theta0)
N1 = np.bincount(x1, minlength=3)
x2 = np.random.choice(3, 100, p=theta0)
N2 = np.bincount(x2, minlength=3)
x3 = np.random.choice(3, 1000, p=theta0)
N3 = np.bincount(x3, minlength=3)
a0 = np.ones(3) / 3
plot_dirichlet(a0)
a1 = a0 + N1
plot_dirichlet(a1)
print((a1 - 1)/(a1.sum() - 3))
a2 = a1 + N2
plot_dirichlet(a2)
print((a2 - 1)/(a2.sum() - 3))
a3 = a2 + N3
plot_dirichlet(a3)
print((a3 - 1)/(a3.sum() - 3))
Explanation: Conditional independence
Usually only conditional independence holds. For example, imagine school classes anywhere from 1st grade up to 12th grade, and we pick one class without knowing which; it could be a 12th-grade class or an elementary one. If the first student measured is 180 cm tall and the second is unknown, we would naturally guess something near 180 cm rather than 50 cm. So nothing we work with is unconditionally independent: earlier observations influence what we expect next. However, once the parameter is fixed, that is, under conditional independence (say, given that the class is 12th grade), the heights of the students in the class are independent.
Understanding conditional probability and conditional independence is understanding data analysis.
Conditional independence: in fact every independence is a conditional independence, and that is the right way to think about it. Take the dice example: the condition is that a particular die has been chosen.
Unconditionally, the next outcome can be predicted from the previous ones, so the throws are not independent. For instance, if there is a 1-6 die and a 1-2 die, and 99 throws produced only 1s and 2s, we would predict a 1 or a 2 on the 100th throw. So within the condition the throws are independent, while unconditionally they are not.
Data drawn from the same class and the same parameters are conditionally independent.
Estimating the parameters of the categorical distribution
Up to this point the material is rarely used in practice because it is too simple; understand the principle, since what is actually used starts from the next level on.
Next we estimate, with the Bayesian approach, the parameter vector $\theta$ of a categorical distribution with $K$ classes.
Since every element of the categorical distribution's parameter takes values between 0 and 1, the prior is taken to be a Dirichlet distribution with hyper-parameters $\alpha_i=\dfrac{1}{K}$.
$$ P(\theta) \propto \prod_{k=1}^K \theta_i^{\alpha_i - 1} \;\;\; (\alpha_i = 1/K , \; \text{ for all } i) $$
Since the data are a product of independent categorical terms, the likelihood becomes the following multinomial form.
$$ p(x_{1},\ldots,x_{N} \mid \theta) = \prod_{i=1}^N \prod_{k=1}^K \theta_k^{x_{i,k}} $$
Applying Bayes' rule, the posterior becomes a Dirichlet distribution with updated hyper-parameters $\alpha'_i$:
$$
\begin{eqnarray}
p(\theta \mid x_{1},\ldots,x_{N})
&\propto & p(x_{1},\ldots,x_{N} \mid \theta) P(\theta) \
&=& \prod_{i=1}^N \prod_{k=1}^K \theta_k^{x_{i,k}} \cdot \prod_{k=1}^K \theta_i^{\alpha_i - 1} \
&=& \prod_{k=1}^K \theta^{\sum_{i=1}^N x_i + \alpha_i − 1} \
&=& \prod_{k=1}^K \theta^{N_i + \alpha_i −1} \
&=& \prod_{k=1}^K \theta^{\alpha'_i −1} \
\end{eqnarray}
$$
이 경우에도 conjugated prior 임을 알 수 있다.
갱신된 하이퍼 모수의 값은 다음과 같다.
$$ \alpha'_i = N_i + \alpha_i $$
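The quantity printed in the code, (a1 - 1)/(a1.sum() - 3) (and similarly for a2, a3), is the mode (MAP estimate) of the posterior Dirichlet with $K=3$ classes,
$$ \hat\theta_k^{\text{MAP}} = \dfrac{\alpha'_k - 1}{\sum_{j=1}^K \alpha'_j - K}, $$
which approaches the sample class frequencies, and hence the true $\theta_0 = (0.2, 0.6, 0.2)$ used to generate the data, as more observations are added.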
End of explanation
mu, sigma2 = 2, 4
mu0, sigma20 = 0, 1
xx = np.linspace(1, 3, 1000)
np.random.seed(0)
N = 10
x = sp.stats.norm(mu).rvs(N)
mu0 = sigma2/(N*sigma20 + sigma2) * mu0 + (N*sigma20)/(N*sigma20 + sigma2)*x.mean()
sigma20 = 1/(1/sigma20 + N/sigma2)
plt.plot(xx, sp.stats.norm(mu0, sigma20).pdf(xx), label="1st");
print(mu0)
N = 20
x = sp.stats.norm(mu).rvs(N)
mu0 = sigma2/(N*sigma20 + sigma2) * mu0 + (N*sigma20)/(N*sigma20 + sigma2)*x.mean()
sigma20 = 1/(1/sigma20 + N/sigma2)
plt.plot(xx, sp.stats.norm(mu0, sigma20).pdf(xx), label="2nd");
print(mu0)
N = 50
x = sp.stats.norm(mu).rvs(N)
mu0 = sigma2/(N*sigma20 + sigma2) * mu0 + (N*sigma20)/(N*sigma20 + sigma2)*x.mean()
sigma20 = 1/(1/sigma20 + N/sigma2)
plt.plot(xx, sp.stats.norm(mu0, sigma20).pdf(xx), label="3rd");
print(mu0)
N = 100
x = sp.stats.norm(mu).rvs(N)
mu0 = sigma2/(N*sigma20 + sigma2) * mu0 + (N*sigma20)/(N*sigma20 + sigma2)*x.mean()
sigma20 = 1/(1/sigma20 + N/sigma2)
plt.plot(xx, sp.stats.norm(mu0, sigma20).pdf(xx), label="4th");
print(mu0)
plt.axis([1, 3, 0, 20])
plt.legend()
plt.show()
Explanation: Estimating the mean of a normal distribution
This time we estimate the mean parameter of a normal distribution with the Bayesian method, assuming the variance parameter $\sigma^2$ is known.
Since the mean can take any value from $-\infty$ to $\infty$, a normal distribution is used as the prior for this parameter.
$$ P(\mu) = N(\mu_0, \sigma^2_0) = \dfrac{1}{\sqrt{2\pi\sigma_0^2}} \exp \left(-\dfrac{(\mu-\mu_0)^2}{2\sigma_0^2}\right)$$
Because the data are (conditionally) independent normal draws, the likelihood is
$$ P(x_{1},\ldots,x_{N} \mid \mu) = \prod_{i=1}^N N(x_i \mid \mu ) = \prod_{i=1}^N \dfrac{1}{\sqrt{2\pi\sigma^2}} \exp \left(-\dfrac{(x_i-\mu)^2}{2\sigma^2}\right) $$
$$
\begin{eqnarray}
P(\mu \mid x_{1},\ldots,x_{N})
&\propto & P(x_{1},\ldots,x_{N} \mid \mu) P(\mu) \\
&\propto & \exp \left(-\dfrac{(\mu-\mu'_0)^2}{2\sigma_0^{'2}}\right) \\
\end{eqnarray}
$$
Applying Bayes' rule, the posterior is again a normal distribution, with updated hyperparameters $\mu'_0$ and $\sigma_0^{'2}$ given by
$$
\begin{eqnarray}
\mu'_0 &=& \dfrac{\sigma^2}{N\sigma_0^2 + \sigma^2}\mu_0 + \dfrac{N\sigma_0^2}{N\sigma_0^2 + \sigma^2} \dfrac{\sum x_i}{N} \\
\dfrac{1}{\sigma_0^{'2}} &=& \dfrac{1}{\sigma_0^{2}} + \dfrac{N}{\sigma^{2}}
\end{eqnarray}
$$
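The repeated update lines in the code below can be wrapped in one small helper (an added sketch, not part of the original notebook):
```python
def update_normal_mean(mu0, sigma20, x, sigma2):
    # one conjugate update of the N(mu0, sigma20) prior on the mean,
    # given observations x with known data variance sigma2
    N = len(x)
    w = N * sigma20 / (N * sigma20 + sigma2)
    mu0_new = (1 - w) * mu0 + w * x.mean()
    sigma20_new = 1.0 / (1.0 / sigma20 + N / sigma2)
    return mu0_new, sigma20_new
```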
End of explanation |
3,028 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualization with Matplotlib
Learning Objectives
Step1: Overview
The following conceptual organization is simplified and adapted from Benjamin Root's AnatomyOfMatplotlib tutorial.
Figures and Axes
In Matplotlib a single visualization is a Figure.
A Figure can have multiple areas, called subplots. Each subplot is an Axes.
If you don't create a Figure and Axes yourself, Matplotlib will automatically create one for you.
All plotting commands apply to the current Figure and Axes.
The following functions can be used to create and manage Figure and Axes objects.
Function | Description
Step2: Basic plot modification
With a third argument you can provide the series color and line/marker style. Here we create a Figure object and modify its size.
Step3: Here is a list of the single character color strings
Step4: To change the plot's limits, use xlim and ylim
Step5: You can change the ticks along a given axis by using xticks, yticks and tick_params
Step6: Box and grid
You can enable a grid or disable the box. Notice that the ticks and tick labels remain.
Step7: Multiple series
Multiple calls to a plotting function will all target the current Axes
Step8: Subplots
Subplots allow you to create a grid of plots in a single figure. There will be an Axes associated with each subplot and only one Axes can be active at a time.
The first way you can create subplots is to use the subplot function, which creates and activates a new Axes for the active Figure
Step9: In many cases, it is easier to use the subplots function, which creates a new Figure along with an array of Axes objects that can be indexed in a rational manner
Step10: The subplots function also makes it easy to pass arguments to Figure and to share axes
Step11: More marker and line styling
All plot commands, including plot, accept keyword arguments that can be used to style the lines in more detail. For more information see | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: Visualization with Matplotlib
Learning Objectives: Learn how to make basic plots using Matplotlib's pylab API and how to use the Matplotlib documentation.
This notebook focuses only on the Matplotlib API, rather than the broader question of how you can use this API to make effective and beautiful visualizations.
Imports
The following imports should be used in all of your notebooks where Matplotlib is used:
End of explanation
t = np.linspace(0, 10.0, 100)
plt.plot(t, np.sin(t))
plt.xlabel('Time')
plt.ylabel('Signal')
plt.title('My Plot'); # suppress text output
Explanation: Overview
The following conceptual organization is simplified and adapted from Benjamin Root's AnatomyOfMatplotlib tutorial.
Figures and Axes
In Matplotlib a single visualization is a Figure.
A Figure can have multiple areas, called subplots. Each subplot is an Axes.
If you don't create a Figure and Axes yourself, Matplotlib will automatically create one for you.
All plotting commands apply to the current Figure and Axes.
The following functions can be used to create and manage Figure and Axes objects.
Function | Description
:-----------------|:----------------------------------------------------------
figure | Creates a new Figure
gca | Get the current Axes instance
savefig | Save the current Figure to a file
sca | Set the current Axes instance
subplot | Create a new subplot Axes for the current Figure
subplots | Create a new Figure and a grid of subplots Axes
Plotting Functions
Once you have created a Figure and one or more Axes objects, you can use the following function to put data onto that Axes.
Function | Description
:-----------------|:--------------------------------------------
bar | Make a bar plot
barh | Make a horizontal bar plot
boxplot | Make a box and whisker plot
contour | Plot contours
contourf | Plot filled contours
hist | Plot a histogram
hist2d | Make a 2D histogram plot
imshow | Display an image on the axes
matshow | Display an array as a matrix
pcolor | Create a pseudocolor plot of a 2-D array
pcolormesh | Plot a quadrilateral mesh
plot | Plot lines and/or markers
plot_date | Plot with data with dates
polar | Make a polar plot
scatter | Make a scatter plot of x vs y
Plot modifiers
You can then use the following functions to modify your visualization.
Function | Description
:-----------------|:---------------------------------------------------------------------
annotate | Create an annotation: a piece of text referring to a data point
box | Turn the Axes box on or off
clabel | Label a contour plot
colorbar | Add a colorbar to a plot
grid | Turn the Axes grids on or off
legend | Place a legend on the current Axes
loglog | Make a plot with log scaling on both the x and y axis
semilogx | Make a plot with log scaling on the x axis
semilogy | Make a plot with log scaling on the y axis
subplots_adjust | Tune the subplot layout
tick_params | Change the appearance of ticks and tick labels
ticklabel_format| Change the ScalarFormatter used by default for linear axes
tight_layout | Automatically adjust subplot parameters to give specified padding
text | Add text to the axes
title | Set a title of the current axes
xkcd | Turns on XKCD sketch-style drawing mode
xlabel | Set the x axis label of the current axis
xlim | Get or set the x limits of the current axes
xticks | Get or set the x-limits of the current tick locations and labels
ylabel | Set the y axis label of the current axis
ylim | Get or set the y-limits of the current axes
yticks | Get or set the y-limits of the current tick locations and labels
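A tiny illustration combining a few of these modifiers (an added sketch; any of the functions above can be mixed in the same way):
```python
t = np.linspace(0, 10.0, 100)
plt.plot(t, np.sin(t))
plt.grid(True)
plt.title('Annotated sine')
plt.annotate('first peak', xy=(np.pi/2, 1.0), xytext=(4, 1.1),
             arrowprops=dict(arrowstyle='->'))
plt.ylim(-1.5, 1.5)
```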
Basic plotting
For now, we will work with basic line plots (plt.plot) to show how the Matplotlib pylab plotting API works. In this case, we don't create a Figure so Matplotlib does that automatically.
End of explanation
f = plt.figure(figsize=(10,6)) # 10" x 6", default is 8" x 5.5"
plt.plot(t, np.sin(t), 'r.');
plt.xlabel('x')
plt.ylabel('y')
Explanation: Basic plot modification
With a third argument you can provide the series color and line/marker style. Here we create a Figure object and modify its size.
End of explanation
from matplotlib import lines
lines.lineStyles.keys()
from matplotlib import markers
markers.MarkerStyle.markers.keys()
Explanation: Here is a list of the single character color strings:
b: blue
g: green
r: red
c: cyan
m: magenta
y: yellow
k: black
w: white
The following will show all of the line and marker styles:
End of explanation
plt.plot(t, np.sin(t)*np.exp(-0.1*t),'bo')
plt.xlim(-1.0, 11.0)
plt.ylim(-1.0, 1.0)
Explanation: To change the plot's limits, use xlim and ylim:
End of explanation
plt.plot(t, np.sin(t)*np.exp(-0.1*t),'bo')
plt.xlim(0.0, 10.0)
plt.ylim(-1.0, 1.0)
plt.xticks([0,5,10], ['zero','five','10'])
plt.tick_params(axis='y', direction='inout', length=10)
Explanation: You can change the ticks along a given axis by using xticks, yticks and tick_params:
End of explanation
plt.plot(np.random.rand(100), 'b-')
plt.grid(True)
plt.box(False)
Explanation: Box and grid
You can enable a grid or disable the box. Notice that the ticks and tick labels remain.
End of explanation
plt.plot(t, np.sin(t), label='sin(t)')
plt.plot(t, np.cos(t), label='cos(t)')
plt.xlabel('t')
plt.ylabel('Signal(t)')
plt.ylim(-1.5, 1.5)
plt.xlim(right=12.0)
plt.legend()
Explanation: Multiple series
Multiple calls to a plotting function will all target the current Axes:
End of explanation
plt.subplot(2,1,1) # 2 rows x 1 col, plot 1
plt.plot(t, np.exp(0.1*t))
plt.ylabel('Exponential')
plt.subplot(2,1,2) # 2 rows x 1 col, plot 2
plt.plot(t, t**2)
plt.ylabel('Quadratic')
plt.xlabel('x')
plt.tight_layout()
Explanation: Subplots
Subplots allow you to create a grid of plots in a single figure. There will be an Axes associated with each subplot and only one Axes can be active at a time.
The first way you can create subplots is to use the subplot function, which creates and activates a new Axes for the active Figure:
End of explanation
f, ax = plt.subplots(2, 2)
for i in range(2):
for j in range(2):
plt.sca(ax[i,j])
plt.plot(np.random.rand(20))
plt.xlabel('x')
plt.ylabel('y')
Explanation: In many cases, it is easier to use the subplots function, which creates a new Figure along with an array of Axes objects that can be indexed in a rational manner:
End of explanation
f, ax = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(6,6))
for i in range(2):
for j in range(2):
plt.sca(ax[i,j])
plt.plot(np.random.rand(20))
if i==1:
plt.xlabel('x')
if j==0:
plt.ylabel('y')
plt.tight_layout()
Explanation: The subplots function also makes it easy to pass arguments to Figure and to share axes:
End of explanation
plt.plot(t, np.sin(t), marker='o', color='darkblue',
linestyle='--', alpha=0.3, markersize=10)
Explanation: More marker and line styling
All plot commands, including plot, accept keyword arguments that can be used to style the lines in more detail. For more information see:
Controlling line properties
Specifying colors
End of explanation |
3,029 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step3: Functions for angular velocity & integration
The particle is an ellipsoid. The reference state (corresponding to no rotation) is that the ellipsoid is axis-aligned and the axis lengths are (a_x, a_y, a_z). The shape parameters in the code below are
l = a_z/a_x
k = a_y/a_x
Its orientation is represented by the rotation (as a Quaternion) from the reference state. See Appendix A of https
Step4: Omega & E (strain) for simple shear flow
Step5: Validate code against axisymmetric case (Jeffery orbits)
Step6: Case 1
Step7: B | Python Code:
def jeffery_omega(L, K, n1, n2, n3, Omega, E):
Compute Jeffery angular velocity
L: (lambda^2-1)/(lambda^2+1)
K: (kappa^2-1)/(kappa^2+1)
n1,n2,n3: vector triplet representing current orientation
Omega: vorticity (lab frame)
E: strain matrix (lab frame)
Returns (3,) ndarray with angular velocity of particle (body frame)
See Appendix A in http://hdl.handle.net/2077/40830
omega1 = n1.dot(Omega) + (L-K)/(L*K-1.) * (n2.dot(E.dot(n3)))
omega2 = n2.dot(Omega) + L * (n1.dot(E.dot(n3)))
omega3 = n3.dot(Omega) - K * (n1.dot(E.dot(n2)))
return np.array([omega1, omega2, omega3])
def jeffery_numerical(L, K, q0, Omega, E, max_t = None, dt = 1e-3):
Integrate one trajectory according to Jeffery's equations.
L: (lambda^2-1)/(lambda^2+1) shape parameter 1
K: (kappa^2-1)/(kappa^2+1) shape parameter 2
q0: quaternion representing initial orientation
Omega: vorticity (lab frame)
E: strain matrix (lab frame)
max_t: Max time of trajectory, defaults to 2 Jeffery periods based on L
dt: Integration timestep
See Appendix A in https://arxiv.org/abs/1705.06997 for quaternion convention.
Returns (ts, qs, n2s, n3s) where
ts is (N,1) ndarray with timestamps (starting at 0) for N steps
qs is (N,4) ndarray with orientations (quaternions) for N steps
n2s is (N,3) ndarray with n2 vector for N steps
n3s is (N,3) ndarray with n3 vector for N steps
if max_t is None:
maxKL = max(abs(L),abs(K))
jeffery_T = 4*np.pi/np.sqrt(1-maxKL*maxKL)
max_t = 2*jeffery_T
N = int(max_t/dt)
ts = np.zeros((N,1))
n2s = np.zeros((N,3))
n3s = np.zeros((N,3))
qs = np.zeros((N,4))
q = q0
t=0
for n in range(N):
R = q.get_R()
n1 = R[:,0]
n2 = R[:,1]
n3 = R[:,2]
ts[n] = n*dt
n2s[n,:] = n2
n3s[n,:] = n3
qs[n,:] = q.q
omega = jeffery_omega(L, K, n1, n2, n3, Omega, E)
qdot = 0.5 * omega.dot(q.get_G())
q = q + dt*qdot
q.normalize()
return ts, qs, n2s, n3s
def jeffery_axisymmetric_exact(L, q0, Omega, E, max_t = None, dt = 1e-1):
Generate one exact trajectory for axisymmetric particle ('Jeffery orbit')
L: (lambda^2-1)/(lambda^2+1) shape parameter
q0: quaternion representing initial orientation
Omega: vorticity (lab frame)
E: strain matrix (lab frame)
max_t: Max time of trajectory, defaults to 2 Jeffery periods based on L
dt: Sample spacing
See Appendix A in https://arxiv.org/abs/1705.06997 for quaternion convention.
Returns (ts, qs, n2s, n3s) where
ts is (N,1) ndarray with timestamps (starting at 0) for N steps
n3s is (N,3) ndarray with n3 vector for N steps
if max_t is None:
jeffery_T = 4*np.pi/np.sqrt(1-L*L)
max_t = 2*jeffery_T
N = int(max_t/dt)
levi_civita = np.zeros((3, 3, 3))
levi_civita[0, 1, 2] = levi_civita[1, 2, 0] = levi_civita[2, 0, 1] = 1
levi_civita[0, 2, 1] = levi_civita[2, 1, 0] = levi_civita[1, 0, 2] = -1
O = -np.einsum('ijk,k',levi_civita, Omega)
B = O + L*E
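    # O satisfies O.dot(n) = Omega x n (since O[i,j] = -eps_{ijk} * Omega_k), so B = O + L*E
    # is the linear form of Jeffery's equation for an axisymmetric particle; below, n3(t) is
    # proportional to expm(B*t) applied to n3(0), renormalized at each sample time.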
n30 = q0.get_R().dot(np.array([0,0,1]))
ts = np.zeros((N,1))
n3s = np.zeros( (N,3) )
for n in range(N):
t = dt*n
M = scipy.linalg.expm(B*t)
n3 = M.dot(n30)
n3 = n3/np.linalg.norm(n3)
ts[n] = t
n3s[n,:] = n3
return (ts, n3s)
Explanation: Functions for angular velocity & integration
The particle is an ellipsoid. The reference state (corresponding to no rotation) is that the ellipsoid is axis-aligned and the axis lengths are (a_x, a_y, a_z). The shape parameters in the code below are
l = a_z/a_x
k = a_y/a_x
Its orientation is represented by the rotation (as a Quaternion) from the reference state. See Appendix A of https://arxiv.org/abs/1705.06997 for the quaternion convention.
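For reference, the (L, K) shape parameters used throughout the code follow from the aspect ratios as in this small helper (added for illustration only):
```python
def shape_params(l, k):
    # l = a_z/a_x, k = a_y/a_x  ->  L, K in (-1, 1)
    return (l**2 - 1) / (l**2 + 1), (k**2 - 1) / (k**2 + 1)
```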
End of explanation
Omega = np.array([0,0,-.5])
E = np.array([
[0,.5,0],
[.5,0,0],
[0,0,0]
])
Explanation: Omega & E (strain) for simple shear flow
End of explanation
angles = np.pi/2 * np.linspace(0.05,1,5)
## first test is axisymmetric along n3 (K=0)
ax = plt.subplot(1,2,1)
for angle in angles:
q0 = Quaternion(axis=[0,1,0], angle=angle)
l = 7
k = 1
L = (l**2-1)/(l**2+1)
K = (k**2-1)/(k**2+1)
(ts, qs, n2s, n3s) = jeffery_numerical(L, K, q0, Omega, E)
ax.plot(n3s[:,0],n3s[:,1],ls='solid', color='C0')
(ts, n3s) = jeffery_axisymmetric_exact(L,q0,Omega,E)
ax.plot(n3s[:,0],n3s[:,1],ls=(0, (5, 10)),color='C1')
ax.set_xlim(-1.1,1.1)
ax.set_ylim(-1.1,1.1)
ax.set_aspect('equal')
## second test is axisymmetric along n2 (L=0)
ax = plt.subplot(1,2,2)
for angle in angles:
q0_tri = Quaternion(axis=[1,0,0], angle=-angle)
q0_axi = Quaternion(axis=[1,0,0], angle=np.pi/2-angle)
l = 1
k = 7
L = (l**2-1)/(l**2+1)
K = (k**2-1)/(k**2+1)
(ts, qs, n2s, n3s) = jeffery_numerical(L, K, q0_tri, Omega, E)
ax.plot(n2s[:,0],n2s[:,1],ls='solid', color='C0')
(ts, n3s) = jeffery_axisymmetric_exact(K,q0_axi,Omega,E)
ax.plot(n3s[:,0],n3s[:,1],ls=(0, (5, 10)),color='C1')
ax.set_xlim(-1.1,1.1)
ax.set_ylim(-1.1,1.1)
ax.set_aspect('equal')
plt.show()
Explanation: Validate code against axisymmetric case (Jeffery orbits)
End of explanation
rot1=Quaternion(axis=[0,0,1], angle=0.1*np.pi/2) # this sets psi
rot2=Quaternion(axis=[1,0,0], angle=np.pi/2-0.1) # this sets theta
q0 = rot1.mul(rot2)
max_t = 300
fig = plt.figure(figsize=(15,8))
l = 7
k = 1
L = (l**2-1)/(l**2+1)
K = (k**2-1)/(k**2+1)
ax = fig.add_subplot(1,2,1, projection='3d')
(ts, qs, n2s, n3s) = jeffery_numerical(L, K, q0, Omega, E, max_t = max_t)
ax.plot(n3s[:,0],n3s[:,1],n3s[:,2],ls='solid', color='C0')
ax.set_xlim(-1.1,1.1)
ax.set_ylim(-1.1,1.1)
ax.set_zlim(-1.1,1.1)
ax.set_aspect('equal')
ax.set_title('l={:.2f} | k={:.2f}'.format(l,k))
ax = fig.add_subplot(1,2,2, projection='3d')
l = 7
k = 1.2
L = (l**2-1)/(l**2+1)
K = (k**2-1)/(k**2+1)
(ts, qs, n2s, n3s) = jeffery_numerical(L, K, q0, Omega, E, max_t = max_t)
ax.plot(n3s[:,0],n3s[:,1],n3s[:,2],ls='solid', color='C0')
ax.set_xlim(-1.1,1.1)
ax.set_ylim(-1.1,1.1)
ax.set_zlim(-1.1,1.1)
ax.set_aspect('equal')
ax.set_title('l={:.2f} | k={:.2f}'.format(l,k))
plt.show()
Explanation: Case 1: Axisymmetric (1,1,7) vs slightly asymmetric (1,1.2,7)
Side-by-side comparison between two slightly different particles started in the same initial condition.
A: initial condition in integrable region
See Fig 3.11 in Jonas' thesis for definitions of psi & theta.
These initial conditions are inside the integrable region, so the difference between symmetric and asymmetric particle is bounded.
End of explanation
rot1=Quaternion(axis=[0,0,1], angle=0.95*np.pi/2) # this sets psi
rot2=Quaternion(axis=[1,0,0], angle=np.pi/2-0.1) # this sets theta
q0 = rot1.mul(rot2)
max_t = 300
fig = plt.figure(figsize=(15,8))
l = 7
k = 1
L = (l**2-1)/(l**2+1)
K = (k**2-1)/(k**2+1)
ax = fig.add_subplot(1,2,1, projection='3d')
(ts, qs, n2s, n3s) = jeffery_numerical(L, K, q0, Omega, E, max_t = max_t)
ax.plot(n3s[:,0],n3s[:,1],n3s[:,2],ls='solid', color='C0')
np.savetxt('symmetric-l7-k1.csv', np.hstack((ts,qs)), delimiter=',') # export!
ax.set_xlim(-1.1,1.1)
ax.set_ylim(-1.1,1.1)
ax.set_zlim(-1.1,1.1)
ax.set_aspect('equal')
ax.set_title('l={:.2f} | k={:.2f}'.format(l,k))
ax = fig.add_subplot(1,2,2, projection='3d')
l = 7
k = 1.2
L = (l**2-1)/(l**2+1)
K = (k**2-1)/(k**2+1)
(ts, qs, n2s, n3s) = jeffery_numerical(L, K, q0, Omega, E, max_t = max_t)
ax.plot(n3s[:,0],n3s[:,1],n3s[:,2],ls='solid', color='C0')
np.savetxt('asymmetric-l7-k1-2.csv', np.hstack((ts,qs)), delimiter=',') # export!
ax.set_xlim(-1.1,1.1)
ax.set_ylim(-1.1,1.1)
ax.set_zlim(-1.1,1.1)
ax.set_aspect('equal')
ax.set_title('l={:.2f} | k={:.2f}'.format(l,k))
plt.show()
Explanation: B: initial condition in chaotic region
See Fig 3.11 in Jonas' thesis for definitions of psi & theta.
These initial conditions are inside the chaotic region, so the difference between symmetric and asymmetric particle is more pronounced.
End of explanation |
3,030 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Recommender System
The Netflix Challenge
The principle of this recommender system is the same as the Netflix Challenge
Step1: Scale all the grades between 0 (for the lowest value) and 10 (for the highest value)
Step2: The mentee should fill the array by giving a grade for each topic according to their preferences | Python Code:
authorID_to_titles_stem = utils.load_pickle("../pmi_data/authorID_to_titles_stem.p")
score_by_author = utils.load_pickle("../pmi_data/score_by_author_by_document.p")
Explanation: Recommender System
The Netflix Challenge
The principle of this recommender system is the same as the Netflix Challenge:
In the Netflix challenge we start with one N x M matrix with the users in the rows and the movies in the columns. The matrix is only partially filled.
The goal is to factorize that matrix into two other matrices, V = N x K and Z = K x M, where K is a free variable; it is also the number of features.
There is an interpretation of this factorization: the matrix V represents the users and the different features that characterize a movie (love, horror, action, period of time, ...), with each user giving a score to each of those features.
Meanwhile, the matrix Z has one row per feature and one column per movie; it describes each movie in terms of the features.
The last step is a matrix multiplication of V and Z, which gives the score a user would give to each movie.
Mentor-Mentee recommender system
Can we map that problem onto the Netflix Challenge? Indeed, every mentor and every mentee has their own favorite fields, and we could imagine other features as well, but let's keep it simple. If we can find a list of features describing the mentors and representing what the mentees are looking for, we end up with the same representation as in the Netflix challenge:
-user == mentee
-movie == mentor
To fill the matrix V (the one representing the mentees) we can ask the mentees questions that help assign a score to each feature (topic), or simply ask them to grade each topic directly.
To fill the Z matrix (mentors) we use the dblp file, infer K topics from the paper titles, and give each mentor a grade on each detected topic.
The last step is the matrix multiplication between one or more mentees and the mentors, then assigning each mentee to a mentor based on the rule we want to apply (1-1 mapping, 1-N mapping, ...). A toy illustration of the reconstruction step is sketched below.
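A toy sketch of that reconstruction step (an added illustration with random data, not the dblp-derived matrices used below):
```python
import numpy as np

N, M, K = 4, 5, 3                  # mentees, mentors, topics
V = np.random.rand(N, K)           # mentee x topic scores
Z = np.random.rand(K, M)           # topic x mentor scores
R = V.dot(Z)                       # predicted mentee x mentor scores
best_mentor = R.argmax(axis=1)     # one candidate mentor per mentee
```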
End of explanation
def scale_list(list_):
min_ = min(list_)
max_ = max(list_)
diff = max_ - min_
finale_scale = 10
return [(i-min_)/diff * finale_scale for i in list_]
score_by_author = np.apply_along_axis(scale_list, axis=1, arr=score_by_author)
score_by_author[0]
def get_mentor(mentee):
return np.argmax(np.dot(mentee, score_by_author.T), axis=1)
Explanation: Scale all the grades between 0 (for the lowest value) and 10 (for the highest value)
End of explanation
mentee1 = np.zeros(20)
mentee1[1] = 10
author_id = get_mentor([mentee1])
author_id
list(authorID_to_titles_stem.values())[author_id[0]]
Explanation: The mentee should fill the array by giving a grade for each topic according to their preferences
End of explanation |
3,031 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Save & Restore with a MNIST example
As you will have noticed while running the MNIST example, training takes quite a lot of time.
For this reason (and not only this one), the usual practice is to save the model parameters after training and to load them back when evaluating.
The functions used for this are torch.save, torch.load, model.state_dict() and model.load_state_dict().
In fact, at the end of the chapter 4 tutorial we already saved the model parameters with torch.save.
So in this chapter we restore the model parameters from the saved file and use them without running any training.
```python
# training end
torch.save(model.state_dict(), checkpoint_filename)
# evaluating start
checkpoint = torch.load(checkpoint_filename)
model.load_state_dict(checkpoint)
```
Step1: 1. Input DataLoader setup
Step2: 2. Preliminary setup
* model
* loss (omitted, since we do not train)
* optimizer (omitted, since we do not train)
Step3: 3. Restore model parameters from saved file
Step4: 6. Predict & Evaluate
Even without training, restoring the previously learned model parameters gives an accuracy above 98%.
Step5: 5. plot weights
Plot the model's weights. | Python Code:
%matplotlib inline
Explanation: Save & Restore with a MNIST example
As you will have noticed while running the MNIST example, training takes quite a lot of time.
For this reason (and not only this one), the usual practice is to save the model parameters after training and to load them back when evaluating.
The functions used for this are torch.save, torch.load, model.state_dict() and model.load_state_dict().
In fact, at the end of the chapter 4 tutorial we already saved the model parameters with torch.save.
So in this chapter we restore the model parameters from the saved file and use them without running any training.
```python
# training end
torch.save(model.state_dict(), checkpoint_filename)
# evaluating start
checkpoint = torch.load(checkpoint_filename)
model.load_state_dict(checkpoint)
```
End of explanation
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
from torchvision import datasets, transforms
from torch.autograd import Variable
import matplotlib.pyplot as plt
is_cuda = torch.cuda.is_available() # True if CUDA is available
checkpoint_filename = 'minist.ckpt'
test_loader = torch.utils.data.DataLoader(
datasets.MNIST('data', train=False, transform=transforms.ToTensor()),
batch_size=100, shuffle=False)
Explanation: 1. Input DataLoader setup
End of explanation
class MnistModel(nn.Module):
def __init__(self):
super(MnistModel, self).__init__()
# input is 28x28
# padding=2 for same padding
self.conv1 = nn.Conv2d(1, 32, 5, padding=2)
# feature map size is 14*14 by pooling
# padding=2 for same padding
self.conv2 = nn.Conv2d(32, 64, 5, padding=2)
# feature map size is 7*7 by pooling
self.fc1 = nn.Linear(64*7*7, 1024)
self.fc2 = nn.Linear(1024, 10)
def forward(self, x):
x = F.max_pool2d(F.relu(self.conv1(x)), 2)
x = F.max_pool2d(F.relu(self.conv2(x)), 2)
x = x.view(-1, 64*7*7) # reshape Variable
x = F.relu(self.fc1(x))
x = F.dropout(x, training=self.training)
x = self.fc2(x)
return F.log_softmax(x)
model = MnistModel()
if is_cuda : model.cuda()
Explanation: 2. Preliminary setup
* model
* loss (omitted, since we do not train)
* optimizer (omitted, since we do not train)
End of explanation
checkpoint = torch.load(checkpoint_filename)
model.load_state_dict(checkpoint)
Explanation: 3. Restore model parameters from saved file
End of explanation
model.eval()
correct = 0
for image, target in test_loader:
if is_cuda : image, target = image.cuda(), target.cuda()
image, target = Variable(image, volatile=True), Variable(target)
output = model(image)
prediction = output.data.max(1)[1]
correct += prediction.eq(target.data).sum()
print('\nTest set: Accuracy: {:.2f}%'.format(100. * correct / len(test_loader.dataset)))
Explanation: 6. Predict & Evaluate
Even without training, restoring the previously learned model parameters gives an accuracy above 98%.
End of explanation
model.state_dict().keys()
plt.rcParams["figure.figsize"] = [8, 4]
weight = model.state_dict()['conv1.weight']
wmax, wmin = torch.max(weight), torch.min(weight)
gridimg = torchvision.utils.make_grid(weight).cpu().numpy().transpose((1,2,0))
plt.imshow(gridimg[:,:,0], vmin = wmin, vmax =wmax, interpolation='nearest', cmap='seismic') # gridimg[:, :, 0] shows a single color channel
plt.rcParams["figure.figsize"] = [8, 8]
weight = model.state_dict()['conv2.weight'] # 64 x 32 x 5 x 5
weight = weight[:, 0:1, :, :] # 64 x 1 x 5 x 5
wmax, wmin = torch.max(weight), torch.min(weight)
gridimg = torchvision.utils.make_grid(weight).cpu().numpy().transpose((1,2,0))
plt.imshow(gridimg[:,:,0], vmin = wmin, vmax =wmax, interpolation='nearest', cmap='seismic') # gridimg[:, :, 0] shows a single color channel
Explanation: 5. plot weights
Plot the model's weights.
End of explanation |
3,032 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Indirection
They say "all problems in computer science can be solved with an extra level of indirection."
It certainly provides some real leverage in data wrangling. Rather than write a bunch of spaghetti
code, we will build a table that defines the transformation we would like to perform on the
raw data in order to have something cleaner to work with. In this we can map the indecipherable identifiers
into something more understandable; we can establish formatters; we can translate field encodings into
clear mnemonics, and so on.
We need a tool for finding elements in the translation table; that's table_lookup. Then we can
build our mapping tool, map_raw_table.
Step1: Descriptive statistics - smoking
Step2: What is the effect of smoking on weight?
Step3: Permutation tests
Step4: Is the difference observed between these samples representative of the larger population?
Step5: The 4.5 kg difference is certainly not an artifact of the sample we started with. The smokers definitely weigh less. At the same time, these are not light people in this study. Better go back and understand what was the purpose of the study that led to the selection of these six thousand individuals.
Other Factors | Python Code:
health_map = Table(["raw label", "label", "encoding", "Description"]).with_rows(
[["hhidpn", "id", None, "identifier"],
["r8agey_m", "age", None, "age in years in wave 8"],
["ragender", "gender", ['male','female'], "1 = male, 2 = female)"],
["raracem", "race", ['white','black','other'], "(1 = white, 2 = black, 3 = other)"],
["rahispan", "hispanic", None, "(1 = yes)"],
["raedyrs", "education", None, "education in years"],
["h8cpl", "couple", None, "in a couple household (1 = yes)"],
["r8bpavgs", "blood pressure", None,"average systolic BP"],
["r8bpavgp", "pulse", None, "average pulse"],
["r8smoken", "smoker",None, "currently smokes cigarettes"],
["r8mdactx", "exercise", None, "frequency of moderate exercise (1=everyday, 2=>1perweek, 3=1perweek, 4=1-3permonth\
, 5=never)"],
["r8weightbio", "weight", None, "objective weight in kg"],
["r8heightbio","height", None, "objective height in m"]])
health_map
def table_lookup(table,key_col,key,map_col):
row = np.where(table[key_col]==key)
if len(row[0]) == 1:
return table[map_col][row[0]][0]
else:
return -1
def map_raw_table(raw_table,map_table):
mapped = Table()
for raw_label in raw_table :
if raw_label in map_table["raw label"] :
new_label = table_lookup(map_table,'raw label',raw_label,'label')
encoding = table_lookup(map_table,'raw label',raw_label,'encoding')
if encoding is None :
mapped[new_label] = raw_table[raw_label]
else:
mapped[new_label] = raw_table.apply(lambda x: encoding[x-1], raw_label)
return mapped
# create a more usable table by mapping the raw to finished
health = map_raw_table(hrec06,health_map)
health
Explanation: Indirection
They say "all problems in computer science can be solved with an extra level of indirection."
It certainly provides some real leverage in data wrangling. Rather than write a bunch of spaghetti
code, we will build a table that defines the transformation we would like to perform on the
raw data in order to have something cleaner to work with. In this we can map the indecipherable identifiers
into something more understandable; we can establish formatters; we can translate field encodings into
clear mnemonics, and so on.
We need a tool for finding elements in the translation table; that's table_lookup. Then we can
build our mapping tool, map_raw_table.
End of explanation
def firstQtile(x) : return np.percentile(x,25)
def thirdQtile(x) : return np.percentile(x,75)
summary_ops = (min, firstQtile, np.median, np.mean, thirdQtile, max, sum)
# Let's try what is the effect of smoking
smokers = health.where('smoker',1)
nosmokers = health.where('smoker',0)
print(smokers.num_rows, ' smokers')
print(nosmokers.num_rows, ' non-smokers')
smokers.stats(summary_ops)
nosmokers.stats(summary_ops)
help(smokers.hist)
Explanation: Descriptive statistics - smoking
End of explanation
smokers.hist('weight', bins=20)
nosmokers.hist('weight', bins=20)
np.mean(nosmokers['weight'])-np.mean(smokers['weight'])
Explanation: What is the effect of smoking on weight?
End of explanation
# Lets draw two samples of equal size
n_sample = 200
smoker_sample = smokers.sample(n_sample)
nosmoker_sample = nosmokers.sample(n_sample)
weight = Table().with_columns([('NoSmoke', nosmoker_sample['weight']),('Smoke', smoker_sample['weight'])])
weight.hist(overlay=True,bins=30,normed=True)
weight.stats(summary_ops)
Explanation: Permutation tests
End of explanation
combined = Table().with_column('all', np.append(nosmoker_sample['weight'],smoker_sample['weight']))
combined.num_rows
# permutation test, split the combined into two random groups, do the comparison of those
def getdiff():
A,B = combined.split(n_sample)
return (np.mean(A['all'])-np.mean(B['all']))
# Do the permutation many times and form the distribution of results
num_samples = 300
diff_samples = Table().with_column('diffs', [getdiff() for i in range(num_samples)])
diff_samples.hist(bins=np.arange(-5,5,0.5), normed=True)
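# Added sketch (not in the original analysis): an empirical p-value for the observed
# gap, i.e. the fraction of random splits whose difference is at least as large.
observed = np.mean(nosmoker_sample['weight']) - np.mean(smoker_sample['weight'])
pval = np.mean(np.abs(diff_samples['diffs']) >= abs(observed))
print(observed, pval)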
Explanation: Is the difference observed between these samples representative of the larger population?
End of explanation
# A sense of the overall population represented - older
health.select(['age','education']).hist(bins=20)
# How does education correlate with age?
health.select(['age','education']).scatter('age', fit_line=True)
health.pivot_hist('race','education',normed=True)
# How are races represented in the dataset and how does hispanic overlay the three?
race = health.select(['race', 'hispanic'])
race['count']=1
by_race = race.group('race',sum)
by_race['race frac'] = by_race['count sum']/np.sum(by_race['count sum'])
by_race['hisp frac'] = by_race['hispanic sum'] / by_race['count sum']
by_race
health.select(['height','weight']).scatter('height','weight',fit_line=True)
Explanation: The 4.5 kg difference is certainly not an artifact of the sample we started with. The smokers definitely weigh less. At the same time, these are not light people in this study. Better go back and understand what was the purpose of the study that led to the selection of these six thousand individuals.
Other Factors
End of explanation |
3,033 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Adding a Reduction Operation
This notebook will show you how to add a new reduction operation last_date to the existing backend SQLite.
A reduction operation is a function that maps $N$ rows to 1 row, for example the sum function.
Description
We're going to add a last_date function to ibis. last_date simply returns the latest date of a list of dates.
Step 1
Step2: We just defined a LastDate class that takes one date column as input, and returns a scalar output of the same type as the input. This matches both the requirements of a reduction and the specifics of the function that we want to implement.
Note
Step3: Interlude
Step4: Step 3
Step5: Step 4
Step6: Create and execute a last_date expression
Step7: Last country to gain independence in our database
Step8: Last country to gain independence from the Spanish Empire, using the where parameter | Python Code:
import ibis.expr.datatypes as dt
import ibis.expr.rules as rlz
from ibis.expr.operations import Reduction
class LastDate(Reduction):
arg = rlz.column(rlz.date)
where = rlz.optional(rlz.boolean)
output_dtype = rlz.dtype_like('arg')
output_shape = rlz.Shape.SCALAR
Explanation: Adding a Reduction Operation
This notebook will show you how to add a new reduction operation last_date to the existing backend SQLite.
A reduction operation is a function that maps $N$ rows to 1 row, for example the sum function.
Description
We're going to add a last_date function to ibis. last_date simply returns the latest date of a list of dates.
Step 1: Define the Operation
Let's define the last_date operation as a function that takes any date column as input and returns a date:
```python
import datetime
import typing
def last_date(dates: typing.List[datetime.date]) -> datetime.date:
Latest date
```
End of explanation
from ibis.expr.types import (
DateColumn, # not DateValue! reductions are only valid on columns
)
def last_date(date_column, where=None):
return LastDate(date_column, where=where).to_expr()
DateColumn.last_date = last_date
Explanation: We just defined a LastDate class that takes one date column as input, and returns a scalar output of the same type as the input. This matches both the requirements of a reduction and the specifics of the function that we want to implement.
Note: It is very important that you write the correct argument rules and output type here. The expression will not work otherwise.
Step 2: Define the API
Because every reduction in ibis has the ability to filter out values during aggregation (a typical feature in databases and analytics tools), to make an expression out of LastDate we need to pass an additional argument: where to our LastDate constructor.
End of explanation
import ibis
people = ibis.table(
dict(name='string', country='string', date_of_birth='date'), name='people'
)
people.date_of_birth.last_date()
people.date_of_birth.last_date(people.country == 'Indonesia')
Explanation: Interlude: Create some expressions using last_date
End of explanation
import sqlalchemy as sa
@ibis.sqlite.add_operation(LastDate)
def _last_date(translator, expr):
# pull out the arguments to the expression
arg, where = expr.op().args
# compile the argument
compiled_arg = translator.translate(arg)
# call the appropriate SQLite function (`max` for the latest/maximum date)
agg = sa.func.max(compiled_arg)
# handle a non-None filter clause
if where is not None:
return agg.filter(translator.translate(where))
return agg
Explanation: Step 3: Turn the Expression into SQL
End of explanation
!curl -LsS -o $TEMPDIR/geography.db 'https://storage.googleapis.com/ibis-tutorial-data/geography.db'
import os
import tempfile
import ibis
db_fname = os.path.join(tempfile.gettempdir(), 'geography.db')
con = ibis.sqlite.connect(db_fname)
Explanation: Step 4: Putting it all Together
End of explanation
independence = con.table('independence')
independence
Explanation: Create and execute a last_date expression
End of explanation
expr = independence.independence_date.last_date()
expr
sql_expr = expr.compile()
print(sql_expr)
expr.execute()
Explanation: Last country to gain independence in our database:
End of explanation
expr = independence.independence_date.last_date(
where=independence.independence_from == 'Spanish Empire'
)
expr
result = expr.execute()
result
Explanation: Last country to gain independence from the Spanish Empire, using the where parameter:
End of explanation |
3,034 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-2', 'sandbox-3', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: TEST-INSTITUTE-2
Source ID: SANDBOX-3
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:45
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmopshere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river soil hydrology in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
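ENUM properties accept only the strings listed under Valid Choices, copied exactly; the selection below is illustrative:
# Illustrative selection only - pick the entry matching your hydrology scheme
DOC.set_value("Explicit diffusion")
# For a scheme not in the list, the "Other: [Please specify]" entry is presumably
# used with the specific scheme named in place of the placeholder.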
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
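An ENUM with cardinality 0.N or 1.N can take several of the listed choices; the values below are examples only, assuming one DOC.set_value call per selected choice:
# Example runoff types only - keep each string identical to the Valid Choices list
DOC.set_value("Gravity drainage")
DOC.set_value("Horton mechanism")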
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, describe the dependencies of the snow albedo calculations*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are basins not flowing to the ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
3,035 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A comparison of methods of random choice
random.choice vs. a random.multinomial based implementation of the same weighted choice. Also compare with a GNU Scientific Library based implementation.
Context
Step1: GSL based multinomial called using CythonGSL wrapper
Step2: GSL based multinomial called directly
Step3: Test equivalence of results
Step4: Conclusion
Step5: The multinomial based method is (surprisingly?) an order of magnitude faster. This is probably fixed in the bleeding edge version of numpy (see https://github.com/numpy/numpy/issues/4188).
Step6: For large N the gsl multinomial function is significantly faster than using np.random
Seeding of gsl multinomial generator | Python Code:
import numpy as np
%load_ext Cython
Explanation: A comparison of methods of random choice
random.choice vs. a random.multinomial based implementation of the same weighted choice. Also compare with a GNU Scientific Library based implementation.
Context: random.choice is only available in numpy >= 1.7, so I was trying to find a simple substitute for machines running older numpy versions.
End of explanation
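As a point of reference for the comparisons below, a minimal pure-numpy substitute for the weighted choice (the kind of fallback the context above asks for) can be built directly from np.random.multinomial; this sketch assumes p sums to one and, unlike random.choice, returns the samples grouped by category rather than shuffled:
def multinomial_choice(p, size, prng=np.random):
    # Expand multinomial counts into category indices (grouped, not shuffled)
    counts = prng.multinomial(size, p)
    return np.repeat(np.arange(len(p)), counts)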
%%cython -l gsl
cimport cython
from cython_gsl cimport *
import numpy as np
from numpy cimport *
cdef gsl_rng *r = gsl_rng_alloc(gsl_rng_mt19937)
def multinomial(ndarray[double, ndim=1] p, unsigned int N):
cdef:
size_t K = p.shape[0]
ndarray[uint32_t, ndim=1] n = np.empty_like(p, dtype='uint32')
# void gsl_ran_multinomial (const gsl_rng * r, size_t K, unsigned int N, const double p[], unsigned int n[])
gsl_ran_multinomial(r, K, N, <double*> p.data, <unsigned int *> n.data)
return n
Explanation: GSL based multinomial called using CythonGSL wrapper
End of explanation
%%cython -l gsl
cimport cython
import numpy as np
from numpy cimport *
cdef extern from "gsl/gsl_rng.h":
ctypedef struct gsl_rng_type
ctypedef struct gsl_rng
cdef gsl_rng_type *gsl_rng_mt19937
gsl_rng *gsl_rng_alloc ( gsl_rng_type * T) nogil
cdef extern from "gsl/gsl_randist.h":
void gsl_ran_multinomial ( gsl_rng * r, size_t K,
unsigned int N, double p[],
unsigned int n[] ) nogil
void gsl_rng_set (const gsl_rng * r, unsigned long int s) nogil
cdef gsl_rng *r = gsl_rng_alloc(gsl_rng_mt19937)
def seed_directgsl(unsigned long int seed):
gsl_rng_set(r, seed)
def multinomial_directgsl(ndarray[double, ndim=1] p, unsigned int N):
cdef:
size_t K = p.shape[0]
ndarray[uint32_t, ndim=1] n = np.empty_like(p, dtype='uint32')
# void gsl_ran_multinomial (const gsl_rng * r, size_t K, unsigned int N, const double p[], unsigned int n[])
gsl_ran_multinomial(r, K, N, <double*> p.data, <unsigned int *> n.data)
return n
def choice(p):
n = np.random.random(p.shape)
pcum = p.cumsum()
return pcum.searchsorted(n)
Explanation: GSL based multinomial called directly
End of explanation
p = np.array([0.5, 0.3, 0.2])
prng = np.random.RandomState(3)
print(prng.choice(3, size = 3, p = p))
print(np.repeat(np.arange(3), prng.multinomial(3, p)))
print(np.repeat(np.arange(3), multinomial(p, 3)))
print(np.repeat(np.arange(3), multinomial_directgsl(p, 3)))
N = 100000
print(np.bincount(np.random.choice(3, size = 3 * N, p = p))/(3.0 * N))
print(np.bincount(np.asarray([np.repeat(np.arange(3), np.random.multinomial(3, p)) for i in range(N)]).flatten())/(3.0 *N))
print(np.bincount(np.asarray([np.repeat(np.arange(3), multinomial(p, 3)) for i in range(N)]).flatten())/(3.0 *N))
print(np.bincount(np.asarray([np.repeat(np.arange(3), multinomial_directgsl(p, 3)) for i in range(N)]).flatten())/(3.0 *N))
print(np.bincount(np.asarray([choice(p) for i in range(N)]).flatten())/(3.0 *N))
Explanation: Test equivalence of results
End of explanation
p = np.array([0.5, 0.3, 0.2])
%timeit -n 10000 prng.choice(3, size = 3, p = p)
%timeit -n 10000 np.repeat(np.arange(3), prng.multinomial(3, p))
%timeit -n 10000 np.repeat(np.arange(3), multinomial(p, 3))
%timeit -n 10000 np.repeat(np.arange(3), multinomial_directgsl(p, 3))
%timeit -n 10000 choice(p)
Explanation: Conclusion: All methods are statistically equivalent. They do not give the same results for the same random seed, though.
Time execution times
End of explanation
N = 10000
p = np.random.rand(N)
p /= np.sum(p)
N_arange = np.arange(N)
%timeit -n 100 prng.choice(N, size = N, p = p)
%timeit -n 100 np.repeat(N_arange, prng.multinomial(N, p))
%timeit -n 100 np.repeat(N_arange, multinomial(p, N))
%timeit -n 100 np.repeat(N_arange, multinomial_directgsl(p, N))
%timeit -n 100 choice(p)
Explanation: The multinomial based method is (surprisingly?) an order of magnitude faster. This is probably fixed in the bleeding edge version of numpy (see https://github.com/numpy/numpy/issues/4188).
End of explanation
p = np.array([0.5, 0.3, 0.2])
print(multinomial_directgsl(p, 3))
print(multinomial_directgsl(p, 3))
seed_directgsl(10)
print(multinomial_directgsl(p, 3))
seed_directgsl(10)
print(multinomial_directgsl(p, 3))
Explanation: For large N the gsl multinomial function is significantly faster than using np.random
Seeding of gsl multinomial generator
End of explanation |
3,036 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting started
Step1: Authenticate your GCP account
If you are using AI Platform Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
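For the Colab case mentioned above, the authentication cell is typically along these lines (a sketch only; the notebook's actual cell may differ):
import sys
if 'google.colab' in sys.modules:
    from google.colab import auth
    auth.authenticate_user()  # opens the oAuth prompt in Colab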
Step2: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a training job using the Cloud SDK, you upload a Python package
containing your training code to a Cloud Storage bucket. AI Platform runs
the code from this package. In this tutorial, AI Platform also saves the
trained model that results from your job in the same bucket. You can then
create an AI Platform model version based on this output in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Cloud
AI Platform services are
available.
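A minimal sketch of that setup, assuming the notebook uses variables named BUCKET_NAME and REGION; both values below are placeholders you must change:
BUCKET_NAME = "your-bucket-name"  # placeholder - must be globally unique
REGION = "us-central1"            # placeholder - pick a region where AI Platform is available
# Only if the bucket does not already exist (see the next step):
# ! gsutil mb -l $REGION gs://$BUCKET_NAME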
Step3: Only if your bucket doesn't already exist
Step4: Finally, validate access to your Cloud Storage bucket by examining its contents
Step5: Part 1. Quickstart for training in AI Platform
This section of the tutorial walks you through submitting a training job to Cloud
AI Platform. This job runs sample code that uses Keras to train a deep neural
network on the United States Census data. It outputs the trained model as a
TensorFlow SavedModel
directory
in your Cloud Storage bucket.
Get training code and dependencies
First, download the training code and change the notebook's working directory
Step6: Notice that the training code is structured as a Python package in the
trainer/ subdirectory
Step7: Run the following cell to install Python dependencies needed to train the model locally. When you run the training job in AI Platform,
dependencies are preinstalled based on the runtime
version
you choose.
Step8: Train your model locally
Before training on AI Platform, train the job locally to verify the file
structure and packaging is correct.
For a complex or resource-intensive job, you
may want to train locally on a small sample of your dataset to verify your code.
Then you can run the job on AI Platform to train on the whole dataset.
This sample runs a relatively quick job on a small dataset, so the local
training and the AI Platform job run the same code on the same data.
Run the following cell to train a model locally
Step9: Train your model using AI Platform
Next, submit a training job to AI Platform. This runs the training module
in the cloud and exports the trained model to Cloud Storage.
First, give your training job a name and choose a directory within your Cloud
Storage bucket for saving intermediate and output files
Step10: Run the following command to package the trainer/ directory, upload it to the
specified --job-dir, and instruct AI Platform to run the
trainer.task module from that package.
The --stream-logs flag lets you view training logs in the cell below. You can
also see logs and other job details in the GCP Console.
Hyperparameter tuning
You can optionally perform hyperparameter tuning by using the included
hptuning_config.yaml configuration file. This file tells AI Platform to tune the batch size and learning rate for training over multiple trials to maximize accuracy.
In this example, the training code uses a TensorBoard
callback,
which creates TensorFlow Summary
Events
during training. AI Platform uses these events to track the metric you want to
optimize. Learn more about hyperparameter tuning in
AI Platform Training.
Step11: Part 2. Quickstart for online predictions in AI Platform
This section shows how to use AI Platform and your trained model from Part 1
to predict a person's income bracket from other Census information about them.
Create model and version resources in AI Platform
To serve online predictions using the model you trained and exported in Part 1,
create a model resource in AI Platform and a version resource
within it. The version resource is what actually uses your trained model to
serve predictions. This structure lets you adjust and retrain your model many times and
organize all the versions together in AI Platform. Learn more about models
and
versions.
While you specify --region $REGION in gcloud commands, you will use a regional endpoint. You can also specify --region global to use the global endpoint. Please note that you must create versions using the same endpoint as the one you use to create the model. Learn more about available regional endpoints.
First, name and create the model resource
Step12: Next, create the model version. The training job from Part 1 exported a timestamped
TensorFlow SavedModel
directory
to your Cloud Storage bucket. AI Platform uses this directory to create a
model version. Learn more about SavedModel and
AI Platform.
You may be able to find the path to this directory in your training job's logs.
Look for a line like
Step13: Prepare input for prediction
To receive valid and useful predictions, you must preprocess input for prediction in the same way that training data was preprocessed. In a production
system, you may want to create a preprocessing pipeline that can be used identically at training time and prediction time.
For this exercise, use the training package's data-loading code to select a random sample from the evaluation data. This data is in the form that was used to evaluate accuracy after each epoch of training, so it can be used to send test predictions without further preprocessing
Step14: Notice that categorical fields, like occupation, have already been converted to integers (with the same mapping that was used for training). Numerical fields, like age, have been scaled to a
z-score. Some fields have been dropped from the original
data. Compare the prediction input with the raw data for the same examples
Step15: Export the prediction input to a newline-delimited JSON file
Step16: The gcloud command-line tool accepts newline-delimited JSON for online
prediction, and this particular Keras model expects a flat list of
numbers for each input example.
AI Platform requires a different format when you make online prediction requests to the REST API without using the gcloud tool. The way you structure
your model may also change how you must format data for prediction. Learn more
about formatting data for online
prediction.
Submit the online prediction request
Use gcloud to submit your online prediction request.
Step17: Since the model's last layer uses a sigmoid function for its activation, outputs between 0 and 0.5 represent negative predictions ("<=50K") and outputs between 0.5 and 1 represent positive ones (">50K").
Do the predicted income brackets match the actual ones? Run the following cell
to see the true labels.
Step18: Part 3. Developing the Keras model from scratch
At this point, you have trained a machine learning model on AI Platform, deployed the trained model as a version resource on AI Platform, and received online predictions from the deployment. The next section walks through recreating the Keras code used to train your model. It covers the following parts of developing a machine learning model for use with AI Platform
Step19: Then, define some useful constants
Step22: Download and preprocess data
Download the data
Next, define functions to download training and evaluation data. These functions also fix minor irregularities in the data's formatting.
Step23: Use those functions to download the data for training and verify that you have CSV files for training and evaluation
Step24: Next, load these files using Pandas and examine the data
Step26: Preprocess the data
The first preprocessing step removes certain features from the data and
converts categorical features to numerical values for use with Keras.
Learn more about feature engineering and bias in data.
Step27: Run the following cell to see how preprocessing changed the data. Notice in particular that income_bracket, the label that you're training the model to predict, has changed from <=50K and >50K to 0 and 1
Step28: Next, separate the data into features ("x") and labels ("y"), and reshape the label arrays into a format for use with tf.data.Dataset later
Step30: Scaling training data so each numerical feature column has a mean of 0 and a standard deviation of 1 can improve your model.
In a production system, you may want to save the means and standard deviations from your training set and use them to perform an identical transformation on test data at prediction time. For convenience in this exercise, temporarily combine the training and evaluation data to scale all of them
Step31: Finally, examine some of your fully preprocessed training data
Step33: Design and train the model
Create training and validation datasets
Create an input function to convert features and labels into a
tf.data.Dataset for training or evaluation
Step34: Next, create these training and evaluation datasets. Use the NUM_EPOCHS
and BATCH_SIZE hyperparameters defined previously to define how the training
dataset provides examples to the model during training. Set up the validation
dataset to provide all its examples in one batch, for a single validation step
at the end of each training epoch.
Step36: Design a Keras Model
Design your neural network using the Keras Sequential API.
This deep neural network (DNN) has several hidden layers, and the last layer uses a sigmoid activation function to output a value between 0 and 1
Step37: Next, create the Keras model object and examine its structure
Step38: Train and evaluate the model
Define a learning rate decay to encourage model parameters to make smaller
changes as training goes on
Step39: Finally, train the model. Provide the appropriate steps_per_epoch for the
model to train on the entire training dataset (with BATCH_SIZE examples per step) during each epoch. And instruct the model to calculate validation
accuracy with one big validation batch at the end of each epoch.
Step40: Visualize training and export the trained model
Visualize training
Import matplotlib to visualize how the model learned over the training period.
Step41: Plot the model's loss (binary cross-entropy) and accuracy, as measured at the
end of each training epoch
Step42: Over time, loss decreases and accuracy increases. But do they converge to a
stable level? Are there big differences between the training and validation
metrics (a sign of overfitting)?
Learn about how to improve your machine learning
model. Then, feel
free to adjust hyperparameters or the model architecture and train again.
Export the model for serving
Next, export the model as a SavedModel directory, which AI Platform requires when you create a model version resource.
Since not all optimizers can be exported to the SavedModel format, you may see warnings during the export process. As long as you successfully export a serving graph, AI Platform can use the SavedModel to serve predictions.
Step43: You may export a SavedModel directory to your local filesystem or to Cloud
Storage, as long as you have the necessary permissions. In your current
environment, you granted access to Cloud Storage by authenticating your GCP account and setting the GOOGLE_APPLICATION_CREDENTIALS environment variable.
AI Platform training jobs can also export directly to Cloud Storage, because
AI Platform service accounts have access to Cloud Storage buckets in their own
project.
Try exporting directly to Cloud Storage
Step44: You can now deploy this model to AI Platform and serve predictions by
following the steps from Part 2.
Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Alternatively, you can clean up individual resources by running the following
commands | Python Code:
PROJECT_ID = "<your-project-id>" #@param {type:"string"}
! gcloud config set project $PROJECT_ID
Explanation: Getting started: Training and prediction with Keras in AI Platform
<img src="https://storage.googleapis.com/cloud-samples-data/ai-platform/census/keras-tensorflow-cmle.png" alt="Keras, TensorFlow, and AI Platform logos" width="300px">
<table align="left">
<td>
<a href="https://cloud.google.com/ml-engine/docs/tensorflow/getting-started-keras">
<img src="https://cloud.google.com/_static/images/cloud/icons/favicons/onecloud/super_cloud.png"
alt="Google Cloud logo" width="32px"> Read on cloud.google.com
</a>
</td>
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/cloudml-samples/blob/main/notebooks/tensorflow/getting-started-keras.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/cloudml-samples/blob/main/notebooks/tensorflow/getting-started-keras.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
Overview
This tutorial shows how to train a neural network on AI Platform
using the Keras sequential API and how to serve predictions from that
model.
Keras is a high-level API for building and training deep learning models.
tf.keras is TensorFlow’s
implementation of this API.
The first two parts of the tutorial walk through training a model on Cloud
AI Platform using prewritten Keras code, deploying the trained model to
AI Platform, and serving online predictions from the deployed model.
The last part of the tutorial digs into the training code used for this model and ensuring it's compatible with AI Platform. To learn more about building
machine learning models in Keras more generally, read TensorFlow's Keras
tutorials.
Dataset
This tutorial uses the United States Census Income
Dataset provided by the
UC Irvine Machine Learning
Repository. This dataset contains
information about people from a 1994 Census database, including age, education,
marital status, occupation, and whether they make more than $50,000 a year.
Objective
The goal is to train a deep neural network (DNN) using Keras that predicts
whether a person makes more than $50,000 a year (target label) based on other
Census information about the person (features).
This tutorial focuses more on using this model with AI Platform than on
the design of the model itself. However, it's always important to think about
potential problems and unintended consequences when building machine learning
systems. See the Machine Learning Crash Course exercise about
fairness
to learn about sources of bias in the Census dataset, as well as machine
learning fairness more generally.
Costs
This tutorial uses billable components of Google Cloud Platform (GCP):
AI Platform
Cloud Storage
Learn about AI Platform
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Before you begin
You must do several things before you can train and deploy a model in
AI Platform:
Set up your local development environment.
Set up a GCP project with billing and the necessary
APIs enabled.
Authenticate your GCP account in this notebook.
Create a Cloud Storage bucket to store your training package and your
trained model.
Set up your local development environment
If you are using Colab or AI Platform Notebooks, your environment already meets
all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements.
You need the following:
The Google Cloud SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Google Cloud guide to Setting up a Python development
environment and the Jupyter
installation guide provide detailed instructions
for meeting these requirements. The following steps provide a condensed set of
instructions:
Install and initialize the Cloud SDK.
Install Python 3.
Install
virtualenv
and create a virtual environment that uses Python 3.
Activate that environment and run pip install jupyter in a shell to install
Jupyter.
Run jupyter notebook in a shell to launch Jupyter.
Open this notebook in the Jupyter Notebook Dashboard.
Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project.
Make sure that billing is enabled for your project.
Enable the AI Platform ("Cloud Machine Learning Engine") and Compute Engine APIs.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
if 'google.colab' in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
else:
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your GCP account
If you are using AI Platform Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the GCP Console, go to the Create service account key
page.
From the Service account drop-down list, select New service account.
In the Service account name field, enter a name.
From the Role drop-down list, select
Machine Learning Engine > AI Platform Admin and
Storage > Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your
local environment.
Enter the path to your service account key as the
GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "<your-bucket-name>" #@param {type:"string"}
REGION = "us-central1" #@param {type:"string"}
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a training job using the Cloud SDK, you upload a Python package
containing your training code to a Cloud Storage bucket. AI Platform runs
the code from this package. In this tutorial, AI Platform also saves the
trained model that results from your job in the same bucket. You can then
create an AI Platform model version based on this output in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Cloud
AI Platform services are
available.
End of explanation
! gsutil mb -l $REGION gs://$BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al gs://$BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
# Clone the repository of AI Platform samples
! git clone --depth 1 https://github.com/GoogleCloudPlatform/cloudml-samples
# Set the working directory to the sample code directory
%cd cloudml-samples/census/tf-keras
Explanation: Part 1. Quickstart for training in AI Platform
This section of the tutorial walks you through submitting a training job to Cloud
AI Platform. This job runs sample code that uses Keras to train a deep neural
network on the United States Census data. It outputs the trained model as a
TensorFlow SavedModel
directory
in your Cloud Storage bucket.
Get training code and dependencies
First, download the training code and change the notebook's working directory:
End of explanation
# `ls` shows the working directory's contents. The `p` flag adds trailing
# slashes to subdirectory names. The `R` flag lists subdirectories recursively.
! ls -pR
Explanation: Notice that the training code is structured as a Python package in the
trainer/ subdirectory:
End of explanation
! pip install -r requirements.txt
Explanation: Run the following cell to install Python dependencies needed to train the model locally. When you run the training job in AI Platform,
dependencies are preinstalled based on the runtime
version
you choose.
End of explanation
# Explicitly tell `gcloud ai-platform local train` to use Python 3
! gcloud config set ml_engine/local_python $(which python3)
# This is similar to `python -m trainer.task --job-dir local-training-output`
# but it better replicates the AI Platform environment, especially for
# distributed training (not applicable here).
! gcloud ai-platform local train \
--package-path trainer \
--module-name trainer.task \
--job-dir local-training-output
Explanation: Train your model locally
Before training on AI Platform, train the job locally to verify the file
structure and packaging is correct.
For a complex or resource-intensive job, you
may want to train locally on a small sample of your dataset to verify your code.
Then you can run the job on AI Platform to train on the whole dataset.
This sample runs a relatively quick job on a small dataset, so the local
training and the AI Platform job run the same code on the same data.
Run the following cell to train a model locally:
End of explanation
JOB_NAME = 'my_first_keras_job'
JOB_DIR = 'gs://' + BUCKET_NAME + '/keras-job-dir'
Explanation: Train your model using AI Platform
Next, submit a training job to AI Platform. This runs the training module
in the cloud and exports the trained model to Cloud Storage.
First, give your training job a name and choose a directory within your Cloud
Storage bucket for saving intermediate and output files:
End of explanation
! gcloud ai-platform jobs submit training $JOB_NAME \
--package-path trainer/ \
--module-name trainer.task \
--region $REGION \
--python-version 3.7 \
--runtime-version 1.15 \
--job-dir $JOB_DIR \
--stream-logs
Explanation: Run the following command to package the trainer/ directory, upload it to the
specified --job-dir, and instruct AI Platform to run the
trainer.task module from that package.
The --stream-logs flag lets you view training logs in the cell below. You can
also see logs and other job details in the GCP Console.
Hyperparameter tuning
You can optionally perform hyperparameter tuning by using the included
hptuning_config.yaml configuration file. This file tells AI Platform to tune the batch size and learning rate for training over multiple trials to maximize accuracy.
In this example, the training code uses a TensorBoard
callback,
which creates TensorFlow Summary
Events
during training. AI Platform uses these events to track the metric you want to
optimize. Learn more about hyperparameter tuning in
AI Platform Training.
End of explanation
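To actually run the tuning pass described above, the same submit command can be pointed at the configuration file. This is a sketch rather than part of the original tutorial; the job name suffix and output subdirectory are arbitrary, and --config is the standard gcloud flag for attaching hptuning_config.yaml.
# Hypothetical follow-up job name for the tuning run
HPT_JOB_NAME = JOB_NAME + '_hpt'
! gcloud ai-platform jobs submit training $HPT_JOB_NAME \
  --config hptuning_config.yaml \
  --package-path trainer/ \
  --module-name trainer.task \
  --region $REGION \
  --python-version 3.7 \
  --runtime-version 1.15 \
  --job-dir $JOB_DIR/hpt \
  --stream-logs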
MODEL_NAME = "my_first_keras_model"
! gcloud ai-platform models create $MODEL_NAME \
--region $REGION
Explanation: Part 2. Quickstart for online predictions in AI Platform
This section shows how to use AI Platform and your trained model from Part 1
to predict a person's income bracket from other Census information about them.
Create model and version resources in AI Platform
To serve online predictions using the model you trained and exported in Part 1,
create a model resource in AI Platform and a version resource
within it. The version resource is what actually uses your trained model to
serve predictions. This structure lets you adjust and retrain your model many times and
organize all the versions together in AI Platform. Learn more about models
and
versions.
While you specify --region $REGION in gcloud commands, you will use a regional endpoint. You can also specify --region global to use the global endpoint. Please note that you must create versions using the same endpoint as the one you use to create the model. Learn more about available regional endpoints.
First, name and create the model resource:
End of explanation
MODEL_VERSION = "v1"
# Get a list of directories in the `keras_export` parent directory
KERAS_EXPORT_DIRS = ! gsutil ls $JOB_DIR/keras_export/
# Pick the directory with the latest timestamp, in case you've trained
# multiple times
SAVED_MODEL_PATH = KERAS_EXPORT_DIRS[-1]
# Create model version based on that SavedModel directory
! gcloud ai-platform versions create $MODEL_VERSION \
--region $REGION \
--model $MODEL_NAME \
--runtime-version 1.15 \
--python-version 3.7 \
--framework tensorflow \
--origin $SAVED_MODEL_PATH
Explanation: Next, create the model version. The training job from Part 1 exported a timestamped
TensorFlow SavedModel
directory
to your Cloud Storage bucket. AI Platform uses this directory to create a
model version. Learn more about SavedModel and
AI Platform.
You may be able to find the path to this directory in your training job's logs.
Look for a line like:
Model exported to: gs://<your-bucket-name>/keras-job-dir/keras_export/1545439782
Execute the following command to identify your SavedModel directory and use it to create a model version resource:
End of explanation
from trainer import util
_, _, eval_x, eval_y = util.load_data()
prediction_input = eval_x.sample(20)
prediction_targets = eval_y[prediction_input.index]
prediction_input
Explanation: Prepare input for prediction
To receive valid and useful predictions, you must preprocess input for prediction in the same way that training data was preprocessed. In a production
system, you may want to create a preprocessing pipeline that can be used identically at training time and prediction time.
For this exercise, use the training package's data-loading code to select a random sample from the evaluation data. This data is in the form that was used to evaluate accuracy after each epoch of training, so it can be used to send test predictions without further preprocessing:
End of explanation
import pandas as pd
_, eval_file_path = util.download(util.DATA_DIR)
raw_eval_data = pd.read_csv(eval_file_path,
names=util._CSV_COLUMNS,
na_values='?')
raw_eval_data.iloc[prediction_input.index]
Explanation: Notice that categorical fields, like occupation, have already been converted to integers (with the same mapping that was used for training). Numerical fields, like age, have been scaled to a
z-score. Some fields have been dropped from the original
data. Compare the prediction input with the raw data for the same examples:
End of explanation
import json
with open('prediction_input.json', 'w') as json_file:
for row in prediction_input.values.tolist():
json.dump(row, json_file)
json_file.write('\n')
! cat prediction_input.json
Explanation: Export the prediction input to a newline-delimited JSON file:
End of explanation
! gcloud ai-platform predict \
--region $REGION \
--model $MODEL_NAME \
--version $MODEL_VERSION \
--json-instances prediction_input.json
Explanation: The gcloud command-line tool accepts newline-delimited JSON for online
prediction, and this particular Keras model expects a flat list of
numbers for each input example.
AI Platform requires a different format when you make online prediction requests to the REST API without using the gcloud tool. The way you structure
your model may also change how you must format data for prediction. Learn more
about formatting data for online
prediction.
Submit the online prediction request
Use gcloud to submit your online prediction request.
End of explanation
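For comparison, a request to the REST API without gcloud wraps the same rows in an "instances" list. The following is only a sketch (it assumes the google-api-python-client package is installed and the model lives on the global endpoint; a regional endpoint needs a matching api_endpoint passed via client_options):
import googleapiclient.discovery

service = googleapiclient.discovery.build('ml', 'v1')
name = 'projects/{}/models/{}/versions/{}'.format(
    PROJECT_ID, MODEL_NAME, MODEL_VERSION)

# The JSON body uses an "instances" list instead of newline-delimited JSON.
response = service.projects().predict(
    name=name,
    body={'instances': prediction_input.values.tolist()},
).execute()
print(response['predictions'])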
prediction_targets
Explanation: Since the model's last layer uses a sigmoid function for its activation, outputs between 0 and 0.5 represent negative predictions ("<=50K") and outputs between 0.5 and 1 represent positive ones (">50K").
Do the predicted income brackets match the actual ones? Run the following cell
to see the true labels.
End of explanation
import os
from six.moves import urllib
import tempfile
import numpy as np
import pandas as pd
import tensorflow as tf
# Examine software versions
print(__import__('sys').version)
print(tf.__version__)
print(tf.keras.__version__)
Explanation: Part 3. Developing the Keras model from scratch
At this point, you have trained a machine learning model on AI Platform, deployed the trained model as a version resource on AI Platform, and received online predictions from the deployment. The next section walks through recreating the Keras code used to train your model. It covers the following parts of developing a machine learning model for use with AI Platform:
Downloading and preprocessing data
Designing and training the model
Visualizing training and exporting the trained model
While this section provides more detailed insight into the tasks completed in previous parts, to learn more about using tf.keras, read TensorFlow's guide to Keras. To learn more about structuring code as a training package for AI Platform, read Packaging a training application and reference the complete training code, which is structured as a Python package.
Import libraries and define constants
First, import Python libraries required for training:
End of explanation
### For downloading data ###
# Storage directory
DATA_DIR = os.path.join(tempfile.gettempdir(), 'census_data')
# Download options.
DATA_URL = 'https://storage.googleapis.com/cloud-samples-data/ai-platform' \
'/census/data'
TRAINING_FILE = 'adult.data.csv'
EVAL_FILE = 'adult.test.csv'
TRAINING_URL = '%s/%s' % (DATA_URL, TRAINING_FILE)
EVAL_URL = '%s/%s' % (DATA_URL, EVAL_FILE)
### For interpreting data ###
# These are the features in the dataset.
# Dataset information: https://archive.ics.uci.edu/ml/datasets/census+income
_CSV_COLUMNS = [
'age', 'workclass', 'fnlwgt', 'education', 'education_num',
'marital_status', 'occupation', 'relationship', 'race', 'gender',
'capital_gain', 'capital_loss', 'hours_per_week', 'native_country',
'income_bracket'
]
_CATEGORICAL_TYPES = {
'workclass': pd.api.types.CategoricalDtype(categories=[
'Federal-gov', 'Local-gov', 'Never-worked', 'Private', 'Self-emp-inc',
'Self-emp-not-inc', 'State-gov', 'Without-pay'
]),
'marital_status': pd.api.types.CategoricalDtype(categories=[
'Divorced', 'Married-AF-spouse', 'Married-civ-spouse',
'Married-spouse-absent', 'Never-married', 'Separated', 'Widowed'
]),
'occupation': pd.api.types.CategoricalDtype([
'Adm-clerical', 'Armed-Forces', 'Craft-repair', 'Exec-managerial',
'Farming-fishing', 'Handlers-cleaners', 'Machine-op-inspct',
'Other-service', 'Priv-house-serv', 'Prof-specialty', 'Protective-serv',
'Sales', 'Tech-support', 'Transport-moving'
]),
'relationship': pd.api.types.CategoricalDtype(categories=[
'Husband', 'Not-in-family', 'Other-relative', 'Own-child', 'Unmarried',
'Wife'
]),
'race': pd.api.types.CategoricalDtype(categories=[
'Amer-Indian-Eskimo', 'Asian-Pac-Islander', 'Black', 'Other', 'White'
]),
'native_country': pd.api.types.CategoricalDtype(categories=[
'Cambodia', 'Canada', 'China', 'Columbia', 'Cuba', 'Dominican-Republic',
'Ecuador', 'El-Salvador', 'England', 'France', 'Germany', 'Greece',
'Guatemala', 'Haiti', 'Holand-Netherlands', 'Honduras', 'Hong', 'Hungary',
'India', 'Iran', 'Ireland', 'Italy', 'Jamaica', 'Japan', 'Laos', 'Mexico',
'Nicaragua', 'Outlying-US(Guam-USVI-etc)', 'Peru', 'Philippines', 'Poland',
'Portugal', 'Puerto-Rico', 'Scotland', 'South', 'Taiwan', 'Thailand',
'Trinadad&Tobago', 'United-States', 'Vietnam', 'Yugoslavia'
]),
'income_bracket': pd.api.types.CategoricalDtype(categories=[
'<=50K', '>50K'
])
}
# This is the label (target) we want to predict.
_LABEL_COLUMN = 'income_bracket'
### Hyperparameters for training ###
# This the training batch size
BATCH_SIZE = 128
# This is the number of epochs (passes over the full training data)
NUM_EPOCHS = 20
# Define learning rate.
LEARNING_RATE = .01
Explanation: Then, define some useful constants:
Information for downloading training and evaluation data
Information required for Pandas to interpret the data and convert categorical fields into numeric features
Hyperparameters for training, such as learning rate and batch size
End of explanation
def _download_and_clean_file(filename, url):
Downloads data from url, and makes changes to match the CSV format.
The CSVs may use spaces after the comma delimiters (non-standard) or include
rows which do not represent well-formed examples. This function strips out
some of these problems.
Args:
filename: filename to save url to
url: URL of resource to download
temp_file, _ = urllib.request.urlretrieve(url)
with tf.io.gfile.GFile(temp_file, 'r') as temp_file_object:
with tf.io.gfile.GFile(filename, 'w') as file_object:
for line in temp_file_object:
line = line.strip()
line = line.replace(', ', ',')
if not line or ',' not in line:
continue
if line[-1] == '.':
line = line[:-1]
line += '\n'
file_object.write(line)
tf.io.gfile.remove(temp_file)
def download(data_dir):
Downloads census data if it is not already present.
Args:
data_dir: directory where we will access/save the census data
tf.io.gfile.makedirs(data_dir)
training_file_path = os.path.join(data_dir, TRAINING_FILE)
if not tf.io.gfile.exists(training_file_path):
_download_and_clean_file(training_file_path, TRAINING_URL)
eval_file_path = os.path.join(data_dir, EVAL_FILE)
if not tf.io.gfile.exists(eval_file_path):
_download_and_clean_file(eval_file_path, EVAL_URL)
return training_file_path, eval_file_path
Explanation: Download and preprocess data
Download the data
Next, define functions to download training and evaluation data. These functions also fix minor irregularities in the data's formatting.
End of explanation
training_file_path, eval_file_path = download(DATA_DIR)
# You should see 2 files: adult.data.csv and adult.test.csv
!ls -l $DATA_DIR
Explanation: Use those functions to download the data for training and verify that you have CSV files for training and evaluation:
End of explanation
# This census data uses the value '?' for fields (column) that are missing data.
# We use na_values to find ? and set it to NaN values.
# https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html
train_df = pd.read_csv(training_file_path, names=_CSV_COLUMNS, na_values='?')
eval_df = pd.read_csv(eval_file_path, names=_CSV_COLUMNS, na_values='?')
# Here's what the data looks like before we preprocess the data.
train_df.head()
Explanation: Next, load these files using Pandas and examine the data:
End of explanation
UNUSED_COLUMNS = ['fnlwgt', 'education', 'gender']
def preprocess(dataframe):
Converts categorical features to numeric. Removes unused columns.
Args:
dataframe: Pandas dataframe with raw data
Returns:
Dataframe with preprocessed data
dataframe = dataframe.drop(columns=UNUSED_COLUMNS)
# Convert integer valued (numeric) columns to floating point
numeric_columns = dataframe.select_dtypes(['int64']).columns
dataframe[numeric_columns] = dataframe[numeric_columns].astype('float32')
# Convert categorical columns to numeric
cat_columns = dataframe.select_dtypes(['object']).columns
dataframe[cat_columns] = dataframe[cat_columns].apply(lambda x: x.astype(
_CATEGORICAL_TYPES[x.name]))
dataframe[cat_columns] = dataframe[cat_columns].apply(lambda x: x.cat.codes)
return dataframe
prepped_train_df = preprocess(train_df)
prepped_eval_df = preprocess(eval_df)
Explanation: Preprocess the data
The first preprocessing step removes certain features from the data and
converts categorical features to numerical values for use with Keras.
Learn more about feature engineering and bias in data.
End of explanation
prepped_train_df.head()
Explanation: Run the following cell to see how preprocessing changed the data. Notice in particular that income_bracket, the label that you're training the model to predict, has changed from <=50K and >50K to 0 and 1:
End of explanation
# Split train and test data with labels.
# The pop() method will extract (copy) and remove the label column from the dataframe
train_x, train_y = prepped_train_df, prepped_train_df.pop(_LABEL_COLUMN)
eval_x, eval_y = prepped_eval_df, prepped_eval_df.pop(_LABEL_COLUMN)
# Reshape label columns for use with tf.data.Dataset
train_y = np.asarray(train_y).astype('float32').reshape((-1, 1))
eval_y = np.asarray(eval_y).astype('float32').reshape((-1, 1))
Explanation: Next, separate the data into features ("x") and labels ("y"), and reshape the label arrays into a format for use with tf.data.Dataset later:
End of explanation
def standardize(dataframe):
Scales numerical columns using their means and standard deviation to get
z-scores: the mean of each numerical column becomes 0, and the standard
deviation becomes 1. This can help the model converge during training.
Args:
dataframe: Pandas dataframe
Returns:
Input dataframe with the numerical columns scaled to z-scores
dtypes = list(zip(dataframe.dtypes.index, map(str, dataframe.dtypes)))
# Normalize numeric columns.
for column, dtype in dtypes:
if dtype == 'float32':
dataframe[column] -= dataframe[column].mean()
dataframe[column] /= dataframe[column].std()
return dataframe
# Join train_x and eval_x to normalize on overall means and standard
# deviations. Then separate them again.
all_x = pd.concat([train_x, eval_x], keys=['train', 'eval'])
all_x = standardize(all_x)
train_x, eval_x = all_x.xs('train'), all_x.xs('eval')
Explanation: Scaling training data so each numerical feature column has a mean of 0 and a standard deviation of 1 can improve your model.
In a production system, you may want to save the means and standard deviations from your training set and use them to perform an identical transformation on test data at prediction time. For convenience in this exercise, temporarily combine the training and evaluation data to scale all of them:
End of explanation
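A minimal sketch of the production-style variant mentioned above. Here raw_train_x is a hypothetical name for the unscaled training features; the idea is to compute the statistics on the training split only, persist them, and reuse them at prediction time:
import json

train_means = raw_train_x.mean(numeric_only=True)
train_stds = raw_train_x.std(numeric_only=True)

# Persist the statistics so prediction-time inputs get the identical transform.
with open('scaling_stats.json', 'w') as f:
    json.dump({'mean': train_means.to_dict(), 'std': train_stds.to_dict()}, f)

def scale_with_saved_stats(dataframe, means, stds):
    # Apply the saved training-set statistics to any new dataframe.
    scaled = dataframe.copy()
    for column, mean in means.items():
        scaled[column] = (scaled[column] - mean) / stds[column]
    return scaled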
# Verify dataset features
# Note how only the numeric fields (not categorical) have been standardized
train_x.head()
Explanation: Finally, examine some of your fully preprocessed training data:
End of explanation
def input_fn(features, labels, shuffle, num_epochs, batch_size):
Generates an input function to be used for model training.
Args:
features: numpy array of features used for training or inference
labels: numpy array of labels for each example
shuffle: boolean for whether to shuffle the data or not (set True for
training, False for evaluation)
num_epochs: number of epochs to provide the data for
batch_size: batch size for training
Returns:
A tf.data.Dataset that can provide data to the Keras model for training or
evaluation
if labels is None:
inputs = features
else:
inputs = (features, labels)
dataset = tf.data.Dataset.from_tensor_slices(inputs)
if shuffle:
dataset = dataset.shuffle(buffer_size=len(features))
# We call repeat after shuffling, rather than before, to prevent separate
# epochs from blending together.
dataset = dataset.repeat(num_epochs)
dataset = dataset.batch(batch_size)
return dataset
Explanation: Design and train the model
Create training and validation datasets
Create an input function to convert features and labels into a
tf.data.Dataset for training or evaluation:
End of explanation
# Pass a numpy array by using DataFrame.values
training_dataset = input_fn(features=train_x.values,
labels=train_y,
shuffle=True,
num_epochs=NUM_EPOCHS,
batch_size=BATCH_SIZE)
num_eval_examples = eval_x.shape[0]
# Pass a numpy array by using DataFrame.values
validation_dataset = input_fn(features=eval_x.values,
labels=eval_y,
shuffle=False,
num_epochs=NUM_EPOCHS,
batch_size=num_eval_examples)
Explanation: Next, create these training and evaluation datasets. Use the NUM_EPOCHS
and BATCH_SIZE hyperparameters defined previously to define how the training
dataset provides examples to the model during training. Set up the validation
dataset to provide all its examples in one batch, for a single validation step
at the end of each training epoch.
End of explanation
def create_keras_model(input_dim, learning_rate):
Creates Keras Model for Binary Classification.
Args:
input_dim: How many features the input has
learning_rate: Learning rate for training
Returns:
The compiled Keras model (still needs to be trained)
Dense = tf.keras.layers.Dense
model = tf.keras.Sequential(
[
Dense(100, activation=tf.nn.relu, kernel_initializer='uniform',
input_shape=(input_dim,)),
Dense(75, activation=tf.nn.relu),
Dense(50, activation=tf.nn.relu),
Dense(25, activation=tf.nn.relu),
Dense(1, activation=tf.nn.sigmoid)
])
# Custom Optimizer:
# https://www.tensorflow.org/api_docs/python/tf/train/RMSPropOptimizer
optimizer = tf.keras.optimizers.RMSprop(
lr=learning_rate)
# Compile Keras model
model.compile(
loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
return model
Explanation: Design a Keras Model
Design your neural network using the Keras Sequential API.
This deep neural network (DNN) has several hidden layers, and the last layer uses a sigmoid activation function to output a value between 0 and 1:
The input layer has 100 units using the ReLU activation function.
The hidden layer has 75 units using the ReLU activation function.
The hidden layer has 50 units using the ReLU activation function.
The hidden layer has 25 units using the ReLU activation function.
The output layer has 1 unit using a sigmoid activation function.
The model is compiled with the binary cross-entropy loss function, which is appropriate for a binary classification problem like this one, and the RMSprop optimizer.
Feel free to change these layers to try to improve the model:
End of explanation
num_train_examples, input_dim = train_x.shape
print('Number of features: {}'.format(input_dim))
print('Number of examples: {}'.format(num_train_examples))
keras_model = create_keras_model(
input_dim=input_dim,
learning_rate=LEARNING_RATE)
# Take a detailed look inside the model
keras_model.summary()
Explanation: Next, create the Keras model object and examine its structure:
End of explanation
# Setup Learning Rate decay.
lr_decay_cb = tf.keras.callbacks.LearningRateScheduler(
lambda epoch: LEARNING_RATE + 0.02 * (0.5 ** (1 + epoch)),
verbose=True)
# Setup TensorBoard callback.
tensorboard_cb = tf.keras.callbacks.TensorBoard(
os.path.join(JOB_DIR, 'keras_tensorboard'),
histogram_freq=1)
Explanation: Train and evaluate the model
Define a learning rate decay to encourage model parameters to make smaller
changes as training goes on:
End of explanation
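As a quick sanity check on the schedule above (plain arithmetic, not part of the original notebook), the learning rate starts at 0.02 and decays toward the base LEARNING_RATE of 0.01:
for epoch in range(5):
    print(epoch, LEARNING_RATE + 0.02 * (0.5 ** (1 + epoch)))
# 0 0.02
# 1 0.015
# 2 0.0125
# 3 0.01125
# 4 0.010625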
history = keras_model.fit(training_dataset,
epochs=NUM_EPOCHS,
steps_per_epoch=int(num_train_examples/BATCH_SIZE),
validation_data=validation_dataset,
validation_steps=1,
callbacks=[lr_decay_cb, tensorboard_cb],
verbose=1)
Explanation: Finally, train the model. Provide the appropriate steps_per_epoch for the
model to train on the entire training dataset (with BATCH_SIZE examples per step) during each epoch. And instruct the model to calculate validation
accuracy with one big validation batch at the end of each epoch.
End of explanation
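If you also want a single final measurement after training, one option (a sketch, not in the original notebook) is to call evaluate on the one-batch validation dataset:
val_loss, val_accuracy = keras_model.evaluate(validation_dataset, steps=1)
print('Validation loss: {:.4f}, accuracy: {:.4f}'.format(val_loss, val_accuracy))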
! pip install matplotlib
from matplotlib import pyplot as plt
%matplotlib inline
Explanation: Visualize training and export the trained model
Visualize training
Import matplotlib to visualize how the model learned over the training period.
End of explanation
# Visualize History for Loss.
plt.title('Keras model loss')
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['training', 'validation'], loc='upper right')
plt.show()
# Visualize History for Accuracy.
plt.title('Keras model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.legend(['training', 'validation'], loc='lower right')
plt.show()
Explanation: Plot the model's loss (binary cross-entropy) and accuracy, as measured at the
end of each training epoch:
End of explanation
# Export the model to a local SavedModel directory
export_path = tf.keras.experimental.export_saved_model(keras_model, 'keras_export')
print("Model exported to: ", export_path)
Explanation: Over time, loss decreases and accuracy increases. But do they converge to a
stable level? Are there big differences between the training and validation
metrics (a sign of overfitting)?
Learn about how to improve your machine learning
model. Then, feel
free to adjust hyperparameters or the model architecture and train again.
Export the model for serving
Next, export the model as a SavedModel directory, which AI Platform requires when you create a model version resource.
Since not all optimizers can be exported to the SavedModel format, you may see warnings during the export process. As long as you successfully export a serving graph, AI Platform can use the SavedModel to serve predictions.
End of explanation
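To double-check the exported serving signature before creating a model version, you can inspect the directory with the saved_model_cli tool that ships with TensorFlow (a sketch; it reuses the export_path returned above):
! saved_model_cli show --dir {export_path} --all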
# Export the model to a SavedModel directory in Cloud Storage
export_path = tf.keras.experimental.export_saved_model(keras_model, JOB_DIR + '/keras_export')
print("Model exported to: ", export_path)
Explanation: You may export a SavedModel directory to your local filesystem or to Cloud
Storage, as long as you have the necessary permissions. In your current
environment, you granted access to Cloud Storage by authenticating your GCP account and setting the GOOGLE_APPLICATION_CREDENTIALS environment variable.
AI Platform training jobs can also export directly to Cloud Storage, because
AI Platform service accounts have access to Cloud Storage buckets in their own
project.
Try exporting directly to Cloud Storage:
End of explanation
# Delete model version resource
! gcloud ai-platform versions delete $MODEL_VERSION --region $REGION --quiet --model $MODEL_NAME
# Delete model resource
! gcloud ai-platform models delete $MODEL_NAME --region $REGION --quiet
# Delete Cloud Storage objects that were created
! gsutil -m rm -r $JOB_DIR
# If the training job is still running, cancel it
! gcloud ai-platform jobs cancel $JOB_NAME --quiet --verbosity critical
Explanation: You can now deploy this model to AI Platform and serve predictions by
following the steps from Part 2.
Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Alternatively, you can clean up individual resources by running the following
commands:
End of explanation |
3,037 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Explore with Sqlite databases
Step1: Get utterances from certain time periods in each experiment or for certain episodes
Step2: Get mutual information between words used in referring expressions and properties of the referent | Python Code:
import sys
sys.path.append("../python/")
import pentoref.IO as IO
import sqlite3 as sqlite
# Create databases if required
if False: # make True if you need to create the databases from the derived data
for corpus_name in ["TAKE", "TAKECV", "PENTOCV"]:
data_dir = "../../../pentoref/{0}_PENTOREF".format(corpus_name)
dfwords, dfutts, dfrefs, dfscenes, dfactions = IO.convert_subcorpus_raw_data_to_dataframes(data_dir)
IO.write_corpus_to_database("{0}.db".format(corpus_name),
corpus_name, dfwords, dfutts, dfrefs, dfscenes, dfactions)
# Connect to database
CORPUS = "PENTOCV"
db = sqlite.connect("{0}.db".format(CORPUS))
cursor = db.cursor()
# get the table column header names
print("utts", [x[1] for x in cursor.execute("PRAGMA table_info(utts)")])
print("words", [x[1] for x in cursor.execute("PRAGMA table_info(words)")])
print("refs", [x[1] for x in cursor.execute("PRAGMA table_info(refs)")])
print("scenes", [x[1] for x in cursor.execute("PRAGMA table_info(scenes)")])
print("actions", [x[1] for x in cursor.execute("PRAGMA table_info(actions)")])
Explanation: Explore with Sqlite databases
End of explanation
for row in db.execute("SELECT gameID, starttime, speaker, utt_clean FROM utts" + \
" WHERE starttime >= 200 AND starttime <= 300" + \
' AND gameID = "r8_1_1_b"' + \
" ORDER BY gameID, starttime"):
print(row)
Explanation: Get utterances from certain time periods in each experiment or for certain episodes
End of explanation
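The same query can be written with sqlite3 parameter placeholders, which avoids assembling SQL strings by hand; a sketch of the equivalent call:
query = ("SELECT gameID, starttime, speaker, utt_clean FROM utts "
         "WHERE starttime >= ? AND starttime <= ? AND gameID = ? "
         "ORDER BY gameID, starttime")
for row in db.execute(query, (200, 300, "r8_1_1_b")):
    print(row)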
from collections import Counter
from pentoref.IOutils import clean_utt
piece_counter = Counter()
word_counter = Counter()
word_piece_counter = Counter()
for row in db.execute("SELECT id, gameID, text, uttID FROM refs"):
#for row in db.execute("SELECT shape, colour, orientation, gridPosition, gameID, pieceID FROM scenes"):
#isTarget = db.execute('SELECT refID FROM refs WHERE gameID ="' + row[4] + '" AND pieceID ="' + row[5] + '"')
#target = False
#for r1 in isTarget:
# target = True
#if not target:
# continue
#print(r)
#shape, colour, orientation, gridPosition, gameID, pieceID = row
#piece = gridPosition #shape + "_" + colour
piece, gameID, text, uttID = row
if CORPUS in ["TAKECV", "TAKE"]:
for f in db.execute('SELECT word from words WHERE gameID ="' + str(gameID) + '"'):
#print(f)
for word in f[0].lower().split():
word_counter[word] += 1
word_piece_counter[piece+"__"+word]+=1
piece_counter[piece] += 1
elif CORPUS == "PENTOCV":
for word in clean_utt(text.lower()).split():
word_counter[word] += 1
word_piece_counter[piece+"__"+word]+=1
piece_counter[piece] += 1
good_pieces = ["X", "Y", "P", "N", "U", "F", "Z", "L", "T", "I", "W", "V", "UNK"]
print("non standard pieces", {k:v for k,v in piece_counter.items() if k not in good_pieces})
piece_counter
word_counter.most_common(20)
word_total = sum(word_piece_counter.values())
piece_total= sum(piece_counter.values())
for piece, p_count in piece_counter.items():
print("piece:", piece, p_count)
p_piece = p_count/piece_total
highest = -1
best_word = ""
rank = {}
for word, w_count in word_counter.items():
if w_count < 3:
continue
p_word = w_count / word_total
p_word_piece = word_piece_counter[piece+"__"+word] / word_total
mi = (p_word_piece/(p_piece * p_word))
rank[word] = mi
if mi > highest:
highest = mi
best_word = word
if True:
top = 5
for k, v in sorted(rank.items(), key=lambda x:x[1], reverse=True):
print(k, v)
top -=1
if top <= 0:
break
print("*" * 30)
db.close()
Explanation: Get mutual information between words used in referring expressions and properties of the referent
End of explanation |
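Note that the mi value computed above is the ratio p(word, piece) / (p(word) p(piece)); standard pointwise mutual information is the log of that ratio, for example:
import math

def pmi(p_joint, p_x, p_y):
    # Pointwise mutual information in bits.
    return math.log2(p_joint / (p_x * p_y))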
3,038 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes.
Running External Command
Step1: Capturing Output
The standard input and output channels for the process started by run() are bound to the parent’s input and output. That means the calling program cannot capture the output of the command. Pass PIPE for the stdout and stderr arguments to capture the output for later processing.
Step2: Suppressing Output
For cases where the output should not be shown or captured, use DEVNULL to suppress an output stream. This example suppresses both the standard output and error streams.
Step3: Execute on shell
Setting the shell argument to a true value causes subprocess to spawn an intermediate shell process, which then runs the command. The default is to run the command directly.
Step4: If you don't run this command through a shell, it raises an error, because the string is passed directly as the program name and $HOME is never expanded.
Step5: Working with Pipes Directly
The functions run(), call(), check_call(), and check_output() are wrappers around the Popen class. Using Popen directly gives more control over how the command is run, and how its input and output streams are processed. For example, by passing different arguments for stdin, stdout, and stderr it is possible to mimic the variations of os.popen().
One-way communication With a process
Step6: Connecting Segments of a Pipe
Step7: Interacting with Another Command
Signaling Between Processes
The process management examples for the os module include a demonstration of signaling between processes using os.fork() and os.kill(). Since each Popen instance provides a pid attribute with the process id of the child process, it is possible to do something similar with subprocess. The next example combines two scripts. This child process sets up a signal handler for the USR1 signal.
Step8: Process Groups / Session
If the process created by Popen spawns sub-processes, those children will not receive any signals sent to the parent. That means when using the shell argument to Popen it will be difficult to cause the command started in the shell to terminate by sending SIGINT or SIGTERM.
Step9: The pid used to send the signal does not match the pid of the child of the shell script waiting for the signal, because in this example there are three separate processes interacting | Python Code:
import subprocess
completed = subprocess.run(['ls', '-l'])
completed
Explanation: The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes.
Running External Command
End of explanation
completed = subprocess.run(['ls', '-l'], stdout=subprocess.PIPE)
completed
Explanation: Capturing Output
The standard input and output channels for the process started by run() are bound to the parent’s input and output. That means the calling program cannot capture the output of the command. Pass PIPE for the stdout and stderr arguments to capture the output for later processing.
End of explanation
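The captured output is a bytes object; a small follow-up (not in the original) showing how to decode it for further processing:
import subprocess

completed = subprocess.run(['ls', '-l'], stdout=subprocess.PIPE)
print(completed.stdout.decode('utf-8'))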
import subprocess
try:
completed = subprocess.run(
'echo to stdout; echo to stderr 1>&2; exit 1',
shell=True,
stdout=subprocess.DEVNULL,
stderr=subprocess.DEVNULL,
)
except subprocess.CalledProcessError as err:
print('ERROR:', err)
else:
print('returncode:', completed.returncode)
print('stdout is {!r}'.format(completed.stdout))
print('stderr is {!r}'.format(completed.stderr))
Explanation: Suppressing Output
For cases where the output should not be shown or captured, use DEVNULL to suppress an output stream. This example suppresses both the standard output and error streams.
End of explanation
import subprocess
completed = subprocess.run('echo $HOME', shell=True, stdout=subprocess.PIPE)
completed
Explanation: Execute on shell
Setting the shell argument to a true value causes subprocess to spawn an intermediate shell process, which then runs the command. The default is to run the command directly.
End of explanation
import subprocess
try:
completed = subprocess.run('echo $HOME', stdout=subprocess.PIPE)
except:
print("Get Error if don't execute on shell")
Explanation: If you don't run this command through a shell, it raises an error, because the string is passed directly as the program name and $HOME is never expanded.
End of explanation
import subprocess
print("read:")
proc = subprocess.Popen(['echo', '"to stdout"'],
stdout = subprocess.PIPE)
stdout_value = proc.communicate()[0].decode("utf-8")
print('stdout', repr(stdout_value))
Explanation: Working with Pipes Directly
The functions run(), call(), check_call(), and check_output() are wrappers around the Popen class. Using Popen directly gives more control over how the command is run, and how its input and output streams are processed. For example, by passing different arguments for stdin, stdout, and stderr it is possible to mimic the variations of os.popen().
One-way communication With a process
End of explanation
import subprocess
cat = subprocess.Popen(
['cat', 'index.rst'],
stdout=subprocess.PIPE,
)
grep = subprocess.Popen(
['grep', '.. literalinclude::'],
stdin=cat.stdout,
stdout=subprocess.PIPE,
)
cut = subprocess.Popen(
['cut', '-f', '3', '-d:'],
stdin=grep.stdout,
stdout=subprocess.PIPE,
)
end_of_pipe = cut.stdout
print('Included files:')
for line in end_of_pipe:
print(line.decode('utf-8').strip())
Explanation: Connecting Segments of a Pipe
End of explanation
# %load signal_child.py
import os
import signal
import time
import sys
pid = os.getpid()
received = False
def signal_usr1(signum, frame):
"Callback invoked when a signal is received"
global received
received = True
print('CHILD {:>6}: Received USR1'.format(pid))
sys.stdout.flush()
print('CHILD {:>6}: Setting up signal handler'.format(pid))
sys.stdout.flush()
signal.signal(signal.SIGUSR1, signal_usr1)
print('CHILD {:>6}: Pausing to wait for signal'.format(pid))
sys.stdout.flush()
time.sleep(3)
if not received:
print('CHILD {:>6}: Never received signal'.format(pid))
# %load signal_parent.py
import os
import signal
import subprocess
import time
import sys
proc = subprocess.Popen(['python3', 'signal_child.py'])
print('PARENT : Pausing before sending signal...')
sys.stdout.flush()
time.sleep(1)
print('PARENT : Signaling child')
sys.stdout.flush()
os.kill(proc.pid, signal.SIGUSR1)
!python signal_parent.py
Explanation: Interacting with Another Command
Signaling Between Processes
The process management examples for the os module include a demonstration of signaling between processes using os.fork() and os.kill(). Since each Popen instance provides a pid attribute with the process id of the child process, it is possible to do something similar with subprocess. The next example combines two scripts. This child process sets up a signal handler for the USR1 signal.
End of explanation
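The original example for "Interacting with Another Command" is not reproduced above; a minimal sketch of two-way communication, using cat as a stand-in child process, might look like this:
import subprocess

proc = subprocess.Popen(
    ['cat'],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
)
# communicate() writes the input, closes stdin, and reads until EOF.
stdout_value, _ = proc.communicate(b'round trip through the child\n')
print('pass through:', repr(stdout_value.decode('utf-8')))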
import os
import signal
import subprocess
import tempfile
import time
import sys
script = '''#!/bin/sh
echo "Shell script in process $$"
set -x
python3 signal_child.py
'''
script_file = tempfile.NamedTemporaryFile('wt')
script_file.write(script)
script_file.flush()
proc = subprocess.Popen(['sh', script_file.name])
print('PARENT : Pausing before signaling {}...'.format(
proc.pid))
sys.stdout.flush()
time.sleep(1)
print('PARENT : Signaling child {}'.format(proc.pid))
sys.stdout.flush()
os.kill(proc.pid, signal.SIGUSR1)
time.sleep(3)
Explanation: Process Groups / Session
If the process created by Popen spawns sub-processes, those children will not receive any signals sent to the parent. That means when using the shell argument to Popen it will be difficult to cause the command started in the shell to terminate by sending SIGINT or SIGTERM.
End of explanation
import os
import signal
import subprocess
import tempfile
import time
import sys
def show_setting_prgrp():
print('Calling os.setpgrp() from {}'.format(os.getpid()))
os.setpgrp()
print('Process group is now {}'.format(os.getpgrp()))
sys.stdout.flush()
script = '''#!/bin/sh
echo "Shell script in process $$"
set -x
python3 signal_child.py
'''
script_file = tempfile.NamedTemporaryFile('wt')
script_file.write(script)
script_file.flush()
proc = subprocess.Popen(
['sh', script_file.name],
preexec_fn=show_setting_prgrp,
)
print('PARENT : Pausing before signaling {}...'.format(
proc.pid))
sys.stdout.flush()
time.sleep(1)
print('PARENT : Signaling process group {}'.format(
proc.pid))
sys.stdout.flush()
os.killpg(proc.pid, signal.SIGUSR1)
time.sleep(3)
Explanation: The pid used to send the signal does not match the pid of the child of the shell script waiting for the signal, because in this example there are three separate processes interacting:
The program subprocess_signal_parent_shell.py
The shell process running the script created by the main python program
The program signal_child.py
To send signals to descendants without knowing their process id, use a process group to associate the children so they can be signaled together. The process group is created with os.setpgrp(), which sets process group id to the process id of the current process. All child processes inherit their process group from their parent, and since it should only be set in the shell created by Popen and its descendants, os.setpgrp() should not be called in the same process where the Popen is created. Instead, the function is passed to Popen as the preexec_fn argument so it is run after the fork() inside the new process, before it uses exec() to run the shell. To signal the entire process group, use os.killpg() with the pid value from the Popen instance.
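As a hedged aside (not part of the original text), Python 3 also offers start_new_session=True as an alternative to a custom preexec_fn; a minimal sketch reusing the script_file created above:
import os
import signal
import subprocess
import time

proc = subprocess.Popen(['sh', script_file.name], start_new_session=True)  # runs setsid() in the child
time.sleep(1)
os.killpg(os.getpgid(proc.pid), signal.SIGUSR1)  # signal the whole new process group
time.sleep(3)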
End of explanation |
3,039 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LINEAR REGRESSION
is the simplest machine learning model
is used for finding linear relationship between target and one or more predictors
there are two types of linear regression
Step1: Evaluation of your model
Step2: To evaluate the performance of the model, we can compute the error between the real house values (y_test_1) and the predicted values we got from our model (predictions_1).
One such metric is called the residual sum of squares (RSS)
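For reference, the residual sum of squares is $RSS = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2$, i.e. the sum of squared differences between the observed and predicted values.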
Step3: This number doesn't tell us much - is 7027 good? Is it bad?
Unfortunately, there is no right answer - it depends on the data. Sometimes an RSS of 7000 indicates a very bad model, and sometimes 7000 is as good as it gets.
That's why we use RSS when comparing models - the model with lowest RSS is the best.
Another metric we can use to evaluate our model is the coefficient of determination.
It's denoted as $R^{2}$ and it is the proportion of the variance in the dependent variable that is predictable from the independent variable(s).
To calculate it, we use .score function in Python.
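In terms of RSS, $R^{2} = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}$, so a perfect fit gives $R^{2} = 1$, while a model no better than always predicting the mean gives $R^{2} \approx 0$.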
Step4: This means that only 51% of variability is explained by our model.
In general, $R^{2}$ is a number between 0 and 1 - the closer it is to 1, the better the model is.
Since we got only 0.51, we can conclude that this is not a very good model.
But we can try to build a model with the second variable - RM - and check if we can get a better result.
More linear regression models
Step5: Since RSS is lower for the second model (and the lower the RSS, the better the model) and $R^{2}$ is higher for the second model (and we want $R^{2}$ as close to 1 as possible), both measures tell us that the second model is better.
However, the difference is not big - our second model performs slightly better, but we still can't say it fits our data well.
Next thing we can try is to build a model with all features we have available and see if using multiple features improves performace of the model. | Python Code:
import pandas as pd
import numpy as np
import json
import graphviz
import matplotlib.pyplot as plt
from sklearn import linear_model
pd.set_option("display.max_rows",6)
%matplotlib inline
df_data = pd.read_csv('varsom_ml_preproc.csv', index_col=0)
X = df_data.filter(['mountain_weather_wind_speed_num', 'mountain_weather_precip_most_exposed'])#, 'ZN', 'INDUS', 'CHAS', 'RM', 'AGE', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT'])
y = df_data['danger_level']
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 222, test_size = 0.3) # split the data
lm = linear_model.LinearRegression()
model_lr = lm.fit(X_train, y_train) # train the model
predictions_lr = model_lr.predict(X_test) # predict values for test dataset
print(f'{model_lr.intercept_:.2f}, {model_lr.coef_}')
plt.scatter(y, X['mountain_weather_precip_most_exposed'], c=X['mountain_weather_wind_speed_num'])
print("Our third model: \n \ny = {0:.2f}".format(model_lr.intercept_) + " {0:.2f}".format(model_lr.coef_[0]) + " * CRIM"
+ " + {0:.2f}".format(model_lr.coef_[1]) + " * ZN" + " + {0:.2f}".format(model_lr.coef_[2]) + " * INDUS"
+ " + {0:.2f}".format(model_lr.coef_[3]) + " + * CHAS" + " {0:.2f}".format(model_lr.coef_[4]) + " * RM"
+ " + {0:.2f}".format(model_lr.coef_[5]) + " * AGE" + " + {0:.2f}".format(model_lr.coef_[6]) + " * RAD"
+ "\n {0:.2f}".format(model_lr.coef_[7]) + " * TAX" + " {0:.2f}".format(model_lr.coef_[8]) + " * PTRATIO"
+ " + {0:.2f}".format(model_lr.coef_[9]) + " * B" + " {0:.2f}".format(model_lr.coef_[10]) + " * LSTAT")
from sklearn.model_selection import train_test_split
X_train_1, X_test_1, y_train_1, y_test_1 = train_test_split(df_data, random_state = 222, test_size = 0.3)
# we are importing machine learning model we'll use
lm1 = linear_model.LinearRegression()
model_1 = lm1.fit(X_train_1, y_train_1) # we have just created a model! :)
# as we said before, the model in this simple case is a line that has two parameters
# so we ask: what are our estimated parameters? (alpha and beta?)
print("Our first model: y = {0:.2f}".format(model_1.intercept_) + " {0:.2f}".format(model_1.coef_[0]) + " * x")
print("Intercept: {0:.2f}".format(model_1.intercept_))
print("Extra price per extra unit of LSTAT: {0:.2f}".format(model_1.coef_[0]))
# now we'd like is to predict house price for test data (data that model hasn't seen yet)
predictions_1 = model_1.predict(X_test_1)
predictions_1[0:5]
# let's visualize our regression line
plt.plot(X_test_1, y_test_1, 'o')
plt.plot(X_test_1, predictions_1, color = 'red')
plt.xlabel('% of lower status of the population')
plt.ylabel('Median home value in $1000s')
Explanation: LINEAR REGRESSION
is the simplest machine learning model
is used for finding linear relationship between target and one or more predictors
there are two types of linear regression:
Simple (one feature)
Multiple (two or more features)
The main idea of linear regression is to obtain a line that best fits the data.
That means finding the one line for which the total prediction error (over all data points) is as small as possible. (Error is the distance between the actual values and the values predicted using the regression line.)
First linear regression model
First we'll create a simple linear regression model - we saw that LSTAT and RM are two variables that are highly correlated with the target. We will see how good predictions we can get with just one feature - and how to decide which of these features is better for estimating the median house price.
Step one is to divide our dataset into training and testing parts - it is important to test our model against data that has never been used for training, because that tells us how the model might perform against data it has not yet seen and is meant to be representative of how the model might perform in the real world.
That's why we will use only 70% of our data to train the model and then we'll use the rest of data (30%) to evaluate our model.
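A minimal sketch of the intended call (assuming boston_data is the DataFrame with the MEDV target used later in this notebook):
from sklearn.model_selection import train_test_split

# hold out 30% of the rows for evaluation, keep 70% for training
X_train_1, X_test_1, y_train_1, y_test_1 = train_test_split(
    boston_data[['LSTAT']], boston_data.MEDV, random_state=222, test_size=0.3)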
End of explanation
# let's try to visualize the estimated and real house values for all data points in test dataset
fig, ax = plt.subplots(figsize=(15, 5))
plt.subplot(1, 2, 1)
plt.plot(X_test_1,predictions_1, 'o')
plt.xlabel('% of lower status of the population')
plt.ylabel('Estimated home value in $1000s')
plt.subplot(1, 2, 2)
plt.plot(X_test_1,y_test_1, 'o')
plt.xlabel('% of lower status of the population')
plt.ylabel('Median home value in $1000s')
plt.tight_layout()
plt.show()
Explanation: Evaluation of your model
End of explanation
# first we define our RSS function
def RSS(y, p):
return sum((y - p)**2)
# then we calculate RSS:
RSS_model_1 = RSS(y_test_1, predictions_1)
RSS_model_1
Explanation: To evaluate the performance of the model, we can compute the error between the real house values (y_test_1) and the predicted values we got from our model (predictions_1).
One such metric is called the residual sum of squares (RSS):
End of explanation
lm1.score(X_test_1,y_test_1)
Explanation: This number doesn't tell us much - is 7027 good? Is it bad?
Unfortunately, there is no right answer - it depends on the data. Sometimes an RSS of 7000 indicates a very bad model, and sometimes 7000 is as good as it gets.
That's why we use RSS when comparing models - the model with lowest RSS is the best.
Another metric we can use to evaluate our model is the coefficient of determination.
It's denoted as $R^{2}$ and it is the proportion of the variance in the dependent variable that is predictable from the independent variable(s).
To calculate it, we use .score function in Python.
End of explanation
# we just repeat everything as before
X_train_2, X_test_2, y_train_2, y_test_2 = train_test_split(boston_data[['RM']], boston_data.MEDV,
random_state = 222, test_size = 0.3) # split the data
lm = linear_model.LinearRegression()
model_2 = lm.fit(X_train_2, y_train_2) # train the model
predictions_2 = model_2.predict(X_test_2) # predict values for test dataset
print("Our second model: y = {0:.2f}".format(model_2.intercept_) + " + {0:.2f}".format(model_2.coef_[0]) + " * x")
# let's visualize our regression line
plt.plot(X_test_2, y_test_2, 'o')
plt.plot(X_test_2, predictions_2, color = 'red')
plt.xlabel('Average number of rooms')
plt.ylabel('Median home value in $1000s')
# let's calculate RSS and R^2
print (RSS(y_test_2, predictions_2))
print (lm.score(X_test_2, y_test_2))
# now we can compare our models
print("RSS for first model is {0:.2f}".format(RSS(y_test_1, predictions_1))
+ ", and RSS for second model is {0:.2f}".format(RSS(y_test_2, predictions_2)) + '\n' + '\n'
+ "R^2 for first model is {0:.2f}".format(lm1.score(X_test_1, y_test_1))
+ ", and R^2 for second model is {0:.2f}".format(lm.score(X_test_2, y_test_2)))
Explanation: This means that only 51% of variability is explained by our model.
In general, $R^{2}$ is a number between 0 and 1 - the closer it is to 1, the better the model is.
Since we got only 0.51, we can conclude that this is not a very good model.
But we can try to build a model with the second variable - RM - and check if we can get a better result.
More linear regression models
End of explanation
X = boston_data[['CRIM', 'ZN', 'INDUS', 'CHAS', 'RM', 'AGE', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT']]
y = boston_data["MEDV"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 222, test_size = 0.3) # split the data
lm = linear_model.LinearRegression()
model_lr = lm.fit(X_train, y_train) # train the model
predictions_lr = model_lr.predict(X_test) # predict values for test dataset
print("Our third model: \n \ny = {0:.2f}".format(model_lr.intercept_) + " {0:.2f}".format(model_lr.coef_[0]) + " * CRIM"
+ " + {0:.2f}".format(model_lr.coef_[1]) + " * ZN" + " + {0:.2f}".format(model_lr.coef_[2]) + " * INDUS"
+ " + {0:.2f}".format(model_lr.coef_[3]) + " + * CHAS" + " {0:.2f}".format(model_lr.coef_[4]) + " * RM"
+ " + {0:.2f}".format(model_lr.coef_[5]) + " * AGE" + " + {0:.2f}".format(model_lr.coef_[6]) + " * RAD"
+ "\n {0:.2f}".format(model_lr.coef_[7]) + " * TAX" + " {0:.2f}".format(model_lr.coef_[8]) + " * PTRATIO"
+ " + {0:.2f}".format(model_lr.coef_[9]) + " * B" + " {0:.2f}".format(model_lr.coef_[10]) + " * LSTAT")
# let's evaluate the model
print("RSS for the third model is {0:.2f}".format(RSS(y_test, predictions_lr)) + '\n' + '\n'
+ "R^2 for the third model is {0:.2f}".format(lm.score(X_test, y_test)) )
Explanation: Since RSS is lower for the second model (and the lower the RSS, the better the model) and $R^{2}$ is higher for the second model (and we want $R^{2}$ as close to 1 as possible), both measures tell us that the second model is better.
However, the difference is not big - our second model performs slightly better, but we still can't say it fits our data well.
The next thing we can try is to build a model with all the features we have available and see if using multiple features improves the performance of the model.
End of explanation |
3,040 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matrix generation
Init symbols for sympy
Step1: Lame params
Step2: Metric tensor
${\displaystyle \hat{G}=\sum_{i,j} g^{ij}\vec{R}_i\vec{R}_j}$
Step3: ${\displaystyle \hat{G}=\sum_{i,j} g_{ij}\vec{R}^i\vec{R}^j}$
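For reference (a standard result, not from the original notebook): with orthogonal coordinates and Lame coefficients $H_1, H_2, H_3$ the metric is diagonal, $g_{ij} = H_i^{2}\,\delta_{ij}$ and $g^{ij} = \delta_{ij}/H_i^{2}$, which is what the getMetricTensorUpLame/getMetricTensorDownLame helpers below are assumed to return.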
Step4: Christoffel symbols
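For reference, the Christoffel symbols of the second kind computed below follow the standard definition $\Gamma^{k}_{ij} = \frac{1}{2} \sum_{l} g^{kl} \left( \frac{\partial g_{jl}}{\partial \alpha_i} + \frac{\partial g_{il}}{\partial \alpha_j} - \frac{\partial g_{ij}}{\partial \alpha_l} \right)$.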
Step5: Gradient of vector
$
\left(
\begin{array}{c}
\nabla_1 u_1 \ \nabla_2 u_1 \ \nabla_3 u_1 \
\nabla_1 u_2 \ \nabla_2 u_2 \ \nabla_3 u_2 \
\nabla_1 u_3 \ \nabla_2 u_3 \ \nabla_3 u_3 \
\end{array}
\right)
=
B \cdot
\left(
\begin{array}{c}
u_1 \
\frac { \partial u_1 } { \partial \alpha_1} \
\frac { \partial u_1 } { \partial \alpha_2} \
\frac { \partial u_1 } { \partial \alpha_3} \
u_2 \
\frac { \partial u_2 } { \partial \alpha_1} \
\frac { \partial u_2 } { \partial \alpha_2} \
\frac { \partial u_2 } { \partial \alpha_3} \
u_3 \
\frac { \partial u_3 } { \partial \alpha_1} \
\frac { \partial u_3 } { \partial \alpha_2} \
\frac { \partial u_3 } { \partial \alpha_3} \
\end{array}
\right)
= B \cdot D \cdot
\left(
\begin{array}{c}
u^1 \
\frac { \partial u^1 } { \partial \alpha_1} \
\frac { \partial u^1 } { \partial \alpha_2} \
\frac { \partial u^1 } { \partial \alpha_3} \
u^2 \
\frac { \partial u^2 } { \partial \alpha_1} \
\frac { \partial u^2 } { \partial \alpha_2} \
\frac { \partial u^2 } { \partial \alpha_3} \
u^3 \
\frac { \partial u^3 } { \partial \alpha_1} \
\frac { \partial u^3 } { \partial \alpha_2} \
\frac { \partial u^3 } { \partial \alpha_3} \
\end{array}
\right)
$
Step6: Physical coordinates
$u_i=u_{[i]} H_i$
Step7: Strain tensor
$
\left(
\begin{array}{c}
\varepsilon_{11} \
\varepsilon_{22} \
\varepsilon_{33} \
2\varepsilon_{12} \
2\varepsilon_{13} \
2\varepsilon_{23} \
\end{array}
\right)
=
\left(E + E_{NL} \left( \nabla \vec{u} \right) \right) \cdot
\left(
\begin{array}{c}
\nabla_1 u_1 \ \nabla_2 u_1 \ \nabla_3 u_1 \
\nabla_1 u_2 \ \nabla_2 u_2 \ \nabla_3 u_2 \
\nabla_1 u_3 \ \nabla_2 u_3 \ \nabla_3 u_3 \
\end{array}
\right)$
Step8: Virtual work
Step9: Tymoshenko theory
$u_1 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u\left( \alpha_1 \right)+\alpha_3\gamma \left( \alpha_1 \right) $
$u_2 \left( \alpha_1, \alpha_2, \alpha_3 \right)=0 $
$u_3 \left( \alpha_1, \alpha_2, \alpha_3 \right)=w\left( \alpha_1 \right) $
$ \left(
\begin{array}{c}
u_1 \
\frac { \partial u_1 } { \partial \alpha_1} \
\frac { \partial u_1 } { \partial \alpha_2} \
\frac { \partial u_1 } { \partial \alpha_3} \
u_2 \
\frac { \partial u_2 } { \partial \alpha_1} \
\frac { \partial u_2 } { \partial \alpha_2} \
\frac { \partial u_2 } { \partial \alpha_3} \
u_3 \
\frac { \partial u_3 } { \partial \alpha_1} \
\frac { \partial u_3 } { \partial \alpha_2} \
\frac { \partial u_3 } { \partial \alpha_3} \
\end{array}
\right) = T \cdot
\left(
\begin{array}{c}
u \
\frac { \partial u } { \partial \alpha_1} \
\gamma \
\frac { \partial \gamma } { \partial \alpha_1} \
w \
\frac { \partial w } { \partial \alpha_1} \
\end{array}
\right) $
Step10: Square theory
$u^1 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u_{10}\left( \alpha_1 \right)p_0\left( \alpha_3 \right)+u_{11}\left( \alpha_1 \right)p_1\left( \alpha_3 \right)+u_{12}\left( \alpha_1 \right)p_2\left( \alpha_3 \right) $
$u^2 \left( \alpha_1, \alpha_2, \alpha_3 \right)=0 $
$u^3 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u_{30}\left( \alpha_1 \right)p_0\left( \alpha_3 \right)+u_{31}\left( \alpha_1 \right)p_1\left( \alpha_3 \right)+u_{32}\left( \alpha_1 \right)p_2\left( \alpha_3 \right) $
$ \left(
\begin{array}{c}
u^1 \
\frac { \partial u^1 } { \partial \alpha_1} \
\frac { \partial u^1 } { \partial \alpha_2} \
\frac { \partial u^1 } { \partial \alpha_3} \
u^2 \
\frac { \partial u^2 } { \partial \alpha_1} \
\frac { \partial u^2 } { \partial \alpha_2} \
\frac { \partial u^2 } { \partial \alpha_3} \
u^3 \
\frac { \partial u^3 } { \partial \alpha_1} \
\frac { \partial u^3 } { \partial \alpha_2} \
\frac { \partial u^3 } { \partial \alpha_3} \
\end{array}
\right) = L \cdot
\left(
\begin{array}{c}
u_{10} \
\frac { \partial u_{10} } { \partial \alpha_1} \
u_{11} \
\frac { \partial u_{11} } { \partial \alpha_1} \
u_{12} \
\frac { \partial u_{12} } { \partial \alpha_1} \
u_{30} \
\frac { \partial u_{30} } { \partial \alpha_1} \
u_{31} \
\frac { \partial u_{31} } { \partial \alpha_1} \
u_{32} \
\frac { \partial u_{32} } { \partial \alpha_1} \
\end{array}
\right) $
Step11: Mass matrix | Python Code:
from sympy import *
from geom_util import *
from sympy.vector import CoordSys3D
N = CoordSys3D('N')
alpha1, alpha2, alpha3 = symbols("alpha_1 alpha_2 alpha_3", real = True, positive=True)
init_printing()
%matplotlib inline
%reload_ext autoreload
%autoreload 2
%aimport geom_util
Explanation: Matrix generation
Init symbols for sympy
End of explanation
H1=symbols('H1')
H2=S(1)
H3=S(1)
H=[H1, H2, H3]
DIM=3
dH = zeros(DIM,DIM)
for i in range(DIM):
for j in range(DIM):
if (i == 0 and j != 1):
dH[i,j]=Symbol('H_{{{},{}}}'.format(i+1,j+1))
dH
Explanation: Lame params
End of explanation
G_up = getMetricTensorUpLame(H1, H2, H3)
Explanation: Metric tensor
${\displaystyle \hat{G}=\sum_{i,j} g^{ij}\vec{R}_i\vec{R}_j}$
End of explanation
G_down = getMetricTensorDownLame(H1, H2, H3)
Explanation: ${\displaystyle \hat{G}=\sum_{i,j} g_{ij}\vec{R}^i\vec{R}^j}$
End of explanation
DIM=3
G_down_diff = MutableDenseNDimArray.zeros(DIM, DIM, DIM)
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
G_down_diff[i,i,k]=2*H[i]*dH[i,k]
GK = getChristoffelSymbols2(G_up, G_down_diff, (alpha1, alpha2, alpha3))
GK
Explanation: Christoffel symbols
End of explanation
def row_index_to_i_j_grad(i_row):
return i_row // 3, i_row % 3
B = zeros(9, 12)
B[0,1] = S(1)
B[1,2] = S(1)
B[2,3] = S(1)
B[3,5] = S(1)
B[4,6] = S(1)
B[5,7] = S(1)
B[6,9] = S(1)
B[7,10] = S(1)
B[8,11] = S(1)
for row_index in range(9):
i,j=row_index_to_i_j_grad(row_index)
B[row_index, 0] = -GK[i,j,0]
B[row_index, 4] = -GK[i,j,1]
B[row_index, 8] = -GK[i,j,2]
B
Explanation: Gradient of vector
$
\left(
\begin{array}{c}
\nabla_1 u_1 \ \nabla_2 u_1 \ \nabla_3 u_1 \
\nabla_1 u_2 \ \nabla_2 u_2 \ \nabla_3 u_2 \
\nabla_1 u_3 \ \nabla_2 u_3 \ \nabla_3 u_3 \
\end{array}
\right)
=
B \cdot
\left(
\begin{array}{c}
u_1 \
\frac { \partial u_1 } { \partial \alpha_1} \
\frac { \partial u_1 } { \partial \alpha_2} \
\frac { \partial u_1 } { \partial \alpha_3} \
u_2 \
\frac { \partial u_2 } { \partial \alpha_1} \
\frac { \partial u_2 } { \partial \alpha_2} \
\frac { \partial u_2 } { \partial \alpha_3} \
u_3 \
\frac { \partial u_3 } { \partial \alpha_1} \
\frac { \partial u_3 } { \partial \alpha_2} \
\frac { \partial u_3 } { \partial \alpha_3} \
\end{array}
\right)
= B \cdot D \cdot
\left(
\begin{array}{c}
u^1 \
\frac { \partial u^1 } { \partial \alpha_1} \
\frac { \partial u^1 } { \partial \alpha_2} \
\frac { \partial u^1 } { \partial \alpha_3} \
u^2 \
\frac { \partial u^2 } { \partial \alpha_1} \
\frac { \partial u^2 } { \partial \alpha_2} \
\frac { \partial u^2 } { \partial \alpha_3} \
u^3 \
\frac { \partial u^3 } { \partial \alpha_1} \
\frac { \partial u^3 } { \partial \alpha_2} \
\frac { \partial u^3 } { \partial \alpha_3} \
\end{array}
\right)
$
End of explanation
P=zeros(12,12)
P[0,0]=H[0]
P[1,0]=dH[0,0]
P[1,1]=H[0]
P[2,0]=dH[0,1]
P[2,2]=H[0]
P[3,0]=dH[0,2]
P[3,3]=H[0]
P[4,4]=H[1]
P[5,4]=dH[1,0]
P[5,5]=H[1]
P[6,4]=dH[1,1]
P[6,6]=H[1]
P[7,4]=dH[1,2]
P[7,7]=H[1]
P[8,8]=H[2]
P[9,8]=dH[2,0]
P[9,9]=H[2]
P[10,8]=dH[2,1]
P[10,10]=H[2]
P[11,8]=dH[2,2]
P[11,11]=H[2]
P=simplify(P)
P
B_P = zeros(9,9)
for i in range(3):
for j in range(3):
row_index = i*3+j
B_P[row_index, row_index] = 1/(H[i]*H[j])
Grad_U_P = simplify(B_P*B*P)
Grad_U_P
Explanation: Physical coordinates
$u_i=u_{[i]} H_i$
End of explanation
E=zeros(6,9)
E[0,0]=1
E[1,4]=1
E[2,8]=1
E[3,1]=1
E[3,3]=1
E[4,2]=1
E[4,6]=1
E[5,5]=1
E[5,7]=1
E
StrainL=simplify(E*Grad_U_P)
StrainL
def E_NonLinear(grad_u):
N = 3
du = zeros(N, N)
# print("===Deformations===")
for i in range(N):
for j in range(N):
index = i*N+j
du[j,i] = grad_u[index]
# print("========")
I = eye(3)
a_values = S(1)/S(2) * du * G_up
E_NL = zeros(6,9)
E_NL[0,0] = a_values[0,0]
E_NL[0,3] = a_values[0,1]
E_NL[0,6] = a_values[0,2]
E_NL[1,1] = a_values[1,0]
E_NL[1,4] = a_values[1,1]
E_NL[1,7] = a_values[1,2]
E_NL[2,2] = a_values[2,0]
E_NL[2,5] = a_values[2,1]
E_NL[2,8] = a_values[2,2]
E_NL[3,1] = 2*a_values[0,0]
E_NL[3,4] = 2*a_values[0,1]
E_NL[3,7] = 2*a_values[0,2]
E_NL[4,0] = 2*a_values[2,0]
E_NL[4,3] = 2*a_values[2,1]
E_NL[4,6] = 2*a_values[2,2]
E_NL[5,2] = 2*a_values[1,0]
E_NL[5,5] = 2*a_values[1,1]
E_NL[5,8] = 2*a_values[1,2]
return E_NL
%aimport geom_util
u=getUHat3DPlane(alpha1, alpha2, alpha3)
# u=getUHatU3Main(alpha1, alpha2, alpha3)
gradu=B*u
E_NL = E_NonLinear(gradu)*B
E_NL
%aimport geom_util
u=getUHatU3MainPlane(alpha1, alpha2, alpha3)
gradup=Grad_U_P*u
# e=E*gradup
# e
E_NLp = E_NonLinear(gradup)*gradup
simplify(E_NLp)
w
Explanation: Strain tensor
$
\left(
\begin{array}{c}
\varepsilon_{11} \
\varepsilon_{22} \
\varepsilon_{33} \
2\varepsilon_{12} \
2\varepsilon_{13} \
2\varepsilon_{23} \
\end{array}
\right)
=
\left(E + E_{NL} \left( \nabla \vec{u} \right) \right) \cdot
\left(
\begin{array}{c}
\nabla_1 u_1 \ \nabla_2 u_1 \ \nabla_3 u_1 \
\nabla_1 u_2 \ \nabla_2 u_2 \ \nabla_3 u_2 \
\nabla_1 u_3 \ \nabla_2 u_3 \ \nabla_3 u_3 \
\end{array}
\right)$
End of explanation
%aimport geom_util
C_tensor = getIsotropicStiffnessTensor()
C = convertStiffnessTensorToMatrix(C_tensor)
C
StrainL.T*C*StrainL*H1
Explanation: Virtual work
End of explanation
T=zeros(12,6)
T[0,0]=1
T[0,2]=alpha3
T[1,1]=1
T[1,3]=alpha3
T[3,2]=1
T[8,4]=1
T[9,5]=1
T
D_p_T = StrainL*T
simplify(D_p_T)
u = Function("u")
t = Function("theta")
w = Function("w")
u1=u(alpha1)+alpha3*t(alpha1)
u3=w(alpha1)
gu = zeros(12,1)
gu[0] = u1
gu[1] = u1.diff(alpha1)
gu[3] = u1.diff(alpha3)
gu[8] = u3
gu[9] = u3.diff(alpha1)
gradup=Grad_U_P*gu
# E_NLp = E_NonLinear(gradup)*gradup
# simplify(E_NLp)
# gradup=Grad_U_P*gu
# o20=(K*u(alpha1)-w(alpha1).diff(alpha1)+t(alpha1))/2
# o21=K*t(alpha1)
# O=1/2*o20*o20+alpha3*o20*o21-alpha3*K/2*o20*o20
# O=expand(O)
# O=collect(O,alpha3)
# simplify(O)
StrainNL = E_NonLinear(gradup)*gradup
StrainL*gu+simplify(StrainNL)
Explanation: Tymoshenko theory
$u_1 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u\left( \alpha_1 \right)+\alpha_3\gamma \left( \alpha_1 \right) $
$u_2 \left( \alpha_1, \alpha_2, \alpha_3 \right)=0 $
$u_3 \left( \alpha_1, \alpha_2, \alpha_3 \right)=w\left( \alpha_1 \right) $
$ \left(
\begin{array}{c}
u_1 \
\frac { \partial u_1 } { \partial \alpha_1} \
\frac { \partial u_1 } { \partial \alpha_2} \
\frac { \partial u_1 } { \partial \alpha_3} \
u_2 \
\frac { \partial u_2 } { \partial \alpha_1} \
\frac { \partial u_2 } { \partial \alpha_2} \
\frac { \partial u_2 } { \partial \alpha_3} \
u_3 \
\frac { \partial u_3 } { \partial \alpha_1} \
\frac { \partial u_3 } { \partial \alpha_2} \
\frac { \partial u_3 } { \partial \alpha_3} \
\end{array}
\right) = T \cdot
\left(
\begin{array}{c}
u \
\frac { \partial u } { \partial \alpha_1} \
\gamma \
\frac { \partial \gamma } { \partial \alpha_1} \
w \
\frac { \partial w } { \partial \alpha_1} \
\end{array}
\right) $
End of explanation
L=zeros(12,12)
h=Symbol('h')
p0=1/2-alpha3/h
p1=1/2+alpha3/h
p2=1-(2*alpha3/h)**2
L[0,0]=p0
L[0,2]=p1
L[0,4]=p2
L[1,1]=p0
L[1,3]=p1
L[1,5]=p2
L[3,0]=p0.diff(alpha3)
L[3,2]=p1.diff(alpha3)
L[3,4]=p2.diff(alpha3)
L[8,6]=p0
L[8,8]=p1
L[8,10]=p2
L[9,7]=p0
L[9,9]=p1
L[9,11]=p2
L[11,6]=p0.diff(alpha3)
L[11,8]=p1.diff(alpha3)
L[11,10]=p2.diff(alpha3)
L
D_p_L = StrainL*L
simplify(D_p_L)
h = 0.5
exp=(0.5-alpha3/h)*(1-(2*alpha3/h)**2)#/(1+alpha3*0.8)
p02=integrate(exp, (alpha3, -h/2, h/2))
integral = expand(simplify(p02))
integral
Explanation: Square theory
$u^1 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u_{10}\left( \alpha_1 \right)p_0\left( \alpha_3 \right)+u_{11}\left( \alpha_1 \right)p_1\left( \alpha_3 \right)+u_{12}\left( \alpha_1 \right)p_2\left( \alpha_3 \right) $
$u^2 \left( \alpha_1, \alpha_2, \alpha_3 \right)=0 $
$u^3 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u_{30}\left( \alpha_1 \right)p_0\left( \alpha_3 \right)+u_{31}\left( \alpha_1 \right)p_1\left( \alpha_3 \right)+u_{32}\left( \alpha_1 \right)p_2\left( \alpha_3 \right) $
$ \left(
\begin{array}{c}
u^1 \
\frac { \partial u^1 } { \partial \alpha_1} \
\frac { \partial u^1 } { \partial \alpha_2} \
\frac { \partial u^1 } { \partial \alpha_3} \
u^2 \
\frac { \partial u^2 } { \partial \alpha_1} \
\frac { \partial u^2 } { \partial \alpha_2} \
\frac { \partial u^2 } { \partial \alpha_3} \
u^3 \
\frac { \partial u^3 } { \partial \alpha_1} \
\frac { \partial u^3 } { \partial \alpha_2} \
\frac { \partial u^3 } { \partial \alpha_3} \
\end{array}
\right) = L \cdot
\left(
\begin{array}{c}
u_{10} \
\frac { \partial u_{10} } { \partial \alpha_1} \
u_{11} \
\frac { \partial u_{11} } { \partial \alpha_1} \
u_{12} \
\frac { \partial u_{12} } { \partial \alpha_1} \
u_{30} \
\frac { \partial u_{30} } { \partial \alpha_1} \
u_{31} \
\frac { \partial u_{31} } { \partial \alpha_1} \
u_{32} \
\frac { \partial u_{32} } { \partial \alpha_1} \
\end{array}
\right) $
End of explanation
rho=Symbol('rho')
B_h=zeros(3,12)
B_h[0,0]=1
B_h[1,4]=1
B_h[2,8]=1
M=simplify(rho*P.T*B_h.T*G_up*B_h*P)
M
M_p = L.T*M*L
integrate(M_p, (alpha3, -h/2, h/2))
Explanation: Mass matrix
End of explanation |
3,041 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First read in the original data
Step1: repeat the processing with all_encounter_data in ICO.py
Step2: After setting up the standard range for outliers, we lost at most 600 points for each variable in all_encounter_data. (Raising the IQR multiplier from 1.5 to 2 does not retain as many extra points as I was expecting, so I chose the standard multiplier of 1.5.)
After removing the outliers, the variables look much more normally distributed.
Step3: What if we collapse all_encounter_data into all_person_data by grouping on Person_Nbr?
Step4: From the above we can tell that, after identifying the outliers, no more than 4% of the points of each variable are removed, which is acceptable.
Step5: Now use 04/15 processed person data (added new features)
Step6: Get the dummy value for the categorical features
Step7: Group the quantitive features by Age and Gender
But there are still null values within the grouped mean values.
Step8: Group the quantitive features by Age_group (and maybe gender)
Divide patients into groups with same amount of patients by age and created a new column called Age_group
Divide patients into groups by quantile of age and get dummy values
Step9: The missing value percentage in different age group as following
Step10: Implement ANOVA test
To test whether the mean values of each feature are equal after being grouped
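As a reminder (a sketch with hypothetical numbers, not part of the original analysis), scipy's one-way ANOVA returns an F statistic and a p-value; a small p-value (e.g. below 0.05) is evidence that the group means differ:
from scipy.stats import f_oneway

stat, pvalue = f_oneway([1.0, 2.0, 3.0], [2.5, 3.5, 4.5])
print(stat, pvalue)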
Step11: Fill up the missing values
with the mean of the non-null variable grouped by age group
Step12: Group the quantitive features by DR diagnosis
The missing value percentage in different diagnosis as following
Step13: Implement ANOVA test
To test whether the mean values of each feature are equal after being grouped
Step14: Fill up the missing values
with the mean of the non-null variable grouped by DR diagnosis
Step15: Modeling trial
Variables
Step16: Decision Tree modeling exploration
Step17: Logistic Regression modeling exploration
Step18: Output the filled up data | Python Code:
import re
data = pd.read_pickle(os.getcwd() + '/data/all_encounter_data.pickle')
Explanation: First read in the original data
End of explanation
d_enc = data.drop(["Enc_ID","Person_ID"], axis=1)
pattern0= re.compile("\d+\s*\/\s*\d+")
index1 = d_enc['Glucose'].str.contains(pattern0, na=False)
temp = d_enc.loc[index1, 'Glucose']
d_enc.loc[index1, 'Glucose'] = d_enc.loc[index1, 'BP']
d_enc.loc[index1, 'BP'] = temp
index2 = d_enc.BP[d_enc.BP.notnull()][~d_enc.BP[d_enc.BP.notnull()].str.contains('/')].index
temp = d_enc.loc[index2, 'Glucose']
d_enc.loc[index2, 'Glucose'] = d_enc.loc[index2, 'BP']
d_enc.loc[index2, 'BP'] = temp
# Split up the BP field into Systolic and Diastolic readings
pattern1 = re.compile("(?P<BP_Systolic>\d+)\s*\/\s*(?P<BP_Diastolic>\d+)")
d_enc = pd.merge(d_enc, d_enc["BP"].str.extract(pattern1, expand=True),
left_index=True, right_index=True).drop("BP", axis=1)
# Define ranges for reasonable values. Identify the data outside of 1.5 times of IQR as outliers
NaN = float("NaN")
quantitive_columns=['A1C', 'BMI', 'Glucose', 'BP_Diastolic', 'BP_Systolic']
for column in quantitive_columns:
d_enc[column] = pd.to_numeric(d_enc[column], errors='coerce')
temp = d_enc[column][d_enc[column].notnull()]
Q2 = temp.quantile(0.75)
Q1 = temp.quantile(0.25)
IQR = Q2-Q1
print(temp[Q1 - 2 * IQR < temp][temp[Q1 - 2 * IQR < temp] < Q2 + 2 * IQR].shape[0], temp.shape[0], d_enc.shape[0])
print(column, Q1 - 2 * IQR, Q2 + 2 * IQR)
for column in quantitive_columns:
d_enc[column] = pd.to_numeric(d_enc[column], errors='coerce')
temp = d_enc[column][d_enc[column].notnull()]
Q2 = temp.quantile(0.75)
Q1 = temp.quantile(0.25)
IQR = Q2-Q1
print(temp[Q1 - 1.5 * IQR < temp][temp[Q1 - 1.5 * IQR < temp] < Q2 + 1.5 * IQR].shape[0], temp.shape[0], d_enc.shape[0])
print(column, Q1 - 1.5 * IQR, Q2 + 1.5 * IQR)
Explanation: repeat the processing with all_encounter_data in ICO.py
End of explanation
import matplotlib.pyplot as plt
for column in quantitive_columns:
f, (ax1, ax2) = plt.subplots(1, 2, sharey=True)
temp0 = pd.to_numeric(d_enc[column], errors='coerce')
ax1.hist(temp0[temp0.notnull()])
ax1.set_title('before')
temp = temp0[temp0.notnull()]
Q2 = temp.quantile(0.75)
Q1 = temp.quantile(0.25)
IQR = Q2-Q1
d_enc[column] = temp0.map(lambda x: x if Q1 - 1.5 * IQR < x < Q2 + 1.5 * IQR else NaN)
ax2.hist(d_enc[column][d_enc[column].notnull()])
ax2.set_title('after')
f.suptitle(column)
plt.show()
Explanation: After setting up the standard range for outliers, we lost at most 600 points for each variable in all_encounter_data. (Raising the IQR multiplier from 1.5 to 2 does not retain as many extra points as I was expecting, so I chose the standard multiplier of 1.5.)
After removing the outliers, the variables look much more normally distributed.
End of explanation
person_data_old = pd.read_pickle(os.getcwd() + '/data/all_person_data_Richard_20170307.pickle')
person_data_new = pd.read_pickle(os.getcwd() + '/data/all_person_data_Dan_20170406.pickle')
person_data_old[quantitive_columns].isnull().sum(axis=0)/person_data_old.shape[0]
person_data_new[quantitive_columns].isnull().sum(axis=0)/person_data_new.shape[0]
Explanation: What if we collapse all_encounter_data into all_person_data by grouping on Person_Nbr?
End of explanation
plt.bar(range(0,5),
person_data_new[quantitive_columns].isnull().sum(axis=0)/person_data_new.shape[0])
plt.gca().set_ylim([0,1])
plt.xticks(range(0,5), quantitive_columns)
plt.ylabel('Missing value percentage')
plt.xlabel('Quantative variables')
plt.show()
Explanation: From the above we can tell that, after identifying the outliers, no more than 4% of the points of each variable are removed, which is acceptable.
End of explanation
person_data_new = pd.read_pickle(os.getcwd() + '/data/all_person_data_Dan_20170415.pickle')
person_data_new.columns.values
quantitive_columns = ["A1C", "BMI", "Glucose", "BP_Systolic", "BP_Diastolic",
'MR_OD_SPH_Numeric', 'MR_OD_CYL_Numeric',
'MR_OS_SPH_Numeric', 'MR_OS_CYL_Numeric',
'MR_OS_DVA_ability', 'MR_OD_DVA_ability',
'MR_OS_NVA_ability', 'MR_OD_NVA_ability']
Explanation: Now use 04/15 processed person data (added new features)
End of explanation
dummy_columns = ['DM', 'ME', 'Glaucoma_Suspect', 'Open_angle_Glaucoma', 'Cataract']
categorical_columns = ['Gender', 'Race', 'recent_smoking_status', 'family_DM', 'family_G']
for column in categorical_columns:
temp = pd.get_dummies(person_data_new[column], prefix=column)
person_data_new[temp.columns.values]=temp
dummy_columns.extend(temp.columns.values.tolist())
Explanation: Get the dummy value for the categorical features
End of explanation
temp = person_data_new.copy()
mean_value = temp.groupby(['Gender', pd.cut(temp['Age'], 6)]).apply(
lambda x: x['A1C'][x['A1C'].notnull()].mean())
missing_index = temp.groupby(['Gender', pd.cut(temp['Age'], 6)]).apply(
lambda x: x['A1C'][x['A1C'].isnull()])
for i in mean_value.index.to_series().tolist():
if i in missing_index.index:
temp.set_value(missing_index[i].index, 'A1C', mean_value[i])
mean_value
temp[temp['A1C'].isnull()].shape[0]
Explanation: Group the quantitive features by Age and Gender
But there are still null values within the grouped mean values.
End of explanation
age_group = np.array([person_data_new.Age.quantile(1.0/6*i) for i in range(1,7)])
age_group
person_data_new['Age_group_numeric']=person_data_new.Age.apply(lambda x: sum(age_group<x)+1)
age_group_dict = {1: '(18, 48]', 2: '(49, 55]', 3: '(56, 60]', 4: '(61, 66]', 5: '(67, 74]', 6: '(75, 114]'}
person_data_new['Age_group'] = person_data_new.Age_group_numeric.apply(lambda x: age_group_dict.get(x))
person_data_new.groupby('Age_group').apply(lambda x: x.shape[0])
temp = pd.get_dummies(person_data_new['Age_group'], prefix = 'Age_group')
person_data_new[temp.columns.values] = temp
dummy_columns.extend(temp.columns.values.tolist())
Explanation: Group the quantitive features by Age_group (and maybe gender)
Divide patients into groups with same amount of patients by age and created a new column called Age_group
Divide patients into groups by quantile of age and get dummy values
End of explanation
person_data_new.groupby('Age_group').apply(lambda x: x[quantitive_columns].isnull().sum(axis=0)/x.shape[0])
Explanation: The missing value percentage in different age group as following:
End of explanation
from scipy.stats import f_oneway
for column in quantitive_columns:
temp = {k:list(v[column]) for k,v in person_data_new[person_data_new[column].notnull()].groupby('Age_group_numeric')}
print column
print f_oneway(temp[1], temp[2], temp[3], temp[4], temp[5], temp[6])
for column in quantitive_columns:
temp = {k:list(v[column]) for k,v in person_data_new[person_data_new[column].notnull()].groupby('Gender')}
print column
print f_oneway(temp['F'], temp['M'])
Explanation: Implement ANOVA test
To test if the mean values of the each feature are equal after being grouped
End of explanation
person_data_fillup = {}
temp = person_data_new.copy()
for column in quantitive_columns:
mean_value = temp.groupby('Age_group').apply(
lambda x: x[column][x[column].notnull()].mean())
missing_index = temp.groupby('Age_group').apply(
lambda x: x[column][x[column].isnull()])
for i in mean_value.index.to_series().tolist():
if i in missing_index.index:
temp.set_value(missing_index[i].index, column, mean_value[i])
person_data_fillup['groupbyAgegroup_mean'] = temp
Explanation: Fill up the missing values
with the mean of the non-null variable grouped by age group
End of explanation
person_data_new.groupby('recent_DR').apply(lambda x: x.shape[0])
person_data_new.groupby('recent_DR').apply(lambda x: x[quantitive_columns].isnull().sum(axis=0)/x.shape[0])
person_data_new.groupby('worst_DR').apply(lambda x: x.shape[0])
person_data_new.groupby('worst_DR').apply(lambda x: x[quantitive_columns].isnull().sum(axis=0)/x.shape[0])
Explanation: Group the quantitive features by DR diagnosis
The missing value percentage in different diagnosis as following:
End of explanation
for column in quantitive_columns:
temp = {k:list(v[column]) for k,v in person_data_new[person_data_new[column].notnull()].groupby('recent_DR')}
print column
print f_oneway(temp['PDR'], temp['SNPDR'], temp['MNPDR'], temp['mNPDR'], temp['no_DR'])
Explanation: Implement ANOVA test
To test if the mean values of the each feature are equal after being grouped
End of explanation
DR_diagnoses = ['PDR', 'SNPDR', 'MNPDR', 'mNPDR', 'no_DR']
temp = person_data_new.copy()
for column in quantitive_columns:
mean_value = temp.groupby('recent_DR').apply(lambda x: x[column][x[column].notnull()].mean())
missing_index = temp.groupby('recent_DR').apply(lambda x: x[column][x[column].isnull()])
for diagnosis in DR_diagnoses:
temp.set_value(missing_index[diagnosis].index, column, mean_value[diagnosis])
person_data_fillup['recent_groupbyDR_mean'] = temp
temp = person_data_new.copy()
for column in quantitive_columns:
mean_value = temp.groupby('worst_DR').apply(lambda x: x[column][x[column].notnull()].mean())
missing_index = temp.groupby('worst_DR').apply(lambda x: x[column][x[column].isnull()])
for diagnosis in DR_diagnoses:
temp.set_value(missing_index[diagnosis].index, column, mean_value[diagnosis])
person_data_fillup['worst_groupbyDR_mean'] = temp
Explanation: Fill up the missing values
with the mean of the non-null variable grouped by DR diagnosis
End of explanation
dummy_columns
quantitive_columns
target_columns = {'recent_groupbyDR_mean': 'recent_DR',
'worst_groupbyDR_mean': 'worst_DR',
'groupbyAgegroup_mean': 'recent_DR'}
Explanation: Modeling trial
Variables
End of explanation
from sklearn import tree
from sklearn.model_selection import train_test_split
from sklearn import metrics
from sklearn.metrics import confusion_matrix
for method, temp in person_data_fillup.items():
print(method)
X = temp[quantitive_columns + dummy_columns]
y = temp[target_columns[method]]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
clf = tree.DecisionTreeClassifier()
clf = clf.fit(X_train, y_train)
preds = clf.predict(X = X_test)
#preds = label_encoder.inverse_transform(preds.tolist())
#y_test = label_encoder.inverse_transform(y_test)
print(pd.crosstab(y_test, preds))
print(metrics.classification_report(y_true = y_test, y_pred=preds))
tree.export_graphviz(clf, feature_names = quantitive_columns + dummy_columns,
class_names = ['MNPDR','PDR','SNPDR','mNPDR','no_DR'], out_file='DT.dot')
Explanation: Decision Tree modeling exploration
End of explanation
from sklearn.linear_model import LogisticRegression
for method, temp in person_data_fillup.items():
print(method)
X = temp[quantitive_columns + dummy_columns]
y = temp[target_columns[method]]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
clf = LogisticRegression()
clf = clf.fit(X_train, y_train)
preds = clf.predict(X = X_test)
#preds = label_encoder.inverse_transform(preds.tolist())
#y_test = label_encoder.inverse_transform(y_test)
print(pd.crosstab(y_test, preds))
print(metrics.classification_report(y_true = y_test, y_pred=preds))
Explanation: Logistic Regression modeling exploration
End of explanation
temp = person_data_fillup['groupbyAgegroup_mean'][quantitive_columns + dummy_columns + ['worst_DR', 'recent_DR']]
temp.describe(include='all')
#temp.to_pickle('baseline_missingHandled_Dan_20170406.pickle')
temp.to_pickle('Morefeatures_missingHandled_Dan_20170415.pickle')
temp = person_data_new[quantitive_columns + dummy_columns + ['worst_DR', 'recent_DR']]
temp.describe(include='all')
#temp.to_pickle('baseline_raw_Dan_20170406.pickle')
temp.to_pickle('Morefeatures_raw_Dan_20170415.pickle')
Explanation: Output the filled up data
End of explanation |
3,042 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
With rerun = True, all experiments are executed again (takes several hours). With False, the data are taken from the *.csv files
Step1: The generate function may be used to generate random formulae. Uncomment the function call below to generate different set of formulae.
Step2: Deterministic automata
We compare output of ltl3tela -D1, ltl2tgba -DG (this setting guarantees that the output will be deterministic), Delag and Rabinizer 4.
Step3: The sum of states, edges and acceptance sets over the set of 1000 random formulae
Step4: The sum of states, edges and acceptance sets over the set of patterns and literature formulae, including timeouts (TO) and parse errors (PE). The latter means that ltlcross was unable to parse the produced automaton due to excessive number of acceptance sets.
Step5: Comparison of ltl3tela and ltl2tgba on patterns formulae where the produced automata have different numbers of states
Step6: Merge the tables to get the TeX output later
Step7: Nondeterministic automata
Since ltl3ba sometimes produces slightly incorrect HOAF, be more tolerant about this (otherwise ltlcross refuses to parse its output)
Step8: The following computations work the same way as for deterministic automata.
Step9: Merge tables
Step10: Highlight & export to LaTeX
Step11: Cross comparisons
In the following tables, the value in (row, col) is the number of cases where tool row delivers a better automaton than tool col. Better means strictly smaller in the lexicographic order on (#states, #acc marks, #edges). The last column in each row sums the number of such victories of the corresponding tool row.
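As an illustrative sketch (hypothetical numbers, not results from the experiments), the "better" relation is just a lexicographic tuple comparison:
a = (3, 1, 4)   # (#states, #acc marks, #edges) produced by tool 'row'
b = (3, 2, 4)   # the same formula translated by tool 'col'
print(a < b)    # True: equal states, fewer acceptance marks, so 'row' wins this formula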
Step12: Formulae excluded from the evaluation
The following tables contain results on formulae excluded from the evaluation due to error (timeout or parse error) in some translation. In case of successful translation, number of states is shown in the table.
The first and second table contain data for deterministic and nondeterministic translators, respectively.
Step13: Finally, we are interested in formulae where the difference between best and worst translator (in terms of states count) is larger than some threshold. Both tables compare deterministic translators over patterns (with threshold 20) and random formulae (threshold 10). Such statistics can be produced by calling large_diff(dataset, list_of_interesting_tools, threshold). | Python Code:
rerun = False
%%bash
ltl3ba -v
ltl3tela -v
ltl2tgba --version
delag --version
ltl2dgra --version # Rabinizer 4
Explanation: With rerun = True, all experiments are executed again (takes several hours). With False, the data are taken from the *.csv files:
End of explanation
def generate(n=1000,func=(lambda x: True),filename=None,priorities='',ap=['a','b','c','d','e']):
if filename is None:
file_h = sys.stdout
else:
file_h = open(filename,'w')
f = spot.randltl(ap,
ltl_priorities=priorities,
simplify=3,tree_size=15).relabel_bse(spot.Abc)
i = 0
printed = set()
while(i < n):
form = next(f)
if form in printed:
continue
if func(form) and not form.is_tt() and not form.is_ff():
print(form,file=file_h)
printed.add(form)
i += 1
f_rand = 'formulae/atva19/rand.ltl'
f_patterns = 'formulae/atva19/patterns.ltl'
# generate(1000, filename = f_rand)
Explanation: The generate function may be used to generate random formulae. Uncomment the function call below to generate different set of formulae.
End of explanation
d_tools = {
"ltl3tela-D1": "ltl3tela -D1 -f %f > %O",
"ltl2tgba-DG": "ltl2tgba -DG %f > %O",
"delag": "delag %f > %O",
"rabinizer4": "ltl2dgra %f > %O"
}
d_order = ["ltl3tela-D1", "ltl2tgba-DG", "delag", "rabinizer4"]
d_cols = ["states", "edges", "acc"]
d_csv_rand = 'formulae/atva19/det.rand.csv'
d_data_rand = LtlcrossRunner(d_tools, formula_files = [f_rand], res_filename = d_csv_rand, cols = d_cols)
if rerun:
d_data_rand.run_ltlcross(automata = False, timeout = '60')
d_data_rand.parse_results()
Explanation: Deterministic automata
We compare output of ltl3tela -D1, ltl2tgba -DG (this setting guarantees that the output will be deterministic), Delag and Rabinizer 4.
End of explanation
det_rand = d_data_rand.cummulative(col = d_cols).unstack(level = 0).loc[d_order, d_cols]
det_rand
d_csv_patterns = 'formulae/atva19/det.patterns.csv'
d_data_patterns = LtlcrossRunner(d_tools, formula_files = [f_patterns], res_filename = d_csv_patterns, cols = d_cols)
if rerun:
d_data_patterns.run_ltlcross(automata = False, timeout = '60')
d_data_patterns.parse_results()
Explanation: The sum of states, edges and acceptance sets over the set of 1000 random formulae:
End of explanation
det_to = pd.DataFrame(d_data_patterns.get_error_count(),columns=['TO.literature'])
det_err = pd.DataFrame(d_data_patterns.get_error_count('parse error',False),columns=['PE.literature'])
det_lit = d_data_patterns.cummulative(col = d_cols).unstack(level = 0).loc[d_order, d_cols]
det_lit = pd.concat([det_lit,det_to,det_err],axis=1,join='inner',sort=False)
det_lit
to = d_data_rand.exit_status
to[to != "ok"].dropna(how='all')
Explanation: The sum of states, edges and acceptance sets over the set of patterns and literature formulae, including timeouts (TO) and parse errors (PE). The latter means that ltlcross was unable to parse the produced automaton due to excessive number of acceptance sets.
End of explanation
d_data_patterns.smaller_than('ltl3tela-D1', 'ltl2tgba-DG')
d_data_patterns.smaller_than('ltl2tgba-DG', 'ltl3tela-D1')
Explanation: Comparison of ltl3tela and ltl2tgba on patterns formulae where the produced automata have different numbers of states:
End of explanation
det_tmp = pd.merge(det_rand, det_lit, suffixes=('.random','.literature'),on='tool')
det_tmp
det = split_cols(det_tmp,'.').swaplevel(axis=1)
det
Explanation: Merge the tables to get the TeX output later:
End of explanation
import os
os.environ['SPOT_HOA_TOLERANT']='TRUE'
Explanation: Nondeterministic automata
Since ltl3ba sometimes produces slightly incorrect HOAF, be more tolerant about this (otherwise ltlcross refuses to parse its output):
End of explanation
n_tools = {
"ltl3tela": "ltl3tela -f %f > %O",
"ltl2tgba": "ltl2tgba %f > %O",
"ltl2tgba-G": "ltl2tgba -G %f > %O",
"ltl3ba": "ltldo 'ltl3ba -H2' -f %f > %O",
}
n_order = ["ltl3tela", "ltl2tgba-G", "ltl2tgba", "ltl3ba"]
n_cols = ["states", "edges", "acc"]
n_csv_rand = 'formulae/atva19/nondet.rand.csv'
n_data_rand = LtlcrossRunner(n_tools, formula_files = [f_rand], res_filename = n_csv_rand, cols = n_cols)
if rerun:
n_data_rand.run_ltlcross(automata = False, timeout = '60')
n_data_rand.parse_results()
nd_rand = n_data_rand.cummulative(col = n_cols).unstack(level = 0).loc[n_order, n_cols]
nd_rand
n_csv_patterns = 'formulae/atva19/nondet.patterns.csv'
n_data_patterns = LtlcrossRunner(n_tools, formula_files = [f_patterns], res_filename = n_csv_patterns, cols = n_cols)
if rerun:
n_data_patterns.run_ltlcross(automata = False, timeout = '60')
n_data_patterns.parse_results()
nd_to = pd.DataFrame(n_data_patterns.get_error_count(),columns=['TO.literature'])
nd_err = pd.DataFrame(n_data_patterns.get_error_count('parse error',False),columns=['PE.literature'])
nd_lit = n_data_patterns.cummulative(col = n_cols).unstack(level = 0).loc[n_order, n_cols]
nd_lit = pd.concat([nd_lit,nd_to,nd_err],axis=1,join='inner',sort=False)
nd_lit
n_data_patterns.smaller_than('ltl3tela', 'ltl2tgba-G')
n_data_patterns.smaller_than('ltl2tgba-G', 'ltl3tela')
nd_tmp = pd.merge(nd_rand, nd_lit, suffixes=('.random','.literature'),on='tool')
nd_tmp
nd = split_cols(nd_tmp,'.').swaplevel(axis=1)
nd
n_data_patterns.get_error_count()
n_data_rand.get_error_count()
Explanation: The following computations work the same way as for deterministic automata.
End of explanation
det
#Merge det & nondet
merged = pd.concat([det,nd],keys=["deterministic","nondeterministic"],join='outer',sort=False)
merged
Explanation: Merge tables
End of explanation
filename = 'colored_res.tex'
merged_high = highlight_by_level(merged, high_min)
cummulative_to_latex(merged_high, filename)
fix_latex(merged_high, filename)
d_lit_c = len(d_data_patterns.values.dropna())
n_lit_c = len(n_data_patterns.values.dropna())
print('Number of formulas without errors:\n' +
' det: {}\nnondet: {}'.format(d_lit_c, n_lit_c))
Explanation: Highlight & export to LaTeX
End of explanation
d_data_patterns.cross_compare(include_fails=False,props=['states','acc','edges'])
d_data_rand.cross_compare(include_fails=False,props=['states','acc','edges'])
n_data_patterns.cross_compare(include_fails=False,props=['states','acc','edges'])
n_data_rand.cross_compare(include_fails=False,props=['states','acc','edges'])
Explanation: Cross comparisons
In the following tables, the value in (row, col) is the number of cases where tool row delivers a better automaton than tool col. Better means strictly smaller in the lexicographic order on (#states, #acc marks, #edges). The last column in each row sums the number of such victories of the corresponding tool row.
End of explanation
d_fails = d_data_patterns.values[d_data_patterns.values.isnull().any(axis = 1)]['states']\
.join(d_data_patterns.exit_status, lsuffix = '.states', rsuffix = '.response')
for tool in d_order:
d_fails[tool] = d_fails[tool + '.states'].combine_first(d_fails[tool + '.response'])
d_fails_out = d_fails[d_order]
d_fails_out
n_fails = n_data_patterns.values[n_data_patterns.values.isnull().any(axis = 1)]['states']\
.join(n_data_patterns.exit_status, lsuffix = '.states', rsuffix = '.response')
for tool in n_order:
n_fails[tool] = n_fails[tool + '.states'].combine_first(n_fails[tool + '.response'])
n_fails_out = n_fails[n_order]
n_fails_out
Explanation: Formulae excluded from the evaluation
The following tables contain results on formulae excluded from the evaluation due to error (timeout or parse error) in some translation. In case of successful translation, number of states is shown in the table.
The first and second table contain data for deterministic and nondeterministic translators, respectively.
End of explanation
def large_diff(res, tools, threshold):
df = res.values.dropna()['states']
df['diff'] = df.loc[:, tools].max(axis = 1) - df.loc[:, tools].min(axis = 1)
return df[df['diff'] > threshold][tools]
large_diff(d_data_patterns, d_tools, 20)
large_diff(d_data_rand, d_tools, 10)
Explanation: Finally, we are interested in formulae where the difference between best and worst translator (in terms of states count) is larger than some threshold. Both tables compare deterministic translators over patterns (with threshold 20) and random formulae (threshold 10). Such statistics can be produced by calling large_diff(dataset, list_of_interesting_tools, threshold).
End of explanation |
3,043 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Preamble
Step1: Feature Union with Heterogeneous Data Sources
Polynomial basis function
The polynomial basis function is provided by scikit-learn in the sklearn.preprocessing module.
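For a two-feature input $(x_1, x_2)$, a degree-2 polynomial basis produces (up to ordering) the six terms $1, x_1, x_2, x_1^2, x_1 x_2, x_2^2$, which is what the small example below illustrates.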
Step2: Custom basis functions
Unfortunately, this is pretty much the extent of what scikit-learn provides in the way of basis functions. Here we define some standard basis functions, while adhering to the scikit-learn interface. This will be important when we try to incorporate our basis functions in pipelines and feature unions later on. While this is not strictly required, it will certainly make life easier for us down the road.
Radial Basis Function
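Each radial feature implemented below has the form $\phi_j(x) = \exp\left(-\frac{\lVert x - \mu_j \rVert^2}{2 s^2}\right)$ for a centre $\mu_j$ and width $s$.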
Step3: Sigmoidal Basis Function
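The sigmoidal features follow $\phi_j(x) = \sigma\left(\frac{\lVert x - \mu_j \rVert}{s}\right)$ with $\sigma(a) = \frac{1}{1 + e^{-a}}$, matching the expit-based transform below.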
Step4: Real-world Dataset
Now that we have a few basis functions at our disposal, let's try to apply different basis functions to different features of a dataset. We use the diabetes dataset, a real-world dataset with 442 instances and 10 features. We first work through each step manually, and show how the steps can be combined using scikit-learn's feature unions and pipelines to form a single model that will perform all the necessary steps in one fell swoop.
Step5: We print every other feature for just the first few instances, just to get an idea of what the data looks like
Step6: Assume for some reason we are interested in training a model using, say, features 2 and 5 with a polynomial basis, and features 6, 8 and 9 with a radial basis. We first slice up our original dataset.
Step7: Now we apply the respective basis functions.
Polynomial
Step8: Radial
Step9: Now we're ready to concatenate these augmented datasets.
Step10: Now we are ready to train a regressor with this augmented dataset. For this example, we'll simply use a linear regression model.
Step11: (To no one's surprise, our model performs quite poorly, since zero effort was made to identify and incorporate the most informative features or appropriate basis functions. Rather, they were chosen solely to maximize clarity of exposition.)
Recap
So let's recap what we've done.
We started out with a dataset with 442 samples and 10 features, represented by 442x10 matrix X
For one reason or another, we wanted to use different basis functions for different subsets of features. Apparently, we wanted features 2 and 5 for one basis function and features 6, 8 and 9 for another. Therefore, we
sliced the matrix X to obtain 442 by 2 matrix X1 and
sliced the matrix X to obtain 442 by 3 matrix X2.
We
applied a polynomial basis function of degree 2 to X1 with 2 features and 442 samples. This returns a dataset X1_poly with $\begin{pmatrix} 4 \ 2 \end{pmatrix} = 6$ features and 442 samples. (NB
Step12: This effectively composes each of the steps we had to manually perform and amalgamated it into a single transformer. We can even append a regressor at the end to make it a complete estimator/predictor.
Step13: Breaking it Down
The most important thing to note is that everything in scikit-learn is either a transformer or a predictor, and is almost always an estimator. An estimator is simply a class that implements the fit method, while a transformer and a predictor implement a, well, transform and predict method respectively. From this simple interface, we get a surprisingly high amount of functionality and flexibility.
Pipeline
A pipeline behaves as a transformer or a predictor depending on what the last step of the pipeline is. If the last step is a transformer, the entire pipeline is a transformer and one can call fit, transform or fit_transform like an ordinary transformer. The same is true if the last step is a predictor. Essentially, all it does is chain the fit_transform calls of every transformer in the pipeline. If we think of ordinary transformers as functions, a pipeline can be thought of as a higher-order function that simply composes an arbitrary number of functions.
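A minimal sketch (using the diabetes slice X1 and target y created earlier in this notebook):
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# the last step is a predictor, so the whole pipeline behaves like a predictor
pipe = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
pipe.fit(X1, y)
print(pipe.score(X1, y))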
Step14: Union
A union is a transformer that is initialized with an arbitrary number of transformers. When fit_transform is called on a dataset, it simply calls fit_transform of the transformers it was given and horizontally concatenates its results.
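A small sketch (StandardScaler is used here purely for illustration):
from sklearn.pipeline import make_union
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

union = make_union(PolynomialFeatures(degree=2), StandardScaler())
# 66 polynomial columns plus 10 scaled columns for the 442x10 diabetes data
print(union.fit_transform(X).shape)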
Step15: If we run this on the original 442x10 dataset, we expect to get a dataset with the same number of samples and $\begin{pmatrix} 12 \\ 2 \end{pmatrix} + 3 = 66 + 3 = 69$ features.
Step16: Putting it all together
The above union applies the basis functions on the entire dataset, but we're interested in applying different basis functions to different features. To do this, we can simply define a rather frivolous transformer that simply slices the input data, and that's exactly what ArraySlicer was for.
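A quick sanity check of the slicing transformer on its own (not in the original notebook, assuming ArraySlicer as defined below):
first_pair = ArraySlicer(np.index_exp[:, np.array([2, 5])])
print(first_pair.fit_transform(X).shape)   # (442, 2), the same shape as X1 above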
Step17: Then we can combine this all together to form our mega-transformer which we showed earlier.
Step18: This gives us a predictor which takes some input, slices up the respective features, churns it through a basis function and finally trains a linear regressor on it, all in one go! | Python Code:
import numpy as np
from scipy.spatial.distance import cdist
from scipy.special import expit
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import make_pipeline, make_union
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.datasets import load_diabetes
Explanation: Preamble
End of explanation
X = np.arange(1, 9).reshape(4, 2)
X
PolynomialFeatures(degree=2).fit_transform(X)
Explanation: Feature Union with Heterogeneous Data Sources
Polynomial basis function
The polynomial basis function is provided by scikit-learn in the sklearn.preprocessing module.
End of explanation
class RadialFeatures(BaseEstimator, TransformerMixin):
def __init__(self, mu=0, s=1):
self.mu = mu
self.s = s
def fit(self, X, y=None):
        # this basis function is stateless,
        # so fit need only return self
return self
def transform(self, X, y=None):
return np.exp(-cdist(X, self.mu, 'sqeuclidean')/(2*self.s**2))
Explanation: Custom basis functions
Unfortunately, this is pretty much the extent of what scikit-learn provides in the way of basis functions. Here we define some standard basis functions, while adhering to the scikit-learn interface. This will be important when we try to incorporate our basis functions in pipelines and feature unions later on. While this is not strictly required, it will certainly make life easier for us down the road.
Radial Basis Function
End of explanation
class SigmoidalFeatures(BaseEstimator, TransformerMixin):
def __init__(self, mu=0, s=1):
self.mu = mu
self.s = s
def fit(self, X, y=None):
        # this basis function is stateless,
        # so fit need only return self
return self
def transform(self, X, y=None):
return expit(cdist(X, self.mu)/self.s)
mu = np.linspace(0.1, 1, 10).reshape(5, 2)
mu
RadialFeatures(mu=mu).fit_transform(X).round(2)
SigmoidalFeatures(mu=mu).fit_transform(X).round(2)
Explanation: Sigmoidal Basis Function
End of explanation
diabetes = load_diabetes()
X, y = diabetes.data, diabetes.target
X.shape
y.shape
Explanation: Real-world Dataset
Now that we have a few basis functions at our disposal, let's try to apply different basis functions to different features of a dataset. We use the diabetes dataset, a real-world dataset with 442 instances and 10 features. We first work through each step manually, and show how the steps can be combined using scikit-learn's feature unions and pipelines to form a single model that will perform all the necessary steps in one fell swoop.
End of explanation
# sanity check
X[:5, ::2]
# sanity check
y[:5]
Explanation: We print every other feature for just the first few instances, to get an idea of what the data looks like.
End of explanation
X1 = X[:, np.array([2, 5])]
X1.shape
# sanity check
X1[:5]
X2 = X[:, np.array([6, 8, 9])]
X2.shape
# sanity check
X2[:5]
Explanation: Assume for some reason we are interested in training a model using, say, features 2 and 5 with a polynomial basis, and features 6, 8 and 9 with a radial basis. We first slice up our original dataset.
End of explanation
X1_poly = PolynomialFeatures().fit_transform(X1)
X1_poly.shape
# sanity check
X1_poly[:5].round(2)
Explanation: Now we apply the respective basis functions.
Polynomial
End of explanation
mu = np.linspace(0, 1, 6).reshape(2, 3)
mu
X2_radial = RadialFeatures(mu).fit_transform(X2)
X2_radial.shape
# sanity check
X2_radial[:5].round(2)
Explanation: Radial
End of explanation
X_concat = np.hstack((X1_poly, X2_radial))
X_concat.shape
# sanity check
X_concat[:5, ::2].round(2)
Explanation: Now we're ready to concatenate these augmented datasets.
End of explanation
model = LinearRegression()
model.fit(X_concat, y)
model.score(X_concat, y)
Explanation: Now we are ready to train a regressor with this augmented dataset. For this example, we'll simply use a linear regression model.
End of explanation
class ArraySlicer(BaseEstimator, TransformerMixin):
def __init__(self, index_exp):
self.index_exp = index_exp
def fit(self, X, y=None):
return self
def transform(self, X, y=None):
return X[self.index_exp]
model = \
make_pipeline(
make_union(
make_pipeline(
ArraySlicer(np.index_exp[:, np.array([2, 5])]),
PolynomialFeatures()
),
make_pipeline(
ArraySlicer(np.index_exp[:, np.array([6, 8, 9])]),
RadialFeatures(mu)
)
)
)
model.fit(X)
model.transform(X).shape
# sanity check
model.transform(X)[:5, ::2].round(2)
Explanation: (To no one's surprise, our model performs quite poorly, since zero effort was made to identify and incorporate the most informative features or appropriate basis functions. Rather, they were chosen solely to maximize clarity of exposition.)
Recap
So let's recap what we've done.
We started out with a dataset with 442 samples and 10 features, represented by 442x10 matrix X
For one reason or another, we wanted to use different basis functions for different subsets of features. Apparently, we wanted features 2 and 5 for one basis function and features 6, 8 and 9 for another. Therefore, we
sliced the matrix X to obtain 442 by 2 matrix X1 and
sliced the matrix X to obtain 442 by 3 matrix X2.
We
applied a polynomial basis function of degree 2 to X1 with 2 features and 442 samples. This returns a dataset X1_poly with $\binom{4}{2} = 6$ features and 442 samples. (NB: In general, the number of output features for a polynomial basis function of degree $d$ on $n$ features is the number of multisets of cardinality $d$ with elements taken from a finite set of cardinality $n+1$, which is given by the multiset coefficient $\left(\!\binom{n+1}{d}\!\right) = \binom{n+d}{d}$; a quick numerical check of this count follows the recap.) So from 442 by 2 matrix X1 we obtain 442 by 6 matrix X1_poly
applied a radial basis function with 2 mean vectors $\mu_1 = \begin{pmatrix} 0 & 0.2 & 0.4 \end{pmatrix}^T$ and $\mu_2 = \begin{pmatrix} 0.6 & 0.8 & 1.0 \end{pmatrix}^T$, which is represented by the 2 by 3 matrix mu. From the 442 by 3 matrix X2, we obtain 442 by 2 matrix X2_radial
Next, we horizontally concatenated 442 by 6 matrix X1_poly with 442 by 2 matrix X2_radial to obtain the final 442 by 8 matrix X_concat
Finally, we fitted a linear model on X_concat.
So this is how we went from a 442x10 matrix X to the 442x8 matrix X_concat.
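As a quick numerical check of the $\binom{n+d}{d}$ feature count quoted in the recap above (this check is an addition to the original notebook and assumes Python 3.8+ for math.comb):
from math import comb
for n_features in (2, 10):
    n_out = PolynomialFeatures(degree=2).fit_transform(np.zeros((1, n_features))).shape[1]
    print(n_features, n_out, comb(n_features + 2, 2))  # the last two numbers agree: 6 and 66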
With Pipeline and Feature Union
First we define a transformer that slices up the input data. Note that instead of working with (tuples of) slice objects, it is usually more convenient to use the NumPy helper np.index_exp. We explain later why this is necessary.
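A small illustration of what np.index_exp produces (added here for clarity; it is not part of the original notebook):
idx = np.index_exp[:, np.array([2, 5])]
print(idx)            # a plain tuple: (slice(None, None, None), array([2, 5]))
print(X[idx].shape)   # equivalent to X[:, np.array([2, 5])] -> (442, 2)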
End of explanation
model = \
make_pipeline(
make_union(
make_pipeline(
ArraySlicer(np.index_exp[:, np.array([2, 5])]),
PolynomialFeatures()
),
make_pipeline(
ArraySlicer(np.index_exp[:, np.array([6, 8, 9])]),
RadialFeatures(mu)
)
),
LinearRegression()
)
model.fit(X, y)
model.score(X, y)
Explanation: This effectively composes each of the steps we had to perform manually and amalgamates them into a single transformer. We can even append a regressor at the end to make it a complete estimator/predictor.
End of explanation
model = \
make_pipeline(
PolynomialFeatures(), # transformer
LinearRegression() # predictor
)
model.fit(X, y)
model.score(X, y)
Explanation: Breaking it Down
The most important thing to note is that everything in scikit-learn is either a transformer or a predictor, and is almost always an estimator. An estimator is simply a class that implements the fit method, while a transformer and a predictor implement a, well, transform and predict method respectively. From this simple interface, we get a surprisingly high amount of functionality and flexibility.
Pipeline
A pipeline behaves as a transformer or a predictor depending on what the last step of the pipeline is. If the last step is a transformer, the entire pipeline is a transformer and one can call fit, transform or fit_transform like an ordinary transformer. The same is true if the last step is a predictor. Essentially, all it does is chain the fit_transform calls of every transformer in the pipeline. If we think of ordinary transformers like functions, pipelines can be thought of as a higher-order function that simply composes an arbitrary number of functions.
End of explanation
mu_ = np.linspace(0, 10, 30).reshape(3, 10)
model = \
make_union(
PolynomialFeatures(),
RadialFeatures(mu_)
)
Explanation: Union
A union is a transformer that is initialized with an arbitrary number of transformers. When fit_transform is called on a dataset, it simply calls fit_transform of the transformers it was given and horizontally concatenates its results.
End of explanation
model.fit_transform(X).shape
Explanation: If we run this on the original 442x10 dataset, we expect to get a dataset with the same number of samples and $\binom{12}{2} + 3 = 66 + 3 = 69$ features.
End of explanation
model = \
make_pipeline(
ArraySlicer(np.index_exp[:, np.array([2, 5])]),
PolynomialFeatures()
)
model.fit(X)
model.transform(X).shape
# sanity check
model.transform(X)[:5].round(2)
Explanation: Putting it all together
The above union applies the basis functions on the entire dataset, but we're interested in applying different basis functions to different features. To do this, we can define a rather frivolous transformer that simply slices the input data, and that's exactly what ArraySlicer was for.
End of explanation
model = \
make_pipeline(
make_union(
make_pipeline(
ArraySlicer(np.index_exp[:, np.array([2, 5])]),
PolynomialFeatures()
),
make_pipeline(
ArraySlicer(np.index_exp[:, np.array([6, 8, 9])]),
RadialFeatures(mu)
)
),
LinearRegression()
)
Explanation: Then we can combine this all together to form our mega-transformer which we showed earlier.
End of explanation
model.fit(X, y)
model.score(X, y)
Explanation: This gives us a predictor which takes some input, slices up the respective features, churns it through a basis function and finally trains a linear regressor on it, all in one go!
End of explanation |
3,044 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Features of BIDMat and Scala
BIDMat is a multi-platform matrix library similar to R, Matlab, Julia or Numpy/Scipy. It takes full advantage of the very powerful Scala Language. Its intended primarily for machine learning, but is has a broad set of operations and datatypes and should be suitable for many other applications. BIDMat has several unique features
Step1: These calls check that CPU and GPU native libs loaded correctly, and what GPUs are accessible.
If you have a GPU and CUDA installed, GPUmem will printout the fraction of free memory, the absolute free memory and the total memory for the default GPU.
CPU and GPU matrices
BIDMat's matrix types are given in the table below. All are children of the "Mat" parent class, which allows code to be written generically. Many of BIDMach's learning algorithms will run with either single or double precision, dense or sparse input data.
<table style="width
Step2: CPU matrix operations use Intel MKL acceleration for linear algebra, scientific and statistical functions. BIDMat includes "tic" and "toc" for timing, and "flip" and "flop" for floating point performance.
Step3: GPU matrices behave very similarly.
Step4: But much of the power of BIDMat is that we dont have to worry about matrix types. Lets explore that with an example.
SVD (Singular Value Decomposition) on a Budget
Now lets try solving a real problem with this infrastructure
Step5: Notice that the code above used only the "Mat" matrix type. If you examine the variables V and P in a Scala IDE (Eclipse has one) you will find that they both also have type "Mat". Let's try it with an FMat (CPU single precision, dense matrix).
Movie Data Example
We load some data from the MovieLens project.
Step6: Let's take a peek at the singular values on a plot
Step7: Which shrinks a little too fast. Lets look at it on a log-log plot instead
Step8: Now lets try it with a GPU, single-precision, dense matrix.
Step9: That's not bad, the GPU version was nearly 4x faster. Now lets try a sparse, CPU single-precision matrix. Note that by construction our matrix was only 10% dense anyway.
Sparse SVD
Step10: This next one is important. Dense matrix operations are the bread-and-butter of scientific computing, and now most deep learning. But other machine learning tasks (logistic regression, SVMs, k-Means, topic models etc) most commonly take sparse input data like text, URLs, cookies etc. And so performance on sparse matrix operations is critical.
GPU performance on sparse data, especially power law data - which covers most of the case above (the commerically important cases) - has historically been poor. But in fact GPU hardware supports extremely fast sparse operations when the kernels are carefully designed. Such kernels are only available in BIDMat right now. NVIDIA's sparse matrix kernels, which have been tuned for sparse scientific data, do not work well on power-law data.
In any case, let's try BIDMat's GPU sparse matrix type
Step11: That's a 10x improvement end-to-end, which is similar to the GPU's advantage on dense matrices. This result is certainly not specific to SVD, and is reproduced in most ML algorithms. So GPUs have a key role to play in general machine learning, and its likely that at some point they will assume a central role as they currently enjoy in scientific computing and deep learning.
GPU Double Precision
One last performance issue
Step12: Which is noticebly slower, but still 3x faster than the CPU version running in single precision.
Using Cusparse
NVIDIA's cusparse library, which is optimized for scientific data, doesnt perform as well on power-law data.
Step13: Unicode Math Operators, Functions and Variables
As well as the standard operators +,-,*,/, BIDMat includes several other important operators with their standard unicode representation. They have an ASCII alias in case unicode input is difficult. Here they are
Step14: Hadamard (element-wise) multiply
Step15: Dot product, by default along columns
Step16: Dot product along rows
Step17: Kronecker product
Step18: As well as operators, functions in BIDMach can use unicode characters. e.g.
Step19: You can certainly define new unicode operators
Step20: and use as much Greek as you want
Step21: or English
Step22: Transposed Multiplies
Matrix multiply is the most expensive step in many calculations, and often involves transposed matrices. To speed up those calcualtions, we expose two operators that combine the transpose and multiply operations
Step23: Highlights of the Scala Language
Scala is a remarkable language. It is an object-oriented language with similar semantics to Java which it effectively extends. But it also has a particular clean functional syntax for anonymous functions and closures.
It has a REPL (Read-Eval-Print-Loop) like Python, and can be used interactively or it can run scripts in or outside an interactive session.
Like Python, types are determined by assignments, but they are static rather than dynamic. So the language has the economy of Python, but the type-safety of a static language.
Scala includes a tuple type for multiple-value returns, and on-the-fly data structuring.
Finally it has outstanding support for concurrency with parallel classes and an actor system called Akka.
Performance
First we examine the performance of Scala as a scientific language. Let's implement an example that has been widely used to illustrate the performance of the Julia language. Its a random walk, i.e. a 1D array with random steps from one element to the next.
Step24: If we try the same calculation in the Julia language (a new language designed for scientific computing) and in Python we find that
Step25: Which is better, due to the faster random number generation in the vectorized rand function. But More interesting is the GPU running time
Step26: If we run similar operators in Julia and Python we find
Step27: Almost every piece of Java code can be used in Scala. And therefore any piece of Java code can be used interactively.
There's very little work to do. You find a package and add it to your dependencies and then import as you would in Java.
Step28: Apache Commons Math includes a Statistics package with many useful functions and tests. Lets create two arrays of random data and compare them.
Step29: BIDMat has enriched matrix types like FMat, SMat etc, while Apache Commons Math expects Java Arrays of Double precision floats. To get these, we can convert FMat to DMat (double) and extra the data field which contains the matrices data.
Step30: But rather than doing this conversion every time we want to use some BIDMat matrices, we can instruct Scala to do the work for us. We do this with an implicit conversion from FMat to Array[Double]. Simply defining this function will case a coercion whenever we supply an FMat argument to a function that expects Array[Double].
Step31: And magically we can perform t-Tests on BIDMat matrices as though they had known each other all along.
Step32: and its important to get your daily dose of beta
Step33: Deconstruction
Step34: Let's make a raw Java Array of float integers.
Step35: First of all, Scala supports Tuple types for ad-hoc data structuring.
Step36: We can also deconstruct tuples using Scala Pattern matching
Step37: And reduce operations can use deconstruction as well | Python Code:
import BIDMat.{CMat,CSMat,DMat,Dict,IDict,FMat,FND,GMat,GDMat,GIMat,GLMat,GSMat,GSDMat,
HMat,IMat,Image,LMat,Mat,ND,SMat,SBMat,SDMat}
import BIDMat.MatFunctions._
import BIDMat.SciFunctions._
import BIDMat.Solvers._
import BIDMat.JPlotting._
Mat.checkMKL
Mat.checkCUDA
Mat.setInline
if (Mat.hasCUDA > 0) GPUmem
Explanation: Features of BIDMat and Scala
BIDMat is a multi-platform matrix library similar to R, Matlab, Julia or Numpy/Scipy. It takes full advantage of the very powerful Scala Language. It's intended primarily for machine learning, but it has a broad set of operations and datatypes and should be suitable for many other applications. BIDMat has several unique features:
Built from the ground up with GPU + CPU backends. BIDMat code is implementation independent.
GPU memory management uses caching, designed to support iterative algorithms.
Natural and extensible syntax (thanks to scala). Math operators include +,-,*,/,⊗,∙,∘
Probably the most complete support for matrix types: dense matrices of float32, double, int and long. Sparse matrices with single or double elements. All are available on CPU or GPU.
Highest performance sparse matrix operations on power-law data.
BIDMat has several other state-of-the-art features:
* Interactivity. Thanks to the Scala language, BIDMat is interactive and scriptable.
* Massive code base thanks to Java.
* Easy-to-use Parallelism, thanks to Scala's actor framework and parallel collection classes.
* Runs on JVM, extremely portable. Runs on Mac, Linux, Windows, Android.
* Cluster-ready, leverages Hadoop, Yarn, Spark etc.
BIDMat is a library that is loaded by a startup script, and a set of imports that include the default classes and functions. We include them explicitly in this notebook.
End of explanation
val n = 4096 // "val" designates a constant. n is statically typed (as in Int here), but its type is inferred.
val a = rand(n,n) // Create an nxn matrix (on the CPU)
%type a // Most scientific funtions in BIDMat return single-precision results by default.
Explanation: These calls check that CPU and GPU native libs loaded correctly, and what GPUs are accessible.
If you have a GPU and CUDA installed, GPUmem will printout the fraction of free memory, the absolute free memory and the total memory for the default GPU.
CPU and GPU matrices
BIDMat's matrix types are given in the table below. All are children of the "Mat" parent class, which allows code to be written generically. Many of BIDMach's learning algorithms will run with either single or double precision, dense or sparse input data.
<table style="width:4in" align="left">
<tr><td/><td colspan="2"><b>CPU Matrices</b></td><td colspan="2"><b>GPU Matrices</b></td></tr>
<tr><td></td><td><b>Dense</b></td><td><b>Sparse</b></td><td><b>Dense</b></td><td><b>Sparse</b></td></tr>
<tr><td><b>Float32</b></td><td>FMat</td><td>SMat</td><td>GMat</td><td>GSMat</td></tr>
<tr><td><b>Float64</b></td><td>DMat</td><td>SDMat</td><td>GDMat</td><td>GSDMat</td></tr>
<tr><td><b>Int32</b></td><td>IMat</td><td></td><td>GIMat</td><td></td></tr>
<tr><td><b>Int64</b></td><td>LMat</td><td></td><td>GLMat</td><td></td></tr>
</table>
End of explanation
flip; val b = a * a; val gf=gflop
print("The product took %4.2f seconds at %3.0f gflops" format (gf._2, gf._1))
gf
Explanation: CPU matrix operations use Intel MKL acceleration for linear algebra, scientific and statistical functions. BIDMat includes "tic" and "toc" for timing, and "flip" and "flop" for floating point performance.
End of explanation
val ga = grand(n,n) // Another nxn random matrix
flip; val gb = ga * ga; val gf=gflop
print("The product took %4.2f seconds at %3.0f gflops" format (gf._2, gf._1))
gf
%type ga
Explanation: GPU matrices behave very similarly.
End of explanation
def SVD(M:Mat, ndims:Int, niter:Int) = {
var Q = M.zeros(M.nrows, ndims) // A block of ndims column vectors
normrnd(0, 1, Q) // randomly initialize the vectors
Mat.useCache = true // Turn matrix caching on
for (i <- 0 until niter) { // Perform subspace iteration
val P = (Q.t * M *^ M).t // Compute P = M * M^t * Q efficiently
QRdecompt(P, Q, null) // QR-decomposition of P, saving Q
}
Mat.useCache = false // Turn caching off after the iteration
val P = (Q.t * M *^ M).t // Compute P again.
(Q, P ∙ Q) // Return Left singular vectors and singular values
}
Explanation: But much of the power of BIDMat is that we don't have to worry about matrix types. Let's explore that with an example.
SVD (Singular Value Decomposition) on a Budget
Now let's try solving a real problem with this infrastructure: an approximate Singular-Value Decomposition (SVD) or PCA of a matrix $M$. We'll do this by computing the leading eigenvalues and eigenvectors of $MM^T$. The method we use is subspace iteration, and it generalizes the power method for computing the largest-magnitude eigenvalue. An eigenvector is a vector $v$ such that
$$Mv =\lambda v$$
where $\lambda$ is a scalar called the eigenvalue.
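For reference (an added note restating the loop in the SVD code above in equations), each iteration forms
$$P_k = (Q_k^{\top} M M^{\top})^{\top} = M M^{\top} Q_k, \qquad Q_{k+1} R_{k+1} = P_k \ \text{(QR decomposition)},$$
so the columns of $Q_k$ converge to the leading eigenvectors of $MM^{\top}$, i.e. the leading left singular vectors of $M$.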
End of explanation
val ndims = 32 // Number of PCA dimension
val niter = 128 // Number of iterations to do
val S = loadSMat("../data/movielens/train.smat.lz4")(0->10000,0->4000)
val M = full(S) // Put in a dense matrix
flip;
val (svecs, svals) = SVD(M, ndims, niter); // Compute the singular vectors and values
val gf=gflop
print("The calculation took %4.2f seconds at %2.1f gflops" format (gf._2, gf._1))
svals.t
Explanation: Notice that the code above used only the "Mat" matrix type. If you examine the variables V and P in a Scala IDE (Eclipse has one) you will find that they both also have type "Mat". Let's try it with an FMat (CPU single precision, dense matrix).
Movie Data Example
We load some data from the MovieLens project.
End of explanation
S.nnz
plot(svals)
Explanation: Let's take a peek at the singular values on a plot
End of explanation
loglog(row(1 to svals.length), svals)
Explanation: Which shrinks a little too fast. Let's look at it on a log-log plot instead:
End of explanation
val G = GMat(M) // Try a dense GPU matrix
flip;
val (svecs, svals) = SVD(G, ndims, niter); // Compute the singular vectors and values
val gf=gflop
print("The calculation took %4.2f seconds at %2.1f gflops" format (gf._2, gf._1))
svals.t
Explanation: Now let's try it with a GPU, single-precision, dense matrix.
End of explanation
flip; // Try a sparse CPU matrix
val (svecs, svals) = SVD(S, ndims, niter); // Compute the singular vectors and values
val gf=gflop
print("The calculation took %4.2f seconds at %2.1f gflops" format (gf._2, gf._1))
svals.t
Explanation: That's not bad, the GPU version was nearly 4x faster. Now let's try a sparse, CPU single-precision matrix. Note that by construction our matrix was only 10% dense anyway.
Sparse SVD
End of explanation
val GS = GSMat(S) // Try a sparse GPU matrix
flip;
val (svecs, svals) = SVD(GS, ndims, niter); // Compute the singular vectors and values
val gf=gflop
print("The calculation took %4.2f seconds at %2.1f gflops" format (gf._2, gf._1))
svals.t
Explanation: This next one is important. Dense matrix operations are the bread-and-butter of scientific computing, and now most deep learning. But other machine learning tasks (logistic regression, SVMs, k-Means, topic models etc) most commonly take sparse input data like text, URLs, cookies etc. And so performance on sparse matrix operations is critical.
GPU performance on sparse data, especially power-law data - which covers most of the cases above (the commercially important cases) - has historically been poor. But in fact GPU hardware supports extremely fast sparse operations when the kernels are carefully designed. Such kernels are only available in BIDMat right now. NVIDIA's sparse matrix kernels, which have been tuned for sparse scientific data, do not work well on power-law data.
In any case, let's try BIDMat's GPU sparse matrix type:
End of explanation
val GSD = GSDMat(GS) // Try a sparse, double GPU matrix
flip;
val (svecs, svals) = SVD(GSD, ndims, niter); // Compute the singular vectors and values
val gf=gflop
print("The calculation took %4.2f seconds at %2.1f gflops" format (gf._2, gf._1))
svals.t
Explanation: That's a 10x improvement end-to-end, which is similar to the GPU's advantage on dense matrices. This result is certainly not specific to SVD, and is reproduced in most ML algorithms. So GPUs have a key role to play in general machine learning, and it's likely that at some point they will assume a central role like the one they currently enjoy in scientific computing and deep learning.
GPU Double Precision
One last performance issue: GPU hardware normally prioritizes single-precision floating point over double-precision, and there is a big gap on dense matrix operations. But calculations on sparse data are memory-limited and this largely masks the difference in arithmetic. Let's try a sparse, double-precision matrix, which will force all the calculations to double precision.
End of explanation
def SVD(M:Mat, ndims:Int, niter:Int) = {
var Q = M.zeros(M.nrows, ndims)
normrnd(0, 1, Q)
Mat.useCache = true
for (i <- 0 until niter) { // Perform subspace iteration
val P = M * (M ^* Q) // Compute P = M * M^t * Q with cusparse
QRdecompt(P, Q, null)
}
Mat.useCache = false
val P = M * (M ^* Q) // Compute P again.
(Q, getdiag(P ^* Q)) // Left singular vectors and singular values
}
// Try sparse GPU matrix
flip;
val (svecs, svals) = SVD(GS, ndims, niter);
val gf=gflop
print("The calculation took %4.2f seconds at %2.1f gflops" format (gf._2, gf._1))
svals.t
Explanation: Which is noticeably slower, but still 3x faster than the CPU version running in single precision.
Using Cusparse
NVIDIA's cusparse library, which is optimized for scientific data, doesn't perform as well on power-law data.
End of explanation
val a = ones(4,1) * row(1->5)
val b = col(1->5) * ones(1,4)
Explanation: Unicode Math Operators, Functions and Variables
As well as the standard operators +,-,*,/, BIDMat includes several other important operators with their standard unicode representation. They have an ASCII alias in case unicode input is difficult. Here they are:
<pre>
Unicode operator ASCII alias Operation
================ =========== =========
∘ *@ Element-wise (Hadamard) product
∙ dot Column-wise dot product
∙→ dotr Row-wise dot product
⊗ kron Kronecker (Cartesian) product
</pre>
End of explanation
b ∘ a
Explanation: Hadamard (element-wise) multiply
End of explanation
b ∙ a
Explanation: Dot product, by default along columns
End of explanation
b ∙→ a
Explanation: Dot product along rows
End of explanation
b ⊗ a
Explanation: Kronecker product
End of explanation
val ii = row(1->10)
ii on Γ(ii) // Stack this row on the results of a Gamma function applied to it
Explanation: As well as operators, functions in BIDMach can use unicode characters. e.g.
End of explanation
def √(x:Mat) = sqrt(x)
def √(x:Double) = math.sqrt(x)
√(ii)
Explanation: You can certainly define new unicode operators:
End of explanation
val α = row(1->10)
val β = α + 2
val γ = β on Γ(β)
Explanation: and use as much Greek as you want:
End of explanation
class NewMat(nr:Int, nc:Int, data0:Array[Float]) extends FMat(nr,nc,data0) {
def quick(a:FMat) = this * a;
def fox(a:FMat) = this + a;
def over(a:FMat) = this - a;
def lazzy(a:FMat) = this / a ;
}
implicit def convNew(a:FMat):NewMat = new NewMat(a.nrows, a.ncols, a.data)
val n = 2;
val the = rand(n,n);
val brown = rand(n,n);
val jumps = rand(n,n);
val dog = rand(n,n);
the quick brown fox jumps over the lazzy dog
Explanation: or English:
End of explanation
a ^* b
a.t * b
a *^ b
a * b.t
Explanation: Transposed Multiplies
Matrix multiply is the most expensive step in many calculations, and often involves transposed matrices. To speed up those calculations, we expose two operators that combine the transpose and multiply operations:
<pre>
^* - transpose the first argument, so a ^* b is equivalent to a.t * b
*^ - transpose the second argument, so a *^ b is equivalent to a * b.t
</pre>
These operators are implemented natively, i.e. they do not actually perform transposes, but implement the effective calculation. This is particularly important for sparse matrices since a transpose would involve an index sort.
End of explanation
import java.util.Random
val random = new Random()
def rwalk(m:FMat) = {
val n = m.length
m(0) = random.nextFloat
var i = 1
while (i < n) {
m(i) = m(i-1) + random.nextFloat - 0.5f
i += 1
}
}
val n = 100000000
val a = zeros(n, 1)
tic; val x = rwalk(a); val t=toc
print("computed %2.1f million steps per second in %2.1f seconds" format (n/t/1e6f, t))
Explanation: Highlights of the Scala Language
Scala is a remarkable language. It is an object-oriented language with similar semantics to Java, which it effectively extends. But it also has a particularly clean functional syntax for anonymous functions and closures.
It has a REPL (Read-Eval-Print-Loop) like Python, and can be used interactively or it can run scripts in or outside an interactive session.
Like Python, types are determined by assignments, but they are static rather than dynamic. So the language has the economy of Python, but the type-safety of a static language.
Scala includes a tuple type for multiple-value returns, and on-the-fly data structuring.
Finally it has outstanding support for concurrency with parallel classes and an actor system called Akka.
Performance
First we examine the performance of Scala as a scientific language. Let's implement an example that has been widely used to illustrate the performance of the Julia language. It's a random walk, i.e. a 1D array with random steps from one element to the next.
End of explanation
tic; rand(a); val b=cumsum(a-0.5f); val t=toc
print("computed %2.1f million steps per second in %2.1f seconds" format (n/t/1e6f, t))
Explanation: If we try the same calculation in the Julia language (a new language designed for scientific computing) and in Python we find that:
<table style="width:4in" align="left">
<tr><td></td><td><b>Scala</b></td><td><b>Julia</b></td><td><b>Python</b></td></tr>
<tr><td><b>with rand</b></td><td>1.0s</td><td>0.43s</td><td>147s</td></tr>
<tr><td><b>without rand</b></td><td>0.1s</td><td>0.26s</td><td>100s</td></tr>
</table>
Vectorized Operations
But does this matter? A random walk can be computed efficiently with vector operations: vector random numbers and a cumulative sum. And in general most ML algorithms can be implemented with vector and matrix operations efficiently. Let's try in BIDMat:
End of explanation
val ga = GMat(a)
tic; rand(ga); val gb=cumsum(ga-0.5f); val t=toc
print("computed %2.1f million steps per second in %2.1f seconds" format (n/t/1e6f, t))
Explanation: Which is better, due to the faster random number generation in the vectorized rand function. But more interesting is the GPU running time:
End of explanation
<img style="width:4in" alt="NGC 4414 (NASA-med).jpg" src="https://upload.wikimedia.org/wikipedia/commons/thumb/c/c3/NGC_4414_%28NASA-med%29.jpg/1200px-NGC_4414_%28NASA-med%29.jpg"/>
Explanation: If we run similar operators in Julia and Python we find:
<table style="width:5in" align="left">
<tr><td></td><td><b>BIDMach(CPU)</b></td><td><b>BIDMach(GPU)</b></td><td><b>Julia</b></td><td><b>Python</b></td></tr>
<tr><td><b>with rand</b></td><td>0.6s</td><td>0.1s</td><td>0.44s</td><td>1.4s</td></tr>
<tr><td><b>without rand</b></td><td>0.3s</td><td>0.05s</td><td>0.26s</td><td>0.5s</td></tr>
</table>
Vectorized operators even the playing field, and bring Python up to speed compared to the other systems. On the other hand, GPU hardware maintains a near-order-of-magnitude advantage for vector operations.
GPU Performance Summary
GPU-acceleration gives an order-of-magnitude speedup (or more) for the following operations:
* Dense matrix multiply
* Sparse matrix multiply
* Vector operations and reductions
* Random numbers and transcendental function evaluation
* Sorting
So it's not just for scientific computing or deep learning, but for a much wider gamut of data processing and ML.
Tapping the Java Universe
End of explanation
import org.apache.commons.math3.stat.inference.TestUtils._
Explanation: Almost every piece of Java code can be used in Scala. And therefore any piece of Java code can be used interactively.
There's very little work to do. You find a package and add it to your dependencies and then import as you would in Java.
End of explanation
val x = normrnd(0,1,1,40)
val y = normrnd(0,1,1,40) + 0.5
Explanation: Apache Commons Math includes a Statistics package with many useful functions and tests. Let's create two arrays of random data and compare them.
End of explanation
val dx = DMat(x)
val dy = DMat(y)
tTest(dx.data, dy.data)
Explanation: BIDMat has enriched matrix types like FMat, SMat etc, while Apache Commons Math expects Java arrays of double-precision floats. To get these, we can convert FMat to DMat (double) and extract the data field, which contains the matrix's data.
End of explanation
implicit def fMatToDarray(a:FMat):Array[Double] = DMat(a).data
Explanation: But rather than doing this conversion every time we want to use some BIDMat matrices, we can instruct Scala to do the work for us. We do this with an implicit conversion from FMat to Array[Double]. Simply defining this function will cause a coercion whenever we supply an FMat argument to a function that expects Array[Double].
End of explanation
tTest(x, y)
Explanation: And magically we can perform t-Tests on BIDMat matrices as though they had known each other all along.
End of explanation
import org.apache.commons.math3.distribution._
val betadist = new BetaDistribution(2,5)
val n = 100000
val x = new DMat(1, n, (0 until n).map(x => betadist.sample).toArray); null
hist(x, 100)
Explanation: and it's important to get your daily dose of beta:
End of explanation
<image src="https://sketchesfromthealbum.files.wordpress.com/2015/01/jacquesderrida.jpg" style="width:4in"/>
Explanation: Deconstruction
End of explanation
val i = row(0->10).data
Explanation: Let's make a raw Java Array of float integers.
End of explanation
val j = i.map(x => (x, x*x))
Explanation: First of all, Scala supports Tuple types for ad-hoc data structuring.
End of explanation
j.map{case (x,y) => (y,x)}
Explanation: We can also deconstruct tuples using Scala Pattern matching:
End of explanation
val k = j.reduce((ab,cd) => {val (a,b) = ab; val (c,d) = cd; (a+c, b+d)})
Explanation: And reduce operations can use deconstruction as well:
End of explanation |
3,045 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="../../../images/qiskit-heading.png" alt="Note
Step1: Quantum walk, phase I/II on $N=4$ lattice$(t=8)$
Step2: Below is the result when executing the circuit on the simulator.
Step3: And below is the result when executing the circuit on the real device.
Step4: Conclusion
Step5: Below is the result when executing the circuit on the simulator.
Step6: And below is the result when executing the circuit on the real device. | Python Code:
#initialization
import sys
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
# importing QISKit
from qiskit import QuantumCircuit, ClassicalRegister, QuantumRegister
from qiskit import Aer, IBMQ, execute
from qiskit.wrapper.jupyter import *
from qiskit.backends.ibmq import least_busy
from qiskit.tools.visualization import matplotlib_circuit_drawer as circuit_drawer
from qiskit.tools.visualization import plot_histogram, qx_color_scheme
IBMQ.load_accounts()
sim_backend = Aer.get_backend('qasm_simulator')
device_backend = least_busy(IBMQ.backends(operational=True, simulator=False))
device_coupling = device_backend.configuration()['coupling_map']
print("the best backend is " + device_backend.name() + " with coupling " + str(device_coupling))
Explanation: <img src="../../../images/qiskit-heading.png" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
Topological Quantum Walks on IBM Q
This notebook is based on the paper of Radhakrishnan Balu, Daniel Castillo, and George Siopsis, "Physical realization of topological quantum walks on IBM-Q and beyond" arXiv:1710.03615 [quant-ph](2017).
Contributors
Keita Takeuchi (Univ. of Tokyo) and Rudy Raymond (IBM Research - Tokyo)
Introduction: challenges in implementing topological walk
In this section, we introduce one model of quantum walk called split-step topological quantum walk.
We define Hilbert space of quantum walker states and coin states as
$\mathcal{H}{\mathcal{w}}={\vert x \rangle, x\in\mathbb{Z}_N}, \mathcal{H}{\mathcal{c}}={\vert 0 \rangle, \vert 1 \rangle}$, respectively. Then, step operator is defined as
$$
S^+ := \vert 0 \rangle_c \langle 0 \vert \otimes L^+ + \vert 1 \rangle_c \langle 1 \vert \otimes \mathbb{I}\
S^- := \vert 0 \rangle_c \langle 0 \vert \otimes \mathbb{I} + \vert 1 \rangle_c \langle 1 \vert \otimes L^-,
$$
where
$$
L^{\pm}\vert x \rangle_{\mathcal w} := \vert (x\pm1)\ \rm{mod}\ N \rangle_{\mathcal w}
$$
is a shift operator. The boundary condition is included.
Also, we define the coin operator as
$$
T(\theta):=e^{-i\theta Y} = \begin{bmatrix} \cos\theta & -\sin\theta \ \sin\theta & \cos\theta \end{bmatrix}.
$$
One step of the quantum walk is the unitary operator defined below, which uses two coin angles, i.e., $\theta_1$ and $\theta_2$:
$$
W := S^- T(\theta_2)S^+ T(\theta_1).
$$
Intuitively speaking, the walk consists of flipping coin states and based on the outcome of the coins, the shifting operator is applied to determine the next position of the walk.
Next, we consider a walk with two phases that depend on the current position:
$$
(\theta_1,\theta_2) = \begin{cases}
(\theta_{1}^{-},\ \theta_{2}^{-}) & 0 \leq x < \frac{N}{2} \
(\theta_{1}^{+},\ \theta_{2}^{+}) & \frac{N}{2} \leq x < N.
\end{cases}
$$
Then, two coin operators are rewritten as
$$
\mathcal T_i = \sum^{N-1}_{x=0}e^{-i\theta_i(x) Y_c}\otimes \vert x \rangle_w \langle x \vert,\ i=1,2.
$$
By using this, one step of quantum walk is equal to
$$
W = S^- \mathcal T_2 S^+ \mathcal T_1.
$$
In principle, we can execute the quantum walk by multiplying $W$ many times, but then we need many circuit elements to construct it. This is not possible with the current approximate quantum computers due to large errors produced after each application of circuit elements (gates).
Hamiltonian of topological walk
Alternatively, we can think of the time evolution of the states. The Hamiltonian $H$ is regarded as $H=\lim_{n \to \infty}W^n$ (see below for further details).
For example, when $(\theta_1,\ \theta_2) = (0,\ \pi/2)$, the Schrödinger equation is
$$
i\frac{d}{dt}\vert \Psi \rangle = H_{\rm I} \vert \Psi \rangle,\ H_{\rm I} = -Y\otimes [2\mathbb I+L^+ + L^-].
$$
If Hamiltonian is time independent, the solution of the Schrödinger equation is
$$
\vert \Psi(t) \rangle = e^{-iHt} \vert \Psi(0) \rangle,
$$
so we can get the final state at arbitrary time $t$ at once without operating W step by step, if we know the corresponding Hamiltonian.
The Hamiltonian can be computed as below.
Set $(\theta_1,\ \theta_2) = (\epsilon,\ \pi/2+\epsilon)$, and let $\epsilon\to 0$ and the number of steps $s\to \infty$
while $s\epsilon=t/2$ is held finite. Then,
\begin{align}
H_I&=\lim_{n \to \infty}W^n\
\rm{(LHS)} &= \mathbb{I}-iH_{I}t+O(t^2)\
\rm{(RHS)} &= \lim_{\substack{s\to \infty\ \epsilon\to 0}}(W^4)^{s/4}=
\lim_{\substack{s\to \infty\ \epsilon\to0}}(\mathbb{I}+O(\epsilon))^{s/4}\
&\simeq \lim_{\substack{s\to \infty\ \epsilon\to 0}}\mathbb{I}+\frac{s}{4}O(\epsilon)\
&= \lim_{\epsilon\to 0}\mathbb{I}+iY\otimes [2\mathbb I+L^+ + L^-]t+O(\epsilon).
\end{align}
Therefore,
$$H_{\rm I} = -Y\otimes [2\mathbb I+L^+ + L^-].$$
Computation model
In order to check the correctness of the results of the quantum walk implementation on IBM Q, we investigate two models with different coin phases. Let the number of positions on the line be $N=4$.
- $\rm I / \rm II:\ (\theta_1,\theta_2) = \begin{cases}
(0,\ -\pi/2) & 0 \leq x < 2 \
(0,\ \pi/2) & 2 \leq x < 4
\end{cases}$
- $\rm I:\ (\theta_1,\theta_2)=(0,\ \pi/2),\ 0 \leq x < 4$
That is, the former is a quantum walk on a line with two phases of coins, while the latter is that with only one phase of coins.
<img src="../images/q_walk_lattice_2phase.png" width="30%" height="30%">
<div style="text-align: center;">
Figure 1. Quantum Walk on a line with two phases
</div>
The Hamiltonian operators for each of the walk on the line are, respectively,
$$
H_{\rm I/II} = Y \otimes \mathbb I \otimes \frac{\mathbb I + Z}{2}\
H_{\rm I} = Y\otimes (2\mathbb I\otimes \mathbb I + \mathbb I\otimes X + X \otimes X).
$$
Then, we want to implement the above Hamiltonian operators with the unitary operators as product of two-qubit gates CNOTs, CZs, and single-qubit gate rotation matrices. Notice that the CNOT and CZ gates are
\begin{align}
\rm{CNOT_{ct}}&=\left |0\right\rangle_c\left\langle0\right | \otimes I_t + \left |1\right\rangle_c\left\langle1\right | \otimes X_t\
\rm{CZ_{ct}}&=\left |0\right\rangle_c\left\langle0\right | \otimes I_t + \left |1\right\rangle_c\left\langle1\right | \otimes Z_t.
\end{align}
Below is the reference of converting Hamiltonian into unitary operators useful for the topological quantum walk.
<br><br>
<div style="text-align: center;">
Table 1. Relation between the unitary operator and product of elementary gates
</div>
|unitary operator|product of circuit elements|
|:-:|:-:|
|$e^{-i\theta X_c X_j}$|$\rm{CNOT_{cj}}\cdot e^{-i\theta X_c t}\cdot \rm{CNOT_{cj}}$|
|$e^{-i\theta X_c Z_j}$|$\rm{CZ_{cj}}\cdot e^{-i\theta X_c t}\cdot \rm{CZ_{cj}}$|
|$e^{-i\theta Y_c X_j}$|$\rm{CNOT_{cj}}\cdot e^{i\theta Y_c t}\cdot \rm{CNOT_{cj}}$|
|$e^{-i\theta Y_c Z_j}$|$\rm{CNOT_{jc}}\cdot e^{-i\theta Y_c t}\cdot \rm{CNOT_{jc}}$|
|$e^{-i\theta Z_c X_j}$|$\rm{CZ_{cj}}\cdot e^{-i\theta X_j t}\cdot \rm{CZ_{cj}}$|
|$e^{-i\theta Z_c Z_j}$|$\rm{CNOT_{jc}}\cdot e^{-i\theta Z_c t}\cdot \rm{CNOT_{jc}}$|
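A quick numerical sanity check of one row of this table (an addition for this writeup, not in the original notebook; the qubit ordering below is coin ⊗ position, and the rotation angle is written simply as $\theta$):
import numpy as np
from scipy.linalg import expm
I2 = np.eye(2); X = np.array([[0, 1], [1, 0]]); Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0])
P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])         # |0><0| and |1><1|
CNOT_jc = np.kron(I2, P0) + np.kron(X, P1)                # control on j (2nd factor), target on c (1st factor)
theta = 0.7
lhs = expm(-1j * theta * np.kron(Y, Z))                   # e^{-i theta Y_c Z_j}
rhs = CNOT_jc @ expm(-1j * theta * np.kron(Y, I2)) @ CNOT_jc
print(np.allclose(lhs, rhs))                              # True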
By using these formulas, the unitary operators are represented using only CNOT, CZ, and rotation matrices, so we can implement them on IBM Q, as below.
Phase I/II:<br><br>
\begin{align}
e^{-iH_{I/II}t}=~&e^{-itY_c \otimes \mathbb I_0 \otimes \frac{\mathbb I_1 + Z_1}{2}}\
=~& e^{-iY_c t}e^{-itY_c\otimes Z_1}\
=~& e^{-iY_c t}\cdot\rm{CNOT_{1c}}\cdot e^{-i Y_c t}\cdot\rm{CNOT_{1c}}
\end{align}
<img src="../images/c12.png" width="50%" height="60%">
<div style="text-align: center;">
Figure 2. Phase I/II on $N=4$ lattice$(t=8)$ - $q[0]:2^0,\ q[1]:coin,\ q[2]:2^1$
</div>
<br><br>
Phase I:<br><br>
\begin{align}
e^{-iH_I t}=~&e^{-itY_c\otimes (2\mathbb I_0\otimes \mathbb I_1 + \mathbb I_0\otimes X_1 + X_0 \otimes X_1)}\
=~&e^{-2itY_c}e^{-itY_c\otimes X_1}e^{-itY_c\otimes X_0 \otimes X_1}\
=~&e^{-2iY_c t}\cdot\rm{CNOT_{c1}}\cdot\rm{CNOT_{c0}}\cdot e^{-iY_c t}\cdot\rm{CNOT_{c0}}\cdot e^{-iY_c t}\cdot\rm{CNOT_{c1}}
\end{align}
<img src="../images/c1.png" width="70%" height="70%">
<div style="text-align: center;">
Figure 3. Phase I on $N=4$ lattice$(t=8)$ - $q[0]:2^0,\ q[1]:2^1,\ q[2]:coin$
</div>
Implementation
End of explanation
t=8 #time
q1_2 = QuantumRegister(3)
c1_2 = ClassicalRegister(3)
qw1_2 = QuantumCircuit(q1_2, c1_2)
qw1_2.x(q1_2[2])
qw1_2.u3(t, 0, 0, q1_2[1])
qw1_2.cx(q1_2[2], q1_2[1])
qw1_2.u3(t, 0, 0, q1_2[1])
qw1_2.cx(q1_2[2], q1_2[1])
qw1_2.measure(q1_2[0], c1_2[0])
qw1_2.measure(q1_2[1], c1_2[2])
qw1_2.measure(q1_2[2], c1_2[1])
print(qw1_2.qasm())
circuit_drawer(qw1_2, style=qx_color_scheme())
Explanation: Quantum walk, phase I/II on $N=4$ lattice$(t=8)$
End of explanation
job = execute(qw1_2, sim_backend, shots=1000)
result = job.result()
plot_histogram(result.get_counts())
Explanation: Below is the result when executing the circuit on the simulator.
End of explanation
%%qiskit_job_status
HTMLProgressBar()
job = execute(qw1_2, backend=device_backend, coupling_map=device_coupling, shots=100)
result = job.result()
plot_histogram(result.get_counts())
Explanation: And below is the result when executing the circuit on the real device.
End of explanation
t=8 #time
q1 = QuantumRegister(3)
c1 = ClassicalRegister(3)
qw1 = QuantumCircuit(q1, c1)
qw1.x(q1[1])
qw1.cx(q1[2], q1[1])
qw1.u3(t, 0, 0, q1[2])
qw1.cx(q1[2], q1[0])
qw1.u3(t, 0, 0, q1[2])
qw1.cx(q1[2], q1[0])
qw1.cx(q1[2], q1[1])
qw1.u3(2*t, 0, 0, q1[2])
qw1.measure(q1[0], c1[0])
qw1.measure(q1[1], c1[1])
qw1.measure(q1[2], c1[2])
print(qw1.qasm())
circuit_drawer(qw1, style=qx_color_scheme())
Explanation: Conclusion: The walker is bounded at the initial state, which is the boundary of two phases, when the quantum walk on the line has two phases.
Quantum walk, phase I on $N=4$ lattice$(t=8)$
End of explanation
job = execute(qw1, sim_backend, shots=1000)
result = job.result()
plot_histogram(result.get_counts())
Explanation: Below is the result when executing the circuit on the simulator.
End of explanation
%%qiskit_job_status
HTMLProgressBar()
job = execute(qw1, backend=device_backend, coupling_map=device_coupling, shots=100)
result = job.result()
plot_histogram(result.get_counts())
Explanation: And below is the result when executing the circuit on the real device.
End of explanation |
3,046 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Project Euler
Step2: Now write a set of assert tests for your number_to_words function that verifies that it is working as expected.
Step4: Now define a count_letters(n) that returns the number of letters used to write out the words for all of the the numbers 1 to n inclusive.
Step5: Now write a set of assert tests for your count_letters function that verifies that it is working as expected.
Step6: Finally used your count_letters function to solve the original question. | Python Code:
def number_to_words(n):
Given a number n between 1-1000 inclusive return a list of words for the number.
# YOUR CODE HERE
# English name of each digit/ place in dictionary
one = {
0: '',
1: 'one',
2: 'two',
3: 'three',
4: 'four',
5: 'five',
6: 'six',
7: 'seven',
8: 'eight',
9: 'nine'
}
teen = {
10: 'ten',
11: 'eleven',
12: 'twelve',
13: 'thirteen',
14: 'fourteen',
15: 'fifteen',
16: 'sixteen',
17: 'seventeen',
18: 'eighteen',
19: 'nineteen',
}
ten = {
0: '',
2: 'twenty',
3: 'thirty',
4: 'forty',
5: 'fifty',
6: 'sixty',
7: 'seventy',
8: 'eighty',
9: 'ninety'
}
hundred = {
1: 'onehundred',
2: 'twohundred',
3: 'threehundred',
4: 'fourhundred',
5: 'fivehundred',
6: 'sixhundred',
7: 'sevenhundred',
8: 'eighthundred',
9: 'ninehundred'
}
hundredand = {
1: 'onehundredand',
2: 'twohundredand',
3: 'threehundredand',
4: 'fourhundredand',
5: 'fivehundredand',
6: 'sixhundredand',
7: 'sevenhundredand',
8: 'eighthundredand',
9: 'ninehundredand'
}
#return the name of 1-9 as a string
if n in range (0, 10):
return one[n]
#return the name of 10-19 as a string
elif n in range(10, 20):
return teen[n]
#return the name of 20-99 as a string
elif n in range (20, 100):
#turn number in to string
a = str(n)
#Call name of first digat from ten list
b = int(a[0])
#Call name of second digat from one list
c = int(a[1])
#return names as linked string
return ten[b] + one[c]
#return the name of 100-999 as a string
elif n in range (99, 1000):
#turn number into string
a = str(n)
#if last 2 digits are in teens
if int(a[1]) == 1:
#call name of first digit from hundred list
b = int(a[0])
#call name of last 2 digits from teen list
c = int(a[1:])
#return number as linked string
return hundredand[b] + teen[c]
#If it ends in a double zero
if int(a[1:]) == 0:
b = int(a[0])
return hundred[b]
#If last 2 digits are not in teen or 00
else:
#call name of first digit from hundred list
d = int(a[0])
#Call name of second digat from ten list
e = int(a[1])
#Call name of second digat from one list
f = int(a[2])
return hundredand[d] + ten[e] + one[f]
#retun onethousan if n = 1000
elif n == 1000:
return 'onethousand'
#If anything that is not 1 - 1000 is enterd return fail as a string
else:
return 'fail'
Explanation: Project Euler: Problem 17
https://projecteuler.net/problem=17
If the numbers 1 to 5 are written out in words: one, two, three, four, five, then there are 3 + 3 + 5 + 4 + 4 = 19 letters used in total.
If all the numbers from 1 to 1000 (one thousand) inclusive were written out in words, how many letters would be used?
NOTE: Do not count spaces or hyphens. For example, 342 (three hundred and forty-two) contains 23 letters and 115 (one hundred and fifteen) contains 20 letters. The use of "and" when writing out numbers is in compliance with British usage.
First write a number_to_words(n) function that takes an integer n between 1 and 1000 inclusive and returns a list of words for the number as described above
End of explanation
# YOUR CODE HERE
assert number_to_words(9) == 'nine'
assert number_to_words(16) == 'sixteen'
assert number_to_words(56) == 'fiftysix'
assert number_to_words(200) == 'twohundred'
assert number_to_words(315) == 'threehundredandfifteen'
assert number_to_words(638) == 'sixhundredandthirtyeight'
assert number_to_words(1000) == 'onethousand'
assert True # use this for grading the number_to_words tests.
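As an extra check (added here, not part of the original assignment cell), the two worked examples quoted in the problem statement can be verified directly: 342 and 115 should come to 23 and 20 letters respectively.
assert len(number_to_words(342)) == 23   # "three hundred and forty-two"
assert len(number_to_words(115)) == 20   # "one hundred and fifteen"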
Explanation: Now write a set of assert tests for your number_to_words function that verifies that it is working as expected.
End of explanation
def count_letters(n):
Count the number of letters used to write out the words for 1-n inclusive.
# YOUR CODE HERe
#Return the length number_to_word as an integer
return int(len(number_to_words(n)))
Explanation: Now define a count_letters(n) that returns the number of letters used to write out the words for all of the numbers 1 to n inclusive.
End of explanation
# YOUR CODE HERE
assert count_letters(9) == 4
assert count_letters(16) == 7
assert count_letters(56) == 8
assert count_letters(200) == 10
assert count_letters(315) == 22
assert count_letters(638) == 24
assert count_letters(1000) == 11
assert True # use this for grading the count_letters test.
Explanation: Now write a set of assert tests for your count_letters function that verifies that it is working as expected.
End of explanation
# YOUR CODE HERE
n = 0
i = 0
while n < 1000:
n = n + 1
i = i + count_letters(n)
print (i)
assert True # use this for grading the original solution.
Explanation: Finally, use your count_letters function to solve the original question.
End of explanation |
3,047 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Installation
pip install https
Step1: Importing data
For this tutorial, we are using anthropometric data from the Genetic Investigation of ANthropometric Traits (GIANT) consortium
Step2: Manhattan plots
Import the module for Manhattan plots
Step3: Classic Manhattan plot
Step4: Recoloring the plot
Step5: Adding genome-wide significant line, and suggestive lines
Step6: Plotting two groups in the same figure (double plot)
Step7: Plotting two groups in the same figure (inverted plot)
Step8: QQ plots
First, let's impot the module for QQ plots | Python Code:
%matplotlib inline
#Here we set the dimensions for the figures in this notebook
import matplotlib as mpl
mpl.rcParams['figure.dpi']=150
mpl.rcParams['savefig.dpi']=150
mpl.rcParams['figure.figsize']=7.375, 3.375
Explanation: Installation
pip install https://github.com/khramts/assocplots/archive/master.zip
This tutorial provides examples of code for static Manhattan and QQ plots. In order to view the figures in this notebook it is necessary to include the following line:
End of explanation
import numpy as np
hip_m=np.genfromtxt('HIP_MEN_chr_pos_rs_pval.txt', dtype=None)
hip_w=np.genfromtxt('HIP_WOMEN_chr_pos_rs_pval.txt', dtype=None)
Explanation: Importing data
For this tutorial, we are using anthropometric data from the Genetic Investigation of ANthropometric Traits (GIANT) consortium:
https://www.broadinstitute.org/collaboration/giant/index.php/GIANT_consortium_data_files
Results are described in Randall JC, Winkler TW, Kutalik Z, Berndt SI, Jackson AU, Monda KL, et al. (2013) Sex-stratified Genome-wide Association Studies Including 270,000 Individuals Show Sexual Dimorphism in Genetic Loci for Anthropometric Traits. PLoS Genet 9(6): e1003500. doi:10.1371/journal.pgen.1003500
http://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.1003500
In this tutorial we will be using one trait (hip circumference) measured in two groups: males and females. These are the files listed under Sex Stratified Anthropometrics subsection. For example, here is one of the files called GIANT_Randall2013PlosGenet_stage1_publicrelease_HapMapCeuFreq_HIP_WOMEN_N.txt and the first couple of lines looks like this:
MarkerName A1 A2 Freq.Hapmap.Ceu BETA SE.2gc P.2gc N
rs4747841 a g 0.55 0.0054 0.0080 0.50 40354.8
rs4749917 t c 0.45 -0.0054 0.0080 0.50 40354.8
rs737656 a g 0.3667 0.0035 0.0083 0.67 40354.7
rs737657 a g 0.3583 0.0020 0.0083 0.81 40351.8
The P.2gc column is the p-value of the association test. For the Manhattan plot, besides the p-value, we also need to know SNPs chromosome and genomic position. To obtain the chromosome number and position for each SNP I used a python script called LiftRsNumber.py from this Goncalo Abecasis’ group http://genome.sph.umich.edu/wiki/LiftOver
Since we only need to know the SNP's chromosome, position, and p-value, I generated the following file out of the one above: HIP_WOMEN_chr_pos_rs_pval.txt, where column 1 = chromosome, 2=position, 3=SNP rs number, 4=p-value
10 9918166 rs4747841 0.5
10 9918296 rs4749917 0.5
10 98252982 rs737656 0.67
10 98253133 rs737657 0.81
Alternatively, you can download reduced data from https://www.dropbox.com/sh/hw6ao63ieh363nd/AAB13crEGYAic6Fjv3a-yxVVa?dl=0
We'll begin making the plots by importing the data.
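(An added aside on the loading step above: np.genfromtxt can also assign readable field names instead of the default f0-f3; this is purely a convenience and is not required by assocplots.)
hip_w_named = np.genfromtxt('HIP_WOMEN_chr_pos_rs_pval.txt', dtype=None, encoding=None,
                            names=['chrom', 'pos', 'rs', 'pval'])
print(hip_w_named['pval'][:3])   # same values as hip_w['f3'][:3]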
End of explanation
from assocplots.manhattan import *
Explanation: Manhattan plots
Import the module for Manhattan plots
End of explanation
chrs = [str(i) for i in range(1,23)]
chrs_names = np.array([str(i) for i in range(1,23)])
chrs_names[1::2] = ''
cmap = plt.get_cmap('viridis')
colors = [cmap(i) for i in [0.0,0.33,0.67,0.90]]
# Alternatively you can input colors by hand
from matplotlib.colors import hex2color
colors = ['#1b9e77', "#d95f02", '#7570b3', '#e7298a']
# Converting from HEX into RGB
colors = [hex2color(colors[i]) for i in range(len(colors))]
# hip_m['f0'].astype(str) is required in Python 3, since it reads unicode string by default
manhattan( hip_m['f3'], hip_m['f1'], hip_m['f0'].astype(str), 'Hip men',
plot_type='single',
chrs_plot=[str(i) for i in range(1,23)],
chrs_names=chrs_names,
cut = 0,
title='Anthropometric traits',
xlabel='chromosome',
ylabel='-log10(p-value)',
lines= [],
colors = colors,
scaling = '-log10')
Explanation: Classic Manhattan plot
End of explanation
# To recolor the plot, select a different color map: http://matplotlib.org/examples/color/colormaps_reference.html
cmap = plt.get_cmap('seismic')
colors = [cmap(i) for i in [0.0,0.33,0.67,0.90]]
manhattan( hip_m['f3'], hip_m['f1'], hip_m['f0'].astype(str), 'Hip men',
plot_type='single',
chrs_plot=[str(i) for i in range(1,23)],
chrs_names=chrs_names,
cut = 0,
title='Anthropometric traits',
xlabel='chromosome',
ylabel='-log10(p-value)',
lines= [],
colors = colors)
Explanation: Recoloring the plot
End of explanation
manhattan( hip_m['f3'], hip_m['f1'], hip_m['f0'].astype(str), 'Hip men',
plot_type='single',
chrs_plot=[str(i) for i in range(1,23)],
chrs_names=chrs_names,
cut = 0,
title='Anthropometric traits',
xlabel='chromosome',
ylabel='-log10(p-value)',
lines= [6, 8],
lines_colors=['b', 'r'],
lines_styles=['-','--'],
lines_widths=[1,2],
colors = colors)
plt.savefig('Manhattan_HipMen.png', dpi=300)
Explanation: Adding genome-wide significant line, and suggestive lines
End of explanation
mpl.rcParams['figure.figsize']=7.375, 5.375
manhattan( hip_m['f3'], hip_m['f1'], hip_m['f0'].astype(str), 'Hip men',
p2=hip_w['f3'], pos2=hip_w['f1'], chr2=hip_w['f0'].astype(str), label2='Hip women',
plot_type='double',
chrs_plot=[str(i) for i in range(1,23)],
chrs_names=chrs_names,
cut = 0,
title='Anthropometric traits',
xlabel='chromosome',
ylabel='-log10(p-value)',
lines= [],
top1 = 15,
top2 = 15,
colors = colors)
plt.subplots_adjust(hspace=0.08)
Explanation: Plotting two groups in the same figure (double plot)
End of explanation
mpl.rcParams['figure.figsize']=7.375, 5.375
manhattan( hip_m['f3'], hip_m['f1'], hip_m['f0'].astype(str), 'Hip men',
p2=hip_w['f3'], pos2=hip_w['f1'], chr2=hip_w['f0'].astype(str), label2='Hip women',
plot_type='inverted',
chrs_plot=[str(i) for i in range(1,23)],
chrs_names=chrs_names,
cut = 0,
title='Anthropometric traits',
xlabel='chromosome',
ylabel='-log10(p-value)',
lines= [],
top1 = 15,
top2 = 15,
colors = colors)
plt.savefig('Manhattan_Hip_inverted.png', dpi=300)
Explanation: Plotting two groups in the same figure (inverted plot)
End of explanation
from assocplots.qqplot import *
# This is an example of a classic QQ plot with 95% confidence interval plotted for the null distribution
mpl.rcParams['figure.dpi']=100
mpl.rcParams['savefig.dpi']=100
mpl.rcParams['figure.figsize']=5.375, 5.375
qqplot([hip_m['f3']],
['HIP men'],
color=['b'],
fill_dens=[0.2],
error_type='theoretical',
distribution='beta',
title='')
plt.savefig('qq_HIPmen_theoretical_error.png', dpi=300)
# Now we want to calculate the genomic control (inflation factor, lambda)
get_lambda(hip_m['f3'], definition = 'median')
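# Added cross-check (not in the original notebook): the median-based genomic inflation
# factor can also be computed directly from the p-values with scipy, and should be close
# to the get_lambda() value above.
from scipy import stats
chi2_obs = stats.chi2.isf(hip_m['f3'], df=1)              # p-values -> 1-df chi-square statistics
print(np.median(chi2_obs) / stats.chi2.ppf(0.5, df=1))    # observed median / expected median (~0.455)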
# This is a qq plot showing two experimental groups
mpl.rcParams['figure.dpi']=100
mpl.rcParams['savefig.dpi']=100
mpl.rcParams['figure.figsize']=5.375, 5.375
qqplot([hip_m['f3'], hip_w['f3']],
['HIP men', 'HIP women'],
color=['b','r'],
fill_dens=[0.2,0.2],
error_type='experimental',
distribution='beta',
title='Anthropometric traits')
plt.savefig('qq_two_hip_groups.png', dpi=300)
Explanation: QQ plots
First, let's import the module for QQ plots:
End of explanation |
3,048 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'fio-ronm', 'sandbox-3', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: FIO-RONM
Source ID: SANDBOX-3
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:01
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
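# Illustrative pattern only -- these values are placeholders, not FIO-RONM documentation.
# For a multi-valued (1.N) property, one DOC.set_value call per selected choice appears
# to be the intended usage (check the notebook help page if unsure), e.g.:
# DOC.set_value("Sea ice concentration")
# DOC.set_value("Sea ice thickness")
# DOC.set_value("Sea ice temperature")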
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD but an assumed distribution, and fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
3,049 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Initialise the libs
Step1: Load the data
Step2: Data exploration
Step3: Helper functions
Step4: Ridge regression model fitting
Step5: Ridge regression on subsets
Using ridge regression with small l2
Step6: Applying a higher L2 value
Step7: Selecting an L2 penalty via cross-validation
Just like the polynomial degree, the L2 penalty is a "magic" parameter we need to select. We could use the validation set approach as we did in the last module, but that approach has a major disadvantage
Step8: Minimize the l2 by using cross validation
Step9: Use the best l2 to train the model on all the data | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import linear_model
import numpy as np
from math import ceil
Explanation: Initialise the libs
End of explanation
dtype_dict = {'bathrooms':float, 'waterfront':int, 'sqft_above':int, 'sqft_living15':float, 'grade':int, 'yr_renovated':int, 'price':float, 'bedrooms':float, 'zipcode':str, 'long':float, 'sqft_lot15':float, 'sqft_living':float, 'floors':float, 'condition':int, 'lat':float, 'date':str, 'sqft_basement':int, 'yr_built':int, 'id':str, 'sqft_lot':int, 'view':int}
regressionDir = '/home/weenkus/workspace/Machine Learning - University of Washington/Regression/datasets/'
sales = pd.read_csv(regressionDir + 'kc_house_data.csv', dtype = dtype_dict)
sales = sales.sort_values(['sqft_living','price'])
# dtype_dict same as above
set_1 = pd.read_csv(regressionDir + 'wk3_kc_house_set_1_data.csv', dtype=dtype_dict)
set_2 = pd.read_csv(regressionDir + 'wk3_kc_house_set_2_data.csv', dtype=dtype_dict)
set_3 = pd.read_csv(regressionDir + 'wk3_kc_house_set_3_data.csv', dtype=dtype_dict)
set_4 = pd.read_csv(regressionDir + 'wk3_kc_house_set_4_data.csv', dtype=dtype_dict)
train_valid_shuffled = pd.read_csv(regressionDir + 'wk3_kc_house_train_valid_shuffled.csv', dtype=dtype_dict)
test = pd.read_csv(regressionDir + 'wk3_kc_house_test_data.csv', dtype=dtype_dict)
training = pd.read_csv(regressionDir + 'wk3_kc_house_train_data.csv', dtype=dtype_dict)
Explanation: Load the data
End of explanation
# Show plots in jupyter
%matplotlib inline
sales.head()
sales['price'].head()
Explanation: Data exploration
End of explanation
def polynomial_dataframe(feature, degree): # feature is pandas.Series type
# assume that degree >= 1
# initialize the dataframe:
poly_dataframe = pd.DataFrame()
# and set poly_dataframe['power_1'] equal to the passed feature
poly_dataframe['power_1'] = feature
# first check if degree > 1
if degree > 1:
# then loop over the remaining degrees:
for power in range(2, degree+1):
# first we'll give the column a name:
name = 'power_' + str(power)
# assign poly_dataframe[name] to be feature^power; use apply(*)
            poly_dataframe[name] = feature.apply(lambda x: x**power)
return poly_dataframe
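# Quick sanity check of the helper on a tiny series (not part of the original assignment):
print(polynomial_dataframe(pd.Series([1., 2., 3.]), 3))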
Explanation: Helper functions
End of explanation
poly15_data = polynomial_dataframe(sales['sqft_living'], 15) # use equivalent of `polynomial_sframe`
print(poly15_data)
l2_small_penalty = 1.5e-5
model = linear_model.Ridge(alpha=l2_small_penalty, normalize=True)
model.fit(poly15_data, sales['price'])
model.coef_
plt.plot(poly15_data, model.predict(poly15_data), poly15_data, sales['price'])
plt.show()
Explanation: Ridge regression model fitting
End of explanation
l2_small_penalty=1e-9
poly15_data_set1 = polynomial_dataframe(set_1['sqft_living'], 15) # use equivalent of `polynomial_sframe`
model1 = linear_model.Ridge(alpha=l2_small_penalty, normalize=True)
model1.fit(poly15_data_set1, set_1['price'])
poly15_data_set2 = polynomial_dataframe(set_2['sqft_living'], 15) # use equivalent of `polynomial_sframe`
model2 = linear_model.Ridge(alpha=l2_small_penalty, normalize=True)
model2.fit(poly15_data_set2, set_2['price'])
poly15_data_set3 = polynomial_dataframe(set_3['sqft_living'], 15) # use equivalent of `polynomial_sframe`
model3 = linear_model.Ridge(alpha=l2_small_penalty, normalize=True)
model3.fit(poly15_data_set3, set_3['price'])
poly15_data_set4 = polynomial_dataframe(set_4['sqft_living'], 15) # use equivalent of `polynomial_sframe`
model4 = linear_model.Ridge(alpha=l2_small_penalty, normalize=True)
model4.fit(poly15_data_set4, set_4['price'])
plt.plot(poly15_data_set1, model1.predict(poly15_data_set1), poly15_data_set1, set_1['price'])
plt.show()
plt.plot(poly15_data_set2, model2.predict(poly15_data_set2), poly15_data_set2, set_2['price'])
plt.show()
plt.plot(poly15_data_set3, model3.predict(poly15_data_set3), poly15_data_set3, set_3['price'])
plt.show()
plt.plot(poly15_data_set4, model4.predict(poly15_data_set4), poly15_data_set4, set_4['price'])
plt.show()
print('Model 1 coefficients: ', model1.coef_)
print('Model 2 coefficients: ', model2.coef_)
print('Model 3 coefficients: ', model3.coef_)
print('Model 4 coefficients: ', model4.coef_)
Explanation: Ridge regression on subsets
Using ridge regression with small l2
End of explanation
l2_large_penalty=1.23e2
poly15_data_set1 = polynomial_dataframe(set_1['sqft_living'], 15) # use equivalent of `polynomial_sframe`
model1 = linear_model.Ridge(alpha=l2_large_penalty, normalize=True)
model1.fit(poly15_data_set1, set_1['price'])
poly15_data_set2 = polynomial_dataframe(set_2['sqft_living'], 15) # use equivalent of `polynomial_sframe`
model2 = linear_model.Ridge(alpha=l2_large_penalty, normalize=True)
model2.fit(poly15_data_set2, set_2['price'])
poly15_data_set3 = polynomial_dataframe(set_3['sqft_living'], 15) # use equivalent of `polynomial_sframe`
model3 = linear_model.Ridge(alpha=l2_large_penalty, normalize=True)
model3.fit(poly15_data_set3, set_3['price'])
poly15_data_set4 = polynomial_dataframe(set_4['sqft_living'], 15) # use equivalent of `polynomial_sframe`
model4 = linear_model.Ridge(alpha=l2_large_penalty, normalize=True)
model4.fit(poly15_data_set4, set_4['price'])
plt.plot(poly15_data_set1, model1.predict(poly15_data_set1), poly15_data_set1, set_1['price'])
plt.show()
plt.plot(poly15_data_set2, model2.predict(poly15_data_set2), poly15_data_set2, set_2['price'])
plt.show()
plt.plot(poly15_data_set3, model3.predict(poly15_data_set3), poly15_data_set3, set_3['price'])
plt.show()
plt.plot(poly15_data_set4, model4.predict(poly15_data_set4), poly15_data_set4, set_4['price'])
plt.show()
print('Model 1 coefficients: ', model1.coef_)
print('Model 2 coefficients: ', model2.coef_)
print('Model 3 coefficients: ', model3.coef_)
print('Model 4 coefficients: ', model4.coef_)
Explanation: Applying a higher L2 value
End of explanation
def k_fold_cross_validation(k, l2_penalty, data, output):
    n = len(data)
    sumRSS = 0
    for i in range(k):
        # Get the validation interval for fold i
        start = ceil((n*i)/k)
        end = ceil((n*(i+1))/k)
        valid_data = data.iloc[start:end]
        valid_output = output.iloc[start:end]
        # The remaining observations form the training set for this fold
        train_data = pd.concat([data.iloc[0:start], data.iloc[end:n]])
        train_output = pd.concat([output.iloc[0:start], output.iloc[end:n]])
        # Train the model on the training folds only
        model = linear_model.Ridge(alpha=l2_penalty, normalize=True)
        model.fit(train_data, train_output)
        # Calculate RSS on the held-out validation fold
        RSS = ((valid_output - model.predict(valid_data)) ** 2).sum()
        # Add the RSS to the sum for computing the average
        sumRSS += RSS
    return (sumRSS / k)
print (k_fold_cross_validation(10, 1e-9, poly15_data_set2, set_2['price']))
Explanation: Selecting an L2 penalty via cross-validation
Just like the polynomial degree, the L2 penalty is a "magic" parameter we need to select. We could use the validation set approach as we did in the last module, but that approach has a major disadvantage: it leaves fewer observations available for training. Cross-validation seeks to overcome this issue by using all of the training set in a smart way.
We will implement a kind of cross-validation called k-fold cross-validation. The method gets its name because it involves dividing the training set into k segments of roughly equal size. Similar to the validation set method, we measure the validation error with one of the segments designated as the validation set. The major difference is that we repeat the process k times as follows:
Set aside segment 0 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set
Set aside segment 1 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set
...
Set aside segment k-1 as the validation set, fit a model on the rest of the data, and evaluate it on this validation set
After this process, we compute the average of the k validation errors, and use it as an estimate of the generalization error. Notice that all observations are used for both training and validation, as we iterate over segments of data.
End of explanation
import sys
l2s = np.logspace(3, 9, num=13)
train_valid_shuffled_poly15 = polynomial_dataframe(train_valid_shuffled['sqft_living'], 15)
k = 10
minError = sys.maxsize
for l2 in l2s:
avgError = k_fold_cross_validation(k, l2, train_valid_shuffled_poly15, train_valid_shuffled['price'])
print ('For l2:', l2, ' the CV is ', avgError)
if avgError < minError:
minError = avgError
bestl2 = l2
print (minError)
print (bestl2)
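# Optional cross-check (a sketch, not part of the original assignment): scikit-learn's
# RidgeCV can search the same penalty grid directly. This assumes the same, older
# scikit-learn version used elsewhere in this notebook (which still accepts `normalize`).
ridge_cv = linear_model.RidgeCV(alphas=np.logspace(3, 9, num=13), normalize=True, cv=10)
ridge_cv.fit(train_valid_shuffled_poly15, train_valid_shuffled['price'])
print('RidgeCV best alpha:', ridge_cv.alpha_)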
Explanation: Minimize the l2 by using cross validation
End of explanation
train_poly15 = polynomial_dataframe(training['sqft_living'], 15)
test_poly15 = polynomial_dataframe(test['sqft_living'], 15)
model = linear_model.Ridge(alpha=1000, normalize=True)
model.fit(train_poly15, training['price'])
print("Residual sum of squares: %.2f"
% ((model.predict(test_poly15) - test['price']) ** 2).sum())
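# Quick visual check of the final high-penalty fit on the test set (illustrative only):
plt.scatter(test['sqft_living'], test['price'], s=5, label='observed')
plt.scatter(test['sqft_living'], model.predict(test_poly15), s=5, label='predicted')
plt.legend()
plt.show()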
Explanation: Use the best l2 to train the model on all the data
End of explanation |
3,050 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Monte Carlo numerical integration
Reference
Step1: Type 1 Monte Carlo integration
It is based on the definition of the average value of a function and on the expected value of a uniform random variable.
We present this through an example.
Example. Approximate the area under the curve $y=x^2$ on the interval $\left[0,1\right]$.
Let us first see what this area looks like.
Step2: So, what we want is to approximate the area of the region $\mathcal{D}$. We will call this area $A(\mathcal{D})$.
From integral calculus, we know that
$$A(\mathcal{D})=\int_{0}^{1}y\text{d}x=\int_{0}^{1}x^2\text{d}x$$.
By definition, the average value of a function $f
Step3: In this case, the integral can be computed easily. Let us compare the result with the true value
Step4: Note that the results are different each time (why?). However, they approximate the true value to roughly the same degree.
Approximating integrals on intervals other than $\left[0,1\right]$.
However, not all the integrals we compute are over the interval $\left[0,1\right]$. In general, we can integrate any continuous function over the interval $\left[a,b\right]$, where $a,b\in\mathbb{R}$ with $a<b$.
Let $f
Step5: Activity. Use the above function to compute the following integrals. Put the results in a table whose rows correspond to the number of terms used in the approximation (use 10, 100, 1000, 10000 and 100000 terms) and whose columns correspond to the functions.
- $\int_{4}^{5} e^{x^2}\text{d}x$.
- $\int_{4}^{5} \frac{1}{log(x)}\text{d}x$.
- $\int_{4}^{5} \frac{sin(x)}{x}\text{d}x$.
Step6: Type 2 Monte Carlo integration
With type 1 Monte Carlo integration we were able to approximate integrals of continuous functions of one variable over a given interval. In fact, this same analysis can be extended to approximate definite integrals of continuous functions of several variables (integrals over areas, volumes and hypervolumes), since the notion of the average value of a function extends to any dimension.
This is actually the interesting case, because integrals of complicated functions can also be approximated by classical numerical methods, but it is when the dimension grows that Monte Carlo becomes a relevant tool. Since we will not cover it in class, given that most of you have not yet seen multivariable calculus, this topic can be chosen as a module project, where you would also explore how to improve Monte Carlo integral approximations.
As we saw in the example (and as should be clear from your integral calculus course), one of the most important applications of integration is finding areas. And not only the area under a curve, but also areas between curves and areas of more complicated regions.
Before looking at type 2 Monte Carlo integration, how can we use type 1 Monte Carlo integration to approximate the area between curves?
Example. Approximate the area between the curves $y=x$ and $y=x^2$ on the interval $\left[0,1\right]$.
Let us first see what this area looks like.
Step7: From integral calculus, we know that
$$A(\mathcal{D})=\int_{0}^{1}x-x^2\text{d}x.$$
Then...
Step8: So, if the region can be described easily, there is no problem (we can use type 1 Monte Carlo).
Step9: But what happens if the geometry of the region cannot be described easily?
As in the previous case, we will motivate the method with a known case. We are going to approximate the value of $\pi$ using the area of the unit circle.
Let us draw the unit circle in the region $\mathcal{R}=\left[-1,1\right]\times\left[-1,1\right]$.
Step10: If we approximate $A(\mathcal{D})$ we approximate the value of $\pi$, since the area of the unit circle is
Step11: The probability that the point $(X,Y)$ lies in the unit circle $\mathcal{D}$ is
$$P((X,Y)\in\mathcal{D})=\frac{A(\mathcal{D})}{A(\mathcal{R})}=\frac{\pi}{4}.$$
Then, we define a Bernoulli random variable $B$ such that
$$B=\left\lbrace\begin{array}{ccc}0 & \text{if} & (X,Y)\notin\mathcal{D}\\1 & \text{if} & (X,Y)\in\mathcal{D} \end{array}\right.=\left\lbrace\begin{array}{ccc}0 & \text{if} & X^2+Y^2>1\\1 & \text{if} & X^2+Y^2\leq 1 \end{array}\right..$$
Thus, the expected value of the random variable $B$ is
$$E\left[B\right]=\theta=P((X,Y)\in\mathcal{D})=\frac{A(\mathcal{D})}{A(\mathcal{R})}.$$
From the above, an estimate of $\theta$ can be obtained as
$$\theta=\frac{A(\mathcal{D})}{A(\mathcal{R})}\approx \frac{1}{N}\sum_{i=1}^{N}b_i,$$
where
$$b_i=\left\lbrace\begin{array}{ccc}0 & \text{if} & x_i^2+y_i^2>1\\1 & \text{if} & x_i^2+y_i^2\leq 1 \end{array}\right.$$
are realizations of the random variable $B$, which in turn come from the realizations $x_i$ and $y_i$ of the random variables $X$ and $Y$, respectively.
Finally, the type 2 Monte Carlo approximation with $N$ terms is
$$A(\mathcal{D})\approx \frac{A(\mathcal{R})}{N}\sum_{i=1}^{N}b_i.$$
Step12: De nuevo, comparemos con el valor real
Step13: Escribamos una función que tenga como entradas
Step14: Actividad. Utilizar la anterior función para aproximar el área de la región descrita por
$$4(2x-1)^4+8(2y-1)^8<1+2(2y-1)^3(3x-2)^2$$
Poner los resultados en una tabla cuyas filas correspondan a la cantidad de términos utilizados en la aproximación (usar 10, 100, 1000, 10000 y 100000 términos). | Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo('Ti5zUD08w5s')
YouTubeVideo('jmsFC0mNayM')
Explanation: Integración numérica Montecarlo
Referencia:
- https://ocw.mit.edu/courses/mechanical-engineering/2-086-numerical-computation-for-mechanical-engineers-fall-2014/nutshells-guis/MIT2_086F14_Monte_Carlo.pdf
- http://ta.twi.tudelft.nl/mf/users/oosterle/oosterlee/lec8-hit-2009.pdf
- Sauer, Timothy. Análisis Numérico, 2da. Edición, ISBN: 978-607-32-2059-0.
<img style="float: center; margin: 0px 0px 15px 15px;" src="https://upload.wikimedia.org/wikipedia/commons/f/f2/Integral_as_region_under_curve.svg" width="300px" height="100px" />
Motivación
En análisis de ingeniería, normalmente debemos evaluar integrales definidas sobre un dominio complejo o en un espacio de dimensión alta.
Por ejemplo, podríamos querer calcular:
- la deflexión en una viga de geometría complicada,
- el volumen de una parte tridimensional de una aeronave,
- o evaluar alguna medida de rendimiento (rentabilidad) en algún proceso que sea expresada como una integral de alguna función sin antiderivada primitiva (que se pueda expresar en términos de funciones elementales).
A la mano tenemos herramientas de integración analítica cuando tanto el espacio de integración como la función a integrar son simples. Cuando la función a integrar es difícil (incluso, imposible) de integrar podemos aún recurrir a métodos numéricos de integración.
Desafortunadamente, los métodos determinísiticos de integración fallan cuando:
- la región es demasiado compleja para discretizarla,
- o la función a integrar es demasiado irregular,
- o la convergencia es demasiado lenta debido a la alta dimensionalidad del espacio de integración (ver Maldición de la dimensionalidad).
Por eso en esta clase veremos una técnica alternativa de integración numérica: Integración Montecarlo.
Ejemplos de funciones sin antiderivada primitiva.
De su curso de cálculo integral seguro recordarán (o estarán viendo) que existen funciones cuya integral no tiene primitiva. Es decir, que no podemos encontrar una función que se pueda expresar en forma de funciones elementales cuya derivada sea tal función.
Esto no significa que dicha función no se pueda integrar, ya que sabemos que cualquier función continua es integrable (y la mayoría de funciones que vemos a ese nivel, lo son). Lo que ocurre es que no podemos expresar dicha integral de una forma sencilla (por ejemplo, en función de exponenciales, senos, cosenos, logaritmos...).
Algunas integrales que no son elementales son:
- $\int e^{p(x)}\text{d}x$, donde $p(x)$ es un polinomio de grado mayor o igual a dos.
- $\int \frac{1}{log(x)}\text{d}x$.
- $\int \frac{sin(x)}{x}\text{d}x$
Referencia:
- https://www.gaussianos.com/funciones-sin-primitiva-elemental/
Ejemplos de regiones difíciles de discretizar.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
def parab(x):
return x**2
x = np.linspace(0,1)
y = parab(x)
plt.fill_between(x,y)
plt.text(0.8,0.2,'$\mathcal{D}$',fontsize=20)
plt.show()
Explanation: Integración Montecarlo tipo 1
Se basa en la definición de valor promedio de una función y en el valor esperado de una variable aleatoria uniforme.
Presentamos esto mediante un ejemplo.
Ejemplo. Aproxime el área bajo la curva $y=x^2$ en el intervalo $\left[0,1\right]$.
Veamos primero cómo luce dicha área.
End of explanation
help(np.random.uniform)
N = 100000
x = np.random.uniform(0, 1, N)
A_Dapprox = np.sum(parab(x))/N
A_Dapprox
Explanation: Entonces, lo que queremos es aproximar el área de la región $\mathcal{D}$. Llamaremos esta área $A(\mathcal{D})$.
De cálculo integral, sabemos que
$$A(\mathcal{D})=\int_{0}^{1}y\text{d}x=\int_{0}^{1}x^2\text{d}x$$.
Por definición, el valor promedio de una función $f:\left[a,b\right]\to\mathbb{R}$ en un intervalo $\left[a,b\right]$ es
$$\frac{1}{b-a}\int_{a}^{b}f(x)\text{d}x.$$
Entonces, el área bajo la curva $y=x^2$ es exactamente el valor promedio de $f(x)=x^2$ en $\left[0,1\right]$. Este valor promedio puede aproximarse mediante el promedio de los valores de la función en puntos aleatorios uniformemente distribuidos en el intervalo $\left[0,1\right]$. Es decir,
$$A(\mathcal{D})=\int_{0}^{1}x^2\text{d}x=\int_{0}^{1}f(x)\text{d}x\approx \frac{1}{N}\sum_{i=1}^{N}f(u_i)=\frac{1}{N}\sum_{i=1}^{N}u_i^2$$,
donde $u_i$ son realizaciones de la variable aleatoria $U\sim\mathcal{U}\left[0,1\right]$ ($U$ distribuye uniformemente en el intervalo $\left[0,1\right]$).
¿Cómo construir vectores de números aleatorios?
- Ver numpy.random.
En este caso necesitamos $N$ números aleatorios uniformemente distribuidos...
End of explanation
import pandas as pd
A_D = 1/3
N = np.logspace(1,7,7)
df = pd.DataFrame(index=N,columns=['Valor_aproximacion', 'Error_relativo'], dtype='float')
df.index.name = "Cantidad_terminos"
for n in N:
x = np.random.uniform(0, 1, n.astype(int))
df.loc[n,"Valor_aproximacion"] = np.sum(parab(x))/n
df.loc[n,"Error_relativo"] = np.abs(df.loc[n,"Valor_aproximacion"]-A_D)/A_D
df
Explanation: En este caso, la integral se puede hacer fácilmente. Comparemos el resultado con el valor real:
$$A(\mathcal{D})=\int_{0}^{1}x^2\text{d}x=\left.\frac{x^3}{3}\right|_{x=0}^{x=1}=\frac{1}{3}$$
Hagamos una tabla viendo:
- cantidad de terminos
- valor de la aproximacion
- error relativo
End of explanation
# Escribir la función acá
def int_montecarlo1(f, a, b, N):
return (b-a)/N*np.sum(f(np.random.uniform(a,b,N)))
Explanation: Ver que los resultados son distintos cada vez (¿porqué?). Sin embargo, se aproximan más o menos en la misma medida.
Aproximación de integrales en intervalos distintos a $\left[0,1\right]$.
Sin embargo, no todas las integrales que hacemos son en el intervalo $\left[0,1\right]$. En general, podemos integrar cualquier función continua en el intervalo $\left[a,b\right]$, donde $a,b\in\mathbb{R}$ con $a<b$.
Sea $f:\left[a,b\right]\to\mathbb{R}$ una función continua en el intervalo $\left(a,b\right)$ (por lo tanto es integrable en dicho intervalo). Queremos resolver:
$$\int_{a}^{b}f(x)\text{d}x.$$
¿Cómo podemos usar la idea del valor promedio para resolver esto?
El valor promedio de $f$ en $\left[a,b\right]$ es:
$$\frac{1}{b-a}\int_{a}^{b}f(x)\text{d}x.$$
Este valor promedio puede aproximarse mediante el promedio de $N$ valores de la función en puntos aleatorios uniformemente distribuidos en el intervalo $\left[a,b\right]$. Es decir,
$$\frac{1}{b-a}\int_{a}^{b}f(x)\text{d}x\approx \frac{1}{N}\sum_{i=1}^{N}f(u_i)$$,
donde $u_i$ son realizaciones de la variable aleatoria $U\sim\mathcal{U}\left[a,b\right]$ ($U$ distribuye uniformemente en el intervalo $\left[a,b\right]$).
Finalmente, la aproximación montecarlo tipo 1 con $N$ términos es
$$\int_{a}^{b}f(x)\text{d}x\approx \frac{b-a}{N}\sum_{i=1}^{N}f(u_i)$$,
Escribamos una función que tenga como entradas:
- la función a integrar $f$,
- los límites de integración $a$ y $b$, y
- el número de términos que se usará en la aproximación $N$,
y que devuelva la aproximación montecarlo tipo 1 de la integral $\int_{a}^{b}f(x)\text{d}x$.
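Por ejemplo, una verificación rápida (bosquejo ilustrativo que usa la función int_montecarlo1 definida arriba; el valor exacto de la integral es $1/3$):
# La integral de x**2 en [0,1] vale 1/3; la aproximación debe acercarse a ese valor
int_montecarlo1(lambda x: x**2, 0, 1, 100000)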
End of explanation
# Resolver
def func1(x):
return np.exp(x**2)
def func2(x):
return 1/np.log(x)
def func3(x):
return np.sin(x)/x
a, b = 4, 5
N = np.logspace(1,5,5)
df = pd.DataFrame(index=N,columns=['Funcion1', 'Funcion2', 'Funcion3'], dtype='float')
df.index.name = "Cantidad_terminos"
for n in N:
df.loc[n,"Funcion1"] = int_montecarlo1(func1, a, b, n.astype(int))
df.loc[n,"Funcion2"] = int_montecarlo1(func2, a, b, n.astype(int))
df.loc[n,"Funcion3"] = int_montecarlo1(func3, a, b, n.astype(int))
df
Explanation: Actividad. Utilizar la anterior función para realizar las siguientes integrales. Poner los resultados en una tabla cuyas filas correspondan a la cantidad de términos utilizados en la aproximación (usar 10, 100, 1000, 10000 y 100000 términos) y cuyas columnas correspondan a las funciones.
- $\int_{4}^{5} e^{x^2}\text{d}x$.
- $\int_{4}^{5} \frac{1}{log(x)}\text{d}x$.
- $\int_{4}^{5} \frac{sin(x)}{x}\text{d}x$.
End of explanation
x = np.linspace(-0.1,1.1)
y = parab(x)
plt.plot(x,x,'k--',label='$y=x$')
plt.plot(x,y,'k',label='$y=x^2$')
plt.fill_between(x,x,y)
plt.text(0.5,0.4,'$\mathcal{D}$',fontsize=20)
plt.legend(loc='best')
plt.show()
Explanation: Integración Montecarlo tipo 2
Con la integración montecarlo tipo 1 pudimos aproximar integrales de funciones continuas de una variable en un intervalo dado. En realidad este mismo análisis se puede ampliar para aproximar integrales definidas de funciones continuas de varias variables (integrales sobre áreas, volúmenes e hipervolúmenes) dado que la noción de valor promedio de una función se extiende a cualquier dimensión.
Este es en realidad el caso interesante, pues las integrales de funciones complicadas también se pueden aproximar por métodos numéricos clásicos, pero cuando la dimensión aumenta es cuando montecarlo se vuelve una herramienta relevante. Dado que no lo veremos en clase por la limitación de que la mayoría no han visto cálculo en varias variables, este tema puede ser elegido como proyecto de módulo, donde se exploraría también como mejorar la aproximación de integrales montecarlo.
Como vimos en el ejemplo (y como debe ser claro de su curso de cálculo integral) una de las aplicaciones más importantes de la integración es hallar áreas. Y no solo el área bajo una curva, sino áreas entre curvas y áreas de regiones más complicadas.
Antes de ver la integración montecarlo tipo 2, ¿cómo podemos usar la integración montecarlo tipo 1 para aproximar el área entre curvas?
Ejemplo. Aproxime el área entre las curvas $y=x$, y $y=x^2$ en el intervalo $\left[0,1\right]$.
Veamos primero cómo luce dicha área.
End of explanation
# Usar la funcion int_montecarlo1
def f(x):
return x-x**2
A_Daprox = int_montecarlo1(f, 0, 1, 100000000)
A_Daprox
Explanation: De cálculo integral, sabemos que
$$A(\mathcal{D})=\int_{0}^{1}x-x^2\text{d}x.$$
Entonces...
End of explanation
YouTubeVideo('G8fOTMYDPEA')
Explanation: De modo que si la región se puede describir fácilmente, diría el ferras 'no hay pedo, lo pago' (podemos usar montecarlo tipo 1).
End of explanation
def circ_arriba(x, r):
return np.sqrt(r**2-x**2)
def circ_abajo(x, r):
return -np.sqrt(r**2-x**2)
x = np.linspace(-1,1,100)
y1 = circ_arriba(x, 1)
y2 = circ_abajo(x, 1)
plt.figure(figsize=(5,5))
plt.plot(x,y1,'k')
plt.plot(x,y2,'k')
plt.fill_between(x,y1,y2)
plt.text(0,0,'$\mathcal{D}$',fontsize=20)
plt.text(0.8,0.8,'$\mathcal{R}$',fontsize=20)
plt.show()
Explanation: Pero, ¿qué pasa si la geometría de la región no se puede describir fácilmente?
Como en el caso anterior, motivaremos el método con un caso conocido. Vamos a aproximar el valor de $\pi$ usando el área de un círculo unitario.
Dibujemos el círculo unitario en la región $\mathcal{R}=\left[-1,1\right]\times\left[-1,1\right]$.
End of explanation
N = 1000000
x = np.random.uniform(-1, 1, N)
y = np.random.uniform(-1, 1, N)
plt.figure(figsize=(5,5))
plt.scatter(x, y, s=1)  # graficamos los N puntos directamente (un meshgrid de N x N agotaría la memoria)
plt.show()
Explanation: Si aproximamos $A(\mathcal{D})$ aproximamos el valor de $\pi$, pues el área del círculo unitario es:
$$A(\mathcal{D})=\pi(1)^2=\pi.$$
Por otra parte es claro que el área de la región $\mathcal{R}=\left[-1,1\right]\times\left[-1,1\right]$ es
$$A(\mathcal{R})=4.$$
Ahora, haremos uso de nuestro generador de números aleatorios. Supongamos que escogemos un punto aleatorio en la región $\mathcal{R}=\left[-1,1\right]\times\left[-1,1\right]$. Describimos este punto como $(X,Y)$ para $X$ e $Y$ variables aleatorias uniformes sobre el intervalo $\left[-1,1\right]$.
¿Cómo generamos puntos aleatorios en un rectángulo?
End of explanation
def reg_circ(x,y):
return x**2+y**2<=1
A_R = 4
A_Dapprox = A_R*np.sum(reg_circ(x,y))/N
A_Dapprox
Explanation: La probabilidad de que el punto $(X,Y)$ esté en el círculo unitario $\mathcal{D}$ es
$$P((X,Y)\in\mathcal{D})=\frac{A(\mathcal{D})}{A(\mathcal{R})}=\frac{\pi}{4}.$$
Luego, definimos una variable aleatoria de Bernoulli $B$ de manera que
$$B=\left\lbrace\begin{array}{ccc}0 & \text{si} & (X,Y)\notin\mathcal{D}\\ 1 & \text{si} & (X,Y)\in\mathcal{D} \end{array}\right.=\left\lbrace\begin{array}{ccc}0 & \text{si} & X^2+Y^2>1\\ 1 & \text{si} & X^2+Y^2\leq 1 \end{array}\right..$$
Entonces, el valor esperado de la variable aleatoria $B$ es
$$E\left[B\right]=\theta=P((X,Y)\in\mathcal{D})=\frac{A(\mathcal{D})}{A(\mathcal{R})}.$$
De lo anterior, una estimación de $\theta$ se puede obtener como
$$\theta=\frac{A(\mathcal{D})}{A(\mathcal{R})}\approx \frac{1}{N}\sum_{i=1}^{N}b_i,$$
donde
$$b_i=\left\lbrace\begin{array}{ccc}0 & \text{si} & x_i^2+y_i^2>1\\ 1 & \text{si} & x_i^2+y_i^2\leq 1 \end{array}\right.$$
son realizaciones de la variable aleatoria $B$, que a su vez es producto de las realizaciones $x_i$ e $y_i$ de las variables aleatorias $X$ e $Y$, respectivamente.
Finalmente, la aproximación montecarlo tipo 2 con $N$ términos es
$$A(\mathcal{D})\approx \frac{A(\mathcal{R})}{N}\sum_{i=1}^{N}b_i.$$
End of explanation
A_D = np.pi
N = np.logspace(1,7,7)
df = pd.DataFrame(index=N,columns=['Valor_aproximacion', 'Error_relativo'], dtype='float')
df.index.name = "Cantidad_terminos"
for n in N:
x = np.random.uniform(-1, 1, n.astype(int))
y = np.random.uniform(-1, 1, n.astype(int))
df.loc[n,"Valor_aproximacion"] = A_R*np.sum(reg_circ(x,y))/n
df.loc[n,"Error_relativo"] = np.abs(df.loc[n,"Valor_aproximacion"]-A_D)/A_D
df
Explanation: De nuevo, comparemos con el valor real
End of explanation
# Escribir la función acá
def int_montecarlo2(region, a1, b1, a2, b2, N):
A_R = (b1-a1)*(b2-a2)
    x = np.random.uniform(a1, b1, int(N))
    y = np.random.uniform(a2, b2, int(N))
return A_R*np.sum(region(x,y))/N
Explanation: Escribamos una función que tenga como entradas:
- la función que describe la region $region$,
- los límites de la region $a_1$, $b_1$, $a_2$ y $b_2$, con $R=\left[a_1,b_1\right]\times\left[a_2,b_2\right]$ y
- el número de términos que se usará en la aproximación $N$,
y que devuelva la aproximación montecarlo tipo 2 del area de la region.
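Por ejemplo, una verificación rápida (bosquejo ilustrativo que usa la función int_montecarlo2 definida arriba, con el círculo unitario cuya área es $\pi$):
# El resultado debe acercarse a np.pi (se pasa N como flotante, igual que en las celdas siguientes)
int_montecarlo2(lambda x, y: x**2 + y**2 <= 1, -1, 1, -1, 1, 1e5)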
End of explanation
N = 100
x = np.linspace(0, 1, N)
y = np.linspace(0, 1, N)
def region(x,y):
return 4*(2*x-1)**4+8*(2*y-1)**8 < 1+2*(2*y-1)**3*(3*x-2)**2
X, Y = np.meshgrid(x,y)
plt.figure(figsize=(5,5))
plt.scatter(X,Y,c=~region(X,Y),cmap='bone')
plt.show()
# Resolver
a1, a2, b1, b2 = 0, 0, 1, 1
N = np.logspace(1,5,5)
df = pd.DataFrame(index=N,columns=['Valor_aproximacion'], dtype='float')
df.index.name = "Cantidad_terminos"
for n in N:
df.loc[n,"Valor_aproximacion"] = int_montecarlo2(region, a1, b1, a2, b2, n)
df
Explanation: Actividad. Utilizar la anterior función para aproximar el área de la región descrita por
$$4(2x-1)^4+8(2y-1)^8<1+2(2y-1)^3(3x-2)^2$$
Poner los resultados en una tabla cuyas filas correspondan a la cantidad de términos utilizados en la aproximación (usar 10, 100, 1000, 10000 y 100000 términos).
End of explanation |
3,051 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Schelling Segregation Model
Background
The Schelling (1971) segregation model is a classic of agent-based modeling, demonstrating how agents following simple rules lead to the emergence of qualitatively different macro-level outcomes. Agents are randomly placed on a grid. There are two types of agents, one constituting the majority and the other the minority. All agents want a certain number (generally, 3) of their 8 surrounding neighbors to be of the same type in order for them to be happy. Unhappy agents will move to a random available grid space. While individual agents do not have a preference for a segregated outcome (e.g. they would be happy with 3 similar neighbors and 5 different ones), the aggregate outcome is nevertheless heavily segregated.
Implementation
This is a demonstration of running a Mesa model in an IPython Notebook. The actual model and agent code are implemented in Schelling.py, in the same directory as this notebook. Below, we will import the model class, instantiate it, run it, and plot the time series of the number of happy agents.
Step1: Now we instantiate a model instance
Step2: We want to run the model until all the agents are happy with where they are. However, there's no guarantee that a given model instantiation will ever settle down. So let's run it for either 100 steps or until it stops on its own, whichever comes first
Step3: The model has a DataCollector object, which checks and stores how many agents are happy at the end of each step. It can also generate a pandas DataFrame of the data it has collected
Step4: Finally, we can plot the 'happy' series
Step5: For testing purposes, here is a table giving each agent's x and y values at each step.
Step6: Effect of Homophily on segregation
Now, we can do a parameter sweep to see how segregation changes with homophily.
First, we create a function which takes a model instance and returns what fraction of agents are segregated -- that is, have no neighbors of the opposite type.
Step7: Now, we set up the batch run, with a dictionary of fixed and changing parameters. Let's hold everything fixed except for Homophily. | Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
from Schelling import SchellingModel
Explanation: Schelling Segregation Model
Background
The Schelling (1971) segregation model is a classic of agent-based modeling, demonstrating how agents following simple rules lead to the emergence of qualitatively different macro-level outcomes. Agents are randomly placed on a grid. There are two types of agents, one constituting the majority and the other the minority. All agents want a certain number (generally, 3) of their 8 surrounding neighbors to be of the same type in order for them to be happy. Unhappy agents will move to a random available grid space. While individual agents do not have a preference for a segregated outcome (e.g. they would be happy with 3 similar neighbors and 5 different ones), the aggregate outcome is nevertheless heavily segregated.
Implementation
This is a demonstration of running a Mesa model in an IPython Notebook. The actual model and agent code are implemented in Schelling.py, in the same directory as this notebook. Below, we will import the model class, instantiate it, run it, and plot the time series of the number of happy agents.
End of explanation
model = SchellingModel(10, 10, 0.8, 0.2, 3)
Explanation: Now we instantiate a model instance: a 10x10 grid, with an 80% chance of an agent being placed in each cell, approximately 20% of agents set as minorities, and agents wanting at least 3 similar neighbors.
End of explanation
while model.running and model.schedule.steps < 100:
model.step()
print(model.schedule.steps) # Show how many steps have actually run
Explanation: We want to run the model until all the agents are happy with where they are. However, there's no guarantee that a given model instantiation will ever settle down. So let's run it for either 100 steps or until it stops on its own, whichever comes first:
End of explanation
model_out = model.datacollector.get_model_vars_dataframe()
model_out.head()
Explanation: The model has a DataCollector object, which checks and stores how many agents are happy at the end of each step. It can also generate a pandas DataFrame of the data it has collected:
End of explanation
model_out.happy.plot()
Explanation: Finally, we can plot the 'happy' series:
End of explanation
x_positions = model.datacollector.get_agent_vars_dataframe()
x_positions.head()
Explanation: For testing purposes, here is a table giving each agent's x and y values at each step.
End of explanation
from mesa.batchrunner import BatchRunner
def get_segregation(model):
'''
Find the % of agents that only have neighbors of their same type.
'''
segregated_agents = 0
for agent in model.schedule.agents:
segregated = True
for neighbor in model.grid.neighbor_iter(agent.pos):
if neighbor.type != agent.type:
segregated = False
break
if segregated:
segregated_agents += 1
return segregated_agents / model.schedule.get_agent_count()
Explanation: Effect of Homophily on segregation
Now, we can do a parameter sweep to see how segregation changes with homophily.
First, we create a function which takes a model instance and returns what fraction of agents are segregated -- that is, have no neighbors of the opposite type.
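For example, a quick illustrative check using the get_segregation function defined in the cell above on the model instance we already ran:
# Fraction of agents whose neighbors are all of their own type
print(get_segregation(model))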
End of explanation
parameters = {"height": 10, "width": 10, "density": 0.8, "minority_pc": 0.2,
"homophily": range(1,9)}
model_reporters = {"Segregated_Agents": get_segregation}
param_sweep = BatchRunner(SchellingModel, parameters, iterations=10,
max_steps=200,
model_reporters=model_reporters)
param_sweep.run_all()
df = param_sweep.get_model_vars_dataframe()
plt.scatter(df.homophily, df.Segregated_Agents)
plt.grid(True)
Explanation: Now, we set up the batch run, with a dictionary of fixed and changing parameters. Let's hold everything fixed except for Homophily.
End of explanation |
3,052 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tasa atractiva mínima (MARR)
Notas de clase sobre ingeniería economica avanzada usando Python
Juan David Velásquez Henao
[email protected]
Universidad Nacional de Colombia, Sede Medellín
Facultad de Minas
Medellín, Colombia
Software utilizado
Este es un documento interactivo escrito como un notebook de Jupyter, en el cual se presenta un tutorial sobre finanzas corporativas usando Python. Los notebooks de Jupyter permiten incorporar simultáneamente código, texto, gráficos y ecuaciones. El código presentado en este notebook puede ejecutarse en los sistemas operativos Linux y OS X.
Haga click aquí para obtener instrucciones detalladas sobre como instalar Jupyter en Windows y Mac OS X.
Descargue la última versión de este documento a su disco duro; luego, carguelo y ejecutelo en línea en Try Jupyter!
Contenido
Bibliografía
[1] SAS/ETS 14.1 User's Guide, 2015.
[2] hp 12c platinum financial calculator. User's guide.
[3] HP Business Consultant II Owner's manual.
[4] C.S. Park and G.P. Sharp-Bette. Advanced Engineering Economics. John Wiley & Sons, Inc., 1990.
Problema del costo de capital
A medida que se invierte más capital, los rendimientos obtenidos son menores (es más difícil acceder a inversiones con rentabilidades altas).
A medida que se presta más capital, los intereses son más altos (es más difícil acceder a créditos baratos).
Si se tiene un proyecto cuyos fondos provienen del aporte de los socios y de diferentes esquemas de financiación, ¿cómo se calcula el costo de dichos fondos?
<img src="images/wacc-explain.png" width=850>
Caso práctico
Una compañía tiene las siguientes fuentes de financiamiento
Step1: En la modelación de créditos con cashflow se consideran dos tipos de costos | Python Code:
import cashflows as cf
##
## Se tienen cuatro fuentes de capital con diferentes costos
## sus datos se almacenarán en las siguientes listas:
##
monto = [0] * 4
interes = [0] * 4
## emision de acciones
## --------------------------------------
monto[0] = 4000
interes[0] = 25.0 / 1.0 # tasa de descuento de la acción
## préstamo 1.
## -------------------------------------------------------
##
nrate = cf.nominal_rate(const_value=20, nper=5)
credito1 = cf.fixed_ppal_loan(amount = 2000, # monto
nrate = nrate, # tasa de interés
orgpoints = 50/2000) # costos de originación
credito1
Explanation: Tasa atractiva mínima (MARR)
Notas de clase sobre ingeniería economica avanzada usando Python
Juan David Velásquez Henao
[email protected]
Universidad Nacional de Colombia, Sede Medellín
Facultad de Minas
Medellín, Colombia
Software utilizado
Este es un documento interactivo escrito como un notebook de Jupyter, en el cual se presenta un tutorial sobre finanzas corporativas usando Python. Los notebooks de Jupyter permiten incorporar simultáneamente código, texto, gráficos y ecuaciones. El código presentado en este notebook puede ejecutarse en los sistemas operativos Linux y OS X.
Haga click aquí para obtener instrucciones detalladas sobre como instalar Jupyter en Windows y Mac OS X.
Descargue la última versión de este documento a su disco duro; luego, carguelo y ejecutelo en línea en Try Jupyter!
Contenido
Bibliografía
[1] SAS/ETS 14.1 User's Guide, 2015.
[2] hp 12c platinum financial calculator. User's guide.
[3] HP Business Consultant II Owner's manual.
[4] C.S. Park and G.P. Sharp-Bette. Advanced Engineering Economics. John Wiley & Sons, Inc., 1990.
Problema del costo de capital
A medida que se invierte más capital, los rendimientos obtenidos son menores (es más difícil acceder a inversiones con rentabilidades altas).
A medida que se presta más capital, los intereses son más altos (es más difícil acceder a créditos baratos).
Si se tiene un proyecto cuyos fondos provienen del aporte de los socios y de diferentes esquemas de financiación, ¿cómo se calcula el costo de dichos fondos?
<img src="images/wacc-explain.png" width=850>
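Como referencia, el costo promedio ponderado de capital (WACC) que se calcula más abajo corresponde a la fórmula:
$$WACC=\sum_{i}\frac{monto_i}{\sum_{j} monto_j}\,r_i,$$
donde $monto_i$ es el capital aportado por la fuente $i$ y $r_i$ su tasa efectiva (después de impuestos).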
Caso práctico
Una compañía tiene las siguientes fuentes de financiamiento:
Un total de \$ 4000 por la emisión de 4.000 acciones. Se espera un dividendo de \$ 0.25 por acción para los próximos años.
Un préstamo bancario (Préstamo 1) de \$ 2.000. El préstamo se paga en 4 cuotas iguales a capital más intereses sobre el saldo total de deuda liquidados a una tasa efectiva de interés del 20%. En el momento del desembolso se cobró una comisión bancaria de \$ 50.
Un préstamo bancario (Préstamo 2) de \$ 1.000 con descuento de 24 puntos. El préstamo se paga en 4 cuotas totales iguales que incluyen intereses más capital. La tasa de interés es del 20%.
La venta de un bono con pago principal de \$ 5.000, el cual fue vendido por \$ 4.000. El capital se dedimirá en 4 periodos y se pagarán intereses a una tasa del 7%. El bono tiene un costo de venta de \$ 50.
El impuesto de renta es del 30%.
Solución
End of explanation
## flujo de caja para el crédito antes de impuestos
credito1.to_cashflow(tax_rate = 30.0)
## la tasa efectiva pagada por el crédito es
## aquella que hace el valor presente cero para
## el flujo de caja anterior (antes o después de
## impuestos)
credito1.true_rate(tax_rate = 30.0)
## se almacenan los datos para este credito
monto[1] = 2000
interes[1] = credito1.true_rate(tax_rate = 30.0)
## préstamo 2.
## -------------------------------------------------------
##
credito2 = cf.fixed_rate_loan(amount = 1000, # monto
nrate = 20, # tasa de interés
start = None,
grace = 0,
life = 4, # número de cuotas
dispoints = 0.24) # costos de originación
credito2
credito2.to_cashflow(tax_rate = 30)
credito2.true_rate(tax_rate = 30)
## se almacenan los datos para este credito
monto[2] = 1000
interes[2] = credito2.true_rate(tax_rate = 30)
## préstamo 3.
## -------------------------------------------------------
##
nrate = cf.nominal_rate(const_value=7, nper=5)
credito3 = cf.bullet_loan(amount = 5000, # monto
nrate = nrate, # tasa de interés
orgpoints = 0.01, # costos de originación
dispoints = 0.20) # puntos de descuento
credito3
credito3.to_cashflow(tax_rate = 30.0) ### malo
credito3.true_rate(tax_rate = 30.0)
## se almacenan los datos de este crédito
monto[3] = 5000
interes[3] = credito3.true_rate(tax_rate = 30.0)
## montos
monto
## tasas
interes
## Costo ponderado del capital (WACC)
## -------------------------------------------------------------
## es el promedio ponderado de las tasas por
## el porcentaje de capital correspondiente a cada fuente
##
s = sum(monto) # capital total
wacc = sum([x*r/s for x, r in zip(monto, interes)])
wacc
Explanation: En la modelación de créditos con cashflow se consideran dos tipos de costos:
Los puntos de descuento (dispoints) como porcentaje sobre el monto de la deuda. Estos son una forma de pago de intereses por anticipado con el fin de bajar la tasa de interés del crédito.
Los puntos de originación (orgpoints) como porcentaje del monto de deuda. Son los costos de constitución del crédito y no son considerados como intereses.
Ya que los intereses de los créditos pueden descontarse como costos financieros, estos disminuyen el pago del impuesto de renta. Por consiguiente, en el análisis de los créditos debe tenerse en cuenta el beneficio por pago de intereses el cual equivale a los impuestos pagados por periodo multiplicados por la tasa del impuesto de renta. Ya que los puntos de descuento son intereses, estos se tienen en cuenta en el cálculo de este beneficio.
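Por ejemplo (caso ilustrativo): si en un periodo se pagan \$ 100 de intereses y la tasa del impuesto de renta es del 30%, el beneficio tributario es $100 \times 0.30 = \$\,30$, de modo que el costo neto de esos intereses es \$ 70.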
End of explanation |
3,053 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
if,elif,else Statements
if Statements in Python allows us to tell the computer to perform alternative actions based on a certain set of results.
Verbally, we can imagine we are telling the computer
Step1: Let's add in some else logic
Step2: Multiple Branches
Let's get a fuller picture of how far if, elif, and else can take us!
We write this out in a nested structure. Take note of how the if, elif, and else line up in the code. This can help you see which if is related to which elif or else statements.
We'll reintroduce a comparison syntax for Python.
Step3: Note how the nested if statements are each checked until a True boolean causes the nested code below it to run. You should also note that you can put in as many elif statements as you want before you close off with an else.
Let's create two more simple examples for the if,elif, and else statements | Python Code:
if True:
    print('It was true!')
Explanation: if,elif,else Statements
if Statements in Python allows us to tell the computer to perform alternative actions based on a certain set of results.
Verbally, we can imagine we are telling the computer:
"Hey if this case happens, perform some action"
We can then expand the idea further with elif and else statements, which allow us to tell the computer:
"Hey if this case happens, perform some action. Else if another case happens, perform some other action. Else-- none of the above cases happened, perform this action"
Let's go ahead and look at the syntax format for if statements to get a better idea of this:
if case1:
perform action1
elif case2:
perform action2
else:
perform action 3
First Example
Let's see a quick example of this:
End of explanation
x = False
if x:
    print('x was True!')
else:
    print('I will be printed in any case where x is not true')
Explanation: Let's add in some else logic:
End of explanation
loc = 'Bank'
if loc == 'Auto Shop':
    print('Welcome to the Auto Shop!')
elif loc == 'Bank':
    print('Welcome to the bank!')
else:
    print("Where are you?")
Explanation: Multiple Branches
Let's get a fuller picture of how far if, elif, and else can take us!
We write this out in a nested structure. Take note of how the if, elif, and else line up in the code. This can help you see which if is related to which elif or else statements.
We'll reintroduce a comparison syntax for Python.
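For instance, a small illustrative snippet (not part of the original exercise) combining comparison operators inside an if statement:
x = 10
if x > 5 and x != 7:
    print('x is greater than 5 and is not equal to 7')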
End of explanation
person = 'Sammy'
if person == 'Sammy':
    print('Welcome Sammy!')
else:
    print("Welcome, what's your name?")
person = 'George'
if person == 'Sammy':
    print('Welcome Sammy!')
elif person == 'George':
    print("Welcome George!")
else:
    print("Welcome, what's your name?")
Explanation: Note how the nested if statements are each checked until a True boolean causes the nested code below it to run. You should also note that you can put in as many elif statements as you want before you close off with an else.
Let's create two more simple examples for the if,elif, and else statements:
End of explanation |
3,054 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic Manipulation
Splipy implements all affine transformations like translate (move), rotate, scale etc. These should be available as operators where this makes sense. To start, we need to import the libraries we are going to use first
Step1: Rotate
Step2: Translate
Step3: Note that translate can also be applied as an operator
Step4: Scaling
Note that scaling is done in relation to the origin. Depending on your use, you might want to center the object around the origin before scaling.
Step5: Scaling is also available as operators
Step6: Control-point manipulation
For special case manipulation, it is possible to manipulate the controlpoints directly | Python Code:
import splipy as sp
import numpy as np
import matplotlib.pyplot as plt
import splipy.curve_factory as curve_factory
Explanation: Basic Manipulation
Splipy implements all affine transformations like translate (move), rotate, scale etc. These should be available as operators where this makes sense. To start, we need to import the libraries we are going to use first
End of explanation
crv = curve_factory.n_gon(6) # create a sample curve
t0 = crv.start() # parametric starting point
t1 = crv.end() # parametric end point
t = np.linspace(t0, t1, 361) # uniform grid of 361 evaluation points on the parametric domain
x = crv(t)
plt.plot(x[:,0], x[:,1]) # plot curve
crv.rotate(10.0/360*2*np.pi) # rotate by 10 degrees (input is in radians)
x = crv(t)
plt.plot(x[:,0], x[:,1], 'r-') # plot curve (in red)
plt.axis('equal')
plt.show()
Explanation: Rotate
End of explanation
crv = curve_factory.n_gon(6) # create a sample curve
t0 = crv.start() # parametric starting point
t1 = crv.end() # parametric end point
t = np.linspace(t0, t1, 361) # uniform grid of 361 evaluation points on the parametric domain
x = crv(t)
plt.plot(x[:,0], x[:,1]) # plot curve
dx = [0.1, 0.1] # translation amount
crv.translate(dx) # move the object by 'dx'
x = crv(t)
plt.plot(x[:,0], x[:,1], 'r-') # plot curve (in red)
plt.axis('equal')
plt.show()
Explanation: Translate
End of explanation
crv.translate([1, 2]) # moves object 1 in x-direction, 2 in y-direction
crv += [1,2] # does the exact same thing
crv = crv + [1,2] # same thing
crv_2 = crv + [1,2] # creates a new object crv_2 which is the translated version of crv
crv += (1,2) # translation vector only needs to be array-like (any indexable input will work)
Explanation: Note that translate can also be applied as an operator
End of explanation
crv = curve_factory.n_gon(6) # create a sample curve
t0 = crv.start() # parametric starting point
t1 = crv.end() # parametric end point
t = np.linspace(t0, t1, 361) # uniform grid of 361 evaluation points on the parametric domain
x = crv(t)
plt.plot(x[:,0], x[:,1]) # plot curve
crv.scale(1.5) # scales the object by a factor of 150%
x = crv(t)
plt.plot(x[:,0], x[:,1], 'r-') # plot curve (in red)
plt.axis('equal')
plt.show()
Explanation: Scaling
Note that scaling is done in relation to the origin. Depending on your use, you might want to center the object around the origin before scaling.
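A minimal sketch of that idea, using only the translate and scale operators shown in this notebook (the point c below is a hypothetical center, chosen for illustration rather than computed by splipy):
c = [0.5, 0.5]            # hypothetical center of the object
crv += [-c[0], -c[1]]     # move the assumed center to the origin
crv *= 2.0                # scale about the origin
crv += c                  # move back to the original position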
End of explanation
crv.scale(1.5)
crv *= 1.5 # does the exact same thing
crv = crv * 1.5 # same thing
crv_2 = crv * 1.5 # keeps crv unchanged, returns a new object crv_2 which is the scaled version of crv
crv *= (2,1) # doubles the size in x-direction, while leaving the size in y-direction unchanged
Explanation: Scaling is also available as operators
End of explanation
curve = curve_factory.n_gon(6)
# for a slightly more inefficient translation operations, we may manipulate the controlpoints one-by-one
for controlpoint in curve:
controlpoint += [1,0]
# alternative way of iterating over the controlpoints of a spline object
for i in range(len(curve)):
curve[i] += [1,0]
print(curve)
curve[0] += [1,0] # this will move the first controlpoint one unit in the x-direction
curve[0,0] += 1 # exact same thing (now moved a total of two)
print(curve)
Explanation: Control-point manipulation
For special case manipulation, it is possible to manipulate the controlpoints directly
End of explanation |
3,055 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab exercises
Simplicial complex in Dionysus is just a list of its simplices. See how we define a full triangle spanned on vertices labeled with 0, 1 and 2 in the following example.
Step1: Since specifying each simplex in a complex is a cumbersome task, dionysus has a closure method which automatically adds missing simplices of lower dimensions in the complex. So the above complex can also be defined as follows. | Python Code:
from dionysus import Simplex
complex = [Simplex([0]), Simplex([1]), Simplex([2]), Simplex([0, 1]),
Simplex([0, 2]), Simplex([2, 1]), Simplex([0, 1, 2])]
complex
Explanation: Lab exercises
Simplicial complex in Dionysus is just a list of its simplices. See how we define a full triangle spanned on vertices labeled with 0, 1 and 2 in the following example.
End of explanation
from dionysus import closure
# Closure accepts 2 arguments: a complex and its dimension
complex = closure([Simplex([0, 1, 2])], 2)
complex
Explanation: Since specifying each simplex in a complex is a cumbersome task, dionysus has a closure method which automatically adds missing simplices of lower dimensions in the complex. So the above complex can also be defined as follows.
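As a quick sanity check (illustrative), the closure built above should contain the same seven simplices we listed by hand earlier:
# 3 vertices + 3 edges + 1 triangle = 7 simplices
len(complex)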
End of explanation |
3,056 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
VIX S&P500 Volatility
In this notebook, we'll take a look at the VIX S&P500 Volatility dataset, available on the Quantopian Store. This dataset spans 02 Jan 2004 through the current day. This data has a daily frequency. Calculated by the CBOE, Quantopian sources this data from Quandl. Quandl has multiple data sets for VIX. Quantopian hosts two of them
Step1: Let's go over the columns
Step2: <a id='pipeline'></a>
Pipeline Overview
Accessing the data in your algorithms & research
The only method for accessing partner data within algorithms running on Quantopian is via the pipeline API. Different data sets work differently but in the case of this data, you can add this data to your pipeline as follows
Step3: Now that we've imported the data, let's take a look at which fields are available for each dataset.
You'll find the dataset, the available fields, and the datatypes for each of those fields.
Step4: Now that we know what fields we have access to, let's see what this data looks like when we run it through Pipeline.
This is constructed the same way as you would in the backtester. For more information on using Pipeline in Research view this thread
Step5: Here, you'll notice that each security is mapped to the corresponding value, so you could grab any security to get what you need.
Taking what we've seen from above, let's see how we'd move that into the backtester. | Python Code:
# For use in Quantopian Research, exploring interactively
from quantopian.interactive.data.quandl import cboe_vix as dataset
# import data operations
from odo import odo
# import other libraries we will use
import pandas as pd
# Let's use blaze to understand the data a bit using Blaze dshape()
dataset.dshape
# And how many rows are there?
# N.B. we're using a Blaze function to do this, not len()
dataset.count()
# Let's see what the data looks like. We'll grab the first three rows.
dataset[:3]
Explanation: VIX S&P500 Volatility
In this notebook, we'll take a look at the VIX S&P500 Volatility dataset, available on the Quantopian Store. This dataset spans 02 Jan 2004 through the current day. This data has a daily frequency. Calculated by the CBOE, Quantopian sources this data from Quandl. Quandl has multiple data sets for VIX. Quantopian hosts two of them: this one, sourced by Quandl directly from the CBOE. A second is delivered to Quandl through Yahoo.
Notebook Contents
There are two ways to access the data and you'll find both of them listed below. Just click on the section you'd like to read through.
<a href='#interactive'><strong>Interactive overview</strong></a>: This is only available on Research and uses blaze to give you access to large amounts of data. Recommended for exploration and plotting.
<a href='#pipeline'><strong>Pipeline overview</strong></a>: Data is made available through pipeline which is available on both the Research & Backtesting environment. Recommended for custom factor development and moving back & forth between research/backtesting.
Free samples and limits
One key caveat: we limit the number of results returned from any given expression to 10,000 to protect against runaway memory usage. To be clear, you have access to all the data server side. We are limiting the size of the responses back from Blaze.
There is a free version of this dataset as well as a paid one. The free sample includes data until 2 months prior to the current date.
To access the most up-to-date values for this data set for trading a live algorithm (as with other partner sets), you need to purchase access to the full set.
With preamble in place, let's get started:
<a id='interactive'></a>
Interactive Overview
Accessing the data with Blaze and Interactive on Research
Partner datasets are available on Quantopian Research through an API service known as Blaze. Blaze provides the Quantopian user with a convenient interface to access very large datasets, in an interactive, generic manner.
Blaze provides an important function for accessing these datasets. Some of these sets are many millions of records. Bringing that data directly into Quantopian Research directly just is not viable. So Blaze allows us to provide a simple querying interface and shift the burden over to the server side.
It is common to use Blaze to reduce your dataset in size, convert it over to Pandas and then to use Pandas for further computation, manipulation and visualization.
Helpful links:
* Query building for Blaze
* Pandas-to-Blaze dictionary
* SQL-to-Blaze dictionary.
Once you've limited the size of your Blaze object, you can convert it to a Pandas DataFrames using:
from odo import odo
odo(expr, pandas.DataFrame)
To see how this data can be used in your algorithm, search for the Pipeline Overview section of this notebook or head straight to <a href='#pipeline'>Pipeline Overview</a>
End of explanation
# Plotting this DataFrame since 2007
df = odo(dataset, pd.DataFrame)
df.head(5)
# So we can plot it, we'll set the index as the `asof_date`
df['asof_date'] = pd.to_datetime(df['asof_date'])
df = df.set_index(['asof_date'])
df.head(5)
import matplotlib.pyplot as plt
df.vix_open.plot(label=str(dataset))
plt.ylabel(str(dataset))
plt.legend()
plt.title("Graphing %s since %s" % (str(dataset), min(df.index)))
Explanation: Let's go over the columns:
- vix_open: opening price for the day indicated on asof_date
- vix_high: high price for the day indicated on asof_date
- vix_low: lowest price for the day indicated by asof_date
- vix_close: closing price for asof_date
- asof_date: the timeframe to which this data applies
- timestamp: this is our timestamp on when we registered the data.
We've done much of the data processing for you. Fields like timestamp are standardized across all our Store Datasets, so the datasets are easy to combine.
We can select columns and rows with ease. Below, we'll do a simple plot.
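For instance, an illustrative Blaze-style selection (the threshold of 30 is arbitrary, and the exact expression is an assumption about the Blaze query API rather than something taken from this notebook):
# days on which the VIX closed above 30, keeping just two columns
high_vol = dataset[dataset.vix_close > 30][['asof_date', 'vix_close']]
high_vol[:5]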
End of explanation
# Import necessary Pipeline modules
from quantopian.pipeline import Pipeline
from quantopian.research import run_pipeline
from quantopian.pipeline.factors import AverageDollarVolume
# Import the datasets available
from quantopian.pipeline.data.quandl import cboe_vix
Explanation: <a id='pipeline'></a>
Pipeline Overview
Accessing the data in your algorithms & research
The only method for accessing partner data within algorithms running on Quantopian is via the pipeline API. Different data sets work differently but in the case of this data, you can add this data to your pipeline as follows:
Import the data set here
from quantopian.pipeline.data.quandl import cboe_vix
Then in intialize() you could do something simple like adding the raw value of one of the fields to your pipeline:
pipe.add(cboe_vix.vix_open.latest, 'open_vix')
Pipeline usage is very similar between the backtester and Research so let's go over how to import this data through pipeline and view its outputs.
End of explanation
print "Here are the list of available fields per dataset:"
print "---------------------------------------------------\n"
def _print_fields(dataset):
print "Dataset: %s\n" % dataset.__name__
print "Fields:"
for field in list(dataset.columns):
print "%s - %s" % (field.name, field.dtype)
print "\n"
_print_fields(cboe_vix)
print "---------------------------------------------------\n"
Explanation: Now that we've imported the data, let's take a look at which fields are available for each dataset.
You'll find the dataset, the available fields, and the datatypes for each of those fields.
End of explanation
pipe = Pipeline()
pipe.add(cboe_vix.vix_open.latest, 'open_vix')
# Setting some basic liquidity strings (just for good habit)
dollar_volume = AverageDollarVolume(window_length=20)
top_1000_most_liquid = dollar_volume.rank(ascending=False) < 1000
pipe.set_screen(top_1000_most_liquid & cboe_vix.vix_open.latest.notnan())
# The show_graph() method of pipeline objects produces a graph to show how it is being calculated.
pipe.show_graph(format='png')
# run_pipeline will show the output of your pipeline
pipe_output = run_pipeline(pipe, start_date='2013-11-01', end_date='2013-11-25')
pipe_output
Explanation: Now that we know what fields we have access to, let's see what this data looks like when we run it through Pipeline.
This is constructed the same way as you would in the backtester. For more information on using Pipeline in Research view this thread:
https://www.quantopian.com/posts/pipeline-in-research-build-test-and-visualize-your-factors-and-filters
End of explanation
# This section is only importable in the backtester
from quantopian.algorithm import attach_pipeline, pipeline_output
# General pipeline imports
from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import AverageDollarVolume
# For use in your algorithms via the pipeline API
from quantopian.pipeline.data.quandl import cboe_vix
def make_pipeline():
# Create our pipeline
pipe = Pipeline()
# Screen out penny stocks and low liquidity securities.
dollar_volume = AverageDollarVolume(window_length=20)
is_liquid = dollar_volume.rank(ascending=False) < 1000
# Create the mask that we will use for our percentile methods.
base_universe = (is_liquid)
# Add the datasets available
pipe.add(cboe_vix.vix_open.latest, 'vix_open')
# Set our pipeline screens
pipe.set_screen(is_liquid)
return pipe
def initialize(context):
attach_pipeline(make_pipeline(), "pipeline")
def before_trading_start(context, data):
results = pipeline_output('pipeline')
Explanation: Here, you'll notice that each security is mapped to the corresponding value, so you could grab any security to get what you need.
Taking what we've seen from above, let's see how we'd move that into the backtester.
End of explanation |
3,057 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 4
Imports
Step2: Line with Gaussian noise
Write a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\sigma^2)$
Step5: Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function
Step6: Use interact to explore the plot_random_line function using | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
Explanation: Interact Exercise 4
Imports
End of explanation
def random_line(m, b, sigma, size=10):
    """Create a line y = m*x + b + N(0,sigma**2) between x=[-1.0,1.0]

    Parameters
    ----------
    m : float
        The slope of the line.
    b : float
        The y-intercept of the line.
    sigma : float
        The standard deviation of the y direction normal distribution noise.
    size : int
        The number of points to create for the line.

    Returns
    -------
    x : array of floats
        The array of x values for the line with `size` points.
    y : array of floats
        The array of y values for the lines with `size` points.
    """
    x = np.linspace(-1.0, 1.0, size)
    # sigma*randn handles the sigma=0.0 case cleanly (no noise is added)
    errors = sigma * np.random.randn(size)
    y = m*x + b + errors
    return x, y
#?np.random.normal
m = 0.0; b = 1.0; sigma=0.0; size=3
x, y = random_line(m, b, sigma, size)
assert len(x)==len(y)==size
assert list(x)==[-1.0,0.0,1.0]
assert list(y)==[1.0,1.0,1.0]
sigma = 1.0
m = 0.0; b = 0.0
size = 500
x, y = random_line(m, b, sigma, size)
assert np.allclose(np.mean(y-m*x-b), 0.0, rtol=0.1, atol=0.1)
assert np.allclose(np.std(y-m*x-b), sigma, rtol=0.1, atol=0.1)
Explanation: Line with Gaussian noise
Write a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\sigma^2)$:
$$
y = m x + b + N(0,\sigma^2)
$$
Be careful about the sigma=0.0 case.
End of explanation
def ticks_out(ax):
    """Move the ticks to the outside of the box."""
ax.get_xaxis().set_tick_params(direction='out', width=1, which='both')
ax.get_yaxis().set_tick_params(direction='out', width=1, which='both')
def plot_random_line(m, b, sigma, size=10, color='red'):
    """Plot a random line with slope m, intercept b and size points."""
    x, y = random_line(m, b, sigma, size)
    plt.scatter(x, y, c=color)
plt.title('Awesome Random Line')
plt.xlabel('The x-axis')
plt.ylabel('The y-axis')
plt.grid(True)
plt.xlim(-1.1,1.1)
plt.ylim(-10.0,10.0)
#?plt.xlim
plot_random_line(5.0, -1.0, 2.0, 50)
assert True # use this cell to grade the plot_random_line function
Explanation: Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function:
Make the marker color settable through a color keyword argument with a default of red.
Display the range $x=[-1.1,1.1]$ and $y=[-10.0,10.0]$.
Customize your plot to make it effective and beautiful.
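For example, a quick illustrative call of the function defined above (arbitrary parameter values):
plot_random_line(2.0, 0.5, 1.0, size=50, color='blue')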
End of explanation
interact(plot_random_line, m=[-10.0,10.0,0.1], b = [-5.0,5.0,0.1], sigma = [0.0,5.0,0.01], size=[10,100,10], color = ['red','green','blue'])
assert True # use this cell to grade the plot_random_line interact
Explanation: Use interact to explore the plot_random_line function using:
m: a float valued slider from -10.0 to 10.0 with steps of 0.1.
b: a float valued slider from -5.0 to 5.0 with steps of 0.1.
sigma: a float valued slider from 0.0 to 5.0 with steps of 0.01.
size: an int valued slider from 10 to 100 with steps of 10.
color: a dropdown with options for red, green and blue.
End of explanation |
3,058 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using ReNA to find supervoxels
The aim of this notebook is to provide an illustration of how to use ReNA
to build superpixels. This corresponds to clustering voxels.
Here we use the Haxby dataset, which can be fetched via nilearn.
Loading the data
Step1: Get the connectivity (spatial structure)
Step2: Clustering
Step3: Visualizing the results | Python Code:
from nilearn import datasets
dataset = datasets.fetch_haxby(subjects=1)
import numpy as np
from nilearn.input_data import NiftiMasker
masker = NiftiMasker(mask_strategy='epi', smoothing_fwhm=6, memory='cache')
X_masked = masker.fit_transform(dataset.func[0])
X_train = X_masked[:100, :]
X_data = masker.inverse_transform(X_train).get_data()
n_x, n_y, n_z, n_samples = X_data.shape
mask = masker.mask_img_.get_data()
print('number of samples: %i, \nDimensions n_x: %i, n_y: %i, n_z: %i' % (n_samples, n_x, n_y, n_z))
Explanation: Using ReNA to find supervoxels
The aim of this notebook is to provide an illustration of how to use ReNA
to build superpixels. This corresponds to clustering voxels.
Here we use the Haxby dataset, which can be fetched via nilearn.
Loading the data
End of explanation
from sklearn.feature_extraction.image import grid_to_graph
from rena import weighted_connectivity_graph
connectivity_ward = grid_to_graph(n_x=n_x, n_y=n_y, n_z=n_z, mask=mask)
connectivity_rena = weighted_connectivity_graph(X_data,
n_features=X_masked.shape[1],
mask=mask)
import time
from sklearn.cluster import AgglomerativeClustering
from rena import recursive_nearest_agglomeration
n_clusters = 2000
ward = AgglomerativeClustering(n_clusters=n_clusters,
connectivity=connectivity_ward,
linkage='ward')
ti_ward = time.time()
ward.fit(X_masked.T)
to_ward = time.time() - ti_ward
labels_ward = ward.labels_
ti_rena = time.time()
labels_rena = recursive_nearest_agglomeration(X_masked, connectivity_rena,
                                              n_clusters=n_clusters)
to_rena = time.time() - ti_rena
print('Time Ward: %0.3f, Time ReNA: %0.3f' % (to_ward, to_rena))
Explanation: Get the connectivity (spatial structure)
End of explanation
from rena import reduce_data, approximate_data
X_red_rena = reduce_data(X_masked, labels_rena)
X_red_ward = reduce_data(X_masked, labels_ward)
X_approx_rena = approximate_data(X_red_rena, labels_rena)
X_approx_ward = approximate_data(X_red_ward, labels_ward)
Explanation: Clustering
End of explanation
def visualize_labels(labels, masker):
# Shuffle the labels (for better visualization):
permutation = np.random.permutation(labels.shape[0])
labels = permutation[labels]
return masker.inverse_transform(labels)
cut_coords = (-34, -16)
n_image = 0
%matplotlib inline
import matplotlib.pyplot as plt
from nilearn.plotting import plot_stat_map, plot_epi
labels_rena_img = visualize_labels(labels_rena, masker)
labels_ward_img = visualize_labels(labels_ward, masker)
clusters_rena_fig = plot_stat_map(labels_rena_img, bg_img=dataset.anat[0],
title='ReNA: clusters', display_mode='yz',
cut_coords=cut_coords, colorbar=False)
clusters_ward_fig = plot_stat_map(labels_ward_img, bg_img=dataset.anat[0],
title='Ward: clusters', display_mode='yz',
cut_coords=cut_coords, colorbar=False)
compress_rena_fig = plot_epi(masker.inverse_transform(X_approx_rena[n_image]),
title='ReNA: approximated', display_mode='yz',
cut_coords=cut_coords)
compress_ward_fig = plot_epi(masker.inverse_transform(X_approx_ward[n_image]),
title='Ward: approximated', display_mode='yz',
cut_coords=cut_coords)
original_fig = plot_epi(masker.inverse_transform(X_masked[n_image]),
title='original', display_mode='yz',
cut_coords=cut_coords)
plt.show()
# saving data
clusters_rena_fig.savefig('figures/clusters_rena.png')
clusters_ward_fig.savefig('figures/clusters_ward.png')
compress_rena_fig.savefig('figures/compress_rena.png')
compress_ward_fig.savefig('figures/compress_ward.png')
original_fig.savefig('figures/original.png')
Explanation: Visualizing the results
End of explanation |
3,059 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Infinite Hidden Markov Model
authors
Step1: First we define the possible states in the model. In this case we make them all have normal distributions.
Step2: We then create the HMM object, naming it, logically, "infinite".
Step3: We then add the possible transition, making sure not to add an end state. Thus with no end state, the model is infinite!
Step4: Finally we "bake" the model, finalizing the model.
Step5: Now we can check whether or not our model is infinite.
Step6: Now let's look at the possible states in the model.
Step7: Now lets test out our model by feeding it a sequence of values. We feed our sequence of values first through a forward algorithm in our HMM.
Step8: That looks good as well. Now lets feed our sequence into the model through a backwards algorithm.
Step9: Continuing on we now feed the sequence in through a forward-backward algorithm.
Step10: Finally we feed the sequence through a Viterbi algorithm to find the most probable sequence of states.
Step11: Finally we try and reproduce the transition matrix from 100,000 samples. | Python Code:
from pomegranate import *
import itertools as it
import numpy as np
Explanation: Infinite Hidden Markov Model
authors:<br>
Jacob Schreiber [<a href="mailto:[email protected]">[email protected]</a>]<br>
Nicholas Farn [<a href="mailto:[email protected]">[email protected]</a>]
This example shows how to use pomegranate to sample from an infinite HMM. The premise is that you have an HMM which does not have transitions to the end state, and so can continue on forever. This is done by not adding transitions to the end state. If you bake a model with no transitions to the end state, you get an infinite model, with no extra work! This change is passed on to all the algorithms.
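For instance, once the model below is built and baked, you can draw a sample of any length you like (illustrative call, using the same sample method invoked at the end of this notebook):
# draw a length-10 sample from the infinite model
print(model.sample(10))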
End of explanation
s1 = State( NormalDistribution( 5, 2 ), name="S1" )
s2 = State( NormalDistribution( 15, 2 ), name="S2" )
s3 = State( NormalDistribution( 25, 2 ), name="S3" )
Explanation: First we define the possible states in the model. In this case we make them all have normal distributions.
End of explanation
model = HiddenMarkovModel( "infinite" )
Explanation: We then create the HMM object, naming it, logically, "infinite".
End of explanation
model.add_transition( model.start, s1, 0.7 )
model.add_transition( model.start, s2, 0.2 )
model.add_transition( model.start, s3, 0.1 )
model.add_transition( s1, s1, 0.6 )
model.add_transition( s1, s2, 0.1 )
model.add_transition( s1, s3, 0.3 )
model.add_transition( s2, s1, 0.4 )
model.add_transition( s2, s2, 0.4 )
model.add_transition( s2, s3, 0.2 )
model.add_transition( s3, s1, 0.05 )
model.add_transition( s3, s2, 0.15 )
model.add_transition( s3, s3, 0.8 )
Explanation: We then add the possible transition, making sure not to add an end state. Thus with no end state, the model is infinite!
End of explanation
model.bake()
Explanation: Finally we "bake" the model, finalizing the model.
End of explanation
# Not implemented: print model.is_infinite()
Explanation: Now we can check whether or not our model is infinite.
End of explanation
print("States")
print("\n".join( state.name for state in model.states ))
Explanation: Now let's list the possible states in the model.
End of explanation
sequence = [ 4.8, 5.6, 24.1, 25.8, 14.3, 26.5, 15.9, 5.5, 5.1 ]
print("Forward")
print(model.forward( sequence ))
Explanation: Now let's test out our model by feeding it a sequence of values. We feed our sequence of values first through a forward algorithm in our HMM.
End of explanation
print("Backward")
print(model.backward( sequence ))
Explanation: That looks good as well. Now let's feed our sequence into the model through a backward algorithm.
End of explanation
print("Forward-Backward")
trans, emissions = model.forward_backward( sequence )
print(trans)
print(emissions)
Explanation: Continuing on we now feed the sequence in through a forward-backward algorithm.
End of explanation
print("Viterbi")
prob, states = model.viterbi( sequence )
print("Prob: {}".format( prob ))
print("\n".join( state[1].name for state in states ))
print()
print("MAP")
prob, states = model.maximum_a_posteriori( sequence )
print("Prob: {}".format( prob ))
print("\n".join( state[1].name for state in states ))
Explanation: Finally we feed the sequence through a Viterbi algorithm to find the most probable sequence of states.
End of explanation
print("Should produce a matrix close to the following: ")
print(" [ [ 0.60, 0.10, 0.30 ] ")
print(" [ 0.40, 0.40, 0.20 ] ")
print(" [ 0.05, 0.15, 0.80 ] ] ")
print()
print("Transition Matrix From 100000 Samples:")
sample, path = model.sample( 100000, path=True )
trans = np.zeros((3,3))
for state, n_state in zip( path[1:-2], path[2:-1] ):  # zip works on Python 2 and 3 (itertools.izip is Python 2 only)
state_name = int( state.name[1:] )-1
n_state_name = int( n_state.name[1:] )-1
trans[ state_name, n_state_name ] += 1
trans = (trans.T / trans.sum( axis=1 )).T
print(trans)
Explanation: Finally we try and reproduce the transition matrix from 100,000 samples.
End of explanation |
3,060 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In the above cell, I have used the first element of the array for calculating 'yactual' value
Step1: The .fit function is throwing out an error saying that first argument in that function must be 2 Dimensional or lesser.
When I try to put in all the three matrices A, B, C, it is giving an error saying that the first argument is four dimensional, which I couldn't resolve
Hence, to see how it works out for a single matrix, I have used the fit function | Python Code:
len(Amatrix[0])
#performing multiple simple linear regression for only the a,Amatrix, because of error of the .fit function
from sklearn import linear_model
regr=linear_model.LinearRegression()#performing the simple linear regression
regr.fit(a[0].reshape(len(a),1),yactual.reshape(len(yactual),1))
Explanation: In the above cell, I have used the first element of the array for calculating 'yactual' value
End of explanation
plt.scatter(yactual.reshape(len(yactual),1),a[0].reshape(len(yactual),1))
plt.plot([0,2],[0,23],lw=4,color='red')#the line Y=2a+b+9c
plt.show()
Explanation: The .fit function is throwing out an error saying that first argument in that function must be 2 Dimensional or lesser.
When I try to put in all the three matrices A, B, C, it is giving an error saying that the first argument is four dimensional, which I couldn't resolve
Hence, to see how it works out for a single matrix, I have used the fit function
End of explanation |
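Since scikit-learn's .fit expects a single 2-D design matrix (one row per sample, one column per feature), one way to use all three inputs together is to stack them column-wise instead of passing them separately. The sketch below is hedged: it assumes a, b and c are arrays of per-sample values of the three variables (the names are illustrative, not taken from the notebook), and the coefficient check refers to the Y = 2a + b + 9c line mentioned in the plot comment above.
import numpy as np
from sklearn import linear_model

# Flatten each feature and stack into one (n_samples, 3) design matrix.
X = np.column_stack((np.ravel(a), np.ravel(b), np.ravel(c)))
y = np.ravel(yactual)

regr_multi = linear_model.LinearRegression()
regr_multi.fit(X, y)
# If yactual really follows Y = 2a + b + 9c, the coefficients should be close to [2, 1, 9].
print(regr_multi.coef_, regr_multi.intercept_)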
3,061 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Word Embeddings
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Using the Embedding layer
Keras makes it easy to use word embeddings. Let's take a look at the [Embedding] (https
Step3: When you create an embedding layer, the weights for the embedding are randomly initialized (just like any other layer). During training, they are gradually adjusted via backpropagation. Once trained, the learned word embeddings roughly encode similarities between words (as they were learned for the specific problem your model is trained on).
If you pass an integer to an embedding layer, the result replaces each integer with the vector from the embedding table
Step4: For text or sequence problems, the Embedding layer takes a 2D tensor of integers, of shape (samples, sequence_length), where each entry is a sequence of integers. It can embed sequences of variable lengths. You could feed the embedding layer above batches with shapes (32, 10) (batch of 32 sequences of length 10) or (64, 15) (batch of 64 sequences of length 15).
The returned tensor has one more axis than the input; the embedding vectors are aligned along the new last axis. Pass it a (2, 3) input batch and the output is (2, 3, N)
Step5: When given a batch of sequences as input, an embedding layer returns a 3D floating-point tensor, of shape (samples, sequence_length, embedding_dimension). To convert from this variable-length sequence to a fixed representation, there is a variety of standard approaches. You can use an RNN, Attention, or pooling layer before passing it to a Dense layer. This tutorial uses pooling because it is the simplest. The [Text classification with an RNN] (text_classification_rnn.ipynb) tutorial is a good next step.
Learning embeddings from scratch
In this tutorial, you will train a sentiment classifier on IMDB movie reviews. In the process, the model will learn the embeddings from scratch. We will use a preprocessed dataset.
To load a text dataset from scratch, see the [Loading text tutorial] (../ load_data / text.ipynb).
Step6: Get the encoder (tfds.features.text.SubwordTextEncoder) and take a quick look at the vocabulary.
The \_ in the vocabulary represents spaces. Note how the vocabulary includes whole words (ending with \_) and partial words that can be used to build larger words
Step7: Movie reviews can have different lengths. We will use the padded_batch method to standardize the lengths of the reviews.
Step8: As imported, the text of the reviews is integer-encoded (each integer represents a specific word or word piece in the vocabulary).
Note the trailing zeros, because the batch is padded to the longest example.
Step9: Create a simple model
We will use the [Keras Sequential API] (../../guide/keras) to define our model. In this case, it is a "Continuous bag of words" style model.
Next, the Embedding layer takes the integer-encoded vocabulary and looks up the embedding vector for each word index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are
Step10: Compile and train the model
Step11: With this approach, our model reaches a validation accuracy of about 88% (note that the model is overfitting, the training accuracy is significantly higher).
Step12: Retrieve the learned embeddings
Next, let's retrieve the word embeddings learned during training. This will be a matrix of shape (vocab_size, embedding-dimension).
Step13: We will now write the weights to disk. To use the [Embedding Projector] (http
Step14: Se você estiver executando este tutorial em [Colaboratory] (https | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
!pip install tf-nightly
except Exception:
pass
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
Explanation: Word Embeddings
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/text/word_embeddings">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/pt-br/tutorials/text/word_embeddings.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/pt-br/tutorials/text/word_embeddings.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/pt-br/tutorials/text/word_embeddings.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial introduces word embeddings. It contains complete code to train word embeddings from scratch on a small dataset and to visualize them using the [Embedding Projector] (http://projector.tensorflow.org) (shown in the image below).
<img src = "https://github.com/tensorflow/docs/blob/master/site/en/tutorials/text/images/embedding.jpg?raw=1" alt = "Captura de tela do projetor de embedding" width = "400" />
Representing text as numbers
Machine learning models take vectors (arrays of numbers) as input. When working with text, the first thing we must do is come up with a strategy to convert strings to numbers (or "vectorize" the text) before feeding it to the model. In this section, we will look at three strategies for doing so.
One-hot encodings
As a first idea, we could "one-hot" encode each word in our vocabulary. Consider the sentence "The cat sat on the mat". The vocabulary (or unique words) in this sentence is (cat, mat, on, sat, the). To represent each word, we will create a vector of zeros with length equal to the vocabulary, then place a 1 in the index that corresponds to the word. This approach is shown in the following diagram.
<img src = "https://github.com/tensorflow/docs/blob/master/site/en/tutorials/text/images/one-hot.png?raw=1" alt = "Diagrama de codificações únicas" width ="400"/>
To create a vector that contains the encoding of the sentence, we could then concatenate the one-hot vector of each word.
Key point: This approach is inefficient. A one-hot encoded vector is sparse (meaning most entries are zero). Imagine we have 10,000 words in the vocabulary. To one-hot encode each word, we would create a vector where 99.99% of the elements are zero.
Encode each word with a unique number
A second approach we might try is to encode each word using a unique number. Continuing the example above, we could assign 1 to "cat", 2 to "mat", and so on. We could then encode the sentence "The cat sat on the mat" as a dense vector like [5, 1, 4, 3, 5, 2]. This approach is efficient. Instead of a sparse vector, we now have a dense one (where all elements are full).
However, there are two downsides to this approach:
The integer encoding is arbitrary (it does not capture any relationship between words).
An integer encoding can be challenging for a model to interpret. A linear classifier, for example, learns a single weight for each feature. Because there is no relationship between the similarity of any two words and the similarity of their encodings, this feature-weight combination is not meaningful.
Word embeddings
Word embeddings give us a way to use an efficient, dense representation in which similar words have a similar encoding. Importantly, we do not have to specify this encoding by hand. An embedding is a dense vector of floating-point values (the length of the vector is a parameter you specify). Instead of specifying the values for the embedding manually, they are trainable parameters (weights learned by the model during training, in the same way a model learns weights for a dense layer). It is common to see word embeddings that are 8-dimensional (for small datasets), and up to 1024 dimensions when working with large datasets. A higher-dimensional embedding can capture fine-grained relationships between words, but takes more data to learn.
<img src = "https://github.com/tensorflow/docs/blob/master/site/en/tutorials/text/images/embedding2.png?raw=1" alt = "Diagrama de um Embedding" width = "400"/>
Above is a diagram for a word embedding. Each word is represented as a 4-dimensional vector of floating-point values. Another way to think of an embedding is as a "lookup table". After these weights have been learned, we can encode each word by looking up the dense vector it corresponds to in the table.
Setup
End of explanation
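To make the one-hot idea above concrete, here is a small, self-contained sketch (plain NumPy, independent of the rest of the notebook) that encodes the example sentence with the vocabulary (cat, mat, on, sat, the):
import numpy as np

vocab = ["cat", "mat", "on", "sat", "the"]
sentence = "the cat sat on the mat".split()

# One row per word in the sentence, one column per vocabulary entry.
one_hot = np.zeros((len(sentence), len(vocab)), dtype=int)
for row, word in enumerate(sentence):
    one_hot[row, vocab.index(word)] = 1

print(one_hot)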
embedding_layer = layers.Embedding(1000, 5)
Explanation: Using the Embedding layer
Keras makes it easy to use word embeddings. Let's take a look at the [Embedding] (https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding) layer.
The Embedding layer can be understood as a lookup table that maps from integer indices (which stand for specific words) to dense vectors (their embeddings). The dimensionality (or width) of the embedding is a parameter you can experiment with to see what works well for your problem, much in the same way you would experiment with the number of neurons in a Dense layer.
End of explanation
result = embedding_layer(tf.constant([1,2,3]))
result.numpy()
Explanation: When you create an embedding layer, the weights for the embedding are randomly initialized (just like any other layer). During training, they are gradually adjusted via backpropagation. Once trained, the learned word embeddings roughly encode similarities between words (as they were learned for the specific problem your model is trained on).
If you pass an integer to an embedding layer, the result replaces each integer with the vector from the embedding table:
End of explanation
result = embedding_layer(tf.constant([[0,1,2],[3,4,5]]))
result.shape
Explanation: For text or sequence problems, the Embedding layer takes a 2D tensor of integers, of shape (samples, sequence_length), where each entry is a sequence of integers. It can embed sequences of variable lengths. You could feed the embedding layer above batches with shapes (32, 10) (batch of 32 sequences of length 10) or (64, 15) (batch of 64 sequences of length 15).
The returned tensor has one more axis than the input; the embedding vectors are aligned along the new last axis. Pass it a (2, 3) input batch and the output is (2, 3, N)
End of explanation
(train_data, test_data), info = tfds.load(
'imdb_reviews/subwords8k',
split = (tfds.Split.TRAIN, tfds.Split.TEST),
with_info=True, as_supervised=True)
Explanation: When given a batch of sequences as input, an embedding layer returns a 3D floating-point tensor, of shape (samples, sequence_length, embedding_dimension). To convert from this variable-length sequence to a fixed representation, there is a variety of standard approaches. You can use an RNN, Attention, or pooling layer before passing it to a Dense layer. This tutorial uses pooling because it is the simplest. The [Text classification with an RNN] (text_classification_rnn.ipynb) tutorial is a good next step.
Learning embeddings from scratch
In this tutorial, you will train a sentiment classifier on IMDB movie reviews. In the process, the model will learn the embeddings from scratch. We will use a preprocessed dataset.
To load a text dataset from scratch, see the [Loading text tutorial] (../ load_data / text.ipynb).
End of explanation
encoder = info.features['text'].encoder
encoder.subwords[:20]
Explanation: Get the encoder (tfds.features.text.SubwordTextEncoder) and take a quick look at the vocabulary.
The \_ in the vocabulary represents spaces. Note how the vocabulary includes whole words (ending with \_) and partial words that can be used to build larger words
End of explanation
train_batches = train_data.shuffle(1000).padded_batch(10)
test_batches = test_data.shuffle(1000).padded_batch(10)
Explanation: Movie reviews can have different lengths. We will use the padded_batch method to standardize the lengths of the reviews.
End of explanation
train_batch, train_labels = next(iter(train_batches))
train_batch.numpy()
Explanation: As imported, the text of the reviews is integer-encoded (each integer represents a specific word or word piece in the vocabulary).
Note the trailing zeros, because the batch is padded to the longest example.
End of explanation
embedding_dim=16
model = keras.Sequential([
layers.Embedding(encoder.vocab_size, embedding_dim),
layers.GlobalAveragePooling1D(),
layers.Dense(16, activation='relu'),
layers.Dense(1, activation='sigmoid')
])
model.summary()
Explanation: Create a simple model
We will use the [Keras Sequential API] (../../guide/keras) to define our model. In this case, it is a "Continuous bag of words" style model.
Next, the Embedding layer takes the integer-encoded vocabulary and looks up the embedding vector for each word index. These vectors are learned as the model trains. The vectors add a dimension to the output array. The resulting dimensions are: (batch, sequence, embedding).
Next, a GlobalAveragePooling1D layer returns a fixed-length output vector for each example by averaging over the sequence dimension. This allows the model to handle input of variable length, in the simplest way possible.
This fixed-length output vector is piped through a fully-connected (dense) layer with 16 hidden units.
The last layer is densely connected with a single output node. Using the sigmoid activation function, this value is a float between 0 and 1, representing a probability (or confidence level) that the review is positive.
Caution: This model does not use masking, so the zero-padding is used as part of the input and the padding length may therefore affect the output. To fix this, see the [masking and padding guide] (../../guide/keras/masking_and_padding).
End of explanation
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
history = model.fit(
train_batches,
epochs=10,
validation_data=test_batches, validation_steps=20)
Explanation: Compile and train the model
End of explanation
import matplotlib.pyplot as plt
history_dict = history.history
acc = history_dict['accuracy']
val_acc = history_dict['val_accuracy']
loss = history_dict['loss']
val_loss = history_dict['val_loss']
epochs = range(1, len(acc) + 1)
plt.figure(figsize=(12,9))
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.figure(figsize=(12,9))
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.ylim((0.5,1))
plt.show()
Explanation: With this approach, our model reaches a validation accuracy of about 88% (note that the model is overfitting, the training accuracy is significantly higher).
End of explanation
e = model.layers[0]
weights = e.get_weights()[0]
print(weights.shape) # shape: (vocab_size, embedding_dim)
Explanation: Retrieve the learned embeddings
Next, let's retrieve the word embeddings learned during training. This will be a matrix of shape (vocab_size, embedding-dimension).
End of explanation
import io
encoder = info.features['text'].encoder
out_v = io.open('vecs.tsv', 'w', encoding='utf-8')
out_m = io.open('meta.tsv', 'w', encoding='utf-8')
for num, word in enumerate(encoder.subwords):
vec = weights[num+1] # skip index 0, it is the padding entry.
out_m.write(word + "\n")
out_v.write('\t'.join([str(x) for x in vec]) + "\n")
out_v.close()
out_m.close()
Explanation: We will now write the weights to disk. To use the [Embedding Projector] (http://projector.tensorflow.org), we will upload two files in tab-separated format: a file of vectors (containing the embedding) and a file of metadata (containing the words).
End of explanation
try:
from google.colab import files
except ImportError:
pass
else:
files.download('vecs.tsv')
files.download('meta.tsv')
Explanation: If you are running this tutorial in [Colaboratory] (https://colab.research.google.com), you can use the following snippet to download these files to your local machine (or use the file browser, *View -> Table of contents -> File browser*).
End of explanation |
3,062 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Formatting csv data for loading into atlasbiowork Postgres database
First, get column names set up. Implement foreign keys FIRST, as csv, and then by join operation with site table.
Then use to_json to nest the values fields for the postgres JSON field.
Step1: soil samples from analysis.csv
Step2: For soil samples, type=31 and values fields are as follows
"values" | Python Code:
import pandas as pd
import numpy as np
import json
#fields for csv
site_fields = ['id', 'name', 'geometry','accuracy']
observation_fields = ['entered', 'values','observer_id', 'site_id', 'type_id', 'parentobs_id']
Explanation: Formatting csv data for loading into atlasbiowork Postgres database
First, get column names set up. Implement foreign keys FIRST, as csv, and then by join operation with site table.
Then use to_json to nest the values fields for the postgres JSON field.
End of explanation
df = pd.read_csv('C:/Users/Peter/Documents/scc/challenge/obs_types/analysis.csv')
#add foreign key fields
mystartID = 1000 #primary key to start numbering new data from
df['observer_id'] = 1 # this is my observer_id
df['site_id'] = np.nan
df['type_id'] = np.nan
df['parentobs_id'] = np.nan
df['id']=df.index+mystartID
df.columns
Explanation: soil samples from analysis.csv
End of explanation
#get soil samples fields
soil_samples_renaming = {"value1": "top_cm", "value2": "bottom_cm","date": "oldDate", "id": "sampleID", "type": "description"}
df.rename(columns=soil_samples_renaming, inplace=True)
df['date'] = pd.to_datetime(df['oldDate'],infer_datetime_format=True)
df.columns
#add a few needed fields
df['entered'] = "2017-06-01 00:00:00.000" #arbitrary for loading data
df['observer_id'] = 1 #given that all these observations are mine
df['site_id'] = 0
df['type_id'] = 31 # for soil samples
df['parentobs_id'] = 0
df['samplers'] = ''
#use regex to replace substrings with numbers for num_composited field
replacements = {
r'8': 8,
r'3': 3,
r'4': 4,
r'pit':4,
r'single': 1,
r'density': 1
}
df['num_composited'] = df.description.replace(replacements, regex=True)
#df.loc[df.text.str.contains('\.'), 'text'] = 'other'
df.num_composited.value_counts() #gives occurrences of each unique value
#here we filter for the soil samples only, not the analyses or calculated stats
searchfor = ['single','density','composite sample','8','4','3']
#y = df[df.description.str.contains('|'.join(searchfor))] #df w rows that contain terms
#x = df[~df.description.str.contains('|'.join(searchfor))] #df without rows that contain terms
df = df[df.description.str.contains('|'.join(searchfor))] #df w rows that contain terms
df['description'] = df['description'] + ". " + df['note']
#in order to make a few text changes, e.g. describe samples a bit more
#df.to_csv('C:/Users/Peter/Documents/scc/challenge/obs_types/soil_samples.csv', index=False)
df = pd.read_csv('C:/Users/Peter/Documents/scc/challenge/obs_types/soil_samples.csv')
df=pd.read_csv('C:/Users/Peter/Documents/scc/challenge/obs_types/soil_samples.csv')
JSONfield = ['top_cm', 'bottom_cm', 'description','num_composited','sampleID','date','samplers']
jsonvalues= df[JSONfield]
jsonvalues.columns
#create dataframe with same length to hold JSON field
json = pd.DataFrame(index = df.index, columns = ['values'])
for i, row in jsonvalues.iterrows():
json.loc[i, 'values'] = jsonvalues.loc[i].to_json()  # assign via .loc so the JSON string is written into the frame, not a temporary array
#print(values.values[i])
#now we create a df with all fields, including the JSON values field
merged = df.merge(json, left_index=True, right_index=True)
merged.to_csv('C:/Users/Peter/Documents/scc/challenge/obs_types/soil_samples.csv', index=False)
mystart = 1000 #primary key to start with
merged['id'] = merged.index + mystart
observation_fields
#observation_fields.append('group')
final = merged[observation_fields]
final
final.to_csv('C:/Users/Peter/Documents/scc/challenge/obs_types/soil_samples_readyFK.csv', index=False)
final = pd.read_csv('C:/Users/Peter/Documents/scc/challenge/obs_types/soil_samples_readyFK.csv')
final
final[final['group']=='BCLA1']
Explanation: For soil samples, type=31 and values fields are as follows
"values": {
"top_cm": "28",
"bottom_cm": "35",
"description": "3-inch diameter density sample",
"num_composited": "1",
"sampleID": "Linne1C1",
"date": "2017-04-11",
"samplers": null
}
End of explanation |
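As an optional check of the to_json nesting step, the 'values' strings can be parsed back to confirm the expected keys survived the round trip. This is a hedged sketch, assuming the final DataFrame and JSONfield list from above are still in scope; the jsonlib alias avoids clashing with the DataFrame that was named json earlier.
import json as jsonlib

expected_keys = set(JSONfield)
for _, row in final.iterrows():
    parsed = jsonlib.loads(row['values'])
    assert set(parsed.keys()) == expected_keys, parsed.keys()
print("all", len(final), "rows carry the expected JSON keys")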
3,063 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1 Make a request from the Forecast.io API for where you were born (or lived, or want to visit!)
Tip
Step1: 2. What's the current wind speed? How much warmer does it feel than it actually is?
Step2: 3. Moon Visible in New York
The first daily forecast is the forecast for today. For the place you decided on up above, how much of the moon is currently visible?
Step3: 4. What's the difference between the high and low temperatures for today?
Step4: 5. Next Week's Prediction
Loop through the daily forecast, printing out the next week's worth of predictions. I'd like to know the high temperature for each day, and whether it's hot, warm, or cold, based on what temperatures you think are hot, warm or cold.
Step5: 6.Weather in Florida
What's the weather looking like for the rest of today in Miami, Florida? I'd like to know the temperature for every hour, and if it's going to have cloud cover of more than 0.5 say "{temperature} and cloudy" instead of just the temperature.
Step6: 7. Temperature in Central Park
What was the temperature in Central Park on Christmas Day, 1980? How about 1990? 2000? | Python Code:
#https://api.forecast.io/forecast/APIKEY/LATITUDE,LONGITUDE,TIME
import requests  # used for every API call below
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/12.971599,77.594563')
data = response.json()
#print(data)
#print(data.keys())
print("Bangalore is in", data['timezone'], "timezone")
timezone_find = data.keys()
#find representation
print("The longitude is", data['longitude'], "The latitude is", data['latitude'])
Explanation: 1 Make a request from the Forecast.io API for where you were born (or lived, or want to visit!)
Tip: Once you've imported the JSON into a variable, check the timezone's name to make sure it seems like it got the right part of the world!
Tip 2: How is north vs. south and east vs. west latitude/longitude represented? Is it the normal North/South/East/West?
End of explanation
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/40.712784,-74.005941, 2016-06-08T09:00:46-0400')
data = response.json()
#print(data.keys())
print("The current windspeed at New York is", data['currently']['windSpeed'])
#print(data['currently']) - find how much warmer
feels_warmer = data['currently']['apparentTemperature'] - data['currently']['temperature']  # apparent minus actual
print("It feels", round(feels_warmer, 2), "degrees warmer than it actually is")
Explanation: 2. What's the current wind speed? How much warmer does it feel than it actually is?
End of explanation
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/40.712784,-74.005941, 2016-06-08T09:00:46-0400')
data = response.json()
#print(data.keys())
#print(data['daily']['data'])
now_moon = data['daily']['data']
for i in now_moon:
print("The visibility of moon today in New York is", i['moonPhase'], "and is in the middle of new moon phase and the first quarter moon")
Explanation: 3. Moon Visible in New York
The first daily forecast is the forecast for today. For the place you decided on up above, how much of the moon is currently visible?
End of explanation
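In the Forecast.io / Dark Sky convention, moonPhase runs from 0 to 1 (0 = new moon, 0.25 = first quarter, 0.5 = full moon, 0.75 = last quarter), so a small helper can turn the number into a readable label instead of hard-coding the interpretation. A minimal sketch:
def moon_phase_label(phase):
    # 0 new moon, 0.25 first quarter, 0.5 full moon, 0.75 last quarter
    if phase < 0.25:
        return "waxing crescent (between new moon and first quarter)"
    elif phase < 0.5:
        return "waxing gibbous (between first quarter and full moon)"
    elif phase < 0.75:
        return "waning gibbous (between full moon and last quarter)"
    else:
        return "waning crescent (between last quarter and the next new moon)"

print(moon_phase_label(0.13))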
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/40.712784,-74.005941, 2016-06-08T09:00:46-0400')
data = response.json()
TemMax = data['daily']['data']
for i in TemMax:
tem_diff = i['temperatureMax'] - i['temperatureMin']
print("The temparature difference for today approximately is", round(tem_diff))
Explanation: 4. What's the difference between the high and low temperatures for today?
End of explanation
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/40.712784,-74.005941')
data = response.json()
temp = data['daily']['data']
#print(temp)
count = 0
for i in temp:
count = count+1
print("The high temperature for the day", count, "is", i['temperatureMax'], "and the low temperature is", i['temperatureMin'])
if float(i['temperatureMin']) < 40:
print("it's a cold weather")
elif (float(i['temperatureMin']) > 40) & (float(i['temperatureMin']) < 60):
print("It's a warm day!")
else:
print("It's very hot weather")
Explanation: 5. Next Week's Prediction
Loop through the daily forecast, printing out the next week's worth of predictions. I'd like to know the high temperature for each day, and whether it's hot, warm, or cold, based on what temperatures you think are hot, warm or cold.
End of explanation
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/25.761680,-80.191790, 2016-06-09T12:01:00-0400')
data = response.json()
#print(data['hourly']['data'])
Tem = data['hourly']['data']
count = 0
for i in Tem:
count = count +1
print("The temperature in Miami, Florida on 9th June in the", count, "hour is", i['temperature'])
if float(i['cloudCover']) > 0.5:
print("and is cloudy")
Explanation: 6.Weather in Florida
What's the weather looking like for the rest of today in Miami, Florida? I'd like to know the temperature for every hour, and if it's going to have cloud cover of more than 0.5 say "{temperature} and cloudy" instead of just the temperature.
End of explanation
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/40.771133,-73.974187, 1980-12-25T12:01:00-0400')
data = response.json()
Temp = data['currently']['temperature']
print("The temperature in Central Park, NY on the Christmas Day of 1980 was", Temp)
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/40.771133,-73.974187, 1990-12-25T12:01:00-0400')
data = response.json()
Temp = data['currently']['temperature']
print("The temperature in Central Park, NY on the Christmas Day of 1990 was", Temp)
response = requests.get('https://api.forecast.io/forecast/4da699cf85f9706ce50848a7e59591b7/40.771133,-73.974187, 2000-12-25T12:01:00-0400')
data = response.json()
Temp = data['currently']['temperature']
print("The temperature in Central Park, NY on the Christmas Day of 2000 was", Temp)
Explanation: 7. Temperature in Central Park
What was the temperature in Central Park on Christmas Day, 1980? How about 1990? 2000?
End of explanation |
3,064 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Setup
Step1: Prepare Vectors
Step2: Use Scikit's semisupervised learning
Scikit-learn provides two semisupervised methods: Label Propagation and Label Spreading. The difference is in how they regularize.
Step3: Measuring effectiveness.
Step4: PCA | Python Code:
import tsvopener
import pandas as pd
import numpy as np
from nltk import word_tokenize
from sklearn.feature_extraction.text import CountVectorizer
from scipy.sparse import csr_matrix, vstack
from sklearn.semi_supervised import LabelPropagation, LabelSpreading
regex_categorized = tsvopener.open_tsv("categorized.tsv")
human_categorized = tsvopener.open_tsv("human_categorized.tsv")
# Accuracy Check
#
# match = 0
# no_match = 0
# for key in human_categorized:
# if human_categorized[key] == regex_categorized[key]:
# match += 1
# else:
# no_match += 1
#
# print("accuracy of regex data in {} human-categorized words".format(
# len(human_categorized)))
# print(match/(match+no_match))
#
# accuracy of regex data in 350 human-categorized words
# 0.7857142857142857
Explanation: Setup
End of explanation
# set up targets for the human-categorized data
targets = pd.DataFrame.from_dict(human_categorized, 'index')
targets[0] = pd.Categorical(targets[0])
targets['code'] = targets[0].cat.codes
# form: | word (label) | language | code (1-5)
tmp_dict = {}
for key in human_categorized:
tmp_dict[key] = tsvopener.etymdict[key]
supervised_sents = pd.DataFrame.from_dict(tmp_dict, 'index')
all_sents = pd.DataFrame.from_dict(tsvopener.etymdict, 'index')
vectorizer = CountVectorizer(stop_words='english', max_features=10000)
all_sents.index.get_loc("anyways (adv.)")
# vectorize the unsupervised vectors.
vectors = vectorizer.fit_transform(all_sents.values[:,0])
print(vectors.shape)
# supervised_vectors = vectorizer.fit_transform(supervised_data.values[:,0])
# add labels
# initialize to -1
all_sents['code'] = -1
supervised_vectors = csr_matrix((len(human_categorized),
vectors.shape[1]),
dtype=vectors.dtype)
j = 0
for key in supervised_sents.index:
all_sents.loc[key, 'code'] = targets.loc[key, 'code']  # single .loc call so the assignment is not lost on a copy
i = all_sents.index.get_loc(key)
supervised_vectors[j] = vectors[i]
j += 1
# supervised_vectors = csr_matrix((len(human_categorized),
# unsupervised_vectors.shape[1]),
# dtype=unsupervised_vectors.dtype)
# j = 0
# for key in supervised_data.index:
# i = unsupervised_data.index.get_loc(key)
# supervised_vectors[j] = unsupervised_vectors[i]
# j += 1
all_sents.loc['dicky (n.)']
Explanation: Prepare Vectors
End of explanation
num_points = 1000
num_test = 50
x = vstack([vectors[:num_points], supervised_vectors]).toarray()
t = all_sents['code'][:num_points].append(targets['code'])
x_test = x[-num_test:]
t_test = t[-num_test:]
x = x[:-num_test]
t = t[:-num_test]
label_prop_model = LabelSpreading(kernel='knn')
from time import time
print("fitting model")
timer_start = time()
label_prop_model.fit(x, t)
print("runtime: %0.3fs" % (time()-timer_start))
print("done!")
# unsupervised_data['code'].iloc[:1000]
import pickle
# with open("classifiers/labelspreading_knn_all_but_100.pkl", 'bw') as writefile:
# pickle.dump(label_prop_model, writefile)
import smtplib
server = smtplib.SMTP('smtp.gmail.com', 587)
server.starttls()
server.login("[email protected]", "Picardy3")
msg = "Job's done!"
server.sendmail("[email protected]", "[email protected]", msg)
server.quit()
targets
Explanation: Use Scikit's semisupervised learning
Scikit-learn provides two semisupervised methods: Label Propagation and Label Spreading. The difference is in how they regularize.
End of explanation
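The cell above fit LabelSpreading; the sibling estimator is LabelPropagation, which hard-clamps the labeled points, while LabelSpreading softens the clamping through its alpha parameter (its form of regularization). A hedged sketch of instantiating both for comparison — the hyperparameter values here are illustrative, not tuned for this dataset:
from sklearn.semi_supervised import LabelPropagation, LabelSpreading

# Hard clamping: labeled points keep their labels exactly.
prop_model = LabelPropagation(kernel='knn', n_neighbors=7)

# Soft clamping: alpha controls how much labeled points may be relabeled.
spread_model = LabelSpreading(kernel='knn', n_neighbors=7, alpha=0.2)

# Both expose the same fit/predict API used above, e.g. prop_model.fit(x, t)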
from sklearn.metrics import precision_score, accuracy_score, f1_score, recall_score
t_pred = label_prop_model.predict(x_test)
print("Metrics based on 50 hold-out points")
print("Macro")
print("accuracy: %f" % accuracy_score(t_test, t_pred))
print("precision: %f" % precision_score(t_test, t_pred, average='macro'))
print("recall: %f" % recall_score(t_test, t_pred, average='macro'))
print("f1: %f" % f1_score(t_test, t_pred, average='macro'))
print("\n\nMicro")
print("accuracy: %f" % accuracy_score(t_test, t_pred))
print("precision: %f" % precision_score(t_test, t_pred, average='micro'))
print("recall: %f" % recall_score(t_test, t_pred, average='micro'))
print("f1: %f" % f1_score(t_test, t_pred, average='micro'))
from sklearn import metrics
import matplotlib.pyplot as pl
labels = ["English", "French", "Greek", "Latin","Norse", "Other"]
labels_digits = [0, 1, 2, 3, 4, 5]
cm = metrics.confusion_matrix(t_test, t_pred, labels_digits)
fig = pl.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(cm)
pl.title("Label Spreading with KNN kernel (k=7)")
fig.colorbar(cax)
ax.set_xticklabels([''] + labels)
ax.set_yticklabels([''] + labels)
pl.xlabel('Predicted')
pl.ylabel('True')
pl.show()
Explanation: Measuring effectiveness.
End of explanation
supervised_vectors
import matplotlib.pyplot as pl
u, s, v = np.linalg.svd(supervised_vectors.toarray(), full_matrices=False)  # thin SVD avoids building a huge (n_features x n_features) V matrix
pca = np.dot(u[:,0:2], np.diag(s[0:2]))
english = np.empty((0,2))
french = np.empty((0,2))
greek = np.empty((0,2))
latin = np.empty((0,2))
norse = np.empty((0,2))
other = np.empty((0,2))
for i in range(pca.shape[0]):
if targets[0].iloc[i] == "English":
english = np.vstack((english, pca[i]))
elif targets[0].iloc[i] == "French":
french = np.vstack((french, pca[i]))
elif targets[0].iloc[i] == "Greek":
greek = np.vstack((greek, pca[i]))
elif targets[0].iloc[i] == "Latin":
latin = np.vstack((latin, pca[i]))
elif targets[0].iloc[i] == "Norse":
norse = np.vstack((norse, pca[i]))
elif targets[0].iloc[i] == "Other":
other = np.vstack((other, pca[i]))
pl.plot( english[:,0], english[:,1], "ro",
french[:,0], french[:,1], "bs",
greek[:,0], greek[:,1], "g+",
latin[:,0], latin[:,1], "c^",
norse[:,0], norse[:,1], "mD",
other[:,0], other[:,1], "kx")
pl.axis([-5,0,-2, 5])
pl.show()
print (s)
Explanation: PCA: Let's see what it looks like
Performing PCA
End of explanation |
3,065 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Experiment
Step1: Load and check data
Step2: ## Analysis
Experiment Details
Step3: What is the impact of removing connections with highest coactivation
Step4: What is the optimal combination of both
Step5: The opposite logic of hebbian pruning, when weight pruning is set to 0, clearly affects the model performance.
Acc when full pruning is done at each state is 0.965 {(1,0), (0,1), (1,1)}
Acc with no pruning is 0.977 {(0,0)}
Best acc is still with only magnitude based pruning {(0,0.2), (0, 0.4)}
Opposite of hebbian pruning (removing connections with the highest coactivation) alone is harmful to the model, with acc equal to or worse than full pruning, even with pruning as low as 0.2
What is the impact of the adding connections with lowest coactivation | Python Code:
%load_ext autoreload
%autoreload 2
import sys
sys.path.append("../../")
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import glob
import tabulate
import pprint
import click
import numpy as np
import pandas as pd
from ray.tune.commands import *
from dynamic_sparse.common.browser import *
Explanation: Experiment:
Opposite of Hebbian Learning: Hebbian Learning by pruning the highest coactivation, instead of the lowest.
Opposite of Hebbian Growth: grow connections by allowing gradient flow on connections with the lowest coactivation, instead of the highest
Motivation.
Verify the relevance of highest coactivated units, by checking their impact on the model when they are pruned
Verify the relevance of lowest coactivated units, by checking their impact on the model when they are added to the model
Conclusions:
The opposite logic of hebbian pruning, when weight pruning is set to 0, clearly affects the model performance.
Acc when full pruning is done at each state is 0.965 {(1,0), (0,1), (1,1)}
Acc with no pruning is 0.977 {(0,0)}
Best acc is still with only magnitude based pruning {(0,0.2), (0, 0.4)}
Opposite of hebbian pruning (removing connections with the highest coactivation) alone is harmful to the model, with acc equal to or worse than full pruning, even with pruning as low as 0.2
Opposite random growth (adding connections with lowest activation) reduces acc by ~ 0.02
End of explanation
exps = ['neurips_debug_test10', 'neurips_debug_test11']
paths = [os.path.expanduser("~/nta/results/{}".format(e)) for e in exps]
df = load_many(paths)
df.head(5)
# replace hebbian prine
df['hebbian_prune_perc'] = df['hebbian_prune_perc'].replace(np.nan, 0.0, regex=True)
df['weight_prune_perc'] = df['weight_prune_perc'].replace(np.nan, 0.0, regex=True)
df.columns
df.shape
df.iloc[1]
df.groupby('model')['model'].count()
Explanation: Load and check data
End of explanation
# Did any trials fail?
df[df["epochs"]<30]["epochs"].count()
# Removing failed or incomplete trials
df_origin = df.copy()
df = df_origin[df_origin["epochs"]>=30]
df.shape
# which ones failed?
# failed, or still ongoing?
df_origin['failed'] = df_origin["epochs"]<30
df_origin[df_origin['failed']]['epochs']
# helper functions
def mean_and_std(s):
return "{:.3f} ± {:.3f}".format(s.mean(), s.std())
def round_mean(s):
return "{:.0f}".format(round(s.mean()))
stats = ['min', 'max', 'mean', 'std']
def agg(columns, filter=None, round=3):
if filter is None:
return (df.groupby(columns)
.agg({'val_acc_max_epoch': round_mean,
'val_acc_max': stats,
'model': ['count']})).round(round)
else:
return (df[filter].groupby(columns)
.agg({'val_acc_max_epoch': round_mean,
'val_acc_max': stats,
'model': ['count']})).round(round)
Explanation: ## Analysis
Experiment Details
End of explanation
random_grow = (df['hebbian_grow'] == False)
agg(['hebbian_prune_perc'], random_grow)
agg(['weight_prune_perc'], random_grow)
Explanation: What is the impact of removing connections with highest coactivation
End of explanation
pd.pivot_table(df[random_grow],
index='hebbian_prune_perc',
columns='weight_prune_perc',
values='val_acc_max',
aggfunc=mean_and_std)
Explanation: What is the optimal combination of both
End of explanation
# with and without hebbian grow
agg('hebbian_grow')
# with and without hebbian grow
pd.pivot_table(df,
index=['hebbian_grow', 'hebbian_prune_perc'],
columns='weight_prune_perc',
values='val_acc_max',
aggfunc=mean_and_std)
Explanation: The opposite logic of hebbian pruning, when weight pruning is set to 0, clearly affects the model performance.
Acc when full pruning is done at each state is 0.965 {(1,0), (0,1), (1,1)}
Acc with no pruning is 0.977 {(0,0)}
Best acc is still with only magnitude based pruning {(0,0.2), (0, 0.4)}
Opposite of hebbian pruning (removing connections with the highest coactivation) alone is harmful to the model, with acc equal to or worse than full pruning, even with pruning as low as 0.2
What is the impact of adding connections with the lowest coactivation
End of explanation |
3,066 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pandas querying and metadata with Epochs objects
Demonstrating pandas-style string querying with Epochs metadata.
For related uses of
Step1: We can use this metadata attribute to select subsets of Epochs. This
uses the Pandas
Step2: Next we'll choose a subset of words to keep.
Step3: Note that traditional epochs sub-selection still works. The traditional
MNE methods for selecting epochs will supersede the rich metadata querying.
Step4: Below we'll show a more involved example that leverages the metadata
of each epoch. We'll create a new column in our metadata object and use
it to generate averages for many subsets of trials.
Step5: Now we can quickly extract (and plot) subsets of the data. For example, to
look at words split by word length and concreteness
Step6: To compare words which are 4, 5, 6, 7 or 8 letters long
Step7: And finally, for the interaction between concreteness and continuous length
in letters
Step8: <div class="alert alert-info"><h4>Note</h4><p>Creating an | Python Code:
# Authors: Chris Holdgraf <[email protected]>
# Jona Sassenhagen <[email protected]>
# Eric Larson <[email protected]>
# License: BSD (3-clause)
import mne
import numpy as np
import matplotlib.pyplot as plt
# Load the data from the internet
path = mne.datasets.kiloword.data_path() + '/kword_metadata-epo.fif'
epochs = mne.read_epochs(path)
# The metadata exists as a Pandas DataFrame
print(epochs.metadata.head(10))
Explanation: Pandas querying and metadata with Epochs objects
Demonstrating pandas-style string querying with Epochs metadata.
For related uses of :class:mne.Epochs, see the starting tutorial
tut-epochs-class.
Sometimes you may have a complex trial structure that cannot be easily
summarized as a set of unique integers. In this case, it may be useful to use
the metadata attribute of :class:mne.Epochs objects. This must be a
:class:pandas.DataFrame where each row corresponds to an epoch, and each
column corresponds to a metadata attribute of each epoch. Columns must
contain either strings, ints, or floats.
In this dataset, subjects were presented with individual words
on a screen, and the EEG activity in response to each word was recorded.
We know which word was displayed in each epoch, as well as
extra information about the word (e.g., word frequency).
Loading the data
First we'll load the data. If metadata exists for an :class:mne.Epochs
fif file, it will automatically be loaded in the metadata attribute.
End of explanation
av1 = epochs['Concreteness < 5 and WordFrequency < 2'].average()
av2 = epochs['Concreteness > 5 and WordFrequency > 2'].average()
joint_kwargs = dict(ts_args=dict(time_unit='s'),
topomap_args=dict(time_unit='s'))
av1.plot_joint(show=False, **joint_kwargs)
av2.plot_joint(show=False, **joint_kwargs)
Explanation: We can use this metadata attribute to select subsets of Epochs. This
uses the Pandas :meth:pandas.DataFrame.query method under the hood.
Any valid query string will work. Below we'll make two plots to compare
between them:
End of explanation
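Because this selection goes through pandas, the same query string can be run directly on the metadata DataFrame, which is a convenient way to preview how many epochs a condition keeps before averaging. A small sketch using the columns already present in this dataset:
# Preview the rows the query would select, without touching the Epochs object itself.
subset = epochs.metadata.query('Concreteness < 5 and WordFrequency < 2')
print('%d epochs match the first condition' % len(subset))
print(subset[['WORD', 'Concreteness', 'WordFrequency']].head())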
words = ['film', 'cent', 'shot', 'cold', 'main']
epochs['WORD in {}'.format(words)].plot_image(show=False)
Explanation: Next we'll choose a subset of words to keep.
End of explanation
epochs['cent'].average().plot(show=False, time_unit='s')
Explanation: Note that traditional epochs sub-selection still works. The traditional
MNE methods for selecting epochs will supersede the rich metadata querying.
End of explanation
# Create two new metadata columns
metadata = epochs.metadata
is_concrete = metadata["Concreteness"] > metadata["Concreteness"].median()
metadata["is_concrete"] = np.where(is_concrete, 'Concrete', 'Abstract')
is_long = metadata["NumberOfLetters"] > 5
metadata["is_long"] = np.where(is_long, 'Long', 'Short')
epochs.metadata = metadata
Explanation: Below we'll show a more involved example that leverages the metadata
of each epoch. We'll create a new column in our metadata object and use
it to generate averages for many subsets of trials.
End of explanation
query = "is_long == '{0}' & is_concrete == '{1}'"
evokeds = dict()
for concreteness in ("Concrete", "Abstract"):
for length in ("Long", "Short"):
subset = epochs[query.format(length, concreteness)]
evokeds["/".join((concreteness, length))] = list(subset.iter_evoked())
# For the actual visualisation, we store a number of shared parameters.
style_plot = dict(
colors={"Long": "Crimson", "Short": "Cornflowerblue"},
linestyles={"Concrete": "-", "Abstract": ":"},
split_legend=True,
ci=.68,
show_sensors='lower right',
legend='lower left',
truncate_yaxis="auto",
picks=epochs.ch_names.index("Pz"),
)
fig, ax = plt.subplots(figsize=(6, 4))
mne.viz.plot_compare_evokeds(evokeds, axes=ax, **style_plot)
plt.show()
Explanation: Now we can quickly extract (and plot) subsets of the data. For example, to
look at words split by word length and concreteness:
End of explanation
letters = epochs.metadata["NumberOfLetters"].unique().astype(int).astype(str)
evokeds = dict()
for n_letters in letters:
evokeds[n_letters] = epochs["NumberOfLetters == " + n_letters].average()
style_plot["colors"] = {n_letters: int(n_letters)
for n_letters in letters}
style_plot["cmap"] = ("# of Letters", "viridis_r")
del style_plot['linestyles']
fig, ax = plt.subplots(figsize=(6, 4))
mne.viz.plot_compare_evokeds(evokeds, axes=ax, **style_plot)
plt.show()
Explanation: To compare words which are 4, 5, 6, 7 or 8 letters long:
End of explanation
evokeds = dict()
query = "is_concrete == '{0}' & NumberOfLetters == {1}"
for concreteness in ("Concrete", "Abstract"):
for n_letters in letters:
subset = epochs[query.format(concreteness, n_letters)]
evokeds["/".join((concreteness, n_letters))] = subset.average()
style_plot["linestyles"] = {"Concrete": "-", "Abstract": ":"}
fig, ax = plt.subplots(figsize=(6, 4))
mne.viz.plot_compare_evokeds(evokeds, axes=ax, **style_plot)
plt.show()
Explanation: And finally, for the interaction between concreteness and continuous length
in letters:
End of explanation
data = epochs.get_data()
metadata = epochs.metadata.copy()
epochs_new = mne.EpochsArray(data, epochs.info, metadata=metadata)
Explanation: <div class="alert alert-info"><h4>Note</h4><p>Creating an :class:`mne.Epochs` object with metadata is done by passing
a :class:`pandas.DataFrame` to the ``metadata`` kwarg as follows:</p></div>
End of explanation |
3,067 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of claving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'uhh', 'sandbox-2', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: UHH
Source ID: SANDBOX-2
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation |
3,068 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variables in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'ec-earth3-hr', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: EC-EARTH-CONSORTIUM
Source ID: EC-EARTH3-HR
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:59
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variables in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
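Several of the listed choices can apply to a 1.N property such as this one. Assuming the pyesdoc API takes one set_value call per selected choice (an assumption; the notebook help page is the authority on multi-valued properties), a completed cell might look like:
# Illustrative completion only -- assumed one call per selected choice
DOC.set_value("Sea ice concentration")
DOC.set_value("Sea ice thickness")
DOC.set_value("Sea ice temperature")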
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
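For illustration, a FLOAT property such as this one is completed with a plain number. The value below is only a placeholder (roughly the commonly quoted freezing point of surface seawater); the documented value must come from the model itself.
# Illustrative completion only -- replace with the constant actually used by the model
DOC.set_value(-1.8)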
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and any possible conflicts with parameterization level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
Which simulations had tuning applied, e.g. all, not historical, only pi-control?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration. Provide full details where this affects the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD, but where a distribution is assumed and fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
3,069 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<font size='5' face='Courier New'><h1 align="center"><i>The Primal & Dual Linear Programming Problems
Step1: <font size='7' face='Times New Roman'><b>1. <u>Primal</u></b></font>
Step2: <font size='7' face='Times New Roman'><b>2. <u>Dual</u></b></font> | Python Code:
# Imports
import numpy as np
import gurobipy as gbp
import datetime as dt
# Constants
Aij = np.random.randint(5, 50, 25)
Aij = Aij.reshape(5,5)
AijSum = np.sum(Aij)
Cj = np.random.randint(10, 20, 5)
CjSum = np.sum(Cj)
Bi = np.random.randint(10, 20, 5)
BiSum = np.sum(Bi)
# Matrix Shape
rows = range(len(Aij))
cols = range(len(Aij[0]))
Explanation: <font size='5' face='Courier New'><h1 align="center"><i>The Primal & Dual Linear Programming Problems: Canonical Form</i></h1></font>
<font face='Times New Roman' size='6'><h3 align="center"><u>James D. Gaboardi</u></h3></font>
<font face='Times New Roman' size='5'><h3 align="center">Florida State University | Department of Geography</h3></font>
<p><font size='4' face='Times New Roman'>Adapted from:</font></p>
<p><font size='4' face='Times New Roman'><b>Daskin, M. S.</b> 1995. <i>Network and Discrete Location: Models, Algorithms, and Applications</i>. Hoboken, NJ, USA: John Wiley & Sons, Inc.</font></p>
<font size='7' face='Times New Roman'><b>0. <u>Imports and Data Creation</u></b></font>
End of explanation
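As a quick reference, the two Gurobi models built below are the standard canonical-form primal/dual pair, with $A$, $b$ and $c$ given by the Aij, Bi and Cj arrays created above:
$$\text{Primal:}\quad \min_{y \ge 0} \; c^{\top} y \quad \text{s.t.} \quad A y \ge b$$
$$\text{Dual:}\quad \max_{u \ge 0} \; b^{\top} u \quad \text{s.t.} \quad A^{\top} u \le c$$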
# Instantiate Model
mPrimal_Canonical_GUROBI = gbp.Model(' -- Canonical Primal Linear Programming Problem -- ')
# Set Focus to Optimality
gbp.setParam('MIPFocus', 2)
# Decision Variables
desc_var = []
for dest in cols:
desc_var.append([])
desc_var[dest].append(mPrimal_Canonical_GUROBI.addVar(vtype=gbp.GRB.CONTINUOUS,
name='y'+str(dest+1)))
# Update Model
mPrimal_Canonical_GUROBI.update()
#Objective Function
mPrimal_Canonical_GUROBI.setObjective(gbp.quicksum(Cj[dest]*desc_var[dest][0]
for dest in cols),
gbp.GRB.MINIMIZE)
# Constraints
for orig in rows:
mPrimal_Canonical_GUROBI.addConstr(gbp.quicksum(Aij[orig][dest]*desc_var[dest][0]
for dest in cols) - Bi[orig] >= 0)
# Optimize
mPrimal_Canonical_GUROBI.optimize()
# Write LP file
mPrimal_Canonical_GUROBI.write('LP.lp')
print '\n*************************************************************************'
print ' | Decision Variables'
for v in mPrimal_Canonical_GUROBI.getVars():
print ' | ', v.VarName, '=', v.x
print '*************************************************************************'
val = mPrimal_Canonical_GUROBI.objVal
print ' | Objective Value ------------------ ', val
print ' | Aij Sum -------------------------- ', AijSum
print ' | Cj Sum --------------------------- ', CjSum
print ' | Bi Sum --------------------------- ', BiSum
print ' | Matrix Dimensions ---------------- ', Aij.shape
print ' | Date/Time ------------------------ ', dt.datetime.now()
print '*************************************************************************'
print '-- Gurobi Canonical Primal Linear Programming Problem --'
print '\nJames Gaboardi, 2015'
Explanation: <font size='7' face='Times New Roman'><b>1. <u>Primal</u></b></font>
End of explanation
# Instantiate Model
mDual_Canonical_GUROBI = gbp.Model(' -- Canonical Dual Linear Programming Problem -- ')
# Set Focus to Optimality
gbp.setParam('MIPFocus', 2)
# Decision Variables
desc_var = []
for dest in cols:
desc_var.append([])
desc_var[dest].append(mDual_Canonical_GUROBI.addVar(vtype=gbp.GRB.CONTINUOUS,
name='u'+str(dest+1)))
# Update Model
mDual_Canonical_GUROBI.update()
#Objective Function
mDual_Canonical_GUROBI.setObjective(gbp.quicksum(Bi[orig]*desc_var[orig][0]
for orig in rows),
gbp.GRB.MAXIMIZE)
# Constraints
for dest in cols:
    # Dual constraints: sum_i A[i][j] * u_i <= C[j] for each column j
    mDual_Canonical_GUROBI.addConstr(gbp.quicksum(Aij[orig][dest]*desc_var[orig][0]
                                                  for orig in rows) - Cj[dest] <= 0)
# Optimize
mDual_Canonical_GUROBI.optimize()
# Write LP file
mDual_Canonical_GUROBI.write('LP.lp')
print '\n*************************************************************************'
print ' | Decision Variables'
for v in mDual_Canonical_GUROBI.getVars():
print ' | ', v.VarName, '=', v.x
print '*************************************************************************'
val = mDual_Canonical_GUROBI.objVal
print ' | Objective Value ------------------ ', val
print ' | Aij Sum -------------------------- ', AijSum
print ' | Cj Sum --------------------------- ', CjSum
print ' | Bi Sum --------------------------- ', BiSum
print ' | Matrix Dimensions ---------------- ', Aij.shape
print ' | Date/Time ------------------------ ', dt.datetime.now()
print '*************************************************************************'
print '-- Gurobi Canonical Dual Linear Programming Problem --'
print '\nJames Gaboardi, 2015'
Explanation: <font size='7' face='Times New Roman'><b>2. <u>Dual</u></b></font>
End of explanation |
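Because the two models form the canonical primal/dual pair of the same data, strong duality says their optimal objective values should agree to within solver tolerance. A small check along these lines could be appended (a sketch, assuming both models above have already been optimized in the same session):
# Sanity check: primal and dual optima should match by LP strong duality
primal_val = mPrimal_Canonical_GUROBI.objVal
dual_val = mDual_Canonical_GUROBI.objVal
print ' | Primal objective ----------------- ', primal_val
print ' | Dual objective ------------------- ', dual_val
print ' | Absolute gap --------------------- ', abs(primal_val - dual_val)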
3,070 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 01
Import
Step2: Interact basics
Write a print_sum function that prints the sum of its arguments a and b.
Step3: Use the interact function to interact with the print_sum function.
a should be a floating point slider over the interval [-10., 10.] with step sizes of 0.1
b should be an integer slider over the interval [-8, 8] with step sizes of 2.
Step5: Write a function named print_string that prints a string and additionally prints the length of that string if a boolean parameter is True.
Step6: Use the interact function to interact with the print_string function.
s should be a textbox with the initial value "Hello World!".
length should be a checkbox with an initial value of True. | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
Explanation: Interact Exercise 01
Import
End of explanation
def print_sum(a, b):
    """Print the sum of the arguments a and b."""
    print(a + b)
    #raise NotImplementedError()
Explanation: Interact basics
Write a print_sum function that prints the sum of its arguments a and b.
End of explanation
interact(print_sum, a = (-10., 10., 0.1), b = (-8, 8, 2));
#raise NotImplementedError()
assert True # leave this for grading the print_sum exercise
Explanation: Use the interact function to interact with the print_sum function.
a should be a floating point slider over the interval [-10., 10.] with step sizes of 0.1
b should be an integer slider over the interval [-8, 8] with step sizes of 2.
End of explanation
def print_string(s, length=False):
    """Print the string s and optionally its length."""
    print(s)
    if length:
        print(len(s))
    #raise NotImplementedError()
Explanation: Write a function named print_string that prints a string and additionally prints the length of that string if a boolean parameter is True.
End of explanation
interact(print_string, s = "Hello World!", length = True);
#raise NotImplementedError()
assert True # leave this for grading the print_string exercise
Explanation: Use the interact function to interact with the print_string function.
s should be a textbox with the initial value "Hello World!".
length should be a checkbox with an initial value of True.
End of explanation |
3,071 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Algorithms
Step1: PCA
Step2: PCA Algorithm Basics
The PCA Algorithm relies heavily on the Spectral (eigenvalue) related properties of a matrix.
Dumb question $ -$ what are the eigenvalues of a non-square matrix?
Recall that we can think of $m$ features of $n$ samples as being represented by a $(m, n)$ matrix.
Step3: Looks like we'll have to cheat a bit.
I mean -- do something smart!
Consider two matrices, A of shape $(m, n)$ and B of shape $(n, m)$.
We can compute their product and wind up with a matrix of shape $(m, m)$ or $(n, n)$ -- depending on whether A or B comes first.
Pop quiz!
Given our $(m, n)$ matrix, what is the first $(n, m)$ matrix that comes to mind?
The TRANSPOSE OF IT!!!
Step4: This.... actually makes sense. Let's compare the other multiplication just to see what's going on.
Step5: Huh.... looks like they share some eigenvalues
Look at the leading 5 numbers.
And it seems like the rest are effectively 0 -- this is just a rounding error. They are 0.
And are they in descending order or is it just my imagination?
Theorem 1
Step6: Discussion
It seems like we can translate them between the two different representations without a problem.
That's good enough for me!
We can consider the non-square matrix $A$ of shape $(5, 10)$ to have 5 eigenvalues.
WE ONLY NEED A NUMBER OF EIGENVALUES EQUAL TO THE NUMBER OF FEATURES!!!!
We actually do not have to summarize our data as much as it seems like we might.
From here on out we need to define a few statistical Matrices. These will be left as challenges
Define a mean vector $\vec \mu$ derived from the original A, which is a vector of the mean of each of the original features.
Define a matrix B (m, n), such that each element of B is A - $\vec \mu$ applied to each sample.
Define the covariance matrix $S \colon = \frac{1}{n-1} B \times B^T$.
Step7: Recall the Covariance Formula for Two Variables
$Cov(A,B) = \frac{1}{n-1}((a_1 - \mu_A)(b_1-\mu_B)+ ... + (a_n - \mu_a)(b_n - \mu_b))$
Step8: The Trace of a Matrix
$$Tr(A) = \Sigma \lambda_i$$
The trace is the sum of the eigenvalues of a matrix.
It can be alternatively stated as the sum of the values on the diagonal, but this is not obvious!!
In our case, we know the values on the Diagonal are the Variance for the feature in that column/row.
Therefore the $Tr(S)$ is just the total variance in the data set!
The eigenvectors $\vec v_i $ are the directions of maximum variance.
The eigenvalues are the amount of variance in that direction. | Python Code:
# Can't find good material for this...
Explanation: Regression Algorithms
End of explanation
# Can't find good material for this.
Explanation: PCA
End of explanation
# Let us see what this would look like in numpy.
# First, choose m and n such that m != n
m = 5
n = 10
# Make the matrix A
A = np.random.rand(m, n)
print(A)
# Now compute its eigenvalues.
try:
vals, vecs = np.linalg.eig(A)
print(vals)
except:
print("Uh Oh we caused a linear algebra error!")
print("The last two dimensions must be square!")
print("This means we can't compute the eigenvalues of the matrix.")
Explanation: PCA Algorithm Basics
The PCA Algorithm relies heavily on the Spectral (eigenvalue) related properties of a matrix.
Dumb question $ -$ what are the eigenvalues of a non-square matrix?
Recall that we can think of $m$ features of $n$ samples as being represented by a $(m, n)$ matrix.
End of explanation
# Let's double check that real fast.
print("The shape of A is: {}".format(A.shape))
print("A^T has shape: {}".format(A.transpose().shape))
# Let's see what the spectrum looks like.
A_T = A.transpose()
vals, vecs = np.linalg.eig(A_T)
print(vals)
# Darn it it still isn't square!
# What about.... A * A^T
A_AT = np.matmul(A, A_T)
vals, vecs = np.linalg.eig(A_AT)
print(vals)
Explanation: Looks like we'll have to cheat a bit.
I mean -- do something smart!
Consider two matrices, A of shape $(m, n)$ and B of shape $(n, m)$.
We can compute their product and wind up with a matrix of shape $(m, m)$ or $(n, n)$ -- depending on whether A or B comes first.
Pop quiz!
Given our $(m, n)$ matrix, what is the first $(n, m)$ matrix that comes to mind?
The TRANSPOSE OF IT!!!
End of explanation
AT_A = np.matmul(A_T, A)
vals, vecs = np.linalg.eig(AT_A)
print(vals)
Explanation: This.... actually makes sense. Let's compare the other multiplication just to see what's going on.
End of explanation
# Exercise, try it! Extract an eigenvector of A^T x A and left-multiply it by A.
# Check that the result is an eigenvector of A x A^T; multiplying by A^T maps back the other way.
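# A possible solution sketch for the exercise above (the new names here are illustrative).
# Take an eigenvector w of A^T x A with a clearly nonzero eigenvalue; then A @ w
# should be an eigenvector of A x A^T with the same eigenvalue (see Lemma 1 below).
vals_n, vecs_n = np.linalg.eig(AT_A)
idx = int(np.argmax(np.abs(vals_n)))
w = vecs_n[:, idx]
v = A @ w
print(np.allclose(A_AT @ v, vals_n[idx] * v))  # True -> same eigenvalue, translated eigenvector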
Explanation: Huh.... looks like they share some eigenvalues
Look at the leading 5 numbers.
And it seems like the rest are effectively 0 -- this is just a rounding error. They are 0.
And are they in descending order or is it just my imagination?
Theorem 1:
The matrices $A \times A^T$ and $A^T \times A$ share the same nonzero eigenvalues.
Theorem 2:
The matrices $A \times A^T$ and $A^T \times A$ have non-negative eigenvalues.
Note:
This actually follows from the fact that they are symmetric positive semi-definite (Gram) matrices.
Lemma 1: (A Helper Theorem or important observation.)
To translate an eigenvector $\vec{v}$ of $A^T \times A$ into an eigenvector of $A \times A^T$, we simply left-multiply it by $A$ (i.e., compute $A \times \vec{v}$). This holds the other way as well, using $A^T$.
End of explanation
# Why should the covariance matrix be a square matrix in the number of features?
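# One way to see it (added note): with m features and n samples, B has shape (m, n),
# so B @ B.T has shape (m, m) -- one row and column per feature, i.e. square in the
# number of features no matter how many samples we have.
print(A.shape, (A @ A.T).shape)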
Explanation: Discussion
It seems like we can translate them between the two different representations without a problem.
That's good enough for me!
We can consider the non-square matrix $A$ of shape $(5, 10)$ to have 5 eigenvalues.
WE ONLY NEED A NUMBER OF EIGENVALUES EQUAL TO THE NUMBER OF FEATURES!!!!
We actually do not have to summarize our data as much as it seems like we might.
From here on out we need to define a few statistical Matrices. These will be left as challenges
Define a mean vector $\vec \mu$ derived from the original A, which is a vector of the mean of each of the original features.
Define a matrix B (m, n), such that each element of B is A - $\vec \mu$ applied to each sample.
Define the covariance matrix $S \colon = \frac{1}{n-1} B \times B^T$.
End of explanation
# Is the name Covariance Matrix justified?
# What are the values on the Diagonal of the Covariance Matrix?
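# A quick numerical check (added sketch; X, B_demo and S_demo are illustrative names).
# With rows as features, the diagonal of S = B @ B.T / (n - 1) is the per-feature
# sample variance, which is what justifies calling S the covariance matrix.
X = np.random.rand(3, 20)                      # 3 features, 20 samples
B_demo = X - X.mean(axis=1, keepdims=True)     # subtract each feature's mean
S_demo = B_demo @ B_demo.T / (X.shape[1] - 1)
print(np.diag(S_demo))
print(X.var(axis=1, ddof=1))                   # matches the diagonal above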
Explanation: Recall the Covariance Formula for Two Variables
$Cov(A,B) = \frac{1}{n-1}((a_1 - \mu_A)(b_1-\mu_B)+ ... + (a_n - \mu_a)(b_n - \mu_b))$
End of explanation
def gen_noisy_line(n_samples=50):
'''
This function generates a noisy line of slope 1 and returns the
matrix associated with these n_samples, with noise +- 1 from a
straight line.
This matrix follows the convention that
rows are features, and columns are samples.
'''
return matrix_A
def make_B_from_A(matrix_A):
'''
This function generates the B matrix from the sample matrix A.
'''
return matrix_B
def make_S_from_B(matrix_B):
'''
This function generates the matrix S from B.
'''
return matrix_S
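# One possible solution to the challenge functions above (an illustrative sketch,
# not the only way to do it), followed by a quick check of the trace/eigenvalue
# claims made in the text below.
def gen_noisy_line_solution(n_samples=50):
    # 2 features (x and y), n_samples columns; y = x plus uniform noise in [-1, 1].
    x = np.linspace(0, 10, n_samples)
    y = x + np.random.uniform(-1, 1, n_samples)
    return np.vstack([x, y])

def make_B_from_A_solution(matrix_A):
    # Subtract each feature's (row's) mean from every sample.
    return matrix_A - matrix_A.mean(axis=1, keepdims=True)

def make_S_from_B_solution(matrix_B):
    n_samples = matrix_B.shape[1]
    return matrix_B @ matrix_B.T / (n_samples - 1)

A_line = gen_noisy_line_solution()
S_line = make_S_from_B_solution(make_B_from_A_solution(A_line))
eigvals, eigvecs = np.linalg.eigh(S_line)      # S is symmetric
print(np.trace(S_line), eigvals.sum())         # trace == sum of eigenvalues == total variance
print(eigvals / eigvals.sum())                 # fraction of variance along each direction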
Explanation: The Trace of a Matrix
$$Tr(A) = \Sigma \lambda_i$$
The trace is the sum of the eigenvalues of a matrix.
It can be alternatively stated as the sum of the values on the diagonal, but this is not obvious!!
In our case, we know the values on the Diagonal are the Variance for the feature in that column/row.
Therefore the $Tr(S)$ is just the total variance in the data set!
The eigenvectors $\vec v_i $ are the directions of maximum variance.
The eigenvalues are the amount of variance in that direction.
End of explanation |
3,072 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Projecting terrestrial biodiversity using PREDICTS and LUH2
This notebook shows how to use rasterset to project a PREDICTS model using the LUH2 land-use data.
You can set three parameters below
Step1: Local imports
Step2: Parameters
Step3: Models
This notebook uses Sam's LUH2 abundance models. Thus we need to load a forested and a non-forested model, project using both and then combine the projection.
Step4: Rastersets
Use the PREDICTS python module to generate the appropriate rastersets. Each rasterset is like a DataFrame or hash (dict in python). The columns are variables and hold a function that describes how to compute the data.
Generating a rasterset is a two-step process. First generate a hash (dict in python) and then pass the dict to the constructor.
Each model will be evaluated only where the forested mask is set (or not set). Load the mask from the LUH2 static data set.
Note that we need to explicitly assign the R model we loaded in the previous cell to the corresponding variable of the rasterset.
Step5: Eval
Now evaluate each model in turn and then combine the data. Because we are guaranteed that the data is non-overlapping (no cell should have valid data in both projections), we can simply add them together (with masked values filled in as 0). The overall mask is the logical AND of the two invalid masks.
Step6: Rendering
Use matplotlib (via rasterio.plot) to render the generated data. This will display the data in-line in the notebook. | Python Code:
import click
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import numpy.ma as ma
import rasterio
from rasterio.plot import show, show_hist
Explanation: Projecting terrestrial biodiversity using PREDICTS and LUH2
This notebook shows how to use rasterset to project a PREDICTS model using the LUH2 land-use data.
You can set three parameters below:
scenario: can be either historical (850CE - 2015CE) or one of the LUH2 scenarios available (all in lowercase, e.g. ssp1_rcp2.6_image).
year: year for which to generate the projection. For the historical scenario the year must be between 850-2015. For the SSP scenarios the year must be between 2015-2100.
what: the name of the variable to evaluate. Many abundance models evaluate a variable called LogAbund. If you want to project abundance then what should be LogAbund. But you can use any of the intermediate variables as well. For example setting what to hpd will generate a projection of human population density.
Imports (non-local)
End of explanation
from projections.rasterset import RasterSet, Raster
from projections.simpleexpr import SimpleExpr
import projections.r2py.modelr as modelr
import projections.predicts as predicts
import projections.utils as utils
Explanation: Local imports
End of explanation
scenario = 'historical'
year = 2000
what = 'LogAbund'
Explanation: Parameters
End of explanation
modf = modelr.load('ab-fst-1.rds')
intercept_f = modf.intercept
predicts.predictify(modf)
modn = modelr.load('ab-nfst-1.rds')
intercept_n = modn.intercept
predicts.predictify(modn)
Explanation: Models
This notebook uses Sam's LUH2 abundance models. Thus we need to load a forested and a non-forested model, project using both and then combine the projection.
End of explanation
fstnf = rasterio.open(utils.luh2_static('fstnf'))
rastersf = predicts.rasterset('luh2', scenario, year, 'f')
rsf = RasterSet(rastersf, mask=fstnf, maskval=0.0)
rastersn = predicts.rasterset('luh2', scenario, year, 'n')
rsn = RasterSet(rastersn, mask=fstnf, maskval=1.0)
vname = modf.output
assert modf.output == modn.output
rsf[vname] = modf
rsn[vname] = modn
Explanation: Rastersets
Use the PREDICTS python module to generate the appropriate rastersets. Each rasterset is like a DataFrame or hash (dict in python). The columns are variables and hold a function that describes how to compute the data.
Generating a rasterset is a two-step process. First generate a hash (dict in python) and then pass the dict to the constructor.
Each model will be evaluated only where the forested mask is set (or not set). Load the mask from the LUH2 static data set.
Note that we need to explicitly assign the R model we loaded in the previous cell to the corresponding variable of the rasterset.
End of explanation
datan, meta = rsn.eval(what, quiet=True)
dataf, _ = rsf.eval(what, quiet=True)
data_vals = dataf.filled(0) + datan.filled(0)
data = data_vals.view(ma.MaskedArray)
data.mask = np.logical_and(dataf.mask, datan.mask)
Explanation: Eval
Now evaluate each model in turn and then combine the data. Because we are guaranteed that the data is non-overlapping (no cell should have valid data in both projections), we can simply add them together (with masked values filled in as 0). The overall mask is the logical AND of the two invalid masks.
End of explanation
show(data, cmap='viridis')
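# Optionally, look at the distribution of the projected values as well;
# show_hist is already imported from rasterio.plot above.
show_hist(data, bins=50, title='Distribution of projected values')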
Explanation: Rendering
Use matplotlib (via rasterio.plot) to render the generated data. This will display the data in-line in the notebook.
End of explanation |
3,073 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
<p><div class="lev1 toc-item"><a href="#Generating-Fractal-From-Random-Points---The-Chaos-Game" data-toc-modified-id="Generating-Fractal-From-Random-Points---The-Chaos-Game-1"><span class="toc-item-num">1 </span>Generating Fractal From Random Points - The Chaos Game</a></div><div class="lev2 toc-item"><a href="#Initial-Definitions" data-toc-modified-id="Initial-Definitions-1.1"><span class="toc-item-num">1.1 </span>Initial Definitions</a></div><div class="lev2 toc-item"><a href="#Make-A-Fractal" data-toc-modified-id="Make-A-Fractal-1.2"><span class="toc-item-num">1.2 </span>Make A Fractal</a></div><div class="lev4 toc-item"><a href="#Regular-Polygons" data-toc-modified-id="Regular-Polygons-1.2.0.1"><span class="toc-item-num">1.2.0.1 </span>Regular Polygons</a></div><div class="lev4 toc-item"><a href="#Exploring-Further
Step1: Generating Fractal From Random Points - The Chaos Game
Initial Definitions
Step2: Make A Fractal
Step3: Regular Polygons
Step4: Exploring Further
Step5: Randomness on Large Scales
Step6: Learn More
Step7: For Barnsley's Fern | Python Code:
import pickle,glob
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%pylab inline
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Generating-Fractal-From-Random-Points---The-Chaos-Game" data-toc-modified-id="Generating-Fractal-From-Random-Points---The-Chaos-Game-1"><span class="toc-item-num">1 </span>Generating Fractal From Random Points - The Chaos Game</a></div><div class="lev2 toc-item"><a href="#Initial-Definitions" data-toc-modified-id="Initial-Definitions-1.1"><span class="toc-item-num">1.1 </span>Initial Definitions</a></div><div class="lev2 toc-item"><a href="#Make-A-Fractal" data-toc-modified-id="Make-A-Fractal-1.2"><span class="toc-item-num">1.2 </span>Make A Fractal</a></div><div class="lev4 toc-item"><a href="#Regular-Polygons" data-toc-modified-id="Regular-Polygons-1.2.0.1"><span class="toc-item-num">1.2.0.1 </span>Regular Polygons</a></div><div class="lev4 toc-item"><a href="#Exploring-Further:-Dimension" data-toc-modified-id="Exploring-Further:-Dimension-1.2.0.2"><span class="toc-item-num">1.2.0.2 </span>Exploring Further: Dimension</a></div><div class="lev4 toc-item"><a href="#Randomness-on-Large-Scales" data-toc-modified-id="Randomness-on-Large-Scales-1.2.0.3"><span class="toc-item-num">1.2.0.3 </span>Randomness on Large Scales</a></div><div class="lev2 toc-item"><a href="#Learn-More:" data-toc-modified-id="Learn-More:-1.3"><span class="toc-item-num">1.3 </span>Learn More:</a></div><div class="lev2 toc-item"><a href="#Modeling-Life" data-toc-modified-id="Modeling-Life-1.4"><span class="toc-item-num">1.4 </span>Modeling Life</a></div><div class="lev4 toc-item"><a href="#For-Barnsley's-Fern:" data-toc-modified-id="For-Barnsley's-Fern:-1.4.0.1"><span class="toc-item-num">1.4.0.1 </span>For Barnsley's Fern:</a></div>
End of explanation
def placeStartpoint(npts,fixedpts):
#Start Point
#start = (0.5,0.5)
start = (np.random.random(),np.random.random())
if fixedpts == []: #generates a set of random verticies
for i in range(npts):
randx = np.random.random()
randy = np.random.random()
point = (randx,randy)
fixedpts.append(point)
return (start,fixedpts)
def choosePts(npts,fixedpts,frac):
#chooses a vertex at random
#further rules could be applied here
roll = floor(npts*np.random.random())
point = fixedpts[int(roll)]
return point
def placeItteratePts(npts,itt,start,fixedpts,frac):
ittpts = []
for i in range(itt):
point = choosePts(npts,fixedpts,frac) #chooses a vertex at random
# halfway = ((point[0]+start[0])*frac,(point[1]+start[1])*frac) #calculates the halfway point between the starting point and the vertex
halfway = ((point[0]-start[0])*(1.0 - frac)+start[0],(point[1]-start[1])*(1.0 - frac)+start[1])
ittpts.append(halfway)
start = halfway #sets the starting point to the new point
return ittpts
def plotFractal(start,fixedpts,ittpts):
# set axes range
plt.xlim(-0.05,1.05)
plt.ylim(-0.05,1.05)
plt.axes().set_aspect('equal')
#plots the verticies
plt.scatter(transpose(fixedpts)[0],transpose(fixedpts)[1],alpha=0.8, c='black', edgecolors='none', s=30)
#plots the starting point
plt.scatter(start[0],start[1],alpha=0.8, c='red', edgecolors='none', s=30)
#plots the itterated points
plt.scatter(transpose(ittpts)[0],transpose(ittpts)[1],alpha=0.5, c='blue', edgecolors='none', s=2)
return
def GenerateFractal(npts,frac,itt,reg=False):
#Error Control
if npts < 1 or frac >= 1.0 or frac <= 0.0 or type(npts) is not int or type(frac) is not float or type(itt) is not int:
print("number of points must be a positive integer, compression fraction must be a positive float less than 1.0, itt must be a positive integer")
return
if frac > 0.5:
print("Warning: compression fractions over 1/2 do not lead to fractals")
    # Initialize vertices
if not reg:
fixedpts = [] #Random Verticies
else:
if npts == 3:
fixedpts = [(0.0,0.0),(1.0,0.0),(0.5,0.5*sqrt(3.0))] #Equilateral Triangle (npts = 3)
elif npts == 4:
fixedpts = [(0.0,0.0),(1.0,0.0),(1.0,1.0),(0.0,1.0)] #Square
elif npts == 5:
fixedpts = [(0.0,2./(1+sqrt(5.))),(0.5-2./(5+sqrt(5.)),0.0),(0.5,1.0),(0.5+2./(5+sqrt(5.)),0.0),(1.0,2./(1+sqrt(5.)))] #Regular Pentagon
elif npts == 6:
fixedpts = [(0.0,0.5),(1./4,0.5+.25*sqrt(3.)),(3./4,0.5+.25*sqrt(3.)),(1.0,0.5),(3./4,0.5-.25*sqrt(3.)),(1./4,0.5-.25*sqrt(3.))] #Regular Hexagon
elif npts == 8:
fixedpts = [(0.0,0.0),(1.0,0.0),(1.0,1.0),(0.0,1.0),(0.0,0.5),(1.0,0.5),(0.5,0.0),(0.5,1.0)] #Squares
elif npts == 2:
fixedpts = [(0.0,0.0),(1.0,1.0)] #Line
elif npts == 1:
fixedpts = [(0.5,0.5)] #Line
else:
print("No regular polygon stored with that many verticies, switching to default with randomly assigned verticies")
fixedpts = [] #Random Verticies
#Compression Fraction
# frac = 1.0/2.0 #Sierpinski's Triangle (npts = 3)
# frac = 1.0/2.0 #Sierpinski's "Square" (filled square, npts = 4)
# frac = 1.0/3.0 #Sierpinski's Pentagon (npts = 5)
# frac = 3.0/8.0 #Sierpinski's Hexagon (npts = 6)
    if len(fixedpts) != npts and len(fixedpts) != 0:
        print("The number of vertices doesn't match the length of the list of vertices. If you want the vertices generated at random, set fixedpts to []")
return
if len(fixedpts) != 0:
print("Fractal Dimension = {}".format(-log(npts)/log(frac)))
(start, fixedpts) = placeStartpoint(npts,fixedpts)
ittpts = placeItteratePts(npts,itt,start,fixedpts,frac)
plotFractal(start,fixedpts,ittpts)
return
Explanation: Generating Fractal From Random Points - The Chaos Game
Initial Definitions
End of explanation
# Call the GenerateFractal function with a number of vertices, a number of iterations, and the compression fraction
# The starting vertices are random by default. An optional input of True will set the vertices to those of a regular polygon.
GenerateFractal(7,.5,5000)
Explanation: Make A Fractal
End of explanation
GenerateFractal(3,.5,5000,True)
GenerateFractal(5,1./3,50000,True)
GenerateFractal(6,3./8,50000,True)
GenerateFractal(8,1./3,50000,True)
Explanation: Regular Polygons
End of explanation
GenerateFractal(1,.5,50000,True)
GenerateFractal(2,.5,50000,True)
GenerateFractal(4,.5,50000,True)
Explanation: Exploring Further: Dimension
End of explanation
GenerateFractal(10,.5,100)
GenerateFractal(10,.5,5000)
GenerateFractal(100,.5,5000)
GenerateFractal(100,.5,100000)
Explanation: Randomness on Large Scales
End of explanation
def makeFern(f,itt):
colname = ["percent","a","b","c","d","e","f"]
print(pd.DataFrame(data=np.array(f), columns = colname))
    x, y = 0.5, 0.0
    xypts = []
    # The transform probabilities should sum to 1 (within floating-point tolerance).
    if abs(sum(f[j][0] for j in range(len(f))) - 1.0) > 1e-10:
        print("Probabilities must sum to 1")
        return
for i in range(itt):
rand = (np.random.random())
cond = 0.0
for j in range(len(f)):
if (cond <= rand) and (rand <= (cond+f[j][0])):
                # Apply the affine map to the current point, updating x and y together
                # so that y is computed from the old x rather than the new one.
                x, y = f[j][1]*x + f[j][2]*y + f[j][5], f[j][3]*x + f[j][4]*y + f[j][6]
xypts.append((x,y))
cond = cond + f[j][0]
xmax,ymax = max(abs(transpose(xypts)[0])),max(abs(transpose(xypts)[1]))
plt.axes().set_aspect('equal')
color = transpose([[abs(r)/xmax for r in transpose(xypts)[0]],[abs(g)/ymax for g in transpose(xypts)[1]],[b/itt for b in range(itt)]])
plt.scatter(transpose(xypts)[0],transpose(xypts)[1],alpha=0.5, facecolors=color, edgecolors='none', s=1)
Explanation: Learn More:
Chaos Game Wiki
Numberphile Video
Chaos in the Classroom
Chaos Rules!
Barnsley Fern
Modeling Life
End of explanation
f = ((0.01,0.0,0.0,0.0,0.16,0.0,0.0),
(0.85,0.85,0.08,-0.08,0.85,0.0,1.60),
(0.07,0.20,-0.26,0.23,0.22,0.0,1.60),
(0.07,-0.15,0.28,0.26,0.24,0.0,0.44))
makeFern(f,5000)
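# For comparison, the canonical Barnsley fern coefficients listed in the table
# below (B = 0.04 and C = -0.04 in the second map); an illustrative variant to try.
f_canonical = ((0.01, 0.0, 0.0, 0.0, 0.16, 0.0, 0.0),
               (0.85, 0.85, 0.04, -0.04, 0.85, 0.0, 1.60),
               (0.07, 0.20, -0.26, 0.23, 0.22, 0.0, 1.60),
               (0.07, -0.15, 0.28, 0.26, 0.24, 0.0, 0.44))
# makeFern(f_canonical, 5000)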
Explanation: For Barnsley's Fern:
Use the following values
|Percent|A|B|C|D|E|F|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|0.01|0.0|0.0|0.0|0.16|0.0|0.0|
|0.85|0.85|0.04|-0.04|0.85|0.0|1.60|
|0.07|0.20|-0.26|0.23|0.22|0.0|1.60|
|0.07|-0.15|0.28|0.26|0.24|0.0|0.44|
Of course, this is only one solution, so try changing the values. Some values modify the curl, some change the thickness, others completely rearrange the structure.
End of explanation |
3,074 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Libraries and Packages
Step1: Connecting to National Data Service
Step2: Extracting Data of Midwestern States of the United States from 1992 - 2016.
The following query will extract data from the MongoDB instance and project only selected attributes such as structure number, yearBuilt, deck, year, superstructure, owner, countyCode, structure type, type of wearing surface, and substructure.
Step3: Filtration of NBI Data
The following routine removes missing data (values 'N' and 'NA') from the deck, substructure, and superstructure ratings, and also removes records with structure type 19 and type of wearing surface 6.
Step4: Particularly when determining a deterioration model for bridges, a sudden increase in a bridge's condition rating is sometimes observed over time; this sudden increase is attributed to reconstruction of the bridge. The NBI dataset contains an attribute to record such reconstruction. An observed increase in condition rating over time without any recorded reconstruction for that bridge in the NBI dataset suggests that the dataset is not updated consistently. In order to have an accurate deterioration model, such unrecorded reconstruction activities must be accounted for in the deterioration model of the bridges.
Step5: A utility function to plot the graphs.
Step6: The following script will select all the bridges in the midwestern United States and filter out missing and unneeded data. The script also reports how much of the data is being filtered.
Step7: The following figures show the cumulative distribution function (CDF) of the probability of reconstruction over a bridge's lifespan for bridges in the midwestern United States; as bridges grow older, the probability of reconstruction increases.
Step8: The figure below presents the CDF of the probability of reconstruction for bridges in the midwestern United States.
Step9: The following figures provide the probability of reconstruction at every age. Note that this is not a cumulative probability function. The roughly constant number of reconstructions each year can be explained by various factors.
One particularly interesting reason could be the funding provided to reconstruct bridges; this explains why some of the states show an almost perfectly linear curve.
Step10: A key observation in this investigation of several states is that a constant number of bridges are reconstructed every year; this could be an effect of a fixed budget allocated for reconstruction by the state. This also highlights the fact that not all bridges that might require reconstruction are reconstructed.
To understand this phenomenon more clearly, the following figure presents the probability of reconstruction vs. age for each individual state in the midwestern United States. | Python Code:
import pymongo
from pymongo import MongoClient
import time
import pandas as pd
import numpy as np
import seaborn as sns
from matplotlib.pyplot import *
import matplotlib.pyplot as plt
import folium
import datetime as dt
import random as rnd
import warnings
import datetime as dt
import csv
%matplotlib inline
Explanation: Libraries and Packages
End of explanation
warnings.filterwarnings(action="ignore")
Client = MongoClient("mongodb://bridges:[email protected]/bridge")
db = Client.bridge
collection = db["bridges"]
Explanation: Connecting to National Data Service: The Lab Benchwork's NBI - MongoDB instance
End of explanation
def getData(state):
pipeline = [{"$match":{"$and":[{"year":{"$gt":1991, "$lt":2017}},{"stateCode":state}]}},
{"$project":{"_id":0,
"structureNumber":1,
"yearBuilt":1,
"yearReconstructed":1,
"deck":1, ## Rating of deck
"year":1,
'owner':1,
"countyCode":1,
"substructure":1, ## rating of substructure
"superstructure":1, ## rating of superstructure
"Structure Type":"$structureTypeMain.typeOfDesignConstruction",
"Type of Wearing Surface":"$wearingSurface/ProtectiveSystem.typeOfWearingSurface",
}}]
dec = collection.aggregate(pipeline)
conditionRatings = pd.DataFrame(list(dec))
## Creating new column: Age
conditionRatings['Age'] = conditionRatings['year']- conditionRatings['yearBuilt']
return conditionRatings
Explanation: Extracting Data of Midwestern States of the United States from 1992 - 2016.
The following query will extract data from the MongoDB instance and project only selected attributes such as structure number, yearBuilt, deck, year, superstructure, owner, countyCode, structure type, type of wearing surface, and substructure.
End of explanation
## filter and convert them into integers
def filterConvert(conditionRatings):
before = len(conditionRatings)
print("Total Records before filteration: ",len(conditionRatings))
conditionRatings = conditionRatings.loc[~conditionRatings['deck'].isin(['N','NA'])]
conditionRatings = conditionRatings.loc[~conditionRatings['substructure'].isin(['N','NA'])]
conditionRatings = conditionRatings.loc[~conditionRatings['superstructure'].isin(['N','NA'])]
conditionRatings = conditionRatings.loc[~conditionRatings['Structure Type'].isin([19])]
conditionRatings = conditionRatings.loc[~conditionRatings['Type of Wearing Surface'].isin(['6'])]
after = len(conditionRatings)
print("Total Records after filteration: ",len(conditionRatings))
print("Difference: ", before - after)
return conditionRatings
Explanation: Filtration of NBI Data
The following routine removes missing data (values 'N' and 'NA') from the deck, substructure, and superstructure ratings, and also removes records with structure type 19 and type of wearing surface 6.
End of explanation
## make it into a function
def findSurvivalProbablities(conditionRatings):
i = 1
j = 2
probabilities = []
while j < 121:
v = list(conditionRatings.loc[conditionRatings['Age'] == i]['deck'])
k = list(conditionRatings.loc[conditionRatings['Age'] == i]['structureNumber'])
Age1 = {key:int(value) for key, value in zip(k,v)}
#v = conditionRatings.loc[conditionRatings['Age'] == j]
v_2 = list(conditionRatings.loc[conditionRatings['Age'] == j]['deck'])
k_2 = list(conditionRatings.loc[conditionRatings['Age'] == j]['structureNumber'])
Age2 = {key:int(value) for key, value in zip(k_2,v_2)}
intersectedList = list(Age1.keys() & Age2.keys())
reconstructed = 0
for structureNumber in intersectedList:
if Age1[structureNumber] < Age2[structureNumber]:
if (Age1[structureNumber] - Age2[structureNumber]) < -1:
reconstructed = reconstructed + 1
try:
probability = reconstructed / len(intersectedList)
except ZeroDivisionError:
probability = 0
probabilities.append(probability*100)
i = i + 1
j = j + 1
return probabilities
Explanation: Particularly when determining a deterioration model for bridges, a sudden increase in a bridge's condition rating is sometimes observed over time; this sudden increase is attributed to reconstruction of the bridge. The NBI dataset contains an attribute to record such reconstruction. An observed increase in condition rating over time without any recorded reconstruction for that bridge in the NBI dataset suggests that the dataset is not updated consistently. In order to have an accurate deterioration model, such unrecorded reconstruction activities must be accounted for in the deterioration model of the bridges.
End of explanation
def plotCDF(cumsum_probabilities):
fig = plt.figure(figsize=(15,8))
ax = plt.axes()
    plt.title('CDF of Reconstruction vs Age')
    plt.xlabel('Age')
    plt.ylabel('CDF of Reconstruction')
plt.yticks([0,10,20,30,40,50,60,70,80,90,100])
plt.ylim(0,100)
x = [i for i in range(1,120)]
y = cumsum_probabilities
ax.plot(x,y)
return plt.show()
Explanation: A utility function to plot the graphs.
End of explanation
states = ['25','09','23','33','44','50','34','36','42']
# Mapping state code to state abbreviation
stateNameDict = {'25':'MA',
'04':'AZ',
'08':'CO',
'38':'ND',
'09':'CT',
'19':'IA',
'26':'MI',
'48':'TX',
'35':'NM',
'17':'IL',
'51':'VA',
'23':'ME',
'16':'ID',
'36':'NY',
'56':'WY',
'29':'MO',
'39':'OH',
'28':'MS',
'11':'DC',
'21':'KY',
'18':'IN',
'06':'CA',
'47':'TN',
'12':'FL',
'24':'MD',
'34':'NJ',
'46':'SD',
'13':'GA',
'55':'WI',
'30':'MT',
'54':'WV',
'15':'HI',
'32':'NV',
'37':'NC',
'10':'DE',
'33':'NH',
'44':'RI',
'50':'VT',
'42':'PA',
'05':'AR',
'20':'KS',
'45':'SC',
'22':'LA',
'40':'OK',
'72':'PR',
'41':'OR',
'27':'MN',
'53':'WA',
'01':'AL',
'31':'NE',
'02':'AK',
'49':'UT'
}
def getProbs(states, stateNameDict):
    # Initializing the dataframes for deck, superstructure and substructure
df_prob_recon = pd.DataFrame({'Age':range(1,61)})
df_cumsum_prob_recon = pd.DataFrame({'Age':range(1,61)})
for state in states:
conditionRatings_state = getData(state)
stateName = stateNameDict[state]
print("STATE - ",stateName)
conditionRatings_state = filterConvert(conditionRatings_state)
print("\n")
probabilities_state = findSurvivalProbablities(conditionRatings_state)
cumsum_probabilities_state = np.cumsum(probabilities_state)
df_prob_recon[stateName] = probabilities_state[:60]
df_cumsum_prob_recon[stateName] = cumsum_probabilities_state[:60]
# df_prob_recon.set_index('Age', inplace = True)
# df_cumsum_prob_recon.set_index('Age', inplace = True)
return df_prob_recon, df_cumsum_prob_recon
df_prob_recon, df_cumsum_prob_recon = getProbs(states, stateNameDict)
# save dataframes into csv files
df_prob_recon
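# A sketch of the save step suggested by the comment above; the file names are
# placeholders, not part of the original analysis.
df_prob_recon.to_csv('prob_reconstruction_by_age.csv', index=False)
df_cumsum_prob_recon.to_csv('cumsum_prob_reconstruction_by_age.csv', index=False)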
Explanation: The following script will select all the bridges in the midwestern United States and filter out missing and unneeded data. The script also reports how much of the data is being filtered.
End of explanation
plt.figure(figsize=(12,8))
plt.title("CDF Probability of Reconstruction vs Age")
palette = [
'blue', 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey', 'red', 'silver', 'purple', 'gold', 'black','olive'
]
linestyles =[':','-.','--','-',':','-.','--','-',':','-.','--','-']
for num, state in enumerate(df_cumsum_prob_recon.drop('Age', axis = 1)):
plt.plot(df_cumsum_prob_recon[state], color = palette[num], linestyle = linestyles[num], linewidth = 4)
plt.xlabel('Age'); plt.ylabel('Probablity of Reconstruction');
plt.legend([state for state in df_cumsum_prob_recon.drop('Age', axis = 1)], loc='upper left', ncol = 2)
plt.ylim(1,60)
plt.show()
Explanation: The following figures show the cumulative distribution function (CDF) of the probability of reconstruction over a bridge's lifespan for bridges in the midwestern United States; as bridges grow older, the probability of reconstruction increases.
End of explanation
plt.figure(figsize = (16,12))
plt.xlabel('Age')
plt.ylabel('Mean')
# Initialize the figure
plt.style.use('seaborn-darkgrid')
# create a color palette
palette = [
'blue', 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey', 'red', 'silver', 'purple', 'gold', 'black','olive'
]
# multiple line plot
num = 1
linestyles = [':','-.','--','-',':','-.','--','-',':','-.','--','-']
for n, column in enumerate(df_cumsum_prob_recon.drop('Age', axis=1)):
# Find the right spot on the plot
plt.subplot(4,3, num)
# Plot the lineplot
plt.plot(df_cumsum_prob_recon['Age'], df_cumsum_prob_recon[column], linestyle = linestyles[n] , color=palette[num], linewidth=4, alpha=0.9, label=column)
# Same limits for everybody!
plt.xlim(1,60)
plt.ylim(1,100)
# Not ticks everywhere
if num in range(10) :
plt.tick_params(labelbottom='off')
if num not in [1,4,7,10]:
plt.tick_params(labelleft='off')
# Add title
plt.title(column, loc='left', fontsize=12, fontweight=0, color=palette[num])
plt.text(30, -1, 'Age', ha='center', va='center')
plt.text(1, 50, 'Probability', ha='center', va='center', rotation='vertical')
num = num + 1
# general title
plt.suptitle("CDF Probability of Reconstruction vs Age", fontsize=13, fontweight=0, color='black', style='italic', y=1.02)
Explanation: The figure below presents the CDF of the probability of reconstruction for bridges in the midwestern United States.
End of explanation
plt.figure(figsize=(12,8))
plt.title("Probability of Reconstruction vs Age")
palette = [
'blue', 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey', 'red', 'silver', 'purple', 'gold', 'black','olive'
]
linestyles =[':','-.','--','-',':','-.','--','-',':','-.','--','-']
for num, state in enumerate(df_cumsum_prob_recon.drop('Age', axis = 1)):
plt.plot(df_prob_recon[state], color = palette[num], linestyle = linestyles[num], linewidth = 4)
plt.xlabel('Age'); plt.ylabel('Probablity of Reconstruction');
plt.legend([state for state in df_cumsum_prob_recon.drop('Age', axis = 1)], loc='upper left', ncol = 2)
plt.ylim(1,25)
plt.show()
Explanation: The following figures provide the probability of reconstruction at every age. Note that this is not a cumulative probability function. The roughly constant number of reconstructions each year can be explained by various factors.
One particularly interesting reason could be the funding provided to reconstruct bridges; this explains why some of the states show an almost perfectly linear curve.
End of explanation
plt.figure(figsize = (16,12))
plt.xlabel('Age')
plt.ylabel('Mean')
# Initialize the figure
plt.style.use('seaborn-darkgrid')
# create a color palette
palette = [
'blue', 'blue', 'green', 'magenta', 'cyan', 'brown', 'grey', 'red', 'silver', 'purple', 'gold', 'black','olive'
]
# multiple line plot
num = 1
linestyles = [':','-.','--','-',':','-.','--','-',':','-.','--','-']
for n, column in enumerate(df_prob_recon.drop('Age', axis=1)):
# Find the right spot on the plot
plt.subplot(4,3, num)
# Plot the lineplot
plt.plot(df_prob_recon['Age'], df_prob_recon[column], linestyle = linestyles[n] , color=palette[num], linewidth=4, alpha=0.9, label=column)
# Same limits for everybody!
plt.xlim(1,60)
plt.ylim(1,25)
# Not ticks everywhere
if num in range(10) :
plt.tick_params(labelbottom='off')
if num not in [1,4,7,10]:
plt.tick_params(labelleft='off')
# Add title
plt.title(column, loc='left', fontsize=12, fontweight=0, color=palette[num])
plt.text(30, -1, 'Age', ha='center', va='center')
plt.text(1, 12.5, 'Probability', ha='center', va='center', rotation='vertical')
num = num + 1
# general title
plt.suptitle("Probability of Reconstruction vs Age", fontsize=13, fontweight=0, color='black', style='italic', y=1.02)
Explanation: A key observation in this investigation of several states is that a constant number of bridges are reconstructed every year; this could be an effect of a fixed budget allocated for reconstruction by the state. This also highlights the fact that not all bridges that might require reconstruction are reconstructed.
To understand this phenomenon more clearly, the following figure presents the probability of reconstruction vs. age for each individual state in the midwestern United States.
End of explanation |
3,075 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Non-parametric embedding with UMAP.
This notebook shows an example of a non-parametric embedding using the same training loops as are used with a parametric embedding.
load data
Step1: create parametric umap model
Step2: plot results
Step3: plotting loss | Python Code:
from tensorflow.keras.datasets import mnist
(train_images, Y_train), (test_images, Y_test) = mnist.load_data()
train_images = train_images.reshape((train_images.shape[0], -1))/255.
test_images = test_images.reshape((test_images.shape[0], -1))/255.
Explanation: Non-parametric embedding with UMAP.
This notebook shows an example of a non-parametric embedding using the same training loops as are used with a parametric embedding.
load data
End of explanation
from umap.parametric_umap import ParametricUMAP
embedder = ParametricUMAP(parametric_embedding=False, verbose=True)
embedding = embedder.fit_transform(train_images)
Explanation: create parametric umap model
End of explanation
import matplotlib.pyplot as plt
fig, ax = plt.subplots( figsize=(8, 8))
sc = ax.scatter(
embedding[:, 0],
embedding[:, 1],
c=Y_train.astype(int),
cmap="tab10",
s=0.1,
alpha=0.5,
rasterized=True,
)
ax.axis('equal')
ax.set_title("UMAP in Tensorflow embedding", fontsize=20)
plt.colorbar(sc, ax=ax);
Explanation: plot results
End of explanation
embedder._history.keys()
fig, ax = plt.subplots()
ax.plot(embedder._history['loss'])
ax.set_ylabel('Cross Entropy')
ax.set_xlabel('Epoch')
Explanation: plotting loss
End of explanation |
3,076 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Loading
Get some data to play with
Step1: Data is always a numpy array (or sparse matrix) of shape (n_samples, n_features)
Split the data to get going
Step2: Exercises
Load the iris dataset from the sklearn.datasets module using the load_iris function.
The function returns a dictionary-like object that has the same attributes as digits.
What is the number of classes, features and data points in this dataset?
Use a scatterplot to visualize the dataset.
You can look at DESCR attribute to learn more about the dataset.
Usually data doesn't come in that nice a format. You can find the csv file that contains the iris dataset at the following path | Python Code:
from sklearn.datasets import load_digits
import numpy as np
digits = load_digits()
digits.keys()
digits.data.shape
digits.target.shape
digits.target
np.bincount(digits.target)
import matplotlib.pyplot as plt
%matplotlib notebook
# you can also use matplotlib inline
plt.matshow(digits.data[0].reshape(8, 8), cmap=plt.cm.Greys)
digits.target[0]
fig, axes = plt.subplots(4, 4)
for x, y, ax in zip(digits.data, digits.target, axes.ravel()):
ax.set_title(y)
ax.imshow(x.reshape(8, 8), cmap="gray_r")
ax.set_xticks(())
ax.set_yticks(())
plt.tight_layout()
Explanation: Data Loading
Get some data to play with
End of explanation
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(digits.data,
digits.target)
Explanation: Data is always a numpy array (or sparse matrix) of shape (n_samples, n_features)
Split the data to get going
End of explanation
# %load solutions/load_iris.py
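# One possible solution (a sketch -- the official solutions file may differ).
from sklearn.datasets import load_iris
iris = load_iris()
print(iris.data.shape)              # (150, 4): 150 data points, 4 features
print(len(iris.target_names))       # 3 classes
plt.figure()
plt.scatter(iris.data[:, 0], iris.data[:, 1], c=iris.target)
plt.xlabel(iris.feature_names[0])
plt.ylabel(iris.feature_names[1])
# Loading the bundled csv with pandas (the path and header layout can vary
# between sklearn versions, so treat this as illustrative):
import os
import pandas as pd
import sklearn.datasets
iris_path = os.path.join(sklearn.datasets.__path__[0], 'data', 'iris.csv')
iris_df = pd.read_csv(iris_path)
print(iris_df.head())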
Explanation: Exercises
Load the iris dataset from the sklearn.datasets module using the load_iris function.
The function returns a dictionary-like object that has the same attributes as digits.
What is the number of classes, features and data points in this dataset?
Use a scatterplot to visualize the dataset.
You can look at DESCR attribute to learn more about the dataset.
Usually data doesn't come in that nice a format. You can find the csv file that contains the iris dataset at the following path:
python
import sklearn.datasets
import os
iris_path = os.path.join(sklearn.datasets.__path__[0], 'data', 'iris.csv')
Try loading the data from there using pandas pd.read_csv method.
End of explanation |
3,077 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 DeepMind Technologies Limited.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
https
Step2: If you choose chain_length 3 the data will look like this
Step3: Load the data.
Step4: Looking at what we loaded. | Python Code:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import tensorflow as tf
import collections
import os
from google.colab import auth
auth.authenticate_user()
#@title Choices about the dataset you want to load.
# Make choices about the dataset here.
chain_length = 3 #@param {type:"slider", min:3, max:4, step:1}
mode = 'valid' #@param ['train', 'test', 'valid']
Explanation: Copyright 2020 DeepMind Technologies Limited.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
The dataset used for the Paired associate inference task
This is the dataset used for the paired associated inference task in
"MEMO: A Deep Network for Flexible Combination of Episodic Memories
".
End of explanation
# Train has 500 shards, valid 150, test 100.
if mode == 'train':
num_shards = 500
elif mode == 'test':
num_shards = 100
elif mode == 'valid':
num_shards = 150
DatasetInfo = collections.namedtuple(
'DatasetInfo',
['basepath', 'size', 'chain_length']
)
_DATASETS = dict(
memo=DatasetInfo(
basepath=mode,
size=num_shards,
chain_length=chain_length)
)
def _get_dataset_files(dataset_info, root):
    """Generates lists of files for a given dataset version."""
basepath = dataset_info.basepath
base = os.path.join(root, basepath)
num_files = dataset_info.size
length = len(str(num_files))
template = 'trials-{:0%d}-of-{:0%d}' % (5, 5)
return [os.path.join(base, template.format(i, num_files))
for i in range(num_files)]
def parser_tf_examples(raw_data, chain_length=chain_length):
if chain_length == 3:
feature_map = {
'trials' : tf.io.FixedLenFeature(
shape=[48, 3, 1000],
dtype=tf.float32),
'correct_answer': tf.io.FixedLenFeature(
shape=[48],
dtype=tf.int64),
'difficulty': tf.io.FixedLenFeature(
shape=[48],
dtype=tf.int64),
'trial_type': tf.io.FixedLenFeature(
shape=[48],
dtype=tf.int64),
'memory': tf.io.FixedLenFeature(
shape=[32, 2, 1000],
dtype=tf.float32),
}
elif chain_length == 4:
feature_map = {
'trials' : tf.io.FixedLenFeature(
shape=[96, 3, 1000],
dtype=tf.float32),
'correct_answer': tf.io.FixedLenFeature(
shape=[96],
dtype=tf.int64),
'difficulty': tf.io.FixedLenFeature(
shape=[96],
dtype=tf.int64),
'trial_type': tf.io.FixedLenFeature(
shape=[96],
dtype=tf.int64),
'memory': tf.io.FixedLenFeature(
shape=[48, 2, 1000],
dtype=tf.float32),
}
example = tf.io.parse_example(raw_data, feature_map)
batch = [example["trials"],
example["correct_answer"],
example["difficulty"],
example["trial_type"],
example["memory"]]
return batch
Explanation: If you choose chain_length 3 the data will look like this:
trials shape: (48, 3, 1000); 48 trials x the target picture, left and right option x picture dimensions.
correct answer: (48); whether the left or right picture is correct.
difficulty (48); How far apart are the target picture and the two options.(e.g. AB are 0 steps apart, AC is 1)
trial type (48); See below.
memory shape (32, 2, 1000); Content of memory store, 32 pairs of images.
Trial types:
* 1: AB
* 2: BC
* 3: AC
If you choose chain_length 4 the data will look like this:
* trials: (96, 3, 1000)
* correct answer: (96)
* difficulty: (96)
* trial type: (96)
* memory shape: (48, 2, 1000)
Trial types:
* 1: AB
* 2: BC
* 3: AC
* 4: CD
* 5: BD
* 6: AD
End of explanation
dataset_info = 'memo'
root = 'gs://deepmind-memo/length' + str(chain_length) + '/'
num_epochs = 100
shuffle_buffer_size = 150
num_readers = 4
dataset_info = _DATASETS['memo']
filenames = _get_dataset_files(dataset_info, root)
num_map_threads = 4
batch_size = 10
data = tf.data.Dataset.from_tensor_slices(filenames)
data = data.repeat(num_epochs)
data = data.shuffle(shuffle_buffer_size)
data = data.interleave(tf.data.TFRecordDataset,
cycle_length=num_readers, block_length=1)
data = data.shuffle(shuffle_buffer_size)
data = data.map(parser_tf_examples, num_parallel_calls=num_map_threads)
data = data.batch(batch_size)
Explanation: Load the data.
End of explanation
iterator = data.__iter__()
element = iterator.get_next()
print(element[0].shape) # trials
print(element[1].shape) # correct answer
print(element[2].shape) # difficulty
print(element[3].shape) # trialtype
print(element[4].shape) # memory
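# For reference, the trial-type codes documented above (chain_length == 3);
# chain_length == 4 additionally uses 4: CD, 5: BD, 6: AD.
trial_type_names = {1: 'AB', 2: 'BC', 3: 'AC', 4: 'CD', 5: 'BD', 6: 'AD'}
trial_types = element[3].numpy()
print([trial_type_names.get(int(t), '?') for t in trial_types[0][:10]])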
Explanation: Looking at what we loaded.
End of explanation |
3,078 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters
Step8: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
Step9: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Step10: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. | Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
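# Added example: undoing the scaling later is just the inverse transform.
cnt_mean, cnt_std = scaled_features['cnt']
print((data['cnt'] * cnt_std + cnt_mean).head())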
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
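For example, once the network produces scaled predictions for cnt, they can be converted back to ride counts like this (predictions here stands for whatever scaled output the network returns; the prediction cell at the end of the notebook does exactly this):
mean, std = scaled_features['cnt']
ride_counts = predictions * std + mean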
End of explanation
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
#self.activation_function = lambda x : 0 # Replace 0 with your sigmoid calculation.
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
def sigmoid(x):
            return 1. / (1. + np.exp(-x))  # sigmoid activation
self.activation_function = sigmoid
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with your calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
error = y - final_outputs # Output layer error is the difference between desired target and actual output.
            output_error_term = error * 1.0  # the output activation is f(x) = x, so its derivative is 1
# TODO: Calculate the hidden layer's contribution to the error
hidden_error = np.dot(output_error_term, self.weights_hidden_to_output.T)
# TODO: Backpropagated error terms - Replace these values with your calculations.
hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)
# Weight step (input to hidden)
delta_weights_i_h += hidden_error_term * X[:, None]
# Weight step (hidden to output)
delta_weights_h_o += output_error_term * hidden_outputs[:, None]
# TODO: Update the weights - Replace these values with your calculations.
self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression; the output of that node is the same as its input. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, taking the threshold into account, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
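In equations, using the weight matrices defined in the class above and $\sigma$ for the sigmoid, the forward pass computes:
$$h = \sigma(x \, W_{\text{in}\to\text{hidden}}), \qquad \hat{y} = h \, W_{\text{hidden}\to\text{out}}$$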
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
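Once these pieces are in place, a quick smoke test (reusing the small arrays from the unit-test cell below) is a good sanity check before full training:
network = NeuralNetwork(3, 2, 1, 0.5)
X = np.array([[0.5, -0.2, 0.1]])
y = np.array([[0.4]])
network.train(X, y)
print(network.run(X))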
End of explanation
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Unit tests
Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly before you start trying to train it. These tests must all be successful to pass the project.
End of explanation
import sys  # needed for the progress display in the training loop below
### Set the hyperparameters here ###
iterations = 15000
learning_rate = 0.1
hidden_nodes = 6
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
    X, y = train_features.loc[batch].values, train_targets.loc[batch]['cnt']  # .loc replaces the removed .ix indexer
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of iterations
This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase.
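One simple way to pick this value is to train with a generous iteration count and then check where the validation loss bottoms out, for example:
best_iteration = np.argmin(losses['validation'])
print(best_iteration, losses['validation'][best_iteration])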
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
More hidden nodes give the model more capacity, which generally improves its predictions up to a point. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough capacity to learn, and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in the number of hidden units you choose.
End of explanation
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.loc[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation |
3,079 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Skill Clustering by Matrix Factorization
Steps of skill clustering
Step1: First, we try it on count matrix as the matrix is already avail.
NMF on count matrix
Step2: There are various choices to initialize NMF including random and by SVD. We try random NMF, denoted as rnmf. | Python Code:
import my_util as my_util
import cluster_skill_helpers as cluster_skill_helpers
from cluster_skill_helpers import *
import random as rd
HOME_DIR = 'd:/larc_projects/job_analytics/'
SKILL_DAT = HOME_DIR + 'data/clean/skill_cluster/'
SKILL_RES = HOME_DIR + 'results/' + 'skill_cluster/new/'
Explanation: Skill Clustering by Matrix Factorization
Steps of skill clustering:
+ Obtain a representation of skills in a space of latent factors: this can be done by Matrix Factorization (MF) approach
+ Measure distance between skills in the latent space
+ Cluster skills based on their distance in the space
We can try MF on count matrix or tfidf matrix. However, on building these matrices, we need to take of "duplication" problem.
End of explanation
# Load count matrix
skill_df = pd.read_csv(SKILL_DAT + 'skill_index.csv')
skills = skill_df['skill']
doc_skill = mmread(SKILL_DAT + 'doc_skill.mtx')
Explanation: First, we try it on count matrix as the matrix is already avail.
NMF on count matrix
End of explanation
ks = range(10, 60, 10)
rnmf = {k: NMF(n_components=k, random_state=0) for k in ks}
print( "Fitting NMF using random initialization..." )
print('No. of factors, Error, Running time')
rnmf_error = []
for k in ks:
t0 = time()
rnmf[k].fit(doc_skill)
elapsed = time() - t0
err = rnmf[k].reconstruction_err_
print('%d, %0.1f, %0.1fs' %(k, err, elapsed))
rnmf_error.append(err)
# end
# Save learned factor-skill matrices
nmf_dir = SKILL_RES + 'nmf/'
for k in ks:
fname = '{}factor_skill.csv'.format(k)
pd.DataFrame(rnmf[k].components_).to_csv(nmf_dir + fname, index=False)
print('saved {}factor-skill matrix'.format(k))
Explanation: There are various choices to initialize NMF including random and by SVD. We try random NMF, denoted as rnmf.
End of explanation |
3,080 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Face verification using Siamese Networks
Goals
train a network for face similarity using siamese networks
work data augmentation, generators and hard negative mining
use the model on your picture
Dataset
We will be using Labeled Faces in the Wild (LFW) dataset available openly at http
Step1: Processing the dataset
The dataset consists of folders corresponding to each identity. The folder name is the name of the person.
We map each class (identity) to an integer id, and build mappings as dictionaries name_to_classid and classid_to_name
Step2: In each directory, there is one or more images corresponding to the identity. We map each image path with an integer id, then build a few dictionaries
Step3: The following histogram shows the number of images per class
Step4: Siamese nets
A siamese net takes as input two images $x_1$ and $x_2$ and outputs a single value which corresponds to the similarity between $x_1$ and $x_2$, as follows
Step5: Let's build positive and a negative pairs for class 5
Step6: Now that we have a way to compute the pairs, let's load all the possible JPEG-compressed image files into a single numpy array in RAM. There are more than 1000 images, so 100MB of RAM will be used, which will not cause any issue.
Note
Step7: The following function builds a large number of positives/negatives pairs (train and test)
Step9: Data augmentation and generator
We're building a generator, which will modify images through dataaugmentation on the fly.
The generator enables
We use iaa library which offers tremendous possibilities for data augmentation
Step10: Exercise
- Add your own dataaugmentations in the process. You may look at
Step11: Simple convolutional model
Step12: Exercise
- Build a convolutional model which transforms the input to a fixed dimension $d = 50$
- You may alternate convolutions and maxpooling and layers,
- Use the relu activation on convolutional layers,
- At the end, Flatten the last convolutional output and plug it into a dense layer.
- Feel free to use some Dropout prior to the Dense layer.
Use between 32 and 128 channels on convolutional layers. Be careful
Step13: Exercise
Assemble the siamese model by combining
Step14: We can now fit the model and checkpoint it to keep the best version. We can expect to get a model with around 0.75 as "accuracy_sim" on the validation set
Step15: Exercise
Finding the most similar images
Run the shared_conv model on all images;
(Optional) add Charles and Olivier's faces from the test_images folder to the test set;
build a most_sim function which returns the most similar vectors to a given vector.
Step16: Most similar faces
The following enables to display an image alongside with most similar images
Step17: Note that this model is still underfitting, even when running queries against the training set. Even if the results are not correct, the mistakes often seem to "make sense" though.
Running a model to convergence on higher resolution images, possibly with a deeper and wider convolutional network might yield better results. In the next notebook we will try with a better loss and with hard negative mining.
Playing with the camera
- The following code enables you to find the most similar faces to yours
- What do you observe?
- Try to think of reasons why it doesn't work very well, and how you could improve it. | Python Code:
import tensorflow as tf
# If you have a GPU, execute the following lines to restrict the amount of VRAM used:
gpus = tf.config.experimental.list_physical_devices('GPU')
if len(gpus) >= 1:
print("Using GPU {}".format(gpus[0]))
tf.config.experimental.set_visible_devices(gpus[0], 'GPU')
else:
print("Using CPU")
import os
import random
import itertools
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense, Input, Concatenate, Lambda, Dot
from tensorflow.keras.layers import Conv2D, MaxPool2D, GlobalAveragePooling2D, Flatten, Dropout
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
Explanation: Face verification using Siamese Networks
Goals
train a network for face similarity using siamese networks
work data augmentation, generators and hard negative mining
use the model on your picture
Dataset
We will be using Labeled Faces in the Wild (LFW) dataset available openly at http://vis-www.cs.umass.edu/lfw/
For computing purposes, we'll only restrict ourselves to a subpart of the dataset. You're welcome to train on the whole dataset on GPU, by setting USE_SUBSET=False in the following cells,
We will also load pretrained weights
End of explanation
PATH = "lfw/lfw-deepfunneled/"
USE_SUBSET = True
dirs = sorted(os.listdir(PATH))
if USE_SUBSET:
dirs = dirs[:500]
name_to_classid = {d: i for i, d in enumerate(dirs)}
classid_to_name = {v: k for k, v in name_to_classid.items()}
num_classes = len(name_to_classid)
print("number of classes: ", num_classes)
Explanation: Processing the dataset
The dataset consists of folders corresponding to each identity. The folder name is the name of the person.
We map each class (identity) to an integer id, and build mappings as dictionaries name_to_classid and classid_to_name
End of explanation
# read all directories
img_paths = {c: [PATH + subfolder + "/" + img
for img in sorted(os.listdir(PATH + subfolder))]
for subfolder, c in name_to_classid.items()}
# retrieve all images
all_images_path = []
for img_list in img_paths.values():
all_images_path += img_list
# map to integers
path_to_id = {v: k for k, v in enumerate(all_images_path)}
id_to_path = {v: k for k, v in path_to_id.items()}
all_images_path[:10]
len(all_images_path)
# build mappings between images and class
classid_to_ids = {k: [path_to_id[path] for path in v] for k, v in img_paths.items()}
id_to_classid = {v: c for c, imgs in classid_to_ids.items() for v in imgs}
dict(list(id_to_classid.items())[0:13])
Explanation: In each directory, there is one or more images corresponding to the identity. We map each image path with an integer id, then build a few dictionaries:
- mappings from imagepath and image id: path_to_id and id_to_path
- mappings from class id to image ids: classid_to_ids and id_to_classid
End of explanation
plt.hist([len(v) for k, v in classid_to_ids.items()], bins=range(1, 10))
plt.show()
np.median([len(ids) for ids in classid_to_ids.values()])
[(classid_to_name[x], len(classid_to_ids[x]))
for x in np.argsort([len(v) for k, v in classid_to_ids.items()])[::-1][:10]]
Explanation: The following histogram shows the number of images per class: there are many classes with only one image.
These classes are only useful as negatives, since we can't make a positive pair with them.
End of explanation
# build pairs of positive image ids for a given classid
def build_pos_pairs_for_id(classid, max_num=50):
imgs = classid_to_ids[classid]
if len(imgs) == 1:
return []
pos_pairs = list(itertools.combinations(imgs, 2))
random.shuffle(pos_pairs)
return pos_pairs[:max_num]
# build pairs of negative image ids for a given classid
def build_neg_pairs_for_id(classid, classes, max_num=20):
imgs = classid_to_ids[classid]
neg_classes_ids = random.sample(classes, max_num+1)
if classid in neg_classes_ids:
neg_classes_ids.remove(classid)
neg_pairs = []
for id2 in range(max_num):
img1 = imgs[random.randint(0, len(imgs) - 1)]
imgs2 = classid_to_ids[neg_classes_ids[id2]]
img2 = imgs2[random.randint(0, len(imgs2) - 1)]
neg_pairs += [(img1, img2)]
return neg_pairs
Explanation: Siamese nets
A siamese net takes as input two images $x_1$ and $x_2$ and outputs a single value which corresponds to the similarity between $x_1$ and $x_2$, as follows:
<img src="images/siamese.svg" style="width: 600px;" />
In order to train such a system, one has to build positive and negative pairs for the training.
End of explanation
build_pos_pairs_for_id(5, max_num=10)
build_neg_pairs_for_id(5, list(range(num_classes)), max_num=6)
Explanation: Let's build positive and a negative pairs for class 5
End of explanation
from skimage.io import imread
from skimage.transform import resize
def resize100(img):
return resize(
img, (100, 100), preserve_range=True, mode='reflect', anti_aliasing=True
)[20:80, 20:80, :]
def open_all_images(id_to_path):
all_imgs = []
for path in id_to_path.values():
all_imgs += [np.expand_dims(resize100(imread(path)), 0)]
return np.vstack(all_imgs)
all_imgs = open_all_images(id_to_path)
all_imgs.shape
print(f"{all_imgs.nbytes / 1e6} MB")
Explanation: Now that we have a way to compute the pairs, let's load all the possible JPEG-compressed image files into a single numpy array in RAM. There are more than 1000 images, so 100MB of RAM will be used, which will not cause any issue.
Note: if you plan on opening more images, you should not open them all at once, and rather build a generator
End of explanation
def build_train_test_data(split=0.8):
listX1 = []
listX2 = []
listY = []
split = int(num_classes * split)
# train
for class_id in range(split):
pos = build_pos_pairs_for_id(class_id)
neg = build_neg_pairs_for_id(class_id, list(range(split)))
for pair in pos:
listX1 += [pair[0]]
listX2 += [pair[1]]
listY += [1]
for pair in neg:
if sum(listY) > len(listY) / 2:
listX1 += [pair[0]]
listX2 += [pair[1]]
listY += [0]
perm = np.random.permutation(len(listX1))
X1_ids_train = np.array(listX1)[perm]
X2_ids_train = np.array(listX2)[perm]
Y_ids_train = np.array(listY)[perm]
listX1 = []
listX2 = []
listY = []
#test
for id in range(split, num_classes):
pos = build_pos_pairs_for_id(id)
neg = build_neg_pairs_for_id(id, list(range(split, num_classes)))
for pair in pos:
listX1 += [pair[0]]
listX2 += [pair[1]]
listY += [1]
for pair in neg:
if sum(listY) > len(listY) / 2:
listX1 += [pair[0]]
listX2 += [pair[1]]
listY += [0]
X1_ids_test = np.array(listX1)
X2_ids_test = np.array(listX2)
Y_ids_test = np.array(listY)
return (X1_ids_train, X2_ids_train, Y_ids_train,
X1_ids_test, X2_ids_test, Y_ids_test)
X1_ids_train, X2_ids_train, train_Y, X1_ids_test, X2_ids_test, test_Y = build_train_test_data()
X1_ids_train.shape, X2_ids_train.shape, train_Y.shape
np.mean(train_Y)
X1_ids_test.shape, X2_ids_test.shape, test_Y.shape
np.mean(test_Y)
Explanation: The following function builds a large number of positives/negatives pairs (train and test)
End of explanation
from imgaug import augmenters as iaa
seq = iaa.Sequential([
iaa.Fliplr(0.5), # horizontally flip 50% of the images
# You can add more transformation like random rotations, random change of luminance, etc.
])
class Generator(tf.keras.utils.Sequence):
def __init__(self, X1, X2, Y, batch_size, all_imgs):
self.batch_size = batch_size
self.X1 = X1
self.X2 = X2
self.Y = Y
self.imgs = all_imgs
self.num_samples = Y.shape[0]
def __len__(self):
return self.num_samples // self.batch_size
def __getitem__(self, batch_index):
        """This method returns the `batch_index`-th batch of the dataset.
        Keras chooses by itself the order in which batches are created, and several may be created
        at the same time using multiprocessing. Therefore, avoid any side-effect in this method!
        """
low_index = batch_index * self.batch_size
high_index = (batch_index + 1) * self.batch_size
imgs1 = seq.augment_images(self.imgs[self.X1[low_index:high_index]])
imgs2 = seq.augment_images(self.imgs[self.X2[low_index:high_index]])
targets = self.Y[low_index:high_index]
return ([imgs1, imgs2], targets)
gen = Generator(X1_ids_train, X2_ids_train, train_Y, 32, all_imgs)
print("Number of batches: {}".format(len(gen)))
[x1, x2], y = gen[0]
x1.shape, x2.shape, y.shape
plt.figure(figsize=(16, 6))
for i in range(6):
plt.subplot(2, 6, i + 1)
plt.imshow(x1[i] / 255)
plt.axis('off')
for i in range(6):
plt.subplot(2, 6, i + 7)
plt.imshow(x2[i] / 255)
if y[i]==1.0:
plt.title("similar")
else:
plt.title("different")
plt.axis('off')
plt.show()
Explanation: Data augmentation and generator
We're building a generator, which will modify images through data augmentation on the fly.
The generator enables
We use iaa library which offers tremendous possibilities for data augmentation
End of explanation
test_X1 = all_imgs[X1_ids_test]
test_X2 = all_imgs[X2_ids_test]
test_X1.shape, test_X2.shape, test_Y.shape
Explanation: Exercise
- Add your own data augmentations in the process. You may look at: http://imgaug.readthedocs.io and, for instance, use iaa.Affine;
- Be careful not to make the task too difficult, and to add meaningful augmentations;
- Rerun the generator plot above to check whether the image pairs look not too distorted to recognize the identities.
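For instance, a slightly richer augmentation pipeline could look like the following sketch (the parameter ranges are arbitrary and should stay mild):
seq = iaa.Sequential([
    iaa.Fliplr(0.5),
    iaa.Affine(rotate=(-10, 10), scale=(0.9, 1.1)),
])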
Test images
In addition to our generator, we need test images, unaffected by the augmentation
End of explanation
@tf.function
def contrastive_loss(y_true, y_pred, margin=0.25):
'''Contrastive loss from Hadsell-et-al.'06
http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf
'''
y_true = tf.cast(y_true, "float32")
return tf.reduce_mean( y_true * tf.square(1 - y_pred) +
(1 - y_true) * tf.square(tf.maximum(y_pred - margin, 0)))
@tf.function
def accuracy_sim(y_true, y_pred, threshold=0.5):
'''Compute classification accuracy with a fixed threshold on similarity.
'''
y_thresholded = tf.cast(y_pred > threshold, "float32")
return tf.reduce_mean(tf.cast(tf.equal(y_true, y_thresholded), "float32"))
Explanation: Simple convolutional model
End of explanation
class SharedConv(tf.keras.Model):
def __init__(self):
super().__init__(self, name="sharedconv")
# TODO
def call(self, inputs):
        pass  # TODO: apply the layers and return a 50-d embedding
shared_conv = SharedConv()
# %load solutions/shared_conv.py
all_imgs.shape
shared_conv.predict(all_imgs[:10]).shape
shared_conv.summary()
Explanation: Exercise
- Build a convolutional model which transforms the input to a fixed dimension $d = 50$
- You may alternate convolution and max-pooling layers,
- Use the relu activation on convolutional layers,
- At the end, Flatten the last convolutional output and plug it into a dense layer.
- Feel free to use some Dropout prior to the Dense layer.
Use between 32 and 128 channels on convolutional layers. Be careful: large convolutions on high dimensional images can be very slow on CPUs.
Try to run your randomly initialized shared_conv model on a batch of the first 10 images from all_imgs. What is the expected shape of the output?
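One possible architecture is sketched below; it is only an illustration built from the layers imported at the top of the notebook, not the reference answer from solutions/shared_conv.py.
class SharedConvSketch(tf.keras.Model):
    def __init__(self):
        super().__init__(name="sharedconv_sketch")
        self.conv1 = Conv2D(32, 3, activation="relu")
        self.pool1 = MaxPool2D(2)
        self.conv2 = Conv2D(64, 3, activation="relu")
        self.pool2 = MaxPool2D(2)
        self.flatten = Flatten()
        self.dropout = Dropout(0.2)
        self.out = Dense(50)

    def call(self, inputs):
        x = self.pool1(self.conv1(inputs))
        x = self.pool2(self.conv2(x))
        x = self.dropout(self.flatten(x))
        return self.out(x)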
End of explanation
class Siamese(tf.keras.Model):
def __init__(self, shared_conv):
super().__init__(self, name="siamese")
# TODO
def call(self, inputs):
pass # TODO
model = Siamese(shared_conv)
model.compile(loss=contrastive_loss, optimizer='rmsprop', metrics=[accuracy_sim])
# %load solutions/siamese.py
Explanation: Exercise
Assemble the siamese model by combining:
shared_conv on both inputs;
compute the cosine similarity using the Dot layer with normalize=True on the outputs of the two shared_conv instance lanes;
the loss of the siamese model is the contrastive loss defined previously;
use the accuracy_sim function defined previously as a metric.
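A sketch of that assembly (illustrative only; the reference implementation is loaded from solutions/siamese.py):
class SiameseSketch(tf.keras.Model):
    def __init__(self, shared_conv):
        super().__init__(name="siamese_sketch")
        self.shared_conv = shared_conv
        self.dot = Dot(axes=-1, normalize=True)  # cosine similarity of the two embeddings

    def call(self, inputs):
        x1, x2 = inputs
        return self.dot([self.shared_conv(x1), self.shared_conv(x2)])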
End of explanation
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.models import load_model
best_model_fname = "siamese_checkpoint.h5"
best_model_cb = ModelCheckpoint(best_model_fname, monitor='val_accuracy_sim',
save_best_only=True, verbose=1)
model.fit_generator(generator=gen,
epochs=15,
validation_data=([test_X1, test_X2], test_Y),
callbacks=[best_model_cb], verbose=2)
model.load_weights("siamese_checkpoint.h5")
# You may load a pre-trained model if you have the exact solution architecture.
# This model is a start, but far from perfect !
# model.load_weights("siamese_pretrained.h5")
Explanation: We can now fit the model and checkpoint it to keep the best version. We can expect to get a model with around 0.75 as "accuracy_sim" on the validation set:
End of explanation
# TODO
emb = None
def most_sim(x, emb, topn=3):
return None
# %load solutions/most_similar.py
Explanation: Exercise
Finding the most similar images
Run the shared_conv model on all images;
(Optional) add Charles and Olivier's faces from the test_images folder to the test set;
build a most_sim function which returns the most similar vectors to a given vector.
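A possible sketch (the reference lives in solutions/most_similar.py): embed every image once, L2-normalise, and rank by dot product.
emb = shared_conv.predict(all_imgs)
emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)

def most_sim(x, emb, topn=3):
    sims = emb @ (x / np.linalg.norm(x))
    ids = np.argsort(sims)[::-1][:topn]
    return [(int(i), float(sims[i])) for i in ids]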
End of explanation
def display(img):
img = img.astype('uint8')
plt.imshow(img)
plt.axis('off')
plt.show()
interesting_classes = list(filter(lambda x: len(x[1]) > 4, classid_to_ids.items()))
class_id = random.choice(interesting_classes)[0]
query_id = random.choice(classid_to_ids[class_id])
print("query:", classid_to_name[class_id], query_id)
# display(all_imgs[query_id])
print("nearest matches")
for result_id, sim in most_sim(emb[query_id], emb):
class_name = classid_to_name.get(id_to_classid.get(result_id))
print(class_name, result_id, sim)
display(all_imgs[result_id])
Explanation: Most similar faces
The following enables you to display an image alongside its most similar images:
The results are weak, first because of the size of the dataset
Also, the network can be greatly improved
End of explanation
import cv2
def camera_grab(camera_id=0, fallback_filename=None):
camera = cv2.VideoCapture(camera_id)
try:
# take 10 consecutive snapshots to let the camera automatically tune
# itself and hope that the contrast and lightning of the last snapshot
# is good enough.
for i in range(10):
snapshot_ok, image = camera.read()
if snapshot_ok:
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
else:
print("WARNING: could not access camera")
if fallback_filename:
image = imread(fallback_filename)
finally:
camera.release()
return image
image = camera_grab(camera_id=0,
fallback_filename='test_images/olivier/img_olivier_0.jpeg')
x = resize100(image)
out = shared_conv(np.reshape(x, (1, 60, 60, 3)))
print("query image:")
display(x)
for id, sim in most_sim(out[0], emb, topn=10):
class_name = classid_to_name.get(id_to_classid.get(id))
if class_name is None:
print(id)
print(class_name, id, sim)
display(all_imgs[id])
Explanation: Note that this model is still underfitting, even when running queries against the training set. Even if the results are not correct, the mistakes often seem to "make sense" though.
Running a model to convergence on higher resolution images, possibly with a deeper and wider convolutional network might yield better results. In the next notebook we will try with a better loss and with hard negative mining.
Playing with the camera
- The following code enables you to find the most similar faces to yours
- What do you observe?
- Try to think of reasons why it doesn't work very well, and how you could improve it.
End of explanation |
3,081 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Assignment
Step1: Problem 3
Write a Python program that solves $Ax = b$ using LU decomposition. Use the functions <i>lu_factor</i> and <i>lu_solve</i> from <i>scipy.linalg</i> package.
$$ A = \begin{bmatrix}
1 & 4 & 1 \
1 & 6 & -1 \
2 & -1 & 2
\end{bmatrix}B = \begin{bmatrix}
7 \
13 \
5
\end{bmatrix}$$
Solution
We can use the functions lu_factor and lu_solve from scipy.linalg to solve the problem by using LU decomposition. Function lu_factor returns lu (N,N) which is a matrix containing U in its upper triangle, and L in its lower triangle. The function also returns piv (N,) which i representing the permutation matrix P.
In function <i>lu_decomp1(A, b)</i>, the result of lu_factor is saved into a variable (PLU) which is later referred in lu_solve(PLU, b) function call, which gives us the result of $Ax = b$.
An alternative method for solving the problem would be to use "lu"-function, which unpacks the matrices into separate variables, which could be useful if you need to modify the variables or you don't want to use lu_solve to calculate the end result.
The expected result is
Step2: Problem 6
Invert the following matrices with any method
$$ A = \begin{bmatrix}
5 & -3 & -1 & 0 \
-2 & 1 & 1 & 1 \
3 & -5 & 1 & 2 \
0 & 8 & -4 & -3
\end{bmatrix} B = \begin{bmatrix}
1 & 3 & -9 & 6 & 4 \
2 & -1 & 6 & 7 & 1 \
3 & 2 & -3 & 15 & 5 \
8 & -1 & 1 & 4 & 2 \
11 & 1 & -2 & 18 & 7
\end{bmatrix}$$
Comment on the reliability of the results.
Solution
Probably the simplest way to inverse the given matrices is to use inv() function from the numpy.linalg package. Inv() function returns inverse of the matrix given as a parameter in the function call.
Step3: Reliability
The result of matrix A is correct and the results have 16 decimal precision.
In this case, the matrix B is also correctly inverted. However, the determinent is close to zero and if we would be reducing the precision to be less than it's now, we would not be able to invert the matrix.
Step4: If you want to invert matrices with small determinant, the solution is to ensure the tolerances are low enough so that the inv() function can invert the matrix.
Problem 9
Use the Gauss-Seidel with relaxation to solve $Ax = b$, where
$$A = \begin{bmatrix}
4 & -1 & 0 & 0 \
-1 & 4 & -1 & 0 \
0 & -1 & 4 & -1 \
0 & 0 & -1 & 3
\end{bmatrix}
B = \begin{bmatrix}
15 \
10 \
10 \
10 \
\end{bmatrix}$$
Take $x_i = b_i/A_{ii}$ as the starting vector, and use $ω = 1.1$ for the relaxation
factor.
Solution
We can use the sample code created during class as a baseline for the exercise. We need to make couple of modifications to the source code in order to take the value of omega into the account. We also want to stop iterating once good enough accuracy is achieved. Accuracy is defined in the tol -variable.
The tolerance needed is calculated by taking dot product of xOld (taken before iteration) and x (current iteration) and comparing it against the tol variable. If the difference is less than the tol variable, it means the value of x has not changed more than the tolerence, which indicates we are close to the level of accuracy we need.
Expected result is | Python Code:
# Initial import statements
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.pyplot import *
from numpy import *
from numpy.linalg import *
Explanation: Assignment: 05 LU decomposition etc.
Introduction to Numerical Problem Solving, Spring 2017
19.2.2017, Joonas Forsberg<br />
Helsinki Metropolia University of Applied Sciences
End of explanation
from scipy.linalg import lu_factor, lu_solve
# Create a function which can be used later if needed
def lu_decomp1(A, b):
# Solve by using lu_factor and lu_solve
PLU = lu_factor(A)
x = lu_solve(PLU, b)
return x
# Create variables
A = np.matrix(((1, 4, 1),
(1, 6, -1),
(2, -1, 2)))
b = np.array(([7, 13, 5]))
x = lu_decomp1(A, b)
print(dot(inv(A), b))
print("Result = {}".format(x))
Explanation: Problem 3
Write a Python program that solves $Ax = b$ using LU decomposition. Use the functions <i>lu_factor</i> and <i>lu_solve</i> from <i>scipy.linalg</i> package.
$$ A = \begin{bmatrix}
1 & 4 & 1 \
1 & 6 & -1 \
2 & -1 & 2
\end{bmatrix}B = \begin{bmatrix}
7 \
13 \
5
\end{bmatrix}$$
Solution
We can use the functions lu_factor and lu_solve from scipy.linalg to solve the problem by using LU decomposition. Function lu_factor returns lu (N,N) which is a matrix containing U in its upper triangle, and L in its lower triangle. The function also returns piv (N,) which i representing the permutation matrix P.
In function <i>lu_decomp1(A, b)</i>, the result of lu_factor is saved into a variable (PLU) which is later referred in lu_solve(PLU, b) function call, which gives us the result of $Ax = b$.
An alternative method for solving the problem would be to use "lu"-function, which unpacks the matrices into separate variables, which could be useful if you need to modify the variables or you don't want to use lu_solve to calculate the end result.
The expected result is: $[5.5,0.9,-2.1]$
End of explanation
A = np.array([[5, -3, -1, 0],
[-2, 1, 1, 1],
[3, -5, 1, 2],
[0, 8, -4, -3]])
B = np.array(([1, 3, -9, 6, 4],
[2, -1, 6, 7, 1],
[3, 2, -3, 15, 5],
[8, -1, 1, 4, 2],
[11, 1, -2, 18, 7]))
ainv = inv(A)
binv = inv(B)
print("Inverse of A:\n {}".format(ainv))
print("\nInverse of B:\n {}".format(binv))
Explanation: Problem 6
Invert the following matrices with any method
$$ A = \begin{bmatrix}
5 & -3 & -1 & 0 \
-2 & 1 & 1 & 1 \
3 & -5 & 1 & 2 \
0 & 8 & -4 & -3
\end{bmatrix} B = \begin{bmatrix}
1 & 3 & -9 & 6 & 4 \
2 & -1 & 6 & 7 & 1 \
3 & 2 & -3 & 15 & 5 \
8 & -1 & 1 & 4 & 2 \
11 & 1 & -2 & 18 & 7
\end{bmatrix}$$
Comment on the reliability of the results.
Solution
Probably the simplest way to inverse the given matrices is to use inv() function from the numpy.linalg package. Inv() function returns inverse of the matrix given as a parameter in the function call.
End of explanation
print("Determinant of A: {}".format(np.linalg.det(A)))
print("Determinant of B: {}".format(np.linalg.det(B)))
Explanation: Reliability
The result of matrix A is correct and the results have 16 decimal precision.
In this case, the matrix B is also correctly inverted. However, the determinent is close to zero and if we would be reducing the precision to be less than it's now, we would not be able to invert the matrix.
End of explanation
def gaussSeidel(A, b):
omega = 1.1
# Amount of iterations
p = 1000
# Define tolerance
tol = 1.0e-9
n = len(b)
x = np.zeros(n)
# Generate array based on starting vector
for y in range(n):
x[y] = b[y]/A[y, y]
# Iterate p times
for k in range(p):
xOld = x.copy()
for i in range(n):
s = 0
for j in range(n):
if j != i:
s = s + A[i, j] * x[j]
x[i] = omega/A[i, i] * (b[i] - s) + (1 - omega)*x[i]
# Break execution if we are within the tolerance needed
dx = math.sqrt(np.dot(x-xOld,x-xOld))
if dx < tol: return x
return x
A = np.array(([4.0, -1, 0, 0],
[-1, 4, -1, 0],
[0, -1, 4, -1],
[0, 0, -1, 3]))
b = np.array(([15.0, 10, 10, 10]))
x = gaussSeidel(A, b)
print("Result = {}".format(x))
Explanation: If you want to invert matrices with small determinant, the solution is to ensure the tolerances are low enough so that the inv() function can invert the matrix.
Problem 9
Use the Gauss-Seidel with relaxation to solve $Ax = b$, where
$$A = \begin{bmatrix}
4 & -1 & 0 & 0 \
-1 & 4 & -1 & 0 \
0 & -1 & 4 & -1 \
0 & 0 & -1 & 3
\end{bmatrix}
B = \begin{bmatrix}
15 \
10 \
10 \
10 \
\end{bmatrix}$$
Take $x_i = b_i/A_{ii}$ as the starting vector, and use $ω = 1.1$ for the relaxation
factor.
Solution
We can use the sample code created during class as a baseline for the exercise. We need to make couple of modifications to the source code in order to take the value of omega into the account. We also want to stop iterating once good enough accuracy is achieved. Accuracy is defined in the tol -variable.
The tolerance needed is calculated by taking dot product of xOld (taken before iteration) and x (current iteration) and comparing it against the tol variable. If the difference is less than the tol variable, it means the value of x has not changed more than the tolerence, which indicates we are close to the level of accuracy we need.
Expected result is: [ 5. 5. 5. 5.]
End of explanation |
3,082 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
RoadRunner transit model example I - basics
Author
Step1: Import the model
Step2: Example 1
Step3: Next, we initialise and set up a RoadRunnerModel choosing to use the four-parameter nonlinear limb darkening model and giving it the mid-exposure time array
Step4: Evaluation for scalar parameters
After the transit model has been initialised and the data set, we can evaluate the model for a given radius ratio (k), limb darkening ccoefficients (ldc), zero epoch (t0), orbital period (p), scaled semi-major axis ($a/R_\star$, a), orbital inclination (i), eccentricity (e), and argument of periastron (w). Eccentricity and argument of periastron are optional and default to zero if not given.
The tm.evaluate method returns a 1D array with shape (npt) with the transit model evaluated for each mid-exposure time given in the time array.
Note
Step5: Evaluation for a set of parameters
Like the rest of the PyTransit transit models, the RoadRunner model can be evaluated simultaneously for a set of parameters. This is also done using tm.evaluate, but now each argument is a vector with npv values. Model evaluation is parallelised and can be significantly faster than looping over an parameter array in Python.
Now, the tm.evaluate returns a 2D array with shape [npv, npt] with the transit model evaluated for each parameter vector and mid-transit time given in the time array
Step6: Supersampling
A single photometry observation is always an exposure over time. If the exposure time is short compared to the changes in the transit signal shape during the exposure, the observation can be modelled by evaluating the model at the mid-exposure time. However, if the exposure time is long, we need to simluate the integration by calculating the model average over the exposure time (although numerical integration is also a valid approach, it is slightly more demanding computationally and doesn't improve the accuracy significantly). This is achieved by supersampling the model, that is, evaluating the model at several locations inside the exposure and averaging the samples.
Evaluating the model many times for each observation naturally increases the computational burden of the model, but is necessary to model long-cadence observations from the Kepler and TESS telescopes.
All the transit models in PyTransit support supersampling.
GPU computing
Step7: Example 2
Step8: The second dataset considers a more realistic scenario where we have three separate transits observed in two passbands. We create this by tiling our time array three times.
Step9: Achromatic radius ratio
Let's see how this works in practice. We divide our current light curve into two halves observed in different passbands. These passbands have different limb darkening, but we first assume that the radius ratio is achromatic.
Step10: Chromatic radius ratio
Next, we assume that the radius ratio is chromatic, that is, it depends on the passband. This is achieved by giving the model an array of radius ratios (where the number should equal to the number of passbands) instead of giving it a scalar radius ratio.
Step11: Different superampling rates
Next, let's set different supersampling rates to the two light curves. There's no reason why we couldn't also let them have different passbands, but it's better to keep things simple at this stage.
Step12: Everything together
Finally, let's throw everything together and create a set of light curves observed in different passbands, requiring different supersampling rates, assuming chromatic radius ratios, for a set of parameter vectors. | Python Code:
%pylab inline
rc('figure', figsize=(13,5))
def plot_lc(time, flux, c=None, ylim=(0.9865, 1.0025), ax=None):
if ax is None:
fig, ax = subplots()
else:
fig, ax = None, ax
ax.plot(time, flux, c=c)
ax.autoscale(axis='x', tight=True)
setp(ax, xlabel='Time [d]', ylabel='Flux', xlim=time[[0,-1]], ylim=ylim)
if fig is not None:
fig.tight_layout()
return ax
Explanation: RoadRunner transit model example I - basics
Author: Hannu Parviainen<br>
Last modified: 16.9.2020
The RoadRunner transit model (Parviainen, submitted 2020) implemented by pytransit.RoadRunnerModel is a fast transit model that allows for any radially symmetric function to be used to model stellar limb darkening. The model offers flexibility with performance that is similar or superior to the analytical quadratic model by Mandel & Agol (2002) implemented by pytransit.QuadraticModel.
The model follows the standard PyTransit API. The limb darkening model is given in the initialisation, and can be either the name of a set of built-in standard analytical limb darkening models
constant, linear, quadratic, nonlinear, general, power2, and power2-pm,
an instance of pytransit.LDTkModel, a Python callable that takes an array of $\mu$ values and a parameter vector, or a tuple with two callables where the first is the limb darkening model and the second a function returning the stellar surface brightness integrated over the stellar disk.
I demonstrate the use of custom limb darkening models and the LDTk-based limb darkening model (pytransit.LDTkModel) in the next notebooks, and here show basic examples of the RoadRunner model use with the named limb darkening models.
End of explanation
from pytransit import RoadRunnerModel
Explanation: Import the model
End of explanation
time = linspace(-0.05, 0.05, 1500)
Explanation: Example 1: simple light curve
We begin with a simple light curve without any fancy stuff such as multipassband modeling. First, we create a time array centred around zero
End of explanation
tm = RoadRunnerModel('nonlinear')
tm.set_data(time)
Explanation: Next, we initialise and set up a RoadRunnerModel choosing to use the four-parameter nonlinear limb darkening model and giving it the mid-exposure time array
End of explanation
flux1 = tm.evaluate(k=0.1, ldc=[0.36, 0.04, 0.1, 0.05], t0=0.0, p=1.0, a=4.2, i=0.5*pi, e=0.0, w=0.0)
plot_lc(time, flux1);
Explanation: Evaluation for scalar parameters
After the transit model has been initialised and the data set, we can evaluate the model for a given radius ratio (k), limb darkening ccoefficients (ldc), zero epoch (t0), orbital period (p), scaled semi-major axis ($a/R_\star$, a), orbital inclination (i), eccentricity (e), and argument of periastron (w). Eccentricity and argument of periastron are optional and default to zero if not given.
The tm.evaluate method returns a 1D array with shape (npt) with the transit model evaluated for each mid-exposure time given in the time array.
Note: The first tm.set_data and tm.evaluate evaluation takes a significantly longer time than the succeeding calls to these methods. This is because most of the PyTransit routines are accelerated with numba, and numba takes some time compiling all the required methods.
End of explanation
npv = 5
ks = normal(0.10, 0.002, (npv, 1))
t0s = normal(0, 0.001, npv)
ps = normal(1.0, 0.01, npv)
smas = normal(4.2, 0.1, npv)
incs = uniform(0.48*pi, 0.5*pi, npv)
es = uniform(0, 0.25, size=npv)
os = uniform(0, 2*pi, size=npv)
ldc = uniform(0, 0.2, size=(npv,1,4))
flux2 = tm.evaluate(ks, ldc, t0s, ps, smas, incs, es, os)
plot_lc(time, flux2.T);
Explanation: Evaluation for a set of parameters
Like the rest of the PyTransit transit models, the RoadRunner model can be evaluated simultaneously for a set of parameters. This is also done using tm.evaluate, but now each argument is a vector with npv values. Model evaluation is parallelised and can be significantly faster than looping over an parameter array in Python.
Now, the tm.evaluate returns a 2D array with shape [npv, npt] with the transit model evaluated for each parameter vector and mid-transit time given in the time array
End of explanation
tm = RoadRunnerModel('nonlinear')
tm.set_data(time, exptimes=0.02, nsamples=10)
flux3 = tm.evaluate(k=0.1, ldc=[0.36, 0.04, 0.1, 0.05], t0=0.0, p=1.0, a=4.2, i=0.5*pi, e=0.0, w=0.0)
ax = plot_lc(time, flux1, c='0.75')
plot_lc(time, flux3, ax=ax);
Explanation: Supersampling
A single photometry observation is always an exposure over time. If the exposure time is short compared to the changes in the transit signal shape during the exposure, the observation can be modelled by evaluating the model at the mid-exposure time. However, if the exposure time is long, we need to simluate the integration by calculating the model average over the exposure time (although numerical integration is also a valid approach, it is slightly more demanding computationally and doesn't improve the accuracy significantly). This is achieved by supersampling the model, that is, evaluating the model at several locations inside the exposure and averaging the samples.
Evaluating the model many times for each observation naturally increases the computational burden of the model, but is necessary to model long-cadence observations from the Kepler and TESS telescopes.
All the transit models in PyTransit support supersampling.
GPU computing: supersampling increases the computational burden of a single observation, what also leads to increasing advantage of using a GPU version of the transit model rather than a CPU version.
End of explanation
lcids1 = zeros(time.size, int)
lcids1[time.size//2:] = 1
plot_lc(time, lcids1, ylim=(-0.5, 1.5));
Explanation: Example 2: heterogeneous light curve
Multiple passbands
PyTransit aims to simplify modelling of heterogeneous light curves as much as possible. Here heterogeneous means that we can model light curves observed in different passbands, with different instruments, and with different supersampling requirements in one go. This is because most of the real exoplanet transit modelling science cases nowadays involve heterogeneous datasets, such as modelling long-cadence Kepler light curves together with short-cadence ground-based observations, or transmission spectroscopy where the light curves are created from a spectroscopic time series.
To model heterogeneous light curves, PyTransit designates each observation (exposure, datapoint) to a specific light curve, and each light curve to a specific passband. This is done throught the light curve index array (lcids) and passband index array (pbids). Light curve index array is an integer array giving an index for each observed datapoints (suchs as, the indices for dataset of light curves would be either 0 or 1), while the passband index array is an integer array containing a passband index for each light curve in the dataset. So, a dataset of two light curves observed in a same passband would be
times = [0, 1, 2, 3]
lcids = [0, 0, 1, 1]
pbids = [0, 0]
while a dataset containing two light curves observed in different passbands would be
times = [0, 1, 2, 3]
lcids = [0, 0, 1, 1]
pbids = [0, 1]
Let's create two datasets. The first one divides our single light curve into two halves parts and gives each a different light curve index (0 for the first half and 1 for the second)
End of explanation
time2 = tile(time, 3)
lcids2 = repeat([0, 1, 1], time.size)
ax = plot_lc(arange(time2.size), lcids2, ylim=(-0.5, 1.5))
[ax.axvline(i*time.size, c='k', ls='--') for i in range(1,3)];
Explanation: The second dataset considers a more realistic scenario where we have three separate transits observed in two passbands. We create this by tiling our time array three times.
End of explanation
tm = RoadRunnerModel('power-2')
tm.set_data(time, lcids=lcids1, pbids=[0, 1])
flux = tm.evaluate(k=0.1, ldc=[[3.1, 0.1],[2.1, 0.03]], t0=0.0, p=1.0, a=4.3, i=0.5*pi)
plot_lc(time, flux);
tm.set_data(time2, lcids=lcids2, pbids=[0, 1])
flux = tm.evaluate(k=0.1, ldc=[[3.1, 0.1],[2.1, 0.03]], t0=0.0, p=1.0, a=4.3, i=0.5*pi)
plot_lc(arange(flux.size), flux);
Explanation: Achromatic radius ratio
Let's see how this works in practice. We divide our current light curve into two halves observed in different passbands. These passbands have different limb darkening, but we first assume that the radius ratio is achromatic.
End of explanation
tm.set_data(time, lcids=lcids1, pbids=[0, 1])
flux = tm.evaluate(k=[0.105, 0.08], ldc=[[3.1, 0.1],[2.1, 0.03]], t0=0.0, p=1.0, a=4.3, i=0.5*pi)
plot_lc(time, flux);
tm.set_data(time2, lcids=lcids2, pbids=[0, 1])
flux = tm.evaluate(k=[0.105, 0.08], ldc=[[3.1, 0.1],[2.1, 0.03]], t0=0.0, p=1.0, a=4.3, i=0.5*pi)
plot_lc(arange(flux.size), flux);
Explanation: Chromatic radius ratio
Next, we assume that the radius ratio is chromatic, that is, it depends on the passband. This is achieved by giving the model an array of radius ratios (where the number should equal to the number of passbands) instead of giving it a scalar radius ratio.
End of explanation
tm.set_data(time, lcids=lcids1, exptimes=[0.0, 0.02], nsamples=[1, 10])
flux = tm.evaluate(k=0.105, ldc=[3.1, 0.1], t0=0.0, p=1.0, a=4.3, i=0.5*pi)
plot_lc(time, flux);
tm.set_data(time2, lcids=lcids2, exptimes=[0.0, 0.02], nsamples=[1, 10])
flux = tm.evaluate(k=0.105, ldc=[3.1, 0.1], t0=0.0, p=1.0, a=4.3, i=0.5*pi)
plot_lc(arange(flux.size), flux);
Explanation: Different superampling rates
Next, let's set different supersampling rates to the two light curves. There's no reason why we couldn't also let them have different passbands, but it's better to keep things simple at this stage.
End of explanation
tm = RoadRunnerModel('quadratic-tri')
time3 = tile(time, 3)
lcids3 = repeat([0, 1, 2], time.size)
tm.set_data(time3, lcids=lcids3, pbids=[0, 1, 2], exptimes=[0.0, 0.02, 0.0], nsamples=[1, 10, 1])
npv = 5
ks = uniform(0.09, 0.1, (npv, 3))
t0s = normal(0, 0.002, npv)
ps = normal(1.0, 0.01, npv)
smas = normal(5.0, 0.1, npv)
incs = uniform(0.48*pi, 0.5*pi, npv)
es = uniform(0, 0.25, size=npv)
os = uniform(0, 2*pi, size=npv)
ldc = uniform(0, 0.5, size=(npv,3,2))
flux = tm.evaluate(k=ks, ldc=ldc, t0=t0s, p=ps, a=smas, i=incs, e=es, w=os)
plot_lc(arange(flux.shape[1]), flux.T + linspace(0, 0.06, npv), ylim=(0.988, 1.065));
Explanation: Everything together
Finally, let's throw everything together and create a set of light curves observed in different passbands, requiring different supersampling rates, assuming chromatic radius ratios, for a set of parameter vectors.
End of explanation |
3,083 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Author
Step1: Exploratory analysis
First let's check out what the data look like and see if we can identify some patterns.
Step2: From a cursory look at the data we can see that there are at least two ways to genderize job titles
Step3: Extracting gender
This is the part where we actually do things.
Bracket notation
Example
Step4: Looks like our regex is pretty good!
Both nouns and qualifiers are subject to genderization. Here are some examples of how these postfixes are meant to transform the preceding word
Step5: Looks like there is only one case, and the '(s)' plural postfix is frankly unnecessary (here the singular form already implies you can supervise many sites). So instead of making the regex needlessly complex to account for this, we'll just skip this particular case in the substitution code.
Caveat
Step11: It appears that masculine words like "Accastilleur" and "Contrôleur", which both end in -eur, sometimes use the "(se)" and sometimes the "(euse)" postfix!
We also see that conversely, (euse) is both used for words ending in -er (like "Manager") and -eur ("Contrôleur")! Which means the postfixes are not normalized and we unfortunately have a many-to-many mapping
Step12: Now all that's left is to run it | Python Code:
from itertools import chain
import pandas as pd
import re
from bob_emploi.data_analysis.lib import cleaned_data
jobs = cleaned_data.rome_jobs('../../../data')
Explanation: Author: Paul Duan
Skip the run test because the ROME version has to be updated to make it work in the exported repository. TODO: Update ROME and remove the skiptest flag.
ROME Genderization
Problem statement: This notebook is an exploration of how we could normalize the genderization of (French) job titles in ROME (Répertoire Opérationnel des Métiers et des Emplois), which is published by Pôle Emploi. These are often inconsistently specified, which leads to confusion and poorer user experience. In addition it makes the job title longer than it should be, which makes them harder to identify at a glance.
Input: ROME job titles, with hardcoded genderization in the plaintext job titles.
Desired output: A more structured format where jobs have both a masculine and a feminine version (which may be the same if the job is not genderized).
Example problem and output: An English-language example of the problem would be that a job such as "Senior Fireman" might sometimes be genderized as "Senior Fireman / Senior Firewoman" and sometimes as "Senior Fire(wo)man". We want to turn any variant of these possible genderizations into a normalized pair of two fields: ("Senior Fireman", "Senior Firewoman"). If a job title is not gendered, for example "Artist", we want to simply return ("Artist", "Artist").
Additional outputs (TODO): We could also re-define a normalized way of returning a genderized string when the user's gender is unknown (like the current ROME job titles, but enforcing a consistent notation for how genderization is handled). This, along with the rest, could be given back to Pôle Emploi so we can help them push the improvements upstream to the official ROME.
End of explanation
jobs
Explanation: Exploratory analysis
First let's check out what the data look like and see if we can identify some patterns.
End of explanation
is_genderized = jobs['name'].apply(lambda x: '(' in x or '/' in x)
print('Number of genderized names :', sum(is_genderized))
print('Out of total names :', len(jobs), 'i.e.', sum(is_genderized)/len(jobs), '%')
Explanation: From a cursory look at the data we can see that there are at least two ways to genderize job titles:
* masculine_title / feminine_title (e.g. "Abbateur / Abbateuse"); we'll call this the slash notation
* masculine_title(feminine_postfix) (e.g."Accompagnateur(trice)"); we'll call this the bracket notation
When the adjective itself needs to be in concordance with the gender:
* In the bracket notation, the adjective itself will have the gender postfix in a bracket, e.g. "Accompagnateur(trice) médicosocial(e) vie journalière" in jobs.loc['10220']
* In the slash notation, both will be repeated, as in "Accompagnateur médicosocial / Accompagnatrice médicosociale" in jobs.loc['10219']
Remarks:
There isn't an obvious logic as to when bracket notation is used. One hypothesis I had was that this would depend on whether there would be an adjective to be concorded (in which case the bracket notation seems more adapted), but this is not the case. There are many examples of cases where bracket notation is still used despite there being no adjective (e.g. "Accompagnateur(trice) voyages" in jobs.loc['10212'], especially since later we have a "Accompagnateur / Accompagnatrice tourisme", as well as examples of items in slash notations that also have an adjective.
One tricky thing is that the postfix is sometimes meant to replace the word end as in "Accompagnateur(trice), whereas other times it's meant to be added to the end, as in "socioprofessionnel(le)"; this can be covered easily enough since there aren't too many possible postfixes, but this is one thing we have to be mindful of.
Another annoying thing is that in the slash notation, usually qualifiers are not repeated and are instead meant to be left-distributive; for example, "Accompagnateur / Accompagnatrice tourisme" should translate to ("Accompagnateur tourisme", "Accompagnatrice tourisme").
With that said, other times they are repeated, as in "Accompagnateur médicosocial / Accompagnatrice médicosociale", especially (but not necessarily) when the qualifier is a concorded adjective.
Though rarer, it sometimes also happens on the left side, for example with "Responsable éditorial / éditoriale web" in jobs.loc['38966'] which should translate to ("Responsable éditorial web", "Responsable éditoriale web") and features both a distributive qualifier on the left and on the right.
How many genderized job names are there?
As a quick way to estimate this number we'll just count the number of names containing slashes or brackets:
End of explanation
postfix_rule = re.compile(r"(?<=\S)\(([\S]+?)\)")
has_bracket = jobs['name'].apply(lambda x: '(' in x)
postfixes = jobs[has_bracket].name.apply(
lambda x: re.findall(postfix_rule, x))
postfixes_types = set(chain.from_iterable(postfixes))
print(postfixes_types)
Explanation: Extracting gender
This is the part where we actually do things.
Bracket notation
Example: "Accompagnateur(trice) médicosocial(e) vie journalière"; we want ("Accompagnateur médicosocial vie journalière", "Accompagnatrice médicosociale vie journalière").
This is the more complex case. Here let's extract brackets and the preceding word (non-greedily, to account for cases where there are multiple brackets) but do not match when the character before the bracket is non-alphabetical. This is because we want to make sure the bracket is a postfix directly appended to a noun without space, since brackets can only be used by themselves.
As such, a possible regex expression that would capture both a genderized word and its postfix is:
r"(\S+?)\((\S+?)\)"
To only capture the bracket, a regex would be:
r"(?<=\S)\(([\S]+?)\)"
With the positive lookbehind ensuring that the preceding character is not a space.
How many types of postfixes are there?
Because rules for properly handling postfixes are complicated (and postfixes for according jobs and adjectives are few), I believe it's better to simply exhaustively list them then hardcode their associated substitution rules. Let's list them:
(side note: as a bonus this also verifies that our regex is good and has no false positives)
End of explanation
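As a quick sanity check (a small sketch, not part of the original notebook), the full word-plus-postfix regex discussed above can be applied to the example title quoted earlier:
# Sketch: apply the word+postfix capture regex to an example title from the discussion
full_rule = re.compile(r"(\S+?)\((\S+?)\)")
print(full_rule.findall("Accompagnateur(trice) médicosocial(e) vie journalière"))
# expected: [('Accompagnateur', 'trice'), ('médicosocial', 'e')]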
jobs[jobs.name.apply(lambda x: '(s)' in x)]
Explanation: Looks like our regex is pretty good!
Both nouns and qualifiers are subject to genderization. Here are some examples of how these postfixes are meant to transform the preceding word:
Abbateur(se) -> Abbateur, Abbateuse
Accompagnateur(trice) -> Accompagnateur, Accompagnatrice
social(e) -> social, sociale
Technicien(ne) -> Technicien, Technicienne
administratif(ive) -> administratif, administrative
Only the 's' postfix looks out of place, as it doesn't seem to be a gender postfix (it's a plural postfix). Let's see if this is a big deal:
End of explanation
print(jobs[jobs.name.apply(lambda x: '(euse)' in x)][:3])
print(jobs[jobs.name.apply(lambda x: '(se)' in x)][:3])
Explanation: Looks like there is only one case, and the '(s)' plural postfix is frankly unnecessary (here the singular form already implies you can supervise many sites). So instead of making the regex needlessly complex to account for this, we'll just skip this particular case in the substitution code.
Caveat: Is the mapping of postfixes to word endings bijective? Looking at the list of postfixes, it appears that we both have "se" and "euse" as possible postfixes. This is problematic, because it means the mapping is possibly inconsistent:
End of explanation
POSTFIX_MAP = {
'e': [''], # empty string if the postfix can be appended
'ère': ['er'],
'se': ['r'],
'sse': [''],
'euse': ['eur', 'er'],
've': ['f'],
're': ['r'],
'rice': ['eur'],
'ne': [''],
'trice': ['teur'],
'le': [''],
'ive': ['if'],
'ière': ['ier'],
}
def check_mapping_specification(postfix_map):
Check whether the mapping of postfix to word endings
correctly returns a list of possible word endings to substitute
that goes from more specified (longer) to more general.
for postfix, postfix_map in postfix_map.items():
if postfix_map != sorted(postfix_map, key=len, reverse=True):
return False
return True
def substitute_postfix(word, postfix, postfix_map):
Perform the correct postfix substitution from a
masculine word to a feminine word, taking into account the
fact that some postfix are meant to be appended to
the base string while others are meant to be substituted.
Both nouns and adjectives may be genderized.
Examples:
- Abbateur(se) -> Abbateur, Abbateuse
- Accompagnateur(trice) -> Accompagnateur, Accompagnatrice
- social(e) -> social, sociale
- Technicien(ne) -> Technicien, Technicienne
- administratif(ive) -> administratif, administrative
etc. The integral set of postfixes in the ROME dataset is:
{'e', 'ère', 'se', 'sse', 'euse', 've', 're',
'rice', 'ne', 'trice', 'le', 's', 'ive', 'ière'}
(The 's' postfix is an exception, as it relates to
the plural form rather than gender.)
Since there are only a limited number of postfixes, we can
exhaustively list them and the word ending they are meant to
substitute, which is safer in case new unaccounted ones pop up.
Note that the same postfix can be specified multiple ways
for multiple types of word endings.
For example, "Manager(euse)" is feminized as "Manageuse", but
both "Masseur(euse)" and "Masseur(se)" both translate to "Masseuse".
Therefore the substitution dictionary has the form:
{postfix: [possible_word_ending1, possible_word_ending2, ...]}
Here we perform the first substitution that matches the actual
word ending. This means the list of possible word endings must
therefore be ordered by decreasing length, so more specific cases
are always checked before more general ones.
if not len(word):
raise ValueError("word is empty")
if postfix in POSTFIX_MAP:
known_endings = POSTFIX_MAP[postfix]
for known_ending in known_endings:
if word.endswith(known_ending):
root = word[:len(word) - len(known_ending)]
return root + postfix
error_string = "{0}: unmapped word ending for postfix '{1}'"
raise ValueError(error_string.format(word, postfix))
else:
raise ValueError("unknown postfix:" + postfix)
def extract_bracket_notation(raw_job_name, postfix_map):
Extract the genderized strings from a job title
with the bracket notation, by going through the string
and replacing brackets.
Return None if the item doesn't appear to be in
bracket notation.
# nb: we only want brackets directly following a character
bracket_regex = re.compile(r"(\S+?)\((\S+?)\)")
matches = re.findall(bracket_regex, raw_job_name)
if not matches:
return None
# masculine name is just the string without the bracket content;
# to get feminine names we also substitute the relevant words.
# we'll perform these deletions/substitutions iteratively
masculine_name = feminine_name = raw_job_name
for word, postfix in matches:
if postfix != 's': # if not the plural postfix edge case
masculine_name = masculine_name.replace("({})".format(postfix), '')
feminine_name = feminine_name.replace("({})".format(postfix), '')
new_word = substitute_postfix(word, postfix, postfix_map)
feminine_name = feminine_name.replace(word, new_word)
return masculine_name, feminine_name
def extract_slash_notation(raw_job_name):
Extract the genderized strings from a job title
with the slash notation. Simply split the raw job
name according to the slashes, then deal with the
qualifiers distribution by appending all additional
words from the feminine (right) string to the masculine one.
In addition, any leading words on the left side must be
appended to the right side as well. One subtlety is that
detecting when the leading words end (or if they exist) is
complicated by the fact that the first common word may be
gendered, so we need to fuzzy match. For our purposes simply
comparing the first few characters should be enough.
Return None if the item doesn't appear to be in
slash notation.
chars_to_compare = 3
substrings = raw_job_name.split(' / ')
if len(substrings) != 2:
return None
masculine_name, feminine_name = substrings
feminine_words = feminine_name.split(' ')
masculine_words = masculine_name.split(' ')
n_masculine_words = len(masculine_words)
# insert until a word that looks like right side's first
# word is encountered
to_insert = []
for word in masculine_words:
if word[:chars_to_compare] != feminine_words[0][:chars_to_compare]:
to_insert.append(word)
else:
break
feminine_words = to_insert + feminine_words
feminine_name = ' '.join(feminine_words)
# append extra right-side words to the the left side
for i, word in enumerate(feminine_words):
if i > n_masculine_words - 1:
masculine_words.append(word)
masculine_name = ' '.join(masculine_words)
return masculine_name, feminine_name
def genderize(df, postfix_map):
Take a dataframe of the same form as the
one returned by cleaned_data.rome_jobs and genderize it,
adding a masculine_name and a feminine_name field to it.
By default, masculine_name = feminine_name = raw_job_name,
then we overwrite the value when either the slash notation
or bracket notation rule is successful.
masculine_name = df['name'].copy()
feminine_name = df['name'].copy()
bracket_output = df.name.apply(extract_bracket_notation,
postfix_map=postfix_map)
slash_output = df.name.apply(extract_slash_notation)
is_bracket = bracket_output.notnull()
is_slash = slash_output.notnull()
masculine_name[is_bracket] = bracket_output[is_bracket].apply(
lambda x: x[0])
feminine_name[is_bracket] = bracket_output[is_bracket].apply(
lambda x: x[1])
masculine_name[is_slash] = slash_output[is_slash].apply(
lambda x: x[0])
feminine_name[is_slash] = slash_output[is_slash].apply(
lambda x: x[1])
df['masculine_name'] = masculine_name
df['feminine_name'] = feminine_name
return df
Explanation: It appears that masculine words like "Accastilleur" and "Contrôleur", which both end in -eur, sometimes use the "(se)" and sometimes the "(euse)" postfix!
We also see that conversely, (euse) is both used for words ending in -er (like "Manager") and -eur ("Contrôleur")! Which means the postfixes are not normalized and we unfortunately have a many-to-many mapping :(
Slash notation
Example: "Accordeur / Accordeuse de pianos"; we want ("Accordeur de pianos", "Accordeuse de pianos").
This case is easier: we'll just split the sentence according to slashes. The left side of the slash is always the masculine case, and the right side the feminine case.
Caveat: We need to make sure we handle qualifiers properly, because sometimes they are repeated on both sides of the slash and sometimes not, in which case they are meant to be distributive. For example, in "Responsable éditorial / éditoriale web", the word "Responsable" on the left side is meant to be repeated on the right side, and conversely the word "web" on the right side must be repeated on the left side. Sometimes this is not the case (all qualifiers are already repeated on both sides), sometimes one side only features distributive qualifiers, sometimes both (as in the example in the previous sentence).
One way of handling this is to first insert any words at the beginning of the left side that is not present in the ride side in front of the string in the right side.
Then once this is done we can consider that if the left side contains n words, then these also represent the first n words on the right side, with all extra words on the ride side needing to be appended to the left side as well.
Putting it all back together
End of explanation
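A quick usage sketch (not part of the original notebook) spot-checking the slash-notation handler on the examples discussed above; the expected outputs follow from the rules just described:
# Sketch: spot-check the slash-notation handler on the examples above
print(extract_slash_notation("Responsable éditorial / éditoriale web"))
# expected: ('Responsable éditorial web', 'Responsable éditoriale web')
print(extract_slash_notation("Accordeur / Accordeuse de pianos"))
# expected: ('Accordeur de pianos', 'Accordeuse de pianos')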
postfix_map = POSTFIX_MAP
assert check_mapping_specification(postfix_map), "ill-specified postfix map"
genderize(jobs, postfix_map)
Explanation: Now all that's left is to run it:
End of explanation |
3,084 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This is an example of using Python and R together within a Jupyter notebook. First, let's generate some data within python.
Step1: Now, we pass those two variables into R and perform linear regression, and get back the result.
Step2: Now let's look at the contents of the variable that we got back (which should contain the parameter estimates) | Python Code:
import numpy
%load_ext rpy2.ipython
x=numpy.random.randn(100)
beta=3
y=beta*x+numpy.random.randn(100)
Explanation: This is an example of using Python and R together within a Jupyter notebook. First, let's generate some data within python.
End of explanation
%%R -i x,y -o beta_est
result=lm(y~x)
beta_est=result$coefficients
summary(result)
Explanation: Now, we pass those two variables into R and perform linear regression, and get back the result.
End of explanation
print(beta_est)
Explanation: Now let's look at the contents of the variable that we got back (which should contain the parameter estimates)
End of explanation |
3,085 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning Model
Business Problem
Startup XYZ is in the business of giving personal loans, structured as non-recourse loans. The defaults on their loans are much higher than their competitors. Also, the underlying collaterals lose their value way too quicky and has resulted in huge losses for Bank XYZ.
Alice was recently appointed as the Senior VP of the Risk Organization. She comes from a strong analytics background and wants to leverage data science to identify customer's risk before approving loan.
She's appointed you as a consultant to help her and the team solve this problem.
Note
Step1: 3. Refine
Lets check the dataset for compeleteness - by checking for missing values
Missing values
Step2: So, we see that years have missing values. The column is numeric. We have three options for dealing with missing values
Options to treat Missing Values
REMOVE - NAN rows
IMPUTATION - Replace them with something??
Mean
Median
Fixed Number - Domain Relevant
High Number (999) - Issue with modelling
BINNING - Categorical variable and "Missing becomes a number
DOMAIN SPECIFIC - Entry error, pipeline, etc.
Step3: We also need to check for quality - by checking for outliers in the data. For this workshop, we will skip doing that. But remember to check for outliers when doing in real-life
4. Explore
The goal is to build some intuition around the data
Single Variable Exploration - Univariate Analysis
Step4: Dual Variable Exploration - Bivariate Analysis
Step5: EXERCISE
Three Variables Exploration
Explore the relationship between age, income and defualt
5. Transform
Step6: Two of the columns are categorical in nature - grade and ownership.
To build models, we need all of the features to be numeric. There exists a number of ways to convert categorical variables to numeric values.
We will use one of the popular options
Step7: EXERCISE
Do label encoding on ownership
6. Model
Common approaches
Step8: Step 2 - Build decision tree model
Step9: Step 3 - Visualize the decision tree
Step10: Let's see the decision boundaries
Step11: EXERCISE
Change the depth of the Decision Tree classifier to 10 and plot the decision boundaries again.
Lets understand first just the difference between Class prediction and Class Probabilities
Step12: Model Validation
While we have created the model, we still don't have a measure of how good the model is. We need to measure some accuracy metric of the model and have confidence that it will generalize well. We should be confident that when we put the model in production (real-life), the accuracy we get from the model results should mirror the metrics we obtained when we built the model.
Selecting the right accuracy metric for the model is important.
This wiki has a good overview of some of the common metrics.
We will use a metric - Area Under the Curve
Area Under the Curve
In a Receiver Operating Characteristic (ROC) curve the true positive rate (Sensitivity) is plotted in function of the false positive rate (100-Specificity) for different cut-off points. Each point on the ROC curve represents a sensitivity/specificity pair corresponding to a particular decision threshold. A test with perfect discrimination (no overlap in the two distributions) has a ROC curve that passes through the upper left corner (100% sensitivity, 100% specificity). Therefore the closer the ROC curve is to the upper left corner, the higher the overall accuracy of the test
(source)
Step13: EXERCISE
Build a decison tree classifier with max_depth = 10 and plot confusion_matrix & auc
Cross-validation
Now that we have chosen the error metric, how do we find the generalization error?
We do this using cross-validation. ([source]
(https
Step14: EXERCISE
Build a classifier with max_depth = 10 and run a 5-fold CV to get the auc score.
Build a classifier with max_depth = 20 and run a 5-fold CV to get the auc score.
Bagging
Decision trees in general have low bias and high variance. We can think about it like this
Step15: EXERCISE
Change the number of trees from 10 to 100 and make it 5-fold. And report the cross-validation error (Hint
Step16: Model serialization
We need to serialize the model and the label encoders. | Python Code:
#Load the libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
#Default Variables
%matplotlib inline
plt.rcParams['figure.figsize'] = (8,6)
plt.style.use('ggplot')
pd.set_option('display.float_format', lambda x: '%.2f' % x)
#Load the training dataset
df = pd.read_csv("../data/historical_loan.csv")
#View the first few rows of training dataset
df.head()
#View the columns of the train dataset
df.columns
#View the data types of the train dataset
df.dtypes
#View the number of records in the data
df.shape
#View summary of raw data
df.describe()
Explanation: Machine Learning Model
Business Problem
Startup XYZ is in the business of giving personal loans, structured as non-recourse loans. The defaults on their loans are much higher than their competitors'. Also, the underlying collateral loses its value way too quickly, which has resulted in huge losses for Startup XYZ.
Alice was recently appointed as the Senior VP of the Risk Organization. She comes from a strong analytics background and wants to leverage data science to identify a customer's risk before approving a loan.
She's appointed you as a consultant to help her and the team solve this problem.
Note: This case study was inspired by the bank marketing case study. The data is a modified version of what is available in that site
Brainstorming
1. Frame
The first step is to convert the business problem into an analytics problem
Alice wants to know customer's risk. Let's try to predict the propensity of a customer to default, given the details he/she has entered on the loan application form
2. Acquire
After discussions with the IT team of Startup XYZ, you have obtained some historical data from the bank. It has the following columns
Application Attributes:
- years: Number of years the applicant has been employed
- ownership: Whether the applicant owns a house or not
- income: Annual income of the applicant
- age: Age of the applicant
- amount : Amount of Loan requested by the applicant
Behavioural Attributes:
- grade: Credit grade of the applicant
Outcome Variable:
default : Whether the applicant has defaulted or not
Load the data
End of explanation
# Find if df has missing values.
df.isnull().head()
# In a large dataset, this is hard to find if there are any missing values or not.
# We can chain operators on the output. Let's use sum()
df.isnull().sum()
Explanation: 3. Refine
Let's check the dataset for completeness - by checking for missing values
Missing values
End of explanation
# Let's replace missing values with mean
# There's a fillna function
df.years = df.years.fillna(np.mean(df.years))
#Finding unique values of years
pd.unique(df.years)
Explanation: So, we see that years have missing values. The column is numeric. We have three options for dealing with missing values
Options to treat Missing Values
REMOVE - NAN rows
IMPUTATION - Replace them with something??
Mean
Median
Fixed Number - Domain Relevant
High Number (999) - Issue with modelling
BINNING - Categorical variable and "Missing becomes a number
DOMAIN SPECIFIC - Entry error, pipeline, etc.
End of explanation
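For comparison, a rough sketch of two of the other options listed above (not used in this notebook), applied to a fresh copy of the data loaded before any imputation:
# Sketch: alternative missing-value treatments on a fresh copy of the raw data
df_raw = pd.read_csv("../data/historical_loan.csv")
df_median = df_raw.copy()
df_median.years = df_median.years.fillna(df_median.years.median())  # impute with the median
df_dropped = df_raw.dropna(subset=['years'])                        # or simply drop the NaN rows
print(df_median.years.isnull().sum(), df_dropped.shape)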
# Create histogram for target variable - default
df.default.plot.hist()
# Explore grade
df.grade.value_counts().plot.barh()
# Explore age
df.age.plot.hist(bins=50)
Explanation: We also need to check for quality - by checking for outliers in the data. For this workshop, we will skip doing that. But remember to check for outliers when doing in real-life
4. Explore
The goal is to build some intuition around the data
Single Variable Exploration - Univariate Analysis
End of explanation
# Explore the impact of age with income
df.plot.scatter(x='age', y='income', alpha=0.7)
Explanation: Dual Variable Exploration - Bivariate Analysis
End of explanation
# Let's again revisit the data types in the dataset
df.dtypes
Explanation: EXERCISE
Three Variables Exploration
Explore the relationship between age, income and defualt
5. Transform
End of explanation
from sklearn.preprocessing import LabelEncoder
# Let's not modify the original dataset.
# Let's transform it in another dataset
df_encoded = df.copy()
# instantiate label encoder
le_grade = LabelEncoder()
# fit label encoder
le_grade = le_grade.fit(df_encoded["grade"])
df_encoded.grade = le_grade.transform(df.grade)
df_encoded.head()
Explanation: Two of the columns are categorical in nature - grade and ownership.
To build models, we need all of the features to be numeric. There exists a number of ways to convert categorical variables to numeric values.
We will use one of the popular options: LabelEncoding
End of explanation
X_2 = df_encoded.loc[:,('age', 'amount')]
y = df_encoded.loc[:,'default']
Explanation: EXERCISE
Do label encoding on ownership
6. Model
Common approaches:
Linear models
Tree-based models
Neural Networks
...
Some choices to consider:
Interpretability
Run-time
Model complexity
Scalability
For the purpose of this workshop, we will use tree-based models.
We will do the following two:
Decision Tree
Random Forest
Decision Trees
Decision Trees are a non-parametric supervised learning method used for classification and regression. The goal is to create a model that predicts the value of a target variable by learning simple decision rules inferred from the data features.
Let's first build a model using just two features to build some intuition around decision trees
Step 1 - Create features matrix and target vector
End of explanation
from sklearn import tree
# instantiate the decision tree object
clf_dt_2 = tree.DecisionTreeClassifier(max_depth=2)
# fit the decision tree model
clf_dt_2 = clf_dt_2.fit(X_2, y)
Explanation: Step 2 - Build decision tree model
End of explanation
import pydotplus
from IPython.display import Image
dot_data = tree.export_graphviz(clf_dt_2, out_file='tree.dot', feature_names=X_2.columns,
class_names=['no', 'yes'], filled=True,
rounded=True, special_characters=True)
# Incase you don't have graphviz installed
# txt = open("tree_3.dot").read().replace("\\n", "\n ").replace(";", ";\n")
# print(txt)
graph = pydotplus.graph_from_dot_file('tree.dot')
Image(graph.create_png())
Explanation: Step 3 - Visualize the decision tree
End of explanation
def plot_boundaries(X2, clf):
x_min, x_max = X2.iloc[:, 0].min() - 1, X2.iloc[:, 0].max() + 1
y_min, y_max = X2.iloc[:, 1].min() - 1, X2.iloc[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, (x_max - x_min)/100),
np.arange(y_min, y_max, (y_max - y_min)/100))
Z = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])[:,1]
Z = Z.reshape(xx.shape)
target = clf.predict(X2)
plt.scatter(x = X2.iloc[:,0], y = X2.iloc[:,1], c = y, s = 20, cmap=plt.cm.magma)
cs = plt.contourf(xx, yy, Z, cmap=plt.cm.viridis, alpha = 0.4)
plot_boundaries(X_2, clf_dt_2)
Explanation: Let's see the decision boundaries
End of explanation
pred_class = clf_dt_10.predict(X_2)
pred_proba = clf_dt_10.predict_proba(X_2)
plt.hist(pred_class);
import seaborn as sns
sns.kdeplot(pred_proba[:,1], shade=True)
Explanation: EXERCISE
Change the depth of the Decision Tree classifier to 10 and plot the decision boundaries again.
Lets understand first just the difference between Class prediction and Class Probabilities
End of explanation
from sklearn.metrics import confusion_matrix
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
X = df_encoded.iloc[:,1:]
y = df_encoded.iloc[:,0]
clf_dt = tree.DecisionTreeClassifier(max_depth=5)
def pred_df(clf, X, y):
clf = clf.fit(X,y)
y_pred = clf.predict(X)
y_proba = clf.predict_proba(X)[:,1]
pred_df = pd.DataFrame({"actual": np.array(y), "predicted": y_pred, "probability": y_proba})
return pred_df
pred_dt = pred_df(clf_dt, X,y)
pred_dt.head()
pd.crosstab(pred_dt.predicted, pred_dt.actual)
confusion_matrix(pred_dt.predicted, pred_dt.actual)
def plot_prediction(pred_df):
pred_df_0 = pred_df[pred_df.actual == 0]
pred_df_1 = pred_df[pred_df.actual == 1]
sns.kdeplot(pred_df_0.probability, shade=True, label="no default")
sns.kdeplot(pred_df_1.probability, shade=True, label="default")
plot_prediction(pred_dt)
def plot_roc_auc(pred_df):
fpr, tpr, thresholds = roc_curve(pred_df.actual, pred_df.probability)
auc_score = roc_auc_score(pred_df.actual,pred_df.probability)
plt.plot(fpr, tpr, label='AUC = %0.2f' % auc_score)
plt.xlabel("False Positive Rate")
plt.ylabel("True Positive Rate")
plt.legend(loc="lower right")
return print("AUC = %0.2f" % auc_score)
plot_roc_auc(pred_dt)
Explanation: Model Validation
While we have created the model, we still don't have a measure of how good the model is. We need to measure some accuracy metric of the model and have confidence that it will generalize well. We should be confident that when we put the model in production (real-life), the accuracy we get from the model results should mirror the metrics we obtained when we built the model.
Selecting the right accuracy metric for the model is important.
This wiki has a good overview of some of the common metrics.
We will use a metric - Area Under the Curve
Area Under the Curve
In a Receiver Operating Characteristic (ROC) curve the true positive rate (Sensitivity) is plotted in function of the false positive rate (100-Specificity) for different cut-off points. Each point on the ROC curve represents a sensitivity/specificity pair corresponding to a particular decision threshold. A test with perfect discrimination (no overlap in the two distributions) has a ROC curve that passes through the upper left corner (100% sensitivity, 100% specificity). Therefore the closer the ROC curve is to the upper left corner, the higher the overall accuracy of the test
(source)
End of explanation
from sklearn.model_selection import StratifiedKFold
def cross_val(clf, k):
# Instantiate stratified k fold.
kf = StratifiedKFold(n_splits=k)
# Let's use an array to store the results of cross-validation
kfold_auc_score = []
# Run kfold CV
for train_index, test_index in kf.split(X,y):
clf = clf.fit(X.iloc[train_index], y.iloc[train_index])
proba = clf.predict_proba(X.iloc[test_index])[:,1]
auc_score = roc_auc_score(y.iloc[test_index],proba)
print(auc_score)
kfold_auc_score.append(auc_score)
print("Mean K Fold CV:", np.mean(kfold_auc_score))
cross_val(clf_dt, 3)
Explanation: EXERCISE
Build a decision tree classifier with max_depth = 10 and plot confusion_matrix & auc
Cross-validation
Now that we have chosen the error metric, how do we find the generalization error?
We do this using cross-validation. ([source]
(https://en.wikipedia.org/wiki/Cross-validation_(statistics))
From wiki:
One round of cross-validation involves partitioning a sample of data into complementary subsets, performing the analysis on one subset (called the training set), and validating the analysis on the other subset (called the validation set or testing set). To reduce variability, multiple rounds of cross-validation are performed using different partitions, and the validation results are averaged over the rounds.
We will use StratifiedKFold.
This ensures that in each fold, the proportion of positive class and negative class remain similar to the original dataset
This is the process we will follow to get the mean cv-score
Generate k-fold
Train the model using k-1 fold
Predict for the kth fold
Find the accuracy.
Append it to the array
Repeat 2-5 for different validation folds
Report the mean cross validation score
End of explanation
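For reference, scikit-learn's built-in helper should give a score equivalent to the manual loop above (a sketch, assuming the same non-shuffled 3-fold stratified split):
# Sketch: the same 3-fold AUC estimate using scikit-learn's cross_val_score helper
from sklearn.model_selection import cross_val_score
scores = cross_val_score(clf_dt, X, y, cv=StratifiedKFold(n_splits=3), scoring='roc_auc')
print(scores, scores.mean())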
from sklearn.ensemble import RandomForestClassifier
clf_rf = RandomForestClassifier(n_estimators=10)
cross_val(clf_rf, 5)
Explanation: EXERCISE
Build a classifier with max_depth = 10 and run a 5-fold CV to get the auc score.
Build a classifier with max_depth = 20 and run a 5-fold CV to get the auc score.
Bagging
Decision trees in general have low bias and high variance. We can think about it like this: given a training set, we can keep asking questions until we are able to distinguish between ALL examples in the data set. We could keep asking questions until there is only a single example in each leaf. Since this allows us to correctly classify all elements in the training set, the tree is unbiased. However, there are many possible trees that could distinguish between all elements, which means higher variance.
How do we reduce variance?
In order to reduce the variance of a single error tree, we usually place a restriction on the number of questions asked in a tree. This is true for single decision trees which we have seen in previous notebooks.
Along with this other method to do reduce variance is to ensemble models of decision trees. The goal of ensemble methods is to combine the predictions of several base estimators built with a given learning algorithm in order to improve generalizability / robustness over a single estimator.
How to ensemble?
Averaging: Build several estimators independently and then average their predictions. On average, the combined estimator is usually better than any of the single base estimator because its variance is reduced. Examples:
Bagging
Random Forest
Extremely Randomized Trees
Boosting: Build base estimators sequentially and then try to reduce the bias of the combined estimator. The motivation is to combine several weak models to produce a powerful ensemble.
AdaBoost
Gradient Boosting (e.g. xgboost)
Random Forest
In random forests, each tree in the ensemble is built from a sample drawn with replacement (i.e., a bootstrap sample) from the training set. In addition, when splitting a node during the construction of the tree, the split that is chosen is no longer the best split among all features. Instead, the split that is picked is the best split among a random subset of the features.
As a result of this randomness, the bias of the forest usually slightly increases (with respect to the bias of a single non-random tree) but, due to averaging, its variance also decreases, usually more than compensating for the increase in bias, hence yielding an overall better model.
Random Forest Model
The advantage of the scikit-learn API is that the syntax remains fairly consistent across all the classifiers.
If we change the DecisionTreeClassifier to RandomForestClassifier in the above code, we should be good to go :-)
End of explanation
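The same cross_val helper can also be reused to compare against a boosting ensemble (a quick illustrative sketch; gradient boosting is mentioned above but is not part of the original exercise):
# Sketch: compare a boosting ensemble against the bagging-style random forest above
from sklearn.ensemble import GradientBoostingClassifier
clf_gb = GradientBoostingClassifier(n_estimators=100)
cross_val(clf_gb, 5)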
final_model = RandomForestClassifier(n_estimators=100)
final_model = final_model.fit(X, y)
Explanation: EXERCISE
Change the number of trees from 10 to 100 and make it 5-fold. And report the cross-validation error (Hint: You should get ~ 0.74. )
A more detailed version of bagging and random forest can be found in the speakers' introductory machine learning workshop material
bagging
random forest
Model Selection
We choose the model and its hyper-parameters that has the best cross-validation score on the chosen error metric.
In our case, it is random forest.
Now - how do we get the model?
We need to run the model with the chosen hyper-parameters on all of the train data. And serialize it.
End of explanation
from sklearn.externals import joblib
joblib.dump(final_model, "model.pkl")
joblib.dump(le_grade, "le_grade.pkl")
joblib.dump(le_ownership, "le_ownership.pkl");
Explanation: Model serialization
We need to serialize the model and the label encoders.
End of explanation |
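A sketch of how the serialized artefacts might later be loaded back for scoring; the sample row below simply reuses the first row of X as a stand-in for a new application:
# Sketch: load the serialized model and encoders back and score one application
model = joblib.load("model.pkl")
le_grade_loaded = joblib.load("le_grade.pkl")          # would be used to encode raw grade strings
le_ownership_loaded = joblib.load("le_ownership.pkl")  # would be used to encode raw ownership strings
sample = X.iloc[[0]]                                   # stand-in for a new, already-encoded application row
print(model.predict_proba(sample)[:, 1])               # predicted probability of default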
3,086 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Multi-Task Learning Example
This is a simple example to show how to use mxnet for multi-task learning.
The network is jointly going to learn whether a number is odd or even and to actually recognize the digit.
For example
1
Step1: Parameters
Step2: Data
We get the traditional MNIST dataset and add a new label to the existing one. For each digit we return a new label that stands for Odd or Even
Step3: We assign the transform to the original dataset
Step4: We load the datasets DataLoaders
Step5: Multi-task Network
The output of the featurization is passed to two different output layers
Step6: We can use two different losses, one for each output
Step7: We create and initialize the network
Step8: Evaluate Accuracy
We need to evaluate the accuracy of each task separately
Step9: Training Loop
We need to balance the contribution of each loss to the overall training and do so by tuning this alpha parameter within [0,1].
Step10: Testing | Python Code:
import logging
import random
import time
import matplotlib.pyplot as plt
import mxnet as mx
from mxnet import gluon, nd, autograd
import numpy as np
Explanation: Multi-Task Learning Example
This is a simple example to show how to use mxnet for multi-task learning.
The network is jointly going to learn whether a number is odd or even and to actually recognize the digit.
For example
1 : 1 and odd
2 : 2 and even
3 : 3 and odd
etc
In this example we don't expect the tasks to contribute to each other much, but for example multi-task learning has been successfully applied to the domain of image captioning. In A Multi-task Learning Approach for Image Captioning by Wei Zhao, Benyou Wang, Jianbo Ye, Min Yang, Zhou Zhao, Ruotian Luo, Yu Qiao, they train a network to jointly classify images and generate text captions
End of explanation
batch_size = 128
epochs = 5
ctx = mx.gpu() if mx.context.num_gpus() > 0 else mx.cpu()
lr = 0.01
Explanation: Parameters
End of explanation
train_dataset = gluon.data.vision.MNIST(train=True)
test_dataset = gluon.data.vision.MNIST(train=False)
def transform(x,y):
x = x.transpose((2,0,1)).astype('float32')/255.
y1 = y
y2 = y % 2 #odd or even
return x, np.float32(y1), np.float32(y2)
Explanation: Data
We get the traditional MNIST dataset and add a new label to the existing one. For each digit we return a new label that stands for Odd or Even
End of explanation
train_dataset_t = train_dataset.transform(transform)
test_dataset_t = test_dataset.transform(transform)
Explanation: We assign the transform to the original dataset
End of explanation
train_data = gluon.data.DataLoader(train_dataset_t, shuffle=True, last_batch='rollover', batch_size=batch_size, num_workers=5)
test_data = gluon.data.DataLoader(test_dataset_t, shuffle=False, last_batch='rollover', batch_size=batch_size, num_workers=5)
print("Input shape: {}, Target Labels: {}".format(train_dataset[0][0].shape, train_dataset_t[0][1:]))
Explanation: We load the datasets DataLoaders
End of explanation
class MultiTaskNetwork(gluon.HybridBlock):
def __init__(self):
super(MultiTaskNetwork, self).__init__()
self.shared = gluon.nn.HybridSequential()
with self.shared.name_scope():
self.shared.add(
gluon.nn.Dense(128, activation='relu'),
gluon.nn.Dense(64, activation='relu'),
gluon.nn.Dense(10, activation='relu')
)
self.output1 = gluon.nn.Dense(10) # Digit recognition
self.output2 = gluon.nn.Dense(1) # odd or even
def hybrid_forward(self, F, x):
y = self.shared(x)
output1 = self.output1(y)
output2 = self.output2(y)
return output1, output2
Explanation: Multi-task Network
The output of the featurization is passed to two different output layers
End of explanation
loss_digits = gluon.loss.SoftmaxCELoss()
loss_odd_even = gluon.loss.SigmoidBCELoss()
Explanation: We can use two different losses, one for each output
End of explanation
mx.random.seed(42)
random.seed(42)
net = MultiTaskNetwork()
net.initialize(mx.init.Xavier(), ctx=ctx)
net.hybridize() # hybridize for speed
trainer = gluon.Trainer(net.collect_params(), 'adam', {'learning_rate':lr})
Explanation: We create and initialize the network
End of explanation
def evaluate_accuracy(net, data_iterator):
acc_digits = mx.metric.Accuracy(name='digits')
acc_odd_even = mx.metric.Accuracy(name='odd_even')
for i, (data, label_digit, label_odd_even) in enumerate(data_iterator):
data = data.as_in_context(ctx)
label_digit = label_digit.as_in_context(ctx)
label_odd_even = label_odd_even.as_in_context(ctx).reshape(-1,1)
output_digit, output_odd_even = net(data)
acc_digits.update(label_digit, output_digit.softmax())
acc_odd_even.update(label_odd_even, output_odd_even.sigmoid() > 0.5)
return acc_digits.get(), acc_odd_even.get()
Explanation: Evaluate Accuracy
We need to evaluate the accuracy of each task separately
End of explanation
alpha = 0.5 # Combine losses factor
for e in range(epochs):
# Accuracies for each task
acc_digits = mx.metric.Accuracy(name='digits')
acc_odd_even = mx.metric.Accuracy(name='odd_even')
# Accumulative losses
l_digits_ = 0.
l_odd_even_ = 0.
for i, (data, label_digit, label_odd_even) in enumerate(train_data):
data = data.as_in_context(ctx)
label_digit = label_digit.as_in_context(ctx)
label_odd_even = label_odd_even.as_in_context(ctx).reshape(-1,1)
with autograd.record():
output_digit, output_odd_even = net(data)
l_digits = loss_digits(output_digit, label_digit)
l_odd_even = loss_odd_even(output_odd_even, label_odd_even)
# Combine the loss of each task
l_combined = (1-alpha)*l_digits + alpha*l_odd_even
l_combined.backward()
trainer.step(data.shape[0])
l_digits_ += l_digits.mean()
l_odd_even_ += l_odd_even.mean()
acc_digits.update(label_digit, output_digit.softmax())
acc_odd_even.update(label_odd_even, output_odd_even.sigmoid() > 0.5)
print("Epoch [{}], Acc Digits {:.4f} Loss Digits {:.4f}".format(
e, acc_digits.get()[1], l_digits_.asscalar()/(i+1)))
print("Epoch [{}], Acc Odd/Even {:.4f} Loss Odd/Even {:.4f}".format(
e, acc_odd_even.get()[1], l_odd_even_.asscalar()/(i+1)))
print("Epoch [{}], Testing Accuracies {}".format(e, evaluate_accuracy(net, test_data)))
Explanation: Training Loop
We need to balance the contribution of each loss to the overall training and do so by tuning this alpha parameter within [0,1].
End of explanation
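A rough sketch of how the effect of alpha could be probed, re-initialising and training a fresh network for one epoch per (arbitrarily chosen) value; this is illustrative only and not part of the original example:
# Sketch: probe a few alpha values by retraining a fresh network for one epoch each
for alpha_try in [0.1, 0.5, 0.9]:
    net_try = MultiTaskNetwork()
    net_try.initialize(mx.init.Xavier(), ctx=ctx)
    trainer_try = gluon.Trainer(net_try.collect_params(), 'adam', {'learning_rate': lr})
    for data, label_digit, label_odd_even in train_data:
        data = data.as_in_context(ctx)
        label_digit = label_digit.as_in_context(ctx)
        label_odd_even = label_odd_even.as_in_context(ctx).reshape(-1, 1)
        with autograd.record():
            out_digit, out_odd_even = net_try(data)
            l_combined = ((1 - alpha_try) * loss_digits(out_digit, label_digit) +
                          alpha_try * loss_odd_even(out_odd_even, label_odd_even))
        l_combined.backward()
        trainer_try.step(data.shape[0])
    print(alpha_try, evaluate_accuracy(net_try, test_data))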
def get_random_data():
idx = random.randint(0, len(test_dataset))
img = test_dataset[idx][0]
data, _, _ = test_dataset_t[idx]
data = data.as_in_context(ctx).expand_dims(axis=0)
plt.imshow(img.squeeze().asnumpy(), cmap='gray')
return data
data = get_random_data()
digit, odd_even = net(data)
digit = digit.argmax(axis=1)[0].asnumpy()
odd_even = (odd_even.sigmoid()[0] > 0.5).asnumpy()
print("Predicted digit: {}, odd: {}".format(digit, odd_even))
Explanation: Testing
End of explanation |
3,087 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
It is hard to call this great test code on its own.
It was test code meant for a Jupyter notebook.
Python provides the unittest module for writing test code.
Step1: Looking at the development process
TDD => Test Driven Development
Why does it matter?
Tests > code => "Since there is more test code than production code, development takes longer at first."
As the project grows, dependencies keep increasing (dependency++).
No test code => maintenance becomes extremely difficult.
You should always start by testing small units of functionality (unit tests) => then move on to integration tests.
How to start test-driven development
Write unit tests.
Test scenario => define a process for testing a series of features: in what order, in what way, and in which forms.
The TDD cycle is described by the three steps below.
RED => warning; the test fails.
GREEN => change the code so that the test passes.
Refactor => with the tests still passing, refactor the code to make it cleaner.
# First, just looking at the shape of a unittest test case:
# class TestDoubleFunction(unittest.TestCase):
# def test_5_should_return_10(self):
# self.assertEqual(double(5), 10) # equivalent to: assert double(5) == 10
# Tests in this form can't be run directly in a Jupyter notebook, so skip it for now.
# First, create a file called hello.py; its contents are:
# def hello(name):
# print("hello, {name}".format(name=name))
# hello("kimkipoy")
# hello("김기표")
%run hello.py
Explanation: It is hard to call this great test code on its own.
It was test code meant for a Jupyter notebook.
Python provides the unittest module for writing test code.
End of explanation
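A minimal sketch of the same check written with the unittest module (hypothetical; double is defined here only for the example, and argv/exit are passed so unittest can run inside a notebook):
import unittest

def double(x):
    return x * 2

class TestDoubleFunction(unittest.TestCase):
    def test_5_should_return_10(self):
        self.assertEqual(double(5), 10)

# passing argv and exit=False lets unittest run inside a Jupyter notebook
unittest.main(argv=['ignored'], exit=False)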
# A format to try for yourself
# "김기표, 010-6235-3317, address\"
# def preprocess_user_information(information):
# pass
# "김기표, 010-6235-****"
# Write the test code first, before writing the function.
def preprocess_user_information(information):
return information[:-4] + "****"
assert preprocess_user_information("김기표, 010-6235-3317") == "김기표, 010-6235-****"
assert preprocess_user_information("김기정, 010-6666-3317") == "김기정, 010-6666-****"
assert preprocess_user_information("김기, 010-1111-5736") == "김기, 010-1111-****"
# Create a txt file in the root folder, in the format below
# 김기표, 880518-1111111
# 김기표일, 880518-222222
# 김기표이, 880518-333333
# 김기표삼, 880518-444444
# 김사, 880518-555555
def get_information(file_name):
with open(file_name, "r", encoding='utf8') as f:
return f.read()
get_information("info.txt")
info_file = get_information("info.txt")
informations = info_file.split("\n")
informations
# information = f.readlines()
# Testing a print call directly is impossible
# Before printing, write a function that turns this into stars, and test that function instead
def preprocess(information):
return information[:-7] + "*" * 7
assert preprocess("김기표, 880518-1111111") == "김기표, 880518-*******"
assert preprocess("김기표일, 880518-2222222") == "김기표일, 880518-*******"
[
preprocess(information)
for information
in informations
]
# Regular expressions (Regex)
# Once you know the basic principles you can apply them everywhere (a syntax built from many small basic rules)
info_file
import re  # the re package provides regular expressions
pattern = re.compile("(?P<birth>\d{6})[-]\d{7}")
# 940223-1701234 => birth == "940223" (birth is the group name)
pattern.sub("\g<birth>-*******", info_file)
Explanation: Looking at the development process
TDD => Test Driven Development
Why does it matter?
Tests > code => "Since there is more test code than production code, development takes longer at first."
As the project grows, dependencies keep increasing (dependency++).
No test code => maintenance becomes extremely difficult.
You should always start by testing small units of functionality (unit tests) => then move on to integration tests.
How to start test-driven development
Write unit tests.
Test scenario => define a process for testing a series of features: in what order, in what way, and in which forms.
The TDD cycle is described by the three steps below.
RED => warning; the test fails.
GREEN => change the code so that the test passes.
Refactor => with the tests still passing, refactor the code to make it cleaner.
End of explanation |
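A tiny illustration of one RED → GREEN turn of the cycle, using the plain assert style from this notebook (mask_phone is a hypothetical function invented for the example):
# RED: write the failing test first - mask_phone does not exist yet, so this would fail
# assert mask_phone("010-6235-3317") == "010-6235-****"

# GREEN: write just enough code to make the test pass
def mask_phone(phone):
    return phone[:-4] + "****"

assert mask_phone("010-6235-3317") == "010-6235-****"

# REFACTOR: with the test still green, the implementation can now be cleaned up safely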
3,088 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Contents index
This section mainly introduces the basic operations on NumPy arrays.
Part 1 covers creating and indexing arrays, data types, the dtype class, and custom heterogeneous data types.
Part 2 covers array indexing and slicing, mainly operations with the [] operator.
Part 3 covers how to change the dimensions of an array, introducing the ravel, flatten, transpose, resize and reshape functions.
Step1: ndarray is a multi-dimensional array object made up of the actual data plus metadata describing that data; most array operations only modify the metadata without changing the underlying data.
Create an array with the arange function
Step2: The array's shape attribute returns a tuple whose elements are the sizes of each dimension of the NumPy array.
1. Creating multi-dimensional arrays
The array function can build an array from a given object.
The given object should be array-like, such as a Python list or the output of numpy's arange function
Step3: Selecting elements
Step4: NumPy data types
Besides the integer, floating-point and complex types supported by Python, NumPy adds many other data types.
Type Remarks Character code
bool_ compatible
Step5: Complex numbers cannot be converted to integers or floats
Every element of a NumPy array has the same data type; here we show how many bytes a single element occupies
Step6: Attributes of the dtype class
Step7: The str attribute gives a string representation of the data type; its first character indicates the byte order, followed by the character code and the number of bytes occupied
Byte order refers to the order in which 32- and 64-bit words are stored, either big-endian or little-endian.
Big-endian stores the most significant byte at the lowest memory address and is denoted by >; conversely, little-endian stores the least significant byte at the lowest memory address and is denoted by <.
Creating a custom data type
A custom data type is a heterogeneous data type, which can be treated as the structure of a row in a spreadsheet or a database.
Below we create a custom heterogeneous data type containing a name stored as a string, a count stored as an integer, and a price stored as a float.
Step8: 2. Array indexing and slicing
Step9: Slicing and indexing multi-dimensional arrays
Step10: Use three-dimensional coordinates to select any room: floor, row and column
Step11: 3. Changing the dimensions of an array
ravel performs a flattening operation
Step12: flatten also flattens
The flatten function requests memory to hold its result, while the ravel function only returns a view of the array
Step13: Setting dimensions with a tuple
Step14: transpose transposes the matrix
Step15: resize does the same job as reshape
but resize modifies the array it operates on in place
%pylab inline
Explanation: Contents index
This section mainly introduces the basic operations on NumPy arrays.
Part 1 covers creating and indexing arrays, data types, the dtype class, and custom heterogeneous data types.
Part 2 covers array indexing and slicing, mainly operations with the [] operator.
Part 3 covers how to change the dimensions of an array, introducing the ravel, flatten, transpose, resize and reshape functions.
End of explanation
a = arange(5)
a.dtype
a
a.shape
Explanation: ndarray is a multi-dimensional array object made up of the actual data plus metadata describing that data; most array operations only modify the metadata without changing the underlying data.
Create an array with the arange function
End of explanation
m = array([arange(2), arange(2)])
print m
print m.shape
print type(m)
print type(m.shape)
Explanation: The array's shape attribute returns a tuple whose elements are the sizes of each dimension of the NumPy array.
1. Creating multi-dimensional arrays
The array function can build an array from a given object.
The given object should be array-like, such as a Python list or the output of numpy's arange function
End of explanation
a = array([[1,2],[3,4]])
print a[0,0]
print a[0,1]
Explanation: Selecting elements
End of explanation
print float64(42)
print int8(42.0)
print bool(42)
print float(True)
arange(8, dtype=uint16)
Explanation: NumPy data types
Besides the integer, floating-point and complex types supported by Python, NumPy adds many other data types.
Type Remarks Character code
bool_ compatible: Python bool '?'
bool8 8 bits
Integers:
byte compatible: C char 'b'
short compatible: C short 'h'
intc compatible: C int 'i'
int_ compatible: Python int 'l'
longlong compatible: C long long 'q'
intp large enough to fit a pointer 'p'
int8 8 bits
int16 16 bits
int32 32 bits
int64 64 bits
Unsigned integers:
ubyte compatible: C unsigned char 'B'
ushort compatible: C unsigned short 'H'
uintc compatible: C unsigned int 'I'
uint compatible: Python int 'L'
ulonglong compatible: C long long 'Q'
uintp large enough to fit a pointer 'P'
uint8 8 bits
uint16 16 bits
uint32 32 bits
uint64 64 bits
Floating-point numbers:
half 'e'
single compatible: C float 'f'
double compatible: C double
float_ compatible: Python float 'd'
longfloat compatible: C long float 'g'
float16 16 bits
float32 32 bits
float64 64 bits
float96 96 bits, platform?
float128 128 bits, platform?
Complex floating-point numbers:
csingle 'F'
complex_ compatible: Python complex 'D'
clongfloat 'G'
complex64 two 32-bit floats
complex128 two 64-bit floats
complex192 two 96-bit floats, platform?
complex256 two 128-bit floats, platform?
Any Python object:
object_ any Python object 'O'
Each data type has a corresponding type-conversion function
End of explanation
a.dtype
a.dtype.itemsize
Explanation: Complex numbers cannot be converted to integers or floats
Every element of a NumPy array has the same data type; here we show how many bytes a single element occupies
End of explanation
t = dtype('float64')
print t.char
print t.type
print t.str
Explanation: Attributes of the dtype class
End of explanation
t = dtype([('name', str_, 40), ('numitems', int32), ('price', float32)])
t
t['name']
itemz = array([('Meaning of life DVD', 32, 3.14), ('Butter', 13, 2.72)], dtype=t)
itemz[1]
Explanation: The str attribute gives a string representation of the data type; its first character indicates the byte order, followed by the character code and the number of bytes occupied
Byte order refers to the order in which 32- and 64-bit words are stored, either big-endian or little-endian.
Big-endian stores the most significant byte at the lowest memory address and is denoted by >; conversely, little-endian stores the least significant byte at the lowest memory address and is denoted by <.
Creating a custom data type
A custom data type is a heterogeneous data type, which can be treated as the structure of a row in a spreadsheet or a database.
Below we create a custom heterogeneous data type containing a name stored as a string, a count stored as an integer, and a price stored as a float.
End of explanation
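A small sketch (not in the original notebook) of the byte-order prefixes described above, written in the notebook's Python 2 print style:
# Sketch: big-endian ('>') vs little-endian ('<') versions of the same 64-bit float
print dtype('>f8').str      # '>f8' - big-endian
print dtype('<f8').str      # '<f8' - little-endian
print dtype('float64').str  # the native byte order of this machine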
a = arange(9)
# indices 0-7, with a step of 2
print a[:7:2]
# reverse the array with negative indices
print a[::-1]
print a[::-2]
Explanation: 2. Array indexing and slicing
End of explanation
b = arange(24).reshape(2,3,4)
print b.shape
print b
Explanation: Slicing and indexing multi-dimensional arrays
End of explanation
# select all the rooms on the first floor
print b[0]
print
print b[0, :, :]
# multiple colons can be replaced with a single ellipsis
b[0, ...]
# select elements with a stride
b[0,1,::2]
# applying the one-dimensional reversal to a multi-dimensional array reverses the elements along the first dimension
b[::-1]
b[::-1,::-1,::-1]
Explanation: Use three-dimensional coordinates to select any room: floor, row and column
End of explanation
b.ravel()
Explanation: 3. Changing the dimensions of an array
ravel performs a flattening operation
End of explanation
b.flatten()
Explanation: flatten also flattens
The flatten function requests memory to hold its result, while the ravel function only returns a view of the array
End of explanation
b.shape = (6, 4)
b
Explanation: Setting dimensions with a tuple
End of explanation
b.transpose()
Explanation: transpose transposes the matrix
End of explanation
b.reshape(2,3,4)
b
b.resize(2,12)
b
Explanation: resize does the same job as reshape
but resize modifies the array it operates on in place
End of explanation |
3,089 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GLM
Step1: Local Functions
Step2: Generate Data
This dummy dataset is created to emulate some data created as part of a study into quantified self, and the real data is more complicated than this. Ask Ian Osvald if you'd like to know more https
Step3: View means of the various combinations (poisson mean values)
Step4: Briefly Describe Dataset
Step5: Observe
Step6: 1. Manual method, create design matrices and manually specify model
Create Design Matrices
Step7: Create Model
Step8: Sample Model
Step9: View Diagnostics
Step10: Observe
Step11: Observe
Step12: Sample Model
Step13: View Traces
Step14: Transform coeffs
Step15: Observe
Step16: ... of 9.45 with a range [25%, 75%] of [4.17, 24.18], we see this is pretty close to the overall mean of | Python Code:
## Interactive magics
%matplotlib inline
import sys
import warnings
warnings.filterwarnings('ignore')
import re
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import patsy as pt
from scipy import optimize
# pymc3 libraries
import pymc3 as pm
import theano as thno
import theano.tensor as T
sns.set(style="darkgrid", palette="muted")
pd.set_option('display.mpl_style', 'default')
plt.rcParams['figure.figsize'] = 14, 6
np.random.seed(0)
Explanation: GLM: Poisson Regression
A minimal reproducible example of poisson regression to predict counts using dummy data.
This Notebook is basically an excuse to demo poisson regression using PyMC3, both manually and using the glm library to demo interactions using the patsy library. We will create some dummy data, poisson distributed according to a linear model, and try to recover the coefficients of that linear model through inference.
For more statistical detail see:
Basic info on Wikipedia
GLMs: Poisson regression, exposure, and overdispersion in Chapter 6.2 of ARM, Gelmann & Hill 2006
This worked example from ARM 6.2 by Clay Ford
This very basic model is insipired by a project by Ian Osvald, which is concerend with understanding the various effects of external environmental factors upon the allergic sneezing of a test subject.
Contents
Setup
Local Functions
Generate Data
Poisson Regression
Create Design Matrices
Create Model
Sample Model
View Diagnostics and Outputs
Package Requirements (shown as a conda-env YAML):
```
$> less conda_env_pymc3_examples.yml
name: pymc3_examples
channels:
- defaults
dependencies:
- python=3.5
- jupyter
- ipywidgets
- numpy
- scipy
- matplotlib
- pandas
- pytables
- scikit-learn
- statsmodels
- seaborn
- patsy
- requests
- pip
- pip:
- regex
$> conda env create --file conda_env_pymc3_examples.yml
$> source activate pymc3_examples
$> pip install --process-dependency-links git+https://github.com/pymc-devs/pymc3
```
Setup
End of explanation
def strip_derived_rvs(rvs):
'''Convenience fn: remove PyMC3-generated RVs from a list'''
ret_rvs = []
for rv in rvs:
if not (re.search('_log',rv.name) or re.search('_interval',rv.name)):
ret_rvs.append(rv)
return ret_rvs
def plot_traces_pymc(trcs, varnames=None):
''' Convenience fn: plot traces with overlaid means and values '''
nrows = len(trcs.varnames)
if varnames is not None:
nrows = len(varnames)
ax = pm.traceplot(trcs, varnames=varnames, figsize=(12,nrows*1.4),
lines={k: v['mean'] for k, v in
pm.df_summary(trcs,varnames=varnames).iterrows()})
for i, mn in enumerate(pm.df_summary(trcs, varnames=varnames)['mean']):
ax[i,0].annotate('{:.2f}'.format(mn), xy=(mn,0), xycoords='data',
xytext=(5,10), textcoords='offset points', rotation=90,
va='bottom', fontsize='large', color='#AA0022')
Explanation: Local Functions
End of explanation
# decide poisson theta values
theta_noalcohol_meds = 1 # no alcohol, took an antihist
theta_alcohol_meds = 3 # alcohol, took an antihist
theta_noalcohol_nomeds = 6 # no alcohol, no antihist
theta_alcohol_nomeds = 36 # alcohol, no antihist
# create samples
q = 1000
df = pd.DataFrame({
'nsneeze': np.concatenate((np.random.poisson(theta_noalcohol_meds, q),
np.random.poisson(theta_alcohol_meds, q),
np.random.poisson(theta_noalcohol_nomeds, q),
np.random.poisson(theta_alcohol_nomeds, q))),
'alcohol': np.concatenate((np.repeat(False, q),
np.repeat(True, q),
np.repeat(False, q),
np.repeat(True, q))),
'nomeds': np.concatenate((np.repeat(False, q),
np.repeat(False, q),
np.repeat(True, q),
np.repeat(True, q)))})
df.tail()
Explanation: Generate Data
This dummy dataset is created to emulate some data created as part of a study into quantified self, and the real data is more complicated than this. Ask Ian Osvald if you'd like to know more https://twitter.com/ianozsvald
Assumptions:
The subject sneezes N times per day, recorded as nsneeze (int)
The subject may or may not drink alcohol during that day, recorded as alcohol (boolean)
The subject may or may not take an antihistamine medication during that day, recorded as the negative action nomeds (boolean)
I postulate (probably incorrectly) that sneezing occurs at some baseline rate, which increases if an antihistamine is not taken, and further increased after alcohol is consumed.
The data is aggregated per day, to yield a total count of sneezes on that day, with a boolean flag for alcohol and antihistamine usage, under the big assumption that these factors have a direct causal effect on nsneeze.
Create 4000 days of data: daily counts of sneezes which are poisson distributed w.r.t alcohol consumption and antihistamine usage
End of explanation
df.groupby(['alcohol','nomeds']).mean().unstack()
Explanation: View means of the various combinations (poisson mean values)
End of explanation
g = sns.factorplot(x='nsneeze', row='nomeds', col='alcohol', data=df,
kind='count', size=4, aspect=1.5)
Explanation: Briefly Describe Dataset
End of explanation
fml = 'nsneeze ~ alcohol + nomeds + alcohol:nomeds' # full patsy formulation
fml = 'nsneeze ~ alcohol * nomeds' # lazy, alternative patsy formulation
Explanation: Observe:
This looks a lot like poisson-distributed count data (because it is)
With nomeds == False and alcohol == False (top-left, aka antihistamines WERE used, alcohol was NOT drunk) the mean of the poisson distribution of sneeze counts is low.
Changing alcohol == True (top-right) increases the sneeze count nsneeze slightly
Changing nomeds == True (lower-left) increases the sneeze count nsneeze further
Changing both alcohol == True and nomeds == True (lower-right) increases the sneeze count nsneeze a lot, increasing both the mean and variance.
Poisson Regression
Our model here is a very simple Poisson regression, allowing for interaction of terms:
$$ \theta = \exp(\beta X) $$
$$ Y_{sneeze\_count} \sim Poisson(\theta) $$
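For this dataset the design matrix carries an intercept, the two main effects and their interaction, so the linear predictor is (sketched in the notation of the coefficients used below):
$$ \log(\theta) = \beta_{0} + \beta_{1} \cdot alcohol + \beta_{2} \cdot nomeds + \beta_{3} \cdot alcohol{:}nomeds $$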
Create linear model for interaction of terms
End of explanation
(mx_en, mx_ex) = pt.dmatrices(fml, df, return_type='dataframe', NA_action='raise')
pd.concat((mx_ex.head(3),mx_ex.tail(3)))
Explanation: 1. Manual method, create design matrices and manually specify model
Create Design Matrices
End of explanation
with pm.Model() as mdl_fish:
# define priors, weakly informative Normal
b0 = pm.Normal('b0_intercept', mu=0, sd=10)
b1 = pm.Normal('b1_alcohol[T.True]', mu=0, sd=10)
b2 = pm.Normal('b2_nomeds[T.True]', mu=0, sd=10)
b3 = pm.Normal('b3_alcohol[T.True]:nomeds[T.True]', mu=0, sd=10)
# define linear model and exp link function
theta = (b0 +
b1 * mx_ex['alcohol[T.True]'] +
b2 * mx_ex['nomeds[T.True]'] +
b3 * mx_ex['alcohol[T.True]:nomeds[T.True]'])
## Define Poisson likelihood
y = pm.Poisson('y', mu=np.exp(theta), observed=mx_en['nsneeze'].values)
Explanation: Create Model
End of explanation
with mdl_fish:
trc_fish = pm.sample(2000, tune=1000, njobs=4)[1000:]
Explanation: Sample Model
End of explanation
rvs_fish = [rv.name for rv in strip_derived_rvs(mdl_fish.unobserved_RVs)]
plot_traces_pymc(trc_fish, varnames=rvs_fish)
Explanation: View Diagnostics
End of explanation
np.exp(pm.df_summary(trc_fish, varnames=rvs_fish)[['mean','hpd_2.5','hpd_97.5']])
Explanation: Observe:
The model converges quickly and the traceplots look pretty well mixed
Transform coeffs and recover theta values
End of explanation
with pm.Model() as mdl_fish_alt:
pm.glm.GLM.from_formula(fml, df, family=pm.glm.families.Poisson())
Explanation: Observe:
The contributions from each feature as a multiplier of the baseline sneeze count appear to be as per the data generation:
exp(b0_intercept): mean=1.02 cr=[0.96, 1.08]
Roughly linear baseline count when no alcohol and meds, as per the generated data:
theta_noalcohol_meds = 1 (as set above)
theta_noalcohol_meds = exp(b0_intercept)
= 1
exp(b1_alcohol): mean=2.88 cr=[2.69, 3.09]
non-zero positive effect of adding alcohol, a ~3x multiplier of
baseline sneeze count, as per the generated data:
theta_alcohol_meds = 3 (as set above)
theta_alcohol_meds = exp(b0_intercept + b1_alcohol)
= exp(b0_intercept) * exp(b1_alcohol)
= 1 * 3 = 3
exp(b2_nomeds[T.True]): mean=5.76 cr=[5.40, 6.17]
larger, non-zero positive effect of adding nomeds, a ~6x multiplier of
baseline sneeze count, as per the generated data:
theta_noalcohol_nomeds = 6 (as set above)
theta_noalcohol_nomeds = exp(b0_intercept + b2_nomeds)
= exp(b0_intercept) * exp(b2_nomeds)
= 1 * 6 = 6
exp(b3_alcohol[T.True]:nomeds[T.True]): mean=2.12 cr=[1.98, 2.30]
small, positive interaction effect of alcohol and nomeds, a ~2x multiplier of
baseline sneeze count, as per the generated data:
theta_alcohol_nomeds = 36 (as set above)
theta_alcohol_nomeds = exp(b0_intercept + b1_alcohol + b2_nomeds + b3_alcohol:nomeds)
= exp(b0_intercept) * exp(b1_alcohol) * exp(b2_nomeds) * exp(b3_alcohol:nomeds)
= 1 * 3 * 6 * 2 = 36
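As a quick numerical sanity check of the multipliers quoted above (a rough sketch that reuses the reported posterior means rather than re-querying the trace):
```
b0, b1, b2, b3 = 1.02, 2.88, 5.76, 2.12   # exp() of the posterior mean coefficients above
print(b0)                  # ~ theta_noalcohol_meds = 1
print(b0 * b1)             # ~ theta_alcohol_meds = 3
print(b0 * b2)             # ~ theta_noalcohol_nomeds = 6
print(b0 * b1 * b2 * b3)   # ~ theta_alcohol_nomeds = 36
```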
2. Alternative method, using pymc.glm
Create Model
Alternative automatic formulation using pymc.glm
End of explanation
with mdl_fish_alt:
trc_fish_alt = pm.sample(4000, tune=2000)[2000:]
Explanation: Sample Model
End of explanation
rvs_fish_alt = [rv.name for rv in strip_derived_rvs(mdl_fish_alt.unobserved_RVs)]
plot_traces_pymc(trc_fish_alt, varnames=rvs_fish_alt)
Explanation: View Traces
End of explanation
np.exp(pm.df_summary(trc_fish_alt, varnames=rvs_fish_alt)[['mean','hpd_2.5','hpd_97.5']])
Explanation: Transform coeffs
End of explanation
np.percentile(trc_fish_alt['mu'], [25,50,75])
Explanation: Observe:
The traceplots look well mixed
The transformed model coeffs look more or less the same as those generated by the manual model
Note also that the mu coeff is for the overall mean of the dataset and has an extreme skew, if we look at the median value ...
End of explanation
df['nsneeze'].mean()
Explanation: ... of 9.45 with a range [25%, 75%] of [4.17, 24.18], we see this is pretty close to the overall mean of:
End of explanation |
3,090 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>SKLearn predictor - Regressor</h1>
<hr style="border
Step1: <span>
Build a processor.
</span>
<br>
<span>
This is required by the regressor in order to parse the input raw data.<br>
A ATTPlainHitProcessor is needed here.
</span>
Step2: <span>
We build the regressor now, injecting the processor
</span>
Step3: <span>
We define the training data source file
</span>
Step4: <span>
And load the dataset
</span>
Step5: <span>
Now, train
</span>
Step6: <span>
And finally test
</span> | Python Code:
import sys
#sys.path.insert(0, 'I:/git/att/src/python/')
sys.path.insert(0, 'i:/dev/workspaces/python/att-workspace/att/src/python/')
Explanation: <h1>SKLearn predictor - Regressor</h1>
<hr style="border: 1px solid #000;">
<span>
<h2>ATT hit predictor.</h2>
</span>
<br>
<span>
This notebook shows how the hit predictor works.<br>
The Hit predictor aim is to guess (x,y) coords from serial port readings.
There are two steps: Train and Predict.
</span>
<span>
Set modules path first:
</span>
End of explanation
from hit.process.processor import ATTPlainHitProcessor
plainProcessor = ATTPlainHitProcessor()
Explanation: <span>
Build a processor.
</span>
<br>
<span>
This is required by the regressor in order to parse the input raw data.<br>
An ATTPlainHitProcessor is needed here.
</span>
End of explanation
from hit.train.regressor import ATTSkLearnHitRegressor
regressor = ATTSkLearnHitRegressor(plainProcessor)
Explanation: <span>
We build the regressor now, injecting the processor
</span>
End of explanation
TRAIN_VALUES_FILE_LEFT = "train_data/train_points_20160129_left.txt"
Explanation: <span>
We define the training data source file
</span>
End of explanation
import numpy as np
(training_values, Y) = regressor.collect_train_hits_from_file(TRAIN_VALUES_FILE_LEFT)
print "Train Values: ", np.shape(training_values), np.shape(Y)
Explanation: <span>
And load the dataset
</span>
End of explanation
regressor.train(training_values, Y)
Explanation: <span>
Now, train
</span>
End of explanation
hit = "hit: {1568:6 1416:5 3230:6 787:8 2757:4 0:13 980:4 3116:4 l}"
print '(6,30)'
print regressor.predict(hit)
Explanation: <span>
And finally test
</span>
End of explanation |
3,091 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: 1. Get zip code from wikipedia
Step6: 2. Convert zip code to coordinates
Step7: 3. Sanity check
Step8: 4. Get business type and # of establishments per year from US census
Check US census for the data. It can be downloaded as csv format.
Step9: 3. Collect property values per zip code over time
Step12: Neighborhood boundaries in SF | Python Code:
# GET SF ZIP CODES from http://www.city-data.com/zipmaps/San-Francisco-California.html
import itertools
sf_zip_codes = [94102, 94103, 94104, 94105, 94107, 94108, 94109, 94110, 94111, 94112, 94114, 94115, 94116, 94117, 94118, 94121, 94122, 94123, 94124, 94127, 94129, 94131, 94132, 94133, 94134, 94158]
Explanation: 1. Get zip code from wikipedia
End of explanation
# Geopy has a zip code converter!
from geopy.geocoders import Nominatim
geolocator = Nominatim()
location = geolocator.geocode("78704")
print 'EXAMPLE:'
print(location.address)
print((location.latitude, location.longitude))
# But something is wrong.
location = geolocator.geocode(sf_zip_codes[0])
print 'EXAMPLE:'
print(location.address)
print((location.latitude, location.longitude))
# So we're using Google Geocode API.
GOOGLE_KEY = ''
query_url = 'https://maps.googleapis.com/maps/api/geocode/json?address=94102&key=%s' % (GOOGLE_KEY)
r = requests.get(query_url)
r.json()
# Get coordinates.
temp = r.json()
temp_ = temp['results'][0]['geometry']['location']
temp_
lats = []
lngs = []
for sf_zip_code in sf_zip_codes:
query_url = 'https://maps.googleapis.com/maps/api/geocode/json?address=%s&key=%s' % (str(sf_zip_code),GOOGLE_KEY)
r = requests.get(query_url)
temp = r.json()
lat = temp['results'][0]['geometry']['location']['lat']
lng = temp['results'][0]['geometry']['location']['lng']
lats.append(lat)
lngs.append(lng)
Explanation: 2. Convert zip code to coordinates
End of explanation
import folium
m = folium.Map(location=[37.7786871, -122.4212424],zoom_start=13)
m.circle_marker(location=[37.7786871, -122.4212424],radius=100)
for i in range(len(sf_zip_codes)):
m.circle_marker(location=[lats[i], lngs[i]], radius=500, #100 seems good enough for now
popup=str(sf_zip_codes[i]), line_color = "#980043",
fill_color="#980043", fill_opacity=.2)
m.create_map(path='sf_zip_code_map.html')
Explanation: 3. Sanity check: map visualization
End of explanation
# business type
df = pd.read_csv('zbp13detail.txt')
df.head()
sf_zip_codes = [94102, 94103, 94104, 94105, 94107, 94108, 94109, 94110, 94111, 94112, 94114, 94115, 94116, 94117, 94118, 94121, 94122, 94123, 94124, 94127, 94129, 94131, 94132, 94133, 94134, 94158]
oak_zip_codes = [94601, 94602, 94603, 94605, 94606, 94607, 94610, 94611, 94612, 94613, 94621]
bay_zip_codes = sf_zip_codes + oak_zip_codes
# save zipcode file
import csv
myfile = open('bay_zip_codes.csv', 'wb')
wr = csv.writer(myfile)
wr.writerow(bay_zip_codes)
# load zipcode file
with open('bay_zip_codes.csv', 'rb') as f:
reader = csv.reader(f)
bay_zip_codes = list(reader)[0]
# convert str list to int list
bay_zip_codes = map(int, bay_zip_codes)
df_sf_oak = df.loc[df['zip'].isin(bay_zip_codes)]
# save as a file
df_sf_oak.to_csv('ZCBT_sf_oak_2013.csv',encoding='utf-8',index=False)
# sf1.sort(columns='est',ascending=False)
df_sf_oak.tail()
# let's compare to EPA
epa = b.loc[b['zip'] == 94303]
epa.sort(columns='est',ascending=False)
Explanation: 4. Get business type and # of establishments per year from US census
Check US census for the data. It can be downloaded as csv format.
End of explanation
import trulia.stats as trustat
import trulia.location as truloc
zip_code_stats = trulia.stats.TruliaStats(TRULIA_KEY).get_zip_code_stats(zip_code='90025', start_date='2014-01-01', end_date='2014-01-31')
temp = zip_code_stats['listingStats']['listingStat']
df = DataFrame(temp)
df.head()
def func(x,key):
k = x['subcategory'][0][key] # here I read key values
return pd.Series(k)
df['numProperties']=df['listingPrice'].apply((lambda x: func(x,'numberOfProperties')))
df['medPrice']=df['listingPrice'].apply((lambda x: func(x,'medianListingPrice')))
df['avrPrice']=df['listingPrice'].apply((lambda x: func(x,'averageListingPrice')))
df = df.drop('listingPrice',1)
df.head()
Explanation: 3. Collect property values per zip code over time
End of explanation
# Get neighborhoods
neighborhoods = trulia.location.LocationInfo(TRULIA_KEY).get_neighborhoods_in_city('San Francisco', 'CA')
neighborhoods
# Trulia does not provide coordinates.
Alamo_Square = neighborhoods[0]
Alamo_Square
neighborhood_stats = trustat.TruliaStats(TRULIA_KEY).get_neighborhood_stats(neighborhood_id=7183, start_date='2012-01-01', end_date='2012-06-30')
neighborhood_stats.keys()
neighborhood_stats['listingStats'].keys()
a = neighborhood_stats['listingStats']['listingStat']
b = DataFrame(a)
b.head()
# Let's focus on All properties
x = b['listingPrice'][0]
x['subcategory'][0]
x['subcategory'][0]['type']
b['numProperties']=b['listingPrice'].apply((lambda x: func(x,'numberOfProperties')))
b['medPrice']=b['listingPrice'].apply((lambda x: func(x,'medianListingPrice')))
b['avrPrice']=b['listingPrice'].apply((lambda x: func(x,'averageListingPrice')))
b.drop('listingPrice',1)
matplotlib.dates.date2num(a)
date_list=[]
for date in b['weekEndingDate']:
date_list.append(datetime.strptime(date,'%Y-%m-%d'))
#a = datetime.strptime(b['weekEndingDate'],'%Y-%m-%d')
# plot time vs. value
dates = matplotlib.dates.date2num(date_list)
fig, ax = plt.subplots()
ax.plot_date(dates, b.medPrice,'-')
Explanation: Neighborhood boundaries in SF
End of explanation |
3,092 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Set up working directory
Step1: README
This part of pipeline search for the SSU rRNA gene fragments, classify them, and extract reads aligned specific region. It is also heavy lifting part of the whole pipeline (more cpu will help).
This part works with one seqfile a time. You just need to change the "Seqfile" and maybe other parameters in the two cells bellow.
To run commands, click "Cell" then "Run All". After it finishes, you will see "*** pipeline runs successsfully
Step2: Other parameters to set
Step3: Pass hits to mothur aligner
Step4: Get aligned seqs that have > 50% matched to references
Step5: Search is done here (the computational intensive part). Hooray!
\$Tag.ssu.out/\$Tag.qc.\$Gene.align.filter
Step6: Classify SSU rRNA gene seqs using SILVA
Step7: Classify SSU rRNA gene seqs with Greengene for copy correction later
Step8: This part of pipeline (working with one sequence file) finishes here. Next we will combine samples for community analysis (see unsupervised analysis).
Following are files useful for community analysis | Python Code:
cd /usr/local/notebooks
mkdir -p ./workdir
#check seqfile files to process in data directory (make sure you still remember the data directory)
!ls ./data/test/data
Explanation: Set up working directory
End of explanation
Seqfile='./data/test/data/2d.fa'
Explanation: README
This part of the pipeline searches for SSU rRNA gene fragments, classifies them, and extracts the reads aligned to a specific region. It is also the heavy-lifting part of the whole pipeline (more CPUs will help).
This part works with one seqfile at a time. You just need to change the "Seqfile" and maybe other parameters in the two cells below.
To run commands, click "Cell" then "Run All". After it finishes, you will see "*** pipeline runs successfully :)" at the bottom of this page.
If your computer has many processors, there are two ways to make use of the resource:
Set "Cpu" higher number.
make more copies of this notebook (click "File" then "Make a copy" in menu bar), so you can run the step on multiple files at the same time.
(Again we assume the "Seqfile" is quality trimmed.)
Here we will process one file at a time; set the "Seqfile" variable to the seqfile name to be processed.
The first part of the seqfile basename (separated by ".") will be the label of this sample, so name it properly.
e.g. for "/usr/local/notebooks/data/test/data/1c.fa", "1c" will be the label of this sample.
End of explanation
Cpu='2' # number of maxixum threads for search and alignment
Hmm='./data/SSUsearch_db/Hmm.ssu.hmm' # hmm model for ssu
Gene='ssu'
Script_dir='./SSUsearch/scripts'
Gene_model_org='./data/SSUsearch_db/Gene_model_org.16s_ecoli_J01695.fasta'
Ali_template='./data/SSUsearch_db/Ali_template.silva_ssu.fasta'
Start='577' #pick regions for de novo clustering
End='727'
Len_cutoff='100' # min length for reads picked for the region
Gene_tax='./data/SSUsearch_db/Gene_tax.silva_taxa_family.tax' # silva 108 ref
Gene_db='./data/SSUsearch_db/Gene_db.silva_108_rep_set.fasta'
Gene_tax_cc='./data/SSUsearch_db/Gene_tax_cc.greengene_97_otus.tax' # greengene 2012.10 ref for copy correction
Gene_db_cc='./data/SSUsearch_db/Gene_db_cc.greengene_97_otus.fasta'
# first part of file basename will the label of this sample
import os
Filename=os.path.basename(Seqfile)
Tag=Filename.split('.')[0]
import os
Hmm=os.path.abspath(Hmm)
Seqfile=os.path.abspath(Seqfile)
Script_dir=os.path.abspath(Script_dir)
Gene_model_org=os.path.abspath(Gene_model_org)
Ali_template=os.path.abspath(Ali_template)
Gene_tax=os.path.abspath(Gene_tax)
Gene_db=os.path.abspath(Gene_db)
Gene_tax_cc=os.path.abspath(Gene_tax_cc)
Gene_db_cc=os.path.abspath(Gene_db_cc)
os.environ.update(
{'Cpu':Cpu,
'Hmm':os.path.abspath(Hmm),
'Gene':Gene,
'Seqfile':os.path.abspath(Seqfile),
'Filename':Filename,
'Tag':Tag,
'Script_dir':os.path.abspath(Script_dir),
'Gene_model_org':os.path.abspath(Gene_model_org),
'Ali_template':os.path.abspath(Ali_template),
'Start':Start,
'End':End,
'Len_cutoff':Len_cutoff,
'Gene_tax':os.path.abspath(Gene_tax),
'Gene_db':os.path.abspath(Gene_db),
'Gene_tax_cc':os.path.abspath(Gene_tax_cc),
'Gene_db_cc':os.path.abspath(Gene_db_cc)})
!echo "*** make sure: parameters are right"
!echo "Seqfile: $Seqfile\nCpu: $Cpu\nFilename: $Filename\nTag: $Tag"
cd workdir
mkdir -p $Tag.ssu.out
### start hmmsearch
!echo "*** hmmsearch starting"
!time hmmsearch --incE 10 --incdomE 10 --cpu $Cpu \
--domtblout $Tag.ssu.out/$Tag.qc.$Gene.hmmdomtblout \
-o /dev/null -A $Tag.ssu.out/$Tag.qc.$Gene.sto \
$Hmm $Seqfile
!echo "*** hmmsearch finished"
!python $Script_dir/get-seq-from-hmmout.py \
$Tag.ssu.out/$Tag.qc.$Gene.hmmdomtblout \
$Tag.ssu.out/$Tag.qc.$Gene.sto \
$Tag.ssu.out/$Tag.qc.$Gene
Explanation: Other parameters to set
End of explanation
!echo "*** Starting mothur align"
!cat $Gene_model_org $Tag.ssu.out/$Tag.qc.$Gene > $Tag.ssu.out/$Tag.qc.$Gene.RFadded
# mothur does not allow tab between its flags, thus no indents here
!time mothur "#align.seqs(candidate=$Tag.ssu.out/$Tag.qc.$Gene.RFadded, template=$Ali_template, threshold=0.5, flip=t, processors=$Cpu)"
!rm -f mothur.*.logfile
Explanation: Pass hits to mothur aligner
End of explanation
!python $Script_dir/mothur-align-report-parser-cutoff.py \
$Tag.ssu.out/$Tag.qc.$Gene.align.report \
$Tag.ssu.out/$Tag.qc.$Gene.align \
$Tag.ssu.out/$Tag.qc.$Gene.align.filter \
0.5
!python $Script_dir/remove-gap.py $Tag.ssu.out/$Tag.qc.$Gene.align.filter $Tag.ssu.out/$Tag.qc.$Gene.align.filter.fa
Explanation: Get aligned seqs that have > 50% matched to references
End of explanation
!python $Script_dir/region-cut.py $Tag.ssu.out/$Tag.qc.$Gene.align.filter $Start $End $Len_cutoff
!mv $Tag.ssu.out/$Tag.qc.$Gene.align.filter."$Start"to"$End".cut.lenscreen $Tag.ssu.out/$Tag.forclust
Explanation: Search is done here (the computational intensive part). Hooray!
\$Tag.ssu.out/\$Tag.qc.\$Gene.align.filter:
aligned SSU rRNA gene fragments
\$Tag.ssu.out/\$Tag.qc.\$Gene.align.filter.fa:
unaligned SSU rRNA gene fragments
Extract the reads mapped 150bp region in V4 (577-727 in E.coli SSU rRNA gene position) for unsupervised clustering
End of explanation
!rm -f $Tag.ssu.out/$Tag.qc.$Gene.align.filter.*.wang.taxonomy
!mothur "#classify.seqs(fasta=$Tag.ssu.out/$Tag.qc.$Gene.align.filter.fa, template=$Gene_db, taxonomy=$Gene_tax, cutoff=50, processors=$Cpu)"
!mv $Tag.ssu.out/$Tag.qc.$Gene.align.filter.*.wang.taxonomy \
$Tag.ssu.out/$Tag.qc.$Gene.align.filter.wang.silva.taxonomy
!python $Script_dir/count-taxon.py \
$Tag.ssu.out/$Tag.qc.$Gene.align.filter.wang.silva.taxonomy \
$Tag.ssu.out/$Tag.qc.$Gene.align.filter.wang.silva.taxonomy.count
!rm -f mothur.*.logfile
Explanation: Classify SSU rRNA gene seqs using SILVA
End of explanation
!rm -f $Tag.ssu.out/$Tag.qc.$Gene.align.filter.*.wang.taxonomy
!mothur "#classify.seqs(fasta=$Tag.ssu.out/$Tag.qc.$Gene.align.filter.fa, template=$Gene_db_cc, taxonomy=$Gene_tax_cc, cutoff=50, processors=$Cpu)"
!mv $Tag.ssu.out/$Tag.qc.$Gene.align.filter.*.wang.taxonomy \
$Tag.ssu.out/$Tag.qc.$Gene.align.filter.wang.gg.taxonomy
!python $Script_dir/count-taxon.py \
$Tag.ssu.out/$Tag.qc.$Gene.align.filter.wang.gg.taxonomy \
$Tag.ssu.out/$Tag.qc.$Gene.align.filter.wang.gg.taxonomy.count
!rm -f mothur.*.logfile
# check the output directory
!ls $Tag.ssu.out
Explanation: Classify SSU rRNA gene seqs with Greengene for copy correction later
End of explanation
!echo "*** pipeline runs successsfully :)"
Explanation: This part of pipeline (working with one sequence file) finishes here. Next we will combine samples for community analysis (see unsupervised analysis).
Following are files useful for community analysis:
1c.577to727: aligned fasta file of seqs mapped to target region for de novo clustering
1c.qc.ssu.align.filter: aligned fasta file of all SSU rRNA gene fragments
1c.qc.ssu.align.filter.wang.gg.taxonomy: Greengene taxonomy (for copy correction)
1c.qc.ssu.align.filter.wang.silva.taxonomy: SILVA taxonomy
End of explanation |
3,093 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2022 The TensorFlow Authors.
Step1: Assess privacy risks of an Image classification model with Secret Sharer Attack
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step3: Functions for the model, and the CIFAR-10 data
Step8: Secret sharer attack on the model
The general idea of secret sharer is to check if the model behaves differently on data it has seen vs. has not seen. Such memorization does not happen only on generative sequence models. It is thus natural to ask if the idea can be adapted to image classification tasks as well.
Here, we present one potential way to do secret sharer on image classification task. Specifically, we will consider
two types of secrets, where the secret is
- (an image with each pixel sampled uniformly at random, a random label)
- (an image with text on it, a random label)
But of course, you can try other secrets, for example, you can use images from another dataset (like MNIST), and a fixed label.
Generate Secrets
First, we define the functions needed to generate random image, image with random text, and random labels.
Step10: Now we will use the functions above to generate the secrets. Here, we plan to try secrets that are repeated once, 10 times and 50 times. For each repetition value, we will pick 20 secrets, to get a more accurate exposure estimation. We will leave out 65536 samples as references.
Step11: Train the Model
We will train two models, one with the original CIFAR-10 data, the other with CIFAR-10 combined with the secrets.
Step13: Secret Sharer Evaluation
Similar to perplexity in language model, here we will use the cross entropy loss for our image classification model to measure how confident the model is on an example. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2022 The TensorFlow Authors.
End of explanation
# @title Install dependencies
# You may need to restart the runtime to use tensorflow-privacy.
from IPython.display import clear_output
!pip install git+https://github.com/tensorflow/privacy.git
clear_output()
# @title Imports
import functools
import os
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
from PIL import Image, ImageDraw, ImageFont
from matplotlib import pyplot as plt
import math
from tensorflow_privacy.privacy.privacy_tests.secret_sharer.generate_secrets import SecretConfig, construct_secret, generate_random_sequences, construct_secret_dataset
from tensorflow_privacy.privacy.privacy_tests.secret_sharer.exposures import compute_exposure_interpolation, compute_exposure_extrapolation
from tensorflow_privacy.privacy.privacy_tests.membership_inference_attack.utils import log_loss
Explanation: Assess privacy risks of an Image classification model with Secret Sharer Attack
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/privacy/blob/master/tensorflow_privacy/privacy/privacy_tests/secret_sharer/secret_sharer_image_example.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/privacy/blob/master/tensorflow_privacy/privacy/privacy_tests/secret_sharer/secret_sharer_image_example.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
In this colab, we adapt secret sharer in an image classification model. We will train a model with "secrets", i.e. random images, inserted in the training data, and then evaluate if the model has "memorized" those secrets.
Setup
You may set the runtime to use a GPU by Runtime > Change runtime type > Hardware accelerator.
End of explanation
# @title Functions for defining model and loading data.
def small_cnn():
"""Setup a small CNN for image classification."""
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Input(shape=(32, 32, 3)))
for _ in range(3):
model.add(tf.keras.layers.Conv2D(32, (3, 3), activation='relu'))
model.add(tf.keras.layers.MaxPooling2D())
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(64, activation='relu'))
model.add(tf.keras.layers.Dense(10))
return model
def load_cifar10():
def convert_to_numpy(ds):
images, labels = [], []
for sample in tfds.as_numpy(ds):
images.append(sample['image'])
labels.append(sample['label'])
return np.array(images).astype(np.float32) / 255, np.array(labels).astype(np.int32)
ds_train = tfds.load('cifar10', split='train')
ds_test = tfds.load('cifar10', split='test')
x_train, y_train = convert_to_numpy(ds_train)
x_test, y_test = convert_to_numpy(ds_test)
# x has shape (n, 32, 32, 3), y has shape (n,)
return x_train, y_train, x_test, y_test
# @title Function for training the model.
def train_model(x_train, y_train, x_test, y_test,
learning_rate=0.02, batch_size=250, epochs=50):
model = small_cnn()
optimizer = tf.keras.optimizers.SGD(lr=learning_rate, momentum=0.9)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer=optimizer, loss=loss, metrics=['accuracy'])
# Train model
model.fit(
x_train,
y_train,
epochs=epochs,
validation_data=(x_test, y_test),
batch_size=batch_size,
verbose=2)
return model
Explanation: Functions for the model, and the CIFAR-10 data
End of explanation
# @title Functions for generating secrets
def generate_random_label(n, nclass, seed):
"""Generates random labels."""
return np.random.RandomState(seed).choice(nclass, n)
def generate_uniform_random(shape, n, seed):
"""Generates uniformly random images."""
rng = np.random.RandomState(seed)
data = rng.uniform(size=(n,) + shape)
return data
def images_from_texts(sequences, shape, font_fn, num_lines=3, bg_color=(255, 255, 255), fg_color=(0, 0, 0)):
"""Generates an image with a given text sequence."""
characters_per_line = len(sequences[0]) // num_lines
if characters_per_line * num_lines < len(sequences[0]):
characters_per_line += 1
line_height = shape[1] // num_lines
font_size = line_height
font_width = ImageFont.truetype(font_fn, font_size).getsize('a')[0]
if font_width > shape[0] / characters_per_line:
font_size = int(math.floor(font_size / font_width * (shape[0] / characters_per_line)))
assert font_size > 0
font = ImageFont.truetype(font_fn, font_size)
imgs = []
for sequence in sequences:
img = Image.new('RGB', shape, color=bg_color)
d = ImageDraw.Draw(img)
for i in range(num_lines):
d.text((0, i * line_height),
sequence[i * characters_per_line:(i + 1) * characters_per_line],
font=font, fill=fg_color)
imgs.append(img)
return imgs
def generate_random_text_image(shape, n, seed, font_fn, vocab, pattern, num_lines, bg_color, fg_color):
"""Generates images with random texts."""
text_sequences = generate_random_sequences(vocab, pattern, n, seed)
imgs = images_from_texts(text_sequences, shape, font_fn, num_lines, bg_color, fg_color)
return np.array([np.array(i) for i in imgs])
# The function for plotting text on image needs a font, so we download it here.
# You can try other fonts. Notice that the images_from_texts is implemented under the assumption that the font is monospace.
!wget https://github.com/google/fonts/raw/main/apache/robotomono/RobotoMono%5Bwght%5D.ttf
font_fn = 'RobotoMono[wght].ttf'
Explanation: Secret sharer attack on the model
The general idea of secret sharer is to check if the model behaves differently on data it has seen vs. has not seen. Such memorization does not happen only on generative sequence models. It is thus natural to ask if the idea can be adapted to image classification tasks as well.
Here, we present one potential way to do secret sharer on image classification task. Specifically, we will consider
two types of secrets, where the secret is
- (an image with each pixel sampled uniformly at random, a random label)
- (an image with text on it, a random label)
But of course, you can try other secrets, for example, you can use images from another dataset (like MNIST), and a fixed label.
Generate Secrets
First, we define the functions needed to generate random image, image with random text, and random labels.
End of explanation
#@title Generate secrets
num_repetitions = [1, 10, 50]
num_secrets_for_repetitions = [20] * len(num_repetitions)
num_references = 65536
secret_config_text = SecretConfig(name='random text image', num_repetitions=num_repetitions, num_secrets_for_repetitions=num_secrets_for_repetitions, num_references=num_references)
secret_config_rand = SecretConfig(name='uniform random image', num_repetitions=num_repetitions, num_secrets_for_repetitions=num_secrets_for_repetitions, num_references=num_references)
seed = 123
shape = (32, 32)
nclass = 10
n = num_references + sum(num_secrets_for_repetitions)
# setting for text image
num_lines = 3
bg_color=(255, 255, 0)
fg_color=(0, 0, 0)
image_text = generate_random_text_image(shape, n, seed,
font_fn,
list('0123456789'), 'My SSN is {}{}{}-{}{}-{}{}{}{}',
num_lines, bg_color, fg_color)
image_text = image_text.astype(np.float32) / 255
image_rand = generate_uniform_random(shape + (3,), n, seed)
label = generate_random_label(n, nclass, seed)
data_text = list(zip(image_text, label)) # pair up the image and label
data_rand = list(zip(image_rand, label))
# `construct_secret` partitions data into subsets of secrets that are going to be
# repeated for different numbers of times, and a references set. It returns a SecretsSet with 3 fields:
#   config is the configuration of the secrets set
#   references is a list of `num_references` samples to be used as references
#   secrets is a dictionary, where the key is the number of repetitions and the value is a list of samples
secrets_text = construct_secret(secret_config_text, data_text)
secrets_rand = construct_secret(secret_config_rand, data_rand)
#@title Let's look at the secrets we generated
def visualize_images(imgs):
f, axes = plt.subplots(1, len(imgs))
for i, img in enumerate(imgs):
axes[i].imshow(img)
visualize_images(image_text[:5])
visualize_images(image_rand[:5])
Explanation: Now we will use the functions above to generate the secrets. Here, we plan to try secrets that are repeated once, 10 times and 50 times. For each repetition value, we will pick 20 secrets, to get a more accurate exposure estimation. We will leave out 65536 samples as references.
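(As a quick count: with these settings each secret type should contribute 20×1 + 20×10 + 20×50 = 1,220 injected examples, so roughly 2,440 in total for the two types — this should match the injection count printed when the combined training set is built below.)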
End of explanation
# @title Train a model with original data
x_train, y_train, x_test, y_test = load_cifar10()
model_original = train_model(x_train, y_train, x_test, y_test)
# @title Train model with original data combined with secrets
# `construct_secret_dataset` returns a list of secrets, repeated for the
# required number of times.
secret_dataset = construct_secret_dataset([secrets_text, secrets_rand])
x_secret, y_secret = zip(*secret_dataset)
x_combined = np.concatenate([x_train, x_secret])
y_combined = np.concatenate([y_train, y_secret])
print(f'We will inject {len(x_secret)} samples so the total number of training data is {x_combined.shape[0]}')
model_secret = train_model(x_combined, y_combined, x_test, y_test)
Explanation: Train the Model
We will train two models, one with the original CIFAR-10 data, the other with CIFAR-10 combined with the secrets.
End of explanation
# @title Functions for computing losses and exposures
def calculate_losses(model, samples, is_logit=False, batch_size=1000):
"""Calculate losses of model prediction on data, provided true labels."""
data, labels = zip(*samples)
data, labels = np.array(data), np.array(labels)
pred = model.predict(data, batch_size=batch_size, verbose=0)
if is_logit:
pred = tf.nn.softmax(pred).numpy()
loss = log_loss(labels, pred)
return loss
def compute_loss_for_secret(secrets, model):
losses_ref = calculate_losses(model, secrets.references)
losses = {rep: calculate_losses(model, samples) for rep, samples in secrets.secrets.items()}
return losses, losses_ref
def compute_exposure_for_secret(secrets, model):
losses, losses_ref = compute_loss_for_secret(secrets, model)
exposure_interpolation = compute_exposure_interpolation(losses, losses_ref)
exposure_extrapolation = compute_exposure_extrapolation(losses, losses_ref)
return exposure_interpolation, exposure_extrapolation, losses, losses_ref
# @title Check the exposures
exp_i_orig_text, exp_e_orig_text, _, _ = compute_exposure_for_secret(secrets_text, model_original)
exp_i_orig_rand, exp_e_orig_rand, _, _ = compute_exposure_for_secret(secrets_rand, model_original)
exp_i_scrt_text, exp_e_scrt_text, _, _ = compute_exposure_for_secret(secrets_text, model_secret)
exp_i_scrt_rand, exp_e_scrt_rand, _, _ = compute_exposure_for_secret(secrets_rand, model_secret)
# First, let's confirm that the model trained with original data won't show any exposure
print('On model trained with original data:')
print('Text secret')
print(' Interpolation:', '; '.join([f'repetition={r}, avg_exposure={np.mean(exp):.2f}±{np.std(exp):.2f}' for r, exp in exp_i_orig_text.items()]))
print(' Extrapolation:', '; '.join([f'repetition={r}, avg_exposure={np.mean(exp):.2f}±{np.std(exp):.2f}' for r, exp in exp_e_orig_text.items()]))
print('Random secret')
print(' Interpolation:', '; '.join([f'repetition={r}, avg_exposure={np.mean(exp):.2f}±{np.std(exp):.2f}' for r, exp in exp_i_orig_rand.items()]))
print(' Extrapolation:', '; '.join([f'repetition={r}, avg_exposure={np.mean(exp):.2f}±{np.std(exp):.2f}' for r, exp in exp_e_orig_rand.items()]))
# Then, let's look at the model trained with combined data
print('On model trained with original data + secrets:')
print('Text secret')
print(' Interpolation:', '; '.join([f'repetition={r}, avg_exposure={np.mean(exp):.2f}±{np.std(exp):.2f}' for r, exp in exp_i_scrt_text.items()]))
print(' Extrapolation:', '; '.join([f'repetition={r}, avg_exposure={np.mean(exp):.2f}±{np.std(exp):.2f}' for r, exp in exp_e_scrt_text.items()]))
print('Random secret')
print(' Interpolation:', '; '.join([f'repetition={r}, avg_exposure={np.mean(exp):.2f}±{np.std(exp):.2f}' for r, exp in exp_i_scrt_rand.items()]))
print(' Extrapolation:', '; '.join([f'repetition={r}, avg_exposure={np.mean(exp):.2f}±{np.std(exp):.2f}' for r, exp in exp_e_scrt_rand.items()]))
Explanation: Secret Sharer Evaluation
Similar to perplexity in a language model, here we will use the cross-entropy loss of our image classification model to measure how confident the model is on an example.
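For reference, the exposure numbers reported below follow the original Secret Sharer idea: rank the loss of each inserted secret against the losses of the reference secrets, so that roughly
$$ exposure(s) \approx \log_2 |R| - \log_2 rank(s), $$
where $R$ is the reference set; a higher exposure means the inserted secret is unusually well-fit compared to the references, i.e. stronger evidence of memorization. The interpolation and extrapolation helpers used below are two ways of estimating this quantity from the empirical loss distribution of the references.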
End of explanation |
3,094 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: TF-Hub로 Kaggle 문제를 해결하는 방법
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: 이 튜토리얼에서는 Kaggle의 데이터세트를 사용하기 때문에 Kaggle 계정에 대한 API 토큰을 만들고 Colab 환경에 토큰을 업로드해야 합니다.
Step3: 시작하기
데이터
Kaggle의 영화 리뷰에 대한 감정 분석 작업을 해결해 보려고 합니다. 데이터세트는 Rotten Tomatoes 영화 리뷰의 구문론적 하위 문구로 구성됩니다. 여기서 해야 할 작업은 문구에 1에서 5까지의 척도로 부정적 또는 긍정적 레이블을 지정하는 것입니다.
API를 사용하여 데이터를 다운로드하려면 먼저 경쟁 규칙을 수락해야 합니다.
Step4: 참고
Step5: 모델 훈련하기
참고
Step6: 예측
검증 세트 및 훈련 세트에 대한 예측을 실행합니다.
Step7: 혼동 행렬
특히 다중 클래스 문제에 대한 또 다른 매우 흥미로운 통계는 혼동 행렬입니다. 혼동 행렬을 사용하면 레이블이 올바르게 지정된 예와 그렇지 않은 예의 비율을 시각화할 수 있습니다. 분류자의 편향된 정도와 레이블 분포가 적절한지 여부를 쉽게 확인할 수 있습니다. 예측값의 가장 큰 부분이 대각선을 따라 분포되는 것이 이상적입니다. | Python Code:
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: Copyright 2018 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
!pip install -q kaggle
import tensorflow as tf
import tensorflow_hub as hub
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import zipfile
from sklearn import model_selection
Explanation: How to solve a problem on Kaggle with TF-Hub
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/hub/tutorials/text_classification_with_tf_hub_on_kaggle"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org에서 보기</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/hub/tutorials/text_classification_with_tf_hub_on_kaggle.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab에서 실행</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/hub/tutorials/text_classification_with_tf_hub_on_kaggle.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"> GitHub에서 소스 보기</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/hub/tutorials/text_classification_with_tf_hub_on_kaggle.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">노트북 다운로드</a></td>
<td><a href="https://tfhub.dev/google/nnlm-en-dim128/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png">TF Hub 모델보기</a></td>
</table>
TF-Hub is a platform to share machine learning expertise packaged in reusable resources, notably pre-trained modules. In this tutorial, we will use a TF-Hub text embedding module to train a simple sentiment classifier with a reasonable baseline accuracy. We will then submit the predictions to Kaggle.
For a more detailed tutorial on text classification with TF-Hub and further steps for improving the accuracy, take a look at Text classification with TF-Hub.
Setup
End of explanation
import os
import pathlib
# Upload the API token.
def get_kaggle():
try:
import kaggle
return kaggle
except OSError:
pass
token_file = pathlib.Path("~/.kaggle/kaggle.json").expanduser()
token_file.parent.mkdir(exist_ok=True, parents=True)
try:
from google.colab import files
except ImportError:
raise ValueError("Could not find kaggle token.")
uploaded = files.upload()
token_content = uploaded.get('kaggle.json', None)
if token_content:
token_file.write_bytes(token_content)
token_file.chmod(0o600)
else:
raise ValueError('Need a file named "kaggle.json"')
import kaggle
return kaggle
kaggle = get_kaggle()
Explanation: Since this tutorial uses a dataset from Kaggle, it requires creating an API token for your Kaggle account and uploading it to the Colab environment.
End of explanation
SENTIMENT_LABELS = [
"negative", "somewhat negative", "neutral", "somewhat positive", "positive"
]
# Add a column with readable values representing the sentiment.
def add_readable_labels_column(df, sentiment_value_column):
df["SentimentLabel"] = df[sentiment_value_column].replace(
range(5), SENTIMENT_LABELS)
# Download data from Kaggle and create a DataFrame.
def load_data_from_zip(path):
with zipfile.ZipFile(path, "r") as zip_ref:
name = zip_ref.namelist()[0]
with zip_ref.open(name) as zf:
return pd.read_csv(zf, sep="\t", index_col=0)
# The data does not come with a validation set so we'll create one from the
# training set.
def get_data(competition, train_file, test_file, validation_set_ratio=0.1):
data_path = pathlib.Path("data")
kaggle.api.competition_download_files(competition, data_path)
competition_path = (data_path/competition)
competition_path.mkdir(exist_ok=True, parents=True)
competition_zip_path = competition_path.with_suffix(".zip")
with zipfile.ZipFile(competition_zip_path, "r") as zip_ref:
zip_ref.extractall(competition_path)
train_df = load_data_from_zip(competition_path/train_file)
test_df = load_data_from_zip(competition_path/test_file)
# Add a human readable label.
add_readable_labels_column(train_df, "Sentiment")
# We split by sentence ids, because we don't want to have phrases belonging
# to the same sentence in both training and validation set.
train_indices, validation_indices = model_selection.train_test_split(
np.unique(train_df["SentenceId"]),
test_size=validation_set_ratio,
random_state=0)
validation_df = train_df[train_df["SentenceId"].isin(validation_indices)]
train_df = train_df[train_df["SentenceId"].isin(train_indices)]
print("Split the training data into %d training and %d validation examples." %
(len(train_df), len(validation_df)))
return train_df, validation_df, test_df
train_df, validation_df, test_df = get_data(
"sentiment-analysis-on-movie-reviews",
"train.tsv.zip", "test.tsv.zip")
Explanation: Getting started
Data
We will try to solve the sentiment analysis on movie reviews task from Kaggle. The dataset consists of syntactic sub-phrases of Rotten Tomatoes movie reviews. The task is to label the phrases as negative or positive on a scale from 1 to 5.
In order to download the data using the API, you first have to accept the competition rules.
End of explanation
train_df.head(20)
Explanation: Note: The task given in this competition is to rate individual phrases within a review rather than the review as a whole. This is a much harder task.
End of explanation
class MyModel(tf.keras.Model):
def __init__(self, hub_url):
super().__init__()
self.hub_url = hub_url
self.embed = hub.load(self.hub_url).signatures['default']
self.sequential = tf.keras.Sequential([
tf.keras.layers.Dense(500),
tf.keras.layers.Dense(100),
tf.keras.layers.Dense(5),
])
def call(self, inputs):
phrases = inputs['Phrase'][:,0]
embedding = 5*self.embed(phrases)['default']
return self.sequential(embedding)
def get_config(self):
return {"hub_url":self.hub_url}
model = MyModel("https://tfhub.dev/google/nnlm-en-dim128/1")
model.compile(
loss = tf.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.optimizers.Adam(),
metrics = [tf.keras.metrics.SparseCategoricalAccuracy(name="accuracy")])
history = model.fit(x=dict(train_df), y=train_df['Sentiment'],
validation_data=(dict(validation_df), validation_df['Sentiment']),
epochs = 25)
Explanation: Train the model
Note: We could also model this task as a regression (see Text classification with TF-Hub).
End of explanation
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
train_eval_result = model.evaluate(dict(train_df), train_df['Sentiment'])
validation_eval_result = model.evaluate(dict(validation_df), validation_df['Sentiment'])
print(f"Training set accuracy: {train_eval_result[1]}")
print(f"Validation set accuracy: {validation_eval_result[1]}")
Explanation: Prediction
Run predictions for the validation set and the training set.
End of explanation
predictions = model.predict(dict(validation_df))
predictions = tf.argmax(predictions, axis=-1)
predictions
cm = tf.math.confusion_matrix(validation_df['Sentiment'], predictions)
cm = cm/cm.numpy().sum(axis=1)[:, tf.newaxis]
sns.heatmap(
cm, annot=True,
xticklabels=SENTIMENT_LABELS,
yticklabels=SENTIMENT_LABELS)
plt.xlabel("Predicted")
plt.ylabel("True")
Explanation: Confusion matrix
Another very interesting statistic, especially for multiclass problems, is the confusion matrix. The confusion matrix allows us to visualize the proportion of correctly and incorrectly labelled examples. It makes it easy to see how biased the classifier is and whether the label distribution is reasonable. Ideally, the largest fraction of the predictions should be distributed along the diagonal.
End of explanation |
3,095 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Developmental file for modifying the 1D advection solver to work for multiple wave equations
Step1: Prototype implementation of LF flux for multiple-u's | Python Code:
import os
import sys
sys.path.insert(0, os.path.abspath('../../'))
import numpy as np
from matplotlib import pyplot as plt
import arrayfire as af
from dg_maxwell import params
from dg_maxwell import lagrange
from dg_maxwell import wave_equation as w1d
from dg_maxwell import utils
af.set_backend('opencl')
af.set_device(1)
af.info()
plt.rcParams['figure.figsize'] = 12, 7.5
plt.rcParams['lines.linewidth'] = 1.5
plt.rcParams['font.family'] = 'serif'
plt.rcParams['font.weight'] = 'bold'
plt.rcParams['font.size'] = 20
plt.rcParams['font.sans-serif'] = 'serif'
plt.rcParams['text.usetex'] = True
plt.rcParams['axes.linewidth'] = 1.5
plt.rcParams['axes.titlesize'] = 'medium'
plt.rcParams['axes.labelsize'] = 'medium'
plt.rcParams['xtick.major.size'] = 8
plt.rcParams['xtick.minor.size'] = 4
plt.rcParams['xtick.major.pad'] = 8
plt.rcParams['xtick.minor.pad'] = 8
plt.rcParams['xtick.color'] = 'k'
plt.rcParams['xtick.labelsize'] = 'medium'
plt.rcParams['xtick.direction'] = 'in'
plt.rcParams['ytick.major.size'] = 8
plt.rcParams['ytick.minor.size'] = 4
plt.rcParams['ytick.major.pad'] = 8
plt.rcParams['ytick.minor.pad'] = 8
plt.rcParams['ytick.color'] = 'k'
plt.rcParams['ytick.labelsize'] = 'medium'
plt.rcParams['ytick.direction'] = 'in'
plt.rcParams['text.usetex'] = True
plt.rcParams['text.latex.unicode'] = True
# 1. Set the initial conditions
E_00 = 1.
E_01 = 1.
B_00 = 0.2
B_01 = 0.5
E_z_init = E_00 * af.sin(2 * np.pi * params.element_LGL) \
+ E_01 * af.cos(2 * np.pi * params.element_LGL)
B_y_init = B_00 * af.sin(2 * np.pi * params.element_LGL) \
+ B_01 * af.cos(2 * np.pi * params.element_LGL)
u_init = af.constant(0., d0 = params.N_LGL, d1 = params.N_Elements, d2 = 2)
u_init[:, :, 0] = E_z_init
u_init[:, :, 1] = B_y_init
element_LGL_flat = af.flat(params.element_LGL)
E_z_init_flat = af.flat(u_init[:, :, 0])
B_y_init_flat = af.flat(u_init[:, :, 1])
plt.plot(element_LGL_flat, E_z_init_flat, label = r'$E_z$')
plt.plot(element_LGL_flat, B_y_init_flat, label = r'$B_y$')
plt.title(r'Plot of $E_z(t = 0)$ and $B_y(t = 0)$')
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.legend(prop={'size': 14})
plt.show()
Explanation: Developmental file for modifying the 1D advection solver to work for multiple wave equations
End of explanation
# Older LF flux code
u_n = u_init[:, :, :]
u_iplus1_0 = af.shift(u_n[0, :], 0, -1)
u_i_N_LGL = u_n[-1, :]
flux_iplus1_0 = w1d.flux_x(u_iplus1_0)
flux_i_N_LGL = w1d.flux_x(u_i_N_LGL)
boundary_flux = (flux_iplus1_0 + flux_i_N_LGL) / 2 \
- params.c_lax * (u_iplus1_0 - u_i_N_LGL) / 2
print(boundary_flux)
Explanation: Prototype implementation of LF flux for multiple-u's
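The boundary flux assembled above is the standard Lax-Friedrichs flux, written here in the notation of the code (with $c_{lax}$ being params.c_lax, $u^{L}$ the value at the last LGL point of element $i$ and $u^{R}$ the value at the first LGL point of element $i+1$):
$$ f^{*} = \frac{f(u^{R}) + f(u^{L})}{2} - \frac{c_{lax}}{2}\left(u^{R} - u^{L}\right) $$
Because the arrays carry a third dimension for the two fields ($E_z$, $B_y$), the same expression is applied to both wave equations at once.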
End of explanation |
3,096 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tabular data
Step1: Starting from reading this dataset, to answering questions about this data in a few lines of code
Step2: How does the survival rate of the passengers differ between sexes?
Step3: Or how does it differ between the different classes?
Step4: Are young people more likely to survive?
Step5: All the needed functionality for the above examples will be explained throughout this tutorial.
Data structures
Pandas provides two fundamental data objects, for 1D (Series) and 2D data (DataFrame).
Series
A Series is a basic holder for one-dimensional labeled data. It can be created much as a NumPy array is created
Step6: Attributes of a Series
Step7: You can access the underlying numpy array representation with the .values attribute
Step8: We can access series values via the index, just like for NumPy arrays
Step9: Unlike the NumPy array, though, this index can be something other than integers
Step10: In this way, a Series object can be thought of as similar to an ordered dictionary mapping one typed value to another typed value.
In fact, it's possible to construct a series directly from a Python dictionary
Step11: We can index the populations like a dict as expected
Step12: but with the power of numpy arrays
Step13: DataFrames
Step14: Attributes of the DataFrame
A DataFrame has besides a index attribute, also a columns attribute
Step15: To check the data types of the different columns
Step16: An overview of that information can be given with the info() method
Step17: Also a DataFrame has a values attribute, but attention
Step18: If we don't like what the index looks like, we can reset it and set one of our columns
Step19: To access a Series representing a column in the data, use typical indexing syntax
Step20: Basic operations on Series/Dataframes
As you play around with DataFrames, you'll notice that many operations which work on NumPy arrays will also work on dataframes.
Step21: Elementwise-operations (like numpy)
Just like with numpy arrays, many operations are element-wise
Step22: Alignment! (unlike numpy)
Only, pay attention to alignment
Step23: Reductions (like numpy)
The average population number
Step24: The minimum area
Step25: For dataframes, often only the numeric columns are included in the result
Step26: <div class="alert alert-success">
<b>EXERCISE</b>
Step27: <div class="alert alert-success">
<b>EXERCISE</b>
Step28: Some other useful methods
Sorting the rows of the DataFrame according to the values in a column
Step29: One useful method to use is the describe method, which computes summary statistics for each column
Step30: The plot method can be used to quickly visualize the data in different ways
Step31: However, for this dataset, it does not say that much
Step32: You can play with the kind keyword | Python Code:
df = pd.read_csv("data/titanic.csv")
df.head()
Explanation: Tabular data
End of explanation
df['Age'].hist()
Explanation: Starting from reading this dataset, to answering questions about this data in a few lines of code:
What is the age distribution of the passengers?
End of explanation
df.groupby('Sex')[['Survived']].aggregate(lambda x: x.sum() / len(x))
Explanation: How does the survival rate of the passengers differ between sexes?
End of explanation
df.groupby('Pclass')['Survived'].aggregate(lambda x: x.sum() / len(x)).plot(kind='bar')
Explanation: Or how does it differ between the different classes?
End of explanation
df['Survived'].sum() / df['Survived'].count()
df25 = df[df['Age'] <= 25]
df25['Survived'].sum() / len(df25['Survived'])
Explanation: Are young people more likely to survive?
End of explanation
s = pd.Series([0.1, 0.2, 0.3, 0.4])
s
Explanation: All the needed functionality for the above examples will be explained throughout this tutorial.
Data structures
Pandas provides two fundamental data objects, for 1D (Series) and 2D data (DataFrame).
Series
A Series is a basic holder for one-dimensional labeled data. It can be created much as a NumPy array is created:
End of explanation
s.index
Explanation: Attributes of a Series: index and values
The series has a built-in concept of an index, which by default is the numbers 0 through N - 1
End of explanation
s.values
Explanation: You can access the underlying numpy array representation with the .values attribute:
End of explanation
s[0]
Explanation: We can access series values via the index, just like for NumPy arrays:
End of explanation
s2 = pd.Series(np.arange(4), index=['a', 'b', 'c', 'd'])
s2
s2['c']
Explanation: Unlike the NumPy array, though, this index can be something other than integers:
End of explanation
pop_dict = {'Germany': 81.3,
'Belgium': 11.3,
'France': 64.3,
'United Kingdom': 64.9,
'Netherlands': 16.9}
population = pd.Series(pop_dict)
population
Explanation: In this way, a Series object can be thought of as similar to an ordered dictionary mapping one typed value to another typed value.
In fact, it's possible to construct a series directly from a Python dictionary:
End of explanation
population['France']
Explanation: We can index the populations like a dict as expected:
End of explanation
population * 1000
Explanation: but with the power of numpy arrays:
End of explanation
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
countries = pd.DataFrame(data)
countries
Explanation: DataFrames: Multi-dimensional Data
A DataFrame is a tabular data structure (a multi-dimensional object to hold labeled data) comprised of rows and columns, akin to a spreadsheet, database table, or R's data.frame object. You can think of it as multiple Series objects which share the same index.
<img src="img/dataframe.png" width=110%>
One of the most common ways of creating a dataframe is from a dictionary of arrays or lists.
Note that in the IPython notebook, the dataframe will display in a rich HTML view:
End of explanation
countries.index
countries.columns
Explanation: Attributes of the DataFrame
Besides an index attribute, a DataFrame also has a columns attribute:
End of explanation
countries.dtypes
Explanation: To check the data types of the different columns:
End of explanation
countries.info()
Explanation: An overview of that information can be given with the info() method:
End of explanation
countries.values
Explanation: Also a DataFrame has a values attribute, but attention: when you have heterogeneous data, all values will be upcasted:
End of explanation
countries = countries.set_index('country')
countries
Explanation: If we don't like what the index looks like, we can reset it and set one of our columns:
End of explanation
countries['area']
Explanation: To access a Series representing a column in the data, use typical indexing syntax:
End of explanation
# redefining the example objects
population = pd.Series({'Germany': 81.3, 'Belgium': 11.3, 'France': 64.3,
'United Kingdom': 64.9, 'Netherlands': 16.9})
countries = pd.DataFrame({'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']})
Explanation: Basic operations on Series/Dataframes
As you play around with DataFrames, you'll notice that many operations which work on NumPy arrays will also work on dataframes.
End of explanation
population / 100
countries['population'] / countries['area']
Explanation: Elementwise-operations (like numpy)
Just like with numpy arrays, many operations are element-wise:
End of explanation
s1 = population[['Belgium', 'France']]
s2 = population[['France', 'Germany']]
s1
s2
s1 + s2
Explanation: Alignment! (unlike numpy)
Only, pay attention to alignment: operations between series will align on the index:
End of explanation
population.mean()
Explanation: Reductions (like numpy)
The average population number:
End of explanation
countries['area'].min()
Explanation: The minimum area:
End of explanation
countries.median()
Explanation: For dataframes, often only the numeric columns are included in the result:
End of explanation
population / population['Belgium'].mean()
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Calculate the population numbers relative to Belgium
</div>
End of explanation
countries['population']*1000000 / countries['area']
countries['density'] = countries['population']*1000000 / countries['area']
countries
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: Calculate the population density for each country and add this as a new column to the dataframe.
</div>
End of explanation
countries.sort_values('density', ascending=False)
Explanation: Some other useful methods
Sorting the rows of the DataFrame according to the values in a column:
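sort_values also accepts a list of columns (and a matching list of directions) when a single key is not enough, for example:
countries.sort_values(['population', 'area'], ascending=[False, True])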
End of explanation
countries.describe()
Explanation: One useful method to use is the describe method, which computes summary statistics for each column:
End of explanation
countries.plot()
Explanation: The plot method can be used to quickly visualize the data in different ways:
End of explanation
countries['population'].plot(kind='bar')
Explanation: However, for this dataset, it does not say that much:
End of explanation
pd.read_csv       # tab-complete `pd.read<TAB>` in the notebook to list all available readers
countries.to_csv  # tab-complete `countries.to<TAB>` to list all available writers
Explanation: You can play with the kind keyword: 'line', 'bar', 'hist', 'density', 'area', 'pie', 'scatter', 'hexbin'
Importing and exporting data
A wide range of input/output formats are natively supported by pandas:
CSV, text
SQL database
Excel
HDF5
json
html
pickle
...
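For example, a minimal CSV round trip looks like the following (the file name is just an illustrative placeholder):
countries.to_csv('countries.csv')
pd.read_csv('countries.csv', index_col=0).head()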
End of explanation |
3,097 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Riemann interactive
In this notebook, we show interactive solutions of two Riemann problems for the shallow water equations and acoustics. The user can interactively modify the phase planes and x-t planes and see the corresponding solutions. The code to produce these apps uses the mpld3 library, and it can be found in the clawpack folder riemann/riemann_interactive.py.
First we need to load the mpld3 and numpy libraries as well as the riemann_interactive code.
Step1: Shallow water equations
In this section we show the interactive Riemann solution from the exact solver for the shallow water equations. We first define the initial left and right states, the $g$ parameter and the plot range, then we call our Riemann interactive plotting function and display it. The commented line "mpld3.save_html" can be uncommented to save the output as HTML. In the app, feel free to drag and drop the $q_l$ and $q_r$ states in the phase plane. The time can also be adjusted by dragging up and down the black dot in the horizontal time bar in the $x-t$ plane.
Step2: Acoustic equations
Here we show the exact solution for the Riemann problem of linear acoustics. We again determine the initial left and right states, and we provide the eigenvectors ($r_1,r_2$) and eigenvalues ($\lambda_1,\lambda_2$) of the solution. It is written in this way, so one can easily input the eigenvalues of any other two dimensional linear Riemann problem. Additional optional input can be added to adjust the plotted window. In this case, the time is fixed in "plotopts" and only the left and right states ($q_l,q_r$) can be moved interactively. | Python Code:
import mpld3
import numpy as np
from clawpack.riemann import riemann_interactive
Explanation: Riemann interactive
In this notebook, we show interactive solutions of two Riemann problems for the shallow water equations and acoustics. The user can interactively modify the phase planes and x-t planes and see the corresponding solutions. The code to produce these apps uses the mpld3 library, and it can be found in the clawpack folder riemann/riemann_interactive.py.
First we need to load the mpld3 and numpy libraries as well as the riemann_interactive code.
End of explanation
# Define left and right state (h,hu)
ql = np.array([3.0, 5.0])
qr = np.array([3.0, -5.0])
# Define optional parameters (otherwise default values are chosen)
plotopts = {'g':1.0, 'time':2.0, 'tmax':5, 'hmax':10, 'humin':-15, 'humax':15}
# Call interactive function (can be called without any argument)
pt = riemann_interactive.shallow_water(ql,qr,**plotopts)
#mpld3.save_html(pt,"test2.html")
mpld3.display()
Explanation: Shallow water equations
In this section we show the interactive Riemann solution from the exact solver for the shallow water equations. We first define the initial left and right states, the $g$ parameter and the plot range, then we call our Riemann interactive plotting function and display it. The commented line "mpld3.save_html" can be uncommented to save the output as HTML. In the app, feel free to drag and drop the $q_l$ and $q_r$ states in the phase plane. The time can also be adjusted by dragging up and down the black dot in the horizontal time bar in the $x-t$ plane.
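As a point of reference (a standard property of the shallow water system rather than something the app computes in front of you): the characteristic speeds that set the wave slopes in the $x-t$ plane are $\lambda_{1,2} = u \mp \sqrt{gh}$ with $u = hu/h$, so dragging $q_l$ or $q_r$ changes both the intermediate state and the wave speeds.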
End of explanation
# Define left and right state
ql = np.array([-2.0, 2.0])
qr = np.array([0.0, -3.0])
# Define two eigenvectors and eigenvalues (acoustics)
zz = 2.0
rho0 = 1.0
r1 = np.array([zz,1.0])
r2 = np.array([-zz,1.0])
lam1 = zz/rho0
lam2 = -zz/rho0
plotopts={'q1min':-5, 'q1max':5, 'q2min':-5, 'q2max':5, 'domain':5, 'time':1,
'title1':"Pressure", 'title2':"Velocity"}
riemann_interactive.linear_phase_plane(ql,qr,r1,r2,lam1,lam2,**plotopts)
mpld3.display()
Explanation: Acoustic equations
Here we show the exact solution for the Riemann problem of linear acoustics. We again determine the initial left and right states, and we provide the eigenvectors ($r_1,r_2$) and eigenvalues ($\lambda_1,\lambda_2$) of the solution. It is written in this way, so one can easily input the eigenvalues of any other two dimensional linear Riemann problem. Additional optional input can be added to adjust the plotted window. In this case, the time is fixed in "plotopts" and only the left and right states ($q_l,q_r$) can be moved interactively.
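For background (the standard construction for a linear hyperbolic system, stated here as an assumption about what the app displays rather than a description of its internals): the jump is decomposed as $q_r - q_l = \alpha_1 r_1 + \alpha_2 r_2$, the middle state is $q_m = q_l + \alpha_1 r_1$, and each wave travels at its eigenvalue $\lambda_p$.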
End of explanation |
3,098 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Since we announced our collaboration with the World Bank and more partners to create the Open Traffic platform, we’ve been busy. We’ve shared two technical previews of the OSMLR linear referencing system. Now we’re ready to share more about how we’re using Mapzen Map Matching to “snap” GPS-derived locations to OSMLR segments, and how we’re using a data-driven approach to evaluate and improve the algorithms.
A "data-driven" approach to improving map-matching - Part I
Step1: User vars
Step2: 1. Generate Routes
The first step in route generation is picking a test region, which for us was San Francisco. Routes are defined as a set of start and stop coordinates, which we obtain by randomly sampling venues from Mapzen’s Who’s on First gazetteer for the specified city. Additionally, we want to limit our route distances to be between ½ km and 1 km because this is the localized scale at which map matching actually takes place.
In this example, we specify 200 fake routes
Step3: a) Get random start and end coordinates
Step4: A sample route
Step5: b) Get the route shapes and attributes
For each route, we then pass the start and end coordinates to the Turn-By-Turn API to obtain the coordinates of the road segments along the route
Step6: The Turn-By-Turn API returns the shape of the route as an encoded polyline
Step7: The route shape then gets passed to the map matching service in order to obtain the coordinates and attributes of the road segments (i.e. edges) that lie along the original route
Step8: We can inspect the attributes returned for our example route
Step9: 2. Iterate Through Routes, Generate Fake GPS, and Score the Matches
a) Define the noise levels and sample rates for synthetic GPS
Now that we have a set of "ground-truthed" routes, meaning route coordinates and their associated road segments, we want to simulate what the GPS data recorded along these routes might look like. This involves two main steps
Step10: b) Defining a validation metric
In order to validate our matched routes, we need an effective method of scoring. Should Type I errors (false positives) carry as much weight as Type II errors (false negatives)? Should a longer mismatched segment carry a greater penalty than one that is shorter?
We adopt the scoring mechanism proposed by Newson and Krumm (2009), upon whose map-matching algorithm the Meili service is based
Step11: Iterating through each of our 200 routes at 5 sample rates and 21 levels of noise, we simulate 21,000 distinct GPS traces. We then pass each of these simulated traces through to the map-matching service, and compare the resulting routes against the ground-truthed, pre-perturbation route segments.
<center><i>The same route is perturbed with varying levels of gaussian noise (red dots) with standard deviations ranging from 0 to 100 m. The resulting matched routes are shown as red lines, which only deviate from the true route at higher levels of noise.</i></center>
The previous step will typically take a long time to run if you are generating a lot of routes (> 10), so it's a good idea to save your dataframes for later use.
Step12: d) Check for Pattern Failure
Ensure that the Reporter is not failing frequently for any particular sample rate or noise level
Step13: 3. Visualize the Scores
The graphs below represent the median scores for 6 error metrics applied to each of our 21,000 routes, broken down by sample rate and noise level. Plots in the left column are based solely on error rate, i.e. the percentage of Type I, Type II, or Type I/II mismatches. The right-hand column shows the same metrics as the left, but weighted by segment length. The top right plot thus represents the metric used by Newson and Krumm, and the two plots below it represent the same value broken out by error type. | Python Code:
from __future__ import division
from matplotlib import pyplot as plt
from matplotlib import cm, colors, patheffects
import numpy as np
import os
import glob
import urllib
import json
import pandas as pd
from random import shuffle, choice
import pickle
import sys; sys.path.insert(0, os.path.abspath('..'));
import validator.validator as val
%matplotlib inline
Explanation: Since we announced our collaboration with the World Bank and more partners to create the Open Traffic platform, we’ve been busy. We’ve shared two technical previews of the OSMLR linear referencing system. Now we’re ready to share more about how we’re using Mapzen Map Matching to “snap” GPS-derived locations to OSMLR segments, and how we’re using a data-driven approach to evaluate and improve the algorithms.
A "data-driven" approach to improving map-matching - Part I:
VALIDATION
============================================================================================
Mapzen has been testing and matching GPS measurements from some of Open Traffic's partners since development began, but one burning question remained: were our matches any good? Map-matching real-time GPS traces is one thing, but without on-the-ground knowledge about where the traces actually came from, it was impossible to determine how close to — or far from — the truth our predictions were.
Our in-house solution was to use Mapzen's very own Turn-By-Turn routing API to simulate fake GPS data, send the synthetic data through the Mapzen Map Matching service, and compare the results to the original routes used to simulate the fake traces. We have documented this process below:
0. Setup test environment
End of explanation
mapzenKey = os.environ.get('MAPZEN_API')
gmapsKey = os.environ.get('GOOGLE_MAPS')
Explanation: User vars
End of explanation
cityName = 'San Francisco'
minRouteLen = 0.5 # specified in km
maxRouteLen = 1 # specified in km
numRoutes = 200
Explanation: 1. Generate Routes
The first step in route generation is picking a test region, which for us was San Francisco. Routes are defined as a set of start and stop coordinates, which we obtain by randomly sampling venues from Mapzen’s Who’s on First gazetteer for the specified city. Additionally, we want to limit our route distances to be between ½ km and 1 km because this is the localized scale at which map matching actually takes place.
In this example, we specify 200 fake routes:
End of explanation
# Using Mapzen Venues (requires good Who's on First coverage)
routeList = val.get_routes_by_length(cityName, minRouteLen, maxRouteLen, numRoutes, apiKey=mapzenKey)
## Using Google Maps POIs (better for non-Western capitals):
# routeList = val.get_POI_routes_by_length(cityName, minRouteLen, maxRouteLen, numRoutes, gmapsKey)
Explanation: a) Get random start and end coordinates
End of explanation
myRoute = routeList[2]
myRoute
Explanation: A sample route:
End of explanation
shape, routeUrl = val.get_route_shape(myRoute)
Explanation: b) Get the route shapes and attributes
For each route, we then pass the start and end coordinates to the Turn-By-Turn API to obtain the coordinates of the road segments along the route:
End of explanation
shape
Explanation: The Turn-By-Turn API returns the shape of the route as an encoded polyline:
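If you want to peek at the raw coordinates yourself, a generic decoder for this encoding is sketched below. It is not part of the validator module, and note the precision assumption: Valhalla/Mapzen shapes typically use six decimal digits, whereas Google-style polylines use five (pass precision=1e5 in that case).
def decode_polyline(encoded, precision=1e6):
    # Decode an encoded polyline string into a list of (lat, lon) pairs.
    coords, lat, lon, i = [], 0, 0, 0
    while i < len(encoded):
        for is_lon in (False, True):
            shift, result = 0, 0
            while True:
                b = ord(encoded[i]) - 63
                i += 1
                result |= (b & 0x1f) << shift
                shift += 5
                if b < 0x20:
                    break
            delta = ~(result >> 1) if (result & 1) else (result >> 1)
            if is_lon:
                lon += delta
            else:
                lat += delta
        coords.append((lat / precision, lon / precision))
    return coords
# e.g. decode_polyline(shape)[:3] to inspect the first few vertices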
End of explanation
edges, matchedPts, shapeCoords, _ = val.get_trace_attrs(shape)
edges = val.get_coords_per_second(shapeCoords, edges, '2768')
Explanation: The route shape then gets passed to the map matching service in order to obtain the coordinates and attributes of the road segments (i.e. edges) that lie along the original route:
End of explanation
val.format_edge_df(edges).head()
Explanation: We can inspect the attributes returned for our example route:
End of explanation
sampleRates = [1, 5, 10, 20, 30] # specified in seconds
noiseLevels = np.linspace(0, 100, 21) # specified in meters
Explanation: 2. Iterate Through Routes, Generate Fake GPS, and Score the Matches
a) Define the noise levels and sample rates for synthetic GPS
Now that we have a set of "ground-truthed" routes, meaning route coordinates and their associated road segments, we want to simulate what the GPS data recorded along these routes might look like. This involves two main steps:
1. resampling the coordinates to reflect real-world GPS sample frequencies
2. perturbing the coordinates to simulate the effect of GPS "noise"
To resample the coordinates, we use the known speeds along each road segment to retain points along the route after every $n$ seconds.
To simulate GPS noise, we randomly sample from a normal distribution with standard deviation equal to a specified level of noise (in meters). We then apply this vector of noise to a given coordinate pair, and use a rolling average to smooth out the change in noise between subsequent "measurements", recreating the phenomenon of GPS "drift".
<center><i>A route (blue line) is resampled at 5 second intervals (blue dots). The resampled points are then perturbed with noise sampled from a gaussian distribution with mean 0 and standard deviation of 60. The resulting “measurements” (red dots) represent a simulated GPS trace.</i></center>
Since we are interested in assessing the performance of map-matching under a variety of different conditions, we define a range of realistic sample rates and noise levels:
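As a rough illustration of the perturbation step (a simplified sketch under our own assumptions, not the implementation inside the validator module): noise is drawn in meters, smoothed with a rolling average so consecutive measurements drift rather than jump, and converted to approximate degrees before being added to the coordinates.
def perturb(latlons, sigma_m, window=3):
    # latlons: (N, 2) array of [lat, lon] rows; sigma_m: noise standard deviation in meters
    latlons = np.asarray(latlons, dtype=float)
    noise_m = np.random.normal(0, sigma_m, size=latlons.shape)
    kernel = np.ones(window) / window
    smoothed = np.column_stack([np.convolve(noise_m[:, j], kernel, mode='same') for j in range(2)])
    lat0 = np.radians(latlons[:, 0].mean())
    deg = np.column_stack([smoothed[:, 0] / 111320.0,
                           smoothed[:, 1] / (111320.0 * np.cos(lat0))])
    return latlons + deg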
End of explanation
matchDf, _, _ = val.get_route_metrics(routeList, sampleRates, noiseLevels, saveResults=False)
Explanation: b) Defining a validation metric
In order to validate our matched routes, we need an effective method of scoring. Should Type I errors (false positives) carry as much weight as Type II errors (false negatives)? Should a longer mismatched segment carry a greater penalty than one that is shorter?
We adopt the scoring mechanism proposed by Newson and Krumm (2009), upon whose map-matching algorithm the Meili service is based:
<img src="krumm_newson_dist.png" alt="Drawing" style="width: 400px;" align="center"/>
<center><i>From Newson and Krumm (2009)</i></center>
In the above schematic, $\text{d}+$ refers to a false positive or Type I error, while $\text{d}-$ represents a false negative, or Type II error. The final reported error is a combination of both types of mismatches, weighted by their respective lengths.
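In equation form (our reading of the schematic, since the notebook itself only reports the final numbers): the length-weighted mismatch for a single route is $\text{error} = (d_+ + d_-)/d_0$, where $d_0$ is the total length of the correct route, so both error types contribute in proportion to how much road length they affect.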
c) Generate the scores
Behind the scenes, the get_route_metrics() function will perform the following actions:
1. resample points along a given route at each of the specified sample rates
2. apply Gaussian noise to each of the resampled points at each of the specified noise levels
3. pass these synthetic measurements to the Open Traffic Reporter and record the matched routes that are returned
4. compare the segments on the "matched" route to the segments of the original route and score the results
End of explanation
matchDf.to_csv('{0}_{1}_matches.csv'.format(cityName, str(numRoutes)), index=False)
Explanation: Iterating through each of our 200 routes at 5 sample rates and 21 levels of noise, we simulate 21,000 distinct GPS traces. We then pass each of these simulated traces through to the map-matching service, and compare the resulting routes against the ground-truthed, pre-perturbation route segments.
<center><i>The same route is perturbed with varying levels of gaussian noise (red dots) with standard deviations ranging from 0 to 100 m. The resulting matched routes are shown as red lines, which only deviate from the true route at higher levels of noise.</i></center>
The previous step will typically take a long time to run if you are generating a lot of routes (> 10), so it's a good idea to save your dataframes for later use.
End of explanation
val.plot_pattern_failure(matchDf, sampleRates, noiseLevels)
Explanation: d) Check for Pattern Failure
Ensure that the Reporter is not failing frequently for any particular sample rate or noise level
End of explanation
val.plot_distance_metrics(matchDf, sampleRates)
Explanation: 3. Visualize the Scores
The graphs below represent the median scores for 6 error metrics applied to each of our 21,000 routes, broken down by sample rate and noise level. Plots in the left column are based solely on error rate, i.e. the percentage of Type I, Type II, or Type I/II mismatches. The right-hand column shows the same metrics as the left, but weighted by segment length. The top right plot thus represents the metric used by Newson and Krumm, and the two plots below it represent the same value broken out by error type.
End of explanation |
3,099 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Adding new passbands to PHOEBE
In this tutorial we will show you how to add your own passband to PHOEBE. Adding a custom passband involves
Step1: If you plan on computing model atmosphere intensities (as opposed to only blackbody intensities), you will need to download atmosphere tables and unpack them into a local directory of your choice. Keep in mind that this will take a long time. Plan to go for lunch or leave it overnight. The good news is that this needs to be done only once. For the purpose of this document, we will use a local tables/ directory and assume that we are computing intensities for all available model atmospheres
Step2: Getting started
Let us start by importing phoebe, numpy and matplotlib
Step3: Passband transmission function
The passband transmission function is typically a user-provided two-column file. The first column is wavelength, and the second column is passband transmission. For the purposes of this tutorial, we will simulate the passband as a uniform box.
Step4: Let us plot this mock passband transmission function to see what it looks like
Step5: Let us now save these data in a file that we will use to register a new passband.
Step6: Registering a passband
The first step in introducing a new passband into PHOEBE is registering it with the system. We use the Passband class for that.
Step7: The first argument, ptf, is the passband transmission file we just created. Of course, you would provide an actual passband transmission function that comes from a respectable source rather than this silly tutorial.
The next two arguments, pbset and pbname, should be taken in unison. The way PHOEBE refers to passbands is a pbset
Step8: Since we have not computed any tables yet, the list is empty for now. Blackbody functions for computing the lookup tables are built into PHOEBE and you do not need any auxiliary files to generate them. The lookup tables are defined for effective temperatures between 300K and 500,000K. To compute the blackbody response, issue
Step9: Checking the content property again shows that the table has been successfully computed
Step10: We can now test-drive the blackbody lookup table we just created. For this we will use a low-level class method that computes normal emergent passband intensity, Inorm(). For the sake of simplicity, we will turn off limb darkening by setting ld_func to 'linear' and ld_coeffs to '[0.0]'
Step11: Let us now plot a range of temperatures, to make sure that normal emergent passband intensities do what they are supposed to do. While at it, let us compare what we get for the Johnson
Step12: This makes perfect sense
Step13: Note, of course, that you will need to change the path to point to the directory where you unpacked the ck2004 tables. The verbosity parameter verbose will report on the progress as computation is being done. Depending on your computer speed, this step will take up to a minute to complete. We can now check the passband's content attribute again
Step14: Let us now use the same low-level function as before to compare normal emergent passband intensity for our custom passband for blackbody and ck2004 model atmospheres. One other complication is that, unlike blackbody model that depends only on the temperature, the ck2004 model depends on surface gravity (log g) and heavy metal abundances as well, so we need to pass those arrays.
Step15: Quite a difference. That is why using model atmospheres is superior when accuracy is of importance. Next, we need to compute direction-dependent intensities for all our limb darkening and boosting needs. This is a step that takes a long time; depending on your computer speed, it can take a few minutes to complete.
Step16: This step will allow PHOEBE to compute all direction-dependent intensities on the fly, including the interpolation of the limb darkening coefficients that is model-independent. When limb darkening models are preferred (for example, when you don't quite trust direction-dependent intensities from the model atmosphere), we need to calculate two more tables
Step17: This completes the computation of Castelli & Kurucz auxiliary tables.
Computing PHOENIX response
PHOENIX is a 3-D model atmosphere code. Because of that, it is more complex and better behaved for cooler stars (down to ~2300K). The steps to compute PHOENIX intensity tables are analogous to the ones we used for ck2004; so we can do all of them in a single step
Step18: There is one extra step that we need to do for phoenix atmospheres
Step19: Now we can compare all three model atmospheres
Step20: We see that, as temperature increases, model atmosphere intensities can differ quite a bit. That explains why the choice of a model atmosphere is quite important and should be given proper consideration.
Importing Wilson-Devinney response
PHOEBE no longer shares any codebase with the WD code, but for comparison purposes it is sometimes useful to use the same atmosphere tables. If the passband you are registering with PHOEBE has been defined in WD's atmcof.dat and atmcofplanck.dat files, PHOEBE can import those coefficients and use them to compute intensities.
To import a set of WD atmospheric coefficients, you need to know the corresponding index of the passband (you can look it up in the WD user manual available at ftp
Step21: We can consult the content attribute to see the entire set of supported tables, and plot different atmosphere models for comparison purposes
Step22: Still an appreciable difference.
Saving the passband table
The final step of all this (computer's) hard work is to save the passband file so that these steps do not need to be ever repeated. From now on you will be able to load the passband file explicitly and PHOEBE will have full access to all of its tables. Your new passband will be identified as 'Custom | Python Code:
#!pip install -I "phoebe>=2.2,<2.3"
Explanation: Adding new passbands to PHOEBE
In this tutorial we will show you how to add your own passband to PHOEBE. Adding a custom passband involves:
downloading and setting up model atmosphere tables;
providing a passband transmission function;
defining and registering passband parameters;
computing blackbody response for the passband;
[optional] computing Castelli & Kurucz (2004) passband tables;
[optional] computing Husser et al. (2013) PHOENIX passband tables;
[optional] if the passband is one of the passbands included in the Wilson-Devinney code, importing the WD response; and
saving the generated passband file.
<!-- * \[optional\] computing Werner et al. (2012) TMAP passband tables; -->
Let's first make sure we have the correct version of PHOEBE installed. Uncomment the following line if running in an online notebook session such as colab.
End of explanation
import phoebe
from phoebe import u
# Register a passband:
pb = phoebe.atmospheres.passbands.Passband(
ptf='my_passband.ptf',
pbset='Custom',
pbname='mypb',
effwl=330,
wlunits=u.nm,
calibrated=True,
reference='A completely made-up passband published in Nowhere (2017)',
version=1.0,
comments='This is my first custom passband'
)
# Blackbody response:
pb.compute_blackbody_response()
# CK2004 response:
pb.compute_ck2004_response(path='tables/ck2004')
pb.compute_ck2004_intensities(path='tables/ck2004')
pb.compute_ck2004_ldcoeffs()
pb.compute_ck2004_ldints()
# PHOENIX response:
pb.compute_phoenix_response(path='tables/phoenix')
pb.compute_phoenix_intensities(path='tables/phoenix')
pb.compute_phoenix_ldcoeffs()
pb.compute_phoenix_ldints()
# Impute missing values from the PHOENIX model atmospheres:
pb.impute_atmosphere_grid(pb._phoenix_energy_grid)
pb.impute_atmosphere_grid(pb._phoenix_photon_grid)
pb.impute_atmosphere_grid(pb._phoenix_ld_energy_grid)
pb.impute_atmosphere_grid(pb._phoenix_ld_photon_grid)
pb.impute_atmosphere_grid(pb._phoenix_ldint_energy_grid)
pb.impute_atmosphere_grid(pb._phoenix_ldint_photon_grid)
for i in range(len(pb._phoenix_intensity_axes[3])):
pb.impute_atmosphere_grid(pb._phoenix_Imu_energy_grid[:,:,:,i,:])
pb.impute_atmosphere_grid(pb._phoenix_Imu_photon_grid[:,:,:,i,:])
# Wilson-Devinney response:
pb.import_wd_atmcof('atmcofplanck.dat', 'atmcof.dat', 22)
# Save the passband:
pb.save('my_passband.fits')
Explanation: If you plan on computing model atmosphere intensities (as opposed to only blackbody intensities), you will need to download atmosphere tables and unpack them into a local directory of your choice. Keep in mind that this will take a long time. Plan to go for lunch or leave it overnight. The good news is that this needs to be done only once. For the purpose of this document, we will use a local tables/ directory and assume that we are computing intensities for all available model atmospheres:
mkdir tables
cd tables
wget http://phoebe-project.org/static/atms/ck2004.tgz
wget http://phoebe-project.org/static/atms/phoenix.tgz
<!-- wget http://phoebe-project.org/static/atms/tmap.tgz -->
Once the data are downloaded, unpack the archives:
tar xvzf ck2004.tgz
tar xvzf phoenix.tgz
<!-- tar xvzf tmap.tgz -->
That should leave you with the following directory structure:
tables
|____ck2004
| |____TxxxxxGxxPxx.fits (3800 files)
|____phoenix
| |____ltexxxxx-x.xx-x.x.PHOENIX-ACES-AGSS-COND-SPECINT-2011.fits (7260 files)
I don't care about the details, just show/remind me how it's done
Makes sense, and we don't judge: you want to get to science. Provided that you have the passband transmission file available and the atmosphere tables already downloaded, the sequence that will generate/register a new passband is:
End of explanation
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger(clevel='WARNING')
Explanation: Getting started
Let us start by importing phoebe, numpy and matplotlib:
End of explanation
wl = np.linspace(300, 360, 61)
ptf = np.zeros(len(wl))
ptf[(wl>=320) & (wl<=340)] = 1.0
Explanation: Passband transmission function
The passband transmission function is typically a user-provided two-column file. The first column is wavelength, and the second column is passband transmission. For the purposes of this tutorial, we will simulate the passband as a uniform box.
End of explanation
plt.xlabel('Wavelength [nm]')
plt.ylabel('Passband transmission')
plt.plot(wl, ptf, 'b-')
plt.show()
Explanation: Let us plot this mock passband transmission function to see what it looks like:
End of explanation
np.savetxt('my_passband.ptf', np.vstack((wl, ptf)).T)
Explanation: Let us now save these data in a file that we will use to register a new passband.
End of explanation
pb = phoebe.atmospheres.passbands.Passband(
ptf='my_passband.ptf',
pbset='Custom',
pbname='mypb',
effwl=330.,
wlunits=u.nm,
calibrated=True,
reference='A completely made-up passband published in Nowhere (2017)',
version=1.0,
comments='This is my first custom passband')
Explanation: Registering a passband
The first step in introducing a new passband into PHOEBE is registering it with the system. We use the Passband class for that.
End of explanation
pb.content
Explanation: The first argument, ptf, is the passband transmission file we just created. Of course, you would provide an actual passband transmission function that comes from a respectable source rather than this silly tutorial.
The next two arguments, pbset and pbname, should be taken in unison. The way PHOEBE refers to passbands is a pbset:pbname string, for example Johnson:V, Cousins:Rc, etc. Thus, our fake passband will be Custom:mypb.
The following two arguments, effwl and wlunits, also come as a pair. PHOEBE uses effective wavelength to apply zero-level passband corrections when better options (such as model atmospheres) are unavailable. Effective wavelength is a transmission-weighted average wavelength in the units given by wlunits.
The calibrated parameter instructs PHOEBE whether to take the transmission function as calibrated, i.e. the flux through the passband is absolutely calibrated. If set to True, PHOEBE will assume that absolute intensities computed using the passband transmission function do not need further calibration. If False, the intensities are considered as scaled rather than absolute, i.e. correct to a scaling constant. Most modern passbands provided in the recent literature are calibrated.
The reference parameter holds a reference string to the literature from which the transmission function was taken from. It is common that updated transmission functions become available, which is the point of the version parameter. If there are multiple versions of the transmission function, PHOEBE will by default take the largest value, or the value that is explicitly requested in the filter string, i.e. Johnson:V:1.0 or Johnson:V:2.0.
Finally, the comments parameter is a convenience parameter to store any additional pertinent information.
Computing blackbody response
To significantly speed up calculations, passband intensities are stored in lookup tables instead of computing them over and over again on the fly. Computed passband tables are tagged in the content property of the class:
End of explanation
pb.compute_blackbody_response()
Explanation: Since we have not computed any tables yet, the list is empty for now. Blackbody functions for computing the lookup tables are built into PHOEBE and you do not need any auxiliary files to generate them. The lookup tables are defined for effective temperatures between 300K and 500,000K. To compute the blackbody response, issue:
End of explanation
pb.content
Explanation: Checking the content property again shows that the table has been successfully computed:
End of explanation
pb.Inorm(Teff=5772, atm='blackbody', ld_func='linear', ld_coeffs=[0.0])
Explanation: We can now test-drive the blackbody lookup table we just created. For this we will use a low-level class method that computes normal emergent passband intensity, Inorm(). For the sake of simplicity, we will turn off limb darkening by setting ld_func to 'linear' and ld_coeffs to '[0.0]':
End of explanation
jV = phoebe.get_passband('Johnson:V')
teffs = np.linspace(5000, 8000, 100)
plt.xlabel('Temperature [K]')
plt.ylabel('Inorm [W/m^3]')
plt.plot(teffs, pb.Inorm(teffs, atm='blackbody', ld_func='linear', ld_coeffs=[0.0]), label='mypb')
plt.plot(teffs, jV.Inorm(teffs, atm='blackbody', ld_func='linear', ld_coeffs=[0.0]), label='jV')
plt.legend(loc='lower right')
plt.show()
Explanation: Let us now plot a range of temperatures, to make sure that normal emergent passband intensities do what they are supposed to do. While at it, let us compare what we get for the Johnson:V passband.
End of explanation
pb.compute_ck2004_response(path='tables/ck2004', verbose=False)
Explanation: This makes perfect sense: the Johnson V transmission function is wider than our boxed transmission function, so intensity in the V band is larger at lower temperatures. However, at hotter temperatures the contribution from the UV increases and our box passband with a perfect transmission of 1 takes over.
Computing Castelli & Kurucz (2004) response
For any real science you will want to generate model atmosphere tables. The default choice in PHOEBE is the set of models computed by Fiorella Castelli and Bob Kurucz (website, paper), which feature new opacity distribution functions. In principle, you can generate PHOEBE-compatible tables for any model atmospheres, but that would require a bit of book-keeping legwork in the PHOEBE backend. Contact us to discuss an extension to other model atmospheres.
To compute Castelli & Kurucz (2004) passband tables, we will use the previously downloaded model atmospheres. We start with the ck2004 normal intensities:
End of explanation
pb.content
Explanation: Note, of course, that you will need to change the path to point to the directory where you unpacked the ck2004 tables. The verbosity parameter verbose will report on the progress as computation is being done. Depending on your computer speed, this step will take up to a minute to complete. We can now check the passband's content attribute again:
End of explanation
loggs = np.ones(len(teffs))*4.43
abuns = np.zeros(len(teffs))
plt.xlabel('Temperature [K]')
plt.ylabel('Inorm [W/m^3]')
plt.plot(teffs, pb.Inorm(teffs, atm='blackbody', ld_func='linear', ld_coeffs=[0.0]), label='blackbody')
plt.plot(teffs, pb.Inorm(teffs, loggs, abuns, atm='ck2004', ld_func='linear', ld_coeffs=[0.0]), label='ck2004')
plt.legend(loc='lower right')
plt.show()
Explanation: Let us now use the same low-level function as before to compare normal emergent passband intensity for our custom passband for blackbody and ck2004 model atmospheres. One other complication is that, unlike the blackbody model, which depends only on the temperature, the ck2004 model depends on surface gravity (log g) and heavy metal abundances as well, so we need to pass those arrays.
End of explanation
pb.compute_ck2004_intensities(path='tables/ck2004', verbose=False)
Explanation: Quite a difference. That is why using model atmospheres is superior when accuracy is of importance. Next, we need to compute direction-dependent intensities for all our limb darkening and boosting needs. This is a step that takes a long time; depending on your computer speed, it can take a few minutes to complete.
End of explanation
pb.compute_ck2004_ldcoeffs()
pb.compute_ck2004_ldints()
Explanation: This step will allow PHOEBE to compute all direction-dependent intensities on the fly, including the interpolation of the limb darkening coefficients that is model-independent. When limb darkening models are preferred (for example, when you don't quite trust direction-dependent intensities from the model atmosphere), we need to calculate two more tables: one for limb darkening coefficients and the other for the integrated limb darkening. That is done by two methods that can take a couple of minutes to complete:
End of explanation
pb.compute_phoenix_response(path='tables/phoenix', verbose=False)
pb.compute_phoenix_intensities(path='tables/phoenix', verbose=False)
pb.compute_phoenix_ldcoeffs()
pb.compute_phoenix_ldints()
print(pb.content)
Explanation: This completes the computation of Castelli & Kurucz auxiliary tables.
Computing PHOENIX response
PHOENIX is a 3-D model atmosphere code. Because of that, it is more complex and better behaved for cooler stars (down to ~2300K). The steps to compute PHOENIX intensity tables are analogous to the ones we used for ck2004; so we can do all of them in a single step:
End of explanation
pb.impute_atmosphere_grid(pb._phoenix_energy_grid)
pb.impute_atmosphere_grid(pb._phoenix_photon_grid)
pb.impute_atmosphere_grid(pb._phoenix_ld_energy_grid)
pb.impute_atmosphere_grid(pb._phoenix_ld_photon_grid)
pb.impute_atmosphere_grid(pb._phoenix_ldint_energy_grid)
pb.impute_atmosphere_grid(pb._phoenix_ldint_photon_grid)
for i in range(len(pb._phoenix_intensity_axes[3])):
pb.impute_atmosphere_grid(pb._phoenix_Imu_energy_grid[:,:,:,i,:])
pb.impute_atmosphere_grid(pb._phoenix_Imu_photon_grid[:,:,:,i,:])
Explanation: There is one extra step that we need to do for phoenix atmospheres: because there are gaps in the coverage of atmospheric parameters, we need to impute those values in order to allow for seamless interpolation. This is achieved by the call to impute_atmosphere_grid(). It is a computationally intensive step that can take 10+ minutes.
End of explanation
plt.xlabel('Temperature [K]')
plt.ylabel('Inorm [W/m^3]')
plt.plot(teffs, pb.Inorm(teffs, atm='blackbody', ldatm='ck2004', ld_func='linear', ld_coeffs=[0.0]), label='blackbody')
plt.plot(teffs, pb.Inorm(teffs, loggs, abuns, atm='ck2004', ldatm='ck2004', ld_func='linear', ld_coeffs=[0.0]), label='ck2004')
plt.plot(teffs, pb.Inorm(teffs, loggs, abuns, atm='phoenix', ldatm='phoenix', ld_func='linear', ld_coeffs=[0.0]), label='phoenix')
plt.legend(loc='lower right')
plt.show()
Explanation: Now we can compare all three model atmospheres:
End of explanation
pb.import_wd_atmcof('atmcofplanck.dat', 'atmcof.dat', 22)
Explanation: We see that, as temperature increases, model atmosphere intensities can differ quite a bit. That explains why the choice of a model atmosphere is quite important and should be given proper consideration.
Importing Wilson-Devinney response
PHOEBE no longer shares any codebase with the WD code, but for comparison purposes it is sometimes useful to use the same atmosphere tables. If the passband you are registering with PHOEBE has been defined in WD's atmcof.dat and atmcofplanck.dat files, PHOEBE can import those coefficients and use them to compute intensities.
To import a set of WD atmospheric coefficients, you need to know the corresponding index of the passband (you can look it up in the WD user manual available at ftp://ftp.astro.ufl.edu/pub/wilson/lcdc2003/ebdoc2003.2feb2004.pdf.gz) and you need to grab the files ftp://ftp.astro.ufl.edu/pub/wilson/lcdc2003/atmcofplanck.dat.gz and ftp://ftp.astro.ufl.edu/pub/wilson/lcdc2003/atmcof.dat.gz from Bob Wilson's webpage. For this particular passband the index is 22. To import, issue:
End of explanation
pb.content
plt.xlabel('Temperature [K]')
plt.ylabel('Inorm [W/m^3]')
plt.plot(teffs, pb.Inorm(teffs, atm='blackbody', ldatm='ck2004', ld_func='linear', ld_coeffs=[0.0]), label='blackbody')
plt.plot(teffs, pb.Inorm(teffs, loggs, abuns, atm='ck2004', ldatm='ck2004', ld_func='linear', ld_coeffs=[0.0]), label='ck2004')
plt.plot(teffs, pb.Inorm(teffs, loggs, abuns, atm='phoenix', ldatm='phoenix', ld_func='linear', ld_coeffs=[0.0]), label='phoenix')
plt.plot(teffs, pb.Inorm(teffs, loggs, abuns, atm='extern_atmx', ldatm='phoenix', ld_func='linear', ld_coeffs=[0.0]), label='wd_atmx')
plt.legend(loc='lower right')
plt.show()
Explanation: We can consult the content attribute to see the entire set of supported tables, and plot different atmosphere models for comparison purposes:
End of explanation
pb.save('~/.phoebe/atmospheres/tables/passbands/my_passband.fits')
Explanation: Still an appreciable difference.
Saving the passband table
The final step of all this (computer's) hard work is to save the passband file so that these steps do not need to be ever repeated. From now on you will be able to load the passband file explicitly and PHOEBE will have full access to all of its tables. Your new passband will be identified as 'Custom:mypb'.
To make PHOEBE automatically load the passband, it needs to be added to one of the passband directories that PHOEBE recognizes. If there are no proprietary aspects that hinder the dissemination of the tables, please consider contributing them to PHOEBE so that other users can use them.
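Once the file sits in a passband directory that PHOEBE scans (such as the one used in the save call above), the new passband can be retrieved like any built-in table; a quick check might look like:
mypb = phoebe.get_passband('Custom:mypb')
print(mypb.content)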
End of explanation |