Unnamed: 0 (int64, 0 to 16k) | text_prompt (string, lengths 110 to 62.1k) | code_prompt (string, lengths 37 to 152k)
---|---|---
7,000 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Science Glossary on Kaggle
Kaggle is the place to do data science projects. There are so many algorithms and concepts to learn. Kaggle Kernels are one of the best resources on the internet for understanding the practical implementation of algorithms. There are almost 200,000 kernels published on Kaggle, and it can become difficult to search for the right implementation. I have used the Meta Kaggle database to create a glossary of the data science models, techniques and tools shared in Kaggle kernels. One can use this kernel as a single place to find other great kernels shared by great authors. Hope you like this kernel.
Contents
<ul>
<li>1. Regression Algorithms
<ul>
<li>1.1 Linear Regression</li>
<li>1.2 Logistic Regression</li>
</ul>
</li>
<li>2. Regularization Algorithms
<ul>
<li>2.1 Ridge Regression</li>
<li>2.2 Lasso Regression</li>
<li>2.3 Elastic Net</li>
</ul>
</li>
<li>3. Tree Based Models
<ul>
<li>3.1 Decision Tree</li>
<li>3.2 Random Forests</li>
<li>3.3 Lightgbm</li>
<li>3.4 XgBoost</li>
<li>3.5 Cat Boost</li>
</ul>
</li>
<li>4. Neural Networks and Deep Learning
<ul>
<li>4.1 Neural Networks</li>
<li>4.2 AutoEncoders</li>
<li>4.3 DeepLearning</li>
<li>4.4 Convolutional Neural Networks</li>
<li>4.5 LSTMs</li>
<li>4.6 GRUs</li>
<li>4.7 MxNet</li>
<li>4.8 ResNet</li>
<li>4.9 CapsuleNets</li>
<li>4.10 VGGs</li>
<li>4.11 Inception Nets</li>
<li>4.12 Computer Vision</li>
<li>4.13 Transfer Learning</li>
</ul>
</li>
<li>5. Clustering Algorithms
<ul>
<li>5.1 K Means Clustering </li>
<li>5.2 Hierarchical Clustering</li>
<li>5.3 DB Scan</li>
<li>5.4 Unsupervised Learning </li>
</ul>
</li>
<li>6. Misc - Models
<ul>
<li>6.1 Naive Bayes </li>
<li>6.2 SVMs</li>
<li>6.3 KNN</li>
<li>6.4 Recommendation Engine </li>
</ul>
</li>
<li>7.1 Data Science Techniques - Preprocessing
<ul>
<li>a. EDA, Exploration </li>
<li>b. Feature Engineering </li>
<li>c. Feature Selection </li>
<li>d. Outlier Treatment</li>
<li>e. Anomaly Detection</li>
<li>f. SMOTE</li>
<li>g. Pipeline</li>
<li>h. Missing Values</li>
</ul>
</li>
<li>7.2 Data Science Techniques - Dimensionality Reduction
<ul>
<li>a. Dataset Decomposition </li>
<li>b. PCA </li>
<li>c. Tsne </li>
</ul>
</li>
<li>7.3 Data Science Techniques - Post Modelling
<ul>
<li>a. Cross Validation </li>
<li>b. Model Selection </li>
<li>c. Model Tuning </li>
<li>d. Grid Search </li>
</ul>
</li>
<li>7.4 Data Science Techniques - Ensembling
<ul>
<li>a. Ensembling </li>
<li>b. Stacking </li>
<li>c. Bagging</li>
</ul>
</li>
<li>8. Text Data
<ul>
<li>8.1. NLP </li>
<li>8.2. Topic Modelling </li>
<li>8.3. Word Embeddings </li>
</ul>
</li>
<li>9. Data Science Tools
<ul>
<li>9.1 Scikit Learn </li>
<li>9.2 TensorFlow </li>
<li>9.3 Theano </li>
<li>9.4 Keras </li>
<li>9.5 PyTorch </li>
<li>9.6 Vowpal Wabbit </li>
<li>9.7 ELI5 </li>
<li>9.8 HyperOpt </li>
<li>9.9 Pandas </li>
<li>9.10 Sql </li>
<li>9.11 BigQuery </li>
</ul>
</li>
<li>10. Data Visualizations
<ul>
<li>10.1. Visualizations </li>
<li>10.2. Plotly </li>
<li>10.3. Seaborn </li>
<li>10.4. D3.Js </li>
<li>10.5. Bokeh </li>
</ul>
</li>
<li>11. Time Series
<ul>
<li>11.1. Time Series Analysis </li>
<li>11.2. ARIMA </li>
</ul>
</li>
</ul>
<br><br>
1. Regression Algorithms
Step1: 2. Regularization Algorithms
Step2: 3. Tree Based Models
Step3: 4. Neural Networks and Deep Learning Models
Step4: 5. Clustering Algorithms
Step5: 6. Misc - Models
Step6: 7. Important Data Science Techniques
7.1 Preprocessing
Step7: 7.2 Dimensionality Reduction
Step8: 7.3 Post Modelling Techniques
Step9: 7.4 Ensembling
Step10: 8. Text Data
Step11: 9. Data Science Tools
Step12: 10. Data Visualization
Step13: 11. Time Series
Step14: 12. Some of the Best Tutorials on Kaggle | Python Code:
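# NOTE: best_kernels() is called throughout this listing but never defined in it.
# The sketch below is only an illustration of what such a helper might look like.
# It assumes a pandas DataFrame named `kernels`, built from the Meta Kaggle tables,
# with 'Title' and 'TotalVotes' columns; all of these names are assumptions and are
# not part of the original kernel.
def best_kernels(tokens, nb_results, ignore=None):
    ignore = ignore or []
    titles = kernels['Title'].str.lower()
    mask = titles.str.contains('|'.join(t.lower() for t in tokens))
    for term in ignore:
        mask &= ~titles.str.contains(term.lower())
    # return the most up-voted kernels whose titles match any of the tokens
    return kernels[mask].sort_values('TotalVotes', ascending=False).head(nb_results)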
tokens = ["linear regression"]
best_kernels(tokens, 10)
tokens = ['logistic regression', "logistic"]
best_kernels(tokens, 10)
Explanation: Data Science Glossary on Kaggle
Kaggle is the place to do data science projects. There are so many algorithms and concepts to learn. Kaggle Kernels are one of the best resources on the internet for understanding the practical implementation of algorithms. There are almost 200,000 kernels published on Kaggle, and it can become difficult to search for the right implementation. I have used the Meta Kaggle database to create a glossary of the data science models, techniques and tools shared in Kaggle kernels. One can use this kernel as a single place to find other great kernels shared by great authors. Hope you like this kernel.
Contents
<ul>
<li>1. Regression Algorithms
<ul>
<li>1.1 Linear Regression</li>
<li>1.2 Logistic Regression</li>
</ul>
</li>
<li>2. Regularization Algorithms
<ul>
<li>2.1 Ridge Regression</li>
<li>2.2 Lasso Regression</li>
<li>2.3 Elastic Net</li>
</ul>
</li>
<li>3. Tree Based Models
<ul>
<li>3.1 Decision Tree</li>
<li>3.2 Random Forests</li>
<li>3.3 Lightgbm</li>
<li>3.4 XgBoost</li>
<li>3.5 Cat Boost</li>
</ul>
</li>
<li>4. Neural Networks and Deep Learning
<ul>
<li>4.1 Neural Networks</li>
<li>4.2 AutoEncoders</li>
<li>4.3 DeepLearning</li>
<li>4.4 Convolutional Neural Networks</li>
<li>4.5 LSTMs</li>
<li>4.6 GRUs</li>
<li>4.7 MxNet</li>
<li>4.8 ResNet</li>
<li>4.9 CapsuleNets</li>
<li>4.10 VGGs</li>
<li>4.11 Inception Nets</li>
<li>4.12 Computer Vision</li>
<li>4.13 Transfer Learning</li>
</ul>
</li>
<li>5. Clustering Algorithms
<ul>
<li>5.1 K Means Clustering </li>
<li>5.2 Hierarchical Clustering</li>
<li>5.3 DB Scan</li>
<li>5.4 Unsupervised Learning </li>
</ul>
</li>
<li>6. Misc - Models
<ul>
<li>6.1 Naive Bayes </li>
<li>6.2 SVMs</li>
<li>6.3 KNN</li>
<li>6.4 Recommendation Engine </li>
</ul>
</li>
<li>7.1 Data Science Techniques - Preprocessing
<ul>
<li>a. EDA, Exploration </li>
<li>b. Feature Engineering </li>
<li>c. Feature Selection </li>
<li>d. Outlier Treatment</li>
<li>e. Anomaly Detection</li>
<li>f. SMOTE</li>
<li>g. Pipeline</li>
<li>h. Missing Values</li>
</ul>
</li>
<li>7.2 Data Science Techniques - Dimensionality Reduction
<ul>
<li>a. Dataset Decomposition </li>
<li>b. PCA </li>
<li>c. Tsne </li>
</ul>
</li>
<li>7.3 Data Science Techniques - Post Modelling
<ul>
<li>a. Cross Validation </li>
<li>b. Model Selection </li>
<li>c. Model Tuning </li>
<li>d. Grid Search </li>
</ul>
</li>
<li>7.4 Data Science Techniques - Ensembling
<ul>
<li>a. Ensembling </li>
<li>b. Stacking </li>
<li>c. Bagging</li>
</ul>
</li>
<li>8. Text Data
<ul>
<li>8.1. NLP </li>
<li>8.2. Topic Modelling </li>
<li>8.3. Word Embeddings </li>
</ul>
</li>
<li>9. Data Science Tools
<ul>
<li>9.1 Scikit Learn </li>
<li>9.2 TensorFlow </li>
<li>9.3 Theano </li>
<li>9.4 Keras </li>
<li>9.5 PyTorch </li>
<li>9.6 Vowpal Wabbit </li>
<li>9.7 ELI5 </li>
<li>9.8 HyperOpt </li>
<li>9.9 Pandas </li>
<li>9.10 Sql </li>
<li>9.11 BigQuery </li>
</ul>
</li>
<li>10. Data Visualizations
<ul>
<li>10.1. Visualizations </li>
<li>10.2. Plotly </li>
<li>10.3. Seaborn </li>
<li>10.4. D3.Js </li>
<li>10.5. Bokeh </li>
</ul>
</li>
<li>11. Time Series
<ul>
<li>11.1. Time Series Analysis </li>
<li>11.2. ARIMA </li>
</ul>
</li>
</ul>
<br><br>
1. Regression Algorithms
End of explanation
tokens = ['Ridge']
best_kernels(tokens, 10)
tokens = ['Lasso']
best_kernels(tokens, 10)
tokens = ['ElasticNet']
best_kernels(tokens, 4)
Explanation: 2. Regularization Algorithms
End of explanation
tokens = ['Decision Tree']
best_kernels(tokens, 10)
tokens = ['random forest']
best_kernels(tokens, 10)
tokens = ['lightgbm', 'light gbm', 'lgb']
best_kernels(tokens, 10)
tokens = ['xgboost', 'xgb']
best_kernels(tokens, 10)
tokens = ['catboost']
best_kernels(tokens, 10)
Explanation: 3. Tree Based Models
End of explanation
tokens = ['neural network']
best_kernels(tokens, 10)
tokens = ['autoencoder']
best_kernels(tokens, 10)
tokens = ['deep learning']
best_kernels(tokens, 10)
tokens = ['convolutional neural networks', 'cnn']
best_kernels(tokens, 10)
tokens = ['lstm']
best_kernels(tokens, 10)
tokens = ['gru']
ignore = ['grupo']
best_kernels(tokens, 10, ignore)
tokens = ['mxnet']
best_kernels(tokens, 10)
tokens = ['resnet']
best_kernels(tokens, 10)
tokens = ['Capsule network', 'capsulenet']
best_kernels(tokens, 5)
tokens = ['vgg']
best_kernels(tokens, 5)
tokens = ['inception']
best_kernels(tokens, 5)
tokens = ['computer vision']
best_kernels(tokens, 5)
tokens = ['transfer learning']
best_kernels(tokens, 5)
tokens = ['yolo']
best_kernels(tokens, 5)
Explanation: 4. Neural Networks and Deep Learning Models
End of explanation
tokens = ['kmeans', 'k means']
best_kernels(tokens, 10)
tokens = ['hierarchical clustering']
best_kernels(tokens, 3)
tokens = ['dbscan']
best_kernels(tokens, 10)
tokens = ['unsupervised']
best_kernels(tokens, 10)
Explanation: 5. Clustering Algorithms
End of explanation
tokens = ['naive bayes']
best_kernels(tokens, 10)
tokens = ['svm']
best_kernels(tokens, 10)
tokens = ['knn']
best_kernels(tokens, 10)
tokens = ['recommendation engine']
best_kernels(tokens, 5)
Explanation: 6. Misc - Models
End of explanation
tokens = ['EDA', 'exploration', 'exploratory']
best_kernels(tokens, 10)
tokens = ['feature engineering']
best_kernels(tokens, 10)
tokens = ['feature selection']
best_kernels(tokens, 10)
tokens = ['outlier treatment', 'outlier']
best_kernels(tokens, 10)
tokens = ['anomaly detection', 'anomaly']
best_kernels(tokens, 8)
tokens = ['smote']
best_kernels(tokens, 5)
tokens = ['pipeline']
best_kernels(tokens, 10)
tokens = ['missing value']
best_kernels(tokens, 10)
Explanation: 7. Important Data Science Techniques
7.1 Preprocessing
End of explanation
tokens = ['dataset decomposition', 'dimentionality reduction']
best_kernels(tokens, 2)
tokens = ['PCA']
best_kernels(tokens, 10)
tokens = ['Tsne', 't-sne']
best_kernels(tokens, 10)
Explanation: 7.2 Dimensionality Reduction
End of explanation
tokens = ['cross validation']
best_kernels(tokens, 10)
tokens = ['model selection']
best_kernels(tokens, 10)
tokens = ['model tuning', 'tuning']
best_kernels(tokens, 10)
tokens = ['gridsearch', 'grid search']
best_kernels(tokens, 10)
Explanation: 7.3 Post Modelling Techniques
End of explanation
tokens = ['ensemble']
best_kernels(tokens, 10)
tokens = ['stacking', 'stack']
best_kernels(tokens, 10)
tokens = ['bagging']
best_kernels(tokens, 10)
Explanation: 7.4 Ensembling
End of explanation
tokens = ['NLP', 'Natural Language Processing', 'text mining']
best_kernels(tokens, 10)
tokens = ['topic modelling']
best_kernels(tokens, 8)
tokens = ['word embedding','fasttext', 'glove', 'word2vec']
best_kernels(tokens, 8)
Explanation: 8. Text Data
End of explanation
tokens = ['scikit']
best_kernels(tokens, 10)
tokens = ['tensorflow', 'tensor flow']
best_kernels(tokens, 10)
tokens = ['theano']
best_kernels(tokens, 10)
tokens = ['keras']
best_kernels(tokens, 10)
tokens = ['pytorch']
best_kernels(tokens, 10)
tokens = ['vowpal wabbit','vowpalwabbit']
best_kernels(tokens, 10)
tokens = ['eli5']
best_kernels(tokens, 10)
tokens = ['hyperopt']
best_kernels(tokens, 5)
tokens = ['pandas']
best_kernels(tokens, 10)
tokens = ['SQL']
best_kernels(tokens, 10)
tokens = ['bigquery', 'big query']
best_kernels(tokens, 10)
tokens = ['gpu']
best_kernels(tokens, 10)
Explanation: 9. Data Science Tools
End of explanation
tokens = ['visualization', 'visualisation']
best_kernels(tokens, 10)
tokens = ['plotly', 'plot.ly']
best_kernels(tokens, 10)
tokens = ['seaborn']
best_kernels(tokens, 10)
tokens = ['d3.js']
best_kernels(tokens, 4)
tokens = ['bokeh']
best_kernels(tokens, 10)
Explanation: 10. Data Visualization
End of explanation
tokens = ['time series']
best_kernels(tokens, 10)
tokens = ['arima']
best_kernels(tokens, 10)
Explanation: 11. Time Series
End of explanation
tokens = ['tutorial']
best_kernels(tokens, 10)
Explanation: 12. Some of the Best Tutorials on Kaggle
End of explanation |
7,001 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Accompanying code examples of the book "Introduction to Artificial Neural Networks and Deep Learning
Step1: Preparing the Dataset
Load dataset from tab-separated text file
Dataset contains three columns
Step2: Shuffle dataset
Split dataset into 70% training and 30% test data
Seed random number generator for reproducibility
Step3: Standardize training and test datasets (mean zero, unit variance)
Step4: Check dataset (here
Step6: Implementing a Perceptron in NumPy
Implement function for perceptron training in NumPy
Step7: Train the perceptron for 2 epochs
Step9: Implement a function for perceptron predictions in NumPy
Step10: Compute training and test error
Step11: Visualize the decision boundary
Perceptron is a linear function with threshold
$$w_{1}x_{1} + w_{2}x_{2} + b \geq 0.$$
We can rearrange this equation as follows
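$$w_{1}x_{1} + b \geq 0 - w_{2}x_{2}$$
$$- \frac{w_{1}x_{1}}{{w_2}} - \frac{b}{w_2} \leq x_{2}$$
(this rearranged form is exactly what the decision-boundary plotting code later in the notebook uses)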
Step12: Suggested exercises
Train a zero-weight perceptron with different learning rates and compare the model parameters and decision boundaries to each other. What do you observe?
Repeat the previous exercise with randomly initialized weights.
Step13: Implementing a Perceptron in TensorFlow
Setting up the perceptron graph
Step14: Training the perceptron for 5 training samples for illustration purposes
Step15: Continue training of the graph after restoring the session from a local checkpoint (this can be useful if we have to interrupt our computational session)
Now train a complete epoch
Step16: Suggested Exercises
3) Plot the decision boundary for this TensorFlow perceptron. Why do you think the TensorFlow implementation performs better than our NumPy implementation on the test set?
- Hint 1
Step17: Theoretically, we could restart the Jupyter notebook now (we would just have to prepare the dataset again then, though)
We are going to restore the session from a meta graph (notice "tf.Session()")
First, we have to load the datasets again | Python Code:
%load_ext watermark
%watermark -a 'Sebastian Raschka' -d -p tensorflow,numpy,matplotlib
%matplotlib inline
import tensorflow as tf
import numpy as np
import os
import matplotlib.pyplot as plt
Explanation: Accompanying code examples of the book "Introduction to Artificial Neural Networks and Deep Learning: A Practical Guide with Applications in Python" by Sebastian Raschka. All code examples are released under the MIT license. If you find this content useful, please consider supporting the work by buying a copy of the book.
Other code examples and content are available on GitHub. The PDF and ebook versions of the book are available through Leanpub.
Ch02 - The Perceptron
Hands-on Section
Table of Contents
Preparing the Dataset
Implementing a Perceptron in NumPy
Implementing a Perceptron in TensorFlow
End of explanation
data = np.genfromtxt('perceptron_toydata.txt', delimiter='\t')
X, y = data[:, :2], data[:, 2]
y = y.astype(np.int)
print('Class label counts:', np.bincount(y))
plt.scatter(X[y==0, 0], X[y==0, 1], label='class 0', marker='o')
plt.scatter(X[y==1, 0], X[y==1, 1], label='class 1', marker='s')
plt.xlabel('feature 1')
plt.ylabel('feature 2')
plt.legend()
plt.show()
Explanation: Preparing the Dataset
Load dataset from tab-separated text file
Dataset contains three columns: feature 1, feature 2, and class labels
Dataset contains 100 entries sorted by class labels, 50 examples from each class
End of explanation
shuffle_idx = np.arange(y.shape[0])
shuffle_rng = np.random.RandomState(123)
shuffle_rng.shuffle(shuffle_idx)
X, y = X[shuffle_idx], y[shuffle_idx]
X_train, X_test = X[shuffle_idx[:70]], X[shuffle_idx[70:]]
y_train, y_test = y[shuffle_idx[:70]], y[shuffle_idx[70:]]
Explanation: Shuffle dataset
Split dataset into 70% training and 30% test data
Seed random number generator for reproducibility
End of explanation
mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
X_train = (X_train - mu) / sigma
X_test = (X_test - mu) / sigma
Explanation: Standardize training and test datasets (mean zero, unit variance)
End of explanation
plt.scatter(X_train[y_train==0, 0], X_train[y_train==0, 1], label='class 0', marker='o')
plt.scatter(X_train[y_train==1, 0], X_train[y_train==1, 1], label='class 1', marker='s')
plt.xlabel('feature 1')
plt.ylabel('feature 2')
plt.legend()
plt.show()
Explanation: Check dataset (here: training dataset) after preprocessing steps
End of explanation
def perceptron_train(features, targets, mparams=None,
zero_weights=True, learning_rate=1., seed=None):
Perceptron training function for binary class labels
Parameters
----------
features : numpy.ndarray, shape=(n_samples, m_features)
A 2D NumPy array containing the training examples
targets : numpy.ndarray, shape=(n_samples,)
A 1D NumPy array containing the true class labels
mparams : dict or None (default: None)
A dictionary containing the model parameters, for instance
as returned by this function. If None, a new model parameter
dictionary is initialized. Note that the values in mparams
are updated inplace if a mparams dict is provided.
zero_weights : bool (default: True)
Initializes weights to all zeros, otherwise model weights are
initialized to small random number from a normal distribution
with mean zero and standard deviation 0.1.
learning_rate : float (default: 1.0)
A learning rate for the parameter updates. Note that a learning
rate has no effect on the direction of the decision boundary
if the model weights are initialized to all zeros.
seed : int or None (default: None)
Seed for the pseudo-random number generator that initializes the
weights if zero_weights=False
Returns
-------
mparams : dict
The model parameters after training the perceptron for one epoch.
The mparams dictionary has the form:
{'weights': np.array([weight_1, weight_2, ... , weight_m]),
'bias': np.array([bias])}
# initialize model parameters
if mparams is None:
mparams = {'bias': np.zeros(1)}
if zero_weights:
mparams['weights'] = np.zeros(features.shape[1])
else:
rng = np.random.RandomState(seed)
mparams['weights'] = rng.normal(loc=0.0, scale=0.1,
size=(features.shape[1]))
# train one epoch
for training_example, true_label in zip(features, targets):
linear = np.dot(training_example, mparams['weights']) + mparams['bias']
# if class 1 was predicted but true label is 0
if linear > 0. and not true_label:
mparams['weights'] -= learning_rate * training_example
mparams['bias'] -= learning_rate * 1.
# if class 0 was predicted but true label is 1
elif linear <= 0. and true_label:
mparams['weights'] += learning_rate * training_example
mparams['bias'] += learning_rate * 1.
return mparams
Explanation: Implementing a Perceptron in NumPy
Implement function for perceptron training in NumPy
End of explanation
model_params = perceptron_train(X_train, y_train,
mparams=None, zero_weights=True)
for _ in range(2):
_ = perceptron_train(X_train, y_train, mparams=model_params)
Explanation: Train the perceptron for 2 epochs
End of explanation
def perceptron_predict(features, mparams):
Perceptron prediction function for binary class labels
Parameters
----------
features : numpy.ndarray, shape=(n_samples, m_features)
A 2D NumPy array containing the training examples
mparams : dict
The model parameters of the perceptron in the form:
{'weights': np.array([weight_1, weight_2, ... , weight_m]),
'bias': np.array([bias])}
Returns
-------
predicted_labels : np.ndarray, shape=(n_samples)
NumPy array containing the predicted class labels.
linear = np.dot(features, mparams['weights']) + mparams['bias']
predicted_labels = np.where(linear.reshape(-1) > 0., 1, 0)
return predicted_labels
Explanation: Implement a function for perceptron predictions in NumPy
End of explanation
train_errors = np.sum(perceptron_predict(X_train, model_params) != y_train)
test_errors = np.sum(perceptron_predict(X_test, model_params) != y_test)
print('Number of training errors', train_errors)
print('Number of test errors', test_errors)
Explanation: Compute training and test error
End of explanation
x_min = -2
y_min = ( -(model_params['weights'][0] * x_min) / model_params['weights'][1]
-(model_params['bias'] / model_params['weights'][1]) )
x_max = 2
y_max = ( -(model_params['weights'][0] * x_max) / model_params['weights'][1]
-(model_params['bias'] / model_params['weights'][1]) )
fig, ax = plt.subplots(1, 2, sharex=True, figsize=(7, 3))
ax[0].plot([x_min, x_max], [y_min, y_max])
ax[1].plot([x_min, x_max], [y_min, y_max])
ax[0].scatter(X_train[y_train==0, 0], X_train[y_train==0, 1], label='class 0', marker='o')
ax[0].scatter(X_train[y_train==1, 0], X_train[y_train==1, 1], label='class 1', marker='s')
ax[1].scatter(X_test[y_test==0, 0], X_test[y_test==0, 1], label='class 0', marker='o')
ax[1].scatter(X_test[y_test==1, 0], X_test[y_test==1, 1], label='class 1', marker='s')
ax[1].legend(loc='upper left')
plt.show()
Explanation: Visualize the decision boundary
Perceptron is a linear function with threshold
$$w_{1}x_{1} + w_{2}x_{2} + b \geq 0.$$
We can rearrange this equation as follows:
$$w_{1}x_{1} + b \geq 0 - w_{2}x_{2}$$
$$- \frac{w_{1}x_{1}}{{w_2}} - \frac{b}{w_2} \leq x_{2}$$
End of explanation
# %load solutions/01_weight_zero_learning_rate.py
# %load solutions/02_random_weights_learning_rate.py
Explanation: Suggested exercises
Train a zero-weight perceptron with different learning rates and compare the model parameters and decision boundaries to each other. What do you observe?
Repeat the previous exercise with randomly initialized weights.
End of explanation
g = tf.Graph()
n_features = X_train.shape[1]
with g.as_default() as g:
# initialize model parameters
features = tf.placeholder(dtype=tf.float32,
shape=[None, n_features], name='features')
targets = tf.placeholder(dtype=tf.float32,
shape=[None, 1], name='targets')
params = {
'weights': tf.Variable(tf.zeros(shape=[n_features, 1],
dtype=tf.float32), name='weights'),
'bias': tf.Variable([[0.]], dtype=tf.float32, name='bias')}
# forward pass
linear = tf.matmul(features, params['weights']) + params['bias']
ones = tf.ones(shape=tf.shape(linear))
zeros = tf.zeros(shape=tf.shape(linear))
prediction = tf.where(tf.less(linear, 0.), zeros, ones, name='prediction')
# weight update
diff = targets - prediction
weight_update = tf.assign_add(params['weights'],
tf.reshape(diff * features, (n_features, 1)))
bias_update = tf.assign_add(params['bias'], diff)
saver = tf.train.Saver()
Explanation: Implementing a Perceptron in TensorFlow
Setting up the perceptron graph
End of explanation
with tf.Session(graph=g) as sess:
sess.run(tf.global_variables_initializer())
i = 0
for example, target in zip(X_train, y_train):
feed_dict = {features: example.reshape(-1, n_features),
targets: target.reshape(-1, 1)}
_, _ = sess.run([weight_update, bias_update], feed_dict=feed_dict)
i += 1
if i >= 4:
break
modelparams = sess.run(params)
print('Model parameters:\n', modelparams)
saver.save(sess, save_path='perceptron')
pred = sess.run(prediction, feed_dict={features: X_train})
errors = np.sum(pred.reshape(-1) != y_train)
print('Number of training errors:', errors)
Explanation: Training the perceptron for 5 training samples for illustration purposes
End of explanation
with tf.Session(graph=g) as sess:
saver.restore(sess, os.path.abspath('perceptron'))
for epoch in range(1):
for example, target in zip(X_train, y_train):
feed_dict = {features: example.reshape(-1, n_features),
targets: target.reshape(-1, 1)}
_, _ = sess.run([weight_update, bias_update], feed_dict=feed_dict)
modelparams = sess.run(params)
saver.save(sess, save_path='perceptron')
pred = sess.run(prediction, feed_dict={features: X_train})
train_errors = np.sum(pred.reshape(-1) != y_train)
pred = sess.run(prediction, feed_dict={features: X_train})
test_errors = np.sum(pred.reshape(-1) != y_train)
print('Number of training errors', train_errors)
print('Number of test errors', test_errors)
Explanation: Continue training of the graph after restoring the session from a local checkpoint (this can be useful if we have to interrupt our computational session)
Now train a complete epoch
End of explanation
# %load solutions/03_tensorflow-boundary.py
Explanation: Suggested Exercises
3) Plot the decision boundary for this TensorFlow perceptron. Why do you think the TensorFlow implementation performs better than our NumPy implementation on the test set?
- Hint 1: you can re-use the code that we used in the NumPy section
- Hint 2: since the bias is a 2D array, you need to access the float value via modelparams['bias'][0]
End of explanation
with tf.Session() as sess:
saver = tf.train.import_meta_graph(os.path.abspath('perceptron.meta'))
saver.restore(sess, os.path.abspath('perceptron'))
pred = sess.run('prediction:0', feed_dict={'features:0': X_train})
train_errors = np.sum(pred.reshape(-1) != y_train)
pred = sess.run('prediction:0', feed_dict={'features:0': X_test})
test_errors = np.sum(pred.reshape(-1) != y_test)
print('Number of training errors', train_errors)
print('Number of test errors', test_errors)
Explanation: Theoretically, we could restart the Jupyter notebook now (we would just have to prepare the dataset again then, though)
We are going to restore the session from a meta graph (notice "tf.Session()")
First, we have to load the datasets again
End of explanation |
7,002 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img class="logo" src="images/python-logo.png" height=100 align='right'/>
Python
High-level
General purpose
Multiple programming paradigms
Interpreted
Variables
Step1: Containers
Data types for holding many variables
Lists
One-dimensional, ordered container whose contents and length can change (mutable).
Step2: The elements of a list do not need to be of the same type
Step3: Elements can be added or removed from a list
Step4: Elements of a list can be changed
Step5: Lists Indexing
Step6: List slicing
Syntax
Step7: Tuples
One-dimensional, ordered container whose contents and length CANNOT change (immutable).
Step8: Can be 'unpacked' to assign variables. Often used with functions which return multiple items.
Step9: Dictionaries
Unordered collection of key/value pairs whose size and content can change
Step10: Entries can be added or removed from dictionaries
Step11: Note
Step12: for loops
Syntax
Step13: Functions
Step14: Functions can have multiple, no and even default arguments
Step15: Functions can return multiple values
Step17: Classes
Step18: Libraries
Python has a large number of libraries which extend the basic functionality.
The standard library is included with Python and contains a number of helpful features.
Third-party libraries can add even more powerful functionality!
Libraries must be imported to be used. | Python Code:
var1 = 1 # integer
var2 = 2.34 # floating point numbers
var3 = 5.6 + 7.8j # complex numbers
var4 = "Hello World" # strings
var5 = True # booleans
var6 = None # special value to indicate the absence of a value
print("var1 value:", var1, "type:", type(var1))
print("var2 value:", var2, "type:", type(var2))
print("var3 value:", var3, "type:", type(var3))
print("var4 value:", var4, "type:", type(var4))
print("var5 value:", var5, "type:", type(var5))
print("var6 value:", var6, "type:", type(var6))
Explanation: <img class="logo" src="images/python-logo.png" height=100 align='right'/>
Python
High-level
General purpose
Multiple programming paradigms
Interpreted
Variables
End of explanation
hydrometeors = ['rain', 'snow', 'hail'] # create a list holding three elements
print(hydrometeors)
print('length:', len(hydrometeors))
Explanation: Containers
Data types for holding many variables
Lists
One-dimensional, ordered container whose contents and length can change (mutable).
End of explanation
mixed_type_list = ['rain', 4.5, 99, None]
print(mixed_type_list)
Explanation: The elements of a list do not need to be of the same type:
End of explanation
hydrometeors = ['rain', 'snow', 'hail']
hydrometeors.append('drizzle') # add 'drizzle' to the end of the list
print(hydrometeors)
hydrometeors = ['rain', 'snow', 'hail']
hydrometeors.insert(1, 'graupel') # insert graupel before position 1
print(hydrometeors)
hydrometeors = ['rain', 'snow', 'hail']
del hydrometeors[0] # remove the first element from the list
print(hydrometeors)
hydrometeors = ['rain', 'snow', 'hail']
observation = hydrometeors.pop() # remove the last item from the list and store it in observation
print("observation:", observation)
print("hydrometeors:", hydrometeors)
Explanation: Elements can be added or removed from a list
End of explanation
hydrometeors = ['rain', 'snow', 'hail']
print("Before change:", hydrometeors)
hydrometeors[0] = 'virga'
print("After change:", hydrometeors)
Explanation: Elements of a list can be changed
End of explanation
hydrometeors = ['rain', 'snow', 'hail']
print('index 0:', hydrometeors[0]) # indexing begins at 0
print('index 1:', hydrometeors[1])
print('index 2:', hydrometeors[2])
hydrometeors[3] # Trying to access elements which do not exist raises an IndexError
hydrometeors = ['rain', 'snow', 'hail']
print('index -1:', hydrometeors[-1])
print('index -2:', hydrometeors[-2])
print('index -3:', hydrometeors[-3])
Explanation: Lists Indexing
End of explanation
hydrometeors = ['rain', 'snow', 'hail', 'drizzle', 'graupel', 'virga']
print(hydrometeors[2:4]) # select elements from index 2 to index 4
hydrometeors[:3] # start from beginning
hydrometeors[3:] # until the end
hydrometeors[3:-1] # negative indices
hydrometeors[1::2] # every 2nd element
Explanation: List slicing
Syntax: list_variable[start:end:step]
End of explanation
t = ('rain', 'snow', 'hail')
print(t)
print(len(t))
t[0] = 'virga' # tuples cannot be changed
Explanation: Tuples
One-dimensional, ordered container whose contents and length CANNOT change (immutable).
End of explanation
observations = ('rain', 'snow', 'hail') # tuple with three elements
obs1, obs2, obs3 = observations # unpack tuple into obs1, obs2, obs3 variables
print("observations:", observations)
print("obs1:", obs1)
print("obs2:", obs2)
print("obs3:", obs3)
Explanation: Can be 'unpacked' to assign variables. Often used with functions which return multiple items.
End of explanation
d = {'site': 'KLOT', 'amount': 20, 'wind': 'east'}
print(d.keys())
print(d.values())
print('site:', d['site'])
print('amount:', d['amount'])
print('wind:', d['wind'])
print("wind before change:", d['wind'])
d['wind'] = 'west'
print("wind after change:", d['wind'])
Explanation: Dictionaries
Unordered collection of key/value pairs whose size and content can change
End of explanation
d = {'site': 'KLOT', 'amount': 20, 'wind': 'east'}
print(d)
del d['wind']
print(d)
d['wind_speed'] = 'east'
d['wind_direction'] = '10 m/s'
print(d)
Explanation: Entries can be added or removed from dictionaries
End of explanation
hydrometeor = 'rain'
if hydrometeor == 'rain':
print("You saw rain")
hydrometeor = 'hail'
if hydrometeor == 'rain':
print("You saw rain")
else:
print("You did NOT see rain")
hydrometeor = 'snow'
if hydrometeor == 'rain':
print("You saw rain")
elif hydrometeor == 'snow':
print("You saw snow")
else:
print("I do not know what you saw")
Explanation: Note: Dictionaries do not preserve the order in which entries are added. If you need ordering, use an OrderedDict from the collections module.
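For example (a minimal sketch, not from the original notebook): from collections import OrderedDict; d = OrderedDict([('site', 'KLOT'), ('amount', 20)]) keeps its keys in insertion order.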
Flow control
If statements
End of explanation
hydrometeors = ['rain', 'snow', 'hail']
for hydrometeor in hydrometeors: # loop over elements in a list
print(hydrometeor)
for i in range(5): # loop over the number 0 to 4
print(i)
d = {'site': 'KLOT', 'amount': 20, 'wind': 'east'}
for key, value in d.items():
print(key, ':', value)
Explanation: for loops
Syntax: <br/>
for variable in iterable:<br/>
code block
End of explanation
# simple
def func(arg1):
print(arg1)
return 42
# call a function
return_value = func("Hello World")
print("ret_value:", return_value)
Explanation: Functions
End of explanation
def add_numbers(number1, number2):
return number1 + number2
def say_hello():
print("Hello AMS")
def favorite_hydrometeor(name, hydrometeor='snow'):
print("Hello", name)
print("Your favorite hydrometeor is", hydrometeor)
print(add_numbers(1, 2))
say_hello()
favorite_hydrometeor("Jonathan")
favorite_hydrometeor("Jonathan", hydrometeor="hail")
Explanation: Functions can have multiple, no and even default arguments
End of explanation
def sum_and_product(a, b):
return a+b, a*b
sum_ab, product_ab = sum_and_product(2, 3)
print("sum", sum_ab)
print("product", product_ab)
Explanation: Functions can return multiple values:
End of explanation
class Point(object):
A class to store the coordinate in a plane
def __init__(self, x, y):
self.x = x # an attribute
self.y = y # an attribute
def sum_of_coordinates(self): # a class method
return self.x + self.y
home = Point(2, 3)
print(home.x)
print(home.y)
home.sum_of_coordinates()
Explanation: Classes
End of explanation
import math # import the entire math module
math.sqrt(2)
from random import randrange # import just the randrange function from the random module
for i in range(5):
print(randrange(1, 10))
Explanation: Libraries
Python has a large number of libraries which extend the basic functionality.
The standard library is included with Python and contains a number of helpful features.
Third-party libraries can add even more powerful functionality!
Libraries must be imported to be used.
End of explanation |
7,003 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
COSC Learning Lab
01_device_control.py
Related Scripts
Step1: Implementation
Step2: Execution
Step3: HTTP | Python Code:
help('learning_lab.01_device_control')
Explanation: COSC Learning Lab
01_device_control.py
Related Scripts:
* 03_management_interface.py
Table of Contents
Table of Contents
Documentation
Implementation
Execution
HTTP
Documentation
End of explanation
from importlib import import_module
script = import_module('learning_lab.01_device_control')
from inspect import getsource
print(getsource(script.main))
print(getsource(script.demonstrate))
Explanation: Implementation
End of explanation
run ../learning_lab/01_device_control.py
Explanation: Execution
End of explanation
from basics.odl_http import http_history
from basics.http import http_history_to_html
from IPython.core.display import HTML
HTML(http_history_to_html(http_history()))
Explanation: HTTP
End of explanation |
7,004 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Read and plot data
The file ex1data1.csv contains dataset
first column is population in a city ;
second column is the profit in that city
Step1: The objective of linear regression is to minimize the cost function
$$J(\theta)=\frac{1}{2m} \sum_{i=1}^m(h_{\theta}(x^{(i)})-y^{(i)})^2$$
where $h_{\theta}(x)$ is given by the linear model
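$$ h_{\theta}(x) = \theta^T x = \theta_0+\theta_1 x_1$$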
Step2: Update $\theta$ with step $\alpha$ (learning rate) to minimize $J(\theta)$
$$\theta_j := \theta_j - \alpha \frac{1}{m} \sum_{i=1}^m(h_{\theta}(x^{(i)})-y^{(i)}) x_j^{(i)} $$
Step3: Plot the fitted line with the optimized $\theta$
Step4: Visualizing $J(\theta)$
Step5: Linear regression with multiple variables
Step6: predict price of house 1650 sqfeet and 3 bedrooms
Step7: compare with sklearn Linear Regression model | Python Code:
import csv
import pandas as pd
import numpy as np
from numpy import genfromtxt
data = pd.read_csv('./ex1data1.csv', delimiter=',',
names=['population','profit'])
data.head()
%matplotlib inline
'''
import matplotlib.pyplot as plt
x= data['population']
y= data['profit']
plt.plot(x,y,'rx')
plt.ylabel('profit in $10,000s')
plt.xlabel('population in 10,000s')
plt.show()
'''
data.plot(x='population',y='profit',kind='scatter', figsize=(12,8))
Explanation: Read and plot data
The file ex1data1.csv contains dataset
first column is population in a city ;
second column is the profit in that city
End of explanation
def computeCost(X,Y, theta ):
inner= np.power(((X*theta.T)-Y),2)
return np.sum(inner)/(2*len(X))
def h(theta, x ):
return theta*x.T
data.insert(0,'Ones',1)
data.head()
x=data.iloc[:,0:2]
y=data.iloc[:,2:3]
print("x=", x.head()," \n y=", y.head())
X= np.matrix(x.values)
Y= np.matrix(y.values)
theta = np.matrix(np.array([0,0]))
theta.shape, X.shape, Y.shape
computeCost(X,Y, theta )
Explanation: The objective of linear regression is to minimize the cost function
$$J(\theta)=\frac{1}{2m} \sum_{i=1}^m(h_{\theta}(x^{(i)})-y^{(i)})^2$$
where $h_{\theta}(x)$ is given by the linear model:
$$ h_{\theta}(x) = \theta^T x = \theta_0+\theta_1 x_1$$
End of explanation
alpha = 1e-2
iteration = 1000
error = 1e-8
def gradientDescent( X,Y, theta, alpha, iters):
temp = np.matrix(np.zeros(theta.shape))
parameters = int(theta.ravel().shape[1])
cost = np.zeros(iters)
for i in range(iters):
error = (X * theta.T) - Y
for j in range(parameters):
term = np.multiply(error, X[:,j])
temp[0,j] = theta[0,j] - ((alpha / len(X)) * np.sum(term))
theta = temp
#print(theta)
cost[i] = computeCost(X, Y, theta)
return theta, cost
g, cost = gradientDescent(X,Y, theta, alpha, iteration)
g
computeCost(X, Y, g)
Explanation: Update $\theta$ with step $\alpha$ (learning rate ) to minimize $J(\theta)$
$$\theta_j := \theta_j - \alpha \frac{1}{m} \sum_{i=1}^m(h_{\theta}(x^{(i)})-y^{(i)}) x_j^{(i)} $$
End of explanation
x = np.linspace(data.population.min(), data.population.max(),100)
predict = g[0,0]+ g[0, 1]*x
fig, ax = plt.subplots(figsize=(12,8))
ax.plot(x, predict, 'r', label='Prediction')
ax.scatter(data.population, data.profit, label='Training Data')
ax.legend(loc=2)
ax.set_xlabel('Population')
ax.set_ylabel('Profit')
ax.set_title('Predicted Profit vs. Population Size')
Explanation: Plot the fitted line with the optimized $\theta$
End of explanation
fig, ax = plt.subplots(figsize=(12,8))
ax.plot(np.arange(iteration), cost, 'r')
ax.set_xlabel('Iterations')
ax.set_ylabel('Cost')
ax.set_title('Error vs. Training Epoch')
Explanation: Visualizing $J(\theta)$
End of explanation
data2 = pd.read_csv('./ex1data2.csv', delimiter=',',
names=['size','bedroom','price'])
data2.head()
def featureNormalize(X, Y):
row, col = X.shape
#for i in range(col):
# X.iloc[:,i]=(X.iloc[:,i]-X.iloc[:,i].mean())/X.iloc[:,i].std()
X = (X-X.mean())/X.std()
Y = (Y-Y.mean())/Y.std()
#print(X)
return X,Y
x,y = featureNormalize(data2.iloc[:,0:2],data2.iloc[:,2:3])
x.insert(0,'ones',1)
x = np.matrix(x.values)
y = np.matrix(y.values)
theta = np.matrix(np.array([0,0,0]))
g, cost = gradientDescent(x,y ,theta, 0.01, iteration)
computeCost(x,y,g),g
fig, ax = plt.subplots(figsize=(12,8))
for i in [.1,0.01,0.001]:
g, cost = gradientDescent(x,y ,theta, i, iteration)
ax.plot(np.arange(iteration), cost)
ax.set_xlabel('Iterations')
ax.set_ylabel('Cost')
ax.set_title('Error vs. Training Epoch')
Explanation: Linear regression with multiple variables
End of explanation
g, cost = gradientDescent(x,y ,theta, 0.01, iteration)
xtest = np.array([1650,3])
xtestscaled = (xtest- data2.iloc[:,0:2].mean())/data2.iloc[:,0:2].std()
xtestscaled = np.matrix(xtestscaled.values)
ones = np.ones((1,1))
xtestscaled = np.hstack((ones,xtestscaled))
#print( xtestscaled,g)
pre_y= h(g, xtestscaled)
#print(pre_y[0,0])
pre_y = pre_y [0,0]* data2.iloc[:,2:3].std()+ data2.iloc[:,2:3].mean()
pre_y
Explanation: predict price of house 1650 sqfeet and 3 bedrooms
End of explanation
from sklearn import linear_model
x=data2.iloc[:,0:2].values.reshape(-1,2)
y=data2.iloc[:,2:3]
model = linear_model.LinearRegression()
model.fit(x, y)
pre_y = model.predict([[1650,3]])
print( " predicted y (price) ", pre_y[0,0])
Explanation: compare with sklearn Linear Regression model
End of explanation |
7,005 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Object Detection Demo
Welcome to the object detection inference walkthrough! This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. Make sure to follow the installation instructions before you start.
Imports
Step1: Object detection imports
Here are the imports from the object detection module.
Step2: Model preparation
Variables
Any model exported using the export_inference_graph.py tool can be loaded here simply by changing PATH_TO_CKPT to point to a new .pb file.
By default we use an "SSD with Mobilenet" model here. See the detection model zoo for a list of other models that can be run out-of-the-box with varying speeds and accuracies.
Step3: Load a (frozen) Tensorflow model into memory.
Step4: Loading label map
Label maps map indices to category names, so that when our convolutional network predicts 5, we know that this corresponds to airplane. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine.
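For example, the category index built below is just a dictionary of the form {1: {'id': 1, 'name': 'car'}, 2: {'id': 2, 'name': 'pedestrian'}, ...} (illustrative entries only; the real class names come from kitti_map.pbtxt).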
Step5: Helper code
Step6: Detection
Step7: Check that the validation files don't overlap with the train files
Step8: Calculate mAP by category (roughly 8 minutes)
Step9: Make a nice performance table
Step10: Calculate mAP for each image | Python Code:
import numpy as np
import os
import pickle
import six.moves.urllib as urllib
import sys
sys.path.append("..")
import tarfile
import tensorflow as tf
import zipfile
from object_detection.eval_util import evaluate_detection_results_pascal_voc
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
%matplotlib inline
%load_ext autoreload
%autoreload 2
# This is needed since the notebook is stored in the object_detection folder.
from utils import label_map_util
from utils import visualization_utils as vis_util
Explanation: Object Detection Demo
Welcome to the object detection inference walkthrough! This notebook will walk you step by step through the process of using a pre-trained model to detect objects in an image. Make sure to follow the installation instructions before you start.
Imports
End of explanation
from utils import label_map_util
from utils import visualization_utils as vis_util
def get_annotations(image_path):
img_id = os.path.basename(image_path)[:-4]
annotation_path = os.path.join(
os.path.split(os.path.dirname(image_path))[0], 'Annotations',
'{}.xml'.format(img_id)
)
return xml_to_dict(annotation_path)
from utils.kitti import show_groundtruth, create_results_list
from utils.kitti import visualize_predictions
import glob
Explanation: Object detection imports
Here are the imports from the object detection module.
End of explanation
# What model to download.
FREEZE_DIR = 'atrous_frozen_v2/'
PATH_TO_CKPT = os.path.join(FREEZE_DIR,
'frozen_inference_graph.pb'
)
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = os.path.join('data', 'kitti_map.pbtxt')
NUM_CLASSES = 9
Explanation: Model preparation
Variables
Any model exported using the export_inference_graph.py tool can be loaded here simply by changing PATH_TO_CKPT to point to a new .pb file.
By default we use an "SSD with Mobilenet" model here. See the detection model zoo for a list of other models that can be run out-of-the-box with varying speeds and accuracies.
End of explanation
detection_graph = tf.Graph()
with detection_graph.as_default():
od_graph_def = tf.GraphDef()
with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
serialized_graph = fid.read()
od_graph_def.ParseFromString(serialized_graph)
tf.import_graph_def(od_graph_def, name='')
Explanation: Load a (frozen) Tensorflow model into memory.
End of explanation
PATH_TO_LABELS = os.path.join('data', 'kitti_map.pbtxt')
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map,
max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)
Explanation: Loading label map
Label maps map indices to category names, so that when our convolutional network predicts 5, we know that this corresponds to airplane. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine.
End of explanation
def load_image_into_numpy_array(image):
(im_width, im_height) = image.size
return np.array(image.getdata()).reshape(
(im_height, im_width, 3)).astype(np.uint8)
with open('kitti_data/train.txt') as f:
train_ids = f.readlines()[0].split(',')
with open('kitti_data/valid.txt') as f:
valid_ids = f.readlines()[0].split(',')
len(train_ids)
len(valid_ids)
Explanation: Helper code
End of explanation
PATH_TO_TEST_IMAGES_DIR = 'voc_kitti_valid/VOC2012/JPEGImages/'
p = 'voc_kitti_valid/VOC2012/JPEGImages/1023.jpg'
TEST_IMAGE_PATHS = [ p]
FIGSIZE = (20, 20)
import glob
def glob_base(pat): return list(map(os.path.basename, glob.glob(pat)))
from create_dataset import *
Explanation: Detection
End of explanation
valid_ids = glob_base(VOC_VALID_DIR + '/VOC2012/JPEGImages/*.jpg')
train_ids = glob_base(VOC_TRAIN_DIR+ '/VOC2012/JPEGImages/*.jpg')
assert len(pd.Index(valid_ids).intersection(train_ids)) == 0
test_dir = 'voc_kitti_valid/VOC2012/JPEGImages/'
test_image_paths = [os.path.join(test_dir, x) for x in valid_ids]
len(test_image_paths)
train_labs= glob.glob('kitti_data/training/label_2/*.txt')
test_labs = glob.glob('kitti_data/valid/label_2/*.txt')
Explanation: Check that the validation files don't overlap with the train files
End of explanation
%%time
with detection_graph.as_default():
with tf.Session(graph=detection_graph) as sess:
res = create_results_list(test_image_paths, sess, detection_graph)
import pandas as pd
perf = pd.Series(evaluate_detection_results_pascal_voc(res, categories))
perf
Explanation: Calculate mAP by category (roughly 8 minutes)
End of explanation
def clean_idx(perf):
x = list(perf.index.map(lambda x: x[33:]))
x[-1] = 'Total'
perf.index = x
return perf
perf = clean_idx(perf)
perf.to_frame('rcnn_mAP')#.round(3).to_csv('~/Desktop/faster_rcnn_mAP_by_category.csv')
def get_dict_slice(res, slc_obj):
'''get a slice of the values for each key in a dict'''
output = {}
for k in res.keys():
output[k] = res[k][slc_obj]
return output
Explanation: Make nice performance table
End of explanation
%%capture
img_scores = {image_id: evaluate_detection_results_pascal_voc(
get_dict_slice(res, slice(i, i+1)), categories)
for i, image_id in enumerate((res['image_id'][:-1]))}
OVERALL_PERF_KEY = 'Precision/[email protected]'
#pickle.dump(res, open('mobile_net_valid_results_dct.pkl', 'wb'))
res['image_id'][0]
from kitti_constants import name_to_id
from object_detection.utils.visualization_utils import visualize_boxes_and_labels_on_image_array
from utils.kitti import get_boxes_scores_classes, visualize_predictions
def get_img_scores(image_path):
imageid = os.path.basename(image_path)[:-4]
return pd.Series(img_scores[imageid]).round(2).dropna()
%%time
%precision 4
import time
with detection_graph.as_default():
with tf.Session(graph=detection_graph) as sess:
image_path = np.random.choice(test_image_paths)
image = Image.open(image_path)
image_np = load_image_into_numpy_array(image)
start = time.time()
image_process = visualize_predictions(image_np, sess, detection_graph)
# boxes, scores, classes, num_detections = get_boxes_scores_classes(image_np, sess, detection_graph)
print('inference time: {} seconds'.format(
np.round(time.time() - start, 2)))
print ('MaP scores\n{}'.format(get_img_scores(image_path)))
plt.figure(figsize=FIGSIZE)
plt.imshow(image_process)
plt.title('Model', fontsize=16)
#plt.imsave(image_process, 'worst_prediction labs.jpg')
plt.figure(figsize=FIGSIZE)
truth_img = show_groundtruth(image_path)
plt.imshow(truth_img)
plt.title('Human Labels', fontsize=16)
plt.figure(figsize=FIGSIZE)
plt.imshow(load_image_into_numpy_array(Image.open(image_path)))
plt.title('Raw Image')
#plt.savefig('worst_prediction labs.jpg')
Explanation: Calculate mAP for each image
End of explanation |
7,006 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Filtering and Annotation Tutorial
Filter
You can filter the rows of a table with Table.filter. This returns a table of those rows for which the expression evaluates to True.
Step1: We can also express this query in multiple ways using aggregations
Step2: Annotate
You can add new fields to a table with annotate. As an example, let's create a new column called cleaned_occupation that replaces missing entries in the occupation field labeled as 'other' with 'none.'
Step3: Compare this to what we had before
Step4: Note
Step5: Select and Transmute
There are two other annotate methods
Step6: We can also create a new field that stores the age relative to the average. Note that new fields must be assigned a name (in this case mean_shifted_age)
Step7: transmute replaces any fields mentioned on the right-hand side with the new fields, but leaves unmentioned fields unchanged. transmute is useful for transforming data into a new form. Compare the following two snippets of code. The second is identical to the first, with transmute replacing select.
Step8: Global Fields
Finally, you can add global fields with annotate_globals. Globals are useful for storing metadata about a dataset or storing small data structures like sets and maps. | Python Code:
import hail as hl
hl.utils.get_movie_lens('data/')
users = hl.read_table('data/users.ht')
users.filter(users.occupation == 'programmer').count()
Explanation: Filtering and Annotation Tutorial
Filter
You can filter the rows of a table with Table.filter. This returns a table of those rows for which the expression evaluates to True.
End of explanation
users.aggregate(hl.agg.filter(users.occupation == 'programmer', hl.agg.count()))
users.aggregate(hl.agg.counter(users.occupation == 'programmer'))[True]
Explanation: We can also express this query in multiple ways using aggregations:
End of explanation
missing_occupations = hl.set(['other', 'none'])
t = users.annotate(
cleaned_occupation = hl.if_else(missing_occupations.contains(users.occupation),
hl.null('str'),
users.occupation))
t.show()
Explanation: Annotate
You can add new fields to a table with annotate. As an example, let's create a new column called cleaned_occupation that replaces missing entries in the occupation field labeled as 'other' with 'none.'
End of explanation
users.show()
Explanation: Compare this to what we had before:
End of explanation
users.describe()
Explanation: Note: annotate is functional: it doesn't mutate users, but returns a new table. This is also true of filter. In fact, all operations in Hail are functional.
End of explanation
users.select(users.sex, users.occupation).show()
Explanation: Select and Transmute
There are two other annotate methods: select and transmute. select allows you to create new tables from old ones by selecting existing fields, or creating new ones.
First, let's extract the sex and occupation fields:
End of explanation
mean_age = round(users.aggregate(hl.agg.stats(users.age)).mean)
users.select(users.sex, users.occupation, mean_shifted_age = users.age - mean_age).show()
Explanation: We can also create a new field that stores the age relative to the average. Note that new fields must be assigned a name (in this case mean_shifted_age):
End of explanation
missing_occupations = hl.set(['other', 'none'])
t = users.select(
cleaned_occupation = hl.if_else(missing_occupations.contains(users.occupation),
hl.null('str'),
users.occupation))
t.show()
missing_occupations = hl.set(['other', 'none'])
t = users.transmute(
cleaned_occupation = hl.if_else(missing_occupations.contains(users.occupation),
hl.null('str'),
users.occupation))
t.show()
Explanation: transmute replaces any fields mentioned on the right-hand side with the new fields, but leaves unmentioned fields unchanged. transmute is useful for transforming data into a new form. Compare the following two snippets of code. The second is identical to the first, with transmute replacing select.
End of explanation
t = users.annotate_globals(cohort = 5, cloudable = hl.set(['sample1', 'sample10', 'sample15']))
t.describe()
t.cloudable
hl.eval(t.cloudable)
Explanation: Global Fields
Finally, you can add global fields with annotate_globals. Globals are useful for storing metadata about a dataset or storing small data structures like sets and maps.
End of explanation |
7,007 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
I am looking to work out how to handle Bayesian estimation in a rate counting system.
Model
Foreground data we want is Binomial with trials 100 and probability 0.5, the number of samples will be changed to see how that impacts the results
Background is taken to be Binomial with trials 1e3 and probability 0.01; the number of samples will be the same as for the foreground
Add a nuisance parameter, $\alpha$, to represent systematic uncertainty.
Is this U[0,100] or just a number?
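(If the uniform option were chosen, a minimal PyMC2-style sketch would be alpha = pymc.Uniform('alpha', lower=0, upper=100); the code below instead uses a fixed alpha = 10.0. This sketch is illustrative and not part of the original analysis.)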
Step1: Now it is time to create the Bayesian model that will estimate these parameters.
Matching the above we will have | Python Code:
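# Model sketch (restating the description above, not new functionality):
# foreground ~ Binomial(n=100, p=0.5), background ~ Binomial(n=1e3, p=0.01),
# and the synthetic observations below are fore + back + alpha, with alpha a fixed offset.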
samples = tb.logspace(1, 10000, 10)
fore = [np.random.binomial(100, 0.50, size=v) for v in samples]
print(tb.logspace(1, 1000, 10))
for v in fore[::-1]:
h = np.histogram(v, 20)
plt.hist(v, 20, label='{0}'.format(np.floor(len(v))))
plt.yscale('log')
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.title('Foreground')
back = [np.random.binomial(1e3, 0.01, size=v) for v in samples]
for v in back[::-1]:
h = np.histogram(v, 20)
plt.hist(v, 20, label='{0}'.format(np.floor(len(v))))
plt.yscale('log')
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.title('Background')
alpha = 10.0 # make the same as the center of the Background, no real reason
syn = []
for i in range(len(fore)):
syn.append(fore[i] + back[i] + alpha)
syn[5]
Explanation: I am looking to work out how to handle Bayesian estimation in a rate counting system.
Model
Foreground data we want is Binomial with trials 100 and probability 0.5, the number of samples will be changed to see how that impacts the results
Background is taken to be Binomial with trials 1e3 and probability 0.01; the number of samples will be the same as for the foreground
Add a nuisance parameter, $\alpha$, to represent systematic uncertainty.
Is this U[0,100] or just a number?
End of explanation
foreN = pymc.Uniform('foreN', lower=0, upper=1e6)
backN = pymc.Uniform('backN', lower=0, upper=1e6) # , observed=True, value=[1e3], plot=True)
foreP = pymc.Uniform('foreP', lower=0, upper=1) # , observed=True, value=[0.5, 0.4, 0.55])
backP = pymc.Uniform('backP', lower=0, upper=1)
foreB = pymc.Binomial('fore', n=foreN, p=foreP, observed=True, value=fore[-1])
backB = pymc.Binomial('back', n=backN, p=backP, observed=True, value=back[-1])
# @pymc.stochastic
# def alphaB(value=100):
# return value
M = pymc.MCMC([foreB, backB, foreN, backN, foreP, backP])
M.sample(iter=2e4, burn=1e3, thin=4)
pymc.Matplot.plot(M)
pprint(M.stats())
Explanation: Now it is time to create the Bayesian model that will estimate these parameters.
Matching the above we will have:
* ForeB = Bi()
* BackB = Bi()
* $\alpha$B = Number
End of explanation |
7,008 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Data" data-toc-modified-id="Data-1"><span class="toc-item-num">1 </span>Data</a></span></li><li><span><a href="#Model" data-toc-modified-id="Model-2"><span class="toc-item-num">2 </span>Model</a></span></li><li><span><a href="#Training" data-toc-modified-id="Training-3"><span class="toc-item-num">3 </span>Training</a></span></li><li><span><a href="#Explore-Latent-Space" data-toc-modified-id="Explore-Latent-Space-4"><span class="toc-item-num">4 </span>Explore Latent Space</a></span></li></ul></div>
Step1: Data
Step2: Model
Step3: Training
Step4: Explore Latent Space | Python Code:
import sys
import yaml
import tensorflow as tf
import numpy as np
import pandas as pd
import functools
from pathlib import Path
from datetime import datetime
from tqdm import tqdm_notebook as tqdm
# Plotting
import matplotlib
import matplotlib.pyplot as plt
from matplotlib import animation
plt.rcParams['animation.ffmpeg_path'] = str('/usr/bin/ffmpeg /usr/share/ffmpeg')
%load_ext autoreload
%autoreload 2
from progan import ProGan
import gan_utils
from load_data import preprocess_images
from ds_utils.plot_utils import plot_sample_imgs
data_folder = Path.home() / "Documents/datasets"
# load model config
with open('configs/progan_celeba_config.yaml', 'r') as f:
config = yaml.safe_load(f)
HIDDEN_DIM = config['data']['z_size']
IMG_SHAPE = config['data']['input_shape']
BATCH_SIZE = config['training']['batch_size']
IMG_IS_BW = IMG_SHAPE[2] == 1
PLOT_IMG_SHAPE = IMG_SHAPE[:2] if IMG_IS_BW else IMG_SHAPE
config
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Data" data-toc-modified-id="Data-1"><span class="toc-item-num">1 </span>Data</a></span></li><li><span><a href="#Model" data-toc-modified-id="Model-2"><span class="toc-item-num">2 </span>Model</a></span></li><li><span><a href="#Training" data-toc-modified-id="Training-3"><span class="toc-item-num">3 </span>Training</a></span></li><li><span><a href="#Explore-Latent-Space" data-toc-modified-id="Explore-Latent-Space-4"><span class="toc-item-num">4 </span>Explore Latent Space</a></span></li></ul></div>
End of explanation
# load Fashion MNIST dataset
((X_train, y_train), (X_test, y_test)) = tf.keras.datasets.fashion_mnist.load_data()
X_train = preprocess_images(X_train)
X_test = preprocess_images(X_test)
print(X_train[0].shape)
print(X_train[0].max())
print(X_train[0].min())
print(X_train.shape)
assert X_train[0].shape == tuple(config['data']['input_shape'])
train_ds = tf.data.Dataset.from_tensor_slices(X_train).take(5000)
test_ds = tf.data.Dataset.from_tensor_slices(X_test).take(256)
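# The next cell overrides the Fashion-MNIST pipelines above with CelebA images loaded from disk.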
sys.path.append("../")
from tmp_load_data import load_imgs_tfdataset
train_ds = load_imgs_tfdataset(data_folder/'img_align_celeba', '*.jpg', config, 500, zipped=False, tanh_range=True)
test_ds = load_imgs_tfdataset(data_folder/'img_align_celeba', '*.jpg', config, 100, zipped=False, tanh_range=True)
for a in train_ds:
n_a = a.numpy()
print(n_a.shape)
print(n_a.max())
print(n_a.min())
print(n_a.shape)
plt.imshow((n_a+1)/2)
break
Explanation: Data
End of explanation
# instantiate GAN
gan = ProGan(config)
# test generator
generator = gan.generators[2][0]
generator_out = generator.predict(np.random.randn(BATCH_SIZE, HIDDEN_DIM))
generator_out.shape
generator_out.max()
# test discriminator
discriminator = gan.discriminators[2][0]
discriminator_out = discriminator.predict(generator_out)
discriminator_out.shape
# plot random generated image
plot_img_shape = generator.output_shape[1:]
plt.imshow(generator.predict([np.random.randn(1, HIDDEN_DIM)])[0]
.reshape(plot_img_shape), cmap='gray' if IMG_IS_BW else 'jet')
plt.show()
Explanation: Model
End of explanation
# setup model directory for checkpoint and tensorboard logs
model_name = "progan_celeba"
model_dir = Path.home() / "Documents/models/tf_playground/gan" / model_name
model_dir.mkdir(exist_ok=True, parents=True)
export_dir = model_dir / 'export'
export_dir.mkdir(exist_ok=True)
log_dir = model_dir / "logs" / datetime.now().strftime("%Y%m%d-%H%M%S")
nb_epochs = config['training']['nb_epochs']
gan.train(train_ds=train_ds,
validation_ds=test_ds,
nb_epochs=nb_epochs,
log_dir=log_dir,
checkpoint_dir=None,
is_tfdataset=True)
# export Keras model (.h5)
gan.generator.save(str(export_dir / 'generator.h5'))
gan.discriminator.save(str(export_dir / 'discriminator.h5'))
# plot generator results
plot_side = 5
plot_generator = gan.generators[2][0]
plot_img_shape = plot_generator.output_shape[1:]
plot_sample_imgs(lambda x: plot_generator.predict(np.random.randn(plot_side*plot_side, HIDDEN_DIM)),
img_shape=plot_img_shape,
plot_side=plot_side,
cmap='gray' if IMG_IS_BW else 'jet')
Explanation: Training
End of explanation
%matplotlib inline
render_dir = Path.home() / 'Documents/videos/gan' / "gan_celeba"
nb_samples = 30
nb_transition_frames = 10
nb_frames = min(2000, (nb_samples-1)*nb_transition_frames)
# random list of z vectors
z_s = np.random.randn(nb_samples, 1, HIDDEN_DIM)
# setup plot
dpi = 100
fig, ax = plt.subplots(dpi=dpi, figsize=(PLOT_IMG_SHAPE[0] / dpi, PLOT_IMG_SHAPE[1] / dpi))
fig.subplots_adjust(left=0, right=1, bottom=0, top=1)
im = ax.imshow(gan.generator.predict(z_s[0])[0].reshape(PLOT_IMG_SHAPE), cmap='gray' if IMG_IS_BW else 'jet')
plt.axis('off')
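# animate() below linearly interpolates between consecutive z vectors, spending nb_transition_frames frames on each segment.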
def animate(i, gan, z_s, nb_transition_frames):
z_start = z_s[i//nb_transition_frames]
z_end = z_s[i//nb_transition_frames+1]
z_diff = z_end - z_start
cur_z = z_start + (z_diff/nb_transition_frames)*(i%nb_transition_frames)
im.set_data(gan.generator.predict(cur_z)[0].reshape(PLOT_IMG_SHAPE))
ani = animation.FuncAnimation(fig, animate, frames=nb_frames, interval=1,
fargs=[gan, z_s, nb_transition_frames])
if render_dir:
render_dir.mkdir(parents=True, exist_ok=True)
ani.save(str(render_dir / (datetime.now().strftime("%Y%m%d-%H%M%S") + '.mp4')),
animation.FFMpegFileWriter(fps=30))
render_dir = Path.home() / 'Documents/videos/gan' / "gan_fmnist_idxs"
nb_transition_frames = 150
# random list of z vectors
rand_idx = np.random.randint(len(X_train))
z_start = np.random.randn(1, HIDDEN_DIM)
vals = np.linspace(-1., 1., nb_transition_frames)
# setup plot
dpi = 100
fig, ax = plt.subplots(dpi=dpi, figsize=(PLOT_IMG_SHAPE[0] / dpi, PLOT_IMG_SHAPE[1] / dpi))
fig.subplots_adjust(left=0, right=1, bottom=0, top=1)
#fig, ax = plt.subplots(dpi=100, figsize=(5, 4))
im = ax.imshow(gan.generator.predict(z_s[0])[0].reshape(PLOT_IMG_SHAPE), cmap='gray' if IMG_IS_BW else 'jet')
plt.axis('off')
def animate(i, gan, z_start, idx, vals):
z_start[0][idx:idx+10] = vals[i]
im.set_data(gan.generator.predict(z_start)[0].reshape(PLOT_IMG_SHAPE))
for z_idx in range(100):
ani = animation.FuncAnimation(fig, animate, frames=nb_transition_frames, interval=10,
fargs=[gan, z_start.copy(), z_idx, vals])
if render_dir:
render_dir.mkdir(parents=True, exist_ok=True)
ani.save(str(render_dir / 'idx{}.mp4'.format(z_idx)), animation.FFMpegFileWriter(fps=30))
Explanation: Explore Latent Space
End of explanation |
7,009 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework 6
CHE 116
Step1: 4. Prediction Intervals and Loops (19 Points + 12 EC)
[1 point] "The 95% prediction interval for a geometric probability distribution" can be described with what mathematical equation? Answer as a $\LaTeX$ equation.
[6 points] Using a for loop, compute a lower (starting at 0) 90% prediction interval for the binomial distribution with $N = 12, p = 0.3$.
[6 points] Using a for loop, compute an upper (ending at N) 95% prediction interval for the binomial distribution with $N = 20, p = 0.6$.
[6 points] Using a for loop, compute an 80% prediction interval for the geometric distribution for $p = 0.02$. Just pick a large number for the upper bound of the for loop.
[12 Extra Credit Points]. Repeat 4.3 using a while loop.
4.1
$$
P(n \le x) = 0.95
$$
Step2: 5. Normal Distribution (8 Points)
Use scipy.stats here as needed. Except for 5.1 and 5.3, answer in Python.
[2 points] In the $Z$-score equation $ Z = (x - \mu) / \sigma$, what is $x$?
[1 point] What is $P(x < -2)$ for a standard normal distribution?
[1 point] What is $P(X > 2)$ for a standard normal distribution? Use your knowledge of probability expression, not scipy.stats to answer this one.
[2 points] Given that $\mu = 2$, $\sigma = 1.2$, what is the probability of observing a sample between -2 and 0? Answer using a $Z$-score.
[2 points] Given that $\mu = 2$, $\sigma = 1.2$, what is the probability of observing a sample between -2 and 0? Answer without using a $Z$-score.
5.1
The bounds of an integral
Step3: 5.3
$$
1 - 0.023 = 0.977
$$ | Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
#3.1
p = [0.2, 0.5, 0.8]
n = np.arange(1, 8)
for i, pi in enumerate(p):
plt.plot(n, pi * (1 - pi)**(n - 1), 'o-', label='$p={}$'.format(pi), color='C{}'.format(i))
plt.axvline(x = 1/ pi, color='C{}'.format(i))
plt.title('Problem 3.1 - Geometric')
plt.xlabel('$n$')
plt.ylabel('$P(n)$')
plt.legend()
plt.show()
#3.2
from scipy.special import comb,factorial
N = 4
p = 0.70
mu = N * p
x = np.arange(0, N+1)
plt.plot(x, comb(N, x) * p**x *(1 - p)**(N - x), 'o-', label='binomial')
plt.plot(x, np.exp(-mu) * mu**x / factorial(x), 'o-', label='Poisson')
plt.title('Problem 3.2 - Binomial vs Geometric')
plt.xlabel('$n$')
plt.ylabel('$P(n)$')
plt.legend()
plt.show()
#3.3
from scipy.special import comb,factorial
N = 25
p = 0.10
mu = N * p
x = np.arange(0, N+1)
plt.plot(x, comb(N, x) * p**x *(1 - p)**(N - x), 'o-', label='binomial')
plt.plot(x, np.exp(-mu) * mu**x / factorial(x), 'o-', label='Poisson')
plt.title('Problem 3.3 - Binomial vs Geometric')
plt.xlabel('$n$')
plt.ylabel('$P(n)$')
plt.legend()
plt.show()
#3.4
L = 1 / 4
t = np.linspace(0,7,100)
tsmall = np.linspace(0,5,100)
plt.plot(t, L * np.exp(-L * t))
plt.fill_between(tsmall, 0, L * np.exp(-L * tsmall))
plt.axvline(x=5)
plt.title('Problem 3.4 - Exponential')
plt.xlabel('$t$')
plt.ylabel('$P(t)$')
plt.show()
Explanation: Homework 6
CHE 116: Numerical Methods and Statistics
2/22/2018
1. Review Questions (10 Points)
[1 point] A probability mass function must give a positive number for each element in the sample space and $\underline{\hspace{0.5in}}$?
[1 point] Which of these are invalid sample spaces and which are valid: ${1,3,-2}$, ${A, B}$, ${\textrm{Ace of hearts}, \textrm{king of diamonds}}$, all real numbers.
[1 point] What rule allows me to rewrite $P(x \,|\,y)P(y)$ as $P(x, y)$?
[2 points] If there is a 10% chance of rain for 3 days in a row, what's the probability of there being rain at least once within those days?
[2 points] Harry says that expected value is like an average, so you can compute two ways: $ E[X] = \sum_i^N \frac{x_i}{N} $ and the way we learned in class: $E[X] = \sum_i P(x) \cdot x$. Is Harry correct or is there an issue with his logic?
[1 point] How many elements will I have in my list if I create it using list(range(5,8))?
[2 points] In the binomial distribution, we only consider number of successes. Let's try considering each permutation as unique. For example, if $N = 3$ and $n = 1$, you could have $100$, $010$, and $001$. If $N = 10$, how many unique permutations are possible for all numbers of successes? Review your HW 5, questions 1.2-1.5.
1.1
sum to 1
1.2
all are valid
1.3
Definition of conditional
1.4
Binomial with $p=0.1,\,N=3$. Being asked $1 - P(n = 0)$. Binomial coefficient is 1 for $0$, so just $1 - (1 - 0.1)^{3} = 0.271$
1.5
Expected value is only conceptually like an average. We do not have data, so the first expression requires a sum of data. We have elements in a sample space, so only the second equation can be used. The law of large numbers connects them, but that's in the limit of large amounts of data.
1.6
3
1.7
$$
2^{10} = 1024
$$
2. Marginal Probability Review (19 Points)
You are a baby being carried in a stork to your parents. Your parents live in either:
USA (u, 320)
China (c, 1300)
Germany (g, 80)
The probability of your birth location is proportional to the populations. As a baby, you are concerned with your career options, which are
Rock star (r)
Professor (p)
Doctor (d)
Answer the following using $B$ as the random variable for birthplace and $J$ as the random variable for job. We have the following information:
$$P(J = r \,|\, B = c) = 0.05$$
$$P(J = d \,|\, B = c) = 0.5$$
$$P(J = r \,|\, B = u) = 0.8$$
$$P(J = p\,|\, B = u) = 0.01$$
$$P(J = p\,|\, B = g) = 0.75$$
$$P(J = d \,|\, B = g) = 0.2$$
[2 point] Write out the missing conditionals and marginal probabilities.
[4 points] What is the probability that you will be a professor?
[3 points] What is the probability that you will be a rock star born in China?
[2 point] You were born in Germany. What's the probability of becoming a doctor?
[4 points] Consider the random variable $Z$, which indicates if you are a doctor or rockstar (true for $J=d$ and $J=r$). What is $P(Z = 1 \,|\, B=u)$?
[4 points] What is $P(B=g \,|\, Z = 0)$? Find a way to re-use the calculation you did in 2.2 to help
2.1
$$P(J = p \,|\, B = c) = 0.45$$
$$P(J = d \,|\, B = u) = 0.19$$
$$P(J = r \,| \,B = g) = 0.05$$
$$P(B = c) = \frac{1300}{1700} \approx 0.76$$
$$P(B = u) = \frac{320}{1700} \approx 0.19$$
$$P(B = g) = \frac{80}{1700} \approx 0.05$$
2.2
$$
P(J = p) = \sum_b P(J = p\, | B = b) P(B = b) = 0.45 \times 0.76 + 0.01\times 0.19 + 0.75\times 0.05 = 0.38
$$
2.3
$$
P(J = r, B = c) = P(J = r\, |\, B = c) P(B = c) = 0.05\times 0.76 = 0.038
$$
2.4
$$
P(J = d\, |\, B = g) = 0.2
$$
2.5
$$
P(Z = 1 \, |\, B = u) = P(J = d, J = p\, | \, B = u) = 1 - P(J = r\, | \, B = u) = 0.2
$$
2.6
$$
P(B = g\, |\, Z = 0) = \frac{P(Z = 0\, |\, B = g) P(B = g)}{P(Z = 0)} = \frac{P(Z = p\, |\, B = g) P(B = g)}{P(B = p)}
$$
$$
P(B = g\, |\, Z = 0) = \frac{0.75 \times 0.05}{0.38} \approx 0.1
$$
3. Plotting Probability Distributions (18 Points)
Label your axes, add a title, and use LaTeX in your labels when necessary. Use dots connected by lines for discrete and lines for continuous.
[6 points] Plot three different parameter of the geometric distribution: $p = 0.2, p = 0.5, p = 0.8$. Add vertical lines at their means. Extra credit: accomplish the plot of the three lines using a for loop.
[4 points] Plot the binomial distribution for $N = 25, p = 0.7$. Recall that the Poisson is an approximation to the Binomial. Plot the Poisson approximation to this Binomial distribution on the same plot.
[2 points] Make a second plot with the binomial and Poisson, but use $N = 25, p = 0.10$. How good is the approximation?
[6 points] The command plt.fill_between can be used to plot an area under a curve. For example, fill_between(x, 0, y) will fill the area between 0 and y, where y could be a numpy array. Using fill_between, show the cumulative probability function for the exponential distribution from $t = 0$ to $t = 5$ with $\lambda = 0.25$. Ensure that there are two lines on your plot: one that is the exponential pdf and one that is the fill_between. The pdf should extend further than your fill_between line. Add a vertical line at $t=5$. No legend necessary.
End of explanation
#4.2
N = 12
p = 0.3
psum = 0
for ni in range(0, N+1):
psum += comb(N, ni) * p**ni * (1 - p)**(N - ni)
if psum >= 0.9:
break
print('Interval is [0, {}]'.format(ni))
#4.3
N = 20
p = 0.6
psum = 0
#reverse the range so we count down from the top
for ni in range(N, -1, -1):  # largest possible outcome is N
psum += comb(N, ni) * p**ni * (1 - p)**(N - ni)
if psum >= 0.95:
break
print('Interval is [{}, N]'.format(ni))
#4.4
p = 0.02
psum = 0
for ni in range(1, 500):
psum += p * (1 - p) ** (ni - 1)
if psum >= 0.8:
break
print('Interval is [1, {}]'.format(ni))
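# optional cross-check (added, not in the original): scipy's inverse CDF gives the same cutoff
# from scipy import stats; print(stats.geom.ppf(0.8, 0.02))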
#4.5
N = 20
p = 0.6
psum = 0
# count down
ni = N
while psum < 0.95:
psum += comb(N, ni) * p**ni * (1 - p)**(N - ni)
ni -= 1
#add 1, since when we broke we had just subtracted 1
print('Interval is [{}, N]'.format(ni + 1))
Explanation: 4. Prediction Intervals and Loops (19 Points + 12 EC)
[1 point] "The 95% prediction interval for a geometric probability distribution" can be described with what mathematical equation? Answer as a $\LaTeX$ equation.
[6 points] Using a for loop, compute a lower (starting at 0) 90% prediction interval for the binomial distribution with $N = 12, p = 0.3$.
[6 points] Using a for loop, compute an upper (ending at N) 95% prediction interval for the binomial distribution with $N = 20, p = 0.6$.
[6 points] Using a for loop, compute an 80% prediction interval for the geometric distribution for $p = 0.02$. Just pick a large number for the upper bound of the for loop.
[12 Extra Credit Points]. Repeat 4.3 using a while loop.
4.1
$$
P(n \le x) = 0.95
$$
End of explanation
#5.2
import scipy.stats as ss
print(ss.norm.cdf(-2))
Explanation: 5. Normal Distribution (8 Points)
Use scipy.stats here as needed. Except for 5.1 and 5.3, answer in Python.
[2 points] In the $Z$-score equation $ Z = (x - \mu) / \sigma$, what is $x$?
[1 point] What is $P(x < -2)$ for a standard normal distribution?
[1 point] What is $P(X > 2)$ for a standard normal distribution? Use your knowledge of probability expression, not scipy.stats to answer this one.
[2 points] Given that $\mu = 2$, $\sigma = 1.2$, what is the probability of observing a sample between -2 and 0? Answer using a $Z$-score.
[2 points] Given that $\mu = 2$, $\sigma = 1.2$, what is the probability of observing a sample between -2 and 0? Answer without using a $Z$-score.
5.1
The bounds of an integral
End of explanation
#5.4
zlo = (-2 - 2) / 1.2  # Z = (x - mu) / sigma with mu = 2
zhi = (0 - 2) / 1.2
print(ss.norm.cdf(zhi) - ss.norm.cdf(zlo))
#5.5
print(ss.norm.cdf(0, loc=2, scale=1.2) - ss.norm.cdf(-2, loc=2, scale=1.2))
Explanation: 5.3
$$
P(X > 2) = 1 - P(X < 2) = 1 - 0.977 = 0.023
$$
End of explanation |
7,010 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create some simulated data.
Step2: Create a scatterplot using a colormap.
Full list of colormaps | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
Explanation: Title: Set The Color Of A Matplotlib Plot
Slug: set_the_color_of_a_matplotlib
Summary: Set The Color Of A Matplotlib Plot
Date: 2016-05-01 12:00
Category: Python
Tags: Data Visualization
Authors: Chris Albon
Import numpy and matplotlib.pyplot
End of explanation
n = 100
r = 2 * np.random.rand(n)
theta = 2 * np.pi * np.random.rand(n)
area = 200 * r**2 * np.random.rand(n)
colors = theta
Explanation: Create some simulated data.
End of explanation
c = plt.scatter(theta, r, c=colors, s=area, cmap=plt.cm.RdYlGn)
c1 = plt.scatter(theta, r, c=colors, s=area, cmap=plt.cm.Blues)
c2 = plt.scatter(theta, r, c=colors, s=area, cmap=plt.cm.BrBG)
c3 = plt.scatter(theta, r, c=colors, s=area, cmap=plt.cm.Greens)
c4 = plt.scatter(theta, r, c=colors, s=area, cmap=plt.cm.RdGy)
c5 = plt.scatter(theta, r, c=colors, s=area, cmap=plt.cm.YlOrRd)
c6 = plt.scatter(theta, r, c=colors, s=area, cmap=plt.cm.autumn)
c7 = plt.scatter(theta, r, c=colors, s=area, cmap=plt.cm.binary)
c8 = plt.scatter(theta, r, c=colors, s=area, cmap=plt.cm.gist_earth)
c9 = plt.scatter(theta, r, c=colors, s=area, cmap=plt.cm.gist_heat)
c10 = plt.scatter(theta, r, c=colors, s=area, cmap=plt.cm.hot)
c11 = plt.scatter(theta, r, c=colors, s=area, cmap=plt.cm.spring)
c12 = plt.scatter(theta, r, c=colors, s=area, cmap=plt.cm.summer)
c12 = plt.scatter(theta, r, c=colors, s=area, cmap=plt.cm.winter)
c13 = plt.scatter(theta, r, c=colors, s=area, cmap=plt.cm.bone)
c14 = plt.scatter(theta, r, c=colors, s=area, cmap=plt.cm.cool)
c15 = plt.scatter(theta, r, c=colors, s=area, cmap=plt.cm.YlGn)
c16 = plt.scatter(theta, r, c=colors, s=area, cmap=plt.cm.RdBu)
c17 = plt.scatter(theta, r, c=colors, s=area, cmap=plt.cm.PuOr)
c18 = plt.scatter(theta, r, c=colors, s=area, cmap=plt.cm.Oranges)
Explanation: Create a scatterplot using a colormap.
Full list of colormaps: http://wiki.scipy.org/Cookbook/Matplotlib/Show_colormaps
End of explanation |
7,011 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial how to use xgboost
Step1: Let's do the same for classification problem
Tips
Step2: Visualisation | Python Code:
import xgboost as xgb
from sklearn.datasets import load_boston
from sklearn.cross_validation import train_test_split
from sklearn.metrics import r2_score, auc
boston = load_boston()
#print(boston.DESCR)
print(boston.data.shape)
X_train, X_test, y_train, y_test = train_test_split(boston.data, boston.target)
model = xgb.XGBRegressor()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print( r2_score(y_test, y_pred) )
Explanation: Tutorial how to use xgboost
End of explanation
#you should import load_iris
#you should import f1_score
iris = load_iris()
Explanation: Let's do the same for classification problem
Tips: use iris dataset for this and f1_score for measure quality and xgb.XGBClassifier()
End of explanation
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
sns.set(style="whitegrid", palette="husl")
%matplotlib inline
iris_melt = pd.melt(iris, "species", var_name="measurement")
f, ax = plt.subplots(1, figsize=(15,9))
sns.stripplot(x="measurement", y="value", hue="species", data=iris_melt, jitter=True, edgecolor="white", ax=ax)
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target)
model = #create a model here
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print( f1_score(y_test, y_pred, average='micro') )
Explanation: Visualisation
End of explanation |
7,012 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Flickr30k Captions to Corpus
P. Young, A. Lai, M. Hodosh, and J. Hockenmaier. From image description to visual denotations
Step1: Plan
Have a look inside the captions flickr30k.tar.gz
Step2: Now for the word Embeddings
Step3: Filter images and vocab jointly
Step4: Assemble a ready-for-use embedding
Let's filter the embedding to make it sleeker, and add some entries up front for RNN convenience
Step5: Check that this arrangement makes sense
Step6: Finally, save the data into a useful structure | Python Code:
import os
import numpy as np
import datetime
t_start=datetime.datetime.now()
import pickle
data_path = './data/Flickr30k'
output_dir = './data/cache'
output_filepath = os.path.join(output_dir,
'CAPTIONS_%s_%s.pkl' % (
data_path.replace('./', '').replace('/', '_'),
t_start.strftime("%Y-%m-%d_%H-%M"),
), )
output_filepath
Explanation: Flickr30k Captions to Corpus
P. Young, A. Lai, M. Hodosh, and J. Hockenmaier. From image description to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics (to appear).
End of explanation
WORD_FREQ_MIN=5
IMG_WORD_FREQ_MIN=5
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
img_to_captions=dict()
tarfilepath = os.path.join(data_path, 'flickr30k.tar.gz')
if os.path.isfile(tarfilepath):
import tarfile
with tarfile.open(tarfilepath, 'r:gz').extractfile('results_20130124.token') as tokenized:
n_captions = 0
for l in tokenized.readlines():
#print(l) # This is bytes
img_num, caption = l.decode("utf-8").strip().split("\t")
img, num = img_num.split("#")
#print(img, caption); break
if img not in img_to_captions: img_to_captions[img]=[]
img_to_captions[img].append(caption)
n_captions += 1
print("Found %d images, with a total of %d captions" % (len(img_to_captions),n_captions, ))
# Found 31783 images, with a total of 158915 captions
good_img_to_captions, good_img_to_captions_title = img_to_captions, 'all'
len(good_img_to_captions)
# Filter for the images that we care about
if False:
# This is a super-small list, which means we won't get the chance to see
# enough Text to figure out how to make sentences. ABANDON THIS 'SIMPLIFICATION'
import re
good_caption = re.compile( r'\b(cat|kitten)s?\b', flags=re.IGNORECASE )
good_img_to_captions = { img:captions
for img, captions in img_to_captions.items()
for caption in captions
if good_caption.search( caption )
} # img=='3947306345.jpg'
good_img_to_captions_title = 'feline'
#good_img_to_captions
len(good_img_to_captions)
img_arr = sorted(good_img_to_captions.keys())
# extract the vocab where each word is required to occur WORD_FREQ_MIN times overall
word_freq_all=dict()
#for img in img_to_captions.keys(): # everything
for img in img_arr: # Our selection
for caption in img_to_captions[img]:
for w in caption.lower().split():
if not w in word_freq_all: word_freq_all[w]=0
word_freq_all[w] += 1
word_freq = { w:f for w,f in word_freq_all.items() if f>=WORD_FREQ_MIN }
freq_word = sorted([ (f,w) for w,f in word_freq.items() ], reverse=True)
vocab = set( word_freq.keys() )
len(vocab), freq_word[0:20]
# 7734, [(271698, 'a'), (151039, '.'), (83466, 'in'), (62978, 'the'), (45669, 'on'), (44263, 'and'), ...
# extract the vocab where each word is required to occur in IMG_WORD_FREQ_MIN *images* overall
word_freq_imgs=dict()
#for img in img_to_captions.keys(): # everything
for img in img_arr: # Our selection
img_caption_words=set()
for caption in img_to_captions[img]:
for w in caption.lower().split():
img_caption_words.add(w)
for w in img_caption_words:
if not w in word_freq_imgs: word_freq_imgs[w]=0
word_freq_imgs[w] += 1
word_freq = { w:f for w,f in word_freq_imgs.items() if f>=IMG_WORD_FREQ_MIN }
freq_word = sorted([ (f,w) for w,f in word_freq.items() ], reverse=True)
vocab = set( word_freq.keys() )
len(vocab), freq_word[0:20]
# 7219, [(31783, '.'), (31635, 'a'), (28076, 'in'), (24180, 'the'), (21235, 'is'), (21201, 'and'), ...
sorted([ (f,w) for w,f in word_freq.items() if not w.isalpha() and '-' not in w ], reverse=True)
stop_words = set ( stopwords.words('english') )
punc = set ("- . , : ; ' \" & $ % ( ) ! ? #".split())
[ (w, w in stop_words) for w in "while with of at in".split() ]
stop_words_seen = vocab.intersection( stop_words.union(punc) )
', '.join(stop_words_seen)
len(stop_words_seen), len(stop_words)
Explanation: Plan
Have a look inside the captions flickr30k.tar.gz : includes results_20130124.token
Extract contents of flickr30k.tar.gz to dict( photo_id -> [captions] )
Filter out a subset of those photo_id to convert
Save off image array and corpus to an easy-to-load filetype
End of explanation
glove_dir = './data/RNN/'
glove_100k_50d = 'glove.first-100k.6B.50d.txt'
glove_100k_50d_path = os.path.join(glove_dir, glove_100k_50d)
if not os.path.isfile( glove_100k_50d_path ):
raise RuntimeError("You need to download GloVE Embeddings "+
": Use the downloader in 5-Text-Corpus-and-Embeddings.ipynb")
else:
print("GloVE available locally")
# Due to size constraints, only use the first 100k vectors (i.e. 100k most frequently used words)
import glove
embedding_full = glove.Glove.load_stanford( glove_100k_50d_path )
embedding_full.word_vectors.shape
# Find words in word_arr that don't appear in GloVe
#word_arr = stop_words_seen # Great : these all have embeddings
#word_arr = [ w for w,f in word_freq.items() if f>WORD_FREQ_MIN] # This seems we're not missing much...
word_arr = vocab
missing_arr=[]
for w in word_arr:
if not w in embedding_full.dictionary:
missing_arr.append(w)
len(missing_arr), ', '.join( sorted(missing_arr) )
Explanation: Now for the word Embeddings
End of explanation
# Let's filter out the captions for the words that appear in our GloVe embedding
# And ignore the images that then have no captions
img_to_valid_captions, words_used = dict(), set()
captions_total, captions_valid_total = 0,0
for img, captions in good_img_to_captions.items():
captions_total += len(captions)
captions_valid=[]
for caption in captions:
c = caption.lower()
caption_valid=True
for w in c.split():
if w not in embedding_full.dictionary:
caption_valid=False
if w not in vocab:
caption_valid=False
if caption_valid:
captions_valid.append( c )
words_used.update( c.split() )
if len(captions_valid)>0:
img_to_valid_captions[img]=captions_valid
captions_valid_total += len(captions_valid)
else:
#print("Throwing out %s" % (img,), captions)
pass
print("%d images remain of %d. %d captions remain of %d. Words used : %d" % (
len(img_to_valid_captions.keys()), len(good_img_to_captions.keys()),
captions_valid_total, captions_total,
len(words_used),)
)
# 31640 images remain of 31783. 135115 captions remain of 158915. Words used : 7399 (5 min appearances overall)
# 31522 images remain of 31783. 133106 captions remain of 158915. Words used : 6941 (5 min images)
# So, we only got rid of ~150 images, but 23k captions... if we require 5 mentions minimum
# And only got rid of ~250 images, but 25k captions... if we require 5 minimum image appearances
Explanation: Filter images and vocab jointly
End of explanation
# Construct an ordered word list:
action_words = "{MASK} {UNK} {START} {STOP} {EXTRA}".split(' ')
# Then want the 'real words' to have :
# all the stop_words_seen (so that these can be identified separately)
# followed by the remainder of the words_used, in frequency order
def words_in_freq_order(word_arr, word_freq=word_freq):
# Create list of freq, word pairs
word_arr_freq = [ (word_freq[w], w) for w in word_arr]
return [ w for f,w in sorted(word_arr_freq, reverse=True) ]
stop_words_sorted = words_in_freq_order( stop_words_seen )
rarer_words_sorted = words_in_freq_order( words_used - stop_words_seen )
#", ".join( stop_words_sorted )
#", ".join( words_in_freq_order( words_used )[0:100] )
#", ".join( rarer_words_sorted[0:100] )
len(words_used), len(action_words), len(stop_words_sorted), len(rarer_words_sorted)
EMBEDDING_DIM = embedding_full.word_vectors.shape[1]
action_embeddings = np.zeros( (len(action_words), EMBEDDING_DIM,), dtype='float32')
for idx,w in enumerate(action_words):
if idx>0: # Ignore {MASK}
action_embeddings[idx, idx] = 1.0 # Make each row a very simple (but distinct) vector for simplicity
stop_words_idx = [ embedding_full.dictionary[w] for w in stop_words_sorted ]
rarer_words_idx = [ embedding_full.dictionary[w] for w in rarer_words_sorted ]
embedding = np.vstack([
action_embeddings,
embedding_full.word_vectors[ stop_words_idx ],
embedding_full.word_vectors[ rarer_words_idx ],
])
embedding_word_arr = action_words + stop_words_sorted + rarer_words_sorted
#stop_words_idx
Explanation: Assemble a ready-for-use embedding
Let's filter the embedding to make it sleeker, and add some entries up front for RNN convenience
End of explanation
embedding_dictionary = { w:i for i,w in enumerate(embedding_word_arr) }
# Check that this all ties together...
#word_check='{START}' # an action word - not found in GloVe
#word_check='this' # a stop word
word_check='hammer' # a 'rare' word
#embedding_dictionary[word_check]
( embedding[ embedding_dictionary[word_check] ] [0:6],
embedding_full.word_vectors[ embedding_full.dictionary.get( word_check, 0) ] [0:6], )
Explanation: Check that this arrangement makes sense :
End of explanation
np.random.seed(1) # Consistent values for train/test (for this )
save_me = dict(
img_to_captions = img_to_valid_captions,
action_words = action_words,
stop_words = stop_words_sorted,
embedding = embedding,
embedding_word_arr = embedding_word_arr,
img_arr = img_arr_save,
train_test = np.random.random( (len(img_arr_save),) ),
)
if not os.path.exists(output_dir):
os.makedirs(output_dir)
with open( output_filepath, 'wb') as f:
pickle.dump(save_me, f)
print("Corpus saved to '%s'" % (output_filepath,))
Explanation: Finally, save the data into a useful structure
End of explanation |
7,013 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
intervals =[[-2.0,-1.1],[-1.0,-0.6],[-0.5,-0.1],[0.0,0.4],[0.5,0.9],[1.0,1.4],
[1.5,1.9],[2.0,2.4],[2.5,2.9],[3.0,3.4],[3.5,3.9],[4.0,5.0]]
Step1: bins =np.array([-2.0,-1.0,-0.5,0.0,0.5,1.0,1.5,2.0,2.5,3.0,3.5,4.0,5.0])
orig_hist = np.array(h).astype(float)
norm_hist = orig_hist/float(sum(orig_hist))
mid_points = (bins[1
Step2: h = df.iloc[10,mask]
bins =np.array([-2.0,-1.0,-0.5,0.0,0.5,1.0,1.5,2.0,2.5,3.0,3.5,4.0,5.0])
orig_hist = np.array(h).astype(float)
norm_hist = orig_hist/float(sum(orig_hist))
mid_points = (bins[1 | Python Code:
def fit_normal_to_hist(h):
if not all(h==0):
bins =np.array([-2.0,-1.0,-0.5,0.0,0.5,1.0,1.5,2.0,2.5,3.0,3.5,4.0,5.0])
orig_hist = np.array(h).astype(float)
norm_hist = orig_hist/float(sum(orig_hist))
mid_points = (bins[1:] + bins[:-1])/2
popt,pcov = opt.curve_fit(lambda x,mu,sig: stats.norm.pdf(x,mu,sig), mid_points,norm_hist)
else:
popt = [float('nan'),float('nan')]
return popt[1]
def ZL_std(h):
intervals =[[-2.0,-1.1],[-1.0,-0.6],[-0.5,-0.1],[0.0,0.4],[0.5,0.9],[1.0,1.4],
[1.5,1.9],[2.0,2.4],[2.5,2.9],[3.0,3.4],[3.5,3.9],[4.0,5.0]]
if not all(h==0):
sum_i1 = 0
sum_i2 = 0
for i in range(1,len(h)):
p = h[i]/100
v1,v2 = intervals[i]
sum_i1 += p*(v2**3 - v1**3)/(3*(v2-v1))
sum_i2 += p*(v2**2 - v1**2)/(2*(v2-v1))
zl_std = np.sqrt(sum_i1 - sum_i2**2)
else:
zl_std = float('nan')
return zl_std
def Hist_std(h):
if not all(h==0):
bins =np.array([-2.0,-1.0,-0.5,0.0,0.5,1.0,1.5,2.0,2.5,3.0,3.5,4.0,5.0])
orig_hist = np.array(h).astype(float)
norm_hist = orig_hist/float(sum(orig_hist))
mid_points = (bins[1:] + bins[:-1])/2
MeanCrude = np.dot(norm_hist,mid_points)
VarCrude = np.dot(norm_hist,(mid_points-MeanCrude)**2)
bin_widths = np.diff(bins)
BinWidth = bin_widths.mean()
VarSheppard = VarCrude - (BinWidth**2)/12 #variance, Sheppard's correction
hist_std = np.sqrt(VarSheppard)
else:
hist_std = float('nan')
return hist_std
Explanation: intervals =[[-2.0,-1.1],[-1.0,-0.6],[-0.5,-0.1],[0.0,0.4],[0.5,0.9],[1.0,1.4],
[1.5,1.9],[2.0,2.4],[2.5,2.9],[3.0,3.4],[3.5,3.9],[4.0,5.0]]
End of explanation
mask = df.columns.str.contains(',')
mask
df.columns[mask]
Explanation: bins =np.array([-2.0,-1.0,-0.5,0.0,0.5,1.0,1.5,2.0,2.5,3.0,3.5,4.0,5.0])
orig_hist = np.array(h).astype(float)
norm_hist = orig_hist/float(sum(orig_hist))
mid_points = (bins[1:] + bins[:-1])/2
bin_widths = np.diff(bins)
MeanCrude = np.dot(norm_hist,mid_points)
MeanCrude
VarCrude = np.dot(norm_hist,(mid_points-MeanCrude)**2)
VarCrude
total = 0
for i in range(0,len(norm_hist)):
total += norm_hist[i]*mid_points[i]
total
for i in range(0,len(norm_hist)):
tt[i] = (mid_points[i] - MeanCrude)**2
tt
VarSheppard = VarCrude - (BinWidth**2)/12
VarSheppard
MeanCrude = sum(Prob . CatMid); %crude mean and variance
VarCrude = sum(Prob . (CatMid-MeanCrude).^2);
BinWidth = mean(diff(CatBounds(:)));
VarSheppard = VarCrude - BinWidth^2/12; %variance, Sheppard's correction
def Entr(h):
if not all(h==0):
bins =np.array([-2.0,-1.0,-0.5,0.0,0.5,1.0,1.5,2.0,2.5,3.0,3.5,4.0,5.0])
orig_hist = np.array(h).astype(float)
norm_hist = orig_hist/float(sum(orig_hist))
bin_widths = np.diff(bins)
np.dot(h,np.log(h/bin_widths))
bin_density = h.astype(float) / np.dot(bin_widths, h)
popt
else:
popt = [float('nan'),float('nan')]
return popt[1]
End of explanation
df['GA_std'] = df.iloc[:,mask].apply(fit_normal_to_hist,axis=1)
df['ZL_std'] = df.iloc[:,mask].apply(ZL_std,axis=1)
df['Hist_std'] = df.iloc[:,mask].apply(Hist_std,axis=1)
df.head(10)
dfList = []
writer = pd.ExcelWriter(out_data + 'PointForecasts.xlsx')
years = [2014,2015,2016]
quarters = [1,2,3,4]
for year in years:
for q in quarters:
f = str(year) + 'Q' + str(q)
fname = f + '.csv'
if os.path.isfile(raw_data_path + '\\' + fname):
raw_df = pd.read_csv(raw_data_path + '\\' + fname,header = True)
# find the row where the growth expectations start
dum = raw_df[raw_df['TARGET_PERIOD'] == 'GROWTH EXPECTATIONS; YEAR-ON-YEAR CHANGE IN REAL GDP'].index[0]
mask_columns = ~raw_df.columns.str.contains('Unnamed')
df = raw_df.iloc[0:dum-1,[0,1,2]]
df['source'] = str(year) + '-Q' + str(q)
df = df.rename(columns={'TARGET_PERIOD':'target','FCT_SOURCE':'id','POINT':'point'})
df = df[['source','target','id','point']]
df['id'] = df['id'].astype('int')
df['point'] = df['point'].astype('float32')
df.to_excel(writer,f,index=False)
dfList.append(df)
writer.save()
Explanation: h = df.iloc[10,mask]
bins =np.array([-2.0,-1.0,-0.5,0.0,0.5,1.0,1.5,2.0,2.5,3.0,3.5,4.0,5.0])
orig_hist = np.array(h).astype(float)
norm_hist = orig_hist/float(sum(orig_hist))
mid_points = (bins[1:] + bins[:-1])/2
popt,pcov = opt.curve_fit(lambda x,mu,sig: stats.norm.pdf(x,mu,sig), mid_points,norm_hist)
popt
for i in range(0,10):
print(i,fit_normal_to_hist(df.iloc[i,mask].astype(float)))
End of explanation |
7,014 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Day 2 pre-class assignment
Goals for today's pre-class assignment
Make sure that you can get a Jupyter notebook up and running!
Learn about algorithms, computer programs, and their relationship
To devise and think about the components of an algorithm for a simple task
Learn about Python, IPython, and IPython notebooks and understand why we're using it in class.
Install NetLogo and verify that it works (we need it for class)
Assignment instructions
Pre-class assignments will be composed of a combination of videos, text to read, and small assignments. The goal of these assignments is to prepare you for class the following day. You should watch the videos and read the text, and then do the assigned work. You also need to provide feedback in the Google Form at the bottom of the notebook. You will be graded on making a good-faith effort, not on correctness!
To make notebook cells that have Python code in them do something, hold down the 'shift' key and then press the 'enter' or 'return' key (you'll have to do this to get movies to run). To edit a cell (to add answers, for example) you double-click, add your text, and then enter it by holding down 'shift' and pressing 'enter'.
This assignment is due by 11
Step1: Algorithms
Step2: Further reading on algorithms and computer programs
note
Step4: Assignment | Python Code:
# The command below this comment imports the functionality that we need to display
# YouTube videos in a Jupyter Notebook. You need to run this cell before you
# run ANY of the YouTube videos.
from IPython.display import YouTubeVideo
Explanation: Day 2 pre-class assignment
Goals for today's pre-class assignment
Make sure that you can get a Jupyter notebook up and running!
Learn about algorithms, computer programs, and their relationship
To devise and think about the components of an algorithm for a simple task
Learn about Python, IPython, and IPython notebooks and understand why we're using it in class.
Install NetLogo and verify that it works (we need it for class)
Assignment instructions
Pre-class assignments will be composed of a combination of videos, text to read, and small assignments. The goal of these assignments is to prepare you for class the following day. You should watch the videos and read the text, and then do the assigned work. You also need to provide feedback in the Google Form at the bottom of the notebook. You will be graded on making a good-faith effort, not on correctness!
To make notebook cells that have Python code in them do something, hold down the 'shift' key and then press the 'enter' or 'return' key (you'll have to do this to get movies to run). To edit a cell (to add answers, for example) you double-click, add your text, and then enter it by holding down 'shift' and pressing 'enter'.
This assignment is due by 11:59 p.m. the day before class, and should be uploaded into the "Pre-class assignments" dropbox folder for Day 2. Submission instructions can be found at the end of the notebook.
End of explanation
# the command below this comment actually displays a specific YouTube video,
# with a given width and height. You can watch the video in full-screen (much higher
# resolution) mode by clicking the little box in the bottom-right corner of the video.
YouTubeVideo("jT0KZ849fak",width=640,height=360)
Explanation: Algorithms
End of explanation
YouTubeVideo("L03BzGmLUUE",width=640,height=360)
Explanation: Further reading on algorithms and computer programs
note: This isn't mandatory, but might be helpful!
Wikipedia page on algorithms
Wikipedia page on computer programs
Assignment: Algorithms and computer programs
Question 1: Come up with an algorithm for a simple task that you do every day (i.e., putting on your shoes). What are the steps of this algorithm?
Put your answer to Question 1 here! (double-click on this text to edit this cell, and hit shift+enter to save the text)
Question 2: Think about the algorithm you devised in the previous question and the video you just watched. Identify the various parts of your algorithm, as defined by the video.
Put your answer to question 2 here! (double-click on this text to edit this cell, and hit shift+enter to save the text)
Python, IPython, and IPython notebooks
End of explanation
from IPython.display import HTML
HTML(
<iframe
src="https://docs.google.com/forms/d/e/1FAIpQLSedVfvn-6kn3oTiyl2IeJglS7twoa8CdkcyLopqv-XWOtnUYQ/viewform"
width="80%"
height="1200px"
frameborder="0"
marginheight="0"
marginwidth="0">
Loading...
</iframe>
)
Explanation: Assignment: Python, IPython, and IPython notebooks, and NetLogo
Question 3: Given the video you watched, why do you think we're using IPython notebooks in class instead of a command-line version of Python?
Put your answer here! (double-click on this text to edit this cell, and hit shift+enter to save the text)
Question 4: This isn't a question so much as a to-do. You need to install the program NetLogo on your computer for tomorrow's class. (You may first need to install Java on your computer to make this work!) To do this, go to the link above, download the appropriate version for your computer, and then follow the installation instructions. To test that it works, do the following:
Start up NetLogo. It may be in the start menu/Dock on your computer, or in the Applications folder.
Go to the "File" menu, then click on "Models Library".
Click on any of the directories and pick a directory and a model.
Click the "setup" button, and then "go". The model should start.
Click "go" again to stop the model.
Congratulations! You've installed NetLogo.
Change the text here to say that you have installed and tested NetLogo. (double-click on this text to edit this cell, and hit shift+enter to save the text)
Assignment wrapup
Please fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment!
End of explanation |
7,015 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using BagIt to tag oceanographic data
BagIt is a packaging format that supports storage of arbitrary digital content. The "bag" consists of arbitrary content and "tags," the metadata files. BagIt packages can be used to facilitate data sharing with federal archive centers - thus ensuring digital preservation of oceanographic datasets within IOOS and its regional associations. NOAA NCEI supports reading from a Web Accessible Folder (WAF) containing bagit archives. For an example please see
Step1: Instead of "bagging" the CSV file we will use this create a metadata rich netCDF file.
We can convert the table to a DSG, Discrete Sampling Geometry, using pocean.dsg. The first thing we need to do is to create a mapping from the data column names to the netCDF axes.
Step2: Now we can create a Orthogonal Multidimensional Timeseries Profile object...
Step3: ... And add some extra metadata before we close the file.
Step4: Time to create the archive for the file with BagIt. We have to create a folder for the bag.
Step5: Now we can create the bag and copy the netCDF file to a data sub-folder.
Step6: Last, but not least, we have to set bag metadata and update the existing bag with it.
Step7: That is it! Simple and efficient!!
The cell below illustrates the bag directory tree.
(Note that the commands below will not work on Windows and some *nix systems may require the installation of the command tree, however, they are only need for this demonstration.)
Step8: We can add more files to the bag as needed. | Python Code:
import os
import pandas as pd
fname = os.path.join("data", "dsg", "timeseriesProfile.csv")
df = pd.read_csv(fname, parse_dates=["time"])
df.head()
Explanation: Using BagIt to tag oceanographic data
BagIt is a packaging format that supports storage of arbitrary digital content. The "bag" consists of arbitrary content and "tags," the metadata files. BagIt packages can be used to facilitate data sharing with federal archive centers - thus ensuring digital preservation of oceanographic datasets within IOOS and its regional associations. NOAA NCEI supports reading from a Web Accessible Folder (WAF) containing bagit archives. For an example please see: http://ncei.axiomdatascience.com/cencoos/
On this notebook we will use the python interface for BagIt to create a "bag" of a time-series profile data. First let us load our data from a comma separated values file (CSV).
End of explanation
axes = {"t": "time", "x": "lon", "y": "lat", "z": "depth"}
Explanation: Instead of "bagging" the CSV file we will use this create a metadata rich netCDF file.
We can convert the table to a DSG, Discrete Sampling Geometry, using pocean.dsg. The first thing we need to do is to create a mapping from the data column names to the netCDF axes.
End of explanation
import os
import tempfile
from pocean.dsg import OrthogonalMultidimensionalTimeseriesProfile as omtsp
output_fp, output = tempfile.mkstemp()
os.close(output_fp)
ncd = omtsp.from_dataframe(df.reset_index(), output=output, axes=axes, mode="a")
Explanation: Now we can create a Orthogonal Multidimensional Timeseries Profile object...
End of explanation
naming_authority = "ioos"
st_id = "Station1"
ncd.naming_authority = naming_authority
ncd.id = st_id
print(ncd)
ncd.close()
Explanation: ... And add some extra metadata before we close the file.
End of explanation
temp_bagit_folder = tempfile.mkdtemp()
temp_data_folder = os.path.join(temp_bagit_folder, "data")
Explanation: Time to create the archive for the file with BagIt. We have to create a folder for the bag.
End of explanation
import shutil
import bagit
bag = bagit.make_bag(temp_bagit_folder, checksum=["sha256"])
shutil.copy2(output, temp_data_folder + "/parameter1.nc")
Explanation: Now we can create the bag and copy the netCDF file to a data sub-folder.
End of explanation
urn = "urn:ioos:station:{naming_authority}:{st_id}".format(
naming_authority=naming_authority, st_id=st_id
)
bag_meta = {
"Bag-Count": "1 of 1",
"Bag-Group-Identifier": "ioos_bagit_testing",
"Contact-Name": "Kyle Wilcox",
"Contact-Phone": "907-230-0304",
"Contact-Email": "[email protected]",
"External-Identifier": urn,
"External-Description": "Sensor data from station {}".format(urn),
"Internal-Sender-Identifier": urn,
"Internal-Sender-Description": "Station - URN:{}".format(urn),
"Organization-address": "1016 W 6th Ave, Ste. 105, Anchorage, AK 99501, USA",
"Source-Organization": "Axiom Data Science",
}
bag.info.update(bag_meta)
bag.save(manifests=True, processes=4)
Explanation: Last, but not least, we have to set bag metadata and update the existing bag with it.
End of explanation
!tree $temp_bagit_folder
!cat $temp_bagit_folder/manifest-sha256.txt
Explanation: That is it! Simple and efficient!!
The cell below illustrates the bag directory tree.
(Note that the commands below will not work on Windows and some *nix systems may require the installation of the command tree, however, they are only need for this demonstration.)
End of explanation
shutil.copy2(output, temp_data_folder + "/parameter2.nc")
shutil.copy2(output, temp_data_folder + "/parameter3.nc")
shutil.copy2(output, temp_data_folder + "/parameter4.nc")
bag.save(manifests=True, processes=4)
!tree $temp_bagit_folder
!cat $temp_bagit_folder/manifest-sha256.txt
Explanation: We can add more files to the bag as needed.
End of explanation |
7,016 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Algorithms Exercise 2
Imports
Step2: Peak finding
Write a function find_peaks that finds and returns the indices of the local maxima in a sequence. Your function should
Step3: Here is a string with the first 10000 digits of $\pi$ (after the decimal). Write code to perform the following | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import seaborn as sns
import numpy as np
Explanation: Algorithms Exercise 2
Imports
End of explanation
def find_peaks(a):
Find the indices of the local maxima in a sequence.
# YOUR CODE HERE
#I always start with an empty list k.
k=[]
for i in range(0, len(a)):
#Check to see if the number in index i is greater than the numbers in the adjacent indicies, whilst being in range of the list.
if (i==len(a)-1 or a[i]>a[i+1]) and a[i]>a[i-1]:
k.append(i)
return np.array(k)
p1 = find_peaks([2,0,1,0,2,0,1])
assert np.allclose(p1, np.array([0,2,4,6]))
p2 = find_peaks(np.array([0,1,2,3]))
assert np.allclose(p2, np.array([3]))
p3 = find_peaks([3,2,1,0])
assert np.allclose(p3, np.array([0]))
Explanation: Peak finding
Write a function find_peaks that finds and returns the indices of the local maxima in a sequence. Your function should:
Properly handle local maxima at the endpoints of the input array.
Return a Numpy array of integer indices.
Handle any Python iterable as input.
End of explanation
from sympy import pi, N
pi_digits_str = str(N(pi, 10001))[2:]
# YOUR CODE HERE
h = pi_digits_str
j=[]
for i in h:
j.append(int(i))
n = np.array(j)
v = find_peaks(n)
m = np.diff(v)
f = plt.figure(figsize=(10,6))
plt.hist(m, bins=20)
plt.ylabel('Distance between maxima')
plt.xlabel('Index of maxima')
m
assert True # use this for grading the pi digits histogram
Explanation: Here is a string with the first 10000 digits of $\pi$ (after the decimal). Write code to perform the following:
Convert that string to a Numpy array of integers.
Find the indices of the local maxima in the digits of $\pi$.
Use np.diff to find the distances between consequtive local maxima.
Visualize that distribution using an appropriately customized histogram.
End of explanation |
7,017 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center>
<img src="http
Step1: <div id='sylvester' />
Sylvester Equation
The Sylvester Equation has the following form matricial form
Step2: Why is the vecoperator useful?
This operator is useful because of the following identity for the matrices $A$, $B$, and $C$ that belong to $\mathbb{R}^{n\times n}$
Step3: So, since the residual is decreasing we expect convergence. Let's look at the following example,
Step4: Unfortunately in this case it is divergent.
One alternativs is to implement Algorithm 2 and hope it will converge.
A better alternative is to use GMRes!!
<div id='GMResTest' />
Using the beautiful GMRes
Back to TOC
Step5: This is beautiful!!
We were able to solve a linear system of equations without even requiring to build the 'large' matrix associated to the linear system!
Computing the 'truly' relative residues
Step6: <div id='chal' />
Challenge | Python Code:
import numpy as np
import scipy as sp
from scipy import linalg as la
import scipy.sparse.linalg as spla
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib as mpl
mpl.rcParams['font.size'] = 14
mpl.rcParams['axes.labelsize'] = 20
mpl.rcParams['xtick.labelsize'] = 14
mpl.rcParams['ytick.labelsize'] = 14
Explanation: <center>
<img src="http://sct.inf.utfsm.cl/wp-content/uploads/2020/04/logo_di.png" style="width:60%">
<h1> INF285 - Computación Científica </h1>
<h2> Sylvester Equation with GMRes </h2>
<h2> <a href="#acknowledgements"> [S]cientific [C]omputing [T]eam </a> </h2>
<h2> Version: 1.02</h2>
</center>
<div id='toc' />
Table of Contents
Sylvester Equation
GMRes with afun(x)
Using a Jacobi/Gauss-Seidel as iterative solver
Using the beautiful GMRes
Challenge
Acknowledgements
End of explanation
A = np.array([[1, 3],[2, 4]])
print(A.flatten('F'))
Explanation: <div id='sylvester' />
Sylvester Equation
The Sylvester Equation has the following form matricial form:
$$A\,X+X\,B=C$$
where $A\in\mathbb{R}^{n\times n}$, $B\in\mathbb{R}^{n\times n}$, $C\in\mathbb{R}^{n\times n}$ and $X\in\mathbb{R}^{n\times n}$.
$A$, $B$ and $C$ are given, the problem is to find the matrix $X$.
See https://en.wikipedia.org/wiki/Sylvester_equation.
Thus, an algorithm as PALU o classical version of ietrative solvers can't be applied directly.
If you want to do so, please take a look to the vec operator: https://en.wikipedia.org/wiki/Vectorization_%28mathematics%29 , together with NumPy flatten(order='F') procedure and the NumPy Kronecker product of two arrays, np.kron.
Notice that if you really want to translate the Sylvester Equation into the tradicional form $\widehat{A}\,\widehat{\mathbf{x}}=\widehat{\mathbf{b}}$, just be aware that $\widehat{A}\in\mathbb{R}^{n^2\times n^2}$, $\widehat{\mathbf{x}}\in\mathbb{R}^{n^2}$, and $\widehat{\mathbf{b}}\in\mathbb{R}^{n^2}$.
This means that if $n$ is large, $n^2$ is huge!
And you may not have enough memory to store $\widehat{A}$.
Vectorization: A quick review
In Mathematics the vec operator is an operator that translate a matrix into a vector as follows:
$$
\text{vec}
\left(
\begin{bmatrix}
a & b \
c & d
\end{bmatrix}
\right)
=
\begin{bmatrix}
a \
c \
b \
d
\end{bmatrix}.
$$
This can be achieved with the flatten function from NumPy with the parameter F. Consider the following numerical example:
End of explanation
def solve_JGS_iterative_Sylvester(A,B,C,m,alg=1):
if alg==1:
# Algorithm 1
# AX+XB=C
# X=A^{-1}(C-XB)
# X^{(i+1)}=A^{-1}(C-X^{(i)} B)
X0 = np.zeros_like(A)
X1 = np.zeros_like(A)
for i in range(m):
X1=np.linalg.solve(A,C-np.dot(X0,B))
X0=X1
# Just 'printing' a residual!
print(np.linalg.norm(np.dot(A,X1)+np.dot(X1,B)-C))
return X1
# elif algo==2: # TO DO !!!!!!!!!!!!!
# Algorithm 2
# AX+XB=C
# X = (C-AX)B^{-1}
# X^{(i+1)}=(C-A X^{(i)})B^{-1}
# How do we implement this? Hint: You only need to use np.linalg.solve in a convenient way.
# First TEST
n = 10
np.random.seed(0)
A = np.random.rand(n,n)+10*np.eye(n)
#print(np.linalg.eigvals(A))
B = np.random.rand(n,n)
#print(np.linalg.eigvals(B))
C = np.random.rand(n,n)
X_JGS=solve_JGS_iterative_Sylvester(A,B,C,10)
Explanation: Why is the vecoperator useful?
This operator is useful because of the following identity for the matrices $A$, $B$, and $C$ that belong to $\mathbb{R}^{n\times n}$:
$$
\text{vec}
\left(
A\,B\,C
\right)
=
(C^T \otimes A)\,\text{vec}(B),
$$
where $\otimes$ is the Kronecker product.
This implies that the Sylvester equation $A\,X+X\,B=C$, after adding two identity matrices $I\in\mathbb{R}^{n\times n}$ conveniently we obtain,
$$
A\,X\,I+I\,X\,B=C,
$$
so, if we apply the vecoperator we obtain,
$$
\begin{align}
\text{vec}(A\,X\,I+I\,X\,B) & = \text{vec}(C)\
\text{vec}(A\,X\,I)+\text{vec}(I\,X\,B) & = \text{vec}(C)\
(I \otimes A)\,\text{vec}(X)+(B^T \otimes I)\text{vec}(X) & = \text{vec}(C)\
\left((I \otimes A)+(B^T \otimes I)\right)\text{vec}(X) & = \text{vec}(C).
\end{align}
$$
Thus, the true linear system we are solving is the following:
$$
\widehat{A}\,\widehat{\mathbf{x}}=\widehat{\mathbf{b}},
$$
where,
$$
\begin{align}
\widehat{A} & = (I \otimes A)+(B^T \otimes I) \in \mathbb{R}^{n^2\times n^2},\
\widehat{\mathbf{x}} &= \text{vec}(X) \in \mathbb{R}^{n^2},\
\widehat{\mathbf{b}} &= \text{vec}(C) \in \mathbb{R}^{n^2}.
\end{align}
$$
Why don't we just use PALU with $\widehat{A}\,\widehat{\mathbf{x}}=\widehat{\mathbf{b}}$?
Because we may run out of memory!
Notice that the original matrices are of size $n \times n$, but $\widehat{A}$ is of size $n^2 \times n^2 $, which may be huge depending on the value of $n$!
<div id='GMRes' />
GMRes with afun($\mathbf{x}$)
Back to TOC
GMRes is a member of the family of Krylov methods. It finds an approximation of $\mathbf{x}$ restricted to live on the Krylov sub-space $\mathcal{K_k}$, where $\mathcal{K_k}={\mathbf{r}_0, A\,\mathbf{r}_0, A^2\,\mathbf{r}_0, \cdots, A^{k-1}\,\mathbf{r}_0}$ and $\mathbf{r}_0 = \mathbf{b} - A\,\mathbf{x}_0$ is the residual vector of the initial guess.
The idea behind this method is to look for improvements to the initial guess $\mathbf{x}_0$ in the Krylov space. At the $k$-th iteration, we enlarge the Krylov space by adding $A^k\,\mathbf{r}_0$, reorthogonalize the basis, and then use least squares to find the best improvement to add to $\mathbf{x}_0$.
The algorithm is as follows:
Generalized Minimum Residual Method
$\mathbf{x}0$ = initial guess<br>
$\mathbf{r}$ = $\mathbf{b} - A\,\mathbf{x}_0$ = $\mathbf{b} - $<span style="color:blue">afun</span>$(\mathbf{x}_0)$<br>
$\mathbf{q}_1$ = $\mathbf{r} / \|\mathbf{r}\|_2$<br>
for $k = 1, ..., m$<br>
$\qquad \ \ \mathbf{y} = A\,\mathbf{q}_k$ = <span style="color:blue">afun</span>$(\mathbf{q}_k)$ <br>
$\qquad$ for $j = 1,2,...,k$ <br>
$\qquad \qquad$ $h{jk} = \mathbf{q}j^*\,\mathbf{y}$<br>
$\qquad \qquad$ $\mathbf{y} = \mathbf{y} - h{jk}\, \mathbf{q}j$<br>
$\qquad$ end<br>
$\qquad \ h{k+1,k} = \|y\|2 \qquad$ (If $h{k+1,k} = 0$ , skip next line and terminate at bottom.) <br>
$\qquad \ \mathbf{q}{k+1} = \mathbf{y}/h{k+1,k}$ <br>
$\qquad$ Minimize $\left\|\widehat{H}_k\, \mathbf{c}_k - [\|\mathbf{r}\|_2 \ 0 \ 0 \ ... \ 0]^T \right\|_2$ for $\mathbf{c}_k$ <br>
$\qquad$ $\mathbf{x}_k = Q_k \, \mathbf{c}_k + \mathbf{x}_0$ <br>
end
<div id='jacAndGSIterative' />
Using a Jacobi/Gauss-Seidel iterative solver
Back to TOC
The Sylvester Equation could be solve iteratively by building matrix-fix-point-iterative procedures.
For instance, we propose 2 algorihtms, the first one is derived as follows,
\begin{align}
A\,X+X\,B &= C,\
X &= A^{-1}\,(C-X\,B),\
X^{(0)} &= \underline{\underline{0}},\
X^{(i+1)} &= A^{-1}\,(C-X^{(i)}\,B),
\end{align}
where $\underline{\underline{0}}$ is the matrix of dimension $n\times n$ o $0$.
Recall please that we don't compute the inverse of a matrix in general, i.e. we don't need to compute $A^{-1}$, but what we do is to solve the corresponding linear system of equations.
The second alternative is the following,
\begin{align}
A\,X+X\,B &= C,\
X &= A^{-1}\,(C-A\,X)\,B^{-1},\
X^{(0)} &= \underline{\underline{0}},\
X^{(i+1)} &= A^{-1}\,(C-A\,X^{(i)})\,B^{-1},
\end{align}
where, again, please don't compute the inverse of $B$.
However in this case is a bit more challenging the implementation since it is a bit trickier to get the corresponding linear system of equation, just think out of the transpose-box!
End of explanation
# Second TEST
n = 10
np.random.seed(0)
A = np.random.rand(n,n)+2*np.eye(n)
#print(np.linalg.eigvals(A))
B = np.random.rand(n,n)
#print(np.linalg.eigvals(B))
C = np.random.rand(n,n)
X_JGS=solve_JGS_iterative_Sylvester(A,B,C,10)
Explanation: So, since the residual is decreasing we expect convergence. Let's look at the following example,
End of explanation
# This function computes the 'matrix-vector' product of the matrix we don't have explicitly stored!!
def compute_matrix_vector_product(x,A,B,n):
X = np.reshape(x,(n,n),order='F')
out = np.dot(A,X)+np.dot(X,B)
return out.flatten('F')
# This is part of the interface that SciPy requires.
Ax = lambda x: compute_matrix_vector_product(x,A,B,n)
# This is the famous 'afun'!!
afun = spla.LinearOperator((n**2, n**2), matvec=Ax)
# Just running GMRes
x, exitCode = spla.gmres(afun, C.flatten('F'), tol=1e-10)
# Just reshaping the solution vector back into the n-by-n matrix X_GMRes
X_GMRes = np.reshape(x,(n,n),order='F')
print('residual: ',np.linalg.norm(np.dot(A,X_GMRes)+np.dot(X_GMRes,B)-C))
#print(X_GMRes)
Explanation: Unfortunately, in this case the iteration diverges.
One alternative is to implement Algorithm 2 and hope it converges.
A better alternative is to use GMRes!!
<div id='GMResTest' />
Using the beautiful GMRes
Back to TOC
End of explanation
Ax_JGS = Ax(X_JGS.flatten(order='F'))
Ax_GMRes = Ax(X_GMRes.flatten(order='F'))
c = C.flatten(order='F')
print(np.linalg.norm(Ax_JGS-c)/np.linalg.norm(c))
print(np.linalg.norm(Ax_GMRes-c)/np.linalg.norm(c))
Explanation: This is beautiful!!
We were able to solve the linear system of equations without ever building the 'large' matrix associated with it!
Computing the 'true' relative residuals
End of explanation
# This is a very instructive implementation of GMRes.
def GMRes_Ax(A, b, x0=np.array([0.0]), m=10, flag_display=True, threshold=1e-12):
n = len(b)
if len(x0)==1:
x0=np.zeros(n)
r0 = b - np.dot(A, x0)
nr0=np.linalg.norm(r0)
out_res=np.array(nr0)
Q = np.zeros((n,n))
H = np.zeros((n,n))
Q[:,0] = r0 / nr0
flag_break=False
for k in np.arange(np.min((m,n))):
y = np.dot(A, Q[:,k])
if flag_display:
print('||y||=',np.linalg.norm(y))
for j in np.arange(k+1):
H[j][k] = np.dot(Q[:,j], y)
if flag_display:
print('H[',j,'][',k,']=',H[j][k])
y = y - np.dot(H[j][k],Q[:,j])
if flag_display:
print('||y||=',np.linalg.norm(y))
# All but the last equation are treated equally. Why?
if k+1<n:
H[k+1][k] = np.linalg.norm(y)
if flag_display:
print('H[',k+1,'][',k,']=',H[k+1][k])
if (np.abs(H[k+1][k]) > 1e-16):
Q[:,k+1] = y/H[k+1][k]
else:
print('flag_break has been activated')
flag_break=True
# Do you remember e_1? The canonical vector.
e1 = np.zeros((k+1)+1)
e1[0]=1
H_tilde=H[0:(k+1)+1,0:k+1]
else:
H_tilde=H[0:k+1,0:k+1]
# Solving the 'SMALL' least square problem.
# This could be improved with Givens rotations!
ck = np.linalg.lstsq(H_tilde, nr0*e1)[0]
if k+1<n:
x = x0 + np.dot(Q[:,0:(k+1)], ck)
else:
x = x0 + np.dot(Q, ck)
# Why is 'norm_small' equal to 'norm_full'?
norm_small=np.linalg.norm(np.dot(H_tilde,ck)-nr0*e1)
out_res = np.append(out_res,norm_small)
if flag_display:
norm_full=np.linalg.norm(b-np.dot(A,x))
print('..........||b-A\,x_k||=',norm_full)
print('..........||H_k\,c_k-nr0*e1||',norm_small);
if flag_break:
if flag_display:
print('EXIT: flag_break=True')
break
if norm_small<threshold:
if flag_display:
print('EXIT: norm_small<threshold')
break
return x,out_res
Explanation: <div id='chal' />
Challenge: What do we need to change in our implementation of GMRes to be able to use the lambda function "Ax"?
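One possible answer, sketched below (an illustration only, not necessarily the intended solution): accept a matrix-vector callable instead of $A$ and replace the two np.dot(A, ...) products with calls to it.

```python
import numpy as np

def GMRes_afun(afun, b, x0=None, m=10, threshold=1e-12):
    # Same Arnoldi + least-squares loop as GMRes_Ax, but every product with A
    # goes through the callable afun(v) ~ A @ v, so A never needs to be stored.
    n = len(b)
    x0 = np.zeros(n) if x0 is None else x0
    r0 = b - afun(x0)                      # was: b - np.dot(A, x0)
    nr0 = np.linalg.norm(r0)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    Q[:, 0] = r0 / nr0
    x = x0.copy()
    for k in range(m):
        y = afun(Q[:, k])                  # was: np.dot(A, Q[:, k])
        for j in range(k + 1):
            H[j, k] = np.dot(Q[:, j], y)
            y = y - H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(y)
        e1 = np.zeros(k + 2)
        e1[0] = nr0
        ck = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)[0]
        x = x0 + Q[:, :k + 1] @ ck
        res = np.linalg.norm(H[:k + 2, :k + 1] @ ck - e1)
        if H[k + 1, k] < 1e-16 or res < threshold:
            break
        Q[:, k + 1] = y / H[k + 1, k]
    return x

# For the Sylvester system above, for example:
# X = np.reshape(GMRes_afun(Ax, C.flatten('F'), m=50), (n, n), order='F')
```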
Back to TOC
End of explanation |
7,018 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Whitening evoked data with a noise covariance
Evoked data are loaded and then whitened using a given noise covariance
matrix. It's an excellent quality check to see if baseline signals match
the assumption of Gaussian white noise during the baseline period.
Covariance estimation and diagnostic plots are based on
Step1: Set parameters
Step2: Compute covariance using automated regularization
Step3: Show the evoked data
Step4: We can then show whitening for our various noise covariance estimates.
Here we should look to see if baseline signals match the
assumption of Gaussian white noise. we expect values centered at
0 within 2 standard deviations for 95% of the time points.
For the Global field power we expect a value of 1. | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
# Denis A. Engemann <[email protected]>
#
# License: BSD (3-clause)
import mne
from mne import io
from mne.datasets import sample
from mne.cov import compute_covariance
print(__doc__)
Explanation: Whitening evoked data with a noise covariance
Evoked data are loaded and then whitened using a given noise covariance
matrix. It's an excellent quality check to see if baseline signals match
the assumption of Gaussian white noise during the baseline period.
Covariance estimation and diagnostic plots are based on
:footcite:EngemannGramfort2015.
References
.. footbibliography::
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
raw = io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, 40, n_jobs=1, fir_design='firwin')
raw.info['bads'] += ['MEG 2443'] # bads + 1 more
events = mne.read_events(event_fname)
# let's look at rare events, button presses
event_id, tmin, tmax = 2, -0.2, 0.5
reject = dict(mag=4e-12, grad=4000e-13, eeg=80e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=('meg', 'eeg'),
baseline=None, reject=reject, preload=True)
# Uncomment next line to use fewer samples and study regularization effects
# epochs = epochs[:20] # For your data, use as many samples as you can!
Explanation: Set parameters
End of explanation
method_params = dict(diagonal_fixed=dict(mag=0.01, grad=0.01, eeg=0.01))
noise_covs = compute_covariance(epochs, tmin=None, tmax=0, method='auto',
return_estimators=True, verbose=True, n_jobs=1,
projs=None, rank=None,
method_params=method_params)
# With "return_estimator=True" all estimated covariances sorted
# by log-likelihood are returned.
print('Covariance estimates sorted from best to worst')
for c in noise_covs:
print("%s : %s" % (c['method'], c['loglik']))
Explanation: Compute covariance using automated regularization
End of explanation
evoked = epochs.average()
evoked.plot(time_unit='s') # plot evoked response
Explanation: Show the evoked data:
End of explanation
evoked.plot_white(noise_covs, time_unit='s')
Explanation: We can then show whitening for our various noise covariance estimates.
Here we should look to see if baseline signals match the
assumption of Gaussian white noise. We expect values centered at
0 within 2 standard deviations for 95% of the time points.
For the Global field power we expect a value of 1.
End of explanation |
7,019 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial
Overview
Ray is a Python-based distributed execution $\bf \text{engine}$. The same code can be run on a single machine to achieve efficient multiprocessing, and it can be used on a cluster for large computations.
When using Ray, several processes are involved.
• Multiple $\bf \text{worker}$ processes execute tasks and store results in object stores. Each worker is a separate process.
• One $\bf \text{object store}$ per node stores immutable objects in shared memory and allows workers to efficiently share objects on the same node with minimal copying and deserialization.
• One $\bf \text{local scheduler}$ per node assigns tasks to workers on the same node.
• A $\bf \text{global scheduler}$ receives tasks from local schedulers and assigns them to other local schedulers.
• A $\bf \text{driver}$ is the Python process that the user controls. For example, if the user is running a script or using a Python shell, then the driver is the Python process that runs the script or the shell. A driver is similar to a worker in that it can submit tasks to its local scheduler and get objects from the object store, but it is different in that the local scheduler will not assign tasks to the driver to be executed.
• A $\bf \text{Redis}$ server maintains much of the system’s state. For example, it keeps track of which objects live on which machines and of the task specifications (but not data). It can also be queried directly for debugging purposes.
## Starting Ray
To start Ray, start Python and run the following commands.
Step1: Immutable remote objects
In Ray, we can create and compute on objects. We refer to these objects as $ \bf \text{remote objects}$, and we use $\bf \text{object IDs}$ to refer to them. Remote objects are stored in $\bf \text{object stores}$, and there is $\bf \text{one}$ object store $\bf \text{per node}$ in the cluster. In the cluster setting, we may $\bf \text{not}$ actually know which machine each object lives on.
An $\bf \text{object ID}$ is essentially a unique ID that can be used to refer to a remote object. If you’re familiar with Futures, our object IDs are conceptually similar.
We assume that remote objects are $\bf \text{immutable}$. That is, their values cannot be changed after creation. $\it\text{This allows remote objects to be replicated in multiple object stores without}$$\it\text{needing to synchronize the copies}$.
Put and Get
The commands ray.get and ray.put can be used to convert between Python objects and object IDs, as shown in the example below.
Step2: The command ray.put(x) would be run by a worker process or by the driver process (the driver process is the one running your script). It takes a Python object and copies it to the local object store (here local means on the same node). Once the object has been stored in the object store, its value cannot be changed.
In addition, ray.put(x) returns an object ID, which is essentially an ID that can be used to refer to the newly created remote object. If we save the object ID in a variable with x_id = ray.put(x), then we can pass x_id into remote functions, and those remote functions will operate on the corresponding remote object.
The command ray.get(x_id) takes an object ID and creates a Python object from the corresponding remote object. For some objects like arrays, we can use shared memory and avoid copying the object. For other objects, this copies the object from the object store to the worker process’s heap. If the remote object corresponding to the object ID x_id does not live on the same node as the worker that calls ray.get(x_id), then the remote object will first be transferred from an object store that has it to the object store that needs it.
Step3: If the remote object corresponding to the object ID x_id has not been created yet, the command ray.get(x_id) will wait until the remote object has been created.
A very common use case of ray.get is to get a list of object IDs. In this case, you can call ray. get(object_ids) where object_ids is a list of object IDs.
Step4: Asynchronous Computation in Ray
Ray enables arbitrary Python functions to be executed asynchronously. This is done by designating a Python function as a $\bf \text{remote function}$.
For example, a normal Python function looks like this.
Step5: A remote function looks like this.
Step6: Remote functions
Whereas calling add1(1,2) returns 3 and causes the Python interpreter to block until the computation has finished, calling add2.remote(1, 2) immediately returns an object ID and creates a task. The task will be scheduled by the system and executed asynchronously (potentially on a different machine). When the task finishes executing, its return value will be stored in the object store.
Step7: The following simple example demonstrates how asynchronous tasks can be used to parallelize computation.
Step8: There is a sharp distinction between $\it \text{submitting a task}$ and $\it \text{executing the task}$. When a remote function is called, the task of executing that function is $\bf \text{submitted to a local scheduler}$, and object IDs for the outputs of the task are immediately returned. However, the task will not be executed until the system actually schedules the task on a worker. Task execution is not done lazily. The system moves the input data to the task, and the task will execute $\bf \text{as soon as}$ its input dependencies are available and there are enough resources for the computation.
$\bf \text{When a task is submitted, each argument may be passed in by value or by object ID}$. For example, these lines have the same behavior.
Step9: Remote functions $\bf \text{never}$ return actual values, they always return object IDs.
When the remote function is actually executed, it operates on $\bf \text{Python objects}$. That is, if the remote function was called with any object IDs, the system will retrieve the corresponding objects from the object store.
Note that a remote function can return multiple object IDs.
Step10: Expressing dependencies between tasks
Programmers can express dependencies between tasks by passing the object ID output of one task as an argument to another task. For example, we can launch three tasks as follows, each of which depends on the previous task.
Step11: The second task above will not execute until the first has finished, and the third will not execute until the second has finished. In this example, there are no opportunities for parallelism.
The ability to compose tasks makes it easy to express interesting dependencies. Consider the following implementation of a tree reduce.
Step12: Remote Functions Within Remote Functions
So far, we have been calling remote functions only from the driver. But worker processes can also call remote func- tions. To illustrate this, consider the following example. | Python Code:
import ray
ray.init("172.56.22.22:11592")
Explanation: Tutorial
Overview
Ray is a Python-based distributed execution $\bf \text{engine}$. The same code can be run on a single machine to achieve efficient multiprocessing, and it can be used on a cluster for large computations.
When using Ray, several processes are involved.
• Multiple $\bf \text{worker}$ processes execute tasks and store results in object stores. Each worker is a separate process.
• One $\bf \text{object store}$ per node stores immutable objects in shared memory and allows workers to efficiently share objects on the same node with minimal copying and deserialization.
• One $\bf \text{local scheduler}$ per node assigns tasks to workers on the same node.
• A $\bf \text{global scheduler}$ receives tasks from local schedulers and assigns them to other local schedulers.
• A $\bf \text{driver}$ is the Python process that the user controls. For example, if the user is running a script or using a Python shell, then the driver is the Python process that runs the script or the shell. A driver is similar to a worker in that it can submit tasks to its local scheduler and get objects from the object store, but it is different in that the local scheduler will not assign tasks to the driver to be executed.
• A $\bf \text{Redis}$ server maintains much of the system’s state. For example, it keeps track of which objects live on which machines and of the task specifications (but not data). It can also be queried directly for debugging purposes.
## Starting Ray
To start Ray, start Python and run the following commands.
End of explanation
x = "example"
ray.put(x) # Object ID
Explanation: Immutable remote objects
In Ray, we can create and compute on objects. We refer to these objects as $ \bf \text{remote objects}$, and we use $\bf \text{object IDs}$ to refer to them. Remote objects are stored in $\bf \text{object stores}$, and there is $\bf \text{one}$ object store $\bf \text{per node}$ in the cluster. In the cluster setting, we may $\bf \text{not}$ actually know which machine each object lives on.
An $\bf \text{object ID}$ is essentially a unique ID that can be used to refer to a remote object. If you’re familiar with Futures, our object IDs are conceptually similar.
We assume that remote objects are $\bf \text{immutable}$. That is, their values cannot be changed after creation. $\it\text{This allows remote objects to be replicated in multiple object stores without}$$\it\text{needing to synchronize the copies}$.
Put and Get
The commands ray.get and ray.put can be used to convert between Python objects and object IDs, as shown in the example below.
End of explanation
x_id = ray.put("example")
ray.get(x_id) # "example"
Explanation: The command ray.put(x) would be run by a worker process or by the driver process (the driver process is the one running your script). It takes a Python object and copies it to the local object store (here local means on the same node). Once the object has been stored in the object store, its value cannot be changed.
In addition, ray.put(x) returns an object ID, which is essentially an ID that can be used to refer to the newly created remote object. If we save the object ID in a variable with x_id = ray.put(x), then we can pass x_id into remote functions, and those remote functions will operate on the corresponding remote object.
The command ray.get(x_id) takes an object ID and creates a Python object from the corresponding remote object. For some objects like arrays, we can use shared memory and avoid copying the object. For other objects, this copies the object from the object store to the worker process’s heap. If the remote object corresponding to the object ID x_id does not live on the same node as the worker that calls ray.get(x_id), then the remote object will first be transferred from an object store that has it to the object store that needs it.
End of explanation
result_ids = [ray.put(i) for i in range(10)]
ray.get(result_ids) # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
Explanation: If the remote object corresponding to the object ID x_id has not been created yet, the command ray.get(x_id) will wait until the remote object has been created.
A very common use case of ray.get is to get a list of object IDs. In this case, you can call ray. get(object_ids) where object_ids is a list of object IDs.
End of explanation
def add1(a, b):
return a + b
Explanation: Asynchronous Computation in Ray
Ray enables arbitrary Python functions to be executed asynchronously. This is done by designating a Python function as a $\bf \text{remote function}$.
For example, a normal Python function looks like this.
End of explanation
@ray.remote
def add2(a, b):
return a + b
Explanation: A remote function looks like this.
End of explanation
x_id = add2.remote(1, 2)
ray.get(x_id) # 3
Explanation: Remote functions
Whereas calling add1(1,2) returns 3 and causes the Python interpreter to block until the computation has finished, calling add2.remote(1, 2) immediately returns an object ID and creates a task. The task will be scheduled by the system and executed asynchronously (potentially on a different machine). When the task finishes executing, its return value will be stored in the object store.
End of explanation
import time
def f1():
time.sleep(1)
@ray.remote
def f2():
time.sleep(1)
# The following takes ten seconds.
[f1() for _ in range(10)]
# The following takes one second (assuming the system has at least ten CPUs).
ray.get([f2.remote() for _ in range(10)])
Explanation: The following simple example demonstrates how asynchronous tasks can be used to parallelize computation.
End of explanation
add2.remote(1, 2)
add2.remote(1, ray.put(2))
add2.remote(ray.put(1), ray.put(2))
Explanation: There is a sharp distinction between $\it \text{submitting a task}$ and $\it \text{executing the task}$. When a remote function is called, the task of executing that function is $\bf \text{submitted to a local scheduler}$, and object IDs for the outputs of the task are immediately returned. However, the task will not be executed until the system actually schedules the task on a worker. Task execution is not done lazily. The system moves the input data to the task, and the task will execute $\bf \text{as soon as}$ its input dependencies are available and there are enough resources for the computation.
$\bf \text{When a task is submitted, each argument may be passed in by value or by object ID}$. For example, these lines have the same behavior.
End of explanation
@ray.remote(num_return_vals=3)
def return_multiple():
return 1, 2, 3
a_id, b_id, c_id = return_multiple.remote()
Explanation: Remote functions $\bf \text{never}$ return actual values, they always return object IDs.
When the remote function is actually executed, it operates on $\bf \text{Python objects}$. That is, if the remote function was called with any object IDs, the system will retrieve the corresponding objects from the object store.
Note that a remote function can return multiple object IDs.
End of explanation
@ray.remote
def f(x):
return x + 1
x = f.remote(0)
y = f.remote(x)
z = f.remote(y)
ray.get(z) # 3
Explanation: Expressing dependencies between tasks
Programmers can express dependencies between tasks by passing the object ID output of one task as an argument to another task. For example, we can launch three tasks as follows, each of which depends on the previous task.
End of explanation
import numpy as np
@ray.remote
def generate_data():
return np.random.normal(size=1000)
@ray.remote
def aggregate_data(x, y):
return x + y
# Generate some random data. This launches 100 tasks that will be scheduled
# on various nodes. The resulting data will be distributed around the
# cluster.
data = [generate_data.remote() for _ in range(100)]
# Perform a tree reduce.
while len(data) > 1:
data.append(aggregate_data.remote(data.pop(0), data.pop(0)))
# Fetch the result.
ray.get(data)
Explanation: The second task above will not execute until the first has finished, and the third will not execute until the second has finished. In this example, there are no opportunities for parallelism.
The ability to compose tasks makes it easy to express interesting dependencies. Consider the following implementation of a tree reduce.
End of explanation
@ray.remote
def sub_experiment(i, j):
# Run the jth sub-experiment for the ith experiment.
return i + j
@ray.remote
def run_experiment(i):
sub_results = []
# Launch tasks to perform 10 sub-experiments in parallel.
for j in range(10):
sub_results.append(sub_experiment.remote(i, j))
# Return the sum of the results of the sub-experiments.
return sum(ray.get(sub_results))
results = [run_experiment.remote(i) for i in range(5)]
ray.get(results) # [45, 55, 65, 75, 85]
Explanation: Remote Functions Within Remote Functions
So far, we have been calling remote functions only from the driver. But worker processes can also call remote func- tions. To illustrate this, consider the following example.
End of explanation |
7,020 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
https
Step1: Certain functions in the itertools module may be useful for computing permutations | Python Code:
assert 65 ^ 42 == 107
assert 107 ^ 42 == 65
assert ord('a') == 97
assert chr(97) == 'a'
Explanation: https://projecteuler.net/problem=59
Each character on a computer is assigned a unique code and the preferred standard is ASCII (American Standard Code for Information Interchange). For example, uppercase A = 65, asterisk (*) = 42, and lowercase k = 107.
A modern encryption method is to take a text file, convert the bytes to ASCII, then XOR each byte with a given value, taken from a secret key. The advantage with the XOR function is that using the same encryption key on the cipher text, restores the plain text; for example, 65 XOR 42 = 107, then 107 XOR 42 = 65.
For unbreakable encryption, the key is the same length as the plain text message, and the key is made up of random bytes. The user would keep the encrypted message and the encryption key in different locations, and without both "halves", it is impossible to decrypt the message.
Unfortunately, this method is impractical for most users, so the modified method is to use a password as a key. If the password is shorter than the message, which is likely, the key is repeated cyclically throughout the message. The balance for this method is using a sufficiently long password key for security, but short enough to be memorable.
Your task has been made easy, as the encryption key consists of three lower case characters. Using cipher.txt (in this directory), a file containing the encrypted ASCII codes, and the knowledge that the plain text must contain common English words, decrypt the message and find the sum of the ASCII values in the original text.
The following cell shows examples of how to perform XOR in Python and how to go back and forth between characters and integers:
End of explanation
import itertools  # hinted at above, though the brute-force search below uses plain nested loops instead
#Worked with Andrew Exton (Not in the class)
import csv
#Opens the file and creates an empty list
encryptedFile = open("cipher.txt")
encryptedValues = []
#Reads the file and puts the information in a list
for row in csv.reader(encryptedFile):
for value in row:
encryptedValues.append(value)
#Iterating through ASCII values 97 to 123 three times to create 26 ^ 3 combinations of encryptions
for encryptionA in range(97,123):
for encryptionB in range(97,123):
for encryptionC in range(97,123):
encryption = [encryptionA, encryptionB, encryptionC]
encryptionStr = str(chr(encryptionA)) + str(chr(encryptionB)) + str(chr(encryptionC))
#print(encryptionStr)
#Creates an empty string and variable index
message = ""
index = 0
#Loops through each encrypted value in the file and decrypts it appending it to the message string
for encryptedValue in encryptedValues:
decryptedValue = int(encryptedValue) ^ encryption[index]
message += str(chr(decryptedValue))
index += 1
#Because we know the password is three characters, this loops the index 0,1,2 and repeats
if index > 1:
index = 0
            #Searches for the word "the" in message; if found, prints the password, the decrypted
            #message, and the sum of its ASCII values (the quantity the problem asks for)
            isText = message.find("the", 0, len(message))
            if isText != -1:
                print (encryptionStr)
                print(message)
                print(sum(ord(character) for character in message))
# This cell will be used for grading, leave it at the end of the notebook.
Explanation: Certain functions in the itertools module may be useful for computing permutations:
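For instance (one possible way to use them here, not the approach taken in the brute-force solution above), itertools.product can enumerate all 26**3 candidate keys and itertools.cycle can repeat a key cyclically across the message:

```python
from itertools import cycle, product

def search_keys(encrypted_values):
    # encrypted_values: the ASCII codes read from cipher.txt (strings or ints)
    for key in product(range(ord('a'), ord('z') + 1), repeat=3):
        message = ''.join(chr(int(v) ^ k) for v, k in zip(encrypted_values, cycle(key)))
        if ' the ' in message:  # crude check that the decryption looks like English
            yield ''.join(map(chr, key)), sum(ord(c) for c in message)
```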
End of explanation |
7,021 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Date string for filenames
This will be inserted into all filenames (reading and writing)
Step1: Import & combine generation and CO₂ intensity data
CO₂ intensity
Step2: Generation
Step3: State RPS information
Use data from LBNL to determine what year each state's RPS required that renewable energy be procured.
Step4: Dumbell plot of annual state index
Dumbell plot code
https
Step5: This version of the plot is probably better/more flexible
Step6: State CO₂ intensity changes
Step7: Largest and smallest changes
Step8: Population
Step9: State electricity imports/exports
Data is from EIA State Energy Data System. Most recent update was March 9, 2018. Code are described in EIA documentation. Negative values indicate flows out of the state. | Python Code:
file_date = '2018-03-06'
us_state_abbrev = {
'United States': 'US',
'Alabama': 'AL',
'Alaska': 'AK',
'Arizona': 'AZ',
'Arkansas': 'AR',
'California': 'CA',
'Colorado': 'CO',
'Connecticut': 'CT',
'Delaware': 'DE',
'Florida': 'FL',
'Georgia': 'GA',
'Hawaii': 'HI',
'Idaho': 'ID',
'Illinois': 'IL',
'Indiana': 'IN',
'Iowa': 'IA',
'Kansas': 'KS',
'Kentucky': 'KY',
'Louisiana': 'LA',
'Maine': 'ME',
'Maryland': 'MD',
'Massachusetts': 'MA',
'Michigan': 'MI',
'Minnesota': 'MN',
'Mississippi': 'MS',
'Missouri': 'MO',
'Montana': 'MT',
'Nebraska': 'NE',
'Nevada': 'NV',
'New Hampshire': 'NH',
'New Jersey': 'NJ',
'New Mexico': 'NM',
'New York': 'NY',
'North Carolina': 'NC',
'North Dakota': 'ND',
'Ohio': 'OH',
'Oklahoma': 'OK',
'Oregon': 'OR',
'Pennsylvania': 'PA',
'Rhode Island': 'RI',
'South Carolina': 'SC',
'South Dakota': 'SD',
'Tennessee': 'TN',
'Texas': 'TX',
'Utah': 'UT',
'Vermont': 'VT',
'Virginia': 'VA',
'Washington': 'WA',
'West Virginia': 'WV',
'Wisconsin': 'WI',
'Wyoming': 'WY',
}
Explanation: Date string for filenames
This will be inserted into all filenames (reading and writing)
End of explanation
index_path = join(data_path, 'Monthly index states {}.csv'.format(file_date))
monthly_index = pd.read_csv(index_path)
annual_index = monthly_index.groupby(['state', 'year']).sum()
annual_index.drop(['month', 'quarter'], axis=1, inplace=True)
annual_index['index (g/kwh)'] = (annual_index['final co2 (kg)']
/ annual_index['generation (mwh)'])
annual_index.head()
Explanation: Import & combine generation and CO₂ intensity data
CO₂ intensity
End of explanation
gen_path = join(data_path, 'Monthly generation states {}.csv'.format(file_date))
monthly_gen = pd.read_csv(gen_path)
annual_gen = monthly_gen.groupby(['fuel category', 'state', 'year']).sum()
annual_gen.drop(['month', 'quarter'], axis=1, inplace=True)
annual_gen.head()
Explanation: Generation
End of explanation
path = os.path.join(base_path, 'Data storage',
'rps_compliance_data_july_2017.xlsx')
rps = pd.read_excel(path, header=35, usecols='A:V', na_values=['-'])
rps.index = rps.index.droplevel([1, 2])
rps.index.names = ['state', 'Type']
rps_tidy = pd.melt(rps.xs('Total RPS', level='Type').reset_index(),
id_vars='state', var_name='year', value_vars=rps.columns,
value_name='Generation').dropna().sort_values(['state', 'year'])
rps_start = {}
for state in rps_tidy['state'].unique():
first_year = rps_tidy.loc[rps_tidy['state'] == state, 'year'].min()
rps_start[state] = first_year
rps_start
Explanation: State RPS information
Use data from LBNL to determine what year each state's RPS required that renewable energy be procured.
End of explanation
sns.set()
sns.set_style('white')
Explanation: Dumbbell plot of annual state index
Dumbbell plot code
https://github.com/iturki/Data-Analysis-and-Visualization-Projects/blob/master/dumbbell-chart-python/dumbbbell_plot.py
End of explanation
def dumbell_plot(data, years, axis_labels, legend_loc=[], offset_divider=35,
rps_start={}, fig_kwargs={}, figsize=(5,9), legend=True, rps_legend=False,
text_h_align='right', palette='deep', style=None):
'''
This is an example to create a dumbbell chart in Python.
If you would like to provide your data and customize the graph, modify the variables in the section below.
Please be aware that you need matplotlib installed in order for this to work.
'''
prop_cycle = plt.rcParams['axes.prop_cycle']
if style:
plt.style.use(style)
colors = sns.color_palette()
else:
colors = sns.color_palette(palette)
# Styles to be used when plotting the different elements of the graph.
axis_label_style = dict(horizontalalignment=text_h_align,
verticalalignment='center', fontsize=10)
data = data.loc[:, years]
min_data = data.min(axis=1)
max_data = data.max(axis=1)
# Create the figure
fig, ax = plt.subplots(figsize=figsize, **fig_kwargs)
index = range(len(axis_labels))
# Auto-set the state abbr text offset
label_offset = data.max().max() / offset_divider
# Loop N times
for i, (data, year) in enumerate(zip(data.T.values, years)):
color = colors[i]
for value, label, j in zip(data, axis_labels, index):
facecolor = None
if label in rps_start and rps_start[label] <= year:
facecolor = 'w'
ax.scatter(value, j, facecolors=facecolor, zorder=3, color=color,
linewidth=2, s=50)
plt.hlines(y=j, xmin=min_data[j], xmax=max_data[j], zorder=2)
if i == 0:
ax.text(min_data[j] - label_offset, j, label,
**axis_label_style)
plt.yticks(index, ['' for x in axis_labels])
if legend:
for i, year in enumerate(years):
ax.scatter(x=legend_loc[i], y=51, color=colors[i], zorder=3,
s=50, linewidth=2)
plt.text(x=legend_loc[i], y=52, s=str(year), ha='center')
plt.hlines(y=51, xmin=legend_loc[0], zorder=2, xmax=legend_loc[-1])
# Add filled and hollow circles to show RPS status in legend
if rps_legend:
ax.scatter(x=legend_loc[i], y=49, color=colors[i], s=50,
linewidth=2)
plt.text(x=legend_loc[i] * 1.4, y=48.9, s='No RPS', ha='left', va='center')
ax.scatter(x=legend_loc[i], y=47.5, color=colors[i], s=50,
linewidth=2, facecolor='w')
plt.text(x=legend_loc[i] * 1.4, y=47.4, s='RPS', ha='left', va='center')
annual_index.head()
Explanation: This version of the plot is probably better/more flexible
End of explanation
barbell_index = annual_index.pivot_table(values='index (g/kwh)',
index='state', columns='year')
barbell_index.sort_values(by=[2017], inplace=True)
barbell_index.head()
len([x for x in rps_start.values() if x <=2008])
rps_start
states_index = list(barbell_index.index)
dumbell_plot(barbell_index, [2001, 2008, 2017], states_index,
legend_loc=[500, 300, 100], rps_legend=True,
rps_start=rps_start, palette='colorblind')
plt.vlines(439, -1, 50, colors=['0.5'], zorder=1, #linestyles='dashed',
linewidth=1)
plt.ylim(-1, 53)
# plt.text(x=200, y=40, s='2017\nNational Average\n(439 g CO$_2$ $\mathregular{kWh^{-1}}$)',
# ha='center', va='center', size=11)
plt.text(x=680, y=3, s='2017\nNational Average\n(439 g CO$_2$ $\mathregular{kWh^{-1}}$)',
ha='center', va='center', size=11)
sns.despine(left=True)
plt.xlabel('g CO$_2$ $\mathregular{kWh^{-1}}$')
# label as part of a larger figure
plt.text(-0.02, 0.89, s='a)', ha='right', size=11, transform=plt.gca().transAxes)
path = join(cwd, '..', 'Figures',
'State CO2 intensity {}.pdf'.format(file_date))
plt.savefig(path, bbox_inches='tight')
barbell_index.loc['USA', 2001] = 630
barbell_index.loc['USA', 2008] = 580
barbell_index.loc['USA', 2017] = 439
barbell_index.sort_values(by=[2017], inplace=True)
states_index = list(barbell_index.index)
dumbell_plot(barbell_index, [2001, 2008, 2017], states_index,
legend_loc=[500, 300, 100],
rps_start=rps_start, palette='colorblind')
plt.vlines(439, -1, 50, colors=['0.5'], zorder=1, #linestyles='dashed',
linewidth=1)
plt.ylim(-1, 53)
plt.text(x=200, y=40, s='2017\nNational Average\n(439 g CO$_2$ $\mathregular{kWh^{-1}}$)',
ha='center', size=11)
sns.despine(left=True)
plt.xlabel('g CO$_2$ $\mathregular{kWh^{-1}}$')
path = join(cwd, '..', 'Figures',
'State CO2 intensity {}.pdf'.format('David Hawkins'))
plt.savefig(path, bbox_inches='tight')
Explanation: State CO₂ intensity changes
End of explanation
barbell_index['change'] = barbell_index[2017] - barbell_index[2001]
barbell_index['% change'] = barbell_index['change'] / barbell_index[2001]
max_change_state = barbell_index.sort_values('change').index[0]
max_change_value = barbell_index.sort_values('change')['change'].values[0]
start = barbell_index.loc[max_change_state, 2001]
end = barbell_index.loc[max_change_state, 2017]
print('The largest absolute reduction is {:0.1f} g/kWh in {}, from {:.1f} to {:.1f}'
.format(max_change_value, max_change_state, start, end))
min_change_state = barbell_index.sort_values('change').index[-1]
min_change_value = barbell_index.sort_values('change')['change'].values[-1]
start = barbell_index.loc[min_change_state, 2001]
end = barbell_index.loc[min_change_state, 2017]
print('The smallest absolute reduction is {:0.1f} g/kWh in {}, from {:.1f} to {:.1f}'
.format(min_change_value, min_change_state, start, end))
max_relchange_state = barbell_index.sort_values('% change').index[0]
max_relchange_value = barbell_index.sort_values('% change')['% change'].values[0]
start = barbell_index.loc[max_relchange_state, 2001]
end = barbell_index.loc[max_relchange_state, 2017]
print('The largest relative reduction is {:0.1%} g/kWh in {}, from {:.1f} to {:.1f}'
.format(max_relchange_value, max_relchange_state, start, end))
min_relchange_state = barbell_index.sort_values('% change').index[-1]
min_relchange_value = barbell_index.sort_values('% change')['% change'].values[-1]
start = barbell_index.loc[min_relchange_state, 2001]
end = barbell_index.loc[min_relchange_state, 2017]
print('The smallest relative reduction is {:0.1%} g/kWh in {}, from {:.1f} to {:.1f}'
.format(min_relchange_value, min_relchange_state, start, end))
Explanation: Largest and smallest changes
End of explanation
path = os.path.join(base_path, 'Data storage', 'Derived data',
'State population.csv')
pop = pd.read_csv(path)
pop.columns = pop.columns.str.lower()
pop.head()
pop.tail()
pop['state'] = pop['state'].map(us_state_abbrev)
pop.head()
annual_index_pop = annual_index.reset_index().merge(pop, on=['state', 'year'])
annual_index_pop.head()
annual_index_pop['tonne CO2/pop'] = (annual_index_pop['final co2 (kg)']
/ 1000
/ annual_index_pop['population'])
annual_index_pop['MWh/pop'] = (annual_index_pop.loc[:, 'generation (mwh)']
/ annual_index_pop['population'])
annual_index_pop.describe(percentiles=[.1, .25])
barbell_pop = annual_index_pop.pivot_table(values='tonne CO2/pop',
index='state', columns='year')
barbell_pop.sort_values(by=2017, inplace=True)
barbell_pop.drop(['WY', 'ND', 'WV'], inplace=True)
# states_pop = ['{} '.format(x) for x in barbell_pop.index]
states_pop = list(barbell_pop.index)
# rps_states = list(rps_tidy['state'].unique())
dumbell_plot(barbell_pop, [2001, 2008, 2017], states_pop, offset_divider=30,
rps_start=rps_start, legend=False, palette='colorblind')
plt.ylim(-1, 53)
plt.xlim(None, 25)
sns.despine(left=True)
plt.xlabel('Tonne $\mathregular{CO_2 \ Capita^{-1}}$')
# label as part of a larger figure
plt.text(-0.02, 0.89, s='b)', ha='right', size=11, transform=plt.gca().transAxes)
path = join(fig_export_path,
'State CO2 per capita {}.pdf'.format(file_date))
plt.savefig(path, bbox_inches='tight')
barbell_pop = annual_index_pop.pivot_table(values='tonne CO2/pop',
index='state', columns='year')
barbell_pop.sort_values(by=2017, inplace=True)
barbell_pop = barbell_pop.loc[['WV', 'ND', 'WY']]
states_pop = ['WY', 'ND', 'WV']
# rps_states = list(rps_tidy['state'].unique())
dumbell_plot(barbell_pop, [2001, 2008, 2017], states_pop, offset_divider=30,
rps_start=rps_start, legend=False, palette='colorblind',
figsize=(2.5, 9))
plt.ylim(-1, 53)
# plt.xlim(None, 25)
sns.despine(left=True)
plt.xlabel('Tonne $\mathregular{CO_2 \ Capita^{-1}}$')
path = join(fig_export_path,
'State CO2 per capita insert {}.pdf'.format(file_date))
plt.savefig(path, bbox_inches='tight')
Explanation: Population
End of explanation
path = os.path.join(base_path, 'Data storage', 'State energy flows',
'use_all_phy_update.csv')
flows = pd.read_csv(path)
# ELISP is electricity flows
flows = flows.loc[flows['MSN'] == 'ELISP']
# Drop all years before 2001
flows.drop([str(x) for x in range(1960, 2001)], axis=1, inplace=True)
flows.drop(['Data_Status', 'MSN'], axis=1, inplace=True)
# Electricity flows are given in million kWh (see documentation)
flows.loc[:, '2001':] *= 1000
flows.columns = flows.columns.str.lower()
flows = flows.melt(id_vars='state', var_name='year',
value_name='MWh flow')
flows['year'] = flows['year'].astype(int)
flows.tail()
annual_index_pop_flows = pd.merge(annual_index_pop, flows, on=['state', 'year'])
annual_index_pop_flows['MWh flow/pop'] = (annual_index_pop_flows['MWh flow']
/ annual_index_pop_flows['population'])
annual_index_pop_flows['share flow'] = (annual_index_pop_flows['MWh/pop']
/ annual_index_pop_flows['MWh flow/pop'])
annual_index_pop_flows.tail()
barbell_flows = annual_index_pop_flows.pivot_table(values='MWh flow/pop',
index='state', columns='year')
barbell_flows.head()
barbell_flows = annual_index_pop_flows.pivot_table(values='MWh flow/pop',
index='state', columns='year')
barbell_flows.sort_values(by=2015, ascending=False, inplace=True)
# barbell_flows.drop('VT', inplace=True)
states_flows = list(barbell_flows.index)
dumbell_plot(barbell_flows, [2001, 2008, 2015], states_flows, offset_divider=8,
rps_start=rps_start, legend=True, palette='colorblind',
legend_loc=[-20, -40, -60])
plt.ylim(-1, 53)
sns.despine(left=True)
plt.xlabel('MWh flow/Capita')
plt.vlines(0, -1, 53, colors=['0.5'], linewidth=1)
# plt.xticks()
# ax = plt.gca()
# ax.set_xscale("log", nonposx='clip')
path = join(fig_export_path, 'SI', 'State MWh flow per capita.pdf')
# plt.savefig(path, bbox_inches='tight')
Explanation: State electricity imports/exports
Data is from the EIA State Energy Data System. The most recent update was March 9, 2018. Codes are described in the EIA documentation. Negative values indicate flows out of the state.
End of explanation |
7,022 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The flexibility that can be seen below can be exploited on these contexts
Step1: It's __repr__ is descent
Step2: Json serializable
Step3: My beloved yaml-serializable
Step4: And of course, parseable | Python Code:
from fito import as_operation, SpecField, PrimitiveField, Operation
from time import sleep
class DatabaseConnection(Operation):
host = PrimitiveField(pos=0)
def __repr__(self): return "connection(db://{})".format(self.host)
@as_operation(database=SpecField, experiment_config=SpecField)
def run_experiment(database, experiment_config):
sleep(0.25)
@as_operation()
def experiment_1(alpha=0.5, beta=10):
return alpha + beta / 2
Explanation: The flexibility that can be seen below can be exploited in contexts such as:
Running experiments from YAML/JSON config files
Automating them and using the JSON representation to send them as messages
Attaching metadata to the experiments (using data stores)
End of explanation
run = run_experiment(DatabaseConnection('localhost'), experiment_1(alpha=10))
print run
Explanation: Its __repr__ is decent
End of explanation
print run.json.dumps()
Explanation: Json serializable:
End of explanation
print run.yaml.dumps()
Explanation: My beloved yaml-serializable
End of explanation
print Operation.from_yaml(run.yaml.dumps()).yaml.dumps()
Explanation: And of course, parseable :)
End of explanation |
7,023 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Model selection and serving with Ray Tune and Ray Serve
{image} /images/serve.svg
Step4: Data interface
Let's start with a simulated data interface. This class acts as the
interface between your training code and your database. We simulate
that new data arrives each day with a day parameter. So, calling
get_data(day=3) would return all data we received until day 3.
We also implement an incremental data method, so calling
get_incremental_data(day=3) would return all data collected
between day 2 and day 3.
Step5: PyTorch neural network classifier
Next, we will introduce our PyTorch neural network model and the
train and test function. These are adapted directly from
our {doc}PyTorch MNIST example </tune/examples/includes/mnist_pytorch>.
We only introduced an additional neural network layer with a configurable
layer size. This is not strictly needed for learning good performance on
MNIST, but it is useful to demonstrate scenarios where your hyperparameter
search space affects the model complexity.
Step6: Tune trainable for model selection
We'll now define our Tune trainable function. This function takes
a config parameter containing the hyperparameters we should train
the model on, and will start a full training run. This means it
will take care of creating the model and optimizer and repeatedly
call the train function to train the model. Also, this function
will report the training progress back to Tune.
Step7: Configuring the search space and starting Ray Tune
We would like to support two modes of training the model
Step8: To continue training from an existing model, we can use this function
instead. It takes a starting model (a checkpoint) as a parameter and
the old config.
Note that this time the search space does not contain the
layer size parameter. Since we continue to train an existing model,
we cannot change the layer size mid training, so we just continue
to use the existing one.
Step9: Serving tuned models with Ray Serve
Let's now turn to the model serving part with Ray Serve. Serve allows
you to deploy your models as multiple deployments. Broadly speaking,
a deployment handles incoming requests and replies with a result. For
instance, our MNIST deployment takes an image as input and outputs the
digit it recognized from it. This deployment can be exposed over HTTP.
First, we will define our deployment. This loads our PyTorch
MNIST model from a checkpoint, takes an image as an input and
outputs our digit prediction according to our trained model
Step11: We would like to have a fixed location where we store the currently
active model. We call this directory model_dir. Every time we
would like to update our model, we copy the checkpoint of the new
model to this directory. We then update the deployment to the new version.
Step12: Since we would like to continue training from the current existing
model, we introduce an utility function that fetches the currently
served checkpoint as well as the hyperparameter config and achieved
accuracy.
Step14: Putting everything together
Now we only need to glue this code together. This is the main
entrypoint of the script, and we will define three methods | Python Code:
import argparse
import json
import os
import shutil
import sys
from functools import partial
from math import ceil
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import ray
from ray import tune, serve
from ray.serve.exceptions import RayServeException
from ray.tune import CLIReporter
from ray.tune.schedulers import ASHAScheduler
from torch.utils.data import random_split, Subset
from torchvision.datasets import MNIST
from torchvision.transforms import transforms
Explanation: Model selection and serving with Ray Tune and Ray Serve
{image} /images/serve.svg
:align: center
{contents}
:backlinks: none
:local: true
This tutorial will show you an end-to-end example how to train a
model using Ray Tune on incrementally arriving data and deploy
the model using Ray Serve.
A machine learning workflow can be quite simple: You decide on
the objective you're trying to solve, collect and annotate the
data, and build a model to hopefully solve your problem. But
usually the work is not over yet. First, you would likely continue
to do some hyperparameter optimization to obtain the best possible
model (called model selection). Second, your trained model
somehow has to be moved to production - in other words, users
or services should be enabled to use your model to actually make
predictions. This part is called model serving.
Fortunately, Ray includes two libraries that help you with these
two steps: Ray Tune and Ray Serve. And even more, they compliment
each other nicely. Most notably, both are able to scale up your
workloads easily - so both your model training and serving benefit
from additional resources and can adapt to your environment. If you
need to train on more data or have more hyperparameters to tune,
Ray Tune can leverage your whole cluster for training. If you have
many users doing inference on your served models, Ray Serve can
automatically distribute the inference to multiple nodes.
This tutorial will show you an end-to-end example how to train a MNIST
image classifier on incrementally arriving data and automatically
serve an updated model on a HTTP endpoint.
By the end of this tutorial you will be able to
Do hyperparameter optimization on a simple MNIST classifier
Continue to train this classifier from an existing model with
newly arriving data
Automatically create and serve data deployments with Ray Serve
Roadmap and desired functionality
The general idea of this example is that we simulate newly arriving
data each day. So at day 0 we might have some initial data available
already, but at each day, new data arrives.
Our approach here is that we offer two ways to train: From scratch and
from an existing model. Maybe you would like to train and select models
from scratch each week with all data available until then, e.g. each
Sunday, like this:
```{code-block} bash
Train with all data available at day 0
python tune-serve-integration-mnist.py --from_scratch --day 0
```
During the other days you might want to improve your model, but
not train everything from scratch, saving some cluster resources.
```{code-block} bash
Train with data arriving between day 0 and day 1
python tune-serve-integration-mnist.py --from_existing --day 1
Train with incremental data on the other days, too
python tune-serve-integration-mnist.py --from_existing --day 2
python tune-serve-integration-mnist.py --from_existing --day 3
python tune-serve-integration-mnist.py --from_existing --day 4
python tune-serve-integration-mnist.py --from_existing --day 5
python tune-serve-integration-mnist.py --from_existing --day 6
Retrain from scratch every 7th day:
python tune-serve-integration-mnist.py --from_scratch --day 7
```
This example will support both modes. After each model selection run,
we will tell Ray Serve to serve an updated model. We also include a
small utility to query our served model to see if it works as it should.
{code-block} bash
$ python tune-serve-integration-mnist.py --query 6
Querying model with example #6. Label = 1, Response = 1, Correct = True
Imports
Let's start with our dependencies. Most of these should be familiar
if you worked with PyTorch before. The most notable import for Ray
is the from ray import tune, serve import statement - which
includes almost all the things we need from the Ray side.
End of explanation
class MNISTDataInterface(object):
    """Data interface. Simulates that new data arrives every day."""
def __init__(self, data_dir, max_days=10):
self.data_dir = data_dir
self.max_days = max_days
transform = transforms.Compose(
[transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]
)
self.dataset = MNIST(
self.data_dir, train=True, download=True, transform=transform
)
def _get_day_slice(self, day=0):
if day < 0:
return 0
n = len(self.dataset)
# Start with 30% of the data, get more data each day
return min(n, ceil(n * (0.3 + 0.7 * day / self.max_days)))
def get_data(self, day=0):
        """Get complete normalized train and validation data to date."""
end = self._get_day_slice(day)
available_data = Subset(self.dataset, list(range(end)))
train_n = int(0.8 * end) # 80% train data, 20% validation data
return random_split(available_data, [train_n, end - train_n])
def get_incremental_data(self, day=0):
        """Get next normalized train and validation data day slice."""
start = self._get_day_slice(day - 1)
end = self._get_day_slice(day)
available_data = Subset(self.dataset, list(range(start, end)))
train_n = int(0.8 * (end - start)) # 80% train data, 20% validation data
return random_split(available_data, [train_n, end - start - train_n])
Explanation: Data interface
Let's start with a simulated data interface. This class acts as the
interface between your training code and your database. We simulate
that new data arrives each day with a day parameter. So, calling
get_data(day=3) would return all data we received until day 3.
We also implement an incremental data method, so calling
get_incremental_data(day=3) would return all data collected
between day 2 and day 3.
End of explanation
class ConvNet(nn.Module):
def __init__(self, layer_size=192):
super(ConvNet, self).__init__()
self.layer_size = layer_size
self.conv1 = nn.Conv2d(1, 3, kernel_size=3)
self.fc = nn.Linear(192, self.layer_size)
self.out = nn.Linear(self.layer_size, 10)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), 3))
x = x.view(-1, 192)
x = self.fc(x)
x = self.out(x)
return F.log_softmax(x, dim=1)
def train(model, optimizer, train_loader, device=None):
device = device or torch.device("cpu")
model.train()
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data)
loss = F.nll_loss(output, target)
loss.backward()
optimizer.step()
def test(model, data_loader, device=None):
device = device or torch.device("cpu")
model.eval()
correct = 0
total = 0
with torch.no_grad():
for batch_idx, (data, target) in enumerate(data_loader):
data, target = data.to(device), target.to(device)
outputs = model(data)
_, predicted = torch.max(outputs.data, 1)
total += target.size(0)
correct += (predicted == target).sum().item()
return correct / total
Explanation: PyTorch neural network classifier
Next, we will introduce our PyTorch neural network model and the
train and test function. These are adapted directly from
our {doc}PyTorch MNIST example </tune/examples/includes/mnist_pytorch>.
We only introduced an additional neural network layer with a configurable
layer size. This is not strictly needed for learning good performance on
MNIST, but it is useful to demonstrate scenarios where your hyperparameter
search space affects the model complexity.
End of explanation
def train_mnist(
config,
start_model=None,
checkpoint_dir=None,
num_epochs=10,
use_gpus=False,
data_fn=None,
day=0,
):
# Create model
use_cuda = use_gpus and torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
model = ConvNet(layer_size=config["layer_size"]).to(device)
# Create optimizer
optimizer = optim.SGD(
model.parameters(), lr=config["lr"], momentum=config["momentum"]
)
# Load checkpoint, or load start model if no checkpoint has been
# passed and a start model is specified
load_dir = None
if checkpoint_dir:
load_dir = checkpoint_dir
elif start_model:
load_dir = start_model
if load_dir:
model_state, optimizer_state = torch.load(os.path.join(load_dir, "checkpoint"))
model.load_state_dict(model_state)
optimizer.load_state_dict(optimizer_state)
# Get full training datasets
train_dataset, validation_dataset = data_fn(day=day)
train_loader = torch.utils.data.DataLoader(
train_dataset, batch_size=config["batch_size"], shuffle=True
)
validation_loader = torch.utils.data.DataLoader(
validation_dataset, batch_size=config["batch_size"], shuffle=True
)
for i in range(num_epochs):
train(model, optimizer, train_loader, device)
acc = test(model, validation_loader, device)
if i == num_epochs - 1:
with tune.checkpoint_dir(step=i) as checkpoint_dir:
torch.save(
(model.state_dict(), optimizer.state_dict()),
os.path.join(checkpoint_dir, "checkpoint"),
)
tune.report(mean_accuracy=acc, done=True)
else:
tune.report(mean_accuracy=acc)
Explanation: Tune trainable for model selection
We'll now define our Tune trainable function. This function takes
a config parameter containing the hyperparameters we should train
the model on, and will start a full training run. This means it
will take care of creating the model and optimizer and repeatedly
call the train function to train the model. Also, this function
will report the training progress back to Tune.
End of explanation
def tune_from_scratch(num_samples=10, num_epochs=10, gpus_per_trial=0.0, day=0):
data_interface = MNISTDataInterface("~/data", max_days=10)
num_examples = data_interface._get_day_slice(day)
config = {
"batch_size": tune.choice([16, 32, 64]),
"layer_size": tune.choice([32, 64, 128, 192]),
"lr": tune.loguniform(1e-4, 1e-1),
"momentum": tune.uniform(0.1, 0.9),
}
scheduler = ASHAScheduler(
metric="mean_accuracy",
mode="max",
max_t=num_epochs,
grace_period=1,
reduction_factor=2,
)
reporter = CLIReporter(
parameter_columns=["layer_size", "lr", "momentum", "batch_size"],
metric_columns=["mean_accuracy", "training_iteration"],
)
analysis = tune.run(
partial(
train_mnist,
start_model=None,
data_fn=data_interface.get_data,
num_epochs=num_epochs,
use_gpus=True if gpus_per_trial > 0 else False,
day=day,
),
resources_per_trial={"cpu": 1, "gpu": gpus_per_trial},
config=config,
num_samples=num_samples,
scheduler=scheduler,
progress_reporter=reporter,
verbose=0,
name="tune_serve_mnist_fromscratch",
)
best_trial = analysis.get_best_trial("mean_accuracy", "max", "last")
best_accuracy = best_trial.metric_analysis["mean_accuracy"]["last"]
best_trial_config = best_trial.config
best_checkpoint = best_trial.checkpoint.dir_or_data
return best_accuracy, best_trial_config, best_checkpoint, num_examples
Explanation: Configuring the search space and starting Ray Tune
We would like to support two modes of training the model: Training
a model from scratch, and continuing to train a model from an
existing one.
This is our function to train a number of models with different
hyperparameters from scratch, i.e. from all data that is available
until the given day. Our search space can thus also contain parameters
that affect the model complexity (such as the layer size), since it
does not have to be compatible to an existing model.
End of explanation
def tune_from_existing(
start_model, start_config, num_samples=10, num_epochs=10, gpus_per_trial=0.0, day=0
):
data_interface = MNISTDataInterface("/tmp/mnist_data", max_days=10)
num_examples = data_interface._get_day_slice(day) - data_interface._get_day_slice(
day - 1
)
config = start_config.copy()
config.update(
{
"batch_size": tune.choice([16, 32, 64]),
"lr": tune.loguniform(1e-4, 1e-1),
"momentum": tune.uniform(0.1, 0.9),
}
)
scheduler = ASHAScheduler(
metric="mean_accuracy",
mode="max",
max_t=num_epochs,
grace_period=1,
reduction_factor=2,
)
reporter = CLIReporter(
parameter_columns=["lr", "momentum", "batch_size"],
metric_columns=["mean_accuracy", "training_iteration"],
)
analysis = tune.run(
partial(
train_mnist,
start_model=start_model,
data_fn=data_interface.get_incremental_data,
num_epochs=num_epochs,
use_gpus=True if gpus_per_trial > 0 else False,
day=day,
),
resources_per_trial={"cpu": 1, "gpu": gpus_per_trial},
config=config,
num_samples=num_samples,
scheduler=scheduler,
progress_reporter=reporter,
verbose=0,
name="tune_serve_mnist_fromsexisting",
)
best_trial = analysis.get_best_trial("mean_accuracy", "max", "last")
best_accuracy = best_trial.metric_analysis["mean_accuracy"]["last"]
best_trial_config = best_trial.config
best_checkpoint = best_trial.checkpoint.dir_or_data
return best_accuracy, best_trial_config, best_checkpoint, num_examples
Explanation: To continue training from an existing model, we can use this function
instead. It takes a starting model (a checkpoint) as a parameter and
the old config.
Note that this time the search space does not contain the
layer size parameter. Since we continue to train an existing model,
we cannot change the layer size mid training, so we just continue
to use the existing one.
End of explanation
@serve.deployment(name="mnist", route_prefix="/mnist")
class MNISTDeployment:
def __init__(self, checkpoint_dir, config, metrics, use_gpu=False):
self.checkpoint_dir = checkpoint_dir
self.config = config
self.metrics = metrics
use_cuda = use_gpu and torch.cuda.is_available()
self.device = torch.device("cuda" if use_cuda else "cpu")
model = ConvNet(layer_size=self.config["layer_size"]).to(self.device)
model_state, optimizer_state = torch.load(
os.path.join(self.checkpoint_dir, "checkpoint"), map_location=self.device
)
model.load_state_dict(model_state)
self.model = model
def __call__(self, flask_request):
images = torch.tensor(flask_request.json["images"])
images = images.to(self.device)
outputs = self.model(images)
predicted = torch.max(outputs.data, 1)[1]
return {"result": predicted.numpy().tolist()}
Explanation: Serving tuned models with Ray Serve
Let's now turn to the model serving part with Ray Serve. Serve allows
you to deploy your models as multiple deployments. Broadly speaking,
a deployment handles incoming requests and replies with a result. For
instance, our MNIST deployment takes an image as input and outputs the
digit it recognized from it. This deployment can be exposed over HTTP.
First, we will define our deployment. This loads our PyTorch
MNIST model from a checkpoint, takes an image as an input and
outputs our digit prediction according to our trained model:
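For reference, once the deployment is running, a client talks to it over plain HTTP in the same way the __main__ block at the end of this script does. A minimal sketch, where image stands for one MNIST example tensor (a placeholder name, not defined here):
import requests
response = requests.post(
    "http://localhost:8000/mnist", json={"images": [image.tolist()]}
)
print(response.json()["result"])  # list of predicted digits, one per submitted image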
End of explanation
def serve_new_model(model_dir, checkpoint, config, metrics, day, use_gpu=False):
print("Serving checkpoint: {}".format(checkpoint))
checkpoint_path = _move_checkpoint_to_model_dir(
model_dir, checkpoint, config, metrics
)
serve.start(detached=True)
MNISTDeployment.deploy(checkpoint_path, config, metrics, use_gpu)
def _move_checkpoint_to_model_dir(model_dir, checkpoint, config, metrics):
    """Move backend checkpoint to a central `model_dir` on the head node.

    If you would like to run Serve on multiple nodes, you might want to
    move the checkpoint to a shared storage, like Amazon S3, instead.
    """
os.makedirs(model_dir, 0o755, exist_ok=True)
checkpoint_path = os.path.join(model_dir, "checkpoint")
meta_path = os.path.join(model_dir, "meta.json")
if os.path.exists(checkpoint_path):
shutil.rmtree(checkpoint_path)
shutil.copytree(checkpoint, checkpoint_path)
with open(meta_path, "wt") as fp:
json.dump(dict(config=config, metrics=metrics), fp)
return checkpoint_path
Explanation: We would like to have a fixed location where we store the currently
active model. We call this directory model_dir. Every time we
would like to update our model, we copy the checkpoint of the new
model to this directory. We then update the deployment to the new version.
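Concretely, after serve_new_model() has run, model_dir (by default ~/mnist_tune_serve, see the --model_dir argument below) is expected to contain, as a sketch inferred from the code:
~/mnist_tune_serve/
    checkpoint/   # copy of the winning Tune trial checkpoint (model + optimizer state)
    meta.json     # {"config": ..., "metrics": ...} describing the currently served model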
End of explanation
def get_current_model(model_dir):
checkpoint_path = os.path.join(model_dir, "checkpoint")
meta_path = os.path.join(model_dir, "meta.json")
if not os.path.exists(checkpoint_path) or not os.path.exists(meta_path):
return None, None, None
with open(meta_path, "rt") as fp:
meta = json.load(fp)
return checkpoint_path, meta["config"], meta["metrics"]
Explanation: Since we would like to continue training from the current existing
model, we introduce a utility function that fetches the currently
served checkpoint as well as the hyperparameter config and achieved
accuracy.
End of explanation
# The query function will send a HTTP request to Serve with some
# test data obtained from the MNIST dataset.
if __name__ == "__main__":
    """
    This script offers training a new model from scratch with all
available data, or continuing to train an existing model
with newly available data.
For instance, we might get new data every day. Every Sunday, we
would like to train a new model from scratch.
Naturally, we would like to use hyperparameter optimization to
find the best model for our data.
First, we might train a model with all data available at this day:
```{code-block} bash
python tune-serve-integration-mnist.py --from_scratch --day 0
```
On the coming days, we want to continue to train this model with
newly available data:
```{code-block} bash
python tune-serve-integration-mnist.py --from_existing --day 1
python tune-serve-integration-mnist.py --from_existing --day 2
python tune-serve-integration-mnist.py --from_existing --day 3
python tune-serve-integration-mnist.py --from_existing --day 4
python tune-serve-integration-mnist.py --from_existing --day 5
python tune-serve-integration-mnist.py --from_existing --day 6
# Retrain from scratch every 7th day:
python tune-serve-integration-mnist.py --from_scratch --day 7
```
We can also use this script to query our served model
with some test data:
```{code-block} bash
python tune-serve-integration-mnist.py --query 6
Querying model with example #6. Label = 1, Response = 1, Correct = T
python tune-serve-integration-mnist.py --query 28
Querying model with example #28. Label = 2, Response = 7, Correct = F
```
    """
parser = argparse.ArgumentParser(description="MNIST Tune/Serve example")
parser.add_argument("--model_dir", type=str, default="~/mnist_tune_serve")
parser.add_argument(
"--from_scratch",
action="store_true",
help="Train and select best model from scratch",
default=True,
)
parser.add_argument(
"--from_existing",
action="store_true",
help="Train and select best model from existing model",
default=False,
)
parser.add_argument(
"--day",
help="Indicate the day to simulate the amount of data available to us",
type=int,
default=0,
)
parser.add_argument(
"--query", help="Query endpoint with example", type=int, default=-1
)
parser.add_argument(
"--smoke-test",
action="store_true",
help="Finish quickly for testing",
default=True,
)
args = parser.parse_args()
if args.smoke_test:
ray.init(num_cpus=3, namespace="tune-serve-integration")
else:
ray.init(namespace="tune-serve-integration")
model_dir = os.path.expanduser(args.model_dir)
if args.query >= 0:
import requests
dataset = MNISTDataInterface("/tmp/mnist_data", max_days=0).dataset
data = dataset[args.query]
label = data[1]
# Query our model
response = requests.post(
"http://localhost:8000/mnist", json={"images": [data[0].numpy().tolist()]}
)
try:
pred = response.json()["result"][0]
except: # noqa: E722
pred = -1
print(
"Querying model with example #{}. "
"Label = {}, Response = {}, Correct = {}".format(
args.query, label, pred, label == pred
)
)
sys.exit(0)
gpus_per_trial = 0.5 if not args.smoke_test else 0.0
serve_gpu = True if gpus_per_trial > 0 else False
num_samples = 8 if not args.smoke_test else 1
num_epochs = 10 if not args.smoke_test else 1
if args.from_scratch: # train everyday from scratch
print("Start training job from scratch on day {}.".format(args.day))
acc, config, best_checkpoint, num_examples = tune_from_scratch(
num_samples, num_epochs, gpus_per_trial, day=args.day
)
print(
"Trained day {} from scratch on {} samples. "
"Best accuracy: {:.4f}. Best config: {}".format(
args.day, num_examples, acc, config
)
)
serve_new_model(
model_dir, best_checkpoint, config, acc, args.day, use_gpu=serve_gpu
)
if args.from_existing:
old_checkpoint, old_config, old_acc = get_current_model(model_dir)
if not old_checkpoint or not old_config or not old_acc:
print("No existing model found. Train one with --from_scratch " "first.")
sys.exit(1)
acc, config, best_checkpoint, num_examples = tune_from_existing(
old_checkpoint,
old_config,
num_samples,
num_epochs,
gpus_per_trial,
day=args.day,
)
print(
"Trained day {} from existing on {} samples. "
"Best accuracy: {:.4f}. Best config: {}".format(
args.day, num_examples, acc, config
)
)
serve_new_model(
model_dir, best_checkpoint, config, acc, args.day, use_gpu=serve_gpu
)
Explanation: Putting everything together
Now we only need to glue this code together. This is the main
entrypoint of the script, and we will define three methods:
Train new model from scratch with all data
Continue training from existing model with new data only
Query the model with test data
Internally, this will just call the tune_from_scratch and
tune_from_existing() functions.
Both training functions will then call serve_new_model() to serve
the newly trained or updated model.
End of explanation |
7,024 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
02 - Introduction to Machine Learning
by Alejandro Correa Bahnsen
version 0.1, Feb 2016
Part of the class Practical Machine Learning
This notebook is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Special thanks goes to Jake Vanderplas
What is Machine Learning?
In this section we will begin to explore the basic principles of machine learning.
Machine Learning is about building programs with tunable parameters (typically an
array of floating point values) that are adjusted automatically so as to improve
their behavior by adapting to previously seen data.
Machine Learning can be considered a subfield of Artificial Intelligence since those
algorithms can be seen as building blocks to make computers learn to behave more
intelligently by somehow generalizing rather than just storing and retrieving data items
like a database system would do.
We'll take a look at two very simple machine learning tasks here.
The first is a classification task
Step1: A classification algorithm may be used to draw a dividing boundary
between the two clusters of points
Step2: This may seem like a trivial task, but it is a simple version of a very important concept.
By drawing this separating line, we have learned a model which can generalize to new
data
Step3: Again, this is an example of fitting a model to data, such that the model can make
generalizations about new data. The model has been learned from the training
data, and can be used to predict the result of test data
Step4: Quick Question
Step5: This data is four dimensional, but we can visualize two of the dimensions
at a time using a simple scatter-plot
Step6: Dimensionality Reduction
Step7: Clustering
Step8: Let's then evaluate the performance of the clustering versus the ground truth
Step9: Classification Logistic Regression
Step10: Recap | Python Code:
# Import libraries
%matplotlib inline
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set();
cmap = mpl.colors.ListedColormap(sns.color_palette("hls", 3))
# Create a random set of examples
from sklearn.datasets import make_blobs
X, Y = make_blobs(n_samples=50, centers=2,random_state=23, cluster_std=2.90)
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=cmap)
plt.show()
Explanation: 02 - Introduction to Machine Learning
by Alejandro Correa Bahnsen
version 0.1, Feb 2016
Part of the class Practical Machine Learning
This notebook is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Special thanks goes to Jake Vanderplas
What is Machine Learning?
In this section we will begin to explore the basic principles of machine learning.
Machine Learning is about building programs with tunable parameters (typically an
array of floating point values) that are adjusted automatically so as to improve
their behavior by adapting to previously seen data.
Machine Learning can be considered a subfield of Artificial Intelligence since those
algorithms can be seen as building blocks to make computers learn to behave more
intelligently by somehow generalizing rather than just storing and retrieving data items
like a database system would do.
We'll take a look at two very simple machine learning tasks here.
The first is a classification task: the figure shows a
collection of two-dimensional data, colored according to two different class
labels.
End of explanation
from sklearn.linear_model import SGDClassifier
clf = SGDClassifier(loss="hinge", alpha=0.01, n_iter=200, fit_intercept=True)
clf.fit(X, Y)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, m_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - .5, X[:, 0].max() + .5
y_min, y_max = X[:, 1].min() - .5, X[:, 1].max() + .5
xx, yy = np.meshgrid(np.arange(x_min, x_max, .05), np.arange(y_min, y_max, .05))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)
plt.contour(xx, yy, Z)
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=cmap)
plt.show()
Explanation: A classification algorithm may be used to draw a dividing boundary
between the two clusters of points:
End of explanation
a = 0.5
b = 1.0
# x from 0 to 10
x = 30 * np.random.random(20)
# y = a*x + b with noise
y = a * x + b + np.random.normal(size=x.shape)
plt.scatter(x, y)
from sklearn.linear_model import LinearRegression
clf = LinearRegression()
clf.fit(x[:, None], y)
# underscore at the end indicates a fit parameter
print(clf.coef_)
print(clf.intercept_)
x_new = np.linspace(0, 30, 100)
y_new = clf.predict(x_new[:, None])
plt.scatter(x, y)
plt.plot(x_new, y_new)
Explanation: This may seem like a trivial task, but it is a simple version of a very important concept.
By drawing this separating line, we have learned a model which can generalize to new
data: if you were to drop another point onto the plane which is unlabeled, this algorithm
could now predict whether it's a blue or a red point.
The next simple task we'll look at is a regression task: a simple best-fit line
to a set of data:
End of explanation
from IPython.core.display import Image, display
imp_path = 'https://raw.githubusercontent.com/jakevdp/sklearn_pycon2015/master/notebooks/images/'
display(Image(url=imp_path+'iris_setosa.jpg'))
print("Iris Setosa\n")
display(Image(url=imp_path+'iris_versicolor.jpg'))
print("Iris Versicolor\n")
display(Image(url=imp_path+'iris_virginica.jpg'))
print("Iris Virginica")
display(Image(url='https://raw.githubusercontent.com/albahnsen/PracticalMachineLearningClass/6160065e1e574a20edddc47116a0512d20656e26/notebooks/iris_with_length.png'))
print('Iris versicolor and the petal and sepal width and length')
print('From, Python Data Analytics, Apress, 2015.')
Explanation: Again, this is an example of fitting a model to data, such that the model can make
generalizations about new data. The model has been learned from the training
data, and can be used to predict the result of test data:
here, we might be given an x-value, and the model would
allow us to predict the y value. Again, this might seem like a trivial problem,
but it is a basic example of a type of operation that is fundamental to
machine learning tasks.
Representation of Data in Scikit-learn
Machine learning is about creating models from data: for that reason, we'll start by
discussing how data can be represented in order to be understood by the computer. Along
with this, we'll build on our matplotlib examples from the previous section and show some
examples of how to visualize data.
Most machine learning algorithms implemented in scikit-learn expect data to be stored in a
two-dimensional array or matrix. The arrays can be
either numpy arrays, or in some cases scipy.sparse matrices.
The size of the array is expected to be [n_samples, n_features]
n_samples: The number of samples: each sample is an item to process (e.g. classify).
A sample can be a document, a picture, a sound, a video, an astronomical object,
a row in database or CSV file,
or whatever you can describe with a fixed set of quantitative traits.
n_features: The number of features or distinct traits that can be used to describe each
item in a quantitative manner. Features are generally real-valued, but may be boolean or
discrete-valued in some cases.
The number of features must be fixed in advance. However it can be very high dimensional
(e.g. millions of features) with most of them being zeros for a given sample. This is a case
where scipy.sparse matrices can be useful, in that they are
much more memory-efficient than numpy arrays.
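As a minimal illustration of this [n_samples, n_features] layout (toy numbers, not the iris data yet):
import numpy as np
X = np.random.random((150, 4))   # 150 samples, each described by 4 features
print(X.shape)                   # -> (150, 4), i.e. [n_samples, n_features]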
A Simple Example: the Iris Dataset
As an example of a simple dataset, we're going to take a look at the
iris data stored by scikit-learn.
The data consists of measurements of three different species of irises.
There are three species of iris in the dataset, which we can picture here:
End of explanation
from sklearn.datasets import load_iris
iris = load_iris()
iris.keys()
n_samples, n_features = iris.data.shape
print((n_samples, n_features))
print(iris.data[0])
print(iris.data.shape)
print(iris.target.shape)
print(iris.target)
print(iris.target_names)
Explanation: Quick Question:
If we want to design an algorithm to recognize iris species, what might the data be?
Remember: we need a 2D array of size [n_samples x n_features].
What would the n_samples refer to?
What might the n_features refer to?
Remember that there must be a fixed number of features for each sample, and feature
number i must be a similar kind of quantity for each sample.
Loading the Iris Data with Scikit-Learn
Scikit-learn has a very straightforward set of data on these iris species. The data consist of
the following:
Features in the Iris dataset:
sepal length in cm
sepal width in cm
petal length in cm
petal width in cm
Target classes to predict:
Iris Setosa
Iris Versicolour
Iris Virginica
scikit-learn embeds a copy of the iris CSV file along with a helper function to load it into numpy arrays:
End of explanation
import pandas as pd # Pandas is a topic of next session
data_temp = pd.DataFrame(iris.data, columns=iris.feature_names)
data_temp['target'] = iris.target
data_temp['target'] = data_temp['target'].astype('category')
data_temp['target'].cat.categories = iris.target_names
sns.pairplot(data_temp, hue='target', palette=sns.color_palette("hls", 3))
Explanation: This data is four dimensional, but we can visualize two of the dimensions
at a time using a simple scatter-plot:
End of explanation
X, y = iris.data, iris.target
from sklearn.decomposition import PCA
pca = PCA(n_components=3)
pca.fit(X)
X_reduced = pca.transform(X)
X_reduced.shape
plt.scatter(X_reduced[:, 0], X_reduced[:, 1], c=y, cmap=cmap)
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = Axes3D(fig)
ax.set_title('Iris Dataset by PCA', size=14)
ax.scatter(X_reduced[:,0],X_reduced[:,1],X_reduced[:,2], c=y, cmap=cmap)
ax.set_xlabel('First eigenvector')
ax.set_ylabel('Second eigenvector')
ax.set_zlabel('Third eigenvector')
ax.w_xaxis.set_ticklabels(())
ax.w_yaxis.set_ticklabels(())
ax.w_zaxis.set_ticklabels(())
plt.show()
Explanation: Dimensionality Reduction: PCA
Principal Component Analysis (PCA) is a dimensionality reduction technique that can find the combinations of variables that explain the most variance.
Consider the iris dataset. It cannot be visualized in a single 2D plot, as it has 4 features. We are going to extract 2 combinations of sepal and petal dimensions to visualize it:
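As a quick check on the pca object fitted above, you can also inspect how much of the total variance each extracted component captures:
print(pca.explained_variance_ratio_)   # fraction of variance explained by each component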
End of explanation
from sklearn.cluster import KMeans
k_means = KMeans(n_clusters=3, random_state=0) # Fixing the RNG in kmeans
k_means.fit(X)
y_pred = k_means.predict(X)
plt.scatter(X_reduced[:, 0], X_reduced[:, 1], c=y_pred, cmap=cmap);
Explanation: Clustering: K-means
Clustering groups together observations that are homogeneous with respect to a given criterion, finding ''clusters'' in the data.
Note that these clusters will uncover relevant hidden structure of the data only if the criterion used highlights it.
End of explanation
from sklearn.metrics import confusion_matrix
# Compute confusion matrix
cm = confusion_matrix(y, y_pred)
np.set_printoptions(precision=2)
print(cm)
def plot_confusion_matrix(cm, title='Confusion matrix', cmap=plt.cm.Blues):
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(iris.target_names))
plt.xticks(tick_marks, iris.target_names, rotation=45)
plt.yticks(tick_marks, iris.target_names)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.figure()
plot_confusion_matrix(cm)
Explanation: Let's then evaluate the performance of the clustering versus the ground truth
End of explanation
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression()
clf.fit(X, y)
y_pred = clf.predict(X)
cm = confusion_matrix(y, y_pred)
print(cm)
plt.figure()
plot_confusion_matrix(cm)
Explanation: Classification Logistic Regression
End of explanation
from IPython.display import Image
Image(url="http://scikit-learn.org/dev/_static/ml_map.png")
Explanation: Recap: Scikit-learn's estimator interface
Scikit-learn strives to have a uniform interface across all methods,
and we'll see examples of these below. Given a scikit-learn estimator
object named model, the following methods are available:
Available in all Estimators
model.fit() : fit training data. For supervised learning applications,
this accepts two arguments: the data X and the labels y (e.g. model.fit(X, y)).
For unsupervised learning applications, this accepts only a single argument,
the data X (e.g. model.fit(X)).
Available in supervised estimators
model.predict() : given a trained model, predict the label of a new set of data.
This method accepts one argument, the new data X_new (e.g. model.predict(X_new)),
and returns the learned label for each object in the array.
model.predict_proba() : For classification problems, some estimators also provide
this method, which returns the probability that a new observation has each categorical label.
In this case, the label with the highest probability is returned by model.predict().
model.score() : for classification or regression problems, most (all?) estimators implement
a score method. Scores are between 0 and 1, with a larger score indicating a better fit.
Available in unsupervised estimators
model.predict() : predict labels in clustering algorithms.
model.transform() : given an unsupervised model, transform new data into the new basis.
This also accepts one argument X_new, and returns the new representation of the data based
on the unsupervised model.
model.fit_transform() : some estimators implement this method,
which more efficiently performs a fit and a transform on the same input data.
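A minimal sketch of this interface in action, reusing the iris arrays X and y already loaded in this notebook:
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
model.fit(X, y)                             # same call pattern for every supervised estimator
print(model.predict(X[:3]))                 # predicted labels for the first three samples
print(model.predict_proba(X[:3]).round(2))  # class probabilities for those samples
print(model.score(X, y))                    # mean accuracy, between 0 and 1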
Flow Chart: How to Choose your Estimator
This is a flow chart created by scikit-learn super-contributor Andreas Mueller which gives a nice summary of which algorithms to choose in various situations. Keep it around as a handy reference!
End of explanation |
7,025 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Let's use CoNLL 2002 data to build a NER system
CoNLL2002 corpus is available in NLTK. We use Spanish data.
Step1: Data format
Step2: Features
Next, define some features. In this example we use word identity, word suffix, word shape and word POS tag; also, some information from nearby words is used.
This makes a simple baseline, but you certainly can add and remove some features to get (much?) better results - experiment with it.
Step3: This is what word2features extracts
Step4: Extract the features from the data
Step5: Train the model
To train the model, we create pycrfsuite.Trainer, load the training data and call 'train' method.
First, create pycrfsuite.Trainer and load the training data to CRFsuite
Step6: Set training parameters. We will use L-BFGS training algorithm (it is default) with Elastic Net (L1 + L2) regularization.
Step7: Possible parameters for the default training algorithm
Step8: Train the model
Step9: trainer.train saves model to a file
Step10: We can also get information about the final state of the model by looking at the trainer's logparser. If we had tagged our input data using the optional group argument in add, and had used the optional holdout argument during train, there would be information about the trainer's performance on the holdout set as well.
Step11: We can also get this information for every step using trainer.logparser.iterations
Step12: Make predictions
To use the trained model, create pycrfsuite.Tagger, open the model and use "tag" method
Step13: Let's tag a sentence to see how it works
Step15: Evaluate the model
Step16: Predict entity labels for all sentences in our testing set ('testb' Spanish data)
Step17: ..and check the result. Note this report is not comparable to results in CONLL2002 papers because here we check per-token results (not per-entity). Per-entity numbers will be worse.
Step18: Let's check what classifier learned
Step19: We can see that, for example, it is very likely that the beginning of an organization name (B-ORG) will be followed by a token inside organization name (I-ORG), but transitions to I-ORG from tokens with other labels are penalized. Also note I-PER -> B-LOC transition | Python Code:
from itertools import chain

import nltk
import pycrfsuite
from sklearn.metrics import classification_report
from sklearn.preprocessing import LabelBinarizer

nltk.corpus.conll2002.fileids()
%%time
train_sents = list(nltk.corpus.conll2002.iob_sents('esp.train'))
test_sents = list(nltk.corpus.conll2002.iob_sents('esp.testb'))
Explanation: Let's use CoNLL 2002 data to build a NER system
CoNLL2002 corpus is available in NLTK. We use Spanish data.
End of explanation
train_sents[0]
Explanation: Data format:
End of explanation
def word2features(sent, i):
word = sent[i][0]
postag = sent[i][1]
features = [
'bias',
'word.lower=' + word.lower(),
'word[-3:]=' + word[-3:],
'word[-2:]=' + word[-2:],
'word.isupper=%s' % word.isupper(),
'word.istitle=%s' % word.istitle(),
'word.isdigit=%s' % word.isdigit(),
'postag=' + postag,
'postag[:2]=' + postag[:2],
]
if i > 0:
word1 = sent[i-1][0]
postag1 = sent[i-1][1]
features.extend([
'-1:word.lower=' + word1.lower(),
'-1:word.istitle=%s' % word1.istitle(),
'-1:word.isupper=%s' % word1.isupper(),
'-1:postag=' + postag1,
'-1:postag[:2]=' + postag1[:2],
])
else:
features.append('BOS')
if i < len(sent)-1:
word1 = sent[i+1][0]
postag1 = sent[i+1][1]
features.extend([
'+1:word.lower=' + word1.lower(),
'+1:word.istitle=%s' % word1.istitle(),
'+1:word.isupper=%s' % word1.isupper(),
'+1:postag=' + postag1,
'+1:postag[:2]=' + postag1[:2],
])
else:
features.append('EOS')
return features
def sent2features(sent):
return [word2features(sent, i) for i in range(len(sent))]
def sent2labels(sent):
return [label for token, postag, label in sent]
def sent2tokens(sent):
return [token for token, postag, label in sent]
Explanation: Features
Next, define some features. In this example we use word identity, word suffix, word shape and word POS tag; also, some information from nearby words is used.
This makes a simple baseline, but you certainly can add and remove some features to get (much?) better results - experiment with it.
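For instance, purely as an illustration of the kind of feature you could add (not used in the baseline below):
def extra_features(word):
    # hypothetical additions one might append inside word2features()
    return [
        'word[-4:]=' + word[-4:],              # a longer suffix
        'word.has_hyphen=%s' % ('-' in word),  # a simple shape flag
    ]
print(extra_features('CoNLL-2002'))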
End of explanation
sent2features(train_sents[0])[1]
Explanation: This is what word2features extracts:
End of explanation
%%time
X_train = [sent2features(s) for s in train_sents]
y_train = [sent2labels(s) for s in train_sents]
X_test = [sent2features(s) for s in test_sents]
y_test = [sent2labels(s) for s in test_sents]
Explanation: Extract the features from the data:
End of explanation
%%time
trainer = pycrfsuite.Trainer(verbose=False)
for xseq, yseq in zip(X_train, y_train):
trainer.append(xseq, yseq)
Explanation: Train the model
To train the model, we create pycrfsuite.Trainer, load the training data and call 'train' method.
First, create pycrfsuite.Trainer and load the training data to CRFsuite:
End of explanation
trainer.set_params({
'c1': 1.0, # coefficient for L1 penalty
'c2': 1e-3, # coefficient for L2 penalty
'max_iterations': 50, # stop earlier
# include transitions that are possible, but not observed
'feature.possible_transitions': True
})
Explanation: Set training parameters. We will use L-BFGS training algorithm (it is default) with Elastic Net (L1 + L2) regularization.
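As an aside, the optimization algorithm itself is chosen when the Trainer is constructed; python-crfsuite also ships alternatives such as L2-regularized SGD and averaged perceptron. Roughly (check the names documented for your installed version):
# trainer = pycrfsuite.Trainer(algorithm='l2sgd', verbose=False)
# trainer = pycrfsuite.Trainer(algorithm='ap', verbose=False)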
End of explanation
trainer.params()
Explanation: Possible parameters for the default training algorithm:
End of explanation
%%time
trainer.train('conll2002-esp.crfsuite')
Explanation: Train the model:
End of explanation
!ls -lh ./conll2002-esp.crfsuite
Explanation: trainer.train saves model to a file:
End of explanation
trainer.logparser.last_iteration
Explanation: We can also get information about the final state of the model by looking at the trainer's logparser. If we had tagged our input data using the optional group argument in add, and had used the optional holdout argument during train, there would be information about the trainer's performance on the holdout set as well.
End of explanation
print(len(trainer.logparser.iterations), trainer.logparser.iterations[-1])
Explanation: We can also get this information for every step using trainer.logparser.iterations
End of explanation
tagger = pycrfsuite.Tagger()
tagger.open('conll2002-esp.crfsuite')
Explanation: Make predictions
To use the trained model, create pycrfsuite.Tagger, open the model and use "tag" method:
End of explanation
example_sent = test_sents[2]
print(' '.join(sent2tokens(example_sent)), end='\n\n')
print("Predicted:", ' '.join(tagger.tag(sent2features(example_sent))))
print("Correct: ", ' '.join(sent2labels(example_sent)))
Explanation: Let's tag a sentence to see how it works:
End of explanation
def bio_classification_report(y_true, y_pred):
    """Classification report for a list of BIO-encoded sequences.

    It computes token-level metrics and discards "O" labels.

    Note that it requires scikit-learn 0.15+ (or a version from github master)
    to calculate averages properly!
    """
lb = LabelBinarizer()
y_true_combined = lb.fit_transform(list(chain.from_iterable(y_true)))
y_pred_combined = lb.transform(list(chain.from_iterable(y_pred)))
tagset = set(lb.classes_) - {'O'}
tagset = sorted(tagset, key=lambda tag: tag.split('-', 1)[::-1])
class_indices = {cls: idx for idx, cls in enumerate(lb.classes_)}
return classification_report(
y_true_combined,
y_pred_combined,
labels = [class_indices[cls] for cls in tagset],
target_names = tagset,
)
Explanation: Evaluate the model
End of explanation
%%time
y_pred = [tagger.tag(xseq) for xseq in X_test]
Explanation: Predict entity labels for all sentences in our testing set ('testb' Spanish data):
End of explanation
print(bio_classification_report(y_test, y_pred))
Explanation: ..and check the result. Note this report is not comparable to results in CONLL2002 papers because here we check per-token results (not per-entity). Per-entity numbers will be worse.
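If you do want per-entity numbers, one option is the third-party seqeval package (assuming it is installed; it is not used elsewhere in this notebook):
# from seqeval.metrics import classification_report as entity_report
# print(entity_report(y_test, y_pred))   # entity-level precision/recall/F1 per type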
End of explanation
from collections import Counter
info = tagger.info()
def print_transitions(trans_features):
for (label_from, label_to), weight in trans_features:
print("%-6s -> %-7s %0.6f" % (label_from, label_to, weight))
print("Top likely transitions:")
print_transitions(Counter(info.transitions).most_common(15))
print("\nTop unlikely transitions:")
print_transitions(Counter(info.transitions).most_common()[-15:])
Explanation: Let's check what classifier learned
End of explanation
def print_state_features(state_features):
for (attr, label), weight in state_features:
print("%0.6f %-6s %s" % (weight, label, attr))
print("Top positive:")
print_state_features(Counter(info.state_features).most_common(20))
print("\nTop negative:")
print_state_features(Counter(info.state_features).most_common()[-20:])
Explanation: We can see that, for example, it is very likely that the beginning of an organization name (B-ORG) will be followed by a token inside the organization name (I-ORG), but transitions to I-ORG from tokens with other labels are penalized. Also note the I-PER -> B-LOC transition: a positive weight means the model thinks that a person name is often followed by a location.
Check the state features:
End of explanation |
7,026 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lecture 10
Step1: An example we've already encountered is when we're trying to handle an exception.
Step2: There are different categories of scope. It's always helpful to know which of these categories a variable falls into.
Global scope
A variable in global scope can be "seen" and accessed from pretty much anywhere. It's defining characteristic is that it's not created in any particular function or block of any kind. This lack of context makes it global.
Step3: (Small caveat
Step4: (Small caveat
Step5: a and b exist in the global namespace. c and d exist in the function namespace of the function func.
The whole point of namespaces is, essentially, to keep a conceptual grip on the program you're writing.
Anyone using the Rodeo IDE?
Likewise, every function will also have its own namespace of variables. As will every class (which we'll get next week!).
What happens when namespaces collide?
Step6: This effect is referred to as variable shadowing
Step7: If, however, you really want a global variable to be accessed locally--to disable the shadowing that is inherent in Python--you can use the global keyword to tell Python that, yes, this is indeed a global variable.
Step8: Part 2
Step9: In what namespace is b?
Global. It's no different from a.
How about this one
Step10: What is j at the end?
18 (the last value of i in the range--9--times two). Seeing a pattern yet?
Let's go back to the very first example in the lecture.
Step11: What is i in these cases? Is there a case where i does not exist?
Nope, i is in the global namespace.
Blocks
The whole point is to illustrate that blocks in Python--conditionals, loops, exception handlers--all exist in their same enclosing scope and do NOT define new namespaces.
This is somewhat of a departure from Java, where you could define an int counter inside a loop, but it would disappear once the loop ended, so you'd have to define the counter outside the loop in order to use it afterwards.
To illustrate this idea of a namespace being confined to functions, classes, and the global namespace, here's a bunch of nested conditionals that ultimately define a variable
Step12: b is a global variable. So it makes sense that it's accessible anywhere, whether in the print statement or in the nested conditionals. But there's a caveat here--anyone know what it is?
What if one of the conditionals fails?
Here's the same code again, but I've simply changed the starting value of a. | Python Code:
def func(x):
print(x)
x = 10
func(20)
print(x)
Explanation: Lecture 10: Variable Scope
CSCI 1360: Foundations for Informatics and Analytics
Overview and Objectives
We've spoken a lot about data structures and orders of execution (loops, functions, and so on). But now that we're intimately familiar with different ways of blocking our code, we haven't yet touched on how this affects the variables we define, and where it's legal to use them. By the end of this lecture, you should be able to:
Define the scope of a variable, based on where it is created
Understand the concept of a namespace in Python, and its role in limiting variable scope
Conceptualize how variable scoping fits into the larger picture of modular program design
Part 1: What is scope?
(couldn't resist)
Scope refers to where a variable is defined. Another way to look at scope is to ask about the lifetime of a variable.
Hopefully, it doesn't come as a surprise that some variables aren't always accessible everywhere in your program.
End of explanation
import numpy as np
try:
i = np.random.randint(100)
if i % 2 == 0:
raise
except:
copy = i
print(i) # Does this work?
print(copy) # What about this?
Explanation: An example we've already encountered is when we're trying to handle an exception.
End of explanation
# This is a global variable. It can be accessed anywhere in this notebook.
a = 0
Explanation: There are different categories of scope. It's always helpful to know which of these categories a variable falls into.
Global scope
A variable in global scope can be "seen" and accessed from pretty much anywhere. It's defining characteristic is that it's not created in any particular function or block of any kind. This lack of context makes it global.
End of explanation
# This is our global variable, redefined.
a = 0
def f():
# This is a local variable. It disappears when the function ends.
b = 0
print(a) # a still exists here; b does not.
Explanation: (Small caveat: there is the concept of "built-in" scope, such as range or len or SyntaxError, which are technically even more "global" than global variables, since they're seen anywhere in Python writ large. "global" in this context means "seen anywhere in your program")
Local scope
The next step down: these are variables defined within a specific context, such as inside a function, and no longer exist once the function or context ends.
End of explanation
a = 0
b = 0
def func():
c = 0
d = 0
Explanation: (Small caveat: there is the concept of "nonlocal" scope, where you have variables defined inside functions, when those functions are themselves defined inside functions. This gets into functional programming, which Python does support and is gaining momentum in data science, but which is beyond the scope (ha!) of this course)
Namespaces
This brings us to the overarching concept of a namespace.
A namespace is a collection, or pool, of variables in Python. The global namespace is the pool of global variables that exist in a program.
End of explanation
a = 0
def func():
a = 1
print(a) # What gets printed?
Explanation: a and b exist in the global namespace. c and d exist in the function namespace of the function func.
The whole point of namespaces is, essentially, to keep a conceptual grip on the program you're writing.
Anyone using the Rodeo IDE?
Likewise, every function will also have its own namespace of variables. As will every class (which we'll get next week!).
What happens when namespaces collide?
End of explanation
i = 0
def func1():
i = 10
def func2():
i = 20
def func3(i):
i = 40
# ...
def funcOneHundredBillion():
i = 938948292
print(i) # Wait, what is i?
Explanation: This effect is referred to as variable shadowing: the locally-scoped variable takes precedence over the globally-scoped variable. It shadows the global variable.
This is not a bug--in the name of program simplicity, this limits the scope of the effects of changing a variable's value to a single function, rather than your entire program!
If you have multiple functions that all use similar variable-naming conventions--or, even more likely, you have a program that's written by lots of different people who like to use the variable i in everything--it'd be catastrophic if one change to a variable i resulted in a change to every variable i.
End of explanation
i = 10
def func():
global i
i = 20
func()
print(i)
Explanation: If, however, you really want a global variable to be accessed locally--to disable the shadowing that is inherent in Python--you can use the global keyword to tell Python that, yes, this is indeed a global variable.
End of explanation
a = 0
if a == 0:
b = 1
Explanation: Part 2: Scoping and blocks
This is a separate section for any Java/C/C++ converts in the room.
We've seen how Python creates namespaces at different hierarchies--one for every function, one for each class, and one single global namespace--which holds variables that are defined.
But what about variables defined inside blocks--constructs like for loops and if statements and try/except blocks?
Let's take a look at an example.
End of explanation
i = 42
for i in range(10):
i = i * 2
j = i
Explanation: In what namespace is b?
Global. It's no different from a.
How about this one:
End of explanation
import numpy as np
try:
i = np.random.randint(100)
if i % 2 == 0:
raise
except:
print(i) # What is i?
print(i) # What is i?
Explanation: What is j at the end?
18 (the last value of i in the range--9--times two). Seeing a pattern yet?
Let's go back to the very first example in the lecture.
End of explanation
a = 1
if a % 2 == 1:
if 2 - 1 == a:
if a * 1 == 1:
if a / 1 == 1:
for i in range(10):
for j in range(10):
b = i * j
print(b)
Explanation: What is i in these cases? Is there a case where i does not exist?
Nope, i is in the global namespace.
Blocks
The whole point is to illustrate that blocks in Python--conditionals, loops, exception handlers--all exist in their same enclosing scope and do NOT define new namespaces.
This is somewhat of a departure from Java, where you could define an int counter inside a loop, but it would disappear once the loop ended, so you'd have to define the counter outside the loop in order to use it afterwards.
To illustrate this idea of a namespace being confined to functions, classes, and the global namespace, here's a bunch of nested conditionals that ultimately define a variable:
End of explanation
#a = 1
a = 0
if a % 2 == 1:
if 2 - 1 == a:
if a * 1 == 1:
if a / 1 == 1:
for i in range(10):
for j in range(10):
b = i * j
print(b)
Explanation: b is a global variable. So it makes sense that it's accessible anywhere, whether in the print statement or in the nested conditionals. But there's a caveat here--anyone know what it is?
What if one of the conditionals fails?
Here's the same code again, but I've simply changed the starting value of a.
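(A handy runtime check for whether a given name was ever actually created, whatever branches ran:)
print('b' in globals())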
End of explanation |
7,027 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Section 6.4.3
Theis and Hantush implementations and type curves
Timing and accuracy of the implementations
Here we do some testing of Theis and Hantush well function computations using a variety if implementations in Python.
We have implemented 4 variants of the Theis well functions and four variants of the Hantush well function compared them and timed time.
While we can directly use scipy.special.exp1 for the Theis well function, there is no function in scipy.special for the equivalent Hantush wellf function. That's why we have to integrate it ourselves. Be most straight forward and in fact best way is to integrate the kernal of the formula numerically and for that use the scipy.integrate.quad method, as it is fast, stable and at least 10 didgets accurate.
Ed J.M. Veling in has warned that accurate computation of Hantush is essential for many applications and has shown in a paper how to to accurately compute it. In this notebook we explore some implementations of the function and test their performance.
@TO 2020-12-14
Theis and Hantush well functions
Theis well function ($\mbox{W(}u)$)
$$s(r, t) = \frac{Q_0} {4 \pi kD} W_{theis}(u),\,\,\,\,\,u=\frac{r^2 S }{4 kD t}$$
The Theis well function is mathematically known as the exponential integral or $\mbox{expint}(z)$. In Python this function is avaialble in the scipy.special module as exp1. So you import it as
from scipy.special import exp1
or
from scipy.special import exp1 as Wth
This renames exp1 to Wth, if you should prefer it.
$$ W_{theis} = \mbox{expint}(u) = \mbox{scipy.special.exp1}(u)$$
There exist two mathematical formulaa for the exponential integral
$$W(u) = exp1(u) = \mbox{expint}(u) = \intop_u^\infty \frac{e^{-y}} y dy$$
Then there is the power series form
Step5: Two variant implementations of the Theis well fuction
W_theis0
Step10: Four variant implementations of the Hantush well function
Step11: Timing the functions
Step12: Results of the timing
Theis
Step13: The inflection point of the Hantush graphs, where $u=r/(2\lambda)=\rho/2$ | Python Code:
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1
import matplotlib.pyplot as plt
from timeit import timeit
import pdb
def newfig(title=None, xlabel=None, ylabel=None,
xscale=None, yscale=None, xlim=None, ylim=None, figsize=(12, 8), size=15):
fig, ax = plt.subplots()
fig.set_size_inches(figsize)
ax.set_title(title, size=size)
ax.set_xlabel(xlabel, size=size)
ax.set_ylabel(ylabel, size=size)
if xscale: ax.set_xscale(xscale)
if yscale: ax.set_yscale(yscale)
if xlim: ax.set_xlim(xlim)
if ylim: ax.set_ylim(ylim)
ax.grid()
for item in ([ax.title, ax.xaxis.label, ax.yaxis.label] +
ax.get_xticklabels() + ax.get_yticklabels()):
item.set_fontsize(size)
return ax
# Handy for object inspection:
attribs = lambda obj: [o for o in dir(obj) if not o.startswith('_')]
def clrs():
colors = 'brgkmc'
for i in range(100):
yield colors[i % len(colors)]
Explanation: Section 6.4.3
Theis and Hantush implementations and type curves
Timing and accuracy of the implementations
Here we do some testing of Theis and Hantush well function computations using a variety of implementations in Python.
We have implemented four variants of the Theis well function and four variants of the Hantush well function, compared them, and timed them.
While we can directly use scipy.special.exp1 for the Theis well function, there is no function in scipy.special for the equivalent Hantush well function. That's why we have to integrate it ourselves. The most straightforward, and in fact the best, way is to integrate the kernel of the formula numerically, and for that use the scipy.integrate.quad method, as it is fast, stable and accurate to at least 10 digits.
Ed J.M. Veling has warned that accurate computation of Hantush is essential for many applications and has shown in a paper how to accurately compute it. In this notebook we explore some implementations of the function and test their performance.
@TO 2020-12-14
Theis and Hantush well functions
Theis well function ($\mbox{W(}u)$)
$$s(r, t) = \frac{Q_0} {4 \pi kD} W_{theis}(u),\,\,\,\,\,u=\frac{r^2 S }{4 kD t}$$
The Theis well function is mathematically known as the exponential integral or $\mbox{expint}(z)$. In Python this function is available in the scipy.special module as exp1. So you import it as
from scipy.special import exp1
or
from scipy.special import exp1 as Wth
This renames exp1 to Wth, if you should prefer it.
$$ W_{theis} = \mbox{expint}(u) = \mbox{scipy.special.exp1}(u)$$
There exist two mathematical formulas for the exponential integral
$$W(u) = exp1(u) = \mbox{expint}(u) = \intop_u^\infty \frac{e^{-y}} y dy$$
Then there is the power series form:
$$\mbox{expint}(u) = \sum_0^\infty\left[-\gamma - \ln u + u -\frac{u^2}{2 \times 2!}
+\frac{u^3}{3 \times 3!} - \frac{u^4}{4 \times 4!} - ...\right],\,\,\,\,\,\gamma=0.577216...$$
The $\gamma$ is so-called Euler's constant, a basic math constant like $e$, $\pi$ etc.
The second, power series form of Theis's well function comes in very handy for understanding the function's behavior and straight-forward analysis of pumping tests for wells in confined and unconfined aquifers that show Theis behavior, by which we mean, there is no equilibrium ever, because all the extractd water originates from storage in the aquifer only.
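A quick numerical illustration of why the series is so handy: for small u only the first two terms matter, which is the basis of the classic Cooper-Jacob straight-line approximation used in pumping-test analysis. A minimal check:
import numpy as np
from scipy.special import exp1
u = 1e-3
print(exp1(u), -0.577216 - np.log(u))   # nearly identical for small u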
Hantush's well function ($\mbox{Wh}(u, \rho)$).
Hantush considers transient drawdown due to a well well in a semie-confined aquifer. Hence there is an drawdwown-induced infiltration from an adjacent layer, in which the head is assumed constant. This means that extrated water comes not only from storage (which is the case initially) but also from this induced infiltration. After longer times the full extraction originates from this induced infiltration, due to which the drawdown becomes stationary. But in very early times, the drawdown of the Theis and Hantush wells are the same. So, also the mathematical form of the Hantush solution resembles that of the Theis solution.
$$s(r, t) = \frac {Q_0}{4 \pi kD} Wh(u, \rho),\,\,\,\,\,u=\frac{r^2 S}{4 kD t}, \,\,\,\rho = \frac r \lambda$$
where $\lambda = \sqrt{kD c}$.
Like it is the case with the Theis well function, the Hantush well function can be written as an definite integral
$$W_h(u,\rho) = \intop_u^\infty \frac{e^{-y -\frac{\left(\frac{\rho}{2}\right)^2}{y} }}{y} dy$$
The Hantush well function may also be computed as a power series:
$$ W_h(u, \rho) = \sum_{n=0}^{\infty}\frac {-1^n} {n!} \left( \frac \rho 2 \right)^{2n} u^{-n} E_{n+1}\left(\frac {\rho^2} {4 u} \right) $$
$$ E_{n+1} = \frac 1 n \left[ e^{-u} - u E_n (u) \right] , \,\,(n=1, 2, 3, ...) $$
In which $E_i$ is the ith repeated integral of the exponential function and $E_1 = \mbox{expint}$.
Four methods are implemented below. But in conclusion, just stay with the one using the quad as it is fast enough and extremely accurate.
Import required functionality
End of explanation
def W_theis0(u):
    """Return Theis well function using scipy.special function exp1 directly."""
return exp1(u)
def W_theis1(u):
    """Return Theis well function by integrating using scipy functionality.

    This turns out to be a very accurate yet fast implementation, about as fast
    as the exp1 function from scipy.special.

    In fact we define three functions and finally compute the desired answer
    with the last one. The three functions are nicely packaged in the overall
    W_theis1 function.
    """
def funcTh(y): return np.exp(-y) / y
def Wth2(u): return quad(funcTh, u, np.inf)
WTh = np.frompyfunc(Wth2, 1, 2)
return WTh(u)
def W_theis2(u, practically_log_inf=20, steps_per_log_cycle=50):
    """Theis well function using smart integration."""
if np.isscalar(u):
u = np.array([u])
    # Generate integration points from the first u to practically infinity and mix in the
    # given u, so they are in the array of u values.
lu0 = np.log10(u[0])
n = int((practically_log_inf - lu0) * steps_per_log_cycle)
uu = np.unique(np.hstack((np.logspace(lu0, practically_log_inf, n), u)))
kernel = np.exp(-uu)
dlnu = np.diff(np.log(uu))
Wuu = np.cumsum(np.hstack((0, (kernel[:-1] + kernel[1:]) * dlnu / 2)))
Wuu = Wuu[-1] - Wuu # This holds the integral from each uu to infinity
# So now just look up the Wuu values where uu is u
W = np.zeros_like(u)
for i, ui in enumerate(u):
W[i] = Wuu[np.where(uu==ui)[0][0]]
return W
def W_theis3(u):
    """Return Theis well function using power series."""
tol = 1e-16
gam = 0.577216
if np.isscalar(u):
u = np.array([u])
u1 = u[u <= 15] # All outcomes for u > 15 are identical to zero
terms0 = u1
W = -gam - np.log(u1) + terms0
for i in range(2, 250):
terms1 = -terms0 * u1 * (i -1) / (i * i)
W += terms1
if np.max(np.abs(terms0 + terms1)) < tol:
break
terms0 = terms1
return np.hstack((W, np.zeros_like(u[u > 15])))
Explanation: Two variant implementations of the Theis well fuction
W_theis0: exp1 directly from scipy.special
W_theis1: by integration using scipy and numpy functionality.
End of explanation
def W_hantush0(u=None, rho=None):
    """Return Hantush well function computed as a power series."""
w = 0.
r2u = (rho/2) ** 2 / u
term = exp1(r2u)
E0 = exp1(r2u)
w = term
for n in range(1, 11):
E1 = (1/n) * (np.exp(-r2u) - r2u * E0)
term = term * (-1)/(n+1) * (rho/2) ** 2 / u * E1/E0
w += term
E0 = E1
return w
def W_hantush1(u, rho):
    """Return Hantush well function by straightforward integration.

    A large number of points is required for accuracy, and even then it won't
    be as accurate as the quad method from scipy.integrate, which is
    also at least as fast.
    """
if np.isscalar(u):
u = np.asarray([u])
w = np.zeros_like(u)
for i, uu in enumerate(u):
y = np.logspace(np.log10(uu), 10, 5000)
arg = np.exp(-y - (rho/2) ** 2 / y ) / y
w[i] = np.sum(np.diff(y) * 0.5 * (arg[:-1]+ arg[1:]))
return w
def W_hantush2(u, rho):
    """Return Hantush well function by integration, trying to be smarter.

    This function is no faster than the previous one with 5000 points.

    Parameters
    ----------
    u : np.ndarray of floats
        an array of u values, u = r**2 S / (4 kD t)
    rho : float
        value of r/lambda with lambda = sqrt(kD c)
    """
if np.isscalar(u):
u = np.asarray([u])
uu = np.unique(np.hstack((np.logspace(np.log10(np.min(u)), 10, 5000), u)))
arg = np.exp(-uu - (rho/2) ** 2 / uu) / uu
duu = np.diff(uu)
S = np.hstack((0, (arg[1:] + arg[:-1])* duu / 2))
Wsum = np.zeros_like(u)
for i, ui in enumerate(u):
Wsum[i] = np.sum(S[uu > ui])
return Wsum
def W_hantush3(u, rho):
    """Return Hantush well function by integration using scipy functionality.

    This is efficient and accurate to 1e-9, which the other direct integration
    methods don't achieve, even with 5000 points.
    """
def whkernel(y, rho): return np.exp(-y - (rho/2) ** 2 / y ) / y
def whquad(u, rho): return quad(whkernel, u, np.inf, args=(rho))
Wh = np.frompyfunc(whquad, 2, 2) # 2 inputs and tow outputs h and err
return Wh(u, rho)[0] # cut-off err
Explanation: Four variant implementations of the Hantush well function
End of explanation
u = np.logspace(-3, 1, 41)
rho = 0.003
theis_funcs = [W_theis0, W_theis1, W_theis2, W_theis3]
hantush_funcs = [W_hantush0, W_hantush1, W_hantush2, W_hantush3]
for i, f in enumerate(theis_funcs):
print(f'W_theis{i}: ', f(u)[:3])
for i, f in enumerate(hantush_funcs):
print(f'W_hantush{i}: ',f(u, rho)[:3])
print('W_theis0 :')
%timeit W_theis0(u)
print('W_theis1(u) :')
%timeit W_theis1(u)
print('W_theis2(u) :')
%timeit W_theis2(u)
print('W_theis3(u) :')
%timeit W_theis3(u)
print('W_hantush0(u, rho) :')
%timeit W_hantush0(u, rho)
print('W_hantus1(, rho) :')
%timeit W_hantush1(u, rho)
print('W_hantush2(u, rho) :')
%timeit W_hantush2(u, rho)
print('W_hantush3(u, rho) :')
%timeit W_hantush3(u, rho)
Explanation: Timing the functions
End of explanation
rhos = [0.01, 0.03, 0.1, 0.3, 1,]
colors = 'brgkmc'
u = np.logspace(-6, 1, 71)
ax = newfig('Theis and Hantush type curves', "1/u", r"W(u), Wh(u, $\rho$)", xscale='log', yscale='linear',
ylim=(10, -0.1))
ax.plot(1/u, W_theis0(u), lw=3, label='Theis', zorder=100)
for i, rho in enumerate(rhos):
clr = colors[i % len(colors)]
#ax.plot(1/u, W_hantush2(u, rho), '.', label='rho={:.1f}'.format(rho))
ax.plot(1/u, W_hantush3(u, rho), color=clr, label=r'Hantush, $\rho$={:.2f}'.format(rho))
ax.plot(1/(rho/2), W_hantush3(rho/2, rho), 'o', color=clr, zorder=100 + 1)
ax.legend()
rhos = [0., 0.1, 0.3, 1, 3]
u = np.logspace(-6, 1, 71)
plt.title('Hantush type curves')
plt.xscale('log')
plt.yscale('linear')
plt.grid()
for rho in rhos:
plt.plot(1/u, W_hantush2(u, rho), '.', label='rho={:.1f}'.format(rho))
plt.plot(1/u, W_hantush3(u, rho), label='rho={:.1f}'.format(rho))
plt.legend()
plt.show()
Explanation: Results of the timing
Theis:
W_theis0 :
6.06 µs ± 261 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
W_theis1(u) :
7.11 µs ± 163 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
W_theis2(u) :
299 µs ± 6.79 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
W_theis3(u) :
553 µs ± 33.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
There is almost no difference in speed between directly using exp1 from scipy and integrating numerically using quad. Both are equally accurate.
The explicit integration is slow, just like the summation.
Hantush:
W_hantush0(u, rho) :
86 µs ± 1.69 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
W_hantus1(, rho) :
7.53 ms ± 72.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
W_hantush2(u, rho) :
882 µs ± 26.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
W_hantush3(u, rho) :
8.64 ms ± 75.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Note that my "smart" integration (W_hantush2) is about 9 time faster than the simple integration and th quad solution. So it turns aout to be smart enough after all.
The smart and normal integration methods are equally accurate to 5 didgets with 5000 points and haveing 1e10 as upper limit. The quad method has 10 didgets accuracy.
The series method is the slowest of all, 10 times slower than the quad and simple integration methods and 100 times slower than my smart method.
The series method is also not accurate. The number of terms to include must be
much larger, which would make it even slower to compute.
End of explanation
rhos = [0.01, 0.03, 0.1, 0.3, 1, 3, 5]
s = r"The inflection point is where $u = r/(2 \lambda) = \rho/2$"
u = np.logspace(-6, 1, 71)
ax = newfig('Hantush type curves, ' + s, "1/u", r"W($u, r/\lambda$)", xscale='log', figsize=(12, 6), size=15)
for rho, clr in zip(rhos, clrs()):
ax.plot(1/u, W_hantush3(u, rho), color=clr, label='rho={:.2f}'.format(rho))
ax.plot(2/rho, W_hantush3(rho/2, rho), 'o', color=clr, label="Inflection point.")
ax.legend()
plt.show()
Explanation: The inflection point of the Hantush graphs, where $u=r/(2\lambda)=\rho/2$
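The classical Hantush symmetry relation implies that at the inflection point, where u = rho/2, the well function equals K0(rho), i.e. half the steady-state value 2 K0(rho). A quick check against the quad-based implementation above:
from scipy.special import k0
rho = 0.3
print(W_hantush3(rho / 2, rho), k0(rho))   # should agree closely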
End of explanation |
7,028 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Neural Machine Translation
Welcome to your first programming assignment for this week!
You will build a Neural Machine Translation (NMT) model to translate human readable dates ("25th of June, 2009") into machine readable dates ("2009-06-25"). You will do this using an attention model, one of the most sophisticated sequence to sequence models.
This notebook was produced together with NVIDIA's Deep Learning Institute.
Let's load all the packages you will need for this assignment.
Step1: 1 - Translating human readable dates into machine readable dates
The model you will build here could be used to translate from one language to another, such as translating from English to Hindi. However, language translation requires massive datasets and usually takes days of training on GPUs. To give you a place to experiment with these models even without using massive datasets, we will instead use a simpler "date translation" task.
The network will input a date written in a variety of possible formats (e.g. "the 29th of August 1958", "03/30/1968", "24 JUNE 1987") and translate them into standardized, machine readable dates (e.g. "1958-08-29", "1968-03-30", "1987-06-24"). We will have the network learn to output dates in the common machine-readable format YYYY-MM-DD.
<!--
Take a look at [nmt_utils.py](./nmt_utils.py) to see all the formatting. Count and figure out how the formats work, you will need this knowledge later. !-->
1.1 - Dataset
We will train the model on a dataset of 10000 human readable dates and their equivalent, standardized, machine readable dates. Let's run the following cells to load the dataset and print some examples.
Step2: You've loaded
Step3: You now have
Step4: 2 - Neural machine translation with attention
If you had to translate a book's paragraph from French to English, you would not read the whole paragraph, then close the book and translate. Even during the translation process, you would read/re-read and focus on the parts of the French paragraph corresponding to the parts of the English you are writing down.
The attention mechanism tells a Neural Machine Translation model where it should pay attention to at any step.
2.1 - Attention mechanism
In this part, you will implement the attention mechanism presented in the lecture videos. Here is a figure to remind you how the model works. The diagram on the left shows the attention model. The diagram on the right shows what one "Attention" step does to calculate the attention variables $\alpha^{\langle t, t' \rangle}$, which are used to compute the context variable $context^{\langle t \rangle}$ for each timestep in the output ($t=1, \ldots, T_y$).
<table>
<td>
<img src="images/attn_model.png" style="width
Step6: Now you can use these layers to implement one_step_attention(). In order to propagate a Keras tensor object X through one of these layers, use layer(X) (or layer([X,Y]) if it requires multiple inputs.), e.g. densor(X) will propagate X through the Dense(1) layer defined above.
Step7: You will be able to check the expected output of one_step_attention() after you've coded the model() function.
Exercise
Step9: Now you can use these layers $T_y$ times in a for loop to generate the outputs, and their parameters will not be reinitialized. You will have to carry out the following steps
Step10: Run the following cell to create your model.
Step11: Let's get a summary of the model to check if it matches the expected output.
Step12: Expected Output
Step13: The last step is to define all your inputs and outputs to fit the model
Step14: Let's now fit the model and run it for one epoch.
Step15: While training you can see the loss as well as the accuracy on each of the 10 positions of the output. The table below gives you an example of what the accuracies could be if the batch had 2 examples
Step16: You can now see the results on new examples.
Step17: You can also change these examples to test with your own examples. The next part will give you a better sense on what the attention mechanism is doing--i.e., what part of the input the network is paying attention to when generating a particular output character.
3 - Visualizing Attention (Optional / Ungraded)
Since the problem has a fixed output length of 10, it is also possible to carry out this task using 10 different softmax units to generate the 10 characters of the output. But one advantage of the attention model is that each part of the output (say the month) knows it needs to depend only on a small part of the input (the characters in the input giving the month). We can visualize what part of the output is looking at what part of the input.
Consider the task of translating "Saturday 9 May 2018" to "2018-05-09". If we visualize the computed $\alpha^{\langle t, t' \rangle}$ we get this
Step18: Navigate through the output of model.summary() above. You can see that the layer named attention_weights outputs the alphas of shape (m, 30, 1) before dot_2 computes the context vector for every time step $t = 0, \ldots, T_y-1$. Lets get the activations from this layer.
The function attention_map() pulls out the attention values from your model and plots them. | Python Code:
from keras.layers import Bidirectional, Concatenate, Permute, Dot, Input, LSTM, Multiply
from keras.layers import RepeatVector, Dense, Activation, Lambda
from keras.optimizers import Adam
from keras.utils import to_categorical
from keras.models import load_model, Model
import keras.backend as K
import numpy as np
from faker import Faker
import random
from tqdm import tqdm
from babel.dates import format_date
from nmt_utils import *
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Neural Machine Translation
Welcome to your first programming assignment for this week!
You will build a Neural Machine Translation (NMT) model to translate human readable dates ("25th of June, 2009") into machine readable dates ("2009-06-25"). You will do this using an attention model, one of the most sophisticated sequence to sequence models.
This notebook was produced together with NVIDIA's Deep Learning Institute.
Let's load all the packages you will need for this assignment.
End of explanation
m = 10000
dataset, human_vocab, machine_vocab, inv_machine_vocab = load_dataset(m)
dataset[:10]
Explanation: 1 - Translating human readable dates into machine readable dates
The model you will build here could be used to translate from one language to another, such as translating from English to Hindi. However, language translation requires massive datasets and usually takes days of training on GPUs. To give you a place to experiment with these models even without using massive datasets, we will instead use a simpler "date translation" task.
The network will input a date written in a variety of possible formats (e.g. "the 29th of August 1958", "03/30/1968", "24 JUNE 1987") and translate them into standardized, machine readable dates (e.g. "1958-08-29", "1968-03-30", "1987-06-24"). We will have the network learn to output dates in the common machine-readable format YYYY-MM-DD.
<!--
Take a look at [nmt_utils.py](./nmt_utils.py) to see all the formatting. Count and figure out how the formats work, you will need this knowledge later. !-->
1.1 - Dataset
We will train the model on a dataset of 10000 human readable dates and their equivalent, standardized, machine readable dates. Let's run the following cells to load the dataset and print some examples.
End of explanation
Tx = 30
Ty = 10
X, Y, Xoh, Yoh = preprocess_data(dataset, human_vocab, machine_vocab, Tx, Ty)
print("X.shape:", X.shape)
print("Y.shape:", Y.shape)
print("Xoh.shape:", Xoh.shape)
print("Yoh.shape:", Yoh.shape)
Explanation: You've loaded:
- dataset: a list of tuples of (human readable date, machine readable date)
- human_vocab: a python dictionary mapping all characters used in the human readable dates to an integer-valued index
- machine_vocab: a python dictionary mapping all characters used in machine readable dates to an integer-valued index. These indices are not necessarily consistent with human_vocab.
- inv_machine_vocab: the inverse dictionary of machine_vocab, mapping from indices back to characters.
Let's preprocess the data and map the raw text data into the index values. We will also use Tx=30 (which we assume is the maximum length of the human readable date; if we get a longer input, we would have to truncate it) and Ty=10 (since "YYYY-MM-DD" is 10 characters long).
End of explanation
index = 0
print("Source date:", dataset[index][0])
print("Target date:", dataset[index][1])
print()
print("Source after preprocessing (indices):", X[index])
print("Target after preprocessing (indices):", Y[index])
print()
print("Source after preprocessing (one-hot):", Xoh[index])
print("Target after preprocessing (one-hot):", Yoh[index])
Explanation: You now have:
- X: a processed version of the human readable dates in the training set, where each character is replaced by an index mapped to the character via human_vocab. Each date is further padded to $T_x$ values with a special character (< pad >). X.shape = (m, Tx)
- Y: a processed version of the machine readable dates in the training set, where each character is replaced by the index it is mapped to in machine_vocab. You should have Y.shape = (m, Ty).
- Xoh: one-hot version of X, the "1" entry's index is mapped to the character thanks to human_vocab. Xoh.shape = (m, Tx, len(human_vocab))
- Yoh: one-hot version of Y, the "1" entry's index is mapped to the character thanks to machine_vocab. Yoh.shape = (m, Tx, len(machine_vocab)). Here, len(machine_vocab) = 11 since there are 11 characters ('-' as well as 0-9).
Lets also look at some examples of preprocessed training examples. Feel free to play with index in the cell below to navigate the dataset and see how source/target dates are preprocessed.
End of explanation
# Defined shared layers as global variables
repeator = RepeatVector(Tx)
concatenator = Concatenate(axis=-1)
densor1 = Dense(10, activation = "tanh")
densor2 = Dense(1, activation = "relu")
activator = Activation(softmax, name='attention_weights') # We are using a custom softmax(axis = 1) loaded in this notebook
dotor = Dot(axes = 1)
Explanation: 2 - Neural machine translation with attention
If you had to translate a book's paragraph from French to English, you would not read the whole paragraph, then close the book and translate. Even during the translation process, you would read/re-read and focus on the parts of the French paragraph corresponding to the parts of the English you are writing down.
The attention mechanism tells a Neural Machine Translation model where it should pay attention to at any step.
2.1 - Attention mechanism
In this part, you will implement the attention mechanism presented in the lecture videos. Here is a figure to remind you how the model works. The diagram on the left shows the attention model. The diagram on the right shows what one "Attention" step does to calculate the attention variables $\alpha^{\langle t, t' \rangle}$, which are used to compute the context variable $context^{\langle t \rangle}$ for each timestep in the output ($t=1, \ldots, T_y$).
<table>
<td>
<img src="images/attn_model.png" style="width:500;height:500px;"> <br>
</td>
<td>
<img src="images/attn_mechanism.png" style="width:500;height:500px;"> <br>
</td>
</table>
<caption><center> Figure 1: Neural machine translation with attention</center></caption>
Here are some properties of the model that you may notice:
There are two separate LSTMs in this model (see diagram on the left). Because the one at the bottom of the picture is a Bi-directional LSTM and comes before the attention mechanism, we will call it pre-attention Bi-LSTM. The LSTM at the top of the diagram comes after the attention mechanism, so we will call it the post-attention LSTM. The pre-attention Bi-LSTM goes through $T_x$ time steps; the post-attention LSTM goes through $T_y$ time steps.
The post-attention LSTM passes $s^{\langle t \rangle}, c^{\langle t \rangle}$ from one time step to the next. In the lecture videos, we were using only a basic RNN for the post-activation sequence model, so the state captured by the RNN output activations $s^{\langle t\rangle}$. But since we are using an LSTM here, the LSTM has both the output activation $s^{\langle t\rangle}$ and the hidden cell state $c^{\langle t\rangle}$. However, unlike previous text generation examples (such as Dinosaurus in week 1), in this model the post-activation LSTM at time $t$ does will not take the specific generated $y^{\langle t-1 \rangle}$ as input; it only takes $s^{\langle t\rangle}$ and $c^{\langle t\rangle}$ as input. We have designed the model this way, because (unlike language generation where adjacent characters are highly correlated) there isn't as strong a dependency between the previous character and the next character in a YYYY-MM-DD date.
We use $a^{\langle t \rangle} = [\overrightarrow{a}^{\langle t \rangle}; \overleftarrow{a}^{\langle t \rangle}]$ to represent the concatenation of the activations of both the forward-direction and backward-directions of the pre-attention Bi-LSTM.
The diagram on the right uses a RepeatVector node to copy $s^{\langle t-1 \rangle}$'s value $T_x$ times, and then Concatenation to concatenate $s^{\langle t-1 \rangle}$ and $a^{\langle t \rangle}$ to compute $e^{\langle t, t'}$, which is then passed through a softmax to compute $\alpha^{\langle t, t' \rangle}$. We'll explain how to use RepeatVector and Concatenation in Keras below.
Lets implement this model. You will start by implementing two functions: one_step_attention() and model().
1) one_step_attention(): At step $t$, given all the hidden states of the Bi-LSTM ($[a^{<1>},a^{<2>}, ..., a^{<T_x>}]$) and the previous hidden state of the second LSTM ($s^{<t-1>}$), one_step_attention() will compute the attention weights ($[\alpha^{<t,1>},\alpha^{<t,2>}, ..., \alpha^{<t,T_x>}]$) and output the context vector (see Figure 1 (right) for details):
$$context^{<t>} = \sum_{t' = 0}^{T_x} \alpha^{<t,t'>}a^{<t'>}\tag{1}$$
Note that we are denoting the attention in this notebook $context^{\langle t \rangle}$. In the lecture videos, the context was denoted $c^{\langle t \rangle}$, but here we are calling it $context^{\langle t \rangle}$ to avoid confusion with the (post-attention) LSTM's internal memory cell variable, which is sometimes also denoted $c^{\langle t \rangle}$.
2) model(): Implements the entire model. It first runs the input through a Bi-LSTM to get back $[a^{<1>},a^{<2>}, ..., a^{<T_x>}]$. Then, it calls one_step_attention() $T_y$ times (for loop). At each iteration of this loop, it gives the computed context vector $c^{<t>}$ to the second LSTM, and runs the output of the LSTM through a dense layer with softmax activation to generate a prediction $\hat{y}^{<t>}$.
Exercise: Implement one_step_attention(). The function model() will call the layers in one_step_attention() $T_y$ times using a for-loop, and it is important that all $T_y$ copies have the same weights. I.e., it should not re-initialize the weights every time. In other words, all $T_y$ steps should have shared weights. Here's how you can implement layers with shareable weights in Keras:
1. Define the layer objects (as global variables for examples).
2. Call these objects when propagating the input.
We have defined the layers you need as global variables. Please run the following cells to create them. Please check the Keras documentation to make sure you understand what these layers are: RepeatVector(), Concatenate(), Dense(), Activation(), Dot().
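As a minimal illustration of this shared-weights idiom (a toy sketch, not part of the assignment; the tensor shapes here are made up):
shared_densor = Dense(1)          # one layer object, defined once
t1 = Input(shape=(4,))
t2 = Input(shape=(4,))
out1 = shared_densor(t1)          # first call creates the weights
out2 = shared_densor(t2)          # second call reuses the SAME weights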
End of explanation
# GRADED FUNCTION: one_step_attention
def one_step_attention(a, s_prev):
Performs one step of attention: Outputs a context vector computed as a dot product of the attention weights
"alphas" and the hidden states "a" of the Bi-LSTM.
Arguments:
a -- hidden state output of the Bi-LSTM, numpy-array of shape (m, Tx, 2*n_a)
s_prev -- previous hidden state of the (post-attention) LSTM, numpy-array of shape (m, n_s)
Returns:
context -- context vector, input of the next (post-attetion) LSTM cell
### START CODE HERE ###
# Use repeator to repeat s_prev to be of shape (m, Tx, n_s) so that you can concatenate it with all hidden states "a" (≈ 1 line)
s_prev = repeator(s_prev)
# Use concatenator to concatenate a and s_prev on the last axis (≈ 1 line)
concat = concatenator([a, s_prev])
# Use densor1 to propagate concat through a small fully-connected neural network to compute the "intermediate energies" variable e. (≈1 lines)
e = densor1(concat)
# Use densor2 to propagate e through a small fully-connected neural network to compute the "energies" variable energies. (≈1 lines)
energies = densor2(e)
# Use "activator" on "energies" to compute the attention weights "alphas" (≈ 1 line)
alphas = activator(energies)
# Use dotor together with "alphas" and "a" to compute the context vector to be given to the next (post-attention) LSTM-cell (≈ 1 line)
context = dotor([alphas, a])
### END CODE HERE ###
return context
Explanation: Now you can use these layers to implement one_step_attention(). In order to propagate a Keras tensor object X through one of these layers, use layer(X) (or layer([X,Y]) if it requires multiple inputs.), e.g. densor(X) will propagate X through the Dense(1) layer defined above.
End of explanation
n_a = 32
n_s = 64
post_activation_LSTM_cell = LSTM(n_s, return_state = True)
output_layer = Dense(len(machine_vocab), activation=softmax)
Explanation: You will be able to check the expected output of one_step_attention() after you've coded the model() function.
Exercise: Implement model() as explained in figure 2 and the text above. Again, we have defined global layers that will share weights to be used in model().
End of explanation
# GRADED FUNCTION: model
def model(Tx, Ty, n_a, n_s, human_vocab_size, machine_vocab_size):
Arguments:
Tx -- length of the input sequence
Ty -- length of the output sequence
n_a -- hidden state size of the Bi-LSTM
n_s -- hidden state size of the post-attention LSTM
human_vocab_size -- size of the python dictionary "human_vocab"
machine_vocab_size -- size of the python dictionary "machine_vocab"
Returns:
model -- Keras model instance
# Define the inputs of your model with a shape (Tx,)
# Define s0 and c0, initial hidden state for the decoder LSTM of shape (n_s,)
X = Input(shape=(Tx, human_vocab_size))
s0 = Input(shape=(n_s,), name='s0')
c0 = Input(shape=(n_s,), name='c0')
s = s0
c = c0
# Initialize empty list of outputs
outputs = []
### START CODE HERE ###
# Step 1: Define your pre-attention Bi-LSTM. Remember to use return_sequences=True. (≈ 1 line)
a = Bidirectional(LSTM(n_a, return_sequences=True))
# Step 2: Iterate for Ty steps
for t in range(Ty):
# Step 2.A: Perform one step of the attention mechanism to get back the context vector at step t (≈ 1 line)
context = one_step_attention(a(X), s)
# Step 2.B: Apply the post-attention LSTM cell to the "context" vector.
# Don't forget to pass: initial_state = [hidden state, cell state] (≈ 1 line)
s, _, c = post_activation_LSTM_cell(context, initial_state=[s, c])
# Step 2.C: Apply Dense layer to the hidden state output of the post-attention LSTM (≈ 1 line)
out = output_layer(s)
# Step 2.D: Append "out" to the "outputs" list (≈ 1 line)
outputs.append(out)
# Step 3: Create model instance taking three inputs and returning the list of outputs. (≈ 1 line)
model = Model(inputs=[X, s0, c0], outputs=outputs)
### END CODE HERE ###
return model
Explanation: Now you can use these layers $T_y$ times in a for loop to generate the outputs, and their parameters will not be reinitialized. You will have to carry out the following steps:
Propagate the input into a Bidirectional LSTM
Iterate for $t = 0, \dots, T_y-1$:
Call one_step_attention() on $[\alpha^{<t,1>},\alpha^{<t,2>}, ..., \alpha^{<t,T_x>}]$ and $s^{<t-1>}$ to get the context vector $context^{<t>}$.
Give $context^{<t>}$ to the post-attention LSTM cell. Remember pass in the previous hidden-state $s^{\langle t-1\rangle}$ and cell-states $c^{\langle t-1\rangle}$ of this LSTM using initial_state= [previous hidden state, previous cell state]. Get back the new hidden state $s^{<t>}$ and the new cell state $c^{<t>}$.
Apply a softmax layer to $s^{<t>}$, get the output.
Save the output by adding it to the list of outputs.
Create your Keras model instance, it should have three inputs ("inputs", $s^{<0>}$ and $c^{<0>}$) and output the list of "outputs".
End of explanation
model = model(Tx, Ty, n_a, n_s, len(human_vocab), len(machine_vocab))
Explanation: Run the following cell to create your model.
End of explanation
model.summary()
Explanation: Let's get a summary of the model to check if it matches the expected output.
End of explanation
### START CODE HERE ### (≈2 lines)
opt = Adam(lr=0.005, beta_1=0.9, beta_2=0.999, decay=0.01)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])
### END CODE HERE ###
Explanation: Expected Output:
Here is the summary you should see
<table>
<tr>
<td>
**Total params:**
</td>
<td>
52,960
</td>
</tr>
<tr>
<td>
**Trainable params:**
</td>
<td>
52,960
</td>
</tr>
<tr>
<td>
**Non-trainable params:**
</td>
<td>
0
</td>
</tr>
<tr>
<td>
**bidirectional_1's output shape **
</td>
<td>
(None, 30, 64)
</td>
</tr>
<tr>
<td>
**repeat_vector_1's output shape **
</td>
<td>
(None, 30, 64)
</td>
</tr>
<tr>
<td>
**concatenate_1's output shape **
</td>
<td>
(None, 30, 128)
</td>
</tr>
<tr>
<td>
**attention_weights's output shape **
</td>
<td>
(None, 30, 1)
</td>
</tr>
<tr>
<td>
**dot_1's output shape **
</td>
<td>
(None, 1, 64)
</td>
</tr>
<tr>
<td>
**dense_3's output shape **
</td>
<td>
(None, 11)
</td>
</tr>
</table>
As usual, after creating your model in Keras, you need to compile it and define the loss, optimizer and metrics you want to use. Compile your model using categorical_crossentropy loss, a custom Adam optimizer (learning rate = 0.005, $\beta_1 = 0.9$, $\beta_2 = 0.999$, decay = 0.01) and ['accuracy'] metrics:
End of explanation
s0 = np.zeros((m, n_s))
c0 = np.zeros((m, n_s))
outputs = list(Yoh.swapaxes(0,1))
Explanation: The last step is to define all your inputs and outputs to fit the model:
- You already have X of shape $(m = 10000, T_x = 30)$ containing the training examples.
- You need to create s0 and c0 to initialize your post_activation_LSTM_cell with 0s.
- Given the model() you coded, you need "outputs" to be a list of 10 elements (one per output time step), each of shape (m, len(machine_vocab)). So that: outputs[j][i] is the one-hot true label of the $j^{th}$ output character for the $i^{th}$ training example (X[i]).
End of explanation
model.fit([Xoh, s0, c0], outputs, epochs=1, batch_size=100)
Explanation: Let's now fit the model and run it for one epoch.
End of explanation
model.load_weights('models/model.h5')
Explanation: While training you can see the loss as well as the accuracy on each of the 10 positions of the output. The table below gives you an example of what the accuracies could be if the batch had 2 examples:
<img src="images/table.png" style="width:700;height:200px;"> <br>
<caption><center>Thus, dense_2_acc_8: 0.89 means that you are predicting the 7th character of the output correctly 89% of the time in the current batch of data. </center></caption>
We have run this model for longer, and saved the weights. Run the next cell to load our weights. (By training a model for several minutes, you should be able to obtain a model of similar accuracy, but loading our model will save you time.)
End of explanation
EXAMPLES = ['3 May 1979', '5 April 09', '21th of August 2016', 'Tue 10 Jul 2007', 'Saturday May 9 2018', 'March 3 2001', 'March 3rd 2001', '1 March 2001']
for example in EXAMPLES:
source = string_to_int(example, Tx, human_vocab)
source = np.array(list(map(lambda x: to_categorical(x, num_classes=len(human_vocab)), source))).swapaxes(0,1)
prediction = model.predict([source, s0, c0])
prediction = np.argmax(prediction, axis = -1)
output = [inv_machine_vocab[int(i)] for i in prediction]
print("source:", example)
print("output:", ''.join(output))
Explanation: You can now see the results on new examples.
End of explanation
model.summary()
Explanation: You can also change these examples to test with your own examples. The next part will give you a better sense on what the attention mechanism is doing--i.e., what part of the input the network is paying attention to when generating a particular output character.
3 - Visualizing Attention (Optional / Ungraded)
Since the problem has a fixed output length of 10, it is also possible to carry out this task using 10 different softmax units to generate the 10 characters of the output. But one advantage of the attention model is that each part of the output (say the month) knows it needs to depend only on a small part of the input (the characters in the input giving the month). We can visualize what part of the output is looking at what part of the input.
Consider the task of translating "Saturday 9 May 2018" to "2018-05-09". If we visualize the computed $\alpha^{\langle t, t' \rangle}$ we get this:
<img src="images/date_attention.png" style="width:600;height:300px;"> <br>
<caption><center> Figure 8: Full Attention Map</center></caption>
Notice how the output ignores the "Saturday" portion of the input. None of the output timesteps are paying much attention to that portion of the input. We see also that 9 has been translated as 09 and May has been correctly translated into 05, with the output paying attention to the parts of the input it needs to to make the translation. The year mostly requires it to pay attention to the input's "18" in order to generate "2018."
3.1 - Getting the activations from the network
Lets now visualize the attention values in your network. We'll propagate an example through the network, then visualize the values of $\alpha^{\langle t, t' \rangle}$.
To figure out where the attention values are located, let's start by printing a summary of the model .
End of explanation
attention_map = plot_attention_map(model, human_vocab, inv_machine_vocab, "Tuesday 09 Oct 1993", num = 7, n_s = 64)
Explanation: Navigate through the output of model.summary() above. You can see that the layer named attention_weights outputs the alphas of shape (m, 30, 1) before dot_2 computes the context vector for every time step $t = 0, \ldots, T_y-1$. Lets get the activations from this layer.
The function attention_map() pulls out the attention values from your model and plots them.
End of explanation |
7,029 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Learning Tutorial
(C) 2019 by Damir Cavar
This notebook was inspired by numerous totorials and other notebooks online, and books like Weidman (2019), ...
General Conventions
In the following Python code I will make use of type hints for Python to make explicit the variable and return types of all the functions used. This is supposed to make the code semantics more transparent.
Step1: Our Core Python Libraries
We will make use of the scientific computing package numpy in this notebook. In the following cell we import numpy and refer to it as np
Step2: We will need the ndarray object from numpy. By importing it directly here, we intend to simplify and reduce the code in the following
Step3: Some of the functions that we will need to use will be plotted. We use the pyplot library from the matplotlib, refering to it as plt.
Step5: Activation Functions
The following function is taken from Weidman (2019)
Step6: We can plot the Leaky ReLU function as follows
Step8: We can reformulate the ReLU function as a special variant of the numpy.clip function, applied here to the nparray x
Step9: Derivatives
Derivatives are the amount of change of the result of a function when changing the input slightly at a certain value a.
$$\frac{df}{du}(a) = \lim_{\Delta \rightarrow 0} \frac{f(a + \Delta) - f(a - \Delta)}{2 \times \Delta}$$
To approximate the limit, we can set a very small value for $\Delta$
Step11: A simplified derivative function could be formulated as follows
Step13: Softmax
$$softmax(z_i) = \frac{e^{z_i}}{\sum_{j=1}^d}\mbox{ with } 1 \leq i \geq d$$
We can define the softmax function as follows
Step14: The following example shows the effect | Python Code:
from typing import Callable
Explanation: Deep Learning Tutorial
(C) 2019 by Damir Cavar
This notebook was inspired by numerous tutorials and other notebooks online, and books like Weidman (2019), ...
General Conventions
In the following Python code I will make use of type hints for Python to make explicit the variable and return types of all the functions used. This is supposed to make the code semantics more transparent.
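For instance, a small illustrative function in this style (not from the original notebook) could look like:
def apply_twice(func: Callable[[float], float], value: float) -> float:
    # 'func' is annotated as a callable that takes and returns a float
    return func(func(value))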
End of explanation
import numpy as np
Explanation: Our Core Python Libraries
We will make use of the scientific computing package numpy in this notebook. In the following cell we import numpy and refer to it as np:
End of explanation
from numpy import ndarray
Explanation: We will need the ndarray object from numpy. By importing it directly here, we intend to simplify and reduce the code in the following:
End of explanation
from matplotlib import pyplot as plt
Explanation: Some of the functions that we will need to use will be plotted. We use the pyplot library from matplotlib, referring to it as plt.
End of explanation
def leaky_relu(x: ndarray) -> ndarray:
Apply Leaky ReLU to each element in ndarray.
return np.maximum(0.2 * x, x)
Explanation: Activation Functions
The following function is taken from Weidman (2019):
End of explanation
%matplotlib inline
x = np.arange(-2, 2, 0.05)
y = leaky_relu(x)
fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1])
ax.plot(x, y)
ax.set_title("Leaky ReLU")
ax.set_xlabel("x")
ax.set_ylabel("y")
Explanation: We can plot the Leaky ReLU function as follows:
End of explanation
%matplotlib inline
x = np.arange(-2, 2, 0.05)
y = x.clip(min=0)
fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1])
ax.plot(x, y)
ax.set_title("ReLU")
ax.set_xlabel("x")
ax.set_ylabel("y")
def deriv(func : Callable[[ndarray], ndarray],
input_ : ndarray,
delta : float = 0.001) -> ndarray:
Evaluates the derivative of a function 'func' at every element in the 'input_' array.
return (func(input_ + delta) - func(input_ - delta)) / (2 * delta)
Explanation: We can reformulate the ReLU function as a special variant of the numpy.clip function, applied here to the nparray x:
End of explanation
%matplotlib inline
def f(x):
return 1/x
x = np.linspace(0.1,1.5,150)
y = f(x)
a = .4
h = 0.001
fprime = (f(a + h) - f(a)) / h # derivative
tan = f(a) + fprime * (x - a) # tangent
fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1])
ax.plot(x, y, 'b', a, f(a), 'om', x, tan, '--r')
ax.set_title("Slope")
ax.set_xlabel("x")
ax.set_ylabel("y")
Explanation: Derivatives
Derivatives are the amount of change of the result of a function when changing the input slightly at a certain value a.
$$\frac{df}{du}(a) = \lim_{\Delta \rightarrow 0} \frac{f(a + \Delta) - f(a - \Delta)}{2 \times \Delta}$$
To approximate the limit, we can set a very small value for $\Delta$:
$$\frac{df}{du}(a) = \frac{f(a + 0.001) - f(a - 0.001)}{2 \times 0.001} = \frac{f(a + 0.001) - f(a - 0.001)}{0.002}$$
We can simplify this equation to optimize the computation by taking $\Delta = 0.001$ only once into account:
$$\frac{df}{du}(a) = \frac{f(a + 0.001) - f(a)}{0.001}$$
This is in fact the slope of the function f(x) at point a, represented in the following by the tangent (red line):
End of explanation
def deriv(func: Callable[[ndarray], ndarray],
input_: ndarray,
delta: float = 0.001) -> ndarray:
Computes the derivative of func for every value in the input array.
return (func(input_ + delta) - func(input_)) / delta
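# Quick illustrative check (not in the original notebook): the numerical derivative
# of x**2 at x = 3.0 should come out close to 6.
print(deriv(np.square, np.array([3.0])))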
Explanation: A simplified derivative function could be formulated as follows:
End of explanation
def softmax(x: ndarray) -> ndarray:
Compute softmax values for each set of scores in x.
return np.exp(x) / np.sum(np.exp(x), axis=0)
Explanation: Softmax
$$softmax(z_i) = \frac{e^{z_i}}{\sum_{j=1}^d e^{z_j}}\mbox{ with } 1 \leq i \leq d$$
We can define the softmax function as follows:
End of explanation
scores = [3.0, 1.0, 0.2]
softmaxscores = softmax(scores)
print("Softmax:", softmaxscores, "\tSum:", sum(softmaxscores))
Explanation: The following example shows the effect:
End of explanation |
7,030 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Moving frame calculations
General idea
Fundamental thing to start with
$$
f(z) = \bar{f}(\bar{z})
$$
Then you need a general group transformation.
Which you should simplify as far as possible.
First
Step1: Write this out as a matrix
Now pick the cross-section
Conformal
OK, let's try again. This time we are gonna be awesome and do conformal. The Taylor expansion of a general conformal map up to third order is
$$
\bar{z} = c_0 + c_1 z + c_2 z^2 + c_3 z^3
$$
Or in components,
$$
\begin{align}
\bar{x} &= a_0 + a_1 x + a_2 (x^2 - y^2) + a_3 (x^3 - 3 xy^2) - b_1 y - 2b_2xy - 3b_3x^2y + b_3y^3 \
3 & = 4
\end{align}
$$
Step2: Write this out as a matrix
Now for the cross-section | Python Code:
from sympy import Function, Symbol, symbols, init_printing, expand, I, re, im
from IPython.display import Math, display
init_printing()
from transvectants import *
def disp(expr):
display(Math(my_latex(expr)))
# p and q are \bar{x} \bar{y}
x, y = symbols('x y')
p, q = symbols('p q')
a, b, c, d = symbols('a b c d')
p = ((a*x - b*y)*(1 + c*x - d*y) + (b*x + a*y)*(d*x + c*y))/((1 + c*x - d*y)**2 + (d*x + c*y)**2)
q = ((b*x + a*y)*(1 + c*x - d*y) - (a*x - b*y)*(d*x + c*y))/((1 + c*x - d*y)**2 + (d*x + c*y)**2)
# Can we tidy this later - but this does work
g = Function('g')(p, q)
g
# In the below, interpret fb_blah as the the f derivative
foo = diff(g, x).subs([(x, 0), (y, 0)])
foo
disp(diff(g, x).subs([(x, 0), (y, 0)]))
disp(diff(g, x).subs([(x, 0), (y, 0)]))
disp(diff(g, x, x).subs([(x, 0), (y, 0)]))
disp(diff(g, x, y).subs([(x, 0), (y, 0)]))
disp(diff(g, y, y).subs([(x, 0), (y, 0)]))
disp(diff(g, x, x, x).subs([(x, 0), (y, 0)]))
disp(diff(g, x, x, y).subs([(x, 0), (y, 0)]))
disp(diff(g, x, y, y).subs([(x, 0), (y, 0)]))
print('boo')
disp(diff(g, y, y, y).subs([(x, 0), (y, 0)]))
Explanation: Moving frame calculations
General idea
Fundamental thing to start with
$$
f(z) = \bar{f}(\bar{z})
$$
Then you need a general group transformation.
Which you should simplify as far as possible.
First: translations can be removed. This is always true, since you have choice of cross-section. It is an extremely good idea since it means that you will now evaluate everything at (x,y) = (0,0).
Second: Remove any other group parameters you can.
Now prolong the group action. Just write it out, don't think. The code below should help.
Turn the prolonged action into a matrix up to the appropriate number of derivatives. Remember that you are solving for the entries of the matrix, not for the vectors.
Now comes the art. You need to find the rest of the cross-section. Choose values for sets of the barred derivatives in order to get all the parameters. What is left over is an invariant.
Mobius
Fundamental thing to start with
$$
f(z) = \bar{f}(\bar{z})
$$
A general Mobius transformation is
$$
\bar{z} = \frac{\alpha z + \beta}{\gamma z + \delta}
$$
Assuming $\delta \neq 0$, we can normalise it: $\delta = 1$. For our cross-section, we'll choose $\bar{z} = 0$. From any point $z$, this determines $\beta$, so wlog assume we start at $z = 0$, i.e. that $\beta = 0$. So $z = 0$ from now on!!!.
$$
\bar{x} + i\bar{y} = \frac{(a + ib)(x + iy)}{1 + (c + id)(x + iy)}
$$
After the zeroth-order frame translates the general point $\bar{x}$ to $0$. So all derivative calculations will be evaluated at $x = y = 0$.
End of explanation
x, y = symbols('x y', real=True)
a0, a1, a2, a3, b0, b1, b2, b3 = symbols('a_0 a_1 a_2 a_3 b_0 b_1 b_2 b_3', real=True)
z = x + I*y
# We have removed the a_0 + I*b_0 term to take out the translation
w = (a1 + I*b1)*z + (a2 + I*b2)*z**2 + (a3 + I*b3)*z**3
p = re(w)
q = im(w)
p
fb = Function('g')(p, q)
disp(diff(fb, x).subs([(x, 0), (y, 0)]))
disp(diff(fb, y).subs([(x, 0), (y, 0)]))
disp(diff(fb, x, x).subs([(x, 0), (y, 0)]))
disp(diff(fb, x, y).subs([(x, 0), (y, 0)]))
disp(diff(fb, y, y).subs([(x, 0), (y, 0)]))
disp(diff(fb, x, x, x).subs([(x, 0), (y, 0)]))
disp(diff(fb, x, x, y).subs([(x, 0), (y, 0)]))
disp(diff(fb, x, y, y).subs([(x, 0), (y, 0)]))
print('boo')
disp(diff(fb, y, y, y).subs([(x, 0), (y, 0)]))
Explanation: Write this out as a matrix
Now pick the cross-section
Conformal
OK, let's try again. This time we are gonna be awesome and do conformal. The Taylor expansion of a general conformal map up to third order is
$$
\bar{z} = c_0 + c_1 z + c_2 z^2 + c_3 z^3
$$
Or in components,
$$
\begin{align}
\bar{x} &= a_0 + a_1 x + a_2 (x^2 - y^2) + a_3 (x^3 - 3 xy^2) - b_1 y - 2b_2xy - 3b_3x^2y + b_3y^3 \
\bar{y} &= b_0 + b_1 x + a_1 y + b_2 (x^2 - y^2) + 2a_2 xy + b_3 (x^3 - 3xy^2) + 3a_3 x^2 y - a_3 y^3
\end{align}
$$
End of explanation
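As a quick, illustrative cross-check of the component formulas above (not part of the original calculation; zz and ww are throwaway names), sympy can expand the cubic map directly using the symbols already defined:
zz = x + I*y
ww = (a0 + I*b0) + (a1 + I*b1)*zz + (a2 + I*b2)*zz**2 + (a3 + I*b3)*zz**3
disp(expand(re(ww)))   # should reproduce the formula for x-bar
disp(expand(im(ww)))   # should reproduce the formula for y-bar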
disp(expand(partial_transvectant((f, f, f), [[0, 1], [0, 1], [0, 2], [0, 2]])))
disp(expand(partial_transvectant((f, f, f, f, f), [[0, 1], [0, 1], [2, 3], [2, 3], [2, 4]]) ) -2*(expand(partial_transvectant((f, f, f, f, f), [[0, 1], [1, 2], [2, 3], [3, 0], [0, 4]]) )))
disp(expand(partial_transvectant((f, f, f), [[0, 1], [0, 1], [0, 1], [0, 2]])))
disp(expand(partial_transvectant((f, f), [[0, 1], [0, 1], [0, 1]])))
#C = transvectant(f, f, 2)
#D = -partial_transvectant((f, f, f), [[0, 1], [1, 2]])
# We are going to build these by weight, not degree.
# Hence order does not match dispaper
# Weight 4 (2 of 'em)
I4_1 = partial_transvectant((f,f),[[0,1],[0,1]]) # = C
I4_2 = partial_transvectant((f, f, f), [[0, 1], [1, 2]]) # = -D
# Weight 6 (2 of 'em)
print('weight 3:')
I6_1 = partial_transvectant((f,f,f),[[0,1],[0,1],[0,2]]) # = transvectant(f, C, 1)
I6_2 = partial_transvectant((f,f,f,f),[[0,1],[0,2],[0,3]])
# Weight 8 (7 of 'em??)
print('weight 4:')
I8_1 = expand(partial_transvectant((f,f,f),[[0,1],[0,1],[1,2],[0,2]]))
I8_2 = expand(partial_transvectant((f,f,f,f),[[0,1],[0,1],[1,2],[2,3]]))
I8_3 = expand(partial_transvectant((f,f,f,f),[[0,1],[1,2],[2,3],[3,0]]))
I8_4 = expand(partial_transvectant((f,f,f,f),[[0,1],[1,2],[1,2],[2,3]]))
I8_5 = expand(partial_transvectant((f,f,f,f,f),[[0,1],[1,2],[2,3],[3,4]]))
I8_6 = expand(partial_transvectant((f,f,f,f,f),[[0,1],[0,2],[0,3],[3,4]]))
I8_7 = expand(partial_transvectant((f,f,f,f,f),[[0,1],[0,2],[0,3],[0,4]]))
print('weight 2')
disp(I4_1)
disp(I4_2)
print('weight 3')
disp(I6_1)
disp(expand(I6_2))
print('weight 4')
disp(I8_1)
print('')
disp(I8_2)
print('')
disp(I8_3)
print('')
disp(I8_4)
print('')
disp(I8_5)
print('')
disp(I8_6)
print('')
disp(I8_7)
# Only 'weight 4' affine invariant
disp(I4_2/I4_1)
# Only 'weight 6' affine invariant
disp(I6_2/I6_1)
disp(partial_transvectant((f,f,f,f,f),[[0,2],[1,2],[2,3],[3,4]]))
disp(partial_transvectant((f,f,C),[[0,1],[1,2]]))
#disp(transvectant(C, C, 2))
funcs = (C, f**2)
pairs = [[0, 1]]
disp(partial_transvectant(funcs, pairs))
# Construct linear, quadratic, cubic forms
fx, fy, fxx, fxy, fyy, fxxx, fxxy, fxyy, fyyy = symbols('f_x, f_y, f_{xx}, f_{xy}, f_{yy}, f_{xxx}, f_{xxy}, f_{xyy}, f_{yyy}')
l = fx*x + fy*y
q = fxx*x*x + 2*fxy*x*y + fyy*y*y
c = fxxx*x*x*x + 3*fxxy*x*x*y + 3*fxyy*x*y*y + fyyy*y*y*y
# I3 as a form (Robert's method to annoy us...)
disp(-expand(transvectant(q,transvectant(c,c,2),2)/288))
# I5
disp(expand(transvectant(transvectant(c,c,2),transvectant(c,c,2),2)/10368))
# I6
disp(transvectant(c,l**3,3)/36)
disp(simplify(partial_transvectant((f,f,f,f),[[0,1],[1,2],[2,3]])))
disp(simplify(partial_transvectant((f,f,f,f),[[0,1],[1,2],[2,3],[0,1]])))
disp(simplify(partial_transvectant((f,f,f,f),[[0,1],[1,2],[2,3],[0,2]])))
disp(simplify(partial_transvectant((f,f,f,f),[[0,1],[1,2],[2,3],[0,1],[0,2]])))
disp(simplify(partial_transvectant((f,f,f,f),[[0,1],[1,2],[2,3],[3,0]])))
disp(simplify(partial_transvectant((f,f,f,f),[[0,1],[1,2],[2,3],[3,0],[0,1]])))
disp(simplify(partial_transvectant((f,f,f,f),[[0,1],[1,2],[2,3],[3,0],[0,2]])))
disp(simplify(partial_transvectant((f,f,f,f),[[0,1],[0,1],[1,2],[1,2],[2,3]])))
disp(simplify(partial_transvectant((f,f,f,f),[[0,1],[0,1],[1,2],[1,2],[2,3],[2,3]])))
disp(simplify(partial_transvectant((f,f,f,f),[[0,1],[1,2],[2,3],[3,0],[0,1],[1,2]])))
disp(simplify(partial_transvectant((f,f,f),[[0,1],[1,2],[2,0]])))
disp(simplify(partial_transvectant((f,f,f),[[0,1],[1,2],[0,1]])))
disp(simplify(partial_transvectant((f,f,f),[[0,1],[1,2],[2,0],[0,1]])))
disp(simplify(partial_transvectant((f,f,f,f),[[0,1],[0,1],[1,2],[1,2],[2,3]])))
disp(simplify(partial_transvectant((f,f,f,f),[[0,1],[0,1],[1,2],[1,2],[2,3],[2,3]])))
disp(expand(partial_transvectant((f,f,f,f,f),[[0,1],[1,2],[2,3],[3,4]])))
disp(expand(partial_transvectant((f,f,f,f,f),[[0,1],[1,2],[2,3],[3,4]])))
disp(expand(partial_transvectant((f,f,f,f,f),[[0,1],[0,2],[0,3],[3,4]])))
disp(expand(partial_transvectant((f,f,f,f,f),[[0,1],[0,2],[0,3],[0,4]])))
Explanation: Write this out as a matrix
Now for the cross-section
End of explanation |
7,031 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 4 in-class problems - Solutions
Using what you learned in Lab, answer questions 4.7, 4.8, 4.10, 4.11, 4.12, and 4.13
Step1: Remember, these states are represented in the HV basis
Step2: The sim_transform function creates the matrix $\bar{\mathbf{S}}$ that can convert from one basis to another. As an example, it will create the tranform matrix to convert from HV to ±45 if you run
Step3: 4.11
Step4: 4.12 | Python Code:
from numpy import sin,cos,sqrt,pi
from qutip import *
Explanation: Chapter 4 in-class problems - Solutions
Using what you learned in Lab, answer questions 4.7, 4.8, 4.10, 4.11, 4.12, and 4.13
End of explanation
H = Qobj([[1],[0]])
V = Qobj([[0],[1]])
P45 = Qobj([[1/sqrt(2)],[1/sqrt(2)]])
M45 = Qobj([[1/sqrt(2)],[-1/sqrt(2)]])
R = Qobj([[1/sqrt(2)],[-1j/sqrt(2)]])
L = Qobj([[1/sqrt(2)],[1j/sqrt(2)]])
Explanation: Remember, these states are represented in the HV basis:
End of explanation
def sim_transform(o_basis1, o_basis2, n_basis1, n_basis2):
a = n_basis1.dag()*o_basis1
b = n_basis1.dag()*o_basis2
c = n_basis2.dag()*o_basis1
d = n_basis2.dag()*o_basis2
return Qobj([[a.data[0,0],b.data[0,0]],[c.data[0,0],d.data[0,0]]])
Explanation: The sim_transform function creates the matrix $\bar{\mathbf{S}}$ that can convert from one basis to another. As an example, it will create the transform matrix to convert from HV to ±45 if you run:
Shv45 = sim_transform(H,V,P45,M45) # creates the matrix Shv45
Then you can convert a ket from HV to ±45 by applying the Shv45 matrix:
Shv45*H # will convert H from the HV basis to the ±45 basis
To convert operators, you have to sandwich the operator between $\bar{\mathbf{S}}$ and $\bar{\mathbf{S}}^\dagger$:
Shv45*Ph*Shv45.dag() # converts Ph from HV basis to the ±45 basis.
End of explanation
def Rp(theta):
return Qobj([[cos(theta),-sin(theta)],[sin(theta),cos(theta)]]).tidyup()
Shv45 = sim_transform(H,V,P45,M45)
Explanation: 4.11: Express $\hat{R}_p(\theta)$ in ±45 basis
End of explanation
Rp45 = Shv45*Rp(pi/4)*Shv45.dag()
Rp45*Shv45*P45 == Shv45*V # convert P45 to the ±45 basis
Rp45* Qobj([[1],[0]])
ShvLR = sim_transform(H,V,L,R)
ShvLR*Rp(pi/4)*ShvLR.dag()
Explanation: 4.12:
End of explanation |
7,032 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright (c) 2017 Geosoft Inc.
https
Step1: Display a grid on a map
One of the most common tasks is to display a data grid in colour.
Step2: Add Contours and Shading
Now we will improve the script by adding contour lines and a shading effect. We will also make a double-line outer-contour.
Step3: Location Reference, Scale Bar, Colour Legend and Title
Now we will improve the map my adding location reference annotations, a scale bar, colour legend and map title.
Step4: Display in a Geosoft viewer | Python Code:
import geosoft.gxpy.gx as gx
import geosoft.gxpy.view as gxview
import geosoft.gxpy.group as gxgroup
import geosoft.gxpy.agg as gxagg
import geosoft.gxpy.grid as gxgrd
import geosoft.gxpy.viewer as gxviewer
import geosoft.gxpy.utility as gxu
import geosoft.gxpy.map as gxmap
from IPython.display import Image
gxc = gx.GXpy()
url = 'https://github.com/GeosoftInc/gxpy/raw/9.3.1/examples/data/'
gxu.url_retrieve(url + 'Wittichica Creek Residual Total Field.grd')
gxu.url_retrieve(url + 'Wittichica Creek Residual Total Field.grd.gi')
gxu.url_retrieve(url + 'Wittichica Creek Residual Total Field.grd.xml')
Explanation: Copyright (c) 2017 Geosoft Inc.
https://github.com/GeosoftInc/gxpy
BSD 2-clause License
2D Views and Maps
For this lesson we will start with a residual total magnetic intensity (TMI) data grid for the Wittichica Creek area of British Columbia, Canada. This data was downloaded from the Geoscience Data Portal of the Geoscience Canada using Geosoft Seeker.
The most common requirement for a grid is to present the grid data as a colour image that can be viewed or printed. In this exercise we will first view the grid, then add contours, shading and a colour legend, and finally we will add location annotations and complete the map ready for printing or sharing.
Lessons
<!--- # Run this from a code cell to create TOC markdown: -->
<!--- import geosoft.gxpy.utility; print(geosoft.gxpy.utility.jupyter_markdown_toc('2d views and maps')) -->
Understanding Geosoft Maps and Views
Imports, GX Context and get data from GitHub
Display a grid on a map
Add Contours and Shading
Location Reference, Scale Bar, Colour Legend and Title
Display in a Geosoft viewer
See also: Tutorial page
Some map features in this notebook require a Geosoft End-User License.
Understanding Geosoft Maps and Views
Geosoft maps are used to present geoscience information on a 2D surface, which can be a computer screen or printed on paper.
Geosoft Maps are stored in a file that has a descriptive name and an extension .map.
A Map can be thought of as a physical piece of paper, and we use map centimeters or map millimetres to reference locations relative to the bottom-left corner, which is location (0,0).
A Map contains Views, and a Map may have any number of Views.
A View represents some spatial extent within a defined Earth coordinate system that is scaled and located as desired on the surface of the map.
Views can be 2D or 3D, with 3D views rendered on a map as a 2D perspective of the information in the 3D View..
Views contain named Groups, with each Group containing a set of graphical elements that display spatial information. Basic drawing Groups contain lines, coloured areas and text. More advanced Group types support more complex data structures like Aggregates for grids and images, Voxels that display a Geosoft voxset, or a CSymb Group that contains data points coloured by size based on a data value.
3D Views can contain 3D Groups, such as a something drawn on a relief surface, or a 3D Geosoft Geosurface, or a set of vectors from a Geosoft Vector voxel.
Geosoft Maps can be opened and viewed in a Geosoft Viewer.
Imports, GX Context and get data from GitHub
End of explanation
import geosoft.gxpy.gx as gx
import geosoft.gxpy.map as gxmap
import geosoft.gxpy.view as gxview
import geosoft.gxpy.group as gxgroup
import geosoft.gxpy.agg as gxagg
import geosoft.gxpy.grid as gxgrd
import geosoft.gxpy.viewer as gxviewer
gxc = gx.GXpy()
# get the grid extent and coordinate system from which we will create a default map named after the grid
# do this in a separate `with...` as the Aggregate_group class needs access to the grid file.
with gxgrd.Grid('Wittichica Creek Residual Total Field.grd') as grid:
extent = grid.extent_2d()
coordinate_system = grid.coordinate_system
grid_file_name = grid.file_name
map_file_name = grid_file_name + '.map'
# create a map for this grid on A4 media, scale to fit the extent
with gxmap.Map.new(map_file_name,
data_area=extent,
media="A4",
margins=(1, 3.5, 3, 1),
coordinate_system=coordinate_system,
overwrite=True) as gmap:
# work with the data view
with gxview.View.open(gmap, "data") as v:
# add the grid image to the view
with gxagg.Aggregate_image.new(grid_file_name) as agg:
gxgroup.Aggregate_group.new(v, agg)
# display the map as an image
Image(gxmap.Map.open(map_file_name).image_file(pix_width=800))
Explanation: Display a grid on a map
One of the most common tasks is to display a data grid in colour.
End of explanation
with gxmap.Map.new(map_file_name,
data_area=extent,
media="A4",
margins=(1, 3.5, 3, 1),
coordinate_system=coordinate_system,
overwrite=True) as gmap:
# work with the data view, draw a line around the data view
with gxview.View.open(gmap, "data") as v:
# add the grid image to the view, with shading, 20 nT contour interval to match default contour lines
with gxagg.Aggregate_image.new(grid_file_name, shade=True, contour=20) as agg:
gxgroup.Aggregate_group.new(v, agg)
# contour the grid
gxgroup.contour(v, 'TMI_contour', grid_file_name)
# display the map as an image
Image(gxmap.Map.open(map_file_name).image_file(pix_width=800))
Explanation: Add Contours and Shading
Now we will improve the script by adding contour lines and a shading effect. We will also make a double-line outer-contour.
End of explanation
with gxmap.Map.new(map_file_name,
data_area=extent,
media="A4",
margins=(1, 3.5, 3, 1),
coordinate_system=coordinate_system,
overwrite=True) as gmap:
# work with the data view, draw a line around the data view
with gxview.View.open(gmap, "data") as v:
# add the grid image to the view, with shading, 20 nT contour interval to match default contour lines
with gxagg.Aggregate_image.new(grid_file_name, shade=True, contour=20) as agg:
gxgroup.Aggregate_group.new(v, agg)
# colour legend
gxgroup.legend_color_bar(v, 'TMI_legend',
title='Res TMI\nnT',
location=(1.2,0),
cmap=agg.layer_color_map(0),
cmap2=agg.layer_color_map(1))
# contour the grid
gxgroup.contour(v, 'TMI_contour', grid_file_name)
# map title and creator tag
with gxview.View.open(gmap, "base") as v:
with gxgroup.Draw(v, 'title') as g:
g.text("Tutorial Example\nresidual mag",
reference=gxgroup.REF_BOTTOM_CENTER,
location=(100, 10),
text_def=gxgroup.Text_def(height=3.5,
weight=gxgroup.FONT_WEIGHT_BOLD))
g.text("created by:" + gxc.gid,
location=(1, 1.5),
text_def=gxgroup.Text_def(height=1.2,
italics=True))
# add a map surround to the map
gmap.surround(outer_pen='kt500', inner_pen='kt100', gap=0.1)
# annotate the data view locations
gmap.annotate_data_xy(grid=gxmap.GRID_CROSSES)
gmap.annotate_data_ll(grid=gxmap.GRID_LINES,
grid_pen=gxgroup.Pen(line_color='b'),
text_def=gxgroup.Text_def(color='b',
height=0.15,
italics=True))
# scale bar
gmap.scale_bar(location=(1, 3, 1.5),
text_def=gxgroup.Text_def(height=0.15))
# display the map as an image
Image(gxmap.Map.open(map_file_name).image_file(pix_width=800))
Explanation: Location Reference, Scale Bar, Colour Legend and Title
Now we will improve the map by adding location reference annotations, a scale bar, colour legend and map title.
End of explanation
gxviewer.view_document(map_file_name, wait_for_close=False)
Explanation: Display in a Geosoft viewer
End of explanation |
7,033 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>
Challenge Notebook
Problem
Step1: Unit Test
The following unit test is expected to fail until you solve the challenge. | Python Code:
def fib_recursive(n):
# TODO: Implement me
pass
num_items = 10
cache = [None] * (num_items + 1)
def fib_dynamic(n):
# TODO: Implement me
pass
def fib_iterative(n):
# TODO: Implement me
pass
Explanation: <small><i>This notebook was prepared by Donne Martin. Source and license info is on GitHub.</i></small>
Challenge Notebook
Problem: Implement fibonacci recursively, dynamically, and iteratively.
Constraints
Test Cases
Algorithm
Code
Unit Test
Solution Notebook
Constraints
None
Test Cases
n = 0 -> 0
n = 1 -> 1
n > 1 -> 0, 1, 1, 2, 3, 5, 8, 13, 21, 34...
Algorithm
Refer to the Solution Notebook. If you are stuck and need a hint, the solution notebook's algorithm discussion might be a good place to start.
Code
End of explanation
# %load test_fibonacci.py
from nose.tools import assert_equal
class TestFib(object):
def test_fib(self, func):
result = []
for i in range(num_items):
result.append(func(i))
fib_seq = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
assert_equal(result, fib_seq)
print('Success: test_fib')
def main():
test = TestFib()
test.test_fib(fib_recursive)
test.test_fib(fib_dynamic)
test.test_fib(fib_iterative)
if __name__ == '__main__':
main()
Explanation: Unit Test
The following unit test is expected to fail until you solve the challenge.
End of explanation |
7,034 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building an App Engine app to serve ML predictions
Learning Objectives
Deploy a web application that consumes your model service on Cloud AI Platform.
Introduction
Verify that you have previously Trained your Keras model and Deployed it predicting with Keras model on Cloud AI Platform. If not, go back to train_keras_ai_platform_babyweight.ipynb and deploy_keras_ai_platform_babyweight.ipynb create them.
In the previous notebook, we deployed our model to CAIP. In this notebook, we'll make a Flask app to show how our models can interact with a web application which could be deployed to App Engine with the Flexible Environment.
Step 1
Step1: Step 3
Step2: Run the below cell, and copy the output into the Google Cloud Shell | Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
%%bash
# Check your project name
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
import os
os.environ["BUCKET"] = "your-bucket-id-here" # Recommended: use your project name
Explanation: Building an App Engine app to serve ML predictions
Learning Objectives
Deploy a web application that consumes your model service on Cloud AI Platform.
Introduction
Verify that you have previously trained your Keras model and deployed it for prediction on Cloud AI Platform. If not, go back to train_keras_ai_platform_babyweight.ipynb and deploy_keras_ai_platform_babyweight.ipynb to create them.
In the previous notebook, we deployed our model to CAIP. In this notebook, we'll make a Flask app to show how our models can interact with a web application which could be deployed to App Engine with the Flexible Environment.
Step 1: Review Flask App code in application folder
Let's start with what our users will see. In the application folder, we have prebuilt the components for web application. In the templates folder, the <a href="application/templates/index.html">index.html</a> file is the visual GUI our users will make predictions with.
It works by using an HTML form to make a POST request to our server, passing along the values captured by the input tags.
The form will render a little strangely in the notebook since the notebook environment does not run javascript, nor do we have our web server up and running. Let's get to that!
Step 2: Set environment variables
End of explanation
%%bash
# TODO 1: Deploy a web application that consumes your model service on Cloud AI Platform
gsutil -m rm -r gs://$BUCKET/baby_app
gsutil -m cp -r application/ gs://$BUCKET/baby_app
Explanation: Step 3: Complete application code in application/main.py
We can set up our server with python using Flask. Below, we've already built out most of the application for you.
The @app.route() decorator defines a function to handle web requests. Let's say our website is www.example.com. With how our @app.route("/") function is defined, our server will render our <a href="application/templates/index.html">index.html</a> file when users go to www.example.com/ (which is the default route for a website).
So, when a user pings our server with www.example.com/predict, they would use @app.route("/predict", methods=["POST"]) to make a prediction. The data that gets sent over the internet isn't a dictionary, but a string like below:
name1=value1&name2=value2 where name corresponds to the name on the input tag of our html form, and the value is what the user entered. Thankfully, Flask makes it easy to transform this string into a dictionary with request.form.to_dict(), but we still need to transform the data into a format our model expects. We've done this with the gender2str and the plurality2str utility functions.
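To make that flow concrete, here is a minimal, illustrative sketch of the request-handling shape described above (this is not the actual contents of application/main.py — in particular, the call to the deployed Cloud AI Platform model is left as a placeholder):
from flask import Flask, render_template, request

app = Flask(__name__)

@app.route("/")
def index():
    # Serve the HTML form in templates/index.html
    return render_template("index.html")

@app.route("/predict", methods=["POST"])
def predict():
    # "name1=value1&name2=value2" from the form becomes a dict here
    data = request.form.to_dict()
    # Transform the form values into the feature format the model expects,
    # e.g. with helpers like gender2str / plurality2str in main.py
    features = {name: value for name, value in data.items()}
    # ... send `features` to the model deployed on Cloud AI Platform
    # and return the predicted weight to the user ...
    return str(features)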
Ok! Let's set up a webserver to take in the form inputs, process them into features, and send these features to our model on Cloud AI Platform to generate predictions to serve back to users.
Fill in the TODO comments in <a href="application/main.py">application/main.py</a>. Give it a go first and review the solutions folder if you get stuck.
Note: AppEngine test configurations have already been set for you in the file <a href="application/app.yaml">application/app.yaml</a>. Review app.yaml documentation for additional configuration options.
Step 4: Deploy application
So how do we know that it works? We'll have to deploy our website and find out! Notebooks aren't made for website deployment, so we'll move our operation to the Google Cloud Shell.
By default, the shell doesn't have Flask installed, so copy over the following command to install it.
python3 -m pip install --user Flask==0.12.1
Next, we'll need to copy our web app to the Cloud Shell. We can use Google Cloud Storage as an intermediary.
End of explanation
%%bash
echo rm -r baby_app/
echo mkdir baby_app/
echo gsutil cp -r gs://$BUCKET/baby_app ./
echo python3 baby_app/main.py
Explanation: Run the below cell, and copy the output into the Google Cloud Shell
End of explanation |
7,035 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Class 6
Step1: 2. Linear congruential generator
Step2: Example
Step3: Minimum standard generator
Step4: RANDU generator (used by IBM)
Step5: 3. Box-Muller method | Python Code:
import numpy as np
import seaborn as sns
import scipy.stats as stats
%matplotlib inline
Explanation: Class 6: Random number generation and Monte Carlo simulation
Juan Diego Sánchez Torres,
Professor, MAF ITESO
Department of Mathematics and Physics
[email protected]
Tel. 3669-34-34 Ext. 3069
Office: Cubicle 4, Building J, 2nd floor
1. Motivation
Present the basic methods for generating uniform and normally distributed random numbers.
End of explanation
def lcg(n, m=2**31-1, a=16807, c=0, seed=2**30):
x = np.zeros(n+1)
x[0]=seed
for i in range(1,n+1):
x[i] = (a * x[i-1]+c)%m
return x[1:]/m
Explanation: 2. Linear congruential generator
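The function implements the recurrence $x_{i+1} = (a\,x_i + c) \bmod m$ and returns the sequence scaled by $1/m$, so the outputs are pseudo-uniform values in $[0, 1)$.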
End of explanation
lcg(10, m=31, a=13, c=0, seed=3)
Explanation: Example
End of explanation
x=lcg(10000)
sns.distplot(x, color="b", fit=stats.uniform);
Explanation: Minimum standard generator
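The defaults of lcg, $m = 2^{31}-1$ and $a = 16807$ with $c = 0$, are the Park-Miller "minimum standard" parameters, so calling lcg with no keyword arguments exercises exactly that generator.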
End of explanation
x=lcg(10000, m=2**31, a=2**16+3, c=0, seed=3)
sns.distplot(x, color="b", fit=stats.uniform);
Explanation: RANDU generator (used by IBM)
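RANDU is the classic LCG with $m = 2^{31}$, $a = 2^{16}+3 = 65539$ and $c = 0$. The marginal histogram looks fine, but consecutive triplets are known to fall on only 15 planes in the unit cube, which is why it serves as a cautionary example.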
End of explanation
def bm(n):
m=2**31-1
a=16807
c=0
seed=2**30
x = np.zeros(n+1)
x[0]=seed
for i in range(1,n+1):
x[i] = (a * x[i-1]+c)%m
u=x[1:]/m
u1=u[:int((n/2))]
u2=u[int(n/2):]
nn=np.concatenate((np.sqrt(-2*np.log(1-u1))*np.cos(2*np.pi*u2), np.sqrt(-2*np.log(1-u1))*np.sin(2*np.pi*u2)),axis=0)
return nn
y=bm(100000)
sns.distplot(y, color="b", fit=stats.norm);
Explanation: 3. Box-Muller method
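Given two independent uniforms $u_1, u_2$ on $(0,1)$, the Box-Muller transform produces two independent standard normals, $z_1 = \sqrt{-2\ln u_1}\,\cos(2\pi u_2)$ and $z_2 = \sqrt{-2\ln u_1}\,\sin(2\pi u_2)$. The function bm generates the uniforms with the minimum standard LCG, splits them into two halves, and applies exactly this transform (using $\ln(1-u)$ to avoid taking the logarithm of zero).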
End of explanation |
7,036 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classification configurations
TODO
add relevant ranges
set proper types for default configuration
find solution for string/float type in numpy
To change configurations, redefine the values for the respective keys in the given dictionary. Any iterable with a len() function is accepted; the values should match the types specified in the defaults.
TOC
SVC
Generator
Defaults
Configs
Linear
Polynomial
Radial basis
Sigmoid
SVC<a id='svc'></a>
TODO
resolve class_weight and random_state parameters
Step1: <a id='svc_poly'></a>
Step2: <a id='svc_rbf'></a>
Step3: <a id='svc_sigmoid'></a>
Step4: SVC defaults<a id='svc_defaults'></a>
Step5: SVC generator<a id='svc_gen'></a> | Python Code:
def svc_linear_config():
return {
'C': (1.0,),
'kernel': ('linear',),
'shrinking': (True, False),
'probability': (True, False),
'tol': (0.001,),
# 'class_weight': ('balanced',),
'max_iter': (-1,),
'decision_function_shape': ('ovo', 'ovr'),
}
Explanation: Classification configurations
TODO
add relevant ranges
set proper types for default configuration
find solution for string/float type in numpy
To change configurations, redefine the values for the respective keys in the given dictionary. Any iterable with a len() function is accepted; the values should match the types specified in the defaults.
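For example, to sweep several values instead of a single one (a hypothetical override; any iterable that supports len() works):
cfg = svc_linear_config()
cfg['C'] = (0.1, 1.0, 10.0)
cfg['tol'] = (0.001, 0.0001)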
TOC
SVC
Generator
Defaults
Configs
Linear
Polynomial
Radial basis
Sigmoid
SVC<a id='svc'></a>
TODO
resolve class_weight and random_state parameters
End of explanation
def svc_poly_config():
return {
'C': (1.0,),
'kernel': ('poly',),
'degree': (3,),
'gamma': ('auto',),
'coef0': (0.0,),
'shrinking': (True, False),
'probability': (True, False),
'tol': (0.001,),
# 'class_weight': ('balanced',),
'max_iter': (-1,),
'decision_function_shape': ('ovo', 'ovr'),
}
Explanation: <a id='svc_poly'></a>
End of explanation
def svc_rbf_config():
return {
'C': (1.0,),
'kernel': ('rbf',),
'gamma': ('auto',),
'shrinking': (True, False),
'probability': (True, False),
'tol': (0.001,),
# 'class_weight': ('balanced',),
'max_iter': (-1,),
'decision_function_shape': ('ovo', 'ovr'),
}
Explanation: <a id='svc_rbf'></a>
End of explanation
def svc_sigmoid_config():
return {
'C': (1.0,),
'kernel': ('sigmoid',),
'gamma': ('auto',),
'coef0': (0.0,),
'shrinking': (True, False),
'probability': (True, False),
'tol': (0.001,),
# 'class_weight': ('balanced',),
'max_iter': (-1,),
'decision_function_shape': ('ovo', 'ovr'),
}
Explanation: <a id='svc_sigmoid'></a>
End of explanation
svc_defaults = [('C', 'f4', 1.0),
('kernel', 'S10', 'rbf'),
('degree', 'i4', 3),
('gamma', 'S5', 'auto'),
('coef0', 'f4', 0.0),
('shrinking', 'b', True),
('probability', 'b', False),
('tol', 'f4', 0.001),
('cache_size', 'f4', 200),
# ('class_weight', None),
('verbose', 'b', False),
('max_iter', 'i4', -1),
('decision_function_shape', 'S3', 'ovr'),
# ('random_state', None)]
]
svc_configs = (svc_linear_config(),
svc_poly_config(),
svc_rbf_config(),
svc_sigmoid_config())
Explanation: SVC defaults<a id='svc_defaults'></a>
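Each entry in svc_defaults is a (name, numpy dtype string, default value) triple, e.g. 'f4' is a 32-bit float, 'S10' a 10-byte string, 'i4' a 32-bit integer and 'b' a byte used as a boolean flag. As a rough illustration (an assumption about how such a list could be consumed; utils.generate_parameters itself is not shown in this notebook), the triples can back a numpy structured array:
import numpy as np
dtype = np.dtype([(name, typ) for name, typ, _ in svc_defaults])
row = np.array([tuple(default for _, _, default in svc_defaults)], dtype=dtype)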
End of explanation
import modules.utils as utils
import modules.values as values
utils.generate_parameters(values.parameters_path + 'svc', svc_configs, svc_defaults)
Explanation: SVC generator<a id='svc_gen'></a>
End of explanation |
7,037 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modeling and Simulation in Python
Chapter 15
Copyright 2017 Allen Downey
License
Step1: The coffee cooling problem
I'll use a State object to store the initial temperature.
Step2: And a System object to contain the system parameters.
Step4: The update function implements Newton's law of cooling.
Step5: Here's how it works.
Step7: Here's a version of run_simulation that uses linrange to make an array of time steps.
Step8: And here's how it works.
Step9: Here's what the results look like.
Step10: And here's the final temperature
Step12: Encapsulation
Before we go on, let's define a function to initialize System objects with relevant parameters
Step13: Here's how we use it
Step14: Exercises
Exercise | Python Code:
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
Explanation: Modeling and Simulation in Python
Chapter 15
Copyright 2017 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
init = State(T=90)
Explanation: The coffee cooling problem
I'll use a State object to store the initial temperature.
End of explanation
coffee = System(init=init,
volume=300,
r=0.01,
T_env=22,
t_0=0,
t_end=30,
dt=1)
Explanation: And a System object to contain the system parameters.
End of explanation
def update_func(state, t, system):
    """Update the thermal transfer model.
    state: State (temp)
    t: time
    system: System object
    returns: State (temp)
    """
r, T_env, dt = system.r, system.T_env, system.dt
T = state.T
T += -r * (T - T_env) * dt
return State(T=T)
Explanation: The update function implements Newton's law of cooling.
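Newton's law of cooling says $\frac{dT}{dt} = -r\,(T - T_{env})$; the update function applies one Euler step of size dt, i.e. $T \leftarrow T - r\,(T - T_{env})\,dt$.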
End of explanation
update_func(init, 0, coffee)
Explanation: Here's how it works.
End of explanation
def run_simulation(system, update_func):
    """Runs a simulation of the system.
    Add a TimeFrame to the System: results
    system: System object
    update_func: function that updates state
    """
init = system.init
t_0, t_end, dt = system.t_0, system.t_end, system.dt
frame = TimeFrame(columns=init.index)
frame.row[t_0] = init
ts = linrange(t_0, t_end, dt)
for t in ts:
frame.row[t+dt] = update_func(frame.row[t], t, system)
return frame
Explanation: Here's a version of run_simulation that uses linrange to make an array of time steps.
End of explanation
results = run_simulation(coffee, update_func)
Explanation: And here's how it works.
End of explanation
plot(results.T, label='coffee')
decorate(xlabel='Time (minutes)',
ylabel='Temperature (C)')
Explanation: Here's what the results look like.
End of explanation
coffee.T_final = get_last_value(results.T)
T_final = get_last_value(results.T)
Explanation: And here's the final temperature:
End of explanation
def make_system(T_init, r, volume, t_end):
    """Makes a System object with the given parameters.
    T_init: initial temperature in degC
    r: heat transfer rate, in 1/min
    volume: volume of liquid in mL
    t_end: end time of simulation
    returns: System object
    """
init = State(T=T_init)
return System(init=init,
r=r,
volume=volume,
temp=T_init,
t_0=0,
t_end=t_end,
dt=1,
T_env=22)
Explanation: Encapsulation
Before we go on, let's define a function to initialize System objects with relevant parameters:
End of explanation
coffee = make_system(T_init=90, r=0.01, volume=300, t_end=30)
results = run_simulation(coffee, update_func)
T_final = get_last_value(results.T)
Explanation: Here's how we use it:
End of explanation
# Solution goes here
plot(results.T, label='milk')
decorate(xlabel='Time (minutes)',
ylabel='Temperature (C)')
Explanation: Exercises
Exercise: Simulate the temperature of 50 mL of milk with a starting temperature of 5 degC, in a vessel with the same insulation, for 15 minutes, and plot the results.
By trial and error, find a value for r that makes the final temperature close to 20 C.
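A minimal sketch of one way to start (the value of r is only an initial guess, to be tuned by trial and error as described above):
milk = make_system(T_init=5, r=0.1, volume=50, t_end=15)
results = run_simulation(milk, update_func)
T_final = get_last_value(results.T)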
End of explanation |
7,038 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classification
Use up to five categories, thresholding with these values
Step1: That is not very balanced. Better to take the zero and minus categories times two.
Step2: Even the split data sets look similar. It seems there is no problem with doubling the underrepresented categories.
Now for the mapping
$N \times M \to N \times M \times 3$
Step3: Testwise training
Only for first leg
Step4: Conclusion | Python Code:
def get_categories(y):
y = y.dropna()
plus = (0.1 < y)
zero = (-0.1 <= y) & (y <= 0.1)
minus = (y < -0.1)
return plus, zero, minus
def get_count(plus, zero, minus):
return pd.concat(map(operator.methodcaller("sum"), [plus, zero, minus]), axis=1, keys=["plus", "zero", "minus"])
plus, zero, minus = get_categories(y)
count = get_count(plus, zero, minus)
count.plot.bar(figsize=(8, 3))
plt.legend(("> 0.1", "≈ 0", "< 0.1"), framealpha=0.7, loc=9)
plt.title("Number of samples per category.")
plt.xticks(range(6), range(1, 7))
plt.xlabel("Leg")
plt.ylabel("Samples")
plt.savefig("classification-samples-per-category.pdf", format="pdf", dpi=300, bbox_inches="tight")
plt.show()
scaling = count.apply(lambda x: [1 / (xi / x[0]) for xi in x], axis=1)
scaling
# The necessary scaling factor
Explanation: Classification
Use up to five categories, thresholding with these values:
+ → 0.1
0 → 0
- → -0.1
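For intuition, applying the thresholds to a tiny series behaves like this (illustrative values only; get_categories and pd are defined/imported elsewhere in this notebook):
plus, zero, minus = get_categories(pd.Series([0.25, 0.05, -0.3]))
# plus  -> [True, False, False]
# zero  -> [False, True, False]
# minus -> [False, False, True]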
End of explanation
for dataset, name in zip(splitted_dataset(x, y), ("Training Set", "Validation Set", "Test Set")):
cats = get_categories(y)
cnt = get_count(*cats)
cnt.plot.bar(figsize=(12, 3))
plt.title(name)
plt.show()
Explanation: That is not very balanced. Better to take the zero and minus categories times two. (In practice the notebook balances them through the per-class scaling factors computed above, passed later as class_weight, rather than by physically duplicating rows.)
End of explanation
target = np.dstack((plus.values, zero.values, minus.values)).astype(float)
target.shape
scaled_target = target * scaling.values
scaled_target.shape
inputs = x.dropna().values
inputs = np.diff(inputs, axis=1)
inputs.shape
Explanation: Even the split data sets look similar. It seems there is no problem with doubling the underrepresented categories.
Now for the mapping
$N \times M \to N \times M \times 3$
End of explanation
leg_nr = 0
(x_train, y_train), (x_val, y_val), (x_test, y_test) = splitted_dataset(
pd.DataFrame(inputs), pd.DataFrame(target[:,leg_nr,:]))
input_layer = keras.layers.Input(shape=(7,), name="inputs")
hidden_layer = keras.layers.Dense(30, activation="relu", name="hidden")(input_layer)
output_layer = keras.layers.Dense(3, activation="softmax", name="predictions")(hidden_layer)
model = keras.models.Model(inputs=input_layer, outputs=output_layer)
print(model.summary())
model.compile(optimizer="Adam",
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(x_train.values[:-1], y_train.values[1:], epochs=100, batch_size=32,
validation_data=(x_val.values[:-1], y_val.values[1:]),
class_weight={k:v for k, v in enumerate(scaling.iloc[leg_nr].values)})
test_pred = model.predict(x_test.values[:-1])
def accuracy(y1, y2):
return np.equal(np.argmax(y1, axis=-1), np.argmax(y2, axis=-1)).sum() / len(y1)
# Predicted accuracy
accuracy(test_pred, y_test.values[1:])
# Naive accuracy
accuracy(y_test.values[:-1], y_test.values[1:])
Explanation: Testwise training
Only for first leg
End of explanation
results_dict = {}
# Try for all the legs:
#days = 1
for days in range(1, 23, 3):
network_predictions = []
naive_predictions = []
for leg_nr in range(6):
(x_train, y_train), (x_val, y_val), (x_test, y_test) = splitted_dataset(
pd.DataFrame(inputs), pd.DataFrame(target[:,leg_nr,:]))
input_layer = keras.layers.Input(shape=(7,), name="inputs")
hidden_layer = keras.layers.Dense(30, activation="relu", name="hidden")(input_layer)
output_layer = keras.layers.Dense(3, activation="softmax", name="predictions")(hidden_layer)
model = keras.models.Model(inputs=input_layer, outputs=output_layer)
model.compile(optimizer="Adam",
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(x_train.values[:-days], y_train.values[days:], epochs=100, batch_size=32,
validation_data=(x_val.values[:-days], y_val.values[days:]),
class_weight={k:v for k, v in enumerate(scaling.iloc[leg_nr].values)},
verbose=0)
test_pred = model.predict(x_test.values[:-days])
pred_acc = accuracy(test_pred, y_test.values[days:])
naive_acc = accuracy(y_test.values[:-days], y_test.values[days:])
network_predictions.append(pred_acc)
naive_predictions.append(naive_acc)
results = pd.DataFrame([pd.Series(naive_predictions), pd.Series(network_predictions)])
results.columns = ["V" + str(i) for i in range(1, 7)]
results.index = ["Naive", "Network"]
results_dict[days] = results
results_dict[1].T.plot.bar(figsize=(4, 4), width=0.8)
plt.legend(loc="lower center")
plt.axhline(results_dict[1].loc["Naive"].mean(), color="#1f77b4")
plt.axhline(results_dict[1].loc["Network"].mean(), color="#ff7f0e")
plt.ylabel("Accuracy")
plt.xticks(np.arange(6), list(range(1, 7)))
plt.savefig("classification-1.pdf", format="pdf", dpi=300, bbox_inches="tight")
plt.show()
plt.figure(figsize=(4,4))
plt.plot(list(results_dict.keys()),
[results_dict[i].loc["Naive"].mean() for i in results_dict],
list(results_dict.keys()),
[results_dict[i].loc["Network"].mean() for i in results_dict],
linewidth=2)
plt.legend(("Naive", "Network"))
plt.ylabel("Mean accuracy")
plt.axhline(0.5, color="grey", alpha=0.75)
plt.xlim(1, 22)
plt.grid(axis="x")
plt.xticks(list(results_dict.keys()))
plt.savefig("classification-all.pdf", format="pdf", dpi=300, bbox_inches="tight")
plt.show()
Explanation: Conclusion:
Classification works equally badly.
End of explanation |
7,039 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Serializing the model
Jump_to lesson 12 video
Step1: It's also possible to save the whole model, including the architecture, but it gets quite fiddly and we don't recommend it. Instead, just save the parameters, and recreate the model directly.
Step2: Pets
Jump_to lesson 12 video
Step3: Custom head
Jump_to lesson 12 video
Step4: adapt_model and gradual unfreezing
Jump_to lesson 12 video
Step5: Batch norm transfer
Jump_to lesson 12 video
Step6: Pytorch already has an apply method we can use
Step7: Discriminative LR and param groups
Jump_to lesson 12 video
Step8: Export | Python Code:
path = datasets.untar_data(datasets.URLs.IMAGEWOOF_160)
size = 128
bs = 64
tfms = [make_rgb, RandomResizedCrop(size, scale=(0.35,1)), np_to_float, PilRandomFlip()]
val_tfms = [make_rgb, CenterCrop(size), np_to_float]
il = ImageList.from_files(path, tfms=tfms)
sd = SplitData.split_by_func(il, partial(grandparent_splitter, valid_name='val'))
ll = label_by_func(sd, parent_labeler, proc_y=CategoryProcessor())
ll.valid.x.tfms = val_tfms
data = ll.to_databunch(bs, c_in=3, c_out=10, num_workers=8)
len(il)
loss_func = LabelSmoothingCrossEntropy()
opt_func = adam_opt(mom=0.9, mom_sqr=0.99, eps=1e-6, wd=1e-2)
learn = cnn_learner(xresnet18, data, loss_func, opt_func, norm=norm_imagenette)
def sched_1cycle(lr, pct_start=0.3, mom_start=0.95, mom_mid=0.85, mom_end=0.95):
phases = create_phases(pct_start)
sched_lr = combine_scheds(phases, cos_1cycle_anneal(lr/10., lr, lr/1e5))
sched_mom = combine_scheds(phases, cos_1cycle_anneal(mom_start, mom_mid, mom_end))
return [ParamScheduler('lr', sched_lr),
ParamScheduler('mom', sched_mom)]
lr = 3e-3
pct_start = 0.5
cbsched = sched_1cycle(lr, pct_start)
learn.fit(40, cbsched)
st = learn.model.state_dict()
type(st)
', '.join(st.keys())
st['10.bias']
mdl_path = path/'models'
mdl_path.mkdir(exist_ok=True)
Explanation: Serializing the model
Jump_to lesson 12 video
End of explanation
torch.save(st, mdl_path/'iw5')
Explanation: It's also possible to save the whole model, including the architecture, but it gets quite fiddly and we don't recommend it. Instead, just save the parameters, and recreate the model directly.
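Loading the parameters back later means recreating the architecture first and then restoring the state dict, the same pattern used further down in this notebook:
learn = cnn_learner(xresnet18, data, loss_func, opt_func, c_out=10, norm=norm_imagenette)
learn.model.load_state_dict(torch.load(mdl_path/'iw5'))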
End of explanation
pets = datasets.untar_data(datasets.URLs.PETS)
pets.ls()
pets_path = pets/'images'
il = ImageList.from_files(pets_path, tfms=tfms)
il
#export
def random_splitter(fn, p_valid): return random.random() < p_valid
random.seed(42)
sd = SplitData.split_by_func(il, partial(random_splitter, p_valid=0.1))
sd
n = il.items[0].name; n
re.findall(r'^(.*)_\d+.jpg$', n)[0]
def pet_labeler(fn): return re.findall(r'^(.*)_\d+.jpg$', fn.name)[0]
proc = CategoryProcessor()
ll = label_by_func(sd, pet_labeler, proc_y=proc)
', '.join(proc.vocab)
ll.valid.x.tfms = val_tfms
c_out = len(proc.vocab)
data = ll.to_databunch(bs, c_in=3, c_out=c_out, num_workers=8)
learn = cnn_learner(xresnet18, data, loss_func, opt_func, norm=norm_imagenette)
learn.fit(5, cbsched)
Explanation: Pets
Jump_to lesson 12 video
End of explanation
learn = cnn_learner(xresnet18, data, loss_func, opt_func, c_out=10, norm=norm_imagenette)
st = torch.load(mdl_path/'iw5')
m = learn.model
m.load_state_dict(st)
cut = next(i for i,o in enumerate(m.children()) if isinstance(o,nn.AdaptiveAvgPool2d))
m_cut = m[:cut]
xb,yb = get_batch(data.valid_dl, learn)
pred = m_cut(xb)
pred.shape
ni = pred.shape[1]
#export
class AdaptiveConcatPool2d(nn.Module):
def __init__(self, sz=1):
super().__init__()
self.output_size = sz
self.ap = nn.AdaptiveAvgPool2d(sz)
self.mp = nn.AdaptiveMaxPool2d(sz)
def forward(self, x): return torch.cat([self.mp(x), self.ap(x)], 1)
nh = 40
m_new = nn.Sequential(
m_cut, AdaptiveConcatPool2d(), Flatten(),
nn.Linear(ni*2, data.c_out))
learn.model = m_new
learn.fit(5, cbsched)
Explanation: Custom head
Jump_to lesson 12 video
End of explanation
def adapt_model(learn, data):
cut = next(i for i,o in enumerate(learn.model.children())
if isinstance(o,nn.AdaptiveAvgPool2d))
m_cut = learn.model[:cut]
xb,yb = get_batch(data.valid_dl, learn)
pred = m_cut(xb)
ni = pred.shape[1]
m_new = nn.Sequential(
m_cut, AdaptiveConcatPool2d(), Flatten(),
nn.Linear(ni*2, data.c_out))
learn.model = m_new
learn = cnn_learner(xresnet18, data, loss_func, opt_func, c_out=10, norm=norm_imagenette)
learn.model.load_state_dict(torch.load(mdl_path/'iw5'))
adapt_model(learn, data)
for p in learn.model[0].parameters(): p.requires_grad_(False)
learn.fit(3, sched_1cycle(1e-2, 0.5))
for p in learn.model[0].parameters(): p.requires_grad_(True)
learn.fit(5, cbsched, reset_opt=True)
Explanation: adapt_model and gradual unfreezing
Jump_to lesson 12 video
End of explanation
learn = cnn_learner(xresnet18, data, loss_func, opt_func, c_out=10, norm=norm_imagenette)
learn.model.load_state_dict(torch.load(mdl_path/'iw5'))
adapt_model(learn, data)
def apply_mod(m, f):
f(m)
for l in m.children(): apply_mod(l, f)
def set_grad(m, b):
if isinstance(m, (nn.Linear,nn.BatchNorm2d)): return
if hasattr(m, 'weight'):
for p in m.parameters(): p.requires_grad_(b)
apply_mod(learn.model, partial(set_grad, b=False))
learn.fit(3, sched_1cycle(1e-2, 0.5))
apply_mod(learn.model, partial(set_grad, b=True))
learn.fit(5, cbsched, reset_opt=True)
Explanation: Batch norm transfer
Jump_to lesson 12 video
End of explanation
learn.model.apply(partial(set_grad, b=False));
Explanation: Pytorch already has an apply method we can use:
End of explanation
learn = cnn_learner(xresnet18, data, loss_func, opt_func, c_out=10, norm=norm_imagenette)
learn.model.load_state_dict(torch.load(mdl_path/'iw5'))
adapt_model(learn, data)
def bn_splitter(m):
def _bn_splitter(l, g1, g2):
if isinstance(l, nn.BatchNorm2d): g2 += l.parameters()
elif hasattr(l, 'weight'): g1 += l.parameters()
for ll in l.children(): _bn_splitter(ll, g1, g2)
g1,g2 = [],[]
_bn_splitter(m[0], g1, g2)
g2 += m[1:].parameters()
return g1,g2
a,b = bn_splitter(learn.model)
test_eq(len(a)+len(b), len(list(m.parameters())))
Learner.ALL_CBS
#export
from types import SimpleNamespace
cb_types = SimpleNamespace(**{o:o for o in Learner.ALL_CBS})
cb_types.after_backward
#export
class DebugCallback(Callback):
_order = 999
def __init__(self, cb_name, f=None): self.cb_name,self.f = cb_name,f
def __call__(self, cb_name):
if cb_name==self.cb_name:
if self.f: self.f(self.run)
else: set_trace()
#export
def sched_1cycle(lrs, pct_start=0.3, mom_start=0.95, mom_mid=0.85, mom_end=0.95):
phases = create_phases(pct_start)
sched_lr = [combine_scheds(phases, cos_1cycle_anneal(lr/10., lr, lr/1e5))
for lr in lrs]
sched_mom = combine_scheds(phases, cos_1cycle_anneal(mom_start, mom_mid, mom_end))
return [ParamScheduler('lr', sched_lr),
ParamScheduler('mom', sched_mom)]
disc_lr_sched = sched_1cycle([0,3e-2], 0.5)
learn = cnn_learner(xresnet18, data, loss_func, opt_func,
c_out=10, norm=norm_imagenette, splitter=bn_splitter)
learn.model.load_state_dict(torch.load(mdl_path/'iw5'))
adapt_model(learn, data)
def _print_det(o):
print (len(o.opt.param_groups), o.opt.hypers)
raise CancelTrainException()
learn.fit(1, disc_lr_sched + [DebugCallback(cb_types.after_batch, _print_det)])
learn.fit(3, disc_lr_sched)
disc_lr_sched = sched_1cycle([1e-3,1e-2], 0.3)
learn.fit(5, disc_lr_sched)
Explanation: Discriminative LR and param groups
Jump_to lesson 12 video
End of explanation
!./notebook2script.py 11a_transfer_learning.ipynb
Explanation: Export
End of explanation |
7,040 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Author
Step1: First let's check if there are new or deleted files (only matching by file names).
Step2: Cool, no new nor deleted files.
Now let's set up a dataset that, for each table, links both the old and the new file together.
Step3: Let's make sure the structure hasn't changed
Step4: OK, let's check what's new in there
Step5: Ouch, it seems they have decided to rename one column, however the opposite change was done when going to v341, so they're just reverting back. Lucky us we never used it.
Now let's see for each file if there are more or less rows.
Step6: There are some minor changes in many files, but based on my knowledge of ROME, none from the main files.
The most interesting ones are in referentiel_appellation, item, and liens_rome_referentiels, so let's see more precisely.
Step7: Alright, so the only change seems to be 10 new jobs added. Let's take a look (only showing interesting fields)
Step8: Those are indeed new jobs. Some are related to ecology sneaking in.
OK, let's check at the changes in items
Step9: As anticipated it is a very minor change (hard to see it visually)
Step10: The new ones seem legit to me and related to the new jobs.
The changes in liens_rome_referentiels include changes for those items, so let's only check the changes not related to those.
Step11: So in addition to the added items, there are few fixes. Let's have a look at them | Python Code:
import collections
import glob
import os
from os import path
import matplotlib_venn
import pandas as pd
rome_path = path.join(os.getenv('DATA_FOLDER'), 'rome/csv')
OLD_VERSION = '342'
NEW_VERSION = '343'
old_version_files = frozenset(glob.glob(rome_path + '/*{}*'.format(OLD_VERSION)))
new_version_files = frozenset(glob.glob(rome_path + '/*{}*'.format(NEW_VERSION)))
Explanation: Author: Pascal, [email protected]
Date: 2020-06-24
ROME update from v342 to v343
In March 2020 a new version of the ROME was released. I want to investigate what changed and whether we need to do anything about it.
You might not be able to reproduce this notebook, mostly because it requires to have the two versions of the ROME in your data/rome/csv folder which happens only just before we switch to v343. You will have to trust me on the results ;-)
Skip the run test because it requires older versions of the ROME.
End of explanation
new_files = new_version_files - frozenset(f.replace(OLD_VERSION, NEW_VERSION) for f in old_version_files)
deleted_files = old_version_files - frozenset(f.replace(NEW_VERSION, OLD_VERSION) for f in new_version_files)
print('{:d} new files'.format(len(new_files)))
print('{:d} deleted files'.format(len(deleted_files)))
Explanation: First let's check if there are new or deleted files (only matching by file names).
End of explanation
# Load all ROME datasets for the two versions we compare.
VersionedDataset = collections.namedtuple('VersionedDataset', ['basename', 'old', 'new'])
def read_csv(filename):
try:
return pd.read_csv(filename)
except pd.errors.ParserError:
display(f'While parsing: {filename}')
raise
rome_data = [VersionedDataset(
basename=path.basename(f),
old=read_csv(f.replace(NEW_VERSION, OLD_VERSION)),
new=read_csv(f))
for f in sorted(new_version_files)]
def find_rome_dataset_by_name(data, partial_name):
for dataset in data:
if 'unix_{}_v{}_utf8.csv'.format(partial_name, NEW_VERSION) == dataset.basename:
return dataset
raise ValueError('No dataset named {}, the list is\n{}'.format(partial_name, [d.basename for d in data]))
Explanation: Cool, no new nor deleted files.
Now let's set up a dataset that, for each table, links both the old and the new file together.
End of explanation
for dataset in rome_data:
if set(dataset.old.columns) != set(dataset.new.columns):
print('Columns of {} have changed.'.format(dataset.basename))
Explanation: Let's make sure the structure hasn't changed:
End of explanation
jobs = find_rome_dataset_by_name(rome_data, 'referentiel_appellation')
print(f'New columns: {set(jobs.old.columns) - set(jobs.new.columns)}')
print(f'Old columns: {set(jobs.new.columns) - set(jobs.old.columns)}')
Explanation: OK, let's check what's new in there:
End of explanation
same_row_count_files = 0
for dataset in rome_data:
diff = len(dataset.new.index) - len(dataset.old.index)
if diff > 0:
print('{:d}/{:d} values added in {}'.format(
diff, len(dataset.new.index), dataset.basename))
elif diff < 0:
print('{:d}/{:d} values removed in {}'.format(
-diff, len(dataset.old.index), dataset.basename))
else:
same_row_count_files += 1
print('{:d}/{:d} files with the same number of rows'.format(
same_row_count_files, len(rome_data)))
Explanation: Ouch, it seems they have decided to rename one column, however the opposite change was done when going to v341, so they're just reverting back. Lucky us we never used it.
Now let's see for each file if there are more or less rows.
End of explanation
jobs = find_rome_dataset_by_name(rome_data, 'referentiel_appellation')
new_jobs = set(jobs.new.code_ogr) - set(jobs.old.code_ogr)
obsolete_jobs = set(jobs.old.code_ogr) - set(jobs.new.code_ogr)
stable_jobs = set(jobs.new.code_ogr) & set(jobs.old.code_ogr)
matplotlib_venn.venn2((len(obsolete_jobs), len(new_jobs), len(stable_jobs)), (OLD_VERSION, NEW_VERSION));
Explanation: There are some minor changes in many files, but based on my knowledge of ROME, none from the main files.
The most interesting ones are in referentiel_appellation, item, and liens_rome_referentiels, so let's see more precisely.
End of explanation
pd.options.display.max_colwidth = 2000
jobs.new[jobs.new.code_ogr.isin(new_jobs)][['code_ogr', 'libelle_appellation_long', 'code_rome']]
Explanation: Alright, so the only change seems to be 10 new jobs added. Let's take a look (only showing interesting fields):
End of explanation
items = find_rome_dataset_by_name(rome_data, 'item')
new_items = set(items.new.code_ogr) - set(items.old.code_ogr)
obsolete_items = set(items.old.code_ogr) - set(items.new.code_ogr)
stable_items = set(items.new.code_ogr) & set(items.old.code_ogr)
matplotlib_venn.venn2((len(obsolete_items), len(new_items), len(stable_items)), (OLD_VERSION, NEW_VERSION));
Explanation: Those are indeed new jobs. Some are related to ecology sneaking in.
OK, let's check at the changes in items:
End of explanation
items.new[items.new.code_ogr.isin(new_items)].head()
Explanation: As anticipated it is a very minor change (hard to see it visually): 9 new ones have been created. Let's have a look at them.
End of explanation
links = find_rome_dataset_by_name(rome_data, 'liens_rome_referentiels')
old_links_on_stable_items = links.old[links.old.code_ogr.isin(stable_items)]
new_links_on_stable_items = links.new[links.new.code_ogr.isin(stable_items)]
old = old_links_on_stable_items[['code_rome', 'code_ogr']]
new = new_links_on_stable_items[['code_rome', 'code_ogr']]
links_merged = old.merge(new, how='outer', indicator=True)
links_merged['_diff'] = links_merged._merge.map({'left_only': 'removed', 'right_only': 'added'})
links_merged._diff.value_counts()
Explanation: The new ones seem legit to me and related to the new jobs.
The changes in liens_rome_referentiels include changes for those items, so let's only check the changes not related to those.
End of explanation
job_group_names = find_rome_dataset_by_name(rome_data, 'referentiel_code_rome').new.set_index('code_rome').libelle_rome
item_names = items.new.set_index('code_ogr').libelle.drop_duplicates()
links_merged['job_group_name'] = links_merged.code_rome.map(job_group_names)
links_merged['item_name'] = links_merged.code_ogr.map(item_names)
display(links_merged[links_merged._diff == 'removed'].dropna().head(5))
links_merged[links_merged._diff == 'added'].dropna().head(5)
Explanation: So in addition to the added items, there are few fixes. Let's have a look at them:
End of explanation |
7,041 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TF-Keras Image Classification Distributed Multi-Worker Training on GPU using Vertex Training with Custom Container
<table align="left">
<td>
<a href="https
Step1: Vertex Training using Vertex SDK and Custom Container
Build Custom Container
Step2: Initialize Vertex SDK
Step3: Create a Vertex Tensorboard Instance
Step4: Option
Step5: Training Output Artifact
Step6: Clean Up Artifact | Python Code:
PROJECT_ID = "YOUR PROJECT ID"
BUCKET_NAME = "gs://YOUR BUCKET NAME"
REGION = "YOUR REGION"
SERVICE_ACCOUNT = "YOUR SERVICE ACCOUNT"
content_name = "tf-keras-img-cls-dist-multi-worker-gpu-cust-cont"
Explanation: TF-Keras Image Classification Distributed Multi-Worker Training on GPU using Vertex Training with Custom Container
<table align="left">
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/community-content/tf_keras_image_classification_distributed_multi_worker_with_vertex_sdk/multi_worker_vertex_training_on_gpu_with_custom_container.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
Setup
End of explanation
hostname = "gcr.io"
image_name = content_name
tag = "latest"
custom_container_image_uri = f"{hostname}/{PROJECT_ID}/{image_name}:{tag}"
! cd trainer && docker build -t $custom_container_image_uri -f gpu.Dockerfile .
! docker run --rm $custom_container_image_uri --epochs 2 --local-mode
! docker push $custom_container_image_uri
! gcloud container images list --repository $hostname/$PROJECT_ID
Explanation: Vertex Training using Vertex SDK and Custom Container
Build Custom Container
End of explanation
! pip install -r requirements.txt
from google.cloud import aiplatform
aiplatform.init(
project=PROJECT_ID,
staging_bucket=BUCKET_NAME,
location=REGION,
)
Explanation: Initialize Vertex SDK
End of explanation
tensorboard = aiplatform.Tensorboard.create(
display_name=content_name,
)
Explanation: Create a Vertex Tensorboard Instance
End of explanation
display_name = content_name
gcs_output_uri_prefix = f"{BUCKET_NAME}/{display_name}"
replica_count = 4
machine_type = "n1-standard-4"
accelerator_count = 1
accelerator_type = "NVIDIA_TESLA_K80"
container_args = [
"--epochs",
"50",
"--batch-size",
"32",
]
custom_container_training_job = aiplatform.CustomContainerTrainingJob(
display_name=display_name,
container_uri=custom_container_image_uri,
)
custom_container_training_job.run(
args=container_args,
base_output_dir=gcs_output_uri_prefix,
replica_count=replica_count,
machine_type=machine_type,
accelerator_type=accelerator_type,
accelerator_count=accelerator_count,
tensorboard=tensorboard.resource_name,
service_account=SERVICE_ACCOUNT,
)
print(f"Custom Training Job Name: {custom_container_training_job.resource_name}")
print(f"GCS Output URI Prefix: {gcs_output_uri_prefix}")
Explanation: Option: Use a Previously Created Vertex Tensorboard Instance
tensorboard_name = "Your Tensorboard Resource Name or Tensorboard ID"
tensorboard = aiplatform.Tensorboard(tensorboard_name=tensorboard_name)
Run a Vertex SDK CustomContainerTrainingJob
End of explanation
! gsutil ls $gcs_output_uri_prefix
Explanation: Training Output Artifact
End of explanation
! gsutil rm -rf $gcs_output_uri_prefix
Explanation: Clean Up Artifact
End of explanation |
7,042 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of claving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'noresm2-lmec', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: NCC
Source ID: NORESM2-LMEC
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:24
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adative grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calulated. Include the temporal coupling frequeny from the atmosphere, whether or not a seperate SMB model is used, and if so details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of claving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description if ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation |
7,043 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualization 1
Step1: Scatter plots
Learn how to use Matplotlib's plt.scatter function to make a 2d scatter plot.
Generate random data using np.random.randn.
Style the markers (color, size, shape, alpha) appropriately.
Include an x and y label and title.
Step2: Histogram
Learn how to use Matplotlib's plt.hist function to make a 1d histogram.
Generate randpom data using np.random.randn.
Figure out how to set the number of histogram bins and other style options.
Include an x and y label and title. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
Explanation: Visualization 1: Matplotlib Basics Exercises
End of explanation
plt.figure(figsize=(10,8))
plt.scatter(np.random.randn(100),np.random.randn(100),s=50,c='b',marker='d',alpha=.7)
plt.xlabel('x-coordinate')
plt.ylabel('y-coordinate')
plt.title('100 Random Points')
Explanation: Scatter plots
Learn how to use Matplotlib's plt.scatter function to make a 2d scatter plot.
Generate random data using np.random.randn.
Style the markers (color, size, shape, alpha) appropriately.
Include an x and y label and title.
End of explanation
plt.figure(figsize=(10,8))
p=plt.hist(np.random.randn(100000),bins=50,color='g')
plt.xlabel('value')
plt.ylabel('frequency')
plt.title('Distrobution 100000 Random Points with mean of 0 and variance of 1')
Explanation: Histogram
Learn how to use Matplotlib's plt.hist function to make a 1d histogram.
Generate randpom data using np.random.randn.
Figure out how to set the number of histogram bins and other style options.
Include an x and y label and title.
End of explanation |
7,044 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Predict Shakespeare with Cloud TPUs and Keras
Overview
This example uses tf.keras to build a language model and train it on a Cloud TPU. This language model predicts the next character of text given the text so far. The trained model can generate new snippets of text that read in a similar style to the text training data.
The model trains for 10 epochs and completes in approximately 5 minutes.
This notebook is hosted on GitHub. To view it in its original repository, after opening the notebook, select File > View on GitHub.
Learning objectives
In this Colab, you will learn how to
Step3: Build the input dataset
We just downloaded some text. The following shows the start of the text and a random snippet so we can get a feel for the whole text.
Step5: Build the model
The model is defined as a two-layer, forward-LSTM, the same model should work both on CPU and TPU.
Because our vocabulary size is 256, the input dimension to the Embedding layer is 256.
When specifying the arguments to the LSTM, it is important to note how the stateful argument is used. When training we will make sure that stateful=False because we do want to reset the state of our model between batches, but when sampling (computing predictions) from a trained model, we want stateful=True so that the model can retain information across the current batch and generate more interesting text.
Step6: Train the model
First, we need to create a distribution strategy that can use the TPU. In this case it is TPUStrategy. You can create and compile the model inside its scope. Once that is done, future calls to the standard Keras methods fit, evaluate and predict use the TPU.
Again note that we train with stateful=False because while training, we only care about one batch at a time.
Step7: Make predictions with the model
Use the trained model to make predictions and generate your own Shakespeare-esque play.
Start the model off with a seed sentence, then generate 250 characters from it. The model makes five predictions from the initial seed.
The predictions are done on the CPU so the batch size (5) in this case does not have to be divisible by 8.
Note that when we are doing predictions or, to be more precise, text generation, we set stateful=True so that the model's state is kept between batches. If stateful is false, the model state is reset between each batch, and the model will only be able to use the information from the current batch (a single character) to make a prediction.
The output of the model is a set of probabilities for the next character (given the input so far). To build a paragraph, we predict one character at a time and sample a character (based on the probabilities provided by the model). For example, if the input character is "o" and the output probabilities are "p" (0.65), "t" (0.30), others characters (0.05), then we allow our model to generate text other than just "Ophelia" and "Othello." | Python Code:
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: <a href="https://colab.research.google.com/github/tensorflow/tpu/blob/master/tools/colab/shakespeare_with_tpu_and_keras.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2018 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
!wget --show-progress --continue -O /content/shakespeare.txt http://www.gutenberg.org/files/100/100-0.txt
Explanation: Predict Shakespeare with Cloud TPUs and Keras
Overview
This example uses tf.keras to build a language model and train it on a Cloud TPU. This language model predicts the next character of text given the text so far. The trained model can generate new snippets of text that read in a similar style to the text training data.
The model trains for 10 epochs and completes in approximately 5 minutes.
This notebook is hosted on GitHub. To view it in its original repository, after opening the notebook, select File > View on GitHub.
Learning objectives
In this Colab, you will learn how to:
* Build a two-layer, forward-LSTM model.
* Use distribution strategy to produce a tf.keras model that runs on TPU version and then use the standard Keras methods to train: fit, predict, and evaluate.
* Use the trained model to make predictions and generate your own Shakespeare-esque play.
Instructions
<h3> Train on TPU <a href="https://cloud.google.com/tpu/"><img valign="middle" src="https://raw.githubusercontent.com/GoogleCloudPlatform/tensorflow-without-a-phd/master/tensorflow-rl-pong/images/tpu-hexagon.png" width="50"></a></h3>
On the main menu, click Runtime and select Change runtime type. Set "TPU" as the hardware accelerator.
Click Runtime again and select Runtime > Run All. You can also run the cells manually with Shift-ENTER.
TPUs are located in Google Cloud, for optimal performance, they read data directly from Google Cloud Storage (GCS)
Data, model, and training
In this example, you train the model on the combined works of William Shakespeare, then use the model to compose a play in the style of The Great Bard:
<blockquote>
Loves that led me no dumbs lack her Berjoy's face with her to-day.
The spirits roar'd; which shames which within his powers
Which tied up remedies lending with occasion,
A loud and Lancaster, stabb'd in me
Upon my sword for ever: 'Agripo'er, his days let me free.
Stop it of that word, be so: at Lear,
When I did profess the hour-stranger for my life,
When I did sink to be cried how for aught;
Some beds which seeks chaste senses prove burning;
But he perforces seen in her eyes so fast;
And _
</blockquote>
Download data
Download The Complete Works of William Shakespeare as a single text file from Project Gutenberg. You use snippets from this file as the training data for the model. The target snippet is offset by one character.
End of explanation
!head -n5 /content/shakespeare.txt
!echo "..."
!shuf -n5 /content/shakespeare.txt
import numpy as np
import tensorflow as tf
import os
import distutils
if distutils.version.LooseVersion(tf.__version__) < '2.0':
raise Exception('This notebook is compatible with TensorFlow 2.0 or higher.')
SHAKESPEARE_TXT = '/content/shakespeare.txt'
def transform(txt):
return np.asarray([ord(c) for c in txt if ord(c) < 255], dtype=np.int32)
def input_fn(seq_len=100, batch_size=1024):
"""Return a dataset of source and target sequences for training."""
with tf.io.gfile.GFile(SHAKESPEARE_TXT, 'r') as f:
txt = f.read()
source = tf.constant(transform(txt), dtype=tf.int32)
ds = tf.data.Dataset.from_tensor_slices(source).batch(seq_len+1, drop_remainder=True)
def split_input_target(chunk):
input_text = chunk[:-1]
target_text = chunk[1:]
return input_text, target_text
BUFFER_SIZE = 10000
ds = ds.map(split_input_target).shuffle(BUFFER_SIZE).batch(batch_size, drop_remainder=True)
return ds.repeat()
Explanation: Build the input dataset
We just downloaded some text. The following shows the start of the text and a random snippet so we can get a feel for the whole text.
End of explanation
EMBEDDING_DIM = 512
def lstm_model(seq_len=100, batch_size=None, stateful=True):
"""Language model: predict the next word given the current word."""
source = tf.keras.Input(
name='seed', shape=(seq_len,), batch_size=batch_size, dtype=tf.int32)
embedding = tf.keras.layers.Embedding(input_dim=256, output_dim=EMBEDDING_DIM)(source)
lstm_1 = tf.keras.layers.LSTM(EMBEDDING_DIM, stateful=stateful, return_sequences=True)(embedding)
lstm_2 = tf.keras.layers.LSTM(EMBEDDING_DIM, stateful=stateful, return_sequences=True)(lstm_1)
predicted_char = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(256, activation='softmax'))(lstm_2)
return tf.keras.Model(inputs=[source], outputs=[predicted_char])
Explanation: Build the model
The model is defined as a two-layer, forward-LSTM, the same model should work both on CPU and TPU.
Because our vocabulary size is 256, the input dimension to the Embedding layer is 256.
When specifying the arguments to the LSTM, it is important to note how the stateful argument is used. When training we will make sure that stateful=False because we do want to reset the state of our model between batches, but when sampling (computing predictions) from a trained model, we want stateful=True so that the model can retain information across the current batch and generate more interesting text.
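As a compact sketch of the two configurations built later in this notebook with the lstm_model function above (the variable names here are only illustrative):
train_net = lstm_model(seq_len=100, stateful=False)              # training: state is reset every batch
sample_net = lstm_model(seq_len=1, batch_size=5, stateful=True)  # sampling: state carries across calls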
End of explanation
tf.keras.backend.clear_session()
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
tf.config.experimental_connect_to_cluster(resolver)
# This is the TPU initialization code that has to be at the beginning.
tf.tpu.experimental.initialize_tpu_system(resolver)
print("All devices: ", tf.config.list_logical_devices('TPU'))
strategy = tf.distribute.experimental.TPUStrategy(resolver)
with strategy.scope():
training_model = lstm_model(seq_len=100, stateful=False)
training_model.compile(
optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.01),
loss='sparse_categorical_crossentropy',
metrics=['sparse_categorical_accuracy'])
training_model.fit(
input_fn(),
steps_per_epoch=100,
epochs=10
)
training_model.save_weights('/tmp/bard.h5', overwrite=True)
Explanation: Train the model
First, we need to create a distribution strategy that can use the TPU. In this case it is TPUStrategy. You can create and compile the model inside its scope. Once that is done, future calls to the standard Keras methods fit, evaluate and predict use the TPU.
Again note that we train with stateful=False because while training, we only care about one batch at a time.
End of explanation
BATCH_SIZE = 5
PREDICT_LEN = 250
# Keras requires the batch size be specified ahead of time for stateful models.
# We use a sequence length of 1, as we will be feeding in one character at a
# time and predicting the next character.
prediction_model = lstm_model(seq_len=1, batch_size=BATCH_SIZE, stateful=True)
prediction_model.load_weights('/tmp/bard.h5')
# We seed the model with our initial string, copied BATCH_SIZE times
seed_txt = 'Looks it not like the king? Verily, we must go! '
seed = transform(seed_txt)
seed = np.repeat(np.expand_dims(seed, 0), BATCH_SIZE, axis=0)
# First, run the seed forward to prime the state of the model.
prediction_model.reset_states()
for i in range(len(seed_txt) - 1):
prediction_model.predict(seed[:, i:i + 1])
# Now we can accumulate predictions!
predictions = [seed[:, -1:]]
for i in range(PREDICT_LEN):
last_word = predictions[-1]
next_probits = prediction_model.predict(last_word)[:, 0, :]
# sample from our output distribution
next_idx = [
np.random.choice(256, p=next_probits[i])
for i in range(BATCH_SIZE)
]
predictions.append(np.asarray(next_idx, dtype=np.int32))
for i in range(BATCH_SIZE):
print('PREDICTION %d\n\n' % i)
p = [predictions[j][i] for j in range(PREDICT_LEN)]
generated = ''.join([chr(c) for c in p]) # Convert back to text
print(generated)
print()
assert len(generated) == PREDICT_LEN, 'Generated text too short'
Explanation: Make predictions with the model
Use the trained model to make predictions and generate your own Shakespeare-esque play.
Start the model off with a seed sentence, then generate 250 characters from it. The model makes five predictions from the initial seed.
The predictions are done on the CPU so the batch size (5) in this case does not have to be divisible by 8.
Note that when we are doing predictions or, to be more precise, text generation, we set stateful=True so that the model's state is kept between batches. If stateful is false, the model state is reset between each batch, and the model will only be able to use the information from the current batch (a single character) to make a prediction.
The output of the model is a set of probabilities for the next character (given the input so far). To build a paragraph, we predict one character at a time and sample a character (based on the probabilities provided by the model). For example, if the input character is "o" and the output probabilities are "p" (0.65), "t" (0.30), other characters (0.05), then we allow our model to generate text other than just "Ophelia" and "Othello."
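A minimal sketch of that sampling step, using the example probabilities above (the third entry stands in for all remaining characters):
import numpy as np
chars = ['p', 't', '?']                       # '?' is a placeholder for the other characters
probs = [0.65, 0.30, 0.05]                    # probabilities must sum to 1
next_char = np.random.choice(chars, p=probs)  # usually 'p', occasionally 't' or '?'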
End of explanation |
7,045 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
pyISC Example
Step1: Create Data
Create a data set with 3 columns from different probability distributions
Step2: Used Anomaly Detector
Create an anomaly detector, using as its first argument the statistical models to be used. Then we use
- a one-sided Poisson distribution for modelling the first frequency column (column 1) (as in the first example),
- a two-sided Poisson distribution for the second frequency column (column 2),
- and a Gaussian (Normal) distribution for the last column (column 3).
Given that we now have more than one variable, it is necessary to also add a method to combine the output from the statistical models, which in this case is the maximum anomaly score of each component model
Step3: Train the anomaly detector
Step4: Compute the anomaly scores for each data point
Step5: Anomaly Scores
Now we can print some examples of normal frequencies vs. anomaly scores for the first 15 normal data points
Step6: The anomalous frequencies vs. anomaly scores for the 15 anomalous data points
Step7: As can be seen above, the anomalous data also have higher anomaly scores than the normal frequencies as it should be.<br/><br/>
This becomes even more visible if we plot the anomaly scores (y-axis) against each data point (x-axis)
Step8: We can also look at the details of each column in terms of their individual anomaly scores | Python Code:
import pyisc;
import numpy as np
from scipy.stats import poisson, norm
%matplotlib inline
from pylab import plot
Explanation: pyISC Example: Multivariable Anomaly Detection
In this example, we extend the simple example with one Poisson distributed variable to the multivariate case with three variables, two Poisson distributed variables and one Gaussian distributed variable.
End of explanation
po_normal = poisson(10)
po_anomaly = poisson(25)
po_normal2 = poisson(2)
po_anomaly2 = poisson(3)
gs_normal = norm(1, 12)
gs_anomaly = norm(2,30)
normal_len = 10000
anomaly_len = 15
data = np.column_stack(
[
[1] * (normal_len+anomaly_len),
list(po_normal.rvs(normal_len))+list(po_anomaly.rvs(anomaly_len)),
list(po_normal2.rvs(normal_len))+list(po_anomaly2.rvs(anomaly_len)),
list(gs_normal.rvs(normal_len))+list(gs_anomaly.rvs(anomaly_len)),
]
)
Explanation: Create Data
Create a data set with 3 columns from different probability distributions:
End of explanation
anomaly_detector = pyisc.AnomalyDetector(
component_models=[
pyisc.P_PoissonOnesided(1,0), # columns 1 and 0
pyisc.P_Poisson(2,0), # columns 2 and 0
pyisc.P_Gaussian(3) # column 3
],
output_combination_rule=pyisc.cr_max
)
Explanation: Used Anomaly Detector
Create an anomaly detector, using as its first argument the statistical models to be used. Then we use
- a one-sided Poisson distribution for modelling the first frequency column (column 1) (as in the first example),
- a two-sided Poisson distribution for the second frequency column (column 2),
- and a Gaussian (Normal) distribution for the last column (column 3).
Given that we now have more than one variable, it is necessary to also add a method to combine the output from the statistical models, which in this case is the maximum anomaly score of each component model:
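As a rough sketch of what a maximum combination rule does (just the idea, not pyISC internals; the scores are hypothetical):
import numpy as np
component_scores = np.array([0.4, 2.7, 1.1])  # one anomaly score per component model
combined_score = component_scores.max()       # cr_max keeps the most anomalous component (2.7)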
End of explanation
anomaly_detector.fit(data);
Explanation: Train the anomaly detector:
End of explanation
scores = anomaly_detector.anomaly_score(data)
Explanation: Compute the anomaly scores for each data point:
End of explanation
from pandas import DataFrame
df= DataFrame(data[:15], columns=['#Days', 'Freq1','Freq2','Measure'])
df['Anomaly Score'] = scores[:15]
print df.to_string()
Explanation: Anomaly Scores
Now we can print some examples of normal frequencies vs. anomaly scores for the first 15 normal data points:
End of explanation
df= DataFrame(data[-15:], columns=['#Days', 'Freq1','Freq2','Measure'])
df['Anomaly Score'] = scores[-15:]
print df.to_string()
Explanation: The anomalous frequencies vs. anomaly scores for the 15 anomalous data points:
End of explanation
plot(scores, '.');
Explanation: As can be seen above, the anomalous data also have higher anomaly scores than the normal frequencies as it should be.<br/><br/>
This becomes even more visible if we plot the anomaly scores (y-axis) against each data point (x-axis):
End of explanation
score_details = anomaly_detector.anomaly_score_details(data)
df= DataFrame(data[-15:], columns=['#Days', 'Freq1','Freq2','Measure'])
df['Anomaly:Freq1'] = [detail[1][0] for detail in score_details[-15:]] # Anomaly Score of Freq1
df['Anomaly:Freq2'] = [detail[1][1] for detail in score_details[-15:]] # Anomaly Score of Freq2
df['Anomaly:Measure'] = [detail[1][2] for detail in score_details[-15:]] # Anomaly Score of Measure
df['Anomaly Score'] = [detail[0] for detail in score_details[-15:]] # Combined Anomaly Score
df
Explanation: We can also look at the details of each column in terms of their individual anomaly scores:
End of explanation |
7,046 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Model10
Step2: Feature functions(private)
Step3: Feature function(public)
Step4: Utility functions
Step5: GMM
Classifying questions
features
Step7: B. Modeling
Select model
Step8: Training and testing model
Step9: Writing result | Python Code:
import gzip
import pickle
from os import path
from collections import defaultdict
from numpy import sign
"""
Load buzz data as a dictionary.
You can pass the data parameter so that you get only what you need.
"""
def load_buzz(root='../data', data=['train', 'test', 'questions'], format='pklz'):
buzz_data = {}
for ii in data:
file_path = path.join(root, ii + "." + format)
with gzip.open(file_path, "rb") as fp:
buzz_data[ii] = pickle.load(fp)
return buzz_data
Explanation: Model10: GMM
A. Functions
There are four different functions.
Data reader: Read data from file.
Feature functions(private): Functions which extract features are placed here. If you write a new feature function, add it here.
Feature function(public): Only this function is used for feature extraction.
Utility functions: All functions other than those mentioned above are placed here.
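A condensed outline of how these pieces fit together (the full, runnable flow appears in section B below):
# bd = load_buzz()                                        # read train/test/questions
# X, y = featurize(bd, group='train', extra=['avg_pos'])  # build features and targets
# X = select(X, regression_keys)                          # keep only the modelling columns
# ...vectorize X, fit a regressor, then...
# write_result(bd['test'], predictions)                   # dump predictions to guess.csv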
Data reader
End of explanation
from numpy import sign, abs
def _feat_basic(bd, group):
X = []
for item in bd[group].items():
qid = item[1]['qid']
q = bd['questions'][qid]
#item[1]['q_length'] = max(q['pos_token'].keys())
item[1]['q_length'] = len(q['question'].split())
item[1]['category'] = q['category'].lower()
item[1]['answer'] = q['answer'].lower()
X.append(item[1])
return X
def _feat_sign_val(data):
for item in data:
item['sign_val'] = sign(item['position'])
def _get_pos(bd, sign_val=None):
# bd is not bd, bd is bd['train']
unwanted_index = []
pos_uid = defaultdict(list)
pos_qid = defaultdict(list)
for index, key in enumerate(bd):
if sign_val and sign(bd[key]['position']) != sign_val:
unwanted_index.append(index)
else:
pos_uid[bd[key]['uid']].append(bd[key]['position'])
pos_qid[bd[key]['qid']].append(bd[key]['position'])
return pos_uid, pos_qid, unwanted_index
def _get_avg_pos(bd, sign_val=None):
pos_uid, pos_qid, unwanted_index = _get_pos(bd, sign_val)
avg_pos_uid = {}
avg_pos_qid = {}
if not sign_val:
sign_val = 1
for key in pos_uid:
pos = pos_uid[key]
avg_pos_uid[key] = sign_val * (sum(pos) / len(pos))
for key in pos_qid:
pos = pos_qid[key]
avg_pos_qid[key] = sign_val * (sum(pos) / len(pos))
return avg_pos_uid, avg_pos_qid, unwanted_index
def _feat_avg_pos(data, bd, group, sign_val):
avg_pos_uid, avg_pos_qid, unwanted_index = _get_avg_pos(bd['train'], sign_val=sign_val)
if group == 'train':
for index in sorted(unwanted_index, reverse=True):
del data[index]
for item in data:
if item['uid'] in avg_pos_uid:
item['avg_pos_uid'] = avg_pos_uid[item['uid']]
else:
vals = avg_pos_uid.values()
item['avg_pos_uid'] = sum(vals) / float(len(vals))
if item['qid'] in avg_pos_qid:
item['avg_pos_qid'] = avg_pos_qid[item['qid']]
else:
vals = avg_pos_qid.values()
item['avg_pos_qid'] = sum(vals) / float(len(vals))
# Response position can be longer than length of question
if item['avg_pos_uid'] > item['q_length']:
item['avg_pos_uid'] = item['q_length']
if item['avg_pos_qid'] > item['q_length']:
item['avg_pos_qid'] = item['q_length']
Explanation: Feature functions(private)
End of explanation
def featurize(bd, group, sign_val=None, extra=None):
# Basic features
# qid(string), uid(string), position(float)
# answer'(string), 'potistion'(float), 'qid'(string), 'uid'(string)
X = _feat_basic(bd, group=group)
# Some extra features
if extra:
for func_name in extra:
func_name = '_feat_' + func_name
if func_name in ['_feat_avg_pos']:
globals()[func_name](X, bd, group=group, sign_val=sign_val)
else:
globals()[func_name](X)
if group == 'train':
y = []
for item in X:
y.append(item['position'])
del item['position']
return X, y
elif group == 'test':
return X
else:
raise ValueError(group, 'is not the proper type')
Explanation: Feature function(public)
End of explanation
import csv
def select(data, keys):
unwanted = data[0].keys() - keys
for item in data:
for unwanted_key in unwanted:
del item[unwanted_key]
return data
def write_result(test_set, predictions, file_name='guess.csv'):
predictions = sorted([[id, predictions[index]] for index, id in enumerate(test_set.keys())])
predictions.insert(0,["id", "position"])
with open(file_name, "w") as fp:
writer = csv.writer(fp, delimiter=',')
writer.writerows(predictions)
Explanation: Utility functions
End of explanation
%matplotlib inline
import numpy as np
from scipy import linalg
import matplotlib.pyplot as plt
import matplotlib as mpl
from sklearn import mixture
def plot_gmm(X, models, n_components, covariance_type='diag',
figsize=(10, 20), suptitle=None, xlabel=None, ylabel=None):
color_iter = ['r', 'g', 'b', 'c', 'm', 'y', 'k', 'gray', 'pink', 'lime']
plt.figure(figsize=figsize)
plt.suptitle(suptitle, fontsize=20)
for i, model in enumerate(models):
mm = getattr(mixture, model)(n_components=n_components,
covariance_type=covariance_type)
mm.fit(X_pos_qid)
Y = mm.predict(X_pos_qid)
plt.subplot(len(models), 1, 1 + i)
for i, color in enumerate(color_iter):
plt.scatter(X_pos_qid[Y == i, 0], X_pos_qid[Y == i, 1], .7, color=color)
plt.title(model, fontsize=15)
plt.xlabel(xlabel, fontsize=12)
plt.ylabel(ylabel, fontsize=12)
plt.grid()
plt.show()
from collections import UserDict
import numpy as np
class DictDict(UserDict):
def __init__(self, bd):
UserDict.__init__(self)
self._set_bd(bd)
def sub_keys(self):
return self[list(self.keys())[0]].keys()
def select(self, sub_keys):
vals = []
for key in self:
vals.append([self[key][sub_key] for sub_key in sub_keys])
return np.array(vals)
def sub_append(self, sub_key, values):
for index, key in enumerate(self):
self[key][sub_key] = values[index]
class Users(DictDict):
def _set_bd(self, bd):
pos_uid, _, _ = _get_pos(bd['train'], sign_val=None)
for key in pos_uid:
u = np.array(pos_uid[key])
ave_pos_uid = sum(abs(u)) / float(len(u))
acc_ratio_uid = len(u[u > 0]) / float(len(u))
self[key] = {'ave_pos_uid': ave_pos_uid,
'acc_ratio_uid': acc_ratio_uid}
class Questions(DictDict):
def _set_bd(self, bd):
_, pos_qid, _ = _get_pos(bd['train'], sign_val=None)
for key in pos_qid:
u = np.array(pos_qid[key])
ave_pos_qid = sum(abs(u)) / float(len(u))
acc_ratio_qid = len(u[u > 0]) / float(len(u))
self[key] = bd['questions'][key]
self[key]['ave_pos_qid'] = ave_pos_qid
self[key]['acc_ratio_qid'] = acc_ratio_qid
users = Users(load_buzz())
questions = Questions(load_buzz())
X_pos_uid = users.select(['ave_pos_uid', 'acc_ratio_uid'])
X_pos_qid = questions.select(['ave_pos_qid', 'acc_ratio_qid'])
plot_gmm(X_pos_uid,
models=['GMM', 'VBGMM', 'DPGMM'],
n_components=8,
covariance_type='diag',
figsize=(10, 20),
suptitle='Classifying users',
xlabel='abs(position)',
ylabel='accuracy ratio')
plot_gmm(X_pos_qid,
models=['GMM', 'VBGMM', 'DPGMM'],
n_components=8,
covariance_type='diag',
figsize=(10, 20),
suptitle='Classifying questions',
xlabel='abs(position)',
ylabel='accuracy ratio')
# Question category
n_components = 8
gmm = mixture.GMM(n_components=n_components, covariance_type='diag')
gmm.fit(X_pos_qid)
pred_cat_qid = gmm.predict(X_pos_qid)
plt.hist(pred_cat_qid, bins=50, facecolor='g', alpha=0.75)
plt.xlabel("Category number")
plt.ylabel("Count")
plt.title("Question Category: " + str(n_components) + " categories")
plt.grid(True)
plt.show()
# User category
n_components = 8
gmm = mixture.GMM(n_components=n_components, covariance_type='diag')
gmm.fit(X_pos_uid)
pred_cat_uid = gmm.predict(X_pos_uid)
plt.hist(pred_cat_uid, bins=50, facecolor='g', alpha=0.75)
plt.xlabel("Category number")
plt.ylabel("Count")
plt.title("User Category: " + str(n_components) + " categories")
plt.grid(True)
plt.show()
users.sub_append('cat', [str(x) for x in pred_cat_uid])
questions.sub_append('cat', [str(x) for x in pred_cat_qid])
print(users[1])
print(questions[1])
Explanation: GMM
Classifying questions
features: avg_pos, accuracy rate
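A tiny worked example of those two features, following the same arithmetic as the Users/Questions classes above (the signed positions are hypothetical):
import numpy as np
positions = np.array([48, -25, 30])                 # sign encodes whether the buzz was correct
ave_pos = np.abs(positions).sum() / len(positions)  # (48 + 25 + 30) / 3 ~= 34.3
acc_ratio = (positions > 0).sum() / len(positions)  # 2 of 3 positive ~= 0.67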
End of explanation
regression_keys = ['category', 'q_length', 'qid', 'uid', 'answer', 'avg_pos_uid', 'avg_pos_qid']
X_train, y_train = featurize(load_buzz(), group='train', sign_val=None, extra=['sign_val', 'avg_pos'])
X_train = select(X_train, regression_keys)
for index, item in enumerate(X_train):
uid = item['uid']
qid = item['qid']
item['acc_ratio_uid'] = users[uid]['acc_ratio_uid']
item['acc_ratio_qid'] = questions[qid]['acc_ratio_qid']
item['uid'] = str(uid)
item['qid'] = str(qid)
X_train[1]
import multiprocessing
from sklearn import linear_model
from sklearn.cross_validation import train_test_split, cross_val_score
from sklearn.feature_extraction import DictVectorizer
import math
from numpy import abs, sqrt
vec = DictVectorizer()
X_train = vec.fit_transform(X_train)
regressor_names = """
LinearRegression
Ridge
Lasso
ElasticNet
"""
print ("=== Linear Cross validation RMSE scores:")
for regressor in regressor_names.split():
scores = cross_val_score(getattr(linear_model, regressor)(),
X_train, y_train,
cv=10,
scoring='mean_squared_error',
n_jobs=multiprocessing.cpu_count()-1
)
print (regressor, sqrt(abs(scores)).mean())
Explanation: B. Modeling
Select model
End of explanation
def transform(X):
for index, item in enumerate(X):
uid = item['uid']
qid = item['qid']
item['uid'] = str(uid)
item['qid'] = str(qid)
# uid
if uid in users:
item['acc_ratio_uid'] = users[uid]['acc_ratio_uid']
else:
acc = users.select(['acc_ratio_uid'])
item['acc_ratio_uid'] = sum(acc) / float(len(acc))
# qid
if qid in questions:
item['acc_ratio_qid'] = questions[qid]['acc_ratio_qid']
else:
acc = questions.select(['acc_ratio_qid'])
item['acc_ratio_qid'] = sum(acc) / float(len(acc))
regression_keys = ['category', 'q_length', 'qid', 'uid', 'answer', 'avg_pos_uid', 'avg_pos_qid']
X_train, y_train = featurize(load_buzz(), group='train', sign_val=None, extra=['avg_pos'])
X_train = select(X_train, regression_keys)
X_test = featurize(load_buzz(), group='test', sign_val=None, extra=['avg_pos'])
X_test = select(X_test, regression_keys)
transform(X_train)
transform(X_test)
X_train[1]
X_test[1]
vec = DictVectorizer()
vec.fit(X_train + X_test)
X_train = vec.transform(X_train)
X_test = vec.transform(X_test)
regressor = linear_model.LassoCV(n_jobs=3, normalize=True)
regressor.fit(X_train, y_train)
print(regressor.coef_)
print(regressor.alpha_)
predictions = regressor.predict(X_test)
Explanation: Training and testing model
End of explanation
write_result(load_buzz()['test'], predictions)
Explanation: Writing result
End of explanation |
7,047 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cas', 'sandbox-1', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: CAS
Source ID: SANDBOX-1
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:45
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
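For illustration only, a filled-in STRING property follows the same pattern as the template comment (the text below is a hypothetical placeholder, not the actual model description):
# DOC.set_value("Hypothetical example: bulk aerosol scheme coupled to the host atmosphere model.")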
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensionsal forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
7,048 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mean, Median, Mode, and introducing NumPy
Mean vs. Median
Let's create some fake income data, centered around 27,000 with a normal distribution and standard deviation of 15,000, with 10,000 data points. (We'll discuss those terms more later, if you're not familiar with them.)
Then, compute the mean (average) - it should be close to 27,000
Step1: We can segment the income data into 50 buckets, and plot it as a histogram
Step2: Now compute the median - since we have a nice, even distribution it too should be close to 27,000
Step3: Now we'll add Donald Trump into the mix. Darn income inequality!
Step4: The median won't change much, but the mean does
Step5: Mode
Next, let's generate some fake age data for 500 people | Python Code:
import numpy as np
incomes = np.random.normal(27000, 15000, 10000)
np.mean(incomes)
Explanation: Mean, Median, Mode, and introducing NumPy
Mean vs. Median
Let's create some fake income data, centered around 27,000 with a normal distribution and standard deviation of 15,000, with 10,000 data points. (We'll discuss those terms more later, if you're not familiar with them.)
Then, compute the mean (average) - it should be close to 27,000:
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
plt.hist(incomes, 50)
plt.show()
Explanation: We can segment the income data into 50 buckets, and plot it as a histogram:
End of explanation
np.median(incomes)
Explanation: Now compute the median - since we have a nice, even distribution it too should be close to 27,000:
End of explanation
incomes = np.append(incomes, [1000000000])
Explanation: Now we'll add Donald Trump into the mix. Darn income inequality!
End of explanation
np.median(incomes)
np.mean(incomes)
Explanation: The median won't change much, but the mean does:
End of explanation
ages = np.random.randint(18, high=90, size=500)
ages
from scipy import stats
stats.mode(ages)
Explanation: Mode
Next, let's generate some fake age data for 500 people:
End of explanation |
7,049 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
FloPy Creating Layered Quadtree Grids with GRIDGEN
FloPy has a module that can be used to drive the GRIDGEN program. This notebook shows how it works.
The Flopy GRIDGEN module requires that the gridgen executable can be called using subprocess (i.e., gridgen is in your path).
Step1: Setup Base MODFLOW Grid
GRIDGEN works off of a base MODFLOW grid. The following information defines the base grid.
Step2: Create the Gridgen Object
Step3: Add an Optional Active Domain
Cells outside of the active domain will be clipped and not numbered as part of the final grid. If this step is not performed, then all cells will be included in the final grid.
Step4: Refine the Grid
Step5: Plot the Gridgen Input
Step6: Build the Grid
Step7: Plot the Grid
Step8: Create a Flopy ModflowDisu Object
Step9: Intersect Features with the Grid
Step10: Plot Intersected Features | Python Code:
%matplotlib inline
import os
import sys
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import flopy
from flopy.utils.gridgen import Gridgen
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
print('flopy version: {}'.format(flopy.__version__))
Explanation: FloPy Creating Layered Quadtree Grids with GRIDGEN
FloPy has a module that can be used to drive the GRIDGEN program. This notebook shows how it works.
The Flopy GRIDGEN module requires that the gridgen executable can be called using subprocess (i.e., gridgen is in your path).
End of explanation
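Before building anything, it can help to confirm the executable is actually reachable. The check below is a small illustrative addition (shutil.which and the printed messages are not part of the original notebook):
import shutil
# Sanity check: the Gridgen wrapper shells out to the gridgen binary,
# so make sure it can be found on the PATH before going further.
gridgen_exe = shutil.which('gridgen')
if gridgen_exe is None:
    print('gridgen was not found on your PATH; install it or add its folder to PATH')
else:
    print('gridgen found at:', gridgen_exe)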
Lx = 100.
Ly = 100.
nlay = 2
nrow = 51
ncol = 51
delr = Lx / ncol
delc = Ly / nrow
h0 = 10
h1 = 5
top = h0
botm = np.zeros((nlay, nrow, ncol), dtype=np.float32)
botm[1, :, :] = -10.
ms = flopy.modflow.Modflow(rotation=-20.)
dis = flopy.modflow.ModflowDis(ms, nlay=nlay, nrow=nrow, ncol=ncol, delr=delr,
delc=delc, top=top, botm=botm)
Explanation: Setup Base MODFLOW Grid
GRIDGEN works off of a base MODFLOW grid. The following information defines the base grid.
End of explanation
model_ws = os.path.join('.', 'data')
g = Gridgen(dis, model_ws=model_ws)
Explanation: Create the Gridgen Object
End of explanation
# setup the active domain
adshp = os.path.join(model_ws, 'ad0')
adpoly = [[[(0, 0), (0, 60), (40, 80), (60, 0), (0, 0)]]]
# g.add_active_domain(adpoly, range(nlay))
Explanation: Add an Optional Active Domain
Cells outside of the active domain will be clipped and not numbered as part of the final grid. If this step is not performed, then all cells will be included in the final grid.
End of explanation
x = Lx * np.random.random(10)
y = Ly * np.random.random(10)
wells = list(zip(x, y))
g.add_refinement_features(wells, 'point', 3, range(nlay))
rf0shp = os.path.join(model_ws, 'rf0')
river = [[[(-20, 10), (60, 60)]]]
g.add_refinement_features(river, 'line', 3, range(nlay))
rf1shp = os.path.join(model_ws, 'rf1')
g.add_refinement_features(adpoly, 'polygon', 1, range(nlay))
rf2shp = os.path.join(model_ws, 'rf2')
Explanation: Refine the Grid
End of explanation
fig = plt.figure(figsize=(15, 15))
ax = fig.add_subplot(1, 1, 1, aspect='equal')
mm = flopy.plot.ModelMap(model=ms)
mm.plot_grid()
flopy.plot.plot_shapefile(rf2shp, ax=ax, facecolor='yellow', edgecolor='none')
flopy.plot.plot_shapefile(rf1shp, ax=ax, linewidth=10)
flopy.plot.plot_shapefile(rf0shp, ax=ax, facecolor='red', radius=1)
Explanation: Plot the Gridgen Input
End of explanation
g.build(verbose=False)
Explanation: Build the Grid
End of explanation
fig = plt.figure(figsize=(15, 15))
ax = fig.add_subplot(1, 1, 1, aspect='equal')
g.plot(ax, linewidth=0.5)
flopy.plot.plot_shapefile(rf2shp, ax=ax, facecolor='yellow', edgecolor='none', alpha=0.2)
flopy.plot.plot_shapefile(rf1shp, ax=ax, linewidth=10, alpha=0.2)
flopy.plot.plot_shapefile(rf0shp, ax=ax, facecolor='red', radius=1, alpha=0.2)
Explanation: Plot the Grid
End of explanation
mu = flopy.modflow.Modflow(model_ws=model_ws, modelname='mfusg')
disu = g.get_disu(mu)
disu.write_file()
# print(disu)
Explanation: Create a Flopy ModflowDisu Object
End of explanation
adpoly_intersect = g.intersect(adpoly, 'polygon', 0)
print(adpoly_intersect.dtype.names)
print(adpoly_intersect)
print(adpoly_intersect.nodenumber)
well_intersect = g.intersect(wells, 'point', 0)
print(well_intersect.dtype.names)
print(well_intersect)
print(well_intersect.nodenumber)
river_intersect = g.intersect(river, 'line', 0)
print(river_intersect.dtype.names)
# print(river_intersect)
# print(river_intersect.nodenumber)
Explanation: Intersect Features with the Grid
End of explanation
a = np.zeros((g.nodes), dtype=np.int)
a[adpoly_intersect.nodenumber] = 1
a[well_intersect.nodenumber] = 2
a[river_intersect.nodenumber] = 3
fig = plt.figure(figsize=(15, 15))
ax = fig.add_subplot(1, 1, 1, aspect='equal')
g.plot(ax, a=a, masked_values=[0], edgecolor='none', cmap='jet')
flopy.plot.plot_shapefile(rf2shp, ax=ax, facecolor='yellow', alpha=0.25)
Explanation: Plot Intersected Features
End of explanation |
7,050 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Based on similar work with Twin Cities Pioneer Press Schools that Work
Step1: Setting things up
Let's load the data and give it a quick look.
Step2: Checking out correlations
Let's start looking at how variables in our dataset relate to each other so we know what to expect when we start modeling.
Step3: The percentage of students enrolled in free/reduced-price lunch programs is often used as a proxy for poverty.
Step4: Conversely, the education level of a student's parents is often a good predictor of how well a student will do in school.
Step5: Running the regression
Like we did last week, we'll use scikit-learn to run basic single-variable regressions. Let's start by looking at California's Academic Performance index as it relates to the percentage of students, per school, enrolled in free/reduced-price lunch programs.
Step6: In our naive universe where we're only paying attention to two variables -- academic performance and free/reduced lunch -- we can clearly see that some percentage of schools is overperforming the performance that would be expected of them, taking poverty out of the equation.
A handful, in particular, seem to be dramatically overperforming. Let's look at them
Step7: Let's look specifically at Solano Avenue Elementary, which has an API of 922 and 80 percent of students being in the free/reduced lunch program. If you were to use the above regression to predict how well Solano would do, it would look like this | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
%matplotlib inline
Explanation: Based on similar work with Twin Cities Pioneer Press Schools that Work
End of explanation
df = pd.read_csv('data/apib12tx.csv')
df.describe()
Explanation: Setting things up
Let's load the data and give it a quick look.
End of explanation
df.corr()
Explanation: Checking out correlations
Let's start looking at how variables in our dataset relate to each other so we know what to expect when we start modeling.
End of explanation
df.plot(kind="scatter", x="MEALS", y="API12B")
Explanation: The percentage of students enrolled in free/reduced-price lunch programs is often used as a proxy for poverty.
End of explanation
df.plot(kind="scatter", x="AVG_ED", y="API12B")
Explanation: Conversely, the education level of a student's parents is often a good predictor of how well a student will do in school.
End of explanation
data = np.asarray(df[['API12B','MEALS']])
x, y = data[:, 1:], data[:, 0]
lr = LinearRegression()
lr.fit(x, y)
# plot the linear regression line on the scatter plot
lr.coef_
lr.score(x, y)
plt.scatter(x, y, color='blue')
plt.plot(x, lr.predict(x), color='red', linewidth=1)
Explanation: Running the regression
Like we did last week, we'll use scikit-learn to run basic single-variable regressions. Let's start by looking at California's Academic Performance index as it relates to the percentage of students, per school, enrolled in free/reduced-price lunch programs.
End of explanation
df[(df['MEALS'] >= 80) & (df['API12B'] >= 900)]
Explanation: In our naive universe where we're only paying attention to two variables -- academic performance and free/reduced lunch -- we can clearly see that some percentage of schools is overperforming the performance that would be expected of them, taking poverty out of the equation.
A handful, in particular, seem to be dramatically overperforming. Let's look at them:
End of explanation
lr.predict([[80]])  # predict expects a 2-D array: one row with the single MEALS feature (80)
Explanation: Let's look specifically at Solano Avenue Elementary, which has an API of 922 and 80 percent of students being in the free/reduced lunch program. If you were to use the above regression to predict how well Solano would do, it would look like this:
End of explanation |
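As a follow-up sketch that is not part of the original notebook, the same idea of overperformance can be made explicit by ranking every school by the residual between its actual API score and the score the poverty-only regression predicts for it:
# Hypothetical extension: residual = actual API minus the API predicted
# from the free/reduced-price lunch percentage alone.
subset = df[['API12B', 'MEALS']].dropna().copy()
subset['predicted'] = lr.predict(subset[['MEALS']].values)
subset['residual'] = subset['API12B'] - subset['predicted']
subset.sort_values('residual', ascending=False).head(10)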
7,051 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Modules, modules, or: About importing
aliases
you can import just a single class/function/variable, but modules can also have several levels
Step1: individual functions can be imported
Step2: but in another module the same name has a different meaning
Step3: a directory can be added to those in which Python looks for imported modules
Step4: one more example of the same name, here even with the same meaning - but is it the same object?
Step5: one last, but not recommended, piece of special syntax
from xy import *
beware of "namespace pollution" (we do not know everything we are importing)
the "*" form cannot be used for imports inside functions/classes
Step6: what else might interest us (everything - such as help a.k.a. the docstring - is available for programmatic processing)
Step7: ... and now a module of our own? Quite simple...
copy everything you want (the classes solving the Fisher problem from the chapter on objects) into a file such as novy.py
python
import novy
novy.Fisher("data.csv")
updating
we have edited our code, or uploaded a new version
python
from imp import reload
for connoisseurs there is also
- deeper levels have to be "reloaded" by hand (or dreload a.k.a. "deep reload")
- objects of the affected classes have to be created again
Step8: a few more terms
packages
whole directories (the individual files are submodules)
add an (empty) __init__.py
how to use a program as a module? before the part of the code that should not run on import, add the line
.. if __name__=="__main__" | Python Code:
from os import path
path.exists("data.csv")
Explanation: Modules, modules, or: About importing
aliases
you can import just a single class/function/variable, but modules can also have several levels
End of explanation
from os.path import exists
Explanation: individual functions can be imported
End of explanation
from sys import path
path[-5:]
Explanation: but in another module the same name has a different meaning
End of explanation
path.append("/home/jovyan/work/")
path[-5:]
Explanation: a directory can be added to those in which Python looks for imported modules
End of explanation
from math import pi
import numpy as np
pi is np.pi, pi==np.pi
Explanation: one more example of the same name, here even with the same meaning - but is it the same object?
End of explanation
## !! not recommended
#from matplotlib.pyplot import *
## better like this
import numpy as np
import matplotlib.pyplot as pl
Explanation: one last, but not recommended, piece of special syntax
from xy import *
beware of "namespace pollution" (we do not know everything we are importing)
the "*" form cannot be used for imports inside functions/classes
End of explanation
print(np.__version__, np.__file__)
print(np.__doc__[:1000]+"...")
np?
print("modul np obsahuje %i pojmenovanych funkci/proměnných/podmodulů"%len(np.__all__))
np.__all__[10:20]
Explanation: what else might interest us (everything - such as help a.k.a. the docstring - is available for programmatic processing)
End of explanation
from imp import reload
reload(np)
Explanation: ... and now a module of our own? Quite simple...
copy everything you want (the classes solving the Fisher problem from the chapter on objects) into a file such as novy.py
python
import novy
novy.Fisher("data.csv")
updating
we have edited our code, or uploaded a new version
python
from imp import reload
for connoisseurs there is also
- deeper levels have to be "reloaded" by hand (or dreload a.k.a. "deep reload")
- objects of the affected classes have to be created again
End of explanation
# something like this
%matplotlib inline
import fisher
fg=fisher.FisherGraph("data.csv")
fg.fit()
#fg.graph([0,30],"X","Y","Importovano")
Explanation: a few more terms
packages
whole directories (the individual files are submodules)
add an (empty) __init__.py
how to use a program as a module? before the part of the code that should not run on import, add the line
.. if __name__=="__main__":
where modules are searched for: PYTHONPATH or sys.path (can be extended at run time)
headers
executable (on Linux?)
#!/usr/bin/python or
#! /usr/bin/env python
charset (encoding)
# -*- coding: utf-8 -*-
docstring (what you will find in the module)
exercises here
Level 1
create a module solving the Fisher problem (file fisher.py)
import it into a new notebook
Level 2
create a global variable fontsize with the value 25 inside the module
replace the constant with this variable in the appropriate places
Level 3
command line? command line!
(a tiny example module showing these headers and the main guard follows this cell)
End of explanation |
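A minimal sketch of such a module (the file name, function, and message are made up for illustration): save it as, say, mini.py, import it without side effects, or run it directly from the command line.
#! /usr/bin/env python
# -*- coding: utf-8 -*-
"""Tiny demo module: importable without side effects, runnable as a script."""

def greet(name="world"):
    return "Hello, %s!" % name

if __name__ == "__main__":
    # this block runs only when the file is executed directly,
    # not when it is imported as a module
    print(greet())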
7,052 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
If you want to add modules from the /src part of the project.
Step1: If you are doing live changes in the src files you plan to use, try the autoreload extension | Python Code:
import os
import sys
# PROJ_ROOT is assumed to be defined earlier in the notebook as the project root path
sys.path.append(os.path.join(PROJ_ROOT, "src"))
Explanation: If you want to add modules from the /src part of the project.
End of explanation
%load_ext autoreload
%autoreload 1
# now instead of import use %aimport
Explanation: If you are doing live changes in the src files you plan to use, try the autoreload extension
End of explanation |
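A short usage sketch (the module and function names are hypothetical): with %autoreload 1 active, only modules registered through %aimport are re-imported before each cell runs, so edits under src/ are picked up without restarting the kernel.
%aimport data_utils
from data_utils import load_raw  # hypothetical helper living in src/data_utils.py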
7,053 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
Step2: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following
Step5: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
Step8: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint
Step10: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
Step12: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step17: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note
Step20: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling
Step23: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option
Step26: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step29: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step32: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model
Step35: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following
Step37: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
Step38: Hyperparameters
Tune the following parameters
Step40: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
Step42: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
Step45: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 3
sample_id = 42
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
# TODO: Implement Function
return (x-np.min(x))/(np.max(x)-np.min(x))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
# TODO: Implement Function
return np.eye(10)[x]  # labels are always 0-9, so fix the width at 10; max(x)+1 gives inconsistent widths when a batch is missing label 9
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
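If you prefer the approach the hint and the note about saving the encoding map point to, a sketch using sklearn.preprocessing.LabelBinarizer fitted once outside the function looks like this (an alternative to the NumPy one-liner above, assuming scikit-learn is available):
from sklearn import preprocessing
# Fit the encoder once, outside the function, so every call shares the same mapping.
lb = preprocessing.LabelBinarizer()
lb.fit(range(10))
def one_hot_encode_lb(x):
    return lb.transform(x)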
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
return tf.placeholder(tf.float32, [None, *image_shape], name='x')
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
return tf.placeholder(tf.float32, [None, n_classes], name='y')
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
return tf.placeholder(tf.float32, name='keep_prob')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernel size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
weight = tf.Variable(tf.truncated_normal([conv_ksize[0], conv_ksize[1], x_tensor.get_shape().as_list()[-1], conv_num_outputs],
stddev=0.1))
bias = tf.Variable(tf.constant(0.0, shape=[conv_num_outputs]))
strides = [1, conv_strides[0], conv_strides[1], 1]
conv1 = tf.nn.conv2d(x_tensor, weight, strides, padding='SAME')
conv1 = tf.nn.bias_add(conv1, bias)
conv1 = tf.nn.relu(conv1)
conv1 = tf.nn.max_pool(conv1, ksize=[1, pool_ksize[0], pool_ksize[1], 1], strides=[1, pool_strides[0], pool_strides[1], 1],
padding='SAME')
return conv1
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
return tf.reshape(x_tensor, [-1, np.prod(x_tensor.get_shape().as_list()[1:])])
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
return tf.nn.relu(output(x_tensor, num_outputs))
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
weights = tf.Variable(tf.truncated_normal([x_tensor.get_shape().as_list()[1], num_outputs], stddev=0.1))
bias = tf.Variable(tf.truncated_normal([num_outputs], stddev=0.1))
return tf.add(tf.matmul(x_tensor, weights), bias)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
# Convolution and Max Pool Parameters
conv_num_outputs = [32, 32, 64]
conv_ksize = [(3, 3), (3, 3), (3, 3)]
conv_strides = [(1, 1), (1, 1), (1, 1)]
pool_ksize = [(2, 2), (2, 2), (2, 2)]
pool_strides = [(2, 2), (2, 2), (2, 2)]
# Convolution and Max Pool Layers
conv1 = conv2d_maxpool(x, conv_num_outputs[0], conv_ksize[0], conv_strides[0], pool_ksize[0], pool_strides[0])
#conv1 = tf.nn.dropout(conv1, keep_prob-0.1)
conv2 = conv2d_maxpool(conv1, conv_num_outputs[1], conv_ksize[1], conv_strides[1], pool_ksize[1], pool_strides[1])
#conv2 = tf.nn.dropout(conv2, keep_prob+0.1)
conv3 = conv2d_maxpool(conv2, conv_num_outputs[2], conv_ksize[1], conv_strides[1], pool_ksize[1], pool_strides[1])
conv3 = tf.nn.dropout(conv3, keep_prob)
# Apply a Flatten Layer
flat = flatten(conv3)
# Fully Connected Layers
num_outputs = [512, 50]
fc1 = fully_conn(flat, num_outputs[0])
fc1 = tf.nn.dropout(fc1, keep_prob+0.3)
#fc2 = fully_conn(fc1, num_outputs[1])
#fc2 = tf.nn.dropout(fc2, keep_prob+0.2)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
final_output = output(fc1, 10)
# TODO: return output
return final_output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer(0.01).minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
# TODO: Implement Function
session.run(optimizer, feed_dict = {x: feature_batch, y: label_batch, keep_prob: keep_probability})
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
validation_accuracy = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
training_loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
print('Accuracy: {:5.3f}'.format(validation_accuracy),
'Cost: {:5.3f}'.format(training_loss))
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
# TODO: Tune Parameters
epochs = 30
batch_size = 256
keep_probability = 0.50
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or starts overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation |
7,054 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numerical computation with the Monte Carlo method (part one)
TODO
- translate into French the sentences that remained in English
Numerical approximation of a surface with the Monte Carlo method
$\newcommand{\ounk}{{\color{red}{O_1}}}$
$\newcommand{\aunk}{{\color{red}{\mathcal{A}_1}}}$
$\newcommand{\nunk}{{\color{red}{\mathcal{N}_1}}}$
$\newcommand{\okn}{{\color{blue}{O_2}}}$
$\newcommand{\akn}{{\color{blue}{\mathcal{A}_2}}}$
$\newcommand{\nkn}{{\color{blue}{\mathcal{N}_2}}}$
Conceptually, to compute the area $\aunk$ of an object $\ounk$ with the Monte Carlo method, it is enough
Step1: The same principle can be applied to compute a volume.
This very simple method is sometimes very useful for computing the area (or the volume) of complex geometric figures. On the other hand, it assumes the existence of a procedure or function able to tell whether a point fell inside the object $O_1$ or not.
Application to computing integrals
Computing the integral of a function amounts to computing the area between the curve describing that function and the x-axis (areas above the x-axis are added and areas below are subtracted).
Example written in Python
Step2: The function to integrate
Step3: Random points
Step4: Numerical computation of the integral with Monte-Carlo
Step5: The actual integral value
Step6: The error ratio
Step7: Graphical illustration | Python Code:
import numpy as np
import matplotlib.pyplot as plt
t = np.linspace(0., 2. * np.pi, 100)
x = np.cos(t) + np.cos(2. * t)
y = np.sin(t)
N = 100
rand = np.array([np.random.uniform(low=-3, high=3, size=N), np.random.uniform(low=-3, high=3, size=N)]).T
fig, ax = plt.subplots(1, 1, figsize=(7, 7))
ax.plot(rand[:,0], rand[:,1], '.k')
ax.plot(x, y, "-r", linewidth=2)
ax.plot([-3, -3, 3, 3, -3], [-3, 3, 3, -3, -3], "-b", linewidth=2)
ax.set_axis_off()
ax.set_xlim([-4, 4])
ax.set_ylim([-4, 4])
plt.show()
Explanation: Numerical computation with the Monte Carlo method (part one)
TODO
- translate into French the sentences that remained in English
Numerical approximation of a surface with the Monte Carlo method
$\newcommand{\ounk}{{\color{red}{O_1}}}$
$\newcommand{\aunk}{{\color{red}{\mathcal{A}_1}}}$
$\newcommand{\nunk}{{\color{red}{\mathcal{N}_1}}}$
$\newcommand{\okn}{{\color{blue}{O_2}}}$
$\newcommand{\akn}{{\color{blue}{\mathcal{A}_2}}}$
$\newcommand{\nkn}{{\color{blue}{\mathcal{N}_2}}}$
Conceptually, to compute the area $\aunk$ of an object $\ounk$ with the Monte Carlo method, it is enough to:
1. place the object $\ounk$ entirely inside a geometric figure $\okn$ whose area $\mathcal \akn$ is known (for example a square or a rectangle)
2. draw a large number of random points inside this figure $\okn$ (uniform sampling)
3. count the number of points $\nunk$ that fell inside the object $\ounk$ whose area we want to compute
4. compute the ratio $\frac{\nunk}{\nkn}$, where $\nkn$ is the total number of points drawn (multiplying this ratio by 100 gives the percentage of points that fell inside the object $\ounk$)
5. apply this ratio to the area $\mathcal \akn$ of the enclosing figure $\okn$ (the square, rectangle, ... whose area is known) to obtain the desired area $\aunk$: $\aunk \simeq \frac{\nunk}{\nkn} \mathcal \akn$
A short self-contained sketch applying these steps to estimate pi follows this cell.
End of explanation
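As a separate minimal sketch that is not part of the original example, the same five steps estimate the area of the unit disc, i.e. pi, with the inside test reduced to x**2 + y**2 <= 1:
import numpy as np
# Enclosing square [-1, 1] x [-1, 1] has area 4; the fraction of points
# falling inside the disc approximates pi / 4.
n = 100000
pts = np.random.uniform(-1.0, 1.0, size=(n, 2))
inside = (pts[:, 0]**2 + pts[:, 1]**2) <= 1.0
print(4.0 * inside.mean())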
import sympy as sp
sp.init_printing()
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
Explanation: The same principle can be applied to compute a volume.
This very simple method is sometimes very useful for computing the area (or the volume) of complex geometric figures. On the other hand, it assumes the existence of a procedure or function able to tell whether a point fell inside the object $O_1$ or not.
Application to computing integrals
Computing the integral of a function amounts to computing the area between the curve describing that function and the x-axis (areas above the x-axis are added and areas below are subtracted).
Example written in Python
End of explanation
def f(x):
return -x**2 + 3.
Explanation: The function to integrate
End of explanation
N = 100000 # The number of random points
x_lower_bound = -4.0
x_upper_bound = 4.0
y_lower_bound = -16.0
y_upper_bound = 16.0
random_points = np.array([np.random.uniform(low=x_lower_bound, high=x_upper_bound, size=N),
np.random.uniform(low=y_lower_bound, high=y_upper_bound, size=N)]).T
Explanation: Random points
End of explanation
# Points between f and the abscissa
random_points_in_pos = np.array([p for p in random_points if 0 <= p[1] <= f(p[0])])
random_points_in_neg = np.array([p for p in random_points if 0 > p[1] >= f(p[0])])
ratio_pos = float(len(random_points_in_pos)) / float(N)
ratio_neg = float(len(random_points_in_neg)) / float(N)
print('Percentage of "positive" points between f and the abscissa: {:.2f}%'.format(ratio_pos * 100))
print('Percentage of "negative" points between f and the abscissa: {:.2f}%'.format(ratio_neg * 100))
s2 = (x_upper_bound - x_lower_bound) * (y_upper_bound - y_lower_bound)
print("Box surface:", s2)
s1 = ratio_pos * s2 - ratio_neg * s2
print("Function integral (numerical computation using Monte-Carlo):", s1)
Explanation: Numerical computation of the integral with Monte-Carlo
End of explanation
x = sp.symbols("x")
integ = sp.Integral(f(x), (x, x_lower_bound, x_upper_bound))
sp.Eq(integ, integ.doit())
Explanation: The actual integral value
End of explanation
actual_s1 = float(integ.doit())
error = actual_s1 - s1
print("Error ratio = {:.6f}%".format(abs(error / actual_s1) * 100.))
Explanation: The error ratio
End of explanation
fig, ax = plt.subplots(1, 1, figsize=(12, 8))
x_array = np.arange(x_lower_bound, x_upper_bound, 0.01)
y_array = f(x_array)
plt.axis([x_lower_bound, x_upper_bound, y_lower_bound, y_upper_bound])
plt.plot(random_points[:,0], random_points[:,1], ',k')
plt.plot(random_points_in_pos[:,0], random_points_in_pos[:,1], ',r')
plt.plot(random_points_in_neg[:,0], random_points_in_neg[:,1], ',r')
plt.hlines(y=0, xmin=x_lower_bound, xmax=x_upper_bound)
plt.plot(x_array, y_array, '-r', linewidth=2)
plt.show()
Explanation: Graphical illustration
End of explanation |
7,055 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load data
Step1: Get the number of coordinates reported for each network
Step2: Generate random coordinates
The assigned coordinates are generated for each network with a probability equivalent to its volume size compared to the total volume of the brain
Step3: Generate the p-values for each network
Step4: Map the p-values to the template
Step5: FDR correction of the p-values | Python Code:
#seed_data = pd.read_csv('20160128_AD_Decrease_Meta_Christian.csv')
template_036= nib.load('/home/cdansereau/data/template_cambridge_basc_multiscale_nii_sym/template_cambridge_basc_multiscale_sym_scale036.nii.gz')
template_020= nib.load('/home/cdansereau/data/template_cambridge_basc_multiscale_nii_sym/template_cambridge_basc_multiscale_sym_scale020.nii.gz')
template_012= nib.load('/home/cdansereau/data/template_cambridge_basc_multiscale_nii_sym/template_cambridge_basc_multiscale_sym_scale012.nii.gz')
template_007= nib.load('/home/cdansereau/data/template_cambridge_basc_multiscale_nii_sym/template_cambridge_basc_multiscale_sym_scale007.nii.gz')
scale = '36'
flag_dmn = False
if scale == '7':
template = template_007
else:
template = template_036
#seed_data = pd.read_csv('20160404_AD_Decrease_Meta_DMN_nonDMN_Final.csv')
#seed_data = pd.read_csv('20160404_AD_Increase_Meta_DMN_nonDMN_Final.csv')
#seed_data = pd.read_csv('20160205_MCI_Decrease_Meta_DMN_nonDMN_Final.csv')
#seed_data = pd.read_csv('20160204_MCI_Increase_Meta_DMN_nonDMN_Final.csv')
#seed_data = pd.read_csv('20160404_ADMCI_Decrease_Meta_DMN_nonDMN_Final.csv')
seed_data = pd.read_csv('20160404_ADMCI_Increase_Meta_DMN_nonDMN_Final.csv')
if flag_dmn:
#output_stats = 'AD_decrease_scale'+scale+'_stats_seedDMN.mat'
#output_vol = 'AD_decrease_ratio_scale'+scale+'_vol_seedDMN.nii.gz'
#output_stats = 'AD_increase_scale'+scale+'_stats_seedDMN.mat'
#output_vol = 'AD_increase_ratio_scale'+scale+'_vol_seedDMN.nii.gz'
#output_stats = 'MCI_decrease_scale'+scale+'_stats_seedDMN.mat'
#output_vol = 'MCI_decrease_ratio_scale'+scale+'_vol_seedDMN.nii.gz'
#output_stats = 'MCI_increase_scale'+scale+'_stats_seedDMN.mat'
#output_vol = 'MCI_increase_ratio_scale'+scale+'_vol_seedDMN.nii.gz'
#output_stats = 'ADMCI_decrease_scale'+scale+'_stats_seedDMN.mat'
#output_vol = 'ADMCI_decrease_ratio_scale'+scale+'_vol_seedDMN.nii.gz'
output_stats = 'ADMCI_increase_scale'+scale+'_stats_seedDMN.mat'
output_vol = 'ADMCI_increase_ratio_scale'+scale+'_vol_seedDMN.nii.gz'
else:
#output_stats = 'AD_decrease_scale'+scale+'_stats_nonDMN.mat'
#output_vol = 'AD_decrease_ratio_scale'+scale+'_vol_seednonDMN.nii.gz'
#output_stats = 'AD_increase_scale'+scale+'_stats_seednonDMN.mat'
#output_vol = 'AD_increase_ratio_scale'+scale+'_vol_seednonDMN.nii.gz'
#output_stats = 'MCI_decrease_scale'+scale+'_stats_seednonDMN.mat'
#output_vol = 'MCI_decrease_ratio_scale'+scale+'_vol_seednonDMN.nii.gz'
#output_stats = 'MCI_increase_scale'+scale+'_stats_seednonDMN.mat'
#output_vol = 'MCI_increase_ratio_scale'+scale+'_vol_seednonDMN.nii.gz'
#output_stats = 'ADMCI_decrease_scale'+scale+'_stats_seednonDMN.mat'
#output_vol = 'ADMCI_decrease_ratio_scale'+scale+'_vol_seednonDMN.nii.gz'
output_stats = 'ADMCI_increase_scale'+scale+'_stats_seednonDMN.mat'
output_vol = 'ADMCI_increase_ratio_scale'+scale+'_vol_seednonDMN.nii.gz'
seed_data
seed_data[seed_data['Seed_cambridge']==5][['x','y','z']].values.shape
Explanation: Load data
End of explanation
from numpy.linalg import norm
# find the closest network to the coordo
def get_nearest_net(template,world_coor):
list_coord = np.array(np.where(template.get_data()>0))
mni_coord = apply_affine(template.get_affine(),list_coord.T)
distances = norm(mni_coord-np.array(world_coor),axis=1)
#print distances.shape
idx_nearest_net = np.where(distances == np.min(distances))[0][0]
return int(template.get_data()[list_coord[:,idx_nearest_net][0],list_coord[:,idx_nearest_net][1],list_coord[:,idx_nearest_net][2]])
#get_nearest_net(template,[-15,-10,-10])
# Convert from world MNI space to the EPI voxel space
def get_world2vox(template, mni_coord):
return np.round(apply_affine(npl.inv(template.get_affine()),mni_coord)+[1])
network_votes = np.zeros((np.max(template.get_data().flatten()),1))[:,0]
network_votes
# get the voxel coordinates of the MNI seeds
if flag_dmn:
seed_data = seed_data[seed_data['Seed_cambridge']==5]
else:
seed_data = seed_data[seed_data['Seed_cambridge']!=5]
mni_space_targets = seed_data[['x','y','z']].values
vox_corrd = get_world2vox(template,mni_space_targets)
votes = []
n_outofbrain=0
for i in range(vox_corrd.shape[0]):
net_class = template.get_data()[vox_corrd[i,0],vox_corrd[i,1],vox_corrd[i,2]]
if net_class==0:
n_outofbrain+=1
votes.append(get_nearest_net(template,[mni_space_targets[i,0],mni_space_targets[i,1],mni_space_targets[i,2]]))
else:
votes.append(net_class)
print('Out of brain coordinates: '+ str(n_outofbrain))
votes = np.array(votes)
# take one vote for each study only
uni_pmid = np.unique(seed_data['PMID'])
votes.shape
frequency_votes=np.zeros((len(uni_pmid),len(network_votes)))
#for i in range(len(uni_pmid)):
# frequency_votes = np.hstack((frequency_votes,np.unique(votes[(seed_data['PMID']==uni_pmid[i]).values])))
for i in range(len(uni_pmid)):
aa = votes[(seed_data['PMID']==uni_pmid[i]).values]
for j in aa:
frequency_votes[i,j-1] = (aa == j).sum()/float(len(aa))
print frequency_votes
# compile the stats for each network
#for i in range(1,len(network_votes)+1):
# network_votes[i-1] = np.mean(frequency_votes==i)
network_votes = np.mean(frequency_votes,axis=0)
print network_votes
#vox_corrd[np.array(votes)==5,:]
get_nearest_net(template,[-24,-10, 22])
get_nearest_net(template,[17, -14, -22])
def gen1perm(n_seeds,proba):
ratio_votes_1study = np.zeros_like(proba)
perm_votes = np.random.choice(range(0,len(proba)),size=(n_seeds,1),p=proba)
for j in perm_votes:
ratio_votes_1study[j] = (perm_votes == j).sum()/float(len(perm_votes))
return ratio_votes_1study
# check if the proba is respected
#print proba_networks
#gen1perm(10000,proba_networks)
#ange(0,len(proba_networks))
Explanation: Get the number of coordinates reported for each network
End of explanation
'''
from numpy.random import permutation
def permute_table(frequency_votes,n_iter):
h0_results = []
for n in range(n_iter):
perm_freq = frequency_votes.copy()
#print perm_freq
for i in range(perm_freq.shape[0]):
perm_freq[i,:] = permutation(perm_freq[i,:])
#print perm_freq
h0_results.append(np.mean(perm_freq,axis=0))
return np.array(h0_results).T
'''
def compute_freq(votes,data_ratio_votes,seed_data,proba):
# take one vote for each study only
uni_pmid = np.unique(seed_data['PMID'])
ratio_votes=np.zeros((data_ratio_votes.shape[0],data_ratio_votes.shape[1],10000))
for idx_perm in range(ratio_votes.shape[-1]):
# frequency_votes = np.hstack((frequency_votes,np.unique(votes[(seed_data['PMID']==uni_pmid[i]).values])))
for i in range(len(uni_pmid)):
aa = votes[(seed_data['PMID']==uni_pmid[i]).values]
n_seeds = len(aa)
ratio_votes[i,:,idx_perm] = gen1perm(n_seeds,proba)
#print ratio_votes.shape
# compute the frequency
freq_data = np.mean(ratio_votes,axis=0)
for i in range(freq_data.shape[0]):
freq_data[i,:] = np.sort(freq_data[i,:])[::-1]
return freq_data
# Total volume of the brain
total_volume = np.sum(template.get_data()>0)
# compute the proba of each network
proba_networks=[]
for i in range(1,len(network_votes)+1):
proba_networks.append(np.sum(template.get_data()==i)/(total_volume*1.))
proba_networks = np.array(proba_networks)
print np.sum(proba_networks)
print proba_networks
# generate random values
'''
def gen_rnd_hits(proba,n_seeds):
results_h0 = np.random.choice(range(0,len(proba)),size=(n_seeds,1000),p=proba)
#results_h0 = permute_table(frequency_votes,1000)
print results_h0.shape
ditributions = []
for i in range(frequency_votes.shape[1]):
results_h0[i,:] = np.sort(results_h0[i,:])[::-1]
#ditributions.append(one_way_pdf)
#return ditributions
return results_h0
'''
#dist_data = gen_rnd_hits(proba_networks,np.sum(network_votes))
dist_data = compute_freq(votes,frequency_votes,seed_data,proba_networks)
plt.figure()
plt.hist(dist_data[0],bins=np.arange(0,1,.01))
plt.figure()
plt.plot(dist_data[0].T)
Explanation: Generate random coordinates
The assigned coordinates are generated for each network with a probability equivalent to its volume size compared to the total volume of the brain.
End of explanation
def getpval_old(nhit,dist_data):
distribution_val = np.histogram(dist_data,bins=np.arange(0,1,0.01))
idx_bin = np.where((distribution_val[1]>=round(nhit,2)) & (distribution_val[1]<=round(nhit,2)))[0][0]
#print distribution_val[1]
return (np.sum(distribution_val[0][idx_bin:-1])+1)/(dist_data.shape[0]+1.)
def getpval(target,dist_data):
dist_sorted = np.sort(np.copy(dist_data))
b = np.sum(dist_sorted > target)
#print b
#print dist_data.shape[0]
#print distribution_val[1]
return ((b+1.)/(dist_data.shape[0]+1.))
print(network_votes)
pval_results=[]
for i in range(0,len(dist_data)):
pval_results.append(getpval(network_votes[i],dist_data[i,:]))
print(pval_results)
plt.figure()
plt.bar(np.arange(1,len(pval_results)+1),pval_results,width=0.5,align='center')
plt.xlabel('Networks')
plt.ylabel('p-value')
Explanation: Generate the p-values for each network
End of explanation
from proteus.matrix import tseries as ts
hitfreq_vol = ts.vec2map(network_votes,template)
pval_vol = ts.vec2map(1-np.array(pval_results),template)
plt.figure()
plotting.plot_stat_map(hitfreq_vol,cut_coords=(0,0,0),draw_cross=False)
plt.figure()
plotting.plot_stat_map(pval_vol,cut_coords=(0,0,0),draw_cross=False)
Explanation: Map the p-values to the template
End of explanation
# correct for FDR (false discovery rate)
from statsmodels.sandbox.stats.multicomp import fdrcorrection0
fdr_test,fdr_pval=fdrcorrection0(pval_results,alpha=0.05)
print(network_votes)
print(fdr_test)
print(fdr_pval)
# save the results
path_output = '/home/cdansereau/git/Projects/metaad/maps_results/'
stats_results = {'Hits':network_votes ,'pvalues':pval_results,'fdr_test':fdr_test,'fdr_pval':fdr_pval,'n_outofbrain':n_outofbrain}
scipy.io.savemat(path_output + output_stats, stats_results)
hitfreq_vol.to_filename(os.path.join(path_output,output_vol))
#hitfreq_vol.to_filename(os.path.join('/home/cdansereau/git/Projects/metaad/maps_results/','AD_pval_vol.nii.gz'))
Explanation: FDR correction of the p-values
End of explanation |
7,056 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualize Raw data
Step1: The visualization module (mne.viz) contains all the plotting functions that work in combination with MNE data structures. Usually the easiest way to use them is to call a method of the data container.
Step2: The channels are color coded by channel type. Generally MEG channels are
colored in different shades of blue, whereas EEG channels are black. The
scrollbar on right side of the browser window also tells us that two of the
channels are marked as bad. Bad channels are color coded gray. By
clicking the lines or channel names on the left, you can mark or unmark a bad
channel interactively. You can use +/- keys to adjust the scale (also = works
for magnifying the data). Note that the initial scaling factors can be set
with parameter scalings. If you don't know the scaling factor for
channels, you can automatically set them by passing scalings='auto'. With
pageup/pagedown and home/end keys you can adjust the amount of data
viewed at once.
Drawing annotations
You can enter annotation mode by pressing the 'a' key. In annotation mode you
can mark segments of data (and modify existing annotations) with the left
mouse button. You can use the description of any existing annotation or
create a new description by typing when the annotation dialog is active.
Notice that the description starting with the keyword 'bad' means that
the segment will be discarded when epoching the data. Existing annotations
can be deleted with the right mouse button. Annotation mode is exited by
pressing 'a' again or closing the annotation window. See also mne.Annotations and the documentation on marking bad segments.
Step3: We read the events from a file and passed it as a parameter when calling the
method. The events are plotted as vertical lines so you can see how they
align with the raw data.
We can check where the channels reside with plot_sensors. Notice that
this method (along with many other MNE plotting functions) is callable using
any MNE data container where the channel information is available.
Step4: We used ch_groups='position' to color code the different regions. It uses
the same algorithm for dividing the regions as order='position' of raw.plot. You can also pass a list of picks to color any channel group with different colors.
Step5: The first three projectors that we see are the SSP vectors from empty room
measurements to compensate for the noise. The fourth one is the average EEG
reference. These are already applied to the data and can no longer be
removed. The next six are the EOG projections that we added. Every data
channel type has two projection vectors each. Let's try the raw browser
again.
Step6: Now click the proj button at the lower right corner of the browser
window. A selection dialog should appear, where you can toggle the projectors
on and off. Notice that the first four are already applied to the data and
toggling them does not change the data. However the newly added projectors
modify the data to get rid of the EOG artifacts. Note that toggling the
projectors here doesn't actually modify the data. This is purely for visually
inspecting the effect. See mne.io.Raw.del_proj to actually remove the projectors.
Step7: Plotting channel-wise power spectra is just as easy. The layout is inferred
from the data by default when plotting topo plots. This works for most data,
but it is also possible to define the layouts by hand. Here we select a
layout with only magnetometer channels and plot it. Then we plot the channel
wise spectra of first 30 seconds of the data. | Python Code:
import os.path as op
import numpy as np
import mne
data_path = op.join(mne.datasets.sample.data_path(), 'MEG', 'sample')
raw = mne.io.read_raw_fif(op.join(data_path, 'sample_audvis_raw.fif'),
preload=True)
raw.set_eeg_reference('average', projection=True) # set EEG average reference
events = mne.read_events(op.join(data_path, 'sample_audvis_raw-eve.fif'))
Explanation: Visualize Raw data
End of explanation
raw.plot(block=True, lowpass=40)
Explanation: The visualization module (:mod:mne.viz) contains all the plotting functions
that work in combination with MNE data structures. Usually the easiest way to
use them is to call a method of the data container. All of the plotting
method names start with plot. If you're using Ipython console, you can
just write raw.plot and ask the interpreter for suggestions with a
tab key.
To visually inspect your raw data, you can use the python equivalent of
mne_browse_raw.
End of explanation
raw.plot(butterfly=True, group_by='position')
Explanation: The channels are color coded by channel type. Generally MEG channels are
colored in different shades of blue, whereas EEG channels are black. The
scrollbar on right side of the browser window also tells us that two of the
channels are marked as bad. Bad channels are color coded gray. By
clicking the lines or channel names on the left, you can mark or unmark a bad
channel interactively. You can use +/- keys to adjust the scale (also = works
for magnifying the data). Note that the initial scaling factors can be set
with parameter scalings. If you don't know the scaling factor for
channels, you can automatically set them by passing scalings='auto'. With
pageup/pagedown and home/end keys you can adjust the amount of data
viewed at once.
Drawing annotations
You can enter annotation mode by pressing the 'a' key. In annotation mode you
can mark segments of data (and modify existing annotations) with the left
mouse button. You can use the description of any existing annotation or
create a new description by typing when the annotation dialog is active.
Notice that the description starting with the keyword 'bad' means that
the segment will be discarded when epoching the data. Existing annotations
can be deleted with the right mouse button. Annotation mode is exited by
pressing 'a' again or closing the annotation window. See also
:class:mne.Annotations and marking_bad_segments. To see all the
interactive features, hit ? key or click help in the lower left
corner of the browser window.
<div class="alert alert-danger"><h4>Warning</h4><p>Annotations are modified in-place immediately at run-time.
Deleted annotations cannot be retrieved after deletion.</p></div>
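If you prefer to script this rather than draw it, here is a minimal sketch (not from the original tutorial; the onset, duration and description values are arbitrary) of adding the same kind of annotation programmatically:
```python
# Any segment whose description starts with 'bad' is dropped when epoching,
# exactly as described above.
annot = mne.Annotations(onset=[40.0], duration=[2.0], description=['bad_blink'])
raw.set_annotations(annot)  # on older MNE versions: raw.annotations = annot
```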
The channels are sorted by channel type by default. You can use the
group_by parameter of :func:raw.plot <mne.io.Raw.plot> to group the
channels in a different way. group_by='selection' uses the same channel
groups as MNE-C's mne_browse_raw (see CACCJEJD). The selections are
defined in mne-python/mne/data/mne_analyze.sel and by modifying the
channels there, you can define your own selection groups. Notice that this
also affects the selections returned by :func:mne.read_selection. By
default the selections only work for Neuromag data, but
group_by='position' tries to mimic this behavior for any data with sensor
positions available. The channels are grouped by sensor positions to 8 evenly
sized regions. Notice that for this to work effectively, all the data
channels in the channel array must be present. The order parameter allows
to customize the order and select a subset of channels for plotting (picks).
Here we use the butterfly mode and group the channels by position. To toggle
between regular and butterfly modes, press 'b' key when the plotter window is
active. Notice that group_by also affects the channel groupings in
butterfly mode.
End of explanation
raw.plot_sensors(kind='3d', ch_type='mag', ch_groups='position')
Explanation: We read the events from a file and passed it as a parameter when calling the
method. The events are plotted as vertical lines so you can see how they
align with the raw data.
We can check where the channels reside with plot_sensors. Notice that
this method (along with many other MNE plotting functions) is callable using
any MNE data container where the channel information is available.
End of explanation
projs = mne.read_proj(op.join(data_path, 'sample_audvis_eog-proj.fif'))
raw.add_proj(projs)
raw.plot_projs_topomap()
Explanation: We used ch_groups='position' to color code the different regions. It uses
the same algorithm for dividing the regions as order='position' of
:func:raw.plot <mne.io.Raw.plot>. You can also pass a list of picks to
color any channel group with different colors.
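As a hedged illustration of that last point (the channel indices below are arbitrary and only serve as an example):
```python
# ch_groups also accepts an array of channel-index lists instead of the named
# 'position' / 'selection' groupings.
raw.plot_sensors(ch_type='mag', ch_groups=[[0, 1, 2, 3], [4, 5, 6, 7]])
```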
Now let's add some ssp projectors to the raw data. Here we read them from a
file and plot them.
End of explanation
raw.plot()
Explanation: The first three projectors that we see are the SSP vectors from empty room
measurements to compensate for the noise. The fourth one is the average EEG
reference. These are already applied to the data and can no longer be
removed. The next six are the EOG projections that we added. Every data
channel type has two projection vectors each. Let's try the raw browser
again.
End of explanation
raw.plot_psd(tmax=np.inf, average=False)
Explanation: Now click the proj button at the lower right corner of the browser
window. A selection dialog should appear, where you can toggle the projectors
on and off. Notice that the first four are already applied to the data and
toggling them does not change the data. However the newly added projectors
modify the data to get rid of the EOG artifacts. Note that toggling the
projectors here doesn't actually modify the data. This is purely for visually
inspecting the effect. See :func:mne.io.Raw.del_proj to actually remove the
projectors.
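A short sketch of that distinction (not part of the original flow, so it works on a copy; index 4 is assumed to be the first added EOG projector, based on the ordering described above):
```python
raw_tmp = raw.copy()
raw_tmp.del_proj(4)    # actually remove one of the added EOG projectors
raw_tmp.apply_proj()   # permanently apply the remaining active projectors
```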
Raw container also lets us easily plot the power spectra over the raw data.
Here we plot the data using spatial_colors to map the line colors to
channel locations (default in versions >= 0.15.0). Another option is to use the
average (default in < 0.15.0). See the API documentation for more info.
End of explanation
layout = mne.channels.read_layout('Vectorview-mag')
layout.plot()
raw.plot_psd_topo(tmax=30., fmin=5., fmax=60., n_fft=1024, layout=layout)
Explanation: Plotting channel-wise power spectra is just as easy. The layout is inferred
from the data by default when plotting topo plots. This works for most data,
but it is also possible to define the layouts by hand. Here we select a
layout with only magnetometer channels and plot it. Then we plot the channel
wise spectra of first 30 seconds of the data.
End of explanation |
7,057 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
lasio uses the logging module to log warnings and other information when manipulating LAS files.
Step1: Sometimes you may want more or less information shown to you when you are reading LAS files with lasio.
By default in a Jupyter Notebook the logging level is set to WARNING, so you will only see a certain class of messages
Step2: To get more information when loading a file, you can set the logging level to INFO. First, instantiate the root logger with a basic configuration
Step3: Then get the lasio logger object and set the logging level to INFO
Step4: To get more information, you can set the logging level to DEBUG
Step5: To see all of lasio's logger messages (for development purposes), use TRACE_LASIO. It's not obvious in the example below, but this shows a message for each line of the LAS file, e.g.
Step6: One strategy for suppressing logging messages is to set the logger level to a very high level, such that only messages with a CRITICAL designation are shown
Step7: In that case, no messages were logged since no CRITICAL issues were encountered.
Just to prove that the LAS file loaded, even though no messages were shown, here's a header item | Python Code:
import logging
import lasio
Explanation: lasio uses the logging module to log warnings and other information when manipulating LAS files.
End of explanation
l = lasio.read("../tests/examples/sample.las")
Explanation: Sometimes you may want more or less information shown to you when you are reading LAS files with lasio.
By default in a Jupyter Notebook the logging level is set to WARNING, so you will only see a certain class of messages:
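A quick way to confirm which level is in effect for lasio's logger (plain standard-library logging, nothing lasio-specific; shown here only as a hedged aside):
```python
import logging
logging.getLogger("lasio").getEffectiveLevel() == logging.WARNING  # True at this point
```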
End of explanation
logging.basicConfig()
Explanation: To get more information when loading a file, you can set the logging level to INFO. First, instantiate the root logger with a basic configuration:
End of explanation
lasio_logger = logging.getLogger("lasio")
lasio_logger.setLevel(logging.INFO)
l = lasio.read('../tests/examples/sample.las')
Explanation: Then get the lasio logger object and set the logging level to INFO:
End of explanation
lasio_logger.setLevel(logging.DEBUG)
l = lasio.read('../tests/examples/sample.las')
Explanation: To get more information, you can set the logging level to DEBUG:
End of explanation
lasio_logger.setLevel(logging.TRACE_LASIO)
l = lasio.read('../tests/examples/sample.las')
Explanation: To see all of lasio's logger messages (for development purposes), use TRACE_LASIO. It's not obvious in the example below, but this shows a message for each line of the LAS file, e.g.:
DEBUG:lasio.las:Reading data section ~A DEPTH DT RHOB NPHI SFLU SFLA ILM ILD
TRACE_LASIO:lasio.reader:Line 44: 8 items counted in '1670.000 123.450 2550.000 0.450 123.450 123.450 110.200 105.600'
TRACE_LASIO:lasio.reader:Line 45: 8 items counted in '1669.875 123.450 2550.000 0.450 123.450 123.450 110.200 105.600'
TRACE_LASIO:lasio.reader:Line 46: 8 items counted in '1669.750 123.450 2550.000 0.450 123.450 123.450 110.200 105.600'
DEBUG:lasio.reader:Consistently found 8 columns
This will significantly slow down your code, so only do this if you need to.
End of explanation
lasio_logger.setLevel(logging.CRITICAL)
l = lasio.read('../tests/examples/sample.las')
Explanation: One strategy for suppressing logging messages is to set the logger level to a very high level, such that only messages with a CRITICAL designation are shown:
End of explanation
l.header['Well'].SRVC
Explanation: In that case, no messages were logged since no CRITICAL issues were encountered.
Just to prove that the LAS file loaded, even though no messages were shown, here's a header item:
End of explanation |
7,058 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Automatic Differentiation
This example demonstrates automatic differentiation using both an operator overloading method and a source code transformation method. The function we will use is reasonably complex. It is a routine from my blade element momentum code (available here). The full code is much more involved, but we will focus on just one function to keep things simpler. The actual routine is written in Fortran but I've converted it into Python for this demonstration. The function has several inputs, and three outputs. For simplicity we will focus on just the derivatives of the first output, although all are available.
Step2: Just for convenience, we are going to wrap this function. Our wrapper will take two inputs
Step3: Now we setup some values for the inputs and parameters and put them into a variable array and an parameter array. Also we will save n which is the number of variables in x.
Step4: First, let's find the derivatives of the first output (fzero) with respect to x using finite differencing. We already know how to do this.
Step5: Now let's compute exact derivatives using automatic differentiation (AD). We will use the algopy module in Python. This is an operator overloading method. If you are using Matlab there are not any AD methods built in, but you can find some 3rd party tools.
To use algopy properly we are need to import overloaded versions of the functions we are using (sin, cos, etc.). These overloaded versions will be able to keep track of partial derivatives through a chain rule.
Step6: That's it! We have numerically exact derivatives now for our output w.r.t all inputs in x. (We will show comparisons in accuracy at the end.
algopy is pretty easy to use, but I rarely use it myself for two main reasons. 1) It's very slow. Overloaded methods are already slow, and pure Python itself is slow, so for large problems there is a significant slow down. 2) algopy can handle most things in numpy but it's not as versatile and powerful as some other tools. The tool I use most frequently for AD is Tapenade, which works for Fortran and C code. This is nice because Python can call Fortran and C code fairly easily. I use Tapenade because 1) Fortran is fast and callable from Python so we can move computational bottlenecks to Fortran and still have an easy to use wrapper in Python. Tapenade also uses a source code transformation method, which keeps the AD part fast as well. 2) Tapenade is powerful and can handle forward and reverse mode, vectorized modes, loops, etc.
The Fortran version of the function at the beginning is reproduced below
Step7: This approach is a little more work, but it's much faster and not difficult once you've done it a few times.
How do we check that we did it correctly? Comparing against finite difference is ok, but the best way is to compare against complex step if possible because it is also exact.
To compute the gradients using complex step we need to import complex versions of our functions. We will also need to redefine the absolute value function because the complex version of absolute value$^1$ is the square root of the sum of squares of the real and imaginary part, which is not what we want. You can find more details on functions that need to be overloaded in complex step here.
$^1$ Recall that absolute value is not differentiable at 0, and is generally best to avoid if possible. In this particular code it is fine because I know that the argument will never be zero. It can be negative or positive depending on the operating conditions, but will never cross over to zero.
Step8: Let's see how we did. We will compare our errors relative to complex step. First finite differencing
Step9: The errors are pretty small, except for the third entry is really bad. If we use a different step size for that one entry, we can do a little better. It turns out the function is very insensitive to Rhub at this point and so getting an accurate gradient with FD is difficult.
Let's now look at the two AD methods | Python Code:
from math import pi
import numpy as np
from math import sin, cos, acos, exp, sqrt
def inductionFactors(r, chord, Rhub, Rtip, phi, cl, cd, B,
Vx, Vy, useCd, hubLoss, tipLoss, wakerotation):
    """Computes induction factors and residual error at a given location
    on the blade.  Full details on inputs/outputs omitted here."""
sigma_p = B/2.0/pi*chord/r
sphi = sin(phi)
cphi = cos(phi)
# resolve into normal and tangential forces
if not useCd:
cn = cl*cphi
ct = cl*sphi
else:
cn = cl*cphi + cd*sphi
ct = cl*sphi - cd*cphi
# Prandtl's tip and hub loss factor
Ftip = 1.0
if tipLoss:
factortip = B/2.0*(Rtip - r)/(r*abs(sphi))
Ftip = 2.0/pi*acos(exp(-factortip))
Fhub = 1.0
if hubLoss:
factorhub = B/2.0*(r - Rhub)/(Rhub*abs(sphi))
Fhub = 2.0/pi*acos(exp(-factorhub))
F = Ftip * Fhub
# bem parameters
k = sigma_p*cn/4.0/F/sphi/sphi
kp = sigma_p*ct/4.0/F/sphi/cphi
# compute axial induction factor
if phi > 0.0: # momentum/empirical
# update axial induction factor
if k <= 2.0/3.0: # momentum state
a = k/(1+k)
else: # Glauert(Buhl) correction
g1 = 2.0*F*k - (10.0/9-F)
g2 = 2.0*F*k - (4.0/3-F)*F
g3 = 2.0*F*k - (25.0/9-2*F)
if abs(g3) < 1e-6: # avoid singularity
a = 1.0 - 1.0/2.0/sqrt(g2)
else:
a = (g1 - sqrt(g2)) / g3
else: # propeller brake region (a and ap not directly used but update anyway)
if k > 1.0:
a = k/(k-1.0)
else:
a = 0.0 # dummy value
# compute tangential induction factor
ap = kp/(1.0-kp)
if not wakerotation:
ap = 0.0
kp = 0.0
# error function
lambda_r = Vy/Vx
if phi > 0: # momentum/empirical
fzero = sphi/(1.0-a) - cphi/lambda_r*(1.0-kp)
else: # propeller brake region
fzero = sphi*(1.0-k) - cphi/lambda_r*(1.0-kp)
return fzero, a, ap
Explanation: Automatic Differentiation
This example demonstrates automatic differentiation using both an operator overloading method and a source code transformation method. The function we will use is reasonably complex. It is a routine from my blade element momentum code (available here). The full code is much more involved, but we will focus on just one function to keep things simpler. The actual routine is written in Fortran but I've converted it into Python for this demonstration. The function has several inputs, and three outputs. For simplicity we will focus on just the derivatives of the first output, although all are available.
End of explanation
# wrap function
def function(x, params):
# unpack variables
r, chord, Rhub, Rtip, phi, cl, cd, Vx, Vy = x
B, useCd, hubLoss, tipLoss, wakerotation = params
# call the original function
return inductionFactors(r, chord, Rhub, Rtip, phi, cl, cd, B,
Vx, Vy, useCd, hubLoss, tipLoss, wakerotation)
Explanation: Just for convenience, we are going to wrap this function. Our wrapper will take two inputs: x and params. The x vector will contain all the variables that we want to take derivatives with respect to. The params are parameters that do not change during a simulation and so we don't need derivatives with respect to them (these are things like B the number of blades, and various boolean options like useCd).
End of explanation
# setup inputs
r = 0.5
chord = 0.1
Rhub = 0.1
Rtip = 1.0
phi = 0.2
cl = 0.3
cd = 0.002
B = 3
Vx = 1.0
Vy = 5.0
useCd = True
hubLoss = True
tipLoss = True
wakerotation = True
x = np.array([r, chord, Rhub, Rtip, phi, cl, cd, Vx, Vy])
params = np.array([B, useCd, hubLoss, tipLoss, wakerotation])
n = len(x)
Explanation: Now we set up some values for the inputs and parameters and put them into a variable array and a parameter array. Also we will save n which is the number of variables in x.
End of explanation
# ------ finite difference --------
output, a, ap = function(x, params) # we are ignoring the other outputs although we could easily get their derivatives as well
g_fd = np.zeros(n) # initialize gradient vector for finite difference
for i in range(n): # iterate across all vars
# step size
step = 1e-6*x[i]
# take a step
xplus = np.copy(x)
xplus[i] += step
output_plus, a, ap = function(xplus, params)
g_fd[i] = (output_plus - output) / step
Explanation: First, let's find the derivatives of the first output (fzero) with respect to x using finite differencing. We already know how to do this.
End of explanation
# You can ignore this. I'm just ignoring a printed FutureWarning about a change affecting something internal to algopy
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)
from algopy import UTPM # just the name of the algorithm (stands for univariate Taylor propagation of matrices)
from algopy import sin, cos, exp, sqrt # overloaded versions of functions we use
from algopy import arccos as acos # need to rename b.c. using the math version (acos) whereas numpy uses arccos
# create an algopy version of x
x_algopy = UTPM.init_jacobian(x)
# create an algopy version of outputs
output, a, ap = function(x_algopy, params)
# extract the gradients
g_ad_oo = UTPM.extract_jacobian(output) # could call again for the other outputs
Explanation: Now let's compute exact derivatives using automatic differentiation (AD). We will use the algopy module in Python. This is an operator overloading method. If you are using Matlab there are not any AD methods built in, but you can find some 3rd party tools.
To use algopy properly we need to import overloaded versions of the functions we are using (sin, cos, etc.). These overloaded versions will be able to keep track of partial derivatives through a chain rule.
End of explanation
from _bem import inductionfactors_dv
# get derivative of each input
I = np.eye(n)
dr = I[0, :]
dchord = I[1, :]
dRhub = I[2, :]
dRtip = I[3, :]
dphi = I[4, :]
dcl = I[5, :]
dcd = I[6, :]
dVx = I[7, :]
dVy = I[8, :]
fzero, a, ap, doutput_dx, da_dx, dap_dx = inductionfactors_dv(r, chord, Rhub, Rtip,
phi, cl, cd, B, Vx, Vy, dr, dchord, dRhub, dRtip, dphi, dcl, dcd, dVx, dVy)
# rename the gradient
g_ad_sc = doutput_dx
Explanation: That's it! We have numerically exact derivatives now for our output w.r.t. all inputs in x. (We will show comparisons in accuracy at the end.)
algopy is pretty easy to use, but I rarely use it myself for two main reasons. 1) It's very slow. Overloaded methods are already slow, and pure Python itself is slow, so for large problems there is a significant slow down. 2) algopy can handle most things in numpy but it's not as versatile and powerful as some other tools. The tool I use most frequently for AD is Tapenade, which works for Fortran and C code. This is nice because Python can call Fortran and C code fairly easily. I use Tapenade because 1) Fortran is fast and callable from Python so we can move computational bottlenecks to Fortran and still have an easy to use wrapper in Python. Tapenade also uses a source code transformation method, which keeps the AD part fast as well. 2) Tapenade is powerful and can handle forward and reverse mode, vectorized modes, loops, etc.
The Fortran version of the function at the beginning is reproduced below:
```Fortran
subroutine inductionFactors(r, chord, Rhub, Rtip, phi, cl, cd, B, &
Vx, Vy, useCd, hubLoss, tipLoss, wakerotation, &
fzero, a, ap)
implicit none
integer, parameter :: dp = kind(0.d0)
! in
real(dp), intent(in) :: r, chord, Rhub, Rtip, phi, cl, cd
integer, intent(in) :: B
real(dp), intent(in) :: Vx, Vy
logical, intent(in) :: useCd, hubLoss, tipLoss, wakerotation
!f2py logical, optional, intent(in) :: useCd = 1, hubLoss = 1, tipLoss = 1, wakerotation = 1
! out
real(dp), intent(out) :: fzero, a, ap
! local
real(dp) :: pi, sigma_p, sphi, cphi, lambda_r
real(dp) :: factortip, Ftip, factorhub, Fhub
real(dp) :: k, kp, cn, ct, F
real(dp) :: g1, g2, g3
! constants
pi = 3.1415926535897932_dp
sigma_p = B/2.0_dp/pi*chord/r
sphi = sin(phi)
cphi = cos(phi)
! resolve into normal and tangential forces
if ( .not. useCd ) then
cn = cl*cphi
ct = cl*sphi
else
cn = cl*cphi + cd*sphi
ct = cl*sphi - cd*cphi
end if
! Prandtl's tip and hub loss factor
Ftip = 1.0_dp
if ( tipLoss ) then
factortip = B/2.0_dp*(Rtip - r)/(r*abs(sphi))
Ftip = 2.0_dp/pi*acos(exp(-factortip))
end if
Fhub = 1.0_dp
if ( hubLoss ) then
factorhub = B/2.0_dp*(r - Rhub)/(Rhub*abs(sphi))
Fhub = 2.0_dp/pi*acos(exp(-factorhub))
end if
F = Ftip * Fhub
! bem parameters
k = sigma_p*cn/4.0_dp/F/sphi/sphi
kp = sigma_p*ct/4.0_dp/F/sphi/cphi
! compute axial induction factor
if (phi > 0) then ! momentum/empirical
! update axial induction factor
if (k <= 2.0_dp/3.0) then ! momentum state
a = k/(1+k)
else ! Glauert(Buhl) correction
g1 = 2.0_dp*F*k - (10.0_dp/9-F)
g2 = 2.0_dp*F*k - (4.0_dp/3-F)*F
g3 = 2.0_dp*F*k - (25.0_dp/9-2*F)
if (abs(g3) < 1e-6_dp) then ! avoid singularity
a = 1.0_dp - 1.0_dp/2.0/sqrt(g2)
else
a = (g1 - sqrt(g2)) / g3
end if
end if
else ! propeller brake region (a and ap not directly used but update anyway)
if (k > 1) then
a = k/(k-1)
else
a = 0.0_dp ! dummy value
end if
end if
! compute tangential induction factor
ap = kp/(1-kp)
if (.not. wakerotation) then
ap = 0.0_dp
kp = 0.0_dp
end if
! error function
lambda_r = Vy/Vx
if (phi > 0) then ! momentum/empirical
fzero = sphi/(1-a) - cphi/lambda_r*(1-kp)
else ! propeller brake region
fzero = sphi*(1-k) - cphi/lambda_r*(1-kp)
end if
end subroutine inductionFactors
```
We then run this code through Tapenade and it creates a source code transformed version that computes derivatives in addition to function values (I've used a forward mode in this case). It's not pretty to look at, but it's automatically generated. If we change the original source, we would need to regenerate.
```fortran
! Generated by TAPENADE (INRIA, Tropics team)
! Tapenade 3.9 (r5096) - 24 Feb 2014 16:54
!
! Differentiation of inductionfactors in forward (tangent) mode:
! variations of useful results: ap fzero a
! with respect to varying inputs: r rtip rhub chord phi cd cl
! vx vy
! RW status of diff variables: r:in rtip:in ap:out rhub:in chord:in
! fzero:out phi:in cd:in cl:in vx:in vy:in a:out
SUBROUTINE INDUCTIONFACTORS_DV(r, chord, rhub, rtip, phi, cl, cd, b, &
vx, vy, usecd, hubloss, tiploss, wakerotation, &
rd, chordd, rhubd, rtipd, phid, cld, cdd, vxd, vyd, &
fzero, a, ap, fzerod, ad, apd, nbdirs)
! Hint: nbdirsmax should be the maximum number of differentiation directions
IMPLICIT NONE
INTEGER, PARAMETER :: dp=KIND(0.d0)
! in
REAL(dp), INTENT(IN) :: r, chord, rhub, rtip, phi, cl, cd
REAL(dp), DIMENSION(nbdirs), INTENT(IN) :: rd, chordd, rhubd, rtipd&
& , phid, cld, cdd
INTEGER, INTENT(IN) :: b
REAL(dp), INTENT(IN) :: vx, vy
REAL(dp), DIMENSION(nbdirs), INTENT(IN) :: vxd, vyd
LOGICAL, INTENT(IN) :: usecd, hubloss, tiploss, wakerotation
INTEGER, intent(in) :: nbdirs
!f2py logical, optional, intent(in) :: useCd = 1, hubLoss = 1, tipLoss = 1, wakerotation = 1
! out
REAL(dp), INTENT(OUT) :: fzero, a, ap
REAL(dp), DIMENSION(nbdirs), INTENT(OUT) :: fzerod, ad, apd
! local
REAL(dp) :: pi, sigma_p, sphi, cphi, lambda_r
REAL(dp), DIMENSION(nbdirs) :: sigma_pd, sphid, cphid, lambda_rd
REAL(dp) :: factortip, ftip, factorhub, fhub
REAL(dp), DIMENSION(nbdirs) :: factortipd, ftipd, factorhubd, fhubd
REAL(dp) :: k, kp, cn, ct, f
REAL(dp), DIMENSION(nbdirs) :: kd, kpd, cnd, ctd, fd
REAL(dp) :: g1, g2, g3
REAL(dp), DIMENSION(nbdirs) :: g1d, g2d, g3d
INTRINSIC KIND
INTRINSIC SIN
INTRINSIC COS
INTRINSIC ABS
INTRINSIC EXP
INTRINSIC ACOS
INTRINSIC SQRT
REAL(dp) :: arg1
REAL(dp), DIMENSION(nbdirs) :: arg1d
REAL(dp) :: result1
REAL(dp), DIMENSION(nbdirs) :: result1d
INTEGER :: nd
REAL(dp) :: abs1d(nbdirs)
REAL(dp) :: abs0d(nbdirs)
REAL(dp) :: abs2
REAL(dp) :: abs1
REAL(dp) :: abs0
! constants
pi = 3.1415926535897932_dp
DO nd=1,nbdirs
sigma_pd(nd) = (b*chordd(nd)*r/(2.0_dp*pi)-b*chord*rd(nd)/(2.0_dp*pi&
& ))/r**2
sphid(nd) = phid(nd)*COS(phi)
cphid(nd) = -(phid(nd)*SIN(phi))
END DO
sigma_p = b/2.0_dp/pi*chord/r
sphi = SIN(phi)
cphi = COS(phi)
! resolve into normal and tangential forces
IF (.NOT.usecd) THEN
DO nd=1,nbdirs
cnd(nd) = cld(nd)*cphi + cl*cphid(nd)
ctd(nd) = cld(nd)*sphi + cl*sphid(nd)
END DO
cn = cl*cphi
ct = cl*sphi
ELSE
DO nd=1,nbdirs
cnd(nd) = cld(nd)*cphi + cl*cphid(nd) + cdd(nd)*sphi + cd*sphid(nd&
& )
ctd(nd) = cld(nd)*sphi + cl*sphid(nd) - cdd(nd)*cphi - cd*cphid(nd&
& )
END DO
cn = cl*cphi + cd*sphi
ct = cl*sphi - cd*cphi
END IF
! Prandtl's tip and hub loss factor
ftip = 1.0_dp
IF (tiploss) THEN
IF (sphi .GE. 0.) THEN
DO nd=1,nbdirs
abs0d(nd) = sphid(nd)
END DO
abs0 = sphi
ELSE
DO nd=1,nbdirs
abs0d(nd) = -sphid(nd)
END DO
abs0 = -sphi
END IF
factortip = b/2.0_dp*(rtip-r)/(r*abs0)
arg1 = EXP(-factortip)
DO nd=1,nbdirs
factortipd(nd) = (b*(rtipd(nd)-rd(nd))*r*abs0/2.0_dp-b*(rtip-r)*(&
& rd(nd)*abs0+r*abs0d(nd))/2.0_dp)/(r*abs0)**2
arg1d(nd) = -(factortipd(nd)*EXP(-factortip))
IF (arg1 .EQ. 1.0 .OR. arg1 .EQ. (-1.0)) THEN
result1d(nd) = 0.0
ELSE
result1d(nd) = -(arg1d(nd)/SQRT(1.0-arg1**2))
END IF
ftipd(nd) = 2.0_dp*result1d(nd)/pi
END DO
result1 = ACOS(arg1)
ftip = 2.0_dp/pi*result1
ELSE
DO nd=1,nbdirs
ftipd(nd) = 0.0
END DO
END IF
fhub = 1.0_dp
IF (hubloss) THEN
IF (sphi .GE. 0.) THEN
DO nd=1,nbdirs
abs1d(nd) = sphid(nd)
END DO
abs1 = sphi
ELSE
DO nd=1,nbdirs
abs1d(nd) = -sphid(nd)
END DO
abs1 = -sphi
END IF
factorhub = b/2.0_dp*(r-rhub)/(rhub*abs1)
arg1 = EXP(-factorhub)
DO nd=1,nbdirs
factorhubd(nd) = (b*(rd(nd)-rhubd(nd))*rhub*abs1/2.0_dp-b*(r-rhub)*&
& (rhubd(nd)*abs1+rhub*abs1d(nd))/2.0_dp)/(rhub*abs1)**2
arg1d(nd) = -(factorhubd(nd)*EXP(-factorhub))
IF (arg1 .EQ. 1.0 .OR. arg1 .EQ. (-1.0)) THEN
result1d(nd) = 0.0
ELSE
result1d(nd) = -(arg1d(nd)/SQRT(1.0-arg1**2))
END IF
fhubd(nd) = 2.0_dp*result1d(nd)/pi
END DO
result1 = ACOS(arg1)
fhub = 2.0_dp/pi*result1
ELSE
DO nd=1,nbdirs
fhubd(nd) = 0.0
END DO
END IF
f = ftip*fhub
DO nd=1,nbdirs
fd(nd) = ftipd(nd)*fhub + ftip*fhubd(nd)
! bem parameters
kd(nd) = ((((sigma_pd(nd)*cn+sigma_p*cnd(nd))*f/4.0_dp-sigma_p*cn*fd&
& (nd)/4.0_dp)*sphi/f**2-sigma_p*cn*sphid(nd)/(4.0_dp*f))/sphi-&
& sigma_p*cn*sphid(nd)/(4.0_dp*f*sphi))/sphi**2
kpd(nd) = ((((sigma_pd(nd)*ct+sigma_p*ctd(nd))*f/4.0_dp-sigma_p*ct*&
& fd(nd)/4.0_dp)*sphi/f**2-sigma_p*ct*sphid(nd)/(4.0_dp*f))*cphi/&
& sphi**2-sigma_p*ct*cphid(nd)/(4.0_dp*f*sphi))/cphi**2
END DO
k = sigma_p*cn/4.0_dp/f/sphi/sphi
kp = sigma_p*ct/4.0_dp/f/sphi/cphi
! compute axial induction factor
IF (phi .GT. 0) THEN
! momentum/empirical
! update axial induction factor
IF (k .LE. 2.0_dp/3.0) THEN
DO nd=1,nbdirs
! momentum state
ad(nd) = (kd(nd)*(1+k)-k*kd(nd))/(1+k)**2
END DO
a = k/(1+k)
ELSE
DO nd=1,nbdirs
! Glauert(Buhl) correction
g1d(nd) = 2.0_dp*(fd(nd)*k+f*kd(nd)) + fd(nd)
g2d(nd) = 2.0_dp*(fd(nd)*k+f*kd(nd)) - (4.0_dp/3-f)*fd(nd) + fd(&
& nd)*f
g3d(nd) = 2.0_dp*(fd(nd)*k+f*kd(nd)) + 2*fd(nd)
END DO
g1 = 2.0_dp*f*k - (10.0_dp/9-f)
g2 = 2.0_dp*f*k - (4.0_dp/3-f)*f
g3 = 2.0_dp*f*k - (25.0_dp/9-2*f)
IF (g3 .GE. 0.) THEN
abs2 = g3
ELSE
abs2 = -g3
END IF
IF (abs2 .LT. 1e-6_dp) THEN
result1 = SQRT(g2)
DO nd=1,nbdirs
! avoid singularity
IF (g2 .EQ. 0.0) THEN
result1d(nd) = 0.0
ELSE
result1d(nd) = g2d(nd)/(2.0*SQRT(g2))
END IF
ad(nd) = result1d(nd)/2.0/result1**2
END DO
a = 1.0_dp - 1.0_dp/2.0/result1
ELSE
result1 = SQRT(g2)
DO nd=1,nbdirs
IF (g2 .EQ. 0.0) THEN
result1d(nd) = 0.0
ELSE
result1d(nd) = g2d(nd)/(2.0*SQRT(g2))
END IF
ad(nd) = ((g1d(nd)-result1d(nd))*g3-(g1-result1)*g3d(nd))/g3**&
& 2
END DO
a = (g1-result1)/g3
END IF
END IF
ELSE IF (k .GT. 1) THEN
! propeller brake region (a and ap not directly used but update anyway)
DO nd=1,nbdirs
ad(nd) = (kd(nd)*(k-1)-k*kd(nd))/(k-1)**2
END DO
a = k/(k-1)
ELSE
! dummy value
a = 0.0_dp
DO nd=1,nbdirs
ad(nd) = 0.0
END DO
END IF
DO nd=1,nbdirs
! compute tangential induction factor
apd(nd) = (kpd(nd)*(1-kp)+kp*kpd(nd))/(1-kp)**2
END DO
ap = kp/(1-kp)
IF (.NOT.wakerotation) THEN
ap = 0.0_dp
kp = 0.0_dp
DO nd=1,nbdirs
apd(nd) = 0.0
kpd(nd) = 0.0
END DO
END IF
DO nd=1,nbdirs
! error function
lambda_rd(nd) = (vyd(nd)*vx-vy*vxd(nd))/vx**2
END DO
lambda_r = vy/vx
IF (phi .GT. 0) THEN
DO nd=1,nbdirs
! momentum/empirical
fzerod(nd) = (sphid(nd)*(1-a)+sphi*ad(nd))/(1-a)**2 - (cphid(nd)*&
& lambda_r-cphi*lambda_rd(nd))*(1-kp)/lambda_r**2 + cphi*kpd(nd)/&
& lambda_r
END DO
fzero = sphi/(1-a) - cphi/lambda_r*(1-kp)
ELSE
DO nd=1,nbdirs
! propeller brake region
fzerod(nd) = sphid(nd)*(1-k) - sphi*kd(nd) - (cphid(nd)*lambda_r-&
& cphi*lambda_rd(nd))*(1-kp)/lambda_r**2 + cphi*kpd(nd)/lambda_r
END DO
fzero = sphi*(1-k) - cphi/lambda_r*(1-kp)
END IF
END SUBROUTINE INDUCTIONFACTORS_DV
```
We now build this Fortran code into a shared library that I called _bem. I will skip the details because this isn't our focus, but this is fairly easy to do. We can now access this function from Python by importing the library.
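The build step is deliberately skipped in the text. Purely as a hedged sketch, one common route is numpy's f2py; the source file name below is my assumption, not the author's:
```python
# From a shell:  f2py -c inductionFactors_dv.f90 -m _bem
# The generated wrapper then imports like any Python module, and f2py attaches a
# docstring with the Fortran call signature:
import _bem
print(_bem.inductionfactors_dv.__doc__)
```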
This AD method lets us compute combinations of partial derivatives if we want, but generally we just want each derivative separately. There are nine inputs in x and we are using the array version that lets us compute multiple derivatives simultaneously. We will set:
dr = [1, 0, 0, 0, 0, 0, 0, 0, 0]
dchord = [0, 1, 0, 0, 0, 0, 0, 0, 0]
and so on ...
This will set the first derivative of doutput_dx to doutput_dr, the second to doutput_dchord, etc.
End of explanation
# import complex versions
from cmath import sin, cos, acos, exp, sqrt
# redine absolute value
def c_abs(x):
if x.real < 0:
return -x
else:
return x
abs = c_abs
# initialize
g_cs = np.zeros(n)
# iterate across entires in x
for i in range(n):
step_complex = 1e-30 # take a really small step
# new xvalue: x + ih
xcomplex = np.copy(x).astype(complex)
xcomplex[i] += complex(0.0, step_complex)
# call function
output_complex, a_complex, ap_complex = function(xcomplex, params)
# compute gradient
g_cs[i] = output_complex.imag / step_complex
Explanation: This approach is a little more work, but it's much faster and not difficult once you've done it a few times.
How do we check that we did it correctly? Comparing against finite difference is ok, but the best way is to compare against complex step if possible because it is also exact.
To compute the gradients using complex step we need to import complex versions of our functions. We will also need to redefine the absolute value function because the complex version of absolute value$^1$ is the square root of the sum of squares of the real and imaginary part, which is not what we want. You can find more details on functions that need to be overloaded in complex step here.
$^1$ Recall that absolute value is not differentiable at 0, and is generally best to avoid if possible. In this particular code it is fine because I know that the argument will never be zero. It can be negative or positive depending on the operating conditions, but will never cross over to zero.
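As a tiny side illustration (not in the original) of why the step can be taken absurdly small: the derivative comes from the imaginary part alone, so there is no subtractive cancellation.
```python
import cmath
h, x0 = 1e-30, 0.7
deriv_cs = cmath.sin(x0 + 1j*h).imag / h   # complex-step derivative of sin at x0
print(deriv_cs - cmath.cos(x0).real)       # error is at machine precision (~1e-16)
```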
End of explanation
from __future__ import print_function
print('error_fd =', (g_fd - g_cs)/g_cs)
Explanation: Let's see how we did. We will compare our errors relative to complex step. First finite differencing:
End of explanation
print('error_ad_oo =', (g_ad_oo - g_cs)/g_cs)
print('error_ad_sc =', (g_ad_sc - g_cs)/g_cs)
Explanation: The errors are pretty small, except that the third entry is really bad. If we use a different step size for that one entry, we can do a little better. It turns out the function is very insensitive to Rhub at this point and so getting an accurate gradient with FD is difficult.
Let's now look at the two AD methods: one with operator overloading and one with source code transformation
End of explanation |
7,059 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Implementing a Local Search Engine
This notebook shows one important application of <em style="color
Step1: The function get_text takes a path specifiying a .pdf file. It converts the .pdf file into a text file and returns the
resulting text. This function assumes that the program pdftotext is installed. This program can be dowloaded at
<a href="https
Step2: Let us test this for one file.
Step3: In order to split the text contained in a file into words, we need the regular expressions provided by the module re.
Step4: The function tokenize takes a string s and returns the set of words that have been found in the string s. We assume that the words contain only latin characters. Furthermore, we convert the words to lower case.
Step5: Let us check how many different words occur in the file that we have read above.
Step6: We need the module os to traverse directories. os is short for operating system.
Step7: The class Document represents a single file. This class maintains three member variables
Step8: The class Index contains three member variables
Step9: The method $\texttt{self}.\texttt{buildIndex}(d)$ takes an Index $\texttt{self}$ and a directory $d$. It traverses the directory $d$ recursively and collects all .pdf files contained in $d$ and its subdirectories. These files are converted to text and their words are added to the InvertedIndex.
Step10: The function _addToIndex takes a document identifier $d$ and a set of words $W$ occurring in the document specified by $d$ and extends the InvertedIndex so that for every word $w$ in Words we have that
$$ d \in \texttt{InvertedIndex}[w]. $$
Step11: Let us build an Index for a directory containing some literature regarding my lectures on algorithm.
Step12: The method $\texttt{self}.\texttt{retrieve}(Q)$ takes an Index self and a query $Q$. $Q$ is a string containing multiple words.
The method returns the set of those documents that contain all the words occurring in $Q$. | Python Code:
import subprocess
Explanation: Implementing a Local Search Engine
This notebook shows one important application of <em style="color:blue;">dictionaries</em> and <em style="color:blue;">sets</em>:
It implements a local search engine that can be used to index .pdf documents on the local file system. The index can then be used to search for all documents that contain specified words. The main data structure used is a so called
<em style="color:blue;">inverted index</em>, which is a <em style="color:blue;">dictionary</em> mapping words to the
<em style="color:blue;">sets</em> of documents that contain these words.
The module subprocess enables us to execute shell commands and to capture their output via a pipe from which we can read.
End of explanation
def get_text(path):
command = ['pdftotext', '-enc', 'UTF-8', '-q', path, '-']
process = subprocess.Popen(command, stdout=subprocess.PIPE)
Lines = process.stdout.readlines()
return ''.join([str(line, 'utf-8', 'ignore') for line in Lines])
Explanation: The function get_text takes a path specifying a .pdf file. It converts the .pdf file into a text file and returns the
resulting text. This function assumes that the program pdftotext is installed. This program can be downloaded at
<a href="https://www.xpdfreader.com/download.html">https://www.xpdfreader.com/download.html</a>.
End of explanation
%%time
text = get_text('../Literature/DualPivotQuicksort.pdf')
print(len(text))
Explanation: Let us test this for one file.
End of explanation
import re
Explanation: In order to split the text contained in a file into words, we need the regular expressions provided by the module re.
End of explanation
def tokenize(s):
return set(t.lower() for t in re.findall(r'[A-Za-z]+', s))
Explanation: The function tokenize takes a string s and returns the set of words that have been found in the string s. We assume that the words contain only latin characters. Furthermore, we convert the words to lower case.
End of explanation
len(tokenize(text))
Explanation: Let us check how many different words occur in the file that we have read above.
End of explanation
import os
Explanation: We need the module os to traverse directories. os is short for operating system.
End of explanation
class Document:
def __init__(self, path, docID, Words):
self.path = path
self.docID = docID
self.Words = Words
Explanation: The class Document represents a single file. This class maintains three member variables:
- path is the absolute file path specifying the location of the file containing the pdf document,
- docID is a natural number that serves as a unique identifier for the document,
- Words is the set of words contained in the file.
This class only serves as a container of its member variables, hence it has no methods.
End of explanation
class Index:
def __init__(self):
self.InvertedIndex = {}
self.ID2Doc = {}
self.fileCount = 0
Explanation: The class Index contains three member variables:
- InvertedIndex is a dictionary that maps every word to the set of documents containing this word.
In this set, the documents are represented by their unique identifiers.
- ID2Doc is a dictionary mapping the document identifiers to the corresponding Document objects.
- fileCount is a counter that is needed to create unique document identifiers.
End of explanation
def buildIndex(self, directory):
for root, _, files in os.walk(directory):
for fileName in files:
if fileName[-4:] == '.pdf':
fullpath = os.path.abspath(os.path.join(root, fileName))
print('indexing', fullpath, end=' has ')
try:
fileText = get_text(fullpath)
tokenSet = tokenize(fileText)
print(len(tokenSet), 'different words.')
self.fileCount += 1
document = Document(fullpath, self.fileCount, tokenSet)
self.ID2Doc[self.fileCount] = document
self._addToIndex(self.fileCount, tokenSet)
except:
                    print('unable to read', fullpath)
continue
Index.buildIndex = buildIndex
Explanation: The method $\texttt{self}.\texttt{buildIndex}(d)$ takes an Index $\texttt{self}$ and a directory $d$. It traverses the directory $d$ recursively and collects all .pdf files contained in $d$ and its subdirectories. These files are converted to text and their words are added to the InvertedIndex.
End of explanation
def _addToIndex(self, documentID, Words):
for term in Words:
try:
docSet = self.InvertedIndex[term]
docSet.add(documentID)
except KeyError:
self.InvertedIndex[term] = { documentID }
Index._addToIndex = _addToIndex
Explanation: The function _addToIndex takes a document identifier $d$ and a set of words $W$ occurring in the document specified by $d$ and extends the InvertedIndex so that for every word $w$ in Words we have that
$$ d \in \texttt{InvertedIndex}[w]. $$
End of explanation
%%time
index = Index()
index.buildIndex("../Literature")
Explanation: Let us build an Index for a directory containing some literature regarding my lectures on algorithms.
End of explanation
def retrieve(self, Query):
SearchStrings = list(tokenize(Query))
result = set()
Documents = self.InvertedIndex.get(SearchStrings[0], set())
for word in SearchStrings[1:]:
Documents &= self.InvertedIndex.get(word, set())
return { self.ID2Doc[docID].path for docID in Documents }
Index.retrieve = retrieve
index.retrieve("trie avl")
Explanation: The method $\texttt{self}.\texttt{retrieve}(Q)$ takes an Index self and a query $Q$. $Q$ is a string containing multiple words.
The method returns the set of those documents that contain all the words occurring in $Q$.
End of explanation |
7,060 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Capture Faces from Scraped Pictures
We used haarcascade for frontal face from OpenCV to capture the frontal faces from the pictures scraped from My Ladyboy Date and Date in Asia, and cropped them to the 224 by 224 size for input into the model. Girl and Ladyboy pictures are only the first profile pictures on respective dating sites whereas Ladyboy Big are the pictures in the detail section.
Step1: Ladyboy
Step2: Ladyboy Big
Step3: Girl | Python Code:
import cv2
from PIL import Image
import math
import copy
#the usual data science stuff
import os,sys
import glob
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
%matplotlib inline
ladyboy_big_input = '../data/ladyboy_big/'
ladyboy_big_output = '../data/processed/ladyboy_big/'
ladyboy_input = '../data/ladyboy/'
ladyboy_output = '../data/processed/ladyboy/'
girl_input = '../data/girl/'
girl_output = '../data/processed/girl/'
cascade_file_src = "haarcascade_frontalface_default.xml"
faceCascade = cv2.CascadeClassifier(cascade_file_src)
Explanation: Capture Faces from Scraped Pictures
We used haarcascade for frontal face from OpenCV to capture the frontal faces from the pictures scraped from My Ladyboy Date and Date in Asia, and cropped them to the 224 by 224 size for input into the model. Girl and Ladyboy pictures are only the first profile pictures on respective dating sites whereas Ladyboy Big are the pictures in the detail section.
End of explanation
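# The three loops below differ only in their input/output folders, so the same logic
# can be wrapped once. This helper is only a sketch added for clarity (the name is not
# from the original notebook); the original loops are kept unchanged below.
def crop_faces(input_dir, output_dir):
    for root, dirs, files in os.walk(input_dir):
        for name in files:
            image_path = os.path.join(root, name)
            image = cv2.imread(image_path)
            gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
            faces = faceCascade.detectMultiScale(gray, 1.2, 5)
            if len(faces) == 0:
                continue
            im = Image.open(image_path)
            (x, y, w, h) = faces[0]
            center_x, center_y = x + w / 2, y + h / 2
            b_dim = min(max(w, h) * 1.2, im.width, im.height)
            box = (int(center_x - b_dim / 2), int(center_y - b_dim / 2),
                   int(center_x + b_dim / 2), int(center_y + b_dim / 2))
            im.crop(box).resize((224, 224)).save(output_dir + name, format='JPEG')
# e.g. crop_faces(ladyboy_input, ladyboy_output)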
#i=0
for root, dirs, files in os.walk(ladyboy_input):
for name in files:
#print(i)
#i+=1
imagePath = os.path.join(root, name)
# load image on gray scale :
image = cv2.imread(imagePath)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# detect faces in the image :
faces = faceCascade.detectMultiScale(gray, 1.2, 5)
#skip if face not detected
if(len(faces)==0):
continue
#open image
im = Image.open(imagePath)
#get box dimensions
(x, y, w, h) = faces[0]
center_x = x+w/2
center_y = y+h/2
b_dim = min(max(w,h)*1.2,im.width, im.height)
box = (int(center_x-b_dim/2), int(center_y-b_dim/2),
int(center_x+b_dim/2), int(center_y+b_dim/2))
# Crop Image
crpim = im.crop(box).resize((224,224))
#plt.imshow(np.asarray(crpim))
#save file
crpim.save(ladyboy_output+name,format='JPEG')
Explanation: Ladyboy
End of explanation
#i=0
for root, dirs, files in os.walk(ladyboy_big_input):
for name in files:
#print(i)
#i+=1
imagePath = os.path.join(root, name)
# load image on gray scale :
image = cv2.imread(imagePath)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# detect faces in the image :
faces = faceCascade.detectMultiScale(gray, 1.2, 5)
#skip if face not detected
if(len(faces)==0):
continue
#open image
im = Image.open(imagePath)
#get box dimensions
(x, y, w, h) = faces[0]
center_x = x+w/2
center_y = y+h/2
b_dim = min(max(w,h)*1.2,im.width, im.height)
box = (int(center_x-b_dim/2), int(center_y-b_dim/2),
int(center_x+b_dim/2), int(center_y+b_dim/2))
# Crop Image
crpim = im.crop(box).resize((224,224))
#plt.imshow(np.asarray(crpim))
#save file
crpim.save(ladyboy_big_output+name,format='JPEG')
Explanation: Ladyboy Big
End of explanation
#i=0
for root, dirs, files in os.walk(girl_input):
for name in files:
#print(i)
#i+=1
imagePath = os.path.join(root, name)
# load image on gray scale :
image = cv2.imread(imagePath)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# detect faces in the image :
faces = faceCascade.detectMultiScale(gray, 1.2, 5)
#skip if face not detected
if(len(faces)==0):
continue
#open image
im = Image.open(imagePath)
#get box dimensions
(x, y, w, h) = faces[0]
center_x = x+w/2
center_y = y+h/2
b_dim = min(max(w,h)*1.2,im.width, im.height)
box = (int(center_x-b_dim/2), int(center_y-b_dim/2),
int(center_x+b_dim/2), int(center_y+b_dim/2))
# Crop Image
crpim = im.crop(box).resize((224,224))
#plt.imshow(np.asarray(crpim))
#save file
crpim.save(girl_output+name,format='JPEG')
Explanation: Girl
End of explanation |
7,061 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img width=700px; src="../img/logoUPSayPlusCDS_990.png">
<p style="margin-top
Step1: The SciPy library is one of the core packages that make up the SciPy stack. It provides many user-friendly and efficient numerical routines such as routines for numerical integration and optimization.
1. File input/output - scipy.io
Scipy provides an io module to help load some data type. We can easily read MATLAB .mat files using io.loadmat and io.savemat.
Step2: <div class="alert alert-success">
<b>EXERCISE - `scipy.io`</b>
Step3: The scipy.interpolate.interp1d class can build a linear interpolation function
Step4: Then the scipy.interpolate.linear_interp instance needs to be evaluated at the time of interest
Step5: A cubic interpolation can also be selected by providing the kind optional keyword argument
Step6: Let's see the difference by plotting the results.
Step7: <div class="alert alert-success">
<b>EXERCISE - `scipy.interpolate`</b>
Step8: Finding the minimum of a scalar function
Let’s define the following function
Step9: and plot it
Step10: This function has a global minimum around -1.3 and a local minimum around 3.8.
The general and efficient way to find a minimum for this function is to conduct a gradient descent starting from a given initial point. The BFGS algorithm is a good way of doing this
Step11: A possible issue with this approach is that, if the function has local minima the algorithm may find these local minima instead of the global minimum depending on the initial point
Step12: If we don’t know the neighborhood of the global minimum to choose the initial point, we need to resort to costlier global optimization. To find the global minimum, we use scipy.optimize.basinhopping() (which combines a local optimizer with stochastic sampling of starting points for the local optimizer)
Step13: Finding the roots of a scalar function
To find a root, i.e. a point where $f(x) = 0$, of the function f above we can use for example scipy.optimize.fsolve()
Step14: Note that only one root is found. Inspecting the plot of f reveals that there is a second root around -2.5. We find the exact value of it by adjusting our initial guess
Step15: Curve fitting
Suppose we have data sampled from $f$ with some noise
Step16: Now if we know the functional form of the function from which the samples were drawn ($x^2 + \sin(x)$ in this case) but not the amplitudes of the terms, we can find those by least squares curve fitting. First we have to define the function to fit
Step17: Then we can use scipy.optimize.curve_fit() to find $a$ and $b$
Step18: Summary in a single plot
Step21: <div class="alert alert-success">
<b>EXERCISE - `scipy.optimize`</b>
Step22: 4. Numerical integration - scipy.integrate
Given a function object, the most generic integration routine is scipy.integrate.quad().
Step23: If only fixed sample are given, the trapeze method (scipy.integrate.trapz()) or Simpson's integration rule scipy.integrate.simps()) can be used.
Step24: <div class="alert alert-success">
<b>EXERCISE - `scipy.integrate`</b>
Step25: Using the * operator does not lead to a matrix multiplication since the matrix returned is a $3 \times 3$ matrix. Instead, it multiply each column of $A$ by the vector $b$.
Step26: You need to use the function np.dot to obtain the matrix multiplication.
Step27: However, by converting $A$ and $b$ to matrices (i.e., np.matrix), it is possible to use the * operator directly.
Step28: <div class="alert alert-success">
<b>EXERCISE - `scipy.linalg`</b>
Step29: 1-sample t-test
scipy.stats.ttest_1samp() tests if the population mean of data is likely to be equal to a given value. Let see if the VIQ of our population is equal to 0.
Step30: With a p-value of $10^{-28}$ we can claim that the population mean for the IQ (VIQ measure) is not 0.
2-sample t-test
scipy.stats.ttest_ind() can compare two populations and check if the difference is significant or not. We can study if there is a difference of the VIQ between Male and Female.
Step31: To see if this difference is significant, we can use scipy.stats.ttest_ind().
Step32: <div class="alert alert-success">
<b>EXERCISE</b>
Step33: Then we specify an OLS model and fit it
Step34: We can inspect the various statistics derived from the fit
Step35: Intercept | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
Explanation: <img width=700px; src="../img/logoUPSayPlusCDS_990.png">
<p style="margin-top: 3em; margin-bottom: 2em;"><b><big><big><big><big>Introduction to Scipy and Statsmodels libraries</big></big></big></big></b></p>
End of explanation
from scipy.io import loadmat, savemat
a = np.ones((3, 3))
savemat('file.mat', {'a': a}) # savemat expects a dictionary
data = loadmat('file.mat', struct_as_record=True)
data['a']
Explanation: The SciPy library is one of the core packages that make up the SciPy stack. It provides many user-friendly and efficient numerical routines such as routines for numerical integration and optimization.
1. File input/output - scipy.io
Scipy provides an io module to help load some common data types. We can easily read MATLAB .mat files using io.loadmat and io.savemat.
End of explanation
measured_time = np.linspace(0, 1, 10)
noise = (np.random.random(10)*2 - 1) * 1e-1
measures = np.sin(2 * np.pi * measured_time) + noise
Explanation: <div class="alert alert-success">
<b>EXERCISE - `scipy.io`</b>:
<ul>
<li>Load the matfile from `data/spectra.mat` using `scipy.io.loadmat`.</li>
<li>Extract from the loaded dictionary two variables (`spectra`, `frequency`). You should call `ravel` on the `frequency` array to obtain a 1-D array.</li>
<li>Plot the spectra in function of the frequency.</li>
</ul>
</div>
2. Signal interpolation - scipy.interpolate
The scipy.interpolate module is useful for fitting a function to experimental data and thus evaluating it at points where no measure exists. Imagine experimental data close to a sine function:
End of explanation
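# A possible sketch for the `scipy.io` exercise above. It assumes that
# 'data/spectra.mat' exists and holds the two keys named in the exercise,
# 'spectra' and 'frequency'.
spectra_data = loadmat('data/spectra.mat', struct_as_record=True)
spectra = spectra_data['spectra']
frequency = spectra_data['frequency'].ravel()
plt.plot(frequency, spectra.T)
plt.xlabel('Frequency')
plt.ylabel('Amplitude')
plt.show()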
from scipy.interpolate import interp1d
linear_interp = interp1d(measured_time, measures)
Explanation: The scipy.interpolate.interp1d class can build a linear interpolation function:
End of explanation
computed_time = np.linspace(0, 1, 50)
linear_results = linear_interp(computed_time)
Explanation: Then the scipy.interpolate.linear_interp instance needs to be evaluated at the time of interest:
End of explanation
cubic_interp = interp1d(measured_time, measures, kind='cubic')
cubic_results = cubic_interp(computed_time)
Explanation: A cubic interpolation can also be selected by providing the kind optional keyword argument:
End of explanation
plt.plot(measured_time, measures, 'or', label='Measures')
plt.plot(computed_time, linear_results, label='Linear interpolation')
plt.plot(computed_time, cubic_results, label='Cubic interpolation')
plt.legend()
plt.xlabel('Time')
plt.ylabel('Amplitude')
plt.show()
Explanation: Let's see the difference by plotting the results.
End of explanation
from scipy import optimize
Explanation: <div class="alert alert-success">
<b>EXERCISE - `scipy.interpolate`</b>:
<ul>
<li>Interpolate each spectra values corresponding to the integral frequencies {401, 402, ..., 3999} using `scipy.interpolate.interp1d`.</li>
<li>Plot the spectra in function of the frequencies.</li>
</ul>
</div>
3. Optimization - scipy.optimize
Optimization is the problem of finding a numerical solution to a minimization or equality.
The scipy.optimize module provides useful algorithms for function minimization (scalar or multi-dimensional), curve fitting and root finding.
End of explanation
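# A possible sketch for the `scipy.interpolate` exercise above. It reuses the
# `spectra` and `frequency` arrays from the scipy.io exercise sketch and evaluates
# each spectrum at the integral frequencies {401, ..., 3999}.
frequency_interp = np.arange(401, 4000)
spectra_interp = interp1d(frequency, spectra)(frequency_interp)
plt.plot(frequency_interp, spectra_interp.T)
plt.xlabel('Frequency')
plt.ylabel('Amplitude')
plt.show()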
def f(x):
return x ** 2 + 10 * np.sin(x)
Explanation: Finding the minimum of a scalar function
Let’s define the following function:
End of explanation
x = np.arange(-10, 10, 0.1)
plt.plot(x, f(x))
plt.show()
Explanation: and plot it:
End of explanation
res = optimize.minimize(f, 0, method='L-BFGS-B')
res
Explanation: This function has a global minimum around -1.3 and a local minimum around 3.8.
The general and efficient way to find a minimum for this function is to conduct a gradient descent starting from a given initial point. The BFGS algorithm is a good way of doing this:
End of explanation
res2 = optimize.minimize(f, 3, method='L-BFGS-B')
res2
Explanation: A possible issue with this approach is that, if the function has local minima, the algorithm may find one of them instead of the global minimum, depending on the initial point:
End of explanation
optimize.basinhopping(f, 3, niter=1000)
Explanation: If we don’t know the neighborhood of the global minimum to choose the initial point, we need to resort to costlier global optimization. To find the global minimum, we use scipy.optimize.basinhopping() (which combines a local optimizer with stochastic sampling of starting points for the local optimizer):
End of explanation
root = optimize.fsolve(f, 1) # our initial guess is 1
root
Explanation: Finding the roots of a scalar function
To find a root, i.e. a point where $f(x) = 0$, of the function f above we can use for example scipy.optimize.fsolve():
End of explanation
root2 = optimize.fsolve(f, -2.5)
root2
Explanation: Note that only one root is found. Inspecting the plot of f reveals that there is a second root around -2.5. We find the exact value of it by adjusting our initial guess:
End of explanation
xdata = np.linspace(-10, 10, num=100)
ydata = f(xdata) + np.random.normal(0, 2, xdata.shape)
Explanation: Curve fitting
Suppose we have data sampled from $f$ with some noise:
End of explanation
def f2(x, a, b):
return a*x**2 + b*np.sin(x)
Explanation: Now if we know the functional form of the function from which the samples were drawn ($x^2 + \sin(x)$ in this case) but not the amplitudes of the terms, we can find those by least squares curve fitting. First we have to define the function to fit:
End of explanation
guess = [2, 2]
params, params_covariance = optimize.curve_fit(f2, xdata, ydata, guess)
params
Explanation: Then we can use scipy.optimize.curve_fit() to find $a$ and $b$:
End of explanation
x = np.arange(-10, 10, 0.1)
plt.plot(xdata, ydata)
# plot the local minima
plt.plot(res.x, f(res.x), 'or', label='minimum')
plt.plot(res2.x, f(res2.x), 'or')
# plot the roots
plt.plot(root, f(root), '^g', label='roots')
plt.plot(root2, f(root2), '^g')
# plot the curved fitted
plt.plot(x, f2(x, params[0], params[1]), '--', label='fitted')
plt.legend()
plt.show()
Explanation: Summary in a single plot
End of explanation
# import helper regarding normal distribution
from scipy.stats import norm
def find_nearest_index(array, value):
Find the nearest index of a value in an array.
idx = (np.abs(array - value)).argmin()
return idx
def model_bi_functions(freqs, a=1e-5, b=0.01,
scale=100, mu=3300, sigma=300):
Model to be fitted.
It corresponds to a line from [0, f0] and a
Normal distribution profile from [f0, end].
Parameters
----------
freqs : ndarray, shape (n_freqs,)
Frequencies for which the spectrum will be calculated
a : float, (default=1e-5)
Slope of the line.
b : float, (default=0.01)
Values where the line cut the y-axis.
scale : float, (default=100)
Scaling factor for the amplitude of the Gaussian profile.
mu : float, (default=3300)
Central value of the Gaussian profile.
sigma : float, (default=300)
Standard deviation of the Gaussian profile.
y = np.zeros(freqs.shape)
# find the index of the inflexion point
f0_idx = find_nearest_index(freqs, mu - 3 * sigma)
# line equation
y[:f0_idx] = a * freqs[:f0_idx] + b
# Gaussian profile
y[f0_idx:] = ((a * freqs[f0_idx] + b) +
(scale * norm.pdf(freqs[f0_idx:], mu, sigma)))
return y
y = model_bi_functions(frequency_interp)
plt.plot(frequency_interp, y)
plt.xlabel('Frequency')
plt.ylabel('Amplitude')
Explanation: <div class="alert alert-success">
<b>EXERCISE - `scipy.optimize`</b>:
The previous spectra can be modelled using a simple function `model_bi_functions` which we defined as:
<br><br>
$$
S(f)=\left\{
\begin{array}{ll}
a f + b, & 0 < f < \mu - 3 \sigma \\
(a (\mu - 3 \sigma) + b) + \exp\left( - \frac{(f - \mu)^{2}}{2 \sigma^{2}} \right), & f \geq \mu - 3 \sigma\\
\end{array}
\right.
$$
See below a plot which illustrate the profile of this function.
<ul>
<li>Using `scipy.optimize.curve_fit`, fit `model_bi_functions` in the first spectra from `spectra_interp`. You also have to use `frequency_interp` as `x` values. Use the initial parameters `[0.0, 0.01, 100, 3300, 300]`</li>
<li>Plot the results.</li>
</ul>
</div>
End of explanation
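# A possible sketch for the `scipy.optimize` exercise above: fit the first
# interpolated spectrum with `model_bi_functions` (assumes `frequency_interp` and
# `spectra_interp` from the interpolation sketch above).
popt, pcov = optimize.curve_fit(model_bi_functions, frequency_interp,
                                spectra_interp[0], p0=[0.0, 0.01, 100, 3300, 300])
plt.plot(frequency_interp, spectra_interp[0], label='data')
plt.plot(frequency_interp, model_bi_functions(frequency_interp, *popt), '--', label='fit')
plt.legend()
plt.show()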
from scipy.integrate import quad
res, err = quad(np.sin, 0, np.pi / 2)
res
Explanation: 4. Numerical integration - scipy.integrate
Given a function object, the most generic integration routine is scipy.integrate.quad().
End of explanation
x = np.linspace(0, np.pi / 2, num=200)
y = np.sin(x)
from scipy.integrate import simps
res = simps(y, x)
res
Explanation: If only fixed samples are given, the trapezoidal rule (scipy.integrate.trapz()) or Simpson's rule (scipy.integrate.simps()) can be used.
End of explanation
A = np.array([[ 3, 3, -1],
[ 2, -3, 4],
[-1, .5, -1]])
b = np.array([[ 1],
[-2],
[ 0]])
Explanation: <div class="alert alert-success">
<b>EXERCISE - `scipy.integrate`</b>:
We would be interested in the area under the Gaussian profile since it is related to what we want to quantify.
<ul>
<li>Using `scipy.integrate.simps`, compute the area under the Gaussian profile between $[\mu - 3 \sigma, \mu + 3 \sigma]$. Those parameters can be found as the results of the curve fitting previusly done. The indexes corresponding to the interval values can be computed using `find_nearest_index`.</li>
<li>You can do the same using the original data to see the difference of quantification.</li>
</ul>
</div>
5. Linear algebra - scipy.linalg
The scipy.linalg module offers basic operations used in linear algebra such as the inverse (scipy.linalg.inv), pseudo-inverse (scipy.linalg.pinv) and determinant (scipy.linalg.det), as well as standard decompositions such as SVD, QR, or Cholesky, among others.
<div class="alert alert-warning">
<b>`np.array` vs. `np.matrix`:</b>
<br><br>
By default the multiplication between two `np.array` (i.e. the `*` operator) does not lead to a matrix multiplication. You need to use `np.dot` to perform this operation.
<br><br>
Another possibility is to convert the `np.array` to `np.matrix`, which performs this operation when using the operator `*`. The operations become more readable when a lot of algebraic operations are involved.
<br><br>
We illustrate this behaviour in the example below.
</div>
Let's declare two arrays of shape $3 \times 3$ and $3 \times 1$, respectively.
End of explanation
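# A possible sketch for the `scipy.integrate` exercise above, using the fitted
# parameters `popt` = [a, b, scale, mu, sigma] from the curve-fit sketch and the
# `find_nearest_index` helper defined earlier.
mu_fit, sigma_fit = popt[3], popt[4]
idx_lo = find_nearest_index(frequency_interp, mu_fit - 3 * sigma_fit)
idx_hi = find_nearest_index(frequency_interp, mu_fit + 3 * sigma_fit)
area = simps(spectra_interp[0][idx_lo:idx_hi], frequency_interp[idx_lo:idx_hi])
print(area)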
A * b
Explanation: Using the * operator does not lead to a matrix multiplication since the matrix returned is a $3 \times 3$ matrix. Instead, it multiplies each column of $A$ elementwise by the vector $b$.
End of explanation
np.dot(A, b)
Explanation: You need to use the function np.dot to obtain the matrix multiplication.
End of explanation
A = np.matrix(A)
b = np.matrix(b)
A * b
Explanation: However, by converting $A$ and $b$ to matrices (i.e., np.matrix), it is possible to use the * operator directly.
End of explanation
import pandas as pd
data = pd.read_csv('data/brain_size.csv', sep=';', na_values=".")
data.head()
Explanation: <div class="alert alert-success">
<b>EXERCISE - `scipy.linalg`</b>:
<ul>
<li>Solve the following system of linear equations using the normal equation.</li>
</ul>
<br>
$$
\left[\begin{array}{ccc}
3 & 3 & -1 \\
2 & -3 & 4 \\
-1 & 0.5 & -1
\end{array}\right]
\left[\begin{array}{c}
x_1 \\
x_2 \\
x_3
\end{array}\right] =
\left[\begin{array}{c}
1 \\
-2 \\
0
\end{array}\right]
$$
This problem can be seen as:
$$ A x = b $$
$x$ can be found such that:
$$ x = (A^{T} A)^{-1} A^{T} b $$
Find $x$ using the above equation
</div>
<div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Solve the following system of linear equations using SVD.</li>
</ul>
<br>
The above problem can also be solved using an SVD decomposition such that:
$$ x = V S^{-1} (U^{T} b) $$
where $U$, $S$, and $V^{T}$ can be found with `scipy.linalg.svd` such that:
`U, S, Vh = svd(A)`
</div>
6. Statistics - scipy.stats and statsmodels
scipy.stats
scipy.stats mainly contains helpers for the most common continuous and discrete distributions.
In addition, this module contains statistical functions, for instance to perform statistical tests.
End of explanation
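# A possible sketch for the two `scipy.linalg` exercises above. A and b were turned
# into np.matrix objects in the previous cell, so they are converted back to plain
# arrays first.
from scipy import linalg
A = np.asarray(A)
b = np.asarray(b)
# normal equation: x = (A^T A)^{-1} A^T b
x_normal = np.dot(linalg.inv(np.dot(A.T, A)), np.dot(A.T, b))
print(x_normal)
# SVD: x = V S^{-1} (U^T b)
U, S, Vh = linalg.svd(A)
x_svd = np.dot(Vh.T, np.dot(np.diag(1 / S), np.dot(U.T, b)))
print(x_svd)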
from scipy.stats import ttest_1samp
ttest_1samp(data['VIQ'], 0)
Explanation: 1-sample t-test
scipy.stats.ttest_1samp() tests if the population mean of data is likely to be equal to a given value. Let's see if the VIQ of our population is equal to 0.
End of explanation
groupby_gender = data.groupby('Gender')
for gender, value in groupby_gender['VIQ']:
print((gender, value.mean()))
Explanation: With a p-value of $10^{-28}$ we can claim that the population mean for the IQ (VIQ measure) is not 0.
2-sample t-test
scipy.stats.ttest_ind() can compare two populations and check if the difference is significant or not. We can study if there is a difference in VIQ between Male and Female.
End of explanation
from scipy.stats import ttest_ind
female_viq = data[data['Gender'] == 'Female']['VIQ']
male_viq = data[data['Gender'] == 'Male']['VIQ']
ttest_ind(female_viq, male_viq)
Explanation: To see if this difference is significant, we can use scipy.stats.ttest_ind().
End of explanation
x = np.linspace(-5, 5, 20)
np.random.seed(1)
# normal distributed noise
y = -5 + 3 * x + 4 * np.random.normal(size=x.shape)
# Create a data frame containing all the relevant variables
data = pd.DataFrame({'x': x, 'y': y})
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>:
<ul>
<li>Test the difference between weights in males and females. You can fill the missing data using `pandas.fillna()` and using the mean weight of the population.</li>
<li>Use non parametric statistics to test the difference between VIQ in males and females (refer to `scipy.stats.mannwhitneyu`).</li>
</ul>
<br>
</div>
statsmodels
Given two sets of observations, x and y, we want to test the hypothesis that y is a linear function of x. In other terms:
$$
y = x \times coef + intercept + e
$$
where e is observation noise. We will use the statsmodels module to:
Fit a linear model. We will use the simplest strategy, ordinary least squares (OLS).
Test that coef is non zero.
End of explanation
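# A possible sketch for the statistics exercise above. The brain-size table is
# reloaded into its own variable because `data` now holds the x/y frame built for
# the OLS example.
from scipy.stats import mannwhitneyu
brain = pd.read_csv('data/brain_size.csv', sep=';', na_values=".")
brain['Weight'] = brain['Weight'].fillna(brain['Weight'].mean())
female_weight = brain[brain['Gender'] == 'Female']['Weight']
male_weight = brain[brain['Gender'] == 'Male']['Weight']
print(ttest_ind(female_weight, male_weight))
print(mannwhitneyu(brain[brain['Gender'] == 'Female']['VIQ'],
                   brain[brain['Gender'] == 'Male']['VIQ']))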
from statsmodels.formula.api import ols
model = ols("y ~ x + 1", data).fit()
Explanation: Then we specify an OLS model and fit it:
End of explanation
print(model.summary())
Explanation: We can inspect the various statistics derived from the fit:
End of explanation
from statsmodels.formula.api import ols
data = pd.read_csv('data/brain_size.csv', sep=';', na_values=".")
model = ols("VIQ ~ Gender + 1", data).fit()
print(model.summary())
Explanation: Intercept: We can remove the intercept using - 1 in the formula, or force the use of an intercept using + 1.
Let's see another example: can VIQ be predicted using Gender?
End of explanation |
7,062 | Given the following text description, write Python code to implement the functionality described.
Description:
Check if a Rook can reach the given destination in a single move
Function to check if it is possible to reach destination in a single move by a rook ; Given arrays
| Python Code:
def check(current_row , current_col , destination_row , destination_col ) :
if(current_row == destination_row ) :
return("POSSIBLE ")
elif(current_col == destination_col ) :
return("POSSIBLE ")
else :
return("NOT ▁ POSSIBLE ")
current_row = 8
current_col = 8
destination_row = 8
destination_col = 4
output = check(current_row , current_col , destination_row , destination_col )
print(output )
|
7,063 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SAP CAR
The following subsections show a graphical representation of the file format portions and how to generate them.
First we need to perform some setup to import the packet classes
Step1: SAPCAR Archive version 2.00
We first create a temporary file and compress it inside an archive file
Step2: The file is comprised of the following main structures
Step3: SAPCAR Entry Header
Step4: SAPCAR Data Block
Step5: SAPCAR Compressed Data
Step6: SAPCAR Archive version 2.01
Step7: The file is comprised of the following main structures
Step8: SAPCAR Entry Header
Step9: SAPCAR Data Block
Step10: SAPCAR Compressed data | Python Code:
from pysap.SAPCAR import *
from IPython.display import display
Explanation: SAP CAR
The following subsections show a graphical representation of the file format portions and how to generate them.
First we need to perform some setup to import the packet classes:
End of explanation
with open("some_file", "w") as fd:
fd.write("Some string to compress")
f0 = SAPCARArchive("archive_file.car", mode="wb", version=SAPCAR_VERSION_200)
f0.add_file("some_file")
Explanation: SAPCAR Archive version 2.00
We first create a temporary file and compress it inside an archive file:
End of explanation
f0._sapcar.canvas_dump()
Explanation: The file is comprised of the following main structures:
SAPCAR Archive Header
End of explanation
f0._sapcar.files0[0].canvas_dump()
Explanation: SAPCAR Entry Header
End of explanation
f0._sapcar.files0[0].blocks[0].canvas_dump()
Explanation: SAPCAR Data Block
End of explanation
f0._sapcar.files0[0].blocks[0].compressed.canvas_dump()
Explanation: SAPCAR Compressed Data
End of explanation
f1 = SAPCARArchive("archive_file.car", mode="wb", version=SAPCAR_VERSION_201)
f1.add_file("some_file")
Explanation: SAPCAR Archive version 2.01
End of explanation
f1._sapcar.canvas_dump()
Explanation: The file is comprised of the following main structures:
SAPCAR Archive Header
End of explanation
f1._sapcar.files1[0].canvas_dump()
Explanation: SAPCAR Entry Header
End of explanation
f1._sapcar.files1[0].blocks[0].canvas_dump()
Explanation: SAPCAR Data Block
End of explanation
f1._sapcar.files1[0].blocks[0].compressed.canvas_dump()
from os import remove
remove("some_file")
remove("archive_file.car")
Explanation: SAPCAR Compressed data
End of explanation |
7,064 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: Harmonic Minimization
Here we demonstrate some simple example code showing how we might find the inherent structure for some initially random configuration of particles. Note that this code will work on CPU, GPU, or TPU out of the box.
First thing we need to do is set some parameters that define our simulation, including what kind of box we're using (specified using a metric function and a wrapping function).
Step2: Next we need to generate some random positions as well as particle sizes.
Step3: Then we need to construct our FIRE minimization function. Like all simulations in JAX MD, the FIRE optimizer is two functions
Step4: Now let's actually do minimization, keeping track of the energy and particle positions as we go.
Step5: Let's plot the nearest distance for different species pairs. We see that particles on average have neighbors that are the right distance apart.
Step6: Now let's plot the system. It's nice and minimized!
Step7: If we want, we can visualize the entire minimization.
Step8: Finally, let's plot the energy trajectory that we observer during FIRE minimization. | Python Code:
#@title Imports & Utils
!pip install jax-md
import numpy as onp
import jax.numpy as np
from jax.config import config
config.update('jax_enable_x64', True)
from jax import random
from jax import jit
from jax_md import space, smap, energy, minimize, quantity, simulate
from jax_md.colab_tools import renderer
import matplotlib
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style(style='white')
def format_plot(x, y):
plt.grid(True)
plt.xlabel(x, fontsize=20)
plt.ylabel(y, fontsize=20)
def finalize_plot(shape=(1, 1)):
plt.gcf().set_size_inches(
shape[0] * 1.5 * plt.gcf().get_size_inches()[1],
shape[1] * 1.5 * plt.gcf().get_size_inches()[1])
plt.tight_layout()
Explanation: <a href="https://colab.research.google.com/github/google/jax-md/blob/main/notebooks/minimization.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
End of explanation
N = 1000
dimension = 2
box_size = quantity.box_size_at_number_density(N, 0.8, dimension)
displacement, shift = space.periodic(box_size)
Explanation: Harmonic Minimization
Here we demonstrate some simple example code showing how we might find the inherent structure for some initially random configuration of particles. Note that this code will work on CPU, GPU, or TPU out of the box.
First thing we need to do is set some parameters that define our simulation, including what kind of box we're using (specified using a metric function and a wrapping function).
End of explanation
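# Quick illustration (not part of the original notebook) of the two functions
# returned by space.periodic: `displacement` gives the minimum-image separation
# between two points, and `shift` moves a point and wraps it back into the box.
print(displacement(np.array([0.1, 0.1]), np.array([box_size - 0.1, 0.1])))  # ~[0.2, 0.], measured across the boundary
print(shift(np.array([box_size - 0.05, 0.5]), np.array([0.1, 0.0])))        # wraps to ~[0.05, 0.5]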
key = random.PRNGKey(0)
R = box_size * random.uniform(key, (N, dimension), dtype=np.float32)
# The system ought to be a 50:50 mixture of two types of particles, one
# large and one small.
sigma = np.array([[1.0, 1.2], [1.2, 1.4]])
N_2 = int(N / 2)
species = np.where(np.arange(N) < N_2, 0, 1)
Explanation: Next we need to generate some random positions as well as particle sizes.
End of explanation
energy_fn = energy.soft_sphere_pair(displacement, species=species, sigma=sigma)
fire_init, fire_apply = minimize.fire_descent(energy_fn, shift)
fire_apply = jit(fire_apply)
fire_state = fire_init(R)
Explanation: Then we need to construct our FIRE minimization function. Like all simulations in JAX MD, the FIRE optimizer is two functions: an init_fn that creates the state of the optimizer and an apply_fn that updates the state to a new state.
End of explanation
E = []
trajectory = []
for i in range(200):
fire_state = fire_apply(fire_state)
E += [energy_fn(fire_state.position)]
trajectory += [fire_state.position]
R = fire_state.position
trajectory = np.stack(trajectory)
Explanation: Now let's actually do minimization, keeping track of the energy and particle positions as we go.
End of explanation
metric = lambda R: space.distance(space.map_product(displacement)(R, R))
dr = metric(R)
plt.plot(np.min(dr[:N_2, :N_2] + 5 * np.eye(N_2, N_2), axis=0), 'o',
label='$\\sigma_{AA}$')
plt.plot(np.min(dr[:N_2, N_2:] + 5 * np.eye(N_2, N_2), axis=0), 'o',
label='$\\sigma_{AB}$')
plt.plot(np.min(dr[N_2:, N_2:] + 5 * np.eye(N_2, N_2), axis=0), 'o',
label='$\\sigma_{BB}$')
plt.legend()
format_plot('', 'min neighbor distance')
finalize_plot()
Explanation: Let's plot the nearest distance for different species pairs. We see that particles on average have neighbors that are the right distance apart.
End of explanation
ms = 45
R_plt = onp.array(fire_state.position)
plt.plot(R_plt[:N_2, 0], R_plt[:N_2, 1], 'o', markersize=ms * 0.5)
plt.plot(R_plt[N_2:, 0], R_plt[N_2:, 1], 'o', markersize=ms * 0.7)
plt.xlim([0, np.max(R[:, 0])])
plt.ylim([0, np.max(R[:, 1])])
plt.axis('off')
finalize_plot((2, 2))
Explanation: Now let's plot the system. It's nice and minimized!
End of explanation
diameter = np.where(species, 1.4, 1.0)
color = np.where(species[:, None],
np.array([[1.0, 0.5, 0.05]]),
np.array([[0.15, 0.45, 0.8]]))
renderer.render(box_size,
{ 'particles': renderer.Disk(trajectory, diameter, color)},
buffer_size=50)
Explanation: If we want, we can visualize the entire minimization.
End of explanation
plt.plot(E, linewidth=3)
format_plot('step', '$E$')
finalize_plot()
Explanation: Finally, let's plot the energy trajectory that we observe during FIRE minimization.
End of explanation |
7,065 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Experiments in making a finder chart generator for astroplan
Use astroquery's SkyView to get images of the field near a astroplan.FixedTarget.
Step2: Basic, default plot
Step3: Plot with my choice of colormap, reticle, with logarithmic colormap
Step4: Works for lots of wavelengths
Step5: Available surveys | Python Code:
import matplotlib.pyplot as plt
import numpy as np
from astroplan import FixedTarget
import astropy.units as u
from astropy.wcs import WCS
from astropy.coordinates import SkyCoord
from astropy.io import fits
from astroquery.skyview import SkyView
@u.quantity_input(fov_radius=u.deg)
def plot_finder_image(target, survey='DSS', fov_radius=10*u.arcmin,
log=False, ax=None, grid=False, reticle=False, style_kwargs=None):
Plot survey image centered on ``target`` with `~astroquery.skyview.SkyView`.
Survey images are retrieved from NASA Goddard's SkyView service, and
plotted using WCSAxes.
Parameters
----------
target : `~astroplan.FixedTarget`, `~astropy.coordinates.SkyCoord`
Coordinates of celestial object
survey : string
Name of survey to retrieve image from. For dictionary of
available surveys, use
`from astroquery.skyview import SkyView; print(SkyView.survey_dict)`
fov_radius : `~astropy.units.Quantity`
Radius of field of view of retrieved image. Defaults to 10 arcmin.
log : bool, optional
Take the natural logarithm of the FITS image if `True`.
False by default.
ax : `~matplotlib.axes.Axes` or None, optional.
The `~matplotlib.axes.Axes` object to be drawn on.
If None, uses the current `~matplotlib.axes.Axes`.
grid : bool, optional.
Grid is drawn if `True`. `False` by default.
reticle : bool, optional
Draw reticle on the center of the FOV if `True`. Default is `False`.
style_kwargs : dict or None, optional.
A dictionary of keywords passed into `~matplotlib.pyplot.imshow`
to set plotting styles.
coord = target if not hasattr(target, 'coord') else target.coord
position = coord.icrs
coordinates = 'icrs'
target_name = None if isinstance(target, SkyCoord) else target.name
hdu = SkyView.get_images(position=position, coordinates=coordinates,
survey=survey, radius=fov_radius, grid=grid)[0][0]
wcs = WCS(hdu.header)
# Set up axes & plot styles if needed.
if ax is None:
ax = plt.gca(projection=wcs)
if style_kwargs is None:
style_kwargs = {}
style_kwargs = dict(style_kwargs)
style_kwargs.setdefault('cmap', 'Greys')
style_kwargs.setdefault('origin', 'lower')
if log:
image_data = np.log(hdu.data)
else:
image_data = hdu.data
ax.imshow(image_data, **style_kwargs)
# Draw reticle
if reticle:
pixel_width = image_data.shape[0]
inner, outer = 0.03, 0.08
reticle_kwargs = dict(linewidth=2, color='m')
ax.axvline(x=0.5*pixel_width, ymin=0.5+inner, ymax=0.5+outer, **reticle_kwargs)
ax.axvline(x=0.5*pixel_width, ymin=0.5-inner, ymax=0.5-outer, **reticle_kwargs)
ax.axhline(y=0.5*pixel_width, xmin=0.5+inner, xmax=0.5+outer, **reticle_kwargs)
ax.axhline(y=0.5*pixel_width, xmin=0.5-inner, xmax=0.5-outer, **reticle_kwargs)
# Labels, title, grid
ax.set(xlabel='RA', ylabel='DEC')
if target_name is not None:
ax.set_title(target_name)
ax.grid(grid)
# Redraw the figure for interactive sessions.
ax.figure.canvas.draw()
return ax
Explanation: Experiments in making a finder chart generator for astroplan
Use astroquery's SkyView to get images of the field near an astroplan.FixedTarget.
End of explanation
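# A small usage note (a sketch): plot_finder_image returns a WCSAxes object, so a
# finder chart can be saved to disk like any other matplotlib figure.
ax = plot_finder_image(FixedTarget.from_name('Vega'))
ax.figure.savefig('vega_finder.png', dpi=150, bbox_inches='tight')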
%matplotlib inline
target = FixedTarget.from_name('HD 189733')
ax = plot_finder_image(target)
plt.show()
Explanation: Basic, default plot:
End of explanation
target = FixedTarget.from_name('Kepler-452')
ax = plot_finder_image(target, log=True, survey='DSS', fov_radius=8*u.arcmin, reticle=True,
style_kwargs={'cmap' : plt.cm.Greys_r})
plt.show()
Explanation: Plot with my choice of colormap, reticle, with logarithmic colormap:
End of explanation
target = FixedTarget.from_name('Sgr A*')
ax = plot_finder_image(target, log=True, survey='Fermi 5', fov_radius=10*u.deg, reticle=True,
style_kwargs={'cmap' : plt.cm.Blues_r})
plt.show()
Explanation: Works for lots of wavelengths:
End of explanation
SkyView.list_surveys()
Explanation: Available surveys:
End of explanation |
7,066 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Steady-state superradiance
We consider a system of $N$ two-level systems (TLSs) with identical frequency $\omega_{0}$, incoherently pumped at a rate $\gamma_\text{P}$ and de-excitating at a collective emission rate $\gamma_\text{CE}$,
\begin{eqnarray}
\dot{\rho} &=&
-i\lbrack \omega_{0}J_z,\rho \rbrack
+\frac{\gamma_\text{CE}}{2}\mathcal{L}_{J_{-}}[\rho]
+\frac{\gamma_\text{P}}{2}\sum_{n=1}^{N}\mathcal{L}_{J_{+,n}}[\rho]
\end{eqnarray}
This system can sustain superradiant light emission and line narrowing [1-3], whose peak intensity scales proportionally to $N^2$.
Step1: 1) Time evolution
We study the system of Eq. (1) (above) by using the Permutational Invariant Quantum Solver (PIQS) to build the Liouvillian of the system. Using QuTiP's $\texttt{mesolve}()$ we can calculate operators expectation values in time as well as higher order correlation functions [4,5].
System properties
Step2: Liouvillian and steady state $\rho_\text{ss}$
Step3: Time integration for $g^{(2)}(\tau)$ and $\langle J_{+}J_{-}\rangle (t)$
We define the $g^{(2)}(\tau)$ of the system as the two-time correlation function mapping the photonic degrees of freedom onto the TLS collective operators
\begin{eqnarray}
g^{(2)}(\tau) = \frac{\langle
Step4: Visualization
Step5: 2) Maximum of light emission as a function of $\frac{\gamma_\text{P}}{N\gamma_\text{CE}}$
We perform a study of the scaling of the steady state light emission of the system as a function of the pumping rate, normalized by the number of TLSs and the collective emission rate. The results show an optimal point for $\frac{\gamma_\text{P}}{N\gamma_\text{CE}}\simeq 1$.
Step6: Visualization
Step7: References
[1] D. Meiser and M.J. Holland, Phys. Rev. A 81, 033847 (2010)
[2] D. Meiser and M.J. Holland, Phys. Rev. A 81, 063827 (2010)
[3] J.G. Bohnet et al. Nature 484, 78 (2012)
[4] J.R. Johansson, P.D. Nation, and F. Nori, Comp. Phys. Comm. 183, 1760 (2012) http | Python Code:
import matplotlib.pyplot as plt
from qutip import *
from piqs import *
Explanation: Steady-state superradiance
We consider a system of $N$ two-level systems (TLSs) with identical frequency $\omega_{0}$, incoherently pumped at a rate $\gamma_\text{P}$ and de-excitating at a collective emission rate $\gamma_\text{CE}$,
\begin{eqnarray}
\dot{\rho} &=&
-i\lbrack \omega_{0}J_z,\rho \rbrack
+\frac{\gamma_\text{CE}}{2}\mathcal{L}_{J_{-}}[\rho]
+\frac{\gamma_\text{P}}{2}\sum_{n=1}^{N}\mathcal{L}_{J_{+,n}}[\rho]
\end{eqnarray}
This system can sustain superradiant light emission and line narrowing [1-3], whose peak intensity scales proportionally to $N^2$.
End of explanation
N = 10
system = Dicke(N = N)
[jx, jy, jz, jp, jm] = jspin(N)
w0 = 1
h0 = w0 * jz
gCE = 1
gP = N * gCE
system.hamiltonian = h0
system.collective_emission = gCE
system.pumping = gP
Explanation: 1) Time evolution
We study the system of Eq. (1) (above) by using the Permutational Invariant Quantum Solver (PIQS) to build the Liouvillian of the system. Using QuTiP's $\texttt{mesolve}()$ we can calculate operators expectation values in time as well as higher order correlation functions [4,5].
System properties
End of explanation
L = system.liouvillian()
rhoss = steadystate(L)
jpjm_ss = expect(jp*jm, rhoss)
Explanation: Liouvillian and steady state $\rho_\text{ss}$
End of explanation
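# Sanity check (a sketch, not in the original notebook): rhoss should have unit
# trace and be (numerically) annihilated by the Liouvillian.
print(rhoss.tr())                               # ~1
print((L * operator_to_vector(rhoss)).norm())   # ~0, i.e. rhoss is a steady state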
# time evolution parameters
nt = 1000
td = np.log(N)/(N*gCE)
tmax = 5 * td
t = np.linspace(0, tmax, nt)
# initial state
rho0= dicke(N, N/2, -N/2)
# calculate g2(tau)
A = jp*jm
rhoA = jm*rhoss*jp
#g2(tau)
result1 = mesolve(L, rhoA, t, [], e_ops = [A], options = Options(store_states=True))
g2t = result1.expect[0]
#rho(t)
result2 = mesolve(L, rho0, t, [], e_ops = A, options = Options(store_states=True))
rhot = result2.states
jpjmt = result2.expect[0]
Explanation: Time integration for $g^{(2)}(\tau)$ and $\langle J_{+}J_{-}\rangle (t)$
We define the $g^{(2)}(\tau)$ of the system as the two-time correlation function mapping the photonic degrees of freedom onto the TLS collective operators
\begin{eqnarray}
g^{(2)}(\tau) = \frac{\langle: a^\dagger(\tau) a^\dagger(0) a(\tau) a(0) :\rangle}{|\langle: a^\dagger(0) a(0) :\rangle|^2}= \frac{\langle: J_{+}(\tau) J_{+}(0) J_{-}(\tau) J_{-}(0) :\rangle}{|\langle J_{+}(0) J_{-}(0) \rangle|^2}
\end{eqnarray}
End of explanation
j2max = (0.5 * N + 1) * (0.5 * N)
plt.rc('text', usetex = True)
label_size = 20
plt.rc('xtick', labelsize=label_size)
plt.rc('ytick', labelsize=label_size)
fig1 = plt.figure()
plt.plot(t/td, g2t/jpjm_ss**2, '-')
plt.plot(t/td, 1+0*g2t, '--')
plt.xlabel(r'$\tau/t_\mathrm{D}$', fontsize = label_size)
plt.ylabel(r'$g^{(2)}(\tau)$', fontsize = label_size)
plt.xticks([0,(tmax/2)/td,tmax/td])
plt.show()
plt.close()
fig2 = plt.figure()
plt.plot(t/td, jpjmt/j2max, '-')
plt.xlabel(r'$t/t_\mathrm{D}$', fontsize = label_size)
plt.ylabel(r'$\langle J_{+}J_{-}\rangle (t)$', fontsize = label_size)
plt.xticks([0,(tmax/2)/td,tmax/td])
plt.title(r'Light emission', fontsize = label_size)
plt.show()
plt.close()
Explanation: Visualization
End of explanation
# Cycle on Coefficients
gCE = 1
gP0 = 1
gP_min_exp = -20
gP_max_exp = 20
gP_stepsize = 0.5
gP_list = np.arange(gP_min_exp, gP_max_exp+1, gP_stepsize)*0.1
gP_list_log = 10**(gP_list)
jpjmss_max_list = []
for i in gP_list_log:
gP = i*gP0
system = Dicke(hamiltonian = jz, N = N, pumping = gP, collective_emission = gCE)
liouv = system.liouvillian()
#steadystate
rho_ss = steadystate(liouv)
jpjm_ss = expect(jp*jm, rho_ss)
jpjmss_max_list.append(jpjm_ss)
Explanation: 2) Maximum of light emission as a function of $\frac{\gamma_\text{P}}{N\gamma_\text{CE}}$
We perform a study of the scaling of the steady state light emission of the system as a function of the pumping rate, normalized by the number of TLSs and the collective emission rate. The results show an optimal point for $\frac{\gamma_\text{P}}{N\gamma_\text{CE}}\simeq 1$.
End of explanation
intensity_max = float(N)*gCE/2*(float(N)*gCE/2+1)
normalized_intensity = np.array(jpjmss_max_list)/intensity_max
plt.semilogx(gP_list_log/(gCE*N), normalized_intensity, '-')
label_size = 20
plt.xlabel(r'${\gamma_\mathrm{P}}/\left({N\gamma_\mathrm{CE}}\right)$', fontsize = label_size)
plt.ylabel(r'$\langle J_{+}J_{-}\rangle_\mathrm{ss}$', fontsize = label_size)
plt.title(r'Steady-state light emission', fontsize = label_size)
plt.show()
plt.close()
Explanation: Visualization
End of explanation
qutip.about()
Explanation: References
[1] D. Meiser and M.J. Holland, Phys. Rev. A 81, 033847 (2010)
[2] D. Meiser and M.J. Holland, Phys. Rev. A 81, 063827 (2010)
[3] J.G. Bohnet et al. Nature 484, 78 (2012)
[4] J.R. Johansson, P.D. Nation, and F. Nori, Comp. Phys. Comm. 183, 1760 (2012) http://qutip.org
End of explanation |
7,067 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing
Step8: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step10: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
Step17: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step20: Encoding
Implement encoding_layer() to create a Encoder RNN layer
Step23: Decoding - Training
Create a training decoding layer
Step26: Decoding - Inference
Create inference decoder
Step29: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note
Step32: Build the Neural Network
Apply the functions you implemented above to
Step33: Neural Network Training
Hyperparameters
Tune the following parameters
Step35: Build the Graph
Build the graph using the neural network you implemented.
Step39: Batch and pad the source and target sequences
Step42: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step44: Save Parameters
Save the batch_size and save_path parameters for inference.
Step46: Checkpoint
Step49: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
Step51: Translate
This will translate translate_sentence from English to French. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
source_id_text = [[source_vocab_to_int[word] for word in line.split()] for line in source_text.split('\n')]
target_id_text = [[target_vocab_to_int[word] for word in line.split()] + [target_vocab_to_int['<EOS>']] for line in target_text.split('\n')]
return source_id_text, target_id_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
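# A tiny self-contained check of text_to_ids (a sketch using made-up vocabularies,
# not the project's real vocab_to_int dictionaries).
toy_src_vocab = {'new': 4, 'jersey': 5, 'is': 6, 'quiet': 7, '.': 8}
toy_tgt_vocab = {'<EOS>': 1, 'new': 4, 'jersey': 5, 'est': 6, 'calme': 7, '.': 8}
print(text_to_ids('new jersey is quiet .', 'new jersey est calme .',
                  toy_src_vocab, toy_tgt_vocab))
# expected: ([[4, 5, 6, 7, 8]], [[4, 5, 6, 7, 8, 1]])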
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
import problem_unittests as tests
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
input_data = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
lr = tf.placeholder(tf.float32, name='learning_rate')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
target_sequence_length = tf.placeholder(tf.int32, (None,), name='target_sequence_length')
max_target_sequence_length = tf.reduce_max(target_sequence_length, name='max_target_len')
source_sequence_length = tf.placeholder(tf.int32, (None,), name='source_sequence_length')
return input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for encoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
return tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_encoding_input(process_decoder_input)
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoder_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Target sequence length placeholder named "target_sequence_length" with rank 1
Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
Source sequence length placeholder named "source_sequence_length" with rank 1
Return the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)
Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.
End of explanation
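# A quick illustration (a sketch, not part of the project) of what
# process_decoder_input does: the last token of each target sequence is dropped and
# the <GO> id is prepended, i.e. the decoder input is the target shifted right.
with tf.Graph().as_default(), tf.Session() as demo_sess:
    demo_targets = tf.constant([[10, 11, 12, 3], [20, 21, 22, 3]])  # 3 stands in for <EOS>
    print(demo_sess.run(process_decoder_input(demo_targets, {'<GO>': 1}, 2)))
    # [[ 1 10 11 12]
    #  [ 1 20 21 22]]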
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
rnn_inputs = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size)
lstm = lambda: tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.LSTMCell(rnn_size), output_keep_prob=keep_prob)
return tf.nn.dynamic_rnn(tf.contrib.rnn.MultiRNNCell([lstm() for _ in range(num_layers)]), rnn_inputs, source_sequence_length, dtype=tf.float32)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create a Encoder RNN layer:
* Embed the encoder input using tf.contrib.layers.embed_sequence
* Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper
* Pass cell and embedded input to tf.nn.dynamic_rnn()
End of explanation
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
    training_helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, target_sequence_length)
    decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper, encoder_state, output_layer)
output = tf.contrib.seq2seq.dynamic_decode(decoder, maximum_iterations=max_summary_length)
return output[0]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create a training decoding layer:
* Create a tf.contrib.seq2seq.TrainingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param decoding_scope: TenorFlow Variable Scope for decoding
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size], name='start_tokens')
    inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id)
    decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, inference_helper, encoder_state, output_layer)
output = tf.contrib.seq2seq.dynamic_decode(decoder, maximum_iterations=max_target_sequence_length)
return output[0]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create inference decoder:
* Create a tf.contrib.seq2seq.GreedyEmbeddingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
lstm = lambda: tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.LSTMCell(rnn_size), output_keep_prob=keep_prob)
dec_cell = tf.contrib.rnn.MultiRNNCell([lstm() for _ in range(num_layers)])
output_layer = Dense(target_vocab_size,
kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1))
with tf.variable_scope('decoder'):
dec_train = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length,
max_target_sequence_length, output_layer, keep_prob)
with tf.variable_scope('decoder', reuse=True):
dec_infer = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'],
target_vocab_to_int['<EOS>'], max_target_sequence_length,
target_vocab_size, output_layer, batch_size, keep_prob)
return dec_train, dec_infer
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Decoder embedding size
:param dec_embedding_size: Encoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
_, encoder_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob, source_sequence_length,
source_vocab_size, enc_embedding_size)
dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
return decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sentence_length,
rnn_size, num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, dec_embedding_size)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).
Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.
Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.
End of explanation
# Number of Epochs
epochs = 3
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 256
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 300
decoding_embedding_size = 300
# Learning Rate
learning_rate = 0.01
# Dropout Keep Probability
keep_probability = 0.75
display_step = 20
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
Set display_step to state how many steps between each debug output statement
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
def pad_sentence_batch(sentence_batch, pad_int):
Pad sentences with <PAD> so that each sentence of a batch has the same length
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
Batch targets, sources, and the lengths of their sentences together
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
Explanation: Batch and pad the source and target sequences
End of explanation
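# A tiny illustration (a sketch) of the padding helper defined above: shorter
# sentences are right-padded with the <PAD> id so every sequence in a batch has the
# same length.
print(pad_sentence_batch([[1, 2], [3, 4, 5]], 0))   # [[1, 2, 0], [3, 4, 5]]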
DON'T MODIFY ANYTHING IN THIS CELL
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
return [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split()]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary to the <UNK> word id.
End of explanation
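As a quick illustration (assuming the vocabulary mappings loaded earlier in the notebook), the preprocessing turns a raw sentence into a list of word ids, with any out-of-vocabulary word mapped to the <UNK> id:
# Example usage of sentence_to_seq (the printed ids are hypothetical)
example_ids = sentence_to_seq('he saw a horse .', source_vocab_to_int)
print(example_ids)  # out-of-vocabulary words show up as the '<UNK>' id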
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation |
7,068 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Kubeflow pipelines
Learning Objectives
Step1: Setup a Kubeflow cluster on GCP
TODO 1
To deploy a Kubeflow cluster
in your GCP project, use the AI Platform pipelines
Step2: Create an experiment
TODO 2
We will start by creating a Kubeflow client to pilot the Kubeflow cluster
Step3: Let's look at the experiments that are running on this cluster. Since you just launched it, you should see only a single "Default" experiment
Step4: Now let's create a 'taxifare' experiment where we could look at all the various runs of our taxifare pipeline
Step5: Let's make sure the experiment has been created correctly
Step6: Packaging your code into Kubeflow components
We have packaged our taxifare ml pipeline into three components
Step7: Now that the container images are pushed to the registry in your project, we need to create yaml files describing to Kubeflow how to use these containers. It boils down essentially to
* describing what arguments Kubeflow needs to pass to the containers when it runs them
* telling Kubeflow where to fetch the corresponding Docker images
In the cells below, we have three of these "Kubeflow component description files", one for each of our components.
TODO 3
IMPORTANT
Step8: Create a Kubeflow pipeline
The code below creates a kubeflow pipeline by decorating a regular function with the
@dsl.pipeline decorator. Now the arguments of this decorated function will be
the input parameters of the Kubeflow pipeline.
Inside the function, we describe the pipeline by
* loading the yaml component files we created above into a Kubeflow op
* specifying the order into which the Kubeflow ops should be run
Step9: The pipeline function above is then used by the Kubeflow compiler to create a Kubeflow pipeline artifact that can be either uploaded to the Kubeflow cluster from the UI, or programmatically, as we will do below
Step10: If you untar and unzip this pipeline artifact, you'll see that the compiler has transformed the
Python description of the pipeline into yaml description!
Now let's feed Kubeflow with our pipeline and run it using our client | Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
pip freeze | grep kfp || pip install kfp
from os import path
import kfp
import kfp.compiler as compiler
import kfp.components as comp
import kfp.dsl as dsl
import kfp.gcp as gcp
import kfp.notebook
Explanation: Kubeflow pipelines
Learning Objectives:
1. Learn how to deploy a Kubeflow cluster on GCP
1. Learn how to create an experiment in Kubeflow
1. Learn how to package your code into a Kubeflow pipeline
1. Learn how to run a Kubeflow pipeline in a repeatable and traceable way
Introduction
In this notebook, we will first set up a Kubeflow cluster on GCP.
Then, we will create a Kubeflow experiment and a Kubeflow pipeline from our taxifare machine learning code. At last, we will run the pipeline on the Kubeflow cluster, providing us with a reproducible and traceable way to execute machine learning code.
End of explanation
HOST = "<KFP HOST>"
BUCKET = "<YOUR PROJECT>"
Explanation: Setup a Kubeflow cluster on GCP
TODO 1
To deploy a Kubeflow cluster
in your GCP project, use the AI Platform pipelines:
Go to AI Platform Pipelines in the GCP Console.
Create a new instance
Hit "Configure"
Check the box "Allow access to the following Cloud APIs"
Hit "Create New Cluster"
Hit "Deploy"
When the cluster is ready, go back to the AI Platform pipelines page and click on "SETTINGS" entry for your cluster.
This will bring up a pop up with code snippets on how to access the cluster
programmatically.
Copy the "host" entry and set the "HOST" variable below with that.
End of explanation
client = kfp.Client(host=HOST)
Explanation: Create an experiment
TODO 2
We will start by creating a Kubeflow client to pilot the Kubeflow cluster:
End of explanation
client.list_experiments()
Explanation: Let's look at the experiments that are running on this cluster. Since you just launched it, you should see only a single "Default" experiment:
End of explanation
exp = client.create_experiment(name='taxifare')
Explanation: Now let's create a 'taxifare' experiment where we could look at all the various runs of our taxifare pipeline:
End of explanation
client.list_experiments()
Explanation: Let's make sure the experiment has been created correctly:
End of explanation
# Builds the taxifare trainer container in case you skipped the optional part of lab 1
!taxifare/scripts/build.sh
# Pushes the taxifare trainer container to gcr/io
!taxifare/scripts/push.sh
# Builds the KF component containers and push them to gcr/io
!cd pipelines && make components
Explanation: Packaging your code into Kubeflow components
We have packaged our taxifare ml pipeline into three components:
* ./components/bq2gcs that creates the training and evaluation data from BigQuery and exports it to GCS
* ./components/trainjob that launches the training container on AI-platform and exports the model
* ./components/deploymodel that deploys the trained model to AI-platform as a REST API
Each of these components has been wrapped into a Docker container, in the same way we did with the taxifare training code in the previous lab.
If you inspect the code in these folders, you'll notice that the main.py or main.sh files contain the code we previously executed in the notebooks (loading the data to GCS from BQ, or launching a training job to AI-platform, etc.). The last line in the Dockerfile tells you that these files are executed when the container is run.
So we just packaged our ml code into light container images for reproducibility.
We have made it simple for you to build the container images and push them to the Google Cloud image registry gcr.io in your project:
End of explanation
%%writefile bq2gcs.yaml
name: bq2gcs
description: |
This component creates the training and
validation datasets as BiqQuery tables and export
them into a Google Cloud Storage bucket at
gs://<BUCKET>/taxifare/data.
inputs:
- {name: Input Bucket , type: String, description: 'GCS directory path.'}
implementation:
container:
image: gcr.io/<YOUR PROJECT>/taxifare-bq2gcs
args: ["--bucket", {inputValue: Input Bucket}]
%%writefile trainjob.yaml
name: trainjob
description: |
This component trains a model to predict that taxi fare in NY.
It takes as argument a GCS bucket and expects its training and
eval data to be at gs://<BUCKET>/taxifare/data/ and will export
the trained model at gs://<BUCKET>/taxifare/model/.
inputs:
- {name: Input Bucket , type: String, description: 'GCS directory path.'}
implementation:
container:
image: gcr.io/<YOUR PROJECT>/taxifare-trainjob
args: [{inputValue: Input Bucket}]
%%writefile deploymodel.yaml
name: deploymodel
description: |
This component deploys a trained taxifare model on GCP as taxifare:dnn.
It takes as argument a GCS bucket and expects the model to deploy
to be found at gs://<BUCKET>/taxifare/model/export/savedmodel/
inputs:
- {name: Input Bucket , type: String, description: 'GCS directory path.'}
implementation:
container:
image: gcr.io/<YOUR PROJECT>/taxifare-deploymodel
args: [{inputValue: Input Bucket}]
Explanation: Now that the container images are pushed to the registry in your project, we need to create yaml files describing to Kubeflow how to use these containers. It boils down essentially to
* describing what arguments Kubeflow needs to pass to the containers when it runs them
* telling Kubeflow where to fetch the corresponding Docker images
In the cells below, we have three of these "Kubeflow component description files", one for each of our components.
TODO 3
IMPORTANT: Modify the image URI in the cell
below to reflect that you pushed the images into the gcr.io associated with your project.
End of explanation
# TODO 3
PIPELINE_TAR = 'taxifare.tar.gz'
BQ2GCS_YAML = './bq2gcs.yaml'
TRAINJOB_YAML = './trainjob.yaml'
DEPLOYMODEL_YAML = './deploymodel.yaml'
@dsl.pipeline(
name='Taxifare',
description='Train a ml model to predict the taxi fare in NY')
def pipeline(gcs_bucket_name='<bucket where data and model will be exported>'):
bq2gcs_op = comp.load_component_from_file(BQ2GCS_YAML)
bq2gcs = bq2gcs_op(
input_bucket=gcs_bucket_name,
)
trainjob_op = comp.load_component_from_file(TRAINJOB_YAML)
trainjob = trainjob_op(
input_bucket=gcs_bucket_name,
)
deploymodel_op = comp.load_component_from_file(DEPLOYMODEL_YAML)
deploymodel = deploymodel_op(
input_bucket=gcs_bucket_name,
)
trainjob.after(bq2gcs)
deploymodel.after(trainjob)
Explanation: Create a Kubeflow pipeline
The code below creates a kubeflow pipeline by decorating a regular function with the
@dsl.pipeline decorator. Now the arguments of this decorated function will be
the input parameters of the Kubeflow pipeline.
Inside the function, we describe the pipeline by
* loading the yaml component files we created above into a Kubeflow op
* specifying the order into which the Kubeflow ops should be run
End of explanation
compiler.Compiler().compile(pipeline, PIPELINE_TAR)
ls $PIPELINE_TAR
Explanation: The pipeline function above is then used by the Kubeflow compiler to create a Kubeflow pipeline artifact that can be either uploaded to the Kubeflow cluster from the UI, or programmatically, as we will do below:
End of explanation
# TODO 4
run = client.run_pipeline(
experiment_id=exp.id,
job_name='taxifare',
pipeline_package_path='taxifare.tar.gz',
params={
'gcs_bucket_name': BUCKET,
},
)
Explanation: If you untar and unzip this pipeline artifact, you'll see that the compiler has transformed the
Python description of the pipeline into yaml description!
Now let's feed Kubeflow with our pipeline and run it using our client:
End of explanation |
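As a follow-up, you can also poll the run from the notebook instead of watching the UI. The snippet below is only a sketch that assumes a v1-style kfp SDK where the object returned by run_pipeline exposes an id attribute; adjust it to your SDK version.
# Optional sketch: block until the run finishes and inspect its status (v1-style kfp SDK assumed)
result = client.wait_for_run_completion(run.id, timeout=3600)
print(result.run.status)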
7,069 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analyzing Historical Rainfall in Palo Alto, CA with CHIRPS Data
In this Notebook we demonstrate how you can use Climate Hazards Group InfraRed Precipitation With Station (CHIRPS) data. CHIRPS is a 30+ year quasi-global rainfall dataset that enables comparing current rainfall patters with historical averages providing very accurate results. Palo Alto is used as an example location, but we encourage you try out other locations.
Palo Alto has a Mediterranean climate with cool, relatively wet winters and warm, dry summers. However, because the city is located next to the Santa Cruz Mountains that block the passage of rain-producing weather system, there is a so-called rain shadow in Palo Alto resulting in a very low average annual rainfall.
We will make several plots which show how precipitation can differ during one year and between different years. Note that this notebook is using Python 3.
First of all, we are importing some modules
Step1: <font color='red'>Please put your datahub API key into a file called APIKEY and place it to the notebook folder or assign your API key directly to the variable API_key!</font>
Here we define dataset_key and point of interest.
Step2: Now it's time to read the actual data as Pandas DataFrame. We add separate columns for 'year' and 'month' for later use. For reading the data, we use function get_data_from_point_API from module named get_data_from_point_API which is in the Notebook folder (the same folder where this script is located in GitHub). At the end of this box we also print out Pandas data structure so that you can see how it looks like.
Step3: Now that we have fetched the time-series of precipitation data, we can easily compute the following statistics
Step4: The following plot will show the number of days by year with no observed precipitation. We can see that the difference between the years is not significant.
Step5: In this plot we look at how many days had more than 20 mm precipitation in a year. In 1998, one of the strongest El Niño years in the recent history, is the clear winner. Daniel Swain also brought out the fact in his Weather West blog that California’s wettest years on record were 1982-1983 and 1997-1998, and they occurred during the strongest El Niño years. Those years clearly stand out in our demo as well.
Daniel also says that the common belief that El Niño always brings a lot of water to the Golden State is not particularly true. It does, however, increase the potential of more precipitation.
For example, the years 2015-2016 when the El Niño was very strong, don't stand out in this plot. The unusal precipitation pattern of 2016 was also discussed by Daniel Swain in his blogpost, because curiously, it was almost the opposite of what was expected based upon theoretical and empirical models for ENSO teleconnections.
Step6: The next plot is about annual total precipitation. Two from the three driest years in the whole period, 2013 and 2015, have been very recent. 1998 has been among the strongest and is exceeded only by the exceptionally large values of 1982 and 1983. Those, again, were strongest El Niño years we talked about above. The El Niño of 2015-2016 still doesn't stand out.
Step7: Daily maximum precipitation was on 1982. Again, this plot confirms results from previous plots.
Step8: The average annual cycle of precipitation shows that it mostly rains during the winter months and the summer is usually dry.
Step9: Finally, let's look at a histogram. As we saw from the previous plots, Palo Alto has very many completely dry days. From the histogram we can see that when it does rain, it rains a lot! Almost 350 days since 1981 there has been 8-16 mm/day and near 300 days there has been 16-32 mm/day. Duing 30 days of the entire period it has been raining even 64-128 mm/day. | Python Code:
%matplotlib notebook
import pandas as pd
import numpy
from po_data_process import get_data_from_point_API, make_histogram, make_plot
import warnings
import matplotlib.cbook
warnings.filterwarnings("ignore",category=matplotlib.cbook.mplDeprecation)
Explanation: Analyzing Historical Rainfall in Palo Alto, CA with CHIRPS Data
In this Notebook we demonstrate how you can use Climate Hazards Group InfraRed Precipitation With Station (CHIRPS) data. CHIRPS is a 30+ year quasi-global rainfall dataset that enables comparing current rainfall patterns with historical averages, providing very accurate results. Palo Alto is used as an example location, but we encourage you to try out other locations.
Palo Alto has a Mediterranean climate with cool, relatively wet winters and warm, dry summers. However, because the city is located next to the Santa Cruz Mountains that block the passage of rain-producing weather system, there is a so-called rain shadow in Palo Alto resulting in a very low average annual rainfall.
We will make several plots which show how precipitation can differ during one year and between different years. Note that this notebook is using Python 3.
First of all, we are importing some modules
End of explanation
API_key = open('APIKEY').read().strip()
dataset_key = 'chg_chirps_global_05'
#Palo Alto
latitude = 37.42
longitude = -122.17
Explanation: <font color='red'>Please put your datahub API key into a file called APIKEY and place it to the notebook folder or assign your API key directly to the variable API_key!</font>
Here we define dataset_key and point of interest.
End of explanation
data = get_data_from_point_API(dataset_key, longitude, latitude, API_key)
data['time'] = pd.to_datetime(data['time'])
data['year'] = data['time'].dt.year
data['month'] = data['time'].dt.month
data = data.loc[data['year'] < 2019]
print (data.keys())
Explanation: Now it's time to read the actual data as a Pandas DataFrame. We add separate columns for 'year' and 'month' for later use. For reading the data, we use the function get_data_from_point_API from the module po_data_process, which is in the Notebook folder (the same folder where this script is located in GitHub). At the end of this box we also print out the Pandas data structure so that you can see what it looks like.
End of explanation
print ('There have been ' + str(len(data.loc[data['precip'] == 0])) + ' completely dry days in Palo Alto of ' + str(len(data)) + ' total.')
print ('It means that ' + str(round((100. * len(data.loc[data['precip'] == 0]))/len(data),2)) + '% of the days since 1981 have been completely dry in Palo Alto.')
Explanation: Now that we have fetched the time-series of precipitation data, we can easily compute the following statistics:
1. Number of completely dry days since 1981
2. Number of completely dry days by year
3. Number of days with more than 20 mm precipitation in a year
4. Annual precipitation by year
5. Daily maximum precipitation by year
6. Average annual cycle of precipitation
7. Histogram
End of explanation
make_plot(data.loc[data['precip'] == 0].groupby('year').count()['precip'],dataset_key,'Completely dry days by year')
Explanation: The following plot will show the number of days by year with no observed precipitation. We can see that the difference between the years is not significant.
End of explanation
make_plot(data.loc[data['precip'] > 20].groupby('year').count()['precip'],dataset_key,'Number of days with more than 20 mm precipitation in a year')
Explanation: In this plot we look at how many days had more than 20 mm precipitation in a year. The year 1998, one of the strongest El Niño years in recent history, is the clear winner. Daniel Swain also pointed out in his Weather West blog that California's wettest years on record were 1982-1983 and 1997-1998, and they occurred during the strongest El Niño years. Those years clearly stand out in our demo as well.
Daniel also says that the common belief that El Niño always brings a lot of water to the Golden State is not particularly true. It does, however, increase the potential of more precipitation.
For example, the years 2015-2016, when the El Niño was very strong, don't stand out in this plot. The unusual precipitation pattern of 2016 was also discussed by Daniel Swain in his blog post, because curiously, it was almost the opposite of what was expected based upon theoretical and empirical models for ENSO teleconnections.
End of explanation
make_plot(data.groupby('year').sum()['precip'],dataset_key,'Annual precipitation by year')
Explanation: The next plot is about annual total precipitation. Two of the three driest years in the whole period, 2013 and 2015, have been very recent. 1998 has been among the strongest and is exceeded only by the exceptionally large values of 1982 and 1983. Those, again, were the strongest El Niño years we talked about above. The El Niño of 2015-2016 still doesn't stand out.
End of explanation
make_plot(data.groupby('year')['precip'].max(),dataset_key,'Daily maximum precipitation by year')
Explanation: Daily maximum precipitation was in 1982. Again, this plot confirms the results from the previous plots.
End of explanation
make_plot(data.groupby('month')['precip'].mean(),dataset_key, 'Average annual cycle of precipitation')
Explanation: The average annual cycle of precipitation shows that it mostly rains during the winter months and the summer is usually dry.
End of explanation
bins = [1,2,4,8,16,32,64,128]
make_histogram(data['precip'],bins)
Explanation: Finally, let's look at a histogram. As we saw from the previous plots, Palo Alto has very many completely dry days. From the histogram we can see that when it does rain, it rains a lot! On almost 350 days since 1981 there has been 8-16 mm/day, and on nearly 300 days there has been 16-32 mm/day. During 30 days of the entire period it has even been raining 64-128 mm/day.
End of explanation |
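As a quick cross-check of the El Niño years discussed above (a small sketch added here, using the same dataframe), you can list the wettest years directly:
# Sketch: five wettest years by total annual precipitation
print(data.groupby('year')['precip'].sum().nlargest(5))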
7,070 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
I downloaded a CSV of City of Chicago employee salary data,
which includes the names, titles, departments and salaries of Chicago employees. I was
interested to see whether men and women earn similar salaries for similar roles.
City data don't report the gender of employees, so I used an employee's first name
as a proxy, which is explained in more detail below.
Step1: To simplify the analysis, I restricted my attention to full-time employees with a salary.
Step2: To make grouping and matching first names easier, I extracted the first name from each
employee record and lower-cased it
Step3: Annual salary is represented as a string with a leading $ sign, which I converted to floats.
Step4: The first ten rows of the data set look like this now
Step5: Gender prediction
To estimate the gender of an employee based on his or her first name, I used a data set of
baby names. For each unique name, I counted
how many times, from years 1940 to 2016, that name had been given to a boy versus a girl. If
the name was more frequently given to boys, then I predicted the gender associated with the
name to be male, and vice-versa for female.
Step6: The above list of names and associated genders can be combined with the worker data to
predict the gender of each Chicago employee using a join
Step7: Analysis
I wanted to know whether men and women were paid equally if they shared the same job title and
department. To answer this, I specifically looked at full-time, salaried employees, and jobs
for which both men and women were employed under the same title and department.
For example, given the job title POLICE OFFICER in the POLICE department, a position for which
both men and women are employed, do male and female officers have similar salaries? More
generally, are men and women paid equally across all job titles and departments? | Python Code:
workers = pd.read_csv('Current_Employee_Names__Salaries__and_Position_Titles.csv')
Explanation: Introduction
I downloaded a CSV of City of Chicago employee salary data,
which includes the names, titles, departments and salaries of Chicago employees. I was
interested to see whether men and women earn similar salaries for similar roles.
City data don't report the gender of employees, so I used an employee's first name
as a proxy, which is explained in more detail below.
End of explanation
workers = workers[(workers['Salary or Hourly']=='Salary') & (workers['Full or Part-Time']=='F')]
Explanation: To simplify the analysis, I restricted my attention to full-time employees with a salary.
End of explanation
workers['Name'] = workers['Name'].apply(lambda s: s.split(',')[1].split()[0].lower())
Explanation: To make grouping and matching first names easier, I extracted the first name from each
employee record and lower-cased it:
End of explanation
workers['Annual Salary'] = workers['Annual Salary'].apply(lambda s: float(s.strip('$')))
Explanation: Annual salary is represented as a string with a leading $ sign, which I converted to floats.
End of explanation
workers.head(10)
Explanation: The first ten rows of the data set look like this now:
End of explanation
# Data are in seperate CSV files per year, and are concatenated here
name_data = []
for yob in range(1940, 2017):
df = pd.read_csv('names/yob' + str(yob) + '.txt',
header=0, names=['Name', 'Gender', 'Count'])
name_data.append(df)
names = pd.concat(name_data, axis=0)
# Lower-case first name so that it can be joined with the workers dataframe
names['Name'] = names['Name'].str.lower()
names.head(5)
# Count how often a name is given to boys and girls
gender_frequency = names.groupby(['Name', 'Gender']).sum().reset_index()
gender_frequency.sample(5)
def predict_gender(df):
max_idx = df['Count'].idxmax()
return df.loc[max_idx]
# Select the more frequent gender for each name
gender_guess = gender_frequency.groupby('Name').agg(predict_gender).reset_index()
gender_guess.sample(10)
Explanation: Gender prediction
To estimate the gender of an employee based on his or her first name, I used a data set of
baby names. For each unique name, I counted
how many times, from years 1940 to 2016, that name had been given to a boy versus a girl. If
the name was more frequently given to boys, then I predicted the gender associated with the
name to be male, and vice-versa for female.
End of explanation
workers = pd.merge(workers, gender_guess, on='Name', how='inner')
workers[['Name', 'Job Titles', 'Department', 'Gender', 'Annual Salary']].sample(10)
Explanation: The above list of names and associated genders can be combined with the worker data to
predict the gender of each Chicago employee using a join:
End of explanation
# Focus on these columns
workers = workers[['Job Titles', 'Department', 'Gender', 'Annual Salary']]
# Remove jobs for which only men or only women are employed
job_groups = workers[['Job Titles', 'Gender', 'Annual Salary']].groupby(['Job Titles'])
def male_and_female(grp):
return np.any(grp['Gender']=='M') and np.any(grp['Gender']=='F')
job_groups = job_groups.filter(male_and_female)
# Look at the maximum salary of each gender for each job title
job_group_maximums = job_groups.groupby(['Job Titles', 'Gender']).agg(np.max)
job_group_maximums.head(30)
higher_max_male_salary_count = 0
total_jobs = 0
for job_title, df in job_group_maximums.groupby(level=0):
assert len(df) == 2
if df.loc[(job_title, 'M')][0] > df.loc[(job_title, 'F')][0]:
higher_max_male_salary_count += 1
total_jobs += 1
higher_max_male_salary_percentage = 100 * higher_max_male_salary_count / total_jobs
higher_max_male_salary_percentage
ax = sns.stripplot(x="Gender", y="Annual Salary", data=job_group_maximums.reset_index(), jitter=True)
plt.show()
Explanation: Analysis
I wanted to know whether men and women were paid equally if they shared the same job title and
department. To answer this, I specifically looked at full-time, salaried employees, and jobs
for which both men and women were employed under the same title and department.
For example, given the job title POLICE OFFICER in the POLICE department, a position for which
both men and women are employed, do male and female officers have similar salaries? More
generally, are men and women paid equally across all job titles and departments?
End of explanation |
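The plot above compares maximum salaries only. One simple way to extend the check (a sketch, not part of the original analysis) is to compare a more robust statistic, such as the median salary per job title and gender:
# Sketch: median salary by job title and gender for the same filtered jobs
median_by_gender = job_groups.groupby(['Job Titles', 'Gender'])['Annual Salary'].median().unstack('Gender')
median_by_gender['gap'] = median_by_gender['M'] - median_by_gender['F']
median_by_gender.sort_values('gap', ascending=False).head(10)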
7,071 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Churn prediction
Evasão escolar
Churn prediction é um tipo de trabalho muito comum em data science, sendo uma questão de classificação binária. Trada-se do possível abandono de um cliente ou de um aluno. Analisamos o dataset e tentamos prever as situações de risco de abandono, para tomarmos medidas proativas de fidelização.
Para este exemplo, usamos um dataset real, porém desidentificado, de uma pesquisa que realizei há algum tempo, a pedido de uma instituição de ensino, para identificar os alunos com maior probabilidade de abandorarem o curso.
Da população de alunos, foram eliminados os que possuem percentual de bolsa de estudos igual ou superior a 50%, pois trata-se de situações especiais. Os dados são coletados semanalmente, a partir dos resultados das primeiras provas de cada período.
Step1: Separação dos dados
Precisamos separar os dados de teste dos dados de treino, virtualmente esquecendo que os dados de testes existem!
Step2: Padronização dos atributos
Como vamos usar SVM, precisamos colocar os atributos numéricos na mesma escala, e codificar os atributos de categoria. Temos um atributo de categoria
Step3: Kernel linear
Step4: Kernel RBF com C=2 e sem gamma
Step5: Kernel RBF com C=1 e gamma=10
Step6: Kernel Poly com C=2 e gamma=10
Step7: Kernel Sigmoid com C=2 e gamma=100
Step8: Nem sempre o modelo que tem melhor score é o modelo que dá a melhor previsão para dados ainda não vistos. Pode ser o caso de "overfitting". Vamos testar vários tipos de kernel e combinações de parâmetros.
Step9: A melhor maneira de testar é comparar os resultados um a um.
Step10: Este foi o melhor resultado que conseguimos. | Python Code:
import pandas as pd
import numpy as np
from sklearn import svm, datasets
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn import svm
df = pd.read_csv('evasao.csv')
df.head()
df.describe()
features = df[['periodo','bolsa','repetiu','ematraso','disciplinas','faltas']]
labels = df[['abandonou']]
features.head()
labels.head()
Explanation: Churn prediction
Evasão escolar
Churn prediction é um tipo de trabalho muito comum em data science, sendo uma questão de classificação binária. Trada-se do possível abandono de um cliente ou de um aluno. Analisamos o dataset e tentamos prever as situações de risco de abandono, para tomarmos medidas proativas de fidelização.
Para este exemplo, usamos um dataset real, porém desidentificado, de uma pesquisa que realizei há algum tempo, a pedido de uma instituição de ensino, para identificar os alunos com maior probabilidade de abandorarem o curso.
Da população de alunos, foram eliminados os que possuem percentual de bolsa de estudos igual ou superior a 50%, pois trata-se de situações especiais. Os dados são coletados semanalmente, a partir dos resultados das primeiras provas de cada período.
End of explanation
X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.33, random_state=42)
Explanation: Separação dos dados
Precisamos separar os dados de teste dos dados de treino, virtualmente esquecendo que os dados de testes existem!
End of explanation
padronizador = StandardScaler().fit(X_train[['periodo', 'bolsa', 'repetiu', 'disciplinas', 'faltas']])
X_train_1 = pd.DataFrame(padronizador.transform(X_train[['periodo', 'bolsa', 'repetiu', 'disciplinas', 'faltas']]))
X_train_scaled = pd.DataFrame(X_train_1)
X_train_scaled = X_train_scaled.assign(e = X_train['ematraso'].values)
X_train_scaled.head()
Explanation: Padronização dos atributos
Como vamos usar SVM, precisamos colocar os atributos numéricos na mesma escala, e codificar os atributos de categoria. Temos um atributo de categoria: 'ematraso' e ele possui apenas dois valores: zero e um, logo, já está codificado. Se fossem múltiplos valores, teríamos que usar algo como o OneHotEncoder para transformá-lo em variáveis binárias.
End of explanation
modeloLinear = svm.SVC(kernel='linear')
modeloLinear.fit(X_train_scaled.values, y_train.values.reshape(201,))
modeloLinear.score(X_train_scaled.values, y_train.values.reshape(201,))
Explanation: Kernel linear
End of explanation
modeloRbf = svm.SVC(kernel='rbf',C=2)
modeloRbf.fit(X_train_scaled.values, y_train.values.reshape(y_train.size))
modeloRbf.score(X_train_scaled.values, y_train.values.reshape(y_train.size))
Explanation: Kernel RBF com C=2 e sem gamma
End of explanation
modeloRbfg10 = svm.SVC(kernel='rbf',C=1,gamma=10)
modeloRbfg10.fit(X_train_scaled.values, y_train.values.reshape(y_train.size))
modeloRbfg10.score(X_train_scaled.values, y_train.values.reshape(y_train.size))
Explanation: Kernel RBF com C=1 e gamma=10
End of explanation
modeloPoly = svm.SVC(kernel='poly',C=2,gamma=10)
modeloPoly.fit(X_train_scaled.values, y_train.values.reshape(y_train.size))
modeloPoly.score(X_train_scaled.values, y_train.values.reshape(y_train.size))
Explanation: Kernel Poly com C=2 e gamma=10
End of explanation
modeloSig = svm.SVC(kernel='sigmoid',C=2,gamma=100)
modeloSig.fit(X_train_scaled.values, y_train.values.reshape(y_train.size))
modeloSig.score(X_train_scaled.values, y_train.values.reshape(y_train.size))
Explanation: Kernel Sigmoid com C=2 e gamma=100
End of explanation
X_test_1 = padronizador.transform(X_test[['periodo', 'bolsa', 'repetiu', 'disciplinas', 'faltas']])
X_test_1 = pd.DataFrame(padronizador.transform(X_test[['periodo', 'bolsa', 'repetiu', 'disciplinas', 'faltas']]))
X_test_scaled = pd.DataFrame(X_test_1)
X_test_scaled = X_test_scaled.assign(e = X_test['ematraso'].values)
X_test_scaled.head()
Explanation: Nem sempre o modelo que tem melhor score é o modelo que dá a melhor previsão para dados ainda não vistos. Pode ser o caso de "overfitting". Vamos testar vários tipos de kernel e combinações de parâmetros.
End of explanation
predicoes = modeloRbfg10.predict(X_test_scaled)
printResults(predicoes)
predicoesGamma1 = modeloRbf.predict(X_test_scaled)
printResults(predicoesGamma1)
Explanation: A melhor maneira de testar é comparar os resultados um a um.
End of explanation
predicoesPoly = modeloPoly.predict(X_test_scaled)
printResults(predicoesPoly)
predicoesSig = modeloSig.predict(X_test_scaled)
printResults(predicoesSig)
def printResults(pr):
acertos = 0
errosAbandono = 0
errosPermanencia = 0
for n in range(0,len(pr)):
if pr[n] == y_test.values.flatten()[n]:
acertos = acertos + 1
else:
if pr[n] == 0:
errosAbandono = errosAbandono + 1
else:
errosPermanencia = errosPermanencia + 1
print('Acertos',acertos)
print('Percentual',acertos / len(pr))
print('Erramos ao dizer que o aluno abandonou', errosAbandono)
print('Erramos ao dizer que o aluno permaneceu', errosPermanencia)
Explanation: Este foi o melhor resultado que conseguimos.
End of explanation |
7,072 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Time Series Exercise - Solutions
Follow along with the instructions in bold. Watch the solutions video if you get stuck!
The Data
Source
Step1: Use pandas to read the csv of the monthly-milk-production.csv file and set index_col='Month'
Step2: Check out the head of the dataframe
Step3: Make the index a time series by using
Step4: Plot out the time series data.
Step5: Train Test Split
Let's attempt to predict a year's worth of data. (12 months or 12 steps into the future)
Create a test train split using indexing (hint
Step6: Scale the Data
Use sklearn.preprocessing to scale the data using the MinMaxScaler. Remember to only fit_transform on the training data, then transform the test data. You shouldn't fit on the test data as well, otherwise you are assuming you would know about future behavior!
Step8: Batch Function
We'll need a function that can feed batches of the training data. We'll need to do several things that are listed out as steps in the comments of the function. Remember to reference the previous batch method from the lecture for hints. Try to fill out the function template below, this is a pretty hard step, so feel free to reference the solutions!
Step9: Setting Up The RNN Model
Import TensorFlow
Step10: The Constants
Define the constants in a single cell. You'll need the following (in parenthesis are the values I used in my solution, but you can play with some of these)
Step11: Create Placeholders for X and y. (You can change the variable names if you want). The shape for these placeholders should be [None,num_time_steps-1,num_inputs] and [None, num_time_steps-1, num_outputs] The reason we use num_time_steps-1 is because each of these will be one step shorter than the original time steps size, because we are training the RNN network to predict one point into the future based on the input sequence.
Step12: Now create the RNN Layer, you have complete freedom over this, use tf.contrib.rnn and choose anything you want, OutputProjectionWrappers, BasicRNNCells, BasicLSTMCells, MultiRNNCell, GRUCell etc... Keep in mind not every combination will work well! (If in doubt, the solutions used an Outputprojection Wrapper around a basic LSTM cell with relu activation.
Step13: Now pass in the cells variable into tf.nn.dynamic_rnn, along with your first placeholder (X)
Step14: Loss Function and Optimizer
Create a Mean Squared Error Loss Function and use it to minimize an AdamOptimizer, remember to pass in your learning rate.
Step15: Initialize the global variables
Step16: Create an instance of tf.train.Saver()
Step17: Session
Run a tf.Session that trains on the batches created by your next_batch function. Also add an a loss evaluation for every 100 training iterations. Remember to save your model after you are done training.
Step18: Predicting Future (Test Data)
Show the test_set (the last 12 months of your original complete data set)
Step19: Now we want to attempt to predict these 12 months of data, using only the training data we had. To do this we will feed in a seed training_instance of the last 12 months of the training_set of data to predict 12 months into the future. Then we will be able to compare our generated 12 months to our actual true historical values from the test set!
Generative Session
NOTE
Step20: Show the result of the predictions.
Step21: Grab the portion of the results that are the generated values and apply inverse_transform on them to turn them back into milk production value units (lbs per cow). Also reshape the results to be (12,1) so we can easily add them to the test_set dataframe.
Step22: Create a new column on the test_set called "Generated" and set it equal to the generated results. You may get a warning about this, feel free to ignore it.
Step23: View the test_set dataframe.
Step24: Plot out the two columns for comparison. | Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Time Series Exercise - Solutions
Follow along with the instructions in bold. Watch the solutions video if you get stuck!
The Data
Source: https://datamarket.com/data/set/22ox/monthly-milk-production-pounds-per-cow-jan-62-dec-75#!ds=22ox&display=line
Monthly milk production: pounds per cow. Jan 62 - Dec 75
Import numpy pandas and matplotlib
End of explanation
milk = pd.read_csv('monthly-milk-production.csv',index_col='Month')
Explanation: Use pandas to read the csv of the monthly-milk-production.csv file and set index_col='Month'
End of explanation
milk.head()
Explanation: Check out the head of the dataframe
End of explanation
milk.index = pd.to_datetime(milk.index)
Explanation: Make the index a time series by using:
milk.index = pd.to_datetime(milk.index)
End of explanation
milk.plot()
Explanation: Plot out the time series data.
End of explanation
milk.info()
train_set = milk.head(156)
test_set = milk.tail(12)
Explanation: Train Test Split
Let's attempt to predict a year's worth of data. (12 months or 12 steps into the future)
Create a test train split using indexing (hint: use .head() or tail() or .iloc[]). We don't want a random train test split, we want to specify that the test set is the last 3 months of data is the test set, with everything before it is the training.
End of explanation
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
train_scaled = scaler.fit_transform(train_set)
test_scaled = scaler.transform(test_set)
Explanation: Scale the Data
Use sklearn.preprocessing to scale the data using the MinMaxScaler. Remember to only fit_transform on the training data, then transform the test data. You shouldn't fit on the test data as well, otherwise you are assuming you would know about future behavior!
End of explanation
def next_batch(training_data,batch_size,steps):
INPUT: Data, Batch Size, Time Steps per batch
OUTPUT: A tuple of y time series results. y[:,:-1] and y[:,1:]
# STEP 1: Use np.random.randint to set a random starting point index for the batch.
# Remember that each batch needs have the same number of steps in it.
# This means you should limit the starting point to len(data)-steps
# STEP 2: Now that you have a starting index you'll need to index the data from
# the random start to random start + steps. Then reshape this data to be (1,steps)
# STEP 3: Return the batches. You'll have two batches to return y[:,:-1] and y[:,1:]
# You'll need to reshape these into tensors for the RNN. Depending on your indexing it
# will be either .reshape(-1,steps-1,1) or .reshape(-1,steps,1)
def next_batch(training_data,batch_size,steps):
# Grab a random starting point for each batch
rand_start = np.random.randint(0,len(training_data)-steps)
# Create Y data for time series in the batches
y_batch = np.array(training_data[rand_start:rand_start+steps+1]).reshape(1,steps+1)
return y_batch[:, :-1].reshape(-1, steps, 1), y_batch[:, 1:].reshape(-1, steps, 1)
Explanation: Batch Function
We'll need a function that can feed batches of the training data. We'll need to do several things that are listed out as steps in the comments of the function. Remember to reference the previous batch method from the lecture for hints. Try to fill out the function template below, this is a pretty hard step, so feel free to reference the solutions!
End of explanation
import tensorflow as tf
Explanation: Setting Up The RNN Model
Import TensorFlow
End of explanation
# Just one feature, the time series
num_inputs = 1
# Num of steps in each batch
num_time_steps = 12
# 100 neuron layer, play with this
num_neurons = 100
# Just one output, predicted time series
num_outputs = 1
## You can also try increasing iterations, but decreasing learning rate
# learning rate you can play with this
learning_rate = 0.03
# how many iterations to go through (training steps), you can play with this
num_train_iterations = 4000
# Size of the batch of data
batch_size = 1
Explanation: The Constants
Define the constants in a single cell. You'll need the following (in parenthesis are the values I used in my solution, but you can play with some of these):
* Number of Inputs (1)
* Number of Time Steps (12)
* Number of Neurons per Layer (100)
* Number of Outputs (1)
* Learning Rate (0.003)
* Number of Iterations for Training (4000)
* Batch Size (1)
End of explanation
X = tf.placeholder(tf.float32, [None, num_time_steps, num_inputs])
y = tf.placeholder(tf.float32, [None, num_time_steps, num_outputs])
Explanation: Create Placeholders for X and y. (You can change the variable names if you want). The shape for these placeholders should be [None,num_time_steps-1,num_inputs] and [None, num_time_steps-1, num_outputs] The reason we use num_time_steps-1 is because each of these will be one step shorter than the original time steps size, because we are training the RNN network to predict one point into the future based on the input sequence.
End of explanation
# Also play around with GRUCell
cell = tf.contrib.rnn.OutputProjectionWrapper(
tf.contrib.rnn.BasicLSTMCell(num_units=num_neurons, activation=tf.nn.relu),
output_size=num_outputs)
Explanation: Now create the RNN Layer, you have complete freedom over this, use tf.contrib.rnn and choose anything you want, OutputProjectionWrappers, BasicRNNCells, BasicLSTMCells, MultiRNNCell, GRUCell etc... Keep in mind not every combination will work well! (If in doubt, the solutions used an Outputprojection Wrapper around a basic LSTM cell with relu activation.
End of explanation
outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
Explanation: Now pass in the cells variable into tf.nn.dynamic_rnn, along with your first placeholder (X)
End of explanation
loss = tf.reduce_mean(tf.square(outputs - y)) # MSE
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train = optimizer.minimize(loss)
Explanation: Loss Function and Optimizer
Create a Mean Squared Error Loss Function and use it to minimize an AdamOptimizer, remember to pass in your learning rate.
End of explanation
init = tf.global_variables_initializer()
Explanation: Initialize the global variables
End of explanation
saver = tf.train.Saver()
Explanation: Create an instance of tf.train.Saver()
End of explanation
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.9)
with tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) as sess:
sess.run(init)
for iteration in range(num_train_iterations):
X_batch, y_batch = next_batch(train_scaled,batch_size,num_time_steps)
sess.run(train, feed_dict={X: X_batch, y: y_batch})
if iteration % 100 == 0:
mse = loss.eval(feed_dict={X: X_batch, y: y_batch})
print(iteration, "\tMSE:", mse)
# Save Model for Later
saver.save(sess, "./ex_time_series_model")
Explanation: Session
Run a tf.Session that trains on the batches created by your next_batch function. Also add an a loss evaluation for every 100 training iterations. Remember to save your model after you are done training.
End of explanation
test_set
Explanation: Predicting Future (Test Data)
Show the test_set (the last 12 months of your original complete data set)
End of explanation
with tf.Session() as sess:
# Use your Saver instance to restore your saved rnn time series model
saver.restore(sess, "./ex_time_series_model")
# Create a numpy array for your generative seed from the last 12 months of the
# training set data. Hint: Just use tail(12) and then pass it to an np.array
train_seed = list(train_scaled[-12:])
## Now create a for loop that
for iteration in range(12):
X_batch = np.array(train_seed[-num_time_steps:]).reshape(1, num_time_steps, 1)
y_pred = sess.run(outputs, feed_dict={X: X_batch})
train_seed.append(y_pred[0, -1, 0])
Explanation: Now we want to attempt to predict these 12 months of data, using only the training data we had. To do this we will feed in a seed training_instance of the last 12 months of the training_set of data to predict 12 months into the future. Then we will be able to compare our generated 12 months to our actual true historical values from the test set!
Generative Session
NOTE: Recall that our model is really only trained to predict 1 time step ahead, asking it to generate 12 steps is a big ask, and technically not what it was trained to do! Think of this more as generating new values based off some previous pattern, rather than trying to directly predict the future. You would need to go back to the original model and train the model to predict 12 time steps ahead to really get a higher accuracy on the test data. (Which has its limits due to the smaller size of our data set)
Fill out the session code below to generate 12 months of data based off the last 12 months of data from the training set. The hardest part about this is adjusting the arrays with their shapes and sizes. Reference the lecture for hints.
End of explanation
train_seed
Explanation: Show the result of the predictions.
End of explanation
results = scaler.inverse_transform(np.array(train_seed[12:]).reshape(12,1))
Explanation: Grab the portion of the results that are the generated values and apply inverse_transform on them to turn them back into milk production value units (lbs per cow). Also reshape the results to be (12,1) so we can easily add them to the test_set dataframe.
End of explanation
test_set['Generated'] = results
Explanation: Create a new column on the test_set called "Generated" and set it equal to the generated results. You may get a warning about this, feel free to ignore it.
End of explanation
test_set
Explanation: View the test_set dataframe.
End of explanation
test_set.plot()
Explanation: Plot out the two columns for comparison.
End of explanation |
7,073 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mass matrix diagonalization (lumping)
Step3: Elemental mass matrices
Step4: The elemental mass matrices look like
Step6: Lumping
One method for lumping is to sum the matrix per rows, i.e.
$$M^\text{(lumped)}{ii}= \sum{j} M_{ij}$$
Step9: One method for lumping is to sum the matrix per rows, i.e.
$$M^\text{(lumped)}{ii} = c M{ii}$$
with $c$ adjusted to satisfy $\sum_j M^\text{(lumped)}{jj} = \int\Omega \rho d\Omega$. Particularly, we can choose $c = Tr(M)/M_\text{total}$.
Step10: We can compare the methods for the tetrahedron
Step11: We can compare the methods for the serendipity quadrilaterals. For this type
of element we can't use the row lumping method since it leads to negative
masses. | Python Code:
from sympy import *
init_session()
Explanation: Mass matrix diagonalization (lumping)
End of explanation
def mass_tet4():
Mass matrix for a 4 node tetrahedron
r, s, t = symbols("r s t")
N = Matrix([1 - r - s - t, r, s, t])
return (N * N.T).integrate((t, 0, 1 - r - s), (s, 0, 1 - r), (r, 0, 1))
def mass_quad8():
Mass matrix for a 8 node quadrilateral
r, s = symbols("r s")
Haux = Matrix([
(1 - r**2)*(1 + s),
(1 - s**2)*(1 - r),
(1 - r**2)*(1 - s),
(1 - s**2)*(1 + r)])
N = S(1)/4*Matrix([
(1 + r)*(1 + s) - Haux[0] - Haux[3],
(1 - r)*(1 + s) - Haux[0] - Haux[1],
(1 - r)*(1 - s) - Haux[1] - Haux[2],
(1 + r)*(1 - s) - Haux[2] - Haux[3],
2*Haux[0], 2*Haux[1], 2*Haux[2], 2*Haux[3]])
return (N * N.T).integrate((s, -1, 1), (r, -1, 1))
Explanation: Elemental mass matrices
End of explanation
mass_tet4()
mass_quad8()
Explanation: The elemental mass matrices look like
End of explanation
def row_lump(mass_mat):
Matrix lumping by row summing
return diag(*[sum(mass_mat[i, :]) for i in range(mass_mat.shape[0])])
Explanation: Lumping
One method for lumping is to sum the matrix per rows, i.e.
$$M^\text{(lumped)}{ii}= \sum{j} M_{ij}$$
End of explanation
def diag_scaling_lump(mass_mat):
Matrix lumping by diagonal scaling method
mass = sum(mass_mat)
trace = mass_mat.trace()
c = mass/trace
return diag(*[c*mass_mat[i, i] for i in range(mass_mat.shape[0])])
def min_dist_lump(mass_mat):
Matrix lumping by minimizing the Frobenius norm subject
to a constraint of conservation of mass.
num = mass_mat.shape[0]
mass = sum(mass_mat)
lamda = symbols("lambda")
Ms = symbols('M0:%d'%num)
var = list(Ms)
mass_diag = diag(*var)
C = mass_mat - mass_diag
fun = (C.T*C).trace() + lamda*(mass - sum(mass_diag))
var.append(lamda)
grad = [diff(fun, x) for x in var]
sol = solve(grad, var)
return diag(*list(sol.values())[:-1])
Explanation: One method for lumping is to sum the matrix per rows, i.e.
$$M^\text{(lumped)}{ii} = c M{ii}$$
with $c$ adjusted to satisfy $\sum_j M^\text{(lumped)}{jj} = \int\Omega \rho d\Omega$. Particularly, we can choose $c = Tr(M)/M_\text{total}$.
End of explanation
row_lump(mass_tet4())
diag_scaling_lump(mass_tet4())
min_dist_lump(mass_tet4())
Explanation: We can compare the methods for the tetrahedron
End of explanation
row_lump(mass_quad8())
diag_scaling_lump(mass_quad8())
min_dist_lump(mass_quad8())
from IPython.core.display import HTML
def css_styling():
styles = open('./styles/custom_barba.css', 'r').read()
return HTML(styles)
css_styling()
Explanation: We can compare the methods for the serendipity quadrilaterals. For this type
of element we can't use the row lumping method since it leads to negative
masses.
End of explanation |
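As a usage note (a small sketch added here, not part of the original notebook), each lumping scheme above is meant to preserve the total "mass" of the element, which is easy to verify with sympy:
# Sketch: the lumped matrices should preserve the total element mass
M = mass_quad8()
print(simplify(sum(M) - sum(diag_scaling_lump(M))))   # expected: 0
print(simplify(sum(M) - sum(min_dist_lump(M))))       # expected: 0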
7,074 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Flourinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Flourinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Reprenstation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'hammoz-consortium', 'sandbox-2', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: HAMMOZ-CONSORTIUM
Source ID: SANDBOX-2
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:03
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
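For illustration only, a filled-in call follows the same pattern as the template above; the name and email below are hypothetical placeholders, not actual document authors, so the line stays commented out.
# Hypothetical placeholder only -- replace with the real author before use:
# DOC.set_author("Jane Doe", "jane.doe@example.org")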
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
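For reference, a filled-in answer for a free-text (STRING) property uses the same set_id/set_value pattern as the template above. The value below is a hypothetical placeholder, not the actual model description, so it is kept commented out in the document's own style:
# Illustrative placeholder only -- uncomment and replace the string with the real overview:
# DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# DOC.set_value("Hypothetical placeholder overview of the atmosphere component.")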
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
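For a single-valued ENUM such as this one, the answer must be one of the valid choices listed in the template. A hypothetical illustration (not a statement about the actual model), left commented out:
# Illustrative only -- "AGCM" is one of the valid choices listed above:
# DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# DOC.set_value("AGCM")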
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, e.g. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
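INTEGER properties take an unquoted number. A hypothetical illustration, commented out because the level count below is only a placeholder:
# Illustrative placeholder only -- replace 47 with the model's actual level count:
# DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# DOC.set_value(47)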
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
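BOOLEAN properties take the Python literals True or False. A hypothetical illustration, again left commented out since the value is a placeholder:
# Illustrative only -- set True only if the model top is above the stratopause:
# DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# DOC.set_value(True)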
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified, describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Reprenstation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
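For a property with cardinality 1.N such as this one, the template comments suggest recording each selected choice; presumably one DOC.set_value call is made per chosen process, as in the hypothetical example below.
# hypothetical selection - presumably one set_value call per process represented
DOC.set_value("entrainment")
DOC.set_value("detrainment")
DOC.set_value("updrafts")
DOC.set_value("downdrafts")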
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapour from updrafts.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
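Boolean properties such as this one take an unquoted True or False, following the template's set_value(value) comment; for example (illustrative only):
# illustrative answer only - depends on the documented cloud scheme
DOC.set_value(True)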
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
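As an example only, a configuration run with a fixed solar constant near the commonly used modern value of roughly 1361 W m-2 might record the following; the number is illustrative, not prescriptive.
# illustrative value in W m-2 - use the value actually prescribed for the simulations
DOC.set_value(1361.0)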
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
7,075 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predicting publications
In this section, we will try to predict whether a user will be an author. If they are an author, we will try to predict the number of stories they would write based on observable characteristics, such as the number of years they have been on the site, how much they read, and whether or not they are in a community.
Step1: Linear Modelling
For this problem, we will use regression analysis. That is, we have some information at hand, and we want to use that information to predict something. This works if there's some underlying relationship between our information and the thing we want to predict.
More specifically, we will use the following models
Step2: The first approach that we will try is the logit model, or logistic regression, as explained above.
$Pr(Author_i) = \dfrac{1}{1-e^{-x_i^t \beta}}$
Here, the probability that user $i$ is an author is a function of a vector of characteristics $x$. These characteristics will be things that we can observe. In particular, we chose the following
Step3: Results indicate there is some correlation between two of the independent variables
Step4: The data is clustered around the zeros. Let's try a log transformation.
Step5: Regression Model
Step6: The log transformations helped increase the fit from an R-squared of ~0.05 to ~0.20.
From these results, we can see that
Step7: Without 'fs', we lost some information but not much | Python Code:
# opens raw data
with open ('../data/clean_data/df_profile', 'rb') as fp:
df = pickle.load(fp)
# creates copy with non-missing observations
df_active = df.loc[df.status != 'inactive', ].copy()
Explanation: Predicting publications
In this section, we will try to predict whether a user will be an author. If they are an author, we will try to predict the number of stories they would write based on observable characteristics, such as the number of years they have been on the site, how much they read, and whether or not they are in a community.
End of explanation
# examines status of users
status = df_active['status'].value_counts()
# plots chart
status.plot.pie(autopct='%.f', figsize=(5,5))
plt.ylabel('')
plt.show()
Explanation: Linear Modelling
For this problem, we will use regression analysis. That is, we have some information at hand, and we want to use that information to predict something. This works if there's some underlying relationship between our information and the thing we want to predict.
More specifically, we will use the following models:
Logistic regression
Logistic regression lets us predict the probability that a variable is (or is not) something. The model is as follows:
$Pr(Y_i=1) = \dfrac{1}{1+e^{-(\beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} + ... + \beta_k X_{ik} + \epsilon_i)}}$
in which $Y_i$ is the $i$th observation of the dependent variable (the variable of interest), and $X_{ij}$ is the $i$th observation of the $j$th independent variable.
For our model to work, we would need a few assumptions. These are:
Linearity between odds ratio and independent variables / correct functional form
Strict exogeneity $E[\epsilon|X] = 0$
No multicollinearity
Linear regression
Linear regression lets us predict the value of something. The model is as follows:
$Y_i = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} + ... + \beta_k X_{ik} + \epsilon_i$
in which $Y_i$ is the $i$th observation of the dependent variable (the variable of interest), and $X_{ij}$ is the $i$th observation of the $j$th independent variable.
For this model to work, we would need a few assumptions. These are:
Linearity between dependent and independent variables / correct functional form
Strict exogeneity $E[\epsilon|X] = 0$
No multicollinearity
Normally distributed errors
Spherical errors (homoscedasticity)
Probability of being an author
Before introducing any type of model, we can already see from our sample data that out of all active users, only ~18% of them are authors. As such, if we simply guess "not author" for all users, we will be correct ~82% of the time. Our goal is to come up with a classification method that has greater accuracy.
End of explanation
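The ~18%/82% base rate quoted above can also be read off directly as proportions rather than from the pie chart; a one-line check along these lines (using the same status column) would be:
# shares of each status among active users; the author share is the baseline any classifier must beat
print(df_active['status'].value_counts(normalize=True))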
# displays correlation matrix
df_active.corr()
# creates design_matrix
X = df_active
X['intercept'] = 1
# displays variance inflation factor
vif_results = pd.DataFrame()
vif_results['VIF Factor'] = [vif(X.values, i) for i in range(X.shape[1])]
vif_results['features'] = X.columns
vif_results
Explanation: The first approach that we will try is the logit model, or logistic regression, as explained above.
$Pr(Author_i) = \dfrac{1}{1-e^{-x_i^t \beta}}$
Here, the probability that user $i$ is an author is a function of a vector of characteristics $x$. These characteristics will be things that we can observe. In particular, we chose the following:
* the number of years since they first joined the site
* the number of favorite authors+stories they have (logarithmically scaled)
* whether or not they have written a profile
* whether or not they are in a community
One more characteristic that we would also like to later include, but have not retrieved yet the data for:
* whether or not they have signed a review
The reasoning behind these variables is simple -- we assume that the longer and more involved a user is on the site, the more likely they are to publish.
From there, we randomly split up our data into two subsets: one to be used as training (to build our model), and one to be used as testing (to test the accuracy of the model we built).
The model produced by the training data has statistically significant positive $\beta$ coefficients for all our explanatory variables. This is the relationship we expected. However, when we use our model to predict the testing set, we achieve only ~86% accuracy. This is an improvement from blind guessing, but only slightly.
Estimating number of stories written
Multicollinearity
End of explanation
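The split-and-evaluate procedure described above is not shown in the code here; the sketch below is one way it might look with the formula interface of statsmodels and scikit-learn's train_test_split. The column names is_author and profile, the 'author' status label, and the 70/30 split are assumptions for illustration only.
# minimal sketch of the logit fit described above -- column names and split fraction are assumed
import numpy as np
import statsmodels.formula.api as smf
from sklearn.model_selection import train_test_split
# assumed indicator: 1 if the user's status is 'author', 0 otherwise
df_active['is_author'] = (df_active['status'] == 'author').astype(int)
# random split into training and testing subsets
train, test = train_test_split(df_active, test_size=0.3, random_state=0)
# logit of author status on years on site, log favourites, profile and community indicators
logit = smf.logit('is_author ~ age + np.log(fa + 1) + profile + cc', data=train).fit()
print(logit.summary())
# classify as an author when the predicted probability exceeds 0.5
pred = (logit.predict(test) > 0.5).astype(int)
print('test accuracy:', (pred == test['is_author']).mean())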
sns.pairplot(data=df_active, y_vars=['st'], x_vars=['fa', 'fs', 'age'])
plt.show()
Explanation: Results indicate there is some correlation between two of the independent variables: 'fa' and 'fs', implying one of them may not be necessary in the model.
Nonlinearity
We know from earlier distributions that some of the variables are heavily right-skewed. We created some scatter plots to confirm that the assumption of linearity holds.
End of explanation
# takes log transformation
df_active['st'] = np.log(df_active['st']+1)
df_active['fa'] = np.log(df_active['fa']+1)
df_active['fs'] = np.log(df_active['fs']+1)
sns.pairplot(data=df_active, y_vars=['st'], x_vars=['fa', 'fs', 'age'])
plt.show()
Explanation: The data is clustered around the zeros. Let's try a log transformation.
End of explanation
# runs OLS regression
formula = 'st ~ fa + fs + cc + age'
reg = smf.ols(data=df_active, formula=formula).fit()
print(reg.summary())
Explanation: Regression Model
End of explanation
# runs OLS regression
formula = 'st ~ fa + cc + age'
reg = smf.ols(data=df_active, formula=formula).fit()
print(reg.summary())
Explanation: The log transformations helped increase the fit from an R-squared of ~0.05 to ~0.20.
From these results, we can see that:
A 1% change in number of authors favorited is associated with a ~15% change in the number of stories written.
A 1% change in number of stories favorited is associated with a ~4% change in the number of stories written.
Being in a community is associated with a ~0.7 increase in the number of stories written.
One more year on the site is associated with a ~3% change in the number of stories written.
We noted earlier that 'fa' and 'fs' had a correlation of ~0.7. As such, we reran the regression without 'fa' first, then again without 'fs'. The model without 'fs' yielded a better fit (R-squared), as well as AIC and BIC.
End of explanation
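The comparison described above (fit with and without 'fs') can be made explicit; a short sketch, reusing the same formula interface as the cells above, might look like this.
# refits both specifications and prints the statistics used for the comparison above
reg_full = smf.ols('st ~ fa + fs + cc + age', data=df_active).fit()
reg_nofs = smf.ols('st ~ fa + cc + age', data=df_active).fit()
print('with fs   : R2 =', round(reg_full.rsquared, 3), ', AIC =', round(reg_full.aic, 1), ', BIC =', round(reg_full.bic, 1))
print('without fs: R2 =', round(reg_nofs.rsquared, 3), ', AIC =', round(reg_nofs.aic, 1), ', BIC =', round(reg_nofs.bic, 1))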
def graph(formula, x_range):
y = np.array(x_range)
x = formula(y)
plt.plot(y,x)
graph(lambda x : (np.exp(reg.params[0]+reg.params[1]*(np.log(x-1)))-1),
range(2,100,1))
graph(lambda x : (np.exp(reg.params[0]+reg.params[1]*(np.log(x-1))+reg.params[2])-1),
range(2,100,1))
plt.show()
ages = [0, 1, 5, 10, 15]
for age in ages:
graph(lambda x : (np.exp(reg.params[0]+reg.params[1]*(np.log(x-1))+reg.params[3]*age)-1),
range(2,100,1))
plt.show()
Explanation: Without 'fs', we lost some information but not much:
A 1% change in number of authors favorited is associated with a ~20% change in the number of stories written.
Being in a community is associated with a ~0.7 increase in the number of stories written.
One more year on the site is associated with a ~3% change in the number of stories written.
All these results seem to confirm a basic intuition: the more actively a user reads (as measured by favoriting authors and stories), the more likely it is that the user will write more stories. Being on the site longer and being part of a community are also correlated with publications.
To get a sense of the actual magnitude of these effects, let's attempt some plots:
End of explanation |
7,076 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python Basics with Numpy (optional assignment)
Welcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you've used Python before, this will help familiarize you with functions we'll need.
Instructions
Step2: Expected output
Step3: Expected Output
Step4: In fact, if $ x = (x_1, x_2, ..., x_n)$ is a row vector then $np.exp(x)$ will apply the exponential function to every element of x. The output will thus be
Step5: Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \frac{1}{x}$ will output s as a vector of the same size as x.
Step7: Any time you need more info on a numpy function, we encourage you to look at the official documentation.
You can also create a new cell in the notebook and write np.exp? (for example) to get quick access to the documentation.
Exercise
Step9: Expected Output
Step11: Expected Output
Step13: Expected Output
Step15: Expected Output
Step16: Expected Output
Step18: As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger.
Note that np.dot() performs a matrix-matrix or matrix-vector multiplication. This is different from np.multiply() and the * operator (which is equivalent to .* in Matlab/Octave), which performs an element-wise multiplication.
2.1 Implement the L1 and L2 loss functions
Exercise
Step20: Expected Output | Python Code:
### START CODE HERE ### (≈ 1 line of code)
test = 'Hello World'
### END CODE HERE ###
print ("test: " + test)
Explanation: Python Basics with Numpy (optional assignment)
Welcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you've used Python before, this will help familiarize you with functions we'll need.
Instructions:
- You will be using Python 3.
- Avoid using for-loops and while-loops, unless you are explicitly told to do so.
- Do not modify the (# GRADED FUNCTION [function name]) comment in some cells. Your work would not be graded if you change this. Each cell containing that comment should only contain one function.
- After coding your function, run the cell right below it to check if your result is correct.
After this assignment you will:
- Be able to use iPython Notebooks
- Be able to use numpy functions and numpy matrix/vector operations
- Understand the concept of "broadcasting"
- Be able to vectorize code
Let's get started!
About iPython Notebooks
iPython Notebooks are interactive coding environments embedded in a webpage. You will be using iPython notebooks in this class. You only need to write code between the ### START CODE HERE ### and ### END CODE HERE ### comments. After writing your code, you can run the cell by either pressing "SHIFT"+"ENTER" or by clicking on "Run Cell" (denoted by a play symbol) in the upper bar of the notebook.
We will often specify "(≈ X lines of code)" in the comments to tell you about how much code you need to write. It is just a rough estimate, so don't feel bad if your code is longer or shorter.
Exercise: Set test to "Hello World" in the cell below to print "Hello World" and run the two cells below.
End of explanation
# GRADED FUNCTION: basic_sigmoid
import math
def basic_sigmoid(x):
Compute sigmoid of x.
Arguments:
x -- A scalar
Return:
s -- sigmoid(x)
### START CODE HERE ### (≈ 1 line of code)
s = 1.0 / (1.0 + math.exp(-x))
### END CODE HERE ###
return s
basic_sigmoid(3)
Explanation: Expected output:
test: Hello World
<font color='blue'>
What you need to remember:
- Run your cells using SHIFT+ENTER (or "Run cell")
- Write code in the designated areas using Python 3 only
- Do not modify the code outside of the designated areas
1 - Building basic functions with numpy
Numpy is the main package for scientific computing in Python. It is maintained by a large community (www.numpy.org). In this exercise you will learn several key numpy functions such as np.exp, np.log, and np.reshape. You will need to know how to use these functions for future assignments.
1.1 - sigmoid function, np.exp()
Before using np.exp(), you will use math.exp() to implement the sigmoid function. You will then see why np.exp() is preferable to math.exp().
Exercise: Build a function that returns the sigmoid of a real number x. Use math.exp(x) for the exponential function.
Reminder:
$sigmoid(x) = \frac{1}{1+e^{-x}}$ is sometimes also known as the logistic function. It is a non-linear function used not only in Machine Learning (Logistic Regression), but also in Deep Learning.
<img src="images/Sigmoid.png" style="width:500px;height:228px;">
To refer to a function belonging to a specific package you could call it using package_name.function(). Run the code below to see an example with math.exp().
End of explanation
### One reason why we use "numpy" instead of "math" in Deep Learning ###
x = [1, 2, 3]
basic_sigmoid(x) # you will see this give an error when you run it, because x is a vector.
Explanation: Expected Output:
<table style = "width:40%">
<tr>
<td>** basic_sigmoid(3) **</td>
<td>0.9525741268224334 </td>
</tr>
</table>
Actually, we rarely use the "math" library in deep learning because the inputs of the functions are real numbers. In deep learning we mostly use matrices and vectors. This is why numpy is more useful.
End of explanation
import numpy as np
# example of np.exp
x = np.array([1, 2, 3])
print(np.exp(x)) # result is (exp(1), exp(2), exp(3))
Explanation: In fact, if $ x = (x_1, x_2, ..., x_n)$ is a row vector then $np.exp(x)$ will apply the exponential function to every element of x. The output will thus be: $np.exp(x) = (e^{x_1}, e^{x_2}, ..., e^{x_n})$
End of explanation
# example of vector operation
x = np.array([1, 2, 3])
print (x + 3)
Explanation: Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \frac{1}{x}$ will output s as a vector of the same size as x.
End of explanation
# GRADED FUNCTION: sigmoid
import numpy as np # this means you can access numpy functions by writing np.function() instead of numpy.function()
def sigmoid(x):
Compute the sigmoid of x
Arguments:
x -- A scalar or numpy array of any size
Return:
s -- sigmoid(x)
### START CODE HERE ### (≈ 1 line of code)
s = 1.0 / (1.0 + np.exp(-x))
### END CODE HERE ###
return s
x = np.array([1, 2, 3])
sigmoid(x)
Explanation: Any time you need more info on a numpy function, we encourage you to look at the official documentation.
You can also create a new cell in the notebook and write np.exp? (for example) to get quick access to the documentation.
Exercise: Implement the sigmoid function using numpy.
Instructions: x could now be either a real number, a vector, or a matrix. The data structures we use in numpy to represent these shapes (vectors, matrices...) are called numpy arrays. You don't need to know more for now.
$$ \text{For } x \in \mathbb{R}^n \text{, } sigmoid(x) = sigmoid\begin{pmatrix}
x_1 \
x_2 \
... \
x_n \
\end{pmatrix} = \begin{pmatrix}
\frac{1}{1+e^{-x_1}} \
\frac{1}{1+e^{-x_2}} \
... \
\frac{1}{1+e^{-x_n}} \
\end{pmatrix}\tag{1} $$
End of explanation
# GRADED FUNCTION: sigmoid_derivative
def sigmoid_derivative(x):
Compute the gradient (also called the slope or derivative) of the sigmoid function with respect to its input x.
You can store the output of the sigmoid function into variables and then use it to calculate the gradient.
Arguments:
x -- A scalar or numpy array
Return:
ds -- Your computed gradient.
### START CODE HERE ### (≈ 2 lines of code)
s = 1.0 / (1.0 + np.exp(-x))
ds = s * (1-s)
### END CODE HERE ###
return ds
x = np.array([1, 2, 3])
print ("sigmoid_derivative(x) = " + str(sigmoid_derivative(x)))
Explanation: Expected Output:
<table>
<tr>
<td> **sigmoid([1,2,3])**</td>
<td> array([ 0.73105858, 0.88079708, 0.95257413]) </td>
</tr>
</table>
1.2 - Sigmoid gradient
As you've seen in lecture, you will need to compute gradients to optimize loss functions using backpropagation. Let's code your first gradient function.
Exercise: Implement the function sigmoid_grad() to compute the gradient of the sigmoid function with respect to its input x. The formula is: $$sigmoid_derivative(x) = \sigma'(x) = \sigma(x) (1 - \sigma(x))\tag{2}$$
You often code this function in two steps:
1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful.
2. Compute $\sigma'(x) = s(1-s)$
End of explanation
# GRADED FUNCTION: image2vector
def image2vector(image):
Argument:
image -- a numpy array of shape (length, height, depth)
Returns:
v -- a vector of shape (length*height*depth, 1)
### START CODE HERE ### (≈ 1 line of code)
v = image.reshape((image.shape[0] * image.shape[1] * image.shape[2], 1))
### END CODE HERE ###
return v
# This is a 3 by 3 by 2 array, typically images will be (num_px_x, num_px_y,3) where 3 represents the RGB values
image = np.array([[[ 0.67826139, 0.29380381],
[ 0.90714982, 0.52835647],
[ 0.4215251 , 0.45017551]],
[[ 0.92814219, 0.96677647],
[ 0.85304703, 0.52351845],
[ 0.19981397, 0.27417313]],
[[ 0.60659855, 0.00533165],
[ 0.10820313, 0.49978937],
[ 0.34144279, 0.94630077]]])
print ("image2vector(image) = " + str(image2vector(image)))
Explanation: Expected Output:
<table>
<tr>
<td> **sigmoid_derivative([1,2,3])**</td>
<td> [ 0.19661193 0.10499359 0.04517666] </td>
</tr>
</table>
1.3 - Reshaping arrays
Two common numpy functions used in deep learning are np.shape and np.reshape().
- X.shape is used to get the shape (dimension) of a matrix/vector X.
- X.reshape(...) is used to reshape X into some other dimension.
For example, in computer science, an image is represented by a 3D array of shape $(length, height, depth = 3)$. However, when you read an image as the input of an algorithm you convert it to a vector of shape $(lengthheight3, 1)$. In other words, you "unroll", or reshape, the 3D array into a 1D vector.
<img src="images/image2vector_kiank.png" style="width:500px;height:300;">
Exercise: Implement image2vector() that takes an input of shape (length, height, 3) and returns a vector of shape (length*height*3, 1). For example, if you would like to reshape an array v of shape (a, b, c) into a vector of shape (a*b,c) you would do:
python
v = v.reshape((v.shape[0]*v.shape[1], v.shape[2])) # v.shape[0] = a ; v.shape[1] = b ; v.shape[2] = c
- Please don't hardcode the dimensions of image as a constant. Instead look up the quantities you need with image.shape[0], etc.
End of explanation
# GRADED FUNCTION: normalizeRows
def normalizeRows(x):
Implement a function that normalizes each row of the matrix x (to have unit length).
Argument:
x -- A numpy matrix of shape (n, m)
Returns:
x -- The normalized (by row) numpy matrix. You are allowed to modify x.
### START CODE HERE ### (≈ 2 lines of code)
# Compute x_norm as the norm 2 of x. Use np.linalg.norm(..., ord = 2, axis = ..., keepdims = True)
x_norm = np.linalg.norm(x, axis = 1, keepdims = True)
# Divide x by its norm.
x = x / x_norm
### END CODE HERE ###
return x
x = np.array([
[0, 3, 4],
[1, 6, 4]])
print("normalizeRows(x) = " + str(normalizeRows(x)))
Explanation: Expected Output:
<table style="width:100%">
<tr>
<td> **image2vector(image)** </td>
<td> [[ 0.67826139]
[ 0.29380381]
[ 0.90714982]
[ 0.52835647]
[ 0.4215251 ]
[ 0.45017551]
[ 0.92814219]
[ 0.96677647]
[ 0.85304703]
[ 0.52351845]
[ 0.19981397]
[ 0.27417313]
[ 0.60659855]
[ 0.00533165]
[ 0.10820313]
[ 0.49978937]
[ 0.34144279]
[ 0.94630077]]</td>
</tr>
</table>
1.4 - Normalizing rows
Another common technique we use in Machine Learning and Deep Learning is to normalize our data. It often leads to a better performance because gradient descent converges faster after normalization. Here, by normalization we mean changing x to $ \frac{x}{\| x\|} $ (dividing each row vector of x by its norm).
For example, if $$x =
\begin{bmatrix}
0 & 3 & 4 \
2 & 6 & 4 \
\end{bmatrix}\tag{3}$$ then $$\| x\| = np.linalg.norm(x, axis = 1, keepdims = True) = \begin{bmatrix}
5 \
\sqrt{56} \
\end{bmatrix}\tag{4} $$and $$ x_normalized = \frac{x}{\| x\|} = \begin{bmatrix}
0 & \frac{3}{5} & \frac{4}{5} \
\frac{2}{\sqrt{56}} & \frac{6}{\sqrt{56}} & \frac{4}{\sqrt{56}} \
\end{bmatrix}\tag{5}$$ Note that you can divide matrices of different sizes and it works fine: this is called broadcasting and you're going to learn about it in part 5.
Exercise: Implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x should be a vector of unit length (meaning length 1).
End of explanation
# GRADED FUNCTION: softmax
def softmax(x):
Calculates the softmax for each row of the input x.
Your code should work for a row vector and also for matrices of shape (n, m).
Argument:
x -- A numpy matrix of shape (n,m)
Returns:
s -- A numpy matrix equal to the softmax of x, of shape (n,m)
### START CODE HERE ### (≈ 3 lines of code)
# Apply exp() element-wise to x. Use np.exp(...).
x_exp = np.exp(x)
# Create a vector x_sum that sums each row of x_exp. Use np.sum(..., axis = 1, keepdims = True).
x_sum = np.sum(x_exp, axis = 1, keepdims = True)
# Compute softmax(x) by dividing x_exp by x_sum. It should automatically use numpy broadcasting.
s = x_exp / x_sum
### END CODE HERE ###
return s
x = np.array([
[9, 2, 5, 0, 0],
[7, 5, 0, 0 ,0]])
print("softmax(x) = " + str(softmax(x)))
Explanation: Expected Output:
<table style="width:60%">
<tr>
<td> **normalizeRows(x)** </td>
<td> [[ 0. 0.6 0.8 ]
[ 0.13736056 0.82416338 0.54944226]]</td>
</tr>
</table>
Note:
In normalizeRows(), you can try to print the shapes of x_norm and x, and then rerun the assessment. You'll find out that they have different shapes. This is normal given that x_norm takes the norm of each row of x. So x_norm has the same number of rows but only 1 column. So how did it work when you divided x by x_norm? This is called broadcasting and we'll talk about it now!
1.5 - Broadcasting and the softmax function
A very important concept to understand in numpy is "broadcasting". It is very useful for performing mathematical operations between arrays of different shapes. For the full details on broadcasting, you can read the official broadcasting documentation.
Exercise: Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization.
Instructions:
- $ \text{for } x \in \mathbb{R}^{1\times n} \text{, } softmax(x) = softmax(\begin{bmatrix}
x_1 &&
x_2 &&
... &&
x_n
\end{bmatrix}) = \begin{bmatrix}
\frac{e^{x_1}}{\sum_{j}e^{x_j}} &&
\frac{e^{x_2}}{\sum_{j}e^{x_j}} &&
... &&
\frac{e^{x_n}}{\sum_{j}e^{x_j}}
\end{bmatrix} $
$\text{for a matrix } x \in \mathbb{R}^{m \times n} \text{, $x_{ij}$ maps to the element in the $i^{th}$ row and $j^{th}$ column of $x$, thus we have: }$ $$softmax(x) = softmax\begin{bmatrix}
x_{11} & x_{12} & x_{13} & \dots & x_{1n} \
x_{21} & x_{22} & x_{23} & \dots & x_{2n} \
\vdots & \vdots & \vdots & \ddots & \vdots \
x_{m1} & x_{m2} & x_{m3} & \dots & x_{mn}
\end{bmatrix} = \begin{bmatrix}
\frac{e^{x_{11}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{12}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{13}}}{\sum_{j}e^{x_{1j}}} & \dots & \frac{e^{x_{1n}}}{\sum_{j}e^{x_{1j}}} \
\frac{e^{x_{21}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{22}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{23}}}{\sum_{j}e^{x_{2j}}} & \dots & \frac{e^{x_{2n}}}{\sum_{j}e^{x_{2j}}} \
\vdots & \vdots & \vdots & \ddots & \vdots \
\frac{e^{x_{m1}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m2}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m3}}}{\sum_{j}e^{x_{mj}}} & \dots & \frac{e^{x_{mn}}}{\sum_{j}e^{x_{mj}}}
\end{bmatrix} = \begin{pmatrix}
softmax\text{(first row of x)} \
softmax\text{(second row of x)} \
... \
softmax\text{(last row of x)} \
\end{pmatrix} $$
End of explanation
import time
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### CLASSIC DOT PRODUCT OF VECTORS IMPLEMENTATION ###
tic = time.process_time()
dot = 0
for i in range(len(x1)):
dot+= x1[i]*x2[i]
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC OUTER PRODUCT IMPLEMENTATION ###
tic = time.process_time()
outer = np.zeros((len(x1),len(x2))) # we create a len(x1)*len(x2) matrix with only zeros
for i in range(len(x1)):
for j in range(len(x2)):
outer[i,j] = x1[i]*x2[j]
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC ELEMENTWISE IMPLEMENTATION ###
tic = time.process_time()
mul = np.zeros(len(x1))
for i in range(len(x1)):
mul[i] = x1[i]*x2[i]
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC GENERAL DOT PRODUCT IMPLEMENTATION ###
W = np.random.rand(3,len(x1)) # Random 3*len(x1) numpy array
tic = time.process_time()
gdot = np.zeros(W.shape[0])
for i in range(W.shape[0]):
for j in range(len(x1)):
gdot[i] += W[i,j]*x1[j]
toc = time.process_time()
print ("gdot = " + str(gdot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### VECTORIZED DOT PRODUCT OF VECTORS ###
tic = time.process_time()
dot = np.dot(x1,x2)
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED OUTER PRODUCT ###
tic = time.process_time()
outer = np.outer(x1,x2)
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED ELEMENTWISE MULTIPLICATION ###
tic = time.process_time()
mul = np.multiply(x1,x2)
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED GENERAL DOT PRODUCT ###
tic = time.process_time()
dot = np.dot(W,x1)
toc = time.process_time()
print ("gdot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
Explanation: Expected Output:
<table style="width:60%">
<tr>
<td> **softmax(x)** </td>
<td> [[ 9.80897665e-01 8.94462891e-04 1.79657674e-02 1.21052389e-04
1.21052389e-04]
[ 8.78679856e-01 1.18916387e-01 8.01252314e-04 8.01252314e-04
8.01252314e-04]]</td>
</tr>
</table>
Note:
- If you print the shapes of x_exp, x_sum and s above and rerun the assessment cell, you will see that x_sum is of shape (2,1) while x_exp and s are of shape (2,5). x_exp/x_sum works due to python broadcasting.
Congratulations! You now have a pretty good understanding of python numpy and have implemented a few useful functions that you will be using in deep learning.
<font color='blue'>
What you need to remember:
- np.exp(x) works for any np.array x and applies the exponential function to every coordinate
- the sigmoid function and its gradient
- image2vector is commonly used in deep learning
- np.reshape is widely used. In the future, you'll see that keeping your matrix/vector dimensions straight will go toward eliminating a lot of bugs.
- numpy has efficient built-in functions
- broadcasting is extremely useful
2) Vectorization
In deep learning, you deal with very large datasets. Hence, a non-computationally-optimal function can become a huge bottleneck in your algorithm and can result in a model that takes ages to run. To make sure that your code is computationally efficient, you will use vectorization. For example, try to tell the difference between the following implementations of the dot/outer/elementwise product.
End of explanation
# GRADED FUNCTION: L1
def L1(yhat, y):
"""
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L1 loss function defined above
"""
### START CODE HERE ### (≈ 1 line of code)
loss = np.sum(np.abs(y - yhat))
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L1 = " + str(L1(yhat,y)))
Explanation: As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger.
Note that np.dot() performs a matrix-matrix or matrix-vector multiplication. This is different from np.multiply() and the * operator (which is equivalent to .* in Matlab/Octave), which performs an element-wise multiplication.
2.1 Implement the L1 and L2 loss functions
Exercise: Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.
Reminder:
- The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions ($ \hat{y} $) are from the true values ($y$). In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost.
- L1 loss is defined as:
$$\begin{align} & L_1(\hat{y}, y) = \sum_{i=0}^m|y^{(i)} - \hat{y}^{(i)}| \end{align}\tag{6}$$
End of explanation
# GRADED FUNCTION: L2
def L2(yhat, y):
"""
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L2 loss function defined above
"""
### START CODE HERE ### (≈ 1 line of code)
loss = np.sum(np.dot(y - yhat, y - yhat))
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L2 = " + str(L2(yhat,y)))
Explanation: Expected Output:
<table style="width:20%">
<tr>
<td> **L1** </td>
<td> 1.1 </td>
</tr>
</table>
Exercise: Implement the numpy vectorized version of the L2 loss. There are several way of implementing the L2 loss but you may find the function np.dot() useful. As a reminder, if $x = [x_1, x_2, ..., x_n]$, then np.dot(x,x) = $\sum_{j=0}^n x_j^{2}$.
L2 loss is defined as $$\begin{align} & L_2(\hat{y},y) = \sum_{i=0}^m(y^{(i)} - \hat{y}^{(i)})^2 \end{align}\tag{7}$$
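As a quick sanity check on the two equivalent vectorized forms (a small illustrative snippet, not one of the graded cells): for the sample vectors used above, the squared differences are 0.01 + 0.04 + 0.01 + 0.36 + 0.01, so both expressions below should print 0.43.

import numpy as np

yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
diff = y - yhat
print(np.dot(diff, diff))        # 0.43 -- np.dot(x, x) is the sum of squares
print(np.sum((y - yhat) ** 2))   # 0.43 -- elementwise square, then sum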
End of explanation |
7,077 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<font color='blue'>Data Science Academy - Deep Learning II</font>
Linear Regression
To fully understand L1/L2, we will start with how they are used in linear regression, which is the simplest setting in which to grasp them.
Let's try to understand the impact of model complexity on the magnitude of the coefficients. As an example, we will simulate a sine curve (between 60° and 300°) and add some random noise using the following code
Step1: This resembles a sine curve, but not exactly, because of the noise. We will use it as an example to test different scenarios. Let's try to estimate the sine function using polynomial regression with powers of x from 1 to 15. This means adding a column for each power up to 15 to our dataframe, which can be done with the following code
Step2: Now that we have all 15 powers, let's build 15 different linear regression models, each containing variables with powers of x. For example, the feature set of model 8 will be {x, x_2, x_3, ..., x_8}.
First, we will define a generic function that takes the maximum required power of x as input and returns a list containing
Step3: Note that this function will not plot the model fit for every power, but it will return the RSS and the coefficients for all models.
Now we can run all 15 models and compare the results. To make the analysis easier, we will store all results in a Pandas dataframe and plot 6 of the models to get a sense of the trend. Consider the following code
Step4: As model complexity increases, the models tend to fit ever smaller deviations in the training dataset. Although this leads to overfitting, let's set that issue aside for a moment and get to our main goal, namely the impact on the magnitude of the coefficients. This can be analyzed by looking at the dataframe created above.
Step5: The size of the coefficients grows exponentially as model complexity increases. Hopefully this gives some intuition for why putting a constraint on the magnitude of the coefficients can be a good idea for reducing model complexity.
Let's try to understand this even better.
What does a large coefficient mean? It means we are putting a lot of emphasis on that feature, i.e., that particular feature is a good predictor of the outcome. When it becomes too large, the algorithm starts modeling intricate relationships to estimate the output and ends up overfitting the specific training data. That is overfitting, and regularization can be the solution to fix, or at least mitigate, the problem.
L1 Regularization (Lasso)
LASSO stands for Least Absolute Shrinkage and Selection Operator. I know the definition is not very intuitive, but there are two keywords in it
Step6: Note the additional parameter defined in the Lasso function, 'max_iter'. This is the maximum number of iterations we want the model to run for if it does not converge earlier.
Let's check the output for 10 different alpha values using the following code
Step7: This again tells us that model complexity decreases as the alpha values increase. But notice the straight line at alpha = 1. Looks a bit odd, doesn't it? Let's explore this further by analyzing the coefficients.
Besides the expected inference of higher RSS for higher alphas, we can see the following
Step8: We can see that even for a small value of alpha, a significant number of coefficients are zero. This also explains the horizontal-line fit for alpha = 1 in the LASSO plots. This phenomenon of most coefficients being zero is called "sparsity". Although LASSO performs feature selection, this level of sparsity is reached only in special cases.
This has some really interesting implications for the use cases of LASSO regression compared with Ridge regression.
L2 Regularization (Ridge)
As mentioned earlier, Ridge regression performs "L2 regularization", i.e., it adds a penalty equal to the sum of the squares of the coefficients to the optimization objective. Thus, Ridge regression optimizes the following
Step9: Note the 'Ridge' function used here. It takes 'alpha' as a parameter at initialization. Also keep in mind that normalizing the inputs is generally a good idea in every type of regression, and it should be used in the case of Ridge regression as well.
Now let's analyze the result of Ridge regression for 10 different values of α ranging from 1e-15 to 20. These values were chosen so that we can easily analyze the trend as α changes. They will, however, differ from case to case.
Note that each of these 10 models will contain all 15 variables and only the value of alpha will differ. This is different from the simple linear regression case, where each model had a subset of the features.
Python code
Step10: Here we can clearly see that as the value of alpha increases, model complexity goes down. Although higher values of alpha reduce overfitting, significantly high values can also cause underfitting (e.g., alpha = 5). So alpha should be chosen wisely. A widely accepted technique is cross-validation, i.e., alpha is iterated over a range of values and the one that gives the highest cross-validation score is chosen.
Take a look at the values of the coefficients in the models above | Python Code:
# Import
import numpy as np
import pandas as pd
import random
import matplotlib.pyplot as plt
%matplotlib inline
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 12, 10
# Define the input array with random values
x = np.array([i*np.pi/180 for i in range(60,300,4)])
np.random.seed(10)
y = np.sin(x) + np.random.normal(0,0.15,len(x))
data = pd.DataFrame(np.column_stack([x,y]),columns=['x','y'])
plt.plot(data['x'],data['y'],'.')
Explanation: <font color='blue'>Data Science Academy - Deep Learning II</font>
Linear Regression
To fully understand L1/L2, we will start with how they are used in linear regression, which is the simplest setting in which to grasp them.
Let's try to understand the impact of model complexity on the magnitude of the coefficients. As an example, we will simulate a sine curve (between 60° and 300°) and add some random noise using the following code:
End of explanation
for i in range(2,16):
colname = 'x_%d'%i
data[colname] = data['x']**i
print (data.head())
Explanation: This resembles a sine curve, but not exactly, because of the noise. We will use it as an example to test different scenarios. Let's try to estimate the sine function using polynomial regression with powers of x from 1 to 15. This means adding a column for each power up to 15 to our dataframe, which can be done with the following code:
End of explanation
# Import
from sklearn.linear_model import LinearRegression
# Model
def linear_regression(data, power, models_to_plot):
predictors=['x']
if power>=2:
predictors.extend(['x_%d'%i for i in range(2,power+1)])
# Fit the model
linreg = LinearRegression(normalize=True)
linreg.fit(data[predictors],data['y'])
y_pred = linreg.predict(data[predictors])
# Plot
if power in models_to_plot:
plt.subplot(models_to_plot[power])
plt.tight_layout()
plt.plot(data['x'],y_pred)
plt.plot(data['x'],data['y'],'.')
plt.title('Plot Para a Potência: %d'%power)
# RSS
rss = sum((y_pred-data['y'])**2)
ret = [rss]
ret.extend([linreg.intercept_])
ret.extend(linreg.coef_)
return ret
Explanation: Now that we have all 15 powers, let's build 15 different linear regression models, each containing variables with powers of x. For example, the feature set of model 8 will be {x, x_2, x_3, ..., x_8}.
First, we will define a generic function that takes the maximum required power of x as input and returns a list containing: [model RSS, intercept, coef_x, coef_x2, ... up to the entered power]. Here, RSS refers to the "Residual Sum of Squares", which is the sum of the squared errors between the predicted and actual values on the training dataset. The Python code that defines the function is:
End of explanation
# Initialize the dataframe to store the results
col = ['rss','intercept'] + ['coef_x_%d'%i for i in range(1,16)]
ind = ['model_pow_%d'%i for i in range(1,16)]
coef_matrix_simple = pd.DataFrame(index=ind, columns=col)
# Define the powers for which a plot is required
models_to_plot = {1:231,3:232,6:233,9:234,12:235,15:236}
# Iterate over all powers and collect the results
for i in range(1,16):
coef_matrix_simple.iloc[i-1,0:i+2] = linear_regression(data, power = i, models_to_plot = models_to_plot)
Explanation: Note that this function will not plot the model fit for every power, but it will return the RSS and the coefficients for all models.
Now we can run all 15 models and compare the results. To make the analysis easier, we will store all results in a Pandas dataframe and plot 6 of the models to get a sense of the trend. Consider the following code:
We expect models of increasing complexity to fit the data better and result in lower RSS values. This can be verified by looking at the plots generated for 6 of the models:
End of explanation
# Set the display format to scientific notation to make the analysis easier
pd.options.display.float_format = '{:,.2g}'.format
coef_matrix_simple
Explanation: As model complexity increases, the models tend to fit ever smaller deviations in the training dataset. Although this leads to overfitting, let's set that issue aside for a moment and get to our main goal, namely the impact on the magnitude of the coefficients. This can be analyzed by looking at the dataframe created above.
End of explanation
from sklearn.linear_model import Lasso
def lasso_regression(data, predictors, alpha, models_to_plot={}):
lassoreg = Lasso(alpha = alpha, normalize = True, max_iter = 1e5)
lassoreg.fit(data[predictors],data['y'])
y_pred = lassoreg.predict(data[predictors])
if alpha in models_to_plot:
plt.subplot(models_to_plot[alpha])
plt.tight_layout()
plt.plot(data['x'],y_pred)
plt.plot(data['x'],data['y'],'.')
plt.title('Plot for alpha: %.3g'%alpha)
rss = sum((y_pred-data['y'])**2)
ret = [rss]
ret.extend([lassoreg.intercept_])
ret.extend(lassoreg.coef_)
return ret
Explanation: The size of the coefficients grows exponentially as model complexity increases. Hopefully this gives some intuition for why putting a constraint on the magnitude of the coefficients can be a good idea for reducing model complexity.
Let's try to understand this even better.
What does a large coefficient mean? It means we are putting a lot of emphasis on that feature, i.e., that particular feature is a good predictor of the outcome. When it becomes too large, the algorithm starts modeling intricate relationships to estimate the output and ends up overfitting the specific training data. That is overfitting, and regularization can be the solution to fix, or at least mitigate, the problem.
L1 Regularization (Lasso)
LASSO stands for Least Absolute Shrinkage and Selection Operator. I know the definition is not very intuitive, but there are two keywords in it: "absolute" and "selection".
Let's focus on the first one and worry about the latter later.
Lasso regression performs L1 regularization, i.e., it adds a penalty equal to the sum of the absolute values of the coefficients to the optimization objective. Thus, LASSO regression optimizes the following:
Objective = RSS + α * (sum of the absolute values of the coefficients)
Here, α (alpha) provides a trade-off between balancing the RSS and the magnitude of the coefficients.
α = 0: the same coefficients as simple linear regression
α = ∞: all coefficients zero (same logic as before)
0 < α < ∞: coefficients between 0 and those of simple linear regression
Let's run LASSO regression on the same problem described above. First, we will define a generic function:
End of explanation
# Initialize the predictors with all 15 powers of x
predictors=['x']
predictors.extend(['x_%d'%i for i in range(2,16)])
# Define the alpha values to test
alpha_lasso = [1e-15, 1e-10, 1e-8, 1e-5,1e-4, 1e-3,1e-2, 1, 5, 10]
# Initialize the dataframe to store the coefficients
col = ['rss','intercept'] + ['coef_x_%d'%i for i in range(1,16)]
ind = ['alpha_%.2g'%alpha_lasso[i] for i in range(0,10)]
coef_matrix_lasso = pd.DataFrame(index=ind, columns=col)
# Define the models to plot
models_to_plot = {1e-10:231, 1e-5:232,1e-4:233, 1e-3:234, 1e-2:235, 1:236}
# Iterate over the 10 alpha values:
for i in range(10):
coef_matrix_lasso.iloc[i,] = lasso_regression(data, predictors, alpha_lasso[i], models_to_plot)
Explanation: Note the additional parameter defined in the Lasso function, 'max_iter'. This is the maximum number of iterations we want the model to run for if it does not converge earlier.
Let's check the output for 10 different alpha values using the following code:
End of explanation
coef_matrix_lasso.apply(lambda x: sum(x.values==0),axis=1)
Explanation: This again tells us that model complexity decreases as the alpha values increase. But notice the straight line at alpha = 1. Looks a bit odd, doesn't it? Let's explore this further by analyzing the coefficients.
Besides the expected inference of higher RSS for higher alphas, we can see the following:
For the same values of alpha, the LASSO regression coefficients are much smaller than those of Ridge regression (compare row 1 of the two tables).
For the same alpha, LASSO has a higher RSS (a worse fit) than Ridge regression.
Many of the coefficients are zero even for very small values of alpha.
Inferences #1 and #2 may not always generalize, but they hold in many cases. Let's check the number of coefficients that are zero in each model using the following code:
End of explanation
from sklearn.linear_model import Ridge
def ridge_regression(data, predictors, alpha, models_to_plot={}):
# Fit the model using Ridge regression
ridgereg = Ridge(alpha = alpha, normalize = True)
ridgereg.fit(data[predictors],data['y'])
y_pred = ridgereg.predict(data[predictors])
# Check whether a plot should be made for the given alpha
if alpha in models_to_plot:
plt.subplot(models_to_plot[alpha])
plt.tight_layout()
plt.plot(data['x'],y_pred)
plt.plot(data['x'],data['y'],'.')
plt.title('Plot for alpha: %.3g'%alpha)
# Return the result in the predefined format
rss = sum((y_pred-data['y'])**2)
ret = [rss]
ret.extend([ridgereg.intercept_])
ret.extend(ridgereg.coef_)
return ret
Explanation: We can see that even for a small value of alpha, a significant number of coefficients are zero. This also explains the horizontal-line fit for alpha = 1 in the LASSO plots. This phenomenon of most coefficients being zero is called "sparsity". Although LASSO performs feature selection, this level of sparsity is reached only in special cases.
This has some really interesting implications for the use cases of LASSO regression compared with Ridge regression.
L2 Regularization (Ridge)
As mentioned earlier, Ridge regression performs "L2 regularization", i.e., it adds a penalty equal to the sum of the squares of the coefficients to the optimization objective. Thus, Ridge regression optimizes the following:
Objective = RSS + α * (sum of the squares of the coefficients)
Here, α (alpha) is the parameter that balances the emphasis given to minimizing the RSS versus minimizing the sum of squared coefficients. α can take various values:
α = 0: The objective becomes the same as simple linear regression. We will get the same coefficients as simple linear regression.
α = ∞: The coefficients will be zero. Why? Because of the infinite weight on the squared coefficients, anything other than zero would make the objective infinite.
0 < α < ∞: The magnitude of α determines the weight given to the different parts of the objective. The coefficients will lie between 0 and those of simple linear regression.
Hopefully this gives some sense of the impact of the magnitude of the coefficients. One thing is certain: any non-zero α gives coefficient values smaller than those of simple linear regression. By how much? Let's see Ridge regression in action on the same problem as before.
First, let's define a generic function for Ridge regression similar to the one defined for simple linear regression. The Python code is:
End of explanation
predictors=['x']
predictors.extend(['x_%d'%i for i in range(2,16)])
alpha_ridge = [1e-15, 1e-10, 1e-8, 1e-4, 1e-3,1e-2, 1, 5, 10, 20]
col = ['rss','intercept'] + ['coef_x_%d'%i for i in range(1,16)]
ind = ['alpha_%.2g'%alpha_ridge[i] for i in range(0,10)]
coef_matrix_ridge = pd.DataFrame(index=ind, columns=col)
models_to_plot = {1e-15:231, 1e-10:232, 1e-4:233, 1e-3:234, 1e-2:235, 5:236}
for i in range(10):
coef_matrix_ridge.iloc[i,] = ridge_regression(data, predictors, alpha_ridge[i], models_to_plot)
Explanation: Note the 'Ridge' function used here. It takes 'alpha' as a parameter at initialization. Also keep in mind that normalizing the inputs is generally a good idea in every type of regression, and it should be used in the case of Ridge regression as well.
Now let's analyze the result of Ridge regression for 10 different values of α ranging from 1e-15 to 20. These values were chosen so that we can easily analyze the trend as α changes. They will, however, differ from case to case.
Note that each of these 10 models will contain all 15 variables and only the value of alpha will differ. This is different from the simple linear regression case, where each model had a subset of the features.
Python code:
End of explanation
pd.options.display.float_format = '{:,.2g}'.format
coef_matrix_ridge
Explanation: Here we can clearly see that as the value of alpha increases, model complexity goes down. Although higher values of alpha reduce overfitting, significantly high values can also cause underfitting (e.g., alpha = 5). So alpha should be chosen wisely. A widely accepted technique is cross-validation, i.e., alpha is iterated over a range of values and the one that gives the highest cross-validation score is chosen.
Take a look at the values of the coefficients in the models above:
Python code:
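The code cell referred to here is not included in this excerpt. As one possible illustration of the cross-validation idea mentioned above, scikit-learn's RidgeCV and LassoCV can pick alpha automatically; the sketch below reuses the data and predictors defined earlier and is only an example, not the notebook's original cell:

from sklearn.linear_model import RidgeCV, LassoCV

# Candidate alphas, in the same spirit as the grids used above
alphas = [1e-15, 1e-10, 1e-8, 1e-4, 1e-3, 1e-2, 1, 5, 10, 20]

ridge_cv = RidgeCV(alphas=alphas, cv=5).fit(data[predictors], data['y'])
print("Best alpha for Ridge:", ridge_cv.alpha_)

lasso_cv = LassoCV(alphas=alphas, cv=5, max_iter=int(1e5)).fit(data[predictors], data['y'])
print("Best alpha for Lasso:", lasso_cv.alpha_)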
End of explanation |
7,078 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Given a list of variant length features, for example: | Problem:
import pandas as pd
import numpy as np
import sklearn
f = load_data()
from sklearn.preprocessing import MultiLabelBinarizer
new_f = MultiLabelBinarizer().fit_transform(f) |
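For illustration, with a hypothetical list of variable-length feature lists (the actual data returned by load_data() is not shown here), the transform produces one indicator column per distinct feature value:

from sklearn.preprocessing import MultiLabelBinarizer

sample = [['t1', 't2', 't3'], ['t1'], ['t2', 't4']]   # hypothetical input
mlb = MultiLabelBinarizer()
print(mlb.fit_transform(sample))
# [[1 1 1 0]
#  [1 0 0 0]
#  [0 1 0 1]]
print(mlb.classes_)   # ['t1' 't2' 't3' 't4']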
7,079 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step3: CSE 6040, Fall 2015 [02]
Step5: Q
Step6: 1
Step7: Method 1. Let's use a data structure called a dictionary, which stores (key, value) pairs.
Step8: Method 2. Let's use a different data structure, called a set. It essentially implements a set in the mathematical sense.
Step9: Q
Step11: The variable m in the example contains a match object if the pattern is found. You can perform queries against the match object, such as the one illustrated above.
Beyond exact text, there is a rich syntax for specifying complex patterns.
For instance, you can use character classes to match both "Impossible" and "impossible" using the pattern, "[Ii]mpossible". That is, the characters in square brackets represent the set of all characters that may match at the given position. As another example, you could match any digit using the character class, "[0123456789]", or the more compact range notation, "[0-9]". Thus, the pattern, "cat[xyz][0-9]hole" would match "catx3hole" but neither "catX3hole" nor "catxzhole".
You can also match the complement of a character class set using a caret, "^", just after the opening square bracket. For instance, "cat[a-z][^0-9]hole" would match "catx@hole" but not "catx3hole".
There are some common character classes, which have additional shortcuts. For instance, the special escaped d, or \d, will match any digit. So, to match a 7-digit phone number, you might use the pattern, "\d\d\d-\d\d\d\d".
Parentheses actually have a special meaning in a regular expression pattern. Therefore, to match an exact parenthesis, you need to prefix it (or escape it) with a backslash, \.
Example 2. For instance, suppose you wish to match a phone number written in the US standard format, like "(404) 555-1212". The pattern is a 3-digit area code, surrounded by parentheses, followed by a space, followed by a 7-digit number separated between the third and fourth digits. This pattern can be encoded as the following regular expression pattern
Step13: Example 4. You can make the phone number pattern more robust by allowing zero or more spaces between the area code and the phone number, using the * option
Step14: Beyond "*", other wildcards include "+" to match one or more, as well as "?" to match zero or one instances.
Example 5. It's also possible to match alternatives, using the or symbol, |. For instance, suppose you wish to recognize either of the words, "prefix" or "suffix"
Step16: Q
Step18: Example 6. Another common use-case is matching a string but extracting just a portion of it. For this purpose, you can use groups.
Consider the simple form of phone numbers, such as "(404) 555-1212". Suppose you wish to match a phone number, but then extract just the digits of the area code from the remainder of the number. For instance, for the above you might produce the list, ['404','555-1212'].
You can identify a group inside the pattern by enclosing the subpattern within parentheses, and then using a special syntax to give it a name. The name allows you to refer to the matched substring later on
Step21: In the preceding example, the syntax, (?P<name>xxx) defines a group named name for the subpattern represented abstractly by xxx. The example calls the match object's method, group("name"), to extract the matching substring.
Example 7. One pitfall with regular expression patterns is that they get messy quickly. The re.compile() function takes a special flag, re.VERBOSE, which allows you to write regular expressions in a more structured and hopefully also more readable way. In particular, you can insert arbitrary amounts of whitespace as well as comments.
Step22: 3
Step23: Q | Python Code:
quote = """I wish you'd stop talking.
I wish you'd stop prying and trying to find things out.
I wish you were dead. No. That was silly and unkind.
But I wish you'd stop talking."""
print (quote)
def countWords1 (s):
"""Counts the number of words in a given input string."""
Lines = s.split ('\n')
count = 0
for line in Lines:
Words_in_line = line.split ()
count = count + len (Words_in_line)
return count
def countWords2 (s):
"""Counts the number of words in a given input string."""
return len (s.split ())  # use the argument, not the global 'quote'
count1 = countWords1 (quote)
count2 = countWords2 (quote)
print ("\nWord count: Method 1 says %d words, and Method 2 says %d." % (count1, count2))
assert count1 == count2
Explanation: CSE 6040, Fall 2015 [02]: Processing unstructured text
Over the next two classes, we will build toward our first computational data mining problem, called the association rule mining problem. The basic task is to identify commonly co-occurring items in a series of transactions.
We will apply this problem to a corpus of unstructured text. Consequently, this first class will introduce (or review, for some of you) a few essential useful Python tools for this problem:
Strings
Sets
Dictionaries
Regular expressions
Files
0: Word count
Given a fragment of text, represented by a string (possibly including newlines), how many words does it contain?
Consider the following two methods to count words in a string. Look at these with a partner.
End of explanation
def yourCountWords (s):
"""Insert your method here."""
return 0
# Write some code to test your implementation here as well.
Explanation: Q: Which would the two of you predict will be better, and why?
(Insert your response to the above question(s) here.)
Q: When might these methods not work as expected? With your partner, come up with one example input and write your own word counter, below, that handles that case.
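One kind of input that trips up both methods is text where words are separated by punctuation rather than whitespace (e.g., "talking.I" counts as a single word). One possible answer — an illustrative sketch, previewing the regular-expression material later in this notebook — treats any run of letters or apostrophes as a word:

import re

def yourCountWords (s):
    """Counts words, treating any run of letters or apostrophes as a word."""
    return len (re.findall (r"[A-Za-z']+", s))

print (yourCountWords ("I wish you'd stop talking.I wish you'd stop prying."))  # 10, whereas a plain split() sees 9 tokens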
End of explanation
Emails = ['[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]', '[email protected]']
print (Emails)
true_answer = 5
print ("\n'Emails' has %d unique addresses." % true_answer)
Explanation: 1: Counting unique strings
Suppose you are given a Python list of email addresses. Determine the number of unique addresses.
End of explanation
Dict = {}
for email in Emails:
Dict[email] = 1
count = len (Dict)
assert count == true_answer
print ("(Method 1 worked!)")
Explanation: Method 1. Let's use a data structure called a dictionary, which stores (key, value) pairs.
End of explanation
UniqueEmails = set (Emails)
count = len (UniqueEmails)
assert count == true_answer
print ("Method 2 worked!")
Explanation: Method 2. Let's use a different data structure, called a set. It essentially implements a set in the mathematical sense.
End of explanation
import re # Loads the regular expression library
p = re.compile ("impossible")
m = p.search ("This mission is impossible.")
if m == None:
print ("Not found.")
else:
print ("Found pattern at position %d" % m.start ())
Explanation: Q: So, which method is better, and why? If you think one method is better than another for this problem, for what kinds of problems would you prefer the other method?
(Insert your response to the above question(s) here.)
2: Regular expressions
The preceding exercise hints at a general problem of finding specific patterns in text. A handy tool for this problem is Python's regular expression library.
A regular expression is a specially formatted pattern, written as a string. Matching patterns with regular expressions has 3 steps:
You come up with a pattern to find
You compile it into a pattern object
You apply the pattern object to a string, to find instances of the pattern within the string
It is easiest to see by example. What follows is just a small sample of what is possible with regular expressions in Python; refer to the regular expression documentation for many more examples and details.
Example 1. The simplest pattern is text that you wish to match exactly. For instance, suppose you wish to find an instance of the word "impossible" in a piece of text. Here is a snippet of Python code to do it.
Run this snippet. Try changing the search string and see what happens.
End of explanation
def findPhone1 (s):
"""Returns the first instance of a phone number in 's', or 'None'."""
phonePattern = re.compile ("\(\d\d\d\) \d\d\d-\d\d\d\d")
hasPhone = phonePattern.search (s)
if hasPhone:
a = hasPhone.start ()
b = hasPhone.end ()
phone = s[a:b]
else:
phone = None
return phone
message = "Hi Betty. Give me a ring at (404) 555-1212 when you get a chance."
findPhone1 (message)
Explanation: The variable m in the example contains a match object if the pattern is found. You can perform queries against the match object, such as the one illustrated above.
Beyond exact text, there is a rich syntax for specifying complex patterns.
For instance, you can use character classes to match both "Impossible" and "impossible" using the pattern, "[Ii]mpossible". That is, the characters in square brackets represent the set of all characters that may match at the given position. As another example, you could match any digit using the character class, "[0123456789]", or the more compact range notation, "[0-9]". Thus, the pattern, "cat[xyz][0-9]hole" would match "catx3hole" but neither "catX3hole" nor "catxzhole".
You can also match the complement of a character class set using a caret, "^", just after the opening square bracket. For instance, "cat[a-z][^0-9]hole" would match "catx@hole" but not "catx3hole".
There are some common character classes, which have additional shortcuts. For instance, the special escaped d, or \d, will match any digit. So, to match a 7-digit phone number, you might use the pattern, "\d\d\d-\d\d\d\d".
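A few quick checks of the character-class examples above (an illustrative snippet, not one of the notebook's original cells):

print (re.search ("[Ii]mpossible", "Mission: Impossible") is not None)   # True
print (re.search ("cat[xyz][0-9]hole", "catx3hole") is not None)         # True
print (re.search ("cat[xyz][0-9]hole", "catX3hole") is not None)         # False
print (re.search ("cat[a-z][^0-9]hole", "catx@hole") is not None)        # True
print (re.search ("cat[a-z][^0-9]hole", "catx3hole") is not None)        # False
print (re.search ("\d\d\d-\d\d\d\d", "call 555-1212") is not None)       # True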
Parentheses actually have a special meaning in a regular expression pattern. Therefore, to match an exact parenthesis, you need to prefix it (or escape it) with a backslash, \.
Example 2. For instance, suppose you wish to match a phone number written in the US standard format, like "(404) 555-1212". The pattern is a 3-digit area code, surrounded by parentheses, followed by a space, followed by a 7-digit number separated between the third and fourth digits. This pattern can be encoded as the following regular expression pattern:
\(\d\d\d\) \d\d\d-\d\d\d\d
Try the following example, which demonstrates the phone number matching pattern.
End of explanation
def findPhone2 (s):
"""Returns the first instance of a phone number in 's', or 'None'."""
phonePattern = re.compile ("\(\d\d\d\) *\d\d\d-\d\d\d\d")
hasPhone = phonePattern.search (s)
if hasPhone:
a = hasPhone.start ()
b = hasPhone.end ()
phone = s[a:b]
else:
phone = None
return phone
findPhone2 ("Phone: (404)555-1212")
Explanation: Example 4. You can make the phone number pattern more robust by allowing zero or more spaces between the area code and the phone number, using the * option:
End of explanation
fixFinder = re.compile ("(pre|suf)fix")
assert fixFinder.search ("prefix")
assert fixFinder.search ("suffix")
assert not fixFinder.search ("infix")
Explanation: Beyond "*", other wildcards include "+" to match one or more, as well as "?" to match zero or one instances.
Example 5. It's also possible to match alternatives, using the or symbol, |. For instance, suppose you wish to recognize either of the words, "prefix" or "suffix":
End of explanation
def yourPhoneFinder (s):
"""Returns the first instance of a phone number in 's', or 'None'."""
# Pattern with optional parentheses and separators, so all of the formats asserted below match:
phonePattern = re.compile ("\(?\d\d\d\)?[- ]?\d\d\d-?\d\d\d\d")
hasPhone = phonePattern.search (s)
if hasPhone:
a = hasPhone.start ()
b = hasPhone.end ()
phone = s[a:b]
else:
phone = None
return phone
assert yourPhoneFinder ("(404)555-1212")
assert yourPhoneFinder ("(404) 555-1212")
assert yourPhoneFinder ("404-555-1212")
assert yourPhoneFinder ("4045551212")
Explanation: Q: Apply these techniques to our phone number finder. Define a function that can match phone numbers in any of the following forms:
(404) 555-1212
404-555-1212
404-5551212
404555-1212
4045551212
End of explanation
def findPhone3 (s):
"""Returns the first instance of a phone number in 's', or 'None'."""
phonePattern = re.compile ("\((?P<areacode>\d\d\d)\) (?P<number>\d\d\d-\d\d\d\d)")
hasPhone = phonePattern.search (s)
if hasPhone:
areacode = hasPhone.group ('areacode')
number = hasPhone.group ('number')
phone = [areacode, number]
else:
phone = None
return phone
findPhone3 (message)
Explanation: Example 6. Another common use-case is matching a string but extracting just a portion of it. For this purpose, you can use groups.
Consider the simple form of phone numbers, such as "(404) 555-1212". Suppose you wish to match a phone number, but then extract just the digits of the area code from the remainder of the number. For instance, for the above you might produce the list, ['404','555-1212'].
You can identify a group inside the pattern by enclosing the subpattern within parentheses, and then using a special syntax to give it a name. The name allows you to refer to the matched substring later on:
End of explanation
def findPhone4 (s):
"""Returns the first instance of a phone number in 's', or 'None'."""
phonePattern = re.compile (r"""
# Area code:
\(
(?P<areacode>\d\d\d)
\)
# Optional separator (zero or more spaces)
\s*
# Phone number
(?P<number>\d\d\d-\d\d\d\d)
""", re.VERBOSE)
hasPhone = phonePattern.search (s)
if hasPhone:
areacode = hasPhone.group ('areacode')
number = hasPhone.group ('number')
phone = [areacode, number]
else:
phone = None
return phone
findPhone4 (message)
Explanation: In the preceding example, the syntax, (?P<name>xxx) defines a group named name for the subpattern represented abstractly by xxx. The example calls the match object's method, group("name"), to extract the matching substring.
Example 7. One pitfall with regular expression patterns is that they get messy quickly. The re.compile() function takes a special flag, re.VERBOSE, which allows you to write regular expressions in a more structured and hopefully also more readable way. In particular, you can insert arbitrary amounts of whitespace as well as comments.
End of explanation
inbox = open ('skilling-j.inbox', 'r') # 'r' = read mode; use 'w' for writing
assert inbox # Makes sure it opened OK
all_messages = inbox.read ()
inbox.close () # Should close a file when you are done
# Print first 500 characters
print (all_messages[0:500])
Explanation: 3: File I/O
Reading from or writing to a file is accomplished through a file object.
In this example, the file "skilling-j.inbox" is a text file containing a bunch of email messages. You should download it if you haven't done so already: [download]. (If you are pulling directly from the class notebooks Github repo, this file is included already.)
Example 1. You can read the entire file into a string by using open() to create a file object, and then reading its contents:
End of explanation
inbox = open ('skilling-j.inbox', 'r') # 'r' = read mode; use 'w' for writing
assert inbox # Makes sure it opened OK
count = 0
for line in inbox: # reads one line at a time
count = count + 1
inbox.close ()
print ("The file has %d lines." % count)
Explanation: Q: Do you anticipate any pitfalls in this approach?
(Insert your response to the above question(s) here.)
Example 2. A more memory-efficient way to read a file is to read chunks at a time. For text files, a particularly convenient way is to read the file one line at a time, using a file object's readline () method, or by looping directly over the object. The following example does so in order to count the number of lines in the file.
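Putting the pieces together — regular expressions, line-by-line file reading, and sets — here is a sketch that pulls the email addresses out of the same file and counts the unique ones. The address pattern is a rough, illustrative one, not a complete email grammar:

emailPattern = re.compile (r"[\w.+-]+@[\w.-]+")

unique_addresses = set ()
inbox = open ('skilling-j.inbox', 'r')
for line in inbox:                      # one line at a time, as in the cell above
    for address in emailPattern.findall (line):
        unique_addresses.add (address)
inbox.close ()

print ("Found %d unique email addresses." % len (unique_addresses))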
End of explanation |
7,080 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Variational Autoencoder in TensorFlow
Variational Autoencoders (VAE) are a popular model that allows for unsupervised (and semi-supervised) learning. In this notebook, we'll implement a simple VAE on the MNIST dataset.
One of the primary goals of the VAE (and auto-encoders in general) is to reconstruct the original input. Why would we want to do that? At first glance, such a model seems silly
Step2: Encoder
The encoder deterministically transforms the data $x$ from the data space to the latent space of $z$. Since we're dealing with a variational autoencoder, we attempt to model the distribution of the latent space given the input, represented by $q(z|x)$. This isn't immediately obvious in the code implementation, but we assume a standard Gaussian prior on this distribution, and our encoder returns the mean and variance (actually log-variance) of this distribution. We use log-variance because our model returns a real number, while variances must be positive.
MNIST is a very simple dataset, so let's also keep the model simple
Step4: Note that we use a couple features of TF-Slim here
Step6: Loss
Prof. Jun Zhu talked in class about the theoretical motivation for the loss of the VAE model. Like all variational inference techniques, it tries to match the variational posterior distribution (here a neural network) with the true posterior. However, at the end of the derivation, we can think of our model as trading off two goals
Step8: Visualization
It'll be nice to visualize the reconstructions that our model generates to see what it learns. This helper function plots the original inputs in one column and the reconstructions next to them in another column. I also may or may not have stolen it from Alex Lew, who included it in his GAN notebook (03B)...
Step9: Define the graph and train
All of the functions we've written thus far are just that
Step10: <sub>[1] The primary purpose of TensorFlow is to construct a computation graph connecting Tensors and operations. Each of these nodes must be assigned a unique name; if the user does not specify one, a unique name is automatically generated, like 'Placeholder_2', with the number at the end incrementing each time you create a new node of that type. Attempting to create a node with a name already found in the graph raises an error.</sub>
<sub>So how can this be problematic? In the Coding Environments notebook (00B), it was mentioned that code from previously run cells persists. As such, if we're programming interactively and want to rebuild our graph after some updates, the new updated nodes we want to add collide with the names from our previous run, throwing an error. Why didn't we have to worry about this before? In the past, we haven't been naming our variables, so TensorFlow has been giving the nodes new unique names every time we update the graph and adding them to the collection of nodes from previous runs; the old nodes are never called, so they just sit there. However, TF-Slim does name the variables it generates, thus causing the problem. We can solve this by creating a new graph object before we define our computation graph, so every time we want to make modifications to the graph, we start anew.</sub>
<sub>If you're confused by that explanation, I wouldn't worry about it. It's not necessary for the program to run. It's there so we can re-run the cell defining the computation graph without restarting the entire kernel to clear memory of previous variables. In a traditionally written Python program (i.e. not IPython), you wouldn't need to do this.</sub>
For training, we'll stay simple and train for 20000 iterations, visualizing our results with 5 digits from the validation set after every 1000 minibatches. Notice that this model is completely unsupervised | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
slim = tf.contrib.slim
# Import data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
Explanation: Variational Autoencoder in TensorFlow
Variational Autoencoders (VAE) are a popular model that allows for unsupervised (and semi-supervised) learning. In this notebook, we'll implement a simple VAE on the MNIST dataset.
One of the primary goals of the VAE (and auto-encoders in general) is to reconstruct the original input. Why would we want to do that? At first glance, such a model seems silly: a simple identity function achieves the same thing with perfect results. However, with an autoencoder, we can learn a compresesed representation in a smaller latent space, allowing us to learn features and structure of the data. Autoencoders are composed of two arms, the encoder and decoder, which convert values from the data space to the latent space and vice versa, respectively.
Importantly, since we're simply reconstructing the original input, we do not necessarily need labels to do our learning, as we have in previous examples. This is significant, as labels are often far more expensive to acquire than raw data, often prohibitively so. VAEs therefore allow us to leverage abundant unlabeled data. That said, VAEs are also able to take advantage of labels when available as well, either in a completely supervised or semi-supervised setting. Altogether, autoencoders can achieve impressive results on tasks like denoising, segmentation, and even predicting future images.
Imports and Data
First, some package imports and loading of the data. This is similar to what we've done before, with the main difference being that we're going to use TensorFlow Slim, as a follow-up to notebook 02A.
End of explanation
def encoder(x):
"""Network q(z|x)"""
with slim.arg_scope([slim.fully_connected],
activation_fn=tf.nn.relu,
weights_initializer=tf.truncated_normal_initializer(0.0, 0.1)):
mu_logvar = slim.fully_connected(x, 128, scope='fc1')
mu_logvar = slim.fully_connected(mu_logvar, 128, activation_fn=None, scope='fc2')
return mu_logvar
Explanation: Encoder
The encoder deterministically transforms the data $x$ from the data space to the latent space of $z$. Since we're dealing with a variational autoencoder, we attempt to model the distribution of the latent space given the input, represented by $q(z|x)$. This isn't immediately obvious in the code implementation, but we assume a standard Gaussian prior on this distribution, and our encoder returns the mean and variance (actually log-variance) of this distribution. We use log-variance because our model returns a real number, while variances must be positive.
MNIST is a very simple dataset, so let's also keep the model simple: an MLP with 2 fully connected layers. We name the output mu_logvar as we will be interpretting the first half of the final 128-dimensional vector as the mean $\mu$ and the second half as the log-variance log($\sigma^2$).
End of explanation
def decoder(mu_logvar):
"""Network p(x|z)"""
# Interpret z as concatenation of mean and log variance
mu, logvar = tf.split(mu_logvar, num_or_size_splits=2, axis=1)
# Standard deviation must be positive
stddev = tf.sqrt(tf.exp(logvar))
# Draw a z from the distribution
epsilon = tf.random_normal(tf.shape(stddev))
z = mu + tf.multiply(stddev, epsilon)
# Decoding arm
with slim.arg_scope([slim.fully_connected],
activation_fn=tf.nn.relu,
weights_initializer=tf.truncated_normal_initializer(0.0, 0.1)):
x_logits = slim.fully_connected(z, 128, scope='fc1')
x_logits = slim.fully_connected(x_logits, 784, activation_fn=None, scope='fc2')
# x_hat to be generated from a Bernoulli distribution
x_dist = tf.contrib.distributions.Bernoulli(logits=x_logits, dtype=tf.float32)
return x_logits, x_dist
Explanation: Note that we use a couple features of TF-Slim here:
We use slim.fully_connected() to specify which layers we want to use, without having to worry about defining weight or bias variables beforehand.
We use slim.arg_scope() to specify default arguments so we can leave them out of the definitions of each of the fully connected layers. We can still override the activation_fn for the last layer though.
For this simple model, TF-Slim doesn't actually benefit us all that much, but for the sake of demonstration, we'll stick with it.
Decoder
The decoder is the generative arm of the auotencoder. Just like our encoder learned parameters of a distribution $p(z|x)$, our decoder will learn parameters of a distribution $p(x|z)$. Beceause $x$ is binary data (black and white pixels), we will use a Bernoulli distribution. Our generative neural network will learn the mean of this Bernoulli distribution for each pixel we want to generate. Another viewpoint: if our neural network outputs $\hat{x}_j$ for pixel $j$, it means we believe that the pixel will be white with that probability.
Again, since MNIST is simple, we'll use a 2 layer MLP for the decoder. Importantly, since we are focusing on reconstruction, we make sure that the final output of the decoder $\hat{x}$ is the same dimensions as our input $x$.
End of explanation
def optimizer(x_logits, x, mu_logvar):
"""Define loss functions (reconstruction, KL divergence) and optimizer"""
with tf.variable_scope('optimizer') as scope:
# Reconstruction loss
reconstruction = tf.reduce_sum(tf.nn.sigmoid_cross_entropy_with_logits(labels=x, logits=x_logits), reduction_indices=[1])
# KL divergence
mu, logvar = tf.split(mu_logvar, num_or_size_splits=2, axis=1)
kl_d = -0.5 * tf.reduce_sum(1.0 + logvar - tf.square(mu) - tf.exp(logvar), reduction_indices=[1])
# Total loss
loss = tf.reduce_mean(reconstruction + kl_d)
# ADAM optimizer
train_step = tf.train.AdamOptimizer().minimize(loss)
return train_step
Explanation: Loss
Prof. Jun Zhu talked in class about the theoretical motivation for the loss of the VAE model. Like all variational inference techniques, it tries to match the variational posterior distribution (here a neural network) with the true posterior. However, at the end of the derivation, we can think of our model as trading off two goals:
Reconstruction loss: Our generator produces parameters to a Bernoulli distribution that is supposed to represent $p(x | z)$; because we assume that $z$ is the latent representation of an actual data point $x$, we can measure how well we achieve this goal by measuring the likelihood of $x$ according to that Bernoulli distribution. Another way of thinking of this is that we can measure how similar our reconstructed image is to our original image. The measure of similarity we use is cross-entropy: we think of our model as classifying each pixel as black or white, and we measure how good the classifier is using the classic sigmoid cross-entropy loss.
KL Divergence: Because this model is variational, we also include a KL penalty to impose a Gaussian prior on the latent space. The exact derivation of this term can be found in the original Auto-Encoding Variational Bayes paper. Is a standard Gaussian prior a good assumption? What are the potential weaknesses of this approach?
We use the ADAM algorithm that we've used before for optimization.
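As a small numerical aside (not part of the original notebook), the closed-form term used above, $-\frac{1}{2}(1 + \log\sigma^2 - \mu^2 - \sigma^2)$, can be checked against a Monte Carlo estimate of $KL(\mathcal{N}(\mu,\sigma^2)\,\|\,\mathcal{N}(0,1))$ for a single latent dimension:

import numpy as np

mu, logvar = 0.7, -0.3
sigma2 = np.exp(logvar)

closed_form = -0.5 * (1.0 + logvar - mu**2 - sigma2)

z = np.random.normal(mu, np.sqrt(sigma2), size=1000000)
log_q = -0.5 * (np.log(2 * np.pi * sigma2) + (z - mu) ** 2 / sigma2)   # log N(z; mu, sigma^2)
log_p = -0.5 * (np.log(2 * np.pi) + z ** 2)                            # log N(z; 0, 1)
monte_carlo = np.mean(log_q - log_p)

print(closed_form, monte_carlo)   # the two values should be close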
End of explanation
def visualize_row(image, reconstruction, img_width=28, cmap='gray'):
"""Takes in a tensor of images of given width, and displays them in a column
in a plot, using `cmap` to map from numbers to colors."""
fig, ax = plt.subplots(1, 2)
image = np.reshape(image, [-1, img_width])
reconstruction = np.reshape(reconstruction, [-1, img_width])
plt.figure()
ax[0].imshow(np.clip(image, 0, 1), cmap=cmap)
ax[1].imshow(np.clip(reconstruction, 0, 1), cmap=cmap)
plt.show()
Explanation: Visualization
It'll be nice to visualize the reconstructions that our model generates to see what it learns. This helper function plots the original inputs in one column and the reconstructions next to them in another column. I also may or may not have stolen it from Alex Lew, who included it in his GAN notebook (03B)...
End of explanation
# Reset the graph
tf.reset_default_graph()
# Define input placeholder
x = tf.placeholder(tf.float32,[None, 784], name='x')
# Define VAE graph
with tf.variable_scope('encoder'):
mu_logvar = encoder(x)
with tf.variable_scope('decoder'):
x_logits, x_dist = decoder(mu_logvar)
x_hat = x_dist.sample()
# Optimization
with tf.variable_scope('unlabeled') as scope:
train_step_unlabeled = optimizer(x_logits, x, mu_logvar)
Explanation: Define the graph and train
All of the functions we've written thus far are just that: functions. We still need to call them to assemble our TensorFlow computation graph. At this point, this should be becoming familiar.
One of the small differences is the inclusion of tf.reset_default_graph(), added to remedy a small, unfortunate side effect of using Jupyter and TensorFlow in conjunction, but you don't have to worry about it too much to understand the model. A more detailed explanation if you're interested below [1].
End of explanation
with tf.Session() as sess:
# Initialize all variables
sess.run(tf.global_variables_initializer())
# Train VAE model
for i in range(20000):
# Get a training minibatch
batch = mnist.train.next_batch(100)
# Binarize the data
x_binarized = (batch[0] > 0.5).astype(np.float32)
# Train on minibatch
sess.run(train_step_unlabeled, feed_dict={x: x_binarized}) # No labels
# Visualize reconstructions every 1000 iterations
if i % 1000 == 0:
batch = mnist.validation.next_batch(5)
x_binarized = (batch[0] > 0.5).astype(np.float32)
reconstructions = sess.run(x_hat, feed_dict={x: x_binarized})
print("Iteration {0}:".format(i))
visualize_row(batch[0], reconstructions)
Explanation: <sub>[1] The primary purpose of TensorFlow is to construct a computation graph connecting Tensors and operations. Each of these nodes must be assigned a unique name; if the user does not specify one, a unique name is automatically generated, like 'Placeholder_2', with the number at the end incrementing each time you create a new node of that type. Attempting to create a node with a name already found in the graph raises an error.</sub>
<sub>So how can this be problematic? In the Coding Environments notebook (00B), it was mentioned that code from previously run cells persists. As such, if we're programming interactively and want to rebuild our graph after some updates, the new updated nodes we want to add collide with the names from our previous run, throwing an error. Why didn't we have to worry about this before? In the past, we haven't been naming our variables, so TensorFlow has been giving the nodes new unique names every time we update the graph and adding them to the collection of nodes from previous runs; the old nodes are never called, so they just sit there. However, TF-Slim does name the variables it generates, thus causing the problem. We can solve this by creating a new graph object before we define our computation graph, so every time we want to make modifications to the graph, we start anew.</sub>
<sub>If you're confused by that explanation, I wouldn't worry about it. It's not necessary for the program to run. It's there so we can re-run the cell defining the computation graph without restarting the entire kernel to clear memory of previous variables. In a traditionally written Python program (i.e. not IPython), you wouldn't need to do this.</sub>
For training, we'll stay simple and train for 20000 iterations, visualizing our results with 5 digits from the validation set after every 1000 minibatches. Notice that this model is completely unsupervised: we never include the digit labels at any point in the process. Within a few thousand iterations, the model should start producing reasonable looking results:
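One natural follow-up (not in the original notebook): because the decoder draws $z = \mu + \sigma\epsilon$ internally, feeding zeros for mu_logvar makes $\mu = 0$ and $\log\sigma^2 = 0$, so the decoder effectively samples from the standard Gaussian prior. This relies on TensorFlow 1.x allowing an intermediate tensor to be overridden via feed_dict, and it must run while the session above is still open (e.g., as extra lines inside the with block):

# Sketch: generate digits from the prior z ~ N(0, I) by overriding the encoder output.
n_samples = 5
prior_mu_logvar = np.zeros((n_samples, 128), dtype=np.float32)  # 128 = concatenated [mu, logvar]
generated = sess.run(x_hat, feed_dict={mu_logvar: prior_mu_logvar})
visualize_row(generated, generated)  # display the sampled digits (both columns show the same samples)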
End of explanation |
7,081 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chain Rule
考慮 $F = f(\mathbf{a},\mathbf{g}(\mathbf{b},\mathbf{h}(\mathbf{c}, \mathbf{i}))$
$\mathbf{a},\mathbf{b},\mathbf{c},$ 代表著權重 , $\mathbf{i}$ 是輸入
站在 \mathbf{g} 的角度,為了要更新權重,我們想算
$\frac{\partial F}{\partial b_i}$
我們需要什麼? 由 chain rule 得知
$\frac{\partial F}{\partial b_i} =
\sum_j \frac{\partial F}{\partial g_j}\frac{\partial g_j}{\partial b_i}$
或者寫成 Jabobian 的形式
$\frac{\partial F}{\partial \mathbf{b}} =
\frac{\partial F}{\partial \mathbf{g}} \frac{\partial \mathbf{g}}{\partial \mathbf{b}}$
所以我們希望前面能傳給我們 $\frac{\partial F}{\partial \mathbf{g}}$
將心比心,因為 $\mathbf{h}$ 也要算 $\frac{\partial F}{\partial \mathbf{c}}$, 所以我們還要負責傳 $\frac{\partial F}{\partial \mathbf{h}}$ 給他。 而因為
$\frac{\partial F}{\partial \mathbf{h}}=
\frac{\partial F}{\partial \mathbf{g}} \frac{\partial \mathbf{g}}{\partial \mathbf{h}}$
所以 $\mathbf{g}$ 中間真正需要負責計算的東西就是 $\frac{\partial \mathbf{g}}{\partial \mathbf{h}}$ 和 $\frac{\partial \mathbf{g}}{\partial \mathbf{b}}$
Gradient descent
誤差函數
我們的誤差函數還是 Cross entropy,
假設輸入值 $x$ 對應到的真實類別是 $y$, 那我們定義誤差函數
$ loss = -\log(q_y)=- \log(Predict(Y=y|x)) $
或比較一般的
$ loss = - p \cdot \log q $
其中 $ p_i = \Pr(Y=i|x) $ 代表真實發生的機率
以一層 hidden layer 的 feedforward neural network 來看
$ L= loss = -p \cdot \log \sigma(C(f(Ax+b))+d) $
由於
$-\log \sigma (Z) = 1 \log (\sum e^{Z_j})-Z$
$\frac{\partial -\log \sigma (Z)}{\partial Z} = 1 \sigma(Z)^T - \delta$
let $U = f(Ax+b) $, $Z=CU+d$
$ \frac{\partial L}{\partial d} = \frac{\partial L}{\partial Z} \frac{\partial CU+d}{\partial d}
= \frac{\partial L}{\partial Z}
= p^T (1 \sigma(Z)^T - \delta)
= \sigma(Z)^T - p^T
= \sigma(CU+d)^T - p^T
$
$ \frac{\partial L}{\partial C_{i,j} }
= \frac{\partial L}{\partial Z} \frac{\partial CU+d}{\partial C_{i,j}}
= (p^T (1 \sigma(Z)^T - \delta))_i U_j
= (\sigma(Z) - p)_i U_j
$
so
$ \frac{\partial L}{\partial C }
= (\sigma(Z) - p) U^T
$
Up to this point everything matches the plain softmax result.
Next, compute the partial derivatives with respect to A and b.
$ \frac{\partial L}{\partial U }
= \frac{\partial L}{\partial Z} \frac{\partial CU+d}{\partial U}
= (p^T (1 \sigma(Z)^T - \delta)) C
= (\sigma(Z) - p)^T C
$
$ \frac{\partial U_k}{\partial b_i}
= \frac{\partial f(A_kx+b_k)}{\partial b_i}
= \delta_{k,i} f'(Ax+b)_i $
$ \frac{\partial L}{\partial b_i }
= ((\sigma(Z) - p)^T C)_i f'(Ax+b)_i$
$ \frac{\partial L}{\partial A_{i,j} }
= ((\sigma(Z) - p)^T C)_i f'(Ax+b)_i x_j$
Task: start by brute-forcing it with the differentiated formulas above
Bring back the earlier softmax, relu and sigmoid functions
Work out the derivatives of relu and sigmoid
Try the mod 3 problem
Randomly initialize A, b, C, d (feel free to try different hidden-layer sizes)
Check the loss
Pick an input x
Compute the gradient
Subtract the gradient from the weights
Check whether the loss decreases (a quick numerical check is sketched below)
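A minimal numerical sanity check for the hand-derived gradients, assuming only numpy; loss_fn here stands for any closure that recomputes the loss with the current weights:
import numpy as np
def numeric_grad(loss_fn, W, eps=1e-6):
    # central-difference estimate of d(loss)/dW, to compare against the formulas above
    g = np.zeros_like(W, dtype=float)
    for idx in np.ndindex(W.shape):
        old = W[idx]
        W[idx] = old + eps
        up = loss_fn()
        W[idx] = old - eps
        down = loss_fn()
        W[idx] = old
        g[idx] = (up - down) / (2 * eps)
    return g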
Step1: $ \frac{\partial L}{\partial d} = \sigma(CU+d)^T - p^T$
$ \frac{\partial L}{\partial C } = (\sigma(Z) - p) U^T$
$ \frac{\partial L}{\partial b_i }
= ((\sigma(Z) - p)^T C)_i f'(Ax+b)_i$
$ \frac{\partial L}{\partial A_{i,j} }
= ((\sigma(Z) - p)^T C)_i f'(Ax+b)_i x_j$
Step2: Exercise: train for 20000 random iterations
Step3: Exercise: tic-tac-toe win detection | Python Code:
# Reference solution: the various functions and their derivatives
%run -i solutions/ff_funcs.py
# Reference solution: computing the loss
%run -i solutions/ff_compute_loss2.py
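# The solution files above are not included in this notebook dump; the sketch
# below is only my guess at the shape of the loss computation (names here are
# illustrative, not the course's own), written in plain numpy.
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def softmax(z):
    e = np.exp(z - z.max())            # subtract the max for numerical stability
    return e / e.sum()

def compute_loss(A, b, C, d, x, y):
    # forward pass: hidden layer U, logits Z = CU + d, cross entropy of the true class y
    U = relu(A @ x + b)
    q = softmax(C @ U + d)
    return -np.log(q[y])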
Explanation: Chain Rule
Consider $F = f(\mathbf{a},\mathbf{g}(\mathbf{b},\mathbf{h}(\mathbf{c}, \mathbf{i})))$
where $\mathbf{a},\mathbf{b},\mathbf{c}$ are the weights and $\mathbf{i}$ is the input.
From $\mathbf{g}$'s point of view, in order to update its weights we want to compute
$\frac{\partial F}{\partial b_i}$
What do we need? The chain rule tells us
$\frac{\partial F}{\partial b_i} =
\sum_j \frac{\partial F}{\partial g_j}\frac{\partial g_j}{\partial b_i}$
or, written in Jacobian form,
$\frac{\partial F}{\partial \mathbf{b}} =
\frac{\partial F}{\partial \mathbf{g}} \frac{\partial \mathbf{g}}{\partial \mathbf{b}}$
So we would like the layer in front of us to pass us $\frac{\partial F}{\partial \mathbf{g}}$.
By the same token, since $\mathbf{h}$ also has to compute $\frac{\partial F}{\partial \mathbf{c}}$, we in turn are responsible for passing $\frac{\partial F}{\partial \mathbf{h}}$ down to it. And since
$\frac{\partial F}{\partial \mathbf{h}}=
\frac{\partial F}{\partial \mathbf{g}} \frac{\partial \mathbf{g}}{\partial \mathbf{h}}$
the only things $\mathbf{g}$ itself really has to compute are $\frac{\partial \mathbf{g}}{\partial \mathbf{h}}$ and $\frac{\partial \mathbf{g}}{\partial \mathbf{b}}$.
Gradient descent
Loss function
Our loss function is still the cross entropy.
If the true class of the input $x$ is $y$, we define the loss as
$ loss = -\log(q_y)=- \log(Predict(Y=y|x)) $
or, more generally,
$ loss = - p \cdot \log q $
where $ p_i = \Pr(Y=i|x) $ is the true probability of class $i$.
For a feedforward neural network with one hidden layer,
$ L= loss = -p \cdot \log \sigma(C(f(Ax+b))+d) $
Since
$-\log \sigma (Z) = 1 \log (\sum e^{Z_j})-Z$
$\frac{\partial -\log \sigma (Z)}{\partial Z} = 1 \sigma(Z)^T - \delta$
let $U = f(Ax+b) $, $Z=CU+d$
$ \frac{\partial L}{\partial d} = \frac{\partial L}{\partial Z} \frac{\partial CU+d}{\partial d}
= \frac{\partial L}{\partial Z}
= p^T (1 \sigma(Z)^T - \delta)
= \sigma(Z)^T - p^T
= \sigma(CU+d)^T - p^T
$
$ \frac{\partial L}{\partial C_{i,j} }
= \frac{\partial L}{\partial Z} \frac{\partial CU+d}{\partial C_{i,j}}
= (p^T (1 \sigma(Z)^T - \delta))_i U_j
= (\sigma(Z) - p)_i U_j
$
so
$ \frac{\partial L}{\partial C }
= (\sigma(Z) - p) U^T
$
Up to this point everything matches the plain softmax result.
Next, compute the partial derivatives with respect to A and b.
$ \frac{\partial L}{\partial U }
= \frac{\partial L}{\partial Z} \frac{\partial CU+d}{\partial U}
= (p^T (1 \sigma(Z)^T - \delta)) C
= (\sigma(Z) - p)^T C
$
$ \frac{\partial U_k}{\partial b_i}
= \frac{\partial f(A_kx+b_k)}{\partial b_i}
= \delta_{k,i} f'(Ax+b)_i $
$ \frac{\partial L}{\partial b_i }
= ((\sigma(Z) - p)^T C)_i f'(Ax+b)_i$
$ \frac{\partial L}{\partial A_{i,j} }
= ((\sigma(Z) - p)^T C)_i f'(Ax+b)_i x_j$
Task: start by brute-forcing it with the differentiated formulas above
Bring back the earlier softmax, relu and sigmoid functions
Work out the derivatives of relu and sigmoid
Try the mod 3 problem
Randomly initialize A, b, C, d (feel free to try different hidden-layer sizes)
Check the loss
Pick an input x
Compute the gradient
Subtract the gradient from the weights
Check whether the loss decreases
End of explanation
# Compute the gradient
%run -i solutions/ff_compute_gradient.py
# Update the weights and compute the new loss
%run -i solutions/ff_update.py
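# Again, the solution files are not shown here; this is only a hedged sketch of
# the gradient formulas derived above (relu/softmax as sketched earlier, p is the
# one-hot vector of the true class). Function and variable names are my own.
def compute_gradients(A, b, C, d, x, p):
    U = relu(A @ x + b)
    q = softmax(C @ U + d)                 # sigma(Z)
    dZ = q - p                             # dL/dZ = sigma(Z) - p
    grad_d = dZ
    grad_C = np.outer(dZ, U)               # (sigma(Z) - p) U^T
    dU = C.T @ dZ                          # dL/dU
    db = dU * (A @ x + b > 0)              # relu'(Ax+b) is 1 where Ax+b > 0
    grad_b = db
    grad_A = np.outer(db, x)               # outer product with the input
    return grad_A, grad_b, grad_C, grad_d

# a single gradient-descent step would then look like:
# gA, gb, gC, gd = compute_gradients(A, b, C, d, x, p)
# A, b, C, d = A - 0.5*gA, b - 0.5*gb, C - 0.5*gC, d - 0.5*gd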
Explanation: $ \frac{\partial L}{\partial d} = \sigma(CU+d)^T - p^T$
$ \frac{\partial L}{\partial C } = (\sigma(Z) - p) U^T$
$ \frac{\partial L}{\partial b_i }
= ((\sigma(Z) - p)^T C)_i f'(Ax+b)_i$
$ \frac{\partial L}{\partial A_{i,j} }
= ((\sigma(Z) - p)^T C)_i f'(Ax+b)_i x_j$
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
# Reference solution
%run -i solutions/ff_train_mod3.py
plt.plot(L_history);
# Test the trained model
for i in range(16):
x = Vector(i%2, (i>>1)%2, (i>>2)%2, (i>>3)%2)
y = i%3
U = relu(A@x+b)
q = softmax(C@U+d)
print(q.argmax(), y)
Explanation: Exercise: train for 20000 random iterations
End of explanation
def truth(x):
x = x.reshape(3,3)
return int(x.all(axis=0).any() or
x.all(axis=1).any() or
x.diagonal().all() or
x[::-1].diagonal().all())
%run -i solutions/ff_train_ttt.py
plt.plot(accuracy_history);
Explanation: Exercise: tic-tac-toe win detection
End of explanation |
7,082 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LeNet Lab Solution
Source
Step1: The MNIST data that TensorFlow pre-loads comes as 28x28x1 images.
However, the LeNet architecture only accepts 32x32xC images, where C is the number of color channels.
In order to reformat the MNIST data into a shape that LeNet will accept, we pad the data with two rows of zeros on the top and bottom, and two columns of zeros on the left and right (28+2+2 = 32).
You do not need to modify this section.
Step2: Visualize Data
View a sample from the dataset.
You do not need to modify this section.
Step3: Preprocess Data
Shuffle the training data.
You do not need to modify this section.
Step4: Setup TensorFlow
The EPOCH and BATCH_SIZE values affect the training speed and model accuracy.
You do not need to modify this section.
Step5: SOLUTION
Step6: Features and Labels
Train LeNet to classify MNIST data.
x is a placeholder for a batch of input images.
y is a placeholder for a batch of output labels.
You do not need to modify this section.
Step7: Training Pipeline
Create a training pipeline that uses the model to classify MNIST data.
You do not need to modify this section.
Step8: Model Evaluation
Evaluate how well the loss and accuracy of the model for a given dataset.
You do not need to modify this section.
Step9: Train the Model
Run the training data through the training pipeline to train the model.
Before each epoch, shuffle the training set.
After each epoch, measure the loss and accuracy of the validation set.
Save the model after training.
You do not need to modify this section.
Step10: Evaluate the Model
Once you are completely satisfied with your model, evaluate the performance of the model on the test set.
Be sure to only do this once!
If you were to measure the performance of your trained model on the test set, then improve your model, and then measure the performance of your model on the test set again, that would invalidate your test results. You wouldn't get a true measure of how well your model would perform against real data.
You do not need to modify this section. | Python Code:
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", reshape=False)
X_train, y_train = mnist.train.images, mnist.train.labels
X_validation, y_validation = mnist.validation.images, mnist.validation.labels
X_test, y_test = mnist.test.images, mnist.test.labels
assert(len(X_train) == len(y_train))
assert(len(X_validation) == len(y_validation))
assert(len(X_test) == len(y_test))
print()
print("Image Shape: {}".format(X_train[0].shape))
print()
print("Training Set: {} samples".format(len(X_train)))
print("Validation Set: {} samples".format(len(X_validation)))
print("Test Set: {} samples".format(len(X_test)))
Explanation: LeNet Lab Solution
Source: Yann LeCun
Load Data
Load the MNIST data, which comes pre-loaded with TensorFlow.
You do not need to modify this section.
End of explanation
import numpy as np
# Pad images with 0s
X_train = np.pad(X_train, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_validation = np.pad(X_validation, ((0,0),(2,2),(2,2),(0,0)), 'constant')
X_test = np.pad(X_test, ((0,0),(2,2),(2,2),(0,0)), 'constant')
print("Updated Image Shape: {}".format(X_train[0].shape))
Explanation: The MNIST data that TensorFlow pre-loads comes as 28x28x1 images.
However, the LeNet architecture only accepts 32x32xC images, where C is the number of color channels.
In order to reformat the MNIST data into a shape that LeNet will accept, we pad the data with two rows of zeros on the top and bottom, and two columns of zeros on the left and right (28+2+2 = 32).
You do not need to modify this section.
End of explanation
import random
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
index = random.randint(0, len(X_train))
image = X_train[index].squeeze()
plt.figure(figsize=(1,1))
plt.imshow(image, cmap="gray")
print(y_train[index])
Explanation: Visualize Data
View a sample from the dataset.
You do not need to modify this section.
End of explanation
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)
Explanation: Preprocess Data
Shuffle the training data.
You do not need to modify this section.
End of explanation
import tensorflow as tf
EPOCHS = 10
BATCH_SIZE = 128
Explanation: Setup TensorFlow
The EPOCH and BATCH_SIZE values affect the training speed and model accuracy.
You do not need to modify this section.
End of explanation
from tensorflow.contrib.layers import flatten
def LeNet(x):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
# SOLUTION: Layer 1: Convolutional. Input = 32x32x1. Output = 28x28x6.
conv1_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 1, 6), mean = mu, stddev = sigma))
conv1_b = tf.Variable(tf.zeros(6))
conv1 = tf.nn.conv2d(x, conv1_W, strides=[1, 1, 1, 1], padding='VALID') + conv1_b
# SOLUTION: Activation.
conv1 = tf.nn.relu(conv1)
# SOLUTION: Pooling. Input = 28x28x6. Output = 14x14x6.
conv1 = tf.nn.max_pool(conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# SOLUTION: Layer 2: Convolutional. Output = 10x10x16.
conv2_W = tf.Variable(tf.truncated_normal(shape=(5, 5, 6, 16), mean = mu, stddev = sigma))
conv2_b = tf.Variable(tf.zeros(16))
conv2 = tf.nn.conv2d(conv1, conv2_W, strides=[1, 1, 1, 1], padding='VALID') + conv2_b
# SOLUTION: Activation.
conv2 = tf.nn.relu(conv2)
# SOLUTION: Pooling. Input = 10x10x16. Output = 5x5x16.
conv2 = tf.nn.max_pool(conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='VALID')
# SOLUTION: Flatten. Input = 5x5x16. Output = 400.
fc0 = flatten(conv2)
# SOLUTION: Layer 3: Fully Connected. Input = 400. Output = 120.
fc1_W = tf.Variable(tf.truncated_normal(shape=(400, 120), mean = mu, stddev = sigma))
fc1_b = tf.Variable(tf.zeros(120))
fc1 = tf.matmul(fc0, fc1_W) + fc1_b
# SOLUTION: Activation.
fc1 = tf.nn.relu(fc1)
# SOLUTION: Layer 4: Fully Connected. Input = 120. Output = 84.
fc2_W = tf.Variable(tf.truncated_normal(shape=(120, 84), mean = mu, stddev = sigma))
fc2_b = tf.Variable(tf.zeros(84))
fc2 = tf.matmul(fc1, fc2_W) + fc2_b
# SOLUTION: Activation.
fc2 = tf.nn.relu(fc2)
# SOLUTION: Layer 5: Fully Connected. Input = 84. Output = 10.
fc3_W = tf.Variable(tf.truncated_normal(shape=(84, 10), mean = mu, stddev = sigma))
fc3_b = tf.Variable(tf.zeros(10))
logits = tf.matmul(fc2, fc3_W) + fc3_b
return logits
Explanation: SOLUTION: Implement LeNet-5
Implement the LeNet-5 neural network architecture.
This is the only cell you need to edit.
Input
The LeNet architecture accepts a 32x32xC image as input, where C is the number of color channels. Since MNIST images are grayscale, C is 1 in this case.
Architecture
Layer 1: Convolutional. The output shape should be 28x28x6.
Activation. Your choice of activation function.
Pooling. The output shape should be 14x14x6.
Layer 2: Convolutional. The output shape should be 10x10x16.
Activation. Your choice of activation function.
Pooling. The output shape should be 5x5x16.
Flatten. Flatten the output shape of the final pooling layer such that it's 1D instead of 3D. The easiest way to do this is by using tf.contrib.layers.flatten, which is already imported for you.
Layer 3: Fully Connected. This should have 120 outputs.
Activation. Your choice of activation function.
Layer 4: Fully Connected. This should have 84 outputs.
Activation. Your choice of activation function.
Layer 5: Fully Connected (Logits). This should have 10 outputs.
Output
Return the logits from the final fully connected layer.
End of explanation
x = tf.placeholder(tf.float32, (None, 32, 32, 1))
y = tf.placeholder(tf.int32, (None))
one_hot_y = tf.one_hot(y, 10)
Explanation: Features and Labels
Train LeNet to classify MNIST data.
x is a placeholder for a batch of input images.
y is a placeholder for a batch of output labels.
You do not need to modify this section.
End of explanation
rate = 0.001
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=one_hot_y)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
Explanation: Training Pipeline
Create a training pipeline that uses the model to classify MNIST data.
You do not need to modify this section.
End of explanation
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
Explanation: Model Evaluation
Evaluate how well the loss and accuracy of the model for a given dataset.
You do not need to modify this section.
End of explanation
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(X_train)
print("Training...")
print()
for i in range(EPOCHS):
X_train, y_train = shuffle(X_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = X_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
validation_accuracy = evaluate(X_validation, y_validation)
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, 'lenet')
print("Model saved")
Explanation: Train the Model
Run the training data through the training pipeline to train the model.
Before each epoch, shuffle the training set.
After each epoch, measure the loss and accuracy of the validation set.
Save the model after training.
You do not need to modify this section.
End of explanation
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('.'))
test_accuracy = evaluate(X_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
Explanation: Evaluate the Model
Once you are completely satisfied with your model, evaluate the performance of the model on the test set.
Be sure to only do this once!
If you were to measure the performance of your trained model on the test set, then improve your model, and then measure the performance of your model on the test set again, that would invalidate your test results. You wouldn't get a true measure of how well your model would perform against real data.
You do not need to modify this section.
End of explanation |
7,083 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
Step1: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
Step2: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
Step3: And we can see the characters encoded as integers.
Step4: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
Step5: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this
Step6: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
Step7: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob.
Step8: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like
```python
def build_cell(num_units, keep_prob)
Step9: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
Step10: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
Step11: Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
Step12: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
Step13: Hyperparameters
Here I'm defining the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular
Step14: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
Step15: Saved checkpoints
Read up on saving and loading checkpoints here
Step16: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
Step17: Here, pass in the path to a checkpoint and sample from the network. | Python Code:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
Explanation: Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
End of explanation
text[:100]
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
encoded[:100]
Explanation: And we can see the characters encoded as integers.
End of explanation
len(vocab)
Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
End of explanation
def get_batches(arr, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the number of characters per batch and number of batches we can make
characters_per_batch = n_seqs * n_steps
n_batches = len(arr)//characters_per_batch
# Keep only enough characters to make full batches
arr = arr[:n_batches * characters_per_batch]
# Reshape into n_seqs rows
arr = arr.reshape((n_seqs, -1))
for n in range(0, arr.shape[1], n_steps):
# The features
x = arr[:, n:n+n_steps]
# The targets, shifted by one
y = np.zeros_like(x)
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
yield x, y
Explanation: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
<img src="assets/[email protected]" width=500px>
<br>
We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.
The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep.
After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.
Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this:
python
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
where x is the input batch and y is the target batch.
The way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
End of explanation
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, keep_prob
Explanation: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob.
End of explanation
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
def build_cell(lstm_size, keep_prob):
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
return drop
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([build_cell(lstm_size, keep_prob) for _ in range(num_layers)])
initial_state = cell.zero_state(batch_size, tf.float32)
return cell, initial_state
Explanation: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like
```python
def build_cell(num_units, keep_prob):
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
return drop
tf.contrib.rnn.MultiRNNCell([build_cell(num_units, keep_prob) for _ in range(num_layers)])
```
Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Below, we implement the build_lstm function to create these LSTM cells and the initial state.
End of explanation
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
x: Input tensor
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# That is, the shape should be batch_size*num_steps rows by lstm_size columns
seq_output = tf.concat(lstm_output, axis=1)
x = tf.reshape(seq_output, [-1, in_size])
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(out_size))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits = tf.matmul(x, softmax_w) + softmax_b
# Use softmax to get the probabilities for predicted characters
out = tf.nn.softmax(logits, name='predictions')
return out, logits
Explanation: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
End of explanation
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per batch_size per step
y_one_hot = tf.one_hot(targets, num_classes)
y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
# Softmax cross entropy loss
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
loss = tf.reduce_mean(loss)
return loss
Explanation: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
End of explanation
def build_optimizer(loss, learning_rate, grad_clip):
''' Build optmizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
Explanation: Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
End of explanation
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
else:
batch_size, num_steps = batch_size, num_steps
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
# Build the LSTM cell
cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot = tf.one_hot(self.inputs, num_classes)
# Run each sequence step through the RNN and collect the outputs
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
# Loss and optimizer (with gradient clipping)
self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
Explanation: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
End of explanation
batch_size = 100 # Sequences per batch
num_steps = 100 # Number of sequence steps per batch
lstm_size = 512 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.001 # Learning rate
keep_prob = 0.5 # Dropout keep probability
Explanation: Hyperparameters
Here I'm defining the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
Best models strategy
The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
End of explanation
epochs = 20
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
Explanation: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
End of explanation
tf.train.get_checkpoint_state('checkpoints')
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation |
7,084 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Invoking an ML API
This notebook demonstrates how to invoke a deployed ML model (in this case, the Google Cloud Natural Language API)
from a batch or streaming pipeline
We will use Apache Beam.
Install Beam
Restart the kernel after installing Beam
Step2: Try out Beam
Step3: Changing input to BigQuery and running on Cloud
Use DataflowRunner | Python Code:
%pip install --upgrade --quiet apache-beam[gcp]
Explanation: Invoking an ML API
This notebook demonstrates how to invoke a deployed ML model (in this case, the Google Cloud Natural Language API)
from a batch or streaming pipeline
We will use Apache Beam.
Install Beam
Restart the kernel after installing Beam
End of explanation
!rm -rf output.txt* beam-temp*
import apache_beam as beam
from apache_beam.ml.gcp import naturallanguageml as nlp
def parse_nlp_result(response):
"""Pulls required info from a response that looks like this:
sentences {
text {
content: "I love walking along the Seine."
}
sentiment {
magnitude: 0.699999988079071
score: 0.699999988079071
}
}
entities {
name: "Seine"
type: LOCATION
metadata {
key: "mid"
value: "/m/0f3vz"
}
metadata {
key: "wikipedia_url"
value: "https://en.wikipedia.org/wiki/Seine"
}
salience: 1.0
mentions {
text {
content: "Seine"
begin_offset: 25
}
type: PROPER
}
}
document_sentiment {
magnitude: 0.699999988079071
score: 0.699999988079071
}
language: "en"
def get_entity_value(entities, search_key):
for entity in entities:
return (entity.metadata[search_key])
return ''
return [
# response, # entire string
response.sentences[0].text.content, # first sentence
[entity.name for entity in response.entities], # all entities
[entity.metadata['wikipedia_url'] for entity in response.entities], # urls
response.language,
response.document_sentiment.score
]
features = nlp.types.AnnotateTextRequest.Features(
extract_entities=True,
extract_document_sentiment=True,
extract_syntax=False
)
p = beam.Pipeline()
(p
| beam.Create(['Has President Obama been to Paris?', 'Sophie loves walking along the Seine.', "C'est terrible"])
| beam.Map(lambda x : nlp.Document(x, type='PLAIN_TEXT'))
| nlp.AnnotateText(features)
| beam.Map(parse_nlp_result)
| beam.io.WriteToText('output.txt')
)
result = p.run()
result.wait_until_finish()
!cat output.txt*
Explanation: Try out Beam
End of explanation
%%bigquery
SELECT text FROM `bigquery-public-data.hacker_news.comments`
WHERE author = 'AF' LIMIT 10
%%writefile nlp_pipeline.py
PROJECT='ai-analytics-solutions'
BUCKET='ai-analytics-solutions-kfpdemo'
REGION='us-central1'
from datetime import datetime
import apache_beam as beam
def parse_nlp_result(response):
return [
# response, # entire string
response.sentences[0].text.content,
response.language,
response.document_sentiment.score
]
def run():
from apache_beam.ml.gcp import naturallanguageml as nlp
features = nlp.types.AnnotateTextRequest.Features(
extract_entities=True,
extract_document_sentiment=True,
extract_syntax=False
)
options = beam.options.pipeline_options.PipelineOptions()
google_cloud_options = options.view_as(beam.options.pipeline_options.GoogleCloudOptions)
google_cloud_options.project = PROJECT
google_cloud_options.region = REGION
google_cloud_options.job_name = 'nlpapi-{}'.format(datetime.now().strftime("%Y%m%d-%H%M%S"))
google_cloud_options.staging_location = 'gs://{}/staging'.format(BUCKET)
google_cloud_options.temp_location = 'gs://{}/temp'.format(BUCKET)
options.view_as(beam.options.pipeline_options.StandardOptions).runner = 'DataflowRunner' # 'DirectRunner'
p = beam.Pipeline(options=options)
(p
| 'bigquery' >> beam.io.Read(beam.io.BigQuerySource(
query="SELECT text FROM `bigquery-public-data.hacker_news.comments` WHERE author = 'AF' AND LENGTH(text) > 10",
use_standard_sql=True))
| 'txt' >> beam.Map(lambda x : x['text'])
| 'doc' >> beam.Map(lambda x : nlp.Document(x, type='PLAIN_TEXT'))
# | 'todict' >> beam.Map(lambda x : nlp.Document.to_dict(x))
| 'nlp' >> nlp.AnnotateText(features, timeout=10)
| 'parse' >> beam.Map(parse_nlp_result)
| 'gcs' >> beam.io.WriteToText('gs://{}/output.txt'.format(BUCKET), num_shards=1)
)
result = p.run()
result.wait_until_finish()
if __name__ == '__main__':
run()
!python3 nlp_pipeline.py
!gsutil cat gs://$BUCKET/output.txt*
Explanation: Changing input to BigQuery and running on Cloud
Use DataflowRunner
End of explanation |
7,085 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SuchLinkedTrees
I didn't want to write this either.
Working with linked trees
If you are interested in studying how two groups of organisms interact (or,
rather, have interacted over evolutionary time), you will find yourself with
two trees of distinct groups of taxa that are linked by a matrix of
interaction observations. This is sometimes called a 'dueling trees' problem.
If the trees happen to have the same number of taxa, and the interaction
matrix happens to be a unit matrix, then you can compute the distance matrix
for each of your trees and use the
Mantel test to compare them.
However, this is a pretty special case. Hommola et al. describe a method that
extends the Mantel test in this paper here
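Very roughly, their idea is to correlate the patristic distances between the hosts of every pair of links with the distances between the corresponding guests, and get a p-value by permuting which guest goes with which host. The sketch below is plain numpy/scipy just to show the shape of the test; it is not the paper's exact procedure and not this package's implementation:
import numpy as np
from scipy.stats import pearsonr

def cospeciation_sketch(host_d, guest_d, links, n_perm=999):
    # links: (host_index, guest_index) pairs; host_d / guest_d: distance matrices
    hi = np.array([h for h, g in links])
    gi = np.array([g for h, g in links])
    a, b = np.triu_indices(len(links), k=1)           # every pair of links
    r_obs = pearsonr(host_d[hi[a], hi[b]], guest_d[gi[a], gi[b]])[0]
    null = np.empty(n_perm)
    for k in range(n_perm):
        perm = np.random.permutation(gi)              # break the host/guest pairing
        null[k] = pearsonr(host_d[hi[a], hi[b]], guest_d[perm[a], perm[b]])[0]
    p = (np.sum(null >= r_obs) + 1.0) / (n_perm + 1.0)
    return r_obs, p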
Step1: Let your trees be linked!
Now, we create a SuchLinkedTrees object, which connects the host and
guest trees to the link matrix (True or False values derived from the
count matrix). This pre-indexes the table for fast access later, so creating
the object takes a little while.
Step2: What goodies are inside?
Let's look at how SuchLinkedTrees slices the link data by clade.
We'll pick a random clade...
Step3: This one looks good.
Now, we tell our SuchLinkedTrees object to subset itself using that clade.
The default subset is the root node of the guest tree.
Step4: print SLT.col_ids
print SLT.subset_columns
print SLT.subset_leafs
Paired distances
We can test whether or not the guest organisms are likely to have co-diversified
with the host organisms by computing the distances between each possible pair of
links in the link matrix through each of the two trees, and then testing how well
those two sets of distances correlate with one another.
Step5: Yeesh. Lousy correlation, isn't it?
Anyway, thanks to SuchTree, it only takes a tiny sliver of
a second to compute the 29,890 distances through each of the two
trees.
Unfortunately, for $n$ links, the number of link pairs is
$$\frac{n (n-1)}{2}$$
This is $\mathcal{O}(n^2)$ scaling, which is $bad$. The biggest clade is
$$ \frac{ 54327 \times 54326 }{2}= 1,475,684,301 $$
link pairs. And that's just one particularly big clade!
Solution
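The way out is to estimate the correlation from a random sample of link pairs instead of enumerating all of them. Roughly, and only as a sketch of the idea (plain numpy, not the SuchLinkedTrees internals):
import numpy as np

def sample_link_pairs(n_links, n_pairs=100000):
    # draw random (i, j) link pairs with i != j, rather than all n*(n-1)/2 of them
    i = np.random.randint(0, n_links, n_pairs)
    j = np.random.randint(0, n_links, n_pairs)
    keep = i != j
    return i[keep], j[keep]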
Step6: Clade 7027 is not particularly big, so we've actually over-sampled it
by a little more than $10\times$.
It shouldn't be much different, though.
Step7: But, of course, sampled distributions don't look exactly like the distributions
from which they were taken, so let's have a look at 'em.
Step8: Down to business
Now that you've seen the major features of SuchLinkedTrees, let's use it to do something
useful.
How about my main dissertation question?
Are there any co-diversifying bacteria in my fish?
Step9: If we exhaustively measure correlations for clades with fewer than 4000 links,
and use sampled distances for clades with more than 4000 links, it takes
about six hours on one core.
It can be threaded, but I haven't gotten around to trying that yet.
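For what it's worth, the parallel version might look roughly like this; clade_ids is an assumed list of internal node ids, the worker function is hypothetical, and in practice each worker would want its own SuchLinkedTrees object rather than sharing SLT:
from multiprocessing import Pool
from scipy.stats import pearsonr

def correlation_for_clade(clade_id):
    # hypothetical worker: subset to the clade, gather paired distances, return a summary row
    SLT.subset(clade_id)
    d = SLT.linked_distances()          # or sampled distances for the big clades
    r, p = pearsonr(d['TreeA'], d['TreeB'])
    return clade_id, SLT.subset_n_links, r, p

# results = Pool(4).map(correlation_for_clade, clade_ids)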
Step10: Most of this is garbage. Let's focus on the good bits.
Step11: Correlation smorrilation
Well, the only reason I'm using Pearson's $r$ is because it happens to be an
$\mathcal{O}(n)$ algorithm. It also assumes that the distributions of the
things you are testing are normal.
We already saw that they are not normal.
We really should use a rank-order correlation test like Kendall's $\tau$
because they don't make any assumptions about the distributions, but
these are all $\mathcal{O}(n^2)$ algorithms.
Once again, the massive diversity of microbes forces us to really sweat the
details.
So, let's just use Pearson's $r$ to figure out what the correlations probably
are, and then find Kendall's $\tau$ for the ones that look interesting.
Step12: The Moment of Truth
Well, there's one clade up there that has a $\tau$ of 0.394 with a $p$ of 0.040588.
Could this be... it? | Python Code:
%load_ext Cython
%pylab inline
from SuchTree import SuchTree
import pandas as pd
import numpy as np
import seaborn
from SuchTree import SuchLinkedTrees, pearson
T1 = SuchTree( 'SuchTree/tests/test.tree' )
T2 = SuchTree( 'http://edhar.genomecenter.ucdavis.edu/~russell/fishpoo/fishpoo2_p200_c2_unique_2_clustalo_fasttree.tree' )
links = pd.read_csv( 'http://edhar.genomecenter.ucdavis.edu/~russell/fishpoo/fishpoo2_p200_c2_host_count_table.tsv',
sep='\t', index_col='Host')
links.index = map( lambda x : x.replace(' ','_'), links.index )
Explanation: SuchLinkedTrees
I didn't want to write this either.
Working with linked trees
If you are interested in studying how two groups of organisms interact (or,
rather, have interacted over evolutionary time), you will find yourself with
two trees of distinct groups of taxa that are linked by a matrix of
interaction observations. This is sometimes called a 'dueling trees' problem.
If the trees happen to have the same number of taxa, and the interaction
matrix happens to be a unit matrix, then you can compute the distance matrix
for each of your trees and use the
Mantel test to compare them.
However, this is a pretty special case. Hommola et al. describe a method that
extends the Mantel test in this paper here :
A Permutation Test of Host–Parasite Cospeciation. Molecular Biology and Evolution, Vol. 26, No. 7. (01 July 2009), pp. 1457-1468, by Kerstin Hommola, Judith E. Smith, Yang Qiu, Walter R. Gilks
End of explanation
%%time
SLT = SuchLinkedTrees( T1, T2, links )
Explanation: Let your trees be linked!
Now, we create a SuchLinkedTrees object, which connects the host and
guest trees to the link matrix (True or False values derived from the
count matrix). This pre-indexes the table for fast access later, so creating
the object takes a little while.
End of explanation
SLT.TreeB.get_leafs( 7027 )
Explanation: What goodies are inside?
Let's look at how SuchLinkedTrees slices the link data by clade.
We'll pick a random clade...
End of explanation
SLT.subset( 7027 )
print 'subset size :', SLT.subset_size
print 'subset links :', SLT.subset_n_links
print 'link pairs :', ( SLT.subset_n_links * ( SLT.subset_n_links -1 ) ) / 2
Explanation: This one looks good.
Now, we tell our SuchLinkedTrees object to subset itself using that clade.
The default subset is the root node of the guest tree.
End of explanation
result = SLT.linked_distances()
seaborn.jointplot( result['TreeA'], result['TreeB'] )
Explanation: print SLT.col_ids
print SLT.subset_columns
print SLT.subset_leafs
Paired distances
We can test whether or not the guest organisms are likely to have co-diversified
with the host organisms by computing the distances between each possible pair of
links in the link matrix through each of the two trees, and then testing how well
those two sets of distances correlate with one another.
End of explanation
result_sampled = SLT.sample_linked_distances(sigma=0.05, n=10000, buckets=10)
result_sampled
Explanation: Yeesh. Lousy correlation, isn't it?
Anyway, thanks to SuchTree, it only takes a tiny sliver of
a second to compute the 29,890 distances through each of the two
trees.
Unfortunately, for $n$ links, the number of link pairs is
$$\frac{n (n-1)}{2}$$
This is $\mathcal{O}(n^2)$ scaling, which is $bad$. The biggest clade is
$$ \frac{ 54327 \times 54326 }{2}= 1,475,684,301 $$
link pairs. And that's just one particularly big clade!
Solution : sampling
We can avoid this problem (with some care) using SuchLinkedTrees.sample_linked_distances().
End of explanation
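As a quick sanity check on the quadratic blow-up described above, here is a small illustrative sketch (the two smaller link counts are made-up examples; only 54327 comes from the text):
def n_link_pairs(n_links):
    # number of unordered pairs of links: n*(n-1)/2
    return n_links * (n_links - 1) // 2

for n in (200, 4000, 54327):
    print n, '->', n_link_pairs(n)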
seaborn.jointplot( result_sampled['TreeA'], result_sampled['TreeB'] )
Explanation: Clade 7027 is not particularly big, so we've actually over-sampled it
by a little more than $10\times$.
It shouldn't be much different, though.
End of explanation
seaborn.kdeplot(result['TreeB'])
seaborn.kdeplot(result_sampled['TreeB'][100000:400000])
Explanation: But, of course, sampled distributions don't look exactly like the distributions
from which they were taken, so let's have a look at 'em.
End of explanation
import pyprind
p = pyprind.ProgBar( len( list( SLT.TreeB.get_internal_nodes() ) ), monitor=True, title='sampling trees...' )
big_nodes = []
table = {}
for n,node in enumerate( SLT.TreeB.get_internal_nodes() ) :
p.update()
SLT.subset( node )
if SLT.subset_n_links > 4000 :
big_nodes.append( node )
result = SLT.sample_linked_distances( sigma=0.05, n=1000, buckets=100)
else :
result = SLT.linked_distances()
table[node] = { 'n_leafs' : SLT.subset_size,
'n_links' : SLT.subset_n_links,
'n_pairs' : result['n_pairs'],
'n_samples' : result['n_samples'],
'deviation_a': result['deviation_a'],
'deviation_b': result['deviation_b'],
'r' : pearson( result['TreeA'], result['TreeB'] ) }
Explanation: Down to business
Now that you've seen the major features of SuchLinkedTrees, let's use it to do something
useful.
How about my main dissertation question?
Are there any co-diversifying bacteria in my fish?
End of explanation
C = pd.DataFrame( table ).T
seaborn.jointplot( 'n_links', 'r', data=C )
Explanation: If we exhaustively measure correlations for clades with fewer than 4000 links,
and use sampled distances for clades with more than 4000 links, it takes
about six hours on one core.
It can be threaded, but I haven't gotten around to trying that yet.
End of explanation
seaborn.jointplot( 'n_links', 'r', data=C.query('n_leafs > 5 and r > 0.05') )
CC = C.query('n_leafs > 5 and r > 0.05').sort_values('r', ascending=False)
print CC.shape
CC.head()
Explanation: Most of this is garbage. Let's focus on the good bits.
End of explanation
from scipy.stats import kendalltau, pearsonr
pearson_p = {}
kendall_tau = {}
kendall_p = {}
for n,node in enumerate( CC.index ) :
SLT.subset(node)
result = SLT.linked_distances()
p_r,p_p = pearsonr( result['TreeA'], result['TreeB'] )
k_t,k_p = kendalltau( result['TreeA'], result['TreeB'] )
pearson_p[node] = p_p
kendall_tau[node] = k_t
kendall_p[node] = k_p
CC['pearson_p'] = pd.Series(pearson_p)
CC['kendall_tau'] = pd.Series(kendall_tau)
CC['kendall_p'] = pd.Series(kendall_p)
CC.head()
seaborn.jointplot( 'n_links', 'kendall_tau', data=CC )
Explanation: Correlation smorrilation
Well, the only reason I'm using Pearson's $r$ is because it happens to be an
$\mathcal{O}(n)$ algorithm. It also assumes that the distributions of the
things you are testing are normal.
We already saw that they are not normal.
We really should use a rank-order correlation test like Kendall's $\tau$
because they don't make any assumptions about the distributions, but
these are all $\mathcal{O}(n^2)$ algorithms.
Once again, the massive diversity of microbes forces us to really sweat the
details.
So, let's just use Pearson's $r$ to figure out what the correlations probably
are, and then find Kendall's $\tau$ for the ones that look interesting.
End of explanation
SLT.subset( 79047 )
result = SLT.linked_distances()
seaborn.jointplot( result['TreeA'], result['TreeB'] )
Explanation: The Moment of Truth
Well, there's one clade up there that has a $\tau$ of 0.394 with a $p$ of 0.040588.
Could this be... it?
End of explanation |
7,086 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Let's Test Facebook Prophet to predict Cryptocurrency Prices
Prophet is a procedure for forecasting time series data. It is based on an additive model where non-linear trends are fit with yearly and weekly seasonality, plus holidays. It works best with daily periodicity data with at least one year of historical data. Prophet is robust to missing data, shifts in the trend, and large outliers.
Disclaimer
Step1: Make prediction
Step2: Bitcoin
Step3: Ethereum
Step4: Bitcoin Cash
Step5: Ripple
Step6: LiteCoin
Step7: Cardano | Python Code:
import pandas as pd
import numpy as np
from fbprophet import Prophet
import time
import seaborn as sns
import matplotlib.pyplot as plt
import datetime
%matplotlib inline
import bs4
bs4.__version__
'4.4.1'
import html5lib
html5lib.__version__
'0.9999999'
# top 15 coins - at the time of writing this!
coins = ['bitcoin', 'ethereum', 'bitcoin-cash', 'ripple', 'litecoin', 'cardano', 'iota', 'dash', 'nem', 'monero', 'bitcoin-gold', 'stellar', 'neo', 'eos', 'ethereum-classic']
coin_market_info = {}
for coin in coins:
# getting data from 2017 until now!
coin_market_info[str(coin)] = pd.read_html("https://coinmarketcap.com/currencies/{0}/historical-data/?start=20170101&end={1}".format(str(coin),time.strftime("%Y%m%d")))[0]
    coin_market_info[str(coin)] = coin_market_info[str(coin)].assign(Date=pd.to_datetime(coin_market_info[str(coin)]['Date']))
prophets = {}
for coin in coins:
df = coin_market_info[str(coin)][['Date', 'Open']]
df = df.rename(index=str, columns={"Date": "ds", "Open": "y"})
# log-transform the prices
# df['y'] = np.log(df['y'])
prophets[str(coin)] = Prophet()
prophets[str(coin)].fit(df);
Explanation: Let's Test Facebook Prophet to predict Cryptocurrency Prices
Prophet is a procedure for forecasting time series data. It is based on an additive model where non-linear trends are fit with yearly and weekly seasonality, plus holidays. It works best with daily periodicity data with at least one year of historical data. Prophet is robust to missing data, shifts in the trend, and large outliers.
Disclaimer: I Have no idea what I'm doing
Inspired by awesome David Sheehan blog post
TODO: I've read some blog posts regarding "Prediction of Cryptocurrency Prices," and all of them only used the history of one particular coin to predict its value in the future. I think there is a problem with this approach, because I've seen some strong correlation (with my eyes!) between changes in the price of Bitcoin and other AltCoins. More on this later!
End of explanation
%%time
prophecies = {}
for coin in coins:
    # predict prices for the next 120 days
    prophecies[str(coin)] = prophets[str(coin)].make_future_dataframe(periods=120)
    prophecies[str(coin)] = prophets[str(coin)].predict(prophecies[str(coin)])
Explanation: Make prediction
End of explanation
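If you just want the numbers rather than the plots below, the forecast dataframes returned by predict already contain the point forecast and its uncertainty interval; a minimal sketch (the 30-day window is an arbitrary choice):
# Point forecast (yhat) and uncertainty interval for the last 30 forecasted days
prophecies['bitcoin'][['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail(30)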
%%time
p = prophets['bitcoin'].plot(prophecies['bitcoin'])
pc = prophets['bitcoin'].plot_components(prophecies['bitcoin'])
Explanation: Bitcoin
End of explanation
%%time
coin = coins[1]
p = prophets[str(coin)].plot(prophecies[str(coin)])
pc = prophets[str(coin)].plot_components(prophecies[str(coin)])
Explanation: Ethereum
End of explanation
%%time
coin = coins[2]
p = prophets[str(coin)].plot(prophecies[str(coin)])
pc = prophets[str(coin)].plot_components(prophecies[str(coin)])
Explanation: Bitcoin Cash
End of explanation
%%time
coin = coins[3]
p = prophets[str(coin)].plot(prophecies[str(coin)])
pc = prophets[str(coin)].plot_components(prophecies[str(coin)])
Explanation: Ripple
End of explanation
%%time
coin = coins[4]
p = prophets[str(coin)].plot(prophecies[str(coin)])
pc = prophets[str(coin)].plot_components(prophecies[str(coin)])
Explanation: LiteCoin
End of explanation
%%time
coin = coins[5]
p = prophets[str(coin)].plot(prophecies[str(coin)])
pc = prophets[str(coin)].plot_components(prophecies[str(coin)])
Explanation: Cardano
End of explanation |
7,087 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Encoder-Decoder Analysis
Model Architecture
Step1: Perplexity on Each Dataset
Step2: Loss vs. Epoch
Step3: Perplexity vs. Epoch
Step4: Generations
Step5: BLEU Analysis
Step6: N-pairs BLEU Analysis
This analysis randomly samples 1000 pairs of generations/ground truths and treats them as translations, giving their BLEU score. We can expect very low scores in the ground truth and high scores can expose hyper-common generations
Step7: Alignment Analysis
This analysis computes the average Smith-Waterman alignment score for generations, with the same intuition as N-pairs BLEU, in that we expect low scores in the ground truth and hyper-common generations to raise the scores
report_file = '/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing15_200_512_04drb/encdec_noing15_200_512_04drb.json'
log_file = '/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing15_200_512_04drb/encdec_noing15_200_512_04drb_logs.json'
import json
import matplotlib.pyplot as plt
with open(report_file) as f:
report = json.loads(f.read())
with open(log_file) as f:
logs = json.loads(f.read())
print 'Encoder: \n\n', report['architecture']['encoder']
print 'Decoder: \n\n', report['architecture']['decoder']
Explanation: Encoder-Decoder Analysis
Model Architecture
End of explanation
print('Train Perplexity: ', report['train_perplexity'])
print('Valid Perplexity: ', report['valid_perplexity'])
print('Test Perplexity: ', report['test_perplexity'])
Explanation: Perplexity on Each Dataset
End of explanation
%matplotlib inline
for k in logs.keys():
plt.plot(logs[k][0], logs[k][1], label=str(k) + ' (train)')
plt.plot(logs[k][0], logs[k][2], label=str(k) + ' (valid)')
plt.title('Loss v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
Explanation: Loss vs. Epoch
End of explanation
%matplotlib inline
for k in logs.keys():
plt.plot(logs[k][0], logs[k][3], label=str(k) + ' (train)')
plt.plot(logs[k][0], logs[k][4], label=str(k) + ' (valid)')
plt.title('Perplexity v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Perplexity')
plt.legend()
plt.show()
Explanation: Perplexity vs. Epoch
End of explanation
def print_sample(sample, best_bleu=None):
enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])
gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])
print('Input: '+ enc_input + '\n')
print('Gend: ' + sample['generated'] + '\n')
print('True: ' + gold + '\n')
if best_bleu is not None:
cbm = ' '.join([w for w in best_bleu['best_match'].split(' ') if w != '<mask>'])
print('Closest BLEU Match: ' + cbm + '\n')
print('Closest BLEU Score: ' + str(best_bleu['best_score']) + '\n')
print('\n')
for i, sample in enumerate(report['train_samples']):
print_sample(sample, report['best_bleu_matches_train'][i] if 'best_bleu_matches_train' in report else None)
for i, sample in enumerate(report['valid_samples']):
print_sample(sample, report['best_bleu_matches_valid'][i] if 'best_bleu_matches_valid' in report else None)
for i, sample in enumerate(report['test_samples']):
print_sample(sample, report['best_bleu_matches_test'][i] if 'best_bleu_matches_test' in report else None)
Explanation: Generations
End of explanation
def print_bleu(blue_struct):
print 'Overall Score: ', blue_struct['score'], '\n'
print '1-gram Score: ', blue_struct['components']['1']
print '2-gram Score: ', blue_struct['components']['2']
print '3-gram Score: ', blue_struct['components']['3']
print '4-gram Score: ', blue_struct['components']['4']
# Training Set BLEU Scores
print_bleu(report['train_bleu'])
# Validation Set BLEU Scores
print_bleu(report['valid_bleu'])
# Test Set BLEU Scores
print_bleu(report['test_bleu'])
# All Data BLEU Scores
print_bleu(report['combined_bleu'])
Explanation: BLEU Analysis
End of explanation
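The report already carries aggregate BLEU, but for spot-checking a single generation it can help to score one pair directly; a minimal sketch using NLTK (assumes nltk is installed; whitespace tokenization is a simplification):
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
sample = report['test_samples'][0]
reference = [sample['gold'].split()]   # list of reference token lists
hypothesis = sample['generated'].split()
print(sentence_bleu(reference, hypothesis, smoothing_function=SmoothingFunction().method1))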
# Training Set BLEU n-pairs Scores
print_bleu(report['n_pairs_bleu_train'])
# Validation Set n-pairs BLEU Scores
print_bleu(report['n_pairs_bleu_valid'])
# Test Set n-pairs BLEU Scores
print_bleu(report['n_pairs_bleu_test'])
# Combined n-pairs BLEU Scores
print_bleu(report['n_pairs_bleu_all'])
# Ground Truth n-pairs BLEU Scores
print_bleu(report['n_pairs_bleu_gold'])
Explanation: N-pairs BLEU Analysis
This analysis randomly samples 1000 pairs of generations/ground truths and treats them as translations, giving their BLEU score. We can expect very low scores in the ground truth, while high scores can expose hyper-common generations
End of explanation
print 'Average (Train) Generated Score: ', report['average_alignment_train']
print 'Average (Valid) Generated Score: ', report['average_alignment_valid']
print 'Average (Test) Generated Score: ', report['average_alignment_test']
print 'Average (All) Generated Score: ', report['average_alignment_all']
print 'Average Gold Score: ', report['average_alignment_gold']
Explanation: Alignment Analysis
This analysis computes the average Smith-Waterman alignment score for generations, with the same intuition as N-pairs BLEU, in that we expect low scores in the ground truth and hyper-common generations to raise the scores
End of explanation |
7,088 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Playing CartPole with the Actor-Critic method
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step4: 모델
행위자와 비평가는 각각 동작 확률과 비평 값을 생성하는 하나의 신경망을 사용하여 모델링됩니다. 이 튜토리얼에서는 모델 하위 클래스화를 사용하여 모델을 정의합니다.
순방향 전달 중에 모델은 상태를 입력으로 받고 상태 종속 값 함수를 모델링하는 동작 확률과 비평 값 $V$를 모두 출력합니다. 목표는 예상 이익을 최대화하는 $\pi$ 정책을 기반으로 행동을 선택하는 모델을 훈련하는 것입니다.
Cartpole-v0의 경우, 상태를 나타내는 네 가지 값이 있는데, 각각 카트 위치, 카트 속도, 막대 각도 및 막대 속도입니다. 에이전트는 카트를 각각 왼쪽(0)과 오른쪽(1)으로 밀기 위해 두 가지 동작을 취할 수 있습니다.
자세한 내용은 OpenAI Gym의 CartPole-v0 위키 페이지를 참조하세요.
Step7: 훈련
에이전트를 훈련하기 위해 다음 단계를 따릅니다.
환경에서 에이전트를 실행하여 에피소드별로 훈련 데이터를 수집합니다.
각 시간 스텝에서 예상 이익을 계산합니다.
결합된 Actor-Critic 모델의 손실을 계산합니다.
그래디언트를 계산하고 네트워크 매개변수를 업데이트합니다.
성공 기준 또는 최대 에피소드에 도달할 때까지 1~4를 반복합니다.
1. 훈련 데이터 수집하기
지도 학습에서와 같이 Actor-Critic 모델을 훈련하려면 훈련 데이터가 필요합니다. 그러나, 이러한 데이터를 수집하려면 모델이 환경에서 "실행"되어야 합니다.
여기서는 각 에피소드에 대한 훈련 데이터를 수집합니다. 그런 다음, 모델의 가중치에 의해 매개변수화된 현재 정책을 기반으로 동작 확률과 비평 값을 생성하기 위해 각 타임스텝에서 모델의 순방향 전달을 환경 상태에서 실행합니다.
다음 동작은 모델에 의해 생성된 동작 확률로부터 샘플링되며, 그런 다음 환경에 적용되어 다음 상태와 보상을 생성합니다.
이 프로세스는 더 빠른 훈련을 위해 나중에 TensorFlow 그래프로 컴파일할 수 있도록 TensorFlow 연산을 사용하는 run_episode 함수에서 구현됩니다. tf.TensorArray는 가변 길이 배열에서 Tensor 반복을 지원하는 데 사용되었습니다.
Step9: 2. 예상 이익 계산하기
한 에피소드 동안 수집된 각 타임스텝 $t$, ${r_{t}}^{T}{t=1}$에서 보상의 시퀀스를 예상 이익 ${G{t}}^{T}_{t=1}$의 시퀀스로 변환합니다. 여기서 보상의 합계는 현재 타임스텝 $t$에서 $T$까지 계산되며, 각 보상에 기하급수적으로 감소하는 할인 계수 $\gamma$를 곱합니다.
$$G_{t} = \sum^{T}{t'=t} \gamma^{t'-t}r{t'}$$
$\gamma\in(0,1)$ 이후, 현재 타임스텝에서 더 멀리 떨어진 보상에는 더 적은 가중치가 부여됩니다.
직관적으로, 예상 이익은 단순히 지금 보상이 이후 보상보다 낫다는 것을 암시합니다. 이것은 수학적 의미에서 보상의 합이 수렴하도록 하려는 것입니다.
또한, 훈련을 안정화하기 위해 이익의 결과 시퀀스를 표준화합니다(즉, 평균이 0이고 단위 표준 편차를 갖도록 함).
Step11: 3. Actor-Critic 손실
여기서는 하이브리드 Actor-Critic 모델을 사용하고 있기 때문에 아래와 같이 훈련을 위해 행위자와 비평가 손실의 조합인 손실 함수를 사용합니다.
$$L = L_{actor} + L_{critic}$$
Actor 손실
비평가가 상태 종속 기준선인 정책 그래디언트를 기반으로 행위자 손실을 공식화하고 단일 샘플(에피소드별) 추정치를 계산합니다.
$$L_{actor} = -\sum^{T}{t=1} log\pi{\theta}(a_{t} | s_{t})[G(s_{t}, a_{t}) - V^{\pi}{\theta}(s{t})]$$
여기서
Step13: 4. 매개변수를 업데이트하기 위한 훈련 단계 정의하기
위의 모든 단계를 모든 에피소드에서 실행되는 훈련 단계로 결합합니다. 손실 함수로 이어지는 모든 단계는 tf.GradientTape 컨텍스트로 실행되어 자동 미분이 가능합니다.
이 튜토리얼에서는 Adam 옵티마이저를 사용하여 모델 매개변수에 그래디언트를 적용합니다.
할인되지 않은 보상의 합계인 episode_reward도 이 단계에서 계산됩니다. 이 값은 나중에 성공 기준이 충족되는지 평가하는 데 사용됩니다.
tf.function 컨텍스트를 train_step 함수에 적용하여 호출 가능한 TensorFlow 그래프로 컴파일할 수 있고, 그러면 훈련 속도가 10배 빨라질 수 있습니다.
Step14: 5. 훈련 루프 실행하기
성공 기준 또는 최대 에피소드 수에 도달할 때까지 훈련 단계를 실행하는 방식으로 훈련을 실행합니다.
대기열을 사용하여 에피소드 보상의 실행 레코드를 유지합니다. 100회 시도에 도달하면 가장 오래된 보상이 대기열의 왼쪽(꼬리쪽) 끝에서 제거되고 최근 보상이 머리쪽(오른쪽)에 추가됩니다. 계산 효율을 높이기 위해 보상의 누적 합계도 유지됩니다.
런타임에 따라 훈련은 1분 이내에 완료될 수 있습니다.
Step15: 시각화
훈련 후에는 모델이 환경에서 어떻게 동작하는지 시각화하는 것이 좋습니다. 아래 셀을 실행하여 모델의 한 에피소드 실행에 대한 GIF 애니메이션을 생성할 수 있습니다. Colab에서 환경의 이미지를 올바르게 렌더링하려면 OpenAI Gym에 대한 추가 패키지를 설치해야 합니다. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
!pip install gym
%%bash
# Install additional packages for visualization
sudo apt-get install -y xvfb python-opengl > /dev/null 2>&1
pip install pyvirtualdisplay > /dev/null 2>&1
pip install git+https://github.com/tensorflow/docs > /dev/null 2>&1
import collections
import statistics
import gym
import numpy as np
import tensorflow as tf
import tqdm
from matplotlib import pyplot as plt
from tensorflow.keras import layers
from typing import Any, List, Sequence, Tuple
# Create the environment
env = gym.make("CartPole-v0")
# Set seed for experiment reproducibility
seed = 42
env.seed(seed)
tf.random.set_seed(seed)
np.random.seed(seed)
# Small epsilon value for stabilizing division operations
eps = np.finfo(np.float32).eps.item()
Explanation: Playing CartPole with the Actor-Critic method
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tutorials/reinforcement_learning/actor_critic"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org에서 보기</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/tutorials/reinforcement_learning/actor_critic.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab에서 실행</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/tutorials/reinforcement_learning/actor_critic.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub에서 소스 보기</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/tutorials/reinforcement_learning/actor_critic.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">노트북 다운로드</a></td>
</table>
This tutorial demonstrates how to implement the Actor-Critic method using TensorFlow to train an agent on the Open AI Gym CartPole-V0 environment. The reader is assumed to have some familiarity with policy gradient methods of reinforcement learning.
Actor-Critic methods
Actor-Critic methods are temporal difference (TD) learning methods that represent the policy function independently of the value function.
A policy function (or policy) returns a probability distribution over the actions the agent can take based on the given state. A value function determines the expected return for an agent starting at a given state and acting according to a particular policy forever after.
In the Actor-Critic method, the policy is referred to as the actor, which proposes a set of possible actions given a state, and the estimated value function is referred to as the critic, which evaluates the actions taken by the actor based on the given policy.
In this tutorial, both the Actor and Critic will be represented using one neural network with two outputs.
CartPole-v0
In the CartPole-v0 environment, a pole is attached to a cart moving along a frictionless track. The pole starts upright and the goal of the agent is to prevent it from falling over by applying a force of -1 or +1 to the cart. A reward of +1 is given for every time step the pole remains upright. An episode ends when (1) the pole is more than 15 degrees from vertical or (2) the cart moves more than 2.4 units from the center.
<center>
<pre data-md-type="custom_pre"><figure>
<image src="images/cartpole-v0.gif">
<figcaption>Trained Actor-Critic model in the Cartpole-v0 environment</figcaption>
</image></figure></pre>
</center>
The problem is considered "solved" when the average total reward for an episode reaches 195 over 100 consecutive trials.
Setup
Import the necessary packages and configure global settings.
End of explanation
class ActorCritic(tf.keras.Model):
Combined actor-critic network.
def __init__(
self,
num_actions: int,
num_hidden_units: int):
Initialize.
super().__init__()
self.common = layers.Dense(num_hidden_units, activation="relu")
self.actor = layers.Dense(num_actions)
self.critic = layers.Dense(1)
def call(self, inputs: tf.Tensor) -> Tuple[tf.Tensor, tf.Tensor]:
x = self.common(inputs)
return self.actor(x), self.critic(x)
num_actions = env.action_space.n # 2
num_hidden_units = 128
model = ActorCritic(num_actions, num_hidden_units)
Explanation: Model
The Actor and Critic will be modeled using one neural network that generates the action probabilities and the critic value, respectively. This tutorial uses model subclassing to define the model.
During the forward pass, the model takes the state as input and outputs both the action probabilities and the critic value $V$, which models the state-dependent value function. The goal is to train a model that chooses actions based on a policy $\pi$ that maximizes the expected return.
For Cartpole-v0, there are four values representing the state: cart position, cart velocity, pole angle, and pole velocity, respectively. The agent can take two actions, pushing the cart left (0) or right (1).
Refer to OpenAI Gym's CartPole-v0 wiki page for more information.
End of explanation
# Wrap OpenAI Gym's `env.step` call as an operation in a TensorFlow function.
# This would allow it to be included in a callable TensorFlow graph.
def env_step(action: np.ndarray) -> Tuple[np.ndarray, np.ndarray, np.ndarray]:
Returns state, reward and done flag given an action.
state, reward, done, _ = env.step(action)
return (state.astype(np.float32),
np.array(reward, np.int32),
np.array(done, np.int32))
def tf_env_step(action: tf.Tensor) -> List[tf.Tensor]:
return tf.numpy_function(env_step, [action],
[tf.float32, tf.int32, tf.int32])
def run_episode(
initial_state: tf.Tensor,
model: tf.keras.Model,
max_steps: int) -> List[tf.Tensor]:
Runs a single episode to collect training data.
action_probs = tf.TensorArray(dtype=tf.float32, size=0, dynamic_size=True)
values = tf.TensorArray(dtype=tf.float32, size=0, dynamic_size=True)
rewards = tf.TensorArray(dtype=tf.int32, size=0, dynamic_size=True)
initial_state_shape = initial_state.shape
state = initial_state
for t in tf.range(max_steps):
# Convert state into a batched tensor (batch size = 1)
state = tf.expand_dims(state, 0)
# Run the model and to get action probabilities and critic value
action_logits_t, value = model(state)
# Sample next action from the action probability distribution
action = tf.random.categorical(action_logits_t, 1)[0, 0]
action_probs_t = tf.nn.softmax(action_logits_t)
# Store critic values
values = values.write(t, tf.squeeze(value))
# Store log probability of the action chosen
action_probs = action_probs.write(t, action_probs_t[0, action])
# Apply action to the environment to get next state and reward
state, reward, done = tf_env_step(action)
state.set_shape(initial_state_shape)
# Store reward
rewards = rewards.write(t, reward)
if tf.cast(done, tf.bool):
break
action_probs = action_probs.stack()
values = values.stack()
rewards = rewards.stack()
return action_probs, values, rewards
Explanation: Training
To train the agent, follow these steps:
Run the agent on the environment to collect training data per episode.
Compute the expected return at each time step.
Compute the loss for the combined Actor-Critic model.
Compute gradients and update the network parameters.
Repeat 1-4 until either the success criterion or the maximum number of episodes is reached.
1. Collecting training data
As in supervised learning, training the Actor-Critic model requires training data. However, in order to collect such data, the model needs to be "run" in the environment.
Here, training data is collected for each episode. Then, at each timestep, the model's forward pass is run on the environment's state in order to generate action probabilities and the critic value based on the current policy parameterized by the model's weights.
The next action is sampled from the action probabilities generated by the model, and is then applied to the environment, producing the next state and reward.
This process is implemented in the run_episode function, which uses TensorFlow operations so that it can later be compiled into a TensorFlow graph for faster training. tf.TensorArray is used to support Tensor iteration over variable-length arrays.
End of explanation
def get_expected_return(
rewards: tf.Tensor,
gamma: float,
standardize: bool = True) -> tf.Tensor:
Compute expected returns per timestep.
n = tf.shape(rewards)[0]
returns = tf.TensorArray(dtype=tf.float32, size=n)
# Start from the end of `rewards` and accumulate reward sums
# into the `returns` array
rewards = tf.cast(rewards[::-1], dtype=tf.float32)
discounted_sum = tf.constant(0.0)
discounted_sum_shape = discounted_sum.shape
for i in tf.range(n):
reward = rewards[i]
discounted_sum = reward + gamma * discounted_sum
discounted_sum.set_shape(discounted_sum_shape)
returns = returns.write(i, discounted_sum)
returns = returns.stack()[::-1]
if standardize:
returns = ((returns - tf.math.reduce_mean(returns)) /
(tf.math.reduce_std(returns) + eps))
return returns
Explanation: 2. Computing expected returns
The sequence of rewards $\{r_{t}\}^{T}_{t=1}$ collected at each timestep $t$ during one episode is converted into a sequence of expected returns $\{G_{t}\}^{T}_{t=1}$, in which the sum of rewards is taken from the current timestep $t$ to $T$ and each reward is multiplied by an exponentially decaying discount factor $\gamma$:
$$G_{t} = \sum^{T}_{t'=t} \gamma^{t'-t} r_{t'}$$
Since $\gamma\in(0,1)$, rewards further out from the current timestep are given less weight.
Intuitively, expected return simply implies that rewards now are better than rewards later. In a mathematical sense, it ensures that the sum of the rewards converges.
To stabilize training, the resulting sequence of returns is also standardized (i.e. to have zero mean and unit standard deviation).
End of explanation
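As a quick cross-check of the discounted-return recursion implemented above, here is a small NumPy-only sketch on toy rewards (the reward values are made up; gamma = 0.99 mirrors the value used later in this tutorial):
def discounted_returns_np(rewards, gamma=0.99):
    # Walk backwards, accumulating reward + gamma * running sum, just like the
    # TensorFlow loop in get_expected_return (without the standardization step).
    returns = np.zeros(len(rewards), dtype=np.float32)
    running = 0.0
    for i in reversed(range(len(rewards))):
        running = rewards[i] + gamma * running
        returns[i] = running
    return returns

print(discounted_returns_np([1.0, 1.0, 1.0]))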
huber_loss = tf.keras.losses.Huber(reduction=tf.keras.losses.Reduction.SUM)
def compute_loss(
action_probs: tf.Tensor,
values: tf.Tensor,
returns: tf.Tensor) -> tf.Tensor:
Computes the combined actor-critic loss.
advantage = returns - values
action_log_probs = tf.math.log(action_probs)
actor_loss = -tf.math.reduce_sum(action_log_probs * advantage)
critic_loss = huber_loss(values, returns)
return actor_loss + critic_loss
Explanation: 3. The Actor-Critic loss
Since a hybrid Actor-Critic model is used, the chosen loss function is a combination of actor and critic losses for training, as shown below:
$$L = L_{actor} + L_{critic}$$
Actor loss
The actor loss is based on policy gradients with the critic as a state-dependent baseline, computed with single-sample (per-episode) estimates:
$$L_{actor} = -\sum^{T}_{t=1} \log\pi_{\theta}(a_{t} | s_{t})\,[G(s_{t}, a_{t}) - V^{\pi}_{\theta}(s_{t})]$$
where:
$T$: the number of timesteps per episode, which can vary per episode
$s_{t}$: the state at timestep $t$
$a_{t}$: the chosen action at timestep $t$ given state $s$
$\pi_{\theta}$: the policy (actor) parameterized by $\theta$
$V^{\pi}_{\theta}$: the value function (critic), also parameterized by $\theta$
$G = G_{t}$: the expected return for a given state, action pair at timestep $t$
A negative term is added to the sum, since the idea is to maximize the probabilities of actions yielding higher rewards by minimizing the combined loss.
<br>
Advantage
The $G - V$ term in the $L_{actor}$ formulation is called the advantage, which indicates how much better an action is for a particular state than a random action selected according to the policy $\pi$ for that state.
While it is possible to exclude the baseline, this may result in high variance during training. The nice thing about choosing the critic $V$ as a baseline is that it is trained to be as close as possible to $G$, leading to lower variance.
In addition, without the critic, the algorithm would try to increase the probabilities of actions taken in a particular state based on the expected return, which may not make much of a difference if the relative probabilities between actions remain the same.
For instance, suppose that two actions for a given state would yield the same expected return. Without the critic, the algorithm would try to raise the probability of these actions based on the objective $J$. With the critic, there is no advantage ($G - V = 0$), so no benefit is gained by increasing the actions' probabilities, and the algorithm sets the gradients to zero.
<br>
The critic loss
Training $V$ to be as close as possible to $G$ can be set up as a regression problem with the following loss function:
$$L_{critic} = L_{\delta}(G, V^{\pi}_{\theta})$$
where $L_{\delta}$ is the Huber loss, which is less sensitive to outliers in data than squared-error loss.
End of explanation
optimizer = tf.keras.optimizers.Adam(learning_rate=0.01)
@tf.function
def train_step(
initial_state: tf.Tensor,
model: tf.keras.Model,
optimizer: tf.keras.optimizers.Optimizer,
gamma: float,
max_steps_per_episode: int) -> tf.Tensor:
Runs a model training step.
with tf.GradientTape() as tape:
# Run the model for one episode to collect training data
action_probs, values, rewards = run_episode(
initial_state, model, max_steps_per_episode)
# Calculate expected returns
returns = get_expected_return(rewards, gamma)
# Convert training data to appropriate TF tensor shapes
action_probs, values, returns = [
tf.expand_dims(x, 1) for x in [action_probs, values, returns]]
# Calculating loss values to update our network
loss = compute_loss(action_probs, values, returns)
# Compute the gradients from the loss
grads = tape.gradient(loss, model.trainable_variables)
# Apply the gradients to the model's parameters
optimizer.apply_gradients(zip(grads, model.trainable_variables))
episode_reward = tf.math.reduce_sum(rewards)
return episode_reward
Explanation: 4. Defining the training step to update parameters
All of the steps above are combined into a training step that is run every episode. All steps leading up to the loss function are executed within the tf.GradientTape context to enable automatic differentiation.
This tutorial uses the Adam optimizer to apply the gradients to the model parameters.
The sum of the undiscounted rewards, episode_reward, is also computed in this step. This value will be used later on to evaluate whether the success criterion is met.
The tf.function context is applied to the train_step function so that it can be compiled into a callable TensorFlow graph, which can lead to a 10x speedup in training.
End of explanation
%%time
min_episodes_criterion = 100
max_episodes = 10000
max_steps_per_episode = 1000
# Cartpole-v0 is considered solved if average reward is >= 195 over 100
# consecutive trials
reward_threshold = 195
running_reward = 0
# Discount factor for future rewards
gamma = 0.99
# Keep last episodes reward
episodes_reward: collections.deque = collections.deque(maxlen=min_episodes_criterion)
with tqdm.trange(max_episodes) as t:
for i in t:
initial_state = tf.constant(env.reset(), dtype=tf.float32)
episode_reward = int(train_step(
initial_state, model, optimizer, gamma, max_steps_per_episode))
episodes_reward.append(episode_reward)
running_reward = statistics.mean(episodes_reward)
t.set_description(f'Episode {i}')
t.set_postfix(
episode_reward=episode_reward, running_reward=running_reward)
# Show average episode reward every 10 episodes
if i % 10 == 0:
pass # print(f'Episode {i}: average reward: {avg_reward}')
if running_reward > reward_threshold and i >= min_episodes_criterion:
break
print(f'\nSolved at episode {i}: average reward: {running_reward:.2f}!')
Explanation: 5. Running the training loop
Training is executed by running the training step until either the success criterion or the maximum number of episodes is reached.
A running record of episode rewards is kept in a queue. Once 100 trials have been reached, the oldest reward is removed at the left (tail) end of the queue and the newest one is added at the head (right). A running sum of the rewards is also maintained for computational efficiency.
Depending on your runtime, training can finish in less than a minute.
End of explanation
# Render an episode and save as a GIF file
from IPython import display as ipythondisplay
from PIL import Image
from pyvirtualdisplay import Display
display = Display(visible=0, size=(400, 300))
display.start()
def render_episode(env: gym.Env, model: tf.keras.Model, max_steps: int):
screen = env.render(mode='rgb_array')
im = Image.fromarray(screen)
images = [im]
state = tf.constant(env.reset(), dtype=tf.float32)
for i in range(1, max_steps + 1):
state = tf.expand_dims(state, 0)
action_probs, _ = model(state)
action = np.argmax(np.squeeze(action_probs))
state, _, done, _ = env.step(action)
state = tf.constant(state, dtype=tf.float32)
# Render screen every 10 steps
if i % 10 == 0:
screen = env.render(mode='rgb_array')
images.append(Image.fromarray(screen))
if done:
break
return images
# Save GIF image
images = render_episode(env, model, max_steps_per_episode)
image_file = 'cartpole-v0.gif'
# loop=0: loop forever, duration=1: play each frame for 1ms
images[0].save(
image_file, save_all=True, append_images=images[1:], loop=0, duration=1)
import tensorflow_docs.vis.embed as embed
embed.embed_file(image_file)
Explanation: Visualization
After training, it is good to visualize how the model performs in the environment. You can run the cells below to generate a GIF animation of one episode run of the model. Note that additional packages need to be installed for OpenAI Gym to render the environment's images correctly in Colab.
End of explanation |
7,089 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matplotlib
Matplotlib é uma biblioteca para geração de gráficos 2D em python com uma ótima integração com o Jupyter e uma API simples e similar a do Matlab.
Step1: Para a geração de simples gráficos de linhas, basta utilizarmos o método plot
Step2: Para gerar gráficos de dispersão (scatter plot), podemos utilizar o método scatter
Step3: Analogamente, para gráficos de barras, usamos o bar
Step4: Podemos também personalziar nosso gráfico, definindo um título, rótulos para os eixos, dentre outras opções
Step5: Integração com o Pandas
O Pandas já possui um conjunto de operações que permitem a geração de gráficos a partir de Series e DataFrames!
Step6: Por padrão, se chamarmos o método plot de um DataFrame, ele vai escolher a melhor representação gráfica para suas colunas. No exemplo do Titanic, como muitos dados são quantitativos, ele gera um único gráfico de linhas
Step7: Podemos também realizar os plots sobre colunas específicas do DataFrame
Step8: Os resultados de operações mais complexas sobre um DataFrame também podem ser plotados em gráficos! | Python Code:
# permite que os gráficos sejam renderizados no notebook
%matplotlib inline
import matplotlib.pyplot as plt #API para geração de gráficos
Explanation: Matplotlib
Matplotlib is a library for generating 2D plots in Python, with great Jupyter integration and a simple API similar to Matlab's.
End of explanation
y = [1, 7, 3, 5, 12]
x = [1, 2, 3, 4, 5]
plt.plot(x, y, marker='o');
plt.plot(x, y, marker='x', c='r') # gráfico vermelho ('r') onde os pontos são 'x'
plt.grid() # adiciona as guias no gráfico
Explanation: To generate simple line plots, we just use the plot method
End of explanation
plt.scatter(x, y, marker='x');
Explanation: To generate scatter plots, we can use the scatter method
End of explanation
plt.bar(x, y);
Explanation: Analogously, for bar charts, we use bar
End of explanation
anos = [1950, 1960, 1970, 1980, 1990, 2000, 2010]
pib = [300.2, 543.3, 1075.9, 2862.5, 5979.6, 10289.7, 14958.3]
plt.plot(anos, pib, marker='o')
plt.title('PIB')
plt.xlabel('Ano')
plt.ylabel(u'Bilhões de R$')
plt.grid()
import numpy as np
r = np.random.rand(100)
_ = plt.hist(r, 30)
Explanation: We can also customize our plot, setting a title, axis labels, and several other options
End of explanation
import pandas as pd
df = pd.read_csv('titanic.csv')
df
Explanation: Integration with Pandas
Pandas already ships with a set of operations that let you generate plots directly from Series and DataFrames!
End of explanation
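As one more example of the Pandas plotting shortcuts, a single column can be plotted directly; a small sketch (the bin count is arbitrary, and dropna() avoids the missing ages in the Titanic data):
df['Age'].dropna().plot(kind='hist', bins=20, title='Age distribution')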
df.head(10).plot()
Explanation: By default, if we call a DataFrame's plot method, it will choose the best graphical representation for its columns. In the Titanic example, since much of the data is quantitative, it generates a single line plot
End of explanation
df[["Age", "Fare"]].head(10).plot(kind='bar')
Explanation: We can also plot specific columns of the DataFrame
End of explanation
df['Survived'].apply(lambda s: "Yes" if s == 1 else "No").value_counts().plot(kind='bar')
Explanation: The results of more complex operations on a DataFrame can also be plotted!
End of explanation |
7,090 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Scott Cole
6 May 2017
This notebook is to formalize the hypothesis that the neural response to a very fast movement in the preferred direction can be more similar to that of a movement in the opposite direction (perceptually
Step1: 1. Define kernels of neuronal response to static gratings
The kernels plotted here represent the transient and sustained neural responses. (I believe these are firing rates in thalamic projection neurons in response to static gratings?)
Step2: 2. Estimate neural response to preferred and opposite directions
Preferred direction | Python Code:
# Import libraries
import numpy as np
%config InlineBackend.figure_format = 'retina'
%matplotlib inline
import matplotlib.pyplot as plt
Explanation: Scott Cole
6 May 2017
This notebook is to formalize the hypothesis that the neural response to a very fast movement in the preferred direction can be more similar to that of a movement in the opposite direction (perceptually: the wagon wheel effect).
The predicted neural response to a motion is computed by convolving the neural responses to static gratings
End of explanation
kernel_fast = np.array([0, .5, 1, .8, .4, .2, .1, 0])
kernel_slow = np.hstack([np.arange(0,1,.2),np.arange(1,0,-.04)])
plt.figure(figsize=(5,6))
plt.subplot(2,1,1)
plt.plot(kernel_fast,'k')
plt.xlim((0,30))
plt.ylabel('Neural response\n(fast)',size=15)
plt.subplot(2,1,2)
plt.plot(kernel_slow,'k')
plt.xlim((0,30))
plt.xlabel('Time (a.u.)',size=20)
plt.ylabel('Neural response\n(slow)',size=15)
Explanation: 1. Define kernels of neuronal response to static gratings
The kernels plotted here represent the transient and sustained neural responses. (I believe these are firing rates in thalamic projection neurons in response to static gratings?)
End of explanation
# Define times of sustained-response-inducing (slow)
# and transient-response-inducing (fast) stimuli
slow_event_times = np.arange(0,100,20)
fast_event_times = np.arange(10,110,20)
# Compute rasters of events
N = 200
slow_event_raster = np.zeros(N)
slow_event_raster[slow_event_times] = 1
fast_event_raster = np.zeros(N)
fast_event_raster[fast_event_times] = 1
# Compute trace of neural activity
# Convolve the event rasters (not the event times) with the response kernels
slow_neural = np.convolve(slow_event_raster, kernel_slow, mode='same')
fast_neural = np.convolve(fast_event_raster, kernel_fast, mode='same')
neural = slow_neural + fast_neural
Explanation: 2. Estimate neural response to preferred and opposite directions
Preferred direction
End of explanation |
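The notebook stops after building the combined trace; a minimal plotting sketch of the result (purely illustrative):
plt.figure(figsize=(8, 3))
plt.plot(neural, 'k')
plt.xlabel('Time (a.u.)')
plt.ylabel('Predicted neural response')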
7,091 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Load Data
Step2: Compare Chi-Squared Statistics
Step3: View Results | Python Code:
# Load libraries
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import chi2
Explanation: Title: Chi-Squared For Feature Selection
Slug: chi-squared_for_feature_selection
Summary: How to remove irrelevant features using chi-squared for machine learning in Python.
Date: 2017-09-14 12:00
Category: Machine Learning
Tags: Feature Selection
Authors: Chris Albon
<a alt="chi-squared_for_feature_selection" href="https://machinelearningflashcards.com">
<img src="chi-squared_for_feature_selection/Chi-Squared_For_Feature_Selection_print.png" class="flashcard center-block">
</a>
Preliminaries
End of explanation
# Load iris data
iris = load_iris()
# Create features and target
X = iris.data
y = iris.target
# Convert to categorical data by converting data to integers
X = X.astype(int)
Explanation: Load Data
End of explanation
# Select two features with highest chi-squared statistics
chi2_selector = SelectKBest(chi2, k=2)
X_kbest = chi2_selector.fit_transform(X, y)
Explanation: Compare Chi-Squared Statistics
End of explanation
# Show results
print('Original number of features:', X.shape[1])
print('Reduced number of features:', X_kbest.shape[1])
Explanation: View Results
End of explanation |
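To see which features were actually kept (not just how many), the fitted selector exposes the selected indices and the per-feature chi-squared scores; a short sketch:
# Indices of the two retained features and the chi-squared score of each feature
print('Selected feature indices:', chi2_selector.get_support(indices=True))
print('Chi-squared scores:', chi2_selector.scores_)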
7,092 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Collect the data
Get the name of the 4 fields we have to select
Step1: Get the select field corresponding to the 4 names found before
Step2: Get the value corresponding to the "Informatique"
Step3: Get all the values of the academic period field
In the second select_Field, in the option tag, we take all value execept the one equal to null
We only keep the period that are bigger than 2007 (in case there were older periods)
Step4: Get all the values of the pedagogic period field correspoding to the bachelor semester
in the 3rd select_field, we take all value that contains 'Bachelor' in the label
Since we need to find the first and last record of a student, we only consider the 1st, 5th and 6th semester.
It is not possible to finish his bachelor during the 2, 3 or 4 semester but it is possible to finish during the 5th semester if we miss some credits during our last year and we only need one semester to finish
Step5: Collect the data
Create a function that will parse one request and return a dataFrame
Step6: We iterate over all the parameters. We decided to skip the 'Type de semestre' (HIVERETE) since it is a redundant information. An odd semester is always in Autumn and an even one is always in Spring
Step7: How many years it took each student to go from the first to the sixth semester
As said before, here we check student that are in semester 1 (beginning) and semester 6 or 5 (in case they did the bachelor in 3.5 or 4.5 year)
Step8: Show total number of student that made at least one semester
Step9: Eliminate student who don't finished their studies
We group by sciper number (which we now is unique for each student). It return a sciper with a dataframe containing all the entries for one student
We keep people that appear in semester 1, 5 and 6. => those are the people that graduated in informatique
We drop all other people because
Step10: Person that didn't complete the first year in compute Science, we don't consider them since we can't know when they begin their first year
Step11: Nomber of person that complete the bachelor in computer science
Step12: Number of person that tried at least the first years or last one
Step13: Person that tried the first year but never finished the bachelor
Step14: Compute the average time (in years) to complete the bachelor
we choose to ouptut the result in years since it is more significant for human than month. To have the number of months we just need to multiply by 12
In total
Step15: Female
Step16: Male
Step17: Test the results
Step18: We want to see if the difference of the average years for female and male are statistically significant with a threshold of 95%
We use a Welch's T-Test (which does not assume equal population variance) | Python Code:
select = soupe.find_all('select')
select_name = [s.attrs['name'] for s in select]
select_name
Explanation: Collect the data
Get the name of the 4 fields we have to select
End of explanation
select_field = [soupe.find('select',{'name': name}) for name in select_name]
Explanation: Get the select field corresponding to the 4 names found before
End of explanation
option_unite_acad = select_field[0].find_all('option')
#option_unite_acad[[opt.text == 'Informatique' for opt in option_unite_acad]]
option_unite_acad
unite_acad ={opt['value']: opt.text for opt in option_unite_acad if opt.text == 'Informatique'}
unite_acad
Explanation: Get the value corresponding to the "Informatique"
End of explanation
option = select_field[1].find_all('option')
period_acad = {opt['value']: opt.text for opt in option if opt['value'] != 'null' and int(opt.text.split('-')[0]) >= 2007}
period_acad
Explanation: Get all the values of the academic period field
In the second select_field, in the option tag, we take all values except the one equal to null
We only keep the periods that are later than 2007 (in case there were older periods)
End of explanation
option = select_field[2].find_all('option')
period_pedago = {opt['value']: opt.text for opt in option if 'Bachelor' in opt.text and ('1' in opt.text or '5' in opt.text or '6' in opt.text) }
period_pedago
option = select_field[3].find_all('option')
hiverEte = {opt['value']: opt.text for opt in option if opt['value'] != 'null'}
hiverEte
Explanation: Get all the values of the pedagogic period field corresponding to the bachelor semesters
In the 3rd select_field, we take all values whose label contains 'Bachelor'
Since we need to find the first and last record of a student, we only consider the 1st, 5th and 6th semesters.
It is not possible to finish the bachelor during the 2nd, 3rd or 4th semester, but it is possible to finish during the 5th semester if some credits were missed during the last year and only one more semester is needed to finish
End of explanation
def parseRequest(u_a, p_a, p_p, h_e):
#Send request
url = 'http://isa.epfl.ch/imoniteur_ISAP/!GEDPUBLICREPORTS.html?ww_x_GPS=-1&ww_i_reportModel=133685247&ww_i_reportModelXsl=133685270&ww_x_UNITE_ACAD='+u_a[0]+'&ww_x_PERIODE_ACAD='+p_a[0]+'&ww_x_PERIODE_PEDAGO='+p_p[0]+'&ww_x_HIVERETE='+ h_e
r = requests.get(url)
soupe = BeautifulSoup(r.text, 'html.parser')
#get the header , we also replace the space by '_' (easier to use the dataframe later)
th_tag = soupe.find_all('th')
th = [t.text.replace(' ', '_') for t in th_tag]
#remove the first th that correspond to the title of the table
th = th[1:]
#save the size of the header
header_size = len(th)
#add new information (new columns): year_start, year_stop, semester number
th = np.append(th, ['Year_start', 'Year_stop', 'Semester'])
#Find all the 'tr' tag
tr_tag = soupe.find_all('tr')
#drop the 2 first tag that correspond to the title and the headers of the table
tr_tag = tr_tag[2:]
#Temporary dictionary that will collect all the entry of the dataframe
data = []
#Read the request line by line and fill the dataframe
for tr in tr_tag:
#create the new entry
row = [r.text.replace('\xa0', ' ') for r in tr]
#one row contains 12 elements but the header has only 11-> drop the last one because it is always empty
row = row[:header_size]
##add the new information to the row
#split the academic period
year = p_a[1].split('-')
#find the semester
semester = p_p[1].split()[2]
newCol = [int(year[0]), int(year[1]), semester]
#concat the row with the new info
row += newCol
data.append(row)
df = pd.DataFrame(data, columns= th)
return df
Explanation: Collect the data
Create a function that will parse one request and return a dataFrame
End of explanation
list_df = []
for u_a in unite_acad.items():
for p_a in period_acad.items():
for p_p in period_pedago.items():
print('Request for: ',u_a[1], p_a[1], p_p[1])
list_df.append(parseRequest(u_a,p_a, p_p, 'null'))
Student = pd.concat(list_df, ignore_index=True)
Student
Explanation: We iterate over all the parameters. We decided to skip the 'Type de semestre' (HIVERETE) since it is a redundant information. An odd semester is always in Autumn and an even one is always in Spring
End of explanation
Student.index = Student.No_Sciper + Student.Semester.astype(str) + Student.Year_start.astype(str)
Student.index.is_unique
Explanation: How many years it took each student to go from the first to the sixth semester
As said before, here we check students that are in semester 1 (beginning) and semester 6 or 5 (in case they did the bachelor in 3.5 or 4.5 years)
End of explanation
len(Student.No_Sciper.unique())
Explanation: Show the total number of students that completed at least one semester
End of explanation
def computeTotalYears(df):
start = df.Year_start.min()
end = df.Year_stop.max()
end_semester = df[df.Year_stop == end].Semester
if(end_semester == '6').any():
return (int(end) - int(start))
else:
return (int(end) - int(start) -0.5)
Student_copy = Student.copy()
Student_copy.index = Student.index
#We init the dataframe
#store people that complete the 3 years in informatique
Bachelor = pd.DataFrame(columns = ['Sciper', 'Civilité', 'Years'])
#store people that complet only the 2 last years
Only_5_6 = pd.DataFrame(columns = ['Sciper', 'Civilité', 'Years'])
#Groupe by sciper
grouped = Student_copy.groupby(['No_Sciper'])
for scip, group in grouped:
if((group.Semester != '1').all() and (group.Semester == '5').any() and (group.Semester == '6').any()):
total = computeTotalYears(group)
Only_5_6.ix[scip] = [scip,group.Civilité.iloc[0] , total ]
elif((group.Semester == '1').any() and (group.Semester == '5').any() and (group.Semester == '6').any()):
total = computeTotalYears(group)
Bachelor.ix[scip] = [scip,group.Civilité.iloc[0] , total ]
Bachelor.Years.max()
Bachelor.Years.min()
Bachelor.head()
Explanation: Eliminate students who didn't finish their studies
We group by sciper number (which we know is unique for each student). This returns a sciper together with a dataframe containing all the entries for one student
We keep people that appear in semester 1, 5 and 6. => those are the people that graduated in informatique
We drop all other people because:
* if they don't appear in semester 6 it means they never finished the Bachelor
* if they appear only in semester 5 and 6 it means that they began in another section (usually in communication system), but we can't know when they began epfl without loading the data for all sections
But just to get an idea, we keep the people who only took part in semesters 5 and 6, just to see the proportion
End of explanation
Only_5_6.count()
Explanation: People who didn't complete the first year in Computer Science; we don't consider them since we can't know when they began their first year
End of explanation
Bachelor.count()
Explanation: Number of people who completed the bachelor in computer science
End of explanation
len(grouped)
Explanation: Number of people who attempted at least the first year or the last one
End of explanation
len(grouped) - len(Bachelor) - len(Only_5_6)
Explanation: People who attempted the first year but never finished the bachelor
End of explanation
len(Bachelor)
average = Bachelor.Years.sum()/len(Bachelor)
average
Bachelor.Years.max()
Bachelor.Years.min()
Bachelor.Years.hist(bins = 10, range=[3, 8])
Explanation: Compute the average time (in years) to complete the bachelor
we choose to output the result in years since it is more meaningful for humans than months. To get the number of months we just need to multiply by 12
In total
End of explanation
Female = Bachelor[Bachelor.Civilité == 'Madame']
len(Female)
averageFemale = Female.Years.sum()/len(Female)
averageFemale
Female.Years.hist(bins = 10, range=[3, 8])
Explanation: Female
End of explanation
Male = Bachelor[Bachelor.Civilité == 'Monsieur']
len(Male)
average = Male.Years.sum()/len(Male)
average
Male.Years.hist(bins = 10, range=[3, 8])
Explanation: Male
End of explanation
import scipy.stats as stats
Explanation: Test the results
End of explanation
stats.ttest_ind(a = Female.Years, b= Male.Years, equal_var=False)
Explanation: We want to see if the difference in average years between females and males is statistically significant at the 95% level
We use a Welch's T-Test (which does not assume equal population variance): it measures whether the average value differs significantly across samples.
End of explanation |
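The p-value above only tells us whether the difference is detectable; a complementary effect-size check is sketched below (Cohen's d with a pooled standard deviation; the astype(float) is a precaution in case the Years column was stored as objects):
import numpy as np
def cohens_d(a, b):
    # Difference of means scaled by the pooled standard deviation
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
    return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)

cohens_d(Female.Years.astype(float).values, Male.Years.astype(float).values)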
7,093 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Radiopadre Tutorial
O. Smirnov <[email protected]>, January 2018
Radiopadre is a framework, built on the Jupyter notebook, for browsing and visualizing data reduction products. It is particularly useful for visualizing data products on remote servers, where connection latencies and/or lack of software etc. limits the usual visualization options. It includes integration with the JS9 browser-based FITS viewer (with CARTA integration coming soon).
The general use case for Radiopadre is "here I am sitting with a slow ssh connection into a remote cluster node, my pipeline has produced 500 plots/logs/FITS images, how do I make sense of this mess?" More specifically, there are three (somewhat overlapping) scenarios that Radiopadre is designed for
Step1: Most objects knows how to show() themselves
So what can you see from the above? dd is a directory object than can render itself -- you get a directory listing. Clearly, Radiopadre can recognize certain types of files -- you can see an images/ subdirectory above, a measurement set, a couple of FITS files, some PNG images, etc. Clicking on a file will either download it or display it in a new tab (this works well for PNG or text files -- don't click on FITS files unless you mean to download a whole copy!) FITS files have a "JS9" button next to them that invokes the JS9 viewer either below the cell, or in a new browser tab. Try it!
Now let's get some objects from the directory listing and get them to render.
Step2: Most things are list-like
What you see above is that different object types know how to show themselves intelligently. You also see that a directory object acts like a Python list -- dd[n] gets the n-th object from the directory. What about a slice?
Step3: Since a directory is a list of files, it makes sence that the Python slice syntax [5
Step4: And a text file is really just a list of lines, so
Step5: NB
Step6: Other useful things to do with directories/lists of files
If you have a list of image or FITS files, you can ask for thumbnails by calling .thumbs().
Step7: And calling .images on a directory returns a list of images. For which we can, of course, render thumbnails
Step8: Other such "list of files by type" attributes are .fits, .tables, and .dirs
Step9: And the show_all() method will call show() on every file object in the list. This is useful if you want to render a bunch of objects with the same parameters
Step10: Accessing a single file by name
The (pattern) operation applied to a directory always returns a filelist (possibly an empty one), even if the pattern is not relly a pattern and selects only one file
Step11: If you want to get at one specific file, using dd(name_or_pattern)[0] becomes a hassle. Filelists therefore support a direct [name_or_pattern] operation which always returns a single file object. If name_or_pattern matches multiple files, only the first one is returned (but radiopadre will show you a transient warning message).
Step12: Working with text files
By default, radiopadre renders the beginning and end of a text file. But you can also explicitly render just the head, or just the tail, or the full file.
Step13: "Watching" text files
If you're still running a reduction and want to keep an eye on a log file that's being updated, use the .watch() method. This works exactly like .show() and takes the same arguments, but adds a "refresh" button at the top right corner of the cell, which re-executes the cell every time you click it.
Step14: Running shell commands
Use .sh("command") on a directory object to quickly run a shell command in that directory. The result is output as a list-of-lines, so all the usual display tricks work.
Step15: Working with FITS files
As you saw above, FITS files can be rendered with show(), or viewed via the JS9 buttons. There's also an explicit .js9() method which invokes the viewer in a cell
Step16: With multiple FITS files, it's possible to load all of them into JS9, and use the "<" and ">" keys to switch between images. Use the "JS9 all" button to do this
Step17: There's a shortcut for doing this directly -- just call .js9() on a list of FITS files (note that "collective" functions such as .thumbs() and .js9() will only work on homogeneous filelists, i.e. lists of FITS files. Don't try calling them on a list containing a mix of files -- it won't work!)
Step18: The .header attribute of a FITS file object returns the FITS header, in the same kind of object (list-of-lines) as a text file. So all the tricks we did on text files above still apply
Step19: If you want to read in data from the FITS file, the .fitsobj attribute returns a PrimaryHDU object, just like astropy.io.fits.open(filename) would
Step20: Working with CASA tables
As you saw above, a CASA table object knows how to render itself as a table. Default is to render rows 0 to 100. With array columns, the default display becomes a little unwieldy
Step21: With optional arguments to .show(), you can render just a subset of rows (given as start_row, nrows), and a subset of columns, taking a slice through an array column. The below tells radiopadre to render the first 10 rows, taking the column TIME in its entirety, and taking a [32:34,:] slice through the DATA column.
Step22: If you want to render all columns with a common slice, use the optional _ argument (we saw this above). The given slice will be applied to all columns as much as possible (or at least to those that match the shape)
Step23: The .table attribute returns a casacore table object with which you can do all the normal casacore table operations
Step24: But if you want to quickly read data from a table, radiopadre provides some fancier methods. For example, subtables of the table are available as a .SUBTABLE_NAME attribute. This gives another table object, with all the functions above available
Step25: Accessing table columns
Columns of the table can be read via a .COLUMN attribute. You can either use it a-la getcol()
Step26: ...or else apply a numpy-style array index with []
Step27: Another useful feature is creating a masked array from a combination of a column and FLAG/FLAG_ROW. Append _F to the column name to get a masked array
Step28: So combining the above, here's how to compute the UVW in wavelengths of all baselines to antenna 1, and make a uv-coverage plot of that subset of baselines
Step29: The ls() function
...is where it all begins. As you saw, ls() gives you the current directory. You can also use ls with filename patterns, and also specify a sort order
Step30: You can also use the "R" switch for a recursive directory listing
Step31: Or give a filename to get an object representing that one file
Step32: On the same principle, give a subdirectory name to get a directory object
Step33: One thing to note is that ls() (i.e. with no patterns) doesn't necessarily list all files. The files included by default are governed by radiopadre settings. Below we'll see how to change those.
Using and changing settings
The settings object we imported above can be used to set various defaults of Radiopadre. Like most other objects, it knows how to render itself
Step34: Using "with" to change settings temporarily
Python's with statement works with radiopadre settings to change settings temporarily. For example, the default FITS rendering settings look like this
Step35: Here's how we can render FITS images with different settings, without changing the global settings. Whatever we set in with only applies in the body of the with statement. In this case it is particularly useful, as it will also apply to the JS9 displays by default | Python Code:
from radiopadre import ls, settings
dd = ls() # calls radiopadre.ls() to get a directory listing, assigns this to dd
dd # standard notebook feature: the result of the last expression on the cell is rendered in HTML
dd.show()
print "Calling .show() on an object renders it in HTML anyway, same as if it was the last statement in the cell"
Explanation: Radiopadre Tutorial
O. Smirnov <[email protected]>, January 2018
Radiopadre is a framework, built on the Jupyter notebook, for browsing and visualizing data reduction products. It is particularly useful for visualizing data products on remote servers, where connection latencies and/or lack of software etc. limits the usual visualization options. It includes integration with the JS9 browser-based FITS viewer (with CARTA integration coming soon).
The general use case for Radiopadre is "here I am sitting with a slow ssh connection into a remote cluster node, my pipeline has produced 500 plots/logs/FITS images, how do I make sense of this mess?" More specifically, there are three (somewhat overlapping) scenarios that Radiopadre is designed for:
Just browsing: interactively exploring the aforementioned 500 files using a notebook.
Automated reporting: customized Radiopadre notebooks that automatically generate a report composed of a pipeline's outputs and intermediate products. Since your pipeline's output is (hopefully!) structured, i.e. in terms of filename conventions etc., you can write a notebook to exploit that structure and make a corresponding report automatically.
Sharing notebooks: fiddle with a notebook until everything is visualized just right, insert explanatory text in mardkdown cells in between, voila, you have an instant report you can share with colleagues.
Installing Radiopadre
Refer to README.md on the github repository: https://github.com/ratt-ru/radiopadre
Running this tutorial
Data files for this tutorial are available here: https://www.dropbox.com/sh/be4pc23rsavj67s/AAB2Ejv8cLsVT8wj60DiqS8Ya?dl=0
Download the tutorial and untar it somewhere. Then run Radiopadre (locally or remotely, if you unpacked the tutorial on a remote node) in the resulting directory. A Jupyter console will pop up in your browser. Click on radiopadre-tutorial.ipynb to open it in a separate window, then click the "Run all" button on the toolbar (or use "Cell|Run all" in the menu, which is the same thing.) Wait for the notebook to run through and render, then carry on reading.
Every Radiopadre notebook starts with this
End of explanation
images_subdir = dd[0]
demo_ms = dd[1]
fits_image = dd[2]
log_file = dd[-1] # last file in directory... consistent with Python list syntax
images_subdir.show()
demo_ms.show(_=(32,0)) # _ selects channels/correlations... more detail later
fits_image.show()
log_file.show()
# be prepared for a lot of output below... scroll through it
Explanation: Most objects know how to show() themselves
So what can you see from the above? dd is a directory object that can render itself -- you get a directory listing. Clearly, Radiopadre can recognize certain types of files -- you can see an images/ subdirectory above, a measurement set, a couple of FITS files, some PNG images, etc. Clicking on a file will either download it or display it in a new tab (this works well for PNG or text files -- don't click on FITS files unless you mean to download a whole copy!) FITS files have a "JS9" button next to them that invokes the JS9 viewer either below the cell, or in a new browser tab. Try it!
Now let's get some objects from the directory listing and get them to render.
End of explanation
images_subdir[5:10]
Explanation: Most things are list-like
What you see above is that different object types know how to show themselves intelligently. You also see that a directory object acts like a Python list -- dd[n] gets the n-th object from the directory. What about a slice?
End of explanation
sub_ms = demo_ms[5:10] # gives us a table containing rows 5 through 9 of the MS
sub_ms.show(_=(32,0)) # _ selects channels/correlations... more detail later
Explanation: Since a directory is a list of files, it makes sense that the Python slice syntax [5:10] returns an object that is also a list of files. There are other list-like objects in radiopadre. For example, an MS can be considered a list of rows. So...
End of explanation
log_file[-10:] # extract last ten lines and show them
Explanation: And a text file is really just a list of lines, so:
End of explanation
png_files = dd("*.png") # on directories, () works like a shell pattern
png_files
log_file("Gain plots") # on text files, () works like grep
demo_ms("ANTENNA1==1").show(_=(32,0)) # on tables, () does a TaQL query
Explanation: NB: FITS images and PNG images are not lists in any sense, so this syntax doesn't work on them. (In the future I'll consider supporting numpy-like slicing, e.g. [100:200,100:200], to transparently extract subsections of images, but for now this is not implemented.)
And list-like things can be searched with ()
Radiopadre's list-like objects (directories/file lists, text files, CASA tables) also support a "search" function, invoked by calling them like a function. This returns an object that is subset of the original object. Three examples:
End of explanation
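# A couple of combined examples (illustrative sketch): the objects returned by a
# () search are themselves list-like, so the slicing and show() calls shown above
# can be chained onto them as usual
dd("*.png")[:2].show()
log_file("Gain")[-5:].show()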
png_files.thumbs() # for PNG images, these are nice and clickable!
Explanation: Other useful things to do with directories/lists of files
If you have a list of image or FITS files, you can ask for thumbnails by calling .thumbs().
End of explanation
images_subdir.images.thumbs()
Explanation: And calling .images on a directory returns a list of images. For which we can, of course, render thumbnails:
End of explanation
dd.fits.show()
dd.tables.show()
dd.dirs.show()
dd.fits.thumbs(vmin=-1e-4, vmax=0.01) # and FITS files also know how to make themselves a thumbnail
# note that thumbs() takes optional arguments just like show()
Explanation: Other such "list of files by type" attributes are .fits, .tables, and .dirs:
End of explanation
# note the difference: dd.fits selects all files of type FITS, dd("*fits") selects all files matching "*fits".
# In our case this happens to be one and the same thing, but it doesn't have to be
dd("*fits").show_all(vmin=0, vmax=1e-2, colormap='hot')
# show_all() passes all its arguments to the show() method of each file.
Explanation: And the show_all() method will call show() on every file object in the list. This is useful if you want to render a bunch of objects with the same parameters:
End of explanation
dirties = dd("j0839-5417_2-MFS-dirty.fits")
print "This is a list:", type(dirties), len(dirties) # this is a list even though we only specified one file
print "This is a single file:", type(dirties[0]) # so we have to use [0] to get at the FITS file itself
# Note that the summary attribute returns a short summary of any radiopadre object (as text or HTML).
# You can show() or print it
print "This is a summary of the list:",dirties.summary
print "And now in HTML:"
dirties.summary.show()
print "This is a summary of the file:",dirties[0].summary
print "And now in HTML:"
dirties[0].summary.show()
Explanation: Accessing a single file by name
The (pattern) operation applied to a directory always returns a filelist (possibly an empty one), even if the pattern is not really a pattern and selects only one file:
End of explanation
dirty_image = dd["*fits"] # matches 2 files. if you re-execute this with Ctrl+Enter, you'll see a warning
print type(dirty_image)
dirty_image = dd["*dirty*fits"] # this will match just the one file
dirty_image.show()
Explanation: If you want to get at one specific file, using dd(name_or_pattern)[0] becomes a hassle. Filelists therefore support a direct [name_or_pattern] operation which always returns a single file object. If name_or_pattern matches multiple files, only the first one is returned (but radiopadre will show you a transient warning message).
End of explanation
log_file
log_file.head(5) # same as log_file.show(head=5). Number is optional -- default is 10
log_file.tail(5) # same as log_file.show(tail=5)
log_file.full() # same as log_file.show(full=True). Use the scrollbar next to the cell output.
log_file("Gain") # same as log_file.grep("Gain") or log_file.show(grep="Gain")
# and of course all objects are just "lists of lines", so the normal list slicing syntax works
log_file("Gain")[10:20].show()
log_file("Gain")[-1]
Explanation: Working with text files
By default, radiopadre renders the beginning and end of a text file. But you can also explicitly render just the head, or just the tail, or the full file.
End of explanation
log_file.watch(head=0, tail=10)
Explanation: "Watching" text files
If you're still running a reduction and want to keep an eye on a log file that's being updated, use the .watch() method. This works exactly like .show() and takes the same arguments, but adds a "refresh" button at the top right corner of the cell, which re-executes the cell every time you click it.
End of explanation
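# Illustrative sketch: since watch() takes the same arguments as show(), a filtered
# live view of the log should also be possible (grep= is assumed to pass through)
log_file.watch(tail=20, grep="Gain")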
dd.sh("df -h")
dd.sh("df -h")("/boot")
Explanation: Running shell commands
Use .sh("command") on a directory object to quickly run a shell command in that directory. The result is output as a list-of-lines, so all the usual display tricks work.
End of explanation
dirty_image.summary.show()
dirty_image.js9()
Explanation: Working with FITS files
As you saw above, FITS files can be rendered with show(), or viewed via the JS9 buttons. There's also an explicit .js9() method which invokes the viewer in a cell:
End of explanation
dd("*fits")
Explanation: With multiple FITS files, it's possible to load all of them into JS9, and use the "<" and ">" keys to switch between images. Use the "JS9 all" button to do this:
End of explanation
# If you're wondering how to tell JS9 to start with specific scale settings, use the "with settings" trick
# shown here. It will be explained below...
with settings.fits(vmin=-1e-4, vmax=0.01):
dd("*fits").js9()
Explanation: There's a shortcut for doing this directly -- just call .js9() on a list of FITS files (note that "collective" functions such as .thumbs() and .js9() will only work on homogeneous filelists, i.e. lists of FITS files. Don't try calling them on a list containing a mix of files -- it won't work!)
End of explanation
dirty_image.header
dirty_image.header("CDELT*")
dirty_image.header.full()
Explanation: The .header attribute of a FITS file object returns the FITS header, in the same kind of object (list-of-lines) as a text file. So all the tricks we did on text files above still apply:
End of explanation
dirty_image.fitsobj
Explanation: If you want to read in data from the FITS file, the .fitsobj attribute returns a PrimaryHDU object, just like astropy.io.fits.open(filename) would:
End of explanation
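# Illustrative sketch: the PrimaryHDU returned above behaves like a normal astropy HDU,
# so the pixel data is available as a numpy array via its .data attribute
data = dirty_image.fitsobj.data
data.shape, data.min(), data.max()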
demo_ms
Explanation: Working with CASA tables
As you saw above, a CASA table object knows how to render itself as a table. Default is to render rows 0 to 100. With array columns, the default display becomes a little unwieldy:
End of explanation
demo_ms.show(0,10,TIME=(),DATA=(slice(32,34),None))
Explanation: With optional arguments to .show(), you can render just a subset of rows (given as start_row, nrows), and a subset of columns, taking a slice through an array column. The below tells radiopadre to render the first 10 rows, taking the column TIME in its entirety, and taking a [32:34,:] slice through the DATA column.
End of explanation
demo_ms.show(0, 10, _=(32,0)) # selects channel 32, correlation 0 from all 2D array columns. Doesn't apply to
# other types of columns
Explanation: If you want to render all columns with a common slice, use the optional _ argument (we saw this above). The given slice will be applied to all columns as much as possible (or at least to those that match the shape):
End of explanation
print type(demo_ms.table)
Explanation: The .table attribute returns a casacore table object with which you can do all the normal casacore table operations:
End of explanation
demo_ms.ANTENNA
## and .subtables gives you a list of all the subtables
for subtable in demo_ms.subtables:
subtable.show()
Explanation: But if you want to quickly read data from a table, radiopadre provides some fancier methods. For example, subtables of the table are available as a .SUBTABLE_NAME attribute. This gives another table object, with all the functions above available:
End of explanation
data = demo_ms.DATA(0,5)
print data.shape
data
Explanation: Accessing table columns
Columns of the table can be read via a .COLUMN attribute. You can either use it a-la getcol():
End of explanation
demo_ms.DATA[0:10,:,0] # read rows 0~9, corrrelation 0
Explanation: ...or else apply a numpy-style array index with []:
End of explanation
demo_ms.DATA_F[0,:,0]
pylab.plot(demo_ms.DATA[0,:,0],'+b')
pylab.plot(demo_ms.DATA_F[0,:,0],'xr')
# of course all of these things work together
demo_ms("ANTENNA1==1 && ANTENNA2==3").DATA_F[:20,32:64,:].shape
demo_ms.UVW()
Explanation: Another useful feature is creating a masked array from a combination of a column and FLAG/FLAG_ROW. Append _F to the column name to get a masked array:
End of explanation
import numpy as np
freqs = demo_ms.SPECTRAL_WINDOW.CHAN_FREQ(0, 1) # read frequencies for spw 0
print freqs
subset = demo_ms("ANTENNA1 == 1")
uvw_lambda = subset.UVW()[np.newaxis,:,:]*3e+8/freqs[0,:,np.newaxis,np.newaxis]
print uvw_lambda.shape
import pylab
pylab.plot(uvw_lambda[:,:,0].flatten(), uvw_lambda[:,:,1].flatten(), '.')
Explanation: So combining the above, here's how to compute the UVW in wavelengths of all baselines to antenna 1, and make a uv-coverage plot of that subset of baselines:
End of explanation
ls("*txt -rt") # give *txt files in reverse order of modification time
logs = ls("*txt -rt") # of course this just returns a list-of-files object
logs
Explanation: The ls() function
...is where it all begins. As you saw, ls() gives you the current directory. You can also use ls with filename patterns, and also specify a sort order:
End of explanation
ls("*png -R")
Explanation: You can also use the "R" switch for a recursive directory listing:
End of explanation
image = ls("1525170187-1_meqtrees-gjones_plots-chan.png")
image
Explanation: Or give a filename to get an object representing that one file:
End of explanation
images_dir = ls("images")
images_dir
Explanation: On the same principle, give a subdirectory name to get a directory object:
End of explanation
settings # same as settings.show(), if it's the last expression in the cell
# and the various sections will also render themselves
settings.files
# changing settings is as easy as
settings.files.include = "*png"
# the new settings apply from that point onwards, so you probably want to do this at the top of a notebook
ls()
# from now on, only "*png" files will be listed. Unless you override this by an explicit pattern to ls(),
# e.g. in this case "*" overrides settings.files.include:
ls("*")
Explanation: One thing to note is that ls() (i.e. with no patterns) doesn't necessarily list all files. The files included by default are governed by radiopadre settings. Below we'll see how to change those.
Using and changing settings
The settings object we imported above can be used to set various defaults of Radiopadre. Like most other objects, it knows how to render itself:
End of explanation
settings.fits
Explanation: Using "with" to change settings temporarily
Python's with statement works with radiopadre settings to change settings temporarily. For example, the default FITS rendering settings look like this:
End of explanation
with settings.fits(vmin=1e-6, vmax=1, colormap='hot', scale='log'):
ls("*fits").show() # this shows a list of FITS files
ls("*fits").show_all() # and this calls show() on every FITS file
# observe that the global settings haven't changed:
settings.fits
Explanation: Here's how we can render FITS images with different settings, without changing the global settings. Whatever we set in with only applies in the body of the with statement. In this case it is particularly useful, as it will also apply to the JS9 displays by default:
End of explanation |
7,094 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
6. Reactor
(c) 2019, 2020 Dr. Ramil Nugmanov;
(c) 2019 Dr. Timur Madzhidov; Ravil Mukhametgaleev
Installation instructions for the CGRtools package and the tutorial's files can be found at https://github.com/stsouko/CGRtools
Step1: Reactor objects store a single transformation and can apply it to molecules or CGRs.
A transformation is a ReactionContainer object whose reactant side holds a query for matching a group and whose product side holds a patch for updating the matched atoms and bonds
Step2: 6.1. Products generation
Reactor works similar to ChemAxon Reactions enumeration.
Example here presents application of it to create esters from acids.
First we need to construct carboxy group matcher query. Then, ether group need to be specified.
Atom numbers in query and patch should be mapped to each other. The same atoms should have same numbers.
Step3: One can notice presence of separate oxygen (water) and ester group.
The second group can be substituted by calling the reactor on the observed product.
Step4: second_stage has 3 components in a single MoleculeContainer object. We can split it into individual molecules and place all molecules into ReactionContainer object. Since in CGRtools atom-to-atom mapping corresponds to numbering of atoms in molecules, the resulting product has AAM according to the rule applied. Thus, reaction has correct AAM and nothing special should be made to keep or find it.
Step5: For multicomponent reactions one can merge molecules of reactants into single MoleculeContainer object and apply reactor on it.
It is possible to generate all available products in case that molecule has several groups matching the query.
Step6: Let's have a look at molecules in set.
Note to lost isotope mark.
Step7: 6.2. MetaReactions (reactions on CGRs).
Reactor could be applied to CGR to introduce changes into reaction.
6.2.1. Example of atom-to-atom mapping fixing.
Step8: Reaction has AAM error in nitro-group
Lets try to use Reactor for AAM fixing
Step9: Now time to prepare and apply Template to CGR based on reaction with incorrect AAM.
Template is Reaction container with query in reactants and patch in products
Step10: Reactor class accept single template. Existence of dynamic bond in it is not a problem.
Step11: Reactor object is callable and accept as argument molecule or CGR.
NOTE
Step12: CGRreactor returns None if template could not be applied, otherwise patched structure is returned.
Step13: One can see that nitro group has no dynamic bonds any more. CGR corresponds only to substitution.
Step14: 6.2.2 Reaction transformation
Example of E2 to SN2 transformation.
E2 and SN2 are concurrent reactions.
We can easily change reaction center of E2 reaction to SN2. It could be achieved by substitution of reaction center corresponding to double bond formation in E2 reaction by the one corresponding to formation of new single bond with base as in SN2. | Python Code:
import pkg_resources
if pkg_resources.get_distribution('CGRtools').version.split('.')[:2] != ['4', '0']:
print('WARNING. Tutorial was tested on 4.0 version of CGRtools')
else:
print('Welcome!')
# load data for tutorial
from pickle import load
from traceback import format_exc
with open('reactions.dat', 'rb') as f:
reactions = load(f) # list of ReactionContainer objects
r1 = reactions[0] # reaction
m6 = r1.reactants[1]
m6copy = m6.copy()
m6copy.atom(5)._Core__isotope = 13
Explanation: 6. Reactor
(c) 2019, 2020 Dr. Ramil Nugmanov;
(c) 2019 Dr. Timur Madzhidov; Ravil Mukhametgaleev
Installation instructions for the CGRtools package and the tutorial's files can be found at https://github.com/stsouko/CGRtools
NOTE: Tutorial should be performed sequentially from the start. Random cell running will lead to unexpected results.
End of explanation
from CGRtools import CGRReactor, Reactor # import of Reactor
from CGRtools.containers import * # import of required objects
from CGRtools.containers.bonds import DynamicBond
Explanation: Reactor objects store a single transformation and can apply it to molecules or CGRs.
A transformation is a ReactionContainer object whose reactant side holds a query for matching a group and whose product side holds a patch for updating the matched atoms and bonds
End of explanation
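# The sections below all follow the same compact pattern (sketch of what is built step by step):
# query = QueryContainer()   # group to match, built with add_atom/add_bond
# patch = QueryContainer()   # replacement, with atom numbering consistent with the query
# reactor = CGRReactor(ReactionContainer([query], [patch]))
# product = next(reactor(molecule))   # apply the template to a molecule or CGR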
acid = QueryContainer() # this query matches acids. Use construction possibilities.
acid.add_atom('C', neighbors=3) # add carboxyl carbon. Hybridization is irrelevant here
acid.add_atom('O', neighbors=1) # add hydroxyl oxygen. Hybridization is irrelevant here
acid.add_atom('O') # add carbonyl oxygen. Number of neighbors is irrelevant here.
acid.add_bond(1, 2, 1) # create single bond between carbon and hydroxyl oxygen
acid.add_bond(1, 3, 2) # create double bond
print(acid)
acid.clean2d()
acid
methyl_ester = QueryContainer() # create patch - how carboxyl group should be changed. We write methylated group
methyl_ester.add_atom('C', 1) # second argument is predefined atom mapping. Notice that mapping corresponds...
methyl_ester.add_atom('O', 2) # ... to order in already created acid group. Atom 2 is released water.
methyl_ester.add_atom('O', 4)
methyl_ester.add_atom('O', 3)
methyl_ester.add_atom('C', 5)
methyl_ester.add_bond(1, 4, 1)
methyl_ester.add_bond(1, 3, 2)
methyl_ester.add_bond(4, 5, 1)
# No bond between atom 1 and atom 2. This bond will be broken.
methyl_ester.clean2d()
methyl_ester
m6 # acid
template = ReactionContainer([acid], [methyl_ester]) # merge query and patch in template, which is ReactionContainer
reactor = CGRReactor(template) # Reactor is initialized
reacted_acid = next(reactor(m6)) # application of Reactor to molecule
reacted_acid.clean2d() # calculate coordinates
reacted_acid # desired methylated ester have been generated
Explanation: 6.1. Products generation
Reactor works similar to ChemAxon Reactions enumeration.
Example here presents application of it to create esters from acids.
First we need to construct carboxy group matcher query. Then, ether group need to be specified.
Atom numbers in query and patch should be mapped to each other. The same atoms should have same numbers.
End of explanation
second_stage = next(reactor(reacted_acid)) # apply transformation on product of previous transformation
second_stage.clean2d() # recalculate coordinates for correct drawing
second_stage
Explanation: One can notice presence of separate oxygen (water) and ester group.
The second group can be substituted by calling the reactor on the observed product.
End of explanation
products = second_stage.split() # split product into individual molecules
react = ReactionContainer([m6], products) # unite reagent and product into reaction.
react
Explanation: second_stage has 3 components in a single MoleculeContainer object. We can split it into individual molecules and place all molecules into ReactionContainer object. Since in CGRtools atom-to-atom mapping corresponds to numbering of atoms in molecules, the resulting product has AAM according to the rule applied. Thus, reaction has correct AAM and nothing special should be made to keep or find it.
End of explanation
m6copy
enums = set() # the set enums is used to select structurally diverse products
for m in reactor(m6copy): # limit=0 is enumeration of all possible products by reactor
print(m) # print signatures for observed molecules. Notice presence of water as component of product
m.clean2d() # recalculate coordinates
enums.update(m.split()) # split product into separate molecules
enums = list(enums) # set of all resulting molecules
Explanation: For multicomponent reactions one can merge molecules of reactants into single MoleculeContainer object and apply reactor on it.
It is possible to generate all available products in case that molecule has several groups matching the query.
End of explanation
enums[0]
enums[1]
enums[2]
Explanation: Let's have a look at molecules in set.
Note the lost isotope mark.
End of explanation
reactions[1] # reaction under study
cgr = ~reactions[1] # generate reaction CGR
print(cgr)
cgr.clean2d()
cgr
cgr.centers_list # reaction has two reaction centers. [10,11,12] - pseudo reaction appeared due to AAM error
Explanation: 6.2. MetaReactions (reactions on CGRs).
Reactor could be applied to CGR to introduce changes into reaction.
6.2.1. Example of atom-to-atom mapping fixing.
End of explanation
nitro = QueryCGRContainer() # construct query for invalid reaction center - CGR of wrongly mapped nitro-group
nitro.add_atom('N', charge=1, p_charge=1) # atom 1
nitro.add_atom('O', charge=0, p_charge=-1) # atom 2. Notice that due to AAM error charge was changed
nitro.add_atom('O', charge=-1, p_charge=0) # atom 3. Notice that due to AAM error charge was changed
nitro.add_atom('C') # atom 4
nitro.add_bond(1, 2, DynamicBond(2, 1)) # bond between atoms 1 and 2. Due to AAM error bond is dynamic ('2>1' type)
nitro.add_bond(1, 3, DynamicBond(1, 2)) # bond between atoms 1 and 3. Due to AAM error bond is dynamic ('1>2' type)
nitro.add_bond(1, 4, 1) # ordinary bond
print(nitro)
# this query matches reaction center in CGR appeared due to AAM error.
nitro.clean2d()
nitro
nitro < cgr # query matches CGR of reaction with error.
valid_nitro = QueryCGRContainer() # construct nitro group without dynamic atoms. Notice that atom order should correspond object nitro
valid_nitro.add_atom('N', charge=1, p_charge=1) # ordinary N atom
valid_nitro.add_atom('O', charge=-1, p_charge=-1) # ordinary negatively charged oxygen atom
valid_nitro.add_atom('O') # ordinary oxygen atom
valid_nitro.add_bond(1, 2, 1) # ordinary single bond
valid_nitro.add_bond(1, 3, 2) # ordinary double bond
print(valid_nitro)
valid_nitro.clean2d()
valid_nitro
Explanation: Reaction has AAM error in nitro-group
Lets try to use Reactor for AAM fixing
End of explanation
template = ReactionContainer([nitro], [valid_nitro]) # template shows how wrong part of CGR is transformed into correct one.
print(template) # notice complex structure of query: CGR signature is given in braces, then >> and molecule signature
template
Explanation: Now time to prepare and apply Template to CGR based on reaction with incorrect AAM.
Template is Reaction container with query in reactants and patch in products
End of explanation
reactor = CGRReactor(template)
Explanation: The Reactor class accepts a single template. The existence of a dynamic bond in it is not a problem.
End of explanation
fixed = next(reactor(cgr)) # fix CGR
Explanation: The Reactor object is callable and accepts a molecule or CGR as its argument.
NOTE: fixed is a new CGR object
End of explanation
print(fixed)
fixed
Explanation: CGRreactor returns None if template could not be applied, otherwise patched structure is returned.
End of explanation
fixed.centers_list # reaction center appeared due to AAM error before does not exist. Only 1 reaction center is found
Explanation: One can see that nitro group has no dynamic bonds any more. CGR corresponds only to substitution.
End of explanation
from CGRtools.files import MRVRead
from io import StringIO
e2 = next(MRVRead('e2.mrv')) # read E2 reaction from ChemAxon MRV file
e2
# create CGR query for E2 reaction side
e2query = QueryCGRContainer()
e2query.add_atom('C', 1) # create carbon with mapping number 1
e2query.add_atom('C', 2) # create carbon with mapping number 2
# addition of iodine atom
e2query.add_atom('I', 3, neighbors=1, p_neighbors=0, charge=0, p_charge=-1)
# addition of OH- or RO- groups
e2query.add_atom('O', 4, neighbors=[0, 1], p_neighbors=[0, 1], charge=-1, p_charge=0)
e2query.add_bond(1, 2, DynamicBond(1, 2)) # bond between two carbon corresponds to formation of double from single
e2query.add_bond(1, 3, DynamicBond(1)) # bond between carbon and halogen breaks in E2 reaction
print(e2query) # it is CGR of E2 reaction center
e2query.clean2d()
e2query
e2_cgr = ~e2 # compose reaction into CGR
e2_cgr
e2query < e2_cgr # E2 CGR pattern works!
# create patch creating SN2 reaction. Notice that ordering of atoms correspond to that of E2 CGR query
sn2patch = QueryCGRContainer()
sn2patch.add_atom('C', 1) # save atom unchanged.
sn2patch.add_atom('C', 2) # it is central atom.
sn2patch.add_atom('I', 3, charge=0, p_charge=-1)
sn2patch.add_atom('O', 4, charge=-1, p_charge=0)
sn2patch.add_bond(1, 2, 1) # set carbon - carbon single bond that is unchanged in SN2 reaction
sn2patch.add_bond(1, 3, DynamicBond(1)) # this bond is broken in SN2 reaction
sn2patch.add_bond(1, 4, DynamicBond(None, 1)) # it corresponds to formation of bond O(S)-C bond in SN2 reaction
print(sn2patch)
sn2patch.clean2d()
sn2patch
reactor = CGRReactor(ReactionContainer([e2query], [sn2patch])) # create template and pass it to Reactor
sn2_cgr = next(reactor(e2_cgr)) # apply Reactor on E2 reaction
print(sn2_cgr)
sn2_cgr
# decompose CGR into reaction
sn2 = ReactionContainer.from_cgr(sn2_cgr)
sn2.clean2d()
sn2
Explanation: 6.2.2 Reaction transformation
Example of E2 to SN2 transformation.
E2 and SN2 are concurrent reactions.
We can easily change reaction center of E2 reaction to SN2. It could be achieved by substitution of reaction center corresponding to double bond formation in E2 reaction by the one corresponding to formation of new single bond with base as in SN2.
End of explanation |
7,095 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Part 4
Step1: Import libraries
Step2: Configure GCP environment settings
Update the following variables to reflect the values for your GCP environment
Step3: Authenticate your GCP account
This is required if you run the notebook in Colab. If you use an AI Platform notebook, you should already be authenticated.
Step4: Build the ANN index
Use the build method implemented in the indexer.py module to load the embeddings from the CSV files, create the ANN index model and train it on the embedding data, and save the SavedModel file to Cloud Storage. You pass the following three parameters to this method
Step5: Build the index using AI Platform Training
Submit an AI Platform Training job to build the ScaNN index at scale. The index_builder directory contains the expected training application packaging structure for submitting the AI Platform Training job.
Step6: After the AI Platform Training job finishes, check that the scann_index folder has been created in your Cloud Storage bucket
Step7: Test the ANN index
Test the ANN index by using the ScaNNMatcher class implemented in the index_server/matching.py module.
Run the following code snippets to create an item embedding from random generated values and pass it to scann_matcher, which returns the items IDs for the five items that are the approximate nearest neighbors of the embedding you submitted. | Python Code:
!pip install -q scann
Explanation: Part 4: Create an approximate nearest neighbor index for the item embeddings
This notebook is the fourth of five notebooks that guide you through running the Real-time Item-to-item Recommendation with BigQuery ML Matrix Factorization and ScaNN solution.
Use this notebook to create an approximate nearest neighbor (ANN) index for the item embeddings by using the ScaNN framework. You create the index as a model, train the model on AI Platform Training, then export the index to Cloud Storage so that it can serve ANN information.
Before starting this notebook, you must run the 03_create_embedding_lookup_model notebook to process the item embeddings data and export it to Cloud Storage.
After completing this notebook, run the 05_deploy_lookup_and_scann_caip notebook to deploy the solution. Once deployed, you can submit song IDs to the solution and get similar song recommendations in return, based on the ANN index.
Setup
Import the required libraries, configure the environment variables, and authenticate your GCP account.
End of explanation
import tensorflow as tf
import numpy as np
from datetime import datetime
Explanation: Import libraries
End of explanation
PROJECT_ID = 'yourProject' # Change to your project.
BUCKET = 'yourBucketName' # Change to the bucket you created.
REGION = 'yourTrainingRegion' # Change to your AI Platform Training region.
EMBEDDING_FILES_PREFIX = f'gs://{BUCKET}/bqml/item_embeddings/embeddings-*'
OUTPUT_INDEX_DIR = f'gs://{BUCKET}/bqml/scann_index'
Explanation: Configure GCP environment settings
Update the following variables to reflect the values for your GCP environment:
PROJECT_ID: The ID of the Google Cloud project you are using to implement this solution.
BUCKET: The name of the Cloud Storage bucket you created to use with this solution. The BUCKET value should be just the bucket name, so myBucket rather than gs://myBucket.
REGION: The region to use for the AI Platform Training job.
End of explanation
try:
from google.colab import auth
auth.authenticate_user()
print("Colab user is authenticated.")
except: pass
Explanation: Authenticate your GCP account
This is required if you run the notebook in Colab. If you use an AI Platform notebook, you should already be authenticated.
End of explanation
from index_builder.builder import indexer
indexer.build(EMBEDDING_FILES_PREFIX, OUTPUT_INDEX_DIR)
Explanation: Build the ANN index
Use the build method implemented in the indexer.py module to load the embeddings from the CSV files, create the ANN index model and train it on the embedding data, and save the SavedModel file to Cloud Storage. You pass the following three parameters to this method:
embedding_files_path, which specifies the Cloud Storage location from which to load the embedding vectors.
num_leaves, which provides the value for a hyperparameter that tunes the model based on the trade-off between retrieval latency and recall. A higher num_leaves value will use more data and provide better recall, but will also increase latency. If num_leaves is set to None or 0, the num_leaves value is the square root of the number of items.
output_dir, which specifies the Cloud Storage location to write the ANN index SavedModel file to.
Other configuration options for the model are set based on the rules-of-thumb provided by ScaNN.
Build the index locally
End of explanation
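# Hypothetical variant of the call above: if indexer.build exposes the num_leaves
# hyperparameter described earlier as a keyword argument, it could be tuned like this
# (the AI Platform job below uses --num-leaves=500):
# indexer.build(EMBEDDING_FILES_PREFIX, OUTPUT_INDEX_DIR, num_leaves=500)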
if tf.io.gfile.exists(OUTPUT_INDEX_DIR):
print("Removing {} contents...".format(OUTPUT_INDEX_DIR))
tf.io.gfile.rmtree(OUTPUT_INDEX_DIR)
print("Creating output: {}".format(OUTPUT_INDEX_DIR))
tf.io.gfile.makedirs(OUTPUT_INDEX_DIR)
timestamp = datetime.utcnow().strftime('%y%m%d%H%M%S')
job_name = f'ks_bqml_build_scann_index_{timestamp}'
!gcloud ai-platform jobs submit training {job_name} \
--project={PROJECT_ID} \
--region={REGION} \
--job-dir={OUTPUT_INDEX_DIR}/jobs/ \
--package-path=index_builder/builder \
--module-name=builder.task \
--config='index_builder/config.yaml' \
--runtime-version=2.2 \
--python-version=3.7 \
--\
--embedding-files-path={EMBEDDING_FILES_PREFIX} \
--output-dir={OUTPUT_INDEX_DIR} \
--num-leaves=500
Explanation: Build the index using AI Platform Training
Submit an AI Platform Training job to build the ScaNN index at scale. The index_builder directory contains the expected training application packaging structure for submitting the AI Platform Training job.
End of explanation
!gsutil ls {OUTPUT_INDEX_DIR}
Explanation: After the AI Platform Training job finishes, check that the scann_index folder has been created in your Cloud Storage bucket:
End of explanation
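# Optional sanity check (sketch): the index is exported as a TensorFlow SavedModel,
# so it should load directly with tf.saved_model.load
loaded_index = tf.saved_model.load(OUTPUT_INDEX_DIR)
print(list(loaded_index.signatures.keys()))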
from index_server.matching import ScaNNMatcher
scann_matcher = ScaNNMatcher(OUTPUT_INDEX_DIR)
vector = np.random.rand(50)
scann_matcher.match(vector, 5)
Explanation: Test the ANN index
Test the ANN index by using the ScaNNMatcher class implemented in the index_server/matching.py module.
Run the following code snippets to create an item embedding from randomly generated values and pass it to scann_matcher, which returns the item IDs for the five items that are the approximate nearest neighbors of the embedding you submitted.
End of explanation |
7,096 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Series
Step1: A Series is really a 1-dimensional NumPy array. It consists of a NumPy array plus an array of labels.
Creating Series
The general constructor for creating a Series is as follows
Step2: Example 2
Step3: Using a Python dictionary
We can use a dictionary to create a Series. In this case, if an index is provided, the labels will be built from it. If it is not provided, the dictionary keys will be used as labels.
The dictionary values are used to populate the Series.
Step4: Another option is to pass the labels we want to define via the index parameter.
Consider the dictionary below with stock prices of some companies.
Step5: To create our own index, we will use the keys of the dictionary we just created.
Step6: Let's add an element (key) that does not exist in the preco_acoes dictionary.
Step7: The result is that the value for this key is set to NaN (Not A Number), indicating that it is missing.
We can also use the name parameter, which lets us name the Series and can be useful for combining Series objects into a DataFrame structure.
Using scalar values
For scalar data, an index must be provided. The value will be repeated for as many entries as there are in the index.
We can use this method as a quick form of initialization.
Step8: Operations on Series
The behavior of Series is very similar to what we did with NumPy arrays, with one difference
Step9: Assignments
Values can be set and accessed using the label index, in the same way as a dictionary.
Step10: We can avoid this error by using the available get method. In this case, the value nan is returned if the key does not exist in the Series.
Step11: Other operations
We can use arithmetic and statistical operations, just as with NumPy arrays. | Python Code:
import pandas as pd
Explanation: Series
End of explanation
import numpy as np
ser1 = pd.Series(np.random.rand(7))
ser1
Explanation: A Series is really a 1-dimensional NumPy array. It consists of a NumPy array plus an array of labels.
Creating Series
The general constructor for creating a Series is as follows:
python
s = pd.Series(dados)
where dados can be one of the items below:
* a numpy.ndarray
* a dictionary
* a scalar value
To test the creation of Series we will use the three items listed above.
Using numpy.ndarray
In this case, the index must be the same length as the data. If an index is not specified, the default index [0, ... n-1] will be created, where n is the length of the data.
Example 1: To create a Series with 7 random numbers between 0 and 1, we can use numpy's rand method. Note that we did not specify the index.
End of explanation
nome_meses = ['Jan', 'Fev', 'Mar', 'Abr', 'Mai']
print(nome_meses)
meses = pd.Series(np.arange(1, 6), index=nome_meses)
meses
meses.index
Explanation: Example 2: Let's create a Series with the first 5 months of a year, using the month names as the index.
End of explanation
dicionario = {'US' : 'dolar',
'BR' : 'real',
'UK' : 'libra',
'JP' : 'iene'}
moedas = pd.Series(dicionario)
moedas
Explanation: Using a Python dictionary
We can use a dictionary to create a Series. In this case, if an index is provided, the labels will be built from it. If it is not provided, the dictionary keys will be used as labels.
The dictionary values are used to populate the Series.
End of explanation
preco_acoes = {'GOOG' : 737.44,
'FB' : 120.38,
'TWTR' : 18.44,
'AMZN' : 744.58,
'AAPL' : 99.40,
'NFLX' : 85.55}
Explanation: Another option is to pass the labels we want to define via the index parameter.
Consider the dictionary below with stock prices of some companies.
End of explanation
rotulos = list(preco_acoes.keys())
print(rotulos)
Explanation: To create our own index, we will use the keys of the dictionary we just created.
End of explanation
rotulos.append('YHOO')
acoes = pd.Series(preco_acoes, index=rotulos)
acoes
Explanation: Let's add an element (key) that does not exist in the preco_acoes dictionary.
End of explanation
ser2 = pd.Series(10, index=['col1', 'col2', 'col3'])
ser2
Explanation: The result is that the value for this key is set to NaN (Not A Number), indicating that it is missing.
We can also use the name parameter, which lets us name the Series and can be useful for combining Series objects into a DataFrame structure.
Using scalar values
For scalar data, an index must be provided. The value will be repeated for as many entries as there are in the index.
We can use this method as a quick form of initialization.
End of explanation
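# Illustrative sketch of the name parameter mentioned above:
named_series = pd.Series(preco_acoes, name='price')
named_series.name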
acoes[:4]
acoes[acoes > 100]
Explanation: Operations on Series
The behavior of Series is very similar to what we did with NumPy arrays, with one difference: when we perform a slicing operation, it also slices the index.
Slicing
End of explanation
dicionario['BR']
acoes['GOOG']
acoes['GOOG'] = 1200
acoes
print(acoes['AOL'])
Explanation: Assignments
Values can be set and accessed using the label index, in the same way as a dictionary.
End of explanation
acoes.get('AOL', np.NaN)
Explanation: We can avoid this error by using the available get method. In this case, the value nan is returned if the key does not exist in the Series.
End of explanation
acoes
# Mean
np.mean(acoes)
# Standard deviation
np.std(acoes)
ser1
ser1 * 2
np.sqrt(ser1)
Explanation: Other operations
We can use arithmetic and statistical operations, just as with NumPy arrays.
End of explanation |
7,097 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This [article](http://pbpython.com/simple-graphing-pandas.html) will walk through how to start doing some simple graphing in pandas.
Step1: Next, enable iPython to display matplotlib graphs. As an alternative you can run ipython notebook.
Step2: We will read in the file like we did in the previous article but I'm going to tell it to treat the date column as a date field so I can do some re-sampling later.
Step3: Now that we have read in the data, we can do some quick analysis
Step4: We can actually learn some pretty helpful info from this simple command
Step5: It is easy to call describe on a single column too. I can see that my average price is \$56.18 but it ranges from \$10.06 to \$99.97.
I am showing the output of dtypes so that you can see that the date column is a datetime field. I also scan this to make sure that any columns that have numbers are floats or ints so that I can do additional analysis in the future.
Step6: Now we remove some columns to make additional analysis easier.
Step7: This representation has multiple lines for each customer. In order to understand purchasing patterns, let's group all the customers by name.
Step8: Now that our data is in a simple format to manipulate, let's determine how much each customer purchased during our time frame.
The sum function allows us to quickly sum up all the values by customer. We can also sort the data using the sort command.
Step9: Now that we know what the data look like, it is very simple to create a quick bar chart plot.
Step10: Unfortunately this chart is a little ugly. With a few tweaks we can make it a little more impactful.
Let's try
Step11: This actually tells us a little about our biggest customers and how much difference there is between their sales and our smallest customers.
Now, let's try to see how the sales break down by category.
Step12: We can use groupby to organize the data by category and name.
Step13: The category representation looks good but we need to break it apart to graph it as a stacked bar graph. Unstack can do this for us.
Step14: Now plot it.
Step15: Now clean some of this up a little bit.
We can specify the figure size and customize the legend.
Step16: Now that we know who the biggest customers are and how they purchase products, we might want to look at purchase patterns in more detail.
Let's take another look at the data and try to see how large the individual purchases are. A histogram allows us to group purchases together so we can see how big the customer transactions are.
Step17: After looking at this group
We can look at purchase patterns over time. We can see that most of our transactions are less than $500 and only a very few are about $1500.
Another interesting way to look at the data would be by sales over time. Do we have certain months where we are busier than others?
Let's get the data down to order size and date.
Step18: If we want to analyze the data by date, we need to set the date column as the index.
Step19: One of the really cool things that pandas allows us to do is resample the data. If we want to look at the data by month, we can easily resample and sum it all up.
purchase_patterns.resample('M').sum()
Plotting the data is now very easy
Step20: December is our peak month and April is the slowest.
Let's say we really like this plot and want to save it somewhere for a presentation. | Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
pd.__version__
Explanation: This [article](http://pbpython.com/simple-graphing-pandas.html) will walk through how to start doing some simple graphing in pandas.
I am using a new data file that is the same format as my previous article but includes data for only 20 customers.
First we are going to import pandas, numpy and matplot lib.
I am also showing the versions I'm testing so you can make sure yours is compatible.
End of explanation
%matplotlib inline
Explanation: Next, enable iPython to display matplotlib graphs. As an alternative you can run ipython notebook.
End of explanation
sales=pd.read_csv("sample-salesv2.csv",parse_dates=['date'])
sales.head()
Explanation: We will read in the file like we did in the previous article but I'm going to tell it to treat the date column as a date field so I can do some re-sampling later.
End of explanation
sales.describe()
Explanation: Now that we have read in the data, we can do some quick analysis
End of explanation
sales['unit price'].describe()
Explanation: We can actually learn some pretty helpful info from this simple command:
For example, we can tell that customers on average purchases 10.3 items per transaction and that the average cost of the transaction was $579.84. It is also easy to see the min and max so you understand the range of the data.
End of explanation
sales.dtypes
Explanation: It is easy to call describe on a single column too. I can see that my average price is \$56.18 but it ranges from \$10.06 to \$99.97.
I am showing the output of dtypes so that you can see that the date column is a datetime field. I also scan this to make sure that any columns that have numbers are floats or ints so that I can do additional analysis in the future.
End of explanation
customers = sales[['name','ext price','date']]
customers.head()
Explanation: Now we remove some columns to make additional analysis easier.
End of explanation
customer_group = customers.groupby('name')
customer_group.size()
Explanation: This representation has multiple lines for each customer. In order to understand purchasing patterns, let's group all the customers by name.
End of explanation
sales_totals = customer_group.sum()
sales_totals.sort_values(by=['ext price']).head()
Explanation: Now that our data is in a simple format to manipulate, let's determine how much each customer purchased during our time frame.
The sum function allows us to quickly sum up all the values by customer. We can also sort the data using the sort command.
End of explanation
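# Related sketch: agg() can compute several summaries per customer in one pass
customer_group['ext price'].agg(['sum', 'mean', 'count']).head()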
my_plot = sales_totals.plot(kind='bar')
Explanation: Now that we know what the data look like, it is very simple to create a quick bar chart plot.
End of explanation
my_plot = sales_totals.sort_values(by=['ext price'],ascending=False).plot(kind='bar',legend=None,title="Total Sales by Customer")
my_plot.set_xlabel("Customers")
my_plot.set_ylabel("Sales ($)")
Explanation: Unfortunately this chart is a little ugly. With a few tweaks we can make it a little more impactful.
Let's try:
- sorting the data in descending order.
- Removing the legend
- Adding a title
- Labeling the axes
End of explanation
customers = sales[['name','category','ext price','date']]
customers.head()
Explanation: This actually tells us a little about our biggest customers and how much difference there is between their sales and our smallest customers.
Now, let's try to see how the sales break down by category.
End of explanation
category_group=customers.groupby(['name','category']).sum()
category_group.head()
Explanation: We can use groupby to organize the data by category and name.
End of explanation
category_group.unstack().head()
Explanation: The category representation looks good but we need to break it apart to graph it as a stacked bar graph. Unstack can do this for us.
End of explanation
my_plot = category_group.unstack().plot(kind='bar',stacked=True,title="Total Sales by Customer")
my_plot.set_xlabel("Customers")
my_plot.set_ylabel("Sales")
Explanation: Now plot it.
End of explanation
my_plot = category_group.unstack().plot(kind='bar',stacked=True,title="Total Sales by Customer",figsize=(9, 7))
my_plot.set_xlabel("Customers")
my_plot.set_ylabel("Sales")
my_plot.legend(["Total","Belts","Shirts","Shoes"], loc=9,ncol=4)
Explanation: Now clean some of this up a little bit.
We can specify the figure size and customize the legend.
End of explanation
purchase_patterns = sales[['ext price','date']]
purchase_patterns.head()
purchase_plot = purchase_patterns['ext price'].hist(bins=20)
purchase_plot.set_title("Purchase Patterns")
purchase_plot.set_xlabel("Order Amount($)")
purchase_plot.set_ylabel("Number of orders")
Explanation: Now that we know who the biggest customers are and how they purchase products, we might want to look at purchase patterns in more detail.
Let's take another look at the data and try to see how large the individual purchases are. A histogram allows us to group purchases together so we can see how big the customer transactions are.
End of explanation
purchase_patterns = sales[['ext price','date']]
purchase_patterns.head()
Explanation: After looking at this group
We can look at purchase patterns over time. We can see that most of our transactions are less than $500 and only a very few are about $1500.
Another interesting way to look at the data would be by sales over time. Do we have certain months where we are busier than others?
Let's get the data down to order size and date.
End of explanation
purchase_patterns = purchase_patterns.set_index('date')
purchase_patterns.head()
Explanation: If we want to analyze the data by date, we need to set the date column as the index.
End of explanation
purchase_plot = purchase_patterns.resample('M').sum().plot(title="Total Sales by Month",legend=None)
Explanation: One of the really cool things that pandas allows us to do is resample the data. If we want to look at the data by month, we can easily resample and sum it all up.
purchase_patterns.resample('M').sum()
Plotting the data is now very easy
End of explanation
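# The same idea works for other frequencies (sketch), e.g. weekly or quarterly totals
purchase_patterns.resample('W').sum().head()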
fig = purchase_plot.get_figure()
fig.savefig("total-sales.png")
Explanation: December is our peak month and April is the slowest.
Let's say we really like this plot and want to save it somewhere for a presentation.
End of explanation |
7,098 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Loading of Libraries and Classes.
Step1: Create forward bond future PV (Exposure) time profile
Setting up parameters
Step2: Data input for the CouponBond portfolio
The word portfolio is used to describe just a dict of CouponBonds.
This line creates a referenceDateList
myScheduler = Scheduler()
ReferenceDateList = myScheduler.getSchedule(start=referenceDate,end=trim_end,freq="1M", referencedate=referenceDate)
Create Simulator
This section creates Monte Carlo Trajectories in a wide range. Notice that the BondCoupon maturities have to be
inside the Monte Carlo simulation range [trim_start,trim_end]
Sigma has been artificially increased (OIS has smaller sigma) to allow for visualization of distinct trajectories.
# SDE parameters - Vasicek SDE
# dr(t) = k(θ − r(t))dt + σdW(t)
self.kappa = x[0]
self.theta = x[1]
self.sigma = x[2]
self.r0 = x[3]
myVasicek = MC_Vasicek_Sim()
xOIS = [ 3.0, 0.07536509, -0.208477, 0.07536509]
myVasicek.setVasicek(x=xOIS,minDay=trim_start,maxDay=trim_end,simNumber=simNumber,t_step=1/365.0)
myVasicek.getLibor()
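For intuition, the Vasicek SDE above can also be simulated directly with a small Euler scheme (a standalone numpy sketch, independent of the MC_Vasicek_Sim implementation; the parameter order follows xOIS = [kappa, theta, sigma, r0]):
kappa, theta, sigma, r0 = xOIS
dt = 1.0/365.0
r = [r0]
for _ in range(365):
    r.append(r[-1] + kappa*(theta - r[-1])*dt + sigma*np.sqrt(dt)*nprnd.standard_normal())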
Create Coupon Bond with several startDates.
SixMonthDelay = myScheduler.extractDelay("6M")
TwoYearsDelay = myScheduler.extractDelay("2Y")
startDates = [referenceDate + nprnd.randint(0,3)*SixMonthDelay for r in range(10)]
For debugging uncomment this to choose a single date for the forward bond
print(startDates)
startDates = [date(2005,3,10)] # or
startDates = [date(2005,3,10) + SixMonthDelay]
maturities = [(x+TwoYearsDelay) for x in startDates]
You can change the coupon and see its effect on the Exposure Profile. The breakevenRate is calculated, for simplicity, always at referenceDate=self.start, that is, at the first day of the CouponBond life.
Below is a way to create random long/short bond portfolio of any size. The notional only affects the product class at the last stage of calculation. In my case, the only parameters affected are Exposure (PV on referenceDate), pvAvg(average PV on referenceDate)
myPortfolio = {}
coupon = 0.07536509
for i in range(len(startDates)):
Step3: Create Libor and portfolioScheduleOfCF. This datelist contains all dates
to be used in any calculation of the portfolio positions.
The BondCoupon class has to have a method getScheduleComplete, which returns
fullSet on [0] and datelist on [1], calculated by BondCoupon as | Python Code:
%matplotlib inline
from datetime import date
import time
import pandas as pd
import numpy as np
pd.options.display.max_colwidth = 60
from Curves.Corporates.CorporateDailyVasicek import CorporateRates
from Boostrappers.CDSBootstrapper.CDSVasicekBootstrapper import BootstrapperCDSLadder
from MonteCarloSimulators.Vasicek.vasicekMCSim import MC_Vasicek_Sim
from Products.Rates.CouponBond import CouponBond
from Products.Credit.CDS import CDS
from Scheduler.Scheduler import Scheduler
import quandl
import matplotlib.pyplot as plt
from parameters import WORKING_DIR
import itertools
marker = itertools.cycle((',', '+', '.', 'o', '*'))
from IPython.core.pylabtools import figsize
figsize(15, 4)
from pandas import ExcelWriter
import numpy.random as nprnd
from pprint import pprint
Explanation: Loading of Libraries and Classes.
End of explanation
t_step = 1.0 / 365.0
simNumber = 10
trim_start = date(2005,3,10)
trim_end = date(2010,12,31) # Last Date of the Portfolio
start = date(2005, 3, 10)
referenceDate = date(2005, 5, 10)
Explanation: Create forward bond future PV (Exposure) time profile
Setting up parameters
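As a quick sanity check on these inputs (a small sketch using only the variables defined above), the simulation window covers roughly:
# Length of the Monte Carlo window implied by trim_start/trim_end and the daily time step
n_days = (trim_end - trim_start).days
print(n_days, "days, i.e. about", round(n_days * t_step, 2), "years at a 1/365 step")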
End of explanation
myScheduler = Scheduler()
ReferenceDateList = myScheduler.getSchedule(start=referenceDate,end=trim_end,freq="1M", referencedate=referenceDate)
# Create Simulator
xOIS = [ 3.0, 0.07536509, -0.208477, 0.07536509]
myVasicek = MC_Vasicek_Sim(datelist = [trim_start,trim_end],x = xOIS,simNumber = simNumber,t_step =1/365.0 )
#myVasicek.setVasicek(x=xOIS,minDay=trim_start,maxDay=trim_end,simNumber=simNumber,t_step=1/365.0)
myVasicek.getLibor()
# Create Coupon Bond with several startDates.
SixMonthDelay = myScheduler.extractDelay("6M")
TwoYearsDelay = myScheduler.extractDelay("2Y")
startDates = [referenceDate + nprnd.randint(0,3)*SixMonthDelay for r in range(10)]
# For debugging uncomment this to choose a single date for the forward bond
# print(startDates)
startDates = [date(2005,3,10)+SixMonthDelay,date(2005,3,10)+TwoYearsDelay ]
maturities = [(x+TwoYearsDelay) for x in startDates]
myPortfolio = {}
coupon = 0.07536509
for i in range(len(startDates)):
notional=(-1.0)**i
myPortfolio[i] = CouponBond(fee=1.0,start=startDates[i],coupon=coupon,notional=notional,
maturity= maturities[i], freq="3M", referencedate=referenceDate)
Explanation: Data input for the CouponBond portfolio
The word portfolio is used to describe just a dict of CouponBonds.
This line creates a referenceDateList
myScheduler = Scheduler()
ReferenceDateList = myScheduler.getSchedule(start=referenceDate,end=trim_end,freq="1M", referencedate=referenceDate)
Create Simulator
This section creates Monte Carlo Trajectories in a wide range. Notice that the BondCoupon maturities have to be
inside the Monte Carlo simulation range [trim_start,trim_end]
Sigma has been artificially increased (OIS has smaller sigma) to allow for visualization of distinct trajectories.
# SDE parameters - Vasicek SDE
# dr(t) = k(θ − r(t))dt + σdW(t)
self.kappa = x[0]
self.theta = x[1]
self.sigma = x[2]
self.r0 = x[3]
myVasicek = MC_Vasicek_Sim()
xOIS = [ 3.0, 0.07536509, -0.208477, 0.07536509]
myVasicek.setVasicek(x=xOIS,minDay=trim_start,maxDay=trim_end,simNumber=simNumber,t_step=1/365.0)
myVasicek.getLibor()
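MC_Vasicek_Sim is part of this project's own codebase, so its internals are not reproduced here. Purely as an illustration of the SDE above, a minimal standalone Euler-Maruyama discretisation of dr(t) = k(θ − r(t))dt + σdW(t) could look like the sketch below (all names are local to the sketch, and sigma is picked arbitrarily for readability rather than taken from xOIS):
# Illustrative sketch only - not the project's MC_Vasicek_Sim implementation
kappa_s, theta_s, sigma_s, r0_s = 3.0, 0.07536509, 0.02, 0.07536509
dt = 1.0 / 365.0
n_steps, n_paths = 365, 10
rates = np.zeros((n_steps + 1, n_paths))
rates[0, :] = r0_s
for step in range(1, n_steps + 1):
    dW = np.sqrt(dt) * nprnd.standard_normal(n_paths)
    rates[step, :] = rates[step - 1, :] + kappa_s * (theta_s - rates[step - 1, :]) * dt + sigma_s * dW
Each column of rates is then one short-rate trajectory, which is conceptually what getLibor() turns into discount factors.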
Create Coupon Bond with several startDates.
SixMonthDelay = myScheduler.extractDelay("6M")
TwoYearsDelay = myScheduler.extractDelay("2Y")
startDates = [referenceDate + nprnd.randint(0,3)*SixMonthDelay for r in range(10)]
For debugging uncomment this to choose a single date for the forward bond
print(startDates)
startDates = [date(2005,3,10)] # or
startDates = [date(2005,3,10) + SixMonthDelay]
maturities = [(x+TwoYearsDelay) for x in startDates]
You can change the coupon and see its effect on the Exposure Profile. The breakevenRate is calculated, for simplicity, always at referenceDate=self.start, that is, at the first day of the CouponBond life.
Below is a way to create a random long/short bond portfolio of any size. The notional only affects the product class at the last stage of calculation. In my case, the only parameters affected are Exposure (PV on referenceDate) and pvAvg (average PV on referenceDate).
myPortfolio = {}
coupon = 0.07536509
for i in range(len(startDates)):
notional=(-1.0)**i
myPortfolio[i] = CouponBond(fee=1.0,start=startDates[i],coupon=coupon,notional=notional,
maturity= maturities[i], freq="3M", referencedate=referenceDate)
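With notional = (-1.0)**i the positions alternate long and short bond by bond, so for an even number of bonds the aggregate notional nets to zero; a tiny check of that pattern (local to this note):
# Quick look at the alternating long/short notionals
notionals = [(-1.0) ** i for i in range(len(startDates))]
print(notionals, "net:", sum(notionals))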
End of explanation
portfolioScheduleOfCF = set(ReferenceDateList)
for i in range(len(myPortfolio)):
    portfolioScheduleOfCF = portfolioScheduleOfCF.union(myPortfolio[i].getScheduleComplete()[0])
portfolioScheduleOfCF = sorted(portfolioScheduleOfCF.union(ReferenceDateList))
OIS = myVasicek.getSmallLibor(datelist=portfolioScheduleOfCF)
# at this point OIS contains all dates for which the discount curve should be known.
# If the OIS doesn't contain a required date, it would not be able to discount the cashflows and the calculation would fail.
print(OIS)
pvs={}
for t in portfolioScheduleOfCF:
pvs[t] = np.zeros([1,simNumber])
for i in range(len(myPortfolio)):
myPortfolio[i].setLibor(OIS)
pvs[t] = pvs[t] + myPortfolio[i].getExposure(referencedate=t).values
#print(portfolioScheduleOfCF)
#print(pvs)
pvsPlot = pd.DataFrame.from_dict(list(pvs.items()))
pvsPlot.index= list(pvs.keys())
pvs1={}
for i,t in zip(pvsPlot.values,pvsPlot.index):
pvs1[t]=i[1][0]
pvs = pd.DataFrame.from_dict(data=pvs1,orient="index")
ax=pvs.plot(legend=False)
ax.set_xlabel("Year")
ax.set_ylabel("Coupon Bond Exposure")
Explanation: Create Libor and portfolioScheduleOfCF. This datelist contains all dates
to be used in any calculation of the portfolio positions.
The BondCoupon class has to have a method getScheduleComplete, which returns
fullSet on [0] and datelist on [1], calculated by BondCoupon as:
def getScheduleComplete(self):
self.datelist=self.myScheduler.getSchedule(start=self.start,end=self.maturity,freq=self.freq,referencedate=self.referencedate)
self.ntimes = len(self.datelist)
fullset = sorted(set(self.datelist)
.union([self.referencedate])
.union([self.start])
.union([self.maturity])
)
return fullset,self.datelist
portfolioScheduleOfCF is the union of all fullsets. It defines the set of all dates for which Libor should be known.
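The union/sort pattern itself is plain Python on date sets; as a toy illustration with ordinary dates (names local to this note):
# Toy example of merging two date schedules: duplicates collapse and the result is chronological
d1 = {date(2005, 5, 10), date(2005, 8, 10)}
d2 = {date(2005, 8, 10), date(2005, 11, 10)}
print(sorted(d1.union(d2)))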
End of explanation |
7,099 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced: Parameter Units
Step1: Units
Each FloatParameter or FloatArrayParameter has an associated unit. Let's look at the 'sma' Parameter for the binary orbit.
Step2: From the representation above, we can already see that the units are in solar radii. We can access the units directly via get_default_unit.
Step3: Calling get_value returns only the float of the value in these units.
Step4: Alternatively, you can access an astropy quantity object that contains the value and unit by calling get_quantity.
Step5: Both get_value and get_quantity also accept a unit argument which will return the value or quantity in the requested units (if able to convert). This unit argument takes either a unit object (we imported a forked version of astropy units from within PHOEBE) or a string representation that can be parsed.
Step6: Similarly when setting the value, you can provide either a Quantity object or a value and unit. These will still be stored within PHOEBE according to the default_unit of the Parameter object.
Step7: If for some reason you want to change the default units, you can, but just be careful that this could cause some floating-point precision issues. | Python Code:
#!pip install -I "phoebe>=2.4,<2.5"
import phoebe
from phoebe import u,c
logger = phoebe.logger(clevel='WARNING')
b = phoebe.default_binary()
Explanation: Advanced: Parameter Units
In this tutorial we will learn about how units are handled in the frontend and how to translate between different units.
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
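A quick way to confirm which version actually got imported (standard Python, assuming PHOEBE exposes the usual __version__ attribute):
print(phoebe.__version__)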
End of explanation
b.get_parameter(qualifier='sma', component='binary', context='component')
Explanation: Units
Each FloatParameter or FloatArrayParameter has an associated unit. Let's look at the 'sma' Parameter for the binary orbit.
End of explanation
b.get_parameter(qualifier='sma', component='binary', context='component').get_default_unit()
Explanation: From the representation above, we can already see that the units are in solar radii. We can access the units directly via get_default_unit.
End of explanation
b.get_parameter(qualifier='sma', component='binary', context='component').get_value()
Explanation: Calling get_value returns only the float of the value in these units.
End of explanation
b.get_parameter(qualifier='sma', component='binary', context='component').get_quantity()
Explanation: Alternatively, you can access an astropy quantity object that contains the value and unit by calling get_quantity.
End of explanation
b.get_parameter(qualifier='sma', component='binary', context='component').get_value(unit=u.km)
b.get_parameter(qualifier='sma', component='binary', context='component').get_quantity(unit='km')
Explanation: Both get_value and get_quantity also accept a unit argument which will return the value or quantity in the requested units (if able to convert). This unit argument takes either a unit object (we imported a forked version of astropy units from within PHOEBE) or a string representation that can be parsed.
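Since get_quantity returns an astropy-style quantity, the same conversion can also be done manually on the returned object; the two lines below should give the same kilometre value as the call above (a small aside, not part of the original tutorial):
q = b.get_parameter(qualifier='sma', component='binary', context='component').get_quantity()
print(q.to(u.km))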
End of explanation
b.get_parameter(qualifier='sma', component='binary', context='component').set_value(3800000*u.km)
b.get_parameter(qualifier='sma', component='binary', context='component').get_quantity()
b.get_parameter(qualifier='sma', component='binary', context='component').set_value(3900000, unit='km')
b.get_parameter(qualifier='sma', component='binary', context='component').get_quantity()
Explanation: Similarly when setting the value, you can provide either a Quantity object or a value and unit. These will still be stored within PHOEBE according to the default_unit of the Parameter object.
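Note that these calls change the stored value but not the default unit; a quick check with the same API used earlier (the value is converted back to solar radii internally):
b.get_parameter(qualifier='sma', component='binary', context='component').get_default_unit()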
End of explanation
b.get_parameter(qualifier='sma', component='binary', context='component').set_default_unit('mm')
b.get_parameter(qualifier='sma', component='binary', context='component').get_quantity()
b.get_parameter(qualifier='sma', component='binary', context='component').get_quantity(unit='solRad')
Explanation: If for some reason you want to change the default units, you can, but just be careful that this could cause some floating-point precision issues.
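If you do change it and want to go back, the same call accepts the original unit again (a sketch using only the API shown above); just be aware of the precision caveat mentioned here:
b.get_parameter(qualifier='sma', component='binary', context='component').set_default_unit('solRad')
b.get_parameter(qualifier='sma', component='binary', context='component').get_quantity()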
End of explanation |