| Unnamed: 0 (int64, 0–16k) | text_prompt (string, lengths 110–62.1k) | code_prompt (string, lengths 37–152k) |
---|---|---|
1,200 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright (c) 2015, 2016 Sebastian Raschka
https
Step1: The use of watermark is optional. You can install this IPython extension via "pip install watermark". For more information, please see
Step2: Obtaining the IMDb movie review dataset
The IMDB movie review set can be downloaded from http
Step3: Shuffling the DataFrame
Step4: Optional
Step5: <hr>
Note
If you have problems with creating the movie_data.csv file in the previous chapter, you can download a zip archive at
https
Step6: Now let us print the contents of the vocabulary to get a better understanding of the underlying concepts
Step7: As we can see from executing the preceding command, the vocabulary is stored in a Python dictionary, which maps the unique words to integer indices. Next, let us print the feature vectors that we just created
Step8: <br>
Assessing word relevancy via term frequency-inverse document frequency
Step9: When we are analyzing text data, we often encounter words that occur across multiple documents from both classes. Those frequently occurring words typically don't contain useful or discriminatory information. In this subsection, we will learn about a useful technique called term frequency-inverse document frequency (tf-idf) that can be used to downweight those frequently occurring words in the feature vectors. The tf-idf can be defined as the product of the term frequency and the inverse document frequency
Step10: As we saw in the previous subsection, the word is had the largest term frequency in the 3rd document, being the most frequently occurring word. However, after transforming the same feature vector into tf-idfs, we see that the word is is
now associated with a relatively small tf-idf (0.45) in document 3 since it is
also contained in documents 1 and 2 and thus is unlikely to contain any useful, discriminatory information.
However, if we'd manually calculated the tf-idfs of the individual terms in our feature vectors, we'd have noticed that the TfidfTransformer calculates the tf-idfs slightly differently compared to the standard textbook equations that we defined earlier. The equations for the idf and tf-idf that were implemented in scikit-learn are
Step11: If we repeated these calculations for all terms in the 3rd document, we'd obtain the following tf-idf vectors
Step12: <br>
Cleaning text data
Step13: <br>
Processing documents into tokens
Step14: <br>
<br>
Training a logistic regression model for document classification
Strip HTML and punctuation to speed up the GridSearch later
Step15: <hr>
<hr>
Start comment
Step16: By executing the code above, we created a simple data set of random integers that shall represent our class labels. Next, we fed the indices of 5 cross-validation folds (cv5_idx) to the cross_val_score scorer, which returned 5 accuracy scores -- these are the 5 accuracy values for the 5 test folds.
Next, let us use the GridSearchCV object and feed it the same 5 cross-validation sets (via the pre-generated cv5_idx indices)
Step17: As we can see, the scores for the 5 folds are exactly the same as the ones from cross_val_score earlier.
Now, the best_score_ attribute of the GridSearchCV object, which becomes available after fitting, returns the average accuracy score of the best model
Step18: As we can see, the result above is consistent with the average score computed by cross_val_score.
Step19: End comment.
<hr>
<hr>
<br>
<br>
Working with bigger data - online algorithms and out-of-core learning | Python Code:
%load_ext watermark
%watermark -a 'Sebastian Raschka' -u -d -v -p numpy,pandas,matplotlib,sklearn,nltk
Explanation: Copyright (c) 2015, 2016 Sebastian Raschka
https://github.com/rasbt/python-machine-learning-book
MIT License
Python Machine Learning - Code Examples
Chapter 8 - Applying Machine Learning To Sentiment Analysis
Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
End of explanation
# Added version check for recent scikit-learn 0.18 checks
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
Explanation: The use of watermark is optional. You can install this IPython extension via "pip install watermark". For more information, please see: https://github.com/rasbt/watermark.
<br>
<br>
Overview
Obtaining the IMDb movie review dataset
Introducing the bag-of-words model
Transforming words into feature vectors
Assessing word relevancy via term frequency-inverse document frequency
Cleaning text data
Processing documents into tokens
Training a logistic regression model for document classification
Working with bigger data – online algorithms and out-of-core learning
Summary
<br>
<br>
End of explanation
import pyprind
import pandas as pd
import os
# change the `basepath` to the directory of the
# unzipped movie dataset
#basepath = '/Users/Sebastian/Desktop/aclImdb/'
basepath = './aclImdb'
labels = {'pos': 1, 'neg': 0}
pbar = pyprind.ProgBar(50000)
df = pd.DataFrame()
for s in ('test', 'train'):
for l in ('pos', 'neg'):
path = os.path.join(basepath, s, l)
for file in os.listdir(path):
with open(os.path.join(path, file), 'r', encoding='utf-8') as infile:
txt = infile.read()
df = df.append([[txt, labels[l]]], ignore_index=True)
pbar.update()
df.columns = ['review', 'sentiment']
Explanation: Obtaining the IMDb movie review dataset
The IMDB movie review set can be downloaded from http://ai.stanford.edu/~amaas/data/sentiment/.
After downloading the dataset, decompress the files.
A) If you are working with Linux or MacOS X, open a new terminal window, cd into the download directory, and execute
tar -zxf aclImdb_v1.tar.gz
B) If you are working with Windows, download an archiver such as 7Zip to extract the files from the download archive.
Compatibility Note:
I received an email from a reader who was having troubles with reading the movie review texts due to encoding issues. Typically, Python's default encoding is set to 'utf-8', which shouldn't cause troubles when running this IPython notebook. You can simply check the encoding on your machine by firing up a new Python interpreter from the command line terminal and execute
>>> import sys
>>> sys.getdefaultencoding()
If the returned result is not 'utf-8', you probably need to change your Python's encoding to 'utf-8', for example by typing export PYTHONIOENCODING=utf8 in your terminal shell prior to running this IPython notebook. (Note that this is a temporary change, and it needs to be executed in the same shell that you'll use to launch ipython notebook.)
Alternatively, you can replace the lines
with open(os.path.join(path, file), 'r') as infile:
...
pd.read_csv('./movie_data.csv')
...
df.to_csv('./movie_data.csv', index=False)
by
with open(os.path.join(path, file), 'r', encoding='utf-8') as infile:
...
pd.read_csv('./movie_data.csv', encoding='utf-8')
...
df.to_csv('./movie_data.csv', index=False, encoding='utf-8')
in the following cells to achieve the desired effect.
End of explanation
import numpy as np
np.random.seed(0)
df = df.reindex(np.random.permutation(df.index))
Explanation: Shuffling the DataFrame:
End of explanation
df.to_csv('./movie_data.csv', index=False)
import pandas as pd
df = pd.read_csv('./movie_data.csv')
df.head(3)
Explanation: Optional: Saving the assembled data as CSV file:
End of explanation
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
count = CountVectorizer()
docs = np.array([
'The sun is shining',
'The weather is sweet',
'The sun is shining, the weather is sweet, and one and one is two'])
bag = count.fit_transform(docs)
Explanation: <hr>
Note
If you have problems with creating the movie_data.csv file in the previous chapter, you can download a zip archive at
https://github.com/rasbt/python-machine-learning-book/tree/master/code/datasets/movie
<hr>
<br>
<br>
Introducing the bag-of-words model
...
Transforming documents into feature vectors
By calling the fit_transform method on CountVectorizer, we just constructed the vocabulary of the bag-of-words model and transformed the following three sentences into sparse feature vectors:
1. The sun is shining
2. The weather is sweet
3. The sun is shining, the weather is sweet, and one and one is two
End of explanation
print(count.vocabulary_)
Explanation: Now let us print the contents of the vocabulary to get a better understanding of the underlying concepts:
End of explanation
print(bag.toarray())
Explanation: As we can see from executing the preceding command, the vocabulary is stored in a Python dictionary, which maps the unique words to integer indices. Next, let us print the feature vectors that we just created:
Each index position in the feature vectors shown here corresponds to the integer values that are stored as dictionary items in the CountVectorizer vocabulary. For example, the first feature at index position 0 resembles the count of the word and, which only occurs in the last document, and the word is at index position 1 (the 2nd feature in the document vectors) occurs in all three sentences. Those values in the feature vectors are also called the raw term frequencies: tf (t,d)—the number of times a term t occurs in a document d.
End of explanation
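As a quick aside, one way to make the column-to-word mapping explicit is to invert the fitted vocabulary; this is a minimal sketch, assuming count from the cell above:
# Minimal sketch (assumes `count` fitted above): invert the vocabulary so that each
# column index of the count matrix can be labelled with its word, in column order.
inv_vocab = {idx: word for word, idx in count.vocabulary_.items()}
print([inv_vocab[i] for i in range(len(inv_vocab))])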
np.set_printoptions(precision=2)
Explanation: <br>
Assessing word relevancy via term frequency-inverse document frequency
End of explanation
from sklearn.feature_extraction.text import TfidfTransformer
tfidf = TfidfTransformer(use_idf=True, norm='l2', smooth_idf=True)
print(tfidf.fit_transform(count.fit_transform(docs)).toarray())
Explanation: When we are analyzing text data, we often encounter words that occur across multiple documents from both classes. Those frequently occurring words typically don't contain useful or discriminatory information. In this subsection, we will learn about a useful technique called term frequency-inverse document frequency (tf-idf) that can be used to downweight those frequently occurring words in the feature vectors. The tf-idf can be defined as the product of the term frequency and the inverse document frequency:
$$\text{tf-idf}(t,d)=\text{tf (t,d)}\times \text{idf}(t,d)$$
Here the tf(t, d) is the term frequency that we introduced in the previous section,
and the inverse document frequency idf(t, d) can be calculated as:
$$\text{idf}(t,d) = \text{log}\frac{n_d}{1+\text{df}(d, t)},$$
where $n_d$ is the total number of documents, and df(d, t) is the number of documents d that contain the term t. Note that adding the constant 1 to the denominator is optional and serves the purpose of assigning a non-zero value to terms that occur in all training samples; the log is used to ensure that low document frequencies are not given too much weight.
Scikit-learn implements yet another transformer, the TfidfTransformer, that takes the raw term frequencies from CountVectorizer as input and transforms them into tf-idfs:
End of explanation
tf_is = 3
n_docs = 3
idf_is = np.log((n_docs+1) / (3+1))
tfidf_is = tf_is * (idf_is + 1)
print('tf-idf of term "is" = %.2f' % tfidf_is)
Explanation: As we saw in the previous subsection, the word is had the largest term frequency in the 3rd document, being the most frequently occurring word. However, after transforming the same feature vector into tf-idfs, we see that the word is is
now associated with a relatively small tf-idf (0.45) in document 3 since it is
also contained in documents 1 and 2 and thus is unlikely to contain any useful, discriminatory information.
However, if we'd manually calculated the tf-idfs of the individual terms in our feature vectors, we'd have noticed that the TfidfTransformer calculates the tf-idfs slightly differently compared to the standard textbook equations that we defined earlier. The equations for the idf and tf-idf that were implemented in scikit-learn are:
$$\text{idf} (t,d) = log\frac{1 + n_d}{1 + \text{df}(d, t)}$$
The tf-idf equation that was implemented in scikit-learn is as follows:
$$\text{tf-idf}(t,d) = \text{tf}(t,d) \times (\text{idf}(t,d)+1)$$
While it is also more typical to normalize the raw term frequencies before calculating the tf-idfs, the TfidfTransformer normalizes the tf-idfs directly.
By default (norm='l2'), scikit-learn's TfidfTransformer applies the L2-normalization, which returns a vector of length 1 by dividing an un-normalized feature vector v by its L2-norm:
$$v_{\text{norm}} = \frac{v}{||v||_2} = \frac{v}{\sqrt{v_{1}^{2} + v_{2}^{2} + \dots + v_{n}^{2}}} = \frac{v}{\big(\sum_{i=1}^{n} v_{i}^{2}\big)^{\frac{1}{2}}}$$
To make sure that we understand how TfidfTransformer works, let us walk
through an example and calculate the tf-idf of the word is in the 3rd document.
The word is has a term frequency of 3 (tf = 3) in document 3, and the document frequency of this term is 3 since the term is occurs in all three documents (df = 3). Thus, we can calculate the idf as follows:
$$\text{idf}("is", d3) = log \frac{1+3}{1+3} = 0$$
Now in order to calculate the tf-idf, we simply need to add 1 to the inverse document frequency and multiply it by the term frequency:
$$\text{tf-idf}("is",d3)= 3 \times (0+1) = 3$$
End of explanation
tfidf = TfidfTransformer(use_idf=True, norm=None, smooth_idf=True)
raw_tfidf = tfidf.fit_transform(count.fit_transform(docs)).toarray()[-1]
raw_tfidf
l2_tfidf = raw_tfidf / np.sqrt(np.sum(raw_tfidf**2))
l2_tfidf
Explanation: If we repeated these calculations for all terms in the 3rd document, we'd obtain the following tf-idf vectors: [3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0 , 1.69, 1.29]. However, we notice that the values in this feature vector are different from the values that we obtained from the TfidfTransformer that we used previously. The final step that we are missing in this tf-idf calculation is the L2-normalization, which can be applied as follows:
$$\text{tf-idf}_{norm} = \frac{[3.39, 3.0, 3.39, 1.29, 1.29, 1.29, 2.0 , 1.69, 1.29]}{\sqrt{3.39^2 + 3.0^2 + 3.39^2 + 1.29^2 + 1.29^2 + 1.29^2 + 2.0^2 + 1.69^2 + 1.29^2}}$$
$$=[0.5, 0.45, 0.5, 0.19, 0.19, 0.19, 0.3, 0.25, 0.19]$$
$$\Rightarrow \text{tf-idf}_{norm}("is", d3) = 0.45$$
As we can see, the results match those returned by scikit-learn's TfidfTransformer. Since we now understand how tf-idfs are calculated, let us proceed to the next sections and apply those concepts to the movie review dataset.
End of explanation
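To double-check this numerically, the manually normalized vector can be compared against the output of an L2-normalizing TfidfTransformer; a small sketch, assuming count, docs, and l2_tfidf from the cells above:
# Sketch: verify the manual L2 normalization against scikit-learn's own output
# (assumes `count`, `docs`, and `l2_tfidf` from the cells above).
tfidf_l2 = TfidfTransformer(use_idf=True, norm='l2', smooth_idf=True)
sklearn_last_doc = tfidf_l2.fit_transform(count.fit_transform(docs)).toarray()[-1]
print(np.allclose(l2_tfidf, sklearn_last_doc))  # expected: True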
df.loc[0, 'review'][-50:]
import re
def preprocessor(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text)
text = re.sub('[\W]+', ' ', text.lower()) +\
' '.join(emoticons).replace('-', '')
return text
preprocessor(df.loc[0, 'review'][-50:])
preprocessor("</a>This :) is :( a test :-)!")
df['review'] = df['review'].apply(preprocessor)
Explanation: <br>
Cleaning text data
End of explanation
from nltk.stem.porter import PorterStemmer
porter = PorterStemmer()
def tokenizer(text):
return text.split()
def tokenizer_porter(text):
return [porter.stem(word) for word in text.split()]
tokenizer('runners like running and thus they run')
tokenizer_porter('runners like running and thus they run')
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
stop = stopwords.words('english')
[w for w in tokenizer_porter('a runner likes running and runs a lot')[-10:]
if w not in stop]
Explanation: <br>
Processing documents into tokens
End of explanation
X_train = df.loc[:25000, 'review'].values
y_train = df.loc[:25000, 'sentiment'].values
X_test = df.loc[25000:, 'review'].values
y_test = df.loc[25000:, 'sentiment'].values
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import TfidfVectorizer
if Version(sklearn_version) < '0.18':
from sklearn.grid_search import GridSearchCV
else:
from sklearn.model_selection import GridSearchCV
tfidf = TfidfVectorizer(strip_accents=None,
lowercase=False,
preprocessor=None)
param_grid = [{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
{'vect__ngram_range': [(1, 1)],
'vect__stop_words': [stop, None],
'vect__tokenizer': [tokenizer, tokenizer_porter],
'vect__use_idf':[False],
'vect__norm':[None],
'clf__penalty': ['l1', 'l2'],
'clf__C': [1.0, 10.0, 100.0]},
]
lr_tfidf = Pipeline([('vect', tfidf),
('clf', LogisticRegression(random_state=0))])
gs_lr_tfidf = GridSearchCV(lr_tfidf, param_grid,
scoring='accuracy',
cv=5,
verbose=1,
n_jobs=-1)
gs_lr_tfidf.fit(X_train, y_train)
print('Best parameter set: %s ' % gs_lr_tfidf.best_params_)
print('CV Accuracy: %.3f' % gs_lr_tfidf.best_score_)
clf = gs_lr_tfidf.best_estimator_
print('Test Accuracy: %.3f' % clf.score(X_test, y_test))
Explanation: <br>
<br>
Training a logistic regression model for document classification
Strip HTML and punctuation to speed up the GridSearch later:
End of explanation
from sklearn.linear_model import LogisticRegression
import numpy as np
if Version(sklearn_version) < '0.18':
from sklearn.cross_validation import StratifiedKFold
from sklearn.cross_validation import cross_val_score
else:
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import cross_val_score
np.random.seed(0)
np.set_printoptions(precision=6)
y = [np.random.randint(3) for i in range(25)]
X = (y + np.random.randn(25)).reshape(-1, 1)
if Version(sklearn_version) < '0.18':
cv5_idx = list(StratifiedKFold(y, n_folds=5, shuffle=False, random_state=0))
else:
cv5_idx = list(StratifiedKFold(n_splits=5, shuffle=False, random_state=0).split(X, y))
cross_val_score(LogisticRegression(random_state=123), X, y, cv=cv5_idx)
Explanation: <hr>
<hr>
Start comment:
Please note that gs_lr_tfidf.best_score_ is the average k-fold cross-validation score. I.e., if we have a GridSearchCV object with 5-fold cross-validation (like the one above), the best_score_ attribute returns the average score over the 5-folds of the best model. To illustrate this with an example:
End of explanation
if Version(sklearn_version) < '0.18':
from sklearn.grid_search import GridSearchCV
else:
from sklearn.model_selection import GridSearchCV
gs = GridSearchCV(LogisticRegression(), {}, cv=cv5_idx, verbose=3).fit(X, y)
Explanation: By executing the code above, we created a simple data set of random integers that shall represent our class labels. Next, we fed the indices of 5 cross-validation folds (cv5_idx) to the cross_val_score scorer, which returned 5 accuracy scores -- these are the 5 accuracy values for the 5 test folds.
Next, let us use the GridSearchCV object and feed it the same 5 cross-validation sets (via the pre-generated cv5_idx indices):
End of explanation
gs.best_score_
Explanation: As we can see, the scores for the 5 folds are exactly the same as the ones from cross_val_score earlier.
Now, the best_score_ attribute of the GridSearchCV object, which becomes available after fitting, returns the average accuracy score of the best model:
End of explanation
cross_val_score(LogisticRegression(), X, y, cv=cv5_idx).mean()
Explanation: As we can see, the result above is consistent with the average score computed by cross_val_score.
End of explanation
import numpy as np
import re
from nltk.corpus import stopwords
def tokenizer(text):
text = re.sub('<[^>]*>', '', text)
emoticons = re.findall('(?::|;|=)(?:-)?(?:\)|\(|D|P)', text.lower())
text = re.sub('[\W]+', ' ', text.lower()) +\
' '.join(emoticons).replace('-', '')
tokenized = [w for w in text.split() if w not in stop]
return tokenized
def stream_docs(path):
with open(path, 'r', encoding='utf-8') as csv:
next(csv) # skip header
for line in csv:
text, label = line[:-3], int(line[-2])
yield text, label
next(stream_docs(path='./movie_data.csv'))
def get_minibatch(doc_stream, size):
docs, y = [], []
try:
for _ in range(size):
text, label = next(doc_stream)
docs.append(text)
y.append(label)
except StopIteration:
return None, None
return docs, y
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier
vect = HashingVectorizer(decode_error='ignore',
n_features=2**21,
preprocessor=None,
tokenizer=tokenizer)
clf = SGDClassifier(loss='log', random_state=1, n_iter=1)
doc_stream = stream_docs(path='./movie_data.csv')
import pyprind
pbar = pyprind.ProgBar(45)
classes = np.array([0, 1])
for _ in range(45):
X_train, y_train = get_minibatch(doc_stream, size=1000)
if not X_train:
break
X_train = vect.transform(X_train)
clf.partial_fit(X_train, y_train, classes=classes)
pbar.update()
X_test, y_test = get_minibatch(doc_stream, size=5000)
X_test = vect.transform(X_test)
print('Accuracy: %.3f' % clf.score(X_test, y_test))
clf = clf.partial_fit(X_test, y_test)
Explanation: End comment.
<hr>
<hr>
<br>
<br>
Working with bigger data - online algorithms and out-of-core learning
End of explanation |
1,201 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression with scikit-learn and statsmodels
This notebook demonstrates how to conduct a valid regression analysis using a combination of the sklearn and statsmodels libraries. While sklearn is popular and powerful from an operational point of view, it does not provide the detailed metrics required to statistically analyze your model, evaluate the importance of predictors, and build or simplify your model.
We use other libraries like statsmodels or scipy.stats to bridge this gap.
ToC
- Scikit-learn
- Predicting housing prices without data normalization
- Exploratory data analysis
- Data cleaning
- Train test split
- Multiple regression
- Accuracy assessment
- Predicting housing prices with data normalization and statsmodels
- Scale the housing data to Std. normal distribution
- Evaluate model parameters using statsmodels
- Evaluate model using charts
- Inverse transform the scaled data and calculate RMSE
- Conclusion
Scikit-learn
Scikit-learn is one of the science kits for SciPy stack. Scikit has a collection of prediction and learning algorithms, grouped into
- classification
- clustering
- regression
- dimensionality reduction
Each algorithm follows a typical pattern with a fit, predict method. In addition you get a set of utility methods that help with splitting datasets into train-test sets and for validating the outputs.
Step1: Predicting housing prices without data normalization
Exploratory data analysis (EDA)
Step2: Find the correlation between each of the numerical columns to the house price
Step3: From this chart, we know:
- the distribution of house price is normal (last chart)
- some scatter plots show a higher correlation, while some others show no correlation.
Step4: Data cleaning
Throw out the text column and split the data into predictor and predicted variables
Step5: Train test split
Step6: Multiple regression
We use a number of numerical columns to regress the house price. Each column's influence will vary; in real life, for example, the number of bedrooms might not be as influential as population density. We can determine the influence from the correlation shown in the heatmap above
Step7: Fit
Step8: Create a table showing the coefficient (influence) of each of the columns
Step9: Note that the coefficients for house age and number of rooms are pretty large. However, that does not really mean they are more influential than income; it is simply because our dataset has not been normalized and the data ranges for these columns vary widely.
Predict
Step10: Accuracy assessment / Model validation
Step11: Distribution of residuals
Step12: Quantifying errors
Step13: RMSE
Step14: Combine the predicted values with input
Step15: Predicting housing prices with data normalization and statsmodels
As seen earlier, even though sklearn will perform regression, it is hard to compare which of the predictor variables are influential in determining the house price. To answer this better, let us standardize our data to Std. Normal distribution using sklearn preprocessing.
Scale the housing data to Std. Normal distribution
We use StandardScaler from sklearn.preprocessing to normalize each predictor to mean 0 and unit variance. What we end up with is z-score for each record.
$$
z-score = \frac{x_{i} - \mu}{\sigma}
$$
Step16: Train test split
Step17: Train the model
Step18: From the table above, we notice Avg Income has more influence on the Price than other variables. This was not apparent before scaling the data. Further this corroborates the correlation matrix produced during exploratory data analysis.
Evaluate model parameters using statsmodels
statsmodels is a different Python library built for and by statisticians. Thus it provides a lot more information on your model than sklearn. We use it here to refit against the data and evaluate the strength of fit.
Step19: The regression coefficients are identical between the sklearn and statsmodels libraries. The $R^{2}$ of 0.919 is as high as it gets. This indicates the predicted (train) Price varies similarly to the actual values. Another measure of model health is the S (std. error) and the p-value of each coefficient. The S of Avg. Number of Bedrooms is as low as that of the other predictors; however, it has a high p-value, indicating low confidence in its estimated coefficient.
The same applies to the p-value of the intercept.
Predict for unknown values
Step20: Evaluate model using charts
In addition to the numerical metrics used above, we need to look at the distribution of residuals to evaluate whether the model assumptions hold.
Step21: From the charts above,
- Fitted vs predicted chart shows a strong correlation between the predictions and actual values
- Fitted vs Residuals chart shows an even distribution around the 0 mean line. There are no patterns evident, which means our model does not leak any systematic phenomena into the residuals (errors)
- Quantile-Quantile plot of residuals vs std. normal and the histogram of residual plots show a sufficiently normal distribution of residuals.
Thus all assumptions hold good.
Inverse Transform the scaled data and calculate RMSE
Step22: Calculate RMSE
RMSE is the root mean squared error. This is useful as it tells you, in terms of the dependent variable, what the mean error in prediction is. | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
Explanation: Regression with scikit-learn and statsmodels
This notebook demonstrates how to conduct a valid regression analysis using a combination of the sklearn and statsmodels libraries. While sklearn is popular and powerful from an operational point of view, it does not provide the detailed metrics required to statistically analyze your model, evaluate the importance of predictors, and build or simplify your model.
We use other libraries like statsmodels or scipy.stats to bridge this gap.
ToC
- Scikit-learn
- Predicting housing prices without data normalization
- Exploratory data analysis
- Data cleaning
- Train test split
- Multiple regression
- Accuracy assessment
- Predicting housing prices with data normalization and statsmodels
- Scale the housing data to Std. normal distribution
- Evaluate model parameters using statsmodels
- Evaluate model using charts
- Inverse transform the scaled data and calculate RMSE
- Conclusion
Scikit-learn
Scikit-learn is one of the science kits for SciPy stack. Scikit has a collection of prediction and learning algorithms, grouped into
- classification
- clustering
- regression
- dimensionality reduction
Each algorithm follows a typical pattern with a fit, predict method. In addition you get a set of utility methods that help with splitting datasets into train-test sets and for validating the outputs.
End of explanation
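As a compact illustration of that fit/predict pattern before turning to the housing data, here is a self-contained sketch on synthetic data (the arrays and model below are placeholders, not part of the housing analysis):
# Self-contained sketch of the sklearn fit/predict pattern on synthetic data (illustration only).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
X_demo = np.random.rand(100, 3)
y_demo = X_demo.dot(np.array([1.0, 2.0, 3.0])) + 0.1 * np.random.randn(100)
X_tr, X_te, y_tr, y_te = train_test_split(X_demo, y_demo, test_size=0.33)
model = LinearRegression().fit(X_tr, y_tr)   # fit: learn coefficients from the training split
y_hat = model.predict(X_te)                  # predict: apply the fitted model to unseen data
print(model.score(X_te, y_te))               # utility method: R^2 on the test split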
usa_house = pd.read_csv('../udemy_ml_bootcamp/Machine Learning Sections/Linear-Regression/USA_housing.csv')
usa_house.head(5)
usa_house.info()
usa_house.describe()
Explanation: Predicting housing prices without data normalization
Exploratory data analysis (EDA)
End of explanation
sns.pairplot(usa_house)
Explanation: Find the correlation between each of the numerical columns to the house price
End of explanation
fig, axs = plt.subplots(1,2, figsize=[15,5])
sns.distplot(usa_house['Price'], ax=axs[0])
sns.heatmap(usa_house.corr(), ax=axs[1], annot=True)
fig.tight_layout()
Explanation: From this chart, we know:
- the distribution of house price is normal (last chart)
- some scatter plots show a higher correlation, while some others show no correlation.
End of explanation
usa_house.columns
X = usa_house[['Avg Income', 'Avg House Age', 'Avg. Number of Rooms',
'Avg. Number of Bedrooms', 'Area Population']]
y = usa_house[['Price']]
Explanation: Data cleaning
Throw out the text column and split the data into predictor and predicted variables
End of explanation
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33)
len(X_train)
len(X_test)
X_test.head()
Explanation: Train test split
End of explanation
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
Explanation: Multiple regression
We use a number of numerical columns to regress the house price. Each column's influence will vary; in real life, for example, the number of bedrooms might not be as influential as population density. We can determine the influence from the correlation shown in the heatmap above
End of explanation
lm.fit(X_train, y_train) #no need to capture the return. All is stored in lm
Explanation: Fit
End of explanation
cdf = pd.DataFrame(lm.coef_[0], index=X_train.columns, columns=['coefficients'])
cdf
Explanation: Create a table showing the coefficient (influence) of each of the columns
End of explanation
y_predicted = lm.predict(X_test)
len(y_predicted)
Explanation: Note that the coefficients for house age and number of rooms are pretty large. However, that does not really mean they are more influential than income; it is simply because our dataset has not been normalized and the data ranges for these columns vary widely.
Predict
End of explanation
plt.scatter(y_test, y_predicted) #actual vs predicted
Explanation: Accuracy assessment / Model validation
End of explanation
sns.distplot((y_test - y_predicted))
Explanation: Distribution of residuals
End of explanation
from sklearn import metrics
metrics.mean_absolute_error(y_test, y_predicted)
Explanation: Quantifying errors
End of explanation
import numpy
numpy.sqrt(metrics.mean_squared_error(y_test, y_predicted))
Explanation: RMSE
End of explanation
X_test['predicted_price'] = y_predicted
X_test.head()
Explanation: Combine the predicted values with input
End of explanation
from sklearn.preprocessing import StandardScaler
s_scaler = StandardScaler()
# get all columns except 'Address' which is non numeric
usa_house.columns[:-1]
usa_house_scaled = s_scaler.fit_transform(usa_house[usa_house.columns[:-1]])
usa_house_scaled = pd.DataFrame(usa_house_scaled, columns=usa_house.columns[:-1])
usa_house_scaled.head()
usa_house_scaled.describe().round(3) # round the numbers for display
Explanation: Predicting housing prices with data normalization and statsmodels
As seen earlier, even though sklearn will perform regression, it is hard to compare which of the predictor variables are influential in determining the house price. To answer this better, let us standardize our data to Std. Normal distribution using sklearn preprocessing.
Scale the housing data to Std. Normal distribution
We use StandardScaler from sklearn.preprocessing to normalize each predictor to mean 0 and unit variance. What we end up with is z-score for each record.
$$
z-score = \frac{x_{i} - \mu}{\sigma}
$$
End of explanation
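To see that the scaler really implements this z-score, it can be compared against a manual calculation on a single column; a small sketch, assuming usa_house, usa_house_scaled, and the numpy import from the cells above (StandardScaler uses the population standard deviation, i.e. ddof=0):
# Sketch: manual z-score vs StandardScaler output for one column (assumes objects created above).
col = 'Price'
manual_z = (usa_house[col] - usa_house[col].mean()) / usa_house[col].std(ddof=0)
print(numpy.allclose(manual_z.values, usa_house_scaled[col].values))  # expected: True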
X_scaled = usa_house_scaled[['Avg Income', 'Avg House Age', 'Avg. Number of Rooms',
'Avg. Number of Bedrooms', 'Area Population']]
y_scaled = usa_house_scaled[['Price']]
Xscaled_train, Xscaled_test, yscaled_train, yscaled_test = \
train_test_split(X_scaled, y_scaled, test_size=0.33)
Explanation: Train test split
End of explanation
lm_scaled = LinearRegression()
lm_scaled.fit(Xscaled_train, yscaled_train)
cdf_scaled = pd.DataFrame(lm_scaled.coef_[0],
index=Xscaled_train.columns, columns=['coefficients'])
cdf_scaled
lm_scaled.intercept_
Explanation: Train the model
End of explanation
import statsmodels.api as sm
import statsmodels
from statsmodels.regression import linear_model
yscaled_train.shape
Xscaled_train = sm.add_constant(Xscaled_train)
sm_ols = linear_model.OLS(yscaled_train, Xscaled_train) # i know, the param order is inverse
sm_model = sm_ols.fit()
sm_model.summary()
Explanation: From the table above, we notice Avg Income has more influence on the Price than other variables. This was not apparent before scaling the data. Further this corroborates the correlation matrix produced during exploratory data analysis.
Evaluate model parameters using statsmodels
statsmodels is a different Python library built for and by statisticians. Thus it provides a lot more information on your model than sklearn. We use it here to refit against the data and evaluate the strength of fit.
End of explanation
yscaled_predicted = lm_scaled.predict(Xscaled_test)
residuals_scaled = yscaled_test - yscaled_predicted
Explanation: The regression coefficients are identical between the sklearn and statsmodels libraries. The $R^{2}$ of 0.919 is as high as it gets. This indicates the predicted (train) Price varies similarly to the actual values. Another measure of model health is the S (std. error) and the p-value of each coefficient. The S of Avg. Number of Bedrooms is as low as that of the other predictors; however, it has a high p-value, indicating low confidence in its estimated coefficient.
The same applies to the p-value of the intercept.
Predict for unknown values
End of explanation
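The quantities discussed above can also be pulled out of the fitted results object programmatically instead of reading them off the summary table; a small sketch, assuming sm_model from the cell above:
# Sketch: extract fit diagnostics from the statsmodels results object (assumes `sm_model` from above).
print(sm_model.rsquared)   # R-squared of the fit
print(sm_model.params)     # estimated coefficients (including the added constant)
print(sm_model.bse)        # standard errors of the coefficients
print(sm_model.pvalues)    # p-values for each coefficient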
fig, axs = plt.subplots(2,2, figsize=(10,10))
# plt.tight_layout()
plt1 = axs[0][0].scatter(x=yscaled_test, y=yscaled_predicted)
axs[0][0].set_title('Fitted vs Predicted')
axs[0][0].set_xlabel('Price - test')
axs[0][0].set_ylabel('Price - predicted')
plt2 = axs[0][1].scatter(x=yscaled_test, y=residuals_scaled)
axs[0][1].hlines(0, xmin=-3, xmax=3)
axs[0][1].set_title('Fitted vs Residuals')
axs[0][1].set_xlabel('Price - test (fitted)')
axs[0][1].set_ylabel('Residuals')
from numpy import random
axs[1][0].scatter(x=sorted(random.randn(len(residuals_scaled))),
y=sorted(residuals_scaled['Price']))
axs[1][0].set_title('QQ plot of Residuals')
axs[1][0].set_xlabel('Std. normal z scores')
axs[1][0].set_ylabel('Residuals')
sns.distplot(residuals_scaled, ax=axs[1][1])
axs[1][1].set_title('Histogram of residuals')
plt.tight_layout()
Explanation: Evaluate model using charts
In addition to the numerical metrics used above, we need to look at the distribution of residuals to evaluate whether the model assumptions hold.
End of explanation
Xscaled_train.columns
usa_house_fitted = Xscaled_test[Xscaled_test.columns[0:]]
usa_house_fitted['Price'] = yscaled_test
usa_house_fitted.head()
usa_house_fitted_inv = s_scaler.inverse_transform(usa_house_fitted)
usa_house_fitted_inv = pd.DataFrame(usa_house_fitted_inv,
columns=usa_house_fitted.columns)
usa_house_fitted_inv.head().round(3)
yinv_predicted = (yscaled_predicted * s_scaler.scale_[-1]) + s_scaler.mean_[-1]
yinv_predicted.shape
usa_house_fitted_inv['Price predicted'] = yinv_predicted
usa_house_fitted_inv.head().round(3)
Explanation: From the charts above,
- Fitted vs predicted chart shows a strong correlation between the predictions and actual values
- Fitted vs Residuals chart shows an even distribution around the 0 mean line. There are no patterns evident, which means our model does not leak any systematic phenomena into the residuals (errors)
- Quantile-Quantile plot of residuals vs std. normal and the histogram of residual plots show a sufficiently normal distribution of residuals.
Thus all assumptions hold good.
Inverse Transform the scaled data and calculate RMSE
End of explanation
mse_scaled = metrics.mean_squared_error(usa_house_fitted_inv['Price'],
usa_house_fitted_inv['Price predicted'])
numpy.sqrt(mse_scaled)
mae_scaled = metrics.mean_absolute_error(usa_house_fitted_inv['Price'],
usa_house_fitted_inv['Price predicted'])
mae_scaled
Explanation: Calculate RMSE
RMSE is the root mean squared error. This is useful as it tells you, in terms of the dependent variable, what the mean error in prediction is.
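For instance, the same number can be reproduced directly from its definition, the square root of the mean squared residual; a small sketch, assuming the usa_house_fitted_inv frame built above:
# Sketch: RMSE straight from its definition (assumes `usa_house_fitted_inv` from the cells above).
diff = usa_house_fitted_inv['Price'] - usa_house_fitted_inv['Price predicted']
print(numpy.sqrt((diff ** 2).mean()))  # should match the metrics-based value above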
End of explanation |
1,202 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
DATASCI W261
Step1: Data Visualization
Step2: MrJob class code
The solution of linear model $$ \textbf{Y} = \textbf{X}\theta $$ is
Step3: Driver
Step4: Gradient descent - doesn't work | Python Code:
%matplotlib inline
import numpy as np
import pylab
size = 1000
x = np.random.uniform(-40, 40, size)
y = x * 1.0 - 4 + np.random.normal(0,5,size)
data = zip(range(size),y,x)
#data = np.concatenate((y, x), axis=1)
np.savetxt('LinearRegression.csv',data,'%i,%f,%f')
data[:10]
Explanation: DATASCI W261: Machine Learning at Scale
Version 1: One MapReduce Stage (join data at the first reducer)
Data Generation
Data Information:
+ Sizes: 1000 points
+ True model: y = 1.0 * x - 4
+ Noise: normally distributed, mean = 0, var = 5
End of explanation
pylab.plot(x, y,'*')
pylab.show()
Explanation: Data Visualization
End of explanation
%%writefile linearRegressionXSquare.py
#Version 1: One MapReduce Stage (join data at the first reducer)
from mrjob.job import MRJob
class MRMatrixX2(MRJob):
# Emit all the data needed to calculate cell i,j in the result matrix
def mapper(self, _, line):
v = line.split(',')
# add 1s to calculate intercept
v.append('1.0')
for i in range(len(v)-2):
for j in range(len(v)-2):
yield (j,i),(int(v[0]),float(v[i+2]))
yield (i,j),(int(v[0]),float(v[i+2]))
# Sum up the product for cell i,j
def reducer(self, key, values):
idxdict = {}
s = 0.0
preidx = -1
preval = 0
f = []
for idx, value in values:
if str(idx) in idxdict:
s = s + value * idxdict[str(idx)]
else:
idxdict[str(idx)] = value
yield key,s
if __name__ == '__main__':
MRMatrixX2.run()
%%writefile linearRegressionXy.py
from mrjob.job import MRJob
class MRMatrixXY(MRJob):
def mapper(self, _, line):
v = line.split(',')
# product of y*xi
for i in range(len(v)-2):
yield i, float(v[1])*float(v[i+2])
# To calculate Intercept
yield i+1, float(v[1])
# Sum up the products
def reducer(self, key, values):
yield key,sum(values)
if __name__ == '__main__':
MRMatrixXY.run()
Explanation: MrJob class code
The solution of linear model $$ \textbf{Y} = \textbf{X}\theta $$ is:
$$ \hat{\theta} = (\textbf{X}^T\textbf{X})^{-1}\textbf{X}^T\textbf{y} $$
If $\textbf{X}^T\textbf{X}$ is denoted by $A$, and $\textbf{X}^T\textbf{y}$ is denoted by $b$, then
$$ \hat{\theta} = A^{-1}b $$
There are two MrJob classes to calculate intermediate results:
+ linearRegressionXSquare.py calculates $A = \textbf{X}^T\textbf{X}$
+ linearRegressionXy.py calculates $b = \textbf{X}^T\textbf{y}$
End of explanation
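Before distributing the computation, the same normal-equation solution can be sanity-checked locally on the simulated data; a small sketch, assuming x and y from the data-generation cell above, with a column of ones appended for the intercept:
# Local (non-MapReduce) sanity check of theta = (X^T X)^{-1} X^T y on the simulated data
# (assumes `x` and `y` from the data-generation cell above).
import numpy as np
X_local = np.column_stack([x, np.ones_like(x)])
theta_local = np.linalg.solve(X_local.T.dot(X_local), X_local.T.dot(y))
print(theta_local)  # expected to be close to [1.0, -4.0]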
from numpy import linalg,array,empty
from linearRegressionXSquare import MRMatrixX2
from linearRegressionXy import MRMatrixXY
mr_job1 = MRMatrixX2(args=['LinearRegression.csv'])
mr_job2 = MRMatrixXY(args=['LinearRegression.csv'])
X_Square = []
X_Y = []
# Calculate XT*X Covariance Matrix
print "Matrix XT*X:"
with mr_job1.make_runner() as runner:
# Run MrJob MatrixMultiplication Job
runner.run()
# Extract the output, i.e., ship data to the driver; be careful if the data you ship is too big
for line in runner.stream_output():
key,value = mr_job1.parse_output_line(line)
X_Square.append((key,value))
print key, value
print " "
# Calculate XT*Y
print "Vector XT*Y:"
with mr_job2.make_runner() as runner:
runner.run()
for line in runner.stream_output():
key,value = mr_job2.parse_output_line(line)
X_Y.append((key,value))
print key, value
print " "
#Local Processing the output from two MrJob
n = len(X_Y)
if(n*n!=len(X_Square)):
print 'Error!'
else:
XX = empty(shape=[n,n])
for v in X_Square:
XX[v[0][0],v[0][1]] = v[1]
XY = empty(shape=[n,1])
for v in X_Y:
XY[v[0],0] = v[1]
print XX
print
print XY
theta = linalg.solve(XX,XY)
print "Coefficients:",theta[0,0],',',theta[1,0]
Explanation: Driver:
Driver run tow MrJob class to get $\textbf{X}^T\textbf{X}$ and $\textbf{X}^T\textbf{y}$. And it calculate $(\textbf{X}^T\textbf{X})^{-1}$ by numpy.linalg.solve.
End of explanation
%%writefile MrJobBatchGDUpdate_LinearRegression.py
from mrjob.job import MRJob
# This MrJob calculates the gradient of the entire training set
# Mapper: calculate partial gradient for each example
#
class MrJobBatchGDUpdate_LinearRegression(MRJob):
# run before the mapper processes any input
def read_weightsfile(self):
# Read weights file
with open('weights.txt', 'r') as f:
self.weights = [float(v) for v in f.readline().split(',')]
# Initialze gradient for this iteration
self.partial_Gradient = [0]*len(self.weights)
self.partial_count = 0
# Calculate partial gradient for each example
def partial_gradient(self, _, line):
D = (map(float,line.split(',')))
# y_hat is the predicted value given current weights
y_hat = self.weights[0]+self.weights[1]*D[1]
# Update partial gradient vector with the gradient from the current example
self.partial_Gradient = [self.partial_Gradient[0]+ D[0]-y_hat, self.partial_Gradient[1]+(D[0]-y_hat)*D[1]]
self.partial_count = self.partial_count + 1
#yield None, (D[0]-y_hat,(D[0]-y_hat)*D[1],1)
# Finally emit in-memory partial gradient and partial count
def partial_gradient_emit(self):
yield None, (self.partial_Gradient,self.partial_count)
# Accumulate partial gradient from mapper and emit total gradient
# Output: key = None, Value = gradient vector
def gradient_accumulater(self, _, partial_Gradient_Record):
total_gradient = [0]*2
total_count = 0
for partial_Gradient,partial_count in partial_Gradient_Record:
total_count = total_count + partial_count
total_gradient[0] = total_gradient[0] + partial_Gradient[0]
total_gradient[1] = total_gradient[1] + partial_Gradient[1]
yield None, [v/total_count for v in total_gradient]
def steps(self):
return [self.mr(mapper_init=self.read_weightsfile,
mapper=self.partial_gradient,
mapper_final=self.partial_gradient_emit,
reducer=self.gradient_accumulater)]
if __name__ == '__main__':
MrJobBatchGDUpdate_LinearRegression.run()
from numpy import random, array
from MrJobBatchGDUpdate_LinearRegression import MrJobBatchGDUpdate_LinearRegression
learning_rate = 0.05
stop_criteria = 0.000005
# Generate random values as inital weights
weights = array([random.uniform(-3,3),random.uniform(-3,3)])
# Write the weights to the files
with open('weights.txt', 'w+') as f:
f.writelines(','.join(str(j) for j in weights))
# Update centroids iteratively
i = 0
while(1):
# create a mrjob instance for batch gradient descent update over all data
mr_job = MrJobBatchGDUpdate_LinearRegression(args=['--file', 'weights.txt', 'LinearRegression.csv'])
print "iteration ="+str(i)+" weights =",weights
# Save weights from previous iteration
weights_old = weights
with mr_job.make_runner() as runner:
runner.run()
# stream_output: get access of the output
for line in runner.stream_output():
# value is the gradient value
key,value = mr_job.parse_output_line(line)
# Update weights
weights = weights - learning_rate*array(value)
i = i + 1
if i>100: break
# Write the updated weights to file
with open('weights.txt', 'w+') as f:
f.writelines(','.join(str(j) for j in weights))
# Stop if weights get converged
if(sum((weights_old-weights)**2)<stop_criteria):
break
print "Final weights\n"
print weights
Explanation: Gradient descent - doesn't work
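One likely reason this run diverges: the reducer emits the averaged residual terms (y - y_hat) and (y - y_hat)*x, which are the negative of the squared-error gradient, yet the driver still subtracts them via weights - learning_rate*array(value), so each iteration moves up the loss surface rather than down it. A minimal sketch of the corrected driver update (a hypothetical fix, not part of the original code):
# Hypothetical fix: `value` already holds the averaged (y - y_hat) terms, i.e. the negative gradient,
# so the driver should add it instead of subtracting it.
# weights = weights + learning_rate * array(value)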
End of explanation |
1,203 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Run from bootstrap paths
Now we will use the initial trajectories we obtained from bootstrapping to run an MSTIS simulation. This will show both how objects can be regenerated from storage and how regenerated equivalent objects can be used in place of objects that weren't stored.
Tasks covered in this notebook
Step1: Loading things from storage
First we'll reload some of the stuff we stored before. Of course, this starts with opening the file.
Step2: A lot of information can be recovered from the old storage, and so we don't have to recreate it. However, we did not save our network, so we'll have to create a new one. Since the network creates the ensembles, that means we will have to translate the trajectories from the old ensembles to new ensembles.
Step3: Loading from storage is very easy. Each store is a list. We take the 0th snapshot as a template (it doesn't actually matter which one) for the next storage we'll create. There's only one engine stored, so we take the only one.
Step4: initialize engine
If we do not select a platform, the fastest available one will be chosen, but here we explicitly request the one in the config file.
Step5: Running RETIS
Now we run the full calculation. Up to here, we haven't been storing any of our results. This time, we'll start a storage object, and we'll save the network we've created. Then we'll run a new PathSampling calculation object.
Step6: Before we can sample we still need to set the actual MoveScheme which determines the
set of moves to apply to our set of samples, effectively performing the steps in
replica (sampleset) space. We pick the default scheme for mstis and feed it with
the engine to be used.
Step7: and finally generate the PathSampler object to conduct the simulation.
Step8: Now everything is ready | Python Code:
%matplotlib inline
import openpathsampling as paths
import numpy as np
import math
# the openpathsampling OpenMM engine
import openpathsampling.engines.openmm as eng
Explanation: Run from bootstrap paths
Now we will use the initial trajectories we obtained from bootstrapping to run an MSTIS simulation. This will show both how objects can be regenerated from storage and how regenerated equivalent objects can be used in place of objects that weren't stored.
Tasks covered in this notebook:
* Loading OPS objects from storage
* Ways of assigning initial trajectories to initial samples
* Setting up a path sampling simulation with various move schemes
* Visualizing trajectories while the path sampling is running
End of explanation
old_store = paths.AnalysisStorage("ala_mstis_bootstrap.nc")
Explanation: Loading things from storage
First we'll reload some of the stuff we stored before. Of course, this starts with opening the file.
End of explanation
print "PathMovers:", len(old_store.pathmovers)
print "Engines:", len(old_store.engines)
print "Samples:", len(old_store.samples)
print "Trajectories:", len(old_store.trajectories)
print "Ensembles:", len(old_store.ensembles)
print "SampleSets:", len(old_store.samplesets)
print "Snapshots:", len(old_store.snapshots)
print "Networks:", len(old_store.networks)
Explanation: A lot of information can be recovered from the old storage, and so we don't have to recreate it. However, we did not save our network, so we'll have to create a new one. Since the network creates the ensembles, that means we will have to translate the trajectories from the old ensembles to new ensembles.
End of explanation
# template = old_store.snapshots[0]
engine = old_store.engines['default']
mstis = old_store.networks[0]
sset = old_store.tag['sampleset']
Explanation: Loading from storage is very easy. Each store is a list. We take the 0th snapshot as a template (it doesn't actually matter which one) for the next storage we'll create. There's only one engine stored, so we take the only one.
End of explanation
platform = 'CUDA'
engine.initialize(platform)
print 'Engine uses platform `%s`' % engine.platform
sset.sanity_check()
Explanation: initialize engine
If we do not select a platform, the fastest available one will be chosen, but here we explicitly request the one in the config file.
End of explanation
# logging creates ops_output.log file with details of what the calculation is doing
#import logging.config
#logging.config.fileConfig("logging.conf", disable_existing_loggers=False)
storage = paths.storage.Storage("ala_mstis_production.nc", "w")
storage.snapshots.save(old_store.snapshots[0]);
Explanation: Running RETIS
Now we run the full calculation. Up to here, we haven't been storing any of our results. This time, we'll start a storage object, and we'll save the network we've created. Then we'll run a new PathSampling calculation object.
End of explanation
scheme = paths.DefaultScheme(mstis, engine)
Explanation: Before we can sample we still need to set the actual MoveScheme which determines the
set of moves to apply to our set of samples, effectively performing the steps in
replica (sampleset) space. We pick the default scheme for mstis and feed it with
the engine to be used.
End of explanation
mstis_calc = paths.PathSampling(
storage=storage,
sample_set=sset,
move_scheme=scheme
)
mstis_calc.save_frequency = 10
Explanation: and finally generate the PathSampler object to conduct the simulation.
End of explanation
mstis_calc.run(5)
print len(storage.steps)
# commented out during development, so we can "run all" and then do more
storage.close()
Explanation: Now everything is ready: let's run the simulation! The first step takes a little while since all necessary information, i.e. the engines, topologies, initial snapshots, ..., needs to be stored. Then the Monte Carlo steps will be performed.
End of explanation |
1,204 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
This notebook demonstrates how to perform diffusivity and ionic conductivity analyses starting from a series of VASP AIMD simulations using Python Materials Genomics (pymatgen) and its add-on package pymatgen-diffusion. These notebooks are described in detail in
Deng, Z.; Zhu, Z.; Chu, I.-H.; Ong, S. P. Data-Driven First-Principles Methods for the Study and Design of
Alkali Superionic Conductors. Chem. Mater. 2017, 29 (1), 281–288 DOI
Step1: Preparation
The DiffusionAnalyzer class in pymatgen can be instantiated from a supplied list of sequential vasprun.xml output files from the AIMD simulations. An example code (commented out) is shown below.
Step2: In this work, all trajectories are stored in an efficient document-based MongoDB database. The format of the documents in the database is a binary JSON format. Here, we will instead instantiate the DiffusionAnalyzer from a pre-serialized DiffusionAnalyzer for each temperature.
Step3: MSD vs time plot
For each temperature, we can plot the mean square displacement against time as follows (only 1000 K shown).
Step4: Activation energy and ionic conductivity
From diffusivity at each temperature, we can obtain activation energy and room temperature ionic conductivity by constructing an Arrhenius plot.
Step5: From the temperatures and diffusivities, one may obtain the extrapolated room-temperature conductivity as follows.
Step6: Probability density function analysis
We can compute the probability density function from the AIMD trajectories using the ProbabilityDensityAnalysis class implemented in the pymatgen-diffusion add-on. We will use the calculation at 800K as an example. The probability density function can then be output to a CHGCAR-like file for visualization in VESTA.
Step7: The VESTA visualization software can be used to visualize isosurfaces in the probability density. The 800K probability density function at an isosurface of 0.002 is shown below.
Step8: van Hove correlation function analysis
We can compute the van Hove correlation function from a DiffusionAnalyzer using the VanHoveAnalysis class implemented in the pymatgen-diffusion add-on. We will use the calculation at 800K as an example.
Step9: We can then plot the self ($G_s$) and distinct ($G_d$) parts of the van Hove correlation function as follows. | Python Code:
from IPython.display import Image
%matplotlib inline
import matplotlib.pyplot as plt
import json
import collections
from pymatgen.core import Structure
from pymatgen.analysis.diffusion_analyzer import DiffusionAnalyzer, \
get_arrhenius_plot, get_extrapolated_conductivity
from pymatgen.analysis.diffusion.aimd.pathway import ProbabilityDensityAnalysis
from pymatgen.analysis.diffusion.aimd.van_hove import VanHoveAnalysis
Explanation: Introduction
This notebook demonstrates how to perform diffusivity and ionic conductivity analyses starting from a series of VASP AIMD simulations using Python Materials Genomics (pymatgen) and its add-on package pymatgen-diffusion. These notebooks are described in detail in
Deng, Z.; Zhu, Z.; Chu, I.-H.; Ong, S. P. Data-Driven First-Principles Methods for the Study and Design of
Alkali Superionic Conductors. Chem. Mater. 2017, 29 (1), 281–288 DOI: 10.1021/acs.chemmater.6b02648.
If you find these notebooks useful and use the functionality demonstrated, please consider citing the above work.
Let's start by importing some modules and classes that we will be using.
End of explanation
# files = ["run1/vasprun.xml", "run2/vasprun.xml", "run3/vasprun.xml"]
# analyzer = DiffusionAnalyzer.from_files(files, specie="Li", smoothed=False)
Explanation: Preparation
The DiffusionAnalyzer class in pymatgen can be instantiated from a supplied list of sequential vasprun.xml output files from the AIMD simulations. An example code (commented out) is shown below.
End of explanation
temperatures = [600, 800, 1000, 1200]
analyzers = collections.OrderedDict()
for temp in temperatures:
with open("aimd_data/%d.json" % temp) as f:
d = json.load(f)
analyzers[temp] = DiffusionAnalyzer.from_dict(d)
Explanation: In this work, all trajectories are stored in an efficient document-based MongoDB database. The format of the documents in the database is a binary JSON format. Here, we will instead instantiate the DiffusionAnalyzer from a pre-serialized DiffusionAnalyzer for each temperature.
End of explanation
plt = analyzers[1000].get_msd_plot()
title = plt.title("1000K", fontsize=24)
Explanation: MSD vs time plot
For each temperature, we can plot the mean square displacement against time as follows (only 1000 K shown).
End of explanation
diffusivities = [d.diffusivity for d in analyzers.values()]
plt = get_arrhenius_plot(temperatures, diffusivities)
Explanation: Activation energy and ionic conductivity
From diffusivity at each temperature, we can obtain activation energy and room temperature ionic conductivity by constructing an Arrhenius plot.
End of explanation
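The same activation energy can also be estimated by hand from the Arrhenius relation $D = D_0 \exp(-E_a / k_B T)$, i.e. a linear fit of $\ln D$ against $1/T$; a small sketch, assuming temperatures and diffusivities from the cells above, with $k_B$ in eV/K:
# Sketch: activation energy from a linear fit of ln(D) vs 1/T (assumes `temperatures` and `diffusivities` above).
import numpy as np
k_B = 8.617e-5  # Boltzmann constant in eV/K
inv_T = 1.0 / np.array(temperatures)
slope, intercept = np.polyfit(inv_T, np.log(diffusivities), 1)
print("Estimated activation energy: %.3f eV" % (-slope * k_B))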
rts = get_extrapolated_conductivity(temperatures, diffusivities,
new_temp=300, structure=analyzers[800].structure,
species="Li")
print("The Li ionic conductivity for Li6PS5Cl at 300 K is %.4f mS/cm" % rts)
Explanation: From the temperatures and diffusivities, one may obtain the extrapolated room-temperature conductivity as follows.
End of explanation
structure = analyzers[800].structure
trajectories = [s.frac_coords for s in analyzers[800].get_drift_corrected_structures()]
pda = ProbabilityDensityAnalysis(structure, trajectories, species="Li")
pda.to_chgcar("aimd_data/CHGCAR.vasp") # Output to a CHGCAR-like file for visualization in VESTA.
Explanation: Probability density function analysis
We can compute the probability density function from the AIMD trajectories using the ProbabilityDensityAnalysis class implemented in the pymatgen-diffusion add-on. We will use the calculation at 800K as an example. The probability density function can then be output to a CHGCAR-like file for visualization in VESTA.
End of explanation
Image(filename='Isosurface_800K_0.png')
Explanation: The VESTA visualization software can be used to visualize isosurfaces in the probability density. The 800K probability density function at an isosurface of 0.002 is shown below.
End of explanation
vha = VanHoveAnalysis(analyzers[800])
Explanation: van Hove correlation function analysis
We can compute the van Hove correlation function from a DiffusionAnalyzer using the VanHoveAnalysis class implemented in the pymatgen-diffusion add-on. We will use the calculation at 800K as an example.
End of explanation
vha.get_3d_plot(type="self")
vha.get_3d_plot(type="distinct")
Explanation: We can then plot the self ($G_s$) and distinct ($G_d$) parts of the van Hove correlation function as follows.
End of explanation |
1,205 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Lasso
Modified from the github repo
Step1: Hitters dataset
Let's load the dataset from the previous lab.
Step2: Exercise Compare the previous methods to the Lasso on this dataset. Tune $\lambda$ and compare the LOO risk to other methods (ridge, forward selection, etc.)
The following is a fast implementation of the lasso path cross-validated using LOO.
Step3: The following is the fitted coefficient vector for this chosen lambda.
Step4: The above is the MSE for the selected model. The best performance for ridge regression was roughly 120,000, so this does not outperform ridge. We can also compare this to the selected model from forward stagewise regression | Python Code:
# %load ../standard_import.txt
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import scale
from sklearn.model_selection import LeaveOneOut
from sklearn.linear_model import LinearRegression, lars_path, Lasso, LassoCV
%matplotlib inline
n=100
p=1000
X = np.random.randn(n,p)
X = scale(X)
sprob = 0.02
Sbool = np.random.rand(p) < sprob
s = np.sum(Sbool)
print("Number of non-zero's: {}".format(s))
mu = 100.
beta = np.zeros(p)
beta[Sbool] = mu * np.random.randn(s)
eps = np.random.randn(n)
y = X.dot(beta) + eps
larper = lars_path(X,y,method="lasso")
S = set(np.where(Sbool)[0])
for j in S:
_ = plt.plot(larper[0],larper[2][j,:],'r')
for j in set(range(p)) - S:
_ = plt.plot(larper[0],larper[2][j,:],'k',linewidth=.5)
_ = plt.title('Lasso path for simulated data')
_ = plt.xlabel('lambda')
_ = plt.ylabel('Coef')
Explanation: The Lasso
Modified from the github repo: https://github.com/JWarmenhoven/ISLR-python which is based on the book by James et al. Intro to Statistical Learning.
End of explanation
# In R, I exported the dataset from package 'ISLR' to a csv file.
df = pd.read_csv('../data/Hitters.csv', index_col=0).dropna()
df.index.name = 'Player'
df.info()
df.head()
dummies = pd.get_dummies(df[['League', 'Division', 'NewLeague']])
dummies.info()
print(dummies.head())
y = df.Salary
# Drop the column with the independent variable (Salary), and columns for which we created dummy variables
X_ = df.drop(['Salary', 'League', 'Division', 'NewLeague'], axis=1).astype('float64')
# Define the feature set X.
X = pd.concat([X_, dummies[['League_N', 'Division_W', 'NewLeague_N']]], axis=1)
X.info()
X.head(5)
Explanation: Hitters dataset
Let's load the dataset from the previous lab.
End of explanation
loo = LeaveOneOut()
looiter = loo.split(X)
hitlasso = LassoCV(cv=looiter)
hitlasso.fit(X,y)
print("The selected lambda value is {:.2f}".format(hitlasso.alpha_))
Explanation: Exercise Compare the previous methods to the Lasso on this dataset. Tune $\lambda$ and compare the LOO risk to other methods (ridge, forward selection, etc.)
The following is a fast implementation of the lasso path cross-validated using LOO.
End of explanation
hitlasso.coef_
np.mean(hitlasso.mse_path_[hitlasso.alphas_ == hitlasso.alpha_])
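# (Added sketch) For a rough baseline, the LOO mean squared error of ordinary
# least squares can be computed the same way; cross_val_score is an extra
# import that is not used elsewhere in this lab.
from sklearn.model_selection import cross_val_score
ols_mse = -cross_val_score(LinearRegression(), X, y, cv=LeaveOneOut(),
                           scoring="neg_mean_squared_error").mean()
ols_mse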
Explanation: The following is the fitted coefficient vector for this chosen lambda.
End of explanation
bforw = [-0.21830515, 0.38154135, 0. , 0. , 0. ,
0.16139123, 0. , 0. , 0. , 0. ,
0.09994524, 0.56696569, -0.16872682, 0.16924078, 0. ,
0. , 0. , -0.19429699, 0. ]
print(", ".join(X.columns[(hitlasso.coef_ != 0.) != (bforw != 0.)]))
Explanation: The above is the MSE for the selected model. The best performance for ridge regression was roughly 120,000, so this does not outperform ridge. We can also compare this to the selected model from forward stagewise regression:
[-0.21830515, 0.38154135, 0. , 0. , 0. ,
0.16139123, 0. , 0. , 0. , 0. ,
0.09994524, 0.56696569, -0.16872682, 0.16924078, 0. ,
0. , 0. , -0.19429699, 0. ]
This is not exactly the same model: the two fits differ in the inclusion or exclusion of AtBat, HmRun, Runs, RBI, Years, CHmRun, Errors, League_N, Division_W, NewLeague_N
End of explanation |
1,206 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Keras Backend
In this notebook we will be using the Keras backend module, which provides an abstraction over both Theano and Tensorflow.
Let's try to re-implement the Logistic Regression Model using the keras.backend APIs.
The following code will look like very similar to what we would write in Theano or Tensorflow (with the only difference that it may run on both the two backends).
Step1: Your Turn
Please switch to the Theano backend and restart the notebook.
You should see no difference in the execution!
Reminder
Step2: Notes
Step3: Then, given the gradient of MSE wrt to w and b, we can define how we update the parameters via SGD
Step4: The whole model can be encapsulated in a function, which takes as input x and target, returns the current loss value and updates its parameter according to updates.
Step5: Training
Training is now just a matter of calling the function we have just defined. Each time train is called, indeed, w and b will be updated using the SGD rule.
Having generated some random training data, we will feed the train function for several epochs and observe the values of w, b, and loss.
Step6: We can also plot the loss history | Python Code:
import keras.backend as K
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from kaggle_data import load_data, preprocess_data, preprocess_labels
X_train, labels = load_data('../data/kaggle_ottogroup/train.csv', train=True)
X_train, scaler = preprocess_data(X_train)
Y_train, encoder = preprocess_labels(labels)
X_test, ids = load_data('../data/kaggle_ottogroup/test.csv', train=False)
X_test, _ = preprocess_data(X_test, scaler)
nb_classes = Y_train.shape[1]
print(nb_classes, 'classes')
dims = X_train.shape[1]
print(dims, 'dims')
feats = dims
training_steps = 25
x = K.placeholder(dtype="float", shape=X_train.shape)
target = K.placeholder(dtype="float", shape=Y_train.shape)
# Set model weights
W = K.variable(np.random.rand(dims, nb_classes))
b = K.variable(np.random.rand(nb_classes))
# Define model and loss
y = K.dot(x, W) + b
loss = K.categorical_crossentropy(y, target)
activation = K.softmax(y) # Softmax
lr = K.constant(0.01)
grads = K.gradients(loss, [W,b])
updates = [(W, W-lr*grads[0]), (b, b-lr*grads[1])]
train = K.function(inputs=[x, target], outputs=[loss], updates=updates)
# Training
loss_history = []
for epoch in range(training_steps):
current_loss = train([X_train, Y_train])[0]
loss_history.append(current_loss)
if epoch % 20 == 0:
print("Loss: {}".format(current_loss))
loss_history = [np.mean(lh) for lh in loss_history]
# plotting
plt.plot(range(len(loss_history)), loss_history, 'o', label='Logistic Regression Training phase')
plt.ylabel('cost')
plt.xlabel('epoch')
plt.legend()
plt.show()
Explanation: Keras Backend
In this notebook we will be using the Keras backend module, which provides an abstraction over both Theano and Tensorflow.
Let's try to re-implement the Logistic Regression Model using the keras.backend APIs.
The following code will look very similar to what we would write in Theano or Tensorflow (with the difference that it can run on both backends).
End of explanation
# Placeholders and variables
x = K.placeholder()
target = K.placeholder()
w = K.variable(np.random.rand())
b = K.variable(np.random.rand())
Explanation: Your Turn
Please switch to the Theano backend and restart the notebook.
You should see no difference in the execution!
Reminder: please keep in mind that you can execute shell commands from a notebook (pre-pending a ! sign).
Thus:
shell
!cat ~/.keras/keras.json
should show you the content of your keras configuration file.
Moreover
Try to play a bit with the learning rate parameter to see how the loss history changes...
Exercise: Linear Regression
To get familiar with automatic differentiation, we start by learning a simple linear regression model using Stochastic Gradient Descent (SGD).
Recall that given a dataset ${(x_i, y_i)}_{i=0}^N$, with $x_i, y_i \in \mathbb{R}$, the objective of linear regression is to find two scalars $w$ and $b$ such that $y = w\cdot x + b$ fits the dataset. In this tutorial we will learn $w$ and $b$ using SGD and a Mean Square Error (MSE) loss:
$$\mathcal{l} = \frac{1}{N} \sum_{i=0}^N (w\cdot x_i + b - y_i)^2$$
Starting from random values, parameters $w$ and $b$ will be updated at each iteration via the following rule:
$$w_t = w_{t-1} - \eta \frac{\partial \mathcal{l}}{\partial w}$$
<br>
$$b_t = b_{t-1} - \eta \frac{\partial \mathcal{l}}{\partial b}$$
where $\eta$ is the learning rate.
NOTE: Recall that linear regression is indeed a simple neuron with a linear activation function!!
Definition: Placeholders and Variables
First of all, we define the necessary variables and placeholders for our computational graph. Variables maintain state across executions of the computational graph, while placeholders are ways to feed the graph with external data.
For the linear regression example, we need three variables: w, b, and the learning rate for SGD, lr.
Two placeholders x and target are created to store $x_i$ and $y_i$ values.
End of explanation
# Define model and loss
# %load ../solutions/sol_2311.py
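# A minimal sketch of what the loaded solution might contain (the solution
# file itself is not shown here): a linear model and its mean squared error.
y = w * x + b
loss = K.mean(K.square(y - target))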
Explanation: Notes:
In case you're wondering what's the difference between a placeholder and a variable, in short:
Use K.variable() for trainable variables such as weights (W) and biases (b) for your model.
Use K.placeholder() to feed actual data (e.g. training examples)
Model definition
Now we can define the $y = w\cdot x + b$ relation as well as the MSE loss in the computational graph.
End of explanation
# %load ../solutions/sol_2312.py
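# A minimal sketch of the gradient/update step (assumed to mirror the
# solution file): compute dloss/dw and dloss/db, then apply the SGD rule.
lr = K.constant(0.01)
grads = K.gradients(loss, [w, b])
updates = [(w, w - lr * grads[0]), (b, b - lr * grads[1])]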
Explanation: Then, given the gradient of the MSE with respect to w and b, we can define how we update the parameters via SGD:
End of explanation
train = K.function(inputs=[x, target], outputs=[loss], updates=updates)
Explanation: The whole model can be encapsulated in a function, which takes as input x and target, returns the current loss value and updates its parameter according to updates.
End of explanation
# Generate data
np_x = np.random.rand(1000)
np_target = 0.96*np_x + 0.24
# Training
loss_history = []
for epoch in range(200):
current_loss = train([np_x, np_target])[0]
loss_history.append(current_loss)
if epoch % 20 == 0:
print("Loss: %.03f, w, b: [%.02f, %.02f]" % (current_loss, K.eval(w), K.eval(b)))
Explanation: Training
Training is now just a matter of calling the function we have just defined. Each time train is called, indeed, w and b will be updated using the SGD rule.
Having generated some random training data, we will feed the train function for several epochs and observe the values of w, b, and loss.
End of explanation
# Plot loss history
# %load ../solutions/sol_2313.py
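# A minimal plotting sketch (the exact solution file is not shown here).
plt.plot(range(len(loss_history)), loss_history, 'o', label='Linear regression training loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.show()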
Explanation: We can also plot the loss history:
End of explanation |
1,207 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Feedback or issues?
For any feedback or questions, please open an issue.
Vertex SDK for Python
Step1: Enter your project and GCS bucket
Enter your Project Id in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook.
Step3: Setting up Customer Managed Encryption Keys
By default, Google Cloud automatically encrypts data when it is at rest using encryption keys managed by Google. If you have specific compliance or regulatory requirements related to the keys that protect your data, you can use customer-managed encryption keys (CMEK) for your training jobs.
For more info on using CMEK on Vertex AI, please see
Step5: Create a key
Step6: Give permissions to key to the Vertex AI service account
Step7: Initialize Vertex SDK for Python
Initialize the client for Vertex AI
All resources created during this Notebook run will encrypted with the encryption key created above.
You can override the encryption key at each function call.
Step8: Create Managed Image Dataset from CSV
This section will create a managed Image dataset from the Flowers dataset. For more imformation on this dataset please visit https
Step9: Launch a Training Job to Create a Model
Train an AutoML Image Classification model.
Step10: Deploy Your Model
Deploy your model, then wait until the model FINISHES deployment before proceeding to prediction.
Step11: Predict on Endpoint
Take one sample from the data imported to the dataset
This sample will be encoded to base64 and passed to the endpoint for prediction
Step12: Undeploy Model from Endpoint | Python Code:
!pip3 uninstall -y google-cloud-aiplatform
!pip3 install --upgrade google-cloud-kms
!pip3 install google-cloud-aiplatform
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Feedback or issues?
For any feedback or questions, please open an issue.
Vertex SDK for Python: AutoML Image Classfication Training with Customer Managed Encryption Keys (CMEK) Example
To use this Jupyter notebook, create a copy of the notebook in Colab and open it. You can run each step, or cell, and see its results. To run a cell, use Shift+Enter. Colab automatically displays the return value of the last line in each cell.
This notebook demonstrate how to train an AutoML Image Classification model with CMEK. It will require you provide a bucket where the dataset will be stored.
Note: you may incur charges for training, prediction, storage or usage of other GCP products in connection with testing this SDK.
Install SDK
After the SDK installation the kernel will be automatically restarted.
End of explanation
import sys
if "google.colab" in sys.modules:
from google.colab import auth
auth.authenticate_user()
REGION = "YOUR REGION" # e.g. us-central1
MY_PROJECT = "YOUR PROJECT ID"
MY_STAGING_BUCKET = "gs://YOUR BUCKET" # bucket should be in same region as ucaip
Explanation: Enter your project and GCS bucket
Enter your Project Id in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook.
End of explanation
KEY_RING_ID = "your_key_ring_name"
# Reference: https://cloud.google.com/kms/docs/samples/kms-create-key-ring
def create_key_ring(project_id, location_id, id):
Creates a new key ring in Cloud KMS
Args:
project_id (string): Google Cloud project ID (e.g. 'my-project').
location_id (string): Cloud KMS location (e.g. 'us-east1').
id (string): ID of the key ring to create (e.g. 'my-key-ring').
Returns:
KeyRing: Cloud KMS key ring.
# Import the client library.
from google.cloud import kms
# Create the client.
client = kms.KeyManagementServiceClient()
# Build the parent location name.
location_name = f"projects/{project_id}/locations/{location_id}"
# Build the key ring.
key_ring = {}
# Call the API.
created_key_ring = client.create_key_ring(
request={"parent": location_name, "key_ring_id": id, "key_ring": key_ring}
)
print("Created key ring: {}".format(created_key_ring.name))
return created_key_ring
create_key_ring(project_id=MY_PROJECT, location_id=REGION, id=KEY_RING_ID)
Explanation: Setting up Customer Managed Encryption Keys
By default, Google Cloud automatically encrypts data when it is at rest using encryption keys managed by Google. If you have specific compliance or regulatory requirements related to the keys that protect your data, you can use customer-managed encryption keys (CMEK) for your training jobs.
For more info on using CMEK on Vertex AI, please see: https://cloud.google.com/vertex-ai/docs/general/cmek#before_you_begin
You can create a key using the guide above or executing the the Notebook cells below.
Register your application for Cloud Key Management Service (KMS) API in Google Cloud Platform at https://console.cloud.google.com/flows/enableapi?apiid=cloudkms.googleapis.com
Create a key ring
Create a key ring
End of explanation
KEY_ID = "your_key_id"
# Reference: https://cloud.google.com/kms/docs/samples/kms-create-key-symmetric-encrypt-decrypt
def create_key_symmetric_encrypt_decrypt(project_id, location_id, key_ring_id, id):
Creates a new symmetric encryption/decryption key in Cloud KMS.
Args:
project_id (string): Google Cloud project ID (e.g. 'my-project').
location_id (string): Cloud KMS location (e.g. 'us-east1').
key_ring_id (string): ID of the Cloud KMS key ring (e.g. 'my-key-ring').
id (string): ID of the key to create (e.g. 'my-symmetric-key').
Returns:
CryptoKey: Cloud KMS key.
# Import the client library.
from google.cloud import kms
# Create the client.
client = kms.KeyManagementServiceClient()
# Build the parent key ring name.
key_ring_name = client.key_ring_path(project_id, location_id, key_ring_id)
# Build the key.
purpose = kms.CryptoKey.CryptoKeyPurpose.ENCRYPT_DECRYPT
algorithm = (
kms.CryptoKeyVersion.CryptoKeyVersionAlgorithm.GOOGLE_SYMMETRIC_ENCRYPTION
)
key = {
"purpose": purpose,
"version_template": {
"algorithm": algorithm,
},
}
# Call the API.
created_key = client.create_crypto_key(
request={"parent": key_ring_name, "crypto_key_id": id, "crypto_key": key}
)
print("Created symmetric key: {}".format(created_key.name))
return created_key
create_key_symmetric_encrypt_decrypt(
project_id=MY_PROJECT, location_id=REGION, key_ring_id=KEY_RING_ID, id=KEY_ID
)
Explanation: Create a key
End of explanation
# Reference: https://cloud.google.com/vertex-ai/docs/general/cmek#granting_permissions
# Get the service account
SERVICE_ACCOUNT = ! gcloud projects get-iam-policy {MY_PROJECT} \
--flatten="bindings[].members" \
--format="table(bindings.members)" \
--filter="bindings.role:roles/aiplatform.serviceAgent" \
| grep -oP "service-\d+@gcp-sa-aiplatform\.iam\.gserviceaccount\.com"
SERVICE_ACCOUNT = SERVICE_ACCOUNT[0]
print(f"Service account is: {SERVICE_ACCOUNT}")
# Give permissions
!gcloud kms keys add-iam-policy-binding {KEY_ID} \
--keyring={KEY_RING_ID} \
--location={REGION} \
--project={MY_PROJECT} \
--member=serviceAccount:{SERVICE_ACCOUNT} \
--role=roles/cloudkms.cryptoKeyEncrypterDecrypter
# Create the full resource identifier for the created key
ENCRYPTION_SPEC_KEY_NAME = f"projects/{MY_PROJECT}/locations/{REGION}/keyRings/{KEY_RING_ID}/cryptoKeys/{KEY_ID}"
Explanation: Give permissions to key to the Vertex AI service account
End of explanation
from google.cloud import aiplatform
aiplatform.init(
project=MY_PROJECT,
staging_bucket=MY_STAGING_BUCKET,
location=REGION,
encryption_spec_key_name=ENCRYPTION_SPEC_KEY_NAME,
)
Explanation: Initialize Vertex SDK for Python
Initialize the client for Vertex AI
All resources created during this Notebook run will be encrypted with the encryption key created above.
You can override the encryption key at each function call.
End of explanation
IMPORT_FILE = (
"gs://cloud-samples-data/vision/automl_classification/flowers/all_data_v2.csv"
)
ds = aiplatform.ImageDataset.create(
display_name="flowers",
gcs_source=[IMPORT_FILE],
import_schema_uri=aiplatform.schema.dataset.ioformat.image.single_label_classification,
)
ds.resource_name
Explanation: Create Managed Image Dataset from CSV
This section will create a managed Image dataset from the Flowers dataset. For more information on this dataset, please visit https://www.tensorflow.org/datasets/catalog/tf_flowers.
End of explanation
job = aiplatform.AutoMLImageTrainingJob(
display_name="train-iris-automl-mbsdk-1",
prediction_type="classification",
multi_label=False,
model_type="CLOUD",
base_model=None,
)
# This will take around half an hour to run
model = job.run(
dataset=ds,
model_display_name="iris-classification-model-mbsdk",
training_fraction_split=0.6,
validation_fraction_split=0.2,
test_fraction_split=0.2,
budget_milli_node_hours=8000,
disable_early_stopping=False,
)
Explanation: Launch a Training Job to Create a Model
Train an AutoML Image Classification model.
End of explanation
endpoint = model.deploy()
Explanation: Deploy Your Model
Deploy your model, then wait until the model FINISHES deployment before proceeding to prediction.
End of explanation
test_item = !gsutil cat $IMPORT_FILE | head -n1
test_item, test_label = str(test_item[0]).split(",")
print(test_item, test_label)
import base64
import tensorflow as tf
with tf.io.gfile.GFile(test_item, "rb") as f:
content = f.read()
# The format of each instance should conform to the deployed model's prediction input schema.
instances_list = [{"content": base64.b64encode(content).decode("utf-8")}]
prediction = endpoint.predict(instances=instances_list)
prediction
Explanation: Predict on Endpoint
Take one sample from the data imported to the dataset
This sample will be encoded to base64 and passed to the endpoint for prediction
End of explanation
endpoint.undeploy_all()
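# (Optional cleanup sketch, not part of the original notebook) To avoid further
# charges you can also delete the endpoint and the model; double-check these
# calls against your installed SDK version before running them.
# endpoint.delete(force=True)
# model.delete()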
Explanation: Undeploy Model from Endpoint
End of explanation |
1,208 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
User guide and example for the Landlab SPACE component
This notebook provides a brief introduction and user's guide for the Stream Power And Alluvial Conservation Equation (SPACE) component for landscape evolution modeling. It combines two documents, a User's Manual and a notebook-based example, written Charles M. Shobe to accompany the following publication
Step1: Two Landlab components are essential to running the SPACE model
Step2: Step 3
Step3: In this configuration, the model domain is set to drain water and sediment out of the only open boundary on the grid, the lower-left corner. There are several options for changing boundary conditions in Landlab. See Hobley et al. (2017) or the Landlab online documentation.
Step 4
Step4: Step 5
Step5: Visualization of results
Sediment flux map
Step6: Sedimentograph
Once the data required for the time series has been saved during the time loop, the time series may be plotted using standard matplotlib plotting commands | Python Code:
## Import Numpy and Matplotlib packages
import numpy as np
import matplotlib.pyplot as plt # For plotting results; optional
## Import Landlab components
# Pit filling; optional
from landlab.components import DepressionFinderAndRouter
# Flow routing
from landlab.components import FlowAccumulator
# SPACE model
from landlab.components import Space # SPACE model
## Import Landlab utilities
from landlab import RasterModelGrid # Grid utility
from landlab import imshow_grid # For plotting results; optional
Explanation: User guide and example for the Landlab SPACE component
This notebook provides a brief introduction and user's guide for the Stream Power with Alluvium Conservation and Entrainment (SPACE) component for landscape evolution modeling. It combines two documents, a User's Manual and a notebook-based example, written by Charles M. Shobe to accompany the following publication:
Shobe, C. M., Tucker, G. E., & Barnhart, K. R. (2017). The SPACE 1.0 model: a Landlab component for 2-D calculation of sediment transport, bedrock erosion, and landscape evolution. Geoscientific Model Development, 10(12), 4577-4604, https://doi.org/10.5194/gmd-10-4577-2017.
This notebook contains text from user manual along with executable code for its examples.
(User's Manual and example notebook written by C.M. Shobe in July 2017; combined into a notebook, updated for compatibility with Landlab 2.x, and added to the Landlab tutorials collection by Greg Tucker, July 2021.)
Background on SPACE component
The Landlab SPACE (Stream Power with Alluvium Conservation and Entrainment) component computes sediment transport and bedrock erosion across two-dimensional model landscapes. The SPACE model provides advantages relative to many other fluvial erosion models in that it 1) allows simultaneous erosion of sediment and bedrock, 2) explicitly treats sediment fluxes rather than relying on a proxy for bed cover, and 3) is easily coupled with other surface process components in Landlab. The SPACE component enhances Landlab’s functionality by enabling modeling of bedrock-alluvial channels, rather than simply using parameterized sediment-flux-dependent incision models.
This user manual teaches users how to use the SPACE component using two
examples provided in Shobe et al. (2017).
This user manual serves as a supplement to that manuscript.
Prerequisites: A working knowledge of the Python programming language
(SPACE and Landlab support Python 3.x) as well as the NumPy
and MatPlotLib libraries. Basic familiarity with the Landlab modeling toolkit (see Hobley et al., 2017 GMD, and Barnhart et al., 2020 eSurf) is recommended.
Model description
Input parameters
Sediment erodibility $K_s$: Governs the rate of sediment entrainment; may be specified as a single floating point number, an array of length equal to the number of grid nodes, or a string naming an existing grid field.
Bedrock erodibility $K_r$: Governs the rate of bedrock erosion; may be specified as a single floating point number, an array of length equal to the number of grid nodes, or a string naming an existing grid field.
Fraction of fine sediment $F_f$: The unitless fraction (0–1) of rock that does not get converted to sediment, but is assumed to exit the model domain as “fine sediment,” or wash load.
Sediment porosity $\phi$: The unitless fraction (0–1) of sediment thickness caused by pore space.
Sediment entrainment length scale $H_*$: Length scale governing the shape of the exponential sediment entrainment and bedrock erosion functions. $H_*$ may be thought of as reflecting bedrock surface roughness, with larger $H_*$ representing a rougher bedrock surface.
Effective settling velocity $V$: Settling velocity of sediment after accounting for the upward effects of turbulence. For details, see discussion by Davy and Lague, 2009.
Stream power exponent $m$: Exponent on drainage area or discharge in the stream power framework. Generally $\approx 0.5$.
Stream power exponent $n$: Exponent on channel slope in the stream power framework. Generally $\approx 1$.
Sediment erosion threshold $\omega_{cs}$: Threshold erosive power required to entrain sediment.
Bedrock erosion threshold $\omega_{cr}$: Threshold erosive power required to erode bedrock.
Discharge field: The field name or array to use for water discharge. The default is to use the grid field surface_water__discharge, which is simply drainage area multiplied by the default rainfall rate (1 m/yr). To use custom spatially/temporally varying rainfall, use water__unit_flux_in to specify water input to the FlowAccumulator.
Solver: string indicating the solver to use. Options at present include:
'basic' (default): explicit forward-time extrapolation. Simple but will become unstable if time step is too large.
'adaptive': subdivides global time step as needed to prevent slopes from reversing and alluvium from going negative.
Model Variables
Variables listed here are updated by the component at the grid locations listed. NOTE: because flow routing, calculation of discharge, and calculation of flow depth (if applicable) are handled by other Landlab components, variables such as water discharge and flow depth are not altered by the SPACE model and are not listed here.
soil__depth, node, [m]: Thickness of soil (also called sediment or alluvium) at every node. The name “soil” was used to match existing Landlab components. Soil thickness is calculated at every node incorporating the effects of sediment entrainment and deposition and bedrock erosion.
sediment__flux, node, [m$^3$/yr]: The volumetric flux of sediment at each node. Sediment flux is used to calculate sediment deposition rates.
Steps of a SPACE model
Note: these steps are for a SPACE model that is not coupled to any other Landlab components. To see examples of how to couple Landlab components, please refer to the Landlab documentation: http://landlab.github.io.
Step 1: Import the necessary libraries
The SPACE component is required, as are the model grid component and a flow routing component. It is generally a good idea to also include a depression handler such as LakeMapperBarnes or DepressionFinderAndRouter. These depression handlers route flow across flats or pits in a digital elevation model.
End of explanation
# Set grid parameters
num_rows = 20
num_columns = 20
node_spacing = 100.0
# track sediment flux at the node adjacent to the outlet at lower-left
node_next_to_outlet = num_columns + 1
# Instantiate model grid
mg = RasterModelGrid((num_rows, num_columns), node_spacing)
# add field ’topographic elevation’ to the grid
mg.add_zeros("node", "topographic__elevation")
# set constant random seed for consistent topographic roughness
np.random.seed(seed=5000)
# Create initial model topography:
# plane tilted towards the lower−left corner
topo = mg.node_y / 100000.0 + mg.node_x / 100000.0
# add topographic roughness
random_noise = (
np.random.rand(len(mg.node_y)) / 1000.0
) # impose topography values on model grid
mg["node"]["topographic__elevation"] += topo + random_noise
# add field 'soil__depth' to the grid
mg.add_zeros("node", "soil__depth")
# Set 2 m of initial soil depth at core nodes
mg.at_node["soil__depth"][mg.core_nodes] = 2.0 # meters
# Add field 'bedrock__elevation' to the grid
mg.add_zeros("bedrock__elevation", at="node")
# Sum 'soil__depth' and 'bedrock__elevation'
# to yield 'topographic elevation'
mg.at_node["bedrock__elevation"][:] = mg.at_node["topographic__elevation"]
mg.at_node["topographic__elevation"][:] += mg.at_node["soil__depth"]
Explanation: Two Landlab components are essential to running the SPACE model: the model itself, and the FlowAccumulator, which calculates drainage pathways, topographic slopes, and surface water discharge across the grid. A depression handler, such as DepressionFinderAndRouter, is extremely useful if a grid is likely to have pits or closed depressions. For this reason, it is generally a good idea to use the DepressionFinderAndRouter in addition to the FlowAccumulator. However, it is not required.
In addition to the relevant process components, some Landlab utilities are required to generate the model grid (in this example RasterModelGrid) and to visualize output (imshow_grid). Note that while it is possible to visualize output through functionality in other libraries (e.g., matplotlib), imshow_grid provides a simple way to generate 2-D maps of model variables.
Most Landlab functionality requires the Numpy package for scientific computing in python. The matplotlib plotting library has also been imported to aid visualization of results.
Step 2: Define the model domain and initial conditions
The SPACE component works on raster grids. For this example we will use a synthetic raster grid. An example and description of the Landlab raster model grid are given in (Shobe et al., 2017), with a more complete explanation offered in Hobley et al. (2017) and Barnhart et al. (2020). In addition to using user-defined, synthetic model grids, it is also possible to import digital elevation models for use as a model domain (see the tutorial reading_dem_into_landlab). In this example, we create a synthetic, square model domain by creating an instance of the RasterModelGrid. In this case, the domain will be a plane slightly tilted towards the lower-left (southwest) corner with random micro-scale topographic roughness to force flow convergence and channelization. The grid is composed of 20 rows and 20 columns for a total of 400 nodes, with user-defined spacing.
Once the grid has been created, the user defines a grid field to contain values of land surface elevation, and then imposes the desired initial condition topography on the model grid. In the case shown below, the field topographic__elevation is added to the model grid and given initial values of all zeros. After that, initial model topography is added to the field. To create a plane tilted to the southwest corner, which is referenced by $(x,y)$ coordinate pair (0,0), topographic elevation is modified to depend on the $x$ and $y$ coordinates of each grid node. Then, randomized micro-scale topographic roughness is added to the model grid. While not strictly necessary for the SPACE model to run, the micro-roughness allows flow convergence, channelization, and the development of realistic landscapes.
In this example, we initialize the model domain with 2 meters of sediment thickness at every core (non-boundary) node. The sediment thickness will shrink over time as water mobilizes and removes sediment. To do this, the fields soil__depth and bedrock__elevation must be added to the model grid. If they are not added, the SPACE model will create them. In that case, however, the default sediment thickness is zero and the default bedrock topography is simply the provided topographic elevation.
End of explanation
# Close all model boundary edges
mg.set_closed_boundaries_at_grid_edges(
bottom_is_closed=True, left_is_closed=True, right_is_closed=True, top_is_closed=True
)
# Set lower-left (southwest) corner as an open boundary
mg.set_watershed_boundary_condition_outlet_id(
0, mg["node"]["topographic__elevation"], -9999.0
)
Explanation: Step 3: Set the boundary conditions
The user must determine the boundary conditions of the model domain (i.e., determine across which boundaries water and sediment may flow). Boundary conditions are controlled by setting the status of individual nodes or grid edges (see Hobley et al., 2017). We will use a single corner node as an “open” boundary and all other boundary nodes will be “closed”. We first use set closed boundaries at grid edges to ensure that no mass (water or sediment) may cross the model boundaries. Then, set watershed boundary condition outlet id is used to open (allow flow through) the lower-left corner of the model domain.
End of explanation
# Instantiate flow router
fr = FlowAccumulator(mg, flow_director="FlowDirectorD8")
# Instantiate depression finder and router; optional
df = DepressionFinderAndRouter(mg)
# Instantiate SPACE model with chosen parameters
sp = Space(
mg,
K_sed=0.01,
K_br=0.001,
F_f=0.0,
phi=0.0,
H_star=1.0,
v_s=5.0,
m_sp=0.5,
n_sp=1.0,
sp_crit_sed=0,
sp_crit_br=0,
)
Explanation: In this configuration, the model domain is set to drain water and sediment out of the only open boundary on the grid, the lower-left corner. There are several options for changing boundary conditions in Landlab. See Hobley et al. (2017) or the Landlab online documentation.
Step 4: Initialize the SPACE component and any other components used
Like most Landlab components, SPACE is written as a Python class. The class was imported at the beginning of the driver script (step 1). In this step, the user declares the instance of the SPACE class and sets any relevant model parameters. The same must be done for any other components used.
End of explanation
# Set model timestep
timestep = 1.0 # years
# Set elapsed time to zero
elapsed_time = 0.0 # years
# Set timestep count to zero
count = 0
# Set model run time
run_time = 500.0 # years
# Array to save sediment flux values
sed_flux = np.zeros(int(run_time // timestep))
while elapsed_time < run_time: # time units of years
# Run the flow router
fr.run_one_step()
# Run the depression finder and router; optional
df.map_depressions()
# Run SPACE for one time step
sp.run_one_step(dt=timestep)
# Save sediment flux value to array
sed_flux[count] = mg.at_node["sediment__flux"][node_next_to_outlet]
# Add to value of elapsed time
elapsed_time += timestep
# Increase timestep count
count += 1
Explanation: Step 5: Run the time loop
The SPACE component calculates sediment entrainment and deposition, bedrock erosion, and changes in land surface elevation over time. The code shown below is an example of how to run the SPACE model over several model timesteps. In the example below, SPACE is run in a loop that executes until elapsed model time has reached a user-defined run time. The user is also responsible for choosing the model timestep. Within the loop, the following steps occur:
The flow router runs first to determine topographic slopes and water discharge at all nodes on the model domain.
The depression finder and router runs to map any nodes located in local topographic minima (i.e., nodes that water cannot drain out of) and to establish flow paths across the surface of these “lakes.” Using the depression finder and router is optional. However, because the SPACE model may in certain situations create local minima, using the depression finder and router can prevent the development of fatal instabilities.
The depression finder and router generates a list of flooded nodes, which is then saved as a variable called “flooded” and passed to the SPACE model.
The SPACE model runs for the duration of a single timestep, computing sediment transport, bedrock erosion, and topographic surface evolution.
The elapsed time is updated.
End of explanation
# Instantiate figure
fig = plt.figure()
# Instantiate subplot
plot = plt.subplot()
# Show sediment flux map
imshow_grid(
mg,
"sediment__flux",
plot_name="Sediment flux",
var_name="Sediment flux",
var_units=r"m$^3$/yr",
grid_units=("m", "m"),
cmap="terrain",
)
# Export figure to image
fig.savefig("sediment_flux_map.eps")
Explanation: Visualization of results
Sediment flux map
End of explanation
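# (Added sketch) The same imshow_grid utility can map the final topography;
# the field name below is the one defined earlier in this notebook.
imshow_grid(
    mg,
    "topographic__elevation",
    plot_name="Final topography",
    var_name="Elevation",
    var_units="m",
    grid_units=("m", "m"),
    cmap="terrain",
)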
# Instantiate figure
fig = plt.figure()
# Instantiate subplot
sedfluxplot = plt.subplot()
# Plot data
sedfluxplot.plot(np.arange(500), sed_flux, color="k", linewidth=3.0)
# Add axis labels
sedfluxplot.set_xlabel("Time [yr]")
sedfluxplot.set_ylabel(r"Sediment flux [m$^3$/yr]")
Explanation: Sedimentograph
Once the data required for the time series has been saved during the time loop, the time series may be plotted using standard matplotlib plotting commands:
End of explanation |
1,209 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Threaded Serial Port reader</h1>
<hr style="border
Step1: <span>
We first create the queue into which the serial port thread will push its readings.
</span>
Step2: <span>
It is time now to build one of the important things
Step3: <span>
And we will create a Thread instance
Step4: <span>
Let's rock the Thread!
</span>
Step5: <span>
And we start reading the queue...
</span> | Python Code:
import sys
#sys.path.insert(0, '/home/asanso/workspace/att-spyder/att/src/python/')
sys.path.insert(0, 'i:/dev/workspaces/python/att-workspace/att/src/python/')
Explanation: <h1>Threaded Serial Port reader</h1>
<hr style="border: 1px solid #000;">
<span>
<h2>Serial Port reader in an execution thread.<br>
Publishing the readings from the serial port to a Queue.</h2>
</span>
<br>
<span>
This notebook shows how the module ThreadedSerialReader works.
</span>
<span>Set modules path first:</span>
End of explanation
import Queue
workQueue = Queue.Queue(10000)
Explanation: <span>
We first create the queue into which the serial port thread will push its readings.
</span>
End of explanation
import hit.serial.serial_port_builder
builder = hit.serial.serial_port_builder.ATTHitsFromFilePortBuilder()
Explanation: <span>
It is time now to build one of the important things: the serial port.<br>
We will use the file serialPort abstraction:
</span>
End of explanation
from hit.serial.serial_reader import *
port="train_points_import_data/arduino_raw_data.txt"
baud=0
myThread = ThreadedSerialReader(1, "Thread-1", workQueue, None, builder, port, baud, None)
Explanation: <span>
And we will create a Thread instance:
</span>
End of explanation
myThread.start()
Explanation: <span>
Let's rock the Thread!
</span>
End of explanation
while not workQueue.empty():
reading = workQueue.get()
print reading
if reading == "":
break
Explanation: <span>
And we start reading the queue...
</span>
End of explanation |
1,210 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Naive-Bayes" data-toc-modified-id="Naive-Bayes-1"><span class="toc-item-num">1 </span>Naive Bayes</a></span><ul class="toc-item"><li><span><a href="#Text/Document-Representations" data-toc-modified-id="Text/Document-Representations-1.1"><span class="toc-item-num">1.1 </span>Text/Document Representations</a></span></li><li><span><a href="#Bernoulli-Model" data-toc-modified-id="Bernoulli-Model-1.2"><span class="toc-item-num">1.2 </span>Bernoulli Model</a></span></li><li><span><a href="#Bernoulli-Model-Implementation" data-toc-modified-id="Bernoulli-Model-Implementation-1.3"><span class="toc-item-num">1.3 </span>Bernoulli Model Implementation</a></span></li><li><span><a href="#Multinomial-Distribution" data-toc-modified-id="Multinomial-Distribution-1.4"><span class="toc-item-num">1.4 </span>Multinomial Distribution</a></span></li><li><span><a href="#Multinomial-Model" data-toc-modified-id="Multinomial-Model-1.5"><span class="toc-item-num">1.5 </span>Multinomial Model</a></span><ul class="toc-item"><li><span><a href="#Laplace-Smoothing" data-toc-modified-id="Laplace-Smoothing-1.5.1"><span class="toc-item-num">1.5.1 </span>Laplace Smoothing</a></span></li><li><span><a href="#Log-Transformation" data-toc-modified-id="Log-Transformation-1.5.2"><span class="toc-item-num">1.5.2 </span>Log-Transformation</a></span></li></ul></li><li><span><a href="#Multinomial-Model-Implementation" data-toc-modified-id="Multinomial-Model-Implementation-1.6"><span class="toc-item-num">1.6 </span>Multinomial Model Implementation</a></span></li><li><span><a href="#Pros-and-Cons-of-Naive-Bayes" data-toc-modified-id="Pros-and-Cons-of-Naive-Bayes-1.7"><span class="toc-item-num">1.7 </span>Pros and Cons of Naive Bayes</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
Step1: Naive Bayes
Naive Bayes classifiers is based on Bayes’ theorem, and the adjective naive comes from the assumption that the features in a dataset are mutually independent. In practice, the independence assumption is often violated, but Naive Bayes still tend to perform very well in the fields of text/document classification. Common applications includes spam filtering (categorized a text message as spam or not-spam) and sentiment analysis (categorized a text message as positive or negative review). More importantly, the simplicity of the method means that it takes order of magnitude less time to train when compared to more complexed models like support vector machines.
Text/Document Representations
Text classifiers often don't use any kind of deep representation about language
Step3: Bernoulli Model
Consider a corpus of documents (training data) whose class is given by $C = 1, 2, ..., K$. Using Naive Bayes (no matter if it's the bernoulli model or the multinomial model which we'll later see), we classify a document $D$ as the class which has the highest posterior probability $argmax_{ k = 1, 2, ..., K} \, p(C = k|D)$, which can be re-expressed using Bayes’ Theorem
Step4: Multinomial Distribution
Before discussing the multinomial document model, it is important to be familiar with the multinomial
distribution. The multinomial distribution can be used to compute the probabilities in situations in which there are more than two possible outcomes. For example, suppose that two chess players had played numerous games and it was determined that the probability that Player A would win is 0.40, the probability that Player B would win is 0.35, and the probability that the game would end in a draw is 0.25. The multinomial distribution can be used to answer questions such as
Step5: Given the four documents and its corresponding class (label), which class does the document with the message Chinese Chinese Chinese Tokyo Japan more likely belong to.
Step12: The implementation in the following code chunk is a very crude implementation while the one two code chunks below is a more efficient and robust implementation that leverages sparse matrix and matrix multiplication. | Python Code:
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir(os.path.join('..', '..', 'notebook_format'))
from formats import load_style
load_style(plot_style = False)
os.chdir(path)
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
import numpy as np
import pandas as pd
from sklearn.naive_bayes import MultinomialNB
from sklearn.preprocessing import LabelEncoder
from sklearn.feature_extraction.text import CountVectorizer
%watermark -a 'Ethen' -d -t -v -p numpy,pandas,sklearn
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Naive-Bayes" data-toc-modified-id="Naive-Bayes-1"><span class="toc-item-num">1 </span>Naive Bayes</a></span><ul class="toc-item"><li><span><a href="#Text/Document-Representations" data-toc-modified-id="Text/Document-Representations-1.1"><span class="toc-item-num">1.1 </span>Text/Document Representations</a></span></li><li><span><a href="#Bernoulli-Model" data-toc-modified-id="Bernoulli-Model-1.2"><span class="toc-item-num">1.2 </span>Bernoulli Model</a></span></li><li><span><a href="#Bernoulli-Model-Implementation" data-toc-modified-id="Bernoulli-Model-Implementation-1.3"><span class="toc-item-num">1.3 </span>Bernoulli Model Implementation</a></span></li><li><span><a href="#Multinomial-Distribution" data-toc-modified-id="Multinomial-Distribution-1.4"><span class="toc-item-num">1.4 </span>Multinomial Distribution</a></span></li><li><span><a href="#Multinomial-Model" data-toc-modified-id="Multinomial-Model-1.5"><span class="toc-item-num">1.5 </span>Multinomial Model</a></span><ul class="toc-item"><li><span><a href="#Laplace-Smoothing" data-toc-modified-id="Laplace-Smoothing-1.5.1"><span class="toc-item-num">1.5.1 </span>Laplace Smoothing</a></span></li><li><span><a href="#Log-Transformation" data-toc-modified-id="Log-Transformation-1.5.2"><span class="toc-item-num">1.5.2 </span>Log-Transformation</a></span></li></ul></li><li><span><a href="#Multinomial-Model-Implementation" data-toc-modified-id="Multinomial-Model-Implementation-1.6"><span class="toc-item-num">1.6 </span>Multinomial Model Implementation</a></span></li><li><span><a href="#Pros-and-Cons-of-Naive-Bayes" data-toc-modified-id="Pros-and-Cons-of-Naive-Bayes-1.7"><span class="toc-item-num">1.7 </span>Pros and Cons of Naive Bayes</a></span></li></ul></li><li><span><a href="#Reference" data-toc-modified-id="Reference-2"><span class="toc-item-num">2 </span>Reference</a></span></li></ul></div>
End of explanation
vocab = ['blue', 'red', 'dog', 'cat', 'biscuit', 'apple']
doc = "the blue dog ate a blue biscuit"
# note that the words that didn't appear in the vocabulary will be discarded
bernoulli = [1 if v in doc else 0 for v in vocab]
multinomial = [doc.count(v) for v in vocab]
print('bernoulli', bernoulli)
print('multinomial', multinomial)
Explanation: Naive Bayes
Naive Bayes classifiers are based on Bayes’ theorem, and the adjective naive comes from the assumption that the features in a dataset are mutually independent. In practice, the independence assumption is often violated, but Naive Bayes still tends to perform very well in the field of text/document classification. Common applications include spam filtering (categorizing a text message as spam or not-spam) and sentiment analysis (categorizing a text message as a positive or negative review). More importantly, the simplicity of the method means that it takes orders of magnitude less time to train when compared to more complex models like support vector machines.
Text/Document Representations
Text classifiers often don't use any kind of deep representation about language: often times a document is represented as a bag of words. (A bag is like a set that allows repeating elements.) This is an extremely simple representation as it throws away the word order and only keeps which words are included in the document and how many times each word occurs.
We shall look at two probabilistic models of documents, both of which represent documents as a bag of words, using the Naive Bayes assumption. Both models represent documents using feature vectors
whose components correspond to word types. If we have a vocabulary containing $|V|$ distinct words,
then the feature vector dimension is $d=|V|$.
Bernoulli document model: a document is represented by a feature vector with binary elements taking value 1 if the corresponding word is present in the document and 0 if the word is not present.
Multinomial document model: a document is represented by a feature vector with integer elements whose value is the frequency of that word in the document.
Example: Consider the vocabulary V = {blue,red, dog, cat, biscuit, apple}. In this case |V| = d = 6. Now consider the (short) document "the blue dog ate a blue biscuit". If $d^B$ is the Bernoulli feature vector for this document, and $d^M$ is the Multinomial feature vector, then we would have:
End of explanation
train = np.genfromtxt('bernoulli.txt', dtype = np.int)
X_train = train[:, :-1]
y_train = train[:, -1] # the last column is the class
print('training data:')
print(X_train)
print()
print(y_train)
print()
print('testing data:')
X_test = np.array([[1, 0, 0, 1, 1, 1, 0, 1], [0, 1, 1, 0, 1, 0, 1, 0]])
print(X_test)
def bernoulli_nb(X_train, y_train, X_test):
"""Pass in the training data, its label and
predict the testing data's class using bernoulli naive bayes"""
# calculate the prior proabability p(C=k)
N = X_train.shape[0]
priors = np.bincount(y_train) / N
# obtain the unique class's type (since it may not be 0 and 1)
class_type = np.unique(y_train)
class_nums = class_type.shape[0]
word_likelihood = np.zeros((class_nums, X_train.shape[1]))
# compute the word likelihood p(w_t∣C)
for index, output in enumerate(class_type):
subset = X_train[np.equal(y_train, output)]
word_likelihood[index, :] = np.sum(subset, axis = 0) / subset.shape[0]
# make predictions on the test set
# note that this code will break if the test set happens to
# be a 1d-array, since the first for loop will not be
# looping through each document, but each document's feature instead
predictions = np.zeros(X_test.shape[0], dtype = np.int)
for index1, document in enumerate(X_test):
# stores the p(C|D) for each class
posteriors = np.zeros(class_nums)
# compute p(C = k|D) for the document for all class
# and return the predicted class with the maximum probability
for c in range(class_nums):
# start with p(C = k)
posterior = priors[c]
word_likelihood_subset = word_likelihood[c, :]
# loop through features to compute p(D∣C = k)
for index2, feature in enumerate(document):
if feature:
prob = word_likelihood_subset[index2]
else:
prob = 1 - word_likelihood_subset[index2]
posterior *= prob
posteriors[c] = posterior
# compute the maximum p(C|D)
predicted_class = class_type[np.argmax(posteriors)]
predictions[index1] = predicted_class
return predictions
predictions = bernoulli_nb(X_train, y_train, X_test)
predictions
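# (Added sketch) scikit-learn's BernoulliNB provides a quick sanity check on
# the same arrays; it applies Laplace smoothing by default (alpha=1), so its
# estimated probabilities differ slightly from the hand-rolled version above.
from sklearn.naive_bayes import BernoulliNB
bnb = BernoulliNB(alpha=1.0)
bnb.fit(X_train, y_train)
bnb.predict(X_test)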
Explanation: Bernoulli Model
Consider a corpus of documents (training data) whose class is given by $C = 1, 2, ..., K$. Using Naive Bayes (no matter if it's the bernoulli model or the multinomial model which we'll later see), we classify a document $D$ as the class which has the highest posterior probability $argmax_{ k = 1, 2, ..., K} \, p(C = k|D)$, which can be re-expressed using Bayes’ Theorem:
$$p(C = k|D) = \frac{ p(C = k) \, p(D|C = k) }{p(D)} \ \propto p(C = k) \, p(D|C = k)$$
Where:
$\propto$ means is proportional to.
$p(C = k)$ represents the prior probability of class $k$.
$p(D|C = k)$ is the likelihoods of the document given the class k.
$p(D)$ is the normalizing factor which we don't have to compute since it does not depend on the class $C$. i.e. this factor will be the same across all class $C$, thus the numerator will be enough to determine which $p(C = k|D)$ is the largest.
Starting with $p(D|C)$. The spirit of Naive Bayes is it assumes that each of the features it uses are conditionally independent of one another given some class. More formally, if we wish to calculate the probability of observing features $X_1$ through $X_d$, given some class $C$ we can do it by the following math formula:
$$p(x_{1},x_{2},...,x_{d} \mid C) = \prod_{i=1}^{d}p(x_{i} \mid C)$$
Suppose we have a vocabulary (features) $V$ containing a set of $|V|$ words and the $t_{th}$ dimension of a document vector corresponds to word $w_t$ in the vocabulary. Following the Naive Bayes assumption, that the probability of each word occurring in the document is independent of the occurrences of the other words, we can then re-write the $i_{th}$ document's likelihood $p(D_i \mid C)$ as:
$$p(D_i \mid C ) = \prod_{t=1}^{d}b_{it}p(w_t \mid C) + ( 1 - b_{it} ) (1- p(w_t \mid C)) $$
Where:
$p(w_t \mid C)$ is the probability of word $w_t$ occurring in a document of class $C$.
$1- p(w_t \mid C)$ is the probability of $w_t$ not occurring in a document of class $C$.
$b_{it}$ is either 0 or 1 representing the absence or presence of word $w_t$ in the $i_{th}$ document.
This product goes over all words in the vocabulary. If word $w_t$ is present, then $b_{it} = 1$ and the associated probability is $p(w_t \mid C)$; If word $w_t$ is not present, then $b_{it} = 0$ and the associated probability becomes $1- p(w_t \mid C)$.
As for the word likelihood $p(w_t \mid C)$, we can learn (estimate) these parameters from a training set of documents labelled with class $C=k$.
$$p(w_t \mid C = k) = \frac{n_k(w_t)}{N_k}$$
Where:
$n_k(w_t)$ is the number of class $C=k$'s document in which $w_t$ is observed.
$N_k$ is the number of documents that belong to class $k$.
Last, calculating $p(C)$ is relatively simple: If there are $N$ documents in total in the training set, then the prior probability of class $C=k$ may be estimated as the relative frequency of documents of class $C=k$:
$$p(C = k)\,= \frac{N_k}{N}$$
Where $N$ is the total number of documents in the training set.
Bernoulli Model Implementation
Consider a set of documents, each of which is related either to Class 1 or to Class 0. Given
a training set of 11 documents, we would like to train a Naive Bayes classifier, using the Bernoulli
document model, to classify unlabelled documents as Class 1 or 0. We define a vocabulary of eight words.
Thus the training data $X$ is presented below as a 11*8 matrix, in which each row represents an 8-dimensional document vector. And the $y$ represents the class of each document. Then we would like to classify the two testing data.
End of explanation
text = pd.read_table('multinomial.txt', sep = ',', header = None, names = ['message', 'label'])
X_train = text['message']
y_train = text['label']
text.head()
Explanation: Multinomial Distribution
Before discussing the multinomial document model, it is important to be familiar with the multinomial
distribution. The multinomial distribution can be used to compute the probabilities in situations in which there are more than two possible outcomes. For example, suppose that two chess players had played numerous games and it was determined that the probability that Player A would win is 0.40, the probability that Player B would win is 0.35, and the probability that the game would end in a draw is 0.25. The multinomial distribution can be used to answer questions such as: "If these two chess players played 12 games, what is the probability that Player A would win 7 games, Player B would win 2 games, and the remaining 3 games would be drawn?" The following generalized formula gives the probability of obtaining a specific set of outcomes when there are $n$ possible outcomes for each event:
$$P = \frac{n!}{n_1!n_2!...n_d!}p_1^{n_1}p_2^{n_2}...p_d^{n_d}$$
n is the total number of events.
$n_1, ..., n_d$ are the numbers of times outcomes $1$ through $d$ occurred.
$p_1, ..., p_d$ are the probabilities of outcomes $1$ through $d$.
Or more compactly written as:
$$P = \frac{n!}{\prod_{t=1}^{d}n_t!}\prod_{t=1}^{d}p_t^{n_t}$$
If all of that is still unclear, refer to the following link for a worked example. Youtube: Introduction to the Multinomial Distribution.
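As a quick illustration (added here as a sketch), the chess question above can be evaluated directly from the formula:
from math import factorial
coef = factorial(12) / (factorial(7) * factorial(2) * factorial(3))
prob = coef * 0.40 ** 7 * 0.35 ** 2 * 0.25 ** 3
print(prob)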
Multinomial Model
Recall that for Naive Bayes $argmax_{k = 1, 2, ..., K} \, p(D|C = k)p(C)$ is the objective function that we're trying to solve for. In the multinomial case, calculating $p(D|C = k)$ for the $i_{th}$ document becomes:
$$p(D_i|C = k) = \frac{x_i!}{\prod_{t=1}^{d}x_{it}!}\prod_{t=1}^{d}p(w_t|C)^{x_{it}} \propto \prod_{t=1}^{d}p(w_t|C)^{x_{it}}$$
Where:
$x_{it}$, is the count of the number of times word $w_t$ occurs in document $D_i$.
$x_i= \sum_t x_{it}$ is the total number of words in document $D_i$.
Often times, we don't need the normalization term $\frac{x_i!}{\prod_{t=1}^{d}x_{it}!}$, because it does not depend on the class, $C$.
$p(w_t \mid C)$ is the probability of word $w_t$ occurring in a document of class $C$. This time estimated using the word frequency information from the document's feature vectors. More specifically, this is: $\text{Number of word } w_t \text{ in class } C \big/ \text{Total number of words in class } C$.
$\prod_{t=1}^{d}p(w_t|C)^{x_{it}}$ can be interpreted as the product of word likelihoods for each word in the document.
Laplace Smoothing
One drawback of the equation for the multinomial model is that the likelihood $p(D_i|C = k)$ involves taking a product of probabilities $p(w_t \mid C)$. Hence if any one of the terms of the product is zero, then the whole product becomes zero. This means that the probability of the document belonging to the class in question is zero (impossible). Intuitively, just because a word does not occur in a document class in the training data does not mean that it cannot occur in any document of that class.
Therefore, one way to alleviate the problem is Laplace Smoothing (add-one smoothing), where we add a count of one to each word type and increase the denominator by $|V|$, the size of the vocabulary (the number of features), to ensure that the probabilities are still normalized. More formally, our $p(w_t \mid C)$ becomes:
$$p(w_t \mid C) = \frac{( \text{Number of word } w_t \text{ in class } C + 1 )}{( \text{Total number of words in class } C) + |V|} $$
In sum, by performing Laplace Smoothing, we ensure that our $p(w_t \mid C)$ will never equal 0.
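A hedged sketch of add-one smoothing with made-up counts, showing how the zero probability disappears:
import numpy as np
counts = np.array([3, 0, 1])                 # word counts for one class, |V| = 3
counts / counts.sum()                        # unsmoothed: word 2 gets probability 0
(counts + 1) / (counts.sum() + counts.size)  # smoothed: [4/7, 1/7, 2/7], no zeros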
Log-Transformation
Our original formula for classifying a document into a class using Multinomial Naive Bayes was:
$$p(C|D) = p(C)\prod_{t=1}^{d}p(w_t|C)^{x_{it}}$$
In practice, when we have a lot of unique words, we create very small values by computing the product of many $p(w_t \mid C)$ terms. On a computer, the values may become so small that they "underflow" (fall below the smallest positive value the floating-point representation can hold, so the result gets rounded to zero). To prevent this, we can simply throw a logarithm around everything:
$$p(C|D) = log \left( p(C)\prod_{t=1}^{d}p(w_t|C)^{x_{it}}\right)$$
Using the property that $log(ab) = log(a) + log(b)$, the formula above then becomes:
$$p(C|D) = log \, p(C) + \sum_{t=1}^d x_{it}log \, p(w_t|C)$$
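A small hedged demonstration of the underflow problem (illustrative numbers only):
import numpy as np
probs = np.full(1000, 1e-5)  # pretend likelihoods for 1000 words
np.prod(probs)               # underflows to 0.0 in double precision
np.sum(np.log(probs))        # stays representable: 1000 * log(1e-5)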
Multinomial Model Implementation
End of explanation
vect = CountVectorizer()
X_train_dtm = vect.fit_transform(X_train)
X_test_dtm = vect.transform(['Chinese Chinese Chinese Tokyo Japan'])
print('feature name: ', vect.get_feature_names())
# convert to dense array for better visualize representation
print('training:')
print(X_train_dtm.toarray())
print('\ntesting:')
print(X_test_dtm.toarray())
Explanation: Given the four documents and their corresponding classes (labels), which class is the document with the message "Chinese Chinese Chinese Tokyo Japan" more likely to belong to?
End of explanation
def multinomial_nb(X_train_dtm, y_train, X_test_dtm):
Pass in the training data, its labels and
predict the test data's class using multinomial naive bayes
# compute the priors
# convert the character class to numbers (easier to work with)
le = LabelEncoder()
y = le.fit_transform(y_train)
priors = np.bincount(y) / y.shape[0]
class_type = np.unique(y)
class_nums = class_type.shape[0]
feature_nums = X_train_dtm.shape[1]
likelihood = np.zeros((class_nums, feature_nums))
# compute the word likelihood p(w_t∣C)
# apply laplace smoothing
for index, output in enumerate(class_type):
subset = X_train_dtm[np.equal(y, output)]
likelihood[index, :] = (np.sum(subset, axis = 0) + 1) / (np.sum(subset) + feature_nums)
# make prediction on test set
predictions = np.zeros(X_test_dtm.shape[0], dtype = int)  # np.int was removed from recent NumPy
for index1, document in enumerate(X_test_dtm):
# stores the p(C|D) for each class
posteriors = np.zeros(class_nums)
# compute p(C = k|D) for the document for all class
# and return the predicted class with the maximum probability
for c in range(class_nums):
# start with p(C = k)
posterior = np.log(priors[c])
likelihood_subset = likelihood[c, :]
# compute p(D∣C = k)
prob = document * np.log(likelihood_subset)
posterior += np.sum(prob)
posteriors[c] = posterior
# compute the maximum p(C|D)
prediction = np.argmax(posteriors)
predictions[index1] = prediction
# convert the prediction to the original class label
predicted_class = le.inverse_transform(predictions)
return predicted_class
import numpy as np
from scipy.special import logsumexp  # scipy.misc.logsumexp was removed in newer SciPy versions
from sklearn.preprocessing import LabelBinarizer
class NaiveBayes:
Multinomial Naive Bayes classifier [1]_.
Parameters
----------
smooth : float, default 1.0
Additive Laplace smoothing.
Attributes
----------
classes_ : 1d ndarray, shape [n_class]
Holds the original label for each class.
class_log_prior_ : 1d ndarray, shape [n_class]
Empirical log probability for each class.
feature_log_prob_ : 1d ndarray, shape [n_classes, n_features]
Smoothed empirical log probability of features given a class,
``P(feature | class)``.
class_count_ : 1d ndarray, shape [n_classes]
Number of samples encountered for each class during fitting.
feature_count_ : 2d ndarray, shape [n_classes, n_features]
Number of samples encountered for each class and feature
during fitting.
References
----------
.. [1] `Scikit-learn MultinomialNB
<http://scikit-learn.org/stable/modules/generated/sklearn.naive_bayes.MultinomialNB.html>`_
def __init__(self, smooth = 1.0):
self.smooth = smooth
def fit(self, X, y):
Fit the model according to the training data X and
training label y.
Parameters
----------
X : scipy sparse csr_matrix, shape [n_samples, n_features]
Training data.
y : 1d ndarray, shape [n_samples]
Label values.
Returns
-------
self
# one hot encode the label column and for binary
# label, also expand it to two columns since it
# only returns a single column vector
labelbin = LabelBinarizer()
Y = labelbin.fit_transform(y).astype(np.float64)
if Y.shape[1] == 1:
Y = np.concatenate((1 - Y, Y), axis = 1)
self.classes_ = labelbin.classes_
# for sparse matrix, the "*" operator performs matrix multiplication
# https://stackoverflow.com/questions/36782588/dot-product-sparse-matrices
self.feature_count_ = Y.T * X
self.class_count_ = Y.sum(axis = 0)
# compute feature log probability:
# number of a particular word in a particular class / total number of words in that class
smoothed_count = self.feature_count_ + self.smooth
smoothed_class = np.sum(smoothed_count, axis = 1)
self.feature_log_prob_ = (np.log(smoothed_count) -
np.log(smoothed_class.reshape(-1, 1)))
# compute class log prior:
# number of observation in a particular class / total number of observation
self.class_log_prior_ = (np.log(self.class_count_) -
np.log(self.class_count_.sum()))
return self
def predict(self, X):
Perform classification for input data X.
Parameters
----------
X : 2d ndarray, shape [n_samples, n_features]
Input data
Returns
-------
pred_class : 1d ndarray, shape [n_samples]
Predicted label for X
joint_prob = self._joint_log_likelihood(X)
pred_class = self.classes_[np.argmax(joint_prob, axis = 1)]
return pred_class
def predict_proba(self, X):
Return probability estimates for input data X.
Parameters
----------
X : 2d ndarray, shape [n_samples, n_features]
Input data
Returns
-------
pred_proba : 2d ndarray, shape [n_samples, n_classes]
Returns the probability of the samples for each class.
The columns correspond to the classes in sorted
order, as they appear in the attribute `classes_`.
joint_prob = self._joint_log_likelihood(X)
# a crude implementation would be to take a exponent
# and perform a normalization
# temp = np.exp(joint_prob)
# temp / temp.sum(axis = 1, keepdims = True)
# but this would be numerically unstable
# https://hips.seas.harvard.edu/blog/2013/01/09/computing-log-sum-exp/
joint_prob_norm = logsumexp(joint_prob, axis = 1, keepdims = True)
pred_proba = np.exp(joint_prob - joint_prob_norm)
return pred_proba
def _joint_log_likelihood(self, X):
Compute the unnormalized posterior log probability of X, which is
the features' joint log probability (feature log probability times
the number of times that word appeared in that document) times the
class prior (since we're working in log space, it becomes an addition)
joint_prob = X * self.feature_log_prob_.T + self.class_log_prior_
return joint_prob
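# Aside (illustrative sketch, not in the original notebook): the "*" used in fit()
# performs matrix multiplication because one operand is a scipy sparse matrix, so
# Y.T * X aggregates per-class word counts. A tiny hypothetical example:
# from scipy.sparse import csr_matrix
# Y_demo = np.array([[1., 0.], [0., 1.], [1., 0.]])       # one-hot labels, 3 docs x 2 classes
# X_demo = csr_matrix([[2, 0, 1], [0, 3, 0], [1, 1, 0]])  # document-term counts, 3 docs x 3 words
# Y_demo.T * X_demo                                       # -> 2 x 3 matrix of word counts per class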
pred = multinomial_nb(
X_train_dtm.toarray(), y_train, X_test_dtm.toarray())
print('crude implementation', pred)
nb = NaiveBayes()
nb.fit(X_train_dtm, y_train)
pred = nb.predict(X_test_dtm)
print('efficient implementation', pred)
nb = MultinomialNB()
nb.fit(X_train_dtm, y_train)
pred = nb.predict(X_test_dtm)
print('library implementation', pred)
Explanation: The implementation in the following code chunk is a very crude one, while the implementation two code chunks below is more efficient and robust, leveraging sparse matrices and matrix multiplication.
End of explanation |
1,211 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Constraints
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
Step1: What are Constraints?
Constraints live in their own context of the Bundle, and many are created
by default - either when you add a component or when you set the system hierarchy.
Let's look at all the existing constraints for our binary system by filtering on context='constraint'.
Step2: To see what all of these constraints do, see Advanced
Step3: Here we see the equation used to derive the mass of the primary star
from its orbit, as well as the current value
If we access the Parameter that it is constraining we can see that it
is automatically kept up-to-date.
Step4: The parameter is aware that it's being constrained and has references to all the relevant linking parameters.
Step5: If you change the hierarchy, built-in cross-object constraints (like mass
that depends on its parent orbit) will be adjusted to reflect the new hierarchy. See Advanced
Step6: You'll see that when we set the primary mass, the secondary mass has also changed (because the masses are related through q) and the period has changed (based on resolving the Kepler's third law constraint).
Note that the tags for the constraint are based on those of the constrained parameter, so to switch the parameterization back, we'll have to use a different filter. | Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
Explanation: Constraints
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
b.filter(context='constraint')
Explanation: What are Constraints?
Constraints live in their own context of the Bundle, and many are created
by default - either when you add a component or when you set the system hierarchy.
Let's look at all the existing constraints for our binary system by filtering on context='constraint'.
End of explanation
b.get_parameter(qualifier='mass', component='primary', context='constraint')
Explanation: To see what all of these constraints do, see Advanced: Built-In Constraints or look at the constraint API docs.
For now let's look at a single constraint by accessing a ConstraintParameter.
End of explanation
print(b.get_value(qualifier='mass', component='primary', context='component'))
Explanation: Here we see the equation used to derive the mass of the primary star
from its orbit, as well as the current value.
If we access the Parameter that it is constraining we can see that it
is automatically kept up-to-date.
End of explanation
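# Aside (hedged sketch, not part of the original tutorial): the constraint above is
# essentially Kepler's third law, M1 + M2 = 4*pi**2 * sma**3 / (G * period**2), combined
# with q = M2/M1 so that M1 = M_total / (1 + q). A rough cross-check using astropy units:
# from astropy.constants import G
# sma = b.get_value(qualifier='sma', component='binary', context='component') * u.solRad
# period = b.get_value(qualifier='period', component='binary', context='component') * u.d
# q = b.get_value(qualifier='q', component='binary', context='component')
# ((4 * np.pi**2 * sma**3 / (G * period**2)) / (1 + q)).to(u.solMass)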
print(b.get_parameter(qualifier='mass', component='primary', context='component'))
Explanation: The parameter is aware that it's being constrained and has references to all the relevant linking parameters.
End of explanation
print(b.get_parameter(qualifier='mass', component='primary', context='component').constrained_by)
print("mass@primary: {}, mass@secondary: {}, period: {}".format(
b.get_value(qualifier='mass', component='primary', context='component'),
b.get_value(qualifier='mass', component='secondary', context='component'),
b.get_value(qualifier='period', component='binary', context='component')))
b.flip_constraint('mass@primary', solve_for='period')
b.set_value(qualifier='mass', component='primary', context='component', value=1.0)
print("mass@primary: {}, mass@secondary: {}, period: {}".format(
b.get_value(qualifier='mass', component='primary', context='component'),
b.get_value(qualifier='mass', component='secondary', context='component'),
b.get_value(qualifier='period', component='binary', context='component')))
Explanation: If you change the hierarchy, built-in cross-object constraints (like mass
that depends on its parent orbit) will be adjusted to reflect the new hierarchy. See Advanced: Constraints and Changing Hierarchies for more details.
Re-Parameterizing or "Flipping" Constraints
NOTE: when re-parameterizing, please be careful and make sure all results and parameters make sense.
As we've just seen, the mass is a constrained (i.e. derived) parameter. But
let's say that we would rather provide masses for some reason (perhaps
that was what was provided in a paper). We can choose to provide mass
and instead have one of its related parameters constrained by calling b.flip_constraint.
End of explanation
print(b.filter(context='constraint'))
b.get_parameter(qualifier='period', component='binary', context='constraint')
Explanation: You'll see that when we set the primary mass, the secondary mass has also changed (because the masses are related through q) and the period has changed (based on resolving the Kepler's third law constraint).
Note that the tags for the constraint are based on those of the constrained parameter, so to switch the parameterization back, we'll have to use a different filter.
End of explanation |
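# Aside (hedged sketch of the API usage implied above, not from the original tutorial):
# since 'period@binary' is now the constrained parameter, flipping back would look like
# b.flip_constraint('period@binary', solve_for='mass@primary')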
1,212 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embeded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'cnrm-cm6-1-hr', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: CNRM-CERFACS
Source ID: CNRM-CM6-1-HR
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:52
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. &Document the relative weight given to climate performance metrics versus process oriented metrics, &and on the possible conflicts with parameterization level tuning. In particular describe any struggle &with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z vertical coordinate in ocean ?*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different than active ? if so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
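# Illustrative only: booleans are passed unquoted, e.g. a hypothetical entry
# for a scheme without backscatter would be:
# DOC.set_value(False)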
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e. Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusivity Coeff
Properties of eddy diffusivity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusivity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusivity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusivity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusivity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusivity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (M2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Properties of boundary layer (BL) mixing on tracers in the ocean
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
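# Illustrative only: e.g. a Mellor-Yamada level 2.5 closure would record
# (hypothetical entry):
# DOC.set_value(2.5)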
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify the order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify the coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient, (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Properties of boundary layer (BL) mixing on momentum in the ocean
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify the order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specify the coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient, (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
Properties of interior mixing in the ocean
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
Properties of interior mixing on tracers in the ocean
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specify the coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e. is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient, (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
Properties of interior mixing on momentum in the ocean
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specify the coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e. is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient, (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinction depths for sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmosphere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation |
1,213 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dogs versus Cats Redux Competition on Kaggle
Setup
Step1: Prepare Data
Note
Step2: Create validation and test sets
Step3: Checkpoint - Extract Features
Step4: Checkpoint - Transfer Learning Model
Step5: Load labels and features
Step6: Run model with dropout and batchnorm
Step7: Looks like we can stop after 2 epochs. Doesn't get much better afterwards.
Run model with dropout only
Step8: Training accuracy jumps around a lot and is way lower than validation accuracy.
Either the learning rate is too low, or we're underfitting, either because of regularization (dropout) or because our model is not complex enough.
[email protected]
Step9: Lowering the learning rate works wonders, looks like the model wasn't underfitting before.
Results look better than with batchnorm.
Run model with batchnorm only | Python Code:
#reset python environment
%reset -f
from pathlib import Path
import numpy as np
import tensorflow as tf
import time
import os
current_dir = os.getcwd()
home_directory = Path(os.getcwd())
dataset_directory = home_directory / "datasets" / "dogs-vs-cats-redux-kernels-edition"
training_dataset_dir = dataset_directory / "train"
validation_dataset_dir = dataset_directory / "valid"
test_dataset_dir = dataset_directory / "test1"
sample_dataset_directory = home_directory / "datasets" / "dogs-vs-cats-redux-kernels-edition" / "sample"
sample_training_dataset_dir = sample_dataset_directory / "train"
sample_validation_dataset_dir = sample_dataset_directory / "valid"
sample_test_dataset_dir = sample_dataset_directory / "test1"
dogs_dir = "dog"
cats_dir = "cat"
default_device = "/gpu:0"
# default_device = "/cpu:0"
Explanation: Dogs versus Cats Redux Competition on Kaggle
Setup
End of explanation
from zipfile import ZipFile
# Create base directory
dataset_directory.mkdir(parents=True)
# Kaggle's train.zip and test1.zip have to be present in ./zips/dogs-vs-cats-redux-kernels-edition/
zips_directory = Path("zips") / "dogs-vs-cats-redux-kernels-edition"
with ZipFile(str(zips_directory / "train.zip")) as train_zip:
train_zip.extractall(dataset_directory)
with ZipFile(str(zips_directory / "test1.zip")) as test_zip:
test_zip.extractall(dataset_directory)
Explanation: Prepare Data
Note: This only needs to run if features haven't been already extracted
Pick random files from training set and use as validation set
Pick a subset of files for experimentation (sample)
Extract zip files
End of explanation
import os
import shutil
from glob import glob
valid_percentage = 0.1
sample_percentage = 0.1
def pick_random(files, percentage, target_dir, move=False):
shuffled = np.random.permutation(files)
num_files = int(len(shuffled) * percentage)
for f in shuffled[:num_files]:
if move:
f.rename(target_dir / f.name)
else:
shutil.copy(str(f), str(target_dir / f.name))
try:
# Create directory for training and validation images
cats_training_dataset_dir = training_dataset_dir / cats_dir
dogs_training_dataset_dir = training_dataset_dir / dogs_dir
cats_training_dataset_dir.mkdir()
dogs_training_dataset_dir.mkdir()
cats_validation_dataset_dir = validation_dataset_dir / cats_dir
dogs_validation_dataset_dir = validation_dataset_dir / dogs_dir
cats_validation_dataset_dir.mkdir(parents=True)
dogs_validation_dataset_dir.mkdir(parents=True)
# Move classes to their respective directories
for f in training_dataset_dir.glob("cat.*.jpg"):
f.rename(cats_training_dataset_dir / f.name)
for f in training_dataset_dir.glob("dog.*.jpg"):
f.rename(dogs_training_dataset_dir / f.name)
# Move randomly picked validation files
pick_random(
list(cats_training_dataset_dir.glob("*.jpg")), valid_percentage,
cats_validation_dataset_dir, move=True)
pick_random(
list(dogs_training_dataset_dir.glob("*.jpg")), valid_percentage,
dogs_validation_dataset_dir, move=True)
# Create directories for sample data
cats_sample_training_dataset_dir = (sample_training_dataset_dir / cats_dir)
dogs_sample_training_dataset_dir = (sample_training_dataset_dir / dogs_dir)
cats_sample_training_dataset_dir.mkdir(parents=True)
dogs_sample_training_dataset_dir.mkdir(parents=True)
cats_sample_validation_dataset_dir = sample_validation_dataset_dir / cats_dir
dogs_sample_validation_dataset_dir = sample_validation_dataset_dir / dogs_dir
cats_sample_validation_dataset_dir.mkdir(parents=True)
dogs_sample_validation_dataset_dir.mkdir(parents=True)
sample_test_dataset_dir.mkdir(parents=True)
# Copy randomly picked training and test files to samples
pick_random(
list(cats_training_dataset_dir.glob("*.jpg")), sample_percentage,
cats_sample_training_dataset_dir, move=False)
pick_random(
list(dogs_training_dataset_dir.glob("*.jpg")), sample_percentage,
dogs_sample_training_dataset_dir, move=False)
pick_random(
list(test_dataset_dir.glob("*.jpg")), sample_percentage,
sample_test_dataset_dir, move=False)
# Move randomly picked validation files
pick_random(
list(cats_sample_training_dataset_dir.glob("*.jpg")), valid_percentage,
cats_sample_validation_dataset_dir, move=False)
pick_random(
list(dogs_sample_training_dataset_dir.glob("*.jpg")), valid_percentage,
dogs_sample_validation_dataset_dir, move=False)
print("Done. Validation and sample sets created.")
except FileExistsError as e:
print("Error: Looks like data has already been prepared. Delete everything except the zip files to recreate.")
Explanation: Create validation and test sets
End of explanation
from glob import glob
def filenames_and_labels(path):
cat_filenames = np.array(glob("{}/cat/*.jpg".format(path)))
cat_labels = np.tile([1, 0], (len(cat_filenames), 1))
dog_filenames = np.array(glob("{}/dog/*.jpg".format(path)))
dog_labels = np.tile([0, 1], (len(dog_filenames), 1))
return np.concatenate([cat_filenames, dog_filenames]), np.concatenate([cat_labels, dog_labels])
import time
import tensorflow as tf
import tensorflow_image_utils as tiu
from vgg16 import Vgg16Model
def extract_features(*, sess, directory, output_filename, batch_size=32, augment=False, input_epochs=1):
filenames, labels = filenames_and_labels(directory)
filename_queue, label_queue = tf.train.slice_input_producer(
[
tf.convert_to_tensor(filenames, dtype=tf.string),
tf.convert_to_tensor(labels, dtype=tf.float32)
], num_epochs=input_epochs, shuffle=False)
image = tiu.load_image(filename_queue, size=(224, 224))
if augment:
image = tiu.distort_image(image)
image = tiu.vgg16_preprocess(image, shape=(224, 224, 3))
batched_data = tf.train.batch(
[image, label_queue, filename_queue],
batch_size=batch_size,
num_threads=4,
enqueue_many=False,
allow_smaller_final_batch=True,
capacity=3 * batch_size, )
inputs = tf.placeholder(tf.float32, shape=(None, 224, 224, 3), name="input")
model = Vgg16Model()
model.build(inputs)
sess.run([
tf.local_variables_initializer(),
tf.global_variables_initializer()
])
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess,coord=coord)
codes = []
num_unique_files = len(filenames)
num_files_to_process = num_unique_files * input_epochs
num_iterations = num_files_to_process // batch_size
if num_files_to_process % batch_size != 0:
num_iterations = num_iterations + 1
current_iteration = 0
tstart = time.perf_counter()
try:
while not coord.should_stop():
t0 = time.perf_counter()
batch_images, batch_labels, batch_filenames = sess.run(batched_data)
t1 = time.perf_counter()
print("\nIteration {}/{}:".format(current_iteration + 1, num_iterations))
print("\tFetching batch took {:.3f} seconds".format(t1-t0))
# flatten shape of maxpool5: (7, 7, 512) -> 7 * 7 * 512
flattened = tf.reshape(model.max_pool5, shape=(-1, 7 * 7 * 512))
features = sess.run(flattened, feed_dict={inputs: batch_images})
t2 = time.perf_counter()
print("\tExtracting features took {:.3f} seconds".format(t2-t1))
for i, batch_filename in enumerate(batch_filenames):
codes.append((batch_labels[i], batch_filename, features[i]))
t3 = time.perf_counter()
current_iteration = current_iteration + 1
print("\tProcessing {} images took {:.3f} seconds".format(len(batch_filenames), t3-t0))
except tf.errors.OutOfRangeError:
print("\nDone -- epoch limit reached")
finally:
coord.request_stop()
coord.join(threads)
np.save(output_filename, np.array(codes, dtype="object"))
print("Extracted to '{}' in {:.3f} seconds\n\n".format(output_filename ,time.perf_counter() - tstart))
with tf.Session(graph=tf.Graph()) as sess:
extract_features(sess=sess, directory=sample_validation_dataset_dir,
output_filename="sample_validation_codes.npy", input_epochs=2)
with tf.Session(graph=tf.Graph()) as sess:
extract_features(sess=sess, directory=sample_training_dataset_dir,
output_filename="sample_training_codes.npy",
augment=True, input_epochs=4)
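# Note (descriptive only): each row saved to the .npy files is an object triple of
# (one-hot label, source filename, flattened 7*7*512 VGG16 feature vector), which
# is exactly what the feature-loading code further below unpacks.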
Explanation: Checkpoint - Extract Features
End of explanation
from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model.signature_def_utils import predict_signature_def
from tensorflow.python.saved_model.tag_constants import SERVING
from tensorflow.python.saved_model.signature_constants import DEFAULT_SERVING_SIGNATURE_DEF_KEY
from tensorflow.python.saved_model.signature_constants import PREDICT_INPUTS
from tensorflow.python.saved_model.signature_constants import PREDICT_OUTPUTS
class TransferModel:
def build(self, *, input_size, num_hidden=1, hidden_layer_size=256, use_batchnorm=True, use_dropout=True):
with tf.name_scope("inputs"):
self.input = tf.placeholder(tf.float32, shape=(None, input_size), name="input")
self.is_training = tf.placeholder(tf.bool, name="is_training")
self.keep_prob = tf.placeholder(tf.float32, name="keep_probability")
self.learning_rate = tf.placeholder(tf.float32, name="learning_rate")
with tf.name_scope("targets"):
self.labels = tf.placeholder(tf.float32, shape=(None, 2), name="labels")
prev_size = input_size
next_input = self.input
for i in range(num_hidden):
with tf.variable_scope("hidden_layer_{}".format(i)):
hidden_weights = tf.Variable(
initial_value = tf.truncated_normal([prev_size, hidden_layer_size], mean=0.0, stddev=0.01),
dtype=tf.float32, name="hidden_weights"
)
hidden_bias = tf.Variable(
initial_value = tf.zeros(hidden_layer_size),
dtype=tf.float32,
name="hidden_bias"
)
hidden = tf.matmul(next_input, hidden_weights) + hidden_bias
if use_batchnorm:
hidden = tf.layers.batch_normalization(hidden, training=self.is_training)
hidden = tf.nn.relu(hidden, name="hidden_relu")
if use_dropout:
hidden = tf.nn.dropout(hidden, keep_prob=self.keep_prob, name="hidden_dropout")
tf.summary.histogram("hidden_weights_{}".format(i), hidden_weights)
tf.summary.histogram("hidden_bias_{}".format(i), hidden_bias)
next_input = hidden
prev_size = hidden_layer_size
with tf.name_scope("outputs"):
output_weights = tf.Variable(
initial_value=tf.truncated_normal(shape=(hidden_layer_size, 2), mean=0.0, stddev=0.01),
dtype=tf.float32, name="output_weights"
)
output_bias = tf.Variable(initial_value=tf.zeros(2), dtype=tf.float32, name="output_bias")
self.logits = tf.matmul(next_input, output_weights) + output_bias
self.predictions = tf.nn.softmax(self.logits, name="predictions")
tf.summary.histogram("output_weights", output_weights)
tf.summary.histogram("output_bias", output_bias)
tf.summary.histogram("predictions", self.predictions)
with tf.name_scope("cost"):
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=self.logits, labels=self.labels, name="cross_entropy")
self.cost = tf.reduce_mean(cross_entropy, name="cost")
tf.summary.scalar("cost", self.cost)
with tf.name_scope("train"):
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
correct_predictions = tf.equal(tf.argmax(self.predictions, 1), tf.argmax(self.labels, 1), name="correct_predictions")
self.accuracy = tf.reduce_mean(tf.cast(correct_predictions, tf.float32), name="accuracy")
self.optimizer = tf.train.AdamOptimizer(learning_rate=self.learning_rate).minimize(self.cost)
self.merged_summaries = tf.summary.merge_all()
def run_training(self, *, sess, fn_get_batches, num_epochs,
validation_images, validation_labels,
writer=None, keep_prob=0.5, batch_size=64,
learning_rate=0.01, accuracy_print_steps=100):
sess.run(tf.global_variables_initializer())
iteration = 0
for epoch in range(num_epochs):
for batch_train_images, batch_train_labels in fn_get_batches(batch_size):
train_acc, train_loss, _, p, summary = sess.run(
[self.accuracy, self.cost, self.optimizer, self.logits, self.merged_summaries],
feed_dict = {
self.input: batch_train_images,
self.labels: batch_train_labels,
self.keep_prob: keep_prob,
self.learning_rate: learning_rate,
self.is_training: True})
iteration = iteration + 1
if iteration % accuracy_print_steps == 0:
if not writer == None:
writer.add_summary(summary, iteration)
if iteration % accuracy_print_steps == 0:
val_acc = sess.run(self.accuracy, feed_dict ={
self.input: validation_images,
self.labels: validation_labels,
self.keep_prob: 1.,
self.is_training: False})
print("\tEpoch {}/{} Iteration {}, trainacc: {}, valacc: {}, loss: {}".format(epoch + 1, num_epochs, iteration, train_acc, val_acc, train_loss))
def save_model(self, *, sess, saved_model_path):
builder = saved_model_builder.SavedModelBuilder(saved_model_path)
builder.add_meta_graph_and_variables(
sess, [SERVING],
signature_def_map = {
DEFAULT_SERVING_SIGNATURE_DEF_KEY: predict_signature_def(
inputs = { PREDICT_INPUTS: self.input },
outputs = { PREDICT_OUTPUTS: self.predictions }
)
}
)
builder.save()
Explanation: Checkpoint - Transfer Learning Model
End of explanation
training_features = np.load("sample_training_codes.npy")
validation_features = np.load("sample_validation_codes.npy")
np.random.shuffle(training_features)
training_x = np.array(list(map(lambda row: row[2], training_features)))
training_y = np.array(list(map(lambda row: row[0], training_features)))
validation_x = np.array(list(map(lambda row: row[2], validation_features)))
validation_y = np.array(list(map(lambda row: row[0], validation_features)))
def get_batches(x, y, batch_size=32):
num_rows = y.shape[0]
num_batches = num_rows // batch_size
if num_rows % batch_size != 0:
num_batches = num_batches + 1
for batch in range(num_batches):
yield x[batch_size * batch: batch_size * (batch + 1)], y[batch_size * batch: batch_size * (batch + 1)]
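# Quick sanity check (illustrative only, not in the original notebook): pull one
# small batch and confirm the flattened VGG16 features (7*7*512 = 25088 values)
# line up with the one-hot labels.
check_x, check_y = next(get_batches(training_x, training_y, batch_size=4))
print(check_x.shape, check_y.shape)  # expected: (4, 25088) (4, 2)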
Explanation: Load labels and features
End of explanation
tf.reset_default_graph()
with tf.Session() as sess:
m = TransferModel()
m.build(input_size=7 * 7 * 512, num_hidden=1, hidden_layer_size=256, use_batchnorm=True, use_dropout=True)
m.run_training(
sess=sess, num_epochs=5, learning_rate=0.01, keep_prob=0.8, batch_size=64,
fn_get_batches=lambda batch_size: get_batches(training_x, training_y, batch_size),
validation_images=validation_x, validation_labels=validation_y)
Explanation: Run model with dropout and batchnorm
End of explanation
tf.reset_default_graph()
with tf.Session() as sess:
m = TransferModel()
m.build(input_size=7 * 7 * 512, num_hidden=1, hidden_layer_size=256, use_batchnorm=False, use_dropout=True)
m.run_training(
sess=sess, num_epochs=5, learning_rate=0.01, keep_prob=0.8, batch_size=64,
fn_get_batches=lambda batch_size: get_batches(training_x, training_y, batch_size),
validation_images=validation_x, validation_labels=validation_y)
Explanation: Looks like we can stop after 2 epochs. Doesn't get much better afterwards.
Run model with dropout only
End of explanation
tf.reset_default_graph()
with tf.Session() as sess:
m = TransferModel()
m.build(input_size=7 * 7 * 512, num_hidden=1, hidden_layer_size=256, use_batchnorm=False, use_dropout=True)
m.run_training(
sess=sess, num_epochs=5, learning_rate=0.001, keep_prob=0.8, batch_size=64,
fn_get_batches=lambda batch_size: get_batches(training_x, training_y, batch_size),
validation_images=validation_x, validation_labels=validation_y)
Explanation: Training accuracy jumps around a lot and is way lower than validation accuracy.
Either the learning rate is too low, or we're underfitting, either because of regularization (dropout) or because our model is not complex enough.
[email protected]:
```
Yes your assumption is true - although if you're underfitting due to reasons other than dropout (or other regularization techniques), you won't see this.
The key technique to avoiding underfitting is using a model with plenty of layers and parameters, and picking an appropriate architecture (e.g. CNN with batchnorm for images). Also picking appropriate learning rates.
Picking the output with the highest validation accuracy is generally a good approach.
```
Lower Learning Rate
End of explanation
tf.reset_default_graph()
with tf.Session() as sess:
m = TransferModel()
m.build(input_size=7 * 7 * 512, num_hidden=1, hidden_layer_size=256, use_batchnorm=True, use_dropout=False)
m.run_training(
sess=sess, num_epochs=5, learning_rate=0.01, keep_prob=1, batch_size=64,
fn_get_batches=lambda batch_size: get_batches(training_x, training_y, batch_size),
validation_images=validation_x, validation_labels=validation_y)
Explanation: Lowering the learning rate works wonders, looks like the model wasn't underfitting before.
Results look better than with batchnorm.
Run model with batchnorm only
End of explanation |
1,214 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 7</font>
Download
Step1: Missão 2
Step2: Teste da Solução | Python Code:
# Python language version
from platform import python_version
print('Python language version used in this Jupyter Notebook:', python_version())
Explanation: <font color='blue'>Data Science Academy - Python Fundamentos - Capítulo 7</font>
Download: http://github.com/dsacademybr
End of explanation
class SelectionSort(object):
def sort(self, data):
if data is None:
raise TypeError('Data cannot be None')
if len(data) < 2:
return data
for i in range(len(data) - 1):
min_index = i
for j in range(i + 1, len(data)):
if data[j] < data[min_index]:
min_index = j
if data[min_index] < data[i]:
data[i], data[min_index] = data[min_index], data[i]
return data
def sort_iterative_alt(self, data):
if data is None:
raise TypeError('Data cannot be None')
if len(data) < 2:
return data
for i in range(len(data) - 1):
self._swap(data, i, self._find_min_index(data, i))
return data
def sort_recursive(self, data):
if data is None:
raise TypeError('Data cannot be None')
if len(data) < 2:
return data
return self._sort_recursive(data, start=0)
def _sort_recursive(self, data, start):
if data is None:
return
if start < len(data) - 1:
self._swap(data, start, self._find_min_index(data, start))
self._sort_recursive(data, start + 1)
return data
def _find_min_index(self, data, start):
min_index = start
for i in range(start + 1, len(data)):
if data[i] < data[min_index]:
min_index = i
return min_index
def _swap(self, data, i, j):
if i != j:
data[i], data[j] = data[j], data[i]
return data
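# Quick illustration (not part of the assignment): the in-place sort returns the
# same list object, now ordered, e.g.
# SelectionSort().sort([5, 1, 7, 2]) -> [1, 2, 5, 7]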
Explanation: Missão 2 (Mission 2): Implement the "Selection sort" sorting algorithm.
Difficulty Level: High
Assumptions
Are duplicates allowed?
* Yes
Can we assume the input is valid?
* No
Can we assume it fits in memory?
* Yes
Test Cases
None -> Exception
[] -> []
One element -> [element]
Two or more elements
Algorithm
Animation from Wikipedia:
We can do this either recursively or iteratively. The iterative version will be more efficient, since it does not require the extra space overhead of the recursive calls.
For each element
* Check each element to the right to find the min
* If min < current element, swap
Solution
End of explanation
%%writefile missao4.py
from nose.tools import assert_equal, assert_raises
class TestSelectionSort(object):
def test_selection_sort(self, func):
print('None input')
assert_raises(TypeError, func, None)
print('Empty input')
assert_equal(func([]), [])
print('One element')
assert_equal(func([5]), [5])
print('Two or more elements')
data = [5, 1, 7, 2, 6, -3, 5, 7, -10]
assert_equal(func(data), sorted(data))
print('Your solution ran successfully! Congratulations!')
def main():
test = TestSelectionSort()
try:
selection_sort = SelectionSort()
test.test_selection_sort(selection_sort.sort)
except NameError:
pass
if __name__ == '__main__':
main()
%run -i missao4.py
Explanation: Teste da Solução (Testing the Solution)
End of explanation |
1,215 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load data from http
Step1: Load sales time-series data
Step2: Following Aarshay Jain over at Analytics Vidhya (see here) we implement a Rolling Mean, Standard Deviation + Dickey-Fuller test | Python Code:
# code written in py_3.0
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
import seaborn as sns
Explanation: Load data from http://media.wiley.com/product_ancillary/6X/11186614/DOWNLOAD/ch08.zip, SwordForecasting.xlsx
End of explanation
# find path to your SwordForecasting.xlsx
df_sales = pd.read_excel(open('C:/Users/craigrshenton/Desktop/Dropbox/excel_data_sci/ch08/SwordForecasting.xlsm','rb'), sheetname=0)
df_sales = df_sales.iloc[0:36, 0:2]
df_sales.rename(columns={'t':'Time'}, inplace=True)
df_sales.head()
df_sales.Time = pd.date_range('2010-1', periods=len(df_sales.Time), freq='M') # 'Time' is now in time-series format
df_sales = df_sales.set_index('Time') # set Time as Series index
sns.set(style="darkgrid", context="notebook", font_scale=0.9, rc={"lines.linewidth": 1.5}) # make plots look nice
fig, ax = plt.subplots(1)
ax.plot(df_sales)
plt.ylabel('Demand')
plt.xlabel('Date')
# rotate and align the tick labels so they look better
fig.autofmt_xdate()
# use a more precise date string for the x axis locations
ax.fmt_xdata = mdates.DateFormatter('%Y-%m-%d')
plt.show()
Explanation: Load sales time-series data
End of explanation
from statsmodels.tsa.stattools import adfuller
def test_stationarity(timeSeries):
fig, ax = plt.subplots(1)
ax.plot(timeSeries, '-.', label='raw data')
ax.plot(timeSeries.rolling(12).mean(), label='moving average (year)')
ax.plot(timeSeries.expanding().mean(), label='expanding')
ax.plot(timeSeries.ewm(alpha=0.03).mean(), label='EWMA ($\\alpha=.03$)')
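# A rolling standard deviation could be overlaid as well, to match the
# "Standard Deviation" part of the write-up (illustrative, optional):
# ax.plot(timeSeries.rolling(12).std(), label='rolling std (year)')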
# rotate and align the tick labels so they look better
fig.autofmt_xdate()
plt.ylabel('Demand')
plt.xlabel('Date')
plt.legend(bbox_to_anchor=(1.35, .5))
plt.show()
# perform Dickey-Fuller test:
print('Results of Dickey-Fuller Test:')
dftest = adfuller(timeSeries.iloc[:,0], autolag='AIC')
dfoutput = pd.Series(dftest[0:4], index=['Test Statistic','p-value','#Lags Used','Number of Observations Used'])
for key,value in dftest[4].items():
dfoutput['Critical Value (%s)'%key] = value
print(dfoutput)
test_stationarity(df_sales)
def tsplot(y, lags=None, figsize=(10, 8)):
fig = plt.figure(figsize=figsize)
layout = (2, 2)
ts_ax = plt.subplot2grid(layout, (0, 0), colspan=2)
acf_ax = plt.subplot2grid(layout, (1, 0))
pacf_ax = plt.subplot2grid(layout, (1, 1))
y.plot(ax=ts_ax)
smt.graphics.plot_acf(y, lags=lags, ax=acf_ax)
smt.graphics.plot_pacf(y, lags=lags, ax=pacf_ax)
[ax.set_xlim(1.5) for ax in [acf_ax, pacf_ax]]
sns.despine()
plt.tight_layout()
return ts_ax, acf_ax, pacf_ax
import statsmodels.formula.api as smf
import statsmodels.tsa.api as smt
import statsmodels.api as sm
mod = smt.ARIMA(df_sales, order=(1, 1, 1))
res = mod.fit()
pred_dy = res.get_prediction(start=min(df_sales.index), dynamic=min(df_sales.index))
pred_dy_ci = pred_dy.conf_int()
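# Illustrative next step (not in the original notebook): overlay the in-sample
# prediction on the observed demand, assuming the result object exposes
# `predicted_mean` as in recent statsmodels releases.
# ax = df_sales.plot(label='observed')
# pred_dy.predicted_mean.plot(ax=ax, label='one-step-ahead fit')
# ax.fill_between(pred_dy_ci.index, pred_dy_ci.iloc[:, 0], pred_dy_ci.iloc[:, 1], alpha=0.2)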
Explanation: Following Aarshay Jain over at Analytics Vidhya (see here) we implement a Rolling Mean, Standard Deviation + Dickey-Fuller test
End of explanation |
1,216 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div style='background-image
Step1: 1. Chebyshev derivative method
Exercise
Define a Python function called "get_cheby_matrix(nx)" that initializes the Chebyshev derivative matrix $D_{ij}$, then call this function and display the Chebyshev derivative matrix.
Step2: 2. Initialization of setup
Step3: 3. Source Initialization
Step4: 4. Time Extrapolation
Now we time extrapolate using the previously defined get_cheby_matrix(nx) method to call the differentiation matrix. The discrete values of the numerical simulation are indicated by dots in the animation, they represent the Chebyshev collocation points. Observe how the wavefield near the domain center is less dense than towards the boundaries. | Python Code:
# This is a configuration step for the exercise. Please run it before calculating the derivative!
import numpy as np
import matplotlib.pyplot as plt
from ricker import ricker
# Show the plots in the Notebook.
plt.switch_backend("nbagg")
Explanation: <div style='background-image: url("../../share/images/header.svg") ; padding: 0px ; background-size: cover ; border-radius: 5px ; height: 250px'>
<div style="float: right ; margin: 50px ; padding: 20px ; background: rgba(255 , 255 , 255 , 0.7) ; width: 50% ; height: 150px">
<div style="position: relative ; top: 50% ; transform: translatey(-50%)">
<div style="font-size: xx-large ; font-weight: 900 ; color: rgba(0 , 0 , 0 , 0.8) ; line-height: 100%">Computational Seismology</div>
<div style="font-size: large ; padding-top: 20px ; color: rgba(0 , 0 , 0 , 0.5)">The Chebyshev Pseudospectral Method - Elastic Waves in 1D</div>
</div>
</div>
</div>
Seismo-Live: http://seismo-live.org
Authors:
David Vargas (@dvargas)
Heiner Igel (@heinerigel)
Basic Equations
This notebook presents the numerical solution for the 1D elastic wave equation using the Chebyshev Pseudospectral Method. We depart from the equation
\begin{equation}
\rho(x) \partial_t^2 u(x,t) = \partial_x (\mu(x) \partial_x u(x,t)) + f(x,t),
\end{equation}
and use a standard 3-point finite-difference operator to approximate the time derivatives. Then, the displacement field is extrapolated as
\begin{equation}
\rho_i\frac{u_{i}^{j+1} - 2u_{i}^{j} + u_{i}^{j-1}}{dt^2}= \partial_x (\mu(x) \partial_x u(x,t))_{i}^{j} + f_{i}^{j}
\end{equation}
An alternative way of performing space derivatives of a function defined on the Chebyshev collocation points is to define a derivative matrix $D_{ij}$
\begin{equation}
D_{ij} =
\begin{cases}
-\frac{2 N^2 + 1}{6} \hspace{1.5cm} \text{for i = j = N}\\
-\frac{1}{2} \frac{x_i}{1-x_i^2} \hspace{1.5cm} \text{for i = j = 1,2,...,N-1}\\
\frac{c_i}{c_j} \frac{(-1)^{i+j}}{x_i - x_j} \hspace{1.5cm} \text{for i $\neq$ j = 0,1,...,N}
\end{cases}
\end{equation}
where $N+1$ is the number of Chebyshev collocation points $ \ x_i = cos(i\pi / N)$, $ \ i=0,...,N$ and the $c_i$ are given as
$$ c_i = 2 \hspace{1.5cm} \text{for i = 0 or N} $$
$$ c_i = 1 \hspace{1.5cm} \text{otherwise} $$
This differentiation matrix allows us to write the derivative of the function $f_i = f(x_i)$ (possibly depending on time) simply as
$$\partial_x u_i = D_{ij} \ u_j$$
where the right-hand side is a matrix-vector product, and the Einstein summation convention applies.
End of explanation
def get_cheby_matrix(nx):
cx = np.zeros(nx+1)
x = np.zeros(nx+1)
for ix in range(0,nx+1):
x[ix] = np.cos(np.pi * ix / nx)
cx[0] = 2.
cx[nx] = 2.
cx[1:nx] = 1.
D = np.zeros((nx+1,nx+1))
for i in range(0, nx+1):
for j in range(0, nx+1):
if i==j and i!=0 and i!=nx:
D[i,i]=-x[i]/(2.0*(1.0-x[i]*x[i]))
else:
D[i,j]=(cx[i]*(-1)**(i+j))/(cx[j]*(x[i]-x[j]))
D[0,0] = (2.*nx**2+1.)/6.
D[nx,nx] = -D[0,0]
return D
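# Quick check (illustrative only, not part of the original exercise):
# differentiating f(x) = x**2 on the Chebyshev points should give ~2x.
_xk = np.cos(np.pi * np.arange(0, 51) / 50)
print("D_ij check:", np.allclose(get_cheby_matrix(50) @ (_xk**2), 2 * _xk, atol=1e-8))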
# Call the chebyshev differentiation matrix
# ---------------------------------------------------------------
D_ij = get_cheby_matrix(50)
# ---------------------------------------------------------------
# Display Differentiation Matrix
# ---------------------------------------------------------------
plt.imshow(D_ij, interpolation="bicubic", cmap="gray")
plt.title('Differentiation Matrix $D_{ij}$')
plt.axis("off")
plt.tight_layout()
plt.show()
Explanation: 1. Chebyshev derivative method
Exercise
Define a Python function called "get_cheby_matrix(nx)" that initializes the Chebyshev derivative matrix $D_{ij}$, then call this function and display the Chebyshev derivative matrix.
End of explanation
# Basic parameters
# ---------------------------------------------------------------
#nt = 5000 # number of time steps
tmax = 0.0006
eps = 1.4 # stability limit
isx = 100
lw = 0.7
ft = 10
f0 = 100000 # dominant frequency
iplot = 20 # Snapshot frequency
# material parameters
rho = 2500.
c = 3000.
mu = rho*c**2
# space domain
nx = 100 # number of grid points in x
xs = np.floor(nx/2) # source location
xr = np.floor(nx*0.8)
x = np.zeros(nx+1)
# initialization of pressure fields
p = np.zeros(nx+1)
pnew = np.zeros(nx+1)
pold = np.zeros(nx+1)
d2p = np.zeros(nx+1)
for ix in range(0,nx+1):
x[ix] = np.cos(ix * np.pi / nx)
dxmin = min(abs(np.diff(x)))
dxmax = max(abs(np.diff(x)))
dt = eps*dxmin/c # calculate time step from stability criterion
nt = int(round(tmax/dt))
Explanation: 2. Initialization of setup
End of explanation
# source time function
# ---------------------------------------------------------------
t = np.arange(1, nt+1)*dt # initialize time axis
T0 = 1./f0
tmp = ricker(dt, T0)
isrc = tmp
tmp = np.diff(tmp)
src = np.zeros(nt)
src[0:np.size(tmp)] = tmp
#spatial source function
# ---------------------------------------------------------------
sigma = 1.5*dxmax
x0 = x[int(xs)]
sg = np.exp(-1/sigma**2*(x-x0)**2)
sg = sg/max(sg)
Explanation: 3. Source Initialization
End of explanation
# Initialize animated plot
# ---------------------------------------------------------------
plt.figure(figsize=(10,6))
line = plt.plot(x, p, 'k.', lw=2)
plt.title('Chebyshev Method - 1D Elastic wave', size=16)
plt.xlabel(' x(m)', size=14)
plt.ylabel(' Amplitude ', size=14)
plt.ion() # set interactive mode
plt.show()
# ---------------------------------------------------------------
# Time extrapolation
# ---------------------------------------------------------------
# Differentiation matrix
D = get_cheby_matrix(nx)
for it in range(nt):
# Space derivatives
dp = np.dot(D, p.T)
dp = mu/rho * dp
dp = D @ dp
# Time extrapolation
pnew = 2*p - pold + np.transpose(dp) * dt**2
# Source injection
pnew = pnew + sg*src[it]*dt**2/rho
# Remapping
pold, p = p, pnew
p[0] = 0; p[nx] = 0 # set boundaries pressure free
# --------------------------------------
# Animation plot. Display solution
if not it % iplot:
for l in line:
l.remove()
del l
# --------------------------------------
# Display lines
line = plt.plot(x, p, 'k.', lw=1.5)
plt.gcf().canvas.draw()
Explanation: 4. Time Extrapolation
Now we time extrapolate using the previously defined get_cheby_matrix(nx) method to call the differentiation matrix. The discrete values of the numerical simulation are indicated by dots in the animation, they represent the Chebyshev collocation points. Observe how the wavefield near the domain center is less dense than towards the boundaries.
End of explanation |
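For reference, the update rule implied by the extrapolation loop above can be sketched as (assuming the material parameters $\mu$ and $\rho$ are constant, as they are in this setup): $p_j^{n+1} = 2p_j^{n} - p_j^{n-1} + \Delta t^2 \left[ \tfrac{\mu}{\rho} D\left(D\,p^{n}\right) \right]_j + \tfrac{\Delta t^2}{\rho}\, s_j\, f^{n}$, where $D$ is the Chebyshev differentiation matrix, $s_j$ is the spatial source function sg, and $f^{n}$ is the source time function src[it].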
1,217 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fundamentals of text processing
Content in this section is adapted from Ramalho (2015) and Lutz (2013).
The most basic characters in a string are the ASCII characters. The string library in Python helpfully has these all listed out.
Step1: There's also punctuation and digits
Step2: A string is basically a list of these mapping numbers. We can use some other functions and methods to analyze a string like we would with other iterables (like a list).
The len of a string returns the number of characters in it.
Step3: Two (or more) strings can be combined by adding them together.
Step4: Every character is mapped to an underlying integer code.
Step5: We can also use chr to do the reverse mapping
Step6: When you're doing comparisons, you're basically comparing these numbers to each other.
Step7: Here's the first 128 characters. Some of these early characters aren't single characters, but are control characters or whitespace characters.
Step8: You'll notice that this ASCII character mapping doesn't include characters that have accents.
Step9: This last character é also exists at a specific location.
Step10: However, the way that Python performs this mapping is not the same for computers everywhere else in the world. If we use the popular UTF-8 standard to encode this string into generic byte-level representation, we get something interesting
Step11: The length of this b string somehow got a new character in it compared to the original s string.
Step12: If we try to discover where these characters live and then map them back, we run into problems.
Step13: We can convert from this byte-level representation back into Unicode with the .decode method.
Step14: Using a different decoding standard like CP1252 returns something much more grotesque without throwing any errors.
Step15: There are many, many kinds of character encodings for representing non-ASCII text.
This cartoon pretty much explains why there are so many standards rather than a single standard
Step16: You will almost certainly encounter string encoding problems whenever you work with text data. Let's look at how quickly things can go wrong trying to decode a string when we don't know the standard.
Some standards map the \xe9 byte-level representation to the é character we intended, while other standards have nothing at that byte location, and still others map that byte location to a different character.
Step18: How do you discover the proper encoding given an observed byte sequence? You can't. But you can make some informed guesses by using a library like chardet to find clues based on relative frequencies and presence of byte-order marks.
What is your system's default? More like what are the defaults. The situation on PCs is generally a hot mess with Microsoft's standards like CP1252 competing with international standards like UTF-8, but Macs generally try to keep everything in UTF-8.
Step19: Whenever you encounter problems with character encoding issues and you cannot discover the original encoding (utf8, latin1, cp1252 are always good ones to start with), you can try to ignore or replace the characters.
Step20: Unfortunately, only tears, fist-shaking, and hair-pulling will give you the necessary experience to handle the inevitability of character encoding issues when working with textual data.
Loading Wikipedia biographies of Presidents
Load the data from disk into memory. See Appendix 1 at the end of the notebook for more details.
Step21: Confirm there are 44 presidents (shaking fist at Grover Cleveland, the 22nd and 24th POTUS) in the dictionary.
Step22: What's an example of a single biography? We access the dictionary by passing the key (President's name), which returns the value (the text of the biography).
Step23: We are going to discuss how to process large text documents using the Natural Language Toolkit library.
We first have to download some data corpora and libraries to use NLTK. Running this block of code should pop up a new window with four blue tabs
Step24: An important part of processing natural language data is normalizing this data by removing variations in the text that the computer naively thinks are different entities but humans recognize as being the same. There are several steps to this including case adjustment (House to house), tokenizing (finding individual words), and stemming/lemmatization ("tried" to "try").
This figure is a nice summary of the process of pre-processing your text data. The HTML to ASCII data step has already been done with the get_page_content function in the Appendix.
In the case of case adjustment, it turns out several of the different "words" in the corpus are actually the same, but because they have different capitalizations, they're counted as different unique words.
Counting words
How many words are in President Fillmore's article?
A biography can be represented as a single large string (as it is now), but this huge string is not very helpful for analyzing features of the text until the string is segmented into "tokens", which include words but also hyphenated phrases or contractions ("aren't", "doesn't", etc.)
There are a variety of different segmentation/tokenization strategies (with different tradeoffs) and corresponding methods implemented in NLTK.
We could employ a naive approach of splitting on spaces. This turns out to create words out of happenstance punctuation.
Step25: We could use regular expressions to split on repeated whitespaces.
Step26: It's clear we want to separate words based on other punctuation as well so that "Darkness," and "Darkness" aren't treated like separate words. Again, NLTK has a variety of methods for doing word tokenization more intelligently.
word_tokenize is probably the easiest-to-recommend
Step27: But there are others like wordpunct_tokenize that make different assumptions about the language.
Step28: Or Toktok is still another word tokenizer.
Step29: There are a variety of strategies for splitting a text document up into its constituent words, each making different assumptions about word boundaries, which results in different counts of the resulting tokens.
Step30: Word cases
Remember that strings of different cases (capitalizations) are treated as different words
Step31: Stop words
English, like many languages, repeats many words in typical language that don't always convey a lot of information by themselves. When we do text processing, we should make sure to remove these "stop words".
Step32: NLTK helpfully has a list of stopwords in different languages.
Step33: We can also use string module's "punctuation" attribute as well.
Step34: Let's combine them to get a list of all_stopwords that we can ignore.
Step35: We can use a simple loop (or list comprehension) to exclude the words in this stopword list from analysis while also converting each word to lowercase. This is not perfect, but an improvement over what we had before.
Step36: The distribution of word frequencies, even after stripping out stopwords, follows a remarkably strong pattern. Most terms are used infrequently (upper-left) but a handful of terms are used repeatedly! Zipf's law states
Step37: Lemmatization
Lemmatization (and the related concept of stemming) are methods for dealing with conjugated words. Words like "ate" or "eats" are counted as distinct from "eat", although semantically they are similar and should likely be grouped together. Where stemming just removes common suffixes and prefixes, sometimes resulting in mangled words, lemmatization attempts to return the root word. However, lemmatization can be extremely expensive computationally, which does not make it a good candidate for large corpora.
The get_wordnet_pos and lemmatizer functions below work with each other to lemmatize a word to its root. This involves attempting to discover the part-of-speech (POS) for each word and passing this POS to NLTK's lemmatize function, ultimately returning the root word (if it exists in the "wordnet" corpus).
Step38: Loop through all the tokens in wpt_lowered_no_stopwords, applying the lemmatizer function to each. Then inspect 25 examples of words where the lemmatizer changed the word length.
Step40: Pulling the pieces all together
We can combine all this functionality together into a single function text_preprocessor that takes a large string of text and returns a list of cleaned tokens, stripped of stopwords, lowered, and lemmatized.
Step41: We can apply this function to every presidential biography (this may take a minute or so) and write the resulting list of cleaned tokens to the "potus_wiki_bios_cleaned.json" file. We'll use this file in the next lecture as well.
Step42: Comparative descriptive statistics
Now that we have cleaned biographies for each president, we can perform some basic analyses of the text. Which presidents have the longest biographies?
Step43: How many unique words?
Step44: The lexical diversity is the ratio of unique words to total words. Values closer to 0 indicate the presence of repeated words (low diversity) and values closer to 1 indicate words used only once (high diversity).
Step45: We can count how often a word occurs in each biography.
Step46: Which words occur the most across presidential biographies?
Step50: Case study
Step51: Use get_category_members to get all the immediate members (depth=0) of "Category:Presidents of the United States."
Step52: Loop through the presidents list from the fourth entry onward and get each president's biography using get_page_content. Store the results in the presidents_wiki_bios dictionary.
Step53: Save the data to a JSON file.
Step54: Appendix 2
Step55: The hard way to get a list of the company names out is parsing the HTML table. We
Step56: The easy way is to use pandas's read_html function to parse the table into a DataFrame and access the "Security" (second) column.
Step57: Now we can use get_page_content to get the content of each company's page and add it to the sp500_articles dictionary.
Step58: Save the data to a JSON file. | Python Code:
string.ascii_letters
Explanation: Fundamentals of text processing
Content in this section is adapted from Ramalho (2015) and Lutz (2013).
The most basic characters in a string are the ASCII characters. The string library in Python helpfully has these all listed out.
End of explanation
string.punctuation
string.digits
Explanation: There's also punctuation and digits
End of explanation
len('Brian')
Explanation: A string is basically a list of these mapping numbers. We can use some other functions and methods to analyze a string like we would with other iterables (like a list).
The len of a string returns the number of characters in it.
End of explanation
'Brian' + ' ' + 'Keegan'
Explanation: Two (or more) strings can be combined by adding them together.
End of explanation
ord('B')
ord('b')
Explanation: Every character is mapped to an underlying integer code.
End of explanation
chr(66)
Explanation: We can also use chr to do the reverse mapping: finding what character exists at a particular numeric value.
End of explanation
'b' == 'B'
Explanation: When you're doing comparisons, you're basically comparing these numbers to each other.
End of explanation
[(i,chr(i)) for i in range(128)]
Explanation: Here's the first 128 characters. Some of these early characters aren't single characters, but are control characters or whitespace characters.
End of explanation
s = 'Beyoncé'
Explanation: You'll notice that this ASCII character mapping doesn't include characters that have accents.
End of explanation
ord('é')
chr(233)
Explanation: This last character é also exists at a specific location.
End of explanation
b = s.encode('utf8')
b
Explanation: However, the way that Python performs this mapping is not the same for computers everywhere else in the world. If we use the popular UTF-8 standard to encode this string into generic byte-level representation, we get something interesting:
End of explanation
print(s,len(s))
print(b,len(b))
Explanation: The length of this b string somehow got a new character in it compared to the original s string.
End of explanation
ord(b'\xc3'), ord(b'\xa9')
chr(195), chr(169)
Explanation: If we try to discover where these characters live and then map them back, we run into problems.
End of explanation
b.decode('utf8')
ord(b'\xc3\xa9'.decode('utf8')), chr(233)
Explanation: We can convert from this byte-level representation back into Unicode with the .decode method.
End of explanation
b.decode('cp1252')
Explanation: Using a different decoding standard like CP1252 returns something much more grotesque without throwing any errors.
End of explanation
for codec in ['latin1','utf8','cp437','gb2312','utf16']:
print(codec.rjust(10),s.encode(codec), sep=' = ')
Explanation: There are many, many kinds of character encodings for representing non-ASCII text.
This cartoon pretty much explains why there are so many standards rather than a single standard:
Latin-1: the basis for many encodings
CP-1252: a common default encoding in Microsoft products similar to Latin-1
UTF-8: one of the most widely adopted and compatible - use it wherever possible
CP-437: used by the original IBM PC (predates latin1) but this old zombie is still lurking
GB-2312: implemented to support Chinese & Japanese characters, Greek & Cyrillic alphabets
UTF-16: treats everyone equally poorly, here there also be emojis
Other resources on why Unicode is what it is by Ned Batchelder, this tutorial by Esther Nam and Travis Fischer, or this Unicode tutorial in the docs.
End of explanation
montreal_s = b'Montr\xe9al'
for codec in ['cp437','cp1252','latin1','gb2312','iso8859_7','koi8_r','utf8','utf16']:
print(codec.rjust(10),montreal_s.decode(codec,errors='replace'),sep=' = ')
Explanation: You will almost certainly encounter string encoding problems whenever you work with text data. Let's look at how quickly things can go wrong trying to decode a string when we don't know the standard.
Some standards map the \xe9 byte-level representation to the é character we intended, while other standards have nothing at that byte location, and still others map that byte location to a different character.
End of explanation
import sys, locale
expressions = """
locale.getpreferredencoding()
my_file.encoding
sys.stdout.encoding
sys.stdin.encoding
sys.stderr.encoding
sys.getdefaultencoding()
sys.getfilesystemencoding()
"""
my_file = open('dummy', 'w')
for expression in expressions.split():
value = eval(expression)
print(expression.rjust(30), '=', repr(value))
Explanation: How do you discover the proper encoding given an observed byte sequence? You can't. But you can make some informed guesses by using a library like chardet to find clues based on relative frequencies and presence of byte-order marks.
What is your system's default? More like what are the defaults. The situation on PCs is generally a hot mess with Microsoft's standards like CP1252 competing with international standards like UTF-8, but Macs generally try to keep everything in UTF-8.
End of explanation
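How such a guess might look in practice (a rough sketch; this assumes the optional third-party chardet package is installed, and the guess can be unreliable for short inputs):
import chardet
mystery = b'Montr\xe9al est une ville du Qu\xe9bec'
print(chardet.detect(mystery))  # returns a dict with keys like 'encoding' and 'confidence'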
montreal_s.decode('utf8')
for error_handling in ['ignore','replace']:
print(error_handling,montreal_s.decode('utf8',errors=error_handling),sep='\t')
Explanation: Whenever you encounter problems with character encoding issues and you cannot discover the original encoding (utf8, latin1, cp1252 are always good ones to start with), you can try to ignore or replace the characters.
End of explanation
with open('potus_wiki_bios.json','r') as f:
bios = json.load(f)
Explanation: Unfortunately, only tears, fist-shaking, and hair-pulling will give you the necessary experience to handle the inevitability of character encoding issues when working with textual data.
Loading Wikipedia biographies of Presidents
Load the data from disk into memory. See Appendix 1 at the end of the notebook for more details.
End of explanation
print("There are {0} biographies of presidents.".format(len(bios)))
Explanation: Confirm there are 44 presidents (shaking fist at Grover Cleveland, the 22nd and 24th POTUS) in the dictionary.
End of explanation
example = bios['Grover Cleveland']
print(example)
Explanation: What's an example of a single biography? We access the dictionary by passing the key (President's name), which returns the value (the text of the biography).
End of explanation
# Download a specific lexicon for the sentiment analysis in the next lecture
nltk.download('vader_lexicon')
# Opens the interface to download all the other corpora
nltk.download()
Explanation: We are going to discuss how to process large text documents using the Natural Language Toolkit library.
We first have to download some data corpora and libraries to use NLTK. Running this block of code should pop up a new window with four blue tabs: Collections, Corpora, Models, All Packages. Under Collections, Select the entry with "book" in the Identifier column and select download. Once the status "Finished downloading collection 'book'." prints in the grey bar at the bottom, you can close this pop-up.
You should only need to do this next step once for each computer you're using NLTK.
End of explanation
example_ws_tokens = example.split(' ')
print("There are {0:,} words when splitting on white spaces.".format(len(example_ws_tokens)))
example_ws_tokens[:25]
Explanation: An important part of processing natural language data is normalizing this data by removing variations in the text that the computer naively thinks are different entities but humans recognize as being the same. There are several steps to this including case adjustment (House to house), tokenizing (finding individual words), and stemming/lemmatization ("tried" to "try").
This figure is a nice summary of the process of pre-processing your text data. The HTML to ASCII data step has already been done with the get_page_content function in the Appendix.
In the case of case adjustment, it turns out several of the different "words" in the corpus are actually the same, but because they have different capitalizations, they're counted as different unique words.
Counting words
How many words are in President Fillmore's article?
A biography can be represented as a single large string (as it is now), but this huge string is not very helpful for analyzing features of the text until the string is segmented into "tokens", which include words but also hyphenated phrases or contractions ("aren't", "doesn't", etc.)
There are a variety of different segmentation/tokenization strategies (with different tradeoffs) and corresponding methods implemented in NLTK.
We could employ a naive approach of splitting on spaces. This turns out to create words out of happenstance punctuation.
End of explanation
example_re_tokens = re.split(r'\s+',example)
print("There are {0:,} words when splitting on white spaces with regular expressions.".format(len(example_re_tokens)))
example_re_tokens[0:25]
Explanation: We could use regular expressions to split on repeated whitespaces.
End of explanation
example_wt_tokens = nltk.word_tokenize(example)
print("There are {0:,} words when tokenizing with word_tokenize.".format(len(example_wt_tokens)))
example_wt_tokens[:25]
Explanation: It's clear we want to separate words based on other punctuation as well so that "Darkness," and "Darkness" aren't treated like separate words. Again, NLTK has a variety of methods for doing word tokenization more intelligently.
word_tokenize is probably the easiest-to-recommend
End of explanation
example_wpt_tokens = nltk.wordpunct_tokenize(example)
print("There are {0:,} words when tokenizing with wordpunct_tokenize.".format(len(example_wpt_tokens)))
example_wpt_tokens[:25]
Explanation: But there are others like wordpunct_tokenize that make different assumptions about the language.
End of explanation
toktok = nltk.ToktokTokenizer()
example_ttt_tokens = toktok.tokenize(example)
print("There are {0:,} words when tokenizing with TokTok.".format(len(example_ttt_tokens)))
example_ttt_tokens[:25]
Explanation: Or Toktok is still another word tokenizer.
End of explanation
for name,tokenlist in zip(['space_split','re_tokenizer','word_tokenizer','wordpunct_tokenizer','toktok_tokenizer'],[example_ws_tokens,example_re_tokens,example_wt_tokens,example_wpt_tokens,example_ttt_tokens]):
print("{0:>20}: {1:,} total tokens, {2:,} unique tokens".format(name,len(tokenlist),len(set(tokenlist))))
Explanation: There are a variety of strategies for splitting a text document up into its constituent words, each making different assumptions about word boundaries, which results in different counts of the resulting tokens.
End of explanation
example_wpt_lowered = [token.lower() for token in example_wpt_tokens]
unique_wpt = len(set(example_wpt_tokens))
unique_lowered_wpt = len(set(example_wpt_lowered))
difference = unique_wpt - unique_lowered_wpt
print("There are {0:,} unique words in example before lowering and {1:,} after lowering,\na difference of {2} unique tokens.".format(unique_wpt,unique_lowered_wpt,difference))
Explanation: Word cases
Remember that strings of different cases (capitalizations) are treated as different words: "young" and "Young" are not the same. An important part of text processing is to remove un-needed variation, and mixed cases are variation we generally don't care about.
End of explanation
nltk.FreqDist(example_wpt_lowered).most_common(25)
Explanation: Stop words
English, like many languages, repeats many words in typical language that don't always convey a lot of information by themselves. When we do text processing, we should make sure to remove these "stop words".
End of explanation
english_stopwords = nltk.corpus.stopwords.words('english')
english_stopwords[:10]
Explanation: NLTK helpfully has a list of stopwords in different languages.
End of explanation
list(string.punctuation)[:10]
Explanation: We can also use string module's "punctuation" attribute as well.
End of explanation
all_stopwords = english_stopwords + list(string.punctuation) + ['–']
Explanation: Let's combine them to get a list of all_stopwords that we can ignore.
End of explanation
wpt_lowered_no_stopwords = []
for word in example_wpt_tokens:
if word.lower() not in all_stopwords:
wpt_lowered_no_stopwords.append(word.lower())
fdist_wpt_lowered_no_stopwords = nltk.FreqDist(wpt_lowered_no_stopwords)
fdist_wpt_lowered_no_stopwords.most_common(25)
Explanation: We can use a simple loop (or list comprehension) to exclude the words in this stopword list from analysis while also converting each word to lowercase. This is not perfect, but an improvement over what we had before.
End of explanation
freq_counter = Counter(fdist_wpt_lowered_no_stopwords.values())
f,ax = plt.subplots(1,1)
ax.scatter(x=list(freq_counter.keys()),y=list(freq_counter.values()))
ax.set_xscale('log')
ax.set_yscale('log')
ax.set_xlabel('Term frequency')
ax.set_ylabel('Number of terms')
Explanation: The distribution of word frequencies, even after stripping out stopwords, follows a remarkably strong pattern. Most terms are used infrequently (upper-left) but a handful of terms are used repeatedly! Zipf's law states:
"the frequency of any word is inversely proportional to its rank in the frequency table. Thus the most frequent word will occur approximately twice as often as the second most frequent word, three times as often as the third most frequent word, etc."
End of explanation
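As a quick, illustrative cross-check of Zipf's law (assuming the frequency distribution and matplotlib's plt from the cells above are still in scope), we can plot term frequency against rank on log-log axes, where Zipf's law predicts an approximately straight line:
zipf_freqs = sorted(fdist_wpt_lowered_no_stopwords.values(), reverse=True)
zipf_ranks = range(1, len(zipf_freqs) + 1)
plt.loglog(zipf_ranks, zipf_freqs, '.')
plt.xlabel('Rank of term')
plt.ylabel('Term frequency')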
from nltk.corpus import wordnet
from nltk.stem import WordNetLemmatizer
wnl = WordNetLemmatizer()
def get_wordnet_pos(treebank_tag):
if treebank_tag.startswith('J'):
return wordnet.ADJ
elif treebank_tag.startswith('V'):
return wordnet.VERB
elif treebank_tag.startswith('N'):
return wordnet.NOUN
elif treebank_tag.startswith('R'):
return wordnet.ADV
else:
return wordnet.NOUN
def lemmatizer(token):
token,tb_pos = nltk.pos_tag([token])[0]
pos = get_wordnet_pos(tb_pos)
lemma = wnl.lemmatize(token,pos)
return lemma
Explanation: Lemmatization
Lemmatization (and the related concept of stemming) are methods for dealing with conjugated words. Words like "ate" or "eats" are counted as distinct from "eat", although semantically they are similar and should likely be grouped together. Where stemming just removes common suffixes and prefixes, sometimes resulting in mangled words, lemmatization attempts to return the root word. However, lemmatization can be extremely expensive computationally, which does not make it a good candidate for large corpora.
The get_wordnet_pos and lemmatizer functions below work with each other to lemmatize a word to its root. This involves attempting to discover the part-of-speech (POS) for each word and passing this POS to NLTK's lemmatize function, ultimately returning the root word (if it exists in the "wordnet" corpus).
End of explanation
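A quick, illustrative spot-check of the lemmatizer defined above (the exact outputs depend on NLTK's POS tagger and the wordnet corpus):
[lemmatizer(w) for w in ['presidents', 'running', 'elected', 'parties']]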
wpt_lemmatized = [lemmatizer(t) for t in wpt_lowered_no_stopwords]
[(i,j) for (i,j) in list(zip(wpt_lowered_no_stopwords,wpt_lemmatized)) if len(i) != len(j)][:25]
Explanation: Loop through all the tokens in wpt_lowered_no_stopwords, applying the lemmatizer function to each. Then inspect 25 examples of words where the lemmatizer changed the word length.
End of explanation
def text_preprocessor(text):
"""Takes a large string (document) and returns a list of cleaned tokens"""
tokens = nltk.wordpunct_tokenize(text)
clean_tokens = []
for t in tokens:
if t.lower() not in all_stopwords and len(t) > 2:
clean_tokens.append(lemmatizer(t.lower()))
return clean_tokens
Explanation: Pulling the pieces all together
We can combine all this functionality together into a single function text_preprocessor that takes a large string of text and returns a list of cleaned tokens, stripped of stopwords, lowered, and lemmatized.
End of explanation
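As a quick sanity check, we might try the function on a short made-up sentence (illustrative only; the exact tokens returned depend on the tokenizer, stopword list, and lemmatizer defined above):
text_preprocessor("The cats were running quickly through the president's gardens.")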
# Clean each bio
cleaned_bios = {}
for bio_name,bio_text in bios.items():
cleaned_bios[bio_name] = text_preprocessor(bio_text)
# Save to disk
with open('potus_wiki_bios_cleaned.json','w') as f:
json.dump(cleaned_bios,f)
Explanation: We can apply this function to every presidential biography (this may take a minute or so) and write the resulting list of cleaned tokens to the "potus_wiki_bios_cleaned.json" file. We'll use this file in the next lecture as well.
End of explanation
potus_total_words = {}
for bio_name,bio_text in cleaned_bios.items():
potus_total_words[bio_name] = len(bio_text)
pd.Series(potus_total_words).sort_values(ascending=False)
Explanation: Comparative descriptive statistics
Now that we have cleaned biographies for each president, we can perform some basic analyses of the text. Which presidents have the longest biographies?
End of explanation
potus_unique_words = {}
for bio_name,bio_text in cleaned_bios.items():
potus_unique_words[bio_name] = len(set(bio_text))
pd.Series(potus_unique_words).sort_values(ascending=False)
Explanation: How many unique words?
End of explanation
def lexical_diversity(token_list):
unique_tokens = len(set(token_list))
total_tokens = len(token_list)
if total_tokens > 0:
return unique_tokens/total_tokens
else:
return 0
potus_lexical_diversity = {}
for bio_name,bio_text in cleaned_bios.items():
potus_lexical_diversity[bio_name] = lexical_diversity(bio_text)
pd.Series(potus_lexical_diversity).sort_values(ascending=False)
Explanation: The lexical diversity is the ratio of unique words to total words. Values closer to 0 indicate the presence of repeated words (low diversity) and values closer to 1 indicate words used only once (high diversity).
End of explanation
# Import the Counter function
from collections import Counter
# Get counts of each token from the cleaned_bios for Grover Cleveland
cleveland_counts = Counter(cleaned_bios['Grover Cleveland'])
# Convert to a pandas Series and sort
pd.Series(cleveland_counts).sort_values(ascending=False).head(25)
potus_word_counts = {}
for bio_name,bio_text in cleaned_bios.items():
potus_word_counts[bio_name] = Counter(bio_text)
potus_word_counts_df = pd.DataFrame(potus_word_counts).T
potus_word_counts_df.to_csv('potus_word_counts.csv',encoding='utf8')
print("There are {0:,} unique words across the {1} presidents.".format(potus_word_counts_df.shape[1],potus_word_counts_df.shape[0]))
Explanation: We can count how often a word occurs in each biography.
End of explanation
potus_word_counts_df.sum().sort_values(ascending=False).head(20)
Explanation: Which words occur the most across presidential biographies?
End of explanation
def get_page_content(title,lang='en',redirects=1):
"""
Takes a page title and returns a (large) string of the HTML content
of the revision.
title - a string for the title of the Wikipedia article
lang - a string (typically two letter ISO 639-1 code) for the language
edition, defaults to "en"
redirects - 1 or 0 for whether to follow page redirects, defaults to 1
parse - 1 or 0 for whether to return the raw HTML or paragraph text
Returns:
str - a (large) string of the content of the revision
"""
bad_titles = ['Special:','Wikipedia:','Help:','Template:','Category:','International Standard','Portal:','s:','File:','Digital object identifier','(page does not exist)']
# Get the response from the API for a query
params = {'action':'parse',
'format':'json',
'page':title,
'redirects':redirects,
'prop':'text',
'disableeditsection':1,
'disabletoc':1
}
url = 'https://{0}.wikipedia.org/w/api.php'.format(lang)
req = requests.get(url,params=params)
json_string = json.loads(req.text)
new_title = json_string['parse']['title']
if 'parse' in json_string.keys():
page_html = json_string['parse']['text']['*']
# Parse the HTML into Beautiful Soup
soup = BeautifulSoup(page_html,'lxml')
# Remove sections at end
bad_sections = ['See_also','Notes','References','Bibliography','External_links']
sections = soup.find_all('h2')
for section in sections:
if section.span['id'] in bad_sections:
# Clean out the divs
div_siblings = section.find_next_siblings('div')
for sibling in div_siblings:
sibling.clear()
# Clean out the ULs
ul_siblings = section.find_next_siblings('ul')
for sibling in ul_siblings:
sibling.clear()
# Get all the paragraphs
paras = soup.find_all('p')
text_list = []
for para in paras:
_s = para.text
# Remove the citations
_s = re.sub(r'\[[0-9]+\]','',_s)
text_list.append(_s)
final_text = '\n'.join(text_list).strip()
return {new_title:final_text}
def get_category_subcategories(category_title,lang='en'):
"""
The function accepts a category_title and returns a list of the category's sub-categories
category_title - a string (including "Category:" prefix) of the category name
lang - a string (typically two letter ISO 639-1 code) for the language edition,
defaults to "en"
Returns:
members - a list containing strings of the sub-categories in the category
"""
# Replace spaces with underscores
category_title = category_title.replace(' ','_')
# Make sure "Category:" appears in the title
if 'Category:' not in category_title:
category_title = 'Category:' + category_title
_S="https://{1}.wikipedia.org/w/api.php?action=query&list=categorymembers&cmtitle={0}&cmtype=subcat&cmprop=title&cmlimit=500&format=json&formatversion=2".format(category_title,lang)
json_response = requests.get(_S).json()
members = list()
if 'categorymembers' in json_response['query']:
for member in json_response['query']['categorymembers']:
members.append(member['title'])
return members
def get_category_members(category_title,depth=1,lang='en'):
"""
The function accepts a category_title and returns a list of category members
category_title - a string (including "Category:" prefix) of the category name
lang - a string (typically two letter ISO 639-1 code) for the language edition,
defaults to "en"
Returns:
members - a list containing strings of the page titles in the category
"""
# Replace spaces with underscores
category_title = category_title.replace(' ','_')
# Make sure "Category:" appears in the title
if 'Category:' not in category_title:
category_title = 'Category:' + category_title
_S="https://{1}.wikipedia.org/w/api.php?action=query&list=categorymembers&cmtitle={0}&cmprop=title&cmnamespace=0&cmlimit=500&format=json&formatversion=2".format(category_title,lang)
json_response = requests.get(_S).json()
members = list()
if depth < 0:
return members
if 'categorymembers' in json_response['query']:
for member in json_response['query']['categorymembers']:
members.append(member['title'])
subcats = get_category_subcategories(category_title,lang=lang)
for subcat in subcats:
members += get_category_members(subcat,depth-1)
return members
Explanation: Case study: Preprocess the S&P500 articles and compute statistics
Step 1: Load the "sp500_wiki_articles.json", use the text_preprocessor function (or some other sequence of functions) from above to clean these articles up, and save the cleaned content to "sp500_wiki_articles_cleaned.json".
Step 2: Compute some descriptive statistics about the company articles with the most words, most unique words, greatest lexical diversity, most used words across articles, and number of unique words across all articles.
Appendix 1: Retrieving Wikipedia content by category
Functions and operations to scrape the most recent (10 August 2018) Wikipedia content from every member of "Category:Presidents of the United States".
The get_page_content function will get the content of the article as HTML and parse the HTML to return something close to a clean string of text. The get_category_subcategories and get_category_members will get all the members of a category in Wikipedia.
End of explanation
presidents = get_category_members('Presidents_of_the_United_States',depth=0)
presidents
Explanation: Use get_category_members to get all the immediate members (depth=0) of "Category:Presidents of the United States."
End of explanation
presidents_wiki_bios = {}
for potus in presidents[3:]:
presidents_wiki_bios.update(get_page_content(potus))
Explanation: Loop through the presidents list from the fourth entry onward and get each president's biography using get_page_content. Store the results in the presidents_wiki_bios dictionary.
End of explanation
with open('potus_wiki_bios.json','w') as f:
json.dump(presidents_wiki_bios,f)
Explanation: Save the data to a JSON file.
End of explanation
title = 'List of S&P 500 companies'
lang = 'en'
redirects = 1
params = {'action':'parse',
'format':'json',
'page':title,
'redirects':1,
'prop':'text',
'disableeditsection':1,
'disabletoc':1
}
url = 'https://en.wikipedia.org/w/api.php'
req = requests.get(url,params=params)
json_string = json.loads(req.text)
if 'parse' in json_string.keys():
page_html = json_string['parse']['text']['*']
# Parse the HTML into Beautiful Soup
soup = BeautifulSoup(page_html,'lxml')
Explanation: Appendix 2: Retrieving Wikipedia content from a list
Wikipedia maintains a (superficially) up-to-date List of S&P 500 companies, but not a category of the constituent members. Like the presidents, we want to retrieve a list of all their Wikipedia articles, parse their content, and perform some NLP tasks.
First, get the content of the article so we can parse out the list.
End of explanation
company_names = []
# Get the first table
component_stock_table = soup.find_all('table')[0]
# Get all the rows after the first (header) row
rows = component_stock_table.find_all('tr')[1:]
# Loop through each row and extract the title
for row in rows:
# Get all the links in a row
links = row.find_all('a')
# Get the title in the 2nd cell from the left
title = links[1]['title']
# Add it to company_names
company_names.append(title)
print("There are {0:,} titles in the list".format(len(set(company_names))))
Explanation: The hard way to get a list of the company names out is parsing the HTML table. We:
Find all the tables in the soup
Get the first table out
Find all the rows in the table
Loop through each row
Find the links in each row
Get the second link's title in each row
Add the title to company_names
End of explanation
company_df = pd.read_html(str(component_stock_table),header=0)[0]
company_df.head()
company_names = company_df['Security'].tolist()
Explanation: The easy way is to use pandas's read_html function to parse the table into a DataFrame and access the "Security" (second) column.
End of explanation
sp500_articles = {}
for company in set(company_names):
sp500_articles.update(get_page_content(company))
Explanation: Now we can use get_page_content to get the content of each company's page and add it to the sp500_articles dictionary.
End of explanation
with open('sp500_wiki_articles.json','w') as f:
json.dump(sp500_articles,f)
Explanation: Save the data to a JSON file.
End of explanation |
1,218 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial
Step1: 1. Data
We should now have all the data loaded, named as it was before. As a reminder, these are the NGC numbers of the galaxies in the data set
Step2: 2. Independent fits for each galaxy
This class will package up the fitting using the "4b" method from the previous notebook (emcee plus analytic integration). In particular, it relies on the log_prior, log_posterior and log_likelihood_B functions (as well as the data, among other previous global-scope definitions). If you want to use a different approach instead, feel free.
There are various defaults here (e.g. nsteps, burn, maxlag) that you might want to tweak, but in principle they should work well enough for this problem.
Step3: Let's set up and run each of these fits, which hopefully shouldn't take too long. As always, you are responsible for looking over the trace plots and making sure everything is ok.
Step4: Based on the plots above, remove some burn-in. Check that the quantitative diagnostics are acceptable as they are printed out.
Step5: Now we'll use pygtc to plot all the individual posteriors, and see how they compare.
Step6: Visually, would you say that it's likely that all the scaling parameters, or some subset, are universal?
TBC commentary
2. A hierarchical model for all galaxies
On the basis of the last section, it should be clear that at least one of the scaling parameters in question is not universal amongst galaxies in the data set, and at least one may well be. Further, it isn't obvious that there is any particular correlation or anticorrelation between the galaxy-to-galaxy differences in these parameters. If we were doing this as a research project, the latter would be an important thing to investigate, along with possible physical explanations for outliers. But we'll keep it relatively simple here.
Let's add a level of hierarchy to the model by assuming that the values of $a$ for each galaxy come from a normal distribution with mean $\mu_a$ and standard deviation $\tau_a$, and similarly $b$ and $\sigma$ come from their own normal distributions. We will not consider the possibility that, for example, all 3 come from a joint, multivariate normal distribution with possible correlations between them, although that could easily be justified. In practice, fitting for independent distributions for each parameter is a reasonable first step, much as fitting each galaxies data independently in Section 1 was a reasonable zeroth step.
Make the relatively simple modifications to your PGM and probabilistic expressions from Section 2 of the previous notebook to accomodate this model.
TBC probabilistic expressions and PGM
We will adopt wide, uniform priors on the new hyperparameters of the model, to make life easier.
3. Strategy
Even more than last time, the total number of free parameters in the model is, technically, staggering. We already know some ways of reducing the overhead associated with each galaxy. For example, using the analytic integration approach from the previous notebook, we have have only 3 parameters left to sample per galaxy, for a total of $3N_\mathrm{gal}+6=33$ parameters. Brute force sampling of these 33 parameters is not unthinkable, although in practice it may or may not be a headache.
Another option is to make use of the samples we obtained in Section 1. These are samples of the posterior (for each galaxy) when the priors on the scaling parameters are very wide and uniform, i.e. constant over the domain where the likelihood is significantly non-zero. They are, therefore, also samples from a PDF that is proportional to the likelihood function. To see why that might be helpful, consider the posterior for the hyperparameters of the new model, $\vec{\alpha} = (\mu_a,\tau_a,\mu_b,\tau_b,\mu_\sigma,\tau_\sigma)$, marginalized over all the pesky $a_i$, $b_i$ and $\sigma_i$ parameters
Step7: Complete the log-likelihood function for this part. Similarly to the way we dealt with Mtrue before, the galparams argument will end up being an array containing $(a_1,b_1,\sigma_1,a_2,b_2,\sigma_2,\ldots)$, from which we can extract arrays of $a_i$, $b_i$ and $\sigma_i$ if we want. The line given to you accounts for the $\prod_{i=1}^{N_\mathrm{gal}} p(\mathrm{data}|a_i,b_i,\sigma_i)$ part, ultimately calling log_likelihood_B and log_prior from the last notebook (see comments below).
Step8: As a consequence of the code above calling _logpost_vecarg_B (note post), the old priors for the $a_i$, $b_i$ and $\sigma_i$ will be included in the return value. This is ok only because we're using uniform priors, so in the log those priors are either a finite constant or $-\infty$. In general, we would need to divide the old priors out somewhere in the new posterior calculation. Even better, we would not write such dangerously lazy code.
But for our limited purposes, it should work. The bottom line is that we don't need to worry about the priors for the $a_i$, $b_i$ and $\sigma_i$ in the function below, just the hyperparameters of their parent distributions.
Again like the last notebook, we will make galparams an optional argument to the log-prior function, so we can re-use the function later, when the $a_i$, $b_i$ and $\sigma_i$ are not being sampled.
Step9: You can have the log-posterior functions.
Step10: Based on the triangle plot in the first section, guess rough starting values for $(\mu_a,\tau_a,\mu_b,\tau_b,\mu_\sigma,\tau_\sigma)$. (NB
Step11: Quick check that the functions above work
Step12: Below, we run emcee as before.
IMPORTANT
You should find this to be more tractable than the "brute force" solution in the previous notebook, but still very slow compared to what we normally see in class. Again, you do not need to run this version long enough to get what we would normally consider acceptable results, in terms of convergence and number of independent samples. Just convince yourself that it's functioning, and see how it performs. Again, please do not turn in a notebook where the sampling cell below takes longer than $\sim30$ seconds to evaluate.
Step13: Look at the traces (we'll only include one of the galaxy's scaling parameters).
Step14: Go through the usual motions, making sure to set burn and maxlag to something appropriate for the length of the chain.
Step15: As before, we'll be comparing the posteriors from the methods we attempt
Step16: To be more thorough, we would also want to see how well the new hierarchical part of the model fits, meaning whether the posteriors of $a_i$, $b_i$ and $\sigma_i$ are collectively consistent with being drawn from their respective fitted Gaussians. Things might look slightly different than the plots we made above, since those fits used uniform priors rather than the hierarchical model. With only 9 galaxies, it seems unlikely that we could really rule out a Gaussian distribution, and it's tangential to the point of this tutorial. So this can be an exercise for the reader, if you want.
4b. Sampling with numerical marginalization
Let's see how we do trying to marginalize out the per-galaxy parameters by simple monte carlo, as described above,
$p(\mathrm{data}|\vec{\alpha}) = \prod_{i=1}^{N_\mathrm{gal}} \frac{1}{n_i}\sum_{k=1}^{n_i} p(a_{ik},b_{ik},\sigma_{ik}|\vec{\alpha})$.
Note that, because we are taking a sum of probabilities above, we do actually need to work with probabilities, as opposed to log-probabilities. You might reasonably worry about numerical stability here, but in this case a naive implementation seems to be ok. (In general, what we would need to check is whether the summands contributing most of the sum are easily floating-point representable, i.e. not so tiny that they underflow. We could always renormalize the summands to avoid this, since we will just end up taking the log afterwards.)
Implement the log-likelihood for this approach below.
Step17: This is for free
Step18: The usual sanity check
Step19: Let's get an idea of how computationally expensive all these sums are by running a very short chain.
Step20: For me this comes out to about 7 seconds for 10 steps - slower than we'd ideally like, at least without more serious computing resources than my laptop. (If you run longer, though, you should see performance better than in part A.)
However, it's worth asking if we can get away with using fewer samples. In principle, we are well justified in doing this, since the effective number of independent samples estimated for some of the individual fits is only $\sim500$ (when I ran them, anyway).
Note that the cell below is destructive, in that we can't easily get the original chains back after running it. Keep that in mind if you plan to play around, or improve on the code at the start of the notebook.
Step21: With only 500 samples left in the sum for each galaxy, it should be possible to get results that appear basically converged with a couple of minutes runtime (and you should do so). Nevertheless, before turning in the notebook, please reduce the number of steps such that the sampling cell below takes no longer than $\sim30$ seconds to evaluate. (You can leave a comment saying what number of steps you actually used, if you like.)
Step22: Let's see how it does
Step23: The sampler is probably struggling to move around efficiently, but you could imagine running patiently for a while and ending up with something useful. Let's call this approach viable, but not ideal. Still, make sure you have reasonable convergence before continuing.
Step24: And now the burning question
Step25: Checkpoint | Python Code:
exec(open('tbc.py').read()) # define TBC and TBC_above
import dill
# may need to change the load path
TBC() # dill.load_session('../ignore/cepheids_one.db')
exec(open('tbc.py').read()) # (re-)define TBC and TBC_above
Explanation: Tutorial: The Cepheid Period-Luminosity Relation for Multiple Galaxies
So far (in the cepheids and cepheids_one_galaxy notebooks), we have fit a hierarchical model describing the period-luminosity relation and its intrinsic scatter to data from a single galaxy. Next, we're interested in how similar, or not, these scaling relation parameters ($a$, $b$ and $\sigma$) are among the galaxies in the data set.
A sensible place to start is to do an identical fit independently to each galaxy to see how compatible the parameter values are. Then we'll fit a model with another level of hierarchy that assumes these parameters come from a parent distribution. The width of that parent distribution will be a parameter that tells us how similar things are from galaxy to galaxy.
Start by restoring the previous notebook:
End of explanation
ngc_numbers
Explanation: 1. Data
We should now have all the data loaded, named as it was before. As a reminder, these are the NGC numbers of the galaxies in the data set:
End of explanation
class singleFitter:
def __init__(self, ngc):
'''
ngc: NGC identifier of the galaxy to fit
'''
self.ngc = ngc
self.data = data[ngc] # from global scope
# reproducing this for paranoia's sake
self.param_names = ['a', 'b', 'sigma']
self.param_labels = [r'$a$', r'$b$', r'$\sigma$']
def _logpost_vecarg_B(self, pvec):
params = {name:pvec[i] for i,name in enumerate(self.param_names)}
return log_posterior(self.data, log_likelihood_B, **params)
def fit(self, guess, nsteps=7500):
npars = len(self.param_names)
nwalkers = 2*npars
sampler = emcee.EnsembleSampler(nwalkers, npars, self._logpost_vecarg_B)
start = np.array([np.array(guess)*(1.0 + 0.01*np.random.randn(npars)) for j in range(nwalkers)])
%time sampler.run_mcmc(start, nsteps)
plt.rcParams['figure.figsize'] = (16.0, 3.0*npars)
fig, ax = plt.subplots(npars, 1);
cr.plot_traces(sampler.chain[:min(8,nwalkers),:,:], ax, labels=self.param_labels);
self.sampler = sampler
self.nwalkers = nwalkers
self.npars = npars
self.nsteps = nsteps
def burnin(self, burn=1000, maxlag=1000):
tmp_samples = [self.sampler.chain[i,burn:,:] for i in range(self.nwalkers)]
print('R =', cr.GelmanRubinR(tmp_samples))
print('neff =', cr.effective_samples(tmp_samples, maxlag=maxlag))
print('NB: Since walkers are not independent, these will be optimistic!')
self.samples = self.sampler.chain[:,burn:,:].reshape(self.nwalkers*(self.nsteps-burn), self.npars)
del self.sampler
# make it simpler/more readable to access the parameter samples
# (could have been fancier and more robust by using self.param_names here)
self.a = self.samples[:,0]
self.b = self.samples[:,1]
self.sigma = self.samples[:,2]
def thin(self, thinto=1000):
j = np.round(np.linspace(0, self.samples.shape[0]-1, thinto)).astype(int)
self.a = self.samples[j,0]
self.b = self.samples[j,1]
self.sigma = self.samples[j,2]
Explanation: 2. Independent fits for each galaxy
This class will package up the fitting using the "4b" method from the previous notebook (emcee plus analytic integration). In particular, it relies on the log_prior, log_posterior and log_likelihood_B functions (as well as the data, among other previous global-scope definitions). If you want to use a different approach instead, feel free.
There are various defaults here (e.g. nsteps, burn, maxlag) that you might want to tweak, but in principle they should work well enough for this problem.
End of explanation
independent_fits = [singleFitter(ngc) for ngc in ngc_numbers]
independent_fits[0].fit(guessvec)
independent_fits[1].fit(guessvec)
independent_fits[2].fit(guessvec)
independent_fits[3].fit(guessvec)
independent_fits[4].fit(guessvec)
independent_fits[5].fit(guessvec)
independent_fits[6].fit(guessvec)
independent_fits[7].fit(guessvec)
independent_fits[8].fit(guessvec)
Explanation: Let's set up and run each of these fits, which hopefully shouldn't take too long. As always, you are responsible for looking over the trace plots and making sure everything is ok.
End of explanation
TBC(1) # burn = ...
for f in independent_fits:
print('NGC', f.ngc)
f.burnin(burn=burn) # optionally, set maxlag here also
print('')
Explanation: Based on the plots above, remove some burn-in. Check that the quantitative diagnostics are acceptable as they are printed out.
End of explanation
plotGTC([f.samples for f in independent_fits], paramNames=param_labels,
chainLabels=['NGC'+str(f.ngc) for f in independent_fits],
figureSize=8, customLabelFont={'size':12}, customTickFont={'size':12}, customLegendFont={'size':16});
Explanation: Now we'll use pygtc to plot all the individual posteriors, and see how they compare.
End of explanation
param_names_all = ['mu_a', 'tau_a', 'mu_b', 'tau_b', 'mu_s', 'tau_s']
param_labels_all = [r'$\mu_a$', r'$\tau_a$', r'$\mu_b$', r'$\tau_b$', r'$\mu_\sigma$', r'$\tau_\sigma$']
Explanation: Visually, would you say that it's likely that all the scaling parameters, or some subset, are universal?
TBC commentary
2. A hierarchical model for all galaxies
On the basis of the last section, it should be clear that at least one of the scaling parameters in question is not universal amongst galaxies in the data set, and at least one may well be. Further, it isn't obvious that there is any particular correlation or anticorrelation between the galaxy-to-galaxy differences in these parameters. If we were doing this as a research project, the latter would be an important thing to investigate, along with possible physical explanations for outliers. But we'll keep it relatively simple here.
Let's add a level of hierarchy to the model by assuming that the values of $a$ for each galaxy come from a normal distribution with mean $\mu_a$ and standard deviation $\tau_a$, and similarly $b$ and $\sigma$ come from their own normal distributions. We will not consider the possibility that, for example, all 3 come from a joint, multivariate normal distribution with possible correlations between them, although that could easily be justified. In practice, fitting for independent distributions for each parameter is a reasonable first step, much as fitting each galaxies data independently in Section 1 was a reasonable zeroth step.
Make the relatively simple modifications to your PGM and probabilistic expressions from Section 2 of the previous notebook to accomodate this model.
TBC probabilistic expressions and PGM
We will adopt wide, uniform priors on the new hyperparameters of the model, to make life easier.
3. Strategy
Even more than last time, the total number of free parameters in the model is, technically, staggering. We already know some ways of reducing the overhead associated with each galaxy. For example, using the analytic integration approach from the previous notebook, we have have only 3 parameters left to sample per galaxy, for a total of $3N_\mathrm{gal}+6=33$ parameters. Brute force sampling of these 33 parameters is not unthinkable, although in practice it may or may not be a headache.
Another option is to make use of the samples we obtained in Section 1. These are samples of the posterior (for each galaxy) when the priors on the scaling parameters are very wide and uniform, i.e. constant over the domain where the likelihood is significantly non-zero. They are, therefore, also samples from a PDF that is proportional to the likelihood function. To see why that might be helpful, consider the posterior for the hyperparameters of the new model, $\vec{\alpha} = (\mu_a,\tau_a,\mu_b,\tau_b,\mu_\sigma,\tau_\sigma)$, marginalized over all the pesky $a_i$, $b_i$ and $\sigma_i$ parameters:
$p(\vec{\alpha}|\mathrm{data}) \propto p(\vec{\alpha}) \prod_{i=1}^{N_\mathrm{gal}} \int da_i db_i d\sigma_i \, p(a_i,b_i,\sigma_i|\vec{\alpha}) \, p(\mathrm{data}|a_i,b_i,\sigma_i)$.
To restate what we said above, our individual fits (with uniform priors) give us samples from PDFs
$q(a_i,b_i,\sigma_i|\mathrm{data}) \propto p(\mathrm{data}|a_i,b_i,\sigma_i)$.
We can do this integral by simple monte carlo as
$p(\vec{\alpha}|\mathrm{data}) \propto p(\vec{\alpha}) \prod_{i=1}^{N_\mathrm{gal}} \frac{1}{n_i}\sum_{k=1}^{n_i} p(a_{ik},b_{ik},\sigma_{ik}|\vec{\alpha})$,
where the $n_i$ samples of $(a_{ik},b_{ik},\sigma_{ik}) \sim q(a_i,b_i,\sigma_i|\mathrm{data})$. Our samples from Section 1 happen to satisfy this. (Had we used a non-uniform prior before, we could do something similar, but would need to divide by that prior density in the sum above.) This approach has the advantage that we only need to sample the 6 parameters in $\vec{\alpha}$ to constrain our hierarchical model, since a lot of work is already done. On the other hand, carrying out the sums for each galaxy can become its own numerical challenge.
If we're really stuck in terms of computing power, we could consider a more compressed version of this, by approximating the posterior from each individual galaxy fit as a 3-dimensional Gaussian, or some other simple function. This approximation may or may not be a significant concession on our parts; here it's clearly a bit sketchy in the case of $\sigma$, which has a hard cut at $\sigma=0$ that at least one individual galaxy is consistent with. But, with this approximation, the integral in the first equation above could be done analytically, much as we simplified things for the single-galaxy analysis.
Finally, not that this is an exhaustive list, we could again consider whether conjugate Gibbs sampling is an option. Since the normal distribution has nice conjugacies, we could consider a scheme where we sample $\mu_a|\tau_a,{a_i}$, then $\tau_a|\mu_a,{a_i}$, then similarly for $\mu_b$, $\tau_b$, $\mu_\sigma$ and $\tau_\sigma$, and then all the individual $a_i$, $b_i$, $\sigma_i$ and $M_{ij}$ parameters as we did with LRGS in the previous notebook (accounting for the normal "prior" on $a_i$ given by $\mu_a$ and $\tau_a$, etc.). Or we could conjugate-Gibbs sample the $\mu$'s and $\tau$'s, while using some other method entirely for the galaxy-specific parameters. (We will not actually walk through this, since (a) LRGS (in python) doesn't implement Gaussian priors on the intercept/slope parameters, even though it's a simple addition; (b) I don't feel like dragging yet another code into the mix; and (c) the Gaussian parent distribution is not conjugate for the $\sigma$ parameters, so we'd have to use a different sampling method for those parameters anyway.)
4. Obtain the posterior
4a. Brute force
Let's again start by trying brute force, although in this case we'll still use the analytic integration method from the last notebook rather than the brutest force, which would have a free absolute magnitude for every cepheid in every galaxy. We can make use of our array of singleFitter objects, and specifically their _logpost_vecarg_B methods to do that part of the calculation.
The prototypes below assume the 33 parameters are ordered as: $(\mu_a,\tau_a,\mu_b,\tau_b,\mu_\sigma,\tau_\sigma,a_1,b_1,\sigma_1,a_2,b_2,\sigma_2,\ldots)$. Also, let's... not include all the individual galaxy parameters in these lists of parameter names:
End of explanation
def log_likelihood_all_A(mu_a, tau_a, mu_b, tau_b, mu_s, tau_s, galparams):
lnp = np.sum([f._logpost_vecarg_B(galparams[(0+3*i):(3+3*i)]) for i,f in enumerate(independent_fits)])
TBC() # lnp += ... more stuff ...
return lnp
TBC_above()
Explanation: Complete the log-likelihood function for this part. Similarly to the way we dealt with Mtrue before, the galparams argument will end up being an array containing $(a_1,b_1,\sigma_1,a_2,b_2,\sigma_2,\ldots)$, from which we can extract arrays of $a_i$, $b_i$ and $\sigma_i$ if we want. The line given to you accounts for the $\prod_{i=1}^{N_\mathrm{gal}} p(\mathrm{data}|a_i,b_i,\sigma_i)$ part, ultimately calling log_likelihood_B and log_prior from the last notebook (see comments below).
End of explanation
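For orientation only (this is not the solution, and the names and numbers below are placeholders): for a Gaussian population model, the "more stuff" amounts to adding the Gaussian log-densities of the galaxy parameters under the current hyperparameters, e.g.
import numpy as np
import scipy.stats as st
a_values = np.array([-3.1, -3.0, -2.9])  # placeholder intercepts for a few galaxies
mu_a, tau_a = -3.0, 0.2                  # placeholder hyperparameter values
lnp_a = st.norm.logpdf(a_values, mu_a, tau_a).sum()  # analogous terms would exist for the b_i and sigma_i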
def log_prior_all(mu_a, tau_a, mu_b, tau_b, mu_s, tau_s, galparams=None):
TBC()
TBC_above()
Explanation: As a consequence of the code above calling _logpost_vecarg_B (note post), the old priors for the $a_i$, $b_i$ and $\sigma_i$ will be included in the return value. This is ok only because we're using uniform priors, so in the log those priors are either a finite constant or $-\infty$. In general, we would need to divide the old priors out somewhere in the new posterior calculation. Even better, we would not write such dangerously lazy code.
But for our limited purposes, it should work. The bottom line is that we don't need to worry about the priors for the $a_i$, $b_i$ and $\sigma_i$ in the function below, just the hyperparameters of their parent distributions.
Again like the last notebook, we will make galparams an optional argument to the log-prior function, so we can re-use the function later, when the $a_i$, $b_i$ and $\sigma_i$ are not being sampled.
End of explanation
def log_posterior_all(loglike, **params):
lnp = log_prior_all(**params)
if lnp != -np.inf:
lnp += loglike(**params)
return lnp
def logpost_vecarg_all_A(pvec):
params = {name:pvec[i] for i,name in enumerate(param_names_all)}
params['galparams'] = pvec[len(param_names_all):]
return log_posterior_all(log_likelihood_all_A, **params)
Explanation: You can have the log-posterior functions.
End of explanation
TBC() # guess_all = [list of hyperparameter starting values]
guess_all_A = np.array(guess_all + guessvec*9)
Explanation: Based on the triangle plot in the first section, guess rough starting values for $(\mu_a,\tau_a,\mu_b,\tau_b,\mu_\sigma,\tau_\sigma)$. (NB: make this a list rather than the usual dictionary.) We'll re-use the previous guess for the galaxy-specific parameters.
End of explanation
logpost_vecarg_all_A(guess_all_A)
Explanation: Quick check that the functions above work:
End of explanation
%%time
nsteps = 100 # or whatever
npars = len(guess_all_A)
nwalkers = 2*npars
sampler = emcee.EnsembleSampler(nwalkers, npars, logpost_vecarg_all_A)
start = np.array([np.array(guess_all_A)*(1.0 + 0.01*np.random.randn(npars)) for j in range(nwalkers)])
sampler.run_mcmc(start, nsteps)
print('Yay!')
Explanation: Below, we run emcee as before.
IMPORTANT
You should find this to be more tractable than the "brute force" solution in the previous notebook, but still very slow compared to what we normally see in class. Again, you do not need to run this version long enough to get what we would normally consider acceptable results, in terms of convergence and number of independent samples. Just convince yourself that it's functioning, and see how it performs. Again, please do not turn in a notebook where the sampling cell below takes longer than $\sim30$ seconds to evaluate.
End of explanation
npars = len(guess_all)+3
plt.rcParams['figure.figsize'] = (16.0, 3.0*npars)
fig, ax = plt.subplots(npars, 1);
cr.plot_traces(sampler.chain[:min(8,nwalkers),:,:npars], ax, labels=param_labels_all+param_labels);
npars = len(guess_all_A)
Explanation: Look at the traces (we'll only include one of the galaxy's scaling parameters).
End of explanation
TBC()
# burn = ...
# maxlag = ...
tmp_samples = [sampler.chain[i,burn:,:9] for i in range(nwalkers)]
print('R =', cr.GelmanRubinR(tmp_samples))
print('neff =', cr.effective_samples(tmp_samples, maxlag=maxlag))
print('NB: Since walkers are not independent, these will be optimistic!')
print("Plus, there's a good chance that the results in this section are garbage...")
Explanation: Go through the usual motions, making sure to set burn and maxlag to something appropriate for the length of the chain.
End of explanation
samples_all_A = sampler.chain[:,burn:,:].reshape(nwalkers*(nsteps-burn), npars)
plotGTC([samples_all_A[:,:9]], paramNames=param_labels_all+param_labels, chainLabels=['emcee/brute'],
figureSize=12, customLabelFont={'size':12}, customTickFont={'size':12}, customLegendFont={'size':16});
Explanation: As before, we'll be comparing the posteriors from the methods we attempt:
End of explanation
def log_likelihood_all_B(mu_a, tau_a, mu_b, tau_b, mu_s, tau_s):
TBC()
TBC_above()
Explanation: To be more thorough, we would also want to see how well the new hierarchical part of the model fits, meaning whether the posteriors of $a_i$, $b_i$ and $\sigma_i$ are collectively consistent with being drawn from their respective fitted Gaussians. Things might look slightly different than the plots we made above, since those fits used uniform priors rather than the hierarchical model. With only 9 galaxies, it seems unlikely that we could really rule out a Gaussian distribution, and it's tangential to the point of this tutorial. So this can be an exercise for the reader, if you want.
4b. Sampling with numerical marginalization
Let's see how we do trying to marginalize out the per-galaxy parameters by simple Monte Carlo, as described above,
$p(\mathrm{data}|\vec{\alpha}) = \prod_{i=1}^{N_\mathrm{gal}} \frac{1}{n_i}\sum_{k=1}^{n_i} p(a_{ik},b_{ik},\sigma_{ik}|\vec{\alpha})$.
Note that, because we are taking a sum of probabilities above, we do actually need to work with probabilities, as opposed to log-probabilities. You might reasonably worry about numerical stability here, but in this case a naive implementation seems to be ok. (In general, what we would need to check is whether the summands contributing most of the sum are easily floating-point representable, i.e. not so tiny that they underflow. We could always renormalize the summands to avoid this, since we will just end up taking the log afterwards.)
Implement the log-likelihood for this approach below.
End of explanation
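If the naive sum of probabilities ever does underflow, one standard fix is to do the averaging in log space; a minimal sketch with made-up numbers:
import numpy as np
from scipy.special import logsumexp
log_p = np.array([-700.2, -699.8, -701.5])        # log p(a_ik,b_ik,sigma_ik|alpha); exponentiating these directly would underflow
log_mean = logsumexp(log_p) - np.log(len(log_p))  # log of the average of the probabilities, computed stably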
def logpost_vecarg_all_B(pvec):
params = {name:pvec[i] for i,name in enumerate(param_names_all)}
return log_posterior_all(log_likelihood_all_B, **params)
Explanation: This is for free:
End of explanation
logpost_vecarg_all_B(guess_all)
Explanation: The usual sanity check:
End of explanation
nsteps = 10
npars = len(guess_all)
nwalkers = 2*npars
sampler = emcee.EnsembleSampler(nwalkers, npars, logpost_vecarg_all_B)
start = np.array([np.array(guess_all)*(1.0 + 0.01*np.random.randn(npars)) for j in range(nwalkers)])
%time sampler.run_mcmc(start, nsteps)
print('Yay?')
Explanation: Let's get an idea of how computationally expensive all these sums are by running a very short chain.
End of explanation
for f in independent_fits:
f.thin(500)
Explanation: For me this comes out to about 7 seconds for 10 steps - slower than we'd ideally like, at least without more serious computing resources than my laptop. (If you run longer, though, you should see performance better than in part A.)
However, it's worth asking if we can get away with using fewer samples. In principle, we are well justified in doing this, since the effective number of independent samples estimated for some of the individual fits is only $\sim500$ (when I ran them, anyway).
Note that the cell below is destructive, in that we can't easily get the original chains back after running it. Keep that in mind if you plan to play around, or improve on the code at the start of the notebook.
End of explanation
%%time
TBC() # nsteps =
npars = len(guess_all)
nwalkers = 2*npars
sampler = emcee.EnsembleSampler(nwalkers, npars, logpost_vecarg_all_B)
start = np.array([np.array(guess_all)*(1.0 + 0.01*np.random.randn(npars)) for j in range(nwalkers)])
sampler.run_mcmc(start, nsteps);
print('Yay!')
Explanation: With only 500 samples left in the sum for each galaxy, it should be possible to get results that appear basically converged with a couple of minutes' runtime (and you should do so). Nevertheless, before turning in the notebook, please reduce the number of steps such that the sampling cell below takes no longer than $\sim30$ seconds to evaluate. (You can leave a comment saying what number of steps you actually used, if you like.)
End of explanation
plt.rcParams['figure.figsize'] = (16.0, 3.0*npars)
fig, ax = plt.subplots(npars, 1);
cr.plot_traces(sampler.chain[:min(8,nwalkers),:,:npars], ax, labels=param_labels_all);
Explanation: Let's see how it does:
End of explanation
TBC()
# burn = ...
# maxlag = ...
tmp_samples = [sampler.chain[i,burn:,:] for i in range(nwalkers)]
print('R =', cr.GelmanRubinR(tmp_samples))
print('neff =', cr.effective_samples(tmp_samples, maxlag=maxlag))
print('NB: Since walkers are not independent, these will be optimistic!')
Explanation: The sampler is probably struggling to move around efficiently, but you could imagine running patiently for a while and ending up with something useful. Let's call this approach viable, but not ideal. Still, make sure you have reasonable convergence before continuing.
End of explanation
samples_all_B = sampler.chain[:,burn:,:].reshape(nwalkers*(nsteps-burn), npars)
plotGTC([samples_all_A[:,:len(param_names_all)], samples_all_B], paramNames=param_labels_all, chainLabels=['emcee/brute', 'emcee/SMC'],
figureSize=10, customLabelFont={'size':12}, customTickFont={'size':12}, customLegendFont={'size':16});
Explanation: And now the burning question: how does the posterior compare with the brute force version?
End of explanation
sol = np.loadtxt('solutions/ceph2.dat.gz')
plotGTC([sol, samples_all_B], paramNames=param_labels_all, chainLabels=['solution', 'my emcee/SMC'],
figureSize=8, customLabelFont={'size':12}, customTickFont={'size':12}, customLegendFont={'size':16});
Explanation: Checkpoint: Your posterior is compared with our solution by the cell below. Keep in mind they may have very different numbers of samples - we let ours run for several minutes.
End of explanation |
1,219 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CS446/546 - Class Session 19 - Correlation network
In this class session we are going to analyze gene expression data from a human bladder cancer cohort, using python. We will load a data matrix of expression measurements of 4,473 genes in 414 different bladder cancer samples. These genes have been selected because they are differentially expressed between normal bladder and bladder cancer (thus more likely to have a function in bladder cancer specifically), but the columns in the data matrix are restricted to bladder cancer samples (not normal bladder) because we want to obtain a network representing variation across cancers. The measurements in the matrix have already been normalized to account for inter-sample heterogeneity and then log2 transformed. Our job is to compute Pearson correlation coefficients between all pairs of genes, obtain Fisher-transformed z-scores for all pairs of genes, test each pair of genes for significance of the z score, adjust for multiple hypothesis testing, filter to eliminate any pair for which R < 0.75 or Padj > 0.01, load the graph into an igraph.Graph object, and plot the degree distribution on log-log scale. We will then answer two questions
Step1: Using pandas.read_csv, load the tab-delimited text file of gene expression measurements (rows correspond to genes, columns correspond to bladder tumor samples), into a data frame gene_matrix_for_network_df.
Step2: Use the pandas.DataFrame.as_matrix method to make a matrix gene_matrix_for_network. Print out the dimensions of the matrix, by accessing its shape variable
Step3: Use del to delete the data frame, since we no longer need it (save memory)
Step4: Look at the online help for the numpy.corrcoef function, using help(numpy.corrcoef). When you pass a single argument x which is a 2D "array" (i.e., a matrix), by default does corrcoef compute coefficients for pairs of rows, or pairs of columns?
Step5: Compute the 4,473 x 4,473 matrix of gene-gene Pearson correlation coefficients, using numpy.corrcoef (this function treats each row as a variable, so you don't have to do any transposing of the matrix, unlike the situation in R).
Step6: Look at the online help for numpy.fill_diagonal. Does it return the modified matrix or modify the matrix argument in place?
Step7: Set the diagonal elements of the matrix to zero, using numpy.fill_diagonal
Step8: Look at the online help for numpy.multiply. Does it do element-wise multiplication or matrix multiplication?
Step9: Look at the online help for numpy.tri. Does it modify a matrix argument in-place or return a matrix? What is in the matrix that it returns?
Step10: Set the upper-triangle of the matrix to zero, using numpy.multiply and numpy.tri
Step11: Using numpy.where, get a tuple of two numpy.arrays containing the row/col indices of the entries of the matrix for which R >= 0.75. Use array indexing to obtain the R values for these matrix entries, as a numpy array cor_coeff_values_above_thresh.
Step12: Refer to Eq. (13.5) in the assigned reading for today's class (p9 of the PDF). Obtain a numpy array of the correlation coefficients that exceeded 0.75, and Fisher-transform the correlation coefficient values to get a vector z_scores of z scores. Each of these z scores will correspond to an edge in the network, unless the absolute z score is too small such that we can't exclude the null hypothesis that the corresponding two genes' expression values are independent (we will perform that check in the next step).
Step13: Delete the correlation matrix object in order to save memory (we won't need it from here on out).
Step14: Assume that under the null hypothesis that two genes are independent, then sqrt(M-3)z for the pair of genes is an independent sample from the normal distribution with zero mean and unit variance, where M is the number of samples used to compute the Pearson correlation coefficient (i.e., M = 414). For each entry in z_scores compute a P value as the area under two tails of the normal distribution N(x), where the two tails are x < -sqrt(M-3)z and x > sqrt(M-3)z. (You'll know you are doing it right if z=0 means you get a P value of 1). You will want to use the functions numpy.abs and scipy.stats.norm.cdf, as well as the math.sqrt function (in order to compute the square root).
Step15: Adjust the P values for multiple hypothesis testing, using the statsmodels.sandbox.stats.multicomp.multipletests function with method="fdr_bh"
Step16: Verify that we don't need to drop any entries due to the adjusted P value not being small enough (use numpy.where and len); this should produce zero since we have M=414 samples per gene.
Step17: Read the online help for the function zip. What does it do?
Step18: We want to pass our tuple of numpy arrays containing row and column indices to Graph.TupleList; however, Graph.TupleList accepts a tuple list, not a tuple of numpy arrays. So we need to make a tuple list, using zip
Step19: Make an undirected graph from the row/column indices of the (upper-triangle) gene pairs whose correlations were above our threshold, using igraph.Graph.TupleList. Print a summary of the network, as a sanity check, using the igraph.Graph.summary method.
Step20: Plot the degree distribution on log-log scale; does it appear to be scale-free?
Step21: Use the igraph.statistics.power_law_fit function to estimate the scaling exponent alpha of the degree distribution
Step22: extra challenge
Step23: extra-extra challenge
For each of the gene pairs for which R>0.75, see if you can compute the t-test P value for each correlation coefficient (don't bother adjusting for false discovery rate control). Compare to the (un-adjusted) P values that you got using the Fisher transformation, using a scatter plot. How do they compare? Which test has better statistical power, for this case where M = 414? (If you are wondering, general advice is to use Fisher if M>=10; for very small numbers of samples, use the Student t test). | Python Code:
import pandas
import scipy.stats
import matplotlib
import pylab
import numpy
import statsmodels.sandbox.stats.multicomp
import igraph
import math
Explanation: CS446/546 - Class Session 19 - Correlation network
In this class session we are going to analyze gene expression data from a human bladder cancer cohort, using python. We will load a data matrix of expression measurements of 4,473 genes in 414 different bladder cancer samples. These genes have been selected because they are differentially expressed between normal bladder and bladder cancer (thus more likely to have a function in bladder cancer specifically), but the columns in the data matrix are restricted to bladder cancer samples (not normal bladder) because we want to obtain a network representing variation across cancers. The measurements in the matrix have already been normalized to account for inter-sample heterogeneity and then log2 transformed. Our job is to compute Pearson correlation coefficients between all pairs of genes, obtain Fisher-transformed z-scores for all pairs of genes, test each pair of genes for significance of the z score, adjust for multiple hypothesis testing, filter to eliminate any pair for which R < 0.75 or Padj > 0.01, load the graph into an igraph.Graph object, and plot the degree distribution on log-log scale. We will then answer two questions: (1) does the network look to be scale-free? and (2) what is it's best-fit scaling exponent?
We will start by importing all of the modules that we will need for this notebook. Note the difference in language-design philosophy between R (which requires loading one package for this analysis) and python (where we have to load seven modules). Python keeps its core minimal, whereas R has a lot of statistical and plotting functions in the base language (or in packages that are loaded by default).
End of explanation
gene_matrix_for_network_df = pandas.read_csv("shared/bladder_cancer_genes_tcga.txt", sep="\t")
Explanation: Using pandas.read_csv, load the tab-delimited text file of gene expression measurements (rows correspond to genes, columns correspond to bladder tumor samples), into a data frame gene_matrix_for_network_df.
End of explanation
gene_matrix_for_network = gene_matrix_for_network_df.as_matrix()
gene_matrix_for_network.shape
Explanation: Use the pandas.DataFrame.as_matrix method to make a matrix gene_matrix_for_network. Print out the dimensions of the matrix, by accessing its shape variable
End of explanation
del gene_matrix_for_network_df
Explanation: Use del to delete the data frame, since we no longer need it (save memory)
End of explanation
help(numpy.corrcoef)
Explanation: Look at the online help for the numpy.corrcoef function, using help(numpy.corrcoef). When you pass a single argument x which is a 2D "array" (i.e., a matrix), by default does corrcoef compute coefficients for pairs of rows, or pairs of columns?
End of explanation
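A quick way to check the answer empirically (our own two-row toy example, not part of the assignment):
import numpy
x = numpy.array([[1.0, 2.0, 3.0], [2.0, 4.0, 6.5]])
print(numpy.corrcoef(x).shape)  # (2, 2): by default each *row* is treated as a variable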
gene_matrix_for_network_cor = numpy.corrcoef(gene_matrix_for_network)
Explanation: Compute the 4,473 x 4,473 matrix of gene-gene Pearson correlation coefficients, using numpy.corrcoef (this function treats each row as a variable, so you don't have to do any transposing of the matrix, unlike the situation in R).
End of explanation
help(numpy.fill_diagonal)
Explanation: Look at the online help for numpy.fill_diagonal. Does it return the modified matrix or modify the matrix argument in place?
End of explanation
numpy.fill_diagonal(gene_matrix_for_network_cor, 0)
Explanation: Set the diagonal elements of the matrix to zero, using numpy.fill_diagonal
End of explanation
help(numpy.multiply)
Explanation: Look at the online help for numpy.multiply. Does it do element-wise multiplication or matrix multiplication?
End of explanation
help(numpy.tri)
Explanation: Look at the online help for numpy.tri. Does it modify a matrix argument in-place or return a matrix? What is in the matrix that it returns?
End of explanation
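For example (a quick check of our own, not part of the assignment), numpy.tri returns a new matrix with ones on and below the diagonal and zeros above it:
import numpy
print(numpy.tri(3))
# [[1. 0. 0.]
#  [1. 1. 0.]
#  [1. 1. 1.]]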
gene_matrix_for_network_cor = numpy.multiply(gene_matrix_for_network_cor, numpy.tri(*gene_matrix_for_network_cor.shape))
Explanation: Set the upper-triangle of the matrix to zero, using numpy.multiply and numpy.tri:
End of explanation
inds_correl_above_thresh = numpy.where(gene_matrix_for_network_cor >= 0.75)
cor_coeff_values_above_thresh = gene_matrix_for_network_cor[inds_correl_above_thresh]
Explanation: Using numpy.where, get a tuple of two numpy.arrays containing the row/col indices of the entries of the matrix for which R >= 0.75. Use array indexing to obtain the R values for these matrix entries, as a numpy array cor_coeff_values_above_thresh.
End of explanation
z_scores = 0.5*numpy.log((1 + cor_coeff_values_above_thresh)/
(1 - cor_coeff_values_above_thresh))
Explanation: Refer to Eq. (13.5) in the assigned reading for today's class (p9 of the PDF). Obtain a numpy array of the correlation coefficients that exceeded 0.75, and Fisher-transform the correlation coefficient values to get a vector z_scores of z scores. Each of these z scores will correspond to an edge in the network, unless the absolute z score is too small such that we can't exclude the null hypothesis that the corresponding two genes' expression values are independent (we will perform that check in the next step).
End of explanation
del gene_matrix_for_network_cor
Explanation: Delete the correlation matrix object in order to save memory (we won't need it from here on out).
End of explanation
M = gene_matrix_for_network.shape[1]
P_values = 2*scipy.stats.norm.cdf(-numpy.abs(z_scores)*math.sqrt(M-3))
Explanation: Assume that under the null hypothesis that two genes are independent, then sqrt(M-3)z for the pair of genes is an independent sample from the normal distribution with zero mean and unit variance, where M is the number of samples used to compute the Pearson correlation coefficient (i.e., M = 414). For each entry in z_scores compute a P value as the area under two tails of the normal distribution N(x), where the two tails are x < -sqrt(M-3)z and x > sqrt(M-3)z. (You'll know you are doing it right if z=0 means you get a P value of 1). You will want to use the functions numpy.abs and scipy.stats.norm.cdf, as well as the math.sqrt function (in order to compute the square root).
End of explanation
P_values_adj = statsmodels.sandbox.stats.multicomp.multipletests(P_values, method="fdr_bh")[1]
Explanation: Adjust the P values for multiple hypothesis testing, using the statsmodels.sandbox.stats.multicomp.multipletests function with method="fdr_bh"
End of explanation
len(numpy.where(P_values_adj >= 0.01)[0])
Explanation: Verify that we don't need to drop any entries due to the adjusted P value not being small enough (use numpy.where and len); this should produce zero since we have M=414 samples per gene.
End of explanation
help(zip)
Explanation: Read the online help for the function zip. What does it do?
End of explanation
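A one-line illustration of our own:
print(list(zip([0, 1, 2], [5, 6, 7])))  # [(0, 5), (1, 6), (2, 7)] -- zip pairs up corresponding elements of its arguments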
row_col_inds_tuple_list = zip(inds_correl_above_thresh[0], inds_correl_above_thresh[1])
## [note this can be done more elegantly using the unary "*" operator:
## row_col_inds_tuple_list = zip(*inds_correl_above_thresh)
## see how we only need to type the variable name once, if we use the unary "*" ]
Explanation: We want to pass our tuple of numpy arrays containing row and column indices to Graph.TupleList; however, Graph.TupleList accepts a tuple list, not a tuple of numpy arrays. So we need to make a tuple list, using zip:
End of explanation
final_network = igraph.Graph.TupleList(row_col_inds_tuple_list)
final_network.summary()
Explanation: Make an undirected graph from the row/column indices of the (upper-triangle) gene pairs whose correlations were above our threshold, using igraph.Graph.TupleList. Print a summary of the network, as a sanity check, using the igraph.Graph.summary method.
End of explanation
degree_dist = final_network.degree_distribution()
xs, ys = zip(*[(left, count) for left, _, count in degree_dist.bins()])
matplotlib.pyplot.scatter(xs, ys, marker="o")
ax = matplotlib.pyplot.gca()
ax.set_yscale("log")
ax.set_xscale("log")
matplotlib.pyplot.ylim((0.5,1000))
pylab.xlabel("k")
pylab.ylabel("N(k)")
pylab.show()
Explanation: Plot the degree distribution on log-log scale; does it appear to be scale-free?
End of explanation
igraph.statistics.power_law_fit(final_network.degree()).alpha
Explanation: Use the igraph.statistics.power_law_fit function to estimate the scaling exponent alpha of the degree distribution:
End of explanation
inds_use = numpy.where(P_values_adj > 0)
matplotlib.pyplot.scatter(cor_coeff_values_above_thresh[inds_use], -numpy.log10(P_values_adj[inds_use]))
pylab.xlabel("R")
pylab.ylabel("-log10(P)")
pylab.show()
Explanation: extra challenge:
If you got this far, see if you can scatter plot the relationship between R (as the independent variable) and -log10(P) value (as the dependent variable). When the effect size variable (e.g., R) can range from negative to positive, this plot is sometimes called a "volcano plot".
End of explanation
ts = numpy.divide(cor_coeff_values_above_thresh * math.sqrt(M - 2), numpy.sqrt(1 - cor_coeff_values_above_thresh**2))
P_values_studentT = 2*scipy.stats.t.cdf(-ts, M-2)
inds_use = numpy.where(numpy.logical_and(P_values > 0, P_values_studentT > 0))
matplotlib.pyplot.scatter(-numpy.log10(P_values[inds_use]),
-numpy.log10(P_values_studentT[inds_use]))
pylab.xlabel("Fisher transform")
pylab.ylabel("Student t")
pylab.show()
Explanation: extra-extra challenge
For each of the gene pairs for which R>0.75, see if you can compute the t-test P value for each correlation coefficient (don't bother adjusting for false discovery rate control). Compare to the (un-adjusted) P values that you got using the Fisher transformation, using a scatter plot. How do they compare? Which test has better statistical power, for this case where M = 414? (If you are wondering, general advice is to use Fisher if M>=10; for very small numbers of samples, use the Student t test).
End of explanation |
1,220 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intro to TensorFlow
What is a Computation Graph?
Everything in TensorFlow comes down to building a computation graph. What is a computation graph? Its just a series of math operations that occur in some order. Here is an example of a simple computation graph
Step1: Tensorflow uses tf.placeholder to handle inputs to the model. This is like making a reservation at a restaurant. The restaurant reserves a spot for 5 people, but you are free to fill those seats with any set of friends you want to. tf.placeholder lets you specify that some input will be coming in, of some shape and some type. Only when you run the computation graph do you actually provide the values of this input data. You would run this simple computation graph like this
Step2: We use feed_dict to pass in the actual input data into the graph. We use session.run to get the output from the c operation in the graph. Since e is at the end of the graph, this ends up running the entire graph and returning the number 45 - cool!
Neural Networks in Tensorflow
We can define neural networks in TensorFlow using computation graphs. Here is an example, very simple neural network (just 1 perceptron)
Step3: To run this graph, we again use session.run() and feed in our input via feed_dict.
Step4: We can also set the value of a tf.Variable when we make it. Below is an example where we set the value of tf.Variable ourselves. We've made a classification dataset for you to play around with, and see how the decision boundary changes with the model parameters (weights and bias). Try to get all the datapoints correct (green)!
Step5: Tweet Sentiment Analysis
Let's move to a real-world task. We're going to be classifying tweets as positive, negative, or neutral. Check out the very negative tweet below
Step6: Step 2
Step7: Why Do We Pass in None?
A note about ‘None’ and fluid-sized dimensions
Step8: Step 4
Step9: Step 5
Step10: Quick Conceptual Note
Step11: Running this code, you’ll see the network train and output its performance as it learns. I was able to get it to 65.5% accuracy. This is just OK, considering random guessing gets you 33.3% accuracy. In the next tutorial, you'll learn some ways to improve upon this.
Concluding Thoughts
This was a brief introduction into TensorFlow. There is so, so much more to learn and explore, but hopefully this has given you some base knowledge to expand upon. As an additional exercise, you can see what you can do with this code to improve the performance. Ideas include | Python Code:
a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
c = tf.add(a, b)
d = tf.subtract(b, 1)
e = tf.multiply(c, d)
Explanation: Intro to TensorFlow
What is a Computation Graph?
Everything in TensorFlow comes down to building a computation graph. What is a computation graph? It's just a series of math operations that occur in some order. Here is an example of a simple computation graph:
<img src="files/computation-graph.png">
This graph takes 2 inputs, (a, b) and computes an output (e). Each node in the graph is an operation that takes some input, does some computation, and passes its output to another node.
We could make this computation graph in TensorFlow in the following way:
End of explanation
with tf.Session() as session:
a_data, b_data = 3.0, 6.0
feed_dict = {a: a_data, b: b_data}
output = session.run([e], feed_dict=feed_dict)
print(output) # 45.0
Explanation: Tensorflow uses tf.placeholder to handle inputs to the model. This is like making a reservation at a restaurant. The restaurant reserves a spot for 5 people, but you are free to fill those seats with any set of friends you want to. tf.placeholder lets you specify that some input will be coming in, of some shape and some type. Only when you run the computation graph do you actually provide the values of this input data. You would run this simple computation graph like this:
End of explanation
n_input_nodes = 2
n_output_nodes = 1
x = tf.placeholder(tf.float32, (None, n_input_nodes))
W = tf.Variable(tf.ones((n_input_nodes, n_output_nodes)), dtype=tf.float32)
b = tf.Variable(tf.zeros(n_output_nodes), dtype=tf.float32)
'''TODO: Define the operation for z (use tf.matmul to multiply W and x).'''
z = #todo
'''TODO: Define the operation for out (use tf.sigmoid).'''
out = #todo
Explanation: We use feed_dict to pass in the actual input data into the graph. We use session.run to get the output from the c operation in the graph. Since e is at the end of the graph, this ends up running the entire graph and returning the number 45 - cool!
Neural Networks in Tensorflow
We can define neural networks in TensorFlow using computation graphs. Here is an example of a very simple neural network (just 1 perceptron):
<img src="files/computation-graph-2.png">
This graph takes an input, (x) and computes an output (out). It does it with what we learned in class, out = sigmoid(W*x+b).
We could make this computation graph in TensorFlow in the following way:
End of explanation
test_input = [[0.5, 0.5]]
with tf.Session() as session:
tf.global_variables_initializer().run(session=session)
feed_dict = {x: test_input}
output = session.run([out], feed_dict=feed_dict)
print(output[0]) # This should output 0.73105. If not, double-check your code above
Explanation: To run this graph, we again use session.run() and feed in our input via feed_dict.
End of explanation
'''TODO: manually optimize weight_values and bias_value to classify points'''
# Modify weight_values, bias_value in the above code to adjust the decision boundary
# See if you can classify all the points correctly (all markers green)
weight_values = np.array([[-0.1], [0.2]]) # TODO change values and re-run
bias_value = np.array([[0.5]]) #TODO change values and re-run
# A pretty good boundary is made with:
# weight_values = np.array([[0.03], [0.12]])
# bias_value = np.array([[-0.5]])
x = tf.placeholder(tf.float32, (None, 2), name='x')
W = tf.Variable(weight_values, name='W', dtype=tf.float32)
b = tf.Variable(bias_value, name='b', dtype=tf.float32)
z = tf.matmul(x, W) + b
out = tf.sigmoid(z)
data = np.array([[2, 7], [1, 7], [3, 1], [3, 3], [4, 3], [4, 6], [6, 5], [7, 7], [7, 5], [2, 4], [2, 2]])
y = np.array([1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0])
with tf.Session() as session:
tf.global_variables_initializer().run(session=session)
utils.classify_and_plot(data, y, x, out, session)
Explanation: We can also set the value of a tf.Variable when we make it. Below is an example where we set the value of tf.Variable ourselves. We've made a classification dataset for you to play around with, and see how the decision boundary changes with the model parameters (weights and bias). Try to get all the datapoints correct (green)!
End of explanation
# load data
X, y, index_to_word, sentences = utils.load_sentiment_data_bow()
X_train, y_train, X_test, y_test = utils.split_data(X, y)
vocab_size = X.shape[1]
n_classes = y.shape[1]
print("Tweet:", sentences[5])
print("Label:", y[5])
print("Bag of Words Representation:", X_train[5])
Explanation: Tweet Sentiment Analysis
Let's move to a real-world task. We're going to be classifying tweets as positive, negative, or neutral. Check out the very negative tweet below:
<img src="files/tweet-model.jpg" style="width: 500px;">
Building the Model
Building an MLP
MLP, or multi-layer perceptron, is a basic architecture where we multiply our representation with some matrix W and add some bias b and then apply some nonlinearity like tanh at each layer. Layers are fully connected to the next. As the network gets deeper, its expressive power grows exponentially, and it can draw some pretty fancy decision boundaries. In this exercise, you'll build your own MLP, with 2 hidden layers (layers that aren't input or output).
To make training more stable and efficient, we'll actually evaluate 128 tweets at a time, and take gradients with respect to the loss on the 128. We call this idea training with mini-batches.
Step 1: Representing Words
In this model, we’ll be representing tweets as bag-of-words (BOW) representations. BOW representations are vectors where each element index represents a different word and its value represents the number of times this word appears in our sentence. This means that each sentence will be represented by a vector that is vocab_size long. Our output labels will be represented as a vector of size n_classes (3). We get this data with some utility functions:
End of explanation
data_placeholder = tf.placeholder(tf.float32, shape=(None, vocab_size), name='data_placeholder')
'''TODO: Make the labels placeholder.''';
labels_placeholder = #todo
Explanation: Step 2: Making Placeholders
So we have our data loaded as numpy arrays. But remember, TensorFlow graphs begin with generic placeholder inputs, not actual data. We feed the actual data in later once the full graph has been defined. We define our placeholders like this:
End of explanation
# Define Network Parameters
# Here, we can define how many units will be in each hidden layer.
n_hidden_units_h0 = 512
n_hidden_units_h1 = 256
# Weights going from input to first hidden layer.
# The first value passed into tf.Variable is initialization of the variable.
# We initialize our weights to be sampled from a truncated normal, as this does something
# called symmetry breaking and keeps the neural network from getting stuck at the start.
# Since the weight matrix multiplies the previous layer to get inputs to the next layer,
# its size is prev_layer_size x next_layer_size.
h0_weights = tf.Variable(
tf.truncated_normal([vocab_size, n_hidden_units_h0]),
name='h0_weights')
h0_biases = tf.Variable(tf.zeros([n_hidden_units_h0]),
name='h0_biases')
'''TODO: Set up variables for the weights going into the second hidden layer.
You can check out the tf.Variable API here: https://www.tensorflow.org/api_docs/python/tf/Variable.
''';
h1_weights = #todo
h1_biases = #todo
# Weights going into the output layer.
out_weights = tf.Variable(
tf.truncated_normal([n_hidden_units_h1, n_classes]),
name='out_weights')
out_biases = tf.Variable(tf.zeros([n_classes]),
name='out_biases')
Explanation: Why Do We Pass in None?
A note about ‘None’ and fluid-sized dimensions:
You may notice that the first dimension of shape of data_placeholder is ‘None’. data_placeholder should have shape (num_tweets, vocab_size). However, we don’t know how many tweets we are going to be passing in at a time, num_tweets is unknown. Its possible that we only want to pass in 1 tweet at a time, or 30, or 1,000. Thankfully, TensorFlow allows us to specify placeholders with fluid-sized dimensions. We can use None to specify some fluid dimension of our shape. When our data eventually gets passed in as a numpy array, TensorFlow can figure out what the value of the fluid-size dimension should be.
Step 3: Define Network Parameters
Let’s now define and initialize our network parameters.
We'll define our model parameters using tf.Variable. When you create a tf.Variable you pass a Tensor as its initial value to the Variable() constructor. A Tensor is a term for any N-dimensional matrix. There are a ton of different initial Tensor value functions you can use (full list). All these functions take a list argument that determines their shape. Here we use tf.truncated_normal for our weights, and tf.zeros for our biases. It's important that the shapes of these parameters are compatible. We'll be matrix-multiplying the weights, so the last dimension of the previous weight matrix must equal the first dimension of the next weight matrix. Notice this pattern in the following tensor initialization code. Lastly, notice the size of the tensor for our last weights. We are predicting a vector of size n_classes so our network needs to end with n_classes nodes.
End of explanation
# Define Computation Graphs
hidden0 = tf.nn.relu(tf.matmul(data_placeholder, h0_weights) + h0_biases)
'''TODO: write the computation to get the output of the second hidden layer.''';
hidden1 = #todo
logits = tf.matmul(hidden1, out_weights) + out_biases
Explanation: Step 4: Build Computation Graph
Now let’s define our computation graph.
Our first operation in our graph is a tf.matmul of our data input and our first set of weights. tf.matmul performs a matrix multiplication of two tensors. This is why it is so important that the dimensions of data_placeholder and h0_weights align (dimension 1 of data_placeholder must equal dimension 0 of h0_weights). We then just add the h0_bias variable and perform a nonlinearity transformation, in this case we use tf.nn.relu (ReLU). We do this again for the next hidden layer, and the final output logits.
End of explanation
'''TODO: Define the loss. Use tf.nn.softmax_cross_entropy_with_logits.'''
loss = #todo
learning_rate = 0.0002
# Define the optimizer operation. This is what will take the derivative of the loss
# with respect to each of our parameters and try to minimize it.
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
prediction = tf.nn.softmax(logits)
# Compute the accuracy
prediction_is_correct = tf.equal(tf.argmax(logits, 1), tf.argmax(labels_placeholder, 1))
accuracy = tf.reduce_mean(tf.cast(prediction_is_correct, tf.float32))
Explanation: Step 5: Define a Loss Function
We have defined our network, but we need a way to train it. Training a network comes down to optimizing our network to minimize a loss function, or a measure of how good we're doing. Then, we can take the gradient with respect to that performance and move in the right direction.
Since we are doing classification (pos vs neg), a common loss function to use is cross entropy:
$$L( \Theta ) = - \sum_i y_i'\log{y_i} $$
where $y$ is our predicted probability distribution and $y'$ is the true distribution. We can access this function in tensorflow with tf.nn.softmax_cross_entropy_with_logits.
Note that this loss is 0 only when the model assigns probability 1 to the correct class, and it grows as that probability shrinks.
End of explanation
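As a quick worked example of our own: with true one-hot label $y' = [0, 1, 0]$ and predicted distribution $y = [0.2, 0.7, 0.1]$, the loss is $L = -\log 0.7 \approx 0.36$; a near-perfect prediction like $y = [0.001, 0.998, 0.001]$ drives the loss toward 0.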
num_steps = 3000
batch_size = 128
with tf.Session() as session:
# this operation initializes all the variables we made earlier.
tf.global_variables_initializer().run()
for step in range(num_steps):
# Generate a minibatch.
offset = (step * batch_size) % (X_train.shape[0] - batch_size)
batch_data = X_train[offset:(offset + batch_size), :]
batch_labels = y_train[offset:(offset + batch_size), :]
# Create a dictionary to pass in the batch data.
feed_dict_train = {data_placeholder: batch_data, labels_placeholder : batch_labels}
# Run the optimizer, the loss, the predictions.
# We can run multiple things at once and get their outputs.
_, loss_value_train, predictions_value_train, accuracy_value_train = session.run(
[optimizer, loss, prediction, accuracy], feed_dict=feed_dict_train)
# Print stuff every once in a while.
if (step % 100 == 0):
print("Minibatch train loss at step", step, ":", loss_value_train)
print("Minibatch train accuracy: %.3f%%" % (accuracy_value_train*100))
feed_dict_test = {data_placeholder: X_test, labels_placeholder: y_test}
loss_value_test, predictions_value_test, accuracy_value_test = session.run(
[loss, prediction, accuracy], feed_dict=feed_dict_test)
print("Test loss: %.3f" % loss_value_test)
print("Test accuracy: %.3f%%" % (accuracy_value_test*100))
Explanation: Quick Conceptual Note:
Nearly everything we do in TensorFlow is an operation with inputs and outputs. Our loss variable is an operation, that takes the output of the last layer of the net, which takes the output of the 2nd-to-last layer of the net, and so on. Our loss can be traced back to the input as a single function. This is our full computation graph. We pass this to our optimizer which is able to compute the gradient for this computation graph and adjust all the weights in it to minimize the loss.
Step 6: Training our Net
We have our network, our loss function, and our optimizer ready; now we just need to pass in the data to train it. We pass in the data in chunks called mini-batches. We do backpropagation at the end of a batch based on the loss that results from all the examples in the batch. Using batches is just like Stochastic Gradient Descent, except instead of updating parameters after each example, we update them based on the gradient computed after several examples.
To measure how well we're doing, we can't just look at how well our model performs on its training data. It could be just memorizing the training data and performing terribly on data it hasn't seen before. To really measure how it performs in the wild, we need to present it with unseen data, and we can tune our hyper-parameters (like learning rate, number of layers, etc.) over this first unseen set, which we call our development (or validation) set. However, given that we optimized our hyper-parameters to the development set, to get a truly fair assessment of the model, we test it with respect to a held-out test set at the end, and generally report those numbers.
For now, we'll just use a training set and a testing set. We'll train with the training set and evaluate on the test set to see how well our model performs.
End of explanation
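One simple way to randomize mini-batches (a possible improvement sketched by us, not what the notebook does) is to shuffle the training set once per epoch before slicing batches:
perm = np.random.permutation(X_train.shape[0])  # a new random order each epoch
X_shuffled, y_shuffled = X_train[perm], y_train[perm]
# then slice batch_data / batch_labels from X_shuffled, y_shuffled exactly as in the loop above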
a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
c = tf.add(a, b)
d = tf.subtract(b, 1)
e = tf.multiply(c, d)
with tf.Session() as session:
a_data, b_data = 3.0, 6.0
feed_dict = {a: a_data, b: b_data}
output = session.run([e], feed_dict=feed_dict)
print(output) # 45.0
n_input_nodes = 2
n_output_nodes = 1
x = tf.placeholder(tf.float32, (None, n_input_nodes))
W = tf.Variable(tf.ones((n_input_nodes, n_output_nodes)), dtype=tf.float32)
b = tf.Variable(tf.zeros(n_output_nodes), dtype=tf.float32)
'''TODO: Define the operation for z (use tf.matmul to multiply W and x).'''
z = tf.matmul(x, W) + b
'''TODO: Define the operation for out (use tf.sigmoid).'''
out = tf.sigmoid(z)
test_input = [[0.5, 0.5]]
with tf.Session() as session:
tf.global_variables_initializer().run(session=session)
feed_dict = {x: test_input}
output = session.run([out], feed_dict=feed_dict)
print(output[0]) # This should output 0.73105. If not, double-check your code above
'''TODO: manually optimize weight_values and bias_value to classify points'''
# Modify weight_values, bias_value in the above code to adjust the decision boundary
# See if you can classify all the points correctly (all markers green)
weight_values = np.array([[-0.1], [0.2]]) # TODO change values and re-run
bias_value = np.array([[0.5]]) #TODO change values and re-run
# A pretty good boundary is made with:
weight_values = np.array([[0.03], [0.12]])
bias_value = np.array([[-0.5]])
x = tf.placeholder(tf.float32, (None, 2), name='x')
W = tf.Variable(weight_values, name='W', dtype=tf.float32)
b = tf.Variable(bias_value, name='b', dtype=tf.float32)
z = tf.matmul(x, W) + b
out = tf.sigmoid(z)
data = np.array([[2, 7], [1, 7], [3, 1], [3, 3], [4, 3], [4, 6], [6, 5], [7, 7], [7, 5], [2, 4], [2, 2]])
y = np.array([1, 1, 0, 0, 0, 1, 1, 1, 1, 0, 0])
with tf.Session() as session:
tf.global_variables_initializer().run(session=session)
utils.classify_and_plot(data, y, x, out, session)
# load data
X, y, index_to_word, sentences = utils.load_sentiment_data_bow()
X_train, y_train, X_test, y_test = utils.split_data(X, y)
vocab_size = X.shape[1]
n_classes = y.shape[1]
print("Tweet:", sentences[5])
print("Label:", y[5])
print("Bag of Words Representation:", X_train[5])
data_placeholder = tf.placeholder(tf.float32, shape=(None, vocab_size), name='data_placeholder')
'''TODO: Make the labels placeholder.''';
labels_placeholder = tf.placeholder(tf.float32, shape=(None, n_classes), name='labels_placeholder')
# Define Network Parameters
# Here, we can define how many units will be in each hidden layer.
n_hidden_units_h0 = 512
n_hidden_units_h1 = 256
# Weights going from input to first hidden layer.
# The first value passed into tf.Variable is initialization of the variable.
# We initialize our weights to be sampled from a truncated normal, as this does something
# called symmetry breaking and keeps the neural network from getting stuck at the start.
# Since the weight matrix multiplies the previous layer to get inputs to the next layer,
# its size is prev_layer_size x next_layer_size.
h0_weights = tf.Variable(
tf.truncated_normal([vocab_size, n_hidden_units_h0]),
name='h0_weights')
h0_biases = tf.Variable(tf.zeros([n_hidden_units_h0]),
name='h0_biases')
'''TODO: Set up variables for the weights going into the second hidden layer.
You can check out the tf.Variable API here: https://www.tensorflow.org/api_docs/python/tf/Variable.
''';
h1_weights = tf.Variable(
tf.truncated_normal([n_hidden_units_h0, n_hidden_units_h1]),
name='h1_weights')
h1_biases = tf.Variable(tf.zeros([n_hidden_units_h1]),
name='h1_biases')
# Weights going into the output layer.
out_weights = tf.Variable(
tf.truncated_normal([n_hidden_units_h1, n_classes]),
name='out_weights')
out_biases = tf.Variable(tf.zeros([n_classes]),
name='out_biases')
# Define Computation Graphs
hidden0 = tf.nn.relu(tf.matmul(data_placeholder, h0_weights) + h0_biases)
'''TODO: write the computation to get the output of the second hidden layer.''';
hidden1 = tf.nn.relu(tf.matmul(hidden0, h1_weights) + h1_biases)
logits = tf.matmul(hidden1, out_weights) + out_biases
'''TODO: Define the loss. Use tf.nn.softmax_cross_entropy_with_logits.'''
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels_placeholder))
learning_rate = 0.0002
# Define the optimizer operation. This is what will take the derivate of the loss
# with respect to each of our parameters and try to minimize it.
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
prediction = tf.nn.softmax(logits)
# Compute the accuracy
prediction_is_correct = tf.equal(tf.argmax(logits, 1), tf.argmax(labels_placeholder, 1))
accuracy = tf.reduce_mean(tf.cast(prediction_is_correct, tf.float32))
num_steps = 3000
batch_size = 128
with tf.Session() as session:
# this operation initializes all the variables we made earlier.
tf.global_variables_initializer().run()
for step in range(num_steps):
# Generate a minibatch.
offset = (step * batch_size) % (X_train.shape[0] - batch_size)
batch_data = X_train[offset:(offset + batch_size), :]
batch_labels = y_train[offset:(offset + batch_size), :]
# Create a dictionary to pass in the batch data.
feed_dict_train = {data_placeholder: batch_data, labels_placeholder : batch_labels}
# Run the optimizer, the loss, the predictions.
# We can run multiple things at once and get their outputs.
_, loss_value_train, predictions_value_train, accuracy_value_train = session.run(
[optimizer, loss, prediction, accuracy], feed_dict=feed_dict_train)
# Print stuff every once in a while.
if (step % 100 == 0):
print("Minibatch train loss at step", step, ":", loss_value_train)
print("Minibatch train accuracy: %.3f%%" % (accuracy_value_train*100))
feed_dict_test = {data_placeholder: X_test, labels_placeholder: y_test}
loss_value_test, predictions_value_test, accuracy_value_test = session.run(
[loss, prediction, accuracy], feed_dict=feed_dict_test)
print("Test loss: %.3f" % loss_value_test)
print("Test accuracy: %.3f%%" % (accuracy_value_test*100))
Explanation: Running this code, you’ll see the network train and output its performance as it learns. I was able to get it to 65.5% accuracy. This is just OK, considering random guessing gets you 33.3% accuracy. In the next tutorial, you'll learn some ways to improve upon this.
Concluding Thoughts
This was a brief introduction into TensorFlow. There is so, so much more to learn and explore, but hopefully this has given you some base knowledge to expand upon. As an additional exercise, you can see what you can do with this code to improve the performance. Ideas include: randomizing mini-batches, making the network deeper, using word embeddings (see below) rather than bag-of-words, trying different optimizers (like Adam), different weight initializations. We’ll explore some of these tomorrow.
More on Word Embeddings
In this lab we used Bag-of-Words to represent a tweet. Word Embeddings are a more meaningful representation. The basic idea is we represent a word with a vector $\phi$ by the context the word appears in. We do this by training a neural network to predict the context of words across a large training set. The weights of that neural network can then be thought of as a dense and useful representation that captures context. This is useful because now our representations of words captures actual semantic similarity.
Word Embeddings capture all kinds of useful semantic relationships. For example, one cool emergent property is $ \phi(king) - \phi(queen) = \phi(man) - \phi(woman)$. To learn more about the magic behind word embeddings we recommend Chris Olah's blog post. A common tool for generating Word Embeddings is word2vec.
Solutions
End of explanation |
1,221 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic 1D non-linear regression with Keras
TODO
Step1: Make the dataset
Step2: Make the regressor | Python Code:
import tensorflow as tf
tf.__version__
import keras
keras.__version__
import h5py
h5py.__version__
import pydot
pydot.__version__
Explanation: Basic 1D non-linear regression with Keras
TODO: see https://stackoverflow.com/questions/44998910/keras-model-to-fit-polynomial
Install Keras
https://keras.io/#installation
Install dependencies
Install TensorFlow backend: https://www.tensorflow.org/install/
pip install tensorflow
Install h5py (required if you plan on saving Keras models to disk): http://docs.h5py.org/en/latest/build.html#wheels
pip install h5py
Install pydot (used by visualization utilities to plot model graphs): https://github.com/pydot/pydot#installation
pip install pydot
Install Keras
pip install keras
Import packages and check versions
End of explanation
df_train = gen_1d_polynomial_samples(n_samples=100, noise_std=0.05)
x_train = df_train.x.values
y_train = df_train.y.values
plt.plot(x_train, y_train, ".k");
df_test = gen_1d_polynomial_samples(n_samples=100, noise_std=None)
x_test = df_test.x.values
y_test = df_test.y.values
plt.plot(x_test, y_test, ".k");
Explanation: Make the dataset
End of explanation
model = keras.models.Sequential()
#model.add(keras.layers.Dense(units=1000, activation='relu', input_dim=1))
#model.add(keras.layers.Dense(units=1))
#model.add(keras.layers.Dense(units=1000, activation='relu'))
#model.add(keras.layers.Dense(units=1))
model.add(keras.layers.Dense(units=5, activation='relu', input_dim=1))
model.add(keras.layers.Dense(units=1))
model.add(keras.layers.Dense(units=5, activation='relu'))
model.add(keras.layers.Dense(units=1))
model.add(keras.layers.Dense(units=5, activation='relu'))
model.add(keras.layers.Dense(units=1))
model.compile(loss='mse',
optimizer='adam')
model.summary()
hist = model.fit(x_train, y_train, batch_size=100, epochs=3000, verbose=None)
plt.plot(hist.history['loss']);
model.evaluate(x_test, y_test)
y_predicted = model.predict(x_test)
plt.plot(x_test, y_test, ".r")
plt.plot(x_test, y_predicted, ".k");
Explanation: Make the regressor
End of explanation |
1,222 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Model Fitting
One of the most common things in scientific computing is model fitting. Numerical Recipes devotes a number of chapters to this.
scipy "curve_fit"
astropy.modeling
lmfit (emcee) - Levenberg-Marquardt
pyspeckit
We will also need to be able to read data. We will use pickle (-01) and an ascii file reader (-02)
Step1: reading a spectrum from n6503 using pickle
you may need to re-visit n6503-case1 and reset peakpos in In[18] and execute In[20] and In[19] to get the spectrum file, or pick whatever you have here
Step2: An alternative to executing this shell code is to open a terminal session within Jupyter! Go back to the Jupyter home screen, and select New->Terminal from the top left pulldown. Then navigate to the notebooks directory, and see what pickle files you have.
Step3: Note the conversion factor from $\sigma$ to FWHM is $2\sqrt{2\ln{2}} \approx 2.355$, see also
https
Step4: Q1
Step5: Q1
Step6: Fitting a straight line
One of the most used and abused fitting routines is fitting a straight line.
Step7: or using the more general curve_fit from scipy
Step8: More complicated curves
See http | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import math
Explanation: Model Fitting
One of the most common things in scientific computing is model fitting. Numerical Recipes devotes a number of chapters to this.
scipy "curve_fit"
astropy.modeling
lmfit (emcee) - Levenberg-Marquardt
pyspeckit
We will also need to be able to read data. We will use pickle (-01) and an ascii file reader (-02)
End of explanation
!ls -l n6503-*.p
Explanation: reading a spectrum from n6503 using pickle
you may need to re-visit n6503-case1 and reset peakpos in In[18] and execute In[20] and In[19] to get the spectrum file, or pick whatever you have here:
End of explanation
try:
import cPickle as pickle
except:
import pickle
sp = pickle.load(open("n6503-sp.p","rb"))
#
# go query this object before going on...... because we don't remember
#
velz = sp['z']
flux = sp['i']
plt.plot(velz,flux)
plt.xlabel(sp['zunit'])
plt.ylabel(sp['iunit'])
plt.title("Pos: %s %s" % (str(sp['xpos']),str(sp['ypos'])));
# compute moments of this spectrum to get an idea what the mean and dispersion of this signal is
# recall this method can easily result in zdisp < 0 for noisy data and becomes unreliable if
# the signal to noise is not high
tmp1 = flux*velz
tmp2 = flux*velz*velz
zmean = tmp1.sum()/flux.sum()
zdisp = tmp2.sum()/flux.sum() - zmean*zmean
print("mean,var:",zmean,zdisp)
if zdisp > 0:
sigma = math.sqrt(zdisp)
print("sigma,FWHM:",sigma,sigma*2.355)
Explanation: An alternative to executing this shell code is to open a terminal session within Jupyter! Go back to the Jupyter home screen, and select New->Terminal from the top left pulldown. Then navigate to the notebooks directory, and see what pickle files you have.
End of explanation
# noisy spectra can easily result in bogus values for mean and dispersion. Let's try something else:
imax = flux.argmax()
print("Max at %d: %g %g" % (imax, velz[imax], flux[imax]))
nn = 3 # pick a few nearest neighbors
flux1 = flux[imax-nn:imax+nn]
velz1 = velz[imax-nn:imax+nn]
tmp1 = flux1*velz1
tmp2 = flux1*velz1*velz1
zmean1 = tmp1.sum()/flux1.sum()
zdisp1 = tmp2.sum()/flux1.sum() - zmean1*zmean1
print("mean,var:",zmean1,zdisp1)
if zdisp1 > 0:
sigma1 = math.sqrt(zdisp1)
print("sigma,FWHM:",sigma1,sigma1*2.355)
Explanation: Note the conversion factor from $\sigma$ to FWHM is $2\sqrt{2\ln{2}} \approx 2.355$, see also
https://en.wikipedia.org/wiki/Full_width_at_half_maximum
End of explanation
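# Quick sanity check on that conversion factor (supplementary, not part of the original notebook):
# the FWHM of a Gaussian is 2*sqrt(2*ln 2) * sigma ~= 2.3548 * sigma
import math
print(2.0 * math.sqrt(2.0 * math.log(2.0)))   # ~2.35482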
from scipy.optimize import curve_fit
def gauss(x, *p):
# if len(p) != 3: raise ValueError("Error, found %d, (%s), need 3" % (len(p),str(p)))
A, mu, sigma = p
return A*np.exp(-(x-mu)**2/(2.*sigma**2))
# p0 is the initial guess for the fitting coefficients (A, mu and sigma above, in that order)
# does it matter what the initial conditions are?
p0 = [0.01, 10, 10]
#p0 = [0.004, 24, 6]
#p0 = [0.005, -50, 5]
#p0 = [0.007, 100, 22, 0]
coeff, cm = curve_fit(gauss, velz, flux, p0=p0)
flux_fit = gauss(velz, *coeff)
plt.plot(velz,flux, label='Data')
plt.plot(velz,flux_fit, label='Fit')
plt.legend()
print("Fitted amp :",coeff[0])
print("Fitted mean :",coeff[1])
print("Fitted sigma and FWHM :",coeff[2], coeff[2]*2.355)
print("Covariance Matrix :\n",cm)
# what are now the errors in the fitted values?
print("error amp:",math.sqrt(cm[0][0]))
print("error mean:",math.sqrt(cm[1][1]))
print("error sigma:",math.sqrt(cm[2][2]))
Explanation: Q1: In the previous two cells some common code can be extracted, so you can refactor your code.
Code refactoring is the process of restructuring existing computer code (changing the factoring) without changing its external behavior. Refactoring improves nonfunctional attributes of the software.
The scipy module has a large number of optimization and fitting routines that work with numpy arrays. They directly call lower-level C routines, generally making fitting a fast process.
To fit an actual Gaussian, instead of using moments, we use the curve_fit
function in scipy.optimize:
End of explanation
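# One possible answer to Q1 -- factor the repeated moment computation into a helper.
# This is an illustrative refactoring sketch; the function name is my own choice.
def spectral_moments(vel, flx):
    # intensity-weighted mean and variance of a spectrum
    zmean = (flx * vel).sum() / flx.sum()
    zvar = (flx * vel * vel).sum() / flx.sum() - zmean * zmean
    return zmean, zvar

zmean, zdisp = spectral_moments(velz, flux)       # full spectrum
zmean1, zdisp1 = spectral_moments(velz1, flux1)   # nearest-neighbor window around the peak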
import pyspeckit
# set up a gauss (amp,center,sigma)
xaxis = np.linspace(-50.0,150.0,100)
amp = 1.0
sigma = 10.0
center = 50.0
synth_data = amp*np.exp(-(xaxis-center)**2/(sigma**2 * 2.))
# Add 10% noise (but fix the random seed)
np.random.seed(123)
stddev = 0.1
noise = np.random.randn(xaxis.size)*stddev
error = stddev*np.ones_like(synth_data)
data = noise+synth_data
# this will give a "blank header" warning, which is fine
sp = pyspeckit.Spectrum(data=data, error=error, xarr=xaxis,
xarrkwargs={'unit':'km/s'},
unit='erg/s/cm^2/AA')
sp.plotter()
# Fit with automatic guesses
sp.specfit(fittype='gaussian')
# Fit with input guesses
# The guesses initialize the fitter
# This approach uses the 0th, 1st, and 2nd moments
amplitude_guess = data.max()
center_guess = (data*xaxis).sum()/data.sum()
width_guess = data.sum() / amplitude_guess / np.sqrt(2*np.pi)
guesses = [amplitude_guess, center_guess, width_guess]
sp.specfit(fittype='gaussian', guesses=guesses)
sp.plotter(errstyle='fill')
sp.specfit.plot_fit()
Explanation: Q1: extend the gauss model to have a baseline that is not 0.
PySpecKit
PySpecKit is an extensible spectroscopic analysis toolkit for astronomy. See
http://pyspeckit.bitbucket.org/html/sphinx/index.html
Installing this with
pip install pyspeckit
resulted in an error
AttributeError: module 'distutils.config' has no attribute 'ConfigParser'
turns out this was a python2/3 hack that was needed. The current released version of pyspeckit did not handle this. A manual install of the development release solved this, although a update to pip may be needed as well:
pip install --upgrade pip
pip install https://bitbucket.org/pyspeckit/pyspeckit/get/master.tar.gz
modify to run
The cell below is an adapted version of a gaussian fit case from the pySpecKit manual. By default, this will create some known data with noise. Copy the cell and change it to make it work with our spectrum from n6503. How does it compare to scipy's curve_fit ? Note that in this method initial conditions are generated to help in a robust way of fitting the gauss.
End of explanation
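# A possible adaptation of the cell above to the n6503 spectrum loaded earlier.
# Untested sketch: it reuses only the pyspeckit calls already shown above, and the
# flux unit string is a placeholder guess rather than the real unit of the data.
sp_n6503 = pyspeckit.Spectrum(data=flux, xarr=velz,
                              xarrkwargs={'unit':'km/s'},
                              unit='Jy/beam')
sp_n6503.plotter()
sp_n6503.specfit(fittype='gaussian')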
np.random.seed(123)
x = np.linspace(0.0,10.0,10)
y = 2.0 + 3.0 * x + np.random.normal(2.0,2.0,len(x))
plt.scatter(x,y)
from scipy import stats
slope, intercept, r_value, p_value, std_err = stats.linregress(x,y)
print(slope,intercept,r_value,p_value,std_err)
Explanation: Fitting a straight line
One of the most used and abused fitting routines is fitting a straight line.
End of explanation
from scipy.optimize import curve_fit
def line(x, A, B):
return A + B*x
coef,cm = curve_fit(line, x, y)
print(coef,cm)
#
# Q: how would you now plot the data and overplot the (model) line
Explanation: or using the more general curve_fit from scipy
End of explanation
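# One way to answer the question in the cell above (supplementary sketch):
plt.scatter(x, y, label='data')
plt.plot(x, line(x, *coef), 'r-', label='fit: A=%.2f, B=%.2f' % tuple(coef))
plt.legend();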
from astropy.extern.six.moves.urllib import request
url = 'http://python4astronomers.github.com/_downloads/3c273.fits'
open('3c273.fits', 'wb').write(request.urlopen(url).read())
!ls -l 3c273.fits
from astropy.io import fits
dat = fits.open('3c273.fits')[1].data
wlen = dat.field('WAVELENGTH')
flux = dat.field('FLUX')
plt.plot(wlen,flux);
Explanation: More complicated curves
See http://python4astronomers.github.io/fitting/spectrum.html for the full example.
End of explanation |
1,223 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Define simple printing functions
Step1: Constructing and allocating dictionaries
The syntax for dictionaries is that {} indicates an empty dictionary
Step2: There are multiple ways to construct a dictionary when the key/value pairs are known beforehand. The following two snippets are equivalent.
Step3: Often an ordered list of keys and values are available as lists, and
it is desirable to create a dictionary from these lists. There are a
number of ways to do this, including
Step4: Adding new data to the dict
Step5: Dictionaries are a dynamic data type, and any object can be used as a value type, including integers, floats, lists, and other dicts, for example
Step6: Accessing the data
It is always possible to get access to the key/value pairs that are contained in the dictionary, and the following
functions help with this
Step7: Iterating over key/values
The following two cells are nearly equivalent.
In order to understand how they differ, it will be helpful to confer with Python documentation on iterators and generators
http
Step8: Filtering and mapping dictionaries | Python Code:
from __future__ import print_function
import json
def print_dict(dd):
print(json.dumps(dd, indent=2))
Explanation: Define simple printing functions
End of explanation
d1 = dict()
d2 = {}
print_dict(d1)
print_dict(d2)
Explanation: Constructing and allocating dictionaries
The syntax for dictionaries is that {} indicates an empty dictionary
End of explanation
d3 = {
'one': 1,
'two': 2
}
print_dict(d3)
d4 = dict(one=1, two=2)
print_dict(d4)
Explanation: There are multiple ways to construct a dictionary when the key/value pairs are known beforehand. The following two snippets are equivalent.
End of explanation
keys = ['one', 'two', 'three']
values = [1, 2, 3]
d5 = {key: value for key, value in zip(keys, values)}
print_dict(d5)
Explanation: Often an ordered list of keys and values are available as lists, and
it is desirable to create a dictionary from these lists. There are a
number of ways to do this, including:
End of explanation
d1['key_1'] = 1
d1['key_2'] = False
print_dict(d1)
Explanation: Adding new data to the dict
End of explanation
d1['list_key'] = [1, 2, 3]
print_dict(d1)
d1['dict_key'] = {'one': 1, 'two': 2}
print_dict(d1)
del d1['key_1']
print_dict(d1)
Explanation: Dictionaries are a dynamic data type, and any object can be used as a value type, including integers, floats, lists, and other dicts, for example:
End of explanation
print(d1.keys())
for item in d1:
print(item)
d1['dict_key']['one']
Explanation: Accessing the data
It is always possible to get access to the key/value pairs that are contained in the dictionary, and the following
functions help with this:
End of explanation
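# Supplementary note: when a key might be missing, dict.get and the `in` operator
# avoid a KeyError (the 'missing' key below is deliberately not in d1):
print(d1.get('dict_key'))        # value if the key exists
print(d1.get('missing', 'n/a'))  # default instead of an exception
print('list_key' in d1)          # membership test over the keys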
for key, value in d1.items():
print(key, value)
for key, value in d1.iteritems(): # Only in Python2 (.items() returns an iterator in Python3)
print(key, value)
print(d1.keys())
print(d1.values())
Explanation: Iterating over key/values
The following two cells are nearly equivalent.
In order to understand how they differ, it will be helpful to confer with Python documentation on iterators and generators
http://anandology.com/python-practice-book/iterators.html
End of explanation
def dict_only(key_value):
return type(key_value[1]) is dict
print('All dictionary elements:')
print(list(filter(dict_only, d1.items())))
print('Same as above, but with inline function (lambda):')
print(list(filter(lambda key_value: type(key_value[1]) is dict, d1.items())))
Explanation: Filtering and mapping dictionaries
End of explanation |
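# The "mapping" half of the section title, as a small supplementary example:
# build a new dict with the same keys but transformed values.
type_names = {key: type(value).__name__ for key, value in d1.items()}
print(type_names)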
1,224 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Convolutional GANs
In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored in the original 2015 paper and has seen impressive results in generating new images; you can read the original paper here.
You'll be training DCGAN on the Street View House Numbers (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST.
So, we'll need a deeper and more powerful network. This is accomplished through using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what you saw previously are in the generator and discriminator, otherwise the rest of the implementation is the same.
Step1: Getting the data
Here you can download the SVHN dataset. Run the cell above and it'll download to your machine.
Step2: These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above.
Step3: Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.
Step4: Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images.
Step5: Network Inputs
Here, just creating some placeholders like normal.
Step6: Generator
Here you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.
What's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU.
You keep stacking layers up like this until you get the final transposed convolution layer with shape 32x32x3. Below is the architecture used in the original DCGAN paper
Step7: Discriminator
Here you'll build the discriminator. This is basically just a convolutional classifier like you've built before. The input to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, 64 filters in the first layer, then double the depth as you add layers. Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers.
You'll also want to use batch normalization with tf.layers.batch_normalization on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU.
Note
Step9: Model Loss
Calculating the loss like before, nothing new here.
Step11: Optimizers
Not much new here, but notice how the train operations are wrapped in a with tf.control_dependencies block so the batch normalization layers can update their population statistics.
Step12: Building the model
Here we can use the functions we defined about to build the model as a class. This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object.
Step13: Here is a function for displaying generated images.
Step14: And another function we can use to train our network. Notice when we call generator to create the samples to display, we set training to False. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the net.input_real placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an error without it because of the tf.control_dependencies block we created in model_opt.
Step15: Hyperparameters
GANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read the DCGAN paper to see what worked for them.
Exercise | Python Code:
%matplotlib inline
import pickle as pkl
import matplotlib.pyplot as plt
import numpy as np
from scipy.io import loadmat
import tensorflow as tf
!mkdir data
Explanation: Deep Convolutional GANs
In this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored in the original 2015 paper and has seen impressive results in generating new images; you can read the original paper here.
You'll be training DCGAN on the Street View House Numbers (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST.
So, we'll need a deeper and more powerful network. This is accomplished through using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what you saw previously are in the generator and discriminator, otherwise the rest of the implementation is the same.
End of explanation
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
data_dir = 'data/'
if not isdir(data_dir):
raise Exception("Data directory doesn't exist!")
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(data_dir + "train_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/train_32x32.mat',
data_dir + 'train_32x32.mat',
pbar.hook)
if not isfile(data_dir + "test_32x32.mat"):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Testing Set') as pbar:
urlretrieve(
'http://ufldl.stanford.edu/housenumbers/test_32x32.mat',
data_dir + 'test_32x32.mat',
pbar.hook)
Explanation: Getting the data
Here you can download the SVHN dataset. Run the cell above and it'll download to your machine.
End of explanation
trainset = loadmat(data_dir + 'train_32x32.mat')
testset = loadmat(data_dir + 'test_32x32.mat')
Explanation: These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above.
End of explanation
idx = np.random.randint(0, trainset['X'].shape[3], size=36)
fig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),)
for ii, ax in zip(idx, axes.flatten()):
ax.imshow(trainset['X'][:,:,:,ii], aspect='equal')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
plt.subplots_adjust(wspace=0, hspace=0)
Explanation: Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.
End of explanation
def scale(x, feature_range=(-1, 1)):
# scale to (0, 1)
x = ((x - x.min())/(255 - x.min()))
# scale to feature_range
min, max = feature_range
x = x * (max - min) + min
return x
class Dataset:
def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None):
split_idx = int(len(test['y'])*(1 - val_frac))
self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:]
self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:]
self.train_x, self.train_y = train['X'], train['y']
self.train_x = np.rollaxis(self.train_x, 3)
self.valid_x = np.rollaxis(self.valid_x, 3)
self.test_x = np.rollaxis(self.test_x, 3)
if scale_func is None:
self.scaler = scale
else:
self.scaler = scale_func
self.shuffle = shuffle
def batches(self, batch_size):
if self.shuffle:
idx = np.arange(len(self.train_x))
np.random.shuffle(idx)
self.train_x = self.train_x[idx]
self.train_y = self.train_y[idx]
n_batches = len(self.train_y)//batch_size
for ii in range(0, len(self.train_y), batch_size):
x = self.train_x[ii:ii+batch_size]
y = self.train_y[ii:ii+batch_size]
yield self.scaler(x), y
Explanation: Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images.
End of explanation
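# Quick supplementary check of the pieces defined above (assumes trainset/testset are loaded):
dataset = Dataset(trainset, testset)
x_batch, y_batch = next(dataset.batches(64))
print(x_batch.shape, x_batch.min(), x_batch.max())  # expect (64, 32, 32, 3), values roughly in [-1, 1]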
def model_inputs(real_dim, z_dim):
inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real')
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')
return inputs_real, inputs_z
Explanation: Network Inputs
Here, just creating some placeholders like normal.
End of explanation
def generator(z, output_dim, reuse=False, alpha=0.2, training=True):
with tf.variable_scope('generator', reuse=reuse):
# First fully connected layer
x1 = tf.layers.dense(z, 4*4*512)
x1 = tf.reshape(x1, (-1, 4,4,512))
x1 = tf.layers.batch_normalization(x1, training=training)
x1 = tf.maximum(alpha*x1, x1)
x1 = tf.layers.conv2d_transpose(x1, filters=256, kernel_size=5, strides=2, padding='SAME')
x1 = tf.layers.batch_normalization(x1, training=training)
x1 = tf.maximum(alpha*x1, x1)
#8,8,256 now
x1 = tf.layers.conv2d_transpose(x1, filters=128, kernel_size=5, strides=2, padding='SAME')
x1 = tf.layers.batch_normalization(x1, training=training)
x1 = tf.maximum(alpha*x1, x1)
#16,16,128 now
# Output layer, 32x32x3
logits = tf.layers.conv2d_transpose(x1, filters=output_dim, kernel_size=5, strides=2, padding='SAME')
out = tf.tanh(logits)
return out
Explanation: Generator
Here you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.
What's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU.
You keep stacking layers up like this until you get the final transposed convolution layer with shape 32x32x3. Below is the architecture used in the original DCGAN paper:
Note that the final layer here is 64x64x3, while for our SVHN dataset, we only want it to be 32x32x3.
Exercise: Build the transposed convolutional network for the generator in the function below. Be sure to use leaky ReLUs on all the layers except for the last tanh layer, as well as batch normalization on all the transposed convolutional layers except the last one.
End of explanation
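# Optional helper (my own refactoring): the repeated tf.maximum(alpha*x, x) pattern
# below is just a leaky ReLU, so a tiny function keeps the layer code readable.
def lrelu(x, alpha=0.2):
    # pass positive values through, scale negatives by alpha
    return tf.maximum(alpha * x, x)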
def discriminator(x, reuse=False, alpha=0.2):
with tf.variable_scope('discriminator', reuse=reuse):
# Input layer is 32x32x3
x = tf.layers.conv2d(x, filters=64, strides=2, kernel_size=5, padding='SAME')
# dont use batch normalization
x = tf.maximum(alpha*x, x)
#16,16,16 now
x = tf.layers.conv2d(x, filters=128, strides=2, kernel_size=5, padding='SAME')
x = tf.layers.batch_normalization(x, training=True)
x = tf.maximum(alpha*x, x)
#8x8x32 now
x = tf.layers.conv2d(x, filters=256, strides=2, kernel_size=5, padding='SAME')
x = tf.layers.batch_normalization(x, training=True)
x = tf.maximum(alpha*x, x)
#4x4x64 now
flatten = tf.reshape(x, (-1, 4*4*256))
logits = tf.layers.dense(flatten, 1)
out = tf.sigmoid(logits)
return out, logits
Explanation: Discriminator
Here you'll build the discriminator. This is basically just a convolutional classifier like you've built before. The input to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, 64 filters in the first layer, then double the depth as you add layers. Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers.
You'll also want to use batch normalization with tf.layers.batch_normalization on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU.
Note: in this project, your batch normalization layers will always use batch statistics. (That is, always set training to True.) That's because we are only interested in using the discriminator to help train the generator. However, if you wanted to use the discriminator for inference later, then you would need to set the training parameter appropriately.
Exercise: Build the convolutional network for the discriminator. The input is a 32x32x3 images, the output is a sigmoid plus the logits. Again, use Leaky ReLU activations and batch normalization on all the layers except the first.
End of explanation
def model_loss(input_real, input_z, output_dim, alpha=0.2):
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param out_channel_dim: The number of channels in the output image
:return: A tuple of (discriminator loss, generator loss)
g_model = generator(input_z, output_dim, alpha=alpha)
d_model_real, d_logits_real = discriminator(input_real, alpha=alpha)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha)
d_loss_real = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))
d_loss_fake = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
g_loss = tf.reduce_mean(
tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))
d_loss = d_loss_real + d_loss_fake
return d_loss, g_loss
Explanation: Model Loss
Calculating the loss like before, nothing new here.
End of explanation
def model_opt(d_loss, g_loss, learning_rate, beta1):
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
# Get weights and bias to update
t_vars = tf.trainable_variables()
d_vars = [var for var in t_vars if var.name.startswith('discriminator')]
g_vars = [var for var in t_vars if var.name.startswith('generator')]
# Optimize
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
return d_train_opt, g_train_opt
Explanation: Optimizers
Not much new here, but notice how the train operations are wrapped in a with tf.control_dependencies block so the batch normalization layers can update their population statistics.
End of explanation
class GAN:
def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5):
tf.reset_default_graph()
self.input_real, self.input_z = model_inputs(real_size, z_size)
self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z,
real_size[2], alpha=alpha)
self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, beta1)
Explanation: Building the model
Here we can use the functions we defined about to build the model as a class. This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object.
End of explanation
def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)):
fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols,
sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
ax.axis('off')
img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8)
ax.set_adjustable('box-forced')
im = ax.imshow(img, aspect='equal')
plt.subplots_adjust(wspace=0, hspace=0)
return fig, axes
Explanation: Here is a function for displaying generated images.
End of explanation
def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)):
saver = tf.train.Saver()
sample_z = np.random.uniform(-1, 1, size=(72, z_size))
samples, losses = [], []
steps = 0
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for x, y in dataset.batches(batch_size):
steps += 1
# Sample random noise for G
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))
# Run optimizers
_ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z})
_ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z, net.input_real: x})
if steps % print_every == 0:
# At the end of each epoch, get the losses and print them out
train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x})
train_loss_g = net.g_loss.eval({net.input_z: batch_z})
print("Epoch {}/{}...".format(e+1, epochs),
"Discriminator Loss: {:.4f}...".format(train_loss_d),
"Generator Loss: {:.4f}".format(train_loss_g))
# Save losses to view after training
losses.append((train_loss_d, train_loss_g))
if steps % show_every == 0:
gen_samples = sess.run(
generator(net.input_z, 3, reuse=True, training=False),
feed_dict={net.input_z: sample_z})
samples.append(gen_samples)
_ = view_samples(-1, samples, 6, 12, figsize=figsize)
plt.show()
saver.save(sess, './checkpoints/generator.ckpt')
with open('samples.pkl', 'wb') as f:
pkl.dump(samples, f)
return losses, samples
Explanation: And another function we can use to train our network. Notice when we call generator to create the samples to display, we set training to False. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the net.input_real placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an error without it because of the tf.control_dependencies block we created in model_opt.
End of explanation
real_size = (32,32,3)
z_size = 100
learning_rate = 0.001
batch_size = 64
epochs = 1
alpha = 0.01
beta1 = 0.9
# Create the network
net = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)
# Load the data and train the network here
dataset = Dataset(trainset, testset)
losses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
_ = view_samples(-1, samples, 6, 12, figsize=(10,5))
Explanation: Hyperparameters
GANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read the DCGAN paper to see what worked for them.
Exercise: Find hyperparameters to train this GAN. The values found in the DCGAN paper work well, or you can experiment on your own. In general, you want the discriminator loss to be around 0.3, this means it is correctly classifying images as fake or real about 50% of the time.
End of explanation |
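# Supplementary suggestion only: if the settings above train poorly, the values reported
# in the DCGAN paper (Adam with a smaller learning rate and beta1 = 0.5) are a common
# starting point. Treat these numbers as a hint, not as part of the original exercise.
real_size = (32, 32, 3)
z_size = 100
learning_rate = 0.0002   # DCGAN paper value
batch_size = 128
epochs = 25
alpha = 0.2              # leaky ReLU slope
beta1 = 0.5              # DCGAN paper value
net = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)
dataset = Dataset(trainset, testset)
losses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))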
1,225 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Angular Correlations of Amorphous Materials
This notebook demonstrates calculating the angular correlation of diffraction patterns recorded from an amorphous (or crystalline) material.
The dataset used for this demonstration is a 4-D STEM dataset of a PdNiP deposited thin film glass acquired using a DE-16 camera and a 200 keV FEI Titan electron microscope at 100 fps. The probe size was ~2 nm and the step size was .365 nm, so there is significant probe overlap in the probe positions.
This functionality has been checked to run with pyxem-0.13.2 (May 2021). Bugs are always possible, do not trust the code blindly, and if you experience any issues please report them here: https://github.com/pyxem/pyxem-demos/issues
Step1: <a id='s2'></a>
2 - Polar Reprojection
This section deals with converting the signal to a polar signal. This is probably the most important and difficult part of the analysis. Even small distortions in the pattern or misinterpretation of the center of the diffraction pattern will negatively affect the ability to determine correlations.
There is still some ongoing development on methods for identifying and correcting for these distortions, but a good check is always to perform the correction and make sure that the first amorphous ring is a line after the polar reprojection. In general your eye should be very good at identifying that. Another thing to notice is that after the correlation, if you have small splitting in all of your peaks (especially the self correlation) then most likely your center isn't completely correct.
Step2: Note
Step3: <a id='s3'></a>
3 - Angular Correlations
This section deals with converting the signal to a correlation signal. The most important part here is to properly mask the data. This is important for example if you have a beam stop
Step4: <a id='s4'></a>
4 - Power Spectrum and Correlation Maps
This section deals with visualization of the correlations as correlation maps. These are spatial maps of the structure in some material. | Python Code:
data_path = "data/09/PdNiP_test.hspy"
%matplotlib inline
import pyxem as pxm
import hyperspy.api as hs
pxm.__version__
data = hs.load("./data/09/PdNiP_test.hspy")
Explanation: Angular Correlations of Amorphous Materials
This notebook demonstrates calculating the angular correlation of diffraction patterns recorded from an amorphous (or crystalline) material.
The dataset used for this demonstration is a 4-D STEM dataset of a PdNiP deposited thin film glass acquired using a DE-16 camera and a 200 keV FEI Titan electron microscope at 100 fps. The probe size was ~2 nm and the step size was .365 nm, so there is significant probe overlap in the probe positions.
This functionality has been checked to run with pyxem-0.13.2 (May 2021). Bugs are always possible, do not trust the code blindly, and if you experience any issues please report them here: https://github.com/pyxem/pyxem-demos/issues
Background
Angular Correlations are a very natural extension to variance type studies. They offer more insight into the symmetry of the structures being studied as well as offering the ability to be studied spatially.
Mathematically, the Angular correlation is the angular autocorrelation of some polar unwrapped diffraction pattern I(k):
<p style="text-align: center;">
$ C(k,\phi) = \frac{\langle I(k,\theta)\, I(k,\theta+\phi)\rangle_\theta - \langle I(k,\theta)\rangle_\theta^2}{\langle I(k,\theta)\rangle_\theta^2} $
</p>
This is similar to the radial ("r") variance often calculated in Fluctuation Electron Microscopy.
Contents
<a href='#loa'> Importing & Visualization</a>
<a href='#s2'> Polar Reprojection</a>
<a href='#s3'> Angular Correlation</a>
<a href='#s4'> Power Spectrum and Correlation Maps</a>
<a id='s1'></a>
1 - Importing and Visualization
This section goes over loading the data from the data folder and visualizing the data for further use.
End of explanation
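# Illustrative sketch of the correlation formula above for a single polar pattern
# I(k, theta); this is only a direct numpy transcription of the equation, not how
# pyxem implements get_angular_correlation internally.
import numpy as np

def angular_correlation(polar):
    # polar: 2D array of shape (n_k, n_theta)
    mean_t = polar.mean(axis=1, keepdims=True)                        # <I(k, theta)>_theta
    f = np.fft.fft(polar, axis=1)
    acf = np.fft.ifft(f * np.conj(f), axis=1).real / polar.shape[1]   # <I(theta) I(theta+phi)>_theta
    return (acf - mean_t**2) / mean_t**2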
data.set_signal_type("electron_diffraction")
data.beam_energy=200
data.unit = "k_nm^-1"
mask =data.get_direct_beam_mask(20)
# Affine correction from fitting an ellipse
import numpy as np
center=(31.2,31.7)
affine=np.array([[ 1.03725511, -0.02662789, 0. ],
[-0.02662789, 1.01903215, 0. ],
[ 0. , 0. , 1. ]])
data.set_ai(center=center)
rad = data.get_azimuthal_integral2d(npt=100)
Explanation: <a id='s2'></a>
2 - Polar Reprojection
This section deals with converting the signal to a polar signal. This is probably the most important and difficult part of the analysis. Even small distortions in the pattern or misinterpretation of the center of the diffraction pattern will negatively affect the ability to determine correlations.
There is still some ongoing development on methods for identifying and correcting for these distortions, but a good check is always to perform the correction and make sure that the first amorphous ring is a line after the polar reprojection. In general your eye should be very good at identifying that. Another thing to notice is that after the correlation, if you have small splitting in all of your peaks (especially the self correlation) then most likely your center isn't completely correct.
End of explanation
rad.sum().plot()
Explanation: Note: This isn't perfect, as you can see there is still some distortion that an affine transformation could fix, but for the purposes of this demo it will suffice.
End of explanation
summed = rad.sum()
mask = ((summed>4e6)+(summed<3e5))
mask.plot()
rad.plot(vmax=4000)
cor = rad.get_angular_correlation(mask=mask)
cor.plot()
cor = rad.map(pxm.utils.correlation_utils._correlation, inplace=False, axis=1, normalize=True)
cor.isig[:].plot(vmax=1, vmin=-1)
Explanation: <a id='s3'></a>
3 - Angular Correlations
This section deals with converting the signal to a correlation signal. The most important part here is to properly mask the data. This is important for example if you have a beam stop
End of explanation
power = cor.get_angular_power()
import matplotlib.pyplot as plt
f = plt.figure(figsize=(15,10))
power.plot_symmetries(k_region = [3.,4.5],fig=f)
Explanation: <a id='s4'></a>
4 - Power Spectrum and Correlation Maps
This section deals with visualization of the correlations as correlation maps. These are spatial maps of the structure in some material.
End of explanation |
1,226 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numpy
numpy is a package (module) for (efficient) numerical computing in Python. The emphasis is on efficient computation with arrays, vectors and matrices, including multidimensional structures. It is written in C and Fortran and uses the BLAS library.
Step1: Creating arrays with the numpy module
Step2: We can also use the functions numpy.shape, numpy.size
Step3: What is the difference between the numpy.ndarray type and standard Python lists?
Python lists can contain any kind of objects; that is not the case with numpy.ndarray
numpy.ndarray objects are not dynamic
Step4: Since M is a static object, we cannot do this
Step5: Of course, this is fine
Step6: dtype can be set explicitly
Step7: Typical dtypes are
Step8: Loading data
We often load data from files (locally or from the web). Important formats are csv (comma-separated values) and tsv (tab-separated values).
Step9: With numpy.savetxt we can also do the reverse.
Step10: There is also an internal format for numpy arrays
Step11: Working with arrays
Indexing works as usual.
Step12: Of course, we can also use
Step13: With negative indices we count from the end of the array
Step14: Of course, the same operations exist for multidimensional arrays.
Step15: We can also use so-called masks
Step16: A more interesting example
Step17: Functions on arrays
Step18: In the following example take acts on a list, and the output is an array
Step19: The choose function
Step20: What does this function do?
Vectorizing code
The more work is done with whole-array operations, the faster the code will generally be.
Step21: Default operations on arrays are always defined elementwise.
Step22: How do we get the standard matrix product?
Step23: Matrices can also be multidimensional
Step24: There is also a matrix type. For it, the operations +, -, * behave the way we are used to.
Step25: Of course, the dimensions need to be compatible.
Step26: Some more functions
Step27: Conjugate transpose
Step28: To extract the real and imaginary parts
Step29: Extracting basic information from arrays
Step30: The average daily temperature in Stockholm over the last 200 years was 6.2 C.
Step31: Of course, we can apply all these operations to parts of arrays.
Step32: The format is
Step33: Now it is easy to get a histogram of average monthly temperatures in a few lines.
Step34: Working with multidimensional data
Step35: The shape of an array can be changed without touching the memory, so this can also be applied to large amounts of data.
Step36: The flatten function makes a copy.
Step37: Copying arrays
Step38: If we want to make a new copy, we use the copy function
Step39: The enumerate function gives us both the element and its index
Step41: Vectorizing functions
Step43: We could have done that by hand as well.
Step44: Explicit data conversion. Always creates a new array. | Python Code:
from numpy import *
Explanation: Numpy
numpy is a package (module) for (efficient) numerical computing in Python. The emphasis is on efficient computation with arrays, vectors and matrices, including multidimensional structures. It is written in C and Fortran and uses the BLAS library.
End of explanation
v = array([1,2,3,4])
v
M = array([[1, 2], [3, 4]])
M
type(v), type(M)
v.shape
M.shape
v.size, M.size
Explanation: Creating arrays with the numpy module
End of explanation
shape(M)
size(M)
Explanation: We can also use the functions numpy.shape, numpy.size
End of explanation
M.dtype
Explanation: What is the difference between the numpy.ndarray type and standard Python lists?
Python lists can contain any kind of objects; that is not the case with numpy.ndarray
numpy.ndarray objects are not dynamic: their type is fixed when they are created
for numpy.ndarray many efficient methods important for numerics are implemented
de facto all computations are carried out in C and Fortran through BLAS routines
dtype (data type) gives us information about the type of the data in the array:
End of explanation
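# A small supplementary illustration of the difference:
lst = [1, "two", 3.0]      # a Python list may mix arbitrary object types
arr = array([1, 2, 3])     # a numpy array has a single fixed dtype
print(type(lst[1]), arr.dtype)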
M[0,0] = "hello"
Explanation: Since M is a static object, we cannot do this:
End of explanation
M[0,0]=5
Explanation: Of course, this is fine:
End of explanation
M = array([[1, 2], [3, 4]], dtype=complex)
M
Explanation: dtype can be set explicitly:
End of explanation
x = arange(0, 10, 1) # arguments: start, stop, step
x # 10 is not in the array!
x = arange(-1, 1, 0.1)
x
# here both the start and the end are included!
linspace(0, 10, 25)
logspace(0, 10, 10, base=e)
x, y = mgrid[0:5, 0:5] # similar to meshgrid in MATLAB
x
y
from numpy import random
# uniform distribution on [0,1]
random.rand(5,5)
# standard normal distribution
random.randn(5,5)
# diagonal matrix
diag([1,2,3])
# matrix with an offset diagonal
diag([1,2,3], k=1)
zeros((3,3))
ones((3,3))
Explanation: Typical dtypes are: int, float, complex, bool, object, etc.
But we can also be explicit about the register size: int64, int16, float128, complex128.
Functions that generate arrays
End of explanation
!head tpt-europe.csv
data = genfromtxt('tpt-europe.csv')
data.shape, data.dtype
Explanation: Loading data
We often load data from files (locally or from the web). Important formats are csv (comma-separated values) and tsv (tab-separated values).
End of explanation
M = random.rand(3,3)
M
savetxt("random-matrix.csv", M)
!cat random-matrix.csv
savetxt("random-matrix.csv", M, fmt='%.5f') # s fmt specificiramo format
!cat random-matrix.csv
Explanation: With numpy.savetxt we can also do the reverse.
End of explanation
save("random-matrix.npy", M)
!file random-matrix.npy
load("random-matrix.npy")
M.itemsize # bytes per element
M.nbytes
M.ndim
Explanation: There is also an internal format for numpy arrays:
End of explanation
v[0]
M[1,1]
M
M[1]
Explanation: Working with arrays
Indexing works as usual.
End of explanation
M[1,:] # row 1
M[:,1] # column 1
M[1,:] = 0
M[:,2] = -1
M
A = array([1,2,3,4,5])
A
A[1:3]
A[1:3] = [-2,-3]
A
A[::]
A[::2]
A[:3]
A[3:]
Explanation: Of course, we can also use the : operator:
End of explanation
A = array([1,2,3,4,5])
A[-1] # the last element of the array
A[-3:] # the last three elements
Explanation: With negative indices we count from the end of the array:
End of explanation
A = array([[n+m*10 for n in range(5)] for m in range(5)])
A
A[1:4, 1:4]
A[::2, ::2]
indeksi_redaka = [1, 2, 3]
A[indeksi_redaka]
indeksi_stupaca = [1, 2, -1]
A[indeksi_redaka, indeksi_stupaca]
Explanation: Of course, the same operations exist for multidimensional arrays.
End of explanation
B = array([n for n in range(5)])
B
maska = array([True, False, True, False, False])
B[maska]
maska = array([1,0,1,0,0], dtype=bool)
B[maska]
Explanation: We can also use so-called masks: if the mask is a numpy array of type bool, then the selected elements are those whose mask value is True.
End of explanation
x = arange(0, 10, 0.5)
x
maska = (5 < x) * (x < 7.5)
maska
x[maska]
Explanation: A more interesting example:
End of explanation
indeksi = where(maska)
indeksi
x[indeksi]
print(A)
diag(A)
diag(A, -1)
v2 = arange(-3,3)
v2
indeksi_redaka = [1, 3, 5]
v2[indeksi_redaka]
v2.take(indeksi_redaka)
Explanation: Functions on arrays
End of explanation
take([-3, -2, -1, 0, 1, 2], indeksi_redaka)
Explanation: In the following example take acts on a list, and the output is an array:
End of explanation
koji = [1, 0, 1, 0]
izbori = [[-1,-2,-3,-4], [5,4,3,2]]
choose(koji, izbori)
Explanation: The choose function:
End of explanation
v1 = arange(0, 5)
v1 * 2
v1 + 2
print(A)
A * 2, A + 2
Explanation: What does this function do?
Vectorizing code
The more work is done with whole-array operations, the faster the code will generally be.
End of explanation
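# For the record: choose picks, for each position i, the element izbori[koji[i]][i],
# so the cell above returns [5, -2, 3, -4]. As a small supplementary illustration of
# why vectorized expressions pay off compared to an explicit Python loop:
big = arange(1e6)
%timeit [xi * 2 + 1 for xi in big]   # elementwise Python loop
%timeit big * 2 + 1                  # single vectorized expression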
A * A
v1 * v1
A.shape, v1.shape
print(A,v1)
A * v1
Explanation: Default operations on arrays are always defined elementwise.
End of explanation
dot(A, A)
A @ A # new operator defined in Python 3.5+
matmul(A,A) # @ is essentially shorthand for matmul; dot and matmul are not the same operation (they agree on 1D and 2D arrays)
dot(A, v1)
A @ v1
v1 @ v1 # analogous to dot(v1, v1)
Explanation: How do we get the standard matrix product?
End of explanation
a = random.rand(8,13,13)
b = random.rand(8,13,13)
matmul(a, b).shape
Explanation: Matrices can also be multidimensional
End of explanation
M = matrix(A)
v = matrix(v1).T # to get a column vector
v
M*M
M*v
# scalar (dot) product
v.T * v
v + M*v
Explanation: There is also a matrix type. For it, the operations +, -, * behave the way we are used to.
End of explanation
v = matrix([1,2,3,4,5,6]).T
shape(M), shape(v)
M * v
Explanation: Of course, the dimensions need to be compatible.
End of explanation
C = matrix([[1j, 2j], [3j, 4j]])
C
conjugate(C)
Explanation: Some more functions: inner, outer, cross, kron, tensordot.
End of explanation
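# Quick supplementary examples of a few of those functions:
u = array([1, 2, 3])
w = array([4, 5, 6])
inner(u, w)   # 1*4 + 2*5 + 3*6 = 32
outer(u, w)   # 3x3 matrix with entries u_i * w_j
cross(u, w)   # vector cross product: [-3, 6, -3]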
C.H
Explanation: Conjugate transpose (adjoint):
End of explanation
real(C) # same as C.real
imag(C) # same as C.imag
angle(C+1) # in MATLAB this is the function arg, i.e. the argument (phase) of a complex number
abs(C)
from numpy.linalg import inv, det
inv(C) # same as C.I
C.I * C
det(C)
det(C.I)
Explanation: To extract the real and imaginary parts: real and imag:
End of explanation
# stockholm_td_adj.dat contains weather data for Stockholm
dataStockholm = genfromtxt('stockholm_td_adj.dat')
dataStockholm.shape
# the temperature is in the 4th column (i.e. column number 3)
mean(dataStockholm[:,3])
Explanation: Extracting basic information from arrays
End of explanation
std(dataStockholm[:,3]), var(dataStockholm[:,3])
dataStockholm[:,3].min()
dataStockholm[:,3].max()
d = arange(0, 10)
d
sum(d)
prod(d+1)
# cumulative sum
cumsum(d)
# cumulative product
cumprod(d+1)
# same as: diag(A).sum()
trace(A)
Explanation: The average daily temperature in Stockholm over the last 200 years was 6.2 C.
End of explanation
!head -n 3 stockholm_td_adj.dat
Explanation: Of course, we can apply all these operations to parts of arrays.
End of explanation
# the months are 1., ..., 12.
unique(dataStockholm[:,1])
maska_velj = dataStockholm[:,1] == 2
mean(dataStockholm[maska_velj,3])
Explanation: The format is: year, month, day, average daily temperature, minimum, maximum, location.
Suppose we are only interested in the temperatures in February.
End of explanation
mjeseci = arange(1,13)
mjeseci_prosjek = [mean(dataStockholm[dataStockholm[:,1] == mjesec, 3]) for mjesec in mjeseci]
from pylab import *
%matplotlib inline
fig, ax = subplots()
ax.bar(mjeseci, mjeseci_prosjek)
ax.set_xlabel("Mjesec")
ax.set_ylabel("Prosj. mj. temp.");
Explanation: Sada nije problem doći do histograma za prosječne mjesečne temperature u par redaka.
End of explanation
m = rand(3,3)
m
m.max()
# max in each column
m.max(axis=0)
# max in each row
m.max(axis=1)
Explanation: Working with multidimensional data
End of explanation
A
n, m = A.shape
B = A.reshape((1,n*m))
B
B[0,0:5] = 5 # we changed B
B
A # and by doing so we also changed A
Explanation: The shape of an array can be changed without touching the memory, so this can also be applied to large amounts of data.
End of explanation
B = A.flatten()
B
B[0:5] = 10
B
A # A has stayed the same now
v = array([1,2,3])
shape(v)
# let's turn v into a matrix
v[:, newaxis]
v[:,newaxis].shape
v[newaxis,:].shape
a = array([[1, 2], [3, 4]])
# repeat each element three times
repeat(a, 3)
tile(a, 3)
b = array([[5, 6]])
concatenate((a, b), axis=0)
concatenate((a, b.T), axis=1)
vstack((a,b))
hstack((a,b.T))
Explanation: The flatten function makes a copy.
End of explanation
A = array([[1, 2], [3, 4]])
A
# B is the same as A (no data is copied)
B = A
Explanation: Copying arrays
End of explanation
B = copy(A)
v = array([1,2,3,4])
for element in v:
print (element)
M = array([[1,2], [3,4]])
for row in M:
print ("redak {}".format(row))
for element in row:
print (element)
Explanation: Ako želimo napraviti novu kopiju, koristimo funkciju copy:
End of explanation
for row_idx, row in enumerate(M):
print ("indeks retka {} redak {}".format(row_idx, row))
for col_idx, element in enumerate(row):
print ("col_idx {} element {}".format(col_idx, element))
M[row_idx, col_idx] = element ** 2
Explanation: The enumerate function gives us both the element and its index:
End of explanation
def Theta(x):
Scalar version of the step function.
if x >= 0:
return 1
else:
return 0
Theta(array([-3,-2,-1,0,1,2,3]))
Theta_vec = vectorize(Theta)
Theta_vec(array([-3,-2,-1,0,1,2,3]))
Explanation: Vectorizing functions
End of explanation
def Theta(x):
Vectorized version of the step function.
return 1 * (x >= 0)
Theta(array([-3,-2,-1,0,1,2,3]))
# of course it also works for scalars
Theta(-1.2), Theta(2.6)
M
if (M > 5).any():
print ("barem jedan element iz M je veći od 5")
else:
print ("svi elementi iz M su manji ili jednaki od 5")
if (M > 5).all():
print ("svi elementi iz M su veći od 5")
else:
print ("barem jedan element je manji ili jednak od 5")
Explanation: To smo mogli napraviti i ručno.
End of explanation
M.dtype
M2 = M.astype(float)
M2
M2.dtype
M3 = M.astype(bool)
M3
from verzije import *
from IPython.display import HTML
HTML(print_sysinfo()+info_packages('numpy,matplotlib'))
Explanation: Explicit data conversion. Always creates a new array.
End of explanation |
1,227 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intuit craft demonstration
Kyle Willett (Fellow, Insight Data Science)
<font color='red'>Create a reasonable definition(s) of rule performance.</font>
The definition of rule performance I used involves three related measures of success
Step3: Descriptive statistics
Step6: Plot the distribution of money involved per transaction. There are ~900 cases where no money was exchanged, but an alert was still triggered. These are potential bad data points, and might need to be removed from the sample. I'd consult with other members of the team to determine whether that would be appropriate. The distribution of money spent is roughly log-normal.
Step8: No - every case in the cases table is associated with at least one rule triggering an alert.
Step9: Defining metrics
So they're asking for a dashboard that predicts "rules performance". I have individual cases, some of which had funds withheld because of rules performance, and then some fraction of those which were flagged as actual bad cases following judgement by a human.
So the rules performance is strictly whether a case is likely to have funds automatically withheld and forwarded to a human for review. The badmerch label is another level on top of that; the current success ratio should be a measure of how successful the automated system is.
Based on the outcomes above, a 4
Step11: I have a label that predicts both a specific rule and its associated class for each transaction. So a reasonable ordered set of priorities might be
Step12: So the distribution of outcomes is very different depending on the overall rule type. Let's look at the actual numbers in each category.
Step14: This data splits the number of alerts by the category of the triggering rule and the ultimate outcome. In every category, the most common outcome is that funds were not withheld and there was no corresponding loss. However, the ratio of outcomes varies strongly by rule type. For rules on compliance, more than 80% of cases are benign and flagged as such. The benign fraction drops to 61% for financial risk and 56% for fraud. So the type of rule being broken is strongly correlated with the likelihood of a bad transaction.
Results
Step15: This is one of the initial plots in the mock dashboard. It shows the overall performance of each rule sorted by outcome. Rule 17 stands out because it has only a single triggered alert in the dataset (agent placed funds on hold, but there was no fraud involved - false negative).
Good rules are ones dominated by true positives and where every other category is low; a high true negative rate would indicate that the agents are being accurate, but that the rule is overly sensitive (eg, Rule 31). The best at this by eye is Rule 18.
Next, we'll calculate our metrics of choice (precision, recall, F1) for the dataset when split by rule.
Step16: This is a good overall summary; we have three metrics for each rule, of which the combined F1 is considered to be the most important. For any rule, we can look at the corresponding plot in the dashboard and examine whether F1 is above a chosen threshold value (labeled here as 0.5). Reading from left to right in the top row, for example, Rule 1 is performing well, Rule 2 is acceptable, Rules 3-5 are performing below the desired accuracy, etc.
Splitting by rule type
Step17: Financial risk rules are the largest category, and are mostly cases that were true negatives (money not held and it wasn't a bad transaction). The false negative rate is slightly larger than the true positive, though, indicating that financial risks are missing more than half of the genuinely bad transactions. Fraud rules also have true negatives as the most common category, but a significantly lower false negative rate compard to true positives. So these types are less likely to be missed by the agents. Compliance rules trigger the fewest total number of alerts; the rates of anything except a false negative are all low (81% of these alerts are benign).
Step20: Grouping by type; fraud rules have by a significant amount the best performance across all three metrics. Financial risk has comparable precision, but much worse recall. Compliance is poor across the board.
Cumulative performance of metrics split by rule
The above dashboard is a useful start, since we've defined a metric and looked at how it differs for each rule. However, the data being used was collected over a period of several months, and the data should be examined for variations in the metrics as a function of time. This would examine whether a rule is performing well (and if it improves or degrades with more data), the response of the risk agents to different triggers, and possibly variations in the population of merchants submitting cases.
We'll look at this analysis in the context of an expanding window - for every point in a time series of data, we use data up to and including that point. This gives the cumulative performance as a function of time, which is useful for looking at how the performance of a given rule stabilizes.
Step21: This will be the second set of plots in our dashboard. This shows the results over an expanding window covering the full length of time in the dataset, where the value of the three metrics (precision, recall, F1) track how the rules are performing with respect to the analysts and true outcomes over time.
By definition, data over an expanding window should stabilize as more data comes in and the variance decreases (assuming that the rule definitions, performance of risk agents, and underlying merchant behavior is all the same). Large amounts of recent variation would indicate that we don't know whether the rule is performing well yet.
To assess how much the rules are varying in performance, we'll measure the stability of each metric weighted more heavily toward the most recent results. A simple measure which will use is the largest absolute deviation over the second half of the data.
Step22: Six out of the thirty rules have a variation $\Delta_\mathrm{max,abs} < 0.1$ in the second half of the current data. Of those, two (Rules 7 and 26) have only a handful of datapoints and estimates of the true accuracy are very uncertain. Two others (Rules 2 and 30) more data, although less than 100 points each. Rule 2 has very different behavior starting a few weeks toward the end, sharply increasing both its precision and recall. This could indicate either a difference in merchant tendencies or a re-definition of the existing rule. Rule 30 has shown a gradual improvement from an early nadir, which might be a sign of a set of bad/unlikely transactions earlier and now regressing to the mean. Rule 4 basically only has data in the second half of the set (not stabilized yet) and Rule 5 has a gradually decreasing recall, which may be a counterexample to the trend in Rule 30.
The remainder of the rules (especially for those with a few hundred data points) are relatively stable over the expanding window. So we can broadly categorize rule performance in three categories
Step24: Cumulative performance of metrics split by rule type
Step25: Analysis
Step26: Rolling performance of metrics split by rule type
Step27: Co-occurence and effectiveness of rules
Are there any rules that occur together at very high rates (indicating that the model is too complicated)?
Step28: Rules 8, 14, 15, and 27 all have fairly strong co-occurrences with other rules in the set. These would be good candidates to check for the overall F1 scores and evaluate whether they're a necessary trigger for the system.
Other questions that I'd explore in the data given more time | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
from sqlalchemy import create_engine
from sqlalchemy_utils import database_exists, create_database
import psycopg2
import pandas as pd # Requires v 0.18.0
import numpy as np
import seaborn as sns
sns.set_style("whitegrid")
dbname = 'risk'
username = 'willettk'
# Note: password must be entered to run, but don't put this anywhere public.
psswd = ''
engine = create_engine('postgresql://%s:%s@localhost/%s'%(username,psswd,dbname))
# Check if database exists
database_exists(engine.url)
# Load the risk databases from CSV files
cases = pd.read_csv('risk_data/cases.csv',
parse_dates=['alertDate'],
infer_datetime_format=True)
cases.rename(columns=lambda x: x.lower(), inplace=True)
print cases.dtypes
cases.head()
rules = pd.read_csv('risk_data/rules.csv')
rules.rename(columns=lambda x: x.lower(), inplace=True)
rules.head()
categories = pd.read_csv('risk_data/ruleCategories.csv')
categories.rename(columns=lambda x: x.lower(), inplace=True)
categories.head()
# Insert tables into PostgreSQL
cases.to_sql('cases', engine, if_exists='replace', index=False)
rules.to_sql('rules', engine, if_exists='replace', index=False)
categories.to_sql('categories', engine, if_exists='replace', index=False)
# As when setting up PSQL, the connection will need the password for the database entered here
con = psycopg2.connect(database = dbname, user = username, host='localhost', password=psswd)
Explanation: Intuit craft demonstration
Kyle Willett (Fellow, Insight Data Science)
<font color='red'>Create a reasonable definition(s) of rule performance.</font>
The definition of rule performance I used involves three related measures of success: precision, recall, and the combined F1-score. This primarily evaluates the decisions of the risk agents, where a (true) positive result occurred when a rule was triggered, the agent withheld funds, and the case was ultimately labeled as fraud. A poorly-performing rule is one where a trigger either meant the agent released the funds the majority of the time (which would mean the rule is too sensitive to false positives) or with a high rate of released funds for cases labeled as fraud (which would mean that the agents do not recognize the merits of this rule).
I calculated precision, recall, and F1 for each rule based on the rates of held funds and bad cases. Roughly 1/3 of the rules (11/30) have high marks for both precision and recall and have triggered a sufficient number of alerts that their performance is fairly well characterized. A second group of rules (10/30) have enough triggers to measure their performance, but low precision and recall scores; these should be re-assessed and potentially modified to lower the number of false detections. The remaining 9/30 rules either have very few triggers and/or exhibit rapidly changing behavior, and need more triggers before their effectiveness can be evaluated.
I also looked at performance grouped by the overall rule type. Fraud rules have, by a significant margin, the best performance across all three metrics. Financial risk has comparable precision, but much worse recall. Metrics for both fraud and financial risk stabilized after $\sim1$ month of collecting data. Compliance metrics are poor in both precision and recall, and have been mildly but steadily decreasing over the last two months of data.
<font color='red'>Build a mockup of a dashboard(s) that tracks rules performance (by rule and by RiskAlertCategory) in whatever way you think is appropriate (there may be multiple ways to assess performance).</font>
The dashboards I built are static plots in Python/Jupyter notebook, although the queries are run against a SQL database (PostgreSQL for this example) that can be updated with new data. The dashboard can be easily updated with more recent data. The key plot shows the precision, recall, and F1 scores split by rule and plotted over an expanding time window (taking in all data up to the current point). It shows the relative stability and performance of each rule simultaneously.
For the daily business of risk agents, the performance is plotted over a rolling window so that agents can assess the recent performance of each rule as well.
<font color='red'>Assess the overall decision making process (which includes Risk Agents’ decisions).</font>
Evaluation of the decision making process relies heavily on two pieces of data that are not included in this set. The first set would be the actual true negatives: cases of daily transactions that did not trigger a rule. Information on this would provide a baseline on the sensitivity of a particular rule to both holding funds and ultimate investigation into whether a transaction is fraudulent.
Secondly, there is no information on the risk agent handling each of the individual cases. This information is potentially important because of the human factor involved; a particular risk agent, for example, will have varying levels of accuracy (either overall or with respect to particular rules), each of which could be modeled. If so, that would allow better assessment of the rule performance since the effect of a particular user agent can be marginalized. This information could also be attached via anonymized ID in the same case table.
Import data to SQL database
End of explanation
# How many different rules are there, grouped by type?
sql_query = """
SELECT ruletype,COUNT(ruletype)
FROM categories
GROUP BY ruleType;
"""
pd.read_sql_query(sql_query,con).head()
# Are there cases triggered without any money involved in the transaction?
sql_query = """
SELECT COUNT(caseid)
FROM cases
WHERE amount = 0;
"""
pd.read_sql_query(sql_query,con).head()
Explanation: Descriptive statistics
End of explanation
pl = np.log10(cases.amount+1).hist(bins=50)
pl.set_xlabel("log(Transaction mount per triggered case [$])")
pl.set_ylabel("Count")
pl.axvline(np.log10(cases.amount.median()),color='r',lw=2,ls='--')
pl.set_title("Median transaction is ${:.2f}".format(cases.amount.median()));
cases.amount.max()
# What are the distributions of outcomes with regard to holds and bad merchants?
sql_query = """
SELECT held, badmerch, COUNT(badmerch) as c
FROM cases
GROUP BY held,badmerch;
"""
p = pd.read_sql_query(sql_query,con)
p.head()
# How many total cases are there?
print "Total number of cases in this data set: {}".format(len(cases))
# Does the number of rules violations equal the number of helds?
print len(rules)
print sum(cases.held)
# Are there rules violations that don't correspond to cases in the table?
sql_query = """
SELECT COUNT(rules.caseid)
FROM rules
LEFT JOIN cases ON cases.caseid = rules.caseid
WHERE cases.caseid IS NULL;
"""
pd.read_sql_query(sql_query,con).head()
Explanation: Plot the distribution of money involved per transaction. There are ~900 cases where no money was exchanged, but an alert was still triggered. These are potential bad data points, and might need to be removed from the sample. I'd consult with other members of the team to determine whether that would be appropriate. The distribution of money spent is roughly log-normal.
End of explanation
# Look at the distribution of rule types for benign cases
sql_query = """
SELECT ruletype,sum(count) FROM
(SELECT X.count, categories.ruletype FROM
(SELECT rules.ruleid, COUNT(rules.ruleid)
FROM rules
LEFT JOIN cases ON cases.caseid = rules.caseid
WHERE cases.held = 0
AND cases.badmerch = 0
GROUP BY rules.ruleid) X
JOIN categories ON categories.ruleid = X.ruleid
) Y
GROUP BY ruletype
;
"""
ruletypes_clean = pd.read_sql_query(sql_query,con)
ax = sns.barplot(x="ruletype", y="sum", data=ruletypes_clean)
Explanation: No - every rule violation in the rules table corresponds to a case in the cases table.
End of explanation
# Define helper functions for computing metrics of rule performance
def get_precision(TP,FP):
return TP* 1./ (TP + FP)
def get_recall(TP,FN):
return TP * 1./(TP + FN)
def get_accuracy(TP,FP,TN,FN):
return (TP + TN) * 1./ (TN+FN+FP+TP)
def get_f1(TP,FP,TN,FN):
precision = get_precision(TP,FP)
recall = get_recall(TP,FN)
return 2*precision*recall / (precision+recall)
# Print metrics for entire dataset
TN,FN,FP,TP = p.c / sum(p.c)
print "Precision: {:.3f}".format(get_precision(TP,FP))
print "Recall: {:.3f}".format(get_recall(TP,FN))
print "Accuracy: {:.3f}".format(get_accuracy(TP,FP,TN,FN))
print "F1: {:.3f}".format(get_f1(TP,FP,TN,FN))
Explanation: Defining metrics
So they're asking for a dashboard that predicts "rules performance". I have individual cases, some of which had funds withheld because of rules performance, and then some fraction of those which were flagged as actual bad cases following judgement by a human.
So the rules performance is strictly whether a case is likely to have funds automatically withheld and forwarded to a human for review. The badmerch label is another level on top of that; the current success ratio should be a measure of how successful the automated system is.
Based on the outcomes above, a 4:1 ratio might not be considered particularly successful.
66% of cases were not held and ultimately were good. (TN)
15% of cases were not held, but turned out to be bad. (FN)
6% of cases were held but turned out to be OK. (FP)
12% of cases were held and did turn out to be bad. (TP)
End of explanation
sql_query = """
SELECT X.ruleid, X.caseid, X.outcome, categories.ruletype FROM
(SELECT rules.ruleid, rules.caseid,
CASE
WHEN cases.held = 0 and cases.badMerch = 0 THEN 'not held, good'
WHEN cases.held = 0 and cases.badMerch = 1 THEN 'not held, bad'
WHEN cases.held = 1 and cases.badMerch = 0 THEN 'held, good'
WHEN cases.held = 1 and cases.badMerch = 1 THEN 'held, bad'
END outcome
FROM rules
LEFT JOIN cases ON cases.caseid = rules.caseid
) X
JOIN categories ON categories.ruleid = X.ruleid
;
"""
allcases = pd.read_sql_query(sql_query,con)
fig,ax = plt.subplots(1,1,figsize=(10,6))
sns.countplot(x="ruletype", hue="outcome", data=allcases, ax=ax);
Explanation: I have a label that predicts both a specific rule and its associated class for each transaction. So a reasonable ordered set of priorities might be:
predict whether \$\$ will be held (ie, a rule is triggered)
predict what type of rule will be triggered
predict which specific rule will be triggered
predict whether a triggered case will ultimately be determined to be fraudulent
I'll need to engineer some of my own features here (ie, for each case I could do something like number of past cases, number of past bad cases, average money in transactions, average time between transactions, etc). Whatever interesting/potential combinations I can get from the ID, time, cost, and history.
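A rough sketch of what those history features could look like is below (not run in this notebook; merchantid is a placeholder for a merchant identifier column that is not part of the schema shown above):
# Sketch only: cumulative per-merchant history features built from the cases table.
# `merchantid` is a hypothetical column name; the schema above only shows caseid,
# alertdate, amount, held, and badmerch.
feat = cases.sort_values('alertdate').copy()
grp = feat.groupby('merchantid')
feat['n_past_cases'] = grp.cumcount()                             # cases seen before this one
feat['n_past_bad'] = grp['badmerch'].cumsum() - feat['badmerch']  # prior bad outcomes
feat['mean_past_amount'] = (grp['amount'].cumsum() - feat['amount']) / feat['n_past_cases'].replace(0, np.nan)
feat['days_since_last'] = grp['alertdate'].diff().dt.days         # gap since the previous case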
Then I need to turn that into a "dashboard" - that could be both a visualization of past results and/or some mockup of a "current" day's activity and who my results would flag.
Outcome as a function of rule type
The next step in the analysis will be to make some plots and assess how the rules being triggered vary by rule and rule type.
End of explanation
for g in allcases.groupby("ruletype"):
for gg in g[1].groupby("outcome"):
print "{:15}, {:15}, {:2.1f}%".format(g[0],gg[0],len(gg[1]) * 100./len(g[1]))
print ""
Explanation: So the distribution of outcomes is very different depending on the overall rule type. Let's look at the actual numbers in each category.
End of explanation
# Retrieve the outcomes of all triggered cases and encode those outcomes as numeric data
sql_query = """
SELECT X.ruleid, X.caseid, X.outcome, categories.ruletype FROM
(SELECT rules.ruleid, rules.caseid,
CASE
WHEN cases.held = 0 and cases.badMerch = 0 THEN 0
WHEN cases.held = 0 and cases.badMerch = 1 THEN 1
WHEN cases.held = 1 and cases.badMerch = 0 THEN 2
WHEN cases.held = 1 and cases.badMerch = 1 THEN 3
END outcome
FROM rules
LEFT JOIN cases ON cases.caseid = rules.caseid
) X
JOIN categories ON categories.ruleid = X.ruleid
;
"""
all_numeric = pd.read_sql_query(sql_query,con)
# Plot results as a grid of bar charts, separated by rule.
# Color indicates the overall rule type
ruleorder = list(categories[categories.ruletype=="Fraud"].ruleid.values) + \
list(categories[categories.ruletype=="Financial Risk"].ruleid.values) + \
list(categories[categories.ruletype=="Compliance"].ruleid.values)
grid = sns.FacetGrid(all_numeric,
col="ruleid",
hue="ruletype",
col_order = ruleorder,
col_wrap=8, size=2, aspect=1,
xlim=(0,3))
grid.map(plt.hist, "outcome", normed=True)
grid.set(xticks=[0,1,2,3])
grid.set_xticklabels(['TN','FN','FP','TP']);
Explanation: This data splits the number of alerts by the category of the triggering rule and the ultimate outcome. In every category, the most common outcome is that funds were not withheld and there was no corresponding loss. However, the ratio of outcomes varies strongly by rule type. For rules on compliance, more than 80% of cases are benign and flagged as such. The benign fraction drops to 61% for financial risk and 56% for fraud. So the type of rule being broken is strongly correlated with the likelihood of a bad transaction.
Results: assessing performance
The challenge from Intuit is specifically to assess rule performance. I interpret that as evaluating individually whether each of these rules is doing well, based on the ultimate accuracy.
The approach I'll begin with is to summarize the rates of the various outcomes for each rule with a few standard metrics (precision, recall, F1), defined below.
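For reference, with $TP$, $FP$, and $FN$ the counts of true positives, false positives, and false negatives for a given rule, the metrics used here (matching the helper functions defined earlier) are:
$$\mathrm{precision} = \frac{TP}{TP + FP}, \qquad \mathrm{recall} = \frac{TP}{TP + FN}, \qquad F1 = \frac{2\,\mathrm{precision}\cdot\mathrm{recall}}{\mathrm{precision} + \mathrm{recall}} = \frac{2\,TP}{2\,TP + FP + FN}$$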
Splitting by rule: performance metrics
End of explanation
metric,value,ruleid = [],[],[]
for g in all_numeric.groupby('ruleid'):
outcomes = {}
for gg in g[1].groupby('outcome'):
outcomes[gg[0]] = len(gg[1])
TN,FN,FP,TP = [outcomes.setdefault(i, 0) for i in range(4)]
p_ = get_precision(TP,FP) if (TP + FP) > 0 and TP > 0 else 0.
r_ = get_recall(TP,FN) if (TP + FN) > 0 and TP > 0 else 0.
if p_ > 0. and r_ > 0.:
f_ = get_f1(TP,FP,TN,FN)
else:
f_ = 0.
value.append(p_)
value.append(r_)
value.append(f_)
metric.append('precision')
metric.append('recall')
metric.append('f1')
ruleid.extend([g[0],]*3)
m = pd.DataFrame(index = range(len(metric)))
m['metric'] = pd.Series(metric)
m['value'] = pd.Series(value)
m['ruleid'] = pd.Series(ruleid)
# Plot the metrics for the overall data split by rule
grid = sns.FacetGrid(m,
col="ruleid",
col_wrap=8, size=2, aspect=1)
grid.map(sns.barplot, "metric","value","metric",palette=sns.color_palette("Set1"))
grid.map(plt.axhline, y=0.5, ls="--", c="0.5",lw=1);
Explanation: This is one of the initial plots in the mock dashboard. It shows the overall performance of each rule sorted by outcome. Rule 17 stands out because it has only a single triggered alert in the dataset (agent placed funds on hold, but there was no fraud involved - a false positive).
Good rules are ones dominated by true positives and where every other category is low; a high true negative rate would indicate that the agents are being accurate, but that the rule is overly sensitive (eg, Rule 31). The best at this by eye is Rule 18.
Next, we'll calculate our metrics of choice (precision, recall, F1) for the dataset when split by rule.
End of explanation
# Plot the counts of each outcome split by rule type.
grid = sns.FacetGrid(all_numeric,
col="ruletype", hue="outcome",
col_wrap=3, size=5, aspect=1,
xlim=(0,3))
grid.map(plt.hist, "outcome")
grid.set(xticks=[0,1,2,3])
grid.set_xticklabels(['TN','FN','FP','TP']);
Explanation: This is a good overall summary; we have three metrics for each rule, of which the combined F1 is considered to be the most important. For any rule, we can look at the corresponding plot in the dashboard and examine whether F1 is above a chosen threshold value (labeled here as 0.5). Reading from left to right in the top row, for example, Rule 1 is performing well, Rule 2 is acceptable, Rules 3-5 are performing below the desired accuracy, etc.
Splitting by rule type: performance metrics
Repeat the same analysis as above, but split by rule type (Fraud, Financial Risk, Compliance) instead of the rules themselves.
End of explanation
# Calculate precision, recall, F1 for data by rule type
rt_metric,rt_value,rt_ruletype = [],[],[]
for g in all_numeric.groupby('ruletype'):
outcomes = {}
for gg in g[1].groupby('outcome'):
outcomes[gg[0]] = len(gg[1])
TN,FN,FP,TP = [outcomes.setdefault(i, 0) for i in range(4)]
p_ = get_precision(TP,FP) if (TP + FP) > 0 and TP > 0 else 0.
r_ = get_recall(TP,FN) if (TP + FN) > 0 and TP > 0 else 0.
if p_ > 0. and r_ > 0.:
f_ = get_f1(TP,FP,TN,FN)
else:
f_ = 0.
rt_value.append(p_)
rt_value.append(r_)
rt_value.append(f_)
rt_metric.append('precision')
rt_metric.append('recall')
rt_metric.append('f1')
rt_ruletype.extend([g[0],]*3)
rtm = pd.DataFrame(index = range(len(rt_metric)))
rtm['metric'] = pd.Series(rt_metric)
rtm['value'] = pd.Series(rt_value)
rtm['ruletype'] = pd.Series(rt_ruletype)
# Plot the overall precision, recall, F1 for the dataset split by rule type
grid = sns.FacetGrid(rtm,
col="ruletype",
col_wrap=3, size=5, aspect=1)
grid.map(sns.barplot, "metric","value","metric",palette=sns.color_palette("Set1"))
grid.map(plt.axhline, y=0.5, ls="--", c="0.5",lw=1);
Explanation: Financial risk rules are the largest category, and are mostly cases that were true negatives (money not held and it wasn't a bad transaction). The false negative rate is slightly larger than the true positive, though, indicating that financial risks are missing more than half of the genuinely bad transactions. Fraud rules also have true negatives as the most common category, but a significantly lower false negative rate compard to true positives. So these types are less likely to be missed by the agents. Compliance rules trigger the fewest total number of alerts; the rates of anything except a false negative are all low (81% of these alerts are benign).
End of explanation
# Compute precision, recall, and F1 over an expanding time window
def ex_precision(ts):
TP = (ts.badmerch & ts.held).sum()
FP = (ts.held & np.logical_not(ts.badmerch)).sum()
if (TP + FP) > 0.:
return TP * 1./ (TP + FP)
else:
return 0.
def ex_recall(ts):
TP = (ts.badmerch & ts.held).sum()
FN = (ts.badmerch & np.logical_not(ts.held)).sum()
if (TP + FN) > 0.:
return TP * 1./(TP + FN)
else:
return 0.
def ex_f1(ts):
TP = (ts.badmerch & ts.held).sum()
FP = (ts.held & np.logical_not(ts.badmerch)).sum()
FN = (ts.badmerch & np.logical_not(ts.held)).sum()
num = 2*TP
den = 2*TP + FP + FN
if den > 0.:
return num * 1./den
else:
return 0.
# Make the expanded window with associated metrics by looping over every row in the dataframe
def make_expanded(ts,window=1):
expanding_precision = pd.concat([(pd.Series(ex_precision(ts.iloc[:i+window]),
index=[ts.index[i+window]])) for i in range(len(ts)-window) ])
expanding_recall = pd.concat([(pd.Series(ex_recall(ts.iloc[:i+window]),
index=[ts.index[i+window]])) for i in range(len(ts)-window) ])
expanding_f1 = pd.concat([(pd.Series(ex_f1(ts.iloc[:i+window]),
index=[ts.index[i+window]])) for i in range(len(ts)-window) ])
ex = pd.DataFrame(data={"precision":expanding_precision.values,
"recall":expanding_recall.values,
"f1":expanding_f1.values,
},
index=ts.index[1:])
return ex
# Run the expanded window for all cases, sorted by ruleid
sql_query = """
SELECT cases.*,rules.ruleid
FROM cases
JOIN rules ON rules.caseid = cases.caseid
ORDER BY ruleid,alertdate
;
"""
casejoined = pd.read_sql_query(sql_query,con)
exdict = {}
for g in casejoined.groupby("ruleid"):
ruleid = g[0]
df = g[1]
ts = pd.DataFrame(data={"amount":df.amount.values,
"held":df.held.values,
"badmerch":df.badmerch.values},
index=df.alertdate.values)
try:
exdict[ruleid] = make_expanded(ts)
except ValueError:
print "No true positives in Rule {} ({} trigger); cannot compute expanded window.".format(ruleid,len(df))
ruleid = 4
# Quick code to make single plots for presentation
pl = sns.barplot(x="metric",y="value",data=m[m.ruleid==ruleid])
pl.axhline(y=0.5, ls="--", c="0.5",lw=1)
pl.set_title("RuleID = {}".format(ruleid),fontsize=20);
pl = exdict[ruleid].plot(legend=True)
pl.set_title("RuleID = {}".format(ruleid),fontsize=20)
pl.set_ylim(0,1.05)
pl.set_ylabel("metrics",fontsize=12);
# Plot results in a grid
fig,axarr = plt.subplots(5,6,figsize=(15,15))
rules_sorted = sorted(exdict.keys())
for ruleid,ax in zip(rules_sorted,axarr.ravel()):
ex = exdict[ruleid]
pl = ex.plot(ax=ax,legend=(False | ruleid==6))
pl.set_title("ruleid = {}".format(ruleid))
pl.set_ylim(0,1.05)
pl.set_xticklabels([""])
Explanation: Grouping by type: fraud rules have, by a significant margin, the best performance across all three metrics. Financial risk has comparable precision, but much worse recall. Compliance is poor across the board.
Cumulative performance of metrics split by rule
The above dashboard is a useful start, since we've defined a metric and looked at how it differs for each rule. However, the data were collected over a period of several months, and the metrics should also be examined as a function of time. This would show whether a rule is performing well (and whether it improves or degrades with more data), how the risk agents respond to different triggers, and whether the population of merchants submitting cases is changing.
We'll look at this analysis in the context of an expanding window - for every point in a time series of data, we use data up to and including that point. This gives the cumulative performance as a function of time, which is useful for looking at how the performance of a given rule stabilizes.
End of explanation
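As a side note, the same cumulative metrics can be computed without an explicit loop by taking cumulative sums of the outcome counts; a minimal sketch for one rule's date-sorted frame ts (the per-rule frames with 0/1 held and badmerch columns):
# Sketch: expanding-window precision/recall/F1 from cumulative outcome counts.
# Assumes `ts` is sorted by date and has 0/1 columns `held` and `badmerch`.
tp = ((ts.held == 1) & (ts.badmerch == 1)).cumsum().astype(float)
fp = ((ts.held == 1) & (ts.badmerch == 0)).cumsum().astype(float)
fn = ((ts.held == 0) & (ts.badmerch == 1)).cumsum().astype(float)
cum_precision = tp / (tp + fp).replace(0, np.nan)
cum_recall = tp / (tp + fn).replace(0, np.nan)
cum_f1 = 2 * tp / (2 * tp + fp + fn).replace(0, np.nan)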
# Rank rule performance by deltamax: the largest absolute deviation in the second half of the dataset.
l = []
for ruleid in exdict:
ex = exdict[ruleid]
ex_2ndhalf = ex.iloc[len(ex)//2:]
f1diff = (ex_2ndhalf.f1.max() - ex_2ndhalf.f1.min())
if np.isfinite(f1diff):
l.append((ruleid,f1diff,len(ex_2ndhalf)))
else:
print "No variation for Rule {:2} in the second half (median is zero).".format(ruleid)
lsorted = sorted(l, key=lambda x: x[1],reverse=True)
for ll in lsorted:
print "Rule {:2} varies by {:.2f} in the second half ({:4} data points)".format(*ll)
Explanation: This will be the second set of plots in our dashboard. It shows the results over an expanding window covering the full length of time in the dataset, where the three metrics (precision, recall, F1) track how the rules are performing with respect to the analysts and true outcomes over time.
By definition, data over an expanding window should stabilize as more data comes in and the variance decreases (assuming that the rule definitions, the performance of risk agents, and the underlying merchant behavior all stay the same). Large amounts of recent variation would indicate that we don't know whether the rule is performing well yet.
To assess how much the rules are varying in performance, we'll measure the stability of each metric, weighted more heavily toward the most recent results. A simple measure, which we will use here, is the largest absolute deviation over the second half of the data.
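In symbols, for a metric $m(t)$ (here the F1 score) evaluated over the expanding window, with $T_2$ the second half of a rule's time series:
$$\Delta_\mathrm{max,abs} = \max_{t \in T_2} m(t) - \min_{t \in T_2} m(t)$$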
End of explanation
# Sort and print the rules matching the criteria for stability and high performance.
stable_good = []
stable_bad = []
unstable = []
for ruleid in exdict:
ex = exdict[ruleid]
ex_2ndhalf = ex.iloc[len(ex)//2:]
deltamax = (ex_2ndhalf.f1.max() - ex_2ndhalf.f1.min())
f1 = ex.iloc[len(ex)-1].f1
stable = True if deltamax < 0.1 and len(ex)//2 > 10 else False
good = True if f1 >= 0.5 else False
if stable and good:
stable_good.append(ruleid)
elif stable:
stable_bad.append(ruleid)
else:
unstable.append(ruleid)
print "{:2} rules {} are performing well.".format(len(stable_good),stable_good)
print "{:2} rules {} are not performing well.".format(len(stable_bad),stable_bad)
print "{:2} rules {} are unstable and cannot be evaluated yet.".format(len(unstable),unstable)
Explanation: Six out of the thirty rules have a variation $\Delta_\mathrm{max,abs} < 0.1$ in the second half of the current data. Of those, two (Rules 7 and 26) have only a handful of data points, so estimates of their true accuracy are very uncertain. Two others (Rules 2 and 30) have more data, although less than 100 points each. Rule 2 shows very different behavior starting a few weeks before the end of the data, sharply increasing both its precision and recall; this could indicate either a change in merchant tendencies or a re-definition of the existing rule. Rule 30 has shown a gradual improvement from an early nadir, which might be a sign of a run of bad/unlikely transactions early on followed by a regression to the mean. Rule 4 basically only has data in the second half of the set (not stabilized yet), and Rule 5 has a gradually decreasing recall, which may be a counterexample to the trend in Rule 30.
The remainder of the rules (especially those with a few hundred data points) are relatively stable over the expanding window. So we can broadly categorize rule performance into three categories:
rules that are performing well
rules that are not performing well
rules for which behavior is not stable/well-determined
We'll define a well-performing rule as one whose cumulative score is $F1 \ge 0.5$, and a stable rule as one with $N_\mathrm{cases}>10$ and $\Delta_\mathrm{max,abs} < 0.1$.
End of explanation
# Compute the change in performance by rule type over an expanding time window
sql_query = """
SELECT cases.*,categories.ruletype
FROM cases
JOIN rules ON rules.caseid = cases.caseid
JOIN categories on categories.ruleid = rules.ruleid
ORDER BY categories.ruletype,alertdate
;
"""
rtjoined = pd.read_sql_query(sql_query,con)
# Get the dataframes
rtd = {}
for g in rtjoined.groupby("ruletype"):
ruletype = g[0]
df = g[1]
ts = pd.DataFrame(data={"amount":df.amount.values,
"held":df.held.values,
"badmerch":df.badmerch.values},
index=df.alertdate.values)
try:
rtd[ruletype] = make_expanded(ts)
except ValueError:
print "Problems with {}".format(ruletype)
# Plot results in a grid
fig,axarr = plt.subplots(1,3,figsize=(15,6))
rules_sorted = sorted(rtd.keys())
for ruletype,ax in zip(rules_sorted,axarr.ravel()):
ex = rtd[ruletype]
pl = ex.plot(ax=ax)
pl.set_title("ruletype = {}".format(ruletype))
pl.set_ylim(0,1.05)
# Rank rules by the largest absolute deviation in the second half of the dataset.
l = []
for ruletype in rtd:
ex = rtd[ruletype]
ex_2ndhalf = ex.iloc[len(ex)//2:]
f1diff = (ex_2ndhalf.f1.max() - ex_2ndhalf.f1.min())
l.append((ruletype,f1diff,len(ex_2ndhalf)))
print ''
lsorted = sorted(l, key=lambda x: x[1],reverse=True)
for ll in lsorted:
print "{:15} rules vary by {:.2f} in the second half ({:4} data points)".format(*ll)
Explanation: Cumulative performance of metrics split by rule type
End of explanation
ts = pd.DataFrame(data={"amount":cases.amount.values,
"held":cases.held.values,
"badmerch":cases.badmerch.values},
index=cases.alertdate.values)
r = ts.rolling(window=7,min_periods=1)
# Make a rolling window with associated metrics by looping over every row in the dataframe
def r_precision(ts):
TP = (ts.badmerch & ts.held).sum()
FP = (ts.held & np.logical_not(ts.badmerch)).sum()
if (TP + FP) > 0.:
return TP * 1./ (TP + FP)
else:
return np.nan
def r_recall(ts):
TP = (ts.badmerch & ts.held).sum()
FN = (ts.badmerch & np.logical_not(ts.held)).sum()
if (TP + FN) > 0.:
return TP * 1./(TP + FN)
else:
return np.nan
def r_f1(ts):
TP = (ts.badmerch & ts.held).sum()
FP = (ts.held & np.logical_not(ts.badmerch)).sum()
FN = (ts.badmerch & np.logical_not(ts.held)).sum()
num = 2*TP
den = 2*TP + FP + FN
if den > 0.:
return num * 1./den
else:
return np.nan
def make_rolling(ts,window):
rolling_precision = pd.concat([(pd.Series(r_precision(ts.iloc[i:i+window]),
index=[ts.index[i+window]])) for i in range(len(ts)-window) ])
    rolling_recall = pd.concat([(pd.Series(r_recall(ts.iloc[i:i+window]),
index=[ts.index[i+window]])) for i in range(len(ts)-window) ])
rolling_f1 = pd.concat([(pd.Series(r_f1(ts.iloc[i:i+window]),
index=[ts.index[i+window]])) for i in range(len(ts)-window) ])
r = pd.DataFrame(data={"precision":rolling_precision.values,
"recall":rolling_recall.values,
"f1":rolling_f1.values,
},
index=rolling_f1.index)
return r
# Run the rolling window for all cases, sorted by rule
rdict = {}
for g in casejoined.groupby("ruleid"):
ruleid = g[0]
df = g[1]
ts = pd.DataFrame(data={"amount":df.amount.values,
"held":df.held.values,
"badmerch":df.badmerch.values},
index=df.alertdate.values)
ts_sorted = ts.sort_index()
try:
rdict[ruleid] = make_rolling(ts_sorted,window=50)
except ValueError:
print "No true positives in Rule {} over interval ({} triggers); cannot compute rolling window.".format(ruleid,len(df))
# Empty dataframe
rdict[ruleid] = pd.DataFrame([0,]*len(df),index=[[casejoined.alertdate.min(),]*(len(df)-1) + [casejoined.alertdate.max()]])
# Plot the dashboard with rolling windows
fig,axarr = plt.subplots(5,6,figsize=(15,12))
for ax,r in zip(axarr.ravel(),rdict):
rp = rdict[r].plot(xlim=(casejoined.alertdate.min(),casejoined.alertdate.max()),
ylim=(0,1.05),
ax=ax,
legend=(False | r == 1))
if r < 25:
rp.set_xticklabels([""])
rp.set_title("ruleid = {}; N={}".format(r,len(rdict[r])));
Explanation: Analysis: all three of the rule types have a variation $\Delta_\mathrm{max,abs} \le 0.05$ in the second half of the current data. Since all three rule types have at least hundreds of data points distributed over time, stability is mostly expected. Compliance rules still show the largest deviations; there was a large amount of early variance, and while the metrics are now more stable they are still mildly decreasing. Both fraud and financial risk have been quite stable after about the first month of data.
Rolling performance of metrics split by rule
The analysis above is useful from an overall perspective about whether a rule has been historically justified. For data scientists and risk analysts, however, it is also critical to look only at recent data so that action can be taken if performance starts to drastically change. Expanding windows do not work well for this since the data are weighted over all input and it will take time for variations to affect the integrated totals. Instead, we will run a similar analysis on a rolling window to look for changes on a weekly timescale.
End of explanation
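The same rolling metrics can also be sketched with rolling sums of the outcome counts rather than an explicit Python loop; a minimal version for one rule's date-sorted frame ts_sorted (same 0/1 held and badmerch columns, same 50-case window):
# Sketch: rolling precision/recall/F1 from rolling counts of each outcome.
window = 50
tp = ((ts_sorted.held == 1) & (ts_sorted.badmerch == 1)).rolling(window).sum()
fp = ((ts_sorted.held == 1) & (ts_sorted.badmerch == 0)).rolling(window).sum()
fn = ((ts_sorted.held == 0) & (ts_sorted.badmerch == 1)).rolling(window).sum()
roll_precision = tp / (tp + fp).replace(0, np.nan)
roll_recall = tp / (tp + fn).replace(0, np.nan)
roll_f1 = 2 * tp / (2 * tp + fp + fn).replace(0, np.nan)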
# Same rolling analysis, but by rule type
rtrdict = {}
for g in rtjoined.groupby("ruletype"):
ruleid = g[0]
df = g[1]
ts = pd.DataFrame(data={"amount":df.amount.values,
"held":df.held.values,
"badmerch":df.badmerch.values},
index=df.alertdate.values)
ts_sorted = ts.sort_index()
try:
rtrdict[ruleid] = make_rolling(ts_sorted,window=200)
except ValueError:
print "No true positives in Rule {} over interval ({} triggers); cannot compute rolling window.".format(ruleid,len(df))
# Empty dataframe
rtrdict[ruleid] = pd.DataFrame([0,]*len(df),index=[[casejoined.alertdate.min(),]*(len(df)-1) + [casejoined.alertdate.max()]])
# Plot the dashboard with rolling windows by rule type
fig,axarr = plt.subplots(1,3,figsize=(15,6))
for ax,r in zip(axarr.ravel(),["Compliance","Financial Risk","Fraud"]):
rp = rtrdict[r].plot(xlim=(rtjoined.alertdate.min(),rtjoined.alertdate.max()),
ylim=(0,1.05),
ax=ax)
rp.set_title("Rule type = {}; N={}".format(r,len(rtrdict[r])));
Explanation: Rolling performance of metrics split by rule type
End of explanation
# Compute the co-occurrence matrix for triggering rules
df = pd.DataFrame(index=rules.caseid.unique())
rule_count_arr = np.zeros((len(rules.caseid.unique()),30),dtype=int)
for idx,g in enumerate(rules.groupby('caseid')):
g1 = g[1]
for r in g1.ruleid.values:
# Numbering is a little off because there's no Rule 28 in the dataset.
if r < 28:
rule_count_arr[idx,r-1] = 1
else:
rule_count_arr[idx,r-2] = 1
# Create pandas DataFrame and rename the columns to the actual rule IDs
df = pd.DataFrame(data=rule_count_arr,
index=rules.caseid.unique(),
columns=[sorted(rules.ruleid.unique())])
# Co-occurrence matrix is the product of the matrix and its transpose
coocc = df.T.dot(df)
coocc.head()
# Plot the co-occurrence matrix and mask the diagonal and upper triangle values
# (mirrored on the bottom half of the matrix)
fig,ax = plt.subplots(1,1,figsize=(14,10))
mask = np.zeros_like(coocc)
mask[np.triu_indices_from(mask)] = True
with sns.axes_style("white"):
sns.heatmap(coocc,
mask = mask,
annot=True, fmt="d",
vmax = 100,
square=True,
ax=ax)
ax.set_xlabel('Rule',fontsize=16)
ax.set_ylabel('Rule',fontsize=16);
Explanation: Co-occurrence and effectiveness of rules
Are there any rules that occur together at very high rates (indicating that the model is too complicated)?
End of explanation
# How much money did bad transactions cost Intuit in this dataset?
print "Bad money in transactions totals ${:.2f}.".format(cases[(cases.held == 0) & (cases.badmerch == 1)].amount.sum())
Explanation: Rules 8, 14, 15, and 27 all have fairly strong co-occurrences with other rules in the set. These would be good candidates to check for the overall F1 scores and evaluate whether they're a necessary trigger for the system.
Other questions that I'd explore in the data given more time:
What fraction of the triggers for each rule are co-occurrences? (a quick sketch of this check is included below)
What are the F1 scores produced by combinations of rules?
How do the combined F1 scores compare to the scores when triggered individually?
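As a first pass at the first of those questions, the co-occurrence fraction can be read directly off the 0/1 indicator frame df built above; a sketch, not run here:
# Sketch: for each rule, the fraction of its triggered cases in which at least
# one other rule also fired (uses the 0/1 indicator frame `df` from above).
multi = df.sum(axis=1) > 1                                # cases where more than one rule fired
cooccur_frac = df[multi].sum() / df.sum().astype(float)   # per-rule fraction of co-occurring triggers
print cooccur_frac.sort_values(ascending=False).head(10)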
Predicting whether a transaction is fraudulent
None of this analysis has actually attempted to predict the effectiveness of the rule system in place; right now, it only evaluates it with respect to the hold decisions and the ultimate labels. Given more time, it should be very tractable to build a machine learning classifier that:
predicts the likelihood of being a bad transaction (based on merchant, past history, timestamp, and amount)
predicts the likelihood of funds being withheld given that it is a bad transaction
recommends triggering the rule by minimizing the false negative rate while maintaining specificity
I'd start with a simple logistic regression model and assess performance with cross-validation on later times in the dataset; depending on the accuracy, ensemble models such as random forests or support vector machines would also be good candidates for increased accuracy. If none of those achieve the desired accuracy, the next step would be trying the performance of a neural net.
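A minimal sketch of that first pass might look like the following (the single amount feature is just a stand-in for the engineered history features discussed earlier, and the 80/20 time-ordered split is an arbitrary choice for illustration):
# Sketch: logistic regression baseline evaluated on later cases only.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

model_df = cases.sort_values('alertdate')
X = model_df[['amount']].values      # stand-in feature matrix
y = model_df['badmerch'].values
split = int(0.8 * len(model_df))     # train on earlier cases, test on the most recent 20%
clf = LogisticRegression().fit(X[:split], y[:split])
print "Baseline F1 on held-out later cases: {:.3f}".format(f1_score(y[split:], clf.predict(X[split:])))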
End of explanation |
1,228 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Simple Time Series Analysis Of The S&P 500 Index
This notebook presents some basic ideas from time series analysis applied to stock market data, specifically the daily closing value of the S&P 500 index from 1950 up to the present day.
The first thing we need to do is import the data set. This was downloaded as a CSV file from Yahoo Finance and is available in the "data" folder.
Step1: The first thing to do is just plot the data and see what it looks like. We'll target the closing price after each day.
Step2: The data is clearly non-stationary as we can see it's trending up over time. We can create a first difference of the original series to attempt to make it stationary.
Step3: Notice how the magnitude of the variance from day to day still increases over time. The data is also exponentially increasing, making variations in earlier observations difficult to see. We can fix this by applying a log transform to the data.
Step4: So that gives us the original closing price with a log transform applied to "flatten" the data from an exponential curve to a linear curve. Now if we were to compare the variance over time of the original series to the logged series, we can see a clear difference.
Step5: Observe that in the top graph, we can't even see any of the variations until the late 80s. In the bottom graph, however, it's a different story: changes in the value are clearly visible throughout the entire data set.
We can now see a lot more of the variability in the series, but the logged value still isn't stationary. Let's try a first difference on the logged value of the closing price to even it out.
Step6: Much better! We now have a stationary time series model of daily changes to the S&P 500 index. Now let's create some lag variables y(t-1), y(t-2) etc. and examine their relationship to y(t). We'll look at 1 and 2-day lags along with weekly and monthly lags to look for "seasonal" effects.
Step7: One interesting visual way to evaluate the relationship between lagged variables is to do a scatter plot of the original variable vs. the lagged variable and see where the distribution lies.
Step8: It probably comes as no surprise that there's very little correlation between the change in value from one day to the next. The other lagged variables that we created above show similar results. There could be a relationship to other lag steps that we haven't tried, but it's impractical to test every possible lag value manually. Fortunately there is a class of functions that can systematically do this for us.
Step9: The auto-correlation function computes the correlation between a variable and itself at each lag step up to some limit (in this case 40). The partial auto-correlation function computes the correlation at each lag step that is NOT already explained by previous, lower-order lag steps. We can plot the results to see if there are any significant correlations.
Step10: The auto-correlation and partial-autocorrelation results are very close to each other (I only plotted the auto-correlation results above). What this shows is that there is no significant correlation between the value at time t and at any time prior to t up to 40 steps behind. In other words, the series is a random walk (pretty much expected with stock data). Another interesting technique we can try is a decomposition. This is a technique that attempts to break down a time series into trend, seasonal, and residual factors.
Step11: Unfortunately this one can't be resized in-line but you can still get an idea of what it's doing. Since there isn't any true cycle in this data the visualization doesn't come out too well. Here's an example from the statsmodels documentation that looks more interesting.
Step12: Although we're not likely to get much value out of fitting a time series model based solely on lagged data points in this instance, we can still try fitting some ARIMA models and see what we get. Let's start with a simple moving average model.
Step13: Although it appears like the model is performing very well (the lines are almost indistinguishable after all), remember that we used the un-differenced series! The value only fluctuates a small percentage day-to-day relative to the total absolute value. What we really want is to predict the first difference, or the day-to-day moves. We can either re-run the model using the differenced series, or add an "I" term to the ARIMA model (resulting in a (1, 1, 0) model) which should accomplish the same thing. Let's try using the differenced series.
Step14: We can kind of see that the variations predicted by the model are much smaller than the true variations, but hard to observe when we're looking at the entire 60+ years of history. What if we take just a small slice of the data?
Step15: Now it's pretty obvious that the forecast is way off. We're predicting tiny little variations relative to what is actually happening day-to-day. Again, this is more or less expected with a simple moving average model of a random walk time series. There's not enough information from the previous days to accurately forecast what's going to happen the next day.
A moving average model doesn't appear to do so well. What about an exponential smoothing model? Exponential smoothing spreads out the impact of previous values using an exponential weighting, so things that happened more recently are more impactful than things that happened a long time ago. Maybe this "smarter" form of averaging will be more accurate? | Python Code:
%matplotlib inline
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
import seaborn as sb
sb.set_style('darkgrid')
#path = os.getcwd() + '\data\stock_data.csv'
path = "/data/stock_data.csv"
stock_data = pd.read_csv(path)
stock_data['Date'] = pd.to_datetime(stock_data['Date'], errors='coerce')
stock_data = stock_data.sort_values(by='Date')
stock_data = stock_data.set_index('Date')
stock_data.head()
Explanation: A Simple Time Series Analysis Of The S&P 500 Index
This notebook presents some basic ideas from time series analysis applied to stock market data, specifically the daily closing value of the S&P 500 index from 1950 up to the present day.
The first thing we need to do is import the data set. This was downloaded as a CSV file from Yahoo Finance and is available in the "data" folder.
End of explanation
stock_data['Close'].plot(figsize=(16, 12));
Explanation: The first thing to do is just plot the data and see what it looks like. We'll target the closing price after each day.
End of explanation
stock_data['First Difference'] = stock_data['Close'] - stock_data['Close'].shift()
stock_data['First Difference'].plot(figsize=(16, 12));
Explanation: The data is clearly non-stationary as we can see it's trending up over time. We can create a first difference of the original series to attempt to make it stationary.
End of explanation
stock_data['Natural Log'] = stock_data['Close'].apply(lambda x: np.log(x))
stock_data['Natural Log'].plot(figsize=(16, 12));
Explanation: Notice how the magnitude of the variance from day to day still increases over time. The data is also exponentially increasing, making variations in earlier observations difficult to see. We can fix this by applying a log transform to the data.
End of explanation
#stock_data['Original Variance'] = pd.rolling_var(stock_data['Close'], 30, min_periods=None, freq=None, center=True)
#stock_data['Log Variance'] = pd.rolling_var(stock_data['Natural Log'], 30, min_periods=None, freq=None, center=True)
stock_data['Original Variance'] = stock_data['Close'].rolling(window=30, center=True).var()
stock_data['Log Variance'] = stock_data['Natural Log'].rolling(window=30, center=True).var()
fig, ax = plt.subplots(2, 1, figsize=(13, 12))
stock_data['Original Variance'].plot(ax=ax[0], title='Original Variance')
stock_data['Log Variance'].plot(ax=ax[1], title='Log Variance')
fig.tight_layout();
Explanation: So that gives us the original closing price with a log transform applied to "flatten" the data from an exponential curve to a linear curve. Now if we were to compare the variance over time of the original series to the logged series, we can see a clear difference.
End of explanation
stock_data['Logged First Difference'] = stock_data['Natural Log'] - stock_data['Natural Log'].shift()
stock_data['Logged First Difference'].plot(figsize=(16, 12));
Explanation: Observe that in the top graph, we can't even see any of the variations until the late 80s. In the bottom graph, however, it's a different story: changes in the value are clearly visible throughout the entire data set.
We can now see a lot more of the variability in the series, but the logged value still isn't stationary. Let's try a first difference on the logged value of the closing price to even it out.
End of explanation
stock_data['Lag 1'] = stock_data['Logged First Difference'].shift()
stock_data['Lag 2'] = stock_data['Logged First Difference'].shift(2)
stock_data['Lag 5'] = stock_data['Logged First Difference'].shift(5)
stock_data['Lag 30'] = stock_data['Logged First Difference'].shift(30)
Explanation: Much better! We now have a stationary time series model of daily changes to the S&P 500 index. Now let's create some lag variables y(t-1), y(t-2) etc. and examine their relationship to y(t). We'll look at 1 and 2-day lags along with weekly and monthly lags to look for "seasonal" effects.
End of explanation
sb.jointplot('Logged First Difference', 'Lag 1', stock_data, kind='reg', size=13)
Explanation: One interesting visual way to evaluate the relationship between lagged variables is to do a scatter plot of the original variable vs. the lagged variable and see where the distribution lies.
End of explanation
from statsmodels.tsa.stattools import acf
from statsmodels.tsa.stattools import pacf
lag_correlations = acf(stock_data['Logged First Difference'].iloc[1:])
lag_partial_correlations = pacf(stock_data['Logged First Difference'].iloc[1:])
Explanation: It probably comes as no surprise that there's very little correlation between the change in value from one day to the next. The other lagged variables that we created above show similar results. There could be a relationship to other lag steps that we haven't tried, but it's impractical to test every possible lag value manually. Fortunately there is a class of functions that can systematically do this for us.
End of explanation
fig, ax = plt.subplots(figsize=(16,12))
ax.plot(lag_correlations, marker='o', linestyle='--')
Explanation: The auto-correlation function computes the correlation between a variable and itself at each lag step up to some limit (in this case 40). The partial auto-correlation function computes the correlation at each lag step that is NOT already explained by previous, lower-order lag steps. We can plot the results to see if there are any significant correlations.
End of explanation
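As a side note, statsmodels also ships plotting helpers for both functions that draw confidence bands as well; a quick sketch on the same differenced series:
# Sketch: ACF and PACF plots with confidence intervals via statsmodels helpers.
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

fig, axes = plt.subplots(2, 1, figsize=(16, 8))
plot_acf(stock_data['Logged First Difference'].iloc[1:], lags=40, ax=axes[0])
plot_pacf(stock_data['Logged First Difference'].iloc[1:], lags=40, ax=axes[1]);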
from statsmodels.tsa.seasonal import seasonal_decompose
decomposition = seasonal_decompose(stock_data['Natural Log'], model='additive', freq=30)
fig = plt.figure()
fig = decomposition.plot()
Explanation: The auto-correlation and partial-autocorrelation results are very close to each other (I only plotted the auto-correlation results above). What this shows is that there is no significant correlation between the value at time t and at any time prior to t up to 40 steps behind. In other words, the series is a random walk (pretty much expected with stock data). Another interesting technique we can try is a decomposition. This is a technique that attempts to break down a time series into trend, seasonal, and residual factors.
End of explanation
co2_data = sm.datasets.co2.load_pandas().data
co2_data.co2.interpolate(inplace=True)
result = sm.tsa.seasonal_decompose(co2_data.co2)
fig = plt.figure()
fig = result.plot()
Explanation: Unfortunately this one can't be resized in-line but you can still get an idea of what it's doing. Since there isn't any true cycle in this data the visualization doesn't come out too well. Here's an example from the statsmodels documentation that looks more interesting.
End of explanation
model = sm.tsa.ARIMA(stock_data['Natural Log'].iloc[1:], order=(1, 0, 0))
results = model.fit(disp=-1)
stock_data['Forecast'] = results.fittedvalues
stock_data[['Natural Log', 'Forecast']].plot(figsize=(16, 12))
Explanation: Although we're not likely to get much value out of fitting a time series model based solely on lagged data points in this instance, we can still try fitting some ARIMA models and see what we get. Let's start with a simple moving average model.
End of explanation
model = sm.tsa.ARIMA(stock_data['Logged First Difference'].iloc[1:], order=(1, 0, 0))
results = model.fit(disp=-1)
stock_data['Forecast'] = results.fittedvalues
stock_data[['Logged First Difference', 'Forecast']].plot(figsize=(16, 12))
Explanation: Although it appears like the model is performing very well (the lines are almost indistinguishable after all), remember that we used the un-differenced series! The value only fluctuates a small percentage day-to-day relative to the total absolute value. What we really want is to predict the first difference, or the day-to-day moves. We can either re-run the model using the differenced series, or add an "I" term to the ARIMA model (resulting in a (1, 1, 0) model) which should accomplish the same thing. Let's try using the differenced series.
End of explanation
stock_data[['Logged First Difference', 'Forecast']].iloc[1200:1600, :].plot(figsize=(16, 12))
Explanation: We can kind of see that the variations predicted by the model are much smaller than the true variations, but hard to observe when we're looking at the entire 60+ years of history. What if we take just a small slice of the data?
End of explanation
model = sm.tsa.ARIMA(stock_data['Logged First Difference'].iloc[1:], order=(0, 0, 1))
results = model.fit(disp=-1)
stock_data['Forecast'] = results.fittedvalues
stock_data[['Logged First Difference', 'Forecast']].plot(figsize=(16, 12))
Explanation: Now it's pretty obvious that the forecast is way off. We're predicting tiny little variations relative to what is actually happening day-to-day. Again, this is more or less expected with a simple moving average model of a random walk time series. There's not enough information from the previous days to accurately forecast what's going to happen the next day.
A moving average model doesn't appear to do so well. What about an exponential smoothing model? Exponential smoothing spreads out the impact of previous values using an exponential weighting, so things that happened more recently are more impactful than things that happened a long time ago. Maybe this "smarter" form of averaging will be more accurate?
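One lightweight way to sketch that recency weighting is pandas' exponentially weighted mean; it is not a full exponential smoothing forecast, but it shows the idea on the same differenced series:
# Sketch: exponentially weighted moving average of the logged first difference.
# The span controls how quickly the weights decay (larger = smoother).
stock_data['EWMA'] = stock_data['Logged First Difference'].ewm(span=30).mean()
stock_data[['Logged First Difference', 'EWMA']].iloc[1200:1600, :].plot(figsize=(16, 12));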
End of explanation |
1,229 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lesson 04 - pandas
Objectives
Time series analysis
Read, manipulate, and plot tabular data
Guide to group-by and other advanced tabular operations
Read a CSV and show only the beginning of the table.
Step1: read_csv (and several other pandas methods) has a multitude of options to help with reading the data.
Step2: One of the conveniences of pandas is having information-rich labels. So let's rename our columns and sort them by depth.
Step3: Another convenience is cleaning the table: here we throw away the invalid values (Not-a-Number), but only when all of the values along axis 1 (columns) are NaN.
Step4: What does the variability of these data look like?
Step5: As we had already noticed before, this whole series seems to have a gap in the same place (probably maintenance of the mooring). With pandas we can easily interpolate across that gap to obtain a continuous series.
Step6: With pandas you can easily resample a series or table to a new temporal frequency.
Step7: Group-by months/years (climatology)
Step8: pandas and its siblings
import pandas as pd
pd.read_csv('./data/dados_pirata.csv').head()
Explanation: Lesson 04 - pandas
Objectives
Time series analysis
Read, manipulate, and plot tabular data
Guide to group-by and other advanced tabular operations
Read a CSV and show only the beginning of the table.
End of explanation
df = pd.read_csv('./data/dados_pirata.csv',
index_col='datahora',
na_values=-99999,
parse_dates=True).drop('Unnamed: 0', axis=1)
df.tail()
Explanation: read_csv (and several other pandas methods) has a multitude of options to help with reading the data.
End of explanation
df.columns = ['{0:0>3}'.format(col.split('_')[1]) for col in df.columns]
df.sort(axis=1, inplace=True)
df.head()
Explanation: One of the conveniences of pandas is having information-rich labels. So let's rename our columns and sort them by depth.
End of explanation
df.dropna(how='all', axis=1, inplace=True)
df.head()
desc = df.describe()
desc
Explanation: Another convenience is cleaning the table: here we throw away the invalid values (Not-a-Number), but only when all of the values along axis 1 (columns) are NaN.
End of explanation
desc.ix['std'] ** 2
%matplotlib inline
ax = df[['001', '100', '180', '500']].plot()
Explanation: What does the variability of these data look like?
End of explanation
df['001'].interpolate().plot()
df['001'].plot()
df['001'].interpolate(method='time', limit=30).plot()
df['001'].plot()
Explanation: As we had already noticed before, this whole series seems to have a gap in the same place (probably maintenance of the mooring). With pandas we can easily interpolate across that gap to obtain a continuous series.
End of explanation
import numpy as np
from matplotlib import pyplot as plt
fig, ax = plt.subplots(figsize=(9, 5))
ax = df.resample('M', how=np.median).plot(ax=ax)
Explanation: With pandas you can easily resample a series or table to a new temporal frequency.
End of explanation
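For reference, in more recent pandas versions the same monthly-median resampling is written with the method-chaining API; a small sketch equivalent to the cell above:
# Sketch: the equivalent call with the current resample API.
monthly_median = df.resample('M').median()
ax = monthly_median.plot(figsize=(9, 5))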
key = lambda x: x.month
grouped = df.groupby(key)
monthly = grouped.mean()
monthly.shape
fig, ax = plt.subplots(figsize=(9, 5))
ax = monthly.plot(ax=ax)
key = lambda x: x.year
grouped = df.groupby(key)
yearly = grouped.mean()
yearly.shape
fig, ax = plt.subplots(figsize=(9, 5))
ax = yearly.plot(ax=ax)
ax = df.resample('A', how='mean').plot()
for col in df.columns:
yearly[col] = ((yearly[col] - yearly[col].mean()) /
yearly[col].std(ddof=0))
fig, ax = plt.subplots(figsize=(9, 5))
ax = yearly.plot(ax=ax)
yearly.corr()
!head ./data/15t30717.3f1
from datetime import datetime
from pandas import read_table
cols = ['j', 'u', 'v', 'temp', 'sal', 'y', 'mn', 'd', 'h', 'mi']
df = read_table('./data/15t30717.3f1' , delim_whitespace=True, names=cols)
dates = [datetime(*x) for x in
zip(df['y'], df['mn'], df['d'], df['h'], df['mi'])]
df.index = dates
df.drop(['y', 'mn', 'd', 'h', 'mi', 'j'], axis=1, inplace=True)
df.head()
ax = df['v'].plot()
from oceans import lanc
freq = 1./40 # Hours
window_size = 96+1+96
pad = np.zeros(window_size) * np.NaN
wt = lanc(window_size, freq)
res = np.convolve(wt, df['v'], mode='same')
df['low'] = res
df['high'] = df['v'] - df['low']
ax = df[['low', 'high']].plot(figsize=(9, 3))
Explanation: Group-by months/years (climatology)
End of explanation
import geopandas as gpd
fname = "./data/2013-04-29-Running.geojson"
df = gpd.read_file(fname)
df.plot()
from ctd import DataFrame
cast = DataFrame.from_cnv('./data/CTD_001.cnv.gz', compression='gzip')
cast.head()
fig, ax = cast['t090C'].plot()
Explanation: pandas and its siblings
End of explanation |
1,230 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Display Exercise 1
Imports
Put any imports needed to display rich output in the following cell
Step1: Basic rich display
Find a Physics related image on the internet and display it in this notebook using the Image object.
Load it using the url argument to Image (don't upload the image to this server).
Make sure to set the embed flag so the image is embedded in the notebook data.
Set the width and height to 600px.
Step2: Use the HTML object to display HTML in the notebook that reproduces the table of Quarks on this page. This will require you to learn about how to create HTML tables and then pass that to the HTML object for display. Don't worry about styling and formatting the table, but you should use LaTeX where appropriate. | Python Code:
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
from IPython.html import widgets
from IPython.display import Image
assert True # leave this to grade the import statements
Explanation: Display Exercise 1
Imports
Put any imports needed to display rich output in the following cell:
End of explanation
Image(url='http://talklikeaphysicist.com/wp-content/uploads/2008/05/tesla-coil.jpg',embed=True, width=600, height=600)
assert True # leave this to grade the image display
Explanation: Basic rich display
Find a Physics related image on the internet and display it in this notebook using the Image object.
Load it using the url argument to Image (don't upload the image to this server).
Make sure to set the embed flag so the image is embedded in the notebook data.
Set the width and height to 600px.
End of explanation
%%html
<table>
<caption>Quarks</caption>
<tr>
<th>Name</th>
<th>Symbol</th>
<th>Antiparticle</th>
<th>Charge ($e$)</th>
<th>Mass ($MeV/c^2$)</th>
</tr>
<tr>
<td>up</td>
<td>$u$</td>
<td>$\bar{u}$</td>
<td>$+\frac{2}{3}$</td>
<td>1.5-3.3</td>
</tr>
<tr>
<td>down</td>
<td>$d$</td>
<td>$\bar{d}$</td>
<td>$-\frac{1}{3}$</td>
<td>3.5-6.0</td>
</tr>
<tr>
<td>charm</td>
<td>$c$</td>
<td>$\bar{c}$</td>
<td>$+\frac{2}{3}$</td>
<td>1,160-1,340</td>
</tr>
<tr>
<td>strange</td>
<td>$s$</td>
<td>$\bar{s}$</td>
<td>$-\frac{1}{3}$</td>
<td>70-130</td>
</tr>
<tr>
<td>top</td>
<td>$t$</td>
<td>$\bar{t}$</td>
<td>$+\frac{2}{3}$</td>
<td>169,100-173,300</td>
</tr>
<tr>
<td>bottom</td>
<td>$b$</td>
<td>$\bar{b}$</td>
<td>$-\frac{1}{3}$</td>
<td>4,130-4,370</td>
</tr>
</table>
assert True # leave this here to grade the quark table
Explanation: Use the HTML object to display HTML in the notebook that reproduces the table of Quarks on this page. This will require you to learn about how to create HTML tables and then pass that to the HTML object for display. Don't worry about styling and formatting the table, but you should use LaTeX where appropriate.
End of explanation |
1,231 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Index - Back - Next
Widget Events
Special events
Step1: The Button is not used to represent a data type. Instead the button widget is used to handle mouse clicks. The on_click method of the Button can be used to register a function to be called when the button is clicked. The doc string of the on_click method can be seen below.
Step2: Example
Since button clicks are stateless, they are transmitted from the front-end to the back-end using custom messages. By using the on_click method, a button that prints a message when it has been clicked is shown below.
Step3: on_submit
The Text also has a special on_submit event. The on_submit event fires when the user hits return.
Step4: Traitlet events
Widget properties are IPython traitlets and traitlets are eventful. To handle changes, the on_trait_change method of the widget can be used to register a callback. The doc string for on_trait_change can be seen below.
Step5: Signatures
Mentioned in the doc string, the callback registered can have 4 possible signatures
Step6: Linking Widgets
Often, you may want to simply link widget attributes together. Synchronization of attributes can be done in a simpler way than by using bare traitlets events.
The first method is to use the link and directional_link functions from the traitlets module.
Linking traitlets attributes from the server side
Step7: Function traitlets.link returns a Link object. The link can be broken by calling the unlink method.
Step8: Linking widgets attributes from the client side
When synchronizing traitlets attributes, you may experience a lag because of the latency due to the roundtrip to the server side. You can also directly link widget attributes, either in a unidirectional or a bidirectional fashion, using the link widgets.
Step9: Function widgets.jslink returns a Link widget. The link can be broken by calling the unlink method. | Python Code:
from __future__ import print_function
Explanation: Index - Back - Next
Widget Events
Special events
End of explanation
from IPython.html import widgets
print(widgets.Button.on_click.__doc__)
Explanation: The Button is not used to represent a data type. Instead, the button widget is used to handle mouse clicks. The on_click method of the Button can be used to register a function to be called when the button is clicked. The doc string of on_click can be seen below.
End of explanation
from IPython.display import display
button = widgets.Button(description="Click Me!")
display(button)
def on_button_clicked(b):
print("Button clicked.")
button.on_click(on_button_clicked)
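# A small variation (my addition, not in the original notebook): because the handler
# runs in the kernel, ordinary Python state can be kept across clicks, e.g. a counter.
counter_button = widgets.Button(description="Count clicks")
clicks = {'n': 0}
def on_count_clicked(b):
    clicks['n'] += 1
    print("clicked", clicks['n'], "times")
counter_button.on_click(on_count_clicked)
display(counter_button)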
Explanation: Example
Since button clicks are stateless, they are transmitted from the front-end to the back-end using custom messages. By using the on_click method, a button that prints a message when it has been clicked is shown below.
End of explanation
text = widgets.Text()
display(text)
def handle_submit(sender):
print(text.value)
text.on_submit(handle_submit)
Explanation: on_submit
The Text also has a special on_submit event. The on_submit event fires when the user hits return.
End of explanation
print(widgets.Widget.on_trait_change.__doc__)
Explanation: Traitlet events
Widget properties are IPython traitlets and traitlets are eventful. To handle changes, the on_trait_change method of the widget can be used to register a callback. The doc string for on_trait_change can be seen below.
End of explanation
int_range = widgets.IntSlider()
display(int_range)
def on_value_change(name, value):
print(value)
int_range.on_trait_change(on_value_change, 'value')
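# A compact sketch (my own addition) of the four accepted callback signatures listed in
# the Signatures section below; each form can be registered with on_trait_change.
sig_slider = widgets.IntSlider()
def cb_no_args():
    print("value changed")
def cb_name(name):
    print("changed trait:", name)
def cb_name_new(name, new):
    print(name, "->", new)
def cb_name_old_new(name, old, new):
    print(name, ":", old, "->", new)
for cb in (cb_no_args, cb_name, cb_name_new, cb_name_old_new):
    sig_slider.on_trait_change(cb, 'value')
display(sig_slider)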
Explanation: Signatures
Mentioned in the doc string, the callback registered can have 4 possible signatures:
callback()
callback(trait_name)
callback(trait_name, new_value)
callback(trait_name, old_value, new_value)
Using this method, an example of how to output an IntSlider's value as it is changed can be seen below.
End of explanation
from IPython.utils import traitlets
caption = widgets.Latex(value = 'The values of slider1, slider2 and slider3 are synchronized')
sliders1, slider2, slider3 = widgets.IntSlider(description='Slider 1'),\
widgets.IntSlider(description='Slider 2'),\
widgets.IntSlider(description='Slider 3')
l2 = traitlets.link((sliders1, 'value'), (slider2, 'value'))
l3 = traitlets.link((sliders1, 'value'), (slider3, 'value'))
display(caption, sliders1, slider2, slider3)
caption = widgets.Latex(value = 'Changes in source values are reflected in target1 and target2')
source, target1, target2 = widgets.IntSlider(description='Source'),\
widgets.IntSlider(description='Target 1'),\
widgets.IntSlider(description='Target 2')
traitlets.dlink((source, 'value'), (target1, 'value'))
traitlets.dlink((source, 'value'), (target2, 'value'))
display(caption, source, target1, target2)
Explanation: Linking Widgets
Often, you may want to simply link widget attributes together. Synchronization of attributes can be done in a simpler way than by using bare traitlets events.
The first method is to use the link and directional_link functions from the traitlets module.
Linking traitlets attributes from the server side
End of explanation
l2.unlink()
Explanation: Function traitlets.link returns a Link object. The link can be broken by calling the unlink method.
End of explanation
caption = widgets.Latex(value = 'The values of range1, range2 and range3 are synchronized')
range1, range2, range3 = widgets.IntSlider(description='Range 1'),\
widgets.IntSlider(description='Range 2'),\
widgets.IntSlider(description='Range 3')
l2 = widgets.jslink((range1, 'value'), (range2, 'value'))
l3 = widgets.jslink((range1, 'value'), (range3, 'value'))
display(caption, range1, range2, range3)
caption = widgets.Latex(value = 'Changes in source_range values are reflected in target_range1 and target_range2')
source_range, target_range1, target_range2 = widgets.IntSlider(description='Source range'),\
widgets.IntSlider(description='Target range 1'),\
widgets.IntSlider(description='Target range 2')
widgets.jsdlink((source_range, 'value'), (target_range1, 'value'))
widgets.jsdlink((source_range, 'value'), (target_range2, 'value'))
display(caption, source_range, target_range1, target_range2)
Explanation: Linking widgets attributes from the client side
When synchronizing traitlets attributes, you may experience a lag because of the latency due to the roundtrip to the server side. You can also directly link widget attributes, either in a unidirectional or a bidirectional fashion, using the link widgets.
End of explanation
l2.unlink()
Explanation: Function widgets.jslink returns a Link widget. The link can be broken by calling the unlink method.
End of explanation |
1,232 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Info from the web
This notebook goes with a blog post at Agile*.
We're going to get some info from Wikipedia, and some financial prices from Yahoo Finance. We'll make good use of the requests library, a really nicely designed Python library for making web requests in Python.
Geological ages from Wikipedia
We'll start with the Jurassic, then generalize.
Step1: I used View Source in my browser to figure out where the age range is on the page, and what it looks like. The most predictable spot, that will work on every period's page, is in the infobox. It's given as a range, in italic text, with "million years ago" right after it.
Try to find the same string here.
Step2: Now we have the entire text of the webpage, along with some metadata. The text is stored in r.text, and I happen to know roughly where the relevant bit of text is
Step3: We can get at that bit of text using a regular expression
Step4: And if we're really cunning, we can get the start and end ages
Step5: An exercise for you, dear reader
Step6: You should be able to call your function like this
Step7: Now we can make a function that makes the sentence we made before, calling the function you just wrote
Step8: Natural gas prices from Yahoo Finance
Here is an explanation of how to form Yahoo Finance queries.
Step9: The symbol s we're passing is HHG17.NYM. This means
Step10: This should work | Python Code:
url = "http://en.wikipedia.org/wiki/Jurassic" # Line 1
Explanation: Info from the web
This notebook goes with a blog post at Agile*.
We're going to get some info from Wikipedia, and some financial prices from Yahoo Finance. We'll make good use of the requests library, a really nicely designed Python library for making web requests in Python.
Geological ages from Wikipedia
We'll start with the Jurassic, then generalize.
End of explanation
import requests # I don't count these lines.
r = requests.get(url) # Line 2
Explanation: I used View Source in my browser to figure out where the age range is on the page, and what it looks like. The most predictable spot, that will work on every period's page, is in the infobox. It's given as a range, in italic text, with "million years ago" right after it.
Try to find the same string here.
End of explanation
r.text[7400:7600] # I don't count these lines either.
Explanation: Now we have the entire text of the webpage, along with some metadata. The text is stored in r.text, and I happen to know roughly where the relevant bit of text is: around the 7500th character, give or take:
End of explanation
import re
s = re.search(r'<i>(.+?million years ago)</i>', r.text)
text = s.group(1)
text
Explanation: We can get at that bit of text using a regular expression:
End of explanation
start, end = re.search(r'<i>([\.0-9]+)–([\.0-9]+) million years ago</i>', r.text).groups() # Line 3
duration = float(start) - float(end) # Line 4
print("According to Wikipedia, the Jurassic lasted {:.2f} Ma.".format(duration)) # Line 5
Explanation: And if we're really cunning, we can get the start and end ages:
End of explanation
def get_age(period):
url = # Make a URL out of a base URL and the period name
r = # Make the request.
start, end = # Provide the regex.
return float(start), float(end)
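# One possible way to complete the exercise skeleton above (a sketch, not the only
# solution); it reuses the same requests call and regular expression shown earlier.
def get_age_solution(period):
    url = "http://en.wikipedia.org/wiki/" + period
    r = requests.get(url)
    start, end = re.search(r'<i>([\.0-9]+)–([\.0-9]+) million years ago</i>', r.text).groups()
    return float(start), float(end)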
Explanation: An exercise for you, dear reader: Make a function to get the start and end ages of any geologic period, taking the name of the period as an argument. I have left some hints.
End of explanation
period = "Jurassic"
get_age(period)
Explanation: You should be able to call your function like this:
End of explanation
def duration(period):
t0, t1 = get_age(period)
duration = t0 - t1
response = "According to Wikipedia, the {0} lasted {1:.2f} Ma.".format(period, duration)
return response
duration('Cretaceous')
Explanation: Now we can make a function that makes the sentence we made before, calling the function you just wrote:
End of explanation
import requests
url = "http://download.finance.yahoo.com/d/quotes.csv" # Line 6
params = {'s': 'HHG17.NYM', 'f': 'l1'} # Line 7
r = requests.get(url, params=params) # Line 8
price = float(r.text) # Line 9
print("Henry Hub price for Feb 2017: ${:.2f}".format(price)) # Line 10
Explanation: Natural gas prices from Yahoo Finance
Here is an explanation of how to form Yahoo Finance queries.
End of explanation
import time
def get_symbol(benchmark):
# I'll help you with the time.
    # We compute a time 90 days in the future for a price
future = time.gmtime(time.time() + 90*24*60*60)
month = future.tm_mon
year = future.tm_year
month_codes = ['F', 'G', 'H', 'J', 'K', 'M', 'N', 'Q', 'U', 'V', 'X', 'Z']
# This is where you come in.
month = #### Get the appropriate code for the month.
year = #### Make a string for the year.
return benchmark + month + year + ".NYM"
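# A possible completion of the exercise (sketch only, kept separate from the skeleton):
# pick the month letter from month_codes and keep the last two digits of the year.
def get_symbol_solution(benchmark):
    future = time.gmtime(time.time() + 90*24*60*60)
    month_codes = ['F', 'G', 'H', 'J', 'K', 'M', 'N', 'Q', 'U', 'V', 'X', 'Z']
    month = month_codes[future.tm_mon - 1]
    year = str(future.tm_year)[-2:]
    return benchmark + month + year + ".NYM"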
Explanation: The symbol s we're passing is HHG17.NYM. This means:
The ticker symbols we're passing look like XXMYY.NYM, with components as follows:
XX — commodity 'benchmark' symbol, as explained below.
M — a month code, symbolizing January to December: [F,G,H,J,K,M,N,Q,U,V,X,Z]
YY — a two-digit year, like 17.
.NYM — the Nymex symbol.
Benchmarks that seem to work with this service:
CL — West Texas Intermediate or WTI, light sweet crude
BB — Brent crude penultimate financial futures
BZ — Brent look-alike crude oil futures
MB — Gulf Coast Sour Crude
RE — Russian Export Blend Crude Oil (REBCO) futures
Gas spot prices that work:
NG — Henry Hub physical futures
HH — Henry Hub last day financial futures
Symbols that don't work:
DC — Dubai crude calendar futures
WCC — Canadian Heavy (differential, cf CL)
WCE — Western Canadian select crude oil futures (differential, cf CL) — but this seems to be the same price as WCC, which can't be right
LN — European options
As an exercise, write a function to get the futures price for a given benchmark, based on the contract price 90 days from 'now', whenever now is.
End of explanation
get_symbol('CL')
Explanation: This should work:
End of explanation |
1,233 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Numerical Differentiation
Step1: Applications
Step5: Question
Imagine you're planning a mission to the South Pole Aitken Basin and want to explore some permanently shadowed craters. What factors might you consider in planning out your rover's landing site and route?
Most rovers can tolerate grades up to about 20%. For reference, the grade on I-70 near Eisenhower Tunnel is about 6%.
Differentiation Review
Numerical Derivatives on a Grid (Text Appendix B.2)
Step7: Notice that we don't calculate the derivative at the end points because there are no points beyond them to difference with.
Q. So, what would happen without the [1:-1] in the return statement?
Step8: Example
Step9: Notice that the points miss the curve.
Q. How can we improve the accuracy of our numerical derivative?
Step10: Example
Step11: Q. Where should our spacecraft land? What areas seem accessible?
Q. How do we find the lowest point? Highest? How could we determine how many "mountains" and "craters" there are?
Step12: Q. What do you think "diff" does?
Step13: Q. What type of differentiation scheme does this formula represent? How is this different than our "derivative" function from earlier?
Step14: Q. How many hills and craters are there?
Q. Why did we use x[0:-1] in the above plot instead of x?
Step15: Q. Using the slope, how could we determine which places we could reach and which we couldn't? | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as pl
Explanation: Numerical Differentiation
End of explanation
from IPython.display import Image
Image(url='http://wordlesstech.com/wp-content/uploads/2011/11/New-Map-of-the-Moon-2.jpg')
Explanation: Applications:
Derivative difficult to compute analytically
Rate of change in a dataset
You have position data but you want to know velocity
Finding extrema
Important for fitting models to data (ASTR 3800)
Maximum likelihood methods
Topology: finding peaks and valleys (place where slope is zero)
Topology Example: South Pole Aitken Basin (lunar farside)
Interesting:
Oldest impact basin in the solar system
Important for studies of solar system formation
Permananently shadowed craters
High concentration of hydrogen (e.g., LCROSS mission)!
Good place for an observatory (e.g., the Lunar Radio Array concept)!
End of explanation
def forwardDifference(f, x, h):
    """A first order differentiation technique.
    Parameters
    ----------
    f : function to be differentiated
    x : point of interest
    h : step-size to use in approximation
    """
    return (f(x + h) - f(x)) / h  # From our notes
def centralDifference(f, x, h):
    """A second order differentiation technique.
    Also known as the symmetric difference quotient.
    Parameters
    ----------
    f : function to be differentiated
    x : point of interest
    h : step-size to use in approximation
    """
    return (f(x + h) - f(x - h)) / (2.0 * h)  # From our notes
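# Quick sanity check (my own addition): compare both formulas against the exact
# derivative of sin(x) at x = 1; the central difference error is much smaller.
x0, h0 = 1.0, 1e-3
exact = np.cos(x0)
print("forward error:", abs(forwardDifference(np.sin, x0, h0) - exact))   # ~O(h)
print("central error:", abs(centralDifference(np.sin, x0, h0) - exact))   # ~O(h**2)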
np.linspace(1,10,100).shape
def derivative(formula, func, xLower, xUpper, n):
    """Differentiate func(x) at all points from xLower
    to xUpper with n *equally spaced* points.
    The differentiation formula is given by
    formula(func, x, h).
    """
h = (xUpper - xLower) / float(n) # Calculate the derivative step size
xArray = np.linspace(xLower, xUpper, n) # Create an array of x values
derivArray = np.zeros(n) # Create an empty array for the derivative values
for index in range(1, n - 1): # xrange(start, stop, [step])
derivArray[index] = formula(func, xArray[index], h) # Calculate the derivative for the current
# x value using the formula passed in
return (xArray[1:-1], derivArray[1:-1]) # This returns TWO things:
# x values and the derivative values
Explanation: Question
Image you're planning a mission to the South Pole Aitken Basin and want to explore some permanently shadowed craters. What factors might you consider in planning out your rover's landing site and route?
Most rovers can tolerate grades up to about 20%, For reference, the grade on I-70 near Eisenhower Tunnel is about 6%.
Differentiation Review
Numerical Derivatives on a Grid (Text Appendix B.2)
End of explanation
def derivative2(formula, func, xLower, xUpper, n):
    """Differentiate func(x) at all points from xLower
    to xUpper with n+1 *equally spaced* points.
    The differentiation formula is given by
    formula(func, x, h).
    """
h = (xUpper - xLower) / float(n) # Calculate the derivative step size
xArray = np.linspace(xLower, xUpper, n) # Create an array of x values
derivArray = np.zeros(n) # Create an empty array for the derivative values
for index in range(0, n): # xrange(start, stop, [step])
derivArray[index] = formula(func, xArray[index], h) # Calculate the derivative for the current
# x value using the formula passed in
return (xArray, derivArray) # This returns TWO things:
# x values and the derivative values
Explanation: Notice that we don't calculate the derivative at the end points because there are no points beyond them to difference with.
Q. So, what would happen without the [1:-1] in the return statement?
End of explanation
tau = 2*np.pi
x = np.linspace(0, tau, 100)
# Plot sin and cos
pl.plot(x, np.sin(x), color='k');
pl.plot(x, np.cos(x), color='b');
# Compute derivative using central difference formula
xder, yder = derivative2(centralDifference, np.sin, 0, tau, 10)
# Plot numerical derivative as scatter plot
pl.scatter(xder, yder, color='g', s=100, marker='+', lw=2);
# s controls marker size (experiment with it)
# lw = "linewidth" in pixels
Explanation: Example: Differentiate $\sin(x)$
We know the answer:
$$\frac{d}{dx} \left[\sin(x)\right] = \cos(x)$$
End of explanation
# Plot sin and cos
pl.plot(x, np.sin(x), color='k')
pl.plot(x, np.cos(x), color='b')
# Compute derivative using central difference formula
xder, yder = derivative2(centralDifference, np.sin, 0, tau, 100)
# Plot numerical derivative as scatter plot
pl.scatter(xder, yder, color='g', s=100, marker='*', lw=2)
Explanation: Notice that the points miss the curve.
Q. How can we improve the accuracy of our numerical derivative?
End of explanation
numCraters = 5 # number of craters
widthMax = 1.0 # maximal width of Gaussian crater
heightMin = -1.0 # maximal depth of craters / valleys
heightMax = 2.0 # maximal height of hills / mountains
# 1-D Gaussian
def gaussian(x, A, mu, sigma):
return A * np.exp(-(x - mu)**2 / 2.0 / sigma**2)
# 1-D Gaussian (same thing using lambda)
#gaussian = lambda x, A, mu, sigma: A * np.exp(-(x - mu)**2 / 2. / sigma**2)
# Create an array of linearly spaced x values
xArray = np.linspace(0, 10, 500) # km
# Create an array of initially flat landscape (aka filled with 0's)
yArray = np.zeros_like(xArray)
# Add craters / mountains to landscape
for _ in range(numCraters): # '_' is the so called dummy variable
# Amplitude between heightMin and heightMax
A = np.random.rand() * (heightMax - heightMin) + heightMin
# Center location of the crater
center = np.random.rand() * xArray.max()
# Width of the crater
sigma = np.random.rand() * widthMax
# Add crater to landscape!
yArray += gaussian(xArray, A=A, mu=center, sigma=sigma)
pl.plot(xArray, yArray, color='k')
pl.xlabel('position [km]')
pl.ylabel('altitutde [km]')
Explanation: Example: Traversing A 1-D landscape
Gaussian Equation:
$$f(x)=A * e^{-\frac{(x-\mu)^2}{2*\sigma}}$$
End of explanation
dydx = np.diff(yArray) / np.diff(xArray)
Explanation: Q. Where should our spacecraft land? What areas seem accessible?
Q. How do we find the lowest point? Highest? How could we determine how many "mountains" and "craters" there are?
End of explanation
arr = np.array([1,4,10, 12,5, 7])
np.diff(arr)
Explanation: Q. What do you think "diff" does?
End of explanation
pl.plot(xArray[0:-1], dydx, color='r', label='slope')
pl.plot(xArray, yArray, color='k', label='data')
pl.xlabel('position [km]')
pl.ylabel('slope')
pl.plot([xArray.min(), xArray.max()], [0,0], color='k', ls=':')
#pl.ylim(-4, 4)
pl.legend(loc='best')
Explanation: Q. What type of differentiation scheme does this formula represent? How is this different than our "derivative" function from earlier?
End of explanation
slopeTolerance = 0.5
Explanation: Q. How many hills and craters are there?
Q. Why did we use x[0:-1] in the above plot instead of x?
End of explanation
myArray = np.array([0, 1, 2, 3, 4]) # Create an array
tfArray = np.logical_and(myArray < 2, myArray != 0) # Use boolean logic on array
print (myArray) # Print original array
print (tfArray) # Print the True/False array (from boolean logic)
print (myArray[tfArray]) # Print the original array using True/False array to limit values
reachable = np.logical_and(dydx < slopeTolerance, dydx > -slopeTolerance)
unreachable = np.logical_not(reachable)
pl.plot(xArray, yArray, color='k')
pl.scatter(xArray[:-1][unreachable], yArray[:-1][unreachable], color='r', label='bad')
pl.scatter(xArray[:-1][reachable], yArray[:-1][reachable], color='g', label='good')
pl.legend(loc='best')
pl.xlabel('position [km]')
pl.ylabel('altitude [km]')
Explanation: Q. Using the slope, how could we determine which places we could reach and which we couldn't?
End of explanation |
1,234 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
This notebook suggests how to solve the problem of non-linear compressible flow using the automatic differentiation library included in PorePy.
The Ad functionality in PorePy is currently undergoing a substantial expansion, see the tutorial AdFramework for indications of possibilities beyond what is covered herein.
Model
As an example, we will set up a non-linear problem for compressible flow. As usual, we assume Darcy's law is valid
Step1: Define constitutive laws and constants
We set the porosity to 0.2 and set the permeability to the default value (i.e. $\mathcal K = 1$).
We define the dependency of $\rho$ on $p$ as a function. Note that we have to use the exponential function ad.exp (and not np.exp)
Step2: Discretization
We use a finite-volume method to discretize the model equation. As a first step we create a partition of the domain into grid cells
Step3: Next, the model equation is integrated over each control volume (i.e., each cell of the grid), and the divergence theorem is applied to the flux term
Step6: Note that the negative sign in front of the surface-integral is included into the flux discretization matrix.
The density is defined at the cell centers, but in the flux term we need to evaluate it at the faces. To do so, we simply take the average of the two neighboring cells (note that other alternatives, such as upstream weighting, are commonly used).
We also create discretized versions of the divergence operator div. The discrete divergence operator sums the fluxes in and out of each grid cell.
Step7: Residual function
To discretize the time derivative, we use backward Euler. Further, we assume that the densities are constant over each cell so we can take them out of the integral
Step8: Initialize Ad variable
To initialize an AD variable create an Ad_array(...) with values equal the initial value and jacobian equal the identity matrix
Step9: Time loop
We are now ready to set up the time loop. We will set up a simple Newton iteration to find the zero of the residual function. | Python Code:
import numpy as np
import scipy.sparse as sps
import matplotlib.pyplot as plt
# Porepy modules
import porepy as pp
Explanation: Introduction
This notebook suggests how to solve the problem of non-linear compressible flow using the automatic differentiation library included in PorePy.
The Ad functionality in PorePy is currently undergoing a substantial expansion, see the tutorial AdFramework for indications of possibilities beyond what is covered herein.
Model
As an example, we will set up a non-linear problem for compressible flow. As usual, we assume Darcy's law is valid:
$$
\vec u = -\mathcal K \nabla p,
$$
where $\vec u$ is the flux, $\mathcal K$ the permeability tensor, and $p$ the fluid pressure. Further, the conservation of mass gives
$$
\frac{\partial \phi \rho}{\partial t} + \nabla \cdot (\rho \vec u) = q,\quad \text{in}\ \Omega \\
\vec u\cdot \vec n = 0,\quad \text{on}\ \partial \Omega
$$
for porosity $\phi$, fluid density $\rho$, and source/sink term $q$.
To solve this system of equations we need a constitutive law relating the fluid density to the pressure:
$$
\rho = \rho_r e^{c(p - p_r)},
$$
for reference density $\rho_r$ and pressure $p_r$.
Import statements
End of explanation
# Define data
dt = 0.2 # Time step
phi = 0.2 # Porosity
c = 1e-1 # Compressibility
# Constitutive law
def rho(p):
rho0 = 1
p_ref = 1
return rho0 * pp.ad.functions.exp(c * (p - p_ref))
Explanation: Define constitutive laws and constants
We set the porosity to 0.2 and set the permeability to the default value (i.e. $\mathcal K = 1$).
We define the dependency of $\rho$ on $p$ as a function. Note that we have to use the exponential function ad.exp (and not np.exp)
End of explanation
# Create grid
g = pp.CartGrid([11,11])
g.compute_geometry()
pp.plot_grid(g, plot_2d=True)
Explanation: Discretization
We use a finite-volume method to discretize the model equation. As a first step we create a partition of the domain into grid cells:
End of explanation
# Initialize default data (i.e., unit parameters)
data = pp.initialize_default_data(g, {}, 'flow')
# Define flux discretization:
flx_disc = pp.Tpfa('flow')
# Discretize
flx_disc.discretize(g, data)
# The flux discretization can now be found in the dictionary as:
flux = data[pp.DISCRETIZATION_MATRICES]['flow']['flux']
Explanation: Next, the model equation is integrated over each control volume (i.e., each cell of the grid), and the divergence theorem is applied to the flux term:
$$
\int_\Omega \phi \frac{\partial \rho}{\partial t} dV - \int_{\partial\Omega}\vec n\cdot(\rho\vec u)dS - \int_\Omega q dV= 0
$$
The key-point of the finite-volume discretization is how the flux-term $\vec u$ is approximated. We do not cover that in this tutorial(see e.g., I. Aavatsmark. An introduction to multipoint flux approximations for quadrilateral grids. Comput. Geosci., Vol. 6, No. 3, pp. 405–432, 2002. DOI: 10.1023/A:1021291114475).
However, the main idea is that the fluid flux $\vec u$ across a face is expressed as a linear combination of the cell-centered pressures $\vec u = \text{flux}\ \vec p$. Here, $\text{flux}$ is the discretization matrix and $\vec p$ is the vector of all cell-centered pressures.
In porepy we can obtain the discretization matrix with, e.g, the two-point flux approximation:
End of explanation
cell_faces_T = g.cell_faces.T
def div(x):
    """Discrete divergence."""
    return cell_faces_T * x
def avg(x):
    """Averaging. Note that this is not strictly correct for the boundary faces since
    these only have one cell neighbor, but we have a zero-flux condition on these, so
    this is not a problem.
    """
    return 0.5 * np.abs(g.cell_faces) * x
Explanation: Note that the negative sign in front of the surface-integral is included into the flux discretization matrix.
The density is defined at the cell centers, but in the flux term we need to evaluate it at the faces. To do so, we simply take the average of the two neighboring cells (note that other alternatives, such as upstream weighting, are commonly used).
We also create discretized versions of the divergence operator div. The discrete divergence operator sums the fluxes in and out of each grid cell.
End of explanation
def f(p, p0):
# darcy:
u = flux * p
# Source:
src = np.zeros(g.num_cells)
src[60] = 1
# Define residual function
time = phi * (rho(p) - rho(p0)) / dt * g.cell_volumes
advection = div(avg(rho(p)) * u)
lhs = time + advection
rhs = src * g.cell_volumes
return lhs - rhs
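# Small sanity check (my addition, mirroring how the Ad variable is built further down):
# evaluating f on an Ad_array gives both the residual vector (.val) and the
# Jacobian (.jac) that the Newton solver below relies on.
p_chk = pp.ad.Ad_array(np.zeros(g.num_cells), sps.diags(np.ones(g.num_cells)))
eq_chk = f(p_chk, np.zeros(g.num_cells))
print(eq_chk.val.shape, eq_chk.jac.shape)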
Explanation: Residual function
To discretize the time derivative, we use backward Euler. Further, we assume that the densities are constant over each cell so we can take them out of the integral:
$$
\int_\Omega \phi \frac{\rho^k - \rho^{k-1}}{\Delta t} dV =\phi \frac{\rho^k - \rho^{k-1}}{\Delta t} \int_\Omega dV = \phi \frac{\rho^k - \rho^{k-1}}{\Delta t}V,
$$
where $V$ is the volume of the cell. The same is also done for the source term.
This gives us the residual
$$
\phi \frac{\rho^k - \rho^{k-1}}{\Delta t} V + \text{div}(\text{avg}(\rho^k)\text{flux } p^k) - q^k V= 0
$$
End of explanation
# Set initial condition
p0 = np.zeros(g.num_cells)
p = pp.ad.Ad_array(p0, sps.diags(np.ones(p0.shape)))
Explanation: Initialize Ad variable
To initialize an AD variable create an Ad_array(...) with values equal the initial value and jacobian equal the identity matrix
End of explanation
# define iteration parameters
newton_tol = 1e-6
t = .0
T = 1
k = 0
times = [t]
# Time loop
while t < T:
# Increment time
t += dt
k += 1
times.append(t)
p0 = p.val
print('Solving time step: ', k)
# solve newton iteration
err = np.inf
while err > newton_tol:
eq = f(p, p0)
p = p - sps.linalg.spsolve(eq.jac, eq.val)
err = np.sqrt(np.sum(eq.val**2))
pp.plot_grid(g, p.val,color_map = [0, 1])
Explanation: Time loop
We are now ready to set up the time loop. We will set up a simple Newton iteration to find the zero of the residual function.
End of explanation |
1,235 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dates in timeseries models
Step1: Getting started
Step2: Right now an annual date series must be datetimes at the end of the year.
Step3: Using Pandas
Make a pandas TimeSeries or DataFrame
Step4: Instantiate the model
Step5: Out-of-sample prediction
Step6: Using explicit dates
Step7: This just returns a regular array, but since the model has date information attached, you can get the prediction dates in a roundabout way. | Python Code:
from __future__ import print_function
import statsmodels.api as sm
import numpy as np
import pandas as pd
Explanation: Dates in timeseries models
End of explanation
data = sm.datasets.sunspots.load()
Explanation: Getting started
End of explanation
from datetime import datetime
dates = sm.tsa.datetools.dates_from_range('1700', length=len(data.endog))
Explanation: Right now an annual date series must be datetimes at the end of the year.
End of explanation
endog = pd.TimeSeries(data.endog, index=dates)
Explanation: Using Pandas
Make a pandas TimeSeries or DataFrame
End of explanation
ar_model = sm.tsa.AR(endog, freq='A')
pandas_ar_res = ar_model.fit(maxlag=9, method='mle', disp=-1)
Explanation: Instantiate the model
End of explanation
pred = pandas_ar_res.predict(start='2005', end='2015')
print(pred)
Explanation: Out-of-sample prediction
End of explanation
ar_model = sm.tsa.AR(data.endog, dates=dates, freq='A')
ar_res = ar_model.fit(maxlag=9, method='mle', disp=-1)
pred = ar_res.predict(start='2005', end='2015')
print(pred)
Explanation: Using explicit dates
End of explanation
print(ar_res.data.predict_dates)
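# One simple way (my addition) to pair each prediction with its date explicitly:
for date, value in zip(ar_res.data.predict_dates, pred):
    print(date, value)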
Explanation: This just returns a regular array, but since the model has date information attached, you can get the prediction dates in a roundabout way.
End of explanation |
1,236 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bayesian Hierarchical Linear Regression
Author
Step1: In the dataset, we were provided with a baseline chest CT scan and associated clinical information for a set of patients. A patient has an image acquired at time Week = 0 and has numerous follow up visits over the course of approximately 1-2 years, at which time their FVC is measured. For this tutorial, I will use only the Patient ID, the weeks and the FVC measurements, discarding all the rest. Using only these columns enabled our team to achieve a competitive score, which shows the power of Bayesian hierarchical linear regression models especially when gauging uncertainty is an important part of the problem.
Since this is real medical data, the relative timing of FVC measurements varies widely, as shown in the 3 sample patients below
Step2: On average, each of the 176 provided patients made 9 visits, when FVC was measured. The visits happened in specific weeks in the [-12, 133] interval. The decline in lung capacity is very clear. We see, though, they are very different from patient to patient.
We were asked to predict every patient's FVC measurement for every possible week in the [-12, 133] interval, and the confidence for each prediction. In other words
Step3: That's all for modelling!
3. Fitting the model
A great achievement of Probabilistic Programming Languages such as NumPyro is to decouple model specification and inference. After specifying my generative model, with priors, condition statements and data likelihood, I can leave the hard work to NumPyro's inference engine.
Calling it requires just a few lines. Before we do it, let's add a numerical Patient ID for each patient code. That can be easily done with scikit-learn's LabelEncoder
Step4: Now, calling NumPyro's inference engine
Step5: 4. Checking the model
4.1. Inspecting the learned parameters
First, let's inspect the parameters learned. To do that, I will use ArviZ, which perfectly integrates with NumPyro
Step6: Looks like our model learned personalized alphas and betas for each patient!
4.2. Visualizing FVC decline curves for some patients
Now, let's visually inspect FVC decline curves predicted by our model. We will completely fill in the FVC table, predicting all missing values. The first step is to create a table to fill
Step7: Predicting the missing values in the FVC table and confidence (sigma) for each value becomes really easy
Step8: Let's now put the predictions together with the true values, to visualize them
Step9: Finally, let's see our predictions for 3 patients
Step10: The results are exactly what we expected to see! Highlight observations | Python Code:
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
train = pd.read_csv('https://gist.githubusercontent.com/ucals/'
'2cf9d101992cb1b78c2cdd6e3bac6a4b/raw/'
'43034c39052dcf97d4b894d2ec1bc3f90f3623d9/'
'osic_pulmonary_fibrosis.csv')
train.head()
Explanation: Bayesian Hierarchical Linear Regression
Author: Carlos Souza
Probabilistic Machine Learning models can not only make predictions about future data, but also model uncertainty. In areas such as personalized medicine, there might be a large amount of data, but there is still a relatively small amount of data for each patient. To customize predictions for each person it becomes necessary to build a model for each person — with its inherent uncertainties — and to couple these models together in a hierarchy so that information can be borrowed from other similar people [1].
The purpose of this tutorial is to demonstrate how to implement a Bayesian Hierarchical Linear Regression model using NumPyro. To motivate the tutorial, I will use OSIC Pulmonary Fibrosis Progression competition, hosted at Kaggle.
1. Understanding the task
Pulmonary fibrosis is a disorder with no known cause and no known cure, created by scarring of the lungs. In this competition, we were asked to predict a patient’s severity of decline in lung function. Lung function is assessed based on output from a spirometer, which measures the forced vital capacity (FVC), i.e. the volume of air exhaled.
In medical applications, it is useful to evaluate a model's confidence in its decisions. Accordingly, the metric used to rank the teams was designed to reflect both the accuracy and certainty of each prediction. It's a modified version of the Laplace Log Likelihood (more details on that later).
Let's explore the data and see what that's all about:
End of explanation
def chart(patient_id, ax):
data = train[train['Patient'] == patient_id]
x = data['Weeks']
y = data['FVC']
ax.set_title(patient_id)
ax = sns.regplot(x, y, ax=ax, ci=None, line_kws={'color':'red'})
f, axes = plt.subplots(1, 3, figsize=(15, 5))
chart('ID00007637202177411956430', axes[0])
chart('ID00009637202177434476278', axes[1])
chart('ID00010637202177584971671', axes[2])
Explanation: In the dataset, we were provided with a baseline chest CT scan and associated clinical information for a set of patients. A patient has an image acquired at time Week = 0 and has numerous follow up visits over the course of approximately 1-2 years, at which time their FVC is measured. For this tutorial, I will use only the Patient ID, the weeks and the FVC measurements, discarding all the rest. Using only these columns enabled our team to achieve a competitive score, which shows the power of Bayesian hierarchical linear regression models especially when gauging uncertainty is an important part of the problem.
Since this is real medical data, the relative timing of FVC measurements varies widely, as shown in the 3 sample patients below:
End of explanation
import numpyro
from numpyro.infer import MCMC, NUTS, Predictive
import numpyro.distributions as dist
from jax import random
def model(PatientID, Weeks, FVC_obs=None):
μ_α = numpyro.sample("μ_α", dist.Normal(0., 100.))
σ_α = numpyro.sample("σ_α", dist.HalfNormal(100.))
μ_β = numpyro.sample("μ_β", dist.Normal(0., 100.))
σ_β = numpyro.sample("σ_β", dist.HalfNormal(100.))
unique_patient_IDs = np.unique(PatientID)
n_patients = len(unique_patient_IDs)
with numpyro.plate("plate_i", n_patients):
α = numpyro.sample("α", dist.Normal(μ_α, σ_α))
β = numpyro.sample("β", dist.Normal(μ_β, σ_β))
σ = numpyro.sample("σ", dist.HalfNormal(100.))
FVC_est = α[PatientID] + β[PatientID] * Weeks
with numpyro.plate("data", len(PatientID)):
numpyro.sample("obs", dist.Normal(FVC_est, σ), obs=FVC_obs)
Explanation: On average, each of the 176 provided patients made 9 visits, when FVC was measured. The visits happened in specific weeks in the [-12, 133] interval. The decline in lung capacity is very clear. We see, though, they are very different from patient to patient.
We were asked to predict every patient's FVC measurement for every possible week in the [-12, 133] interval, and the confidence for each prediction. In other words: we were asked to fill a matrix like the one below, and provide a confidence score for each prediction:
<img src="https://i.ibb.co/0Z9kW8H/matrix-completion.jpg" alt="drawing" width="600"/>
The task was a perfect fit for Bayesian inference. However, the vast majority of solutions shared by the Kaggle community used discriminative machine learning models, disregarding the fact that most discriminative methods are very poor at providing realistic uncertainty estimates. Because they are typically trained in a manner that optimizes the parameters to minimize some loss criterion (e.g. the predictive error), they do not, in general, encode any uncertainty in either their parameters or the subsequent predictions. Though many methods can produce uncertainty estimates either as a by-product or from a post-processing step, these are typically heuristic based, rather than stemming naturally from a statistically principled estimate of the target uncertainty distribution [2].
2. Modelling: Bayesian Hierarchical Linear Regression with Partial Pooling
The simplest possible linear regression, not hierarchical, would assume all FVC decline curves have the same $\alpha$ and $\beta$. That's the pooled model. In the other extreme, we could assume a model where each patient has a personalized FVC decline curve, and these curves are completely unrelated. That's the unpooled model, where each patient has completely separate regressions.
Here, I'll use the middle ground: Partial pooling. Specifically, I'll assume that while $\alpha$'s and $\beta$'s are different for each patient as in the unpooled case, the coefficients all share similarity. We can model this by assuming that each individual coefficient comes from a common group distribution. The image below represents this model graphically:
<img src="https://i.ibb.co/H7NgBfR/Artboard-2-2x-100.jpg" alt="drawing" width="600"/>
Mathematically, the model is described by the following equations:
\begin{align}
\mu_{\alpha} &\sim \mathcal{N}(0, 100) \\
\sigma_{\alpha} &\sim |\mathcal{N}(0, 100)| \\
\mu_{\beta} &\sim \mathcal{N}(0, 100) \\
\sigma_{\beta} &\sim |\mathcal{N}(0, 100)| \\
\alpha_i &\sim \mathcal{N}(\mu_{\alpha}, \sigma_{\alpha}) \\
\beta_i &\sim \mathcal{N}(\mu_{\beta}, \sigma_{\beta}) \\
\sigma &\sim \mathcal{N}(0, 100) \\
FVC_{ij} &\sim \mathcal{N}(\alpha_i + t \beta_i, \sigma)
\end{align}
where t is the time in weeks. Those are very uninformative priors, but that's ok: our model will converge!
Implementing this model in NumPyro is pretty straightforward:
End of explanation
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
train['PatientID'] = le.fit_transform(train['Patient'].values)
FVC_obs = train['FVC'].values
Weeks = train['Weeks'].values
PatientID = train['PatientID'].values
numpyro.set_host_device_count(4)
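# Optional sanity check (my addition): draw from the prior predictive before running
# MCMC, using the Predictive class imported above; FVC_obs defaults to None here.
prior_pred = Predictive(model, num_samples=50)
prior_samples = prior_pred(random.PRNGKey(1), PatientID, Weeks)
print(prior_samples['obs'].shape)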
Explanation: That's all for modelling!
3. Fitting the model
A great achievement of Probabilistic Programming Languages such as NumPyro is to decouple model specification and inference. After specifying my generative model, with priors, condition statements and data likelihood, I can leave the hard work to NumPyro's inference engine.
Calling it requires just a few lines. Before we do it, let's add a numerical Patient ID for each patient code. That can be easily done with scikit-learn's LabelEncoder:
End of explanation
nuts_kernel = NUTS(model)
mcmc = MCMC(nuts_kernel, num_samples=2000, num_warmup=2000)
rng_key = random.PRNGKey(0)
mcmc.run(rng_key, PatientID, Weeks, FVC_obs=FVC_obs)
posterior_samples = mcmc.get_samples()
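# Optional quick look at the posterior (print_summary is part of NumPyro's MCMC API):
# it reports the mean, standard deviation and r_hat of every sampled site.
mcmc.print_summary()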
Explanation: Now, calling NumPyro's inference engine:
End of explanation
import arviz as az
data = az.from_numpyro(mcmc)
az.plot_trace(data, compact=True);
Explanation: 4. Checking the model
4.1. Inspecting the learned parameters
First, let's inspect the parameters learned. To do that, I will use ArviZ, which perfectly integrates with NumPyro:
End of explanation
pred_template = []
for i in range(train['Patient'].nunique()):
df = pd.DataFrame(columns=['PatientID', 'Weeks'])
df['Weeks'] = np.arange(-12, 134)
df['PatientID'] = i
pred_template.append(df)
pred_template = pd.concat(pred_template, ignore_index=True)
Explanation: Looks like our model learned personalized alphas and betas for each patient!
4.2. Visualizing FVC decline curves for some patients
Now, let's visually inspect FVC decline curves predicted by our model. We will completely fill in the FVC table, predicting all missing values. The first step is to create a table to fill:
End of explanation
PatientID = pred_template['PatientID'].values
Weeks = pred_template['Weeks'].values
predictive = Predictive(model, posterior_samples,
return_sites=['σ', 'obs'])
samples_predictive = predictive(random.PRNGKey(0),
PatientID, Weeks, None)
Explanation: Predicting the missing values in the FVC table and confidence (sigma) for each value becomes really easy:
End of explanation
df = pd.DataFrame(columns=['Patient', 'Weeks', 'FVC_pred', 'sigma'])
df['Patient'] = le.inverse_transform(pred_template['PatientID'])
df['Weeks'] = pred_template['Weeks']
df['FVC_pred'] = samples_predictive['obs'].T.mean(axis=1)
df['sigma'] = samples_predictive['obs'].T.std(axis=1)
df['FVC_inf'] = df['FVC_pred'] - df['sigma']
df['FVC_sup'] = df['FVC_pred'] + df['sigma']
df = pd.merge(df, train[['Patient', 'Weeks', 'FVC']],
how='left', on=['Patient', 'Weeks'])
df = df.rename(columns={'FVC': 'FVC_true'})
df.head()
Explanation: Let's now put the predictions together with the true values, to visualize them:
End of explanation
def chart(patient_id, ax):
data = df[df['Patient'] == patient_id]
x = data['Weeks']
ax.set_title(patient_id)
ax.plot(x, data['FVC_true'], 'o')
ax.plot(x, data['FVC_pred'])
ax = sns.regplot(x, data['FVC_true'], ax=ax, ci=None,
line_kws={'color':'red'})
ax.fill_between(x, data["FVC_inf"], data["FVC_sup"],
alpha=0.5, color='#ffcd3c')
ax.set_ylabel('FVC')
f, axes = plt.subplots(1, 3, figsize=(15, 5))
chart('ID00007637202177411956430', axes[0])
chart('ID00009637202177434476278', axes[1])
chart('ID00011637202177653955184', axes[2])
Explanation: Finally, let's see our predictions for 3 patients:
End of explanation
y = df.dropna()
rmse = ((y['FVC_pred'] - y['FVC_true']) ** 2).mean() ** (1/2)
print(f'RMSE: {rmse:.1f} ml')
sigma_c = y['sigma'].values
sigma_c[sigma_c < 70] = 70
delta = (y['FVC_pred'] - y['FVC_true']).abs()
delta[delta > 1000] = 1000
lll = - np.sqrt(2) * delta / sigma_c - np.log(np.sqrt(2) * sigma_c)
print(f'Laplace Log Likelihood: {lll.mean():.4f}')
Explanation: The results are exactly what we expected to see! Highlight observations:
- The model adequately learned Bayesian Linear Regressions! The orange line (learned predicted FVC mean) is very inline with the red line (deterministic linear regression). But most important: it learned to predict uncertainty, showed in the light orange region (one sigma above and below the mean FVC line)
- The model predicts a higher uncertainty where the data points are more disperse (1st and 3rd patients). Conversely, where the points are closely grouped together (2nd patient), the model predicts a higher confidence (narrower light orange region)
- Finally, in all patients, we can see that the uncertainty grows as we look further into the future: the light orange region widens as the number of weeks grows!
4.3. Computing the modified Laplace Log Likelihood and RMSE
As mentioned earlier, the competition was evaluated on a modified version of the Laplace Log Likelihood. In medical applications, it is useful to evaluate a model's confidence in its decisions. Accordingly, the metric is designed to reflect both the accuracy and certainty of each prediction.
For each true FVC measurement, we predicted both an FVC and a confidence measure (standard deviation $\sigma$). The metric was computed as:
\begin{align}
\sigma_{clipped} &= \max(\sigma, 70) \\
\delta &= \min(|FVC_{true} - FVC_{pred}|, 1000) \\
\text{metric} &= -\dfrac{\sqrt{2}\,\delta}{\sigma_{clipped}} - \ln(\sqrt{2}\, \sigma_{clipped})
\end{align}
The error was thresholded at 1000 ml to avoid large errors adversely penalizing results, while the confidence values were clipped at 70 ml to reflect the approximate measurement uncertainty in FVC. The final score was calculated by averaging the metric across all (Patient, Week) pairs. Note that metric values will be negative and higher is better.
Next, we calculate the metric and RMSE:
End of explanation |
1,237 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Executed
Step1: Notebook arguments
sigma (float)
Step2: Fitting models
Models used to fit the data.
1. Simple Exponential
In this model, we define the model function as an exponential transient
Step3: 2. Integrated Exponential
A more realistic model needs to take into account that each data point
is the result of an integration over a time window $w$
Step4: Generative model
These are the models used to generate the simulated (noisy) data.
1. Simple Exponential + Noise
In this simple model, we simulate random data $Y$ as an exponential decay plus
additive Gaussian noise
Step5: An ideal transient (no noise, no integration)
Step6: A simulated transient (including noise + integration)
Step7: Plot the computed curves
Step8: Fit data
Fit the "Integrated Exponential" model
Step9: Fit the "Simple Exponential" model
Step10: Print and plot fit results
Step11: Monte-Carlo Simulation
Here, with the model parameters fixed, we generate and fit several noisy datasets. Then, by plotting the distribution of the fitted parameters, we assess the stability and accuracy of the fit.
Parameters
The number of simulation cycles is defined by num_sim_cycles. Current value is
Step12: The fixed kinetic curve parameters are
Step13: While tau is varied, taking the following values
Step14: <div class="alert alert-info">
**NOTE**
Step15: Run Monte-Carlo simulation
Run the Monte-Carlo fit for a set of different time-constants (taus)
and save results in two DataFrames, one for each model.
Step16: <div class="alert alert-danger">
**WARNING**
Step17: Results2 - Integrated Exponential | Python Code:
sigma = 0.016
time_window = 30
time_step = 5
time_start = -900
time_stop = 900
decimation = 20
t0_vary = True
true_params = dict(
tau = 60, # time constant
init_value = 0.3, # initial value (for t < t0)
final_value = 0.8, # final value (for t -> +inf)
t0 = 0) # time origin
num_sim_cycles = 1000
taus = (30, 60)
# Cell inserted during automated execution.
time_start = -900
num_sim_cycles = 1000
t0_vary = False
time_window = 30
taus = (5, 10, 30, 60)
decimation = 20
time_stop = 900
time_step = 5
true_params = {'init_value': 0.3, 't0': 0, 'tau': 60, 'final_value': 0.8}
sigma = 0.016
Explanation: Executed: Tue Oct 11 12:00:15 2016
Duration: 488 seconds.
End of explanation
%matplotlib inline
import numpy as np
import lmfit
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import models # custom module
Explanation: Notebook arguments
sigma (float): standard deviation of additive Gaussian noise to be simulated
time_window (float): seconds, integration window duration
time_step (float): seconds, time step for the moving integration window
time_start (float): seconds, start of time axis (kinetics starts at t = t0).
time_stop (float): seconds, stop of time axis (kinetics starts at t = t0).
t0_vary (bool): whether models should vary the curve origin (t0) during the fit
true_params (dict): parameters used to generate simulated kinetic curves
num_sim_cycles (int): number of times fit is repeated (Monte-Carlo)
taus (tuple): list of values for the time-constant tau simulated during repeated fits (Monte-Carlo).
Simulated Kinetic Curve Fit
<p class=lead>This notebook fits simulated exponential transients with additive Gaussian noise in order to study time-constant fitting accuracy.
In particular we compare a simple exponential model with a more realistic model
with integration window, checking the effect on the fit results.
<p>
You can either run this notebook directly, or run it through the [master notebook](Simulated Kinetic Curve Fit - Run-All.ipynb) for batch processing.
## Imports
End of explanation
labels = ('tau', 'init_value', 'final_value')
model = models.factory_model_exp(t0_vary=True)
Explanation: Fitting models
Models used to fit the data.
1. Simple Exponential
In this model, we define the model function as an exponential transient:
$$ y = f(t) = A \cdot e^{-t/\tau} + K$$
The python function implementing it is:
models.exp_func().
The next cell defines and initializes the fitting model (lmfit.model.Model), including the parameters' constraints:
End of explanation
modelw = models.factory_model_expwin(t_window=time_window, decimation=decimation, t0_vary=t0_vary)
Explanation: 2. Integrated Exponential
A more realistic model needs to take into account that each data point
is the result of an integration over a time window $w$:
$$f(t) = A \cdot e^{-t/\tau} + K$$
$$y(t) = \int_{t}^{t+w} f(t')\;dt'$$
In other words, when we process a measurement in time chunks, we are integrating
a non-stationary signal $f(t)$ over a time window $w$. This integration causes
a smoothing of $f(t)$, regardless of the fact that time is binned or
is swiped-through with a moving windows (overlapping chunks).
Numerically, $t$ is discretized with step equal to (time_step / decimation).
The python function implementing this model function is:
models.expwindec_func().
And, finally, we define and initialize the fitting model parameters' constrains:
End of explanation
t = np.arange(time_start, time_stop-time_window, time_step).astype(float)
t.size
Explanation: Generative model
These are the models used to generate the simulated (noisy) data.
1. Simple Exponential + Noise
In this simple model, we simulate random data $Y$ as an exponential decay plus
additive Gaussian noise:
$$ Y(t_k) = f(t_k) + N_k $$
$$ {N_k} \sim {\rm Normal}{\mu=0; \sigma}$$
$$ \Delta t = t_k - t_{k-1} = \texttt{time_step}$$
2. Integrated Exponential + Noise
For the "integrating window" model, we first define a finer time axis $\theta_i$
which oversamples $t_k$ by a factor $n$. Then we define the function $Y_f$
adding Gaussian noise $\sqrt{n}\,N_i$, with $n$ times larger variance:
$$ Y_f(\theta_i) = f(\theta_i) + \sqrt{n}\,N_i $$
$$ \Delta \theta = \theta_i - \theta_{i-1} = \texttt{time_step} \;/\; n$$
Finally, by averaging each time window, we compute the data on the coarse time axis $t_k$:
$$ Y_w(t_k) = \frac{1}{m}\sum_{i} Y_f(\theta_i)$$
Here, for each $t_k$, we compute the mean of $m$ consecutive $Y_f$ values. The number $m$
is chosen so that $m\, \Delta \theta$ is equal to the time window.
Noise amplitude
The amplitude of the additive noise ($\sigma$) is estimated from the experimental kinetic curves.
In particular we take the variance from the POST period (i.e. the steady state period after the transient).
The POST period has been chosen because it exhibits higher variance than the PRE period (i.e. the steady state period
before the transient). These values have been calculated in 8-spot bubble-bubble kinetics - Summary.
In both models we define the noise amplitude as sigma (see first cell):
sigma = 0.016
Time axis
We also define the parameters for the time axis $t$:
time_start = -900 # seconds
time_stop = 900 # seconds
time_step = 5 # seconds
Kinetic curve parameters
The simulated kinetic curve has the following parameters:
true_params = dict(
tau = 60, # time constant
init_value = 0.3, # initial value (for t < t0)
final_value = 0.8, # final value (for t -> +inf)
t0 = 0) # time origin
<div class="alert alert-info">
**NOTE**: All previous parameters are defined in the first notebook cell.
</div>
Single kinetic curve fit
Here we simulate one kinetic curve and fit it with the two models (simple exponential and integrated exponential).
Draw simulated data
Time axis for simulated data:
End of explanation
y = models.expwindec_func(t, t_window=time_window, **true_params)
y.shape
Explanation: An ideal transient (no noise, no integration):
End of explanation
time_window, time_step
yr = models.expwindec_func(t, t_window=time_window, sigma=sigma, **true_params)
yr.shape
Explanation: A simulated transient (including noise + integration):
End of explanation
plt.plot(t, y, '-', label='model')
plt.plot(t, yr, 'o', label='model + noise')
Explanation: Plot the computed curves:
End of explanation
#%%timeit
resw = modelw.fit(yr, t=t, tau=10, init_value=0.1, final_value=0.9, verbose=False)
Explanation: Fit data
Fit the "Integrated Exponential" model:
End of explanation
#%%timeit
res = model.fit(yr, t=t + 0.5*time_window, tau=10, init_value=0.1, final_value=0.9, verbose=False)
Explanation: Fit the "Simple Exponential" model:
End of explanation
fig = plt.figure(figsize=(14, 8))
res.plot(fig=fig)
ci = lmfit.conf_interval(res, res)
lmfit.report_fit(res)
print(lmfit.ci_report(ci, with_offset=False))
#plt.xlim(-300, 300)
fig = plt.figure(figsize=(14, 8))
resw.plot(fig=fig)
ci = lmfit.conf_interval(resw, resw)
lmfit.report_fit(resw)
print(lmfit.ci_report(ci, with_offset=False))
#plt.xlim(-300, 300)
Explanation: Print and plot fit results:
End of explanation
num_sim_cycles
Explanation: Monte-Carlo Simulation
Here, with the model parameters fixed, we generate and fit several noisy datasets. Then, by plotting the distribution of the fitted parameters, we assess the stability and accuracy of the fit.
Parameters
The number of simulation cycles is defined by num_sim_cycles. Current value is:
End of explanation
{k: v for k, v in true_params.items() if k is not "tau"}
Explanation: The fixed kinetic curve parameters are:
End of explanation
taus
t0_vary
Explanation: While tau is varied, taking the following values:
End of explanation
def draw_samples_and_fit(true_params):
# Create the data
t = np.arange(time_start, time_stop-time_window, time_step).astype(float)
yr = models.expwindec_func(t, t_window=time_window, sigma=sigma, decimation=100, **true_params)
# Fit the model
tc = t + 0.5*time_window
kws = dict(fit_kws=dict(nan_policy='omit'), verbose=False)
res = model.fit(yr, t=tc, tau=90, method='nelder', **kws)
res = model.fit(yr, t=tc, **kws)
resw = modelw.fit(yr, t=t, tau=400, decimation=decimation, method='nelder', **kws)
resw = modelw.fit(yr, t=t, decimation=decimation, **kws)
return res, resw
def monte_carlo_sim(true_params, N):
df1 = pd.DataFrame(index=range(N), columns=labels)
df2 = df1.copy()
for i in range(N):
res1, res2 = draw_samples_and_fit(true_params)
for var in labels:
df1.loc[i, var] = res1.values[var]
df2.loc[i, var] = res2.values[var]
return df1, df2
Explanation: <div class="alert alert-info">
**NOTE**: All previous parameters are defined in the first notebook cell.
</div>
Functions
Here we define two functions:
draw_samples_and_fit() draws a set of data and fits it with both models
monte_carlo_sim() runs the Monte-Carlo simulation: it calls draw_samples_and_fit() many times.
NOTE: Global variables are used by the previous functions.
End of explanation
mc_results1, mc_results2 = [], []
%%timeit -n1 -r1 # <-- prints execution time
for tau in taus:
true_params['tau'] = tau
df1, df2 = monte_carlo_sim(true_params, num_sim_cycles)
mc_results1.append(df1)
mc_results2.append(df2)
Explanation: Run Monte-Carlo simulation
Run the Monte-Carlo fit for a set of different time-constants (taus)
and save results in two DataFrames, one for each model.
End of explanation
for tau, df in zip(taus, mc_results1):
true_params['tau'] = tau
fig, ax = plt.subplots(1, 3, figsize=(16, 4))
for i, var in enumerate(labels):
std = df[var].std()
df[var].hist(bins=30, ax=ax[i])
ax[i].set_title("%s = %.1f (%.3f)" % (var, true_params[var], std), fontsize=18)
ax[i].axvline(true_params[var], color='r', ls='--')
#print('True parameters: %s' % true_params)
Explanation: <div class="alert alert-danger">
**WARNING**: The previous cell can take a long time to execute. Execution time scales with **`num_sim_cycles * len(taus)`**.
</div>
Results1 - Simple Exponential
End of explanation
for tau, df in zip(taus, mc_results2):
true_params['tau'] = tau
fig, ax = plt.subplots(1, 3, figsize=(16, 4))
for i, var in enumerate(labels):
std = df[var].std()
df[var].hist(bins=30, ax=ax[i])
ax[i].set_title("%s = %.1f (%.3f)" % (var, true_params[var], std), fontsize=18)
ax[i].axvline(true_params[var], color='r', ls='--')
#print('True parameters: %s' % true_params)
Explanation: Results2 - Integrated Exponential
End of explanation |
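Beyond the histograms, it can be convenient to condense the Monte-Carlo results into a small table of summary statistics per simulated time constant. The sketch below is an optional addition, not part of the original notebook; it only assumes that the lists mc_results1/mc_results2, the taus array and the labels list defined above are still in scope.
import pandas as pd
def summarize_mc(mc_results, taus, labels):
    # one row per simulated tau, with mean and standard deviation of every fitted parameter
    rows = []
    for tau, df in zip(taus, mc_results):
        stats = {'true_tau': tau}
        for var in labels:
            values = df[var].astype(float)
            stats['%s_mean' % var] = values.mean()
            stats['%s_std' % var] = values.std()
        rows.append(stats)
    return pd.DataFrame(rows).set_index('true_tau')
print(summarize_mc(mc_results1, taus, labels))  # simple exponential model
print(summarize_mc(mc_results2, taus, labels))  # integrated exponential model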
1,238 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Installing R on WinPython
This procedure applies to WinPython (versions of December 2015 and after)
1 - Downloading R binary
Step1: 2 - checking and Installing R binary in the right place
Step4: During installation (if you want to move the R installation afterwards)
Choose the non-default option "Yes (customized startup)"
then after 3 screens, Select "Don't create a Start Menu Folder"
Un-select "Create a desktop icon"
Un-select "Save version number in registery"
<img src="https
Step5: 4- Install an R package via an IPython Kernel
Step6: 5- Small demo via R magic
Step7: 6 - Installing the very best of R packages (optional, you will start to get a really big directory)
import os
import sys
import io
# downloading R may takes a few minutes (80Mo)
try:
import urllib.request as urllib2 # Python 3
except:
import urllib2 # Python 2
# specify R binary and (md5, sha1) hash
# R-3.4.3:
r_url = "https://cran.r-project.org/bin/windows/base/R-3.4.3-win.exe"
hashes=("0ff087acbae677d7255af19b0a9df27f","aabf0b671ae1dca741c3df9dee976a7d4b584f80")
# specify target location
r_installer = os.environ["WINPYDIR"]+"\\..\\tools\\"+os.path.basename(r_url)
os.environ["r_installer"] = r_installer
# Download
g = urllib2.urlopen(r_url)
with io.open(r_installer, 'wb') as f:
f.write(g.read())
g.close
g = None
#checking it's there
!dir %r_installer%
Explanation: Installing R on WinPython
This procedure applies to WinPython (versions of December 2015 and after)
1 - Downloading R binary
End of explanation
# checking it's the official R
import hashlib
def give_hash(of_file, with_this):
with io.open(of_file, 'rb') as f:  # hash the file passed as argument
return with_this(f.read()).hexdigest()
print (" "*12+"MD5"+" "*(32-12-3)+" "+" "*15+"SHA-1"+" "*(40-15-5)+"\n"+"-"*32+" "+"-"*40)
print ("%s %s %s" % (give_hash(r_installer, hashlib.md5) , give_hash(r_installer, hashlib.sha1),r_installer))
if give_hash(r_installer, hashlib.md5) == hashes[0] and give_hash(r_installer, hashlib.sha1) == hashes[1]:
print("looks good!")
else:
print("problem ! please check")
assert give_hash(r_installer, hashlib.md5) == hashes[0]
assert give_hash(r_installer, hashlib.sha1) == hashes[1]
# preparing Dos variables
os.environ["R_HOME"] = os.environ["WINPYDIR"]+ "\\..\\tools\\R\\"
os.environ["R_HOMEbin"]=os.environ["R_HOME"] + "bin"
# for installation we need this
os.environ["tmp_Rbase"]=os.path.join(os.path.split(os.environ["WINPYDIR"])[0] , 'tools','R' )
if 'amd64' in sys.version.lower():
r_comp ='/COMPONENTS="main,x64,translations'
else:
r_comp ='/COMPONENTS="main,i386,translations'
os.environ["tmp_R_comp"]=r_comp
# let's install it, if hashes do match
assert give_hash(r_installer, hashlib.md5) == hashes[0]
assert give_hash(r_installer, hashlib.sha1) == hashes[1]
# If you are "USB life style", or multi-winpython
# ==> CLICK the OPTION "Don't create a Start Menu Folder" <== (when it shows up)
!start cmd /C %r_installer% /DIR=%tmp_Rbase% %tmp_R_comp%
Explanation: 2 - checking and Installing R binary in the right place
End of explanation
import os
import sys
import io
# let's create a R launcher
r_launcher = r
@echo off
call %~dp0env.bat
rscript %*
r_launcher_bat = os.environ["WINPYDIR"]+"\\..\\scripts\\R_launcher.bat"
# let's create a R init script
# in manual command line, you can use repos = c('http://irkernel.github.io/', getOption('repos'))
r_initialization = r
install.packages(c('repr', 'IRdisplay', 'stringr', 'crayon', 'pbdZMQ', 'devtools'), repos = c('http://cran.rstudio.com/', 'http://cran.rstudio.com/'))
devtools::install_github('IRkernel/IRkernel')
library('pbdZMQ')
library('repr')
library('IRkernel')
library('IRdisplay')
library('crayon')
library('stringr')
IRkernel::installspec()
r_initialization_r = os.path.normpath(os.environ["WINPYDIR"]+"\\..\\scripts\\R_initialization.r")
for i in [(r_launcher,r_launcher_bat), (r_initialization, r_initialization_r)]:
with io.open(i[1], 'w', encoding = sys.getdefaultencoding() ) as f:
for line in i[0].splitlines():
f.write('%s\n' % line )
#check what we are going to do
print ("!start cmd /C %WINPYDIR%\\..\\scripts\\R_launcher.bat --no-restore --no-save " + r_initialization_r)
# Launch Rkernel setup
os.environ["r_initialization_r"] = r_initialization_r
!start cmd /C %WINPYDIR%\\..\\scripts\\R_launcher.bat --no-restore --no-save %r_initialization_r%
# make RKernel a movable installation with the rest of WinPython
from winpython import utils
base_winpython = os.path.dirname(os.path.normpath(os.environ["WINPYDIR"]))
rkernel_json=(base_winpython+"\\settings\\kernels\\ir\\kernel.json")
# so we get "argv": ["{prefix}/../tools/R/bin/x64/R"
utils.patch_sourcefile(rkernel_json, base_winpython.replace("\\","/"), r'{prefix}/..', silent_mode=False)
Explanation: During installation (if you want to move the R installation afterwards)
Choose the non-default option "Yes (customized startup)"
then after 3 screens, Select "Don't create a Start Menu Folder"
Un-select "Create a desktop icon"
Un-select "Save version number in registery"
<img src="https://raw.githubusercontent.com/stonebig/winpython_afterdoc/master/examples/images/r_setup_unclick_shortcut.GIF">
3 - create a R_launcher and install irkernel
End of explanation
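As an optional sanity check, not part of the original procedure, you can confirm that the new R kernel was registered for Jupyter; an "ir" entry should show up in the list. This only assumes that the jupyter command of the active WinPython environment is on the PATH.
# list all registered Jupyter kernels; IRkernel::installspec() should have added an "ir" entry
!jupyter kernelspec list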
%load_ext rpy2.ipython
#vitals: 'dplyr', 'R.utils', 'nycflights13'
# installation takes 2 minutes
%R install.packages(c('dplyr','R.utils', 'nycflights13'), repos='http://cran.rstudio.com/')
Explanation: 4- Install an R package via an IPython Kernel
End of explanation
%load_ext rpy2.ipython
%%R
library('dplyr')
library('nycflights13')
write.csv(flights, "flights.csv")
%R head(flights)
%R airports %>% mutate(dest = faa) %>% semi_join(flights) %>% head
Explanation: 5- Small demo via R magic
End of explanation
# essentials: 'tidyr', 'shiny', 'ggplot2', 'caret' , 'nnet'
# remaining of the Hadley Wickham "stack" (https://github.com/rstudio)
%R install.packages(c('tidyr', 'ggplot2', 'shiny','caret' , 'nnet'), repos='https://cran.rstudio.com/')
%R install.packages(c('knitr', 'purrr', 'readr', 'readxl'), repos='https://cran.rstudio.com/')
%R install.packages(c('rvest', 'lubridate', 'ggvis', 'readr','base64enc'), repos='https://cran.rstudio.com/')
# TRAINING = online training book http://r4ds.had.co.nz/ (or https://github.com/hadley/r4ds)
Explanation: 6 - Installing the very best of R packages (optional, you will start to get a really big directory)
End of explanation |
1,239 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
FloPy
SpatialReference demo
A short demonstration of functionality in the SpatialReference class for locating the model in a "real world" coordinate reference system
Step1: description
SpatialReference is a stand-alone option that describes the location of the model grid in a real-world coordinate reference system. It can be created with some basic information about the model's location.
Step2: SpatialReference makes many calculations internally, so you don't have to
calling the sr object prints the important parameters. Note that the upper right corner of the model, which is often used to specify the origin, has been calculated. A proj4 string has also been fetched from <spatialreference.org> using the supplied epsg code.
Step3: Note the length parameters
units
Step4: Note that the length multiplier and xul, yul changed
the lower left corner was maintained because it was specified. If xul, yul had been specified instead, that would have been preserved.
Step5: information about the grid coordinates
cell centroids
Step6: cell vertices
Step7: Transformations
model coordinates to real-world coordinates
e.g., for working with MODPATH output
Step8: real-word coordinates to model coordinates
e.g., for specifying observations for Hydmod
Step9: model row, column for real-world coordinate
Step10: Grid bounds
Step11: Shapefile of grid
Step12: grid spec file
(for PEST)
Step13: Loading SpatialReference
SpatialReference is included in flopy models at the model level;
with model.write() the basic parameters are written to the comment header of the Name file.
Step14: on load
SpatialReference info is read from
1) usgs.model.reference, if it exists (https
Step15: Adding/modifying SpatialReference for a model
Step16: individual parameters can also be updated
Step17: note that the epsg code was cleared when the proj4 string was updated
Step18: interpolation between cells
(e.g. for head values at point locations)
read in some head results
Step19: get interpolated head at some points
uses scipy.interpolate.griddata to interpolate values between cell centers
Step20: rasterizing features
requires the rasterio and fiona modules, available via conda-forge (mac or windows), or the Unofficial Python Binaries website (windows)
create a geoJSON-style geometry
(this could be easily read in from a shapefile
Step21: read in a shapefile
Step22: list the feature geometries and attributes
Step23: select an attribute to map onto the model grid
Step24: make an rasterio.Affine object using info from the sr
Step25: rasterize the features, applying the attribute values to cells intersecting each one
In this case, the cells with zeros are those that don't intersect a feature | Python Code:
import sys
sys.path.append('../..')
import os
import numpy as np
import matplotlib.pyplot as plt
import flopy
from flopy.utils.reference import SpatialReference
import flopy.utils.binaryfile as bf
% matplotlib inline
outpath = 'temp/'
Explanation: FloPy
SpatialReference demo
A short demonstration of functionality in the SpatialReference class for locating the model in a "real world" coordinate reference system
End of explanation
nrow, ncol = 10, 10
xll, yll = 617822.3, 5114820.7 # origin of the model (lower left corner)
dxdy = 250 # grid spacing (in model units)
rot = 29 # rotation (positive counterclockwise)
# epsg code specifying coordinate reference system
# (https://www.epsg-registry.org/)
# in this case, UTM zone 16 N, NAD83
# http://spatialreference.org/ref/epsg/nad83-utm-zone-16n/
model_epsg = 26915
# alternatively, a proj4 string can be supplied
model_proj4 = 'http://spatialreference.org/ref/epsg/nad83-utm-zone-16n/proj4/'
# row and column spacings
# (note that delr is the spacing along a row, one value per column; delc is the spacing along a column, one value per row)
delc = np.ones(nrow, dtype=float) * dxdy
delr = np.ones(ncol, dtype=float) * dxdy
sr = SpatialReference(delr=delr, delc=delc, xll=xll, yll=yll, rotation=rot, epsg=model_epsg)
Explanation: description
SpatialReference is a stand-alone option that describes the location of the model grid in a real-world coordinate reference system. It can be created with some basic information about the model's location.
End of explanation
sr
Explanation: SpatialReference makes many calculations internally, so you don't have to
calling the sr object prints the important parameters. Note that the upper right corner of the model, which is often used to specify the origin, has been calculated. A proj4 string has also been fetched from <spatialreference.org> using the supplied epsg code.
End of explanation
# switch the MODFLOW units to feet
sr.lenuni = 1
sr
Explanation: Note the length parameters
units: Length units for real world coordinate system (typically meters; feet are also supported). This attribute is inferred from the epsg code or proj4 string, but can also be supplied.
lenuni: MODFLOW length unit (see documentation) (default is 2, meters).
length_multiplier: multiplier for scaling grid from MODFLOW units to real world crs units. This parameter is inferred from the above units, or can be supplied directly.
SpatialReference parameters can be updated dynamically
End of explanation
sr.xll, sr.yll
Explanation: Note that the length multiplier and xul, yul changed
the lower left corner was maintained because it was specified. If xul, yul had been specified instead, that would have been preserved.
End of explanation
sr.xcentergrid, sr.ycentergrid
Explanation: information about the grid coordinates
cell centroids
End of explanation
sr.vertices
Explanation: cell vertices
End of explanation
sr.transform(np.array([0, 10.]), np.array([0, 2.5]))
Explanation: Transformations
model coordinates to real-world coordinates
e.g., for working with MODPATH output
End of explanation
sr.transform(617824.59, 5114822.84, inverse=True)
Explanation: real-world coordinates to model coordinates
e.g., for specifying observations for Hydmod
End of explanation
sr.get_rc(671900., 618000.)
Explanation: model row, column for real-world coordinate
End of explanation
sr.bounds
Explanation: Grid bounds
End of explanation
sr.write_shapefile(outpath + 'grid.shp')
Explanation: Shapefile of grid
End of explanation
sr.write_gridSpec(outpath + 'grid.spc')
Explanation: grid spec file
(for PEST)
End of explanation
# load an existing model
model_ws = os.path.join("..", "data", "freyberg_multilayer_transient")
with open(model_ws + '/freyberg.nam') as input:
print(input.readlines()[0:2])
Explanation: Loading SpatialReference
SpatialReference is included in flopy models at the model level;
with model.write() the basic parameters are written to the comment header of the Name file.
End of explanation
ml = flopy.modflow.Modflow.load("freyberg.nam", model_ws=model_ws, verbose=False,
check=False, exe_name="mfnwt")
ml.sr
Explanation: on load
SpatialReference info is read from
1) usgs.model.reference, if it exists (https://water.usgs.gov/ogw/policy/gw-model/modelers-setup.html)
2) otherwise, the Name file
if no spatial reference information is found, a default sr object is created
End of explanation
ml.sr = SpatialReference(delr=ml.dis.delr, delc=ml.dis.delc,
xul=150000, yul=30000, rotation=25, epsg=26715)
ml.sr
Explanation: Adding/modifying SpatialReference for a model
End of explanation
ml.sr.proj4_str = "+proj=utm +zone=14 +ellps=WGS84 +datum=WGS84 +units=m +no_defs"
ml.sr
Explanation: individual parameters can also be updated
End of explanation
ml.sr.epsg
Explanation: note that the epsg code was cleared when the proj4 string was updated
End of explanation
hdsobj = bf.HeadFile(model_ws + '/freyberg.hds')
hds = hdsobj.get_data()
hds[hds < -999] = np.nan
plt.imshow(hds[0])
plt.colorbar()
ml.sr.bounds
Explanation: interpolation between cells
(e.g. for head values at point locations)
read in some head results
End of explanation
ml.sr.interpolate(hds[0], ([157000, 160000], [30000, 27000]))
ml.sr.xll = 0
ml.sr.yll = 0
ml.sr.rotation = 0
Explanation: get interpolated head at some points
uses scipy.interpolate.griddata to interpolate values between cell centers
End of explanation
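For reference, essentially the same interpolation can be reproduced directly with SciPy using the cell centers stored on the sr object. This is only an illustrative sketch of the idea, not the internal flopy implementation, and it reuses the hds array and ml.sr from the cells above.
import numpy as np
from scipy.interpolate import griddata
# known data: model cell centers (x, y) and the corresponding heads of layer 1
points = np.column_stack([ml.sr.xcentergrid.ravel(), ml.sr.ycentergrid.ravel()])
values = hds[0].ravel()
mask = ~np.isnan(values)  # drop no-data cells
# interpolate at two sample locations (here simply the first two cell centers)
griddata(points[mask], values[mask], points[:2], method='linear')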
import fiona
from rasterio.features import rasterize
from rasterio import Affine
Explanation: rasterizing features
requires the rasterio and fiona modules, available via conda-forge (mac or windows), or the Unofficial Python Binaries website (windows)
create a geoJSON-style geometry
(this could be easily read in from a shapefile
End of explanation
shpname = '../data/freyberg/gis/bedrock_outcrop_hole.shp'
with fiona.open(shpname) as src:
records = [r for r in src]
Explanation: read in a shapefile
End of explanation
geoms = [r['geometry'] for r in records]
attr = [r['properties'] for r in records]
attr
Explanation: list the feature geometries and attributes
End of explanation
geoms = [(g, attr[i]['OBJECTID']) for i, g in enumerate(geoms)]
Explanation: select an attribute to map onto the model grid
End of explanation
dx = ml.dis.delr.array[0]
dy = ml.dis.delc.array[0]
trans = Affine(dx, ml.sr.rotation, ml.sr.xul,
ml.sr.rotation, -dy, ml.sr.yul)
trans
Explanation: make an rasterio.Affine object using info from the sr
End of explanation
r = rasterize(geoms, out_shape=(ml.nrow, ml.ncol), transform=trans)
fig, ax = plt.subplots()
qm = ml.sr.plot_array(r, ax=ax)
ax.set_aspect(1)
plt.colorbar(qm)
Explanation: rasterize the features, applying the attribute values to cells intersecting each one
In this case, the cells with zeros are those that don't intersect a feature
End of explanation |
1,240 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Artificial Intelligence Nanodegree
Machine Translation Project
In this notebook, sections that end with '(IMPLEMENTATION)' in the header indicate that the following blocks of code will require additional functionality which you must provide. Please be sure to read the instructions carefully!
Introduction
In this notebook, you will build a deep neural network that functions as part of an end-to-end machine translation pipeline. Your completed pipeline will accept English text as input and return the French translation.
Preprocess - You'll convert text to sequence of integers.
Models - Create models which accept a sequence of integers as input and return a probability distribution over possible translations. After learning about the basic types of neural networks that are often used for machine translation, you will engage in your own investigations to design your own model!
Prediction - Run the model on English text.
Dataset
We begin by investigating the dataset that will be used to train and evaluate your pipeline. The most common datasets used for machine translation are from WMT. However, that will take a long time to train a neural network on. We'll be using a dataset we created for this project that contains a small vocabulary. You'll be able to train your model in a reasonable time with this dataset.
Load Data
The data is located in data/small_vocab_en and data/small_vocab_fr. The small_vocab_en file contains English sentences with their French translations in the small_vocab_fr file. Load the English and French data from these files from running the cell below.
Step1: Files
Each line in small_vocab_en contains an English sentence with the respective translation in each line of small_vocab_fr. View the first two lines from each file.
Step2: From looking at the sentences, you can see they have been preprocessed already. The punctuation has been delimited using spaces. All the text has been converted to lowercase. This should save you some time, but the text requires more preprocessing.
Vocabulary
The complexity of the problem is determined by the complexity of the vocabulary. A more complex vocabulary is a more complex problem. Let's look at the complexity of the dataset we'll be working with.
Step4: For comparison, Alice's Adventures in Wonderland contains 2,766 unique words of a total of 15,500 words.
Preprocess
For this project, you won't use text data as input to your model. Instead, you'll convert the text into sequences of integers using the following preprocess methods
Step6: Padding (IMPLEMENTATION)
When batching the sequence of word ids together, each sequence needs to be the same length. Since sentences are dynamic in length, we can add padding to the end of the sequences to make them the same length.
Make sure all the English sequences have the same length and all the French sequences have the same length by adding padding to the end of each sequence using Keras's pad_sequences function.
Step8: Preprocess Pipeline
Your focus for this project is to build neural network architecture, so we won't ask you to create a preprocess pipeline. Instead, we've provided you with the implementation of the preprocess function.
Step10: Models
In this section, you will experiment with various neural network architectures.
You will begin by training four relatively simple architectures.
- Model 1 is a simple RNN
- Model 2 is a RNN with Embedding
- Model 3 is a Bidirectional RNN
- Model 4 is an optional Encoder-Decoder RNN
After experimenting with the four simple architectures, you will construct a deeper architecture that is designed to outperform all four models.
Ids Back to Text
The neural network will be translating the input to word ids, which isn't the final form we want. We want the French translation. The function logits_to_text will bridge the gap between the logits from the neural network to the French translation. You'll be using this function to better understand the output of the neural network.
Step12: Model 1
Step14: Model 2
Step16: Model 3
Step18: Model 4
Step20: Model 5
Step22: Prediction (IMPLEMENTATION) | Python Code:
import helper
# Load English data
english_sentences = helper.load_data('data/small_vocab_en')
# Load French data
french_sentences = helper.load_data('data/small_vocab_fr')
print('Dataset Loaded')
Explanation: Artificial Intelligence Nanodegree
Machine Translation Project
In this notebook, sections that end with '(IMPLEMENTATION)' in the header indicate that the following blocks of code will require additional functionality which you must provide. Please be sure to read the instructions carefully!
Introduction
In this notebook, you will build a deep neural network that functions as part of an end-to-end machine translation pipeline. Your completed pipeline will accept English text as input and return the French translation.
Preprocess - You'll convert text to sequence of integers.
Models - Create models which accept a sequence of integers as input and return a probability distribution over possible translations. After learning about the basic types of neural networks that are often used for machine translation, you will engage in your own investigations to design your own model!
Prediction - Run the model on English text.
Dataset
We begin by investigating the dataset that will be used to train and evaluate your pipeline. The most common datasets used for machine translation are from WMT. However, that will take a long time to train a neural network on. We'll be using a dataset we created for this project that contains a small vocabulary. You'll be able to train your model in a reasonable time with this dataset.
Load Data
The data is located in data/small_vocab_en and data/small_vocab_fr. The small_vocab_en file contains English sentences with their French translations in the small_vocab_fr file. Load the English and French data from these files from running the cell below.
End of explanation
for sample_i in range(2):
print('small_vocab_en Line {}: {}'.format(sample_i + 1, english_sentences[sample_i]))
print('small_vocab_fr Line {}: {}'.format(sample_i + 1, french_sentences[sample_i]))
Explanation: Files
Each line in small_vocab_en contains an English sentence with the respective translation in each line of small_vocab_fr. View the first two lines from each file.
End of explanation
import collections
english_words_counter = collections.Counter([word for sentence in english_sentences for word in sentence.split()])
french_words_counter = collections.Counter([word for sentence in french_sentences for word in sentence.split()])
print('{} English words.'.format(len([word for sentence in english_sentences for word in sentence.split()])))
print('{} unique English words.'.format(len(english_words_counter)))
print('10 Most common words in the English dataset:')
print('"' + '" "'.join(list(zip(*english_words_counter.most_common(10)))[0]) + '"')
print()
print('{} French words.'.format(len([word for sentence in french_sentences for word in sentence.split()])))
print('{} unique French words.'.format(len(french_words_counter)))
print('10 Most common words in the French dataset:')
print('"' + '" "'.join(list(zip(*french_words_counter.most_common(10)))[0]) + '"')
Explanation: From looking at the sentences, you can see they have been preprocessed already. The punctuation has been delimited using spaces. All the text has been converted to lowercase. This should save you some time, but the text requires more preprocessing.
Vocabulary
The complexity of the problem is determined by the complexity of the vocabulary. A more complex vocabulary is a more complex problem. Let's look at the complexity of the dataset we'll be working with.
End of explanation
import project_tests as tests
from keras.preprocessing.text import Tokenizer
def tokenize(x):
Tokenize x
:param x: List of sentences/strings to be tokenized
:return: Tuple of (tokenized x data, tokenizer used to tokenize x)
# TODO: Implement
t_tokenizer = Tokenizer()
t_tokenizer.fit_on_texts(x)
t_tokenized = t_tokenizer.texts_to_sequences(x)
return t_tokenized, t_tokenizer
tests.test_tokenize(tokenize)
# Tokenize Example output
text_sentences = [
'The quick brown fox jumps over the lazy dog .',
'By Jove , my quick study of lexicography won a prize .',
'This is a short sentence .']
text_tokenized, text_tokenizer = tokenize(text_sentences)
print(text_tokenizer.word_index)
print()
for sample_i, (sent, token_sent) in enumerate(zip(text_sentences, text_tokenized)):
print('Sequence {} in x'.format(sample_i + 1))
print(' Input: {}'.format(sent))
print(' Output: {}'.format(token_sent))
Explanation: For comparison, Alice's Adventures in Wonderland contains 2,766 unique words of a total of 15,500 words.
Preprocess
For this project, you won't use text data as input to your model. Instead, you'll convert the text into sequences of integers using the following preprocess methods:
1. Tokenize the words into ids
2. Add padding to make all the sequences the same length.
Time to start preprocessing the data...
Tokenize (IMPLEMENTATION)
For a neural network to predict on text data, it first has to be turned into data it can understand. Text data like "dog" is a sequence of ASCII character encodings. Since a neural network is a series of multiplication and addition operations, the input data needs to be number(s).
We can turn each character into a number or each word into a number. These are called character and word ids, respectively. Character ids are used for character level models that generate text predictions for each character. A word level model uses word ids that generate text predictions for each word. Word level models tend to learn better, since they are lower in complexity, so we'll use those.
Turn each sentence into a sequence of word ids using Keras's Tokenizer function. Use this function to tokenize english_sentences and french_sentences in the cell below.
Running the cell will run tokenize on sample data and show output for debugging.
End of explanation
import numpy as np
from keras.preprocessing.sequence import pad_sequences
def pad(x, length=None):
Pad x
:param x: List of sequences.
:param length: Length to pad the sequence to. If None, use length of longest sequence in x.
:return: Padded numpy array of sequences
# TODO: Implement
return pad_sequences(sequences=x, maxlen=length, padding='post')
tests.test_pad(pad)
# Pad Tokenized output
test_pad = pad(text_tokenized)
for sample_i, (token_sent, pad_sent) in enumerate(zip(text_tokenized, test_pad)):
print('Sequence {} in x'.format(sample_i + 1))
print(' Input: {}'.format(np.array(token_sent)))
print(' Output: {}'.format(pad_sent))
Explanation: Padding (IMPLEMENTATION)
When batching the sequence of word ids together, each sequence needs to be the same length. Since sentences are dynamic in length, we can add padding to the end of the sequences to make them the same length.
Make sure all the English sequences have the same length and all the French sequences have the same length by adding padding to the end of each sequence using Keras's pad_sequences function.
End of explanation
def preprocess(x, y):
Preprocess x and y
:param x: Feature List of sentences
:param y: Label List of sentences
:return: Tuple of (Preprocessed x, Preprocessed y, x tokenizer, y tokenizer)
preprocess_x, x_tk = tokenize(x)
preprocess_y, y_tk = tokenize(y)
preprocess_x = pad(preprocess_x)
preprocess_y = pad(preprocess_y)
# Keras's sparse_categorical_crossentropy function requires the labels to be in 3 dimensions
preprocess_y = preprocess_y.reshape(*preprocess_y.shape, 1)
return preprocess_x, preprocess_y, x_tk, y_tk
preproc_english_sentences, preproc_french_sentences, english_tokenizer, french_tokenizer =\
preprocess(english_sentences, french_sentences)
print('Data Preprocessed')
Explanation: Preprocess Pipeline
Your focus for this project is to build neural network architecture, so we won't ask you to create a preprocess pipeline. Instead, we've provided you with the implementation of the preprocess function.
End of explanation
def logits_to_text(logits, tokenizer):
Turn logits from a neural network into text using the tokenizer
:param logits: Logits from a neural network
:param tokenizer: Keras Tokenizer fit on the labels
:return: String that represents the text of the logits
index_to_words = {id: word for word, id in tokenizer.word_index.items()}
index_to_words[0] = '<PAD>'
return ' '.join([index_to_words[prediction] for prediction in np.argmax(logits, 1)])
print('`logits_to_text` function loaded.')
Explanation: Models
In this section, you will experiment with various neural network architectures.
You will begin by training four relatively simple architectures.
- Model 1 is a simple RNN
- Model 2 is a RNN with Embedding
- Model 3 is a Bidirectional RNN
- Model 4 is an optional Encoder-Decoder RNN
After experimenting with the four simple architectures, you will construct a deeper architecture that is designed to outperform all four models.
Ids Back to Text
The neural network will be translating the input to word ids, which isn't the final form we want. We want the French translation. The function logits_to_text will bridge the gap between the logits from the neural network to the French translation. You'll be using this function to better understand the output of the neural network.
End of explanation
from keras.layers import GRU, Input, Dense, TimeDistributed
from keras.models import Model, Sequential
from keras.layers import Activation
from keras.optimizers import Adam
from keras.losses import sparse_categorical_crossentropy
def simple_model(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
Build and train a basic RNN on x and y
:param input_shape: Tuple of input shape
:param output_sequence_length: Length of output sequence
:param english_vocab_size: Number of unique English words in the dataset
:param french_vocab_size: Number of unique French words in the dataset
:return: Keras model built, but not trained
# TODO: Build the layers
learning_rate = 0.01
network_input = Input(shape=input_shape[1:])
network = GRU(512, return_sequences=True)(network_input)
network = TimeDistributed(Dense(french_vocab_size, activation='relu'))(network)
network_output = Activation('softmax')(network)
model = Model(inputs=network_input, outputs=network_output)
model.compile(loss=sparse_categorical_crossentropy,
optimizer=Adam(learning_rate),
metrics=['accuracy'])
return model
tests.test_simple_model(simple_model)
# Reshaping the input to work with a basic RNN
tmp_x = pad(preproc_english_sentences, preproc_french_sentences.shape[1])
tmp_x = tmp_x.reshape((-1, preproc_french_sentences.shape[-2], 1))
# Train the neural network
simple_rnn_model = simple_model(
tmp_x.shape,
preproc_french_sentences.shape[1],
len(english_tokenizer.word_index),
len(french_tokenizer.word_index))
simple_rnn_model.fit(tmp_x, preproc_french_sentences, batch_size=1024, epochs=10, validation_split=0.2)
# Print prediction(s)
print(logits_to_text(simple_rnn_model.predict(tmp_x[:1])[0], french_tokenizer))
Explanation: Model 1: RNN (IMPLEMENTATION)
A basic RNN model is a good baseline for sequence data. In this model, you'll build a RNN that translates English to French.
End of explanation
from keras.layers.embeddings import Embedding
def embed_model(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
Build and train a RNN model using word embedding on x and y
:param input_shape: Tuple of input shape
:param output_sequence_length: Length of output sequence
:param english_vocab_size: Number of unique English words in the dataset
:param french_vocab_size: Number of unique French words in the dataset
:return: Keras model built, but not trained
# TODO: Implement
network_input = Input(shape=input_shape[1:])
network_output = Embedding(input_dim=english_vocab_size, output_dim=output_sequence_length)(network_input)
network_output = GRU(512, return_sequences=True)(network_output)
network_output = TimeDistributed(Dense(french_vocab_size, activation='softmax'))(network_output)
model = Model(inputs=network_input, outputs=network_output)
model.compile(loss=sparse_categorical_crossentropy, optimizer=Adam(0.01), metrics=['accuracy'])
return model
tests.test_embed_model(embed_model)
# TODO: Reshape the input
tmp_x = pad(preproc_english_sentences, preproc_french_sentences.shape[1])
tmp_x = tmp_x.reshape((-1, preproc_french_sentences.shape[-2]))
# TODO: Train the neural network
embed_rnn_model = embed_model(
tmp_x.shape,
preproc_french_sentences.shape[1],
len(english_tokenizer.word_index),
len(french_tokenizer.word_index))
embed_rnn_model.fit(tmp_x, preproc_french_sentences, batch_size=1024, epochs=10, validation_split=0.2)
# TODO: Print prediction(s)
print(logits_to_text(embed_rnn_model.predict(tmp_x[:1])[0], french_tokenizer))
Explanation: Model 2: Embedding (IMPLEMENTATION)
You've turned the words into ids, but there's a better representation of a word. This is called word embeddings. An embedding is a vector representation of the word that is close to similar words in n-dimensional space, where the n represents the size of the embedding vectors.
In this model, you'll create a RNN model using embedding.
End of explanation
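As a quick, optional check of what the Embedding layer has learned, its weight matrix can be pulled out of the trained model. This sketch assumes that the Embedding layer is the second layer of embed_rnn_model as built above.
# the embedding matrix has shape (english_vocab_size, embedding_dim)
embedding_matrix = embed_rnn_model.layers[1].get_weights()[0]
print(embedding_matrix.shape)
# first entries of the vector learned for the most frequent English word (id 1 in the Keras tokenizer)
print(embedding_matrix[1][:10])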
from keras.layers import Bidirectional
def bd_model(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
Build and train a bidirectional RNN model on x and y
:param input_shape: Tuple of input shape
:param output_sequence_length: Length of output sequence
:param english_vocab_size: Number of unique English words in the dataset
:param french_vocab_size: Number of unique French words in the dataset
:return: Keras model built, but not trained
# TODO: Implement
network_input = Input(shape=input_shape[1:])
network_output = Bidirectional(GRU(output_sequence_length, return_sequences=True))(network_input)
network_output = TimeDistributed(Dense(french_vocab_size, activation='softmax'))(network_output)
model = Model(inputs=network_input, outputs=network_output)
model.compile(loss=sparse_categorical_crossentropy, optimizer=Adam(0.01), metrics=['accuracy'])
return model
tests.test_bd_model(bd_model)
# TODO: Train and Print prediction(s)
tmp_x = pad(preproc_english_sentences, preproc_french_sentences.shape[1])
tmp_x = tmp_x.reshape((-1, preproc_french_sentences.shape[-2], 1))
bd_rnn_model = bd_model(
tmp_x.shape,
preproc_french_sentences.shape[1],
len(english_tokenizer.word_index),
len(french_tokenizer.word_index))
bd_rnn_model.fit(tmp_x, preproc_french_sentences, batch_size=1024, epochs=10, validation_split=0.2)
print(logits_to_text(bd_rnn_model.predict(tmp_x[:1])[0], french_tokenizer))
Explanation: Model 3: Bidirectional RNNs (IMPLEMENTATION)
One restriction of a RNN is that it can't see the future input, only the past. This is where bidirectional recurrent neural networks come in. They are able to see the future data.
End of explanation
from keras.layers import RepeatVector
def encdec_model(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
Build and train an encoder-decoder model on x and y
:param input_shape: Tuple of input shape
:param output_sequence_length: Length of output sequence
:param english_vocab_size: Number of unique English words in the dataset
:param french_vocab_size: Number of unique French words in the dataset
:return: Keras model built, but not trained
# OPTIONAL: Implement
return None
tests.test_encdec_model(encdec_model)
# OPTIONAL: Train and Print prediction(s)
Explanation: Model 4: Encoder-Decoder (OPTIONAL)
Time to look at encoder-decoder models. This model is made up of an encoder and decoder. The encoder creates a matrix representation of the sentence. The decoder takes this matrix as input and predicts the translation as output.
Create an encoder-decoder model in the cell below.
End of explanation
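Since the optional function above is left returning None, a minimal sketch of one possible encoder-decoder implementation is given here for reference. It follows the same RepeatVector pattern as the final model later in this notebook; the function name encdec_model_sketch is hypothetical and the layer sizes are arbitrary choices, so treat it as a starting point rather than the project's reference solution.
from keras.models import Sequential
from keras.layers import GRU, RepeatVector, TimeDistributed, Dense
from keras.optimizers import Adam
from keras.losses import sparse_categorical_crossentropy
def encdec_model_sketch(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
    model = Sequential()
    # encoder: compress the input sequence into a single state vector
    model.add(GRU(256, input_shape=input_shape[1:], return_sequences=False))
    # decoder: repeat the encoded vector for every output time step and unroll it
    model.add(RepeatVector(output_sequence_length))
    model.add(GRU(256, return_sequences=True))
    model.add(TimeDistributed(Dense(french_vocab_size, activation='softmax')))
    model.compile(loss=sparse_categorical_crossentropy, optimizer=Adam(0.001), metrics=['accuracy'])
    return model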
from keras.layers import RepeatVector
def model_final(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
Build and train a model that incorporates embedding, encoder-decoder, and bidirectional RNN on x and y
:param input_shape: Tuple of input shape
:param output_sequence_length: Length of output sequence
:param english_vocab_size: Number of unique English words in the dataset
:param french_vocab_size: Number of unique French words in the dataset
:return: Keras model built, but not trained
# TODO: Implement
model = Sequential()
model.add(Embedding(english_vocab_size, 256, input_length=input_shape[1]))
model.add(Bidirectional(GRU(512, return_sequences=False)))
model.add(RepeatVector(output_sequence_length))
model.add(Bidirectional(GRU(512, return_sequences=True)))
model.add(TimeDistributed(Dense(french_vocab_size, activation='softmax')))
model.compile(loss=sparse_categorical_crossentropy, optimizer=Adam(0.001), metrics=['accuracy'])
return model
tests.test_model_final(model_final)
print('Final Model Loaded')
Explanation: Model 5: Custom (IMPLEMENTATION)
Use everything you learned from the previous models to create a model that incorporates embedding and a bidirectional rnn into one model.
End of explanation
import numpy as np
from keras.preprocessing.sequence import pad_sequences
def final_predictions(x, y, x_tk, y_tk):
Gets predictions using the final model
:param x: Preprocessed English data
:param y: Preprocessed French data
:param x_tk: English tokenizer
:param y_tk: French tokenizer
# TODO: Train neural network using model_final
model = model_final(x.shape, y.shape[1], len(x_tk.word_index), len(y_tk.word_index))
model.fit(x, y, batch_size=1024, epochs=10, validation_split=0.2)
## DON'T EDIT ANYTHING BELOW THIS LINE
y_id_to_word = {value: key for key, value in y_tk.word_index.items()}
y_id_to_word[0] = '<PAD>'
sentence = 'he saw a old yellow truck'
sentence = [x_tk.word_index[word] for word in sentence.split()]
sentence = pad_sequences([sentence], maxlen=x.shape[-1], padding='post')
sentences = np.array([sentence[0], x[0]])
predictions = model.predict(sentences, len(sentences))
print('Sample 1:')
print(' '.join([y_id_to_word[np.argmax(x)] for x in predictions[0]]))
print('Il a vu un vieux camion jaune')
print('Sample 2:')
print(' '.join([y_id_to_word[np.argmax(x)] for x in predictions[1]]))
print(' '.join([y_id_to_word[np.argmax(x)] for x in y[0]]))
final_predictions(preproc_english_sentences, preproc_french_sentences, english_tokenizer, french_tokenizer)
Explanation: Prediction (IMPLEMENTATION)
End of explanation |
1,241 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Heat
In this example the laser-excitation of a sample Structure is shown.
It includes the actual absorption of the laser light as well as the transient temperature profile calculation.
Setup
Do all necessary imports and settings.
Step1: Structure
Refer to the structure-example for more details.
Step2: Initialize Heat
The Heat class requires a Structure object and a boolean force_recalc in order to overwrite previous simulation results.
These results are saved in the cache_dir when save_data is enabled.
Printing simulation messages can be en-/disabled using disp_messages and progress bars can be toggled using the boolean switch progress_bar.
Step3: Simple Excitation
In order to calculate the temperature of the sample after quasi-instantaneous (delta) photoexcitation the excitation must be set with the following parameters
Step4: Laser Absorption Profile
Here the difference in the spatial laser absorption profile is shown between the multilayer absorption algorithm and the Lambert-Beer law.
Note that Lambert-Beer does not include reflection of the incident light from the surface of the sample structure
Step5: Temperature Map
Step6: Heat Diffusion
In order to enable heat diffusion the boolean switch heat_diffusion must be True.
Step7: Heat Diffusion Parameters
For heat diffusion simulations various parameters for the underlying pdepe solver can be altered.
By default, the backend is set to scipy but can be switched to matlab.
Currently, there is no obvious reason to choose MATLAB over SciPy.
Depending on the backend either the ode_options or ode_options_matlab can be configured and are directly handed to the actual solver.
Please refer to the documentation of the actual backend and solver and the API documentation for more details.
The speed but also the result of the heat diffusion simulation strongly depends on the spatial grid handed to the solver.
By default, one spatial grid point is used for every Layer (AmorphousLayer or UnitCell) in the Structure.
The resulting temp_map will also always be interpolated in this spatial grid which is equivalent to the distance vector returned by S.get_distances_of_layers().
As the solver for the heat diffusion usually suffers from large gradients, e.g. of thermal properties or initial temperatures, additional spatial grid points are added by default only for internal calculations.
The number of additional points (should be an odd number, default is 11) is set by
Step8: The internally used spatial grid can be returned by
Step9: The internal spatial grid can also be given by hand, e.g. to realize logarithmic steps for rather large Structure
Step10: As already shown above, the heat diffusion simulation also supports top and bottom boundary conditions. They can have the types
Step11: Multipulse Excitation
As already stated above, multiple pulses of variable fluence, pulse width, and delay are also possible.
The heat diffusion simulation automatically splits the calculation in parts with and without excitation and adjusts the initial temporal step width according to the pulse width.
Hence the solver does not miss any excitation pulses when adjusting its temporal step size.
The temporal laser pulse profile is always assumed to be Gaussian and the pulse width must be given as FWHM
Step12: $N$-Temperature Model
The heat diffusion is also capable of simulating an N-temperature model which is often applied to empirically simulate the energy flow between electrons, phonons, and spins.
In order to run the NTM all thermo-elastic properties must be given as a list of N elements corresponding to different sub-systems.
The actual external laser-excitation is always set to happen within the first sub-system, which is usually the electron-system.
In addition the sub_system_coupling must be provided in order to allow for energy-flow between the sub-systems.
sub_system_coupling is often set to a constant prefactor multiplied with the difference between the electronic and phononic temperatures, as in the example below.
For sufficiently high temperatures, this prefactor also depends on temperature. See here for an overview.
In case the thermo-elastic parameters are provided as functions of the temperature $T$, the sub_system_coupling requires the temperature T to be a vector of all sub-system-temperatures which can be accessed in the function string via the underscore-notation. The heat_capacity and lin_therm_exp instead require the temperature T to be a scalar of only the current sub-system-temperature. For the therm_cond both options are available.
Step13: As no new Structure is built, the num_sub_systems must be updated by hand.
Otherwise this happens automatically.
Step14: Set the excitation conditions | Python Code:
import udkm1Dsim as ud
u = ud.u # import the pint unit registry from udkm1Dsim
import scipy.constants as constants
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
u.setup_matplotlib() # use matplotlib with pint units
Explanation: Heat
In this example the laser-excitation of a sample Structure is shown.
It includes the actual absorption of the laser light as well as the transient temperature profile calculation.
Setup
Do all necessary imports and settings.
End of explanation
O = ud.Atom('O')
Ti = ud.Atom('Ti')
Sr = ud.Atom('Sr')
Ru = ud.Atom('Ru')
Pb = ud.Atom('Pb')
Zr = ud.Atom('Zr')
# c-axis lattice constants of the two layers
c_STO_sub = 3.905*u.angstrom
c_SRO = 3.94897*u.angstrom
# sound velocities [nm/ps] of the two layers
sv_SRO = 6.312*u.nm/u.ps
sv_STO = 7.800*u.nm/u.ps
# SRO layer
prop_SRO = {}
prop_SRO['a_axis'] = c_STO_sub # aAxis
prop_SRO['b_axis'] = c_STO_sub # bAxis
prop_SRO['deb_Wal_Fac'] = 0 # Debye-Waller factor
prop_SRO['sound_vel'] = sv_SRO # sound velocity
prop_SRO['opt_ref_index'] = 2.44+4.32j
prop_SRO['therm_cond'] = 5.72*u.W/(u.m*u.K) # heat conductivity
prop_SRO['lin_therm_exp'] = 1.03e-5 # linear thermal expansion
prop_SRO['heat_capacity'] = '455.2 + 0.112*T - 2.1935e6/T**2' # heat capacity [J/kg K]
SRO = ud.UnitCell('SRO', 'Strontium Ruthenate', c_SRO, **prop_SRO)
SRO.add_atom(O, 0)
SRO.add_atom(Sr, 0)
SRO.add_atom(O, 0.5)
SRO.add_atom(O, 0.5)
SRO.add_atom(Ru, 0.5)
# STO substrate
prop_STO_sub = {}
prop_STO_sub['a_axis'] = c_STO_sub # aAxis
prop_STO_sub['b_axis'] = c_STO_sub # bAxis
prop_STO_sub['deb_Wal_Fac'] = 0 # Debye-Waller factor
prop_STO_sub['sound_vel'] = sv_STO # sound velocity
prop_STO_sub['opt_ref_index'] = 2.1+0j
prop_STO_sub['therm_cond'] = 12*u.W/(u.m*u.K) # heat conductivity
prop_STO_sub['lin_therm_exp'] = 1e-5 # linear thermal expansion
prop_STO_sub['heat_capacity'] = '733.73 + 0.0248*T - 6.531e6/T**2' # heat capacity [J/kg K]
STO_sub = ud.UnitCell('STOsub', 'Strontium Titanate Substrate', c_STO_sub, **prop_STO_sub)
STO_sub.add_atom(O, 0)
STO_sub.add_atom(Sr, 0)
STO_sub.add_atom(O, 0.5)
STO_sub.add_atom(O, 0.5)
STO_sub.add_atom(Ti, 0.5)
S = ud.Structure('Single Layer')
S.add_sub_structure(SRO, 100) # add 100 layers of SRO to sample
S.add_sub_structure(STO_sub, 200) # add 200 layers of STO substrate
Explanation: Structure
Refer to the structure-example for more details.
End of explanation
h = ud.Heat(S, True)
h.save_data = False
h.disp_messages = True
print(h)
Explanation: Initialize Heat
The Heat class requires a Structure object and a boolean force_recalc in order to overwrite previous simulation results.
These results are saved in the cache_dir when save_data is enabled.
Printing simulation messages can be en-/disabled using disp_messages and progress bars can be toggled using the boolean switch progress_bar.
End of explanation
h.excitation = {'fluence': [5]*u.mJ/u.cm**2,
'delay_pump': [0]*u.ps,
'pulse_width': [0]*u.ps,
'multilayer_absorption': True,
'wavelength': 800*u.nm,
'theta': 45*u.deg}
# when calculating the laser absorption profile using Lamber-Beer-law
# the opt_pen_depth must be set manually or calculated from the refractive index
SRO.set_opt_pen_depth_from_ref_index(800*u.nm)
STO_sub.set_opt_pen_depth_from_ref_index(800*u.nm)
# temporal and spatial grid
delays = np.r_[-10:200:0.1]*u.ps
_, _, distances = S.get_distances_of_layers()
Explanation: Simple Excitation
In order to calculate the temperature of the sample after quasi-instantaneous (delta) photoexcitation the excitation must be set with the following parameters:
* fluence
* delay_pump
* pulse_width
* multilayer_absorption
* wavelength
* theta
The angle of incidence theta does change the footprint of the excitation on the sample for any type excitation.
The wavelength and theta angle of the excitation are also relevant if multilayer_absorption = True.
Otherwise the Lambert_Beer-law is used and its absorption profile is independent of wavelength and theta.
Note: the fluence, delay_pump, and pulse_width must be given as array or list.
The simulation requires also a delay array as temporal grid as well as an initial temperature init_temp.
The latter can be either a scalar, which is then the constant temperature of the whole sample structure, or an array of initial temperatures, one for each single layer in the structure.
End of explanation
plt.figure()
dAdz, _, _, _ = h.get_multilayers_absorption_profile()
plt.plot(distances.to('nm'), dAdz, label='multilayer')
dAdz = h.get_Lambert_Beer_absorption_profile()
plt.plot(distances.to('nm'), dAdz, label='Lambert-Beer')
plt.legend()
plt.xlabel('Distance [nm]')
plt.ylabel('Differential Absorption')
plt.title('Laser Absorption Profile')
plt.show()
Explanation: Laser Absorption Profile
Here the difference in the spatial laser absorption profile is shown between the multilayer absorption algorithm and the Lambert-Beer law.
Note that Lambert-Beer does not include reflection of the incident light from the surface of the sample structure:
End of explanation
temp_map, delta_temp = h.get_temp_map(delays, 300*u.K)
plt.figure(figsize=[6, 8])
plt.subplot(2, 1, 1)
plt.plot(distances.to('nm').magnitude, temp_map[101, :])
plt.xlim([0, distances.to('nm').magnitude[-1]])
plt.xlabel('Distance [nm]')
plt.ylabel('Temperature [K]')
plt.title('Temperature Profile')
plt.subplot(2, 1, 2)
plt.pcolormesh(distances.to('nm').magnitude, delays.to('ps').magnitude, temp_map, shading='auto')
plt.colorbar()
plt.xlabel('Distance [nm]')
plt.ylabel('Delay [ps]')
plt.title('Temperature Map')
plt.tight_layout()
plt.show()
Explanation: Temperature Map
End of explanation
# enable heat diffusion
h.heat_diffusion = True
# set the boundary conditions
h.boundary_conditions = {'top_type': 'isolator', 'bottom_type': 'isolator'}
# The resulting temperature profile is calculated in one line:
temp_map, delta_temp = h.get_temp_map(delays, 300*u.K)
plt.figure(figsize=[6, 8])
plt.subplot(2, 1, 1)
plt.plot(distances.to('nm').magnitude, temp_map[101, :], label=np.round(delays[101]))
plt.plot(distances.to('nm').magnitude, temp_map[501, :], label=np.round(delays[501]))
plt.plot(distances.to('nm').magnitude, temp_map[-1, :], label=np.round(delays[-1]))
plt.xlim([0, distances.to('nm').magnitude[-1]])
plt.xlabel('Distance [nm]')
plt.ylabel('Temperature [K]')
plt.legend()
plt.title('Temperature Profile with Heat Diffusion')
plt.subplot(2, 1, 2)
plt.pcolormesh(distances.to('nm').magnitude, delays.to('ps').magnitude, temp_map, shading='auto')
plt.colorbar()
plt.xlabel('Distance [nm]')
plt.ylabel('Delay [ps]')
plt.title('Temperature Map with Heat Diffusion')
plt.tight_layout()
plt.show()
Explanation: Heat Diffusion
In order to enable heat diffusion the boolean switch heat_diffusion must be True.
End of explanation
h.intp_at_interface = 11
Explanation: Heat Diffusion Parameters
For heat diffusion simulations various parameters for the underlying pdepe solver can be altered.
By default, the backend is set to scipy but can be switched to matlab.
Currently, there is no obvious reason to choose MATLAB over SciPy.
Depending on the backend either the ode_options or ode_options_matlab can be configured and are directly handed to the actual solver.
Please refer to the documentation of the actual backend and solver and the API documentation for more details.
The speed but also the result of the heat diffusion simulation strongly depends on the spatial grid handed to the solver.
By default, one spatial grid point is used for every Layer (AmorphousLayer or UnitCell) in the Structure.
The resulting temp_map will also always be interpolated in this spatial grid which is equivalent to the distance vector returned by S.get_distances_of_layers().
As the solver for the heat diffusion usually suffers from large gradients, e.g. of thermal properties or initial temperatures, additional spatial grid points are added by default only for internal calculations.
The number of additional points (should be an odd number, default is 11) is set by:
End of explanation
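Since the options mentioned above are forwarded to the backend solver, it can be useful to inspect them before changing anything. This is a hedged sketch: it only assumes that ode_options is exposed as a plain attribute of the Heat instance, as the description suggests; which keys are accepted depends on the chosen backend, so consult the udkm1Dsim API documentation before modifying entries.
# inspect the solver options that will be handed to the backend solver
print(h.ode_options)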
dist_interp, original_indicies = S.interp_distance_at_interfaces(h.intp_at_interface)
Explanation: The internally used spatial grid can be returned by:
End of explanation
h.distances = np.linspace(0, distances.magnitude[-1], 100)*u.m
Explanation: The internal spatial grid can also be given by hand, e.g. to realize logarithmic steps for rather large Structure:
End of explanation
h.boundary_conditions = {'top_type': 'temperature', 'top_value': 500*u.K,
'bottom_type': 'flux', 'bottom_value': 5e11*u.W/u.m**2}
print(h)
# The resulting temperature profile is calculated in one line:
temp_map, delta_temp = h.get_temp_map(delays, 300*u.K)
plt.figure(figsize=[6, 8])
plt.subplot(2, 1, 1)
plt.plot(distances.to('nm').magnitude, temp_map[101, :], label=np.round(delays[101]))
plt.plot(distances.to('nm').magnitude, temp_map[501, :], label=np.round(delays[501]))
plt.plot(distances.to('nm').magnitude, temp_map[-1, :], label=np.round(delays[-1]))
plt.xlim([0, distances.to('nm').magnitude[-1]])
plt.xlabel('Distance [nm]')
plt.ylabel('Temperature [K]')
plt.legend()
plt.title('Temperature Profile with Heat Diffusion and BC')
plt.subplot(2, 1, 2)
plt.pcolormesh(distances.to('nm').magnitude, delays.to('ps').magnitude, temp_map, shading='auto')
plt.colorbar()
plt.xlabel('Distance [nm]')
plt.ylabel('Delay [ps]')
plt.title('Temperature Map with Heat Diffusion and BC')
plt.tight_layout()
plt.show()
Explanation: As already shown above, the heat diffusion simulation also supports top and bottom boundary conditions. They can have the types:
* isolator
* temperature
* flux
For the latter types, a value must also be provided:
End of explanation
h.excitation = {'fluence': [5, 5, 5, 5]*u.mJ/u.cm**2,
'delay_pump': [0, 10, 20, 20.5]*u.ps,
'pulse_width': [0.1, 0.1, 0.1, 0.5]*u.ps,
'multilayer_absorption': True,
'wavelength': 800*u.nm,
'theta': 45*u.deg}
h.boundary_conditions = {'top_type': 'isolator', 'bottom_type': 'isolator'}
# The resulting temperature profile is calculated in one line:
temp_map, delta_temp = h.get_temp_map(delays, 300*u.K)
plt.figure(figsize=[6, 8])
plt.subplot(2, 1, 1)
plt.plot(distances.to('nm').magnitude, temp_map[101, :], label=np.round(delays[101]))
plt.plot(distances.to('nm').magnitude, temp_map[201, :], label=np.round(delays[201]))
plt.plot(distances.to('nm').magnitude, temp_map[301, :], label=np.round(delays[301]))
plt.plot(distances.to('nm').magnitude, temp_map[-1, :], label=np.round(delays[-1]))
plt.xlim([0, distances.to('nm').magnitude[-1]])
plt.xlabel('Distance [nm]')
plt.ylabel('Temperature [K]')
plt.legend()
plt.title('Temperature Profile Multipulse')
plt.subplot(2, 1, 2)
plt.pcolormesh(distances.to('nm').magnitude, delays.to('ps').magnitude, temp_map, shading='auto')
plt.colorbar()
plt.xlabel('Distance [nm]')
plt.ylabel('Delay [ps]')
plt.title('Temperature Map Multipulse')
plt.tight_layout()
plt.show()
Explanation: Multipulse Excitation
As already stated above, multiple pulses of variable fluence, pulse width, and delay are also possible.
The heat diffusion simulation automatically splits the calculation in parts with and without excitation and adjusts the initial temporal step width according to the pulse width.
Hence the solver does not miss any excitation pulses when adjusting its temporal step size.
The temporal laser pulse profile is always assumed to be Gaussian and the pulse width must be given as FWHM:
End of explanation
# update the relevant thermo-elastic properties of the layers in the sample structure
SRO.therm_cond = [0,
5.72*u.W/(u.m*u.K)]
SRO.lin_therm_exp = [1.03e-5,
1.03e-5]
SRO.heat_capacity = ['0.112*T',
'455.2 - 2.1935e6/T**2']
SRO.sub_system_coupling = ['5e17*(T_1-T_0)',
'5e17*(T_0-T_1)']
STO_sub.therm_cond = [0,
12*u.W/(u.m*u.K)]
STO_sub.lin_therm_exp = [1e-5,
1e-5]
STO_sub.heat_capacity = ['0.0248*T',
'733.73 - 6.531e6/T**2']
STO_sub.sub_system_coupling = ['5e17*(T_1-T_0)',
'5e17*(T_0-T_1)']
Explanation: $N$-Temperature Model
The heat diffusion is also capable of simulating an N-temperature model which is often applied to empirically simulate the energy flow between electrons, phonons, and spins.
In order to run the NTM all thermo-elastic properties must be given as a list of N elements corresponding to different sub-systems.
The actual external laser-excitation is always set to happen within the first sub-system, which is usually the electron-system.
In addition the sub_system_coupling must be provided in order to allow for energy-flow between the sub-systems.
sub_system_coupling is often set to a constant prefactor multiplied with the difference between the electronic and phononic temperatures, as in the example below.
For sufficiently high temperatures, this prefactor also depends on temperature. See here for an overview.
In case the thermo-elastic parameters are provided as functions of the temperature $T$, the sub_system_coupling requires the temperature T to be a vector of all sub-system-temperatures which can be accessed in the function string via the underscore-notation. The heat_capacity and lin_therm_exp instead require the temperature T to be a scalar of only the current sub-system-temperature. For the therm_cond both options are available.
End of explanation
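For orientation, a two-temperature model of this kind solves coupled equations of the standard form (the coupling term mirrors the '5e17*(T_1-T_0)' strings above):
$$ C_e(T_e)\,\frac{\partial T_e}{\partial t} = \frac{\partial}{\partial z}\left(k_e\,\frac{\partial T_e}{\partial z}\right) + g\,(T_{ph}-T_e) + S(z,t) $$
$$ C_{ph}(T_{ph})\,\frac{\partial T_{ph}}{\partial t} = \frac{\partial}{\partial z}\left(k_{ph}\,\frac{\partial T_{ph}}{\partial z}\right) + g\,(T_e-T_{ph}) $$
where the laser source term $S(z,t)$ acts on the electron system only and $g = 5\times10^{17}$ in the example above (units of $\mathrm{W\,m^{-3}\,K^{-1}}$ assumed).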
S.num_sub_systems = 2
Explanation: As no new Structure is built, num_sub_systems must be updated by hand.
Otherwise this happens automatically.
End of explanation
h.excitation = {'fluence': [5]*u.mJ/u.cm**2,
'delay_pump': [0]*u.ps,
'pulse_width': [0.25]*u.ps,
'multilayer_absorption': True,
'wavelength': 800*u.nm,
'theta': 45*u.deg}
h.boundary_conditions = {'top_type': 'isolator', 'bottom_type': 'isolator'}
delays = np.r_[-5:15:0.01]*u.ps
# The resulting temperature profile is calculated in one line:
temp_map, delta_temp = h.get_temp_map(delays, 300*u.K)
plt.figure(figsize=[6, 8])
plt.subplot(2, 1, 1)
plt.pcolormesh(distances.to('nm').magnitude, delays.to('ps').magnitude, temp_map[:, :, 0], shading='auto')
plt.colorbar()
plt.xlabel('Distance [nm]')
plt.ylabel('Delay [ps]')
plt.title('Temperature Map Electrons')
plt.subplot(2, 1, 2)
plt.pcolormesh(distances.to('nm').magnitude, delays.to('ps').magnitude, temp_map[:, :, 1], shading='auto')
plt.colorbar()
plt.xlabel('Distance [nm]')
plt.ylabel('Delay [ps]')
plt.title('Temperature Map Phonons')
plt.tight_layout()
plt.show()
plt.figure()
plt.plot(delays.to('ps'), np.mean(temp_map[:, S.get_all_positions_per_unique_layer()['SRO'], 0], 1), label='SRO electrons')
plt.plot(delays.to('ps'), np.mean(temp_map[:, S.get_all_positions_per_unique_layer()['SRO'], 1], 1), label='SRO phonons')
plt.ylabel('Temperature [K]')
plt.xlabel('Delay [ps]')
plt.legend()
plt.title('Temperature Electrons vs. Phonons')
plt.show()
Explanation: Set the excitation conditions:
End of explanation |
1,242 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
7. Maximum Likelihood fit
Step1: We use the pancake dataset, sampled at 300 random locations to produce a quite dense sample.
Step2: First, the variogram is calculated. We use Scott's rule to determine the number of lag classes, explicitly set Trust-Region Reflective as the fitting method (although it is the default) and limit the distance matrix to 70% of the maximum separating distance.
Additionally, we capture the processing time for the whole variogram estimation. Note that this also includes the calculation of the distance matrix, which is a step shared by both approaches.
Step3: Maximum likelihood using SciKit-GStat
Since version 0.6.12, SciKit-GStat implements a utility function factory which takes a Variogram instance and builds up a (negative) maximum likelihood function for the associated sample, distance matrix and model type. The used function is defined in eq. 14 of Lark (2000). Eq. 16 from the same publication was adapted to all theoretical models available in SciKit-GStat, with the exception of the harmonized model, which does not require a fit.
First step to perform the fitting is to make initial guesses for the parameters. Here, we take the mean separating distance for the effective range, the sample variance for the sill and 10% of the sample variance for the nugget. To improve performance and runtime, we also define a boundary to restrict the parameter space.
Step4: The next step is to pass the Variogram instance to the function factory. We find optimal parameters by minimizing the returned negative log-likelihood function. Please refer to SciPy's minimize function to learn about its arguments. The returned function from the utility suite is built with SciPy in mind: the function signature complies with SciPy's interface and can thus be passed directly to the minimize function.
Here, we pass the initial guess, the bounds and set the solver method to SLSQP, a suitable solver for bounded optimization.
Step5: Apply the optimized parameters. For comparison, the three method-of-moment methods from SciKit-GStat are applied as well. Note that the used sample is quite dense; thus we do not expect a difference between the MoM-based procedures. They should all find the same parameters.
Step6: Make the result plot
Step7: Build from scratch
SciKit-GStat's utility suite does only implement the maximum likelihood approach as published by Lark (2000). There are no settings to adjust the returned function, nor use other implementations. If you need to use another approach, the idea behind the implementation is demonstrated below for the spherical variogram model. This solution is only build on SciPy and does not need SciKit-GStat, in case the distance matrix is build externally. | Python Code:
import skgstat as skg
from skgstat.util.likelihood import get_likelihood
import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import minimize
import warnings
from time import time
import matplotlib.pyplot as plt
warnings.filterwarnings('ignore')
Explanation: 7. Maximum Likelihood fit
End of explanation
# use the same dataset as used in GMD paper
c, v = skg.data.pancake(N=300, seed=42).get('sample')
Explanation: We use the pancake dataset, sampled at 300 random locations to produce a quite dense sample.
End of explanation
t1 = time()
V = skg.Variogram(c,v, bin_func='scott', maxlag=0.7, fit_func='trf')
t2 = time() # get time for full analysis, including fit
print(f"Processing time: {round((t2 - t1) * 1000)} ms")
print(V)
fig = V.plot()
Explanation: First, the variogram is calculated. We use Scott's rule to determine the number of lag classes, explicitly set Trust-Region Reflective as the fitting method (although it is the default) and limit the distance matrix to 70% of the maximum separating distance.
Additionally, we capture the processing time for the whole variogram estimation. Note that this also includes the calculation of the distance matrix, which is a step shared by both approaches.
End of explanation
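As background (not taken from the tutorial itself): the classic Scott rule derives a class width from the sample spread,
$$ h = \frac{3.49\,\hat{\sigma}}{\sqrt[3]{n}} $$
with $\hat{\sigma}$ the standard deviation and $n$ the number of observations; applied to the separating distances, this is how the number of lag classes used above is obtained.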
# base initial guess on separating distance and sample variance
sep_mean = V.distance.mean()
sam_var = V.values.var()
print(f"Mean sep. distance: {sep_mean.round(1)} sample variance: {sam_var.round(1)}")
# create initial guess
# mean dist. variance 5% of variance
p0 = np.array([sep_mean, sam_var, 0.1 * sam_var])
print('initial guess: ', p0.round(1))
# create the bounds to restrict optimization
bounds = [[0, V.bins[-1]], [0, 3*sam_var], [0, 2.9*sam_var]]
print('bounds: ', bounds)
Explanation: Maximum likelihood using SciKit-GStat
Since version 0.6.12, SciKit-GStat implements a utility function factory which takes a Variogram instance and builds up a (negative) maximum likelihood function for the associated sample, distance matrix and model type. The used function is defined in eq. 14 of Lark (2000). Eq. 16 from the same publication was adapted to all theoretical models available in SciKit-GStat, with the exception of the harmonized model, which does not require a fit.
First step to perform the fitting is to make initial guesses for the parameters. Here, we take the mean separating distance for the effective range, the sample variance for the sill and 10% of the sample variance for the nugget. To improve performance and runtime, we also define a boundary to restrict the parameter space.
End of explanation
# load the likelihood function for this variogram
likelihood = get_likelihood(V)
# minimize the likelihood function
t3 = time()
res = minimize(likelihood, p0, bounds=bounds, method='SLSQP')
t4 = time()
print(f"Processing time {np.round(t4 - t3, 2)} seconds")
print('initial guess: ', p0.round(1))
print('optimal parameters:', res.x.round(1))
Explanation: The next step is to pass the Variogram instance to the function factory. We find optimal parameters by minimizing the returned negative log-likelihood function. Please refer to SciPy's minimize function to learn about its arguments. The returned function from the utility suite is built with SciPy in mind: the function signature complies with SciPy's interface and can thus be passed directly to the minimize function.
Here, we pass the initial guess, the bounds and set the solver method to SLSQP, a suitable solver for bounded optimization.
End of explanation
# use 100 steps
x = np.linspace(0, V.bins[-1], 100)
# apply the maximum likelihood fit parameters
y_ml = V.model(x, *res.x)
# apply the trf fit
y_trf = V.fitted_model(x)
# apply Levenberg-Marquardt
V.fit_method = 'lm'
y_lm = V.fitted_model(x)
# apply parameter ml
V.fit_method = 'ml'
y_pml = V.fitted_model(x)
# check if the method-of-moment fits are different
print('Trf and Levenberg-Marquardt identical: ', all(y_lm - y_trf < 0.1))
print('Trf and parameter ML identical: ', all(y_pml - y_trf < 0.1))
Explanation: Apply the optimized parameters. For comparison, the three method-of-moment methods from SciKit-GStat are applied as well. Note that the used sample is quite dense; thus we do not expect a difference between the MoM-based procedures. They should all find the same parameters.
End of explanation
plt.plot(V.bins, V.experimental, '.b', label='experimental')
plt.plot(x, y_ml, '-g', label='ML fit (Lark, 2000)')
plt.plot(x, y_trf, '-b', label='SciKit-GStat TRF')
plt.legend(loc='lower right')
#plt.gcf().savefig('compare.pdf', dpi=300)
Explanation: Make the result plot
End of explanation
from scipy.spatial.distance import squareform
from scipy.linalg import inv, det
# define the spherical model only dependent on the range
def f(h, a):
if h >= a:
return 1.
elif h == 0:
return 0.
return (3*h) / (2*a) - 0.5 * (h / a)**3
# create the autocovariance matrix
def get_A(r, s, b, dists):
a = np.array([f(d, r) for d in dists])
A = squareform((s / (s + b)) * (1 - a))
np.fill_diagonal(A, 1)
return A
# likelihood function
def like(r, s, b, z, dists):
A = get_A(r, s, b, dists)
n = len(A)
A_inv = inv(A)
ones = np.ones((n, 1))
z = z.reshape(n, -1)
m = inv(ones.T @ A_inv @ ones) @ (ones.T @ A_inv @ z)
b = np.log((z - m).T @ A_inv @ (z - m))
d = np.log(det(A))
if d == -np.inf:
print('invalid det(A)')
return np.inf
loglike = (n / 2)*np.log(2*np.pi) + (n / 2) - (n / 2)* np.log(n) + 0.5* d + (n / 2) * b
return loglike.flatten()[0]
from scipy.optimize import minimize
from scipy.spatial.distance import pdist
# c and v are coordinate and values array from the data source
z = np.array(v)
# in case you use 2D coordinates, without caching and euclidean metric, skgstat is using pdist under the hood
dists = pdist(c)
fun = lambda x, *args: like(x[0], x[1], x[2], z=z, dists=dists)
t3 = time()
res = minimize(fun, p0, bounds=bounds)
t4 = time()
print(f"Processing time {np.round(t4 - t3, 2)} seconds")
print('initial guess: ', p0.round(1))
print('optimal parameters:', res.x.round(1))
import matplotlib.pyplot as plt
mod = lambda h: f(h, res.x[0]) * res.x[1] + res.x[2]
x = np.linspace(0, 450, 100)
y = list(map(mod, x))
y2 = V.fitted_model(x)
plt.plot(V.bins, V.experimental, '.b', label='experimental')
plt.plot(x, y, '-g', label='ML fit (Lark, 2000)')
plt.plot(x, y2, '-b', label='SciKit-GStat default fit')
plt.legend(loc='lower right')
plt.gcf().savefig('compare.pdf', dpi=300)
Explanation: Build from scratch
SciKit-GStat's utility suite only implements the maximum likelihood approach as published by Lark (2000). There are no settings to adjust the returned function, nor to use other implementations. If you need to use another approach, the idea behind the implementation is demonstrated below for the spherical variogram model. This solution is built only on SciPy and does not need SciKit-GStat, in case the distance matrix is built externally.
End of explanation |
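For reference, the quantity minimized by the like function above is
$$ -\ell(r, s, b) = \frac{n}{2}\ln(2\pi) + \frac{n}{2} - \frac{n}{2}\ln n + \frac{1}{2}\ln\lvert A \rvert + \frac{n}{2}\ln\big[(z - m\mathbf{1})^{\top} A^{-1} (z - m\mathbf{1})\big] $$
where $A$ is the autocorrelation matrix assembled by get_A and $m$ is the generalized-least-squares mean estimated inside the function.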
1,243 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
hypothesis
Test the hypothesis about alcohol consumption and life expectancy. Specifically, is the quantity of alcohol consumed per year in a country (in liters) related to life expectancy or, in hypothesis testing terms, are alcohol consumption and life expectancy independent or dependent? For this analysis, I'm going to use a categorical explanatory variable with five levels, with the following categorical values
Step1: Examining these column percents for those with life expectancy
(greater or less than the mean), we see that as alcohol consumption increases from 5
to 15 liters per year, life expectancy also increases.
Step2: Analyzing this graph without further analysis (such as the frequencies) may
lead to an error. It seems to show that most countries with a life expectancy greater than the mean are those whose alcohol consumption is in the range between 20 and 25 liters. To help solve this issue, I used the seaborn countplot function, which is "A special case for the bar plot is when you want to show the number of observations in each category rather than computing a statistic for a second variable. This is similar to a histogram over a categorical, rather than quantitative, variable". See here.
On this graph it is easy to see that only 1 observation was realized in the column >=20 <=25.
Step3: Post hoc Bonferroni Adjustment
Looking at the significant P value, we will accept the alternate Hypothesis, where not all life expectancy rates are equal across alcohol consumption categories. If my explanatory variable had only two levels, I could interpret the two corresponding column percentages and be able to say wich group had a significantly higher rate of life expectancy. But my explanatory variable has five categories. So I know that not all are equal. But I don't know wich are different and wich are not. | Python Code:
''' Categorical explanatory variable with five levels '''
alcohol_map = {1: '>=0 <5', 2: '>=5 <10', 3: '>=10 <15', 4: '>=15 <20', 5: '>=20 <25'}
data2['alcohol'] = pd.cut(data1.alcohol,[0,5,10,15,20,25],
labels=[i for i in alcohol_map.values()])
data2["alcohol"] = data2["alcohol"].astype('category')
# Mean, Min and Max of life expectancy
meal = data2.life.mean()
minl = data2.life.min()
maxl = data2.life.max()
print (tabulate([[np.floor(minl), meal, np.ceil(maxl)]],
tablefmt="fancy_grid", headers=['Min', 'Mean', 'Max']))
# Create categorical response variable life (Two levels based on mean)
life_map = {1: '<=69', 2: '>69'}
data2['life'] = pd.cut(data1.life,[np.floor(minl),meal,np.ceil(maxl)], labels=[i for i in life_map.values()])
data2["life"] = data2["life"].astype('category')
# contingency table of observed counts
ct1=pd.crosstab(data2['life'], data2['alcohol'])
headers_alcohol = [i for i in ct1.keys()]
headers_alcohol.insert(0,'life/alcool')
print (tabulate(ct1,tablefmt="fancy_grid",headers=headers_alcohol))
Explanation: hypothesis
Test the hypothesis about alcohol consumption and life expectancy. Specifically, is the quantity of alcohol consumed per year in a country (in liters) related to life expectancy or, in hypothesis testing terms, are alcohol consumption and life expectancy independent or dependent? For this analysis, I'm going to use a categorical explanatory variable with five levels, with the following categorical values: Alcohol consumption (per year, in liters) from 0 to 5, from 5 to 10, from 10 to 15, from 15 to 20 and from 20 to 25
My response variable is categorical with 2 levels. That is, life expectancy greater than or less than the mean of all countries in the GapMinder data set.
End of explanation
'''
Generate the column percentages which show the percent of individuals with each level of
life expectancy within each alcohol consumption level.
'''
colsum = ct1.sum(axis=0)
colpct = ct1/colsum
headers = [i for i in colpct.keys()]
headers.insert(0,'Life/Alcohol')
print (tabulate(colpct, tablefmt="fancy_grid", headers=headers, floatfmt=".3f"))
'''
compute the expected frequencies for the table based on the marginal sums
under the assumption that the groups associated with each dimension are independent.
'''
print ('expected frequencies.')
print (tabulate(scipy.stats.contingency.expected_freq(ct1),
tablefmt="fancy_grid", floatfmt=".3f"))
'''
Graph the percent of the population with life expectancy greater
than the mean (69.14) within each alcohol consumption category
'''
# Create categorical response variable life (numeric) (Two levels based on mean)
data2['life_n'] = data2.life
life_map = {1: 0, 2: 1}
data2['life_n'] = pd.cut(data1.life,[np.floor(minl),meal,np.ceil(maxl)],
labels=[i for i in life_map.values()])
#data2["life_n"] = data2["life_n"].astype('category')
data2.life_n = pd.to_numeric(data2.life_n)
seaborn.factorplot(x='alcohol', y='life_n', data=data2, kind='bar', ci=None)
plt.xlabel('Alcohol consumption')
plt.ylabel('Life Expectancy')
Explanation: Examining these column percents for those with life expectancy
(greater or less than the mean), we see that as alcohol consumption increases from 5
to 15 liters per year, life expectancy also increases.
End of explanation
seaborn.countplot(x='alcohol', data=data2, palette='Greens_d')
'''Chi-square calculations, which include the chi-square value, the associated
p-value, and a table of expected counts that are used in these calculations.'''
cs1 = scipy.stats.chi2_contingency(ct1)
results = OrderedDict()
results['chi-square'] = cs1[0]
results['p-value'] = cs1[1]
results['df'] = cs1[2]
print (tabulate([results.values()], tablefmt="fancy_grid",
headers=[i for i in results.keys()]))
print ('\nThe expected frequencies, based on the marginal sums of the table.')
print (tabulate(cs1[3]))
Explanation: Analyzing this graph without further analysis (such as the frequencies) may
lead to an error. It seems to show that most countries with a life expectancy greater than the mean are those whose alcohol consumption is in the range between 20 and 25 liters. To help solve this issue, I used the seaborn countplot function, which is "A special case for the bar plot is when you want to show the number of observations in each category rather than computing a statistic for a second variable. This is similar to a histogram over a categorical, rather than quantitative, variable". See here.
On this graph it is easy to see that only 1 observation was realized in the column >=20 <=25.
End of explanation
'''
Post hoc Bonferroni Adjustment
With the Bonferroni adjustment, the p-value threshold is obtained by dividing 0.05 by
the number of comparisons that we plan to make
'''
p_for_reject_h0 = .05/10
pairs = []
pairs.append((('>=0 <5','>=5 <10'),[0,5,5,10]))
pairs.append((('>=0 <5','>=10 <15'),[0,5,10,15]))
pairs.append((('>=0 <5','>=15 <20'),[0,5,15,20]))
pairs.append((('>=0 <5','>=20 <25'),[0,5,20,25]))
pairs.append((('>=5 <10','>=10 <15'),[5,10,10,15]))
pairs.append((('>=5 <10','>=15 <20'),[5,10,15,20]))
pairs.append((('>=5 <10','>=20 <25'),[5,10,20,25]))
pairs.append((('>=10 <15','>=15 <20'),[10,15,15,20]))
pairs.append((('>=10 <15','>=20 <25'),[10,15,20,25]))
pairs.append((('>=15 <20','>=20 <25'),[15,20,20,25]))
data_pairs = []
results=[]
for pair in pairs:
data_pair = data2[ (((data1.alcohol>pair[1][0]) & (data1.alcohol<=pair[1][1])) |
((data1.alcohol>pair[1][2]) & (data1.alcohol<=pair[1][3])))]
ct0=pd.crosstab( data_pair['life'], data_pair['alcohol'])
ct1 = ct0[ [pair[0][0],pair[0][1]] ]
cs0 = scipy.stats.chi2_contingency(ct1)
#print (ct0)
# chi-square, p=value and degree of freedom
reject = 'yes' if (cs0[1] < .05/10) else 'no'
results.append((pair[0],cs0[0],cs0[1],cs0[2], reject))
print (tabulate(results, tablefmt="fancy_grid",
headers=['groups', 'chi-square', 'p-value', 'df', 'h0 reject'] ))
Explanation: Post hoc Bonferroni Adjustment
Looking at the significant p-value, we will accept the alternate hypothesis, where not all life expectancy rates are equal across alcohol consumption categories. If my explanatory variable had only two levels, I could interpret the two corresponding column percentages and be able to say which group had a significantly higher rate of life expectancy. But my explanatory variable has five categories, so I know that not all are equal, but I don't know which are different and which are not.
End of explanation |
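As a worked check of the adjustment used in the code above: with ten pairwise comparisons the Bonferroni-corrected threshold is
$$ \alpha_{adj} = \frac{0.05}{10} = 0.005 $$
so a pairwise chi-square test rejects the null hypothesis only when its p-value falls below 0.005, which is exactly the p_for_reject_h0 value computed above.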
1,244 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
```
Licensed under the Apache License, Version 2.0 (the "License");
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step1: (Optional) Connect to TPU
Set Colab
Step2: Import
Step3: ImageNet
Download checkpoints and load model
Step4: Inference on a single image
Step5: CIFAR
Inspect input pipeline of CIFAR
Use the cifar image augmentations
Step6: Running a single training step on CIFAR | Python Code:
![ -d nested-transformer ] || git clone --depth=1 https://github.com/google-research/nested-transformer
!cd nested-transformer && git pull
!pip install -qr nested-transformer/requirements.txt
Explanation: ```
Licensed under the Apache License, Version 2.0 (the "License");
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
Aggregating Nested Transformer
https://arxiv.org/pdf/2105.12723.pdf
This colab shows how to
check data pipelines
load pretrained checkpoints for inference
train on CIFAR for a few steps
Setup
End of explanation
USE_TPU = False
if USE_TPU:
# Google Colab "TPU" runtimes are configured in "2VM mode", meaning that JAX
# cannot see the TPUs because they're not directly attached. Instead we need to
# setup JAX to communicate with a second machine that has the TPUs attached.
import os
if 'google.colab' in str(get_ipython()) and 'COLAB_TPU_ADDR' in os.environ:
import jax
import jax.tools.colab_tpu
jax.tools.colab_tpu.setup_tpu()
print('Connected to TPU.')
else:
print('No TPU detected. Can be changed under "Runtime/Change runtime type".')
Explanation: (Optional) Connect to TPU
Set Colab: Runtime -> Change runtime type -> TPU
End of explanation
import sys
sys.path.append('./nested-transformer')
import os
import time
import flax
from flax import nn
import jax
import jax.numpy as jnp
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import functools
from absl import logging
from libml import input_pipeline
from libml import preprocess
from models import nest_net
import train
from configs import cifar_nest
from configs import imagenet_nest
# Hide any GPUs form TensorFlow. Otherwise TF might reserve memory and make
# it unavailable to JAX.
tf.config.experimental.set_visible_devices([], "GPU")
logging.set_verbosity(logging.INFO)
print("JAX devices:\n" + "\n".join([repr(d) for d in jax.devices()]))
print('Current folder content', os.listdir())
Explanation: Import
End of explanation
checkpoint_dir = "./nested-transformer/checkpoints/"
remote_checkpoint_dir = "gs://gresearch/nest-checkpoints/nest-b_imagenet"
print('List checkpoints: ')
!gsutil ls "$remote_checkpoint_dir"
print('Download checkpoints: ')
!mkdir -p "$checkpoint_dir"
!gsutil cp -r "$remote_checkpoint_dir" "$checkpoint_dir".
# Use checkpoint of host 0.
imagenet_config = imagenet_nest.get_config()
state_dict = train.checkpoint.load_state_dict(
os.path.join(checkpoint_dir, os.path.basename(remote_checkpoint_dir)))
variables = {
"params": state_dict["optimizer"]["target"],
}
variables.update(state_dict["model_state"])
model_cls = nest_net.create_model(imagenet_config.model_name, imagenet_config)
model = functools.partial(model_cls, num_classes=1000)
Explanation: ImageNet
Download checkpoints and load model
End of explanation
import PIL
!wget https://picsum.photos/id/237/200/300 -O dog.jpg
img = PIL.Image.open('dog.jpg')
img
def predict(image):
logits = model(train=False).apply(variables, image, mutable=False)
# Return predicted class and confidence.
return logits.argmax(axis=-1), nn.softmax(logits, axis=-1).max(axis=-1)
def _preprocess(image):
image = np.array(image.resize((224, 224))).astype(np.float32) / 255
mean = np.array(preprocess.IMAGENET_DEFAULT_MEAN).reshape(1, 1, 3)
std = np.array(preprocess.IMAGENET_DEFAULT_STD).reshape(1, 1, 3)
image = (image - mean) / std
return image[np.newaxis,...]
input = _preprocess(img)
cls, prob = predict(input)
print(f'ImageNet class id: {cls[0]}, prob: {prob[0]}')
Explanation: Inference on a single image
End of explanation
cifar_builder = tfds.builder("cifar10")
config = cifar_nest.get_config()
# Do not apply MixUp or CutMix operations since tfds.visualization.show_examples
# only accepts integer labels
config.mix = None
info, train_ds, eval_ds = input_pipeline.create_datasets(
config, jax.random.PRNGKey(0)
)
_ = tfds.visualization.show_examples(train_ds.unbatch().unbatch(), cifar_builder.info)
Explanation: CIFAR
Inspect input pipeline of CIFAR
Use the cifar image augmentations
End of explanation
config = cifar_nest.get_config()
config.num_train_steps = 1
config.num_eval_steps = 1
config.num_epochs = 1
config.warmup_epochs = 0
config.per_device_batch_size = 128 # Set to smaller batch size to avoid OOM
workdir = f"./nested-transformer/checkpoints/cifar_nest_colab_{int(time.time())}"
# Re-create datasets with possibly updated config.
info, train_ds, eval_ds = input_pipeline.create_datasets(
config, jax.random.PRNGKey(0)
)
train.train_and_evaluate(config, workdir)
Explanation: Running a single training step on CIFAR
End of explanation |
1,245 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Store the data to HDF5 file for rapid analysis and calculation
This tutorial discusses the analyses that can be performed using the dnaMD Python module included in the do_x3dna package. The tutorial is prepared using Jupyter Notebook, and this notebook tutorial file can be downloaded from this link.
Download the input files that are used in the tutorial from this link.
The following input files are required in this tutorial
L-BP_cdna.dat
L-BPS_cdna.dat
L-BPH_cdna.dat
HelAxis_cdna.dat
MGroove_cdna.dat
BackBoneCHiDihedrals_cdna.dat
These files should be present inside tutorial_data of the current/present working directory.
The Python APIs should be only used when do_x3dna is executed with -ref option.
Detailed documentation is provided here.
Importing Python Modules
numpy
Step1: Initializing DNA object with HDF5 file
DNA object is initialized by using the total number of base-pairs
To store the data in HDF5 file, just initialize the class with the filename as shown below. Here, we named the HDF5 file as cdna.h5.
NOTE
Step2: Store/Save data to HDF5 file
No extra step is necessary to store the data in the HDF5 file. Just read the parameters from the do_x3dna output files as described in the previous tutorials.
Local base-pair parameters as shown previously here.
Local base-step parameters as shown previously here.
Local helical base-step parameters as shown previously here.
Helical axis as shown previously here.
Major and minor grooves as shown previously here.
Backbone dihedrals as shown previously here.
Step3: Example to extract a parameter
As shown previously here, data can be extracted from the HDF5 file in the same way, as shown in the following.
Also, see that plot is similar.
Note that in this case, data is read from the HDF5 file, while in the previous tutorial, data was stored in memory (RAM).
Step4: Example to extract parameter as a function of time
As shown previously here, the same method (dnaMD.DNA.time_vs_parameter(...)) can be used to get parameter values as a function of time.
Also, see that plot is similar.
Note that in this case, data is read from the HDF5 file, while in the previous tutorial, data was stored in memory (RAM). | Python Code:
import os
import numpy as np
import matplotlib.pyplot as plt
import dnaMD
%matplotlib inline
try:
os.remove('cdna.h5')
except:
pass
Explanation: Store the data to HDF5 file for rapid analysis and calculation
This tutorial discuss the analyses that can be performed using the dnaMD Python module included in the do_x3dna package. The tutorial is prepared using Jupyter Notebook and this notebook tutorial file could be downloaded from this link.
Download the input files that are used in the tutorial from this link.
Two following input files are required in this tutorial
L-BP_cdna.dat
L-BPS_cdna.dat
L-BPH_cdna.dat
HelAxis_cdna.dat
MGroove_cdna.dat
BackBoneCHiDihedrals_cdna.dat
These files should be present inside tutorial_data of the current/present working directory.
The Python APIs should be only used when do_x3dna is executed with -ref option.
Detailed documentation is provided here.
Importing Python Modules
numpy: Required for the calculations involving large arrays
matplotlib: Required to plot the results
dnaMD: Python module to analyze DNA/RNA structures from the do_x3dna output files.
End of explanation
# Initialization
dna = dnaMD.DNA(60, filename='cdna.h5') #Initialization for 60 base-pairs DNA bound with the protein
Explanation: Initializing DNA object with HDF5 file
DNA object is initialized by using the total number of base-pairs
To store the data in HDF5 file, just initialize the class with the filename as shown below. Here, we named the HDF5 file as cdna.h5.
NOTE: Except initialization, all other methods and functions can be used in similar ways.
End of explanation
# Read Local base-pair parameters
dna.set_base_pair_parameters('tutorial_data/L-BP_cdna.dat', bp=[1, 60], bp_range=True)
# Read Local base-step parameters
dna.set_base_step_parameters('tutorial_data/L-BPS_cdna.dat', bp_step=[1, 59], parameters='All', step_range=True)
# Read Local helical base-step parameters
dna.set_base_step_parameters('tutorial_data/L-BPH_cdna.dat', bp_step=[1, 59], parameters='All', step_range=True, helical=True)
# Read Helical axis
dna.set_helical_axis('tutorial_data/HelAxis_cdna.dat')
# Generate global axis by interpolation (smoothening)
dna.generate_smooth_axis(smooth=500, spline=3, fill_point=6)
# Calculate curvature and tangent along global helical axis
dna.calculate_curvature_tangent(store_tangent=True)
# Major and minor grooves
parameters = [ 'minor groove', 'minor groove refined', 'major groove', 'major groove refined' ]
dna.set_major_minor_groove('tutorial_data/MGroove_cdna.dat', bp_step=[1, 59], parameters=parameters, step_range=True)
#Backbone dihedrals
dna.set_backbone_dihedrals('tutorial_data/BackBoneCHiDihedrals_cdna.dat', bp=[2, 59], parameters='All', bp_range=True)
Explanation: Store/Save data to HDF5 file
No extra step is necessary to store the data in the HDF5 file. Just read the parameters from the do_x3dna output files as described in the previous tutorials.
Local base-pair parameters as shown previously here.
Local base-step parameters as shown previously here.
Local helical base-step parameters as shown previously here.
Helical axis as shown previously here.
Major and minor grooves as shown previously here.
Backbone dihedrals as shown previously here.
End of explanation
# Extracting "Shear" of 22nd bp
shear_20bp = dna.data['bp']['22']['shear']
#Shear vs Time for 22nd bp
plt.title('22nd bp')
plt.plot(dna.time, shear_20bp)
plt.xlabel('Time (ps)')
plt.ylabel('Shear ($\AA$)')
plt.show()
Explanation: Example to extract a parameter
As shown previously here, data can be extracted from the HDF5 file in the same way, as shown in the following.
Also, see that plot is similar.
Note that in this case, data is read from the HDF5 file, while in the previous tutorial, data was stored in memory (RAM).
End of explanation
# Rise vs Time for 25-40 bp segment
plt.title('Rise for 25-40 bp segment')
# Rise is the distance between two base-pairs, so for a given segment it is sum over the base-steps
time, value = dna.time_vs_parameter('rise', [25, 40], merge=True, merge_method='sum')
plt.plot(time, value, label='bound DNA', c='k')
plt.xlabel('Time (ps)')
plt.ylabel('Rise ( $\AA$)')
plt.legend()
plt.show()
Explanation: Example to extract parameter as a function of time
As shown previously here, the same method (dnaMD.DNA.time_vs_parameter(...)) can be used to get parameter values as a function of time.
Also, see that plot is similar.
Note that in this case, data is read from the HDF5 file, while in the previous tutorial, data was stored in memory (RAM).
End of explanation |
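As an optional sanity check (not part of the original tutorial), the stored groups and datasets can be listed directly with h5py; only the file name cdna.h5 is taken from above, the rest is a generic HDF5 listing:
import h5py
with h5py.File('cdna.h5', 'r') as fh:
    fh.visit(print)  # prints every group/dataset path written by dnaMD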
1,246 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
레버리지와 아웃라이어
레버리지 (Leverage)
개별적인 데이터 표본이 회귀 분석 결과에 미치는 영향은 레버리지(leverage)분석을 통해 알 수 있다.
레버리지는 래의 target value $y$가 예측된(predicted) target $\hat{y}$에 미치는 영향을 나타낸 값이다. self-influence, self-sensitivity 라고도 한다
레버리지는 RegressionResults 클래스의 get_influence 메서드로 구할 수 있다.
weight vector
$$ w = (X^TX)^{-1} X^T y $$
$$ \hat{y} = X w = X((X^TX)^{-1} X^T y ) = ( X(X^TX)^{-1} X^T) y = Hy $$
leverage $h_{ii}$
$$ h_{ii}=(H)_{ii} $$
leverage 특성
$$ 0 \leq h_{ii} \leq 1 $$
$$ \sum_i^N h_{ii} = 2 $$
leverages는 어떤 데이터 포인트가 예측점을 자기 자신의 위치로 끌어 당기는 정도
만약 $h_{ii} \simeq 1$이면
$$ \hat{y} \simeq y $$
Step1: Outlier
Good Leverage Points
leverage가(영향력이) 높지만 residual이(오차가) 작은 데이터
Bad Leverage Points = Outliner
leverage도(영향력도) 높지만 residual도(오차도) 큰 데이터
Step2: Influence
Cook's Distance
(normalized) residual과 leverage의 복합 측도
$$ D_i = \frac{e_i^2}{\text{RSS}}\left[\frac{h_{ii}}{(1-h_{ii})^2}\right] $$
Fox' Outlier Recommendation
$$ D_i > \dfrac{4}{N − 2} $$ | Python Code:
from sklearn.datasets import make_regression
X0, y, coef = make_regression(n_samples=100, n_features=1, noise=20, coef=True, random_state=1)
# add high-leverage points
X0 = np.vstack([X0, np.array([[4],[3]])])
X = sm.add_constant(X0)
y = np.hstack([y, [300, 150]])
plt.scatter(X0, y)
plt.show()
model = sm.OLS(pd.DataFrame(y), pd.DataFrame(X))
result = model.fit()
print(result.summary())
influence = result.get_influence()
hat = influence.hat_matrix_diag
plt.stem(hat)
plt.axis([ -2, len(y)+2, 0, 0.2 ])
plt.show()
print("hat.sum() =", hat.sum())
plt.scatter(X0, y)
sm.graphics.abline_plot(model_results=result, ax=plt.gca())
idx = hat > 0.05
plt.scatter(X0[idx], y[idx], s=300, c="r", alpha=0.5)
plt.axis([-3, 5, -300, 400])
plt.show()
model2 = sm.OLS(y[:-1], X[:-1])
result2 = model2.fit()
plt.scatter(X0, y);
sm.graphics.abline_plot(model_results=result, c="r", linestyle="--", ax=plt.gca())
sm.graphics.abline_plot(model_results=result2, c="g", alpha=0.7, ax=plt.gca())
plt.plot(X0[-1], y[-1], marker='x', c="m", ms=20, mew=5)
plt.axis([-3, 5, -300, 400])
plt.legend(["before", "after"], loc="upper left")
plt.show()
model3 = sm.OLS(y[1:], X[1:])
result3 = model3.fit()
plt.scatter(X0, y)
sm.graphics.abline_plot(model_results=result, c="r", linestyle="--", ax=plt.gca())
sm.graphics.abline_plot(model_results=result3, c="g", alpha=0.7, ax=plt.gca())
plt.plot(X0[0], y[0], marker='x', c="m", ms=20, mew=5)
plt.axis([-3, 5, -300, 400])
plt.legend(["before", "after"], loc="upper left")
plt.show()
Explanation: 레버리지와 아웃라이어
레버리지 (Leverage)
개별적인 데이터 표본이 회귀 분석 결과에 미치는 영향은 레버리지(leverage)분석을 통해 알 수 있다.
레버리지는 래의 target value $y$가 예측된(predicted) target $\hat{y}$에 미치는 영향을 나타낸 값이다. self-influence, self-sensitivity 라고도 한다
레버리지는 RegressionResults 클래스의 get_influence 메서드로 구할 수 있다.
weight vector
$$ w = (X^TX)^{-1} X^T y $$
$$ \hat{y} = X w = X((X^TX)^{-1} X^T y ) = ( X(X^TX)^{-1} X^T) y = Hy $$
leverage $h_{ii}$
$$ h_{ii}=(H)_{ii} $$
leverage 특성
$$ 0 \leq h_{ii} \leq 1 $$
$$ \sum_i^N h_{ii} = 2 $$
leverages는 어떤 데이터 포인트가 예측점을 자기 자신의 위치로 끌어 당기는 정도
만약 $h_{ii} \simeq 1$이면
$$ \hat{y} \simeq y $$
End of explanation
plt.figure(figsize=(10, 2))
plt.stem(result.resid)
plt.xlim([-2, len(y)+2])
plt.show()
sm.graphics.plot_leverage_resid2(result)
plt.show()
Explanation: Outlier
Good Leverage Points
leverage가(영향력이) 높지만 residual이(오차가) 작은 데이터
Bad Leverage Points = Outliner
leverage도(영향력도) 높지만 residual도(오차도) 큰 데이터
End of explanation
sm.graphics.influence_plot(result, plot_alpha=0.3)
plt.show()
cooks_d2, pvals = influence.cooks_distance
fox_cr = 4 / (len(y) - 2)
idx = np.where(cooks_d2 > fox_cr)[0]
plt.scatter(X0, y)
plt.scatter(X0[idx], y[idx], s=300, c="r", alpha=0.5)
plt.axis([-3, 5, -300, 400])
from statsmodels.graphics import utils
utils.annotate_axes(range(len(idx)), idx, zip(X0[idx], y[idx]), [(-20,15)]*len(idx), size="large", ax=plt.gca())
plt.show()
idx = np.nonzero(result.outlier_test().ix[:, -1].abs() < 0.9)[0]
plt.scatter(X0, y)
plt.scatter(X0[idx], y[idx], s=300, c="r", alpha=0.5)
plt.axis([-3, 5, -300, 400]);
utils.annotate_axes(range(len(idx)), idx, zip(X0[idx], y[idx]), [(-10,10)]*len(idx), size="large", ax=plt.gca())
plt.show()
plt.figure(figsize=(10, 2))
plt.stem(result.outlier_test().ix[:, -1])
plt.show()
Explanation: Influence
Cook's Distance
(normalized) residual과 leverage의 복합 측도
$$ D_i = \frac{e_i^2}{\text{RSS}}\left[\frac{h_{ii}}{(1-h_{ii})^2}\right] $$
Fox' Outlier Recommendation
$$ D_i > \dfrac{4}{N − 2} $$
End of explanation |
1,247 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Check ensemble of OpenMM temperature replica exchange simulations
Note
Step1: Ensemble validation is particularly useful for validating enhanced sampling methods such
as temperature replica exchange (parallel tempering) molecular dynamics simulations. In
replica exchange MD, configurational swaps are attempted periodically between simulations
running concurrently in separate simulation cells at different temperatures. Since the
acceptance criteria for exchange moves depends on the energies of each replica involved,
it is critical that the energy distributions are correct at all thermodynamic states used
the parallel tempering scheme.
In this example, parallel tempering simulations on a simple coarse-grained oligomer system
were run using the openmmtools framework. 6 temperature states spaced logarithmically over
the range of 300K to 500K, over which the oligomer undergoes a transition from a helix to
unfolded state.
Step2: For convenience, we will use native physical_validation units
Step3: Create the UnitData object which we will use to inform physical_validation of our choices. Note that the conversion factors will all be 1.0, since we're using native units. The notation here is more general though, and would allow to change any unit set in the previous cell.
Step4: We first read the replica exchange output file and extract the thermodynamics states used in the simulation. We will use these states to create the EnsembleData objects which inform physical_validation of the sampled ensembles. Note that the example simulations were performed at constant temperature in a non-periodic box. We will therefore set the (undefined) volume of the NVT ensemble to -1.
Step5: We will then read the replica energies and the state indices. Note that
replica_state_indices[replica, step] denotes the thermodynamics state index sampled by replica replica during step step
replica_energies[replica, state, step] is the reduced potential of replica replica at state state during step step
Step6: For our analysis, we are only interested in the energies at the states the replicas were sampling at, rather than at all states. It will also make our remaining analysis easier if these energies are organized as time series per state rather than time series per replica.
Also, the replica energies are stored in reduced potential. For our analysis, we are interested in the non-reduced form, which we can obtain by multiplying the result by kT of the respective thermodynamic state.
Step7: We now have all the required information to create a SimulationData object for each separate state
Step8: We can now run the ensemble validation on all adjacent temperature pairs | Python Code:
# enable plotting in notebook
%matplotlib notebook
Explanation: Check ensemble of OpenMM temperature replica exchange simulations
Note: This notebook can be run locally by cloning the
Github repository.
The notebook is located in doc/examples/openmm_replica_exchange.ipynb. The input and output files of the simulation are located in doc/examples/simulation_results/openMMTemperatureReplicaExchange/.
Be aware that probabilistic quantities such as error estimates based on bootstrapping
will differ when repeating the analysis.
End of explanation
import numpy as np
import physical_validation
import openmmtools.multistate
import simtk.unit
Explanation: Ensemble validation is particularly useful for validating enhanced sampling methods such
as temperature replica exchange (parallel tempering) molecular dynamics simulations. In
replica exchange MD, configurational swaps are attempted periodically between simulations
running concurrently in separate simulation cells at different temperatures. Since the
acceptance criteria for exchange moves depends on the energies of each replica involved,
it is critical that the energy distributions are correct at all thermodynamic states used
the parallel tempering scheme.
In this example, parallel tempering simulations on a simple coarse-grained oligomer system
were run using the openmmtools framework. 6 temperature states spaced logarithmically over
the range of 300K to 500K, over which the oligomer undergoes a transition from a helix to
unfolded state.
End of explanation
energy_unit = simtk.unit.kilojoule_per_mole
length_unit = simtk.unit.nanometer
volume_unit = length_unit ** 3
temperature_unit = simtk.unit.kelvin
pressure_unit = simtk.unit.bar
time_unit = simtk.unit.picosecond
kb = simtk.unit.MOLAR_GAS_CONSTANT_R.value_in_unit(energy_unit / temperature_unit)
Explanation: For convenience, we will use native physical_validation units:
End of explanation
unit_data = physical_validation.data.UnitData(
kb=kb,
energy_conversion=energy_unit.conversion_factor_to(simtk.unit.kilojoule_per_mole),
length_conversion=length_unit.conversion_factor_to(simtk.unit.nanometer),
volume_conversion=volume_unit.conversion_factor_to(simtk.unit.nanometer ** 3),
temperature_conversion=temperature_unit.conversion_factor_to(simtk.unit.kelvin),
pressure_conversion=pressure_unit.conversion_factor_to(simtk.unit.bar),
time_conversion=time_unit.conversion_factor_to(simtk.unit.picosecond),
energy_str=energy_unit.get_symbol(),
length_str=length_unit.get_symbol(),
volume_str=volume_unit.get_symbol(),
temperature_str=temperature_unit.get_symbol(),
pressure_str=pressure_unit.get_symbol(),
time_str=time_unit.get_symbol(),
)
Explanation: Create the UnitData object which we will use to inform physical_validation of our choices. Note that the conversion factors will all be 1.0, since we're using native units. The notation here is more general though, and would allow to change any unit set in the previous cell.
End of explanation
output_data = "simulation_results/openMMTemperatureReplicaExchange/output/output.nc"
reporter = openmmtools.multistate.MultiStateReporter(output_data, open_mode="r")
states = reporter.read_thermodynamic_states()[0]
num_states = len(states)
ensemble_data = [
physical_validation.data.EnsembleData(
ensemble="NVT",
natoms=state.n_particles,
volume=-1,
temperature=state.temperature.value_in_unit(temperature_unit),
)
for state in states
]
Explanation: We first read the replica exchange output file and extract the thermodynamics states used in the simulation. We will use these states to create the EnsembleData objects which inform physical_validation of the sampled ensembles. Note that the example simulations were performed at constant temperature in a non-periodic box. We will therefore set the (undefined) volume of the NVT ensemble to -1.
End of explanation
analyzer = openmmtools.multistate.ReplicaExchangeAnalyzer(reporter)
replica_energies, _, _, replica_state_indices = analyzer.read_energies()
Explanation: We will then read the replica energies and the state indices. Note that
replica_state_indices[replica, step] denotes the thermodynamics state index sampled by replica replica during step step
replica_energies[replica, state, step] is the reduced potential of replica replica at state state during step step
End of explanation
# Prepare array of kT values
kT = np.array(
[kb * state.temperature.value_in_unit(temperature_unit) for state in states]
)
total_steps = replica_energies.shape[2]
potential_energies = []
for state in range(num_states):
state_energy = np.zeros(total_steps)
for step in range(total_steps):
# Find the replica which sampled at state `state` during step `step`
state_energy[step] = replica_energies[
np.nonzero(replica_state_indices[:, step] == state), state, step
]
# Append non-reduced potential energy time series
potential_energies.append(state_energy * kT[state])
Explanation: For our analysis, we are only interested in the energies at the states the replicas were sampling at, rather than at all states. It will also make our remaining analysis easier if these energies are organized as time series per state rather than time series per replica.
Also, the replica energies are stored in reduced potential. For our analysis, we are interested in the non-reduced form, which we can obtain by multiplying the result by kT of the respective thermodynamic state.
End of explanation
simulation_data = []
for ensemble, potential_energy in zip(ensemble_data, potential_energies):
simulation_data.append(
physical_validation.data.SimulationData(
units=unit_data,
ensemble=ensemble,
observables=physical_validation.data.ObservableData(
potential_energy=potential_energy
),
)
)
Explanation: We now have all the required information to create a SimulationData object for each separate state:
End of explanation
for simulation_lower, simulation_upper in zip(
simulation_data[:-1], simulation_data[1:]
):
physical_validation.ensemble.check(
data_sim_one=simulation_lower, data_sim_two=simulation_upper, screen=True
)
Explanation: We can now run the ensemble validation on all adjacent temperature pairs:
End of explanation |
1,248 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Apply logistic regression to categorize whether a county had high mortality rate due to contamination
1. Import the necessary packages to read in the data, plot, and create a logistic regression model
Step1: 2. Read in the hanford.csv file in the data/ folder
Step2: <img src="../../images/hanford_variables.png"></img>
3. Calculate the basic descriptive statistics on the data
Step3: 4. Find a reasonable threshold to say exposure is high and recode the data
Step4: 5. Create a logistic regression model | Python Code:
import pandas as pd
%matplotlib inline
import numpy as np
from sklearn.linear_model import LogisticRegression
Explanation: Apply logistic regression to categorize whether a county had high mortality rate due to contamination
1. Import the necessary packages to read in the data, plot, and create a logistic regression model
End of explanation
df = pd.read_csv("hanford.csv")
df
Explanation: 2. Read in the hanford.csv file in the data/ folder
End of explanation
df.describe()
Explanation: <img src="../../images/hanford_variables.png"></img>
3. Calculate the basic descriptive statistics on the data
End of explanation
df['High_Exposure'] = df['Exposure'].apply(lambda x:1 if x > 3.41 else 0)
Explanation: 4. Find a reasonable threshold to say exposure is high and recode the data
End of explanation
lm = LogisticRegression()
# assuming the binary High_Exposure flag created above is the intended target for the classifier
x = np.asarray(df[['Mortality']])
y = np.asarray(df['High_Exposure'])
lm = lm.fit(x,y)
Explanation: 5. Create a logistic regression model
End of explanation |
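To see what the fitted classifier predicts (an illustrative addition, assuming lm was fit on df[['Mortality']] against the binary High_Exposure flag as above):
# predicted class (0/1) and probability of the positive class for each county
predicted = lm.predict(np.asarray(df[['Mortality']]))
probs = lm.predict_proba(np.asarray(df[['Mortality']]))[:, 1]
print(predicted[:5], probs[:5])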
1,249 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This example is a Jupyter notebook. You can download it or run it interactively on mybinder.org.
Linear regression
Data
The true parameters of the linear regression
Step1: Generate data
Step2: Model
The regressors, that is, the input data
Step3: Note that we added a column of ones to the regressor matrix for the bias term. We model the slope and the bias term in the same node so we do not factorize between them
Step4: The first element is the slope which multiplies x and the second element is the bias term which multiplies the constant ones. Now we compute the dot product of X and B
Step5: The noise parameter
Step6: The noisy observations
Step7: Inference
Observe the data
Step8: Construct the variational Bayesian (VB) inference engine by giving all stochastic nodes
Step9: Iterate until convergence
Step10: Results
Create a simple predictive model for new inputs
Step11: Note that we use the learned node B but create a new regressor array for predictions. Plot the predictive distribution of noiseless function values
Step12: Note that the above plot shows two standard deviation of the posterior of the noiseless function, thus the data points may lie well outside this range. The red line shows the true linear function. Next, plot the distribution of the noise parameter and the true value, 2−2=0.25
Step13: The noise level is captured quite well, although the posterior has more mass on larger noise levels (smaller precision parameter values). Finally, plot the distribution of the regression parameters and mark the true value
Step14: In this case, the true parameters are captured well by the posterior distribution.
Improving accuracy
The model can be improved by not factorizing between B and tau but learning their joint posterior distribution. This requires a slight modification to the model by using GaussianGammaISO node
Step15: This node contains both the regression parameter vector and the noise parameter. We compute the dot product similarly as before
Step16: However, Y is constructed as follows
Step17: Because the noise parameter is already in F_tau we can give a constant one as the second argument. The total noise parameter for Y is the product of the noise parameter in F_tau and one. Now, inference is run similarly as before | Python Code:
import numpy as np
k = 2 # slope
c = 5 # bias
s = 2 # noise standard deviation
# This cell content is hidden from Sphinx-generated documentation
%matplotlib inline
np.random.seed(42)
Explanation: This example is a Jupyter notebook. You can download it or run it interactively on mybinder.org.
Linear regression
Data
The true parameters of the linear regression:
End of explanation
x = np.arange(10)
y = k*x + c + s*np.random.randn(10)
Explanation: Generate data:
End of explanation
X = np.vstack([x, np.ones(len(x))]).T
Explanation: Model
The regressors, that is, the input data:
End of explanation
from bayespy.nodes import GaussianARD
B = GaussianARD(0, 1e-6, shape=(2,))
Explanation: Note that we added a column of ones to the regressor matrix for the bias term. We model the slope and the bias term in the same node so we do not factorize between them:
End of explanation
from bayespy.nodes import SumMultiply
F = SumMultiply('i,i', B, X)
Explanation: The first element is the slope which multiplies x and the second element is the bias term which multiplies the constant ones. Now we compute the dot product of X and B:
End of explanation
from bayespy.nodes import Gamma
tau = Gamma(1e-3, 1e-3)
Explanation: The noise parameter:
End of explanation
Y = GaussianARD(F, tau)
Explanation: The noisy observations:
End of explanation
Y.observe(y)
Explanation: Inference
Observe the data:
End of explanation
from bayespy.inference import VB
Q = VB(Y, B, tau)
Explanation: Construct the variational Bayesian (VB) inference engine by giving all stochastic nodes:
End of explanation
Q.update(repeat=1000)
Explanation: Iterate until convergence:
End of explanation
xh = np.linspace(-5, 15, 100)
Xh = np.vstack([xh, np.ones(len(xh))]).T
Fh = SumMultiply('i,i', B, Xh)
Explanation: Results
Create a simple predictive model for new inputs:
End of explanation
import bayespy.plot as bpplt
bpplt.pyplot.figure()
bpplt.plot(Fh, x=xh, scale=2)
bpplt.plot(y, x=x, color='r', marker='x', linestyle='None')
bpplt.plot(k*xh+c, x=xh, color='r');
Explanation: Note that we use the learned node B but create a new regressor array for predictions. Plot the predictive distribution of noiseless function values:
End of explanation
bpplt.pyplot.figure()
bpplt.pdf(tau, np.linspace(1e-6,1,100), color='k')
bpplt.pyplot.axvline(s**(-2), color='r');
Explanation: Note that the above plot shows two standard deviation of the posterior of the noiseless function, thus the data points may lie well outside this range. The red line shows the true linear function. Next, plot the distribution of the noise parameter and the true value, 2−2=0.25:
End of explanation
bpplt.pyplot.figure();
bpplt.contour(B, np.linspace(1,3,1000), np.linspace(1,9,1000),
n=10, colors='k');
bpplt.plot(c, x=k, color='r', marker='x', linestyle='None',
markersize=10, markeredgewidth=2)
bpplt.pyplot.xlabel(r'$k$');
bpplt.pyplot.ylabel(r'$c$');
Explanation: The noise level is captured quite well, although the posterior has more mass on larger noise levels (smaller precision parameter values). Finally, plot the distribution of the regression parameters and mark the true value:
End of explanation
from bayespy.nodes import GaussianGamma
B_tau = GaussianGamma(np.zeros(2), 1e-6*np.identity(2), 1e-3, 1e-3)
Explanation: In this case, the true parameters are captured well by the posterior distribution.
Improving accuracy
The model can be improved by not factorizing between B and tau but learning their joint posterior distribution. This requires a slight modification to the model by using GaussianGammaISO node:
End of explanation
F_tau = SumMultiply('i,i', B_tau, X)
Explanation: This node contains both the regression parameter vector and the noise parameter. We compute the dot product similarly as before:
End of explanation
Y = GaussianARD(F_tau, 1)
Explanation: However, Y is constructed as follows:
End of explanation
Y.observe(y)
Q = VB(Y, B_tau)
Q.update(repeat=1000)
Explanation: Because the noise parameter is already in F_tau we can give a constant one as the second argument. The total noise parameter for Y is the product of the noise parameter in F_tau and one. Now, inference is run similarly as before:
End of explanation |
1,250 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MySQL-python
It is an interface to MySQL that
Step1: WARNING
Step2: It is recommend to interpolate sql using the DB API.
It knows how to deal with strings, integers, booleans, None...
Querying
Step3: pymongo
'pymongo' is the official Python MongoDB driver
Step4: When connecting you can provide a list of seeds (replica set servers) in several ways
http
Step5: Most of shell operations can be translated easilly
Step6: Updating | Python Code:
# let's create a testing database
# CREATE DATABASE IF NOT EXISTS mod_mysqldb DEFAULT CHARACTER SET 'UTF8' DEFAULT COLLATE 'UTF8_GENERAL_CI';
# GRANT ALL PRIVILEGES ON mod_mysqldb.* TO 'user'@'localhost' IDENTIFIED BY 'user';
# let's connect to our database
import MySQLdb as mysql
conn = mysql.connect('localhost', 'user', 'user', 'mod_mysqldb')
cursor = conn.cursor()
cursor.execute("CREATE TABLE IF NOT EXISTS writers(id INT PRIMARY KEY AUTO_INCREMENT, name VARCHAR(25), UNIQUE KEY (name));")
cursor.execute("INSERT IGNORE INTO writers(Name) VALUES('Jack London')")
cursor.execute("INSERT IGNORE INTO writers(Name) VALUES('Honore de Balzac')")
cursor.execute("INSERT IGNORE INTO writers(Name) VALUES('Lion Feuchtwanger')")
cursor.execute("INSERT IGNORE INTO writers(Name) VALUES('Emile Zola')")
cursor.execute("INSERT IGNORE INTO writers(Name) VALUES('Truman Capote')")
more_writers = ['Yukio Mishima', 'Lev Tolstoi', 'Franz Kafka']
for writer in more_writers:
cursor.execute("INSERT IGNORE INTO writers(Name) VALUES(%s)", (writer,))
more_writers_using_many = [('Charles Bukowski',), ('Jorge Luis Borges',), ('Gabriel Garcia Marquez',)]
cursor.executemany("INSERT IGNORE INTO writers(Name) VALUES(%s)", more_writers_using_many)
Explanation: MySQL-python
It is an interface to MySQL that:
- Compliance with Python db API 2.0 ( http://www.python.org/dev/peps/pep-0249/ )
- Thread safety
- Thread-friendliness (threads will not block each other)
MySQL-3.23 through 5.5 and Python-2.4 through 2.7 are currently supported.
End of explanation
more_writers_using_dict = [{'name':'Pablo Neruda'}, {'name':'Fedor Dostoievski'}]
cursor.executemany("INSERT IGNORE INTO writers(Name) VALUES(%(name)s)", more_writers_using_dict)
cursor.execute("INSERT IGNORE INTO writers(Name) VALUES(%s)" % 'Francis Scott Fitzgerald')
# What has happened?
"INSERT IGNORE INTO writers(Name) VALUES(%s)" % 'Francis Scott Fitzgerald'
Explanation: WARNING: executemany just makes a loop on execute, so it is not a bulk update
End of explanation
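For a genuinely bulk insert, one option is to build a single multi-row INSERT statement so only one round trip is made. A minimal sketch (not from the original notebook), reusing the cursor and writers table from above; the bulk_writers names are arbitrary:
bulk_writers = ['Isabel Allende', 'Julio Cortazar', 'Mario Vargas Llosa']
placeholders = ", ".join(["(%s)"] * len(bulk_writers))
# one statement, one round trip, still using DB API parameter binding
cursor.execute("INSERT IGNORE INTO writers(Name) VALUES " + placeholders, bulk_writers)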
cursor.execute('SELECT * FROM writers')
for writer in cursor.fetchall():
print writer
# query for a specific record
cursor.execute("SELECT * FROM writers WHERE name='Pablo Neruda'")
print cursor.fetchone()
# querying using interpolation
cursor.execute("SELECT * FROM writers WHERE name=%(name)s", {'name': 'Charles Bukowski'})
print cursor.fetchone()
# using a dict cursor to improve working with a queryset
import MySQLdb.cursors
conn.commit()
cursor.close()
conn.close()
conn = mysql.connect('localhost', 'user', 'user', 'mod_mysqldb', cursorclass=MySQLdb.cursors.DictCursor)
cursor = conn.cursor()
cursor.execute('SELECT * FROM writers')
for writer in cursor.fetchall():
print writer
Explanation: It is recommended to interpolate sql using the DB API.
It knows how to deal with strings, integers, booleans, None...
Querying
End of explanation
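To make the point concrete, here is a small illustrative sketch contrasting the two approaches (the DB API call lets the driver quote the value; plain % formatting does not):
cursor.execute("SELECT * FROM writers WHERE name=%s", ('Jack London',))  # driver adds the quotes
print cursor.fetchone()
print "SELECT * FROM writers WHERE name=%s" % 'Jack London'  # no quoting at all: invalid (and injectable) SQL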
import pymongo
client = pymongo.MongoClient('localhost', 27017) # localhost:27017 is the default value
dbconn = client.mod_pymongo # also client['mod_pymongo'] getting a database is so easy ('use db' in mongo shell)
print client
Explanation: pymongo
'pymongo' is the official Python MongoDB driver
End of explanation
from pymongo import ReadPreference
from pymongo.errors import AutoReconnect, ConnectionFailure, DuplicateKeyError
replica_client = pymongo.MongoClient(
('localhost:27017', 'localhost:27018', 'localhost:27019'), # also you can use url format
w=3, # globally set write_concern (wtimeout can also be set...).
replicaset='sdrepl',
read_preference=ReadPreference.PRIMARY, # several options available
auto_start_request=True # consistent reads (socket allocated by requests)
) # you can also use MongoReplicaSetClient
# More options in http://api.mongodb.org/python/current/api/pymongo/connection.html
print replica_client
client.close()
db_replica = replica_client.mod_pymongo
db_replica.books.drop()
db_replica.writers.ensure_index([("name", pymongo.ASCENDING), ("age", pymongo.DESCENDING)], unique=True, name="unique_name")
more_writers = ["Yukio Mishima", "Lev Tolstoi", "Franz Kafka", "J. D. Salinger"]
for writer in more_writers:
db_replica.writers.insert({"name": writer, "age": 90})
# some more
db_replica.books.insert({'_id': 'hobbit', 'editions': []}) # editions is intended to be a list of complex objects
db_replica.books.insert({'_id': 'lord_rings', 'editions': None }, w=0) # write_concern can be disabled in collection level operations
more_writers_using_bulk = ["Charles Bukowski", "Jorge Luis Borges", "Gabriel Garcia Marquez"]
db_replica.writers.insert([{"name": name} for name in more_writers_using_bulk])
from pymongo.errors import DuplicateKeyError, OperationFailure
# collection level operations raise OperationFailure when a problem happens
# OperationFailure is translated in some cases:
try:
db_replica.books.insert({'_id': 'hobbit'})
except DuplicateKeyError:
print "Already created object"
except OperationFailure:
print "Some problem occurred"
Explanation: When connecting you can provide a list of seeds (replica set servers) in several ways
http://api.mongodb.org/python/current/examples/high_availability.html
End of explanation
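One of the alternative forms mentioned above is a single MongoDB URI listing the seeds. A minimal sketch (illustrative only, same hosts and replica set name as before; uri_client is a throwaway name):
uri_client = pymongo.MongoClient("mongodb://localhost:27017,localhost:27018,localhost:27019/?replicaSet=sdrepl")
print uri_client
uri_client.close()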
cursor = db_replica.writers.find()
for writer in cursor: # we get a pymongo Cursor not a list (ordering, skip...)
print writer
# query for a specific record
res = db_replica.writers.find_one({"name": "Pablo Neruda"})
print res # we get a dict in python
# querying with several fields, just provide a dict
import re
db_replica.writers.insert({'name': 'Miguel de Unamuno', 'age': 130})
db_replica.writers.insert({'name': 'Miguel Delibes', 'age': 90})
db_replica.writers.insert({'name': 'Miguel de Cervantes', 'age': 500})
res = db_replica.writers.find({"name": re.compile("^Miguel"), "age": {'$lt': 200}}) # regex can be used in query
print list(res) # we get a dict in python
# sort, skip and limit are quite similar to shell
res = db_replica.writers.find().sort('name', pymongo.DESCENDING).skip(3).limit(1)
print list(res)
# you can use it as kw arguments
res = db_replica.writers.find(skip=3).sort('name', pymongo.DESCENDING).limit(1)
print list(res)
# to sort by more than one parameter we use a list of tuples, not a dict
res = db_replica.writers.find().sort([('name', pymongo.DESCENDING), ('_id', pymongo.ASCENDING)]).skip(3).limit(1)
print list(res)
# Explain plans
from pprint import pprint
pprint(db_replica.writers.find({"name": "Pablo Neruda"}).explain())
Explanation: Most shell operations can be translated easily:
dict and list in python vs object and array in json
sometimes a dict must be changed to a list of tuples because a dict has no ordering... (ensure_index)
Querying
End of explanation
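As an illustration of that translation, the mongo shell query db.writers.find({age: {$lt: 200}}).sort({name: -1}) becomes the following in pymongo (note the list of tuples where the shell uses an ordered object):
res = db_replica.writers.find({"age": {"$lt": 200}}).sort([("name", pymongo.DESCENDING)])
print list(res)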
# Change the name of a field in a document
db_replica.writers.update({"name": "J. D. Salinger"}, {"name": "Jerome David Salinger"})
# if object does not exist, create new one (upsert)
db_replica.writers.update({"name": "George R. R. Martin"}, {"name": "George Raymond Richard Martin"}, upsert=True)
# Add book as subdocument in collection
book = {'name': 'hobbit'}
db_replica.writers.update({"name": "Jerome David Salinger"},{'$set': {'books': book}})
db_replica.writers.update({"name": "George Raymond Richard Martin"},{'$set': {'books': {'name': 'another_book'}}})
# check the documents...
print db_replica.writers.find_one({"name": "Jerome David Salinger"})
print db_replica.writers.find_one({"name": "George Raymond Richard Martin"})
# Update subdocument field
db_replica.writers.update({"name": "George Raymond Richard Martin"},{'$set': {'books.name': 'lord_rings'}})
res = dbconn.writers.find_one({"name": "George Raymond Richard Martin"})
print res
# add one object to an array with push
edition = {
'year': '1997',
'editorial': 'planet'
}
db_replica.books.update({'_id': 'hobbit' }, {'$push': {'editions': edition}}) # quite similar to mongo shell
print db_replica.books.find_one({'_id': 'hobbit'})
# Dealing with Autoreconnect in replicaset
# Stop the mongo primary instance before continuing
import time
try:
db_replica.books.find_one()
except AutoReconnect:
print "Connection lost"
# We make same query again ...
print db_replica.books.find_one()
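# Illustrative sketch (not from the original notebook): retry reads a few times so a
# transient AutoReconnect during a replica-set failover does not bubble up immediately.
# find_one_with_retry and its parameters are made-up names for this example only.
def find_one_with_retry(collection, query, retries=3, delay=1):
    for attempt in range(retries):
        try:
            return collection.find_one(query)
        except AutoReconnect:
            time.sleep(delay)
    raise ConnectionFailure("primary did not come back after %d retries" % retries)

print find_one_with_retry(db_replica.books, {'_id': 'hobbit'})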
Explanation: Updating
End of explanation |
1,251 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GPyTorch Regression With KeOps
Introduction
KeOps is a recently released software package for fast kernel operations that integrates with PyTorch. We can use the ability of KeOps to perform efficient kernel matrix multiplies on the GPU to integrate with the rest of GPyTorch.
In this tutorial, we'll demonstrate how to integrate the kernel matmuls of KeOps with all of the bells and whistles of GPyTorch, including things like our preconditioning for conjugate gradients.
In this notebook, we will train an exact GP on 3droad, which has hundreds of thousands of data points. Together, the highly optimized matmuls of KeOps combined with algorithmic speed improvements like preconditioning allow us to train on a dataset like this in a matter of minutes using only a single GPU.
Step1: Downloading Data
We will be using the 3droad UCI dataset which contains a total of 278,319 data points. The next cell will download this dataset and load it.
Step2: Using KeOps with a GPyTorch Model
Using KeOps with one of our pre built kernels is as straightforward as swapping the kernel out. For example, in the cell below, we copy the simple GP from our basic tutorial notebook, and swap out gpytorch.kernels.MaternKernel for gpytorch.kernels.keops.MaternKernel.
Step3: Compute RMSE | Python Code:
import math
import torch
import gpytorch
from matplotlib import pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
Explanation: GPyTorch Regression With KeOps
Introduction
KeOps is a recently released software package for fast kernel operations that integrates with PyTorch. We can use the ability of KeOps to perform efficient kernel matrix multiplies on the GPU to integrate with the rest of GPyTorch.
In this tutorial, we'll demonstrate how to integrate the kernel matmuls of KeOps with all of the bells and whistles of GPyTorch, including things like our preconditioning for conjugate gradients.
In this notebook, we will train an exact GP on 3droad, which has hundreds of thousands of data points. Together, the highly optimized matmuls of KeOps combined with algorithmic speed improvements like preconditioning allow us to train on a dataset like this in a matter of minutes using only a single GPU.
End of explanation
import urllib.request
import os.path
from scipy.io import loadmat
from math import floor
if not os.path.isfile('../3droad.mat'):
print('Downloading \'3droad\' UCI dataset...')
urllib.request.urlretrieve('https://www.dropbox.com/s/f6ow1i59oqx05pl/3droad.mat?dl=1', '../3droad.mat')
data = torch.Tensor(loadmat('../3droad.mat')['data'])
import numpy as np
N = data.shape[0]
# make train/val/test
n_train = int(0.8 * N)
train_x, train_y = data[:n_train, :-1], data[:n_train, -1]
test_x, test_y = data[n_train:, :-1], data[n_train:, -1]
# normalize features
mean = train_x.mean(dim=-2, keepdim=True)
std = train_x.std(dim=-2, keepdim=True) + 1e-6 # prevent dividing by 0
train_x = (train_x - mean) / std
test_x = (test_x - mean) / std
# normalize labels
mean, std = train_y.mean(),train_y.std()
train_y = (train_y - mean) / std
test_y = (test_y - mean) / std
# make contiguous
train_x, train_y = train_x.contiguous(), train_y.contiguous()
test_x, test_y = test_x.contiguous(), test_y.contiguous()
output_device = torch.device('cuda:0')
train_x, train_y = train_x.to(output_device), train_y.to(output_device)
test_x, test_y = test_x.to(output_device), test_y.to(output_device)
Explanation: Downloading Data
We will be using the 3droad UCI dataset which contains a total of 278,319 data points. The next cell will download this dataset and load it.
End of explanation
# We will use the simplest form of GP model, exact inference
class ExactGPModel(gpytorch.models.ExactGP):
def __init__(self, train_x, train_y, likelihood):
super(ExactGPModel, self).__init__(train_x, train_y, likelihood)
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.keops.MaternKernel(nu=2.5))
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
# initialize likelihood and model
likelihood = gpytorch.likelihoods.GaussianLikelihood().cuda()
model = ExactGPModel(train_x, train_y, likelihood).cuda()
# Find optimal model hyperparameters
model.train()
likelihood.train()
# Use the adam optimizer
optimizer = torch.optim.Adam(model.parameters(), lr=0.1) # Includes GaussianLikelihood parameters
# "Loss" for GPs - the marginal log likelihood
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
import time
training_iter = 50
for i in range(training_iter):
start_time = time.time()
# Zero gradients from previous iteration
optimizer.zero_grad()
# Output from model
output = model(train_x)
# Calc loss and backprop gradients
loss = -mll(output, train_y)
loss.backward()
print('Iter %d/%d - Loss: %.3f lengthscale: %.3f noise: %.3f' % (
i + 1, training_iter, loss.item(),
model.covar_module.base_kernel.lengthscale.item(),
model.likelihood.noise.item()
))
optimizer.step()
print(time.time() - start_time)
# Get into evaluation (predictive posterior) mode
model.eval()
likelihood.eval()
# Test points are regularly spaced along [0,1]
# Make predictions by feeding model through likelihood
with torch.no_grad(), gpytorch.settings.fast_pred_var():
observed_pred = likelihood(model(test_x))
Explanation: Using KeOps with a GPyTorch Model
Using KeOps with one of our pre built kernels is as straightforward as swapping the kernel out. For example, in the cell below, we copy the simple GP from our basic tutorial notebook, and swap out gpytorch.kernels.MaternKernel for gpytorch.kernels.keops.MaternKernel.
End of explanation
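To make the swap explicit, here is a minimal sketch (illustrative only; covar_standard and covar_keops are names introduced just for this comparison) of the one line that differs between a standard model and the KeOps-backed model defined above:
# Standard GPyTorch kernel: materialises the full kernel matrix in PyTorch.
covar_standard = gpytorch.kernels.ScaleKernel(gpytorch.kernels.MaternKernel(nu=2.5))
# KeOps-backed kernel: the same Matern-2.5 covariance, with kernel matmuls handled lazily by KeOps.
covar_keops = gpytorch.kernels.ScaleKernel(gpytorch.kernels.keops.MaternKernel(nu=2.5))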
torch.sqrt(torch.mean(torch.pow(observed_pred.mean - test_y, 2)))
Explanation: Compute RMSE
End of explanation |
1,252 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step3: Computing for Mathematics - 2020/2021 individual coursework
Important Do not delete the cells containing
Step7: b. $1/2$
Available marks
Step11: c. $3/4$
Available marks
Step15: d. $1$
Available marks
Step16: Question 2
(Hint
Step17: b. Create a variable direct_number_of_permutations that gives the number of permutations of pets of size 4 by direct computation.
Available marks
Step20: Question 3
(Hint
Step21: b. Create a variable equation that has value the equation $f'(0)=0$.
Available marks
Step24: c. Using the solution to that equation, output the value of $\int_{0}^{5\pi}f(x)dx$.
Available marks
Step27: Question 4
(Hint
Step29: b. Given that $c=2$ output $\frac{df}{dx}$ where
Step32: c. Given that $c=2$ output $\int f(x)dx$
Available marks | Python Code:
import random
def sample_experiment():
### BEGIN SOLUTION
"""Returns true if a random number is less than 0"""
return random.random() < 0
number_of_experiments = 1000
sum(
sample_experiment() for repetition in range(number_of_experiments)
) / number_of_experiments
### END SOLUTION
q1_a_answer = _
q1_a_expected_answer = 0
feedback_text = "Your output is not the expected answer"
assert q1_a_answer == q1_a_expected_answer, feedback_text
feedback_text = """You did not include a docstring. This is important to help document your code.
It is done using triple quotation marks. For example:
def get_remainder(m, n):
    \"\"\"
    This function returns the remainder of m when dividing by n
    \"\"\"
    …
Using that it's possible to access the docstring,
one way to do this is to type: `get_remainder?`
(which only works in Jupyter) or help(get_remainder).
We can also comment code using `#` but this is completely
ignored by Python so cannot be accessed in the same way."""
try:
assert sample_experiment.__doc__ is not None, feedback_text
except NameError:
assert False, "You did not create a function called `sample_experiment`"
Explanation: Computing for Mathematics - 2020/2021 individual coursework
Important Do not delete the cells containing:
```
BEGIN SOLUTION
END SOLUTION
```
write your solution attempts in those cells.
To submit this notebook:
Change the name of the notebook from main to: <student_number>. For example, if your student number is c1234567 then change the name of the notebook to c1234567.
Write all your solution attempts in the correct locations;
Do not delete any code that is already in the cells;
Save the notebook (File>Save As);
Follow the instructions given in class/email to submit.
Question 1
(Hint: This question is similar to the first exercise of the Probability chapter of Python for mathematics.)
For each of the following, write a function sample_experiment, and repeatedly use it to simulate the probability of an event occurring with the following chances.
For each chance output the simulated probability.
a. $0$
Available marks: 2
End of explanation
def sample_experiment():
### BEGIN SOLUTION
"""Returns true if a random number is less than 1 / 2"""
return random.random() < 1 / 2
number_of_experiments = 1000
sum(
sample_experiment() for repetition in range(number_of_experiments)
) / number_of_experiments
### END SOLUTION
import numpy as np
q1_b_answer = _
q1_b_expected_answer = 1 / 2
feedback_text = "Your output is not within an acceptable range of the expected answer"
assert q1_b_expected_answer * .75 <= q1_b_answer <= q1_b_expected_answer * 1.25, feedback_text
feedback_text = """You did not include a docstring. This is important to help document your code.
It is done using triple quotation marks. For example:
def get_remainder(m, n):
    \"\"\"
    This function returns the remainder of m when dividing by n
    \"\"\"
    …
Using that it's possible to access the docstring,
one way to do this is to type: `get_remainder?`
(which only works in Jupyter) or help(get_remainder).
We can also comment code using `#` but this is completely
ignored by Python so cannot be accessed in the same way."""
try:
assert sample_experiment.__doc__ is not None, feedback_text
except NameError:
assert False, "You did not create a function called `sample_experiment`"
Explanation: b. $1/2$
Available marks: 2
End of explanation
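A brief aside (not part of the marked coursework): the estimate tightens around 1/2 as the number of repetitions grows, which is why the checks above allow a ±25% band. A quick sketch, reusing the sample_experiment just defined:
for repetitions in (100, 10_000, 1_000_000):
    estimate = sum(sample_experiment() for _ in range(repetitions)) / repetitions
    print(repetitions, estimate)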
def sample_experiment():
### BEGIN SOLUTION
"""Returns true if a random number is less than 3 / 4"""
return random.random() < 3 / 4
number_of_experiments = 1000
sum(
sample_experiment() for repetition in range(number_of_experiments)
) / number_of_experiments
### END SOLUTION
q1_c_answer = _
q1_c_expected_answer = 3 / 4
feedback_text = "Your output is not within an acceptable range of the expected answer"
assert q1_c_expected_answer * .75 <= q1_c_answer <= q1_c_expected_answer * 1.25, feedback_text
feedback_text = """You did not include a docstring. This is important to help document your code.
It is done using triple quotation marks. For example:
def get_remainder(m, n):
    \"\"\"
    This function returns the remainder of m when dividing by n
    \"\"\"
    …
Using that it's possible to access the docstring,
one way to do this is to type: `get_remainder?`
(which only works in Jupyter) or help(get_remainder).
We can also comment code using `#` but this is completely
ignored by Python so cannot be accessed in the same way."""
try:
assert sample_experiment.__doc__ is not None, feedback_text
except NameError:
assert False, "You did not create a function called `sample_experiment`"
Explanation: c. $3/4$
Available marks: 2
End of explanation
def sample_experiment():
### BEGIN SOLUTION
"""Returns true if a random number is less than 1"""
return random.random() < 1
number_of_experiments = 1000
sum(
sample_experiment() for repetition in range(number_of_experiments)
) / number_of_experiments
### END SOLUTION
q1_d_answer = _
q1_d_expected_answer = 1
feedback_text = "Your output is not the expected answer"
assert q1_d_answer == q1_d_expected_answer, feedback_text
feedback_text = """You did not include a docstring. This is important to help document your code.
It is done using triple quotation marks. For example:
def get_remainder(m, n):
    \"\"\"
    This function returns the remainder of m when dividing by n
    \"\"\"
    …
Using that it's possible to access the docstring,
one way to do this is to type: `get_remainder?`
(which only works in Jupyter) or help(get_remainder).
We can also comment code using `#` but this is completely
ignored by Python so cannot be accessed in the same way."""
try:
assert sample_experiment.__doc__ is not None, feedback_text
except NameError:
assert False, "You did not create a function called `sample_experiment`"
Explanation: d. $1$
Available marks: 2
End of explanation
import itertools
pets = ("cat", "dog", "fish", "lizard", "hamster")
### BEGIN SOLUTION
permutations = tuple(itertools.permutations(pets, 4))
number_of_permutations = len(permutations)
### END SOLUTION
expected_number_of_permutations = 120
feedback = "The expected answer is 120"
try:
assert expected_number_of_permutations == number_of_permutations, feedback
except NameError:
assert False, "You did not create a variable called `number_of_permutations`"
Explanation: Question 2
(Hint: This question is similar to the second exercise of the Combinatorics chapter of Python for mathematics.)
a. Create a variable number_of_permutations that gives the number of permutations of pets = ("cat", "dog", "fish", "lizard", "hamster") of size 4. Do this by generating and counting them.
Available marks: 2
End of explanation
import scipy.special
### BEGIN SOLUTION
direct_number_of_permutations = scipy.special.perm(5, 4)
### END SOLUTION
expected_number_of_permutations = 120
feedback = "The expected answer is 120"
try:
assert expected_number_of_permutations == direct_number_of_permutations, feedback
except NameError:
assert False, "You did not create a variable called `direct_number_of_permutations`"
Explanation: b. Create a variable direct_number_of_permutations that gives the number of permutations of pets of size 4 by direct computation.
Available marks: 1
End of explanation
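As a quick cross-check (an aside, not part of the coursework), the same count follows from the permutation formula $5!/(5-4)!$:
import math
assert math.factorial(5) // math.factorial(5 - 4) == 120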
import sympy as sym
x = sym.Symbol("x")
c1 = sym.Symbol("c1")
### BEGIN SOLUTION
second_derivative = 4 * x + sym.cos(x)
derivative = sym.integrate(second_derivative, x) + c1
### END SOLUTION
feedback_text = """`derivative` is not a symbolic expression.
You are expected to use sympy for this question."""
try:
assert derivative.expand(), feedback_text
except NameError:
assert False, "You did not create a variable called `derivative`"
import sympy as sym
x = sym.Symbol("x")
c1 = sym.Symbol("c1")
expected_answer = c1 + 2 * x ** 2 + sym.sin(x)
feedback_text = f"""Your answer is not correct.
The expected answer is {expected_answer}."""
assert sym.expand(derivative - expected_answer) == 0, feedback_text
Explanation: Question 3
(Hint: This question uses concepts from the Algebra and Calculus chapters of Python for mathematics.)
Consider the second derivative $f''(x)=4 x + \cos(x)$.
a. Create a variable derivative which has value $f'(x)$ (use the variables x and c1 if necessary):
Available marks: 3
End of explanation
### BEGIN SOLUTION
equation = sym.Eq(derivative.subs({x:0}), 0)
### END SOLUTION
expected_lhs = c1
feedback = f"The expected left hand side is {expected_lhs}"
try:
assert sym.expand(equation.lhs - expected_lhs) == 0, feedback
except AttributeError:
assert False, "You did not create a symbolic equation"
expected_rhs = 0
feedback = f"The expected right hand side is {expected_rhs}. Note that the exact value of pi can be used from the sympy library."
try:
assert sym.expand(equation.rhs - expected_rhs) == 0, feedback
except AttributeError:
assert "You did not create a symbolic equation", False
Explanation: b. Create a variable equation that has value the equation $f'(0)=0$.
Available marks: 4
End of explanation
### BEGIN SOLUTION
particular_derivative = derivative.subs({c1: 0})
function = sym.integrate(particular_derivative) + c1
sym.integrate(function, (x, 0, 5 * sym.pi))
### END SOLUTION
q3_c_answer = _
feedback_text = """Your output is not a symbolic expression.
You are expected to use sympy for this question."""
assert q3_c_answer.expand(), feedback_text
expected_answer = 625 * sym.pi ** 4 / 6
delta = sym.expand((q3_c_answer - expected_answer) / (5 * sym.pi))
feedback_text = f"""The expected answer is: {expected_answer} (up to a constant of integration).
This is done by substituting the value of c1=0 and then integrating.
If we let your answer be Y, so that Y={q3_c_answer}
Then
(Y - {expected_answer}) / (5 pi) should be a constant (with respect to x) but you have:
(Y - {expected_answer}) / (5 pi) = {delta}"""
assert sym.diff(delta, x) == 0, feedback_text
Explanation: c. Using the solution to that equation, output the value of $\int_{0}^{5\pi}f(x)dx$.
Available marks: 4
End of explanation
c = sym.Symbol("c")
### BEGIN SOLUTION
def get_sequence_a(n):
"""Return the sequence a."""
if n == 1:
return c
return 3 * get_sequence_a(n - 1) + c / n
sum(get_sequence_a(n) for n in range(1, 16))
### END SOLUTION
q4_a_answer = _
feedback_text = """Your output is not a symbolic expression.
You are expected to use sympy for this question."""
assert q4_a_answer.expand(), feedback_text
expected_answer = 2748995546 * c / 315
feedback_text = f"The expected answer is: {expected_answer}."
assert sym.expand(q4_a_answer - expected_answer) == 0, feedback_text
Explanation: Question 4
(Hint: This question uses concepts from the Calculus and Sequences chapters of Python for mathematics.)
Consider this recursive definition for the sequence $a_n$:
$$
a_n = \begin{cases}
c & \text{ if n = 1}\
3a_{n - 1} + \frac{c}{n}
\end{cases}
$$
a. Output the sum of the 15 terms.
Available marks: 5
End of explanation
### BEGIN SOLUTION
f = (get_sequence_a(n=1) + get_sequence_a(n=2) * x + get_sequence_a(n=3) * x ** 2 + get_sequence_a(n=4) * x ** 3).subs({c: 2})
sym.diff(f, x)
### END SOLUTION
q4_b_answer = _
feedback_text = """Your output is not a symbolic expression.
You are expected to use sympy for this question."""
assert q4_b_answer.expand(), feedback_text
import sympy as sym
expected_answer = sym.S(393) * x ** 2 / 2 + sym.S(130) * x / 3 + sym.S(7)
feedback_text = f"The expected answer is: {expected_answer}."
assert sym.expand(q4_b_answer - expected_answer) == 0, feedback_text
Explanation: b. Given that $c=2$ output $\frac{df}{dx}$ where:
$$
f(x) = a_1 + a_2 x + a_3 x ^ 2 + a_4 x ^ 3
$$
Available marks: 4
End of explanation
### BEGIN SOLUTION
sym.integrate(f, x)
### END SOLUTION
q4_c_answer = _
feedback_text = """Your output is not a symbolic expression.
You are expected to use sympy for this question."""
assert q4_c_answer.expand(), feedback_text
import sympy as sym
def _get_sequence_a(n):
"""Return the sequence a."""
if n == 1:
return c
return 3 * _get_sequence_a(n - 1) + c / n
expected_derivative = (_get_sequence_a(n=1) + _get_sequence_a(n=2) * x + _get_sequence_a(n=3) * x ** 2 + _get_sequence_a(n=4) * x ** 3).subs({c: 2})
feedback_text = f"The expected answer is: {sym.integrate(expected_derivative, x)}."
assert sym.expand(sym.diff(q4_c_answer, x) - expected_derivative) == 0, feedback_text
Explanation: c. Given that $c=2$ output $\int f(x)dx$
Available marks: 4
End of explanation |
1,253 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convert data to NILMTK format and load into NILMTK
Step1: NILMTK uses an open file format based on the HDF5 binary file format to store both the power data and the metadata. The very first step when using NILMTK is to convert your dataset to the NILMTK HDF5 file format.
NOTE
Step2: Now redd.h5 holds all the REDD power data and all the relevant metadata. In NILMTK v0.2 this conversion only uses a tiny fraction of the system memory (unlike NILMTK v0.1 which would guzzle ~1 GByte of RAM just to do the dataset conversion!).
Of course, if you want to run convert_redd on your own machine then you first need to download REDD, decompress it and pass the relevant source_directory and output_filename to convert_redd().
Other datasets
At the time of writing, NILMTK contains converters for 8 datasets.
Contributing a new converter is easy and highly encouraged! Learn how to write a dataset converter.
Open HDF5 in NILMTK
Step3: At this point, all the metadata has been loaded into memory but none of the power data has been loaded. This is our first encounter with a fundamental difference between NILMTK v0.1 and v0.2+
Step4: We also have all the buildings available as an OrderedDict (indexed from 1 not 0 because every dataset we are aware of starts numbering buildings from 1 not 0)
Step5: Each building has a little bit of metadata associated with it (there isn't much building-specific metadata in REDD)
Step6: Each building has an elec attribute which is a MeterGroup object (much more about those soon!) | Python Code:
!! pip install -U Pillow==6.1.0
Explanation: Convert data to NILMTK format and load into NILMTK
End of explanation
from nilmtk.dataset_converters import convert_redd
convert_redd('../datasets/REDD/low_freq', '../datasets/REDD/low_freq.h5')
Explanation: NILMTK uses an open file format based on the HDF5 binary file format to store both the power data and the metadata. The very first step when using NILMTK is to convert your dataset to the NILMTK HDF5 file format.
NOTE: If you are on Windows, remember to escape the back-slashes, use forward slashes, or use raw-strings when passing paths in Python, e.g. one of the following would work:
python
convert_redd('c:\\data\\REDD\\low_freq', 'c:\\data\\redd.h5')
convert_redd('c:/data/REDD/low_freq', 'c:/data/redd.h5')
convert_redd(r'c:\data\REDD\low_freq', r'c:\data\redd.h5')
REDD
Converting the REDD dataset is easy:
End of explanation
from nilmtk import DataSet
from nilmtk.utils import print_dict
redd = DataSet('../datasets/REDD/low_freq.h5')
Explanation: Now redd.h5 holds all the REDD power data and all the relevant metadata. In NILMTK v0.2 this conversion only uses a tiny fraction of the system memory (unlike NILMTK v0.1 which would guzzle ~1 GByte of RAM just to do the dataset conversion!).
Of course, if you want to run convert_redd on your own machine then you first need to download REDD, decompress it and pass the relevant source_directory and output_filename to convert_redd().
Other datasets
At the time of writing, NILMTK contains converters for 8 datasets.
Contributing a new converter is easy and highly encouraged! Learn how to write a dataset converter.
Open HDF5 in NILMTK
End of explanation
print_dict(redd.metadata)
Explanation: At this point, all the metadata has been loaded into memory but none of the power data has been loaded. This is our first encounter with a fundamental difference between NILMTK v0.1 and v0.2+: NILMTK v0.1 used to eagerly load the entire dataset into memory before you did any actual work on the data. NILMTK v0.2+ is lazy! It won't load data into memory until you tell it what you want to do with the data (and, even then, large datasets will be loaded in chunks that fit into memory). This allows NILMTK v0.2+ to work with arbitrarily large datasets (datasets too large to fit into memory) without choking your system.
Exploring the DataSet object
Let's have a quick poke around to see what's in this redd object...
There is a lot of metadata associated with the dataset, including information about the two models of meter device the authors used to record REDD:
End of explanation
print_dict(redd.buildings)
Explanation: We also have all the buildings available as an OrderedDict (indexed from 1 not 0 because every dataset we are aware of starts numbering buildings from 1 not 0)
End of explanation
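Because redd.buildings behaves like an ordinary dictionary, you can also loop over it; a small illustrative sketch:
for building_id, building in redd.buildings.items():
    print(building_id, building)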
print_dict(redd.buildings[1].metadata)
Explanation: Each building has a little bit of metadata associated with it (there isn't much building-specific metadata in REDD):
End of explanation
redd.buildings[1].elec
Explanation: Each building has an elec attribute which is a MeterGroup object (much more about those soon!)
End of explanation |
1,254 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Space Time Scan
Using SatScan
Start SaTScan and make a new session
Under the "Input" tab
Step1: Save to SatScan format
Embarrassingly, we now seem to have surpassed SaTScan in terms of speed and memory usage, and so our code can analyse somewhat larger datasets (we do not compute p-values, of course, but we are still faster...)
Step2: Bin times
Step3: Grid first
Step4: Run the full analysis
Step5: Grid and bin first
Step6: Zero radius clusters
As you will see above, some of the clusters returned have zero radius (which makes sense, as having assigned all events to the middle of the grid cell they fall into, there will be clusters just consisting of the events in one grid cell).
- We cannot see these in the plot above
- But they do contribute to the gridded "risk" profile, hence the mismatch between the left and right plots.
Instead, it is possible to use the library to replace each cluster by the cluster with the same centre but with a radius enlarged to the maximum extent possible so that it contains no more events.
- This is not quite the same as still asking for the clusters not to overlap, as you can see.
- It leads to a different risk profile. It is not clear to me if this is "better" or not...
Step7: Optimisation work | Python Code:
%matplotlib inline
from common import *
#datadir = os.path.join("//media", "disk", "Data")
datadir = os.path.join("..", "..", "..", "..", "..", "Data")
south_side, points = load_data(datadir)
grid = grid_for_south_side()
import open_cp.stscan as stscan
import open_cp.stscan2 as stscan2
trainer = stscan.STSTrainer()
trainer.region = grid.region()
trainer.data = points
scanner, _ = trainer.to_scanner()
scanner.coords.shape, scanner.timestamps.shape
# Check how we convert the data
last_time = max(trainer.data.timestamps)
x = (last_time - trainer.data.timestamps) / np.timedelta64(1,"ms")
indexes = np.argsort(x)
np.testing.assert_allclose(x[indexes], scanner.timestamps)
np.testing.assert_allclose(trainer.data.coords[:,indexes], scanner.coords)
Explanation: Space Time Scan
Using SatScan
Start SaTScan and make a new session
Under the "Input" tab:
Set the "Case File" to "chicago.cas"
Set the "Coordinates Files" to "chicago.geo"
Set "Coordinates" to "Cartesian"
Set "Time Precision" to Day
Set the "Study Period" from 2011-03-01 to 2011-09-27 (or whatever)
Under the "Analysis" tab:
Select "Propsective Analysis" -> "Space-Time"
Select "Probability Model" -> "Space-Time Permutation"
Select "Time Aggregation" -> "1 Day"
Under the "Output" tab:
Select the "Main Results File" to whatever
Optionally change the spatial and temporal window:
Under "Analysis", click "Advanced":
Under "Spatial Window", select "is a circle with a ..."
Under "Temporal Window", select "Maximum Temporal Cluster Size" is ... days
Using our library code
End of explanation
ts = scanner.timestamps / 1000 / 60
ts = ts[:100]
c = scanner.coords[:,:100]
stscan2.AbstractSTScan.write_to_satscan("temp", max(ts), c, ts)
max(max(ts) - ts)
scanner.timestamps = scanner.timestamps[:100]
scanner.coords = scanner.coords[:,:100]
list(scanner.find_all_clusters())
Explanation: Save to SatScan format
Embarrassingly, we now seem to have surpassed SaTScan in terms of speed and memory usage, and so our code can analyse somewhat larger datasets (we do not compute p-values, of course, but we are still faster...)
End of explanation
trainer1 = trainer.bin_timestamps(np.datetime64("2017-01-01"), np.timedelta64(1, "D"))
trainer1.data.number_data_points, trainer1.data.time_range
trainer1.to_satscan("test")
result = trainer1.predict()
result.statistics[:5]
Explanation: Bin times
End of explanation
trainer1 = trainer.grid_coords(grid.region(), grid.xsize)
trainer1 = trainer1.bin_timestamps(np.datetime64("2017-01-01"), np.timedelta64(1, "D"))
#trainer1.data = trainer1.data[trainer1.data.timestamps < np.datetime64("2011-04-01")]
trainer1.data.number_data_points, trainer1.data.time_range
trainer1.to_satscan("test")
result = trainer1.predict()
result.statistics[:5]
Explanation: Grid first
End of explanation
result = trainer.predict()
result.clusters[0]
pred = result.grid_prediction(grid.xsize)
pred.mask_with(grid)
import matplotlib.patches
def plot_clusters(ax, coords, clusters):
xmax, xmin = np.max(scanner.coords[0]), np.min(scanner.coords[0])
xd = (xmax - xmin) / 100 * 5
ax.set(xlim=[xmin-xd, xmax+xd])
ymax, ymin = np.max(scanner.coords[1]), np.min(scanner.coords[1])
yd = (ymax - ymin) / 100 * 5
ax.set(ylim=[ymin-yd, ymax+yd])
ax.set_aspect(1)
for c in clusters:
cir = matplotlib.patches.Circle(c.centre, c.radius, alpha=0.5)
ax.add_patch(cir)
ax.scatter(*coords, color="black", marker="+", linewidth=1)
def plot_grid_pred(ax, pred):
cmap = ax.pcolormesh(*pred.mesh_data(), pred.intensity_matrix, cmap=yellow_to_red)
fig.colorbar(cmap, ax=ax)
ax.set_aspect(1)
ax.add_patch(descartes.PolygonPatch(south_side, fc="none", ec="Black"))
fig, axes = plt.subplots(ncols=2, figsize=(17,8))
plot_clusters(axes[0], trainer.data.coords, result.clusters)
plot_grid_pred(axes[1], pred)
Explanation: Run the full analysis
End of explanation
trainer1 = trainer.grid_coords(grid.region(), grid.xsize)
trainer1 = trainer1.bin_timestamps(np.datetime64("2017-01-01"), np.timedelta64(1, "D"))
trainer1.region = grid.region()
result1 = trainer1.predict()
result1.clusters[:10]
pred1 = result1.grid_prediction(grid.xsize)
pred1.mask_with(grid)
fig, axes = plt.subplots(ncols=2, figsize=(17,8))
plot_clusters(axes[0], trainer1.data.coords, result1.clusters)
plot_grid_pred(axes[1], pred1)
Explanation: Grid and bin first
End of explanation
pred2 = result1.grid_prediction(grid.xsize, use_maximal_clusters=True)
pred2.mask_with(grid)
fig, axes = plt.subplots(ncols=2, figsize=(17,8))
plot_clusters(axes[0], trainer1.data.coords, result1.max_clusters)
plot_grid_pred(axes[1], pred2)
Explanation: Zero radius clusters
As you will see above, some of the clusters returned have zero radius (which makes sense, as having assigned all events to the middle of the grid cell they fall into, there will be clusters just consisting of the events in one grid cell).
- We cannot see these in the plot above
- But they do contribute to the gridded "risk" profile, hence the mismatch between the left and right plots.
Instead, it is possible to use the library to replace each cluster by the cluster with the same centre but with a radius enlarged to the maximum extent possible so that it contains no more events.
- This is not quite the same as still asking for the clusters not to overlap, as you can see.
- It leads to a different risk profile. It is not clear to me if this is "better" or not...
End of explanation
time_masks, time_counts, times = scanner.make_time_ranges()
N = scanner.timestamps.shape[0]
centre = scanner.coords.T[0]
space_masks, space_counts, dists = scanner.find_discs(centre)
actual = scanner._calc_actual(space_masks, time_masks, time_counts)
expected = space_counts[:,None] * time_counts[None,:] / N
_mask = (actual > 1) & (actual > expected)
stats = scanner._ma_statistic(np.ma.array(actual, mask=~_mask),
np.ma.array(expected, mask=~_mask), N)
_mask1 = np.any(_mask, axis=1)
if not np.any(_mask1):
raise Exception()
m = np.ma.argmax(stats, axis=1)[_mask1]
stats1 = stats[_mask1,:]
stats2 = stats1[range(stats1.shape[0]),m].data
used_dists = dists[_mask1]
used_times = times[m]
%timeit( scanner.find_discs(centre) )
%timeit( np.sum(space_masks[:,:,None] & time_masks[:,None,:], axis=0) )
%timeit(scanner._calc_actual(space_masks, time_masks, time_counts))
np.testing.assert_allclose(scanner._calc_actual(space_masks, time_masks, time_counts),
np.sum(space_masks[:,:,None] & time_masks[:,None,:], axis=0))
%timeit(space_counts[:,None] * time_counts[None,:] / N)
%timeit((actual > 1) & (actual > expected))
%timeit(scanner._ma_statistic(np.ma.array(actual, mask=~_mask), np.ma.array(expected, mask=~_mask), N))
log_lookup = np.log(np.array([1] + list(range(1,N+1))))
log_lookup2 = np.log(np.array([1] + list(range(1,N*N+1))))
sh = (space_counts.shape[0], time_counts.shape[0])
s = np.ma.array(np.broadcast_to(space_counts[:,None], sh), mask=~_mask)
t = np.ma.array(np.broadcast_to(time_counts[None,:], sh), mask=~_mask)
a = np.ma.array(actual, mask=~_mask)
e = np.ma.array(s*t, mask=~_mask) / N
x1 = a * np.ma.log(a/e)
Nl = np.log(N)
aa = a.astype(np.int)
y1 = a * (Nl + log_lookup[aa] - log_lookup[s] - log_lookup[t])
assert np.ma.max(np.ma.abs(x1-y1)) < 1e-10
x2 = (N-a) * (np.ma.log(N-a) - np.ma.log(N-e))
y2 = (N-a) * (Nl + log_lookup[N-aa] - np.ma.log(N*N-s*t))
assert np.ma.max(np.ma.abs(x2-y2)) < 1e-10
aa = actual.astype(np.int)
def f():
sl = log_lookup[space_counts]
tl = log_lookup[time_counts]
st = N*N - space_counts[:,None] * time_counts[None,:]
Nl = np.log(N)
y = aa * (Nl + log_lookup[aa] - sl[:,None] - tl[None,:])
yy = (N-aa) * (Nl + log_lookup[N-aa] - log_lookup2[st])
return np.ma.array(y + yy, mask=~_mask)
stats = scanner._ma_statistic(np.ma.array(actual, mask=~_mask),
np.ma.array(expected, mask=~_mask), N)
np.ma.max(np.ma.abs(stats - f()))
%timeit(f())
%timeit(np.any(_mask, axis=1))
%timeit(np.ma.argmax(stats, axis=1)[_mask1])
%timeit(stats[_mask1,:])
%timeit(stats1[range(stats1.shape[0]),m].data)
%timeit(dists[_mask1])
%timeit(times[m])
def f():
x = scanner.faster_score_all()
return next(x)
def f1():
x = scanner.faster_score_all_new()
return next(x)
a = f()
a1 = f1()
for i in range(4):
np.testing.assert_allclose(a[i], a1[i])
# Compare against the old slow method
def find_chunk(ar, start_index):
x = (ar == ar[start_index])
end_index = start_index
while end_index < len(ar) and x[end_index]:
end_index += 1
return end_index
x = scanner.faster_score_all_old()
a2 = next(x)
start_index = 0
for index in range(len(a1[1])):
end_index = find_chunk(a2[1], start_index)
i = np.argmax(a2[3][start_index:end_index])
for j in range(1,4):
assert abs(a2[j][start_index+i] - a1[j][index]) < 1e-10
start_index = end_index
%timeit(f())
%timeit(f1())
import datetime
x = scanner.faster_score_all()
for _ in range(20):
now = datetime.datetime.now()
next(x)
print(datetime.datetime.now() - now)
next(scanner.find_all_clusters())
Explanation: Optimisation work
End of explanation |
1,255 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="images/charizard.png" alt="Expert" width="200">
Expert level
Welcome to the expert level!
For this level, I'm assuming you are somewhat familiar with the Python programming language, for example by completing the 'adept' level, or having done some other projects in Python.
Alternatively, if you have enough experience in other programming languages, you can follow along.
If some aspect of the Python programming language is unclear, I found Jake VanderPlas' "A whirlwind tour of Python" a great reference to quickly look things up.
During the next half hour, we are going to construct a basic pipeline that begins with some raw MEG data and performs a minimum-norm estimate (MNE) of the cortical sources.
The experiment
I have prepared a bit of MEG data for you. It was recorded while a brave volunteer was in the scanner, listening to auditory beeps and looking at visual checkerboards. The volunteer was staring at a "fixation cross" in the center of a screen in front of him/her. From time to time, beeps sounded from either the left or the right side of his/her head. At other times, a checkerboard would be displayed either to the left or right side of to the cross. The volunteer was instructed to keep his/her eyes fixed on the cross and not directly look at the checkerboards. This way, the checkerboard was presented either to the left or right visual field of the volunteer.
<center>
<img src="images/sound.png" width="100" style="display
Step1: Overview of MNE-Python and how to view the documentation
The MNE-Python software module is subdivided in several sub-modules, all housing classes and functions related to different aspects of data analysis. Here are the sub-modules that are relevant for this exercise
Step2: If the code was correct, the cell below will visualize the raw data, using an interactive figure.
Click in the scrollbars or use the arrow keys to scroll through the data.
Step3: <div style="border
Step4: The system generated 6 types of events. We are interested in events with ids 1 to 4, which correspond to the presentation of one of the four different types of stimuli. Lets give them names. Here is a dictionary mapping string names to event ids
Step5: Creating epochs
Now that we have the information on what stimulus was presented at what time, we can extract "epochs".
Epochs are little snippets of signal surrounding an event.
These epochs can then be averaged to produce the "evoked" signal.
To cut up the continuous data into epochs, create an mne.Epochs object. Pass the raw data, the events array and the event_id dictionary as parameters.
By default, epochs will be cut starting from 0.2 seconds before the onset of the event until 0.5 seconds after the onset.
These defaults are fine for the data we're currently analyzing.
<div style="border
Step6: The figure you just created is interactive. Try clicking in the scrollbars and using the arrow keys to explore the data. Also try pressing the b key to switch to "butterly" mode. In this mode, all the channels are plotted on top of each other. This is a great mode for quickly checking data quality
Step7: The data/mri folders contains the MRI data for all volunteers that participated in the experiment.
In our case, there is only one volunteer, named sample.
All the MNE-Python functions that need access to the head model take a parameter subject, that needs to be set to 'sample' in our case.
With access to both the FreeSurfer folder and the subject name, the function knows where to find the head model data it needs.
Another tedious task is to align the coordinate frames of the MRI and the MEG scanners, which requires some interactive tools that unfortunately don't work in the browser.
The result of this alignment process is a coordinate transformation object, which I have also prepared for you in advance.
The line of code below will loaded the transformation into memory
Step8: To visualize the head model and check if the coordinate systems have been properly aligned, you can use the mne.viz.plot_alignment function.
Take a look at its documentation and you will see you can pass a whole list of different objects to this function.
The purpose of the function is to visualize everything you give it in the same coordinate space.
If everything lines up properly, we're good to go!
The line of code below will call the function.
Take note of the subject parameter.
You will need to use this parameter for any function calls that need access to the head model.
Step9: The above figure is interactive. Drag on the figure to rotate the 3D model and check that the brain and head, which are generated from the MRI images, are aligned nicely with the MEG helmet and the EEG sensors, which locations are taken from the epochs.info dictionary.
Creating a source space
We are going to estimate the cortical orgins of the signals by creating a fine grid of points along the cortex.
More precisely, we're going to put these points on the boundary between the white and gray matter of the brain.
Then, we will compute for each grid point, a spatial filter that attempts to isolate any signals possibly originating from that point.
Taken together, the estimated activity at all the grid points give a complete picture of the estimated signals originating from all over the cortex.
To define the grid points, or the "source space" as MNE-Python calls it, we can use the mne.setup_source_space function.
This function needs the subject parameter that you saw before used in the mne.viz.plot_alignment function.
There is another, optional, parameter you want to set to increase the computation speed.
You want to set add_dist=False to disable the very lengthy point-to-point distance computation that we don't need for this exercise.
You can leave the rest of the parameters at their default values.
The mne.setup_source_space function produces an object of the type mne.SourceSpaces (plural, because it creates a separate source space for each hemisphere).
This object has, you guessed it, a plot method to visualize it.
Create a source space, store it in a variable named src and visualize it
Step10: The forward model
With the head model and source space in place, we can compute the forward model (or forward "solution" as MNE-Python calls it).
This is a simulation of the magnetic fields originating from the grid points of the source space, propagating through the various tissues in the head model to reach the MEG sensors.
It is created by the mne.make_forward_solution function.
This function has four required parameters
Step11: If all went well, the following line of code will plot the sensitivity profile
Step12: The inverse model
The forward model we just computed can simulate signals originating from the cortex to the MEG sensors.
We want to go the other way
Step13: If all went well, the following line will plot some information about the covariance matrix you just computed
Step14: Now we can compute the actual inverse model, or inverse "operator" as MNE-Python calls it.
This is done with the mne.minimum_norm.make_inverse_operator function.
It has as required parameters the epochs.info dictionary, the forward model fwd and the covariance cov we just computed.
Store the result in a variable called inv | Python Code:
%matplotlib notebook
# Import the MNE-Python module, which contains all the data analysis routines we need
import mne
print('MNE-Python imported.')
# Configure the graphics engine
from matplotlib import pyplot as plt
plt.rc('figure', max_open_warning=100)
%matplotlib notebook
from mayavi import mlab # Mayavi is used for 3D graphics
mlab.init_notebook('ipy') # This instructs Mayavi to render in the background and send png graphics to the browser
print('From now on, all graphics will send to your browser.')
Explanation: <img src="images/charizard.png" alt="Expert" width="200">
Expert level
Welcome to the expert level!
For this level, I'm assuming you are somewhat familiar with the Python programming language, for example by completing the 'adept' level, or having done some other projects in Python.
Alternatively, if you have enough experience in other programming languages, you can follow along.
If some aspect of the Python programming language is unclear, I found Jake VanderPlas' "A whirlwind tour of Python" a great reference to quickly look things up.
During the next half hour, we are going to construct a basic pipeline that begins with some raw MEG data and performs a minimum-norm estimate (MNE) of the cortical sources.
The experiment
I have prepared a bit of MEG data for you. It was recorded while a brave volunteer was in the scanner, listening to auditory beeps and looking at visual checkerboards. The volunteer was staring at a "fixation cross" in the center of a screen in front of him/her. From time to time, beeps sounded from either the left or the right side of his/her head. At other times, a checkerboard would be displayed either to the left or right side of to the cross. The volunteer was instructed to keep his/her eyes fixed on the cross and not directly look at the checkerboards. This way, the checkerboard was presented either to the left or right visual field of the volunteer.
<center>
<img src="images/sound.png" width="100" style="display: inline; margin-right: 50px">
<img src="images/checkerboard.png" width="100" style="display: inline; margin-right: 50px">
<img src="images/cross.png" width="100" style="display: inline; margin-right: 50px">
<img src="images/checkerboard.png" width="100" style="display: inline; margin-right: 50px">
<img src="images/sound.png" width="100" style="display: inline; transform: scaleX(-1);">
</center>
By analyzing the MEG signal, we should be able to see the activity in the auditory and visual cortices.
Some housekeeping
First order of business is to import the MNE-Python module mne and configure the graphics engine to send all figures to the browser.
Executing the cell below will accomplish this.
End of explanation
# Write your line of code here
raw =
Explanation: Overview of MNE-Python and how to view the documentation
The MNE-Python software module is subdivided in several sub-modules, all housing classes and functions related to different aspects of data analysis. Here are the sub-modules that are relevant for this exercise:
mne - Top level module, containing general purpose classes/functions as well as all sub-modules
mne.io - Functions related to loading data in different formats
mne.viz - Functions related to data visualization
mne.minimum_norm - Classes and functions related to performing minimum norm estimates (MNE)
Documentation of all functions and classes can be found at the Python API reference page.
For quick access, all class/function names used in this notebook are also links to their respective documentation pages.
To use MNE-Python effectively for your own projects, you need to be able to use the documentation.
Therefore, I'm not going to spell out how to call each function, but instead want you to use the documentation to look up this information.
Loading data
Let's dive in by loading some data and looking at the raw signal coming out of the MEG scanner.
The function to load the FIFF files that are produced by the recording software is called mne.io.read_raw_fif. Take a look at its documentation. From the function signature we can see the function has one required argument, the name of the file to load, and several optional arguments that we can leave alone for now.
The file with the raw data is 'data/sample-raw.fif'.
In the cell below, write the line of code to load it using the mne.io.read_raw_fif function and store the result in a variable called raw.
End of explanation
# Lots of MNE-Python objects have a .plot() method, and mne.Raw is no exception
raw.plot(); # Note the semicolon ; at the end, see the text below to find out why
Explanation: If the code was correct, the cell below will visualize the raw data, using an interactive figure.
Click in the scrollbars or use the arrow keys to scroll through the data.
End of explanation
events =
Explanation: <div style="border: 3px solid #aaccff; margin: 10px 100px; padding: 10px">
<b>What's with the semicolon ; ?</b>
The Jupyter notebook you are working in right now displays the result of the last statement in a code cell. The plotting functions return a figure object. Therefore, if the last statement of a cell is a call to a plotting function, the figure is displayed twice: once when the function is called, and once more when the figure object is displayed by the Jupyter notebook. By ending a line with a semicolon `;`, we suppress the result by starting a new empty statement.
</div>
Browsing through the channels, you will notice there are several channel types:
<span style="color: #0000ff">204 MEG gradiometers (102 pairs of two)</span>
<span style="color: #00008b">102 MEG magnetometers</span>
9 STIM channels
1 EOG sensor
All these channels record different information about the volunteer and the environment.
Staring at the raw data is not very helpful.
The brain is constantly doing all kinds of things, so there are a lot of overlapping signals.
For this exercise, we are interested only in the signals that are related to processing the visual and auditory stimuli that were presented to the volunteer.
Let's start by cutting out only the pieces of signal surrounding the times at which a stimulus was presented.
Of course, that means we first have to figure out when stimuli were presented.
For this, we can use the STIM channels.
The STIM channels and events
In the figure you just made, scroll down and take a look at channel STI 014.
On this channel, the computer that is presenting the stimuli was sending timing information to the MEG equipment.
Whenever a stimulus (checkerboard or beep) was presented, the signal at this channel jumps briefly from 0 to either 1, 2, 3 or 4, indicating the type of stimulus.
We can use this channel to create an "events" matrix: a table listing all the times a stimulus was presented, along with the time of the event and the type of stimulus.
The function to do this is called mne.find_events, and creates a 2D NumPy array containing all the events along with when they occurred and a numerical code indicating the type of event.
The event array can be visualized using the mne.viz.plot_events function.
In the cell below, use the mne.find_events function to create an array called events, then visualize it using the mne.viz.plot_events function:
End of explanation
event_id = {
'audio/left': 1,
'audio/right': 2,
'visual/left': 3,
'visual/right': 4
}
Explanation: The system generated 6 types of events. We are interested in events with ids 1 to 4, which correspond to the presentation of one of the four different types of stimuli. Let's give them names. Here is a dictionary mapping string names to event ids:
End of explanation
epochs =
Explanation: Creating epochs
Now that we have the information on what stimulus was presented at what time, we can extract "epochs".
Epochs are little snippets of signal surrounding an event.
These epochs can then be averaged to produce the "evoked" signal.
To cut up the continuous data into epochs, create an mne.Epochs object. Pass the raw data, the events array and the event_id dictionary as parameters.
By default, epochs will be cut starting from 0.2 seconds before the onset of the event until 0.5 seconds after the onset.
These defaults are fine for the data we're currently analyzing.
<div style="border: 3px solid #aaccff; margin: 10px 100px; padding: 10px">
<b>A note on creating objects in Python</b>
In the Python programming language, creating objects is very similar to calling functions.
To create an object of a certain class, you call the class name as a function.
For example, to create an object of type `str`, you call `str()`.
You can pass parameters like usual: `str('my string')`.
</div>
The created mne.Epochs object has a .plot() method (like so many of MNE-Python's objects).
Try calling it to visualize the epochs. Don't forget to put a semicolon ; at the end of your plotting call or the figure will show twice.
Write some code in the cell below to create the epochs and visualize them:
End of explanation
# This code sets an environment variable called SUBJECTS_DIR
import os
os.environ['SUBJECTS_DIR'] = 'data/mri'
Explanation: The figure you just created is interactive. Try clicking in the scrollbars and using the arrow keys to explore the data. Also try pressing the b key to switch to "butterfly" mode. In this mode, all the channels are plotted on top of each other. This is a great mode for quickly checking data quality: can you spot any epochs containing anomalous spikes caused by eye-blinks and movements of the volunteer? Clicking on epochs will turn them red, causing them to be dropped from further analysis after clicking the "end interaction" button that looks like this: <img src="images/end_interaction.png" width="30" style="display: inline-block; vertical-align: middle; margin: 0px;">.
Visualizing the evoked field
Now we have snippets of data that are likely to contain signals related to the processing of the stimuli.
However, there are still so many overlapping signals that it's difficult to see anything.
During the talks earlier today, you have heard about "evoked" data.
By averaging all the epochs corresponding to a stimulus, signals that are consistently present every time a stimulus was presented will remain, while all other signals will more or less cancel out.
The result is referred to as the "evoked" field (i.e. signals that are "evoked" by the stimulus).
Averaging epochs is simple: the mne.Epochs object has a method called average for exactly that purpose.
The method doesn't need any parameters (there are some optional ones, but we can leave them alone for now) and produces a new object of type mne.Evoked.
Of course, this evoked object also has a plot method you can use to plot a basic visualization of it, but also a plot_joint method that provides a much better visualization.
Another useful feature of the mne.Epochs object is that it behaves as a Python dictionary.
To select all the epochs that correspond to a specific event type, you can index the object like so:
python
epochs['visual/left']
(it uses the string descriptions that we defined in the event_id dictionary earlier on.)
Hence, to visualize the evoked field in response to a checkerboard presented in the left visual field of the volunteer, we write:
python
epochs['visual/left'].average().plot_joint()
In the cell below, visualize the evoked fields for all four stimuli:
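One possible way to do this is to loop over the event names defined in the event_id dictionary:
python
for condition in event_id:
    epochs[condition].average().plot_joint()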
From the evoked data, we can see several bursts of activity following the presentation of a stimulus.
You can also see how the different sensors (magnetometers and gradiometers) pick up the same signal.
Looking at the topographic maps (the "heads"), you can already see how the visual stimuli generated activity at the back of the head, where the visual cortex is located, and how auditory stimuli generated activity on the side of the head, where the auditory cortices are located.
But let's turn it up a notch and try to estimate and visualize the cortical origins of the evoked fields.
Loading the MRI data and the concept of "subjects_dir"
As you may remember from the talks earlier today, we can use MRI data to create a detailed model of the head, which allows us to simulate the signals propagating from the cortex to the MEG sensors, which in turn allows us to estimate the cortical origins of the signal.
Processing the raw MRI data into a 3D head model takes about 24 hours and is performed by a program called FreeSurfer.
We don't have that kind of time right now, so I ran the program in advance for you.
FreeSurfer stores its output in a folder, which is available on this system as data/mri.
Inside this folder is a subfolder for each subject.
We only have one subject, called sample.
We are not going to load all of the FreeSurfer data into memory.
Instead, we're going to inform MNE-Python where the MRI folder ('data/mri') is located on the system, so the required data can be loaded as needed.
The cell below accomplishes this:
End of explanation
trans = mne.read_trans('data/sample-trans.fif')
Explanation: The data/mri folder contains the MRI data for all volunteers that participated in the experiment.
In our case, there is only one volunteer, named sample.
All the MNE-Python functions that need access to the head model take a parameter subject, that needs to be set to 'sample' in our case.
With access to both the FreeSurfer folder and the subject name, the function knows where to find the head model data it needs.
Another tedious task is to align the coordinate frames of the MRI and the MEG scanners, which requires some interactive tools that unfortunately don't work in the browser.
The result of this alignment process is a coordinate transformation object, which I have also prepared for you in advance.
The line of code below will load the transformation into memory:
End of explanation
mne.viz.plot_alignment(epochs.info, trans, subject='sample', surfaces=['white', 'outer_skin'])
Explanation: To visualize the head model and check if the coordinate systems have been properly aligned, you can use the mne.viz.plot_alignment function.
Take a look at its documentation and you will see you can pass a whole list of different objects to this function.
The purpose of the function is to visualize everything you give it in the same coordinate space.
If everything lines up properly, we're good to go!
The line of code below will call the function.
Take note of the subject parameter.
You will need to use this parameter for any function calls that need access to the head model.
End of explanation
src = mne.setup_source_space(subject='sample', add_dist=False)  # one possible solution
src.plot();
Explanation: The above figure is interactive. Drag on the figure to rotate the 3D model and check that the brain and head, which are generated from the MRI images, are aligned nicely with the MEG helmet and the EEG sensors, whose locations are taken from the epochs.info dictionary.
Creating a source space
We are going to estimate the cortical origins of the signals by creating a fine grid of points along the cortex.
More precisely, we're going to put these points on the boundary between the white and gray matter of the brain.
Then, for each grid point, we will compute a spatial filter that attempts to isolate any signals possibly originating from that point.
Taken together, the estimated activity at all the grid points gives a complete picture of the signals originating from all over the cortex.
To define the grid points, or the "source space" as MNE-Python calls it, we can use the mne.setup_source_space function.
This function needs the subject parameter that you saw before used in the mne.viz.plot_alignment function.
There is another, optional, parameter you want to set to increase the computation speed.
You want to set add_dist=False to disable the very lengthy point-to-point distance computation that we don't need for this exercise.
You can leave the rest of the parameters at their default values.
The mne.setup_source_space function produces an object of the type mne.SourceSpaces (plural, because it creates a separate source space for each hemisphere).
This object has, you guessed it, a plot method to visualize it.
Create a source space, store it in a variable named src and visualize it:
End of explanation
# one possible solution, using the four required parameters listed below
fwd = mne.make_forward_solution(epochs.info, trans, src, bem='data/mri/sample/bem/bem-sol.fif')
Explanation: The forward model
With the head model and source space in place, we can compute the forward model (or forward "solution" as MNE-Python calls it).
This is a simulation of the magnetic fields originating from the grid points of the source space, propagating through the various tissues in the head model to reach the MEG sensors.
It is created by the mne.make_forward_solution function.
This function has four required parameters:
the epochs.info dictionary containing information about the sensors
the coordinate transformation trans that aligns the coordinates of the MRI with those of the MEG scanner.
the source space src we just created
the location of the physical model of the various tissues in the head. This has been computed by the FreeSurfer program and can be found at: bem='data/mri/sample/bem/bem-sol.fif'.
Go ahead and create the forward model and store it in a variable called fwd:
End of explanation
brain = mne.sensitivity_map(fwd).plot(hemi='both', surface='white', time_label='Sensitivity map')
brain.scale_data_colormap(0, 0.5, 1, False)
brain
Explanation: If all went well, the following line of code will plot the sensitivity profile: how well the MEG sensors pick up signals from each grid point of the source space.
You will see that the closer a grid point is to a MEG sensor, the better we can see signals originating from it.
End of explanation
cov = mne.compute_covariance(epochs, tmax=0, method='shrunk')  # one possible solution
Explanation: The inverse model
The forward model we just computed can simulate signals originating from the cortex to the MEG sensors.
We want to go the other way: tracing signals measured at the MEG sensors back to their cortical origin.
To "invert" the forward model, we will use the minimum-norm estimate (MNE) method.
This method combines the covariance between the sensors with the forward model to construct a suitable inverse model.
We can use the mne.compute_covariance function to compute the covariance between the sensors.
For the MNE method, we need to compute this using only the data just before the stimulus was presented.
If we regard the time at which the stimulus was presented as 0, we want all the negative time points.
It is also a good idea to apply some shrinkage to the covariance matrix, which will make the solution better behaved.
Call the mne.compute_covariance, give it the epochs, set tmax=0 and set method='shrunk'.
Store the result in a variable called cov:
End of explanation
mne.viz.plot_cov(cov, epochs.info);
Explanation: If all went well, the following line will plot some information about the covariance matrix you just computed:
End of explanation
inv = mne.minimum_norm.make_inverse_operator(epochs.info, fwd, cov)  # one possible solution
Explanation: Now we can compute the actual inverse model, or inverse "operator" as MNE-Python calls it.
This is done with the mne.minimum_norm.make_inverse_operator function.
It has as required parameters the epochs.info dictionary, the forward model fwd and the covariance cov we just computed.
Store the result in a variable called inv:
End of explanation |
1,256 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Large Scale Text Classification for Sentiment Analysis
Scalability Issues
The sklearn.feature_extraction.text.CountVectorizer and sklearn.feature_extraction.text.TfidfVectorizer classes suffer from a number of scalability issues that all stem from the internal usage of the vocabulary_ attribute (a Python dictionary) used to map the unicode string feature names to the integer feature indices.
The main scalability issues are
Step1: The vocabulary is used at transform time to build the occurrence matrix
Step2: Let's refit with a slightly larger corpus
Step3: The vocabulary_ is growing (logarithmically) with the size of the training corpus. Note that we could not have built the vocabularies in parallel on the 2 text documents as they share some words and hence would require some kind of shared data structure or synchronization barrier, which is complicated to set up, especially if we want to distribute the processing on a cluster.
With this new vocabulary, the dimensionality of the output space is now larger
Step4: The Sentiment 140 Dataset
To illustrate the scalability issues of the vocabulary-based vectorizers, let's load a more realistic dataset for a classical text classification task
Step5: Those files were downloaded from the research archive of the http
Step6: Let's parse the CSV files and load everything in memory. As loading everything can take up to 2GB, let's limit the collection to 100K tweets of each (positive and negative) out of the total of 1.6M tweets.
Step7: Let's display the first samples
Step8: A polarity of "0" means negative while a polarity of "4" means positive. All the positive tweets are at the end of the file
Step9: Let's split the training CSV file into a smaller training set and a validation set with 100k random tweets each
Step10: Let's open the manually annotated tweet files. The evaluation set also has neutral tweets with a polarity of "2" which we ignore. We can build the final evaluation set with only the positive and negative tweets of the evaluation CSV file
Step11: The Hashing Trick
Remember the bag of word representation using a vocabulary based vectorizer
Step12: This mapping is completely stateless and the dimensionality of the output space is explicitly fixed in advance (here we use a modulo 2 ** 20 which means roughly 1M dimensions). This makes it possible to work around the limitations of the vocabulary based vectorizer both for parallelizability and online / out-of-core learning.
The HashingVectorizer class is an alternative to the TfidfVectorizer class with use_idf=False that internally uses the murmurhash hash function
Step13: It shares the same "preprocessor", "tokenizer" and "analyzer" infrastructure
Step14: We can vectorize our datasets into a scipy sparse matrix exactly as we would have done with the CountVectorizer or TfidfVectorizer, except that we can directly call the transform method
Step15: The dimension of the output is fixed ahead of time to n_features=2 ** 20 by default (nearly 1M features) to minimize the rate of collision on most classification problem while having reasonably sized linear models (1M weights in the coef_ attribute)
Step16: As only the non-zero elements are stored, n_features has little impact on the actual size of the data in memory. We can combine the hashing vectorizer with a Passive-Aggressive linear model in a pipeline
Step17: Let's check that the score on the validation set is reasonably in line with the set of manually annotated tweets
Step18: As the text_train_small dataset is not that big we can still use a vocabulary based vectorizer to check that the hashing collisions are not causing any significant performance drop on the validation set (WARNING this is twice as slow as the hashing vectorizer version, skip this cell if your computer is too slow)
Step19: We get almost the same score but almost twice as slow, with a big, slow-to-(un)pickle data structure in memory
Step21: More info and reference for the original papers on the Hashing Trick in the answers to this http
Step22: We can now use our infinite tweet source to train an online machine learning algorithm using the hashing vectorizer. Note the use of the partial_fit method of the PassiveAggressiveClassifier instance in place of the traditional call to the fit method that needs access to the full training set.
From time to time, we evaluate the current predictive performance of the model on our validation set that is guaranteed to not overlap with the infinite training set source
Step23: We can now plot the collected validation score values, versus the number of samples generated by the infinite source and feed to the model | Python Code:
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer(min_df=1)
vectorizer.fit([
"The cat sat on the mat.",
])
vectorizer.vocabulary_
Explanation: Large Scale Text Classification for Sentiment Analysis
Scalability Issues
The sklearn.feature_extraction.text.CountVectorizer and sklearn.feature_extraction.text.TfidfVectorizer classes suffer from a number of scalability issues that all stem from the internal usage of the vocabulary_ attribute (a Python dictionary) used to map the unicode string feature names to the integer feature indices.
The main scalability issues are:
Memory usage of the text vectorizer: all the string representations of the features are loaded in memory
Parallelization problems for text feature extraction: the vocabulary_ would be a shared state: complex synchronization and overhead
Impossibility to do online or out-of-core / streaming learning: the vocabulary_ needs to be learned from the data: its size cannot be known before making one pass over the full dataset
To better understand the issue let's have a look at how the vocabulary_ attribute works. At fit time the tokens of the corpus are uniquely identified by an integer index and this mapping is stored in the vocabulary:
End of explanation
X = vectorizer.transform([
"The cat sat on the mat.",
"This cat is a nice cat.",
]).toarray()
print(len(vectorizer.vocabulary_))
print(vectorizer.get_feature_names())
print(X)
Explanation: The vocabulary is used at transform time to build the occurrence matrix:
End of explanation
vectorizer = CountVectorizer(min_df=1)
vectorizer.fit([
"The cat sat on the mat.",
"The quick brown fox jumps over the lazy dog.",
])
vectorizer.vocabulary_
Explanation: Let's refit with a slightly larger corpus:
End of explanation
X = vectorizer.transform([
"The cat sat on the mat.",
"This cat is a nice cat.",
]).toarray()
print(len(vectorizer.vocabulary_))
print(vectorizer.get_feature_names())
print(X)
Explanation: The vocabulary_ is growing (logarithmically) with the size of the training corpus. Note that we could not have built the vocabularies in parallel on the 2 text documents as they share some words and hence would require some kind of shared data structure or synchronization barrier, which is complicated to set up, especially if we want to distribute the processing on a cluster.
With this new vocabulary, the dimensionality of the output space is now larger:
End of explanation
import os
sentiment140_folder = os.path.join('datasets', 'sentiment140')
training_csv_file = os.path.join(sentiment140_folder, 'training.1600000.processed.noemoticon.csv')
testing_csv_file = os.path.join(sentiment140_folder, 'testdata.manual.2009.06.14.csv')
Explanation: The Sentiment 140 Dataset
To illustrate the scalability issues of the vocabulary-based vectorizers, let's load a more realistic dataset for a classical text classification task: sentiment analysis on tweets. The goal is to tell apart negative from positive tweets on a variety of topics.
Assuming that the ../fetch_data.py script was run successfully the following files should be available:
End of explanation
!ls -lh datasets/sentiment140/training.1600000.processed.noemoticon.csv
Explanation: Those files were downloaded from the research archive of the http://www.sentiment140.com/ project. The first file was gathered using the twitter streaming API by running stream queries for the positive ":)" and negative ":(" emoticons to collect tweets that are explicitly positive or negative. To make the classification problem non-trivial, the emoticons were stripped out of the text in the CSV files:
End of explanation
FIELDNAMES = ('polarity', 'id', 'date', 'query', 'author', 'text')
def read_csv(csv_file, fieldnames=FIELDNAMES, max_count=None,
n_partitions=1, partition_id=0):
import csv # put the import inside for use in IPython.parallel
def file_opener(csv_file):
try:
open(csv_file, 'r', encoding="latin1").close()
return open(csv_file, 'r', encoding="latin1")
except TypeError:
# Python 2 does not have encoding arg
return open(csv_file, 'rb')
texts = []
targets = []
with file_opener(csv_file) as f:
reader = csv.DictReader(f, fieldnames=fieldnames,
delimiter=',', quotechar='"')
pos_count, neg_count = 0, 0
for i, d in enumerate(reader):
if i % n_partitions != partition_id:
# Skip entry if not in the requested partition
continue
if d['polarity'] == '4':
if max_count and pos_count >= max_count / 2:
continue
pos_count += 1
texts.append(d['text'])
targets.append(1)
elif d['polarity'] == '0':
if max_count and neg_count >= max_count / 2:
continue
neg_count += 1
texts.append(d['text'])
targets.append(-1)
return texts, targets
%time text_train_all, target_train_all = read_csv(training_csv_file, max_count=200000)
len(text_train_all), len(target_train_all)
Explanation: Let's parse the CSV files and load everything in memory. As loading everything can take up to 2GB, let's limit the collection to 100K tweets of each (positive and negative) out of the total of 1.6M tweets.
End of explanation
for text in text_train_all[:3]:
print(text + "\n")
print(target_train_all[:3])
Explanation: Let's display the first samples:
End of explanation
for text in text_train_all[-3:]:
print(text + "\n")
print(target_train_all[-3:])
Explanation: A polarity of "0" means negative while a polarity of "4" means positive. All the positive tweets are at the end of the file:
End of explanation
from sklearn.cross_validation import train_test_split
text_train_small, text_validation, target_train_small, target_validation = train_test_split(
text_train_all, np.array(target_train_all), test_size=.5, random_state=42)
len(text_train_small)
(target_train_small == -1).sum(), (target_train_small == 1).sum()
len(text_validation)
(target_validation == -1).sum(), (target_validation == 1).sum()
Explanation: Let's split the training CSV file into a smaller training set and a validation set with 100k random tweets each:
End of explanation
text_test_all, target_test_all = read_csv(testing_csv_file)
len(text_test_all), len(target_test_all)
Explanation: Let's open the manually annotated tweet files. The evaluation set also has neutral tweets with a polarity of "2" which we ignore. We can build the final evaluation set with only the positive and negative tweets of the evaluation CSV file:
End of explanation
from sklearn.utils.murmurhash import murmurhash3_bytes_u32
# encode for python 3 compatibility
for word in "the cat sat on the mat".encode("utf-8").split():
print("{0} => {1}".format(
word, murmurhash3_bytes_u32(word, 0) % 2 ** 20))
Explanation: The Hashing Trick
Remember the bag of word representation using a vocabulary based vectorizer:
To workaround the limitations of the vocabulary-based vectorizers, one can use the hashing trick. Instead of building and storing an explicit mapping from the feature names to the feature indices in a Python dict, we can just use a hash function and a modulus operation:
End of explanation
from sklearn.feature_extraction.text import HashingVectorizer
h_vectorizer = HashingVectorizer(encoding='latin-1')
h_vectorizer
Explanation: This mapping is completely stateless and the dimensionality of the output space is explicitly fixed in advance (here we use a modulo 2 ** 20 which means roughly 1M dimensions). This makes it possible to work around the limitations of the vocabulary based vectorizer both for parallelizability and online / out-of-core learning.
The HashingVectorizer class is an alternative to the TfidfVectorizer class with use_idf=False that internally uses the murmurhash hash function:
End of explanation
analyzer = h_vectorizer.build_analyzer()
analyzer('This is a test sentence.')
Explanation: It shares the same "preprocessor", "tokenizer" and "analyzer" infrastructure:
End of explanation
%time X_train_small = h_vectorizer.transform(text_train_small)
Explanation: We can vectorize our datasets into a scipy sparse matrix exactly as we would have done with the CountVectorizer or TfidfVectorizer, except that we can directly call the transform method: there is no need to fit as HashingVectorizer is a stateless transformer:
End of explanation
X_train_small
Explanation: The dimension of the output is fixed ahead of time to n_features=2 ** 20 by default (nearly 1M features) to minimize the rate of collision on most classification problem while having reasonably sized linear models (1M weights in the coef_ attribute):
End of explanation
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.pipeline import Pipeline
h_pipeline = Pipeline((
('vec', HashingVectorizer(encoding='latin-1')),
('clf', PassiveAggressiveClassifier(C=1, n_iter=1)),
))
%time h_pipeline.fit(text_train_small, target_train_small).score(text_validation, target_validation)
Explanation: As only the non-zero elements are stored, n_features has little impact on the actual size of the data in memory. We can combine the hashing vectorizer with a Passive-Aggressive linear model in a pipeline:
End of explanation
h_pipeline.score(text_test_all, target_test_all)
Explanation: Let's check that the score on the validation set is reasonably in line with the set of manually annotated tweets:
End of explanation
from sklearn.feature_extraction.text import TfidfVectorizer
vocabulary_vec = TfidfVectorizer(encoding='latin-1', use_idf=False)
vocabulary_pipeline = Pipeline((
('vec', vocabulary_vec),
('clf', PassiveAggressiveClassifier(C=1, n_iter=1)),
))
%time vocabulary_pipeline.fit(text_train_small, target_train_small).score(text_validation, target_validation)
Explanation: As the text_train_small dataset is not that big we can still use a vocabulary based vectorizer to check that the hashing collisions are not causing any significant performance drop on the validation set (WARNING this is twice as slow as the hashing vectorizer version, skip this cell if your computer is too slow):
End of explanation
len(vocabulary_vec.vocabulary_)
Explanation: We get almost the same score but almost twice as slow, with a big, slow-to-(un)pickle data structure in memory:
End of explanation
from random import Random
class InfiniteStreamGenerator(object):
    """Simulate random polarity queries on the twitter streaming API."""
def __init__(self, texts, targets, seed=0, batchsize=100):
self.texts_pos = [text for text, target in zip(texts, targets)
if target > 0]
self.texts_neg = [text for text, target in zip(texts, targets)
if target <= 0]
self.rng = Random(seed)
self.batchsize = batchsize
def next_batch(self, batchsize=None):
batchsize = self.batchsize if batchsize is None else batchsize
texts, targets = [], []
for i in range(batchsize):
# Select the polarity randomly
target = self.rng.choice((-1, 1))
targets.append(target)
# Combine 2 random texts of the right polarity
pool = self.texts_pos if target > 0 else self.texts_neg
text = self.rng.choice(pool) + " " + self.rng.choice(pool)
texts.append(text)
return texts, targets
infinite_stream = InfiniteStreamGenerator(text_train_small, target_train_small)
texts_in_batch, targets_in_batch = infinite_stream.next_batch(batchsize=3)
for t in texts_in_batch:
print(t + "\n")
targets_in_batch
Explanation: More info and reference for the original papers on the Hashing Trick in the answers to this http://metaoptimize.com/qa question: What is the Hashing Trick?.
Out-of-Core learning
Out-of-Core learning is the task of training a machine learning model on a dataset that does not fit in the main memory. This requires the following conditions:
a feature extraction layer with fixed output dimensionality
knowing the list of all classes in advance (in this case we only have positive and negative tweets)
a machine learning algorithm that supports incremental learning (the partial_fit method in scikit-learn).
Let us simulate an infinite twitter stream that can generate batches of annotated tweet texts and their polarity. We can do this by randomly recombining pairs of positive or negative tweets from our fixed dataset:
End of explanation
n_batches = 1000
validation_scores = []
training_set_size = []
# Build the vectorizer and the classifier
h_vectorizer = HashingVectorizer(encoding='latin-1')
clf = PassiveAggressiveClassifier(C=1)
# Extract the features for the validation once and for all
X_validation = h_vectorizer.transform(text_validation)
classes = np.array([-1, 1])
n_samples = 0
for i in range(n_batches):
texts_in_batch, targets_in_batch = infinite_stream.next_batch()
n_samples += len(texts_in_batch)
# Vectorize the text documents in the batch
X_batch = h_vectorizer.transform(texts_in_batch)
# Incrementally train the model on the new batch
clf.partial_fit(X_batch, targets_in_batch, classes=classes)
if n_samples % 100 == 0:
# Compute the validation score of the current state of the model
score = clf.score(X_validation, target_validation)
validation_scores.append(score)
training_set_size.append(n_samples)
if i % 100 == 0:
print("n_samples: {0}, score: {1:.4f}".format(n_samples, score))
Explanation: We can now use our infinite tweet source to train an online machine learning algorithm using the hashing vectorizer. Note the use of the partial_fit method of the PassiveAggressiveClassifier instance in place of the traditional call to the fit method that needs access to the full training set.
From time to time, we evaluate the current predictive performance of the model on our validation set that is guaranteed to not overlap with the infinite training set source:
End of explanation
plt.plot(training_set_size, validation_scores)
plt.ylim(0.5, 1)
plt.xlabel("Number of samples")
plt.ylabel("Validation score")
Explanation: We can now plot the collected validation score values versus the number of samples generated by the infinite source and fed to the model:
End of explanation |
1,257 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TruePeakDetector use example
This algorithm implements the “true-peak” level meter as described in the second annex of the ITU-R BS.1770-2[1] or the ITU-R BS.1770-4[2] (default).
Note
Step1: The problem of true peak estimation
The following widget demonstrates two intersample detection techniques
Step2: As can be seen from the widget, the oversampling strategy produces a smaller error in most cases.
The ITU-R BS.1770 approach
The ITU-R BS.1770 recommentation proposess the following signal chain based on the oversampling strategy | Python Code:
import essentia.standard as es
import numpy as np
import matplotlib
matplotlib.use('nbagg')
import matplotlib.pyplot as plt
import ipywidgets as wg
from IPython.display import Audio
from essentia import array as esarr
plt.rcParams["figure.figsize"] =(9, 5)
Explanation: TruePeakDetector use example
This algorithm implements the “true-peak” level meter as described in the second annex of the ITU-R BS.1770-2[1] or the ITU-R BS.1770-4[2] (default).
Note: the parameters 'blockDC' and 'emphatise' work only when 'version' is set to 2.
References:
[1] Series, B. S. (2011). Recommendation ITU-R BS.1770-2. Algorithms to
measure audio programme loudness and true-peak audio level,
https://www.itu.int/dms_pubrec/itu-r/rec/bs/R-REC-BS.1770-2-201103-S!!PDF-E.
pdfe
[2] Series, B. S. (2011). Recommendation ITU-R BS.1770-4. Algorithms
to measure audio programme loudness and true-peak audio level,
https://www.itu.int/dms_pubrec/itu-r/rec/bs/R-REC-BS.1770-4-201510-I!!PDF-E.
pdf
End of explanation
# Parameters
duration = 10 # s
fs = 1 # hz
k = 1. # amplitude
oversamplingFactor = 4 # factor of oversampling for the real signal
nSamples = fs * duration
time = np.arange(-nSamples/2, nSamples/2,
2 ** -oversamplingFactor, dtype='float')
samplingPoints = time[::2 ** oversamplingFactor]
def shifted_sinc(x, k, offset):
xShifted = x - offset
y = np.zeros(len(xShifted))
for idx, i in enumerate(xShifted):
if not i:
y[idx] = k
else:
y[idx] = (k * np.sin(np.pi * i) / (np.pi * i))
return y
def resampleStrategy(y, fs, quality=0, oversampling=4):
yResample = es.Resample(inputSampleRate=fs,
outputSampleRate=fs*oversampling,
quality=quality)(y.astype(np.float32))
tResample = np.arange(np.min(samplingPoints), np.max(samplingPoints)
+ 1, 1. / (fs * oversampling))
tResample = tResample[:len(yResample)]
# getting the stimated peaks
yResMax = np.max(yResample)
tResMax = tResample[np.argmax(yResample)]
return yResample, tResample, yResMax, tResMax
def parabolicInterpolation(y, threshold=.6):
# todo plot the parabol maybe
positions, amplitudes = es.PeakDetection(threshold=threshold)\
(y.astype(np.float32))
pos = int(positions[0] * (len(y-1)))
a = y[pos - 1]
b = y[pos]
c = y[pos + 1]
tIntMax = samplingPoints[pos] + (a - c) / (2 * (a - 2 * b + c))
yIntMax = b - ((a - b) ** 2) / (8 * (a - 2 * b + c))
return tIntMax, yIntMax
def process():
## Processing
# "real" sinc
yReal = shifted_sinc(time, k, offset.value)
# sampled sinc
y = shifted_sinc(samplingPoints, k, offset.value)
# Resample strategy
yResample, tResample, yResMax, tResMax = \
resampleStrategy(y, fs, quality=0, oversampling=4)
# Parabolic Interpolation extrategy
tIntMax, yIntMax = parabolicInterpolation(y)
## Plotting
ax.clear()
    plt.title('Interpeak detection strategies')
ax.grid(True)
ax.grid(xdata=samplingPoints)
ax.plot(time, yReal, label='real signal')
yRealMax = np.max(yReal)
sampledLabel = 'sampled signal. Error:{:.3f}'\
.format(np.abs(np.max(y) - yRealMax))
ax.plot(samplingPoints, y, label=sampledLabel, ls='-.',
color='r', marker='x', markersize=6, alpha=.7)
ax.plot(tResample, yResample, ls='-.',
color='y', marker='x', alpha=.7)
resMaxLabel = 'Resample Peak. Error:{:.3f}'\
.format(np.abs(yResMax - yRealMax))
ax.plot(tResMax, yResMax, label= resMaxLabel,
color='y', marker = 'x', markersize=12)
intMaxLabel = 'Interpolation Peak. Error:{:.3f}'\
.format(np.abs(yIntMax - yRealMax))
ax.plot(tIntMax, yIntMax, label= intMaxLabel,
marker = 'x', markersize=12)
fig.legend()
fig.show()
# matplotlib.use('TkAgg')
offset = wg.FloatSlider()
offset.max = 1
offset.min = -1
offset.step = .1
display(offset)
fig, ax = plt.subplots()
process()
def on_value_change(change):
process()
offset.observe(on_value_change, names='value')
Explanation: The problem of true peak estimation
The following widget demonstrates two intersample detection techniques:
- Signal upsampling.
- parabolic interpolation.
The accuracy of both methods can be assessed in real-time by shifting the sampling points in a Sinc function and evaluating the error produced by both systems.
End of explanation
fs = 44100.
eps = np.finfo(np.float32).eps
audio_dir = '../../audio/'
audio = es.MonoLoader(filename='{}/{}'.format(audio_dir,
'recorded/distorted.wav'),
sampleRate=fs)()
times = np.linspace(0, len(audio) / fs, len(audio))
peakLocations, output = es.TruePeakDetector(version=2)(audio)
oversampledtimes = np.linspace(0, len(output) / (fs*4), len(output))
random_indexes = [1, 300, 1000, 3000]
figu, axes = plt.subplots(len(random_indexes))
plt.subplots_adjust(hspace=.9)
for idx, ridx in enumerate(random_indexes):
l0 = axes[idx].axhline(0, color='r', alpha=.7, ls = '--')
l1 = axes[idx].plot(times, 20 * np.log10(np.abs(audio + eps)))
l2 = axes[idx].plot(oversampledtimes, 20 * np.log10(output + eps), alpha=.8)
axes[idx].set_xlim([peakLocations[ridx] / fs - .0002, peakLocations[ridx] / fs + .0002])
axes[idx].set_ylim([-.15, 0.15])
axes[idx].set_title('Clipping peak located at {:.2f}s'.format(peakLocations[ridx] / (fs*4)))
axes[idx].set_ylabel('dB')
figu.legend([l0, l1[-1], l2[-1]], ['Dynamic range limit', 'Original signal', 'Resampled signal'])
plt.show()
Explanation: As can be seen from the widget, the oversampling strategy produces a smaller error in most cases.
The ITU-R BS.1770 approach
The ITU-R BS.1770 recommendation proposes the following signal chain based on the oversampling strategy:
-12.04dB --> x4 oversample --> LowPass --> abs() --> 20 * log10() --> +12.04dB
In our implementation, the gain control is omitted from the chain, as it is not required when working with floating-point values, and the result is returned in natural units so it can be converted to dB as a postprocessing step. Here we can see an example.
End of explanation |
1,258 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Unit 2
Step1: 1. What is the output of the commands above?
Now each time we call a function that’s in a library, we use the syntax
Step2: 2. In the command above, why did we use pd.read_table, not just read_table or pandas.read_table?
Now we have a DataFrame called gapminder with all of our data in it!
One of the first things we do after importing data is to just have a look and make sure it looks like what we think it should.
In this case, this is your first look, so type the name of the DataFrame and run the code cell below to see what happens...
Step3: There are usually too many rows to print to the screen and you don't really want to find out by printing them all to the screen and regretting it. By default, when you type the name of the DataFrame and run a cell, pandas knows not to print the whole thing if there are a lot of rows. Instead, you will see the first and last few rows with dots in between.
A neater way to look at a preview of the dataset is by using the head() method. Calling DataFrame.head() will display the first 5 rows of the data (this is also an example of the "dot" notation where we want the "head" of gapminder, so we write gapminder.head()). You can specify how many rows you want to see as an argument, like DataFrame.head(10). The tail() method does the same with the last rows of the DataFrame.
Use these methods below to get an idea of what the gapminder DataFrame looks like.
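For example, cells like the following would work (the number passed to head() is optional):
python
gapminder.head(10)
gapminder.tail()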
Step4: 3. Make a few observations about what you see in the data so far (you don't have to go too deep since we are going to use pandas to look next)
Assess structure and cleanliness
How many rows and columns are in the data?
We often want to know how many rows and columns are in the data -- we want to know what is called the "shape" attribute of the DataFrame. Pandas has a convenient way of getting that information: DataFrame.shape (here DataFrame is a generic placeholder for the name of your DataFrame, gapminder in our case). This returns a tuple (values separated by commas) representing the dimensions of the DataFrame (rows, columns).
write code to use shape to get the shape of the gapminder DataFrame
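A one-line example (note that shape is an attribute, not a method, so there are no parentheses):
python
gapminder.shape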
Step5: 4. How many rows and columns are there?
The info() method gives a few useful pieces of information quickly, including the shape of the DataFrame, the variable type of each column, and the amount of memory stored.
run the code in the cell below
Step6: There are several problems with this data set as it is. BTW, this is super common and data nerd slang for fundamentally useful data with issues is "dirty data" and the process for un-dirtying it is called "data cleaning". (believe it or not, tidy data is yet again a different thing, we'll get to that later)
The first step in data cleaning is identifying the problems. The info above has already revealed one major problem and one minor/annoying problem.
Let's investigate!
5. What types of variables are in the gapminder DataFrame and what values can they take? (region is a little tricky)
6. Some of your data types should seem odd to you (hint - party like it's 1999.9?). Which ones and why?
7. Look at the number of entries in the entire DataFrame at the top (RangeIndex) and compare that to each column. Explain what you think is happening.
Let's keep looking
There are other fast, easy, and informative ways to get a sense of what your data might look like and any issues that it has. The describe() method will take the numeric columns and give a summary of their values. This is useful for getting a sense of the ranges of values and seeing if there are any unusual or suspicious numbers.
Use the describe() method on the gapminder DataFrame in the cell below
Step7: Uh-oh what happened to the percentiles? Also notice that you get an error, but it gives you some output.
The DataFrame method describe() just blindly looks at all numeric variables. First, we wouldn't actually want to take the mean year. Additionally, we obtain 'NaN' (not a number) values for our quartiles. This suggests we might have missing data which we can (and will) deal with shortly when we begin to clean our data.
For now, let's pull out only the columns that are truly continuous numbers (i.e. ignore the description for 'year'). This is a preview of selection columns of the data; we'll talk more about how to do it later in the lesson, but many methods that work on DataFrames work in this way if you use DataFrame.method(['column_name']) -- the brackets should remind you of indexing a list or a string and the 'column_name' needs to be in quotes since it is itself a string.
run the code in the cell below
Step8: 8. We haven't really nailed this problem down yet. Reflect (or look) back on your answer to Question#7 and think about what might be happening (don't worry too much about the mathematical specifics).
Let's get after it.
The command value_counts() gives you a quick idea of what kinds of names are in your categorical data such as strings. In this case our categorical data is in the region column and represents the names of the regions where the data came from.
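For example, a cell like this would tabulate the region names and how often each one appears:
python
gapminder['region'].value_counts()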
Important info
Step9: Uh-oh! The table reveals several problems.
9. Describe and Explain the (several) problems that you see.
Data cleaning
Handling Missing Data
Missing data is an important issue to handle. As we've seen, ignoring it doesn't make it go away, in our example, it has been giving us problems in our describe() results -- remember the NaN values?
There are different ways of dealing with missing data (dropping the rows, filling in values, and so on); all of them have advantages and disadvantages, so choose carefully and with good reasons!
Step10: Yikes! There are NA values in each column except region.
Removing NAs from a DataFrame is incredibly easy to do because pandas allows you to either remove all instances with NA/null data or replace them with a particular value. (sometimes too easy, so make sure you are careful!)
The method df = df.dropna() (here df is short for DataFrame) drops rows with any column having NA/null data. df = df.fillna(value) replaces all NA/null data with the argument value. You have to use the assignment to the same (or different) identifier since dropna() does not work "in place" so if you don't assign the result to something it prints and nothing happens to the actual DataFrame.
use dropna() to remove the NAs from the gapminder DataFrame
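For example (remember to assign the result back, since dropna() does not work in place):
python
gapminder = gapminder.dropna()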
Step11: 11. How many rows were removed? Make sure the code immediately above justifies this.
12. Which regions lost rows? Run code to see if there are still NAs and also to identify regions with too many observations. (hint
Step12: Are we done yet? Oh, no, we certainly are not...
One more nice trick. If you want to examine a subset (more later) of the rows, for example to examine the region with too many observations, you can use code like this
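A sketch of that pattern (the region label here is just a placeholder -- use whichever region you identified above):
python
gapminder[gapminder['region'] == 'name_of_suspect_region']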
Step13: Handling strange variable types
Remember above where we had some strange/annoying variable types? Now we can fix them. The methods below would have failed if there were NAs in the data since they look for the type of the data and try to change it. Inconsistencies in data types and NAs cause problems for a lot of methods, so we often deal with them first when cleaning data.
We can change (or type cast) the inappropriate data types with the function astype(), like so
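For example, if info() showed that the year column was read in as a float, a cell like this would cast it to integers (which column you fix depends on what you found above):
python
gapminder['year'] = gapminder['year'].astype(int)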
Step14: Now you use astype() to type cast the other problematic column and use info() to make sure these two operations worked.
Step15: Progress!
Your data types should make more sense now and since we removed the NA values above, the total number of rows or entries is the same as the non-null rows. Nice!
Handling (Unwanted) Repetitive Data
Sometimes observations can end up in the data set more than once creating a duplicate row. Luckily, pandas has methods that allow us to identify which observations are duplicates. The method df.duplicated() will return boolean values for each row in the DataFrame telling you whether or not a row is an exact repeat.
Run the code below to see how it works.
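A sketch of such a cell (head() keeps the output short):
python
gapminder.duplicated().head()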
Step16: Wow, there is a repeat in the first 5 rows!
Write code below that allows you to confirm this by examining the DataFrame.
Step17: In cases where you don’t want exactly repeated rows (we don’t -- we only want each country to be represented once for every relevant year), you can easily drop such duplicate rows with the method df.drop_duplicates().
Note
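For example (like dropna(), drop_duplicates() does not work in place, so assign the result back):
python
gapminder = gapminder.drop_duplicates()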
Step18: More Progress!
Reindexing with reset_index()
Now we have 1704 rows, but our index is off. Look at the index values in gapminder just above. One is missing!
We can reset our indices easily with the method reset_index(drop=True). Remember, Python is 0-indexed so our indices will be valued 0-1703. The drop=True parameter drops the old index (as opposed to placing it in a new column, which is useful sometimes but not here).
The concept of reindexing is important. When we remove some of the messier, unwanted data, we end up with "gaps" in our index values. By correcting this, we can improve our search functionality and our ability to perform iterative functions on our cleaned data set.
Note
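For example:
python
gapminder = gapminder.reset_index(drop=True)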
Step19: Handling Inconsistent Data
The region column still has issues that will affect our analysis. We used the value_counts() method above to examine some of these issues...
write code to look at the number of observations for each unique region
Step20: 14. Describe and Explain...
14a. what is better now that we have done some data cleaning.
14b. what is still in need of being fixed.
String manipulations
Very common problems with string variables are
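As a sketch, the three string-cleaning commands referred to in the chaining note below might look like this when applied to the region column one at a time:
python
gapminder['region'] = gapminder['region'].str.lstrip()   # strip leading white space
gapminder['region'] = gapminder['region'].str.rstrip()   # strip trailing white space
gapminder['region'] = gapminder['region'].str.lower()    # make everything lowercase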
Step21: As a side note, one of the coolest things about pandas is an idea called chaining. If you prefer, the three commands can be written in one single line!
python
df['column_name'] = df['column_name'].str.lstrip().str.rstrip().str.lower()
You may be wondering about the order of operations and it starts closest to the data (DataFrame name, df here), and moves away performing methods in order. This is a bit advanced, but can save a lot of typing once you get used to it. The downside is that it can be harder to troubleshoot -- it can leave you asking "which command in my long slick looking chain produced the weird error?"
Are we there yet???
15. What is still wrong with the region column?
regex + replace()
A regular expression, aka regex, is a powerful search technique that uses a sequence of characters that define a search pattern. In a regular expression, the symbol "*" matches the preceding character 0 or more times, whereas "+" matches the preceding character 1 or more times. "." matches any single character. Writing "x|y" means to match either "x" or "y".
For more regex shortcuts (cheatsheet)
Step22: 16. What happened?
Step23: We can revert to working on the non-temporary DataFrame and correctly modify our regex to isolate only the Democratic Republic of Congo instances (as opposed to including the Republic of Congo as well).
Using regex to fix the Dem. Rep. Congo...
As noted above, regular expressions (regex) provide a powerful
tool for fixing errors that arise in strings. In order to correctly label the
two different countries that include the word "congo", we need to design and
use (via df.replace()) a regex that correctly differentiates between the
two countries.
Recall that the "." is the wildcard (matching any single character); combining
this with "*" allows us to match any number of single characters an unspecified
number of times. By combining these characters with substrings corresponding to
variations in the naming of the Democratic Republic of the Congo, we can
correctly normalize the name.
If you feel that the use of regex is not particularly straightforward, you are absolutely
correct -- appropriately using these tools takes a great deal of time to master.
When designing regex for these sorts of tasks, you might find the following
prototyper helpful
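As a heavily simplified sketch, the pattern below is illustrative only -- the actual regex and the replacement label need to match the misspellings that really occur in your region column:
python
gapminder['region'] = gapminder['region'].replace('.*dem.*congo.*|.*congo.*dem.*', 'africa_dem rep congo', regex=True)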
Step24: Exercise (regex)
Step25: Are we there yet??!??!
17. Looks like we successfully cleaned our data! Justify that we are done.
Tidy data
Having what is called a Tidy data set can make cleaning and analyzing your data much easier.
Two of the important aspects of Tidy data are
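A sketch of how the region column could be split into two new columns, assuming the region strings have the form 'continent_country':
python
gapminder['continent'] = gapminder['region'].str.split('_').str[0]
gapminder['country'] = gapminder['region'].str.split('_').str[1]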
Step26: 19. Describe what happens for each of the lines of code that contains split(). Be sure to explain why the 2 columns end up with different values.
Step27: Removing and renaming columns
We have now added the columns country and continent, but we still have the old region column as well. In order to remove that column we use the drop() command. The first argument of the drop() command is the name of the element to be dropped. The second argument is the axis number
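For example (axis=1 indicates that we are dropping a column rather than a row):
python
gapminder = gapminder.drop('region', axis=1)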
Step28: Finally, it is a good idea to look critically at your column names themselves. Try to be as consistent as possible when naming columns. We often use all lowercase for all column names to avoid accidentally making names that can be confusing -- gdppercap and gdpPercap and GDPpercap are all the same word in different cases, but gdpercap is different.
Avoid spaces in column names to simplify manipulating your data. Also look out for lingering white space at the beginning or end of your column names.
Run the following code that turns all column names to lowercase.
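A sketch of what that code might look like:
python
gapminder.columns = [name.lower() for name in gapminder.columns]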
Step29: We also want to remove the space from the life exp column name. We can do that with Pandas rename method. It takes a dictionary as its argument, with the old column names as keys and new column names as values.
If you're unfamiliar with dictionaries, they are a very useful data structure in Python. You can read more about them here.
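For example, to replace the space in 'life exp' with an underscore (the exact new name is your choice):
python
gapminder = gapminder.rename(columns={'life exp': 'life_exp'})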
Step30: Tidy data wrap-up
21. explain why the data set at this point is Tidy or at least much tidier than it was before.
Export clean and tidy data file
Now that we have a clean and tidy data set in a DataFrame, we want to export it and give it a new name so that we can refer to it and use it
python
df.to_csv('file name for export', index=False) # index=False keeps index out of columns
For more info on this method, check out the docs for to_csv()
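Applied to our DataFrame, with a (hypothetical) file name of your choosing:
python
gapminder.to_csv('gapminder_cleaned.csv', index=False)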
Step31: Subsetting and sorting
There are many ways in which you can manipulate a Pandas DataFrame - here we will discuss only two
Step32: write code to take a different slice
Step33: 21. Predict what the slice below will do.
python
gapminder[-10:]
Step34: 22a. What does the negative number (in the cell above) mean?
22b. What happens when you leave the space before or after the colon empty?
write code to take another different slice
Step35: More Subsetting
Subsetting can also be done by selecting for a particular value in a column.
For instance to select all of the rows that have 'africa' in the column 'continent'.
python
gapminder_africa = gapminder[gapminder['continent']=='africa']
Note the double equal sign
Step36: Even more...
There are several other fancy ways to subset rows and columns from DataFrames, the .loc and .iloc methods are particularly useful. If you're interested, look them up on the cheatsheet or in the docs.
Sorting
Sorting may help to further organize and inspect your data. The method sort_values() takes a number of arguments; the most important ones are by and ascending. The following command will sort your DataFrame by year, beginning with the most recent.
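That command might look like this:
python
gapminder.sort_values(by='year', ascending=False)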
Step37: Note
Step38: 23. Make a new variable with a sorted version of the DataFrame organized by country, from ‘Afganistan’ to ‘Zimbabwe’. Also include code to show that it is sorted correctly.
Step39: Summarize and plot
Summaries and statistics are very useful for initial examination of data as well as in-depth analysis. Here we will only scratch the surface.
Plots/graphs/charts are also great visual ways to examine data and illustrate patterns.
Exploring your data is often iterative - summarize, plot, summarize, plot, etc. - sometimes it branches, sometimes there are more cleaning steps to be discovered...
Let's try it!
Summarizing data
Remember that the info() method gives a few useful pieces of information, including the shape of the DataFrame, the variable type of each column, and the amount of memory stored. We can see many of our changes (continent and country columns instead of region, higher number of rows, etc.) reflected in the output of the info() method.
Step40: We also saw above that the describe() method will take the numeric columns and give a summary of their values. We have to remember that we changed the column names, and this time it shouldn't have NAs.
Step41: More summaries
What if we just want a single value, like the mean of the population column? We can call mean on a single column this way
Step42: What if we want to know the mean population by continent? Then we need to use the Pandas groupby() method and tell it which column we want to group by.
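A sketch of that pattern (here the population column is assumed to be named 'pop' after cleaning -- substitute your actual column name):
python
gapminder.groupby('continent')['pop'].mean()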
Step43: What if we want to know the median population by continent?
The method that gives you the median is called median().
write code below to get the median population by continent
Step44: Or the number of entries (rows) per continent?
Step45: Sometimes we don't want a whole DataFrame. Here is another way to do this that produces a series as opposed to a DataFrame that tells us number of entries (rows).
Step46: We can also look at the mean GDP per capita of each country
Step47: What if we wanted a new DataFrame that just contained these summaries? This could be a table in a report, for example.
Step48: Visualization with matplotlib
Recall that matplotlib is Python's main visualization
library. It provides a range of tools for constructing plots, and numerous
high-level and add on plotting libraries are
built with matplotlib in mind. When we were in the early stages of setting up
our analysis, we loaded these libraries like so
Step49: Single variable plots
Histograms - provide a quick way of visualizing the distribution of numerical data, or the frequencies of observations for categorical variables.
Run the code below to generate a sample histogram.
Step50: write code below to make a histogram of the distribution of life expectancies of the countries in the gapminder DataFrame. | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Unit 2: Programming Design
Lesson 14: Packages and Data Analysis
Notebook Authors
(fill in your two names here)
Facilitator: (fill in name)
Spokesperson: (fill in name)
Process Analyst: (fill in name)
Quality Control: (fill in name)
If there are only three people in your group, have one person serve as both spokesperson and process analyst for the rest of this activity.
At the end of this Lesson, you will be asked to record how long each Model required for your team. The Facilitator should keep track of time for your team.
Science Context: (Reproducible) Data Analysis
After the preactivity, you are at least a little more familiar with two ideas:
1. Very often scientific data is in a table/spreadsheet/DataFrame format (columns and rows)
2. Reproducibility of scientific data analysis is challenging, important to consider, and more doable with the correct tools (like Jupyter)
Outline/Objectives
In this activity we will use the pandas library of tools to analyze a DataFrame.
This introduction to analyzing tabular data in Python will include an introduction to:
+ loading data from a .csv file into a Jupyter Notebook environment
+ importing and using a library
+ basic data cleaning and wrangling
+ "tidy" data
+ pandas functions and methods for DataFrames
+ a brief intro to regex
+ basic data summary info
+ very quick intro to plotting with matplotlib
A note about the data in this activity:
The data originally comes from Gapminder, which describes itself as "a fact tank [that] fights devastating misconceptions about global development ... making the world understandable based on reliable statistics". It has been processed lightly by the Data Carpentry community for data analysis activities. It might seem a bit less sciency (unless you're into global development and public health) but analyzing tabular data to do some statistical or other analysis of numerical and categorical data is the same...
This activity is heavily based on the Data Exploration lesson developed by the Data Carpentry community (Brian was a participant/developer).
Setting up the notebook
About Libraries in Python
A library in Python contains a set of tools (called functions) that perform tasks on our data and instead of you having to write them all, someone else has written them for you. Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench for use in a project. Once a library is imported, it can be used or called to perform many tasks.
Python doesn’t load all of the libraries available to it by default -- that would be too inefficient. We have to add an import statement to our code in order to use library functions. To import a library, we use the syntax:
python
import libraryName
If we want to give the library a nickname to shorten the commands you need to type, we can do it like this:
python
import libraryName as nickNameHere
An example of importing the pandas library using the very common nickname pd is below (and also in notebook):
python
import pandas as pd
Luckily for us, Anaconda installs a lot of commonly used libraries, like pandas so we don't have to do anything but import them. If you are using less common libraries you would need to also install them using Anaconda or the Python installer, pip.
matplotlib and other plotting libraries
matplotlib is the most widely used Python library for plotting. We can run it right in the notebook using the magic command
python
%matplotlib inline
See the IPython docs for other options to pass to the magic command.
In this lesson, we will only use matplotlib since it is the standard, basic plotting tool in Python. There are a whole range of graphics packages in Python, ranging from better basic visualizations, like ggplot, to fancifying matplotlib graphs like seaborn, and still others that make sweet interactive graphics like bokeh and plotly. None of these are included in the default Anaconda install.
We encourage you to explore on your own! Chances are, if you can imagine a plot you'd like to make, somebody else has written a package to do it. (Although you have to install those packages first...)
The Pandas Library
One of the best options for working with tabular data in Python is to use the Python Data Analysis Library (a.k.a. pandas). The pandas library provides data structures, produces high quality plots with matplotlib and integrates nicely with other libraries that use NumPy arrays (another Python library that provides a lot of very useful numerical libraries).
Resources
One of the best parts of the pandas library is that the developers have made it as easy to use as possible, have great documentation, and they even made a cheat sheet that has a lot of useful info!
Let's Go!
End of explanation
# url where data is stored
url = "https://raw.githubusercontent.com/Reproducible-Science-Curriculum/data-exploration-RR-Jupyter/master/gapminderDataFiveYear_superDirty.txt"
# assigns the identifier 'gapminder' to the entire dataset
gapminder = pd.read_table(url, sep = "\t")
Explanation: 1. What is the output of the commands above?
Now each time we call a function that’s in a library, we use the syntax:
python
LibraryName.FunctionName
Adding the library name with a . before the function name tells Python where to find the function. In the example above, we have imported Pandas as pd. This means we don’t have to type out pandas each time we call a Pandas function.
We will begin by locating and reading our data which are in a table format. We can use Pandas’ read_table function to pull the file directly into a DataFrame.
Getting data into the notebook
What’s a DataFrame?
A DataFrame is a 2-dimensional data structure that can store data of different types (including characters, integers, floating point values, factors and more) in columns. It is similar to a spreadsheet or an SQL table or the data.frame in R.
A DataFrame always has an index (0-based since it's Python). An index refers to the position of an element in the data structure and is super useful for some of the pandas methods.
pandas also can take apart DataFrames into columns by making a specialized data structure called a Series (a 1D structure that still has an index). This allows many pandas methods to work on one column at a time, or the whole DataFrame column by column. pandas can also perform what are called vectorized operations which means that the iteration is already built in and you don't have to loop through values in a list, you can just call a method on a Series. But enough of that for now, let's get to some data...
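Before we do, here is a tiny sketch of what a vectorized operation on a Series looks like (made-up numbers, not our data):
python
s = pd.Series([1, 2, 3])
s * 10      # multiplies every value at once -- no loop needed
s.mean()    # methods work on the whole Series in one call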
Load data
The commands below load the data directly from the GitHub repository where the data is stored.
This particular data set is tab separated since the columns in the data set are separated by a TAB. We need to tell the read_table function in Pandas that this is the case with sep = '\t' argument. (this \t for tab is kind of like the \n character that we have seen for a new line in a text file)
The fact that this data is tab separated as opposed to the more common comma separated (.csv) data is just a quirk of how we came by the data, and databases and instruments often have complex or even proprietary data formats, so get used to it.
run the code below to bring in the data and assign it to the identifier gapminder.
End of explanation
## DataFrame name here
Explanation: 2. In the command above, why did we use pd.read_table, not just read_table or pandas.read_table?
Now we have a DataFrame called gapminder with all of our data in it!
One of the first things we do after importing data is to just have a look and make sure it looks like what we think it should.
In this case, this is your first look, so type the name of the DataFrame and run the code cell below to see what happens...
End of explanation
## head and tail methods
Explanation: There are usually too many rows to print to the screen, and you don't really want to find out by printing them all and regretting it. By default, when you type the name of the DataFrame and run a cell, pandas knows not to print the whole thing if there are a lot of rows. Instead, you will see the first and last few rows with dots in between.
A neater way to look at a preview of the dataset is by using the head() method. Calling DataFrame.head() will display the first 5 rows of the data (this is also an example of the "dot" notation where we want the "head" of the DataFrame, so we write DataFrame.head()). You can specify how many rows you want to see as an argument, like DataFrame.head(10). The tail() method does the same with the last rows of the DataFrame.
Use these methods below to get an idea of what the gapminder DataFrame looks like.
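For reference, the call pattern is just (a sketch -- pick however many rows you like):
python
gapminder.head(10)   # first 10 rows
gapminder.tail()     # last 5 rows by default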
End of explanation
## shape
Explanation: 3. Make a few observations about what you see in the data so far (you don't have to go too deep since we are going to use pandas to look next)
Assess structure and cleanliness
How many rows and columns are in the data?
We often want to know how many rows and columns are in the data -- we want to know what is called the "shape" attribute of the DataFrame. Pandas has a convenient way for getting that information by using the DataFrame.shape (using DataFrame as a generic name for a, well, DataFrame, which in pandas is usually written DataFrame). This returns a tuple (values separated by commas) representing the dimensions of the DataFrame (rows, columns).
write code to use shape to get the shape of the gapminder DataFrame:
End of explanation
# get info
gapminder.info()
Explanation: 4. How many rows and columns are there?
The info() method gives a few useful pieces of information quickly, including the shape of the DataFrame, the variable type of each column, and the amount of memory stored.
run the code in the cell below
End of explanation
## describe
Explanation: There are several problems with this data set as it is. BTW, this is super common and data nerd slang for fundamentally useful data with issues is "dirty data" and the process for un-dirtying it is called "data cleaning". (believe it or not, tidy data is yet again a different thing, we'll get to that later)
The first step in data cleaning is identifying the problems. The info above has already revealed one major problem and one minor/annoying problem.
Let's investigate!
5. What types of variables are in the gapminder DataFrame and what values can they take? (region is a little tricky)
6. Some of your data types should seem odd to you (hint - party like it's 1999.9?). Which ones and why?
7. Look at the number of entries in the entire DataFrame at the top (RangeIndex) and compare that to each column. Explain what you think is happening.
Let's keep looking
There are other fast, easy, and informative ways to get a sense of what your data might look like and any issues that it has. The describe() method will take the numeric columns and give a summary of their values. This is useful for getting a sense of the ranges of values and seeing if there are any unusual or suspicious numbers.
Use the describe() method on the gapminder DataFrame in the cell below
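The call itself is short (a sketch):
python
gapminder.describe()   # count, mean, std, min, quartiles, and max for each numeric column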
End of explanation
# use describe but only on columns with continuous values
gapminder[['pop', 'life Exp', 'gdpPercap']].describe()
Explanation: Uh-oh what happened to the percentiles? Also notice that you get an error, but it gives you some output.
The DataFrame method describe() just blindly looks at all numeric variables. First, we wouldn't actually want to take the mean year. Additionally, we obtain 'NaN' (not a number) values for our quartiles. This suggests we might have missing data which we can (and will) deal with shortly when we begin to clean our data.
For now, let's pull out only the columns that are truly continuous numbers (i.e. ignore the description for 'year'). This is a preview of selecting columns of the data; we'll talk more about how to do it later in the lesson, but many methods that work on DataFrames work in this way if you use DataFrame.method(['column_name']) -- the brackets should remind you of indexing a list or a string and the 'column_name' needs to be in quotes since it is itself a string.
run the code in the cell below
End of explanation
## use value_counts to find out how many times each unique region occurs
Explanation: 8. We haven't really nailed this problem down yet. Reflect (or look) back on your answer to Question#7 and think about what might be happening (don't worry too much about the mathematical specifics).
Let's get after it.
The command value_counts() gives you a quick idea of what kinds of names are in your categorical data such as strings. In this case our categorical data is in the region column and represents the names of the regions where the data came from.
Important info: The data set covers 12 years, so each region should appear 12 times.
use value_counts() on gapminder to see if all regions have 12 rows/entries as expected
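Remember that value_counts() works on a single column (a Series), so the general pattern is (a sketch with a generic DataFrame df):
python
df['some_column'].value_counts()   # count of each unique value, most frequent first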
End of explanation
# isnull results in T/F for each cell,
# sum adds them up by column since F=0 and T=1 so #=#NAs
gapminder.isnull().sum()
Explanation: Uh-oh! The table reveals several problems.
9. Describe and Explain the (several) problems that you see.
Data cleaning
Handling Missing Data
Missing data is an important issue to handle. As we've seen, ignoring it doesn't make it go away, in our example, it has been giving us problems in our describe() results -- remember the NaN values?
There are different ways of dealing with missing data; all of them have advantages and disadvantages, so choose carefully and with good reasons! They include:
* analyzing only the available data (i.e. ignore the missing data)
* replace the missing data with replacement values and treat these as though they were observed (danger!)
* replace the missing data and account for the fact that these values were imputed with uncertainty (e.g. create a new boolean variable as a flag so you know that these values were not actually observed)
* use statistical models to allow for missing data -- make assumptions about their relationships with the available data as necessary
For our purposes with the dirty gapminder data set, we know our missing data is excess (and unnecessary) and we are going to choose to analyze only the available data. To do this, we will simply remove rows with missing values.
10. Wait, why do we think it's extra data? Justify! Also include which regions you expect to lose observations from.
In large tabular data that is used for analysis, missing data is usually coded NA (stands for Not Available and various other things) although there are other possibilities such as NaN (as we saw above) if a function or import method expects a number. NA can mean several things under the hood, but using NA for missing values in your data is your best bet and most of the methods will expect that (and even have it in their name).
Let's find out how many NAs there are
We are going to chain 2 steps together to determine the number of NA/null values in the gapminder DataFrame.
+ first isnull() returns a boolean for each cell in the DataFrame - True if the value in the cell is NA, False if it is not.
+ then sum() adds up the values in each column.
"hold on", you say, "you can't add booleans!" Oh yes you can! True == 1 and False == 0 in Python so if you sum() the results of isnull() you get the number of NA/null values in each column -- awesome!
End of explanation
## before you rip into removing, it's a good idea to know how many rows you have
## you know a few ways to get this info, write some code here to get # of rows
## now use dropna()
## now get the number of rows again
## use isnull() again to confirm
Explanation: Yikes! There are NA values in each column except region.
Removing NAs from a DataFrame is incredibly easy to do because pandas allows you to either remove all instances with NA/null data or replace them with a particular value. (sometimes too easy, so make sure you are careful!)
The method df = df.dropna() (here df is short for DataFrame) drops rows with any column having NA/null data. df = df.fillna(value) replaces all NA/null data with the argument value. You have to use the assignment to the same (or different) identifier since dropna() does not work "in place" so if you don't assign the result to something it prints and nothing happens to the actual DataFrame.
use dropna() to remove the NAs from the gapminder DataFrame
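A sketch of the whole check-drop-check workflow on a generic DataFrame df:
python
len(df)              # number of rows before
df = df.dropna()     # drop every row that contains an NA
len(df)              # number of rows after
df.isnull().sum()    # should be all zeros now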
End of explanation
## your code
Explanation: 11. How many rows were removed? Make sure the code immediately above justifies this.
12. Which regions lost rows? Run code to see if there are still NAs and also to identify regions with too many observations. (hint: it won't be perfect yet)
End of explanation
## code to examine rows from regions with too many obs
Explanation: Are we done yet? Oh, no, we certainly are not...
One more nice trick. If you want to examine a subset (more later) of the rows, for example to examine the region with too many observations, you can use code like this:
python
gapminder[gapminder.column == 'value_in_column']
where column is the name of a column and 'value_in_column' is what you're interested in, for example the region.
write code in the cell below to look at all of the rows from regions that have a problematically high number of observations.
End of explanation
##type casts the year column data from float to int
gapminder['year'] = gapminder['year'].astype(int)
Explanation: Handling strange variable types
Remember above where we had some strange/annoying variable types? Now we can fix them. The methods below would have failed if there were NAs in the data since they look for the type of the data and try to change it. Inconsistencies in data types and NAs cause problems for a lot of methods, so we often deal with them first when cleaning data.
We can change (or type cast) the inappropriate data types with the function astype(), like so:
python
DataFrame.astype(dtype)
A few important things to consider!
+ dtype is the data type that you want to cast to
+ if you just use DataFrame it will try to cast the entire DataFrame to the same type, and you do not want to go there! so we need to specify which columns we want to cast, see below...
There are several ways to select only some columns of a DataFrame in pandas but the easiest and most intuitive is usually to just use the name. It is very much like indexing a list or a string and looks like: DataFrame['column_name'], where column_name is the column name. So in context of our astype() method call we would want:
python
DataFrame['column_name'].astype(dtype)
to only cast the type of a single column.
Run the code in the cell below to see an example:
End of explanation
## fix other column and make sure all is ok
Explanation: Now you use astype() to type cast the other problematic column and use info() to make sure these two operations worked.
End of explanation
## get T/F output for each row if it is a duplicate (only look at top 5)
gapminder.duplicated().head()
Explanation: Progress!
Your data types should make more sense now and since we removed the NA values above, the total number of rows or entries is the same as the non-null rows. Nice!
Handling (Unwanted) Repetitive Data
Sometimes observations can end up in the data set more than once creating a duplicate row. Luckily, pandas has methods that allow us to identify which observations are duplicates. The method df.duplicated() will return boolean values for each row in the DataFrame telling you whether or not a row is an exact repeat.
Run the code below to see how it works.
End of explanation
## confirm duplicate in top 5 rows of df
## you can use the .sum() to count the number of duplicated rows in the DataFrame
Explanation: Wow, there is a repeat in the first 5 rows!
Write code below that allows you to confirm this by examining the DataFrame.
End of explanation
## remove duplicates and confirm
Explanation: In cases where you don’t want exactly repeated rows (we don’t -- we only want each country to be represented once for every relevant year), you can easily drop such duplicate rows with the method df.drop_duplicates().
Note: drop_duplicates() is another method that does not work in place, and if you want the DataFrame to now not have the duplicates, you need to assign it again: df = df.drop_duplicates()
Warning: you always want to be cautious about dropping rows from your data.
13. Justify why it's ok for us to drop these rows (a mechanical reason that has to do with the structure of the data itself, the "experimental" reason is above).
write code below to remove the duplicated rows and confirm that the duplicate in the first 5 rows is gone and/or that all of the duplicates are gone.
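One possible pattern (a sketch):
python
df = df.drop_duplicates()   # remove exact repeat rows
df.duplicated().sum()       # should now be 0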
End of explanation
## reset index and check top 5
Explanation: More Progress!
Reindexing with reset_index()
Now we have 1704 rows, but our index is off. Look at the index values in gapminder just above. One is missing!
We can reset our indices easily with the method reset_index(drop=True). Remember, Python is 0-indexed so our indices will be valued 0-1703. The drop=True parameter drops the old index (as opposed to placing it in a new column, which is useful sometimes but not here).
The concept of reindexing is important. When we remove some of the messier, unwanted data, we end up with "gaps" in our index values. By correcting this, we can improve our search functionality and our ability to perform iterative functions on our cleaned data set.
Note: reset_index() is yet another method that does not work in place, so if you want the DataFrame to keep its new index, you need to assign it again...
write code that resets the index of gapminder and shows you the top 5 so you can see it has changed
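As a sketch, the pattern is:
python
gapminder = gapminder.reset_index(drop=True)   # reassign -- it does not work in place
gapminder.head()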
End of explanation
## get value counts for the region column
Explanation: Handling Inconsistent Data
The region column still has issues that will affect our analysis. We used the value_counts() method above to examine some of these issues...
write code to look at the number of observations for each unique region
End of explanation
## write code to strip white space on both left and right of region names
## convert region names to lowercase, and print out df to confirm
Explanation: 14. Describe and Explain...
14a. what is better now that we have done some data cleaning.
14b. what is still in need of being fixed.
String manipulations
Very common problems with string variables are:
+ lingering white space
+ upper case vs. lower case
These issues are problematic since upper and lower case characters are considered different characters by Python (consider that 'abc' == 'ABC' evaluates to False) and any extra character in a string makes it different to Python (consider that 'ABC' == ' ABC' evaluates to False).
The following three pandas string methods (hence the str) remove all such trailing spaces (left and right) and put everything in lowercase, respectively.
python
df['column_name'].str.lstrip() # Strip white space on left
df['column_name'].str.rstrip() # Strip white space on right
df['column_name'].str.lower() # Convert to lowercase
Note: none of these methods work in place, so if you want the changes to stick, you need to assign the result back to the column (df['column_name'] = df['column_name'].method())...
write code that strips the white space from both sides of the values in the region column, makes all of the values lower case, and shows that it has been accomplished (to make sure you've reassigned).
End of explanation
# gives a problem -- 24 values of the congo!
temp = gapminder['region'].replace(".*congo.*", "africa_dem rep congo", regex=True)
temp.value_counts()
Explanation: As a side note, one of the coolest things about pandas is an idea called chaining. If you prefer, the three commands can be written in one single line!
python
df['column_name'] = df['column_name'].str.lstrip().str.rstrip().str.lower()
You may be wondering about the order of operations and it starts closest to the data (DataFrame name, df here), and moves away performing methods in order. This is a bit advanced, but can save a lot of typing once you get used to it. The downside is that it can be harder to troubleshoot -- it can leave you asking "which command in my long slick looking chain produced the weird error?"
Are we there yet???
15. What is still wrong with the region column?
regex + replace()
A regular expression, aka regex, is a powerful search technique that uses a sequence of characters that define a search pattern. In a regular expression, the symbol "*" matches the preceding character 0 or more times, whereas "+" matches the preceding character 1 or more times. "." matches any single character. Writing "x|y" means to match either "x" or "y".
For more regex shortcuts (cheatsheet): https://www.shortcutfoo.com/app/dojos/regex/cheatsheet
Pandas allows you to use regex in its replace() function -- when a regex term is found in an element, the element is then replaced with the specified replacement term. In order for it to appropriately correct elements, both regex and inplace variables need to be set to True (as their defaults are false). This ensures that the initial input string is read as a regular expression and that the elements will be modified in place.
For more documentation on the replace method: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.replace.html
Using regex and replace() is easy and powerful and potentially dangerous
Here's an incorrect regex example: we create a temporary DataFrame (notice the temp = assignment) in which a regex statement inside replace() identifies all values that contain the term "congo" and replaces it with "africa_dem rep congo". Unfortunately, this creates 24 instances of the Democratic Republic of the Congo (and there should only be 12) -- this is an error in our cleaning!
run the code below to go through the incorrect example (and let it be a warning to ye)
End of explanation
## this should help
# shows all the rows that have 'congo' in the name
gapminder[gapminder['region'].str.contains('congo')]
Explanation: 16. What happened?
End of explanation
# fix the two different versions of the incorrect name to the same correct version
gapminder['region'].replace(".*congo, dem.*", "africa_dem rep congo", regex=True, inplace=True)
gapminder['region'].replace(".*_democratic republic of the congo", "africa_dem rep congo", regex=True, inplace=True)
## you write code to check to make sure it's fixed
Explanation: We can revert back to working on the the non-temporary DataFrame and correctly modify our regex to isolate only the Democratic Republic of Congo instances (as opposed to including the Republic of Congo as well).
Using regex to fix the Dem. Rep. Congo...
As noted above, regular expressions (regex) provide a powerful
tool for fixing errors that arise in strings. In order to correctly label the
two different countries that include the word "congo", we need to design and
use (via df.replace()) a regex that correctly differentiates between the
two countries.
Recall that the "." is the wildcard (matching any single character); combining
this with "*" allows us to match any number of single characters an unspecified
number of times. By combining these characters with substrings corresponding to
variations in the naming of the Democratic Republic of the Congo, we can
correctly normalize the name.
If you feel that the use of regex is not particularly straightforward, you are absolutely
correct -- appropriately using these tools takes a great deal of time to master.
When designing regex for these sorts of tasks, you might find the following
prototyper helpful: https://regex101.com/
run the code below to fix the rows for the Democratic Republic of the Congo
End of explanation
## code to fix other incorrect names
Explanation: Exercise (regex):
Now that we've taken a close look at how to properly design and use regex to
clean string entries in our data, let's try to normalize the naming of a few
other countries.
Using the pandas code we constructed above as a template, write similar code to what we used above (using df.replace()) to fix the names of all of the rows/entries for the Ivory Coast and Canada to "africa_cote d'ivoire" and "americas_canada", respectively.
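A sketch of what this might look like -- the exact regex depends on how the messy names actually show up in your value_counts() output (here I am assuming the messy variants all contain 'ivoire' or 'canada'):
python
gapminder['region'].replace(".*ivoire.*", "africa_cote d'ivoire", regex=True, inplace=True)
gapminder['region'].replace(".*canada.*", "americas_canada", regex=True, inplace=True)
gapminder['region'].value_counts()   # each region should now appear 12 times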
End of explanation
# split region into country and continent to tidy data
gapminder['country']=gapminder['region'].str.split('_', 1).str[1]
gapminder['continent']=gapminder['region'].str.split('_', 1).str[0]
gapminder.head()
Explanation: Are we there yet??!??!
17. Looks like we successfully cleaned our data! Justify that we are done.
Tidy data
Having what is called a Tidy data set can make cleaning and analyzing your data much easier.
Two of the important aspects of Tidy data are:
* every variable has its own column
* every observation has its own row
There are other aspects of Tidy data, here is a good blog post about Tidy data in Python.
Currently the dataset has a single column for continent and country (the ‘region’ column).
18. Why is having a single column for continent and country not Tidy?
Let's make gapminder Tidy
We can split the region column into two, by using the underscore that separates continent from country. We can create a new column in the DataFrame by naming it before the = sign:
python
gapminder['country'] =
The following commands use the string method split() to split the string at the underscore (the first argument), which results in a list of two elements: before and after the _. The second argument, in this case "1", tells split() that the split should take place only at the first occurrence of the underscore. Then the str[] specifies which item in the resulting Series to return.
End of explanation
## mess around with it if you need to, but make a 'temp' version of gapminder to play with!
Explanation: 19. Describe what happens for each of the lines of code that contains split(). Be sure to explain why the 2 columns end up with different values.
End of explanation
## check new columns
# drop old region column
gapminder = gapminder.drop('region', 1)
gapminder.head()
Explanation: Removing and renaming columns
We have now added the columns country and continent, but we still have the old region column as well. In order to remove that column we use the drop() command. The first argument of the drop() command is the name of the element to be dropped. The second argument is the axis number: 0 for row, 1 for column.
Note: any time we are getting rid of stuff, we want to make sure that we are doing it for a good reason and that we know our data will be ok after. You might want to double check your new columns before you drop the old one
End of explanation
# turns all column names to lowercase
# yes you need the .columns on the left side too
gapminder.columns = gapminder.columns.str.lower()
gapminder.head()
Explanation: Finally, it is a good idea to look critically at your column names themselves. It is a good idea to be as consistent as possible when naming columns. We often use all lowercase for all column names to avoid accidentally making names that can be confusing -- gdppercap and gdpPercap and GDPpercap are all the same but gdpercap is different.
Avoid spaces in column names to simplify manipulating your data. Also look out for lingering white space at the beginning or end of your column names.
Run the following code that turns all column names to lowercase.
End of explanation
# rename column
gapminder = gapminder.rename(columns={'life exp' : 'lifeexp'})
gapminder.head()
Explanation: We also want to remove the space from the life exp column name. We can do that with Pandas rename method. It takes a dictionary as its argument, with the old column names as keys and new column names as values.
If you're unfamiliar with dictionaries, they are a very useful data structure in Python. You can read more about them here.
End of explanation
# exports gapminder_CandT.csv
gapminder.to_csv('gapminder_CandT.csv', index=False)
Explanation: Tidy data wrap-up
20. Explain why the data set at this point is Tidy or at least much tidier than it was before.
Export clean and tidy data file
Now that we have a clean and tidy data set in a DataFrame, we want to export it and give it a new name so that we can refer to it and use it
python
df.to_csv('file name for export', index=False) # index=False keeps index out of columns
For more info on this method, check out the docs for to_csv()
End of explanation
# first 15 rows
gapminder[0:15] # could also be gapminder[:15]
Explanation: Subsetting and sorting
There are many ways in which you can manipulate a Pandas DataFrame - here we will discuss only two: subsetting and sorting.
Sometimes you only want part of a larger data set, then you would subset your DataFrame. Other times you want to sort the data into a particular order (year most recent to oldest, GDP lowest to highest, etc.).
Subsetting
We can subset (or slice) by giving the numbers of the rows you want to see between square brackets.
Run the code below for an example:
End of explanation
## your slice
Explanation: write code to take a different slice
End of explanation
## run the code here to test prediction.
Explanation: 21. Predict what the slice below will do.
python
gapminder[-10:]
End of explanation
## your 'nother slice
Explanation: 22a. What does the negative number (in the cell above) mean?
22b. What happens when you leave the space before or after the colon empty?
write code to take another different slice
End of explanation
# makes a new DataFrame with just data from Africa
gapminder_africa = gapminder[gapminder['continent']=='africa']
## write code to quickly check that this worked
Explanation: More Subsetting
Subsetting can also be done by selecting for a particular value in a column.
For instance to select all of the rows that have 'africa' in the column 'continent'.
python
gapminder_africa = gapminder[gapminder['continent']=='africa']
Note the double equal sign: Remember that single equal signs are used in Python to assign something to a variable. The double equal sign is a comparison: in this case, the value from the column on the left has to be exactly equal to the string to the right.
Also note that we made a new DataFrame to contain our subset of the data from Africa.
End of explanation
# sort by year, from most recent to oldest
gapminder.sort_values(by='year', ascending = False)
Explanation: Even more...
There are several other fancy ways to subset rows and columns from DataFrames, the .loc and .iloc methods are particularly useful. If you're interested, look them up on the cheatsheet or in the docs.
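Just to give a flavor (a sketch -- see the docs for the full story):
python
gapminder.loc[0:4, ['country', 'pop']]   # label-based: index labels 0 through 4, two named columns
gapminder.iloc[0:4, 0:2]                 # position-based: first 4 rows, first 2 columns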
Sorting
Sorting may help to further organize and inspect your data. The method sort_values() takes a number of arguments; the most important ones are by and ascending. The following command will sort your DataFrame by year, beginning with the most recent.
End of explanation
## your code, sort_values() not in place
Explanation: Note: the sort_values() method does not sort in place.
write code to prove it.
End of explanation
## alphabetical by country
Explanation: 23. Make a new variable with a sorted version of the DataFrame organized by country, from ‘Afghanistan’ to ‘Zimbabwe’. Also include code to show that it is sorted correctly.
End of explanation
# review info()
gapminder.info()
Explanation: Summarize and plot
Summaries and Statistics are very useful for initial examination of data as well as in-depth analysis. Here we will only scratch the surface.
Plots/graphs/charts are also great visual ways to examine data and illustrate patterns.
Exploring your data is often iterative - summarize, plot, summarize, plot, etc. - sometimes it branches, sometimes there are more cleaning steps to be discovered...
Let's try it!
Summarizing data
Remember that the info() method gives a few useful pieces of information, including the shape of the DataFrame, the variable type of each column, and the amount of memory stored. We can see many of our changes (continent and country columns instead of region, a different number of rows, etc.) reflected in the output of the info() method.
End of explanation
# review describe
gapminder[['pop', 'lifeexp', 'gdppercap']].describe()
Explanation: We also saw above that the describe() method will take the numeric columns and give a summary of their values. We have to remember that we changed the column names, and this time it shouldn't have NAs.
End of explanation
# population mean
gapminder['pop'].mean()
Explanation: More summaries
What if we just want a single value, like the mean of the population column? We can call mean on a single column this way:
End of explanation
# population mean by continent
gapminder[['continent', 'pop']].groupby(by='continent').mean()
Explanation: What if we want to know the mean population by continent? Then we need to use the Pandas groupby() method and tell it which column we want to group by.
End of explanation
## population median by continent
Explanation: What if we want to know the median population by continent?
The method that gives you the median is called median().
write code below to get the median population by continent
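It is the same groupby pattern as the mean, just with a different summary method (a sketch):
python
gapminder[['continent', 'pop']].groupby(by='continent').median()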
End of explanation
# count number of rows
gapminder[['continent', 'country']].groupby(by='continent').count()
Explanation: Or the number of entries (rows) per continent?
End of explanation
# get size by continent
gapminder[['continent', 'country']].groupby(by='continent').size()
Explanation: Sometimes we don't want a whole DataFrame. Here is another way to do this that produces a Series (as opposed to a DataFrame) and tells us the number of entries (rows).
End of explanation
## mean GDP per capita by country
Explanation: We can also look at the mean GDP per capita of each country:
write code below to get the mean GDP per capita of each country
End of explanation
# pretty slick, right?!
continent_mean_pop = gapminder[['continent', 'pop']].groupby(by='continent').mean()
continent_mean_pop = continent_mean_pop.rename(columns = {'pop':'meanpop'})
continent_row_ct = gapminder[['continent', 'country']].groupby(by='continent').count()
continent_row_ct = continent_row_ct.rename(columns = {'country':'nrows'})
continent_median_pop = gapminder[['continent', 'pop']].groupby(by='continent').median()
continent_median_pop = continent_median_pop.rename(columns = {'pop':'medianpop'})
gapminder_summs = pd.concat([continent_row_ct,continent_mean_pop,continent_median_pop], axis=1)
gapminder_summs = gapminder_summs.rename(columns = {'y':'year'})
gapminder_summs
Explanation: What if we wanted a new DataFrame that just contained these summaries? This could be a table in a report, for example.
End of explanation
# import again just in case
import matplotlib.pyplot as plt
## generate some toy data and plot it
# required to generate toy data
import numpy as np
# magic to plot straight into notebook, probably no longer needed.
# %matplotlib inline
# evenly sampled time at 200ms intervals
t = np.arange(0., 5., 0.2)
# red dashes, blue squares and green triangles
plt.plot(t, t, 'r--', t, t**2, 'bs', t, t**3, 'g^')
plt.show()
Explanation: Visualization with matplotlib
Recall that matplotlib is Python's main visualization
library. It provides a range of tools for constructing plots, and numerous
high-level and add-on plotting libraries are
built with matplotlib in mind. When we were in the early stages of setting up
our analysis, we loaded these libraries like so:
End of explanation
# example histogram
# generate some random numbers from a normal distribution
data = 100 + np.random.randn(500)
# make a histogram with 20 bins
plt.hist(data, 20)
plt.xlabel('x axis')
plt.ylabel('y axis')
plt.show()
Explanation: Single variable plots
Histograms - provide a quick way of visualizing the distribution of numerical data, or the frequencies of observations for categorical variables.
Run the code below to generate a sample histogram.
End of explanation
## histogram of lifeexp
Explanation: write code below to make a histogram of the distribution of life expectancies of the countries in the gapminder DataFrame.
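The same plt.hist() pattern from the example works on a DataFrame column (a sketch -- adjust the bins and labels as you like):
python
plt.hist(gapminder['lifeexp'], 20)
plt.xlabel('life expectancy (years)')
plt.ylabel('number of observations')
plt.show()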
End of explanation |
1,259 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PyBroom Example - Multiple Datasets - Scipy Robust Fit
This notebook is part of pybroom.
This notebook demonstrate using pybroom when fitting a set of curves (curve fitting) using robust fitting and scipy.
We will show that pybroom greatly simplifies comparing, filtering and plotting fit results
from multiple datasets.
See
pybroom-example-multi-datasets
for an example using lmfit.Model instead of directly scipy.
Step1: Create Noisy Data
We start simulating N datasets which are identical except for the additive noise.
Step2: Add some outliers
Step3: Model Fitting
curve_fit()
Step4: Using a namedtuple is a clean way to assign names to an array of parameters
Step5: Unfortunately, not much data is returned by curve_fit, a 2-element tuple with
Step6: Then, we fit the N datasets with different loss functions storing result in a dict containing lists
Step7: <div class="alert alert-info">
**NOTE**
Step8: Tidying the results
Now we tidy the results, combining the results for the different loss functions
in a single DataFrames.
We start with the glance function, which returns one row per fit result
Step9: Then we apply tidy, which returns one row per parameter.
Since the OptimizeResult object returned by scipy.optimize
only contains an array of parameters, we need to pass the names as
an additional argument
Step10: Finally, we cannot apply the
augment function, since the OptimizeResult object
does not include much per-data-point information
(it may contain the array of residuals).
Plots
First we plot the peak position and sigmas distributions
Step11: A more complete overview for all the fit parameters can be obtained with a factorplot
Step12: From all the previous plots we see that, as expected, using robust fitting
with higher damping of outliers (i.e. cauchy vs huber or linear)
results in more accurate fit results.
Finally, we can have a peek at the comparison of raw data and fitted models
for a few datasets.
Since OptimizeResults does not include "augmented" data we need to
generate these data by evaluating the model with the best-fit parameters.
We use seaborn's FacetGrid, passing a custom function _plot
for model evaluation | Python Code:
%matplotlib inline
%config InlineBackend.figure_format='retina' # for hi-dpi displays
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.stats import norm  # normal PDF; matplotlib's old normpdf helper has been removed
import seaborn as sns
from lmfit import Model
import lmfit
print('lmfit: %s' % lmfit.__version__)
sns.set_style('whitegrid')
import pybroom as br
Explanation: PyBroom Example - Multiple Datasets - Scipy Robust Fit
This notebook is part of pybroom.
This notebook demonstrate using pybroom when fitting a set of curves (curve fitting) using robust fitting and scipy.
We will show that pybroom greatly simplifies comparing, filtering and plotting fit results
from multiple datasets.
See
pybroom-example-multi-datasets
for an example using lmfit.Model instead of directly scipy.
End of explanation
N = 200
x = np.linspace(-10, 10, 101)
peak1 = lmfit.models.GaussianModel(prefix='p1_')
peak2 = lmfit.models.GaussianModel(prefix='p2_')
model = peak1 + peak2
#params = model.make_params(p1_amplitude=1.5, p2_amplitude=1,
# p1_sigma=1, p2_sigma=1)
Y_data = np.zeros((N, x.size))
Y_data.shape, x.shape
for i in range(Y_data.shape[0]):
Y_data[i] = model.eval(x=x, p1_center=-1, p2_center=2,
p1_sigma=0.5, p2_sigma=1,
p1_height=1, p2_height=0.5)
Y_data += np.random.randn(*Y_data.shape)/10
Explanation: Create Noisy Data
We start simulating N datasets which are identical except for the additive noise.
End of explanation
num_outliers = int(Y_data.size * 0.05)
idx_ol = np.random.randint(low=0, high=Y_data.size, size=num_outliers)
Y_data.reshape(-1)[idx_ol] = (np.random.rand(num_outliers) - 0.5)*4
plt.plot(x, Y_data.T, 'ok', alpha=0.1);
plt.title('%d simulated datasets, with outliers' % N);
Explanation: Add some outliers:
End of explanation
import scipy.optimize as so
from collections import namedtuple
# Model PDF to be maximized
def model_pdf(x, a1, a2, mu1, mu2, sig1, sig2):
    return (a1 * norm.pdf(x, mu1, sig1) +
            a2 * norm.pdf(x, mu2, sig2))
result = so.curve_fit(model_pdf, x, Y_data[0])
type(result), type(result[0]), type(result[1])
result[0]
Explanation: Model Fitting
curve_fit()
End of explanation
Params = namedtuple('Params', 'a1 a2 mu1 mu2 sig1 sig2')
p = Params(*result[0])
p
Explanation: Using a namedtuple is a clean way to assign names to an array of parameters:
End of explanation
def residuals(p, x, y):
return y - model_pdf(x, *p)
Explanation: Unfortunately, not much data is returned by curve_fit, a 2-element tuple with:
array of best-fit parameters
estimated covariance matrix of the parameters
Therefore curve_fit is not very useful for detailed comparison of fit results.
A better interface for curve fitting would be lmfit.Model (see
this other notebook).
In the current notebook we keep exploring further options offered by scipy.optimize.
least_squares()
As an example, we use the least_squares function which supports robust loss functions and constraints.
We need to define the residuals:
End of explanation
losses = ('linear', 'huber', 'cauchy')
Results = {}
for loss in losses:
Results[loss] = [so.least_squares(residuals, (1,1,0,1,1,1), args=(x, y), loss=loss, f_scale=0.5)
for y in Y_data]
Explanation: Then, we fit the N datasets with different loss functions storing result in a dict containing lists:
End of explanation
# result = Results['cauchy'][0]
# for k in result.keys():
# print(k, type(result[k]))
Explanation: <div class="alert alert-info">
**NOTE**: For more details on robust fitting and on the different loss functions see
[Robust nonlinear regression in scipy](http://scipy-cookbook.readthedocs.io/items/robust_regression.html).
</div>
End of explanation
dg_tot = br.glance(Results, var_names=['loss', 'dataset'])
dg_tot.head()
dg_tot.success.all()
Explanation: Tidying the results
Now we tidy the results, combining the results for the different loss functions
in a single DataFrames.
We start with the glance function, which returns one row per fit result:
End of explanation
pnames = 'a1 a2 mu1 mu2 sig1 sig2'
dt_tot = br.tidy(Results, var_names=['loss', 'dataset'], param_names=pnames)
dt_tot.head()
Explanation: Then we apply tidy, which returns one row per parameter.
Since the OptimizeResult object returned by scipy.optimize
only contains an array of parameters, we need to pass the names as
an additional argument:
End of explanation
kws = dict(bins = np.arange(-2, 4, 0.1), histtype='step', lw=2)
for loss in losses:
dt_tot.query('(name == "mu1" or name == "mu2") and loss == "%s"' % loss)['value'].hist(label=loss, **kws)
kws['ax'] = plt.gca()
plt.title(' Distribution of peaks centers')
plt.legend();
kws = dict(bins = np.arange(0, 4, 0.1), histtype='step', lw=2)
for loss in losses:
dt_tot.query('(name == "sig1" or name == "sig2") and loss == "%s"' % loss)['value'].hist(label=loss, **kws)
kws['ax'] = plt.gca()
plt.title(' Distribution of peaks sigmas')
plt.legend();
Explanation: Finally, we cannot apply the
augment function, since the OptimizeResult object
does not include much per-data-point information
(it may contain the array of residuals).
Plots
First we plot the peak position and sigmas distributions:
End of explanation
sns.factorplot(x='loss', y='value', data=dt_tot, col='name', hue='loss',
col_wrap=4, kind='box', sharey=False);
Explanation: A more complete overview for all the fit parameters can be obtained with a factorplot:
End of explanation
def _plot(names, values, x, label=None, color=None):
df = pd.concat([names, values], axis=1)
kw_pars = br.tidy_to_dict(df)
y = model_pdf(x, **kw_pars)
plt.plot(x, y, lw=2, color=color, label=label)
grid = sns.FacetGrid(dt_tot.query('dataset < 9'), col='dataset', hue='loss', col_wrap=3)
grid.map(_plot, 'name', 'value', x=x)
grid.add_legend()
for i, ax in enumerate(grid.axes):
ax.plot(x, Y_data[i], 'o', ms=3, color='k')
plt.ylim(-1, 1.5)
Explanation: From all the previous plots we see that, as expected, using robust fitting
with higher damping of outliers (i.e. cauchy vs huber or linear)
results in more accurate fit results.
Finally, we can have a peek at the comparison of raw data and fitted models
for a few datasets.
Since OptimizeResults does not include "augmented" data we need to
generate these data by evaluating the model with the best-fit parameters.
We use seaborn's FacetGrid, passing a custom function _plot
for model evaluation:
End of explanation |
1,260 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
In this notebook, we will demonstrate how to copy and paste Page resources within the SAME agent from a Source Flow to a Target Flow.
These same methods/functions can be further modified to move pages BETWEEN Agents as well!
Prerequisites
Ensure you have a GCP Service Account key with the Dialogflow API Admin privileges assigned to it.
Step1: Imports
Step2: User Inputs
In the next section, we will collect runtime variables needed to execute this notebook.
This should be the only cell of the notebook you need to edit in order for this notebook to run.
For this notebook, we'll need the following inputs
Step3: Get Flows Map from Agent
First, we will extract a map of the Flow IDs and Flow Display Names from the Agent.
We pass reverse=True to the get_flows_map function which provides the Display Names as keys and the IDs as values.
This allows us to reference the map using Display Names vs. the long, cumbersome ID names.
Step4: Get All Pages from Source Flow
Once we have our flows_map, we will use it to extract all of the Page objects in our Source Flow.
In our case, we want to extract pages from the Flow named Default Start Flow.
We will call the list_pages function and pass in the appropriate dictionary reference.
Step5: Extract Subset of Pages To Copy
Now that we have my_pages (a List of Page objects) we need to extract the subset of pages that we plan on copying over to our Target Flow.
Usually you will do this using some regex matcher or pattern to select your pages.
The easiest way to do this is to ensure that the Page designer has prepended the page.display_name with a specific label.
The more unique the matching pattern, the better!
Ex
Step6: Create Page Shells in Target Flow
Using the subset_pages that we just collected, we will iterate through them and create a "shell" page in the Target flow.
This allows CX to assign a new UUID for the Target page which we will use to replace all references in the existing subset_pages
Remember to pass in the Target Flow ID using the reverse flows map from above.
NOTE - If you have a lot of pages, consider a time.sleep(.5) in your loop so as to not overrun your API limits!
Step7: Modify Page Objects
In a previous step, we collected the subset_pages list of Page objects.
Now we will use a 2-step process comprised of the following ~! MAGIC !~ CopyUtil functions
Step8: Update Pages in Target Flow
Our final step is to loop through our final_pages list and call the update_page function for each Page in the list.
This will write the modified Page objects to our Dialogflow CX Agent. | Python Code:
#If you haven't already, make sure you install the `dfcx-scrapi` library
!pip install dfcx-scrapi
Explanation: Introduction
In this notebook, we will demonstrate how to copy and paste Page resources within the SAME agent from a Source Flow to a Target Flow.
These same methods/functions can be further modified to move pages BETWEEN Agents as well!
Prerequisites
Ensure you have a GCP Service Account key with the Dialogflow API Admin privileges assigned to it.
End of explanation
from dfcx_scrapi.tools.copy_util import CopyUtil
Explanation: Imports
End of explanation
creds_path = '<YOUR_CREDS_FILE>'
agent_id = '<YOUR_AGENT_ID>'
source_flow = 'Default Start Flow'
target_flow = 'My Target Flow'
Explanation: User Inputs
In the next section, we will collect runtime variables needed to execute this notebook.
This should be the only cell of the notebook you need to edit in order for this notebook to run.
For this notebook, we'll need the following inputs:
creds_path: Your local path to your GCP Service Account Credentials
agent_id: Your Dialogflow CX Agent ID in String format
End of explanation
cu = CopyUtil(creds_path)
flows_map = cu.flows.get_flows_map(agent_id, reverse=True)
Explanation: Get Flows Map from Agent
First, we will extract a map of the Flow IDs and Flow Display Names from the Agent.
We pass reverse=True to the get_flows_map function which provides the Display Names as keys and the IDs as values.
This allows us to reference the map using Display Names vs. the long, cumbersome ID names.
End of explanation
my_pages = cu.pages.list_pages(flows_map[source_flow])
print('{} Page Count = {}'.format(source_flow, len(my_pages)))
Explanation: Get All Pages from Source Flow
Once we have our flows_map, we will use it to extract all of the Page objects in our Source Flow.
In our case, we want to extract pages from the Flow named Default Start Flow.
We will call the list_pages function and pass in the appropriate dictionary reference.
End of explanation
subset_pages = [] # define a list placeholder for your Page proto objects
for page in my_pages:
if 'MyFlow -' in page.display_name:
subset_pages.append(page)
print('Total Pages to Copy = {}'.format(len(subset_pages)))
Explanation: Extract Subset of Pages To Copy
Now that we have my_pages (a List of Page objects) we need to extract the subset of pages that we plan on copying over to our Target Flow.
Usually you will do this using some regex matcher or pattern to select your pages.
The easiest way to do this is to ensure that the Page designer has prepended the page.display_name with a specific label.
The more unique the matching pattern, the better!
Ex:
- MyFlow - Page 1
- MyFlow - Page 2
- MyFlow - Page N
End of explanation
for page in subset_pages:
cu.pages.create_page(flows_map[target_flow], display_name=page.display_name)
Explanation: Create Page Shells in Target Flow
Using the subset_pages that we just collected, we will iterate through them and create a "shell" page in the Target flow.
This allows CX to assign a new UUID for the Target page which we will use to replace all references in the existing subset_pages
Remember to pass in the Target Flow ID using the reverse flows map from above.
NOTE - If you have a lot of pages, consider a time.sleep(.5) in your loop so as to not overrun your API limits!
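For example, a throttled version of the loop above might look like this (just a sketch of that note):
python
import time

for page in subset_pages:
    cu.pages.create_page(flows_map[target_flow], display_name=page.display_name)
    time.sleep(0.5)  # brief pause between requests to stay under API rate limits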
End of explanation
# Step 1
subset_pages_prepped = cu.convert_from_source_page_dependencies(agent_id, subset_pages, source_flow)
# Step 2
final_pages = cu.convert_to_destination_page_dependencies(agent_id, subset_pages_prepped, target_flow)
Explanation: Modify Page Objects
In a previous step, we collected the subset_pages list of Page objects.
Now we will use a 2-step process comprised of the following ~! MAGIC !~ CopyUtil functions:
1. convert_from_source_page_dependencies
2. convert_to_destination_page_dependencies
In Step #1, we will use the following args:
- agent_id
- subset_pages (i.e. the original List of Page Objects we collected)
- source_flow
This will modify all of the UUIDs in the Page objects to be PLAIN TEXT STRING DISPLAY NAMES using internal map functions.
We will store the results in a variable called subset_pages_prepped
In Step #2, we will use the following args:
- agent_id
- subset_pages_prepped (i.e. our MODIFIED List of Page Objects we collected)
- target_flow
This will perform a reverse dictionary lookup on all of the previously modified resources using internal map functions and give them their newly assigned UUIDs.
We will store the results in a variable called final_pages
End of explanation
for page in final_pages:
cu.pages.update_page(page.name, page)
print('Updated Page: {}'.format(page.display_name))
Explanation: Update Pages in Target Flow
Our final step is to loop through our final_pages list and call the update_page function for each Page in the list.
This will write the modified Page objects to our Dialogflow CX Agent.
End of explanation |
1,261 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow IO Authors.
Step1: Azure Blob Storage with TensorFlow
Step2: Install and set up Azurite (optional)
If you do not have an Azure Storage account, the following steps install and set up Azurite, which emulates the Azure Storage interface.
Step3: Read and write files on Azure Storage with TensorFlow
The following is an example of reading and writing files on Azure Storage with TensorFlow's API.
Once the tensorflow-io package is imported, it automatically registers the azfs scheme, so Azure Storage behaves the same way as other file systems (e.g. POSIX or GCS).
The Azure Storage key is provided through the TF_AZURE_STORAGE_KEY environment variable; alternatively, TF_AZURE_USE_DEV_STORAGE can be set to True to use the Azurite emulator instead. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow IO Authors.
End of explanation
try:
%tensorflow_version 2.x
except Exception:
pass
!pip install tensorflow-io
Explanation: Azure Blob Storage with TensorFlow
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/io/tutorials/azure"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/io/tutorials/azure.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a> </td>
<td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/io/tutorials/azure.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a> </td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/io/tutorials/azure.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a> </td>
</table>
Note: in addition to Python packages, this notebook installs packages with npm install --user. Be careful when running it locally.
Overview
This tutorial shows how to read and write files on Azure Blob Storage with TensorFlow, through TensorFlow IO's Azure file system integration.
An Azure Storage account is needed to read and write files on Azure Blob Storage. The Azure Storage key is provided through an environment variable:
os.environ['TF_AZURE_STORAGE_KEY'] = '<key>'
The storage account name and container name are part of the filename URL:
azfs://<storage-account-name>/<container-name>/<path>
Since this tutorial is for demonstration purposes, you can optionally set up Azurite, an Azure Storage emulator. With the Azurite emulator it is possible to read and write files through the Azure Blob Storage interface with TensorFlow.
Setup and usage
Install the required packages, and restart the runtime
End of explanation
!npm install [email protected]
# The path for npm might not be exposed in PATH env,
# you can find it out through 'npm bin' command
npm_bin_path = get_ipython().getoutput('npm bin')[0]
print('npm bin path: ', npm_bin_path)
# Run `azurite-blob -s` as a background process.
# IPython doesn't recognize `&` in inline bash cells.
get_ipython().system_raw(npm_bin_path + '/' + 'azurite-blob -s &')
Explanation: Install and set up Azurite (optional)
If an Azure Storage account is not available, the following steps install and set up Azurite, which emulates the Azure Storage interface.
End of explanation
import os
import tensorflow as tf
import tensorflow_io as tfio
# Switch to False to use Azure Storage instead:
use_emulator = True
if use_emulator:
os.environ['TF_AZURE_USE_DEV_STORAGE'] = '1'
account_name = 'devstoreaccount1'
else:
# Replace <key> with Azure Storage Key, and <account> with Azure Storage Account
os.environ['TF_AZURE_STORAGE_KEY'] = '<key>'
account_name = '<account>'
# Alternatively, you can use a shared access signature (SAS) to authenticate with the Azure Storage Account
os.environ['TF_AZURE_STORAGE_SAS'] = '<your sas>'
account_name = '<account>'
pathname = 'az://{}/aztest'.format(account_name)
tf.io.gfile.mkdir(pathname)
filename = pathname + '/hello.txt'
with tf.io.gfile.GFile(filename, mode='w') as w:
w.write("Hello, world!")
with tf.io.gfile.GFile(filename, mode='r') as r:
print(r.read())
Explanation: Read and write files in Azure Storage with TensorFlow
Below is an example of reading and writing files in Azure Storage with the TensorFlow API.
Once the tensorflow-io package is imported, it automatically registers the azfs scheme, so it behaves the same way as other filesystems (e.g., POSIX or GCS).
The Azure Storage key is provided through the TF_AZURE_STORAGE_KEY environment variable. If it is not set, TF_AZURE_USE_DEV_STORAGE can be set to True so that the Azurite emulator is used instead.
End of explanation |
1,262 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
version 1.1
Введение (Introduction)
Данный блокнот является дополнительным материалом к статье по демонстрации примеров анализа данных и линейной регрессии представленной публикации на портале Habrahabr – https://habrahabr.ru/post/343216/
Step1: Первый взгляд на данные (First look at the data)
Посмотрим на корреляцию между некоторыми столбцами, а именно всеми кроме связанных с округами Москвы. Построим их попарные диаграммы
Let's look at the correlation between some columns, namely all except those connected with the districts of Moscow. We construct their pair diagrams
Step2: Посмотрим подробней на некоторые комбинации, в которых есть намек на линейную зависимость. Получим количественную оценку в виде коэффициента корреляции Пирсона.
Let's look in more detail at some combinations in which there is a similarity with linear dependence. We obtain a quantitative estimate in the form of a Pearson correlation coefficient.
Step3: С одной стороны, очевидно, что чем больше общее количество обращений или обращений в электронной форме, тем больше всего обращений. С другой стороны, надо отметить, что эта зависимость не полностью линейная и наверняка мы не смогли учесть всего.
On the one hand, it is obvious that the larger the total number of appeals, or the number of appeals in electronic form, the larger the overall number of appeals. On the other hand, it should be noted that this dependence is not completely linear, and we certainly could not take everything into account.
Давайте рассмотрим, еще что-нибудь. Например, найдем округ Москвы, где за год больше всего обращений граждан на 10000 человек населения.
Let's look at something else. For example, let's find the Moscow district with the largest number of citizen appeals per 10,000 residents over the year.
Step5: Добавим другие данные из сети (Add other data from the network)
Step6: Посмотрим связано ли как-нибудь количество положительных решений по обращениям с ценами на нефть. Соберем данные автоматически напишем простенький сборщик данных с сайта.
Let's see whether the number of positive decisions on appeals is somehow related to oil prices. To collect the data automatically, we will write a simple scraper for the site.
Step7: Линейная регрессия (Linear regression)
Произведём некоторые манипуляции с исходными данными для того чтобы строить модель линейной регрессии в последующих ячейках
Let's make some manipulations with the initial data in order to build a linear regression model in the cells below
Step8: Построим модель на основании большинства столбцов таблицы в качестве признаков, без учета данных о месяцах. Посмотрим, как это поможет нам предсказать, число положительных решений по обращениям граждан.
We will build the model using most of the table's columns as features, without taking into account the data on the months. Let's see how this helps us predict the number of positive decisions on citizens' appeals.
Step9: Мы будем использовать линейную регрессию с регуляризацией Гребень (Ridge). Данные поделим в соотношении 80% к 20 % (обучение / контроль), также будем проверять качество модели с помощью кросс валидации (в данном случае разбиение будет один цикл – один образец).
We will use linear regression with Ridge regularization. The data will be divided in the ratio of 80% to 20% (train/ test), and we will also check the quality of the model using cross validation (in this case, the split will be one cycle - one sample).
Step10: Посмотрим, как влияет цена на нефть на качество предсказания.
Let's see how the price of oil affects the quality of the prediction.
Step11: При идеально точном предсказании, все 4 точки должны были бы располагаться на линии.
With an ideally accurate prediction, all 4 points would have to be on the line.
Временной ряд (Time series)
До этого мы брали случайные точки дал контроля предсказания.
Давайте теперь рассмотрим тоже самое, но в контексте временного тренда.
Будем предсказывать количество положительных решений в «будущем».
Previously, we took random points as the hold-out for checking the predictions.
Let's now consider the same thing, but in the context of the trend.
We will predict the number of positive decisions in the "future".
Для начала посмотрим на нашу прошлую модель с ценами на нефть.
First, look at our past model with oil prices.
Step12: Уберем цены на нефть, зато добавим закодированные данные о месяцах.
We remove oil prices, but we add coded data about the months. | Python Code:
#import libraries
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import requests, bs4
import time
from sklearn import model_selection
from collections import OrderedDict
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn import linear_model
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
%pylab inline
#load data
df = pd.read_csv('msc_appel_data.csv', sep='\t', index_col='num')
df.tail(12)
Explanation: version 1.1
Введение (Introduction)
Данный блокнот является дополнительным материалом к статье по демонстрации примеров анализа данных и линейной регрессии представленной публикации на портале Habrahabr – https://habrahabr.ru/post/343216/
Учитывая возможные ошибки вызванные техническими и «человеческими» факторами при обработке данных, рекомендуется применение данного набора исключительно в демонстрационных целях.
This notebook is an additional material to the article on demonstrating examples of data analysis and linear regression.
More detailed on the Habrahabr - https://habrahabr.ru/post/343216/
Materials may contain errors, not recommended for serious research.
P.S. English text from google translate :)
Описание данных (Data description)
Данные о работе органов государственной власти г. Москвы с обращениями граждан. Собраны вручную с официального портала Мэра и Правительства Москвы - https://www.mos.ru/feedback/reviews/
num – Индекс записи
year – год записи
month – месяц записи
total_appeals – общее количество обращений за месяц
appeals_to_mayor – общее количество обращений в адрес Мэра
res_positive- количество положительных решений
res_explained – количество обращений на которые дали разъяснения
res_negative – количество обращений с отрицательным решением
El_form_to_mayor – количество обращений к Мэру в электронной форме
Pap_form_to_mayor - – количество обращений к Мэру на бумажных носителях
to_10K_total_VAO…to_10K_total_YUZAO – количество обращений на 10000 населения в различных округах Москвы
to_10K_mayor_VAO… to_10K_mayor_YUZAO– количество обращений в адрес Мэра и правительства Москвы на 10000 населения в различных округах города
Data on the work with appeals of citizens of the executive power of Moscow. Manually collected from the official portal of the Mayor and the Government of Moscow - https://www.mos.ru/feedback/reviews/
num - Record index
year is the year of recording
month - recording month
total_appeals - total number of hits per month
appeals_to_mayor - total number of appeals to the Mayor
res_positive - the number of appeals with positive decisions
res_explained - the number of appeals that were explained
res_negative - number of appeals with negative decision
El_form_to_mayor - the number of appeals to the Mayor in electronic form
Pap_form_to_mayor - - number of appeals to the Mayor on paper
to_10K_total_VAO ... to_10K_total_YUZAO - the number of appeals per 10000 population in various districts of Moscow
to_10K_mayor_VAO ... to_10K_mayor_YUZAO- the number of appeals to the Mayor and the Government of Moscow for 10,000 people in various districts of the city
Загрузим данные и библиотеки
Let's import libriaries and load data
End of explanation
columns_to_show = ['res_positive', 'res_explained', 'res_negative',
'total_appeals', 'appeals_to_mayor','El_form_to_mayor', 'Pap_form_to_mayor']
data=df[columns_to_show]
grid = sns.pairplot(df[columns_to_show])
savefig('1.png')
Explanation: Первый взгляд на данные (First look at the data)
Посмотрим на корреляцию между некоторыми столбцами, а именно всеми кроме связанных с округами Москвы. Построим их попарные диаграммы
Let's look at the correlation between some columns, namely all except those connected with the districts of Moscow. We construct their pair diagrams
End of explanation
print("Correlation coefficient for a explained review result to the total number of appeals =",
df.res_explained.corr(df.total_appeals) )
print("Corr.coeff. for a total number of appeals to mayor to the total number of appeals to mayor in electronic form =",
df.appeals_to_mayor.corr(df.El_form_to_mayor) )
Explanation: Посмотрим подробней на некоторые комбинации, в которых есть намек на линейную зависимость. Получим количественную оценку в виде коэффициента корреляции Пирсона.
Let's look in more detail at some combinations in which there is a similarity with linear dependence. We obtain a quantitative estimate in the form of a Pearson correlation coefficient.
End of explanation
district_columns = ['to_10K_total_VAO', 'to_10K_total_ZAO', 'to_10K_total_ZelAO',
'to_10K_total_SAO','to_10K_total_SVAO','to_10K_total_SZAO','to_10K_total_TiNAO','to_10K_total_CAO',
'to_10K_total_YUAO','to_10K_total_YUVAO','to_10K_total_YUZAO']
y_pos = np.arange(len(district_columns))
short_district_columns=district_columns.copy()
for i in range(len(short_district_columns)):
short_district_columns[i] = short_district_columns[i].replace('to_10K_total_','')
distr_sum = df[district_columns].sum()
plt.figure(figsize=(16,9))
plt.bar(y_pos, distr_sum, align='center', alpha=0.5)
plt.xticks(y_pos, short_district_columns)
plt.ylabel('Number of appeals')
plt.title('Number of appeals per 10,000 people for all time')
savefig('2.png')
Explanation: С одной стороны, очевидно, что чем больше общее количество обращений или обращений в электронной форме, тем больше всего обращений. С другой стороны, надо отметить, что эта зависимость не полностью линейная и наверняка мы не смогли учесть всего.
On the one hand, it is obvious that the larger the total number of appeals, or the number of appeals in electronic form, the larger the overall number of appeals. On the other hand, it should be noted that this dependence is not completely linear, and we certainly could not take everything into account.
Давайте рассмотрим, еще что-нибудь. Например, найдем округ Москвы, где за год больше всего обращений граждан на 10000 человек населения.
Let's look at something else. For example, let's find the Moscow district with the largest number of citizen appeals per 10,000 residents over the year.
End of explanation
To remind
district_columns = ['to_10K_total_VAO', 'to_10K_total_ZAO', 'to_10K_total_ZelAO',
'to_10K_total_SAO','to_10K_total_SVAO','to_10K_total_SZAO','to_10K_total_TiNAO','to_10K_total_CAO',
'to_10K_total_YUAO','to_10K_total_YUVAO','to_10K_total_YUZAO']
# we will collect the data manually from
# https://ru.wikipedia.org/wiki/%D0%90%D0%B4%D0%BC%D0%B8%D0%BD%D0%B8%D1%81%D1%82%D1%80%D0%B0%D1%82%D0%B8%D0%B2%D0%BD%D0%BE-%D1%82%D0%B5%D1%80%D1%80%D0%B8%D1%82%D0%BE%D1%80%D0%B8%D0%B0%D0%BB%D1%8C%D0%BD%D0%BE%D0%B5_%D0%B4%D0%B5%D0%BB%D0%B5%D0%BD%D0%B8%D0%B5_%D0%9C%D0%BE%D1%81%D0%BA%D0%B2%D1%8B
#the data is filled in the same order as the district_columns
district_population=[1507198,1368731,239861,1160576,1415283,990696,339231,769630,1776789,1385385,1427284]
#transition from 1/10000 to citizens' appeal to the entire population of the district
total_appel_dep=district_population*distr_sum/10000
plt.figure(figsize=(16,9))
plt.bar(y_pos, total_appel_dep, align='center', alpha=0.5)
plt.xticks(y_pos, short_district_columns)
plt.ylabel('Number of appeals')
plt.title('Number of appeals per total population of district for all time')
savefig('3.png')
Explanation: Добавим другие данные из сети (Add other data from the network)
End of explanation
#we use beautifulsoup
oil_page=requests.get('https://worldtable.info/yekonomika/cena-na-neft-marki-brent-tablica-s-1986-po-20.html')
b=bs4.BeautifulSoup(oil_page.text, "html.parser")
table=b.select('.item-description')
table = b.find('div', {'class': 'item-description'})
table_tr=table.find_all('tr')
d_parse=OrderedDict()
for tr in table_tr[1:len(table_tr)-1]:
td=tr.find_all('td')
d_parse[td[0].get_text()]=float(td[1].get_text())
# dictionary selection boundaries
d_start=358
d_end=379 #because the site has no data for October
#d_end=380 if the authors in the data source fill in the values for October, you must enter 380
# Uncomment all if grabber doesn't work
#d_parse=[("январь 2016", 30.8), ("февраль 2016", 33.2), ("март 2016", 39.25), ("апрель 2016", 42.78), ("май 2016", 47.09),
# ("июнь 2016", 49.78), ("июль 2016", 46.63), ("август 2016", 46.37), ("сентябрь 2016", 47.68), ("октябрь 2016", 51.1),
# ("ноябрь 2016", 47.97), ("декабрь 2016", 54.44), ("январь 2017", 55.98), ("февраль 2017", 55.95), ("март 2017", 53.38),
# ("апрель 2017", 53.54), ("май 2017", 50.66), ("июнь 2017", 47.91), ("июль 2017", 49.51), ("август 2017", 51.82) , ("сентябрь 2017", 55.74)]
#d_parse=dict(d_parse)
#d_start=0
#d_end=20
# values from January 2016 to October 2017
oil_price=list(d_parse.values())[d_start:d_end]
oil_price.append(57.64) #delete this when the source site shows data for October
#In the collected data the October's the data was calculated manually,
#in the future if it is fixed in the source, you can delete these lines and the code (oil_price.append(57.64)) above
df['oil_price']=oil_price
df.tail(5)
print("Correlation coefficient for the total number of appeals result to the oil price (in US $) =",
df.total_appeals.corr(df.oil_price) )
print("Correlation coefficient for a positive review result to the oil price (in US $) =",
df.res_positive.corr(df.oil_price) )
Explanation: Посмотрим связано ли как-нибудь количество положительных решений по обращениям с ценами на нефть. Соберем данные автоматически напишем простенький сборщик данных с сайта.
Let's see whether the number of positive decisions on appeals is somehow related to oil prices. To collect the data automatically, we will write a simple scraper for the site.
End of explanation
df2=df.copy()
#Let's make a separate column for each value of our categorical variable
df2=pd.get_dummies(df2,prefix=['month'])
#Let's code the month with numbers
d={'January':1, 'February':2, 'March':3, 'April':4, 'May':5, 'June':6, 'July':7,
'August':8, 'September':9, 'October':10, 'November':11, 'December':12}
month=df.month.map(d)
#We paste the information about the date from several columns
dt=list()
for year,mont in zip(df2.year.values, month.values):
s=str(year)+' '+str(mont)+' 1'
dt.append(s)
#convert the received data into the DateTime type and replace them with a column year
df2.rename(columns={'year': 'DateTime'}, inplace=True)
df2['DateTime']=pd.to_datetime(dt, format='%Y %m %d')
df2.head(5)
Explanation: Линейная регрессия (Linear regression)
Произведём некоторые манипуляции с исходными данными для того чтобы строить модель линейной регрессии в последующих ячейках
Let's make some manipulations with the initial data in order to build a linear regression model in the cells below
End of explanation
#Prepare the data
cols_for_regression=columns_to_show+district_columns
cols_for_regression.remove('res_positive')
cols_for_regression.remove('total_appeals')
X=df2[cols_for_regression].values
y=df2['res_positive']
#Scale the data
scaler =StandardScaler()
X_scal=scaler.fit_transform(X)
y_scal=scaler.fit_transform(y)
Explanation: Построим модель на основании большинства столбцов таблицы в качестве признаков, без учета данных о месяцах. Посмотрим, как это поможет нам предсказать, число положительных решений по обращениям граждан.
We will build the model using most of the table's columns as features, without taking into account the data on the months. Let's see how this helps us predict the number of positive decisions on citizens' appeals.
End of explanation
X_train, X_test, y_train, y_test = train_test_split(X_scal, y_scal, test_size=0.2, random_state=42)
#y_train=np.reshape(y_train,[y_train.shape[0],1])
#y_test=np.reshape(y_test,[y_test.shape[0],1])
loo = model_selection.LeaveOneOut()
#alpha coefficient is taken at a rough guess
lr = linear_model.Ridge(alpha=55.0)
scores = model_selection.cross_val_score(lr , X_train, y_train, scoring='mean_squared_error', cv=loo,)
print('CV Score:', scores.mean())
lr .fit(X_train, y_train)
print('Coefficients:', lr.coef_)
print('Test Score:', lr.score(X_test,y_test))
Explanation: Мы будем использовать линейную регрессию с регуляризацией Гребень (Ridge). Данные поделим в соотношении 80% к 20 % (обучение / контроль), также будем проверять качество модели с помощью кросс валидации (в данном случае разбиение будет один цикл – один образец).
We will use linear regression with Ridge regularization. The data will be divided in the ratio of 80% to 20% (train/ test), and we will also check the quality of the model using cross validation (in this case, the split will be one cycle - one sample).
End of explanation
X_oil=df2[cols_for_regression+['oil_price']].values
y_oil=df2['res_positive']
scaler =StandardScaler()
X_scal_oil=scaler.fit_transform(X_oil)
y_scal_oil=scaler.fit_transform(y_oil)
X_train, X_test, y_train, y_test = train_test_split(X_scal_oil, y_scal_oil, test_size=0.2, random_state=42)
#y_train=np.reshape(y_train,[y_train.shape[0],1])
#y_test=np.reshape(y_test,[y_test.shape[0],1])
lr = linear_model.Ridge()
loo = model_selection.LeaveOneOut()
lr = linear_model.Ridge(alpha=55.0)
scores = model_selection.cross_val_score(lr , X_train, y_train, scoring='mean_squared_error', cv=loo,)
print('CV Score:', scores.mean())
lr .fit(X_train, y_train)
print('Coefficients:', lr.coef_)
print('Test Score:', lr.score(X_test,y_test))
# plot for test data
plt.figure(figsize=(16,9))
plt.scatter(lr.predict(X_test), y_test, color='black')
plt.plot(y_test, y_test, '-', color='green',
linewidth=1)
plt.xlabel('relative number of positive results (predict)')
plt.ylabel('relative number of positive results (test)')
plt.title="Regression on test data"
print('predict: {0} '.format(lr.predict(X_test)))
print('real: {0} '.format(y_test))
savefig('4.png')
Explanation: Посмотрим, как влияет цена на нефть на качество предсказания.
Let's see how the price of oil affects the quality of the prediction.
End of explanation
l_bord = 18
r_bord = 22
X_train=X_scal_oil[0:l_bord]
X_test=X_scal_oil[l_bord:r_bord]
y_train=y_scal_oil[0:l_bord]
y_test=y_scal_oil[l_bord:r_bord]
loo = model_selection.LeaveOneOut()
lr = linear_model.Ridge(alpha=7.0)
scores = model_selection.cross_val_score(lr , X_train, y_train, scoring='mean_squared_error', cv=loo,)
print('CV Score:', scores.mean())
lr.fit(X_train, y_train)
print('Coefficients:', lr.coef_)
print('Test Score:', lr.score(X_test,y_test))
# plot for test data
plt.figure(figsize=(19,10))
#trainline
plt.scatter(df2.DateTime.values[0:l_bord], lr.predict(X_train), color='black')
plt.plot(df2.DateTime.values[0:l_bord], y_train, '--', color='green',
linewidth=3)
#test line
plt.scatter(df2.DateTime.values[l_bord:r_bord], lr.predict(X_test), color='black')
plt.plot(df2.DateTime.values[l_bord:r_bord], y_test, '--', color='blue',
linewidth=3)
#connecting line
plt.plot([df2.DateTime.values[l_bord-1],df2.DateTime.values[l_bord]], [y_train[l_bord-1],y_test[0]] ,
color='magenta',linewidth=2, label='train to test')
plt.xlabel('Date')
plt.ylabel('Relative number of positive results')
plt.title="Time series"
print('predict: {0} '.format(lr.predict(X_test)))
print('real: {0} '.format(y_test))
savefig('5.1.png')
Explanation: При идеально точном предсказании, все 4 точки должны были бы располагаться на линии.
With an ideally accurate prediction, all 4 points would have to be on the line.
Временной ряд (Time series)
До этого мы брали случайные точки дал контроля предсказания.
Давайте теперь рассмотрим тоже самое, но в контексте временного тренда.
Будем предсказывать количество положительных решений в «будущем».
Previously, we took random points as the hold-out for checking the predictions.
Let's now consider the same thing, but in the context of the trend.
We will predict the number of positive decisions in the "future".
Для начала посмотрим на нашу прошлую модель с ценами на нефть.
First, look at our past model with oil prices.
End of explanation
l_bord = 18
r_bord = 22
cols_months=['month_December', 'month_February', 'month_January', 'month_July', 'month_June', 'month_March', 'month_May', 'month_November',
'month_October','month_September','month_April','month_August']
X_month=df2[cols_for_regression+cols_months].values
y_month=df2['res_positive']
scaler =StandardScaler()
X_scal_month=scaler.fit_transform(X_month)
y_scal_month=scaler.fit_transform(y_month)
X_train=X_scal_month[0:l_bord]
X_test=X_scal_month[l_bord:r_bord]
y_train=y_scal_month[0:l_bord]
y_test=y_scal_month[l_bord:r_bord]
loo = model_selection.LeaveOneOut()
lr = linear_model.Ridge(alpha=7.0)
scores = model_selection.cross_val_score(lr , X_train, y_train, scoring='mean_squared_error', cv=loo,)
print('CV Score:', scores.mean())
lr.fit(X_train, y_train)
print('Coefficients:', lr.coef_)
print('Test Score:', lr.score(X_test,y_test))
# plot for test data
plt.figure(figsize=(19,10))
#trainline
plt.scatter(df2.DateTime.values[0:l_bord], lr.predict(X_train), color='black')
plt.plot(df2.DateTime.values[0:l_bord], y_train, '--', color='green',
linewidth=3)
#test line
plt.scatter(df2.DateTime.values[l_bord:r_bord], lr.predict(X_test), color='black')
plt.plot(df2.DateTime.values[l_bord:r_bord], y_test, '--', color='blue',
linewidth=3)
#connecting line
plt.plot([df2.DateTime.values[l_bord-1],df2.DateTime.values[l_bord]], [y_train[l_bord-1],y_test[0]] , color='magenta',linewidth=2, label='train to test')
plt.xlabel('Date')
plt.ylabel('Relative number of positive results')
plt.title="Time series"
print('predict: {0} '.format(lr.predict(X_test)))
print('real: {0} '.format(y_test))
savefig('5.2.png')
Explanation: Уберем цены на нефть, зато добавим закодированные данные о месяцах.
We remove oil prices, but we add coded data about the months.
End of explanation |
1,263 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Aerobic metabolic model
Import the necessary libraries
Step1: Load the data
Step2: Compute the aerobic metabolic model
Step3: Plot the information related to the MAP determination using Pinot et al. approach
Step4: Plot the information related to the AEI determination using Pinot et al. approach | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from skcycling.data_management import Rider
from skcycling.metrics import aerobic_meta_model
from skcycling.utils.fit import log_linear_model
from skcycling.utils.fit import linear_model
from datetime import date
Explanation: Aerobic metabolic model
Import the necessary libraries
End of explanation
filename = '../../data/rider/user_1.p'
my_rider = Rider.load_from_pickles(filename)
Explanation: Load the data
End of explanation
# Define the starting and ending date from which
# we want to compute the record power-profile
start_date = date(2014,1,1)
end_date = date(2014, 12, 31)
# Compute the record power-profile
my_rider.compute_record_pp((start_date, end_date))
# Compute the amm
pma, t_pma, aei, fit_info_pma_fitting, fit_info_aei_fitting = aerobic_meta_model(my_rider.record_pp_)
print 'MAP : {}, time at MAP : {}, aei : {}'.format(pma, t_pma, aei)
print 'Fitting information about the MAP: {}'.format(fit_info_pma_fitting)
print 'Fitting information about the AEI: {}'.format(fit_info_aei_fitting)
Explanation: Compute the aerobic metabolic model
End of explanation
# Plot the normalized power
plt.figure(figsize=(14, 10))
# Define the time samples to use for the plotting
t = np.array([3, 3.5, 4, 4.5, 5, 5.5, 6, 6.5, 7, 10, 20, 30, 45, 60, 120, 180, 240])
# Plot the log linear model found
plt.semilogx(t, log_linear_model(t, fit_info_pma_fitting['slope'], fit_info_pma_fitting['intercept']),
label=r'$R^2={0:.2f}$'.format(fit_info_pma_fitting['coeff_det']))
# Plot the confidence
plt.fill_between(t,
log_linear_model(t, fit_info_pma_fitting['slope'],
fit_info_pma_fitting['intercept']) + 2 * fit_info_pma_fitting['std_err'],
log_linear_model(t, fit_info_pma_fitting['slope'],
fit_info_pma_fitting['intercept']) - 2 * fit_info_pma_fitting['std_err'],
alpha=0.2)
# Plot the real data point
plt.semilogx(t, my_rider.record_pp_.resampling_rpp(t), 'go')
# Plot the MAP point
plt.semilogx(t_pma, pma, 'ro', label='t={0:.1f} min / MAP={1:.1f} W'.format(t_pma, pma))
# Plot the legend
plt.xlabel('Time in minutes (min)')
plt.ylabel('Power in watts (W)')
plt.xlim(min(t), max(t))
plt.ylim(0, max(my_rider.record_pp_.resampling_rpp(t)+50))
plt.title(r'Determine MAP with the model ${0:.1f} \times \log(t) + {1:.1f}$'.format(fit_info_pma_fitting['slope'],
fit_info_pma_fitting['intercept']))
plt.legend()
plt.show()
Explanation: Plot the information related to the MAP determination using Pinot et al. approach
End of explanation
# Plot the normalized power
plt.figure(figsize=(14, 10))
# Define the time samples to use for the plotting
t = np.array([3, 3.5, 4, 4.5, 5, 5.5, 6, 6.5, 7, 10, 20, 30, 45, 60, 120, 180, 240])
t = t[np.nonzero(t > t_pma)]
# Plot the log linear model found
plt.semilogx(t, log_linear_model(t, fit_info_aei_fitting['slope'], fit_info_aei_fitting['intercept']),
label=r'$R^2={0:.2f}$'.format(fit_info_aei_fitting['coeff_det']))
# Plot the confidence
plt.fill_between(t,
log_linear_model(t, fit_info_aei_fitting['slope'],
fit_info_aei_fitting['intercept']) + 2 * fit_info_aei_fitting['std_err'],
log_linear_model(t, fit_info_aei_fitting['slope'],
fit_info_aei_fitting['intercept']) - 2 * fit_info_aei_fitting['std_err'],
alpha=0.2)
# Plot the real data point
plt.semilogx(t, my_rider.record_pp_.resampling_rpp(t) / pma * 100., 'go')
# Plot the legend
plt.xlabel('Time in minutes (min)')
plt.ylabel('Power in watts (W)')
plt.xlim(min(t), max(t))
plt.ylim(0, 100)
plt.title(r'Determine AEI with the model ${0:.1f} \times \log(t) + {1:.1f}$'.format(fit_info_aei_fitting['slope'],
fit_info_aei_fitting['intercept']))
plt.legend()
plt.show()
Explanation: Plot the information related to the AEI determination using Pinot et al. approach
End of explanation |
1,264 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: 2. Check the types of the variable that you take into account along the way.
Step2: 3. Draw the histogram of total day minutes and total intl calls and interpret the result.
Step3: The above histogram shows us the frequency of the variable "Total day minutes" in the telecom_churn dataset. The histogram reads as follows
Step4: The above histogram shows us the frequency of the variable "Total intl calls" in the telecom_churn dataset. The histogram reads as follows
Step5: Density plots are, by definition, smoothed-out versions of the respective histograms. We get roughly the same information from the density plots as we do from the histograms, which is that "Total day minutes" is normally distributed, whereas "Total intl calls" has a significant right skew.
5. Draw the distplot, boxplot, and violin plot of total intl calls and interpret the result
Step6: A box plot helps us to understand the extent to which data spreads out. We see from the above box plot of "Total intl calls" that
Step7: The above violin plot of "Total intl calls" includes a rotated kernel density plot on each side. It shows us the full distribution of the data, and confirms that the data are most dense between 2.5 and 5.0 calls.
Step8: The above distplot of "Total intl calls" shows us similar information as the violin plot, which is that the data are most dense between 2.5 and 5.0 calls.
6. Using churn and customer service calls variables, draw the count plot and interpret the result*
Step9: Most customers do not churn.
Step10: The most frequent value for 'Customer service calls' is 1.0, followed by 2, 0, 3, 4, 5, 6, and 7. It's interesting that a significant number of customers don't seem to make customer service calls, since 0 is the third most frequent number of calls.
7. Identify the correlation between the variables using pandas and seaborn libraries
Step11: Total day minutes is strongly correlated with Total day charge.
Total eve minutes is strongly correlated with Total eve charge.
Total night minutes is strongly correlated with Total night charge.
Total intl minutes is strongly correlated with Total intl charge.
8. Detect the relationship between total day minutes and total night minutes variable using a visualization technique of your choice
Step12: There does not appear to be a linear relationship between Total day minutes and Total night minutes.
9. Try to understand the relationship between total day minutes and two categorical variables, namely churn and customer service calls using catplot.
Hint
Step13: From the above catplot, we see that the median number of total day minutes for customers who have churned is higher than the median number of total day minutes for customers who have not churned, for customer service calls under 4. Starting at 4 calls, the trend reverses and the median number of total day minutes for customers who have churned is lower than the median number of total day minutes for customers who have not churned.
A clear relationship between total day minutes and churn is difficult to detect here.
10. Using subplot function, draw histogram of all numerical variables | Python Code:
#codes here
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv("https://raw.githubusercontent.com/Yorko/mlcourse.ai/master/data/telecom_churn.csv")
df.head()
Explanation: <a href="https://colab.research.google.com/github/gaargly/gaargly.github.io/blob/master/2021_06_30_Lira_3rdAssignment_Tyler.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
In this assignment, we will use the telecom_churn dataset to explore the data using visualization techniques. Please note that for every visualization you apply, you need to interpret what you have found in order to get full credit.
1. Import the data first
End of explanation
#codes here
df.dtypes
Explanation: 2. Check the types of the variable that you take into account along the way.
End of explanation
#codes here
plt.figure(figsize=(10,5))
plt.hist(df['Total day minutes'])
plt.xlabel('Total Day Minutes')
plt.ylabel('Frequency')
plt.title('Histogram of Total Day Minutes')
plt.show()
Explanation: 3. Draw the histogram of total day minutes and total intl calls and interpret the result.
End of explanation
#codes here
plt.figure(figsize=(10,5))
plt.hist(df['Total intl calls'])
plt.xlabel('Total Intl Calls')
plt.ylabel('Frequency')
plt.title('Histogram of Total Intl Calls')
plt.show()
Explanation: The above histogram shows us the frequency of the variable "Total day minutes" in the telecom_churn dataset. The histogram reads as follows:
The data appear to be approximately normally distributed.
The most frequent interval gathers around total day minutes of 200.
The next most frequent interval gathers around total day minutes of 150.
The data range from 0 to 350 minutes, with a mean between 150 and 200 minutes (at around 175 minutes).
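A quick numeric cross-check of the reading above (a rough sketch reusing the df already loaded in this notebook):
print(df['Total day minutes'].describe()[['mean', 'min', 'max']])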
End of explanation
#codes here
from scipy.stats import kde
data = df['Total day minutes']
density = kde.gaussian_kde(data)
x = np.linspace(0,350,20)
y = density(x)
plt.plot(x,y)
plt.title("Density Plot of Total Day Minutes")
plt.show()
#codes here
data = df['Total intl calls']
density = kde.gaussian_kde(data)
x = np.linspace(0,20,300)
y = density(x)
plt.plot(x,y)
plt.title("Density Plot of the Total Intl Calls")
plt.show()
Explanation: The above histogram shows us the frequency of the variable "Total intl calls" in the telecom_churn dataset. The histogram reads as follows:
The data have a right skew.
The most frequent interval gathers around total intl calls of 2.5.
The next most frequent interval gathers around total intl calls of 5.0.
A very small proportion of accounts had more than 10.0 calls in this dataset.
4. This time draw the density plot of the same variable and discuss the difference between these two plots
End of explanation
# Boxplot
plt.figure(figsize=(10,5))
plt.boxplot(df['Total intl calls'])
plt.ylabel('Total Intl Calls')
plt.title('Boxplot of Total Intl Calls')
plt.xticks([])
plt.show()
Explanation: Density plots are, by definition, smoothed-out versions of the respective histograms. We get roughly the same information from the density plots as we do from the histograms, which is that "Total day minutes" is normally distributed, whereas "Total intl calls" has a significant right skew.
5. Draw the distplot, boxplot, and violin plot of total intl calls and interpret the result
End of explanation
# Violin Plot
plt.figure(figsize=(10,5))
plt.violinplot(df['Total intl calls'])
plt.xlabel('Probability')
plt.ylabel('Total Intl Calls')
plt.title('Violin plot of Total Intl Calls')
plt.show()
Explanation: A box plot helps us to understand the extent to which data spreads out. We see from the above box plot of "Total intl calls" that:
Minimum: The smallest value excluding any outliers is 0.0
First quartile (25th Percentile): The middle value between the smallest number (not the minimum) and the median of the dataset is a little over 2.5
Median of the data: The middle value of the dataset is around 3.5
Third quartile: The middle value between the largest number and the median of the dataset is around 6.0
Maximum: The biggest value excluding any outliers is 10.0
There are 10 outliers with values bigger than 10.0
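A quick way to sanity-check the outlier statement above is to apply the rule matplotlib's boxplot uses by default (points beyond Q3 + 1.5 * IQR are drawn as outliers). This is only a rough sketch reusing the df already loaded in this notebook:
q1 = df['Total intl calls'].quantile(0.25)
q3 = df['Total intl calls'].quantile(0.75)
iqr = q3 - q1
upper_fence = q3 + 1.5 * iqr  # whisker limit under the 1.5 * IQR rule
outliers = df[df['Total intl calls'] > upper_fence]
print('Upper fence:', upper_fence)
print('Rows above the fence:', outliers.shape[0])
print('Distinct outlier values:', sorted(outliers['Total intl calls'].unique()))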
End of explanation
# Distplot
sns.set(rc={"figure.figsize": (8, 4)}); np.random.seed(0)
x = df['Total intl calls']
ax = sns.distplot(x)
plt.show()
Explanation: The above violin plot of "Total intl calls" includes a rotated kernel density plot on each side. It shows us the full distribution of the data, and confirms that the data are most dense between 2.5 and 5.0 calls.
End of explanation
#codes here
p = sns.countplot(data=df, x = 'Churn')
Explanation: The above distplot of "Total intl calls" shows us similar information as the violin plot, which is that the data are most dense between 2.5 and 5.0 calls.
6. Using churn and customer service calls variables, draw the count plot and interpret the result*
End of explanation
#codes here
p = sns.countplot(data=df, x = 'Customer service calls')
Explanation: Most customers do not churn.
End of explanation
#codes here
sns.heatmap(df.corr())
plt.show()
Explanation: The most frequent value for 'Customer service calls' is 1.0, followed by 2, 0, 3, 4, 5, 6, and 7. It's interesting that a significant number of customers don't seem to make customer service calls, since 0 is the third most frequent number of calls.
7. Identify the correlation between the variables using pandas and seaborn libraries
End of explanation
#codes here
plt.figure(figsize=(10,10))
plt.subplot(2,1,1)
plt.scatter(df['Total day minutes'],df['Total night minutes'])
plt.xlabel('Total day minutes')
plt.ylabel('Total night minutes')
plt.title('Total day minutes vs Total night minutes')
plt.show()
Explanation: Total day minutes is strongly correlated with Total day charge.
Total eve minutes is strongly correlated with Total eve charge.
Total night minutes is strongly correlated with Total night charge.
Total intl minutes is strongly correlated with Total intl charge.
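A short numeric follow-up to the pairs listed above (a rough sketch reusing the df and np already loaded here): take the absolute correlation matrix, blank out the diagonal, and list the strongest remaining pairs. Each pair shows up twice because the matrix is symmetric.
num_corr = df.select_dtypes(include=[np.number]).corr().abs()
num_corr = num_corr.where(~np.eye(len(num_corr), dtype=bool))  # drop self-correlations
pairs = num_corr.unstack().dropna().sort_values(ascending=False)
print(pairs.head(8))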
8. Detect the relationship between total day minutes and total night minutes variable using a visualization technique of your choice
End of explanation
#codes here
sns.catplot(
x="Churn",
y="Total day minutes",
col="Customer service calls",
data=df[df["Customer service calls"] < 8],
kind="box",
col_wrap=4,
height=3,
aspect=0.8,
);
Explanation: There does not appear to be a linear relationship between Total day minutes and Total night minutes.
9. Try to understand the relationship between total day minutes and two categorical variables, namely churn and customer service calls using catplot.
Hint: try different values of customer service calls
End of explanation
#codes here
fig, axs = plt.subplots(7, 2, figsize=(7, 7))
sns.histplot(data=df, x="Number vmail messages", kde=True, color="violet", ax=axs[0, 0])
sns.histplot(data=df, x="Total day minutes", kde=True, color="indigo", ax=axs[0, 1])
sns.histplot(data=df, x="Total day calls", kde=True, color="blue", ax=axs[1, 0])
sns.histplot(data=df, x="Total day charge", kde=True, color="green", ax=axs[1, 1])
sns.histplot(data=df, x="Total eve minutes", kde=True, color="yellow", ax=axs[2, 0])
sns.histplot(data=df, x="Total eve calls", kde=True, color="orange", ax=axs[2, 1])
sns.histplot(data=df, x="Total eve charge", kde=True, color="red", ax=axs[3, 0])
sns.histplot(data=df, x="Total night minutes", kde=True, color="lightblue", ax=axs[3, 1])
sns.histplot(data=df, x="Total night calls", kde=True, color="black", ax=axs[4, 0])
sns.histplot(data=df, x="Total night charge", kde=True, color="purple", ax=axs[4, 1])
sns.histplot(data=df, x="Total intl minutes", kde=True, color="navy", ax=axs[5, 0])
sns.histplot(data=df, x="Total intl calls", kde=True, color="coral", ax=axs[5, 1])
sns.histplot(data=df, x="Total intl charge", kde=True, color="cyan", ax=axs[6, 0])
sns.histplot(data=df, x="Customer service calls", kde=True, color="magenta", ax=axs[6, 1])
plt.show()
Explanation: From the above catplot, we see that the median number of total day minutes for customers who have churned is higher than the median number of total day minutes for customers who have not churned, for customer service calls under 4. Starting at 4 calls, the trend reverses and the median number of total day minutes for customers who have churned is lower than the median number of total day minutes for customers who have not churned.
A clear relationship between total day minutes and churn is difficult to detect here.
10. Using subplot function, draw histogram of all numerical variables
End of explanation |
1,265 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PageRank exercise
Question 1
Consider three Web pages with the following links
Step1: Suppose we compute PageRank with a β of 0.7, and we introduce the additional constraint that the sum of the PageRanks of the three pages must be 3, to handle the problem that otherwise any multiple of a solution will also be a solution. Compute the PageRanks a, b, and c of the three pages A, B, and C, respectively.
Step2: Question 2
Consider three Web pages with the following links
Step3: Suppose we compute PageRank with β=0.85. Write the equations for the PageRanks a, b, and c of the three pages A, B, and C, respectively. Then, identify the correct equations representing a, b and c.
Step4: Question 3
Consider three Web pages with the following links
Step5: Assuming no "taxation," compute the PageRanks a, b, and c of the three pages A, B, and C, using iteration, starting with the "0th" iteration where all three pages have rank a = b = c = 1. Compute as far as the 5th iteration, and also determine what the PageRanks are in the limit.
Step6: Question 4
Consider the link graph below. First, construct the L, the link matrix, as discussed in the HITS algorithm. Then do the following
Step7: Question 5
Compute the Topic-Specific PageRank for the following link topology. Assume that pages selected for the teleport set are nodes 1 and 2 and that in the teleport set, the weight assigned for node 1 is twice that of node 2. Assume further that the teleport probability, (1 - beta), is 0.3. Which of the following statements is correct?
Step8: TSPR(1) = 0.3576
TSPR(2) = 0.2252
TSPR(3) = 0.2454
TSPR(4) = 0.1718
Question 6
The spam-farm architecture suffers from the problem that the target page has many links --- one to each supporting page. To avoid that problem, the spammer could use the architecture shown below
Step9: There, k "second-tier" nodes act as intermediaries. The target page t has only to link to the k second-tier pages, and each of those pages links to m/k of the m supporting pages. Each of the supporting pages links only to t (although most of these links are not shown). Suppose the taxation parameter is β = 0.85, and x is the amount of PageRank supplied from outside to the target page. Let n be the total number of pages in the Web. Finally, let y be the PageRank of target page t. If we compute the formula for y in terms of k, m, and n, we get a formula with the form
y = ax + bm/n + ck/n
Note | Python Code:
from IPython.display import Image
Image(filename='pagerank1.jpeg')
Explanation: PageRank exercise
Question 1
Consider three Web pages with the following links:
End of explanation
import numpy as np
# Adjacency matrix
# m1 = [ 0, 0, 0]
# [0.5, 0, 0]
# [0.5, 1, 1]
m1 = np.matrix([[0, 0, 0],[0.5, 0, 0],[0.5, 1, 1]])
beta = 0.7
# r = beta * m1 * r + ((1-beta)/N)
def r_p(r):
return beta * m1 * r + np.matrix([0.1,0.1,0.1]).T
r = np.matrix([1.0/3,1.0/3,1.0/3]).T
for i in range(1000):
r = r_p(r)
print "Final PageRank: \n" + str(r*3)
a = r[0] * 3
b = r[1] * 3
c = r[2] * 3
print 'a = ', a
print 'b = ', b
print 'c = ', c
print 'a + b = ', a + b
print 'b + c = ', b + c
print 'a + c = ', a + c
Explanation: Suppose we compute PageRank with a β of 0.7, and we introduce the additional constraint that the sum of the PageRanks of the three pages must be 3, to handle the problem that otherwise any multiple of a solution will also be a solution. Compute the PageRanks a, b, and c of the three pages A, B, and C, respectively.
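One way to see where these numbers come from (a sketch, assuming the taxed share (1 - β) is spread equally over the N = 3 pages and that a + b + c = 3): A has no in-links, B receives half of A's rank, and C receives the other half of A's rank plus all of B's rank and its own self-link, so
a = 0.3
b = 0.7 * (a / 2) + 0.3
c = 0.7 * (a / 2 + b + c) + 0.3
Solving this system gives a = 0.3, b = 0.405 and c = 2.295, which is what the iterative computation in the code cell converges to.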
End of explanation
Image(filename='pagerank2.jpeg')
Explanation: Question 2
Consider three Web pages with the following links:
End of explanation
import numpy as np
# Adjacency matrix
# m2 = [ 0, 0, 1]
# [0.5, 0, 0]
# [0.5, 1, 0]
m2 = np.matrix([[0, 0, 1],[0.5, 0, 0],[0.5, 1, 0]])
beta =0.85
def r_p(r):
return beta * m2 * r + np.matrix([0.05,0.05,0.05]).T
r = np.matrix([1.0/3,1.0/3,1.0/3]).T
for i in range(1000):
r = r_p(r)
print "Final PageRank: \n" + str(r)
a = r[0]
b = r[1]
c = r[2]
print "0.95a = ", 0.95*a, "= 0.9c + 0.05b = ", 0.9*c + 0.05*b
print "0.95b = ", 0.95*b, "= 0.475a + 0.05c = ", 0.475*a + 0.05*c
print "0.95c = ", 0.95*c, "= 0.9b + 0.475a = ", 0.9*b + 0.475*a
Explanation: Suppose we compute PageRank with β=0.85. Write the equations for the PageRanks a, b, and c of the three pages A, B, and C, respectively. Then, identify the correct equations representing a, b and c.
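One way to write them (a sketch, assuming the taxed share (1 - β) is split equally among the three pages): A is pointed to only by C, B receives half of A's rank, and C receives the other half of A's rank plus all of B's rank, so
a = 0.85 * c + 0.05 * (a + b + c)
b = 0.85 * (a / 2) + 0.05 * (a + b + c)
c = 0.85 * (a / 2 + b) + 0.05 * (a + b + c)
Rearranging gives exactly the checks printed by the code cell: 0.95a = 0.9c + 0.05b, 0.95b = 0.475a + 0.05c, and 0.95c = 0.9b + 0.475a.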
End of explanation
Image(filename='pagerank2.jpeg')
Explanation: Question 3
Consider three Web pages with the following links:
End of explanation
import numpy as np
# Adjacency matrix
# m3 = [ 0, 0, 1]
# [0.5, 0, 0]
# [0.5, 1, 0]
m3 = np.matrix([[0, 0, 1],[0.5, 0, 0],[0.5, 1, 0]])
beta = 1
r = np.matrix([1,1,1]).T
for i in range(50):
r = m3.dot(r)
print i+1
print r
print "Final PageRank: \n" + str(r)
Explanation: Assuming no "taxation," compute the PageRanks a, b, and c of the three pages A, B, and C, using iteration, starting with the "0th" iteration where all three pages have rank a = b = c = 1. Compute as far as the 5th iteration, and also determine what the PageRanks are in the limit.
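A quick note on the limit (a sketch): the transition matrix is column-stochastic, so the total a + b + c = 3 is preserved at every iteration, and a fixed point must satisfy a = c, b = a / 2 and c = a / 2 + b. Combining these with the preserved total gives a = 1.2, b = 0.6 and c = 1.2, which is the limit the iteration in the code cell approaches.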
End of explanation
Image(filename='pagerank4.jpg')
import numpy as np
# Function to normalize all values so that the largest value is 1
def norm(Matrix):
return Matrix/float(Matrix.max())
def estimate(L,h):
# To estimate of the authority vector a = LTh
#a = L.T*h
a = np.dot(L.T, h)
# Normalize a by dividing all values so the largest value is 1
a = norm(a)
# To estimate of the hubbiness vector h = La
#h = L*a
h = np.dot(L, a)
# Normalize h by dividing all values so the largest value is 1
h = norm(h)
return a,h
# The vector h is (the transpose of) [1,1,1,1]
h = np.matrix([1,1,1,1]).T
# The link graph: 1->2; 1->3; 2->1; 3->4; 4->3
L = np.matrix([[0,1,1,0],
[1,0,0,0],
[0,0,0,1],
[0,0,1,0]])
# After step 1
a,h = estimate(L,h)
print "After step 1:"
print "authority:", np.round(a.T, decimals=3)
print "hubbiness:", np.round(h.T, decimals=3)
# After step 2 (repeat of step 1)
a,h = estimate(L,h)
print "Final estimate:"
print "authority:", np.round(a.T, decimals=3)
print "hubbiness:", np.round(h.T, decimals=3)
Explanation: Question 4
Consider the link graph below. First, construct the L, the link matrix, as discussed in the HITS algorithm. Then do the following:
Start by assuming the hubbiness of each node is 1; that is, the vector h is (the transpose of) [1,1,1,1].
Compute an estimate of the authority vector $a=L^Th$.
Normalize a by dividing all values so the largest value is 1.
Compute an estimate of the hubbiness vector h=La.
Normalize h by dividing all values so the largest value is 1.
Repeat steps 2-5.
Now, identify the final estimates.
End of explanation
Image(filename='pagerank4.jpg')
import numpy as np
A = np.matrix([[ 0.0, 0.5, 0.5, 0.0 ],
[ 1.0, 0.0, 0.0, 0.0 ],
[ 0.0, 0.0, 0.0, 1.0 ],
[ 0.0, 0.0, 1.0, 0.0 ],]).T
w = 1.0/3.0
B = np.matrix([[2*w, 2*w, 2*w, 2*w],
[w, w, w, w],
[0, 0, 0, 0],
[0, 0, 0, 0]])
beta = 0.7
r = np.ones((A.shape[0], 1)) / A.shape[0]
for i in range(50):
r = beta * np.dot(A, r) + (1 - beta) * np.dot(B, r)
print i+1
print r
Explanation: Question 5
Compute the Topic-Specific PageRank for the following link topology. Assume that pages selected for the teleport set are nodes 1 and 2 and that in the teleport set, the weight assigned for node 1 is twice that of node 2. Assume further that the teleport probability, (1 - beta), is 0.3. Which of the following statements is correct?
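The computation in the code cell follows the topic-specific update r = beta * A * r + (1 - beta) * v with beta = 0.7, where the teleport vector v puts weight 2/3 on node 1, weight 1/3 on node 2 and 0 on the other nodes, so node 1 gets twice the weight of node 2; the matrix B in the code reproduces this v as long as the ranks sum to 1, which they do here.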
End of explanation
Image(filename='pagerank5.jpeg')
Explanation: TSPR(1) = 0.3576
TSPR(2) = 0.2252
TSPR(3) = 0.2454
TSPR(4) = 0.1718
Question 6
The spam-farm architecture suffers from the problem that the target page has many links --- one to each supporting page. To avoid that problem, the spammer could use the architecture shown below:
End of explanation
import numpy as np
import math
beta = 0.85
a = 1.0/ (1 - np.power(beta, 3))
b = beta / (1.0 + beta + np.power(beta, 2))
c = np.power(beta, 2)/ (1.0 + beta + np.power(beta, 2))
print 'a = %f , b = %f , c = %f' % (a, b, c)
Explanation: There, k "second-tier" nodes act as intermediaries. The target page t has only to link to the k second-tier pages, and each of those pages links to m/k of the m supporting pages. Each of the supporting pages links only to t (although most of these links are not shown). Suppose the taxation parameter is β = 0.85, and x is the amount of PageRank supplied from outside to the target page. Let n be the total number of pages in the Web. Finally, let y be the PageRank of target page t. If we compute the formula for y in terms of k, m, and n, we get a formula with the form
y = ax + bm/n + ck/n
Note: To arrive at this form, it is necessary at the last step to drop a low-order term that is a fraction of 1/n. Determine coefficients a, b, and c, remembering that β is fixed at 0.85. Then, identify the value of these coefficients.
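One way to derive the coefficients (a sketch): let z be the PageRank of each second-tier page and w the PageRank of each supporting page. Then
z = β * y / k + (1 - β) / n
w = β * z * k / m + (1 - β) / n
y = x + β * m * w (dropping t's own (1 - β) / n share, the low-order term mentioned above)
Substituting gives y * (1 - β^3) = x + β * (1 - β) * m / n + β^2 * (1 - β) * k / n, so a = 1 / (1 - β^3), b = β / (1 + β + β^2) and c = β^2 / (1 + β + β^2), which is exactly what the code cell evaluates for β = 0.85.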
End of explanation |
1,266 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center>
<img src="../../img/ods_stickers.jpg">
Open Machine Learning Course. Session No. 2
</center>
Author of the material
Step1: Let's do a little EDA
Step2: To begin with, it is always a good idea to look at the values that the variables take.
Let's convert the data to a "Long Format" representation and use factorplot to draw the counts of the values that the categorical variables take.
Step3: We see that the classes of the target variable are balanced, which is great!
We can also split the elements of the training set by the values of the target variable
Step4: We see that the distributions of cholesterol and glucose change a lot depending on the target variable. Coincidence?
A bit of statistics on the unique values of the features.
Step5: In total
Step6: 2. Height distribution for men and women
As we saw while exploring the unique values, gender is encoded by the values 1 and 2; the decoding was not given in the data description, but we guessed who is who by computing the mean height (or weight) for the different values of the gender feature. Now let's do the same thing, but graphically.
Build a violinplot for height and gender. Use
Step7: 3. Rank correlation
In most cases the Pearson linear correlation coefficient is enough to find patterns in the data, but we will go a bit further and use rank correlation, which will help us find pairs in which a smaller rank in the variational series of one feature always precedes a larger rank of the other (or vice versa in the case of negative correlation).
Build the correlation matrix using the Spearman coefficient
3.1 Which features now correlate most strongly (by Spearman) with each other?
Height, Weight
Age, Weight
Cholesterol, Gluc
Cardio, Cholesterol
Ap_hi, Ap_lo
Smoke, Alco
Step8: 3.2 Why did we get such a (relatively) large rank correlation value for these features?
Inaccuracies in the data (errors during data collection)
The relationship is spurious; the variables should not be related to each other at all
The nature of the data
Step13: 4. Joint distribution of features
Build a joint distribution plot (jointplot) of the two features that correlate most strongly with each other (by Spearman).
It seems our plot turned out uninformative because of outliers in the values. Build the same plot, but with a logarithmic scale.
Step14: 4.1 How many clearly pronounced clusters are there in the joint plot of the selected features with the logarithmic scale?
1
2
3
more than three
Step15: 5. Barplot
Let's compute how many full years old the respondents were at the moment they were added to the database.
Build a countplot with age on the x axis and counts on the y axis. Each age value should have two bars corresponding to the number of people of each cardio class at that age.
5. At what age does the number of patients with CVD first become larger than the number of healthy people?
44
53
64
70 | Python Code:
# load all the necessary packages
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.ticker
%matplotlib inline
# configure the appearance of seaborn plots
sns.set_context(
"notebook",
font_scale = 1.5,
rc = {
"figure.figsize" : (12, 9),
"axes.titlesize" : 18
}
)
Explanation: <center>
<img src="../../img/ods_stickers.jpg">
Open Machine Learning Course. Session No. 2
</center>
Author of the material: Ilya Baryshnikov. The material is distributed under the Creative Commons CC BY-NC-SA 4.0 license. It may be used for any purpose (edited, corrected, and taken as a basis) except commercial ones, with mandatory attribution of the author.
<center> Homework assignment No. 2
<center> Visual analysis of cardiovascular disease data
In this assignment you are asked to answer several questions about cardiovascular disease data with the help of visual analysis. The data were used in the ML Boot Camp 5 competition (you do not need to download them; they are already in the repository).
Fill in the code in the cells (where it says "Your code here") and answer the questions in the web form.
In the competition, the task was to determine the presence or absence of cardiovascular disease (CVD) from the results of a patient examination.
Data description.
Objective features:
Age (age)
Height (height)
Weight (weight)
Gender (gender)
Examination results:
Systolic and diastolic blood pressure (ap_hi, ap_lo)
Cholesterol (cholesterol)
Glucose (gluc)
Subjective features (reported by the patients):
Smoking (smoke)
Alcohol consumption (alco)
Physical activity (active)
Target feature (which will be interesting to predict):
- Presence of cardiovascular disease according to a classical medical examination (cardio)
The cholesterol and glucose indicators take one of three classes: normal, above normal, well above normal. The subjective features are binary.
All indicators are given as of the moment of the examination.
End of explanation
train = pd.read_csv('../../data/mlbootcamp5_train.csv', sep=';',
index_col='id')
print('Dataset size: ', train.shape)
train.head()
Explanation: Let's do a little EDA
End of explanation
train_uniques = pd.melt(frame=train, value_vars=['gender','cholesterol',
'gluc', 'smoke', 'alco',
'active', 'cardio'])
train_uniques = pd.DataFrame(train_uniques.groupby(['variable',
'value'])['value'].count()) \
.sort_index(level=[0, 1]) \
.rename(columns={'value': 'count'}) \
.reset_index()
sns.factorplot(x='variable', y='count', hue='value',
data=train_uniques, kind='bar', size=12);
Explanation: To begin with, it is always a good idea to look at the values that the variables take.
Let's convert the data to a "Long Format" representation and use factorplot to draw the counts of the values that the categorical variables take.
End of explanation
train_uniques = pd.melt(frame=train, value_vars=['gender','cholesterol',
'gluc', 'smoke', 'alco',
'active'],
id_vars=['cardio'])
train_uniques = pd.DataFrame(train_uniques.groupby(['variable', 'value',
'cardio'])['value'].count()) \
.sort_index(level=[0, 1]) \
.rename(columns={'value': 'count'}) \
.reset_index()
sns.factorplot(x='variable', y='count', hue='value',
col='cardio', data=train_uniques, kind='bar', size=9);
Explanation: We see that the classes of the target variable are balanced, which is great!
We can also split the elements of the training set by the values of the target variable: sometimes such plots immediately reveal the most significant feature.
End of explanation
for c in train.columns:
n = train[c].nunique()
print(c)
if n <= 3:
print(n, sorted(train[c].value_counts().to_dict().items()))
else:
print(n)
print(10 * '-')
Explanation: We see that the distributions of cholesterol and glucose change a lot depending on the target variable. Coincidence?
A bit of statistics on the unique values of the features.
End of explanation
corr_matrix = train.drop(['age', 'ap_hi', 'ap_lo',
'gluc', 'active'], axis=1).corr()
sns.heatmap(corr_matrix, annot=True)
Explanation: In total:
- Five quantitative features (not counting id)
- Seven categorical features
- 70000 records
1. Visualize the correlation matrix
To better understand the features in the dataset, we can compute the matrix of correlation coefficients between the features. <br>
Build a heatmap of the correlation matrix. The matrix is built with pandas, using the default parameter values.
1. Which two features correlate most strongly (by Pearson) with the gender feature?
Cardio, Cholesterol
Height, Smoke
Smoke, Alco
Height, Weight
End of explanation
train.head()
filtered_df = train[(train['ap_lo'] <= train['ap_hi']) &
(train['height'] >= train['height'].quantile(0.025)) &
(train['height'] <= train['height'].quantile(0.975)) &
(train['weight'] >= train['weight'].quantile(0.025)) &
(train['weight'] <= train['weight'].quantile(0.975))]
df = pd.melt(filtered_df, value_vars=['weight'], id_vars='gender')
sns.violinplot(x='variable', y='value', hue='gender', data=df)
female = filtered_df[filtered_df['gender'] == 1]
male = filtered_df[filtered_df['gender'] == 2]
# two separate one-dimensional KDEs of height, one per gender, on the same axes
ax = sns.kdeplot(female.height, color="red", shade=True, label="female")
ax = sns.kdeplot(male.height, color="blue", shade=True, label="male")
Explanation: 2. Height distribution for men and women
As we saw while exploring the unique values, gender is encoded by the values 1 and 2; the decoding was not given in the data description, but we guessed who is who by computing the mean height (or weight) for the different values of the gender feature. Now let's do the same thing, but graphically.
Build a violinplot for height and gender. Use:
- hue to split by gender
- scale to assess the number of people of each gender
For correct rendering, convert the DataFrame to a "Long Format" representation using the pandas melt function.
<br>
one more example
Plot two separate kdeplots of height, for men and for women, on a single chart. The difference will be more visible there, but it will not be possible to assess the number of men/women.
End of explanation
filtered_df = train[(train['ap_lo'] <= train['ap_hi']) &
(train['height'] >= train['height'].quantile(0.025)) &
(train['height'] <= train['height'].quantile(0.975)) &
(train['weight'] >= train['weight'].quantile(0.025)) &
(train['weight'] <= train['weight'].quantile(0.975))]
corr_matrix = filtered_df.drop(['gender', 'active'], axis=1).corr(method = 'spearman')
sns.heatmap(corr_matrix, annot=True)
Explanation: 3. Rank correlation
In most cases the linear Pearson correlation coefficient is enough to reveal patterns in the data, but we will go a little further and use rank correlation, which helps us find pairs of features in which a smaller rank in the ordered series of one feature always precedes a larger rank of the other (or the reverse, in the case of negative correlation).
Build the correlation matrix using the Spearman coefficient.
3.1 Which features now correlate most strongly with each other (by Spearman)?
Height, Weight
Age, Weight
Cholesterol, Gluc
Cardio, Cholesterol
Ap_hi, Ap_lo
Smoke, Alco
End of explanation
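# A programmatic check of question 3.1 (a sketch): take the upper triangle of the
# Spearman correlation matrix and sort the feature pairs by absolute value.
spearman = filtered_df.corr(method='spearman').abs()
pairs = spearman.where(np.triu(np.ones(spearman.shape), k=1).astype(bool)).stack()
print(pairs.sort_values(ascending=False).head())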
mybins = np.logspace(0, np.log10(100), 100)
data = sns.load_dataset('tips')
g = sns.JointGrid('total_bill', 'tip', data,xlim=[1,100],ylim=[0.01,100])
g.plot_marginals(sns.distplot, hist=True, kde=True, color='blue',bins=mybins)
g.plot_joint(plt.scatter, color='black', edgecolor='black')
ax = g.ax_joint
ax.set_xscale('log')
ax.set_yscale('log')
g.ax_marg_x.set_xscale('log')
g.ax_marg_y.set_yscale('log')
Explanation: 3.2 Why did we get such a (relatively) large rank-correlation value for these features?
Inaccuracies in the data (errors during data collection)
The relationship is spurious; the variables should not be related to each other at all
The nature of the data
End of explanation
g = sns.JointGrid('ap_lo', 'ap_hi', train,xlim=[1,100000],ylim=[0.01,100000])
g.plot_marginals(sns.distplot, hist=True, kde=True, color='blue')
g.plot_joint(plt.scatter, color='black', edgecolor='black')
ax = g.ax_joint
ax.set_xscale('log')
ax.set_yscale('log')
g.ax_marg_x.set_xscale('log')
g.ax_marg_y.set_yscale('log')
# Grid
g.ax_joint.grid(True)
# Convert the logarithmic values on the axes back to real values
#g.ax_joint.yaxis.set_major_formatter(matplotlib.ticker.FuncFormatter(lambda x, pos: str(x) if x == 0 else str(round(int(np.exp(x))))))
#g.ax_joint.xaxis.set_major_formatter(matplotlib.ticker.FuncFormatter(lambda x, pos: str(x) if x == 0 else str(round(int(np.exp(x))))))
clear_ds = train[
(train['ap_lo'] > 0)
& (train['ap_hi'] > 0)
& (train['ap_hi'] >= train['ap_hi'].quantile(0.025))
& (train['ap_hi'] <= train['ap_hi'].quantile(0.975))
& (train['ap_lo'] >= train['ap_lo'].quantile(0.025))
& (train['ap_lo'] <= train['ap_lo'].quantile(0.975))
]
clear_ds['ap_lo_log'] = np.log1p(clear_ds['ap_lo'])
clear_ds['ap_hi_log'] = np.log1p(clear_ds['ap_hi'])
g = sns.jointplot("ap_lo_log", "ap_hi_log", data=clear_ds)
# Grid
g.ax_joint.grid(True)
# Convert the logarithmic values on the axes back to real values
g.ax_joint.yaxis.set_major_formatter(matplotlib.ticker.FuncFormatter(lambda x, pos: str(round(int(np.expm1(x))))))
g.ax_joint.xaxis.set_major_formatter(matplotlib.ticker.FuncFormatter(lambda x, pos: str(round(int(np.expm1(x))))))
Explanation: 4. Joint distribution of features
Build a joint distribution plot (jointplot) for the two features that correlate most strongly with each other (by Spearman).
It seems our plot turned out uninformative because of outliers in the values. Build the same plot, but with a logarithmic scale.
End of explanation
# Your code here
Explanation: 4.1 How many clearly separated clusters appear on the joint plot of the selected features with the logarithmic scale?
1
2
3
more than three
End of explanation
train['age_years'] = (train['age'] // 365.25).astype(int)
Explanation: 5. Barplot
Let's compute how many full years old the respondents were at the moment they were entered into the database.
End of explanation
sns.countplot(x="age_years", hue="cardio", data=train)
Explanation: Build a countplot with age on the x-axis and the count on the y-axis. Each age value should have two bars, corresponding to the number of people of each cardio class at that age.
5. At what age does the number of patients with cardiovascular disease (CVD) first exceed the number of healthy people?
44
53
64
70
End of explanation |
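# A programmatic check of question 5 (a sketch): the first age at which the count of
# cardio == 1 exceeds the count of cardio == 0.
age_counts = train.groupby(['age_years', 'cardio']).size().unstack(fill_value=0)
print(age_counts[age_counts[1] > age_counts[0]].index.min())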
1,267 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Autoregressive Moving Average (ARMA)
Step1: Sunspots Data
Step2: Does our model obey the theory?
Step3: This indicates a lack of fit.
In-sample dynamic prediction. How good does our model do?
Step4: Exercise
Step5: Let's make sure this model is estimable.
Step6: What does this mean?
Step7: For mixed ARMA processes the Autocorrelation function is a mixture of exponentials and damped sine waves after (q-p) lags.
The partial autocorrelation function is a mixture of exponentials and dampened sine waves after (p-q) lags.
Step8: Exercise
Step9: Hint
Step10: The p-value of the unit-root test is very large, so we cannot reject the null hypothesis of a unit root.
%matplotlib inline
from __future__ import print_function
import numpy as np
from scipy import stats
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
from statsmodels.graphics.api import qqplot
Explanation: Autoregressive Moving Average (ARMA): Sunspots data
End of explanation
print(sm.datasets.sunspots.NOTE)
dta = sm.datasets.sunspots.load_pandas().data
dta.index = pd.Index(sm.tsa.datetools.dates_from_range('1700', '2008'))
del dta["YEAR"]
dta.plot(figsize=(12,8));
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(dta.values.squeeze(), lags=40, ax=ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(dta, lags=40, ax=ax2)
arma_mod20 = sm.tsa.ARMA(dta, (2,0)).fit(disp=False)
print(arma_mod20.params)
arma_mod30 = sm.tsa.ARMA(dta, (3,0)).fit(disp=False)
print(arma_mod20.aic, arma_mod20.bic, arma_mod20.hqic)
print(arma_mod30.params)
print(arma_mod30.aic, arma_mod30.bic, arma_mod30.hqic)
Explanation: Sunspots Data
End of explanation
sm.stats.durbin_watson(arma_mod30.resid.values)
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111)
ax = arma_mod30.resid.plot(ax=ax);
resid = arma_mod30.resid
stats.normaltest(resid)
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111)
fig = qqplot(resid, line='q', ax=ax, fit=True)
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(resid.values.squeeze(), lags=40, ax=ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(resid, lags=40, ax=ax2)
r,q,p = sm.tsa.acf(resid.values.squeeze(), qstat=True)
data = np.c_[range(1,41), r[1:], q, p]
table = pd.DataFrame(data, columns=['lag', "AC", "Q", "Prob(>Q)"])
print(table.set_index('lag'))
Explanation: Does our model obey the theory?
End of explanation
predict_sunspots = arma_mod30.predict('1990', '2012', dynamic=True)
print(predict_sunspots)
fig, ax = plt.subplots(figsize=(12, 8))
ax = dta.loc['1950':].plot(ax=ax)
fig = arma_mod30.plot_predict('1990', '2012', dynamic=True, ax=ax, plot_insample=False)
def mean_forecast_err(y, yhat):
return y.sub(yhat).mean()
mean_forecast_err(dta.SUNACTIVITY, predict_sunspots)
Explanation: This indicates a lack of fit.
In-sample dynamic prediction. How good does our model do?
End of explanation
from statsmodels.tsa.arima_process import arma_generate_sample, ArmaProcess
np.random.seed(1234)
# include zero-th lag
arparams = np.array([1, .75, -.65, -.55, .9])
maparams = np.array([1, .65])
Explanation: Exercise: Can you obtain a better fit for the Sunspots model? (Hint: sm.tsa.AR has a method select_order)
Simulated ARMA(4,1): Model Identification is Difficult
End of explanation
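# One possible answer to the exercise above (a sketch, using the hinted
# sm.tsa.AR.select_order): choose the AR lag order by AIC and refit.
ar_lag = sm.tsa.AR(dta.values.squeeze()).select_order(maxlag=16, ic='aic')
ar_fit = sm.tsa.AR(dta.values.squeeze()).fit(maxlag=ar_lag)
print(ar_lag)
print(ar_fit.params)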
arma_t = ArmaProcess(arparams, maparams)
arma_t.isinvertible
arma_t.isstationary
Explanation: Let's make sure this model is estimable.
End of explanation
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111)
ax.plot(arma_t.generate_sample(nsample=50));
arparams = np.array([1, .35, -.15, .55, .1])
maparams = np.array([1, .65])
arma_t = ArmaProcess(arparams, maparams)
arma_t.isstationary
arma_rvs = arma_t.generate_sample(nsample=500, burnin=250, scale=2.5)
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(arma_rvs, lags=40, ax=ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(arma_rvs, lags=40, ax=ax2)
Explanation: What does this mean?
End of explanation
arma11 = sm.tsa.ARMA(arma_rvs, (1,1)).fit(disp=False)
resid = arma11.resid
r,q,p = sm.tsa.acf(resid, qstat=True)
data = np.c_[range(1,41), r[1:], q, p]
table = pd.DataFrame(data, columns=['lag', "AC", "Q", "Prob(>Q)"])
print(table.set_index('lag'))
arma41 = sm.tsa.ARMA(arma_rvs, (4,1)).fit(disp=False)
resid = arma41.resid
r,q,p = sm.tsa.acf(resid, qstat=True)
data = np.c_[range(1,41), r[1:], q, p]
table = pd.DataFrame(data, columns=['lag', "AC", "Q", "Prob(>Q)"])
print(table.set_index('lag'))
Explanation: For mixed ARMA processes the Autocorrelation function is a mixture of exponentials and damped sine waves after (q-p) lags.
The partial autocorrelation function is a mixture of exponentials and dampened sine waves after (p-q) lags.
End of explanation
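# A small illustration of the statement above (a sketch): the theoretical ACF and
# PACF of the simulated ARMA(4,1) process, taken directly from the process object.
print(arma_t.acf(10))
print(arma_t.pacf(10))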
macrodta = sm.datasets.macrodata.load_pandas().data
macrodta.index = pd.Index(sm.tsa.datetools.dates_from_range('1959Q1', '2009Q3'))
cpi = macrodta["cpi"]
Explanation: Exercise: How good of in-sample prediction can you do for another series, say, CPI
End of explanation
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111)
ax = cpi.plot(ax=ax);
ax.legend();
Explanation: Hint:
End of explanation
print(sm.tsa.adfuller(cpi)[1])
Explanation: The p-value of the augmented Dickey-Fuller unit-root test is very large, so we cannot reject the null hypothesis of a unit root: CPI is non-stationary in levels and should be differenced before fitting an ARMA model.
End of explanation |
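# A possible next step for the CPI exercise (a sketch): difference the series to
# remove the unit root, then fit a low-order ARMA to the differenced data.
cpi_diff = cpi.diff().dropna()
print(sm.tsa.adfuller(cpi_diff)[1])
arma_cpi = sm.tsa.ARMA(cpi_diff, (1, 1)).fit(disp=False)
print(arma_cpi.params)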
1,268 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
AI Explanations
Step1: Run the following cell to create your Cloud Storage bucket if it does not already exist.
Step2: Import libraries
Import the libraries for this tutorial.
Step3: Download and preprocess the data
This section shows how to download the flower images, use the tf.data API to create a data input pipeline, and split the data into training and validation sets.
Step4: The following cell contains some image visualization utility functions. This code isn't essential to training or deploying the model.
Step5: Read images and labels from TFRecords
In this dataset the images are stored as TFRecords.
TODO
Step6: Use the visualization utility function provided earlier to preview flower images with their labels.
Step7: Create training and validation datasets
Step8: Build, train, and evaluate the model
This section shows how to build, train, evaluate, and get local predictions from a model by using the TF.Keras Sequential API.
Step9: Train the model
Train this on a GPU by attaching a GPU to your CAIP notebook instance. On a CPU, it'll take ~30 minutes to run training. On a GPU, it takes ~5 minutes.
TODO
Step10: Visualize local predictions
Get predictions on your local model and visualize the images with their predicted labels, using the visualization utility function provided earlier.
Step11: Export the model as a TF 2.3 SavedModel
When using TensorFlow 2.3, you export the model as a SavedModel and load it into Cloud Storage. During export, you need to define a serving function to convert data to the format your model expects. If you send encoded data to AI Platform, your serving function ensures that the data is decoded on the model server before it is passed as input to your model.
Serving function for image data
Sending base 64 encoded image data to AI Platform is more space efficient. Since this deployed model expects input data as raw bytes, you need to ensure that the b64 encoded data gets converted back to raw bytes before it is passed as input to the deployed model.
To resolve this, define a serving function (serving_fn) and attach it to the model as a preprocessing step. Add a @tf.function decorator so the serving function is part of the model's graph (instead of upstream on a CPU).
When you send a prediction or explanation request, the request goes to the serving function (serving_fn), which preprocesses the b64 bytes into raw numpy bytes (preprocess_fn). At this point, the data can be passed to the model (m_call).
TODO
Step12: Get input and output signatures
Use TensorFlow's saved_model_cli to inspect the model's SignatureDef. You'll use this information when you deploy your model to AI Explanations in the next section.
Step13: You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.
You need the signatures for the following layers
Step14: Generate explanation metadata
In order to deploy this model to AI Explanations, you need to create an explanation_metadata.json file with information about your model inputs, outputs, and baseline. You can use the Explainable AI SDK to generate most of the fields.
For image models, using [0,1] as your input baseline represents black and white images. This example uses np.random to generate the baseline because the training images contain a lot of black and white (i.e. daisy petals).
Note
Step15: Deploy model to AI Explanations
This section shows how to use gcloud to deploy the model to AI Explanations, using two different explanation methods for image models.
Create the model
Step16: Create explainable model versions
For image models, we offer two choices for explanation methods
Step17: Deploy an XRAI model
Step18: Get predictions and explanations
This section shows how to prepare test images to send to your deployed model, and how to send a batch prediction request to AI Explanations.
Get and prepare test images
To prepare the test images
Step19: Format your explanation request
Prepare a batch of instances.
Step20: Send the explanations request and visualize
If you deployed both an IG and an XRAI model, you can request explanations for both models and compare the results.
If you only deployed one model above, run only the cell for that explanation method.
You can use the Explainable AI SDK to send explanation requests to your deployed model and visualize the explanations.
TODO
Step21: Check explanations and baselines
To better make sense of your feature attributions, you can compare them with your model's baseline. For image models, the baseline_score returned by AI Explanations is the score your model would give an image input with the baseline you specified. The baseline is different for each class in the model. Every time your model predicts tulip as the top class, you'll see the same baseline score.
Earlier, you used a baseline image of np.random randomly generated values. If you'd like the baseline for your model to be solid black and white images instead, pass [0,1] as the value to input_baselines in your explanation_metadata.json file above. If the baseline_score is very close to the value of example_score, the highlighted pixels may not be meaningful.
Calculate the difference between baseline_score and example_score for the three test images above.
Note that the score values for classification models are probabilities
Step22: Explain the baseline image
Another way to check your baseline choice is to view explanations for this model's baseline image
Step23: Send the explanation request for the baseline image. (To check a baseline image for XRAI, change IG_VERSION to XRAI_VERSION below.)
Step24: Visualize the explanation for your random baseline image, highlighting the pixels that contributed to the prediction
Step25: The difference between your model's predicted score and the baseline score for this image should be close to 0. Run the following cell to confirm. If there is a difference between these two values, try increasing the number of integral steps used when you deploy your model.
Step26: Cleaning Up | Python Code:
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
import os
PROJECT_ID = "" # TODO: your PROJECT_ID here.
os.environ["PROJECT_ID"] = PROJECT_ID
BUCKET_NAME = PROJECT_ID # TODO: replace your BUCKET_NAME, if needed
REGION = "us-central1"
os.environ["BUCKET_NAME"] = BUCKET_NAME
os.environ["REGION"] = REGION
Explanation: AI Explanations: Deploying an image model
Overview
This tutorial shows how to train a Keras classification model on image data and deploy it to the AI Platform Explanations service to get feature attributions on your deployed model.
If you've already got a trained model and want to deploy it to AI Explanations, skip to the Export the model as a TF 2 SavedModel section.
Dataset
The dataset used for this tutorial is the flowers dataset from TensorFlow Datasets.
Objective
The goal of this tutorial is to train a model on a simple image dataset (flower classification) to understand how you can use AI Explanations with image models. For image models, AI Explanations returns an image with the pixels highlighted that signaled your model's prediction the most.
This tutorial focuses more on deploying the model to AI Platform with Explanations than on the design of the model itself.
Setup
Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
End of explanation
%%bash
exists=$(gsutil ls -d | grep -w gs://${BUCKET_NAME}/)
if [ -n "$exists" ]; then
echo -e "Bucket gs://${BUCKET_NAME} already exists."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET_NAME}
echo -e "\nHere are your current buckets:"
gsutil ls
fi
Explanation: Run the following cell to create your Cloud Storage bucket if it does not already exist.
End of explanation
import io
import os
import random
from base64 import b64encode
import numpy as np
import PIL
import tensorflow as tf
from matplotlib import pyplot as plt
AUTO = tf.data.experimental.AUTOTUNE
print("AUTO", AUTO)
import explainable_ai_sdk
Explanation: Import libraries
Import the libraries for this tutorial.
End of explanation
GCS_PATTERN = "gs://flowers-public/tfrecords-jpeg-192x192-2/*.tfrec"
IMAGE_SIZE = [192, 192]
BATCH_SIZE = 32
VALIDATION_SPLIT = 0.19
CLASSES = [
"daisy",
"dandelion",
"roses",
"sunflowers",
"tulips",
] # do not change, maps to the labels in the data (folder names)
# Split data files between training and validation
filenames = tf.io.gfile.glob(GCS_PATTERN)
random.shuffle(filenames)
split = int(len(filenames) * VALIDATION_SPLIT)
training_filenames = filenames[split:]
validation_filenames = filenames[:split]
print(
"Pattern matches {} data files. Splitting dataset into {} training files and {} validation files".format(
len(filenames), len(training_filenames), len(validation_filenames)
)
)
validation_steps = (
int(3670 // len(filenames) * len(validation_filenames)) // BATCH_SIZE
)
steps_per_epoch = (
int(3670 // len(filenames) * len(training_filenames)) // BATCH_SIZE
)
print(
"With a batch size of {}, there will be {} batches per training epoch and {} batch(es) per validation run.".format(
BATCH_SIZE, steps_per_epoch, validation_steps
)
)
Explanation: Download and preprocess the data
This section shows how to download the flower images, use the tf.data API to create a data input pipeline, and split the data into training and validation sets.
End of explanation
def dataset_to_numpy_util(dataset, N):
dataset = dataset.batch(N)
if tf.executing_eagerly():
# In eager mode, iterate in the Dataset directly.
for images, labels in dataset:
numpy_images = images.numpy()
numpy_labels = labels.numpy()
break
else:
# In non-eager mode, we must get the TF node that
# yields the next item and run it in a tf.Session.
get_next_item = dataset.make_one_shot_iterator().get_next()
with tf.Session() as ses:
numpy_images, numpy_labels = ses.run(get_next_item)
return numpy_images, numpy_labels
def title_from_label_and_target(label, correct_label):
label = np.argmax(label, axis=-1) # one-hot to class number
correct_label = np.argmax(correct_label, axis=-1) # one-hot to class number
correct = label == correct_label
return (
"{} [{}{}{}]".format(
CLASSES[label],
str(correct),
", should be " if not correct else "",
CLASSES[correct_label] if not correct else "",
),
correct,
)
def display_one_flower(image, title, subplot, red=False):
plt.subplot(subplot)
plt.axis("off")
plt.imshow(image)
plt.title(title, fontsize=16, color="red" if red else "black")
return subplot + 1
def display_9_images_from_dataset(dataset):
subplot = 331
plt.figure(figsize=(13, 13))
images, labels = dataset_to_numpy_util(dataset, 9)
for i, image in enumerate(images):
title = CLASSES[np.argmax(labels[i], axis=-1)]
subplot = display_one_flower(image, title, subplot)
if i >= 8:
break
plt.tight_layout()
plt.subplots_adjust(wspace=0.1, hspace=0.1)
plt.show()
def display_9_images_with_predictions(images, predictions, labels):
subplot = 331
plt.figure(figsize=(13, 13))
for i, image in enumerate(images):
title, correct = title_from_label_and_target(predictions[i], labels[i])
subplot = display_one_flower(image, title, subplot, not correct)
if i >= 8:
break
plt.tight_layout()
plt.subplots_adjust(wspace=0.1, hspace=0.1)
plt.show()
def display_training_curves(training, validation, title, subplot):
if subplot % 10 == 1: # set up the subplots on the first call
plt.subplots(figsize=(10, 10), facecolor="#F0F0F0")
plt.tight_layout()
ax = plt.subplot(subplot)
ax.set_facecolor("#F8F8F8")
ax.plot(training)
ax.plot(validation)
ax.set_title("model " + title)
ax.set_ylabel(title)
ax.set_xlabel("epoch")
ax.legend(["train", "valid."])
Explanation: The following cell contains some image visualization utility functions. This code isn't essential to training or deploying the model.
End of explanation
def read_tfrecord(example):
features = {
"image": tf.io.FixedLenFeature(
[], tf.string
), # tf.string means bytestring
"class": tf.io.FixedLenFeature([], tf.int64), # shape [] means scalar
"one_hot_class": tf.io.VarLenFeature(tf.float32),
}
example = tf.io.parse_single_example(example, features)
image = tf.image.decode_jpeg(example["image"], channels=3)
image = (
tf.cast(image, tf.float32) / 255.0
) # convert image to floats in [0, 1] range
image = tf.reshape(
image, [*IMAGE_SIZE, 3]
) # explicit size will be needed for TPU
one_hot_class = tf.sparse.to_dense(example["one_hot_class"])
one_hot_class = tf.reshape(one_hot_class, [5])
return image, one_hot_class
def load_dataset(filenames):
# Read data from TFRecords
# TODO: Complete the load_dataset function to load the images from TFRecords
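# One possible completion (a sketch): read the TFRecord shards in parallel and
# parse each record with read_tfrecord defined above.
dataset = tf.data.TFRecordDataset(filenames, num_parallel_reads=AUTO)
dataset = dataset.map(read_tfrecord, num_parallel_calls=AUTO)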
return dataset
Explanation: Read images and labels from TFRecords
In this dataset the images are stored as TFRecords.
TODO:Complete the load_dataset function to load the images from TFRecords
End of explanation
display_9_images_from_dataset(load_dataset(training_filenames))
Explanation: Use the visualization utility function provided earlier to preview flower images with their labels.
End of explanation
def get_batched_dataset(filenames):
dataset = load_dataset(filenames)
dataset = dataset.cache() # This dataset fits in RAM
dataset = dataset.repeat()
dataset = dataset.shuffle(2048)
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.prefetch(
AUTO
) # prefetch next batch while training (autotune prefetch buffer size)
# For proper ordering of map/batch/repeat/prefetch, see Dataset performance guide: https://www.tensorflow.org/guide/performance/datasets
return dataset
def get_training_dataset():
return get_batched_dataset(training_filenames)
def get_validation_dataset():
return get_batched_dataset(validation_filenames)
some_flowers, some_labels = dataset_to_numpy_util(
load_dataset(validation_filenames), 8 * 20
)
Explanation: Create training and validation datasets
End of explanation
from tensorflow.keras import Sequential
from tensorflow.keras.layers import (
BatchNormalization,
Conv2D,
Dense,
GlobalAveragePooling2D,
MaxPooling2D,
)
from tensorflow.keras.optimizers import Adam
model = Sequential(
[
# Stem
Conv2D(
kernel_size=3,
filters=16,
padding="same",
activation="relu",
input_shape=[*IMAGE_SIZE, 3],
),
BatchNormalization(),
Conv2D(kernel_size=3, filters=32, padding="same", activation="relu"),
BatchNormalization(),
MaxPooling2D(pool_size=2),
# Conv Group
Conv2D(kernel_size=3, filters=64, padding="same", activation="relu"),
BatchNormalization(),
MaxPooling2D(pool_size=2),
Conv2D(kernel_size=3, filters=96, padding="same", activation="relu"),
BatchNormalization(),
MaxPooling2D(pool_size=2),
# Conv Group
Conv2D(kernel_size=3, filters=128, padding="same", activation="relu"),
BatchNormalization(),
MaxPooling2D(pool_size=2),
Conv2D(kernel_size=3, filters=128, padding="same", activation="relu"),
BatchNormalization(),
# 1x1 Reduction
Conv2D(kernel_size=1, filters=32, padding="same", activation="relu"),
BatchNormalization(),
# Classifier
GlobalAveragePooling2D(),
Dense(5, activation="softmax"),
]
)
model.compile(
optimizer=Adam(lr=0.005, decay=0.98),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
model.summary()
Explanation: Build, train, and evaluate the model
This section shows how to build, train, evaluate, and get local predictions from a model by using the TF.Keras Sequential API.
End of explanation
EPOCHS = 20 # Train for 60 epochs for higher accuracy, 20 should get you ~75%
# TODO: Using the GPU train the model for 20 to 60 epochs
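# One possible completion (a sketch): fit the model with the tf.data pipelines and
# the step counts computed earlier.
history = model.fit(get_training_dataset(), steps_per_epoch=steps_per_epoch,
epochs=EPOCHS, validation_data=get_validation_dataset(),
validation_steps=validation_steps)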
Explanation: Train the model
Train this on a GPU by attaching a GPU to your CAIP notebook instance. On a CPU, it'll take ~30 minutes to run training. On a GPU, it takes ~5 minutes.
TODO: Using the GPU train the model defined above
End of explanation
# Randomize the input so that you can execute multiple times to change results
permutation = np.random.permutation(8 * 20)
some_flowers, some_labels = (
some_flowers[permutation],
some_labels[permutation],
)
predictions = model.predict(some_flowers, batch_size=16)
evaluations = model.evaluate(some_flowers, some_labels, batch_size=16)
print(np.array(CLASSES)[np.argmax(predictions, axis=-1)].tolist())
print("[val_loss, val_acc]", evaluations)
display_9_images_with_predictions(some_flowers, predictions, some_labels)
Explanation: Visualize local predictions
Get predictions on your local model and visualize the images with their predicted labels, using the visualization utility function provided earlier.
End of explanation
export_path = "gs://" + BUCKET_NAME + "/explanations/mymodel"
def _preprocess(bytes_input):
decoded = tf.io.decode_jpeg(bytes_input, channels=3)
decoded = tf.image.convert_image_dtype(decoded, tf.float32)
resized = tf.image.resize(decoded, size=(192, 192))
return resized
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def preprocess_fn(bytes_inputs):
with tf.device("cpu:0"):
decoded_images = tf.map_fn(_preprocess, bytes_inputs, dtype=tf.float32)
return {
"numpy_inputs": decoded_images
} # User needs to make sure the key matches model's input
m_call = tf.function(model.call).get_concrete_function(
[
tf.TensorSpec(
shape=[None, 192, 192, 3], dtype=tf.float32, name="numpy_inputs"
)
]
)
@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
def serving_fn(bytes_inputs):
# TODO: Complete the function
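# One possible completion (a sketch): decode the b64 image bytes with the
# preprocessing function above, then pass the result to the model call.
images = preprocess_fn(bytes_inputs)
prob = m_call(**images)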
return prob
tf.saved_model.save(
model,
export_path,
signatures={
"serving_default": serving_fn,
"xai_preprocess": preprocess_fn, # Required for XAI
"xai_model": m_call, # Required for XAI
},
)
Explanation: Export the model as a TF 2.3 SavedModel
When using TensorFlow 2.3, you export the model as a SavedModel and load it into Cloud Storage. During export, you need to define a serving function to convert data to the format your model expects. If you send encoded data to AI Platform, your serving function ensures that the data is decoded on the model server before it is passed as input to your model.
Serving function for image data
Sending base 64 encoded image data to AI Platform is more space efficient. Since this deployed model expects input data as raw bytes, you need to ensure that the b64 encoded data gets converted back to raw bytes before it is passed as input to the deployed model.
To resolve this, define a serving function (serving_fn) and attach it to the model as a preprocessing step. Add a @tf.function decorator so the serving function is part of the model's graph (instead of upstream on a CPU).
When you send a prediction or explanation request, the request goes to the serving function (serving_fn), which preprocesses the b64 bytes into raw numpy bytes (preprocess_fn). At this point, the data can be passed to the model (m_call).
TODO: Complete the serving function
End of explanation
! saved_model_cli show --dir $export_path --all
Explanation: Get input and output signatures
Use TensorFlow's saved_model_cli to inspect the model's SignatureDef. You'll use this information when you deploy your model to AI Explanations in the next section.
End of explanation
loaded = tf.saved_model.load(export_path)
input_name = list(
loaded.signatures["xai_model"].structured_input_signature[1].keys()
)[0]
print(input_name)
output_name = list(loaded.signatures["xai_model"].structured_outputs.keys())[0]
print(output_name)
preprocess_name = list(
loaded.signatures["xai_preprocess"].structured_input_signature[1].keys()
)[0]
print(preprocess_name)
Explanation: You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.
You need the signatures for the following layers:
Serving function input layer
Model input layer
Model output layer
End of explanation
from explainable_ai_sdk.metadata.tf.v2 import SavedModelMetadataBuilder
# We want to explain 'xai_model' signature.
builder = SavedModelMetadataBuilder(export_path, signature_name="xai_model")
random_baseline = np.random.rand(192, 192, 3)
builder.set_image_metadata(
"numpy_inputs", input_baselines=[random_baseline.tolist()]
)
builder.save_metadata(export_path)
Explanation: Generate explanation metadata
In order to deploy this model to AI Explanations, you need to create an explanation_metadata.json file with information about your model inputs, outputs, and baseline. You can use the Explainable AI SDK to generate most of the fields.
For image models, using [0,1] as your input baseline represents black and white images. This example uses np.random to generate the baseline because the training images contain a lot of black and white (i.e. daisy petals).
Note: for the explanation request, use the model's signature for the input and output tensors. Do not use the serving function signature.
End of explanation
import datetime
MODEL = "flowers" + TIMESTAMP
print(MODEL)
# Create the model if it doesn't exist yet (you only need to run this once)
! gcloud ai-platform models create $MODEL --enable-logging --region=$REGION
Explanation: Deploy model to AI Explanations
This section shows how to use gcloud to deploy the model to AI Explanations, using two different explanation methods for image models.
Create the model
End of explanation
# Each time you create a version the name should be unique
IG_VERSION = "v_ig"
! gcloud beta ai-platform versions create $IG_VERSION \
--model $MODEL \
--origin $export_path \
--runtime-version 2.3 \
--framework TENSORFLOW \
--python-version 3.7 \
--machine-type n1-standard-4 \
--explanation-method integrated-gradients \
--num-integral-steps 25 \
--region $REGION
# Make sure the IG model deployed correctly. State should be `READY` in the following log
! gcloud ai-platform versions describe $IG_VERSION --model $MODEL --region $REGION
Explanation: Create explainable model versions
For image models, we offer two choices for explanation methods:
* Integrated Gradients (IG)
* XRAI
You can find more info on each method in the documentation. You can deploy a version with both so that you can compare results. If you already know which explanation method you'd like to use, you can deploy one version and skip the code blocks for the other method.
Creating the version will take ~5-10 minutes. Note that your first deploy may take longer.
Deploy an Integrated Gradients model
End of explanation
# Each time you create a version the name should be unique
XRAI_VERSION = "v_xrai"
# Create the XRAI version with gcloud
! gcloud beta ai-platform versions create $XRAI_VERSION \
--model $MODEL \
--origin $export_path \
--runtime-version 2.3 \
--framework TENSORFLOW \
--python-version 3.7 \
--machine-type n1-standard-4 \
--explanation-method xrai \
--num-integral-steps 25 \
--region $REGION
# Make sure the XRAI model deployed correctly. State should be `READY` in the following log
! gcloud ai-platform versions describe $XRAI_VERSION --model $MODEL --region=$REGION
Explanation: Deploy an XRAI model
End of explanation
# Resize the images to what your model is expecting (192,192)
test_filenames = []
for i in os.listdir("../assets/flowers"):
img_path = "../assets/flowers/" + i
with PIL.Image.open(img_path) as ex_img:
resize_img = ex_img.resize([192, 192])
resize_img.save(img_path)
test_filenames.append(img_path)
Explanation: Get predictions and explanations
This section shows how to prepare test images to send to your deployed model, and how to send a batch prediction request to AI Explanations.
Get and prepare test images
To prepare the test images:
Download a small sample of images from the flowers dataset -- just enough for a batch prediction.
Resize the images to match the input shape (192, 192) of the model.
Save the resized images back to your bucket.
End of explanation
# Prepare your images to send to your Cloud model
instances = []
for image_path in test_filenames:
img_bytes = tf.io.read_file(image_path)
b64str = b64encode(img_bytes.numpy()).decode("utf-8")
instances.append({preprocess_name: {"b64": b64str}})
Explanation: Format your explanation request
Prepare a batch of instances.
End of explanation
# IG EXPLANATIONS
remote_ig_model = explainable_ai_sdk.load_model_from_ai_platform(project=PROJECT_ID,
model=MODEL,
version=IG_VERSION,
region=REGION)
ig_response = remote_ig_model.explain(instances)  # one possible completion, using model.explain(instances) as hinted in the text
for response in ig_response:
response.visualize_attributions()
# XRAI EXPLANATIONS
remote_xrai_model = explainable_ai_sdk.load_model_from_ai_platform(project=PROJECT_ID,
model=MODEL, version=XRAI_VERSION, region=REGION)  # one possible completion, mirroring the IG load above
xrai_response = remote_xrai_model.explain(instances)  # one possible completion of the TODO
for response in xrai_response:
response.visualize_attributions()
Explanation: Send the explanations request and visualize
If you deployed both an IG and an XRAI model, you can request explanations for both models and compare the results.
If you only deployed one model above, run only the cell for that explanation method.
You can use the Explainable AI SDK to send explanation requests to your deployed model and visualize the explanations.
TODO: Write code to get explanations from the saved model. You will need to use model.explain(instances) to get the results
End of explanation
for i, response in enumerate(ig_response):
attr = response.get_attribution()
baseline_score = attr.baseline_score
predicted_score = attr.example_score
print("Baseline score: ", baseline_score)
print("Predicted score: ", predicted_score)
print("Predicted - Baseline: ", predicted_score - baseline_score, "\n")
Explanation: Check explanations and baselines
To better make sense of your feature attributions, you can compare them with your model's baseline. For image models, the baseline_score returned by AI Explanations is the score your model would give an image input with the baseline you specified. The baseline is different for each class in the model. Every time your model predicts tulip as the top class, you'll see the same baseline score.
Earlier, you used a baseline image of np.random randomly generated values. If you'd like the baseline for your model to be solid black and white images instead, pass [0,1] as the value to input_baselines in your explanation_metadata.json file above. If the baseline_score is very close to the value of example_score, the highlighted pixels may not be meaningful.
Calculate the difference between baseline_score and example_score for the three test images above.
Note that the score values for classification models are probabilities: the confidence your model has in its predicted class. A score of 0.90 for tulip means your model has classified the image as a tulip with 90% confidence.
The code below checks baselines for the IG model. To inspect your XRAI model, swap out the ig_response and IG_VERSION variables below.
End of explanation
# Convert your baseline from above to a base64 string
rand_test_img = PIL.Image.fromarray((random_baseline * 255).astype("uint8"))
buffer = io.BytesIO()
rand_test_img.save(buffer, format="PNG")
new_image_string = b64encode(np.asarray(buffer.getvalue())).decode("utf-8")
# Preview it
plt.imshow(rand_test_img)
sanity_check_img = {preprocess_name: {"b64": new_image_string}}
Explanation: Explain the baseline image
Another way to check your baseline choice is to view explanations for this model's baseline image: an image array of randomly generated values using np.random. First, convert the same np.random baseline array generated earlier to a base64 string and preview it. This encodes the random noise as if it's a PNG image. Additionally, you must convert the byte buffer to a numpy array, because this is the format the underlying model expects for input when you send the explain request.
End of explanation
# Sanity Check explanations EXPLANATIONS
sanity_check_response = remote_ig_model.explain([sanity_check_img])
Explanation: Send the explanation request for the baseline image. (To check a baseline image for XRAI, change IG_VERSION to XRAI_VERSION below.)
End of explanation
sanity_check_response[0].visualize_attributions()
Explanation: Visualize the explanation for your random baseline image, highlighting the pixels that contributed to the prediction
End of explanation
attr = sanity_check_response[0].get_attribution()
baseline_score = attr.baseline_score
example_score = attr.example_score
print(abs(baseline_score - example_score))
Explanation: The difference between your model's predicted score and the baseline score for this image should be close to 0. Run the following cell to confirm. If there is a difference between these two values, try increasing the number of integral steps used when you deploy your model.
End of explanation
# Delete model version resource
! gcloud ai-platform versions delete $IG_VERSION --quiet --model $MODEL
! gcloud ai-platform versions delete $XRAI_VERSION --quiet --model $MODEL
# Delete model resource
! gcloud ai-platform models delete $MODEL --quiet
Explanation: Cleaning Up
End of explanation |
1,269 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Import Socorro crash data into the Data Platform
We want to be able to store Socorro crash data in Parquet form so that it can be made accessible from re:dash.
Step4: We create the pyspark datatype for representing the crash data in spark. This is a slightly modified version of peterbe/crash-report-struct-code.
Step6: First fetch from the primary source in s3 as per bug 1312006. We fall back to the github location if this is not available.
Step9: Read crash data as json, convert it to parquet | Python Code:
!conda install boto3 --yes
import logging
logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)
Explanation: Import Socorro crash data into the Data Platform
We want to be able to store Socorro crash data in Parquet form so that it can be made accessible from re:dash.
See Bug 1273657 for more details
End of explanation
from pyspark.sql.types import *
def create_struct(schema):
Take a JSON schema and return a pyspark StructType of equivalent structure.
replace_definitions(schema, schema['definitions'])
assert '$ref' not in str(schema), 're-write didnt work'
struct = StructType()
for row in get_rows(schema):
struct.add(row)
return struct
def replace_definitions(schema, definitions):
Replace references in the JSON schema with their definitions.
if 'properties' in schema:
for prop, meta in schema['properties'].items():
replace_definitions(meta, definitions)
elif 'items' in schema:
if '$ref' in schema['items']:
ref = schema['items']['$ref'].split('/')[-1]
schema['items'] = definitions[ref]
replace_definitions(schema['items'], definitions)
else:
replace_definitions(schema['items'], definitions)
elif '$ref' in str(schema):
err_msg = "Reference not found for schema: {}".format(str(schema))
log.error(err_msg)
raise ValueError(err_msg)
def get_rows(schema):
Map the fields in a JSON schema to corresponding data structures in pyspark.
if 'properties' not in schema:
err_msg = "Invalid JSON schema: properties field is missing."
log.error(err_msg)
raise ValueError(err_msg)
for prop in sorted(schema['properties']):
meta = schema['properties'][prop]
if 'string' in meta['type']:
logging.debug("{!r} allows the type to be String AND Integer".format(prop))
yield StructField(prop, StringType(), 'null' in meta['type'])
elif 'integer' in meta['type']:
yield StructField(prop, IntegerType(), 'null' in meta['type'])
elif 'boolean' in meta['type']:
yield StructField(prop, BooleanType(), 'null' in meta['type'])
elif meta['type'] == 'array' and 'items' not in meta:
# Assuming strings in the array
yield StructField(prop, ArrayType(StringType(), False), True)
elif meta['type'] == 'array' and 'items' in meta:
struct = StructType()
for row in get_rows(meta['items']):
struct.add(row)
yield StructField(prop, ArrayType(struct), True)
elif meta['type'] == 'object':
struct = StructType()
for row in get_rows(meta):
struct.add(row)
yield StructField(prop, struct, True)
else:
err_msg = "Invalid JSON schema: {}".format(str(meta)[:100])
log.error(err_msg)
raise ValueError(err_msg)
Explanation: We create the pyspark datatype for representing the crash data in spark. This is a slightly modified version of peterbe/crash-report-struct-code.
End of explanation
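# A quick sanity check (a sketch): a tiny schema with one nullable string property
# should map to a single nullable StringType field.
toy_schema = {'definitions': {}, 'properties': {'signature': {'type': ['string', 'null']}}}
print(create_struct(toy_schema))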
import boto3
import botocore
import json
import tempfile
import urllib2
def fetch_schema():
Fetch the crash data schema from an s3 location or github location. This
returns the corresponding JSON schema in a python dictionary.
region = "us-west-2"
bucket = "crashstats-telemetry-crashes-prod-us-west-2"
key = "crash_report.json"
fallback_url = "https://raw.githubusercontent.com/mozilla/socorro/master/socorro/schemas/crash_report.json"
try:
log.info("Fetching latest crash data schema from s3://{}/{}".format(bucket, key))
s3 = boto3.client('s3', region_name=region)
# download schema to memory via a file like object
resp = tempfile.TemporaryFile()
s3.download_fileobj(bucket, key, resp)
resp.seek(0)
except botocore.exceptions.ClientError as e:
log.warning(("Could not fetch schema from s3://{}/{}: {}\n"
"Fetching crash data schema from {}")
.format(bucket, key, e, fallback_url))
resp = urllib2.urlopen(fallback_url)
return json.load(resp)
Explanation: First fetch from the primary source in s3 as per bug 1312006. We fall back to the github location if this is not available.
End of explanation
from datetime import datetime as dt, timedelta, date
from pyspark.sql import SQLContext
def daterange(start_date, end_date):
for n in range(int((end_date - start_date).days) + 1):
yield (end_date - timedelta(n)).strftime("%Y%m%d")
def import_day(d, schema, version):
Convert JSON data stored in an S3 bucket into parquet, indexed by crash_date.
source_s3path = "s3://crashstats-telemetry-crashes-prod-us-west-2/v1/crash_report"
dest_s3path = "s3://telemetry-parquet/socorro_crash/"
num_partitions = 10
log.info("Processing {}, started at {}".format(d, dt.utcnow()))
cur_source_s3path = "{}/{}".format(source_s3path, d)
cur_dest_s3path = "{}/v{}/crash_date={}".format(dest_s3path, version, d)
df = sqlContext.read.json(cur_source_s3path, schema=schema)
df.repartition(num_partitions).write.parquet(cur_dest_s3path, mode="overwrite")
def backfill(start_date_yyyymmdd, schema, version):
Import data from a start date to yesterday's date.
Example:
backfill("20160902", crash_schema, version)
start_date = dt.strptime(start_date_yyyymmdd, "%Y%m%d")
end_date = dt.utcnow() - timedelta(1) # yesterday
for d in daterange(start_date, end_date):
try:
import_day(d, schema, version)
except Exception as e:
log.error(e)
from os import environ
# get the relevant date
yesterday = dt.strftime(dt.utcnow() - timedelta(1), "%Y%m%d")
target_date = environ.get('date', yesterday)
# fetch and generate the schema
schema_data = fetch_schema()
crash_schema = create_struct(schema_data)
version = schema_data.get('$target_version', 0) # default to v0
# process the data
import_day(target_date, crash_schema, version)
Explanation: Read crash data as json, convert it to parquet
End of explanation |
1,270 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analyzing Pronto CycleShare Data with Python and Pandas
This notebook originally appeared as a post on the blog Pythonic Perambulations. The content is BSD licensed.
<!-- PELICAN_BEGIN_SUMMARY -->
This week Pronto CycleShare, Seattle's Bicycle Share system, turned one year old.
To celebrate this, Pronto made available a large cache of data from the first year of operation and announced the Pronto Cycle Share's Data Challenge, which offers prizes for different categories of analysis.
There are a lot of tools out there that you could use to analyze data like this, but my tool of choice is (obviously) Python.
In this post, I want to show how you can get started analyzing this data and joining it with other available data sources using the PyData stack, namely NumPy, Pandas, Matplotlib, and Seaborn.
Here I'll take a look at some of the basic questions you can answer with this data.
Later I hope to find the time to dig deeper and ask some more interesting and creative questions – stay tuned!
<!-- PELICAN_END_SUMMARY -->
For those who aren't familiar, this post is composed in the form of a Jupyter Notebook, which is an open document format that combines text, code, data, and graphics and is viewable through the web browser – if you have not used it before I encourage you to try it out!
You can download the notebook containing this post here, open it with Jupyter, and start asking your own questions of the data.
Downloading Pronto's Data
We'll start by downloading the data (available on Pronto's Website) which you can do by uncommenting the following shell commands (the exclamation mark here is a special IPython syntax to run a shell command).
The total download is about 70MB, and the unzipped files are around 900MB.
Step1: Next we need some standard Python package imports
Step2: And now we load the trip data with Pandas
Step3: Each row of this trip dataset is a single ride by a single person, and the data contains over 140,000 rows!
Exploring Trips over Time
Let's start by looking at the trend in number of daily trips over the course of the year
Step4: This plot shows the daily trend, separated by Annual members (top) and Day-Pass users (bottom).
A couple observations
Step5: We see a complementary pattern overall
Step6: Now we can plot the results to see the hourly trends
Step7: We see a clear difference between a "commute" pattern, which sharply peaks in the morning and evening (e.g. annual members during weekdays) and a "recreational" pattern, which has a broad peak in the early afternoon (e.g. annual members on weekends, and short-term users all the time).
Interestingly, the average behavior of annual pass holders on weekends seems to be almost identical to the average behavior of day-pass users on weekdays!
For those who have read my previous posts, you might recognize this as very similar to the patterns I found in the Fremont Bridge bicycle counts.
Trip Durations
Next let's take a look at the durations of trips.
Pronto rides are designed to be up to 30 minutes; any single use that is longer than this incurs a usage fee of a couple dollars for the first half hour, and about ten dollars per hour thereafter.
Let's look at the distribution of trip durations for Annual members and short-term pass holders
Step8: Here I have added a red dashed line separating the free rides (left) from the rides which incur a usage fee (right). It seems that annual users are much more savvy to the system rules
Step11: Now we need to find bicycling distances between pairs of lat/lon coordinates.
Fortunately, Google Maps has a distances API that we can use for free.
Reading the fine print, free use is limited to 2500 distances per day, and 100 distances per 10 seconds.
With 55 stations we have $55 \times 54 / 2 = 1485$ unique nonzero distances, so we can just query all of them within a few minutes on a single day for free (if we do it carefully).
To do this, we'll query one (partial) row at a time, waiting 10+ seconds between queries (Note
Step12: Here's what the first 5x5 section of the distance matrix looks like
Step13: Let's convert these distances to miles and join them to our trips data
Step14: Now we can plot the distribution of trip distances
Step15: Keep in mind that this shows the shortest possible distance between stations, and thus is a lower bound on the actual distance ridden on each trip.
Many trips (especially for day pass users) begin and end within a few blocks.
Beyond this, trips peak at around 1 mile, though some extreme users are pushing their trips out to four or more miles.
Estimating Rider Speed
Given these distances, we can also compute a lower bound on the estimated riding speed.
Let's do this, and then take a look at the distribution of speeds for Annual and Short-term users
Step16: Interestingly, the distributions are quite different, with annual riders showing on average a higher inferred speed.
You might be tempted to conclude here that annual members ride faster than day-pass users, but the data alone aren't sufficient to support this conclusion.
This data could also be explained if annual users tend to go from point A to point B by the most direct route, while day pass users tend to meander around and get to their destination indirectly.
I suspect that the reality is some mix of these two effects.
It is also informative to take a look at the relationship between distance and speed
Step19: Overall, we see that longer rides tend to be faster – though this is subject to the same lower-bound caveats as above.
As above, for reference I have plotted the line separating free trips (above the red line) from trips requiring an additional fee (below the red line).
Again we see that the annual members are much more savvy about not going over the half hour limit than are day pass users – the sharp cutoff in the distribution of points points to users keeping close track of their time to avoid an extra charge!
Trend with Elevation
One oft-mentioned concern with the feasibility of bike share in Seattle is that it is a very hilly city – before the launch, armchair analysts predicted that there would be a steady flow of bikes from uphill to downhill, and that this would add up with other challenges to spell the demise of the system ("Sure, bikeshare works other places, but it can't work here
Step20: Now let's read-in the elevation data
Step21: Just to make ourselves feel better, we'll double check that the latitudes and longitudes match
Step22: Now we can join the elevations to with the trip data by way of the station data
Step23: Let's take a look at the distribution of elevation gain by rider type
Step24: We have plotted some shading in the background to help guide the eye.
Again, there is a big difference between Annual Members and Short-term users
Step25: We see that the first year had 30,000 more downhill trips than uphill trips – that's about 60% more.
Given current usage levels, that means that Pronto staff must be shuttling an average of about 100 bikes per day from low-lying stations to higher-up stations.
Weather
The other common "Seattle is special" argument against the feasibility of cycle share is the weather.
Let's take a look at how the number of rides changes with temperature and precipitation.
Fortunately, the data release includes a wide range of weather data
Step26: Let's join this weather data with the trip data
Step27: Now we can take a look at how the number of rides scales with both Temperature and Precipitation, splitting the data by weekday and weekend | Python Code:
# !curl -O https://s3.amazonaws.com/pronto-data/open_data_year_one.zip
# !unzip open_data_year_one.zip
Explanation: Analyzing Pronto CycleShare Data with Python and Pandas
This notebook originally appeared as a post on the blog Pythonic Perambulations. The content is BSD licensed.
<!-- PELICAN_BEGIN_SUMMARY -->
This week Pronto CycleShare, Seattle's Bicycle Share system, turned one year old.
To celebrate this, Pronto made available a large cache of data from the first year of operation and announced the Pronto Cycle Share's Data Challenge, which offers prizes for different categories of analysis.
There are a lot of tools out there that you could use to analyze data like this, but my tool of choice is (obviously) Python.
In this post, I want to show how you can get started analyzing this data and joining it with other available data sources using the PyData stack, namely NumPy, Pandas, Matplotlib, and Seaborn.
Here I'll take a look at some of the basic questions you can answer with this data.
Later I hope to find the time to dig deeper and ask some more interesting and creative questions – stay tuned!
<!-- PELICAN_END_SUMMARY -->
For those who aren't familiar, this post is composed in the form of a Jupyter Notebook, which is an open document format that combines text, code, data, and graphics and is viewable through the web browser – if you have not used it before I encourage you to try it out!
You can download the notebook containing this post here, open it with Jupyter, and start asking your own questions of the data.
Downloading Pronto's Data
We'll start by downloading the data (available on Pronto's Website) which you can do by uncommenting the following shell commands (the exclamation mark here is a special IPython syntax to run a shell command).
The total download is about 70MB, and the unzipped files are around 900MB.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import seaborn as sns; sns.set()
Explanation: Next we need some standard Python package imports:
End of explanation
trips = pd.read_csv('2015_trip_data.csv',
parse_dates=['starttime', 'stoptime'],
infer_datetime_format=True)
trips.head()
Explanation: And now we load the trip data with Pandas:
End of explanation
# Find the start date
ind = pd.DatetimeIndex(trips.starttime)
trips['date'] = ind.date.astype('datetime64')
trips['hour'] = ind.hour
# Count trips by date
by_date = trips.pivot_table('trip_id', aggfunc='count',
index='date',
columns='usertype', )
fig, ax = plt.subplots(2, figsize=(16, 8))
fig.subplots_adjust(hspace=0.4)
by_date.iloc[:, 0].plot(ax=ax[0], title='Annual Members');
by_date.iloc[:, 1].plot(ax=ax[1], title='Day-Pass Users');
Explanation: Each row of this trip dataset is a single ride by a single person, and the data contains over 140,000 rows!
Exploring Trips over Time
Let's start by looking at the trend in number of daily trips over the course of the year
End of explanation
by_weekday = by_date.groupby([by_date.index.year,
by_date.index.dayofweek]).mean()
by_weekday.columns.name = None # remove label for plot
fig, ax = plt.subplots(1, 2, figsize=(16, 6), sharey=True)
by_weekday.loc[2014].plot(title='Average Use by Day of Week (2014)', ax=ax[0]);
by_weekday.loc[2015].plot(title='Average Use by Day of Week (2015)', ax=ax[1]);
for axi in ax:
axi.set_xticklabels(['Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat', 'Sun'])
Explanation: This plot shows the daily trend, separated by Annual members (top) and Day-Pass users (bottom).
A couple observations:
The big spike in short-term pass rides in April is likely due to the American Planning Association national conference, held in downtown Seattle that week. The only other time that gets close is the 4th of July weekend.
Day pass users seem to show a steady ebb and flow with the seasons; the usage of annual users has not waned as significantly with the coming of fall.
Both annual members and day-pass users seem to show a distinct weekly trend.
Let's zoom-in on this weekly trend, by averaging all rides by day of week.
Because of the change in pattern around January 2015, we'll split the data by both year and by day of week:
End of explanation
# count trips by date and by hour
by_hour = trips.pivot_table('trip_id', aggfunc='count',
index=['date', 'hour'],
columns='usertype').fillna(0).reset_index('hour')
# average these counts by weekend
by_hour['weekend'] = (by_hour.index.dayofweek >= 5)
by_hour = by_hour.groupby(['weekend', 'hour']).mean()
by_hour.index.set_levels([['weekday', 'weekend'],
["{0}:00".format(i) for i in range(24)]],
inplace=True);
by_hour.columns.name = None
Explanation: We see a complementary pattern overall: annual users tend to use their bikes during Monday to Friday (i.e. as part of a commute) while day pass users tend to use their bikes on the weekend.
This pattern didn't fully develop until the start of 2015, however, especially for annual members: it seems that for the first couple months, users had not yet adapted their commute habits to make use of Pronto!
It's also quite interesting to view the average hourly trips by weekday and weekend.
This takes a bit of manipulation:
End of explanation
fig, ax = plt.subplots(1, 2, figsize=(16, 6), sharey=True)
by_hour.loc['weekday'].plot(title='Average Hourly Use (Mon-Fri)', ax=ax[0])
by_hour.loc['weekend'].plot(title='Average Hourly Use (Sat-Sun)', ax=ax[1])
ax[0].set_ylabel('Average Trips per Hour');
Explanation: Now we can plot the results to see the hourly trends:
End of explanation
trips['minutes'] = trips.tripduration / 60
trips.groupby('usertype')['minutes'].hist(bins=np.arange(61), alpha=0.5, normed=True);
plt.xlabel('Duration (minutes)')
plt.ylabel('relative frequency')
plt.title('Trip Durations')
plt.text(34, 0.09, "Free Trips\n\nAdditional Fee", ha='right',
size=18, rotation=90, alpha=0.5, color='red')
plt.legend(['Annual Members', 'Short-term Pass'])
plt.axvline(30, linestyle='--', color='red', alpha=0.3);
Explanation: We see a clear difference between a "commute" pattern, which sharply peaks in the morning and evening (e.g. annual members during weekdays) and a "recreational" pattern, which has a broad peak in the early afternoon (e.g. annual members on weekends, and short-term users all the time).
Interestingly, the average behavior of annual pass holders on weekends seems to be almost identical to the average behavior of day-pass users on weekdays!
If you have read my previous posts, you might recognize this as very similar to the patterns I found in the Fremont Bridge bicycle counts.
Trip Durations
Next let's take a look at the durations of trips.
Pronto rides are designed to be up to 30 minutes; any single use that is longer than this incurs a usage fee of a couple dollars for the first half hour, and about ten dollars per hour thereafter.
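As a rough illustration of that fee schedule (the dollar amounts below are assumptions based on the description, not Pronto's published prices):
def estimated_overage_fee(minutes, first_half_hour=2.0, per_hour_after=10.0):
    # approximate extra charge, in dollars, for a single ride of the given length
    if minutes <= 30:
        return 0.0
    extra = minutes - 30
    fee = first_half_hour
    if extra > 30:
        fee += per_hour_after * (extra - 30) / 60.0
    return fee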
Let's look at the distribution of trip durations for Annual members and short-term pass holders:
End of explanation
stations = pd.read_csv('2015_station_data.csv')
pronto_shop = dict(id=54, name="Pronto shop",
terminal="Pronto shop",
lat=47.6173156, long=-122.3414776,
dockcount=100, online='10/13/2014')
stations = stations.append(pronto_shop, ignore_index=True)
Explanation: Here I have added a red dashed line separating the free rides (left) from the rides which incur a usage fee (right). It seems that annual users are much more savvy to the system rules: only a small tail of the trip distribution lies beyond 30 minutes.
Around one in four Day Pass Rides, on the other hand, are longer than the half hour limit and incur additional fees.
My hunch is that these day pass users aren't fully aware of this pricing structure ("I paid for the day, right?") and likely walk away unhappy with the experience.
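A quick way to check that figure directly from the data, using the minutes column computed above:
print(trips.groupby('usertype')['minutes'].apply(lambda m: (m > 30).mean()))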
Estimating Trip Distances
It's also interesting to look at the distance of the trips.
Distances between stations are not included in Pronto's data release, so we need to find them via another source.
Let's start by loading the station data – and because some of the trips start and end at Pronto's shop, we'll add this as another "station":
End of explanation
from time import sleep
def query_distances(stations=stations):
Query the Google API for bicycling distances
latlon_list = ['{0},{1}'.format(lat, long)
for (lat, long) in zip(stations.lat, stations.long)]
def create_url(i):
URL = ('https://maps.googleapis.com/maps/api/distancematrix/json?'
'origins={origins}&destinations={destinations}&mode=bicycling')
return URL.format(origins=latlon_list[i],
destinations='|'.join(latlon_list[i + 1:]))
for i in range(len(latlon_list) - 1):
url = create_url(i)
filename = "distances_{0}.json".format(stations.terminal.iloc[i])
print(i, filename)
!curl "{url}" -o {filename}
sleep(11) # only one query per 10 seconds!
def build_distance_matrix(stations=stations):
Build a matrix from the Google API results
dist = np.zeros((len(stations), len(stations)), dtype=float)
for i, term in enumerate(stations.terminal[:-1]):
filename = 'queried_distances/distances_{0}.json'.format(term)
row = json.load(open(filename))
dist[i, i + 1:] = [el['distance']['value'] for el in row['rows'][0]['elements']]
dist += dist.T
distances = pd.DataFrame(dist, index=stations.terminal,
columns=stations.terminal)
distances.to_csv('station_distances.csv')
return distances
# only call this the first time
import os
if not os.path.exists('station_distances.csv'):
# Note: you can call this function at most ~twice per day!
query_distances()
# Move all the queried files into a directory
# so we don't accidentally overwrite them
if not os.path.exists('queried_distances'):
os.makedirs('queried_distances')
!mv distances_*.json queried_distances
# Build distance matrix and save to CSV
distances = build_distance_matrix()
Explanation: Now we need to find bicycling distances between pairs of lat/lon coordinates.
Fortunately, Google Maps has a distances API that we can use for free.
Reading the fine print, free use is limited to 2500 distances per day, and 100 distances per 10 seconds.
With 55 stations we have $55 \times 54 / 2 = 1485$ unique nonzero distances, so we can just query all of them within a few minutes on a single day for free (if we do it carefully).
To do this, we'll query one (partial) row at a time, waiting 10+ seconds between queries (Note: we might also use the googlemaps Python package instead, but it requires obtaining an API key).
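A one-line sanity check of that pair count (assuming the stations table now holds 55 rows, including the Pronto shop added above):
print(len(stations) * (len(stations) - 1) // 2)  # 1485 unique station pairs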
End of explanation
distances = pd.read_csv('station_distances.csv', index_col='terminal')
distances.iloc[:5, :5]
Explanation: Here's what the first 5x5 section of the distance matrix looks like:
End of explanation
stacked = distances.stack() / 1609.34 # convert meters to miles
stacked.name = 'distance'
trips = trips.join(stacked, on=['from_station_id', 'to_station_id'])
Explanation: Let's convert these distances to miles and join them to our trips data:
End of explanation
fig, ax = plt.subplots(figsize=(12, 4))
trips.groupby('usertype')['distance'].hist(bins=np.linspace(0, 6.99, 50),
alpha=0.5, ax=ax);
plt.xlabel('Distance between start & end (miles)')
plt.ylabel('relative frequency')
plt.title('Minimum Distance of Trip')
plt.legend(['Annual Members', 'Short-term Pass']);
Explanation: Now we can plot the distribution of trip distances:
End of explanation
trips['speed'] = trips.distance * 60 / trips.minutes
trips.groupby('usertype')['speed'].hist(bins=np.linspace(0, 15, 50), alpha=0.5, normed=True);
plt.xlabel('lower bound riding speed (MPH)')
plt.ylabel('relative frequency')
plt.title('Rider Speed Lower Bound (MPH)')
plt.legend(['Annual Members', 'Short-term Pass']);
Explanation: Keep in mind that this shows the shortest possible distance between stations, and thus is a lower bound on the actual distance ridden on each trip.
Many trips (especially for day pass users) begin and end within a few blocks.
Beyond this, trips peak at around 1 mile, though some extreme users are pushing their trips out to four or more miles.
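To attach rough numbers to these observations (with the same lower-bound caveat), a quick summary by user type:
print(trips.groupby('usertype')['distance'].describe())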
Estimating Rider Speed
Given these distances, we can also compute a lower bound on the estimated riding speed.
Let's do this, and then take a look at the distribution of speeds for Annual and Short-term users:
End of explanation
g = sns.FacetGrid(trips, col="usertype", hue='usertype', size=6)
g.map(plt.scatter, "distance", "speed", s=4, alpha=0.2)
# Add lines and labels
x = np.linspace(0, 10)
g.axes[0, 0].set_ylabel('Lower Bound Speed')
for ax in g.axes.flat:
ax.set_xlabel('Lower Bound Distance')
ax.plot(x, 2 * x, '--r', alpha=0.3)
ax.text(9.8, 16.5, "Free Trips\n\nAdditional Fee", ha='right',
size=18, rotation=39, alpha=0.5, color='red')
ax.axis([0, 10, 0, 25])
Explanation: Interestingly, the distributions are quite different, with annual riders showing on average a higher inferred speed.
You might be tempted to conclude here that annual members ride faster than day-pass users, but the data alone aren't sufficient to support this conclusion.
This data could also be explained if annual users tend to go from point A to point B by the most direct route, while day pass users tend to meander around and get to their destination indirectly.
I suspect that the reality is some mix of these two effects.
It is also informative to take a look at the relationship between distance and speed:
End of explanation
def get_station_elevations(stations):
Get station elevations via Google Maps API
URL = "https://maps.googleapis.com/maps/api/elevation/json?locations="
locs = '|'.join(['{0},{1}'.format(lat, long)
for (lat, long) in zip(stations.lat, stations.long)])
URL += locs
!curl "{URL}" -o elevations.json
def process_station_elevations():
Convert Elevations JSON output to CSV
import json
D = json.load(open('elevations.json'))
def unnest(D):
loc = D.pop('location')
loc.update(D)
return loc
elevs = pd.DataFrame([unnest(item) for item in D['results']])
elevs.to_csv('station_elevations.csv')
return elevs
# only run this the first time:
import os
if not os.path.exists('station_elevations.csv'):
get_station_elevations(stations)
process_station_elevations()
Explanation: Overall, we see that longer rides tend to be faster – though this is subject to the same lower-bound caveats as above.
As above, for reference I have plotted the line separating free trips (above the red line) from trips requiring an additional fee (below the red line).
Again we see that the annual members are much more savvy about not going over the half hour limit than are day pass users – the sharp cutoff in the scatter of points suggests that these users keep close track of their time to avoid an extra charge!
Trend with Elevation
One oft-mentioned concern with the feasibility of bike share in Seattle is that it is a very hilly city – before the launch, armchair analysts predicted that there would be a steady flow of bikes from uphill to downhill, and that this would add up with other challenges to spell the demise of the system ("Sure, bikeshare works other places, but it can't work here: Seattle is special! We're just so special!")
Elevation data is not included in the data release, but again we can turn to the Google Maps API to get what we need; see this site for a description of the elevation API.
In this case the free-use limit is 2500 requests per day & 512 elevations per request.
Since we need just 55 elevations, we can do it in a single query:
End of explanation
elevs = pd.read_csv('station_elevations.csv', index_col=0)
elevs.head()
Explanation: Now let's read-in the elevation data:
End of explanation
# double check that locations match
print(np.allclose(stations.long, elevs.lng))
print(np.allclose(stations.lat, elevs.lat))
Explanation: Just to make ourselves feel better, we'll double check that the latitudes and longitudes match:
End of explanation
stations['elevation'] = elevs['elevation']
elevs.index = stations['terminal']
trips['elevation_start'] = trips.join(elevs, on='from_station_id')['elevation']
trips['elevation_end'] = trips.join(elevs, on='to_station_id')['elevation']
trips['elevation_gain'] = trips['elevation_end'] - trips['elevation_start']
Explanation: Now we can join the elevations to with the trip data by way of the station data:
End of explanation
g = sns.FacetGrid(trips, col="usertype", hue='usertype')
g.map(plt.hist, "elevation_gain", bins=np.arange(-145, 150, 10))
g.fig.set_figheight(6)
g.fig.set_figwidth(16);
# plot some lines to guide the eye
for lim in range(60, 150, 20):
x = np.linspace(-lim, lim, 3)
for ax in g.axes.flat:
ax.fill(x, 100 * (lim - abs(x)),
color='gray', alpha=0.1, zorder=0)
Explanation: Let's take a look at the distribution of elevation gain by rider type:
End of explanation
print("total downhill trips:", (trips.elevation_gain < 0).sum())
print("total uphill trips: ", (trips.elevation_gain > 0).sum())
Explanation: We have plotted some shading in the background to help guide the eye.
Again, there is a big difference between Annual Members and Short-term users: annual users definitely show a preference for downhill trips (left of the distribution), while day-pass users don't show this as strongly, but rather show a preference for rides which start and end at the same elevation (i.e. the same station).
To make the effect of elevation change more quantitative, let's compute the numbers:
End of explanation
weather = pd.read_csv('2015_weather_data.csv', index_col='Date', parse_dates=True)
weather.columns
Explanation: We see that the first year had 30,000 more downhill trips than uphill trips – that's about 60% more.
Given current usage levels, that means that Pronto staff must be shuttling an average of about 100 bikes per day from low-lying stations to higher-up stations.
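A back-of-the-envelope version of that estimate (only indicative, since it ignores any rebalancing riders do themselves on later trips):
net_downhill = (trips.elevation_gain < 0).sum() - (trips.elevation_gain > 0).sum()
print(net_downhill / float(trips.date.nunique()))  # approximate net bikes per day flowing downhill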
Weather
The other common "Seattle is special" argument against the feasibility of cycle share is the weather.
Let's take a look at how the number of rides changes with temperature and precipitation.
Fortunately, the data release includes a wide range of weather data:
End of explanation
by_date = trips.groupby(['date', 'usertype'])['trip_id'].count()
by_date.name = 'count'
by_date = by_date.reset_index('usertype').join(weather)
Explanation: Let's join this weather data with the trip data:
End of explanation
# add a flag indicating weekend
by_date['weekend'] = (by_date.index.dayofweek >= 5)
#----------------------------------------------------------------
# Plot Temperature Trend
g = sns.FacetGrid(by_date, col="weekend", hue='usertype', size=6)
g.map(sns.regplot, "Mean_Temperature_F", "count")
g.add_legend();
# do some formatting
g.axes[0, 0].set_title('')
g.axes[0, 1].set_title('')
g.axes[0, 0].text(0.05, 0.95, 'Monday - Friday', va='top', size=14,
transform=g.axes[0, 0].transAxes)
g.axes[0, 1].text(0.05, 0.95, 'Saturday - Sunday', va='top', size=14,
transform=g.axes[0, 1].transAxes)
g.fig.text(0.45, 1, "Trend With Temperature", ha='center', va='top', size=16);
#----------------------------------------------------------------
# Plot Precipitation
g = sns.FacetGrid(by_date, col="weekend", hue='usertype', size=6)
g.map(sns.regplot, "Precipitation_In ", "count")
g.add_legend();
# do some formatting
g.axes[0, 0].set_ylim(-50, 600);
g.axes[0, 0].set_title('')
g.axes[0, 1].set_title('')
g.axes[0, 0].text(0.95, 0.95, 'Monday - Friday', ha='right', va='top', size=14,
transform=g.axes[0, 0].transAxes)
g.axes[0, 1].text(0.95, 0.95, 'Saturday - Sunday', ha='right', va='top', size=14,
transform=g.axes[0, 1].transAxes)
g.fig.text(0.45, 1, "Trend With Precipitation", ha='center', va='top', size=16);
Explanation: Now we can take a look at how the number of rides scales with both Temperature and Precipitation, splitting the data by weekday and weekend:
End of explanation |
1,271 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2017 Google LLC.
Step1: # 텐서 만들기 및 조작
학습 목표
Step2: ## 벡터 덧셈
텐서에서 여러 일반적인 수학 연산을 할 수 있습니다(TF API). 다음 코드는
각기 정확히 6개 요소를 가지는 두 벡터(1-D 텐서)를 만들고 조작합니다.
Step3: ### 텐서 형태
형태는 텐서의 크기와 차원 수를 결정하는 데 사용됩니다. 텐서 형태는 목록으로 표현하며, i번째 요소는 i 차원에서 크기를 나타냅니다. 그리고 이 목록의 길이는 텐서의 순위(예
Step4: ### 브로드캐스팅
수학에서는 같은 형태의 텐서에서 요소간 연산(예
Step5: ## 행렬 곱셈
선형대수학에서 두 개의 행렬을 곱할 때는 첫 번째 행렬의 열 수가 두 번째
행렬의 행 수와 같아야 했습니다.
3x4 행렬과 4x2 행렬을 곱하는 것은 유효합니다. 이렇게 하면 3x2 행렬을 얻을 수 있습니다.
4x2 행렬과 3x4 행렬을 곱하는 것은 유효하지 않습니다.
Step6: ## 텐서 형태 변경
텐서 덧셈과 행렬 곱셈에서 각각 피연산자에 제약조건을 부여하면
텐서플로우 프로그래머는 자주 텐서의 형태를 변경해야 합니다.
tf.reshape 메서드를 사용하여 텐서의 형태를 변경할 수 있습니다.
예를 들어 8x2 텐서를 2x8 텐서나 4x4 텐서로 형태를 변경할 수 있습니다.
Step7: 또한 tf.reshape를 사용하여 텐서의 차원 수(\'순위\')를 변경할 수도 있습니다.
예를 들어 8x2 텐서를 3-D 2x2x4 텐서나 1-D 16-요소 텐서로 변경할 수 있습니다.
Step8: ### 실습 #1
Step9: ### 해결 방법
해결 방법을 보려면 아래를 클릭하세요.
Step10: ## 변수, 초기화, 할당
지금까지 수행한 모든 연산은 정적 값(tf.constant)에서 실행되었고; eval()을 호출하면 항상 같은 결과가 반환되었습니다. 텐서플로우에서는 변수 객체를 정의할 수 있으며, 변수 값은 변경할 수 있습니다.
변수를 만들 때 초기 값을 명시적으로 설정하거나 이니셜라이저(예
Step11: 텐서플로우의 한 가지 특징은 변수 초기화가 자동으로 실행되지 않는다는 것입니다. 예를 들어 다음 블록에서는 오류가 발생합니다.
Step12: 변수를 초기화하는 가장 쉬운 방법은 global_variables_initializer를 호출하는 것입니다. eval()과 거의 비슷한 Session.run()의 사용을 참고하세요.
Step13: 초기화된 변수는 같은 세션 내에서는 값을 유지합니다. 하지만 새 세션을 시작하면 다시 초기화해야 합니다.
Step14: 변수 값을 변경하려면 할당 작업을 사용합니다. 할당 작업을 만들기만 하면 실행되는 것은 아닙니다. 초기화와 마찬가지로 할당 작업을 실행해야 변수 값이 업데이트됩니다.
Step15: 로드 및 저장과 같이 여기에서 다루지 않은 변수에 관한 주제도 더 많이 있습니다. 자세히 알아보려면 텐서플로우 문서를 참조하세요.
### 실습 #2
Step16: ### 해결 방법
해결 방법을 보려면 아래를 클릭하세요. | Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2017 Google LLC.
End of explanation
from __future__ import print_function
import tensorflow as tf
Explanation: # 텐서 만들기 및 조작
학습 목표:
* 텐서플로우 변수 초기화 및 할당
* 텐서 만들기 및 조작
* 선형대수학의 덧셈 및 곱셈 지식 되살리기(이 주제가 생소한 경우 행렬 덧셈 및 곱셈 참조)
* 기본 텐서플로우 수학 및 배열 작업에 익숙해지기
End of explanation
with tf.Graph().as_default():
# Create a six-element vector (1-D tensor).
primes = tf.constant([2, 3, 5, 7, 11, 13], dtype=tf.int32)
# Create another six-element vector. Each element in the vector will be
# initialized to 1. The first argument is the shape of the tensor (more
# on shapes below).
ones = tf.ones([6], dtype=tf.int32)
# Add the two vectors. The resulting tensor is a six-element vector.
just_beyond_primes = tf.add(primes, ones)
# Create a session to run the default graph.
with tf.Session() as sess:
print(just_beyond_primes.eval())
Explanation: ## 벡터 덧셈
텐서에서 여러 일반적인 수학 연산을 할 수 있습니다(TF API). 다음 코드는
각기 정확히 6개 요소를 가지는 두 벡터(1-D 텐서)를 만들고 조작합니다.
End of explanation
with tf.Graph().as_default():
# A scalar (0-D tensor).
scalar = tf.zeros([])
# A vector with 3 elements.
vector = tf.zeros([3])
# A matrix with 2 rows and 3 columns.
matrix = tf.zeros([2, 3])
with tf.Session() as sess:
print('scalar has shape', scalar.get_shape(), 'and value:\n', scalar.eval())
print('vector has shape', vector.get_shape(), 'and value:\n', vector.eval())
print('matrix has shape', matrix.get_shape(), 'and value:\n', matrix.eval())
Explanation: ### 텐서 형태
형태는 텐서의 크기와 차원 수를 결정하는 데 사용됩니다. 텐서 형태는 목록으로 표현하며, i번째 요소는 i 차원에서 크기를 나타냅니다. 그리고 이 목록의 길이는 텐서의 순위(예: 차원 수)를 나타냅니다.
자세한 정보는 텐서플로우 문서를 참조하세요.
몇 가지 기본 예:
End of explanation
with tf.Graph().as_default():
# Create a six-element vector (1-D tensor).
primes = tf.constant([2, 3, 5, 7, 11, 13], dtype=tf.int32)
# Create a constant scalar with value 1.
ones = tf.constant(1, dtype=tf.int32)
# Add the two tensors. The resulting tensor is a six-element vector.
just_beyond_primes = tf.add(primes, ones)
with tf.Session() as sess:
print(just_beyond_primes.eval())
Explanation: ### 브로드캐스팅
수학에서는 같은 형태의 텐서에서 요소간 연산(예: add 및 equals)만 실행할 수 있습니다. 하지만 텐서플로우에서는 텐서에서 기존에는 호환되지 않았던 연산을 실행할 수 있습니다. 텐서플로우는 요소간 연산에서 더 작은 배열을 확장하여 더 큰 배열과 같은 형태를 가지게 하는 브로드캐스팅(Numpy에서 차용한 개념)을 지원합니다. 예를 들어 브로드캐스팅을 통해 다음과 같은 결과를 얻을 수 있습니다.
피연산자에 크기가 [6]인 텐서가 필요한 경우 크기가 [1] 또는 크기가 []인 텐서가 피연산자가 될 수 있습니다.
연산에 크기가 [4, 6]인 텐서가 필요한 경우 다음 크기의 텐서가 피연산자가 될 수 있습니다.
[1, 6]
[6]
[]
연산에 크기가 [3, 5, 6]인 텐서가 필요한 경우 다음 크기의 텐서가 피연산자가 될 수 있습니다.
[1, 5, 6]
[3, 1, 6]
[3, 5, 1]
[1, 1, 1]
[5, 6]
[1, 6]
[6]
[1]
[]
참고: 텐서가 브로드캐스팅되면 텐서의 항목은 개념적으로 복사됩니다. (성능상의 이유로 실제로 복사되지는 않음. 브로드캐스팅은 성능 최적화를 위해 개발됨.)
전체 브로드캐스팅 규칙 세트는 Numpy 브로드캐스팅 문서에 이해하기 쉽게 잘 설명되어 있습니다.
다음 코드는 앞서 설명한 텐서 덧셈을 실행하지만 브로드캐스팅을 사용합니다.
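As one more illustrative sketch (an addition, not part of the original exercise), the same broadcasting rules let a [2, 3] matrix and a length-3 vector be added directly:
with tf.Graph().as_default(), tf.Session() as sess:
    m = tf.constant([[1, 2, 3], [4, 5, 6]], dtype=tf.int32)  # shape [2, 3]
    v = tf.constant([10, 20, 30], dtype=tf.int32)            # shape [3], broadcast across both rows
    print(tf.add(m, v).eval())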
End of explanation
with tf.Graph().as_default():
# Create a matrix (2-d tensor) with 3 rows and 4 columns.
x = tf.constant([[5, 2, 4, 3], [5, 1, 6, -2], [-1, 3, -1, -2]],
dtype=tf.int32)
# Create a matrix with 4 rows and 2 columns.
y = tf.constant([[2, 2], [3, 5], [4, 5], [1, 6]], dtype=tf.int32)
# Multiply `x` by `y`.
# The resulting matrix will have 3 rows and 2 columns.
matrix_multiply_result = tf.matmul(x, y)
with tf.Session() as sess:
print(matrix_multiply_result.eval())
Explanation: ## 행렬 곱셈
선형대수학에서 두 개의 행렬을 곱할 때는 첫 번째 행렬의 열 수가 두 번째
행렬의 행 수와 같아야 했습니다.
3x4 행렬과 4x2 행렬을 곱하는 것은 유효합니다. 이렇게 하면 3x2 행렬을 얻을 수 있습니다.
4x2 행렬과 3x4 행렬을 곱하는 것은 유효하지 않습니다.
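A small illustrative sketch (not part of the original exercise) of how the shape rule plays out in code; swapping the operand order restores compatible inner dimensions:
with tf.Graph().as_default(), tf.Session() as sess:
    a = tf.ones([4, 2])
    b = tf.ones([3, 4])
    # tf.matmul(a, b) would raise a shape error here (inner dimensions 2 and 3 do not match);
    # multiplying in the other order is the valid 3x4 times 4x2 case described above.
    print(tf.matmul(b, a).eval().shape)  # (3, 2)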
End of explanation
with tf.Graph().as_default():
# Create an 8x2 matrix (2-D tensor).
matrix = tf.constant([[1,2], [3,4], [5,6], [7,8],
[9,10], [11,12], [13, 14], [15,16]], dtype=tf.int32)
# Reshape the 8x2 matrix into a 2x8 matrix.
reshaped_2x8_matrix = tf.reshape(matrix, [2,8])
# Reshape the 8x2 matrix into a 4x4 matrix
reshaped_4x4_matrix = tf.reshape(matrix, [4,4])
with tf.Session() as sess:
print("Original matrix (8x2):")
print(matrix.eval())
print("Reshaped matrix (2x8):")
print(reshaped_2x8_matrix.eval())
print("Reshaped matrix (4x4):")
print(reshaped_4x4_matrix.eval())
Explanation: ## 텐서 형태 변경
텐서 덧셈과 행렬 곱셈에서 각각 피연산자에 제약조건을 부여하면
텐서플로우 프로그래머는 자주 텐서의 형태를 변경해야 합니다.
tf.reshape 메서드를 사용하여 텐서의 형태를 변경할 수 있습니다.
예를 들어 8x2 텐서를 2x8 텐서나 4x4 텐서로 형태를 변경할 수 있습니다.
End of explanation
with tf.Graph().as_default():
# Create an 8x2 matrix (2-D tensor).
matrix = tf.constant([[1,2], [3,4], [5,6], [7,8],
[9,10], [11,12], [13, 14], [15,16]], dtype=tf.int32)
# Reshape the 8x2 matrix into a 3-D 2x2x4 tensor.
reshaped_2x2x4_tensor = tf.reshape(matrix, [2,2,4])
# Reshape the 8x2 matrix into a 1-D 16-element tensor.
one_dimensional_vector = tf.reshape(matrix, [16])
with tf.Session() as sess:
print("Original matrix (8x2):")
print(matrix.eval())
print("Reshaped 3-D tensor (2x2x4):")
print(reshaped_2x2x4_tensor.eval())
print("1-D vector:")
print(one_dimensional_vector.eval())
Explanation: 또한 tf.reshape를 사용하여 텐서의 차원 수(\'순위\')를 변경할 수도 있습니다.
예를 들어 8x2 텐서를 3-D 2x2x4 텐서나 1-D 16-요소 텐서로 변경할 수 있습니다.
End of explanation
# Write your code for Task 1 here.
Explanation: ### 실습 #1: 두 개의 텐서를 곱하기 위해 두 텐서의 형태를 변경합니다.
다음 두 벡터는 행렬 곱셈과 호환되지 않습니다.
a = tf.constant([5, 3, 2, 7, 1, 4])
b = tf.constant([4, 6, 3])
이 벡터를 행렬 곱셈에 호환될 수 있는 피연산자로 형태를 변경하세요.
그런 다음 형태가 변경된 텐서에서 행렬 곱셈 작업을 호출하세요.
End of explanation
with tf.Graph().as_default(), tf.Session() as sess:
# Task: Reshape two tensors in order to multiply them
# Here are the original operands, which are incompatible
# for matrix multiplication:
a = tf.constant([5, 3, 2, 7, 1, 4])
b = tf.constant([4, 6, 3])
# We need to reshape at least one of these operands so that
# the number of columns in the first operand equals the number
# of rows in the second operand.
# Reshape vector "a" into a 2-D 2x3 matrix:
reshaped_a = tf.reshape(a, [2,3])
# Reshape vector "b" into a 2-D 3x1 matrix:
reshaped_b = tf.reshape(b, [3,1])
# The number of columns in the first matrix now equals
# the number of rows in the second matrix. Therefore, you
# can matrix mutiply the two operands.
c = tf.matmul(reshaped_a, reshaped_b)
print(c.eval())
# An alternate approach: [6,1] x [1, 3] -> [6,3]
Explanation: ### 해결 방법
해결 방법을 보려면 아래를 클릭하세요.
End of explanation
g = tf.Graph()
with g.as_default():
# Create a variable with the initial value 3.
v = tf.Variable([3])
# Create a variable of shape [1], with a random initial value,
# sampled from a normal distribution with mean 1 and standard deviation 0.35.
w = tf.Variable(tf.random_normal([1], mean=1.0, stddev=0.35))
Explanation: ## 변수, 초기화, 할당
지금까지 수행한 모든 연산은 정적 값(tf.constant)에서 실행되었고; eval()을 호출하면 항상 같은 결과가 반환되었습니다. 텐서플로우에서는 변수 객체를 정의할 수 있으며, 변수 값은 변경할 수 있습니다.
변수를 만들 때 초기 값을 명시적으로 설정하거나 이니셜라이저(예: 분포)를 사용할 수 있습니다.
End of explanation
with g.as_default():
with tf.Session() as sess:
try:
v.eval()
except tf.errors.FailedPreconditionError as e:
print("Caught expected error: ", e)
Explanation: 텐서플로우의 한 가지 특징은 변수 초기화가 자동으로 실행되지 않는다는 것입니다. 예를 들어 다음 블록에서는 오류가 발생합니다.
End of explanation
with g.as_default():
with tf.Session() as sess:
initialization = tf.global_variables_initializer()
sess.run(initialization)
# Now, variables can be accessed normally, and have values assigned to them.
print(v.eval())
print(w.eval())
Explanation: 변수를 초기화하는 가장 쉬운 방법은 global_variables_initializer를 호출하는 것입니다. eval()과 거의 비슷한 Session.run()의 사용을 참고하세요.
End of explanation
with g.as_default():
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# These three prints will print the same value.
print(w.eval())
print(w.eval())
print(w.eval())
Explanation: 초기화된 변수는 같은 세션 내에서는 값을 유지합니다. 하지만 새 세션을 시작하면 다시 초기화해야 합니다.
End of explanation
with g.as_default():
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# This should print the variable's initial value.
print(v.eval())
assignment = tf.assign(v, [7])
# The variable has not been changed yet!
print(v.eval())
# Execute the assignment op.
sess.run(assignment)
# Now the variable is updated.
print(v.eval())
Explanation: 변수 값을 변경하려면 할당 작업을 사용합니다. 할당 작업을 만들기만 하면 실행되는 것은 아닙니다. 초기화와 마찬가지로 할당 작업을 실행해야 변수 값이 업데이트됩니다.
End of explanation
# Write your code for Task 2 here.
Explanation: 로드 및 저장과 같이 여기에서 다루지 않은 변수에 관한 주제도 더 많이 있습니다. 자세히 알아보려면 텐서플로우 문서를 참조하세요.
### 실습 #2: 주사위 2개 10번 굴리기를 시뮬레이션합니다.
주사위 시뮬레이션을 만듭니다. 여기에서 10x3 2-D 텐서를 생성하며 조건은 다음과 같습니다.
열 1 및 2는 각각 주사위 1개를 1번 던졌을 때의 값입니다.
열 3은 같은 줄의 열 1과 2의 합입니다.
예를 들어 첫 번째 행의 값은 다음과 같을 수 있습니다.
열 1은 4
열 2는 3
열 3은 7
텐서플로우 문서를 참조하여 이 문제를 해결해 보세요.
End of explanation
with tf.Graph().as_default(), tf.Session() as sess:
# Task 2: Simulate 10 throws of two dice. Store the results
# in a 10x3 matrix.
# We're going to place dice throws inside two separate
# 10x1 matrices. We could have placed dice throws inside
# a single 10x2 matrix, but adding different columns of
# the same matrix is tricky. We also could have placed
# dice throws inside two 1-D tensors (vectors); doing so
# would require transposing the result.
dice1 = tf.Variable(tf.random_uniform([10, 1],
minval=1, maxval=7,
dtype=tf.int32))
dice2 = tf.Variable(tf.random_uniform([10, 1],
minval=1, maxval=7,
dtype=tf.int32))
# We may add dice1 and dice2 since they share the same shape
# and size.
dice_sum = tf.add(dice1, dice2)
# We've got three separate 10x1 matrices. To produce a single
# 10x3 matrix, we'll concatenate them along dimension 1.
resulting_matrix = tf.concat(
values=[dice1, dice2, dice_sum], axis=1)
# The variables haven't been initialized within the graph yet,
# so let's remedy that.
sess.run(tf.global_variables_initializer())
print(resulting_matrix.eval())
Explanation: ### 해결 방법
해결 방법을 보려면 아래를 클릭하세요.
End of explanation |
1,272 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spark + Python = PySpark
Esse notebook introduz os conceitos básicos do Spark através de sua interface com a linguagem Python. Como aplicação inicial faremos o clássico examplo de contador de palavras . Com esse exemplo é possível entender a lógica de programação funcional para as diversas tarefas de exploração de dados distribuídos.
Para isso utilizaremos o livro texto Trabalhos completos de William Shakespeare obtidos do Projeto Gutenberg. Veremos que esse mesmo algoritmo pode ser empregado em textos de qualquer tamanho.
Esse notebook contém
Step2: (1b) Plural
Vamos criar uma função que transforma uma palavra no plural adicionando uma letra 's' ao final da string. Em seguida vamos utilizar a função map() para aplicar a transformação em cada palavra do RDD.
Em Python (e muitas outras linguagens) a concatenação de strings é custosa. Uma alternativa melhor é criar uma nova string utilizando str.format().
Nota
Step3: (1c) Aplicando a função ao RDD
Transforme cada palavra do nosso RDD em plural usando map()
Em seguida, utilizaremos o comando collect() que retorna a RDD como uma lista do Python.
Step4: Nota
Step5: (1e) Tamanho de cada palavra
Agora use map() e uma função lambda para retornar o número de caracteres em cada palavra. Utilize collect() para armazenar o resultado em forma de listas na variável destino.
Step6: (1f) RDDs de pares e tuplas
Para contar a frequência de cada palavra de maneira distribuída, primeiro devemos atribuir um valor para cada palavra do RDD. Isso irá gerar um base de dados (chave, valor). Desse modo podemos agrupar a base através da chave, calculando a soma dos valores atribuídos. No nosso caso, vamos atribuir o valor 1 para cada palavra.
Um RDD contendo a estrutura de tupla chave-valor (k,v) é chamada de RDD de tuplas ou pair RDD.
Vamos criar nosso RDD de pares usando a transformação map() com uma função lambda().
Step7: Parte 2
Step8: (2b) Calculando as contagens
Após o groupByKey(), nossa RDD contém elementos compostos da palavra, como chave, e um iterador contendo todos os valores correspondentes àquela chave.
Utilizando a transformação map() e a função sum(), construa um novo RDD que consiste de tuplas (chave, soma).
Step9: (2c) reduceByKey
Um comando mais interessante para a contagem é o reduceByKey() que cria uma nova RDD de tuplas.
Essa transformação aplica a transformação reduce() vista na aula anterior para os valores de cada chave. Dessa forma, a função de transformação pode ser aplicada em cada partição local para depois ser enviada para redistribuição de partições, reduzindo o total de dados sendo movidos e não mantendo listas grandes na memória.
Step10: (2d) Agrupando os comandos
A forma mais usual de realizar essa tarefa, partindo do nosso RDD palavrasRDD, é encadear os comandos map e reduceByKey em uma linha de comando.
Step11: Parte 3
Step12: (3b) Calculando a Média de contagem de palavras
Encontre a média de frequência das palavras utilizando o RDD contagem.
Note que a função do comando reduce() é aplicada em cada tupla do RDD. Para realizar a soma das contagens, primeiro é necessário mapear o RDD para um RDD contendo apenas os valores das frequências (sem as chaves).
Step14: Parte 4
Step16: (4b) Normalizando o texto
Quando trabalhamos com dados reais, geralmente precisamos padronizar os atributos de tal forma que diferenças sutis por conta de erro de medição ou diferença de normatização, sejam desconsideradas. Para o próximo passo vamos padronizar o texto para
Step17: (4c) Carregando arquivo texto
Para a próxima parte vamos utilizar o livro Trabalhos completos de William Shakespeare do Projeto Gutenberg.
Para converter um texto em uma RDD, utilizamos a função textFile() que recebe como entrada o nome do arquivo texto que queremos utilizar e o número de partições.
O nome do arquivo texto pode se referir a um arquivo local ou uma URI de arquivo distribuído (ex.
Step18: (4d) Extraindo as palavras
Antes de poder usar nossa função contaPalavras(), temos ainda que trabalhar em cima da nossa RDD
Step19: Conforme deve ter percebido, o uso da função map() gera uma lista para cada linha, criando um RDD contendo uma lista de listas.
Para resolver esse problema, o Spark possui uma função análoga chamada flatMap() que aplica a transformação do map(), porém achatando o retorno em forma de lista para uma lista unidimensional.
Step20: (4e) Remover linhas vazias
Para o próximo passo vamos filtrar as linhas vazias com o comando filter(). Uma linha vazia é uma string sem nenhum conteúdo.
Step21: (4f) Contagem de palavras
Agora que nossa RDD contém uma lista de palavras, podemos aplicar nossa função contaPalavras().
Aplique a função em nossa RDD e utilize a função takeOrdered para imprimir as 15 palavras mais frequentes.
takeOrdered() pode receber um segundo parâmetro que instrui o Spark em como ordenar os elementos. Ex.
Step23: Parte 5
Step27: (5b) Valores Categóricos
Quando nossos objetos são representados por atributos categóricos, eles não possuem uma similaridade espacial. Para calcularmos a similaridade entre eles podemos primeiro transformar nosso vetor de atrbutos em um vetor binário indicando, para cada possível valor de cada atributo, se ele possui esse atributo ou não.
Com o vetor binário podemos utilizar a distância de Hamming definida por | Python Code:
ListaPalavras = ['gato', 'elefante', 'rato', 'rato', 'gato']
palavrasRDD = sc.parallelize(ListaPalavras, 4)
print type(palavrasRDD)
Explanation: Spark + Python = PySpark
Esse notebook introduz os conceitos básicos do Spark através de sua interface com a linguagem Python. Como aplicação inicial faremos o clássico examplo de contador de palavras . Com esse exemplo é possível entender a lógica de programação funcional para as diversas tarefas de exploração de dados distribuídos.
Para isso utilizaremos o livro texto Trabalhos completos de William Shakespeare obtidos do Projeto Gutenberg. Veremos que esse mesmo algoritmo pode ser empregado em textos de qualquer tamanho.
Esse notebook contém:
Parte 1: Criando uma base RDD e RDDs de tuplas
Parte 2: Manipulando RDDs de tuplas
Parte 3: Encontrando palavras únicas e calculando médias
Parte 4: Aplicar contagem de palavras em um arquivo
Parte 5: Similaridade entre Objetos
Para os exercícios é aconselhável consultar a documentação da API do PySpark
Part 1: Criando e Manipulando RDDs
Nessa parte do notebook vamos criar uma base RDD a partir de uma lista com o comando parallelize.
(1a) Criando uma base RDD
Podemos criar uma base RDD de diversos tipos e fonte do Python com o comando sc.parallelize(fonte, particoes), sendo fonte uma variável contendo os dados (ex.: uma lista) e particoes o número de partições para trabalhar em paralelo.
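A minimal illustrative sketch (assuming, as in the rest of this notebook, that the SparkContext is already available as sc):
numerosRDD = sc.parallelize(range(10), 2)  # an RDD of integers split into 2 partitions
print numerosRDD.getNumPartitions()
print numerosRDD.take(3)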
End of explanation
# EXERCICIO
def Plural(palavra):
Adds an 's' to `palavra`.
Args:
palavra (str): A string.
Returns:
str: A string with 's' added to it.
return <COMPLETAR>
print Plural('gato')
help(Plural)
assert Plural('rato')=='ratos', 'resultado incorreto!'
print 'OK'
Explanation: (1b) Plural
Vamos criar uma função que transforma uma palavra no plural adicionando uma letra 's' ao final da string. Em seguida vamos utilizar a função map() para aplicar a transformação em cada palavra do RDD.
Em Python (e muitas outras linguagens) a concatenação de strings é custosa. Uma alternativa melhor é criar uma nova string utilizando str.format().
Nota: a string entre os conjuntos de três aspas representa a documentação da função. Essa documentação é exibida com o comando help(). Vamos utilizar a padronização de documentação sugerida para o Python, manteremos essa documentação em inglês.
End of explanation
# EXERCICIO
pluralRDD = palavrasRDD.<COMPLETAR>
print pluralRDD.collect()
assert pluralRDD.collect()==['gatos','elefantes','ratos','ratos','gatos'], 'valores incorretos!'
print 'OK'
Explanation: (1c) Aplicando a função ao RDD
Transforme cada palavra do nosso RDD em plural usando map()
Em seguida, utilizaremos o comando collect() que retorna a RDD como uma lista do Python.
End of explanation
# EXERCICIO
pluralLambdaRDD = palavrasRDD.<COMPLETAR>
print pluralLambdaRDD.collect()
assert pluralLambdaRDD.collect()==['gatos','elefantes','ratos','ratos','gatos'], 'valores incorretos!'
print 'OK'
Explanation: Nota: utilize o comando collect() apenas quando tiver certeza de que a lista caberá na memória. Para gravar os resultados de volta em arquivo texto ou base de dados utilizaremos outro comando.
(1d) Utilizando uma função lambda
Repita a criação de um RDD de plurais, porém utilizando uma função lambda.
End of explanation
# EXERCICIO
pluralTamanho = (pluralRDD
<COMPLETAR>
)
print pluralTamanho
assert pluralTamanho==[5,9,5,5,5], 'valores incorretos'
print "OK"
Explanation: (1e) Tamanho de cada palavra
Agora use map() e uma função lambda para retornar o número de caracteres em cada palavra. Utilize collect() para armazenar o resultado em forma de listas na variável destino.
End of explanation
# EXERCICIO
palavraPar = palavrasRDD.<COMPLETAR>
print palavraPar.collect()
assert palavraPar.collect() == [('gato',1),('elefante',1),('rato',1),('rato',1),('gato',1)], 'valores incorretos!'
print "OK"
Explanation: (1f) RDDs de pares e tuplas
Para contar a frequência de cada palavra de maneira distribuída, primeiro devemos atribuir um valor para cada palavra do RDD. Isso irá gerar um base de dados (chave, valor). Desse modo podemos agrupar a base através da chave, calculando a soma dos valores atribuídos. No nosso caso, vamos atribuir o valor 1 para cada palavra.
Um RDD contendo a estrutura de tupla chave-valor (k,v) é chamada de RDD de tuplas ou pair RDD.
Vamos criar nosso RDD de pares usando a transformação map() com uma função lambda().
End of explanation
# EXERCICIO
palavrasGrupo = palavraPar.groupByKey()
for chave, valor in palavrasGrupo.collect():
print '{0}: {1}'.format(chave, list(valor))
assert sorted(palavrasGrupo.mapValues(lambda x: list(x)).collect()) == [('elefante', [1]), ('gato',[1, 1]), ('rato',[1, 1])],
'Valores incorretos!'
print "OK"
Explanation: Parte 2: Manipulando RDD de tuplas
Vamos manipular nossa RDD para contar as palavras do texto.
(2a) Função groupByKey()
A função groupByKey() agrupa todos os valores de um RDD através da chave (primeiro elemento da tupla) agregando os valores em uma lista.
Essa abordagem tem um ponto fraco pois:
A operação requer que os dados distribuídos sejam movidos em massa para que permaneçam na partição correta.
As listas podem se tornar muito grandes. Imagine contar todas as palavras do Wikipedia: termos comuns como "a", "e" formarão uma lista enorme de valores que pode não caber na memória do processo escravo.
End of explanation
# EXERCICIO
contagemGroup = palavrasGrupo.<COMPLETAR>
print contagemGroup.collect()
assert sorted(contagemGroup.collect())==[('elefante',1), ('gato',2), ('rato',2)], 'valores incorretos!'
print "OK"
Explanation: (2b) Calculando as contagens
Após o groupByKey(), nossa RDD contém elementos compostos da palavra, como chave, e um iterador contendo todos os valores correspondentes àquela chave.
Utilizando a transformação map() e a função sum(), construa um novo RDD que consiste de tuplas (chave, soma).
End of explanation
# EXERCICIO
contagem = palavraPar.<COMPLETAR>
print contagem.collect()
assert sorted(contagem.collect())==[('elefante',1), ('gato',2), ('rato',2)], 'valores incorretos!'
print "OK"
Explanation: (2c) reduceByKey
Um comando mais interessante para a contagem é o reduceByKey() que cria uma nova RDD de tuplas.
Essa transformação aplica a transformação reduce() vista na aula anterior para os valores de cada chave. Dessa forma, a função de transformação pode ser aplicada em cada partição local para depois ser enviada para redistribuição de partições, reduzindo o total de dados sendo movidos e não mantendo listas grandes na memória.
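A minimal sketch of that behaviour on a tiny pair RDD (again assuming sc is available; the output order of the tuples may vary):
pares = sc.parallelize([('a', 1), ('b', 1), ('a', 1)])
print pares.reduceByKey(lambda x, y: x + y).collect()  # [('a', 2), ('b', 1)]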
End of explanation
# EXERCICIO
contagemFinal = (palavrasRDD
<COMPLETAR>
<COMPLETAR>
)
print contagemFinal.collect()
assert sorted(contagemFinal)==[('elefante',1), ('gato',2), ('rato',2)], 'valores incorretos!'
print "OK"
Explanation: (2d) Agrupando os comandos
A forma mais usual de realizar essa tarefa, partindo do nosso RDD palavrasRDD, é encadear os comandos map e reduceByKey em uma linha de comando.
End of explanation
# EXERCICIO
palavrasUnicas = <COMPLETAR>
print palavrasUnicas
assert palavrasUnicas==3, 'valor incorreto!'
print "OK"
Explanation: Parte 3: Encontrando as palavras únicas e calculando a média de contagem
(3a) Palavras Únicas
Calcule a quantidade de palavras únicas do RDD. Utilize comandos de RDD da API do PySpark e alguma das últimas RDDs geradas nos exercícios anteriores.
End of explanation
# EXERCICIO
# add é equivalente a lambda x,y: x+y
from operator import add
total = (contagemFinal
<COMPLETAR>
<COMPLETAR>
)
media = total / float(palavrasUnicas)
print total
print round(media, 2)
assert round(media, 2)==1.67, 'valores incorretos!'
print "OK"
Explanation: (3b) Calculando a Média de contagem de palavras
Encontre a média de frequência das palavras utilizando o RDD contagem.
Note que a função do comando reduce() é aplicada em cada tupla do RDD. Para realizar a soma das contagens, primeiro é necessário mapear o RDD para um RDD contendo apenas os valores das frequências (sem as chaves).
End of explanation
# EXERCICIO
def contaPalavras(chavesRDD):
Creates a pair RDD with word counts from an RDD of words.
Args:
chavesRDD (RDD of str): An RDD consisting of words.
Returns:
RDD of (str, int): An RDD consisting of (word, count) tuples.
return (chavesRDD
<COMPLETAR>
<COMPLETAR>
)
print contaPalavras(palavrasRDD).collect()
assert sorted(contaPalavras(palavrasRDD).collect())==[('elefante',1), ('gato',2), ('rato',2)], 'valores incorretos!'
print "OK"
Explanation: Parte 4: Aplicar nosso algoritmo em um arquivo
(4a) Função contaPalavras
Para podermos aplicar nosso algoritmo genéricamente em diversos RDDs, vamos primeiro criar uma função para aplicá-lo em qualquer fonte de dados. Essa função recebe de entrada um RDD contendo uma lista de chaves (palavras) e retorna um RDD de tuplas com as chaves e a contagem delas nessa RDD
End of explanation
# EXERCICIO
import re
def removerPontuacao(texto):
Removes punctuation, changes to lower case, and strips leading and trailing spaces.
Note:
Only spaces, letters, and numbers should be retained. Other characters should should be
eliminated (e.g. it's becomes its). Leading and trailing spaces should be removed after
punctuation is removed.
Args:
texto (str): A string.
Returns:
str: The cleaned up string.
return re.sub(r'[^A-Za-z0-9 ]', '', texto).strip().lower()
print removerPontuacao('Ola, quem esta ai??!')
print removerPontuacao(' Sem espaco e_sublinhado!')
assert removerPontuacao(' O uso de virgulas, embora permitido, nao deve contar. ')=='o uso de virgulas embora permitido nao deve contar', 'string incorreta!'
print "OK"
Explanation: (4b) Normalizando o texto
Quando trabalhamos com dados reais, geralmente precisamos padronizar os atributos de tal forma que diferenças sutis por conta de erro de medição ou diferença de normatização, sejam desconsideradas. Para o próximo passo vamos padronizar o texto para:
Padronizar a capitalização das palavras (tudo maiúsculo ou tudo minúsculo).
Remover pontuação.
Remover espaços no início e no final da palavra.
Crie uma função removerPontuacao que converte todo o texto para minúscula, remove qualquer pontuação e espaços em branco no início ou final da palavra. Para isso, utilize a biblioteca re para remover todo texto que não seja letra, número ou espaço, encadeando com as funções de string para remover espaços em branco e converter para minúscula (veja Strings).
End of explanation
# Apenas execute a célula
import os.path
import urllib
url = 'http://www.gutenberg.org/cache/epub/100/pg100.txt' # url do livro
arquivo = os.path.join('Data','Aula02','shakespeare.txt') # local de destino: 'Data/Aula02/shakespeare.txt'
if os.path.isfile(arquivo): # verifica se já fizemos download do arquivo
print 'Arquivo já existe!'
else:
try:
urllib.urlretrieve(url, arquivo) # salva conteúdo da url em arquivo
except IOError:
print 'Impossível fazer o download: {0}'.format(url)
# lê o arquivo com textFile e aplica a função removerPontuacao
shakesRDD = (sc
.textFile(arquivo, 8)
.map(removerPontuacao)
)
# zipWithIndex gera tuplas (conteudo, indice) onde indice é a posição do conteudo na lista sequencial
# Ex.: sc.parallelize(['gato','cachorro','boi']).zipWithIndex() ==> [('gato',0), ('cachorro',1), ('boi',2)]
# sep.join() junta as strings de uma lista através do separador sep. Ex.: ','.join(['a','b','c']) ==> 'a,b,c'
print '\n'.join(shakesRDD
.zipWithIndex()
.map(lambda (linha, num): '{0}: {1}'.format(num,linha))
.take(15)
)
Explanation: (4c) Carregando arquivo texto
Para a próxima parte vamos utilizar o livro Trabalhos completos de William Shakespeare do Projeto Gutenberg.
Para converter um texto em uma RDD, utilizamos a função textFile() que recebe como entrada o nome do arquivo texto que queremos utilizar e o número de partições.
O nome do arquivo texto pode se referir a um arquivo local ou uma URI de arquivo distribuído (ex.: hdfs://).
Vamos também aplicar a função removerPontuacao() para normalizar o texto e verificar as 15 primeiras linhas com o comando take().
End of explanation
# EXERCICIO
shakesPalavrasRDD = shakesRDD.<COMPLETAR>
total = shakesPalavrasRDD.count()
print shakesPalavrasRDD.take(5)
print total
Explanation: (4d) Extraindo as palavras
Antes de poder usar nossa função contaPalavras(), temos ainda que trabalhar em cima da nossa RDD:
Precisamos gerar listas de palavras ao invés de listas de sentenças.
Eliminar linhas vazias.
As strings em Python tem o método split() que faz a separação de uma string por separador. No nosso caso, queremos separar as strings por espaço.
Utilize a função map() para gerar um novo RDD como uma lista de palavras.
End of explanation
# EXERCICIO
shakesPalavrasRDD = shakesRDD.flatMap(lambda x: x.split())
total = shakesPalavrasRDD.count()
print shakesPalavrasRDD.top(5)
print total
assert total==927631 or total == 928908, "valor incorreto de palavras!"
print "OK"
assert shakesPalavrasRDD.top(5)==[u'zwaggerd', u'zounds', u'zounds', u'zounds', u'zounds'],'lista incorreta de palavras'
print "OK"
Explanation: Conforme deve ter percebido, o uso da função map() gera uma lista para cada linha, criando um RDD contendo uma lista de listas.
Para resolver esse problema, o Spark possui uma função análoga chamada flatMap() que aplica a transformação do map(), porém achatando o retorno em forma de lista para uma lista unidimensional.
End of explanation
# EXERCICIO
shakesLimpoRDD = shakesPalavrasRDD.<COMPLETAR>
total = shakesLimpoRDD.count()
print total
assert total==882996, 'valor incorreto!'
print "OK"
Explanation: (4e) Remover linhas vazias
Para o próximo passo vamos filtrar as linhas vazias com o comando filter(). Uma linha vazia é uma string sem nenhum conteúdo.
End of explanation
# EXERCICIO
top15 = <COMPLETAR>
print '\n'.join(map(lambda (w, c): '{0}: {1}'.format(w, c), top15))
assert top15 == [(u'the', 27361), (u'and', 26028), (u'i', 20681), (u'to', 19150), (u'of', 17463),
(u'a', 14593), (u'you', 13615), (u'my', 12481), (u'in', 10956), (u'that', 10890),
(u'is', 9134), (u'not', 8497), (u'with', 7771), (u'me', 7769), (u'it', 7678)],'valores incorretos!'
print "OK"
Explanation: (4f) Contagem de palavras
Agora que nossa RDD contém uma lista de palavras, podemos aplicar nossa função contaPalavras().
Aplique a função em nossa RDD e utilize a função takeOrdered para imprimir as 15 palavras mais frequentes.
takeOrdered() pode receber um segundo parâmetro que instrui o Spark em como ordenar os elementos. Ex.:
takeOrdered(15, key=lambda x: -x): ordem decrescente dos valores de x
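For instance, a quick illustrative call (not part of the exercise) that orders pairs by decreasing value:
print sc.parallelize([('a', 3), ('b', 1), ('c', 2)]).takeOrdered(2, key=lambda kv: -kv[1])  # [('a', 3), ('c', 2)]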
End of explanation
import numpy as np
# Vamos criar uma função pNorm que recebe como parâmetro p e retorna uma função que calcula a pNorma
def pNorm(p):
Generates a function to calculate the p-Norm between two points.
Args:
p (int): The integer p.
Returns:
Dist: A function that calculates the p-Norm.
def Dist(x,y):
return np.power(np.power(np.abs(x-y),p).sum(),1/float(p))
return Dist
# Vamos criar uma RDD com valores numéricos
numPointsRDD = sc.parallelize(enumerate(np.random.random(size=(10,100))))
# EXERCICIO
# Procure dentre os comandos do PySpark, um que consiga fazer o produto cartesiano da base com ela mesma
cartPointsRDD = numPointsRDD.<COMPLETAR>
# Aplique um mapa para transformar nossa RDD em uma RDD de tuplas ((id1,id2), (vetor1,vetor2))
# DICA: primeiro utilize o comando take(1) e imprima o resultado para verificar o formato atual da RDD
cartPointsParesRDD = cartPointsRDD.<COMPLETAR>
# Aplique um mapa para calcular a Distância Euclidiana entre os pares
Euclid = pNorm(2)
distRDD = cartPointsParesRDD.<COMPLETAR>
# Encontre a distância máxima, mínima e média, aplicando um mapa que transforma (chave,valor) --> valor
# e utilizando os comandos internos do pyspark para o cálculo da min, max, mean
statRDD = distRDD.<COMPLETAR>
minv, maxv, meanv = statRDD.<COMPLETAR>, statRDD.<COMPLETAR>, statRDD.<COMPLETAR>
print minv, maxv, meanv
assert (minv.round(2), maxv.round(2), meanv.round(2))==(0.0, 4.70, 3.65), 'Valores incorretos'
print "OK"
Explanation: Parte 5: Similaridade entre Objetos
Nessa parte do laboratório vamos aprender a calcular a distância entre atributos numéricos, categóricos e textuais.
(5a) Vetores no espaço Euclidiano
Quando nossos objetos são representados no espaço Euclidiano, medimos a similaridade entre eles através da p-Norma definida por:
$$d(x,y,p) = (\sum_{i=1}^{n}{|x_i - y_i|^p})^{1/p}$$
As normas mais utilizadas são $p=1,2,\infty$ que se reduzem em distância absoluta, Euclidiana e máxima distância:
$$d(x,y,1) = \sum_{i=1}^{n}{|x_i - y_i|}$$
$$d(x,y,2) = (\sum_{i=1}^{n}{|x_i - y_i|^2})^{1/2}$$
$$d(x,y,\infty) = \max(|x_1 - y_1|,|x_2 - y_2|, ..., |x_n - y_n|)$$
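Note that the pNorm helper above only covers finite p; as a small illustrative addition (not part of the original lab), the maximum-distance case can be written directly:
def maxNorm(x, y):
    # p = infinity: the largest absolute coordinate difference
    return np.abs(x - y).max()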
End of explanation
# Vamos criar uma função para calcular a distância de Hamming
def Hamming(x,y):
Calculates the Hamming distance between two binary vectors.
Args:
x, y (np.array): Array of binary integers x and y.
Returns:
H (int): The Hamming distance between x and y.
return (x!=y).sum()
# Vamos criar uma função para calcular a distância de Jaccard
def Jaccard(x,y):
Calculates the Jaccard distance between two binary vectors.
Args:
x, y (np.array): Array of binary integers x and y.
Returns:
J (int): The Jaccard distance between x and y.
return (x==y).sum()/float( np.maximum(x,y).sum() )
# Vamos criar uma RDD com valores categóricos
catPointsRDD = sc.parallelize(enumerate([['alto', 'caro', 'azul'],
['medio', 'caro', 'verde'],
['alto', 'barato', 'azul'],
['medio', 'caro', 'vermelho'],
['baixo', 'barato', 'verde'],
]))
# EXERCICIO
# Crie um RDD de chaves únicas utilizando flatMap
chavesRDD = (catPointsRDD
.<COMPLETAR>
.<COMPLETAR>
.<COMPLETAR>
)
chaves = dict((v,k) for k,v in enumerate(chavesRDD.collect()))
nchaves = len(chaves)
print chaves, nchaves
assert chaves=={'alto': 0, 'medio': 1, 'baixo': 2, 'barato': 3, 'azul': 4, 'verde': 5, 'caro': 6, 'vermelho': 7}, 'valores incorretos!'
print "OK"
assert nchaves==8, 'número de chaves incorreta'
print "OK"
def CreateNP(atributos,chaves):
Binarize the categorical vector using a dictionary of keys.
Args:
atributos (list): List of attributes of a given object.
chaves (dict): dictionary with the relation attribute -> index
Returns:
array (np.array): Binary array of attributes.
array = np.zeros(len(chaves))
for atr in atributos:
array[ chaves[atr] ] = 1
return array
# Converte o RDD para o formato binário, utilizando o dict chaves
binRDD = catPointsRDD.map(lambda rec: (rec[0],CreateNP(rec[1], chaves)))
binRDD.collect()
# EXERCICIO
# Procure dentre os comandos do PySpark, um que consiga fazer o produto cartesiano da base com ela mesma
cartBinRDD = binRDD.<COMPLETAR>
# Aplique um mapa para transformar nossa RDD em uma RDD de tuplas ((id1,id2), (vetor1,vetor2))
# DICA: primeiro utilize o comando take(1) e imprima o resultado para verificar o formato atual da RDD
cartBinParesRDD = cartBinRDD.<COMPLETAR>
# Aplique um mapa para calcular a Distância de Hamming e Jaccard entre os pares
hamRDD = cartBinParesRDD.<COMPLETAR>
jacRDD = cartBinParesRDD.<COMPLETAR>
# Encontre a distância máxima, mínima e média, aplicando um mapa que transforma (chave,valor) --> valor
# e utilizando os comandos internos do pyspark para o cálculo da min, max, mean
statHRDD = hamRDD.<COMPLETAR>
statJRDD = jacRDD.<COMPLETAR>
Hmin, Hmax, Hmean = statHRDD.<COMPLETAR>, statHRDD.<COMPLETAR>, statHRDD.<COMPLETAR>
Jmin, Jmax, Jmean = statJRDD.<COMPLETAR>, statJRDD.<COMPLETAR>, statJRDD.<COMPLETAR>
print "\t\tMin\tMax\tMean"
print "Hamming:\t{:.2f}\t{:.2f}\t{:.2f}".format(Hmin, Hmax, Hmean )
print "Jaccard:\t{:.2f}\t{:.2f}\t{:.2f}".format( Jmin, Jmax, Jmean )
assert (Hmin.round(2), Hmax.round(2), Hmean.round(2)) == (0.00,6.00,3.52), 'valores incorretos'
print "OK"
assert (Jmin.round(2), Jmax.round(2), Jmean.round(2)) == (0.33,2.67,1.14), 'valores incorretos'
print "OK"
Explanation: (5b) Valores Categóricos
Quando nossos objetos são representados por atributos categóricos, eles não possuem uma similaridade espacial. Para calcularmos a similaridade entre eles podemos primeiro transformar nosso vetor de atrbutos em um vetor binário indicando, para cada possível valor de cada atributo, se ele possui esse atributo ou não.
Com o vetor binário podemos utilizar a distância de Hamming definida por:
$$ H(x,y) = \sum_{i=1}^{n}{x_i != y_i} $$
Também é possível definir a distância de Jaccard como:
$$ J(x,y) = \frac{\sum_{i=1}^{n}{x_i == y_i} }{\sum_{i=1}^{n}{\max(x_i, y_i}) } $$
End of explanation |
1,273 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Categorical Embeddings
We will use the embeddings through the whole lab. They are simply represented by a matrix of tunable parameters (weights).
Let us assume that we are given a pre-trained embedding matrix for a vocabulary of size 10. Each embedding vector in that matrix has dimension 4. Those dimensions are too small to be realistic and are only used for demonstration purposes
Step1: To access the embedding for a given integer (ordinal) symbol $i$, you may either
Step2: compute a one-hot encoding vector $\mathbf{v}$ of $i$, then compute a dot product with the embedding matrix
Step3: The Embedding layer in Keras
In Keras, embeddings have an extra parameter, input_length which is typically used when having a sequence of symbols as input (think sequence of words). In our case, the length will always be 1.
py
Embedding(output_dim=embedding_size, input_dim=vocab_size,
input_length=sequence_length, name='my_embedding')
furthermore, we load the fixed weights from the previous matrix instead of using a random initialization
Step4: Let's use it as part of a Keras model
Step5: The output of an embedding layer is then a 3-d tensor of shape (batch_size, sequence_length, embedding_size).
Step6: None is a marker for dynamic dimensions.
The embedding weights can be retrieved as model parameters
Step7: The model.summary() method gives the list of trainable parameters per layer in the model
Step8: We can use the predict method of the Keras embedding model to project a single integer label into the matching embedding vector
Step9: Let's do the same for a batch of integers
Step10: The output of an embedding layer is then a 3-d tensor of shape (batch_size, sequence_length, embedding_size).
To remove the sequence dimension, useless in our case, we use the Flatten() layer
Step11: Question how many trainable parameters does model2 have? Check your answer with model2.summary().
Note that we re-used the same embedding_layer instance in both model and model2
Step12: Home assignment | Python Code:
import numpy as np
embedding_size = 4
vocab_size = 10
embedding_matrix = np.arange(embedding_size * vocab_size, dtype='float32')
embedding_matrix = embedding_matrix.reshape(vocab_size, embedding_size)
print(embedding_matrix)
Explanation: Categorical Embeddings
We will use the embeddings through the whole lab. They are simply represented by a matrix of tunable parameters (weights).
Let us assume that we are given a pre-trained embedding matrix for a vocabulary of size 10. Each embedding vector in that matrix has dimension 4. Those dimensions are too small to be realistic and are only used for demonstration purposes:
End of explanation
i = 3
print(embedding_matrix[i])
Explanation: To access the embedding for a given integer (ordinal) symbol $i$, you may either:
- simply index (slice) the embedding matrix by $i$, using numpy integer indexing:
End of explanation
def onehot_encode(dim, label):
return np.eye(dim)[label]
onehot_i = onehot_encode(vocab_size, i)
print(onehot_i)
embedding_vector = np.dot(onehot_i, embedding_matrix)
print(embedding_vector)
Explanation: compute a one-hot encoding vector $\mathbf{v}$ of $i$, then compute a dot product with the embedding matrix:
End of explanation
from tensorflow.keras.layers import Embedding
embedding_layer = Embedding(
output_dim=embedding_size, input_dim=vocab_size,
weights=[embedding_matrix],
input_length=1, name='my_embedding')
Explanation: The Embedding layer in Keras
In Keras, embeddings have an extra parameter, input_length which is typically used when having a sequence of symbols as input (think sequence of words). In our case, the length will always be 1.
py
Embedding(output_dim=embedding_size, input_dim=vocab_size,
input_length=sequence_length, name='my_embedding')
furthermore, we load the fixed weights from the previous matrix instead of using a random initialization:
py
Embedding(output_dim=embedding_size, input_dim=vocab_size,
weights=[embedding_matrix],
input_length=sequence_length, name='my_embedding')
End of explanation
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
x = Input(shape=[1], name='input')
embedding = embedding_layer(x)
model = Model(inputs=x, outputs=embedding)
Explanation: Let's use it as part of a Keras model:
End of explanation
model.output_shape
Explanation: The output of an embedding layer is then a 3-d tensor of shape (batch_size, sequence_length, embedding_size).
End of explanation
model.get_weights()
Explanation: None is a marker for dynamic dimensions.
The embedding weights can be retrieved as model parameters:
End of explanation
model.summary()
Explanation: The model.summary() method gives the list of trainable parameters per layer in the model:
End of explanation
labels_to_encode = np.array([[3]])
model.predict(labels_to_encode)
Explanation: We can use the predict method of the Keras embedding model to project a single integer label into the matching embedding vector:
End of explanation
labels_to_encode = np.array([[3], [3], [0], [9]])
model.predict(labels_to_encode)
Explanation: Let's do the same for a batch of integers:
End of explanation
from tensorflow.keras.layers import Flatten
x = Input(shape=[1], name='input')
y = Flatten()(embedding_layer(x))
model2 = Model(inputs=x, outputs=y)
model2.output_shape
model2.predict(np.array([3]))
Explanation: The output of an embedding layer is then a 3-d tensor of shape (batch_size, sequence_length, embedding_size).
To remove the sequence dimension, which is not needed in our case (the sequence length is always 1), we use the Flatten() layer.
End of explanation
model2.set_weights([np.ones(shape=(vocab_size, embedding_size))])
labels_to_encode = np.array([[3]])
model2.predict(labels_to_encode)
model.predict(labels_to_encode)
Explanation: Question: how many trainable parameters does model2 have? Check your answer with model2.summary().
Note that we re-used the same embedding_layer instance in both model and model2: therefore the two models share exactly the same weights in memory:
End of explanation
from tensorflow.keras.models import Sequential
# TODO
model3 = None
# print(model3.predict(labels_to_encode))
# %load solutions/embeddings_sequential_model.py
from tensorflow.keras.models import Sequential
model3 = Sequential([
embedding_layer,
Flatten(),
])
labels_to_encode = np.array([[3]])
print(model3.predict(labels_to_encode))
Explanation: Home assignment:
The previous model definitions used the functional API of Keras. Because the embedding and flatten layers are just stacked one after the other, it is possible to use the Sequential model API instead.
Define a third model named model3 using the Sequential API that also reuses the same embedding layer, so that it shares parameters with model and model2.
End of explanation |
1,274 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interactive WebGL trajectory widget
Note
Step1: To enable these features, we first need to run enable_notebook to initialize
the required javascript.
Step2: The WebGL viewer engine is called iview, and is introduced in the following paper
Step3: We can even animate through the trajectory simply by updating the widget's frame attribute | Python Code:
from __future__ import print_function
import mdtraj as md
traj = md.load_pdb('http://www.rcsb.org/pdb/files/2M6K.pdb')
print(traj)
Explanation: Interactive WebGL trajectory widget
Note: this feature requires a 'running' notebook, connected to a live kernel. It will not work with a statically rendered display. For an introduction to the IPython interactive widget system and its capabilities, see this talk by Brian Granger:
http://player.vimeo.com/video/79832657#t=30m
Let's start by just loading up a PDB file from the RCSB
End of explanation
from mdtraj.html import TrajectoryView, enable_notebook
enable_notebook()
Explanation: To enable these features, we first need to run enable_notebook to initialize
the required javascript.
End of explanation
# Controls:
# - default mouse to rotate.
# - ctrl to translate
# - shift to zoom (or use wheel)
# - shift+ctrl to change the fog
# - double click to toggle full screen
widget = TrajectoryView(traj, secondaryStructure='ribbon')
widget
Explanation: The WebGL viewer engine is called iview, and is introduced in the following paper: Li, Hongjian, et al. "iview: an interactive WebGL visualizer for protein-ligand complex." BMC Bioinformatics 15.1 (2014): 56.
End of explanation
import time
for i in range(traj.n_frames):
widget.frame = i
time.sleep(0.1)
Explanation: We can even animate through the trajectory simply by updating the widget's frame attribute
End of explanation |
1,275 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Implementing logistic regression from scratch
The goal of this notebook is to implement your own logistic regression classifier. You will
Step1: Load review dataset
For this assignment, we will use a subset of the Amazon product review dataset. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted primarily of positive reviews.
Step2: One column of this dataset is 'sentiment', corresponding to the class label with +1 indicating a review with positive sentiment and -1 indicating one with negative sentiment.
Step3: Let us quickly explore more of this dataset. The 'name' column indicates the name of the product. Here we list the first 10 products in the dataset. We then count the number of positive and negative reviews.
Step4: Note
Step5: Now, we will perform 2 simple data transformations
Step6: Now we proceed with Step 2. For each word in important_words, we compute a count for the number of times the word occurs in the review. We will store this count in a separate column (one for each word). The result of this feature processing is a single column for each word in important_words which keeps a count of the number of times the respective word occurs in the review text.
Note
Step7: The SFrame products now contains one column for each of the 193 important_words. As an example, the column perfect contains a count of the number of times the word perfect occurs in each of the reviews.
Step8: Now, write some code to compute the number of product reviews that contain the word perfect.
Hint
Step9: Quiz Question. How many reviews contain the word perfect?
Convert SFrame to NumPy array
As you have seen previously, NumPy is a powerful library for doing matrix manipulation. Let us convert our data to matrices and then implement our algorithms with matrices.
First, make sure you can perform the following import.
Step10: We now provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned
Step11: Let us convert the data into NumPy arrays.
Step12: Are you running this notebook on an Amazon EC2 t2.micro instance? (If you are using your own machine, please skip this section)
It has been reported that t2.micro instances do not provide sufficient power to complete the conversion in acceptable amount of time. For interest of time, please refrain from running get_numpy_data function. Instead, download the binary file containing the four NumPy arrays you'll need for the assignment. To load the arrays, run the following commands
Step13: Quiz Question
Step14: Estimating conditional probability with link function
Recall from lecture that the link function is given by
Step15: Aside. How the link function works with matrix algebra
Since the word counts are stored as columns in feature_matrix, each $i$-th row of the matrix corresponds to the feature vector $h(\mathbf{x}_i)$
Step16: Compute derivative of log likelihood with respect to a single coefficient
Recall from lecture
Step17: In the main lecture, our focus was on the likelihood. In the advanced optional video, however, we introduced a transformation of this likelihood---called the log likelihood---that simplifies the derivation of the gradient and is more numerically stable. Due to its numerical stability, we will use the log likelihood instead of the likelihood to assess the algorithm.
The log likelihood is computed using the following formula (see the advanced optional video if you are curious about the derivation of this equation)
Step18: Checkpoint
Just to make sure we are on the same page, run the following code block and check that the outputs match.
Step19: Taking gradient steps
Now we are ready to implement our own logistic regression. All we have to do is to write a gradient ascent function that takes gradient steps towards the optimum.
Complete the following function to solve the logistic regression model using gradient ascent
Step20: Now, let us run the logistic regression solver.
Step21: Quiz Question
Step22: Now, complete the following code block for Step 2 to compute the class predictions using the scores obtained above
Step23: Quiz Question
Step24: Measuring accuracy
We will now measure the classification accuracy of the model. Recall from the lecture that the classification accuracy can be computed as follows
Step25: Quiz Question
Step26: Now, word_coefficient_tuples contains a sorted list of (word, coefficient_value) tuples. The first 10 elements in this list correspond to the words that are most positive.
Ten "most positive" words
Now, we compute the 10 words that have the most positive coefficient values. These words are associated with positive sentiment.
Step27: Quiz Question
Step28: Quiz Question | Python Code:
import graphlab
Explanation: Implementing logistic regression from scratch
The goal of this notebook is to implement your own logistic regression classifier. You will:
Extract features from Amazon product reviews.
Convert an SFrame into a NumPy array.
Implement the link function for logistic regression.
Write a function to compute the derivative of the log likelihood function with respect to a single coefficient.
Implement gradient ascent.
Given a set of coefficients, predict sentiments.
Compute classification accuracy for the logistic regression model.
Let's get started!
Fire up GraphLab Create
Make sure you have the latest version of GraphLab Create. Upgrade by
pip install graphlab-create --upgrade
See this page for detailed instructions on upgrading.
End of explanation
products = graphlab.SFrame('amazon_baby_subset.gl/')
Explanation: Load review dataset
For this assignment, we will use a subset of the Amazon product review dataset. The subset was chosen to contain similar numbers of positive and negative reviews, as the original dataset consisted primarily of positive reviews.
End of explanation
products['sentiment']
Explanation: One column of this dataset is 'sentiment', corresponding to the class label with +1 indicating a review with positive sentiment and -1 indicating one with negative sentiment.
End of explanation
products.head(10)['name']
print '# of positive reviews =', len(products[products['sentiment']==1])
print '# of negative reviews =', len(products[products['sentiment']==-1])
Explanation: Let us quickly explore more of this dataset. The 'name' column indicates the name of the product. Here we list the first 10 products in the dataset. We then count the number of positive and negative reviews.
End of explanation
import json
with open('important_words.json', 'r') as f: # Reads the list of most frequent words
important_words = json.load(f)
important_words = [str(s) for s in important_words]
print important_words
Explanation: Note: For this assignment, we eliminated class imbalance by choosing
a subset of the data with a similar number of positive and negative reviews.
Apply text cleaning on the review data
In this section, we will perform some simple feature cleaning using SFrames. The last assignment used all words in building bag-of-words features, but here we limit ourselves to 193 words (for simplicity). We compiled a list of 193 most frequent words into a JSON file.
Now, we will load these words from this JSON file:
End of explanation
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
products['review_clean'] = products['review'].apply(remove_punctuation)
Explanation: Now, we will perform 2 simple data transformations:
Remove punctuation using Python's built-in string functionality.
Compute word counts (only for important_words)
We start with Step 1 which can be done as follows:
End of explanation
for word in important_words:
products[word] = products['review_clean'].apply(lambda s : s.split().count(word))
Explanation: Now we proceed with Step 2. For each word in important_words, we compute a count for the number of times the word occurs in the review. We will store this count in a separate column (one for each word). The result of this feature processing is a single column for each word in important_words which keeps a count of the number of times the respective word occurs in the review text.
Note: There are several ways of doing this. In this assignment, we use the built-in count function for Python lists. Each review string is first split into individual words and the number of occurrences of a given word is counted.
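For intuition, here is a tiny standalone sketch (the sentence is made up, not taken from the dataset) of what the split/count combination does for a single string:
toy_review = 'perfect fit and perfect color'
print(toy_review.split().count('perfect'))  # 2
print(toy_review.split().count('size'))     # 0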
End of explanation
products['perfect']
Explanation: The SFrame products now contains one column for each of the 193 important_words. As an example, the column perfect contains a count of the number of times the word perfect occurs in each of the reviews.
End of explanation
def contains_perfect(count):
return 1 if count >= 1 else 0
products['contains_perfect'] = products['perfect'].apply(contains_perfect)
products['contains_perfect'].sum()
Explanation: Now, write some code to compute the number of product reviews that contain the word perfect.
Hint:
* First create a column called contains_perfect which is set to 1 if the count of the word perfect (stored in column perfect) is >= 1.
* Sum the number of 1s in the column contains_perfect.
End of explanation
import numpy as np
Explanation: Quiz Question. How many reviews contain the word perfect?
Convert SFrame to NumPy array
As you have seen previously, NumPy is a powerful library for doing matrix manipulation. Let us convert our data to matrices and then implement our algorithms with matrices.
First, make sure you can perform the following import.
End of explanation
def get_numpy_data(data_sframe, features, label):
data_sframe['intercept'] = 1
features = ['intercept'] + features
features_sframe = data_sframe[features]
feature_matrix = features_sframe.to_numpy()
label_sarray = data_sframe[label]
label_array = label_sarray.to_numpy()
return(feature_matrix, label_array)
Explanation: We now provide you with a function that extracts columns from an SFrame and converts them into a NumPy array. Two arrays are returned: one representing features and another representing class labels. Note that the feature matrix includes an additional column 'intercept' to take account of the intercept term.
End of explanation
# Warning: This may take a few minutes...
feature_matrix, sentiment = get_numpy_data(products, important_words, 'sentiment')
Explanation: Let us convert the data into NumPy arrays.
End of explanation
feature_matrix.shape
Explanation: Are you running this notebook on an Amazon EC2 t2.micro instance? (If you are using your own machine, please skip this section)
It has been reported that t2.micro instances do not provide sufficient power to complete the conversion in acceptable amount of time. For interest of time, please refrain from running get_numpy_data function. Instead, download the binary file containing the four NumPy arrays you'll need for the assignment. To load the arrays, run the following commands:
arrays = np.load('module-3-assignment-numpy-arrays.npz')
feature_matrix, sentiment = arrays['feature_matrix'], arrays['sentiment']
End of explanation
sentiment
Explanation: Quiz Question: How many features are there in the feature_matrix?
Quiz Question: Assuming that the intercept is present, how does the number of features in feature_matrix relate to the number of features in the logistic regression model?
Now, let us see what the sentiment column looks like:
End of explanation
def prediction(score):
return (1 / (1 + np.exp(-score)))
'''
produces probablistic estimate for P(y_i = +1 | x_i, w).
estimate ranges between 0 and 1.
'''
def predict_probability(feature_matrix, coefficients):
# Take dot product of feature_matrix and coefficients
scores = np.dot(feature_matrix, coefficients)
# Compute P(y_i = +1 | x_i, w) using the link function
predictions = np.apply_along_axis(prediction, 0, scores)
# return predictions
return predictions
Explanation: Estimating conditional probability with link function
Recall from lecture that the link function is given by:
$$
P(y_i = +1 | \mathbf{x}_i,\mathbf{w}) = \frac{1}{1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))},
$$
where the feature vector $h(\mathbf{x}_i)$ represents the word counts of important_words in the review $\mathbf{x}_i$. Complete the following function that implements the link function:
End of explanation
dummy_feature_matrix = np.array([[1.,2.,3.], [1.,-1.,-1]])
dummy_coefficients = np.array([1., 3., -1.])
correct_scores = np.array( [ 1.*1. + 2.*3. + 3.*(-1.), 1.*1. + (-1.)*3. + (-1.)*(-1.) ] )
correct_predictions = np.array( [ 1./(1+np.exp(-correct_scores[0])), 1./(1+np.exp(-correct_scores[1])) ] )
print 'The following outputs must match '
print '------------------------------------------------'
print 'correct_predictions =', correct_predictions
print 'output of predict_probability =', predict_probability(dummy_feature_matrix, dummy_coefficients)
Explanation: Aside. How the link function works with matrix algebra
Since the word counts are stored as columns in feature_matrix, each $i$-th row of the matrix corresponds to the feature vector $h(\mathbf{x}_i)$:
$$
[\text{feature_matrix}] =
\left[
\begin{array}{c}
h(\mathbf{x}_1)^T \\
h(\mathbf{x}_2)^T \\
\vdots \\
h(\mathbf{x}_N)^T
\end{array}
\right] =
\left[
\begin{array}{cccc}
h_0(\mathbf{x}_1) & h_1(\mathbf{x}_1) & \cdots & h_D(\mathbf{x}_1) \\
h_0(\mathbf{x}_2) & h_1(\mathbf{x}_2) & \cdots & h_D(\mathbf{x}_2) \\
\vdots & \vdots & \ddots & \vdots \\
h_0(\mathbf{x}_N) & h_1(\mathbf{x}_N) & \cdots & h_D(\mathbf{x}_N)
\end{array}
\right]
$$
By the rules of matrix multiplication, the score vector containing elements $\mathbf{w}^T h(\mathbf{x}_i)$ is obtained by multiplying feature_matrix and the coefficient vector $\mathbf{w}$.
$$
[\text{score}] =
[\text{feature_matrix}]\mathbf{w} =
\left[
\begin{array}{c}
h(\mathbf{x}_1)^T \\
h(\mathbf{x}_2)^T \\
\vdots \\
h(\mathbf{x}_N)^T
\end{array}
\right]
\mathbf{w}
= \left[
\begin{array}{c}
h(\mathbf{x}_1)^T\mathbf{w} \\
h(\mathbf{x}_2)^T\mathbf{w} \\
\vdots \\
h(\mathbf{x}_N)^T\mathbf{w}
\end{array}
\right]
= \left[
\begin{array}{c}
\mathbf{w}^T h(\mathbf{x}_1) \\
\mathbf{w}^T h(\mathbf{x}_2) \\
\vdots \\
\mathbf{w}^T h(\mathbf{x}_N)
\end{array}
\right]
$$
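The equivalence above is easy to verify numerically. The following optional sketch (with made-up numbers, not the review data) checks that a single matrix-vector product reproduces the row-wise dot products:
toy_features = np.array([[1., 2., 3.], [1., -1., -1.]])
toy_coefficients = np.array([1., 3., -1.])
row_wise = np.array([np.dot(row, toy_coefficients) for row in toy_features])
print(np.allclose(np.dot(toy_features, toy_coefficients), row_wise))  # True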
Checkpoint
Just to make sure you are on the right track, we have provided a few examples. If your predict_probability function is implemented correctly, then the outputs will match:
End of explanation
def feature_derivative(errors, feature):
# Compute the dot product of errors and feature
derivative = np.dot(feature, errors)
# Return the derivative
return derivative
Explanation: Compute derivative of log likelihood with respect to a single coefficient
Recall from lecture:
$$
\frac{\partial\ell}{\partial w_j} = \sum_{i=1}^N h_j(\mathbf{x}_i)\left(\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})\right)
$$
We will now write a function that computes the derivative of log likelihood with respect to a single coefficient $w_j$. The function accepts two arguments:
* errors vector containing $\mathbf{1}[y_i = +1] - P(y_i = +1 | \mathbf{x}_i, \mathbf{w})$ for all $i$.
* feature vector containing $h_j(\mathbf{x}_i)$ for all $i$.
Complete the following code block:
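As a quick numeric sanity check of the dot-product form (using made-up numbers, not the review data), the derivative is simply the sum of feature values weighted by the errors:
toy_errors = np.array([0.5, -0.25, 0.1])
toy_feature = np.array([1., 2., 0.])
print(feature_derivative(toy_errors, toy_feature))  # 0.5*1 + (-0.25)*2 + 0.1*0 = 0.0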
End of explanation
def compute_log_likelihood(feature_matrix, sentiment, coefficients):
indicator = (sentiment==+1)
scores = np.dot(feature_matrix, coefficients)
logexp = np.log(1. + np.exp(-scores))
# Simple check to prevent overflow
mask = np.isinf(logexp)
logexp[mask] = -scores[mask]
lp = np.sum((indicator-1)*scores - logexp)
return lp
Explanation: In the main lecture, our focus was on the likelihood. In the advanced optional video, however, we introduced a transformation of this likelihood---called the log likelihood---that simplifies the derivation of the gradient and is more numerically stable. Due to its numerical stability, we will use the log likelihood instead of the likelihood to assess the algorithm.
The log likelihood is computed using the following formula (see the advanced optional video if you are curious about the derivation of this equation):
$$\ell\ell(\mathbf{w}) = \sum_{i=1}^N \Big( (\mathbf{1}[y_i = +1] - 1)\mathbf{w}^T h(\mathbf{x}_i) - \ln\left(1 + \exp(-\mathbf{w}^T h(\mathbf{x}_i))\right) \Big) $$
We provide a function to compute the log likelihood for the entire dataset.
End of explanation
dummy_feature_matrix = np.array([[1.,2.,3.], [1.,-1.,-1]])
dummy_coefficients = np.array([1., 3., -1.])
dummy_sentiment = np.array([-1, 1])
correct_indicators = np.array( [ -1==+1, 1==+1 ] )
correct_scores = np.array( [ 1.*1. + 2.*3. + 3.*(-1.), 1.*1. + (-1.)*3. + (-1.)*(-1.) ] )
correct_first_term = np.array( [ (correct_indicators[0]-1)*correct_scores[0], (correct_indicators[1]-1)*correct_scores[1] ] )
correct_second_term = np.array( [ np.log(1. + np.exp(-correct_scores[0])), np.log(1. + np.exp(-correct_scores[1])) ] )
correct_ll = sum( [ correct_first_term[0]-correct_second_term[0], correct_first_term[1]-correct_second_term[1] ] )
print correct_scores
print correct_indicators
print correct_first_term
print correct_second_term
print 'The following outputs must match '
print '------------------------------------------------'
print 'correct_log_likelihood =', correct_ll
print 'output of compute_log_likelihood =', compute_log_likelihood(dummy_feature_matrix, dummy_sentiment, dummy_coefficients)
Explanation: Checkpoint
Just to make sure we are on the same page, run the following code block and check that the outputs match.
End of explanation
from math import sqrt
def logistic_regression(feature_matrix, sentiment, initial_coefficients, step_size, max_iter):
coefficients = np.array(initial_coefficients) # make sure it's a numpy array
for itr in xrange(max_iter):
# Predict P(y_i = +1|x_i,w) using your predict_probability() function
# YOUR CODE HERE
predictions = predict_probability(feature_matrix, coefficients)
# Compute indicator value for (y_i = +1)
indicator = (sentiment==+1)
# Compute the errors as indicator - predictions
errors = indicator - predictions
for j in xrange(len(coefficients)): # loop over each coefficient
# Recall that feature_matrix[:,j] is the feature column associated with coefficients[j].
# Compute the derivative for coefficients[j]. Save it in a variable called derivative
# YOUR CODE HERE
derivative = feature_derivative(errors, feature_matrix[:, j])
# add the step size times the derivative to the current coefficient
coefficients[j] += (step_size * derivative)
# Checking whether log likelihood is increasing
if itr <= 15 or (itr <= 100 and itr % 10 == 0) or (itr <= 1000 and itr % 100 == 0) \
or (itr <= 10000 and itr % 1000 == 0) or itr % 10000 == 0:
lp = compute_log_likelihood(feature_matrix, sentiment, coefficients)
print 'iteration %*d: log likelihood of observed labels = %.8f' % \
(int(np.ceil(np.log10(max_iter))), itr, lp)
return coefficients
Explanation: Taking gradient steps
Now we are ready to implement our own logistic regression. All we have to do is to write a gradient ascent function that takes gradient steps towards the optimum.
Complete the following function to solve the logistic regression model using gradient ascent:
End of explanation
coefficients = logistic_regression(feature_matrix, sentiment, initial_coefficients=np.zeros(194),
step_size=1e-7, max_iter=301)
Explanation: Now, let us run the logistic regression solver.
End of explanation
# Compute the scores as a dot product between feature_matrix and coefficients.
scores = np.dot(feature_matrix, coefficients)
Explanation: Quiz Question: As each iteration of gradient ascent passes, does the log likelihood increase or decrease?
Predicting sentiments
Recall from lecture that class predictions for a data point $\mathbf{x}$ can be computed from the coefficients $\mathbf{w}$ using the following formula:
$$
\hat{y}_i =
\left\{
\begin{array}{ll}
+1 & \mathbf{x}_i^T\mathbf{w} > 0 \\
-1 & \mathbf{x}_i^T\mathbf{w} \leq 0
\end{array}
\right.
$$
Now, we will write some code to compute class predictions. We will do this in two steps:
* Step 1: First compute the scores using feature_matrix and coefficients using a dot product.
* Step 2: Using the formula above, compute the class predictions from the scores.
Step 1 can be implemented as follows:
End of explanation
def get_prediction(score):
if score > 0:
return 1
else:
return -1
predictions = np.zeros(shape=(scores.shape[0],))
idx = 0
for score in scores:
predictions[idx] = int(get_prediction(score))
idx += 1
Explanation: Now, complete the following code block for Step 2 to compute the class predictions using the scores obtained above:
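As an optional aside, the same class assignment can be computed without an explicit loop; a possible vectorized sketch (equivalent to the loop above) is:
predictions_vectorized = np.where(scores > 0, 1, -1)
print(np.all(predictions_vectorized == predictions))  # should print True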
End of explanation
pos = (predictions == 1).sum()
neg = (predictions == -1).sum()
print ("number of positive predicted reviews: {}".format(pos))
print ("number of negative predicted reviews: {}".format(neg))
Explanation: Quiz Question: How many reviews were predicted to have positive sentiment?
End of explanation
sentiment = products['sentiment'].to_numpy()
# Count the number of predictions that disagree with the true labels
num_mistakes = np.count_nonzero(sentiment - predictions)
correct = len(sentiment) - num_mistakes
accuracy = float(correct) / len(sentiment)
print "-----------------------------------------------------"
print '# Reviews correctly classified =', correct
print '# Reviews incorrectly classified =', num_mistakes
print '# Reviews total =', len(products)
print "-----------------------------------------------------"
print 'Accuracy = %.2f' % accuracy
Explanation: Measuring accuracy
We will now measure the classification accuracy of the model. Recall from the lecture that the classification accuracy can be computed as follows:
$$
\mbox{accuracy} = \frac{\mbox{# correctly classified data points}}{\mbox{# total data points}}
$$
Complete the following code block to compute the accuracy of the model.
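Equivalently, the accuracy can be obtained in one line with NumPy; this small optional sketch should agree with the value computed above:
print('Accuracy (vectorized) = %.2f' % np.mean(predictions == sentiment))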
End of explanation
coefficients = list(coefficients[1:]) # exclude intercept
word_coefficient_tuples = [(word, coefficient) for word, coefficient in zip(important_words, coefficients)]
word_coefficient_tuples = sorted(word_coefficient_tuples, key=lambda x:x[1], reverse=True)
Explanation: Quiz Question: What is the accuracy of the model on predictions made above? (round to 2 digits of accuracy)
Which words contribute most to positive & negative sentiments?
Recall that in Module 2 assignment, we were able to compute the "most positive words". These are words that correspond most strongly with positive reviews. In order to do this, we will first do the following:
* Treat each coefficient as a tuple, i.e. (word, coefficient_value).
* Sort all the (word, coefficient_value) tuples by coefficient_value in descending order.
End of explanation
word_coefficient_tuples[:10]
Explanation: Now, word_coefficient_tuples contains a sorted list of (word, coefficient_value) tuples. The first 10 elements in this list correspond to the words that are most positive.
Ten "most positive" words
Now, we compute the 10 words that have the most positive coefficient values. These words are associated with positive sentiment.
End of explanation
word_coefficient_tuples_descending = sorted(word_coefficient_tuples, key = lambda x: x[1], reverse=False)
Explanation: Quiz Question: Which word is not present in the top 10 "most positive" words?
Ten "most negative" words
Next, we repeat this exercise on the 10 most negative words. That is, we compute the 10 words that have the most negative coefficient values. These words are associated with negative sentiment.
End of explanation
word_coefficient_tuples_descending[0:10]
Explanation: Quiz Question: Which word is not present in the top 10 "most negative" words?
End of explanation |
1,276 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a id="ndvi_std_top"></a>
NDVI STD
Deviations from an established average z-score.
<hr>
Notebook Summary
A baseline for each month is determined by measuring NDVI over a set time
The data cube is used to visualize NDVI anomalies over time.
Anomalous times are further explored and visualization solutions are proposed.
<hr>
Index
Import Dependencies and Connect to the Data Cube
Choose Platform and Product
Get the Extents of the Cube
Define the Extents of the Analysis
Load Data from the Data Cube
Create and Use a Clean Mask
Calculate the NDVI
Convert the Xarray to a Dataframe
Define a Function to Visualize Values Over the Region
Visualize the Baseline Average NDVI by Month
Visualize the Baseline Distributions Binned by Month
Visualize the Baseline Kernel Distributions Binned by Month
Plot Z-Scores by Month and Year
Further Examine Times Of Interest
<hr>
How It Works
To detect changes in plant life, we use a measure called NDVI.
* <font color=green>NDVI</font> is the ratio of the difference between amount of near infrared light <font color=red>(NIR)</font> and red light <font color=red>(RED)</font> divided by their sum.
<br>
$$ NDVI = \frac{(NIR - RED)}{(NIR + RED)}$$
<br>
<div class="alert-info">
The idea is to observe how much red light is being absorbed versus reflected. Photosynthetic plants absorb most of the visible spectrum's wavelengths when they are healthy. When they aren't healthy, more of that light will get reflected. This makes the difference between <font color=red>NIR</font> and <font color=red>RED</font> much smaller which will lower the <font color=green>NDVI</font>. The resulting values from doing this over several pixels can be used to create visualizations for the changes in the amount of photosynthetic vegetation in large areas.
</div>
<span id="ndvi_std_import">Import Dependencies and Connect to the Data Cube ▴ </span>
Step1: <span id="ndvi_std_plat_prod">Choose Platform and Product ▴</span>
Step2: <span id="ndvi_std_extents">Get the Extents of the Cube ▴</span>
Step3: <span id="ndvi_std_define_extents">Define the Extents of the Analysis ▴</span>
Step4: <span id="ndvi_std_load_data">Load Data from the Data Cube ▴</span>
Step5: <span id="ndvi_std_clean_mask">Create and Use a Clean Mask ▴</span>
Step6: <span id="ndvi_std_calculate">Calculate the NDVI ▴</span>
Step7: <span id="ndvi_std_pandas">Convert the Xarray to a Dataframe ▴</span>
Step8: <span id="ndvi_std_visualization_function">Define a Function to Visualize Values Over the Region ▴</span>
Step9: Lets examine the average <font color=green>NDVI</font> across all months and years to get a look at the region
Step10: This gives us an idea of the healthier areas of the region before we start looking at specific months and years.
<span id="ndvi_std_baseline_mean_ndvi">Visualize the Baseline Average NDVI by Month ▴</span>
Step11: <span id="ndvi_std_boxplot_analysis">Visualize the Baseline Distributions Binned by Month ▴</span>
Step12: The plot above shows the distributions for each individual month over the baseline period.
<br>
- The <b><font color=red>red</font></b> line is the mean line which connects the <b><em>mean values</em></b> for each month.
<br>
- The dotted <b><font color=blue>blue</font></b> lines are exactly <b><em>one standard deviation away</em></b> from the mean and show where the NDVI values fall within 68% of the time, according to the Empirical Rule.
<br>
- The <b><font color=green>green</font></b> dotted lines are <b><em>two standard deviations away</em></b> from the mean and show where an estimated 95% of the NDVI values are contained for that month.
<br>
<div class="alert-info"><font color=black> <em><b>NOTE
Step13: <hr>
<span id="ndvi_std_pixelplot_analysis">Plot Z-Scores by Month and Year ▴</span>
Pixel Plot Visualization
Step14: Each block in the visualization above is representative of the deviation from the average for the region selected in a specific month and year. The omitted blocks are times when there was no satellite imagery available. Their values must either be inferred, ignored, or interpolated.
You may notice long vertical strips of red. These are strong indications of drought since they deviate from the baseline consistently over a long period of time.
<span id="ndvi_std_heatmap_analysis">Further Examine Times Of Interest ▴</span>
Use the function we created to examine times of interest
Step15: Note | Python Code:
import sys
import os
sys.path.append(os.environ.get('NOTEBOOK_ROOT'))
import time
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib as mpl
from matplotlib.ticker import FuncFormatter
import seaborn as sns
from utils.data_cube_utilities.dc_load import get_product_extents
from utils.data_cube_utilities.dc_display_map import display_map
from utils.data_cube_utilities.clean_mask import landsat_clean_mask_full
from datacube.utils.aws import configure_s3_access
configure_s3_access(requester_pays=True)
import datacube
from utils.data_cube_utilities.data_access_api import DataAccessApi
api = DataAccessApi()
dc = api.dc
Explanation: <a id="ndvi_std_top"></a>
NDVI STD
Deviations from an established average z-score.
<hr>
Notebook Summary
A baseline for each month is determined by measuring NDVI over a set time
The data cube is used to visualize NDVI anomalies over time.
Anomalous times are further explored and visualization solutions are proposed.
<hr>
Index
Import Dependencies and Connect to the Data Cube
Choose Platform and Product
Get the Extents of the Cube
Define the Extents of the Analysis
Load Data from the Data Cube
Create and Use a Clean Mask
Calculate the NDVI
Convert the Xarray to a Dataframe
Define a Function to Visualize Values Over the Region
Visualize the Baseline Average NDVI by Month
Visualize the Baseline Distributions Binned by Month
Visualize the Baseline Kernel Distributions Binned by Month
Plot Z-Scores by Month and Year
Further Examine Times Of Interest
<hr>
How It Works
To detect changes in plant life, we use a measure called NDVI.
* <font color=green>NDVI</font> is the ratio of the difference between amount of near infrared light <font color=red>(NIR)</font> and red light <font color=red>(RED)</font> divided by their sum.
<br>
$$ NDVI = \frac{(NIR - RED)}{(NIR + RED)}$$
<br>
<div class="alert-info">
The idea is to observe how much red light is being absorbed versus reflected. Photosynthetic plants absorb most of the visible spectrum's wavelengths when they are healthy. When they aren't healthy, more of that light will get reflected. This makes the difference between <font color=red>NIR</font> and <font color=red>RED</font> much smaller which will lower the <font color=green>NDVI</font>. The resulting values from doing this over several pixels can be used to create visualizations for the changes in the amount of photosynthetic vegetation in large areas.
</div>
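To make the formula concrete, here is a tiny standalone sketch with made-up surface reflectance values (not taken from the Data Cube); healthy vegetation reflects much more NIR than red, which pushes NDVI towards 1:
import numpy as np
nir = np.array([0.45, 0.30, 0.20])  # hypothetical NIR reflectance
red = np.array([0.05, 0.15, 0.18])  # hypothetical red reflectance
print((nir - red) / (nir + red))    # roughly [0.80, 0.33, 0.05]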
<span id="ndvi_std_import">Import Dependencies and Connect to the Data Cube ▴ </span>
End of explanation
# Change the data platform and data cube here
product = 'ls7_usgs_sr_scene'
platform = 'LANDSAT_7'
collection = 'c1'
level = 'l2'
# product = 'ls8_usgs_sr_scene'
# platform = 'LANDSAT_8'
# collection = 'c1'
# level = 'l2'
Explanation: <span id="ndvi_std_plat_prod">Choose Platform and Product ▴</span>
End of explanation
full_lat, full_lon, min_max_dates = get_product_extents(api, platform, product)
print("{}:".format(platform))
print("Lat bounds:", full_lat)
print("Lon bounds:", full_lon)
print("Time bounds:", min_max_dates)
Explanation: <span id="ndvi_std_extents">Get the Extents of the Cube ▴</span>
End of explanation
display_map(full_lat, full_lon)
params = {'latitude': (0.55, 0.6),
'longitude': (35.55, 35.5),
'time': ('2008-01-01', '2010-12-31')}
display_map(params["latitude"], params["longitude"])
Explanation: <span id="ndvi_std_define_extents">Define the Extents of the Analysis ▴</span>
End of explanation
dataset = dc.load(**params,
platform = platform,
product = product,
measurements = ['red', 'green', 'blue', 'swir1', 'swir2', 'nir', 'pixel_qa'],
dask_chunks={'time':1, 'latitude':1000, 'longitude':1000}).persist()
Explanation: <span id="ndvi_std_load_data">Load Data from the Data Cube ▴</span>
End of explanation
# Make a clean mask to remove clouds and scanlines.
clean_mask = landsat_clean_mask_full(dc, dataset, product=product, platform=platform,
collection=collection, level=level)
# Filter the scenes with that clean mask
dataset = dataset.where(clean_mask)
Explanation: <span id="ndvi_std_clean_mask">Create and Use a Clean Mask ▴</span>
End of explanation
#Calculate NDVI
ndvi = (dataset.nir - dataset.red)/(dataset.nir + dataset.red)
Explanation: <span id="ndvi_std_calculate">Calculate the NDVI ▴</span>
End of explanation
#Cast to pandas dataframe
df = ndvi.to_dataframe("NDVI")
#flatten the dimensions since it is a compound hierarchical dataframe
df = df.stack().reset_index()
#Drop the junk column that was generated for NDVI
df = df.drop(["level_3"], axis=1)
#Preview first 5 rows to make sure everything looks as it should
df.head()
#Rename the NDVI column to the appropriate name
df = df.rename(index=str, columns={0: "ndvi"})
#clamp NDVI between 0 and 1
df.ndvi = df.ndvi.clip(lower=0)
#Add columns for Month and Year for convenience
df["Month"] = df.time.dt.month
df["Year"] = df.time.dt.year
#Preview changes
df.head()
Explanation: <span id="ndvi_std_pandas">Convert the Xarray to a Dataframe ▴</span>
End of explanation
#Create a function for formatting our axes
def format_axis(axis, digits = None, suffix = ""):
#Get Labels
labels = axis.get_majorticklabels()
#Exit if empty
if len(labels) == 0: return
#Create formatting function
format_func = lambda x, pos: "{0}{1}".format(labels[pos]._text[:digits],suffix)
#Use formatting function
axis.set_major_formatter(FuncFormatter(format_func))
#Create a function for examining the z-score and NDVI of the region graphically
def examine(month = list(df["time"].dt.month.unique()), year = list(df["time"].dt.year.unique()), value_name = "z_score"):
#This allows the user to pass single floats as values as well
if type(month) is not list: month = [month]
if type(year) is not list: year = [year]
#pivoting the table to the appropriate layout
piv = pd.pivot_table(df[df["time"].dt.year.isin(year) & df["time"].dt.month.isin(month)],
values=value_name,index=["latitude"], columns=["longitude"])
#Sizing
plt.rcParams["figure.figsize"] = [11,11]
#Plot pivot table as heatmap using seaborn
val_range = (-1.96,1.96) if value_name == "z_score" else (df[value_name].unique().min(),df[value_name].unique().max())
ax = sns.heatmap(piv, square=False, cmap="RdYlGn",vmin=val_range[0],vmax=val_range[1], center=0)
#Formatting
format_axis(ax.yaxis, 6)
format_axis(ax.xaxis, 7)
plt.setp(ax.xaxis.get_majorticklabels(), rotation=90 )
plt.gca().invert_yaxis()
Explanation: <span id="ndvi_std_visualization_function">Define a Function to Visualize Values Over the Region ▴</span>
End of explanation
#It defaults to binning the entire range of months and years so we can just leave those parameters out
examine(value_name="ndvi")
Explanation: Let's examine the average <font color=green>NDVI</font> across all months and years to get a look at the region
End of explanation
#Make labels for convenience
labels = ["Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
#Initialize an empty pandas Series
df["z_score"] = pd.Series()
#declare list for population
binned_data = list()
#Calculate monthly binned z-scores from the composited monthly NDVI mean and store them
for i in range(12):
#grab z_score and NDVI for the appropriate month
temp = df[["z_score", "ndvi"]][df["Month"] == i+1]
#populate z_score
df.loc[df["Month"] == i+1,"z_score"] = (temp["ndvi"] - temp["ndvi"].mean())/temp["ndvi"].std(ddof=0)
#print the month next to its mean NDVI and standard deviation
binned_data.append((labels[i], temp["ndvi"].mean(), temp["ndvi"].std()))
#Create dataframe for binned values
binned_data = pd.DataFrame.from_records(binned_data, columns=["Month","Mean", "Std_Dev"])
#print description for clarification
print("Monthly Average NDVI over Baseline Period")
#display binned data
binned_data
Explanation: This gives us an idea of the healthier areas of the region before we start looking at specific months and years.
<span id="ndvi_std_baseline_mean_ndvi">Visualize the Baseline Average NDVI by Month ▴</span>
End of explanation
#Set figure size to a larger size
plt.rcParams["figure.figsize"] = [16,9]
#Create the boxplot
df.boxplot(by="Month",column="ndvi")
#Create the mean line
plt.plot(binned_data.index+1, binned_data.Mean, 'r-')
#Create the one standard deviation away lines
plt.plot(binned_data.index+1, binned_data.Mean-binned_data.Std_Dev, 'b--')
plt.plot(binned_data.index+1, binned_data.Mean+binned_data.Std_Dev, 'b--')
#Create the two standard deviations away lines
plt.plot(binned_data.index+1, binned_data.Mean-(2*binned_data.Std_Dev), 'g-.', alpha=.3)
plt.plot(binned_data.index+1, binned_data.Mean+(2*binned_data.Std_Dev), 'g-.', alpha=.3)
Explanation: <span id="ndvi_std_boxplot_analysis">Visualize the Baseline Distributions Binned by Month ▴</span>
End of explanation
sns.violinplot(x=df.Month, y="ndvi", data=df)
Explanation: The plot above shows the distributions for each individual month over the baseline period.
<br>
- The <b><font color=red>red</font></b> line is the mean line which connects the <b><em>mean values</em></b> for each month.
<br>
- The dotted <b><font color=blue>blue</font></b> lines are exactly <b><em>one standard deviation away</em></b> from the mean and show where the NDVI values fall within 68% of the time, according to the Empirical Rule.
<br>
- The <b><font color=green>green</font></b> dotted lines are <b><em>two standard deviations away</em></b> from the mean and show where an estimated 95% of the NDVI values are contained for that month.
<br>
<div class="alert-info"><font color=black> <em><b>NOTE: </b>You will notice a seasonal trend in the plot above. If we had averaged the NDVI without binning, this trend data would be lost and we would end up comparing specific months to the average derived from all the months combined, instead of individually.</em></font>
</div>
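If you want to check the Empirical Rule numerically on this data, one possible (optional) sketch computes, for each month, the fraction of NDVI values within one standard deviation of that month's mean; the result should be roughly in the region of 68% if the values were close to normally distributed:
within_one_std = df.dropna(subset=["ndvi"]).groupby("Month")["ndvi"].apply(
    lambda v: ((v - v.mean()).abs() <= v.std()).mean())
print(within_one_std.round(2))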
<span id="ndvi_std_violinplot_analysis">Visualize the Baseline Kernel Distributions Binned by Month ▴</span>
The violinplot has the advantage of allowing us to visualize kernel distributions but comes at a higher computational cost.
End of explanation
#Create heatmap layout from dataframe
img = pd.pivot_table(df, values="z_score",index=["Month"], columns=["Year"], fill_value=None)
#pass the layout to seaborn heatmap
ax = sns.heatmap(img, cmap="RdYlGn", annot=True, fmt="f", center = 0)
#set the title for Aesthetics
ax.set_title('Z-Score\n Regional Selection Averages by Month and Year')
ax.fill= None
Explanation: <hr>
<span id="ndvi_std_pixelplot_analysis">Plot Z-Scores by Month and Year ▴</span>
Pixel Plot Visualization
End of explanation
#Lets look at that drought in 2009 during the months of Aug-Oct
#This will generate a composite of the z-scores for the months and years selected
examine(month = [8], year = 2009, value_name="z_score")
Explanation: Each block in the visualization above is representative of the deviation from the average for the region selected in a specific month and year. The omitted blocks are times when there was no satellite imagery available. Their values must either be inferred, ignored, or interpolated.
You may notice long vertical strips of red. These are strong indications of drought since they deviate from the baseline consistently over a long period of time.
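If the gaps hinder interpretation, one possible approach (a sketch, not part of the original workflow) is to linearly interpolate the missing cells of the pivot table along the month axis before plotting:
img_filled = img.interpolate(axis=0, limit_direction='both')  # fill missing months per year
ax2 = sns.heatmap(img_filled, cmap="RdYlGn", annot=True, fmt="f", center=0)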
<span id="ndvi_std_heatmap_analysis">Further Examine Times Of Interest ▴</span>
Use the function we created to examine times of interest
End of explanation
#Restrict input to a maximum of about 12 grids (months*year) for memory
def grid_examine(month = None, year = None, value_name = "z_score"):
#default to all months then cast to list, if not already
if month is None: month = list(df["Month"].unique())
elif type(month) is int: month = [month]
#default to all years then cast to list, if not already
if year is None: year = list(df["Year"].unique())
elif type(year) is int: year = [year]
#get data within the bounds specified
data = df[np.logical_and(df["Month"].isin(month) , df["Year"].isin(year))]
#Set the val_range to be used as the vertical limit (vmin and vmax)
val_range = (-1.96,1.96) if value_name == "z_score" else (df[value_name].unique().min(),df[value_name].unique().max())
#create colorbar to export and use on grid
Z = [[val_range[0],0],[0,val_range[1]]]
CS3 = plt.contourf(Z, 200, cmap="RdYlGn")
plt.clf()
#Define facet function to use for each tile in grid
def heatmap_facet(*args, **kwargs):
data = kwargs.pop('data')
img = pd.pivot_table(data, values=value_name,index=["latitude"], columns=["longitude"], fill_value=None)
ax = sns.heatmap(img, cmap="RdYlGn",vmin=val_range[0],vmax=val_range[1],
center = 0, square=True, cbar=False, mask = img.isnull())
plt.setp(ax.xaxis.get_majorticklabels(), rotation=90 )
plt.gca().invert_yaxis()
#Create grid using the face function above
with sns.plotting_context(font_scale=5.5):
g = sns.FacetGrid(data, col="Year", row="Month", height=5,sharey=True, sharex=True)
mega_g = g.map_dataframe(heatmap_facet, "longitude", "latitude")
g.set_titles(col_template="Yr= {col_name}", fontweight='bold', fontsize=18)
#Truncate axis tick labels using the format_axis function defined in block 13
for ax in g.axes:
format_axis(ax[0]._axes.yaxis, 6)
format_axis(ax[0]._axes.xaxis, 7)
#create a colorbox and apply the exported colorbar
cbar_ax = g.fig.add_axes([1.015,0.09, 0.015, 0.90])
cbar = plt.colorbar(cax=cbar_ax, mappable=CS3)
grid_examine(month=[8,9,10], year=[2008,2009,2010])
Explanation: Note:
This graphical representation of the region shows the amount of deviation from the mean for each pixel that was binned by month.
Grid Layout of Selected Times
End of explanation |
1,277 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Gaussian Mixture Models and Expectation Maximisation in Shogun
By Heiko Strathmann - [email protected] - http
Step2: Set up the model in Shogun
Step3: Sampling from mixture models
Sampling is extremely easy since every instance of the <a href="http
Step4: Evaluating densities in mixture Models
Next, let us visualise the density of the joint model (which is a convex sum of the densities of the individual distributions). Note the similarity between the calls since all distributions implement the <a href="http
Step5: Density estimating with mixture models
Now let us draw samples from the mixture model itself rather than from individual components. This is the situation that usually occurs in practice
Step6: Imagine you did not know the true generating process of this data. What would you think just looking at it? There are clearly at least two components (or clusters) that might have generated this data, but three also looks reasonable. So let us try to learn a Gaussian mixture model on those.
Step7: So far so good, now lets plot the density of this GMM using the code from above
Step8: It is also possible to access the individual components of the mixture distribution. In our case, we can for example draw 95% ellipses for each of the Gaussians using the method from above. We will do this (and more) below.
On local minima of EM
It seems that three components give a density that is closest to the original one. While two components also do a reasonable job here, it might sometimes happen (<a href="http
Step9: Clustering with mixture models
Recall that our initial goal was not to visualise mixture models (although that is already pretty cool) but to find clusters in a given set of points. All we need to do for this is to evaluate the log-likelihood of every point under every learned component and then pick the largest one. Shogun can do both. Below, we will illustrate both cases, obtaining a cluster index, and evaluating the log-likelihood for every point under each component.
Step10: These are clusterings obtained via the true mixture model and the one learned via EM. There is a slight subtlety here
Step11: Note how the lower left and middle cluster are overlapping in the sense that points at their intersection have similar likelihoods. If you do not care at all about this and are just interested in a partitioning of the space, simply choose the maximum.
Below we plot the space partitioning for a hard clustering. | Python Code:
%pylab inline
%matplotlib inline
import os
SHOGUN_DATA_DIR=os.getenv('SHOGUN_DATA_DIR', '../../../data')
# import all Shogun classes
from modshogun import *
from matplotlib.patches import Ellipse
# a tool for visualisation
def get_gaussian_ellipse_artist(mean, cov, nstd=1.96, color="red", linewidth=3):
"""Returns an ellipse artist for nstd times the standard deviation of this
Gaussian, specified by mean and covariance"""
# compute eigenvalues (ordered)
vals, vecs = eigh(cov)
order = vals.argsort()[::-1]
vals, vecs = vals[order], vecs[:, order]
theta = numpy.degrees(arctan2(*vecs[:, 0][::-1]))
# width and height are "full" widths, not radius
width, height = 2 * nstd * sqrt(vals)
e = Ellipse(xy=mean, width=width, height=height, angle=theta, \
edgecolor=color, fill=False, linewidth=linewidth)
return e
Explanation: Gaussian Mixture Models and Expectation Maximisation in Shogun
By Heiko Strathmann - [email protected] - http://github.com/karlnapf - http://herrstrathmann.de.
Based on the GMM framework of the Google summer of code 2011 project of Alesis Novik - https://github.com/alesis
This notebook is about learning and using Gaussian <a href="https://en.wikipedia.org/wiki/Mixture_model">Mixture Models</a> (GMM) in Shogun. Below, we demonstrate how to use them for sampling, for density estimation via <a href="https://en.wikipedia.org/wiki/Expectation-maximization_algorithm">Expectation Maximisation (EM)</a>, and for <a href="https://en.wikipedia.org/wiki/Data_clustering">clustering</a>.
Note that Shogun's interfaces for mixture models are deprecated and are soon to be replaced by more intuitive and efficient ones. This notebook contains some python magic at some places to compensate for this. However, all computations are done within Shogun itself.
Finite Mixture Models (skip if you just want code examples)
We begin by giving some intuition about mixture models. Consider an unobserved (or latent) discrete random variable taking $k$ states $s$ with probabilities $\text{Pr}(s=i)=\pi_i$ for $1\leq i \leq k$, and $k$ random variables $x_i|s_i$ with arbitrary densities or distributions, which are conditionally independent of each other given the state of $s$. In the finite mixture model, we model the probability or density for a single point $x$ being generated by the weighted mixture of the $x_i|s_i$
$$
p(x)=\sum_{i=1}^k\text{Pr}(s=i)p(x|s=i)=\sum_{i=1}^k \pi_i p(x|s=i)
$$
which is simply the marginalisation over the latent variable $s$. Note that $\sum_{i=1}^k\pi_i=1$.
For example, for the Gaussian mixture model (GMM), we get (adding a collection of parameters $\theta:=\{\boldsymbol{\mu}_i, \Sigma_i\}_{i=1}^k$ that contains $k$ mean and covariance parameters of single Gaussian distributions)
$$
p(x|\theta)=\sum_{i=1}^k \pi_i \mathcal{N}(x|\boldsymbol{\mu}_i,\Sigma_i)
$$
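As a tiny numerical illustration of this formula (a standalone sketch with made-up parameters, independent of Shogun), the density of a one-dimensional two-component Gaussian mixture at a point x is just the weighted sum of the component densities:
import numpy as np

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

pis = np.array([0.7, 0.3])                         # mixture weights, sum to one
mus, sigmas = np.array([-1., 2.]), np.array([0.5, 1.0])
x = 0.0
print(np.sum(pis * gaussian_pdf(x, mus, sigmas)))  # weighted sum of component densities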
Note that any set of probability distributions on the same domain can be combined to such a mixture model. Note again that $s$ is an unobserved discrete random variable, i.e. we model data being generated from some weighted combination of baseline distributions. Interesting problems now are
Learning the weights $\text{Pr}(s=i)=\pi_i$ from data
Learning the parameters $\theta$ from data for a fixed family of $x_i|s_i$, for example for the GMM
Using the learned model (which is a density estimate) for clustering or classification
All of these problems are in the context of unsupervised learning since the algorithm only sees the plain data and no information on its structure.
Expectation Maximisation
<a href="https://en.wikipedia.org/wiki/Expectation-maximization_algorithm">Expectation Maximisation (EM)</a> is a powerful method to learn any form of latent variable model and can be applied to the Gaussian mixture model case. Standard methods such as Maximum Likelihood are not straightforward for latent models in general, while EM can almost always be applied. However, it might converge to local optima and does not guarantee globally optimal solutions (this can be dealt with using some tricks, as we will see later). While the general idea in EM stays the same for all models it can be used on, the individual steps depend on the particular model that is being used.
The basic idea in EM is to maximise a lower bound, typically called the free energy, on the log-likelihood of the model. It does so by repeatedly performing two steps
The E-step optimises the free energy with respect to the latent variables $s_i$, holding the parameters $\theta$ fixed. This is done via setting the distribution over $s$ to the posterior given the used observations.
The M-step optimises the free energy with respect to the parameters $\theta$, holding the distribution over the $s_i$ fixed. This is done via maximum likelihood.
It can be shown that this procedure never decreases the likelihood and that stationary points (i.e. neither E-step nor M-step produce changes) of it correspond to local maxima in the model's likelihood. See references for more details on the procedure, and how to obtain a lower bound on the log-likelihood. There exist many different flavours of EM, including variants where only subsets of the model are iterated over at a time. There is no learning rate such as step size or similar, which is good and bad since convergence can be slow.
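To make the E-step/M-step recipe concrete, here is a minimal, self-contained numpy sketch of EM for a one-dimensional two-component GMM (purely illustrative, with made-up data; it is not how Shogun's train_em is implemented):
import numpy as np

rng = np.random.RandomState(0)
data = np.concatenate([rng.normal(-2, 0.8, 150), rng.normal(3, 1.2, 100)])

pis, mus, sigmas = np.array([0.5, 0.5]), np.array([-1., 1.]), np.array([1., 1.])
for _ in range(50):
    # E-step: posterior responsibilities of each component for each point
    dens = np.exp(-0.5 * ((data[:, None] - mus) / sigmas) ** 2) / (sigmas * np.sqrt(2 * np.pi))
    resp = pis * dens
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: maximum likelihood updates given the responsibilities
    nk = resp.sum(axis=0)
    pis = nk / len(data)
    mus = (resp * data[:, None]).sum(axis=0) / nk
    sigmas = np.sqrt((resp * (data[:, None] - mus) ** 2).sum(axis=0) / nk)

print(pis, mus, sigmas)  # should recover roughly the generating weights, means and stds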
Mixtures of Gaussians in Shogun
The main class for GMM in Shogun is <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CGMM.html">CGMM</a>, which contains an interface for setting up a model and sampling from it, but also to learn the model (the $\pi_i$ and parameters $\theta$) via EM. It inherits from the base class for distributions in Shogun, <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CDistribution.html">CDistribution</a>, and combines multiple single distribution instances to a mixture.
We start by creating a GMM instance, sampling from it, and computing the log-likelihood of the model for some points, and the log-likelihood of each individual component for some points. All these things are done in two dimensions to be able to plot them, but they generalise to higher (or lower) dimensions easily.
Let's sample, and illustrate the difference of knowing the latent variable indicating the component or not.
End of explanation
# create mixture of three Gaussians
num_components=3
num_max_samples=100
gmm=GMM(num_components)
dimension=2
# set means (TODO interface should be to construct mixture from individuals with set parameters)
means=zeros((num_components, dimension))
means[0]=[-5.0, -4.0]
means[1]=[7.0, 3.0]
means[2]=[0, 0.]
[gmm.set_nth_mean(means[i], i) for i in range(num_components)]
# set covariances
covs=zeros((num_components, dimension, dimension))
covs[0]=array([[2, 1.3],[.6, 3]])
covs[1]=array([[1.3, -0.8],[-0.8, 1.3]])
covs[2]=array([[2.5, .8],[0.8, 2.5]])
[gmm.set_nth_cov(covs[i],i) for i in range(num_components)]
# set mixture coefficients, these have to sum to one (TODO these should be initialised automatically)
weights=array([0.5, 0.3, 0.2])
gmm.set_coef(weights)
Explanation: Set up the model in Shogun
End of explanation
# now sample from each component seperately first, the from the joint model
hold(True)
colors=["red", "green", "blue"]
for i in range(num_components):
# draw a number of samples from current component and plot
num_samples=int(rand()*num_max_samples)+1
# emulate sampling from one component (TODO fix interface of GMM to handle this)
w=zeros(num_components)
w[i]=1.
gmm.set_coef(w)
# sample and plot (TODO fix interface to have loop within)
X=array([gmm.sample() for _ in range(num_samples)])
plot(X[:,0], X[:,1], "o", color=colors[i])
# draw 95% elipsoid for current component
gca().add_artist(get_gaussian_ellipse_artist(means[i], covs[i], color=colors[i]))
hold(False)
_=title("%dD Gaussian Mixture Model with %d components" % (dimension, num_components))
# since we used a hack to sample from each component, restore the original mixture coefficients
gmm.set_coef(weights)
Explanation: Sampling from mixture models
Sampling is extremely easy since every instance of the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CDistribution.html">CDistribution</a> class in Shogun allows to sample from it (if implemented)
End of explanation
# generate a grid over the full space and evaluate components PDF
resolution=100
Xs=linspace(-10,10, resolution)
Ys=linspace(-8,6, resolution)
pairs=asarray([(x,y) for x in Xs for y in Ys])
D=asarray([gmm.cluster(pairs[i])[3] for i in range(len(pairs))]).reshape(resolution,resolution)
figure(figsize=(18,5))
subplot(1,2,1)
pcolor(Xs,Ys,D)
xlim([-10,10])
ylim([-8,6])
title("Log-Likelihood of GMM")
subplot(1,2,2)
pcolor(Xs,Ys,exp(D))
xlim([-10,10])
ylim([-8,6])
_=title("Likelihood of GMM")
Explanation: Evaluating densities in mixture models
Next, let us visualise the density of the joint model (which is a convex sum of the densities of the individual distributions). Note the similarity between the calls since all distributions implement the <a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CDistribution.html">CDistribution</a> interface, including the mixture.
End of explanation
# sample and plot (TODO fix interface to have loop within)
X=array([gmm.sample() for _ in range(num_max_samples)])
plot(X[:,0], X[:,1], "o")
_=title("Samples from GMM")
Explanation: Density estimation with mixture models
Now let us draw samples from the mixture model itself rather than from individual components. This is the situation that usually occurs in practice: someone gives you a bunch of data with no labels attached to it at all. Our job is now to find structure in the data, which we will do with a GMM.
End of explanation
def estimate_gmm(X, num_components):
# bring data into shogun representation (note that Shogun data is in column vector form, so transpose)
features=RealFeatures(X.T)
gmm_est=GMM(num_components)
gmm_est.set_features(features)
# learn GMM
gmm_est.train_em()
return gmm_est
Explanation: Imagine you did not know the true generating process of this data. What would you think just looking at it? There are clearly at least two components (or clusters) that might have generated this data, but three also looks reasonable. So let us try to learn a Gaussian mixture model on those.
End of explanation
component_numbers=[2,3]
# plot true likelihood
D_true=asarray([gmm.cluster(pairs[i])[num_components] for i in range(len(pairs))]).reshape(resolution,resolution)
figure(figsize=(18,5))
subplot(1,len(component_numbers)+1,1)
pcolor(Xs,Ys,exp(D_true))
xlim([-10,10])
ylim([-8,6])
title("True likelihood")
for n in range(len(component_numbers)):
# TODO get rid of these hacks and offer nice interface from Shogun
# learn GMM with EM
gmm_est=estimate_gmm(X, component_numbers[n])
# evaluate at a grid of points
D_est=asarray([gmm_est.cluster(pairs[i])[component_numbers[n]] for i in range(len(pairs))]).reshape(resolution,resolution)
# visualise densities
subplot(1,len(component_numbers)+1,n+2)
pcolor(Xs,Ys,exp(D_est))
xlim([-10,10])
ylim([-8,6])
_=title("Estimated likelihood for EM with %d components"%component_numbers[n])
Explanation: So far so good, now let's plot the density of this GMM using the code from above
End of explanation
# function to draw ellipses for all components of a GMM
def visualise_gmm(gmm, color="blue"):
for i in range(gmm.get_num_components()):
component=Gaussian.obtain_from_generic(gmm.get_component(i))
gca().add_artist(get_gaussian_ellipse_artist(component.get_mean(), component.get_cov(), color=color))
# multiple runs to illustrate random initialisation matters
for _ in range(3):
figure(figsize=(18,5))
subplot(1, len(component_numbers)+1, 1)
plot(X[:,0],X[:,1], 'o')
    visualise_gmm(gmm, color="blue")  # plot the true model's components, matching the subplot title
title("True components")
for i in range(len(component_numbers)):
gmm_est=estimate_gmm(X, component_numbers[i])
subplot(1, len(component_numbers)+1, i+2)
plot(X[:,0],X[:,1], 'o')
visualise_gmm(gmm_est, color=colors[i])
# TODO add a method to get likelihood of full model, retraining is inefficient
likelihood=gmm_est.train_em()
_=title("Estimated likelihood: %.2f (%d components)"%(likelihood,component_numbers[i]))
Explanation: It is also possible to access the individual components of the mixture distribution. In our case, we can for example draw 95% ellipses for each of the Gaussians using the method from above. We will do this (and more) below.
On local minima of EM
It seems that three components give a density that is closest to the original one. While two components also do a reasonable job here, it might sometimes happen that the upper two Gaussians are grouped together (<a href="http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CKMeans.html">KMeans</a> with a random cluster initialisation is used to initialise the cluster centres if they are not set by hand); re-run the cell a couple of times to see this. This illustrates how EM might get stuck in a local minimum. We will do this below, where it might well happen that all runs produce the same result, or different ones - no guarantees.
Note that it is easily possible to initialise EM by specifying the parameters of the mixture components, as we did to create the original model above.
One way to decide which of multiple converged EM instances to use is to simply compute many of them (with different initialisations) and then choose the one with the largest likelihood. WARNING: Do not select the number of components like this, as the model will overfit.
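A minimal sketch of that selection strategy, reusing estimate_gmm from above (this assumes, as in the cell above, that the value returned by train_em can be compared across runs as a model likelihood):
best_likelihood=float("-inf")
best_gmm=None
for _ in range(10):
    candidate=estimate_gmm(X, num_components)
    candidate_likelihood=candidate.train_em()  # re-train to obtain a likelihood value, as done in the cell above
    if candidate_likelihood>best_likelihood:
        best_likelihood=candidate_likelihood
        best_gmm=candidate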
End of explanation
def cluster_and_visualise(gmm_est):
# obtain cluster index for each point of the training data
# TODO another hack here: Shogun should allow to pass multiple points and only return the index
# as the likelihood can be done via the individual components
# In addition, argmax should be computed for us, although log-pdf for all components should also be possible
    clusters=asarray([argmax(gmm_est.cluster(x)[:gmm_est.get_num_components()]) for x in X])
# visualise points by cluster
hold(True)
    for i in range(gmm_est.get_num_components()):
indices=clusters==i
plot(X[indices,0],X[indices,1], 'o', color=colors[i])
hold(False)
# learn gmm again
gmm_est=estimate_gmm(X, num_components)
figure(figsize=(18,5))
subplot(121)
cluster_and_visualise(gmm)
title("Clustering under true GMM")
subplot(122)
cluster_and_visualise(gmm_est)
_=title("Clustering under estimated GMM")
Explanation: Clustering with mixture models
Recall that our initial goal was not to visualise mixture models (although that is already pretty cool) but to find clusters in a given set of points. All we need to do for this is to evaluate the log-likelihood of every point under every learned component and then pick the largest one. Shogun can do both. Below, we will illustrate both cases, obtaining a cluster index, and evaluating the log-likelihood for every point under each component.
End of explanation
figure(figsize=(18,5))
for comp_idx in range(num_components):
subplot(1,num_components,comp_idx+1)
    # evaluate likelihood under current component
# TODO Shogun should do the loop and allow to specify component indices to evaluate pdf for
# TODO distribution interface should be the same everywhere
component=Gaussian.obtain_from_generic(gmm.get_component(comp_idx))
cluster_likelihoods=asarray([component.compute_PDF(X[i]) for i in range(len(X))])
# normalise
cluster_likelihoods-=cluster_likelihoods.min()
cluster_likelihoods/=cluster_likelihoods.max()
# plot, coloured by likelihood value
cm=get_cmap("jet")
hold(True)
for j in range(len(X)):
color = cm(cluster_likelihoods[j])
plot(X[j,0], X[j,1] ,"o", color=color)
hold(False)
title("Data coloured by likelihood for component %d" % comp_idx)
Explanation: These are clusterings obtained via the true mixture model and the one learned via EM. There is a slight subtlety here: even the model under which the data was generated will not cluster the data correctly if the data is overlapping. This is due to the fact that the cluster with the largest probability is chosen, which doesn't allow for any ambiguity. If you are interested in cases where data overlaps, you should always look at the log-likelihood of the point for each cluster and consider taking into account "draws" in the decision, i.e. cases where the probabilities for two different clusters are (almost) equally large.
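A hedged sketch of such a soft decision, reusing the convention from cluster_and_visualise above that the first num_components entries returned by cluster() are the per-component log-likelihoods:
log_liks=asarray([gmm_est.cluster(x)[:num_components] for x in X])
scores=exp(log_liks - log_liks.max(axis=1, keepdims=True))
scores/=scores.sum(axis=1, keepdims=True)  # rows sum to one; near-uniform rows are the ambiguous "draw" points (mixture weights ignored for simplicity)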
Below we plot all points, coloured by their likelihood under each component.
End of explanation
# compute cluster index for every point in space
D_est=asarray([gmm_est.cluster(pairs[i])[:num_components].argmax() for i in range(len(pairs))]).reshape(resolution,resolution)
# visualise clustering
cluster_and_visualise(gmm_est)
# visualise space partitioning
hold(True)
pcolor(Xs,Ys,D_est)
hold(False)
Explanation: Note how the lower left and middle clusters overlap in the sense that points at their intersection have similar likelihoods. If you do not care about this at all and are just interested in a partitioning of the space, simply choose the maximum.
Below we plot the space partitioning for a hard clustering.
End of explanation |
1,278 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Python
An introduction to Python for middle and high school students using Python 3 syntax.
Getting started
We're assuming that you already have Python 3.6 or higher installed. If not, go to Python.org to download the latest for your operating system. Verify that you have the correct version by opening a terminal or command prompt and running
$ python --version
Python 3.6.0
Your First Program
Step1: Choose File > New Window. An empty window will appear with Untitled in the menu bar. Enter the following code into the new shell window. Choose File > Save. Save as hello.py, which is known as a python module. Choose Run > Run module to run the file
Calculating with Python
Mathematical operators
Step2: Division
Step3: Type function
There's lots more available via the standard library and third-party packages. To see the type of the result, use the type function. For example type(3//4) returns int
Order of Operations
Python reads left to right. Higher precedence operators are applied before lower precedence operators. Operators below are listed lowest precedence at the top.
| Operator | Description |
|----------------------------------------------|-----------------------------------------------------------|
| or | Boolean OR |
| and | Boolean AND |
| not | Boolean NOT |
| in, not in, is, is not, <, <=, >, >=, !=, == | Comparison, including membership tests and identity tests |
| +, - | Addition and Subtraction |
| *, /, //, % | Multiplication, division, integer division, remainder |
| ** | Exponentiation |
Calculate the result of 5 + 1 * 4.
We override the precedence using parentheses, which are evaluated from the innermost out.
Calculate the result of (5+1) * 4.
Remember that multiplication and division always go before
addition and subtraction, unless parentheses are used to control
the order of operations.
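A quick check in the interactive shell shows the difference:
5 + 1 * 4      # multiplication happens first: 5 + 4 = 9
(5 + 1) * 4    # parentheses first: 6 * 4 = 24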
Step4: Variables
Variables are like labels so that we can refer to things by a recognizable name.
Step5: Valid varible names
Variables begin with a letter followed by any combination of letters, numbers and underscores
jim
other_jim
other_jim99
Invalid variable names
Step6: User Input
We can get keyboard input using the input function
Step7: Strings
Strings are immutable objects in python, meaning they can't be modified once created, but they can be used to create new strings.
Strings should be surrounded with a single quote ' or double quote ". The general rule is to use the single quote unless you plan to use something called interpolation
Formatting
Strings support templating and formatting.
Step8: Data Structures
Lists []
Lists are orderings of things where each thing corresponds to an index starting at 0.
Example [1, 2, 3] where 1 is at index 0, 2 is at index 1 and 3 is at index 2.
Tuples ()
Tuples are like lists, only you can't change them after they are created (they are immutable).
Dictionaries {}
Dictionaries store key-value pairs: each key maps to a value.
Comprehension
Lists can be constructed using comprehension logic
Step9: We can use conditionals as well | Python Code:
print('Hello, World!')
Explanation: Introduction to Python
An introduction to Python for middle and high school students using Python 3 syntax.
Getting started
We're assuming that you already have Python 3.6 or higher installed. If not, go to Python.org to download the latest for your operating system. Verify that you have the correct version by opening a terminal or command prompt and running
$ python --version
Python 3.6.0
Your First Program: Hello, World!
Open the Interactive DeveLopment Environment (IDLE) and write the famous Hello World program. Open IDLE and you'll be in an interactive shell.
End of explanation
3*4
Explanation: Choose File > New Window. An empty window will appear with Untitled in the menu bar. Enter the following code into the new shell window. Choose File > Save. Save as hello.py, which is known as a python module. Choose Run > Run module to run the file
Calculating with Python
Mathematical operators:
Addition: +
Subtraction: -
Multiplication: *
Try these
* 3 * 4
End of explanation
3//4
# Exponents
2**3
# Modulus
5%4
Explanation: Division:
* Floating point /
* Integer //
Try these:
* 5/4
* 1/0
* 3//4
* 5//4
End of explanation
(2 + 2) ** 3
Explanation: Type function
There's lots more available via the standard library and third-party packages. To see the type of the result, use the type function. For example type(3//4) returns int
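A few quick examples of type in action:
type(3 / 4)      # <class 'float'>
type(3 // 4)     # <class 'int'>
type('hello')    # <class 'str'>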
Order of Operations
Python reads left to right. Higher precedence operators are applied before lower precedence operators. Operators below are listed lowest precedence at the top.
| Operator | Description |
|----------------------------------------------|-----------------------------------------------------------|
| or | Boolean OR |
| and | Boolean AND |
| not | Boolean NOT |
| in, not in, is, is not, <, <=, >, >=, !=, == | Comparison, including membership tests and identity tests |
| +, - | Addition and Subtraction |
| *, /, //, % | Multiplication, division, integer division, remainder |
| ** | Exponentiation |
Calculate the result of 5 + 1 * 4.
We override the precedence using parentheses, which are evaluated from the innermost out.
Calculate the result of (5+1) * 4.
Remember that multiplication and division always go before
addition and subtraction, unless parentheses are used to control
the order of operations.
End of explanation
fred = 10 + 5
type(fred)
fred = 10 / 5
type(fred)
fred * 55 + fred
joe = fred * 55
joe
joe
fred
joe = fred
fred = joe
Explanation: Variables
Variables are like labels so that we can refer to things by a recognizable name.
End of explanation
ends_with_9 = 9
a = 6
b = 4
my_var = 7
num_apples * 65
doesntexist
Explanation: Valid varible names
Variables begin with a letter followed by any combination of letters, numbers and underscores
jim
other_jim
other_jim99
Invalid variable names: don't meet requirements
symbol$notallowed
5startswithnumber
Invalid variable names: reserved words
| Reserved words | | | | |
|----------------|----------|--------|----------|-------|
| None | continue | for | lambda | try |
| True | def | from | nonlocal | while |
| and | del | global | not | with |
| as | elif | if | or | yield |
| break | except | in | raise | |
Referring to a previous result
You can use the _ variable to refer to the result of a previous calculation when working in the shell.
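For example, in an interactive session:
>>> 3 * 4
12
>>> _ + 1
13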
End of explanation
name = input("What's your name? ")
print("Hi ", name)
Explanation: User Input
We can get keyboard input using the input function
End of explanation
id("bar")
fred = "bar"
id(fred)
"this string is %s" % ('formatted')
"this string is also {message}. The message='{message}' can be used more than once".format(message='formatted')
# Called string concatenation
"this string is "+ 'concatenated'
## Conditionals
# if (condition): ...
# elif (another_condition): ...
# else: ...   (the else branch takes no condition)
aa = False
if aa:
print('a is true')
else:
print ('aa is not true')
aa = 'wasdf'
if aa == 'aa':
print('first condition')
elif aa == 'bb':
print('second condition')
else:
print('default condition')
Explanation: Strings
Strings are immutable objects in python, meaning they can't be modified once created, but they can be used to create new strings.
Strings should be surrounded with a single quote ' or double quote ". The general rule is to use the single quote unless you plan to use something called interpolation
Formatting
Strings support templating and formatting.
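Since this tutorial assumes Python 3.6 or higher, formatted string literals (f-strings) are a third option worth knowing (the name below is just an example value):
name = 'Sam'
f'this string is also formatted, hello {name}'   # evaluates to 'this string is also formatted, hello Sam'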
End of explanation
[(a, a*2) for a in range(10)]
Explanation: Data Structures
Lists []
Lists are orderings of things where each thing corresponds to an index starting at 0.
Example [1, 2, 3] where 1 is at index 0, 2 is at index 1 and 3 is at index 2.
Tuples ()
Tuples are like lists, only you can't change them after they are created (they are immutable).
Dictionaries {}
Dictionaries store key-value pairs: each key maps to a value.
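A few small literal examples (the names are just for illustration):
fruits = ['apple', 'banana', 'cherry']   # list: ordered, index starts at 0
point = (3, 4)                           # tuple: like a list, but immutable
ages = {'sam': 12, 'alex': 14}           # dictionary: keys map to values
fruits[0]      # 'apple'
ages['alex']   # 14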
Comprehension
Lists can be constructed using comprehension logic
End of explanation
[(a, a*2) for a in range(10) if a < 8]
Explanation: We can use conditionals as well
End of explanation |
1,279 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
pandas
pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language
http
Step1: Working with Data
Step2: Normalizing data
If we look at two of the features in the data we can see they are of different scales.
Step3: We can use standard deviation to normalize data.
Here we generate a random dataset drawn from a standard normal distribution, i.e. values spread around the mean according to the standard deviation.
https
Step4: We are now going to normalize the data so we give both data items the same weight.
for each column, we subtract the mean and divide by the standard deviation
Let's say we have points x1, x2,.. xn in column "AGE"
mean = $(1/n) * (x1+x2+...xn)$
std = $\sqrt{(1/n) * ( (x1-mean)^2 + (x2 -mean)^2 + ...)}$
Step5: with numpy array we can do simple vectorized operations
So if I do
arr = arr - c
it subtracts c from every element of arr, and if I do
arr = arr/c
it divides all elements in arr by c | Python Code:
# Series
import numpy as np
import pandas as pd
myArray = np.array([2,3,4])
row_names = ['p','q','r']
mySeries = pd.Series(myArray,index=row_names)
print (mySeries)
print (mySeries[0])
print (mySeries['p'])
# Dataframes
myArray = np.array([[2,3,4],[5,6,7]])
row_names = ['p','q']
col_names = ['One','Two','Three']
myDataFrame = pd.DataFrame(myArray,index = row_names,columns = col_names)
print (myDataFrame)
print ('Method 1 :')
print ('One column = \n{}'.format(myDataFrame['One']))
print ('Method 2 :')
print ('One column = \n{}'.format(myDataFrame.One))
Explanation: pandas
pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language
http://pandas.pydata.org/
End of explanation
# Let's load data from a csv
df = pd.read_csv("../data/diabetes.csv")
df.info()
# Examine data
df.head()
Explanation: Working with Data
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
# Histogram
bins=range(0,100,10)
plt.hist(df["Age"].values, bins, alpha=0.5, label='age')
plt.show()
plt.hist(df["BMI"].values, bins, alpha=0.5, label='BMI')
plt.show()
plt.hist(df["Age"].values, bins, alpha=0.5, label='age')
plt.hist(df["BMI"].values, bins, alpha=0.5, label='BMI')
plt.show()
Explanation: Normalizing data
If we look at two of the features in the data we can see they are of different scales.
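One quick way to compare the scales of the two columns (using the diabetes DataFrame loaded above):
df[["Age", "BMI"]].describe()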
End of explanation
from numpy.random import normal
gaussian_numbers = normal(size=5000)
plt.hist(gaussian_numbers, bins=np.linspace(-5.0, 5.0, num=20)) # Set bin bounds
plt.show()
Explanation: We can use standard deviation to normalize data.
Here we generate a random dataset drawn from a standard normal distribution, i.e. values spread around the mean according to the standard deviation.
https://en.wikipedia.org/wiki/Standard_deviation
End of explanation
# Let's start with an example on the AGE feature
# I create a new array for easier manipulation
arr_age = df["Age"].values
arr_age[:10]
Explanation: We are now going to normalize the data so we give both data items the same weight.
for each column, we subtract the mean and divide by the standard deviation
Let's say we have points x1, x2,.. xn in column "AGE"
mean = $(1/n) * (x1+x2+...xn)$
std = $\sqrt{(1/n) * ( (x1-mean)^2 + (x2 -mean)^2 + ...)}$
End of explanation
mean_age = np.mean(arr_age)
std_age = np.std(arr_age)
print ('Age Mean: {} Std:{}'.format(mean_age, std_age))
# So to compute the standardized array, I write :
arr_age_new = (arr_age - mean_age)/std_age
arr_age_new[:10]
# I can now apply the same idea to a pandas dataframe
# using some built in pandas functions :
df_new = (df - df.mean()) / df.std()
df_new.head()
df.head()
# Histogram
bins=np.linspace(-5.0, 5.0, num=20)
plt.hist(df_new["Age"].values, bins, alpha=0.5, label='age')
plt.hist(df_new["BMI"].values, bins, alpha=0.5, label='BMI')
plt.show()
Explanation: With a numpy array we can do simple vectorized operations.
So if I do
arr = arr - c
it subtracts c from every element of arr, and if I do
arr = arr/c
it divides every element of arr by c
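As a quick, self-contained check of the normalization idea (values chosen only for illustration):
arr = np.array([10., 20., 30.])
(arr - arr.mean()) / arr.std()    # array([-1.22474487,  0.        ,  1.22474487])
Note one subtlety: arr.std() and np.std use the population standard deviation (ddof=0) by default, while pandas' df.std() uses the sample standard deviation (ddof=1), so the two normalizations differ slightly.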
End of explanation |
1,280 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Experiment
Step1: Load and check data
Step2: ## Analysis
Experiment Details
Step3: Does improved weight pruning outperform regular SET?
Step4: No significant difference between the two approaches
What is the impact of early stopping? | Python Code:
%load_ext autoreload
%autoreload 2
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import glob
import tabulate
import pprint
import click
import numpy as np
import pandas as pd
from ray.tune.commands import *
from nupic.research.frameworks.dynamic_sparse.common.browser import *
Explanation: Experiment:
Evaluate pruning by magnitude weighted by coactivations (more thorough evaluation), compare it to baseline (SET).
Motivation.
Check if results are consistently above baseline.
Conclusion
No significant difference between both models
No support for early stopping
End of explanation
exps = ['improved_magpruning_eval1', ]
paths = [os.path.expanduser("~/nta/results/{}".format(e)) for e in exps]
df = load_many(paths)
df.head(5)
# replace NaNs in the hebbian/weight prune columns with 0
df['hebbian_prune_perc'] = df['hebbian_prune_perc'].replace(np.nan, 0.0, regex=True)
df['weight_prune_perc'] = df['weight_prune_perc'].replace(np.nan, 0.0, regex=True)
df.columns
df.shape
df.iloc[1]
df.groupby('model')['model'].count()
Explanation: Load and check data
End of explanation
# Did any trials fail?
df[df["epochs"]<30]["epochs"].count()
# Removing failed or incomplete trials
df_origin = df.copy()
df = df_origin[df_origin["epochs"]>=30]
df.shape
# which ones failed?
# failed, or still ongoing?
df_origin['failed'] = df_origin["epochs"]<30
df_origin[df_origin['failed']]['epochs']
# helper functions
def mean_and_std(s):
return "{:.3f} ± {:.3f}".format(s.mean(), s.std())
def round_mean(s):
return "{:.0f}".format(round(s.mean()))
stats = ['min', 'max', 'mean', 'std']
def agg(columns, filter=None, round=3):
if filter is None:
return (df.groupby(columns)
.agg({'val_acc_max_epoch': round_mean,
'val_acc_max': stats,
'model': ['count']})).round(round)
else:
return (df[filter].groupby(columns)
.agg({'val_acc_max_epoch': round_mean,
'val_acc_max': stats,
'model': ['count']})).round(round)
Explanation: ## Analysis
Experiment Details
End of explanation
agg(['model'])
agg(['on_perc', 'model'])
agg(['weight_prune_perc', 'model'])
agg(['on_perc', 'pruning_early_stop', 'model'])
agg(['on_perc', 'pruning_early_stop', 'model'])
Explanation: Does improved weight pruning outperform regular SET?
End of explanation
agg(['pruning_early_stop'])
agg(['model', 'pruning_early_stop'])
agg(['on_perc', 'pruning_early_stop'])
Explanation: No significant difference between the two approaches
What is the impact of early stopping?
End of explanation |
1,281 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Discretize PV row sides and indexing
In this section, we will learn how to
Step1: Prepare PV array parameters
Step2: Create discretization scheme
Step3: Create a PV array
Import the OrderedPVArray class and create a PV array object using the parameters above
Step4: Plot the PV array at index 0
Step5: As we can see, there is some discretization on the leftmost and the center PV rows.
We can check that it was correctly done using the pvarray object.
Step6: Indexing the timeseries surfaces in a PV array
In order to perform some calculations on PV array surfaces, it is often important to index them.
pvfactors takes care of this.
We can for instance check the index of the timeseries surfaces on the front side of the center PV row
Step7: Intuitively, one could have expected only 3 timeseries surfaces because that's what the previous plot at index 0 was showing.
But it is important to understand that ALL timeseries surfaces are created at PV array fitting time, even the ones that don't exist for the given timestamps.
So in this example
Step8: As expected, all shaded timeseries surfaces on the front side of the PV row have length zero.
Plot PV array with indices
It is possible also to visualize the PV surface indices of all the non-zero surfaces when plotting a PV array, for a given timestamp (here at the first timestamp, so 0). | Python Code:
# Import external libraries
import matplotlib.pyplot as plt
# Settings
%matplotlib inline
Explanation: Discretize PV row sides and indexing
In this section, we will learn how to:
create a PV array with discretized PV row sides
understand the indices of the timeseries surfaces of a PV array
plot a PV array with indices shown on plot
Imports and settings
End of explanation
pvarray_parameters = {
'n_pvrows': 3, # number of pv rows
'pvrow_height': 1, # height of pvrows (measured at center / torque tube)
'pvrow_width': 1, # width of pvrows
'axis_azimuth': 0., # azimuth angle of rotation axis
'surface_tilt': 20., # tilt of the pv rows
'surface_azimuth': 270., # azimuth of the pv rows front surface
'solar_zenith': 40., # solar zenith angle
'solar_azimuth': 150., # solar azimuth angle
'gcr': 0.5, # ground coverage ratio
}
Explanation: Prepare PV array parameters
End of explanation
discretization = {'cut':{
0: {'back': 5}, # discretize the back side of the leftmost PV row into 5 segments
1: {'front': 3} # discretize the front side of the center PV row into 3 segments
}}
pvarray_parameters.update(discretization)
Explanation: Create discretization scheme
End of explanation
from pvfactors.geometry import OrderedPVArray
# Create pv array
pvarray = OrderedPVArray.fit_from_dict_of_scalars(pvarray_parameters)
Explanation: Create a PV array
Import the OrderedPVArray class and create a PV array object using the parameters above
End of explanation
# Plot pvarray shapely geometries
f, ax = plt.subplots(figsize=(10, 3))
pvarray.plot_at_idx(0, ax)
plt.show()
Explanation: Plot the PV array at index 0
End of explanation
pvrow_left = pvarray.ts_pvrows[0]
n_segments = len(pvrow_left.back.list_segments)
print("Back side of leftmost PV row has {} segments".format(n_segments))
pvrow_center = pvarray.ts_pvrows[1]
n_segments = len(pvrow_center.front.list_segments)
print("Front side of center PV row has {} segments".format(n_segments))
Explanation: As we can see, there is some discretization on the leftmost and the center PV rows.
We can check that it was correctly done using the pvarray object.
End of explanation
# List some indices
ts_surface_list = pvrow_center.front.all_ts_surfaces
print("Indices of surfaces on front side of center PV row")
for ts_surface in ts_surface_list:
index = ts_surface.index
print("... surface index: {}".format(index))
Explanation: Indexing the timeseries surfaces in a PV array
In order to perform some calculations on PV array surfaces, it is often important to index them.
pvfactors takes care of this.
We can for instance check the index of the timeseries surfaces on the front side of the center PV row
End of explanation
for ts_surface in ts_surface_list:
index = ts_surface.index
shaded = ts_surface.shaded
length = ts_surface.length
print("Surface with index: '{}' has shading status '{}' and length {} m".format(index, shaded, length))
Explanation: Intuitively, one could have expected only 3 timeseries surfaces because that's what the previous plot at index 0 was showing.
But it is important to understand that ALL timeseries surfaces are created at PV array fitting time, even the ones that don't exist for the given timestamps.
So in this example:
- we have 3 illuminated timeseries surfaces, which do exist at timestamp 0
- and 3 shaded timeseries surfaces, which do NOT exist at timestamp 0 (so they have zero length).
Let's check that.
End of explanation
# Plot pvarray shapely geometries with surface indices
f, ax = plt.subplots(figsize=(10, 4))
pvarray.plot_at_idx(0, ax, with_surface_index=True)
ax.set_xlim(-3, 5)
plt.show()
Explanation: As expected, all shaded timeseries surfaces on the front side of the PV row have length zero.
Plot PV array with indices
It is also possible to visualize the PV surface indices of all the non-zero surfaces when plotting a PV array for a given timestamp (here the first timestamp, i.e. index 0).
End of explanation |
1,282 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non-oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momemtum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embeded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bcc', 'bcc-esm1', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: BCC
Source ID: BCC-ESM1
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:39
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non-oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
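BOOLEAN properties take an unquoted True or False, matching the DOC.set_value(value) form (without quotation marks) shown in the template. Illustrative only:
# Illustrative only -- answer for your own model configuration
DOC.set_value(False)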
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
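INTEGER properties such as the tracer time step take a plain integer; the number below is a placeholder rather than a suggested value:
# Illustrative placeholder only -- replace with the actual tracer time step in seconds
DOC.set_value(3600)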
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different than active ? if so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusivity Coeff
Properties of eddy diffusivity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusivity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusivity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusivity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusivity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusivity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (M2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Properties of boundary layer (BL) mixing on tracers in the ocean
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient, (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Properties of boundary layer (BL) mixing on momentum in the ocean
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient, (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
Properties of interior mixing in the ocean
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
Properties of interior mixing on tracers in the ocean
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient, (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
Properties of interior mixing on momentum in the ocean
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient, (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embedded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinctions depths for sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmosphere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation |
1,283 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<H1>Bidirectional connections as a function of the distance </H1>
<P>
We will analyze whether bidirectionally connected inhibitory synapses are over-represented as a function of the intersomatic distance.</P>
Step1: Read the filenames from the dataset that contain 2 or more PV(+)-interneurons
Step3: <H2> Load all distances from all connected PV cells</H2>
<P> Read intersomatic distances between PV(+) interneurons. Some distances may be
missing, and the function will issue a warning.</P>
Step4: <H2> Plot the histogram of recorded distances</H2>
Step6: <H2>Distances in recurrently connected inhibitory neurons</H2>
<P> We now collect only recurrently connected inhibitory neurons (i.e., bidirectionally connected).</P>
Step7: We collect the intersomatic distances between recurrently connected inhibitory neurons. We will plot
them against the total number of possible bidirectionally connected neurons.
Step8: To plot the total number of possible bidirectional connections we could simply divide the total number by two.
Alternatively, we can take only the positive (or negative) distances. Remember that positive distances are
distances from neuron A -> B and negative distances are from B -> A. Either one of these is the number of
positive bidirectional connections (not both of them!). | Python Code:
%pylab inline
import warnings
from inet import DataLoader, __version__
from inet.utils import II_slice
print('Inet version {}'.format(__version__))
Explanation: <H1>Bidirectional connections as a function of the distance </H1>
<P>
We will analyze whether bidirectionally connected inhibitory synapses are over-represented as a function of the intersomatic distance.</P>
End of explanation
# use filenames in the dataset to read list of distances to be read
mydataset = DataLoader('../data/PV/')
count_pv = lambda x : len([i for i in range(len(mydataset)) if int(mydataset.filename(i)[0])==x])
pv_id = [ idx for idx in range(len(mydataset)) if int(mydataset.filename(idx)[0])>1 ]
print('{} experiments with 2 or more PV-cells\n'.format(len(pv_id)))
for i in range(2,5):
print('{:2d} experiments with {} PV-cells'.format(count_pv(i), i))
Explanation: Read the filenames from the dataset that contain 2 or more PV(+)-interneurons
End of explanation
# read distances from between inhibitory neurons
def read_dist(fname):
    """Get distances between inhibitory pairs of neurons
    from a matrix of intersomatic distances.

    Argument:
    fname: string
        the matrix name that contains the connected synapses. It is
        the name of the file without extension (e.g., *.syn).

    It will issue a warning if the matrix of distances is not found.
    """
mypath = '../data/PV/' + fname + '.dist'
try:
D = np.loadtxt(mypath)
D = II_slice(D, int(fname[0]))
idx = np.where(~np.eye(D.shape[0], dtype = bool))
mydist = np.abs(D[idx]).tolist()
return(mydist)
except IOError:
warnings.warn(fname + '.dist not found!')
return([])
# collect all intersomatic distances in a single list
dist_tested = list()
for i in pv_id:
dist_tested +=read_dist( mydataset.filename(i) )
print('{} total distances read'.format(len(dist_tested))) # total distances
Explanation: <H2> Load all distances from all connected PV cells</H2>
<P> Read intersomatic distances between PV(+) interneurons. Some distances may be
missing, and the function will issue a warning.</P>
End of explanation
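Since read_dist warns for every missing *.dist file, long runs can get noisy. As an optional sketch (not part of the original analysis), the standard warnings machinery can collect those messages instead of printing them:
# Optional: collect the missing-file warnings instead of printing them
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    _ = [read_dist(mydataset.filename(i)) for i in pv_id]
print('{} warnings about missing distance matrices'.format(len(caught)))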
mybins = arange(0,600, 50)
plt.hist(dist_tested, bins = mybins, facecolor='white', lw=2);
plt.ylim(ymax=40);
plt.ylabel('Inhibitory chemical synapses');
plt.xlabel('Intersomatic distance ($\mu$m)');
Explanation: <H2> Plot the histogram of recorded distances</H2>
End of explanation
def read_rec_dist(fname):
    """Get distances between bidirectionally connected interneurons
    from a matrix of intersomatic distances.

    Argument:
    fname: string
        the matrix name that contains the connected synapses (*.syn)
    """
mydistpath = '../data/PV/' + fname + '.dist'
# load distance matrix (D)
try:
D = II_slice(np.loadtxt(mydistpath), int(fname[0]))
except IOError:
warnings.warn(mydistpath + ' not found!')
return([])
# load synapse matrix (S)
try:
S = np.loadtxt('../data/PV/' + fname + '.syn')
except IOError:
warnings.warn(fname + ' not found!')
return([])
S = II_slice(S, int(fname[0]) )
S[S==2] = 0 # remove gaps
S[S==3] = 1 # remove gaps in chemical
x,y = np.nonzero(S)
    ids = list(zip(x, y))  # (pre, post) index pairs of connected cells
    mydist = list()
    if ids:  # only proceed if at least one connection was found
for i,j in ids:
if S[j,i] == 1:
mydist.append( D[i,j] )
#print( np.unique(np.abs(mydist)) )
return( np.unique(np.abs(mydist)).tolist() )
# Number of bidirectionally connected interneurons is computed in the counter
mydataset.motif['ii_c2']
# select experiments with bidirectional motifs
bidirectional_id = [i for i in range(len(mydataset)) if mydataset.motifs(i)['ii_c2']['found']]
for i in bidirectional_id:
print('Experiment {:3d}, filename: {}'.format(i, mydataset.filename(i)))
Explanation: <H2>Distances in recurrently connected inhibitory neurons</H2>
<P> We now collect only recurrently connected inhibitory neurons (i.e., bidirectionally connected).</P>
End of explanation
dist_found = list()
for i in bidirectional_id:
dist_found += read_rec_dist( fname = mydataset.filename(i) )
Explanation: We collect the intersomatic distances between recurrently connected inhibitory neurons. We will plot
them against the total number of possible bidirectionally connected neurons.
End of explanation
mybins = arange(0,550, 50)
bid_tested = np.unique(dist_tested) # like dividing by two
plt.hist(bid_tested, bins = mybins, facecolor='white', lw=2);
plt.ylim(ymax=20);
plt.ylabel('Inhibitory chemical synapses');
plt.xlabel('Intersomatic distance ($\mu$m)');
plt.hist(dist_found, bins = mybins, facecolor='gray', lw=2);
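# Optional follow-up (not in the original notebook): estimate the proportion of
# bidirectionally connected pairs per distance bin, i.e. the quantity the
# analysis is after. It reuses mybins, bid_tested and dist_found from above.
n_found, _ = np.histogram(dist_found, bins=mybins)
n_tested, _ = np.histogram(bid_tested, bins=mybins)
proportion = np.divide(n_found, n_tested, out=np.zeros(len(n_found)), where=n_tested > 0)
print(proportion)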
Explanation: To plot the total number of possible bidirectional connections we could simply divide the total number by two.
Alternatively, we can take only the positive (or negative) distances. Remember that positive distances are
distances from neuron A -> B and negative distances are from B -> A. Either one of these is the number of
positive bidirectional connections (not both of them!).
End of explanation |
1,284 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient: you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This is a huge waste of computation.
To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding to the index of the "on" input unit.
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.
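To make the lookup-equals-multiplication point concrete, here is a tiny numpy sketch (purely illustrative; the sizes and the index 958 just mirror the example above):
import numpy as np
vocab_size, embed_dim = 10000, 300
embedding = np.random.rand(vocab_size, embed_dim)   # the embedding weight matrix
one_hot = np.zeros(vocab_size)
one_hot[958] = 1                                    # "heart" encoded as 958
# multiplying by the one-hot vector selects exactly row 958 of the matrix
assert np.allclose(one_hot @ embedding, embedding[958])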
Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
Step1: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
Step2: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
Step3: And here I'm creating dictionaries to convert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
Step4: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. Check out my solution to see how I did it.
Exercise
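One possible shape of the subsampling computation, offered as a hedged sketch rather than the reference solution; it assumes the int_words list from the previous step and uses t = 1e-5, which is a common but arbitrary choice:
from collections import Counter
import random
import numpy as np

threshold = 1e-5                      # assumed value of the threshold t
word_counts = Counter(int_words)
total_count = len(int_words)
freqs = {word: count / total_count for word, count in word_counts.items()}
p_drop = {word: 1 - np.sqrt(threshold / freqs[word]) for word in word_counts}
train_words = [word for word in int_words if random.random() > p_drop[word]]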
Step5: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.
Step6: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
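The batch generator itself is left to the step above; as a hedged sketch, the window-grabbing helper it relies on might look like this (the name get_target and the window_size default are assumptions, and R is drawn between 1 and window_size as in the Mikolov et al. scheme):
def get_target(words, idx, window_size=5):
    # pick a reduced window radius R, then grab the words around position idx
    R = np.random.randint(1, window_size + 1)
    start = idx - R if (idx - R) > 0 else 0
    stop = idx + R
    target_words = set(words[start:idx] + words[idx + 1:stop + 1])
    return list(target_words)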
Step7: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise
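A minimal sketch of the two placeholders, assuming the TensorFlow 1.x API (import tensorflow as tf, as in the imports later in this notebook); the names train_graph, inputs and labels are just conventional choices:
train_graph = tf.Graph()
with train_graph.as_default():
    inputs = tf.placeholder(tf.int32, [None], name='inputs')
    # labels keep an extra dimension so they fit tf.nn.sampled_softmax_loss later
    labels = tf.placeholder(tf.int32, [None, None], name='labels')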
Step8: Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise
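A hedged sketch of the embedding layer itself; n_vocab and n_embedding are assumed names for the vocabulary size and embedding dimension, and int_to_vocab is assumed to come from the dictionary-building step:
n_vocab = len(int_to_vocab)          # assumed to exist from the preprocessing step
n_embedding = 200                    # arbitrary illustrative size
with train_graph.as_default():
    embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1))
    embed = tf.nn.embedding_lookup(embedding, inputs)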
Step9: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise
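A sketch of the negative-sampling loss built around tf.nn.sampled_softmax_loss, which the text names explicitly; the layer sizes reuse the assumed names from the previous sketch and n_sampled = 100 is only a typical choice:
n_sampled = 100
with train_graph.as_default():
    softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1))
    softmax_b = tf.Variable(tf.zeros(n_vocab))
    loss = tf.nn.sampled_softmax_loss(weights=softmax_w, biases=softmax_b,
                                      labels=labels, inputs=embed,
                                      num_sampled=n_sampled, num_classes=n_vocab)
    cost = tf.reduce_mean(loss)
    optimizer = tf.train.AdamOptimizer().minimize(cost)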
Step10: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
Step11: Restore the trained network if you need to
Step12: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local stucture. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data. | Python Code:
import time
import numpy as np
import tensorflow as tf
import utils
Explanation: Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient: you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This is a huge waste of computation.
To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding the index of the "on" input unit.
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.
Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
End of explanation
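As a quick aside (not part of the original notebook), here is a minimal NumPy sketch of the "lookup instead of multiply" idea described above; the names demo_embed and one_hot are just illustrative.
import numpy as np
demo_embed = np.random.rand(5, 3)        # a tiny "embedding matrix": 5 words, 3 hidden units
one_hot = np.array([0, 0, 1, 0, 0])      # one-hot vector for the word with id 2
print(np.allclose(one_hot @ demo_embed, demo_embed[2]))   # True: the multiply just selects row 2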
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
End of explanation
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
Explanation: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
End of explanation
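For reference, here is a minimal sketch of the kind of preprocessing described above; the real utils.preprocess may differ in its details, and simple_preprocess is a hypothetical name used only for this illustration.
from collections import Counter

def simple_preprocess(text):
    # replace punctuation with tokens, then drop words appearing five times or fewer
    text = text.lower().replace('.', ' <PERIOD> ').replace(',', ' <COMMA> ')
    tokens = text.split()
    counts = Counter(tokens)
    return [t for t in tokens if counts[t] > 5]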
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
Explanation: And here I'm creating dictionaries to convert words to integers and back again, from integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0, the next most frequent is 1, and so on. The words are converted to integers and stored in the list int_words.
End of explanation
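A rough sketch of what create_lookup_tables is described as doing (the actual implementation lives in utils, so treat this as an assumption): ids are assigned by descending word frequency.
from collections import Counter

def simple_lookup_tables(words):
    counts = Counter(words)
    sorted_vocab = sorted(counts, key=counts.get, reverse=True)
    int_to_word = {ii: word for ii, word in enumerate(sorted_vocab)}
    word_to_int = {word: ii for ii, word in int_to_word.items()}
    return word_to_int, int_to_word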
from collections import Counter
import random
threshold = 1e-5
word_counts = Counter(int_words)
total_count = len(int_words)
freqs = {word: count/total_count for word, count in word_counts.items()}
p_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts}
train_words = [word for word in int_words if random.random() < (1 - p_drop[word])]
len(train_words)
Explanation: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. Check out my solution to see how I did it.
Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probability $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.
End of explanation
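One optional sanity check on the subsampling above (not part of the original notebook): the most frequent token should be far less common in train_words than it was in int_words.
from collections import Counter
top_id, top_count = word_counts.most_common(1)[0]
print(int_to_vocab[top_id], top_count, Counter(train_words)[top_id])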
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
R = np.random.randint(1, window_size+1)
start = idx - R if (idx - R) > 0 else 0
stop = idx + R
target_words = set(words[start:idx] + words[idx+1:stop+1])
return list(target_words)
Explanation: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.:
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
End of explanation
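An optional quick check of get_target on a toy sequence: with idx=5 and window_size=2 it should return between two and four of the surrounding integers, and it never runs past the list edges.
print(get_target(list(range(10)), idx=5, window_size=2))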
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function, by the way, which helps save memory.
End of explanation
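An optional way to eyeball one batch from the generator: every input word is repeated once per target drawn from its window, so the two lists always have the same length (demo_x and demo_y are throwaway names).
demo_x, demo_y = next(get_batches(train_words, 4, window_size=5))
print(demo_x)
print(demo_y)
print(len(demo_x) == len(demo_y))   # True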
train_graph = tf.Graph()
with train_graph.as_default():
inputs = tf.placeholder(tf.int32, [None], name='inputs')
labels = tf.placeholder(tf.int32, [None, None], name='labels')
Explanation: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as one-hot encoded vectors. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.
End of explanation
n_vocab = len(int_to_vocab)
n_embedding = 200 # Number of embedding features
with train_graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs)
Explanation: Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise: TensorFlow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with uniform random numbers between -1 and 1 using tf.random_uniform.
End of explanation
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(n_vocab))
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b,
labels, embed,
n_sampled, n_vocab)
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
Explanation: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.
End of explanation
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from each of the ranges (0,100) and (1000,1100); lower ids imply more frequent words
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
Explanation: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and a few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
End of explanation
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
Explanation: Restore the trained network if you need to:
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
Explanation: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.
End of explanation |
1,285 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Temp
Step1: Remove corrupted h5 files on the repo and remove all ZIP-files on the data repo
Step2: Recreate all the ZIP folders
Load the existing h5 files and use them to create the zip files; we can use the coverage file information to do so, as explained in the documentation of the datamover
https | Python Code:
from file_transfer.creds import URL, LOGIN, PASSWORD
btos = dm.BaltradToS3(URL, LOGIN, PASSWORD, "lw-enram", profile_name="lw-enram")
btos.transfer(name_match="_vp_", overwrite=True,
limit=5, verbose=True)
btos.transferred
s3handle.create_zip_version(btos.transferred)
import shutil
shutil.rmtree(os.path.join(".", "cz"))
shutil.rmtree(os.path.join(".", "cz", "brd", "2017", "09"))
os.removedirs(os.path.join(".", "cz", "brd", "2017"))
Explanation: Temp
End of explanation
corrupted = []
country_exclude = ["be", "ch", "cz", "dk", "es"]
sat_exclude = ["boo", "drs", "eis", "emd", "ess", "fld", "hnr", "mem", "neu", "nhb", "anj", "ika", "kes"]
for j, file in enumerate(s3handle.bucket.objects.all()):
if file.key.endswith(".h5") and not file.key.split("/")[0] in country_exclude and not file.key.split("/")[1] in sat_exclude:
# download file
s3handle.download_file(file.key)
# check if it can be read
try:
h5py.File(file.key, mode="r")
except:
corrupted.append(file.key)
file.delete()
os.remove(file.key)
elif file.key.endswith(".zip"):
file.delete()
corrupted[-10:]
Explanation: Remove corrupted h5 files on the repo and remove all ZIP-files on the data repo
End of explanation
s3enram = dm.S3EnramHandler("lw-enram", profile_name="lw-enram")
s3enram.create_zip_version(s3enram.count_enram_coverage(level="month"))
Explanation: Recreate all the ZIP folders
Load the existing h5 files and use them to create the zip files; we can use the coverage file information to do so, as explained in the documentation of the datamover
https://github.com/enram/data-repository/blob/master/file_transfer/tutorial_datamover.ipynb
End of explanation |
1,286 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Populations
Step1: Let's create a population. Agent creation is here dealt with automatically. Still, it is possible to manually add or remove agents (Hence the IDs of the agents), what will be seen later. | Python Code:
import naminggamesal.ngpop as ngpop
Explanation: Populations
End of explanation
pop_cfg={
'voc_cfg':{
'voc_type':'matrix',
'M':5,
'W':10
},
'strat_cfg':{
'strat_type':'naive',
'vu_cfg':{'vu_type':'BLIS_epirob'}
},
'interact_cfg':{
'interact_type':'speakerschoice'
},
'nbagent':5
}
testpop=ngpop.Population(**pop_cfg)
testpop
print(testpop)
print
testpop.visual(vtype="agents")
testpop.play_game(100)
print(testpop)
testpop.visual()
testpop.visual(vtype="agents")
testpop._agentlist[0]._vocabulary._content
testpop._agentlist[0]._vocabulary.add(0,0,0.5)
Explanation: Let's create a population. Agent creation is dealt with automatically here. Still, it is possible to manually add or remove agents (hence the IDs of the agents), as will be seen later.
End of explanation |
1,287 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data science pipeline
Step1: Primary object types
Step2: What are the features?
* TV
Step3: Linear regression
Pros
Step4: Splitting X and y into training and testing sets
Step5: Linear regression in scikit-learn
Step6: Interpreting model coefficients
Step7: <center>y=2.88+0.0466×TV+0.179×Radio+0.00345×Newspaper</center>
How do we interpret the TV coefficient (0.0466)?
For a given amount of Radio and Newspaper ad spending, a "unit" increase in TV ad spending is associated with a 0.0466 "unit" increase in Sales.
Or more clearly
Step8: We need an evaluation metric in order to compare our predictions with the actual values!
Model evaluation metrics for regression
Evaluation metrics for classification problems, such as accuracy, are not useful for regression problems. Instead, we need evaluation metrics designed for comparing continuous values.
Let's create some example numeric predictions, and calculate three common evaluation metrics for regression problems
Step9: Mean Absolute Error (MAE) is the mean of the absolute value of the errors
Step10: Mean Squared Error (MSE) is the mean of the squared errors
Step11: Root Mean Squared Error (RMSE) is the square root of the mean of the squared errors
Step12: Comparing these metrics
Step13: Feature selection
Does Newspaper "belong" in our model? In other words, does it improve the quality of our predictions?
Let's remove it from the model and check the RMSE!
Step14: The RMSE decreased when we removed Newspaper from the model. (Error is something we want to minimize, so a lower number for RMSE is better.) Thus, it is unlikely that this feature is useful for predicting Sales, and should be removed from the model. | Python Code:
# conventional way to import pandas
import pandas as pd
# read CSV file directly from a URL and save the results
data = pd.read_csv('http://www-bcf.usc.edu/~gareth/ISL/Advertising.csv', index_col=0)
# display the first 5 rows
data.head()
Explanation: Data science pipeline: pandas, seaborn, scikit-learn
Agenda
How do I use the pandas library to read data into Python?
How do I use the seaborn library to visualize data?
What is linear regression, and how does it work?
How do I train and interpret a linear regression model in scikit-learn?
What are some evaluation metrics for regression problems?
How do I choose which features to include in my model?
Types of supervised learning
Classification: Predict a categorical response
Regression: Predict a continuous response
Reading data using pandas
Pandas: popular Python library for data exploration, manipulation, and analysis
* Anaconda users: pandas is already installed
* Other users: installation instructions
End of explanation
# display the last 5 rows
data.tail()
# check the shape of the DataFrame (rows, columns)
data.shape
Explanation: Primary object types:
* DataFrame: rows and columns (like a spreadsheet)
* Series: a single column
End of explanation
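A quick illustration with the DataFrame loaded above (not part of the original notebook): selecting a single column returns a Series.
print(type(data))           # DataFrame: rows and columns
print(type(data['Sales']))  # Series: a single column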
# conventional way to import seaborn
import seaborn as sns
# allow plots to appear within the notebook
%matplotlib inline
# visualize the relationship between the features and the response using scatterplots
sns.pairplot(data, x_vars=['TV','Radio','Newspaper'], y_vars='Sales', size=7, aspect=0.7, kind='reg')
Explanation: What are the features?
* TV: advertising dollars spent on TV for a single product in a given market (in thousands of dollars)
* Radio: advertising dollars spent on Radio
* Newspaper: advertising dollars spent on Newspaper
What is the response?
* Sales: sales of a single product in a given market (in thousands of items)
What else do we know?
* Because the response variable is continuous, this is a regression problem.
* There are 200 observations (represented by the rows), and each observation is a single market.
Visualizing data using seaborn
Seaborn: Python library for statistical data visualization built on top of Matplotlib
* Anaconda users: run conda install seaborn from the command line
* Other users: installation instructions
End of explanation
# create a Python list of feature names
feature_cols = ['TV', 'Radio', 'Newspaper']
# use the list to select a subset of the original DataFrame
X = data[feature_cols]
# equivalent command to do this in one line
X = data[['TV', 'Radio', 'Newspaper']]
# print the first 5 rows
X.head()
# check the type and shape of X
print type(X)
print X.shape
# select a Series from the DataFrame
y = data['Sales']
# equivalent command that works if there are no spaces in the column name
y = data.Sales
# print the first 5 values
y.head()
# check the type and shape of y
print type(y)
print y.shape
Explanation: Linear regression
Pros: fast, no tuning required, highly interpretable, well-understood
Cons: unlikely to produce the best predictive accuracy (presumes a linear relationship between the features and response)
Form of linear regression
$$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + ... + \beta_n x_n$$
* y is the response
* β0 is the intercept
* β1 is the coefficient for x1 (the first feature)
* βn is the coefficient for xn (the nth feature)
In this case:
$$y = \beta_0 + \beta_1 \times TV + \beta_2 \times Radio + \beta_3 \times Newspaper$$
The β values are called the model coefficients. These values are "learned" during the model fitting step using the "least squares" criterion. Then, the fitted model can be used to make predictions!
Preparing X and y using pandas
scikit-learn expects X (feature matrix) and y (response vector) to be NumPy arrays.
However, pandas is built on top of NumPy.
Thus, X can be a pandas DataFrame and y can be a pandas Series!
End of explanation
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# default split is 75% for training and 25% for testing
print X_train.shape
print y_train.shape
print X_test.shape
print y_test.shape
Explanation: Splitting X and y into training and testing sets
End of explanation
# import model
from sklearn.linear_model import LinearRegression
# instantiate
linreg = LinearRegression()
# fit the model to the training data (learn the coefficients)
linreg.fit(X_train, y_train)
Explanation: Linear regression in scikit-learn
End of explanation
# print the intercept and coefficients
print linreg.intercept_
print linreg.coef_
# pair the feature names with the coefficients
zip(feature_cols, linreg.coef_)
Explanation: Interpreting model coefficients
End of explanation
# make predictions on the testing set
y_pred = linreg.predict(X_test)
Explanation: $$y = 2.88 + 0.0466 \times TV + 0.179 \times Radio + 0.00345 \times Newspaper$$
How do we interpret the TV coefficient (0.0466)?
For a given amount of Radio and Newspaper ad spending, a "unit" increase in TV ad spending is associated with a 0.0466 "unit" increase in Sales.
Or more clearly: For a given amount of Radio and Newspaper ad spending, an additional \$1,000 spent on TV ads is associated with an increase in sales of 46.6 items.
Important notes:
* This is a statement of association, not causation.
* If an increase in TV ad spending was associated with a decrease in sales, β1 would be negative.
Making predictions
End of explanation
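As an optional check on the interpretation above, the same prediction can be written out by hand from the learned intercept and coefficients; it should match the first value of y_pred (numpy is imported here only for the dot product, and manual_pred is a throwaway name).
import numpy as np
manual_pred = linreg.intercept_ + np.dot(X_test.iloc[0], linreg.coef_)
print(manual_pred)
print(y_pred[0])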
# define true and predicted response values
true = [100, 50, 30, 20]
pred = [90, 50, 50, 30]
Explanation: We need an evaluation metric in order to compare our predictions with the actual values!
Model evaluation metrics for regression
Evaluation metrics for classification problems, such as accuracy, are not useful for regression problems. Instead, we need evaluation metrics designed for comparing continuous values.
Let's create some example numeric predictions, and calculate three common evaluation metrics for regression problems:
End of explanation
# calculate MAE by hand
print (10 + 0 + 20 + 10)/4.
# calculate MAE using scikit-learn
from sklearn import metrics
print metrics.mean_absolute_error(true, pred)
Explanation: Mean Absolute Error (MAE) is the mean of the absolute value of the errors:
$$\frac{1}{n}\sum_{i=1}^n|y_i-\hat{y}_i|$$
End of explanation
# calculate MSE by hand
print (10**2 + 0**2 + 20**2 + 10**2)/4.
# calculate MSE using scikit-learn
print metrics.mean_squared_error(true, pred)
Explanation: Mean Squared Error (MSE) is the mean of the squared errors:
$$\frac{1}{n}\sum_{i=1}^n(y_i-\hat{y}_i)^2$$
End of explanation
# calculate RMSE by hand
import numpy as np
print np.sqrt((10**2 + 0**2 + 20**2 + 10**2)/4.)
# calculate RMSE using scikit-learn
print np.sqrt(metrics.mean_squared_error(true, pred))
Explanation: Root Mean Squared Error (RMSE) is the square root of the mean of the squared errors:
$$\sqrt{\frac{1}{n}\sum_{i=1}^n(y_i-\hat{y}_i)^2}$$
End of explanation
print np.sqrt(metrics.mean_squared_error(y_test, y_pred))
Explanation: Comparing these metrics:
MAE is the easiest to understand, because it's the average error.
MSE is more popular than MAE, because MSE "punishes" larger errors.
RMSE is even more popular than MSE, because RMSE is interpretable in the "y" units.
Computing the RMSE for our Sales predictions
End of explanation
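A small, optional illustration of the "punishes larger errors" point: the two prediction sets below have the same MAE, but the one with a single large error has a much larger RMSE (true2 and the pred_* names are made up for this example).
true2 = [100, 50, 30, 20]
pred_four_small_errors = [90, 60, 40, 10]   # four errors of 10
pred_one_big_error = [60, 50, 30, 20]       # one error of 40
print(metrics.mean_absolute_error(true2, pred_four_small_errors))           # 10.0
print(np.sqrt(metrics.mean_squared_error(true2, pred_four_small_errors)))   # 10.0
print(metrics.mean_absolute_error(true2, pred_one_big_error))               # 10.0
print(np.sqrt(metrics.mean_squared_error(true2, pred_one_big_error)))       # 20.0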
# create a Python list of feature names
feature_cols = ['TV', 'Radio']
# use the list to select a subset of the original DataFrame
X = data[feature_cols]
# select a Series from the DataFrame
y = data.Sales
# split into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# fit the model to the training data (learn the coefficients)
linreg.fit(X_train, y_train)
# make predictions on the testing set
y_pred = linreg.predict(X_test)
# compute the RMSE of our predictions
print np.sqrt(metrics.mean_squared_error(y_test, y_pred))
Explanation: Feature selection
Does Newspaper "belong" in our model? In other words, does it improve the quality of our predictions?
Let's remove it from the model and check the RMSE!
End of explanation
from IPython.core.display import HTML
def css_styling():
styles = open("styles/custom.css", "r").read()
return HTML(styles)
css_styling()
Explanation: The RMSE decreased when we removed Newspaper from the model. (Error is something we want to minimize, so a lower number for RMSE is better.) Thus, it is unlikely that this feature is useful for predicting Sales, and should be removed from the model.
End of explanation |
1,288 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing
Step8: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step10: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
Step15: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step18: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the begining of each batch.
Step21: Encoding
Implement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn().
Step24: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
Step27: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
Step30: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note
Step33: Build the Neural Network
Apply the functions you implemented above to
Step34: Neural Network Training
Hyperparameters
Tune the following parameters
Step36: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.
Step41: Save Parameters
Save the batch_size and save_path parameters for inference.
Step43: Checkpoint
Step46: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
Step48: Translate
This will translate translate_sentence from English to French. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
source_sentences = [sentence.split() for sentence in source_text.split("\n")]
target_sentences = [sentence.split() for sentence in target_text.split("\n")]
source_id_text = []
target_id_text = []
for sentence in source_sentences:
source_id_text.append([source_vocab_to_int[word] for word in sentence])
for sentence in target_sentences:
target_sentence = [target_vocab_to_int[word] for word in sentence]
target_sentence.extend([target_vocab_to_int['<EOS>']])
target_id_text.append(target_sentence)
return source_id_text, target_id_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
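An optional, tiny illustration of text_to_ids with made-up vocabularies (the demo_* names are hypothetical and not part of the project data):
demo_source_vocab = {'hello': 4, 'world': 5}
demo_target_vocab = {'bonjour': 7, 'monde': 8, '<EOS>': 1}
print(text_to_ids('hello world', 'bonjour monde', demo_source_vocab, demo_target_vocab))
# ([[4, 5]], [[7, 8, 1]])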
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, [None, None], name="input")
targets = tf.placeholder(tf.int32, [None, None], name="targets")
learning_rate = tf.placeholder(tf.float32)
keep_probability = tf.placeholder(tf.float32, name="keep_prob")
return (inputs, targets, learning_rate, keep_probability)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoding_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Return the placeholders in the following tuple (Input, Targets, Learning Rate, Keep Probability)
End of explanation
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for decoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
target_data_strip = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
target_input_data = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int["<GO>"]), target_data_strip], 1)
return target_input_data
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_decoding_input(process_decoding_input)
Explanation: Process Decoding Input
Implement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.
End of explanation
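An optional illustration of what the function does, using a small constant batch and a hypothetical <GO> id of 1 (a TF 1.x session is assumed): the last id of each sequence is dropped and the <GO> id is prepended.
demo_targets = tf.constant([[10, 11, 12], [20, 21, 22]])
demo_processed = process_decoding_input(demo_targets, {'<GO>': 1}, 2)
with tf.Session() as demo_sess:
    print(demo_sess.run(demo_processed))   # [[ 1 10 11] [ 1 20 21]]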
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
# TODO: Implement Function
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
_, rnn_state = tf.nn.dynamic_rnn(cell, rnn_inputs, dtype=tf.float32)
return rnn_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create an Encoder RNN layer using tf.nn.dynamic_rnn().
End of explanation
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
# TODO: Implement Function
# Training Decoder
train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)
# Apply output function
train_logits = output_fn(train_pred)
return train_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.
End of explanation
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: Maximum length of the decoded sequence
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
# TODO: Implement Function
infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(output_fn, encoder_state,
dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size)
inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope)
return inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().
End of explanation
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
with tf.variable_scope("decoding") as decoding_scope:
output_fn = lambda x: tf.contrib.layers.fully_connected(x, vocab_size, None, scope=decoding_scope)
dec_lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
dec_drop = tf.contrib.rnn.DropoutWrapper(dec_lstm, output_keep_prob=keep_prob)
dec_cell = tf.contrib.rnn.MultiRNNCell([dec_drop] * num_layers)
train_logits = decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob)
decoding_scope.reuse_variables()
inference_logits = decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'],
target_vocab_to_int['<EOS>'], sequence_length, vocab_size, decoding_scope, output_fn, keep_prob)
return train_logits, inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Create RNN cell for decoding using rnn_size and num_layers.
Create the output function using lambda to transform its input, logits, to class logits.
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
rnn_inputs = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)
rnn_state = encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob)
target_input_data = process_decoding_input(target_data, target_vocab_to_int, batch_size)
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, target_input_data)
train_logits, test_logits = decoding_layer(dec_embed_input, dec_embeddings, rnn_state, target_vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob)
return (train_logits, test_logits)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Apply embedding to the input data for the encoder.
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).
Process target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.
Apply embedding to the target data for the decoder.
Decode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).
End of explanation
# Number of Epochs
epochs = 5
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 256
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 200
decoding_embedding_size = 200
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.7
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import time
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target_batch,
[(0,0),(0,max_seq - target_batch.shape[1]), (0,0)],
'constant')
if max_seq - batch_train_logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
sentence_id = [vocab_to_int.get(word.lower(), vocab_to_int["<UNK>"]) for word in sentence.split()]
return sentence_id
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
End of explanation
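An optional quick check with a hypothetical tiny vocabulary (demo_vocab is made up): words are lowercased, and anything outside the vocabulary maps to the <UNK> id.
demo_vocab = {'he': 0, 'saw': 1, '<UNK>': 2}
print(sentence_to_seq('He saw zebras', demo_vocab))   # [0, 1, 2]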
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation |
1,289 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Multiple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bcc', 'bcc-csm2-hr', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: BCC
Source ID: BCC-CSM2-HR
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:39
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
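The rest of the notebook follows a single pattern: each cell first pins a property with DOC.set_id(...) and then records one or more values with DOC.set_value(...). A hypothetical illustration of that pattern (the property id is one used below; the value shown is a placeholder, not actual BCC-CSM2-HR information):
# Illustrative only -- replace the placeholder text with the real model description.
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
DOC.set_value("Placeholder overview text describing the sea ice component.")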
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variables in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
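For a property with cardinality 1.N such as this one, several of the listed choices may apply. A sketch of how that might be recorded, assuming DOC.set_value can simply be called once per selected choice (the selection itself is a placeholder, not actual BCC-CSM2-HR information):
# Illustrative only -- the selected choices below are placeholders.
DOC.set_value("Sea ice temperature")
DOC.set_value("Sea ice concentration")
DOC.set_value("Sea ice thickness")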
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
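If a constant freezing point were used, the value would be recorded as a plain float. For example (illustrative only, not the value used by this model), a commonly quoted constant seawater freezing point is about -1.8 deg C:
# Illustrative only -- not the actual value used by BCC-CSM2-HR.
DOC.set_value(-1.8)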
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
Which simulations had tuning applied, e.g. all, not historical, only pi-control?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Multiple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD but an assumed distribution, and fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
1,290 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Part 5
Step1: <a name="PbGeneral">General problem statement</a>
Objective
The main objective of automatic control, or control theory, is to impose a specific dynamic behaviour on a system by acting on the available control inputs
Step2: <u>Simulation of the system under open-loop control
Step3: The proposed open-loop control does fill the reservoir up to the desired volume.
However, this only works when there are no disturbances and the system is perfectly known. In a more realistic, i.e. less ideal, setting, the system description must account for the fact that it may rain (an additional input to the system), that people will use water from this reservoir (an additional output), and that imperfections in the system mean the maximum flow rate is not exactly equal to $d_{\text{max}}$ and may even vary slightly over time. In those cases, applying the open-loop control will not lead exactly to the desired volume. This is why closed-loop controls are used, as they can be robust to disturbances, uncertainties and modelling errors.
Indeed, denoting by $p$ the inflow rate due to precipitation and by $c$ the outflow rate due to the farmers' consumption, the model now reads
Step4: Closed-loop control, or feedback
Example: filling a reservoir for irrigation
We still want the reservoir to be kept at a constant volume $V_c$, to meet the water needs of the farmers who draw from this reservoir. To fill the reservoir, we still have access to a groundwater table from which we can pump. However, we now want to take precipitation into account, as a second source (input) contributing to filling the reservoir. The amount of rainwater that falls can neither be controlled nor predicted, whereas the amount of water pumped from the groundwater table can be controlled via the pumping flow rate $d$
Step5: Some classical closed-loop control laws
Let $y^m(t)$ denote the measurement of $y(t)$ at time $t$. Here are two control laws that are classically applied
Step6: Let $Q$ denote the feed flow rate of the reactor (equal to the withdrawal flow rate) and $V$ the constant volume of the reactor.
A model of this system is given by
Step7: We therefore have $3$ equilibrium points, which are
Step8: In conclusion, we have shown that the only equilibrium points reachable with an open-loop control are $E_0$ and $E_1$. Since $0 ≤ S_1 ≤ \sqrt{K_SK_I}$, the only values $S^\ast$ reachable with an open-loop control are
Step9: Test 1
Step10: We therefore observe that, if a reachable set-point value $S^\ast$ is chosen, the open-loop control law does drive the sugar concentration towards this set-point value.
However, this is no longer the case if one chooses $S^\ast>\sqrt{K_SK_I}$
Let us now examine whether this control law is robust to disturbances.
Suppose that, for physical reasons, there is an error between the flow rate we wish to apply (which is given by the control law) and the flow rate that is actually applied.
$$ Q_{real}=Q_{calc}(1+\delta)$$
Test 2
Step11: Limits of open-loop control
Here we reach the limits of open-loop control:
open-loop control can only be used for certain set-point values
this control is not robust to disturbances
Proportional action
We will now test a closed-loop control law made up of a term proportional to the error, i.e. a control law of the form
Step12: Test
Step13: Compared with the open-loop control, we observe that with the closed-loop control including a proportional term, we can reach values $S^\ast$ of $S$ that were not reachable in open loop.
Let us now test whether this control is robust to disturbances.
Test
Step14: We observe that the proportional closed-loop control is not robust to disturbances.
Let us now add an integral term to this control law.
Integral action
We will therefore now test a closed-loop control law made up of a term proportional to the error and a term proportional to the integral of the error, i.e. a control law of the form
Step15: Test
Step16: Test
Step17: We note that adding the integral term to the control cancels the error that was made with the open-loop and proportional closed-loop control laws.
Proportional-Integral control laws are therefore robust to disturbances!
Derivative action
A derivative term can also be added to the closed-loop control law. We then obtain the proportional-integral-derivative (PID) control law, with a term proportional to the error, a term proportional to the integral of the error and a term proportional to the derivative of the error, i.e. a control law of the form
Step18: Test
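For reference, a textbook PID law combines the three actions discussed above. In generic notation (the gains $K_p$, $K_i$, $K_d$, the nominal term $u_0$ and the error definition are illustrative, not the specific tuning used in this notebook):
$$ u(t) = u_0 + K_p\,e(t) + K_i \int_{t_0}^{t} e(\tau)\,d\tau + K_d\,\frac{de(t)}{dt}, \qquad e(t) = y^\ast - y^m(t). $$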
# -*- coding: utf-8 -*-
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
To display the python code, click on the button:
<button onclick="javascript:code_toggle()">Show python code</button>
''')
import numpy as np
from matplotlib import pyplot as plt
from ipywidgets import interact, fixed
#from IPython.html.widgets import interact, fixed
import scipy.integrate as scint
from matplotlib import patches as pat
Explanation: Part 5: Control of dynamical systems I
General problem statement
Modelling dynamical systems: state-space representations
Open-loop and closed-loop control
Proportional-Integral-Derivative (PID) controller
The parts on the PID controller are taken from the book edited in 2008 by Denis Dochain and entitled "Automatic Control of Bioprocesses".
End of explanation
# Control of a reservoir for irrigation
# -----------------------------------------
plt.close('all') # close all figures
# ** variables used in the code:
# t is the current time
# T is the length of time during which we pump from the groundwater table to fill the reservoir
# dmax is the maximum value of the flow rate
# base is the area of the square base of the reservoir
# cote is the side length of the base
# precip and conso are variables holding precipitation data and data on the water consumed by
# the farmers: these variables are not used for the moment
# x is the state variable of the system, i.e. here the water height in the reservoir
# Vc is the volume we want to reach in the reservoir
# reservoir model
def reservoir(x,t,T,dmax,base,precip,conso,Vc):
# le modèle du réservoir est donné par dx/dt = u/base
if np.fix(t)>len(precip)-1:
dx = fonction_u(t,T,dmax,base,Vc,precip,conso,x)/base
else :
dx = (fonction_u(t,T,dmax,base,Vc,precip,conso,x[0])+precip[int(np.fix(t))]+conso[int(np.fix(t))])/base
return dx
def remplissage(pluie,arrosage):
# pluie = 1 si on prend en compte les précipitations, 0 sinon
# arrosage = 1 si on prend en compte la consommation des agriculteurs, 0 sinon
# paramètres du modèles
dmax = 2.; cote = 3.; base = cote**2; Vmax = 100; Vc = 90; T = Vc/dmax
# Prise en compte des précipitations pendant 1 an
if pluie == 0: # si on ne considère pas les précipitations
precip = np.zeros(365)
tmax = min(T*1.5,365)
nblinefig = 2
else: # si on considère les précipitations, on va charger un fichier de données
precip0 = np.loadtxt('data/precipitations.txt') # données en mm par jour (par m² de terrain)
precip = 1e-3*precip0*base # conversion des données en m³ par jour
# Prise en compte des arrosages
conso = np.zeros(len(precip))
if arrosage == 1:
# ajout de periodes dans l'année pendant lesquelles les agriculteurs utilisent de l'eau du réservoir
conso[10:15] = -1
conso[40:45] = -1
conso[70:75] = -1
conso[100:105] = -1
conso[130:135] = -1
# integration numerique de l'EDO
if pluie + arrosage >0:
tmax=365 # temps maximal de simulation
nblinefig = 4 # nombre de figure pour l'affichage
temps = np.linspace(0,tmax,2000) # vecteur temps
x0 = 0 # condition initiale
h = scint.odeint(reservoir,x0,temps,args=(T,dmax,base,precip,conso,Vc)) # integration de l'équation
u = fonction_u(temps,T,dmax,base,Vc,precip,conso,h) # re-calcul de la commande qui a été appliquée
# tracé des solutions
plt.figure(figsize = (10, 7))
plt.subplots_adjust(hspace=0.4,wspace=0.4)
plt.subplot2grid((nblinefig,3),(0,0),rowspan=nblinefig)
axes=plt.gca()
axes.add_artist(pat.Rectangle((0, 0), cote, h[-1], color = 'blue'))
plt.ylim([0,Vmax/base])
plt.xlim([0,cote])
plt.subplot2grid((nblinefig,3),(0,1),colspan=2)
plt.plot(temps, h, color="red", linewidth="1")
plt.plot(np.array([0,tmax]), np.array([Vc/base,Vc/base]), color="black", linewidth="1")
plt.xlim([0,tmax])
plt.ylim([0,Vmax/base])
plt.title("Hauteur d'eau (m)")
plt.subplot2grid((nblinefig,3),(1,1),colspan=2)
plt.xlim([0,tmax])
plt.plot(temps, u, color="red", linewidth="1")
plt.ylim([-0.2,dmax*1.1])
plt.title("débit d'entrée ($m^3$ par jour)")
if pluie + arrosage == 0:
plt.xlabel("Temps (jour)")
plt.xlim([0,tmax])
else:
plt.subplot2grid((nblinefig,3),(2,1),colspan=2)
plt.plot(np.arange(365),precip0,color="red", linewidth="1")
plt.xlim([0,tmax])
plt.ylim([0,60])
plt.title("Précipitations (mm) par jour")
plt.subplot2grid((nblinefig,3),(3,1),colspan=2)
plt.plot(np.arange(365),conso,color="red", linewidth="1")
plt.xlim([0,tmax])
plt.ylim([-1.2,0])
plt.xlabel("Temps (jour)")
plt.title("Consommation en $m^3$ par jour")
plt.show()
# Open-loop control law
def fonction_u(t,T,dmax,base,Vc,precip,conso,x):
# Si t<T, la commande vaut dmax, sinon elle vaut 0
if type(t)==float: # cas où l'entrée t de la fonction est un scalaire
valu = (t<=T)*dmax
else: # cas où l'entrée t de la fonction est un vecteur
valu = np.zeros(len(t))
valu[t<=T] = dmax
return valu
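# Example use (illustrative, not part of the original notebook): simulate the ideal case with
# neither rain nor consumption, then the more realistic case with both. The second call assumes
# the data file 'data/precipitations.txt' is available, as in the code above.
remplissage(0, 0)   # open-loop filling, no disturbances
remplissage(1, 1)   # with precipitation and the farmers' consumption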
Explanation: <a name="PbGeneral">General problem statement</a>
Objective
The main objective of automatic control, or control theory, is to impose a specific dynamic behaviour on a system by acting on the available control inputs: the system is then said to be commanded, or controlled.
<u>Examples of desired behaviours</u>:
- bringing a quantity to a desired value (regulation). Example: heating
- stabilisation or destabilisation of an unstable/stable system. Example: fighting the proliferation of insect pests
- tracking a trajectory. Example: missile
Besides the objective to be reached, certain constraints must also be respected.
<u>Examples of constraints</u>
- saturation of the control input. Example: maximum allowed flow rate
- fast response time
- few oscillations
- minimisation of energy consumption
Input/output behaviour
When we wish to control a system, we are in fact interested in its input/output behaviour, which is generally represented as follows:
where:
- $u=(u_1,u_2,...,u_m)^T$ are the inputs (a vector), also called "actuators" or "controls"
- $y=(y_1,y_2,...,y_p)^T$ are the outputs (a vector)
Design of a control law
Controlling a system with inputs $u$ and outputs $y$ amounts to finding which inputs to apply to the system so that the outputs behave as desired.
The expression of $u$ is called the control law.
Many techniques exist for obtaining such control laws. Some of them are based on a model of the system.
<a name="RepEtat">Modelling dynamical systems: state-space representations</a>
The systems under study can be described by models of different types (ODEs, PDEs, stochastic models, ...) depending on the nature of the system itself, but also on the purpose of the modelling (analysis, simulation, control, etc.). Here we focus on dynamical systems that can be described by a finite number of first-order ordinary differential equations, i.e. so-called 'differential' systems, of the form:
\begin{equation}
\left\{
\begin{array}{rcl}
\dot{x_1} & = & f_1(x_1,x_2,...,x_n,u_1,u_2,...,u_m)\\
\dot{x_2} & = & f_2(x_1,x_2,...,x_n,u_1,u_2,...,u_m)\\
\vdots & = & \vdots \\
\dot{x_n} & = & f_n(x_1,x_2,...,x_n,u_1,u_2,...,u_m),
\end{array}
\right.
\end{equation}
where $\dot{x_i}$ denotes the derivative of the variable $x_i$ with respect to time $t$, also written $\frac{dx_i}{dt}$. We add to this system an initial condition $x_0=(x_1(t_0), x_2(t_0), ... , x_n(t_0))^T$, and the outputs are expressed by a relation of the following form:
\begin{equation}
\left\{
\begin{array}{rcl}
{y_1} & = & h_1(x_1,x_2,...,x_n,u_1,u_2,...,u_m)\\
{y_2} & = & h_2(x_1,x_2,...,x_n,u_1,u_2,...,u_m)\\
\vdots & = & \vdots \\
{y_p} & = & h_p(x_1,x_2,...,x_n,u_1,u_2,...,u_m).
\end{array}
\right.
\end{equation}
More concisely, for a given $x_0$, we write:
<a name="RepEtatNL"> \begin{equation}
(M_1)\,\left\{
\begin{array}{rcl}
\dot{x}&=&f(x,u)\\
y&=&h(x,u)
\end{array}
\right.
\end{equation}</a>
with $x=(x_1,x_2,...,x_n)^T,\,u=(u_1,u_2,...,u_m)^T,\,y=(y_1,y_2,...,y_p)^T,$ $f=(f_1,f_2,...,f_n)^T$ and $h=(h_1,h_2,...,h_p)^T$.
Figure 2. Differential system with inputs $u$ and outputs $y$
Terminology
* the model $(M_1)$ is called a state model or state-space representation.
* $x(t)\in \mathbb{R}^n$ is the vector of state variables: at a given time $t$, $x(t)$ fully characterises the system. Indeed, knowing $x(t)$ and the inputs $u$ over the interval $[t,T]$ is enough to determine, via the system $(M_1)$, the evolution of $x$ over $[t,T]$. One can say that the state of a system at time $t$ represents the minimal memory of the past needed to determine the future.
* $u(t)\in \mathbb{R}^m$ is the vector of inputs, also called controls. They represent the influence of the outside world on the system under consideration. It is through these variables that we will seek to control the system.
* $y(t)\in \mathbb{R}^p$ is the vector of outputs.
Remarks
* $x$, $y$ and $u$ are functions of time, with values in $\mathbb{R}^n$, $\mathbb{R}^p$ and $\mathbb{R}^m$ respectively. The time dependence will sometimes be written explicitly, and sometimes omitted.
* Given an initial condition $x_0=x(t_0)$, and for a given input $u$, the solution $x(t)$ for $t\geqslant t_0$ of $(M_1)$ is called a trajectory of the system. We assume that such a trajectory always exists, is unique and is continuous.
* A given system admits infinitely many state-space representations, each associated with a choice of state variables.
* The choice of state variables is arbitrary. However, depending on the purpose of the modelling, some choices may prove more judicious than others (keeping the physical meaning of the variables, for instance...).
* The minimal number of state variables needed to characterise the system is the order of the system.
* When the input $u$ of the system can be chosen freely, the system is said to be controlled, since the shape of the trajectory $x$ can be influenced through the choice of $u$. A short simulation sketch is given below.
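To make the notation concrete, here is a minimal, self-contained sketch (not taken from this notebook) of how a state model $\dot{x}=f(x,u)$ with output $y=h(x,u)$ can be simulated numerically; the two-state example system and the constant input are arbitrary illustrations.
import numpy as np
import scipy.integrate as scint

def f(x, t, u):
    # example dynamics: a damped oscillator driven by the input u (illustrative only)
    x1, x2 = x
    return [x2, -2.0 * x1 - 0.5 * x2 + u]

def h(x, u):
    # example output map: only the first state variable is measured
    return x[0]

u_const = 1.0                                  # constant input applied over the horizon
t = np.linspace(0, 20, 500)                    # time grid
x0 = [0.0, 0.0]                                # initial condition x(t0)
x = scint.odeint(f, x0, t, args=(u_const,))    # integrate the state equations
y = np.array([h(xi, u_const) for xi in x])     # compute the output trajectory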
Linear case
A particular class of state models are linear state models, i.e. models of the form:
<a name="RepEtatLin">
\begin{eqnarray}
(M_2)\,\left\{
\begin{array}{rcl}
\dot{x}&=&Ax+Bu,\\
y&=&Cx+Du,
\end{array}
\right.
\end{eqnarray}
</a>
with $A\in \mathbb{R}^{n\times n}$, $B \in \mathbb{R}^{n \times m}$, $C\in \mathbb{R}^{p \times n}$ and $D \in \mathbb{R}^{p \times m}$.
This is an important class of models, since it is simpler to study than nonlinear models and many tools (for analysis, control, etc.) have been developed for it.
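For instance (illustrative, not from the notebook), the two-state damped-oscillator example used in the sketch above is itself linear and can be written in the form $(M_2)$ with:
import numpy as np

# dx1/dt = x2, dx2/dt = -2 x1 - 0.5 x2 + u, y = x1
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])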
Nonlinear case
The tools developed for linear models can also be used in the nonlinear case, after linearisation around an equilibrium point of the model under consideration.
Indeed, let $\bar{x}$ be an equilibrium point of $(M_1)$ for a given $\bar{u}$ (i.e. a point $\bar{x}$ such that $f(\bar{x},\bar{u})=0$), and let $B$ be an open ball of $\mathbb{R}^n \times \mathbb{R}^p$ centred at $(\bar{x},\bar{u})$.
Assume that $f: \mathbb{R}^n \times \mathbb{R}^p \mapsto \mathbb{R}^n$ is of class $\mathcal{C}^1$ on $\bar{B}$, i.e. differentiable with respect to $x_i$ and $u_j$ for all $i=1:n,\, j=1:m$, with continuous partial derivatives. The first-order Taylor formula then gives, for all $(\delta x, \delta u)$ such that $(\bar{x}+\delta x,\bar{u}+\delta u) \in B$:
\begin{equation}
f(\bar{x}+\delta x,\bar{u}+\delta u)=f(\bar{x},\bar{u})+J_{f,x}(\bar{x},\bar{u})\delta x +J_{f,u}(\bar{x},\bar{u})\delta u+R_1(\delta x , \delta u),
\end{equation}
où $J_{f,x}(\bar{x},\bar{u})$ est la matrice jacobienne de $x \mapsto f(x,u)$ en $(\bar{x},\bar{u})$, c'est à dire la matrice:
\begin{equation}
J_{f,x}(\bar{x},\bar{u})=
\left[
\begin{array}{ccc}
\partial_{x_1}f_1(\bar{x},\bar{u}) & ... & \partial_{x_n}f_1(\bar{x},\bar{u})\\
\vdots & & \vdots \\
\partial_{x_1}f_n(\bar{x},\bar{u}) & ... & \partial_{x_n}f_n(\bar{x},\bar{u})
\end{array}
\right] \in \mathbb{R}^{n\times n},
\end{equation}
$J_{f,u}(\bar{x},\bar{u})$ est la matrice jacobienne de $u \mapsto f(x,u)$ en $(\bar{x},\bar{u})$, c'est à dire la matrice:
\begin{equation}
J_{f,u}(\bar{x},\bar{u})=
\left[
\begin{array}{ccc}
\partial_{u_1}f_1(\bar{x},\bar{u}) & ... & \partial_{u_m}f_1(\bar{x},\bar{u})\\
\vdots & & \vdots \\
\partial_{u_1}f_n(\bar{x},\bar{u}) & ... & \partial_{u_m}f_n(\bar{x},\bar{u})
\end{array}
\right]\in \mathbb{R}^{n\times m},
\end{equation}
et $R_1(\delta x , \delta u)$ est un 'reste', qui est négligeable devant $\lVert (\delta x,\delta u) \lVert$, ce que l'on note également $R_1(\delta x , \delta u)=o(\lVert (\delta x,\delta u) \lVert)$, c'est à dire:
\begin{equation}
\lim_{\lVert (\delta x,\delta u) \lVert \rightarrow 0} \frac{R_1(\delta x,\delta u)}{\lVert (\delta x,\delta u) \lVert}=0.
\end{equation}
Comme $f(\bar{x},\bar{u})=0$, et si l'on considère des $\delta x=x-\bar{x}$ et $\delta u=u-\bar{u}$ suffisamment petits, on obtient l'approximation linéaire du système, donnée par:
<a name="RE">
\begin{equation}
(M_3)\, \left\{
\begin{array}{rcl}
\dot{\delta x}&=&J_{f,x}(\bar{x},\bar{u})\delta x+J_{f,u}(\bar{x},\bar{u})\delta u,\\
y&=&h(\bar{x}+\delta x,\bar{u}+\delta u).
\end{array}
\right.
\end{equation}
</a>
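En pratique, lorsque les dérivées partielles de $f$ sont difficiles à calculer à la main, on peut approcher $J_{f,x}(\bar{x},\bar{u})$ et $J_{f,u}(\bar{x},\bar{u})$ par différences finies. Esquisse indicative (on suppose ici que xbar et ubar sont des np.array et que f(x,u) renvoie le second membre) :
import numpy as np

def jacobiennes(f, xbar, ubar, eps=1e-6):
    # approximation par différences finies des jacobiennes J_{f,x} et J_{f,u} en (xbar, ubar)
    n, m = len(xbar), len(ubar)
    fx = np.asarray(f(xbar, ubar), dtype=float)
    Jx, Ju = np.zeros((n, n)), np.zeros((n, m))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        Jx[:, i] = (np.asarray(f(xbar + dx, ubar)) - fx)/eps
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        Ju[:, j] = (np.asarray(f(xbar, ubar + du)) - fx)/eps
    return Jx, Ju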
<a name="BOBF">Commande boucle ouverte et boucle fermée</a>
Une fois que l'on s'est assuré de l'existence d'une commande $u$, on s'intéresse au problème de sa conception, de son design. Plusieurs stratégies de commande sont envisageables: on distingue notamment les lois de commande en boucle ouverte et en boucle fermée.
Commande en boucle ouverte
Définition
Une commande en boucle ouverte (ou contrôle en boucle ouverte) est une application $u:t\mapsto c(t)$ qui ne dépend pas de l'état ou de la sortie du système. Bien choisie, elle peut permettre de réaliser un objectif donné, à condition que le système soit bien connu, que le modèle soit fiable (voire parfait), et qu'il n'y ait aucun imprévu.
<a name="SystemeBO">Figure 6. Commande en boucle ouverte d'un système différentiel d'entrées $u$ et de sorties $y$</a>
Exemple d'un remplissage d'un réservoir pour l'irrigation
<u>Problématique</u>: On s'intéresse au problème du remplissage d'un réservoir pour l'irrigation. Le réservoir doit être maintenu à un volume constant $V_c$, pour satisfaire les besoins en eau des agriculteurs qui viennent se servir dans ce réservoir. Pour remplir le réservoir, on peut pomper dans la nappe phréatique avec un débit variable $d$. A l'instant $t_0$, le réservoir est vide, et on souhaite le remplir.
<u>Modèle</u>: En notant $h$ la hauteur d'eau dans le réservoir et $b$ l'aire de la base du réservoir, le volume s'écrit $V=b\times h$.
En prenant $h$ comme variable d'état du système, on peut écrire le modèle de réservoir suivant:
\begin{equation}
\dot{V}=d \Longleftrightarrow \dot{h}=\frac{d}{b}.
\end{equation}
<u>Commande boucle ouverte</u>: Pour remplir le réservoir supposé vide, il suffit d'ouvrir le robinet (à fond si l'on souhaite que le réservoir se remplisse le plus vite possible) le temps nécessaire pour que le réservoir soit rempli. Si $d$ est le débit volumique, c'est à dire le volume d'eau fourni par unité de temps, et si on appelle $d_{max}$ la valeur maximale du débit, alors il faudra pomper dans la nappe phréatique pendant $T=\frac{V_c}{d_{max}}$. La commande boucle ouverte $u=d$ à appliquer est donc donnée par $u=d_{max}\, \mathbb{1}_{[t_0,t_0+\frac{V_c}{d_{max}}]}$.
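À titre indicatif uniquement (la fonction remplissage et la loi de commande effectivement utilisées sont définies plus haut dans le notebook), une telle loi boucle ouverte pourrait s'écrire ainsi, en supposant la même signature que la loi boucle fermée vue plus loin :
# Esquisse hypothétique d'une loi de commande boucle ouverte pour le réservoir :
# débit maximal pendant T = Vc/dmax, puis robinet fermé (signature supposée)
def fonction_u_bo(t, T, dmax, base, Vc, precip, conso, x):
    return dmax if t <= T else 0.0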
End of explanation
# Remplissage d'un réservoir par une loi de commande boucle ouverte,
# sans prise en compte ni des précipitations ni de l'utilisation de l'eau par les agriculteurs
remplissage(0,0)
Explanation: <u>Simulation du système commandé en boucle ouverte:</u>
End of explanation
# Remplissage d'un réservoir par une loi de commande boucle ouverte,
# AVEC prise en compte des précipitations et de l'utilisation de l'eau par les agriculteurs
plt.close('all') # close all figure
remplissage(1,1)
Explanation: La commande boucle ouverte proposée permet de remplir le réservoir au volume souhaité.
Cependant, cela ne va marcher que dans le cas où il n'y a aucun aléa, et où le système est parfaitement connu. Si on se place dans un cas plus réaliste, c'est à dire moins idéal, il faudra prendre en compte dans le système le fait qu'il peut pleuvoir (entrée supplémentaire du système), que des personnes vont utiliser l'eau de ce réservoir (sortie supplémentaire) et qu'il y a des imperfections dans le système qui font que la valeur de débit maximal n'est pas exactement égale à $d_{\text{max}}$ et peut même varier légèrement dans le temps. Dans ces cas-là, l'application de la commande en boucle ouverte ne conduira pas exactement au volume souhaité. C'est pourquoi on utilise des commandes en boucle fermée qui peuvent être robustes aux aléas, incertitudes et erreurs de modélisation.
En effet, en notant $p$ le débit d'entrée dû aux précipitations et $c$ le débit de sortie dû à la consommation des agriculteurs, le modèle s'écrit maintenant:
\begin{equation}
\dot{h}=\frac{d+p-c}{b},
\end{equation}
où $d$ est le débit de pompage dans la nappe phréatique.
La simulation du système commandé en boucle ouverte donne maintenant les résultats suivants, qui montrent que l'on n'atteint pas le niveau d'eau désiré.
End of explanation
# Remplissage d'un réservoir par une loi de commande boucle fermée
# AVEC prise en compte des précipitations et de l'utilisation de l'eau par les agriculteurs
plt.close('all') # close all figure
def fonction_u(t,T,dmax,base,Vc,precip,conso,x):
coeffalpha=0.1
valu = base*coeffalpha*(Vc/base-x)
return valu
remplissage(1,1)
Explanation: Commande en boucle fermée ou rétroaction
Exemple du remplissage d'un réservoir pour l'irrigation
On souhaite toujours que le réservoir soit maintenu à un volume constant $V_c$, pour satisfaire les besoins en eau des agriculteurs qui viennent se servir dans ce réservoir. Pour remplir le réservoir, on dispose toujours d'un accès à une nappe phréatique dans laquelle on peut pomper. Cependant, on souhaite maintenant tenir compte des précipitations, qui sont une seconde source (entrée) participant au remplissage du réservoir. La quantité d'eau de pluie qui tombe ne peut être contrôlée et n'est pas prévisible alors que la quantité d'eau pompée dans les nappes phréatiques peut être contrôlée via le débit de pompage $d$: ce sera l'entrée, ou commande, $u$ de notre système.
Dans ce cas, on s'aperçoit qu'il est impossible de proposer "à l'avance" une stratégie de pompage dans la nappe phréatique, puisque cela dépendra de la quantité de pluie tombée et de la consommation des agriculteurs! Il faut alors passer par une rétroaction, ou commande en boucle fermée, comme illustré dans la suite.
Définition
Une commande en boucle fermée (ou contrôle en boucle fermée), aussi appelée rétroaction ou feedback, est une application $u:t\mapsto R(x(t),y(t))$ (voir figure 7) choisie pour imposer un comportement dynamique au système d'état $x$ :
* Si $u(t)=R(x(t))$ (respectivement $u(t)=R(y(t))$), on parlera de retour d'état (respectivement retour de sortie).
* Si $R$ admet une expression analytique, on parlera de retour ou commande statique.
* Si $R$ n'est déterminée que via la résolution d'une équation dynamique, par exemple de la forme différentielle $\dot{u}=r(x,y,u)$, on parlera de retour ou commande dynamique.
<a name="SystemeBF">Figure 7. Commande en boucle fermée (retour de sortie) d'un système différentiel d'entrées $u$ et de sorties $y$</a>
En utilisant une commande en boucle fermée, on décide d'appliquer une commande qui dépend de l'état courant (ou de la sortie) du système. On réajuste en fait la commande en fonction des informations que l'on récupère (via l'état ou la sortie) au cours du temps, ce qui permet de rectifier le comportement dans le cas d'évenements imprévus par exemple.
Souvent, on va avoir le schéma suivant:
* On se donne une valeur à atteindre $y^\ast$ qu'on appelle consigne. Cette consigne peut être variable dans le temps.
* A chaque pas de temps, on mesure la valeur de $y$.
* On compare $y$ et $y^\ast$.
* On ajuste la commande en fonction de la différence entre $y$ et $y^\ast$.
Remarque
La rétroaction est un phénomène qui se retrouve abondamment dans la nature, notamment lorsque des êtres vivants sont impliqués. Par exemple, la température du corps humain est constamment régulée: la transpiration (commande), qui dépend de la température extérieure (variable d'état mesurée), permet notamment cette régulation.
Le déplacement d'un animal comprend également des boucles de rétroaction: en fonction des informations visuelles perçues (variables d'état mesurées) , le système nerveux central va envoyer des signaux (commande) aux muscles pour aller dans la bonne direction.
Exemple: Remplissage d'un réservoir pour l'irrigation (suite)
Dans cet exemple, on comprend bien que la quantité d'eau que l'on va pomper dans la nappe phréatique va dépendre du volume d'eau déjà présent dans le réservoir. Si on prend comme variable d'état du système la hauteur d'eau dans le réservoir, on va donc faire un retour d'état.
On suppose que l'on peut mesurer la hauteur d'eau $h$ dans le réservoir et que l'on connaît l'aire de la base $b$ du réservoir, de sorte que le volume soit directement déduit de la hauteur d'eau mesurée par la relation $V=b\times h$. Pendant le remplissage, on va donc avoir une équation d'état de la forme:
\begin{equation}
\dot{h}=\frac{d+p-c}{b},
\end{equation}
où $d$ est le débit de pompage dans la nappe phréatique, $p$ est le débit d'entrée dû aux précipitations et $c$ est le débit de sortie dû à la consommation des agriculteurs.
Une loi de commande boucle fermée que l'on peut proposer est la suivante:
\begin{equation}
d=\alpha\,b(h_c- h)
\end{equation}
où $h_c=\frac{V_c}{b}$ et $\alpha>0$.
En supposant que la mesure de la hauteur soit exacte, on a alors une dynamique du système en boucle fermée donnée par:
\begin{equation}
\dot{h}=\alpha(h_c-h)+\frac{p-c}{b}.
\end{equation}
Si $p$ et $c$ sont ponctuelles et de valeurs pas trop grandes, alors $h$ va toujours tenter de se rapprocher de $h_c$.
En fait, $p$ et $c$ peuvent être considérées comme des perturbations du système $\dot{h}=\alpha(h_c-h)$ qui est une équation d'un système du premier ordre telle que $h(t) \underset{t \rightarrow \infty}{\longrightarrow} h_c$.
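En effet, en résolvant explicitement cette équation du premier ordre (petit calcul de vérification, pour $p=c=0$), on obtient :
\begin{equation}
h(t)=h_c+\left(h(t_0)-h_c\right)e^{-\alpha (t-t_0)} \underset{t \rightarrow \infty}{\longrightarrow} h_c.
\end{equation}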
End of explanation
# Tracé de la fonction de Haldane
# -------------------------------
# vecteur de valeurs de S
S= np.arange(0,6,0.1)
# paramètres de la fonction de Haldane
muast = 2.3; KS = 10; KI = 0.1;
# taux de croissance
mu=muast*S/(KS+S+S**2/KI)
# Valeur de S pour laquelle le taux de croissance est maximum
Smax=np.sqrt(KS*KI)
# Valeur maximale du taux de croissance
mumax=muast*Smax/(KS+Smax+Smax**2/KI)
# Tracé de la fonction de Haldane
plt.figure(figsize = (4, 2))
plt.plot(S,mu,'r')
plt.plot(Smax*np.array([1,1]),np.array([0,mumax]),'b--')
plt.plot(Smax,mumax,'bo')
plt.xlabel('Substrat S')
plt.ylabel('taux de croissance $\mu(S)$')
plt.title('Fonction de Haldane')
plt.show()
Explanation: Quelques lois de commande boucle fermée classiques
Notons $y^m(t)$ la mesure de $y(t)$ à l'instant $t$. Voici deux lois de commande classiquement appliquées:
Commande "bang-bang" ou "tout ou rien":
$$ u(t)=\left\{
\begin{array}{c}
u_{max}\text{ if }y^\ast-y^m(t)>0\\
u_{min}\text{ if }y^\ast-y^m(t)\leqslant 0
\end{array}
\right.$$
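À titre indicatif, cette commande « tout ou rien » s'écrirait par exemple ainsi en Python (esquisse, noms arbitraires) :
def commande_tout_ou_rien(consigne, mesure, umin, umax):
    # renvoie umax si l'erreur consigne - mesure est positive, umin sinon
    return umax if consigne - mesure > 0 else umin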
Commande proportionnelle intégrale dérivée (PID)
$$ u(t)=u_c+K_p(y^\ast-y^m(t))+K_i\int_0^t(y^\ast-y^m(s))ds+K_d\frac{d(y^\ast-y^m)}{dt}(t)$$
<a name="PID">Contrôleur Proportionnel Intégral Dérivé (PID)</a>
La commande proportionnelle intégrale dérivée (PID) est, comme son nom l'indique, composée de 3 termes:
- un terme proportionnel à l'erreur $y^\ast-y^m$
- un terme proportionnel à l'intégral de l'erreur $y^\ast-y^m$
- et un terme proportionnel à la dérivée de l'erreur $y^\ast-y^m$
$$ u(t)=u_c+\underbrace{K_p(y^\ast-y^m(t))}_{\text{proportionnel}}+\underbrace{K_i\int_0^t(y^\ast-y^m(s))ds}_{\text{intégral}}+\underbrace{K_d\frac{d(y^\ast-y^m)}{dt}(t)}_{\text{dérivé}} $$
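À titre purement indicatif, voici une discrétisation possible de cette loi (pas de temps $\Delta t$, approximation rectangle pour l'intégrale et différence finie pour la dérivée) ; les noms de variables sont arbitraires et cette esquisse n'est pas celle utilisée dans la suite du notebook.
def pid_discret(consigne, mesure, etat, Kp, Ki, Kd, dt, u0=0.0):
    # etat est un dictionnaire persistant, initialisé avec {'integrale': 0.0}
    erreur = consigne - mesure
    etat['integrale'] += erreur*dt                           # terme intégral
    derivee = (erreur - etat.get('erreur_prec', erreur))/dt  # terme dérivé
    etat['erreur_prec'] = erreur
    return u0 + Kp*erreur + Ki*etat['integrale'] + Kd*derivee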
Nous allons illustrer ses performances sur un exemple concret.
Exemple : Culture de micro-organismes dans un réacteur continu
On va s'intéresser à l'exemple de la croissance d'une population de micro-organismes que l'on notera $B$ (pour biomasse) sur un substrat noté $S$ dans un réacteur continu (c'est à dire à volume constant et alimenté en continu) et parfaitement mélangé. Cette réaction peut être schématiquement représentée de la manière suivante:
$$ S \underset{B}{\longrightarrow} B$$
la présence du $B$ sous la flèche signifiant que $B$ est un catalyseur de sa propre croissance (la vitesse de la réaction, qui est le taux de croissance de la population va donc dépendre de $B$). On supposera que le taux de croissance $\mu(S)$ de ces micro-organismes suit une loi de Haldane:
$$ \mu(S)=\frac{\mu^\ast S}{K_S+S+\frac{S^2}{K_I}}\text{ avec }1-4\frac{K_s}{K_I}<0$$
Cette fonction admet un maximum en $S=\sqrt{K_sK_I}$ valant $\mu_{max}=\frac{\mu^\ast}{1+2\sqrt{\frac{K_S}{K_I}}}$.
Remarques mathématiques:
<u>Valeur maximum</u>: pour trouver le maximum de la fonction $\mu$, il suffit de résoudre l'équation $\mu^\prime(S)=0$ et de choisir ensuite, parmi les solutions, celles qui correspondent à des maxima locaux (car les solutions peuvent aussi être des minima). Dans notre cas on a: $\mu^\prime(S)= \frac{\mu^\ast}{\left(K_S+S+\frac{S^2}{K_I}\right)^2}\left(K_S-\frac{S^2}{K_I}\right)$. Les valeurs de $S$ pour lesquelles $\mu^\prime(S)=0$ sont donc telles que $S^2=K_SK_I \Leftrightarrow S=\pm \sqrt{K_SK_I}$. Comme on ne s'intéresse qu'aux valeurs positives de $S$ (valeurs physiques) on ne retiendra que la valeur $S=\sqrt{K_SK_I}$ dont on peut montrer que c'est un maximum. En effet, pour tout $S\in[0,\sqrt{K_SK_I}],\,\mu^\prime(S)>0$ et pour tout $S\in[\sqrt{K_SK_I},\infty ),\,\mu^\prime(S)<0$.
<u>Condition</u> $1-4\frac{K_S}{K_I}<0$: cette condition permet d'assurer que $\mu(S)>0$ pour tout $S>0$. En effet, $\mu(S)>0 \Leftrightarrow K_S+S+\frac{S^2}{K_I}>0$. Le discriminant de ce polynôme est donné par $\Delta = 1-4\frac{K_S}{K_I}$. Si $\Delta<0$ alors le polynôme $K_S+S+\frac{S^2}{K_I}$ n'admet aucune racine réelle et $K_S+S+\frac{S^2}{K_I}>0,\,\forall S>0$. Si $\Delta>0$ alors le polynôme $K_S+S+\frac{S^2}{K_I}$ admet deux racines réelles qui sont toutes deux positives. Par conséquent il existera des valeurs de $S>0$ pour lesquelles $K_S+S+\frac{S^2}{K_I}<0$ ce qui n'a pas de signification biologique.
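À titre de vérification annexe, le calcul du maximum peut être retrouvé symboliquement avec sympy (esquisse indicative) :
import sympy as sp
S, mu_ast, K_S, K_I = sp.symbols('S mu_ast K_S K_I', positive=True)
mu_S = mu_ast*S/(K_S + S + S**2/K_I)
racines = sp.solve(sp.Eq(sp.diff(mu_S, S), 0), S)
print(racines)                                       # attendu : la racine positive sqrt(K_S*K_I)
print(sp.simplify(mu_S.subs(S, sp.sqrt(K_S*K_I))))   # mathématiquement égal à mu_ast/(1+2*sqrt(K_S/K_I))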
End of explanation
# Calcul des points d'équilibre du modèle de réacteur continu
# ------------------------------------------------------------
def test(Q):
# calcul des racines r1 et r2 du polynome Q*KS/V+(Q/V-muast)*S+Q/(V*KI)*S^2
r=np.roots(np.array([Q/(V*KI),Q/V-muast,Q*KS/V]))
# Valeur du taux de croissance en r1
mur0 = muast*r[0]/(KS+r[0]+r[0]**2/KI)
# Valeur du taux de croissance en r2
mur1 = muast*r[1]/(KS+r[1]+r[1]**2/KI)
    # --> en principe, on a mur0=mur1=Q/V
# Valeur de S pour laquelle le taux de croissance est maximum
Smax = np.sqrt(KS*KI)
# Valeur maximale du taux de croissance
mumax = muast*Smax/(KS+Smax+Smax**2/KI)
# Vecteur de valeurs
S= np.arange(0,max(max(r),15),0.1)
# Taux de dilution D=Q/V
D=Q/V
# taux de croissance calculé en les valeurs de S
mu=muast*S/(KS+S+S**2/KI)
# Tracé de l'intersection entre la fonction de Haldane et la droite d'equation y=Q/V
plt.figure(figsize = (4, 2))
plt.plot(S,mu,'r')
plt.plot(np.array([0,S[-1]]),D*np.array([1,1]),'b')
plt.plot(r[0]*np.array([1,1]),np.array([0,mur0]),'b--')
plt.plot(r[0],mur0,'bo')
plt.plot(r[1]*np.array([1,1]),np.array([0,mur1]),'b--')
plt.plot(r[1],mur1,'bo')
plt.plot(Smax*np.array([1,1]),np.array([0,mumax]),'r--')
plt.xlabel('Substrat S')
plt.ylabel('taux de croissance $\mu(S)$')
plt.title('Fonction de Haldane')
plt.show()
# paramètres de la fonction de Haldane
muast = 2.3; KS = 10; KI = 0.1;Qin = 0.01; V = 0.5;
# valeur maximale Qmax du débit d'entrée Q que l'on peut appliquer
Qmax = V*muast/(1+2*np.sqrt(KS/KI))
# Evolution des solutions de l'équation mu(S)=Q/V en fonction de la valeur de Q appliquée --> tracé intéractif
interact(test,Q=(0.001,Qmax,Qmax/10))
Explanation: On notera $Q$ le débit d'alimentation du réacteur (égal au débit de soutirage) et $V$ le volume constant du réacteur.
Un modèle de ce système est donné par:
$$
\boxed{
\left\{
\begin{array}{crl}
\frac{dB}{dt}= & \mu(S)B & -\frac{Q}{V}B\\
\frac{dS}{dt}= & -k\mu(S)B & +\frac{Q}{V}(S_0-S)
\end{array}
\right.}
$$
Problème: on cherche à contrôler la concentration $S$ de substrat dans le réacteur en jouant sur la commande $Q$.
On note $S^\ast$ la valeur de la concentration en sucre que l'on souhaite atteindre.
Loi de commande boucle ouverte
Avant de tester un commande PID, on va regarder ce que donne la commande boucle ouverte.
Trouver la loi de commande boucle ouverte du problème consiste à trouver la valeur $Q^\ast$ de $Q$ qui, si elle est appliquée au système, amènera la concentration en sucre $S$ à la valeur $S^\ast$.
Autrement dit, on cherche la valeur $Q^\ast$ de $Q$ telle que $S^\ast$ (et la valeur de $B$ correspondante) est un point d'équilibre stable du système.
Avant de chercher cette valeur $Q^\ast$, on va d'abord se poser la question suivante:
Question: Quelles sont les valeurs $S^\ast$ que l'on peut atteindre avec une loi de commande boucle ouverte (c'est à dire avec une valeur de $Q$ constante et positive)?
Répondre à cette question revient à calculer les points d'équilibre du système qui sont stables. On va donc chercher dans un premier temps l'ensemble des points d'équilibre, c'est à dire les valeurs de $S$ et $B$ telles que:
$$ \left\{
\begin{array}{rcl}
(\mu(S) -\frac{Q}{V})B &=& 0\\
-k\mu(S)B+\frac{Q}{V}(S_0-S) &=& 0
\end{array}\right. $$
En supposant que $Q>0$ (donc non nul) on a:
$$\left\{ \begin{array}{rcl}
(\mu(S) -\frac{Q}{V})B &=& 0\\
-k\mu(S)B+\frac{Q}{V}(S_0-S) &=& 0
\end{array}\right.
\Longleftrightarrow \left\{
\begin{array}{rcl}
B &=& 0\\
\frac{Q}{V}(S_0-S) &=& 0
\end{array}\right.
\text{ ou }
\left\{
\begin{array}{rcl}
\frac{Q}{V} &=& \mu(S)\\
\mu(S)(S_0-S-kB) &=& 0
\end{array}\right. $$
$$
\Longleftrightarrow
\left\{
\begin{array}{rcl}
B &=& 0\\
S &=& S_0
\end{array}\right.
\text{ ou }
\left\{
\begin{array}{rcl}
\mu(S) &=& \frac{Q}{V}\\
B&=& \frac{S_0-S}{k}
\end{array}\right. $$
Il nous faut donc encore résoudre l'équation $\mu(S)=\frac{Q}{V}$. En remplaçant $\mu(S)$ par son expression, on obtient l'équation suivante:
$$ \frac{\mu^\ast S}{K_S+S+\frac{S^2}{K_I}}=\frac{Q}{V} $$
qui, si on la multiplie par $K_S+S+\frac{S^2}{K_I}$ (qui est toujours strictement positif), et après réarrangement, donne:
$$ \frac{Q}{V}K_S+(\frac{Q}{V}-\mu^\ast) S+\frac{Q}{V}\frac{S^2}{K_I}=0.$$
On peut montrer que, pour $0 < Q < V \frac{\mu^\ast}{1+2\sqrt{\frac{K_S}{K_I}}}$
cette équation admet deux solutions (racines du polynôme en $S$) notées $S_1$ et $S_2$ telles que
$$0 \leqslant S_1 \leqslant \sqrt{K_SK_I} \leqslant S_2.$$
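À titre de vérification, la formule des racines du trinôme donne explicitement :
$$S_{1,2}=\frac{\left(\mu^\ast-\frac{Q}{V}\right)\mp\sqrt{\left(\mu^\ast-\frac{Q}{V}\right)^2-\frac{4Q^2 K_S}{V^2 K_I}}}{\frac{2Q}{V K_I}}.$$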
End of explanation
# Tracé du portrait de phase du modèle de réacteur continu
# --------------------------------------------------------
# Paramètres du modèle de réacteur continu
muast = 2.3; KS = 10; KI = 0.1;Q = 0.05; V = 0.5; S0 = 3.2; coeffk = 0.6;
# modèle du réacteur continu
def reacteur(x,t,coeffk,muast,KS,KI,Q,V,S0):
B = x[0] #biomasse
S = x[1] #substrat
# initialisation de la dérivée de x par rapport au temps
dx = np.zeros(2)
# taux de croissance
mu = muast*S/(KS+S+S**2/KI)
# second membre de l'équation en B
dx[0] = mu*B-Q/V*B
# second membre de l'équation en S
dx[1] = -coeffk*mu*B+Q/V*(S0-S)
return dx
# ** Calcul des points d'équilibre
# pour cela, on calcule les racines r1 et r2 du polynôme Q*KS/V+(Q/V-muast)*S+Q/(V*KI)*S^2
r=np.roots(np.array([Q/(V*KI),Q/V-muast,Q*KS/V]))
# ** Positionnement des points d'équilibre sur le portrait de phase
# - point d'équilibre E2
plt.plot((S0-r[0])/coeffk,r[0],'ro')
plt.text((S0-r[0])/coeffk*1.05,r[0]*1.05,'E2',color='r')
# - point d'équilibre E1
plt.plot((S0-r[1])/coeffk,r[1],'ro')
plt.text((S0-r[1])/coeffk*1.05,r[1]*1.05,'E1',color='r')
# - point d'équilibre E0
plt.plot(0,S0,'ro')
plt.text(0,S0,'E0',color='r')
# ** Tracé du portrait de phase
# Pour tracer le portrait de phase, on va simuler plusieurs trajectoires solutions du modèle
# en changeant à chaque fois la condition initiale et on va les tracer dans le plan de phase
# c'est à dire que l'on va tracer les valeurs de S en fonction de B (ou l'inverse)
# On ne s'intéressera évidemment qu'à la partie du plan de phase où B>0 et S>0 puisque seules ces valeurs
# ont une signification biologique
# comme on ne peut pas tracer le portrait de phase entier, car le plan de phase est infini, on va
# définir des bornes Bmax et Smax pour les valeurs de B et de S
Bmax=(max(S0-r)/coeffk)*1.1
Smax=max(max(r),S0)*1.1
# pour chaque bord de ce sous-domaine [0,Bmax]x[0,Smax] on va simuler nbinit+1 trajectoires qui partent de ce bord
nbinit = 22 # nombre de conditions initiales par bord
# on définit également une plage de temps sur laquelle on va simuler les trajectoires
tmax=200
temps = np.linspace(0,tmax,20000)
# et on choisit les couleurs pour le tracé des trajectoires sur le portrait de phase
cmap = plt.get_cmap('hsv')
couleurs = [cmap(i) for i in np.linspace(0, 1, 4*(nbinit+1))]
# numéro de la simulation qui sera incrémenté
nb_simu = 0
for k in np.arange(4): # boucle sur les 4 bords de la partie du plan de phase à laquelle on s'intèresse
for l in np.arange(nbinit+1): # boucle sur les nbinit+1 conditions initiales du bord considéré
# initialisation de la condition initiale
x0 = np.zeros(2)
if k == 0: # bord 0 : B=0 et 0<S<Smax
x0[0]=0
x0[1]= l*Smax/nbinit
elif k==1: # bord 1 : S=Smax et 0<B<Bmax
x0[0]=l*Bmax/nbinit
x0[1]=Smax
elif k==2: # bord 2 : B=Bmax et 0<S<Smax
x0[0]=Bmax
x0[1]=(nbinit+1-l)*Smax/nbinit
elif k==3: # bord 3 : S=0 et 0<B<Bmax
x0[0]=(nbinit+1-l)*Bmax/nbinit
x0[1]=0
# simulation du modèle
X = scint.odeint(reacteur,x0,temps,args=(coeffk,muast,KS,KI,Q,V,S0))
# tracé de la trajectoire solution dans le plan de phase
plt.plot(X[:,0],X[:,1],color=couleurs[nb_simu])
# incrémentation du numéro de la simulation
nb_simu = nb_simu + 1
# titres des axes des abscisses et ordonnées
plt.xlabel('biomasse B')
plt.ylabel('substrat S')
plt.show()
Explanation: On a donc $3$ points d'équilibre qui sont:
$E_0=(B,S) = (0,S_0)$ : ce point d'équilibre correspond au lessivage, c'est à dire à la disparition totale de la population de micro-organismes
$E_1=(B,S) = (\frac{S_0-S_1}{k},S_1)$
$E_2=(B,S) = (\frac{S_0-S_2}{k},S_2)$
Si on s'intéresse maintenant à la stabilité (locale) de ces points d'équilibre, on peut montrer (admis) que:
- $E_0$ est stable si et seulement si $\frac{Q}{V}>\mu(S_0)$
- $E_1$ est stable
- $E_2$ est instable
End of explanation
# Controle boucle ouverte de la culture bactérienne dans un réacteur continu
# --------------------------------------------------------------------------
plt.close('all') # close all figure
# ** modèle du réacteur continu
def reacteur(x,t,k,muast,KS,KI,Qin,V,S0,Sast,control_type,coeffcontrol,disturb):
# x : variables d'état du modèle c'est à dire B et S dans notre cas
# t : temps
# muast, KS, KI : paramètres de la fonction de Haldane utilisée pour le taux de croissance
# Qin : débit d'entrée dans le réacteur
# S0 : concentration en sucre dans le milieu qui alimente le réacteur
# Sast : consigne en concentration en sucre pour la commande (= valeur que l'on veut atteindre)
# control_type : type de loi de commande à appliquer. Pour l'instant un type est possible control_type='BO'
# pour la boucle ouverte
# coeffcontrol : paramètres utilisés dans la loi de commande
# disturb : perturbation sur la commande c'est à dire valeur telle que Qréel = Qcalc*(1+disturb)
# récupération des valeurs des variables d'état
B = x[0] # biomasse
S = x[1] # substrat
# Calcul de la commande: dans fonction_u la loi de commande est calculée, disturb permet de prendre
# en compte d'éventuelles perturbation sur la valeur réellement appliquée de la commande
Q = fonction_u(t,x,Sast,k,muast,KI,KS,V,S0,control_type,coeffcontrol)*(1+disturb)
# initialisation de dx, second membre du modèle correspondant à la dérivée des variables d'état dB/dt et dS/dt
dx = np.zeros(2)
# taux de croissance (fonction de Haldane)
mu = muast*S/(KS+S+S**2/KI)
# second membre de l'équation en B
dx[0] = mu*B-Q/V*B
    # second membre de l'équation en S
dx[1] = -k*mu*B+Q/V*(S0-S)
return dx
# ** fonction qui simule le modèle avec la loi de commande demandée et qui trace ensuite la solution
def culture_cont(Sast,control_type,coeffcontrol,disturb):
# vecteur de temps pour la simulation
tmax = 150
temps = np.linspace(0,tmax,2000)
# paramètres du modèle
k = 0.6; muast = 2.3; KS = 10; KI = 0.1; Qin = 0.01; V = 0.5;
# conditions initiales du modèle (valeurs initiales de la biomasse et de la concentration en sucre)
B0 = 9; S0 = 3.2;
# si on utilise un terme intégrale, il faudra rajouter une équation dans le modèle et du coup rajouter
# la condition initiale correspondante qui est égale à 0 (voir plus loin dans le paragraphe sur le terme
# intégral)
if control_type in ['I','PI','PID']:
x0 = np.array([B0,S0,0])
else:
x0 = np.array([B0,S0])
# intégration numérique de l'EDO
x = scint.odeint(reacteur,x0,temps,args=(k,muast,KS,KI,Qin,V,S0,Sast,control_type,coeffcontrol,disturb))
# re-calcul de la commande appliquée
u = fonction_u(temps,x,Sast,k,muast,KI,KS,V,S0,control_type,coeffcontrol)
# tracé des solutions
plt.figure(figsize = (10, 3))
plt.subplots_adjust(hspace=0.4,wspace=0.4)
plt.subplot2grid((1,2),(0,0))
plt.plot(temps,x[:,0],'r',label='Biomasse')
plt.plot(temps,x[:,1],'g',label='Substrat')
plt.plot(np.array([0,temps[-1]]),np.array([Sast,Sast]),'g--',label='S*')
plt.legend(); plt.xlabel('time (h)')
plt.subplot2grid((1,2),(0,1))
plt.plot(temps,u,'r',label='Debit')
plt.legend(); plt.xlabel('time (h)')
plt.show()
# Loi de commande
def fonction_u(t,x,Sast,k,muast,KI,KS,V,S0,control_type,coeffcontrol):
if control_type == 'BO': # BOUCLE OUVERTE
# loi donnée par Qast=mu(Sast)*V
Qast=muast*Sast/(KS+Sast+Sast**2/KI)*V
if type(t)==float: # cas où t est scalaire
valu = Qast
else: # cas où t est un vecteur
valu = np.ones(len(t))*Qast
return valu
Explanation: En conclusion, on a montré que les seuls points d'équilibre atteignables avec une commande boucle ouverte sont $E_0$ et $E_1$. Comme $0 \leqslant S_1 \leqslant \sqrt{K_SK_I}$, les seules valeurs $S^\ast$ atteignables avec une commande boucle ouverte sont:
$$S^\ast \in\left[0,\sqrt{K_SK_I}\right]\cup \left\{S_0 \right\}$$
On retourne maintenant à la question de départ qui était:
Question: Etant donnée une valeur atteignable $S^\ast$ de $S$, quelle est la valeur $Q^\ast$ de $Q$ telle que $S$ tende vers $S^\ast$?
Dans notre cas, on a vu que les points d'équilibre $E_1$ et $E_2$ étaient caractérisés par $\mu(S)=\frac{Q}{V}$.
Pour atteindre $S^\ast$ il faudra donc appliquer un débit $Q^\ast$ égal à:
$$Q^\ast=V \mu(S^\ast)$$
End of explanation
# tracé interactif de l'évolution de la dynamique boucle fermée en fonction de la valeur de Sast
# dans le cas d'une commande en BOUCLE OUVERTE SANS PERTURBATION
def test(Sast):
return culture_cont(Sast,'BO',0,0)
interact(test,Sast=(0,2,0.1))
Explanation: Test 1: Loi de commande en BOUCLE OUVERTE, SANS PERTURBATION
Ici $k = 0.6$; $\mu^\ast = 2.3$; $K_S = 10$; $K_I = 0.1$; $S^{in} = 3.2$; $V = 0.5$;
donc les valeurs de $S$ atteignables sont $\left[0,\sqrt{K_SK_I}\right]\cup \left\{S^{in} \right\}=[0,1]\cup\left\{3.2 \right\}$
End of explanation
# tracé interactif de l'évolution de la dynamique boucle fermée en fonction de la valeur de Sast
# dans le cas d'une commande en BOUCLE OUVERTE AVEC PERTURBATION
def test(Sast,disturb):
return culture_cont(Sast,'BO',0,disturb)
interact(test,Sast=(0,2,0.1),disturb=(0,0.4,0.01))
Explanation: On constate donc que, si on choisit une valeur de consigne $S^\ast$ atteignable, alors la loi de commande boucle ouverte permet bien de faire tendre la concentration en sucre vers cette valeur de consigne.
Ce n'est par contre plus le cas si on choisit un $S^\ast>\sqrt{K_SK_I}$
Regardons maintenant si cette loi de commande est robuste aux perturbations.
Supposons que pour des raisons physiques, il y a une erreur entre le débit que l'on souhaite appliquer (et qui est donné par la loi de commande), et le débit qui est réellement appliqué.
$$ Q_{\text{réel}}=Q_{\text{calc}}(1+\delta)$$
Test 2: Loi de commande en BOUCLE OUVERTE, AVEC PERTURBATION
End of explanation
# Contrôle boucle fermée de la culture bactérienne dans un réacteur continu : commande proportionnelle
# -----------------------------------------------------------------------------------------------------
# Loi de commande
def fonction_u(t,x,Sast,k,muast,KI,KS,V,S0,control_type,coeffcontrol):
if control_type == 'BO': # BOUCLE OUVERTE
# loi donnée par Qast=mu(Sast)*V
Qast = muast*Sast/(KS+Sast+Sast**2/KI)*V
if type(t)==float: # cas où t est scalaire
valu = Qast
else: # cas où t est un vecteur
valu = np.ones(len(t))*Qast
elif control_type == 'P': # BOUCLE FERMEE action PROPORTIONNELLE
# récupération des paramètres de la loi de commande
kprop = coeffcontrol
# et de la valeur de S
if type(t)==float: # cas où t est scalaire
valS = x[1]
else: # cas où t est un vecteur
valS = x[:,1]
# loi donnée par mu(Sast)*V+kprop*(Sast-S)
valu = muast*Sast/(KS+Sast+Sast**2/KI)*V+kprop*(Sast-valS)
return valu
Explanation: Limites de la boucle ouverte
On touche ici aux limites de la commande boucle ouverte.
la commande boucle ouverte ne peut être utilisée que pour certaines valeurs de consignes
cette commande n'est pas robuste aux perturbations
Action proportionnelle
On va maintenant tester une loi de commande boucle fermée constituée d'un terme proportionnel à l'erreur, c'est à dire une loi de commande de la forme:
$$ u(t)=u^\ast+K_p(y^\ast-y^m(t))$$
où $u^\ast$ est la valeur de la loi de commande constante boucle ouverte qui permet de stabiliser le système à $S^\ast$
Dans le cas de notre exemple, cela correspond à la loi de commande:
$$ Q(t)=V\mu(S^\ast)+K_p(S^\ast-S^m(t))$$
End of explanation
# tracé interactif de l'évolution de la dynamique boucle fermée en fonction de la valeur de Sast et kprop
# dans le cas d'une commande en BOUCLE FERMEE PROPORTIONNELLE SANS PERTURBATION
def test(Sast,kprop):
return culture_cont(Sast,'P',kprop,0)
interact(test,Sast=(0,4,0.1),kprop=(0,0.1,0.01))
Explanation: Test: Loi de commande en BOUCLE FERMEE avec action proportionnelle (P), SANS PERTURBATION
End of explanation
# tracé interactif de l'évolution de la dynamique boucle fermée en fonction de la valeur de Sast, kprop et disturb
# dans le cas d'une commande en BOUCLE FERMEE PROPORTIONNELLE AVEC PERTURBATION
def test(Sast,kprop,disturb):
return culture_cont(Sast,'P',kprop,disturb)
interact(test,Sast=(0,4,0.1),kprop=(0,0.1,0.01),disturb=(0,0.4,0.01))
Explanation: Comparé à la commande en boucle ouverte, on constate qu'avec la commande boucle fermée avec terme proportionnel, on peut atteindre des valeurs $S^\ast$ de $S$ qui étaient non atteignables avec la boucle ouverte.
Testons maintenant si cette commande est robuste aux perturbations.
Test: Loi de commande en BOUCLE FERMEE avec action proportionnelle (P), AVEC PERTURBATION
End of explanation
# Contrôle boucle fermée de la culture bactérienne dans un réacteur continu : commande proportionnelle et intégrale
# ------------------------------------------------------------------------------------------------------------------
# Loi de commande
def fonction_u(t,x,Sast,k,muast,KI,KS,V,S0,control_type,coeffcontrol):
if control_type == 'BO': # BOUCLE OUVERTE
# loi donnée par Qast=mu(Sast)*V
Qast = muast*Sast/(KS+Sast+Sast**2/KI)*V
if type(t)==float: # cas où t est scalaire
valu = Qast
else: # cas où t est un vecteur
valu = np.ones(len(t))*Qast
elif control_type == 'P': # BOUCLE FERMEE action PROPORTIONNELLE
# récupération des paramètres de la loi de commande
kprop = coeffcontrol
# et de la valeur de S
if type(t)==float: # cas où t est scalaire
valS = x[1]
else: # cas où t est un vecteur
valS = x[:,1]
# loi donnée par mu(Sast)*V+kprop*(Sast-S)
valu = muast*Sast/(KS+Sast+Sast**2/KI)*V+kprop*(Sast-valS)
elif control_type == 'PI': # BOUCLE FERMEE action PROPORTIONNELLE INTEGRALE
# récupération des paramètres de la loi de commande
k0 = coeffcontrol[0]
kprop = coeffcontrol[1]
kint = coeffcontrol[2]
# et de la valeur de valint, qui est l'intégrale entre 0 et t de Sast-S
if type(t)==float: # cas où t est scalaire
valint = x[2]; valS = x[1]
else: # cas où t est un vecteur
valint = x[:,2]; valS = x[:,1]
# loi donnée par k0+kprop*(Sast-S) + kint*valint
# où valint est l'intégrale entre 0 et t de Sast-S
valu = k0+kprop*(Sast-valS)+kint*valint
return valu
# modification du modèle de réacteur continu pour calculer l'intégrale de Sast-S au cours du temps
def reacteur(x,t,k,muast,KS,KI,Qin,V,S0,Sast,control_type,coeffcontrol,disturb):
# x : variables d'état du modèle c'est à dire B et S dans notre cas
# t : temps
# muast, KS, KI : paramètres de la fonction de Haldane utilisée pour le taux de croissance
# Qin : débit d'entrée dans le réacteur
# S0 : concentration en sucre dans le milieu qui alimente le réacteur
# Sast : consigne en concentration en sucre pour la commande (= valeur que l'on veut atteindre)
# control_type : type de loi de commande à appliquer. Pour l'instant un type est possible control_type='BO'
# pour la boucle ouverte
# coeffcontrol : paramètres utilisés dans la loi de commande
# disturb : perturbation sur la commande c'est à dire valeur telle que Qréel = Qcalc*(1+disturb)
# récupération des valeurs des variables d'état
B = x[0] # biomasse
S = x[1] # substrat
# Calcul de la commande: dans fonction_u la loi de commande est calculée, disturb permet de prendre
# en compte d'éventuelles perturbation sur la valeur réellement appliquée de la commande
Q = fonction_u(t,x,Sast,k,muast,KI,KS,V,S0,control_type,coeffcontrol)*(1+disturb)
# initialisation de dx, second membre du modèle correspondant à la dérivée des variables d'état dB/dt et dS/dt
if control_type in ['PI','PID']: # si il y a un terme intégrale dans la commande, on rajoute une équation pour
# calculer l'intégrale Sast-S au cours du temps
dx = np.zeros(3)
# second membre de l'equation qui calcule l'intégrale de Sast-S qui sera stockée dans x[2]
dx[2] = Sast-S
else: # si il n'y a pas de terme intégrale dans la commande, on ne rajoute pas d'équation
dx = np.zeros(2)
# taux de croissance (fonction de Haldane)
mu = muast*S/(KS+S+S**2/KI)
# second membre de l'équation en B
dx[0] = mu*B-Q/V*B
# second membre de l'équatio en S
dx[1] = -k*mu*B+Q/V*(S0-S)
return dx
Explanation: On constate que la commande boucle fermée proportionnelle n'est pas robuste aux perturbations.
Rajoutons maintenant un terme intégral à cette loi de commande.
Action intégrale
On va donc maintenant tester une loi de commande boucle fermée constituée d'un terme proportionnel à l'erreur et d'un terme proportionnel à l'intégrale de l'erreur, c'est à dire une loi de commande de la forme:
$$ u(t)=K_0+K_p(y^\ast-y^m(t))+K_i\int_0^t(y^\ast-y^m(s))ds$$
Dans le cas de notre exemple, cela correspond à la loi de commande:
$$ Q(t)=K_0+K_p(S^\ast-S^m(t))+K_i\int_0^t(S^\ast-S^m(s))ds$$
Remarque : on voit ici que la valeur $u^\ast$ de la commande constante boucle ouverte n'apparaît plus dans l'expression de la loi de commande : elle a été remplacée par le terme constant $K_0$. En effet, on va voir qu'il n'est ici plus nécessaire de connaître la valeur de la loi de commande boucle ouverte pour contrôler le système : c'est l'intégrateur qui va assurer que l'on converge bien vers la consigne. La valeur de $K_0$ permettra quant à elle de choisir la valeur de la commande au départ (au moment où on applique la commande). Cela peut être utile pour éviter des discontinuités, autrement dit des changements brutaux de valeur de commande.
Application pratique
Numériquement, pour calculer le terme $I=\int_0^t(S^\ast-S^m(s))ds$ on va simplement rajouter l'équation suivante:
$$ \frac{dI}{dt}=S^\ast - S^m$$
avec $I(0)=0$.
End of explanation
# tracé interactif de l'évolution de la dynamique boucle fermée en fonction de la valeur de Sast, kprop et kint
# dans le cas d'une commande en BOUCLE FERMEE PROPORTIONNELLE INTEGRALE SANS PERTURBATION
def test(Sast,k0,kprop,kint):
return culture_cont(Sast,'PI',np.array([k0,kprop,kint]),0)
interact(test,Sast=(0,4,0.1),k0=(0,0.1,0.01),kprop=(0,0.1,0.01),kint=(0,0.02,0.001))
Explanation: Test: Loi de commande en BOUCLE FERMEE avec actions proportionnelle et intégrale (PI), SANS PERTURBATION
End of explanation
# tracé interactif de l'évolution de la dynamique boucle fermée en fonction de la valeur de Sast, kprop, kint et disturb
# dans le cas d'une commande en BOUCLE FERMEE PROPORTIONNELLE INTEGRALE AVEC PERTURBATION
def test(Sast,k0,kprop,kint,disturb):
return culture_cont(Sast,'PI',np.array([k0,kprop,kint]),disturb)
interact(test,Sast=(0,4,0.1),k0=(0,0.1,0.01),kprop=(0,0.1,0.01),kint=(0,0.02,0.001),disturb=(0,0.4,0.01))
Explanation: Test: Loi de commande en BOUCLE FERMEE avec actions proportionnelle et intégrale (PI), AVEC PERTURBATION
End of explanation
# Contrôle boucle fermée de la culture bactérienne dans un réacteur continu: commande proportionnelle intégrale dérivée
# ---------------------------------------------------------------------------------------------------------------------
# Loi de commande
def fonction_u(t,x,Sast,k,muast,KI,KS,V,S0,control_type,coeffcontrol):
if control_type == 'BO': # BOUCLE OUVERTE
# loi donnée par Qast=mu(Sast)*V
Qast = muast*Sast/(KS+Sast+Sast**2/KI)*V
if type(t)==float: # cas où t est scalaire
valu = Qast
else: # cas où t est un vecteur
valu = np.ones(len(t))*Qast
elif control_type == 'P': # BOUCLE FERMEE action PROPORTIONNELLE
# récupération des paramètres de la loi de commande
kprop = coeffcontrol
# et de la valeur de S
if type(t)==float: # cas où t est scalaire
valS = x[1]
else: # cas où t est un vecteur
valS = x[:,1]
# loi donnée par mu(Sast)*V+kprop*(Sast-S)
valu = muast*Sast/(KS+Sast+Sast**2/KI)*V+kprop*(Sast-valS)
elif control_type == 'PI': # BOUCLE FERMEE action PROPORTIONNELLE INTEGRALE
# récupération des paramètres de la loi de commande
k0 = coeffcontrol[0]
kprop = coeffcontrol[1]
kint = coeffcontrol[2]
# et de la valeur de valint, qui est l'intégrale entre 0 et t de Sast-S
if type(t)==float: # cas où t est scalaire
valint = x[2]; valS = x[1]
else: # cas où t est un vecteur
valint = x[:,2]; valS = x[:,1]
# loi donnée par k0+kprop*(Sast-S) + kint*valint
# où valint est l'intégrale entre 0 et t de Sast-S
valu = k0+kprop*(Sast-valS)+kint*valint
elif control_type == 'PID': # BOUCLE FERMEE action PROPORTIONNELLE INTEGRALE DERIVEE
# récupération des paramètres de la loi de commande
k0 = coeffcontrol[0]
kprop = coeffcontrol[1]
kint = coeffcontrol[2]
kderiv = coeffcontrol[3]
# et des valeurs de valint (intégrale de Sast-S), de S et de B
if type(t)==float: # cas où t est scalaire
valint = x[2]; valS = x[1]; valB=x[0]
else: # cas où t est un vecteur
valint = x[:,2]; valS = x[:,1]; valB = x[:,0]
# loi donnée par k0+kprop*(Sast-S) + kint*valint + kderiv*(dSast/dt-dS/dt)
mu = muast*valS/(KS+valS+valS**2/KI)
valu = (k0+kprop*(Sast-valS)+kint*valint+kderiv*k*mu*valB)/(1+kderiv/V*(S0-valS))
return valu
Explanation: On remarque que, en ajoutant le terme intégral dans la commande, cela permet d'annuler l'erreur que l'on faisait avec les lois de commande boucle ouverte ou boucle fermée proportionnelle.
Les lois de commande Proportionnelle Intégrale sont donc robustes aux perturbations!
Action Dérivée
On peut également rajouter un terme dérivée dans la loi de commande boucle fermée. On obtient alors la loi de commande proportionnelle intégrale dérivée (PID) avec un terme proportionnel à l'erreur, un terme proportionnel à l'intégrale de l'erreur et un terme proportionnel à la dérivée de l'erreur, c'est à dire une loi de commande de la forme:
$$ u(t)=K_0+K_p(y^\ast-y^m(t))+K_i\int_0^t(y^\ast-y^m(s))ds+K_d\frac{d(y^\ast-y^m)}{dt}$$
Dans le cas de notre exemple, cela correspond à la loi de commande:
$$ Q(t)=K_0+K_p(S^\ast-S^m(t))+K_i\int_0^t(S^\ast-S^m(s))ds+K_d \frac{d(S^\ast-S^m)}{dt}$$
Application pratique
Numériquement, pour calculer cette loi de commande on peut remplacer $\frac{d(S^\ast-S^m)}{dt}=-\frac{dS^m}{dt}$ (car $S^\ast$ est une constante) en substituant à $\frac{dS^m}{dt}$ le second membre de l'équation en $S$ du modèle, c'est à dire $-k\mu(S^m)B^m+\frac{Q}{V}(S_0-S^m)$. On a alors:
$$ Q(t)=K_0+K_p(S^\ast-S^m(t))+K_i\int_0^t(S^\ast-S^m(s))ds-K_d \left(-k\mu(S^m(t))B^m(t)+\frac{Q(t)}{V}(S_0-S^m(t))\right)$$
Comme la commande $Q$ se retrouve alors dans les deux membres de l'égalité, il faut résoudre cette équation. En passant tous les termes dépendant de $Q$ dans le membre de gauche, et en multipliant par le bon coefficient, on a:
$$ Q(t)=\left[K_0+K_p(S^\ast-S^m(t))+K_i\int_0^t(S^\ast-S^m(s))ds+K_d k\mu(S^m(t))B^m(t)\right] \frac{1}{1+\frac{K_d}{V}(S_0-S^m(t))}$$
End of explanation
# tracé interactif de l'évolution de la dynamique boucle fermée en fonction de la valeur de Sast, kprop, kint, kderiv
# et disturb dans le cas d'une commande en BOUCLE FERMEE PROPORTIONNELLE INTEGRALE DERIVEE AVEC PERTURBATION
def test(Sast,k0,kprop,kint,kderiv,disturb):
return culture_cont(Sast,'PID',np.array([k0,kprop,kint,kderiv]),disturb)
interact(test,Sast=(0,4,0.1),k0=(0,0.1,0.01),kprop=(0,0.1,0.01),kint=(0,0.02,0.001),kderiv=(0,0.5,0.1),disturb=(0,0.4,0.01))
Explanation: Test: Loi de commande en BOUCLE FERMEE avec actions proportionnelle intégrale et dérivée (PID), AVEC PERTURBATION
End of explanation |
1,291 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
My sample df has four columns with NaN values. The goal is to concatenate all the rows while excluding the NaN values. | Problem:
import pandas as pd
import numpy as np
df = pd.DataFrame({'keywords_0':["a", np.nan, "c"],
'keywords_1':["d", "e", np.nan],
'keywords_2':[np.nan, np.nan, "b"],
'keywords_3':["f", np.nan, "g"]})
import numpy as np
def g(df):
df["keywords_all"] = df.apply(lambda x: ','.join(x.dropna()), axis=1)
return df
df = g(df.copy()) |
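For reference, assuming the sample frame above, the joined column should come out as follows (NaN cells are simply dropped before joining):
# expected output (illustrative):
#   keywords_0 keywords_1 keywords_2 keywords_3 keywords_all
# 0          a          d        NaN          f        a,d,f
# 1        NaN          e        NaN        NaN            e
# 2          c        NaN          b          g        c,b,g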
1,292 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Structures de données
Plus de détails sur les listes
Le type de données liste possède d’autres méthodes. Voici toutes les méthodes des objets listes
Step1: Utiliser les listes comme des piles (*)
Les méthodes des listes rendent très facile l’utilisation d’une liste comme une pile, où le dernier élément ajouté est le premier élément récupéré (LIFO, "last-in, first-out"). Pour ajouter un élément au sommet de la pile, utilisez la méthode append(). Pour récupérer un élément du sommet de la pile, utilisez pop() sans indice explicite. Par exemple
Step2: Utiliser les listes comme des files (*)
Vous pouvez aussi utiliser facilement une liste comme une file, où le premier élément ajouté est le premier élément retiré (FIFO, "first-in, first-out"). Pour ajouter un élément à la fin de la file, utiliser append(). Pour récupérer un élément du devant de la file, utilisez pop() avec 0 pour indice. Par exemple
Step3: Outils de programmation fonctionnelle (*)
Il y a trois fonctions intégrées qui sont très pratiques avec les listes
Step4: 'map(fonction, sequence)' appelle fonction(element) pour chacun des éléments de la séquence et renvoie la liste des valeurs de retour. Par exemple, pour calculer les cubes
Step5: Plusieurs séquences peuvent être passées en paramètre ; la fonction doit alors avoir autant d’arguments qu’il y a de séquences et est appelée avec les éléments correspondants de chacune des séquences (ou None si l’une des séquences est plus courte que l’autre). Si None est passé en tant que fonction, une fonction retournant ses arguments lui est substituée.
reduce(fonction, sequence) renvoie une valeur unique construite par l’appel de la fonction binaire fonction sur les deux premiers éléments de la séquence, puis sur le résultat et l’élément suivant, et ainsi de suite. Par exemple, pour calculer la somme des nombres de 1 à 10
Step6: S’il y a seulement un élément dans la séquence, sa valeur est renvoyée ; si la séquence est vide, une exception est déclenchée.
Un troisième argument peut être transmis pour indiquer la valeur de départ. Dans ce cas, la valeur de départ est renvoyée pour une séquence vide, et la fonction est d’abord appliquée à la valeur de départ et au premier élément de la séquence, puis au résultat et à l’élément suivant, et ainsi de suite. Par exemple,
Step7: List Comprehensions (*)
Les list comprehensions fournissent une façon concise de créer des listes sans avoir recours à map(), filter() et/ou lambda. La définition de liste qui en résulte a souvent tendance à être plus claire que des listes construites avec ces outils. Chaque list comprehension consiste en une expression suivie d’une clause for, puis zéro ou plus clauses for ou if. Le résultat sera une liste résultant de l’évaluation de l’expression dans le contexte des clauses for et if qui la suivent. Si l’expression s’évalue en un tuple, elle doit être mise entre parenthèses.
Step8: L'instruction del
Il y a un moyen d’enlever un élément d’une liste en ayant son indice au lieu de sa valeur
Step9: del peut aussi être utilisé pour supprimer des variables complètes
Step10: Faire par la suite référence au nom a est une erreur (au moins jusqu’à ce qu’une autre valeur ne lui soit affectée). Nous trouverons d’autres utilisations de del plus tard.
N-uplets (tuples) et séquences
Nous avons vu que les listes et les chaînes ont plusieurs propriétés communes, telles que l’indexation et les opérations de découpage. Elles sont deux exemples de types de données de type séquence. Puisque Python est un langage qui évolue, d’autres types de données de type séquence pourraient être ajoutés. Il y a aussi un autre type de données de type séquence standard
Step11: Comme vous pouvez le voir, à l’affichage, les tuples sont toujours entre parenthèses, de façon à ce que des tuples de tuples puissent être interprétés correctement ; ils peuvent être saisis avec ou sans parenthèses, bien que des parenthèses soient souvent nécessaires (si le tuple fait partie d’une expression plus complexe).
Les tuples ont plein d’utilisations. Par exemple, les couples de coordonnées (x, y), les enregistrements des employés d’une base de données, etc. Les tuples, comme les chaînes, sont non-modifiables
Step12: L’instruction t = 12345, 54321, 'salut !' est un exemple d’ emballage en tuple (tuple packing)
Step13: Cela est appelé, fort judicieusement, déballage de tuple (tuple unpacking). Le déballage d’un tuple nécessite que la liste des variables à gauche ait un nombre d’éléments égal à la longueur du tuple. Notez que des affectations multiples ne sont en réalité qu’une combinaison d’emballage et déballage de tuples !
Ensembles (*)
Python comporte également un type de données pour représenter des ensembles. Un set est une collection (non rangée) sans éléments dupliqués. Les emplois basiques sont le test d’appartenance et l’élimination des entrée dupliquées. Les objets ensembles supportent les opérations mathématiques comme l’union, l’intersection, la différence et la différence symétrique.
Voici une démonstration succincte
Step14: Une autre démonstration rapide des ensembles sur les lettres uniques de deux mots
Step15: Dictionnaires
Un autre type de données intégré à Python est le dictionnaire. Les dictionnaires sont parfois trouvés dans d’autres langages sous le nom de "mémoires associatives" ou "tableaux associatifs". A la différence des séquences, qui sont indexées par un intervalle numérique, les dictionnaires sont indexés par des clés, qui peuvent être de n’importe quel type non-modifiable ; les chaînes et les nombres peuvent toujours être des clés. Les tuples peuvent être utilisés comme clés s’ils ne contiennent que des chaînes, des nombres ou des tuples. Vous ne pouvez pas utiliser des listes comme clés, puisque les listes peuvent être modifiées en utilisant leur méthode append().
Il est préférable de considérer les dictionnaires comme des ensembles non ordonnés de couples clé
Step16: Le constructeur dict() construit des dictionnaires directement à partir de listes de paires clé
Step17: Lorsque les clés sont de simples chaînes il est parfois plus simple de spécifier les paires en utilisant des arguments à mot-clé
Step18: Techniques de boucles
Lorsqu’on boucle sur un dictionnaire, les clés et les valeurs correspondantes peuvent être obtenues en même temps en utilisant la méthode iteritems()
Step19: Lorsqu’on boucle sur une séquence, l’indice donnant la position et la valeur correspondante peuvent être obtenus en même temps en utilisant la fonction enumerate().
Step20: Pour boucler sur deux séquences, ou plus, en même temps, les éléments peuvent être appariés avec la fonction zip().
Step21: Pour boucler à l’envers sur une séquence, spécifiez d’abord la séquence à l’endroit, ensuite appelez la fonction reversed().
Step22: Pour boucler sur une séquence comme si elle était triée, utilisez la fonction sorted() qui retourne une liste nouvelle triée tout en laissant la source inchangée.
Step23: Plus de détails sur les conditions (*)
Les conditions utilisées dans les instructions while et if peuvent contenir d’autres opérateurs en dehors des comparaisons.
Les opérateurs de comparaison in et not in vérifient si une valeur apparaît (ou non) dans une séquence. Les opérateurs is et is not vérifient si deux objets sont réellement le même objet ; cela se justifie seulement pour les objets modifiables comme les listes. Tous les opérateurs de comparaison ont la même priorité, qui est plus faible que celle de tous les opérateurs numériques.
Les comparaisons peuvent être enchaînées. Par exemple, a < b == c teste si a est strictement inférieur à b et de plus si b est égal à c.
Les comparaisons peuvent être combinées avec les opérateurs booléens and (et) et or (ou), et le résultat d’une comparaison (ou de n’importe quel autre expression Booléenne) peut être inversé avec not (pas). Ces opérateurs ont encore une fois une priorité inférieure à celle des opérateurs de comparaison ; et entre eux, not a la plus haute priorité, et or la plus faible, de sorte que A and not B or C est équivalent à (A and (not B)) or C. Bien sûr, les parenthèses peuvent être utilisées pour exprimer les compositions désirées.
Les opérateurs booléens and et or sont des opérateurs dits court-circuit
Step24: Notez qu’en Python, au contraire du C, les affectations ne peuvent pas être effectuées à l’intérieur des expressions. Les programmeurs C ronchonneront peut-être, mais cela évite une classe de problèmes qu’on rencontre dans les programmes C | Python Code:
ma_liste = [66.6, 333, 333, 1, 1234.5]
print (ma_liste.count(333), ma_liste.count(66.6), ma_liste.count('x'))
ma_liste2 = list(ma_liste)
ma_liste2.sort()
print (ma_liste2)
ma_liste.insert(2, -1)
ma_liste.append(333)
ma_liste
ma_liste.index(333)
ma_liste.remove(333)
print(ma_liste)
ma_liste.reverse()
ma_liste
ma_liste.sort()
ma_liste
Explanation: Structures de données
Plus de détails sur les listes
Le type de données liste possède d’autres méthodes. Voici toutes les méthodes des objets listes :
append(x) équivalent à a.insert(len(a), x).
extend(L) rallonge la liste en ajoutant à la fin tous les éléments de la liste donnée ; équivaut à a[len(a):] = L.
insert(i, x) insère un élément à une position donnée. Le premier argument est l’indice de l’élément avant lequel il faut insérer, donc a.insert(0, x) insère au début de la liste, et a.insert(len(a), x) est équivalent à a.append(x).
remove(x) enlève le premier élément de la liste dont la valeur est x. Il y a erreur si cet élément n’existe pas.
pop([i ]) enlève l’élément présent à la position donnée dans la liste, et le renvoie. Si aucun indice n’est spécifié, a.pop() renvoie le dernier élément de la liste. L’élément est aussi supprimé de la liste.
index(x) retourne l’indice dans la liste du premier élément dont la valeur est x. Il y a erreur si cet élément n’existe pas.
count(x) renvoie le nombre de fois que x apparaît dans la liste.
sort() trie les éléments à l’intérieur de la liste.
reverse() renverse l’ordre des éléments à l’intérieur de la liste.
Un exemple qui utilise toutes les méthodes des listes :
End of explanation
pile = [3, 4, 5]
pile.append(6)
pile.append(7)
pile
pile.pop()
pile
pile.pop()
pile.pop()
pile
type(pile)
Explanation: Utiliser les listes comme des piles (*)
Les méthodes des listes rendent très facile l’utilisation d’une liste comme une pile, où le dernier élément ajouté est le premier élément récupéré (LIFO, "last-in, first-out"). Pour ajouter un élément au sommet de la pile, utilisez la méthode append(). Pour récupérer un élément du sommet de la pile, utilisez pop() sans indice explicite. Par exemple :
End of explanation
file = ["Eric", "John", "Michael"]
file.append("Terry") # Terry arrive
file.append("Graham") # Graham arrive
file.pop(0)
file.pop(0)
file
Explanation: Utiliser les listes comme des files (*)
Vous pouvez aussi utiliser facilement une liste comme une file, où le premier élément ajouté est le premier élément retiré (FIFO, "first-in, first-out"). Pour ajouter un élément à la fin de la file, utiliser append(). Pour récupérer un élément du devant de la file, utilisez pop() avec 0 pour indice. Par exemple :
End of explanation
def f(x): return x % 2 != 0 and x % 3 != 0
filter(f, range(2, 25))
Explanation: Outils de programmation fonctionnelle (*)
Il y a trois fonctions intégrées qui sont très pratiques avec les listes : filter(), map(), et reduce().
'filter(fonction, sequence)' renvoit une liste (du même type, si possible) contenant les seul éléments de la
séquence pour lesquels fonction(element) est vraie. Par exemple, pour calculer quelques nombres premiers :
End of explanation
def cube(x): return x*x*x
map(cube, range(1, 11))
Explanation: 'map(fonction, sequence)' appelle fonction(element) pour chacun des éléments de la séquence et renvoie la liste des valeurs de retour. Par exemple, pour calculer les cubes :
End of explanation
def ajoute(x,y): return x+y
reduce(ajoute, range(1, 11))
Explanation: Plusieurs séquences peuvent être passées en paramètre ; la fonction doit alors avoir autant d’arguments qu’il y a de séquences et est appelée avec les éléments correspondants de chacune des séquences (ou None si l’une des séquences est plus courte que l’autre). Si None est passé en tant que fonction, une fonction retournant ses arguments lui est substituée.
reduce(fonction, sequence) renvoie une valeur unique construite par l’appel de la fonction binaire fonction sur les deux premiers éléments de la séquence, puis sur le résultat et l’élément suivant, et ainsi de suite. Par exemple, pour calculer la somme des nombres de 1 à 10 :
End of explanation
def somme(seq):
def ajoute(x,y): return x+y
return reduce(ajoute, seq, 0)
somme(range(1, 11))
somme([])
Explanation: S’il y a seulement un élément dans la séquence, sa valeur est renvoyée ; si la séquence est vide, une exception est déclenchée.
Un troisième argument peut être transmis pour indiquer la valeur de départ. Dans ce cas, la valeur de départ est renvoyée pour une séquence vide, et la fonction est d’abord appliquée à la valeur de départ et au premier élément de la séquence, puis au résultat et à l’élément suivant, et ainsi de suite. Par exemple,
End of explanation
liste_de_fruits = [' banane', ' myrtille ', 'fruit de la passion ']
nouvelle_liste_de_fruits = [fruit.strip() for fruit in liste_de_fruits]
print (nouvelle_liste_de_fruits)
vec = [2, 4, 6]
[3*x for x in vec]
[3*x for x in vec if x > 3]
[3*x for x in vec if x <= 2]
[{x: x**2} for x in vec]
[[x,x**2] for x in vec]
[x, x**2 for x in vec] # erreur : parenthèses obligatoires pour les tuples
[(x, x**2) for x in vec]
vec1 = [2, 4, 6]
vec2 = [4, 3, -9]
[x*y for x in vec1 for y in vec2]
[x+y for x in vec1 for y in vec2]
[vec1[i]*vec2[i] for i in range(len(vec1))]
Explanation: List Comprehensions (*)
Les list comprehensions fournissent une façon concise de créer des listes sans avoir recours à map(), filter() et/ou lambda. La définition de liste qui en résulte a souvent tendance à être plus claire que des listes construites avec ces outils. Chaque list comprehension consiste en une expression suivie d’une clause for, puis zéro ou plus clauses for ou if. Le résultat sera une liste résultant de l’évaluation de l’expression dans le contexte des clauses for et if qui la suivent. Si l’expression s’évalue en un tuple, elle doit être mise entre parenthèses.
End of explanation
a = [-1, 1, 66.6, 333, 333, 1234.5]
del a[0]
a
del a[2:4]
a
Explanation: The del statement
There is a way to remove an item from a list given its index instead of its value: the del statement. It can also be used to remove slices from a list (which we previously did by replacing the slice with an empty list). For example:
End of explanation
del a
Explanation: del can also be used to delete entire variables:
End of explanation
t = 12345, 54321, 'salut!'
t[0]
t
# Tuples can be nested:
u = t, (1, 2, 3, 4, 5)
u
Explanation: Referring to the name a afterwards is an error (at least until another value is assigned to it). We will find other uses for del later.
Tuples and sequences
We saw that lists and strings have many common properties, such as indexing and slicing operations. They are two examples of sequence data types. Since Python is an evolving language, other sequence data types may be added. There is also another standard sequence data type: the tuple.
A tuple consists of a number of values separated by commas, for example:
End of explanation
empty = ()
singleton = 'salut', # <-- note the trailing comma
len(empty)
len(singleton)
singleton
Explanation: As you can see, on output tuples are always enclosed in parentheses, so that nested tuples are interpreted correctly; they may be entered with or without surrounding parentheses, although parentheses are often necessary anyway (if the tuple is part of a larger expression).
Tuples have many uses: (x, y) coordinate pairs, employee records from a database, and so on. Tuples, like strings, are immutable: it is not possible to assign to the individual items of a tuple (though you can simulate much of the same effect with slicing and concatenation).
Tuple peculiarities (*)
A special problem is the construction of tuples containing 0 or 1 items: the syntax has some extra quirks to accommodate these. Empty tuples are constructed by an empty pair of parentheses; a tuple with one item is constructed by following a value with a comma (it is not sufficient to enclose a single value in parentheses). Ugly, but effective. For example:
End of explanation
x, y, z = t
Explanation: The statement t = 12345, 54321, 'salut!' is an example of tuple packing: the values 12345, 54321 and 'salut!' are packed together into a tuple. The reverse operation is also possible:
End of explanation
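A classic use of packing and unpacking together is swapping two variables without a temporary one (a short sketch added for illustration):
a, b = 1, 2
a, b = b, a    # the right-hand side is packed into a tuple, then unpacked into a and b
a, b           # (2, 1)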
panier = ['pomme', 'orange', 'pomme', 'poire', 'orange', 'banane']
fruits = set(panier) # build a set; duplicate elements are removed
fruits
'orange' in fruits # fast membership test
'ananas' in fruits
Explanation: This is called, appropriately enough, tuple unpacking. Unpacking a tuple requires that the list of variables on the left have the same number of elements as the length of the tuple. Note that multiple assignment is really just a combination of tuple packing and tuple unpacking!
Sets (*)
Python also includes a data type for representing sets. A set is an unordered collection with no duplicate elements. Basic uses include membership testing and eliminating duplicate entries. Set objects also support mathematical operations like union, intersection, difference, and symmetric difference.
Here is a brief demonstration:
End of explanation
a = set('abracadabra')
b = set('alacazam')
a # unique letters in abracadabra
b # unique letters in alacazam
a - b # letters in a but not in b
a | b # letters in either a or b
a & b # letters in both a and b
a ^ b # letters in a or b but not in both
Explanation: Une autre démonstration rapide des ensembles sur les lettres uniques de deux mots
End of explanation
tel = {'jack': 4098, 'sape': 4139}
tel['guido'] = 4127
tel
tel['jack']
del tel['sape']
tel['irv'] = 4127
tel
tel.keys()
'guido' in tel  # dict.has_key() was removed in Python 3; use the in operator instead
Explanation: Dictionaries
Another useful data type built into Python is the dictionary. Dictionaries are sometimes found in other languages as "associative memories" or "associative arrays". Unlike sequences, which are indexed by a range of numbers, dictionaries are indexed by keys, which can be of any immutable type; strings and numbers can always be keys. Tuples can be used as keys if they contain only strings, numbers, or tuples. You cannot use lists as keys, since lists can be modified in place using their append() method.
It is best to think of dictionaries as unordered sets of key:value pairs, with the requirement that the keys be unique (within one dictionary). A pair of braces creates an empty dictionary: {}. Placing a comma-separated list of key:value pairs within the braces adds the initial key:value pairs to the dictionary; this is also the way dictionaries are written on output.
The main operations on a dictionary are storing a value with some key and extracting the value given the key. It is also possible to delete key:value pairs with del. If you store using a key that is already in use, the old value associated with that key is forgotten. It is an error to extract a value using a key that does not exist.
The keys() method of a dictionary object returns the keys used in the dictionary, in arbitrary order (a list in Python 2, a view object in Python 3; if you want them sorted, use sorted() on them). To check whether a single key is in the dictionary, use the in operator (the has_key() method seen in older code is Python 2 only).
Here is a small example using a dictionary:
End of explanation
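Because extracting a missing key raises a KeyError, the get() method is a convenient companion; a short sketch reusing the tel dictionary above (the key 'grace' is made up):
tel.get('guido')       # 4127
tel.get('grace')       # None instead of a KeyError
tel.get('grace', 0)    # 0, an explicit default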
dict([('sape', 4139), ('guido', 4127), ('jack', 4098)])
dict([(x, x**2) for x in (2, 4, 6)]) # using a list comprehension
Explanation: The dict() constructor builds dictionaries directly from lists of key:value pairs stored as tuples. When the pairs form a pattern, list comprehensions can compactly specify the key-value list.
End of explanation
dict(sape=4139, guido=4127, jack=4098)
Explanation: When the keys are simple strings it is sometimes easier to specify the pairs using keyword arguments:
End of explanation
chevaliers = {'gallahad': 'le pur', 'robin': 'le brave'}
for c, v in chevaliers.items():
print (c, v)
Explanation: Looping techniques
When looping through a dictionary, the key and corresponding value can be retrieved at the same time using the items() method (called iteritems() in Python 2).
End of explanation
for i, v in enumerate(['tic', 'tac', 'toe']):
print (i, v)
Explanation: When looping through a sequence, the position index and corresponding value can be retrieved at the same time using the enumerate() function.
End of explanation
questions = ['nom', 'but', 'drapeau']
reponses = ['lancelot', 'le sacre graal', 'le bleu']
for q, r in zip(questions, reponses):
print ("Quel est ton %s? C'est %s." % (q, r))
Explanation: To loop over two or more sequences at the same time, the entries can be paired with the zip() function.
End of explanation
for i in reversed(range(1, 10, 2)):  # xrange() existed only in Python 2
print (i)
Explanation: To loop over a sequence in reverse, first specify the sequence in a forward direction and then call the reversed() function.
End of explanation
panier = ['pomme', 'orange', 'pomme', 'poire', 'orange', 'banane']
for f in sorted(set(panier)):
print (f)
Explanation: To loop over a sequence in sorted order, use the sorted() function, which returns a new sorted list while leaving the source unaltered.
End of explanation
chaine1, chaine2, chaine3 = '', 'Trondheim', 'Hammer Dance'
non_null = chaine1 or chaine2 or chaine3
non_null
Explanation: More on conditions (*)
The conditions used in while and if statements can contain operators other than comparisons.
The comparison operators in and not in check whether a value occurs (or does not occur) in a sequence. The operators is and is not check whether two objects are really the same object; this only matters for mutable objects like lists. All comparison operators have the same priority, which is lower than that of all numerical operators.
Comparisons can be chained. For example, a < b == c tests whether a is strictly less than b and, moreover, whether b equals c.
Comparisons may be combined using the Boolean operators and and or, and the outcome of a comparison (or of any other Boolean expression) may be negated with not. These have lower priority than comparison operators; between them, not has the highest priority and or the lowest, so that A and not B or C is equivalent to (A and (not B)) or C. As always, parentheses can be used to express the desired composition.
The Boolean operators and and or are so-called short-circuit operators: their arguments are evaluated from left to right, and evaluation stops as soon as the outcome is determined. For example, if A and C are true but B is false, A and B and C does not evaluate the expression C. In general, the return value of a short-circuit operator, when used as a general value rather than as a Boolean, is the last evaluated argument.
It is possible to assign the result of a comparison or other Boolean expression to a variable. For example
End of explanation
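A tiny sketch of the short-circuit return value described above (added for illustration):
0 or '' or 'default'   # 'default', the last argument evaluated
3 and 0 and 10         # 0, evaluation stopped at the first false value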
(1, 2, 3) < (1, 2, 4)
[1, 2, 3] < [1, 2, 4]
'ABC' < 'C' < 'Pascal' < 'Python'
(1, 2, 3, 4) < (1, 2, 4)
(1, 2) < (1, 2, -1)
(1, 2, 3) == (1.0, 2.0, 3.0)
(1, 2, ('aa', 'ab')) < (1, 2, ('abc', 'a'), 4)
Explanation: Note that in Python, unlike C, assignment cannot occur inside expressions. C programmers may grumble about this, but it avoids a common class of problems encountered in C programs: typing = in an expression when == was intended.
Comparing sequences and other types (*)
Sequence objects may be compared to other objects with the same sequence type. The comparison uses lexicographical ordering: first the first two items are compared, and if they differ this determines the outcome of the comparison; if they are equal, the next two items are compared, and so on, until either sequence is exhausted. If two items to be compared are themselves sequences of the same type, the lexicographical comparison is carried out recursively. If all items of two sequences compare equal, the sequences are considered equal. If one sequence is an initial sub-sequence of the other, the shorter sequence is the smaller (lesser) one. Lexicographical ordering for strings uses ASCII ordering for individual characters. Some examples of comparisons between sequences of the same type:
End of explanation |
1,293 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
I combined all the code lines I said should be at the beginning of your code.
Step1: Importing mltools
First you want to make sure it sits in the same folder or wherever you put your PYTHON_PATH pointer to. By default it will be the same folder you're running things from (which is the case here).
You can see that I have that folder in my directory.
Step2: With it there, I can just import it and use it as described in the HW assignment.
Step3: Using mltools
Step4: One important tool that you will use ALL the time is the shuffle and split data methods. The shuffle is used to add randomness to the order of points in case their original order carried some meaning. The split allows you to create train and test data easily.
Step5: A common mistake here is to split and then forget to use the new split data, using X, Y instead.
KNN Classifier
You can read about it on the wiki page or in your notes.
Step6: A VERY good practice thing you should do after you make predictions is to make sure all the dimensions match. That way you at least know that you probably ran it on the right data.
Plotting the classifier and predictions
This is useful if you have 2D data (or 1D for that matter). To show how it works we'll repeat the process using only the first two columns of X.
We plot the areas of classification and the training data.
Step7: Now let's plot the test data with the predicted class. Notice that to do so I just had to change the set of points and classes that I give the plotClassify2D method.
Step8: In the plot above we plotted the test data with the predicted class. That's why it looks perfectly correct. Next we'll plot the test data with the true class.
Now we can see some mistakes.
Step9: Plotting Error
In the HW assignment you are required to plot the error for the training and validation using the semilogx method. To show you how to do that, I'll use random errors.
In my plotting I will use a more commonly used way of plotting using the axis handler. This way gives a lot more control though I will not demondtrate that too much here. I will try to do add new plotting stuff every new discussion as producing nice plots is 80% of the job for a data scientist | Python Code:
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
np.random.seed(0)
Explanation: I combined all the code lines I said should be at the beginning of your code.
End of explanation
!ls
Explanation: Importing mltools
First you want to make sure it sits in the same folder or wherever you put your PYTHON_PATH pointer to. By default it will be the same folder you're running things from (which is the case here).
You can see that I have that folder in my directory.
End of explanation
import mltools as ml
# If this prints an error, you either pointed PYTHON_PATH somewhere else or are in a different directory.
Explanation: With it there, I can just import it and use it as described in the HW assignment.
End of explanation
path_to_file = 'HW1-code/data/iris.txt'
iris = np.genfromtxt(path_to_file, delimiter=None) # Loading the txt file
X = iris[:, :-1] # Features are the first 4 columns
Y = iris[:, -1] # Classes are the last column
Explanation: Using mltools
End of explanation
X, Y = ml.shuffleData(X, Y) ## MAKE SURE YOU HAVE BOTH X AND Y!!! (Why?)
# It's still the same size, just different order
Xtr, Xva, Ytr, Yva = ml.splitData(X, Y, 0.75) # Splitting keeping 75% as training and the rest as validation
Explanation: One important tool that you will use ALL the time is the shuffle and split data methods. The shuffle is used to add randomness to the order of points in case their original order carried some meaning. The split allows you to create train and test data easily.
End of explanation
# Creating a classifier.
knn = ml.knn.knnClassify()
# Training the classifier.
knn.train(Xtr, Ytr, K=5) # What is this thing doing? (Look at the code)
# Making predictions
YvaHat = knn.predict(Xva)
Explanation: A common mistake here is to split and then forget to use the new split data, using X, Y instead.
KNN Classifier
You can read about it on the wiki page or in your notes.
End of explanation
knn = ml.knn.knnClassify()
knn.train(Xtr[:, :2], Ytr, K=5)
ml.plotClassify2D(knn, Xtr[:, :2], Ytr)
plt.show()
Explanation: A VERY good practice after you make predictions is to make sure all the dimensions match. That way you at least know that you probably ran it on the right data.
Plotting the classifier and predictions
This is useful if you have 2D data (or 1D for that matter). To show how it works we'll repeat the process using only the first two columns of X.
We plot the areas of classification and the training data.
End of explanation
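Following the dimension-check advice above, a quick sanity check might look like this (a small sketch reusing the arrays defined earlier):
print(YvaHat.shape, Yva.shape)   # the two shapes should be identical
assert YvaHat.shape == Yva.shape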
YvaHat = knn.predict(Xva[:, :2])
ml.plotClassify2D(knn, Xva[:, :2], YvaHat)
plt.show()
Explanation: Now let's plot the test data with the predicted class. Notice that to do so I just had to change the set of points and classes that I give the plotClassify2D method.
End of explanation
ml.plotClassify2D(knn, Xva[:, :2], Yva)
plt.show()
Explanation: In the plot above we plotted the test data with the predicted class. That's why it looks perfectly correct. Next we'll plot the test data with the true class.
Now we can see some mistakes.
End of explanation
K = [1, 2, 5, 10, 50, 100, 200]
train_err = np.ones(7) * np.random.rand(7)
val_err = np.ones(7) * np.random.rand(7)
# Creating subplots with just one subplot so basically a single figure.
fig, ax = plt.subplots(1, 1, figsize=(10, 8))
# I added lw (line width) and the label.
ax.semilogx(K, train_err, 'r-', lw=3, label='Training')
ax.semilogx(K, val_err, 'g-', lw=3, label='Validation')
# Adding a legend to the plot that will use the labels from the 'label'.
ax.legend()
# Controlling the axis.
ax.set_xlim(0, 200)
ax.set_ylim(0, 1)
# And still doing this to clean the canvas.
plt.show()
Explanation: Plotting Error
In the HW assignment you are required to plot the error for the training and validation using the semilogx method. To show you how to do that, I'll use random errors.
In my plotting I will use a more commonly used way of plotting with the axis handler. This way gives a lot more control, though I will not demonstrate that too much here. I will try to add new plotting stuff every new discussion, as producing nice plots is 80% of the job for a data scientist :)
End of explanation |
1,294 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examples on the use of roppy's FluxSection class
The FluxSection class implements a staircase approximation to a section,
starting and ending in psi-points and following U- and V-edges.
No interpolation is needed to estimate the flux, giving good conservation
properties. On the other hand, this limits the flexibility of the approach.
As distances get distorted, depending on the stair shape, it is not suited
for plotting normal currents and other properties along the section.
Step1: User settings
First the ROMS dataset and the section must be described. The section is described by its end points.
By convention the flux is considered positive if the direction is to the right of the section
going from the first to the second end point.
Step2: Make SGrid and FluxSection objects
This datafile contains enough horizontal and vertical information to determine
an SGrid object.
The SGrid class has a method ll2xy to convert from lon/lat to grid coordinates.
Next, the nearest $\psi$-points are found and a staircase curve joining
the two $\psi$-points is constructed. A FluxSection object can then be created.
Step3: Visual check
To check the section specification plot it in a simple map.
Step4: Staircase approximation
The next plot is just an illustration of how the function staircase_from_line works, interpolating the straight line in the grid plane as closely as possible.
Step5: Read the velocity
To compute the fluxes, we need the 3D velocity components
Step6: Total volume flux
Obtaining the total volume flux is easy, there is a convenient method transport for this purpose returning the net and positive transport to the right of the section (northwards in this case).
Step7: Flux limited by watermass
The class is flexible enough that more complicated flux calculations can be done.
The method flux_array returns a 2D array of flux through the cells along the section.
Using numpy's advanced logical indexing, different conditions can be prescribed.
For instance a specific water mass can be given by inequalities in salinity and temperature.
NOTE
Step8: Property flux
The flux of properties can be determined. Different definitions and/or reference levels may be applied.
As an example, the code below computes the total transport of salt carried by the net flux through the section
Step9: Flux in a depth range
The simplest way to compute the flux in a depth range is to use only
flux cells where the $\rho$-point is in the depth range. This can be
done by the logical indexing.
Step10: Alternative algorithm
A more accurate algorithm is to include the fraction of the grid cell
above the depth limit. This can be done by an integrating kernel,
that is a 2D array K where the entries are zero if the cell is totally
below the limit, one if totally above the limit and the fraction above the
limit if the flux cell contains the limit. The total flux above the limit is found
by multiplying the flux array with K and summing.
This algorithm is not more complicated than above. In our example, the
estimated flux values are almost equal, we had to include the third decimal to
notice the difference.
Step11: Componentwise fluxes
It may be instructional to examine the staircase behaviour of the flux.
We may separate the flux across U- and V-edges respectively. The
FluxSection class has 1D horizontal logical arrays Eu and Ev
pointing to the respective edge types.
To use the logical indexing pattern
from the other examples, this has to be extended vertically so that we get
a condition on the flux cell indicating whether it is part of a U- or V-edge.
The numpy function logical_and.outer with a True argument may be used
for this. [Better ways?]
Step12: Flux calculations on a subgrid
It may save memory and I/O time to work on a subgrid. Just specify the subgrid using
the SGrid subgrid convention and use the staircase function unchanged. The SGrid object
is responsible for handling any offsets. | Python Code:
# Imports
# The class depends on `numpy` and is part of `roppy`. To read the data `netCDF4` is needed.
# The graphic package `matplotlib` is not required for `FluxSection`, but is used for visualisation in this notebook.
import numpy as np
import matplotlib.pyplot as plt
from netCDF4 import Dataset
import roppy
%matplotlib inline
Explanation: Examples on the use of roppy's FluxSection class
The FluxSection class implements a staircase approximation to a section,
starting and ending in psi-points and following U- and V-edges.
No interpolation is needed to estimate the flux, giving good conservation
properties. On the other hand, this limits the flexibility of the approach.
As distances get distorted, depending on the stair shape, it is not suited
for plotting normal currents and other properties along the section.
End of explanation
# Settings
# Data
romsfile = './data/ocean_avg_example.nc'
tstep = 2 # Third time frame in the file
# Section end points
lon0, lat0 = 4.72, 60.75 # Section start - Feie
lon1, lat1 = -0.67, 60.75 # Section stop - Shetland
Explanation: User settings
First the ROMS dataset and the section must be described. The section is described by its end points.
By convention the flux is considered positive if the direction is to the right of the section
going from the first to the second end point.
End of explanation
# Make SGrid and FluxSection objects
fid = Dataset(romsfile)
grid = roppy.SGrid(fid)
# End points in grid coordinates
x0, y0 = grid.ll2xy(lon0, lat0)
x1, y1 = grid.ll2xy(lon1, lat1)
# Find nearest psi-points
i0, i1, j0, j1 = [int(np.ceil(v)) for v in [x0, x1, y0, y1]]
# The staircase flux section
I, J = roppy.staircase_from_line(i0, i1, j0, j1)
sec = roppy.FluxSection(grid, I, J)
Explanation: Make SGrid and FluxSection objects
This datafile contains enough horizontal and vertical information to determine
an SGrid object.
The SGrid class has a method ll2xy to convert from lon/lat to grid coordinates.
Next, the nearest $\psi$-points are found and a staircase curve joining
the two $\psi$-points is constructed. A FluxSection object can then be created.
End of explanation
# Make a quick and dirty horizontal plot of the section
# Read topography
H = fid.variables['h'][:,:]
Levels = (0, 100, 300, 1000, 3000, 5000)
plt.contourf(H, levels=Levels, cmap=plt.get_cmap('Blues'))
plt.colorbar()
# Poor man's coastline
plt.contour(H, levels=[10], colors='black')
# Plot the stair case section
# NOTE: subtract 0.5 to go from psi-index to grid coordinate
plt.plot(sec.I - 0.5, sec.J - 0.5, lw=2, color='red') # Staircase
Explanation: Visual check
To check the section specification plot it in a simple map.
End of explanation
# Zoom in on the staircase
# Plot blue line between end points
plt.plot([sec.I[0]-0.5, sec.I[-1]-0.5], [sec.J[0]-0.5, sec.J[-1]-0.5])
# Plot red staircase curve
plt.plot(sec.I-0.5, sec.J-0.5, lw=2, color='red')
plt.grid(True)
_ = plt.axis('equal')
Explanation: Staircase approximation
The next plot is just an illustration of how the function staircase_from_line works, interpolating the straight line in the grid plane as closely as possible.
End of explanation
# Read the velocity
U = fid.variables['u'][tstep, :, :, :]
V = fid.variables['v'][tstep, :, :, :]
Explanation: Read the velocity
To compute the fluxes, we need the 3D velocity components
End of explanation
# Compute volume flux through the section
# ----------------------------------------
netflux,posflux = sec.transport(U, V)
print("Net flux = {:6.2f} Sv".format(netflux * 1e-6))
print("Total northwards flux = {:6.2f} Sv".format(posflux * 1e-6))
print("Total southwards flux = {:6.2f} Sv".format((posflux-netflux)*1e-6))
Explanation: Total volume flux
Obtaining the total volume flux is easy, there is a convenient method transport for this purpose returning the net and positive transport to the right of the section (northwards in this case).
End of explanation
# Flux of specific water mass
# --------------------------------
# Read hydrography
S = fid.variables['salt'][tstep, :, :]
T = fid.variables['temp'][tstep, :, :]
# Compute section arrays
Flux = sec.flux_array(U, V)
S = sec.sample3D(S)
T = sec.sample3D(T)
# Compute Atlantic flux where S > 34.9 and T > 5
S_lim = 34.9
T_lim = 5.0
cond = (S > S_lim) & (T > T_lim)
net_flux = np.sum(Flux[cond]) * 1e-6
# Northwards component
cond1 = (cond) & (Flux > 0)
north_flux = np.sum(Flux[cond1]) * 1e-6
print("Net flux, S > {:4.1f}, T > {:4.1f} = {:6.2f} Sv".format(S_lim, T_lim, net_flux))
print("Northwards flux, S > {:4.1f}, T > {:4.1f} = {:6.2f} Sv".format(S_lim, T_lim, north_flux))
print("Southwards flux, S > {:4.1f}, T > {:4.1f} = {:6.2f} Sv".format(S_lim, T_lim, north_flux - net_flux))
Explanation: Flux limited by watermass
The class is flexible enough that more complicated flux calculations can be done.
The method flux_array returns a 2D array of flux through the cells along the section.
Using numpy's advanced logical indexing, different conditions can be prescribed.
For instance a specific water mass can be given by inequalities in salinity and temperature.
NOTE: Different conditions must be parenthesized before combining them with logical operators.
The 3D hydrographic fields must be sampled onto the section cells; this is done by the method sample3D.
End of explanation
# Salt flux
# ---------
rho = 1025.0 # Density, could compute this from hydrography
salt_flux = rho * np.sum(Flux * S)
# unit Gg/s = kt/s
print "Net salt flux = {:5.2f} Gg/s".format(salt_flux * 1e-9)
Explanation: Property flux
The flux of properties can be determined. Different definitions and/or reference levels may be applied.
As an example, the code below computes the total transport of salt carried by the net flux through the section
End of explanation
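Another property flux that is often computed the same way is the freshwater transport relative to a reference salinity; a hedged sketch reusing the Flux and S arrays above (the reference value S_ref = 35.0 is an assumption, not from the notebook):
S_ref = 35.0
fw_flux = np.sum(Flux * (S_ref - S) / S_ref)    # m3/s of freshwater relative to S_ref
print("Net freshwater transport = {:6.4f} Sv".format(fw_flux * 1e-6))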
# Flux in a depth range
# ----------------------
depth_lim = 100.0
# Have not sampled the depth of the rho-points,
# instead approximate by the average from w-depths
z_r = 0.5*(sec.z_w[:-1,:] + sec.z_w[1:,:])
# Shallow flux
cond = z_r > -depth_lim
net_flux = np.sum(Flux[cond]) * 1e-6
cond1 = (cond) & (Flux > 0)
north_flux = np.sum(Flux[cond1]) * 1e-6
print("Net flux, depth < {:4.0f} = {:6.3f} Sv".format(depth_lim, net_flux))
print("Northwards flux, depth < {:4.0f} = {:6.3f} Sv".format(depth_lim, north_flux))
print("Southwards flux, depth < {:4.0f} = {:6.3f} Sv".format(depth_lim, north_flux - net_flux))
# Deep flux
cond = z_r < -depth_lim
net_flux = np.sum(Flux[cond]) * 1e-6
cond1 = (cond) & (Flux > 0)
north_flux = np.sum(Flux[cond1]) * 1e-6
print("")
print("Net flux, depth > {:4.0f} = {:6.3f} Sv".format(depth_lim, net_flux))
print("Northwards flux, depth > {:4.0f} = {:6.3f} Sv".format(depth_lim, north_flux))
print("Southwards flux, depth > {:4.0f} = {:6.3f} Sv".format(depth_lim, north_flux - net_flux))
Explanation: Flux in a depth range
The simplest way to compute the flux in a depth range is to use only
flux cells where the $\rho$-point is in the depth range. This can be
done by the logical indexing.
End of explanation
depth_lim = 100
# Make an integration kernel
K = (sec.z_w[1:,:] + depth_lim) / sec.dZ # Fraction of cell above limit
np.clip(K, 0.0, 1.0, out=K)
net_flux = np.sum(K*Flux) * 1e-6
north_flux = np.sum((K*Flux)[Flux>0]) *1e-6
print("Net flux, depth > {:4.0f} = {:6.3f} Sv".format(depth_lim, net_flux))
print("Northwards flux, depth > {:4.0f} = {:6.3f} Sv".format(depth_lim, north_flux))
Explanation: Alternative algorithm
A more accurate algorithm is to include the fraction of the grid cell
above the depth limit. This can be done by an integrating kernel,
that is a 2D array K where the entries are zero if the cell is totally
below the limit, one if totally above the limit and the fraction above the
limit if the flux cell contains the limit. The total flux above the limit is found
by multiplying the flux array with K and summing.
This algorithm is not more complicated than above. In our example, the
estimated flux values are almost equal, we had to include the third decimal to
notice the difference.
End of explanation
# Examine the staircase
# ------------------------
# Flux in X-direction (mostly east)
cond = sec.Eu # Only use U-edges
# Extend the array in the vertical
cond = np.logical_and.outer(sec.N*[True], cond)
net_flux = np.sum(Flux[cond]) * 1e-6
# Postive component
cond1 = (cond) & (Flux > 0)
pos_flux = np.sum(Flux[cond1]) * 1e-6
print("net X flux = {:6.2f} Sv".format(net_flux))
print("pos X flux = {:6.2f} Sv".format(pos_flux))
print("neg X flux = {:6.2f} Sv".format(pos_flux-net_flux))
# Flux in Y-direction (mostly north)
cond = np.logical_and.outer(sec.N*[True], sec.Ev) # Only V-edges
net_flux = np.sum(Flux[cond]) * 1e-6
# Postive component
cond1 = (cond) & (Flux > 0)
pos_flux = np.sum(Flux[cond1]) * 1e-6
print("")
print("net Y flux = {:6.2f} Sv".format(net_flux))
print("pos Y flux = {:6.2f} Sv".format(pos_flux))
print("neg Y flux = {:6.2f} Sv".format(pos_flux-net_flux))
Explanation: Componentwise fluxes
It may be instructional to examine the staircase behaviour of the flux.
We may separate the flux across U- and V-edges respectively. The
FluxSection class has 1D horizontal logical arrays Eu and Ev
pointing to the respective edge types.
To use the logical indexing pattern
from the other examples, this has to be extended vertically so that we get
a condition on the flux cell indicating whether it is part of a U- or V-edge.
The numpy function logical_and.outer with a True argument may be used
for this. [Better ways?]
End of explanation
# Print the limits of the section
## print(I[0], I[-1], J[0], J[-1])
# Specify a subgrid
i0, i1, j0, j1 = 94, 131, 114, 130 # Minimal subgrid
# Check that the section is contained in the subgrid
assert i0 < I[0] < i1 and i0 < I[-1] < i1
assert j0 < J[0] < j1 and j0 < J[-1] < j1
# Make a SGrid object for the subgrid
grd1 = roppy.SGrid(fid, subgrid=(i0,i1,j0,j1))
# Make a FluxSection object
sec1 = roppy.FluxSection(grd1, I, J)
# Read velocity for the subgrid only
U1 = fid.variables['u'][tstep, :, grd1.Ju, grd1.Iu]
V1 = fid.variables['v'][tstep, :, grd1.Jv, grd1.Iv]
# Compute net and positive fluxes
netflux1, posflux1 = sec1.transport(U1, V1)
# Control that the values have not changed from the computations for the whole grid
print(" whole grid subgrid")
print("Net flux : {:6.3f} {:6.3f} Sv".format(netflux * 1e-6, netflux1 * 1e-6))
print("Total northwards flux : {:6.3f} {:6.3f} Sv".format(posflux * 1e-6, posflux1 * 1e-6))
Explanation: Flux calculations on a subgrid
It may save memory and I/O time to work on a subgrid. Just specify the subgrid using
the SGrid subgrid convention and use the staircase function unchanged. The SGrid object
is responsible for handling any offsets.
End of explanation |
1,295 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hierarchical Topic Models and the Nested Chinese Restaurant Process
Tun-Chieh Hsu, Xialingzi Jin, Yen-Hua Chen
I. Background
Recently, complex probabilistic models have become increasingly prevalent in a variety of domains. However, several challenges arise from their open-ended nature. That is, data sets often grow over time, and as they grow they bring new entities and new structures to the fore. Take the problem of learning a topic hierarchy from data as an example. Given a collection of documents, each of which contains a set of words, the goal is to discover common usage patterns or topics in the documents, and to organize these topics into a hierarchy.
This paper proposes a new method that specifies a generative probabilistic model for hierarchical structures and adopts a Bayesian perspective to learn such structures from data. The hierarchies in this case are treated as random variables and specified procedurally. In addition, the underlying approach for constructing the probabilistic object is the Chinese restaurant process (CRP), a distribution on partitions of integers. In this paper, the authors extend the CRP to a hierarchy of partitions, known as the nested Chinese restaurant process (nCRP), and apply it as a representation of prior and posterior distributions for topic hierarchies. To be more specific, each node in the hierarchy is associated with a topic, where a topic is a distribution across words. A document is generated by choosing a path from the root to a leaf, repeatedly sampling topics along that path, and sampling the words from the selected topics. Thus the organization of topics into a hierarchy aims to capture the breadth of usage of topics across the corpus, reflecting underlying syntactic and semantic notions of generality and specificity.
II. Algorithm Description
A. Chinese Restaurant Process
The CRP is analogous to seating customers at tables in a Chinese restaurant. Imagine a Chinese restaurant with an infinite number of circular tables, each with infinite capacity. Customer 1 sits at the first table. The next customer either sits at the same table as customer 1, or at the next table. The $m$th subsequent customer sits at a table drawn from the following distribution
Step1: B. Function construction
B.1 Chinese Restaurant Process (CRP)
Step2: B.2 Node Sampling
Step3: B.3 Gibbs sampling -- $z_{m,n}$
Step4: B.4 Gibbs sampling -- ${\bf c}_{m}$, CRP prior
Step5: B.5 Gibbs sampling -- ${\bf c}_{m}$, likelihood
Step6: B.6 Gibbs sampling -- ${\bf c}_{m}$, posterior
Step7: B.7 Gibbs sampling -- $w_{n}$
Step8: C. Gibbs sampling
C.1 Find most common value
Step9: C.2 Gibbs sampling
Step10: V. Topic Model with hLDA
Gibbs sampling in section IV distributes the input vocabularies from the documents in the corpus to the available topics, which are sampled from the $L$-dimensional topics. In section V, an $n$-level tree will be presented by a tree plot, in which the root node will be more general and the leaves more specific. In addition, the tree plot will return the words sorted by their frequencies for each node.
A. hLDA model
Step11: B. hLDA plot
Step12: VI. Empirical Example
A. Simulated data
For the simulated data example, each document $d$ in the corpus is generated from a normal distribution with a different number of words, $w_{d,n}$, where $n\in{10,...,200}$ and ${\bf w}_{d}\sim N(0, 1)$. In this example, by generating 35 documents in the corpus, we expect to see a simulated tree with the numbers near the mean $0$, such as {w0, w1, w-1}, in the root node and the numbers far from the mean, such as {w10, w-10, w15}, in the leaves.
Step13: B. Real data
For the real data example, the corpus of documents is generated from Blei's sample data. The documents are split by paragraph; that is, each paragraph represents one document. We take the first 11 documents to form the sample corpus used in the hLDA model. To form the corpus, we read it as a large list of lists. The sublists in the nested list represent the documents; the elements in each sublist represent the words in a specific document. Note that punctuation is removed from the corpus.
Step14: VII. Download and Install from Github
The hLDA code of the paper Hierarchical Topic Models and the Nested Chinese Restaurant Process is released on github with the package named hLDA (click to clone). One can easily download (click to download) and install by running python setup.py install. The package provides 4 functions
Step15: VIII. Optimization
To optimize the hLDA model, we choose cython to speed the functions up, since the only matrix calculation function, c_m, was already vectorized. However, after applying cython, the code is not able to speed up efficiently. The possible reasons are shown as follows.
First, if we simply speed up a single function, cython does it well. Take the first function, node_sampling, for example: the run time decreased from 52.2 ms to 47.2 ms, which means cython is about 10% faster than the python code. On the other hand, if we try to speed up all the functions used in the gibbs sampling function, gibbs, the run time is similar or even slower, since it has to import the external cython functions to complete the work.
Second, most of the variables used in hLDA are lists. When coding cython in python, we fail to initialize the data type for the list variables efficiently.
Step16: IX. Code Comparison
This section introduces the LDA model as a comparison with the hLDA model. The LDA model needs the user to specify the number of topics and returns the probability of the words in each topic, which are the main differences compared to the hLDA model. The hLDA model applies a nonparametric prior which allows arbitrary factors and readily accommodates growing data collections. That is, the hLDA model will sample the number of topics by the nCRP and return a topic hierarchy tree.
The lda_topic function returns a single-layer word distributions for topics, which number is specified as parameter in the function. In each topic, the LDA model gives the probability distribution of possible words. In LDA model, it treats corpus as a big document, instead of consider each document by it own. Furthermore, the model is not able to illustrate the relationship between topics and words which are provided in hLDA model. | Python Code:
import numpy as np
from scipy.special import gammaln
import random
from collections import Counter
import string
import graphviz
import pygraphviz
import pydot
Explanation: Hierarchical Topic Models and the Nested Chinese Restaurant Process
Tun-Chieh Hsu, Xialingzi Jin, Yen-Hua Chen
I. Background
Recently, complex probabilistic models have become increasingly prevalent in a variety of domains. However, several challenges arise from their open-ended nature. That is, data sets often grow over time, and as they grow they bring new entities and new structures to the fore. Take the problem of learning a topic hierarchy from data as an example. Given a collection of documents, each of which contains a set of words, the goal is to discover common usage patterns or topics in the documents, and to organize these topics into a hierarchy.
This paper proposes a new method that specifies a generative probabilistic model for hierarchical structures and adopts a Bayesian perspective to learn such structures from data. The hierarchies in this case are treated as random variables and specified procedurally. In addition, the underlying approach for constructing the probabilistic object is the Chinese restaurant process (CRP), a distribution on partitions of integers. In this paper, the authors extend the CRP to a hierarchy of partitions, known as the nested Chinese restaurant process (nCRP), and apply it as a representation of prior and posterior distributions for topic hierarchies. To be more specific, each node in the hierarchy is associated with a topic, where a topic is a distribution across words. A document is generated by choosing a path from the root to a leaf, repeatedly sampling topics along that path, and sampling the words from the selected topics. Thus the organization of topics into a hierarchy aims to capture the breadth of usage of topics across the corpus, reflecting underlying syntactic and semantic notions of generality and specificity.
II. Algorithm Description
A. Chinese Restaurant Process
The CRP is analogous to seating customers at tables in a Chinese restaurant. Imagine a Chinese restaurant with an infinite number of circular tables, each with infinite capacity. Customer 1 sits at the first table. The next customer either sits at the same table as customer 1, or at the next table. The $m$th subsequent customer sits at a table drawn from the following distribution:
\begin{align}
p(\text{occupied table}\hspace{0.5ex}i\hspace{0.5ex}\text{ | previous customers}) = \frac{m_i}{\gamma+m-1}\
p(\text{next unoccupied table | previous customers}) = \frac{\gamma}{\gamma + m -1}
\end{align}
where $m_i$ is the number of previous customers at table $i$, and $\gamma$ is a parameter. After $M$
customers sit down, the seating plan gives a partition of $M$ items. This distribution gives
the same partition structure as draws from a Dirichlet process.
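A tiny numerical illustration of these probabilities (an added sketch, not from the paper): with $\gamma = 1$ and four previous customers seated at two tables of sizes 3 and 1, the fifth customer's seating distribution is
gamma, counts = 1.0, [3, 1]
m = sum(counts) + 1                                  # this customer is the m-th, here m = 5
p_new = gamma / (gamma + m - 1)                      # 0.2, probability of opening a new table
p_occupied = [n / (gamma + m - 1) for n in counts]   # [0.6, 0.2]
print(p_new, p_occupied)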
B. Nested Chinese Restaurant Process
A nested Chinese restaurant process (nCRP) is an extended version of CRP. Suppose that there are an infinite number of infinite-table Chinese restaurants in a city. A restaurant is determined to be the root restaurant and on each of its infinite tables is a card with the name of another restaurant. On each of the tables in those restaurants are cards that refer to other restaurants, and this structure repeats infinitely. Each restaurant is referred to exactly once. As a result, the whole process could be imagined as an infinitely-branched tree.
Now, consider a tourist who arrives in the city for a culinary vacation. On the first day, he enters the root Chinese restaurant and selects a table from the equation above. On the second day, he goes to the restaurant referred to by that table and again selects a table from the same equation. This process is repeated for $L$ days, and at the end the tourist has sat in $L$ restaurants which constitute a path from the root to a restaurant at the $L$th level of the infinite tree. After $M$ tourists take L-day vacations, the collection of paths describes a particular L-level subtree of the infinite tree.
C. Hierarchical Topic Model (hLDA)
The hierarchical latent Dirichlet allocation model (hLDA), together with the nested Chinese restaurant process (nCRP), models the pattern of words in a collection of documents. There are 3 steps in hLDA: (1) draw a path from the root node to a leaf; (2) given the chosen path, draw a vector of topic proportions along the path; (3) draw the words from the topics. In addition, all documents share the topic associated with the root restaurant.
Let $c_1$ be the root restaurant.
For each level $\ell\in{2,...,L}$:
Draw a table from restaurant $c_{\ell-1}$ using CRP. Set $c_{\ell}$ to be the restaurant referred to by that table.
Draw an L-dimensional topic proportion vector $\theta$ from Dir($\alpha$).
For each word $n\in{1,...,N}$:
Draw $z\in{1,...,L}$ from Mult($\theta$).
Draw $w_n$ from the topic associated with restaurant $c_z$.
<img src="hLDA.png" style="width:400px">
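The generative process above can be summarised in a short sketch (illustrative only -- the paths, topics and vocab arguments are placeholders, and this is not the implementation used later in this notebook):
import numpy as np
def generate_document(paths, topics, vocab, alpha, L, N):
    c = paths()                                   # 1. draw a root-to-leaf path from the nCRP
    theta = np.random.dirichlet([alpha] * L)      # 2. topic proportions ~ Dir(alpha)
    words = []
    for _ in range(N):                            # 3. for each word
        z = np.random.multinomial(1, theta).argmax()            # level ~ Mult(theta)
        w_id = np.random.multinomial(1, topics[c[z]]).argmax()  # word from the chosen node's topic
        words.append(vocab[w_id])
    return words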
Notation:
$T$ : L-level infinite-tree - drawn from CRP($\gamma$)
$\theta$ : L-dimensional topic propotional distribution - drawn from Dir($\alpha$)
$\beta$ : probability of words for each topic - drawn from $\eta$
$c_{\ell}$ : L-level paths, given $T$
$z$ : actual number of topics for each level - drawn from Mult($\theta$)
$w$ : word distribution for each topic at each level
$N$ : number of words - $n\in{1,...,N}$
$M$ : number of documents - $m\in{1,...,M}$
III. Approximate Inference by Gibbs Sampling
Gibbs sampling samples from the posterior nCRP and the corresponding topics in the hLDA model. The sampler is divided into two parts -- $z_{m,n}$ and $c_{m,\ell}$. In addition, the variables $\theta$ and $\beta$ are integrated out.
A. Notation
$w_{m,n}$ : the $n$th word in the $m$th documnt
$c_{m,\ell}$ : the restaurant corresponding to the $\ell$th topic in document $m$
$z_{m,n}$ : the assignment of the $n$th word in the $m$th document to one of the $L$ available topics
B. Topic distribution : $z_{m,n}$
\begin{align}
p(z_{i}=j\hspace{0.5ex}|\hspace{0.5ex}{\bf z}{-i},{\bf w})\propto\frac{n{-i,j}^{(w_{i})}+\beta}{n_{-i,j}^{(\cdot)}+W\beta}\frac{n_{-i,j}^{(d_{i})}+\alpha}{n_{-i,\cdot}^{(d_{i})}+T\alpha}
\end{align}
$z_{i}$ : assignments of words to topics
$n_{-i,j}^{(w_{i})}$ : number of words assigned to topic $j$ that are the same as $w_i$
$n_{-i,j}^{(\cdot)}$ : total number of words assigned to topic $j$
$n_{-i,j}^{(d_{i})}$ : number of words from document $d_i$ assigned to topic $j$
$n_{-i,\cdot}^{(d_{i})}$ : total number of words in document $d_i$
$W$ : number of words have been assigned
C. Path : ${\bf c}_{m}$
$$p({\bf c}{m}\hspace{0.5ex}|\hspace{0.5ex}{\bf w}, {\bf c}{-m}, {\bf z})\propto p({\bf w}{m}\hspace{0.5ex}|\hspace{0.5ex}{\bf c}, {\bf w}{-m}, {\bf z})\cdot p({\bf c}{m}\hspace{0.5ex}|\hspace{0.5ex}{\bf c}{-m})$$
$p({\bf c}{m}\hspace{0.5ex}|\hspace{0.5ex}{\bf w}, {\bf c}{-m}, {\bf z})$ : posterior of the set of probabilities of possible novel paths
$p({\bf w}{m}\hspace{0.5ex}|\hspace{0.5ex}{\bf c}, {\bf w}{-m}, {\bf z})$ : likelihood of the data given a particular choice of ${\bf c}_{m}$
$p({\bf c}{m}\hspace{0.5ex}|\hspace{0.5ex}{\bf c}{-m})$ : prior on ${\bf c}_{m}$ which implies by the nCRP
$$p({\bf w}{m}\hspace{0.5ex}|\hspace{0.5ex}{\bf c}, {\bf w}{-m}, {\bf z})=\prod_{\ell=1}^{L}\left(\frac{\Gamma(n_{c_{m,\ell},-m}^{(\cdot)}+W\eta)}{\prod_{w}\Gamma(n_{c_{m,\ell},-m}^{(w)}+\eta)}\frac{\prod_{w}\Gamma(n_{c_{m,\ell},-m}^{(w)}+n_{c_{m,\ell},m}^{(w)}+\eta)}{\Gamma(n_{c_{m,\ell},-m}^{(\cdot)}+n_{c_{m,\ell},m}^{(\cdot)}+W\eta)}\right)$$
$p({\bf w}{m}\hspace{0.5ex}|\hspace{0.5ex}{\bf c}, {\bf w}{-m}, {\bf z})$ : joint distribution of likelihood
$n_{c_{m,\ell},-m}^{(w)}$ : number of instances of word $w$ that have been assigned to the topic indexed by $c_{m,\ell}$, not in the document $m$
$W$ : total vocabulary size
IV. Implementation
A. Package import
End of explanation
def CRP(topic, phi):
'''
CRP gives the probability of topic assignment for specific vocabulary
Return a 1 * j array, where j is the number of topic
Parameter
---------
topic: a list of lists, contains assigned words in each sublist (topic)
phi: double, parameter for CRP
Return
------
p_crp: the probability of topic assignments for new word
'''
p_crp = np.empty(len(topic)+1)
m = sum([len(x) for x in topic])
p_crp[0] = phi / (phi + m)
for i, word in enumerate(topic):
p_crp[i+1] = len(word) / (phi + m)
return p_crp
Explanation: B. Function construction
B.1 Chinese Restaurant Process (CRP)
End of explanation
def node_sampling(corpus_s, phi):
'''
Node sampling samples the number of topics, L
return a j-layer list of lists, where j is the number of topics
Parameter
---------
corpus_s: a list of lists, contains words in each sublist (document)
phi: double, parameter for CRP
Return
------
topic: a list of lists, contains assigned words in each sublist (topic)
'''
topic = []
for corpus in corpus_s:
for word in corpus:
cm = CRP(topic, phi)
theta = np.random.multinomial(1, (cm/sum(cm))).argmax()
if theta == 0:
topic.append([word])
else:
topic[theta-1].append(word)
return topic
Explanation: B.2 Node Sampling
End of explanation
def Z(corpus_s, topic, alpha, beta):
'''
Z samples from LDA model
Return two j-layer list of lists, where j is the number of topics
Parameter
---------
corpus_s: a list of lists, contains words in each sublist (document)
topic: a L-dimensional list of lists, sample from node_sampling
alpha: double, parameter
beta: double, parameter
Return
------
z_topic: a j-dimensional list of lists, drawn from L-dimensioanl topic, j<L
z_doc: a j-dimensioanl list of lists, report from which document the word is assigned to each topic
'''
n_vocab = sum([len(x) for x in corpus_s])
t_zm = np.zeros(n_vocab).astype('int')
z_topic = [[] for _ in topic]
z_doc = [[] for _ in topic]
z_tmp = np.zeros((n_vocab, len(topic)))
assigned = np.zeros((len(corpus_s), len(topic)))
n = 0
for i in range(len(corpus_s)):
for d in range(len(corpus_s[i])):
wi = corpus_s[i][d]
for j in range(len(topic)):
lik = (z_topic[j].count(wi) + beta) / (assigned[i, j] + n_vocab * beta)
pri = (len(z_topic[j]) + alpha) / ((len(corpus_s[i]) - 1) + len(topic) * alpha)
z_tmp[n, j] = lik * pri
t_zm[n] = np.random.multinomial(1, (z_tmp[n,:]/sum(z_tmp[n,:]))).argmax()
z_topic[t_zm[n]].append(wi)
z_doc[t_zm[n]].append(i)
assigned[i, t_zm[n]] += 1
n += 1
z_topic = [x for x in z_topic if x != []]
z_doc = [x for x in z_doc if x != []]
return z_topic, z_doc
Explanation: B.3 Gibbs sampling -- $z_{m,n}$
End of explanation
def CRP_prior(corpus_s, doc, phi):
'''
CRP_prior implies by nCRP
Return a m*j array, whre m is the number of documents and j is the number of topics
Parameter
---------
corpus_s: a list of lists, contains words in each sublist (document)
doc: a j-dimensioanl list of lists, drawn from Z function (z_doc)
phi: double, parameter for CRP
Return
------
c_p: a m*j array, for each document the probability of the topics
'''
c_p = np.empty((len(corpus_s), len(doc)))
for i, corpus in enumerate(corpus_s):
p_topic = [[x for x in doc[j] if x != i] for j in range(len(doc))]
tmp = CRP(p_topic, phi)
c_p[i,:] = tmp[1:]
return c_p
Explanation: B.4 Gibbs sampling -- ${\bf c}_{m}$, CRP prior
End of explanation
def likelihood(corpus_s, topic, eta):
'''
likelihood gives the propability of data given a particular choice of c
Return a m*j array, whre m is the number of documents and j is the number of topics
Parameter
---------
corpus_s: a list of lists, contains words in each sublist (document)
topic: a j-dimensional list of lists, drawn from Z function (z_assigned)
eta: double, parameter
Return
------
w_m: a m*j array
'''
w_m = np.empty((len(corpus_s), len(topic)))
allword_topic = [word for t in topic for word in t]
n_vocab = sum([len(x) for x in corpus_s])
for i, corpus in enumerate(corpus_s):
prob_result = []
for j in range(len(topic)):
current_topic = topic[j]
n_word_topic = len(current_topic)
prev_dominator = 1
later_numerator = 1
prob_word = 1
overlap = [val for val in set(corpus) if val in current_topic]
prev_numerator = gammaln(len(current_topic) - len(overlap) + n_vocab * eta)
later_dominator = gammaln(len(current_topic) + n_vocab * eta)
for word in corpus:
corpus_list = corpus
if current_topic.count(word) - corpus_list.count(word) < 0 :
a = 0
else:
a = current_topic.count(word) - corpus_list.count(word)
prev_dominator += gammaln(a + eta)
later_numerator += gammaln(current_topic.count(word) + eta)
prev = prev_numerator - prev_dominator
later = later_numerator - later_dominator
like = prev + later
w_m[i, j] = like
w_m[i, :] = w_m[i, :] + abs(min(w_m[i, :]) + 0.1)
w_m = w_m/w_m.sum(axis = 1)[:, np.newaxis]
return w_m
Explanation: B.5 Gibbs sampling -- ${\bf c}_{m}$, likelihood
End of explanation
def post(w_m, c_p):
'''
Parameter
---------
w_m: likelihood, drawn from likelihood function
c_p: prior, drawn from CRP_prior function
Return
------
c_m, a m*j list of lists
'''
c_m = (w_m * c_p) / (w_m * c_p).sum(axis = 1)[:, np.newaxis]
return np.array(c_m)
Explanation: B.6 Gibbs sampling -- ${\bf c}_{m}$, posterior
End of explanation
def wn(c_m, corpus_s):
'''
wn return the assignment of words for topics, drawn from multinomial distribution
Return a n*1 array, where n is the total number of word
Parameter
---------
c_m: a m*j list of lists, drawn from post function
corpus_s: a list of lists, contains words in each sublist (document)
Return
------
wn_ass: a n*1 array, report the topic assignment for each word
'''
wn_ass = []
for i, corpus in enumerate(corpus_s):
for word in corpus:
theta = np.random.multinomial(1, c_m[i]).argmax()
wn_ass.append(theta)
return np.array(wn_ass)
Explanation: B.7 Gibbs sampling -- $w_{n}$
End of explanation
most_common = lambda x: Counter(x).most_common(1)[0][0]
Explanation: C. Gibbs sampling
C.1 Find most common value
End of explanation
def gibbs(corpus_s, topic, alpha, beta, phi, eta, ite):
'''
gibbs will return the distribution of words for topics
Return a j-dimensional list of lists, where j is the number of topics
Parameter
---------
corpus_s: a list of lists, contains words in each sublist (document)
topic: a j-dimensional list of lists, drawn from Z function (z_assigned)
alpha: double, parameter for Z function
beta: double, parameter for Z function
phi: double, parameter fro CRP_prior function
eta: double, parameter for w_n function
ite: int, number of iteration
Return
------
wn_topic: a j-dimensional list of lists, the distribution of words for topics
'''
n_vocab = sum([len(x) for x in corpus_s])
gibbs = np.empty((n_vocab, ite)).astype('int')
for i in range(ite):
z_topic, z_doc = Z(corpus_s, topic, alpha, beta)
c_p = CRP_prior(corpus_s, z_doc, phi)
w_m = likelihood(corpus_s, z_topic, eta)
c_m = post(w_m, c_p)
gibbs[:, i] = wn(c_m, corpus_s)
# drop first 1/10 data
gibbs = gibbs[:, int(ite/10):]
theta = [most_common(gibbs[x]) for x in range(n_vocab)]
n_topic = max(theta)+1
wn_topic = [[] for _ in range(n_topic)]
wn_doc_topic = [[] for _ in range(n_topic)]
doc = 0
n = 0
for i, corpus_s in enumerate(corpus_s):
if doc == i:
for word in corpus_s:
wn_doc_topic[theta[n]].append(word)
n += 1
for j in range(n_topic):
if wn_doc_topic[j] != []:
wn_topic[j].append(wn_doc_topic[j])
wn_doc_topic = [[] for _ in range(n_topic)]
doc += 1
wn_topic = [x for x in wn_topic if x != []]
return wn_topic
Explanation: C.2 Gibbs sampling
End of explanation
def hLDA(corpus_s, alpha, beta, phi, eta, ite, level):
'''
hLDA generates an n*1 list of lists, where n is the number of level
Parameter
---------
corpus_s: a list of lists, contains words in each sublist (document)
alpha: double, parameter for Z function
beta: double, parameter for Z function
phi: double, parameter fro CRP_prior function
eta: double, parameter for w_n function
ite: int, number of iteration
level: int, number of level
Return
hLDA_tree: an n*1 list of lists, each sublist represents a level, the sublist in each level represents a topic
node: an n*1 list of lists, returns how many nodes there are in each level
'''
topic = node_sampling(corpus_s, phi)
print(len(topic))
hLDA_tree = [[] for _ in range(level)]
tmp_tree = []
node = [[] for _ in range(level+1)]
node[0].append(1)
for i in range(level):
if i == 0:
wn_topic = gibbs(corpus_s, topic, alpha, beta, phi, eta, ite)
node_topic = [x for word in wn_topic[0] for x in word]
hLDA_tree[0].append(node_topic)
tmp_tree.append(wn_topic[1:])
tmp_tree = tmp_tree[0]
node[1].append(len(wn_topic[1:]))
else:
for j in range(sum(node[i])):
if tmp_tree == []:
break
wn_topic = gibbs(tmp_tree[0], topic, alpha, beta, phi, eta, ite)
node_topic = [x for word in wn_topic[0] for x in word]
hLDA_tree[i].append(node_topic)
tmp_tree.remove(tmp_tree[0])
if wn_topic[1:] != []:
tmp_tree.extend(wn_topic[1:])
node[i+1].append(len(wn_topic[1:]))
return hLDA_tree, node[:level]
Explanation: V. Topic Model with hLDA
Gibbs sampling in section IV distributes the input vocabularies from the documents in the corpus to the available topics, which are sampled from the $L$-dimensional topics. In section V, an $n$-level tree will be presented by a tree plot, in which the root node will be more general and the leaves more specific. In addition, the tree plot will return the words sorted by their frequencies for each node.
A. hLDA model
End of explanation
def HLDA_plot(hLDA_object, Len = 8, save = False):
from IPython.display import Image, display
def viewPydot(pdot):
plt = Image(pdot.create_png())
display(plt)
words = hLDA_object[0]
struc = hLDA_object[1]
graph = pydot.Dot(graph_type='graph')
end_index = [np.insert(np.cumsum(i),0,0) for i in struc]
for level in range(len(struc)-1):
leaf_level = level + 1
leaf_word = words[leaf_level]
leaf_struc = struc[leaf_level]
word = words[level]
end_leaf_index = end_index[leaf_level]
for len_root in range(len(word)):
root_word = '\n'.join([x[0] for x in Counter(word[len_root]).most_common(Len)])
leaf_index = leaf_struc[len_root]
start = end_leaf_index[len_root]
end = end_leaf_index[len_root+1]
lf = leaf_word[start:end]
for l in lf:
leaf_w = '\n'.join([x[0] for x in Counter(list(l)).most_common(Len)])
edge = pydot.Edge(root_word, leaf_w)
graph.add_edge(edge)
if save == True:
graph.write_png('graph.png')
viewPydot(graph)
Explanation: B. hLDA plot
End of explanation
def sim_corpus(n):
n_rows = n
corpus = [[] for _ in range(n_rows)]
for i in range(n_rows):
n_cols = np.random.randint(10, 200, 1, dtype = 'int')[0]
for j in range(n_cols):
num = np.random.normal(0, 1, n_cols)
word = 'w%s' % int(round(num[j], 1)*10)
corpus[i].append(word)
return corpus
corpus_0 = sim_corpus(35)
tree_0 = hLDA(corpus_0, 0.1, 0.01, 2, 0.01, 100, 3)
HLDA_plot(tree_0, 5, False)
Explanation: VI. Empirical Example
A. Simulated data
For the simulated data example, each document, $d$, in the corpus is generated from a normal distribution with a different number of words, $w_{d,n}$, where $n\in\{10,...,200\}$ and ${\bf w}_{d}\sim N(0, 1)$. In this example, by generating 35 documents in the corpus, we expect to see a simulated tree with words near the mean of $0$, such as {w0, w1, w-1}, in the root node and words far from the mean, such as {w10, w-10, w15}, in the leaves.
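As a small worked illustration of the token mapping used in sim_corpus above (a sketch only, with made-up draw values): each draw is rounded to one decimal place and scaled by 10, so values near the mean collapse onto "central" tokens while extreme values produce "tail" tokens.
# Sketch: how a normal draw becomes a word token in sim_corpus
for num in (0.04, 0.13, -1.52):
    print(num, '->', 'w%s' % int(round(num, 1) * 10))
# 0.04 -> w0, 0.13 -> w1, -1.52 -> w-15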
End of explanation
def read_corpus(corpus_path):
punc = ['`', ',', "'", '.', '!', '?']
corpus = []
with open(corpus_path, 'r') as f:
for line in f:
for x in punc:
line = line.replace(x, '')
line = line.strip('\n')
word = line.split(' ')
corpus.append(word)
return(corpus)
corpus_1 = read_corpus('sample.txt')
tree_1 = hLDA(corpus_1, 0.1, 0.01, 1, 0.01, 100, 3)
HLDA_plot(tree_1, 5, False)
Explanation: B. Real data
For the real data example, the corpus of documents is generated from Blei's sample data. The documents are split by paragraph; that is, each paragraph represents one document. We take the first 11 documents to form the sample corpus used in the hLDA model. To form the corpus, we read it as a large list of lists: the sublists in the nested list represent the documents, and the elements in each sublist represent the words in a specific document. Note that punctuation is removed from the corpus.
End of explanation
import hLDA
sim = hLDA.sim_corpus(5)
print(sim[0])
corpus = hLDA.read_corpus('sample.txt')
print(corpus[0])
tree = hLDA.hLDA(corpus, 0.1, 0.01, 1, 0.01, 10, 3)
hLDA.HLDA_plot(tree)
Explanation: VII. Download and Install from Github
The hLDA code of the paper Hierarchical Topic Models and the Nested Chinese Restaurant Process is released on GitHub as a package named hLDA (click to clone). One can easily download it (click to download) and install it by running python setup.py install. The package provides 4 functions:
hLDA.sim_corpus(n): return a simulated corpus with $n$ number of documents
inputs:
n: int, number of documents in the corpus
hLDA.read_corpus(corpus_path): return the corpus as a list of lists with length $n$, where $n$ is the number of documents.
inputs:
corpus_path: the path of the txt file; note that each paragraph represents a document
hLDA.hLDA(corpus, alpha, beta, phi, eta, iteration, level): return a $n$-level tree, where $n$ is the input level
inputs:
corpus: corpus read from hLDA.read_corpus or simulated from sim_corpus
alpha: double, parameter for Z function
beta: double, parameter for Z function
phi: double, parameter for CRP_prior function
eta: double, parameter for w_n function
iteration: int, number of iteration for gibbs sampling
level: int, number of level
hLDA.HLDA_plot(hLDA_result, n_words, save): return a tree plot from hLDA topic model
inputs:
hLDA_result: the hLDA result generated from hLDA.hLDA
n_words: int, how many words to show in each node (sorted by frequency), default with 5
save: boolean, save the plot or not, default with False
Note that the required packages for hLDA are: (1) numpy; (2) scipy; (3) collections; (4) string; (5) pygraphviz; (6) pydot.
End of explanation
%load_ext Cython
%%cython -a
cimport cython
cimport numpy as np
import numpy as np
@cython.cdivision(True)
@cython.boundscheck(False)
@cython.wraparound(False)
def CRP_c(list topic, double phi):
cdef double[:] cm = np.empty(len(topic)+1)
cdef int m = sum([len(x) for x in topic])
cm[0] = phi / (phi + m)
cdef int i
cdef list word
for i, word in enumerate(topic):
cm[i+1] = len(word) / (phi + m)
return np.array(cm)
def node_sampling_c(list corpus_s, double phi):
cdef list topic = []
cdef int theta
cdef list corpus
cdef str word
for corpus in corpus_s:
for word in corpus:
cm = CRP_c(topic, phi)
theta = np.random.multinomial(1, (cm/sum(cm))).argmax()
if theta == 0:
topic.append([word])
else:
topic[theta-1].append(word)
return topic
%timeit node_sampling_c(corpus_1, 1)
%timeit node_sampling(corpus_1, 1)
Explanation: VIII. Optimization
To optimize the hLDA model, we chose Cython to speed the functions up, since the only matrix-calculation function, c_m, was already vectorized. However, after applying Cython, the code does not speed up appreciably. The possible reasons are as follows.
First, if we simply speed up a single function, Cython does it well. Take the first function, node_sampling, for example: the run time decreased from 52.2 ms to 47.2 ms, which means the Cython version is about 10% faster than the Python code. On the other hand, if we try to speed up all the functions used in the Gibbs sampling function, gibbs, the run time is similar or even slower, since each call has to go through external Cython functions to complete the work.
Second, most of the variables used in hLDA are Python lists. When writing Cython in this style, we were not able to declare efficient static types for these list variables.
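One possible workaround (a hypothetical sketch, not code from the package) is to integer-encode the vocabulary first, so that each document becomes a typed NumPy array that a cdef function can accept as a typed memoryview instead of a Python list of strings:
import numpy as np
def encode_corpus(corpus_s):
    # Sketch: map each distinct word to an integer id, then encode every document
    # as a C-contiguous int64 array suitable for Cython memoryviews.
    vocab = {w: i for i, w in enumerate(sorted({w for doc in corpus_s for w in doc}))}
    docs = [np.array([vocab[w] for w in doc], dtype=np.int64) for doc in corpus_s]
    return docs, vocab
Whether this pays off here is untested; it mainly removes the per-word Python-object overhead inside the Gibbs loop.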
End of explanation
import matplotlib.pyplot as plt
from nltk.tokenize import RegexpTokenizer
from stop_words import get_stop_words
from nltk.stem.porter import PorterStemmer
from gensim import corpora, models
import gensim
def lda_topic(corpus_s, dic, n_topics, ite):
lda = gensim.models.ldamodel.LdaModel(corpus = corpus_s,
id2word = dic,
num_topics = n_topics,
update_every = 1,
chunksize = 1,
passes = 1,
iterations = ite)
return lda.print_topics()
corpus = read_corpus('sample.txt')
def lda_corpus(corpus_s):
texts = []
tokenizer = RegexpTokenizer(r'\w+')
for doc in corpus_s:
for word in doc:
raw = word.lower()
tokens = tokenizer.tokenize(raw)
texts.append(tokens)
dictionary = corpora.Dictionary(texts)
n_corpus = [dictionary.doc2bow(text) for text in texts]
corpora.MmCorpus.serialize('sample.mm', n_corpus)
sample = gensim.corpora.MmCorpus('sample.mm')
return sample, dictionary
sample, dic = lda_corpus(corpus)
lda_topic(sample, dic, 3, 5000)
Explanation: IX. Code Comparison
This section introduces the LDA model as a comparison with the hLDA model. The LDA model requires the user to specify the number of topics and returns the probability of the words in each topic, which are the biggest differences compared to the hLDA model. The hLDA model applies a nonparametric prior, which allows an arbitrary number of factors and readily accommodates growing data collections. That is, the hLDA model samples the number of topics via the nCRP and returns a topic hierarchy tree.
The lda_topic function returns single-layer word distributions for the topics, whose number is specified as a parameter of the function. For each topic, the LDA model gives a probability distribution over the possible words. In this LDA setup, the corpus is treated as one big document instead of considering each document on its own. Furthermore, the flat model cannot illustrate the hierarchical relationship between topics and words that the hLDA model provides.
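If one instead wanted gensim's LDA to see explicit document boundaries, a minimal variant of lda_corpus above (a sketch, assuming the same read_corpus output and gensim imports) would build one bag-of-words per document rather than per tokenized word:
def lda_corpus_by_doc(corpus_s):
    # Sketch: one bag-of-words per document, so LDA sees document-level co-occurrence
    texts = [[w.lower() for w in doc] for doc in corpus_s]
    dictionary = corpora.Dictionary(texts)
    bows = [dictionary.doc2bow(text) for text in texts]
    return bows, dictionary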
End of explanation |
1,296 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mpi-m', 'mpi-esm-1-2-hr', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: MPI-M
Source ID: MPI-ESM-1-2-HR
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:17
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involved flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. there is no explicit ITD, but assume a distribution and compute fluxes accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
1,297 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Goal
Question
Step1: Init
Step2: Using nestly
Step3: Plotting results
Step4: Sandbox
Enrichment of TP for abundant incorporators?
What is the abundance distribution of TP and FP?
Are more abundant incorporators being detected more often than low-abundance taxa?
Step5: Notes
Step6: Notes | Python Code:
workDir = '/home/nick/notebook/SIPSim/dev/bac_genome1210/'
genomeDir = '/home/nick/notebook/SIPSim/dev/bac_genome1210/genomes/'
R_dir = '/home/nick/notebook/SIPSim/lib/R/'
Explanation: Goal
Question: how is incorporator identification accuracy affected by the percent isotope incorporation of taxa?
Using genome dataset created in the "dataset" notebook
Simulates isotope dilution or short incubations
Method
25% taxa incorporate
incorporation % same for all incorporators
incorporation % treatments: 10, 20, 40, 60, 80, 100%
Total treatments: 6
ALSO, testing the use of differing BD ranges on sensitivity/specificity
User variables
End of explanation
import os
import glob
import itertools
import numpy as np
from os.path import abspath
import nestly
%load_ext rpy2.ipython
%%R
library(ggplot2)
library(dplyr)
library(tidyr)
library(gridExtra)
Explanation: Init
End of explanation
# building tree structure
nest = nestly.Nest()
## varying params
### perc incorporation
nest.add('percIncorp', range(0,101,20))
### BD range
BD_min = np.arange(1.67, 1.77, 0.02).tolist()
BD_max = [x + 0.04 for x in BD_min]
f = lambda x: {'BD_range': str(x[0]) + '-' + str(x[1]),
'BD_min':x[0],
'BD_max':x[1]}
BD_range = [f(x) for x in itertools.product(BD_min, BD_max)
if x[0] < x[1]]
#### contains BD_min & BD_max
nest.add('BD_range', BD_range, update=True)
## set params
nest.add('padj', [0.1], create_dir=False)
nest.add('log2', [0.25], create_dir=False)
nest.add('topTaxaToPlot', [100], create_dir=False)
## input/output files
#nest.add('fileName', ['ampFrags'], create_dir=False)
nest.add('perc_incorp_dir', ['/home/nick/notebook/SIPSim/dev/bac_genome1210/percIncorpUnif/'], create_dir=False)
nest.add('otu_file', ['OTU_n2_abs1e10_sub-norm_w.txt'], create_dir=False)
nest.add('otu_metadata', ['OTU_n2_abs1e10_sub-norm_meta.txt'], create_dir=False)
nest.add('BD_shift', ['ampFrags_kde_dif_incorp_BD-shift.txt'], create_dir=False)
nest.add('R_dir', [R_dir], create_dir=False)
# building directory tree
buildDir = os.path.join(workDir, 'percIncorpUnif_difBD')
nest.build(buildDir)
bashFile = os.path.join(workDir, 'SIPSimRun.sh')
%%writefile $bashFile
#!/bin/bash
# copying files from perc_incorp_unif
cp {perc_incorp_dir}{percIncorp}/{otu_file} .
cp {perc_incorp_dir}{percIncorp}/{otu_metadata} .
cp {perc_incorp_dir}{percIncorp}/{BD_shift} .
#-- R analysis --#
export PATH={R_dir}:$PATH
# running DeSeq2 and making confusion matrix on predicting incorporators
## making phyloseq object from OTU table
phyloseq_make.r \
{otu_file} \
-s {otu_metadata} \
> {otu_file}.physeq
## filtering phyloseq object to just taxa/samples of interest
phyloseq_edit.r \
{otu_file}.physeq \
--BD_min {BD_min} --BD_max {BD_max} \
> {otu_file}_filt.physeq
## making ordination
phyloseq_ordination.r \
--log2 {log2} \
--hypo greater \
{otu_file}_filt.physeq \
{otu_file}_bray-NMDS.pdf
## DESeq2
phyloseq_DESeq2.r \
{otu_file}_filt.physeq \
> {otu_file}_DESeq2
## Confusion matrix
DESeq2_confuseMtx.r \
{BD_shift} \
{otu_file}_DESeq2 \
--padj {padj}
!chmod 775 $bashFile
!cd $workDir; \
nestrun -j 30 --template-file $bashFile -d percIncorpUnif_difBD --log-file log.txt
# aggregating confusion matrix data
## table
!cd $workDir; \
nestagg delim \
-d percIncorpUnif_difBD \
-k percIncorp,BD_min,BD_max \
-o ./percIncorpUnif_difBD/DESeq2-cMtx_table.csv \
DESeq2-cMtx_table.csv
## overall
!cd $workDir; \
nestagg delim \
-d percIncorpUnif_difBD \
-k percIncorp,BD_min,BD_max \
-o ./percIncorpUnif_difBD/DESeq2-cMtx_overall.csv \
DESeq2-cMtx_overall.csv
## byClass
!cd $workDir; \
nestagg delim \
-d percIncorpUnif_difBD \
-k percIncorp,BD_min,BD_max \
-o ./percIncorpUnif_difBD/DESeq2-cMtx_byClass.csv \
DESeq2-cMtx_byClass.csv
Explanation: Using nestly: different incorporation percentages
End of explanation
%%R -i workDir -w 600 -h 600
setwd(workDir)
byClass = read.csv('./percIncorpUnif_difBD/DESeq2-cMtx_byClass.csv')
byClass$byClass[is.na(byClass$byClass)] = 0
cat.str = function(x,y, col=':'){
x = as.character(x)
y = as.character(y)
z = paste(c(x,y),collapse=col)
return(z)
}
byClass = byClass %>%
mutate(BD_range = mapply(cat.str, BD_min, BD_max)) %>%
mutate(BD_min = as.character(BD_min),
BD_max = as.character(BD_max))
%%R -w 800 -h 750
col2keep = c('Balanced Accuracy', 'Sensitivity','Specificity')
byClass.f = byClass %>%
filter(X %in% col2keep) %>%
mutate(BD_min = as.numeric(BD_min),
BD_max = as.numeric(BD_max),
byClass = as.numeric(byClass),
byClass.inv = 1 - byClass)
byClass.fs = byClass.f %>%
group_by(X, percIncorp) %>%
summarize(byClass.max = max(byClass))
just.true = function(x){
if(x == TRUE){
return(1)
} else{
return(NA)
}
}
byClass.j = inner_join(byClass.f, byClass.fs, c('X' = 'X',
'percIncorp' = 'percIncorp')) %>%
mutate(max_val = as.numeric(byClass == byClass.max))
byClass.jf = byClass.j %>%
filter(max_val == 1)
x.breaks = unique(byClass.j$BD_min)
p = ggplot(byClass.j, aes(BD_min, BD_max, fill=byClass.inv)) +
geom_tile() +
geom_point(data=byClass.jf, aes(color='red')) +
scale_x_continuous(breaks=x.breaks) +
labs(x='Minimum BD', y='Maximum BD') +
facet_grid(percIncorp ~ X) +
theme_bw() +
theme(
text=element_text(size=16),
axis.text.x=element_text(angle=90, hjust=1, vjust=0.5)
)
p
%%R -w 800 -h 750
col2keep = c('Balanced Accuracy', 'Sensitivity','Specificity')
byClass.f = byClass %>%
filter(X %in% col2keep) %>%
mutate(BD_min = as.numeric(BD_min),
BD_max = as.numeric(BD_max),
byClass = as.numeric(byClass),
byClass.inv = 1 - byClass)
byClass.fs = byClass.f %>%
group_by(X, percIncorp) %>%
summarize(byClass.max = max(byClass))
just.true = function(x){
if(x == TRUE){
return(1)
} else{
return(NA)
}
}
byClass.j = inner_join(byClass.f, byClass.fs, c('X' = 'X',
'percIncorp' = 'percIncorp')) %>%
mutate(max_val = as.numeric(byClass == byClass.max),
byClass.txt = round(byClass, 2))
byClass.jf = byClass.j %>%
filter(max_val == 1)
x.breaks = unique(byClass.j$BD_min)
p = ggplot(byClass.j, aes(BD_min, BD_max, fill=byClass.inv)) +
geom_tile() +
geom_text(data=byClass.jf, aes(label=byClass.txt), color=c('white')) +
scale_x_continuous(breaks=x.breaks) +
labs(x='Minimum Buoyant Density', y='Maximum Buoyant Density') +
facet_grid(percIncorp ~ X) +
theme_bw() +
theme(
text=element_text(size=16),
axis.text.x=element_text(angle=90, hjust=1, vjust=0.5)
)
p
%%R -w 800 -h 750
col2keep = c('Balanced Accuracy', 'Sensitivity','Specificity')
byClass.f = byClass %>%
filter(X %in% col2keep) %>%
mutate(BD_min = as.numeric(BD_min),
BD_max = as.numeric(BD_max),
byClass = as.numeric(byClass),
byClass.inv = 1 - byClass)
byClass.fs = byClass.f %>%
group_by(X, percIncorp) %>%
summarize(byClass.max = max(byClass))
just.true = function(x){
if(x == TRUE){
return(1)
} else{
return(NA)
}
}
byClass.j = inner_join(byClass.f, byClass.fs, c('X' = 'X',
'percIncorp' = 'percIncorp')) %>%
mutate(max_val = as.numeric(byClass == byClass.max),
byClass.txt = round(byClass, 2))
byClass.jf = byClass.j %>%
filter(max_val == 1)
x.breaks = unique(byClass.j$BD_min)
p = ggplot(byClass.j, aes(BD_min, BD_max, fill=byClass.inv)) +
geom_tile() +
geom_text(aes(label=byClass.txt), color=c('white'), size=4) +
geom_text(data=byClass.jf, aes(label=byClass.txt), color=c('red'), size=4) +
scale_x_continuous(breaks=x.breaks) +
labs(x='Minimum Buoyant Density', y='Maximum Buoyant Density') +
facet_grid(percIncorp ~ X) +
theme_bw() +
theme(
text=element_text(size=16),
axis.text.x=element_text(angle=90, hjust=1, vjust=0.5)
)
p
Explanation: Plotting results
End of explanation
%%R -i workDir
setwd(workDir)
tbl.c = read.csv('percIncorpUnif/100/DESeq2-cMtx_data.csv')
tbl.otu = read.delim('percIncorpUnif/100/OTU_n2_abs1e10_sub20000.txt', sep='\t')
%%R
# OTU total counts
tbl.otu.sum = tbl.otu %>%
group_by(library, taxon) %>%
summarize(total_count = sum(count))
tbl.otu.sum %>% head
%%R
# label known incorporators as TP (detected) or FN (missed)
label.tp.fn = function(known, pred){
if(is.na(known) | is.na(pred)){
return(NA)
} else
if(known==TRUE & pred==TRUE){
return('TP')
} else
if(known==TRUE & pred==FALSE){
return('FN')
} else {
return(NA)
}
}
tbl.c.tp.fn = tbl.c %>%
mutate(tp.fn = mapply(label.tp.fn, incorp.known, incorp.pred)) %>%
filter(! is.na(tp.fn))
tbl.tp.fn = inner_join(tbl.c.tp.fn, tbl.otu.sum, c('taxon' = 'taxon'))
tbl.tp.fn %>% head
%%R
# how many TP & FN?
tbl.tp.fn %>%
group_by(library, tp.fn) %>%
summarize(n = n())
%%R -h 350
tbl.tp.fn$library = as.character(tbl.tp.fn$library)
ggplot(tbl.tp.fn, aes(library, total_count, color=tp.fn)) +
geom_boxplot() +
labs(y='Total count') +
theme(
text = element_text(size=18)
)
%%R -h 700 -w 900
tbl.tp.fn$library = as.character(tbl.tp.fn$library)
ggplot(tbl.tp.fn, aes(taxon, total_count, group=taxon, color=library)) +
geom_point(size=3) +
geom_line() +
scale_y_log10() +
labs(y='Total count') +
facet_grid(. ~ tp.fn, scales='free_x') +
theme(
text = element_text(size=18),
axis.text.x = element_text(angle=90, hjust=1)
)
Explanation: Sandbox
Enrichment of TP for abundant incorporators?
What is the abundance distribution of TP and FP?
Are more abundant incorporators being detected more often than low-abundance taxa?
End of explanation
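One way to make these questions concrete is to bin the known incorporators by total abundance and compare the fraction detected (TP) per bin. The pandas sketch below assumes the same files and column names used by tbl.c and tbl.otu above (taxon, count, incorp.known, incorp.pred); it is illustrative, not part of the original analysis.
import pandas as pd
# Sketch: detection rate of known incorporators by abundance quartile (assumed columns).
cmtx = pd.read_csv('percIncorpUnif/100/DESeq2-cMtx_data.csv')
otu = pd.read_csv('percIncorpUnif/100/OTU_n2_abs1e10_sub20000.txt', sep='\t')
known = cmtx[cmtx['incorp.known'] == True].copy()      # true incorporators only
known['detected'] = known['incorp.pred'] == True        # TP (detected) vs FN (missed)
abund = otu.groupby('taxon')['count'].sum().rename('total_count').reset_index()
merged = known.merge(abund, on='taxon')
merged['abund_bin'] = pd.qcut(merged['total_count'], 4, labels=['Q1', 'Q2', 'Q3', 'Q4'])
print(merged.groupby('abund_bin')['detected'].mean())   # detection rate per quartile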
%%R
# OTU total counts
heavy.cut = 1.71
tbl.otu.sum = tbl.otu %>%
filter(! grepl('inf', fraction)) %>%
separate(fraction, into = c('BD_min','BD_max'), sep='-', convert=TRUE) %>%
filter(BD_min >= heavy.cut & BD_max <= 2) %>%
group_by(library, taxon) %>%
summarize(total_count = sum(count))
tbl.otu.sum %>% head
%%R
tbl.tp.fn = inner_join(tbl.c.tp.fn, tbl.otu.sum, c('taxon' = 'taxon'))
tbl.tp.fn %>% head
%%R -h 350
tbl.tp.fn$library = as.character(tbl.tp.fn$library)
ggplot(tbl.tp.fn, aes(library, total_count, color=tp.fn)) +
geom_boxplot() +
labs(y='Total count') +
theme(
text = element_text(size=18)
)
%%R -h 700 -w 900
tbl.tp.fn$library = as.character(tbl.tp.fn$library)
ggplot(tbl.tp.fn, aes(taxon, total_count, group=taxon, color=library)) +
geom_point(size=3) +
geom_line() +
scale_y_log10() +
labs(y='Total count') +
facet_grid(. ~ tp.fn, scales='free_x') +
theme(
text = element_text(size=18),
axis.text.x = element_text(angle=90, hjust=1)
)
%%R -i workDir -w 1000 -h 450
setwd(workDir)
tbl.ds = read.csv('DESeq2-cMtx_data.csv')
# loading file
tbl.otu = read.delim('OTU_abs1e10.txt', sep='\t')
tbl.otu = tbl.otu %>%
filter(!grepl('inf', fraction, ignore.case=T)) %>%
separate(fraction, into = c('BD_min','BD_max'), sep='-', convert=TRUE)
tbl.j = inner_join(tbl.otu, tbl.ds, c('taxon' = 'taxon'))
Explanation: Notes:
FNs can be abundant
TP can have a lower abundance in the 'treatment' library
Just looking at the 'heavy' fractions
End of explanation
# building tree structure
from os.path import abspath
import nestly
nest = nestly.Nest()
## values
vals = [str(x) for x in range(1,5)]
nest.add('vals', vals)
## constant parameters (no directory created)
nest.add('--np', [1], create_dir=False)
buildDir = '/home/nick/t/nestly/' #os.path.join(workDir, 'vals')
nest.build(buildDir)
%%writefile /home/nick/t/example.sh
#!/bin/bash
export TIME='elapsed,maxmem,exitstatus\n%e,%M,%x'
echo {--np} > {--np}_test.txt
!cd /home/nick/t/; \
chmod 777 example.sh
!cd /home/nick/t/; \
nestrun -j 2 --template-file example.sh -d nestly
Explanation: Notes:
The TPs (for the most part) are dramatically different in abundance between control and treatment
Sandbox
End of explanation |
1,298 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CS446/519 - Class Session 7 - Transitivity (Clustering Coefficients)
In this class session we are going to compute the local clustering coefficient of all vertices in the undirected human
protein-protein interaction network (PPI), in two ways -- first without using igraph, and then using igraph. We'll obtain the interaction data from the Pathway Commons SIF file (in the shared/ folder), we'll make an "adjacency forest" representation of the network, and we'll manually compute the local clustering coefficient of each vertex (protein) in the network using the "enumerating neighbor pairs" method described by Newman. Then we'll run the same algorithm using the transitivity_local_undirected function in igraph, and we'll compare the results in order to check our work. Grad students
Step1: Step 1
Step2: Step 2
Step3: Step 3
Step4: Step 4
Step5: Step 5
Step6: Step 6
Step7: Step 7
Step8: Step 8
Step9: Step 9
Step10: Step 10
Step11: So the built-in python dictionary type gave us fantastic performance. But is this coming at the cost of huge memory footprint? Let's check the size of our adjacency "list of hashtables", in MB | Python Code:
from igraph import Graph
from igraph import summary
import pandas
import numpy
import timeit
from pympler import asizeof
import bintrees
Explanation: CS446/519 - Class Session 7 - Transitivity (Clustering Coefficients)
In this class session we are going to compute the local clustering coefficient of all vertices in the undirected human
protein-protein interaction network (PPI), in two ways -- first without using igraph, and then using igraph. We'll obtain the interaction data from the Pathway Commons SIF file (in the shared/ folder), we'll make an "adjacency forest" representation of the network, and we'll manually compute the local clustering coefficient of each vertex (protein) in the network using the "enumerating neighbor pairs" method described by Newman. Then we'll run the same algorithm using the transitivity_local_undirected function in igraph, and we'll compare the results in order to check our work. Grad students: you should also group vertices by their "binned" vertex degree k (bin size 50, total number of bins = 25) and plot the average local clustering coefficient for the vertices within a bin, against the center k value for the bin, on log-log scale (compare to Newman Fig. 8.12)
End of explanation
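As a reminder of the quantity being computed: for a vertex with degree k, the local clustering coefficient is the number of connected neighbor pairs divided by k(k-1)/2. The tiny toy example below (not the PPI data) only illustrates that formula.
# Toy illustration of Ci = (# connected neighbor pairs) / (k*(k-1)/2); not assignment data.
toy_adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
def toy_local_ci(adj, v):
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return float('nan')
    connected_pairs = sum(1 for a in range(k) for b in range(a + 1, k)
                          if nbrs[b] in adj[nbrs[a]])
    return connected_pairs / (k * (k - 1) / 2)
print(toy_local_ci(toy_adj, 0))  # 1 connected neighbor pair out of 3 -> 0.333...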
sif_data = pandas.read_csv("shared/pathway_commons.sif",
sep="\t", names=["species1","interaction_type","species2"])
Explanation: Step 1: load in the SIF file (refer to Class 6 exercise) into a data frame sif_data, using the pandas.read_csv function, and name the columns species1, interaction_type, and species2.
End of explanation
interaction_types_ppi = set(["interacts-with",
"in-complex-with"])
interac_ppi = sif_data[sif_data.interaction_type.isin(interaction_types_ppi)]
Explanation: Step 2: restrict the interactions to protein-protein undirected ("in-complex-with", "interacts-with"), by using the isin function and then using [ to index rows into the data frame. Call the returned data frame interac_ppi.
End of explanation
for i in range(0, interac_ppi.shape[0]):
if interac_ppi.iat[i,0] > interac_ppi.iat[i,2]:
temp_name = interac_ppi.iat[i,0]
interac_ppi.set_value(i, 'species1', interac_ppi.iat[i,2])
interac_ppi.set_value(i, 'species2', temp_name)
interac_ppi_unique = interac_ppi[["species1","species2"]].drop_duplicates()
ppi_igraph = Graph.TupleList(interac_ppi_unique.values.tolist(), directed=False)
summary(ppi_igraph)
Explanation: Step 3: restrict the data frame to only the unique interaction pairs of proteins (ignoring the interaction type), and call that data frame interac_ppi_unique. Make an igraph Graph object from interac_ppi_unique using Graph.TupleList, values, and tolist. Call summary on the Graph object. Refer to the notebooks for the in-class exercises in Class sessions 3 and 6.
End of explanation
ppi_adj_list = ppi_igraph.get_adjlist()
Explanation: Step 4: Obtain an adjacency list representation of the graph (refer to Class 5 exercise), using get_adjlist.
End of explanation
def get_bst_forest(theadjlist):
g_adj_list = theadjlist
n = len(g_adj_list)
theforest = []
for i in range(0,n):
itree = bintrees.AVLTree()
for j in g_adj_list[i]:
itree.insert(j,1)
theforest.append(itree)
return theforest
def find_bst_forest(bst_forest, i, j):
return j in bst_forest[i]
ppi_adj_forest = get_bst_forest(ppi_adj_list)
Explanation: Step 5: Make an "adjacency forest" data structure as a list of AVLTree objects (refer to Class 5 exercise). Call this adjacency forest, ppi_adj_forest.
End of explanation
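An optional spot check, sketched with the lookup helper defined above: every neighbor recorded in the adjacency list should also be found in the corresponding AVL tree.
# Sanity check (optional sketch): the forest should agree with the adjacency list for vertex 0.
v = 0
print(all(find_bst_forest(ppi_adj_forest, v, j) for j in ppi_adj_list[v]))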
N = len(ppi_adj_list)
civals = numpy.zeros(100)
civals[:] = numpy.NaN
start_time = timeit.default_timer()
for n in range(0, 100):
neighbors = ppi_adj_list[n]
nneighbors = len(neighbors)
if nneighbors > 1:
nctr = 0
for i in range(0, nneighbors):
for j in range(i+1, nneighbors):
if neighbors[j] in ppi_adj_forest[neighbors[i]]:
nctr += 1
civals[n] = nctr/(nneighbors*(nneighbors-1)/2)
ci_elapsed = timeit.default_timer() - start_time
print(ci_elapsed)
Explanation: Step 6: Compute the local clustering coefficient (Ci) values of the first 100 vertices (do timing on this operation) as a numpy.array; for any vertex with degree=1, its Ci value can be numpy NaN. You'll probably want to have an outer for loop for vertex ID n going from 0 to 99, and then an inner for loop iterating over neighbor vertices of vertex n. Store the clustering coefficients in a list, civals. Print out how many seconds it takes to perform this calculation.
End of explanation
start_time = timeit.default_timer()
civals_igraph = ppi_igraph.transitivity_local_undirected(vertices=list(range(0,100)))
ci_elapsed = timeit.default_timer() - start_time
print(ci_elapsed)
Explanation: Step 7: Calculate the local clustering coefficients for the first 100 vertices using
the method igraph.Graph.transitivity_local_undirected and save the results as a list civals_igraph. Do timing on the call to transitivity_local_undirected, using vertices= to specify the vertices for which you want to compute the local clustering coefficient.
End of explanation
import matplotlib.pyplot
matplotlib.pyplot.plot(civals, civals_igraph)
matplotlib.pyplot.xlabel("Ci (my code)")
matplotlib.pyplot.ylabel("Ci (igraph)")
matplotlib.pyplot.show()
Explanation: Step 8: Compare your Ci values to those that you got from igraph, using a scatter plot where civals is on the horizontal axis and civals_igraph is on the vertical axis.
End of explanation
civals_igraph = numpy.array(ppi_igraph.transitivity_local_undirected())
deg_igraph = ppi_igraph.degree()
deg_npa = numpy.array(deg_igraph)
deg_binids = numpy.rint(deg_npa/50)
binkvals = 50*numpy.array(range(0,25))
civals_avg = numpy.zeros(25)
for i in range(0,25):
civals_avg[i] = numpy.mean(civals_igraph[deg_binids == i])
matplotlib.pyplot.loglog(
binkvals,
civals_avg)
matplotlib.pyplot.ylabel("<Ci>")
matplotlib.pyplot.xlabel("k")
matplotlib.pyplot.show()
Explanation: Step 9: scatter plot the average log(Ci) vs. log(k) (i.e., local clustering coefficient vs. vertex degree) for 25 bins of vertex degree, with each bin size being 50 (so we are binning by k, and the bin centers are 50, 100, 150, 200, ...., 1250)
End of explanation
civals = numpy.zeros(len(ppi_adj_list))
civals[:] = numpy.NaN
ppi_adj_hash = []
for i in range(0, len(ppi_adj_list)):
newhash = {}
for j in ppi_adj_list[i]:
newhash[j] = True
ppi_adj_hash.append(newhash)
start_time = timeit.default_timer()
for n in range(0, len(ppi_adj_list)):
neighbors = ppi_adj_hash[n]
nneighbors = len(neighbors)
if nneighbors > 1:
nctr = 0
for i in neighbors:
for j in neighbors:
if (j > i) and (j in ppi_adj_hash[i]):
nctr += 1
civals[n] = nctr/(nneighbors*(nneighbors-1)/2)
ci_elapsed = timeit.default_timer() - start_time
print(ci_elapsed)
Explanation: Step 10: Now try computing the local clustering coefficient using a "list of hashtables" approach; compute the local clustering coefficients for all vertices, and compare to the timing for R. Which is faster, the python3 implementation or the R implementation?
End of explanation
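For a Python-side reference point, the same all-vertex computation can be timed against igraph's built-in C implementation; this is a sketch reusing objects already defined above.
# Sketch: time igraph's transitivity_local_undirected over all vertices for comparison.
start_time = timeit.default_timer()
civals_igraph_all = ppi_igraph.transitivity_local_undirected()
print(timeit.default_timer() - start_time)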
asizeof.asizeof(ppi_adj_hash)/1000000
Explanation: So the built-in python dictionary type gave us fantastic performance. But is this coming at the cost of huge memory footprint? Let's check the size of our adjacency "list of hashtables", in MB:
End of explanation |
1,299 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
AutoML Text entity extraction model
Installation
Install the latest version of AutoML SDK.
Step1: Install the Google cloud-storage library as well.
Step2: Restart the Kernel
Once you've installed the AutoML SDK and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
Step3: Before you begin
GPU run-time
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the AutoML APIs and Compute Engine APIs.
Google Cloud SDK is already installed in AutoML Notebooks.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for AutoML. We recommend, when possible, choosing the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
Step6: Authenticate your GCP account
If you are using AutoML Notebooks, your environment is already
authenticated. Skip this step.
Note
Step7: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.
Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import AutoML SDK
Import the AutoML SDK into our Python environment.
Step11: AutoML constants
Set up the following constants for AutoML
Step12: Clients
The AutoML SDK works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the server (AutoML).
You will use several clients in this tutorial, so set them all up upfront.
Step13: Example output
Step14: Example output
Step15: Response
Step16: Example output
Step17: projects.locations.datasets.importData
Request
Step18: Example output
Step19: Response
Step20: Example output
Step21: Example output
Step22: Response
Step23: Example output
Step24: Evaluate the model
projects.locations.models.modelEvaluations.list
Call
Step25: Response
Step26: Example output
Step27: Response
Step28: Example output
Step29: Example output
Step30: Example output
Step31: Response
Step32: Example output
Step33: Example output
Step34: Response
Step35: Example output
Step36: Request
Step37: Example output
Step38: Response
Step39: Example output | Python Code:
! pip3 install google-cloud-automl
Explanation: AutoML Text entity extraction model
Installation
Install the latest version of AutoML SDK.
End of explanation
! pip3 install google-cloud-storage
Explanation: Install the Google cloud-storage library as well.
End of explanation
import os
if not os.getenv("AUTORUN"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the Kernel
Once you've installed the AutoML SDK and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU run-time
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the AutoML APIs and Compute Engine APIs.
Google Cloud SDK is already installed in AutoML Notebooks.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for AutoML. We recommend, when possible, choosing the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You cannot use a Multi-Regional Storage bucket for training with AutoML. Not all regions provide support for all AutoML services. For the latest support per region, see Region support for AutoML services
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
End of explanation
import os
import sys
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your Google Cloud account. This provides access
# to your Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Vertex, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this tutorial in a notebook locally, replace the string
# below with the path to your service account key and run this cell to
# authenticate your Google Cloud account.
else:
%env GOOGLE_APPLICATION_CREDENTIALS your_path_to_credentials.json
# Log in to your account on Google Cloud
! gcloud auth login
Explanation: Authenticate your GCP account
If you are using AutoML Notebooks, your environment is already
authenticated. Skip this step.
Note: If you are on an AutoML notebook and run the cell, the cell knows to skip executing the authentication steps.
End of explanation
BUCKET_NAME = "[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "[your-bucket-name]":
BUCKET_NAME = PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
This tutorial is designed to use training data that is in a public Cloud Storage bucket and a local Cloud Storage bucket for your batch predictions. You may alternatively use your own training data that you have stored in a local Cloud Storage bucket.
Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.
End of explanation
! gsutil mb -l $REGION gs://$BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al gs://$BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import json
import os
import sys
import time
from google.cloud import automl
from google.protobuf.json_format import MessageToJson
from google.protobuf.struct_pb2 import Value
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import AutoML SDK
Import the AutoML SDK into our Python environment.
End of explanation
# AutoML location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
Explanation: AutoML constants
Set up the following constants for AutoML:
PARENT: The AutoML location root path for dataset, model and endpoint resources.
End of explanation
def automl_client():
return automl.AutoMlClient()
def prediction_client():
return automl.PredictionServiceClient()
def operations_client():
return automl.AutoMlClient()._transport.operations_client
clients = {}
clients["automl"] = automl_client()
clients["prediction"] = prediction_client()
clients["operations"] = operations_client()
for client in clients.items():
print(client)
IMPORT_FILE = "gs://cloud-ml-data/NL-entity/dataset.csv"
! gsutil cat $IMPORT_FILE | head -n 10
Explanation: Clients
The AutoML SDK works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the server (AutoML).
You will use several clients in this tutorial, so set them all up upfront.
End of explanation
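As a quick smoke test of the client setup (a sketch; it assumes the AutoML API is enabled and the account can list resources), the dataset client can list whatever datasets already exist in this location.
# Sketch: list existing AutoML datasets in this project/location (may print nothing).
for existing_dataset in clients["automl"].list_datasets(parent=PARENT):
    print(existing_dataset.name, existing_dataset.display_name)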
dataset = {
"display_name": "entity_" + TIMESTAMP,
"text_extraction_dataset_metadata": {},
}
print(
MessageToJson(
automl.CreateDatasetRequest(parent=PARENT, dataset=dataset).__dict__["_pb"]
)
)
Explanation: Example output:
TRAIN,gs://cloud-ml-data/NL-entity/train.jsonl
TEST,gs://cloud-ml-data/NL-entity/test.jsonl
VALIDATION,gs://cloud-ml-data/NL-entity/validation.jsonl
Create a dataset
Prepare data
projects.locations.datasets.create
Request
End of explanation
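The example output above shows the shape of the import CSV: one row per ML_USE split, each pointing at a JSONL file of annotated documents. If you were importing your own data instead of the public sample, a file of the same shape could be written as sketched below; the JSONL paths are placeholders, not files created by this tutorial.
# Sketch: writing an import CSV for your own entity-extraction data (placeholder paths).
import tensorflow as tf
splits = [
    ("TRAIN", "gs://" + BUCKET_NAME + "/data/train.jsonl"),
    ("TEST", "gs://" + BUCKET_NAME + "/data/test.jsonl"),
    ("VALIDATION", "gs://" + BUCKET_NAME + "/data/validation.jsonl"),
]
with tf.io.gfile.GFile("gs://" + BUCKET_NAME + "/data/import.csv", "w") as f:
    for ml_use, jsonl_uri in splits:
        f.write(ml_use + "," + jsonl_uri + "\n")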
request = clients["automl"].create_dataset(parent=PARENT, dataset=dataset)
Explanation: Example output:
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"dataset": {
"displayName": "entity_20210303201139",
"textExtractionDatasetMetadata": {}
}
}
Call
End of explanation
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
Explanation: Response
End of explanation
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]
print(dataset_id)
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/datasets/TEN4244124229064196096"
}
End of explanation
input_config = {"gcs_source": {"input_uris": [IMPORT_FILE]}}
print(
MessageToJson(
automl.ImportDataRequest(name=dataset_id, input_config=input_config).__dict__[
"_pb"
]
)
)
Explanation: projects.locations.datasets.importData
Request
End of explanation
request = clients["automl"].import_data(name=dataset_id, input_config=input_config)
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/datasets/TEN4244124229064196096",
"inputConfig": {
"gcsSource": {
"inputUris": [
"gs://cloud-ml-data/NL-entity/dataset.csv"
]
}
}
}
Call
End of explanation
result = request.result()
print(MessageToJson(result))
Explanation: Response
End of explanation
model = {
"display_name": "entity_" + TIMESTAMP,
"dataset_id": dataset_short_id,
"text_extraction_model_metadata": {},
}
print(
MessageToJson(automl.CreateModelRequest(parent=PARENT, model=model).__dict__["_pb"])
)
Explanation: Example output:
{}
Train a model
projects.locations.models.create
Request
End of explanation
request = clients["automl"].create_model(parent=PARENT, model=model)
Explanation: Example output:
{
"parent": "projects/migration-ucaip-training/locations/us-central1",
"model": {
"displayName": "entity_20210303201139",
"datasetId": "TEN4244124229064196096",
"textExtractionModelMetadata": {}
}
}
Call
End of explanation
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
Explanation: Response
End of explanation
# The full unique ID for the training pipeline
model_id = result.name
# The short numeric ID for the training pipeline
model_short_id = model_id.split("/")[-1]
print(model_short_id)
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/models/TEN7821373765161320448"
}
End of explanation
request = clients["automl"].list_model_evaluations(parent=model_id, filter="")
Explanation: Evaluate the model
projects.locations.models.modelEvaluations.list
Call
End of explanation
import json
model_evaluations = [json.loads(MessageToJson(me.__dict__["_pb"])) for me in request]
# The evaluation slice
evaluation_slice = request.model_evaluation[0].name
print(json.dumps(model_evaluations, indent=2))
Explanation: Response
End of explanation
request = clients["automl"].get_model_evaluation(name=evaluation_slice)
Explanation: Example output:
```
[
{
"name": "projects/116273516712/locations/us-central1/models/TEN7821373765161320448/modelEvaluations/132746642406774043",
"createTime": "2021-03-03T22:30:27.832506Z",
"evaluatedExampleCount": 60,
"textExtractionEvaluationMetrics": {
"confidenceMetricsEntries": [
{
"confidenceThreshold": 0.04,
"recall": 0.79928315,
"precision": 0.7950089,
"f1Score": 0.7971403
},
{
"confidenceThreshold": 0.96,
"recall": 0.75089604,
"precision": 0.8603696,
"f1Score": 0.80191386
},
# REMOVED FOR BREVITY
{
"confidenceThreshold": 0.43,
"recall": 0.5913978,
"precision": 0.57894737,
"f1Score": 0.5851064
},
{
"confidenceThreshold": 0.44,
"recall": 0.5913978,
"precision": 0.57894737,
"f1Score": 0.5851064
}
]
},
"displayName": "DiseaseClass"
}
]
```
projects.locations.models.modelEvaluations.get
Call
End of explanation
print(MessageToJson(request.__dict__["_pb"]))
Explanation: Response
End of explanation
import tensorflow as tf
test_item = 'Molecular basis of hexosaminidase A deficiency and pseudodeficiency in the Berks County Pennsylvania Dutch.\\tFollowing the birth of two infants with Tay-Sachs disease ( TSD ) , a non-Jewish , Pennsylvania Dutch kindred was screened for TSD carriers using the biochemical assay . A high frequency of individuals who appeared to be TSD heterozygotes was detected ( Kelly et al . , 1975 ) . Clinical and biochemical evidence suggested that the increased carrier frequency was due to at least two altered alleles for the hexosaminidase A alpha-subunit . We now report two mutant alleles in this Pennsylvania Dutch kindred , and one polymorphism . One allele , reported originally in a French TSD patient ( Akli et al . , 1991 ) , is a GT-- > AT transition at the donor splice-site of intron 9 . The second , a C-- > T transition at nucleotide 739 ( Arg247Trp ) , has been shown by Triggs-Raine et al . ( 1992 ) to be a clinically benign " pseudodeficient " allele associated with reduced enzyme activity against artificial substrate . Finally , a polymorphism [ G-- > A ( 759 ) ] , which leaves valine at codon 253 unchanged , is described'
gcs_input_uri = "gs://" + BUCKET_NAME + "/test.jsonl"
with tf.io.gfile.GFile(gcs_input_uri, "w") as f:
data = {"id": 0, "text_snippet": {"content": test_item}}
f.write(json.dumps(data) + "\n")
! gsutil cat $gcs_input_uri
Explanation: Example output:
```
{
"name": "projects/116273516712/locations/us-central1/models/TEN7821373765161320448/modelEvaluations/132746642406774043",
"createTime": "2021-03-03T22:30:27.832506Z",
"evaluatedExampleCount": 60,
"textExtractionEvaluationMetrics": {
"confidenceMetricsEntries": [
{
"confidenceThreshold": 0.04,
"recall": 0.79928315,
"precision": 0.7950089,
"f1Score": 0.7971403
},
{
"confidenceThreshold": 0.96,
"recall": 0.75089604,
"precision": 0.8603696,
"f1Score": 0.80191386
},
# REMOVED FOR BREVITY
{
"confidenceThreshold": 0.43,
"recall": 0.7921147,
"precision": 0.7935368,
"f1Score": 0.7928251
},
{
"confidenceThreshold": 0.44,
"recall": 0.7921147,
"precision": 0.7935368,
"f1Score": 0.7928251
}
]
}
}
```
Make batch predictions
Prepare files for batch prediction
End of explanation
input_config = {"gcs_source": {"input_uris": [gcs_input_uri]}}
output_config = {
"gcs_destination": {"output_uri_prefix": "gs://" + f"{BUCKET_NAME}/batch_output/"}
}
print(
MessageToJson(
automl.BatchPredictRequest(
name=model_id, input_config=input_config, output_config=output_config
).__dict__["_pb"]
)
)
Explanation: Example output:
{"id": 0, "text_snippet": {"content": "Molecular basis of hexosaminidase A deficiency and pseudodeficiency in the Berks County Pennsylvania Dutch.\\tFollowing the birth of two infants with Tay-Sachs disease ( TSD ) , a non-Jewish , Pennsylvania Dutch kindred was screened for TSD carriers using the biochemical assay . A high frequency of individuals who appeared to be TSD heterozygotes was detected ( Kelly et al . , 1975 ) . Clinical and biochemical evidence suggested that the increased carrier frequency was due to at least two altered alleles for the hexosaminidase A alpha-subunit . We now report two mutant alleles in this Pennsylvania Dutch kindred , and one polymorphism . One allele , reported originally in a French TSD patient ( Akli et al . , 1991 ) , is a GT-- > AT transition at the donor splice-site of intron 9 . The second , a C-- > T transition at nucleotide 739 ( Arg247Trp ) , has been shown by Triggs-Raine et al . ( 1992 ) to be a clinically benign \" pseudodeficient \" allele associated with reduced enzyme activity against artificial substrate . Finally , a polymorphism [ G-- > A ( 759 ) ] , which leaves valine at codon 253 unchanged , is described"}}
projects.locations.models.batchPredict
Request
End of explanation
request = clients["prediction"].batch_predict(
name=model_id, input_config=input_config, output_config=output_config
)
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/models/TEN7821373765161320448",
"inputConfig": {
"gcsSource": {
"inputUris": [
"gs://migration-ucaip-trainingaip-20210303201139/test.jsonl"
]
}
},
"outputConfig": {
"gcsDestination": {
"outputUriPrefix": "gs://migration-ucaip-trainingaip-20210303201139/batch_output/"
}
}
}
Call
End of explanation
result = request.result()
print(MessageToJson(result.__dict__["_pb"]))
Explanation: Response
End of explanation
destination_uri = output_config["gcs_destination"]["output_uri_prefix"][:-1]
! gsutil ls $destination_uri/*
! gsutil cat $destination_uri/prediction*/*.jsonl
Explanation: Example output:
{}
End of explanation
request = clients["automl"].deploy_model(name=model_id)
Explanation: Example output:
gs://migration-ucaip-trainingaip-20210303201139/batch_output/prediction-entity_20210303201139-2021-03-03T22:30:36.292153Z/text_extraction_1.jsonl
{"textSnippet":{"content":"Molecular basis of hexosaminidase A deficiency and pseudodeficiency in the Berks County Pennsylvania Dutch.\\tFollowing the birth of two infants with Tay-Sachs disease ( TSD ) , a non-Jewish , Pennsylvania Dutch kindred was screened for TSD carriers using the biochemical assay . A high frequency of individuals who appeared to be TSD heterozygotes was detected ( Kelly et al . , 1975 ) . Clinical and biochemical evidence suggested that the increased carrier frequency was due to at least two altered alleles for the hexosaminidase A alpha-subunit . We now report two mutant alleles in this Pennsylvania Dutch kindred , and one polymorphism . One allele , reported originally in a French TSD patient ( Akli et al . , 1991 ) , is a GT-- \u003e AT transition at the donor splice-site of intron 9 . The second , a C-- \u003e T transition at nucleotide 739 ( Arg247Trp ) , has been shown by Triggs-Raine et al . ( 1992 ) to be a clinically benign \" pseudodeficient \" allele associated with reduced enzyme activity against artificial substrate . Finally , a polymorphism [ G-- \u003e A ( 759 ) ] , which leaves valine at codon 253 unchanged , is described"},"annotations":[{"displayName":"SpecificDisease","textExtraction":{"score":0.99955064,"textSegment":{"startOffset":"19","endOffset":"46","content":"hexosaminidase A deficiency"}}},{"displayName":"SpecificDisease","textExtraction":{"score":0.9995449,"textSegment":{"startOffset":"149","endOffset":"166","content":"Tay-Sachs disease"}}},{"displayName":"SpecificDisease","textExtraction":{"score":0.99939877,"textSegment":{"startOffset":"169","endOffset":"172","content":"TSD"}}},{"displayName":"Modifier","textExtraction":{"score":0.9993252,"textSegment":{"startOffset":"236","endOffset":"239","content":"TSD"}}},{"displayName":"Modifier","textExtraction":{"score":0.9993484,"textSegment":{"startOffset":"330","endOffset":"333","content":"TSD"}}},{"displayName":"Modifier","textExtraction":{"score":0.9993844,"textSegment":{"startOffset":"688","endOffset":"691","content":"TSD"}}}],"id":"0"}
Make online predictions
projects.locations.models.deploy
Call
End of explanation
result = request.result()
print(MessageToJson(result))
Explanation: Response
End of explanation
test_item = 'Molecular basis of hexosaminidase A deficiency and pseudodeficiency in the Berks County Pennsylvania Dutch.\\tFollowing the birth of two infants with Tay-Sachs disease ( TSD ) , a non-Jewish , Pennsylvania Dutch kindred was screened for TSD carriers using the biochemical assay . A high frequency of individuals who appeared to be TSD heterozygotes was detected ( Kelly et al . , 1975 ) . Clinical and biochemical evidence suggested that the increased carrier frequency was due to at least two altered alleles for the hexosaminidase A alpha-subunit . We now report two mutant alleles in this Pennsylvania Dutch kindred , and one polymorphism . One allele , reported originally in a French TSD patient ( Akli et al . , 1991 ) , is a GT-- > AT transition at the donor splice-site of intron 9 . The second , a C-- > T transition at nucleotide 739 ( Arg247Trp ) , has been shown by Triggs-Raine et al . ( 1992 ) to be a clinically benign " pseudodeficient " allele associated with reduced enzyme activity against artificial substrate . Finally , a polymorphism [ G-- > A ( 759 ) ] , which leaves valine at codon 253 unchanged , is described'
Explanation: Example output:
{}
projects.locations.models.predict
Prepare data item for online prediction
End of explanation
payload = {"text_snippet": {"content": test_item, "mime_type": "text/plain"}}
request = automl.PredictRequest(
name=model_id,
payload=payload,
)
print(MessageToJson(request.__dict__["_pb"]))
Explanation: Request
End of explanation
request = clients["prediction"].predict(request=request)
Explanation: Example output:
{
"name": "projects/116273516712/locations/us-central1/models/TEN7821373765161320448",
"payload": {
"textSnippet": {
"content": "Molecular basis of hexosaminidase A deficiency and pseudodeficiency in the Berks County Pennsylvania Dutch.\\tFollowing the birth of two infants with Tay-Sachs disease ( TSD ) , a non-Jewish , Pennsylvania Dutch kindred was screened for TSD carriers using the biochemical assay . A high frequency of individuals who appeared to be TSD heterozygotes was detected ( Kelly et al . , 1975 ) . Clinical and biochemical evidence suggested that the increased carrier frequency was due to at least two altered alleles for the hexosaminidase A alpha-subunit . We now report two mutant alleles in this Pennsylvania Dutch kindred , and one polymorphism . One allele , reported originally in a French TSD patient ( Akli et al . , 1991 ) , is a GT-- > AT transition at the donor splice-site of intron 9 . The second , a C-- > T transition at nucleotide 739 ( Arg247Trp ) , has been shown by Triggs-Raine et al . ( 1992 ) to be a clinically benign \" pseudodeficient \" allele associated with reduced enzyme activity against artificial substrate . Finally , a polymorphism [ G-- > A ( 759 ) ] , which leaves valine at codon 253 unchanged , is described",
"mimeType": "text/plain"
}
}
}
Call
End of explanation
print(MessageToJson(request.__dict__["_pb"]))
Explanation: Response
End of explanation
delete_dataset = True
delete_model = True
delete_bucket = True
# Delete the dataset using the AutoML fully qualified identifier for the dataset
try:
if delete_dataset:
clients["automl"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the model using the AutoML fully qualified identifier for the model
try:
if delete_model:
clients["automl"].delete_model(name=model_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r gs://$BUCKET_NAME
Explanation: Example output:
{
"payload": [
{
"annotationSpecId": "8605379431835369472",
"displayName": "SpecificDisease",
"textExtraction": {
"score": 0.99955064,
"textSegment": {
"startOffset": "19",
"endOffset": "46",
"content": "hexosaminidase A deficiency"
}
}
},
{
"annotationSpecId": "8605379431835369472",
"displayName": "SpecificDisease",
"textExtraction": {
"score": 0.9995449,
"textSegment": {
"startOffset": "149",
"endOffset": "166",
"content": "Tay-Sachs disease"
}
}
},
{
"annotationSpecId": "8605379431835369472",
"displayName": "SpecificDisease",
"textExtraction": {
"score": 0.99939877,
"textSegment": {
"startOffset": "169",
"endOffset": "172",
"content": "TSD"
}
}
},
{
"annotationSpecId": "3417232661104558080",
"displayName": "Modifier",
"textExtraction": {
"score": 0.9993252,
"textSegment": {
"startOffset": "236",
"endOffset": "239",
"content": "TSD"
}
}
},
{
"annotationSpecId": "3417232661104558080",
"displayName": "Modifier",
"textExtraction": {
"score": 0.9993484,
"textSegment": {
"startOffset": "330",
"endOffset": "333",
"content": "TSD"
}
}
},
{
"annotationSpecId": "3417232661104558080",
"displayName": "Modifier",
"textExtraction": {
"score": 0.9993844,
"textSegment": {
"startOffset": "688",
"endOffset": "691",
"content": "TSD"
}
}
}
]
}
Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial.
End of explanation |