Dataset columns:
Unnamed: 0 (int64), values 0 to 16k
text_prompt (string), lengths 110 to 62.1k
code_prompt (string), lengths 37 to 152k
5,900
Given the following text description, write Python code to implement the functionality described below step by step Description: In this example, we will assume that the stimuli are patches of different motion directions. These stimuli span a 360-degree, circular feature space. We will build an encoding model that has 6 channels, or basis functions, which also span this feature space. Step1: Now we'll generate synthetic data. Ideally, each voxel that we measure from is roughly tuned to some part of the feature space (see Sprague, Boynton, Serences, 2019). So we will generate data that has a receptive field (RF). We can define the RF along the same feature axis as the channels that we generated above. The following two functions will generate the voxel RFs, and then generate several trials of that dataset. There are options to add uniform noise to either the RF or the trials. Step2: Now let's generate some training data and look at it. This code will create a plot that depicts the response of an example voxel for different trials. Step3: Using this synthetic training data, we can fit the IEM. Step4: Calling the IEM fit method defines the channels, or the basis set, which span the feature domain. We can examine the channels and plot them to check that they look appropriate. Remember that the plot below is in circular space. Hence, the channels wrap around the x-axis. For example, the channel depicted in blue is centered at 0 degrees (far left of plot), which is the same as 360 degrees (far right of plot). We can check whether the channels properly tile the feature space by summing across all of them. This is shown on the right plot. It should be a straight horizontal line. Step5: Now we can generate test data and see how well we can predict the test stimuli. Step6: In addition to predicting the exact feature, we can examine the model-based reconstructions in the feature domain. That is, instead of getting single predicted values for each feature, we can look at a reconstructed function which peaks at the predicted feature. Below we will plot all of the reconstructions. There will be some variability because of the noise added during the synthetic data generation. Step7: For a sanity check, let's check how R^2 changes as the number of voxels increases. We can write a quick wrapper function to train and test on a given set of motion directions, as below. Step8: We'll iterate through the list and look at the resulting R^2 values.
Python Code: # Set up parameters n_channels = 6 cos_exponent = 5 range_start = 0 range_stop = 360 feature_resolution = 360 iem_obj = IEM.InvertedEncoding1D(n_channels, cos_exponent, stimulus_mode='circular', range_start=range_start, range_stop=range_stop, channel_density=feature_resolution) # You can also try the half-circular space. Here's the associated code: # range_stop = 180 # since 0 and 360 degrees are the same, we want to stop shy of 360 # feature_resolution = 180 # iem_obj = IEM.InvertedEncoding1D(n_channels, cos_exponent, stimulus_mode='halfcircular', range_start=range_start, # range_stop=range_stop, channel_density=feature_resolution, verbose=True) stim_vals = np.linspace(0, feature_resolution - (feature_resolution/6), 6).astype(int) Explanation: In this example, we will assume that the stimuli are patches of different motion directions. These stimuli span a 360-degree, circular feature space. We will build an encoding model that has 6 channels, or basis functions, which also span this feature space. End of explanation # Generate synthetic data s.t. each voxel has a Gaussian tuning function def generate_voxel_RFs(n_voxels, feature_resolution, random_tuning=True, RF_noise=0.): if random_tuning: # Voxel selectivity is random voxel_tuning = np.floor((np.random.rand(n_voxels) * range_stop) + range_start).astype(int) else: # Voxel selectivity is evenly spaced along the feature axis voxel_tuning = np.linspace(range_start, range_stop, n_voxels+1) voxel_tuning = voxel_tuning[0:-1] voxel_tuning = np.floor(voxel_tuning).astype(int) gaussian = scipy.signal.gaussian(feature_resolution, 15) voxel_RFs = np.zeros((n_voxels, feature_resolution)) for i in range(0, n_voxels): voxel_RFs[i, :] = np.roll(gaussian, voxel_tuning[i] - ((feature_resolution//2)-1)) voxel_RFs += np.random.rand(n_voxels, feature_resolution)*RF_noise # add noise to voxel RFs voxel_RFs = voxel_RFs / np.max(voxel_RFs, axis=1)[:, None] return voxel_RFs, voxel_tuning def generate_voxel_data(voxel_RFs, n_voxels, trial_list, feature_resolution, trial_noise=0.25): one_hot = np.eye(feature_resolution) # Generate trial-wise responses based on voxel RFs if range_start > 0: trial_list = trial_list + range_start elif range_start < 0: trial_list = trial_list - range_start stim_X = one_hot[:, trial_list] #@ basis_set.transpose() trial_data = voxel_RFs @ stim_X trial_data += np.random.rand(n_voxels, trial_list.size)*(trial_noise*np.max(trial_data)) return trial_data Explanation: Now we'll generate synthetic data. Ideally, each voxel that we measure from is roughly tuned to some part of the feature space (see Sprague, Boynton, Serences, 2019). So we will generate data that has a receptive field (RF). We can define the RF along the same feature axis as the channels that we generated above. The following two functions will generate the voxel RFs, and then generate several trials of that dataset. There are options to add uniform noise to either the RF or the trials. End of explanation np.random.seed(100) n_voxels = 50 n_train_trials = 120 training_stim = np.repeat(stim_vals, n_train_trials/6) voxel_RFs, voxel_tuning = generate_voxel_RFs(n_voxels, feature_resolution, random_tuning=False, RF_noise=0.1) train_data = generate_voxel_data(voxel_RFs, n_voxels, training_stim, feature_resolution, trial_noise=0.25) print(np.linalg.cond(train_data)) # print("Voxels are tuned to: ", voxel_tuning) # Generate plots to look at the RF of an example voxel. 
voxi = 20 f = plt.figure() plt.subplot(1, 2, 1) plt.plot(train_data[voxi, :]) plt.xlabel("trial") plt.ylabel("activation") plt.title("Activation over trials") plt.subplot(1, 2, 2) plt.plot(voxel_RFs[voxi, :]) plt.xlabel("degrees (motion direction)") plt.axvline(voxel_tuning[voxi]) plt.title("Receptive field at {} deg".format(voxel_tuning[voxi])) plt.suptitle("Example voxel") plt.figure() plt.imshow(train_data) plt.ylabel('voxel') plt.xlabel('trial') plt.suptitle('Simulated data from each voxel') Explanation: Now let's generate some training data and look at it. This code will create a plot that depicts the response of an example voxel for different trials. End of explanation # Fit an IEM iem_obj.fit(train_data.transpose(), training_stim) Explanation: Using this synthetic training data, we can fit the IEM. End of explanation # Let's visualize the basis functions. channels = iem_obj.channels_ feature_axis = iem_obj.channel_domain print(channels.shape) plt.figure() plt.subplot(1, 2, 1) for i in range(0, channels.shape[0]): plt.plot(feature_axis, channels[i,:]) plt.title('Channels (i.e. basis functions)') plt.subplot(1, 2, 2) plt.plot(np.sum(channels, 0)) plt.ylim(0, 2.5) plt.title('Sum across channels') Explanation: Calling the IEM fit method defines the channels, or the basis set, which span the feature domain. We can examine the channels and plot them to check that they look appropriate. Remember that the plot below is in circular space. Hence, the channels wrap around the x-axis. For example, the channel depicted in blue is centered at 0 degrees (far left of plot), which is the same as 360 degrees (far right of plot). We can check whether the channels properly tile the feature space by summing across all of them. This is shown on the right plot. It should be a straight horizontal line. End of explanation # Generate test data n_test_trials = 12 test_stim = np.repeat(stim_vals, n_test_trials/len(stim_vals)) np.random.seed(330) test_data = generate_voxel_data(voxel_RFs, n_voxels, test_stim, feature_resolution, trial_noise=0.25) # Predict test stim & get R^2 score pred_feature = iem_obj.predict(test_data.transpose()) R2 = iem_obj.score(test_data.transpose(), test_stim) print("Predicted features are: {} degrees.".format(pred_feature)) print("Actual features are: {} degrees.".format(test_stim)) print("Test R^2 is {}".format(R2)) Explanation: Now we can generate test data and see how well we can predict the test stimuli. End of explanation # Now get the model-based reconstructions, which are continuous # functions that should peak at each test stimulus feature recons = iem_obj._predict_feature_responses(test_data.transpose()) f = plt.figure() for i in range(0, n_test_trials-1): plt.plot(feature_axis, recons[:, i]) for i in stim_vals: plt.axvline(x=i, color='k', linestyle='--') plt.title("Reconstructions of {} degrees".format(np.unique(test_stim))) Explanation: In addition to predicting the exact feature, we can examine the model-based reconstructions in the feature domain. That is, instead of getting single predicted values for each feature, we can look at a reconstructed function which peaks at the predicted feature. Below we will plot all of the reconstructions. There will be some variability because of the noise added during the synthetic data generation. 
End of explanation iem_obj.verbose = False def train_and_test(nvox, ntrn, ntst, rfn, tn): vRFs, vox_tuning = generate_voxel_RFs(nvox, feature_resolution, random_tuning=True, RF_noise=rfn) trn = np.repeat(stim_vals, ntrn/6).astype(int) trnd = generate_voxel_data(vRFs, nvox, trn, feature_resolution, trial_noise=tn) tst = np.repeat(stim_vals, ntst/6).astype(int) tstd = generate_voxel_data(vRFs, nvox, tst, feature_resolution, trial_noise=tn) iem_obj.fit(trnd.transpose(), trn) recons = iem_obj._predict_feature_responses(tstd.transpose()) pred_ori = iem_obj.predict(tstd.transpose()) R2 = iem_obj.score(tstd.transpose(), tst) return recons, pred_ori, R2, tst Explanation: For a sanity check, let's check how R^2 changes as the number of voxels increases. We can write a quick wrapper function to train and test on a given set of motion directions, as below. End of explanation np.random.seed(300) vox_list = (5, 10, 15, 25, 50) R2_list = np.zeros(len(vox_list)) for idx, nvox in enumerate(vox_list): recs, preds, R2_list[idx], test_features = train_and_test(nvox, 120, 30, 0.1, 0.25) print("The R2 values for increasing numbers of voxels: ") print(R2_list) Explanation: We'll iterate through the list and look at the resulting R^2 values. End of explanation
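Note on the row above: its code delegates fitting and inversion to an `IEM.InvertedEncoding1D` object (the `IEM` import itself is outside the excerpt and appears to come from BrainIAK's `brainiak.reconstruct.iem`). Below is a minimal NumPy sketch of the linear algebra such an object performs; the half-rectified cosine channel shape and the synthetic voxel responses are illustrative assumptions, not the package's exact implementation.

```python
import numpy as np

n_channels, exponent, res = 6, 5, 360
axis = np.arange(res)
centers = np.linspace(0, res, n_channels, endpoint=False)

def basis(theta):
    # channels x len(theta): half-rectified cosine raised to a power (assumed shape)
    d = np.deg2rad(theta[None, :] - centers[:, None])
    return np.clip(np.cos(d), 0, None) ** exponent

rng = np.random.default_rng(0)
train_dirs = np.repeat(np.arange(0, 360, 60), 20)      # 120 synthetic training trials
test_dirs = np.repeat(np.arange(0, 360, 60), 2)        # 12 synthetic test trials
W_true = rng.random((50, n_channels))                  # 50 voxels with random channel weights
B_train = W_true @ basis(train_dirs) + 0.1 * rng.standard_normal((50, train_dirs.size))
B_test = W_true @ basis(test_dirs) + 0.1 * rng.standard_normal((50, test_dirs.size))

# Fit: estimate voxel-by-channel weights by least squares
C_train = basis(train_dirs)
W_hat = B_train @ C_train.T @ np.linalg.inv(C_train @ C_train.T)
# Invert: recover channel responses for the test trials
C_test = np.linalg.inv(W_hat.T @ W_hat) @ W_hat.T @ B_test
# Reconstruct over the whole feature axis and read off the peak
recon = basis(axis).T @ C_test                         # 360 x n_trials
pred = axis[np.argmax(recon, axis=0)]
print(list(zip(test_dirs.tolist(), pred.tolist())))
```

Training estimates the encoding weights, testing inverts them to channel responses, and the weighted sum over the basis gives a reconstruction that peaks near the presented direction, the same idea as the reconstructions plotted in the row.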
5,901
Given the following text description, write Python code to implement the functionality described below step by step Description: Python 线程与协程(1) 要说到线程(Thread)与协程(Coroutine)似乎总是需要从并行(Parallelism)与并发(Concurrency)谈起,关于并行与并发的问题,Rob Pike 用 Golang 小地鼠烧书的例子给出了非常生动形象的说明。简单来说并行就是我们现实世界运行的样子,每个人都是独立的执行单元,各自完成自己的任务,这对应着计算机中的分布式(多台计算机)或多核(多个CPU)运作模式;而对于并发,我看到最生动的解释来自Quora 上 Jan Christian Meyer 回答的这张图: 并发对应计算机中充分利用单核(一个CPU)实现(看起来)多个任务同时执行。我们在这里将要讨论的 Python 中的线程与协程仅是基于单核的并发实现,随便去网上搜一搜(Thread vs Coroutine)可以找到一大批关于它们性能的争论、benchmark,这次话题的目的不在于讨论谁好谁坏,套用一句非常套路的话来说,抛开应用场景争好坏都是耍流氓。当然在硬件支持的条件下(多核)也可以利用线程和协程实现并行计算,而且 Python 2.6 之后新增了标准库 multiprocessing (PEP 371)突破了 GIL 的限制可以充分利用多核,但由于协程是基于单个线程的,因此多进程的并行对它们来说情况是类似的,因此这里只讨论单核并发的实现。 要了解线程以及协程的原理和由来可以查看参考链接中的前两篇文章。Python 3.5 中关于线程的标准库是 threading,之前在 2.x 版本中的 thread 在 3.x 之后更名为 _thread ,无论是2.7还是3.5都应该尽量避免使用较为底层的 thread/_thread 而应该使用 threading。 创建一个线程可以通过实例化一个 threading.Thread 对象: Step1: 除了通过将函数传递给 Thread 创建线程实例之外,还可以直接继承 Thread 类: Step2: 根据上面代码执行的结果可以发现,compute_sum/t.run 函数的执行是按照 start() 的顺序,但 _sum 结果的输出顺序却是随机的。因为 _sum 中加入了 time.sleep(2.0) ,让程序执行到这里就会进入阻塞状态,但是几个线程的执行看起来却像是同时进行的(并发)。 有时候我们既需要并发地“跳过“阻塞的部分,又需要有序地执行其它部分,例如操作共享数据的时候,这时就需要用到”锁“。在上述”求和线程“的例子中,假设每次求和都需要加上额外的 _base 并把计算结果累积到 _base 中。尽管这个例子不太恰当,但它说明了线程锁的用途:
Python Code: from threading import Thread import time def _sum(x, y): print("Compute {} + {}...".format(x, y)) time.sleep(2.0) return x+y def compute_sum(x, y): result = _sum(x, y) print("{} + {} = {}".format(x, y, result)) start = time.time() threads = [ Thread(target=compute_sum, args=(0,0)), Thread(target=compute_sum, args=(1,1)), Thread(target=compute_sum, args=(2,2)), ] for t in threads: t.start() for t in threads: t.join() print("Total elapsed time {} s".format(time.time() - start)) # Do not use Thread start = time.time() compute_sum(0,0) compute_sum(1,1) compute_sum(2,2) print("Total elapsed time {} s".format(time.time() - start)) Explanation: Python 线程与协程(1) 要说到线程(Thread)与协程(Coroutine)似乎总是需要从并行(Parallelism)与并发(Concurrency)谈起,关于并行与并发的问题,Rob Pike 用 Golang 小地鼠烧书的例子给出了非常生动形象的说明。简单来说并行就是我们现实世界运行的样子,每个人都是独立的执行单元,各自完成自己的任务,这对应着计算机中的分布式(多台计算机)或多核(多个CPU)运作模式;而对于并发,我看到最生动的解释来自Quora 上 Jan Christian Meyer 回答的这张图: 并发对应计算机中充分利用单核(一个CPU)实现(看起来)多个任务同时执行。我们在这里将要讨论的 Python 中的线程与协程仅是基于单核的并发实现,随便去网上搜一搜(Thread vs Coroutine)可以找到一大批关于它们性能的争论、benchmark,这次话题的目的不在于讨论谁好谁坏,套用一句非常套路的话来说,抛开应用场景争好坏都是耍流氓。当然在硬件支持的条件下(多核)也可以利用线程和协程实现并行计算,而且 Python 2.6 之后新增了标准库 multiprocessing (PEP 371)突破了 GIL 的限制可以充分利用多核,但由于协程是基于单个线程的,因此多进程的并行对它们来说情况是类似的,因此这里只讨论单核并发的实现。 要了解线程以及协程的原理和由来可以查看参考链接中的前两篇文章。Python 3.5 中关于线程的标准库是 threading,之前在 2.x 版本中的 thread 在 3.x 之后更名为 _thread ,无论是2.7还是3.5都应该尽量避免使用较为底层的 thread/_thread 而应该使用 threading。 创建一个线程可以通过实例化一个 threading.Thread 对象: End of explanation from threading import Thread import time class ComputeSum(Thread): def __init__(self, x, y): super().__init__() self.x = x self.y = y def run(self): result = self._sum(self.x, self.y) print("{} + {} = {}".format(self.x, self.y, result)) def _sum(self, x, y): print("Compute {} + {}...".format(x, y)) time.sleep(2.0) return x+y threads = [ComputeSum(0,0), ComputeSum(1,1), ComputeSum(2,2)] start = time.time() for t in threads: t.start() for t in threads: t.join() print("Total elapsed time {} s".format(time.time() - start)) Explanation: 除了通过将函数传递给 Thread 创建线程实例之外,还可以直接继承 Thread 类: End of explanation from threading import Thread, Lock import time _base = 1 _lock = Lock() class ComputeSum(Thread): def __init__(self, x, y): super().__init__() self.x = x self.y = y def run(self): result = self._sum(self.x, self.y) print("{} + {} + base = {}".format(self.x, self.y, result)) def _sum(self, x, y): print("Compute {} + {}...".format(x, y)) time.sleep(2.0) global _base with _lock: result = x + y + _base _base = result return result threads = [ComputeSum(0,0), ComputeSum(1,1), ComputeSum(2,2)] start = time.time() for t in threads: t.start() for t in threads: t.join() print("Total elapsed time {} s".format(time.time() - start)) Explanation: 根据上面代码执行的结果可以发现,compute_sum/t.run 函数的执行是按照 start() 的顺序,但 _sum 结果的输出顺序却是随机的。因为 _sum 中加入了 time.sleep(2.0) ,让程序执行到这里就会进入阻塞状态,但是几个线程的执行看起来却像是同时进行的(并发)。 有时候我们既需要并发地“跳过“阻塞的部分,又需要有序地执行其它部分,例如操作共享数据的时候,这时就需要用到”锁“。在上述”求和线程“的例子中,假设每次求和都需要加上额外的 _base 并把计算结果累积到 _base 中。尽管这个例子不太恰当,但它说明了线程锁的用途: End of explanation
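A small illustration to go with the lock discussion in the row above (not part of the original notebook): several threads doing unsynchronized read-modify-write on a shared counter can lose updates, while the `Lock`-protected version cannot. Whether the unsafe version actually loses updates in a given run depends on the interpreter's thread switching.

```python
from threading import Thread, Lock

counter = 0
lock = Lock()

def bump_unsafe(n):
    global counter
    for _ in range(n):
        counter += 1          # read-modify-write; a thread switch here can lose updates

def bump_safe(n):
    global counter
    for _ in range(n):
        with lock:            # serialize the critical section, as the row does with _lock
            counter += 1

def run(worker):
    global counter
    counter = 0
    threads = [Thread(target=worker, args=(100000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print("without lock:", run(bump_unsafe))   # may come out below 400000
print("with lock:   ", run(bump_safe))     # always 400000
```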
5,902
Given the following text description, write Python code to implement the functionality described below step by step Description: Grid Searches <img src="figures/grid_search_cross_validation.svg" width=100%> Grid-Search with build-in cross validation Step1: Define parameter grid Step2: A GridSearchCV object behaves just like a normal classifier. Step3: Nested Cross-validation in scikit-learn
Python Code: from sklearn.grid_search import GridSearchCV from sklearn.svm import SVC from sklearn.datasets import load_digits from sklearn.cross_validation import train_test_split digits = load_digits() X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target, random_state=0) Explanation: Grid Searches <img src="figures/grid_search_cross_validation.svg" width=100%> Grid-Search with build-in cross validation End of explanation import numpy as np param_grid = {'C': 10. ** np.arange(-3, 3), 'gamma' : 10. ** np.arange(-5, 0)} np.set_printoptions(suppress=True) print(param_grid) grid_search = GridSearchCV(SVC(), param_grid, verbose=3) Explanation: Define parameter grid: End of explanation grid_search.fit(X_train, y_train) grid_search.predict(X_test) grid_search.score(X_test, y_test) grid_search.best_params_ # We extract just the scores scores = [x.mean_validation_score for x in grid_search.grid_scores_] scores = np.array(scores).reshape(6, 5) plt.matshow(scores) plt.xlabel('gamma') plt.ylabel('C') plt.colorbar() plt.xticks(np.arange(5), param_grid['gamma']) plt.yticks(np.arange(6), param_grid['C']); Explanation: A GridSearchCV object behaves just like a normal classifier. End of explanation from sklearn.neighbors import KNeighborsClassifier # %load solutions/grid_search_k_neighbors.py Explanation: Nested Cross-validation in scikit-learn: Exercises Use GridSearchCV to adjust n_neighbors of KNeighborsClassifier. Visualize grid_search.grid_scores_. End of explanation
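The exercise cell at the end of the row above only `%load`s a solutions file that is not included here. A hypothetical solution, written against the current `sklearn.model_selection` API (the row itself uses the pre-0.20 `sklearn.grid_search` module and the old `grid_scores_` attribute), could look like this:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

param_grid = {'n_neighbors': np.arange(1, 11)}
grid = GridSearchCV(KNeighborsClassifier(), param_grid, cv=5)
grid.fit(X_train, y_train)

print("best n_neighbors:", grid.best_params_['n_neighbors'])
print("test accuracy:   ", grid.score(X_test, y_test))

# Visualize mean cross-validated accuracy vs. n_neighbors
# (cv_results_ replaces the old grid_scores_ attribute)
plt.plot(param_grid['n_neighbors'], grid.cv_results_['mean_test_score'], 'o-')
plt.xlabel('n_neighbors')
plt.ylabel('mean CV accuracy')
plt.show()
```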
5,903
Given the following text description, write Python code to implement the functionality described below step by step Description: Redoing Weka Stuff In this section we will try to redo some of the things we have already done in Weka. Objective Step1: Feature creations - Math Expressions Step2: Creating many features at once using patsy Step3: Plot decision regions We can only do this if our data has 2 features Step4: Naive Bayes classifier Step5: Decision surface of Naive Bayes classifier will not have overlapping colors because of the basic code I am using to show decision boundaries. A better code can show the mixing of colors properly Step6: Logistic regression Step7: IBk of K-nearest neighbors classifier
Python Code: %matplotlib inline import numpy as np from scipy.io import arff import pandas as pd import matplotlib.pyplot as plt import seaborn as sns import patsy import statsmodels.api as sm from sklearn import tree, linear_model, metrics, dummy, naive_bayes, neighbors from IPython.display import Image import pydotplus sns.set_context("paper") sns.set_style("ticks") def load_arff(filename): data, meta = arff.loadarff(filename) df = pd.DataFrame(data, columns=meta.names()) for c, k in zip(df.columns, meta.types()): if k == "nominal": df[c] = df[c].astype("category") if k == "numeric": df[c] = df[c].astype("float") return df def get_confusion_matrix(clf, X, y, verbose=True): y_pred = clf.predict(X) cm = metrics.confusion_matrix(y_true=y, y_pred=y_pred) clf_report = metrics.classification_report(y, y_pred) df_cm = pd.DataFrame(cm, columns=clf.classes_, index=clf.classes_) if verbose: print clf_report print df_cm return clf_report, df_cm def show_decision_tree(clf, X, y): dot_data = tree.export_graphviz(clf, out_file=None, feature_names=X.columns, class_names=y.unique(), filled=True, rounded=True, special_characters=True, impurity=False) graph = pydotplus.graph_from_dot_data(dot_data) return Image(graph.create_png()) def plot_decision_regions(clf, X, y, col_x=0, col_y=1, ax=None, plot_step=0.01, colors="bry"): if ax is None: fig, ax = plt.subplots() x_min, x_max = X[col_x].min(), X[col_x].max() y_min, y_max = X[col_y].min(), X[col_y].max() xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step), np.arange(y_min, y_max, plot_step)) Z = clf.predict(np.c_[xx.ravel(), yy.ravel()]) b, Z = np.unique(Z, return_inverse=True) Z = Z.reshape(xx.shape) cs = ax.contourf(xx, yy, Z, cmap=plt.cm.Paired) for i, l in enumerate(clf.classes_): idx = np.where(y==l)[0] ax.scatter(X.ix[idx, col_x], X.ix[idx, col_y], label=l, c=colors[i], cmap=plt.cm.Paired) ax.set_xlabel(col_x) ax.set_ylabel(col_y) ax.legend(bbox_to_anchor=(1.2, 0.5)) fig.tight_layout() return ax df = load_arff("../data/iris.arff") print df.shape df.head() df.dtypes Explanation: Redoing Weka Stuff In this section we will try to redo some of the things we have already done in Weka. Objective: To try out some familiar algorithms for classification and regression in python using its libraries. Imports I always try to import all the useful libraries upfront. It is also considered a good practice in programming community. End of explanation df_t = df.copy() ## Since we are going to edit the data we should always make a copy df_t.head() df_t["sepallength_sqr"] = df_t["sepallength"]**2 ## ** in python is used for exponent. 
df_t.head() df_t["sepallength_log"] = np.log10(df_t["sepallength"]) df_t.head() Explanation: Feature creations - Math Expressions End of explanation df_t = df_t.rename(columns={"class": "label"}) df_t.head() y, X = patsy.dmatrices("label ~ petalwidth + petallength:petalwidth + I(sepallength**2)-1", data=df_t, return_type="dataframe") print y.shape, X.shape y.head() X.head() model = sm.MNLogit(y, X) res = model.fit() res.summary() model_sk = linear_model.LogisticRegression(multi_class="multinomial", solver="lbfgs") model_sk.fit(X, df_t["label"]) y_pred = model_sk.predict(X) y_pred[:10] print metrics.classification_report(df_t["label"], y_pred) model_sk_t = tree.DecisionTreeClassifier() model_sk_t.fit(X, df_t["label"]) show_decision_tree(model_sk_t, X, df_t["label"]) model_0r = dummy.DummyClassifier(strategy="most_frequent") model_0r.fit(X, df_t["label"]) y_pred = model_0r.predict(X) print metrics.classification_report(df_t["label"], y_pred) cm = metrics.confusion_matrix(y_true=df_t["label"], y_pred=y_pred) df_cm = pd.DataFrame(cm, columns=model_0r.classes_, index=model_0r.classes_) df_cm _ = get_confusion_matrix(model_0r, X, df_t["label"]) _ = get_confusion_matrix(model_sk_t, X, df_t["label"]) _ = get_confusion_matrix(model_sk, X, df_t["label"]) Explanation: Creating many features at once using patsy End of explanation y, X = patsy.dmatrices("label ~ petalwidth + petallength - 1", data=df_t, return_type="dataframe") # -1 forces the data to not generate an intercept X.columns y = df_t["label"] clf = tree.DecisionTreeClassifier() clf.fit(X, y) _ = get_confusion_matrix(clf, X, y) clf.feature_importances_ show_decision_tree(clf, X, y) X.head() y.value_counts() plot_decision_regions(clf, X, y, col_x="petalwidth", col_y="petallength") Explanation: Plot decision regions We can only do this if our data has 2 features End of explanation clf = naive_bayes.GaussianNB() clf.fit(X, y) _ = get_confusion_matrix(clf, X, y) Explanation: Naive Bayes classifier End of explanation plot_decision_regions(clf, X, y, col_x="petalwidth", col_y="petallength") Explanation: Decision surface of Naive Bayes classifier will not have overlapping colors because of the basic code I am using to show decision boundaries. A better code can show the mixing of colors properly End of explanation clf = linear_model.LogisticRegression(multi_class="multinomial", solver="lbfgs") clf.fit(X, y) _ = get_confusion_matrix(clf, X, y) plot_decision_regions(clf, X, y, col_x="petalwidth", col_y="petallength") Explanation: Logistic regression End of explanation clf = neighbors.KNeighborsClassifier(n_neighbors=1) clf.fit(X, y) _ = get_confusion_matrix(clf, X, y) plot_decision_regions(clf, X, y, col_x="petalwidth", col_y="petallength") Explanation: IBk of K-nearest neighbors classifier End of explanation
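The row above notes that its decision-region plot shows hard class labels, so the Naive Bayes surface has no colour mixing, and that "better code can show the mixing of colors properly". One way to do that, sketched here with scikit-learn's bundled iris data rather than the row's ARFF file, is to shade the surface with `predict_proba` outputs so each pixel blends the three class probabilities into an RGB value:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB

iris = load_iris()
X = iris.data[:, [3, 2]]            # petal width, petal length, as in the row's plots
y = iris.target

clf = GaussianNB().fit(X, y)

xx, yy = np.meshgrid(np.linspace(X[:, 0].min() - .5, X[:, 0].max() + .5, 300),
                     np.linspace(X[:, 1].min() - .5, X[:, 1].max() + .5, 300))
proba = clf.predict_proba(np.c_[xx.ravel(), yy.ravel()])   # (n_points, 3)
rgb = proba.reshape(xx.shape + (3,))                       # 3 classes -> RGB channels

plt.imshow(rgb, origin='lower', aspect='auto',
           extent=(xx.min(), xx.max(), yy.min(), yy.max()))
plt.scatter(X[:, 0], X[:, 1], c=y, edgecolor='k', cmap='viridis')
plt.xlabel('petal width')
plt.ylabel('petal length')
plt.show()
```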
5,904
Given the following text description, write Python code to implement the functionality described below step by step Description: Ugly To Pretty for CSVS Run on linux. Set an import path and an export path to folders. Will take every file in import directory that is a mathematica generated CSV and turn it into a nicely fomatted CSV in Output directory. Paths Step1: Function Step2: Run
Python Code: importpath = "/home/jwb/repos/github-research/csvs/Individuals/Ugly/Stack/" exportpath = "/home/jwb/repos/github-research/csvs/Individuals/Pretty/Stack/" Explanation: Ugly To Pretty for CSVS Run on linux. Set an import path and an export path to folders. Will take every file in import directory that is a mathematica generated CSV and turn it into a nicely fomatted CSV in Output directory. Paths End of explanation import csv import pandas as pd import os def arrayer(path): with open(path, "rt") as f: reader = csv.reader(f) names = set() times = {} windows = [] rownum = 0 for row in reader: newrow = [(i[1:-1],j[:-2]) for i,j in zip(row[1::2], row[2::2])] #Drops the timewindow, and groups the rest of the row into [name, tally] rowdict = dict(newrow) names.update([x[0] for x in newrow]) #adds each name to a name set l=row[0].replace("DateObject[{","").strip("{}]}").replace(",","").replace("}]","").split() #Strips DateObject string timestamp=':'.join(l[:3])+'-'+':'.join(l[3:]) #Formats date string windows.append(timestamp) #add timestamp to list times[timestamp] = rowdict #link results as value in timestamp dict rownum += 1 cols = [[times[k][name] if name in times[k] else ' 0' for name in names ] for k in windows] #put the tally for each name across each timestamp in a nested list of Columns data = pd.DataFrame(cols,columns=list(names),index=windows) #Put into dataframe with labels return data.transpose() Explanation: Function End of explanation for filename in os.listdir(importpath): arrayer(importpath+filename).to_csv(exportpath+filename, encoding='utf-8') Explanation: Run End of explanation
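To make the `arrayer` pivot in the row above easier to follow, here is a small, self-contained sketch of the same reshaping done with a long-format DataFrame and `pivot_table`. The two-line sample input is made up to resemble the format the parsing code implies (a `DateObject[...]` timestamp followed by alternating name/tally pairs); the real Mathematica exports may differ in quoting and trailing characters, which is what the `i[1:-1]` and `j[:-2]` slices in the original handle.

```python
import csv
import io
import pandas as pd

# Hypothetical sample resembling the implied input format
sample = ('"DateObject[{2016, 1, 1, 0, 0, 0.}]","alice",3.0,"bob",5.0\n'
          '"DateObject[{2016, 1, 2, 0, 0, 0.}]","alice",1.0\n')

records = []
for row in csv.reader(io.StringIO(sample)):
    # same timestamp stripping as the original row's code
    stamp = row[0].replace("DateObject[{", "").strip("{}]}").replace(",", "")
    for name, tally in zip(row[1::2], row[2::2]):
        records.append({"window": stamp, "name": name, "tally": float(tally)})

# long format -> wide table of tallies per name and time window, missing values as 0
wide = (pd.DataFrame(records)
          .pivot_table(index="name", columns="window", values="tally", fill_value=0))
print(wide)
```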
5,905
Given the following text description, write Python code to implement the functionality described below step by step Description: <h2 align="center">点击下列图标在线运行HanLP</h2> <div align="center"> <a href="https Step1: 加载模型 HanLP的工作流程是先加载模型,模型的标示符存储在hanlp.pretrained这个包中,按照NLP任务归类。 Step2: 调用hanlp.load进行加载,模型会自动下载到本地缓存。自然语言处理分为许多任务,分词只是最初级的一个。与其每个任务单独创建一个模型,不如利用HanLP的联合模型一次性完成多个任务: Step3: 语义角色分析 任务越少,速度越快。如指定仅执行语义角色分析: Step4: 返回值为一个Document Step5: doc['srl']字段为语义角色标注结果,每个四元组的格式为[论元或谓词, 语义角色标签, 起始下标, 终止下标]。其中,谓词的语义角色标签为PRED,起止下标对应以tok开头的第一个单词数组。 可视化谓词论元结构: Step6: 遍历谓词论元结构: Step7: 为已分词的句子执行语义角色分析:
Python Code: !pip install hanlp -U Explanation: <h2 align="center">点击下列图标在线运行HanLP</h2> <div align="center"> <a href="https://colab.research.google.com/github/hankcs/HanLP/blob/doc-zh/plugins/hanlp_demo/hanlp_demo/zh/srl_mtl.ipynb" target="_blank"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> <a href="https://mybinder.org/v2/gh/hankcs/HanLP/doc-zh?filepath=plugins%2Fhanlp_demo%2Fhanlp_demo%2Fzh%2Fsrl_mtl.ipynb" target="_blank"><img src="https://mybinder.org/badge_logo.svg" alt="Open In Binder"/></a> </div> 安装 无论是Windows、Linux还是macOS,HanLP的安装只需一句话搞定: End of explanation import hanlp hanlp.pretrained.mtl.ALL # MTL多任务,具体任务见模型名称,语种见名称最后一个字段或相应语料库 Explanation: 加载模型 HanLP的工作流程是先加载模型,模型的标示符存储在hanlp.pretrained这个包中,按照NLP任务归类。 End of explanation HanLP = hanlp.load(hanlp.pretrained.mtl.CLOSE_TOK_POS_NER_SRL_DEP_SDP_CON_ELECTRA_BASE_ZH) Explanation: 调用hanlp.load进行加载,模型会自动下载到本地缓存。自然语言处理分为许多任务,分词只是最初级的一个。与其每个任务单独创建一个模型,不如利用HanLP的联合模型一次性完成多个任务: End of explanation doc = HanLP('2021年HanLPv2.1为生产环境带来次世代最先进的多语种NLP技术。', tasks='srl') Explanation: 语义角色分析 任务越少,速度越快。如指定仅执行语义角色分析: End of explanation print(doc) Explanation: 返回值为一个Document: End of explanation doc.pretty_print() Explanation: doc['srl']字段为语义角色标注结果,每个四元组的格式为[论元或谓词, 语义角色标签, 起始下标, 终止下标]。其中,谓词的语义角色标签为PRED,起止下标对应以tok开头的第一个单词数组。 可视化谓词论元结构: End of explanation for i, pas in enumerate(doc['srl']): print(f'第{i+1}个谓词论元结构:') for form, role, begin, end in pas: print(f'{form} = {role} at [{begin}, {end}]') Explanation: 遍历谓词论元结构: End of explanation HanLP([ ["HanLP", "为", "生产", "环境", "带来", "次世代", "最", "先进", "的", "多语种", "NLP", "技术", "。"], ["我", "的", "希望", "是", "希望", "张晚霞", "的", "背影", "被", "晚霞", "映红", "。"] ], tasks='srl', skip_tasks='tok*').pretty_print() Explanation: 为已分词的句子执行语义角色分析: End of explanation
5,906
Given the following text description, write Python code to implement the functionality described below step by step Description: Facies classification using an SVM classifier with RBF kernel Contest entry by Step1: This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate. The seven predictor variables are Step2: These are the names of the 10 training wells in the Council Grove reservoir. Data has been recruited into pseudo-well 'Recruit F9' to better represent facies 9, the Phylloid-algal bafflestone. Before we plot the well data, let's define a color map so the facies are represented by consistent color in all the plots in this tutorial. We also create the abbreviated facies labels, and add those to the facies_vectors dataframe. Step3: This is a quick view of the statistical distribution of the input variables. Looking at the count values, most values have 4149 valid values except for PE, which has 3232. We will drop the feature vectors that don't have a valid PE entry. Step4: Now we extract just the feature variables we need to perform the classification. The predictor variables are the five log values and two geologic constraining variables, and we are also using depth. We also get a vector of the facies labels that correspond to each feature vector. Step5: Stratified K-fold validation to evaluate model performance One of the key steps in machine learning is to estimate a model's performance on data that it has not seen before. Scikit-learn provides a simple utility utility (train_test_split) to partition the data into a training and a test set, but the disadvantage with that is that we ignore a portion of our dataset during training. An additional disadvantage of simple spit, inherent to log data, is that there's a depth dependence. A possible strategy to avoid this is cross-validation. With k-fold cross-validation we randomly split the data into k-folds without replacement, where k-1 folds are used for training and one fold for testing. The process is repeated k times, and the performance is obtained by taking the average of the k individual performances. Stratified k-fold is an improvement over standard k-fold in that the class proportions are preserved in each fold to ensure that each fold is representative of the class proportions in the data. Grid search for parameter tuning Another important aspect of machine learning is the search for the optimal model parameters (i.e. those that will yield the best performance). This tuning is done using grid search. The above short summary is based on Sebastian Raschka's <a href="https Step6: Two birds with a stone Below we will perform grid search with stratified K-fold Step7: SVM classifier SImilar to the classifier in the article (but, as you will see, it uses a different kernel). 
We will re-import the data so as to pre-process it as in the tutorial. Step9: Learning curves The idea from this point forward is to use the parameters, as tuned above, but to create a brand new classifier for the learning curves exercise. This classifier will therefore be well tuned but would not have seen the training data. We will look at learning curves of training and (cross-validated) testing error versus number of samples, hoping to gain some insight into whether Step10: First things first, how many samples do we have for each leave-one-well-out split? Step11: On average, we'll have about 2830 samples for training curves and 400 for testing curves. Step12: Observations Neither training nor cross-validation scores are very high. The scores start to converge at just about the number of samples (on average) we intend to use for training of our final classifier with leave-one-well-out well cross-validation. But there's still a bit of a gap, which may indicate slight over-fitting (variance a bit high). Since we cannot address the overfitting by increasing the number of samples (without sacrificing the leave-one-well-out strategy), we can increase regularization a bit by slightly decreasing the parameter C. Step13: Confusion matrix Let's see how we do with predicting the actual facies, by looking at a confusion matrix. We do this by keeping the parameters from the previous section, but creating a brand new classifier. So when we fit the data, it won't have seen it before. Step14: Final classifier We now train our final classifier with leave-one-well-out validation. Again, we keep the parameters from the previous section, but creating a brand new classifier. So when we fit the data, it won't have seen it before. Step15: NB
Python Code: %matplotlib inline import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt from sklearn import preprocessing from sklearn.metrics import f1_score, accuracy_score, make_scorer from sklearn.model_selection import LeaveOneGroupOut, validation_curve import pandas as pd from pandas import set_option set_option("display.max_rows", 10) pd.options.mode.chained_assignment = None filename = '../facies_vectors.csv' training_data = pd.read_csv(filename) training_data Explanation: Facies classification using an SVM classifier with RBF kernel Contest entry by: <a href="https://github.com/mycarta">Matteo Niccoli</a> and <a href="https://github.com/dahlmb">Mark Dahl</a> Original contest notebook by Brendon Hall, Enthought <a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" property="dct:title">The code and ideas in this notebook,</span> by <span xmlns:cc="http://creativecommons.org/ns#" property="cc:attributionName">Matteo Niccoli and Mark Dahl,</span> are licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>. In this notebook we will train a machine learning algorithm to predict facies from well log data. The dataset comes from a class exercise from The University of Kansas on Neural Networks and Fuzzy Systems. This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007). The dataset consists of log data from nine wells that have been labeled with a facies type based on observation of core. We will use this log data to train a support vector machine to classify facies types. The plan After a quick exploration of the dataset, we will: - run cross-validated grid search (with stratified k-fold) for parameter tuning - look at learning curves to get an idea of bias vs. variance, and under fitting vs. over fitting - train a new classifier with tuned parameters using leave-one-well-out as a method of testing Exploring the dataset First, we will examine the data set we will use to train the classifier. End of explanation training_data['Well Name'] = training_data['Well Name'].astype('category') training_data['Formation'] = training_data['Formation'].astype('category') training_data['Well Name'].unique() Explanation: This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate. 
The seven predictor variables are: * Five wire line log curves include gamma ray (GR), resistivity logging (ILD_log10), photoelectric effect (PE), neutron-density porosity difference and average neutron-density porosity (DeltaPHI and PHIND). Note, some wells do not have PE. * Two geologic constraining variables: nonmarine-marine indicator (NM_M) and relative position (RELPOS) The nine discrete facies (classes of rocks) are: 1. Nonmarine sandstone 2. Nonmarine coarse siltstone 3. Nonmarine fine siltstone 4. Marine siltstone and shale 5. Mudstone (limestone) 6. Wackestone (limestone) 7. Dolomite 8. Packstone-grainstone (limestone) 9. Phylloid-algal bafflestone (limestone) These facies aren't discrete, and gradually blend into one another. Some have neighboring facies that are rather close. Mislabeling within these neighboring facies can be expected to occur. The following table lists the facies, their abbreviated labels and their approximate neighbors. Facies |Label| Adjacent Facies :---: | :---: |:--: 1 |SS| 2 2 |CSiS| 1,3 3 |FSiS| 2 4 |SiSh| 5 5 |MS| 4,6 6 |WS| 5,7 7 |D| 6,8 8 |PS| 6,7,9 9 |BS| 7,8 Let's clean up this dataset. The 'Well Name' and 'Formation' columns can be turned into a categorical data type. End of explanation # 1=sandstone 2=c_siltstone 3=f_siltstone # 4=marine_silt_shale #5=mudstone 6=wackestone 7=dolomite 8=packstone 9=bafflestone facies_colors = ['#F4D03F', '#F5B041', '#DC7633','#A569BD', '#000000', '#000080', '#2E86C1', '#AED6F1', '#196F3D'] facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS', 'WS', 'D','PS', 'BS'] #facies_color_map is a dictionary that maps facies labels #to their respective colors facies_color_map = {} for ind, label in enumerate(facies_labels): facies_color_map[label] = facies_colors[ind] def label_facies(row, labels): return labels[ row['Facies'] -1] training_data.loc[:,'FaciesLabels'] = training_data.apply(lambda row: label_facies(row, facies_labels), axis=1) training_data.describe() Explanation: These are the names of the 10 training wells in the Council Grove reservoir. Data has been recruited into pseudo-well 'Recruit F9' to better represent facies 9, the Phylloid-algal bafflestone. Before we plot the well data, let's define a color map so the facies are represented by consistent color in all the plots in this tutorial. We also create the abbreviated facies labels, and add those to the facies_vectors dataframe. End of explanation PE_mask = training_data['PE'].notnull().values training_data = training_data[PE_mask] training_data.describe() Explanation: This is a quick view of the statistical distribution of the input variables. Looking at the count values, most values have 4149 valid values except for PE, which has 3232. We will drop the feature vectors that don't have a valid PE entry. End of explanation y = training_data['Facies'].values print y[25:40] print np.shape(y) X = training_data.drop(['Formation', 'Well Name','Facies','FaciesLabels'], axis=1) print np.shape(X) X.describe(percentiles=[.05, .25, .50, .75, .95]) Explanation: Now we extract just the feature variables we need to perform the classification. The predictor variables are the five log values and two geologic constraining variables, and we are also using depth. We also get a vector of the facies labels that correspond to each feature vector. 
End of explanation from sklearn.model_selection import GridSearchCV Explanation: Stratified K-fold validation to evaluate model performance One of the key steps in machine learning is to estimate a model's performance on data that it has not seen before. Scikit-learn provides a simple utility utility (train_test_split) to partition the data into a training and a test set, but the disadvantage with that is that we ignore a portion of our dataset during training. An additional disadvantage of simple spit, inherent to log data, is that there's a depth dependence. A possible strategy to avoid this is cross-validation. With k-fold cross-validation we randomly split the data into k-folds without replacement, where k-1 folds are used for training and one fold for testing. The process is repeated k times, and the performance is obtained by taking the average of the k individual performances. Stratified k-fold is an improvement over standard k-fold in that the class proportions are preserved in each fold to ensure that each fold is representative of the class proportions in the data. Grid search for parameter tuning Another important aspect of machine learning is the search for the optimal model parameters (i.e. those that will yield the best performance). This tuning is done using grid search. The above short summary is based on Sebastian Raschka's <a href="https://github.com/rasbt/python-machine-learning-book"> Python Machine Learning</a> book. End of explanation Fscorer = make_scorer(f1_score, average = 'micro') Ascorer = make_scorer(accuracy_score) Explanation: Two birds with a stone Below we will perform grid search with stratified K-fold: http://scikit-learn.org/stable/auto_examples/model_selection/grid_search_digits.html#sphx-glr-auto-examples-model-selection-grid-search-digits-py. This will give us reasonable values for the more critical (for performance) classifier's parameters. Make performance scorers Used to evaluate training, testing, and validation performance. End of explanation from sklearn import svm SVC_classifier = svm.SVC(cache_size = 800, random_state=1) training_data = pd.read_csv('../training_data.csv') X = training_data.drop(['Formation', 'Well Name', 'Facies'], axis=1).values scaler = preprocessing.StandardScaler().fit(X) X = scaler.transform(X) y = training_data['Facies'].values parm_grid={'kernel': ['linear', 'rbf'], 'C': [0.5, 1, 5, 10, 15], 'gamma':[0.0001, 0.001, 0.01, 0.1, 1, 10]} grid_search = GridSearchCV(SVC_classifier, param_grid=parm_grid, scoring = Fscorer, cv=10) # Stratified K-fold with n_splits=10 # For integer inputs, if the estimator is a # classifier and y is either binary or multiclass, # as in our case, StratifiedKFold is used grid_search.fit(X, y) print('Best score: {}'.format(grid_search.best_score_)) print('Best parameters: {}'.format(grid_search.best_params_)) grid_search.best_estimator_ Explanation: SVM classifier SImilar to the classifier in the article (but, as you will see, it uses a different kernel). We will re-import the data so as to pre-process it as in the tutorial. End of explanation from sklearn.model_selection import learning_curve from sklearn.model_selection import ShuffleSplit def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None, n_jobs=1, train_sizes=np.linspace(0.1, 1., 5)): Generate a simple plot of the test and training learning curve. Parameters ---------- estimator : object type that implements the "fit" and "predict" methods An object of that type which is cloned for each validation. 
title : string Title for the chart. X : array-like, shape (n_samples, n_features) Training vector, where n_samples is the number of samples and n_features is the number of features. y : array-like, shape (n_samples) or (n_samples, n_features), optional Target relative to X for classification or regression; None for unsupervised learning. ylim : tuple, shape (ymin, ymax), optional Defines minimum and maximum yvalues plotted. cv : int, cross-validation generator or an iterable, optional Determines the cross-validation splitting strategy. Possible inputs for cv are: - None, to use the default 3-fold cross-validation, - integer, to specify the number of folds. - An object to be used as a cross-validation generator. - An iterable yielding train/test splits. For integer/None inputs, if ``y`` is binary or multiclass, :class:`StratifiedKFold` used. If the estimator is not a classifier or if ``y`` is neither binary nor multiclass, :class:`KFold` is used. Refer :ref:`User Guide <cross_validation>` for the various cross-validators that can be used here. n_jobs : integer, optional Number of jobs to run in parallel (default 1). plt.figure() plt.title(title) if ylim is not None: plt.ylim(*ylim) plt.xlabel("Training examples") plt.ylabel("Score") train_sizes, train_scores, test_scores = learning_curve( estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes, scoring = Fscorer) train_scores_mean = np.mean(train_scores, axis=1) train_scores_std = np.std(train_scores, axis=1) test_scores_mean = np.mean(test_scores, axis=1) test_scores_std = np.std(test_scores, axis=1) plt.grid() plt.fill_between(train_sizes, train_scores_mean - train_scores_std, train_scores_mean + train_scores_std, alpha=0.1, color="r") plt.fill_between(train_sizes, test_scores_mean - test_scores_std, test_scores_mean + test_scores_std, alpha=0.1, color="g") plt.plot(train_sizes, train_scores_mean, 'o-', color="r", label="Training F1") plt.plot(train_sizes, test_scores_mean, 'o-', color="g", label="Cross-validation F1") plt.legend(loc="best") return plt Explanation: Learning curves The idea from this point forward is to use the parameters, as tuned above, but to create a brand new classifier for the learning curves exercise. This classifier will therefore be well tuned but would not have seen the training data. We will look at learning curves of training and (cross-validated) testing error versus number of samples, hoping to gain some insight into whether: - since we will be testing eventually using a leave one-well-out, would we have enough samples? - is there a good bias-variance trade-off? In other words, is the classifier under-fitting, over-fitting, or just right? The plots are adapted from: http://scikit-learn.org/stable/auto_examples/model_selection/plot_learning_curve.html End of explanation training_data = pd.read_csv('../training_data.csv') X = training_data.drop(['Formation', 'Well Name', 'Facies'], axis=1).values scaler = preprocessing.StandardScaler().fit(X) X = scaler.transform(X) y = training_data['Facies'].values wells = training_data["Well Name"].values logo = LeaveOneGroupOut() for train, test in logo.split(X, y, groups=wells): well_name = wells[test[0]] print well_name, 'out: ', np.shape(train)[0], 'training samples - ', np.shape(test)[0], 'test samples' Explanation: First things first, how many samples do we have for each leave-one-well-out split? 
End of explanation from sklearn import svm SVC_classifier_learn = svm.SVC(C=5, cache_size=800, class_weight=None, coef0=0.0, decision_function_shape=None, degree=3, gamma=0.01, kernel='rbf', max_iter=-1, probability=False, random_state=1, shrinking=True, tol=0.001, verbose=False) training_data = pd.read_csv('../training_data.csv') X = training_data.drop(['Formation', 'Well Name', 'Facies'], axis=1).values scaler = preprocessing.StandardScaler().fit(X) X = scaler.transform(X) y = training_data['Facies'].values title = "Learning Curves (SVC)" # Learning curves with 50 iterations to get smoother mean test and train # score curves; each time we hold 15% of the data randomly as a validation set. # This is equivalent to leaving about 1 well out, on average (3232 minus ~2800 samples) cv = ShuffleSplit(n_splits=50, test_size=0.15, random_state=1) plot_learning_curve(SVC_classifier_learn, title, X, y, cv=cv, ylim=(0.45, 0.75), n_jobs=4) plt.show() Explanation: On average, we'll have about 2830 samples for training curves and 400 for testing curves. End of explanation SVC_classifier_learn_2 = svm.SVC(C=2, cache_size=800, class_weight=None, coef0=0.0, decision_function_shape=None, degree=3, gamma=0.01, kernel='rbf', max_iter=-1, probability=False, random_state=1, shrinking=True, tol=0.001, verbose=False) title = "Learning Curves (SVC)" # Learning curves with 50 iterations to get smoother mean test and train # score curves; each time we hold 15% of the data randomly as a validation set. # This is equivalent to leaving about 1 well out, on average (3232 minus ~2800 samples) cv = ShuffleSplit(n_splits=50, test_size=0.15, random_state=1) plot_learning_curve(SVC_classifier_learn_2, title, X, y, cv=cv, ylim=(0.45, 0.75), n_jobs=4) plt.show() Explanation: Observations Neither training nor cross-validation scores are very high. The scores start to converge at just about the number of samples (on average) we intend to use for training of our final classifier with leave-one-well-out well cross-validation. But there's still a bit of a gap, which may indicate slight over-fitting (variance a bit high). Since we cannot address the overfitting by increasing the number of samples (without sacrificing the leave-one-well-out strategy), we can increase regularization a bit by slightly decreasing the parameter C. End of explanation from sklearn import svm SVC_classifier_conf = svm.SVC(C=2, cache_size=800, class_weight=None, coef0=0.0, decision_function_shape=None, degree=3, gamma=0.01, kernel='rbf', max_iter=-1, probability=False, random_state=1, shrinking=True, tol=0.001, verbose=False) svc_pred = SVC_classifier_conf.fit(X,y) svc_pred = SVC_classifier_conf.predict(X) from sklearn.metrics import confusion_matrix from classification_utilities import display_cm, display_adj_cm conf = confusion_matrix(svc_pred, y) display_cm(conf, facies_labels, display_metrics=True, hide_zeros=True) Explanation: Confusion matrix Let's see how we do with predicting the actual facies, by looking at a confusion matrix. We do this by keeping the parameters from the previous section, but creating a brand new classifier. So when we fit the data, it won't have seen it before. 
End of explanation from sklearn import svm SVC_classifier_LOWO = svm.SVC(C=2, cache_size=800, class_weight=None, coef0=0.0, decision_function_shape=None, degree=3, gamma=0.01, kernel='rbf', max_iter=-1, probability=False, random_state=1, shrinking=True, tol=0.001, verbose=False) training_data = pd.read_csv('../training_data.csv') X = training_data.drop(['Formation', 'Well Name', 'Facies'], axis=1).values scaler = preprocessing.StandardScaler().fit(X) X = scaler.transform(X) y = training_data['Facies'].values wells = training_data["Well Name"].values logo = LeaveOneGroupOut() f1_SVC = [] for train, test in logo.split(X, y, groups=wells): well_name = wells[test[0]] SVC_classifier_LOWO.fit(X[train], y[train]) pred = SVC_classifier_LOWO.predict(X[test]) sc = f1_score(y[test], pred, labels = np.arange(10), average = 'micro') print("{:>20s} {:.3f}".format(well_name, sc)) f1_SVC.append(sc) print "-Average leave-one-well-out F1 Score: %6f" % (sum(f1_SVC)/(1.0*(len(f1_SVC)))) Explanation: Final classifier We now train our final classifier with leave-one-well-out validation. Again, we keep the parameters from the previous section, but creating a brand new classifier. So when we fit the data, it won't have seen it before. End of explanation from sklearn import svm SVC_classifier_LOWO_C5 = svm.SVC(C=5, cache_size=800, class_weight=None, coef0=0.0, decision_function_shape=None, degree=3, gamma=0.01, kernel='rbf', max_iter=-1, probability=False, random_state=1, shrinking=True, tol=0.001, verbose=False) training_data = pd.read_csv('../training_data.csv') X = training_data.drop(['Formation', 'Well Name', 'Facies'], axis=1).values scaler = preprocessing.StandardScaler().fit(X) X = scaler.transform(X) y = training_data['Facies'].values wells = training_data["Well Name"].values logo = LeaveOneGroupOut() f1_SVC = [] for train, test in logo.split(X, y, groups=wells): well_name = wells[test[0]] SVC_classifier_LOWO_C5.fit(X[train], y[train]) pred = SVC_classifier_LOWO_C5.predict(X[test]) sc = f1_score(y[test], pred, labels = np.arange(10), average = 'micro') print("{:>20s} {:.3f}".format(well_name, sc)) f1_SVC.append(sc) print "-Average leave-one-well-out F1 Score: %6f" % (sum(f1_SVC)/(1.0*(len(f1_SVC)))) Explanation: NB: the final classifier above resulted in a validated F1 score of 0.536 with the blind facies in the STUART and CRAWFORD wells. This compares favourably with the previous SVM implementations. However, had we used a parameter C equal to 5: End of explanation
5,907
Given the following text description, write Python code to implement the functionality described below step by step Description: Diversified economy Matrix $X$ - production needs, where $x_{ij}$ is how much of $i$-th product is needed to make $j$-th product Step1: Vector $y$ - consumer needs, where $y_i$ shows how much of $i$-th product people buy(non-production needs) Step2: Total production can be calculated as $$ x_i = \sum_{j=1}^n x_{ij} + y_i $$ Step3: Leontiev model We assume that cost is proportional to production $$ x_{ij}=a_{ij}x_j \implies a_{ij}=\frac{x_{ij}}{x_j} $$ Step4: When our consumer needs changed Step5: We can approximate production changes needed to satisfy our customers $$ x' = Ax' + y' $$ $$ (A - E)x' = -y' $$ Step6: Part 2 Given matrix $A$ Step7: We can find coefficients of charactiristic polynomial using numpy.poly Step8: To compute eigenvalues and eigenvectors we can use scipy.linalg.eig Step9: Frobenius number is by definition the largest of eigenvalues Step10: Because Frobenius number is less than 1 we can say that technology matrix is productive. Also left and right frobenius vectors are Step11: Full costs matrix Can be computed as Step12: We can also approximate matrix $B$ with limit $$ \lim_{n \to \infty} (E + \sum_{i=1}^n A^n) = \lim_{n \to \infty} E + A + A^2 + ... + A^n = B $$ Step13: Prices (by Leontiev) Additional costs vector $s$ equals $$ p = pA + s $$ $$ p(E - A) = s $$ $$ p = s(E - A) ^{-1} $$ $$ p = sB $$
Python Code: X = np.array([[500, 300], [150, 200]]) Explanation: Diversified economy Matrix $X$ - production needs, where $x_{ij}$ is how much of $i$-th product is needed to make $j$-th product End of explanation y = np.array([900, 500]) Explanation: Vector $y$ - consumer needs, where $y_i$ shows how much of $i$-th product people buy(non-production needs) End of explanation x = np.sum(X, axis=1) + y print(x) Explanation: Total production can be calculated as $$ x_i = \sum_{j=1}^n x_{ij} + y_i $$ End of explanation A = X / x print(A) Explanation: Leontiev model We assume that cost is proportional to production $$ x_{ij}=a_{ij}x_j \implies a_{ij}=\frac{x_{ij}}{x_j} $$ End of explanation y1 = np.array([1100, 800]) Explanation: When our consumer needs changed End of explanation x1 = np.linalg.solve(A - np.eye(A.shape[0]), -y1) print(x1) Explanation: We can approximate production changes needed to satisfy our customers $$ x' = Ax' + y' $$ $$ (A - E)x' = -y' $$ End of explanation A = np.array([[0.5, 0.1, 0.5], [0, 0.3, 0.1], [0.2, 0.3, 0.1]]) Explanation: Part 2 Given matrix $A$ End of explanation coefs = np.poly(A) print(coefs) Explanation: We can find coefficients of charactiristic polynomial using numpy.poly End of explanation vals, left, right = linalg.eig(A, left=True, right=True) Explanation: To compute eigenvalues and eigenvectors we can use scipy.linalg.eig End of explanation print(np.max(vals).real) Explanation: Frobenius number is by definition the largest of eigenvalues End of explanation print(left[:, np.argmax(vals)]) print(right[:, np.argmax(vals)]) Explanation: Because Frobenius number is less than 1 we can say that technology matrix is productive. Also left and right frobenius vectors are End of explanation B = np.linalg.inv(np.eye(A.shape[0]) - A) print(B) Explanation: Full costs matrix Can be computed as End of explanation B1 = P = np.eye(A.shape[0]) for i in range(100): P = P @ A B1 = B1 + P if np.max(np.abs(B - B1)) < 1e-2: print('It took {} steps to converge.'.format(i)) break print(B1) Explanation: We can also approximate matrix $B$ with limit $$ \lim_{n \to \infty} (E + \sum_{i=1}^n A^n) = \lim_{n \to \infty} E + A + A^2 + ... + A^n = B $$ End of explanation s = np.array([0.2, 0.3, 0.4]) p = s @ B print(p) Explanation: Prices (by Leontiev) Additional costs vector $s$ equals $$ p = pA + s $$ $$ p(E - A) = s $$ $$ p = s(E - A) ^{-1} $$ $$ p = sB $$ End of explanation
5,908
Given the following text description, write Python code to implement the functionality described below step by step Description: Facies classification using Machine Learning- Random Forest Contest entry by Priyanka Raghavan and Steve Hall This notebook demonstrates how to train a machine learning algorithm to predict facies from well log data. The dataset we will use comes from a class excercise from The University of Kansas on Neural Networks and Fuzzy Systems. This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007). The dataset we will use is log data from nine wells that have been labeled with a facies type based on oberservation of core. We will use this log data to train a Logistical regression classifier to classify facies types. We will use simple logistics regression to classify wells scikit-learn. First we will explore the dataset. We will load the training data from 9 wells, and take a look at what we have to work with. We will plot the data from a couple wells, and create cross plots to look at the variation within the data. Next we will condition the data set. We will remove the entries that have incomplete data. The data will be scaled to have zero mean and unit variance. We will also split the data into training and test sets. We will then be ready to build the classifier. Finally, once we have a built and tuned the classifier, we can apply the trained model to classify facies in wells which do not already have labels. We will apply the classifier to two wells, but in principle you could apply the classifier to any number of wells that had the same log data. Exploring the dataset First, we will examine the data set we will use to train the classifier. The training data is contained in the file facies_vectors.csv. The dataset consists of 5 wireline log measurements, two indicator variables and a facies label at half foot intervals. In machine learning terminology, each log measurement is a feature vector that maps a set of 'features' (the log measurements) to a class (the facies type). We will use the pandas library to load the data into a dataframe, which provides a convenient data structure to work with well log data. Step1: This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate. The seven predictor variables are Step2: This is a quick view of the statistical distribution of the input variables. Looking at the count values, there are 3232 feature vectors in the training set. Remove a single well to use as a blind test later. These are the names of the 10 training wells in the Council Grove reservoir. 
Data has been recruited into pseudo-well 'Recruit F9' to better represent facies 9, the Phylloid-algal bafflestone. Before we plot the well data, let's define a color map so the facies are represented by consistent color in all the plots in this tutorial. We also create the abbreviated facies labels, and add those to the facies_vectors dataframe. Step3: Let's take a look at the data from individual wells in a more familiar log plot form. We will create plots for the five well log variables, as well as a log for facies labels. The plots are based on the those described in Alessandro Amato del Monte's excellent tutorial. Step4: Placing the log plotting code in a function will make it easy to plot the logs from multiples wells, and can be reused later to view the results when we apply the facies classification model to other wells. The function was written to take a list of colors and facies labels as parameters. We then show log plots for wells SHRIMPLIN. Step5: In addition to individual wells, we can look at how the various facies are represented by the entire training set. Let's plot a histogram of the number of training examples for each facies class. Step6: This shows the distribution of examples by facies for the examples in the training set. Dolomite (facies 7) has the fewest with 81 examples. Depending on the performance of the classifier we are going to train, we may consider getting more examples of these facies. Crossplots are a familiar tool in the geosciences to visualize how two properties vary with rock type. This dataset contains 5 log variables, and scatter matrix can help to quickly visualize the variation between the all the variables in the dataset. We can employ the very useful Seaborn library to quickly create a nice looking scatter matrix. Each pane in the plot shows the relationship between two of the variables on the x and y axis, with each point is colored according to its facies. The same colormap is used to represent the 9 facies. Conditioning the data set Now we extract just the feature variables we need to perform the classification. The predictor variables are the five wireline values and two geologic constraining variables. We also get a vector of the facies labels that correspond to each feature vector. Step7: Scikit includes a preprocessing module that can 'standardize' the data (giving each variable zero mean and unit variance, also called whitening). Many machine learning algorithms assume features will be standard normally distributed data (ie Step8: Scikit also includes a handy function to randomly split the training data into training and test sets. The test set contains a small subset of feature vectors that are not used to train the network. Because we know the true facies labels for these examples, we can compare the results of the classifier to the actual facies and determine the accuracy of the model. Let's use 20% of the data for the test set. Step9: Training the classifier using Random forest Now we use the cleaned and conditioned training set to create a facies classifier. Lets try random forest Step10: Now we can train the classifier using the training set we created above. Step11: Now that the model has been trained on our data, we can use it to predict the facies of the feature vectors in the test set. Step12: We need some metrics to evaluate how good our classifier is doing. A confusion matrix is a table that can be used to describe the performance of a classification model. 
Scikit-learn allows us to easily create a confusion matrix by supplying the actual and predicted facies labels. The confusion matrix is simply a 2D array. The entries of confusion matrix C[i][j] are equal to the number of observations predicted to have facies j, but are known to have facies i. To simplify reading the confusion matrix, a function has been written to display the matrix along with facies labels and various error metrics. See the file classification_utilities.py in this repo for the display_cm() function. Step13: The rows of the confusion matrix correspond to the actual facies labels. The columns correspond to the labels assigned by the classifier. For example, consider the first row. For the feature vectors in the test set that actually have label SS, 18 were correctly indentified as SS, 5 were classified as CSiS and 1 was classified as FSiS. The entries along the diagonal are the facies that have been correctly classified. Below we define two functions that will give an overall value for how the algorithm is performing. The accuracy is defined as the number of correct classifications divided by the total number of classifications. Step14: As noted above, the boundaries between the facies classes are not all sharp, and some of them blend into one another. The error within these 'adjacent facies' can also be calculated. We define an array to represent the facies adjacent to each other. For facies label i, adjacent_facies[i] is an array of the adjacent facies labels. Step15: Applying the classification model to the blind data We held a well back from the training, and stored it in a dataframe called blind Step16: The label vector is just the Facies column Step17: We can form the feature matrix by dropping some of the columns and making a new dataframe Step18: Now we can transform this with the scaler we made before Step19: Now it's a simple matter of making a prediction and storing it back in the dataframe Step20: Let's see how we did with the confusion matrix Step21: The results are 0.43 accuracy on facies classification of blind data and 0.87 adjacent facies classification. Step22: ...but does remarkably well on the adjacent facies predictions. Step23: Applying the classification model to new data Now that we have a trained facies classification model we can use it to identify facies in wells that do not have core data. In this case, we will apply the classifier to two wells, but we could use it on any number of wells for which we have the same set of well logs for input. This dataset is similar to the training data except it does not have facies labels. It is loaded into a dataframe called test_data. Step24: The data needs to be scaled using the same constants we used for the training data. Step25: Finally we predict facies labels for the unknown data, and store the results in a Facies column of the test_data dataframe. Step26: We can use the well log plot to view the classification results along with the well logs. Step27: Finally we can write out a csv file with the well data along with the facies classification results.
Python Code: %matplotlib inline import pandas as pd import numpy as np import matplotlib as mpl import matplotlib.pyplot as plt import matplotlib.colors as colors from mpl_toolkits.axes_grid1 import make_axes_locatable from sklearn.ensemble import RandomForestClassifier from pandas import set_option set_option("display.max_rows", 10) pd.options.mode.chained_assignment = None filename = '../facies_vectors.csv' training_data = pd.read_csv(filename) training_data Explanation: Facies classification using Machine Learning- Random Forest Contest entry by Priyanka Raghavan and Steve Hall This notebook demonstrates how to train a machine learning algorithm to predict facies from well log data. The dataset we will use comes from a class excercise from The University of Kansas on Neural Networks and Fuzzy Systems. This exercise is based on a consortium project to use machine learning techniques to create a reservoir model of the largest gas fields in North America, the Hugoton and Panoma Fields. For more info on the origin of the data, see Bohling and Dubois (2003) and Dubois et al. (2007). The dataset we will use is log data from nine wells that have been labeled with a facies type based on oberservation of core. We will use this log data to train a Logistical regression classifier to classify facies types. We will use simple logistics regression to classify wells scikit-learn. First we will explore the dataset. We will load the training data from 9 wells, and take a look at what we have to work with. We will plot the data from a couple wells, and create cross plots to look at the variation within the data. Next we will condition the data set. We will remove the entries that have incomplete data. The data will be scaled to have zero mean and unit variance. We will also split the data into training and test sets. We will then be ready to build the classifier. Finally, once we have a built and tuned the classifier, we can apply the trained model to classify facies in wells which do not already have labels. We will apply the classifier to two wells, but in principle you could apply the classifier to any number of wells that had the same log data. Exploring the dataset First, we will examine the data set we will use to train the classifier. The training data is contained in the file facies_vectors.csv. The dataset consists of 5 wireline log measurements, two indicator variables and a facies label at half foot intervals. In machine learning terminology, each log measurement is a feature vector that maps a set of 'features' (the log measurements) to a class (the facies type). We will use the pandas library to load the data into a dataframe, which provides a convenient data structure to work with well log data. End of explanation training_data['Well Name'] = training_data['Well Name'].astype('category') training_data['Formation'] = training_data['Formation'].astype('category') training_data['Well Name'].unique() training_data.describe() Explanation: This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. 
Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate. The seven predictor variables are: * Five wire line log curves include gamma ray (GR), resistivity logging (ILD_log10), photoelectric effect (PE), neutron-density porosity difference and average neutron-density porosity (DeltaPHI and PHIND). Note, some wells do not have PE. * Two geologic constraining variables: nonmarine-marine indicator (NM_M) and relative position (RELPOS) The nine discrete facies (classes of rocks) are: 1. Nonmarine sandstone 2. Nonmarine coarse siltstone 3. Nonmarine fine siltstone 4. Marine siltstone and shale 5. Mudstone (limestone) 6. Wackestone (limestone) 7. Dolomite 8. Packstone-grainstone (limestone) 9. Phylloid-algal bafflestone (limestone) These facies aren't discrete, and gradually blend into one another. Some have neighboring facies that are rather close. Mislabeling within these neighboring facies can be expected to occur. The following table lists the facies, their abbreviated labels and their approximate neighbors. Facies |Label| Adjacent Facies :---: | :---: |:--: 1 |SS| 2 2 |CSiS| 1,3 3 |FSiS| 2 4 |SiSh| 5 5 |MS| 4,6 6 |WS| 5,7 7 |D| 6,8 8 |PS| 6,7,9 9 |BS| 7,8 Let's clean up this dataset. The 'Well Name' and 'Formation' columns can be turned into a categorical data type. End of explanation # 1=sandstone 2=c_siltstone 3=f_siltstone # 4=marine_silt_shale 5=mudstone 6=wackestone 7=dolomite # 8=packstone 9=bafflestone facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00', '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D'] facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS', 'WS', 'D','PS', 'BS'] #facies_color_map is a dictionary that maps facies labels #to their respective colors facies_color_map = {} for ind, label in enumerate(facies_labels): facies_color_map[label] = facies_colors[ind] def label_facies(row, labels): return labels[ row['Facies'] -1] #training_data.loc[:,'FaciesLabels'] = training_data.apply(lambda row: label_facies(row, facies_labels), axis=1) faciesVals = training_data['Facies'].values well = training_data['Well Name'].values mpl.rcParams['figure.figsize'] = (20.0, 10.0) for w_idx, w in enumerate(np.unique(well)): ax = plt.subplot(3, 4, w_idx+1) hist = np.histogram(faciesVals[well == w], bins=np.arange(len(facies_labels)+1)+.5) plt.bar(np.arange(len(hist[0])), hist[0], color=facies_colors, align='center') ax.set_xticks(np.arange(len(hist[0]))) ax.set_xticklabels(facies_labels) ax.set_title(w) blind = training_data[training_data['Well Name'] == 'NEWBY'] training_data = training_data[training_data['Well Name'] != 'NEWBY'] training_data.loc[:,'FaciesLabels'] = training_data.apply(lambda row: label_facies(row, facies_labels), axis=1) PE_mask = training_data['PE'].notnull().values training_data = training_data[PE_mask] Explanation: This is a quick view of the statistical distribution of the input variables. Looking at the count values, there are 3232 feature vectors in the training set. Remove a single well to use as a blind test later. These are the names of the 10 training wells in the Council Grove reservoir. Data has been recruited into pseudo-well 'Recruit F9' to better represent facies 9, the Phylloid-algal bafflestone. 
Before we plot the well data, let's define a color map so the facies are represented by consistent color in all the plots in this tutorial. We also create the abbreviated facies labels, and add those to the facies_vectors dataframe. End of explanation def make_facies_log_plot(logs, facies_colors): #make sure logs are sorted by depth logs = logs.sort_values(by='Depth') cmap_facies = colors.ListedColormap( facies_colors[0:len(facies_colors)], 'indexed') ztop=logs.Depth.min(); zbot=logs.Depth.max() cluster=np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1) f, ax = plt.subplots(nrows=1, ncols=6, figsize=(8, 12)) ax[0].plot(logs.GR, logs.Depth, '-g') ax[1].plot(logs.ILD_log10, logs.Depth, '-') ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5') ax[3].plot(logs.PHIND, logs.Depth, '-', color='r') ax[4].plot(logs.PE, logs.Depth, '-', color='black') im=ax[5].imshow(cluster, interpolation='none', aspect='auto', cmap=cmap_facies,vmin=1,vmax=9) divider = make_axes_locatable(ax[5]) cax = divider.append_axes("right", size="20%", pad=0.05) cbar=plt.colorbar(im, cax=cax) cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS', 'SiSh', ' MS ', ' WS ', ' D ', ' PS ', ' BS '])) cbar.set_ticks(range(0,1)); cbar.set_ticklabels('') for i in range(len(ax)-1): ax[i].set_ylim(ztop,zbot) ax[i].invert_yaxis() ax[i].grid() ax[i].locator_params(axis='x', nbins=3) ax[0].set_xlabel("GR") ax[0].set_xlim(logs.GR.min(),logs.GR.max()) ax[1].set_xlabel("ILD_log10") ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max()) ax[2].set_xlabel("DeltaPHI") ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max()) ax[3].set_xlabel("PHIND") ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max()) ax[4].set_xlabel("PE") ax[4].set_xlim(logs.PE.min(),logs.PE.max()) ax[5].set_xlabel('Facies') ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([]) ax[4].set_yticklabels([]); ax[5].set_yticklabels([]) ax[5].set_xticklabels([]) f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94) Explanation: Let's take a look at the data from individual wells in a more familiar log plot form. We will create plots for the five well log variables, as well as a log for facies labels. The plots are based on the those described in Alessandro Amato del Monte's excellent tutorial. End of explanation make_facies_log_plot( training_data[training_data['Well Name'] == 'SHRIMPLIN'], facies_colors) Explanation: Placing the log plotting code in a function will make it easy to plot the logs from multiples wells, and can be reused later to view the results when we apply the facies classification model to other wells. The function was written to take a list of colors and facies labels as parameters. We then show log plots for wells SHRIMPLIN. End of explanation #count the number of unique entries for each facies, sort them by #facies number (instead of by number of entries) facies_counts = training_data['Facies'].value_counts().sort_index() #use facies labels to index each count facies_counts.index = facies_labels facies_counts.plot(kind='bar',color=facies_colors, title='Distribution of Training Data by Facies') facies_counts Explanation: In addition to individual wells, we can look at how the various facies are represented by the entire training set. Let's plot a histogram of the number of training examples for each facies class. 
End of explanation correct_facies_labels = training_data['Facies'].values feature_vectors = training_data.drop(['Formation', 'Well Name', 'Depth','Facies','FaciesLabels'], axis=1) feature_vectors.describe() Explanation: This shows the distribution of examples by facies for the examples in the training set. Dolomite (facies 7) has the fewest with 81 examples. Depending on the performance of the classifier we are going to train, we may consider getting more examples of these facies. Crossplots are a familiar tool in the geosciences to visualize how two properties vary with rock type. This dataset contains 5 log variables, and scatter matrix can help to quickly visualize the variation between the all the variables in the dataset. We can employ the very useful Seaborn library to quickly create a nice looking scatter matrix. Each pane in the plot shows the relationship between two of the variables on the x and y axis, with each point is colored according to its facies. The same colormap is used to represent the 9 facies. Conditioning the data set Now we extract just the feature variables we need to perform the classification. The predictor variables are the five wireline values and two geologic constraining variables. We also get a vector of the facies labels that correspond to each feature vector. End of explanation from sklearn import preprocessing scaler = preprocessing.StandardScaler().fit(feature_vectors) scaled_features = scaler.transform(feature_vectors) feature_vectors Explanation: Scikit includes a preprocessing module that can 'standardize' the data (giving each variable zero mean and unit variance, also called whitening). Many machine learning algorithms assume features will be standard normally distributed data (ie: Gaussian with zero mean and unit variance). The factors used to standardize the training set must be applied to any subsequent feature set that will be input to the classifier. The StandardScalar class can be fit to the training set, and later used to standardize any training data. End of explanation from sklearn.cross_validation import train_test_split X_train, X_test, y_train, y_test = train_test_split( scaled_features, correct_facies_labels, test_size=0.1, random_state=42) Explanation: Scikit also includes a handy function to randomly split the training data into training and test sets. The test set contains a small subset of feature vectors that are not used to train the network. Because we know the true facies labels for these examples, we can compare the results of the classifier to the actual facies and determine the accuracy of the model. Let's use 20% of the data for the test set. End of explanation clf = RandomForestClassifier(n_estimators=150, min_samples_leaf= 50,class_weight="balanced",oob_score=True,random_state=50 ) Explanation: Training the classifier using Random forest Now we use the cleaned and conditioned training set to create a facies classifier. Lets try random forest End of explanation clf.fit(X_train,y_train) Explanation: Now we can train the classifier using the training set we created above. End of explanation predicted_labels = clf.predict(X_test) Explanation: Now that the model has been trained on our data, we can use it to predict the facies of the feature vectors in the test set. 
End of explanation from sklearn.metrics import confusion_matrix from classification_utilities import display_cm, display_adj_cm conf = confusion_matrix(y_test, predicted_labels) display_cm(conf, facies_labels, hide_zeros=True) Explanation: We need some metrics to evaluate how good our classifier is doing. A confusion matrix is a table that can be used to describe the performance of a classification model. Scikit-learn allows us to easily create a confusion matrix by supplying the actual and predicted facies labels. The confusion matrix is simply a 2D array. The entries of confusion matrix C[i][j] are equal to the number of observations predicted to have facies j, but are known to have facies i. To simplify reading the confusion matrix, a function has been written to display the matrix along with facies labels and various error metrics. See the file classification_utilities.py in this repo for the display_cm() function. End of explanation def accuracy(conf): total_correct = 0. nb_classes = conf.shape[0] for i in np.arange(0,nb_classes): total_correct += conf[i][i] acc = total_correct/sum(sum(conf)) return acc Explanation: The rows of the confusion matrix correspond to the actual facies labels. The columns correspond to the labels assigned by the classifier. For example, consider the first row. For the feature vectors in the test set that actually have label SS, 18 were correctly indentified as SS, 5 were classified as CSiS and 1 was classified as FSiS. The entries along the diagonal are the facies that have been correctly classified. Below we define two functions that will give an overall value for how the algorithm is performing. The accuracy is defined as the number of correct classifications divided by the total number of classifications. End of explanation adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]]) def accuracy_adjacent(conf, adjacent_facies): nb_classes = conf.shape[0] total_correct = 0. for i in np.arange(0,nb_classes): total_correct += conf[i][i] for j in adjacent_facies[i]: total_correct += conf[i][j] return total_correct / sum(sum(conf)) print('Facies classification accuracy = %f' % accuracy(conf)) print('Adjacent facies classification accuracy = %f' % accuracy_adjacent(conf, adjacent_facies)) Explanation: As noted above, the boundaries between the facies classes are not all sharp, and some of them blend into one another. The error within these 'adjacent facies' can also be calculated. We define an array to represent the facies adjacent to each other. For facies label i, adjacent_facies[i] is an array of the adjacent facies labels. 
End of explanation blind Explanation: Applying the classification model to the blind data We held a well back from the training, and stored it in a dataframe called blind: End of explanation y_blind = blind['Facies'].values Explanation: The label vector is just the Facies column: End of explanation well_features = blind.drop(['Facies', 'Formation', 'Well Name', 'Depth'], axis=1) Explanation: We can form the feature matrix by dropping some of the columns and making a new dataframe: End of explanation X_blind = scaler.transform(well_features) Explanation: Now we can transform this with the scaler we made before: End of explanation y_pred = clf.predict(X_blind) blind['Prediction'] = y_pred Explanation: Now it's a simple matter of making a prediction and storing it back in the dataframe: End of explanation cv_conf = confusion_matrix(y_blind, y_pred) print('Optimized facies classification accuracy = %.2f' % accuracy(cv_conf)) print('Optimized adjacent facies classification accuracy = %.2f' % accuracy_adjacent(cv_conf, adjacent_facies)) Explanation: Let's see how we did with the confusion matrix: End of explanation display_cm(cv_conf, facies_labels, display_metrics=True, hide_zeros=True) Explanation: The results are 0.43 accuracy on facies classification of blind data and 0.87 adjacent facies classification. End of explanation display_adj_cm(cv_conf, facies_labels, adjacent_facies, display_metrics=True, hide_zeros=True) def compare_facies_plot(logs, compadre, facies_colors): #make sure logs are sorted by depth logs = logs.sort_values(by='Depth') cmap_facies = colors.ListedColormap( facies_colors[0:len(facies_colors)], 'indexed') ztop=logs.Depth.min(); zbot=logs.Depth.max() cluster1 = np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1) cluster2 = np.repeat(np.expand_dims(logs[compadre].values,1), 100, 1) f, ax = plt.subplots(nrows=1, ncols=7, figsize=(9, 12)) ax[0].plot(logs.GR, logs.Depth, '-g') ax[1].plot(logs.ILD_log10, logs.Depth, '-') ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5') ax[3].plot(logs.PHIND, logs.Depth, '-', color='r') ax[4].plot(logs.PE, logs.Depth, '-', color='black') im1 = ax[5].imshow(cluster1, interpolation='none', aspect='auto', cmap=cmap_facies,vmin=1,vmax=9) im2 = ax[6].imshow(cluster2, interpolation='none', aspect='auto', cmap=cmap_facies,vmin=1,vmax=9) divider = make_axes_locatable(ax[6]) cax = divider.append_axes("right", size="20%", pad=0.05) cbar=plt.colorbar(im2, cax=cax) cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS', 'SiSh', ' MS ', ' WS ', ' D ', ' PS ', ' BS '])) cbar.set_ticks(range(0,1)); cbar.set_ticklabels('') for i in range(len(ax)-2): ax[i].set_ylim(ztop,zbot) ax[i].invert_yaxis() ax[i].grid() ax[i].locator_params(axis='x', nbins=3) ax[0].set_xlabel("GR") ax[0].set_xlim(logs.GR.min(),logs.GR.max()) ax[1].set_xlabel("ILD_log10") ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max()) ax[2].set_xlabel("DeltaPHI") ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max()) ax[3].set_xlabel("PHIND") ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max()) ax[4].set_xlabel("PE") ax[4].set_xlim(logs.PE.min(),logs.PE.max()) ax[5].set_xlabel('Facies') ax[6].set_xlabel(compadre) ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([]) ax[4].set_yticklabels([]); ax[5].set_yticklabels([]) ax[5].set_xticklabels([]) ax[6].set_xticklabels([]) f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94) compare_facies_plot(blind, 'Prediction', facies_colors) Explanation: ...but does remarkably well on the adjacent facies 
predictions. End of explanation well_data = pd.read_csv('../validation_data_nofacies.csv') well_data['Well Name'] = well_data['Well Name'].astype('category') well_features = well_data.drop(['Formation', 'Well Name', 'Depth'], axis=1) Explanation: Applying the classification model to new data Now that we have a trained facies classification model we can use it to identify facies in wells that do not have core data. In this case, we will apply the classifier to two wells, but we could use it on any number of wells for which we have the same set of well logs for input. This dataset is similar to the training data except it does not have facies labels. It is loaded into a dataframe called test_data. End of explanation X_unknown = scaler.transform(well_features) Explanation: The data needs to be scaled using the same constants we used for the training data. End of explanation #predict facies of unclassified data y_unknown = clf.predict(X_unknown) well_data['Facies'] = y_unknown well_data well_data['Well Name'].unique() Explanation: Finally we predict facies labels for the unknown data, and store the results in a Facies column of the test_data dataframe. End of explanation make_facies_log_plot( well_data[well_data['Well Name'] == 'STUART'], facies_colors=facies_colors) make_facies_log_plot( well_data[well_data['Well Name'] == 'CRAWFORD'], facies_colors=facies_colors) Explanation: We can use the well log plot to view the classification results along with the well logs. End of explanation well_data.to_csv('SHPR_FirstAttempt_RandomForest_facies.csv') Explanation: Finally we can write out a csv file with the well data along with the facies classification results. End of explanation
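As a possible complement to the custom accuracy and adjacent-facies metrics used earlier in this entry, scikit-learn's classification_report summarizes per-facies precision, recall and F1 in one call. This is a sketch, not part of the original contest entry; it assumes y_blind and y_pred from the blind-well section are still in scope.

# Per-facies precision/recall/F1 for the blind well (facies labels 1-9 map onto facies_labels).
from sklearn.metrics import classification_report
print(classification_report(y_blind, y_pred, labels=list(range(1, 10)), target_names=facies_labels))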
5,909
Given the following text description, write Python code to implement the functionality described below step by step Description: Interact Exercise 4 Imports Step2: Line with Gaussian noise Write a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\sigma^2)$ Step5: Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function Step6: Use interact to explore the plot_random_line function using
Python Code: %matplotlib inline import matplotlib.pyplot as plt import numpy as np from IPython.html.widgets import interact, interactive, fixed from IPython.display import display Explanation: Interact Exercise 4 Imports End of explanation def random_line(m, x, b, sigma, size=10): Create a line y = m*x + b + N(0,sigma**2) between x=[-1.0,1.0] Parameters ---------- m : float The slope of the line. b : float The y-intercept of the line. sigma : float The standard deviation of the y direction normal distribution noise. size : int The number of points to create for the line. Returns ------- x : array of floats The array of x values for the line with `size` points. y : array of floats The array of y values for the lines with `size` points. y = m * x + b + N(0, sigma**2) return y m = 0.0; b = 1.0; sigma=0.0; size=3 x, y = random_line(m, b, sigma, size) assert len(x)==len(y)==size assert list(x)==[-1.0,0.0,1.0] assert list(y)==[1.0,1.0,1.0] sigma = 1.0 m = 0.0; b = 0.0 size = 500 x, y = random_line(m, b, sigma, size) assert np.allclose(np.mean(y-m*x-b), 0.0, rtol=0.1, atol=0.1) assert np.allclose(np.std(y-m*x-b), sigma, rtol=0.1, atol=0.1) Explanation: Line with Gaussian noise Write a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\sigma^2)$: $$ y = m x + b + N(0,\sigma^2) $$ Be careful about the sigma=0.0 case. End of explanation def ticks_out(ax): Move the ticks to the outside of the box. ax.get_xaxis().set_tick_params(direction='out', width=1, which='both') ax.get_yaxis().set_tick_params(direction='out', width=1, which='both') def plot_random_line(m, b, sigma, size=10, color='red'): Plot a random line with slope m, intercept b and size points. # YOUR CODE HERE raise NotImplementedError() plot_random_line(5.0, -1.0, 2.0, 50) assert True # use this cell to grade the plot_random_line function Explanation: Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function: Make the marker color settable through a color keyword argument with a default of red. Display the range $x=[-1.1,1.1]$ and $y=[-10.0,10.0]$. Customize your plot to make it effective and beautiful. End of explanation # YOUR CODE HERE raise NotImplementedError() #### assert True # use this cell to grade the plot_random_line interact Explanation: Use interact to explore the plot_random_line function using: m: a float valued slider from -10.0 to 10.0 with steps of 0.1. b: a float valued slider from -5.0 to 5.0 with steps of 0.1. sigma: a float valued slider from 0.0 to 5.0 with steps of 0.01. size: an int valued slider from 10 to 100 with steps of 10. color: a dropdown with options for red, green and blue. End of explanation
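The exercise cells above are left partly unsolved: the random_line body is pseudocode (N(0, sigma**2) is never defined and x is never created), the printed signature carries a stray x parameter that the later calls random_line(m, b, sigma, size) do not pass, and plot_random_line just raises NotImplementedError. Below is one possible solution sketch written to satisfy the asserts in the exercise; it is not the official course answer, and the interact call at the end is only an illustration (widget option syntax varies between IPython/ipywidgets versions).

# One possible solution sketch for the exercise above (not the official answer).
import numpy as np
import matplotlib.pyplot as plt

def random_line(m, b, sigma, size=10):
    """Create a line y = m*x + b + N(0, sigma**2) for x in [-1.0, 1.0]."""
    x = np.linspace(-1.0, 1.0, size)
    if sigma == 0.0:
        y = m * x + b                                   # no noise at all when sigma is zero
    else:
        y = m * x + b + np.random.normal(0.0, sigma, size)
    return x, y

def plot_random_line(m, b, sigma, size=10, color='red'):
    """Plot a random line with slope m, intercept b and `size` noisy points."""
    x, y = random_line(m, b, sigma, size)
    plt.scatter(x, y, color=color)
    plt.xlim(-1.1, 1.1)
    plt.ylim(-10.0, 10.0)

# Illustrative interact call matching the sliders/dropdown described in the exercise:
# interact(plot_random_line, m=(-10.0, 10.0, 0.1), b=(-5.0, 5.0, 0.1),
#          sigma=(0.0, 5.0, 0.01), size=(10, 100, 10), color=('red', 'green', 'blue'))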
5,910
Given the following text description, write Python code to implement the functionality described below step by step Description: Basic setup Step1: The problems When using the any search function to search for two different terms, the results are wrong. Problem 1 Step2: Solving problem 1 This query gets wrong results because the OR query is poorly constructed Step3: Properly structuring the OR clause takes away the problem of having different results for fog OR dense and dense OR fog Option 1 Step4: Option 2 Step5: Option 3 Step6: Solving problem 2 To really get the right results, though, one should not just use any, but rather any/cql.proxinfo. Step7: Or in its simpler form Step8: This does not seem to be affected by whether you mention cql or not (that is a cql specification, if I am not wrong). Step9: The counts are now correct
Python Code: # coding: utf-8 import os from cheshire3.baseObjects import Session from cheshire3.document import StringDocument from cheshire3.internal import cheshire3Root from cheshire3.server import SimpleServer session = Session() session.database = 'db_dickens' serv = SimpleServer(session, os.path.join(cheshire3Root, 'configs', 'serverConfig.xml')) db = serv.get_object(session, session.database) qf = db.get_object(session, 'defaultQueryFactory') resultSetStore = db.get_object(session, 'resultSetStore') idxStore = db.get_object(session, 'indexStore') Explanation: Basic setup End of explanation # This is the query that is currently being used. # The count is the number of chapters query = qf.get_query(session, ((c3.subcorpus-idx all "dickens" and/cql.proxinfo c3.chapter-idx = "fog") or c3.chapter-idx = "dense") ) result_set = db.search(session, query) print len(result_set) # To get a more speficic count one also needs to include the numbers of hits # in the different chapters def count_total(result_set): Helper function to count the total number of hits in the search results count = 0 for result in result_set: count += len(result.proxInfo) return count count_total(result_set) def try_query(query): Another helper function to take a query and return the total number of hits query = qf.get_query(session, query) result_set = db.search(session, query) return count_total(result_set) Explanation: The problems When using the any search function to search for two different terms, the results are wrong. Problem 1: searching for fog OR dense is not the same as dense OR fog. Problem 2: Second, the counts for fog OR dense are off. Currently, there are 150 results for fog OR dense and 221 for dense OR fog, but there should be many more (142 or 144 if one counts compound nouns). End of explanation try_query( ((c3.subcorpus-idx all "dickens" and/cql.proxinfo c3.chapter-idx = "dense") or c3.chapter-idx = "fog") ) Explanation: Solving problem 1 This query gets wrong results because it the OR query is poorly constructed End of explanation try_query( (c3.subcorpus-idx all "dickens" and/cql.proxinfo (c3.chapter-idx = "dense" or c3.chapter-idx = "fog")) ) Explanation: Properly structuring the OR clause takes away the problem of having different results for for OR dense dense OR fog Option 1 End of explanation try_query( (c3.subcorpus-idx all "dickens" and/cql.proxinfo c3.chapter-idx any "dense fog") ) try_query( (c3.subcorpus-idx all "dickens" and/cql.proxinfo c3.chapter-idx any "fog dense") ) Explanation: Option 2 End of explanation try_query( ((c3.subcorpus-idx all "dickens" and/cql.proxinfo c3.chapter-idx = "dense") or (c3.subcorpus-idx all "dickens" and/cql.proxinfo c3.chapter-idx = "fog")) ) Explanation: Option 3: the verbose one End of explanation try_query( (c3.subcorpus-idx all "dickens" and/proxinfo (c3.chapter-idx = "dense" or/proxinfo c3.chapter-idx = "fog")) ) Explanation: Solving problem 2 To really get the right results, though, one should not just use any, but rather any/cql.proxinfo. End of explanation try_query( (c3.subcorpus-idx all "dickens" and/cql.proxinfo c3.chapter-idx any/proxinfo "fog dense") ) Explanation: Or in its simpler form: End of explanation try_query( (c3.subcorpus-idx all "dickens" and/cql.proxinfo c3.chapter-idx any/cql.proxinfo "fog dense") ) Explanation: This does not seem to be affected by whether you mention cql or not (that is a cql specification, if I am not wrong). 
End of explanation dense = try_query((c3.subcorpus-idx all "dickens" and/cql.proxinfo c3.chapter-idx = "dense")) print dense fog = try_query((c3.subcorpus-idx all "dickens" and/cql.proxinfo c3.chapter-idx = "fog")) print fog dense + fog Explanation: The counts are now correct: End of explanation
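The grouping issue above boils down to operator precedence: (dickens AND fog) OR dense matches every chapter containing "dense" whether or not it belongs to the Dickens subcorpus, while dickens AND (fog OR dense) keeps both terms inside the subcorpus. A plain-Python set analogy (illustrative only, not cheshire3 syntax) makes the difference concrete.

# Set analogy for the query-grouping problem above (illustrative only, no cheshire3 objects).
dickens = {1, 2, 3}        # chapters in the Dickens subcorpus
fog     = {2, 9}           # chapters containing "fog"
dense   = {3, 7}           # chapters containing "dense"

badly_grouped = (dickens & fog) | dense    # (dickens AND fog) OR dense
well_grouped  = dickens & (fog | dense)    # dickens AND (fog OR dense)

print(badly_grouped)   # chapters 2, 3, 7 -- chapter 7 leaks in from outside the subcorpus
print(well_grouped)    # chapters 2, 3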
5,911
Given the following text description, write Python code to implement the functionality described below step by step Description: Jiří Polcar &lt;[email protected]&gt; Úvod Motivace Workflow Základní dělení metod strojového učení Redukce dimenzi & feature importance Vyhodnocení modelů Cross validation & Grid search Závěr <h2><center>Motivace</center></h2> Step1: <h2><center>Základní dělení metod strojového učení</center></h2> Classification (snažíme se vstupnímu vektoru přiřadit jednu z kategorií) textu přiřadit rubriku záznamu z akcelerometru typ pohybu (chůze, běh, jízda tramvají, jízda autem, ...) sentimen komentářů na sociální síti (positivni/negativní) spam Clustering (snažíme se vstupní vektory rozdělit do skupin) "blízké" vektory tvoří skupipy klíčová slova pro dané téma Regression (snažime se vstupnímu vektoru přiřadit (spojitou) hodnotu) předpoveď měnových kurzů předpoveď návstěvnosti Dimension reduction (snažime se zredukovat velikost vstupního vektoru) máme moc dat, neupočítáme je chceme vědět, které vstupní hodnoty jsou podstatne snadnější vizualizace <h2><center>Wrokflow</center></h2> <h2><center>Hyperparametry modeů</center></h2> Step2: <h2><center>Kosatce (Iris) Data</center></h2> <img alt="Iris Data Explanation" src="images/iris.png" style="width Step3: Studenti Step4: <h2><center>Vyhodnocení modeů</center></h2> <img src="images/confusion_matrix.png" alt="Confusion Matrix" style="width Step5: Step6: Studenti Step7: <h2><center>Co dále?</center></h2> Datová špína Normalizace (sklearn.preprocessing.Normalizer) Použité metriky (Distance Metric Learning) sklearn.preprocessing.LabelEncoder / sklearn.preprocessing.OneHotEncoder Big Data
Python Code: from sklearn import datasets from sklearn import metrics digits = datasets.load_digits() fig, axes = plt.subplots(5, 10, figsize=(8, 5)) fig.subplots_adjust(hspace=0.1, wspace=0.1) for i, ax in enumerate(axes.flat): ax.imshow(digits.images[i], cmap='binary') ax.text(0.05, 0.05, str(digits.target[i]), transform=ax.transAxes, color='green') ax.set_xticks([]) ax.set_yticks([]) print(digits.images.shape) print(digits.images[0]) plt.rcParams['figure.figsize'] = 4, 4 plt.imshow(digits.images[0]); plt.rcParams['figure.figsize'] = 20, 8 %%time from sklearn.linear_model import LogisticRegression clf = LogisticRegression() clf.fit(digits.data, digits.target) pred = clf.predict(digits.data) sns.heatmap(metrics.confusion_matrix(digits.target, pred), annot=True, fmt='d') plt.ylabel('True label') plt.xlabel('Predicted label'); Explanation: Jiří Polcar &lt;[email protected]&gt; Úvod Motivace Workflow Základní dělení metod strojového učení Redukce dimenzi & feature importance Vyhodnocení modelů Cross validation & Grid search Závěr <h2><center>Motivace</center></h2> End of explanation from sklearn.decomposition import PCA pca_digits = PCA(n_components=2) reduced_data_pca_digits = pca_digits.fit_transform(digits.data) colors = ['black', 'blue', 'purple', 'yellow', 'white', 'red', 'lime', 'cyan', 'orange', 'gray'] for i in range(len(colors)): x = reduced_data_pca_digits[:, 0][digits.target == i] y = reduced_data_pca_digits[:, 1][digits.target == i] plt.scatter(x, y, c=colors[i]) plt.legend(digits.target_names, bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) plt.xlabel('First Principal Component') plt.ylabel('Second Principal Component') plt.show() Explanation: <h2><center>Základní dělení metod strojového učení</center></h2> Classification (snažíme se vstupnímu vektoru přiřadit jednu z kategorií) textu přiřadit rubriku záznamu z akcelerometru typ pohybu (chůze, běh, jízda tramvají, jízda autem, ...) sentimen komentářů na sociální síti (positivni/negativní) spam Clustering (snažíme se vstupní vektory rozdělit do skupin) "blízké" vektory tvoří skupipy klíčová slova pro dané téma Regression (snažime se vstupnímu vektoru přiřadit (spojitou) hodnotu) předpoveď měnových kurzů předpoveď návstěvnosti Dimension reduction (snažime se zredukovat velikost vstupního vektoru) máme moc dat, neupočítáme je chceme vědět, které vstupní hodnoty jsou podstatne snadnější vizualizace <h2><center>Wrokflow</center></h2> <h2><center>Hyperparametry modeů</center></h2> End of explanation from sklearn.datasets import load_iris iris = load_iris() data = {iris.feature_names[it]: iris.data.transpose()[it] for it in range(4)} data.update({'species': [iris.target_names[it] for it in iris.target]}) pd.DataFrame(data).head(4) Explanation: <h2><center>Kosatce (Iris) Data</center></h2> <img alt="Iris Data Explanation" src="images/iris.png" style="width: 180px;"/> End of explanation pca_iris = PCA(n_components=2) reduced_data_pca_iris = pca_iris.fit_transform(iris.data) colors = ['black', 'blue', 'red'] for i in range(len(colors)): x = reduced_data_pca_iris[iris.target == i, 0] y = reduced_data_pca_iris[iris.target == i, 1] plt.scatter(x, y, c=colors[i]) plt.legend(iris.target_names, bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.) 
plt.xlabel('First Principal Component') plt.ylabel('Second Principal Component') plt.show() from sklearn.datasets import fetch_olivetti_faces from numpy.random import RandomState dataset = fetch_olivetti_faces(shuffle=True, random_state=RandomState(0), download_if_missing=True, data_home='.') faces = dataset.data n_samples, n_features = faces.shape image_shape = (64, 64) print n_samples, n_features plt.rcParams['figure.figsize'] = 6, 6 plt.imshow(faces[0].reshape(image_shape)); plt.rcParams['figure.figsize'] = 20, 8 def plot_gallery(images, n_col, n_row): plt.figure(figsize=(2. * n_col, 2.26 * n_row)) for i, comp in enumerate(images): plt.subplot(n_row, n_col, i + 1) vmax = max(comp.max(), -comp.min()) plt.imshow(comp.reshape(image_shape), cmap=plt.cm.gray, interpolation='nearest', vmin=-vmax, vmax=vmax) plt.xticks(()) plt.yticks(()) plt.subplots_adjust(0.01, 0.05, 0.99, 0.93, 0.04, 0.) plot_gallery(faces[:10], 5, 2) from sklearn.decomposition import PCA estimator_faces = PCA(n_components=10) estimator_faces.fit(faces); plt.imshow(estimator_faces.components_[0].reshape(image_shape)); plot_gallery(estimator_faces.components_[:10], 5, 2); from sklearn import tree clf = tree.DecisionTreeClassifier() clf = clf.fit(iris.data, iris.target) from IPython.display import Image import pydotplus dot_data = tree.export_graphviz(clf, out_file=None, feature_names=iris.feature_names, class_names=iris.target_names, filled=True, rounded=True, special_characters=True) graph = pydotplus.graph_from_dot_data(dot_data) Image(graph.create_png(), width=600) Explanation: Studenti: pomocí PCA zredukujte dimenzi u X = iris.data a vykreslete data v redukovaných souřadnicích. End of explanation from sklearn.model_selection import train_test_split from sklearn import svm X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.4, random_state=0) print 'Original data set:', iris.data.shape, iris.target.shape print 'Training part:', X_train.shape, y_train.shape print 'Test part:', X_test.shape, y_test.shape clf = svm.SVC(kernel='linear', C=1).fit(X_train, y_train) print 'Accuracy: {:0.2f}'.format(clf.score(X_test, y_test)) from sklearn.model_selection import cross_val_score clf = svm.SVC(kernel='linear', C=1) scores = cross_val_score(clf, iris.data, iris.target, scoring='f1_macro', cv=5) print scores print "Accuracy: {:0.2f} (+/- {:0.2f})".format(scores.mean(), scores.std() * 2) Explanation: <h2><center>Vyhodnocení modeů</center></h2> <img src="images/confusion_matrix.png" alt="Confusion Matrix" style="width: 350px;"/> <img alt="Type I and II errors" src="images/Type-I-and-II-errors1-625x468.jpg" style="width: 400px;"/> Accuracy: Overall, how often is the classifier correct?<br> (TP+TN)/total = (100+50)/165 = 0.91<br><br> Misclassification/Error Rate: Overall, how often is it wrong?<br> (FP+FN)/total = (10+5)/165 = 0.09<br><br> True Positive Rate/Recall: When it's actually yes, how often does it predict yes?<br> TP/actual yes = 100/105 = 0.95<br><br> False Positive Rate: When it's actually no, how often does it predict yes?<br> FP/actual no = 10/60 = 0.17<br><br> Specificity: When it's actually no, how often does it predict no?<br> TN/actual no = 50/60 = 0.83<br><br> Precision: When it predicts yes, how often is it correct?<br> TP/predicted yes = 100/110 = 0.91<br><br> Prevalence: How often does the yes condition actually occur in our sample?<br> actual yes/total = 105/165 = 0.64<br><br> F1-score: Harmonic mean of precision and recall — multiplying the constant of 2 scales the score to 1 
when both recall and precision are 1: <h2><center>Přeučení/Overfitting</center></h2> <img src="images/overfitting.png" alt="Overfitting" style="width: 350px;"/> End of explanation from sklearn.model_selection import train_test_split from sklearn.model_selection import GridSearchCV from sklearn.metrics import classification_report # To apply an classifier on this data, we need to flatten the image, to # turn the data in a (samples, feature) matrix: n_samples = len(digits.images) X = digits.images.reshape((n_samples, -1)) y = digits.target # Split the dataset in two equal parts X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0) # Set the parameters by cross-validation tuned_parameters = [{'kernel': ['rbf'], 'gamma': [1e-3, 1e-4], 'C': [1, 10, 100, 1000]}, {'kernel': ['linear'], 'C': [1, 10, 100, 1000]}] %%time clf = GridSearchCV(svm.SVC(C=1), tuned_parameters, cv=5, scoring='f1_macro') clf.fit(X_train, y_train); clf.best_params_ means = clf.cv_results_['mean_test_score'] stds = clf.cv_results_['std_test_score'] for mean, std, params in zip(means, stds, clf.cv_results_['params']): print "{:0.3f} (+/-{:0.03f}) for {}".format(mean, std*2, params) Explanation: End of explanation from sklearn.model_selection import train_test_split from sklearn.model_selection import GridSearchCV from sklearn.metrics import classification_report # To apply an classifier on this data, we need to flatten the image, to # turn the data in a (samples, feature) matrix: n_samples = len(iris.data) X = iris.data.reshape((n_samples, -1)) y = iris.target # Split the dataset in two equal parts X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0) # Set the parameters by cross-validation tuned_parameters = {'criterion':['gini','entropy'], 'max_depth':range(3,20)} clf = GridSearchCV(tree.DecisionTreeClassifier(), tuned_parameters, cv=5, scoring='f1_macro') clf.fit(X_train, y_train); print 'Best:', clf.best_params_ print means = clf.cv_results_['mean_test_score'] stds = clf.cv_results_['std_test_score'] for mean, std, params in zip(means, stds, clf.cv_results_['params']): print "{:0.3f} (+/-{:0.03f}) for {}".format(mean, std*2, params) Explanation: Studenti: nalezněte optimálni hodnoty hyperparametrů criterion a max_depth pro DecisionTreeClassifier, pro klasifikaci iris.data. End of explanation from sklearn.preprocessing import LabelEncoder labels = ['one', 'two', 'three'] encoder = LabelEncoder().fit_transform(labels) encoder from sklearn.preprocessing import OneHotEncoder encoder = OneHotEncoder() e = encoder.fit_transform(np.array([0, 1, 2, 3]).reshape(-1, 1)) e.todense() Explanation: <h2><center>Co dále?</center></h2> Datová špína Normalizace (sklearn.preprocessing.Normalizer) Použité metriky (Distance Metric Learning) sklearn.preprocessing.LabelEncoder / sklearn.preprocessing.OneHotEncoder Big Data End of explanation
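One concrete way to act on the "Co dále?" list above (normalization, encoders) is to put the preprocessing step inside a scikit-learn Pipeline, so that during GridSearchCV the scaler is fit only on the training folds. This is a small sketch in the spirit of the earlier GridSearchCV cells; the parameter values are illustrative and not taken from the lecture.

# Sketch: preprocessing + classifier in one Pipeline, tuned with GridSearchCV (illustrative values).
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import GridSearchCV
from sklearn import svm

pipe = Pipeline([('scale', StandardScaler()), ('clf', svm.SVC())])
params = {'clf__C': [1, 10, 100], 'clf__kernel': ['linear', 'rbf']}

search = GridSearchCV(pipe, params, cv=5, scoring='f1_macro')
search.fit(iris.data, iris.target)      # reuses the iris dataset loaded earlier in the lecture
print(search.best_params_)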
5,912
Given the following text description, write Python code to implement the functionality described below step by step Description: Slater module about Slater's rule Germain Salvato-Vallverdu germain.vallverdu@univ-pau.fr Atomic orbitals The Klechkowski class The ElectronicConf class Example of Chromium exception Step1: Atomic orbitals Atomic orbitals type The class AOType stores the type of the atomic orbitals Step2: The atomic orbital class Using the AO class of the slater module. Step3: An occupancy can be set to the shell. Step4: You can define the AO from a usual string. Step5: Other ways to define the AO Step6: The Klechkowski class This class simply implements the Klechkowski rule. You should not need to use it directly. Step7: The ElectronicConf class This is the main part of the module. Create the object You can start with the number of electrons Step8: Or you can give your own electronic configuration. Your own configuration might be wrong. Only the number of electrons by sub-shell is checked Step9: You can export the configuration as a LaTeX formula. Step10: You can print valence electrons Step11: Compute energies using Slater's rule Step12: Energy of the configuration Step13: Play with ions Simplest cases Step14: More complicated (4s / 3d inversion) Step15: Example of Chromium exception Electronic configuration of chromium presents an exception. Here we compute the energy difference between the electronic configuration following the Klechkowski rule and the one with the exception.
Python Code: import slater print(slater.__doc__) Explanation: Slater module about the slater's rule Germain Salvato-Vallverdu &#103;&#101;&#114;&#109;&#97;&#105;&#110;&#46;&#118;&#97;&#108;&#108;&#118;&#101;&#114;&#100;&#117;&#64;&#117;&#110;&#105;&#118;&#45;&#112;&#97;&#117;&#46;&#102;&#114; Atomic orbitals The Klechkowski class The ElectronicConf class Example of Chromium exception End of explanation print(slater.AOType.all_shells) print(slater.AOType.s) p_type = slater.AOType.from_string("p") print(p_type) f_type = slater.AOType.from_int(3) print(f_type, " l = ", f_type.l) Explanation: Atomic orbitals Atomic orbitals type The class AOType store the type of the atomic orbitals : s, p, d ... End of explanation AO_2s = slater.AO(n=2, aoType=slater.AOType.s, occ=1) print(AO_2s) print("n = ", AO_2s.n, "\nl = ", AO_2s.l) print(AO_2s.name) print("degeneracy (2l + 1) = ", AO_2s.degeneracy) Explanation: The atomic orbital class Using the AO class of the slater module. End of explanation print(AO_2s.occ) Explanation: An occupency can be set to the shell. End of explanation OA_3d = slater.AO.from_string("3d") print("OA : ", OA_3d.name, "\nn = ", OA_3d.n, "\nl = ", OA_3d.l, "\ndeg = ", OA_3d.degeneracy) Explanation: You can define the AO from a usual string. End of explanation OA_4p = slater.AO(4, "p") print(OA_4p) OA_3s = slater.AO(3, 0) print(OA_3s) Explanation: Other ways to define the AO : End of explanation k = slater.Klechkowski() print(k) Explanation: The Klechkowski class This class simply implements the Klechlowski rule. You should not need to use it directly. End of explanation Na = slater.ElectronicConf(nelec=11) print(Na) Explanation: The ElectronicConf class This is the main part of the module. Create the object You can start with the number of electrons : End of explanation Ca = slater.ElectronicConf.from_string("1s^2 2s^2 2p^6 3s^2 3p^6 4s^2") print(Ca) Explanation: Or you can give your own electronic configuration. Your own configuration might be wrong. Only the number of electrons by sub-shell is checked End of explanation print(Na.toTex()) Explanation: You can export the configuration in as a latex formula. End of explanation print(Na.valence) Explanation: You can print valence electrons : End of explanation data = Na.computeEnergy() print(data[slater.AO.from_string("2p")]) sigma, e = data[slater.AO.from_string("3s")] print("sigma = ", sigma, "\ne = ", e, "eV") Explanation: Compute energies using slater's rule End of explanation print("E = ", Na.energy) Explanation: Energy of the configuration End of explanation print("Na :", Na) print("q :", Na.q, " Z = ", Na.Z) Na_p = Na.ionize(1) print("Na + :", Na_p) print("q :", Na_p.q, " Z = ", Na_p.Z) Cl = slater.ElectronicConf(nelec=17) print("Cl :", Cl) Cl_m = Cl.ionize(-1) print("Cl- :", Cl_m) Explanation: Play with ions Simplest cases : End of explanation V = slater.ElectronicConf(nelec=23) print("V :", V) for i in [1, 2, 3]: ion = V.ionize(i) print("V{}+ :".format(ion.q), ion) Explanation: More complicated (4s / 3d inversion) : vanadium End of explanation Cr = slater.ElectronicConf(nelec=24) print(Cr) Cr_exc = slater.ElectronicConf.from_string("1s^2 2s^2 2p^6 3s^2 3p^6 4s^1 3d^5") print(Cr_exc) d = Cr.computeEnergy() d_exc = Cr_exc.computeEnergy() Cr.energy < Cr_exc.energy Explanation: Example of Chromium exception Electronic configuration of chromium presents an exception. Here we compute the energy difference between the electronic configuration following the Klechkowski rule and the one with the exception. End of explanation
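For reference, the sigma and energy printed above for the 3s electron of sodium can be checked by applying Slater's rules by hand. The sketch below is standalone (it does not use the slater module) and assumes the textbook screening constants 0.35 / 0.85 / 1.00 together with the hydrogen-like estimate E = -13.6 (Z_eff / n)^2 eV; the module's own numbers should be close if it follows the same conventions.

# Hand application of Slater's rules for the Na 3s electron (standalone, no `slater` module).
Z = 11                                      # sodium
sigma = 0 * 0.35 + 8 * 0.85 + 2 * 1.00      # same group, (n-1) shell, deeper shells
z_eff = Z - sigma                           # 2.2
n = 3
energy = -13.6 * (z_eff / n) ** 2           # hydrogen-like estimate, in eV
print("sigma =", sigma)                     # 8.8
print("Z_eff =", z_eff)
print("E = {:.2f} eV".format(energy))       # about -7.3 eV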
5,913
Given the following text description, write Python code to implement the functionality described below step by step Description: Sebastian Raschka back to the matplotlib-gallery at https Step1: Errorbar Plots in matplotlib Sections Standard Deviation, Standard Error, and Confidence Intervals Adding error bars to a barplot <br> <br> Standard Deviation, Standard Error, and Confidence Intervals [back to top] Step2: <br> <br> Adding error bars to a barplot [back to top]
Python Code: %load_ext watermark %watermark -u -v -d -p matplotlib,numpy,scipy %matplotlib inline Explanation: Sebastian Raschka back to the matplotlib-gallery at https://github.com/rasbt/matplotlib-gallery End of explanation import numpy as np from matplotlib import pyplot as plt from scipy.stats import t # Generating 15 random data points in the range 5-15 (inclusive) X = np.random.randint(5, 15, 15) # sample size n = X.size # mean X_mean = np.mean(X) # standard deviation X_std = np.std(X) # standard error X_se = X_std / np.sqrt(n) # alternatively: # from scipy import stats # stats.sem(X) # 95% Confidence Interval dof = n - 1 # degrees of freedom alpha = 1.0 - 0.95 conf_interval = t.ppf(1-alpha/2., dof) * X_std*np.sqrt(1.+1./n) fig = plt.gca() plt.errorbar(1, X_mean, yerr=X_std, fmt='-o') plt.errorbar(2, X_mean, yerr=X_se, fmt='-o') plt.errorbar(3, X_mean, yerr=conf_interval, fmt='-o') plt.xlim([0,4]) plt.ylim(X_mean-conf_interval-2, X_mean+conf_interval+2) # axis formatting fig.axes.get_xaxis().set_visible(False) fig.spines["top"].set_visible(False) fig.spines["right"].set_visible(False) plt.tick_params(axis="both", which="both", bottom="off", top="off", labelbottom="on", left="on", right="off", labelleft="on") plt.legend(['Standard Deviation', 'Standard Error', 'Confidence Interval'], loc='upper left', numpoints=1, fancybox=True) plt.ylabel('random variable') plt.title('15 random values in the range 5-15') plt.show() Explanation: Errorbar Plots in matplotlib Sections Standard Deviation, Standard Error, and Confidence Intervals Adding error bars to a barplot <br> <br> Standard Deviation, Standard Error, and Confidence Intervals [back to top] End of explanation import matplotlib.pyplot as plt # input data mean_values = [1, 2, 3] variance = [0.2, 0.4, 0.5] bar_labels = ['bar 1', 'bar 2', 'bar 3'] fig = plt.gca() # plot bars x_pos = list(range(len(bar_labels))) plt.bar(x_pos, mean_values, yerr=variance, align='center', alpha=0.5) # set height of the y-axis max_y = max(zip(mean_values, variance)) # returns a tuple, here: (3, 5) plt.ylim([0, (max_y[0] + max_y[1]) * 1.1]) # set axes labels and title plt.ylabel('variable y') plt.xticks(x_pos, bar_labels) plt.title('Bar plot with error bars') # axis formatting fig.axes.get_xaxis().set_visible(False) fig.spines["top"].set_visible(False) fig.spines["right"].set_visible(False) plt.tick_params(axis="both", which="both", bottom="off", top="off", labelbottom="on", left="on", right="off", labelleft="on") plt.show() Explanation: <br> <br> Adding error bars to a barplot [back to top] End of explanation
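A complementary calculation that may be worth noting here: the interval plotted above scales the t quantile by X_std*sqrt(1 + 1/n), the prediction-interval-style factor used in the original notebook, whereas the more commonly reported 95% confidence interval of the mean scales the t quantile by the standard error. A minimal sketch, reusing X_se, dof and alpha from the first code cell above:

# 95% confidence interval of the mean (uses the standard error), for comparison with the
# conf_interval computed above. Assumes X_se, dof and alpha are still in scope.
from scipy.stats import t
ci_mean = t.ppf(1 - alpha/2., dof) * X_se
print(ci_mean)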
5,914
Given the following text description, write Python code to implement the functionality described below step by step Description: X-ray Speckle Visibility Spectroscopy The analysis module "skxray/core/speckle" https Step1: Easily switch between interactive and static matplotlib plots¶ Step2: This data provided by Dr. Andrei Fluerasu L. Li, P. Kwasniewski, D. Oris, L Wiegart, L. Cristofolini, C. Carona and A. Fluerasu , "Photon statistics and speckle visibility spectroscopy with partially coherent x-rays" J. Synchrotron Rad., vol 21, p 1288-1295, 2014. Step3: Create the Rings Mask¶ Use the skxray.core.roi module to create Ring ROIs (ROI Mask).¶ (https Step4: Convert the edge values of the rings to q ( reciprocal space) Step5: Create a labeled array using roi.rings Step6: Find the brightest pixel in any ROI in any image in the image set. Using roi_max_counts function from skxray.core.roi module Step7: Everything looks good, next X-ray speckle visibilty spectroscopy This function will provide the probability density of detecting photons for different integration time. Using skxray.core.speckle module Step8: Find the integration times using skxray.core.utils.geometric_series Step9: Get the mean intensity of each ring Step10: Get the normalized bin edges and bin centers for each integration time. using skxray.core.speckle.normalize_bin_edges Step11: 1st q ring 0.0026859 (1/Angstroms) Step12: 2nd q ring 0.00278726 (1/Angstroms)¶ Step13: X-ray speckle visibilty spectroscopy(XSVS) for differnt time steps This function will provide the probability density of detecting photons for different integration time. Using skxray.core.speckle module Find the new integration times using skxray.core.utils.geometric_series Step14: XSVS results for new integartion times 1ms, 5ms and 25ms Step15: Plot the results for each Q ring 1st q ring 0.0026859 (1/Angstroms) Step16: 2nd q ring 0.00278726 (1/Angstroms) Step17: 3rd q ring 0.00288861 (1/ Angstroms) Step18: 4th q ring 0.0298997 (1/ Angstroms)
Python Code: import xray_vision import xray_vision.mpl_plotting as mpl_plot import skxray.core.speckle as xsvs import skxray.core.roi as roi import skxray.core.correlation as corr import skxray.core.utils as utils import numpy as np import os, sys import matplotlib as mpl import matplotlib.pyplot as plt from matplotlib.ticker import MaxNLocator from matplotlib.colors import LogNorm from mpl_toolkits.axes_grid1.axes_grid import ImageGrid Explanation: X-ray Speckle Visibility Spectroscopy The analysis module "skxray/core/speckle" https://github.com/scikit-xray/scikit-xray/blob/master/skxray/core/speckle.py End of explanation interactive_mode = False if interactive_mode: %matplotlib notebook else: %matplotlib inline backend = mpl.get_backend() Explanation: Easily switch between interactive and static matplotlib plots¶ End of explanation %run download.py data_dir = "Duke_data/" duke_rdata = np.load(data_dir+"duke_img_1_5000.npy") duke_dark = np.load(data_dir+"duke_dark.npy") duke_data = [] for i in range(duke_rdata.shape[0]): duke_data.append(duke_rdata[i] - duke_dark) duke_ndata=np.asarray(duke_data) # load the mask(s) and mask the data mask1 = np.load(data_dir+"new_mask4.npy") mask2 = np.load(data_dir+"Luxi_duke_mask.npy") N_mask = ~(mask1 + mask2) mask_data = N_mask*duke_ndata # get the average image avg_img = np.average(duke_ndata, axis=0) # if matplotlib version 1.5 or later if float('.'.join(mpl.__version__.split('.')[:2])) >= 1.5: cmap = 'viridis' else: cmap = 'CMRmap' # plot the average image data after masking plt.figure() plt.imshow(N_mask*avg_img, vmax=1e0, cmap=cmap) plt.title("Averaged masked data for Duke Silica Gel ") plt.colorbar() plt.show() Explanation: This data provided by Dr. Andrei Fluerasu L. Li, P. Kwasniewski, D. Oris, L Wiegart, L. Cristofolini, C. Carona and A. Fluerasu , "Photon statistics and speckle visibility spectroscopy with partially coherent x-rays" J. Synchrotron Rad., vol 21, p 1288-1295, 2014. End of explanation inner_radius = 26 # radius of the first ring width = 1 # width of each ring spacing = 0 # no spacing between rings num_rings = 4 # number of rings center = (133, 143) # center of the spckle pattern # find the edges of the required rings edges = roi.ring_edges(inner_radius, width, spacing, num_rings) edges Explanation: Create the Rings Mask¶ Use the skxray.core.roi module to create Ring ROIs (ROI Mask).¶ (https://github.com/scikit-xray/scikit-xray/blob/master/skxray/core/roi.py) End of explanation dpix = 0.055 # The physical size of the pixels lambda_ = 1.5498 # wavelength of the X-rays Ldet = 2200. # # detector to sample distance two_theta = utils.radius_to_twotheta(Ldet, edges*dpix) q_val = utils.twotheta_to_q(two_theta, lambda_) q_val q_ring = np.mean(q_val, axis=1) q_ring Explanation: Convert the edge values of the rings to q ( reciprocal space) End of explanation rings = roi.rings(edges, center, avg_img.shape) images_sets = (mask_data, ) ring_mask = rings*N_mask # plot the figure fig, axes = plt.subplots(figsize=(5, 5)) axes.set_title("Ring Mask") im = mpl_plot.show_label_array(axes, ring_mask, cmap=cmap) plt.show() Explanation: Create a labeled array using roi.rings End of explanation max_cts = roi.roi_max_counts(images_sets, ring_mask) max_cts Explanation: Find the brightest pixel in any ROI in any image in the image set. 
Using roi_max_counts function from skxray.core.roi module End of explanation spe_cts_all, std_dev = xsvs.xsvs(images_sets, ring_mask, timebin_num=2, number_of_img=30, max_cts=max_cts) Explanation: Everything looks good, next X-ray speckle visibilty spectroscopy This function will provide the probability density of detecting photons for different integration time. Using skxray.core.speckle module End of explanation time_steps = utils.geometric_series(2, 30) time_steps Explanation: Find the integration times using skxray.core.utils.geometric_series End of explanation mean_int_sets, index_list = roi.mean_intensity(mask_data, ring_mask) plt.figure(figsize=(8, 8)) plt.title("Mean intensity of each ring") for i in range(num_rings): plt.plot(mean_int_sets[:,i], label="Ring "+str(i+1)) plt.legend() plt.show() mean_int_ring = np.mean(mean_int_sets, axis=0) mean_int_ring Explanation: Get the mean intensity of each ring End of explanation num_times = 6 num_rois=num_rings norm_bin_edges, norm_bin_centers = xsvs.normalize_bin_edges(num_times, num_rois, mean_int_ring, max_cts) Explanation: Get the normalized bin edges and bin centers for each integration time. using skxray.core.speckle.normalize_bin_edges End of explanation fig, axes = plt.subplots(figsize=(6, 6)) axes.set_xlabel("K/<K>") axes.set_ylabel("P(K)") for i in range(4): art, = axes.plot(norm_bin_edges[i, 0][:-1], spe_cts_all[i, 0], '-o', label=str(time_steps[i])+" ms") axes.set_xlim(0, 4) axes.legend() plt.title("1st q ring 0.0026859 (1/Angstroms)") plt.show() Explanation: 1st q ring 0.0026859 (1/Angstroms) End of explanation fig, axes = plt.subplots(figsize=(6, 6)) axes.set_xlabel("K/<K>") axes.set_ylabel("P(K)") for i in range(4): art, = axes.plot(norm_bin_edges[i, 1][:-1], spe_cts_all[i, 1], '-o', label=str(time_steps[i])+" ms") axes.legend() axes.set_xlim(0, 4) plt.title("2nd q ring 0.00278726 (1/Angstroms)") plt.show() Explanation: 2nd q ring 0.00278726 (1/Angstroms)¶ End of explanation time_steps_5 = utils.geometric_series(5, 50) time_steps_5 Explanation: X-ray speckle visibilty spectroscopy(XSVS) for differnt time steps This function will provide the probability density of detecting photons for different integration time. 
Using skxray.core.speckle module Find the new integration times using skxray.core.utils.geometric_series End of explanation p_K, std_dev_5 = xsvs.xsvs(images_sets, ring_mask, timebin_num=5, number_of_img=50, max_cts=max_cts) Explanation: XSVS results for new integartion times 1ms, 5ms and 25ms End of explanation fig, axes = plt.subplots(figsize=(6, 6)) axes.set_xlabel("K/<K>") axes.set_ylabel("P(K)") for i in range(3): art, = axes.plot(norm_bin_edges[i, 0][:-1], p_K[i, 0], '-o', label=str(time_steps_5[i])+" ms") axes.set_xlim(0, 4) axes.legend() plt.title("1st q ring 0.0026859 (1/Angstroms)") plt.show() Explanation: Plot the results for each Q ring 1st q ring 0.0026859 (1/Angstroms) End of explanation fig, axes = plt.subplots(figsize=(6, 6)) axes.set_xlabel("K/<K>") axes.set_ylabel("P(K)") for i in range(3): art, = axes.plot(norm_bin_edges[i, 1][:-1], p_K[i, 1], '-o', label=str(time_steps_5[i])+" ms") axes.legend() axes.set_xlim(0, 4) plt.title("2nd q ring 0.00278726 (1/Angstroms)") plt.show() Explanation: 2nd q ring 0.00278726 (1/Angstroms) End of explanation fig, axes = plt.subplots(figsize=(6, 6)) axes.set_xlabel("K/<K>") axes.set_ylabel("P(K)") for i in range(3): art, = axes.plot(norm_bin_edges[i, 2][:-1], p_K[i, 2], '-o', label=str(time_steps_5[i])+" ms" ) axes.set_xlim(0, 4) axes.legend() plt.title("3rd q ring 0.00288861 (1/ Angstroms)") plt.show() Explanation: 3rd q ring 0.00288861 (1/ Angstroms) End of explanation fig, axes = plt.subplots(figsize=(6, 6)) axes.set_xlabel("K/<K>") axes.set_ylabel("P(K)") for i in range(3): art, = axes.plot(norm_bin_edges[i, 3][:-1], p_K[i, 3], '-o', label=str(time_steps_5[i])+" ms") axes.set_xlim(0, 4) axes.legend() plt.title("4th q ring 0.0298997 (1/ Angstroms)") plt.show() import skxray print(skxray.__version__) Explanation: 4th q ring 0.0298997 (1/ Angstroms) End of explanation
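The integration times used in the two runs above (1, 2, 4, 8, 16 frames, and then the 1, 5 and 25 steps labelled 1 ms, 5 ms and 25 ms) follow a simple geometric series. The helper below is an assumed plain-Python stand-in for skxray.core.utils.geometric_series, written only to show how those time steps arise; the library's own implementation may differ.

def geometric_series(ratio, number_of_images):
    # assumed stand-in for skxray's helper: times 1, ratio, ratio**2, ... that fit inside the image series
    times = [1]
    while times[-1] * ratio <= number_of_images:
        times.append(times[-1] * ratio)
    return times

print(geometric_series(2, 30))   # [1, 2, 4, 8, 16]
print(geometric_series(5, 50))   # [1, 5, 25] -> the three integration steps used in the second run above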
5,915
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2019 The TensorFlow Authors. Step1: TensorFlow 2 quickstart for beginners <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https Step2: If you are following along in your own development environment, rather than Colab, see the install guide for setting up TensorFlow for development. Note Step3: Build a machine learning model Build a tf.keras.Sequential model by stacking layers. Step4: For each example, the model returns a vector of logits or log-odds scores, one for each class. Step5: The tf.nn.softmax function converts these logits to probabilities for each class Step6: Note Step7: This loss is equal to the negative log probability of the true class Step8: Before you start training, configure and compile the model using Keras Model.compile. Set the optimizer class to adam, set the loss to the loss_fn function you defined earlier, and specify a metric to be evaluated for the model by setting the metrics parameter to accuracy. Step9: Train and evaluate your model Use the Model.fit method to adjust your model parameters and minimize the loss Step10: The Model.evaluate method checks the models performance, usually on a "Validation-set" or "Test-set". Step11: The image classifier is now trained to ~98% accuracy on this dataset. To learn more, read the TensorFlow tutorials. If you want your model to return a probability, you can wrap the trained model, and attach the softmax to it
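One statement above is easy to verify numerically before running the full reference code below: the loss is the negative log probability of the true class, so an untrained model that spreads probability roughly uniformly over the 10 MNIST digit classes should start near -log(1/10).

import numpy as np
print(-np.log(1.0 / 10.0))   # ~2.3026: expected initial loss for 10 classes with uniform probabilities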
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Explanation: Copyright 2019 The TensorFlow Authors. End of explanation import tensorflow as tf print("TensorFlow version:", tf.__version__) Explanation: TensorFlow 2 quickstart for beginners <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/tutorials/quickstart/beginner"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/quickstart/beginner.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/quickstart/beginner.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/tutorials/quickstart/beginner.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> This short introduction uses Keras to: Load a prebuilt dataset. Build a neural network machine learning model that classifies images. Train this neural network. Evaluate the accuracy of the model. This tutorial is a Google Colaboratory notebook. Python programs are run directly in the browser—a great way to learn and use TensorFlow. To follow this tutorial, run the notebook in Google Colab by clicking the button at the top of this page. In Colab, connect to a Python runtime: At the top-right of the menu bar, select CONNECT. Run all the notebook code cells: Select Runtime > Run all. Set up TensorFlow Import TensorFlow into your program to get started: End of explanation mnist = tf.keras.datasets.mnist (x_train, y_train), (x_test, y_test) = mnist.load_data() x_train, x_test = x_train / 255.0, x_test / 255.0 Explanation: If you are following along in your own development environment, rather than Colab, see the install guide for setting up TensorFlow for development. Note: Make sure you have upgraded to the latest pip to install the TensorFlow 2 package if you are using your own development environment. See the install guide for details. Load a dataset Load and prepare the MNIST dataset. Convert the sample data from integers to floating-point numbers: End of explanation model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(10) ]) Explanation: Build a machine learning model Build a tf.keras.Sequential model by stacking layers. End of explanation predictions = model(x_train[:1]).numpy() predictions Explanation: For each example, the model returns a vector of logits or log-odds scores, one for each class. 
End of explanation tf.nn.softmax(predictions).numpy() Explanation: The tf.nn.softmax function converts these logits to probabilities for each class: End of explanation loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) Explanation: Note: It is possible to bake the tf.nn.softmax function into the activation function for the last layer of the network. While this can make the model output more directly interpretable, this approach is discouraged as it's impossible to provide an exact and numerically stable loss calculation for all models when using a softmax output. Define a loss function for training using losses.SparseCategoricalCrossentropy, which takes a vector of logits and a True index and returns a scalar loss for each example. End of explanation loss_fn(y_train[:1], predictions).numpy() Explanation: This loss is equal to the negative log probability of the true class: The loss is zero if the model is sure of the correct class. This untrained model gives probabilities close to random (1/10 for each class), so the initial loss should be close to -tf.math.log(1/10) ~= 2.3. End of explanation model.compile(optimizer='adam', loss=loss_fn, metrics=['accuracy']) Explanation: Before you start training, configure and compile the model using Keras Model.compile. Set the optimizer class to adam, set the loss to the loss_fn function you defined earlier, and specify a metric to be evaluated for the model by setting the metrics parameter to accuracy. End of explanation model.fit(x_train, y_train, epochs=5) Explanation: Train and evaluate your model Use the Model.fit method to adjust your model parameters and minimize the loss: End of explanation model.evaluate(x_test, y_test, verbose=2) Explanation: The Model.evaluate method checks the models performance, usually on a "Validation-set" or "Test-set". End of explanation probability_model = tf.keras.Sequential([ model, tf.keras.layers.Softmax() ]) probability_model(x_test[:5]) Explanation: The image classifier is now trained to ~98% accuracy on this dataset. To learn more, read the TensorFlow tutorials. If you want your model to return a probability, you can wrap the trained model, and attach the softmax to it: End of explanation
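The logits-to-probabilities step performed by tf.nn.softmax and the trailing Softmax layer above can be mimicked with a few lines of NumPy; the example logits here are made up purely for illustration.

import numpy as np

logits = np.array([0.5, -1.2, 3.0, 0.0, 0.1, -0.3, 1.1, 0.2, -2.0, 0.7])   # assumed 10-class logits
probs = np.exp(logits - logits.max())    # subtract the max for numerical stability
probs /= probs.sum()
print(probs)        # one probability per class
print(probs.sum())  # sums to 1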
5,916
Given the following text description, write Python code to implement the functionality described below step by step Description: Home Depot Product Search Relevance The challenge is to predict a relevance score for the provided combinations of search terms and products. To create the ground truth labels, Home Depot has crowdsourced the search/product pairs to multiple human raters. LabGraph Create This notebook uses the LabGraph create machine learning iPython module. You need a personal licence to run this code. Step1: Load data from CSV files Step2: Data merging Step3: Let's explore some data Let's examine 3 different queries and products Step4: 'angle bracket' search term is not contained in the body. 'angle' would be after stemming however 'bracket' is not. Step5: only 'wood' is present from search term Step6: 'sheer' and 'courtain' are present and that's all How many search terms are not present in description and title for ranked 3 documents Ranked 3 documents are the most relevents searches, but how many search queries doesn't include the searched term in the description and the title Step7: Stemming Step8: BM25
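Before the GraphLab-based reference code that follows, the core of the "how many search-term words actually appear in the product text" check described above can be sketched with plain Python sets; the example strings here are hypothetical.

search_term = "angle bracket"                                                   # hypothetical query
description = "galvanized steel angle used for framing and general repairs"    # hypothetical product text

search_words = set(search_term.lower().split())
desc_words = set(description.lower().split())

missing = search_words - desc_words
print("terms used:", len(search_words) - len(missing), "of", len(search_words))
print("missing terms:", missing)   # here only 'bracket' is absent from the description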
Python Code: import graphlab as gl from nltk.stem import * Explanation: Home Depot Product Search Relevance The challenge is to predict a relevance score for the provided combinations of search terms and products. To create the ground truth labels, Home Depot has crowdsourced the search/product pairs to multiple human raters. LabGraph Create This notebook uses the LabGraph create machine learning iPython module. You need a personal licence to run this code. End of explanation train = gl.SFrame.read_csv("../data/train.csv") test = gl.SFrame.read_csv("../data/test.csv") desc = gl.SFrame.read_csv("../data/product_descriptions.csv") Explanation: Load data from CSV files End of explanation # merge train with description train = train.join(desc, on = 'product_uid', how = 'left') # merge test with description test = test.join(desc, on = 'product_uid', how = 'left') Explanation: Data merging End of explanation first_doc = train[0] first_doc Explanation: Let's explore some data Let's examine 3 different queries and products: * first from the training set * somewhere in the moddle in the training set * the last one from the training set End of explanation middle_doc = train[37033] middle_doc Explanation: 'angle bracket' search term is not contained in the body. 'angle' would be after stemming however 'bracket' is not. End of explanation last_doc = train[-1] last_doc Explanation: only 'wood' is present from search term End of explanation train['search_term_word_count'] = gl.text_analytics.count_words(train['search_term']) ranked3doc = train[train['relevance'] == 3] print ranked3doc.head() len(ranked3doc) words_search = gl.text_analytics.tokenize(ranked3doc['search_term'], to_lower = True) words_description = gl.text_analytics.tokenize(ranked3doc['product_description'], to_lower = True) words_title = gl.text_analytics.tokenize(ranked3doc['product_title'], to_lower = True) wordsdiff_desc = [] wordsdiff_title = [] puid = [] search_term = [] ws_count = [] ws_count_used_desc = [] ws_count_used_title = [] for item in xrange(len(ranked3doc)): ws = words_search[item] pd = words_description[item] pt = words_title[item] diff = set(ws) - set(pd) if diff is None: diff = 0 wordsdiff_desc.append(diff) diff2 = set(ws) - set(pt) if diff2 is None: diff2 = 0 wordsdiff_title.append(diff2) puid.append(ranked3doc[item]['product_uid']) search_term.append(ranked3doc[item]['search_term']) ws_count.append(len(ws)) ws_count_used_desc.append(len(ws) - len(diff)) ws_count_used_title.append(len(ws) - len(diff2)) differences = gl.SFrame({"puid" : puid, "search term": search_term, "diff desc" : wordsdiff_desc, "diff title" : wordsdiff_title, "ws count" : ws_count, "ws count used desc" : ws_count_used_desc, "ws count used title" : ws_count_used_title}) differences.sort(['ws count used desc', 'ws count used title']) print "No terms used in description : " + str(len(differences[differences['ws count used desc'] == 0])) print "No terms used in title : " + str(len(differences[differences['ws count used title'] == 0])) print "No terms used in description and title : " + str(len(differences[(differences['ws count used desc'] == 0) & (differences['ws count used title'] == 0)])) import matplotlib.pyplot as plt %matplotlib inline Explanation: 'sheer' and 'courtain' are present and that's all How many search terms are not present in description and title for ranked 3 documents Ranked 3 documents are the most relevents searches, but how many search queries doesn't include the searched term in the description and the title End of explanation 
#stemmer = SnowballStemmer("english") stemmer = PorterStemmer() def stem(word): singles = [stemmer.stem(plural) for plural in unicode(word, errors='replace').split()] text = ' '.join(singles) return text print "Starting stemming train search term..." stemmed = train['search_term'].apply(stem) train['stem_search_term'] = stemmed print "Starting stemming train product description..." stemmed = train['product_description'].apply(stem) train['stem_product_description'] = stemmed print "Starting stemming train product title..." stemmed = train['product_title'].apply(stem) train['stem_product_title'] = stemmed print "Starting stemming test search term..." stemmed = test['search_term'].apply(stem) test['stem_search_term'] = stemmed print "Starting stemming test product description..." stemmed = test['product_description'].apply(stem) test['stem_product_description'] = stemmed print "Starting stemming test product title..." stemmed = test['product_title'].apply(stem) test['stem_product_title'] = stemmed Explanation: Stemming End of explanation train['stem_search_term_split'] = train['stem_search_term'].apply(lambda x: x.split()) train['stem_product_title_split'] = train['stem_product_title'].apply(lambda x: x.split()) train_bm25_title = gl.text_analytics.bm25(train['stem_product_title_split'], train['stem_search_term']) train_bm25_title train['product_desc_word_count'] = gl.text_analytics.count_words(train['stem_product_description']) train_desc_tfidf = gl.text_analytics.tf_idf(train['product_desc_word_count']) train['desc_tfidf'] = train_desc_tfidf train['product_title_word_count'] = gl.text_analytics.count_words(train['stem_product_title']) train_title_tfidf = gl.text_analytics.tf_idf(train['product_title_word_count']) train['title_tfidf'] = train_title_tfidf train['distance_desc'] = train.apply(lambda x: gl.distances.cosine(x['search_tfidf'],x['desc_tfidf'])) #train['distance_desc_sqrt'] = train['distance_desc'] ** 2 train['distance_title'] = train.apply(lambda x: gl.distances.cosine(x['search_tfidf'],x['title_tfidf'])) #train['distance_title_sqrt'] = train['distance_title'] ** 3 model1 = gl.random_forest_regression.create(train, target = 'relevance', features = ['distance_desc', 'distance_title'], num_trees = 500, validation_set = None) test['search_term_word_count'] = gl.text_analytics.count_words(test['stem_search_term']) test_search_tfidf = gl.text_analytics.tf_idf(test['search_term_word_count']) test['search_tfidf'] = test_search_tfidf test['product_desc_word_count'] = gl.text_analytics.count_words(test['stem_product_description']) test_desc_tfidf = gl.text_analytics.tf_idf(test['product_desc_word_count']) test['desc_tfidf'] = test_desc_tfidf test['product_title_word_count'] = gl.text_analytics.count_words(test['stem_product_title']) test_title_tfidf = gl.text_analytics.tf_idf(test['product_title_word_count']) test['title_tfidf'] = test_title_tfidf test['distance_desc'] = test.apply(lambda x: gl.distances.cosine(x['search_tfidf'],x['desc_tfidf'])) #test['distance_desc_sqrt'] = test['distance_desc'] ** 2 test['distance_title'] = test.apply(lambda x: gl.distances.cosine(x['search_tfidf'],x['title_tfidf'])) #test['distance_title_sqrt'] = test['distance_title'] ** 3 ''' predictions_test = model1.predict(test) test_errors = predictions_test - test['relevance'] RSS_test = sum(test_errors * test_errors) print RSS_test ''' predictions_test = model1.predict(test) predictions_test #result = model1.evaluate(test) #result submission = gl.SFrame(test['id']) submission.add_column(predictions_test) 
submission.rename({'X1': 'id', 'X2':'relevance'}) submission['relevance'] = submission.apply(lambda x: 3.0 if x['relevance'] > 3.0 else x['relevance']) submission['relevance'] = submission.apply(lambda x: 1.0 if x['relevance'] < 1.0 else x['relevance']) submission['relevance'] = submission.apply(lambda x: str(x['relevance'])) submission.export_csv('../data/submission2.csv', quote_level = 3) #gl.canvas.set_target('ipynb') Explanation: BM25 End of explanation
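The random-forest features built above are cosine distances between TF-IDF dictionaries of the search term and the product title or description. A toy, dictionary-based version of that distance (one minus the cosine similarity, with made-up TF-IDF weights) looks like this.

import math

def cosine_distance(a, b):
    # a, b: {token: tf-idf weight} dictionaries
    common = set(a) & set(b)
    dot = sum(a[k] * b[k] for k in common)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 1.0
    return 1.0 - dot / (norm_a * norm_b)

search_tfidf = {"wood": 1.2, "deck": 2.5}                                    # assumed weights
title_tfidf = {"pressure": 0.8, "treated": 0.9, "deck": 2.1, "board": 1.4}   # assumed weights
print(cosine_distance(search_tfidf, title_tfidf))   # smaller distance = closer match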
5,917
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2020 The TensorFlow Authors. Step1: TF.Text Metrics <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https Step2: ROUGE-L The Rouge-L metric is a score from 0 to 1 indicating how similar two sequences are, based on the length of the longest common subsequence (LCS). In particular, Rouge-L is the weighted harmonic mean (or f-measure) combining the LCS precision (the percentage of the hypothesis sequence covered by the LCS) and the LCS recall (the percentage of the reference sequence covered by the LCS). Source Step3: The hypotheses and references are expected to be tf.RaggedTensors of tokens. Tokens are required instead of raw sentences because no single tokenization strategy fits all tasks. Now we can call text.metrics.rouge_l and get our result back Step4: ROUGE-L has an additional hyperparameter, alpha, which determines the weight of the harmonic mean used for computing the F-Measure. Values closer to 0 treat Recall as more important and values closer to 1 treat Precision as more important. alpha defaults to .5, which corresponds to equal weight for Precision and Recall.
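Before the reference code that follows, the LCS-based definition of ROUGE-L given above can be worked through by hand on the first hypothesis/reference pair; the weighted-harmonic-mean form below is inferred from the description (alpha = 1 reduces to precision, alpha = 0 to recall, alpha = 0.5 to the usual F1).

def lcs_length(a, b):
    # classic dynamic-programming longest-common-subsequence length
    table = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            table[i][j] = table[i - 1][j - 1] + 1 if x == y else max(table[i - 1][j], table[i][j - 1])
    return table[-1][-1]

hyp = ['captain', 'of', 'the', 'delta', 'flight']
ref = ['delta', 'air', 'lines', 'flight']
lcs = lcs_length(hyp, ref)                      # 'delta', 'flight' -> 2
precision, recall = lcs / len(hyp), lcs / len(ref)
alpha = 0.5                                     # weighted harmonic mean inferred from the description above
f_measure = precision * recall / ((1 - alpha) * precision + alpha * recall)
print(precision, recall, f_measure)             # 0.4, 0.5, ~0.444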
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Explanation: Copyright 2020 The TensorFlow Authors. End of explanation !pip install -q "tensorflow-text==2.8.*" import tensorflow as tf import tensorflow_text as text Explanation: TF.Text Metrics <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://www.tensorflow.org/text/tutorials/text_similarity"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/text/blob/master/docs/tutorials/text_similarity.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/text/blob/master/docs/tutorials/text_similarity.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/text/docs/tutorials/text_similarity.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a> </td> </table> Overview TensorFlow Text provides a collection of text-metrics-related classes and ops ready to use with TensorFlow 2.0. The library contains implementations of text-similarity metrics such as ROUGE-L, required for automatic evaluation of text generation models. The benefit of using these ops in evaluating your models is that they are compatible with TPU evaluation and work nicely with TF streaming metric APIs. Setup End of explanation hypotheses = tf.ragged.constant([['captain', 'of', 'the', 'delta', 'flight'], ['the', '1990', 'transcript']]) references = tf.ragged.constant([['delta', 'air', 'lines', 'flight'], ['this', 'concludes', 'the', 'transcript']]) Explanation: ROUGE-L The Rouge-L metric is a score from 0 to 1 indicating how similar two sequences are, based on the length of the longest common subsequence (LCS). In particular, Rouge-L is the weighted harmonic mean (or f-measure) combining the LCS precision (the percentage of the hypothesis sequence covered by the LCS) and the LCS recall (the percentage of the reference sequence covered by the LCS). Source: https://www.microsoft.com/en-us/research/publication/rouge-a-package-for-automatic-evaluation-of-summaries/ The TF.Text implementation returns the F-measure, Precision, and Recall for each (hypothesis, reference) pair. Consider the following hypothesis/reference pair: End of explanation result = text.metrics.rouge_l(hypotheses, references) print('F-Measure: %s' % result.f_measure) print('P-Measure: %s' % result.p_measure) print('R-Measure: %s' % result.r_measure) Explanation: The hypotheses and references are expected to be tf.RaggedTensors of tokens. Tokens are required instead of raw sentences because no single tokenization strategy fits all tasks. 
Now we can call text.metrics.rouge_l and get our result back: End of explanation # Compute ROUGE-L with alpha=0 result = text.metrics.rouge_l(hypotheses, references, alpha=0) print('F-Measure (alpha=0): %s' % result.f_measure) print('P-Measure (alpha=0): %s' % result.p_measure) print('R-Measure (alpha=0): %s' % result.r_measure) # Compute ROUGE-L with alpha=1 result = text.metrics.rouge_l(hypotheses, references, alpha=1) print('F-Measure (alpha=1): %s' % result.f_measure) print('P-Measure (alpha=1): %s' % result.p_measure) print('R-Measure (alpha=1): %s' % result.r_measure) Explanation: ROUGE-L has an additional hyperparameter, alpha, which determines the weight of the harmonic mean used for computing the F-Measure. Values closer to 0 treat Recall as more important and values closer to 1 treat Precision as more important. alpha defaults to .5, which corresponds to equal weight for Precision and Recall. End of explanation
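A quick numeric check of the alpha behaviour described above, using assumed precision/recall values of 0.4 and 0.5: at alpha = 0 the score collapses to recall, and at alpha = 1 it collapses to precision.

p, r = 0.4, 0.5   # assumed LCS precision and recall
for alpha in (0.0, 0.5, 1.0):
    f = p * r / ((1 - alpha) * p + alpha * r)
    print(alpha, round(f, 3))   # 0.0 -> 0.5 (recall), 0.5 -> 0.444, 1.0 -> 0.4 (precision)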
5,918
Given the following text description, write Python code to implement the functionality described below step by step Description: In this tutorial, we illustrate how to generate decision landscape visualisations. As an example, we use the data from O'Hora et al.(2013). First, download the data from https Step1: Cache the processed data to csv so that you don't need to preprocess the raw data every time you analyse it. Step2: Next time just read the already processed data Step3: Decision landscape time! Let's try the model with alpha=3 (that is, with four free parameters). We generate individual-level landscapes for all participants in the dataset. Step4: Now that we've got the parameters for every subject, we can compare the landscapes!
Python Code: import os from pydlv import data_reader, derivative_calculator dr = data_reader.DataReader() dc = derivative_calculator.DerivativeCalculator() data = dr.read_data(path='../../../../data/scirep_locdyn') # rewards_sum defines the experimental conditions to be analysed (see data description on OSF for details) # reward sums of 12, 15 and 25 correspond to the High vs. Low condition in 7/5, 10/5 and 20/5 experiments, respectively data = dr.preprocess_data(data, rewards_sum = [12, 15, 25]) data = dc.append_derivatives(data) Explanation: In this tutorial, we illustrate how to generate decision landscape visualisations. As an example, we use the data from O'Hora et al.(2013). First, download the data from https://osf.io/ahpv6/ and save it to any directory. Then, using the DataReader class, load the data with dr.read_data() (the path parameter should point to the directory where you downloaded the data), and then preprocess it with dr.preprocess_data(). End of explanation if not os.path.exists('csv'): os.makedirs('csv') data.to_csv('csv/processed_data_high_low.csv') Explanation: Cache the processed data to csv so that you don't need to preprocess the raw data every time you analyse it. End of explanation data = dr.get_processed_data(path='csv/processed_data_high_low.csv') Explanation: Next time just read the already processed data: End of explanation from pydlv import dl_model_3, dl_generator model = dl_model_3.DLModel3() dlg = dl_generator.DLGenerator(model) fit_dl = lambda trajs: dlg.fit_dl_mult_traj(trajs, method=9) fit_params = data.groupby(by='subj_id').apply(fit_dl) fit_params.index = fit_params.index.droplevel(1) fit_params.to_csv('csv/fit_params_by_subject.csv') Explanation: Decision landscape time! Let's try the model with alpha=3 (that is, with four free parameters). We generate individual-level landscapes for all participants in the dataset. End of explanation %matplotlib inline from pydlv import dl_plotter from matplotlib import cm dlp = dl_plotter.DLPlotter(elev=33, azim=107) subjects = [9444, 8969] labels = ['Participant %i' % (subj_id) for subj_id in subjects] cmap = cm.viridis colors = [cmap(0.1), cmap(0.7)] for i, subj_id in enumerate(subjects): x, y, dl = dlg.get_model_dl(fit_params.loc[subj_id][2:2+model.n_params]) dlp.plot_surface(x, y, dl, color=colors[i], alpha=0.8) dlp.add_legend(colors, labels) Explanation: Now that we've got the parameters for every subject, we can compare the landscapes! End of explanation
5,919
Given the following text description, write Python code to implement the functionality described below step by step Description: Hands-On Exercise 6 Step1: The first thing to notice is that the table does not include magnitude measurements. Gaaaasp The horror!! As an important point of background - Firth et al. normalize all images to a common zero-point of 27.0 mag (AB), and then after image subtraction perform forced PSF photometry at the location of the PTF SN on the subtraction image. One advantage of this method is that it enables the measurement of negative fluxes (which for SNe isn't particularly useful, but for variable stars is extremely important for detecting events such as eclipses). For SNe, which should have no flux in the reference image (though there are rare cases where this may not be the case), to calculate magnitudes from flux (or counts in the case of the Firth et al. study) use the following equation Step2: The previous line should have led to a NumPy run error Step3: In addition to tracking the mag of the SNe at each epoch, we need to track the uncertainty on each of those measurements. To convert uncertainties in flux to mag uncertainties use the following equation Step4: Part B - Plot light curves Now that we have calculated magnitudes, let's examine some light curves. As discussed earlier today, an important aspect of SN science is determining whether or not a new discovery is a young SN. Examining non-detections, and the corresponding flux limits, can constrain the age of a SN at the time of discovery [more on this later]. Thus, a careful measurement of the upper limits is very important. As an example, plot the light curve for the first SN in the table, PTF09dsy, which is a typical SN from Firth et al. Problem B1 Plot the light curve, including uncertainties, of PTF09dsy. For epochs where the SN is not detected, plot upper limits (use the v symbol in matplotlib). Step5: Problem B2 Did PTF discover PTF09dsy at an early epoch? Type your response to B2 here Now examine the family of PTF light curves to see how they compare. Problem B3 In a single figure, plot all of the PTF light curves from Firth et al. For this figure ignore upper limits. Label each light curve so you can tell them apart via a legend. Hint - use a for loop to keep your code clean and simple. It may also be useful to increase the size of this particular figure for clarity. Step6: Problem B4 Which SN stands out above the rest from the Firth et al. sample? Type your response to B4 here Problem 2) Calibrating Type Ia SNe Before we can use Type Ia SNe to measure distances (and, eventually, $H_0$) we must first calibrate their luminosities. Virtually all distance indicators are calibrated in the same way via the "distance ladder." Geometric parallax determines the distance to pulsating variables (e.g., RR Lyrae stars, Cepheids), calibrating those stars, which are then used to calibrate other distance indicators in nearby galaxies. [Note - historically there are many many rungs on the distance ladder which involve many steps for calibration. Recent work, however, has directly calibrated Cepheids via parallax leading to a two step calibration Cepheids $\rightarrow$ SNe Ia, e.g. Riess et al. (2016).] At the end of problem 1, you should have identified PTF11kly as the especially unique SN in the Firth et al. sample. And, indeed, PTF11kly (aka SN 2011fe) is unique in many, many respects. PTF detected this SN shortly after explosion (more on that later...) in the spectacular Pinwheel Galaxy, M 101. 
Several hundred papers have been written on this SN that was discovered less than 5 years ago! M 101 is close enough to the the Milky Way that it is possible to detect individual Cepheids with the Hubble Space Telescope, which enables a precise distance measurement via the Cepheid Period-Luminosity relation. Thus, we can use PTF11kly to calibrate the absolute magnitude of Type Ia SNe, thereby measuring distances to all the PTF SNe in the Firth et al. sample. Part A - Identify Potential Systematics In addition to being significantly brighter than the other SNe in the sample, the PTF11kly light curve is different for another reason as well. The following is a bit different from the other problems we have encountered so far in that it is a bit open ended, however, it is an important exercise in data exploration. Problem A1 Identify the difference, aside from peak brightness, between PTF11kly and the other light curves in the Firth et al. sample. Hint - don't spend too much time on this as the answer is below, but also don't just scroll down without trying to figure this out first. Step7: Part B - Calibrate the Peak Luminosity of SNe Ia Earlier it was mentioned that Type Ia SNe are standardizable, but there was no discussion of how they are standardizable. For now we will assume that all Type Ia SNe have the same absolute magnitude (a proxy for luminosity for our 1 filter light curves) at peak. In practice, detailed light-curve fitting algorithms such as SALT or MLCS are used to standardize the luminosities of SNe Ia, but the use of these tools is beyond the scope of this problem. Furthermore, SALT, MLCS, and all precise SN distance measurement techniques require light curves in at least two filters, which is not available for PTF data. To calibrate the absolute magnitude of PTF11kly, we need to determine the distance to M 101, in units of mag, a.k.a the distance modulus. Problem B1 Determine the distance to M 101, and store the result in mu_M101. Hint - you might find the answer on NED or in Riess et al. (2016). Step8: Above, you determined that PTF11kly is different from the other SNe in that PTF observed it in the $g$-Band. SNe do not have flat spectra (in an AB sense) so we cannot calibrate the absolute magnitude in one band and apply the results to another filter. Thus, we will introduce our first cheat of the workshop, which is that we will use a non-PTF light curve to calibrate PTF11kly in the $R$-band. [In addition to being the wrong filter, PTF also did not observe PTF11kly at peak.] Fortunately, the KAIT telescope observed PTF11kly in the $R$-band covering the peak of the SN light curve. [The KAIT $R$ filter and PTF $R$ filter are not identical, but we will ignore those differences for now. The KAIT light curve shows that PTF11kly peaked at $R = 10.02 \; \mathrm{mag}$. Problem B2 Store the peak $R$-band brightness in a variable called peak11kly. Step9: Problem B3 Determine the peak absolute magnitude of Type Ia SNe in the $R$-band, store the result in a variable called M_Ia. Confirm that your result makes sense. Step10: Problem 3) Measure $H_0$ Now that we have calibrated the peak absolute magnitude of Type Ia SNe, we can measure $H_0$. Prior to measuring $H_0$, we will try to develop some intuition for the uniformity of SNe Ia at peak. Part A - Scatter in SNe Traditionally, after SNe candidates are discovered they are sent to the IAU for confirmation, after which they are officially named SN YYYY??, where YYYY is the year, and ?? 
is an alphabetical sequence following the order in which the SNe were discovered. [Note - with modern surveys discovering hundreds of new SNe every year this scheme is no longer used.] It has been said that one can make a low-scatter Hubble diagram using the SN redshift and mag at discovery from the IAU Circulars, without any sort of filter corrections. We can test this hypothesis using the PTF data in Firth et al. If SNe are standard candles, and the hypothesis is true, then a plot of $\mathrm{mag}_\mathrm{disc}$ vs. $z$ should show small scatter. Problem A1 Plot $R$ at the epoch of discovery against redshift for the SNe in the Firth et al. sample. Store the results in arrays called disc_mag, disc_mag_unc, and z. Step11: Based on your above plot, are SNe (at the time of discovery) good standard candles? While the correlation in the above plot is weak, and the scatter large, we maintain that a decent Hubble diagram can be made using the information present in old IAU circulars. Problem A2 List some reasons the above exercise would not work well for PTF, but it could work well for previous surveys. Hint - think back to Mansi's talk from this morning. Type your response to A2 here Of course, we know SNe Ia are standard(-izable) candles [they wouldn't award a Nobel Prize for something's that wrong, right? ... Right?!], and that they are ~standard at the time of peak. Now we will see if PTF SNe Ia are standard candles near the time of peak. Problem A3 Plot the peak $R$ mag for each of the SNe as a function of redshift. Store the results in arrays called mag_peak and mag_peak_unc. Hint - no need for fancy fitting, simply use the brightest observation of each SNe. If you are looking for a challenge you can fit the light curves and interpolate to get the peak, however, note that there is no simple functional form to fit, so you may get worse results following this procedure. Step12: How does the correlation and scatter look now? Part B - Meauresuring $H_0$ Now that we have empirically demonstrated a correlation between peak brightness and distance for SNe Ia [this is not exactly true, but let's roll with it], we can use the fact that they are standard candles to infer the distance to each. This will allow us to determine distance as a function of recession velocity, aka Hubble's Constant. Problem B1 Using PTF11kly as a calibrator, determine the distance, in Mpc, to each of the other SNe in the Firth et al. sample. Recall that the distance modulus, $\mu$, is given by Step13: Problem B2 Plot recession velocity as a function of distance for the PTF Type Ia SNe, thus making a version of the Hubble diagram. How do your results compare to Hubble's original diagram? Step14: Problem B3 Perform a linear-least squares fit to the data in the previous plot to determine the value of $H_0$ from PTF SNe. Then, plot the line corresponding to the best fit on the previous plot. Hint - there are many ways to perform linear-least squares in Python including Step15: Riess et al. (2016) use a large sample of Cepheids and SNe to measure $H_0 = 73.24 \pm 1.74$. [Some of you may be familiar with the tension in $H_0$ measurements between SNe and the Planck measurements of the cosmic microwave background. This is a time-domain workshop, so we are definitely #TeamSNe.] Problem B4 How does the Riess et al. measurement compare to what you derived in the previous problem? Type your response to B4 here Not bad for a basic method and only 10 SNe!! 
Problem 4) Constraining the Radius of a White Dwarf Now we will significantly change pace, as we pivot away from the utility of Type Ia SNe as distance indicators and instead focus on the physics of an exploding white dwarf. As previously noted, PTF11kly was a very special supernova. In addition to exploding in a very nearby galaxy, PTF detected this SN just a few hours after it exploded, corresponding to the earliest detection of a Type Ia SN at the time. As Mansi highlighted in her talk, early detections of SNe can reveal a great deal about the progenitor systems. Here, we will look at how the PTF light curve of PTF11kly constrains the exploding white dwarf. Part A - the PTF11kly light curve Theorists hate magnitudes, and prefer to work in the "natural" units of luminosity. Thus, to compare the observed PTF light curve to models we need to convert from magnitude to luminosity. Problem A1 Convert the PTF11kly magnitude measurements to luminosity in units of $\mathrm{erg} \; \mathrm{s}^{-1}$, and store the results in an array called L. Assume no bolometric correction from the $g$-band. Assume the PTF $g$-band has a central frequency of $6.284 \times 10^{14} \mathrm{Hz}$. Hint - recall that AB magnitudes have a standard zeropoint Step16: Problem A2 Plot the luminosity, including upper limits, of PTF 11kly. Step17: Part B - constraining the WD radius To compare models to the PTF 11kly light curve, we need to determine the exact time at which the SN exploded. Fortunately, the early luminosity evolution of Type Ia SNe has been shown to be parabolic Step18: Now that we have determined the precise time of explosion, we can compare the luminosity of PTF11kly to theoretical models of shock breakout, which will constrain the radius of the progenitor. For example, Rabinak et al. (2011) found that the early luminosity of Type Ia SNe can be described as Step19: Rabinak et al. show that $L(t)$ is directly proportional to the progenitor radius, as should be confirmed by the above plot. From these curves, we can constrain the radius of PTF11kly. Problem B3 Plot the early ($t \le 4$) light curve of PTF11kly on the above plot, along with the $t^2$-fireball model fit to show the comparison of the actual explosion to the models. Step20: Problem B4 Based on the above plot - what constraints can you place on the progenitor radius? Using the initial detection of PTF11kly, what is the maximum size of the progenitor? Step21: And thus, PTF11kly provides direct evidence that (at least one) Type Ia SN come from progenitors that are significantly smaller than the Sun! Something to chew on - is this proof that SNe Ia come from white dwarfs? Hint - can you think of any other astrophysical objects that are allowed within the constraints above? Problem 5 - Challenge If you finish early, work on the following problem, or continue working on this as homework for this evening. The challenge problem is going to focus on improving the use of the PTF SNe for measuring cosmological distances. Typically, the goodness of an individual method is reported as the scatter (in mag) about the best fit Hubble line. Challenge Problem 1 Assuming $H_0 = 73.24 \pm 1.74$, plot the Hubble expansion curve on a plot showing distance modulus, $\mu$, against redshift, $z$. Overplot the distance modulus and redshift of the SNe in the Firth et al. sample. Do you notice any trends? In a separate plot, show the residuals relative to $H_0 = 73.24$, and calculate the scatter (rms in mag) of your method relative to this baseline. 
Modern SN light curve fitters produce a scatter of $\sim{0.14} \;\mathrm{mag}$. Given all that you now know - do you consider our method used to derive $H_0$ good?
Python Code: # execute this cell SNlcs = Table.read("../data/Firth14Tbl2.txt", format = 'ascii') SNlcs Explanation: Hands-On Exercise 6: Determining $H_0$ with Type Ia SNe from PTF Version 0.1 Today we learned about a variety of different explosive, extragalactic transients. While the lectures focused on recently discovered, rare transients, the most famous transients are without question Type Ia supernovae (SNe). SNe Ia have nearly uniform peak luminosities, which can be standardized via first and second order corrections, such that they are standardizable candels. They are the best distance indicators at high redshift, and the 2011 Nobel Prize was awarded to Adam Riess, Brian Schmidt, and Saul Perlmutter for the discovery of the accelerating universe via Type Ia SNe. During this exercise we will use PTF data to calibrate the brightness of Type Ia SNe, and then measure Hubble's constant, $H_0$, using this calibration. Following that we will use PTF observations to limit the size of an exploding white dwarf star. By AA Miller (c) 2016 Jun 14 Problem 1) SNe Light Curves Previously we learned about image subtraction and the importance of removing the flux from "static" sources (i.e. host galaxies), when trying to measure the brightness of SNe. The PTF public data releases do not include image subtraction products, and the software to perform image subtraction is not (at the moment, anyway) easily installed and implemented in python. Thus, for this exercise we will bypass the public PTF data and instead use published light curves from Firth et al. 2015. Firth et al. (2015) present a study of the rise time of Type Ia SNe discovered by PTF. As they have already performed image subtraction, we will utilize the light curves produced in that study for our exercise. Table 2 of Firth et al. contains the light curves, and that data can be accessed in data/Firth14Tbl2.txt. One brief note before we start, that study includes SNe from the La Silla-QUEST survey, which we have commented out in the file in the data/ directory. We do this to ensure all light curves were taken in the same filter. As a first step we will examine the light curves and the formatting of the data file. As before, we will read the data into an astropy table file. End of explanation mag = # complete Explanation: The first thing to notice is that the table does not include magnitude measurements. Gaaaasp The horror!! As an important point of background - Firth et al. normalize all images to a common zero-point of 27.0 mag (AB), and then after image subtraction perform forced PSF photometry at the location of the PTF SN on the subtraction image. One advantage of this method is that it enables the measurement of negative fluxes (which for SNe isn't particularly useful, but for variable stars is extremely important for detecting events such as eclipses). For SNe, which should have no flux in the reference image (though there are rare cases where this may not be the case), to calculate magnitudes from flux (or counts in the case of the Firth et al. study) use the following equation: $$ m = \textrm{ZP} - 2.5 \log_{10} f,$$ where $m$ is the magnitude, $\textrm{ZP}$ is the zero-point, and $f$ is the flux (or counts). Part A - Determine magnitudes for PTF SNe As a first step, convert the counts measurements for each SN to magnitudes. Problem A1 Convert counts in the SN light curves table to magnitudes using the equation provided above. Store the results in an array called mag. 
End of explanation det = # complete Explanation: The previous line should have led to a NumPy run error: and thus, we have encountered one of the downsides of using negative flux measurements - it is not possible to take the $\log$ of a negative number. Furthermore, the magnitude array that we just created includes measurements where the signal-to-noise ratio (SNR) is $< 1$. Typically, astronomical sources are only considered detected when their flux exceeds some threshold, usually defined in units of the noise (e.g., 3$\sigma$, 5$\sigma$, or in very conservative cases 10$\sigma$). Problem A2 Create a boolean array, called det, which tracks epochs in which the SNe are actually detected. Hint - you must choose the limits at which the SN is considered detected. End of explanation mag_unc = # complete Explanation: In addition to tracking the mag of the SNe at each epoch, we need to track the uncertainty on each of those measurements. To convert uncertainties in flux to mag uncertainties use the following equation: $$\Delta m = 1.0857 \frac{\sigma_f}{f},$$ where $\Delta m$ is the mag uncertainty and $\sigma_f$ is the flux uncertainty. [You can arrive at this equation by differentiating the first equation in this notebook.] Problem A3 Calculate the uncertainties for the magnitude measures at each epoch in the SN light curve table and store the results in an array called mag_unc. End of explanation # complete plt.errorbar( # complete plt.ylim( # complete plt.xlabel( # complete plt.ylabel( # complete Explanation: Part B - Plot light curves Now that we have calculated magnitudes, let's examine some light curves. As discussed earlier today, an important aspect of SN science is determining whether or not a new discovery is a young SN. Examining non-detections, and the corresponding flux limits, can constrain the age of a SN at the time of discovery [more on this later]. Thus, a careful measurement of the upper limits is very important. As an example, plot the light curve for the first SN in the table, PTF09dsy, which is a typical SN from Firth et al. Problem B1 Plot the light curve, including uncertainties, of PTF09dsy. For epochs where the SN is not detected, plot upper limits (use the v symbol in matplotlib). End of explanation plt.figure( # complete # complete plt.plot( # complete # complete plt.legend( # complete Explanation: Problem B2 Did PTF discover PTF09dsy at an early epoch? Type your response to B2 here Now examine the family of PTF light curves to see how they compare. Problem B3 In a single figure, plot all of the PTF light curves from Firth et al. For this figure ignore upper limits. Label each light curve so you can tell them apart via a legend. Hint - use a for loop to keep your code clean and simple. It may also be useful to increase the size of this particular figure for clarity. End of explanation # demonstrate that PTF11kly is different from the other SNe Explanation: Problem B4 Which SN stands out above the rest from the Firth et al. sample? Type your response to B4 here Problem 2) Calibrating Type Ia SNe Before we can use Type Ia SNe to measure distances (and, eventually, $H_0$) we must first calibrate their luminosities. Virtually all distance indicators are calibrated in the same way via the "distance ladder." Geometric parallax determines the distance to pulsating variables (e.g., RR Lyrae stars, Cepheids), calibrating those stars, which are then used to calibrate other distance indicators in nearby galaxies. 
[Note - historically there are many many rungs on the distance ladder which involve many steps for calibration. Recent work, however, has directly calibrated Cepheids via parallax leading to a two step calibration Cepheids $\rightarrow$ SNe Ia, e.g. Riess et al. (2016).] At the end of problem 1, you should have identified PTF11kly as the especially unique SN in the Firth et al. sample. And, indeed, PTF11kly (aka SN 2011fe) is unique in many, many respects. PTF detected this SN shortly after explosion (more on that later...) in the spectacular Pinwheel Galaxy, M 101. Several hundred papers have been written on this SN that was discovered less than 5 years ago! M 101 is close enough to the the Milky Way that it is possible to detect individual Cepheids with the Hubble Space Telescope, which enables a precise distance measurement via the Cepheid Period-Luminosity relation. Thus, we can use PTF11kly to calibrate the absolute magnitude of Type Ia SNe, thereby measuring distances to all the PTF SNe in the Firth et al. sample. Part A - Identify Potential Systematics In addition to being significantly brighter than the other SNe in the sample, the PTF11kly light curve is different for another reason as well. The following is a bit different from the other problems we have encountered so far in that it is a bit open ended, however, it is an important exercise in data exploration. Problem A1 Identify the difference, aside from peak brightness, between PTF11kly and the other light curves in the Firth et al. sample. Hint - don't spend too much time on this as the answer is below, but also don't just scroll down without trying to figure this out first. End of explanation mu_M101 = Explanation: Part B - Calibrate the Peak Luminosity of SNe Ia Earlier it was mentioned that Type Ia SNe are standardizable, but there was no discussion of how they are standardizable. For now we will assume that all Type Ia SNe have the same absolute magnitude (a proxy for luminosity for our 1 filter light curves) at peak. In practice, detailed light-curve fitting algorithms such as SALT or MLCS are used to standardize the luminosities of SNe Ia, but the use of these tools is beyond the scope of this problem. Furthermore, SALT, MLCS, and all precise SN distance measurement techniques require light curves in at least two filters, which is not available for PTF data. To calibrate the absolute magnitude of PTF11kly, we need to determine the distance to M 101, in units of mag, a.k.a the distance modulus. Problem B1 Determine the distance to M 101, and store the result in mu_M101. Hint - you might find the answer on NED or in Riess et al. (2016). End of explanation peak11kly = Explanation: Above, you determined that PTF11kly is different from the other SNe in that PTF observed it in the $g$-Band. SNe do not have flat spectra (in an AB sense) so we cannot calibrate the absolute magnitude in one band and apply the results to another filter. Thus, we will introduce our first cheat of the workshop, which is that we will use a non-PTF light curve to calibrate PTF11kly in the $R$-band. [In addition to being the wrong filter, PTF also did not observe PTF11kly at peak.] Fortunately, the KAIT telescope observed PTF11kly in the $R$-band covering the peak of the SN light curve. [The KAIT $R$ filter and PTF $R$ filter are not identical, but we will ignore those differences for now. The KAIT light curve shows that PTF11kly peaked at $R = 10.02 \; \mathrm{mag}$. Problem B2 Store the peak $R$-band brightness in a variable called peak11kly. 
End of explanation M_Ia = Explanation: Problem B3 Determine the peak absolute magnitude of Type Ia SNe in the $R$-band, store the result in a variable called M_Ia. Confirm that your result makes sense. End of explanation # complete # complete disc_mag = # complete disc_mag_unc = # complete z = # complete # complete # complete plt.errorbar( # complete plt.ylim(# complete plt.xlabel(# complete plt.ylabel(# complete Explanation: Problem 3) Measure $H_0$ Now that we have calibrated the peak absolute magnitude of Type Ia SNe, we can measure $H_0$. Prior to measuring $H_0$, we will try to develop some intuition for the uniformity of SNe Ia at peak. Part A - Scatter in SNe Traditionally, after SNe candidates are discovered they are sent to the IAU for confirmation, after which they are officially named SN YYYY??, where YYYY is the year, and ?? is an alphabetical sequence following the order in which the SNe were discovered. [Note - with modern surveys discovering hundreds of new SNe every year this scheme is no longer used.] It has been said that one can make a low-scatter Hubble diagram using the SN redshift and mag at discovery from the IAU Circulars, without any sort of filter corrections. We can test this hypothesis using the PTF data in Firth et al. If SNe are standard candles, and the hypothesis is true, then a plot of $\mathrm{mag}_\mathrm{disc}$ vs. $z$ should show small scatter. Problem A1 Plot $R$ at the epoch of discovery against redshift for the SNe in the Firth et al. sample. Store the results in arrays called disc_mag, disc_mag_unc, and z. End of explanation mag_peak = # complete mag_peak_unc = # complete # complete # complete plt.errorbar(# complete plt.ylim(# complete plt.xlabel(# complete plt.ylabel(# complete Explanation: Based on your above plot, are SNe (at the time of discovery) good standard candles? While the correlation in the above plot is weak, and the scatter large, we maintain that a decent Hubble diagram can be made using the information present in old IAU circulars. Problem A2 List some reasons the above exercise would not work well for PTF, but it could work well for previous surveys. Hint - think back to Mansi's talk from this morning. Type your response to A2 here Of course, we know SNe Ia are standard(-izable) candles [they wouldn't award a Nobel Prize for something's that wrong, right? ... Right?!], and that they are ~standard at the time of peak. Now we will see if PTF SNe Ia are standard candles near the time of peak. Problem A3 Plot the peak $R$ mag for each of the SNe as a function of redshift. Store the results in arrays called mag_peak and mag_peak_unc. Hint - no need for fancy fitting, simply use the brightest observation of each SNe. If you are looking for a challenge you can fit the light curves and interpolate to get the peak, however, note that there is no simple functional form to fit, so you may get worse results following this procedure. End of explanation d = # complete Explanation: How does the correlation and scatter look now? Part B - Meauresuring $H_0$ Now that we have empirically demonstrated a correlation between peak brightness and distance for SNe Ia [this is not exactly true, but let's roll with it], we can use the fact that they are standard candles to infer the distance to each. This will allow us to determine distance as a function of recession velocity, aka Hubble's Constant. Problem B1 Using PTF11kly as a calibrator, determine the distance, in Mpc, to each of the other SNe in the Firth et al. sample. 
Recall that the distance modulus, $\mu$, is given by: $$\mu = m - M = 5 \log_{10} (\frac{d}{10}),$$ where $m$ is the observed mag, $M$ the absolute mag, and $d$ the distance in pc. End of explanation # complete plt.errorbar(# complete plt.xlabel(# complete plt.ylabel(# complete Explanation: Problem B2 Plot recession velocity as a function of distance for the PTF Type Ia SNe, thus making a version of the Hubble diagram. How do your results compare to Hubble's original diagram? End of explanation # complete print('The fit results in H_0 = {:.1f} km/s/Mpc'.format(# complete Explanation: Problem B3 Perform a linear-least squares fit to the data in the previous plot to determine the value of $H_0$ from PTF SNe. Then, plot the line corresponding to the best fit on the previous plot. Hint - there are many ways to perform linear-least squares in Python including: np.polyfit() np.linalg.lstsq() scipy.stats.linregress() and, of course, for a problem this simple it would also be straight-forward to directly code the result yourself. Note For an actual publication, performing an ordinary least squares (OLS) fit to this data would be inappropriate as the distance measurements have significant uncertainties. Furthermore, flipping the axes, such that the fit is d vs. v, and inverting the slope to get $H_0$ also would not work, as fitting $X$ vs. $Y$ is different from fitting $Y$ vs. $X$. For a brief tutorial on a better approach in this specific case, see Hogg et al. 2010. End of explanation # complete # complete # complete L = # complete Explanation: Riess et al. (2016) use a large sample of Cepheids and SNe to measure $H_0 = 73.24 \pm 1.74$. [Some of you may be familiar with the tension in $H_0$ measurements between SNe and the Planck measurements of the cosmic microwave background. This is a time-domain workshop, so we are definitely #TeamSNe.] Problem B4 How does the Riess et al. measurement compare to what you derived in the previous problem? Type your response to B4 here Not bad for a basic method and only 10 SNe!! Problem 4) Constraining the Radius of a White Dwarf Now we will significantly change pace, as we pivot away from the utility of Type Ia SNe as distance indicators and instead focus on the physics of an exploding white dwarf. As previously noted, PTF11kly was a very special supernova. In addition to exploding in a very nearby galaxy, PTF detected this SN just a few hours after it exploded, corresponding to the earliest detection of a Type Ia SN at the time. As Mansi highlighted in her talk, early detections of SNe can reveal a great deal about the progenitor systems. Here, we will look at how the PTF light curve of PTF11kly constrains the exploding white dwarf. Part A - the PTF11kly light curve Theorists hate magnitudes, and prefer to work in the "natural" units of luminosity. Thus, to compare the observed PTF light curve to models we need to convert from magnitude to luminosity. Problem A1 Convert the PTF11kly magnitude measurements to luminosity in units of $\mathrm{erg} \; \mathrm{s}^{-1}$, and store the results in an array called L. Assume no bolometric correction from the $g$-band. Assume the PTF $g$-band has a central frequency of $6.284 \times 10^{14} \mathrm{Hz}$. 
Hint - recall that AB magnitudes have a standard zeropoint: $$m_{AB} = -2.5 \log_{10} \frac{f_\nu}{3631 \; \mathrm{Jy}}.$$ End of explanation plt.figure(figsize = (10,8)) plt.errorbar( # complete plt.yscale( # complete plt.xlim( # limit the plot to the few relevant upper limits prior to explosion plt.xlabel( # complete plt.ylabel( # complete Explanation: Problem A2 Plot the luminosity, including upper limits, of PTF 11kly. End of explanation # complete # complete # complete print('PTF11kly exploded at MJD = {:.3f}'.format( # complete Explanation: Part B - constraining the WD radius To compare models to the PTF 11kly light curve, we need to determine the exact time at which the SN exploded. Fortunately, the early luminosity evolution of Type Ia SNe has been shown to be parabolic: $L(t) \propto (t - t_0)^2,$ where $L$ is the luminosity, $t$ is the time, and $t_0$ is the time of explosion. Problem B1 Fit a parabolic function to the early (i.e., only fit the first few days) light curve to determine the time at which PTF11kly exploded. End of explanation # complete # complete R_p = np.array( # complete # complete # complete plt.figure(figsize = (10,8)) for r in R_p: plt.plot( # complete plt.legend( # complete plt.yscale('log') plt.xlim(0.0, 3.5) plt.ylim( # complete Explanation: Now that we have determined the precise time of explosion, we can compare the luminosity of PTF11kly to theoretical models of shock breakout, which will constrain the radius of the progenitor. For example, Rabinak et al. (2011) found that the early luminosity of Type Ia SNe can be described as: $$L(t) = 1.2 \times 10^{40} \frac{R_{10}E_{51}^{0.85}}{M_c^{0.69}\kappa_{0.2}^{0.85}f_p^{0.16}} t_d^{-0.31} \; \mathrm{erg} \; \mathrm{s}^{-1},$$ where $R_{10}$ is the progenitor radius $R_p/10^{10} \; \mathrm{cm}$, $E_{51}$ is the explosion energy in units of $10^{51}\; \mathrm{erg}$, $M_c$ is the progenitor mass in units of $M_{\mathrm{ch}}$, and $\kappa_{0.2}$ is the opacity $\kappa/0.2 \; \mathrm{cm}^2 \; \mathrm{g}^{-1}$, and $f_p$ is the form factor. Problem B2 Assuming $E_{51} = M_c = \kappa_{0.2} = 1$, and $f_p = 0.05$, plot the theoretical light curves for exploding white dwarfs with radii of $R_\mathrm{WD} = 0.01, 0.1, 1.0 \; R_\odot$. End of explanation # overplot these results on the theoretical models plt.errorbar( # complete plt.plot( # complete Explanation: Rabinak et al. show that $L(t)$ is directly proportional to the progenitor radius, as should be confirmed by the above plot. From these curves, we can constrain the radius of PTF11kly. Problem B3 Plot the early ($t \le 4$) light curve of PTF11kly on the above plot, along with the $t^2$-fireball model fit to show the comparison of the actual explosion to the models. End of explanation R = # complete print('The radius of the progenitor is <= {:.3f} Rsun'.format( # complete Explanation: Problem B4 Based on the above plot - what constraints can you place on the progenitor radius? Using the initial detection of PTF11kly, what is the maximum size of the progenitor? End of explanation # complete # complete plt.figure() plt.plot( # complete # complete # complete plt.figure() plt.plot( # complete Explanation: And thus, PTF11kly provides direct evidence that (at least one) Type Ia SN come from progenitors that are significantly smaller than the Sun! Something to chew on - is this proof that SNe Ia come from white dwarfs? Hint - can you think of any other astrophysical objects that are allowed within the constraints above? 
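As a quick aside (not part of the graded problems), the Rabinak et al. (2011) scaling quoted above is simple to evaluate directly. The sketch below plugs in the fiducial values from the problem statement ($E_{51} = M_c = \kappa_{0.2} = 1$, $f_p = 0.05$); the solar-radius constant and the time grid are my own illustrative choices.
import numpy as np

# Evaluate the early-time luminosity scaling for a few progenitor radii (illustrative only)
R_sun_cm = 6.957e10                      # solar radius in cm
E51, Mc, kappa02, fp = 1.0, 1.0, 1.0, 0.05
t_d = np.linspace(0.1, 3.5, 50)          # days since explosion

for R_frac in (0.01, 0.1, 1.0):          # progenitor radius in units of R_sun
    R10 = R_frac * R_sun_cm / 1e10       # radius in units of 10**10 cm
    L = 1.2e40 * R10 * E51**0.85 / (Mc**0.69 * kappa02**0.85 * fp**0.16) * t_d**-0.31
    idx = t_d.searchsorted(1.0)          # grid point closest to t = 1 day
    print('R = {:.2f} Rsun -> L(t ~ 1 d) ~ {:.2e} erg/s'.format(R_frac, L[idx]))
Because $L(t)$ is linear in the radius, comparing these curves with the first detection of PTF11kly is what lets you bound the progenitor size.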
Problem 5 - Challenge If you finish early, work on the following problem, or continue working on this as homework for this evening. The challenge problem is going to focus on improving the use of the PTF SNe for measuring cosmological distances. Typically, the goodness of an individual method is reported as the scatter (in mag) about the best-fit Hubble line. Challenge Problem 1 Assuming $H_0 = 73.24 \pm 1.74 \; \mathrm{km \; s^{-1} \; Mpc^{-1}}$, plot the Hubble expansion curve on a plot showing distance modulus, $\mu$, against redshift, $z$. Overplot the distance modulus and redshift of the SNe in the Firth et al. sample. Do you notice any trends? In a separate plot, show the residuals relative to $H_0 = 73.24 \; \mathrm{km \; s^{-1} \; Mpc^{-1}}$, and calculate the scatter (rms in mag) of your method relative to this baseline. Modern SN light curve fitters produce a scatter of $\sim 0.14 \; \mathrm{mag}$. Given all that you now know - do you consider our method used to derive $H_0$ good? End of explanation
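To tie the pieces of this problem set together, here is a compact, self-contained sketch of the full calculation, using only the relations given above ($M = m_{\mathrm{peak}} - \mu_{M101}$, $\mu = m - M = 5\log_{10}(d/10\,\mathrm{pc})$, $v \approx cz$). All of the numbers except the quoted KAIT peak are placeholders, not values from the Firth et al. data.
import numpy as np

# Placeholder inputs standing in for the quantities you derive from the data
mu_M101 = 29.0                          # assumed distance modulus of M 101 (mag)
peak11kly = 10.02                       # KAIT R-band peak of PTF11kly (mag), quoted above
M_Ia = peak11kly - mu_M101              # calibrated peak absolute magnitude

mag_peak = np.array([18.5, 19.2, 20.1]) # placeholder peak R mags for three SNe
z = np.array([0.03, 0.05, 0.09])        # placeholder redshifts

mu = mag_peak - M_Ia                    # distance moduli
d = 10**(mu / 5 + 1) / 1e6              # distance in Mpc, from mu = 5 log10(d / 10 pc)
v = 3e5 * z                             # recession velocity in km/s (cz approximation)
H0 = np.sum(v * d) / np.sum(d**2)       # least-squares slope of v = H0 * d through the origin
print('H0 ~ {:.1f} km/s/Mpc (with placeholder inputs)'.format(H0))
With the real PTF peak magnitudes and redshifts, the same few lines reproduce the Hubble-constant estimate asked for in Problem 3.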
5,920
Given the following text description, write Python code to implement the functionality described below step by step Description: BigQuery Query To Table Save query results into a BigQuery table. License Copyright 2020 Google LLC, Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https Step1: 2. Set Configuration This code is required to initialize the project. Fill in required fields and press play. If the recipe uses a Google Cloud Project Step2: 3. Enter BigQuery Query To Table Recipe Parameters Specify a single query and choose legacy or standard mode. For PLX use user authentication and Step3: 4. Execute BigQuery Query To Table This does NOT need to be modified unless you are changing the recipe, click play.
Python Code: !pip install git+https://github.com/google/starthinker Explanation: BigQuery Query To Table Save query results into a BigQuery table. License Copyright 2020 Google LLC, Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Disclaimer This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team. This code generated (see starthinker/scripts for possible source): - Command: "python starthinker_ui/manage.py colab" - Command: "python starthinker/tools/colab.py [JSON RECIPE]" 1. Install Dependencies First install the libraries needed to execute recipes, this only needs to be done once, then click play. End of explanation from starthinker.util.configuration import Configuration CONFIG = Configuration( project="", client={}, service={}, user="/content/user.json", verbose=True ) Explanation: 2. Set Configuration This code is required to initialize the project. Fill in required fields and press play. If the recipe uses a Google Cloud Project: Set the configuration project value to the project identifier from these instructions. If the recipe has auth set to user: If you have user credentials: Set the configuration user value to your user credentials JSON. If you DO NOT have user credentials: Set the configuration client value to downloaded client credentials. If the recipe has auth set to service: Set the configuration service value to downloaded service credentials. End of explanation FIELDS = { 'auth_write':'service', # Credentials used for writing data. 'query':'', # SQL with newlines and all. 'dataset':'', # Existing BigQuery dataset. 'table':'', # Table to create from this query. 'legacy':True, # Query type must match source tables. } print("Parameters Set To: %s" % FIELDS) Explanation: 3. Enter BigQuery Query To Table Recipe Parameters Specify a single query and choose legacy or standard mode. For PLX use user authentication and: SELECT * FROM [plx.google:FULL_TABLE_NAME.all] WHERE... Every time the query runs it will overwrite the table. Modify the values below for your use case, can be done multiple times, then click play. 
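For concreteness, a filled-in set of parameters might look like the sketch below; the project, dataset, and table names are placeholders, not real resources, and the legacy flag is set to match the standard-SQL (backtick) table reference.
# Hypothetical example of filled-in recipe parameters (adapt before running)
FIELDS = {
    'auth_write': 'service',
    'query': 'SELECT name, COUNT(*) AS n FROM `my_project.source_dataset.events` GROUP BY name',
    'dataset': 'reporting',      # existing BigQuery dataset to write into
    'table': 'event_counts',     # table that will be created/overwritten by the query
    'legacy': False,             # standard SQL, matching the backtick-style table reference
}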
End of explanation from starthinker.util.configuration import execute from starthinker.util.recipe import json_set_fields TASKS = [ { 'bigquery':{ 'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}}, 'from':{ 'query':{'field':{'name':'query','kind':'text','order':1,'default':'','description':'SQL with newlines and all.'}}, 'legacy':{'field':{'name':'legacy','kind':'boolean','order':4,'default':True,'description':'Query type must match source tables.'}} }, 'to':{ 'dataset':{'field':{'name':'dataset','kind':'string','order':2,'default':'','description':'Existing BigQuery dataset.'}}, 'table':{'field':{'name':'table','kind':'string','order':3,'default':'','description':'Table to create from this query.'}} } } } ] json_set_fields(TASKS, FIELDS) execute(CONFIG, TASKS, force=True) Explanation: 4. Execute BigQuery Query To Table This does NOT need to be modified unless you are changing the recipe, click play. End of explanation
5,921
Given the following text description, write Python code to implement the functionality described below step by step Description: Grade Step1: If you get an error stating that database "homework2" does not exist, make sure that you followed the instructions above exactly. If necessary, drop the database you created (with, e.g., DROP DATABASE your_database_name) and start again. In all of the cells below, I've provided the necessary Python scaffolding to perform the query and display the results. All you need to do is write the SQL statements. As noted in the tutorial, if your SQL statement has a syntax error, you'll need to rollback your connection before you can fix the error and try the query again. As a convenience, I've included the following cell, which performs the rollback process. Run it whenever you hit trouble. Step2: Problem set 1 Step3: Problem set 2 Step4: Nicely done. Now, in the cell below, fill in the indicated string with a SQL statement that returns all occupations, along with their count, from the uuser table that have more than fifty users listed for that occupation. (I.e., the occupation librarian is listed for 51 users, so it should be included in these results. There are only 12 lawyers, so lawyer should not be included in the result.) Expected output Step5: TA-COMMENT This looks great! If you pass in the following string as statement, you get the exact same order as Allison Step6: TA-COMMENT Alternate SQL query Step7: BONUS
Python Code: import pg8000 conn = pg8000.connect(user='postgres', password='password', database="homework2_radhika") Explanation: Grade: 6 / 6 -- but search "TA-COMMENT" to see a few notes on some of the problems. Homework 2: Working with SQL (Data and Databases 2016) This homework assignment takes the form of an IPython Notebook. There are a number of exercises below, with notebook cells that need to be completed in order to meet particular criteria. Your job is to fill in the cells as appropriate. You'll need to download this notebook file to your computer before you can complete the assignment. To do so, follow these steps: Make sure you're viewing this notebook in Github. Ctrl+click (or right click) on the "Raw" button in the Github interface, and select "Save Link As..." or your browser's equivalent. Save the file in a convenient location on your own computer. Rename the notebook file to include your own name somewhere in the filename (e.g., Homework_2_Allison_Parrish.ipynb). Open the notebook on your computer using your locally installed version of IPython Notebook. When you've completed the notebook to your satisfaction, e-mail the completed file to the address of the teaching assistant (as discussed in class). Setting the scene These problem sets address SQL, with a focus on joins and aggregates. I've prepared a SQL version of the MovieLens data for you to use in this homework. Download this .psql file here. You'll be importing this data into your own local copy of PostgreSQL. To import the data, follow these steps: Launch psql. At the prompt, type CREATE DATABASE homework2; Connect to the database you just created by typing \c homework2 Import the .psql file you downloaded earlier by typing \i followed by the path to the .psql file. After you run the \i command, you should see the following output: CREATE TABLE CREATE TABLE CREATE TABLE COPY 100000 COPY 1682 COPY 943 The table schemas for the data look like this: Table "public.udata" Column | Type | Modifiers -----------+---------+----------- user_id | integer | item_id | integer | rating | integer | timestamp | integer | Table "public.uuser" Column | Type | Modifiers ------------+-----------------------+----------- user_id | integer | age | integer | gender | character varying(1) | occupation | character varying(80) | zip_code | character varying(10) | Table "public.uitem" Column | Type | Modifiers --------------------+------------------------+----------- movie_id | integer | not null movie_title | character varying(81) | not null release_date | date | video_release_date | character varying(32) | imdb_url | character varying(134) | unknown | integer | not null action | integer | not null adventure | integer | not null animation | integer | not null childrens | integer | not null comedy | integer | not null crime | integer | not null documentary | integer | not null drama | integer | not null fantasy | integer | not null film_noir | integer | not null horror | integer | not null musical | integer | not null mystery | integer | not null romance | integer | not null scifi | integer | not null thriller | integer | not null war | integer | not null western | integer | not null Run the cell below to create a connection object. This should work whether you have pg8000 installed or psycopg2. End of explanation conn.rollback() Explanation: If you get an error stating that database "homework2" does not exist, make sure that you followed the instructions above exactly. 
If necessary, drop the database you created (with, e.g., DROP DATABASE your_database_name) and start again. In all of the cells below, I've provided the necessary Python scaffolding to perform the query and display the results. All you need to do is write the SQL statements. As noted in the tutorial, if your SQL statement has a syntax error, you'll need to rollback your connection before you can fix the error and try the query again. As a convenience, I've included the following cell, which performs the rollback process. Run it whenever you hit trouble. End of explanation conn.rollback() cursor = conn.cursor() statement = "SELECT movie_title, release_date from uitem WHERE horror=1 AND scifi=1 ORDER BY release_date DESC;" cursor.execute(statement) for row in cursor: print(row[0]) Explanation: Problem set 1: WHERE and ORDER BY In the cell below, fill in the string assigned to the variable statement with a SQL query that finds all movies that belong to both the science fiction (scifi) and horror genres. Return these movies in reverse order by their release date. (Hint: movies are located in the uitem table. A movie's membership in a genre is indicated by a value of 1 in the uitem table column corresponding to that genre.) Run the cell to execute the query. Expected output: Deep Rising (1998) Alien: Resurrection (1997) Hellraiser: Bloodline (1996) Robert A. Heinlein's The Puppet Masters (1994) Body Snatchers (1993) Army of Darkness (1993) Body Snatchers (1993) Alien 3 (1992) Heavy Metal (1981) Alien (1979) Night of the Living Dead (1968) Blob, The (1958) End of explanation conn.rollback() cursor = conn.cursor() statement = "SELECT count(*) FROM uitem WHERE musical=1 OR childrens=1;" cursor.execute(statement) for row in cursor: print(row[0]) Explanation: Problem set 2: Aggregation, GROUP BY and HAVING In the cell below, fill in the string assigned to the statement variable with a SQL query that returns the number of movies that are either musicals or children's movies (columns musical and childrens respectively). Hint: use the count(*) aggregate. Expected output: 157 End of explanation conn.rollback() cursor = conn.cursor() statement = "SELECT occupation, count(occupation) FROM uuser GROUP BY occupation HAVING count(occupation) > 50;" cursor.execute(statement) for row in cursor: print(row[0], row[1]) Explanation: Nicely done. Now, in the cell below, fill in the indicated string with a SQL statement that returns all occupations, along with their count, from the uuser table that have more than fifty users listed for that occupation. (I.e., the occupation librarian is listed for 51 users, so it should be included in these results. There are only 12 lawyers, so lawyer should not be included in the result.) Expected output: administrator 79 programmer 66 librarian 51 student 196 other 105 engineer 67 educator 95 Hint: use GROUP BY and HAVING. (If you're stuck, try writing the query without the HAVING first.) End of explanation conn.rollback() cursor = conn.cursor() #statement = "SELECT movie_id from uitem limit 5;" statement = "SELECT DISTINCT(uitem.movie_title), item_id, rating FROM udata JOIN uitem on item_id = movie_id WHERE documentary=1 AND uitem.release_date < '1992-01-01' AND udata.rating = 5;" cursor.execute(statement) for row in cursor: print(row[0]) Explanation: TA-COMMENT This looks great! 
If you pass in the following string as statement, you get the exact same order as Allison: 'SELECT occupation, count(*) FROM uuser GROUP BY occupation HAVING count(occupation) > 50;' Problem set 3: Joining tables In the cell below, fill in the indicated string with a query that finds the titles of movies in the Documentary genre released before 1992 that received a rating of 5 from any user. Expected output: Madonna: Truth or Dare (1991) Koyaanisqatsi (1983) Paris Is Burning (1990) Thin Blue Line, The (1988) Hints: JOIN the udata and uitem tables. Use DISTINCT() to get a list of unique movie titles (no title should be listed more than once). The SQL expression to include in order to find movies released before 1992 is uitem.release_date < '1992-01-01'. End of explanation
conn.rollback()
cursor = conn.cursor()
statement = "SELECT uitem.movie_title, avg(rating) FROM udata JOIN uitem on item_id = movie_id WHERE horror=1 GROUP BY uitem.movie_title ORDER BY avg(udata.rating) limit 10;"
cursor.execute(statement)
for row in cursor:
    print(row[0], "%0.2f" % row[1])
Explanation: TA-COMMENT Alternate SQL query: "SELECT DISTINCT(uitem.movie_title) FROM uitem JOIN udata ON uitem.movie_id = udata.item_id WHERE uitem.documentary = 1 AND uitem.release_date < '1992-01-01' AND rating = '5';" You included several other items to the immediate right of SELECT -- however, note that those items aren't actually being displayed in your output! That's an important hint that those parts of your query should not be included. So, in your query, you should not have included item_id and rating immediately to the right of SELECT -- they are causing your output to be ordered differently. Problem set 4: Joins and aggregations... together at last This one's tough, so prepare yourself. Go get a cup of coffee. Stretch a little bit. Deep breath. There you go. In the cell below, fill in the indicated string with a query that produces a list of the ten lowest rated movies in the Horror genre. For the purposes of this problem, take "lowest rated" to mean "has the lowest average rating." The query should display the titles of the movies, not their ID number. (So you'll have to use a JOIN.) Expected output: Amityville 1992: It's About Time (1992) 1.00 Beyond Bedlam (1993) 1.00 Amityville: Dollhouse (1996) 1.00 Amityville: A New Generation (1993) 1.00 Amityville 3-D (1983) 1.17 Castle Freak (1995) 1.25 Amityville Curse, The (1990) 1.25 Children of the Corn: The Gathering (1996) 1.32 Machine, The (1994) 1.50 Body Parts (1991) 1.62 End of explanation
conn.rollback()
cursor = conn.cursor()
statement = "SELECT uitem.movie_title, avg(rating) FROM udata JOIN uitem on item_id = movie_id WHERE horror=1 GROUP BY uitem.movie_title HAVING count(rating) > 10 ORDER BY avg(udata.rating) limit 10;"
cursor.execute(statement)
for row in cursor:
    print(row[0], "%0.2f" % row[1])
Explanation: BONUS: Extend the query above so that it only includes horror movies that have ten or more ratings. Fill in the query as indicated below. Expected output: Children of the Corn: The Gathering (1996) 1.32 Body Parts (1991) 1.62 Amityville II: The Possession (1982) 1.64 Jaws 3-D (1983) 1.94 Hellraiser: Bloodline (1996) 2.00 Tales from the Hood (1995) 2.04 Audrey Rose (1977) 2.17 Addiction, The (1995) 2.18 Halloween: The Curse of Michael Myers (1995) 2.20 Phantoms (1998) 2.23 End of explanation
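As an ungraded follow-on sketch (not part of the original assignment), the same JOIN/GROUP BY/HAVING pattern also answers the opposite question - the ten highest-rated horror movies with at least ten ratings - by flipping the sort order. It reuses the connection and cursor style from the cells above and only the columns listed in the table schemas.
# Extra, ungraded variation on the bonus query: highest-rated horror movies with >= 10 ratings
conn.rollback()
cursor = conn.cursor()
statement = """
    SELECT uitem.movie_title, avg(udata.rating)
    FROM udata
    JOIN uitem ON udata.item_id = uitem.movie_id
    WHERE uitem.horror = 1
    GROUP BY uitem.movie_title
    HAVING count(udata.rating) >= 10
    ORDER BY avg(udata.rating) DESC
    LIMIT 10;
"""
cursor.execute(statement)
for row in cursor:
    print(row[0], "%0.2f" % row[1])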
5,922
Given the following text description, write Python code to implement the functionality described below step by step Description: Active Subspaces Example Function Step1: First we draw M samples randomly from the input space. Step2: Now we normalize the inputs, linearly scaling each to the interval $[-1, 1]$. Step3: Compute gradients to approximate the matrix on which the active subspace is based. Step4: Now we use our data to compute the active subspace. Step5: We use plotting utilities to plot eigenvalues, subspace error, components of the first 2 eigenvectors, and 1D and 2D sufficient summary plots (plots of function values vs. active variable values).
Python Code: import active_subspaces as ac import numpy as np %matplotlib inline # The piston_functions.py file contains two functions: the piston function (piston(xx)) # and its gradient (piston_grad(xx)). Each takes an Mx7 matrix (M is the number of data # points) with rows being normalized inputs; piston returns a column vector of function # values at each row of the input and piston_grad returns a matrix whose ith row is the # gradient of piston at the ith row of xx with respect to the normalized inputs from piston_functions import * Explanation: Active Subspaces Example Function: Piston Cycle Time Ryan Howard, CO School of Mines, &#114;&#121;&#104;&#111;&#119;&#97;&#114;&#100;&#64;&#109;&#105;&#110;&#101;&#115;&#46;&#101;&#100;&#117; Paul Constantine, CO School of Mines, &#112;&#99;&#111;&#110;&#115;&#116;&#97;&#110;&#64;&#109;&#105;&#110;&#101;&#115;&#46;&#101;&#100;&#117; <br> In this tutorial, we'll be applying active subspaces to the function $$ C = 2\pi\sqrt{\frac{M}{k+S^2\frac{P_0V_0}{T_0}\frac{T_a}{V^2}}}, $$ where $$ V = \frac{S}{2k}\left(\sqrt{A^2+4k\frac{P_0V_0}{T_0}T_a}-A\right),\ A=P_0S+19.62M-\frac{kV_0}{S}, $$as seen on http://www.sfu.ca/~ssurjano/piston.html. This function models the cycle time of a piston within a cylinder, and its inputs and their distributions are described in the table below. Variable|Symbol|Distribution (U(min, max)) :-----|:-----:|:----- piston Weight|$M$|U(30, 60) piston Surface Area|$S$|U(.005, .02) initial Gas Volume|$V_0$|U(.002, .01) spring Coefficient|$k$|U(1000, 5000) atmospheric Pressure|$P_0$|U(90000, 110000) ambient Temperature|$T_a$|U(290, 296) filling Gas Temperature|$T_0$|U(340, 360) End of explanation M = 1000 #This is the number of data points to use #Sample the input space according to the distributions in the table above M0 = np.random.uniform(30, 60, (M, 1)) S = np.random.uniform(.005, .02, (M, 1)) V0 = np.random.uniform(.002, .01, (M, 1)) k = np.random.uniform(1000, 5000, (M, 1)) P0 = np.random.uniform(90000, 110000, (M, 1)) Ta = np.random.uniform(290, 296, (M, 1)) T0 = np.random.uniform(340, 360, (M, 1)) #the input matrix x = np.hstack((M0, S, V0, k, P0, Ta, T0)) Explanation: First we draw M samples randomly from the input space. End of explanation #Upper and lower limits for inputs xl = np.array([30, .005, .002, 1000, 90000, 290, 340]) xu = np.array([60, .02, .01, 5000, 110000, 296, 360]) #XX = normalized input matrix XX = ac.utils.misc.BoundedNormalizer(xl, xu).normalize(x) Explanation: Now we normalize the inputs, linearly scaling each to the interval $[-1, 1]$. End of explanation #output values (f) and gradients (df) f = piston(XX) df = piston_grad(XX) Explanation: Compute gradients to approximate the matrix on which the active subspace is based. End of explanation #Set up our subspace using the gradient samples ss = ac.subspaces.Subspaces() ss.compute(df=df, nboot=500) Explanation: Now we use our data to compute the active subspace. 
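As an optional sanity check (my own addition, not from the original example), the estimated eigenvalue spectrum itself already hints at how many active directions matter: a large gap after the k-th eigenvalue suggests a k-dimensional active subspace. This assumes ss.eigenvals is the array of estimated eigenvalues returned by the compute call above.
import numpy as np

# Inspect normalized eigenvalues and successive eigenvalue ratios (spectral gaps)
evals = np.squeeze(ss.eigenvals)
print('normalized eigenvalues:', evals / evals.sum())
print('successive eigenvalue ratios:', evals[:-1] / evals[1:])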
End of explanation #Component labels in_labels = ['M', 'S', 'V0', 'k', 'P0', 'Ta', 'T0'] #plot eigenvalues, subspace errors ac.utils.plotters.eigenvalues(ss.eigenvals, ss.e_br) ac.utils.plotters.subspace_errors(ss.sub_br) #manually make the subspace 2D for the eigenvector and 2D summary plots ss.partition(2) #Compute the active variable values y = XX.dot(ss.W1) #Plot eigenvectors, sufficient summaries ac.utils.plotters.eigenvectors(ss.W1, in_labels=in_labels) ac.utils.plotters.sufficient_summary(y, f) Explanation: We use plotting utilities to plot eigenvalues, subspace error, components of the first 2 eigenvectors, and 1D and 2D sufficient summary plots (plots of function values vs. active variable values). End of explanation
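A rough illustration of what the active subspace buys you (added here, not part of the original tutorial): once the active variables y are in hand, a low-order polynomial in the first active variable often serves as a cheap surrogate for the full 7-dimensional piston model. The quadratic degree below is an arbitrary choice.
import numpy as np

# Fit a quadratic surrogate f ~ p(y1) in the first active variable and check its error
y1 = y[:, 0]
coeffs = np.polyfit(y1, f.ravel(), deg=2)
f_hat = np.polyval(coeffs, y1)
rel_err = np.linalg.norm(f_hat - f.ravel()) / np.linalg.norm(f.ravel())
print('relative error of the 1D quadratic surrogate: {:.3f}'.format(rel_err))
If the sufficient summary plot looks close to a single curve, this kind of one-dimensional surrogate captures most of the variation in the cycle time.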
5,923
Given the following text description, write Python code to implement the functionality described below step by step Description: The Inference Button Step1: Generating data Create some toy data to play around with and scatter-plot it. Essentially we are creating a regression line defined by intercept and slope and add data points by sampling from a Normal with the mean set to the regression line. Step2: Estimating the model Lets fit a Bayesian linear regression model to this data. As you can see, model specifications in PyMC3 are wrapped in a with statement. Here we use the awesome new NUTS sampler (our Inference Button) to draw 2000 posterior samples. Step3: This should be fairly readable for people who know probabilistic programming. However, would my non-statistican friend know what all this does? Moreover, recall that this is an extremely simple model that would be one line in R. Having multiple, potentially transformed regressors, interaction terms or link-functions would also make this much more complex and error prone. The new glm() function instead takes a Patsy linear model specifier from which it creates a design matrix. glm() then adds random variables for each of the coefficients and an appopriate likelihood to the model. Step4: Much shorter, but this code does the exact same thing as the above model specification (you can change priors and everything else too if we wanted). glm() parses the Patsy model string, adds random variables for each regressor (Intercept and slope x in this case), adds a likelihood (by default, a Normal is chosen), and all other variables (sigma). Finally, glm() then initializes the parameters to a good starting point by estimating a frequentist linear model using statsmodels. If you are not familiar with R's syntax, 'y ~ x' specifies that we have an output variable y that we want to estimate as a linear function of x. Analyzing the model Bayesian inference does not give us only one best fitting line (as maximum likelihood does) but rather a whole posterior distribution of likely parameters. Lets plot the posterior distribution of our parameters and the individual samples we drew. Step5: The left side shows our marginal posterior -- for each parameter value on the x-axis we get a probability on the y-axis that tells us how likely that parameter value is. There are a couple of things to see here. The first is that our sampling chains for the individual parameters (left side) seem well converged and stationary (there are no large drifts or other odd patterns). Secondly, the maximum posterior estimate of each variable (the peak in the left side distributions) is very close to the true parameters used to generate the data (x is the regression coefficient and sigma is the standard deviation of our normal). In the GLM we thus do not only have one best fitting regression line, but many. A posterior predictive plot takes multiple samples from the posterior (intercepts and slopes) and plots a regression line for each of them. Here we are using the glm.plot_posterior_predictive() convenience function for this.
Python Code: %matplotlib inline from pymc3 import * import numpy as np import matplotlib.pyplot as plt Explanation: The Inference Button: Bayesian GLMs made easy with PyMC3 Author: Thomas Wiecki This tutorial appeared as a post in a small series on Bayesian GLMs on my blog: The Inference Button: Bayesian GLMs made easy with PyMC3 This world is far from Normal(ly distributed): Robust Regression in PyMC3 The Best Of Both Worlds: Hierarchical Linear Regression in PyMC3 In this blog post I will talk about: How the Bayesian Revolution in many scientific disciplines is hindered by poor usability of current Probabilistic Programming languages. A gentle introduction to Bayesian linear regression and how it differs from the frequentist approach. A preview of PyMC3 (currently in alpha) and its new GLM submodule I wrote to allow creation and estimation of Bayesian GLMs as easy as frequentist GLMs in R. Ready? Lets get started! There is a huge paradigm shift underway in many scientific disciplines: The Bayesian Revolution. While the theoretical benefits of Bayesian over Frequentist stats have been discussed at length elsewhere (see Further Reading below), there is a major obstacle that hinders wider adoption -- usability (this is one of the reasons DARPA wrote out a huge grant to improve Probabilistic Programming). This is mildly ironic because the beauty of Bayesian statistics is their generality. Frequentist stats have a bazillion different tests for every different scenario. In Bayesian land you define your model exactly as you think is appropriate and hit the Inference Button(TM) (i.e. running the magical MCMC sampling algorithm). Yet when I ask my colleagues why they use frequentist stats (even though they would like to use Bayesian stats) the answer is that software packages like SPSS or R make it very easy to run all those individuals tests with a single command (and more often then not, they don't know the exact model and inference method being used). While there are great Bayesian software packages like JAGS, BUGS, Stan and PyMC, they are written for Bayesians statisticians who know very well what model they want to build. Unfortunately, "the vast majority of statistical analysis is not performed by statisticians" -- so what we really need are tools for scientists and not for statisticians. In the interest of putting my code where my mouth is I wrote a submodule for the upcoming PyMC3 that makes construction of Bayesian Generalized Linear Models (GLMs) as easy as Frequentist ones in R. Linear Regression While future blog posts will explore more complex models, I will start here with the simplest GLM -- linear regression. In general, frequentists think about Linear Regression as follows: $$ Y = X\beta + \epsilon $$ where $Y$ is the output we want to predict (or dependent variable), $X$ is our predictor (or independent variable), and $\beta$ are the coefficients (or parameters) of the model we want to estimate. $\epsilon$ is an error term which is assumed to be normally distributed. We can then use Ordinary Least Squares or Maximum Likelihood to find the best fitting $\beta$. Probabilistic Reformulation Bayesians take a probabilistic view of the world and express this model in terms of probability distributions. Our above linear regression can be rewritten to yield: $$ Y \sim \mathcal{N}(X \beta, \sigma^2) $$ In words, we view $Y$ as a random variable (or random vector) of which each element (data point) is distributed according to a Normal distribution. 
The mean of this normal distribution is provided by our linear predictor with variance $\sigma^2$. While this is essentially the same model, there are two critical advantages of Bayesian estimation: Priors: We can quantify any prior knowledge we might have by placing priors on the paramters. For example, if we think that $\sigma$ is likely to be small we would choose a prior with more probability mass on low values. Quantifying uncertainty: We do not get a single estimate of $\beta$ as above but instead a complete posterior distribution about how likely different values of $\beta$ are. For example, with few data points our uncertainty in $\beta$ will be very high and we'd be getting very wide posteriors. Bayesian GLMs in PyMC3 With the new GLM module in PyMC3 it is very easy to build this and much more complex models. First, lets import the required modules. End of explanation size = 200 true_intercept = 1 true_slope = 2 x = np.linspace(0, 1, size) # y = a + b*x true_regression_line = true_intercept + true_slope * x # add noise y = true_regression_line + np.random.normal(scale=.5, size=size) data = dict(x=x, y=y) fig = plt.figure(figsize=(7, 7)) ax = fig.add_subplot(111, xlabel='x', ylabel='y', title='Generated data and underlying model') ax.plot(x, y, 'x', label='sampled data') ax.plot(x, true_regression_line, label='true regression line', lw=2.) plt.legend(loc=0); Explanation: Generating data Create some toy data to play around with and scatter-plot it. Essentially we are creating a regression line defined by intercept and slope and add data points by sampling from a Normal with the mean set to the regression line. End of explanation with Model() as model: # model specifications in PyMC3 are wrapped in a with-statement # Define priors sigma = HalfCauchy('sigma', beta=10, testval=1.) intercept = Normal('Intercept', 0, sd=20) x_coeff = Normal('x', 0, sd=20) # Define likelihood likelihood = Normal('y', mu=intercept + x_coeff * x, sd=sigma, observed=y) # Inference! start = find_MAP() # Find starting value by optimization step = NUTS(scaling=start) # Instantiate MCMC sampling algorithm trace = sample(2000, step, start=start, progressbar=False) # draw 2000 posterior samples using NUTS sampling Explanation: Estimating the model Lets fit a Bayesian linear regression model to this data. As you can see, model specifications in PyMC3 are wrapped in a with statement. Here we use the awesome new NUTS sampler (our Inference Button) to draw 2000 posterior samples. End of explanation with Model() as model: # specify glm and pass in data. The resulting linear model, its likelihood and # and all its parameters are automatically added to our model. glm.glm('y ~ x', data) start = find_MAP() step = NUTS(scaling=start) # Instantiate MCMC sampling algorithm trace = sample(2000, step, progressbar=False) # draw 2000 posterior samples using NUTS sampling Explanation: This should be fairly readable for people who know probabilistic programming. However, would my non-statistican friend know what all this does? Moreover, recall that this is an extremely simple model that would be one line in R. Having multiple, potentially transformed regressors, interaction terms or link-functions would also make this much more complex and error prone. The new glm() function instead takes a Patsy linear model specifier from which it creates a design matrix. glm() then adds random variables for each of the coefficients and an appopriate likelihood to the model. 
End of explanation plt.figure(figsize=(7, 7)) traceplot(trace[100:]) plt.tight_layout(); Explanation: Much shorter, but this code does the exact same thing as the above model specification (you can change priors and everything else too if we wanted). glm() parses the Patsy model string, adds random variables for each regressor (Intercept and slope x in this case), adds a likelihood (by default, a Normal is chosen), and all other variables (sigma). Finally, glm() then initializes the parameters to a good starting point by estimating a frequentist linear model using statsmodels. If you are not familiar with R's syntax, 'y ~ x' specifies that we have an output variable y that we want to estimate as a linear function of x. Analyzing the model Bayesian inference does not give us only one best fitting line (as maximum likelihood does) but rather a whole posterior distribution of likely parameters. Lets plot the posterior distribution of our parameters and the individual samples we drew. End of explanation plt.figure(figsize=(7, 7)) plt.plot(x, y, 'x', label='data') glm.plot_posterior_predictive(trace, samples=100, label='posterior predictive regression lines') plt.plot(x, true_regression_line, label='true regression line', lw=3., c='y') plt.title('Posterior predictive regression lines') plt.legend(loc=0) plt.xlabel('x') plt.ylabel('y'); Explanation: The left side shows our marginal posterior -- for each parameter value on the x-axis we get a probability on the y-axis that tells us how likely that parameter value is. There are a couple of things to see here. The first is that our sampling chains for the individual parameters (left side) seem well converged and stationary (there are no large drifts or other odd patterns). Secondly, the maximum posterior estimate of each variable (the peak in the left side distributions) is very close to the true parameters used to generate the data (x is the regression coefficient and sigma is the standard deviation of our normal). In the GLM we thus do not only have one best fitting regression line, but many. A posterior predictive plot takes multiple samples from the posterior (intercepts and slopes) and plots a regression line for each of them. Here we are using the glm.plot_posterior_predictive() convenience function for this. End of explanation
5,924
Given the following text description, write Python code to implement the functionality described below step by step Description: pvsystem tutorial This tutorial explores the pvlib.pvsystem module. The module has functions for importing PV module and inverter data and functions for modeling module and inverter performance. systemdef Angle of Incidence Modifiers Sandia Cell Temp correction Sandia Inverter Model Sandia Array Performance Model SAPM IV curves DeSoto Model Single Diode Model This tutorial has been tested against the following package versions Step1: systemdef Step2: pvlib can import TMY2 and TMY3 data. Here, we import the example files. Step3: Angle of Incidence Modifiers Step4: Sandia Cell Temp correction PV system efficiency can vary by up to 0.5% per degree C, so it's important to accurately model cell and module temperature. The sapm_celltemp function uses plane of array irradiance, ambient temperature, wind speed, and module and racking type to calculate cell and module temperatures. From King et. al. (2004) Step5: Cell and module temperature as a function of wind speed. Step6: Cell and module temperature as a function of ambient temperature. Step7: Cell and module temperature as a function of incident irradiance. Step8: Cell and module temperature for different module and racking types. Step9: snlinverter Step10: Need to put more effort into describing this function. DC model This example shows use of the Desoto module performance model and the Sandia Array Performance Model (SAPM). Both models reuire a set of parameter values which can be read from SAM databases for modules. Foe the Desoto model, the database content is returned by supplying the keyword cecmod to pvsystem.retrievesam. Step11: The Sandia module database is read by the same function with the keyword SandiaMod. Step12: Generate some irradiance data for modeling. Step14: Now we can run the module parameters and the irradiance data through the SAPM functions. Step15: For comparison, here's the SAPM for a sunny, windy, cold version of the same day. Step16: SAPM IV curves The IV curve function only calculates the 5 points of the SAPM. We will add arbitrary points in a future release, but for now we just interpolate between the 5 SAPM points. Step17: desoto The same weather data run through the Desoto model. Step18: Single diode model
Python Code: # built-in python modules import os import inspect import datetime # scientific python add-ons import numpy as np import pandas as pd # plotting stuff # first line makes the plots appear in the notebook %matplotlib inline import matplotlib.pyplot as plt # seaborn makes your plots look better try: import seaborn as sns sns.set(rc={"figure.figsize": (12, 6)}) except ImportError: print('We suggest you install seaborn using conda or pip and rerun this cell') # finally, we import the pvlib library import pvlib Explanation: pvsystem tutorial This tutorial explores the pvlib.pvsystem module. The module has functions for importing PV module and inverter data and functions for modeling module and inverter performance. systemdef Angle of Incidence Modifiers Sandia Cell Temp correction Sandia Inverter Model Sandia Array Performance Model SAPM IV curves DeSoto Model Single Diode Model This tutorial has been tested against the following package versions: * pvlib 0.4.5 * Python 3.6.2 * IPython 6.0 * Pandas 0.20.1 It should work with other Python and Pandas versions. It requires pvlib >= 0.4.0 and IPython >= 3.0. Authors: * Will Holmgren (@wholmgren), University of Arizona. 2015, March 2016, November 2016, May 2017. End of explanation import pvlib from pvlib import pvsystem Explanation: systemdef End of explanation pvlib_abspath = os.path.dirname(os.path.abspath(inspect.getfile(pvlib))) tmy3_data, tmy3_metadata = pvlib.tmy.readtmy3(os.path.join(pvlib_abspath, 'data', '703165TY.csv')) tmy2_data, tmy2_metadata = pvlib.tmy.readtmy2(os.path.join(pvlib_abspath, 'data', '12839.tm2')) pvlib.pvsystem.systemdef(tmy3_metadata, 0, 0, .1, 5, 5) pvlib.pvsystem.systemdef(tmy2_metadata, 0, 0, .1, 5, 5) Explanation: pvlib can import TMY2 and TMY3 data. Here, we import the example files. End of explanation angles = np.linspace(-180,180,3601) ashraeiam = pd.Series(pvsystem.ashraeiam(angles, .05), index=angles) ashraeiam.plot() plt.ylabel('ASHRAE modifier') plt.xlabel('input angle (deg)') angles = np.linspace(-180,180,3601) physicaliam = pd.Series(pvsystem.physicaliam(angles), index=angles) physicaliam.plot() plt.ylabel('physical modifier') plt.xlabel('input index') plt.figure() ashraeiam.plot(label='ASHRAE') physicaliam.plot(label='physical') plt.ylabel('modifier') plt.xlabel('input angle (deg)') plt.legend() Explanation: Angle of Incidence Modifiers End of explanation # scalar inputs pvsystem.sapm_celltemp(900, 5, 20) # irrad, wind, temp # vector inputs times = pd.DatetimeIndex(start='2015-01-01', end='2015-01-02', freq='12H') temps = pd.Series([0, 10, 5], index=times) irrads = pd.Series([0, 500, 0], index=times) winds = pd.Series([10, 5, 0], index=times) pvtemps = pvsystem.sapm_celltemp(irrads, winds, temps) pvtemps.plot() Explanation: Sandia Cell Temp correction PV system efficiency can vary by up to 0.5% per degree C, so it's important to accurately model cell and module temperature. The sapm_celltemp function uses plane of array irradiance, ambient temperature, wind speed, and module and racking type to calculate cell and module temperatures. From King et. al. (2004): $$T_m = E e^{a+b*WS} + T_a$$ $$T_c = T_m + \frac{E}{E_0} \Delta T$$ The $a$, $b$, and $\Delta T$ parameters depend on the module and racking type. The default parameter set is open_rack_cell_glassback. sapm_celltemp works with either scalar or vector inputs, but always returns a pandas DataFrame. 
End of explanation wind = np.linspace(0,20,21) temps = pd.DataFrame(pvsystem.sapm_celltemp(900, wind, 20), index=wind) temps.plot() plt.legend() plt.xlabel('wind speed (m/s)') plt.ylabel('temperature (deg C)') Explanation: Cell and module temperature as a function of wind speed. End of explanation atemp = np.linspace(-20,50,71) temps = pvsystem.sapm_celltemp(900, 2, atemp).set_index(atemp) temps.plot() plt.legend() plt.xlabel('ambient temperature (deg C)') plt.ylabel('temperature (deg C)') Explanation: Cell and module temperature as a function of ambient temperature. End of explanation irrad = np.linspace(0,1000,101) temps = pvsystem.sapm_celltemp(irrad, 2, 20).set_index(irrad) temps.plot() plt.legend() plt.xlabel('incident irradiance (W/m**2)') plt.ylabel('temperature (deg C)') Explanation: Cell and module temperature as a function of incident irradiance. End of explanation models = ['open_rack_cell_glassback', 'roof_mount_cell_glassback', 'open_rack_cell_polymerback', 'insulated_back_polymerback', 'open_rack_polymer_thinfilm_steel', '22x_concentrator_tracker'] temps = pd.DataFrame(index=['temp_cell','temp_module']) for model in models: temps[model] = pd.Series(pvsystem.sapm_celltemp(1000, 5, 20, model=model).iloc[0]) temps.T.plot(kind='bar') # try removing the transpose operation and replotting plt.legend() plt.ylabel('temperature (deg C)') Explanation: Cell and module temperature for different module and racking types. End of explanation inverters = pvsystem.retrieve_sam('sandiainverter') inverters vdcs = pd.Series(np.linspace(0,50,51)) idcs = pd.Series(np.linspace(0,11,110)) pdcs = idcs * vdcs pacs = pvsystem.snlinverter(vdcs, pdcs, inverters['ABB__MICRO_0_25_I_OUTD_US_208_208V__CEC_2014_']) #pacs.plot() plt.plot(pacs, pdcs) plt.ylabel('ac power') plt.xlabel('dc power') Explanation: snlinverter End of explanation cec_modules = pvsystem.retrieve_sam('cecmod') cec_modules cecmodule = cec_modules.Example_Module cecmodule Explanation: Need to put more effort into describing this function. DC model This example shows use of the Desoto module performance model and the Sandia Array Performance Model (SAPM). Both models reuire a set of parameter values which can be read from SAM databases for modules. Foe the Desoto model, the database content is returned by supplying the keyword cecmod to pvsystem.retrievesam. End of explanation sandia_modules = pvsystem.retrieve_sam(name='SandiaMod') sandia_modules sandia_module = sandia_modules.Canadian_Solar_CS5P_220M___2009_ sandia_module Explanation: The Sandia module database is read by the same function with the keyword SandiaMod. 
End of explanation from pvlib import clearsky from pvlib import irradiance from pvlib import atmosphere from pvlib.location import Location tus = Location(32.2, -111, 'US/Arizona', 700, 'Tucson') times_loc = pd.date_range(start=datetime.datetime(2014,4,1), end=datetime.datetime(2014,4,2), freq='30s', tz=tus.tz) solpos = pvlib.solarposition.get_solarposition(times_loc, tus.latitude, tus.longitude) dni_extra = pvlib.irradiance.extraradiation(times_loc) airmass = pvlib.atmosphere.relativeairmass(solpos['apparent_zenith']) pressure = pvlib.atmosphere.alt2pres(tus.altitude) am_abs = pvlib.atmosphere.absoluteairmass(airmass, pressure) cs = tus.get_clearsky(times_loc) surface_tilt = tus.latitude surface_azimuth = 180 # pointing south aoi = pvlib.irradiance.aoi(surface_tilt, surface_azimuth, solpos['apparent_zenith'], solpos['azimuth']) total_irrad = pvlib.irradiance.total_irrad(surface_tilt, surface_azimuth, solpos['apparent_zenith'], solpos['azimuth'], cs['dni'], cs['ghi'], cs['dhi'], dni_extra=dni_extra, model='haydavies') Explanation: Generate some irradiance data for modeling. End of explanation module = sandia_module # a sunny, calm, and hot day in the desert temps = pvsystem.sapm_celltemp(total_irrad['poa_global'], 0, 30) effective_irradiance = pvlib.pvsystem.sapm_effective_irradiance( total_irrad['poa_direct'], total_irrad['poa_diffuse'], am_abs, aoi, module) sapm_1 = pvlib.pvsystem.sapm(effective_irradiance, temps['temp_cell'], module) sapm_1.plot() def plot_sapm(sapm_data, effective_irradiance): Makes a nice figure with the SAPM data. Parameters ---------- sapm_data : DataFrame The output of ``pvsystem.sapm`` fig, axes = plt.subplots(2, 3, figsize=(16,10), sharex=False, sharey=False, squeeze=False) plt.subplots_adjust(wspace=.2, hspace=.3) ax = axes[0,0] sapm_data.filter(like='i_').plot(ax=ax) ax.set_ylabel('Current (A)') ax = axes[0,1] sapm_data.filter(like='v_').plot(ax=ax) ax.set_ylabel('Voltage (V)') ax = axes[0,2] sapm_data.filter(like='p_').plot(ax=ax) ax.set_ylabel('Power (W)') ax = axes[1,0] [ax.plot(effective_irradiance, current, label=name) for name, current in sapm_data.filter(like='i_').iteritems()] ax.set_ylabel('Current (A)') ax.set_xlabel('Effective Irradiance') ax.legend(loc=2) ax = axes[1,1] [ax.plot(effective_irradiance, voltage, label=name) for name, voltage in sapm_data.filter(like='v_').iteritems()] ax.set_ylabel('Voltage (V)') ax.set_xlabel('Effective Irradiance') ax.legend(loc=4) ax = axes[1,2] ax.plot(effective_irradiance, sapm_data['p_mp'], label='p_mp') ax.set_ylabel('Power (W)') ax.set_xlabel('Effective Irradiance') ax.legend(loc=2) # needed to show the time ticks for ax in axes.flatten(): for tk in ax.get_xticklabels(): tk.set_visible(True) plot_sapm(sapm_1, effective_irradiance) Explanation: Now we can run the module parameters and the irradiance data through the SAPM functions. End of explanation temps = pvsystem.sapm_celltemp(total_irrad['poa_global'], 10, 5) sapm_2 = pvlib.pvsystem.sapm(effective_irradiance, temps['temp_cell'], module) plot_sapm(sapm_2, effective_irradiance) sapm_1['p_mp'].plot(label='30 C, 0 m/s') sapm_2['p_mp'].plot(label=' 5 C, 10 m/s') plt.legend() plt.ylabel('Pmp') plt.title('Comparison of a hot, calm day and a cold, windy day') Explanation: For comparison, here's the SAPM for a sunny, windy, cold version of the same day. 
End of explanation import warnings warnings.simplefilter('ignore', np.RankWarning) def sapm_to_ivframe(sapm_row): pnt = sapm_row ivframe = {'Isc': (pnt['i_sc'], 0), 'Pmp': (pnt['i_mp'], pnt['v_mp']), 'Ix': (pnt['i_x'], 0.5*pnt['v_oc']), 'Ixx': (pnt['i_xx'], 0.5*(pnt['v_oc']+pnt['v_mp'])), 'Voc': (0, pnt['v_oc'])} ivframe = pd.DataFrame(ivframe, index=['current', 'voltage']).T ivframe = ivframe.sort_values(by='voltage') return ivframe def ivframe_to_ivcurve(ivframe, points=100): ivfit_coefs = np.polyfit(ivframe['voltage'], ivframe['current'], 30) fit_voltages = np.linspace(0, ivframe.loc['Voc', 'voltage'], points) fit_currents = np.polyval(ivfit_coefs, fit_voltages) return fit_voltages, fit_currents times = ['2014-04-01 07:00:00', '2014-04-01 08:00:00', '2014-04-01 09:00:00', '2014-04-01 10:00:00', '2014-04-01 11:00:00', '2014-04-01 12:00:00'] times.reverse() fig, ax = plt.subplots(1, 1, figsize=(12,8)) for time in times: ivframe = sapm_to_ivframe(sapm_1.loc[time]) fit_voltages, fit_currents = ivframe_to_ivcurve(ivframe) ax.plot(fit_voltages, fit_currents, label=time) ax.plot(ivframe['voltage'], ivframe['current'], 'ko') ax.set_xlabel('Voltage (V)') ax.set_ylabel('Current (A)') ax.set_ylim(0, None) ax.set_title('IV curves at multiple times') ax.legend() Explanation: SAPM IV curves The IV curve function only calculates the 5 points of the SAPM. We will add arbitrary points in a future release, but for now we just interpolate between the 5 SAPM points. End of explanation photocurrent, saturation_current, resistance_series, resistance_shunt, nNsVth = ( pvsystem.calcparams_desoto(total_irrad['poa_global'], temp_cell=temps['temp_cell'], alpha_sc=cecmodule['alpha_sc'], a_ref=cecmodule['a_ref'], I_L_ref=cecmodule['I_L_ref'], I_o_ref=cecmodule['I_o_ref'], R_sh_ref=cecmodule['R_sh_ref'], R_s=cecmodule['R_s']) ) photocurrent.plot() plt.ylabel('Light current I_L (A)') saturation_current.plot() plt.ylabel('Saturation current I_0 (A)') resistance_series resistance_shunt.plot() plt.ylabel('Shunt resistance (ohms)') plt.ylim(0,100) nNsVth.plot() plt.ylabel('nNsVth') Explanation: desoto The same weather data run through the Desoto model. End of explanation single_diode_out = pvsystem.singlediode(photocurrent, saturation_current, resistance_series, resistance_shunt, nNsVth) single_diode_out['i_sc'].plot() single_diode_out['v_oc'].plot() single_diode_out['p_mp'].plot() Explanation: Single diode model End of explanation
5,925
Given the following text description, write Python code to implement the functionality described below step by step Description: Python 2nd step Step1: In the following example, the data of the Object which x refered did not change but x refered new Object. Step2: Basic Data type Plese refer Python3 reference for detail. Boolean Integer Float None String Except above, Python support Complex ( $i Step3: Boolean True and False are reserved word for Boolean value. Not only Boolean but also other type such as Integer,String or List can be used for True/False check of if and while. Step4: Boolean operators and,or,not are the reserved words and used as boolean operators. (same as && || ! of C) These are not bitwise operation but logical operation. Step5: and operation Step6: Membership test The operators in and not in test for membership. x in s evaluates to true if x is a member of s, and false otherwise. x not in s returns the negation of x in s . All built-in sequences and set types support this as well as dictionary. (i.e. String, List, Tupple ... support this) Step7: Integer 0, 100, -10000, 0b1001, 0o664, 0x01a0 etc. The size of integer is Unlimited in Python. (most computer langueage does not support unlimited integer) int() function can be used to convert from float or string to integer. Fractional part (digits below 0) of float value is discared when it is converted to integer. Step8: Float 0.1, 10e3 etc. The result of the operation of integer and float is float. float() function can be used to convert from integer or string to float. Numeric operation +,-,*,/,//,%,** are numerical operators of Python. It's almost same as C, except devide (/ and //), and power (**). / always returns float, // returns integer (which will be converted to float, if one of the argument is float). The behavior of // is similar to C, but not same for negative result. Step9: None This type has a single value. There is a single object with this value. This object is accessed through the built-in name None. It is used to signify the absence of a value in many situations, e.g., it is returned from functions that doesn't explicitly return anything. Its truth value is false. Step10: String String can be made using single quate ('), double quate("), triple single quate or double quate. str() function converts numeric value to string. The biggest difference of String between C and Python is that the String is immutable(not changeable) in Python, and String of Python has its own method. Step12: docstring The first string in the module or function is called docstring. It is strongly recommended to write docstring. Indent of the quatation need to be alligned. docstring is shown on help(). Step13: Concatenation and Repetation + operator returns concatenated(merged) string, * operator returns repeated string. Step14: Indexed reference and Slice Strings can be indexed (subscripted), with the first character having index 0. There is no separate character type; a character is simply a string of size one Step15: Length The built-in function len() returns the length of a string Step16: Method String has many method. See https
Python Code: x = 1 print('x =', x, type(x)) x = 'abc' print('x =', x, type(x)) Explanation: Python 2nd step: Variables and Data type In case of C or other compile langueage, variables need to be declared with data type. In Python, Object have data type, variables just refer Object. Following sequence is valid in Python. End of explanation x = 1 print('x =', x, 'object id:', id(x)) y = x x += 1 print('x =', x, 'object id:', id(x)) print('y =', y, 'object id:', id(y)) Explanation: In the following example, the data of the Object which x refered did not change but x refered new Object. End of explanation def void_vundc(): return None x = void_vundc() print(type(x)) x = 0+1j print(x**2) Explanation: Basic Data type Plese refer Python3 reference for detail. Boolean Integer Float None String Except above, Python support Complex ( $i: i^2 = -1$ ) as well. ( j is used intead of i ) End of explanation x = True print(x, type(x)) x = False print(x, type(x)) x = (1 == 1) print('1==1:', x, type(x)) x = (1 > 2) print('1>2:', x, type(x)) # Evaluate this cell before foloowing cell def check_true(b): if b: print(b,"is treated as true") else: print(b,"is treated as false") check_true(0) check_true(1) check_true(0.0) check_true(0.1) check_true(-2) check_true('') check_true(' ') check_true([]) check_true([0]) check_true(0) check_true(1) check_true(0.0) check_true(0.1) check_true(-2) check_true('') check_true(' ') check_true([]) check_true([0]) Explanation: Boolean True and False are reserved word for Boolean value. Not only Boolean but also other type such as Integer,String or List can be used for True/False check of if and while. End of explanation print (True and True, True and False, False and True, False and False) print (True or True, True or False, False or True, False or False) print (not True, not False) print("\nBitwise operation: 1('0001'b) & 2('0010'b)") x = 0b01 & 0b10 print(x, type(x)) check_true(0b01 & 0b10) print('\nLogical operation: 1 and 2') x = 0 and 2 print(x, type(x)) check_true(0b01 and 0b10) print('\nLogical operation: 1 or 2') check_true(0b01 or 0b10) Explanation: Boolean operators and,or,not are the reserved words and used as boolean operators. (same as && || ! of C) These are not bitwise operation but logical operation. End of explanation x = 'abc' y = 'abc' print("\nResult of 'x is y':", x is y, " Result of 'x == y':", x == y) print('x:', x, 'y:',y, '\nid(x):', id(x), 'id(y):', id(y)) x += 'd' y += 'd' print("\nResult of 'x is y':", x is y, " Result of 'x == y':", x == y) print('x:', x, 'y:',y, '\nid(x):', id(x), 'id(y):', id(y)) Explanation: and operation: 1 and 2 -> 1 is evaluated (true) - 2 is evaluated (true) -> true 0 and 2 -> 0 is evaluated (false) --------------------------> false or operation 1 or 2 -> 1 is evaluated (true) ----------------------------> true 0 or 2 -> 0 is evaluated (false) -> 2 is evaluated (true) -> true Value comparisons Same as C. >, >=, <, <=, ==, != can be used to compare 2 objects. The objects do not need to have the same type (some combination such as Integer and String are not allowed). These operator can be used other than numeric value, such as String. Identity comparisons The operators is and is not test for object identity: x is y is true if and only if x and y are the same object. Object identity is determined using the id() function. x is not y yields the inverse truth value. is operator shall not be used to compare the value of 2 objects. == shall be used instead. 
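A small added illustration, not from the original lesson: the same truthiness rules exercised by check_true above can be read directly off the built-in bool() constructor, which applies the truth testing used by if and while.
# Zero numbers, empty containers and None are falsy; everything else here is truthy.
samples = [0, 1, -2, 0.0, 0.1, '', ' ', [], [0], None]
for value in samples:
    print(repr(value).rjust(6), '->', bool(value))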
End of explanation s = 'abc' # s is a String print('x' in s) print('b' in s) print('ab' in s) print('ac' in s) print() t = [1, 2, 3, 4, [5,6]] # t is a List print('number of elements of t:',len(t)) print(5 in t) print(5 in t[4]) # t[0]: 1, t[1]: 2, ... t[4]: [5,6] print(3 in t) print() u = ('a', 'b', 3, 0.1) # u is a Tupple print(0.1 in u) print('c' not in u) Explanation: Membership test The operators in and not in test for membership. x in s evaluates to true if x is a member of s, and false otherwise. x not in s returns the negation of x in s . All built-in sequences and set types support this as well as dictionary. (i.e. String, List, Tupple ... support this) End of explanation x = 1.9999 print('x:', x, 'int(x):', int(x)) print() print('2^1000 = ', 2 ** 1000) Explanation: Integer 0, 100, -10000, 0b1001, 0o664, 0x01a0 etc. The size of integer is Unlimited in Python. (most computer langueage does not support unlimited integer) int() function can be used to convert from float or string to integer. Fractional part (digits below 0) of float value is discared when it is converted to integer. End of explanation x = 3.1415 x += 0.1 print(x) print(round(x, 6)) x = 1+1 print('1+1:', x, type(x)) x = 1-3 print('1-3:', x, type(x)) x = 4/2 # / returns float print('4/2:', x, type(x)) x = 5//2 # // returns integer if both of arguments are integer print('5//2:', x, type(x)) x = 5//(-2) # Python returns floor(x) for the result of negative division of // print('5/-2:', x, type(x)) x = 5//(-2) print('5//(-2)', x, type(x)) x = 5//2.0 # result is converted to float if one of the auguments is float print('5//2.0:', x, type(x)) x = 2*8 print('2*8:', x, type(x)) x = 5%2 print('5%2 (Modulo):', x, type(x)) x = 2**8 print('2**8 (exponents):', x, type(x)) x = 2**(1/2) print('2**(1/2) (square root):', x, type(x)) Explanation: Float 0.1, 10e3 etc. The result of the operation of integer and float is float. float() function can be used to convert from integer or string to float. Numeric operation +,-,*,/,//,%,** are numerical operators of Python. It's almost same as C, except devide (/ and //), and power (**). / always returns float, // returns integer (which will be converted to float, if one of the argument is float). The behavior of // is similar to C, but not same for negative result. End of explanation x = None print(x == True, x == False) # None is not True or False if x: print ('None is evaluated as True') else: print ('None is evaluated as False') def do_nothing(): return y = do_nothing() print('y:', y, 'id(x):', id(x), 'id(y):', id(y)) # Only 1 object of None exists if y: print('y is treated as True') else: print('y is treated as False') if y == False: print('y == False') else: print('y != False') if y == True: print('y == True') else: print('y != True') Explanation: None This type has a single value. There is a single object with this value. This object is accessed through the built-in name None. It is used to signify the absence of a value in many situations, e.g., it is returned from functions that doesn't explicitly return anything. Its truth value is false. 
End of explanation x = 'string' print(x, type(x)) x = "it's string too" print(x, type(x)) print('double quatation(") can be written without escape in single quatation(\')') # escape sequence \' and \" is also possible x = 'a\'b\"c' for c in x: print (c, end = ' ') print() x = '''Triple quatation is OK too''' print(x, type(x)) # Triple quatation is usually used to make multiple line string x = ''' 1st line, 2nd line, 3rd line ''' print(x, type(x)) Explanation: String String can be made using single quate ('), double quate("), triple single quate or double quate. str() function converts numeric value to string. The biggest difference of String between C and Python is that the String is immutable(not changeable) in Python, and String of Python has its own method. End of explanation #!/usr/bin/python3 # -*- coding: utf-8 -*- ''' This is a module docstring This program displays number from 1 to 10 horizontally ''' def show_num(first,last): first: Start number last: End Number - 1 Return: None for i in range(first,last): print (i, end = ' ') print() # help(module_name) shows the help of the nodule (python script) help(show_num) import math # help(math) help(math.cos) Explanation: docstring The first string in the module or function is called docstring. It is strongly recommended to write docstring. Indent of the quatation need to be alligned. docstring is shown on help(). End of explanation x = 'abc' + 'xyz' print(x, type(x)) x = 3 * '123' print(x, type(x)) Explanation: Concatenation and Repetation + operator returns concatenated(merged) string, * operator returns repeated string. End of explanation # Index word = 'python' print(word[0], word[1], word[2]) print(word[-1], word[-2], word[-6]) # Slice print(word[0:2]) # from word[0] 'p' to BEFORE word[2] => 'py' print(word[2:5]) # from word[2] 't' to BEFORE word[5] => 'tho' print(word[:2]) # from word[0] to BEFORE word[2] print(word[2:]) # from word[2] to the end print(word[:]) # all print(word[::2]) # from top to last, step=2 for c in word[::-1]: # from last to top (all string, step: -1) print(c, end = '') print() print(word[::-1]) Explanation: Indexed reference and Slice Strings can be indexed (subscripted), with the first character having index 0. There is no separate character type; a character is simply a string of size one: In addition to indexing, slicing is also supported. While indexing is used to obtain individual characters, slicing allows you to obtain substring: End of explanation x = 'abc' print(x, '\nlen(x):', len(x)) x = 'Hello world! ' * 10 print(x, '\nlen(x):', len(x)) Explanation: Length The built-in function len() returns the length of a string: End of explanation s = 'a b c' x = s.split() print(x) help(str) s = 'test string' print(s, s.capitalize()) print("Number of 't' in", s, s.count('t')) print("Location of 'str' in", s, s.find('str')) print("Location of 'stx' in", s, s.find('stx')) print(s, 'islower ?', s.islower()) print('Convert to upper case:', s.upper()) x = s.split() print('Spit to List:', x, type(x)) print("{0} + {1} = {2}".format(2, 3, 2+3)) # {0}: first parameter ... # Hands on # find some methods in the help(str) and try them Explanation: Method String has many method. See https://docs.python.org/3.5/library/stdtypes.html#string-methods for detail. (this document is in NFS directory. library.pdf) help(str) shows overview. End of explanation
5,926
Given the following text description, write Python code to implement the functionality described below step by step Description: Weather Forecast with PixieDust This notebook shows how to Step1: 2. Get weather data Find the latitude and longitude of your current location by running this magic javascript cell. Then fill in your Weather Company API credentials to load the weather forecast for where you are. Step2: Wait a few seconds to run the second cell to allow the above geolocation function to run. Step3: Uncomment and run the next cell to have a look what the json data file looks like. Step4: 3. Convert json data to pandas DataFrame Convert the data into a DataFrame with each timestep on a new row. Convert the timestamp into a datetime format and drop the columns that are not needed. See this Cheat sheet for date format conversions. Finally, convert the data type into numeric. Step5: As there seems to be an issue with the pop column (percentage of precipitation), create a new column rain. Step6: 4. Plot data with matplotlib Step7: 5. Create a temperature map for the UK Step8: 6. Plot data with PixieDust https
Python Code: #!pip install --upgrade pixiedust #!pip install --upgrade bokeh import requests import json import pandas as pd import numpy as np from datetime import datetime import time import pixiedust Explanation: Weather Forecast with PixieDust This notebook shows how to: 1. use the Weather Company Data API to get weather forecast json data based on latitude and longitude 2. convert this json data into a pandas DataFrame 4. create a weather chart and map with matplotlib 3. create a weather chart and map with PixieDust Before running the notebook: * Sign up for a free 30-day trial Bluemix account * Launch the Weather Data service in Bluemix and fill in the credentials below. Learn more here * Run this notebook locally or in the Cloud using the IBM Data Science Experience 1. Load and install packages First, uncomment the lines in the below cell and upgrade the pixiedust and bokeh packages. When this is done restart the kernel. You have to do this only once, or when there is an update available. Then import the package needed to run this notebook. End of explanation %%javascript navigator.geolocation.getCurrentPosition(function(position) { console.log(position.coords.latitude, position.coords.longitude); setTimeout(function() { IPython.notebook.kernel.execute('lat="' + position.coords.latitude + '";') IPython.notebook.kernel.execute('lon="' + position.coords.longitude + '";') },5000)}); Explanation: 2. Get weather data Find the latitude and longitude of your current location by running this magic javascript cell. Then fill in your Weather Company API credentials to load the weather forecast for where you are. End of explanation print(lat, lon) # @hidden_cell # Weather company data API credentials username='xxxxxx' password='xxxxxx' line='https://'+username+':'+password+\ '@twcservice.mybluemix.net/api/weather/v1/geocode/'+\ lat+'/'+lon+'/forecast/intraday/10day.json?&units=m' r=requests.get(line) weather = json.loads(r.text) Explanation: Wait a few seconds to run the second cell to allow the above geolocation function to run. End of explanation #print json.dumps(weather, indent=4, sort_keys=True) Explanation: Uncomment and run the next cell to have a look what the json data file looks like. End of explanation df = pd.DataFrame.from_dict(weather['forecasts'][0],orient='index').transpose() for forecast in weather['forecasts'][1:]: df = pd.concat([df, pd.DataFrame.from_dict(forecast,orient='index').transpose()]) df['date'] = df['fcst_valid_local'].apply(lambda x: datetime.strptime(x, '%Y-%m-%dT%H:%M:%S+0100')) df = df.drop(['expire_time_gmt','num','qualifier','qualifier_code'],1) df = df.drop(['fcst_valid','fcst_valid_local','icon_extd','wdir_cardinal'],1) df = df.drop(['subphrase_pt1','subphrase_pt2','subphrase_pt3','class'],1) df = df.drop(['daypart_name','phrase_12char','phrase_22char','phrase_32char'],1) df.dtypes df[['pop','wspd','rh','clds','wdir','temp']] = df[['pop','wspd','rh','clds','wdir','temp']].apply(pd.to_numeric) df.dtypes Explanation: 3. Convert json data to pandas DataFrame Convert the data into a DataFrame with each timestep on a new row. Convert the timestamp into a datetime format and drop the columns that are not needed. See this Cheat sheet for date format conversions. Finally, convert the data type into numeric. End of explanation df['rain'] = df['pop'].as_matrix() df=df.drop('pop',1) df.head() Explanation: As there seems to be an issue with the pop column (percentage of precipitation), create a new column rain. 
End of explanation df = df.set_index('date',drop=False) df.head() import matplotlib.pyplot as plt import matplotlib %matplotlib inline fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(14, 8)) df['temp'].plot(ax=axes[1], color='#6EEDD8',lw=3.0,sharex=True) axes[1].set_title('Temperature',loc='left',fontsize=20) df['rain'].plot(ax=axes[0], kind='bar', color='#C93D79',lw=2.0,sharex=True) axes[0].set_title('Chance of rain',loc='left',fontsize=20) Explanation: 4. Plot data with matplotlib End of explanation cities = [ ('Exeter',50.7184,-3.5339), ('Truro',50.2632,-5.051), ('Carmarthen',51.8576,-4.3121), ('Norwich',52.6309,1.2974), ('Brighton And Hove',50.8225,-0.1372), ('Bristol',51.44999778,-2.583315472), ('Durham',54.7753,-1.5849), ('Llanidloes',52.4135,-3.5883), ('Penrith',54.6641,-2.7527), ('Jedburgh',55.4777,-2.5549), ('Coventry',52.42040367,-1.499996583), ('Edinburgh',55.94832786,-3.219090618), ('Cambridge',52.2053,0.1218), ('Glasgow',55.87440472,-4.250707236), ('Kingston upon Hull',53.7457,-0.3367), ('Leeds',53.83000755,-1.580017539), ('London',51.49999473,-0.116721844), ('Manchester',53.50041526,-2.247987103), ('Nottingham',52.97034426,-1.170016725), ('Aberdeen',57.1497,-2.0943), ('Fort Augustus',57.1448,-4.6805), ('Lairg',58.197,-4.6173), ('Oxford',51.7517,-1.2553), ('Inverey',56.9855,-3.5055), ('Shrewsbury',52.7069,-2.7527), ('Colwyn Bay',53.2932,-3.7276), ('Newton Stewart',54.9186,-4.5918), ('Portsmouth',50.80034751,-1.080022218)] icons=[] temps=[] for city in cities: lat = city[1] lon = city[2] line='https://'+username+':'+password+'@twcservice.mybluemix.net/api/weather/v1/geocode/'+str(lat)+'/'+str(lon)+'/observations.json?&units=m' r=requests.get(line) weather = json.loads(r.text) icons=np.append(icons,weather['observation']['wx_icon']) temps=np.append(temps,weather['observation']['temp']) dfmap = pd.DataFrame(cities, columns=['city','lat','lon']) dfmap['temp']=temps dfmap['icon']=icons dfmap.head() from mpl_toolkits.basemap import Basemap from matplotlib.offsetbox import AnnotationBbox, OffsetImage from matplotlib._png import read_png from itertools import izip import urllib matplotlib.style.use('bmh') fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(10, 12)) # background maps m1 = Basemap(projection='mill',resolution=None,llcrnrlon=-7.5,llcrnrlat=49.84,urcrnrlon=2.5,urcrnrlat=59,ax=axes[0]) m1.drawlsmask(land_color='dimgrey',ocean_color='dodgerBlue',lakes=True) m2 = Basemap(projection='mill',resolution=None,llcrnrlon=-7.5,llcrnrlat=49.84,urcrnrlon=2.5,urcrnrlat=59,ax=axes[1]) m2.drawlsmask(land_color='dimgrey',ocean_color='dodgerBlue',lakes=True) # weather icons map for [icon,city] in izip(icons,cities): lat = city[1] lon = city[2] try: pngfile=urllib.urlopen('https://github.com/ibm-cds-labs/python-notebooks/blob/master/weathericons/icon'+str(int(icon))+'.png?raw=true') icon_hand = read_png(pngfile) imagebox = OffsetImage(icon_hand, zoom=.15) ab = AnnotationBbox(imagebox,m1(lon,lat),frameon=False) axes[0].add_artist(ab) except: pass # temperature map for [temp,city] in izip(temps,cities): lat = city[1] lon = city[2] if temp>12: col='indigo' elif temp>10: col='darkmagenta' elif temp>8: col='red' elif temp>6: col='tomato' elif temp>4: col='turquoise' x1, y1 = m2(lon,lat) bbox_props = dict(boxstyle="round,pad=0.3", fc=col, ec=col, lw=2) axes[1].text(x1, y1, temp, ha="center", va="center", size=11,bbox=bbox_props) plt.tight_layout() Explanation: 5. Create a temperature map for the UK End of explanation display(df) Explanation: 6. 
Plot data with PixieDust https://ibm-cds-labs.github.io/pixiedust/ End of explanation
5,927
Given the following text description, write Python code to implement the functionality described below step by step Description: Module 2 Convolutions First let's have a look at what convolutions do Learning Activity 1 Step1: Learning Activity 2 Step2: The flag flatten means you imported the image as a grey-scale image. This means each pixel is represented by a single value between 0 (black) and 255 (white). You can view the image using the imshow function from matplotlib Step3: or view a part of the image by selecting a region Step4: Alternatively, you can view the values of the pixels directly, for example to view the values in the top left corner of the image Step5: Learning Activity 3 Step6: Now let's implement the multiply_sum function. It takes as input two numpy arrays of the same shape, multiplies them elementwise and returns the sum Step7: An there you have it, a convolution operator! You can apply a filter onto an image and see the result Step8: Quiz Step9: Transpose the filter Step10: Learning Activity 4 Step11: The first dimension represents colours. You can see the filter responds to regions that are red, a little green, and not at all blue. Let's load a colourful version of the Grace Hopper portrait Step12: Quiz Can you design a filter which will detect the edge from the background (blue) to Grace Hopper’s left shoulder (black). Hint Step13: We could almost apply our convolution operator already, but the filter we have is currently in the wrong format. The colour channel dimension should be the last one of the filter, same as in our image. This can be fixed by a simple transpose Step14: Convolutional Neural networks Now lets load an already trained network in our environment. This network (VGG-16) has been trained on the Imagenet dataset where the goal is to classify pictures into one of one thousand categories. When it came out in 2014, it won the annual ImageNet Recognition Challenge correctly classifying 93% of the images in the test set. For comparison, humans can achieve around 95% accuracy. It's also very simple, it only uses 3x3 convolutions! It is very deep though and it takes 4 GPUS 2-3 weeks to train it. To load the model, you must first define it's architecture. You're going to do this step by step as you learn the components of convolutional neural networks. But first, lets load the necessary libraries. We are again going to use the Keras library. Learning Activity 5 Step15: Learning Activity 6 Step16: Learning Activity 7 Step17: Learning Activity 8 Step18: Learning Activity 9 Step19: As you can see, the depth of the layers get progressively larger, up to 512 for the latest layers. This means as we go along, each layer detects a greater number of features. On the other hand, each max-pooling layer halves the height and width of the layer outputs. Starting from images of dimensions 224x224, the final outputs are only of size 7x7. Now you're about to add some fully connected layers which can learn the more abstract features of the image. But first you must first change the layout of the input so it looks like a 1-D tensor. Step20: The Flatten function removes the spatial dimensions of the layer output, it is now a simple 1-D row of numbers. This means we can no longer apply convolutions, but can apply fully connected layers like the ones of the perceptron from the previous module. Dense layers are fully connected layers. You used them in the previous module. 
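Before the notebook content below, a brief editorial sketch of the operation it builds by hand: the sliding-window filter used in the lesson is cross-correlation (the kernel is not flipped), and on a tiny array it can be checked against scipy.signal.correlate2d with mode='valid'. The array values and variable names here are made up for illustration.
# Naive 'valid' cross-correlation on a toy array, verified against scipy.
import numpy as np
from scipy.signal import correlate2d
image = np.arange(25, dtype=float).reshape(5, 5)    # toy 5x5 'image'
kernel = np.array([[1., 0., -1.],
                   [1., 0., -1.],
                   [1., 0., -1.]])                  # simple vertical-edge filter
out_h = image.shape[0] - kernel.shape[0] + 1        # output size = input - filter + 1
out_w = image.shape[1] - kernel.shape[1] + 1
naive = np.empty((out_h, out_w))
for y in range(out_h):
    for x in range(out_w):
        naive[y, x] = np.sum(image[y:y + 3, x:x + 3] * kernel)
print(naive)
print(np.allclose(naive, correlate2d(image, kernel, mode='valid')))  # True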
Learning Activity 10 Step21: The number 0.5 indicates the amount of change, 0.0 means no change, and 1.0 means completely different. Add one more fully connected layer Step22: Finally a softmax layer to predict the categories. There are 1000 categories and hence 1000 neurons. Step23: Learning Activity 11 Step24: Learning Activity 12 Step25: Learning Activity 13 Step26: The network seems pretty confident! Lets look at its top 5 guesses Step27: Hurray! Our network knows what it's talking about. Let's have a closer look at what goes on inside. Learning Activity 14 Step28: You can see how each filter detects a different property of the input image. Some are designed to respond to certain colours, while some other -- the greyscale looking ones -- detects changes in brightness such as edges. You may notice the brown filter in the top left corner, if we print the values of its weights Step29: It is the brown filter we applied to the Grace Hopper portrait above! Another way of visualising the network is to see which neurons get activated as the images traverses the network. A neuron outputing a high value means the pattern it has learnt to detect has been observed. Let's apply this to our kitten image. Step30: It's worth spending a moment to understand what is going on here. Each pixel in this image is a different neuron in the neural network. Neurons on the same image sample share the same weights and therefore detect the same feature. You can compare the visualised filters above with their corresponding image sample. For example, have find the bright green filter in the original visualisation and look at its corresponding image response. Using this method, it is possible to visualise the deeper parts of the neural network, although they become harder to interpret. You can visualise the output of the second convolutional layer Step31: And the eighth layer Step32: As we get further down the network, the representations become smaller in their spatial features thanks to the pooling layers. The final convolutional layers only have dimensions 14 by 14. Step33: Training your own network Lets train a network! We're going to use the CIFAR10 dataset, in which the goal is to categorise images in one of 10 categories. Learning Activity 15 Step34: Learning Activity 16 Step35: Quiz Step36: Learning Activity 18 Step37: And you're set! You can start training and see the accuracy improve!
Python Code: import os import numpy as np import matplotlib.pyplot as plt from scipy import misc % matplotlib inline plt.rcParams['figure.figsize'] = (10, 10) plt.rcParams['image.cmap'] = 'gray' Explanation: Module 2 Convolutions First let's have a look at what convolutions do Learning Activity 1: Load the Python libraries Let us start by loading the necessary Python libraries: End of explanation # Load the grace_hopper.jpg image from the data folder Explanation: Learning Activity 2: Import the data We are going to start by loading the portrait of Grace Hopper using the imread() function: End of explanation # View the image Explanation: The flag flatten means you imported the image as a grey-scale image. This means each pixel is represented by a single value between 0 (black) and 255 (white). You can view the image using the imshow function from matplotlib: End of explanation # View a region of the image Explanation: or view a part of the image by selecting a region: End of explanation # Print the pixel values of a region in the top left corner Explanation: Alternatively, you can view the values of the pixels directly, for example to view the values in the top left corner of the image End of explanation # Convolution function def convolve(image, filter): filter_height = filter.shape[0] filter_width = filter.shape[1] filtered_image = np.ndarray(shape=(image.shape[0] - filter_height + 1, image.shape[1] - filter_width + 1)) for x in range(0, filtered_image.shape[0]): for y in range(0, filtered_image.shape[1]): # We select a local patch of the image patch = image[x: x + filter_height, y: y + filter_width] # Then apply the convolution operation to it filtered_image[x,y] = multiply_sum(patch, filter) return filtered_image Explanation: Learning Activity 3: Define and apply a convolution function Now lets define a convolution function. First you must define a function which traverses the image to apply the convolution at every point and returns the result in a filtered image. Calculating the size of the filtered image along each dimension can be a little tricky, the formula is: Size of the filtered image = input image size - filter size + 1 Let us start by implementing the convolve function. It takes as input an image and a filter, and returns the output of applying the filter at each position in the image through a function multiply_sum End of explanation # Multiply_sum function def multiply_sum(patch, filter): # Let's make sure our two inputs have the same shape assert(patch.shape == filter.shape) return np.sum(patch * filter) Explanation: Now let's implement the multiply_sum function. It takes as input two numpy arrays of the same shape, multiplies them elementwise and returns the sum: End of explanation #First define the 3x3 filter #Then apply it onto the image #Show the result Explanation: An there you have it, a convolution operator! You can apply a filter onto an image and see the result: End of explanation # Define the filter # Convolve using this filter # Show the result Explanation: Quiz: What did our filter do? 1) By looking at the image, can you tell what kind of pattern the filter detected? 2) How would you design a filter which detects vertical edges? 
3) What would the following filter do: filter = np.array([[ 1, 1, 1], [ 0, 0, 0], [-1, -1, -1]]) End of explanation # Transpose # Convolve using this filter # Show the result Explanation: Transpose the filter: End of explanation # Create a brown filter brown_filter = np.array( [[[ 0.13871045, 0.17157242, 0.12934428], [ 0.16168842, 0.20229845, 0.14835016], [ 0.135694 , 0.16206263, 0.11727387]], [[ 0.04231958, 0.05471011, 0.03167877], [ 0.0462575 , 0.06581022, 0.03104937], [ 0.04185439, 0.04734124, 0.02087744]], [[-0.15704881, -0.16666673, -0.16600266], [-0.17439997, -0.17757156, -0.18760149], [-0.15435153, -0.17037505, -0.17269668]]]) print(brown_filter.shape) Explanation: Learning Activity 4: Convolutions with colour Very good! But what if we had a colour image, how would we use that extra information to detect useful patterns? The idea is simple, on top of having a set weight for each pixel, we have a set weight for each colour channel within that pixel. The following kernel detects region of the image which are mostly brown. End of explanation # Read and show the Grace Hopper portrait Explanation: The first dimension represents colours. You can see the filter responds to regions that are red, a little green, and not at all blue. Let's load a colourful version of the Grace Hopper portrait End of explanation # Answer Explanation: Quiz Can you design a filter which will detect the edge from the background (blue) to Grace Hopper’s left shoulder (black). Hint: make sure the weights in your €lter sum to 0. End of explanation # Transpose the brown filter Explanation: We could almost apply our convolution operator already, but the filter we have is currently in the wrong format. The colour channel dimension should be the last one of the filter, same as in our image. This can be fixed by a simple transpose: End of explanation import theano import cv2 from keras.models import Sequential from keras.layers.core import Flatten, Dense, Dropout from keras.layers.convolutional import Convolution2D, MaxPooling2D from keras.layers.convolutional import ZeroPadding2D from keras.optimizers import SGD from keras.datasets import cifar10 from keras.preprocessing.image import ImageDataGenerator from keras.utils import np_utils Explanation: Convolutional Neural networks Now lets load an already trained network in our environment. This network (VGG-16) has been trained on the Imagenet dataset where the goal is to classify pictures into one of one thousand categories. When it came out in 2014, it won the annual ImageNet Recognition Challenge correctly classifying 93% of the images in the test set. For comparison, humans can achieve around 95% accuracy. It's also very simple, it only uses 3x3 convolutions! It is very deep though and it takes 4 GPUS 2-3 weeks to train it. To load the model, you must first define it's architecture. You're going to do this step by step as you learn the components of convolutional neural networks. But first, lets load the necessary libraries. We are again going to use the Keras library. Learning Activity 5: Load the Python libraries End of explanation # Implement a convolutional layer # Create the model # On the very first layer, you must specify the input shape # Your first convolutional layer will have 64 3x3 filters, and will use a relu activation function Explanation: Learning Activity 6: Implementing a convolutional layer You are going to define the first convolutional layer of the network. 
But before, you will add some padding to the image so the convolutions get to apply on the outer edges. End of explanation # Stacking layers # Once again you must add padding Explanation: Learning Activity 7: Stacking layers Now you're going to stack another convolutional layer. Remember, the output of a convolutional layer is a 3-D tensor, just like our input image. Although it does have a much higher depth! End of explanation # Add a pooling layer with window size 2x2 # The stride indicates the distance between each pooled window Explanation: Learning Activity 8: Adding pooling layers Now lets add your first pooling layer. Pooling reduces the width and height of the input by aggregating adjacent cells together. End of explanation # Lots more Convolutional and Pooling layers Explanation: Learning Activity 9: Adding more convolutions for VGG Now you can stack many more of these! Remember not to change the parameters as we are about to load the weights of an already trained version of this network. End of explanation # Flatten the input # Add a fully connected layer with 4096 neurons Explanation: As you can see, the depth of the layers get progressively larger, up to 512 for the latest layers. This means as we go along, each layer detects a greater number of features. On the other hand, each max-pooling layer halves the height and width of the layer outputs. Starting from images of dimensions 224x224, the final outputs are only of size 7x7. Now you're about to add some fully connected layers which can learn the more abstract features of the image. But first you must first change the layout of the input so it looks like a 1-D tensor. End of explanation # Add a dropout layer Explanation: The Flatten function removes the spatial dimensions of the layer output, it is now a simple 1-D row of numbers. This means we can no longer apply convolutions, but can apply fully connected layers like the ones of the perceptron from the previous module. Dense layers are fully connected layers. You used them in the previous module. Learning Activity 10: Preventing overfitting with Dropout Dropout is a method used at train time to prevent overfitting. As a layer, it randomly modifies its input so that the neural network learns to be robust to these changes. Although you won’t actually use it now, you must define it to correctly load the weights as it was part of the original network. End of explanation # Add another fully connected layer with 4096 neurons and a Dropout at the output Explanation: The number 0.5 indicates the amount of change, 0.0 means no change, and 1.0 means completely different. Add one more fully connected layer: End of explanation # Add softmax layer Explanation: Finally a softmax layer to predict the categories. There are 1000 categories and hence 1000 neurons. End of explanation # Load the weights # Compile the network no need to worry about this for now Explanation: Learning Activity 11: Loading the weights And you're all set! 
Let's load the weights of the network: End of explanation # Load the image img = cv2.resize(cv2.imread('data/cat.jpg'), (224, 224)) # Transform it to the right formatd def transform_image(image): image_t = np.copy(image).astype(np.float32) # Avoids modifying the original image_t[:,:,0] -= 103.939 # Substracts mean Red image_t[:,:,1] -= 116.779 # Substracts mean Green image_t[:,:,2] -= 123.68 # Substracts mean Blue image_t = image_t.transpose((2,0,1)) # The colour dimension comes first image_t = np.expand_dims(image_t, axis=0) # The network takes batches of images as input return image_t img_t = transform_image(img) # What does the image look like? plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB)) Explanation: Learning Activity 12: Preprocessing the data Lets feed an image to your model. In the VGG network, we only do zero centering. The model takes as input a slightly transformed version of the input: End of explanation # Push the image through #The network takes batches of images, we only want the result for one image #The output is an array with 1000 values, one for each category. What does it look like? Explanation: Learning Activity 13: Getting an output from the network Now push it through the network and get the output End of explanation # Load labels # Sort top k predictions from softmax output Explanation: The network seems pretty confident! Lets look at its top 5 guesses: End of explanation # This is a helper function to let you visualise what goes on inside the network def vis_square(weights, padsize=1, padval=0): #Avoids modifying the network weights data = np.copy(weights) #Normalize the inputs data -= data.min() data /= data.max() # Lets tile the inputs # How many inputs per row n = int(np.ceil(np.sqrt(data.shape[0]))) # Add padding between inputs padding = ((0, n ** 2 - data.shape[0]), (0, padsize), (0, padsize)) + ((0, 0),) * (data.ndim - 3) data = np.pad(data, padding, mode='constant', constant_values=(padval, padval)) #place the filters on an n by n grid data = data.reshape((n, n) + data.shape[1:]) #merge the filters contents onto a single image data = data.transpose((0, 2, 1, 3) + tuple(range(4, data.ndim))) data = data.reshape((n * data.shape[1], n * data.shape[3]) + data.shape[4:]) #show the filter plt.imshow(data) # Get the weights of the first convolutional layer first_layer_weights = vgg_model.layers[1].get_weights() # first_layer_weights[0] stores the connection weights # first_layer_weights[1] stores the bias weights # For now we're interrested in the connections filters = first_layer_weights[0] # Visualise the filters vis_square(filters.transpose(0, 2, 3, 1)) Explanation: Hurray! Our network knows what it's talking about. Let's have a closer look at what goes on inside. Learning Activity 14: Looking inside the network In a convolutional neural network, there's an easy way to visualise the filters learned at the very first layer. We can print each filter to show which colours it reponds to. End of explanation print(filters[1]) Explanation: You can see how each filter detects a different property of the input image. Some are designed to respond to certain colours, while some other -- the greyscale looking ones -- detects changes in brightness such as edges. 
You may notice the brown filter in the top left corner, if we print the values of its weights End of explanation # This function fetches the intermediary output from a layer def get_layer_output(model, image, layer): # This theano function lets us look at the acivations throughout the network theano_function = theano.function([model.layers[0].input], model.layers[layer].get_output(train=False)) return theano_function(image)[0] layer_output = get_layer_output(vgg_model, img_t, 1) vis_square(layer_output, padsize=5, padval=1) Explanation: It is the brown filter we applied to the Grace Hopper portrait above! Another way of visualising the network is to see which neurons get activated as the images traverses the network. A neuron outputing a high value means the pattern it has learnt to detect has been observed. Let's apply this to our kitten image. End of explanation # Visualise the output of the second convolutional layer Explanation: It's worth spending a moment to understand what is going on here. Each pixel in this image is a different neuron in the neural network. Neurons on the same image sample share the same weights and therefore detect the same feature. You can compare the visualised filters above with their corresponding image sample. For example, have find the bright green filter in the original visualisation and look at its corresponding image response. Using this method, it is possible to visualise the deeper parts of the neural network, although they become harder to interpret. You can visualise the output of the second convolutional layer: End of explanation # Visualise the output of the eighth convolutional layer Explanation: And the eighth layer: End of explanation # Visualise the output of the final convolutional layers Explanation: As we get further down the network, the representations become smaller in their spatial features thanks to the pooling layers. The final convolutional layers only have dimensions 14 by 14. End of explanation # Load the data # Turn our images into floating point numbers # Put our input data in the range 0-1 # convert class vectors to binary class matrices Explanation: Training your own network Lets train a network! We're going to use the CIFAR10 dataset, in which the goal is to categorise images in one of 10 categories. Learning Activity 15: Loading the CIFAR10 dataset Load the CIFAR10 dataset: End of explanation # Define the model: Our model has six layers, four convolutional, and two fully connected. The first two layers have a # depth of 32, meaning they each detect 32 types of filters. They use 3x3 sized filters. Explanation: Learning Activity 16: Building the model Define the model, we will use a small model so it trains faster. End of explanation # Using Stochastic gradient descent with an initial learning rate of 0.01 # With Nesterov momentum, and a learning rate decay of 1e-6 per iteration Explanation: Quiz: HOW MANY WEIGHTS IN THE NETWORK? How many convolution weights does the first layer contain? What about the second layer? Are there any other weights in those layers? 
Learning Activity 17: Define the training schedule Using Stochastic gradient descent with an initial learning rate of 0.01 with Nesterov momentum, and a learning rate decay of 1e-6 per iteration End of explanation # Preprocessing, does both normalization and augmentation datagen = ImageDataGenerator( featurewise_center=True, # set input mean to 0 over the dataset samplewise_center=False, # set each sample mean to 0 featurewise_std_normalization=True, # divide inputs by std of the dataset samplewise_std_normalization=False, # divide each input by its std rotation_range=0, # randomly rotate images in the range (degrees, 0 to 180) width_shift_range=0.1, # randomly shift images horizontally (fraction of total width) height_shift_range=0.1, # randomly shift images vertically (fraction of total height) horizontal_flip=True, # randomly flip images vertical_flip=False) # randomly flip images # Compute quantities required for featurewise normalization (std, mean) datagen.fit(X_train) Explanation: Learning Activity 18: Image pre-processing Define the preprocessing of the image: End of explanation # Train, set, go!!! Explanation: And you're set! You can start training and see the accuracy improve! End of explanation
5,928
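The notebook in this row repeats one calling pattern many times: select a CMIP6 property id, then record a value for it. A compact, data-driven way to drive that pattern is sketched below; the FakeDoc stand-in and the apply_properties helper are editorial inventions so the sketch runs without pyesdoc, while the set_id and set_value method names mirror the calls used in the cells that follow.
# Data-driven sketch of the 'set_id then set_value' pattern used throughout this row.
class FakeDoc:
    def __init__(self):
        self.records = []
    def set_id(self, prop_id):
        self._current = prop_id
    def set_value(self, value):
        self.records.append((self._current, value))
def apply_properties(doc, properties):
    """Apply a mapping of {property_id: value} using the set_id/set_value pattern."""
    for prop_id, value in properties.items():
        doc.set_id(prop_id)
        doc.set_value(value)
doc = FakeDoc()
apply_properties(doc, {
    'cmip6.seaice.key_properties.model.model_name': 'Example sea-ice model',
    'cmip6.seaice.key_properties.resolution.name': 'Example grid name',
})
print(doc.records)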
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Seaice MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties --&gt; Model 2. Key Properties --&gt; Variables 3. Key Properties --&gt; Seawater Properties 4. Key Properties --&gt; Resolution 5. Key Properties --&gt; Tuning Applied 6. Key Properties --&gt; Key Parameter Values 7. Key Properties --&gt; Assumptions 8. Key Properties --&gt; Conservation 9. Grid --&gt; Discretisation --&gt; Horizontal 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Seaice Categories 12. Grid --&gt; Snow On Seaice 13. Dynamics 14. Thermodynamics --&gt; Energy 15. Thermodynamics --&gt; Mass 16. Thermodynamics --&gt; Salt 17. Thermodynamics --&gt; Salt --&gt; Mass Transport 18. Thermodynamics --&gt; Salt --&gt; Thermodynamics 19. Thermodynamics --&gt; Ice Thickness Distribution 20. Thermodynamics --&gt; Ice Floe Size Distribution 21. Thermodynamics --&gt; Melt Ponds 22. Thermodynamics --&gt; Snow Processes 23. Radiative Processes 1. Key Properties --&gt; Model Name of seaice model used. 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 2. Key Properties --&gt; Variables List of prognostic variable in the sea ice model. 2.1. Prognostic Is Required Step7: 3. Key Properties --&gt; Seawater Properties Properties of seawater relevant to sea ice 3.1. Ocean Freezing Point Is Required Step8: 3.2. Ocean Freezing Point Value Is Required Step9: 4. Key Properties --&gt; Resolution Resolution of the sea ice grid 4.1. Name Is Required Step10: 4.2. Canonical Horizontal Resolution Is Required Step11: 4.3. Number Of Horizontal Gridpoints Is Required Step12: 5. Key Properties --&gt; Tuning Applied Tuning applied to sea ice model component 5.1. Description Is Required Step13: 5.2. Target Is Required Step14: 5.3. Simulations Is Required Step15: 5.4. Metrics Used Is Required Step16: 5.5. Variables Is Required Step17: 6. Key Properties --&gt; Key Parameter Values Values of key parameters 6.1. Typical Parameters Is Required Step18: 6.2. Additional Parameters Is Required Step19: 7. Key Properties --&gt; Assumptions Assumptions made in the sea ice model 7.1. Description Is Required Step20: 7.2. On Diagnostic Variables Is Required Step21: 7.3. Missing Processes Is Required Step22: 8. Key Properties --&gt; Conservation Conservation in the sea ice component 8.1. Description Is Required Step23: 8.2. Properties Is Required Step24: 8.3. Budget Is Required Step25: 8.4. Was Flux Correction Used Is Required Step26: 8.5. Corrected Conserved Prognostic Variables Is Required Step27: 9. Grid --&gt; Discretisation --&gt; Horizontal Sea ice discretisation in the horizontal 9.1. Grid Is Required Step28: 9.2. Grid Type Is Required Step29: 9.3. Scheme Is Required Step30: 9.4. Thermodynamics Time Step Is Required Step31: 9.5. Dynamics Time Step Is Required Step32: 9.6. Additional Details Is Required Step33: 10. Grid --&gt; Discretisation --&gt; Vertical Sea ice vertical properties 10.1. Layering Is Required Step34: 10.2. Number Of Layers Is Required Step35: 10.3. Additional Details Is Required Step36: 11. Grid --&gt; Seaice Categories What method is used to represent sea ice categories ? 11.1. Has Mulitple Categories Is Required Step37: 11.2. Number Of Categories Is Required Step38: 11.3. 
Category Limits Is Required Step39: 11.4. Ice Thickness Distribution Scheme Is Required Step40: 11.5. Other Is Required Step41: 12. Grid --&gt; Snow On Seaice Snow on sea ice details 12.1. Has Snow On Ice Is Required Step42: 12.2. Number Of Snow Levels Is Required Step43: 12.3. Snow Fraction Is Required Step44: 12.4. Additional Details Is Required Step45: 13. Dynamics Sea Ice Dynamics 13.1. Horizontal Transport Is Required Step46: 13.2. Transport In Thickness Space Is Required Step47: 13.3. Ice Strength Formulation Is Required Step48: 13.4. Redistribution Is Required Step49: 13.5. Rheology Is Required Step50: 14. Thermodynamics --&gt; Energy Processes related to energy in sea ice thermodynamics 14.1. Enthalpy Formulation Is Required Step51: 14.2. Thermal Conductivity Is Required Step52: 14.3. Heat Diffusion Is Required Step53: 14.4. Basal Heat Flux Is Required Step54: 14.5. Fixed Salinity Value Is Required Step55: 14.6. Heat Content Of Precipitation Is Required Step56: 14.7. Precipitation Effects On Salinity Is Required Step57: 15. Thermodynamics --&gt; Mass Processes related to mass in sea ice thermodynamics 15.1. New Ice Formation Is Required Step58: 15.2. Ice Vertical Growth And Melt Is Required Step59: 15.3. Ice Lateral Melting Is Required Step60: 15.4. Ice Surface Sublimation Is Required Step61: 15.5. Frazil Ice Is Required Step62: 16. Thermodynamics --&gt; Salt Processes related to salt in sea ice thermodynamics. 16.1. Has Multiple Sea Ice Salinities Is Required Step63: 16.2. Sea Ice Salinity Thermal Impacts Is Required Step64: 17. Thermodynamics --&gt; Salt --&gt; Mass Transport Mass transport of salt 17.1. Salinity Type Is Required Step65: 17.2. Constant Salinity Value Is Required Step66: 17.3. Additional Details Is Required Step67: 18. Thermodynamics --&gt; Salt --&gt; Thermodynamics Salt thermodynamics 18.1. Salinity Type Is Required Step68: 18.2. Constant Salinity Value Is Required Step69: 18.3. Additional Details Is Required Step70: 19. Thermodynamics --&gt; Ice Thickness Distribution Ice thickness distribution details. 19.1. Representation Is Required Step71: 20. Thermodynamics --&gt; Ice Floe Size Distribution Ice floe-size distribution details. 20.1. Representation Is Required Step72: 20.2. Additional Details Is Required Step73: 21. Thermodynamics --&gt; Melt Ponds Characteristics of melt ponds. 21.1. Are Included Is Required Step74: 21.2. Formulation Is Required Step75: 21.3. Impacts Is Required Step76: 22. Thermodynamics --&gt; Snow Processes Thermodynamic processes in snow on sea ice 22.1. Has Snow Aging Is Required Step77: 22.2. Snow Aging Scheme Is Required Step78: 22.3. Has Snow Ice Formation Is Required Step79: 22.4. Snow Ice Formation Scheme Is Required Step80: 22.5. Redistribution Is Required Step81: 22.6. Heat Diffusion Is Required Step82: 23. Radiative Processes Sea Ice Radiative Processes 23.1. Surface Albedo Is Required Step83: 23.2. Ice Radiation Transmission Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'dwd', 'mpi-esm-1-2-hr', 'seaice') Explanation: ES-DOC CMIP6 Model Properties - Seaice MIP Era: CMIP6 Institute: DWD Source ID: MPI-ESM-1-2-HR Topic: Seaice Sub-Topics: Dynamics, Thermodynamics, Radiative Processes. Properties: 80 (63 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:53:57 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.model.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties --&gt; Model 2. Key Properties --&gt; Variables 3. Key Properties --&gt; Seawater Properties 4. Key Properties --&gt; Resolution 5. Key Properties --&gt; Tuning Applied 6. Key Properties --&gt; Key Parameter Values 7. Key Properties --&gt; Assumptions 8. Key Properties --&gt; Conservation 9. Grid --&gt; Discretisation --&gt; Horizontal 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Seaice Categories 12. Grid --&gt; Snow On Seaice 13. Dynamics 14. Thermodynamics --&gt; Energy 15. Thermodynamics --&gt; Mass 16. Thermodynamics --&gt; Salt 17. Thermodynamics --&gt; Salt --&gt; Mass Transport 18. Thermodynamics --&gt; Salt --&gt; Thermodynamics 19. Thermodynamics --&gt; Ice Thickness Distribution 20. Thermodynamics --&gt; Ice Floe Size Distribution 21. Thermodynamics --&gt; Melt Ponds 22. Thermodynamics --&gt; Snow Processes 23. Radiative Processes 1. Key Properties --&gt; Model Name of seaice model used. 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of sea ice model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.model.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.variables.prognostic') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sea ice temperature" # "Sea ice concentration" # "Sea ice thickness" # "Sea ice volume per grid cell area" # "Sea ice u-velocity" # "Sea ice v-velocity" # "Sea ice enthalpy" # "Internal ice stress" # "Salinity" # "Snow temperature" # "Snow depth" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Variables List of prognostic variable in the sea ice model. 2.1. 
Prognostic Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of prognostic variables in the sea ice component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "TEOS-10" # "Constant" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Seawater Properties Properties of seawater relevant to sea ice 3.1. Ocean Freezing Point Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.2. Ocean Freezing Point Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If using a constant seawater freezing point, specify this value. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Resolution Resolution of the sea ice grid 4.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.3. Number Of Horizontal Gridpoints Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Tuning Applied Tuning applied to sea ice model component 5.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.2. Target Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.3. Simulations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 *Which simulations had tuning applied, e.g. all, not historical, only pi-control? * End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.4. Metrics Used Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List any observed metrics used in tuning model/parameters End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.5. Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Which variables were changed during the tuning process? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Ice strength (P*) in units of N m{-2}" # "Snow conductivity (ks) in units of W m{-1} K{-1} " # "Minimum thickness of ice created in leads (h0) in units of m" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 6. Key Properties --&gt; Key Parameter Values Values of key parameters 6.1. Typical Parameters Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N What values were specificed for the following parameters if used? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.2. Additional Parameters Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N If you have any additional paramterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.description') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Key Properties --&gt; Assumptions Assumptions made in the sea ice model 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N General overview description of any key assumptions made in this model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. 
On Diagnostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.3. Missing Processes Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Key Properties --&gt; Conservation Conservation in the sea ice component 8.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Provide a general description of conservation methodology. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.properties') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Energy" # "Mass" # "Salt" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.2. Properties Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Properties conserved in sea ice by the numerical schemes. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.budget') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Budget Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 For each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3 End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 8.4. Was Flux Correction Used Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does conservation involved flux correction? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.5. Corrected Conserved Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 List any variables which are conserved by more than the numerical scheme alone. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Ocean grid" # "Atmosphere Grid" # "Own Grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 9. Grid --&gt; Discretisation --&gt; Horizontal Sea ice discretisation in the horizontal 9.1. 
Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Grid on which sea ice is horizontal discretised? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Structured grid" # "Unstructured grid" # "Adaptive grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 9.2. Grid Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the type of sea ice grid? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Finite differences" # "Finite elements" # "Finite volumes" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 9.3. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the advection scheme? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 9.4. Thermodynamics Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the time step in the sea ice model thermodynamic component in seconds. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 9.5. Dynamics Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the time step in the sea ice model dynamic component in seconds. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9.6. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional horizontal discretisation details. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Zero-layer" # "Two-layers" # "Multi-layers" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10. Grid --&gt; Discretisation --&gt; Vertical Sea ice vertical properties 10.1. Layering Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 10.2. Number Of Layers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using multi-layers specify how many. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.3. 
Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional vertical grid details. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 11. Grid --&gt; Seaice Categories What method is used to represent sea ice categories ? 11.1. Has Mulitple Categories Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Set to true if the sea ice model has multiple sea ice categories. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 11.2. Number Of Categories Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using sea ice categories specify how many. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.3. Category Limits Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 If using sea ice categories specify each of the category limits. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.4. Ice Thickness Distribution Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the sea ice thickness distribution scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.seaice_categories.other') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.5. Other Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If the sea ice model does not use sea ice categories specify any additional details. For example models that paramterise the ice thickness distribution ITD (i.e there is no explicit ITD) but there is assumed distribution and fluxes are computed accordingly. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 12. Grid --&gt; Snow On Seaice Snow on sea ice details 12.1. Has Snow On Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is snow on ice represented in this model? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 12.2. Number Of Snow Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels of snow on ice? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.3. 
Snow Fraction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how the snow fraction on sea ice is determined End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.4. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Specify any additional details related to snow on ice. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.horizontal_transport') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Incremental Re-mapping" # "Prather" # "Eulerian" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Dynamics Sea Ice Dynamics 13.1. Horizontal Transport Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of horizontal advection of sea ice? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Incremental Re-mapping" # "Prather" # "Eulerian" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.2. Transport In Thickness Space Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of sea ice transport in thickness space (i.e. in thickness categories)? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Hibler 1979" # "Rothrock 1975" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.3. Ice Strength Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Which method of sea ice strength formulation is used? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.redistribution') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Rafting" # "Ridging" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.4. Redistribution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Which processes can redistribute sea ice (including thickness)? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.dynamics.rheology') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Free-drift" # "Mohr-Coloumb" # "Visco-plastic" # "Elastic-visco-plastic" # "Elastic-anisotropic-plastic" # "Granular" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.5. Rheology Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Rheology, what is the ice deformation formulation? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pure ice latent heat (Semtner 0-layer)" # "Pure ice latent and sensible heat" # "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)" # "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14. Thermodynamics --&gt; Energy Processes related to energy in sea ice thermodynamics 14.1. Enthalpy Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the energy formulation? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pure ice" # "Saline ice" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.2. Thermal Conductivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What type of thermal conductivity is used? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Conduction fluxes" # "Conduction and radiation heat fluxes" # "Conduction, radiation and latent heat transport" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.3. Heat Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of heat diffusion? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Heat Reservoir" # "Thermal Fixed Salinity" # "Thermal Varying Salinity" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.4. Basal Heat Flux Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method by which basal ocean heat flux is handled? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.5. Fixed Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.6. Heat Content Of Precipitation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method by which the heat content of precipitation is handled. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14.7. 
Precipitation Effects On Salinity Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15. Thermodynamics --&gt; Mass Processes related to mass in sea ice thermodynamics 15.1. New Ice Formation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method by which new sea ice is formed in open water. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Ice Vertical Growth And Melt Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method that governs the vertical growth and melt of sea ice. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Floe-size dependent (Bitz et al 2001)" # "Virtual thin ice melting (for single-category)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15.3. Ice Lateral Melting Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the method of sea ice lateral melting? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.4. Ice Surface Sublimation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method that governs sea ice surface sublimation. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.5. Frazil Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the method of frazil ice formation. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 16. Thermodynamics --&gt; Salt Processes related to salt in sea ice thermodynamics. 16.1. Has Multiple Sea Ice Salinities Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 16.2. Sea Ice Salinity Thermal Impacts Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Does sea ice salinity impact the thermal properties of sea ice? 
End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Prescribed salinity profile" # "Prognostic salinity profile" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 17. Thermodynamics --&gt; Salt --&gt; Mass Transport Mass transport of salt 17.1. Salinity Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is salinity determined in the mass transport of salt calculation? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 17.2. Constant Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If using a constant salinity value specify this value in PSU? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.3. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the salinity profile used. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Prescribed salinity profile" # "Prognostic salinity profile" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 18. Thermodynamics --&gt; Salt --&gt; Thermodynamics Salt thermodynamics 18.1. Salinity Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is salinity determined in the thermodynamic calculation? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 18.2. Constant Salinity Value Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If using a constant salinity value specify this value in PSU? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18.3. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the salinity profile used. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Virtual (enhancement of thermal conductivity, thin ice melting)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19. Thermodynamics --&gt; Ice Thickness Distribution Ice thickness distribution details. 19.1. Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is the sea ice thickness distribution represented? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Parameterised" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 20. Thermodynamics --&gt; Ice Floe Size Distribution Ice floe-size distribution details. 20.1. Representation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 How is the sea ice floe-size represented? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20.2. Additional Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Please provide further details on any parameterisation of floe-size. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 21. Thermodynamics --&gt; Melt Ponds Characteristics of melt ponds. 21.1. Are Included Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Are melt ponds included in the sea ice model? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Flocco and Feltham (2010)" # "Level-ice melt ponds" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 21.2. Formulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What method of melt pond formulation is used? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Albedo" # "Freshwater" # "Heat" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 21.3. Impacts Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N What do melt ponds have an impact on? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') # PROPERTY VALUE(S): # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 22. Thermodynamics --&gt; Snow Processes Thermodynamic processes in snow on sea ice 22.1. Has Snow Aging Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Set to True if the sea ice model has a snow aging scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.2. Snow Aging Scheme Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow aging scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') # PROPERTY VALUE(S): # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 22.3. 
Has Snow Ice Formation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Set to True if the sea ice model has snow ice formation. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.4. Snow Ice Formation Scheme Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe the snow ice formation scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 22.5. Redistribution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the impact of ridging on snow cover? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Single-layered heat diffusion" # "Multi-layered heat diffusion" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.6. Heat Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 What is the heat diffusion through snow methodology in sea ice thermodynamics? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Delta-Eddington" # "Parameterized" # "Multi-band albedo" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23. Radiative Processes Sea Ice Radiative Processes 23.1. Surface Albedo Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Method used to handle surface albedo. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Delta-Eddington" # "Exponential attenuation" # "Ice radiation transmission per category" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23.2. Ice Radiation Transmission Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Method by which solar radiation through sea ice is handled. End of explanation
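To make the fill-in pattern above concrete, here is a minimal sketch of how a completed cell would look for one ENUM property and one FLOAT property. The values are placeholders taken from the listed valid choices, not the actual MPI-ESM-1-2-HR settings, which have to come from the model documentation itself.
# Hypothetical fill-in example: replace the placeholder values with the real model settings
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
DOC.set_value("Delta-Eddington")  # placeholder choice from the valid ENUM list
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
DOC.set_value(-1.8)  # placeholder constant freezing point in deg C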
5,929
Given the following text description, write Python code to implement the functionality described below step by step Description: Notebook 8 Step1: Download the sequence data Sequence data for this study are archived on the NCBI sequence read archive (SRA). Below I read in SraRunTable.txt for this project which contains all of the information we need to download the data. Project SRA Step3: For each ERS (individuals) get all of the ERR (sequence file accessions). Step4: Here we pass the SRR number and the sample name to the wget_download function so that the files are saved with their sample names. Step5: Make a params file Step6: Note Step7: Assemble in pyrad Step8: Results We are interested in the relationship between the amount of input (raw) data between any two samples, the average coverage they recover when clustered together, and the phylogenetic distances separating samples. Raw data amounts The average number of raw reads per sample is 1.36M. Step9: Look at distributions of coverage pyrad v.3.0.63 outputs depth information for each sample which I read in here and plot. First let's ask which sample has the highest depth of coverage. The std of coverages is pretty low in this data set compared to several others. Step10: Plot the coverage for the sample with highest mean coverage Green shows the loci that were discarded and orange the loci that were retained. The majority of data were discarded for being too low of coverage. Step11: Print final stats table Step12: Infer ML phylogeny in raxml as an unrooted tree Step13: Plot the tree in R using ape Step14: Get phylo distances (GTRgamma dist)
Python Code: ### Notebook 8 ### Data set 8: Barnacles ### Authors: Herrera et al. 2015 ### Data Location: SRP051026 Explanation: Notebook 8: This is an IPython notebook. Most of the code is composed of bash scripts, indicated by %%bash at the top of the cell, otherwise it is IPython code. This notebook includes code to download, assemble and analyze a published RADseq data set. End of explanation %%bash ## make a new directory for this analysis mkdir -p empirical_8/fastq/ Explanation: Download the sequence data Sequence data for this study are archived on the NCBI sequence read archive (SRA). Below I read in SraRunTable.txt for this project which contains all of the information we need to download the data. Project SRA: SRP051026 BioProject ID: PRJNA269631 SRA link: http://trace.ncbi.nlm.nih.gov/Traces/study/?acc=SRP051026 End of explanation ## IPython code import pandas as pd import numpy as np import urllib2 import os ## open the SRA run table from github url url = "https://raw.githubusercontent.com/"+\ "dereneaton/RADmissing/master/empirical_8_SraRunTable.txt" intable = urllib2.urlopen(url) indata = pd.read_table(intable, sep="\t") ## print first few rows print indata.head() def wget_download(SRR, outdir, outname): Python function to get sra data from ncbi and write to outdir with a new name using bash call wget ## get output name output = os.path.join(outdir, outname+".sra") ## create a call string call = "wget -q -r -nH --cut-dirs=9 -O "+output+" "+\ "ftp://ftp-trace.ncbi.nlm.nih.gov/"+\ "sra/sra-instant/reads/ByRun/sra/SRR/"+\ "{}/{}/{}.sra;".format(SRR[:6], SRR, SRR) ## call bash script ! $call Explanation: For each ERS (individuals) get all of the ERR (sequence file accessions). End of explanation for ID, SRR in zip(indata.Sample_Name_s, indata.Run_s): wget_download(SRR, "empirical_8/fastq/", ID) %%bash ## convert sra files to fastq using fastq-dump tool ## output as gzipped into the fastq directory fastq-dump --gzip -O empirical_8/fastq/ empirical_8/fastq/*.sra ## remove .sra files rm empirical_8/fastq/*.sra %%bash ls -lh empirical_8/fastq/ Explanation: Here we pass the SRR number and the sample name to the wget_download function so that the files are saved with their sample names. End of explanation %%bash pyrad --version %%bash ## remove old params file if it exists rm params.txt ## create a new default params file pyrad -n Explanation: Make a params file End of explanation %%bash ## substitute new parameters into file sed -i '/## 1. /c\empirical_8/ ## 1. working directory ' params.txt sed -i '/## 6. /c\TGCAGG ## 6. cutters ' params.txt sed -i '/## 7. /c\20 ## 7. N processors ' params.txt sed -i '/## 9. /c\6 ## 9. NQual ' params.txt sed -i '/## 10./c\.85 ## 10. clust threshold ' params.txt sed -i '/## 12./c\4 ## 12. MinCov ' params.txt sed -i '/## 13./c\10 ## 13. maxSH ' params.txt sed -i '/## 14./c\empirical_8_m4 ## 14. output name ' params.txt sed -i '/## 18./c\empirical_8/fastq/*.gz ## 18. data location ' params.txt sed -i '/## 29./c\2,2 ## 29. trim overhang ' params.txt sed -i '/## 30./c\p,n,s ## 30. output formats ' params.txt cat params.txt Explanation: Note: The data here are from Illumina Casava <1.8, so the phred scores are offset by 64 instead of 33, so we use that in the params file below. End of explanation %%bash pyrad -p params.txt -s 234567 >> log.txt 2>&1 %%bash sed -i '/## 12./c\2 ## 12. MinCov ' params.txt sed -i '/## 14./c\empirical_8_m2 ## 14. 
output name ' params.txt %%bash pyrad -p params.txt -s 7 >> log.txt 2>&1 Explanation: Assemble in pyrad End of explanation import pandas as pd ## read in the data s2dat = pd.read_table("empirical_8/stats/s2.rawedit.txt", header=0, nrows=42) ## print summary stats print s2dat["passed.total"].describe() ## find which sample has the most raw data maxraw = s2dat["passed.total"].max() print "\nmost raw data in sample:" print s2dat['sample '][s2dat['passed.total']==maxraw] Explanation: Results We are interested in the relationship between the amount of input (raw) data between any two samples, the average coverage they recover when clustered together, and the phylogenetic distances separating samples. Raw data amounts The average number of raw reads per sample is 1.36M. End of explanation ## read in the s3 results s8dat = pd.read_table("empirical_8/stats/s3.clusters.txt", header=0, nrows=14) ## print summary stats print "summary of means\n==================" print s8dat['dpt.me'].describe() ## print summary stats print "\nsummary of std\n==================" print s8dat['dpt.sd'].describe() ## print summary stats print "\nsummary of proportion lowdepth\n==================" print pd.Series(1-s8dat['d>5.tot']/s8dat["total"]).describe() ## find which sample has the greatest depth of retained loci max_hiprop = (s8dat["d>5.tot"]/s8dat["total"]).max() print "\nhighest coverage in sample:" print s8dat['taxa'][s8dat['d>5.tot']/s8dat["total"]==max_hiprop] maxprop =(s8dat['d>5.tot']/s8dat['total']).max() print "\nhighest prop coverage in sample:" print s8dat['taxa'][s8dat['d>5.tot']/s8dat['total']==maxprop] import numpy as np ## print mean and std of coverage for the highest coverage sample with open("empirical_8/clust.85/82121_15.depths", 'rb') as indat: depths = np.array(indat.read().strip().split(","), dtype=int) print "Means for sample 82121_15" print depths.mean(), depths.std() print depths[depths>5].mean(), depths[depths>5].std() Explanation: Look at distributions of coverage pyrad v.3.0.63 outputs depth information for each sample which I read in here and plot. First let's ask which sample has the highest depth of coverage. The std of coverages is pretty low in this data set compared to several others. End of explanation import toyplot import toyplot.svg import numpy as np ## read in the depth information for this sample with open("empirical_8/clust.85/82121_15.depths", 'rb') as indat: depths = np.array(indat.read().strip().split(","), dtype=int) ## make a barplot in Toyplot canvas = toyplot.Canvas(width=350, height=300) axes = canvas.axes(xlabel="Depth of coverage (N reads)", ylabel="N loci", label="dataset8/sample=82121_15") ## select the loci with depth > 5 (kept) keeps = depths[depths>5] ## plot kept and discarded loci edat = np.histogram(depths, range(30)) # density=True) kdat = np.histogram(keeps, range(30)) #, density=True) axes.bars(edat) axes.bars(kdat) #toyplot.svg.render(canvas, "empirical_8_depthplot.svg") Explanation: Plot the coverage for the sample with highest mean coverage Green shows the loci that were discarded and orange the loci that were retained. The majority of data were discarded for being too low of coverage. End of explanation cat empirical_8/stats/empirical_8_m4.stats cat empirical_8/stats/empirical_8_m2.stats Explanation: Print final stats table End of explanation %%bash ## raxml argumement w/ ... 
raxmlHPC-PTHREADS-AVX -f a -m GTRGAMMA -N 100 -x 12345 -p 12345 -T 20 \ -w /home/deren/Documents/RADmissing/empirical_8/ \ -n empirical_8_m4 -s empirical_8/outfiles/empirical_8_m4.phy %%bash ## raxml argumement w/ ... raxmlHPC-PTHREADS-AVX -f a -m GTRGAMMA -N 100 -x 12345 -p 12345 -T 20 \ -w /home/deren/Documents/RADmissing/empirical_8/ \ -n empirical_8_m2 -s empirical_8/outfiles/empirical_8_m2.phy %%bash head -n 20 empirical_8/RAxML_info.empirical_8 Explanation: Infer ML phylogeny in raxml as an unrooted tree End of explanation %load_ext rpy2.ipython %%R -h 800 -w 800 library(ape) tre <- read.tree("empirical_8/RAxML_bipartitions.empirical_8") ltre <- ladderize(tre) par(mfrow=c(1,2)) plot(ltre, use.edge.length=F) nodelabels(ltre$node.label) plot(ltre, type='u') Explanation: Plot the tree in R using ape End of explanation %%R mean(cophenetic.phylo(ltre)) Explanation: Get phylo distances (GTRgamma dist) End of explanation
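Since the stated interest is the relationship between raw-read counts and the coverage each sample recovers, a minimal sketch of that comparison is to join the step-2 and step-3 stats tables and compute a correlation. This assumes the sample names in s2.rawedit.txt and s3.clusters.txt match once surrounding whitespace is stripped; it reuses the s2dat and s8dat tables read in above.
## sketch: correlate raw reads per sample with mean cluster depth
raw = s2dat.set_index(s2dat['sample '].str.strip())['passed.total']
depth = s8dat.set_index(s8dat['taxa'].str.strip())['dpt.me']
combined = pd.concat([raw, depth], axis=1).dropna()
print combined.corr()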
5,930
Given the following text description, write Python code to implement the functionality described below step by step Description: Deep Learning Assignment 3 Previously in 2_fullyconnected.ipynb, you trained a logistic regression and a neural network model. The goal of this assignment is to explore regularization techniques. Step1: First reload the data we generated in notmist.ipynb. Step2: Reformat into a shape that's more adapted to the models we're going to train
Python Code: # These are all the modules we'll be using later. Make sure you can import them # before proceeding further. import cPickle as pickle import numpy as np import tensorflow as tf Explanation: Deep Learning Assignment 3 Previously in 2_fullyconnected.ipynb, you trained a logistic regression and a neural network model. The goal of this assignment is to explore regularization techniques. End of explanation pickle_file = 'notMNIST.pickle' with open(pickle_file, 'rb') as f: save = pickle.load(f) train_dataset = save['train_dataset'] train_labels = save['train_labels'] valid_dataset = save['valid_dataset'] valid_labels = save['valid_labels'] test_dataset = save['test_dataset'] test_labels = save['test_labels'] del save # hint to help gc free up memory print 'Training set', train_dataset.shape, train_labels.shape print 'Validation set', valid_dataset.shape, valid_labels.shape print 'Test set', test_dataset.shape, test_labels.shape Explanation: First reload the data we generated in notmist.ipynb. End of explanation image_size = 28 num_labels = 10 def reformat(dataset, labels): dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32) # Map 2 to [0.0, 1.0, 0.0 ...], 3 to [0.0, 0.0, 1.0 ...] labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32) return dataset, labels train_dataset, train_labels = reformat(train_dataset, train_labels) valid_dataset, valid_labels = reformat(valid_dataset, valid_labels) test_dataset, test_labels = reformat(test_dataset, test_labels) print 'Training set', train_dataset.shape, train_labels.shape print 'Validation set', valid_dataset.shape, valid_labels.shape print 'Test set', test_dataset.shape, test_labels.shape def accuracy(predictions, labels): return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1)) / predictions.shape[0]) Explanation: Reformat into a shape that's more adapted to the models we're going to train: - data as a flat matrix, - labels as float 1-hot encodings. End of explanation
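The assignment above targets regularization but only the data pipeline is shown, so here is a minimal sketch of the first such technique: an L2 penalty on the weights of the logistic-regression graph from assignment 2. The batch size and the penalty strength beta are assumed values rather than anything prescribed by the assignment, and the training loop would follow the same session pattern as before.
# Sketch: logistic regression with an added L2 weight penalty (beta is an assumed value)
batch_size = 128
beta = 1e-3

graph = tf.Graph()
with graph.as_default():
    tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, image_size * image_size))
    tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
    weights = tf.Variable(tf.truncated_normal([image_size * image_size, num_labels]))
    biases = tf.Variable(tf.zeros([num_labels]))
    logits = tf.matmul(tf_train_dataset, weights) + biases
    # cross-entropy loss plus the L2 regularization term
    loss = (tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=tf_train_labels))
            + beta * tf.nn.l2_loss(weights))
    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
    train_prediction = tf.nn.softmax(logits)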
5,931
Given the following text description, write Python code to implement the functionality described below step by step Description: Explore surface Argo oxygen float and World Ocean Atlas data Get cached surface data and compare with data from the World Ocean Atlas Build local cache of data from some floats known to have oxygen data Step1: Get a Pandas DataFrame of all the data in this cache file. Step2: Define a function to scatter plot the float positions on a map. Step3: See where these floats have been. Step4: Compute the mean of all the surface values. Step5: Before computing monthly means let's add lat, lon, wmo, and month and year numbers to new columns of the DataFrame. Step6: Compute the monthly means and add an o2sat (for percent oxygen saturation) column using the Gibbs SeaWater Oceanographic Package of TEOS-10. Step7: Add columns ilon & ilat ('i', for index) rounding lat and lon to nearest 0.5 degree mark to facilitate querying the World Ocean Atlas. Step8: Build a dictionary (woa) of OpenDAP monthly URLs to the o2sat data. Step9: Define a function to get WOA O_an (Objectively analyzed mean fields for fractional_saturation_of_oxygen_in_seawater at standard depth) variable given a month, depth, latitude, and longitude. Step10: Add the woa_o2sat column, taken from 5.0 m depth for each month and position of the float. Step11: The above takes a few minutes to do the WOA lookups, so let's copy the 'o2...' columns of the result to a DataFrame that we'll use for calculating the gain over time for each float. Add wmo column back and make a Python datetime index. Step12: Plot the gain over time for each of the floats. Step13: Print the mean gain value for each float. Values much less than 1 should not be trusted (Josh Plant, personal communication).
Python Code: from biofloat import ArgoData from os.path import join, expanduser ad = ArgoData(cache_file = join(expanduser('~'), 'biofloat_fixed_cache_variablesDOXY_ADJUSTED-PSAL_ADJUSTED-TEMP_ADJUSTED_wmo1900650-1901157-5901073.hdf')) Explanation: Explore surface Argo oxygen float and World Ocean Atlas data Get cached surface data and compare with data from the World Ocean Atlas Build local cache of data from some floats known to have oxygen data: bash load_biofloat_cache.py --wmo 1900650 1901157 5901073 -v Get an ArgoData object that uses the local cache file built with above command. End of explanation %%time wmo_list = ['1900650', '1901157', '5901073'] ad.set_verbosity(1) df = ad.get_float_dataframe(wmo_list) Explanation: Get a Pandas DataFrame of all the data in this cache file. End of explanation %pylab inline import pylab as plt from mpl_toolkits.basemap import Basemap def map(lons, lats): m = Basemap(llcrnrlon=15, llcrnrlat=-90, urcrnrlon=390, urcrnrlat=90, projection='cyl') m.fillcontinents(color='0.8') m.scatter(lons, lats, latlon=True, color='red') Explanation: Define a function to scatter plot the float positions on a map. End of explanation plt.rcParams['figure.figsize'] = (18.0, 8.0) tdf = df.copy() tdf['lon'] = tdf.index.get_level_values('lon') tdf['lat'] = tdf.index.get_level_values('lat') map(tdf.lon, tdf.lat) # Place wmo lables at the mean position for each float for wmo, lon, lat in tdf.groupby(level='wmo')['lon', 'lat'].mean().itertuples(): if lon < 0: lon += 360 plt.text(lon, lat, wmo) Explanation: See where these floats have been. End of explanation sdf = df.query('(pressure < 10)').groupby(level=['wmo', 'time', 'lon', 'lat']).mean() sdf.head() Explanation: Compute the mean of all the surface values. End of explanation sdf['lon'] = sdf.index.get_level_values('lon') sdf['lat'] = sdf.index.get_level_values('lat') sdf['month'] = sdf.index.get_level_values('time').month sdf['year'] = sdf.index.get_level_values('time').year sdf['wmo'] = sdf.index.get_level_values('wmo') Explanation: Before computing monthly means let's add lat, lon, wmo, and month and year numbers to new columns of the DataFrame. End of explanation msdf = sdf.groupby(['wmo', 'year', 'month']).mean() from biofloat.utils import o2sat, convert_to_mll msdf['o2sat'] = 100 * (msdf.DOXY_ADJUSTED / o2sat(msdf.PSAL_ADJUSTED, msdf.TEMP_ADJUSTED)) msdf.head(10) Explanation: Compute the monthly means and add an o2sat (for percent oxygen saturation) column using the Gibbs SeaWater Oceanographic Package of TEOS-10. End of explanation def round_to(n, increment, mark): correction = mark if n >= 0 else -mark return int( n / increment) + correction imsdf = msdf.copy() imsdf['ilon'] = msdf.apply(lambda x: round_to(x.lon, 1, 0.5), axis=1) imsdf['ilat'] = msdf.apply(lambda x: round_to(x.lat, 1, 0.5), axis=1) imsdf.head(10) Explanation: Add columns ilon & ilat ('i', for index) rounding lat and lon to nearest 0.5 degree mark to facilitate querying the World Ocean Atlas. End of explanation woa_tmpl = 'http://data.nodc.noaa.gov/thredds/dodsC/woa/WOA13/DATA/o2sat/netcdf/all/1.00/woa13_all_O{:02d}_01.nc' woa = {} for m in range(1,13): woa[m] = woa_tmpl.format(m) Explanation: Build a dictionary (woa) of OpenDAP monthly URLs to the o2sat data. 
End of explanation import xray def woa_o2sat(month, depth, lon, lat): ds = xray.open_dataset(woa[month], decode_times=False) return ds.loc[dict(lon=lon, lat=lat, depth=depth)]['O_an'].values[0] Explanation: Define a function to get WOA O_an (Objectively analyzed mean fields for fractional_saturation_of_oxygen_in_seawater at standard depth) variable given a month, depth, latitude, and longitude. End of explanation %%time woadf = imsdf.copy() woadf['month'] = woadf.index.get_level_values('month') woadf['woa_o2sat'] = woadf.apply(lambda x: woa_o2sat(x.month, 5.0, x.ilon, x.ilat), axis=1) Explanation: Add the woa_o2sat column, taken from 5.0 m depth for each month and position of the float. End of explanation import pandas as pd gdf = woadf[['o2sat', 'woa_o2sat']].copy() gdf['wmo'] = gdf.index.get_level_values('wmo') years = gdf.index.get_level_values('year') months = gdf.index.get_level_values('month') gdf['date'] = pd.to_datetime(years * 100 + months, format='%Y%m') Explanation: The above takes a few minutes to do the WOA lookups, so let's copy the 'o2...' columns of the result to a DataFrame that we'll use for calculating the gain over time for each float. Add wmo column back and make a Python datetime index. End of explanation plt.style.use('ggplot') plt.rcParams['figure.figsize'] = (18.0, 4.0) ax = gdf[['o2sat', 'woa_o2sat']].unstack(level=0).plot() ax.set_ylabel('Oxygen Saturation (%)') gdf['gain'] = gdf.woa_o2sat / gdf.o2sat ax = gdf[['gain']].unstack(level=0).plot() ax.set_ylabel('Gain') Explanation: Plot the gain over time for each of the floats. End of explanation gdf.groupby('wmo').gain.mean() Explanation: Print the mean gain value for each float. Values much less than 1 should not be trusted (Josh Plant, personal communication). End of explanation
5,932
Given the following text description, write Python code to implement the functionality described below step by step Description: Iterators and Generators Homework Problem 1 Create a generator that generates the squares of numbers up to some number N. Step1: Problem 2 Create a generator that yields "n" random numbers between a low and high number (that are inputs). Note Step2: Problem 3 Use the iter() function to convert the string below Step3: Problem 4 Explain a use case for a generator using a yield statement where you would not want to use a normal function with a return statement. Extra Credit! Can you explain what gencomp is in the code below? (Note
Python Code: def gensquares(N): pass for x in gensquares(10): print x Explanation: Iterators and Generators Homework Problem 1 Create a generator that generates the squares of numbers up to some number N. End of explanation import random random.randint(1,10) def rand_num(low,high,n): pass for num in rand_num(1,10,12): print num Explanation: Problem 2 Create a generator that yields "n" random numbers between a low and high number (that are inputs). Note: Use the random library. For example: End of explanation s = 'hello' #code here Explanation: Problem 3 Use the iter() function to convert the string below End of explanation my_list = [1,2,3,4,5] gencomp = (item for item in my_list if item > 3) for item in gencomp: print item Explanation: Problem 4 Explain a use case for a generator using a yield statement where you would not want to use a normal function with a return statement. Extra Credit! Can you explain what gencomp is in the code below? (Note: We never covered this in lecture! You will have to do some googling/Stack Overflowing!) End of explanation
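For reference, here is one possible set of answers to the exercises above. It is only a sketch, since several implementations are equally valid ('up to N' is read here as the squares of 0 through N-1), and the written questions are answered in comments.
import random

def gensquares(N):
    for num in range(N):
        yield num ** 2

def rand_num(low, high, n):
    for _ in range(n):
        yield random.randint(low, high)

s = 'hello'
s_iter = iter(s)
print next(s_iter)  # prints 'h'; each further next() call returns the next character

# Problem 4: a generator with yield is preferable when the sequence is very large
# (or unbounded) and items are consumed one at a time, because it never builds the
# whole result in memory the way a function returning a full list would.

# Extra credit: gencomp is a generator expression, the lazy analogue of a list
# comprehension; it produces its items on demand instead of building a list.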
5,933
Given the following text description, write Python code to implement the functionality described below step by step Description: 3.2. Increasing dataset size The next thing we're going to try is to increase the size of our dataset. In the previous training runs we used a small subset of the book "Don Quijote de La Mancha" that contained 169KB of text. The problem is that what we're essentially doing is teaching Spanish to our RNN. And, let's be honest, it's quite difficult to learn a language from scratch by reading only 169K characters (a few chapters of a book); we'll learn some words and maybe even a few sentences, but it's very difficult to really learn the language. Therefore, in order to solve this, we'll greatly increase the size of the dataset. We'll use the entire "Don Quijote de la Mancha" book, and to it we'll append another very famous Spanish book, "La Regenta" by Leopoldo Alas. Combining both, we'll get a dataset of about 4MB (more than 20x the previous one). And, although this will slow down our training a lot, it will very likely be a huge improvement in our results. Let's start the code Step1: The next step will be to read both books and to combine them into a single dataset, and then we'll proceed with the usual calculations Step2: (We won't see the results here because I've actually executed this code on another machine, not directly in the notebook; as you can imagine, this will take a loooooong time). Well, so here we are again! If you're reading this once I've finished the notebook you won't notice the pause, but I'm writing this two weeks after the previous paragraph. As I predicted, the NN took a looooooooong time to learn. Each one of the epochs required about 11 hours to finish! And besides, there's another important thing to take into account Step3: 3.2.2. Most probable character The code to use here doesn't need much explanation, as it is exactly the one we've used in previous notebooks. You can check them to see the reason for using this method Step4: 3.2.3. Randomized prediction The code to use here doesn't need much explanation, as it is exactly the one we've used in previous notebooks. You can check them to see the reason for using this method
Python Code: import numpy as np import matplotlib.pyplot as plt from keras.models import Sequential from keras.layers import Dense from keras.layers import Dropout from keras.layers import LSTM from keras.callbacks import ModelCheckpoint from keras.utils import np_utils Explanation: 3.2. Increasing dataset size The next thing we're going to try is to increase the size of our dataset. On the previous trainings we used a small subset of the book "Don Quijote de La Mancha" that contained 169KB of text. The problem is that we have to consider that what we're going to do is to teach Spanish to our RNN. And, let's be honest, it's quite difficult to learn a language from scratch by reading only 169K characters (a few chapters of a book); we'll learn some words and maybe even a few sentences, but it's very difficult to really learn the language. Therefore, in order to solve this, we'll greatly increase the size of the dataset. We'll use the entire "Don Quijote de la Mancha" book, and to it we'll append another very famous Spanish book, "La Regenta" by Leopoldo Alas. Combining both, we'll get a dataset of about 4MB (more than 20x the previous one). And, although this will slow down our training a lot, it will be with very high probability a very huge improvement in our code. Let's start the code: End of explanation # Load the books, merging them and covert the result to lowercase filename1 = "El ingenioso hidalgo don Quijote de la Mancha.txt" book1 = open(filename1).read() filename2 = "La Regenta.txt" book2 = open(filename2).read() book = book1 + book2 # Create mapping of unique chars to integers, and its reverse chars = sorted(list(set(book))) char_to_int = dict((c, i) for i, c in enumerate(chars)) int_to_char = dict((i, c) for i, c in enumerate(chars)) # Summarizing the loaded data n_chars = len(book) n_vocab = len(chars) print "Total Characters: ", n_chars print "Total Vocab: ", n_vocab # Prepare the dataset of input to output pairs encoded as integers seq_length = 100 dataX = [] dataY = [] # Iterating over the book for i in range(0, n_chars - seq_length, 1): sequence_in = book[i:i + seq_length] sequence_out = book[i + seq_length] # Converting each char to its corresponding int sequence_in_int = [char_to_int[char] for char in sequence_in] sequence_out_int = char_to_int[sequence_out] # Appending the result to the current data dataX.append(sequence_in_int) dataY.append(sequence_out_int) n_patterns = len(dataX) print "Total Patterns: ", n_patterns # Reshaping X to be [samples, time steps, features] X = np.reshape(dataX, (n_patterns, seq_length, 1)) # Normalizing X = X / float(n_vocab) # One hot encode the output variable y = np_utils.to_categorical(dataY) # Define the LSTM model model = Sequential() model.add(LSTM(256, input_shape=(X.shape[1], X.shape[2]), return_sequences=True)) model.add(Dropout(0.2)) model.add(LSTM(256)) model.add(Dropout(0.2)) model.add(Dense(y.shape[1], activation='softmax')) # Starting from a checkpoint (if we set one) checkpoint = "" if checkpoint: model.load_weights(checkpoint) # Amount of epochs that we still have to run epochs_run = 0 epochs_left = 50 - epochs_run # Define the checkpoints structure filepath="weights-improvement-{epoch:02d}-{loss:.4f}.hdf5" checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min') callbacks_list = [checkpoint] # Compiling the model model.compile(loss='categorical_crossentropy', optimizer='adam') # Fitting the model model.fit(X, y, nb_epoch=epochs_left, batch_size=64, callbacks=callbacks_list) 
Explanation: The next step will be to read both books and to combine them into a single dataset, and then we'll proceed with the usual calculations End of explanation # Load the network weights filename = "weights-improvement-09-1.5410.hdf5" model.load_weights(filename) model.compile(loss='categorical_crossentropy', optimizer='adam') # Pick a random seed start = np.random.randint(0, len(dataX)-1) pattern = dataX[start] starting_pattern = pattern # saving a copy seed = ''.join([int_to_char[value] for value in pattern]) print "Seed:" print "\"", seed, "\"" result_str = "" Explanation: (We won't see the results here because I've actually executed this code in another machine, not directly in the notebook; as you can imagine, this will take a loooooong time). We'll, so here we are again! If you're reading this once I've finished the notebook you won't notice the pause, but I'm writing this two weeks later than the previous paragraph. As I predicted, the NN took a looooooooong time to learn. Each one of the epochs required about 11 hours to finish! And besides, there's another important thing to take into account: the NN stopped generating weights after the 10th one, although the code is still running. I'd like to thing that it happened because the loss stopped decreasing at that point (what won't be as bad as it may seem, because due to the big size of the dataset we achieved quite good results even with few epochs), but we can't know it for sure at the moment; however, I will for sure update this notebook when I analyse it more precisely, so don't stop visiting it. And now that this part is explained, let's go back to what really matters: the results! In order to test our neural net, we'll use the two approaches tried before, in order to see the results achieved with each one: choosing the most probable character each iteration and using the output probabilities as the Probability Density Function. 3.2.1. Preparing the prediction In this section we're going to include all the code that is common to both prediction methods (loading the weights, preparing the seed...) in order to avoid executing the same code twice End of explanation # Generate characters for i in range(500): x = numpy.reshape(pattern, (1, len(pattern), 1)) x = x / float(n_vocab) prediction = model.predict(x, verbose=0) index = numpy.argmax(prediction) result = int_to_char[index] seq_in = [int_to_char[value] for value in pattern] result_str += result pattern.append(index) pattern = pattern[1:len(pattern)] print "\nDone." Explanation: 3.2.2. Most probable character The code to use here doesn't need much explanation, as is exactly the one we've used on previous notebooks. You can check them to see the reason for using this method End of explanation pattern = starting_pattern # Restoring the seed to its initial state result_str = "" # Generate characters for i in range(500): x = np.reshape(pattern, (1, len(pattern), 1)) x = x / float(n_vocab) # Choosing the character randomly prediction = model.predict(x, verbose=0) prob_cum = np.cumsum(prediction[0]) rand_ind = np.random.rand() for i in range(len(prob_cum)): if (rand_ind <= prob_cum[i]) and (rand_ind > prob_cum[i-1]): index = i break result = int_to_char[index] seq_in = [int_to_char[value] for value in pattern] result_str += result pattern.append(index) pattern = pattern[1:len(pattern)] print "\nDone." Explanation: 3.2.3. Randomized prediction The code to use here doesn't need much explanation, as is exactly the one we've used on previous notebooks. 
You can check them to see why this sampling method is used. End of explanation
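A possible shortcut, added here as a hedged sketch rather than part of the original notebook: instead of building the cumulative distribution by hand, numpy.random.choice can draw an index directly from the predicted probabilities. It reuses the model, pattern, n_vocab and int_to_char variables defined in the cells above; the explicit normalization only guards against floating-point drift in the softmax output.
x = np.reshape(pattern, (1, len(pattern), 1)) / float(n_vocab)
probs = model.predict(x, verbose=0)[0]
index = np.random.choice(len(probs), p=probs / probs.sum())
print(int_to_char[index])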
5,934
Given the following text description, write Python code to implement the functionality described below step by step Description: Please find torch implementation of this notebook here Step1: Implementation from scratch For fully connected layers, we take the average along minibatch samples for each dimension independently. For 2d convolutional layers, we take the average along minibatch samples, and along horizontal and vertical locations, for each channel (feature dimension) independently. When training, we update the estimate of the mean and variance using a moving average. When testing (doing inference), we use the pre-computed values. Step2: Wrap the batch norm function in a layer Step3: Applying batch norm to LeNet We add BN layers after some of the convolutions and fully connected layers, but before the activation functions. Step5: Train the model We train the model using the same code as in the standard LeNet colab. The only difference from the previous colab is the larger learning rate (which is possible because BN stabilizes training). Step15: Plotting Step16: Training Function Step17: We create a subclass of train_state.TrainState store the auxilliary variables (i.e. gamma and beta) required by BatchNorm. Step21: Since the same training procedure needs to be applied on two different networks (i.e. LeNetBN and LeNetBNFlax), we define a train_procedure_builder helper function to create separate procedures for these two networks. Note that we cannot simply pass the model class to the training functions because we are using @jax.jit and nn.Module is not a valid JAX type. Step22: Examine learned parameters Step23: Use Flax's BatchNorm layer The built-in layer is much faster than our Python code, since it is implemented in C++. Note that instead of specifying ndims=2 for fully connected layer (batch x features) and ndims=4 for convolutional later (batch x channels x height x width), we simply use BatchNorm and take advantage of JAX's shape inference feature. Step24: Learning Curve Step25: Examine learned parameters
Python Code: import jax import jax.numpy as jnp # JAX NumPy import matplotlib.pyplot as plt import math from IPython import display try: from flax import linen as nn # The Linen API except ModuleNotFoundError: %pip install -qq flax from flax import linen as nn # The Linen API from flax.training import train_state # Useful dataclass to keep train state import numpy as np # Ordinary NumPy try: import optax # Optimizers except ModuleNotFoundError: %pip install -qq optax import optax # Optimizers try: import tensorflow_datasets as tfds # TFDS for MNIST except ModuleNotFoundError: %pip install -qq tensorflow tensorflow_datasets import tensorflow_datasets as tfds # TFDS for MNIST import random import os import time !mkdir figures # for saving plots Explanation: Please find torch implementation of this notebook here: https://colab.research.google.com/github/probml/pyprobml/blob/master/notebooks/book1/14/batchnorm_torch.ipynb Batch normalization We implement a batchnorm layer from scratch and add to LeNet CNN. Code based on sec 7.5 of http://d2l.ai/chapter_convolutional-modern/batch-norm.html End of explanation def batch_norm(X, train, gamma, beta, moving_mean, moving_var, eps, momentum): # Use `train` to determine whether the current mode is training # mode or prediction mode if not train: # If it is prediction mode, directly use the mean and variance # obtained by moving average X_hat = (X - moving_mean.value) / jnp.sqrt(moving_var.value + eps) else: assert len(X.shape) in (2, 4) if len(X.shape) == 2: # When using a fully-connected layer, calculate the mean and # variance on the feature dimension mean = X.mean(axis=0) var = ((X - mean) ** 2).mean(axis=0) else: # When using a two-dimensional convolutional layer, calculate the # mean and variance on the channel dimension (axis=1). Here we # need to maintain the shape of `X`, so that the broadcasting # operation can be carried out later mean = X.mean(axis=(0, 2, 3), keepdims=True) var = ((X - mean) ** 2).mean(axis=(0, 2, 3), keepdims=True) # In training mode, the current mean and variance are used for the # standardization X_hat = (X - mean) / jnp.sqrt(var + eps) # Update the mean and variance using moving average moving_mean.value = momentum * moving_mean.value + (1.0 - momentum) * mean moving_var.value = momentum * moving_var.value + (1.0 - momentum) * var Y = gamma * X_hat + beta # Scale and shift return Y Explanation: Implementation from scratch For fully connected layers, we take the average along minibatch samples for each dimension independently. For 2d convolutional layers, we take the average along minibatch samples, and along horizontal and vertical locations, for each channel (feature dimension) independently. When training, we update the estimate of the mean and variance using a moving average. When testing (doing inference), we use the pre-computed values. End of explanation class BatchNorm(nn.Module): # `num_features`: the number of outputs for a fully-connected layer # or the number of output channels for a convolutional layer. 
num_features: int # `num_dims`: 2 for a fully-connected layer and 4 for a convolutional layer num_dims: int # Use `train` to determine whether the current mode is training # mode or prediction mode train: bool @nn.compact def __call__(self, X): if self.num_dims == 2: shape = (1, self.num_features) else: shape = (1, 1, 1, self.num_features) # The scale parameter and the shift parameter (model parameters) are # initialized to 1 and 0, respectively gamma = self.param("gamma", jax.nn.initializers.ones, shape) beta = self.param("beta", jax.nn.initializers.zeros, shape) # The variables that are not model parameters are initialized to 0 and 1 moving_mean = self.variable("batch_stats", "moving_mean", jnp.zeros, shape) moving_var = self.variable("batch_stats", "moving_var", jnp.ones, shape) # Save the updated `moving_mean` and `moving_var` Y = batch_norm(X, self.train, gamma, beta, moving_mean, moving_var, eps=1e-5, momentum=0.9) return Y Explanation: Wrap the batch norm function in a layer End of explanation # NOTE: The LeNet in batchnorm_torch.ipynb uses max pooling instead of average # pooling. Here I'm switching to average pooling since that's what the # original paper did. # NOTE: I adapted the following code from notebooks-d2l/lenet_jax.ipynb. class LeNetBN(nn.Module): @nn.compact def __call__(self, x, *, train): x = nn.Conv(features=6, kernel_size=(5, 5), padding=[(2, 2), (2, 2)])(x) x = BatchNorm(6, num_dims=4, train=train)(x) # <--- x = nn.sigmoid(x) x = nn.avg_pool(x, window_shape=(2, 2), strides=(2, 2), padding=[(0, 0), (0, 0)]) x = nn.Conv(features=16, kernel_size=(5, 5), padding=[(0, 0), (0, 0)])(x) x = BatchNorm(16, num_dims=4, train=train)(x) # <--- x = nn.sigmoid(x) x = nn.avg_pool(x, window_shape=(2, 2), strides=(2, 2), padding=[(0, 0), (0, 0)]) x = x.reshape((x.shape[0], -1)) x = nn.Dense(features=120)(x) x = BatchNorm(120, num_dims=2, train=train)(x) # <--- x = nn.sigmoid(x) x = nn.Dense(features=84)(x) x = BatchNorm(84, num_dims=2, train=train)(x) # <--- x = nn.sigmoid(x) x = nn.Dense(features=10)(x) return x Explanation: Applying batch norm to LeNet We add BN layers after some of the convolutions and fully connected layers, but before the activation functions. End of explanation def get_datasets(): Load MNIST train and test datasets into memory. ds_builder = tfds.builder("fashion_mnist") ds_builder.download_and_prepare() train_ds = tfds.as_numpy(ds_builder.as_dataset(split="train", batch_size=-1)) test_ds = tfds.as_numpy(ds_builder.as_dataset(split="test", batch_size=-1)) train_ds["image"] = jnp.float32(train_ds["image"]) / 255.0 test_ds["image"] = jnp.float32(test_ds["image"]) / 255.0 return train_ds, test_ds Explanation: Train the model We train the model using the same code as in the standard LeNet colab. The only difference from the previous colab is the larger learning rate (which is possible because BN stabilizes training). End of explanation class Animator: For plotting data in animation. 
def __init__( self, xlabel=None, ylabel=None, legend=None, xlim=None, ylim=None, xscale="linear", yscale="linear", fmts=("-", "m--", "g-.", "r:"), nrows=1, ncols=1, figsize=(3.5, 2.5), ): # Incrementally plot multiple lines if legend is None: legend = [] display.set_matplotlib_formats("svg") self.fig, self.axes = plt.subplots(nrows, ncols, figsize=figsize) if nrows * ncols == 1: self.axes = [ self.axes, ] # Use a lambda function to capture arguments self.config_axes = lambda: set_axes(self.axes[0], xlabel, ylabel, xlim, ylim, xscale, yscale, legend) self.X, self.Y, self.fmts = None, None, fmts def add(self, x, y): # Add multiple data points into the figure if not hasattr(y, "__len__"): y = [y] n = len(y) if not hasattr(x, "__len__"): x = [x] * n if not self.X: self.X = [[] for _ in range(n)] if not self.Y: self.Y = [[] for _ in range(n)] for i, (a, b) in enumerate(zip(x, y)): if a is not None and b is not None: self.X[i].append(a) self.Y[i].append(b) self.axes[0].cla() for x, y, fmt in zip(self.X, self.Y, self.fmts): self.axes[0].plot(x, y, fmt) self.config_axes() display.display(self.fig) display.clear_output(wait=True) class Timer: Record multiple running times. def __init__(self): self.times = [] self.start() def start(self): Start the timer. self.tik = time.time() def stop(self): Stop the timer and record the time in a list. self.times.append(time.time() - self.tik) return self.times[-1] def avg(self): Return the average time. return sum(self.times) / len(self.times) def sum(self): Return the sum of time. return sum(self.times) def cumsum(self): Return the accumulated time. return np.array(self.times).cumsum().tolist() class Accumulator: For accumulating sums over `n` variables. def __init__(self, n): self.data = [0.0] * n def add(self, *args): self.data = [a + float(b) for a, b in zip(self.data, args)] def reset(self): self.data = [0.0] * len(self.data) def __getitem__(self, idx): return self.data[idx] def set_axes(axes, xlabel, ylabel, xlim, ylim, xscale, yscale, legend): Set the axes for matplotlib. axes.set_xlabel(xlabel) axes.set_ylabel(ylabel) axes.set_xscale(xscale) axes.set_yscale(yscale) axes.set_xlim(xlim) axes.set_ylim(ylim) if legend: axes.legend(legend) axes.grid() Explanation: Plotting End of explanation def compute_metrics(*, logits, labels): one_hot = jax.nn.one_hot(labels, num_classes=10) loss = jnp.mean(optax.softmax_cross_entropy(logits=logits, labels=one_hot)) accuracy = jnp.mean(jnp.argmax(logits, -1) == labels) metrics = { "loss": loss, "accuracy": accuracy, } return metrics Explanation: Training Function End of explanation from typing import Any class BatchNormTrainState(train_state.TrainState): batch_stats: Any = None Explanation: We create a subclass of train_state.TrainState store the auxilliary variables (i.e. gamma and beta) required by BatchNorm. End of explanation def train_procedure_builder(Net): def create_train_state(rng, learning_rate): Creates initial `BatchNormTrainState`. variables = Net().init(rng, jnp.ones([1, 28, 28, 1]), train=True) batch_stats, params = variables.pop("params") tx = optax.sgd(learning_rate) return BatchNormTrainState.create(apply_fn=Net().apply, params=params, tx=tx, batch_stats=batch_stats) @jax.jit def train_step(state, batch): Train for a single step. 
def loss_fn(params): variables = {"params": params, **state.batch_stats} logits, new_batch_stats = Net().apply(variables, batch["image"], mutable=["batch_stats"], train=True) one_hot = jax.nn.one_hot(batch["label"], num_classes=10) loss = jnp.mean(optax.softmax_cross_entropy(logits=logits, labels=one_hot)) return loss, (new_batch_stats, logits) grad_fn = jax.value_and_grad(loss_fn, has_aux=True) (_, (new_batch_stats, logits)), grads = grad_fn(state.params) state = state.apply_gradients(grads=grads) metrics = compute_metrics(logits=logits, labels=batch["label"]) new_state = state.replace(batch_stats=new_batch_stats) return new_state, metrics @jax.jit def eval_step(state, batch): variables = {"params": state.params, **state.batch_stats} logits = Net().apply(variables, batch["image"], train=False) return compute_metrics(logits=logits, labels=batch["label"]) def train_epoch(state, train_ds, batch_size, epoch, rng, animator): Train for a single epoch. train_ds_size = len(train_ds["image"]) steps_per_epoch = train_ds_size // batch_size perms = jax.random.permutation(rng, train_ds_size) perms = perms[: steps_per_epoch * batch_size] # skip incomplete batch perms = perms.reshape((steps_per_epoch, batch_size)) batch_metrics = [] for perm in perms: batch = {k: v[perm, ...] for k, v in train_ds.items()} state, metrics = train_step(state, batch) batch_metrics.append(metrics) # compute mean of metrics across each batch in epoch. batch_metrics_np = jax.device_get(batch_metrics) epoch_metrics_np = {k: np.mean([metrics[k] for metrics in batch_metrics_np]) for k in batch_metrics_np[0]} animator.add(epoch, (epoch_metrics_np["loss"], epoch_metrics_np["accuracy"], None)) print( "train epoch: %d, loss: %.4f, accuracy: %.2f" % (epoch, epoch_metrics_np["loss"], epoch_metrics_np["accuracy"] * 100) ) return state def eval_model(state, test_ds): metrics = eval_step(state, test_ds) metrics = jax.device_get(metrics) summary = jax.tree_map(lambda x: x.item(), metrics) return summary["loss"], summary["accuracy"] def train(train_ds, test_ds, num_epochs, batch_size, learning_rate): rng = jax.random.PRNGKey(42) rng, init_rng = jax.random.split(rng) state = create_train_state(init_rng, learning_rate) del init_rng # Must not be used anymore. animator = Animator(xlabel="epoch", xlim=[1, num_epochs], legend=["train loss", "train acc", "test acc"]) for epoch in range(1, num_epochs + 1): # Use a separate PRNG key to permute image data during shuffling rng, input_rng = jax.random.split(rng) # Run an optimization step over a training batch state = train_epoch(state, train_ds, batch_size, epoch, input_rng, animator) # Evaluate on the train and test set after each training epoch train_loss, train_accuracy = eval_model(state, train_ds) test_loss, test_accuracy = eval_model(state, test_ds) animator.add(epoch, (None, None, test_accuracy)) print(f"loss {train_loss:.3f}, train acc {train_accuracy:.3f}, " f"test acc {test_accuracy:.3f}") return state return train learning_rate = 1.0 num_epochs = 10 batch_size = 256 train_ds, test_ds = get_datasets() train = train_procedure_builder(LeNetBN) state = train(train_ds, test_ds, num_epochs, batch_size, learning_rate) Explanation: Since the same training procedure needs to be applied on two different networks (i.e. LeNetBN and LeNetBNFlax), we define a train_procedure_builder helper function to create separate procedures for these two networks. Note that we cannot simply pass the model class to the training functions because we are using @jax.jit and nn.Module is not a valid JAX type. 
End of explanation state.params["BatchNorm_0"]["gamma"].reshape((-1,)), state.params["BatchNorm_0"]["beta"].reshape((-1,)) Explanation: Examine learned parameters End of explanation class LeNetBNFlax(nn.Module): @nn.compact def __call__(self, x, *, train): x = nn.Conv(features=6, kernel_size=(5, 5), padding=[(2, 2), (2, 2)])(x) x = nn.BatchNorm(use_running_average=not train, momentum=0.9, epsilon=1e-5)(x) # <--- x = nn.sigmoid(x) x = nn.avg_pool(x, window_shape=(2, 2), strides=(2, 2), padding=[(0, 0), (0, 0)]) x = nn.Conv(features=16, kernel_size=(5, 5), padding=[(0, 0), (0, 0)])(x) x = nn.BatchNorm(use_running_average=not train, momentum=0.9, epsilon=1e-5)(x) # <--- x = nn.sigmoid(x) x = nn.avg_pool(x, window_shape=(2, 2), strides=(2, 2), padding=[(0, 0), (0, 0)]) x = x.reshape((x.shape[0], -1)) x = nn.Dense(features=120)(x) x = nn.BatchNorm(use_running_average=not train, momentum=0.9, epsilon=1e-5)(x) # <--- x = nn.sigmoid(x) x = nn.Dense(features=84)(x) x = nn.BatchNorm(use_running_average=not train, momentum=0.9, epsilon=1e-5)(x) # <--- x = nn.sigmoid(x) x = nn.Dense(features=10)(x) return x Explanation: Use Flax's BatchNorm layer The built-in layer is much faster than our Python code, since it is implemented in C++. Note that instead of specifying ndims=2 for fully connected layer (batch x features) and ndims=4 for convolutional later (batch x channels x height x width), we simply use BatchNorm and take advantage of JAX's shape inference feature. End of explanation train = train_procedure_builder(LeNetBNFlax) state = train(train_ds, test_ds, num_epochs, batch_size, learning_rate) Explanation: Learning Curve End of explanation state.params["BatchNorm_0"]["scale"].reshape((-1,)), state.params["BatchNorm_0"]["bias"].reshape((-1,)) Explanation: Examine learned parameters End of explanation
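A short inference sketch, added as an illustration and not part of the original notebook: it reuses the state, test_ds and LeNetBNFlax objects defined above. At prediction time the moving statistics stored in batch_stats are used instead of per-batch statistics, which is why the model is applied with train=False and without any mutable collections.
variables = {"params": state.params, **state.batch_stats}
logits = LeNetBNFlax().apply(variables, test_ds["image"][:8], train=False)
print(jnp.argmax(logits, axis=-1), test_ds["label"][:8])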
5,935
Given the following text description, write Python code to implement the functionality described below step by step Description: Matplotlib Introduccion Matplotlib es la libreria ejemplar para la visualización de información en Python. Fue creada por John Hunter con la intención de replicar las capacidaddes para graficar de MatLab. Es una excelente libreria para graficar 2D y 3D así como para generar figuras cientificas Algunas de las ventajas de Matplotlib son Step1: Para la visualizar las graficas en linea ocupamos el codigo Step2: Esta linea de codigo solo es necesaria para los cuadernos de jupyter, si estas usando algun otro editor, deberas de usar el comando plt.show() para visualizar la grafica en otra ventana Ejemplo Basico Utilizando dos arreglos de numpy, podemos generar una grafica Step3: Comandos básicos de Matplotlib Podemos crear una simple grafica de linea utilizando los siguientes comandos Step4: Crear varias graficas en el mismo espacio Step5: Matplotlib orientado a objetos Introduccion al metodo orientado a objetos El proposito de utilizar un metodo orientado a objeto es generar un objeto figura y de ahi solo mandar llamar los metodos o atributos del objeto. Este metodo es mas efectivo al momento de manipular varias graficas Comencemos creando la instancia de figura. Posteriormente crearemos los ejes Step6: El codigo es un poco mas complicado, pero tiene la ventaja de que tenemos el control de donde colocar las graficas. Step7: subplots() El objeto plt.subplots() actuara como un manejador automatico de ejes Uso basico Step8: The esta manera se puede especificar el numero de filas y columnas a crear al momento de crear el objeto Step9: Se puede iterar en los ejes Step10: Un error comun en matplotlib es cuando se empalman las graficas y subgraficas, para ello se puede utilizar los metodos fig.tight_layout() o plt.tight_layout() los cuales automaticamentes ajustan las posiciones de los ejes para evitar que estos se empalmen. Step11: Figure size, aspect ratio and DPI Matplotlib permite el uso de aspect ratio, DPI and figure size para ser especificado al momento de crear el objeto. Podemos utilizar figsize y dpi comandos * figsize es una tupla que nos permite determinar el ancho y la altura de la figura en pulgadas * dpi son los puntos por pulgada (pixel per inch). Por ejemplo Step12: Estos argumentos se pueden pasar al administrador del layout como una funcion subplots Step13: Guardar figuras Matplotlib puede generar imagenes de alta calidad, incluyendo PNG, JPG, EPS, SVG, PGF and PDF. 
Para guardar la figura en un archivo utilizamos el metodo savefig en la clase Figure Step14: En este metodo podemos opcionalmente especificar el DPI y seleccionar los diferentes formatos Step15: Legendas, etiquetas y titulos Ahora que ya tenemos una idea de lo basico al momento de crear una figura, vamos a ver como podemos decorar nuestras graficas Titulos de la figura Un titulo puede ser agregado a cada eje de la figurapara emplearlo utiliza el metodo set_title Step16: Etiquetas de los ejes Igual que con los metodos set_xlabel y set_ylabel podemos agregar etiquetas para X y Y Step17: Legendas Se puede utilizar el metodo legend para mostrar los elementos de la figura Step18: Si deseamos colocar la leyenda a nuestro gusto podemos utilizar los siguientes atributos Step19: Ajustar colores, tipo y tamano de linea Matplotlib permite modificar a modo los colores el tipo y tamano de linea Colores con la sintaxis de MatLab Con matplotlib, podemos definir el color y el tipo de linea mediante la sintaxis de MATLAB, ejemplo 'b' significa blue, 'g' significa verde, etc. Step20: Colores con el parametro color= parameter Podemos definir los colores por medio de sus nombres o los codigos RGB hex y opcionalmente proporcionar un valor alpha con los metodos color y alpha. Alpha indica la opacidad. Step21: Linea y tipos de marcas Para cambiar el ancho de la linea podemos utilizar el metodo linewidth o lw . El estilo de linea lo podemos modificar con el metodo linestyle o ls Step22: Control sobre como aparecen los ejes En esta seccion vamos a ver como controlar el tamano y algunas propiedades de una figura de Matplotlib Plot range Podemos configurar los rangos de cada uno de los ejes con los metodos set_ylim y set_xlim en el objeto axes, o axis('tight') para un rango automatizado Step23: Graficas especiales Existen diferentes tipos de graficas
Python Code: import matplotlib.pyplot as plt Explanation: Matplotlib Introduccion Matplotlib es la libreria ejemplar para la visualización de información en Python. Fue creada por John Hunter con la intención de replicar las capacidaddes para graficar de MatLab. Es una excelente libreria para graficar 2D y 3D así como para generar figuras cientificas Algunas de las ventajas de Matplotlib son: * Facil de usar * Textos y etiquedas modificables * Control sobre cada elemento de la grafica * Alta calidad de imagen en el resultado * Generalmente modificable Matplotlib nos permite reproducir graficas programaticamente. como recomencion pueden checar la pagina oficial de Matplotlib en http://matplotlib.org/ Instalacion Necesitamos instalar primeramente matploblib con el siguiente comando conda install matplotlib Importar Importar el modulo matplotlib.pyplot con el nombre plt: End of explanation %matplotlib inline Explanation: Para la visualizar las graficas en linea ocupamos el codigo: End of explanation import numpy as np x = np.linspace(0, 5, 11) y = x ** 2 x y Explanation: Esta linea de codigo solo es necesaria para los cuadernos de jupyter, si estas usando algun otro editor, deberas de usar el comando plt.show() para visualizar la grafica en otra ventana Ejemplo Basico Utilizando dos arreglos de numpy, podemos generar una grafica End of explanation plt.plot(x, y, 'r') # 'r' es el color rojo plt.xlabel('Titulo eje X') plt.ylabel('Titulo eje Y') plt.title('Titulo de la grafica') plt.show() Explanation: Comandos básicos de Matplotlib Podemos crear una simple grafica de linea utilizando los siguientes comandos End of explanation # plt.subplot(nrows, ncols, plot_number) plt.subplot(1,2,1) plt.plot(x, y, 'r--') # Opciones de color plt.subplot(1,2,2) plt.plot(y, x, 'g*-'); Explanation: Crear varias graficas en el mismo espacio End of explanation # Crear Figura (espacio vacio) fig = plt.figure() # Agregar ejes axes = fig.add_axes([0.1, 0.1, 0.8, 0.8]) # izquierda, abajo, ancho, alto (rango de 0 a 1) # Graficar los ejes axes.plot(x, y, 'b') axes.set_xlabel('Etiqueta X ') # Utilizamos set_ al principio de los metodos axes.set_ylabel('Etiqueta Y') axes.set_title('Titulo') Explanation: Matplotlib orientado a objetos Introduccion al metodo orientado a objetos El proposito de utilizar un metodo orientado a objeto es generar un objeto figura y de ahi solo mandar llamar los metodos o atributos del objeto. Este metodo es mas efectivo al momento de manipular varias graficas Comencemos creando la instancia de figura. Posteriormente crearemos los ejes End of explanation # Crear espacio en blanco fig = plt.figure() axes1 = fig.add_axes([0.1, 0.1, 0.8, 0.8]) # ejes principales axes2 = fig.add_axes([0.2, 0.5, 0.4, 0.3]) # ejes insertados # Figura grande eje 1 axes1.plot(x, y, 'b') axes1.set_xlabel('X_etiqueta_eje1') axes1.set_ylabel('Y_etiqueta_eje1') axes1.set_title('Eje 1 Titulo') # Figura insertada eje 2 axes2.plot(y, x, 'r') axes2.set_xlabel('X_etiqueta_eje2') axes2.set_ylabel('Y_etiqueta_eje2') axes2.set_title('Eje 2 Title'); Explanation: El codigo es un poco mas complicado, pero tiene la ventaja de que tenemos el control de donde colocar las graficas. 
End of explanation # Usamos una sintaxis similar a plt.figure() excepto que utilizamos una tupla para tomar los valores de fig y axes fig, axes = plt.subplots() # Utilizamos el objeto axes para agregar las graficas axes.plot(x, y, 'r') axes.set_xlabel('x') axes.set_ylabel('y') axes.set_title('Titulo'); Explanation: subplots() El objeto plt.subplots() actuara como un manejador automatico de ejes Uso basico: End of explanation # Espacio vacio de 1 por 2 subplots fig, axes = plt.subplots(nrows=1, ncols=2) # Los ejes es un arreglo de ejes donde se puede graficar axes Explanation: The esta manera se puede especificar el numero de filas y columnas a crear al momento de crear el objeto End of explanation for ax in axes: ax.plot(x, y, 'b') ax.set_xlabel('x') ax.set_ylabel('y') ax.set_title('Titulo') # Mostrar el objeto figura fig Explanation: Se puede iterar en los ejes End of explanation fig, axes = plt.subplots(nrows=1, ncols=2) for ax in axes: ax.plot(x, y, 'g') ax.set_xlabel('x') ax.set_ylabel('y') ax.set_title('title') fig plt.tight_layout() Explanation: Un error comun en matplotlib es cuando se empalman las graficas y subgraficas, para ello se puede utilizar los metodos fig.tight_layout() o plt.tight_layout() los cuales automaticamentes ajustan las posiciones de los ejes para evitar que estos se empalmen. End of explanation fig = plt.figure(figsize=(8,4), dpi=100) Explanation: Figure size, aspect ratio and DPI Matplotlib permite el uso de aspect ratio, DPI and figure size para ser especificado al momento de crear el objeto. Podemos utilizar figsize y dpi comandos * figsize es una tupla que nos permite determinar el ancho y la altura de la figura en pulgadas * dpi son los puntos por pulgada (pixel per inch). Por ejemplo: End of explanation fig, axes = plt.subplots(figsize=(12,3)) axes.plot(x, y, 'r') axes.set_xlabel('x') axes.set_ylabel('y') axes.set_title('Titulo'); Explanation: Estos argumentos se pueden pasar al administrador del layout como una funcion subplots: End of explanation fig.savefig("NombreDelArchivo.png") Explanation: Guardar figuras Matplotlib puede generar imagenes de alta calidad, incluyendo PNG, JPG, EPS, SVG, PGF and PDF. 
Para guardar la figura en un archivo utilizamos el metodo savefig en la clase Figure : End of explanation fig.savefig("NombreDelArchivo.png", dpi=200) Explanation: En este metodo podemos opcionalmente especificar el DPI y seleccionar los diferentes formatos: End of explanation ax.set_title("title"); Explanation: Legendas, etiquetas y titulos Ahora que ya tenemos una idea de lo basico al momento de crear una figura, vamos a ver como podemos decorar nuestras graficas Titulos de la figura Un titulo puede ser agregado a cada eje de la figurapara emplearlo utiliza el metodo set_title : End of explanation ax.set_xlabel("x") ax.set_ylabel("y"); Explanation: Etiquetas de los ejes Igual que con los metodos set_xlabel y set_ylabel podemos agregar etiquetas para X y Y End of explanation fig = plt.figure() ax = fig.add_axes([0,0,1,1]) ax.plot(x, x**2, label="x**2") ax.plot(x, x**3, label="x**3") ax.legend() Explanation: Legendas Se puede utilizar el metodo legend para mostrar los elementos de la figura End of explanation # Opciones ax.legend(loc=1) # upper right corner ax.legend(loc=2) # upper left corner ax.legend(loc=3) # lower left corner ax.legend(loc=4) # lower right corner # La opcion mas comun ax.legend(loc=0) # Matplotlib decide por default fig Explanation: Si deseamos colocar la leyenda a nuestro gusto podemos utilizar los siguientes atributos End of explanation # MATLAB Sintaxis fig, ax = plt.subplots() ax.plot(x, x**2, 'b.-') # linea azul con puntos ax.plot(x, x**3, 'g--') # linea verde de lineas Explanation: Ajustar colores, tipo y tamano de linea Matplotlib permite modificar a modo los colores el tipo y tamano de linea Colores con la sintaxis de MatLab Con matplotlib, podemos definir el color y el tipo de linea mediante la sintaxis de MATLAB, ejemplo 'b' significa blue, 'g' significa verde, etc. End of explanation fig, ax = plt.subplots() ax.plot(x, x+1, color="blue", alpha=0.5) # medio transparente ax.plot(x, x+2, color="#8B008B") # RGB hex code ax.plot(x, x+3, color="#FF8C00") # RGB hex code Explanation: Colores con el parametro color= parameter Podemos definir los colores por medio de sus nombres o los codigos RGB hex y opcionalmente proporcionar un valor alpha con los metodos color y alpha. Alpha indica la opacidad. End of explanation fig, ax = plt.subplots(figsize=(12,6)) ax.plot(x, x+1, color="red", linewidth=0.25) ax.plot(x, x+2, color="red", linewidth=0.50) ax.plot(x, x+3, color="red", linewidth=1.00) ax.plot(x, x+4, color="red", linewidth=2.00) # Tipos de linea ‘-‘, ‘–’, ‘-.’, ‘:’, ‘steps’ ax.plot(x, x+5, color="green", lw=3, linestyle='-') ax.plot(x, x+6, color="green", lw=3, ls='-.') ax.plot(x, x+7, color="green", lw=3, ls=':') # configuracion de espacios line, = ax.plot(x, x+8, color="black", lw=1.50) line.set_dashes([5, 10, 15, 10]) # formato: longitud de linea, longitud de espacio # Simbolos de marcas: marker = '+', 'o', '*', 's', ',', '.', '1', '2', '3', '4', ... 
ax.plot(x, x+ 9, color="blue", lw=3, ls='-', marker='+') ax.plot(x, x+10, color="blue", lw=3, ls='--', marker='o') ax.plot(x, x+11, color="blue", lw=3, ls='-', marker='s') ax.plot(x, x+12, color="blue", lw=3, ls='--', marker='1') # marker size and color ax.plot(x, x+13, color="purple", lw=1, ls='-', marker='o', markersize=2) ax.plot(x, x+14, color="purple", lw=1, ls='-', marker='o', markersize=4) ax.plot(x, x+15, color="purple", lw=1, ls='-', marker='o', markersize=8, markerfacecolor="red") ax.plot(x, x+16, color="purple", lw=1, ls='-', marker='s', markersize=8, markerfacecolor="yellow", markeredgewidth=3, markeredgecolor="green"); Explanation: Linea y tipos de marcas Para cambiar el ancho de la linea podemos utilizar el metodo linewidth o lw . El estilo de linea lo podemos modificar con el metodo linestyle o ls End of explanation fig, axes = plt.subplots(1, 3, figsize=(12, 4)) axes[0].plot(x, x**2, x, x**3) axes[0].set_title("Rangos por default") axes[1].plot(x, x**2, x, x**3) axes[1].axis('tight') axes[1].set_title("tight axes") axes[2].plot(x, x**2, x, x**3) axes[2].set_ylim([0, 60]) axes[2].set_xlim([2, 5]) axes[2].set_title("rangos manipulados"); Explanation: Control sobre como aparecen los ejes En esta seccion vamos a ver como controlar el tamano y algunas propiedades de una figura de Matplotlib Plot range Podemos configurar los rangos de cada uno de los ejes con los metodos set_ylim y set_xlim en el objeto axes, o axis('tight') para un rango automatizado End of explanation plt.scatter(x,y) from random import sample data = sample(range(1, 1000), 100) plt.hist(data) data = [np.random.normal(0, std, 100) for std in range(1, 4)] # graficas de cuadros plt.boxplot(data,vert=True,patch_artist=True); Explanation: Graficas especiales Existen diferentes tipos de graficas: Barra, Histogramas, Puntos y más. End of explanation
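One extra sketch, added here and not part of the original tutorial, to round off the list of special plot types with a bar chart. It only assumes that matplotlib.pyplot is imported as plt, as in the earlier cells; the values and labels are made up for illustration.
valores = [3, 7, 2, 5, 8]
plt.bar(range(len(valores)), valores)
plt.xlabel('categoria')
plt.ylabel('valor')
plt.title('Grafica de barras');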
5,936
Given the following text description, write Python code to implement the functionality described below step by step Description: Chapter 9 Step1: As you can see, this folder holds a number of plain text files, ending in the .txt extension. Let us open a random file Step2: Here, we use the open() function to create a file object f, which we can use to access the actual text content of the file. Make sure that you do not pass the 'w' parameter ("write") to open(), instead of 'r' ("read"), since this would overwrite and thus erase the existing file. After assigning the string returned by f.read() to the variable text, we print the 500 first characters of text to get an impression of what it contains, using simple string indexing ([ Step3: This code block does exactly the same thing as the previous one but saves you some typing. In this chapter we would like to work with all the files in the arabian_nights directory. This is where loops come in handy of course, since what we really would like to do, is iterate over the contents of the directory. Accessing these contents in Python is easy, but requires importing some extra functionality. In this case, we need to import the os module, which contains all functionality related to the 'operating system' of your machine, such as directory information Step4: Using the dot-syntax (os.xxx), we can now access all functions that come with this module, such as listdir(), which returns a list of the items which are included under a given directory Step5: The function os.listdir() returns a list of strings, representing the filenames contained under a directory. Quiz In Burton's translation some of the 1001 nights are missing. How many? Can you come up with a clever way to find out which nights are missing? Hint Step6: With os.listdir(), you need to make sure that you pass the correct path to an existing directory Step7: It might therefore be convenient to check whether a directory actually exists in a given location Step8: The second directory, naturally, does not exist and isdir() evaluates to False in this case. Creating a new (and thus empty) directory is also easy using os Step9: We can see that it lives in the present working directory now, by typing ls again Step10: Or we use Python Step11: Removing directories is also easy, but PLEASE watch out, sometimes it is too easy Step12: And lo behold Step13: Here, we use the rmtree() command to remove the entire directory in a recursive way Step14: The folder contains things and therefore cannot be removed using this function. There are, of course, also ways to remove individual files or check whether they exist Step15: Here, we created a directory, wrote a new file to it (1001.txt), and removed it again. Using os.path.exists() we monitored at which point the file existed. Finally, the shutil module also ships with a useful copyfile() function which allows you to copy files from one location to another, possibly with another name. To copy night 66 to the present directory, for instance, we could do Step16: Indeed, we have added an exact copy of night 66 to our present working directory Step17: We can safely remove it again Step18: Paths The paths we have used so far are 'relative' paths, in the sense that they are relative to the place on our machine from which we execute our Python code. 
Absolute paths can also be retrieved and will differ on each computer, because they typically include user names etc Step19: While absolute paths are longer to type, they have the advantage that they can be used anywhere on your computer (i.e. irrespective of where you run your code from). Paths can be tricky. Suppose that we would like to open one of our filenames Step20: Python throws a FileNotFoundError, complaining that the file we wish to open does not exist. This situation stems from the fact that os.listdir() only returns the base name of a given file, and not an entire (absolute or relative) path to it. To properly access the file, we must therefore not forget to include the rest of the path again Step21: Apart from os.listdir() there are a number of other common ways to obtain directory listings in Python. Using the glob module for instance, we can easily access the full relative path leading to our Arabian Nights Step22: The asterisk (*) in the argument passed to glob.glob() is worth noting here. Just like with regular expressions, this asterisk is a sort of wildcard which will match any series of characters (i.e. the filenames under arabian_nights). When we exploit this wildcard syntax, glob.glob() offers another distinct advantage Step23: Interestingly, the command in this code block will only load filenames that end in ".txt". This is interesting when we would like to ignore other sorts of junk files etc. that might be present in a directory. To replicate similar behaviour with os.listdir(), we would have needed a typical for-loop, such as Step24: Or for you stylish coders out there, you can show off with a list comprehension Step25: However, when using glob.glob(), you might sometimes want to be able to extract a file's base name again. There are several solutions to this Step26: Both os.sep and os.path.basename have the advantage that they know what separator is used for paths in the operating system, so you don't need to explicitly code it like in the first solution. Separators differ between Windows (backslash) and Mac/Linux (forward slash). Finally, sometimes, you might be interested in all the subdirectories of a particular directory (and all the subdirectories of these subdirectories etc.). Parsing such deep directory structures can be tricky, especially if you do not know how deep a directory tree might run. You could of course try stacking multiple loops using os.listdir(), but a more convenient way is os.walk() Step27: As you can see, os.walk() allows you to efficiently loop over the entire tree. As always, don't forget that help is right around the corner in your notebooks. Using help(), you can quickly access the documentation of modules and their functions etc. (but only after you have imported the modules first!). Step28: Quiz In the next part of this chapter, we will need a way to sort our stories from the first, to the very last night. For our own convenience we will use a little hack for this. In this quiz, we would like you to create a new folder under data directory, called '1001'. You should copy all the original files from arabian_nights to this new folder, but give the files a new name, prepending zeros to filename until all nights have four digits in their name. 1001.txt stays 1001.txt, for instance, but 66.txt becomes 0066.txt and 2.txt becomes 0002.txt etc. This will make sorting the nights easier below. For this quiz you could for instance use a for loop in combination with a while loop (but don't get stuck in endless loops...) 
Step29: Parsing files Using the code from the previous quiz, it is now trivial to sort our nights sequentially on the basis of their actual name (i.e. a string variable) Step30: Using the old filenames, this was not possible directly, because of the way Python sorts strings of unequal lengths. Note that the number in the filenames are represented as strings, which are completely different from real numeric integers, and thus will be sorted differently Step31: Note Step32: Should you be interested Step33: This code reviews some of the materials from previous chapters, including the use of a regular expression, which converts all consecutive instances of whitespace (including line breaks, for instance) to a single space. After executing the previous code block, we can now test our function Step34: We can now apply this function to the contents from a random night Step35: This text looks cleaner already! We can now start to extract individual tokens from the text and count them. This process is called tokenization. Here, we make the naive assumption that words are simply space-free alphabetic strings -- which is of course wrong in the case of English words like "can't". Note that for many languages there exist better tokenizers in Python (such as the ones in the Natural Language Toolkit (nltk). We suffice with a simpler approach for now Step36: Using the list comprehension, we make sure that we do not accidentally return empty strings as a token, for instance, at the beginning of a text which starts with a newline. Remember that anything in Python with a length of 0, will evaluate to False, which explains the if t in the comprehension Step37: We can now start analyzing our nights. A good start would be to check the length of each night in words Step38: Quiz Iterate over all the nights in 1001 in a sorted way. Open, preprocess and tokenize each text. Store in a list called word_counts how many words each story has. Step39: We now have a list of numbers, which we can plot over time. We will cover plotting more extensively in one of the next chapters. The things below are just a teaser. Start by importing matplotlib, which is imported as follows by convention Step40: The second line is needed to make sure that the plots will properly show up in our notebook. Let us start with a simple visualization Step41: As you can see, this simple command can be used to quickly obtain a visualization that shows interesting trends. On the y-axis, we plot absolute word counts for each of our nights. The x-axis is figured out automatically by matplotlib and adds an index on the horizontal x-axis. Implicitly, it interprets our command as follows Step42: When plt.plot receives two flat lists as arguments, it plots the first along the x-axis, and the second along the y-axis. If it only receives one list, it plots it along the y-axis and uses the range we now (redundantly) specified here for the x-axis. This is in fact a subtoptimal plot, since the index of the first data point we plot is zero, although the name of the first night is '1.txt'. Additionally, we know that there are some nights missing in our data. To set this straight, we could pass in our own x-coordinates as follows Step43: We can now make our plot more truthful, and add some bells and whistles Step44: Quiz Using axvline() you can add vertical lines to a plot, for instance at position Step45: Write code that plots the position of the missing nights using this function (and blue lines). 
Step46: Right now, we are visualizing texts, but we might also be interested in the vocabulary used in the story collection. Counting how often a word appears in a text is trivial for you right now with custom code, for instance Step47: One interesting item which you can use for counting in Python is the Counter object, which we can import as follows Step48: This Counter makes it much easier to write code for counting. Below you can see how this counter automatically creates a dictionary-like structure Step49: If we would like to find which items are most frequent for instance, we could simply do Step50: We can also pass the Counter the tokens to count in multiple stages Step51: After passing our tokens twice to the counter, we see that the numbers double in size. Quiz Write code that makes a word frequency counter named vocab, which counts the cumulative frequencies of all words in the Arabian Nights. Which are the 15 most frequent words? Does that make sense? Step52: Let us now finally visualize the frequencies of the 15 most frequent items using a standard barplot in matplotlib. This can be achieved as follows. We first split out the names and frequencies, since .mostcommon(n) returns a list of tuples, and we create indices Step53: Next, we simply do Step54: Et voilà! Closing Assignment In this larger assignment, you will have to perform some basic text processing on the larger set of XML-encoded files under data/TEI/french_plays. For this assignment, there are several subtasks
Python Code: ls data/arabian_nights Explanation: Chapter 9: What we have covered so far (and a bit more) In this chapter, we will work our way through a concise review of the Python functionality we have covered so far. Throughout this chapter, we will work with a interesting, yet not too large dataset, namely the well-known Arabian nights. Alf Laylah Wa Laylah, the Stories of One Thousand and One Nights is a collection of folk tales, collected over many centuries by various authors, translators, and scholars across West, Central and South Asia and North Africa. It forms a huge narrative wheel with an overarching plot, created by the frame story of Shahrazad. The stories begin with the tale of king Shahryar and his brother, who, having both been deceived by their respective Sultanas, leave their kingdom, only to return when they have found someone who — in their view — was wronged even more. On their journey the two brothers encounter a huge jinn who carries a glass box containing a beautiful young woman. The two brothers hide as quickly as they can in a tree. The jinn lays his head on the girl’s lap and as soon as he is asleep, the girl demands the two kings to make love to her or else she will wake her ‘husband’. They reluctantly give in and the brothers soon discover that the girl has already betrayed the jinn ninety-eight times before. This exemplar of lust and treachery strengthens the Sultan’s opinion that all women are wicked and not to be trusted. When king Shahryar returns home, his wrath against women has grown to an unprecedented level. To temper his anger, each night the king sleeps with a virgin only to execute her the next morning. In order to make an end to this cruelty and save womanhood from a "virgin scarcity", Sharazad offers herself as the next king’s bride. On the first night, Sharazad begins to tell the king a story, but she does not end it. The king’s curiosity to know how the story ends, prevents him from executing Shahrazad. The next night Shahrazad finishes her story, and begins a new one. The king, eager to know the ending of this tale as well, postpones her execution once more. Using this strategy for One Thousand and One Nights in a labyrinth of stories-within-stories-within-stories, Shahrazad attempts to gradually move the king’s cynical stance against women towards a politics of love and justice (see Marina Warner’s Stranger Magic (2013) in case you're interested). The first European version of the Nights was translated into French by Antoine Galland. Many translations (in different languages) followed, such as the (heavily criticized) English translation by Sir Richard Francis Burton entitled The Book of the Thousand and a Night (1885). This version is freely available from the Gutenberg project (see here), and will be the one we will explore here. Files and directories In the notebooks we use, there is a convenient way to quickly inspect the contents of a folder using the ls command. Our Arabian nights are contained under the general data folder: End of explanation f = open('data/arabian_nights/848.txt', 'r') text = f.read() f.close() print(text[:500]) Explanation: As you can see, this folder holds a number of plain text files, ending in the .txt extension. Let us open a random file: End of explanation with open('data/arabian_nights/848.txt', 'r') as f: text = f.read() print(text[:500]) Explanation: Here, we use the open() function to create a file object f, which we can use to access the actual text content of the file. 
Make sure that you do not pass the 'w' parameter ("write") to open(), instead of 'r' ("read"), since this would overwrite and thus erase the existing file. After assigning the string returned by f.read() to the variable text, we print the 500 first characters of text to get an impression of what it contains, using simple string indexing ([:500]). Don't forget to close the file again after you have opened or strange things could happen to your file! One little trick which is commonly used to avoid having to explicitly open and close your file is a with block (mind the indentation): End of explanation import os Explanation: This code block does exactly the same thing as the previous one but saves you some typing. In this chapter we would like to work with all the files in the arabian_nights directory. This is where loops come in handy of course, since what we really would like to do, is iterate over the contents of the directory. Accessing these contents in Python is easy, but requires importing some extra functionality. In this case, we need to import the os module, which contains all functionality related to the 'operating system' of your machine, such as directory information: End of explanation filenames = os.listdir('data/arabian_nights') print(len(filenames)) print(filenames[:20]) Explanation: Using the dot-syntax (os.xxx), we can now access all functions that come with this module, such as listdir(), which returns a list of the items which are included under a given directory End of explanation # your code goes here Explanation: The function os.listdir() returns a list of strings, representing the filenames contained under a directory. Quiz In Burton's translation some of the 1001 nights are missing. How many? Can you come up with a clever way to find out which nights are missing? Hint: a counting loop and some string casting might be useful here! End of explanation os.listdir('data/belgian_nights') Explanation: With os.listdir(), you need to make sure that you pass the correct path to an existing directory: End of explanation print(os.path.isdir('data/arabian_nights')) print(os.path.isdir('data/belgian_nights')) Explanation: It might therefore be convenient to check whether a directory actually exists in a given location: End of explanation os.mkdir('belgian_nights') Explanation: The second directory, naturally, does not exist and isdir() evaluates to False in this case. Creating a new (and thus empty) directory is also easy using os: End of explanation ls Explanation: We can see that it lives in the present working directory now, by typing ls again: End of explanation print(os.path.isdir('belgian_nights')) Explanation: Or we use Python: End of explanation import shutil shutil.rmtree('belgian_nights') Explanation: Removing directories is also easy, but PLEASE watch out, sometimes it is too easy: if you remove a wrong directory in Python, it will be gone forever... Unlike other applications, Python does not keep a copy of it in your Trash and it does not have a Ctrl-Z button. Please watch out with what you do, since with great power comes great responsiblity! 
Removing the entire directory which we just created can be done as follows: End of explanation print(os.path.isdir('belgian_nights')) Explanation: And lo behold: the directory has disappeared again: End of explanation os.rmdir('data/arabian_nights') Explanation: Here, we use the rmtree() command to remove the entire directory in a recursive way: even if the directory isn't empty and contains files and subfolders, we will remove all of them. The os module also comes with a rmdir() but this will not allow you to remove a directory which is not empty, as becomes clear in the OSError raised below: End of explanation os.mkdir('belgian_nights') f = open('belgian_nights/1001.txt', 'w') f.write('Content') f.close() print(os.path.exists('belgian_nights/1001.txt')) os.remove('belgian_nights/1001.txt') print(os.path.exists('belgian_nights/1001.txt')) Explanation: The folder contains things and therefore cannot be removed using this function. There are, of course, also ways to remove individual files or check whether they exist: End of explanation shutil.copyfile('data/arabian_nights/66.txt', 'new_66.txt') Explanation: Here, we created a directory, wrote a new file to it (1001.txt), and removed it again. Using os.path.exists() we monitored at which point the file existed. Finally, the shutil module also ships with a useful copyfile() function which allows you to copy files from one location to another, possibly with another name. To copy night 66 to the present directory, for instance, we could do: End of explanation ls Explanation: Indeed, we have added an exact copy of night 66 to our present working directory: End of explanation os.remove('new_66.txt') Explanation: We can safely remove it again: End of explanation os.path.abspath('data/arabian_nights/848.txt') Explanation: Paths The paths we have used so far are 'relative' paths, in the sense that they are relative to the place on our machine from which we execute our Python code. Absolute paths can also be retrieved and will differ on each computer, because they typically include user names etc: End of explanation filenames = os.listdir('data/arabian_nights') random_filename = filenames[9] with open(random_filename, 'r') as f: text = f.read() print(text[:500]) Explanation: While absolute paths are longer to type, they have the advantage that they can be used anywhere on your computer (i.e. irrespective of where you run your code from). Paths can be tricky. Suppose that we would like to open one of our filenames: End of explanation filenames = os.listdir('data/arabian_nights') random_filename = filenames[9] with open('data/arabian_nights/'+ random_filename, 'r') as f: text = f.read() print(text[:500]) Explanation: Python throws a FileNotFoundError, complaining that the file we wish to open does not exist. This situation stems from the fact that os.listdir() only returns the base name of a given file, and not an entire (absolute or relative) path to it. To properly access the file, we must therefore not forget to include the rest of the path again: End of explanation import glob filenames = glob.glob('data/arabian_nights/*') print(filenames[:10]) Explanation: Apart from os.listdir() there are a number of other common ways to obtain directory listings in Python. Using the glob module for instance, we can easily access the full relative path leading to our Arabian Nights: End of explanation filenames = glob.glob('data/arabian_nights/*.txt') print(filenames[:10]) Explanation: The asterisk (*) in the argument passed to glob.glob() is worth noting here. 
Just like with regular expressions, this asterisk is a sort of wildcard which will match any series of characters (i.e. the filenames under arabian_nights). When we exploit this wildcard syntax, glob.glob() offers another distinct advantage: we can use it to easily filter out filenames which we are not interested in: End of explanation filenames = [] for fn in os.listdir('data/arabian_nights'): if fn.endswith('.txt'): filenames.append(fn) print(filenames[:10]) Explanation: Interestingly, the command in this code block will only load filenames that end in ".txt". This is interesting when we would like to ignore other sorts of junk files etc. that might be present in a directory. To replicate similar behaviour with os.listdir(), we would have needed a typical for-loop, such as: End of explanation filenames = [fn for fn in os.listdir('data/arabian_nights') if fn.endswith('.txt')] Explanation: Or for you stylish coders out there, you can show off with a list comprehension: End of explanation filenames = glob.glob('data/arabian_nights/*.txt') fn = filenames[10] # simple string splitting: print(fn.split('/')[-1]) # using os.sep: print(fn.split(os.sep)[-1]) # using os.path: print(os.path.basename(fn)) Explanation: However, when using glob.glob(), you might sometimes want to be able to extract a file's base name again. There are several solutions to this: End of explanation for root, directory, filename in os.walk("data"): print(filename) Explanation: Both os.sep and os.path.basename have the advantage that they know what separator is used for paths in the operating system, so you don't need to explicitly code it like in the first solution. Separators differ between Windows (backslash) and Mac/Linux (forward slash). Finally, sometimes, you might be interested in all the subdirectories of a particular directory (and all the subdirectories of these subdirectories etc.). Parsing such deep directory structures can be tricky, especially if you do not know how deep a directory tree might run. You could of course try stacking multiple loops using os.listdir(), but a more convenient way is os.walk(): End of explanation help(os.walk) Explanation: As you can see, os.walk() allows you to efficiently loop over the entire tree. As always, don't forget that help is right around the corner in your notebooks. Using help(), you can quickly access the documentation of modules and their functions etc. (but only after you have imported the modules first!). End of explanation # your quiz code Explanation: Quiz In the next part of this chapter, we will need a way to sort our stories from the first, to the very last night. For our own convenience we will use a little hack for this. In this quiz, we would like you to create a new folder under data directory, called '1001'. You should copy all the original files from arabian_nights to this new folder, but give the files a new name, prepending zeros to filename until all nights have four digits in their name. 1001.txt stays 1001.txt, for instance, but 66.txt becomes 0066.txt and 2.txt becomes 0002.txt etc. This will make sorting the nights easier below. For this quiz you could for instance use a for loop in combination with a while loop (but don't get stuck in endless loops...) End of explanation for fn in sorted(os.listdir('data/1001')): print(fn) Explanation: Parsing files Using the code from the previous quiz, it is now trivial to sort our nights sequentially on the basis of their actual name (i.e. 
a string variable): End of explanation for fn in sorted(os.listdir('data/arabian_nights/')): print(fn) Explanation: Using the old filenames, this was not possible directly, because of the way Python sorts strings of unequal lengths. Note that the number in the filenames are represented as strings, which are completely different from real numeric integers, and thus will be sorted differently: End of explanation for fn in sorted(os.listdir('data/arabian_nights/'), key=lambda nb: int(nb[:-4])): print(fn) Explanation: Note: There is a more elegant, but also slightly less trivial way to achieve the correct order in this case: End of explanation import re def preprocess(in_str): out_str = '' for c in in_str.lower(): if c.isalpha() or c.isspace(): out_str += c whitespace = re.compile(r'\s+') out_str = whitespace.sub(' ', out_str) return out_str Explanation: Should you be interested: here, we pass a key argument to sort, which specifies which operations should be applied to the filenames before actually sorting them. Here, we specify a so-called lambda function to key, which is less intuitive to read, but which allow you to specify a sort of 'mini-function' in a very condensed way: this lambda function chops off the last four characters from each filename and then converts (or 'casts') the results to a new data type using int(), namely an integer (a 'whole' number, as opposed to floating point numbers). Eventually, this leads to the same order. More functions So far, we have been using pre-existing, ready-made functions from Python's standard library, or the standard set of functionality which comes with the programming language. Importantly, there are two additional ways of using functions on your code, which we will cover below: (i) you can write your own functions, and (ii) you can use functions from other, external libraries, which have been developped by so-called 'third parties'. Below, we will for instance use plotting functions from matplotlib, which is a common visualization library for Python. At this point, we have an efficient way of looping over the Arabian Nights sequentially. What we still lack, are functions to load and clean our data. As you could see above, our files still contain a lot of punctuation marks etc., which are perhaps less interesting from the point of view of textual analysis. Let us write a simple function that takes a string as input, and returns a cleaner version of it, where all characters are lowercased, and only alphabetic characters are kept: End of explanation old_str = 'This; is -- a very DIRTY string!' new_str = preprocess(old_str) print(new_str) Explanation: This code reviews some of the materials from previous chapters, including the use of a regular expression, which converts all consecutive instances of whitespace (including line breaks, for instance) to a single space. After executing the previous code block, we can now test our function: End of explanation with open('data/1001/0007.txt', 'r') as f: in_str = f.read() print(preprocess(in_str)) Explanation: We can now apply this function to the contents from a random night: End of explanation def tokenize(in_str): tokens = in_str.split() tokens = [t for t in tokens if t] return tokens Explanation: This text looks cleaner already! We can now start to extract individual tokens from the text and count them. This process is called tokenization. Here, we make the naive assumption that words are simply space-free alphabetic strings -- which is of course wrong in the case of English words like "can't". 
Note that for many languages there exist better tokenizers in Python (such as the ones in the Natural Language Toolkit (nltk). We suffice with a simpler approach for now: End of explanation with open('data/1001/0007.txt', 'r') as f: in_str = f.read() tokens = tokenize(preprocess(in_str)) print(tokens[:10]) Explanation: Using the list comprehension, we make sure that we do not accidentally return empty strings as a token, for instance, at the beginning of a text which starts with a newline. Remember that anything in Python with a length of 0, will evaluate to False, which explains the if t in the comprehension: empty strings will fail this condition. We can start stacking our functions now: End of explanation print(len(tokens)) Explanation: We can now start analyzing our nights. A good start would be to check the length of each night in words: End of explanation # your quiz code Explanation: Quiz Iterate over all the nights in 1001 in a sorted way. Open, preprocess and tokenize each text. Store in a list called word_counts how many words each story has. End of explanation import matplotlib.pyplot as plt %matplotlib inline Explanation: We now have a list of numbers, which we can plot over time. We will cover plotting more extensively in one of the next chapters. The things below are just a teaser. Start by importing matplotlib, which is imported as follows by convention: End of explanation plt.plot(word_counts) Explanation: The second line is needed to make sure that the plots will properly show up in our notebook. Let us start with a simple visualization: End of explanation plt.plot(range(0, len(word_counts)), word_counts) Explanation: As you can see, this simple command can be used to quickly obtain a visualization that shows interesting trends. On the y-axis, we plot absolute word counts for each of our nights. The x-axis is figured out automatically by matplotlib and adds an index on the horizontal x-axis. Implicitly, it interprets our command as follows: End of explanation filenames = sorted(os.listdir('data/1001')) idxs = [int(i[:-4]) for i in filenames] print(idxs[:20]) print(min(idxs)) print(max(idxs)) Explanation: When plt.plot receives two flat lists as arguments, it plots the first along the x-axis, and the second along the y-axis. If it only receives one list, it plots it along the y-axis and uses the range we now (redundantly) specified here for the x-axis. This is in fact a subtoptimal plot, since the index of the first data point we plot is zero, although the name of the first night is '1.txt'. Additionally, we know that there are some nights missing in our data. To set this straight, we could pass in our own x-coordinates as follows: End of explanation plt.plot(idxs, word_counts, color='r') plt.xlabel('Word length') plt.ylabel('# words (absolute counts)') plt.title('The Arabian Nights') plt.xlim(1, 1001) Explanation: We can now make our plot more truthful, and add some bells and whistles: End of explanation plt.plot(idxs, word_counts, color='r') plt.xlabel('Word length') plt.ylabel('# words (absolute counts)') plt.title(r'The Arabian Nights') plt.xlim(1, 1001) plt.axvline(500, color='g') Explanation: Quiz Using axvline() you can add vertical lines to a plot, for instance at position: End of explanation # quiz code goes here Explanation: Write code that plots the position of the missing nights using this function (and blue lines). 
End of explanation cnts = {} for word in tokens: if word in cnts: cnts[word] += 1 else: cnts[word] = 1 print(cnts) Explanation: Right now, we are visualizing texts, but we might also be interested in the vocabulary used in the story collection. Counting how often a word appears in a text is trivial for you right now with custom code, for instance: End of explanation from collections import Counter Explanation: One interesting item which you can use for counting in Python is the Counter object, which we can import as follows: End of explanation cnt = Counter(tokens) print(cnt) Explanation: This Counter makes it much easier to write code for counting. Below you can see how this counter automatically creates a dictionary-like structure: End of explanation print(cnt.most_common(25)) Explanation: If we would like to find which items are most frequent for instance, we could simply do: End of explanation cnt = Counter() cnt.update(tokens) cnt.update(tokens) print(cnt.most_common(25)) Explanation: We can also pass the Counter the tokens to count in multiple stages: End of explanation # quiz code Explanation: After passing our tokens twice to the counter, we see that the numbers double in size. Quiz Write code that makes a word frequency counter named vocab, which counts the cumulative frequencies of all words in the Arabian Nights. Which are the 15 most frequent words? Does that make sense? End of explanation freqs = [f for _, f in vocab.most_common(15)] words = [w for w, _ in vocab.most_common(15)] # note the use of underscores for 'throwaway' variables idxs = range(1, len(freqs)+1) Explanation: Let us now finally visualize the frequencies of the 15 most frequent items using a standard barplot in matplotlib. This can be achieved as follows. We first split out the names and frequencies, since .mostcommon(n) returns a list of tuples, and we create indices: End of explanation plt.barh(idxs, freqs, align='center') plt.yticks(idxs, words) plt.xlabel('Words') plt.ylabel('Cumulative absolute frequencies') Explanation: Next, we simply do: End of explanation from IPython.core.display import HTML def css_styling(): styles = open("styles/custom.css", "r").read() return HTML(styles) css_styling() Explanation: Et voilà! Closing Assignment In this larger assignment, you will have to perform some basic text processing on the larger set of XML-encoded files under data/TEI/french_plays. For this assignment, there are several subtasks: 1. Each of these files represent a play written by a particular author (see the &lt;author&gt; element): count how many texts were written by each author in the entire corpus. Make use of a Counter. 2. Each play has a cast list (&lt;castList&gt;), with a role-element for every character in it. In this element, the civil-attribute encodes the gender of the character (M/F, or another charatcer ). Create for each individual author a barplot using matplotlib, showing the percentage of male, female and 'other' characters as a percentage. Pick beautiful colors. 3. Difficult: The information contained in the castList is priceless, because it allows us to determine for each word in the play by whom it is uttered, since the &lt;sp&gt; tag encodes which character in the cast list is speaking at a particular time. Parse play 156.xml (L'Amour à la mode) and calculate which of the characters has the highest vocabulary richness: divide the number of unique words in the speaker's utterances by the total number of words (s)he utters. 
Only consider speakers that utter at least 1000 tokens in the play. Hint: If you run into encoding errors etc. when processing larger text collections, you can always use try/except constructions to catch them. Ignore the following, it's just here to make the page pretty: End of explanation
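A hedged starting point for subtask 1 of the closing assignment (counting plays per author) could look like the sketch below. It assumes the files really sit under data/TEI/french_plays/*.xml and that each play carries an <author> element, possibly inside a TEI namespace; it is only one possible approach, not the intended solution.
import glob
import xml.etree.ElementTree as ET
from collections import Counter

author_counts = Counter()
for fn in glob.glob('data/TEI/french_plays/*.xml'):
    try:
        tree = ET.parse(fn)
    except ET.ParseError:
        continue  # skip files that do not parse cleanly
    # match any element whose local tag name is 'author', ignoring namespaces
    for elem in tree.iter():
        if elem.tag.split('}')[-1] == 'author' and elem.text:
            author_counts[elem.text.strip()] += 1
            break  # count each play once, for its first author element
print(author_counts.most_common(15))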
5,937
Given the following text description, write Python code to implement the functionality described below step by step Description: Let's take a look at Basel airport Step1: The documentation of BeautifulSoup really is beautiful; it is worth taking a look at it. We start by analysing the arrivals page of the Basel/Mulhouse airport. Reading in the URL Step2: Developer tools in the browser Let's look for the data that really interests us, starting with the browser's developer tools. Find and findall Step3: Let's define a variable for it. Step4: Let's pull out all "row-1" and "row-0" rows. Step5: Working with the lists & find next sibling Step6: Shall we put all of this into a loop? Step7: Shall we go through the second list? Step8: Let's combine both lists
Python Code: import requests from bs4 import BeautifulSoup import pandas as pd Explanation: Schauen wir uns den Flughafen Basel an End of explanation url = "https://www.euroairport.com/en/flights/daily-arrivals.html" response = requests.get(url) arrivals_soup = BeautifulSoup(response.text, 'html.parser') arrivals_soup Explanation: Die Dokumentatiovon BeautifulSoup ist wirklich sehr beautiful. Es lohnt sich hier einen Blick darauf zu werfen. Beginnen wir damit, die Arrivals Site des Flughafens Basel/Mulhouse zu analysieren. URL einlesen End of explanation arrivals_soup.find('tbody') arrivals_soup.find('div', {'class': 'cblock modules-flights-flightlist modules-flights'}) Explanation: Developer Tools im Browser Suchen wir die Daten, der uns wirklich interessiert. Beginnen bei den Developers Tools. Find und Findall End of explanation table = arrivals_soup.find('div', {'class': 'cblock modules-flights-flightlist modules-flights'}) type(table) table.text Explanation: Definieren wir eine Variable damit. End of explanation row0 = table.find_all('tr', {'class': 'row-0'}) row1 = table.find_all('tr', {'class': 'row-1'}) allrows= table.find_all('tr') Explanation: Holen wir alle "row-1" und "row-0" heraus. End of explanation type(row0) len(row1) len(row0) len(allrows) allrows[0] allrows = allrows[1:] len(allrows) allrows[0] allrows[0].find('td', {'class':'first'}).text allrows[0] allrows[0].find('td').find_next_sibling('td').text row0[0].find('td').find_next_sibling('td') \ .find_next_sibling('td') \ .text.replace('\n', '') \ .replace('\t','') row0[0].find('td').find_next_sibling('td') \ .find_next_sibling('td') \ .find_next_sibling('td').text \ .replace('\t','').replace('\n', '') row0[0].find('td').find_next_sibling('td') \ .find_next_sibling('td') \ .find_next_sibling('td') \ .find_next_sibling('td').text \ .replace('\t','').replace('\n', '') row0[0].find('td', {'class': 'last'}).text Explanation: Arbeit mit den Listen & find next sibling End of explanation fluege = [] for elem in allrows[1:]: ga_zeit = elem.find('td', {'class':'first'}).text herkunft = elem.find('td', {'class': 'first'}).find_next_sibling('td').text airline = elem.find('td', {'class': 'first'}).find_next_sibling('td') \ .find_next_sibling('td') \ .text.replace('\n', '').replace('\t','') nummer = elem.find('td', {'class': 'first'}).find_next_sibling('td') \ .find_next_sibling('td') \ .find_next_sibling('td').text \ .replace('\t','').replace('\n', '') a_zeit = elem.find('td', {'class': 'first'}).find_next_sibling('td') \ .find_next_sibling('td') \ .find_next_sibling('td') \ .find_next_sibling('td').text \ .replace('\t','').replace('\n', '') typ = elem.find('td', {'class': 'last'}).text mini_dict = {'Geplante Ankunft': ga_zeit, 'Ankunft': a_zeit, 'Herkunft': herkunft, 'Flugnummer': nummer, 'Passagier/Cargo': typ} fluege.append(mini_dict) Explanation: Packen wir das alles in einen Loop? 
End of explanation fluege1 = [] #das ändern for elem in row1: ga_zeit = elem.find('td', {'class': 'first'}).text herkunft = elem.find('td', {'class': 'first'}).find_next_sibling('td').text airline = elem.find('td', {'class': 'first'}).find_next_sibling('td') \ .find_next_sibling('td') \ .text.replace('\n', '').replace('\t','') nummer = elem.find('td', {'class': 'first'}).find_next_sibling('td') \ .find_next_sibling('td') \ .find_next_sibling('td').text \ .replace('\t','').replace('\n', '') a_zeit = elem.find('td', {'class': 'first'}).find_next_sibling('td') \ .find_next_sibling('td') \ .find_next_sibling('td') \ .find_next_sibling('td').text \ .replace('\t','').replace('\n', '') typ = elem.find('td', {'class': 'last'}).text mini_dict = {'Geplante Ankunft': ga_zeit, 'Ankunft': a_zeit, 'Ankunft aus': herkunft, 'Flugnummer': nummer, 'Passagier/Cargo': typ} fluege1.append(mini_dict) #und hier Explanation: Gehen wir die zweite Liste durch? End of explanation f = fluege pd.DataFrame(f) df = pd.DataFrame(f) df.to_csv('fluege_BS.csv') Explanation: Verbinden wir beide Listen End of explanation
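A small, hedged addition to the last step: the final cell builds the DataFrame from fluege alone, and fluege was filled from allrows, which already contains both the "row-0" and "row-1" table rows. If the two lists were meant to be merged explicitly, a sketch like the following would do it; note that fluege1 stores the origin under the key 'Ankunft aus' rather than 'Herkunft', so the keys are harmonised first, and the output filename is arbitrary.
import pandas as pd

# align the differing dictionary keys before concatenating the two lists
for eintrag in fluege1:
    if 'Ankunft aus' in eintrag:
        eintrag['Herkunft'] = eintrag.pop('Ankunft aus')

beide_listen = fluege + fluege1
df_beide = pd.DataFrame(beide_listen)
df_beide.to_csv('fluege_BS_beide.csv', index=False)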
5,938
Given the following text description, write Python code to implement the functionality described below step by step Description: DESI spectral extraction code benchmarks Stephen Bailey<br/> Lawrence Berkeley National Lab<br/> Spring 2017 Update 2017-03-12 Intel engineers identified the cause of the previous lack of scaling with number of processes (an OpenMP bug on KNL). The work-around is Step1: Scaling tests Test the single node scaling performance for Haswell and KNL with two methods Step2: Startup time Python startup time was significant for N>>1 MPI processes, especially on KNL. All code was installed to $SCRATCH or /global/common (fast metadata disks) and $PYTHONPATH did not have anything in /home or /project (slow metadata disks). "Wakeup" timed the following imports
Python Code: %pylab inline import numpy as np from astropy.table import Table #- Scaling with OMP_NUM_THREADS (or not) nt_hsw = Table.read('data/extract/ex-nthread-hsw-1.dat', format='ascii') nt_knl = Table.read('data/extract/ex-nthread-knl-1.dat', format='ascii') plot(nt_hsw['OMP_NUM_THREADS'], nt_hsw['time'], 'bs-') plot(nt_knl['OMP_NUM_THREADS'], nt_knl['time'], 'gd-') xlabel('OMP_NUM_THREADS'); ylabel('time [sec]') xticks(nt_hsw['OMP_NUM_THREADS'], nt_hsw['OMP_NUM_THREADS']) xlim(0,65); ylim(0,130) ratio = np.max(nt_knl['time']) / np.max(nt_hsw['time']) print('ratio = {:.1f}'.format(ratio)) Explanation: DESI spectral extraction code benchmarks Stephen Bailey<br/> Lawrence Berkeley National Lab<br/> Spring 2017 Update 2017-03-12 Intel engineers identified the cause of the previous lack of scaling with number of processes (an OpenMP bug on KNL). The work-around is: * for multiprocessing, use $KMP_AFFINITY=disabled * for MPI, use srun --cpu_bind=cores ... * On Haswell, neither of these is necessary to achieve reasonable scaling Introduction These benchmarks test the DESI spectral extraction code, which is the most computationally intensive portion of our data processing pipeline. It performs a forward modeling analysis of astronomical spectra projected onto 2D CCD images. Early non-KNL benchmarking indicated that it was spending approximately 1/3 of its time in each of * MKL via scipy.linalg.eigh * Miscellaneous non-MKL compiled code in numpy and scipy * Miscellaneous pure python We have not recently re-benchmarked this code and these benchmarks use a smaller example problem that may have a different ratio. See the end of this notebook for how to get the code and run the benchmarks. Scaling with \$OMP_NUM_THREADS Initial test: check if $OMP_NUM_THREADS matters for this code. Takeaway points: * $OMP_NUM_THREADS doesn't matter for this code, presumably because it is spending most of its time in non-MKL / non-OpenMP code. 
* single process Haswell ~6x faster than single-process KNL example srun commands: ``` - Haswell srun -n 1 -c 64 --cpu_bind=cores python extract.py 1 4 16 32 64 - KNL srun -n 1 -c 256 --cpu_bind=cores python extract.py 1 4 16 64 256 ``` On Haswell, no significant effect from setting or not * OMP_PROC_BIND=spread * OMP_PLACES=cores * KMP_AFFINITY=disabled The remaining tests use $OMP_NUM_THREADS=1 End of explanation #- Scaling with OMP_NUM_THREADS (or not) hsw_mpi = Table.read('data/extract/exmpi-hsw-2.dat', format='ascii') knl_mpi = Table.read('data/extract/exmpi-knl-2.dat', format='ascii') hsw_mp = Table.read('data/extract/exmp-hsw-2.dat', format='ascii') knl_mp = Table.read('data/extract/exmp-knl-2.dat', format='ascii') rcParams['legend.fontsize'] = 10 plot(hsw_mp['nproc'], hsw_mp['rate'], 'bs-', label='Haswell multiprocessing') plot(hsw_mpi['nproc'], hsw_mpi['rate'], 'bs:', label='Haswell MPI') plot(knl_mp['nproc'], knl_mp['rate'], 'gd-', label='KNL multiprocessing') plot(knl_mpi['nproc'], knl_mpi['rate'], 'gd:', label='KNL MPI') x = hsw_mp['nproc'] plot(x, hsw_mp['rate'][0]*x/x[0], 'k-', alpha=0.5, label='perfect scaling') x = knl_mp['nproc'] plot(x, knl_mp['rate'][0]*x/x[0], 'k-', alpha=0.5, label='_none_') legend(loc='lower right') xlabel('Number of processes'); ylabel('extractions per second') loglog() xlim(1,300) xticks([1,4,16,64,256], [1,4,16,64,256]) title('5 spec x 50 wavelengths') maxhsw = max(np.max(hsw_mp['rate']), np.max(hsw_mpi['rate'])) maxknl = max(np.max(knl_mp['rate']), np.max(knl_mpi['rate'])) print('ratio of Haswell/KNL node rate = {:.1f}'.format(maxhsw/maxknl)) print('ratio of Haswell/KNL process rate = {:.1f}'.format(hsw_mpi['rate'][0]/knl_mpi['rate'][0])) # savefig('extract-scaling.png') #- Scaling with larger extraction size (more realistic for how DESI currently runs) hsw = Table.read('data/extract/exmp-hsw-3.dat', format='ascii') knl = Table.read('data/extract/exmp-knl-3.dat', format='ascii') plot(hsw['nproc'], hsw['rate'], 'bs-', label='Haswell') plot(knl['nproc'], knl['rate'], 'gd-', label='KNL') loglog() legend(loc='upper left') xlabel('Number of processes') ylabel('extractions per second') xticks([1,4,16,64,256], [1,4,16,64,256]) xlim(1,300) title('25 spec x 50 wavelengths') print('HSW/KNL = {}'.format(np.max(hsw['rate'])/np.max(knl['rate']))) Explanation: Scaling tests Test the single node scaling performance for Haswell and KNL with two methods: * python multiprocessing * mpi4py This is a data-parallel problem where different processes/ranks are working on different pieces of data without needing to communicate with each other. Under ideal scaling each iteration will take the same amount of well time such that more processes = more data processed = higher total rate. Takeaway points: Scaling is quite good up to the number of physical cores, with slight degredation after that on Haswell and slight gains on KNL. A Haswell nodes is 2-3x faster than a KNL node Not shown here: On Haswell, not setting $OMP_NUM_THREADS=1 clobbers performance at larger concurrency, even though the previous test indicated that $OMP_NUM_THREADS &gt; 1 doesn't help for a single process. 
End of explanation plot(hsw_mpi['nproc'], hsw_mpi['wakeup']/60, 'bs:', label='Haswell MPI') plot(knl_mpi['nproc'], knl_mpi['wakeup']/60, 'gd:', label='KNL MPI') axhline(3.5/60, color='b', label='Haswell multiprocessing') axhline(12.8/60, color='g', label='KNL multiprocessing') ylabel('python wakeup time [minutes]') xlabel('MPI processes') legend(loc='upper left') title('Python startup time') xticks([1,16,64,128], [1,16,64,128]) Explanation: Startup time Python startup time was significant for N>>1 MPI processes, especially on KNL. All code was installed to $SCRATCH or /global/common (fast metadata disks) and $PYTHONPATH did not have anything in /home or /project (slow metadata disks). "Wakeup" timed the following imports: ```python from mpi4py import MPI comm = MPI.COMM_WORLD import sys, os import platform import optparse import multiprocessing import numpy as np from specter.extract import ex2d import specter.psf import knltest ``` End of explanation
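For completeness, a hedged sketch of how such a wakeup measurement could be taken; this is an illustration under stated assumptions, not the harness actually used for these benchmarks. The idea is to wrap the import block in a timer and report the slowest MPI rank, since the job cannot start before every rank has finished importing.
import time
t0 = time.time()

from mpi4py import MPI
comm = MPI.COMM_WORLD
import sys, os
import platform
import optparse
import multiprocessing
import numpy as np
from specter.extract import ex2d
import specter.psf

wakeup = time.time() - t0
# the slowest rank sets the effective startup cost of the whole job
max_wakeup = comm.reduce(wakeup, op=MPI.MAX, root=0)
if comm.rank == 0:
    print('python wakeup: {:.1f} s on {} ranks'.format(max_wakeup, comm.size))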
5,939
Given the following text description, write Python code to implement the functionality described below step by step Description: NCEMPY's 3D slicer Interactively scroll through all images in a 3D dataset Ideal for time series data and can be used for volume data Set the dirName nad fName below to point to your data and run all Step1: Load a 3D data set Use the simple ser reader function Change the file_name and the reader function as needed Step2: Interactively show images in the time series Use Jupyter's ipywidgets to create a basic user interface Use the scroll bar at the bottom
Python Code: # Set the data location dirName = r'c:\users\linol\data' fName = '10_series_1.ser' # Load needed modules %matplotlib widget from pathlib import Path import matplotlib.pyplot as plt import ncempy.io as nio import ipywidgets as widgets from ipywidgets import interact, interactive Explanation: NCEMPY's 3D slicer Interactively scroll through all images in a 3D dataset Ideal for time series data and can be used for volume data Set the dirName nad fName below to point to your data and run all End of explanation # Set file path file_name = Path(dirName) / Path(fName) # Read the data in the file ser0 = nio.ser.serReader(file_name) # Set the variable named data to the loaded dataset data = ser0['data'] # Available information # The 'data' key holds all of the data print(ser0.keys()) # Print out the shape of the dataset print('Dataset shape = {}'.format(data.shape)) Explanation: Load a 3D data set Use the simple ser reader function Change the file_name and the reader function as needed End of explanation fg1, ax1 = plt.subplots(1,1) imax1 = ax1.imshow(data[0,:,:], vmin=data.min(),vmax = data.max()) # Set the initial image and intenstiy scaling # Updates the plot def axUpdate(i): imax1.set_data(data[i,:,:]) # Create the slider to update the plot w = interactive(axUpdate, i=(0,data.shape[0]-1)) display(w) Explanation: Interactively show images in the time series Use Jupyter's ipywidgets to create a basic user interface Use the scroll bar at the bottom End of explanation
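A small, hedged extension of the slicer: collapsing the stack along its first axis gives a quick summary image of the whole series. It assumes data is the same 3D array loaded above and uses nothing specific to ncempy.
import numpy as np
import matplotlib.pyplot as plt

# average over the slice axis to get one overview image of the series
mean_image = np.mean(data, axis=0)

fg2, ax2 = plt.subplots(1, 1)
ax2.imshow(mean_image)
ax2.set_title('Mean of {} slices'.format(data.shape[0]))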
5,940
Given the following text description, write Python code to implement the functionality described below step by step Description: Geowave GPX Demo This Demo runs KMeans on the GPX dataset consisting of approximately 285 million point locations. We use a cql filter to reduce the KMeans set to a bounding box over Berlin, Germany. Simply focus a cell and use [SHIFT + ENTER] to run the code. Import pixiedust Start by importing pixiedust which if all bootstrap and install steps were run correctly. You should see below for opening the pixiedust database successfully with no errors. Depending on the version of pixiedust that gets installed, it may ask you to update. If so, run this first cell. Step1: Pixiedust also allows us to monitor spark job progress directly from the notebook. Simply run the cell below and anytime a spark job is run from the notebook you should see incremental progress shown in the output below. NOTE If this function fails or produces a error often this is just a link issue between pixiedust and python the first time pixiedust is imported. Restart the Kernel and rerun the cells to fix the error. Step2: Creating the SQLContext and inspecting pyspark Context Pixiedust imports pyspark and the SparkContext + SparkSession should be already available through the "sc" and "spark" variables respectively. Step3: Download and ingest the GPX data NOTE Depending on cluster size sometimes the copy can fail. This appears to be a race condition error with the copy command when downloading the files from s3. This may make the following import into acccumulo command fail. You can check the accumulo tables by looking at port 9995 of the emr cluster. There should be 5 tables after importing. Step4: Setup Datastores Step5: Run KMeans Run Kmeans on the reduced dataset over Berlin, Germany. Once the spark job begins running you should be able to monitor its progress from the cell with pixiedust, or you can monitor the progress from the spark history server on the emr cluster. Step6: Load Centroids into DataFrame and display Step7: Parse DataFrame data into lat/lon columns and display centroids on map Using pixiedust's built in map visualization we can display data on a map assuming it has the following properties. - Keys Step8: Export KMeans Hulls to DataFrame If you have some more complex data to visualize pixiedust may not be the best option. The Kmeans hull generation outputs polygons that would be difficult for pixiedust to display without creating a special plugin. Instead, we can use another map renderer to visualize our data. For the Kmeans hulls we will use folium to visualize the data. Folium allows us to easily add wms layers to our notebook, and we can combine that with GeoWaves geoserver functionality to render the hulls and centroids. Step9: Visualize results using geoserver and wms folium provides an easy way to visualize leaflet maps in jupyter notebooks. When the data is too complicated or big to work within the simple framework pixiedust provides for map display we can instead turn to geoserver and wms to render our layers. First we configure geoserver then setup wms layers for folium to display the kmeans results on the map.
Python Code: #!pip install --user --upgrade pixiedust import pixiedust import geowave_pyspark Explanation: Geowave GPX Demo This Demo runs KMeans on the GPX dataset consisting of approximately 285 million point locations. We use a cql filter to reduce the KMeans set to a bounding box over Berlin, Germany. Simply focus a cell and use [SHIFT + ENTER] to run the code. Import pixiedust Start by importing pixiedust which if all bootstrap and install steps were run correctly. You should see below for opening the pixiedust database successfully with no errors. Depending on the version of pixiedust that gets installed, it may ask you to update. If so, run this first cell. End of explanation pixiedust.enableJobMonitor() Explanation: Pixiedust also allows us to monitor spark job progress directly from the notebook. Simply run the cell below and anytime a spark job is run from the notebook you should see incremental progress shown in the output below. NOTE If this function fails or produces a error often this is just a link issue between pixiedust and python the first time pixiedust is imported. Restart the Kernel and rerun the cells to fix the error. End of explanation # Print Spark info and create sql_context print('Spark Version: {0}'.format(sc.version)) print('Python Version: {0}'.format(sc.pythonVer)) print('Application Name: {0}'.format(sc.appName)) print('Application ID: {0}'.format(sc.applicationId)) print('Spark Master: {0}'.format( sc.master)) Explanation: Creating the SQLContext and inspecting pyspark Context Pixiedust imports pyspark and the SparkContext + SparkSession should be already available through the "sc" and "spark" variables respectively. End of explanation %%bash s3-dist-cp -D mapreduce.task.timeout=60000000 --src=s3://geowave-gpx-data/gpx --dest=hdfs://$HOSTNAME:8020/tmp/ %%bash /opt/accumulo/bin/accumulo shell -u root -p secret -e "importtable geowave.germany_gpx_SPATIAL_IDX /tmp/spatial" /opt/accumulo/bin/accumulo shell -u root -p secret -e "importtable geowave.germany_gpx_GEOWAVE_METADATA /tmp/metadata" Explanation: Download and ingest the GPX data NOTE Depending on cluster size sometimes the copy can fail. This appears to be a race condition error with the copy command when downloading the files from s3. This may make the following import into acccumulo command fail. You can check the accumulo tables by looking at port 9995 of the emr cluster. There should be 5 tables after importing. 
End of explanation %%bash # clear out potential old runs geowave config rmstore kmeans_gpx geowave config rmstore germany_gpx_accumulo # configure geowave connection params for name stores "germany_gpx_accumulo" and "kmeans_gpx" geowave config addstore germany_gpx_accumulo --gwNamespace geowave.germany_gpx -t accumulo --zookeeper $HOSTNAME:2181 --instance accumulo --user root --password secret geowave config addstore kmeans_gpx --gwNamespace geowave.kmeans -t accumulo --zookeeper $HOSTNAME:2181 --instance accumulo --user root --password secret Explanation: Setup Datastores End of explanation %%bash geowave remote clear kmeans_gpx # Pull core GeoWave datastore classes hbase_options_class = sc._jvm.org.locationtech.geowave.datastore.hbase.cli.config.HBaseRequiredOptions accumulo_options_class = sc._jvm.org.locationtech.geowave.datastore.accumulo.cli.config.AccumuloRequiredOptions query_options_class = sc._jvm.org.locationtech.geowave.core.store.query.QueryOptions byte_array_class = sc._jvm.org.locationtech.geowave.core.index.ByteArrayId # Pull core GeoWave Spark classes from jvm geowave_rdd_class = sc._jvm.org.locationtech.geowave.analytic.spark.GeoWaveRDD rdd_loader_class = sc._jvm.org.locationtech.geowave.analytic.spark.GeoWaveRDDLoader rdd_options_class = sc._jvm.org.locationtech.geowave.analytic.spark.RDDOptions sf_df_class = sc._jvm.org.locationtech.geowave.analytic.spark.sparksql.SimpleFeatureDataFrame kmeans_runner_class = sc._jvm.org.locationtech.geowave.analytic.spark.kmeans.KMeansRunner datastore_utils_class = sc._jvm.org.locationtech.geowave.core.store.util.DataStoreUtils spatial_encoders_class = sc._jvm.org.locationtech.geowave.analytic.spark.sparksql.GeoWaveSpatialEncoders spatial_encoders_class.registerUDTs() #setup input datastore input_store = accumulo_options_class() input_store.setInstance('accumulo') input_store.setUser('root') input_store.setPassword('secret') input_store.setZookeeper(os.environ['HOSTNAME'] + ':2181') input_store.setGeowaveNamespace('geowave.germany_gpx') #Setup output datastore output_store = accumulo_options_class() output_store.setInstance('accumulo') output_store.setUser('root') output_store.setPassword('secret') output_store.setZookeeper(os.environ['HOSTNAME'] + ':2181') output_store.setGeowaveNamespace('geowave.kmeans') #Create a instance of the runner kmeans_runner = kmeans_runner_class() input_store_plugin = input_store.createPluginOptions() output_store_plugin = output_store.createPluginOptions() #set the appropriate properties #We want it to execute using the existing JavaSparkContext wrapped by python. kmeans_runner.setSparkSession(sc._jsparkSession) kmeans_runner.setAdapterId('gpxpoint') kmeans_runner.setNumClusters(8) kmeans_runner.setInputDataStore(input_store_plugin) kmeans_runner.setOutputDataStore(output_store_plugin) kmeans_runner.setCqlFilter("BBOX(geometry, 13.3, 52.45, 13.5, 52.5)") kmeans_runner.setCentroidTypeName('mycentroids') kmeans_runner.setHullTypeName('myhulls') kmeans_runner.setGenerateHulls(True) kmeans_runner.setComputeHullData(True) #execute the kmeans runner kmeans_runner.run() Explanation: Run KMeans Run Kmeans on the reduced dataset over Berlin, Germany. Once the spark job begins running you should be able to monitor its progress from the cell with pixiedust, or you can monitor the progress from the spark history server on the emr cluster. 
End of explanation # Create the dataframe and get a rdd for the output of kmeans # Grab adapter and setup query options for rdd load adapter_id = byte_array_class('mycentroids') query_adapter = datastore_utils_class.getDataAdapter(output_store_plugin, adapter_id) query_options = query_options_class(query_adapter) # Create RDDOptions for loader rdd_options = rdd_options_class() rdd_options.setQueryOptions(query_options) output_rdd = rdd_loader_class.loadRDD(sc._jsc.sc(), output_store_plugin, rdd_options) # Create a SimpleFeatureDataFrame from the GeoWaveRDD sf_df = sf_df_class(spark._jsparkSession) sf_df.init(output_store_plugin, adapter_id) df = sf_df.getDataFrame(output_rdd) # Convert Java DataFrame to Python DataFrame import pyspark.mllib.common as convert py_df = convert._java2py(sc, df) py_df.createOrReplaceTempView('mycentroids') df = spark.sql("select * from mycentroids") display(df) Explanation: Load Centroids into DataFrame and display End of explanation # Convert the string point information into lat long columns and create a new dataframe for those. import pyspark def parseRow(row): lat=row.geom.y lon=row.geom.x return pyspark.sql.Row(lat=lat,lon=lon,ClusterIndex=row.ClusterIndex) row_rdd = df.rdd new_rdd = row_rdd.map(lambda row: parseRow(row)) new_df = new_rdd.toDF() display(new_df) Explanation: Parse DataFrame data into lat/lon columns and display centroids on map Using pixiedust's built in map visualization we can display data on a map assuming it has the following properties. - Keys: put your latitude and longitude fields here. They must be floating values. These fields must be named latitude, lat or y and longitude, lon or x. - Values: the field you want to use to thematically color the map. Only one field can be used. Also you will need a access token from whichever map renderer you choose to use with pixiedust (mapbox, google). Follow the instructions in the token help on how to create and use the access token. End of explanation # Create the dataframe and get a rdd for the output of kmeans # Grab adapter and setup query options for rdd load adapter_id = byte_array_class('myhulls') query_adapter = datastore_utils_class.getDataAdapter(output_store_plugin, adapter_id) query_options = query_options_class(query_adapter) # Use GeoWaveRDDLoader to load an RDD rdd_options = rdd_options_class() rdd_options.setQueryOptions(query_options) output_rdd_hulls = rdd_loader_class.loadRDD(sc._jsc.sc(), output_store_plugin, rdd_options) # Create a SimpleFeatureDataFrame from the GeoWaveRDD sf_df_hulls = sf_df_class(spark._jsparkSession) sf_df_hulls.init(output_store_plugin, adapter_id) df_hulls = sf_df_hulls.getDataFrame(output_rdd_hulls) # Convert Java DataFrame to Python DataFrame import pyspark.mllib.common as convert py_df_hulls = convert._java2py(sc, df_hulls) # Create a sql table view of the hulls data py_df_hulls.createOrReplaceTempView('myhulls') # Run SQL Query on Hulls data df_hulls = spark.sql("select * from myhulls order by Density") display(df_hulls) Explanation: Export KMeans Hulls to DataFrame If you have some more complex data to visualize pixiedust may not be the best option. The Kmeans hull generation outputs polygons that would be difficult for pixiedust to display without creating a special plugin. Instead, we can use another map renderer to visualize our data. For the Kmeans hulls we will use folium to visualize the data. 
Folium allows us to easily add wms layers to our notebook, and we can combine that with GeoWaves geoserver functionality to render the hulls and centroids. End of explanation %%bash # set up geoserver geowave config geoserver "$HOSTNAME:8000" # add the centroids layer geowave gs addlayer kmeans_gpx -id mycentroids geowave gs setls mycentroids --styleName point # add the hulls layer geowave gs addlayer kmeans_gpx -id myhulls geowave gs setls myhulls --styleName line import owslib from owslib.wms import WebMapService url = "http://" + os.environ['HOSTNAME'] + ":8000/geoserver/geowave/wms" web_map_services = WebMapService(url) #print layers available wms print('\n'.join(web_map_services.contents.keys())) import folium #grab wms info for centroids layer = 'mycentroids' wms = web_map_services.contents[layer] #build center of map off centroid bbox lon = (wms.boundingBox[0] + wms.boundingBox[2]) / 2. lat = (wms.boundingBox[1] + wms.boundingBox[3]) / 2. center = [lat, lon] m = folium.Map(location = center,zoom_start=10) name = wms.title centroids = folium.raster_layers.WmsTileLayer( url=url, name=name, fmt='image/png', transparent=True, layers=layer, overlay=True, COLORSCALERANGE='1.2,28', ) centroids.add_to(m) layer = 'myhulls' wms = web_map_services.contents[layer] name = wms.title hulls = folium.raster_layers.WmsTileLayer( url=url, name=name, fmt='image/png', transparent=True, layers=layer, overlay=True, COLORSCALERANGE='1.2,28', ) hulls.add_to(m) m Explanation: Visualize results using geoserver and wms folium provides an easy way to visualize leaflet maps in jupyter notebooks. When the data is too complicated or big to work within the simple framework pixiedust provides for map display we can instead turn to geoserver and wms to render our layers. First we configure geoserver then setup wms layers for folium to display the kmeans results on the map. End of explanation
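As an optional, hedged follow-up that is not part of the original demo: folium can attach a layer control so the centroid and hull WMS overlays can be toggled, and the finished map can be written out as a standalone HTML page. The output filename is arbitrary.
# add a toggle for the WMS overlays and persist the map for viewing outside the notebook
folium.LayerControl().add_to(m)
m.save('kmeans_gpx_map.html')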
5,941
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Atmoschem MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Key Properties --&gt; Timestep Framework 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order 5. Key Properties --&gt; Tuning Applied 6. Grid 7. Grid --&gt; Resolution 8. Transport 9. Emissions Concentrations 10. Emissions Concentrations --&gt; Surface Emissions 11. Emissions Concentrations --&gt; Atmospheric Emissions 12. Emissions Concentrations --&gt; Concentrations 13. Gas Phase Chemistry 14. Stratospheric Heterogeneous Chemistry 15. Tropospheric Heterogeneous Chemistry 16. Photo Chemistry 17. Photo Chemistry --&gt; Photolysis 1. Key Properties Key properties of the atmospheric chemistry 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Chemistry Scheme Scope Is Required Step7: 1.4. Basic Approximations Is Required Step8: 1.5. Prognostic Variables Form Is Required Step9: 1.6. Number Of Tracers Is Required Step10: 1.7. Family Approach Is Required Step11: 1.8. Coupling With Chemical Reactivity Is Required Step12: 2. Key Properties --&gt; Software Properties Software properties of aerosol code 2.1. Repository Is Required Step13: 2.2. Code Version Is Required Step14: 2.3. Code Languages Is Required Step15: 3. Key Properties --&gt; Timestep Framework Timestepping in the atmospheric chemistry model 3.1. Method Is Required Step16: 3.2. Split Operator Advection Timestep Is Required Step17: 3.3. Split Operator Physical Timestep Is Required Step18: 3.4. Split Operator Chemistry Timestep Is Required Step19: 3.5. Split Operator Alternate Order Is Required Step20: 3.6. Integrated Timestep Is Required Step21: 3.7. Integrated Scheme Type Is Required Step22: 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order ** 4.1. Turbulence Is Required Step23: 4.2. Convection Is Required Step24: 4.3. Precipitation Is Required Step25: 4.4. Emissions Is Required Step26: 4.5. Deposition Is Required Step27: 4.6. Gas Phase Chemistry Is Required Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry Is Required Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry Is Required Step30: 4.9. Photo Chemistry Is Required Step31: 4.10. Aerosols Is Required Step32: 5. Key Properties --&gt; Tuning Applied Tuning methodology for atmospheric chemistry component 5.1. Description Is Required Step33: 5.2. Global Mean Metrics Used Is Required Step34: 5.3. Regional Metrics Used Is Required Step35: 5.4. Trend Metrics Used Is Required Step36: 6. Grid Atmospheric chemistry grid 6.1. Overview Is Required Step37: 6.2. Matches Atmosphere Grid Is Required Step38: 7. Grid --&gt; Resolution Resolution in the atmospheric chemistry grid 7.1. Name Is Required Step39: 7.2. Canonical Horizontal Resolution Is Required Step40: 7.3. Number Of Horizontal Gridpoints Is Required Step41: 7.4. Number Of Vertical Levels Is Required Step42: 7.5. Is Adaptive Grid Is Required Step43: 8. Transport Atmospheric chemistry transport 8.1. Overview Is Required Step44: 8.2. Use Atmospheric Transport Is Required Step45: 8.3. Transport Details Is Required Step46: 9. Emissions Concentrations Atmospheric chemistry emissions 9.1. Overview Is Required Step47: 10. 
Emissions Concentrations --&gt; Surface Emissions ** 10.1. Sources Is Required Step48: 10.2. Method Is Required Step49: 10.3. Prescribed Climatology Emitted Species Is Required Step50: 10.4. Prescribed Spatially Uniform Emitted Species Is Required Step51: 10.5. Interactive Emitted Species Is Required Step52: 10.6. Other Emitted Species Is Required Step53: 11. Emissions Concentrations --&gt; Atmospheric Emissions TO DO 11.1. Sources Is Required Step54: 11.2. Method Is Required Step55: 11.3. Prescribed Climatology Emitted Species Is Required Step56: 11.4. Prescribed Spatially Uniform Emitted Species Is Required Step57: 11.5. Interactive Emitted Species Is Required Step58: 11.6. Other Emitted Species Is Required Step59: 12. Emissions Concentrations --&gt; Concentrations TO DO 12.1. Prescribed Lower Boundary Is Required Step60: 12.2. Prescribed Upper Boundary Is Required Step61: 13. Gas Phase Chemistry Atmospheric chemistry transport 13.1. Overview Is Required Step62: 13.2. Species Is Required Step63: 13.3. Number Of Bimolecular Reactions Is Required Step64: 13.4. Number Of Termolecular Reactions Is Required Step65: 13.5. Number Of Tropospheric Heterogenous Reactions Is Required Step66: 13.6. Number Of Stratospheric Heterogenous Reactions Is Required Step67: 13.7. Number Of Advected Species Is Required Step68: 13.8. Number Of Steady State Species Is Required Step69: 13.9. Interactive Dry Deposition Is Required Step70: 13.10. Wet Deposition Is Required Step71: 13.11. Wet Oxidation Is Required Step72: 14. Stratospheric Heterogeneous Chemistry Atmospheric chemistry startospheric heterogeneous chemistry 14.1. Overview Is Required Step73: 14.2. Gas Phase Species Is Required Step74: 14.3. Aerosol Species Is Required Step75: 14.4. Number Of Steady State Species Is Required Step76: 14.5. Sedimentation Is Required Step77: 14.6. Coagulation Is Required Step78: 15. Tropospheric Heterogeneous Chemistry Atmospheric chemistry tropospheric heterogeneous chemistry 15.1. Overview Is Required Step79: 15.2. Gas Phase Species Is Required Step80: 15.3. Aerosol Species Is Required Step81: 15.4. Number Of Steady State Species Is Required Step82: 15.5. Interactive Dry Deposition Is Required Step83: 15.6. Coagulation Is Required Step84: 16. Photo Chemistry Atmospheric chemistry photo chemistry 16.1. Overview Is Required Step85: 16.2. Number Of Reactions Is Required Step86: 17. Photo Chemistry --&gt; Photolysis Photolysis scheme 17.1. Method Is Required Step87: 17.2. Environmental Conditions Is Required
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'noaa-gfdl', 'gfdl-cm4', 'atmoschem') Explanation: ES-DOC CMIP6 Model Properties - Atmoschem MIP Era: CMIP6 Institute: NOAA-GFDL Source ID: GFDL-CM4 Topic: Atmoschem Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry. Properties: 84 (39 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-20 15:02:34 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Software Properties 3. Key Properties --&gt; Timestep Framework 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order 5. Key Properties --&gt; Tuning Applied 6. Grid 7. Grid --&gt; Resolution 8. Transport 9. Emissions Concentrations 10. Emissions Concentrations --&gt; Surface Emissions 11. Emissions Concentrations --&gt; Atmospheric Emissions 12. Emissions Concentrations --&gt; Concentrations 13. Gas Phase Chemistry 14. Stratospheric Heterogeneous Chemistry 15. Tropospheric Heterogeneous Chemistry 16. Photo Chemistry 17. Photo Chemistry --&gt; Photolysis 1. Key Properties Key properties of the atmospheric chemistry 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of atmospheric chemistry model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of atmospheric chemistry model code. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "troposhere" # "stratosphere" # "mesosphere" # "mesosphere" # "whole atmosphere" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.3. Chemistry Scheme Scope Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Atmospheric domains covered by the atmospheric chemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.4. 
Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Basic approximations made in the atmospheric chemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "3D mass/mixing ratio for gas" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.5. Prognostic Variables Form Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Form of prognostic variables in the atmospheric chemistry component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 1.6. Number Of Tracers Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of advected tracers in the atmospheric chemistry model End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.family_approach') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 1.7. Family Approach Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Atmospheric chemistry calculations (not advection) generalized into families of species? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 1.8. Coupling With Chemical Reactivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Atmospheric chemistry transport scheme turbulence is couple with chemical reactivity? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Software Properties Software properties of aerosol code 2.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 2.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Operator splitting" # "Integrated" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3. 
Key Properties --&gt; Timestep Framework Timestepping in the atmospheric chemistry model 3.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Mathematical method deployed to solve the evolution of a given variable End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.2. Split Operator Advection Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for chemical species advection (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.3. Split Operator Physical Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for physics (in seconds). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.4. Split Operator Chemistry Timestep Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Timestep for chemistry (in seconds). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 3.5. Split Operator Alternate Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 3.6. Integrated Timestep Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Timestep for the atmospheric chemistry model (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Explicit" # "Implicit" # "Semi-implicit" # "Semi-analytic" # "Impact solver" # "Back Euler" # "Newton Raphson" # "Rosenbrock" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3.7. Integrated Scheme Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specify the type of timestep scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Timestep Framework --&gt; Split Operator Order ** 4.1. Turbulence Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. 
End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.2. Convection Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.3. Precipitation Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.4. Emissions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.5. Deposition Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.6. Gas Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.9. Photo Chemistry Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 4.10. Aerosols Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Tuning Applied Tuning methodology for atmospheric chemistry component 5.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Grid Atmospheric chemistry grid 6.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe the general structure of the atmopsheric chemistry grid End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.2. Matches Atmosphere Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 * Does the atmospheric chemistry grid match the atmosphere grid?* End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Grid --&gt; Resolution Resolution in the atmospheric chemistry grid 7.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. Canonical Horizontal Resolution Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 7.3. Number Of Horizontal Gridpoints Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 7.4. Number Of Vertical Levels Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Number of vertical levels resolved on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 7.5. Is Adaptive Grid Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Default is False. Set true if grid resolution changes during execution. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.transport.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Transport Atmospheric chemistry transport 8.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview of transport implementation End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 8.2. Use Atmospheric Transport Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is transport handled by the atmosphere, rather than within atmospheric cehmistry? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.transport.transport_details') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Transport Details Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If transport is handled within the atmospheric chemistry scheme, describe it. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Emissions Concentrations Atmospheric chemistry emissions 9.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview atmospheric chemistry emissions End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Vegetation" # "Soil" # "Sea surface" # "Anthropogenic" # "Biomass burning" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10. Emissions Concentrations --&gt; Surface Emissions ** 10.1. Sources Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Climatology" # "Spatially uniform mixing ratio" # "Spatially uniform concentration" # "Interactive" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10.2. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.3. Prescribed Climatology Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant)) End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.4. Prescribed Spatially Uniform Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and prescribed as spatially uniform End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.5. Interactive Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and specified via an interactive method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 10.6. Other Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted at the surface and specified via any other method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Aircraft" # "Biomass burning" # "Lightning" # "Volcanos" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11. Emissions Concentrations --&gt; Atmospheric Emissions TO DO 11.1. Sources Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Climatology" # "Spatially uniform mixing ratio" # "Spatially uniform concentration" # "Interactive" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.2. Method Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.3. Prescribed Climatology Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant)) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.4. 
Prescribed Spatially Uniform Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and prescribed as spatially uniform End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.5. Interactive Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and specified via an interactive method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 11.6. Other Emitted Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of chemical species emitted in the atmosphere and specified via an &quot;other method&quot; End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12. Emissions Concentrations --&gt; Concentrations TO DO 12.1. Prescribed Lower Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the lower boundary. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12.2. Prescribed Upper Boundary Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of species prescribed at the upper boundary. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 13. Gas Phase Chemistry Atmospheric chemistry transport 13.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview gas phase atmospheric chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "HOx" # "NOy" # "Ox" # "Cly" # "HSOx" # "Bry" # "VOCs" # "isoprene" # "H2O" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13.2. Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Species included in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.3. Number Of Bimolecular Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of bi-molecular reactions in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.4. Number Of Termolecular Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of ter-molecular reactions in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.7. Number Of Advected Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of advected species in the gas phase chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.8. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 13.9. Interactive Dry Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 13.10. Wet Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 13.11. Wet Oxidation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 14. Stratospheric Heterogeneous Chemistry Atmospheric chemistry startospheric heterogeneous chemistry 14.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview stratospheric heterogenous atmospheric chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Cly" # "Bry" # "NOy" # TODO - please enter value(s) Explanation: 14.2. Gas Phase Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Gas phase species included in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sulphate" # "Polar stratospheric ice" # "NAT (Nitric acid trihydrate)" # "NAD (Nitric acid dihydrate)" # "STS (supercooled ternary solution aerosol particule))" # TODO - please enter value(s) Explanation: 14.3. Aerosol Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Aerosol species included in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.4. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of steady state species in the stratospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 14.5. Sedimentation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is sedimentation is included in the stratospheric heterogeneous chemistry scheme or not? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 14.6. Coagulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is coagulation is included in the stratospheric heterogeneous chemistry scheme or not? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15. Tropospheric Heterogeneous Chemistry Atmospheric chemistry tropospheric heterogeneous chemistry 15.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview tropospheric heterogenous atmospheric chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 15.2. Gas Phase Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 List of gas phase species included in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Sulphate" # "Nitrate" # "Sea salt" # "Dust" # "Ice" # "Organic" # "Black carbon/soot" # "Polar stratospheric ice" # "Secondary organic aerosols" # "Particulate organic matter" # TODO - please enter value(s) Explanation: 15.3. Aerosol Species Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Aerosol species included in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.4. Number Of Steady State Species Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of steady state species in the tropospheric heterogeneous chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 15.5. Interactive Dry Deposition Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 15.6. Coagulation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is coagulation is included in the tropospheric heterogeneous chemistry scheme or not? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 16. Photo Chemistry Atmospheric chemistry photo chemistry 16.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview atmospheric photo chemistry End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 16.2. Number Of Reactions Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 The number of reactions in the photo-chemistry scheme. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Offline (clear sky)" # "Offline (with clouds)" # "Online" # TODO - please enter value(s) Explanation: 17. Photo Chemistry --&gt; Photolysis Photolysis scheme 17.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Photolysis scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17.2. Environmental Conditions Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.) End of explanation
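For illustration, a completed property cell would follow the same pattern as the TODO cells above; the value below is a hypothetical placeholder taken from the listed valid choices, not a statement about any particular model.
# Hypothetical example of a completed cell (placeholder value only)
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
DOC.set_value("Offline (with clouds)")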
5,942
Given the following text description, write Python code to implement the functionality described below step by step Description: <!-- 27/10 Ordenamientos y búsquedas. Excepciones. Funciones anónimas.(Pablo o Andres) --> Ordenamiento de listas Las listas se pueden ordenar fácilmente usando la función sorted Step1: Pero, ¿y cómo hacemos para ordenarla de mayor a menor?. <br> Simple, interrogamos un poco a la función Step2: ¿Y si lo que quiero ordenar es una lista de registros?. <br> Podemos pasarle una función que sepa cómo comparar esos registros o una que sepa devolver la información que necesita comparar. Step3: Búsquedas en listas Para saber si un elemento se encuentra en una lista, alcanza con usar el operador in Step4: También es muy fácil saber si un elemento no esta en la lista Step5: En cambio, si lo que queremos es saber es dónde se encuentra el número 3 en la lista es Step6: Ahora, para todos estos casos lo que hice fue buscar un elemento completo, es decir, que tenía que conocer todo lo que buscaba y no sólamente una parte, como podría ser el padrón de un alumno. Step7: Funciones anónimas Hasta ahora, a todas las funciones que creamos les poníamos un nombre al momento de crearlas, pero cuando tenemos que crear funciones que sólo tienen una línea y no se usan en una gran cantidad de lugares se pueden usar las funciones lambda Step8: Si bien no son funciones que se usen todos los días, se suelen usar cuando una función recibe otra función como parámetro (las funciones son un tipo de dato, por lo que se las pueden asignar a variables, y por lo tanto, también pueden ser parámetros). Por ejemplo, para ordenar los alumnos por padrón podríamos usar Step9: Excepciones Una excepción es la forma que tiene el intérprete de que indicarle al programador y/o usuario que ha ocurrido un error. Si la excepción no es controlada por el desarrollador ésta llega hasta el usuario y termina abruptamente la ejecución del sistema. <br> Por ejemplo Step10: Pero no hay que tenerle miedo a las excepciones, sólo hay que tenerlas en cuenta y controlarlas en el caso de que ocurran Step11: Pero supongamos que implementamos la regla de tres de la siguiente forma Step12: En cambio, si le pasamos 0 en el lugar de x Step13: Acá podemos ver todo el traceback o stacktrace, que son el cómo se fueron llamando las distintas funciones entre sí hasta que llegamos al error. <br> Pero no es bueno que este tipo de excepciones las vea directamente el usuario, por lo que podemos controlarlas en distintos momentos. Se pueden controlar inmediatamente donde ocurre el error, como mostramos antes, o en cualquier parte de este stacktrace. <br> En el caso de la regla_de_tres no nos conviene poner el try/except encerrando la línea x/y, ya que en ese punto no tenemos toda la información que necesitamos para informarle correctamente al usuario, por lo que podemos ponerla en Step14: Pero en este caso igual muestra 0, por lo que si queremos, podemos poner los try/except incluso más arriba en el stacktrace Step15: Todos los casos son distintos y no hay UN lugar ideal dónde capturar la excepción; es cuestión del desarrollador decidir dónde conviene ponerlo para cada problema. 
Capturar múltiples excepciones Una única línea puede lanzar distintas excepciones, por lo que capturar un tipo de excepción en particular no me asegura que el programa no pueda lanzar un error en esa línea que supuestamente es segura Step16: En esos casos podemos capturar más de una excepción de la siguiente forma Step17: Incluso, si queremos que los dos errores muestren el mismo mensaje podemos capturar ambas excepciones juntas Step18: Jerarquía de excepciones Existe una <a href="https Step19: Y también como Step20: Si bien siempre se puede poner Exception en lugar del tipo de excepción que se espera, no es una buena práctica de programación ya que se pueden esconder errores indeseados. Por ejemplo, un error de sintaxis. Además, cuando se lanza una excepción en el bloque try, el intérprete comienza a buscar entre todas cláusulas except una que coincida con el error que se produjo, o que sea de mayor jerarquía. Por lo tanto, es recomendable poner siempre las excepciones más específicas al principio y las más generales al final Step21: Pero entonces, ¿por qué no poner ese código dentro del try-except?. Porque tal vez no queremos capturar con las cláusulas except lo que se ejecute en ese bloque de código Step22: Lanzar excepciones Hasta ahora vimos cómo capturar un error y trabajar con él sin que el programa termine abruptamente, pero en algunos casos somos nosotros mismos quienes van a querer lanzar una excepción. Y para eso, usaremos la palabra reservada raise
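As a brief illustrative sketch of the raise keyword described above, a custom exception type can also be defined and raised; the class and function names here are hypothetical and separate from the code that follows.
# Hypothetical sketch: defining and raising a custom exception type
class DivisionPorCeroError(Exception):
    pass

def dividir_validando(x, y):
    if y == 0:
        raise DivisionPorCeroError('no se puede dividir {0} por cero'.format(x))
    return x / y

try:
    print dividir_validando(10, 2)
    print dividir_validando(1, 0)
except DivisionPorCeroError as e:
    print 'Se capturó la excepción personalizada:', e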
Python Code: lista_de_numeros = [1, 6, 3, 9, 5, 2] lista_ordenada = sorted(lista_de_numeros) print lista_ordenada print lista_de_numeros Explanation: <!-- 27/10 Ordenamientos y búsquedas. Excepciones. Funciones anónimas.(Pablo o Andres) --> Ordenamiento de listas Las listas se pueden ordenar fácilmente usando la función sorted: End of explanation lista_de_numeros = [1, 6, 3, 9, 5, 2] print sorted(lista_de_numeros, reverse=True) Explanation: Pero, ¿y cómo hacemos para ordenarla de mayor a menor?. <br> Simple, interrogamos un poco a la función: ```Python print sorted.doc sorted(iterable, cmp=None, key=None, reverse=False) --> new sorted list `` Entonces, con sólo pasarle el parámetro de *reverse* enTrue` debería alcanzar: End of explanation def crear_curso(): curso = [ {'nombre': 'Rodriguez, Carlos', 'nota': 6, 'padron': 98128}, {'nombre': 'Perez, Lucas', 'nota': 6, 'padron': 93453}, {'nombre': 'Gonzalez, Ramiro', 'nota': 8, 'padron': 93716}, {'nombre': 'Gonzalez, Carlos', 'nota': 6, 'padron': 90464}, {'nombre': 'Lopez, Carlos', 'nota': 7, 'padron': 98569} ] return curso def imprimir_curso(lista): for idx, x in enumerate(lista): msg = ' {pos:2}. {padron} - {nombre}: {nota}' print msg.format(pos=idx, **x) def obtener_padron(alumno): return alumno['padron'] curso = crear_curso() print 'La lista tiene los alumnos:' imprimir_curso(curso) lista_ordenada = sorted(curso, key=obtener_padron) print 'Y la lista ordenada por padrón:' imprimir_curso(lista_ordenada) Explanation: ¿Y si lo que quiero ordenar es una lista de registros?. <br> Podemos pasarle una función que sepa cómo comparar esos registros o una que sepa devolver la información que necesita comparar. End of explanation lista = [11, 4, 6, 1, 3, 5, 7] if 3 in lista: print '3 esta en la lista' else: print '3 no esta en la lista' if 15 in lista: print '15 esta en la lista' else: print '15 no esta en la lista' Explanation: Búsquedas en listas Para saber si un elemento se encuentra en una lista, alcanza con usar el operador in: End of explanation lista = [11, 4, 6, 1, 3, 5, 7] if 3 not in lista: print '3 NO esta en la lista' else: print '3 SI esta en la lista' Explanation: También es muy fácil saber si un elemento no esta en la lista: End of explanation lista = [11, 4, 6, 1, 3, 5, 7] pos = lista.index(3) print 'El 3 se encuentra en la posición', pos pos = lista.index(15) print 'El 15 se encuentra en la posición', pos Explanation: En cambio, si lo que queremos es saber es dónde se encuentra el número 3 en la lista es: End of explanation curso = crear_curso() print 'La lista tiene los alumnos:' imprimir_curso(curso) alumno_93716 = (alumno for alumno in curso if alumno['padron'] == 93716).next() print 'El alumno de padron 93716 se llama {nombre}'.format(**alumno_93716) Explanation: Ahora, para todos estos casos lo que hice fue buscar un elemento completo, es decir, que tenía que conocer todo lo que buscaba y no sólamente una parte, como podría ser el padrón de un alumno. 
End of explanation help("lambda") mi_funcion = lambda x, y: x+y resultado = mi_funcion(1, 2) print resultado print type(mi_funcion) def mi_funcion2(x, y): return x + y resultado = mi_funcion2(1, 2) print resultado print type(mi_funcion2) Explanation: Funciones anónimas Hasta ahora, a todas las funciones que creamos les poníamos un nombre al momento de crearlas, pero cuando tenemos que crear funciones que sólo tienen una línea y no se usan en una gran cantidad de lugares se pueden usar las funciones lambda: End of explanation curso = crear_curso() print 'Curso original' imprimir_curso(curso) lista_ordenada = sorted(curso, key=lambda alumno: (-alumno['nota'], alumno['padron'])) print 'Curso ordenado' imprimir_curso(lista_ordenada) Explanation: Si bien no son funciones que se usen todos los días, se suelen usar cuando una función recibe otra función como parámetro (las funciones son un tipo de dato, por lo que se las pueden asignar a variables, y por lo tanto, también pueden ser parámetros). Por ejemplo, para ordenar los alumnos por padrón podríamos usar: Python sorted(curso, key=lambda x: x['padron']) Ahora, si quiero ordenar la lista anterior por nota decreciente y, en caso de igualdad, por padrón podríamos usar: End of explanation print 1/0 Explanation: Excepciones Una excepción es la forma que tiene el intérprete de que indicarle al programador y/o usuario que ha ocurrido un error. Si la excepción no es controlada por el desarrollador ésta llega hasta el usuario y termina abruptamente la ejecución del sistema. <br> Por ejemplo: End of explanation dividendo = 10 divisor = '0' print 'Intentare hacer la división de {}/{}'.format(dividendo, divisor) try: resultado = dividendo / divisor print resultado except ZeroDivisionError: print 'No se puede hacer la división ya que el divisor es 0.' except TypeError: print 'Alguno de los parametros no es un número' print 'Algo' Explanation: Pero no hay que tenerle miedo a las excepciones, sólo hay que tenerlas en cuenta y controlarlas en el caso de que ocurran: End of explanation def dividir(x, y): return x/y def regla_de_tres(x, y, z): return dividir(z*y, x) # Si de 28 alumnos, aprobaron 15, el porcentaje de aprobados es de... porcentaje_de_aprobados = regla_de_tres(28, 15, 100) print 'Porcentaje de aprobados: {0:.2f}%'.format(porcentaje_de_aprobados) Explanation: Pero supongamos que implementamos la regla de tres de la siguiente forma: End of explanation resultado = regla_de_tres(0, 13, 100) print 'Porcentaje de aprobados: {0:.2f}%'.format(resultado) Explanation: En cambio, si le pasamos 0 en el lugar de x: End of explanation def dividir(x, y): return x/y def regla_de_tres(x, y, z): resultado = 0 try: resultado = dividir(z*y, x) except ZeroDivisionError: print 'No se puede calcular la regla de tres ' \ 'porque el divisor es 0' return resultado print regla_de_tres(0, 1, 2) Explanation: Acá podemos ver todo el traceback o stacktrace, que son el cómo se fueron llamando las distintas funciones entre sí hasta que llegamos al error. <br> Pero no es bueno que este tipo de excepciones las vea directamente el usuario, por lo que podemos controlarlas en distintos momentos. Se pueden controlar inmediatamente donde ocurre el error, como mostramos antes, o en cualquier parte de este stacktrace. 
<br> En el caso de la regla_de_tres no nos conviene poner el try/except encerrando la línea x/y, ya que en ese punto no tenemos toda la información que necesitamos para informarle correctamente al usuario, por lo que podemos ponerla en: End of explanation def dividir(x, y): return x/y def regla_de_tres(x, y, z): return dividir(z*y, x) try: print regla_de_tres(0, 1, 2) except ZeroDivisionError: print 'No se puede calcular la regla de tres ' \ 'porque el divisor es 0' Explanation: Pero en este caso igual muestra 0, por lo que si queremos, podemos poner los try/except incluso más arriba en el stacktrace: End of explanation def dividir_numeros(x, y): try: resultado = x/y print 'El resultado es: %s' % resultado except ZeroDivisionError: print 'ERROR: Ha ocurrido un error por dividir por 0' dividir_numeros(1, 0) dividir_numeros(10, 2) dividir_numeros("10", 2) Explanation: Todos los casos son distintos y no hay UN lugar ideal dónde capturar la excepción; es cuestión del desarrollador decidir dónde conviene ponerlo para cada problema. Capturar múltiples excepciones Una única línea puede lanzar distintas excepciones, por lo que capturar un tipo de excepción en particular no me asegura que el programa no pueda lanzar un error en esa línea que supuestamente es segura: En algunos casos tenemos en cuenta que el código puede lanzar una excepción como la de ZeroDivisionError, pero eso puede no ser suficiente: End of explanation def dividir_numeros(x, y): try: resultado = x/y print 'El resultado es: %s' % resultado except TypeError: print 'ERROR: Ha ocurrido un error por mezclar tipos de datos' except ZeroDivisionError: print 'ERROR: Ha ocurrido un error de división por cero' except Exception: print 'ERROR: Ha ocurrido un error inesperado' dividir_numeros(1, 0) dividir_numeros(10, 2) dividir_numeros("10", 2) Explanation: En esos casos podemos capturar más de una excepción de la siguiente forma: End of explanation def dividir_numeros(x, y): try: resultado = x/y print 'El resultado es: %s' % resultado except (ZeroDivisionError, TypeError): print 'ERROR: No se puede calcular la división' dividir_numeros(1, 0) dividir_numeros(10, 2) dividir_numeros("10", 2) Explanation: Incluso, si queremos que los dos errores muestren el mismo mensaje podemos capturar ambas excepciones juntas: End of explanation try: print 1/0 except ZeroDivisionError: print 'Ha ocurrido un error de división por cero' Explanation: Jerarquía de excepciones Existe una <a href="https://docs.python.org/2/library/exceptions.html">jerarquía de excepciones</a>, de forma que si se sabe que puede venir un tipo de error, pero no se sabe exactamente qué excepción puede ocurrir siempre se puede poner una excepción de mayor jerarquía: <img src="excepciones.png"/> Por lo que el error de división por cero se puede evitar como: End of explanation try: print 1/0 except Exception: print 'Ha ocurrido un error inesperado' Explanation: Y también como: End of explanation def dividir_numeros(x, y): try: resultado = x/y print 'El resultado es {}'.format(resultado) except ZeroDivisionError: print 'Error: División por cero' else: print 'Este mensaje se mostrará sólo si no ocurre ningún error' finally: print 'Este bloque de código se muestra siempre' dividir_numeros(1, 0) print '-------------' dividir_numeros(10, 2) Explanation: Si bien siempre se puede poner Exception en lugar del tipo de excepción que se espera, no es una buena práctica de programación ya que se pueden esconder errores indeseados. Por ejemplo, un error de sintaxis. 
Además, cuando se lanza una excepción en el bloque try, el intérprete comienza a buscar entre todas cláusulas except una que coincida con el error que se produjo, o que sea de mayor jerarquía. Por lo tanto, es recomendable poner siempre las excepciones más específicas al principio y las más generales al final: Python def dividir_numeros(x, y): try: resultado = x/y print 'El resultado es: %s' % resultado except TypeError: print 'ERROR: Ha ocurrido un error por mezclar tipos de datos' except ZeroDivisionError: print 'ERROR: Ha ocurrido un error de división por cero' except Exception: print 'ERROR: Ha ocurrido un error inesperado' Si el error no es capturado por ninguna clausula se propaga de la misma forma que si no se hubiera puesto nada. Otras cláusulas para el manejo de excepciones Además de las cláusulas try y except existen otras relacionadas con las excepciones que nos permiten manejar de mejor manera el flujo del programa: * else: se usa para definir un bloque de código que se ejecutará sólo si no ocurrió ningún error. * finally: se usa para definir un bloque de código que se ejecutará siempre, independientemente de si se lanzó una excepción o no. End of explanation def dividir_numeros(x, y): try: resultado = x/y print 'El resultado es {}'.format(resultado) except ZeroDivisionError: print 'Error: División por cero' else: print 'Ahora hago que ocurra una excepción' print 1/0 finally: print 'Este bloque de código se muestra siempre' dividir_numeros(1, 0) print '-------------' dividir_numeros(10, 2) Explanation: Pero entonces, ¿por qué no poner ese código dentro del try-except?. Porque tal vez no queremos capturar con las cláusulas except lo que se ejecute en ese bloque de código: End of explanation def dividir_numeros(x, y): if y == 0: raise Exception('Error de división por cero') resultado = x/y print 'El resultado es {0}'.format(resultado) try: dividir_numeros(1, 0) except ZeroDivisionError as e: print 'ERROR: División por cero' except Exception as e: print 'ERROR: ha ocurrido un error del tipo Exception' print '----------' dividir_numeros(1, 0) Explanation: Lanzar excepciones Hasta ahora vimos cómo capturar un error y trabajar con él sin que el programa termine abruptamente, pero en algunos casos somos nosotros mismos quienes van a querer lanzar una excepción. Y para eso, usaremos la palabra reservada raise: End of explanation
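Building on the raise example above, one more hedged sketch: inside an except block, a bare raise re-raises the active exception after any local handling, so callers further up the stacktrace still receive it; the function name here is hypothetical.
# Hypothetical sketch: log the error locally, then re-raise it with a bare raise
def dividir_con_registro(x, y):
    try:
        return x / y
    except ZeroDivisionError:
        print 'Registro el error y lo vuelvo a propagar'
        raise  # re-lanza la misma excepción hacia el llamador

try:
    dividir_con_registro(1, 0)
except ZeroDivisionError:
    print 'El llamador también recibe la excepción'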
5,943
Given the following text description, write Python code to implement the functionality described below step by step Description: CNTK Time series prediction with LSTM This demo demonstrates how to use CNTK to predict future values in a time series using a Recurrent Neural Network (RNN). It is based on a LSTM tutorial that comes with the CNTK distribution. RNNs are particularly well suited to learn sequence data. For details on RNNs, see this excellent post. Goal We will download stock prices for a chosen symbol, then train a recurrent neural net to predict the closing price on the following day from $N$ previous days' closing prices. Our RNN will We will use Long-Short Term Memory LSTM units in the hidden layer. An LSTM network is well-suited to learn from experience to classify, process or predict time series when there are time lags of unknown size between important events. Organization The example has the following sections Step1: Select the notebook runtime environment devices / settings Set the device. If you have both CPU and GPU on your machine, you can optionally switch the devices. Step3: Download and Prepare Data Here we define helper methods to prepare the data. download_data() Queries Yahoo Finance for daily close price of a given stock ticker symbol. Returns an array of floats. Step5: As an alternative, we have code to read two CSV files downloaded from DataMarket. The files are 1. Mean daily temperature, Fisher River near Dallas, Jan 01, 1988 to Dec 31, 1991 2. Monthly milk production Step7: generate_RNN_data() The RNN will be trained on sequences of length $N$ of single values (scalars), meaning that each training sample is a $N\times1$ matrix. CNTK requires us to shape our input data as an array with each element being an observation. So, for inputs, $X$, we need to create a tensor or 3-D array with dimensions $[M, N, 1]$ where $M$ is the number of training samples.<br/> As output we want our network to predict the next value of the sequence, so our target, $Y$, is a $[M, 1]$ array containing the next day's value of the stock price after the sequence presented in $X$.</br/> We do this by sampling from the sequence of stock prices and creating numpy ndarrays. Step9: split_data() This function will split the data into training, validation and test sets and return a list with those elements, each containing a ndarray as described above. Step10: Execute Download data, generate the RNN training and evaluation data and visualize Step11: Quick check on the dimensions of the data + make sure we don't have any NaNs Step13: We define the next_batch() iterator that produces batches we can feed to the training function. Note that because CNTK supports variable sequence length, we must feed the batches as list of sequences. This is a convenience function to generate small batches of data often referred to as minibatch. Step15: Network modeling We setup our network with $N$ LSTM cells, each receiving the single value of our sequence as input at every time step. The $N$ outputs from the LSTM layer are the input into a dense layer that produces a single output. So, we have 1 input, $N$ hidden LSTM nodes and again a single output node. To train, CNTK will unroll the network over time steps and backpropagate the error over time. Between LSTM and dense layer we insert a special dropout operation that randomly ignores 20% of the values coming the LSTM during training to prevent overfitting. When using the model to make predictions all values will be retained. 
We are only interested in predicting one step ahead when we get to the end of each training sequence, so we use another operator to identify the last item in the sequence before connecting the output layer. Using CNTK we can easily express our model Step16: CNTK inputs, outputs and parameters are organized as tensors, or n-dimensional arrays. CNTK refers to these different dimensions as axes. Every CNTK tensor has some static axes and some dynamic axes. The static axes have the same length throughout the life of the network whereas the dynamic axes can vary in length from instance to instance. The axis over which you run a recurrence is dynamic and thus its dimensions are unknown at the time you define your variable. Thus the input variable only lists the shapes of the static axes. Since our inputs are a sequence of one dimensional numbers we specify the input as C.layers.Input(1) Both the $N$ instances in the sequence (training window) and the number of sequences that form a mini-batch are implicitly represented in the default dynamic axis as shown below in the form of defaults. x_axes = [C.Axis.default_batch_axis(), C.Axis.default_dynamic_axis()] C.layers.Input(1, dynamic_axes=x_axes) More information here. The trainer needs definition of the loss function and the optimization algorithm. Step17: Setup everything else we need for training the model Step18: Training the network We are ready to train. 100 epochs should yield acceptable results. Step19: Let's look at how the loss function decreases over time to see if the model is converging Step20: Normally we would validate the training on the data that we set aside for validation but since the input data is small we can run validation on all parts of the dataset. Step21: We check that the errors are roughly the same for train, validation and test sets. We also plot the expected output (Y) and the prediction our model made to show how well the simple LSTM approach worked.
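As a rough sketch of the training described above, assuming the trainer, the input variables x and y, the data splits X and Y, the next_batch generator and BATCH_SIZE defined in the Python code below are used directly, one plausible minimal loop is shown here; the epoch count, logging and loss_history list are illustrative and the notebook's actual loop may differ.
# Rough sketch only; assumes the objects constructed in the code below
EPOCHS = 100                  # the description above suggests about 100 epochs
loss_history = []             # hypothetical list used to track the training loss
for epoch in range(EPOCHS):
    for x_batch, y_batch in next_batch(X, Y, "train", BATCH_SIZE):
        trainer.train_minibatch({x: x_batch, y: y_batch})
        loss_history.append(trainer.previous_minibatch_loss_average)
    if epoch % 10 == 0:
        print("epoch {}: last minibatch loss {:.4f}".format(epoch, loss_history[-1]))

# Illustrative check on the validation split using the same generator
val_errors = [trainer.test_minibatch({x: xb, y: yb})
              for xb, yb in next_batch(X, Y, "val", BATCH_SIZE)]
print("mean validation error: {:.4f}".format(np.mean(val_errors)))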
Python Code: # Standard packages import math from matplotlib import pyplot as plt import numpy as np import os import pandas as pd import time # Helpers for reading stock prices import pandas_datareader.data as pdr import datetime as dt # Images from IPython.display import Image # CNTK packages import cntk as C import cntk.axis from cntk.layers import Input, Dense, Dropout, Recurrence %matplotlib inline Explanation: CNTK Time series prediction with LSTM This demo demonstrates how to use CNTK to predict future values in a time series using a Recurrent Neural Network (RNN). It is based on a LSTM tutorial that comes with the CNTK distribution. RNNs are particularly well suited to learn sequence data. For details on RNNs, see this excellent post. Goal We will download stock prices for a chosen symbol, then train a recurrent neural net to predict the closing price on the following day from $N$ previous days' closing prices. Our RNN will We will use Long-Short Term Memory LSTM units in the hidden layer. An LSTM network is well-suited to learn from experience to classify, process or predict time series when there are time lags of unknown size between important events. Organization The example has the following sections: - Download and prepare data - LSTM network modeling - Model training and evaluation End of explanation # If you have a GPU, uncomment the GPU line below #C.device.set_default_device(C.device.gpu(0)) C.device.set_default_device(C.device.cpu()) Explanation: Select the notebook runtime environment devices / settings Set the device. If you have both CPU and GPU on your machine, you can optionally switch the devices. End of explanation def download_data(symbol='MSFT', start=dt.datetime(2017, 1, 1), end=dt.datetime(2017, 3, 1)): Download daily close and volume for specified stock symbol from Yahoo Finance Returns pandas DataFrame data = pdr.DataReader(symbol, 'yahoo', start, end) data.rename(inplace = True, columns={'Close':'data'}) rv = data['data'].diff()[1:] / 50.0 return rv Explanation: Download and Prepare Data Here we define helper methods to prepare the data. download_data() Queries Yahoo Finance for daily close price of a given stock ticker symbol. Returns an array of floats. End of explanation def read_data(which = "milk"): Read csv if(which == 'temp'): data = pd.read_csv('data/mean-daily-temperature-fisher-ri.csv') name = 'Mean Daily Temperature - Fisher River' data.rename(inplace = True, columns={'temp':'data'}) rv = data['data']/100 else: data = pd.read_csv('data/monthly-milk-production-pounds-p.csv') name = 'Monthly Milk Production' data.rename(inplace = True, columns={'milk':'data'}) rv = data['data'].diff()[1:] / 50.0 f, a = plt.subplots(1, 1, figsize=(12, 5)) a.plot(data['data'], label=name) a.legend(); return rv Explanation: As an alternative, we have code to read two CSV files downloaded from DataMarket. The files are 1. Mean daily temperature, Fisher River near Dallas, Jan 01, 1988 to Dec 31, 1991 2. Monthly milk production: pounds per cow. 
Jan 62 – Dec 75 End of explanation def generate_RNN_data(x, time_steps=10): Generate sequences to feed to rnn x: DataFrame, daily close time_steps: int, number of days in sequences used to train the RNN rnn_x = [] for i in range(len(x) - (time_steps+1)): # Each training sample is a sequence of length time_steps xi = x[i: i + time_steps].astype(np.float32).as_matrix() # We need to reshape as a column vector as the model expects # 1 float per time point xi = xi.reshape(-1,1) rnn_x.append(xi) rnn_x = np.array(rnn_x) # The target values are a single float per training sequence rnn_y = np.array(x[time_steps+1:].astype(np.float32)).reshape(-1,1) return split_data(rnn_x, 0.2, 0.2), split_data(rnn_y, 0.2, 0.2) Explanation: generate_RNN_data() The RNN will be trained on sequences of length $N$ of single values (scalars), meaning that each training sample is a $N\times1$ matrix. CNTK requires us to shape our input data as an array with each element being an observation. So, for inputs, $X$, we need to create a tensor or 3-D array with dimensions $[M, N, 1]$ where $M$ is the number of training samples.<br/> As output we want our network to predict the next value of the sequence, so our target, $Y$, is a $[M, 1]$ array containing the next day's value of the stock price after the sequence presented in $X$.</br/> We do this by sampling from the sequence of stock prices and creating numpy ndarrays. End of explanation def split_data(data, val_size=0.1, test_size=0.1): splits np.array into training, validation and test pos_test = int(len(data) * (1 - test_size)) pos_val = int(len(data[:pos_test]) * (1 - val_size)) train, val, test = data[:pos_val], data[pos_val:pos_test], data[pos_test:] return {"train": train, "val": val, "test": test} Explanation: split_data() This function will split the data into training, validation and test sets and return a list with those elements, each containing a ndarray as described above. End of explanation symbol = 'MSFT' start = dt.datetime(2010, 1, 1) end = dt.datetime(2017, 3, 1) window = 30 #raw_data = download_data1(symbol=symbol, start=start, end=end) #rd100 = raw_data['Close']/10.0 raw_data = read_data('milk') X, Y = generate_RNN_data(raw_data, window) f, a = plt.subplots(3, 1, figsize=(12, 8)) for j, ds in enumerate(['train', 'val', 'test']): a[j].plot(Y[ds], label=ds + ' raw') [i.legend() for i in a]; Explanation: Execute Download data, generate the RNN training and evaluation data and visualize End of explanation print([(a, X[a].shape) for a in X.keys()]) print([(a, Y[a].shape) for a in Y.keys()]) print([(a, np.isnan(X[a]).any()) for a in X.keys()]) print([(a, np.isnan(Y[a]).any()) for a in Y.keys()]) Explanation: Quick check on the dimensions of the data + make sure we don't have any NaNs End of explanation def next_batch(x, y, ds, size=10): get the next batch to process for i in range(0, len(x[ds])-size, size): yield np.array(x[ds][i:i+size]), np.array(y[ds][i:i+size]) Explanation: We define the next_batch() iterator that produces batches we can feed to the training function. Note that because CNTK supports variable sequence length, we must feed the batches as list of sequences. This is a convenience function to generate small batches of data often referred to as minibatch. 
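As a quick sanity check (an assumed extra step, not part of the original notebook), you can iterate over a few minibatches and confirm that the shapes match the (batch, time_steps, 1) inputs and (batch, 1) targets built above; the batch size of 10 here is arbitrary.

```python
# Assumes X, Y and next_batch are defined as above.
for i, (x_batch, y_batch) in enumerate(next_batch(X, Y, "train", size=10)):
    print(x_batch.shape, y_batch.shape)   # expected: (10, time_steps, 1) and (10, 1)
    if i == 2:                            # only inspect the first few batches
        break
```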
End of explanation def create_model(I, H, O): Create the model for time series prediction with C.layers.default_options(initial_state = 0.1): x = C.layers.Input(I) m = C.layers.Recurrence(C.layers.LSTM(H))(x) m = C.ops.sequence.last(m) m = C.layers.Dropout(0.2)(m) m = cntk.layers.Dense(O)(m) # Also create a layer to represent the target. It has the same number of units as the output # and has to share the same dynamic axes y = C.layers.Input(1, dynamic_axes=m.dynamic_axes, name="y") return (m, x, y) Explanation: Network modeling We setup our network with $N$ LSTM cells, each receiving the single value of our sequence as input at every time step. The $N$ outputs from the LSTM layer are the input into a dense layer that produces a single output. So, we have 1 input, $N$ hidden LSTM nodes and again a single output node. To train, CNTK will unroll the network over time steps and backpropagate the error over time. Between LSTM and dense layer we insert a special dropout operation that randomly ignores 20% of the values coming the LSTM during training to prevent overfitting. When using the model to make predictions all values will be retained. We are only interested in predicting one step ahead when we get to the end of each training sequence, so we use another operator to identify the last item in the sequence before connecting the output layer. Using CNTK we can easily express our model: End of explanation def create_trainer(model, output, learning_rate = 0.001, batch_size = 20): # the learning rate lr_schedule = C.learning_rate_schedule(learning_rate, C.UnitType.minibatch) # loss function loss = C.ops.squared_error(model, output) # use squared error for training error = C.ops.squared_error(model, output) # use adam optimizer momentum_time_constant = C.learner.momentum_as_time_constant_schedule(batch_size / -math.log(0.9)) learner = C.learner.adam_sgd(z.parameters, lr = lr_schedule, momentum = momentum_time_constant, unit_gain = True) # Construct the trainer return(C.Trainer(model, (loss, error), [learner])) Explanation: CNTK inputs, outputs and parameters are organized as tensors, or n-dimensional arrays. CNTK refers to these different dimensions as axes. Every CNTK tensor has some static axes and some dynamic axes. The static axes have the same length throughout the life of the network whereas the dynamic axes can vary in length from instance to instance. The axis over which you run a recurrence is dynamic and thus its dimensions are unknown at the time you define your variable. Thus the input variable only lists the shapes of the static axes. Since our inputs are a sequence of one dimensional numbers we specify the input as C.layers.Input(1) Both the $N$ instances in the sequence (training window) and the number of sequences that form a mini-batch are implicitly represented in the default dynamic axis as shown below in the form of defaults. x_axes = [C.Axis.default_batch_axis(), C.Axis.default_dynamic_axis()] C.layers.Input(1, dynamic_axes=x_axes) More information here. The trainer needs definition of the loss function and the optimization algorithm. End of explanation # create the model with 1 input (x), 10 LSTM units, and 1 output unit (y) (z, x, y) = create_model(1, 10, 1) # Construct the trainer BATCH_SIZE = 2 trainer = create_trainer(z, y, learning_rate=0.0002, batch_size=BATCH_SIZE) Explanation: Setup everything else we need for training the model: define user specified training parameters, define inputs, outputs, model and the optimizer. 
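If you want to convince yourself that create_model wires up learnable parameters before starting a long training run, an optional check like the following should work (the sizes are throwaway values, and this cell is my addition rather than part of the original notebook).

```python
# Build a tiny model and count its learnable parameter tensors.
m_tmp, x_tmp, y_tmp = create_model(1, 5, 1)
print(len(m_tmp.parameters))   # LSTM weights/biases plus the Dense layer's weight and bias
```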
End of explanation # Training parameters EPOCHS = 500 # train loss_summary = [] start = time.time() for epoch in range(0, EPOCHS): for x1, y1 in next_batch(X, Y, "train", BATCH_SIZE): trainer.train_minibatch({x: x1, y: y1}) if epoch % (EPOCHS / 20) == 0: training_loss = cntk.utils.get_train_loss(trainer) loss_summary.append((epoch, training_loss)) print("epoch: {:3d}/{}, loss: {:.5f}".format(epoch, EPOCHS, training_loss)) loss_summary = np.array(loss_summary) print("training took {0:.1f} sec".format(time.time() - start)) Explanation: Training the network We are ready to train. 100 epochs should yield acceptable results. End of explanation plt.plot(loss_summary[:,0], loss_summary[:,1], label='training loss'); Explanation: Let's look at how the loss function decreases over time to see if the model is converging End of explanation # validate def get_mse(X,Y,labeltxt): result = 0.0 for x1, y1 in next_batch(X, Y, labeltxt, BATCH_SIZE): eval_error = trainer.test_minibatch({x:x1, y:y1}) result += eval_error return result/len(X[labeltxt]) # Print the train and validation errors for labeltxt in ["train", "val", 'test']: print("mse for {}: {:.6f}".format(labeltxt, get_mse(X, Y, labeltxt))) Explanation: Normally we would validate the training on the data that we set aside for validation but since the input data is small we can run validattion on all parts of the dataset. End of explanation # predict f, a = plt.subplots(3, 1, figsize = (12, 8)) for j, ds in enumerate(["train", "val", "test"]): results = [] for x1, y1 in next_batch(X, Y, ds, BATCH_SIZE): pred = z.eval({x: x1}) results.extend(pred[:, 0]) a[j].plot(Y[ds], label = ds + ' raw') a[j].plot(results, label = ds + ' predicted') [i.legend() for i in a]; Explanation: We check that the errors are roughly the same for train, validation and test sets. We also plot the expected output (Y) and the prediction our model made to shows how well the simple LSTM approach worked. End of explanation
5,944
Given the following text description, write Python code to implement the functionality described below step by step Description: API example for the formal integral There are currently two ways to invoke the calculation of the formal integral with tardis. The first is for use in interactive shells and scripts, and the second is for running tardis with the command line script tardis. Let's start with some common imports Step1: We run tardis in an interactive shell the usual way. Afterwards, we can call simulation.runner.integrator.calculate_spectrum(frequency) to create an integrated spectrum for any list of frequencies. Step2: For simplicity we use the list of frequencies defined by the configuration. We could use any other list that is an astropy Quantity which can be transformed into a frequency (e.g. a list of wavelengths in angstrom works, too). The integration returns a TARDISSpectrum instance holding variables like the luminosity_density in nu and lambda. Step3: Here is a simple plot of the integrated spectrum (blue) in comparison with the virtual spectrum (green). Step4: When running the command line script, we can control the type of spectrum that is written to txt with the method option of the spectrum setting in the YAML. If omitted, the default value is 'virtual'; however, it can be set to 'real' for the spectrum of the real packets, or to 'integrated' for a spectrum from the formal integral approach. Here we verify that our method for the spectral generation is 'integrated'. Step5: Afterwards we run the tardis script in the shell and save the spectrum to /tmp/tardis_spec.txt Step6: Finally we plot the result and verify that this indeed looks the way an integrated spectrum should look (no spread due to noise). As the run was done with only 10 000 packets, one would otherwise expect significant visible noise.
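To illustrate the claim in Step 2 that any frequency-convertible astropy Quantity is accepted, a sketch like the one below should work once the simulation object from the code that follows exists. The wavelength grid and the variable names are made up for the example, not taken from the tutorial.

```python
import numpy as np
import astropy.units as u
import matplotlib.pyplot as plt

# Hypothetical custom grid: 2000 wavelength points between 3000 and 9000 Angstrom;
# calculate_spectrum should accept it because the Quantity is convertible to a frequency.
custom_wavelengths = np.linspace(3000, 9000, 2000) * u.AA
custom_spectrum = simulation.runner.integrator.calculate_spectrum(custom_wavelengths)

fig, ax = plt.subplots()
custom_spectrum.plot(ax)
```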
Python Code: %pylab notebook import tardis from tardis.io.config_reader import Configuration from tardis.simulation import Simulation config_fname = tardis.__path__[0] + '/../../data/tardis_example/tardis_example_integral.yml' Explanation: API example for the formal integral There are currently two ways to invoke the calculation of the formal integral with tardis. The first is for the use in interactive shells and scripts and the second one for running tardis with the command line script tardis. Let's start with some common imports: End of explanation simulation = tardis.run_tardis(config_fname); Explanation: We run tardis in an interactive shell the usual way. Afterwards, we can call simulation.runner.integrator.calculate_spectrum(frequency) to create an integrated spectrum for any list of frequencies. End of explanation wl = simulation.runner.spectrum.frequency spectrum = simulation.runner.integrator.calculate_spectrum(wl) Explanation: For simplicity we use the list of frequencies defined by the configuration. We could use any other list that is a astropy Quantity which can be transformed into a frequency (e.g. a list of wavelengths in angstrom works, too). The integration returns a TARDISSpectrum instance holding variables like the luminosity_density in nu and lambda. End of explanation fig = figure() ax = plt.gca() spectrum.plot(ax) simulation.runner.spectrum_virtual.plot(ax) fig.show() Explanation: Here is a simple plot of the integrated spectrum (blue) in comparison with the virtual spectrum (green). End of explanation config = Configuration.from_yaml(config_fname) config['spectrum'] Explanation: When running the command line script, we can control the type of spectrum that is written to txt with the method option of the spectrum setting in the YAML. If omitted, the default value is 'virtual', however it can be set to 'real' for the spectrum of the real packets, or to 'integrated' for a spectrum from the formal integral approach. Here we verify our method for the spectral generation is 'integrated'. End of explanation !tardis $config_fname /tmp/tardis_spec.txt Explanation: Afterwards we run the tardis script in the shell and save the spectrum to /tmp/tardis_spec.txt End of explanation wl, lum = np.loadtxt('/tmp/tardis_spec.txt', unpack=True); figure() plot(wl, lum); Explanation: Finally we plot the result and verify, that this looks indeed how a integrated spectrum should look like (no spread due to noise). As the run was done with only 10 000 packets, one would expect significant visible noise. End of explanation
5,945
Given the following text description, write Python code to implement the functionality described below step by step Description: k-Nearest Neighbor (kNN) exercise Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website. The kNN classifier consists of two stages Step1: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps Step2: Inline Question #1 Step3: You should expect to see approximately 27% accuracy. Now let's try out a larger k, say k = 5 Step4: You should expect to see slightly better performance than with k = 1. Step5: Cross-validation We have implemented the k-Nearest Neighbor classifier, but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.
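One common way to fill in the fully vectorized distance computation asked for below (a sketch under my own naming, not necessarily identical to the assignment's reference solution) expands the squared L2 distance as ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2 and lets NumPy broadcasting do the rest:

```python
import numpy as np

def l2_distances_no_loops(X_test, X_train):
    # (num_test, 1) + (num_train,) - 2 * (num_test, num_train) broadcasts to (num_test, num_train).
    test_sq = np.sum(X_test ** 2, axis=1, keepdims=True)
    train_sq = np.sum(X_train ** 2, axis=1)
    cross = X_test.dot(X_train.T)
    sq_dists = test_sq + train_sq - 2.0 * cross
    return np.sqrt(np.maximum(sq_dists, 0.0))   # clip tiny negatives caused by round-off
```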
Python Code: # Run some setup code for this notebook. import random import numpy as np from cs231n.data_utils import load_CIFAR10 import matplotlib.pyplot as plt import numpy.linalg as la import seaborn as sns import itertools import pandas as pd sns.set_style('whitegrid') # create a palette generator palette = itertools.cycle(sns.color_palette()) # This is a bit of magic to make matplotlib figures appear inline in the notebook # rather than in a new window. %matplotlib inline plt.rcParams['figure.figsize'] = (12.0, 12.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # Some more magic so that the notebook will reload external python modules; # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 # Load the raw CIFAR-10 data. cifar10_dir = 'cs231n/datasets/cifar-10-batches-py' X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir) # As a sanity check, we print out the size of the training and test data. print 'Training data shape: ', X_train.shape print 'Training labels shape: ', y_train.shape print 'Test data shape: ', X_test.shape print 'Test labels shape: ', y_test.shape # Visualize some examples from the dataset. # We show a few examples of training images from each class. classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'] num_classes = len(classes) samples_per_class = 7 for y, cls in enumerate(classes): idxs = np.flatnonzero(y_train == y) idxs = np.random.choice(idxs, samples_per_class, replace=False) for i, idx in enumerate(idxs): plt_idx = i * num_classes + y + 1 plt.subplot(samples_per_class, num_classes, plt_idx) plt.imshow(X_train[idx].astype('uint8')) plt.axis('off') if i == 0: plt.title(cls) plt.show() # Subsample the data for more efficient code execution in this exercise num_training = 5000 mask = range(num_training) X_train = X_train[mask] y_train = y_train[mask] num_test = 500 mask = range(num_test) X_test = X_test[mask] y_test = y_test[mask] # Reshape the image data into rows X_train = np.reshape(X_train, (X_train.shape[0], -1)) X_test = np.reshape(X_test, (X_test.shape[0], -1)) print X_train.shape, X_test.shape from cs231n.classifiers import KNearestNeighbor # Create a kNN classifier instance. # Remember that training a kNN classifier is a noop: # the Classifier simply remembers the data and does no further processing classifier = KNearestNeighbor() classifier.train(X_train, y_train) Explanation: k-Nearest Neighbor (kNN) exercise Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website. The kNN classifier consists of two stages: During training, the classifier takes the training data and simply remembers it During testing, kNN classifies every test image by comparing to all training images and transfering the labels of the k most similar training examples The value of k is cross-validated In this exercise you will implement these steps and understand the basic Image Classification pipeline, cross-validation, and gain proficiency in writing efficient, vectorized code. End of explanation # Open cs231n/classifiers/k_nearest_neighbor.py and implement # compute_distances_two_loops. 
# Test your implementation: dists = classifier.compute_distances_two_loops(X_test) dists2 = classifier.compute_distances_one_loop(X_test) dists3 = classifier.compute_distances_no_loops(X_test) distances = [dists, dists2, dists3] names = ['two loop', 'one loop', 'no loop'] for distance, name in zip(distances, names): print(name) plt.imshow(dists, interpolation='none') plt.show() Explanation: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps: First we must compute the distances between all test examples and all train examples. Given these distances, for each test example we find the k nearest examples and have them vote for the label Lets begin with computing the distance matrix between all training and test examples. For example, if there are Ntr training examples and Nte test examples, this stage should result in a Nte x Ntr matrix where each element (i,j) is the distance between the i-th test and j-th train example. First, open cs231n/classifiers/k_nearest_neighbor.py and implement the function compute_distances_two_loops that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time. End of explanation # Now implement the function predict_labels and run the code below: # We use k = 1 (which is Nearest Neighbor). y_test_pred = classifier.predict_labels(dists, k=1) # Compute and print the fraction of correctly predicted examples num_correct = np.sum(y_test_pred == y_test) accuracy = float(num_correct) / num_test print('Got {} / {} correct => accuracy: {:.3f}'.format(num_correct, num_test, accuracy)) # Now implement the function predict_labels and run the code below: # We use k = 1 (which is Nearest Neighbor). y_test_pred = classifier.predict_labels(dists3, k=1) # Compute and print the fraction of correctly predicted examples num_correct = np.sum(y_test_pred == y_test) accuracy = float(num_correct) / num_test print('Got {} / {} correct => accuracy: {:.3f}'.format(num_correct, num_test, accuracy)) Explanation: Inline Question #1: Notice the structured patterns in the distance matrix, where some rows or columns are visible brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.) What in the data is the cause behind the distinctly bright rows? What causes the columns? Your Answer: fill this in. End of explanation y_test_pred = classifier.predict_labels(dists, k=5) num_correct = np.sum(y_test_pred == y_test) accuracy = float(num_correct) / num_test print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy) Explanation: You should expect to see approximately 27% accuracy. Now lets try out a larger k, say k = 5: End of explanation # Now lets speed up distance matrix computation by using partial vectorization # with one loop. Implement the function compute_distances_one_loop and run the # code below: dists_one = classifier.compute_distances_one_loop(X_test) # To ensure that our vectorized implementation is correct, we make sure that it # agrees with the naive implementation. There are many ways to decide whether # two matrices are similar; one of the simplest is the Frobenius norm. In case # you haven't seen it before, the Frobenius norm of two matrices is the square # root of the squared sum of differences of all elements; in other words, reshape # the matrices into vectors and compute the Euclidean distance between them. 
difference = np.linalg.norm(dists - dists_one, ord='fro') print 'Difference was: %f' % (difference, ) if difference < 0.001: print 'Good! The distance matrices are the same' else: print 'Uh-oh! The distance matrices are different' # Now implement the fully vectorized version inside compute_distances_no_loops # and run the code dists_two = classifier.compute_distances_no_loops(X_test) # check that the distance matrix agrees with the one we computed before: difference = np.linalg.norm(dists - dists_two, ord='fro') print 'Difference was: %f' % (difference, ) if difference < 0.001: print 'Good! The distance matrices are the same' else: print 'Uh-oh! The distance matrices are different' # Let's compare how fast the implementations are: def time_function(f, *args): Call a function f with args and return the time (in seconds) that it took to execute. import time tic = time.time() f(*args) toc = time.time() return toc - tic two_loop_time = time_function(classifier.compute_distances_two_loops, X_test) print 'Two loop version took %f seconds' % two_loop_time one_loop_time = time_function(classifier.compute_distances_one_loop, X_test) print 'One loop version took %f seconds' % one_loop_time no_loop_time = time_function(classifier.compute_distances_no_loops, X_test) print 'No loop version took %f seconds' % no_loop_time # you should see significantly faster performance with the fully vectorized implementation Explanation: You should expect to see a slightly better performance than with k = 1. End of explanation def run_knn(X_train, y_train, X_validation, y_validation, k): # initalize KNN classifer = KNearestNeighbor() # train the classifer on training set classifier.train(X_train, y_train) # get distance for X validation dist = classifier.compute_distances_no_loops(X_validation) # make prediction based on k y_pred = classifier.predict_labels(dist, k=k) # get the number of correct predictions num_correct = np.sum(y_pred == y_validation) # score the classifer accuracy = float(num_correct)/len(y_validation) return accuracy num_folds = 5 k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100] X_train_folds = [] y_train_folds = [] n = X_train.shape[0] ################################################################################ # TODO: # # Split up the training data into folds. After splitting, X_train_folds and # # y_train_folds should each be lists of length num_folds, where # # y_train_folds[i] is the label vector for the points in X_train_folds[i]. # # Hint: Look up the numpy array_split function. # ################################################################################ indices = range(n) cv_indices = np.array_split(np.array(indices), num_folds) ################################################################################ # END OF YOUR CODE # ################################################################################ # A dictionary holding the accuracies for different values of k that we find # when running cross-validation. After running cross-validation, # k_to_accuracies[k] should be a list of length num_folds giving the different # accuracy values that we found when using that value of k. k_to_accuracies = {} ################################################################################ # TODO: # # Perform k-fold cross validation to find the best value of k. For each # # possible value of k, run the k-nearest-neighbor algorithm num_folds times, # # where in each case you use all but one of the folds as training data and the # # last fold as a validation set. 
Store the accuracies for all fold and all # # values of k in the k_to_accuracies dictionary. # ################################################################################ for k in k_choices: k_to_accuracies[k] = [run_knn(X_train[np.setdiff1d(cv_indices, subset)], y_train[np.setdiff1d(cv_indices, subset)], X_train[subset], y_train[subset], k) for subset in cv_indices] ################################################################################ # END OF YOUR CODE # ################################################################################ # Print out the computed accuracies for k in sorted(k_to_accuracies): for accuracy in k_to_accuracies[k]: print 'k = %d, accuracy = %f' % (k, accuracy) # plot the raw observations plt.figure(figsize=(14,6)) for k in k_choices: accuracies = k_to_accuracies[k] plt.scatter([k] * len(accuracies), accuracies, label=k, c=next(palette), lw=.25) # plot the trend line with error bars that correspond to standard deviation accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())]) accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())]) plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std) plt.title('Cross-validation on k') plt.xlabel('k') plt.ylabel('Cross-validation accuracy') plt.legend() plt.show() # eye ball the variance with a heatmap k_to_accuracies_df = pd.DataFrame(k_to_accuracies) plt.figure(figsize=(10,4)) sns.heatmap(k_to_accuracies_df, annot=True, linecolor='white', linewidths=.005) plt.ylabel("Fold") plt.xlabel("K") plt.title("Cross-validation accuracy"); print(k_to_accuracies_df.describe().T) # Based on the cross-validation results above, choose the best value for k, # retrain the classifier using all the training data, and test it on the test # data. You should be able to get above 28% accuracy on the test data. best_k = 10 classifier = KNearestNeighbor() classifier.train(X_train, y_train) y_test_pred = classifier.predict(X_test, k=best_k) # Compute and display the accuracy num_correct = np.sum(y_test_pred == y_test) accuracy = float(num_correct) / num_test print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy) Explanation: Cross-validation We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation. End of explanation
5,946
Given the following text description, write Python code to implement the functionality described below step by step Description: DTM Example In this example we will present a sample usage of the DTM wrapper. Prior to using this you need to compile the DTM code yourself or use one of the binaries. This tutorial is on Windows. Running it on Linux and OSX is the same. In this example we will use a small, already processed corpus. To see how to get a dataset to this stage please take a look at the Gensim Tutorials Step1: First we will set up logging Step2: Now let's load a set of documents Step3: This corpus contains 10 documents. Now let's say we would like to model this with DTM. To do this we have to define the time steps each document belongs to. In this case the first 3 documents were collected at the same time, while the last 7 were collected a month later, and we wish to see how the topics change from month to month. For this we will define the time_seq, which contains the time slice definition. Step4: A simple corpus wrapper to load a premade corpus. You can use this with your own data. Step5: So now we have to generate the path to the DTM executable; here I have already set an ENV variable for the DTM_HOME Step6: That is basically all we need to be able to invoke the training. If initialize_lda=True then DTM will create an LDA model first and store it in initial-lda-ss.dat. If you already have initial-lda-ss.dat in the DTM folder then you can save time and re-use it with initialize_lda=False. If the file is missing then DTM will exit with an error. Step7: If everything worked we should be able to print out the topics Step8: Document-Topic proportions Next, we'll attempt to find the Document-Topic proportions. We will use the gamma class variable of the model to do the same. Gamma is a matrix such that gamma[5,10] is the proportion of the 10th topic in document 5. To find, say, the topic proportions in Document 1, we do the following Step9: DIM Example The DTM wrapper in Gensim also has the capacity to run in Document Influence Model mode. The model is described in this paper. What it allows you to do is find the 'influence' of a certain document on a particular topic. It is primarily used in identifying the scientific impact of research papers through the capability of that document's keywords influencing a topic. 'Influence' can be naively thought of like this - if more of a particular document's words appear in the subsequent evolution of a topic, that document is understood to have influenced that topic more. To run it in this mode, we now call DtmModel again, but with the model parameter set as fixed. Note that running it in this mode will also generate the DTM topics similar to running plain DTM, but with added information on document influence. Step10: The main difference between the DTM and DIM models is the addition of Influence files for each time-slice, which is interpreted with the influences_time variable. To find, say, the influence of Document 2 on Topic 2 in Time-Slice 1, we do the following
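Once the code below has produced a trained model, a small assumed follow-up like this can make the document-topic proportions easier to scan; pandas and the column labels are my additions, not part of the original tutorial.

```python
import pandas as pd

# gamma_ is (num_documents, num_topics); with num_topics=2 this gives one row per document.
doc_topics = pd.DataFrame(model.gamma_, columns=['topic_0', 'topic_1'])
print(doc_topics.round(3))

# The time slices should also account for every document in the corpus.
assert sum(time_seq) == len(corpus)
```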
Python Code: import logging import os from gensim import corpora, utils from gensim.models.wrappers.dtmmodel import DtmModel import numpy as np Explanation: DTM Example In this example we will present a sample usage of the DTM wrapper. Prior to using this you need to compile the DTM code yourself or use one of the binaries. This tutorial is on Windows. Running it on Linux and OSX is the same. In this example we will use a small already processed corpus. To see how to get a dataset to this stage please take a look at Gensim Tutorials End of explanation logger = logging.getLogger() logger.setLevel(logging.DEBUG) logging.debug("test") Explanation: First we wil setup logging End of explanation documents = [[u'senior', u'studios', u'studios', u'studios', u'creators', u'award', u'mobile', u'currently', u'challenges', u'senior', u'summary', u'senior', u'motivated', u'creative', u'senior', u'performs', u'engineering', u'tasks', u'infrastructure', u'focusing', u'primarily', u'programming', u'interaction', u'designers', u'engineers', u'leadership', u'teams', u'teams', u'crews', u'responsibilities', u'engineering', u'quality', u'functional', u'functional', u'teams', u'organizing', u'prioritizing', u'technical', u'decisions', u'engineering', u'participates', u'participates', u'reviews', u'participates', u'hiring', u'conducting', u'interviews', u'feedback', u'departments', u'define', u'focusing', u'engineering', u'teams', u'crews', u'facilitate', u'engineering', u'departments', u'deadlines', u'milestones', u'typically', u'spends', u'designing', u'developing', u'updating', u'bugs', u'mentoring', u'engineers', u'define', u'schedules', u'milestones', u'participating', u'reviews', u'interviews', u'sized', u'teams', u'interacts', u'disciplines', u'knowledge', u'skills', u'knowledge', u'knowledge', u'xcode', u'scripting', u'debugging', u'skills', u'skills', u'knowledge', u'disciplines', u'animation', u'networking', u'expertise', u'competencies', u'oral', u'skills', u'management', u'skills', u'proven', u'effectively', u'teams', u'deadline', u'environment', u'bachelor', u'minimum', u'shipped', u'leadership', u'teams', u'location', u'resumes', u'jobs', u'candidates', u'openings', u'jobs'], [u'maryland', u'client', u'producers', u'electricity', u'operates', u'storage', u'utility', u'retail', u'customers', u'engineering', u'consultant', u'maryland', u'summary', u'technical', u'technology', u'departments', u'expertise', u'maximizing', u'output', u'reduces', u'operating', u'participates', u'areas', u'engineering', u'conducts', u'testing', u'solve', u'supports', u'environmental', u'understands', u'objectives', u'operates', u'responsibilities', u'handles', u'complex', u'engineering', u'aspects', u'monitors', u'quality', u'proficiency', u'optimization', u'recommendations', u'supports', u'personnel', u'troubleshooting', u'commissioning', u'startup', u'shutdown', u'supports', u'procedure', u'operating', u'units', u'develops', u'simulations', u'troubleshooting', u'tests', u'enhancing', u'solving', u'develops', u'estimates', u'schedules', u'scopes', u'understands', u'technical', u'management', u'utilize', u'routine', u'conducts', u'hazards', u'utilizing', u'hazard', u'operability', u'methodologies', u'participates', u'startup', u'reviews', u'pssr', u'participate', u'teams', u'participate', u'regulatory', u'audits', u'define', u'scopes', u'budgets', u'schedules', u'technical', u'management', u'environmental', u'awareness', u'interfacing', u'personnel', u'interacts', u'regulatory', u'departments', u'input', u'objectives', 
u'identifying', u'introducing', u'concepts', u'solutions', u'peers', u'customers', u'coworkers', u'knowledge', u'skills', u'engineering', u'quality', u'engineering', u'commissioning', u'startup', u'knowledge', u'simulators', u'technologies', u'knowledge', u'engineering', u'techniques', u'disciplines', u'leadership', u'skills', u'proven', u'engineers', u'oral', u'skills', u'technical', u'skills', u'analytically', u'solve', u'complex', u'interpret', u'proficiency', u'simulation', u'knowledge', u'applications', u'manipulate', u'applications', u'engineering', u'calculations', u'programs', u'matlab', u'excel', u'independently', u'environment', u'proven', u'skills', u'effectively', u'multiple', u'tasks', u'planning', u'organizational', u'management', u'skills', u'rigzone', u'jobs', u'developer', u'exceptional', u'strategies', u'junction', u'exceptional', u'strategies', u'solutions', u'solutions', u'biggest', u'insurers', u'operates', u'investment'], [u'vegas', u'tasks', u'electrical', u'contracting', u'expertise', u'virtually', u'electrical', u'developments', u'institutional', u'utilities', u'technical', u'experts', u'relationships', u'credibility', u'contractors', u'utility', u'customers', u'customer', u'relationships', u'consistently', u'innovations', u'profile', u'construct', u'envision', u'dynamic', u'complex', u'electrical', u'management', u'grad', u'internship', u'electrical', u'engineering', u'infrastructures', u'engineers', u'documented', u'management', u'engineering', u'quality', u'engineering', u'electrical', u'engineers', u'complex', u'distribution', u'grounding', u'estimation', u'testing', u'procedures', u'voltage', u'engineering', u'troubleshooting', u'installation', u'documentation', u'bsee', u'certification', u'electrical', u'voltage', u'cabling', u'electrical', u'engineering', u'candidates', u'electrical', u'internships', u'oral', u'skills', u'organizational', u'prioritization', u'skills', u'skills', u'excel', u'cadd', u'calculation', u'autocad', u'mathcad', u'skills', u'skills', u'customer', u'relationships', u'solving', u'ethic', u'motivation', u'tasks', u'budget', u'affirmative', u'diversity', u'workforce', u'gender', u'orientation', u'disability', u'disabled', u'veteran', u'vietnam', u'veteran', u'qualifying', u'veteran', u'diverse', u'candidates', u'respond', u'developing', u'workplace', u'reflects', u'diversity', u'communities', u'reviews', u'electrical', u'contracting', u'southwest', u'electrical', u'contractors'], [u'intern', u'electrical', u'engineering', u'idexx', u'laboratories', u'validating', u'idexx', u'integrated', u'hardware', u'entails', u'planning', u'debug', u'validation', u'engineers', u'validation', u'methodologies', u'healthcare', u'platforms', u'brightest', u'solve', u'challenges', u'innovation', u'technology', u'idexx', u'intern', u'idexx', u'interns', u'supplement', u'interns', u'teams', u'roles', u'competitive', u'interns', u'idexx', u'interns', u'participate', u'internships', u'mentors', u'seminars', u'topics', u'leadership', u'workshops', u'relevant', u'planning', u'topics', u'intern', u'presentations', u'mixers', u'applicants', u'ineligible', u'laboratory', u'compliant', u'idexx', u'laboratories', u'healthcare', u'innovation', u'practicing', u'veterinarians', u'diagnostic', u'technology', u'idexx', u'enhance', u'veterinarians', u'efficiency', u'economically', u'idexx', u'worldwide', u'diagnostic', u'tests', u'tests', u'quality', u'headquartered', u'idexx', u'laboratories', u'employs', u'customers', u'qualifications', u'applicants', u'idexx', 
u'interns', u'potential', u'demonstrated', u'portfolio', u'recommendation', u'resumes', u'marketing', u'location', u'americas', u'verification', u'validation', u'schedule', u'overtime', u'idexx', u'laboratories', u'reviews', u'idexx', u'laboratories', u'nasdaq', u'healthcare', u'innovation', u'practicing', u'veterinarians'], [u'location', u'duration', u'temp', u'verification', u'validation', u'tester', u'verification', u'validation', u'middleware', u'specifically', u'testing', u'applications', u'clinical', u'laboratory', u'regulated', u'environment', u'responsibilities', u'complex', u'hardware', u'testing', u'clinical', u'analyzers', u'laboratory', u'graphical', u'interfaces', u'complex', u'sample', u'sequencing', u'protocols', u'developers', u'correction', u'tracking', u'tool', u'timely', u'troubleshoot', u'testing', u'functional', u'manual', u'automated', u'participate', u'ongoing', u'testing', u'coverage', u'planning', u'documentation', u'testing', u'validation', u'corrections', u'monitor', u'implementation', u'recurrence', u'operating', u'statistical', u'quality', u'testing', u'global', u'multi', u'teams', u'travel', u'skills', u'concepts', u'waterfall', u'agile', u'methodologies', u'debugging', u'skills', u'complex', u'automated', u'instrumentation', u'environment', u'hardware', u'mechanical', u'components', u'tracking', u'lifecycle', u'management', u'quality', u'organize', u'define', u'priorities', u'organize', u'supervision', u'aggressive', u'deadlines', u'ambiguity', u'analyze', u'complex', u'situations', u'concepts', u'technologies', u'verbal', u'skills', u'effectively', u'technical', u'clinical', u'diverse', u'strategy', u'clinical', u'chemistry', u'analyzer', u'laboratory', u'middleware', u'basic', u'automated', u'testing', u'biomedical', u'engineering', u'technologists', u'laboratory', u'technology', u'availability', u'click', u'attach'], [u'scientist', u'linux', u'asrc', u'scientist', u'linux', u'asrc', u'technology', u'solutions', u'subsidiary', u'asrc', u'engineering', u'technology', u'contracts', u'multiple', u'agencies', u'scientists', u'engineers', u'management', u'personnel', u'allows', u'solutions', u'complex', u'aeronautics', u'aviation', u'management', u'aviation', u'engineering', u'hughes', u'technical', u'technical', u'aviation', u'evaluation', u'engineering', u'management', u'technical', u'terminal', u'surveillance', u'programs', u'currently', u'scientist', u'travel', u'responsibilities', u'develops', u'technology', u'modifies', u'technical', u'complex', u'reviews', u'draft', u'conformity', u'completeness', u'testing', u'interface', u'hardware', u'regression', u'impact', u'reliability', u'maintainability', u'factors', u'standardization', u'skills', u'travel', u'programming', u'linux', u'environment', u'cisco', u'knowledge', u'terminal', u'environment', u'clearance', u'clearance', u'input', u'output', u'digital', u'automatic', u'terminal', u'management', u'controller', u'termination', u'testing', u'evaluating', u'policies', u'procedure', u'interface', u'installation', u'verification', u'certification', u'core', u'avionic', u'programs', u'knowledge', u'procedural', u'testing', u'interfacing', u'hardware', u'regression', u'impact', u'reliability', u'maintainability', u'factors', u'standardization', u'missions', u'asrc', u'subsidiaries', u'affirmative', u'employers', u'applicants', u'disability', u'veteran', u'technology', u'location', u'airport', u'bachelor', u'schedule', u'travel', u'contributor', u'management', u'asrc', u'reviews'], [u'technical', u'solarcity', 
u'niche', u'vegas', u'overview', u'resolving', u'customer', u'clients', u'expanding', u'engineers', u'developers', u'responsibilities', u'knowledge', u'planning', u'adapt', u'dynamic', u'environment', u'inventive', u'creative', u'solarcity', u'lifecycle', u'responsibilities', u'technical', u'analyzing', u'diagnosing', u'troubleshooting', u'customers', u'ticketing', u'console', u'escalate', u'knowledge', u'engineering', u'timely', u'basic', u'phone', u'functionality', u'customer', u'tracking', u'knowledgebase', u'rotation', u'configure', u'deployment', u'sccm', u'technical', u'deployment', u'deploy', u'hardware', u'solarcity', u'bachelor', u'knowledge', u'dell', u'laptops', u'analytical', u'troubleshooting', u'solving', u'skills', u'knowledge', u'databases', u'preferably', u'server', u'preferably', u'monitoring', u'suites', u'documentation', u'procedures', u'knowledge', u'entries', u'verbal', u'skills', u'customer', u'skills', u'competitive', u'solar', u'package', u'insurance', u'vacation', u'savings', u'referral', u'eligibility', u'equity', u'performers', u'solarcity', u'affirmative', u'diversity', u'workplace', u'applicants', u'orientation', u'disability', u'veteran', u'careerrookie'], [u'embedded', u'exelis', u'junction', u'exelis', u'embedded', u'acquisition', u'networking', u'capabilities', u'classified', u'customer', u'motivated', u'develops', u'tests', u'innovative', u'solutions', u'minimal', u'supervision', u'paced', u'environment', u'enjoys', u'assignments', u'interact', u'multi', u'disciplined', u'challenging', u'focused', u'embedded', u'developments', u'spanning', u'engineering', u'lifecycle', u'specification', u'enhancement', u'applications', u'embedded', u'freescale', u'applications', u'android', u'platforms', u'interface', u'customers', u'developers', u'refine', u'specifications', u'architectures', u'java', u'programming', u'scripts', u'python', u'debug', u'debugging', u'emulators', u'regression', u'revisions', u'specialized', u'setups', u'capabilities', u'subversion', u'technical', u'documentation', u'multiple', u'engineering', u'techexpousa', u'reviews'], [u'modeler', u'semantic', u'modeling', u'models', u'skills', u'ontology', u'resource', u'framework', u'schema', u'technologies', u'hadoop', u'warehouse', u'oracle', u'relational', u'artifacts', u'models', u'dictionaries', u'models', u'interface', u'specifications', u'documentation', u'harmonization', u'mappings', u'aligned', u'coordinate', u'technical', u'peer', u'reviews', u'stakeholder', u'communities', u'impact', u'domains', u'relationships', u'interdependencies', u'models', u'define', u'analyze', u'legacy', u'models', u'corporate', u'databases', u'architectural', u'alignment', u'customer', u'expertise', u'harmonization', u'modeling', u'modeling', u'consulting', u'stakeholders', u'quality', u'models', u'storage', u'agile', u'specifically', u'focus', u'modeling', u'qualifications', u'bachelors', u'accredited', u'modeler', u'encompass', u'evaluation', u'skills', u'knowledge', u'modeling', u'techniques', u'resource', u'framework', u'schema', u'technologies', u'unified', u'modeling', u'technologies', u'schemas', u'ontologies', u'sybase', u'knowledge', u'skills', u'interpersonal', u'skills', u'customers', u'clearance', u'applicants', u'eligibility', u'classified', u'clearance', u'polygraph', u'techexpousa', u'solutions', u'partnership', u'solutions', u'integration'], [u'technologies', u'junction', u'develops', u'maintains', u'enhances', u'complex', u'diverse', u'intensive', u'analytics', u'algorithm', u'manipulation', 
u'management', u'documented', u'individually', u'reviews', u'tests', u'components', u'adherence', u'resolves', u'utilizes', u'methodologies', u'environment', u'input', u'components', u'hardware', u'offs', u'reuse', u'cots', u'gots', u'synthesis', u'components', u'tasks', u'individually', u'analyzes', u'modifies', u'debugs', u'corrects', u'integrates', u'operating', u'environments', u'develops', u'queries', u'databases', u'repositories', u'recommendations', u'improving', u'documentation', u'develops', u'implements', u'algorithms', u'functional', u'assists', u'developing', u'executing', u'procedures', u'components', u'reviews', u'documentation', u'solutions', u'analyzing', u'conferring', u'users', u'engineers', u'analyzing', u'investigating', u'areas', u'adapt', u'hardware', u'mathematical', u'models', u'predict', u'outcome', u'implement', u'complex', u'database', u'repository', u'interfaces', u'queries', u'bachelors', u'accredited', u'substituted', u'bachelors', u'firewalls', u'ipsec', u'vpns', u'technology', u'administering', u'servers', u'apache', u'jboss', u'tomcat', u'developing', u'interfaces', u'firefox', u'internet', u'explorer', u'operating', u'mainframe', u'linux', u'solaris', u'virtual', u'scripting', u'programming', u'oriented', u'programming', u'ajax', u'script', u'procedures', u'cobol', u'cognos', u'fusion', u'focus', u'html', u'java', u'java', u'script', u'jquery', u'perl', u'visual', u'basic', u'powershell', u'cots', u'cots', u'oracle', u'apex', u'integration', u'competitive', u'package', u'bonus', u'corporate', u'equity', u'tuition', u'reimbursement', u'referral', u'bonus', u'holidays', u'insurance', u'flexible', u'disability', u'insurance', u'technologies', u'disability', u'accommodation', u'recruiter', u'techexpousa']] Explanation: Now lets load a set of documents End of explanation time_seq = [3, 7] # first 3 documents are from time slice one # and the other 7 are from the second time slice. Explanation: This corpus contains 10 documents. Now lets say we would like to model this with DTM. To do this we have to define the time steps each document belongs to. In this case the first 3 documents were collected at the same time, while the last 7 were collected a month later, and we wish to see how the topics change from month to month. For this we will define the time_seq, which contains the time slice definition. End of explanation class DTMcorpus(corpora.textcorpus.TextCorpus): def get_texts(self): return self.input def __len__(self): return len(self.input) corpus = DTMcorpus(documents) Explanation: A simple corpus wrapper to load a premade corpus. You can use this with your own data. End of explanation # path to dtm home folder dtm_home = os.environ.get('DTM_HOME', "dtm-master") # path to the binary. on my PC the executable file is dtm-master/bin/dtm dtm_path = os.path.join(dtm_home, 'bin', 'dtm') if dtm_home else None # you can also copy the path down directly. Change this variable to your DTM executable before running. dtm_path = "/home/bhargav/dtm/main" Explanation: So now we have to generate the path to DTM executable, here I have already set an ENV variable for the DTM_HOME End of explanation model = DtmModel(dtm_path, corpus, time_seq, num_topics=2, id2word=corpus.dictionary, initialize_lda=True) Explanation: That is basically all we need to be able to invoke the Training. If initialize_lda=True then DTM will create a LDA model first and store it in initial-lda-ss.dat. 
If you already have initial-lda-ss.dat in the DTM folder then you can save time and re-use it with initialize_lda=False. If the file is missing then DTM will exit with an error. End of explanation topics = model.show_topic(topicid=1, time=1, num_words=10) topics Explanation: If everything worked we should be able to print out the topics End of explanation doc_number = 1 num_topics = 2 for i in range(0, num_topics): print ("Distribution of Topic %d %f" % (i, model.gamma_[doc_number, i])) Explanation: Document-Topic proportions Next, we'll attempt to find the Document-Topic proportions. We will use the gamma class variable of the model to do the same. Gamma is a matrix such that gamma[5,10] is the proportion of the 10th topic in document 5. To find, say, the topic proportions in Document 1, we do the following: End of explanation model = DtmModel(dtm_path, corpus, time_seq, num_topics=2, id2word=corpus.dictionary, initialize_lda=True, model='fixed') Explanation: DIM Example The DTM wrapper in Gensim also has the capacity to run in Document Influence Model mode. The model is described in this paper. What it allows you to do is find the 'influence' of a certain document on a particular topic. It is primarily used in identifying the scientific impact of research papers through the capability of that document's keywords influencing a topic. 'Influence' can be naively thought of like this - if more of a particular document's words appear in the subsequent evolution of a topic, that document is understood to have influenced that topic more. To run it in this mode, we now call DtmModel again, but with the model parameter set as fixed. Note that running it in this mode will also generate the DTM topics similar to running plain DTM, but with added information on document influence. End of explanation document_no = 1 #document 2 topic_no = 1 #topic number 2 time_slice = 0 #time slice 1 model.influences_time[time_slice][document_no][topic_no] Explanation: The main difference between the DTM and DIM models is the addition of Influence files for each time-slice, which is interpreted with the influences_time variable. To find, say, the influence of Document 2 on Topic 2 in Time-Slice 1, we do the following: End of explanation
5,947
Given the following text description, write Python code to implement the functionality described below step by step Description: Example 4 - stripy gradients SRFPACK is a Fortran 77 software package that constructs a smooth interpolatory or approximating surface to data values associated with arbitrarily distributed points. It employs automatically selected tension factors to preserve shape properties of the data and avoid overshoot and undershoot associated with steep gradients. Notebook contents Analytic function and derivatives Evaluating accuracy The next example is Ex5-Smoothing Define a computational mesh Use the (usual) icosahedron with face points included. Step1: Analytic function Define a relatively smooth function that we can interpolate from the coarse mesh to the fine mesh and analyse Step2: Derivatives of solution compared to analytic values The gradient method of Triangulation takes a data array f representing values on the mesh vertices and returns the x,y derivatives. ``` python Triangulation.gradient(f, nit=3, tol=0.001) ``` Derivatives of higher accuracy can be obtained by tweaking tol, which controls the convergence tolerance, or nit which controls the number of iterations to a solution. The default values are set to an optimal trade-off between speed and accuracy.
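To see the effect of the tol and nit arguments mentioned above, a small assumed experiment could look like the sketch below; it reuses the mesh and the analytic_sol / analytic_sol_ddx arrays that the code in the following steps creates, and the tolerance values are arbitrary.

```python
import numpy as np

# Assumes mesh, analytic_sol and analytic_sol_ddx from the code below have been created.
for tol in (1e-3, 1e-6):
    ddx, ddy = mesh.gradient(analytic_sol, nit=10, tol=tol)
    rms_x = np.sqrt(np.mean((ddx - analytic_sol_ddx) ** 2))
    print("tol = {:.0e}: RMS error in d/dx = {:.4e}".format(tol, rms_x))
```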
Python Code: import stripy as stripy xmin = 0.0 xmax = 10.0 ymin = 0.0 ymax = 10.0 extent = [xmin, xmax, ymin, ymax] spacingX = 0.2 spacingY = 0.2 mesh = stripy.cartesian_meshes.elliptical_mesh(extent, spacingX, spacingY, refinement_levels=3) print("number of points = {}".format(mesh.npoints)) Explanation: Example 4 - stripy gradients SRFPACK is a Fortran 77 software package that constructs a smooth interpolatory or approximating surface to data values associated with arbitrarily distributed points. It employs automatically selected tension factors to preserve shape properties of the data and avoid overshoot and undershoot associated with steep gradients. Notebook contents Analytic function and derivatives Evaluating accuracy The next example is Ex5-Smoothing Define a computational mesh Use the (usual) icosahedron with face points included. End of explanation import numpy as np def analytic(xs, ys, k1, k2): return np.cos(k1*xs) * np.sin(k2*ys) def analytic_ddx(xs, ys, k1, k2): return -k1 * np.sin(k1*xs) * np.sin(k2*ys) / np.cos(ys) def analytic_ddy(xs, ys, k1, k2): return k2 * np.cos(k1*xs) * np.cos(k2*ys) analytic_sol = analytic(mesh.x, mesh.y, 0.1, 1.0) analytic_sol_ddx = analytic_ddx(mesh.x, mesh.y, 0.1, 1.0) analytic_sol_ddy = analytic_ddy(mesh.x, mesh.y, 0.1, 1.0) %matplotlib inline import matplotlib.pyplot as plt def axis_mesh_field(fig, ax, mesh, field, label): ax.axis('off') x0 = mesh.x y0 = mesh.y trip = ax.tripcolor(x0, y0, mesh.simplices, field, cmap=plt.cm.RdBu) fig.colorbar(trip, ax=ax) ax.set_title(str(label)) return fig = plt.figure(figsize=(10, 8), facecolor="none") ax = fig.add_subplot(111) axis_mesh_field(fig, ax, mesh, analytic_sol, "analytic solution") Explanation: Analytic function Define a relatively smooth function that we can interpolate from the coarse mesh to the fine mesh and analyse End of explanation stripy_ddx, stripy_ddy = mesh.gradient(analytic_sol) fig, ax = plt.subplots(3,2, figsize=(12, 15), facecolor="none") axis_mesh_field(fig, ax[0,0], mesh, analytic_sol, label="original") axis_mesh_field(fig, ax[1,0], mesh, stripy_ddx, label="ddy") axis_mesh_field(fig, ax[1,1], mesh, stripy_ddy, label="ddx") axis_mesh_field(fig, ax[2,0], mesh, stripy_ddx-analytic_sol_ddx, label="ddx_err") axis_mesh_field(fig, ax[2,1], mesh, stripy_ddy-analytic_sol_ddy, label="ddy_err") ax[0,1].axis('off') plt.show() Explanation: Derivatives of solution compared to analytic values The gradient method of Triangulation takes a data array f representing values on the mesh vertices and returns the x,y derivatives. ``` python Triangulation.gradient(f, nit=3, tol=0.001) ``` Derivatives of higher accuracy can be obtained by tweaking tol, which controls the convergence tolerance, or nit which controls the number of iterations to a solution. The default values are set to an optimal trade-off between speed and accuracy. End of explanation
5,948
Given the following text description, write Python code to implement the functionality described below step by step Description: Tutorial showing how to create Parcels in Agulhas animated gif This brief tutorial shows how to recreate the animated gif showing particles in the Agulhas region south of Africa. We start with importing the relevant modules Step1: Now load the Globcurrent fields from the GlobCurrent_example_data directory (note that unlike in the main Parcels tutorial we don't use a dictionary for the filenames here; as they are the same for all variables, we don't need to) Step2: Now create vectors of Longitude and Latitude starting locations on a regular mesh, and use these to initialise a ParticleSet object. Step3: Now we want to advect the particles. However, the Globcurrent data that we loaded in is only for a limited, regional domain and particles might be able to leave this domain. We therefore need to tell Parcels that particles that leave the domain need to be deleted. We do that using a Recovery Kernel, which will be invoked when a particle encounters an ErrorOutOfBounds error Step4: Now we can advect the particles. Note that we do this inside a for-loop, so we can save a plot every six hours (which is the value of runtime). See the plotting tutorial for more information on the pset.show() method.
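The code below saves one PNG per frame via pset.show(savefile=...); to actually produce the animated gif promised in the title, you could stitch those frames together afterwards, for example with imageio. This post-processing step is my addition and assumes the frames were written as particles00.png, particles01.png, and so on.

```python
import imageio

# Collect the frames written by pset.show(savefile='particles00'), etc.
frames = [imageio.imread('particles{:02d}.png'.format(i)) for i in range(3)]
imageio.mimsave('agulhas_particles.gif', frames, duration=0.5)   # roughly 0.5 s per frame
```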
Python Code: from parcels import FieldSet, ParticleSet, JITParticle, AdvectionRK4, ErrorCode from datetime import timedelta import numpy as np Explanation: Tutorial showing how to create Parcels in Agulhas animated gif This brief tutorial shows how to recreate the animated gif showing particles in the Agulhas region south of Africa. We start with importing the relevant modules End of explanation filenames = "GlobCurrent_example_data/20*.nc" variables = {'U': 'eastward_eulerian_current_velocity', 'V': 'northward_eulerian_current_velocity'} dimensions = {'lat': 'lat', 'lon': 'lon', 'time': 'time'} fieldset = FieldSet.from_netcdf(filenames, variables, dimensions) Explanation: Now load the Globcurrent fields from the GlobCurrent_example_data directory (note that unlike in the main Parcels tutorial we don't use a dictionary for the filenames here; as they are the same for all variables, we don't need to) End of explanation lons, lats = np.meshgrid(range(15, 35), range(-40, -30)) pset = ParticleSet(fieldset=fieldset, pclass=JITParticle, lon=lons, lat=lats) Explanation: Now create vectors of Longitude and Latitude starting locations on a regular mesh, and use these to initialise a ParticleSet object. End of explanation def DeleteParticle(particle, fieldset, time): particle.delete() Explanation: Now we want to advect the particles. However, the Globcurrent data that we loaded in is only for a limited, regional domain and particles might be able to leave this domain. We therefore need to tell Parcels that particles that leave the domain need to be deleted. We do that using a Recovery Kernel, which will be invoked when a particle encounters an ErrorOutOfBounds error: End of explanation for cnt in range(3): # First plot the particles pset.show(savefile='particles'+str(cnt).zfill(2), field='vector', land=True, vmax=2.0) # Then advect the particles for 6 hours pset.execute(AdvectionRK4, runtime=timedelta(hours=6), # runtime controls the interval of the plots dt=timedelta(minutes=5), recovery={ErrorCode.ErrorOutOfBounds: DeleteParticle}) # the recovery kernel Explanation: Now we can advect the particles. Note that we do this inside a for-loop, so we can save a plot every six hours (which is the value of runtime). See the plotting tutorial for more information on the pset.show() method. End of explanation
5,949
Given the following text description, write Python code to implement the functionality described below step by step Description: Inference with Discrete Latent Variables This tutorial describes Pyro's enumeration strategy for discrete latent variable models. This tutorial assumes the reader is already familiar with the Tensor Shapes Tutorial. Summary Pyro implements automatic enumeration over discrete latent variables. This strategy can be used alone or inside SVI (via TraceEnum_ELBO), HMC, or NUTS. The standalone infer_discrete can generate samples or MAP estimates. Annotate a sample site infer={"enumerate" Step1: Overview <a class="anchor" id="Overview"></a> Pyro's enumeration strategy (Obermeyer et al. 2019) encompasses popular algorithms including variable elimination, exact message passing, forward-filter-backward-sample, inside-out, Baum-Welch, and many other special-case algorithms. Aside from enumeration, Pyro implements a number of inference strategies including variational inference (SVI) and monte carlo (HMC and NUTS). Enumeration can be used either as a stand-alone strategy via infer_discrete, or as a component of other strategies. Thus enumeration allows Pyro to marginalize out discrete latent variables in HMC and SVI models, and to use variational enumeration of discrete variables in SVI guides. Mechanics of enumeration <a class="anchor" id="Mechanics-of-enumeration"></a> The core idea of enumeration is to interpret discrete pyro.sample statements as full enumeration rather than random sampling. Other inference algorithms can then sum out the enumerated values. For example a sample statement might return a tensor of scalar shape under the standard "sample" interpretation (we'll illustrate with trivial model and guide) Step2: However under the enumeration interpretation, the same sample site will return a fully enumerated set of values, based on its distribution's .enumerate_support() method. Step3: Note that we've used "parallel" enumeration to enumerate along a new tensor dimension. This is cheap and allows Pyro to parallelize computation, but requires downstream program structure to avoid branching on the value of z. To support dynamic program structure, you can instead use "sequential" enumeration, which runs the entire model,guide pair once per sample value, but requires running the model multiple times. Step4: Parallel enumeration is cheaper but more complex than sequential enumeration, so we'll focus the rest of this tutorial on the parallel variant. Note that both forms can be interleaved. Multiple latent variables <a class="anchor" id="Multiple-latent-variables"></a> We just saw that a single discrete sample site can be enumerated via nonstandard interpretation. A model with a single discrete latent variable is a mixture model. Models with multiple discrete latent variables can be more complex, including HMMs, CRFs, DBNs, and other structured models. In models with multiple discrete latent variables, Pyro enumerates each variable in a different tensor dimension (counting from the right; see Tensor Shapes Tutorial). This allows Pyro to determine the dependency graph among variables and then perform cheap exact inference using variable elimination algorithms. To understand enumeration dimension allocation, consider the following model, where here we collapse variables out of the model, rather than enumerate them in the guide. 
Step5: Examining discrete latent states <a class="anchor" id="Examining-discrete-latent-states"></a> While enumeration in SVI allows fast learning of parameters like p above, it does not give access to predicted values of the discrete latent variables like x,y,z above. We can access these using a standalone infer_discrete handler. In this case the guide was trivial, so we can simply wrap the model in infer_discrete. We need to pass a first_available_dim argument to tell infer_discrete which dimensions are available for enumeration; this is related to the max_plate_nesting arg of TraceEnum_ELBO via first_available_dim = -1 - max_plate_nesting Step6: Notice that under the hood infer_discrete runs the model twice Step7: When enumering within a plate (as described in the next section) Vindex can also be used together with capturing the plate index via with pyro.plate(...) as i to index into batch dimensions. Here's an example with nontrivial event dimensions due to the Dirichlet distribution. Step8: Plates and enumeration <a class="anchor" id="Plates-and-enumeration"></a> Pyro plates express conditional independence among random variables. Pyro's enumeration strategy can take advantage of plates to reduce the high cost (exponential in the size of the plate) of enumerating a cartesian product down to a low cost (linear in the size of the plate) of enumerating conditionally independent random variables in lock-step. This is especially important for e.g. minibatched data. To illustrate, consider a gaussian mixture model with shared variance and different mean. Step9: Observe that during inference the model is run twice, first by the AutoNormal to trace sample sites, and second by elbo to compute loss. In the first run, x has the standard interpretation of one sample per datum, hence shape (10,). In the second run enumeration can use the same three values (3,1) for all data points, and relies on broadcasting for any dependent sample or observe sites that depend on data. For example, in the pyro.sample("obs",...) statement, the distribution has shape (3,1), the data has shape(10,), and the broadcasted log probability tensor has shape (3,10). For a more in-depth treatment of enumeration in mixture models, see the Gaussian Mixture Model Tutorial and the HMM Example. Dependencies among plates <a class="anchor" id="Dependencies-among-plates"></a> The computational savings of enumerating in vectorized plates comes with restrictions on the dependency structure of models (as described in (Obermeyer et al. 2019)). These restrictions are in addition to the usual restrictions of conditional independence. The enumeration restrictions are checked by TraceEnum_ELBO and will result in an error if violated (however the usual conditional independence restriction cannot be generally verified by Pyro). For completeness we list all three restrictions Step10: We can learn the global parameters using SVI with an autoguide. Step11: Notice that the model was run twice here
Python Code: import os import torch import pyro import pyro.distributions as dist from torch.distributions import constraints from pyro import poutine from pyro.infer import SVI, Trace_ELBO, TraceEnum_ELBO, config_enumerate, infer_discrete from pyro.infer.autoguide import AutoNormal from pyro.ops.indexing import Vindex smoke_test = ('CI' in os.environ) assert pyro.__version__.startswith('1.7.0') pyro.set_rng_seed(0) Explanation: Inference with Discrete Latent Variables This tutorial describes Pyro's enumeration strategy for discrete latent variable models. This tutorial assumes the reader is already familiar with the Tensor Shapes Tutorial. Summary Pyro implements automatic enumeration over discrete latent variables. This strategy can be used alone or inside SVI (via TraceEnum_ELBO), HMC, or NUTS. The standalone infer_discrete can generate samples or MAP estimates. Annotate a sample site infer={"enumerate": "parallel"} to trigger enumeration. If a sample site determines downstream structure, instead use {"enumerate": "sequential"}. Write your models to allow arbitrarily deep batching on the left, e.g. use broadcasting. Inference cost is exponential in treewidth, so try to write models with narrow treewidth. If you have trouble, ask for help on forum.pyro.ai! Table of contents Overview Mechanics of enumeration Multiple latent variables Examining discrete latent states Indexing with enumerated variables Plates and enumeration Dependencies among plates Time series example How to enumerate more than 25 variables End of explanation def model(): z = pyro.sample("z", dist.Categorical(torch.ones(5))) print(f"model z = {z}") def guide(): z = pyro.sample("z", dist.Categorical(torch.ones(5))) print(f"guide z = {z}") elbo = Trace_ELBO() elbo.loss(model, guide); Explanation: Overview <a class="anchor" id="Overview"></a> Pyro's enumeration strategy (Obermeyer et al. 2019) encompasses popular algorithms including variable elimination, exact message passing, forward-filter-backward-sample, inside-out, Baum-Welch, and many other special-case algorithms. Aside from enumeration, Pyro implements a number of inference strategies including variational inference (SVI) and monte carlo (HMC and NUTS). Enumeration can be used either as a stand-alone strategy via infer_discrete, or as a component of other strategies. Thus enumeration allows Pyro to marginalize out discrete latent variables in HMC and SVI models, and to use variational enumeration of discrete variables in SVI guides. Mechanics of enumeration <a class="anchor" id="Mechanics-of-enumeration"></a> The core idea of enumeration is to interpret discrete pyro.sample statements as full enumeration rather than random sampling. Other inference algorithms can then sum out the enumerated values. For example a sample statement might return a tensor of scalar shape under the standard "sample" interpretation (we'll illustrate with trivial model and guide): End of explanation elbo = TraceEnum_ELBO(max_plate_nesting=0) elbo.loss(model, config_enumerate(guide, "parallel")); Explanation: However under the enumeration interpretation, the same sample site will return a fully enumerated set of values, based on its distribution's .enumerate_support() method. End of explanation elbo = TraceEnum_ELBO(max_plate_nesting=0) elbo.loss(model, config_enumerate(guide, "sequential")); Explanation: Note that we've used "parallel" enumeration to enumerate along a new tensor dimension. 
This is cheap and allows Pyro to parallelize computation, but requires downstream program structure to avoid branching on the value of z. To support dynamic program structure, you can instead use "sequential" enumeration, which runs the entire model,guide pair once per sample value, but requires running the model multiple times. End of explanation @config_enumerate def model(): p = pyro.param("p", torch.randn(3, 3).exp(), constraint=constraints.simplex) x = pyro.sample("x", dist.Categorical(p[0])) y = pyro.sample("y", dist.Categorical(p[x])) z = pyro.sample("z", dist.Categorical(p[y])) print(f" model x.shape = {x.shape}") print(f" model y.shape = {y.shape}") print(f" model z.shape = {z.shape}") return x, y, z def guide(): pass pyro.clear_param_store() print("Sampling:") model() print("Enumerated Inference:") elbo = TraceEnum_ELBO(max_plate_nesting=0) elbo.loss(model, guide); Explanation: Parallel enumeration is cheaper but more complex than sequential enumeration, so we'll focus the rest of this tutorial on the parallel variant. Note that both forms can be interleaved. Multiple latent variables <a class="anchor" id="Multiple-latent-variables"></a> We just saw that a single discrete sample site can be enumerated via nonstandard interpretation. A model with a single discrete latent variable is a mixture model. Models with multiple discrete latent variables can be more complex, including HMMs, CRFs, DBNs, and other structured models. In models with multiple discrete latent variables, Pyro enumerates each variable in a different tensor dimension (counting from the right; see Tensor Shapes Tutorial). This allows Pyro to determine the dependency graph among variables and then perform cheap exact inference using variable elimination algorithms. To understand enumeration dimension allocation, consider the following model, where here we collapse variables out of the model, rather than enumerate them in the guide. End of explanation serving_model = infer_discrete(model, first_available_dim=-1) x, y, z = serving_model() # takes the same args as model(), here no args print(f"x = {x}") print(f"y = {y}") print(f"z = {z}") Explanation: Examining discrete latent states <a class="anchor" id="Examining-discrete-latent-states"></a> While enumeration in SVI allows fast learning of parameters like p above, it does not give access to predicted values of the discrete latent variables like x,y,z above. We can access these using a standalone infer_discrete handler. In this case the guide was trivial, so we can simply wrap the model in infer_discrete. 
We need to pass a first_available_dim argument to tell infer_discrete which dimensions are available for enumeration; this is related to the max_plate_nesting arg of TraceEnum_ELBO via first_available_dim = -1 - max_plate_nesting End of explanation @config_enumerate def model(): p = pyro.param("p", torch.randn(5, 4, 3, 2).exp(), constraint=constraints.simplex) x = pyro.sample("x", dist.Categorical(torch.ones(4))) y = pyro.sample("y", dist.Categorical(torch.ones(3))) with pyro.plate("z_plate", 5): p_xy = Vindex(p)[..., x, y, :] z = pyro.sample("z", dist.Categorical(p_xy)) print(f" p.shape = {p.shape}") print(f" x.shape = {x.shape}") print(f" y.shape = {y.shape}") print(f" p_xy.shape = {p_xy.shape}") print(f" z.shape = {z.shape}") return x, y, z def guide(): pass pyro.clear_param_store() print("Sampling:") model() print("Enumerated Inference:") elbo = TraceEnum_ELBO(max_plate_nesting=1) elbo.loss(model, guide); Explanation: Notice that under the hood infer_discrete runs the model twice: first in forward-filter mode where sites are enumerated, then in replay-backward-sample model where sites are sampled. infer_discrete can also perform MAP inference by passing temperature=0. Note that while infer_discrete produces correct posterior samples, it does not currently produce correct logprobs, and should not be used in other gradient-based inference algorthms. Indexing with enumerated variables It can be tricky to use advanced indexing to select an element of a tensor using one or more enumerated variables. This is especially true in Pyro models where your model's indexing operations need to work in multiple interpretations: both sampling from the model (to generate data) and during enumerated inference. For example, suppose a plated random variable z depends on two different random variables: py p = pyro.param("p", torch.randn(5, 4, 3, 2).exp(), constraint=constraints.simplex) x = pyro.sample("x", dist.Categorical(torch.ones(4))) y = pyro.sample("y", dist.Categorical(torch.ones(3))) with pyro.plate("z_plate", 5): p_xy = p[..., x, y, :] # Not compatible with enumeration! z = pyro.sample("z", dist.Categorical(p_xy) Due to advanced indexing semantics, the expression p[..., x, y, :] will work correctly without enumeration, but is incorrect when x or y is enumerated. Pyro provides a simple way to index correctly, but first let's see how to correctly index using PyTorch's advanced indexing without Pyro: ```py Compatible with enumeration, but not recommended: p_xy = p[torch.arange(5, device=p.device).reshape(5, 1), x.unsqueeze(-1), y.unsqueeze(-1), torch.arange(2, device=p.device)] Pyro provides a helper [Vindex()[]](http://docs.pyro.ai/en/dev/ops.html#pyro.ops.indexing.Vindex) to use enumeration-compatible advanced indexing semantics rather than standard PyTorch/NumPy semantics. (Note the `Vindex` name and semantics follow the Numpy Enhancement Proposal [NEP 21](https://numpy.org/neps/nep-0021-advanced-indexing.html)). `Vindex()[]` makes the `.__getitem__()` operator broadcast like other familiar operators `+`, `*` etc. Using `Vindex()[]` we can write the same expression as if `x` and `y` were numbers (i.e. 
not enumerated):py Recommended syntax compatible with enumeration: p_xy = Vindex(p)[..., x, y, :] ``` Here is a complete example: End of explanation @config_enumerate def model(): data_plate = pyro.plate("data_plate", 6, dim=-1) feature_plate = pyro.plate("feature_plate", 5, dim=-2) component_plate = pyro.plate("component_plate", 4, dim=-1) with feature_plate: with component_plate: p = pyro.sample("p", dist.Dirichlet(torch.ones(3))) with data_plate: c = pyro.sample("c", dist.Categorical(torch.ones(4))) with feature_plate as vdx: # Capture plate index. pc = Vindex(p)[vdx[..., None], c, :] # Reshape it and use in Vindex. x = pyro.sample("x", dist.Categorical(pc), obs=torch.zeros(5, 6, dtype=torch.long)) print(f" p.shape = {p.shape}") print(f" c.shape = {c.shape}") print(f" vdx.shape = {vdx.shape}") print(f" pc.shape = {pc.shape}") print(f" x.shape = {x.shape}") def guide(): feature_plate = pyro.plate("feature_plate", 5, dim=-2) component_plate = pyro.plate("component_plate", 4, dim=-1) with feature_plate, component_plate: pyro.sample("p", dist.Dirichlet(torch.ones(3))) pyro.clear_param_store() print("Sampling:") model() print("Enumerated Inference:") elbo = TraceEnum_ELBO(max_plate_nesting=2) elbo.loss(model, guide); Explanation: When enumering within a plate (as described in the next section) Vindex can also be used together with capturing the plate index via with pyro.plate(...) as i to index into batch dimensions. Here's an example with nontrivial event dimensions due to the Dirichlet distribution. End of explanation @config_enumerate def model(data, num_components=3): print(f" Running model with {len(data)} data points") p = pyro.sample("p", dist.Dirichlet(0.5 * torch.ones(num_components))) scale = pyro.sample("scale", dist.LogNormal(0, num_components)) with pyro.plate("components", num_components): loc = pyro.sample("loc", dist.Normal(0, 10)) with pyro.plate("data", len(data)): x = pyro.sample("x", dist.Categorical(p)) print(" x.shape = {}".format(x.shape)) pyro.sample("obs", dist.Normal(loc[x], scale), obs=data) print(" dist.Normal(loc[x], scale).batch_shape = {}".format( dist.Normal(loc[x], scale).batch_shape)) guide = AutoNormal(poutine.block(model, hide=["x", "data"])) data = torch.randn(10) pyro.clear_param_store() print("Sampling:") model(data) print("Enumerated Inference:") elbo = TraceEnum_ELBO(max_plate_nesting=1) elbo.loss(model, guide, data); Explanation: Plates and enumeration <a class="anchor" id="Plates-and-enumeration"></a> Pyro plates express conditional independence among random variables. Pyro's enumeration strategy can take advantage of plates to reduce the high cost (exponential in the size of the plate) of enumerating a cartesian product down to a low cost (linear in the size of the plate) of enumerating conditionally independent random variables in lock-step. This is especially important for e.g. minibatched data. To illustrate, consider a gaussian mixture model with shared variance and different mean. End of explanation data_dim = 4 num_steps = 10 data = dist.Categorical(torch.ones(num_steps, data_dim)).sample() def hmm_model(data, data_dim, hidden_dim=10): print(f"Running for {len(data)} time steps") # Sample global matrices wrt a Jeffreys prior. 
with pyro.plate("hidden_state", hidden_dim): transition = pyro.sample("transition", dist.Dirichlet(0.5 * torch.ones(hidden_dim))) emission = pyro.sample("emission", dist.Dirichlet(0.5 * torch.ones(data_dim))) x = 0 # initial state for t, y in enumerate(data): x = pyro.sample(f"x_{t}", dist.Categorical(transition[x]), infer={"enumerate": "parallel"}) pyro.sample(f" y_{t}", dist.Categorical(emission[x]), obs=y) print(f" x_{t}.shape = {x.shape}") Explanation: Observe that during inference the model is run twice, first by the AutoNormal to trace sample sites, and second by elbo to compute loss. In the first run, x has the standard interpretation of one sample per datum, hence shape (10,). In the second run enumeration can use the same three values (3,1) for all data points, and relies on broadcasting for any dependent sample or observe sites that depend on data. For example, in the pyro.sample("obs",...) statement, the distribution has shape (3,1), the data has shape(10,), and the broadcasted log probability tensor has shape (3,10). For a more in-depth treatment of enumeration in mixture models, see the Gaussian Mixture Model Tutorial and the HMM Example. Dependencies among plates <a class="anchor" id="Dependencies-among-plates"></a> The computational savings of enumerating in vectorized plates comes with restrictions on the dependency structure of models (as described in (Obermeyer et al. 2019)). These restrictions are in addition to the usual restrictions of conditional independence. The enumeration restrictions are checked by TraceEnum_ELBO and will result in an error if violated (however the usual conditional independence restriction cannot be generally verified by Pyro). For completeness we list all three restrictions: Restriction 1: conditional independence Variables within a plate may not depend on each other (along the plate dimension). This applies to any variable, whether or not it is enumerated. This applies to both sequential plates and vectorized plates. For example the following model is invalid: py def invalid_model(): x = 0 for i in pyro.plate("invalid", 10): x = pyro.sample(f"x_{i}", dist.Normal(x, 1.)) Restriction 2: no downstream coupling No variable outside of a vectorized plate can depend on an enumerated variable inside of that plate. This would violate Pyro's exponential speedup assumption. For example the following model is invalid: py @config_enumerate def invalid_model(data): with pyro.plate("plate", 10): # &lt;--- invalid vectorized plate x = pyro.sample("x", dist.Bernoulli(0.5)) assert x.shape == (10,) pyro.sample("obs", dist.Normal(x.sum(), 1.), data) To work around this restriction, you can convert the vectorized plate to a sequential plate: py @config_enumerate def valid_model(data): x = [] for i in pyro.plate("plate", 10): # &lt;--- valid sequential plate x.append(pyro.sample(f"x_{i}", dist.Bernoulli(0.5))) assert len(x) == 10 pyro.sample("obs", dist.Normal(sum(x), 1.), data) Restriction 3: single path leaving each plate The final restriction is subtle, but is required to enable Pyro's exponential speedup For any enumerated variable x, the set of all enumerated variables on which x depends must be linearly orderable in their vectorized plate nesting. This requirement only applies when there are at least two plates and at least three variables in different plate contexts. 
The simplest counterexample is a Boltzmann machine py @config_enumerate def invalid_model(data): plate_1 = pyro.plate("plate_1", 10, dim=-1) # vectorized plate_2 = pyro.plate("plate_2", 10, dim=-2) # vectorized with plate_1: x = pyro.sample("y", dist.Bernoulli(0.5)) with plate_2: y = pyro.sample("x", dist.Bernoulli(0.5)) with plate_1, plate2: z = pyro.sample("z", dist.Bernoulli((1. + x + y) / 4.)) ... Here we see that the variable z depends on variable x (which is in plate_1 but not plate_2) and depends on variable y (which is in plate_2 but not plate_1). This model is invalid because there is no way to linearly order x and y such that one's plate nesting is less than the other. To work around this restriction, you can convert one of the plates to a sequential plate: py @config_enumerate def valid_model(data): plate_1 = pyro.plate("plate_1", 10, dim=-1) # vectorized plate_2 = pyro.plate("plate_2", 10) # sequential with plate_1: x = pyro.sample("y", dist.Bernoulli(0.5)) for i in plate_2: y = pyro.sample(f"x_{i}", dist.Bernoulli(0.5)) with plate_1: z = pyro.sample(f"z_{i}", dist.Bernoulli((1. + x + y) / 4.)) ... but beware that this increases the computational complexity, which may be exponential in the size of the sequential plate. Time series example <a class="anchor" id="Time-series-example"></a> Consider a discrete HMM with latent states $x_t$ and observations $y_t$. Suppose we want to learn the transition and emission probabilities. End of explanation hmm_guide = AutoNormal(poutine.block(hmm_model, expose=["transition", "emission"])) pyro.clear_param_store() elbo = TraceEnum_ELBO(max_plate_nesting=1) elbo.loss(hmm_model, hmm_guide, data, data_dim=data_dim); Explanation: We can learn the global parameters using SVI with an autoguide. End of explanation def hmm_model(data, data_dim, hidden_dim=10): with pyro.plate("hidden_state", hidden_dim): transition = pyro.sample("transition", dist.Dirichlet(0.5 * torch.ones(hidden_dim))) emission = pyro.sample("emission", dist.Dirichlet(0.5 * torch.ones(data_dim))) x = 0 # initial state for t, y in pyro.markov(enumerate(data)): x = pyro.sample(f"x_{t}", dist.Categorical(transition[x]), infer={"enumerate": "parallel"}) pyro.sample(f"y_{t}", dist.Categorical(emission[x]), obs=y) print(f"x_{t}.shape = {x.shape}") # We'll reuse the same guide and elbo. elbo.loss(hmm_model, hmm_guide, data, data_dim=data_dim); Explanation: Notice that the model was run twice here: first it was run without enumeration by AutoNormal, so that the autoguide can record all sample sites; then second it is run by TraceEnum_ELBO with enumeration enabled. We see in the first run that samples have the standard interpretation, whereas in the second run samples have the enumeration interpretation. For more complex examples, including minibatching and multiple plates, see the HMM tutorial. How to enumerate more than 25 variables <a class="anchor" id="How-to-enumerate-more-than-25-variables"></a> PyTorch tensors have a dimension limit of 25 in CUDA and 64 in CPU. By default Pyro enumerates each sample site in a new dimension. If you need more sample sites, you can annotate your model with pyro.markov to tell Pyro when it is safe to recycle tensor dimensions. Let's see how that works with the HMM model from above. 
The only change we need is to annotate the for loop with pyro.markov, informing Pyro that the variables in each step of the loop depend only on variables outside of the loop and variables at this step and the previous step of the loop: diff - for t, y in enumerate(data): + for t, y in pyro.markov(enumerate(data)): End of explanation
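To tie the pieces of this tutorial together, here is a minimal end-to-end sketch (a toy example of my own, not one of the models above): an enumerated mixture model trained for a few SVI steps with TraceEnum_ELBO and an AutoNormal guide that hides the discrete site. It assumes the same imports used at the top of the tutorial.
from pyro.optim import Adam

pyro.clear_param_store()

@config_enumerate
def toy_mixture(data):
    p = pyro.sample("p", dist.Dirichlet(torch.ones(3)))
    locs = pyro.sample("locs", dist.Normal(0., 10.).expand([3]).to_event(1))
    with pyro.plate("data", len(data)):
        z = pyro.sample("z", dist.Categorical(p))            # discrete site, enumerated in parallel
        pyro.sample("obs", dist.Normal(locs[z], 1.), obs=data)

toy_data = torch.randn(20)
toy_guide = AutoNormal(poutine.block(toy_mixture, hide=["z"]))  # guide only covers p and locs
svi = SVI(toy_mixture, toy_guide, Adam({"lr": 0.05}), TraceEnum_ELBO(max_plate_nesting=1))
for step in range(10):
    svi.step(toy_data)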
5,950
Given the following text description, write Python code to implement the functionality described below step by step Description: Training a CNN model with the CIFAR-10 dataset in ML Engine The trainer package source is inside the cifar10 directory. It was based from Tensorflow's CNN tutorial and one of the Datalab image classification example. Enable the ML Engine API We need to enable the ML Engine API since it isn't by default. Head back to the web console. Search for "API Manager" using the bar on the top middle of the page. Select Library from the sidebar. Search for "ML Engine" and select Google Cloud Machine Learning Engine. Click ENABLE Build the trainer package Step1: Submit the training job to ML Engine Step2: It will take a few minutes for ML Engine to provision a training instance for our job. While that's happening, let's talk about pricing! TensorBoard
Python Code: %%bash cd cifar10 # Clean old builds rm -rf build dist # Build wheel distribution python setup.py bdist_wheel --universal # Check the built package ls -al dist Explanation: Training a CNN model with the CIFAR-10 dataset in ML Engine The trainer package source is inside the cifar10 directory. It was based from Tensorflow's CNN tutorial and one of the Datalab image classification example. Enable the ML Engine API We need to enable the ML Engine API since it isn't by default. Head back to the web console. Search for "API Manager" using the bar on the top middle of the page. Select Library from the sidebar. Search for "ML Engine" and select Google Cloud Machine Learning Engine. Click ENABLE Build the trainer package End of explanation %%bash cd cifar10 # Set some variables JOB_NAME=cifar10_train_$(date +%s) BUCKET_NAME=dost_deeplearning_cifar10 # Change this to your own! TRAINING_PACKAGE_PATH=dist/trainer-0.0.0-py2.py3-none-any.whl # Submit the job through the gcloud tool gcloud ml-engine jobs submit training \ $JOB_NAME \ --region us-east1 \ --job-dir gs://$BUCKET_NAME/$JOB_NAME \ --packages $TRAINING_PACKAGE_PATH \ --module-name trainer.task \ --config config.yaml Explanation: Submit the training job to ML Engine End of explanation import os.path from google.datalab.ml import TensorBoard bucket_path = 'gs://dost_deeplearning_cifar10' # Change this to your own bucket job_name = 'cifar10_train_1499874404' # Change this to your own job name train_dir = os.path.join(bucket_path, job_name, 'train') TensorBoard.start(train_dir) Explanation: It will take a few minutes for ML Engine to provision a training instance for our job. While that's happening, let's talk about pricing! TensorBoard End of explanation
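While the job spins up, it can also be monitored from the notebook. A hedged sketch: the command group mirrors the gcloud ml-engine calls used above, but exact subcommands and flags depend on your gcloud version, and the job name below is a placeholder for the one echoed by the submit step.
%%bash
JOB_NAME=cifar10_train_1499874404   # placeholder; use the job name printed when you submitted
gcloud ml-engine jobs describe $JOB_NAME
gcloud ml-engine jobs stream-logs $JOB_NAME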
5,951
Given the following text description, write Python code to implement the functionality described below step by step Description: Extracting information about sequence quality and enrichment Enrichment Often want to compare two datasets (tissue 1 vs. tissue 2; -drug vs. +drug; etc.) Done by taking ratio of counts for sequences between data sets $$f_{seq} = \frac{C_{seq}}{C_{total}}$$ The normalized frequency of a sequence $f_{seq}$ is determined by the number of counts of that sequence relative to all counts in the data set. The enrichment of the sequence in dataset 2 vs. dataset 1 is given by Step1: Can create dictionary that converts letters to quality scores Recall that \begin{equation} p = 10^{- \frac{Q}{10}} \qquad \qquad \text{$(2)$} \end{equation} Step2: Example $$p_{correct} = \prod_{i=1}^{L} (1-p_{incorrect})$$ where $i$ indexes along sequence and $L$ is sequence length. Step3: Modify this code to pull out the $p_{correct}$ each the sequence
Python Code: print(chr(33)) print(chr(34)) print(chr(35)) print("...") print(chr(74)) print(chr(75)) Explanation: Extracting information about sequence quality and enrichment Enrichment Often want to compare two datasets (tissue 1 vs. tissue 2; -drug vs. +drug; etc.) Done by taking ratio of counts for sequences between data sets $$f_{seq} = \frac{C_{seq}}{C_{total}}$$ The normalized frequency of a sequence $f_{seq}$ is determined by the number of counts of that sequence relative to all counts in the data set. The enrichment of the sequence in dataset 2 vs. dataset 1 is given by: $$E_{seq} = \frac{f_{seq,2}}{f_{seq,1}}$$ where $f_{seq,1}$ and $f_{seq,2}$ are the normalized frequencies of the sequence in dataset 1 and 2. How do we decide which sequences are high enough quality to include? <img align="center" style="margin: auto" src="https://image.slidesharecdn.com/30-140211095152-phpapp01/95/new-generation-sequencing-technologies-an-overview-13-638.jpg" /> <img align="center" style="margin: auto" src="http://ted.bti.cornell.edu/epigenome/image/Fig6.jpg" /> <img align="center" style="margin: auto" src="http://tucf-genomics.tufts.edu/images/faq02_pic03.jpg" /> For each cluster, you get a sequence of colors representing the sequence Some bases are read well, others are ambiguous. <img align="center" style="margin: auto" src="https://brcf.medicine.umich.edu/wp-content/uploads/2018/02/dna_no_noise_2018.gif" /> The "Phred" score measures confidence in the base "call": \begin{equation} Q = -10log_{10}(p) \qquad \qquad \text{$(1)$} \end{equation} where $p$ is the probability that the call is wrong. By rearranging Eq. $(1)$ above, we get: \begin{equation} p = 10^{- \frac{Q}{10}} \qquad \qquad \text{$(2)$} \end{equation} Create a plot of Q vs. p. Is a high "Q" good or bad? Phred scores are encoded in last line: @SRR001666.1 071112_SLXA-EAS1_s_7:5:1:817:345 length=60 GGGTGATGGCCGCTGCCGATGGCGTCAAATCCCACCAAGTTACCCTTAACAACTTAAGGG +SRR001666.1 071112_SLXA-EAS1_s_7:5:1:817:345 length=60 IIIIIIIIIIIIIIIIIIIIIIIIIIIIII9IG9ICIIIIIIIIIIIIIIIIIIIIDIII Encoding goes like |Letter | ASCII | $Q$ | $p$ | |:-----:|:-----:|:---:| -------:| | ! | 33 | 0 | 1.00000 | | " | 34 | 1 | 0.79433 | | # | 35 | 2 | 0.63096 | | ... | ... | ... | ... | | J | 74 | 41 | 0.00008 | | K | 75 | 42 | 0.00006 | python chr command converts integer ASCII to character End of explanation Q_dict = {} p_dict = {} for i in range(33,76): Q_dict[chr(i)] = i-33 p_dict[chr(i)] = 10**(-(Q_dict[chr(i)])/10.) p_dict["K"] Explanation: Can create dictionary that converts letters to quality scores Recall that \begin{equation} p = 10^{- \frac{Q}{10}} \qquad \qquad \text{$(2)$} \end{equation} End of explanation qual_string = "IIIIIIIIIIIIIIIIIIIIIIIIIIIIII9IG9ICIIIIIIIIIIIIIIIIIIIIDIII" p_correct = 1.0 for q in qual_string: p_correct = p_correct*(1-p_dict[q]) print(p_correct) Explanation: Example $$p_{correct} = \prod_{i=1}^{L} (1-p_{incorrect})$$ where $i$ indexes along sequence and $L$ is sequence length. End of explanation import gzip get_line = False seqs = {} with gzip.open("files/example.fastq.gz") as f: for l in f: l_ascii = l.decode("ascii") if l_ascii[0] == "@": get_line = True continue if get_line: try: seqs[l_ascii.strip()] += 1 except KeyError: seqs[l_ascii.strip()] = 1 get_line = False Explanation: Modify this code to pull out the $p_{correct}$ each the sequence End of explanation
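The code above builds a per-sequence count dictionary (seqs) but stops short of the enrichment calculation defined at the top. A minimal sketch of that last step, assuming two such count dictionaries have been built, one per dataset (the names below are illustrative):
def enrichment(counts_1, counts_2):
    total_1 = sum(counts_1.values())
    total_2 = sum(counts_2.values())
    E = {}
    for seq in set(counts_1) & set(counts_2):
        f_1 = counts_1[seq]/total_1   # normalized frequency in dataset 1
        f_2 = counts_2[seq]/total_2   # normalized frequency in dataset 2
        E[seq] = f_2/f_1              # E_seq = f_seq,2 / f_seq,1
    return E

# Example usage, with `seqs` from dataset 1 and a second dictionary built the same way:
# E = enrichment(seqs, seqs_dataset2)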
5,952
Given the following text description, write Python code to implement the functionality described below step by step Description: Construct structures defining the DSLWP-B telemetry. Step1: Load frames from CSV file. proxy_time is set by the client when sending the frame (using groundstation PC clock). server_time is set by the server when the frame is received (using the server clock). Step2: To choose duplicated frames (some of them have errors), we assign points to each of the groundstations according as to how many frames they have received. We choose the duplicate frame instance from the station with more points. Step3: DSLWP-B frames are TM data link frames. We classify them according to Spacecraft ID and virtual channel. Step4: Spacecraft ID's 147 and 403 are used by DSLWP-B0 (435.4MHz) and DSLWP-B1 (436.4MHz). 146 and 402 are used by DSLWP-A0 and -A1. Virtual channels 0 and 2 is used for KISS streams. Virtual channel 1 is used for SSDV. The rest of the combinations are most likely erroneous frames. Below we show the number of frames received according to Spacecraft ID and virtual channel. Step5: We perform KISS stream recovery on each of the virtual channels. To discard some invalid packets, we check that the packet length in the Space Packet header matches the length of the packet. Step6: Helper function to extract a telemetry channel with its timestamps. See usage examples below.
Python Code: TMPrimaryHeader = BitStruct('transfer_frame_version_number' / BitsInteger(2), 'spacecraft_id' / BitsInteger(10), 'virtual_channel_id' / BitsInteger(3), 'ocf_flag' / Flag, 'master_channel_frame_count' / BitsInteger(8), 'virtual_channel_frame_count' / BitsInteger(8), 'first_header_pointer' / BitsInteger(8)) SpacePacketPrimaryHeader = BitStruct('ccsds_version' / BitsInteger(3), 'packet_type' / BitsInteger(1), 'secondary_header_flag' / Flag, 'AP_ID' / BitsInteger(11), 'sequence_flags' / BitsInteger(2), 'packet_sequence_count_or_name' / BitsInteger(14), 'data_length' / BitsInteger(16)) class AffineAdapter(Adapter): def __init__(self, c, a, *args, **kwargs): self.c = c self.a = a return Adapter.__init__(self, *args, **kwargs) def _encode(self, obj, context, path = None): return int(round(obj * self.c + self.a)) def _decode(self, obj, context, path = None): return (float(obj) - self.a)/ self.c class LinearAdapter(AffineAdapter): def __init__(self, c, *args, **kwargs): return AffineAdapter.__init__(self, c, 0, *args, **kwargs) Current = LinearAdapter(1/3.2, Int8ub) Voltage = LinearAdapter(1/0.16, Int8ub) class RSSIAdapter(Adapter): def _encode(self, obj, context, path = None): obj.rssi_asm = int(round((obj.rssi_asm + 174 - obj.gain_agc)*10)) obj.rssi_channel = int(round((obj.rssi_channel + 174 - obj.gain_agc)*10)) obj.rssi_7021 = int(round((obj.rssi_channel + 174 - obj.gain_agc)*2)) return obj def _decode(self, obj, context, path = None): obj.rssi_asm = -174 + obj.rssi_asm/10 + obj.gain_agc obj.rssi_channel = -174 + obj.rssi_channel/10 + obj.gain_agc obj.rssi_7021 = -174 + obj.rssi_7021 * 0.5 + obj.gain_agc return obj HKUV = RSSIAdapter(Struct( 'config' / Int8ub, 'flag_rx' / Int8ub, 'tx_gain' / Int8ub, 'tx_modulation' / Int8ub, 'flag_tx' / Int8ub, 'flag_7021' / Int8ub, 'n_cmd_buf' / Int8ub, 'n_cmd_dropped' / Int8ub, 'i_bus_rx' / Current, 'u_bus_rx' / Voltage, 'i_bus_tx' / Current, 'u_bus_tx' / Voltage, 't_pa' / Int8sb, 't_tx7021' / Int8sb, 'n_jt4_tx' / Int8ub, 'n_ham_rx' / Int8ub, 'n_422_tx' / Int8ub, 'n_422_rx' / Int8ub, 'n_422_rx_pkg_err' / Int8ub, 'n_422_rx_exe_err' / Int8ub, 'cmd_422_last_rx' / Int8ub, 'n_rf_tx' / Int8ub, 'n_rf_tx_dropped' / Int8ub, 'n_rf_rx' / Int8ub, 'n_rf_rx_pkg_err' / Int8ub, 'n_rf_rx_exe_err' / Int8ub, 'n_rf_rx_fec_err' / Int8ub, 'cmd_rf_last_rx' / Int8ub, 'rsvd0' / Int8ub, 'rsvd1' / Int8ub, 'byte_corr' / Int8sb, 'n_cmd' / Int8ub, 'fc_asm' / LinearAdapter(32768/3.1416, Int16sb), 'snr_asm' / LinearAdapter(256, Int16ub), 'rssi_asm' / Int16ub, 'rssi_channel' / Int16ub, 'rssi_7021' / Int8ub, 'gain_agc' / Mapping(Int8ub, {43.0: 0, 33.0: 1, 26.0: 2, 29.0: 4, 19.0: 5, 12.0: 6, 17.0: 8, 7.0: 9, 0.0: 10}), 'rsvd15' / Int16sb, 'seconds_since_epoch' / Int32ub, 'cam_mode' / Int8ub, 'cam_task_flag' / Int8ub, 'cam_err_flag' / Int8ub, 'cam_pic_len' / Int24ub, 'cam_memory_id' / Int8ub, 'jt4_task_flag' / Int8ub, 'n_reset' / Int8ub, 'flag_reset' / Int8ub, 'flag_sys' / Int8ub, 'n_dma_overflow' / Int8ub, 'runtime' / LinearAdapter(1/0.004, Int32ub), 'message' / Bytes(8) )) StQ = LinearAdapter(2147483647, Int32sb) FW = LinearAdapter(2, Int16sb) Gyro = LinearAdapter(2147483647.0/400.0, Int32sb) class QuadraticAdapter(Adapter): def _encode(self, obj, context, path = None): return np.sign(obj) * np.sqrt(np.abs(obj)) def _decode(self, obj, context, path = None): return obj * np.abs(obj) class WODTempAdapter(Adapter): def _encode(self, obj, context, path = None): raise Exception('Not implemented') def _decode(self, obj, context, path = None): return 
1222415/(298.15*np.log(0.0244*obj/(25-0.0122*obj))+4100)-273.1 WODTemp = WODTempAdapter(Int16sb) class WODTempThrustAdapter(Adapter): def _encode(self, obj, context, path = None): raise Exception('Not implemented') def _decode(self, obj, context, path = None): return -292525.18393*2/(-5289.94338+np.sqrt(5289.94338*5289.94338+4*292525.18393*(-4.77701-np.log(24.4*obj/(5-0.00244*obj)))))-273.15 WODTempThrust = WODTempThrustAdapter(Int16sb) HKWOD = Struct( 'seconds_since_epoch' / Int32ub, 'n_cmd_exe' / Int8ub, 'n_cmd_delay' / Int8ub, 'this_wdt_timeout_count' / Int8ub, 'that_wdt_timeout_count' / Int8ub, 'sta_reset_count' / Int8ub, 'stb_reset_count' / Int8ub, 'ss_reset_count' / Int8ub, 'is_reset_count' / Int8ub, 'pl_task_err_flag' / Int8ub, 'hsd_task_err_flag' / Int8ub, 'tc_wdt_timeout_period' / LinearAdapter(12.0, Int8ub), 'v_bus' / AffineAdapter(1/(0.00244*6.3894), 0.005/(0.00244*6.3894), Int16sb), 'v_battery' / AffineAdapter(1/(0.00244*6.3617), -0.0318/(0.00244*6.3617), Int16sb), 'i_solar_panel' / AffineAdapter(1/(0.00244*0.7171), -0.0768/(0.00244*0.7171), Int16sb), 'i_load' / AffineAdapter(1/(0.00244*1.1442), 0.5254/(0.00244*1.1442), Int16sb), 'i_bus' / AffineAdapter(1/(0.00244*0.8814), 9.4347/(0.00244*0.8814), Int16sb), 'sw_flag' / Int8ub[4], 'sta_q' / StQ[4], 'sta_flag' / Int8ub, 'stb_q' / StQ[4], 'stb_flag' / Int8ub, 'stc_q' / StQ[4], 'stc_flag' / Int8ub, 'ss_x' / Int32ub, 'ss_y' / Int32ub, 'ss_flag' / Int8ub, 'fwx_rate' / FW, 'fwx_cmd' / FW, 'fwy_rate' / FW, 'fwy_cmd' / FW, 'fwz_rate' / FW, 'fwz_cmd' / FW, 'fws_rate' / FW, 'fws_cmd' / FW, 'gyro' / Gyro[3], 'tank_pressure' / AffineAdapter(1/(0.00244*0.6528), 0.0330/(0.00244*0.6528), Int16sb), 'aocs_period' / Int8ub, 'error_q' / QuadraticAdapter(LinearAdapter(32767, Int16sb))[3], 'error_w' / LinearAdapter(3.1415926/180, QuadraticAdapter(LinearAdapter(32767, Int16sb)))[3], 'usb_agc' / LinearAdapter(256.0/5.0, Int8ub), 'usb_rf_power' / LinearAdapter(256.0/5.0, Int8ub), 'usb_temp2' / LinearAdapter(256.0/5.0, Int8ub), 'usb_flag1' / Int8ub, 'usb_flag2' / Int8ub, 'usb_n_cmd' / Int8ub, 'usb_n_direct_cmd' / Int8ub, 'usb_n_inject_cmd' / Int8ub, 'usb_n_inject_cmd_err' / Int8ub, 'usb_n_sync' / Int8ub, 't_pl' / WODTemp, 't_hsd' / WODTemp, 't_obc' / WODTemp, 't_stb' / WODTemp, 't_ss' / WODTemp, 't_battery' / WODTemp, 't_thrustor1a' / WODTempThrust, 't_thrustor5a' / WODTempThrust, 't_value1' / WODTemp, 't_value5' / WODTemp, 't_tube1' / WODTemp, 't_tank' / WODTemp, 'heater_flag' / Int8ub[5], 'uva_flag_rx' / Int8ub, 'uva_tx_gain' / Int8ub, 'uva_tx_modulation' / Int8ub, 'uva_flag_tx' / Int8ub, 'uva_fc_asm' / LinearAdapter(32768/3.1416, Int16sb), 'uva_snr_asm' / LinearAdapter(256, Int16ub), 'uva_rssi_asm' / AffineAdapter(10, 10*(174-12), Int16ub), 'uva_rssi_7021' / AffineAdapter(2, 2*(140-12), Int8ub), 'uvb_flag_rx' / Int8ub, 'uvb_tx_gain' / Int8ub, 'uvb_tx_modulation' / Int8ub, 'uvb_flag_tx' / Int8ub, 'uvb_fc_asm' / LinearAdapter(32768/3.1416, Int16sb), 'uvb_snr_asm' / LinearAdapter(256, Int16ub), 'uvb_rssi_asm' / AffineAdapter(10, 10*(174-12), Int16ub), 'uvb_rssi_7021' / AffineAdapter(2, 2*(140-12), Int8ub), ) CfgUV = Struct( 'dem_clk_divide' / Int8ub, 'tx_frequency_deviation' / Int8ub, 'tx_gain' / Int8ub, 'turbo_rate' / Int8ub, 'precoder_en' / Int8ub, 'preamble_len' / Int8ub, 'trailer_len' / Int8ub, 'rx_freq' / Int8ub, 'snr_threshold' / Float32b, 'gmsk_beacon_en' / Int8ub, 'jt4_beacon_en' / Int8ub, 'interval_beacon' / Int8ub, 'interval_vc0_timeout' / Int8ub, 'message_hk' / Bytes(8), 'callsign' / Bytes(5), 'open_camera_en' / Int8ub, 'repeater_en' / 
Int8ub, 'take_picture_at_power_on' / Int8ub, 'rx7021_r9' / Int32ub, 'crc' / Int32ub ) CfgCam = Struct( 'size' / Int8ub, 'brightness' / Int8ub, 'contrast' / Int8ub, 'sharpness' / Int8ub, 'exposure' / Int8ub, 'compressing' / Int8ub, 'colour' / Int8ub, 'config' / Int8ub, 'id' / Int8ub ) Packet = Struct( 'header' / SpacePacketPrimaryHeader, 'protocol' / Int8ub, 'payload' / Switch(lambda x: (x.header.AP_ID, x.protocol),\ {(0xE,0) : HKUV, (0xF,0) : HKUV, (0xE,1) : CfgCam, (0xF,1) : CfgCam,\ (0xAC,0) : HKWOD, (0xE,4) : CfgUV, (0xF,4) : CfgUV}) ) Explanation: Construct structures defining the DSLWP-B telemetry. End of explanation csv_frames = pd.read_csv('https://raw.githubusercontent.com/tammojan/dslwp-data/master/raw_frame.csv') correct_frames = csv_frames['remark'] != 'replay' csv_frames = csv_frames[correct_frames] station = [s for s in csv_frames['proxy_nickname']] proxy_time = np.array([np.datetime64(t) for t in csv_frames['proxy_receive_time']]) server_time = np.array([np.datetime64(t) for t in csv_frames['server_receive_time']]) frames = [bytes().fromhex(f) for f in csv_frames['raw_data']] Explanation: Load frames from CSV file. proxy_time is set by the client when sending the frame (using groundstation PC clock). server_time is set by the server when the frame is received (using the server clock). End of explanation stations = set(station) station_points = {s : station.count(s) for s in stations} Explanation: To choose duplicated frames (some of them have errors), we assign points to each of the groundstations according as to how many frames they have received. We choose the duplicate frame instance from the station with more points. End of explanation def get_channel(frame): h = TMPrimaryHeader.parse(frame) return (h.spacecraft_id, h.virtual_channel_id) channels = set([get_channel(f) for f in frames]) frames_by_channel = {chan : sorted([(t,f,s) for t,f,s in zip(server_time, frames, station) if get_channel(f) == chan], key = itemgetter(0)) for chan in channels} Explanation: DSLWP-B frames are TM data link frames. We classify them according to Spacecraft ID and virtual channel. 
End of explanation spacecrafts = {147 : 'DSWLP-B0 435.400MHz', 403 : 'DSLWP-B1 436.400MHz',\ 146 : 'DSLWP-A0 435.425MHz', 402 : 'DSLWP-A1 436.425MHz'} sorted([((spacecrafts[k[0]], k[1]),len(v)) for k,v in frames_by_channel.items()], key = itemgetter(1), reverse = True) def join_kiss_stream(frames): jumps = 0 repeated_distinct = 0 repeated_same = 0 continuation = 0 stream = list() last_frame = frames[0] frame_count = [TMPrimaryHeader.parse(f[1]).virtual_channel_frame_count for f in frames] for j in range(1,len(frames)): near_time = frames[j][0] - frames[j-1][0] < np.timedelta64(2*3600, 's') if frame_count[j] == frame_count[j-1] and near_time: # repeated frame if station_points[frames[j][2]] > station_points[last_frame[2]]: last_frame = frames[j] if frames[j][1] != frames[j-1][1]: repeated_distinct += 1 else: repeated_same += 1 elif frame_count[j] == (frame_count[j-1] + 1) % 256 and near_time: # continuation stream.append((last_frame[0], last_frame[1][TMPrimaryHeader.sizeof():])) last_frame = frames[j] continuation += 1 else: # broken KISS stream stream.append((last_frame[0], last_frame[1][TMPrimaryHeader.sizeof():])) last_frame = frames[j] jumps += 1 stream.append((last_frame[0], last_frame[1][TMPrimaryHeader.sizeof():])) print('jumps', jumps, 'repeated_distinct', repeated_distinct, 'repeated_same', repeated_same, 'continuation', continuation) return stream def parse_kiss(stream): frames = list() current = bytearray() escape = False for t,kiss in stream: for b in kiss: if b == 0xC0: if len(current): frames.append((t,bytes(current))) current = bytearray() elif b == 0xDB: escape = True elif escape and b == 0xDC: current.append(0xC0) escape = False elif escape and b == 0xDD: current.append(0xDB) escape = False else: current.append(b) escape = False return frames def filter_by_data_length(packets): return [p for p in packets if len(p[1]) >= SpacePacketPrimaryHeader.sizeof() and\ SpacePacketPrimaryHeader.parse(p[1]).data_length + 1 + SpacePacketPrimaryHeader.sizeof() == len(p[1])] def parse_packets_channel(channel): packets = parse_kiss(join_kiss_stream(frames_by_channel[channel])) parsed_packets = list() for p in filter_by_data_length(packets): try: parsed = Packet.parse(p[1]) except: pass else: parsed_packets.append((p[0], parsed)) return parsed_packets Explanation: Spacecraft ID's 147 and 403 are used by DSLWP-B0 (435.4MHz) and DSLWP-B1 (436.4MHz). 146 and 402 are used by DSLWP-A0 and -A1. Virtual channels 0 and 2 is used for KISS streams. Virtual channel 1 is used for SSDV. The rest of the combinations are most likely erroneous frames. Below we show the number of frames received according to Spacecraft ID and virtual channel. End of explanation tlm_channels = [(403,0), (403,2), (147,0), (147,2), (402,0)] # (146,0) only appears in replayed frames + [(146,0)] tlm_packets = {chan : parse_packets_channel(chan) for chan in tlm_channels} Explanation: We perform KISS stream recovery on each of the virtual channels. To discard some invalid packets, we check that the packet length in the Space Packet header matches the length of the packet. 
End of explanation def get_tlm_variable(chan, var): x = [(p[0], getattr(p[1].payload, var)) for p in tlm_packets[chan]\ if getattr(p[1].payload, var, None) is not None] return [a[0] for a in x], [a[1] for a in x] plt.figure(figsize = (14, 6), facecolor = 'w') for chan in tlm_channels: t, x = get_tlm_variable(chan, 'runtime') plt.plot(t, x, '.', label = f'{spacecrafts[chan[0]]} channel {chan[1]}') plt.ylim([-500,18000]) plt.legend() plt.title('HKUV frames by Spacecraft ID and Virtual Channel') plt.xlabel('UTC time') plt.ylabel('Payload runtime (s)'); plt.figure(figsize = (14, 6), facecolor = 'w') for chan in tlm_channels: t, x = get_tlm_variable(chan, 'tx_modulation') plt.plot(t, x, '.', label = f'{spacecrafts[chan[0]]} channel {chan[1]}') plt.legend() plt.title('HKUV TX modulation') plt.ylabel('tx_modulation') plt.xlabel('UTC time'); plt.figure(figsize = (14, 6), facecolor = 'w') for chan in tlm_channels: t, x = get_tlm_variable(chan, 'tx_modulation') plt.plot(t, x, '.', label = f'{spacecrafts[chan[0]]} channel {chan[1]}') plt.legend() plt.xlim([np.datetime64('2018-10-15'), np.datetime64('2018-11-05')]) plt.title('HKUV TX modulation') plt.ylabel('tx_modulation') plt.xlabel('UTC time'); plt.figure(figsize = (14, 6), facecolor = 'w') for chan in tlm_channels: t, x = get_tlm_variable(chan, 'uva_tx_modulation') plt.plot(t, x, '.', label = f'UVA {spacecrafts[chan[0]]} channel {chan[1]}', color = 'C0') t, x = get_tlm_variable(chan, 'uvb_tx_modulation') plt.plot(t, np.array(x) + 20, '.', label = f'UVB {spacecrafts[chan[0]]} channel {chan[1]}', color = 'C1') plt.legend(['UVA', 'UVB']) plt.xlim([np.datetime64('2018-10-15'), np.datetime64('2018-11-05')]) plt.title('HKWOD TX modulation') plt.ylabel('tx_modulation') plt.xlabel('UTC time'); plt.figure(figsize = (14, 6), facecolor = 'w') for chan in tlm_channels: t, x = get_tlm_variable(chan, 't_pa') plt.plot(t, x, '.', color = 'C0') t, x = get_tlm_variable(chan, 't_battery') plt.plot(t, x, '.', color = 'C1') plt.ylim([10,60]) plt.title('Temperature') plt.ylabel('Temperature (ºC)') plt.xlabel('UTC time') plt.legend(['HKUV PA temperature', 'HKWOD Battery temperature']); Explanation: Helper function to extract a telemetry channel with its timestamps. See usage examples below. End of explanation
5,953
Given the following text description, write Python code to implement the functionality described below step by step Description: Continuous Target Decoding with SPoC Source Power Comodulation (SPoC) Step1: Plot the contributions to the detected components (i.e., the forward model)
Python Code: # Author: Alexandre Barachant <[email protected]> # Jean-Remi King <[email protected]> # # License: BSD-3-Clause import matplotlib.pyplot as plt import mne from mne import Epochs from mne.decoding import SPoC from mne.datasets.fieldtrip_cmc import data_path from sklearn.pipeline import make_pipeline from sklearn.linear_model import Ridge from sklearn.model_selection import KFold, cross_val_predict # Define parameters fname = data_path() + '/SubjectCMC.ds' raw = mne.io.read_raw_ctf(fname) raw.crop(50., 200.) # crop for memory purposes # Filter muscular activity to only keep high frequencies emg = raw.copy().pick_channels(['EMGlft']).load_data() emg.filter(20., None) # Filter MEG data to focus on beta band raw.pick_types(meg=True, ref_meg=True, eeg=False, eog=False).load_data() raw.filter(15., 30.) # Build epochs as sliding windows over the continuous raw file events = mne.make_fixed_length_events(raw, id=1, duration=0.75) # Epoch length is 1.5 second meg_epochs = Epochs(raw, events, tmin=0., tmax=1.5, baseline=None, detrend=1, decim=12) emg_epochs = Epochs(emg, events, tmin=0., tmax=1.5, baseline=None) # Prepare classification X = meg_epochs.get_data() y = emg_epochs.get_data().var(axis=2)[:, 0] # target is EMG power # Classification pipeline with SPoC spatial filtering and Ridge Regression spoc = SPoC(n_components=2, log=True, reg='oas', rank='full') clf = make_pipeline(spoc, Ridge()) # Define a two fold cross-validation cv = KFold(n_splits=2, shuffle=False) # Run cross validaton y_preds = cross_val_predict(clf, X, y, cv=cv) # Plot the True EMG power and the EMG power predicted from MEG data fig, ax = plt.subplots(1, 1, figsize=[10, 4]) times = raw.times[meg_epochs.events[:, 0] - raw.first_samp] ax.plot(times, y_preds, color='b', label='Predicted EMG') ax.plot(times, y, color='r', label='True EMG') ax.set_xlabel('Time (s)') ax.set_ylabel('EMG Power') ax.set_title('SPoC MEG Predictions') plt.legend() mne.viz.tight_layout() plt.show() Explanation: Continuous Target Decoding with SPoC Source Power Comodulation (SPoC) :footcite:DahneEtAl2014 allows to identify the composition of orthogonal spatial filters that maximally correlate with a continuous target. SPoC can be seen as an extension of the CSP for continuous variables. Here, SPoC is applied to decode the (continuous) fluctuation of an electromyogram from MEG beta activity using data from Cortico-Muscular Coherence example of FieldTrip &lt;http://www.fieldtriptoolbox.org/tutorial/coherence&gt;_ End of explanation spoc.fit(X, y) spoc.plot_patterns(meg_epochs.info) Explanation: Plot the contributions to the detected components (i.e., the forward model) End of explanation
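A small addition, not part of the original example: besides plotting the predictions, the decoding can be scored numerically over the same two-fold split. This reuses clf, X, y and cv exactly as defined above.
from sklearn.model_selection import cross_val_score

r2 = cross_val_score(clf, X, y, cv=cv, scoring='r2')
print('R^2 per fold:', r2, '| mean:', r2.mean())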
5,954
Given the following text description, write Python code to implement the functionality described below step by step Description: Copyright 2019 Google LLC. Step1: Object pose alignment <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https Step2: Now that Tensorflow Graphics is installed, let's import everything needed to run the demos contained in this notebook. Step4: 1. Machine Learning Model definition Given the 3D position of all the vertices of a known mesh, we would like a network that is capable of predicting the rotation parametrized by a quaternion (4 dimensional vector), and translation (3 dimensional vector) of this mesh with respect to a reference pose. Let's now create a very simple 3-layer fully connected network, and a loss for the task. Note that this model is very simple and definitely not optimal, which is out of scope for this notebook. Step5: Data generation Now that we have a model defined, we need data to train it. For each sample in the training set, a random 3D rotation and 3D translation are sampled and applied to the vertices of our object. Each training sample consists of all the transformed vertices and the inverse rotation and translation that would allow to revert the rotation and translation applied to the sample. Step6: Training At this point, everything is in place to start training the neural network! Step7: Testing The network is now trained and ready to use! The displayed results consist of two images. The first image contains the object in 'rest pose' (pastel lemon color) and the rotated and translated object (pastel honeydew color). This effectively allows to observe how different the two configurations are. The second image also shows the object in rest pose, but this time the transformation predicted by our trained neural network is applied to the rotated and translated version. Hopefully, the two objects are now in a very similar pose. Note Step8: Define a threejs viewer for the transformed shape Step9: Define a random rotation and translation Step10: Run the model to predict the transformation parameters, and visualize the result Step11: 2. Mathematical optimization Here the problem is tackled using mathematical optimization, which is another traditional way to approach the problem of object pose estimation. Given correspondences between the object in 'rest pose' (pastel lemon color) and its rotated and translated counter part (pastel honeydew color), the problem can be formulated as a minimization problem. The loss function can for instance be defined as the sum of Euclidean distances between the corresponding points using the current estimate of the rotation and translation of the transformed object. One can then compute the derivative of the rotation and translation parameters with respect to this loss function, and follow the gradient direction until convergence. The following cell closely follows that procedure, and uses gradient descent to align the two objects. It is worth noting that although the results are good, there are more efficient ways to solve this specific problem. The interested reader is referred to the Kabsch algorithm for further details. Note Step12: Create the optimizer. Step13: Initialize the random transformation, run the optimization and animate the result.
Python Code: #@title Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. Explanation: Copyright 2019 Google LLC. End of explanation !pip install tensorflow_graphics Explanation: Object pose alignment <table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/6dof_alignment.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/6dof_alignment.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> </table> Precisely estimating the pose of objects is fundamental to many industries. For instance, in augmented and virtual reality, it allows users to modify the state of some variable by interacting with these objects (e.g. volume controlled by a mug on the user's desk). This notebook illustrates how to use Tensorflow Graphics to estimate the rotation and translation of known 3D objects. This capability is illustrated by two different demos: 1. Machine learning demo illustrating how to train a simple neural network capable of precisely estimating the rotation and translation of a given object with respect to a reference pose. 2. Mathematical optimization demo that takes a different approach to the problem; does not use machine learning. Note: The easiest way to use this tutorial is as a Colab notebook, which allows you to dive in with no setup. Setup & Imports If Tensorflow Graphics is not installed on your system, the following cell can install the Tensorflow Graphics package for you. End of explanation import time import matplotlib.pyplot as plt import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras import layers from tensorflow_graphics.geometry.transformation import quaternion from tensorflow_graphics.math import vector from tensorflow_graphics.notebooks import threejs_visualization from tensorflow_graphics.notebooks.resources import tfg_simplified_logo tf.compat.v1.enable_v2_behavior() # Loads the Tensorflow Graphics simplified logo. vertices = tfg_simplified_logo.mesh['vertices'].astype(np.float32) faces = tfg_simplified_logo.mesh['faces'] num_vertices = vertices.shape[0] Explanation: Now that Tensorflow Graphics is installed, let's import everything needed to run the demos contained in this notebook. End of explanation # Constructs the model. model = keras.Sequential() model.add(layers.Flatten(input_shape=(num_vertices, 3))) model.add(layers.Dense(64, activation=tf.nn.tanh)) model.add(layers.Dense(64, activation=tf.nn.relu)) model.add(layers.Dense(7)) def pose_estimation_loss(y_true, y_pred): Pose estimation loss used for training. This loss measures the average of squared distance between some vertices of the mesh in 'rest pose' and the transformed mesh to which the predicted inverse pose is applied. 
Comparing this loss with a regular L2 loss on the quaternion and translation values is left as exercise to the interested reader. Args: y_true: The ground-truth value. y_pred: The prediction we want to evaluate the loss for. Returns: A scalar value containing the loss described in the description above. # y_true.shape : (batch, 7) y_true_q, y_true_t = tf.split(y_true, (4, 3), axis=-1) # y_pred.shape : (batch, 7) y_pred_q, y_pred_t = tf.split(y_pred, (4, 3), axis=-1) # vertices.shape: (num_vertices, 3) # corners.shape:(num_vertices, 1, 3) corners = tf.expand_dims(vertices, axis=1) # transformed_corners.shape: (num_vertices, batch, 3) # q and t shapes get pre-pre-padded with 1's following standard broadcast rules. transformed_corners = quaternion.rotate(corners, y_pred_q) + y_pred_t # recovered_corners.shape: (num_vertices, batch, 3) recovered_corners = quaternion.rotate(transformed_corners - y_true_t, quaternion.inverse(y_true_q)) # vertex_error.shape: (num_vertices, batch) vertex_error = tf.reduce_sum((recovered_corners - corners)**2, axis=-1) return tf.reduce_mean(vertex_error) optimizer = keras.optimizers.Adam() model.compile(loss=pose_estimation_loss, optimizer=optimizer) model.summary() Explanation: 1. Machine Learning Model definition Given the 3D position of all the vertices of a known mesh, we would like a network that is capable of predicting the rotation parametrized by a quaternion (4 dimensional vector), and translation (3 dimensional vector) of this mesh with respect to a reference pose. Let's now create a very simple 3-layer fully connected network, and a loss for the task. Note that this model is very simple and definitely not optimal, which is out of scope for this notebook. End of explanation def generate_training_data(num_samples): # random_angles.shape: (num_samples, 3) random_angles = np.random.uniform(-np.pi, np.pi, (num_samples, 3)).astype(np.float32) # random_quaternion.shape: (num_samples, 4) random_quaternion = quaternion.from_euler(random_angles) # random_translation.shape: (num_samples, 3) random_translation = np.random.uniform(-2.0, 2.0, (num_samples, 3)).astype(np.float32) # data.shape : (num_samples, num_vertices, 3) data = quaternion.rotate(vertices[tf.newaxis, :, :], random_quaternion[:, tf.newaxis, :] ) + random_translation[:, tf.newaxis, :] # target.shape : (num_samples, 4+3) target = tf.concat((random_quaternion, random_translation), axis=-1) return np.array(data), np.array(target) num_samples = 10000 data, target = generate_training_data(num_samples) print(data.shape) # (num_samples, num_vertices, 3): the vertices print(target.shape) # (num_samples, 4+3): the quaternion and translation Explanation: Data generation Now that we have a model defined, we need data to train it. For each sample in the training set, a random 3D rotation and 3D translation are sampled and applied to the vertices of our object. Each training sample consists of all the transformed vertices and the inverse rotation and translation that would allow to revert the rotation and translation applied to the sample. End of explanation # Callback allowing to display the progression of the training task. class ProgressTracker(keras.callbacks.Callback): def __init__(self, num_epochs, step=5): self.num_epochs = num_epochs self.current_epoch = 0. self.step = step self.last_percentage_report = 0 def on_epoch_end(self, batch, logs={}): self.current_epoch += 1. 
training_percentage = int(self.current_epoch * 100.0 / self.num_epochs) if training_percentage - self.last_percentage_report >= self.step: print('Training ' + str( training_percentage) + '% complete. Training loss: ' + str( logs.get('loss')) + ' | Validation loss: ' + str( logs.get('val_loss'))) self.last_percentage_report = training_percentage reduce_lr_callback = keras.callbacks.ReduceLROnPlateau( monitor='val_loss', factor=0.5, patience=10, verbose=0, mode='auto', min_delta=0.0001, cooldown=0, min_lr=0) # google internal 1 # Everything is now in place to train. EPOCHS = 100 pt = ProgressTracker(EPOCHS) history = model.fit( data, target, epochs=EPOCHS, validation_split=0.2, verbose=0, batch_size=32, callbacks=[reduce_lr_callback, pt]) plt.plot(history.history['loss']) plt.plot(history.history['val_loss']) plt.ylim([0, 1]) plt.legend(['loss', 'val loss'], loc='upper left') plt.xlabel('Train epoch') _ = plt.ylabel('Error [mean square distance]') Explanation: Training At this point, everything is in place to start training the neural network! End of explanation def transform_points(target_points, quaternion_variable, translation_variable): return quaternion.rotate(target_points, quaternion_variable) + translation_variable Explanation: Testing The network is now trained and ready to use! The displayed results consist of two images. The first image contains the object in 'rest pose' (pastel lemon color) and the rotated and translated object (pastel honeydew color). This effectively allows to observe how different the two configurations are. The second image also shows the object in rest pose, but this time the transformation predicted by our trained neural network is applied to the rotated and translated version. Hopefully, the two objects are now in a very similar pose. Note: press play multiple times to sample different test cases. You will notice that sometimes the scale of the object is off. This comes from the fact that quaternions can encode scale. Using a quaternion of unit norm would result in not changing the scale of the result. We let the interested reader experiment with adding this constraint either in the network architecture, or in the loss function. Start with a helper function to apply a quaternion and a translation: End of explanation class Viewer(object): def __init__(self, my_vertices): my_vertices = np.asarray(my_vertices) context = threejs_visualization.build_context() light1 = context.THREE.PointLight.new_object(0x808080) light1.position.set(10., 10., 10.) 
light2 = context.THREE.AmbientLight.new_object(0x808080) lights = (light1, light2) material = context.THREE.MeshLambertMaterial.new_object({ 'color': 0xfffacd, }) material_deformed = context.THREE.MeshLambertMaterial.new_object({ 'color': 0xf0fff0, }) camera = threejs_visualization.build_perspective_camera( field_of_view=30, position=(10.0, 10.0, 10.0)) mesh = {'vertices': vertices, 'faces': faces, 'material': material} transformed_mesh = { 'vertices': my_vertices, 'faces': faces, 'material': material_deformed } geometries = threejs_visualization.triangular_mesh_renderer( [mesh, transformed_mesh], lights=lights, camera=camera, width=400, height=400) self.geometries = geometries def update(self, transformed_points): self.geometries[1].getAttribute('position').copyArray( transformed_points.numpy().ravel().tolist()) self.geometries[1].getAttribute('position').needsUpdate = True Explanation: Define a threejs viewer for the transformed shape: End of explanation def get_random_transform(): # Forms a random translation with tf.name_scope('translation_variable'): random_translation = tf.Variable( np.random.uniform(-2.0, 2.0, (3,)), dtype=tf.float32) # Forms a random quaternion hi = np.pi lo = -hi random_angles = np.random.uniform(lo, hi, (3,)).astype(np.float32) with tf.name_scope('rotation_variable'): random_quaternion = tf.Variable(quaternion.from_euler(random_angles)) return random_quaternion, random_translation Explanation: Define a random rotation and translation: End of explanation random_quaternion, random_translation = get_random_transform() initial_orientation = transform_points(vertices, random_quaternion, random_translation).numpy() viewer = Viewer(initial_orientation) predicted_transformation = model.predict(initial_orientation[tf.newaxis, :, :]) predicted_inverse_q = quaternion.inverse(predicted_transformation[0, 0:4]) predicted_inverse_t = -predicted_transformation[0, 4:] predicted_aligned = quaternion.rotate(initial_orientation + predicted_inverse_t, predicted_inverse_q) viewer = Viewer(predicted_aligned) Explanation: Run the model to predict the transformation parameters, and visualize the result: End of explanation def loss(target_points, quaternion_variable, translation_variable): transformed_points = transform_points(target_points, quaternion_variable, translation_variable) error = (vertices - transformed_points) / num_vertices return vector.dot(error, error) def gradient_loss(target_points, quaternion, translation): with tf.GradientTape() as tape: loss_value = loss(target_points, quaternion, translation) return tape.gradient(loss_value, [quaternion, translation]) Explanation: 2. Mathematical optimization Here the problem is tackled using mathematical optimization, which is another traditional way to approach the problem of object pose estimation. Given correspondences between the object in 'rest pose' (pastel lemon color) and its rotated and translated counter part (pastel honeydew color), the problem can be formulated as a minimization problem. The loss function can for instance be defined as the sum of Euclidean distances between the corresponding points using the current estimate of the rotation and translation of the transformed object. One can then compute the derivative of the rotation and translation parameters with respect to this loss function, and follow the gradient direction until convergence. The following cell closely follows that procedure, and uses gradient descent to align the two objects. 
It is worth noting that although the results are good, there are more efficient ways to solve this specific problem. The interested reader is referred to the Kabsch algorithm for further details. Note: press play multiple times to sample different test cases. Define the loss and gradient functions: End of explanation learning_rate = 0.05 with tf.name_scope('optimization'): optimizer = tf.compat.v1.train.AdamOptimizer(learning_rate) Explanation: Create the optimizer. End of explanation random_quaternion, random_translation = get_random_transform() transformed_points = transform_points(vertices, random_quaternion, random_translation) viewer = Viewer(transformed_points) nb_iterations = 100 for it in range(nb_iterations): gradients_loss = gradient_loss(vertices, random_quaternion, random_translation) optimizer.apply_gradients( zip(gradients_loss, (random_quaternion, random_translation))) transformed_points = transform_points(vertices, random_quaternion, random_translation) viewer.update(transformed_points) time.sleep(0.1) Explanation: Initialize the random transformation, run the optimization and animate the result. End of explanation
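The gradient-descent loop above reaches the alignment iteratively. Since the text points to the Kabsch algorithm as the more efficient route for this particular problem, here is a minimal NumPy sketch of that closed-form solution, shown only for illustration: it assumes exact one-to-one correspondences between the rest-pose points and the transformed points, and it returns a 3x3 rotation matrix rather than the quaternion parametrization used by the TensorFlow code above.

import numpy as np

def kabsch_align(p, q):
    # Find R, t minimizing sum_i || R @ p[i] + t - q[i] ||^2 for corresponding (N, 3) point sets.
    p_centroid = p.mean(axis=0)
    q_centroid = q.mean(axis=0)
    # Cross-covariance of the centered point sets.
    h = (p - p_centroid).T @ (q - q_centroid)
    u, _, vt = np.linalg.svd(h)
    # Guard against reflections so that det(R) == +1.
    d = np.sign(np.linalg.det(vt.T @ u.T))
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = q_centroid - r @ p_centroid
    return r, t

# Hypothetical usage with the arrays defined above (the .numpy() call assumes eager tensors):
# r, t = kabsch_align(vertices, transformed_points.numpy())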
5,955
Given the following text description, write Python code to implement the functionality described below step by step Description: FPLCPlot (FPLC Chromatogram plotting tool) Interactive Jupyter notebok interface to FPLCPlot. An interactive Jupyter notebook to plot chromatograms outputted from GE Life Sciences / Amersham Biosciences UNICORN 5.X software. To use this notebook, simply output an .XLS file from the UNICORN software, containing all curves / traces (i.e. UV absorbance, conductivity, Temperature, etc.). Save this file in the same directory as the .ipynb file for this notebook, and re-run the cells. Dependencies Python 2.7 or newer Matplotlib Seaborn NumPy Pandas ipywidgets Step1: Importing Excel files in current directory By default, all Excel files in the current directory are loaded into a Python list, which will be overlay plotted on the same figure. To exclude an Excel file, simply move the file temporarily to another directory. Step2: To plot your figure, simply run the below cell to generate a series of iPython widgets as interface to FPLCPlot. Any changes made to the parameters on the plot are shown after pressing the Run plotTraces button. (Note
Given the following text description, write Python code to implement the functionality described below step by step Description: FPLCPlot (FPLC Chromatogram plotting tool) Interactive Jupyter notebook interface to FPLCPlot. An interactive Jupyter notebook to plot chromatograms outputted from GE Life Sciences / Amersham Biosciences UNICORN 5.X software. To use this notebook, simply output an .XLS file from the UNICORN software, containing all curves / traces (i.e. UV absorbance, conductivity, Temperature, etc.). Save this file in the same directory as the .ipynb file for this notebook, and re-run the cells. Dependencies Python 2.7 or newer Matplotlib Seaborn NumPy Pandas ipywidgets Step1: Importing Excel files in current directory By default, all Excel files in the current directory are loaded into a Python list, which will be overlay plotted on the same figure. To exclude an Excel file, simply move the file temporarily to another directory. Step2: To plot your figure, simply run the below cell to generate a series of iPython widgets as an interface to FPLCPlot. Any changes made to the parameters on the plot are shown after pressing the Run plotTraces button. (Note
Python Code: %matplotlib inline from fplcplot.chromatogram import plotTraces Explanation: FPLCPlot (FPLC Chromatogram plotting tool) Interactive Jupyter notebok interface to FPLCPlot. An interactive Jupyter notebook to plot chromatograms outputted from GE Life Sciences / Amersham Biosciences UNICORN 5.X software. To use this notebook, simply output an .XLS file from the UNICORN software, containing all curves / traces (i.e. UV absorbance, conductivity, Temperature, etc.). Save this file in the same directory as the .ipynb file for this notebook, and re-run the cells. Dependencies Python 2.7 or newer Matplotlib Seaborn NumPy Pandas ipywidgets End of explanation file_list = !ls *A.xls file_list Explanation: Importing Excel files in current directory By default, all Excel files in the current directory are loaded into a Python list, which will be overlay plotted on the same figure. To exclude an Excel file, simply move the file temporarily to another directory. End of explanation from ipywidgets import interact, interactive, fixed import ipywidgets as widgets from IPython.display import display interact(plotTraces, file_list=fixed(file_list), title=widgets.Text("Protein A $E. coli$", description='Title:'), output=widgets.Checkbox(value=False, description="Save file?"), f_format=widgets.Dropdown(options=['.png', '.pdf'], description='File format:'), y_lower=widgets.IntSlider(min=-200,max=100,step=10,value=-20, description='Lower y-limit:'), y_upper=widgets.IntSlider(min=-10,max=4500,step=50,value=2000, description='Upper y-limit:'), second_trace=widgets.ToggleButtons(options=['None','buffer_b', 'buffer_b_abs', 'conductivity'], description='2nd trace:'), buffer_A=widgets.IntSlider(min=0,max=500,step=10,value=10, description='Buffer A (mM):'), buffer_B=widgets.IntSlider(min=0,max=3000,step=10,value=400, description='Buffer B (mM):'), __manual=True) Explanation: To plot your figure, simply run the below cell to generate a series of iPython widgets as interface to FPLCPlot. Any changes made to the parameters on the plot are shown after pressing the Run plotTraces button. (Note: Interactive widgets are not visible in online notebook viewers such as on GitHub. Launch this notebook locally to view the widgets.) Overview of widgets Name | Description -----|----- Title | Title for figure. Latex supported by enclosing in $$ Save file? | Check to save file in same directory. File name is according to first Excel file used in plotting figure. File format | File format for outputted file. .PNG or .PDF supported. Lower y-limit | Lower limit on y-axis for UV absorption trace Upper y-limit | Upper limit on y-axis for UV absorption trace 2nd trace | Choice for plotting curve on second y-axis. <br> buffer_b plots the percentage of buffer B on the second y-axis <br> buffer_b_abs plots the actual concentration resulting from mixture of buffer A and B (This requires correct values in the buffer B sliders below). <br> conductivity plots the percentage conductivity on the second y-axis. Buffer A (mM) | The absolute value in concentration for competitor in buffer A in mM. Buffer B (mM) | The absolute value in concentration for competitor in buffer B in mM. *Note: Values for Buffer A and Buffer B must be correct when plotting using the buffer_b_abs setting, as these values are used to calculate the actual concentration of the competitor in solution (e.g. imidazole). End of explanation
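The widget table above simply maps each control onto a keyword argument of plotTraces, so the same figure can be produced without the widget layer by calling the function directly. The call below is a sketch whose argument names are taken from the interact(...) cell above rather than from FPLCPlot's documentation, so verify them against the installed version of fplcplot before relying on it.

# Direct, non-interactive call; parameter names are assumed from the interact(...) setup above.
plotTraces(file_list,
           title='Protein A $E. coli$',
           output=True,              # save the figure next to the first Excel file
           f_format='.pdf',
           y_lower=-20,
           y_upper=2000,
           second_trace='buffer_b_abs',
           buffer_A=10,
           buffer_B=400)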
5,956
Given the following text description, write Python code to implement the functionality described below step by step Description: Python 3 Tutorial Notebook We'll be using this notebook to follow the slides from the workshop. You can also use it to experiment with Python yourself! Simply add a cell wherever you want, type in some Python code, and see what happens! Topic 1 Step1: 1.2 Some other useful printing tips/tricks Step2: Topic 2 Step3: 2.2 Playing with Strings Step4: 2.3 Playing with Booleans Step5: 2.4 Mixing Types Step6: Topic 3 Step7: 3.2 Ternary Statements in Python Step8: 3.3 Practice with While Loops Step9: Topic 4 Step10: 4.1.2 Slicing Step11: 4.2 List Comprehension and Other Useful List Functions Step13: 4.3 Sets and Dictionaries Step14: Topic 5 Step15: 5.2 Recursion Step16: 5.3 Memoization Step17: Topic 6 Step18: 6.2 Inheritance Step19: Topic 7
Python Code: # When a line begins with a '#' character, it designates a comment. This means that it's not actually a line of code # This is how you say hello world print('hello world') # Can you make Python print the staircase below: # # ======== # | | # =============== # | | | # ====================== print(' ========') print(' | |') print(' ===============') print(' | | |') print(' ======================') Explanation: Python 3 Tutorial Notebook We'll be using this notebook to follow the slides from the workshop. You can also use it to experiment with Python yourself! Simply add a cell wherever you want, type in some Python code, and see what happens! Topic 1: Printing 1.1 Printing Basics End of explanation # The print(...) function can accept as many arguments as you'd like. It prints the arguments # in order, separated by a space. For example... print('hello', 'world', 'i', 'go', 'to', 'princeton') # You can change the delimiter that separates the comma-separated arguments by changing the 'sep' parameter: print('hello', 'world', 'i', 'go', 'to', 'princeton', sep=' --> ') # By default, python adds a newline to the end of any statement you print. However, you can change this # by changing the 'end' parameter. For example... # Here we've told Python to print 'hello world' and then an empty string. This effectively # removes the newline that Python adds by default. print('hello prince', end='') # The next line that we print then begins immediately after the previous thing we printed. print('ton') # We can also end our lines with something more exotic print('ACM is so cool', end=' :)') Explanation: 1.2 Some other useful printing tips/tricks End of explanation # What does the following snippet of code do? day = 24 month = 'September' year = '2021' dotw = 'Friday' print(month, day - 7, year, 'was a', dotw) # Explanation: Tells us that the day 7 days ago was the same day of the week as today. The more you know...? age_in_weeks = 1057 # What's the difference between the two statements below? Comment one of them out to check yourself! age_in_years = 1057 / 52 age_in_years = 1057 // 52 print(age_in_years) # Explanation: The '/' operator does float division in Python by default, while the '//' operator does # integer division (i.e. it returns the decimal answer, ALWAYS rounded down to an integer) # Try to calculate the following in your head and see if your answer matches what Python says mystery = 2 ** 4 * 3 ** 2 % 7 * (2 + 7) print(mystery) # Explanation: First evaluate anything in parentheses, then do the exponents, then do multiplication/modulo # FROM LEFT TO RIGHT. So # 2 ** 4 * 3 ** 2 % 7 * (2 + 7) = 2 ** 4 * 3 ** 2 % 7 * 9 = 16 * 9 % 7 * 9 = 144 % 7 * 9 = 4 * 9 = 36 # Write a function that converts a given temperature in Farenheit to Celsius and Kelvin # The relevant formulas are (degrees Celsius) = (degrees Farenheit - 32) * 5 / 9 # and (Kelvin) = (degrees Celsius + 273) farenheit = 86 # Change the value here to test your solution celsius = (farenheit - 32) * 5 / 9 kelvin = celsius + 273 print(farenheit, 'degrees farenheit =', celsius, 'degrees celsius =', kelvin, 'kelvin') Explanation: Topic 2: Variables, Basic Operators, and Data Types 2.1 Playing with Numbers For a description of all the operators that exist in Python, you can visit https://www.tutorialspoint.com/python/python_basic_operators.htm. End of explanation # You are given the following string: a = 'Thomas Cruise' # Your job is to put the phrase 'Tom Cruise is 9 outta 10' into variable b using ONLY operations on string a. 
# You may not concatenate letters or strings of your own. HINT: You can use the str(...) function to convert # numerical values into strings so that you can concatenate it with another string b = a[0] + a[2:4] + a[6:] + a[6] + a[10:12] + a[6] + str(a.find('u')) + a[6] + a[2] + a[9] + \ (a[0].lower() * 2) + a[4] + a[6] + str(a.find('i')) print(b) # Practice with string formatting with mad libs! For this, you'll need to know # how to receive input. It's really easy in Python: word_1 = input('Input first word:\n') # This prompts the user with the phrase 'Input first word' # and stores the result in the variable word_1 word_2 = input('Input second word:\n') word_3 = input('Input third word:\n') word_4 = input('Input fourth word:\n') # You want to print the following mad libs: # # Hi, my name is [first phrase]. # One thing that I love about Princeton is [second phrase]. # One pet peeve I have about Princeton is [third phrase], but I can get over it because I have [fourth phrase]. # # For the last sentence, use one print statement to print it! print('\nYour mad libs is: ') print('Hi, my name is {}.'.format(word_1)) print('One thing that I love about Princeton is {}.'.format(word_2)) print('One pet peeve I have about Princeton is {}, but it\'s OK because I have {}.'.format(word_3, word_4)) Explanation: 2.2 Playing with Strings End of explanation # Your objective is to write a boolean formula in Python that takes three boolean variables (a, b, c) # and returns True if and only if exactly one of them is True. This is called the xor of the variables # Toggle these to test your formula a = False b = True c = False # Write your formula here xor = (a and not b and not c) or (not a and b and not c) or (not a and not b and c) print(xor) Explanation: 2.3 Playing with Booleans End of explanation # In Python, data types are divided into two categories: truthy and falsy. Falsy values include anything # (strings, lists, etc.) that is empty, the special value None, any zero number, and the boolean False. # You can use the bool(...) function to check whether a value is truthy or falsy: print('bool(3) =', bool(3)) print('bool(0) =', bool(0)) print('bool("") =', bool('')) print('bool(" ") =', bool(' ')) print('bool(False) =', bool(False)) print('bool(True) =', bool(True)) Explanation: 2.4 Mixing Types End of explanation x = 5 # What is the difference between this snippet of code: if x % 2 == 0: print(x, 'is even') if x % 5 == 0: print(x, 'is divisible by 5') if x > 0: print(x, 'is positive') print() # And this one: if x % 2 == 0: print(x, 'is even') elif x % 5 == 0: print(x, 'is divisible by 5') elif x > 0: print(x, 'is positive') # Explanation: the elif will only execute if all statements in the block above evaluated to false. # In the second block, the first if statement evaluates to false, so the first elif's condition is # tested. It is true, so then the second elif will not execute. # Follow-up: An if statement starts its own new block. So the first snippet of code is actually # three if statement blocks. # FizzBuzz is a very well-known programming challenge. It's quite easy, but it can trip up people # who are trying to look for shortcuts to solving the problem. The problem is as follows: # # For every number k in order from 1 to 50, print # - 'FizzBuzz' if the number is divisible by 3 and 5 # - 'Fizz' if the number is only divisible by 3 # - 'Buzz' if the number is only divisble by 5 # - the value of k if none of the above options hold # # Your task is to write a snippet of code that solves FizzBuzz. 
for i in range(1, 51): div3 = ((i % 3) == 0) div5 = ((i % 5) == 0) if div3: if div5: print('FizzBuzz') else: print('Fizz') elif div5: print('Buzz') else: print(i) Explanation: Topic 3: If Statements, Ranges, and Loops 3.1 Practice with the Basics End of explanation # The following if statement construct is so common it has a name ('ternary statement'): # # if (condition): # x = something1 # elif (condition2): # x = something2 # else: # x = something3 # # In python, this can be shortened into a one-liner: # # x = something else something2 if (condition2) else something3 # # And this works for an arbitrary number of elif statements in between the initial if and final else. # Can you convert the following block into a one-liner? budget = 3 if budget > 50: restaurant = 'Agricola' elif budget > 30: restaurant = 'Mediterra' elif budget > 15: restaurant = 'Thai Village' else: retaurant = 'Wawa' # Write your solution below: restaurant = 'Agricola' if budget > 50 else 'Mediterra' if budget > 30 \ else 'Thai Village' if budget > 15 else 'Wawa' print(restaurant) Explanation: 3.2 Ternary Statements in Python End of explanation # Your job is to create a 'guessing game' where the program thinks of an integer from 1 to 50 # and will keep prompting you for a guess. It'll tell you each time whether your guess is # too high or too low until you find the number. # Don't touch these two lines of code; they choose a random number between 1 and 50 # and store it in mystery_num from random import randint mystery_num = randint(1, 100) # Write your guessing game below: guess = int(input('Guess a number:\n')) # First guess; don't forget to convert it to an int! while mystery_num != guess: if guess > mystery_num: guess = int(input('Nope. Guess was too high!\n')) elif guess < mystery_num: guess = int(input('Nope. Guess was too low!\n')) print('You got it!') # Follow-up: Using the best strategy, what's the worst-case number of guesses you should need? Explanation: 3.3 Practice with While Loops End of explanation # When at the top of a loop, the 'in' keyword in Python will iterate through all of the sequence's # members in order. For strings, members are individual characters; for lists and tuples, they're # the items contained. # Task: Given a list of lowercase words, print whether the word has a vowel. Example: if the input is # ['rhythm', 'owl', 'hymn', 'aardvark'], you should output the following: # rhythm has no vowels # owl has a vowel # hymn has no vowels # aardvark has a vowel # HINT: The 'in' keyword can also test whether something is a member of another object. # Also, don't forget about break and continue! vowels = ['a', 'e', 'i', 'o', 'u'] words = ['rhythm', 'owl', 'hymn', 'aardvark'] for word in words: has_vowel = False for letter in word: if letter in vowels: has_vowel = True break # Not necessary, but is more efficient if has_vowel: print(word, 'has a vowel') else: print(word, 'has no vowels') # Given a tuple, write a program to check if the value at index i is equal to the square of i. # Example: If the input is nums = (0, 2, 4, 6, 8), then the desired output is # # True # False # True # False # False # # Because nums[0] = 0^2 and nums[2] = 4 = 2^2. HINT: Use enumerate! nums = (0, 2, 4, 6, 8) for i, num in enumerate(nums): print(num == i * i) Explanation: Topic 4: Data Structures in Python 4.1 Sequences Strings, tuples, and lists are all considered sequences in Python, which is why there are many operations that work on all three of them. 
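Before moving on to the subsections, here is a quick demonstration of the claim above that strings, lists, and tuples share the same sequence operations; the snippet applies length, indexing, slicing, and the membership test to one value of each type, and introduces nothing beyond the three sample values.

# The same sequence operations work on a string, a list, and a tuple.
for seq in ['tiger', [3, 1, 4, 1, 5], ('a', 'b', 'c', 'd')]:
    print(len(seq))       # length
    print(seq[0])         # indexing
    print(seq[1:3])       # slicing
    print(seq[0] in seq)  # membership test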
4.1.1 Iterating End of explanation # Slicing is one of the operations that work on all of them. # Task 1: Given a string s whose length is odd and at least 5, can you print # the middle three characters of it? Try to do it in one line. # Example: if the input is 'PrInCeToN', the the output should be 'nCe' s = 'PrInCeToN' print(s[len(s) // 2 - 1 : len(s) // 2 + 2]) # Task 2: Given a tuple, return a tuple that includes only every other element, starting # from the first. Example: if the input is (4, 5, 'cow', True, 9.4), then the output should # be (4, 'cow', 9.4). Again, try to do it in one line — there's an easy way to do it with slicing. t = (4, 5, 'cow', True, 9.4) print(t[::2]) print(t[0::2]) # also acceptable print(t[0:len(t):2]) # also acceptable, but less ideal # Task 3: Do the same as task 2, except start from the last element and alternate backwards. # Example: if the input is (3, 9, 1, 0, True, 'Tiger'), output should be ('Tiger', 0, 9) t = (3, 9, 1, 0, True, 'Tiger') print(t[::-2]) Explanation: 4.1.2 Slicing End of explanation # Task 1: Given a list of names, return a new list where all the names which are more than 15 # characters long are removed. names = ['Nalin Ranjan', 'Howard Yen', 'Sacheth Sathyanarayanan', 'Henry Tang', \ 'Austen Mazenko', 'Michael Tang', 'Dangely Canabal', 'Vicky Feng'] # Write your solution below print([name for name in names if len(name) <= 15]) # Task 2: Given a list of strings, return a list which is the reverse of the original, with # all the strings reversed. Example: if the input is ['Its', 'nine', 'o-clock', 'on a', 'Saturday'], # then the output should be ['yadrutaS', 'a no', 'kcolc-o', 'enin', 'stI']. Try to do it in one line! # HINT: Use list comprehension and negative indices! l = ['Its', 'nine', 'o-clock', 'on a', 'Saturday'] print([word[::-1] for word in l[::-1]]) print([l[-i][::-1] for i in range(1, len(l) + 1)]) # Also acceptable, but a little less ideal l1 = [5, 2, 6, 1, 8, 2, 4] l2 = [6, 1, 2, 4] # Python has a bunch of useful built-in list functions. Some of them are l1.append(3) # adds the element 3 to the end of the the list print(l1) l1.insert(1, 7) # adds the element 7 as the second element of the list print(l1) l1.remove(2) # Removes the first occurrence of 7 in the list (DOES NOT REMOVE ALL) print(l1) l1.pop(4) # Remove the fifth item of the list (since everything is zero-indexed) print(l1) l1.sort() # Sorts the list in increasing order print(l1) l1.sort(reverse=True) # Sorts the list in decreasing order print(l1) print(l1.count(2)) # Counts the number of occurrences of the number 2 in the list l1.extend(l2) # Appends all elements in l2 to the end of l1 print(l1) # If the list is numeric, we can find the min, max, and sum easily: print('Sum:', sum(l1)) print('Minimum:', min(l1)) print('Maximum:', max(l1)) # You can see all the list methods at https://www.w3schools.com/python/python_ref_list.asp Explanation: 4.2 List Comprehension and Other Useful List Functions End of explanation # Task 1: In a dictionary, keys must be unique, but values need not be. Given a dictionary, write a script # that prints the set of all unique values in a dictionary. 
Example: if the dictionary is # {'Cap': 'bicker', 'Quad': 'sign-in', 'Colonial': 'sign-in', 'Tower': 'bicker', 'Charter': '???'} # The program should print {'sign-in', 'bicker', '???'} d = {'Cap': 'bicker', 'Quad': 'sign-in', 'Colonial': 'sign-in', 'Tower': 'bicker', 'Charter': '???'} unique_vals = set() for key in d: unique_vals.add(d[key]) print(unique_vals) # In fact, there's a more Pythonic way to do this in one line: print(set([val for val in d.values()])) # We used list comprehension to put every value into a list (which may have contained duplicates) # and then converting it to a set removed the duplicates (since every element in a set must be unique). # Task 2: Given a passage of text (a string), analyze the frequency of each individual letter. Sort # the letters by their frequency in the passage. Does your distribution look reasonable for English? passage = it was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us, we were all going direct to Heaven, we were all going direct the other way -- in short, the period was so far like the present period that some of its noisiest authorities insisted on its being received, for good or for evil, in the superlative degree of comparison only # Here's the alphabet to help you out; it'll help you ignore other characters alphabet = "abcdefghijklmnopqrstuvwxyz" # This adds a key in the dictionary for each letter of the alphabet d = dict.fromkeys(alphabet, 0) for char in passage: if char in alphabet: d[char] += 1 # Don't change the code below: it'll take your dictionary of frequencies and sort it from most frequent to least freqs = [(letter, d[letter]) for letter in d] freqs.sort(key = lambda x: x[1], reverse=True) print(freqs) Explanation: 4.3 Sets and Dictionaries End of explanation # Task 1: Write a function that returns the minimum of three numbers. Don't use the built-in min function def my_min(a, b, c): if a <= b: return a if a <= c else c else: return b if b <= c else c print('Minimum of 6, 3, 7 is', my_min(6, 3, 7)) print('Minimum of 0, 3.333, -52 is', my_min(0, -52, 3.333)) print('Minimum of -3, -1, 3.14159 is', my_min(-3, -1, 3.14159)) # Task 2: Write a function that checks if a given tuple of numbers is increasing (that is, each number # is at least the number before it) def my_increasing(t): if len(t) == 0: return True prev = t[0] for i in t: if i < prev: return False prev = i return True print('(1, 2, 3, 4, 5, 7, 8) is increasing:', my_increasing((1, 2, 3, 4, 5, 7, 8))) print('(1, 2, 3, 2, 5, 7, 8) is increasing:', my_increasing((1, 2, 3, 2, 5, 7, 8))) print('(-1, 2, 3, 2.99, 5, 7, 8) is increasing:', my_increasing((1, 2, 3, 2, 5, 7, 8))) # Task 3: Given a list of numbers that is guaranteed to contain all but one of the consecutive integers # 1 to N (for some N), find the one that is missing. For example, if the input is [2, 1, 5, 4], your function # should return 3, because that's the number missing from 1-5. def my_missing(l): s = sum(l) n = len(l) + 1 return n * (n + 1) // 2 - s # Why does this work? 
print(my_missing([2, 1, 5, 4])) print(my_missing([3, 4, 6, 2, 5, 7, 9, 8])) Explanation: Topic 5: Functions in Python 5.1 Practice with Basic Functions End of explanation # Task: The sequence of Fibonacci numbers starts with the numbers 1 and 1 and every subsequent term # is the sum of the previous two terms. So the sequence is 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 87, 144, ... # Can you write a simple recursive function that calculates the nth Fibonacci number? # WARNING: Don't call your function for anything more than 35 or pass a non-integer parameter. # Your notebook might crash if you do. def fib(n): if n == 1 or n == 2: return 1 return fib(n - 1) + fib(n - 2) print(fib(35)) Explanation: 5.2 Recursion End of explanation # Part of the reason that we told you not to run your answer for 5.2 for large n is because the number of # function calls generated is exponentially large: for n = 35, the number of function calls you have is on # the order of 34 billion, which is a lot, even for a computer! If you did n = 75, the number of calls you # would make is approximately 37 sextillion, which is more than the number of seconds until the heat death # of the sun. # You can avert this issue, however, if you **memoize** your function, which is just a fancy way of saying # that you can remember values of your function instead of having to re-evaluate your function again. Python # has a handy memoization tool: from functools import lru_cache @lru_cache def fib(n): if n == 1 or n == 2: return 1 return fib(n - 1) + fib(n - 2) print(fib(100)) # Works no problem! # All we had to do was add the import statement and 'decorate' the function we wanted to remember # values from with the line @lru_cache Explanation: 5.3 Memoization End of explanation # Write a PrincetonStudent class, where a PrincetonStudent has a name, major, year, # set of clubs, and a preference ordering of dining halls. 
We want to have # # - a default constructor that initializes the PrincetonStudent with a name, major, PUID, year, no clubs, # and a random preference ordering of dining halls # - a special constructor (class method) called detailed_student that initializes a PrincetonStudent # with a name, major, year, # a specific set of clubs, and a particular preference ordering of dining halls # - a __str__() method that prints all the data of the student # - a move_dhall_to_top() function that takes a dhall and moves it to the top # of one's dining hall preference list # - a __lt__() method that returns true if and only if this student has a name that comes before # the other's alphabetically # - an __eq__() method that returns true if and only if the PUIDs of students are equal # HINT: To generate a random dining hall preference order, you can take a particular preference order # and shuffle it using the random.shuffle(list) function from random import shuffle class PrincetonStudent(): def __init__(self, name, major, puid, year): self.name = name self.major = major self.year = year self.puid = puid self.clubs = [] # From least preferred to most self.dhall_pref = ['WuCox', 'Whitman', 'Forbes', 'CJL', 'RoMa'] shuffle(self.dhall_pref) @classmethod # Initialize a student with name, major, year, specific set of clubs, and a dhall preference ordering def detailed_student(cls, name, major, puid, year, clubs, dhall_pref): new_student = PrincetonStudent(name, major, puid, year) new_student.clubs = clubs new_student.dhall_pref = dhall_pref return new_student def move_dhall_to_top(self, dhall): if dhall in self.dhall_pref: move_to_top = self.dhall_pref.remove(dhall) self.dhall_pref.append(dhall) # Returns a string description of the student. This allows us to call print(...) on a # PrincetonStudent and get an intelligible result def __str__(self): str_version = "Name: " + self.name + "\n" str_version += "Year: " + str(self.year) + "\n" str_version += "Concentration: " + self.major + "\n" str_version += "Clubs: " + str(self.clubs) + "\n" str_version += "Dining Halls from Most Favorite to Least: " + str(self.dhall_pref[::-1]) + "\n" return str_version def __lt__(self, other): return (self.name.lower() < other.name.lower()) # This works because string comparison # is automatically alphabetical # Test your PrincetonStudent class using this test suite. Feel free to write your own too! nalin = PrincetonStudent('Nalin Ranjan', 'COS', '123456789', 2022) print(nalin, end="\n\n") nalin.clubs.extend(['ACM', 'Taekwondo', 'Princeton Legal Journal', 'Badminton']) print(nalin, end="\n\n") sacheth_clubs = ['ACM', 'Table Tennis'] sacheth_prefs = ['WuCox', 'Whitman', 'Forbes', 'RoMa', 'CJL'] sacheth = PrincetonStudent.detailed_student('Sacheth Sathyanarayanan', 'COS', \ '24681012', 2022, sacheth_clubs, sacheth_prefs) print(sacheth) print('Sacheth had a great meal at Whitman! It is now his favorite.\n') sacheth.move_dhall_to_top('Whitman') print(sacheth) print('Sacheth is the same student as Nalin:', sacheth == nalin) print('Sacheth\'s name comes before Nalin\'s:', sacheth < nalin) Explanation: Topic 6: Classes in Python 6.1 Practice Writing Basic Classes End of explanation # Write an ACMOfficer class that inherits the PrincetonStudent class. An ACMOfficer has every attribute # a PrincetonStudent has, and also a position and term expiration date. You'll only need to overwrite # the constructors to accommodate these two additions. Remember that you can still call the parent's # functions as subroutines. 
class ACMOfficer(PrincetonStudent): def __init__(self, name, major, puid, year, acm_pos, acm_term_exp): PrincetonStudent.__init__(self, name, major, puid, year) self.acm_pos = acm_pos self.acm_term_exp = acm_term_exp @classmethod def detailed_officer(cls, name, major, puid, year, clubs, dhall_pref, acm_pos, acm_term_exp): new_officer = ACMOfficer(name, major, puid, year, acm_pos, acm_term_exp) new_officer.clubs = clubs new_officer.dhall_pref = dhall_pref return new_officer def __str__(self): str_version = PrincetonStudent.__str__(self) str_version += "Position on the ACM Board: " + self.acm_pos + "\n" str_version += "Term expires in: " + str(self.acm_term_exp) + "\n" return str_version # Test your PrincetonStudent class using this test suite. Feel free to write your own too! nalin = ACMOfficer('Nalin Ranjan', 'COS', '123456789', 2022, 'Chair', 2022) print(nalin, end="\n\n") nalin.clubs.extend(['ACM', 'Taekwondo', 'Princeton Legal Journal', 'Badminton']) print(nalin, end="\n\n") sacheth_clubs = ['ACM', 'Table Tennis'] sacheth_prefs = ['WuCox', 'Whitman', 'Forbes', 'RoMa', 'CJL'] sacheth = ACMOfficer.detailed_officer('Sacheth Sathyanarayanan', 'COS', '24681012', 2022, sacheth_clubs, sacheth_prefs, 'Treasurer', 2022) print(sacheth) print('Sacheth had a great meal at Whitman! It is now his favorite.\n') sacheth.move_dhall_to_top('Whitman') print(sacheth) print('Sacheth is the same student as Nalin:', sacheth == nalin) print('Sacheth\'s name comes before Nalin\'s:', sacheth < nalin) Explanation: 6.2 Inheritance End of explanation # Numpy contains many mathematical functions/data analysis tools you might want to use import numpy as np # First: Write a function that returns the Gudermannian function evaluated at x. def gudermannian(x, gamma): return gamma * np.arctan(np.tanh(x / gamma)) # Next: use matplotlib to plot the function. HINT: Use matplotlib.pyplot from matplotlib import pyplot as plt # You'll refer to pyplot as plt from now on # HINT: pyplot requires that you have a set of x-values and a corresponding set of y-values. # To make your plot look like a continuous curve, just make your x-values close enough (say in # increments of 0.01). You'll have to use numpy's arange function (Google it!) x_vals = np.arange(-5, 5, 0.01) # Then, you'll have to make a set of y values for each gamma. HINT: If f(x) is a function # defined on a single number, then running it on x_vals evaluates the function at every x value # in x_vals. plt.plot(x_vals, gudermannian(x_vals, 2), label='gamma = 2') plt.plot(x_vals, gudermannian(x_vals, 4), label='gamma = 4') plt.plot(x_vals, gudermannian(x_vals, 6), label='gamma = 6') plt.legend() Explanation: Topic 7: Using Existing Python Libraries Sigmoid activation functions are ubiquitous in machine learning. They all look somewhat like an S shape, starting out flat, and then somewhere in the middle jumping pretty quickly before leveling off. One example is the Gudermannian Function, which takes the form $$f(x, \gamma) = \gamma \arctan \left(\tanh \left( \frac x \gamma \right) \right)$$ for some value $\gamma$. You can think of $\gamma$ as a parameter that specifies "which" Gudermannian function we're talking about. Can you plot the Gudermannian Function in the range $[-5, 5]$ with $\gamma = {2, 4, 6}$? You will need access to numpy to find implementations of the arctan and tanh functions, and you will need matplotlib to create the actual plot. HINT: Since we have three different values of $\gamma$, we'll have three different curves on the same graph. End of explanation
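The hint above about evaluating gudermannian on x_vals works because NumPy functions are vectorized: a function built from np operations accepts an array and is evaluated elementwise, with no explicit loop. The short check below illustrates that with the gudermannian function defined above; only the sample array xs is new.

xs = np.arange(-5, 5, 0.01)    # evenly spaced sample points
ys = gudermannian(xs, 2)       # evaluated at every element of xs at once
print(xs.shape, ys.shape)      # the output has the same shape as the input
print(gudermannian(0.0, 2))    # the same function still works on a single number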
5,957
Given the following text description, write Python code to implement the functionality described below step by step Description: k-Nearest Neighbor (kNN) exercise Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website. The kNN classifier consists of two stages Step1: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps Step2: Inline Question #1 Step3: You should expect to see approximately 27% accuracy. Now lets try out a larger k, say k = 5 Step5: You should expect to see a slightly better performance than with k = 1. Step6: Cross-validation We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.
Python Code: # Run some setup code for this notebook. import random import numpy as np from cs231n.data_utils import load_CIFAR10 import matplotlib.pyplot as plt # This is a bit of magic to make matplotlib figures appear inline in the notebook # rather than in a new window. %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # Some more magic so that the notebook will reload external python modules; # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 # Load the raw CIFAR-10 data. cifar10_dir = 'cs231n/datasets/cifar-10-batches-py' X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir) # As a sanity check, we print out the size of the training and test data. print 'Training data shape: ', X_train.shape print 'Training labels shape: ', y_train.shape print 'Test data shape: ', X_test.shape print 'Test labels shape: ', y_test.shape # Visualize some examples from the dataset. # We show a few examples of training images from each class. classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'] num_classes = len(classes) samples_per_class = 7 for y, cls in enumerate(classes): idxs = np.flatnonzero(y_train == y) idxs = np.random.choice(idxs, samples_per_class, replace=False) for i, idx in enumerate(idxs): plt_idx = i * num_classes + y + 1 plt.subplot(samples_per_class, num_classes, plt_idx) plt.imshow(X_train[idx].astype('uint8')) plt.axis('off') if i == 0: plt.title(cls) plt.show() # Subsample the data for more efficient code execution in this exercise num_training = 5000 mask = range(num_training) X_train = X_train[mask] y_train = y_train[mask] num_test = 500 mask = range(num_test) X_test = X_test[mask] y_test = y_test[mask] # Reshape the image data into rows X_train = np.reshape(X_train, (X_train.shape[0], -1)) X_test = np.reshape(X_test, (X_test.shape[0], -1)) print X_train.shape, X_test.shape from cs231n.classifiers import KNearestNeighbor # Create a kNN classifier instance. # Remember that training a kNN classifier is a noop: # the Classifier simply remembers the data and does no further processing classifier = KNearestNeighbor() classifier.train(X_train, y_train) Explanation: k-Nearest Neighbor (kNN) exercise Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website. The kNN classifier consists of two stages: During training, the classifier takes the training data and simply remembers it During testing, kNN classifies every test image by comparing to all training images and transfering the labels of the k most similar training examples The value of k is cross-validated In this exercise you will implement these steps and understand the basic Image Classification pipeline, cross-validation, and gain proficiency in writing efficient, vectorized code. End of explanation # Open cs231n/classifiers/k_nearest_neighbor.py and implement # compute_distances_two_loops. # Test your implementation: dists = classifier.compute_distances_two_loops(X_test) print dists.shape # We can visualize the distance matrix: each row is a single test example and # its distances to training examples plt.imshow(dists, interpolation='none') plt.show() Explanation: We would now like to classify the test data with the kNN classifier. 
Recall that we can break down this process into two steps: First we must compute the distances between all test examples and all train examples. Given these distances, for each test example we find the k nearest examples and have them vote for the label Lets begin with computing the distance matrix between all training and test examples. For example, if there are Ntr training examples and Nte test examples, this stage should result in a Nte x Ntr matrix where each element (i,j) is the distance between the i-th test and j-th train example. First, open cs231n/classifiers/k_nearest_neighbor.py and implement the function compute_distances_two_loops that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time. End of explanation # Now implement the function predict_labels and run the code below: # We use k = 1 (which is Nearest Neighbor). y_test_pred = classifier.predict_labels(dists, k=1) # Compute and print the fraction of correctly predicted examples num_correct = np.sum(y_test_pred == y_test) accuracy = float(num_correct) / num_test print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy) Explanation: Inline Question #1: Notice the structured patterns in the distance matrix, where some rows or columns are visible brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.) What in the data is the cause behind the distinctly bright rows? What causes the columns? Your Answer: fill this in. End of explanation y_test_pred = classifier.predict_labels(dists, k=5) num_correct = np.sum(y_test_pred == y_test) accuracy = float(num_correct) / num_test print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy) Explanation: You should expect to see approximately 27% accuracy. Now lets try out a larger k, say k = 5: End of explanation # Now lets speed up distance matrix computation by using partial vectorization # with one loop. Implement the function compute_distances_one_loop and run the # code below: dists_one = classifier.compute_distances_one_loop(X_test) # To ensure that our vectorized implementation is correct, we make sure that it # agrees with the naive implementation. There are many ways to decide whether # two matrices are similar; one of the simplest is the Frobenius norm. In case # you haven't seen it before, the Frobenius norm of two matrices is the square # root of the squared sum of differences of all elements; in other words, reshape # the matrices into vectors and compute the Euclidean distance between them. difference = np.linalg.norm(dists - dists_one, ord='fro') print 'Difference was: %f' % (difference, ) if difference < 0.001: print 'Good! The distance matrices are the same' else: print 'Uh-oh! The distance matrices are different' # Now implement the fully vectorized version inside compute_distances_no_loops # and run the code dists_two = classifier.compute_distances_no_loops(X_test) # check that the distance matrix agrees with the one we computed before: difference = np.linalg.norm(dists - dists_two, ord='fro') print 'Difference was: %f' % (difference, ) if difference < 0.001: print 'Good! The distance matrices are the same' else: print 'Uh-oh! The distance matrices are different' # Let's compare how fast the implementations are def time_function(f, *args): Call a function f with args and return the time (in seconds) that it took to execute. 
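The predict_labels function itself lives in cs231n/classifiers/k_nearest_neighbor.py and is not shown in this notebook. As a reference point, the sketch below is one straightforward way to implement the voting step described earlier (take the k smallest distances in each row of dists, then majority-vote the corresponding training labels); the attribute name self.y_train is assumed from the usual assignment skeleton, so adapt it to whatever the starter code uses.

def predict_labels(self, dists, k=1):
    # dists[i, j] is the distance between test point i and training point j.
    num_test = dists.shape[0]
    y_pred = np.zeros(num_test)
    for i in range(num_test):
        # Indices of the k nearest training points for test point i.
        nearest_idx = np.argsort(dists[i])[:k]
        closest_y = self.y_train[nearest_idx]
        # Majority vote over integer class labels; ties go to the smaller label.
        y_pred[i] = np.argmax(np.bincount(closest_y))
    return y_pred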
import time tic = time.time() f(*args) toc = time.time() return toc - tic two_loop_time = time_function(classifier.compute_distances_two_loops, X_test) print 'Two loop version took %f seconds' % two_loop_time one_loop_time = time_function(classifier.compute_distances_one_loop, X_test) print 'One loop version took %f seconds' % one_loop_time no_loop_time = time_function(classifier.compute_distances_no_loops, X_test) print 'No loop version took %f seconds' % no_loop_time # you should see significantly faster performance with the fully vectorized implementation Explanation: You should expect to see a slightly better performance than with k = 1. End of explanation num_folds = 5 k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100] X_train_folds = [] y_train_folds = [] ################################################################################ # TODO: # # Split up the training data into folds. After splitting, X_train_folds and # # y_train_folds should each be lists of length num_folds, where # # y_train_folds[i] is the label vector for the points in X_train_folds[i]. # # Hint: Look up the numpy array_split function. # ################################################################################ index = np.arange(0, num_training) splited_index = np.split(index, num_folds) for i in range(0, num_folds): X_train_folds.append(X_train[splited_index[i]]) y_train_folds.append(y_train[splited_index[i]]) ################################################################################ # END OF YOUR CODE # ################################################################################ # A dictionary holding the accuracies for different values of k that we find # when running cross-validation. After running cross-validation, # k_to_accuracies[k] should be a list of length num_folds giving the different # accuracy values that we found when using that value of k. k_to_accuracies = {} ################################################################################ # TODO: # # Perform k-fold cross validation to find the best value of k. For each # # possible value of k, run the k-nearest-neighbor algorithm num_folds times, # # where in each case you use all but one of the folds as training data and the # # last fold as a validation set. Store the accuracies for all fold and all # # values of k in the k_to_accuracies dictionary. 
# ################################################################################ for k in k_choices: accuracies = [] for num in range(0, num_folds): X_train_subs = [] y_train_subs = [] for fold in range(0, num_folds): if fold == num: continue X_train_subs.append(X_train_folds[fold]) y_train_subs.append(y_train_folds[fold]) X_cross = np.concatenate(X_train_subs) y_cross = np.concatenate(y_train_subs) classifier = KNearestNeighbor() classifier.train(X_cross, y_cross) y_valid = classifier.predict_labels(X_train_folds[num], k=k) num_correct = np.sum(y_valid == y_train_folds[num]) accuracies.append(float(num_correct) / y_train_folds[num].shape[0]) k_to_accuracies[k] = accuracies ################################################################################ # END OF YOUR CODE # ################################################################################ # Print out the computed accuracies for k in sorted(k_to_accuracies): for accuracy in k_to_accuracies[k]: print 'k = %d, accuracy = %f' % (k, accuracy) # plot the raw observations for k in k_choices: accuracies = k_to_accuracies[k] plt.scatter([k] * len(accuracies), accuracies) # plot the trend line with error bars that correspond to standard deviation accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())]) accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())]) plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std) plt.title('Cross-validation on k') plt.xlabel('k') plt.ylabel('Cross-validation accuracy') plt.show() # Based on the cross-validation results above, choose the best value for k, # retrain the classifier using all the training data, and test it on the test # data. You should be able to get above 28% accuracy on the test data. best_k = 10 classifier = KNearestNeighbor() classifier.train(X_train, y_train) y_test_pred = classifier.predict(X_test, k=best_k) # Compute and display the accuracy num_correct = np.sum(y_test_pred == y_test) accuracy = float(num_correct) / num_test print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy) Explanation: Cross-validation We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation. End of explanation
5,958
Given the following text description, write Python code to implement the functionality described below step by step Description: 루프(Loop) 시퀀스 자료형을 for 문 또는 while 문과 조합하여 사용하면 간단하지만 강력한 루프 프로그래밍을 완성할 수 있다. 특히 range 또는 xrange 함수를 유용하게 활용할 수 있다. for 문 루프 리스트 활용 Step1: 서식 있는 print 문 위 코드에서는 서식이 있는 print문(formatted print)을 사용하였다. 사용방식은 원하는 곳에 중괄호({})를 위치시킨 후에 사용된 중괄호 개수만큼 format 키워드의 인자를 넣어주면 된다. Step2: 아래와 같이 인덱싱을 이용하는 방식으로도 사용할 수 있다. 서식이 있는 print문에 대해서는 이후에 보다 다양한 예제를 살펴볼 것이다. Step3: 문자열 활용 문자열을 이용하여 for문을 실행하면 하나씩 보여준다. Step4: range 함수 일정한 순서로 이루어진 리스트는 range 함수를 이용하여 생성할 수 있다. Step5: 파이썬 2.x 버전에서는 range와 거의 동일한 역할을 수행하지만 리스트 전체를 보여주지 않는 xrange가 있다. xrange(n)이 리턴하는 리스트의 원소들은 인덱싱을 통해서만 확인할 수 있다. xrange는 굳이 원소 전체를 알 필요가 없고 단순히 카운팅만이 필요할 경우 보다 range보다 빠르게 리스트 원소에 접근하여 프로그램의 실행속도를 증가시키는 데에 활용할 수 있다. 주의 Step6: 주의 Step7: range 함수 인자 인자를 최대 세 개까지 받을 수 있다. 각 인자들의 역할은 슬라이싱에 사용되는 세 개의 인자들의 역할과 동일하다. range([start,] stop [, step]) start의 경우 주어지지 않으면 0을 기본값으로 갖는다. step의 경우 주어지지 않으면 1을 기본값으로 갖느다. Step8: range 함수는 for문에서 유용하게 활용된다. Step9: 단순한 카운트 역할을 수행하는 용도로 range함수를 활용할 수도 있다. Step10: C 또는 Java 언어에서와는 달리 파이썬에서는 for문에서 사용되는 변수는 지역변수가 아님에 주의할 것. Step11: range 함수 활용 연습 Step12: 연습 Step13: 연습 Step14: 연습 Step15: while 문 루프 for 문과 while 문의 차이점 for문은 특정 구간(보통 시퀀스 자료형으로 표현됨) 내에서 움직이는 동안 일을 반복해서 진행함 while문은 특정 조건(불값으로 표현됨)이 만족되는 동안 일을 반복해서 진행함 Step16: 예제 컴퓨터가 다룰 수 있는 실수형 숫자 중에서 절대값이 가장 작은 숫자의 근사값 구하기 보통 컴퓨터가 다룰 수 있는 가장 큰 숫자는 어느정도 들어서 알고 있다. 반면에 컴퓨터가 다룰 수 있는 가장 작은 양의 실수도 존재한다. 컴퓨터는 무한히 0에 가까워지는 실수를 다를 수 없다. 매우 큰 수를 다룰 때와 마찬가지로 절대값이 매우 작은 실수를 다룰 때에도 조심해야 함을 의미한다. 이는 컴퓨터의 한계때문이지 파이썬 자체의 한계가 아니다. 모든 프로그래밍언어가 동일한 한계를 갖고 있다. Step17: 연습문제 양의 정수 n을 입력 받아 0과 n 사이의 값을 균등하게 n분의 1로 쪼개는 숫자들의 리스트를 리턴하는 함수 n_divide을 작성하라. 견본답안 1 Step18: [0.0, 0.1, 0.2, 0.3, ..., 0.9, 1.0]을 기대하였지만 다르게 나왔다. n_divide 함수를 아래와 같이 코딩해 보자. 견본답안 2
Python Code: animals = ['cat', 'dog', 'mouse'] for x in animals: print("This is the {}.".format(x)) Explanation: 루프(Loop) 시퀀스 자료형을 for 문 또는 while 문과 조합하여 사용하면 간단하지만 강력한 루프 프로그래밍을 완성할 수 있다. 특히 range 또는 xrange 함수를 유용하게 활용할 수 있다. for 문 루프 리스트 활용 End of explanation for x in animals: print("{}!, this is the {}.".format("Hi", x)) Explanation: 서식 있는 print 문 위 코드에서는 서식이 있는 print문(formatted print)을 사용하였다. 사용방식은 원하는 곳에 중괄호({})를 위치시킨 후에 사용된 중괄호 개수만큼 format 키워드의 인자를 넣어주면 된다. End of explanation for x in animals: print("{1}!, you are {0}.".format("animals", x)) Explanation: 아래와 같이 인덱싱을 이용하는 방식으로도 사용할 수 있다. 서식이 있는 print문에 대해서는 이후에 보다 다양한 예제를 살펴볼 것이다. End of explanation for letter in "Hello World": print(letter) Explanation: 문자열 활용 문자열을 이용하여 for문을 실행하면 하나씩 보여준다. End of explanation a = range(10) a Explanation: range 함수 일정한 순서로 이루어진 리스트는 range 함수를 이용하여 생성할 수 있다. End of explanation b = xrange(10) b a[5] b[5] Explanation: 파이썬 2.x 버전에서는 range와 거의 동일한 역할을 수행하지만 리스트 전체를 보여주지 않는 xrange가 있다. xrange(n)이 리턴하는 리스트의 원소들은 인덱싱을 통해서만 확인할 수 있다. xrange는 굳이 원소 전체를 알 필요가 없고 단순히 카운팅만이 필요할 경우 보다 range보다 빠르게 리스트 원소에 접근하여 프로그램의 실행속도를 증가시키는 데에 활용할 수 있다. 주의: 파이썬 3.x 버전부터는 xrange 함수가 사용되지 않는다. range 함수만 사용할 것을 추천한다. End of explanation a[2:6] Explanation: 주의: xrange를 사용해서 만든 리스트에는 슬라이싱을 적용할 수 없다. 즉, 인덱싱만 사용한다. End of explanation c0 = range(4) c0 c1 = range(1, 4) c1 c2 = range(1, 10, 2) c2 Explanation: range 함수 인자 인자를 최대 세 개까지 받을 수 있다. 각 인자들의 역할은 슬라이싱에 사용되는 세 개의 인자들의 역할과 동일하다. range([start,] stop [, step]) start의 경우 주어지지 않으면 0을 기본값으로 갖는다. step의 경우 주어지지 않으면 1을 기본값으로 갖느다. End of explanation for i in range(6): print("the square of {} is {}").format(i, i ** 2) Explanation: range 함수는 for문에서 유용하게 활용된다. End of explanation for i in range(5): print("printing five times") Explanation: 단순한 카운트 역할을 수행하는 용도로 range함수를 활용할 수도 있다. End of explanation i Explanation: C 또는 Java 언어에서와는 달리 파이썬에서는 for문에서 사용되는 변수는 지역변수가 아님에 주의할 것. End of explanation def range_double(x): z = [] for y in range(x): z.append(y*2) return z range_double(4) Explanation: range 함수 활용 연습: range 활용 함수 range_double은 range 함수와 비슷한 일을 한다. 대신에, 각 원소의 값을 두 배로 하여 리스트를 생성하도록 한다. &gt;&gt;&gt; range_double (4) [0, 2, 4, 6] &gt;&gt;&gt; range_double (10) [0, 2, 4, 6, 8, 10, 12, 14, 16, 18] End of explanation def skip13(a, b): result = [] for k in range (a,b): if k == 13: pass # 아무 것도 하지 않음 else: result.append(k) return result skip13(1, 20) Explanation: 연습: range 활용 서양에서는 숫자 13에 대한 미신이 있다. 따라서 동양에서 건물에 4층이 없는 경우가 있는 것처럼 서양에는 13층이 없는 경우가 있다. 아래 함수는 13을 건너 뛰며 리스트를 생성하여 리턴한다. End of explanation q = [] def add(name): q.append(name) def next(): return q.pop(0) def show(): for i in q: print(i) def length(): return len(q) add("SeungMin") q add("Jisung") q.pop(1) q next() add("Park") add("Kim") q show() length() next() add("Hwang") show() next() show() Explanation: 연습: 큐를 이용하여 은행대기 및 호출 프로그램 생성 리스트를 활용하여 큐(queue) 자료구조를 구현할 수 있다. 큐를 구현하기 위해서는 보통 다음 함수들을 함께 구현해야 한다. add(name): 새로운 손님이 추가될 경우 손님의 이름을 큐에 추가한다. next(): 대기자 중에서 가장 먼저 도착한 손님 이름을 리턴한다. show(): 대기자 명단을 보여(print)준다. length(): 대기자 수를 리턴한다. 준비사항: q = []를 전역변수로 선언하여 활용한다. 큐(queue) 자료구조의 활용을 이해할 수 있어야 한다. 선입선출(FIFO, First-In-First-Out) 방식을 사용한다. End of explanation def iterate(f, x, n): a = x for i in range(n): a = f(a) return a # 반 나누기를 원하는 만큼 반복하고자 하면 f 인자에 아래 divide_2 함수를 입력하면 된다. def divide_2(x): return x/2.0 iterate(divide_2, 64, 3) Explanation: 연습: range 이용 함수호출 반복하기 파이썬 함수를 다른 함수의 인자로 사용할 수 있다. C 또는 Java 언어에서는 포인터를 사용해야함 가능하다. 함수를 인자로 사용할 수 있는 언어를 고차원 언어라 한다. 예제: 아래 iterate 함수는 특정 함수 f를 인자 x에 n번 반복적용한 결과를 리턴하는 함수이다. 
End of explanation x = 64 while x > 1: x = x/2 print(x) Explanation: while 문 루프 for 문과 while 문의 차이점 for문은 특정 구간(보통 시퀀스 자료형으로 표현됨) 내에서 움직이는 동안 일을 반복해서 진행함 while문은 특정 조건(불값으로 표현됨)이 만족되는 동안 일을 반복해서 진행함 End of explanation eps = 1.0 while eps + 1 > 1: eps = eps / 2.0 eps + 1 > 1 # print("A very small epsilon is {}.".format(eps)) Explanation: 예제 컴퓨터가 다룰 수 있는 실수형 숫자 중에서 절대값이 가장 작은 숫자의 근사값 구하기 보통 컴퓨터가 다룰 수 있는 가장 큰 숫자는 어느정도 들어서 알고 있다. 반면에 컴퓨터가 다룰 수 있는 가장 작은 양의 실수도 존재한다. 컴퓨터는 무한히 0에 가까워지는 실수를 다를 수 없다. 매우 큰 수를 다룰 때와 마찬가지로 절대값이 매우 작은 실수를 다룰 때에도 조심해야 함을 의미한다. 이는 컴퓨터의 한계때문이지 파이썬 자체의 한계가 아니다. 모든 프로그래밍언어가 동일한 한계를 갖고 있다. End of explanation def n_divide(n): a = [] for i in range(n+1): d = 1-(float(n-i)/n) a.append(d) return a n_divide(10) Explanation: 연습문제 양의 정수 n을 입력 받아 0과 n 사이의 값을 균등하게 n분의 1로 쪼개는 숫자들의 리스트를 리턴하는 함수 n_divide을 작성하라. 견본답안 1 End of explanation def n_divide1(n): a = [] for i in range(n+1): d = (float(i)/n) a.append(d) return a n_divide1(10) Explanation: [0.0, 0.1, 0.2, 0.3, ..., 0.9, 1.0]을 기대하였지만 다르게 나왔다. n_divide 함수를 아래와 같이 코딩해 보자. 견본답안 2 End of explanation
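The loop notebook above is written for Python 2 (print statements, xrange). As a small illustration of the same ideas in current Python, and not part of the original notebook, here is a hedged Python 3 sketch of the final n_divide exercise and the machine-epsilon loop; the function names mirror the notebook, everything else is my own choice.

```python
# Hypothetical Python 3 port of two exercises from the notebook above.
# In Python 3, xrange is gone (range is already lazy) and / is true division,
# so the integer-division pitfall shown in the original disappears.

def n_divide(n):
    """Return n + 1 evenly spaced values between 0 and 1 (inclusive)."""
    return [i / n for i in range(n + 1)]   # i / n is float division in Python 3

def machine_epsilon():
    """Smallest eps such that 1 + eps > 1 still holds, found by repeated halving."""
    eps = 1.0
    while eps / 2 + 1 > 1:
        eps /= 2
    return eps

if __name__ == "__main__":
    print(n_divide(10))        # [0.0, 0.1, 0.2, ..., 1.0]
    print(machine_epsilon())   # roughly 2.22e-16 on IEEE-754 doubles
```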
5,959
Given the following text description, write Python code to implement the functionality described below step by step Description: Predicting Earnings from Census Data with Decision Tree taken from The Analytics Edge The Task The United States government periodically collects demographic information by conducting a census. In this problem, we are going to use census information about an individual to predict how much a person earns -- in particular, whether the person earns more than $50,000 per year. This data comes from the UCI Machine Learning Repository. The file census.csv contains 1994 census data for 31,978 individuals in the United States. The dataset includes the following 13 variables Step1: Exercise 1 Read the dataset census-2.csv. find out the name and the type of the single colums Step2: Exercise 2 sklearn classification can only work with numeric values. Therefore we first have to convert all not-numeric values to numeric values. copy the dataframe in the copy Step3: Exercise 3 Separate target variable over50k from the independent variables (all others) Step4: Exercise 4 Then, split the data randomly into a training set and a testing set, setting the random_state to 2000 before creating the split. Split the data so that the training set contains 60% of the observations, while the testing set contains 40% of the observations. Step5: Exercise 5 Let us now build a classification tree to predict "over50k". Use the training set to build the model, and all of the other variables as independent variables. Use max_depth=3 and the default parameters else. Step6: Exercise 6 Plot the decision tree using plotting_utilities.plot_decision_tree - Which are the most important feature? (Root of the Tree) - Which is the next important feature? (2nd Level) Step7: Exercise 7 Plot Top 5 most important features with plotting_utilities.plot_feature_importances. Are these features also the most important in the Decision Tree? Step8: Exercise 7 Predict for the test data and compare with the actual outcome
Python Code: import pandas as pd import numpy as np Explanation: Predicting Earnings from Census Data with Decision Tree taken from The Analytics Edge The Task The United States government periodically collects demographic information by conducting a census. In this problem, we are going to use census information about an individual to predict how much a person earns -- in particular, whether the person earns more than $50,000 per year. This data comes from the UCI Machine Learning Repository. The file census.csv contains 1994 census data for 31,978 individuals in the United States. The dataset includes the following 13 variables: age = the age of the individual in years workclass = the classification of the individual's working status (does the person work for the federal government, work for the local government, work without pay, and so on) education = the level of education of the individual (e.g., 5th-6th grade, high school graduate, PhD, so on) maritalstatus = the marital status of the individual occupation = the type of work the individual does (e.g., administrative/clerical work, farming/fishing, sales and so on) relationship = relationship of individual to his/her household race = the individual's race sex = the individual's sex capitalgain = the capital gains of the individual in 1994 (from selling an asset such as a stock or bond for more than the original purchase price) capitalloss = the capital losses of the individual in 1994 (from selling an asset such as a stock or bond for less than the original purchase price) hoursperweek = the number of hours the individual works per week nativecountry = the native country of the individual over50k = whether or not the individual earned more than $50,000 in 1994 Predict whether an individual's earnings are above $50,000 (the variable "over50k") using all of the other variables as independent variables. End of explanation # TODO Explanation: Exercise 1 Read the dataset census-2.csv. find out the name and the type of the single colums End of explanation # TODO convert over50k to boolean from sklearn.preprocessing import LabelEncoder # TODO Explanation: Exercise 2 sklearn classification can only work with numeric values. Therefore we first have to convert all not-numeric values to numeric values. copy the dataframe in the copy: convert the target column over50k to a boolean in the copy: convert the not-numeric independent variables (aka features, aka predictors) via sklearn.LabelEncoder. See http://pbpython.com/categorical-encoding.html how to use the sklearn.LabelEncoder and for further alternatives to convert not-numeric values to numeric values. End of explanation # TODO (hint: use drop(columns,axis=1)) Explanation: Exercise 3 Separate target variable over50k from the independent variables (all others): over50k -&gt; y, all others -&gt; X End of explanation from sklearn.model_selection import train_test_split # TODO Explanation: Exercise 4 Then, split the data randomly into a training set and a testing set, setting the random_state to 2000 before creating the split. Split the data so that the training set contains 60% of the observations, while the testing set contains 40% of the observations. End of explanation from sklearn.tree import DecisionTreeClassifier # TODO Explanation: Exercise 5 Let us now build a classification tree to predict "over50k". Use the training set to build the model, and all of the other variables as independent variables. Use max_depth=3 and the default parameters else. 
End of explanation from plotting_utilities import plot_decision_tree, plot_feature_importances import matplotlib.pyplot as plt %matplotlib inline # TODO Explanation: Exercise 6 Plot the decision tree using plotting_utilities.plot_decision_tree - Which are the most important feature? (Root of the Tree) - Which is the next important feature? (2nd Level) End of explanation # TODO Explanation: Exercise 7 Plot Top 5 most important features with plotting_utilities.plot_feature_importances. Are these features also the most important in the Decision Tree? End of explanation # TODO predict from sklearn.metrics import confusion_matrix # TODO Explanation: Exercise 7 Predict for the test data and compare with the actual outcome: Therefore print the confusion matrix for the test-data and calculate the accuracy for the trainings-data for the test-data End of explanation
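The exercise cells above are left as # TODO placeholders. The following is one possible way to fill them in, offered as a sketch rather than the author's reference solution; the census-2.csv file name, the over50k target and the 60/40 split with random_state 2000 come from the task text, while the label check (str.contains('>50K')) is an assumption about how the raw values look.

```python
# A sketch of the TODO cells, under the assumptions stated above.
import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix

census = pd.read_csv('census-2.csv')                       # Exercise 1: load and inspect
data = census.copy()                                       # Exercise 2: work on a copy
data['over50k'] = data['over50k'].str.contains('>50K')     # boolean target (assumed label format)
for col in data.select_dtypes(include='object').columns:   # encode non-numeric features
    data[col] = LabelEncoder().fit_transform(data[col].astype(str))

y = data['over50k']                                        # Exercise 3: separate target
X = data.drop('over50k', axis=1)

X_train, X_test, y_train, y_test = train_test_split(       # Exercise 4: 60/40 split
    X, y, train_size=0.6, random_state=2000)

tree = DecisionTreeClassifier(max_depth=3)                 # Exercise 5: shallow tree
tree.fit(X_train, y_train)

print(confusion_matrix(y_test, tree.predict(X_test)))      # final exercise: evaluation
print('train accuracy:', tree.score(X_train, y_train))
print('test accuracy:', tree.score(X_test, y_test))
```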
5,960
Given the following text description, write Python code to implement the functionality described below step by step Description: Step8: Lending Club Data Explorations Preprocess Data Step9: Load Data Step10: Load data dictionary Step11: The first thing I am going to do is find the percentage of missing values in my dataset Step12: To make my analysis tractable given the time in which this analysis must be completed, I am going to restrict myself to features for which less than 20% are missing. Am dropping url, index, id, member_id, emp_title, title Step13: Automatically detect column type Step14: Categorical variable 'term', 'grade', 'sub_grade', 'emp_length', 'home_ownership', 'is_inc_v', 'loan_status', 'pymnt_plan', 'purpose', 'addr_city', 'addr_state', 'revol_util', 'initial_list_status', Dates 'accept_d', 'exp_d', 'list_d', 'issue_d', 'earliest_cr_line' 'last_pymnt_d', 'last_credit_pull_d' Step15: Data Preprocessing Step16: Deal with dates Step17: Deal with categorical variables Step18: Derive Features loan_rank Step19: I did some research and discovered that loans with the prefix "Does not meet the credit.." in loan_status, are past loans that were granted before Lending Club did a major change to their lending policy. These represent less than 1% of all loans in the dataset. Moving forward, I will discard these loans from my analysis. Step20: Derive Features Extract day of the week from dates Extract week of the year from dates Step21: Save Dataset to file
Python Code: %pylab inline # Import libraries from __future__ import absolute_import, division, print_function # Ignore warnings import warnings warnings.filterwarnings('ignore') import numpy as np import pandas as pd from sklearn.externals import joblib # Graphing Libraries import matplotlib.pyplot as pyplt import seaborn as sns sns.set_style("white") def get_item_description(label): Print the description for an item label based on the data dictionary print(df_data_dictionary[df_data_dictionary.Item == label].Description.values[0]) import datetime import calendar def convert_date_months(col): Convert the date to difference from this month Attributes ------------ col: pandas series today = datetime.date.today() tmp_2 = pd.to_datetime(col) tmp_2 = today - tmp_2 tmp_2 = tmp_2.astype('timedelta64[M]').astype(int) return tmp_2 def convert_date_days(col): Convert the date to difference from this day Attributes ------------ col: pandas series today = datetime.date.today() tmp_2 = pd.to_datetime(col) tmp_2 = today - tmp_2 tmp_2 = tmp_2.astype('timedelta64[D]').astype(int) return tmp_2 def convert_num_day_of_week(x): Convert the date to numeric day of the week Attributes ------------ x: datetime obj my_date = pd.to_datetime(x) return my_date.weekday() def convert_day_of_week(x): Convert the date to string day of the week Attributes ------------ x: datetime obj my_date = pd.to_datetime(x) return calendar.day_name[my_date.weekday()] def convert_week_of_year(x): Convert the date to the week of the year Attributes ------------ x: datetime obj my_date = pd.to_datetime(x) return my_date.isocalendar()[1] import operator def find_missing_values(df): df: pandas dataframe Returns sorted list of features by the percentage missing values count_nan = (len(df) - df.count()) / len(df) percentage_nan = {} for item in range (0, len(count_nan)): if count_nan[item] > 0: percentage_nan[count_nan.index[item]] = round(count_nan[item], 4) return sorted(percentage_nan.items(), key=operator.itemgetter(1)) def get_var_type(df): Automatically determine variable type Attribute: ------------ df: pandas dataframe continuous_vars = [] rest = [] for cols in df.columns: a = type(df.ix[0, cols])== np.float64 and pd.notnull(df.ix[0, cols]) b = type(df.ix[0, cols])== float and pd.notnull(df.ix[0, cols]) if (a) or (b): continuous_vars.append(cols) else: rest.append(cols) return (continuous_vars, rest) Explanation: Lending Club Data Explorations Preprocess Data End of explanation dataPath = 'data' df = pd.read_csv(dataPath+'/LoanStats3a.csv') df_b = pd.read_csv(dataPath+'/LoanStats3b.csv') df_c = pd.read_csv(dataPath+'/LoanStats3c.csv') df = df.append(df_b) df = df.append(df_c) df = df.reset_index() print ("Dataset has {} samples with {} features each.".format(*df.shape)) Explanation: Load Data End of explanation xls_file = pd.ExcelFile(dataPath+'/LCDataDictionary.xlsx') xls_file.sheet_names df_data_dictionary = xls_file.parse('LoanStats') df_data_dictionary.columns = [u'Item', u'Description'] df_ = xls_file.parse('browseNotes') df_.columns = df_data_dictionary.columns df_data_dictionary = df_data_dictionary.append(df_) df_data_dictionary.head() df.head() Explanation: Load data dictionary End of explanation missing_df = find_missing_values(df) cols = [x[0] for x in missing_df] vals = [x[1] for x in missing_df] pyplt.rcParams['figure.figsize'] = (8, 16) percentage_nan_frame = pd.DataFrame({ 'Percentage': vals,'Feature': cols }) ax = percentage_nan_frame.plot(kind = 'barh', width = 0.4, x = 'Feature', color = 'slateblue', title = 
"Features Mapped By Percentage Missing Values") pyplt.grid(True) pyplt.savefig('report/figures/missing_features.png', format='png', dpi=200) # reset figure size pyplt.rcParams['figure.figsize'] = (6, 4) Explanation: The first thing I am going to do is find the percentage of missing values in my dataset End of explanation drop_list = ['url', 'index', 'id', 'member_id', 'emp_title', 'title'] for col, val in missing_df: if val > 0.2: drop_list.append(col) df.drop(drop_list, axis=1, inplace=True) df.head() Explanation: To make my analysis tractable given the time in which this analysis must be completed, I am going to restrict myself to features for which less than 20% are missing. Am dropping url, index, id, member_id, emp_title, title End of explanation continuous_vars, rest = get_var_type(df) Explanation: Automatically detect column type End of explanation display(df[continuous_vars[0:10]].head(1)) display(df[continuous_vars[10:20]].head(1)) display(df[continuous_vars[20:]].head(1)) Explanation: Categorical variable 'term', 'grade', 'sub_grade', 'emp_length', 'home_ownership', 'is_inc_v', 'loan_status', 'pymnt_plan', 'purpose', 'addr_city', 'addr_state', 'revol_util', 'initial_list_status', Dates 'accept_d', 'exp_d', 'list_d', 'issue_d', 'earliest_cr_line' 'last_pymnt_d', 'last_credit_pull_d' End of explanation get_item_description('revol_util') df['int_rate'] = df['int_rate'].str.extract('(\d+.\d+)') df['int_rate'] = df['int_rate'].map(lambda x: float(x)) df['revol_util'] = df['revol_util'].str.extract('(\d+.\d+)') df['revol_util'] = df['revol_util'].map(lambda x: float(x)) df['revol_util'] = df['revol_util'].fillna(-1) # that is 25% missing, so filling with -1 to indicate missing df.shape df = df.dropna() df.shape categorical_var = ['term','grade','sub_grade', 'emp_length','home_ownership','is_inc_v','pymnt_plan','purpose', 'addr_city','addr_state','revol_util','initial_list_status'] date_var = ['accept_d','list_d','exp_d','issue_d', 'earliest_cr_line','last_pymnt_d','last_credit_pull_d'] Explanation: Data Preprocessing End of explanation for cols in date_var: print (cols) df[cols+'_months'] = convert_date_months(df[cols]) df[cols+'_days'] = convert_date_days(df[cols]) Explanation: Deal with dates End of explanation display(df[categorical_var[0:10]].head(1)) display(df[categorical_var[10:]].head(1)) from sklearn.preprocessing import LabelEncoder number = LabelEncoder() df['term_old'] = df['term'] df['term'] = number.fit_transform(df['term'].astype(str)) df['addr_city_old'] = df['addr_city'] df['addr_city'] = number.fit_transform(df['addr_city'].astype(str)) df['addr_state_old'] = df['addr_state'] df['addr_state'] = number.fit_transform(df['addr_state'].astype(str)) df['home_ownership_old'] = df['home_ownership'] df['home_ownership'] = number.fit_transform(df['home_ownership'].astype(str)) df['purpose_old'] = df['purpose'] df['purpose'] = number.fit_transform(df['purpose'].astype(str)) df['pymnt_plan_old'] = df['pymnt_plan'] df['pymnt_plan'] = number.fit_transform(df['pymnt_plan'].astype(str)) df['is_inc_v_old'] = df['is_inc_v'] df['is_inc_v'] = number.fit_transform(df['is_inc_v'].astype(str)) df['initial_list_status_old'] = df['initial_list_status'] df['initial_list_status'] = number.fit_transform(df['initial_list_status'].astype(str)) df['emp_length_old'] = df['emp_length'] df['emp_length'] = number.fit_transform(df['emp_length'].astype(str)) df['grade_old'] = df['grade'] df['grade'] = number.fit_transform(df['grade'].astype(str)) df['sub_grade_old'] = df['sub_grade'] df['sub_grade'] = 
number.fit_transform(df['sub_grade'].astype(str)) new_grade_var = [x for x in df.columns if 'grade' in x] display(df[new_grade_var].head(1)) Explanation: Deal with categorical variables End of explanation data = df['loan_status'] data.dropna(inplace = True) df['loan_status'].value_counts() index_ = df['loan_status'][df['loan_status'].str.contains('(Does not meet the credit*)')].index Explanation: Derive Features loan_rank: 1 for bad loans, 0 for good loans Good loans are the ones where the status is current fully paid Bad loans are the ones where the status is default in grace period late charged off Extract day of the week from dates Extract week of the year from dates Derive Features Loan Rank End of explanation df.ix[index_, 'loan_status'].count() / len(df) * 100 df.drop(df.index[index_], inplace=True) # Create a set of dummy variables from the 'loan_status' variable df_loan_status = pd.get_dummies(df['loan_status']) df['loan_rank'] = df_loan_status[u'Default'] + df_loan_status[u'In Grace Period'] + \ df_loan_status[u'Late (16-30 days)'] + df_loan_status[ u'Late (31-120 days)'] + \ df_loan_status[u'Charged Off'] df.drop(['loan_status'], axis = 1, inplace = True) Explanation: I did some research and discovered that loans with the prefix "Does not meet the credit.." in loan_status, are past loans that were granted before Lending Club did a major change to their lending policy. These represent less than 1% of all loans in the dataset. Moving forward, I will discard these loans from my analysis. End of explanation for cols in date_var: print (cols) df[cols+'_num_day'] = df[cols].apply(lambda x: convert_num_day_of_week(x)) df[cols+'_day'] = df[cols].apply(lambda x: convert_day_of_week(x)) df[cols+'_week_of_year'] = df[cols].apply(lambda x: convert_week_of_year(x)) Explanation: Derive Features Extract day of the week from dates Extract week of the year from dates End of explanation joblib.dump(df, dataPath+'/df_cleaned.pkl') Explanation: Save Dataset to file End of explanation
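The loan_rank derivation above goes through dummy variables and the deprecated df.ix indexer. As a side note, the same feature can be written more compactly with current pandas; the sketch below only restates what the notebook already does, and the helper name add_loan_rank is mine.

```python
# A compact, hypothetical restatement of the loan_rank feature described above.
# Column and status labels are taken from the notebook; everything else is an assumption.
import pandas as pd

BAD_STATUSES = ['Default', 'In Grace Period', 'Late (16-30 days)',
                'Late (31-120 days)', 'Charged Off']

def add_loan_rank(df: pd.DataFrame) -> pd.DataFrame:
    # Drop the legacy "Does not meet the credit policy" loans (under 1% of rows)
    keep = ~df['loan_status'].str.contains('Does not meet the credit', na=False)
    df = df.loc[keep].copy()
    # 1 for bad loans, 0 for good loans (Current / Fully Paid)
    df['loan_rank'] = df['loan_status'].isin(BAD_STATUSES).astype(int)
    return df.drop(columns='loan_status')
```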
5,961
Given the following text description, write Python code to implement the functionality described below step by step Description: Python Threads and Coroutines (2) After translating the article on how Python 3.5 coroutines work, I tried coroutine-based asynchronous development with the Tornado + Motor stack and really felt the benefits that coroutines bring (at least syntactically Step1: Later the yield from syntax was added, which lets generators be chained together: Step3: yield from/send already seems to satisfy what a coroutine is defined to do; at first the @types.coroutine decorator was indeed used to turn generators into coroutines, and from Python 3.5 on the dedicated async/await replaced @types.coroutine/yield from: Step4: Compared with threads, the execution of a coroutine proceeds as follows: Step5: This figure (from
Python Code: def jump_range(upper): index = 0 while index < upper: jump = yield index if jump is None: jump = 1 index += jump jump = jump_range(5) print(jump) print(jump.send(None)) print(jump.send(3)) print(jump.send(None)) Explanation: Python 线程与协程(2) 我之前翻译了Python 3.5 协程原理这篇文章之后尝试用了 Tornado + Motor 模式下的协程进行异步开发,确实感受到协程所带来的好处(至少是语法上的:D)。至于协程的 async/await 语法是如何由开始的 yield 生成器一步一步上位至 Python 的 async/await 组合语句,前面那篇翻译的文章里面讲得已经非常详尽了。我们知道协程的本质上是: allowing multiple entry points for suspending and resuming execution at certain locations. 允许多个入口对程序进行挂起、继续执行等操作,我们首先想到的自然也是生成器: End of explanation def wait_index(i): # processing i... return (yield i) def jump_range(upper): index = 0 while index < upper: jump = yield from wait_index(index) if jump is None: jump = 1 index += jump jump = jump_range(5) print(jump) print(jump.send(None)) print(jump.send(3)) print(jump.send(None)) Explanation: 后来又新增了 yield from 语法,可以将生成器串联起来: End of explanation class Wait(object): 由于 Coroutine 协议规定 await 后只能跟 awaitable 对象, 而 awaitable 对象必须是实现了 __await__ 方法且返回迭代器 或者也是一个协程对象, 因此这里临时实现一个 awaitable 对象。 def __init__(self, index): self.index = index def __await__(self): return (yield self.index) async def jump_range(upper): index = 0 while index < upper: jump = await Wait(index) if jump is None: jump = 1 index += jump jump = jump_range(5) print(jump) print(jump.send(None)) print(jump.send(3)) print(jump.send(None)) Explanation: yield from/send 似乎已经满足了协程所定义的需求,最初也确实是用 @types.coroutine 修饰器将生成器转换成协程来使用,在 Python 3.5 之后则以专用的 async/await 取代了 @types.coroutine/yield from: End of explanation import asyncio import time import types @types.coroutine def _sum(x, y): print("Compute {} + {}...".format(x, y)) yield time.sleep(2.0) return x+y @types.coroutine def compute_sum(x, y): result = yield from _sum(x, y) print("{} + {} = {}".format(x, y, result)) loop = asyncio.get_event_loop() loop.run_until_complete(compute_sum(0,0)) Explanation: 与线程相比 协程的执行过程如下所示: End of explanation import asyncio import time # 上面的例子为了从生成器过度,下面全部改用 async/await 语法 async def _sum(x, y): print("Compute {} + {}...".format(x, y)) await asyncio.sleep(2.0) return x+y async def compute_sum(x, y): result = await _sum(x, y) print("{} + {} = {}".format(x, y, result)) start = time.time() loop = asyncio.get_event_loop() tasks = [ asyncio.ensure_future(compute_sum(0, 0)), asyncio.ensure_future(compute_sum(1, 1)), asyncio.ensure_future(compute_sum(2, 2)), ] loop.run_until_complete(asyncio.wait(tasks)) loop.close() print("Total elapsed time {}".format(time.time() - start)) Explanation: 这张图(来自: PyDocs: 18.5.3. Tasks and coroutines)清楚地描绘了由事件循环调度的协程的执行过程,上面的例子中事件循环的队列里只有一个协程,如果要与上一部分中线程实现的并发的例子相比较,只要向事件循环的任务队列中添加协程即可: End of explanation
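The closing example above manages the event loop by hand with get_event_loop and ensure_future. A hedged modern rewrite for Python 3.7+ is sketched below using asyncio.run and asyncio.gather; it is an alternative formulation, not part of the original post, and the printed timing will vary slightly from run to run.

```python
# Modern (Python 3.7+) variant of the final concurrency example above.
import asyncio
import time

async def _sum(x, y):
    print("Compute {} + {}...".format(x, y))
    await asyncio.sleep(2.0)          # the three sleeps overlap, so total is about 2 s
    return x + y

async def compute_sum(x, y):
    result = await _sum(x, y)
    print("{} + {} = {}".format(x, y, result))

async def main():
    # schedule all three coroutines on the event loop at once
    await asyncio.gather(*(compute_sum(i, i) for i in range(3)))

start = time.time()
asyncio.run(main())
print("Total elapsed time {:.2f}".format(time.time() - start))
```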
5,962
Given the following text description, write Python code to implement the functionality described below step by step Description: Read motifs from files in other formats. Step1: You can convert a motif to several formats. Step2: Some other useful tidbits. Step3: To convert a motif to an image, use to_img(). Supported formats are png, ps and pdf. Step4: Motif scanning For very simple scanning, you can just use a Motif instance. Let’s say we have a FASTA file called test.small.fa that looks like this Step5: This return a dictionary with the sequence names as keys. The value is a list with positions where the motif matches. Here, as the AP1 motif is a palindrome, you see matches on both forward and reverse strand. This is more clear when we use pwm_scan_all() that returns position, score and strand for every match. Step6: The number of matches to return is set to 50 by default, you can control this by setting the nreport argument. Use scan_rc=False to only scan the forward orientation. Step7: While this functionality works, it is not very efficient. To scan many motifs in potentially many sequences, use the functionality in the scanner module. If you only want the best match per sequence, there is a utility function called scan_to_best_match, otherwise, use the Scanner class. Step8: The matches are in the same order as the sequences in the original file. While this function can be very useful, a Scanner instance is much more flexible. You can scan different input formats (BED, FASTA, regions), and control the thresholds and output. As an example we will use the file Gm12878.CTCF.top500.w200.fa that contains 500 top CTCF peaks. We will get the CTCF motif and scan this file in a number of different ways. Step9: Now let’s get the best score for the CTCF motif for each sequence. Step10: In many cases you’ll want to set a threshold. In this example we’ll use a 1% FPR threshold, based on scanning randomly selected sequences from the hg38 genome. The first time you run this, it will take a while. However, the tresholds will be cached. This means that for the same combination of motifs and genome, the previously generated threshold will be used. Step11: Finding de novo motifs Let’s take the Gm12878.CTCF.top500.w200.fa file as example again. For a basic example we’ll just use two motif finders, as they’re quick to run. Step12: This will basically run the same pipeline as the gimme motifs command. All output files will be stored in outdir and gimme_motifs returns a list of Motif instances. If you only need the motifs but not the graphical report, you can decide to skip it by setting create_report to False. Additionally, you can choose to skip clustering (cluster=False) or to skip calculation of significance (filter_significant=False). For instance, the following command will only predict motifs and cluster them. Step13: All parameters for motif finding are set by the params argument Although the gimme_motifs function is probably the easiest way to run the de novo finding tools, you can also run any of the tools directly. In this case you would also have to supply the background file if the specific tool requires it. Step14: Motif statistics With some motifs, a sample file and a background file you can calculate motif statistics. Let’s say I wanted to know which of the p53-family motifs is most enriched in the file TAp73alpha.fa. First, we’ll generate a GC%-matched genomic background. Then we only select p53 motifs. Step15: A lot of statistics are generated and you will not always need all of them. 
You can choose one or more specific metrics with the additional stats argument. Step16: Motif comparison Step17: Compare two motifs Step18: Find closest match in a motif database
Python Code: with open("MA0099.3.jaspar") as f: motifs = read_motifs(f, fmt="jaspar") print(motifs[0]) Explanation: Read motifs from files in other formats. End of explanation with open("example.pfm") as f: motifs = read_motifs(f) # pwm print(motifs[0].to_pwm()) # pfm print(motifs[0].to_pfm()) # consensus sequence print(motifs[0].to_consensus()) # TRANSFAC print(motifs[0].to_transfac()) # MEME print(motifs[0].to_meme()) Explanation: You can convert a motif to several formats. End of explanation m = motif_from_consensus("NTGASTCAN") print(len(m)) # Trim by information content m.trim(0.5) print(m.to_consensus(), len(m)) # Slices print(m[:3].to_consensus()) # Shuffle random_motif = motif_from_consensus("NTGASTGAN").randomize() print(random_motif) Explanation: Some other useful tidbits. End of explanation m = motif_from_consensus("NTGASTCAN") m.to_img("ap1.png", fmt="png") from IPython.display import Image Image("ap1.png") Explanation: To convert a motif to an image, use to_img(). Supported formats are png, ps and pdf. End of explanation from gimmemotifs.motif import motif_from_consensus from gimmemotifs.fasta import Fasta f = Fasta("test.small.fa") m = motif_from_consensus("TGAsTCA") m.pwm_scan(f) Explanation: Motif scanning For very simple scanning, you can just use a Motif instance. Let’s say we have a FASTA file called test.small.fa that looks like this: ``` seq1 AAAAAAAAAAAAAAAAAAAAAA seq2 CGCGCGTGAGTCACGCGCGCGCG seq3 TGASTCAAAAAAAAAATGASTCA ``` Now we can use this file for scanning. End of explanation m.pwm_scan_all(f) Explanation: This return a dictionary with the sequence names as keys. The value is a list with positions where the motif matches. Here, as the AP1 motif is a palindrome, you see matches on both forward and reverse strand. This is more clear when we use pwm_scan_all() that returns position, score and strand for every match. End of explanation m.pwm_scan_all(f, nreport=1, scan_rc=False) Explanation: The number of matches to return is set to 50 by default, you can control this by setting the nreport argument. Use scan_rc=False to only scan the forward orientation. End of explanation from gimmemotifs.motif import motif_from_consensus from gimmemotifs.scanner import scan_to_best_match m1 = motif_from_consensus("TGAsTCA") m1.id = "AP1" m2 = motif_from_consensus("CGCG") m2.id = "CG" motifs = [m1, m2] print("motif\tpos\tscore") result = scan_to_best_match("test.small.fa", motifs) for motif, matches in result.items(): for match in matches: print("{}\t{}\t{}".format(motif, match[1], match[0])) Explanation: While this functionality works, it is not very efficient. To scan many motifs in potentially many sequences, use the functionality in the scanner module. If you only want the best match per sequence, there is a utility function called scan_to_best_match, otherwise, use the Scanner class. End of explanation from gimmemotifs.motif import default_motifs from gimmemotifs.scanner import Scanner from gimmemotifs.fasta import Fasta import numpy as np # Input file fname = "Gm12878.CTCF.top500.w200.fa" # Select the CTCF motif from the default motif database motifs = [m for m in default_motifs() if "CTCF" in m.factors['direct']][:1] # Initialize the scanner s = Scanner() s.set_motifs(motifs) Explanation: The matches are in the same order as the sequences in the original file. While this function can be very useful, a Scanner instance is much more flexible. You can scan different input formats (BED, FASTA, regions), and control the thresholds and output. 
As an example we will use the file Gm12878.CTCF.top500.w200.fa that contains 500 top CTCF peaks. We will get the CTCF motif and scan this file in a number of different ways. End of explanation scores = [r[0] for r in s.best_score("Gm12878.CTCF.top500.w200.fa")] print("{}\t{:.2f}\t{:.2f}\t{:.2f}".format( len(scores), np.mean(scores), np.min(scores), np.max(scores) )) Explanation: Now let’s get the best score for the CTCF motif for each sequence. End of explanation # Set a 1% FPR threshold based on random hg38 sequence s.set_genome("hg38") s.set_threshold(fpr=0.01) # get the number of sequences with at least one match counts = [n[0] for n in s.count("Gm12878.CTCF.top500.w200.fa", nreport=1)] print(counts[:10]) # or the grand total of number of sequences with 1 match print(s.total_count("Gm12878.CTCF.top500.w200.fa", nreport=1)) # Scanner.scan() just gives all information seqs = Fasta("Gm12878.CTCF.top500.w200.fa")[:10] for i,result in enumerate(s.scan(seqs)): seqname = seqs.ids[i] for m,matches in enumerate(result): motif = motifs[m] for score, pos, strand in matches: print(seqname, motif, score, pos, strand) Explanation: In many cases you’ll want to set a threshold. In this example we’ll use a 1% FPR threshold, based on scanning randomly selected sequences from the hg38 genome. The first time you run this, it will take a while. However, the tresholds will be cached. This means that for the same combination of motifs and genome, the previously generated threshold will be used. End of explanation from gimmemotifs.denovo import gimme_motifs peaks = "Gm12878.CTCF.top500.w200.fa" outdir = "CTCF.gimme" params = { "tools": "Homer,BioProspector", "genome": "hg38", } motifs = gimme_motifs(peaks, outdir, params=params) Explanation: Finding de novo motifs Let’s take the Gm12878.CTCF.top500.w200.fa file as example again. For a basic example we’ll just use two motif finders, as they’re quick to run. End of explanation motifs = gimme_motifs(peaks, outdir, params=params, filter_significant=False, create_report=False) Explanation: This will basically run the same pipeline as the gimme motifs command. All output files will be stored in outdir and gimme_motifs returns a list of Motif instances. If you only need the motifs but not the graphical report, you can decide to skip it by setting create_report to False. Additionally, you can choose to skip clustering (cluster=False) or to skip calculation of significance (filter_significant=False). For instance, the following command will only predict motifs and cluster them. End of explanation from gimmemotifs.tools import get_tool from gimmemotifs.background import MatchedGcFasta m = get_tool("homer") # tool name is case-insensitive # Create a background fasta file with a similar GC% fa = MatchedGcFasta("TAp73alpha.fa", number=1000) fa.writefasta("bg.fa") # Run motif prediction params = { "background": "bg.fa", "width": "20", "number": 5, } motifs, stdout, stderr = m.run("TAp73alpha.fa", params=params) print(motifs[0].to_consensus()) Explanation: All parameters for motif finding are set by the params argument Although the gimme_motifs function is probably the easiest way to run the de novo finding tools, you can also run any of the tools directly. In this case you would also have to supply the background file if the specific tool requires it. 
End of explanation from gimmemotifs.background import MatchedGcFasta from gimmemotifs.fasta import Fasta from gimmemotifs.stats import calc_stats from gimmemotifs.motif import default_motifs sample = "TAp73alpha.fa" bg = MatchedGcFasta(sample, genome="hg19", number=1000) motifs = [m for m in default_motifs() if any(f in m.factors['direct'] for f in ["TP53", "TP63", "TP73"])] stats = calc_stats(motifs, sample, bg) print("Stats for", motifs[0]) for k, v in stats[str(motifs[0])].items(): print(k,v) print() best_motif = sorted(motifs, key=lambda x: stats[str(x)]["recall_at_fdr"])[-1] print("Best motif (recall at 10% FDR):", best_motif) Explanation: Motif statistics With some motifs, a sample file and a background file you can calculate motif statistics. Let’s say I wanted to know which of the p53-family motifs is most enriched in the file TAp73alpha.fa. First, we’ll generate a GC%-matched genomic background. Then we only select p53 motifs. End of explanation metrics = ["roc_auc", "recall_at_fdr"] stats = calc_stats(motifs, sample, bg, stats=metrics) for metric in metrics: for motif in motifs: print("{}\t{}\t{:.2f}".format( motif.id, metric, stats[str(motif)][metric] )) Explanation: A lot of statistics are generated and you will not always need all of them. You can choose one or more specific metrics with the additional stats argument. End of explanation from gimmemotifs.comparison import MotifComparer from gimmemotifs.motif import motif_from_consensus from gimmemotifs.motif import read_motifs Explanation: Motif comparison End of explanation m1 = motif_from_consensus("RRRCATGYYY") m2 = motif_from_consensus("TCRTGT") mc = MotifComparer() score, pos, orient = mc.compare_motifs(m1, m2) if orient == -1: m2 = m2.rc() pad1, pad2 = "", "" if pos < 0: pad1 = " " * -pos elif pos > 0: pad2 =" " * pos print(pad1 + m1.to_consensus()) print(pad2 + m2.to_consensus()) Explanation: Compare two motifs End of explanation motifs = [ motif_from_consensus("GATA"), motif_from_consensus("NTATAWA"), motif_from_consensus("ACGCG"), ] mc = MotifComparer() results = mc.get_closest_match(motifs, dbmotifs=read_motifs("HOMER"), metric="seqcor") # Load motifs db = read_motifs("HOMER", as_dict=True) for motif in motifs: match, scores = results[motif.id] print("{}: {} - {:.3f}".format(motif.id, match, scores[0])) dbmotif = db[match] orient = scores[2] if orient == -1: dbmotif = dbmotif.rc() padm, padd = 0, 0 if scores[1] < 0: padm = -scores[1] elif scores[1] > 0: padd = scores[1] print(" " * padm + motif.to_consensus()) print(" " * padd + dbmotif.to_consensus()) print() Explanation: Find closest match in a motif database End of explanation
5,963
Given the following text description, write Python code to implement the functionality described below step by step Description: Goal Step1: <div id="toc"></div> Step2: Step 1) Load bhm_e Step3: Load detector pair dictionaries. Step4: Step 2) Produce bhp_e Step5: Look at subsets of pairs later. For now I'll assume it's going to work... Step6: Step 3) Plot it I'm going to make a function called bhp_e_plot based off of bhp_plot. Step7: Try out the vmin and vmax input parameters. Step8: Try adding num_fissions. Step9: Looks pretty good. Zoom in on the central area to see what's going on there.
Python Code: %%javascript $.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js') Explanation: Goal: Build and plot bhp_e P. Schuster, University of Michigan June 21, 2018 Load bhm_e Build a function to sum across custom pairs for bhp_e Plot it Plot slices End of explanation %load_ext autoreload %autoreload 2 import os import sys sys.path.append('../scripts/') import numpy as np import bicorr_e import bicorr import bicorr_plot as bicorr_plot import bicorr_math as bicorr_math import matplotlib.pyplot as plt import matplotlib.colors import seaborn as sns sns.set(style='ticks') Explanation: <div id="toc"></div> End of explanation bhm_e, e_bin_edges, note = bicorr_e.load_bhm_e('../analysis/Cf072115_to_Cf072215b/datap') print(bhm_e.shape) print(e_bin_edges.shape) print(note) Explanation: Step 1) Load bhm_e End of explanation det_df = bicorr.load_det_df() dict_pair_to_index, dict_index_to_pair, dict_pair_to_angle = bicorr.build_dict_det_pair(det_df) Explanation: Load detector pair dictionaries. End of explanation help(bicorr_e.build_bhp_e) bhp_e, norm_factor = bicorr_e.build_bhp_e(bhm_e,e_bin_edges) Explanation: Step 2) Produce bhp_e End of explanation bhp_e.shape Explanation: Look at subsets of pairs later. For now I'll assume it's going to work... End of explanation help(bicorr_plot.bhp_e_plot) bicorr_plot.bhp_e_plot(bhp_e, e_bin_edges, title='Plot of bhp_e', show_flag=True) Explanation: Step 3) Plot it I'm going to make a function called bhp_e_plot based off of bhp_plot. End of explanation bicorr_plot.bhp_e_plot(bhp_e, e_bin_edges, vmin=10,vmax=1e4,title='Plot of bhp_e', show_flag=True) Explanation: Try out the vmin and vmax input parameters. End of explanation num_fissions = 2194651200.00 bhp_e, norm_factor = bicorr_e.build_bhp_e(bhm_e,e_bin_edges,num_fissions=num_fissions) bicorr_plot.bhp_e_plot(bhp_e, e_bin_edges, title='Normalized plot of bhp_e', show_flag=True) Explanation: Try adding num_fissions. End of explanation zoom_range = [0,6] bicorr_plot.bhp_e_plot(bhp_e, e_bin_edges, zoom_range=zoom_range, title='Normalized plot of bhp_e', show_flag=True) zoom_range = [0,0.5] bicorr_plot.bhp_e_plot(bhp_e, e_bin_edges, zoom_range=zoom_range, title='Normalized plot of bhp_e', show_flag=True) Explanation: Looks pretty good. Zoom in on the central area to see what's going on there. End of explanation
5,964
Given the following text description, write Python code to implement the functionality described below step by step Description: BGS Spectral Simulations The goal of this notebook is to do some BGS spectral simulations for paper one. Getting started. First, import all the package dependencies. Step1: Specify the parameters of the simulation. Next, let's specify the number and spectral type distribution of spectra we want to simulate, and the random seed. Setting the seed here (which can be any number at all!) ensures that your simulations are reproducible. Let's also explicitly set the night of the "observations" (the default is to use the current date) and the expid or exposure ID number (which would allow you to simulate more than one DESI exposure). The flavor option is used to choose the correct sky-brightness model and it also determines the distribution of targets for a given flavor. For example, flavor='dark' returns the right relative sampling density of ELGs, LRGs, and QSOs. The other available (science target) options for flavor are 'dark', 'gray', 'grey', 'bright', 'bgs', 'mws', 'lrg', 'elg', 'qso', and 'std'. (You can also set flavor to either 'arc' or 'flat' but that would be boring!) Step2: Define the range of allowable observational conditions Step3: Define the survey and targeting values for density, sky coverage, etc Any parameters you wish to set to the default can simply be commented out below, the code only replaces the keys that are defined Step4: Check our environment variables Step5: Generate the random conditions Step6: Randomly select observing consitions for each exposure Step7: Build a metadata table with the top-level simulation inputs. Step8: Open yaml to define the targeting parameter values Then for those defined above in targ_dens, change the default value to what we specified Step9: Generating noiseless simspec and fibermap spectral files The first step is to generate the fibermap and simspec files needed by quickgen. The fibermap table contains (simulated) information about the position of each target in the DESI focal plane, while the simspec table holds the "truth" spectra and the intrinsic properties of each object (redshift, noiseless photometry, [OII] flux, etc.). In detail, the simspec and fibermap data models are described at * http Step10: Simulate spectra using quickgen To get around the fact that we aren't using the command line, we use the arg parser and pass the arguments to the main function of quickgen directly. more information at Step11: Regroup the spectra working with cframe files is pretty tedious, especially across three cameras, 10 spectrographs, and more than 35 million targets! Therefore, let's combine and reorganize the individual cframe files into spectra files grouped on the sky. Spectra are organized into healpix pixels (here chosen to have nside=64). If you're interested, you can read more about the healpix directory structure here
Python Code: import os import numpy as np %matplotlib inline import matplotlib.pyplot as plt from astropy.io import fits from astropy.table import Table import yaml import desispec.io import desisim.io from desisim.scripts import quickgen from desispec.scripts import group_spectra from desispec.io.util import write_bintable, makepath #%pylab inline from desiutil.log import get_logger log = get_logger(level='WARNING') # currently in the bgssim branch of desisim from desisim.obs import new_exposure # new_exposure(flavor, nspec=5000, night=None, expid=None, tileid=None, # airmass=1.0, exptime=None, seed=None, testslit=False, # arc_lines_filename=None, flat_spectrum_filename=None, # target_densities = {}) # returns : fibermap, truth # or # wavelengths = qsim.source.wavelength_out.to(u.Angstrom).value # bgs = desisim.templates.BGS(wave=wavelengths, add_SNeIa=args.add_SNeIa) # flux, tmpwave, meta1 = bgs.make_templates(nmodel=nobj, seed=args.seed, zrange=args.zrange_bgs, # rmagrange=args.rmagrange_bgs,sne_rfluxratiorange=args.sne_rfluxratiorange) Explanation: BGS Spectral Simulations The goal of this notebook is to do some BGS spectral simulations for paper one. Getting started. First, import all the package dependencies. End of explanation nspec = 200 seed = 555 night = '20170701' flavor = 'bgs' nexp = 10 # number of exposures Explanation: Specify the parameters of the simulation. Next, let's specify the number and spectral type distribution of spectra we want to simulate, and the random seed. Setting the seed here (which can be any number at all!) ensures that your simulations are reproducible. Let's also explicitly set the night of the "observations" (the default is to use the current date) and the expid or exposure ID number (which would allow you to simulate more than one DESI exposure). The flavor option is used to choose the correct sky-brightness model and it also determines the distribution of targets for a given flavor. For example, flavor='dark' returns the right relative sampling density of ELGs, LRGs, and QSOs. The other available (science target) options for flavor are 'dark', 'gray', 'grey', 'bright', 'bgs', 'mws', 'lrg', 'elg', 'qso', and 'std'. (You can also set flavor to either 'arc' or 'flat' but that would be boring!) 
End of explanation exptime_range = (300, 300) airmass_range = (1.25, 1.25) moonphase_range = (0.0, 1.0) moonangle_range = (0, 150) moonzenith_range = (0, 60) Explanation: Define the range of allowable observational conditions End of explanation targ_dens = {} targ_dens['frac_std'] = 0.02 targ_dens['frac_sky'] = 0.08 targ_dens['area'] = 14000.0 targ_dens['area_bgs'] = 14000 targ_dens['nobs_bgs_bright'] = 762 targ_dens['nobs_bgs_faint'] = 475 targ_dens['ntarget_bgs_bright'] = 818 targ_dens['ntarget_bgs_faint'] = 618 targ_dens['success_bgs_bright'] = 0.97 targ_dens['success_bgs_faint'] = 0.92 targ_dens['nobs_mws'] = 700 targ_dens['ntarget_mws'] = 736 targ_dens['success_mws'] = 0.99 Explanation: Define the survey and targeting values for density, sky coverage, etc Any parameters you wish to set to the default can simply be commented out below, the code only replaces the keys that are defined End of explanation def check_env(): for env in ('DESIMODEL', 'DESI_ROOT', 'DESI_SPECTRO_SIM', 'DESI_SPECTRO_DATA', 'DESI_SPECTRO_REDUX', 'SPECPROD', 'PIXPROD','DESI_BASIS_TEMPLATES'): if env in os.environ: print('{} environment set to {}'.format(env, os.getenv(env))) else: print('Required environment variable {} not set!'.format(env)) check_env() %set_env SPECPROD=bgs-specsim-paper-kr %set_env PIXPROD=bgs-specsim-paper-kr rawdata_dir = desisim.io.simdir() %set_env DESI_SPECTRO_DATA=$rawdata_dir %set_env DESI_BASIS_TEMPLATES=/project/projectdirs/desi/spectro/templates/basis_templates/trunk print('Simulated raw data will be written to {}'.format(desisim.io.simdir())) print('Pipeline will read raw data from {}'.format(desispec.io.rawdata_root())) print(' (without knowing that it was simulated)') print('Pipeline will write processed data to {}'.format(desispec.io.specprod_root())) Explanation: Check our environment variables End of explanation rand = np.random.RandomState(seed) Explanation: Generate the random conditions: given the conditions specified above Set the random state with the seed given End of explanation expids = np.arange(nexp).astype(int) exptime = rand.uniform(exptime_range[0], exptime_range[1], nexp) airmass = rand.uniform(airmass_range[0], airmass_range[1], nexp) moonphase = rand.uniform(moonphase_range[0], moonphase_range[1], nexp) moonangle = rand.uniform(moonangle_range[0], moonangle_range[1], nexp) moonzenith = rand.uniform(moonzenith_range[0], moonzenith_range[1], nexp) Explanation: Randomly select observing consitions for each exposure End of explanation metafile = os.path.join( desisim.io.simdir(), 'mysim.fits') metacols = [ ('BRICKNAME', 'S20'), ('SEED', 'S20'), ('EXPTIME', 'f4'), ('AIRMASS', 'f4'), ('MOONPHASE', 'f4'), ('MOONANGLE', 'f4'), ('MOONZENITH', 'f4')] meta = Table(np.zeros(nexp, dtype=metacols)) meta['EXPTIME'].unit = 's' meta['MOONANGLE'].unit = 'deg' meta['MOONZENITH'].unit = 'deg' #meta['BRICKNAME'] = ['{}-{:03d}'.format(args.brickname, ii) for ii in range(args.nbrick)] meta['EXPTIME'] = exptime meta['AIRMASS'] = airmass meta['MOONPHASE'] = moonphase meta['MOONANGLE'] = moonangle meta['MOONZENITH'] = moonzenith log.info('Writing {}'.format(metafile)) Explanation: Build a metadata table with the top-level simulation inputs. 
End of explanation targetyaml = os.path.join(os.environ['DESIMODEL'],'data','targets','targets.yaml') tgt = yaml.load(open(targetyaml)) for key, val in targ_dens.items(): tgt[key] = val Explanation: Open yaml to define the targeting parameter values Then for those defined above in targ_dens, change the default value to what we specified End of explanation for ii,expid in enumerate(expids): fibermap, truth = new_exposure(flavor, nspec=nspec, night=night, expid=int(expid), tileid=None,\ airmass=airmass[ii], exptime=exptime[ii], seed=seed,\ target_densities=tgt) Explanation: Generating noiseless simspec and fibermap spectral files The first step is to generate the fibermap and simspec files needed by quickgen. The fibermap table contains (simulated) information about the position of each target in the DESI focal plane, while the simspec table holds the "truth" spectra and the intrinsic properties of each object (redshift, noiseless photometry, [OII] flux, etc.). In detail, the simspec and fibermap data models are described at * http://desidatamodel.readthedocs.io/en/latest/DESI_SPECTRO_SIM/PIXPROD/NIGHT/simspec-EXPID.html * http://desidatamodel.readthedocs.io/en/latest/DESI_SPECTRO_DATA/NIGHT/fibermap-EXPID.html End of explanation for ii,expid in enumerate(expids): fiberfile = desispec.io.findfile('fibermap', night=night, expid=expid) simspecfile = desisim.io.findfile('simspec', night=night, expid=expid) args = quickgen.parse([ '--simspec', simspecfile, '--fibermap', fiberfile, '--nspec', str(nspec), '--seed', str(seed), '--moon-phase', str(moonphase[ii]), '--moon-angle', str(moonangle[ii]), '--moon-zenith', str(moonzenith[ii]) ]) quickgen.main(args) Explanation: Simulate spectra using quickgen To get around the fact that we aren't using the command line, we use the arg parser and pass the arguments to the main function of quickgen directly. more information at: http://desidatamodel.readthedocs.io/en/latest/DESI_SPECTRO_REDUX/PRODNAME/exposures/NIGHT/EXPID/index.html Quickgen additional commands for 'quickbrick mode:' '--objtype', 'BGS', '--brickname', 'whatever', '--zrange-bgs', (0.01, 0.4), '--rmagrange-bgs', (15.0,19.5) '--exptime', None '--airmass', 1.5 End of explanation nside = 64 args = group_spectra.parse(['--hpxnside', '{}'.format(nside)]) group_spectra.main(args) Explanation: Regroup the spectra working with cframe files is pretty tedious, especially across three cameras, 10 spectrographs, and more than 35 million targets! Therefore, let's combine and reorganize the individual cframe files into spectra files grouped on the sky. Spectra are organized into healpix pixels (here chosen to have nside=64). If you're interested, you can read more about the healpix directory structure here: https://github.com/desihub/desispec/blob/master/doc/nb/Intro_to_DESI_spectra.ipynb Regrouping is especially important for real observations with overlapping tiles where the same object could be reobserved on different exposures separated by short or large amounts of time. End of explanation
5,965
Given the following text description, write Python code to implement the functionality described below step by step Description: RedCap Status Dashboard - One Line Per Form Creating a dashboard of one form per line Parametrization Step1: Import Libraries Set up libraries to display each form and navigate paths to find appropriate files Step2: Display Form Function applies each of the filters in filter_inventories, and displays the function with the appropriate style Step3: Arm + Form Parser Parses through each arm and form for each site, and then displays the results to the screen Step4: Main Function Passes in the specific sites, arms, and forms to look at, and then runs everything accordingly Step5: Run Main
Python Code: site: str arm: str form: str Explanation: RedCap Status Dashboard - One Line Per Form Creating a dashboard of one form per line Parametrization End of explanation from IPython.display import display, Markdown, Latex import pandas as pd from pathlib import Path import sys sys.path.append('/sibis-software/ncanda-data-integration/scripts/qc/') import filter_inventory Explanation: Import Libraries Set up libraries to display each form and navigate paths to find appropriate files End of explanation def show_form(form_data, site, arm, form): # List of filter functions to apply -> check scripts/filter_inventories.py FILTER_LIST = [ 'empty_marked_present', 'content_marked_missing', 'less_content_than_max', 'empty_unmarked', 'content_unmarked', 'content_not_complete', 'missing_not_complete', 'excluded_with_content', ] # Apply filters to dataframe for filter in FILTER_LIST: form_data[filter] = eval('filter_inventory.' + filter + '(form_data)') # Go from Booleans to Icons form_data = form_data.applymap(lambda x: '✅' if x is False else x).applymap(lambda x: '❌' if x is True else x) # Transform - transform: rotate(-90deg); table_styles = [{ 'selector': 'td', 'props': [('background-color', '#808080'), ('color', '#FFFFFF')] }, { 'selector': 'th', 'props': [('background-color', '#000000'), ('color', '#FF0000')] }, { 'selector': 'th.col_heading', 'props': [('transform', 'rotate(-90deg)')] }] # Display form data with markdown display(Markdown('## ' + site.capitalize() + ' Dashboard')) display(Markdown(form + ' form dashboard for arm ' + arm)) display(form_data.style.set_table_styles(table_styles).set_table_styles( [dict(selector="th",props=[('max-width', '50px')]), dict(selector="th.col_heading", props=[("writing-mode", "vertical-lr")])])) Explanation: Display Form Function applies each of the filters in filter_inventories, and displays the function with the appropriate style End of explanation def iter_sites(sites, arms, forms): # Goes through each site -> arm -> form for site in sites: p = Path('/fs/ncanda-share/log/make_all_inventories/inventory_by_site/' + str(site)) for arm_year in p.iterdir(): if (arm_year.is_dir() and arm_year.stem in arms): for form in arm_year.iterdir(): if (form.is_dir() == False and form.stem in forms): # Reads as Pandas dataframe and calls function to # show form df = pd.read_csv(form) if ('missing' in df.columns): show_form(df, site, arm_year.stem, form.stem) Explanation: Arm + Form Parser Parses through each arm and form for each site, and then displays the results to the screen End of explanation def main(): sites = [site] arms = [arm] forms = [form] iter_sites(sites, arms, forms) Explanation: Main Function Passes in the specific sites, arms, and forms to look at, and then runs everything accordingly End of explanation main() Explanation: Run Main End of explanation
5,966
Given the following text description, write Python code to implement the functionality described below step by step Description: SMI - Similarity of Matrices Index SMI is a measure of the similarity between the dominant subspaces of two matrices. It comes in two flavours (projections) Step1: Next, load the data that we are going to analyse using hoggorm. After the data has been loaded into the pandas data frame, we'll display it in the notebook. Step2: Orthogonal Projections The default comparison between two matrices with SMI is using Orthogonal Projections, i.e. ordinary least squares regression is used to relate the dominant subspaces in the two matrices. In contrast to PLSR, SMI is not performing av prediction of sensory properties from fluorescence measurements, but rather treats the two sets of measurements symmetrically, focusing on the major variation in each of them. More details regarding the use of the SMI are found in the documentation. Step3: A hypothesis can be made regarding the similarity of two subspaces where the null hypothesis is that they are equal and the alternative is that they are not. Permutation testing yields the following P-values (probabilities that the observed difference could be larger given the null hypothesis is true). Step4: Finally we visualize the SMI values and their corresponding P-values. Step5: The significance symbols in the diamond plot above indicate if a chosen subspace from one matrix can be found inside the subspace from the other matrix ($\supset$, $\subset$, =), or if there is signficant difference (P-values <0.001*** <0.01 ** <0.05 * <0.1 . >=0.1). From the P-values and plot we can observe that the there is a significant difference between the sensory data and the fluorescence data in the first of the dominant subspaces of the matrices. Looking only at the diagonal, we see that 6 components are needed before we loose the significance completely. Looking at the one-dimensional subspaces, we can observe that four sensory components are needed before there is no significant difference to the first fluorescence component. This can be interpreted as some fundamental difference in the information spanned by flurescence measurements and sensory perceptions that is only masked if large proportions of the subspaces are included. Procrustes Rotations The similarities using PR <= OP, and in this simple case OP$^2$ = PR. Otherwise the pattern stays the same. Step6: The number of permutations can be controlled for quick (100) or accurate (>10000) computations of significance.
Python Code: import hoggorm as ho import hoggormplot as hop import pandas as pd import numpy as np Explanation: SMI - Similarity of Matrices Index SMI is a measure of the similarity between the dominant subspaces of two matrices. It comes in two flavours (projections): - OP - Orthogonal Projections - PR - Procrustes Rotations. The former (default) compares subspaces using ordinary least squares and can be formulated as the explained variance when predicting one matrix subspace using the other matrix subspace. PR is a restriction where only rotation and scaling is allowed in the similarity calculations. Subspaces are by default computed using Principal Component Analysis (PCA). When the number of components extracted from one of the matrices is smaller than the other, the explained variance is calculated predicting the smaller subspace by using the larger subspace. Example: Sensory and Fluorescence data Import packages and prepare data First import hoggorm for analysis of the data and hoggormPlot for plotting of the analysis results. We'll also import pandas such that we can read the data into a data frame. numpy is needed for checking dimensions of the data. End of explanation # Load fluorescence data X1_df = pd.read_csv('cheese_fluorescence.txt', index_col=0, sep='\t') X1_df # Load sensory data X2_df = pd.read_csv('cheese_sensory.txt', index_col=0, sep='\t') X2_df Explanation: Next, load the data that we are going to analyse using hoggorm. After the data has been loaded into the pandas data frame, we'll display it in the notebook. End of explanation # Get the values from the data frame X1 = X1_df.values X2 = X2_df.values smiOP = ho.SMI(X1, X2, ncomp1=10, ncomp2=10) print(np.round(smiOP.smi, 2)) Explanation: Orthogonal Projections The default comparison between two matrices with SMI is using Orthogonal Projections, i.e. ordinary least squares regression is used to relate the dominant subspaces in the two matrices. In contrast to PLSR, SMI is not performing av prediction of sensory properties from fluorescence measurements, but rather treats the two sets of measurements symmetrically, focusing on the major variation in each of them. More details regarding the use of the SMI are found in the documentation. End of explanation print(np.round(smiOP.significance(), 2)) Explanation: A hypothesis can be made regarding the similarity of two subspaces where the null hypothesis is that they are equal and the alternative is that they are not. Permutation testing yields the following P-values (probabilities that the observed difference could be larger given the null hypothesis is true). End of explanation # Plot similarities hop.plotSMI(smiOP, [10, 10], X1name='fluorescence', X2name='sensory') Explanation: Finally we visualize the SMI values and their corresponding P-values. End of explanation smiPR = ho.SMI(X1, X2, ncomp1=10, ncomp2=10, projection="Procrustes") print(np.round(smiPR.smi, 2)) Explanation: The significance symbols in the diamond plot above indicate if a chosen subspace from one matrix can be found inside the subspace from the other matrix ($\supset$, $\subset$, =), or if there is signficant difference (P-values <0.001*** <0.01 ** <0.05 * <0.1 . >=0.1). From the P-values and plot we can observe that the there is a significant difference between the sensory data and the fluorescence data in the first of the dominant subspaces of the matrices. Looking only at the diagonal, we see that 6 components are needed before we loose the significance completely. 
Looking at the one-dimensional subspaces, we can observe that four sensory components are needed before there is no significant difference to the first fluorescence component. This can be interpreted as some fundamental difference in the information spanned by fluorescence measurements and sensory perceptions that is only masked if large proportions of the subspaces are included. Procrustes Rotations The similarities computed with PR are <= those from OP, and in this simple case OP$^2$ = PR. Otherwise the pattern stays the same. End of explanation print(np.round(smiPR.significance(B = 100),2)) hop.plotSMI(smiPR, X1name='fluorescence', X2name='sensory') Explanation: The number of permutations can be controlled for quick (100) or accurate (>10000) computations of significance. End of explanation
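As a complement, here is a minimal, self-contained sketch of the OP-type similarity idea in plain numpy. It is an illustration only, under the assumption that orthonormal PCA scores are compared by projecting the smaller score set onto the larger one; it is not the hoggorm implementation, and the data below are random and arbitrary:
import numpy as np

def smi_op_sketch(X1, X2, k1=2, k2=2):
    # Orthonormal PCA scores: leading left singular vectors of the centered matrix
    def scores(X, k):
        Xc = X - X.mean(axis=0)
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
        return U[:, :k]
    T1, T2 = scores(X1, k1), scores(X2, k2)
    # Predict the smaller subspace from the larger one (ordinary least squares);
    # with orthonormal scores this reduces to the squared norm of an orthogonal projection
    small, large = (T1, T2) if k1 <= k2 else (T2, T1)
    P = large @ large.T @ small
    return np.sum(P**2) / np.sum(small**2)

rng = np.random.default_rng(0)
A = rng.normal(size=(14, 10))
B = A @ rng.normal(size=(10, 6)) + 0.1 * rng.normal(size=(14, 6))
print(round(smi_op_sketch(A, B, 2, 2), 3))  # close to 1 because B is built from A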
5,967
Given the following text description, write Python code to implement the functionality described below step by step Description: FD_1D_DX4_DT4_LW 1-D acoustic Finite-Difference modelling GNU General Public License v3.0 Author Step1: Input Parameter Step2: Preparation Step3: Create space and time vector Step4: Source signal - Ricker-wavelet Step5: Time stepping Step6: Save seismograms
Python Code: %matplotlib inline import numpy as np import matplotlib.pyplot as plt Explanation: FD_1D_DX4_DT4_LW 1-D acoustic Finite-Difference modelling GNU General Public License v3.0 Author: Florian Wittkamp Finite-Difference acoustic seismic wave simulation Discretization of the first-order acoustic wave equation Temporal second-order accuracy $O(\Delta T^4)$ Spatial fourth-order accuracy $O(\Delta X^4)$ Theory to Lax-Wendroff method is given in: Dablain, M. A. (1986). The application of high-order differencing to the scalar wave equation. Geophysics, 51(1), 54-66. Initialisation End of explanation # Discretization c1=20 # Number of grid points per dominant wavelength c2=0.5 # CFL-Number nx=2000 # Number of grid points T=10 # Total propagation time # Source Signal f0= 10 # Center frequency Ricker-wavelet q0= 1 # Maximum amplitude Ricker-Wavelet xscr = 100 # Source position (in grid points) # Receiver xrec1=400 # Position Reciever 1 (in grid points) xrec2=800 # Position Reciever 2 (in grid points) xrec3=1800 # Position Reciever 3 (in grid points) # Velocity and density modell_v = np.hstack((1000*np.ones((int(nx/2))),1500*np.ones((int(nx/2))))) rho=np.hstack((1*np.ones((int(nx/2))),1.5*np.ones((int(nx/2))))) Explanation: Input Parameter End of explanation # Init wavefields vx=np.zeros(nx) p=np.zeros(nx) # Calculate first Lame-Paramter l=rho * modell_v * modell_v cmin=min(modell_v.flatten()) # Lowest P-wave velocity cmax=max(modell_v.flatten()) # Highest P-wave velocity fmax=2*f0 # Maximum frequency dx=cmin/(fmax*c1) # Spatial discretization (in m) dt=dx/(cmax)*c2 # Temporal discretization (in s) lampda_min=cmin/fmax # Smallest wavelength # Output model parameter: print("Model size: x:",dx*nx,"in m") print("Temporal discretization: ",dt," s") print("Spatial discretization: ",dx," m") print("Number of gridpoints per minimum wavelength: ",lampda_min/dx) Explanation: Preparation End of explanation x=np.arange(0,dx*nx,dx) # Space vector t=np.arange(0,T,dt) # Time vector nt=np.size(t) # Number of time steps # Plotting model fig, (ax1, ax2) = plt.subplots(1, 2) fig.subplots_adjust(wspace=0.4,right=1.6) ax1.plot(x,modell_v) ax1.set_ylabel('VP in m/s') ax1.set_xlabel('Depth in m') ax1.set_title('P-wave velocity') ax2.plot(x,rho) ax2.set_ylabel('Density in g/cm^3') ax2.set_xlabel('Depth in m') ax2.set_title('Density'); Explanation: Create space and time vector End of explanation tau=np.pi*f0*(t-1.5/f0) q=q0*(1.0-2.0*tau**2.0)*np.exp(-tau**2) # Plotting source signal plt.figure(3) plt.plot(t,q) plt.title('Source signal Ricker-Wavelet') plt.ylabel('Amplitude') plt.xlabel('Time in s') plt.draw() Explanation: Source signal - Ricker-wavelet End of explanation # Init Seismograms Seismogramm=np.zeros((3,nt)); # Three seismograms # Calculation of some coefficients i_dx=1.0/(dx) i_dx3=1.0/(dx**3) c9=dt**3/24.0 print("Starting time stepping...") ## Time stepping for n in range(2,nt): # Inject source wavelet p[xscr]=p[xscr]+q[n] # Update velocity for kx in range(5,nx-4): # Calculating spatial derivative p_x=i_dx*9.0/8.0*(p[kx+1]-p[kx])-i_dx*1.0/24.0*(p[kx+2]-p[kx-1]) p_xxx=i_dx3*(-3.0)*(p[kx+1]-p[kx])+i_dx3*(1)*(p[kx+2]-p[kx-1]) # Update velocity vx[kx]=vx[kx]-dt/rho[kx]*p_x-l[kx]*c9*1/(rho[kx]**2.0)*(p_xxx) # Update pressure for kx in range(5,nx-4): # Calculating spatial derivative vx_x= i_dx*9.0/8.0*(vx[kx]-vx[kx-1])-i_dx*1.0/24.0*(vx[kx+1]-vx[kx-2]) vx_xxx=i_dx3*(-3.0)*(vx[kx]-vx[kx-1])+i_dx3*(1)*(vx[kx+1]-vx[kx-2]) # Update pressure p[kx]=p[kx]-l[kx]*dt*(vx_x)-l[kx]**2*c9*1/(rho[kx])*(vx_xxx) # Save 
seismograms Seismogramm[0,n]=p[xrec1] Seismogramm[1,n]=p[xrec2] Seismogramm[2,n]=p[xrec3] print("Finished time stepping!") Explanation: Time stepping End of explanation ## Save seismograms np.save("Seismograms/FD_1D_DX4_DT4_LW",Seismogramm) ## Plot seismograms fig, (ax1, ax2, ax3) = plt.subplots(3, 1) fig.subplots_adjust(hspace=0.4,right=1.6, top = 2 ) ax1.plot(t,Seismogramm[0,:]) ax1.set_title('Seismogram 1') ax1.set_ylabel('Amplitude') ax1.set_xlabel('Time in s') ax1.set_xlim(0, T) ax2.plot(t,Seismogramm[1,:]) ax2.set_title('Seismogram 2') ax2.set_ylabel('Amplitude') ax2.set_xlabel('Time in s') ax2.set_xlim(0, T) ax3.plot(t,Seismogramm[2,:]) ax3.set_title('Seismogram 3') ax3.set_ylabel('Amplitude') ax3.set_xlabel('Time in s') ax3.set_xlim(0, T); Explanation: Save seismograms End of explanation
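Before trusting the update loops above, it can help to sanity-check the fourth-order staggered-grid stencil on a function with a known derivative. The short, standalone check below (the test function and dx are arbitrary choices for illustration) applies the same 9/8 and 1/24 coefficients to sin(2*pi*x) and compares against the analytic derivative at the half grid points:
import numpy as np

dx = 0.01
x = np.arange(-1.0, 1.0, dx)
f = np.sin(2.0 * np.pi * x)
exact = 2.0 * np.pi * np.cos(2.0 * np.pi * (x + dx / 2.0))  # derivative at staggered points

num = np.zeros_like(f)
for i in range(2, len(x) - 2):
    # Same O(dx^4) staggered stencil as used for p_x and vx_x above
    num[i] = (9.0 / 8.0 * (f[i + 1] - f[i]) - 1.0 / 24.0 * (f[i + 2] - f[i - 1])) / dx

err = np.max(np.abs(num[2:-2] - exact[2:-2]))
print("max abs error of the O(dx^4) stencil:", err)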
5,968
Given the following text description, write Python code to implement the functionality described below step by step Description: The author-topic model Step1: In the following sections we will load the data, pre-process it, train the model, and explore the results using some of the implementation's functionality. Feel free to skip the loading and pre-processing for now, if you are familiar with the process. Loading the data In the cell below, we crawl the folders and files in the dataset, and read the files into memory. Step2: Construct a mapping from author names to document IDs. Step3: Pre-processing text The text will be pre-processed using the following steps Step4: In the code below, Spacy takes care of tokenization, removing non-alphabetic characters, removal of stopwords, lemmatization and named entity recognition. Note that we only keep named entities that consist of more than one word, as single word named entities are already there. Step5: Below, we use a Gensim model to add bigrams. Note that this achieves the same goal as named entity recognition, that is, finding adjacent words that have some particular significance. Step6: Now we are ready to construct a dictionary, as our vocabulary is finalized. We then remove common words (occurring $> 50\%$ of the time), and rare words (occur $< 20$ times in total). Step7: We produce the vectorized representation of the documents, to supply the author-topic model with, by computing the bag-of-words. Step8: Let's inspect the dimensionality of our data. Step9: Train and use model We train the author-topic model on the data prepared in the previous sections. The interface to the author-topic model is very similar to that of LDA in Gensim. In addition to a corpus, ID to word mapping (id2word) and number of topics (num_topics), the author-topic model requires either an author to document ID mapping (author2doc), or the reverse (doc2author). Below, we have also (this can be skipped for now) Step10: If you believe your model hasn't converged, you can continue training using model.update(). If you have additional documents and/or authors call model.update(corpus, author2doc). Before we explore the model, let's try to improve upon it. To do this, we will train several models with different random initializations, by giving different seeds for the random number generator (random_state). We evaluate the topic coherence of the model using the top_topics method, and pick the model with the highest topic coherence. Step11: Choose the model with the highest topic coherence. Step12: We save the model, to avoid having to train it again, and also show how to load it again. Step13: Explore author-topic representation Now that we have trained a model, we can start exploring the authors and the topics. First, let's simply print the most important words in the topics. Below we have printed topic 0. As we can see, each topic is associated with a set of words, and each word has a probability of being expressed under that topic. Step14: Below, we have given each topic a label based on what each topic seems to be about intuitively. Step15: Rather than just calling model.show_topics(num_topics=10), we format the output a bit so it is easier to get an overview. Step16: These topics are by no means perfect. They have problems such as chained topics, intruded words, random topics, and unbalanced topics (see Mimno and co-authors 2011). They will do for the purposes of this tutorial, however. 
Below, we use the model[name] syntax to retrieve the topic distribution for an author. Each topic has a probability of being expressed given the particular author, but only the ones above a certain threshold are shown. Step17: Let's print the top topics of some authors. First, we make a function to help us do this more easily. Step18: Below, we print some high profile researchers and inspect them. Three of these, Yann LeCun, Geoffrey E. Hinton and Christof Koch, are spot on. Terrence J. Sejnowski's results are surprising, however. He is a neuroscientist, so we would expect him to get the "neuroscience" label. This may indicate that Sejnowski works with the neuroscience aspects of visual perception, or perhaps that we have labeled the topic incorrectly, or perhaps that this topic simply is not very informative. Step19: Simple model evaluation methods We can compute the per-word bound, which is a measure of the model's predictive performance (you could also say that it is the reconstruction error). To do that, we need the doc2author dictionary, which we can build automatically. Step20: Now let's evaluate the per-word bound. Step21: We can evaluate the quality of the topics by computing the topic coherence, as in the LDA class. Use this to e.g. find out which of the topics are poor quality, or as a metric for model selection. Step22: Plotting the authors Now we're going to produce the kind of pacific archipelago looking plot below. The goal of this plot is to give you a way to explore the author-topic representation in an intuitive manner. We take all the author-topic distributions (stored in model.state.gamma) and embed them in a 2D space. To do this, we reduce the dimensionality of this data using t-SNE. t-SNE is a method that attempts to reduce the dimensionality of a dataset, while maintaining the distances between the points. That means that if two authors are close together in the plot below, then their topic distributions are similar. In the cell below, we transform the author-topic representation into the t-SNE space. You can increase the smallest_author value if you do not want to view all the authors with few documents. Step23: We are now ready to make the plot. Note that if you run this notebook yourself, you will see a different graph. The random initialization of the model will be different, and the result will thus be different to some degree. You may find an entirely different representation of the data, or it may show the same interpretation slightly differently. If you can't see the plot, you are probably viewing this tutorial in a Jupyter Notebook. View it in an nbviewer instead at http Step24: The circles in the plot above are individual authors, and their sizes represent the number of documents attributed to the corresponding author. Hovering your mouse over the circles will tell you the name of the authors and their sizes. Large clusters of authors tend to reflect some overlap in interest. We see that the model tends to put duplicate authors close together. For example, Terrence J. Sejnowki and T. J. Sejnowski are the same person, and their vectors end up in the same place (see about $(-10, -10)$ in the plot). At about $(-15, -10)$ we have a cluster of neuroscientists like Christof Koch and James M. Bower. As discussed earlier, the "object recognition" topic was assigned to Sejnowski. If we get the topics of the other authors in Sejnoski's neighborhood, like Peter Dayan, we also get this same topic. 
Furthermore, we see that this cluster is close to the "neuroscience" cluster discussed above, which is further indication that this topic is about visual perception in the brain. Other clusters include a reinforcement learning cluster at about $(-5, 8)$, and a Bayesian modelling cluster at about $(8, -12)$. Similarity queries In this section, we are going to set up a system that takes the name of an author and yields the authors that are most similar. This functionality can be used as a component in an information retrieval (i.e. a search engine of some kind), or in an author prediction system, i.e. a system that takes an unlabelled document and predicts the author(s) that wrote it. We simply need to search for the closest vector in the author-topic space. In this sense, the approach is similar to the t-SNE plot above. Below we illustrate a similarity query using a built-in similarity framework in Gensim. Step25: However, this framework uses the cosine distance, but we want to use the Hellinger distance. The Hellinger distance is a natural way of measuring the distance (i.e. dis-similarity) between two probability distributions. Its discrete version is defined as $$ H(p, q) = \frac{1}{\sqrt{2}} \sqrt{\sum_{i=1}^K (\sqrt{p_i} - \sqrt{q_i})^2}, $$ where $p$ and $q$ are both topic distributions for two different authors. We define the similarity as $$ S(p, q) = \frac{1}{1 + H(p, q)}. $$ In the cell below, we prepare everything we need to perform similarity queries based on the Hellinger distance. Step26: Now we can find the most similar authors to some particular author. We use the Pandas library to print the results in a nice looking tables. Step27: As before, we can specify the minimum author size. Step28: Serialized corpora The AuthorTopicModel class accepts serialized corpora, that is, corpora that are stored on the hard-drive rather than in memory. This is usually done when the corpus is too big to fit in memory. There are, however, some caveats to this functionality, which we will discuss here. As these caveats make this functionality less than ideal, it may be improved in the future. It is not necessary to read this section if you don't intend to use serialized corpora. In the following, an explanation, followed by an example and a summarization will be given. If the corpus is serialized, the user must specify serialized=True. Any input corpus can then be any type of iterable or generator. The model will then take the input corpus and serialize it in the MmCorpus format, which is supported in Gensim. The user must specify the path where the model should serialize all input documents, for example serialization_path='/tmp/model_serializer.mm'. To avoid accidentally overwriting some important data, the model will raise an error if there already exists a file at serialization_path; in this case, either choose another path, or delete the old file. When you want to train on new data, and call model.update(corpus, author2doc), all the old data and the new data have to be re-serialized. This can of course be quite computationally demanding, so it is recommended that you do this only when necessary; that is, wait until you have as much new data as possible to update, rather than updating the model for every new document.
Python Code: !wget -O - 'http://www.cs.nyu.edu/~roweis/data/nips12raw_str602.tgz' > /tmp/nips12raw_str602.tgz import tarfile filename = '/tmp/nips12raw_str602.tgz' tar = tarfile.open(filename, 'r:gz') for item in tar: tar.extract(item, path='/tmp') Explanation: The author-topic model: LDA with metadata In this tutorial, you will learn how to use the author-topic model in Gensim. We will apply it to a corpus consisting of scientific papers, to get insight about the authors of the papers. The author-topic model is an extension of Latent Dirichlet Allocation (LDA), that allows us to learn topic representations of authors in a corpus. The model can be applied to any kinds of labels on documents, such as tags on posts on the web. The model can be used as a novel way of data exploration, as features in machine learning pipelines, for author (or tag) prediction, or to simply leverage your topic model with existing metadata. To learn about the theoretical side of the author-topic model, see Rosen-Zvi and co-authors 2004, for example. A report on the algorithm used in the Gensim implementation will be available soon. Naturally, familiarity with topic modelling, LDA and Gensim is assumed in this tutorial. If you are not familiar with either LDA, or its Gensim implementation, I would recommend starting there. Consider some of these resources: * Gentle introduction to the LDA model: http://blog.echen.me/2011/08/22/introduction-to-latent-dirichlet-allocation/ * Gensim's LDA API documentation: https://radimrehurek.com/gensim/models/ldamodel.html * Topic modelling in Gensim: http://radimrehurek.com/topic_modeling_tutorial/2%20-%20Topic%20Modeling.html * Pre-processing and training LDA: https://github.com/RaRe-Technologies/gensim/blob/develop/docs/notebooks/lda_training_tips.ipynb NOTE: To run this tutorial on your own, install Jupyter, Gensim, SpaCy, Scikit-Learn, Bokeh and Pandas, e.g. using pip: pip install jupyter gensim spacy sklearn bokeh pandas Note that you need to download some data for SpaCy using python -m spacy.en.download. Download the notebook at https://github.com/RaRe-Technologies/gensim/tree/develop/docs/notebooks/atmodel_tutorial.ipynb. In this tutorial, we will learn how to prepare data for the model, how to train it, and how to explore the resulting representation in different ways. We will inspect the topic representation of some well known authors like Geoffrey Hinton and Yann LeCun, and compare authors by plotting them in reduced dimensionality and performing similarity queries. Analyzing scientific papers The data we will be using consists of scientific papers about machine learning, from the Neural Information Processing Systems conference (NIPS). It is the same dataset used in the Pre-processing and training LDA tutorial, mentioned earlier. We will be performing qualitative analysis of the model, and at times this will require an understanding of the subject matter of the data. If you try running this tutorial on your own, consider applying it on a dataset with subject matter that you are familiar with. For example, try one of the StackExchange datadump datasets. You can download the data from Sam Roweis' website (http://www.cs.nyu.edu/~roweis/data.html). Or just run the cell below, and it will be downloaded and extracted into your `tmp. End of explanation import os, re # Folder containing all NIPS papers. data_dir = '/tmp/nipstxt/' # Set this path to the data on your machine. # Folders containin individual NIPS papers. 
yrs = ['00', '01', '02', '03', '04', '05', '06', '07', '08', '09', '10', '11', '12'] dirs = ['nips' + yr for yr in yrs] # Get all document texts and their corresponding IDs. docs = [] doc_ids = [] for yr_dir in dirs: files = os.listdir(data_dir + yr_dir) # List of filenames. for filen in files: # Get document ID. (idx1, idx2) = re.search('[0-9]+', filen).span() # Matches the indexes of the start end end of the ID. doc_ids.append(yr_dir[4:] + '_' + str(int(filen[idx1:idx2]))) # Read document text. # Note: ignoring characters that cause encoding errors. with open(data_dir + yr_dir + '/' + filen, errors='ignore', encoding='utf-8') as fid: txt = fid.read() # Replace any whitespace (newline, tabs, etc.) by a single space. txt = re.sub('\s', ' ', txt) docs.append(txt) Explanation: In the following sections we will load the data, pre-process it, train the model, and explore the results using some of the implementation's functionality. Feel free to skip the loading and pre-processing for now, if you are familiar with the process. Loading the data In the cell below, we crawl the folders and files in the dataset, and read the files into memory. End of explanation filenames = [data_dir + 'idx/a' + yr + '.txt' for yr in yrs] # Using the years defined in previous cell. # Get all author names and their corresponding document IDs. author2doc = dict() i = 0 for yr in yrs: # The files "a00.txt" and so on contain the author-document mappings. filename = data_dir + 'idx/a' + yr + '.txt' for line in open(filename, errors='ignore', encoding='utf-8'): # Each line corresponds to one author. contents = re.split(',', line) author_name = (contents[1] + contents[0]).strip() # Remove any whitespace to reduce redundant author names. author_name = re.sub('\s', '', author_name) # Get document IDs for author. ids = [c.strip() for c in contents[2:]] if not author2doc.get(author_name): # This is a new author. author2doc[author_name] = [] i += 1 # Add document IDs to author. author2doc[author_name].extend([yr + '_' + id for id in ids]) # Use an integer ID in author2doc, instead of the IDs provided in the NIPS dataset. # Mapping from ID of document in NIPS datast, to an integer ID. doc_id_dict = dict(zip(doc_ids, range(len(doc_ids)))) # Replace NIPS IDs by integer IDs. for a, a_doc_ids in author2doc.items(): for i, doc_id in enumerate(a_doc_ids): author2doc[a][i] = doc_id_dict[doc_id] Explanation: Construct a mapping from author names to document IDs. End of explanation import spacy nlp = spacy.load('en') Explanation: Pre-processing text The text will be pre-processed using the following steps: * Tokenize text. * Replace all whitespace by single spaces. * Remove all punctuation and numbers. * Remove stopwords. * Lemmatize words. * Add multi-word named entities. * Add frequent bigrams. * Remove frequent and rare words. A lot of the heavy lifting will be done by the great package, Spacy. Spacy markets itself as "industrial-strength natural language processing", is fast, enables multiprocessing, and is easy to use. First, let's import it and load the NLP pipline in english. End of explanation %%time processed_docs = [] for doc in nlp.pipe(docs, n_threads=4, batch_size=100): # Process document using Spacy NLP pipeline. ents = doc.ents # Named entities. # Keep only words (no numbers, no punctuation). # Lemmatize tokens, remove punctuation and remove stopwords. doc = [token.lemma_ for token in doc if token.is_alpha and not token.is_stop] # Remove common words from a stopword list. 
#doc = [token for token in doc if token not in STOPWORDS] # Add named entities, but only if they are a compound of more than word. doc.extend([str(entity) for entity in ents if len(entity) > 1]) processed_docs.append(doc) docs = processed_docs del processed_docs Explanation: In the code below, Spacy takes care of tokenization, removing non-alphabetic characters, removal of stopwords, lemmatization and named entity recognition. Note that we only keep named entities that consist of more than one word, as single word named entities are already there. End of explanation # Compute bigrams. from gensim.models import Phrases # Add bigrams and trigrams to docs (only ones that appear 20 times or more). bigram = Phrases(docs, min_count=20) for idx in range(len(docs)): for token in bigram[docs[idx]]: if '_' in token: # Token is a bigram, add to document. docs[idx].append(token) Explanation: Below, we use a Gensim model to add bigrams. Note that this achieves the same goal as named entity recognition, that is, finding adjacent words that have some particular significance. End of explanation # Create a dictionary representation of the documents, and filter out frequent and rare words. from gensim.corpora import Dictionary dictionary = Dictionary(docs) # Remove rare and common tokens. # Filter out words that occur too frequently or too rarely. max_freq = 0.5 min_wordcount = 20 dictionary.filter_extremes(no_below=min_wordcount, no_above=max_freq) _ = dictionary[0] # This sort of "initializes" dictionary.id2token. Explanation: Now we are ready to construct a dictionary, as our vocabulary is finalized. We then remove common words (occurring $> 50\%$ of the time), and rare words (occur $< 20$ times in total). End of explanation # Vectorize data. # Bag-of-words representation of the documents. corpus = [dictionary.doc2bow(doc) for doc in docs] Explanation: We produce the vectorized representation of the documents, to supply the author-topic model with, by computing the bag-of-words. End of explanation print('Number of authors: %d' % len(author2doc)) print('Number of unique tokens: %d' % len(dictionary)) print('Number of documents: %d' % len(corpus)) Explanation: Let's inspect the dimensionality of our data. End of explanation from gensim.models import AuthorTopicModel %time model = AuthorTopicModel(corpus=corpus, num_topics=10, id2word=dictionary.id2token, \ author2doc=author2doc, chunksize=2000, passes=1, eval_every=0, \ iterations=1, random_state=1) Explanation: Train and use model We train the author-topic model on the data prepared in the previous sections. The interface to the author-topic model is very similar to that of LDA in Gensim. In addition to a corpus, ID to word mapping (id2word) and number of topics (num_topics), the author-topic model requires either an author to document ID mapping (author2doc), or the reverse (doc2author). Below, we have also (this can be skipped for now): * Increased the number of passes over the dataset (to improve the convergence of the optimization problem). * Decreased the number of iterations over each document (related to the above). * Specified the mini-batch size (chunksize) (primarily to speed up training). * Turned off bound evaluation (eval_every) (as it takes a long time to compute). * Turned on automatic learning of the alpha and eta priors (to improve the convergence of the optimization problem). * Set the random state (random_state) of the random number generator (to make these experiments reproducible). We load the model, and train it. 
End of explanation %%time model_list = [] for i in range(5): model = AuthorTopicModel(corpus=corpus, num_topics=10, id2word=dictionary.id2token, \ author2doc=author2doc, chunksize=2000, passes=100, gamma_threshold=1e-10, \ eval_every=0, iterations=1, random_state=i) top_topics = model.top_topics(corpus) tc = sum([t[1] for t in top_topics]) model_list.append((model, tc)) Explanation: If you believe your model hasn't converged, you can continue training using model.update(). If you have additional documents and/or authors call model.update(corpus, author2doc). Before we explore the model, let's try to improve upon it. To do this, we will train several models with different random initializations, by giving different seeds for the random number generator (random_state). We evaluate the topic coherence of the model using the top_topics method, and pick the model with the highest topic coherence. End of explanation model, tc = max(model_list, key=lambda x: x[1]) print('Topic coherence: %.3e' %tc) Explanation: Choose the model with the highest topic coherence. End of explanation # Save model. model.save('/tmp/model.atmodel') # Load model. model = AuthorTopicModel.load('/tmp/model.atmodel') Explanation: We save the model, to avoid having to train it again, and also show how to load it again. End of explanation model.show_topic(0) Explanation: Explore author-topic representation Now that we have trained a model, we can start exploring the authors and the topics. First, let's simply print the most important words in the topics. Below we have printed topic 0. As we can see, each topic is associated with a set of words, and each word has a probability of being expressed under that topic. End of explanation topic_labels = ['Circuits', 'Neuroscience', 'Numerical optimization', 'Object recognition', \ 'Math/general', 'Robotics', 'Character recognition', \ 'Reinforcement learning', 'Speech recognition', 'Bayesian modelling'] Explanation: Below, we have given each topic a label based on what each topic seems to be about intuitively. End of explanation for topic in model.show_topics(num_topics=10): print('Label: ' + topic_labels[topic[0]]) words = '' for word, prob in model.show_topic(topic[0]): words += word + ' ' print('Words: ' + words) print() Explanation: Rather than just calling model.show_topics(num_topics=10), we format the output a bit so it is easier to get an overview. End of explanation model['YannLeCun'] Explanation: These topics are by no means perfect. They have problems such as chained topics, intruded words, random topics, and unbalanced topics (see Mimno and co-authors 2011). They will do for the purposes of this tutorial, however. Below, we use the model[name] syntax to retrieve the topic distribution for an author. Each topic has a probability of being expressed given the particular author, but only the ones above a certain threshold are shown. End of explanation from pprint import pprint def show_author(name): print('\n%s' % name) print('Docs:', model.author2doc[name]) print('Topics:') pprint([(topic_labels[topic[0]], topic[1]) for topic in model[name]]) Explanation: Let's print the top topics of some authors. First, we make a function to help us do this more easily. End of explanation show_author('YannLeCun') show_author('GeoffreyE.Hinton') show_author('TerrenceJ.Sejnowski') show_author('ChristofKoch') Explanation: Below, we print some high profile researchers and inspect them. Three of these, Yann LeCun, Geoffrey E. Hinton and Christof Koch, are spot on. Terrence J. 
Sejnowski's results are surprising, however. He is a neuroscientist, so we would expect him to get the "neuroscience" label. This may indicate that Sejnowski works with the neuroscience aspects of visual perception, or perhaps that we have labeled the topic incorrectly, or perhaps that this topic simply is not very informative. End of explanation from gensim.models import atmodel doc2author = atmodel.construct_doc2author(model.corpus, model.author2doc) Explanation: Simple model evaluation methods We can compute the per-word bound, which is a measure of the model's predictive performance (you could also say that it is the reconstruction error). To do that, we need the doc2author dictionary, which we can build automatically. End of explanation # Compute the per-word bound. # Number of words in corpus. corpus_words = sum(cnt for document in model.corpus for _, cnt in document) # Compute bound and divide by number of words. perwordbound = model.bound(model.corpus, author2doc=model.author2doc, \ doc2author=model.doc2author) / corpus_words print(perwordbound) Explanation: Now let's evaluate the per-word bound. End of explanation %time top_topics = model.top_topics(model.corpus) Explanation: We can evaluate the quality of the topics by computing the topic coherence, as in the LDA class. Use this to e.g. find out which of the topics are poor quality, or as a metric for model selection. End of explanation %%time from sklearn.manifold import TSNE tsne = TSNE(n_components=2, random_state=0) smallest_author = 0 # Ignore authors with documents less than this. authors = [model.author2id[a] for a in model.author2id.keys() if len(model.author2doc[a]) >= smallest_author] _ = tsne.fit_transform(model.state.gamma[authors, :]) # Result stored in tsne.embedding_ Explanation: Plotting the authors Now we're going to produce the kind of pacific archipelago looking plot below. The goal of this plot is to give you a way to explore the author-topic representation in an intuitive manner. We take all the author-topic distributions (stored in model.state.gamma) and embed them in a 2D space. To do this, we reduce the dimensionality of this data using t-SNE. t-SNE is a method that attempts to reduce the dimensionality of a dataset, while maintaining the distances between the points. That means that if two authors are close together in the plot below, then their topic distributions are similar. In the cell below, we transform the author-topic representation into the t-SNE space. You can increase the smallest_author value if you do not want to view all the authors with few documents. End of explanation # Tell Bokeh to display plots inside the notebook. from bokeh.io import output_notebook output_notebook() from bokeh.models import HoverTool from bokeh.plotting import figure, show, ColumnDataSource x = tsne.embedding_[:, 0] y = tsne.embedding_[:, 1] author_names = [model.id2author[a] for a in authors] # Radius of each point corresponds to the number of documents attributed to that author. scale = 0.1 author_sizes = [len(model.author2doc[a]) for a in author_names] radii = [size * scale for size in author_sizes] source = ColumnDataSource( data=dict( x=x, y=y, author_names=author_names, author_sizes=author_sizes, radii=radii, ) ) # Add author names and sizes to mouse-over info. 
hover = HoverTool( tooltips=[ ("author", "@author_names"), ("size", "@author_sizes"), ] ) p = figure(tools=[hover, 'crosshair,pan,wheel_zoom,box_zoom,reset,save,lasso_select']) p.scatter('x', 'y', radius='radii', source=source, fill_alpha=0.6, line_color=None) show(p) Explanation: We are now ready to make the plot. Note that if you run this notebook yourself, you will see a different graph. The random initialization of the model will be different, and the result will thus be different to some degree. You may find an entirely different representation of the data, or it may show the same interpretation slightly differently. If you can't see the plot, you are probably viewing this tutorial in a Jupyter Notebook. View it in an nbviewer instead at http://nbviewer.jupyter.org/github/rare-technologies/gensim/blob/develop/docs/notebooks/atmodel_tutorial.ipynb. End of explanation from gensim.similarities import MatrixSimilarity # Generate a similarity object for the transformed corpus. index = MatrixSimilarity(model[list(model.id2author.values())]) # Get similarities to some author. author_name = 'YannLeCun' sims = index[model[author_name]] Explanation: The circles in the plot above are individual authors, and their sizes represent the number of documents attributed to the corresponding author. Hovering your mouse over the circles will tell you the name of the authors and their sizes. Large clusters of authors tend to reflect some overlap in interest. We see that the model tends to put duplicate authors close together. For example, Terrence J. Sejnowki and T. J. Sejnowski are the same person, and their vectors end up in the same place (see about $(-10, -10)$ in the plot). At about $(-15, -10)$ we have a cluster of neuroscientists like Christof Koch and James M. Bower. As discussed earlier, the "object recognition" topic was assigned to Sejnowski. If we get the topics of the other authors in Sejnoski's neighborhood, like Peter Dayan, we also get this same topic. Furthermore, we see that this cluster is close to the "neuroscience" cluster discussed above, which is further indication that this topic is about visual perception in the brain. Other clusters include a reinforcement learning cluster at about $(-5, 8)$, and a Bayesian modelling cluster at about $(8, -12)$. Similarity queries In this section, we are going to set up a system that takes the name of an author and yields the authors that are most similar. This functionality can be used as a component in an information retrieval (i.e. a search engine of some kind), or in an author prediction system, i.e. a system that takes an unlabelled document and predicts the author(s) that wrote it. We simply need to search for the closest vector in the author-topic space. In this sense, the approach is similar to the t-SNE plot above. Below we illustrate a similarity query using a built-in similarity framework in Gensim. End of explanation # Make a function that returns similarities based on the Hellinger distance. from gensim import matutils import pandas as pd # Make a list of all the author-topic distributions. 
author_vecs = [model.get_author_topics(author) for author in model.id2author.values()] def similarity(vec1, vec2): '''Get similarity between two vectors''' dist = matutils.hellinger(matutils.sparse2full(vec1, model.num_topics), \ matutils.sparse2full(vec2, model.num_topics)) sim = 1.0 / (1.0 + dist) return sim def get_sims(vec): '''Get similarity of vector to all authors.''' sims = [similarity(vec, vec2) for vec2 in author_vecs] return sims def get_table(name, top_n=10, smallest_author=1): ''' Get table with similarities, author names, and author sizes. Return `top_n` authors as a dataframe. ''' # Get similarities. sims = get_sims(model.get_author_topics(name)) # Arrange author names, similarities, and author sizes in a list of tuples. table = [] for elem in enumerate(sims): author_name = model.id2author[elem[0]] sim = elem[1] author_size = len(model.author2doc[author_name]) if author_size >= smallest_author: table.append((author_name, sim, author_size)) # Make dataframe and retrieve top authors. df = pd.DataFrame(table, columns=['Author', 'Score', 'Size']) df = df.sort_values('Score', ascending=False)[:top_n] return df Explanation: However, this framework uses the cosine distance, but we want to use the Hellinger distance. The Hellinger distance is a natural way of measuring the distance (i.e. dis-similarity) between two probability distributions. Its discrete version is defined as $$ H(p, q) = \frac{1}{\sqrt{2}} \sqrt{\sum_{i=1}^K (\sqrt{p_i} - \sqrt{q_i})^2}, $$ where $p$ and $q$ are both topic distributions for two different authors. We define the similarity as $$ S(p, q) = \frac{1}{1 + H(p, q)}. $$ In the cell below, we prepare everything we need to perform similarity queries based on the Hellinger distance. End of explanation get_table('YannLeCun') Explanation: Now we can find the most similar authors to some particular author. We use the Pandas library to print the results in a nice looking tables. End of explanation get_table('JamesM.Bower', smallest_author=3) Explanation: As before, we can specify the minimum author size. End of explanation %time model_ser = AuthorTopicModel(corpus=corpus, num_topics=10, id2word=dictionary.id2token, \ author2doc=author2doc, random_state=1, serialized=True, \ serialization_path='/tmp/model_serialization.mm') # Delete the file, once you're done using it. import os os.remove('/tmp/model_serialization.mm') Explanation: Serialized corpora The AuthorTopicModel class accepts serialized corpora, that is, corpora that are stored on the hard-drive rather than in memory. This is usually done when the corpus is too big to fit in memory. There are, however, some caveats to this functionality, which we will discuss here. As these caveats make this functionality less than ideal, it may be improved in the future. It is not necessary to read this section if you don't intend to use serialized corpora. In the following, an explanation, followed by an example and a summarization will be given. If the corpus is serialized, the user must specify serialized=True. Any input corpus can then be any type of iterable or generator. The model will then take the input corpus and serialize it in the MmCorpus format, which is supported in Gensim. The user must specify the path where the model should serialize all input documents, for example serialization_path='/tmp/model_serializer.mm'. 
To avoid accidentally overwriting some important data, the model will raise an error if there already exists a file at serialization_path; in this case, either choose another path, or delete the old file. When you want to train on new data, and call model.update(corpus, author2doc), all the old data and the new data have to be re-serialized. This can of course be quite computationally demanding, so it is recommended that you do this only when necessary; that is, wait until you have as much new data as possible to update, rather than updating the model for every new document. End of explanation
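The Hellinger machinery used for the similarity queries can also be exercised on its own, without a trained model. The toy sketch below (the two distributions are invented for illustration) applies the same distance and similarity formulas to two small topic vectors:
import numpy as np

def hellinger(p, q):
    # Discrete Hellinger distance between two probability vectors
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.5, 0.3, 0.2])
d = hellinger(p, q)
print("distance:", round(d, 4))
print("similarity:", round(1.0 / (1.0 + d), 4))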
5,969
Given the following text description, write Python code to implement the functionality described below step by step Description: Download, Parse and Interrogate Apple Health Export Data The first part of this program is all about getting the Apple Health export and putting it into an analyzable format. At that point it can be analysed anywhere. The second part of this program is concerned with using SAS Scripting Wrapper for Analytics Transfer (SWAT) Python library to transfer the data to SAS Viya, and analyze it there. The SWAT package provides native python language access to the SAS Viya codebase. https Step1: Authenticate with Google This will open a browser to let you beging the process of authentication with an existing Google Drive account. This process will be separate from Python. For this to work, you will need to set up a Other Authentication OAuth credential at https Step2: Download the most recent Apple Health export file Now that we are authenticated into Google Drive, use PyDrive to access the API and get to files stored. Google Drive allows multiple files with the same name, but it indexes them with the ID to keep them separate. In this block, we make one pass of the file list where the file name is called export.zip, and save the row that corresponds with the most recent date. We will use that file id later to download the correct file that corresponds with the most recent date. Apple Health export names the file export.zip, and at the time this was written, there is no other option. Step3: Download the file from Google Drive Ensure that the file downloaded is the latest file generated Step4: Unzip the most current file to a holding directory Step5: Parse Apple Health Export document Step6: List XML headers by element count Step7: List types for "Record" Header Step8: Extract Values to Data Frame TODO Step9: import calmap ts = pd.Series(HR_df['HeartRate'].values, index=HR_df['Days']) ts.index = pd.to_datetime(ts.index) tstot = ts.groupby(ts.index).median() plt.rcParams['figure.figsize'] = 16, 8 import warnings warnings.simplefilter(action='ignore', category=FutureWarning) calmap.yearplot(data=tstot,year=2017) Flag Chemotherapy Days for specific analysis The next two cells provide the ability to introduce cycles that start on specific days and include this data in the datasets so that they can be overlaid in graphics. In the example below, there are three cycles of 21 days. The getDelta function returns the cycle number when tpp == 0 and the days since day 0 when tpp == 2. This allows the overlaying of the cycles, with the days since day 0 being overlaid. Step10: Boxplots Using Seaborn
Python Code: import xml.etree.ElementTree as et import pandas as pd import numpy as np from datetime import * import matplotlib.pyplot as plt import re import os.path import zipfile import pytz %matplotlib inline plt.rcParams['figure.figsize'] = 16, 8 Explanation: Download, Parse and Interrogate Apple Health Export Data The first part of this program is all about getting the Apple Health export and putting it into an analyzable format. At that point it can be analysed anywhere. The second part of this program is concerned with using SAS Scripting Wrapper for Analytics Transfer (SWAT) Python library to transfer the data to SAS Viya, and analyze it there. The SWAT package provides native python language access to the SAS Viya codebase. https://github.com/sassoftware/python-swat This file was created from a desire to get my hands on data collected by Apple Health, notably heart rate information collected by Apple Watch. For this to work, this file needs to be in a location accessible to Python code. A little bit of searching told me that iCloud file access is problematic and that there were already a number of ways of doing this with the Google API if the file was saved to Google Drive. I chose PyDrive. So for the end to end program to work with little user intervention, you will need to sign up for Google Drive, set up an application in the Google API and install Google Drive app to your iPhone. This may sound involved, and it is not necessary if you simply email the export file to yourself and copy it to a filesystem that Python can see. If you choose to do that, all of the Google Drive portion can be removed. I like the Google Drive process though as it enables a minimal manual work scenario. This version requires the user to grant Google access, requiring some additional clicks, but it is not too much. I think it is possible to automate this to run without user intervention as well using security files. The first step to enabling this process is exporting the data from Apple Health. As of this writing, open Apple Health and click on your user icon or photo. Near the bottom of the next page in the app will be a button or link called Export Health Data. Clicking on this will generate a xml file, zipped up. THe next dialog will ask you where you want to save it. Options are to email, save to iCloud, message etc... Select Google Drive. Google Drive allows multiple files with the same name and this is accounted for by this program. End of explanation # Authenticate into Google Drive from pydrive.auth import GoogleAuth gauth = GoogleAuth() gauth.LocalWebserverAuth() Explanation: Authenticate with Google This will open a browser to let you beging the process of authentication with an existing Google Drive account. This process will be separate from Python. For this to work, you will need to set up a Other Authentication OAuth credential at https://console.developers.google.com/apis/credentials, save the secret file in your root directory and a few other things that are detailed at https://pythonhosted.org/PyDrive/. The PyDrive instructions also show you how to set up your Google application. There are other methods for accessing the Google API from python, but this one seems pretty nice. The first time through the process, regular sign in and two factor authentication is required (if you require two factor auth) but after that it is just a process of telling Google that it is ok for your Google application to access Drive. 
End of explanation from pydrive.drive import GoogleDrive drive = GoogleDrive(gauth) file_list = drive.ListFile({'q': "'root' in parents and trashed=false"}).GetList() # Step through the file list and find the most current export.zip file id, then use # that later to download the file to the local machine. # This may look a little old school, but these file lists will never be massive and # it is readable and easy one pass way to get the most current file using the # least (or low) amount of resouces selection_dt = datetime.strptime("2000-01-01T01:01:01.001Z","%Y-%m-%dT%H:%M:%S.%fZ") print("Matching Files") for file1 in file_list: if re.search("^export-*\d*.zip",file1['title']): dt = datetime.strptime(file1['createdDate'],"%Y-%m-%dT%H:%M:%S.%fZ") if dt > selection_dt: selection_id = file1['id'] selection_dt = dt print(' title: %s, id: %s createDate: %s' % (file1['title'], file1['id'], file1['createdDate'])) if not os.path.exists('healthextract'): os.mkdir('healthextract') Explanation: Download the most recent Apple Health export file Now that we are authenticated into Google Drive, use PyDrive to access the API and get to files stored. Google Drive allows multiple files with the same name, but it indexes them with the ID to keep them separate. In this block, we make one pass of the file list where the file name is called export.zip, and save the row that corresponds with the most recent date. We will use that file id later to download the correct file that corresponds with the most recent date. Apple Health export names the file export.zip, and at the time this was written, there is no other option. End of explanation for file1 in file_list: if file1['id'] == selection_id: print('Downloading this file: %s, id: %s createDate: %s' % (file1['title'], file1['id'], file1['createdDate'])) file1.GetContentFile("healthextract/export.zip") Explanation: Download the file from Google Drive Ensure that the file downloaded is the latest file generated End of explanation zip_ref = zipfile.ZipFile('healthextract/export.zip', 'r') zip_ref.extractall('healthextract') zip_ref.close() Explanation: Unzip the most current file to a holding directory End of explanation path = "/Users/samuelcroker/Documents/repositories/RHealthDataImport/healthextract/apple_health_export/export.xml" e = et.parse(path) #this was from an older iPhone, to demonstrate how to join files #legacy = et.parse("/Users/samuelcroker/Documents/repositories/RHealthDataImport/healthextract/apple_health_export/export.xml") Explanation: Parse Apple Health Export document End of explanation pd.Series([el.tag for el in e.iter()]).value_counts() Explanation: List XML headers by element count End of explanation pd.Series([atype.get('type') for atype in e.findall('Record')]).value_counts() Explanation: List types for "Record" Header End of explanation import pytz #Extract the heartrate values, and get a timestamp from the xml # there is likely a more efficient way, though this is very fast def txloc(xdate,fmt): eastern = pytz.timezone('US/Eastern') dte = xdate.astimezone(eastern) return datetime.strftime(dte,fmt) def xmltodf(eltree, element,outvaluename): dt = [] v = [] for atype in eltree.findall('Record'): if atype.get('type') == element: dt.append(datetime.strptime(atype.get("startDate"),"%Y-%m-%d %H:%M:%S %z")) v.append(float(atype.get("value"))) myd = pd.DataFrame({"Create":dt,outvaluename:v}) colDict = {"Year":"%Y","Month":"%Y-%m", "Week":"%Y-%U","Day":"%d","Hour":"%H","Days":"%Y-%m-%d","Month-Day":"%m-%d"} for col, fmt in colDict.items(): 
myd[col] = myd['Create'].dt.tz_convert('US/Eastern').dt.strftime(fmt) myd[outvaluename] = myd[outvaluename].astype(float) #.astype(int) print('Extracting ' + outvaluename + ', type: ' + element) return(myd) HR_df = xmltodf(e,"HKQuantityTypeIdentifierHeartRate","HeartRate") HA_df = xmltodf(e,"HKQuantityTypeIdentifierEnvironmentalAudioExposure","EnvAudio") EX_df = xmltodf(e,"HKQuantityTypeIdentifierAppleExerciseTime","Extime") SPO2_df = xmltodf(e,"HKQuantityTypeIdentifierOxygenSaturation","SPO2") HR_df #reset plot - just for tinkering plt.rcParams['figure.figsize'] = 60, 8 HR_df.boxplot(by='Month',column="HeartRate", return_type='axes') plt.grid(axis='x') plt.title('All Months') plt.ylabel('Heart Rate') plt.ylim(40,140) dx = HR_df[HR_df['Year']=='2019'].boxplot(by='Week',column="HeartRate", return_type='axes') plt.title('All Weeks') plt.ylabel('Heart Rate') plt.xticks(rotation=90) plt.grid(axis='x') [plt.axvline(_x, linewidth=1, color='blue') for _x in [10,12]] plt.ylim(40,140) monthval = '2019-10' #monthval1 = '2017-09' #monthval2 = '2017-10' #HR_df[(HR_df['Month']==monthval1) | (HR_df['Month']== monthval2)].boxplot(by='Month-Day',column="HeartRate", return_type='axes') HR_df[HR_df['Month']==monthval].boxplot(by='Month-Day',column="HeartRate", return_type='axes') plt.grid(axis='x') plt.rcParams['figure.figsize'] = 16, 8 plt.title('Daily for Month: '+ monthval) plt.ylabel('Heart Rate') plt.xticks(rotation=90) plt.ylim(40,140) HR_df[HR_df['Month']==monthval].boxplot(by='Hour',column="HeartRate") plt.title('Hourly for Month: '+ monthval) plt.ylabel('Heart Rate') plt.grid(axis='x') plt.ylim(40,140) Explanation: Extract Values to Data Frame TODO: Abstraction of the next code block End of explanation # This isnt efficient yet, just a first swipe. It functions as intended. def getDelta(res,ttp,cyclelength): mz = [x if (x >= 0) & (x < cyclelength) else 999 for x in res] if ttp == 0: return(mz.index(min(mz))+1) else: return(mz[mz.index(min(mz))]) #chemodays = np.array([date(2017,4,24),date(2017,5,16),date(2017,6,6),date(2017,8,14)]) chemodays = np.array([date(2018,1,26),date(2018,2,2),date(2018,2,9),date(2018,2,16),date(2018,2,26),date(2018,3,2),date(2018,3,19),date(2018,4,9),date(2018,5,1),date(2018,5,14),date(2018,6,18),date(2018,7,10),date(2018,8,6)]) HR_df = xmltodf(e,"HKQuantityTypeIdentifierHeartRate","HeartRate") #I dont think this is efficient yet... a = HR_df['Create'].apply(lambda x: [x.days for x in x.date()-chemodays]) HR_df['ChemoCycle'] = a.apply(lambda x: getDelta(x,0,21)) HR_df['ChemoDays'] = a.apply(lambda x: getDelta(x,1,21)) import seaborn as sns plotx = HR_df[HR_df['ChemoDays']<=21] plt.rcParams['figure.figsize'] = 24, 8 ax = sns.boxplot(x="ChemoDays", y="HeartRate", hue="ChemoCycle", data=plotx, palette="Set2",notch=1,whis=0,width=0.75,showfliers=False) plt.ylim(65,130) #the next statement puts the chemodays variable as a rowname, we need to fix that plotx_med = plotx.groupby('ChemoDays').median() #this puts chemodays back as a column in the frame. 
I need to see if there is a way to prevent the effect plotx_med.index.name = 'ChemoDays' plotx_med.reset_index(inplace=True) snsplot = sns.pointplot(x='ChemoDays', y="HeartRate", data=plotx_med,color='Gray') Explanation: import calmap ts = pd.Series(HR_df['HeartRate'].values, index=HR_df['Days']) ts.index = pd.to_datetime(ts.index) tstot = ts.groupby(ts.index).median() plt.rcParams['figure.figsize'] = 16, 8 import warnings warnings.simplefilter(action='ignore', category=FutureWarning) calmap.yearplot(data=tstot,year=2017) Flag Chemotherapy Days for specific analysis The next two cells provide the ability to introduce cycles that start on specific days and include this data in the datasets so that they can be overlaid in graphics. In the example below, there are three cycles of 21 days. The getDelta function returns the cycle number when tpp == 0 and the days since day 0 when tpp == 2. This allows the overlaying of the cycles, with the days since day 0 being overlaid. End of explanation import seaborn as sns sns.set(style="ticks", palette="muted", color_codes=True) sns.boxplot(x="Month", y="HeartRate", data=HR_df,whis=np.inf, color="c") # Add in points to show each observation snsplot = sns.stripplot(x="Month", y="HeartRate", data=HR_df,jitter=True, size=1, alpha=.15, color=".3", linewidth=0) hr_only = HR_df[['Create','HeartRate']] hr_only.tail() hr_only.to_csv('~/Downloads/stc_hr.csv') Explanation: Boxplots Using Seaborn End of explanation
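The Record-parsing step can be tried without a real Apple Health export. The snippet below is a self-contained illustration on an inline XML fragment (the records and their values are invented for the example), using the same ElementTree attribute access as above:
import xml.etree.ElementTree as et
import pandas as pd

xml_snippet = """<HealthData>
  <Record type="HKQuantityTypeIdentifierHeartRate" value="62" startDate="2019-10-01 07:15:00 -0400"/>
  <Record type="HKQuantityTypeIdentifierHeartRate" value="95" startDate="2019-10-01 18:40:00 -0400"/>
</HealthData>"""

root = et.fromstring(xml_snippet)
rows = [{"start": r.get("startDate"), "HeartRate": float(r.get("value"))}
        for r in root.findall("Record")
        if r.get("type") == "HKQuantityTypeIdentifierHeartRate"]
print(pd.DataFrame(rows))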
5,970
Given the following text description, write Python code to implement the functionality described below step by step Description: Christofides algorithm Create a minimum spanning tree T of G. Let O be the set of vertices with odd degree in T. By the handshaking lemma, O has an even number of vertices. Find a minimum-weight perfect matching M in the induced subgraph given by the vertices from O. Combine the edges of M and T to form a connected multigraph H in which each vertex has even degree. Form an Eulerian circuit in H. Make the circuit found in the previous step into a Hamiltonian circuit by skipping repeated vertices (shortcutting). The Christofides algorithm is one of the key algorithms regarding the travelling salesman problem. It has a $\frac{3}{2}$ bound on its approximation, meaning the tour it produces costs at most $\frac{3}{2}$ times the cost of the optimal tour. Here's a demonstration on a random Euclidean graph. The Christofides algorithm is one of the first approximation algorithms, formative in the building of the field. Step1: Lin-kernighan ($K$-opt heuristic) The Lin-kernighan heuristic is a way to "settle" a tour into a local optimum by investigating small, local changes in the tour. The locality of the change is the parameter $k$, which specifies how many 2-node pairs to swap and check if the cost is lower. This is what a 2-opt switch looks like. If the new tour is lower cost than the unperturbed tour, we store it again and run the whole process over. Speeding this up and "boosting" out of local minima to better solutions can be achieved with higher $k$. Let's run the two-opt until there's no improvement.
Python Code: def Christofides(G): T = nx.algorithms.minimum_spanning_tree(G) O = {n for n, d in T.degree(T.nodes_iter()).items() if d%2 == 1} induced_subgraph = nx.Graph(G.subgraph(O)) M = minimum_perfect_matching(induced_subgraph) T.add_weighted_edges_from([(u,v,M[u][v]['weight']) for u, v in M.edges()]) H = T seen = set() tour = [] for u, _ in nx.eulerian_circuit(H): if u not in seen: tour.append(u) seen.add(u) return tour G = EUC_2D(10) opt_christofides = Christofides(G) cost = 0 for u, v in zip(opt_christofides, opt_christofides[1:]): cost += G[u][v]['weight'] opt_christofides, cost opt, _ = brute_force(G) opt.opt_tour opt.cost 1.5*opt.cost #upper bound on cost Explanation: Christofides algorithm Create a minimum spanning tree T of G. Let O be the set of vertices with odd degree in T. By the handshaking lemma, O has an even number of vertices. Find a minimum-weight perfect matching M in the induced subgraph given by the vertices from O. Combine the edges of M and T to form a connected multigraph H in which each vertex has even degree. Form an Eulerian circuit in H. Make the circuit found in the previous step into a Hamiltonian circuit by skipping repeated vertices (shortcutting). The Christofides algorithm is one of the key algorithms regarding the travelling salesman problem. It has a $\frac{3}{2}$ bound on its approximation, meaning the tour it produces costs at most $\frac{3}{2}$ times the cost of the optimal tour. Here's a demonstration on a random Euclidean graph. The Christofides algorithm is one of the first approximation algorithms, formative in the building of the field. End of explanation
5,971
Given the following text description, write Python code to implement the functionality described below step by step Description: Water Filling in Communications by Robert Gowers, Roger Hill, Sami Al-Izzi, Timothy Pollington and Keith Briggs from Boyd and Vandenberghe, Convex Optimization, example 5.2 page 145 Convex optimisation can be used to solve the classic water filling problem. This problem is where a total amount of power $P$ has to be assigned to $n$ communication channels, with the objective of maximising the total communication rate. The communication rate of the $i$th channel is given by Step1: Example As a simple example, we set $N = 3$, $P = 1$ and $\boldsymbol{\alpha} = (0.8,1.0,1.2)$. The function outputs whether the problem status, the maximum communication rate and the power allocation required is achieved with this maximum communication rate. Step2: To illustrate the water filling principle, we will plot $\alpha_i + x_i$ and check that this level is flat where power has been allocated
Python Code: #!/usr/bin/env python3 # @author: R. Gowers, S. Al-Izzi, T. Pollington, R. Hill & K. Briggs import numpy as np import cvxpy as cp def water_filling(n, a, sum_x=1): ''' Boyd and Vandenberghe, Convex Optimization, example 5.2 page 145 Water-filling. This problem arises in information theory, in allocating power to a set of n communication channels in order to maximise the total channel capacity. The variable x_i represents the transmitter power allocated to the ith channel, and log(α_i+x_i) gives the capacity or maximum communication rate of the channel. The objective is to minimise -∑log(α_i+x_i) subject to the constraint ∑x_i = 1 ''' # Declare variables and parameters x = cp.Variable(shape=n) alpha = cp.Parameter(n, nonneg=True) alpha.value = a # Choose objective function. Interpret as maximising the total communication rate of all the channels obj = cp.Maximize(cp.sum(cp.log(alpha + x))) # Declare constraints constraints = [x >= 0, cp.sum(x) - sum_x == 0] # Solve prob = cp.Problem(obj, constraints) prob.solve() if(prob.status=='optimal'): return prob.status, prob.value, x.value else: return prob.status, np.nan, np.nan Explanation: Water Filling in Communications by Robert Gowers, Roger Hill, Sami Al-Izzi, Timothy Pollington and Keith Briggs from Boyd and Vandenberghe, Convex Optimization, example 5.2 page 145 Convex optimisation can be used to solve the classic water filling problem. This problem is where a total amount of power $P$ has to be assigned to $n$ communication channels, with the objective of maximising the total communication rate. The communication rate of the $i$th channel is given by: $\log(\alpha_i + x_i)$ where $x_i$ represents the power allocated to channel $i$ and $\alpha_i$ represents the floor above the baseline at which power can be added to the channel. Since $-\log(X)$ is convex, we can write the water-filling problem as a convex optimisation problem: minimise $\sum_{i=1}^N -\log(\alpha_i + x_i)$ subject to $x_i \succeq 0$ and $\sum_{i=1}^N x_i = P$ This form is also very straightforward to put into DCP format and thus can be simply solved using CVXPY. End of explanation # As an example, we will solve the water filling problem with 3 buckets, each with different α np.set_printoptions(precision=3) buckets = 3 alpha = np.array([0.8, 1.0, 1.2]) stat, prob, x = water_filling(buckets, alpha) print('Problem status: {}'.format(stat)) print('Optimal communication rate = {:.4g} '.format(prob)) print('Transmitter powers:\n{}'.format(x)) Explanation: Example As a simple example, we set $N = 3$, $P = 1$ and $\boldsymbol{\alpha} = (0.8,1.0,1.2)$. The function outputs whether the problem status, the maximum communication rate and the power allocation required is achieved with this maximum communication rate. 
End of explanation import matplotlib import matplotlib.pylab as plt %matplotlib inline matplotlib.rcParams.update({'font.size': 14}) axis = np.arange(0.5,buckets+1.5,1) index = axis+0.5 X = x.copy() Y = alpha + X # to include the last data point as a step, we need to repeat it A = np.concatenate((alpha,[alpha[-1]])) X = np.concatenate((X,[X[-1]])) Y = np.concatenate((Y,[Y[-1]])) plt.xticks(index) plt.xlim(0.5,buckets+0.5) plt.ylim(0,1.5) plt.step(axis,A,where='post',label =r'$\alpha$',lw=2) plt.step(axis,Y,where='post',label=r'$\alpha + x$',lw=2) plt.legend(loc='lower right') plt.xlabel('Bucket Number') plt.ylabel('Power Level') plt.title('Water Filling Solution') plt.show() Explanation: To illustrate the water filling principle, we will plot $\alpha_i + x_i$ and check that this level is flat where power has been allocated: End of explanation
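As an independent sanity check on the CVXPY solution (this cell is an addition, not part of the original notebook), the water-filling problem has a well-known closed-form structure from the KKT conditions: each power is $x_i = \max(0, \nu - \alpha_i)$ for a common water level $\nu$ chosen so that the powers sum to the power budget. The sketch below, with a function name and bisection tolerance of my own choosing, finds $\nu$ by bisection and should reproduce the allocation printed above.

import numpy as np

def water_filling_closed_form(alpha, total_power=1.0, tol=1e-9):
    # Bisection on the water level nu so that sum(max(0, nu - alpha_i)) == total_power.
    alpha = np.asarray(alpha, dtype=float)
    lo, hi = alpha.min(), alpha.max() + total_power
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.maximum(0.0, mid - alpha).sum() > total_power:
            hi = mid
        else:
            lo = mid
    nu = 0.5 * (lo + hi)
    x = np.maximum(0.0, nu - alpha)
    return x, np.log(alpha + x).sum()

x_check, rate_check = water_filling_closed_form(np.array([0.8, 1.0, 1.2]), total_power=1.0)
print('closed-form powers:', x_check, ' rate:', rate_check)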
5,972
Given the following text description, write Python code to implement the functionality described below step by step Description: Step1: Language Translation In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the Data Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. Step3: Explore the Data Play around with view_sentence_range to view different parts of the data. Step6: Implement Preprocessing Function Text to Word Ids As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of target_text. This will help the neural network predict when the sentence should end. You can get the &lt;EOS&gt; word id by doing Step8: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. Step10: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. Step12: Check the Version of TensorFlow and Access to GPU This will check to make sure you have the correct version of TensorFlow and access to a GPU Step15: Build the Neural Network You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below Step18: Process Decoder Input Implement process_decoder_input by removing the last word id from each batch in target_data and concat the GO ID to the begining of each batch. Step21: Encoding Implement encoding_layer() to create a Encoder RNN layer Step24: Decoding - Training Create a training decoding layer Step27: Decoding - Inference Create inference decoder Step30: Build the Decoding Layer Implement decoding_layer() to create a Decoder RNN layer. Embed the target sequences Construct the decoder LSTM cell (just like you constructed the encoder cell above) Create an output layer to map the outputs of the decoder to the elements of our vocabulary Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits. Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits. Note Step33: Build the Neural Network Apply the functions you implemented above to Step34: Neural Network Training Hyperparameters Tune the following parameters Step36: Build the Graph Build the graph using the neural network you implemented. Step40: Batch and pad the source and target sequences Step43: Train Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem. Step45: Save Parameters Save the batch_size and save_path parameters for inference. Step47: Checkpoint Step50: Sentence to Sequence To feed a sentence into the model for translation, you first need to preprocess it. 
Implement the function sentence_to_seq() to preprocess new sentences. Convert the sentence to lowercase. Convert words into ids using vocab_to_int. Convert words not in the vocabulary to the <UNK> word id. Step52: Translate This will translate translate_sentence from English to French.
Python Code: DON'T MODIFY ANYTHING IN THIS CELL import helper import problem_unittests as tests source_path = 'data/small_vocab_en' target_path = 'data/small_vocab_fr' source_text = helper.load_data(source_path) target_text = helper.load_data(target_path) Explanation: Language Translation In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French. Get the Data Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus. End of explanation view_sentence_range = (0, 10) DON'T MODIFY ANYTHING IN THIS CELL import numpy as np print('Dataset Stats') print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()}))) sentences = source_text.split('\n') word_counts = [len(sentence.split()) for sentence in sentences] print('Number of sentences: {}'.format(len(sentences))) print('Average number of words in a sentence: {}'.format(np.average(word_counts))) print() print('English sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) print() print('French sentences {} to {}:'.format(*view_sentence_range)) print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]])) Explanation: Explore the Data Play around with view_sentence_range to view different parts of the data. End of explanation def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int): Convert source and target text to proper word ids :param source_text: String that contains all the source text. :param target_text: String that contains all the target text. :param source_vocab_to_int: Dictionary to go from the source words to an id :param target_vocab_to_int: Dictionary to go from the target words to an id :return: A tuple of lists (source_id_text, target_id_text) source_ids = [] target_ids = [] for sentence in source_text.split("\n"): source_ids.append([source_vocab_to_int[word] for word in sentence.split(' ') if word != '']) for sentence in target_text.split("\n"): target_ids.append([target_vocab_to_int[word] for word in sentence.split(' ') if word != ''] + [target_vocab_to_int['<EOS>']]) return source_ids, target_ids DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_text_to_ids(text_to_ids) Explanation: Implement Preprocessing Function Text to Word Ids As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the &lt;EOS&gt; word id at the end of target_text. This will help the neural network predict when the sentence should end. You can get the &lt;EOS&gt; word id by doing: python target_vocab_to_int['&lt;EOS&gt;'] You can get other word ids using source_vocab_to_int and target_vocab_to_int. End of explanation DON'T MODIFY ANYTHING IN THIS CELL helper.preprocess_and_save_data(source_path, target_path, text_to_ids) Explanation: Preprocess all the data and save it Running the code cell below will preprocess all the data and save it to file. 
End of explanation DON'T MODIFY ANYTHING IN THIS CELL import numpy as np import helper import problem_unittests as tests (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() Explanation: Check Point This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk. End of explanation DON'T MODIFY ANYTHING IN THIS CELL from distutils.version import LooseVersion import warnings import tensorflow as tf from tensorflow.python.layers.core import Dense # Check TensorFlow Version assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer' print('TensorFlow Version: {}'.format(tf.__version__)) # Check for a GPU if not tf.test.gpu_device_name(): warnings.warn('No GPU found. Please use a GPU to train your neural network.') else: print('Default GPU Device: {}'.format(tf.test.gpu_device_name())) Explanation: Check the Version of TensorFlow and Access to GPU This will check to make sure you have the correct version of TensorFlow and access to a GPU End of explanation def model_inputs(): Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences. :return: Tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) input_text = tf.placeholder(tf.int32, [None, None], name="input") target_text = tf.placeholder(tf.int32, [None, None], name="targets") lr = tf.placeholder(tf.float32, name="learning_rate") keep_prob = tf.placeholder(tf.float32, name="keep_prob") target_sequence_length = tf.placeholder(tf.int32, (None,), name='target_sequence_length') max_target_sequence_length = tf.reduce_max(target_sequence_length, name='max_target_len') source_sequence_length = tf.placeholder(tf.int32, (None,), name='source_sequence_length') return input_text, target_text, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_model_inputs(model_inputs) Explanation: Build the Neural Network You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below: - model_inputs - process_decoder_input - encoding_layer - decoding_layer_train - decoding_layer_infer - decoding_layer - seq2seq_model Input Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: Input text placeholder named "input" using the TF Placeholder name parameter with rank 2. Targets placeholder with rank 2. Learning rate placeholder with rank 0. Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0. Target sequence length placeholder named "target_sequence_length" with rank 1 Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0. 
Source sequence length placeholder named "source_sequence_length" with rank 1 Return the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length) End of explanation def process_decoder_input(target_data, target_vocab_to_int, batch_size): Preprocess target data for encoding :param target_data: Target Placehoder :param target_vocab_to_int: Dictionary to go from the target words to an id :param batch_size: Batch Size :return: Preprocessed target data ending = tf.strided_slice(target_data, [0,0], [batch_size, -1], [1,1]) dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1) return dec_input DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_process_encoding_input(process_decoder_input) Explanation: Process Decoder Input Implement process_decoder_input by removing the last word id from each batch in target_data and concat the GO ID to the begining of each batch. End of explanation from imp import reload reload(tests) def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size): Create encoding layer :param rnn_inputs: Inputs for the RNN :param rnn_size: RNN Size :param num_layers: Number of layers :param keep_prob: Dropout keep probability :param source_sequence_length: a list of the lengths of each sequence in the batch :param source_vocab_size: vocabulary size of source data :param encoding_embedding_size: embedding size of source data :return: tuple (RNN output, RNN state) def make_cell(rnn_size, keep_prob): cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=tf.random_uniform_initializer(-0.1, 0.1)) drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob) return drop embed_input = tf.contrib.layers.embed_sequence(rnn_inputs, source_vocab_size, encoding_embedding_size) cells = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size, keep_prob) for i in range(num_layers)]) outputs, state = tf.nn.dynamic_rnn(cells, embed_input, sequence_length=source_sequence_length, dtype=tf.float32) return outputs, state DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_encoding_layer(encoding_layer) Explanation: Encoding Implement encoding_layer() to create a Encoder RNN layer: * Embed the encoder input using tf.contrib.layers.embed_sequence * Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper * Pass cell and embedded input to tf.nn.dynamic_rnn() End of explanation def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_summary_length, output_layer, keep_prob): Create a decoding layer for training :param encoder_state: Encoder State :param dec_cell: Decoder RNN Cell :param dec_embed_input: Decoder embedded input :param target_sequence_length: The lengths of each sequence in the target batch :param max_summary_length: The length of the longest sequence in the batch :param output_layer: Function to apply the output layer :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing training logits and sample_id with tf.variable_scope("decode"): training_helper = tf.contrib.seq2seq.TrainingHelper(inputs=dec_embed_input, sequence_length=target_sequence_length, time_major=False) training_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper, encoder_state, output_layer) output = tf.contrib.seq2seq.dynamic_decode(training_decoder, 
impute_finished=True, maximum_iterations=max_summary_length)[0] return output DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_decoding_layer_train(decoding_layer_train) Explanation: Decoding - Training Create a training decoding layer: * Create a tf.contrib.seq2seq.TrainingHelper * Create a tf.contrib.seq2seq.BasicDecoder * Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode End of explanation def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob): Create a decoding layer for inference :param encoder_state: Encoder state :param dec_cell: Decoder RNN Cell :param dec_embeddings: Decoder embeddings :param start_of_sequence_id: GO ID :param end_of_sequence_id: EOS Id :param max_target_sequence_length: Maximum length of target sequences :param vocab_size: Size of decoder/target vocabulary :param decoding_scope: TenorFlow Variable Scope for decoding :param output_layer: Function to apply the output layer :param batch_size: Batch size :param keep_prob: Dropout keep probability :return: BasicDecoderOutput containing inference logits and sample_id start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size]) inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_tokens, end_of_sequence_id) inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, inference_helper, encoder_state, output_layer) output = tf.contrib.seq2seq.dynamic_decode(inference_decoder, impute_finished=True, maximum_iterations=max_target_sequence_length)[0] return output DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_decoding_layer_infer(decoding_layer_infer) Explanation: Decoding - Inference Create inference decoder: * Create a tf.contrib.seq2seq.GreedyEmbeddingHelper * Create a tf.contrib.seq2seq.BasicDecoder * Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode End of explanation def decoding_layer(dec_input, encoder_state, target_sequence_length, max_target_sequence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, decoding_embedding_size): Create decoding layer :param dec_input: Decoder input :param encoder_state: Encoder state :param target_sequence_length: The lengths of each sequence in the target batch :param max_target_sequence_length: Maximum length of target sequences :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :param target_vocab_size: Size of target vocabulary :param batch_size: The size of the batch :param keep_prob: Dropout keep probability :param decoding_embedding_size: Decoding embedding size :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) def make_cell(rnn_size, keep_prob): cell = tf.contrib.rnn.BasicLSTMCell(rnn_size) drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob) return drop embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size])) embed_input = tf.nn.embedding_lookup(embeddings, dec_input) dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size, keep_prob) for i in range(num_layers)]) output_layer = Dense(target_vocab_size, kernel_initializer=tf.truncated_normal_initializer(mean=0.0, stddev=0.1)) with tf.variable_scope("decode"): training_decoder_output = decoding_layer_train(encoder_state, dec_cell, embed_input, 
target_sequence_length, max_target_sequence_length, output_layer, keep_prob) with tf.variable_scope("decode", reuse=True): inferer_decoder_output = decoding_layer_infer(encoder_state, dec_cell, embeddings, target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'], max_target_sequence_length, target_vocab_size, output_layer, batch_size, keep_prob) return training_decoder_output, inferer_decoder_output DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_decoding_layer(decoding_layer) Explanation: Build the Decoding Layer Implement decoding_layer() to create a Decoder RNN layer. Embed the target sequences Construct the decoder LSTM cell (just like you constructed the encoder cell above) Create an output layer to map the outputs of the decoder to the elements of our vocabulary Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits. Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits. Note: You'll need to use tf.variable_scope to share variables between training and inference. End of explanation def seq2seq_model(input_data, target_data, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sentence_length, source_vocab_size, target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int): Build the Sequence-to-Sequence part of the neural network :param input_data: Input placeholder :param target_data: Target placeholder :param keep_prob: Dropout keep probability placeholder :param batch_size: Batch Size :param source_sequence_length: Sequence Lengths of source sequences in the batch :param target_sequence_length: Sequence Lengths of target sequences in the batch :param source_vocab_size: Source vocabulary size :param target_vocab_size: Target vocabulary size :param enc_embedding_size: Decoder embedding size :param dec_embedding_size: Encoder embedding size :param rnn_size: RNN Size :param num_layers: Number of layers :param target_vocab_to_int: Dictionary to go from the target words to an id :return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput) _, enc_state = encoding_layer(input_data, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, enc_embedding_size) dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size) train_dec, infer_dec = decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) return train_dec, infer_dec DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_seq2seq_model(seq2seq_model) Explanation: Build the Neural Network Apply the functions you implemented above to: Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size). Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function. Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function. 
End of explanation # Number of Epochs epochs = 5 # Batch Size batch_size = 64 # RNN Size rnn_size = 128 # Number of Layers num_layers = 2 # Embedding Size encoding_embedding_size = 128 decoding_embedding_size = 128 # Learning Rate learning_rate = 0.001 # Dropout Keep Probability keep_probability = 0.5 display_step = 100 Explanation: Neural Network Training Hyperparameters Tune the following parameters: Set epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set num_layers to the number of layers. Set encoding_embedding_size to the size of the embedding for the encoder. Set decoding_embedding_size to the size of the embedding for the decoder. Set learning_rate to the learning rate. Set keep_probability to the Dropout keep probability Set display_step to state how many steps between each debug output statement End of explanation DON'T MODIFY ANYTHING IN THIS CELL save_path = 'checkpoints/dev' (source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess() max_target_sentence_length = max([len(sentence) for sentence in source_int_text]) train_graph = tf.Graph() with train_graph.as_default(): input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs() #sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length') input_shape = tf.shape(input_data) train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, source_sequence_length, target_sequence_length, max_target_sequence_length, len(source_vocab_to_int), len(target_vocab_to_int), encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int) training_logits = tf.identity(train_logits.rnn_output, name='logits') inference_logits = tf.identity(inference_logits.sample_id, name='predictions') masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks') with tf.name_scope("optimization"): # Loss function cost = tf.contrib.seq2seq.sequence_loss( training_logits, targets, masks) # Optimizer optimizer = tf.train.AdamOptimizer(lr) # Gradient Clipping gradients = optimizer.compute_gradients(cost) capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None] train_op = optimizer.apply_gradients(capped_gradients) Explanation: Build the Graph Build the graph using the neural network you implemented. 
End of explanation DON'T MODIFY ANYTHING IN THIS CELL def pad_sentence_batch(sentence_batch, pad_int): Pad sentences with <PAD> so that each sentence of a batch has the same length max_sentence = max([len(sentence) for sentence in sentence_batch]) return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch] def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int): Batch targets, sources, and the lengths of their sentences together for batch_i in range(0, len(sources)//batch_size): start_i = batch_i * batch_size # Slice the right amount for the batch sources_batch = sources[start_i:start_i + batch_size] targets_batch = targets[start_i:start_i + batch_size] # Pad pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int)) pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int)) # Need the lengths for the _lengths parameters pad_targets_lengths = [] for target in pad_targets_batch: pad_targets_lengths.append(len(target)) pad_source_lengths = [] for source in pad_sources_batch: pad_source_lengths.append(len(source)) yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths Explanation: Batch and pad the source and target sequences End of explanation DON'T MODIFY ANYTHING IN THIS CELL def get_accuracy(target, logits): Calculate accuracy max_seq = max(target.shape[1], logits.shape[1]) if max_seq - target.shape[1]: target = np.pad( target, [(0,0),(0,max_seq - target.shape[1])], 'constant') if max_seq - logits.shape[1]: logits = np.pad( logits, [(0,0),(0,max_seq - logits.shape[1])], 'constant') return np.mean(np.equal(target, logits)) # Split data to training and validation sets train_source = source_int_text[batch_size:] train_target = target_int_text[batch_size:] valid_source = source_int_text[:batch_size] valid_target = target_int_text[:batch_size] (valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source, valid_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])) with tf.Session(graph=train_graph) as sess: sess.run(tf.global_variables_initializer()) for epoch_i in range(epochs): for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate( get_batches(train_source, train_target, batch_size, source_vocab_to_int['<PAD>'], target_vocab_to_int['<PAD>'])): _, loss = sess.run( [train_op, cost], {input_data: source_batch, targets: target_batch, lr: learning_rate, target_sequence_length: targets_lengths, source_sequence_length: sources_lengths, keep_prob: keep_probability}) if batch_i % display_step == 0 and batch_i > 0: batch_train_logits = sess.run( inference_logits, {input_data: source_batch, source_sequence_length: sources_lengths, target_sequence_length: targets_lengths, keep_prob: 1.0}) batch_valid_logits = sess.run( inference_logits, {input_data: valid_sources_batch, source_sequence_length: valid_sources_lengths, target_sequence_length: valid_targets_lengths, keep_prob: 1.0}) train_acc = get_accuracy(target_batch, batch_train_logits) valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits) print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}' .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss)) # Save Model saver = tf.train.Saver() saver.save(sess, save_path) print('Model Trained and Saved') Explanation: Train Train the neural network on the preprocessed 
data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem. End of explanation DON'T MODIFY ANYTHING IN THIS CELL # Save parameters for checkpoint helper.save_params(save_path) Explanation: Save Parameters Save the batch_size and save_path parameters for inference. End of explanation DON'T MODIFY ANYTHING IN THIS CELL import tensorflow as tf import numpy as np import helper import problem_unittests as tests _, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess() load_path = helper.load_params() Explanation: Checkpoint End of explanation def sentence_to_seq(sentence, vocab_to_int): Convert a sentence to a sequence of ids :param sentence: String :param vocab_to_int: Dictionary to go from the words to an id :return: List of word ids unk_id = vocab_to_int["<UNK>"] return [vocab_to_int.get(word, unk_id) for word in sentence.split(" ")] DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE tests.test_sentence_to_seq(sentence_to_seq) Explanation: Sentence to Sequence To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences. Convert the sentence to lowercase Convert words into ids using vocab_to_int Convert words not in the vocabulary, to the &lt;UNK&gt; word id. End of explanation translate_sentence = 'he saw a old yellow truck .' DON'T MODIFY ANYTHING IN THIS CELL translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load saved model loader = tf.train.import_meta_graph(load_path + '.meta') loader.restore(sess, load_path) input_data = loaded_graph.get_tensor_by_name('input:0') logits = loaded_graph.get_tensor_by_name('predictions:0') target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0') source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0') keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size, target_sequence_length: [len(translate_sentence)*2]*batch_size, source_sequence_length: [len(translate_sentence)]*batch_size, keep_prob: 1.0})[0] print('Input') print(' Word Ids: {}'.format([i for i in translate_sentence])) print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence])) print('\nPrediction') print(' Word Ids: {}'.format([i for i in translate_logits])) print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits]))) Explanation: Translate This will translate translate_sentence from English to French. End of explanation
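A tiny usage sketch for sentence_to_seq (the toy vocabulary below is made up for illustration and is not part of the dataset) makes the <UNK> handling concrete; note that the implementation above does not lowercase the input itself, so it relies on the sentence already being lowercase.

toy_vocab_to_int = {'<UNK>': 0, 'he': 1, 'saw': 2, 'a': 3, 'truck': 4, '.': 5}
print(sentence_to_seq('he saw a purple truck .', toy_vocab_to_int))
# 'purple' is not in the toy vocabulary, so it maps to the <UNK> id: [1, 2, 3, 0, 4, 5]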
5,973
Given the following text description, write Python code to implement the functionality described below step by step Description: Introduction to Linear Programming with Python - Part 2 Introduction to PuLP PuLP is an open source linear programming package for python. PuLP can be installed using pip, instructions here. In this notebook, we'll explore how to construct and solve the linear programming problem described in Part 1 using PuLP. A brief reminder of our linear programming problem Step1: Then instantiate a problem class, we'll name it "My LP problem" and we're looking for an optimal maximum so we use LpMaximize Step2: We then model our decision variables using the LpVariable class. In our example, x had a lower bound of 0 and y had a lower bound of 2. Upper bounds can be assigned using the upBound parameter. Step3: The objective function and constraints are added using the += operator to our model. The objective function is added first, then the individual constraints. Step4: We have now constructed our problem and can have a look at it. Step5: PuLP supports open source linear programming solvers such as CBC and GLPK, as well as commercial solvers such as Gurobi and IBM's CPLEX. The default solver is CBC, which comes packaged with PuLP upon installation. For most applications, the open source CBC from COIN-OR will be enough for most simple linear programming optimisation algorithms. Step6: We have also checked the status of the solver, there are 5 status codes
Python Code: import pulp Explanation: Introduction to Linear Programming with Python - Part 2 Introduction to PuLP PuLP is an open source linear programming package for python. PuLP can be installed using pip, instructions here. In this notebook, we'll explore how to construct and solve the linear programming problem described in Part 1 using PuLP. A brief reminder of our linear programming problem: We want to find the maximum solution to the objective function: $Z = 4x + 3y$ Subject to the following constraints: $ x \geq 0 \ y \geq 2 \ 2y \leq 25 - x \ 4y \geq 2x - 8 \ y \leq 2x -5 \ $ We'll begin by importing PuLP End of explanation my_lp_problem = pulp.LpProblem("My LP Problem", pulp.LpMaximize) Explanation: Then instantiate a problem class, we'll name it "My LP problem" and we're looking for an optimal maximum so we use LpMaximize End of explanation x = pulp.LpVariable('x', lowBound=0, cat='Continuous') y = pulp.LpVariable('y', lowBound=2, cat='Continuous') Explanation: We then model our decision variables using the LpVariable class. In our example, x had a lower bound of 0 and y had a lower bound of 2. Upper bounds can be assigned using the upBound parameter. End of explanation # Objective function my_lp_problem += 4 * x + 3 * y, "Z" # Constraints my_lp_problem += 2 * y <= 25 - x my_lp_problem += 4 * y >= 2 * x - 8 my_lp_problem += y <= 2 * x - 5 Explanation: The objective function and constraints are added using the += operator to our model. The objective function is added first, then the individual constraints. End of explanation my_lp_problem Explanation: We have now constructed our problem and can have a look at it. End of explanation my_lp_problem.solve() pulp.LpStatus[my_lp_problem.status] Explanation: PuLP supports open source linear programming solvers such as CBC and GLPK, as well as commercial solvers such as Gurobi and IBM's CPLEX. The default solver is CBC, which comes packaged with PuLP upon installation. For most applications, the open source CBC from COIN-OR will be enough for most simple linear programming optimisation algorithms. End of explanation for variable in my_lp_problem.variables(): print "{} = {}".format(variable.name, variable.varValue) print pulp.value(my_lp_problem.objective) Explanation: We have also checked the status of the solver, there are 5 status codes: * Not Solved: Status prior to solving the problem. * Optimal: An optimal solution has been found. * Infeasible: There are no feasible solutions (e.g. if you set the constraints x <= 1 and x >=2). * Unbounded: The constraints are not bounded, maximising the solution will tend towards infinity (e.g. if the only constraint was x >= 3). * Undefined: The optimal solution may exist but may not have been found. We can now view our maximal variable values and the maximum value of Z. We can use the varValue method to retrieve the values of our variables x and y, and the pulp.value function to view the maximum value of the objective function. End of explanation
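Because this LP is tiny, it is easy to cross-check the PuLP answer with a second solver. The sketch below is an addition rather than part of the original notebook: it feeds the same problem to scipy.optimize.linprog, which minimizes, so the objective is negated and each constraint is rearranged into A_ub @ [x, y] <= b_ub form. The reported x, y and objective value should match the PuLP output above.

from scipy.optimize import linprog

c = [-4, -3]                       # maximize 4x + 3y  ->  minimize -4x - 3y
A_ub = [[ 1,  2],                  # 2y <= 25 - x   ->   x + 2y <= 25
        [ 2, -4],                  # 4y >= 2x - 8   ->   2x - 4y <= 8
        [-2,  1]]                  # y <= 2x - 5    ->  -2x + y <= -5
b_ub = [25, 8, -5]
bounds = [(0, None), (2, None)]    # x >= 0, y >= 2

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, -res.fun)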
5,974
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: Here is a rather difficult problem: given a 2D NumPy array that is mostly zeros, remove the all-zero rows and columns around its border, i.e. extract the smallest rectangular sub-array (bounding box) that contains all of the nonzero entries.
Problem: import numpy as np A = np.array([[0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0, 0], [0, 0, 1, 1, 0, 0, 0], [0, 0, 0, 0, 1, 0, 0], [0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0]]) B = np.argwhere(A) (ystart, xstart), (ystop, xstop) = B.min(0), B.max(0) + 1 result = A[ystart:ystop, xstart:xstop]
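The same argwhere/min/max idea generalizes to arrays of any dimensionality. Below is a possible reusable wrapper (the function name is made up for this sketch); applied to A it returns the same sub-array as result, and it also covers the corner case of an all-zero input.

import numpy as np

def trim_zero_border(a):
    # Slice out the smallest hyper-rectangle containing every nonzero entry.
    idx = np.argwhere(a)
    if idx.size == 0:
        # All-zero input: return an empty view with the same number of dimensions.
        return a[tuple(slice(0, 0) for _ in range(a.ndim))]
    lo, hi = idx.min(0), idx.max(0) + 1
    return a[tuple(slice(l, h) for l, h in zip(lo, hi))]

print(trim_zero_border(A))   # same as `result` for the 2D example above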
5,975
Given the following text description, write Python code to implement the functionality described below step by step Description: Ch 04 Step1: Set up some data to work with Step2: Define the placeholders, variables, model, cost function, and training op Step3: Train the logistic model on the data Step4: Now let's see how well our logistic function matched the training data points
Python Code: %matplotlib inline import numpy as np import tensorflow as tf import matplotlib.pyplot as plt learning_rate = 0.01 training_epochs = 1000 Explanation: Ch 04: Concept 02 Logistic regression Import the usual libraries, and set up the usual hyper-parameters: End of explanation x1 = np.random.normal(-4, 2, 1000) x2 = np.random.normal(4, 2, 1000) xs = np.append(x1, x2) ys = np.asarray([0.] * len(x1) + [1.] * len(x2)) plt.scatter(xs, ys) Explanation: Set up some data to work with: End of explanation X = tf.placeholder(tf.float32, shape=(None,), name="x") Y = tf.placeholder(tf.float32, shape=(None,), name="y") w = tf.Variable([0., 0.], name="parameter", trainable=True) y_model = tf.sigmoid(w[1] * X + w[0]) cost = tf.reduce_mean(-Y * tf.log(y_model) - (1 - Y) * tf.log(1 - y_model)) train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost) Explanation: Define the placeholders, variables, model, cost function, and training op: End of explanation with tf.Session() as sess: sess.run(tf.global_variables_initializer()) prev_err = 0 for epoch in range(training_epochs): err, _ = sess.run([cost, train_op], {X: xs, Y: ys}) if epoch % 100 == 0: print(epoch, err) if abs(prev_err - err) < 0.0001: break prev_err = err w_val = sess.run(w, {X: xs, Y: ys}) Explanation: Train the logistic model on the data: End of explanation all_xs = np.linspace(-10, 10, 100) with tf.Session() as sess: predicted_vals = sess.run(tf.sigmoid(all_xs * w_val[1] + w_val[0])) plt.plot(all_xs, predicted_vals) plt.scatter(xs, ys) plt.show() Explanation: Now let's see how well our logistic function matched the training data points: End of explanation
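A small follow-up sketch, not part of the original notebook: once w_val has been learned, the fitted sigmoid can be turned into hard class predictions by thresholding at 0.5, which gives a quick training-accuracy number to go with the plot. It reuses the xs, ys and w_val arrays defined in the cells above.

# Classify each training point with the learned parameters and report accuracy.
probs = 1.0 / (1.0 + np.exp(-(w_val[1] * xs + w_val[0])))
predicted_labels = (probs > 0.5).astype(np.float64)
accuracy = np.mean(predicted_labels == ys)
print('training accuracy: {:.3f}'.format(accuracy))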
5,976
Given the following text description, write Python code to implement the functionality described below step by step Description: Lesson 2 Step1: Configure sql magic to output queries as pandas dataframes Step2: Import the data analysis libraries Step3: Import the MySQLdb library Step4: Connect to the MySQL database using sql magic commands The connection to the MySQL database uses the following format Step5: Example to create a pandas dataframe using the results of a mysql query Step6: Note the data type of the dataframe df Step7: Use %%sql to start a block of sql statements Example Step8: Exercise 4 Step9: Create a dataframe from the results of a sql query from the pandas object Step10: Now let's create the tables to hold the sensor data from our Raspberry Pi <pre> <b>Logon using an admin account and create a table called temps3 to hold sensor data Step11: Next we will create a user to access the newly created table that will be used by the Raspberry Pi program Example Step12: Next we will test access to the newly created table using the new user Start a new connection using the new user Step13: Let's add some test data to make sure we can insert using the new user Step14: Now we will delete the rows in the database
Python Code: %load_ext sql Explanation: Lesson 2: Setup Jupyter Notebook for Data Analysis Learning Objectives: <ol> <li>Create Python tools for data analysis using Jupyter Notebooks</li> <li>Learn how to access data from MySQL databases for data analysis</li> </ol> Exercise 1: Install Anaconda Access https://conda.io/miniconda.html and download the Windows Installer.<br> Run the following commands on the Anaconda command prompt:<br> <pre> conda install numpy, pandas, matplotlib conda update conda </pre> Sometimes data analysis requires previous versions of Python or other tools for a project.<br> Next we will setup three environments that can be used with various project requirements. Exercise 2: Configure conda environments for Python 2 and Python 3 data analysis To create a <b>Python 2</b> enviroment run the following from the Anaconda command prompt:<br> <pre> conda update conda -y conda create -n py2 python=2 anaconda jupyter notebook -y </pre> To activate the environment:<br> <pre>source activate py2</pre> On MacOS or Linux: <pre>source activate py2</pre> To deactivate the environment:<pr> <pre>source deactivate py2</pre> On MacOS or Linux: <pre>source deactivate py2</pre> To create the <b>Python 3</b> environment run the following from the Anaconda command prompt: <pre> conda create -n py3 python=3 anaconda jupyter notebook -y </pre> To activate the environment: <pre>activate py3</pre> On MacOS or Linux: <pre>source activate py3</pre> To deactivate the enviroment: <pre>deactivate py3</pre> On MacOS or Linux: <pre>source deactivate py3</pre> Setup Jupyter Notebook to access data from MySQL databases Exercise 3: Load the mysql libraries into the environment and access data from MySQL database Run the following commands from the Anaconda command line:<br> <pre> pip install ipython-sql conda install mysql-python </pre> This will install sql magic capabilities to Jupyter Notebook Load the sql magic jupyter notebook extension: End of explanation %config SqlMagic.autopandas=True Explanation: Configure sql magic to output queries as pandas dataframes: End of explanation import pandas as pd import numpy as np Explanation: Import the data analysis libraries: End of explanation import MySQLdb Explanation: Import the MySQLdb library End of explanation %%sql mysql://pilogger:[email protected]/pidata SELECT * FROM temps LIMIT 10; Explanation: Connect to the MySQL database using sql magic commands The connection to the MySQL database uses the following format: <pre> mysql://username:password@hostname/database </pre> To start a sql command block type: <pre>%%sql</pre> Note: Make sure the %%sql is on the top of the cell<br> Then the remaining lines can contain SQL code.<br> Example: to connect to <b>pidata</b> database and select records from the <b>temps</b> table: End of explanation df = %sql SELECT * FROM temps WHERE datetime > date(now()); df Explanation: Example to create a pandas dataframe using the results of a mysql query End of explanation type(df) Explanation: Note the data type of the dataframe df: End of explanation %%sql use pidata; show tables; Explanation: Use %%sql to start a block of sql statements Example: Show tables in the pidata database End of explanation #Enter the values for you database connection database = "pidata" # e.g. "pidata" hostname = "172.20.101.81" # e.g.: "mydbinstance.xyz.us-east-1.rds.amazonaws.com" port = 3306 # e.g. 3306 uid = "pilogger" # e.g. "user1" pwd = "foobar" # e.g. 
"Password123" conn = MySQLdb.connect( host=hostname, user=uid, passwd=pwd, db=database ) cur = conn.cursor() Explanation: Exercise 4: Another way to access mysql data and load into a pandas dataframe Connect using the mysqldb python library: End of explanation new_dataframe = pd.read_sql("SELECT * \ FROM temps", con=conn) conn.close() new_dataframe Explanation: Create a dataframe from the results of a sql query from the pandas object: End of explanation %%sql mysql://admin:[email protected]/pidata DROP TABLE if exists temps3; CREATE TABLE temps3 ( device varchar(20) DEFAULT NULL, datetime datetime DEFAULT NULL, temp float DEFAULT NULL, hum float DEFAULT NULL ) ENGINE=InnoDB DEFAULT CHARSET=latin1; Explanation: Now let's create the tables to hold the sensor data from our Raspberry Pi <pre> <b>Logon using an admin account and create a table called temps3 to hold sensor data:</b> The table contains the following fields: device -- VARCHAR, Name of the device that logged the data datetime -- DATETIME, Date time in ISO 8601 format YYYY-MM-DD HH:MM:SS temp -- FLOAT, temperature data hum -- FLOAT, humidity data </pre> End of explanation %%sql mysql://admin:[email protected] CREATE USER 'user1'@'%' IDENTIFIED BY 'logger'; GRANT SELECT, INSERT, DELETE, UPDATE ON pidata.temps3 TO 'user1'@'%'; FLUSH PRIVILEGES; %sql select * from mysql.user; Explanation: Next we will create a user to access the newly created table that will be used by the Raspberry Pi program Example: Start a connection using an admin account, create a new user called user1. Grant limited privileges to the pidata.temps3 table Note: Creating a user with @'%' allows the user to access the database from any host End of explanation %%sql mysql://user1:[email protected]/pidata select * from temps3; Explanation: Next we will test access to the newly created table using the new user Start a new connection using the new user End of explanation for x in range(10): %sql INSERT INTO temps3 (device,datetime,temp,hum) VALUES('pi222',date(now()),73.2,22.0); %sql SELECT * FROM temps3; Explanation: Let's add some test data to make sure we can insert using the new user End of explanation %sql DELETE FROM temps3; %sql SELECT * FROM temps3; %%sql mysql://admin:[email protected] drop user if exists 'user1'@'%'; %sql select * from mysql.user; Explanation: Now we will delete the rows in the database End of explanation
5,977
Given the following text description, write Python code to implement the functionality described below step by step Description: License Copyright (C) 2017 J. Patrick Hall, [email protected] Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions Step1: Load 4-dimensional iris data set 4 dimensions is too many to plot Step2: Create symmetrical covariance matrix for PCA Covariance $C_{i,j}$ measures the amount one feature $x_i$ changes with another feature $x_j$ for all the features $j$ in the data set $X$. \begin{equation} C_{i, j} = \frac{1}{N} x_i x_j, \text{ } x_i, x_j \in X_j \end{equation} Step3: Eigen decomposition (a very important type of matrix factorization in machine learning) Eigen decomposition of a covariance or correlation matrix is known as principal components analysis (PCA). Eigen decomposition involves calculating two matrices $\mathbf{Q}$ and $\mathbf{\Lambda}$, such that the covariance or correlation matrix $\mathbf{C} = \mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{-1}$, where $\mathbf{Q}$ is a p x p matrix of eigenvectors and $\mathbf{\Lambda}$ is a diagonal, p x p matrix of eigenvalues. Eigenvectors are orthogonal vectors in the directions of the highest variance in the data matrix. Eigenvalues determine the length of the eigenvectors and eigenvectors are ranked by the magnitude of their corresponsing eigenvalue. The eigenvalue with the largest magnitude corresponds to the eigenvector which spans the direction of the highest variance in the original data set and so on. Eigen decomposition \begin{equation} \mathbf{C} = \mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{-1} \end{equation} \begin{equation} \mathbf{C}\mathbf{Q} = \mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{-1}\mathbf{Q} \end{equation} \begin{equation} \mathbf{C}\mathbf{Q} = \mathbf{Q}\mathbf{\Lambda} \end{equation} The above equation can be decomposed in sets of simultaneous equations. For any eigenvector, $\mathbf{q}_j$ Step4: Use eigenvectors to perform feature extraction The original data $\mathbf{X}$ can be projected onto the new space defined by the eigenvectors $\mathbf{Q}$ using the dot product $\mathbf{XQ}$. These new vectors are known as the principal components of $\mathbf{X}$. Using a reduced set of n eigenvectors (i.e. the first n columns of $Q$) to carry out the dot product $\mathbf{XQ_{n}}$, will result in a compressed, n-dimensional representation of $\mathbf{X}$ in which the proportion of total variance has been maximized. Extract two features and plot We could not plot the four dimensions in the data set easily before performing PCA Step5: Extract three features and plot
Python Code: # numpy for matrix operations import numpy as np # matplotlib for plotting from matplotlib import pyplot as plt from mpl_toolkits.mplot3d import Axes3D from mpl_toolkits.mplot3d import proj3d %matplotlib inline # scikit for data set and easy standardization from sklearn import datasets from sklearn import preprocessing Explanation: License Copyright (C) 2017 J. Patrick Hall, [email protected] Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. Simple feature extraction with PCA - numpy and scikit-Learn Imports End of explanation print('Input features: \n', datasets.load_iris().feature_names) print() print('Target classes: \n', datasets.load_iris().target_names) # load and standardize data iris = datasets.load_iris().data iris = preprocessing.scale(iris) species = datasets.load_iris().target Explanation: Load 4-dimensional iris data set 4 dimensions is too many to plot End of explanation covariance_matrix = np.cov(iris, rowvar=False) print('Covariance Matrix:\n', covariance_matrix) Explanation: Create symmetrical covariance matrix for PCA Covariance $C_{i,j}$ measures the amount one feature $x_i$ changes with another feature $x_j$ for all the features $j$ in the data set $X$. \begin{equation} C_{i, j} = \frac{1}{N} x_i x_j, \text{ } x_i, x_j \in X_j \end{equation} End of explanation eigen_values, eigen_vectors = np.linalg.eig(covariance_matrix) print('Eigen Values:\n', eigen_values) print() print('Eigen Vectors:\n', eigen_vectors) Explanation: Eigen decomposition (a very important type of matrix factorization in machine learning) Eigen decomposition of a covariance or correlation matrix is known as principal components analysis (PCA). Eigen decomposition involves calculating two matrices $\mathbf{Q}$ and $\mathbf{\Lambda}$, such that the covariance or correlation matrix $\mathbf{C} = \mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{-1}$, where $\mathbf{Q}$ is a p x p matrix of eigenvectors and $\mathbf{\Lambda}$ is a diagonal, p x p matrix of eigenvalues. Eigenvectors are orthogonal vectors in the directions of the highest variance in the data matrix. Eigenvalues determine the length of the eigenvectors and eigenvectors are ranked by the magnitude of their corresponsing eigenvalue. The eigenvalue with the largest magnitude corresponds to the eigenvector which spans the direction of the highest variance in the original data set and so on. 
Eigen decomposition \begin{equation} \mathbf{C} = \mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{-1} \end{equation} \begin{equation} \mathbf{C}\mathbf{Q} = \mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{-1}\mathbf{Q} \end{equation} \begin{equation} \mathbf{C}\mathbf{Q} = \mathbf{Q}\mathbf{\Lambda} \end{equation} The above equation can be decomposed in sets of simultaneous equations. For any eigenvector, $\mathbf{q}_j$: \begin{equation} \mathbf{C}\mathbf{q}_j = \mathbf{q}_j\lambda_j \end{equation} \begin{equation} \mathbf{C}\mathbf{q}_j = \lambda_j\mathbf{q}_j \end{equation} \begin{equation} \mathbf{C}\mathbf{q}_j - \lambda_j\mathbf{q}_j = 0 \end{equation} \begin{equation} (\mathbf{C} - \lambda_j\mathbf{I})\mathbf{q}_j = 0 \end{equation} Because $\mathbf{q}$ comes from the non-singular matrix of eigenvectors, $(\mathbf{C} - \lambda_j\mathbf{I})$, and thus $det(\mathbf{C} - \lambda\mathbf{I})$, must equal 0. Which implies a polynomial equation in which roots $\lambda_{j}$ can be determined using: \begin{equation} \prod_{j} (\mathbf{c}{j,j} - \lambda{j}) = 0, \text{for } j \leq p \end{equation} Once all $\lambda_{j}$, and hence $\mathbf{\Lambda}$, have been determined, $\mathbf{Q}$ can also be determined by back-solving for the columns of $\mathbf{Q}$ using $(\mathbf{C} - \lambda_j\mathbf{I})\mathbf{q}_j = 0$. Use numpy to find eigenvalues and eigenvectors Numpy ranks eigenvectors by their correct magnitude automatically. End of explanation two_PCs = iris.dot(eigen_vectors[:, :2]) fig = plt.figure(figsize=(6,6)) setosa = plt.scatter(two_PCs[0:50, 0], two_PCs[0:50, 1], alpha=0.5, color='blue') versicolor = plt.scatter(two_PCs[50:100, 0], two_PCs[50:100, 1], alpha=0.5, color='red') virginica = plt.scatter(two_PCs[100:150, 0], two_PCs[100:150, 1], alpha=0.5, color='green') plt.title('Two Dimensional Feature Extraction from Iris Data') plt.xlabel('First Principal Component') plt.ylabel('Second Principal Component') plt.legend([setosa, versicolor, virginica], ['Setosa', 'Versicolor', 'virginica']) plt.show() Explanation: Use eigenvectors to perform feature extraction The original data $\mathbf{X}$ can be projected onto the new space defined by the eigenvectors $\mathbf{Q}$ using the dot product $\mathbf{XQ}$. These new vectors are known as the principal components of $\mathbf{X}$. Using a reduced set of n eigenvectors (i.e. the first n columns of $Q$) to carry out the dot product $\mathbf{XQ_{n}}$, will result in a compressed, n-dimensional representation of $\mathbf{X}$ in which the proportion of total variance has been maximized. Extract two features and plot We could not plot the four dimensions in the data set easily before performing PCA End of explanation three_PCs = iris.dot(eigen_vectors[:, :3]) fig = plt.figure(figsize=(7, 7)) ax = plt.axes(projection='3d') setosa = ax.scatter(three_PCs [0:50, 0], three_PCs [0:50, 1], three_PCs [0:50, 2], alpha=0.5, color='blue') versicolor = ax.scatter(three_PCs [50:100, 0], three_PCs [50:100, 1], three_PCs [50:100, 2], alpha=0.5, color='red') virginica = ax.scatter(three_PCs [100:150, 0], three_PCs [100:150, 1], three_PCs [100:150, 2], alpha=0.5, color='green') plt.title('Three Dimensional Feature Extraction from Iris Data') ax.set_xlabel('First Principal Component') ax.set_ylabel('Second Principal Component') ax.set_zlabel('Third Principal Component') plt.legend([setosa, versicolor, virginica], ['Setosa', 'Versicolor', 'virginica'], bbox_to_anchor=(1.05, 0.5), loc=3, borderaxespad=0.) _ = plt.show() Explanation: Extract three features and plot End of explanation
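As a quick cross-check of the hand-rolled eigen decomposition (this cell is an addition, not part of the original notebook), scikit-learn's PCA on the same standardized iris matrix should give projections that agree with the ones above up to sign flips and component ordering, and explained-variance ratios that match the normalized eigenvalues. It reuses the iris matrix and eigen_values computed above.

from sklearn.decomposition import PCA

pca = PCA()                         # keep all four components
sklearn_scores = pca.fit_transform(iris)
print('sklearn explained variance ratios:', pca.explained_variance_ratio_)
print('ratios from the eigenvalues above: ',
      np.sort(eigen_values)[::-1] / eigen_values.sum())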
5,978
Given the following text description, write Python code to implement the functionality described below step by step Description: Traffic flow with an on-ramp In this chapter we return to the LWR traffic model that we investigated in two earlier chapters. The LWR model involves a single length of one-way road; in this chapter we will think of this road as a highway. On a real highway, there are cars entering and leaving the highway from other roads. In general, real traffic flow must be modeled on a network of roads. the development of continuum traffic models based on LWR and other simple models is an important and very active area of research; see for instance <cite data-cite="holden1995mathematical"><a href="riemann.html#holden1995mathematical">(Holden, 1995)</a></cite> for an investigation of the Riemann problem at a junction, and <cite data-cite="garavello2006traffic"><a href="riemann.html#garavello2006traffic">(Garavello, 2006)</a></cite> for an overview of the area. Here we take a first step in that direction by considering the presence of a single on-ramp, where traffic enters the highway. Let the flux of cars from the on-ramp be denoted by $D$; we assume that $D$ is constant in time but concentrated at a single point ($x=0$ in space). Our model equation then becomes \begin{align} \label{TFR Step1: Light traffic, little inflow What happens when the on-ramp has a relatively small flux of cars, and the highway around the ramp is not congested? There will be a region of somewhat higher density starting at the ramp and propagating downstream. This is demonstrated in the example below. Step2: In contrast to the LWR model without a ramp, here we see three constant states separated by two waves in the Riemann solution. The first is a stationary wave where the traffic density abruptly increases due to the cars entering from the ramp, as predicted by \eqref{TFR Step3: The influx of cars from the ramp here causes a traffic jam that moves upstream. As we discuss further below, some real highways limit the influx in order to prevent this. Experiment with the value of $\rho_r$ and in the example above. Can you give a precise condition that determines whether the shock will move left or right? Light traffic, heavy inflow Now we come to the most interesting case. Since the maximum flux is $1/4$, it follows that if $f(\rho_l) + D = 1/4$, then the oncoming traffic from the highway and the on-ramp can just fit onto the road at $x=0$. The smaller value of $\rho_l$ for which this equation holds is $\rho^* = 1/4 - \sqrt{D}$. If $\rho_l$ exceeds this value, then not all the cars arriving at $x=0$ can fit on the road there; since our model gives priority to the cars coming from the on-ramp, the road to the left of $x=0$ must suffer a traffic jam -- a shock wave moving to the left. As long as $\rho_r < 1/2$, the value of $\rho^+$ will be exactly $1/2$, so as to maximize the flux through $x=0$. Downstream, a rarefaction will form as cars accelerate into the less-congested highway. Step4: Notice that in the extreme case that $D=1/4$, the cars from the on-ramp completely block the cars coming from the left; those cars come to a complete stop and never pass $x=0$. This may seem surprising, since the density of cars to the right of $x=0$ is just $1/2$. However, since the flux must increase by $1/4$ at $x=0$, it follows that the flux just to the left of $x=0$ must be zero. 
Counterintuitively, when two roads merge, limiting the influx of traffic from one or both of them can significantly increase the overall rate of traffic flow. Contrary to our model, the usual approach is to prioritize the cars already on the highway and restrict the influx of cars from an on-ramp. This is done in practice nowadays on many highway on-ramps in congested areas. Congested upstream, uncongested downstream Step5: Congested on both sides Next let us consider what happens if the incoming traffic from the upstream highway and the on-ramp exceeds the maximum flux, but the road is also congested for $x>0$ (i.e., $\rho_r>1/2$). Then no waves can travel to the right, and a left-going shock will form. If downstream congestion is too high, then the traffic from the on-ramp will not all be able to enter the highway and no solution is possible in this model (see the second condition for existence, above). Step6: Further examples Step7: Interactive solver
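The two existence conditions described in this chapter (the ramp flux D cannot exceed the maximum flux of 1/4, and with a congested downstream state $\rho_r > 1/2$ the ramp flux must also fit within $f(\rho_r)$) are easy to encode as a small helper. The sketch below is an editor's illustration, not part of the original notebook, and the function name is made up:
def flux(rho):
    return rho * (1.0 - rho)

def riemann_solution_exists(rho_r, D):
    # Condition 1: the ramp flux cannot exceed the maximum highway flux f_max = 1/4.
    if D > 0.25:
        return False
    # Condition 2: if the downstream road is congested (rho_r > 1/2), no right-going wave
    # is possible, so all of the ramp flux must fit within the downstream flux f(rho_r).
    if rho_r > 0.5 and D > flux(rho_r):
        return False
    return True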
Python Code: %matplotlib inline import numpy as np import matplotlib.pyplot as plt from clawpack import pyclaw from clawpack import riemann from ipywidgets import interact from ipywidgets import widgets from exact_solvers import traffic_ramps from utils import riemann_tools def c(rho, xi): return (1.-2*rho) def make_plot_function(rho_l,rho_r,D): states, speeds, reval, wave_types = traffic_ramps.exact_riemann_solution(rho_l,rho_r,D) def plot_function(t): ax = riemann_tools.plot_riemann(states,speeds,reval,wave_types,t=t,t_pointer=0, extra_axes=True,variable_names=['Density']); # Characteristic plotting isn't working right for this problem riemann_tools.plot_characteristics(reval,c,None,ax[0]) traffic_ramps.phase_plane_plot(rho_l,rho_r,D,axes=ax[2]) ax[1].set_ylim(0,1) plt.show() return plot_function def riemann_solution(rho_l, rho_r, D): plot_function = make_plot_function(rho_l,rho_r,D) interact(plot_function, t=widgets.FloatSlider(value=0.1,min=0,max=.9)); Explanation: Traffic flow with an on-ramp In this chapter we return to the LWR traffic model that we investigated in two earlier chapters. The LWR model involves a single length of one-way road; in this chapter we will think of this road as a highway. On a real highway, there are cars entering and leaving the highway from other roads. In general, real traffic flow must be modeled on a network of roads. the development of continuum traffic models based on LWR and other simple models is an important and very active area of research; see for instance <cite data-cite="holden1995mathematical"><a href="riemann.html#holden1995mathematical">(Holden, 1995)</a></cite> for an investigation of the Riemann problem at a junction, and <cite data-cite="garavello2006traffic"><a href="riemann.html#garavello2006traffic">(Garavello, 2006)</a></cite> for an overview of the area. Here we take a first step in that direction by considering the presence of a single on-ramp, where traffic enters the highway. Let the flux of cars from the on-ramp be denoted by $D$; we assume that $D$ is constant in time but concentrated at a single point ($x=0$ in space). Our model equation then becomes \begin{align} \label{TFR:balance_law} \rho_t + \left(\rho(1-\rho)\right)_x & = D \delta(x), \end{align} where $\delta(x)$ is the Dirac delta function. Equation \eqref{TFR:balance_law} is our first example of a balance law. The term on the right hand side does not take the form of a flux, and the total mass of cars is not conserved. We refer to the right-hand-side term as a source term -- quite appropriately in the present context, since it represents a source of cars entering the highway. In a more realistic model, traffic on the on-ramp itself would also be modeled. However, our goal here is primarily to illustrate the effect of a source term like that in \eqref{TFR:balance_law} on the solution of the Riemann problem. Typically, source terms have only an infinitesimal effect on the Riemann solution over short times, since they are distributed in space. The term considered here is an example of a singular source term; it has a non-negligible effect on the Riemann solution because it is concentrated at $x=0$. Recall that the flux of cars in the LWR model is given by $$f(\rho) = \rho(1-\rho)$$ where $0 \le \rho \le 1$. Thus the maximum flux is $f_\text{max} = 1/4$, achieved when $\rho=1/2$. We assume always that $D \le 1/4$, so that all the cars arriving from the on-ramp can enter the highway. 
As discussed already in the chapter on traffic with a varying speed limit, the flux of cars must be continuous everywhere, and in particular at $x=0$. Let $\rho^-, \rho^+$ denote the density $\rho$ in the limit as $\xi \to 0$ from the left and right, respectively. Then this condition means that \begin{align} \label{TFR:source_balance} f(\rho^-) + D = f(\rho^+). \end{align} For $D\ne0$, this condition implies that a stationary jump exists at $x=0$, similar to the stationary jump we found in the case of a varying speed limit. One approach to solving the Riemann problem is to focus on finding $\rho^-$ and $\rho^+$; the wave structure on either side of $x=0$ can then be deduced in the same way we have done for problems without a source term -- connecting $\rho_l$ to $\rho_-$ and $\rho_r$ to $\rho_+$ by entropy-satisfying shock or rarefaction waves. This approach was undertaken by Greenberg et. al. in <cite data-cite="greenberg1997analysis"><a href="riemann.html#greenberg1997analysis">(Greenberg, 1997)</a></cite> for Burgers' equation; the main results (Table 1 therein) can be transferred to the LWR model in a straightforward way. As they noted, there is typically more than one choice of $(\rho^+, \rho^-)$ that leads to an entropy-satisfying weak solution; some additional admissibility condition is required in order to choose one. Herein we will motivate the choice of $\rho^+, \rho^-$ based on physical considerations; the resulting values agree with those of Greenberg et. al. (see also <cite data-cite="isaacson1992nonlinear"><a href="riemann.html#isaacson1992nonlinear">(Isaacson, 1992)</a></cite> for yet another approach that yields the same admissibility conditions). Spatially-varying fluxes and source terms The similarity between the existence of an on-ramp at $x=0$ and a change in the speed limit at $x=0$ can be seen mathematically as follows. For the varying speed limit, we studied a conservation law of the form $$\rho_t + f(\rho,x)_x =0.$$ Using the chain rule, this is equivalent to $$\rho_t + f_\rho(\rho,x) \rho_x = - f_x(\rho,x).$$ Hence the variable-coefficient system can also be viewed as a balance law. If $f$ is discontinuous at $x=0$, then $f_x$ is a delta function. Notice that the presence of an on-ramp (positive source term) corresponds to a decrease in the speed limit. This makes sense -- both of these have the effect of reducing the rate at which cars from upstream ($x<0$) can proceed downstream ($x>0$). Thus the Riemann solutions we find in this chapter will be similar to those found in the presence of a decrease in speed limit. In the remainder of the chapter, we investigate the solution of the Riemann problem for this balance law. Conditions for existence of a solution In our model, cars entering from the on-ramp are always given priority. In a real-world scenario, traffic on the on-ramp could also back up and the flux $D$ from the ramp could be decreased. However, a much more complicated model would be required in order to account for this; see <cite data-cite="delle2014pde"><a href="riemann.html#delle2014pde">(delle Monache, 1992)</a></cite> for an example of such a model. The flux $D$ from the on-ramp cannot raise the density above $\rho_\text{max}=1$ (representing bumper-to-bumper traffic). This leads to some restrictions on $D$ in order to guarantee existence of a solution to the Riemann problem: $D \le 1/4$. This condition is necessary since otherwise the flux from the on-ramp would exceed the maximum flux of the highway, even without any other oncoming traffic. 
If $\rho_r > 1/2$, then $D \le f(\rho_r)$. The reason for this is as follows: if $\rho_r > 1/2$, then characteristics to the right of $x=0$ go to the left. Hence there cannot be any right-going wave (a more detailed analysis shows that a right-going transonic shock is impossible), and it must be that $\rho^+ = \rho_r$. Thus $D = f(\rho_r) - f(\rho^-) \le f(\rho_r)$. It turns out that these two conditions are also sufficient for the existence of a solution to the Riemann problem. End of explanation rho_l = 0.2 rho_r = 0.2 D = 0.05 riemann_solution(rho_l, rho_r, D) traffic_ramps.plot_car_trajectories(rho_l,rho_r,D) Explanation: Light traffic, little inflow What happens when the on-ramp has a relatively small flux of cars, and the highway around the ramp is not congested? There will be a region of somewhat higher density starting at the ramp and propagating downstream. This is demonstrated in the example below. End of explanation rho_l = 0.2 rho_r = 0.8 D = 0.05 riemann_solution(rho_l, rho_r, D) traffic_ramps.plot_car_trajectories(rho_l,rho_r,D,xmax=0.2) Explanation: In contrast to the LWR model without a ramp, here we see three constant states separated by two waves in the Riemann solution. The first is a stationary wave where the traffic density abruptly increases due to the cars entering from the ramp, as predicted by \eqref{TFR:source_balance}. Indeed that condition determines the middle state $\rho_m$ as the solution of $$f(\rho_l) + D = f(\rho_m)$$ For given values of $\rho_l$ and $D$, this is a quadratic equation for $\rho_m$, with solution \begin{align} \label{TFR:qm1} \rho_m = \frac{1 \pm \sqrt{1-4(f(\rho_l)+D)}}{2}. \end{align} As in the case of varying speed limit, we can choose the physically relevant solution by applying the condition that the characteristic speed not change sign at $x=0$. This dictates that we choose the minus sign, so that $\rho_m<1/2$, since $\rho_l < 1/2$. Downstream, there is a rarefaction as these cars accelerate and again spread out. The solution just proposed will break down if either of the following occur: If the downstream density $\rho_r$ is greater than $\rho_m$, then a shock wave will form rather than a rarefaction. If the combined flux from upstream and from the ramp exceeds $f_\text{max}$, there will be a shock wave moving upstream due to congestion at the mouth of the on-ramp. This happens if $f(\rho_l) + D > 1/4$; notice that this is precisely the condition for the value of $\rho_m$ in \eqref{TFR:qm1} to become complex. We consider each of these scenarios in the following sections. Uncongested upstream, congested downstream: transonic shock What if upstream traffic and flux from the on-ramp are light, but traffic is significantly heavier just after the on-ramp? In this case a shock wave will form, since if $\rho_r > \rho_l$, characteristics from the left and right regions must cross. The shock may move to the left or right, depending on how congested the downstream segment is. In either case, there will again be a stationary jump at $x=0$ due to the cars entering from the on-ramp. End of explanation rho_l = 0.2 rho_r = 0.2 D = 0.25 riemann_solution(rho_l, rho_r, D) traffic_ramps.plot_car_trajectories(rho_l,rho_r,D) Explanation: The influx of cars from the ramp here causes a traffic jam that moves upstream. As we discuss further below, some real highways limit the influx in order to prevent this. Experiment with the value of $\rho_r$ and in the example above. 
Can you give a precise condition that determines whether the shock will move left or right? Light traffic, heavy inflow Now we come to the most interesting case. Since the maximum flux is $1/4$, it follows that if $f(\rho_l) + D = 1/4$, then the oncoming traffic from the highway and the on-ramp can just fit onto the road at $x=0$. The smaller value of $\rho_l$ for which this equation holds is $\rho^* = 1/4 - \sqrt{D}$. If $\rho_l$ exceeds this value, then not all the cars arriving at $x=0$ can fit on the road there; since our model gives priority to the cars coming from the on-ramp, the road to the left of $x=0$ must suffer a traffic jam -- a shock wave moving to the left. As long as $\rho_r < 1/2$, the value of $\rho^+$ will be exactly $1/2$, so as to maximize the flux through $x=0$. Downstream, a rarefaction will form as cars accelerate into the less-congested highway. End of explanation rho_l = 0.6 rho_r = 0.2 D = 0.12 riemann_solution(rho_l, rho_r, D) traffic_ramps.plot_car_trajectories(rho_l,rho_r,D) Explanation: Notice that in the extreme case that $D=1/4$, the cars from the on-ramp completely block the cars coming from the left; those cars come to a complete stop and never pass $x=0$. This may seem surprising, since the density of cars to the right of $x=0$ is just $1/2$. However, since the flux must increase by $1/4$ at $x=0$, it follows that the flux just to the left of $x=0$ must be zero. Counterintuitively, when two roads merge, limiting the influx of traffic from one or both of them can significantly increase the overall rate of traffic flow. Contrary to our model, the usual approach is to prioritize the cars already on the highway and restrict the influx of cars from an on-ramp. This is done in practice nowadays on many highway on-ramps in congested areas. Congested upstream, uncongested downstream End of explanation rho_l = 0.6 rho_r = 0.8 D = 0.12 riemann_solution(rho_l, rho_r, D) traffic_ramps.plot_car_trajectories(rho_l,rho_r,D) Explanation: Congested on both sides Next let us consider what happens if the incoming traffic from the upstream highway and the on-ramp exceeds the maximum flux, but the road is also congested for $x>0$ (i.e., $\rho_r>1/2$). Then no waves can travel to the right, and a left-going shock will form. If downstream congestion is too high, then the traffic from the on-ramp will not all be able to enter the highway and no solution is possible in this model (see the second condition for existence, above). End of explanation rho_l = 0.1 rho_r = 0.6 D = 0.08 riemann_solution(rho_l, rho_r, D) traffic_ramps.plot_car_trajectories(rho_l,rho_r,D) rho_l = 1.0 rho_r = 0.7 D = 0.1 riemann_solution(rho_l, rho_r, D) traffic_ramps.plot_car_trajectories(rho_l,rho_r,D) Explanation: Further examples End of explanation f = lambda q: q*(1-q) def plot_all(rho_l,rho_r,D): states, speeds, reval, wave_types = traffic_ramps.exact_riemann_solution(rho_l,rho_r,D) ax = riemann_tools.plot_riemann(states,speeds,reval,wave_types,t=0.5,extra_axes=2); riemann_tools.plot_characteristics(reval,c,None,ax[0]) traffic_ramps.phase_plane_plot(rho_l,rho_r,D,axes=ax[2],show=False) traffic_ramps.plot_car_trajectories(rho_l,rho_r,D,axes=ax[3]) plt.show() interact(plot_all, rho_l = widgets.FloatSlider(min=0.,max=1.,step=0.01,value=0.4,description=r'$\rho_l$'), rho_r = widgets.FloatSlider(min=0.,max=1.,step=0.01,value=0.7,description=r'$\rho_r$'), D = widgets.FloatSlider(min=0.,max=0.25,step=0.01,value=0.1), ); Explanation: Interactive solver End of explanation
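To make two of the quantities discussed above concrete: the middle state comes from solving $f(\rho_l) + D = f(\rho_m)$ with the minus root, and the left-or-right question for the shock can be settled with the Rankine-Hugoniot speed, which for this flux reduces to $1 - \rho_1 - \rho_2$, so the shock between $\rho_m$ and $\rho_r$ moves left exactly when $\rho_m + \rho_r > 1$. The sketch below is the editor's, not taken from the notebook:
import numpy as np
f = lambda q: q*(1-q)    # the notebook's flux function

def middle_state(rho_l, D):
    # minus root of f(rho_m) = f(rho_l) + D, giving rho_m < 1/2 so that the
    # characteristic speed does not change sign at x = 0
    return 0.5*(1.0 - np.sqrt(1.0 - 4.0*(f(rho_l) + D)))

def shock_speed(rho_1, rho_2):
    # Rankine-Hugoniot speed for f(rho) = rho*(1-rho): (f(rho_1)-f(rho_2))/(rho_1-rho_2) = 1 - rho_1 - rho_2
    return 1.0 - rho_1 - rho_2

rho_m = middle_state(0.2, 0.05)     # first example above: rho_l = 0.2, D = 0.05
print(rho_m, f(rho_m) - f(0.2))     # the flux jump at x = 0 equals D
print(shock_speed(rho_m, 0.8))      # negative, so the shock in the second example moves upstream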
5,979
Given the following text description, write Python code to implement the functionality described below step by step Description: Starting comments Charles Le Losq, Geophysical Laboratory, Carnegie Institution for Science. 7 April 2015. This IPython notebook is aimed to show how you can easily fit a Raman spectrum with Python tools, for free and, in my opinion, in an elegant way. This fitting procedure is much less "black-box" than existing GUI softwares. It probably is a little bit harder to learn for the newcomer, but rewards are much greater since you can control all the procedure in every single detail. In this example, we will fit the 850-1300 cm$^{-1}$ portion of a Raman spectrum of a lithium tetrasilicate glass Li$_2$Si$_4$O$_9$, the name will be abbreviated LS4 in the following. For further references for fitting Raman spectra of glasses, please see for instance Step1: We need in particular the library lmfit created and maintained by Matt Newville, CARS, University of Chicago, and available at http Step2: And we import rampy (as rp) Step3: Importing and looking at the data Let's first have a look at the spectrum Step4: We are interested in fitting the 870-1300 cm$^{-1}$ portion of this spectrum, which can be assigned to the various symmetric and assymetric stretching vibrations of Si-O bonds in the SiO$_2$ tetrahedra present in the glass network (see the above cited litterature for details). Baseline Removal First thing we notice in Fig. 1, we have to remove a baseline because this spectrum is shifted from 0 by some "background" scattering. For that, we can use the rp.baseline() function Step5: Now we will do some manipulation to have the interested portion of spectrum in a single variable. We will assume that the errors have not been drastically affected by the correction process (in some case it can be, but this one is quite straightforward), such that we will use the initial relative errors stored in the "ese0" variable. Step6: And let's plot the portion of interest before and after baseline subtraction Step7: Last steps before fitting the spectrum So here we are. We have the corrected spectrum in the sample variable. But before going further away,we need to write a function for the optimisation. It will return the difference between the calculated and measured spectrum, following the guideline provived by lmfit (http Step8: Note that in the above function, I did not applied the square to (model - data). This is implicitely done by lmfit (see http Step9: For further details on the Parameters() object, I invite you to look at this page Step10: This avoids any divergence of the fitting procedure regarding the hald-width, because with free frequencies and badly estimated half-width and intensities, the fitting procedure always tends to extremely broaden the peaks and put them at similar frequencies, with strong overlapping. Starting the fitting procedure by fixing the parameter we know the best, i.e. the frequencies, avoid such complications. Then, we need to use a large-scale algorithm quite robust for fitting. The levenberg-marquart algorithm fails on such fitting problem in my experience. Let's choose the Nelder and Mead algorithm for this example Step11: And now we release the frequencies Step12: We can now extract the various things generated by lmfit as well as the peaks Step13: And let's have a look at the fitted spectrum Step14: Ok, we can test to change the algorithm and use the Levenberg-Marquart one which is well-used for simple problems by a lot of people. 
We will re-initialize the Params() object and run the entire code written above again. Step15: The comparison of Fig. 3 and 4 shows small differences. In this case, and because we have a good error model, the LM algorithm converges toward results similar to those of the Nelder-Mead algorithm. You can try to run again the calculation with removing the "sigma" input in the "minimize" function used above. You will see that the results will diverge much more than in this case. A convenient thing about the LM algorithm is that it allows to estimate the errors on the fitting parameters. This is not possible with gradient-less algorithms such as the Nelder-Mear or the Powell algorithms. For the latters, I will give a piece of code at the end of this notebook that allows to estimate good errors on parameters through bootrapping. The downside of the LM algorithm is that, in my experience, it fails if the envelop of bands to fit is broader than the one used in this example, because it seachs at all costs to fit the spectrum as good as possible... This typically results in extrem broadening and overlapping of the peaks you try to fit. A way to resolve this issue if the use of the LM algorithm is really needed is to put tigther constrains on the peak half-widths. But another way is to use a more global algorithm less prone to diverge from the initial estimations. The Nelder-Mead, Powell (Powell, 1964, Computer Journal 7 (2) Step16: The CG algorithm returns a result close to the Nelder-Mead and the LM algorithms. A bad thing about the CG algorithm is that it is extremely slow in the Scipy implementation... It is (nearly) acceptable for one fit, but for bootstrapping 100 spectra, it is not a good option at all. As a last one, we can see what the results look like with the Powell algorithm Step17: You see in Fig. 6 that the results are, again, close to those of the other algorithms, at the exception of the two last peaks. The intensity and the frequency of the peak near 1200 cm$^{-1}$ is higher in this fit than in the others. So one important thing that has to be remembered is that, with the same parameter inputs, you will obtain different results with using different fitting algorithms. The above results are close because the fitting example is quite simple. Actually, all the results given above seem reasonable. The experience with other spectra from other silicate and aluminosilicate glasses is that the Nelder-Mead and Powell algorithms will provide the most robust ways to fit the spectra. Error estimations Errors can be estimated with using the "confidence" function if you used the Levenberg-Marquardt algorithm. See the examples here Step18: Now we will define how much new spectra we want to generate (the nbsample option of the bootstrap function), and we will run the previous function. Step19: Now, we will create a loop which is going to look at each spectrum in the data_resampled variable, and to fit them with the procedure already described. For doing so, we need to declare a couple of variables to record the bootstrap mean fitting error, in order to see if we generated enought samples to obtain a statistically representative bootstrapping process, and to record each set of parameters obtained for each bootstrapped spectrum. Step20: We can have a view at the mean values and standard deviation of the parameters that have been generated by the bootstrapping Step21: Those errors are probably the best estimates of the errors that affect your fitting parameters. 
You can add another bootstrapping function that perturbs the initial parameter estimates by, say, 5 percent, and you will then have a complete and coherent estimation of the errors affecting the fits. But in most cases, the errors generated by the above bootstrapping technique are already quite robust. We can check whether we generated enough samples for valid bootstrap results by looking at how the mean values of the parameters and their errors converge. As a shortcut, we can also look at how the summed errors of the parameters change with the iteration number. If the summed error becomes constant, we can say that we have generated enough bootstrap samples to obtain a statistically significant result.
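One addition that pairs naturally with this bootstrap (an editor's sketch, not in the original notebook): besides the mean and standard deviation, percentile confidence intervals can be read directly off the bootstrap samples. This assumes para_output is the (5, 3, nboot) array of fitted parameters built in the bootstrap loop further down:
import numpy as np
# axis 0 = peak, axis 1 = (amplitude, frequency, half-width), axis 2 = bootstrap sample
ci_low, ci_high = np.percentile(para_output, [2.5, 97.5], axis=2)
print(ci_low.shape, ci_high.shape)   # both (5, 3): a 95% interval for every fitted parameter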
Python Code: %matplotlib inline import time import numpy as np # For data manipulation import scipy # For data manipulation import random import matplotlib.pyplot as plt # For doing the plots Explanation: Starting comments Charles Le Losq, Geophysical Laboratory, Carnegie Institution for Science. 7 April 2015. This IPython notebook is aimed to show how you can easily fit a Raman spectrum with Python tools, for free and, in my opinion, in an elegant way. This fitting procedure is much less "black-box" than existing GUI softwares. It probably is a little bit harder to learn for the newcomer, but rewards are much greater since you can control all the procedure in every single detail. In this example, we will fit the 850-1300 cm$^{-1}$ portion of a Raman spectrum of a lithium tetrasilicate glass Li$_2$Si$_4$O$_9$, the name will be abbreviated LS4 in the following. For further references for fitting Raman spectra of glasses, please see for instance: Virgo et al., 1980, Science 208, p 1371-1373; Mysen et al., 1982, American Mineralogist 67, p 686-695; McMillan, 1984, American Mineralogist 69, p 622-644; Mysen, 1990, American Mineralogist 75, p 120-134; Le Losq et al., 2014, Geochimica et Cosmochimica Acta 126, p 495-517 and Neuville et al., 2014, Reviews in Mineralogy and Geochemistry 78. We will use the optimization algorithms of Scipy together with the library lmfit (http://lmfit.github.io/lmfit-py/) that is extremely useful to add constrains to the fitting procedure. Importing libraries So the first part will be to import a bunch of libraries for doing various things End of explanation import lmfit from lmfit.models import GaussianModel Explanation: We need in particular the library lmfit created and maintained by Matt Newville, CARS, University of Chicago, and available at http://lmfit.github.io/lmfit-py/. See the documentation on the website for installing and using this one. End of explanation import rampy as rp #Charles' libraries and functions Explanation: And we import rampy (as rp) End of explanation # get the spectrum to deconvolute, with skipping header and footer comment lines from the spectrometer inputsp = np.genfromtxt("./data/LS4.txt",skip_header=20, skip_footer=43) # create a new plot for showing the spectrum plt.figure(figsize=(5,5)) plt.plot(inputsp[:,0],inputsp[:,1],'k.',markersize=1) plt.xlabel("Raman shift, cm$^{-1}$", fontsize = 12) plt.ylabel("Normalized intensity, a. u.", fontsize = 12) plt.title("Fig. 1: the raw data",fontsize = 12,fontweight="bold") Explanation: Importing and looking at the data Let's first have a look at the spectrum End of explanation bir = np.array([(860,874),(1300,1330)]) # The regions where the baseline will be fitted y_corr, y_base = rp.baseline(inputsp[:,0],inputsp[:,1],bir,'poly',polynomial_order=3)# We fit a polynomial background. Explanation: We are interested in fitting the 870-1300 cm$^{-1}$ portion of this spectrum, which can be assigned to the various symmetric and assymetric stretching vibrations of Si-O bonds in the SiO$_2$ tetrahedra present in the glass network (see the above cited litterature for details). Baseline Removal First thing we notice in Fig. 1, we have to remove a baseline because this spectrum is shifted from 0 by some "background" scattering. 
For that, we can use the rp.baseline() function End of explanation # signal selection lb = 867 # The lower boundary of interest hb = 1300 # The upper boundary of interest x = inputsp[:,0] x_fit = x[np.where((x > lb)&(x < hb))] y_fit = y_corr[np.where((x > lb)&(x < hb))] ese0 = np.sqrt(abs(y_fit[:,0]))/abs(y_fit[:,0]) # the relative errors after baseline subtraction y_fit[:,0] = y_fit[:,0]/np.amax(y_fit[:,0])*10 # normalise spectra to maximum intensity, easier to handle sigma = abs(ese0*y_fit[:,0]) #calculate good ese Explanation: Now we will do some manipulation to have the interested portion of spectrum in a single variable. We will assume that the errors have not been drastically affected by the correction process (in some case it can be, but this one is quite straightforward), such that we will use the initial relative errors stored in the "ese0" variable. End of explanation # create a new plot for showing the spectrum plt.figure() plt.subplot(1,2,1) inp = plt.plot(x,inputsp[:,1],'k-',label='Original') corr = plt.plot(x,y_corr,'b-',label='Corrected') #we use the sample variable because it is not already normalized... bas = plt.plot(x,y_base,'r-',label='Baseline') plt.xlim(lb,1300) plt.ylim(0,40000) plt.xlabel("Raman shift, cm$^{-1}$", fontsize = 14) plt.ylabel("Normalized intensity, a. u.", fontsize = 14) plt.legend() plt.title('A) Baseline removal') plt.subplot(1,2,2) plt.plot(x_fit,y_fit,'k.') plt.xlabel("Raman shift, cm$^{-1}$", fontsize = 14) plt.title('B) signal to fit') #plt.tight_layout() plt.suptitle('Figure 2', fontsize = 14,fontweight = 'bold') Explanation: And let's plot the portion of interest before and after baseline subtraction: End of explanation def residual(pars, x, data=None, eps=None): #Function definition # unpack parameters, extract .value attribute for each parameter a1 = pars['a1'].value a2 = pars['a2'].value a3 = pars['a3'].value a4 = pars['a4'].value a5 = pars['a5'].value f1 = pars['f1'].value f2 = pars['f2'].value f3 = pars['f3'].value f4 = pars['f4'].value f5 = pars['f5'].value l1 = pars['l1'].value l2 = pars['l2'].value l3 = pars['l3'].value l4 = pars['l4'].value l5 = pars['l5'].value # Using the Gaussian model function from rampy peak1 = rp.gaussian(x,a1,f1,l1) peak2 = rp.gaussian(x,a2,f2,l2) peak3 = rp.gaussian(x,a3,f3,l3) peak4 = rp.gaussian(x,a4,f4,l4) peak5 = rp.gaussian(x,a5,f5,l5) model = peak1 + peak2 + peak3 + peak4 + peak5 # The global model is the sum of the Gaussian peaks if data is None: # if we don't have data, the function only returns the direct calculation return model, peak1, peak2, peak3, peak4, peak5 if eps is None: # without errors, no ponderation return (model - data) return (model - data)/eps # with errors, the difference is ponderated Explanation: Last steps before fitting the spectrum So here we are. We have the corrected spectrum in the sample variable. But before going further away,we need to write a function for the optimisation. It will return the difference between the calculated and measured spectrum, following the guideline provived by lmfit (http://lmfit.github.io/lmfit-py/) Please note that I do the fitting this way because it gives a pretty good control of the entire process, but you can use directly the builtin models of lmfit (http://lmfit.github.io/lmfit-py/builtin_models.html) for fitting the spectrum. Doing so, your code will be different from this one and you don't need to define a residual function. In such case, you want to look at the example 3 on the page http://lmfit.github.io/lmfit-py/builtin_models.html. 
But let's just pretend we want to write our own piece of code and use the Gaussian function implemented in Rampy. The shape of the spectrum suggests that at least three peaks are present, because of the two obvious bands near 950 and 1080 cm$^{-1}$ and a slope break near 1200 cm $^{-1}$. From previous works, we actually know that we have two additional peaks (See Mysen, 1990 or Le Losq et al., 2014) in this spectral region located near 1050 and 1150 cm$^{-1}$. So we have to fit 5 peaks, and hence, we have 5 intensities variables a1 to a5, 5 frequencies f1 to f5, and 5 half width at half peak maximum l1 to l5. This makes a total of 15 parameters. Those variables will be stored in the Parameters() object created by the lmfit software (see http://lmfit.github.io/lmfit-py/parameters.html), we will go back on this one latter. For now, let just say that the Parameters() object is called "pars" and contains the various a1-a5, f1-f5 and l1-l5 parameters, such that we can have their values with using a1 = pars['a1'].value for instance. So let's go. We create the function "residual" with arguments pars (the Parameters() object), the x axis, and, in option, the y axis as data and the errors. End of explanation params = lmfit.Parameters() # (Name, Value, Vary, Min, Max, Expr) params.add_many(('a1', 2.4, True, 0, None, None), ('f1', 946, True, 910, 970, None), ('l1', 26, True, 20, 50, None), ('a2', 3.5, True, 0, None, None), ('f2', 1026, True, 990, 1070, None), ('l2', 39, True, 20, 55, None), ('a3', 8.5, True, 7, None, None), ('f3', 1082, True, 1070, 1110, None), ('l3', 31, True, 25, 35, None), ('a4', 2.2, True, 0, None, None), ('f4', 1140, True, 1110, 1160, None), ('l4', 35, True, 20, 50, None), ('a5', 2., True, 0, None, None), ('f5', 1211, True, 1180, 1220, None), ('l5', 28, True, 20, 45, None)) Explanation: Note that in the above function, I did not applied the square to (model - data). This is implicitely done by lmfit (see http://lmfit.github.io/lmfit-py/fitting.html#fit-func-label for further information on function writting). Fitting Ok, we have our optimisation function. So we can go forward and fit the spectrum... We need five Guassians at 950, 1050, 1100, 1150 and 1200 cm$^{-1}$. We set their half-width at half-maximum at the same value. End of explanation # we constrain the positions params['f1'].vary = False params['f2'].vary = False params['f3'].vary = False params['f4'].vary = False params['f5'].vary = False Explanation: For further details on the Parameters() object, I invite you to look at this page: http://lmfit.github.io/lmfit-py/parameters.html . But from the above piece of code, you can already guess that you can make specific parameters that vary or not, you can fixe Min or Max values, and you can even put some contrains between parameters (e.g., "l1 = l2') using the last "Expr" column. You can remark that we applied some boundaries for the peak positions, but also for peak widths. This is based on previous fits made for this kind of compositions. Typically, in such glass, peaks from Si-O stretch vibrations do not present half-width greater than 50 cm$^{-1}$ or smaller than 20 cm$^{-1}$. For instance, the 1080 cm$^{-1}$ peak typically present a half-width of ~ 30 cm$^{-1}$ ± 5 cm$^{-1}$ in silica-rich silicate glasses, such that we can apply a tighter constrain there. Following such ideas, I put bonds for the parameter values for the half-width of the peaks. This avoid fitting divergence. 
Furthermore, we know approximately the frequencies of the peaks, such that we can also apply bondaries for them. This will help the fitting, since in this problem, we have five peaks in a broad envelop that only present two significant features at ~950, ~1080 cm$^{-1}$ as well as two barely visible shoulders near 1050 and 1200 cm$^{-1}$. But this is a simple case. For some more complex (aluminosilicate) glasses, this 850-1300 cm$^{-1}$ frequency envelop is even less resolved, such that applying reasonable constrains become crucial for any quantitative Raman fitting. For starting the fit, as we suppose we have a not bad knowledge of peak frequencies (see the discussion right above), a good thing to do is to fix for the first fit the frequencies of the peaks: End of explanation algo = 'nelder' result = lmfit.minimize(residual, params, method = algo, args=(x_fit, y_fit[:,0])) # fit data with nelder model from scipy Explanation: This avoids any divergence of the fitting procedure regarding the hald-width, because with free frequencies and badly estimated half-width and intensities, the fitting procedure always tends to extremely broaden the peaks and put them at similar frequencies, with strong overlapping. Starting the fitting procedure by fixing the parameter we know the best, i.e. the frequencies, avoid such complications. Then, we need to use a large-scale algorithm quite robust for fitting. The levenberg-marquart algorithm fails on such fitting problem in my experience. Let's choose the Nelder and Mead algorithm for this example: (http://comjnl.oxfordjournals.org/content/7/4/308.short) : End of explanation # we release the positions but contrain the FWMH and amplitude of all peaks params['f1'].vary = True params['f2'].vary = True params['f3'].vary = True params['f4'].vary = True params['f5'].vary = True #we fit twice result2 = lmfit.minimize(residual, params,method = algo, args=(x_fit, y_fit[:,0])) # fit data with leastsq model from scipy Explanation: And now we release the frequencies: End of explanation model = lmfit.fit_report(result2.params) yout, peak1,peak2,peak3,peak4,peak5 = residual(result2.params,x_fit) # the different peaks rchi2 = (1/(float(len(y_fit))-15-1))*np.sum((y_fit - yout)**2/sigma**2) # calculation of the reduced chi-square Explanation: We can now extract the various things generated by lmfit as well as the peaks: End of explanation ##### WE DO A NICE FIGURE THAT CAN BE IMPROVED FOR PUBLICATION plt.plot(x_fit,y_fit,'k-') plt.plot(x_fit,yout,'r-') plt.plot(x_fit,peak1,'b-') plt.plot(x_fit,peak2,'b-') plt.plot(x_fit,peak3,'b-') plt.plot(x_fit,peak4,'b-') plt.plot(x_fit,peak5,'b-') plt.xlim(lb,hb) plt.ylim(-0.5,10.5) plt.xlabel("Raman shift, cm$^{-1}$", fontsize = 14) plt.ylabel("Normalized intensity, a. u.", fontsize = 14) plt.title("Fig. 
3: Fit of the Si-O stretch vibrations\n in LS4 with \nthe Nelder Mead algorithm ",fontsize = 14,fontweight = "bold") print("rchi-2 = \n"+str(rchi2)) Explanation: And let's have a look at the fitted spectrum: End of explanation algo = 'leastsq' # We will use the Levenberg-Marquart algorithm # (Name, Value, Vary, Min, Max, Expr) Here I directly initialize with fixed frequencies params.add_many(('a1', 2.4, True, 0, None, None), ('f1', 946, True, 910, 970, None), ('l1', 26, True, 20, 50, None), ('a2', 3.5, True, 0, None, None), ('f2', 1026, True, 990, 1070, None), ('l2', 39, True, 20, 55, None), ('a3', 8.5, True, 7, None, None), ('f3', 1082, True, 1070, 1110, None), ('l3', 31, True, 25, 35, None), ('a4', 2.2, True, 0, None, None), ('f4', 1140, True, 1110, 1160, None), ('l4', 35, True, 20, 50, None), ('a5', 2., True, 0, None, None), ('f5', 1211, True, 1180, 1220, None), ('l5', 28, True, 20, 45, None)) result = lmfit.minimize(residual, params, method = algo, args=(x_fit, y_fit[:,0])) # we release the positions but contrain the FWMH and amplitude of all peaks params['f1'].vary = True params['f2'].vary = True params['f3'].vary = True params['f4'].vary = True params['f5'].vary = True result2 = lmfit.minimize(residual, params,method = algo, args=(x_fit, y_fit[:,0])) model = lmfit.fit_report(result2.params) # the report yout, peak1,peak2,peak3,peak4,peak5 = residual(result2.params,x_fit) # the different peaks rchi2 = (1/(float(len(y_fit))-15-1))*np.sum((y_fit - yout)**2/sigma**2) # calculation of the reduced chi-square ##### WE DO A NICE FIGURE THAT CAN BE IMPROVED FOR PUBLICATION plt.plot(x_fit,y_fit,'k-') plt.plot(x_fit,yout,'r-') plt.plot(x_fit,peak1,'b-') plt.plot(x_fit,peak2,'b-') plt.plot(x_fit,peak3,'b-') plt.plot(x_fit,peak4,'b-') plt.plot(x_fit,peak5,'b-') plt.xlim(lb,hb) plt.ylim(-0.5,10.5) plt.xlabel("Raman shift, cm$^{-1}$", fontsize = 14) plt.ylabel("Normalized intensity, a. u.", fontsize = 14) plt.title("Fig. 4: Fit of the Si-O stretch vibrations\n in LS4 with \nthe Levenberg-Marquardt (LM) algorithm",fontsize = 14,fontweight = "bold") print("rchi-2 = \n"+str(rchi2)) Explanation: Ok, we can test to change the algorithm and use the Levenberg-Marquart one which is well-used for simple problems by a lot of people. We will re-initialize the Params() object and run the entire code written above again. 
End of explanation algo = 'cg' # We will use the Conjugate Gradient algorithm # (Name, Value, Vary, Min, Max, Expr) Here I directly initialize with fixed frequencies params.add_many(('a1', 2.4, True, 0, None, None), ('f1', 946, True, 910, 970, None), ('l1', 26, True, 20, 50, None), ('a2', 3.5, True, 0, None, None), ('f2', 1026, True, 990, 1070, None), ('l2', 39, True, 20, 55, None), ('a3', 8.5, True, 7, None, None), ('f3', 1082, True, 1070, 1110, None), ('l3', 31, True, 25, 35, None), ('a4', 2.2, True, 0, None, None), ('f4', 1140, True, 1110, 1160, None), ('l4', 35, True, 20, 50, None), ('a5', 2., True, 0, None, None), ('f5', 1211, True, 1180, 1220, None), ('l5', 28, True, 20, 45, None)) result = lmfit.minimize(residual, params, method = algo, args=(x_fit, y_fit[:,0])) # we release the positions but contrain the FWMH and amplitude of all peaks params['f1'].vary = True params['f2'].vary = True params['f3'].vary = True params['f4'].vary = True params['f5'].vary = True result2 = lmfit.minimize(residual, params,method = algo, args=(x_fit, y_fit[:,0])) model = lmfit.fit_report(result2.params) # the report yout, peak1,peak2,peak3,peak4,peak5 = residual(result2.params,x_fit) # the different peaks rchi2 = (1/(float(len(y_fit))-15-1))*np.sum((y_fit - yout)**2/sigma**2) # calculation of the reduced chi-square ##### WE DO A NICE FIGURE THAT CAN BE IMPROVED FOR PUBLICATION plt.plot(x_fit,y_fit,'k-') plt.plot(x_fit,yout,'r-') plt.plot(x_fit,peak1,'b-') plt.plot(x_fit,peak2,'b-') plt.plot(x_fit,peak3,'b-') plt.plot(x_fit,peak4,'b-') plt.plot(x_fit,peak5,'b-') plt.xlim(lb,hb) plt.ylim(-0.5,10.5) plt.xlabel("Raman shift, cm$^{-1}$", fontsize = 14) plt.ylabel("Normalized intensity, a. u.", fontsize = 14) plt.title("Fig. 5: Fit of the Si-O stretch vibrations\n in LS4 with \nthe Conjugate Gradient (CG) algorithm",fontsize = 14,fontweight = "bold") print("rchi-2 = \n"+str(rchi2)) Explanation: The comparison of Fig. 3 and 4 shows small differences. In this case, and because we have a good error model, the LM algorithm converges toward results similar to those of the Nelder-Mead algorithm. You can try to run again the calculation with removing the "sigma" input in the "minimize" function used above. You will see that the results will diverge much more than in this case. A convenient thing about the LM algorithm is that it allows to estimate the errors on the fitting parameters. This is not possible with gradient-less algorithms such as the Nelder-Mear or the Powell algorithms. For the latters, I will give a piece of code at the end of this notebook that allows to estimate good errors on parameters through bootrapping. The downside of the LM algorithm is that, in my experience, it fails if the envelop of bands to fit is broader than the one used in this example, because it seachs at all costs to fit the spectrum as good as possible... This typically results in extrem broadening and overlapping of the peaks you try to fit. A way to resolve this issue if the use of the LM algorithm is really needed is to put tigther constrains on the peak half-widths. But another way is to use a more global algorithm less prone to diverge from the initial estimations. The Nelder-Mead, Powell (Powell, 1964, Computer Journal 7 (2): 155-62) or the COBYLA (see Powell, 2007 Cambridge University Technical Report DAMTP 2007) algorithms can give good results for complex problems. Also, the Conjugate Gradient algorithm may be suitable (Wright & Nocedal, “Numerical Optimization”, 1999, pp. 120-122). 
Let's try the latter for now: End of explanation algo = 'powell' # We will use the Powell algorithm # (Name, Value, Vary, Min, Max, Expr) Here I directly initialize with fixed frequencies params.add_many(('a1', 2.4, True, 0, None, None), ('f1', 946, True, 910, 970, None), ('l1', 26, True, 20, 50, None), ('a2', 3.5, True, 0, None, None), ('f2', 1026, True, 990, 1070, None), ('l2', 39, True, 20, 55, None), ('a3', 8.5, True, 7, None, None), ('f3', 1082, True, 1070, 1110, None), ('l3', 31, True, 25, 35, None), ('a4', 2.2, True, 0, None, None), ('f4', 1140, True, 1110, 1160, None), ('l4', 35, True, 20, 50, None), ('a5', 2., True, 0, None, None), ('f5', 1211, True, 1180, 1220, None), ('l5', 28, True, 20, 45, None)) result = lmfit.minimize(residual, params, method = algo, args=(x_fit, y_fit[:,0])) # we release the positions but contrain the FWMH and amplitude of all peaks params['f1'].vary = True params['f2'].vary = True params['f3'].vary = True params['f4'].vary = True params['f5'].vary = True result2 = lmfit.minimize(residual, params,method = algo, args=(x_fit, y_fit[:,0])) model = lmfit.fit_report(result2.params) # the report yout, peak1,peak2,peak3,peak4,peak5 = residual(result2.params,x_fit) # the different peaks rchi2 = (1/(float(len(y_fit))-15-1))*np.sum((y_fit - yout)**2/sigma**2) # calculation of the reduced chi-square ##### WE DO A NICE FIGURE THAT CAN BE IMPROVED FOR PUBLICATION plt.plot(x_fit,y_fit,'k-') plt.plot(x_fit,yout,'r-') plt.plot(x_fit,peak1,'b-') plt.plot(x_fit,peak2,'b-') plt.plot(x_fit,peak3,'b-') plt.plot(x_fit,peak4,'b-') plt.plot(x_fit,peak5,'b-') plt.xlim(lb,hb) plt.ylim(-0.5,10.5) plt.xlabel("Raman shift, cm$^{-1}$", fontsize = 14) plt.ylabel("Normalized intensity, a. u.", fontsize = 14) plt.title("Fig. 6: Fit of the Si-O stretch vibrations\n in LS4 with \nthe Powell algorithm",fontsize = 14,fontweight = "bold") print("rchi-2 = \n"+str(rchi2)) Explanation: The CG algorithm returns a result close to the Nelder-Mead and the LM algorithms. A bad thing about the CG algorithm is that it is extremely slow in the Scipy implementation... It is (nearly) acceptable for one fit, but for bootstrapping 100 spectra, it is not a good option at all. As a last one, we can see what the results look like with the Powell algorithm: End of explanation #### Bootstrap function def bootstrap(data, ese,nbsample): # Bootstrap of Raman spectra. We generate new datapoints with the basis of existing data and their standard deviation N = len(data) bootsamples = np.zeros((N,nbsample)) for i in range(nbsample): for j in range(N): bootsamples[j,i] = np.random.normal(data[j], ese[j], size=None) return bootsamples Explanation: You see in Fig. 6 that the results are, again, close to those of the other algorithms, at the exception of the two last peaks. The intensity and the frequency of the peak near 1200 cm$^{-1}$ is higher in this fit than in the others. So one important thing that has to be remembered is that, with the same parameter inputs, you will obtain different results with using different fitting algorithms. The above results are close because the fitting example is quite simple. Actually, all the results given above seem reasonable. The experience with other spectra from other silicate and aluminosilicate glasses is that the Nelder-Mead and Powell algorithms will provide the most robust ways to fit the spectra. Error estimations Errors can be estimated with using the "confidence" function if you used the Levenberg-Marquardt algorithm. 
See the examples here: http://lmfit.github.io/lmfit-py/confidence.html . If you use a large-scale gradient-less algorithm such as the Nelder-Mead or the Powell algorithms, you cannot do that. Thus, to calculate the errors on the parameters that those algorithms provide as well as the error introduced by choosing one or the other algorithm, we can use the bootstrapping technic. Several descriptions on the internet are available for this technic, so I will skip a complete description here. A quick overview is to say that we have datapoints Yi affected by errors e_Yi. We assume that the probability density function of the Yi points is Gaussian. According to the Central Theorem Limit, this probably is a good assumption. Therefore, for each frequency in the spectrum of Fig.1, we have points that are probably at an intensity of Yi but with an error of e_Yi. To estimate how this uncertainties affect our fitting results, we can pick new points in the Gaussian distribution of mean Yi with a standard deviation e_Yi, and construct whole new spectra that we will fit. We will repeat this procedure N times. In addition to that, we can also randomly choose between the Nelder-Mead or the Powell algorithm during the new fits, such that we will take also into account our arbitrary choice in the fitting algorithm for calculating the errors on the estimated parameters. A last thing would be to randomly change a little bit the initial values of the parameters, but this is harder to implement so we will not do it for this example. First of all, we have to write a Python function that will randomly sample the probability density functions of the points of the spectrum of Fig. 1. Here is the piece of code I wrote for doing so: End of explanation %%time nboot = 10 # Number of bootstrap samples, I set it to a low value for the example but usually you want thousands there data_resampled = bootstrap(y_fit[:,0],sigma,nboot)# resampling of data + generate the output parameter tensor Explanation: Now we will define how much new spectra we want to generate (the nbsample option of the bootstrap function), and we will run the previous function. 
End of explanation para_output = np.zeros((5,3,nboot)) # 5 x 3 parameters x N boot samples bootrecord = np.zeros((nboot)) # For recording boot strap efficiency for nn in range(nboot): algos = ['powell','nelder'] algo = random.choice(algos) # We randomly select between the Powell or Nelder_mear algorithm params = lmfit.Parameters() # (Name, Value, Vary, Min, Max, Expr) Here I directly initialize with fixed frequencies params.add_many(('a1', 24, True, 0, None, None), ('f1', 946, True, 910, 970, None), ('l1', 26, True, 20, 50, None), ('a2', 35, True, 0, None, None), ('f2', 1026, True, 990, 1070, None), ('l2', 39, True, 20, 55, None), ('a3', 85, True, 70, None, None), ('f3', 1082, True, 1070, 1110, None), ('l3', 31, True, 25, 35, None), ('a4', 22, True, 0, None, None), ('f4', 1140, True, 1110, 1160, None), ('l4', 35, True, 20, 50, None), ('a5', 4, True, 0, None, None), ('f5', 1211, True, 1180, 1220, None), ('l5', 28, True, 20, 45, None)) result = lmfit.minimize(residual, params, method = algo, args=(x_fit, data_resampled[:,nn],sigma)) # we release the positions but contrain the FWMH and amplitude of all peaks params['f1'].vary = True params['f2'].vary = True params['f3'].vary = True params['f4'].vary = True params['f5'].vary = True result2 = lmfit.minimize(residual, params,method = algo, args=(x_fit, data_resampled[:,nn], sigma)) vv = result2.params.valuesdict() para_output[0,0,nn] = vv['a1'] para_output[1,0,nn] = vv['a2'] para_output[2,0,nn] = vv['a3'] para_output[3,0,nn] = vv['a4'] para_output[4,0,nn] = vv['a5'] para_output[0,1,nn] = vv['f1'] para_output[1,1,nn] = vv['f2'] para_output[2,1,nn] = vv['f3'] para_output[3,1,nn] = vv['f4'] para_output[4,1,nn] = vv['f5'] para_output[0,2,nn] = vv['l1'] para_output[1,2,nn] = vv['l2'] para_output[2,2,nn] = vv['l3'] para_output[3,2,nn] = vv['l4'] para_output[4,2,nn] = vv['l5'] para_mean = np.mean(para_output,axis=2) para_ese = np.std(para_output,axis=2) for kjy in range(nboot): if kjy == 0: bootrecord[kjy] = 0 else: bootrecord[kjy] = np.sum(np.std(para_output[:,:,0:kjy],axis=2)) Explanation: Now, we will create a loop which is going to look at each spectrum in the data_resampled variable, and to fit them with the procedure already described. For doing so, we need to declare a couple of variables to record the bootstrap mean fitting error, in order to see if we generated enought samples to obtain a statistically representative bootstrapping process, and to record each set of parameters obtained for each bootstrapped spectrum. End of explanation para_mean para_ese Explanation: We can have a view at the mean values and standard deviation of the parameters that have been generated by the bootstrapping: End of explanation plt.plot(np.arange(nboot)+1,bootrecord,'ko') plt.xlim(0,nboot+1) plt.xlabel("Number of iterations",fontsize = 14) plt.ylabel("Summed errors of parameters",fontsize = 14) plt.title("Fig. 7: Bootstrap iterations for convergence",fontsize = 14, fontweight = 'bold') Explanation: Those errors are probably the best estimates of the errors that affect your fitting parameters. You can add another bootstrapping function for changing of, saying, 5 percents the initial estimations of the parameters, and you will have a complete and coherent estimation of the errors affecting the fits. But for most cases, the errors generated by this above bootstrapping technic are already quite robust. We can see if we generated enought samples to have valid bootstrap results by looking at how the mean value of the parameters and their error converge. 
To do a short version of such thing, we can also look at how the summation of the errors of the parameters change with the iteration number. If the summation of errors becomes constant, we can say that we have generated enought bootstrap samples to have a significant result, statistically speaking. End of explanation
5,980
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: Sklearn Stratified K-Fold - Splitting Data & Saving to File
Python Code::
import pandas as pd
from sklearn.model_selection import StratifiedKFold

df = pd.read_csv('data/raw/train.csv')

# initialise a StratifiedKFold object with 5 folds and
# declare the column that we wish to group by, which in this
# case is the column called "label"
skf = StratifiedKFold(n_splits=5)
target = df.loc[:, 'label']

# for each fold split the data into train and validation
# sets and save the fold splits to csv
fold_no = 1
for train_index, val_index in skf.split(df, target):
    train = df.loc[train_index, :]
    val = df.loc[val_index, :]
    train.to_csv('data/processed/folds/' + 'train_fold_' + str(fold_no) + '.csv')
    val.to_csv('data/processed/folds/' + 'val_fold_' + str(fold_no) + '.csv')
    fold_no += 1
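A quick sanity check that is not part of the original snippet but clarifies what stratification buys you: every validation fold should show label proportions close to those of the full dataset. This sketch reuses the df, skf and target objects defined above:
print(target.value_counts(normalize=True))   # label proportions in the full dataset
for fold_no, (train_index, val_index) in enumerate(skf.split(df, target), start=1):
    fold_proportions = df.loc[val_index, 'label'].value_counts(normalize=True)
    print('fold', fold_no, fold_proportions.to_dict())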
5,981
Given the following text description, write Python code to implement the functionality described below step by step Description: This is my third attempt at creating a model using sklearn alogithms In this iteration of analysis we'll be looking at breaking out categorical varaibles and making them binary, and seeing if that makes our model more accurate. My last two attempts at this are below Step1: Load the data from our JSON file. The data is stored as a dictionary of dictionaries in the json file. We store it that way beacause it's easy to add data to the existing master data file. Also, I haven't figured out how to get it in a database yet. Step2: Clean up the data a bit Right now the 'shared' and 'split' are included in number of bathrooms. If I were to convert that to a number I would consider a shared/split bathroom to be half or 0.5 of a bathroom. Step3: Get rid of null values I haven't figured out the best way to clean this up yet. For now I'm going to drop any rows that have a null value, though I recognize that this is not a good analysis practice. We ended up dropping ~15% of data points. 😬 Also there were some CRAZY outliers, and this analysis is focused on finding a model for apartments for the 99% of us that can't afford crazy extravigant apartments Step4: It looks like Portland!!! Let's cluster the data. Start by creating a list of [['lat','long'], ...] Step5: We'll use K Means Clustering because that's the clustering method I recently learned in class! There may be others that work better, but this is the tool that I know Step6: We chose our neighborhoods! I've found that every once in a while the centers end up in different points, but are fairly consistant. Now let's process our data points and figure out where the closest neighborhood center is to it! Step7: Create a function that will label each point with a number coresponding to it's neighborhood Step8: Here's the new Part. We're breaking out the neighborhood values into their own columns. Now the algorithms can read them as categorical data rather than continuous data. Step9: Split data into Training and Testing Data We're selecting variables now. So far I haven't been able to figure out how to feed in discrete variables. It seems to handle continuous variables just fine. The neighborhood variable is encoded, so I'm assuming that it's being interpreted like a continuous variable. The Sci-Kit Learn documentation says that it can handle both kinds of variables, but I've found a few other discussion forums that say that they do not, to create a separate feature for each type of varaible... Fun. Someone mentioned one hot encoder. That will likely be the next iteration of my analysis, but I'm just not there yet. ... Also need to figure out what to do with my missing data... I'm dropping ~15% right now because I'm dropping all of the null values Step10: Ok, lets put it through Decision Tree! Step11: .83 Woot! That's about how good we were with random forests last time! Let's see if we can make even better using an ensemble method! What about Random Forest? Step12: Wow! up to .87! That's our best yet! What if we add more trees??? Step13: Up to .88! So what is our goal now? I'd like to see if adjusting the number of neighborhoods increases the accuracy. same for the affect with the number of trees Step14: Looks like the optimum is right around 10 or 11, and then starts to drop off. 
Let's get a little more granular and look at a smaller range Step15: Trying a few times, it looks like 10, 11 and 12 get the best results at ~.85. Of course, we'll need to redo some of these optimizations after we properly process our data. Hopefully we'll see some more consistency then too.
Python Code: # start with imports import numpy as np import pandas as pd from pandas import DataFrame, Series import json import matplotlib as mpl import matplotlib.pyplot as plt import seaborn as sns %matplotlib inline Explanation: This is my third attempt at creating a model using sklearn alogithms In this iteration of analysis we'll be looking at breaking out categorical varaibles and making them binary, and seeing if that makes our model more accurate. My last two attempts at this are below: https://github.com/rileyrustad/CLCrawler/blob/master/First_Analysis.ipynb https://github.com/rileyrustad/CLCrawler/blob/master/Second_Analysis.ipynb End of explanation with open('/Users/mac28/CLCrawler/MasterApartmentData.json') as f: my_dict = json.load(f) dframe = DataFrame(my_dict) dframe = dframe.T dframe.shape Explanation: Load the data from our JSON file. The data is stored as a dictionary of dictionaries in the json file. We store it that way beacause it's easy to add data to the existing master data file. Also, I haven't figured out how to get it in a database yet. End of explanation dframe.bath = dframe.bath.replace('shared',0.5) dframe.bath = dframe.bath.replace('split',0.5) Explanation: Clean up the data a bit Right now the 'shared' and 'split' are included in number of bathrooms. If I were to convert that to a number I would consider a shared/split bathroom to be half or 0.5 of a bathroom. End of explanation df = dframe[dframe.price < 10000][['bath','bed','feet','price']].dropna() sns.distplot(df.price) data = dframe[dframe.lat > 45.4][dframe.lat < 45.6][dframe.long < -122.0][dframe.long > -123.5] plt.figure(figsize=(15,10)) plt.scatter(data = data, x = 'long',y='lat') Explanation: Get rid of null values I haven't figured out the best way to clean this up yet. For now I'm going to drop any rows that have a null value, though I recognize that this is not a good analysis practice. We ended up dropping ~15% of data points. 😬 Also there were some CRAZY outliers, and this analysis is focused on finding a model for apartments for the 99% of us that can't afford crazy extravigant apartments End of explanation XYdf = dframe[dframe.lat > 45.4][dframe.lat < 45.6][dframe.long < -122.0][dframe.long > -123.5] data = [[XYdf['lat'][i],XYdf['long'][i]] for i in XYdf.index] Explanation: It looks like Portland!!! Let's cluster the data. Start by creating a list of [['lat','long'], ...] End of explanation from sklearn.cluster import KMeans km = KMeans(n_clusters=11) km.fit(data) neighborhoods = km.cluster_centers_ neighborhoods %pylab inline figure(1,figsize=(20,12)) plot([row[1] for row in data],[row[0] for row in data],'b.') for i in km.cluster_centers_: plot(i[1],i[0], 'g*',ms=25) '''Note to Riley: come back and make it look pretty''' Explanation: We'll use K Means Clustering because that's the clustering method I recently learned in class! There may be others that work better, but this is the tool that I know End of explanation neighborhoods = neighborhoods.tolist() for i in enumerate(neighborhoods): i[1].append(i[0]) print neighborhoods Explanation: We chose our neighborhoods! I've found that every once in a while the centers end up in different points, but are fairly consistant. Now let's process our data points and figure out where the closest neighborhood center is to it! 
End of explanation def clusterer(X, Y,neighborhoods): neighbors = [] for i in neighborhoods: distance = ((i[0]-X)**2 + (i[1]-Y)**2) neighbors.append(distance) closest = min(neighbors) return neighbors.index(closest) neighborhoodlist = [] for i in dframe.index: neighborhoodlist.append(clusterer(dframe['lat'][i],dframe['long'][i],neighborhoods)) Explanation: Create a function that will label each point with a number coresponding to it's neighborhood End of explanation def column_maker(neighborhoodlist,dframe): seriesList = [[],[],[],[],[],[],[],[],[],[],[]] for item,_ in enumerate(seriesList): for hood in neighborhoodlist: if hood == item: seriesList[item].append(1) else: seriesList[item].append(0) return seriesList seriesList = column_maker(neighborhoodlist,dframe) for i,_ in enumerate(seriesList): dframe['neighborhood'+str(i)] = Series((seriesList[i]), index=dframe.index) pd.set_option("display.max_columns",30) dframe.head() Explanation: Here's the new Part. We're breaking out the neighborhood values into their own columns. Now the algorithms can read them as categorical data rather than continuous data. End of explanation from __future__ import division print len(dframe) df2 = dframe[dframe.price < 10000][['bath', 'bed', 'cat', 'content', 'dog', 'feet', 'getphotos', 'hasmap', 'lat', u'long', 'price', 'neighborhood0', 'neighborhood1', 'neighborhood2', 'neighborhood3', 'neighborhood4', 'neighborhood5', 'neighborhood6', 'neighborhood7', 'neighborhood8', 'neighborhood9', 'neighborhood10']].dropna() print len(df2) print len(df2)/len(dframe) features = df2[['bath', 'bed', 'cat', 'content', 'dog', 'feet', 'getphotos', 'hasmap', 'lat', u'long', 'neighborhood0', 'neighborhood1', 'neighborhood2', 'neighborhood3', 'neighborhood4', 'neighborhood5', 'neighborhood6', 'neighborhood7', 'neighborhood8', 'neighborhood9', 'neighborhood10']].values price = df2[['price']].values from sklearn.cross_validation import train_test_split features_train, features_test, price_train, price_test = train_test_split(features, price, test_size=0.1, random_state=42) Explanation: Split data into Training and Testing Data We're selecting variables now. So far I haven't been able to figure out how to feed in discrete variables. It seems to handle continuous variables just fine. The neighborhood variable is encoded, so I'm assuming that it's being interpreted like a continuous variable. The Sci-Kit Learn documentation says that it can handle both kinds of variables, but I've found a few other discussion forums that say that they do not, to create a separate feature for each type of varaible... Fun. Someone mentioned one hot encoder. That will likely be the next iteration of my analysis, but I'm just not there yet. ... Also need to figure out what to do with my missing data... I'm dropping ~15% right now because I'm dropping all of the null values End of explanation from sklearn import tree from sklearn.metrics import r2_score clf = tree.DecisionTreeRegressor() clf = clf.fit(features_train, price_train) pred = np.array([[item] for item in clf.predict(features_test)]) print r2_score(pred, price_test) plt.scatter(pred,price_test) Explanation: Ok, lets put it through Decision Tree! End of explanation from sklearn.ensemble import RandomForestRegressor reg = RandomForestRegressor() reg = reg.fit(features_train, price_train) forest_pred = reg.predict(features_test) forest_pred = np.array([[item] for item in forest_pred]) print r2_score(forest_pred, price_test) plt.scatter(pred,price_test) Explanation: .83 Woot! 
That's about how good we were with random forests last time! Let's see if we can make even better using an ensemble method! What about Random Forest? End of explanation reg = RandomForestRegressor(n_estimators = 100) reg = reg.fit(features_train, price_train) forest_pred = reg.predict(features_test) forest_pred = np.array([[item] for item in forest_pred]) print r2_score(forest_pred, price_test) print plt.scatter(pred,price_test) Explanation: Wow! up to .87! That's our best yet! What if we add more trees??? End of explanation def neighborhood_optimizer(dframe,neighborhood_number_range, counter_num): XYdf = dframe[dframe.lat > 45.4][dframe.lat < 45.6][dframe.long < -122.0][dframe.long > -123.5] data = [[XYdf['lat'][i],XYdf['long'][i]] for i in XYdf.index] r2_dict = [] for i in neighborhood_number_range: counter = counter_num average_accuracy_list = [] while counter > 0: km = KMeans(n_clusters=i) km.fit(data) neighborhoods = km.cluster_centers_ neighborhoods = neighborhoods.tolist() for x in enumerate(neighborhoods): x[1].append(x[0]) neighborhoodlist = [] for z in dframe.index: neighborhoodlist.append(clusterer(dframe['lat'][z],dframe['long'][z],neighborhoods)) dframecopy = dframe.copy() dframecopy['neighborhood'] = Series((neighborhoodlist), index=dframe.index) df2 = dframecopy[dframe.price < 10000][['bath','bed','feet','dog','cat','content','getphotos', 'hasmap', 'price','neighborhood']].dropna() features = df2[['bath','bed','feet','dog','cat','content','getphotos', 'hasmap', 'neighborhood']].values price = df2[['price']].values features_train, features_test, price_train, price_test = train_test_split(features, price, test_size=0.1) reg = RandomForestRegressor() reg = reg.fit(features_train, price_train) forest_pred = reg.predict(features_test) forest_pred = np.array([[item] for item in forest_pred]) counter -= 1 average_accuracy_list.append(r2_score(forest_pred, price_test)) total = 0 for entry in average_accuracy_list: total += entry r2_accuracy = total/len(average_accuracy_list) r2_dict.append((i,r2_accuracy)) print r2_dict return r2_dict neighborhood_number_range = [i for _,i in enumerate(range(2,31,2))] neighborhood_number_range r2_dict = neighborhood_optimizer(dframe,neighborhood_number_range,10) r2_dict[:][0] plt.scatter([x[0] for x in r2_dict],[x[1] for x in r2_dict]) Explanation: Up to .88! So what is our goal now? I'd like to see if adjusting the number of neighborhoods increases the accuracy. same for the affect with the number of trees End of explanation neighborhood_number_range = [i for _,i in enumerate(range(7,15))] neighborhood_number_range r2_dict = neighborhood_optimizer(dframe,neighborhood_number_range,10) print r2_dict plt.scatter([x[0] for x in r2_dict],[x[1] for x in r2_dict]) Explanation: Looks like the optimum is right around 10 or 11, and then starts to drop off. Let's get a little more granular and look at a smaller range End of explanation r2_dict = neighborhood_optimizer(dframe,[10,11,12],25) Explanation: Trying a few times, it looks like 10, 11 and 12 get the best results at ~.85. Of course, we'll need to redo some of these optomizations after we properly process our data. Hopefully we'll see some more consistency then too. End of explanation
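Since the write-up above mentions wanting to try a proper one-hot encoder in a later iteration, here is a minimal sketch of that idea using pandas' built-in get_dummies. It assumes the neighborhoodlist and dframe objects defined in this record, and produces the same kind of neighborhood0…neighborhood10 indicator columns that column_maker builds by hand (one column per cluster label that actually occurs).
```
import pandas as pd
from pandas import Series

# one indicator column per cluster label, named neighborhood0 ... neighborhood10
hood = Series(neighborhoodlist, index=dframe.index)
dummies = pd.get_dummies(hood, prefix='neighborhood', prefix_sep='')
dframe = pd.concat([dframe, dummies], axis=1)
```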
5,982
Given the following text description, write Python code to implement the functionality described below step by step Description: In this tutorial we will write an alignment algorithm with MDAnalysis functions and later look into the documentation to find functions for the implementation of the algorithm. Step1: First we have to load the trajectory we want to align and a reference structure. Step2: Develop Alignment Algorithm We want to align the ADK protein on the backbone of the open state. So the first thing is to create a selection of the backbone for the reference and trajectory. Step3: RMSD Calculation Our alignment algorithm will be based on comparing the Root Mean Square Deviation (RMSD). The RMSD is a measure for the deviation of the distance between atoms in two structures. \begin{align} RMSD(x, y) = \sqrt{\frac{\sum_{i=1}^{3N} (x_i - y_i)^2}{3N}} \end{align} $N$ is the number of atoms of both structures here. We can calculate the RMSD of the first frame in our trajectory with the reference Step4: NOTE Step5: For later comparison we will save the current RMSD in a variable called RMSD_before Step6: Alignment of proteins The alignment algorithm is based on an RMSD comparison between two structures. This means we will determine a rotation matrix that minimizes the RMSD between two structures. We can use MDAnalysis.analysis.align.rotation_matrix for this. It is important to note that rotation_matrix only calculates a rotation, so we have to center both structures at the same point first. A common choice is the center of mass of the reference structure. Step7: R is the rotation_matrix we need for the alignment and rmsd is the RMSD after the alignment is done. To align the structures in the lab coordinate system we can't just apply R. First we have to move the structures to be centered at ref_bb_com then we can rotate them. Later we have to move the structures back Step8: Just for fun we can now check that the RMSD has changed using our own RMSD function Step9: Align a complete trajectory. Now we want to apply this algorithm on a complete trajectory and save the result in a file called rmsd-fit.dcd. Step10: We can see how the structure slowly goes into the open conformation by plotting the RMSD value after the alignment vs the frame. Step11: To see the aligned trajectory run vmd -e align.vmd Use MDAnalysis algorithm For convenience MDAnalysis already includes a function which makes the alignment of two proteins easy.
Python Code: from __future__ import print_function %matplotlib inline import matplotlib.pyplot as plt import numpy as np import MDAnalysis as mda Explanation: In this tutorial will write a alignment algorithm with MDAnalysis functions and later look into the documentation to find functions for the implementation of the algorithm. End of explanation trj = mda.Universe('data/adk.psf', 'data/adk_dims.dcd') ref = mda.Universe('data/adk.psf', 'data/adk_open.pdb') Explanation: First we have to load the trajectory we want to align and a reference structure. End of explanation ref_bb = ref.atoms.select_atoms('backbone') trj_bb = trj.atoms.select_atoms('backbone') Explanation: Develop Alignment Algorithm We want to align align the ADK protein on the backbone of the open state. So the first thing is to create a selection of the backbone for the reference and trajectory. End of explanation np.sqrt(np.sum((trj_bb.positions - ref_bb.positions))**2 / (3 * trj_bb.n_atoms)) Explanation: RMSD Calculation Our alignment algorithm will be based on comparing the Root Mean Square Deviation (RMSD). The RMSD is a measure for the deviation of the distance between atoms in two structures. \begin{align} RMSD(x, y) = \sqrt{\frac{\sum_{i=1}^{3N} (x_i - y_i)^2}{3N}} \end{align} $N$ is the number of atoms of both structures here. We can calculate the RMSD of the first frame in our trajectory with the reference End of explanation trj_xyz = trj_bb.positions - trj_bb.center_of_mass() ref_xyz = ref_bb.positions - ref_bb.center_of_mass() np.sqrt(np.sum((trj_xyz - ref_xyz) ** 2 / (3 * trj_bb.n_atoms))) def RMSD(a, b): a_xyz = a.positions - a.center_of_mass() b_xyz = b.positions - b.center_of_mass() return np.sqrt(np.sum((a_xyz - b_xyz) ** 2 / (3 * a.n_atoms))) Explanation: NOTE: The RMSD is sensitive to the origin of our structures. By common convention structures are normally centered at either the center of mass or center of geometry before calculating the RMSD. We will use the center of mass now. End of explanation RMSD_before = RMSD(trj.atoms, ref.atoms) print('RMSD before alignment = {}'.format(RMSD_before)) Explanation: For later comparison we will save the current RMSD in a variable called RMSD_before End of explanation ref_bb_com = ref_bb.center_of_mass() trj_bb_com = trj_bb.center_of_mass() ref_xyz = ref_bb.positions - ref_bb_com trj_xyz = trj_bb.positions - trj_bb_com R, rmsd = mda.analysis.align.rotation_matrix(trj_xyz, ref_xyz) print('RMSD after alignment = {}'.format(rmsd)) Explanation: Alignment of proteins The alignment algorithm is based an RMSD comparison between two structures. This means we will determine a rotation matrix that minimizes the RMSD between two structures. We can use MDAnalysis.analysis.align.rotation_matrix for this. Very important is that rotation_matrix is only calculating a rotation so we have to center both structures at the same point first. A common choice is the center of mass of the reference structure. End of explanation trj.atoms.translate(-trj_bb_com) trj.atoms.rotate(R) trj.atoms.translate(trj_bb_com) Explanation: R is the rotation_matrix we need to the alignment and rmsd is the RMSD after the alignment is done. To align the structures in the lab coordinate system we can't just apply R. First we have to move the structures to be centered at ref_bb_com then we can rotate them. 
Later we have to move the structures back End of explanation RMSD_after = RMSD(trj.atoms, ref.atoms) print('RMSD before = {}'.format(RMSD_before)) print('RMSD after = {}'.format(RMSD_after)) Explanation: Just for fun we can now check that the RMSD has changed using our own RMSD function End of explanation rmsd = np.zeros(len(trj.trajectory)) with mda.Writer('rmsd-fit.dcd', trj.atoms.n_atoms, start=trj.trajectory.start_timestep, step=trj.trajectory.skip_timestep, dt=trj.trajectory.delta) as w: for i, ts in enumerate(trj.trajectory): com = trj_bb.center_of_mass() trj_xyz = trj_bb.positions - com R, rmsd[i] = mda.analysis.align.rotation_matrix(trj_xyz, ref_xyz) trj.atoms.translate(-com) trj.atoms.rotate(R) trj.atoms.translate(com) w.write(trj) Explanation: Align a complete trajectory. Now we want to apply this algorithm on a complete trajectory and save the result in a file called rmsd-fit.dcd. End of explanation f, ax = plt.subplots() ax.plot(rmsd, '.-') ax.set(xlabel='frame', ylabel='rmsd', title='RMSD of ADK to open conformation') plt.tight_layout() Explanation: We can look how the structures slowly goes into the open conformation by plotting the RMSD value after the alignment vs the frame. End of explanation rmsd_mda = np.zeros((len(trj.trajectory), 2)) with mda.Writer('rms-alignment.dcd', trj.atoms.n_atoms, start=trj.trajectory.start_timestep, step=trj.trajectory.skip_timestep, dt=trj.trajectory.delta) as w: for i, ts in enumerate(trj.trajectory): rmsd_mda[i] = mda.analysis.align.alignto(trj, ref, select='backbone') w.write(trj) f, ax = plt.subplots() # the aligned rmsd is saved in the second column of the array ax.plot(rmsd_mda[:, 1], '.-') ax.set(xlabel='frame', ylabel='rmsd', title='RMSD of ADK to open conformation') plt.tight_layout() Explanation: To see the aligned trajectory run vmd -e align.vmd Use MDAnalysis algorithm For convenience MDAnalysis already includes a function which makes the alignment of two proteins easy. End of explanation
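For readers curious what rotation_matrix is doing internally, here is a small, self-contained NumPy sketch of the textbook Kabsch algorithm. This is not MDAnalysis' implementation, just the standard SVD construction of the rotation that minimizes the RMSD between two coordinate sets that have already been centred (e.g. by subtracting their centres of mass, as done above).
```
import numpy as np

def kabsch_rotation(mobile_xyz, ref_xyz):
    # both arrays are (N, 3) and must already be centred
    H = np.dot(mobile_xyz.T, ref_xyz)               # 3x3 correlation matrix
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(np.dot(Vt.T, U.T)))   # guard against improper rotations
    R = np.dot(Vt.T, np.dot(np.diag([1.0, 1.0, d]), U.T))
    return R

# usage: aligned = np.dot(centred_mobile, R.T), which minimizes RMSD(aligned, centred_ref)
```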
5,983
Given the following text description, write Python code to implement the functionality described below step by step Description: ``` Copyright 2018 Google LLC Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https Step1: Example line interpolations Samples Step2: Correct interpolation Step3: Data-space interpolation Step4: Abrupt change Step5: Overshooting Step6: Unrealistic Step7: Line results table Step8: Real line interpolation examples Step14: Real data interpolations Step15: VAE line samples Step16: Single-layer classifier table
Python Code: import numpy as np import matplotlib.pyplot as plt %matplotlib inline import scipy.ndimage import lib.eval import collections import tensorflow as tf import glob import lib.utils import all_aes from absl import flags import sys FLAGS = flags.FLAGS FLAGS(['--lr', '0.0001']) import os if not os.path.exists('figures'): os.makedirs('figures') def flatten_lines(lines, padding=2): padding = np.ones((lines.shape[0], padding) + lines.shape[2:]) lines = np.concatenate([padding, lines, padding], 1) lines = np.concatenate(lines, 0) return np.transpose(lines, [1, 0] + list(range(2, lines.ndim))) def get_final_value_median(values, steps, N=20): sorted_steps = np.argsort(steps) values = np.array(values)[sorted_steps] return np.median(values[-N:]) HEIGHT = 32 WIDTH = 32 N_LINES = 16 START_ANGLE = 5*np.pi/7 END_ANGLE = 3*np.pi/2. Explanation: ``` Copyright 2018 Google LLC Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ``` Figures This notebook contains code for generating the figures and tables from the paper "Understanding and Improving Interpolation in Autoencoders via an Adversarial Regularizer". The code is mainly provided as an example and may require modification to be run in a different setting. End of explanation example_lines = np.zeros((N_LINES, HEIGHT, WIDTH)) # Cover the space of angles somewhat evenly angles = np.linspace(0, 2*np.pi - np.pi/N_LINES, N_LINES) np.random.shuffle(angles) for n, angle in enumerate(angles): example_lines[n] = lib.data.draw_line(angle, HEIGHT, WIDTH)[..., 0] fig = plt.figure(figsize=(15, 1)) ax = plt.Axes(fig, [0., 0., 1., 1.]) ax.set_axis_off() fig.add_axes(ax) ax.imshow(flatten_lines(example_lines), cmap=plt.cm.gray, interpolation='nearest') plt.gca().set_axis_off() plt.savefig('figures/line_samples.pdf', aspect='normal') Explanation: Example line interpolations Samples End of explanation line_interpolation = np.zeros((N_LINES, HEIGHT, WIDTH)) angles = np.linspace(START_ANGLE, END_ANGLE, N_LINES) for n in range(N_LINES): line_interpolation[n] = lib.data.draw_line(angles[n], HEIGHT, WIDTH)[..., 0] fig = plt.figure(figsize=(15, 1)) ax = plt.Axes(fig, [0., 0., 1., 1.]) ax.set_axis_off() fig.add_axes(ax) ax.imshow(flatten_lines(line_interpolation), cmap=plt.cm.gray, interpolation='nearest') plt.gca().set_axis_off() plt.savefig('figures/line_correct_interpolation.pdf', aspect='normal') print lib.eval.line_eval(line_interpolation[np.newaxis, ..., np.newaxis]) Explanation: Correct interpolation End of explanation line_interpolation = np.zeros((N_LINES, HEIGHT, WIDTH)) start_line = lib.data.draw_line(START_ANGLE, HEIGHT, WIDTH)[..., 0] end_line = lib.data.draw_line(END_ANGLE, HEIGHT, WIDTH)[..., 0] weights = np.linspace(1, 0, N_LINES) for n in range(N_LINES): line_interpolation[n] = weights[n]*start_line + (1 - weights[n])*end_line fig = plt.figure(figsize=(15, 1)) ax = plt.Axes(fig, [0., 0., 1., 1.]) ax.set_axis_off() fig.add_axes(ax) ax.imshow(flatten_lines(line_interpolation), cmap=plt.cm.gray, interpolation='nearest') plt.gca().set_axis_off() plt.savefig('figures/line_data_interpolation.pdf', 
aspect='normal') print lib.eval.line_eval(line_interpolation[np.newaxis, ..., np.newaxis]) Explanation: Data-space interpolation End of explanation line_interpolation = np.zeros((N_LINES, HEIGHT, WIDTH)) start_line = lib.data.draw_line(START_ANGLE, HEIGHT, WIDTH)[..., 0] end_line = lib.data.draw_line(END_ANGLE, HEIGHT, WIDTH)[..., 0] for n in range(N_LINES): line_interpolation[n] = start_line if n < N_LINES/2 else end_line fig = plt.figure(figsize=(15, 1)) ax = plt.Axes(fig, [0., 0., 1., 1.]) ax.set_axis_off() fig.add_axes(ax) ax.imshow(flatten_lines(line_interpolation), cmap=plt.cm.gray, interpolation='nearest') plt.gca().set_axis_off() plt.savefig('figures/line_abrupt_interpolation.pdf', aspect='normal') print lib.eval.line_eval(line_interpolation[np.newaxis, ..., np.newaxis]) Explanation: Abrupt change End of explanation line_interpolation = np.zeros((N_LINES, HEIGHT, WIDTH)) angles = np.linspace(START_ANGLE, END_ANGLE - 2*np.pi, N_LINES) for n in range(N_LINES): line_interpolation[n] = lib.data.draw_line(angles[n], HEIGHT, WIDTH)[..., 0] fig = plt.figure(figsize=(15, 1)) ax = plt.Axes(fig, [0., 0., 1., 1.]) ax.set_axis_off() fig.add_axes(ax) ax.imshow(flatten_lines(line_interpolation), cmap=plt.cm.gray, interpolation='nearest') plt.gca().set_axis_off() plt.savefig('figures/line_overshooting_interpolation.pdf', aspect='normal') print lib.eval.line_eval(line_interpolation[np.newaxis, ..., np.newaxis]) Explanation: Overshooting End of explanation line_interpolation = np.zeros((N_LINES, HEIGHT, WIDTH)) angles = np.linspace(START_ANGLE, END_ANGLE, N_LINES) blur = np.sin(np.linspace(0, np.pi, N_LINES)) for n in range(N_LINES): line = lib.data.draw_line(angles[n], HEIGHT, WIDTH)[..., 0] line_interpolation[n] = scipy.ndimage.gaussian_filter(line + np.sqrt(blur[n]), blur[n]*1.5) fig = plt.figure(figsize=(15, 1)) ax = plt.Axes(fig, [0., 0., 1., 1.]) ax.set_axis_off() fig.add_axes(ax) ax.imshow(flatten_lines(line_interpolation), cmap=plt.cm.gray, interpolation='nearest', vmin=-1, vmax=1) plt.gca().set_axis_off() plt.savefig('figures/line_unrealistic_interpolation.pdf', aspect='normal') Explanation: Unrealistic End of explanation RESULTS_PATH = '/home/craffel/data/dberth/RERUNS/*/lines32' experiments = collections.defaultdict(list) for run_path in glob.glob(RESULTS_PATH): for path in glob.glob(os.path.join(run_path, '*')): experiments[os.path.split(path)[-1]].append(os.path.join(path, 'tf', 'summaries')) ALGS = collections.OrderedDict([ ('Baseline', 'AEBaseline_depth16_latent16_scales4'), ('Dropout', 'AEDropout_depth16_dropout0.5_latent16_scales4'), ('Denoising', 'AEDenoising_depth16_latent16_noise1.0_scales4'), ('VAE', 'VAE_beta1.0_depth16_latent16_scales4'), ('AAE', 'AAE_adversary_lr0.0001_depth16_disc_layer_sizes100,100_latent16_scales4'), ('VQ-VAE', 'AEVQVAE_advdepth16_advweight0.0_beta10.0_depth16_emaTrue_latent16_noise0.0_num_blocks1_num_latents10_num_residuals1_reg0.5_scales3_z_log_size14'), ('ACAI', 'ARAReg_advdepth16_advweight0.5_depth16_latent16_reg0.2_scales4'), ]) experiment_results = collections.defaultdict( lambda: collections.defaultdict( lambda: collections.defaultdict( lambda: collections.defaultdict(list)))) for experiment_key, experiment_paths in experiments.items(): for n, experiment_path in enumerate(experiment_paths): print 'Getting results for', experiment_key, n for events_file in glob.glob(os.path.join(experiment_path, 'events*')): try: for e in tf.train.summary_iterator(events_file): for v in e.summary.value: 
experiment_results[experiment_key][n][v.tag]['step'].append(e.step) experiment_results[experiment_key][n][v.tag]['value'].append(v.simple_value) except Exception as e: print e mean_distance = collections.defaultdict(list) mean_smoothness = collections.defaultdict(list) for experiment_name, events_lists in experiment_results.items(): for events in events_lists.values(): mean_distance[experiment_name].append(get_final_value_median( events['mean_distance_1']['value'], events['mean_distance_1']['step'])) mean_smoothness[experiment_name].append(get_final_value_median( events['mean_smoothness_1']['value'], events['mean_smoothness_1']['step'])) print 'Metric & ' + ' & '.join(ALGS.keys()) + ' \\\\' print 'Mean Distance ($\\times 10^{-3}$) & ' + ' & '.join( ['{:.2f}$\pm${:.2f}'.format(np.mean(mean_distance[alg_name])*10**3, np.std(mean_distance[alg_name])*10**3) for alg_name in ALGS.values()]) + ' \\\\' print 'Mean Smoothness & ' + ' & '.join( ['{:.2f}$\pm${:.2f}'.format(np.mean(mean_smoothness[alg_name]), np.std(mean_smoothness[alg_name])) for alg_name in ALGS.values()]) + ' \\\\' Explanation: Line results table End of explanation line_interpolation = np.zeros((N_LINES, HEIGHT, WIDTH)) start_line = lib.data.draw_line(START_ANGLE, HEIGHT, WIDTH)[..., 0] end_line = lib.data.draw_line(END_ANGLE, HEIGHT, WIDTH)[..., 0] DATASET = 'lines32' BATCH = 64 for alg_name, alg_path in ALGS.items(): ae_path = os.path.join(RESULTS_PATH.replace('*', 'RUN3'), alg_path) ae, _ = lib.utils.load_ae(ae_path, DATASET, BATCH, all_aes.ALL_AES) with lib.utils.HookReport.disable(): ae.eval_mode() input_lines = np.concatenate([ start_line[np.newaxis, ..., np.newaxis], end_line[np.newaxis, ..., np.newaxis]]) start_latent, end_latent = ae.eval_sess.run(ae.eval_ops.encode, {ae.eval_ops.x: input_lines}) weights = np.linspace(1, 0, N_LINES).reshape(-1, 1, 1, 1) interped_latents = weights*start_latent[np.newaxis] + (1 - weights)*end_latent[np.newaxis] output_interp = ae.eval_sess.run(ae.eval_ops.decode, {ae.eval_ops.h: interped_latents}) fig = plt.figure(figsize=(15, 1)) ax = plt.Axes(fig, [0., 0., 1., 1.]) ax.set_axis_off() fig.add_axes(ax) ax.imshow(flatten_lines(output_interp[..., 0]), cmap=plt.cm.gray, interpolation='nearest') plt.gca().set_axis_off() plt.savefig('figures/line_{}_example.pdf'.format(alg_name.lower()), aspect='normal') Explanation: Real line interpolation examples End of explanation BATCH = 64 DBERTH_RESULTS_PATH = '/home/craffel/data/dberth/RERUNS/RUN2' DATASETS_DEPTHS = collections.OrderedDict([('mnist32', 16), ('svhn32', 64), ('celeba32', 64)]) LATENTS = [2, 16] ALGS_FORMAT = collections.OrderedDict([ ('Baseline', 'AEBaseline_depth{depth}_latent{latent}_scales3'), ('Dropout', 'AEDropout_depth{depth}_dropout0.5_latent{latent}_scales3'), ('Denoising', 'AEDenoising_depth{depth}_latent{latent}_noise1.0_scales3'), ('VAE', 'VAE_beta1.0_depth{depth}_latent{latent}_scales3'), ('AAE', 'AAE_adversary_lr0.0001_depth{depth}_disc_layer_sizes100,100_latent{latent}_scales3'), ('VQ-VAE', 'AEVQVAE_beta10.0_depth{depth}_latent{latent}_num_latents10_run1_scales3_z_log_size14'), ('ACAI', 'ARAReg_advdepth{depth}_advweight0.5_depth{depth}_latent{latent}_reg0.2_scales3'), ]) DATASETS_MINS = {'mnist32': -1, 'celeba32': -1.2, 'svhn32': -1} DATASETS_MAXS = {'mnist32': 1, 'celeba32': 1.2, 'svhn32': 1} N_IMAGES_PER_INTERPOLATION = 16 N_IMAGES = 4 def interpolate(sess, ops, image_left, image_right, dataset_min, dataset_max, interpolation=N_IMAGES_PER_INTERPOLATION): def batched_op(op, op_input, array): return sess.run(op, 
feed_dict={op_input: array}) # Interpolations interpolation_x = np.array([image_left, image_right], 'f') latent_x = batched_op(ops.encode, ops.x, interpolation_x) latents = [] for x in range(interpolation): latents.append((latent_x[:1] * (interpolation - x - 1) + latent_x[1:] * x) / float(interpolation - 1)) latents = np.concatenate(latents, axis=0) interpolation_y = batched_op(ops.decode, ops.h, latents) interpolation_y = interpolation_y.reshape( (interpolation, 1) + interpolation_y.shape[1:]) interpolation_y = interpolation_y.transpose(1, 0, 2, 3, 4) image_interpolation = lib.utils.images_to_grid(interpolation_y) padding = np.ones((image_interpolation.shape[0], 2) + image_interpolation.shape[2:]) image = np.concatenate( [image_left, padding, image_interpolation, padding, image_right], axis=1) image = (image - dataset_min)/(dataset_max - dataset_min) image = np.clip(image, 0, 1) return image def get_dataset_samples(sess, ops, dataset, batches=100): batch = FLAGS.batch with tf.Graph().as_default(): data_in = dataset.make_one_shot_iterator().get_next() with tf.Session() as sess_new: images = [] labels = [] while True: try: payload = sess_new.run(data_in) images.append(payload['x']) assert images[-1].shape[0] == 1 labels.append(payload['label']) if len(images) == batches: break except tf.errors.OutOfRangeError: break images = np.concatenate(images, axis=0) labels = np.concatenate(labels, axis=0) latents = [sess.run(ops.encode, feed_dict={ops.x: images[p:p + batch]}) for p in range(0, images.shape[0], FLAGS.batch)] latents = np.concatenate(latents, axis=0) latents = latents.reshape([latents.shape[0], -1]) return images, latents, labels left_images = collections.defaultdict(lambda: None) right_images = collections.defaultdict(lambda: None) for dataset, depth in DATASETS_DEPTHS.items(): for latent in LATENTS: for alg_name, alg_format in ALGS_FORMAT.items(): for n in range(N_IMAGES): output_name = '{}_{}_latent_{}_interpolation_{}'.format(dataset, alg_name.lower(), latent, n + 1) alg_path = os.path.join(DBERTH_RESULTS_PATH, dataset, alg_format.format(depth=depth, latent=latent)) if 1: # try: ae, ds = lib.utils.load_ae( alg_path, dataset, BATCH, all_aes.ALL_AES, return_dataset=True) with lib.utils.HookReport.disable(): ae.eval_mode() images, latents, labels = get_dataset_samples(ae.eval_sess, ae.eval_ops, ds.test) labels = np.argmax(labels, axis=1) if left_images[n] is None: left_img_idx = n if dataset == 'celeba32': right_img_idx = N_IMAGES + n else: if n < N_IMAGES/2: right_img_idx = np.flatnonzero(labels == labels[n])[N_IMAGES + n] else: right_img_idx = np.flatnonzero(labels != labels[n])[N_IMAGES + n] print left_img_idx, labels[left_img_idx] print right_img_idx, labels[right_img_idx] left_images[n] = images[left_img_idx] right_images[n] = images[right_img_idx] left_image = left_images[n] right_image = right_images[n] image = interpolate(ae.eval_sess, ae.eval_ops, left_image, right_image, DATASETS_MINS[dataset], DATASETS_MAXS[dataset]) fig = plt.figure(figsize=(15, 1)) ax = plt.Axes(fig, [0., 0., 1., 1.]) ax.set_axis_off() fig.add_axes(ax) ax.imshow(np.squeeze(image), cmap=plt.cm.gray, interpolation='nearest') plt.gca().set_axis_off() plt.savefig('figures/{}.pdf'.format(output_name), aspect='normal') plt.close() for n in range(N_IMAGES): del left_images[n] del right_images[n] DATASET_NAMES = {'mnist32': 'MNIST', 'svhn32': 'SVHN', 'celeba32': 'CelebA'} output = "" for dataset, depth in DATASETS_DEPTHS.items(): for latent in LATENTS: output += r \begin{figure} \centering for n in 
range(N_IMAGES): alg_list = collections.OrderedDict() for alg_name, alg_format in ALGS_FORMAT.items(): figure_name = '{}_{}_latent_{}_interpolation_{}'.format(dataset, alg_name.lower(), latent, n + 1) alg_list[figure_name] = alg_name if alg_name == ALGS_FORMAT.keys()[-1]: reset = r"\addtocounter{{subfigure}}{{-{}}}".format(len(ALGS_FORMAT)) else: reset = "" output += r \begin{{subfigure}}[b]{{\textwidth}} \centering\parbox{{.09\linewidth}}{{\vspace{{0.3em}}\subcaption{{}}\label{{fig:{figure_name}}}}} \parbox{{.75\linewidth}}{{\includegraphics[width=\linewidth]{{figures/{figure_name}.pdf}}}}{reset} \end{{subfigure}} .format(figure_name=figure_name, reset=reset) if alg_name == ALGS_FORMAT.keys()[-1]: output += r \vspace{0.5em} output += r \caption{{Example interpolations on {} with a latent dimensionality of {} for .format( DATASET_NAMES[dataset], latent*16) output += ', '.join([r'(\subref{{fig:{}}}) {}'.format(fn, an) for fn, an in alg_list.items()]) output += r autoencoders.}} \label{{fig:{}_{}_interpolations}} \end{{figure}} .format(dataset, latent) print output Explanation: Real data interpolations End of explanation RESULTS_PATH = '/home/craffel/data/autoencoder/results_final/lines32' line_interpolation = np.zeros((N_LINES, HEIGHT, WIDTH)) start_line = lib.data.draw_line(START_ANGLE, HEIGHT, WIDTH)[..., 0] end_line = lib.data.draw_line(END_ANGLE, HEIGHT, WIDTH)[..., 0] DATASET = 'lines32' BATCH = 64 ae_path = os.path.join(RESULTS_PATH, 'VAE_beta1.0_depth16_latent16_scales4') ae, _ = lib.utils.load_ae(ae_path, DATASET, BATCH, all_aes.ALL_AES) with lib.utils.HookReport.disable(): ae.eval_mode() random_latents = np.random.standard_normal(size=(16*16, 2, 2, 16)) random_images = ae.eval_sess.run(ae.eval_ops.decode, {ae.eval_ops.h: random_latents}) fig = plt.figure(figsize=(15, 15)) ax = plt.Axes(fig, [0., 0., 1., 1.]) ax.set_axis_off() fig.add_axes(ax) padding = np.ones((2, WIDTH*N_LINES + 4*N_LINES)) line_matrix = np.concatenate([ np.concatenate([padding, flatten_lines(random_images[n:n + 16, ..., 0]), padding], axis=0) for n in range(0, 16*16, 16)], axis=0) ax.imshow(line_matrix, cmap=plt.cm.gray, interpolation='nearest') plt.gca().set_axis_off() plt.savefig('figures/line_vae_samples.pdf'.format(alg_name.lower()), aspect='normal') Explanation: VAE line samples End of explanation def get_all_results(results_path, event_key): experiments = collections.defaultdict(list) for run_path in glob.glob(results_path): for path in glob.glob(os.path.join(run_path, '*')): experiments[os.path.split(path)[-1]].append(os.path.join(path, 'tf', 'summaries')) experiment_results = collections.defaultdict( lambda: collections.defaultdict( lambda: collections.defaultdict( lambda: collections.defaultdict(list)))) for experiment_key, experiment_paths in experiments.items(): for n, experiment_path in enumerate(experiment_paths): print 'Getting results for', experiment_key, n for events_file in glob.glob(os.path.join(experiment_path, 'events*')): try: for e in tf.train.summary_iterator(events_file): for v in e.summary.value: experiment_results[experiment_key][n][v.tag]['step'].append(e.step) experiment_results[experiment_key][n][v.tag]['value'].append(v.simple_value) except Exception as e: print e event_values = collections.defaultdict(list) for experiment_name, events_lists in experiment_results.items(): for events in events_lists.values(): event_values[experiment_name].append(get_final_value_median( events[event_key]['value'], events[event_key]['step'])) return event_values RESULTS_PATH = 
'/home/craffel/data/dberth/RERUNS/*/mnist32' accuracy = get_all_results(RESULTS_PATH, 'latent_accuracy_1') ALGS = collections.OrderedDict([ ('Baseline', 'AEBaseline_depth16_latent{}_scales3'), ('Dropout', 'AEDropout_depth16_dropout0.5_latent{}_scales3'), ('Denoising', 'AEDenoising_depth16_latent{}_noise1.0_scales3'), ('VAE', 'VAE_beta1.0_depth16_latent{}_scales3'), ('AAE', 'AAE_adversary_lr0.0001_depth16_disc_layer_sizes100,100_latent{}_scales3'), ('VQ-VAE', 'AEVQVAE_advdepth16_advweight0.0_beta10.0_depth16_emaTrue_latent{}_noiseFalse_num_blocks1_num_latents10_num_residuals1_reg0.5_scales3_z_log_size14'), ('ACAI', 'ARAReg_advdepth16_advweight0.5_depth16_latent{}_reg0.2_scales3')]) for latent_size in [2, 16]: print '{} & '.format(latent_size*16) + ' & '.join( ['{:.2f}$\pm${:.2f}'.format( np.mean(accuracy[alg_name.format(latent_size)]), np.std(accuracy[alg_name.format(latent_size)])) for alg_name in ALGS.values()]) + ' \\\\' RESULTS_PATH = '/home/craffel/data/dberth/RERUNS/*/svhn32' accuracy = get_all_results(RESULTS_PATH, 'latent_accuracy_1') ALGS = collections.OrderedDict([ ('Baseline', 'AEBaseline_depth64_latent{}_scales3'), ('Dropout', 'AEDropout_depth64_dropout0.5_latent{}_scales3'), ('Denoising', 'AEDenoising_depth64_latent{}_noise1.0_scales3'), ('VAE', 'VAE_beta1.0_depth64_latent{}_scales3'), ('AAE', 'AAE_adversary_lr0.0001_depth64_disc_layer_sizes100,100_latent{}_scales3'), ('VQ-VAE', 'AEVQVAE_advdepth16_advweight0.0_beta10.0_depth64_emaTrue_latent{}_noiseFalse_num_blocks1_num_latents10_num_residuals1_reg0.5_scales3_z_log_size14'), ('ACAI', 'ARAReg_advdepth64_advweight0.5_depth64_latent{}_reg0.2_scales3')]) for latent_size in [2, 16]: print '{} & '.format(latent_size*16) + ' & '.join( ['{:.2f}$\pm${:.2f}'.format( np.mean(accuracy[alg_name.format(latent_size)]), np.std(accuracy[alg_name.format(latent_size)])) for alg_name in ALGS.values()]) + ' \\\\' RESULTS_PATH = '/home/craffel/data/dberth/RERUNS/*/cifar10' accuracy = get_all_results(RESULTS_PATH, 'latent_accuracy_1') ALGS = collections.OrderedDict([ ('Baseline', 'AEBaseline_depth64_latent{}_scales3'), ('Dropout', 'AEDropout_depth64_dropout0.75_latent{}_scales3'), ('Denoising', 'AEDenoising_depth64_latent{}_noise1.0_scales3'), ('VAE', 'VAE_beta1.0_depth64_latent{}_scales3'), ('AAE', 'AAE_adversary_lr0.0001_depth64_disc_layer_sizes100,100_latent{}_scales3'), ('VQ-VAE', 'AEVQVAE_advdepth16_advweight0.0_beta10.0_depth64_emaTrue_latent{}_noiseFalse_num_blocks1_num_latents10_num_residuals1_reg0.5_scales3_z_log_size14'), ('ACAI', 'ARAReg_advdepth64_advweight0.5_depth64_latent{}_reg0.2_scales3')]) for latent_size in [16, 64]: print '{} & '.format(latent_size*16) + ' & '.join( ['{:.2f}$\pm${:.2f}'.format( np.mean(accuracy[alg_name.format(latent_size)]), np.std(accuracy[alg_name.format(latent_size)])) for alg_name in ALGS.values()]) + ' \\\\' Explanation: Single-layer classifier table End of explanation
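Since most of the figures in this record boil down to the same latent-space operation, here is a tiny hedged sketch of that core step — evenly weighted linear interpolation between two latent codes, mirroring the weighting scheme used in the interpolate() helper above. The decoder call in the usage comment reuses the eval ops already defined in this record.
```
import numpy as np

def lerp_latents(z_start, z_end, steps=16):
    # weights go 1 -> 0 so the strip runs from the left image to the right one
    w = np.linspace(1.0, 0.0, steps).reshape((-1,) + (1,) * z_start.ndim)
    return w * z_start[np.newaxis] + (1.0 - w) * z_end[np.newaxis]

# e.g. latents = lerp_latents(start_latent, end_latent)
#      images  = ae.eval_sess.run(ae.eval_ops.decode, {ae.eval_ops.h: latents})
```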
5,984
Given the following text description, write Python code to implement the functionality described below step by step Description: Cloud AI Platform + What-if Tool Step1: Loading the test dataset The model we'll be exploring here is a binary classification model built with XGBoost and trained on a mortgage dataset. It predicts whether or not a mortgage application will be approved. In this section we'll Step2: Using the What-if Tool to interpret our model With our test examples ready, we can now connect our model to the What-if Tool using the WitWidget. To use the What-if Tool with Cloud AI Platform, we need to send it
Python Code: import sys python_version = sys.version_info[0] # If you're running on Colab, you'll need to install the What-if Tool package and authenticate on the TF instance def pip_install(module): if python_version == '2': !pip install {module} --quiet else: !pip3 install {module} --quiet try: import google.colab IN_COLAB = True except: IN_COLAB = False if IN_COLAB: pip_install('witwidget') from google.colab import auth auth.authenticate_user() import pandas as pd import numpy as np import witwidget from witwidget.notebook.visualization import WitWidget, WitConfigBuilder Explanation: Cloud AI Platform + What-if Tool: Playground XGBoost Example This notebook shows how to use the What-if Tool on a deployed Cloud AI Platform model. You don't need your own cloud project to run this notebook. For instructions on creating a Cloud project, see the documentation here. End of explanation # Download our Pandas dataframe and our test features and labels !gsutil cp gs://mortgage_dataset_files/data.pkl . !gsutil cp gs://mortgage_dataset_files/x_test.npy . !gsutil cp gs://mortgage_dataset_files/y_test.npy . # Preview the features from our model as a pandas DataFrame features = pd.read_pickle('data.pkl') features.head() # Load the test features and labels into numpy ararys x_test = np.load('x_test.npy') y_test = np.load('y_test.npy') # Combine the features and labels into one array for the What-if Tool test_examples = np.hstack((x_test,y_test.reshape(-1,1))) Explanation: Loading the test dataset The model we'll be exploring here is a binary classification model built with XGBoost and trained on a mortgage dataset. It predicts whether or not a mortgage application will be approved. In this section we'll: Download some test data from Cloud Storage and load it into a numpy array + Pandas DataFrame Preview the features for our model in Pandas End of explanation # Create a What-if Tool visualization, it may take a minute to load # See the cell below this for exploration ideas # This prediction adjustment function is needed as this xgboost model's # prediction returns just a score for the positive class of the binary # classification, whereas the What-If Tool expects a list of scores for each # class (in this case, both the negative class and the positive class). def adjust_prediction(pred): return [1 - pred, pred] config_builder = (WitConfigBuilder(test_examples.tolist(), features.columns.tolist() + ['mortgage_status']) .set_ai_platform_model('wit-caip-demos', 'xgb_mortgage', 'v1', adjust_prediction=adjust_prediction) .set_target_feature('mortgage_status') .set_label_vocab(['denied', 'approved'])) WitWidget(config_builder, height=800) Explanation: Using the What-if Tool to interpret our model With our test examples ready, we can now connect our model to the What-if Tool using the WitWidget. To use the What-if Tool with Cloud AI Platform, we need to send it: * A Python list of our test features + ground truth labels * Optionally, the names of our columns * Our Cloud project, model, and version name (we've created a public one for you to play around with) See the next cell for some exploration ideas in the What-if Tool. End of explanation
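As a quick illustration of the prediction-adjustment step described above: the XGBoost model emits a single positive-class score per example, while the What-if Tool expects one score per class, so each score p is expanded to [1 - p, p]. A minimal sketch with made-up scores:
```
scores = [0.08, 0.53, 0.91]              # hypothetical positive-class scores
two_class = [[1 - p, p] for p in scores]
# each row now holds [P(denied), P(approved)] and sums to 1
print(two_class)
```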
5,985
Given the following text description, write Python code to implement the functionality described below step by step Description: Answer the following questions in Python by defining a function. Ensure you have a docstring and that your function succeeds on the example use. Create a function which takes two arguments Step1: Create a function which sums the numbers between its first (inclusive) and second argument (exclusive). For example, if you pass in 3 and 6, it should return 12 (3+4+5). Your function should return None if the given arguments are not integers. Your function must be called int_sum to receive credit. Step2: Create a function which takes in two arguments
Python Code: #The points awarded this cell corresopnd to partial credit and/or documentation ### BEGIN SOLUTION def power(x, p=2): '''Computes x^p Args: x: input number p: input power, defaults to 2 returns: x^p as a floating point ''' return x**p ### END SOLUTION '''Check if your function returns the correct values''' from numpy import testing as t t.assert_almost_equal( power(3,2), 9 ) ### BEGIN HIDDEN TESTS import numpy as np test_x = np.array([-2, -1.5, 0, 4]) test_p = np.array([-3, 2, 4, 1]) t.assert_almost_equal( power(test_x, test_p), test_x ** test_p) t.assert_almost_equal( power(3), 3**2) ### END HIDDEN TESTS Explanation: Answer the following questions in Python by defining a function. Ensure you have a docstring and that your function succeeds on the example use. Create a function which takes two arguments: x and p, where p defaults to 2. It should return $x\,^p$. Your function must be named power to receive credit. For this problem, do not consider invalid use of your function (i.e., what happens if x is a string). End of explanation #The points awarded this cell corresopnd to partial credit and/or documentation ### BEGIN SOLUTION def int_sum(x, y): '''Computes sum from x to y (excluding y) Args: x: start of sum p: end of sum returns: the sum as an integer ''' if type(x) != type(1) or type(y) != type(1): return None s = 0 for i in range(x, y): s += i return s ### END SOLUTION '''check that it returns correct answer''' from numpy import testing as t t.assert_equal( int_sum(3,6), 12) ### BEGIN HIDDEN TESTS t.assert_equal( int_sum(-2, 7), sum(range(-2, 7))) t.assert_equal( int_sum(0, 5), sum(range(0, 5))) ### END HIDDEN TESTS '''check that it deals with invalid input correctly''' from numpy import testing as t t.assert_equal( int_sum(4.4, 4.6), None) ### BEGIN HIDDEN TESTS t.assert_equal( int_sum('test', 4), None) t.assert_equal( int_sum(3, 'test'), None) t.assert_array_equal( int_sum(5,4), 0) ### END HIDDEN TESTS Explanation: Create a function which sums the numbers between its first (inclusive) and second argument (exclusive). For example, if you pass in 3 and 6, it should return 12 (3+4+5). Your function should return None if the given arguments are not integers. Your function must be called int_sum to receive credit. End of explanation #The points awarded this cell corresopnd to partial credit and/or documentation ### BEGIN SOLUTION def pprint(x, i): '''Prints x to the given precision indicated by i Args: x: the number to print i: the integer precision returns: a string ''' if( not (type(x) == float or type(x) == int)): return None if(type(i) != int or i <= 0): return None return '{:.{}}'.format(x, i) ### END SOLUTION '''check answer is correct''' from numpy import testing as t t.assert_equal( pprint(4.3212, 2), '4.3') ### BEGIN HIDDEN TESTS t.assert_equal( pprint(-4.3212, 2), '-4.3') t.assert_equal( pprint(5.45676, 3), '5.46') t.assert_equal( pprint(11.2, 1), '1e+01') ### END HIDDEN TESTS '''check that your function correctly deals with invalid input''' from numpy import testing as t t.assert_equal( pprint('not a number', 4), None) ### BEGIN HIDDEN TESTS t.assert_equal( pprint(-4.3212, -2), None) t.assert_equal( pprint(5.45676, 'b'), None) t.assert_equal( pprint(55, 4.12), None) t.assert_equal( pprint(55, 0), None) ### END HIDDEN TESTS Explanation: Create a function which takes in two arguments: a floating point number and an integer representing precision. It should return a string that prints the number to the given precision or None if any of the arguments are invalid. 
Your function must be called pprint to receive credit. End of explanation
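A hedged side note on the validation pattern used in these solutions: comparing type(x) != type(1) works, but isinstance checks are the more idiomatic way to accept both ints and floats while still rejecting strings (and, if desired, bools). The helper name below is made up for illustration and is not part of the assignment.
```
from numpy import testing as t

def format_sig(x, digits):
    # return x formatted to `digits` significant figures, or None on bad input
    if isinstance(x, bool) or not isinstance(x, (int, float)):
        return None
    if isinstance(digits, bool) or not isinstance(digits, int) or digits <= 0:
        return None
    return '{:.{}}'.format(float(x), digits)

t.assert_equal(format_sig(4.3212, 2), '4.3')
t.assert_equal(format_sig('not a number', 4), None)
t.assert_equal(format_sig(55, 0), None)
```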
5,986
Given the following text description, write Python code to implement the functionality described below step by step Description: Airport Wait Time Simulation Step1: For this simulation, we'll be using numpy and scipy for their statistical and matrix math prowess and matplotlib as our primary plotting tool Step2: Setting the arrival rates for each of the steps in the airport arrival process. First is the arrival to the queue, then to the scanning machines and then scanning to the frisking booth. We have discounted travel time in the queue, and assumed that all the frisking and scanning booths are similar, and thus the overall rate of scanning and frisking will be the sum of all the rates at each step Step3: We're taking the arrivals at each of the time intervals, generated by a Poisson function, and storing the number of people who have arrived at each minute. The ARRIVAL_LIST variable is used to calculate the entry time of each of the people in the queue. This will be later used to assess overall wait time for people in the queue. The time axis is used to help plot results as the X-axis variable Step4: And this is the pattern for the scanner Step5: Critical to note that this ignores the queuing and assumes that xx people are processed at each time interval at the counter.
Python Code: %matplotlib inline #Imports for solution import numpy as np import scipy.stats as sp from matplotlib.pyplot import * #Setting Distribution variables ##All rates are in per Minute. Explanation: Airport Wait Time Simulation End of explanation #Everything will me modeled as a Poisson Process SIM_TIME = 180 QUEUE_ARRIVAL_RATE = 15 N_SCANNERS =4 SCANNER_BAG_CHECKING_RATE = 3 #Takes 20 seconds to put your bag on Scanner FRISK_MACHINES_PER_SCANNER = 3 #Number of people checking machine per scanner N_FRISK_MACHINES = N_SCANNERS*FRISK_MACHINES_PER_SCANNER FRISK_CHECKING_RATE = 2 #Half a minute per frisk SCANNER_RATE = SCANNER_BAG_CHECKING_RATE*N_SCANNERS FRISK_RATE = FRISK_CHECKING_RATE*N_FRISK_MACHINES FRISK_ARRIVAL_RATE = SCANNER_RATE Explanation: For this simulation, we'll be using numpy and scipy for their statistical and matrix math prowess and matplotlib as our primary plotting tool End of explanation #Queue Modeling ARRIVAL_PATTERN = sp.poisson.rvs(QUEUE_ARRIVAL_RATE,size = SIM_TIME) #for an hour ARRIVAL_LIST = [] for index, item in enumerate(ARRIVAL_PATTERN): ARRIVAL_LIST += [index]*item #print ARRIVAL_LIST TIMEAXIS = np.linspace(1,SIM_TIME,SIM_TIME) fig = figure() arrivalplot = plot(TIMEAXIS,ARRIVAL_PATTERN,'go-') ylabel('People arrived at time t') xlabel("Time (minutes)") show() Explanation: Setting the arrival rates for each of the steps in the airport arrival process. First is the arrival to the queue, then to the scanning machines and then scanning to the frisking booth. We have discounted travel time in the queue, and assumed that all the frisking and scaanning booths are similar and this the overall rate of scanning and friskign will be sum of all the rates at each step End of explanation SCAN_PATTERN = sp.poisson.rvs(SCANNER_RATE,size=SIM_TIME) SCAN_LIST = [] for index, item in enumerate(SCAN_PATTERN): SCAN_LIST += [index]*item arrivalfig = figure() arrivalplot = plot(TIMEAXIS,SCAN_PATTERN,'o-') ylabel('People arrived at time t for the scanner') xlabel("Time (minutes)") show() Explanation: We're taking the arrivals at each of the time intervals, generated by a poisson function and storing the number of people who have arrived at each minute. The ARRIVAL_LIST variable is used to calculate the entry time of each of the people in the queue. This will be later used to assess overall wait time for people in the queue. The time axis is used to help plot results as X-axis variable End of explanation FRISK_PATTERN = sp.poisson.rvs(FRISK_RATE,size=SIM_TIME) FRISK_LIST = [] for index, item in enumerate(FRISK_PATTERN): FRISK_LIST += [index]*item arrivalfig = figure() arrivalplot = plot(TIMEAXIS,FRISK_PATTERN,'ro-') ylabel('People Leaving at time t from frisking counter') xlabel("Time (minutes)") show() Explanation: And this is the pattern for the scanner End of explanation EXIT_NUMER = zip(FRISK_PATTERN,SCAN_PATTERN) EXIT_NUMBER = [min(k) for k in EXIT_NUMER] #plot(EXIT_NUMBER,'o') #show() EXIT_PATTERN = [] for index, item in enumerate(EXIT_NUMBER): EXIT_PATTERN += [index]*item Explanation: Critical to note that this ignores the queuing and assumes that xx people are processed at each time interval at the counter. 
This will be used in conjunction with the scanner output to choose the bottle neck at each point in time End of explanation RESIDUAL_ARRIVAL_PATTERN = ARRIVAL_LIST[0:len(EXIT_PATTERN)] WAIT_TIMES = [m-n for m,n in zip(EXIT_PATTERN,RESIDUAL_ARRIVAL_PATTERN)] #print EXIT_PATTERN ''' for i,val in EXIT_PATTERN: WAIT_TIMES += [ARRIVAL_PATTERN(i) - val] ''' plot(WAIT_TIMES,'r-') ylabel('Wait times for people entering the queue') xlabel("Order of entering the queue") ylim([0,40]) show() Explanation: Minimum number of processed people between the scanners and the frisking is the bottleneck at any given time, and this will be the exit rate at any given time. End of explanation
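The write-up above is explicit that the simple exit pattern ignores queue build-up, so here is a short sketch of one way to account for it, reusing the SIM_TIME, ARRIVAL_PATTERN, SCAN_PATTERN and FRISK_PATTERN arrays defined in this record: people can only leave at the per-minute bottleneck capacity, and anyone not yet served stays in the queue.
```
import numpy as np

served = 0
queue_len = np.zeros(SIM_TIME)
for minute in range(SIM_TIME):
    arrived_so_far = ARRIVAL_PATTERN[:minute + 1].sum()
    capacity = min(SCAN_PATTERN[minute], FRISK_PATTERN[minute])
    served += min(capacity, arrived_so_far - served)   # can't serve people who aren't there yet
    queue_len[minute] = arrived_so_far - served

# plotting queue_len shows the backlog growing whenever arrivals outpace the bottleneck
```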
5,987
Given the following text description, write Python code to implement the functionality described below step by step Description: Your first neural network In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more. Step1: Load and prepare the data A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon! Step2: Checking out the data This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above. Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model. Step3: Dummy variables Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies(). Step4: Scaling target variables To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1. The scaling factors are saved so we can go backwards when we use the network for predictions. Step5: Splitting the data into training, testing, and validation sets We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders. Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set). Step7: Time to build the network Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters Step8: Unit tests Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly befor you starting trying to train it. These tests must all be successful to pass the project. Step9: Training the network Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. 
That is, the loss on the validation set will start increasing as the training set loss drops. You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later. Choose the number of iterations This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model will not generalize well to other data; this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase. Choose the learning rate This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge. Choose the number of hidden nodes The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in the number of hidden units you choose. Step10: Check out your predictions Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
Python Code: %matplotlib inline %config InlineBackend.figure_format = 'retina' import numpy as np import pandas as pd import matplotlib.pyplot as plt Explanation: Your first neural network In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more. End of explanation data_path = 'Bike-Sharing-Dataset/hour.csv' rides = pd.read_csv(data_path) rides.head() rides.corr() # Maybe some features strongly correlate and can be removed from the model Explanation: Load and prepare the data A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon! End of explanation rides[:24*10].plot(x='dteday', y='cnt') Explanation: Checking out the data This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above. Below is a plot showing the number of bike riders over the first 10 days or so in the data set. (Some days don't have exactly 24 entries in the data set, so it's not exactly 10 days.) You can see the hourly rentals here. This data is pretty complicated! The weekends have lower overall ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model. End of explanation dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday'] for each in dummy_fields: dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False) rides = pd.concat([rides, dummies], axis=1) fields_to_drop = ['instant', 'dteday', 'season', 'weathersit', 'weekday', 'atemp', 'mnth', 'workingday', 'hr'] data = rides.drop(fields_to_drop, axis=1) data.head() Explanation: Dummy variables Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies(). End of explanation quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed'] # Store scalings in a dictionary so we can convert back later scaled_features = {} for each in quant_features: mean, std = data[each].mean(), data[each].std() scaled_features[each] = [mean, std] data.loc[:, each] = (data[each] - mean)/std Explanation: Scaling target variables To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1. The scaling factors are saved so we can go backwards when we use the network for predictions. 
End of explanation # Save data for approximately the last 21 days test_data = data[-21*24:] # Now remove the test data from the data set data = data[:-21*24] # Separate the data into features and targets target_fields = ['cnt', 'casual', 'registered'] features, targets = data.drop(target_fields, axis=1), data[target_fields] test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields] Explanation: Splitting the data into training, testing, and validation sets We'll save the data for the last approximately 21 days to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders. End of explanation # Hold out the last 60 days or so of the remaining data as a validation set train_features, train_targets = features[:-60*24], targets[:-60*24] val_features, val_targets = features[-60*24:], targets[-60*24:] #print("Train_freatures shape: {}\nTrain_targets shape:{}".format(np.shape(train_features),np.shape(train_targets))) Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set). End of explanation class NeuralNetwork(object): def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate): # Set number of nodes in input, hidden and output layers. self.input_nodes = input_nodes self.hidden_nodes = hidden_nodes self.output_nodes = output_nodes # Initialize weights self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5, (self.input_nodes, self.hidden_nodes)) self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5, (self.hidden_nodes, self.output_nodes)) self.lr = learning_rate #### TODO: Set self.activation_function to your implemented sigmoid function #### # # Note: in Python, you can define a function with a lambda expression, # as shown below. self.activation_function = lambda x : 1/(1+np.exp(-x)) # Replace 0 with your sigmoid calculation. self.activation_prime = lambda x: x*(1-x) # Not exactly the derivative but it's arithematic operation # to save computing time since we calculated the sigmoid earlier def train(self, features, targets): ''' Train the network on batch of features and targets. Arguments --------- features: 2D array, each row is one data record, each column is a feature targets: 1D array of target values ''' n_records = features.shape[0] delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape) delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape) for X, y in zip(features, targets): #### Implement the forward pass here #### hidden_inputs = np.dot(X,self.weights_input_to_hidden) # signals into hidden layer hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer # Since the activation function on the output node is f(x) = x --> input = output final_outputs = final_inputs # signals from final output layer ### Backward pass ### error = y-final_outputs # Output layer error is the difference between desired target and actual output. 
# Error gradient in the output unit output_error_term = error#*(final_outputs) # final_output is the derivative hidden_error_term = np.dot(self.weights_hidden_to_output, output_error_term)*self.activation_prime(hidden_outputs) # Weight step (hidden to output) delta_weights_h_o += output_error_term*hidden_outputs[:, None] # reshaping delta_weights_i_h += hidden_error_term*X[:,None] # reshaping #Update the weights - Replace these values with your calculations. self.weights_hidden_to_output += self.lr*delta_weights_h_o/n_records # update hidden-to-output weights with gradient descent step self.weights_input_to_hidden += self.lr*delta_weights_i_h/n_records # update input-to-hidden weights with gradient descent step def run(self, features): ''' Run a forward pass through the network with input features Arguments --------- features: 1D array of feature values ''' #### Implement the forward pass here #### # TODO: Hidden layer - replace these values with the appropriate calculations. hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer # TODO: Output layer - Replace these values with the appropriate calculations. final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer # Since the output activation function is f(x) = x final_outputs = final_inputs # signals from final output layer return final_outputs def MSE(y, Y): return np.mean((y-Y)**2) Explanation: Time to build the network Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes. <img src="assets/neural_network.png" width=300px> The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation. We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation. Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$. Below, you have these tasks: 1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function. 2. Implement the forward pass in the train method. 3. Implement the backpropagation algorithm in the train method, including calculating the output error. 4. Implement the forward pass in the run method. 
End of explanation import unittest inputs = np.array([[0.5, -0.2, 0.1]]) targets = np.array([[0.4]]) test_w_i_h = np.array([[0.1, -0.2], [0.4, 0.5], [-0.3, 0.2]]) test_w_h_o = np.array([[0.3], [-0.1]]) class TestMethods(unittest.TestCase): ########## # Unit tests for data loading ########## def test_data_path(self): # Test that file path to dataset has been unaltered self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv') def test_data_loaded(self): # Test that data frame loaded self.assertTrue(isinstance(rides, pd.DataFrame)) ########## # Unit tests for network functionality ########## def test_activation(self): network = NeuralNetwork(3, 2, 1, 0.5) # Test that the activation function is a sigmoid self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5)))) def test_train(self): # Test that weights are updated correctly on training network = NeuralNetwork(3, 2, 1, 0.5) network.weights_input_to_hidden = test_w_i_h.copy() network.weights_hidden_to_output = test_w_h_o.copy() network.train(inputs, targets) self.assertTrue(np.allclose(network.weights_hidden_to_output, np.array([[ 0.37275328], [-0.03172939]]))) self.assertTrue(np.allclose(network.weights_input_to_hidden, np.array([[ 0.10562014, -0.20185996], [0.39775194, 0.50074398], [-0.29887597, 0.19962801]]))) def test_run(self): # Test correctness of run method network = NeuralNetwork(3, 2, 1, 0.5) network.weights_input_to_hidden = test_w_i_h.copy() network.weights_hidden_to_output = test_w_h_o.copy() self.assertTrue(np.allclose(network.run(inputs), 0.09998924)) suite = unittest.TestLoader().loadTestsFromModule(TestMethods()) unittest.TextTestRunner().run(suite) Explanation: Unit tests Run these unit tests to check the correctness of your network implementation. This will help you be sure your network was implemented correctly befor you starting trying to train it. These tests must all be successful to pass the project. End of explanation import sys ### Set the hyperparameters here ### iterations = 12000 learning_rate = 0.4 hidden_nodes = 19 output_nodes = 1 N_i = train_features.shape[1] network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate) train_loss = var_loss = 0 losses = {'train':[], 'validation':[]} for ii in range(iterations): # Go through a random batch of 128 records from the training data set batch = np.random.choice(train_features.index, size=128) X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt'] network.train(X, y) # Printing out the training progress train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values) val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values) sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \ + "% ... Training loss: " + str(train_loss)[:5] \ + " ... Validation loss: " + str(val_loss)[:5]) sys.stdout.flush() losses['train'].append(train_loss) losses['validation'].append(val_loss) #print('train loss:{}, val_loss:{}'.format(train_loss, val_loss)) plt.plot(losses['train'], label='Training loss') plt.plot(losses['validation'], label='Validation loss') plt.legend() _ = plt.ylim() Explanation: Training the network Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. 
That is, the loss on the validation set will start increasing as the training set loss drops. You'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later. Choose the number of iterations This is the number of batches of samples from the training data we'll use to train the network. The more iterations you use, the better the model will fit the data. However, if you use too many iterations, then the model with not generalize well to other data, this is called overfitting. You want to find a number here where the network has a low training loss, and the validation loss is at a minimum. As you start overfitting, you'll see the training loss continue to decrease while the validation loss starts to increase. Choose the learning rate This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge. Choose the number of hidden nodes The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose. End of explanation fig, ax = plt.subplots(figsize=(8,4)) mean, std = scaled_features['cnt'] predictions = network.run(test_features).T*std + mean ax.plot(predictions[0], label='Prediction') ax.plot((test_targets['cnt']*std + mean).values, label='Data') ax.set_xlim(right=len(predictions)) ax.legend() dates = pd.to_datetime(rides.ix[test_data.index]['dteday']) dates = dates.apply(lambda d: d.strftime('%b %d')) ax.set_xticks(np.arange(len(dates))[12::24]) _ = ax.set_xticklabels(dates[12::24], rotation=45) data = (test_targets['cnt']*std+mean).reshape(504,1) pred = predictions.T acc = MSE(data,pred) print(acc) Explanation: Check out your predictions Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly. End of explanation
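If the predictions above look wrong, one generic way to localize the problem is a finite-difference gradient check of the backward pass. The snippet below is only an illustrative, self-contained check on a tiny one-weight-layer model with made-up numbers; it is not part of the original project.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def loss(w, x, y):
    # Squared error of a single sigmoid unit.
    return 0.5 * (y - sigmoid(x @ w)) ** 2

rng = np.random.default_rng(1)
w = rng.normal(size=3)
x = np.array([0.2, -0.5, 1.0])
y = 0.7

p = sigmoid(x @ w)
analytic = -(y - p) * p * (1 - p) * x   # chain-rule gradient of the loss w.r.t. w

eps = 1e-6
numeric = np.zeros_like(w)
for i in range(w.size):
    w_plus, w_minus = w.copy(), w.copy()
    w_plus[i] += eps
    w_minus[i] -= eps
    numeric[i] = (loss(w_plus, x, y) - loss(w_minus, x, y)) / (2 * eps)

print(np.max(np.abs(analytic - numeric)))  # should be dominated by floating-point noise (roughly 1e-9 or smaller)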
5,988
Given the following text description, write Python code to implement the functionality described below step by step Description: ES-DOC CMIP6 Model Properties - Ocean MIP Era Step1: Document Authors Set document authors Step2: Document Contributors Specify document contributors Step3: Document Publication Specify document publication status Step4: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Seawater Properties 3. Key Properties --&gt; Bathymetry 4. Key Properties --&gt; Nonoceanic Waters 5. Key Properties --&gt; Software Properties 6. Key Properties --&gt; Resolution 7. Key Properties --&gt; Tuning Applied 8. Key Properties --&gt; Conservation 9. Grid 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Discretisation --&gt; Horizontal 12. Timestepping Framework 13. Timestepping Framework --&gt; Tracers 14. Timestepping Framework --&gt; Baroclinic Dynamics 15. Timestepping Framework --&gt; Barotropic 16. Timestepping Framework --&gt; Vertical Physics 17. Advection 18. Advection --&gt; Momentum 19. Advection --&gt; Lateral Tracers 20. Advection --&gt; Vertical Tracers 21. Lateral Physics 22. Lateral Physics --&gt; Momentum --&gt; Operator 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff 24. Lateral Physics --&gt; Tracers 25. Lateral Physics --&gt; Tracers --&gt; Operator 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity 28. Vertical Physics 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum 32. Vertical Physics --&gt; Interior Mixing --&gt; Details 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum 35. Uplow Boundaries --&gt; Free Surface 36. Uplow Boundaries --&gt; Bottom Boundary Layer 37. Boundary Forcing 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing 1. Key Properties Ocean key properties 1.1. Model Overview Is Required Step5: 1.2. Model Name Is Required Step6: 1.3. Model Family Is Required Step7: 1.4. Basic Approximations Is Required Step8: 1.5. Prognostic Variables Is Required Step9: 2. Key Properties --&gt; Seawater Properties Physical properties of seawater in ocean 2.1. Eos Type Is Required Step10: 2.2. Eos Functional Temp Is Required Step11: 2.3. Eos Functional Salt Is Required Step12: 2.4. Eos Functional Depth Is Required Step13: 2.5. Ocean Freezing Point Is Required Step14: 2.6. Ocean Specific Heat Is Required Step15: 2.7. Ocean Reference Density Is Required Step16: 3. Key Properties --&gt; Bathymetry Properties of bathymetry in ocean 3.1. Reference Dates Is Required Step17: 3.2. Type Is Required Step18: 3.3. Ocean Smoothing Is Required Step19: 3.4. Source Is Required Step20: 4. Key Properties --&gt; Nonoceanic Waters Non oceanic waters treatement in ocean 4.1. Isolated Seas Is Required Step21: 4.2. River Mouth Is Required Step22: 5. Key Properties --&gt; Software Properties Software properties of ocean code 5.1. Repository Is Required Step23: 5.2. Code Version Is Required Step24: 5.3. Code Languages Is Required Step25: 6. Key Properties --&gt; Resolution Resolution in the ocean grid 6.1. Name Is Required Step26: 6.2. 
Canonical Horizontal Resolution Is Required Step27: 6.3. Range Horizontal Resolution Is Required Step28: 6.4. Number Of Horizontal Gridpoints Is Required Step29: 6.5. Number Of Vertical Levels Is Required Step30: 6.6. Is Adaptive Grid Is Required Step31: 6.7. Thickness Level 1 Is Required Step32: 7. Key Properties --&gt; Tuning Applied Tuning methodology for ocean component 7.1. Description Is Required Step33: 7.2. Global Mean Metrics Used Is Required Step34: 7.3. Regional Metrics Used Is Required Step35: 7.4. Trend Metrics Used Is Required Step36: 8. Key Properties --&gt; Conservation Conservation in the ocean component 8.1. Description Is Required Step37: 8.2. Scheme Is Required Step38: 8.3. Consistency Properties Is Required Step39: 8.4. Corrected Conserved Prognostic Variables Is Required Step40: 8.5. Was Flux Correction Used Is Required Step41: 9. Grid Ocean grid 9.1. Overview Is Required Step42: 10. Grid --&gt; Discretisation --&gt; Vertical Properties of vertical discretisation in ocean 10.1. Coordinates Is Required Step43: 10.2. Partial Steps Is Required Step44: 11. Grid --&gt; Discretisation --&gt; Horizontal Type of horizontal discretisation scheme in ocean 11.1. Type Is Required Step45: 11.2. Staggering Is Required Step46: 11.3. Scheme Is Required Step47: 12. Timestepping Framework Ocean Timestepping Framework 12.1. Overview Is Required Step48: 12.2. Diurnal Cycle Is Required Step49: 13. Timestepping Framework --&gt; Tracers Properties of tracers time stepping in ocean 13.1. Scheme Is Required Step50: 13.2. Time Step Is Required Step51: 14. Timestepping Framework --&gt; Baroclinic Dynamics Baroclinic dynamics in ocean 14.1. Type Is Required Step52: 14.2. Scheme Is Required Step53: 14.3. Time Step Is Required Step54: 15. Timestepping Framework --&gt; Barotropic Barotropic time stepping in ocean 15.1. Splitting Is Required Step55: 15.2. Time Step Is Required Step56: 16. Timestepping Framework --&gt; Vertical Physics Vertical physics time stepping in ocean 16.1. Method Is Required Step57: 17. Advection Ocean advection 17.1. Overview Is Required Step58: 18. Advection --&gt; Momentum Properties of lateral momemtum advection scheme in ocean 18.1. Type Is Required Step59: 18.2. Scheme Name Is Required Step60: 18.3. ALE Is Required Step61: 19. Advection --&gt; Lateral Tracers Properties of lateral tracer advection scheme in ocean 19.1. Order Is Required Step62: 19.2. Flux Limiter Is Required Step63: 19.3. Effective Order Is Required Step64: 19.4. Name Is Required Step65: 19.5. Passive Tracers Is Required Step66: 19.6. Passive Tracers Advection Is Required Step67: 20. Advection --&gt; Vertical Tracers Properties of vertical tracer advection scheme in ocean 20.1. Name Is Required Step68: 20.2. Flux Limiter Is Required Step69: 21. Lateral Physics Ocean lateral physics 21.1. Overview Is Required Step70: 21.2. Scheme Is Required Step71: 22. Lateral Physics --&gt; Momentum --&gt; Operator Properties of lateral physics operator for momentum in ocean 22.1. Direction Is Required Step72: 22.2. Order Is Required Step73: 22.3. Discretisation Is Required Step74: 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean 23.1. Type Is Required Step75: 23.2. Constant Coefficient Is Required Step76: 23.3. Variable Coefficient Is Required Step77: 23.4. Coeff Background Is Required Step78: 23.5. Coeff Backscatter Is Required Step79: 24. 
Lateral Physics --&gt; Tracers Properties of lateral physics for tracers in ocean 24.1. Mesoscale Closure Is Required Step80: 24.2. Submesoscale Mixing Is Required Step81: 25. Lateral Physics --&gt; Tracers --&gt; Operator Properties of lateral physics operator for tracers in ocean 25.1. Direction Is Required Step82: 25.2. Order Is Required Step83: 25.3. Discretisation Is Required Step84: 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean 26.1. Type Is Required Step85: 26.2. Constant Coefficient Is Required Step86: 26.3. Variable Coefficient Is Required Step87: 26.4. Coeff Background Is Required Step88: 26.5. Coeff Backscatter Is Required Step89: 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean 27.1. Type Is Required Step90: 27.2. Constant Val Is Required Step91: 27.3. Flux Type Is Required Step92: 27.4. Added Diffusivity Is Required Step93: 28. Vertical Physics Ocean Vertical Physics 28.1. Overview Is Required Step94: 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details Properties of vertical physics in ocean 29.1. Langmuir Cells Mixing Is Required Step95: 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers *Properties of boundary layer (BL) mixing on tracers in the ocean * 30.1. Type Is Required Step96: 30.2. Closure Order Is Required Step97: 30.3. Constant Is Required Step98: 30.4. Background Is Required Step99: 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum *Properties of boundary layer (BL) mixing on momentum in the ocean * 31.1. Type Is Required Step100: 31.2. Closure Order Is Required Step101: 31.3. Constant Is Required Step102: 31.4. Background Is Required Step103: 32. Vertical Physics --&gt; Interior Mixing --&gt; Details *Properties of interior mixing in the ocean * 32.1. Convection Type Is Required Step104: 32.2. Tide Induced Mixing Is Required Step105: 32.3. Double Diffusion Is Required Step106: 32.4. Shear Mixing Is Required Step107: 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers *Properties of interior mixing on tracers in the ocean * 33.1. Type Is Required Step108: 33.2. Constant Is Required Step109: 33.3. Profile Is Required Step110: 33.4. Background Is Required Step111: 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum *Properties of interior mixing on momentum in the ocean * 34.1. Type Is Required Step112: 34.2. Constant Is Required Step113: 34.3. Profile Is Required Step114: 34.4. Background Is Required Step115: 35. Uplow Boundaries --&gt; Free Surface Properties of free surface in ocean 35.1. Overview Is Required Step116: 35.2. Scheme Is Required Step117: 35.3. Embeded Seaice Is Required Step118: 36. Uplow Boundaries --&gt; Bottom Boundary Layer Properties of bottom boundary layer in ocean 36.1. Overview Is Required Step119: 36.2. Type Of Bbl Is Required Step120: 36.3. Lateral Mixing Coef Is Required Step121: 36.4. Sill Overflow Is Required Step122: 37. Boundary Forcing Ocean boundary forcing 37.1. Overview Is Required Step123: 37.2. Surface Pressure Is Required Step124: 37.3. Momentum Flux Correction Is Required Step125: 37.4. Tracers Flux Correction Is Required Step126: 37.5. Wave Effects Is Required Step127: 37.6. River Runoff Budget Is Required Step128: 37.7. Geothermal Heating Is Required Step129: 38. 
Boundary Forcing --&gt; Momentum --&gt; Bottom Friction Properties of momentum bottom friction in ocean 38.1. Type Is Required Step130: 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction Properties of momentum lateral friction in ocean 39.1. Type Is Required Step131: 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration Properties of sunlight penetration scheme in ocean 40.1. Scheme Is Required Step132: 40.2. Ocean Colour Is Required Step133: 40.3. Extinction Depth Is Required Step134: 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing Properties of surface fresh water forcing in ocean 41.1. From Atmopshere Is Required Step135: 41.2. From Sea Ice Is Required Step136: 41.3. Forced Mode Restoring Is Required
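Every section listed above is filled in with the same few pyesdoc calls, which the notebook cells below repeat property by property. As a condensed, hedged illustration of that pattern — the author, contributor, and overview values here are placeholders, not real BCC metadata:

# Condensed sketch of the fill-in workflow; the values below are placeholders only.
from pyesdoc.ipython.model_topic import NotebookOutput

DOC = NotebookOutput('cmip6', 'bcc', 'sandbox-1', 'ocean')   # initialise the document
DOC.set_author("Jane Doe", "jane.doe@example.org")           # document author (placeholder)
DOC.set_contributor("John Roe", "john.roe@example.org")      # document contributor (placeholder)
DOC.set_publication_status(0)                                # 0 = do not publish

# Each property is selected by its id and then given a value of the required type.
DOC.set_id('cmip6.ocean.key_properties.model_overview')
DOC.set_value("Placeholder overview of the ocean component.")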
Python Code: # DO NOT EDIT ! from pyesdoc.ipython.model_topic import NotebookOutput # DO NOT EDIT ! DOC = NotebookOutput('cmip6', 'bcc', 'sandbox-1', 'ocean') Explanation: ES-DOC CMIP6 Model Properties - Ocean MIP Era: CMIP6 Institute: BCC Source ID: SANDBOX-1 Topic: Ocean Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. Properties: 133 (101 required) Model descriptions: Model description details Initialized From: -- Notebook Help: Goto notebook help page Notebook Initialised: 2018-02-15 16:53:39 Document Setup IMPORTANT: to be executed each time you run the notebook End of explanation # Set as follows: DOC.set_author("name", "email") # TODO - please enter value(s) Explanation: Document Authors Set document authors End of explanation # Set as follows: DOC.set_contributor("name", "email") # TODO - please enter value(s) Explanation: Document Contributors Specify document contributors End of explanation # Set publication status: # 0=do not publish, 1=publish. DOC.set_publication_status(0) Explanation: Document Publication Specify document publication status End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: Document Table of Contents 1. Key Properties 2. Key Properties --&gt; Seawater Properties 3. Key Properties --&gt; Bathymetry 4. Key Properties --&gt; Nonoceanic Waters 5. Key Properties --&gt; Software Properties 6. Key Properties --&gt; Resolution 7. Key Properties --&gt; Tuning Applied 8. Key Properties --&gt; Conservation 9. Grid 10. Grid --&gt; Discretisation --&gt; Vertical 11. Grid --&gt; Discretisation --&gt; Horizontal 12. Timestepping Framework 13. Timestepping Framework --&gt; Tracers 14. Timestepping Framework --&gt; Baroclinic Dynamics 15. Timestepping Framework --&gt; Barotropic 16. Timestepping Framework --&gt; Vertical Physics 17. Advection 18. Advection --&gt; Momentum 19. Advection --&gt; Lateral Tracers 20. Advection --&gt; Vertical Tracers 21. Lateral Physics 22. Lateral Physics --&gt; Momentum --&gt; Operator 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff 24. Lateral Physics --&gt; Tracers 25. Lateral Physics --&gt; Tracers --&gt; Operator 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity 28. Vertical Physics 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum 32. Vertical Physics --&gt; Interior Mixing --&gt; Details 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum 35. Uplow Boundaries --&gt; Free Surface 36. Uplow Boundaries --&gt; Bottom Boundary Layer 37. Boundary Forcing 38. Boundary Forcing --&gt; Momentum --&gt; Bottom Friction 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing 1. Key Properties Ocean key properties 1.1. Model Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of ocean model. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.model_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 1.2. Model Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean model code (NEMO 3.6, MOM 5.0,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.model_family') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "OGCM" # "slab ocean" # "mixed layer ocean" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.3. Model Family Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of ocean model. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.basic_approximations') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Primitive equations" # "Non-hydrostatic" # "Boussinesq" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.4. Basic Approximations Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Basic approximations made in the ocean. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.prognostic_variables') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # "Salinity" # "U-velocity" # "V-velocity" # "W-velocity" # "SSH" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 1.5. Prognostic Variables Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N List of prognostic variables in the ocean component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Wright, 1997" # "Mc Dougall et al." # "Jackett et al. 2006" # "TEOS 2010" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 2. Key Properties --&gt; Seawater Properties Physical properties of seawater in ocean 2.1. Eos Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Potential temperature" # "Conservative temperature" # TODO - please enter value(s) Explanation: 2.2. Eos Functional Temp Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Temperature used in EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Practical salinity Sp" # "Absolute salinity Sa" # TODO - please enter value(s) Explanation: 2.3. Eos Functional Salt Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Salinity used in EOS for sea water End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Pressure (dbars)" # "Depth (meters)" # TODO - please enter value(s) Explanation: 2.4. Eos Functional Depth Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Depth or pressure used in EOS for sea water ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "TEOS 2010" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 2.5. Ocean Freezing Point Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 2.6. Ocean Specific Heat Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Specific heat in ocean (cpocean) in J/(kg K) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 2.7. Ocean Reference Density Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Boussinesq reference density (rhozero) in kg / m3 End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Present day" # "21000 years BP" # "6000 years BP" # "LGM" # "Pliocene" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 3. Key Properties --&gt; Bathymetry Properties of bathymetry in ocean 3.1. Reference Dates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Reference date of bathymetry End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.type') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 3.2. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the bathymetry fixed in time in the ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.3. Ocean Smoothing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any smoothing or hand editing of bathymetry in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.bathymetry.source') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 3.4. Source Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe source of bathymetry in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4. Key Properties --&gt; Nonoceanic Waters Non oceanic waters treatement in ocean 4.1. Isolated Seas Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how isolated seas is performed End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 4.2. River Mouth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe if/how river mouth mixing or estuaries specific treatment is performed End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.repository') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5. Key Properties --&gt; Software Properties Software properties of ocean code 5.1. Repository Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Location of code for this component. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.2. Code Version Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Code version identifier. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 5.3. Code Languages Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Code language(s). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6. Key Properties --&gt; Resolution Resolution in the ocean grid 6.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.2. Canonical Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 6.3. Range Horizontal Resolution Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc. End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.4. Number Of Horizontal Gridpoints Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Total number of horizontal (XY) points (or degrees of freedom) on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.5. Number Of Vertical Levels Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Number of vertical levels resolved on computational grid. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 6.6. Is Adaptive Grid Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Default is False. Set true if grid resolution changes during execution. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 6.7. Thickness Level 1 Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Thickness of first surface ocean level (in meters) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7. Key Properties --&gt; Tuning Applied Tuning methodology for ocean component 7.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 General overview description of tuning: explain and motivate the main targets and metrics retained. &amp;Document the relative weight given to climate performance metrics versus process oriented metrics, &amp;and on the possible conflicts with parameterization level tuning. In particular describe any struggle &amp;with a parameter value that required pushing it to its limits to solve a particular model deficiency. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.2. Global Mean Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List set of metrics of the global mean state used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.3. Regional Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 7.4. Trend Metrics Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N List observed trend metrics used in tuning model/component End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.description') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8. Key Properties --&gt; Conservation Conservation in the ocean component 8.1. Description Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Brief description of conservation methodology End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.scheme') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Energy" # "Enstrophy" # "Salt" # "Volume of ocean" # "Momentum" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 8.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.N Properties conserved in the ocean by the numerical schemes End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.3. Consistency Properties Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 8.4. Corrected Conserved Prognostic Variables Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Set of variables which are conserved by more than the numerical scheme alone. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 8.5. Was Flux Correction Used Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Does conservation involve flux correction ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 9. Grid Ocean grid 9.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of grid in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Z-coordinate" # "Z*-coordinate" # "S-coordinate" # "Isopycnic - sigma 0" # "Isopycnic - sigma 2" # "Isopycnic - sigma 4" # "Isopycnic - other" # "Hybrid / Z+S" # "Hybrid / Z+isopycnic" # "Hybrid / other" # "Pressure referenced (P)" # "P*" # "Z**" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 10. 
Grid --&gt; Discretisation --&gt; Vertical Properties of vertical discretisation in ocean 10.1. Coordinates Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical coordinates in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 10.2. Partial Steps Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Using partial steps with Z or Z vertical coordinate in ocean ?* End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Lat-lon" # "Rotated north pole" # "Two north poles (ORCA-style)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11. Grid --&gt; Discretisation --&gt; Horizontal Type of horizontal discretisation scheme in ocean 11.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal grid type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Arakawa B-grid" # "Arakawa C-grid" # "Arakawa E-grid" # "N/a" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.2. Staggering Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Horizontal grid staggering type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Finite difference" # "Finite volumes" # "Finite elements" # "Unstructured grid" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 11.3. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Horizontal discretisation scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 12. Timestepping Framework Ocean Timestepping Framework 12.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of time stepping in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Via coupling" # "Specific treatment" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 12.2. Diurnal Cycle Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Diurnal cycle type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 13. Timestepping Framework --&gt; Tracers Properties of tracers time stepping in ocean 13.1. 
Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time stepping scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 13.2. Time Step Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Tracers time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Preconditioned conjugate gradient" # "Sub cyling" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14. Timestepping Framework --&gt; Baroclinic Dynamics Baroclinic dynamics in ocean 14.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics type End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Leap-frog + Asselin filter" # "Leap-frog + Periodic Euler" # "Predictor-corrector" # "Runge-Kutta 2" # "AM3-LF" # "Forward-backward" # "Forward operator" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 14.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Baroclinic dynamics scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 14.3. Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Baroclinic time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "split explicit" # "implicit" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 15. Timestepping Framework --&gt; Barotropic Barotropic time stepping in ocean 15.1. Splitting Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Time splitting method End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 15.2. Time Step Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Barotropic time step (in seconds) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 16. Timestepping Framework --&gt; Vertical Physics Vertical physics time stepping in ocean 16.1. Method Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Details of vertical time stepping in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 17. Advection Ocean advection 17.1. 
Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of advection in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Flux form" # "Vector form" # TODO - please enter value(s) Explanation: 18. Advection --&gt; Momentum Properties of lateral momemtum advection scheme in ocean 18.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of lateral momemtum advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.scheme_name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 18.2. Scheme Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Name of ocean momemtum advection scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.momentum.ALE') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 18.3. ALE Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Using ALE for vertical advection ? (if vertical coordinates are sigma) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 19. Advection --&gt; Lateral Tracers Properties of lateral tracer advection scheme in ocean 19.1. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral tracer advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 19.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for lateral tracer advection scheme in ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 19.3. Effective Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Effective order of limited lateral tracer advection scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.4. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') # PROPERTY VALUE(S): # Set as follows: DOC.set_value("value") # Valid Choices: # "Ideal age" # "CFC 11" # "CFC 12" # "SF6" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 19.5. 
Passive Tracers Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.N Passive tracers advected End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 19.6. Passive Tracers Advection Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Is advection of passive tracers different than active ? if so, describe. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.vertical_tracers.name') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 20. Advection --&gt; Vertical Tracers Properties of vertical tracer advection scheme in ocean 20.1. Name Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 20.2. Flux Limiter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Monotonic flux limiter for vertical tracer advection scheme in ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 21. Lateral Physics Ocean lateral physics 21.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of lateral physics in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Eddy active" # "Eddy admitting" # TODO - please enter value(s) Explanation: 21.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of transient eddy representation in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22. Lateral Physics --&gt; Momentum --&gt; Operator Properties of lateral physics operator for momentum in ocean 22.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics momemtum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.2. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics momemtum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 22.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics momemtum scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 23. Lateral Physics --&gt; Momentum --&gt; Eddy Viscosity Coeff Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean 23.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics momemtum eddy viscosity coeff type in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 23.2. Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy viscosity coeff in lateral physics momemtum scheme (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23.3. Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy viscosity coeff in lateral physics momemtum scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 23.4. Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy viscosity coeff in lateral physics momemtum scheme (give values in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 23.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy viscosity coeff in lateral physics momemtum scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 24. Lateral Physics --&gt; Tracers Properties of lateral physics for tracers in ocean 24.1. Mesoscale Closure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a mesoscale closure in the lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 24.2. Submesoscale Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Horizontal" # "Isopycnal" # "Isoneutral" # "Geopotential" # "Iso-level" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25. Lateral Physics --&gt; Tracers --&gt; Operator Properties of lateral physics operator for tracers in ocean 25.1. Direction Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Direction of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Harmonic" # "Bi-harmonic" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.2. Order Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Order of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Second order" # "Higher order" # "Flux limiter" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 25.3. Discretisation Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Discretisation of lateral physics tracers scheme in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant" # "Space varying" # "Time + space varying (Smagorinsky)" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 26. Lateral Physics --&gt; Tracers --&gt; Eddy Diffusity Coeff Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean 26.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Lateral physics tracers eddy diffusity coeff type in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 26.2. Constant Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 26.3. 
Variable Coefficient Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 26.4. Coeff Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 26.5. Coeff Backscatter Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "GM" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 27. Lateral Physics --&gt; Tracers --&gt; Eddy Induced Velocity Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean 27.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV in lateral physics tracers in the ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 27.2. Constant Val Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If EIV scheme for tracers is constant, specify coefficient value (M2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.3. Flux Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV flux (advective or skew) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 27.4. Added Diffusivity Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of EIV added diffusivity (constant, flow dependent or none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 28. Vertical Physics Ocean Vertical Physics 28.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of vertical physics in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 29. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Details Properties of vertical physics in ocean 29.1. Langmuir Cells Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there Langmuir cells mixing in upper ocean ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 30. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Tracers *Properties of boundary layer (BL) mixing on tracers in the ocean * 30.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for tracers in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 30.3. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of tracers, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 30.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of tracers coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure - TKE" # "Turbulent closure - KPP" # "Turbulent closure - Mellor-Yamada" # "Turbulent closure - Bulk Mixed Layer" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 31. Vertical Physics --&gt; Boundary Layer Mixing --&gt; Momentum *Properties of boundary layer (BL) mixing on momentum in the ocean * 31.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of boundary layer mixing for momentum in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 31.2. Closure Order Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: FLOAT&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 31.3. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant BL mixing of momentum, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 31.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background BL mixing of momentum coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Non-penetrative convective adjustment" # "Enhanced vertical diffusion" # "Included in turbulence closure" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 32. Vertical Physics --&gt; Interior Mixing --&gt; Details *Properties of interior mixing in the ocean * 32.1. Convection Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of vertical convection in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 32.2. Tide Induced Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how tide induced mixing is modelled (barotropic, baroclinic, none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 32.3. Double Diffusion Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there double diffusion End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 32.4. Shear Mixing Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is there interior shear mixing End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 33. Vertical Physics --&gt; Interior Mixing --&gt; Tracers *Properties of interior mixing on tracers in the ocean * 33.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for tracers in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 33.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of tracers, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 33.3. Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 33.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of tracers coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Constant value" # "Turbulent closure / TKE" # "Turbulent closure - Mellor-Yamada" # "Richardson number dependent - PP" # "Richardson number dependent - KT" # "Imbeded as isopycnic vertical coordinate" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 34. Vertical Physics --&gt; Interior Mixing --&gt; Momentum *Properties of interior mixing on momentum in the ocean * 34.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of interior mixing for momentum in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 34.2. Constant Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If constant interior mixing of momentum, specific coefficient (m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34.3. Profile Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ? 
End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 34.4. Background Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Background interior mixing of momentum coefficient, (schema and value in m2/s - may by none) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 35. Uplow Boundaries --&gt; Free Surface Properties of free surface in ocean 35.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of free surface in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear implicit" # "Linear filtered" # "Linear semi-explicit" # "Non-linear implicit" # "Non-linear filtered" # "Non-linear semi-explicit" # "Fully explicit" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 35.2. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Free surface scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 35.3. Embeded Seaice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the sea-ice embeded in the ocean model (instead of levitating) ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36. Uplow Boundaries --&gt; Bottom Boundary Layer Properties of bottom boundary layer in ocean 36.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of bottom boundary layer in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Diffusive" # "Acvective" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 36.2. Type Of Bbl Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of bottom boundary layer in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # TODO - please enter value(s) Explanation: 36.3. Lateral Mixing Coef Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: INTEGER&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 36.4. 
Sill Overflow Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe any specific treatment of sill overflows End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.overview') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37. Boundary Forcing Ocean boundary forcing 37.1. Overview Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Overview of boundary forcing in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.2. Surface Pressure Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...) End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.3. Momentum Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.4. Tracers Flux Correction Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.5. Wave Effects Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how wave effects are modelled at ocean surface. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.6. River Runoff Budget Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe how river runoff from land surface is routed to ocean and any global adjustment done. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 37.7. Geothermal Heating Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Describe if/how geothermal heating is present at ocean bottom. End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Linear" # "Non-linear" # "Non-linear (drag function of speed of tides)" # "Constant drag coefficient" # "None" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 38. 
Boundary Forcing --&gt; Momentum --&gt; Bottom Friction Properties of momentum bottom friction in ocean 38.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum bottom friction in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "None" # "Free-slip" # "No-slip" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 39. Boundary Forcing --&gt; Momentum --&gt; Lateral Friction Properties of momentum lateral friction in ocean 39.1. Type Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of momentum lateral friction in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "1 extinction depth" # "2 extinction depth" # "3 extinction depth" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 40. Boundary Forcing --&gt; Tracers --&gt; Sunlight Penetration Properties of sunlight penetration scheme in ocean 40.1. Scheme Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of sunlight penetration scheme in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') # PROPERTY VALUE: # Set as follows: DOC.set_value(value) # Valid Choices: # True # False # TODO - please enter value(s) Explanation: 40.2. Ocean Colour Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: BOOLEAN&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Is the ocean sunlight penetration scheme ocean colour dependent ? End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 40.3. Extinction Depth Is Required: FALSE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 0.1 Describe and list extinctions depths for sunlight penetration scheme (if applicable). End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41. Boundary Forcing --&gt; Tracers --&gt; Fresh Water Forcing Properties of surface fresh water forcing in ocean 41.1. From Atmopshere Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from atmos in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # Valid Choices: # "Freshwater flux" # "Virtual salt flux" # "Real salt flux" # "Other: [Please specify]" # TODO - please enter value(s) Explanation: 41.2. From Sea Ice Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: ENUM&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface fresh water forcing from sea-ice in ocean End of explanation # PROPERTY ID - DO NOT EDIT ! 
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') # PROPERTY VALUE: # Set as follows: DOC.set_value("value") # TODO - please enter value(s) Explanation: 41.3. Forced Mode Restoring Is Required: TRUE&nbsp;&nbsp;&nbsp;&nbsp;Type: STRING&nbsp;&nbsp;&nbsp;&nbsp;Cardinality: 1.1 Type of surface salinity restoring in forced mode (OMIP) End of explanation
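The cells above are a fill-in template: each property pairs a DOC.set_id(...) call with a DOC.set_value(...) call whose argument must come from the listed "Valid Choices" (or match the stated type and cardinality). As a sketch only, a completed ENUM property might look like the cell below; the chosen value ("Harmonic") is simply one of the valid choices listed for property 22.2 above, not a statement about any particular model, and DOC is assumed to be the document object created in the notebook's setup cells.

# Illustrative completion of one ENUM property (value picked arbitrarily
# from the "Valid Choices" list shown above for property 22.2).
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
DOC.set_value("Harmonic")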
5,989
Given the following text problem statement, write Python code to implement the functionality described below in problem statement Problem: I have the tensors:
Problem:
import numpy as np
import pandas as pd
import torch

ids, x = load_data()
idx = ids.repeat(1, 114).view(30, 1, 114)
result = torch.gather(x, 1, idx)
result = result.squeeze(1)
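The tensor definitions from the original problem statement are not shown above, so the shapes in the answer (a batch of 30, a feature width of 114, and ids and x coming from load_data()) have to be taken as given. The following self-contained sketch reproduces the same gather pattern with small, explicit tensors so the indexing is easy to check; the shapes here (2 batches, 3 rows, 4 features) are illustrative only.

import torch

# x: (B, T, F) batch of T rows with F features; ids: (B, 1) row index to pick per batch.
x = torch.arange(2 * 3 * 4, dtype=torch.float32).view(2, 3, 4)
ids = torch.tensor([[1], [2]])

# Expand the per-batch row index across the feature dimension, gather along dim 1,
# then drop the singleton dimension: picked[b, f] == x[b, ids[b], f].
idx = ids.repeat(1, 4).view(2, 1, 4)
picked = torch.gather(x, 1, idx).squeeze(1)

# Same result via plain advanced indexing.
assert torch.equal(picked, x[torch.arange(2), ids.squeeze(1)])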
5,990
Given the following text description, write Python code to implement the functionality described below step by step Description: Knapsack Problem Bin packing tried to minimize the number of bins needed for a fixed number of items, if we instead fix the number of bins and assign some way to value objects, then the knapsack problem tells us which objects to take to maximize our total item value. Rather than object sizes, in the traditional formulation we consider item weights and imagine that we are packing a backpack for a camping trip or a suitcase for a vacation. How many items of different weights can we fit in our suitcase before our suitcase is too heavy. Which objects should we take? This problems is also known as the capital budgeting problem. Instead of items we think of investment opportunities, instead of value we consider investment return, weight becomes value, and the maximum weight we can carry becomes our budget. Which investments should we take on to maximize our return with the current budget? Step1: 1. First lets make some fake data Step2: Solve it! Step3: And the result
Python Code: from pulp import * import numpy as np Explanation: Knapsack Problem Bin packing tried to minimize the number of bins needed for a fixed number of items, if we instead fix the number of bins and assign some way to value objects, then the knapsack problem tells us which objects to take to maximize our total item value. Rather than object sizes, in the traditional formulation we consider item weights and imagine that we are packing a backpack for a camping trip or a suitcase for a vacation. How many items of different weights can we fit in our suitcase before our suitcase is too heavy. Which objects should we take? This problems is also known as the capital budgeting problem. Instead of items we think of investment opportunities, instead of value we consider investment return, weight becomes value, and the maximum weight we can carry becomes our budget. Which investments should we take on to maximize our return with the current budget? End of explanation items=['item_%d'%i for i in range(20)] item_weights = dict( (i,np.random.randint(1,20)) for i in items) item_values = dict( (i,10*np.random.rand()) for i in items) W = 100 #variables. How many of each object to take. For simplicity lets make this 0 or 1 (classic 0-1 knapsack problem) x = LpVariable.dicts('item',items,0,1, LpBinary) #create the problme prob=LpProblem("knapsack",LpMaximize) #the objective cost = lpSum([ item_values[i]*x[i] for i in items]) prob+=cost #constraint prob += lpSum([ item_weights[i]*x[i] for i in items]) <= W Explanation: 1. First lets make some fake data End of explanation %time prob.solve() print(LpStatus[prob.status]) Explanation: Solve it! End of explanation for i in items: print(i, value(x[i])) print(value(prob.objective)) print(sum([ item_weights[i]*value(x[i]) for i in items])) Explanation: And the result: End of explanation
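For reference, the integer program that the PuLP model above encodes is the standard 0-1 knapsack: with $v_i$ the item values (item_values), $w_i$ the item weights (item_weights) and $W$ the capacity, it is

$\max_{x} \; \sum_i v_i x_i \quad \text{subject to} \quad \sum_i w_i x_i \le W, \qquad x_i \in \{0, 1\}.$

In the capital-budgeting reading from the description, $v_i$ is the return of investment $i$, $w_i$ its cost, and $W$ the available budget.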
5,991
Given the following text description, write Python code to implement the functionality described below step by step Description: We use pyfst, a pyhton wrapper for the great openfst library. Thanks to Victor Chahuneau for the wrapper (check it on github). Step5: Helper code Step6: Input sigma and delta are the source and target vocabulary, respectively. Step7: the input consists in a linear-chain automaton (implemented as a self-transducer) Step8: Rule table Set of known bilingual mappings (word-to-word or phrase-to-phrase). Step9: Monotone word-replacement model For a simple monotone word-replacement model, the automaton G matches our model of translational equivalences exactly. Step10: Permutations Monotone translation is not all that interesting. Let's explore some strategies for word reordering. Step11: Phrase-based It is straightforward to use phrase mappings rather than word mappings. Implementing the model in Moses strictly requires adapting WLd to permute source phrases and project target phrases directly. Simply running WLd on the input and intersecting with the phrase based grammar can result in discountiguous parts of the input beeing covered by a contiguous phrase. Example Step12: Having those two biphrases taking care of local reordering, the monotone model can produce the correct translation our mutual friend Step13: Obviously, we can also use the more general WLd2 model.
Python Code: import fst Explanation: We use pyfst, a pyhton wrapper for the great openfst library. Thanks to Victor Chahuneau for the wrapper (check it on github). End of explanation # Let's see the input as a simple linear chain FSA def make_input(srcstr, sigma = None): converts a nonempty string into a linear chain acceptor @param srcstr is a nonempty string @param sigma is the source vocabulary assert(srcstr.split()) return fst.linear_chain(srcstr.split(), sigma) def parse_rule(rulestr, table = None): rule = [field.strip().split() for field in rulestr.split('|||')] if table is not None: table.append(rule) return rule ruletable = [] parse_rule('A ||| a', ruletable) parse_rule('B ||| b', ruletable) parse_rule('C ||| c', ruletable) parse_rule('A C ||| d', ruletable) parse_rule('D ||| d e', ruletable) parse_rule('C D E ||| d c e', ruletable) ruletable import itertools def make_rules(rules, sigma = None, delta = None): This procedure creates a transducer with all rules in the rule table. It isn't optimised to produce minimal transducers. R = fst.Transducer(isyms = sigma, osyms = delta) n = 1 # next non-initial state for rule in rules: fphrase, ephrase = rule[0], rule[1] longest = max(len(fphrase), len(ephrase)) # the phrases are represented from 0 (initial) to 0 (accepting) # the states in the middle depend on the value of n and on the length of the longest phrase states = [0] + range(n, n + longest) states[-1] = 0 # create the arcs # print R.isyms.find(0) # ε fphrase = tuple(x.encode('utf-8') for x in fphrase) ephrase = tuple(x.encode('utf-8') for x in ephrase) #print fphrase #print ephrase #print fst.EPSILON.encode('utf-8') [R.add_arc(states[i], states[i+1], fe[0], fe[1], 0) for i, fe in enumerate(itertools.izip_longest(fphrase, ephrase, fillvalue=fst.EPSILON))] # updates n (except for single words -- they don't require extra states) n += longest - 1 R[0].initial = True R[0].final = 0 return R R = make_rules(ruletable) R # I am going to start with a very simple wrapper for a python dictionary that # will help us associate unique ids to items # this wrapper simply offers one aditional method (insert) similar to the insert method of an std::map class ItemFactory(object): def __init__(self): self.nextid_ = 0 self.i2s_ = {} def insert(self, item): Inserts a previously unmapped item. Returns the item's unique id and a flag with the result of the intertion. 
uid = self.i2s_.get(item, None) if uid is None: uid = self.nextid_ self.nextid_ += 1 self.i2s_[item] = uid return uid, True return uid, False def get(self, item): Returns the item's unique id (assumes the item has been mapped before) return self.i2s_[item] # Let's define a model of translational equivalences that performs word replacement of arbitrary permutations of the input # constrained to a window of length $d$ (see WLd in (Lopez, 2009)) # same strategy in Moses (for phrase-based models) def WLdPermutations(sentence, d = 2, sigma = None, delta = None): from collections import deque from itertools import takewhile A = fst.Transducer(isyms = sigma, osyms = delta) I = len(sentence) axiom = (1, tuple([False]*min(I - 1, d - 1))) ifactory = ItemFactory() ifactory.insert(axiom) Q = deque([axiom]) while Q: ant = Q.popleft() # antecedent l, C = ant # signature sfrom = ifactory.get(ant) # state id if l == I + 1: # goal item A[sfrom].final = 0 # is a final node continue # adjacent n = 0 if (len(C) == 0 or not C[0]) else sum(takewhile(lambda b : b, C)) # leading ones ll = l + n + 1 CC = list(C[n+1:]) maxlen = min(I - ll, d - 1) if maxlen: m = maxlen - len(CC) # missing positions [CC.append(False) for _ in range(m)] cons = (ll, tuple(CC)) sto, inserted = ifactory.insert(cons) if inserted: Q.append(cons) A.add_arc(sfrom, sto, str(l), sentence[l-1], 0) # non-adjacent ll = l for i in range(l + 1, I + 1): if i - l + 1 > d: # beyond limit break if C[i - l - 1]: # already used continue # free position CC = list(C) CC[i-l-1] = True cons = (ll, tuple(CC)) sto, inserted = ifactory.insert(cons) if inserted: Q.append(cons) A.add_arc(sfrom, sto, str(i), sentence[i-1], 0) return A Explanation: Helper code End of explanation # Let's create a table for the input vocabulary $\Sigma$ sigma = fst.SymbolTable() # and for the output vocabulary $\Delta$ delta = fst.SymbolTable() Explanation: Input sigma and delta are the source and target vocabulary, respectively. End of explanation # Let's have a look at the input as an automaton # we call it F ('f' is the cannonical source language) ex1_F = make_input('nosso amigo comum', sigma) ex1_F Explanation: the input consists in a linear-chain automaton (implemented as a self-transducer) End of explanation # this is how we parse rules # for now let's use word-to-word mappings ex1_table = [] parse_rule('nosso ||| our', ex1_table) parse_rule('nosso ||| ours', ex1_table) parse_rule('amigo ||| friend', ex1_table) parse_rule('amigo ||| mate', ex1_table) parse_rule('comum ||| ordinary', ex1_table) parse_rule('comum ||| mutual', ex1_table) parse_rule('comum ||| common', ex1_table) parse_rule('comum ||| usual', ex1_table) ex1_table # Let's call G the automaton that represents the phrase table (G for 'grammar' due to connection to CFG models) ex1_G = make_rules(ex1_table) ex1_G Explanation: Rule table Set of known bilingual mappings (word-to-word or phrase-to-phrase). End of explanation # The composition defines the space of all sentence pairs whose input is F ex1_E = ex1_F >> ex1_G ex1_E # a projection lets us focus on target strings only ex1_E.project_output() ex1_E Explanation: Monotone word-replacement model For a simple monotone word-replacement model, the automaton G matches our model of translational equivalences exactly. 
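As a quick usage sketch (not part of the original notebook), the ItemFactory helper defined above hands out consecutive integer ids for hashable items -- here the (l, C) signatures used by WLdPermutations -- and reports whether an item was newly inserted:

factory = ItemFactory()
print(factory.insert((1, (False, False))))   # (0, True): new item, gets id 0
print(factory.insert((2, (True, False))))    # (1, True): new item, gets id 1
print(factory.insert((1, (False, False))))   # (0, False): already mapped, same id
print(factory.get((2, (True, False))))       # 1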
End of explanation # these are the permutations of the input according to WL$2$ ex1_WLd2 = WLdPermutations('nosso amigo comum'.split(), 2, None, sigma) ex1_WLd2 # The translations defined by the extended model can be found via composition with R ex1_E = ex1_WLd2 >> ex1_G ex1_E.project_output() ex1_E # these are in fact all permutations (because I = d = 3) ex1_WLd3 = WLdPermutations('nosso amigo comum'.split(), 3, None, sigma) ex1_WLd3 # and these are all solutions with d=3 ex1_WLd3 >> ex1_G Explanation: Permutations Monotone translation is not all that interesting. Let's explore some strategies for word reordering. End of explanation # let's add two biphrases that account for some local reordering ex2_table = list(ex1_table) parse_rule('amigo comum ||| mutual friend', ex2_table) parse_rule('nosso amigo ||| our friend', ex2_table) ex2_table ex2_G = make_rules(ex2_table, sigma, delta) ex2_G Explanation: Phrase-based It is straightforward to use phrase mappings rather than word mappings. Implementing the model in Moses strictly requires adapting WLd to permute source phrases and project target phrases directly. Simply running WLd on the input and intersecting with the phrase based grammar can result in discountiguous parts of the input beeing covered by a contiguous phrase. Example: a b c d e can be permuted into a c b d e by WLd3, and a phrase pair a c =&gt; x y might exist. However, phrase-based MT in its standard formulation would not allow that. Let's disregard that for now. End of explanation ex1_F >> ex2_G Explanation: Having those two biphrases taking care of local reordering, the monotone model can produce the correct translation our mutual friend End of explanation ex1_WLd2 >> ex2_G def make_ngram(ngrams, delta): pass nac_F = WLdPermutations('nosso amigo comum'.split(), 2, None, sigma) nac_F nac_table = [] parse_rule('nosso ||| our', nac_table) parse_rule('amigo ||| friend', nac_table) parse_rule('comum ||| common', nac_table) parse_rule('comum ||| mutual', nac_table) parse_rule('comum ||| ordinary', nac_table) parse_rule('amigo comum ||| mutual friend', nac_table) nac_table nac_G = make_rules(ex2_table, sigma, delta) nac_G nac_F >> nac_G Explanation: Obviously, we can also use the more general WLd2 model. End of explanation
5,992
Given the following text description, write Python code to implement the functionality described below step by step Description: <h1> 1. Exploring natality dataset </h1> This notebook illustrates Step2: <h2> Explore data </h2> The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that -- this way, twins born on the same day won't end up in different cuts of the data. Step4: Let's write a query to find the unique values for each of the columns and the count of those values. This is important to ensure that we have enough examples of each data value, and to verify our hunch that the parameter has predictive value.
Python Code: # change these to try this notebook out BUCKET = 'cloud-training-demos-ml' PROJECT = 'cloud-training-demos' REGION = 'us-central1' import os os.environ['BUCKET'] = BUCKET os.environ['PROJECT'] = PROJECT os.environ['REGION'] = REGION %%bash if ! gsutil ls | grep -q gs://${BUCKET}/; then gsutil mb -l ${REGION} gs://${BUCKET} fi Explanation: <h1> 1. Exploring natality dataset </h1> This notebook illustrates: <ol> <li> Exploring a BigQuery dataset using AI Platform Notebooks. </ol> End of explanation # Create SQL query using natality data after the year 2000 query = SELECT weight_pounds, is_male, mother_age, plurality, gestation_weeks, FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth FROM publicdata.samples.natality WHERE year > 2000 # Call BigQuery and examine in dataframe from google.cloud import bigquery df = bigquery.Client().query(query + " LIMIT 100").to_dataframe() df.head() Explanation: <h2> Explore data </h2> The data is natality data (record of births in the US). My goal is to predict the baby's weight given a number of factors about the pregnancy and the baby's mother. Later, we will want to split the data into training and eval datasets. The hash of the year-month will be used for that -- this way, twins born on the same day won't end up in different cuts of the data. End of explanation # Create function that finds the number of records and the average weight for each value of the chosen column def get_distinct_values(column_name): sql = SELECT {0}, COUNT(1) AS num_babies, AVG(weight_pounds) AS avg_wt FROM publicdata.samples.natality WHERE year > 2000 GROUP BY {0} .format(column_name) return bigquery.Client().query(sql).to_dataframe() # Bar plot to see is_male with avg_wt linear and num_babies logarithmic df = get_distinct_values('is_male') df.plot(x='is_male', y='num_babies', kind='bar'); df.plot(x='is_male', y='avg_wt', kind='bar'); # Line plots to see mother_age with avg_wt linear and num_babies logarithmic df = get_distinct_values('mother_age') df = df.sort_values('mother_age') df.plot(x='mother_age', y='num_babies'); df.plot(x='mother_age', y='avg_wt'); # Bar plot to see plurality(singleton, twins, etc.) with avg_wt linear and num_babies logarithmic df = get_distinct_values('plurality') df = df.sort_values('plurality') df.plot(x='plurality', y='num_babies', logy=True, kind='bar'); df.plot(x='plurality', y='avg_wt', kind='bar'); # Bar plot to see gestation_weeks with avg_wt linear and num_babies logarithmic df = get_distinct_values('gestation_weeks') df = df.sort_values('gestation_weeks') df.plot(x='gestation_weeks', y='num_babies', logy=True, kind='bar'); df.plot(x='gestation_weeks', y='avg_wt', kind='bar'); Explanation: Let's write a query to find the unique values for each of the columns and the count of those values. This is important to ensure that we have enough examples of each data value, and to verify our hunch that the parameter has predictive value. End of explanation
5,993
Given the following text description, write Python code to implement the functionality described below step by step Description: ISQA 8080 Homework 2.2 Brian Detweiler Step1: Tasks The fourth task is to upload a dataset and to write and run different queries on this dataset. The fifth task starts with Redis. First, installation and running Redis either on your own computer or running it on The sixth task demonstrates three examples of using Redis. The seventh tasks asks you to describe a solution to use Redis for storing user ratings. The eighth tasks asks you to write a reflection on the discussion boards. Deliverables (Summary, Detailed description in the tasks) Step2: 1. Display all occupations Step3: 2. Chose an occupation and select all users with this occupation. Only show user information and hide the users' movie ratings. Step4: 3. Count the number of men in the database. Step5: 4. Select all users whose zipcode starts with a 5. Only show the user ID and zipcode. Step6: 5. Select all movies from the year 1998 and category comedy. Step7: 6. Count the number of movies from the year 1990 and 1995. Step8: 7. Display all movies published before the year 1992. Step9: 8. Imagine that you registered for MovieLens. Create a new user with your user data. Do not include any ratings. Note Step10: 9. Update the user record you created in the previous query and insert a new rating for a movie of your choice. You will need to use $addToSet (https Step11: 10. A query of your choice. Here, we'll find users near Omaha
Python Code: import pymongo from pymongo import MongoClient import datetime import re from pymongo import InsertOne, DeleteOne, ReplaceOne import datetime client = MongoClient() client = MongoClient('mongodb://localhost:27017/') db = client.homework2 users = db.users movies = db.movies Explanation: ISQA 8080 Homework 2.2 Brian Detweiler End of explanation movieList = movies.find({"_id": 1}) for movie in movieList: print movie Explanation: Tasks The fourth task is to upload a dataset and to write and run different queries on this dataset. The fifth task starts with Redis. First, installation and running Redis either on your own computer or running it on The sixth task demonstrates three examples of using Redis. The seventh tasks asks you to describe a solution to use Redis for storing user ratings. The eighth tasks asks you to write a reflection on the discussion boards. Deliverables (Summary, Detailed description in the tasks): See task four: In the answer sheet "Homework 2 Answer Sheet.docx", write the 10 queries including a screenshot of the command and the first lines of the result. See task seven: In the answer sheet "Homework 2 Answer Sheet. Docx", describe how Redis can be used to store user ratings. You need to include screenshots to show how data would be stored as well as retrieved in Redis. Submit the Answer Sheet on Blackboard using the link "Homework 2 Part 2: MongoDB and Redis" (where you downloaded the instructions and the text files). See task eight: Write a reflection by replying to the thread "Homework Reflection 2" in the discussion board End of explanation occupations = sorted(users.distinct(u'occupation')) for occupation in occupations: print occupation Explanation: 1. Display all occupations End of explanation # We will choose 'technician/engineer' and hide movie reviews - limit to first 10 results engineers = users.find({u'occupation': 'technician/engineer'}, {u'movies':0}).limit(10) for engineer in engineers: print engineer Explanation: 2. Chose an occupation and select all users with this occupation. Only show user information and hide the users' movie ratings. End of explanation men = users.find({u'gender': 'M'}).count() print "There are " + str(men) + " men in the database." Explanation: 3. Count the number of men in the database. End of explanation regex = re.compile("^5.*") by_zipcodes = users.find({u'zipcode': regex}, {u'_id':1, u'zipcode': 1}).limit(10) for zipcode in by_zipcodes: print zipcode Explanation: 4. Select all users whose zipcode starts with a 5. Only show the user ID and zipcode. End of explanation comedies_from_1998 = movies.find({u'year': 1998, u'category': u'Comedy'}).limit(10) for comedy in comedies_from_1998: print comedy Explanation: 5. Select all movies from the year 1998 and category comedy. End of explanation movies_between_1990_and_1995 = movies.find({u'year': {'$gte': 1990, '$lte': 1995}}).count() print 'There are ' + str(movies_between_1990_and_1995) + ' movies between 1990 and 1995' Explanation: 6. Count the number of movies from the year 1990 and 1995. End of explanation movies_before_1992 = movies.find({u'year': {'$lt': 1992}})\ .limit(10)\ .sort('year', pymongo.ASCENDING) for movie in movies_before_1992: print movie Explanation: 7. Display all movies published before the year 1992. 
End of explanation # users.delete_one({u'zipcode': u'68102'}) me = users.find_one({u'zipcode': u'68102'}) if me is not None: print me else: max_id = users.find_one({u'zipcode': u'68102'}) myself = {u'gender': u'M', u'age': u'25-34', u'zipcode': u'68102', u'occupation': u'technician/engineer'} result = db.users.insert_one(myself) my_id = result.inserted_id print 'created my profile with id ' + str(my_id) Explanation: 8. Imagine that you registered for MovieLens. Create a new user with your user data. Do not include any ratings. Note: The _id here is auto-generated, as per MongoDB conventions I have seen ways to grab the max ID and increment it by one, but this is not safe in general, so I am not implementing it here. End of explanation movie_review = {u'moveID': 3272, # Bad Lieutenant u'rating': 5, u'timestamp': datetime.datetime.utcnow()} users.update_one( { u'zipcode': u'68102'}, { "$addToSet":{"movies": movie_review} }, upsert=True) me = users.find_one({u'zipcode': u'68102'}) print me Explanation: 9. Update the user record you created in the previous query and insert a new rating for a movie of your choice. You will need to use $addToSet (https://docs.mongodb.com/v3.0/reference/operator/update/addToSet/) to add a value that does not exist in an array to the end of the array. You can use Math.round(new Date().getTime()/1000) to calculate the time in Unix format (seconds since start from epoch). End of explanation regex = re.compile("^681.*") by_zipcodes = users.find({u'zipcode': regex}, {u'_id':1, u'zipcode': 1}).limit(10) for zipcode in by_zipcodes: print zipcode Explanation: 10. A query of your choice. Here, we'll find users near Omaha End of explanation
5,994
Given the following text description, write Python code to implement the functionality described below step by step Description: 4. Mutant lists and other X-Men (or X-People, as you prefer) Lists are simultaneously among the most useful and most confusing data structures in Python. Why? Because of mutability. Mutating. So what is that and why can't we use lists as keys in dictionaries? Let's see by creating some lists and "mutating" them. Though do be careful Step1: Interesting... We wanted to change two elements (line 7), but added four instead! Rather, we mutated list a with list b. "Mutability" means, simply, that we can change the structure of the object. It's easy to change, or mutate, a list. Virtually any action on the list changes its structure, length, the order of its elements, or something else that is likely important. After mutation, it's the same list with the same variable name, but different content. Hashing. Besides being mutable, objects may or may not be hashable. As you should know by know, Python provides built-in collections, like dict and set, which provide "high-speed" access to their elements. The way they do that is through the __hash__() method. A hash function is one that takes an object as input and returns an (arbitrary) number. You can read more about hash functions on Wikipedia. If an object has a __hash__() method then it is hashable. We can check how it works trying to call hash() function on different data types Step2: We see that we can get a hash from int (exact int value), from float, and string (although hash for them is not as obvious as for int), but not for a list. You can see that trying to call hash() function on a list returns a TypeError. That happens because list doesn't implement __hash__() method, which is a consequence of list mutability. Why? Because if a list can mutate, there isn't a way to create a unique mapping of a given list to a consistent value, because the list could change at any time. So, when you have some collection of elements in a list and you need to make it hashable (for example, to use a key for dict), turn it into tuple Step3: Aliasing. Another important concept is aliasing. Let's start with example Step4: What happened? Apparently, when we create a new variable and assign it the name of another variable, Python does not create a new variable. Instead it creates an alias, which is a new name that points to the same object. Literally, both a and b point to the same list, so when we change the list using the name a, accessing it through the name b also reflects these changes. To avoid such behavior we need to make a copy using, for instance, a copy constructor, like so Step5: Almost all list methods in Python do not return a new list, but modify (or mutate) it. For that reason, if you have several aliases, all of them reflect the changes after list mutation. Step6: Exercise. Aliasing also happens at function call boundaries. Try to predict what the following code will do before you run it.
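Since the description above turns on which objects can be used as dict keys, a small illustrative check (not part of the original notebook) shows the two usual hashable stand-ins for a list: a tuple when order matters, and a frozenset when it does not.

lookup = {}
lookup[tuple([3, 7])] = 'ordered key'           # a tuple freezes the list's contents
lookup[frozenset([3, 7, 7])] = 'unordered key'  # frozenset is the hashable cousin of set
print(lookup)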
Python Code: a = list(range(5)) print ("The list we created:", a, "of length", len(a)) b = list(range(6,10)) print ("The second list we created:", b, "of length", len(b)) a[1:3] = b # Line 7 print ("The first list after we changed a couple of elements is", a, "with length", len(a)) Explanation: 4. Mutant lists and other X-Men (or X-People, as you prefer) Lists are simultaneously among the most useful and most confusing data structures in Python. Why? Because of mutability. Mutating. So what is that and why can't we use lists as keys in dictionaries? Let's see by creating some lists and "mutating" them. Though do be careful: mutations may be hazardous to your (program's) health! End of explanation print ("hash of int(42) is", hash(42)) print ("hash of float(42.001) is", hash(42.001)) print ("hash of str('42') is", hash('42')) try: print ("hash of list(42) is", hash([42])) except TypeError: print("TypeError: unhashable type: 'list'") Explanation: Interesting... We wanted to change two elements (line 7), but added four instead! Rather, we mutated list a with list b. "Mutability" means, simply, that we can change the structure of the object. It's easy to change, or mutate, a list. Virtually any action on the list changes its structure, length, the order of its elements, or something else that is likely important. After mutation, it's the same list with the same variable name, but different content. Hashing. Besides being mutable, objects may or may not be hashable. As you should know by know, Python provides built-in collections, like dict and set, which provide "high-speed" access to their elements. The way they do that is through the __hash__() method. A hash function is one that takes an object as input and returns an (arbitrary) number. You can read more about hash functions on Wikipedia. If an object has a __hash__() method then it is hashable. We can check how it works trying to call hash() function on different data types: End of explanation print ("hash of tuple(42, '42') is", hash((42, '42'))) Explanation: We see that we can get a hash from int (exact int value), from float, and string (although hash for them is not as obvious as for int), but not for a list. You can see that trying to call hash() function on a list returns a TypeError. That happens because list doesn't implement __hash__() method, which is a consequence of list mutability. Why? Because if a list can mutate, there isn't a way to create a unique mapping of a given list to a consistent value, because the list could change at any time. So, when you have some collection of elements in a list and you need to make it hashable (for example, to use a key for dict), turn it into tuple: End of explanation a = list(range(5)) print ("The list we created:", a) b = a print ("The second list we created:", b) a[3] = 10 print ("The first list after changing it:", a) print ("And the second list:", b) Explanation: Aliasing. Another important concept is aliasing. Let's start with example: End of explanation a = list(range(5)) print ("The list we created:", a) b = a[:] print ("The second list we created:", b) a[3] = 10 print ("The first list after changing it:", a) print ("And the second list:", b) Explanation: What happened? Apparently, when we create a new variable and assign it the name of another variable, Python does not create a new variable. Instead it creates an alias, which is a new name that points to the same object. 
Literally, both a and b point to the same list, so when we change the list using the name a, accessing it through the name b also reflects these changes. To avoid such behavior we need to make a copy using, for instance, a copy constructor, like so: End of explanation a = list(range(5)) print ("The list we created:", a) b = a print ("The second list we created:", b) b.append(11) a.extend([12, 13]) print ("The first list after mutation:", a) print ("The second list after mutation:", b) a = list(range(5)) b = a + [11, 12] print ("The list we created:", a) print ("The second list we created:", b) b.append(21) a.extend([22, 23]) print ("The first list after mutation:", a) print ("The second list after mutation:", b) Explanation: Almost all list methods in Python do not return a new list, but modify (or mutate) it. For that reason, if you have several aliases, all of them reflect the changes after list mutation. End of explanation def rem_sublist(L, i, j): L[i:j] = [] a = list(range(10)) print(a) rem_sublist(a, 2, 5) print(a) Explanation: Exercise. Aliasing also happens at function call boundaries. Try to predict what the following code will do before you run it. End of explanation
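One caveat worth adding to the copy example above: b = a[:] makes a shallow copy, which is enough for the flat lists used here, but with nested lists the inner lists are still aliased. A short sketch of the difference, included here only for illustration:

import copy

nested = [[1, 2], [3, 4]]
shallow = nested[:]             # new outer list, same inner lists
deep = copy.deepcopy(nested)    # new outer and inner lists
nested[0].append(99)
print(shallow[0])  # [1, 2, 99] -- the inner list is still shared
print(deep[0])     # [1, 2]     -- fully independent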
5,995
Given the following text description, write Python code to implement the functionality described below step by step Description: Extinction Step1: As always, let's do imports and initialize a logger and a new bundle. Step2: First we'll define the system parameters Step3: And then create three light curve datasets at the same times, but in different passbands Step4: Now we'll set some atmosphere and limb-darkening options Step5: And flip the extinction constraint so we can provide E(B-V).
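Because Steps 2 and 5 both rely on flipping constraints, a throwaway sketch of what flip_constraint does on a fresh default binary may help; it only uses the same PHOEBE calls that appear in the code below and is not part of the original example.

import phoebe

b_demo = phoebe.default_binary()
# By default the mass ratio q is free and the component masses are derived from it.
b_demo.flip_constraint('mass@secondary', solve_for='q')
b_demo.set_value('mass', component='secondary', value=1.2)  # in solar masses
print(b_demo.get_value(qualifier='q', context='component'))  # q is now the derived quantity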
Python Code: #!pip install -I "phoebe>=2.3,<2.4" Explanation: Extinction: Eclipse Depth Difference as Function of Temperature In this example, we'll reproduce Figure 3 in the extinction release paper (Jones et al. 2020). NOTE: this script takes a long time to run. <img src="jones+20_fig3.png" alt="Figure 3" width="800px"/> Setup Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab). End of explanation import matplotlib matplotlib.rcParams['text.usetex'] = True matplotlib.rcParams['pdf.fonttype'] = 42 matplotlib.rcParams['ps.fonttype'] = 42 matplotlib.rcParams['mathtext.fontset'] = 'stix' matplotlib.rcParams['font.family'] = 'STIXGeneral' from matplotlib import gridspec %matplotlib inline from astropy.table import Table import phoebe from phoebe import u # units import numpy as np import matplotlib.pyplot as plt logger = phoebe.logger('error') b = phoebe.default_binary() Explanation: As always, let's do imports and initialize a logger and a new bundle. End of explanation b['period@orbit']=10*u.d b['teff@secondary']=5780.*u.K b['requiv@secondary']=1.0*u.solRad b.flip_constraint('mass@primary', solve_for='sma@binary') b.flip_constraint('mass@secondary', solve_for='q') Explanation: First we'll define the system parameters End of explanation times = phoebe.linspace(0, 10, 301) b.add_dataset('lc', times=times, dataset='B', passband="Johnson:B") b.add_dataset('lc', times=times, dataset='R', passband="Cousins:R") Explanation: And then create three light curve datasets at the same times, but in different passbands End of explanation b.set_value_all('gravb_bol', 0.0) b.set_value_all('ld_mode', 'manual') b.set_value_all('ld_func', 'linear') b.set_value_all('ld_coeffs', [0.0]) Explanation: Now we'll set some atmosphere and limb-darkening options End of explanation b.flip_constraint('ebv', solve_for='Av') masses=np.array([ 0.6 , 0.7 , 0.8 , 0.9 , 1. , 1.1 , 1.2 , 1.3 , 1.4 , 1.5 , 1.6 , 1.7 , 1.8 , 1.9 , 1.95, 2. , 2.1 , 2.2 , 2.3 , 2.5 , 3. , 3.5 , 4. , 4.5 , 5. , 6. , 7. , 8. , 10. , 12. , 15. , 20. 
]) temps=np.array([ 4285., 4471., 4828., 5242., 5616., 5942., 6237., 6508., 6796., 7121., 7543., 7968., 8377., 8759., 8947., 9130., 9538., 9883., 10155., 10801., 12251., 13598., 14852., 16151., 17092., 19199., 21013., 22526., 25438., 27861., 30860., 34753.]) radii=np.array([0.51, 0.63, 0.72, 0.80, 0.90, 1.01, 1.13, 1.26, 1.36, 1.44, 1.48, 1.51, 1.54, 1.57, 1.59, 1.61, 1.65, 1.69, 1.71, 1.79, 1.97, 2.14, 2.30, 2.48, 2.59, 2.90, 3.17, 3.39, 3.87, 4.29, 4.85, 5.69]) t=Table(names=('Mass','Tdiff','B1','B2','R1','R2'), dtype=('f4', 'f4', 'f8', 'f8', 'f8', 'f8')) def binmodel(teff,requiv,mass): b.set_value('teff', component='primary', value=teff*u.K) b.set_value('requiv', component='primary', value=requiv*u.solRad) b.set_value('mass', component='primary', value=mass*u.solMass) b.set_value('mass', component='secondary', value=1.0*u.solMass) b.set_value('ebv', value=0.0) b.run_compute(distortion_method='rotstar', irrad_method='none', model='noext', overwrite=True) b.set_value('ebv', value=1.0) b.run_compute(distortion_method='rotstar', irrad_method='none', model='ext', overwrite=True) Bextmags=-2.5*np.log10(b['value@fluxes@B@ext@model']) Bnoextmags=-2.5*np.log10(b['value@fluxes@B@noext@model']) Bdiff=(Bextmags-Bextmags.min())-(Bnoextmags-Bnoextmags.min()) Rextmags=-2.5*np.log10(b['value@fluxes@R@ext@model']) Rnoextmags=-2.5*np.log10(b['value@fluxes@R@noext@model']) Rdiff=(Rextmags-Rextmags.min())-(Rnoextmags-Rnoextmags.min()) tdiff=teff-5780 t.add_row((mass, tdiff, Bdiff[0],Bdiff[150],Rdiff[0],Rdiff[150])) def binmodel_teff(teff): b.set_value('teff', component='primary', value=teff*u.K) b.set_value('ebv', value=0.0) b.run_compute(distortion_method='rotstar', irrad_method='none', model='noext', overwrite=True) b.set_value('ebv', value=1.0) b.run_compute(distortion_method='rotstar', irrad_method='none', model='ext', overwrite=True) Bextmags=-2.5*np.log10(b['value@fluxes@B@ext@model']) Bnoextmags=-2.5*np.log10(b['value@fluxes@B@noext@model']) Bdiff=(Bextmags-Bextmags.min())-(Bnoextmags-Bnoextmags.min()) Rextmags=-2.5*np.log10(b['value@fluxes@R@ext@model']) Rnoextmags=-2.5*np.log10(b['value@fluxes@R@noext@model']) Rdiff=(Rextmags-Rextmags.min())-(Rnoextmags-Rnoextmags.min()) tdiff=teff-5780 t_teff.add_row((tdiff, Bdiff[0],Bdiff[150],Rdiff[0],Rdiff[150])) # NOTE: this loop takes a long time to run for i in range(0,len(masses)): binmodel(temps[i], radii[i], masses[i]) #t.write("Extinction_G2V_ZAMS.dat", format='ascii', overwrite=True) #t=Table.read("Extinction_G2V_ZAMS.dat", format='ascii') plt.clf() plt.plot(t['Tdiff'],t['B1'],color="b",ls="-", label="G2V eclipsed") plt.plot(t['Tdiff'],t['B2'],color="b",ls="--", label="Secondary eclipsed") plt.plot(t['Tdiff'],t['R1'],color="r",ls="-", label="") plt.plot(t['Tdiff'],t['R2'],color="r",ls="--", label="") plt.ylabel(r'$\Delta m$ ') plt.xlabel(r'$T_\mathrm{secondary} - T_\mathrm{G2V}$') plt.legend() plt.xlim([-1450,25000]) t_teff=Table(names=('Tdiff','B1','B2','R1','R2'), dtype=('f4', 'f8', 'f8', 'f8', 'f8')) b.set_value('requiv', component='primary', value=1.0*u.solRad) b.set_value('mass', component='primary', value=1.0*u.solMass) b.set_value('mass', component='secondary', value=1.0*u.solMass) # NOTE: this loop takes a long time to run for i in range(0,len(temps)): binmodel_teff(temps[i]) #t_teff.write("Extinction_Solar_exceptTeff_test.dat", format='ascii', overwrite=True) #t_teff=Table.read("Extinction_Solar_exceptTeff_test.dat", format='ascii') plt.clf() plt.plot(t_teff['Tdiff'],t_teff['B1'],color="b",ls="-", label="G2V eclipsed") 
plt.plot(t_teff['Tdiff'],t_teff['B2'],color="b",ls="--", label="Secondary eclipsed") plt.plot(t_teff['Tdiff'],t_teff['R1'],color="r",ls="-", label="") plt.plot(t_teff['Tdiff'],t_teff['R2'],color="r",ls="--", label="") plt.ylabel(r'$\Delta m$ ') plt.xlabel(r'$T_\mathrm{secondary} - T_\mathrm{G2V}$') plt.legend() plt.xlim([-1450,25000]) Explanation: And flip the extinction constraint so we can provide E(B-V). End of explanation
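For reference on the quantity being provided: E(B-V) and the V-band extinction are related by Av = Rv * E(B-V), so with the standard Milky Way value Rv = 3.1 the ebv = 1.0 set inside binmodel() and binmodel_teff() corresponds to roughly 3.1 magnitudes of extinction in V. A trivial arithmetic check (the 3.1 here is the conventional value, not read from the bundle):

Rv_standard = 3.1    # conventional Milky Way total-to-selective extinction ratio
ebv_used = 1.0       # value set inside binmodel() / binmodel_teff() above
print('implied Av =', Rv_standard * ebv_used)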
5,996
Given the following text description, write Python code to implement the functionality described below step by step Description: CSE 6040, Fall 2015 [24] Step5: Scalability with the problem size To start, here is some code to help generate synthetic problems of a certain size, namely, $m \times (d+1)$, where $m$ is the number of observations and $d$ the number of predictors. The $+1$ comes from our usual dummy coefficient for a non-zero intercept. Step6: Benchmark varying $m$ Let's benchmark the time to compute $x$ when the dimension $n$ of each point is fixed but the number $m$ of points varies. Step7: Exercise. How does the running time scale with $m$? Step8: Exercise. Now fix the number $m$ of observations but vary the dimension $n$. How does time scale with $n$? Complete the benchmark code below to find out. In particular, given the array N[ Step9: An online algorithm The empirical scaling appears to be pretty good, being roughly linear in $m$ or at worst quadratic in $n$. But there is still a downside in time and storage Step10: Recall that we need a value for $\phi$, for which we have an upper-bound of $\lambda_{\mathrm{max}}(A^TA)$. Let's cheat by computing it explicitly, even though in practice we would need to do something different. Step11: Exercise. Implement the online LMS algorithm in the code cell below where indicated. It should produce a final parameter estimate, x_lms, as a column vector. In addition, the skeleton code below uses rel_diff() to record the relative difference between the estimate and the true vector, storing the $k$-th relative difference in rel_diffs[k]. Doing so will allow you to see the convergence behavior of the method. Lastly, to help you out, we've defined a constant in terms of $\lambda_{\mathrm{max}}(A^TA)$ that you can use for $\phi$. In practice, you would only maintain the current estimate, or maybe just a few recent estimates, rather than all of them. Since we want to inspect these vectors later, let's just store them all. Step12: Let's compare the true coefficients against the estimates, both from the batch algorithm and the online algorithm. Step13: Let's also compute the relative differences between each estimate X[ Step14: Finally, if the dimension is d=1, let's go ahead and do a sanity-check regression fit plot.
Python Code: import numpy as np import matplotlib.pyplot as plt %matplotlib inline Explanation: CSE 6040, Fall 2015 [24]: "Online" regression This notebook continues the linear regression problem from last time, but asks about a method that can estimate the regression coefficients when you only get to see samples "one-at-a-time." We refer to such a fitting procedure as being "online" (or "incrementally"), rather than "offline" (or "batched"). End of explanation def generate_model (d): Returns a set of (random) d+1 linear model coefficients. return np.random.rand (d+1, 1) def generate_data (m, x, sigma=1.0/(2**0.5)): Generates 'm' noisy observations for a linear model whose predictor (non-intercept) coefficients are given in 'x'. Decrease 'sigma' to decrease the amount of noise. assert (type (x) is np.ndarray) and (x.ndim == 2) and (x.shape[1] == 1) n = len (x) A = np.random.rand (m, n) A[:, 0] = 1.0 b = A.dot (x) + sigma*np.random.randn (m, 1) return (A, b) def estimate_coeffs (A, b): Solves Ax=b by a linear least squares method. result = np.linalg.lstsq (A, b) x = result[0] return x def rel_diff (x, y, ord=2): Computes ||x-y|| / ||y||. Uses 2-norm by default; override by setting 'ord'. return np.linalg.norm (x - y, ord=ord) / np.linalg.norm (y, ord=ord) # Demo the above routines for a 2-D dataset. m = 50 x_true = generate_model (1) (A, b) = generate_data (m, x_true, sigma=0.1) print A.shape print x_true.shape print b.shape print "Condition number of the data matrix: ", np.linalg.cond (A) print "True model coefficients:", x_true.T x = estimate_coeffs (A, b) print "Estimated model coefficients:", x.T print "Relative error in the coefficients:", rel_diff (x, x_true) fig = plt.figure() ax1 = fig.add_subplot(111) ax1.plot (A[:, 1], b, 'b+') # Noisy observations ax1.plot (A[:, 1], A.dot (x), 'r*') # Fit ax1.plot (A[:, 1], A.dot (x_true), 'go') # True solution Explanation: Scalability with the problem size To start, here is some code to help generate synthetic problems of a certain size, namely, $m \times (d+1)$, where $m$ is the number of observations and $d$ the number of predictors. The $+1$ comes from our usual dummy coefficient for a non-zero intercept. End of explanation # Benchmark, as 'm' varies: n = 32 # dimension M = [100, 1000, 10000, 100000, 1000000] times = [0.] * len (M) for (i, m) in enumerate (M): x_true = generate_model (n) (A, b) = generate_data (m, x_true, sigma=0.1) t = %timeit -o estimate_coeffs (A, b) times[i] = t.best Explanation: Benchmark varying $m$ Let's benchmark the time to compute $x$ when the dimension $n$ of each point is fixed but the number $m$ of points varies. End of explanation t_linear = [times[0]/M[0]*m for m in M] fig = plt.figure() ax1 = fig.add_subplot(111) ax1.loglog (M, times, 'bo') ax1.loglog (M, t_linear, 'r--') Explanation: Exercise. How does the running time scale with $m$? End of explanation N = [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 3072] m = 5000 times = [0.] * len (N) # @YOUSE: Implement a benchmark to compute the time, # `times[i]`, to execute a problem of size `N[i]`. for (i, n) in enumerate (N): pass t_linear = [times[0]/N[0]*n for n in N] t_quadratic = [times[0]/N[0]/N[0]*n*n for n in N] fig = plt.figure() ax1 = fig.add_subplot(111) ax1.loglog (N, times, 'bo') ax1.loglog (N, t_linear, 'r--') ax1.loglog (N, t_quadratic, 'g--') Explanation: Exercise. Now fix the number $m$ of observations but vary the dimension $n$. How does time scale with $n$? Complete the benchmark code below to find out. 
In particular, given the array N[:], compute an array, times[:], such that times[i] is the running time for a problem of size m$\times$(N[i]+1). Hint: You can basically copy and modify the preceding benchmark. Also, note that the code cell following the one immediately below will plot your results against $\mathcal{O}(n)$ and $\mathcal{O}(n^2)$. End of explanation m = 100000 d = 1 x_true = generate_model (d) (A, b) = generate_data (m, x_true, sigma=0.1) print "Condition number of the data matrix: ", np.linalg.cond (A) x = estimate_coeffs (A, b) e_rel = rel_diff (x, x_true) print "Relative error:", e_rel Explanation: An online algorithm The empirical scaling appears to be pretty good, being roughly linear in $m$ or at worst quadratic in $n$. But there is still a downside in time and storage: each time there is a change in the data, you appear to need to form the data matrix all over again and recompute the solution from scratch, possibly touching the entire data set again! This approach, which requires the full data, is often referred to as a batched or offline procedure. This begs the question, is there a way to incrementally update the model coefficients whenever a new data point, or perhaps a small batch of new data points, arrives? Such a procedure would be considered incremental or online, rather than batched or offline. Setup: Key assumptions and main goal In the discussion that follows, assume that you only get to see the observations one-at-a-time. Let $(b_k, a_k^T)$ denote the current observation. (Relative to our previous notation, this tuple is just element $k$ of $b$ and row $k$ of $A$.) Additionally, assume that, at the time the $k$-th observation arrives, you start with a current estimate of the parameters, $\hat{x}k$, which is a vector. If for whatever reason you need to refer to element $i$ of that vector, use $\hat{x}{i,k}$. You will then compute a new estimate, $\hat{x}{k+1}$ using $\hat{x}_k$ and $(b_k, a_k^T)$. For the discussion below, further assume that you throw out $\hat{x}_k$ once you have $\hat{x}{k+1}$. As for your goal, recall that in the batch setting you start with all the observations, $(b, A)$. From this starting point, you may estimate the linear regression model's parameters, $x$, by solving $Ax=b$. In the online setting, you compute estimates one at a time. After seeing all $m$ observations in $A$, your goal is to compute an $\hat{x}_{m-1} \approx x$. An initial idea Indeed, there is a technique from the signal processing literature that we can apply to the linear regression problem, known as the least mean square (LMS) algorithm. Before describing it, let's start with an initial idea. Suppose that you have a current estimate of the parameters, $x_k$, when you get a new sample, $(b_k, a_k^T)$. The error in your prediction will be, $$b_k - a_k^T \hat{x}_k.$$ Ideally, this error would be zero. So, let's ask if there exists a correction, $\Delta_k$, such that $$ \begin{array}{rrcl} & b_k - a_k^T (\hat{x}_k + \Delta_k) & = & 0 \ \iff & b_k - a_k^T \hat{x}_k & = & a_k^T \Delta_k \end{array} $$ Then, you could compute a new estimate of the parameter by $\hat{x}_{k+1} = \hat{x}_k + \Delta_k$. This idea has a major flaw, which we will discuss below. But before we do, please try the following exercise. Exercise. Verify that the following choice of $\Delta_k$ would make the preceding equation true. $$ \begin{array}{rcl} \Delta_k & = & \frac{a_k}{\|a_k\|_2^2} (b_k - a_k^T \hat{x}_k). 
\end{array} $$ Refining (or rather, "hacking") the basic idea: The least mean square (LMS) procedure The basic idea sketched above has at least one major flaw: the choice of $\Delta_k$ might allow you to correctly predicts $b_k$ from $a_k$ and the new estimate $\hat{x}{k+1} = \hat{x}_k + \Delta_k$, but there is no guarantee that this new estimate $\hat{x}{k+1}$ preserves the quality of predictions made at all previous iterations! There are a number of ways to deal with this problem, which includes carrying out an update with respect to some (or all) previous data. However, there is also a simpler "hack" that, though it might require some parameter tuning, can be made to work in practice. That hack is as follows. Rather than using $\Delta_k$ as computed above, let's compute a different update that has a "fudge" factor, $\phi$: $$ \begin{array}{rrcl} & \hat{x}_{k+1} & = & \hat{x}_k + \Delta_k \ \mbox{where} & \Delta_k & = & \phi \cdot a_k (b_k - a_k^T \hat{x}_k). \end{array} $$ A big question is how to choose $\phi$. There is some analysis out there that can help. We will just state the results of this analysis without proof. Let $\lambda_{\mathrm{max}}(A^TA)$ be the largest eigenvalue of $A^TA$. The result is that as the number of samples $m \rightarrow \infty$, any choice of $\phi$ that satisfies the following condition will eventually converge to the best least-squares estimator of $x$, that is, the estimate of $x$ you would have gotten by solving the least squares with all the data. $$ 0 < \phi < \frac{2}{\lambda_{\mathrm{max}}(A^TA)}. $$ This condition is not very satisfying, because you cannot really know $\lambda_{\mathrm{max}}(A^TA)$ until you've seen all the data, whereas we would like to apply this procedure online as the data arrive. Nevertheless, in practice you can imagine hybrid schemes that, given a batch of data points, use the QR fitting procedure to get a starting estimate for $x$ as well as to estimate a value of $\phi$ to use for all future updates. Summary of the LMS algorithm To summarize, the algorithm is as follows: * Choose any initial guess, $x_0$, such as $x_0 \leftarrow 0$. * For each observation $(b_k, a_k^T)$, do the update: $x_{k+1} \leftarrow x_k + \Delta_k$, where $\Delta_k = \phi \cdot a_k (b_k - a_k^T x_k)$. Trying out the LMS idea Now you should implement the LMS algorithm and see how it behaves. To start, let's generate an initial 1-D problem (2 regression coefficients, a slope and intercept), and solve it using the batch procedure. End of explanation LAMBDA_MAX = max (np.linalg.eigvals (A.T.dot (A))) print LAMBDA_MAX Explanation: Recall that we need a value for $\phi$, for which we have an upper-bound of $\lambda_{\mathrm{max}}(A^TA)$. Let's cheat by computing it explicitly, even though in practice we would need to do something different. End of explanation PHI = 1.99 / LAMBDA_MAX # Fudge factor rel_diffs = np.zeros ((m+1, 1)) x_k = np.zeros ((d+1)) for k in range (m): rel_diffs[k] = rel_diff (x_k, x_true) # @YOUSE: Implement the online LMS algorithm. # Use (b[k], A[k, :]) as the k-th observation. pass x_lms = x_k rel_diffs[m] = rel_diff (x_lms, x_true) Explanation: Exercise. Implement the online LMS algorithm in the code cell below where indicated. It should produce a final parameter estimate, x_lms, as a column vector. In addition, the skeleton code below uses rel_diff() to record the relative difference between the estimate and the true vector, storing the $k$-th relative difference in rel_diffs[k]. 
Doing so will allow you to see the convergence behavior of the method. Lastly, to help you out, we've defined a constant in terms of $\lambda_{\mathrm{max}}(A^TA)$ that you can use for $\phi$. In practice, you would only maintain the current estimate, or maybe just a few recent estimates, rather than all of them. Since we want to inspect these vectors later, let's just store them all. End of explanation print x_true.T print x.T print x_lms.T Explanation: Let's compare the true coefficients against the estimates, both from the batch algorithm and the online algorithm. End of explanation plt.plot (range (len (rel_diffs)), rel_diffs) Explanation: Let's also compute the relative differences between each estimate X[:, k] and the true coefficients x_true, measured in the two-norm, to see if the estimate is converging to the truth. End of explanation STEP = A.shape[0] / 500 if d == 1: fig = plt.figure() ax1 = fig.add_subplot(111) ax1.plot (A[::STEP, 1], b[::STEP], 'b+') # blue - data ax1.plot (A[::STEP, 1], A.dot (x_true)[::STEP], 'r*') # red - true ax1.plot (A[::STEP, 1], A.dot (x)[::STEP], 'go') # green - batch ax1.plot (A[::STEP, 1], A.dot (x_lms)[::STEP], 'mo') # magenta - pure LMS else: print "Plot is multidimensional; I live in Flatland, so I don't do that." Explanation: Finally, if the dimension is d=1, let's go ahead and do a sanity-check regression fit plot. End of explanation
5,997
Given the following text description, write Python code to implement the functionality described below step by step Description: 處理旅程資訊 先照之前的,讀取資料 Step1: 時間的格式固定 Step2: 先用慢動作來解析看看格式 先抓出第 0 筆資料的 TripInformation 看看要怎麼拆解這個字串,得到我們要的資料 Step3: 用迴圈來對前十筆資料做相同的事情 Step4: 偵測站 手冊附錄 https Step5: Q 查看一下內容,比方看國道五號 python node_data[node_data['編號'].str.startswith('05')] 畫圖看看 Step6: Q 試試看其他劃法,比方依照方向設定顏色 python colors = node_data.方向.apply({'S'
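The heart of the online exercise in this notebook is the LMS update x_{k+1} = x_k + phi * a_k * (b_k - a_k^T x_k). One possible completion of that loop is sketched here; it assumes A, b, d, m and the step size PHI are defined exactly as in the code below, and it is an illustration rather than the notebook's official solution.

x_k = np.zeros((d+1, 1))
for k in range(m):
    a_k = A[k:k+1, :].T             # current observation as a column vector
    r_k = b[k] - a_k.T.dot(x_k)     # prediction error on this observation
    x_k = x_k + PHI * a_k * r_k     # LMS correction step
print(x_k.T)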
Python Code: import tqdm import tarfile import pandas from urllib.request import urlopen # 檔案名稱格式 filename_format="M06A_{year:04d}{month:02d}{day:02d}.tar.gz".format xz_filename_format="xz/M06A_{year:04d}{month:02d}{day:02d}.tar.xz".format csv_format = "M06A/{year:04d}{month:02d}{day:02d}/{hour:02d}/TDCS_M06A_{year:04d}{month:02d}{day:02d}_{hour:02d}0000.csv".format # 打開剛才下載的檔案試試 data_config ={"year":2016, "month":12, "day":18} tar = tarfile.open(filename_format(**data_config), 'r') # 如果沒有下載,可以試試看 xz 檔案 #data_dconfig ={"year":2016, "month":11, "day":18} #tar = tarfile.open(xz_filename_format(**data_config), 'r') # 設定欄位名稱 M06A_fields = ['VehicleType', 'DetectionTime_O','GantryID_O', 'DetectionTime_D','GantryID_D ', 'TripLength', 'TripEnd', 'TripInformation'] # 打開裡面 10 點鐘的資料 csv = tar.extractfile(csv_format(hour=10, **data_config)) # 讀進資料 data = pandas.read_csv(csv, names=M06A_fields) # 檢查異常的資料 print("異常資料數:", data[data.TripEnd == 'N'].shape[0]) # 去除異常資料 data = data[data.TripEnd == 'Y'] # 只保留 TripInformation 和 VehicleType data = data[['VehicleType', "TripInformation"]] # 看前五筆 data.head(5) Explanation: 處理旅程資訊 先照之前的,讀取資料 End of explanation import datetime # 用來解析時間格式 def strptime(x): return datetime.datetime.strptime(x, "%Y-%m-%d %H:%M:%S") Explanation: 時間的格式固定 End of explanation data.iloc[0].TripInformation Explanation: 先用慢動作來解析看看格式 先抓出第 0 筆資料的 TripInformation 看看要怎麼拆解這個字串,得到我們要的資料 End of explanation # 合在一起看看 for idx, row in data.head(10).iterrows(): # 處理過程 # 節省記憶體 del data Explanation: 用迴圈來對前十筆資料做相同的事情 End of explanation node_data_url = "http://www.freeway.gov.tw/Upload/DownloadFiles/%e5%9c%8b%e9%81%93%e8%a8%88%e8%b2%bb%e9%96%80%e6%9e%b6%e5%ba%a7%e6%a8%99%e5%8f%8a%e9%87%8c%e7%a8%8b%e7%89%8c%e5%83%b9%e8%a1%a8104.09.04%e7%89%88.csv" node_data = pandas.read_csv(urlopen(node_data_url), encoding='big5', header=1) # 簡單清理資料 node_data = node_data[node_data["方向"].apply(lambda x:x in 'NS')] node_data.head(10) Explanation: 偵測站 手冊附錄 https://zh.wikipedia.org/wiki/%E9%AB%98%E9%80%9F%E5%85%AC%E8%B7%AF%E9%9B%BB%E5%AD%90%E6%94%B6%E8%B2%BB%E7%B3%BB%E7%B5%B1_(%E8%87%BA%E7%81%A3)#.E6.94.B6.E8.B2.BB.E9.96.80.E6.9E.B6 交流道服務區里程 http://www.freeway.gov.tw/Publish.aspx?cnid=1906 門架資訊 https://www.freeway.gov.tw/Upload/DownloadFiles/%e5%9c%8b%e9%81%93%e8%a8%88%e8%b2%bb%e9%96%80%e6%9e%b6%e5%ba%a7%e6%a8%99%e5%8f%8a%e9%87%8c%e7%a8%8b%e7%89%8c%e5%83%b9%e8%a1%a8104.09.04%e7%89%88.csv End of explanation %matplotlib inline node_data['經度(東經)'] = node_data['經度(東經)'].astype(float) node_data['緯度(北緯)'] = node_data['緯度(北緯)'].astype(float) node_data.plot.scatter(x='經度(東經)', y='緯度(北緯)') from PIL import Image import numpy as np import matplotlib.pyplot as plt # 網路上的台灣地圖,有經緯度 taiwan_img_url="http://gallery.mjes.ntpc.edu.tw/gallery2/main.php?g2_view=core.DownloadItem&g2_itemId=408&g2_serialNumber=1" taiwan_img = Image.open(urlopen(taiwan_img_url)) taiwan_img # 查看編號的前置碼 set(node_data['編號'].str[:3].tolist()) # 依照路線編號 cfunc = {'01F':"green", '01H':"blue", '03A':"yellow", '03F':"red", '05F':"purple"}.get colors = node_data['編號'].str[:3].apply(cfunc) fig = plt.gcf() fig.set_size_inches(8,8) extent=[118.75,123.05,21.45,25.75] plt.xlim(*extent[:2]) plt.ylim(*extent[2:]) plt.scatter(node_data['經度(東經)'], node_data['緯度(北緯)'], c=colors, alpha=1) plt.imshow(np.array(taiwan_img), extent=extent); Explanation: Q 查看一下內容,比方看國道五號 python node_data[node_data['編號'].str.startswith('05')] 畫圖看看 End of explanation node_data[node_data.編號=="03F-318.7S"] node_data[node_data.編號=="03F-321.1S"] Explanation: Q 試試看其他劃法,比方依照方向設定顏色 python colors = 
node_data.方向.apply({'S':'red', 'N':'blue'}.get).tolist() 或只畫國道一號、改變 mark。 End of explanation
5,998
Given the following text description, write Python code to implement the functionality described below step by step Description: News classification with topic models in gensim News article classification is a task which is performed on a huge scale by news agencies all over the world. We will be looking into how topic modeling can be used to accurately classify news articles into different categories such as sports, technology, politics etc. Our aim in this tutorial is to come up with some topic model which can come up with topics that can easily be interpreted by us. Such a topic model can be used to discover hidden structure in the corpus and can also be used to determine the membership of a news article into one of the topics. For this tutorial, we will be using the Lee corpus which is a shortened version of the Lee Background Corpus. The shortened version consists of 300 documents selected from the Australian Broadcasting Corporation's news mail service. It consists of texts of headline stories from around the year 2000-2001. Accompanying slides can be found here. Requirements In this tutorial we look at how different topic models can be easily created using gensim. Following are the dependencies for this tutorial Step2: Analysing our corpus. - The first document talks about a bushfire that had occured in New South Wales. - The second talks about conflict between India and Pakistan in Kashmir. - The third talks about road accidents in the New South Wales area. - The fourth one talks about Argentina's economic and political crisis during that time. - The last one talks about the use of drugs by midwives in a Sydney hospital. Our final topic model should be giving us keywords which we can easily interpret and make a small summary out of. Without this the topic model cannot be of much practical use. Step4: Preprocessing our data. Remember Step5: Finalising our dictionary and corpus Step6: Topic modeling with LSI This is a useful topic modeling algorithm in that it can rank topics by itself. Thus it outputs topics in a ranked order. However it does require a num_topics parameter (set to 200 by default) to determine the number of latent dimensions after the SVD. Step7: Topic modeling with HDP An HDP model is fully unsupervised. It can also determine the ideal number of topics it needs through posterior inference. Step8: Topic modeling using LDA This is one the most popular topic modeling algorithms today. It is a generative model in that it assumes each document is a mixture of topics and in turn, each topic is a mixture of words. To understand it better you can watch this lecture by David Blei. Let's choose 10 topics to initialize this. Step9: pyLDAvis is a great way to visualize an LDA model. To summarize in short, the area of the circles represent the prevelance of the topic. The length of the bars on the right represent the membership of a term in a particular topic. pyLDAvis is based on this paper. Step11: Finding out the optimal number of topics Introduction to topic coherence Step13: LDA as LSI One of the problem with LDA is that if we train it on a large number of topics, the topics get "lost" among the numbers. Let us see if we can dig out the best topics from the best LDA model we can produce. The function below can be used to control the quality of the LDA model we produce. 
Step14: Inference We can clearly see below that the first topic is about cinema, second is about email malware, third is about the land which was given back to the Larrakia aboriginal community of Australia in 2000. Then there's one about Australian cricket. LDA as LSI has worked wonderfully in finding out the best topics from within LDA. Step16: Evaluating all the topic models Any topic model which can come up with topic terms can be plugged into the coherence pipeline. You can even plug in an NMF topic model created with scikit-learn. Step17: Customizing the topic coherence measure Till now we only used the c_v coherence measure. There are others such as u_mass, c_uci, c_npmi. All of these calculate coherence in a different way. c_v is found to be most in line with human ratings but can be much slower than u_mass since it uses a sliding window over the texts. Making your own coherence measure Let's modify c_uci to use s_one_pre instead of s_one_one segmentation Step18: To get topics out of the topic model Step19: Step 1 Step20: Step 2 Step21: Step 3 Step22: Step 4
Python Code: import os import re import operator import matplotlib.pyplot as plt import warnings import gensim import numpy as np warnings.filterwarnings('ignore') # Let's not pay heed to them right now from gensim.models import CoherenceModel, LdaModel, LsiModel, HdpModel from gensim.models.wrappers import LdaMallet from gensim.corpora import Dictionary from pprint import pprint %matplotlib inline test_data_dir = '{}'.format(os.sep).join([gensim.__path__[0], 'test', 'test_data']) lee_train_file = test_data_dir + os.sep + 'lee_background.cor' Explanation: News classification with topic models in gensim News article classification is a task which is performed on a huge scale by news agencies all over the world. We will be looking into how topic modeling can be used to accurately classify news articles into different categories such as sports, technology, politics etc. Our aim in this tutorial is to come up with some topic model which can come up with topics that can easily be interpreted by us. Such a topic model can be used to discover hidden structure in the corpus and can also be used to determine the membership of a news article into one of the topics. For this tutorial, we will be using the Lee corpus which is a shortened version of the Lee Background Corpus. The shortened version consists of 300 documents selected from the Australian Broadcasting Corporation's news mail service. It consists of texts of headline stories from around the year 2000-2001. Accompanying slides can be found here. Requirements In this tutorial we look at how different topic models can be easily created using gensim. Following are the dependencies for this tutorial: - Gensim Version >=0.13.1 would be preferred since we will be using topic coherence metrics extensively here. - matplotlib - Patterns library; Gensim uses this for lemmatization. - nltk.stopwords - pyLDAVis We will be playing around with 4 different topic models here: - LSI (Latent Semantic Indexing) - HDP (Hierarchical Dirichlet Process) - LDA (Latent Dirichlet Allocation) - LDA (tweaked with topic coherence to find optimal number of topics) and - LDA as LSI with the help of topic coherence metrics First we'll fit those topic models on our existing data then we'll compare each against the other and see how they rank in terms of human interpretability. All can be found in gensim and can be easily used in a plug-and-play fashion. We will tinker with the LDA model using the newly added topic coherence metrics in gensim based on this paper by Roeder et al and see how the resulting topic model compares with the exsisting ones. End of explanation with open(lee_train_file) as f: for n, l in enumerate(f): if n < 5: print([l]) def build_texts(fname): Function to build tokenized texts from file Parameters: ---------- fname: File to be read Returns: ------- yields preprocessed line with open(fname) as f: for line in f: yield gensim.utils.simple_preprocess(line, deacc=True, min_len=3) train_texts = list(build_texts(lee_train_file)) len(train_texts) Explanation: Analysing our corpus. - The first document talks about a bushfire that had occured in New South Wales. - The second talks about conflict between India and Pakistan in Kashmir. - The third talks about road accidents in the New South Wales area. - The fourth one talks about Argentina's economic and political crisis during that time. - The last one talks about the use of drugs by midwives in a Sydney hospital. 
Our final topic model should be giving us keywords which we can easily interpret and make a small summary out of. Without this the topic model cannot be of much practical use. End of explanation bigram = gensim.models.Phrases(train_texts) # for bigram collocation detection bigram[['new', 'york', 'example']] from gensim.utils import lemmatize from nltk.corpus import stopwords stops = set(stopwords.words('english')) # nltk stopwords list def process_texts(texts): Function to process texts. Following are the steps we take: 1. Stopword Removal. 2. Collocation detection. 3. Lemmatization (not stem since stemming can reduce the interpretability). Parameters: ---------- texts: Tokenized texts. Returns: ------- texts: Pre-processed tokenized texts. texts = [[word for word in line if word not in stops] for line in texts] texts = [bigram[line] for line in texts] texts = [[word.split('/')[0] for word in lemmatize(' '.join(line), allowed_tags=re.compile('(NN)'), min_length=3)] for line in texts] return texts train_texts = process_texts(train_texts) train_texts[5:6] Explanation: Preprocessing our data. Remember: Garbage In Garbage Out "NLP is 80% preprocessing." -Lev Konstantinovskiy This is the single most important step in setting up a good topic modeling system. If the preprocessing is not good, the algorithm can't do much since we would be feeding it a lot of noise. In this tutorial, we will be filtering out the noise using the following steps in this order for each line: 1. Stopword removal using NLTK's english stopwords dataset. 2. Bigram collocation detection (frequently co-occuring tokens) using gensim's Phrases. This is our first attempt to find some hidden structure in the corpus. You can even try trigram collocation detection. 3. Lemmatization (using gensim's lemmatize) to only keep the nouns. Lemmatization is generally better than stemming in the case of topic modeling since the words after lemmatization still remain understable. However, generally stemming might be preferred if the data is being fed into a vectorizer and isn't intended to be viewed. End of explanation dictionary = Dictionary(train_texts) corpus = [dictionary.doc2bow(text) for text in train_texts] Explanation: Finalising our dictionary and corpus End of explanation lsimodel = LsiModel(corpus=corpus, num_topics=10, id2word=dictionary) lsimodel.show_topics(num_topics=5) # Showing only the top 5 topics lsitopics = lsimodel.show_topics(formatted=False) Explanation: Topic modeling with LSI This is a useful topic modeling algorithm in that it can rank topics by itself. Thus it outputs topics in a ranked order. However it does require a num_topics parameter (set to 200 by default) to determine the number of latent dimensions after the SVD. End of explanation hdpmodel = HdpModel(corpus=corpus, id2word=dictionary) hdpmodel.show_topics() hdptopics = hdpmodel.show_topics(formatted=False) Explanation: Topic modeling with HDP An HDP model is fully unsupervised. It can also determine the ideal number of topics it needs through posterior inference. End of explanation ldamodel = LdaModel(corpus=corpus, num_topics=10, id2word=dictionary) Explanation: Topic modeling using LDA This is one the most popular topic modeling algorithms today. It is a generative model in that it assumes each document is a mixture of topics and in turn, each topic is a mixture of words. To understand it better you can watch this lecture by David Blei. Let's choose 10 topics to initialize this. 
End of explanation import pyLDAvis.gensim pyLDAvis.enable_notebook() pyLDAvis.gensim.prepare(ldamodel, corpus, dictionary) ldatopics = ldamodel.show_topics(formatted=False) Explanation: pyLDAvis is a great way to visualize an LDA model. To summarize in short, the area of the circles represent the prevelance of the topic. The length of the bars on the right represent the membership of a term in a particular topic. pyLDAvis is based on this paper. End of explanation def evaluate_graph(dictionary, corpus, texts, limit): Function to display num_topics - LDA graph using c_v coherence Parameters: ---------- dictionary : Gensim dictionary corpus : Gensim corpus limit : topic limit Returns: ------- lm_list : List of LDA topic models c_v : Coherence values corresponding to the LDA model with respective number of topics c_v = [] lm_list = [] for num_topics in range(1, limit): lm = LdaModel(corpus=corpus, num_topics=num_topics, id2word=dictionary) lm_list.append(lm) cm = CoherenceModel(model=lm, texts=texts, dictionary=dictionary, coherence='c_v') c_v.append(cm.get_coherence()) # Show graph x = range(1, limit) plt.plot(x, c_v) plt.xlabel("num_topics") plt.ylabel("Coherence score") plt.legend(("c_v"), loc='best') plt.show() return lm_list, c_v %%time lmlist, c_v = evaluate_graph(dictionary=dictionary, corpus=corpus, texts=train_texts, limit=10) pyLDAvis.gensim.prepare(lmlist[2], corpus, dictionary) lmtopics = lmlist[5].show_topics(formatted=False) Explanation: Finding out the optimal number of topics Introduction to topic coherence: <img src="https://rare-technologies.com/wp-content/uploads/2016/06/pipeline.png"> Topic coherence in essence measures the human interpretability of a topic model. Traditionally perplexity has been used to evaluate topic models however this does not correlate with human annotations at times. Topic coherence is another way to evaluate topic models with a much higher guarantee on human interpretability. Thus this can be used to compare different topic models among many other use-cases. Here's a short blog I wrote explaining topic coherence: What is topic coherence? End of explanation def ret_top_model(): Since LDAmodel is a probabilistic model, it comes up different topics each time we run it. To control the quality of the topic model we produce, we can see what the interpretability of the best topic is and keep evaluating the topic model until this threshold is crossed. Returns: ------- lm: Final evaluated topic model top_topics: ranked topics in decreasing order. List of tuples top_topics = [(0, 0)] while top_topics[0][1] < 0.97: lm = LdaModel(corpus=corpus, id2word=dictionary) coherence_values = {} for n, topic in lm.show_topics(num_topics=-1, formatted=False): topic = [word for word, _ in topic] cm = CoherenceModel(topics=[topic], texts=train_texts, dictionary=dictionary, window_size=10) coherence_values[n] = cm.get_coherence() top_topics = sorted(coherence_values.items(), key=operator.itemgetter(1), reverse=True) return lm, top_topics lm, top_topics = ret_top_model() print(top_topics[:5]) Explanation: LDA as LSI One of the problem with LDA is that if we train it on a large number of topics, the topics get "lost" among the numbers. Let us see if we can dig out the best topics from the best LDA model we can produce. The function below can be used to control the quality of the LDA model we produce. 
End of explanation pprint([lm.show_topic(topicid) for topicid, c_v in top_topics[:10]]) lda_lsi_topics = [[word for word, prob in lm.show_topic(topicid)] for topicid, c_v in top_topics] Explanation: Inference We can clearly see below that the first topic is about cinema, second is about email malware, third is about the land which was given back to the Larrakia aboriginal community of Australia in 2000. Then there's one about Australian cricket. LDA as LSI has worked wonderfully in finding out the best topics from within LDA. End of explanation lsitopics = [[word for word, prob in topic] for topicid, topic in lsitopics] hdptopics = [[word for word, prob in topic] for topicid, topic in hdptopics] ldatopics = [[word for word, prob in topic] for topicid, topic in ldatopics] lmtopics = [[word for word, prob in topic] for topicid, topic in lmtopics] lsi_coherence = CoherenceModel(topics=lsitopics[:10], texts=train_texts, dictionary=dictionary, window_size=10).get_coherence() hdp_coherence = CoherenceModel(topics=hdptopics[:10], texts=train_texts, dictionary=dictionary, window_size=10).get_coherence() lda_coherence = CoherenceModel(topics=ldatopics, texts=train_texts, dictionary=dictionary, window_size=10).get_coherence() lm_coherence = CoherenceModel(topics=lmtopics, texts=train_texts, dictionary=dictionary, window_size=10).get_coherence() lda_lsi_coherence = CoherenceModel(topics=lda_lsi_topics[:10], texts=train_texts, dictionary=dictionary, window_size=10).get_coherence() def evaluate_bar_graph(coherences, indices): Function to plot bar graph. coherences: list of coherence values indices: Indices to be used to mark bars. Length of this and coherences should be equal. assert len(coherences) == len(indices) n = len(coherences) x = np.arange(n) plt.bar(x, coherences, width=0.2, tick_label=indices, align='center') plt.xlabel('Models') plt.ylabel('Coherence Value') evaluate_bar_graph([lsi_coherence, hdp_coherence, lda_coherence, lm_coherence, lda_lsi_coherence], ['LSI', 'HDP', 'LDA', 'LDA_Mod', 'LDA_LSI']) Explanation: Evaluating all the topic models Any topic model which can come up with topic terms can be plugged into the coherence pipeline. You can even plug in an NMF topic model created with scikit-learn. End of explanation from gensim.topic_coherence import (segmentation, probability_estimation, direct_confirmation_measure, indirect_confirmation_measure, aggregation) from gensim.matutils import argsort from collections import namedtuple make_pipeline = namedtuple('Coherence_Measure', 'seg, prob, conf, aggr') measure = make_pipeline(segmentation.s_one_one, probability_estimation.p_boolean_sliding_window, direct_confirmation_measure.log_ratio_measure, aggregation.arithmetic_mean) Explanation: Customizing the topic coherence measure Till now we only used the c_v coherence measure. There are others such as u_mass, c_uci, c_npmi. All of these calculate coherence in a different way. c_v is found to be most in line with human ratings but can be much slower than u_mass since it uses a sliding window over the texts. 
Making your own coherence measure Let's modify c_uci to use s_one_pre instead of s_one_one segmentation End of explanation topics = [] for topic in lm.state.get_lambda(): bestn = argsort(topic, topn=10, reverse=True) topics.append(bestn) Explanation: To get topics out of the topic model: End of explanation # Perform segmentation segmented_topics = measure.seg(topics) Explanation: Step 1: Segmentation End of explanation # Since this is a window-based coherence measure we will perform window based prob estimation per_topic_postings, num_windows = measure.prob(texts=train_texts, segmented_topics=segmented_topics, dictionary=dictionary, window_size=2) Explanation: Step 2: Probability estimation End of explanation confirmed_measures = measure.conf(segmented_topics, per_topic_postings, num_windows, normalize=False) Explanation: Step 3: Confirmation Measure End of explanation print(measure.aggr(confirmed_measures)) Explanation: Step 4: Aggregation End of explanation
5,999
Given the following text description, write Python code to implement the functionality described below step by step Description: Imports Step1: Simple Mock Data Lets create a simple mock dataset with one independent variable and one dependent variable with a little noise. Step2: Boston Housing Dataset feautres Step3: Center and Normalize Step4: Statsmodels Linear Regression Step5: Linear Regression, Keras
Python Code: import pandas as pd import numpy as np import tensorflow as tf from tensorflow.contrib import keras from sklearn import datasets from sklearn import linear_model import statsmodels.api as sm import matplotlib.pyplot as plt %matplotlib inline import seaborn as sns Explanation: Imports End of explanation Nsamp = 50 Nfeatures = 1 xarr = np.linspace(-0.5, 0.5, Nsamp) np.random.seed(83749) beta_0 = -2.0 beta_1 = 4.3 yarr = (beta_0 + beta_1 * xarr) + (np.random.normal(size=Nsamp) * 0.5) mdl = linear_model.LinearRegression(fit_intercept=False) mdl = mdl.fit(np.c_[np.ones(Nsamp), xarr], yarr) mdl.coef_ fig, ax = plt.subplots(figsize=(5,5)) plt.scatter(xarr, yarr, s=10, color='blue') plt.plot(xarr, mdl.coef_[0] + mdl.coef_[1] * xarr, color='red') ph_x = tf.placeholder(tf.float32, [None, Nfeatures], name='features') ph_y = tf.placeholder(tf.float32, [None, 1], name='output') ph_x, ph_y # Set model weights v_W = tf.Variable(tf.random_normal([Nfeatures, 1]), name='weights') v_b = tf.Variable(tf.zeros([1]), name='bias') v_z = tf.matmul(ph_x, v_W) + v_b cost_1 = tf.squared_difference(v_z, ph_y) cost_2 = tf.reduce_mean(cost_1) learning_rate=0.1 train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost_2) # Construct model and encapsulating all ops into scopes, making # Tensorboard's Graph visualization more convenient #with tf.name_scope('Model'): # # Model # pred = tf.matmul(x, W) + b # basic linear regression #with tf.name_scope('Loss'): # # Minimize error (mean squared error) # cost = tf.reduce_mean(-tf.reduce_sum(y - pred)*tf.log(pred), reduction_indices=1)) #with tf.name_scope('SGD'): # # Gradient Descent # optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost) #with tf.name_scope('Accuracy'): # # Accuracy # acc = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1)) # acc = tf.reduce_mean(tf.cast(acc, tf.float32)) init = tf.global_variables_initializer() merged = tf.summary.merge_all() # Launch the graph feed_dict = {ph_x: xarr.reshape(Nsamp, 1), ph_y: yarr.reshape(Nsamp,1)} with tf.Session() as sess: train_writer = tf.summary.FileWriter('/tmp/tensorflow/logs', sess.graph) sess.run(init) z_out = sess.run(v_z, feed_dict=feed_dict) cost_1_out = sess.run(cost_1, feed_dict=feed_dict) cost_2_out = sess.run(cost_2, feed_dict=feed_dict) for i in range(300): train_step_out = sess.run(train_step, feed_dict=feed_dict) W_out = sess.run(v_W, feed_dict=feed_dict) b_out = sess.run(v_b, feed_dict=feed_dict) print(W_out) print(b_out) Explanation: Simple Mock Data Lets create a simple mock dataset with one independent variable and one dependent variable with a little noise. 
End of explanation boston = datasets.load_boston() print(boston['DESCR']) features = pd.DataFrame(data=boston['data'], columns=boston['feature_names']) target = pd.DataFrame(data=boston['target'], columns=['MEDV']) features.head(5) target.head(5) hh = features.hist(figsize=(14,18)) Explanation: Boston Housing Dataset feautres: raw features variables in DataFrame target: raw target variable in DataFrame End of explanation from sklearn.preprocessing import StandardScaler scalerX = StandardScaler() scalerX.fit(features) dfXn = pd.DataFrame(data=scalerX.transform(features), columns=features.columns) scalerY = StandardScaler() scalerY.fit(target) dfYn = pd.DataFrame(data=scalerY.transform(target), columns=target.columns) dfXn.head(5) dfYn.head(5) Explanation: Center and Normalize End of explanation dfXn1 = dfXn.copy() dfXn1.insert(loc=0, column='intercept', value=1) results = sm.OLS(dfYn, dfXn1).fit() print(results.summary()) dfYn.max() target.max() plt.scatter(dfYn.values, results.fittedvalues.values) from sklearn import linear_model mdl = linear_model.LinearRegression(fit_intercept=False) mdl = mdl.fit(dfXn1.values, dfYn.values) print('n_params (statsmodels): ', len(results.params)) print('n params (sklearn linear): ', len(mdl.coef_.flatten())) print(results.params) print() print(mdl.coef_) np.all(np.abs(mdl.coef_ - results.params.values) < 1.0e-10) plt.scatter(dfYn.values, mdl.predict(dfXn1.values).flatten()) Explanation: Statsmodels Linear Regression End of explanation from keras.models import Sequential from keras.layers import Dense, InputLayer from keras.optimizers import SGD, Adam, RMSprop from keras.losses import mean_squared_error nfeatures = features.shape[1] model = Sequential() model.add(InputLayer(input_shape=(nfeatures,), name='input')) model.add(Dense(1, kernel_initializer='uniform', activation='linear', name='dense_1')) model.summary() weights_initial = model.get_weights() print('weights_initial - input nodes: \n', weights_initial[0]) print('weights_initial - bias node: ', weights_initial[1]) model.compile(optimizer=RMSprop(lr=0.001), loss='mean_squared_error') dfYn.shape model.set_weights(weights_initial) history = model.fit(dfXn.values, dfYn.values, epochs=5000, batch_size=dfYn.shape[0], verbose=0) plt.plot(history.history['loss']) model.get_weights() mdl.coef_ plt.scatter(model.get_weights()[0].flatten(), mdl.coef_.flatten()[1:]) fig, ax = plt.subplots(figsize=(10,10)) plt.scatter(dfYn.values, mdl.predict(dfXn1.values).flatten(), color='red', alpha=0.6, marker='o') plt.scatter(dfYn.values, model.predict(dfXn.values), color='blue', alpha=0.6, marker='+') # tf Graph Input # input data n_samples, n_features = features.shape x = tf.placeholder(tf.float32, [None, n_features], name='InputData') # output data y = tf.placeholder(tf.float32, [None, 1], name='TargetData') # Set model weights W = tf.Variable(tf.random_normal([n_features, 1]), name='Weights') b = tf.Variable(tf.zeros([1]), name='Bias') z = tf.matmul(x,W) + b cost_1 = tf.squared_difference(z,y) cost_2 = tf.reduce_mean(cost_1) learning_rate=0.1 train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost_2) # Construct model and encapsulating all ops into scopes, making # Tensorboard's Graph visualization more convenient #with tf.name_scope('Model'): # # Model # pred = tf.matmul(x, W) + b # basic linear regression #with tf.name_scope('Loss'): # # Minimize error (mean squared error) # cost = tf.reduce_mean(-tf.reduce_sum(y - pred)*tf.log(pred), reduction_indices=1)) #with tf.name_scope('SGD'): # # Gradient 
Descent # optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost) #with tf.name_scope('Accuracy'): # # Accuracy # acc = tf.equal(tf.argmax(pred, 1), tf.argmax(y, 1)) # acc = tf.reduce_mean(tf.cast(acc, tf.float32)) tf.Session().run(y, feed_dict={x: features.values, y: target.values}).shape init = tf.global_variables_initializer() # Launch the graph with tf.Session() as sess: sess.run(init) z_out = sess.run(z, feed_dict={x: features.values, y:target.values}) cost_1_out = sess.run(cost_1, feed_dict={x: features.values, y:target.values}) cost_2_out = sess.run(cost_2, feed_dict={x: features.values, y:target.values}) for i in range(100): train_step_out = sess.run(train_step, feed_dict={x: features.values, y:target.values}) print(cost_1_out[0:5,:]) print(cost_2_out) print(train_step_out) sess = tf.Session() sess.run(c) x y W b Explanation: Linear Regression, Keras End of explanation
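One practical footnote on the comparison in this entry: because both the features and the target were standardized with StandardScaler, every model here predicts in standardized units, and predictions have to be passed back through the target scaler to be read as median home values (in $1000s). A short sketch, reusing the scalerY, model and dfXn objects defined above:

pred_std = model.predict(dfXn.values)             # Keras predictions, standardized units
pred_medv = scalerY.inverse_transform(pred_std)   # back to the original MEDV scale
print(pred_medv[:5].ravel())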