Dataset columns: markdown (string), code (string), output (string), license (string), path (string), repo_name (string).
--- Question 2 Write the training loop
* Create the loss function. This should be a loss function suitable for multi-class classification.
* Create the metric accumulator. This should compute and store the accuracy of the model during training.
* Create the trainer with the `adam` optimizer and a learning rate of `0.002`.
* Write the training loop.
def train(network, training_dataloader, batch_size, epochs):
    """
    Should take an initialized network and train that network using data from the data loader.

    :param network: initialized gluon network to be trained
    :type network: gluon.Block
    :param training_dataloader: the training DataLoader that provides batches of data for every iteration
    :type training_dataloader: gluon.data.DataLoader
    :param batch_size: batch size for the DataLoader
    :type batch_size: int
    :param epochs: number of epochs to train for
    :type epochs: int
    :return: tuple of the trained network and the final training accuracy
    :rtype: (gluon.Block, float)
    """
    trainer = gluon.Trainer(network.collect_params(), 'adam', {'learning_rate': 0.002})
    metric = mx.metric.Accuracy()

    for epoch in range(epochs):
        train_loss = 0.
        for data, label in training_dataloader:
            with autograd.record():
                output = network(data)
                loss = mx.ndarray.softmax_cross_entropy(output, label)
            loss.backward()
            trainer.step(batch_size)
            train_loss += loss.mean().asscalar()
            metric.update(label, output)
        print(epoch, metric.get()[1])

    training_accuracy = metric.get()[1]
    return network, training_accuracy
_____no_output_____
MIT
Module_5_LeNet_on_MNIST (1).ipynb
vigneshb-it19/AWS-Computer-Vision-GluonCV
Let's define and initialize a network to test the train function.
net = gluon.nn.Sequential()
net.add(gluon.nn.Conv2D(channels=6, kernel_size=5, activation='relu'),
        gluon.nn.MaxPool2D(pool_size=2, strides=2),
        gluon.nn.Conv2D(channels=16, kernel_size=3, activation='relu'),
        gluon.nn.MaxPool2D(pool_size=2, strides=2),
        gluon.nn.Flatten(),
        gluon.nn.Dense(120, activation="relu"),
        gluon.nn.Dense(84, activation="relu"),
        gluon.nn.Dense(10))
net.initialize(init=init.Xavier())

n, ta = train(net, t, 128, 5)
assert ta >= .95

d, l = next(iter(v))
p = (n(d).argmax(axis=1))
assert (p.asnumpy() == l.asnumpy()).sum() / 128.0 > .95
0 0.93415 1 0.9572583333333333 2 0.9668111111111111 3 0.972375 4 0.97606
MIT
Module_5_LeNet_on_MNIST (1).ipynb
vigneshb-it19/AWS-Computer-Vision-GluonCV
--- Question 3 Write the validation loop
* Create the metric accumulator. This should compute and store the accuracy of the model on the validation set.
* Write the validation loop.
def validate(network, validation_dataloader):
    """
    Should compute the accuracy of the network on the validation set.

    :param network: initialized gluon network to be trained
    :type network: gluon.Block
    :param validation_dataloader: the validation DataLoader that provides batches of data for every iteration
    :type validation_dataloader: gluon.data.DataLoader
    :return: validation accuracy
    :rtype: float
    """
    val_acc = mx.metric.Accuracy()
    for data, label in validation_dataloader:
        output = network(data)
        val_acc.update(label, output)
    print(val_acc.get()[1])
    return val_acc.get()[1]

assert validate(n, v) > .95
_____no_output_____
MIT
Module_5_LeNet_on_MNIST (1).ipynb
vigneshb-it19/AWS-Computer-Vision-GluonCV
Callbacks and Multiple inputs
import pandas as pd import numpy as np %matplotlib inline import matplotlib.pyplot as plt from sklearn.preprocessing import scale from keras.optimizers import SGD from keras.layers import Dense, Input, concatenate, BatchNormalization from keras.callbacks import EarlyStopping, TensorBoard, ModelCheckpoint from keras.models import Model import keras.backend as K df = pd.read_csv("../data/titanic-train.csv") Y = df['Survived'] df.info() df.head() num_features = df[['Age', 'Fare', 'SibSp', 'Parch']].fillna(0) num_features.head() cat_features = pd.get_dummies(df[['Pclass', 'Sex', 'Embarked']].astype('str')) cat_features.head() X1 = scale(num_features.values) X2 = cat_features.values K.clear_session() # Numerical features branch inputs1 = Input(shape = (X1.shape[1],)) b1 = BatchNormalization()(inputs1) b1 = Dense(3, kernel_initializer='normal', activation = 'tanh')(b1) b1 = BatchNormalization()(b1) # Categorical features branch inputs2 = Input(shape = (X2.shape[1],)) b2 = Dense(8, kernel_initializer='normal', activation = 'relu')(inputs2) b2 = BatchNormalization()(b2) b2 = Dense(4, kernel_initializer='normal', activation = 'relu')(b2) b2 = BatchNormalization()(b2) b2 = Dense(2, kernel_initializer='normal', activation = 'relu')(b2) b2 = BatchNormalization()(b2) merged = concatenate([b1, b2]) preds = Dense(1, activation = 'sigmoid')(merged) # final model model = Model([inputs1, inputs2], preds) model.compile(loss = 'binary_crossentropy', optimizer = 'rmsprop', metrics = ['accuracy']) model.summary() outpath='/tmp/tensorflow_logs/titanic/' early_stopper = EarlyStopping(monitor='val_acc', patience=10) tensorboard = TensorBoard(outpath+'tensorboard/', histogram_freq=1) checkpointer = ModelCheckpoint(outpath+'weights_epoch_{epoch:02d}_val_acc_{val_acc:.2f}.hdf5', monitor='val_acc') # You may have to run this a couple of times if stuck on local minimum np.random.seed(2017) h = model.fit([X1, X2], Y.values, batch_size = 32, epochs = 40, verbose = 1, validation_split=0.2, callbacks=[early_stopper, tensorboard, checkpointer]) import os sorted(os.listdir(outpath))
_____no_output_____
MIT
solutions_do_not_open/Lab_05_DL Callbacks and Multiple Inputs_solution.ipynb
Dataweekends/global_AI_conference_Jan_2018
Hypothesis
I believe that random forest will have a better score, since the data frame has a lot of categorical data and a lot of columns in general.
# Train the Logistic Regression model on the unscaled data and print the model score classifier = LogisticRegression() classifier.fit(X_dummies_train, y_label_1) print(f"Training Data Score: {classifier.score(X_dummies_train, y_label_1)}") print(f"Testing Data Score: {classifier.score(X_dummies_test, y_label_2)}") # Train a Random Forest Classifier model and print the model score clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_dummies_train, y_label_1) print(f'Training Score: {clf.score(X_dummies_train, y_label_1)}') print(f'Testing Score: {clf.score(X_dummies_test, y_label_2)}')
Training Score: 1.0 Testing Score: 0.646958740961293
MIT
Credit Risk Evaluator.ipynb
J-Schea29/Supervised-Machine-Learning-Challenge
Hypothesis 2
I think that scaling will improve my scores and that the testing and training scores will be less spread out.
# Scale the data scaler = StandardScaler().fit(X_dummies_train) X_train_scaled = scaler.transform(X_dummies_train) X_test_scaled = scaler.transform(X_dummies_test) X_test_scaled # Train the Logistic Regression model on the scaled data and print the model score classifier = LogisticRegression() classifier.fit(X_train_scaled, y_label_1) print(f"Training Data Score: {classifier.score(X_train_scaled, y_label_1)}") print(f"Testing Data Score: {classifier.score(X_test_scaled, y_label_2)}") # Train a Random Forest Classifier model on the scaled data and print the model score clf = RandomForestClassifier(random_state=1, n_estimators=500).fit(X_train_scaled, y_label_1) print(f'Training Score: {clf.score(X_train_scaled, y_label_1)}') print(f'Testing Score: {clf.score(X_test_scaled, y_label_2)}')
Training Score: 1.0 Testing Score: 0.6480221182475542
MIT
Credit Risk Evaluator.ipynb
J-Schea29/Supervised-Machine-Learning-Challenge
Intro
Here's a simple example where we produce a set of plots, called a tear sheet, for a single stock.
Imports and Settings
# silence warnings
import warnings
warnings.filterwarnings('ignore')

import yfinance as yf
import pyfolio as pf
%matplotlib inline
_____no_output_____
Apache-2.0
pyfolio/examples/single_stock_example.ipynb
MBounouar/pyfolio-reloaded
Download daily stock prices using yfinance
Pyfolio expects tz-aware input set to the UTC timezone. You may have to install `yfinance` first by running:
```bash
pip install yfinance
```
fb = yf.Ticker('FB')
history = fb.history('max')
history.index = history.index.tz_localize('utc')
history.info()
returns = history.Close.pct_change()
_____no_output_____
Apache-2.0
pyfolio/examples/single_stock_example.ipynb
MBounouar/pyfolio-reloaded
Create returns tear sheet
This will show charts and analysis about returns of the single stock.
pf.create_returns_tear_sheet(returns, live_start_date='2020-1-1')
_____no_output_____
Apache-2.0
pyfolio/examples/single_stock_example.ipynb
MBounouar/pyfolio-reloaded
Import Data
import numpy as np from sklearn.model_selection import GridSearchCV import matplotlib.pyplot as plt # load data import os from google.colab import drive drive.mount('/content/drive') filedir = './drive/My Drive/Final/CNN_data' with open(filedir + '/' + 'feature_extracted', 'rb') as f: X = np.load(f) with open(filedir + '/' + 'Y', 'rb') as f: Y = np.load(f).astype(np.int32) # import MFCC data with open('./drive/My Drive/Final/mfcc_data/X', 'rb') as f: X_mfcc = np.load(f) with open('./drive/My Drive/Final/mfcc_data/Y', 'rb') as f: Y_mfcc = np.load(f) print('X_shape: {}\nY_shape: {}'.format(X_mfcc.shape, Y_mfcc.shape)) import warnings warnings.filterwarnings("ignore") ''' X_new = np.zeros([300,0]) for i in range(X.shape[1]): col = X[:,i,None] if((np.abs(col) > 1e-6).any()): X_new = np.hstack([X_new, col]) else: print('Yes') print('X.shape: {}\nX_new.shape: {}\nY.shape: {}'.format(X.shape, X_new.shape, Y.shape)) print(X_new.shape) print(np.max(X_new, axis=1) != np.max(X, axis=1)) print(np.min(X_new, axis=1)) '''
_____no_output_____
MIT
cnn_classifier.ipynb
Poxls88/triggerword
CLF1 Ridge Classifier
''' from sklearn.linear_model import RidgeClassifier parameters = {'alpha':[1]} rc = RidgeClassifier(alpha = 1) clf = GridSearchCV(rc, parameters, cv=3) clf.fit(X[:30], Y[:30]) clf.best_estimator_.fit(X[:30], Y[:30]).score(X, Y) clf.best_index_ ''' from sklearn.linear_model import RidgeClassifier def clf_RidgeClassifier(training_set, training_lable, testing_set, testing_lable): parameters = {'alpha':[10, 1, 1e-1, 1e-2, 1e-3]} rc = RidgeClassifier(alpha = 1) clf = GridSearchCV(rc, parameters, cv=3, return_train_score=True, iid=False) clf.fit(training_set, training_lable) results = clf.cv_results_ opt_index = clf.best_index_ training_score = results['mean_train_score'][opt_index] validation_score = results['mean_test_score'][opt_index] testing_score = clf.best_estimator_.fit(training_set, training_lable).score(testing_set, testing_lable) return [training_score, validation_score, testing_score], clf.best_params_ clf_RidgeClassifier(X[:240], Y[:240], X[240:], Y[240:])
_____no_output_____
MIT
cnn_classifier.ipynb
Poxls88/triggerword
CLF2 SVM
from sklearn.svm import SVC def clf_SVM(X_train, Y_train, X_test, Y_test): parameters = {'C':[10, 1, 1e-1, 1e-2, 1e-3]} svc = SVC(kernel='linear') clf = GridSearchCV(svc, parameters, cv=3, return_train_score=True, iid=False) clf.fit(X_train, Y_train) results = clf.cv_results_ opt_index = clf.best_index_ training_score = results['mean_train_score'][opt_index] validation_score = results['mean_test_score'][opt_index] testing_score = clf.best_estimator_.fit(X_train, Y_train).score(X_test, Y_test) return [training_score, validation_score, testing_score], clf.best_params_ clf_SVM(X[:240], Y[:240], X[240:], Y[240:])
_____no_output_____
MIT
cnn_classifier.ipynb
Poxls88/triggerword
CLF3 LDA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis def clf_lda(Xtrain, Ytrain, Xtest, Ytest): """ Input: training data, labels, testing data, labels Output: training set mean prediciton accuracy, validation accuracy = None, testing set mean prediction accuracy Note: LDA has no hyperparameters to tune because a model is solved in closed form therefore there is no need for model selection via grid search cross validation therefore there is no validation accuracy """ clf = LinearDiscriminantAnalysis() clf.fit(Xtrain, Ytrain) train_acc = clf.score(Xtrain,Ytrain) val_acc = None test_acc = clf.score(Xtest,Ytest) return [train_acc,val_acc,test_acc], None clf_lda(X[:240],Y[:240],X[240:],Y[240:])
_____no_output_____
MIT
cnn_classifier.ipynb
Poxls88/triggerword
CLF4 KNN
from sklearn.neighbors import KNeighborsClassifier def clf_KNN(X_train, Y_train, X_test, Y_test): parameters = {'n_neighbors':[1,5,20]} knn = KNeighborsClassifier(algorithm='auto', weights='uniform') clf = GridSearchCV(knn, parameters, cv=3, return_train_score=True, iid=False) clf.fit(X_train, Y_train) results = clf.cv_results_ opt_index = clf.best_index_ training_score = results['mean_train_score'][opt_index] validation_score = results['mean_test_score'][opt_index] testing_score = clf.best_estimator_.fit(X_train, Y_train).score(X_test, Y_test) return [training_score, validation_score, testing_score], clf.best_params_ clf_KNN(X[:240], Y[:240], X[240:], Y[240:])
_____no_output_____
MIT
cnn_classifier.ipynb
Poxls88/triggerword
CLF5 Decision Tree
from sklearn.tree import DecisionTreeClassifier def clf_DecisionTree(X_train, Y_train, X_test, Y_test): parameters = {'max_depth':[5,10,15,20,25], 'criterion':['entropy', 'gini']} dtc = DecisionTreeClassifier() clf = GridSearchCV(dtc, parameters, cv=3, return_train_score=True, iid=False) clf.fit(X_train, Y_train) results = clf.cv_results_ opt_index = clf.best_index_ training_score = results['mean_train_score'][opt_index] validation_score = results['mean_test_score'][opt_index] testing_score = clf.best_estimator_.fit(X_train, Y_train).score(X_test, Y_test) return [training_score, validation_score, testing_score], clf.best_params_ clf_DecisionTree(X[:240], Y[:240], X[240:], Y[240:])
_____no_output_____
MIT
cnn_classifier.ipynb
Poxls88/triggerword
Testing On Data
clf_list = [clf_RidgeClassifier, clf_SVM, clf_lda, clf_KNN, clf_DecisionTree] def test_trial(X_shuffled, Y_shuffled): global clf_list error = np.zeros((3,5,3)) # partition(3) * clf(5) * error(3) # (8/2,5/5,2/8) * (clf_list) * (trn,val,tst) opt_param = np.empty((3,5), dtype=dict) # partition(3) * clf(5) sample_size = len(X_shuffled) # 80/20 split train_size = int(sample_size * 0.8) X_train = X_shuffled[:train_size] Y_train = Y_shuffled[:train_size] X_test = X_shuffled[train_size:] Y_test = Y_shuffled[train_size:] for i in range(len(clf_list)): clffn = clf_list[i] error[0,i,:], opt_param[0,i] = clffn(X_train, Y_train, X_test, Y_test) # 50/50 split train_size = int(sample_size * 0.5) X_train = X_shuffled[:train_size] Y_train = Y_shuffled[:train_size] X_test = X_shuffled[train_size:] Y_test = Y_shuffled[train_size:] for i in range(len(clf_list)): clffn = clf_list[i] error[1,i,:], opt_param[1,i] = clffn(X_train, Y_train, X_test, Y_test) # 80/20 split train_size = int(sample_size * 0.2) X_train = X_shuffled[:train_size] Y_train = Y_shuffled[:train_size] X_test = X_shuffled[train_size:] Y_test = Y_shuffled[train_size:] for i in range(len(clf_list)): clffn = clf_list[i] error[2,i,:], opt_param[2,i] = clffn(X_train, Y_train, X_test, Y_test) # return error array return error, opt_param from sklearn.utils import shuffle def test_data(X, Y): error = np.zeros((3,3,5,3)) # trial(3) * error_from_test_trial(3*5*3) opt_param = np.empty((3,3,5), dtype=dict) # trial(3) * opt_param_from_test_trial(3*5) # trial 1 X_shuffled, Y_shuffled = shuffle(X, Y) error[0], opt_param[0] = test_trial(X_shuffled, Y_shuffled) # trial 2 X_shuffled, Y_shuffled = shuffle(X_shuffled, Y_shuffled) error[1], opt_param[1] = test_trial(X_shuffled, Y_shuffled) # trial 3 X_shuffled, Y_shuffled = shuffle(X_shuffled, Y_shuffled) error[2], opt_param[2] = test_trial(X_shuffled, Y_shuffled) return error, opt_param # test on CNN-extracted features acc_CNN, opt_param_CNN = test_data(X, Y) np.mean(acc_CNN[:,:,:,:], axis=0) acc_clf, opt_param = test_data(X_mfcc, Y_mfcc) avg_cnn_acc = np.mean(acc_CNN, axis=0) avg_clf_acc = np.mean(acc_clf, axis=0) print('cnn: {}'.format(avg_cnn_acc)) print('clf: {}'.format(avg_clf_acc)) # partition_accuracy plot from matplotlib import rcParams rcParams['figure.figsize'] = (8,8) colors = ['cyan', 'green', 'red', 'orange','black'] clf = ['RidgeRegression', 'SVM', 'LDA', 'KNN', 'DecisionTree'] for clfid in range(5): plt.plot(avg_cnn_acc[:,clfid,-1], color=colors[clfid], linestyle='solid', label='CNN '+clf[clfid]) plt.plot(avg_clf_acc[:,clfid,-1], color=colors[clfid], linestyle='dashed', label='MFCC '+clf[clfid]) plt.legend(loc='lower left') plt.xticks((0,1,2),['80/20', '50/50', '20/80']) plt.xlabel('partition (train/test)') plt.ylabel('average test accuracy') plt.savefig('./drive/My Drive/Final/graphs/partition_accuracy.png', bbox='tight') # SVM hyperparameter error plot parameters = {'C':[10, 1, 1e-1, 1e-2, 1e-3]} svc = SVC(kernel='linear') clf = GridSearchCV(svc, parameters, cv=3, return_train_score=True, iid=False) clf.fit(X[:240], Y[:240]) results = clf.cv_results_ opt_index = clf.best_index_ training_score = results['mean_train_score'] validation_score = results['mean_test_score'] param_x = results['param_C'].data.astype(np.float32) plt.plot(param_x, training_score, 'r-', label='training') plt.plot(param_x, validation_score, 'b-', label='validation') plt.legend(loc='lower left') plt.xticks([0,2.5,5,7.5,10], ['10','1','1e-1','1e-2','1e-3']) plt.xlabel('param_C') plt.ylabel('accuracy') #plt.show() 
plt.savefig('./drive/My Drive/Final/graphs/SVM_hyperparameter_accuracy.png') # avg cross-partition accuracy cnn_cp_acc = np.mean(avg_cnn_acc[:,:,-1], axis=0) clf_cp_acc = np.mean(avg_clf_acc[:,:,-1], axis=0) print('cnn_cp_acc: {}'.format(cnn_cp_acc)) print('clf_cp_acc: {}'.format(clf_cp_acc)) avg_totalcp_acc = (cnn_cp_acc + clf_cp_acc) / 2 print(avg_totalcp_acc) (avg_cnn_acc + avg_clf_acc)/2 opt_param opt_param_CNN max_ind_cnn = np.argpartition(np.sum(X, axis=0), -2)[-2:] std_ind_cnn = np.argpartition(np.std(X, axis=0), -2)[-2:] max_ind_clf = np.argpartition(np.sum(X_mfcc, axis=0), -2)[-2:] std_ind_clf = np.argpartition(np.std(X_mfcc, axis=0), -2)[-2:] max_cnn = X[:,max_ind_cnn] std_cnn = X[:,std_ind_cnn] max_clf = X_mfcc[:,max_ind_clf] std_clf = X_mfcc[:,std_ind_clf] def plot_features(X, Y): return X[Y==0,:], X[Y==1,:] # 2 max features from cnn plotted plt.clf() feature0, feature1 = plot_features(max_cnn, Y) plt.plot(feature0[:,0], feature0[:,1],'ro', label='digit 0') plt.plot(feature1[:,0], feature1[:,1],'go', label='digit 1') plt.legend(loc='lower right') plt.show() #plt.savefig('./drive/My Drive/Final/graphs/2_max_sum_cnn_features.png') # 2 var features from cnn plotted feature0, feature1 = plot_features(std_cnn, Y) plt.plot(feature0[:,0], feature0[:,1],'ro', label='digit 0') plt.plot(feature1[:,0], feature1[:,1],'go', label='digit 1') plt.legend(loc='lower right') #plt.show() plt.savefig('./drive/My Drive/Final/graphs/2_max_var_cnn_features.png') # 2 max features from mfcc plotted feature0, feature1 = plot_features(max_clf, Y) plt.plot(feature0[:,0], feature0[:,1],'ro', label='digit 0') plt.plot(feature1[:,0], feature1[:,1],'go', label='digit 1') plt.legend(loc='lower right') #plt.show() plt.savefig('./drive/My Drive/Final/graphs/2_max_sum_mfcc_features.png') # 2 var features from mfcc plotted feature0, feature1 = plot_features(std_clf, Y) plt.plot(feature0[:,0], feature0[:,1],'ro', label='digit 0') plt.plot(feature1[:,0], feature1[:,1],'go', label='digit 1') plt.legend(loc='lower right') #plt.show() plt.savefig('./drive/My Drive/Final/graphs/2_max_var_mfcc_features.png')
_____no_output_____
MIT
cnn_classifier.ipynb
Poxls88/triggerword
Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks whether the geemap package has been installed. If not, it installs geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend, enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
# Installs geemap package import subprocess try: import geemap except ImportError: print('geemap package not installed. Installing ...') subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap']) # Checks whether this notebook is running on Google Colab try: import google.colab import geemap.eefolium as emap except: import geemap as emap # Authenticates and initializes Earth Engine import ee try: ee.Initialize() except Exception as e: ee.Authenticate() ee.Initialize()
_____no_output_____
MIT
Image/extract_value_to_points.ipynb
YuePanEdward/earthengine-py-notebooks
Create an interactive map
The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
Map = emap.Map(center=[40,-100], zoom=4) Map.add_basemap('ROADMAP') # Add Google Map Map
_____no_output_____
MIT
Image/extract_value_to_points.ipynb
YuePanEdward/earthengine-py-notebooks
Add Earth Engine Python script
# Add Earth Engine dataset # Input imagery is a cloud-free Landsat 8 composite. l8 = ee.ImageCollection('LANDSAT/LC08/C01/T1') image = ee.Algorithms.Landsat.simpleComposite(**{ 'collection': l8.filterDate('2018-01-01', '2018-12-31'), 'asFloat': True }) # Use these bands for prediction. bands = ['B2', 'B3', 'B4', 'B5', 'B6', 'B7', 'B10', 'B11'] # Load training points. The numeric property 'class' stores known labels. points = ee.FeatureCollection('GOOGLE/EE/DEMOS/demo_landcover_labels') # This property of the table stores the land cover labels. label = 'landcover' # Overlay the points on the imagery to get training. training = image.select(bands).sampleRegions(**{ 'collection': points, 'properties': [label], 'scale': 30 }) # Define visualization parameters in an object literal. vizParams = {'bands': ['B5', 'B4', 'B3'], 'min': 0, 'max': 1, 'gamma': 1.3} Map.centerObject(points, 10) Map.addLayer(image, vizParams, 'Image') Map.addLayer(points, {'color': "yellow"}, 'Training points') first = training.first() print(first.getInfo())
_____no_output_____
MIT
Image/extract_value_to_points.ipynb
YuePanEdward/earthengine-py-notebooks
Display Earth Engine data layers
Map.addLayerControl() # This line is not needed for ipyleaflet-based Map. Map
_____no_output_____
MIT
Image/extract_value_to_points.ipynb
YuePanEdward/earthengine-py-notebooks
Fmriprep
Today, many excellent general-purpose, open-source neuroimaging software packages exist: [SPM](https://www.fil.ion.ucl.ac.uk/spm/) (Matlab-based), [FSL](https://fsl.fmrib.ox.ac.uk/fsl/fslwiki), [AFNI](https://afni.nimh.nih.gov/), and [Freesurfer](https://surfer.nmr.mgh.harvard.edu/) (with a shell interface). We argue that there is not one single package that is always the best choice for every step in your preprocessing pipeline. Fortunately, people from the [Poldrack lab](https://poldracklab.stanford.edu/) created [fmriprep](https://fmriprep.readthedocs.io/en/stable/), a software package that offers a preprocessing pipeline which "glues together" functionality from different neuroimaging software packages (such as Freesurfer and FSL), such that each step in the pipeline is executed by the software package that (arguably) does it best.

We have been using *Fmriprep* for preprocessing of our own data and we strongly recommend it. It is relatively simple to use, requires minimal user intervention, and creates extensive visual reports for users to do visual quality control (to check whether each step in the pipeline worked as expected). The *only* requirement to use Fmriprep is that your data is formatted as specified in the Brain Imaging Data Structure (BIDS).

The BIDS-format
[BIDS](https://bids.neuroimaging.io/) is a specification on how to format, name, and organize your MRI dataset. It specifies the file format of MRI files (i.e., compressed Nifti: `.nii.gz` files), lays out rules for how you should name your files (i.e., with "key-value" pairs, such as: `sub-01_ses-1_task-1back_run-1_bold.nii.gz`), and outlines the file/folder structure of your dataset (where each subject has its own directory with separate subdirectories for different MRI modalities, including fieldmaps, functional, diffusion, and anatomical MRI). Additionally, it specifies a way to include "metadata" about the (MRI) files in your dataset with [JSON](https://en.wikipedia.org/wiki/JSON) files: plain-text files with key-value pairs (in the form "parameter: value"). Given that your dataset is BIDS-formatted and contains the necessary metadata, you can use `fmriprep` on your dataset. (You can use the awesome [bids-validator](https://bids-standard.github.io/bids-validator/) to see whether your dataset is completely valid according to BIDS.)

There are different tools to convert your "raw" scanner data (e.g., in DICOM or PAR/REC format) to BIDS, including [heudiconv](https://heudiconv.readthedocs.io/en/latest/), [bidscoin](https://github.com/Donders-Institute/bidscoin), and [bidsify](https://github.com/NILAB-UvA/bidsify) (created by Lukas). We'll skip over this step and assume that you'll be able to convert your data to BIDS.

Installing Fmriprep
Now, having your data in BIDS is an important step in getting started with Fmriprep. The next step is installing the package. Technically, Fmriprep is a Python package, so it can be installed as such (using `pip install fmriprep`), but we do not recommend this "bare metal" installation, because it depends on a host of neuroimaging software packages (including FSL, Freesurfer, AFNI, and ANTs). So if you'd want to directly install Fmriprep, you'd need to install those extra neuroimaging software packages as well (which is not worth your time, trust us). Fortunately, Fmriprep also offers a "Docker container" in which Fmriprep and all the associated dependencies are already installed.
[Docker](https://www.docker.com/) is software that allows you to create "containers", which are like lightweight "virtual machines" ([VM](https://en.wikipedia.org/wiki/Virtual_machine)): a separate (Linux-based) operating system with a specific software configuration. You can download the Fmriprep-specific Docker "image", which is like a "recipe", build the Fmriprep-specific "container" according to this "recipe" on your computer, and finally use this container to run Fmriprep on your computer as if all dependencies were actually installed on your computer! Docker is available on Linux, Mac, and Windows. To install Docker, google something like "install docker for {Windows,Mac,Linux}" to find a walkthrough.

Note that you need administrator ("root") privilege on your computer (which is likely the case for your own computer, but not on shared analysis servers) to run Docker. If you don't have root access on your computer/server, ask your administrator/sysadmin to install [singularity](https://fmriprep.readthedocs.io/en/stable/installation.html#singularity-container), which allows you to convert Docker images to Singularity images, which you can run without administrator privileges.

Assuming you have installed Docker, you can run the "containerized" Fmriprep from your command line directly, which involves a fairly long and complicated command (i.e., `docker run -it --rm -v bids_dir /data ... etc`), or by using the `fmriprep-docker` Python package. This `fmriprep-docker` package is just a simple wrapper around the appropriate Docker command to run the complicated "containerized" Fmriprep command. We strongly recommend this method.

To install `fmriprep-docker`, you can use `pip` (from your command line):
```
pip install fmriprep-docker
```
Now, you should have access to the `fmriprep-docker` command on your command line and you're ready to start preprocessing your dataset. For more detailed information about installing Fmriprep, check out their [website](https://fmriprep.readthedocs.io/en/stable/installation.html).

Running Fmriprep
Assuming you have Docker and `fmriprep-docker` installed, you're ready to run Fmriprep. The basic format of the `fmriprep-docker` command is as follows:
```
fmriprep-docker <bids_dir> <output_dir>
```
This means that `fmriprep-docker` has two mandatory positional arguments: the first one being your BIDS-folder (i.e., the path to your folder with BIDS-formatted data), and the second one being the output-folder (i.e., where you want Fmriprep to output the preprocessed data). We recommend setting your output-folder to a subfolder of your BIDS-folder named "derivatives": `<bids_dir>/derivatives`.

Then, you can add a bunch of extra "flags" (parameters) to the command to specify the preprocessing pipeline as you like it. We highlight a couple of important ones here, but for the full list of parameters, check out the [Fmriprep](https://fmriprep.readthedocs.io/en/stable/usage.html) website.

Freesurfer
When running Fmriprep from Docker, you don't need to have Freesurfer installed, but you *do* need a Freesurfer license. You can download this here: https://surfer.nmr.mgh.harvard.edu/fswiki/License. Then, you need to supply the `--fs-license-file` parameter to your `fmriprep-docker` command:
```
fmriprep-docker <bids_dir> <output_dir> --fs-license-file /home/lukas/license.txt
```

Configuring what is preprocessed
If you just run Fmriprep with the mandatory BIDS-folder and output-folder arguments, it will preprocess everything it finds in the BIDS-folder.
Sometimes, however, you may just want to run one (or several) specific participants, or one (or more) specific tasks (e.g., only the MRI files associated with the localizer runs, but not the working memory runs). You can do this by adding the `--participant-label` and `--task-id` flags to the command:
```
fmriprep-docker <bids_dir> <output_dir> --participant-label sub-01 --task-id localizer
```
You can also specify some things to be ignored during preprocessing using the `--ignore` parameter (like `fieldmaps`):
```
fmriprep-docker <bids_dir> <output_dir> --ignore fieldmaps
```

Handling performance
It's very easy to parallelize the preprocessing pipeline by setting the `--nthreads` and `--omp-nthreads` parameters, which refer to the number of threads that Fmriprep should use. Note that laptops usually have 4 threads available (but analysis servers usually have more!). You can also specify the maximum amount of RAM that Fmriprep is allowed to use with the `--mem_mb` parameter. So, if you for example want to run Fmriprep with 3 threads and a maximum of 3GB of RAM, you can run:
```
fmriprep-docker <bids_dir> <output_dir> --nthreads 3 --omp-nthreads 3 --mem_mb 3000
```
In our experience, however, specifying the `--mem_mb` parameter is rarely necessary if you don't parallelize too much.

Output spaces
Specifying your "output spaces" (with the `--output-spaces` flag) tells Fmriprep to which "space(s)" you want your preprocessed data registered. For example, you can specify `T1w` to have your functional data registered to the participant's T1 scan. You can, instead or in addition, also specify some standard template, like the MNI template (`MNI152NLin2009cAsym` or `MNI152NLin6Asym`). You can even specify surface templates if you want (like `fsaverage`), which will sample your volumetric functional data onto the surface (as computed by Freesurfer). In addition to the specific output space(s), you can add a resolution "modifier" to the parameter to specify in what spatial resolution you want your resampled data to be. Without any resolution modifier, the native resolution of your functional files (e.g., $3\times3\times3$ mm.) will be kept intact. But if you want to upsample your resampled files to 2mm, you can add `YourTemplate:2mm`. For example, if you want to use the FSL-style MNI template (`MNI152NLin6Asym`) resampled at 2 mm, you'd use:
```
fmriprep-docker <bids_dir> <output_dir> --output-spaces MNI152NLin6Asym:2mm
```
You can of course specify multiple output-spaces:
```
fmriprep-docker <bids_dir> <output_dir> --output-spaces MNI152NLin6Asym:2mm T1w fsaverage
```

Other parameters
There are many options that you can set when running Fmriprep. Check out the [Fmriprep website](https://fmriprep.readthedocs.io/) (under "Usage") for a list of all options!

Issues, errors, and troubleshooting
While Fmriprep often works out-of-the-box (assuming your data are properly BIDS-formatted), it may happen that it crashes or otherwise gives unexpected results. A great place to start looking for help is [neurostars.org](https://neurostars.org). This website is dedicated to helping neuroscientists with neuroimaging/neuroscience-related questions. Make sure to check whether your question has been asked there already and, if not, pose it there!

If you encounter Fmriprep-specific bugs, you can also submit an issue at the [Github repository](https://github.com/poldracklab/fmriprep) of Fmriprep.
Fmriprep output/reports
After Fmriprep has run, it outputs, for each participant separately, a directory with results (i.e., preprocessed files) and an HTML-file with a summary and figures of the different steps in the preprocessing pipeline.

We ran Fmriprep on a single run/task (`flocBLOCKED`) from a single subject (`sub-03`) with the following command:
```
fmriprep-docker /home/lsnoek1/ni-edu/bids /home/lsnoek1/ni-edu/bids/derivatives --participant-label sub-03 --output-spaces T1w MNI152NLin2009cAsym
```
We've copied the Fmriprep output for this subject (`sub-03`) into the `fmriprep` subdirectory of the `week_4` directory. Let's check its contents:
import os print(os.listdir('bids/derivatives/fmriprep'))
_____no_output_____
MIT
NI-edu/fMRI-introduction/week_4/fmriprep.ipynb
lukassnoek/NI-edu
As said, Fmriprep outputs a directory with results (`sub-03`) and an associated HTML-file with a summary of the (intermediate and final) results. Let's check the directory with results first:
from pprint import pprint # pprint stands for "pretty print", sub_path = os.path.join('bids/derivatives/fmriprep', 'sub-03') pprint(sorted(os.listdir(sub_path)))
_____no_output_____
MIT
NI-edu/fMRI-introduction/week_4/fmriprep.ipynb
lukassnoek/NI-edu
The `figures` directory contains several figures with the result of different preprocessing stages (like functional → high-res anatomical registration), but these figures are also included in the HTML-file, so we'll leave that for now. The other two directories, `anat` and `func`, contain the preprocessed anatomical and functional files, respectively. Let's inspect the `anat` directory:
anat_path = os.path.join(sub_path, 'anat') pprint(os.listdir(anat_path))
_____no_output_____
MIT
NI-edu/fMRI-introduction/week_4/fmriprep.ipynb
lukassnoek/NI-edu
Here, we see a couple of different files. There are both (preprocessed) nifti images (`*.nii.gz`) and associated meta-data (plain-text files in JSON format: `*.json`).Importantly, the nifti outputs are in two different spaces: one set of files are in the original "T1 space", so without any resampling to another space (these files have the same resolution and orientation as the original T1 anatomical scan). For example, the `sub_03_desc-preproc_T1w.nii.gz` scan is the preprocessed (i.e., bias-corrected) T1 scan. In addition, most files are also available in `MNI152NLin2009cAsym` space, a standard template. For example, the `sub-03_space-MNI152NLin2009cAsym_desc-preproc_T1w.nii.gz` is the same file as `sub_03_desc-preproc_T1w.nii.gz`, but resampled to the `MNI152NLin2009cAsym` template. In addition, there are subject-specific brain parcellations (the `*aparcaseg_dseg.nii.gz `and `*aseg_dseg.nii.gz` files), files with registration parameters (`*from- ... -to ...` files), probabilistic tissue segmentation files (`*label-{CSF,GM,WM}_probseg.nii.gz`) files, and brain masks (to outline what is brain and not skull/dura/etc; `*brain_mask.nii.gz`).Again, on the [Fmriprep website](https://fmriprep.readthedocs.io/), you can find more information about the specific outputs.Now, let's check out the `func` directory:
func_path = os.path.join(sub_path, 'func') pprint(os.listdir(func_path))
_____no_output_____
MIT
NI-edu/fMRI-introduction/week_4/fmriprep.ipynb
lukassnoek/NI-edu
Again, like the files in the `anat` folder, the functional outputs are available in two spaces: `T1w` and `MNI152NLin2009cAsym`. In terms of actual images, there are preprocessed BOLD files (ending in `preproc_bold.nii.gz`), the functional volume used for "functional → anatomical" registration (ending in `boldref.nii.gz`), brain parcellations in functional space (ending in `dseg.nii.gz`), and brain masks (ending in `brain_mask.nii.gz`). In addition, there are files with "confounds" (ending in `confounds_regressors.tsv`) which contain variables that you might want to include as nuisance regressors in your first-level analysis. These confound files are spreadsheet-like files (like `csv` files, but instead of being comma-delimited, they are tab-delimited) and can be easily loaded in Python using the [pandas](https://pandas.pydata.org/) package:
import pandas as pd conf_path = os.path.join(func_path, 'sub-03_task-flocBLOCKED_desc-confounds_regressors.tsv') conf = pd.read_csv(conf_path, sep='\t') conf.head()
_____no_output_____
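Once the confound file is loaded, you typically keep only a subset of its columns for your design matrix. Below is a minimal, non-authoritative sketch of selecting one common subset (the six motion parameters plus the mean white-matter and CSF signals); the column names assume a recent Fmriprep release and may differ in older versions:
```python
# Hypothetical selection of a common nuisance-regressor subset from the
# confounds DataFrame `conf` loaded in the previous cell.
motion_cols = ['trans_x', 'trans_y', 'trans_z', 'rot_x', 'rot_y', 'rot_z']
nuisance = conf[motion_cols + ['white_matter', 'csf']]
nuisance.head()
```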
MIT
NI-edu/fMRI-introduction/week_4/fmriprep.ipynb
lukassnoek/NI-edu
Confound files from Fmriprep contain a large set of confounds, ranging from motion parameters (`rot_x`, `rot_y`, `rot_z`, `trans_x`, `trans_y`, and `trans_z`) and their derivatives (`*derivative1`) and squares (`*_power2`) to the average signal from the brain's white matter and cerebrospinal fluid (CSF), which should contain sources of noise such as respiratory, cardiac, or motion related signals (but not signal from neural sources, which should be largely constrained to gray matter). For a full list and explanation of Fmriprep's estimated confounds, check their website. Also, check [this thread](https://neurostars.org/t/confounds-from-fmriprep-which-one-would-you-use-for-glm/326) on Neurostars for a discussion on which confounds to include in your analyses.

In addition to the actual preprocessed outputs, Fmriprep also provides you with a nice (visual) summary of the different (major) preprocessing steps in an HTML-file, which you'd normally open in any standard browser to view. Here, we load this file for our example participant (`sub-03`) inside the notebook below. Scroll through it to see which preprocessing steps are highlighted. Note that the images from the HTML-file are not properly rendered in Jupyter notebooks, but you can right-click the image links (e.g., `sub-03/figures/sub-03_dseg.svg`) and click "Open link in new tab" to view the image.
from IPython.display import IFrame IFrame(src='./bids/derivatives/fmriprep/sub-03.html', width=700, height=600)
_____no_output_____
MIT
NI-edu/fMRI-introduction/week_4/fmriprep.ipynb
lukassnoek/NI-edu
Challenge 1, from [Paulo Silveira](https://twitter.com/paulo_caelum): Find how many movies have no ratings, and which movies they are.
count_rating_by_movieId = movies_rating.pivot_table(index=['movieId'], aggfunc='size').rename('votes') count_rating_by_movieId movies_with_votes = movies.join(count_rating_by_movieId, on="movieId") movies_with_votes[movies_with_votes['votes'].isnull()]
_____no_output_____
MIT
desafios_aula01.ipynb
justapixel/QuarentenaDados
Challenge 2, from [Guilherme Silveira](https://twitter.com/guilhermecaelum): Rename the rating column of the `filmes_com_media` dataframe to `nota_média` after the join.
rating = movies_rating.groupby("movieId")['rating'].mean() rating filmes_com_media = movies.join(rating, on="movieId").rename(columns={'rating': 'nota_média'}) filmes_com_media
_____no_output_____
MIT
desafios_aula01.ipynb
justapixel/QuarentenaDados
Challenge 3, from [Guilherme Silveira](https://twitter.com/guilhermecaelum): Add the total number of votes of each movie to `filmes_com_media`.
movies_with_rating_and_votes = filmes_com_media.join(count_rating_by_movieId, on="movieId") movies_with_rating_and_votes
_____no_output_____
MIT
desafios_aula01.ipynb
justapixel/QuarentenaDados
Challenge 4, from [Thiago Gonçalves](https://twitter.com/tgcsantos): Round the averages (the `nota_média` column) to two decimal places.
movies_with_rating_and_votes = movies_with_rating_and_votes.round({'nota_média':2}) movies_with_rating_and_votes
_____no_output_____
MIT
desafios_aula01.ipynb
justapixel/QuarentenaDados
Challenge 5, from [Allan Spadini](https://twitter.com/allanspadini): Find the genres of the movies (the unique genres that exist). (This is where it gets tricky.)
genres_split = movies.genres.str.split("|") genres_split genres = pd.DataFrame({'genre':np.concatenate(genres_split.values)}) list_genres = genres.groupby('genre').size().reset_index(name='count') list_genres['genre']
_____no_output_____
MIT
desafios_aula01.ipynb
justapixel/QuarentenaDados
Challenge 6, from [Thais André](https://twitter.com/thais_tandre): Count the number of appearances of each genre.
list_genres
_____no_output_____
MIT
desafios_aula01.ipynb
justapixel/QuarentenaDados
Challenge 7, from [Guilherme Silveira](https://twitter.com/guilhermecaelum): Plot a chart of the number of appearances of each genre. It can be a bar chart.
list_genres[['genre', 'count']].sort_values(by=['genre'], ascending=True).plot(x='genre', kind='barh', title="Generos")
_____no_output_____
MIT
desafios_aula01.ipynb
justapixel/QuarentenaDados
Write a program to remove characters from a string starting from zero up to n and return a new string.
__Example:__ remove_char("Untitled", 4) must output "tled". Here we need to remove the first four characters from the string.
def remove_char(a, b): # Write your code here print("started") a="Untitled" b=4 remove_char(a,b)
started
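One way to fill in this stub (a sketch of a possible solution, not the notebook's official answer) is to slice the string from index `n` onward:
```python
def remove_char(s, n):
    # Drop the first n characters by keeping everything from index n to the end.
    return s[n:]

print(remove_char("Untitled", 4))  # -> "tled"
```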
MIT
test.ipynb
sharaththota/Test
Write a program to find how many times a substring appears in the given string.
__Example:__ "You can use Markdown to format documentation you add to Markdown cells", sub_string: "Markdown". In the above, the substring "Markdown" appears two times, so the count is two.
def sub_string(m_string,s_string): # Write your code here print("started") m_string="You can use Markdown to format documentation you add to Markdown cells" s_string="Markdown" sub_string(m_string,s_string)
started
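A possible solution sketch (not from the original notebook), using `str.count` for non-overlapping occurrences:
```python
def sub_string(m_string, s_string):
    # Count non-overlapping occurrences of the substring in the main string.
    return m_string.count(s_string)

m_string = "You can use Markdown to format documentation you add to Markdown cells"
print(sub_string(m_string, "Markdown"))  # -> 2
```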
MIT
test.ipynb
sharaththota/Test
Write a program to check if the given number is a palindrome number.
__Example:__ A palindrome number is a number that stays the same when reversed. For example, 242 is a palindrome number.
def palindrom_check(a): # Write your code here print("started") palindrom_check(242)
started
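A possible solution sketch (not the notebook's official answer), comparing the digits with their reverse:
```python
def palindrom_check(a):
    # A number is a palindrome if its digit string reads the same forwards and backwards.
    s = str(a)
    return s == s[::-1]

print(palindrom_check(242))  # -> True
print(palindrom_check(123))  # -> False
```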
MIT
test.ipynb
sharaththota/Test
Write a program to extract unique values from dictionary values.
__Example:__ test = {'gfg': [5, 6, 7, 8], 'is': [10, 11, 7, 5], 'best': [6, 12, 10, 8], 'for': [1, 2, 5]}
output: [1, 2, 5, 6, 7, 8, 10, 11, 12]
def extract_unique(a): # Write your code here print("started") test= {'gfg': [5, 6, 7, 8], 'is': [10, 11, 7, 5], 'best' : [6, 12, 10, 8], 'for': [1, 2, 5]} extract_unique(test)
started
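A possible solution sketch (not from the original notebook): flatten all value lists into a set and return them sorted.
```python
def extract_unique(d):
    # Collect every element of every value list into a set, then sort.
    return sorted({v for values in d.values() for v in values})

test = {'gfg': [5, 6, 7, 8], 'is': [10, 11, 7, 5], 'best': [6, 12, 10, 8], 'for': [1, 2, 5]}
print(extract_unique(test))  # -> [1, 2, 5, 6, 7, 8, 10, 11, 12]
```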
MIT
test.ipynb
sharaththota/Test
Write a program to find the dictionary with the maximum count of pairs.
__Example:__ Input: test_list = [{"gfg": 2, "best": 4}, {"gfg": 2, "is": 3, "best": 4, "CS": 9}, {"gfg": 2}]
Output: 4
def max_count(a): # Write your code here print("started") test_list = [{"gfg": 2, "best":4}, {"gfg": 2, "is" : 3, "best": 4, "CS":9}, {"gfg":2}] max_count(test_list)
started
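A possible solution sketch (not the notebook's official answer); the "count of pairs" of a dict is simply its length:
```python
def max_count(dicts):
    # Return the largest number of key-value pairs found in any dictionary of the list.
    return max(len(d) for d in dicts)

test_list = [{"gfg": 2, "best": 4}, {"gfg": 2, "is": 3, "best": 4, "CS": 9}, {"gfg": 2}]
print(max_count(test_list))  # -> 4
```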
MIT
test.ipynb
sharaththota/Test
Access the value of key 'history' from the below dict
def key_access(a): # Write your code here print("started") sampleDict = { "class":{ "student":{ "name": "Mike", "marks" : { "physics":70, "history":80 } } } } key_access(sampleDict)
started
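A possible solution sketch (not from the original notebook), walking the nested dictionaries down to the 'history' mark:
```python
def key_access(d):
    # Navigate class -> student -> marks -> history.
    return d["class"]["student"]["marks"]["history"]

sampleDict = {
    "class": {
        "student": {
            "name": "Mike",
            "marks": {"physics": 70, "history": 80}
        }
    }
}
print(key_access(sampleDict))  # -> 80
```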
MIT
test.ipynb
sharaththota/Test
Print the value of the key 'hair'. Print the third element of the key 'interested_in'.
def third_ele(a): # Write your code here print("started") info={ "personal data":{ "name":"Lauren", "age":20, "major":"Information Science", "physical_features":{ "color":{ "eye":"blue", "hair":"brown" }, "height":"5'8" } }, "other":{ "favorite_colors":[ "purple", "green", "blue" ], "interested_in":[ "social media", "intellectual property", "copyright", "music", "books" ] } } third_ele(info) import pandas as pd import numpy as np exam_data = {'name': ['Anastasia', 'Dima', 'Katherine', 'James', 'Emily', 'Michael', 'Matthew', 'Laura', 'Kevin', 'Jonas'], 'score': [12.5, 9, 16.5, np.nan, 9, 20, 14.5, np.nan, 8, 19], 'attempts': [1, 3, 2, 3, 2, 3, 1, 1, 2, 1], 'qualify': ['yes', 'no', 'yes', 'no', 'no', 'yes', 'yes', 'no', 'no', 'yes']} labels = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j'] df = pd.DataFrame(exam_data , index=labels) df
_____no_output_____
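A possible solution sketch for the nested-dictionary task (not the notebook's official answer), assuming the `info` dictionary defined in the stub cell above is in scope:
```python
def third_ele(info):
    # Value of the 'hair' key, then the third element of 'interested_in'.
    print(info["personal data"]["physical_features"]["color"]["hair"])  # -> brown
    print(info["other"]["interested_in"][2])                            # -> copyright

third_ele(info)
```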
MIT
test.ipynb
sharaththota/Test
Print the Unique values from attempts column
def un_values(df): # Write your code here print("started") un_values(df)
started
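A possible solution sketch (not from the original notebook), assuming `df` is the exam-scores DataFrame built in the earlier cell:
```python
def un_values(df):
    # Distinct values appearing in the 'attempts' column.
    return df['attempts'].unique()

print(un_values(df))  # -> array([1, 3, 2])
```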
MIT
test.ipynb
sharaththota/Test
Print the top five rows from the data frame
def top_five(df): # Write your code here print("started") top_five(df)
started
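A possible solution sketch (not the notebook's official answer), again assuming the `df` from the earlier cell:
```python
def top_five(df):
    # First five rows of the DataFrame.
    return df.head(5)

top_five(df)
```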
MIT
test.ipynb
sharaththota/Test
Print the max and min values of the column attempts
def min_max(df): # Write your code here print("started") min_max(df)
started
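A possible solution sketch (not from the original notebook), assuming the same `df` is in scope:
```python
def min_max(df):
    # Largest and smallest number of attempts in the DataFrame.
    return df['attempts'].max(), df['attempts'].min()

print(min_max(df))  # -> (3, 1)
```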
MIT
test.ipynb
sharaththota/Test
Import data
df = pd.read_hdf('data/car.h5')
df.shape
df.columns
_____no_output_____
MIT
matrix_two/day3.ipynb
kmwolowiec/data_workshop
Dummy Model
df.select_dtypes(np.number).columns X = df['car_id'] y = df['price_value'] model = DummyRegressor() model.fit(X, y) y_pred = model.predict(X) mae(y, y_pred) [x for x in df.columns if 'price' in x] df['price_currency'].value_counts() df = df[ df.price_currency == 'PLN'] df.shape
_____no_output_____
MIT
matrix_two/day3.ipynb
kmwolowiec/data_workshop
Features
df.sample(5) suffix_cat = '__cat' for feat in df.columns: if isinstance(df[feat][0], list):continue factorized_values = df[feat].factorize()[0] if suffix_cat in feat: df[feat] = factorized_values else: df[feat+suffix_cat] = factorized_values cat_feats = [x for x in df.columns if suffix_cat in x] cat_feats = [x for x in cat_feats if 'price' not in x] cat_feats len(cat_feats) X = df[cat_feats].values y = df['price_value'].values model = DecisionTreeRegressor(max_depth=5) scores = cross_val_score(model, X, y, cv=3, scoring='neg_mean_absolute_error') np.mean(scores) m = DecisionTreeRegressor(max_depth=5) m.fit(X, y) imp = PermutationImportance(m, random_state=0).fit(X, y) eli5.show_weights(imp, feature_names=cat_feats) df[['param_napęd', 'price_value']].groupby('param_napęd').agg(['mean', 'median', 'std', 'count']) df['param_rok-produkcji'] = df['param_rok-produkcji'].astype(float) fig = plt.figure(constrained_layout=True, figsize=(16,8)) gs = GridSpec(2, 4, figure=fig) ax1 = fig.add_subplot(gs[0, :2]) ax2 = fig.add_subplot(gs[0, 2]) ax3 = fig.add_subplot(gs[0, 3]) ax4 = fig.add_subplot(gs[1, :]) sns.boxplot(data=df, x='param_napęd', y='price_value', ax=ax1) sns.boxplot(data=df, x='param_faktura-vat__cat', y='price_value', ax=ax2) sns.boxplot(data=df, x='param_stan', y='price_value', ax=ax3) sns.scatterplot(x="param_rok-produkcji", y="price_value", data=df, alpha=0.1, linewidth=0, ax=ax4); !git push origin master
Counting objects: 1 Counting objects: 4, done. Delta compression using up to 2 threads. Compressing objects: 25% (1/4) Compressing objects: 50% (2/4) Compressing objects: 75% (3/4) Compressing objects: 100% (4/4) Compressing objects: 100% (4/4), done. Writing objects: 25% (1/4) Writing objects: 50% (2/4) Writing objects: 75% (3/4) Writing objects: 100% (4/4) Writing objects: 100% (4/4), 76.21 KiB | 5.86 MiB/s, done. Total 4 (delta 1), reused 0 (delta 0) remote: Resolving deltas: 100% (1/1), completed with 1 local object. remote: This repository moved. Please use the new location: remote: https://github.com/kmwolowiec/data_workshop.git To https://github.com/ThePearsSon/data_workshop.git 874fb89..1c4aeef master -> master
MIT
matrix_two/day3.ipynb
kmwolowiec/data_workshop
Pattern Mining Library
source("https://raw.githubusercontent.com/eogasawara/mylibrary/master/myPreprocessing.R") loadlibrary("arules") loadlibrary("arulesViz") loadlibrary("arulesSequences") data(AdultUCI) dim(AdultUCI) head(AdultUCI)
_____no_output_____
MIT
todo/Pattern.ipynb
lucasgiutavares/mylibrary
Removing attributes
AdultUCI$fnlwgt <- NULL AdultUCI$"education-num" <- NULL
_____no_output_____
MIT
todo/Pattern.ipynb
lucasgiutavares/mylibrary
Conceptual Hierarchy and Binning
AdultUCI$age <- ordered(cut(AdultUCI$age, c(15,25,45,65,100)), labels = c("Young", "Middle-aged", "Senior", "Old")) AdultUCI$"hours-per-week" <- ordered(cut(AdultUCI$"hours-per-week", c(0,25,40,60,168)), labels = c("Part-time", "Full-time", "Over-time", "Workaholic")) AdultUCI$"capital-gain" <- ordered(cut(AdultUCI$"capital-gain", c(-Inf,0,median(AdultUCI$"capital-gain"[AdultUCI$"capital-gain">0]), Inf)), labels = c("None", "Low", "High")) AdultUCI$"capital-loss" <- ordered(cut(AdultUCI$"capital-loss", c(-Inf,0, median(AdultUCI$"capital-loss"[AdultUCI$"capital-loss">0]), Inf)), labels = c("None", "Low", "High")) head(AdultUCI)
_____no_output_____
MIT
todo/Pattern.ipynb
lucasgiutavares/mylibrary
Convert to transactions
AdultTrans <- as(AdultUCI, "transactions")
_____no_output_____
MIT
todo/Pattern.ipynb
lucasgiutavares/mylibrary
A Priori
rules <- apriori(AdultTrans, parameter=list(supp = 0.5, conf = 0.9, minlen=2, maxlen= 10, target = "rules"), appearance=list(rhs = c("capital-gain=None"), default="lhs"), control=NULL) inspect(rules) rules_a <- as(rules, "data.frame") head(rules_a)
_____no_output_____
MIT
todo/Pattern.ipynb
lucasgiutavares/mylibrary
Analysis of Rules
imrules <- interestMeasure(rules, transactions = AdultTrans) head(imrules)
_____no_output_____
MIT
todo/Pattern.ipynb
lucasgiutavares/mylibrary
Removing redundant rules
nrules <- rules[!is.redundant(rules)] arules::inspect(nrules)
lhs rhs support confidence [1] {hours-per-week=Full-time} => {capital-gain=None} 0.5435895 0.9290688 [2] {sex=Male} => {capital-gain=None} 0.6050735 0.9051455 [3] {workclass=Private} => {capital-gain=None} 0.6413742 0.9239073 [4] {race=White} => {capital-gain=None} 0.7817862 0.9143240 [5] {native-country=United-States} => {capital-gain=None} 0.8219565 0.9159062 [6] {capital-loss=None} => {capital-gain=None} 0.8706646 0.9133376 coverage lift count [1] 0.5850907 1.0127342 26550 [2] 0.6684820 0.9866565 29553 [3] 0.6941976 1.0071078 31326 [4] 0.8550428 0.9966616 38184 [5] 0.8974243 0.9983862 40146 [6] 0.9532779 0.9955863 42525
MIT
todo/Pattern.ipynb
lucasgiutavares/mylibrary
Showing the transactions that support the rules
In this example, we can see the transactions (trans) that support rule 1.
st <- supportingTransactions(nrules[1], AdultTrans) trans <- unique(st@data@i) length(trans) print(c(length(trans)/length(AdultTrans), nrules[1]@quality$support))
_____no_output_____
MIT
todo/Pattern.ipynb
lucasgiutavares/mylibrary
Now we can see the transactions (trans) that support rules 1 and 2. As can be observed, the support for both rules is not the sum of the support of each rule.
st <- supportingTransactions(nrules[1:2], AdultTrans) trans <- unique(st@data@i) length(trans) print(c(length(trans)/length(AdultTrans), nrules[1:2]@quality$support))
_____no_output_____
MIT
todo/Pattern.ipynb
lucasgiutavares/mylibrary
Rules visualization
options(repr.plot.width=7, repr.plot.height=4) plot(rules) options(repr.plot.width=7, repr.plot.height=4) plot(rules, method="paracoord", control=list(reorder=TRUE))
_____no_output_____
MIT
todo/Pattern.ipynb
lucasgiutavares/mylibrary
Sequence Mining
x <- read_baskets(con = system.file("misc", "zaki.txt", package = "arulesSequences"), info = c("sequenceID","eventID","SIZE")) as(x, "data.frame") s1 <- cspade(x, parameter = list(support = 0.4), control = list(verbose = TRUE)) as(s1, "data.frame")
parameter specification: support : 0.4 maxsize : 10 maxlen : 10 algorithmic control: bfstype : FALSE verbose : TRUE summary : FALSE tidLists : FALSE preprocessing ... 1 partition(s), 0 MB [0.046s] mining transactions ... 0 MB [0.032s] reading sequences ... [0.027s] total elapsed time: 0.105s
MIT
todo/Pattern.ipynb
lucasgiutavares/mylibrary
Load data
df = pd.DataFrame({ 'x': [4.5, 4.9, 5.0, 4.8, 5.8, 5.6, 5.7, 5.8], 'y': [35, 38, 45, 49, 59, 65, 73, 82], 'z': [0, 0, 0, 0, 1, 1, 1, 1] }) df plt.scatter(df['x'], df['y'], c=df['z'])
_____no_output_____
MIT
docs/!ml/notebooks/Perceptron.ipynb
a-mt/dev-roadmap
Train model
def fit(X, y, max_epochs=500): """ X : numpy 2D array. Each row corresponds to one training example. y : numpy 1D array. Label (0 or 1) of each example. """ n = X.shape[1] # Initialize weights weights = np.zeros((n, )) bias = 0.0 for _ in range(max_epochs): errors = 0 # Loop through the examples for i, xi in enumerate(X): predict_y = 1 if xi.dot(weights) + bias >= 0 else 0 error = y[i] - predict_y # Update weights if error != 0: weights += error * xi bias += error errors += 1 # We converged if errors == 0: break return (weights, bias) X = df.drop('z', axis=1).values y = df['z'].values weights, bias = fit(X, y) weights, bias
_____no_output_____
MIT
docs/!ml/notebooks/Perceptron.ipynb
a-mt/dev-roadmap
Plot predictions
def plot_decision_boundary(): # Draw points plt.scatter(X[:,0], X[:,1], c=y) a = -weights[0]/weights[1] b = -bias/weights[1] # Draw hyperplane with margin _X = np.arange(X[:,0].min(), X[:,0].max()+1, .1) _Y = _X * a + b plt.plot(_X, _Y) plot_decision_boundary() def plot_contour(): # Draw points plt.scatter(X[:,0], X[:,1], c=y) x_min, x_max = plt.gca().get_xlim() y_min, y_max = plt.gca().get_ylim() # Draw contour xx, yy = np.meshgrid(np.arange(x_min, x_max+.1, .1), np.arange(y_min, y_max+.1, .1)) _X = np.c_[xx.ravel(), yy.ravel()] Z = np.sign(_X.dot(weights) + bias) \ .reshape(xx.shape) plt.contourf(xx, yy, Z, cmap=plt.cm.Pastel1, alpha=0.3) plot_contour()
_____no_output_____
MIT
docs/!ml/notebooks/Perceptron.ipynb
a-mt/dev-roadmap
Compare with logistic regression
from sklearn.linear_model import LogisticRegression model = LogisticRegression(C=1e20, solver='liblinear', random_state=0) model.fit(X, y) weights = model.coef_[0] bias = model.intercept_[0] plot_decision_boundary()
_____no_output_____
MIT
docs/!ml/notebooks/Perceptron.ipynb
a-mt/dev-roadmap
Compare with SVM
from sklearn import svm model = svm.SVC(kernel='linear', C=1.0) model.fit(X, y) weights = model.coef_[0] bias = model.intercept_[0] plot_decision_boundary()
_____no_output_____
MIT
docs/!ml/notebooks/Perceptron.ipynb
a-mt/dev-roadmap
Gradient Boosting
from sklearn.ensemble import GradientBoostingClassifier from sklearn.datasets import load_breast_cancer from sklearn.model_selection import train_test_split cancer = load_breast_cancer() X_train, X_test, y_train, y_test = train_test_split( cancer.data, cancer.target, random_state=0) gbrt = GradientBoostingClassifier(random_state=0) gbrt.fit(X_train, y_train) print("accuracy on training set: %f" % gbrt.score(X_train, y_train)) print("accuracy on test set: %f" % gbrt.score(X_test, y_test)) gbrt = GradientBoostingClassifier(random_state=0, max_depth=1) gbrt.fit(X_train, y_train) print("accuracy on training set: %f" % gbrt.score(X_train, y_train)) print("accuracy on test set: %f" % gbrt.score(X_test, y_test)) gbrt = GradientBoostingClassifier(random_state=0, learning_rate=0.01) gbrt.fit(X_train, y_train) print("accuracy on training set: %f" % gbrt.score(X_train, y_train)) print("accuracy on test set: %f" % gbrt.score(X_test, y_test)) gbrt = GradientBoostingClassifier(random_state=0, max_depth=1) gbrt.fit(X_train, y_train) plt.barh(range(cancer.data.shape[1]), gbrt.feature_importances_) plt.yticks(range(cancer.data.shape[1]), cancer.feature_names); ax = plt.gca() ax.set_position([0.4, .2, .9, .9]) from xgboost import XGBClassifier xgb = XGBClassifier() xgb.fit(X_train, y_train) print("accuracy on training set: %f" % xgb.score(X_train, y_train)) print("accuracy on test set: %f" % xgb.score(X_test, y_test)) from xgboost import XGBClassifier xgb = XGBClassifier(n_estimators=1000) xgb.fit(X_train, y_train) print("accuracy on training set: %f" % xgb.score(X_train, y_train)) print("accuracy on test set: %f" % xgb.score(X_test, y_test))
_____no_output_____
MIT
notebooks/extra - Gradient Boosting.ipynb
lampsonnguyen/ml-training-advance
import numpy as np
import matplotlib.pyplot as plt
_____no_output_____
MIT
delft course dr weijermars/stress_tensor.ipynb
rksin8/reservoir-geomechanics
Introduction to vectors  Plot the vector with components (2, 4, 4). Another vector has components (1, 2, 3). Find the direction cosines of each vector, the angles each vector makes with the three axes, and the angle between the two vectors!
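For reference (standard definitions, not specific to this notebook): for a vector $\mathbf{v}=(v_x, v_y, v_z)$ the direction cosines are $$l=\cos\alpha=\frac{v_x}{|\mathbf{v}|}, \quad m=\cos\beta=\frac{v_y}{|\mathbf{v}|}, \quad n=\cos\gamma=\frac{v_z}{|\mathbf{v}|}, \qquad l^2+m^2+n^2=1$$ and the angle $\theta$ between two vectors A and B follows from $$\cos\theta = l_A l_B + m_A m_B + n_A n_B$$ These are exactly the quantities evaluated in the code cell below.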
from mpl_toolkits.mplot3d import axes3d

X = np.array((0, 0))
Y = np.array((0, 0))
Z = np.array((0, 0))
U = np.array((2, 1))
V = np.array((4, 2))
W = np.array((4, 3))

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.quiver(X, Y, Z, U, V, W)
ax.set_xlim([-4, 4])
ax.set_ylim([-4, 4])
ax.set_zlim([-4, 4])

# vector A and B
A_mag = np.sqrt(((U[0] - X[0])**2) + ((V[0] - Y[0])**2) + ((W[0] - Z[0])**2))
print('Magnitude of vector A:', A_mag, 'units')
B_mag = np.sqrt(((U[1] - X[1])**2) + ((V[1] - Y[1])**2) + ((W[1] - Z[1])**2))
print('Magnitude of vector B:', B_mag, 'units')

# direction cosines
l_A = (U[0] - X[0]) / A_mag
m_A = (V[0] - Y[0]) / A_mag
n_A = (W[0] - Z[0]) / A_mag
print('Direction cosine to x axis (cos alpha):', l_A, "to y axis (cos beta):", m_A, "to z axis (cos gamma):", n_A)
print('Pythagorean Sum of direction cosines of vector A:', l_A**2 + m_A**2 + n_A**2, "and must be equals to 1")

l_B = (U[1] - X[1]) / B_mag
m_B = (V[1] - Y[1]) / B_mag
n_B = (W[1] - Z[1]) / B_mag
print('Direction cosine to x axis (cos alpha):', l_B, "to y axis (cos beta):", m_B, "to z axis (cos gamma):", n_B)
print('Pythagorean Sum of direction cosines of vector B:', l_B**2 + m_B**2 + n_B**2, "and must be equals to 1")

# angles
alpha_A = np.rad2deg(np.arccos(l_A))
beta_A = np.rad2deg(np.arccos(m_A))
gamma_A = np.rad2deg(np.arccos(n_A))
print('Angle to x axis (alpha):', alpha_A, "to y axis (beta):", beta_A, "to z axis (gamma):", gamma_A)

alpha_B = np.rad2deg(np.arccos(l_B))
beta_B = np.rad2deg(np.arccos(m_B))
gamma_B = np.rad2deg(np.arccos(n_B))
print('Angle to x axis (alpha):', alpha_B, "to y axis (beta):", beta_B, "to z axis (gamma):", gamma_B)

# angle between two vectors
cosine_angle = (l_A * l_B) + (m_A * m_B) + (n_A * n_B)
angle = np.rad2deg(np.arccos(cosine_angle))
print('Angle between vector A and B:', angle, 'degrees')
Angle between vector A and B: 11.490459903731518 degrees
MIT
delft course dr weijermars/stress_tensor.ipynb
rksin8/reservoir-geomechanics
Exercise 10-3. Effective, Normal, and Shear Stress on a Plane Consider a plane that makes an angle 60 degrees with $\sigma_1$ and 60 degrees with $\sigma_3$. The principal stresses are: -600, -400, -200 MPa. Calculate:* Total effective stress* Normal stress* Shear stress
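For reference, the standard relations evaluated in the code below, with $l, m, n$ the direction cosines of the plane's normal relative to the principal axes: $$\sigma_{eff}^2 = \sigma_1^2 l^2 + \sigma_2^2 m^2 + \sigma_3^2 n^2, \qquad \sigma_n = \sigma_1 l^2 + \sigma_2 m^2 + \sigma_3 n^2, \qquad \tau = \sqrt{\sigma_{eff}^2 - \sigma_n^2}$$ The missing cosine $m$ follows from $l^2+m^2+n^2=1$, which also gives the angle to $\sigma_2$.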
# principal stresses
sigma_1 = -600; sigma_2 = -400; sigma_3 = -200

# calculate the angle of plane to second principal stress sigma 2
# using pythagorean
alpha = 60; gamma = 60

l = np.cos(np.deg2rad(alpha))
n = np.cos(np.deg2rad(gamma))
m = np.sqrt(1 - l**2 - n**2)
beta = np.rad2deg(np.arccos(m))
print("The second principal stress sigma 2 makes angle:", beta, "degrees to the plane")

# effective stress
sigma_eff = np.sqrt(((sigma_1**2) * (l**2)) + ((sigma_2**2) * (m**2)) + ((sigma_3**2) * (n**2)))
print("The effective stress is:", -sigma_eff, "MPa (minus because it's compressive)")

# normal stress
sigma_normal = (sigma_1 * (l**2)) + (sigma_2 * (m**2)) + (sigma_3 * (n**2))
print("The normal stress is:", sigma_normal, "MPa")

# shear stress
sigma_shear = np.sqrt((sigma_eff**2) - (sigma_normal**2))
print("The shear stress is:", sigma_shear, "MPa")
The second principal stress sigma 2 makes angle: 45.000000000000014 degrees to the plane The effective stress is: -424.26406871192853 MPa (minus because it's compressive) The normal stress is: -400.0 MPa The shear stress is: 141.4213562373095 MPa
MIT
delft course dr weijermars/stress_tensor.ipynb
rksin8/reservoir-geomechanics
Stress Tensor Components
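For reference (standard notation, matching the layout sketched in the code below), the stress tensor collects one normal and two shear components for each face of an infinitesimal cube: $$\sigma_{ij}=\begin{bmatrix} \sigma_{xx} & \sigma_{xy} & \sigma_{xz} \\ \sigma_{yx} & \sigma_{yy} & \sigma_{yz} \\ \sigma_{zx} & \sigma_{zy} & \sigma_{zz} \end{bmatrix}$$ Moment equilibrium makes the tensor symmetric ($\sigma_{ij}=\sigma_{ji}$), so only six of the nine components are independent.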
# General layout of the stress tensor (symbolic, for illustration only --
# sigma_xx, sigma_xy, ... are not defined as Python variables here):
# stress_tensor = [[sigma_xx, sigma_xy, sigma_xz],
#                  [sigma_yx, sigma_yy, sigma_yz],
#                  [sigma_zx, sigma_zy, sigma_zz]]

import numpy as np
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt

# corner points of the cube
points = np.array([[-5, -5, -5],
                   [5, -5, -5],
                   [5, 5, -5],
                   [-5, 5, -5],
                   [-5, -5, 5],
                   [5, -5, 5],
                   [5, 5, 5],
                   [-5, 5, 5]])

# vector
a = np.array((0, 0))
b = np.array((0, 0))
c = np.array((0, 0))
u = np.array((0, -4))
v = np.array((5, 0))
w = np.array((0, -4))

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.quiver(a, b, c, u, v, w, color='black')
ax.set_xlim([-5, 5])
ax.set_ylim([-5, 5])
ax.set_zlim([-5, 5])

r = [-5, 5]
X, Y = np.meshgrid(r, r)

one = np.array([5, 5, 5, 5])
one = one.reshape(2, 2)

ax.plot_wireframe(X, Y, one, alpha=0.5)
ax.plot_wireframe(X, Y, -one, alpha=0.5)
ax.plot_wireframe(X, -one, Y, alpha=0.5)
ax.plot_wireframe(X, one, Y, alpha=0.5)
ax.plot_wireframe(one, X, Y, alpha=0.5)
ax.plot_wireframe(-one, X, Y, alpha=0.5)

ax.scatter3D(points[:, 0], points[:, 1], points[:, 2])

ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.show()

np.ones(4)
_____no_output_____
MIT
delft course dr weijermars/stress_tensor.ipynb
rksin8/reservoir-geomechanics
Exercise 10-7 Total Stress, Deviatoric Stress, Effective Stress, Cauchy Summation  $$\sigma_{ij}=\tau_{ij}+P_{ij}$$ $$P_{ij}=P \cdot \delta_{ij}$$ Pressure is: $P=|\sigma_{mean}|=|\frac{\sigma_{xx}+\sigma_{yy}+\sigma_{zz}}{3}|$. The Kronecker delta is: $\delta_{ij}=\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$, so the pressure tensor is $P_{ij}=P \cdot \delta_{ij}$. Overall, the total stress is: $\sigma_{ij}=\begin{bmatrix} P+\tau_{xx} & \tau_{xy} & \tau_{xz} \\ \tau_{yx} & P+\tau_{yy} & \tau_{yz} \\ \tau_{zx} & \tau_{zy} & P+\tau_{zz} \end{bmatrix}$. Cauchy's summation gives the components of effective stress on a plane with direction cosines $l, m, n$: $$\sigma_{eff}=\begin{bmatrix} \sigma_x \\ \sigma_y \\ \sigma_z \end{bmatrix}=\begin{bmatrix} \sigma_{xx} & \sigma_{xy} & \sigma_{xz} \\ \sigma_{yx} & \sigma_{yy} & \sigma_{yz} \\ \sigma_{zx} & \sigma_{zy} & \sigma_{zz} \end{bmatrix} \cdot \begin{bmatrix} l \\ m \\ n \end{bmatrix}$$ **Known**: direction cosines of plane ABC, total stress tensor. **Task**: * Determine the deviatoric stress tensor * Calculate the components of effective stress on plane ABC (use Cauchy's summation) * Calculate total effective stress, total normal stress, total shear stress
# known
l, m, n = 0.7, 0.5, 0.5                  # direction cosines
alpha, beta, gamma = 45, 60, 60          # angles
stress_ij = np.array([[-40, -40, -35],
                      [-40, 45, -50],
                      [-35, -50, -20]])  # total stress tensor

# calculate pressure
P = np.abs(np.mean(np.array([(stress_ij[0][0]), (stress_ij[1][1]), (stress_ij[2][2])])))
print("Pressure:", P, "MPa")

# pressure TENSOR
kronecker = np.array([[1, 0, 0],
                      [0, 1, 0],
                      [0, 0, 1]])
P_ij = P * kronecker
print('Pressure tensor:')
print(P_ij)

# deviatoric stress TENSOR
tau_ij = stress_ij - P_ij
print('Deviatoric stress tensor:')
print(tau_ij)

# direction cosines VECTOR
lmn = np.array([[l], [m], [n]])

# effective stress VECTOR
stress_eff = np.dot(stress_ij, lmn)
stress_eff_1 = stress_eff[0][0]
stress_eff_2 = stress_eff[1][0]
stress_eff_3 = stress_eff[2][0]
print('Effective stress vector:')
print(stress_eff)
print('X component of effective stress:', stress_eff_1, 'MPa')
print('Y component of effective stress:', stress_eff_2, 'MPa')
print('Z component of effective stress:', stress_eff_3, 'MPa')

# total / magnitude of effective stress, is SCALAR
sigma_eff = np.sqrt((stress_eff_1**2) + (stress_eff_2**2) + (stress_eff_3**2))
print("The total effective stress is:", -sigma_eff, "MPa")

# principal stresses
sigma_1 = stress_eff_1 / l
sigma_2 = stress_eff_2 / m
sigma_3 = stress_eff_3 / n
print('X component of principal stress:', sigma_1, 'MPa')
print('Y component of principal stress:', sigma_2, 'MPa')
print('Z component of principal stress:', sigma_3, 'MPa')

# total normal stress
sigma_normal = (sigma_1 * (l**2)) + (sigma_2 * (m**2)) + (sigma_3 * (n**2))
print("The normal stress is:", sigma_normal, "MPa")
print("Because normal stress", sigma_normal, "MPa nearly equals to sigma 1", sigma_1, "MPa, the plane is nearly normal to sigma 1")

# total shear stress
sigma_shear = np.sqrt((sigma_eff**2) - (sigma_normal**2))
print("The shear stress is:", sigma_shear, "MPa")
The total effective stress is: -93.59887819840577 MPa X component of principal stress: -93.57142857142858 MPa Y component of principal stress: -61.0 MPa Z component of principal stress: -119.0 MPa The normal stress is: -90.85 MPa Because normal stress -90.85 MPa nearly equals to sigma 1 -93.57142857142858 MPa, the plane is nearly normal to sigma 1 The shear stress is: 22.517271149053567 MPa
MIT
delft course dr weijermars/stress_tensor.ipynb
rksin8/reservoir-geomechanics
Exercise 10-8 Transforming a Stress Tensor (containing all 9 normal and shear stress components) into the Principal Stress Tensor using the Cubic (Characteristic) Equation
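For reference, the principal stresses are the roots of the characteristic (cubic) equation of the stress tensor, which is what the coefficients in the code below implement: $$\sigma^3 - I_1\sigma^2 + I_2\sigma - I_3 = 0$$ with invariants $I_1=\sigma_{xx}+\sigma_{yy}+\sigma_{zz}$, $I_2=\sigma_{xx}\sigma_{yy}+\sigma_{yy}\sigma_{zz}+\sigma_{zz}\sigma_{xx}-\sigma_{xy}^2-\sigma_{yz}^2-\sigma_{zx}^2$ and $I_3=\det(\sigma_{ij})$.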
sigma_ij = np.array([[0, 0, 100],
                     [0, 0, 0],
                     [-100, 0, 0]])  # stress tensor

# cubic equation
coeff3 = 1
coeff2 = -((sigma_ij[0][0] + sigma_ij[1][1] + sigma_ij[2][2]))
coeff1 = (sigma_ij[0][0] * sigma_ij[1][1]) + (sigma_ij[1][1] * sigma_ij[2][2]) + (sigma_ij[2][2] * sigma_ij[0][0]) - ((sigma_ij[0][1])**2) - ((sigma_ij[1][2])**2) - ((sigma_ij[2][0])**2)
coeff0 = -((sigma_ij[0][0] * sigma_ij[1][1] * sigma_ij[2][2]) + (2 * sigma_ij[0][1] * sigma_ij[1][2] * sigma_ij[2][0]) - (sigma_ij[0][0] * (sigma_ij[1][2])**2) - (sigma_ij[1][1] * (sigma_ij[2][0])**2) - (sigma_ij[2][2] * (sigma_ij[0][1])**2))

roots = np.roots([coeff3, coeff2, coeff1, coeff0])
sigma = np.sort(roots)

sigma_1 = sigma[2]
sigma_2 = sigma[1]
sigma_3 = sigma[0]
sigma_principal = np.array([[sigma_1, 0, 0],
                            [0, sigma_2, 0],
                            [0, 0, sigma_3]])
print("The principal stresses are, sigma 1:", sigma_1, "MPa, sigma 2:", sigma_2, "MPa, and sigma 3:", sigma_3, "MPa")
print("Principal stress tensor:")
print(sigma_principal)

denominator_l = (sigma_ij[0][0] * sigma_ij[2][2]) - (sigma_ij[1][1] * sigma_1) - (sigma_ij[2][2] * sigma_1) + (sigma_1)**2 - (sigma_ij[1][2])**2
denominator_m = (sigma_2 * sigma_ij[0][1]) + (sigma_ij[2][0] * sigma_ij[1][2]) - (sigma_ij[0][1] * sigma_ij[2][2])
denominator_n = (sigma_3 * sigma_ij[2][0]) + (sigma_ij[0][1] * sigma_ij[1][2]) - (sigma_ij[2][0] * sigma_ij[1][1])

denominator_l, denominator_m, denominator_n
_____no_output_____
MIT
delft course dr weijermars/stress_tensor.ipynb
rksin8/reservoir-geomechanics
***
from mpl_toolkits.mplot3d import axes3d

X = np.array((0))
Y = np.array((0))
U = np.array((0))
V = np.array((4))

fig, ax = plt.subplots()
q = ax.quiver(X, Y, U, V, units='xy', scale=1)

plt.grid()
ax.set_aspect('equal')
plt.xlim(-5, 5)
plt.ylim(-5, 5)

from mpl_toolkits.mplot3d import axes3d

X = np.array((0))
Y = np.array((0))
Z = np.array((0))
U = np.array((1))
V = np.array((1))
W = np.array((1))

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.quiver(X, Y, Z, U, V, W)
ax.set_xlim([-1, 1])
ax.set_ylim([-1, 1])
ax.set_zlim([-1, 1])

from mpl_toolkits.mplot3d import axes3d

# NOTE: v_mag was not defined in the original scratch cell;
# an example magnitude is assumed here so the cell runs
v_mag = 10
vx_mag = v_mag * l
vy_mag = v_mag * m
vz_mag = v_mag * n

x = 0; y = 0; z = 0

fig = plt.figure()
ax = fig.gca(projection='3d')
ax.quiver(x, y, z, vx_mag, vy_mag, vz_mag)
ax.set_xlim(0, 10); ax.set_ylim(0, 10); ax.set_zlim(0, 5)
_____no_output_____
MIT
delft course dr weijermars/stress_tensor.ipynb
rksin8/reservoir-geomechanics
AutoGluon Tabular with SageMaker[AutoGluon](https://github.com/awslabs/autogluon) automates machine learning tasks enabling you to easily achieve strong predictive performance in your applications. With just a few lines of code, you can train and deploy high-accuracy deep learning models on tabular, image, and text data.This notebook shows how to use AutoGluon-Tabular with Amazon SageMaker by creating custom containers. PrerequisitesIf using a SageMaker hosted notebook, select kernel `conda_mxnet_p36`.
# Make sure docker compose is set up properly for local mode
!./setup.sh

# Imports
import os
import boto3
import sagemaker
from time import sleep
from collections import Counter
import numpy as np
import pandas as pd
from sagemaker import get_execution_role, local, Model, utils, fw_utils, s3
from sagemaker.estimator import Estimator
from sagemaker.predictor import RealTimePredictor, csv_serializer, StringDeserializer
from sklearn.metrics import accuracy_score, classification_report
from IPython.core.display import display, HTML
from IPython.core.interactiveshell import InteractiveShell

# Print settings
InteractiveShell.ast_node_interactivity = "all"
pd.set_option('display.max_columns', 500)
pd.set_option('display.max_rows', 10)

# Account/s3 setup
session = sagemaker.Session()
local_session = local.LocalSession()
bucket = session.default_bucket()
prefix = 'sagemaker/autogluon-tabular'
region = session.boto_region_name
role = get_execution_role()
client = session.boto_session.client(
    "sts", region_name=region, endpoint_url=utils.sts_regional_endpoint(region)
)
account = client.get_caller_identity()['Account']
ecr_uri_prefix = utils.get_ecr_image_uri_prefix(account, region)
registry_id = fw_utils._registry_id(region, 'mxnet', 'py3', account, '1.6.0')
registry_uri = utils.get_ecr_image_uri_prefix(registry_id, region)
_____no_output_____
Apache-2.0
advanced_functionality/autogluon-tabular/AutoGluon_Tabular_SageMaker.ipynb
phiamazon/amazon-sagemaker-examples
Build docker images  First, build the autogluon package to copy into the docker image.
if not os.path.exists('package'):
    !pip install PrettyTable -t package
    !pip install --upgrade boto3 -t package
    !pip install bokeh -t package
    !pip install --upgrade matplotlib -t package
    !pip install autogluon -t package
_____no_output_____
Apache-2.0
advanced_functionality/autogluon-tabular/AutoGluon_Tabular_SageMaker.ipynb
phiamazon/amazon-sagemaker-examples
Now build the training/inference image and push to ECR
training_algorithm_name = 'autogluon-sagemaker-training'
inference_algorithm_name = 'autogluon-sagemaker-inference'

!./container-training/build_push_training.sh {account} {region} {training_algorithm_name} {ecr_uri_prefix} {registry_id} {registry_uri}
!./container-inference/build_push_inference.sh {account} {region} {inference_algorithm_name} {ecr_uri_prefix} {registry_id} {registry_uri}
_____no_output_____
Apache-2.0
advanced_functionality/autogluon-tabular/AutoGluon_Tabular_SageMaker.ipynb
phiamazon/amazon-sagemaker-examples
Get the data In this example we'll use the direct-marketing dataset to build a binary classification model that predicts whether customers will accept or decline a marketing offer. First we'll download the data and split it into train and test sets. AutoGluon does not require a separate validation set (it uses bagged k-fold cross-validation).
# Download and unzip the data
!aws s3 cp --region {region} s3://sagemaker-sample-data-{region}/autopilot/direct_marketing/bank-additional.zip .
!unzip -qq -o bank-additional.zip
!rm bank-additional.zip

local_data_path = './bank-additional/bank-additional-full.csv'
data = pd.read_csv(local_data_path)

# Split train/test data
train = data.sample(frac=0.7, random_state=42)
test = data.drop(train.index)

# Split test X/y
label = 'y'
y_test = test[label]
X_test = test.drop(columns=[label])
_____no_output_____
Apache-2.0
advanced_functionality/autogluon-tabular/AutoGluon_Tabular_SageMaker.ipynb
phiamazon/amazon-sagemaker-examples
Check the data
train.head(3)
train.shape

test.head(3)
test.shape

X_test.head(3)
X_test.shape
_____no_output_____
Apache-2.0
advanced_functionality/autogluon-tabular/AutoGluon_Tabular_SageMaker.ipynb
phiamazon/amazon-sagemaker-examples
Upload the data to s3
train_file = 'train.csv'
train.to_csv(train_file, index=False)
train_s3_path = session.upload_data(train_file, key_prefix='{}/data'.format(prefix))

test_file = 'test.csv'
test.to_csv(test_file, index=False)
test_s3_path = session.upload_data(test_file, key_prefix='{}/data'.format(prefix))

X_test_file = 'X_test.csv'
X_test.to_csv(X_test_file, index=False)
X_test_s3_path = session.upload_data(X_test_file, key_prefix='{}/data'.format(prefix))
_____no_output_____
Apache-2.0
advanced_functionality/autogluon-tabular/AutoGluon_Tabular_SageMaker.ipynb
phiamazon/amazon-sagemaker-examples
Hyperparameter Selection  The minimum required setting for training is just a target label, `fit_args['label']`. Additional optional hyperparameters can be passed to the `autogluon.task.TabularPrediction.fit` function via `fit_args`. Below is a more in-depth example of AutoGluon-Tabular hyperparameters from the example [Predicting Columns in a Table - In Depth](https://autogluon.mxnet.io/tutorials/tabular_prediction/tabular-indepth.html#model-ensembling-with-stacking-bagging). Please see [fit parameters](https://autogluon.mxnet.io/api/autogluon.task.html?highlight=eval_metric#autogluon.task.TabularPrediction.fit) for further information. Note that in order for hyperparameter ranges to work in SageMaker, values passed to `fit_args['hyperparameters']` must be represented as strings.
```python
nn_options = {
    'num_epochs': "10",
    'learning_rate': "ag.space.Real(1e-4, 1e-2, default=5e-4, log=True)",
    'activation': "ag.space.Categorical('relu', 'softrelu', 'tanh')",
    'layers': "ag.space.Categorical([100],[1000],[200,100],[300,200,100])",
    'dropout_prob': "ag.space.Real(0.0, 0.5, default=0.1)"}

gbm_options = {
    'num_boost_round': "100",
    'num_leaves': "ag.space.Int(lower=26, upper=66, default=36)"}

model_hps = {'NN': nn_options, 'GBM': gbm_options}

fit_args = {
  'label': 'y',
  'presets': ['best_quality', 'optimize_for_deployment'],
  'time_limits': 60*10,
  'hyperparameters': model_hps,
  'hyperparameter_tune': True,
  'search_strategy': 'skopt'}

hyperparameters = {
  'fit_args': fit_args,
  'feature_importance': True}
```
**Note:** Your hyperparameter choices may affect the size of the model package, which could result in additional time taken to upload your model and complete training. Including `'optimize_for_deployment'` in the list of `fit_args['presets']` is recommended to greatly reduce upload times.
# Define required label and optional additional parameters
fit_args = {
  'label': 'y',
  # Adding 'best_quality' to presets list will result in better performance (but longer runtime)
  'presets': ['optimize_for_deployment'],
}

# Pass fit_args to SageMaker estimator hyperparameters
hyperparameters = {
  'fit_args': fit_args,
  'feature_importance': True
}
_____no_output_____
Apache-2.0
advanced_functionality/autogluon-tabular/AutoGluon_Tabular_SageMaker.ipynb
phiamazon/amazon-sagemaker-examples
Train  For local training, set `train_instance_type` to `local`. For non-local training, the recommended instance type is `ml.m5.2xlarge`. **Note:** Depending on how many underlying models are trained, `train_volume_size` may need to be increased so that they all fit on disk.
%%time

instance_type = 'ml.m5.2xlarge'
#instance_type = 'local'

ecr_image = f'{ecr_uri_prefix}/{training_algorithm_name}:latest'

estimator = Estimator(image_name=ecr_image,
                      role=role,
                      train_instance_count=1,
                      train_instance_type=instance_type,
                      hyperparameters=hyperparameters,
                      train_volume_size=100)

# Set inputs. Test data is optional, but requires a label column.
inputs = {'training': train_s3_path, 'testing': test_s3_path}

estimator.fit(inputs)
_____no_output_____
Apache-2.0
advanced_functionality/autogluon-tabular/AutoGluon_Tabular_SageMaker.ipynb
phiamazon/amazon-sagemaker-examples
Create Model
# Create predictor object
class AutoGluonTabularPredictor(RealTimePredictor):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, content_type='text/csv',
                         serializer=csv_serializer,
                         deserializer=StringDeserializer(), **kwargs)

ecr_image = f'{ecr_uri_prefix}/{inference_algorithm_name}:latest'

if instance_type == 'local':
    model = estimator.create_model(image=ecr_image, role=role)
else:
    model_uri = os.path.join(estimator.output_path, estimator._current_job_name, "output", "model.tar.gz")
    model = Model(model_uri, ecr_image, role=role, sagemaker_session=session,
                  predictor_cls=AutoGluonTabularPredictor)
_____no_output_____
Apache-2.0
advanced_functionality/autogluon-tabular/AutoGluon_Tabular_SageMaker.ipynb
phiamazon/amazon-sagemaker-examples
Batch Transform  For local mode, either `s3://<bucket>/<prefix>/output/` or `file://<local_path>` can be used as the output path. By including the label column in the test data, you can also evaluate prediction performance (in this case, passing `test_s3_path` instead of `X_test_s3_path`).
output_path = f's3://{bucket}/{prefix}/output/'
# output_path = f'file://{os.getcwd()}'

transformer = model.transformer(instance_count=1,
                                instance_type=instance_type,
                                strategy='MultiRecord',
                                max_payload=6,
                                max_concurrent_transforms=1,
                                output_path=output_path)

transformer.transform(test_s3_path, content_type='text/csv', split_type='Line')
transformer.wait()
_____no_output_____
Apache-2.0
advanced_functionality/autogluon-tabular/AutoGluon_Tabular_SageMaker.ipynb
phiamazon/amazon-sagemaker-examples
Endpoint Deploy remote or local endpoint
instance_type = 'ml.m5.2xlarge'
#instance_type = 'local'

predictor = model.deploy(initial_instance_count=1,
                         instance_type=instance_type)
_____no_output_____
Apache-2.0
advanced_functionality/autogluon-tabular/AutoGluon_Tabular_SageMaker.ipynb
phiamazon/amazon-sagemaker-examples
Attach to endpoint (or reattach if kernel was restarted)
# Select standard or local session based on instance_type
if instance_type == 'local':
    sess = local_session
else:
    sess = session

# Attach to endpoint
predictor = AutoGluonTabularPredictor(predictor.endpoint, sagemaker_session=sess)
_____no_output_____
Apache-2.0
advanced_functionality/autogluon-tabular/AutoGluon_Tabular_SageMaker.ipynb
phiamazon/amazon-sagemaker-examples
Predict on unlabeled test data
results = predictor.predict(X_test.to_csv(index=False)).splitlines()

# Check output
print(Counter(results))
_____no_output_____
Apache-2.0
advanced_functionality/autogluon-tabular/AutoGluon_Tabular_SageMaker.ipynb
phiamazon/amazon-sagemaker-examples
Predict on data that includes label column Prediction performance metrics will be printed to endpoint logs.
results = predictor.predict(test.to_csv(index=False)).splitlines()

# Check output
print(Counter(results))
_____no_output_____
Apache-2.0
advanced_functionality/autogluon-tabular/AutoGluon_Tabular_SageMaker.ipynb
phiamazon/amazon-sagemaker-examples
Check that classification performance metrics match evaluation printed to endpoint logs as expected
y_results = np.array(results)

print("accuracy: {}".format(accuracy_score(y_true=y_test, y_pred=y_results)))
print(classification_report(y_true=y_test, y_pred=y_results, digits=6))
_____no_output_____
Apache-2.0
advanced_functionality/autogluon-tabular/AutoGluon_Tabular_SageMaker.ipynb
phiamazon/amazon-sagemaker-examples
Clean up endpoint
predictor.delete_endpoint()
_____no_output_____
Apache-2.0
advanced_functionality/autogluon-tabular/AutoGluon_Tabular_SageMaker.ipynb
phiamazon/amazon-sagemaker-examples
Based on issue [73](https://github.com/taruma/hidrokit/issues/73): **request: process files from BMKG data**. Description: - process the excel file obtained from the BMKG online data service so it is ready to use - check the condition of the data. Expected functions: __General__ - Check whether the data is complete or not; if not, which variables and on which dates? - Check whether there are "no data / no measurement" values (9999) or "not measured" values (8888); if so, which variables and on which dates? - Display the "slices" of rows that have no data / were not measured. DATASET
# ACCESS GOOGLE DRIVE
from google.colab import drive
drive.mount('/content/gdrive')

# DRIVE PATHS
DRIVE_DROP_PATH = '/content/gdrive/My Drive/Colab Notebooks/_dropbox'
DRIVE_DATASET_PATH = '/content/gdrive/My Drive/Colab Notebooks/_dataset/uma_pamarayan'
DATASET_PATH = DRIVE_DATASET_PATH + '/klimatologi_geofisika_tangerang_1998_2009.xlsx'
_____no_output_____
MIT
hidrokit/contrib_taruma/ipynb/taruma_hk73_bmkg.ipynb
hidrokit/manual
FUNCTIONS
import pandas as pd
import numpy as np
from operator import itemgetter
from itertools import groupby

def _read_bmkg(io):
    return pd.read_excel(
        io, skiprows=8, skipfooter=16, header=0, index_col=0,
        parse_dates=True,
        date_parser=lambda x: pd.to_datetime(x, format='%d-%m-%Y')
    )

def _have_nan(dataset):
    if dataset.isna().any().any():
        return True
    else:
        return False

def _get_index1D(array1D_bool):
    return np.argwhere(array1D_bool).reshape(-1,)

def _get_nan(dataset):
    nan = {}
    for col in dataset.columns:
        nan[col] = _get_index1D(dataset[col].isna().values).tolist()
    return nan

def _get_missing(dataset):
    missing = {}
    for col in dataset.columns:
        masking = (dataset[col] == 8888) | (dataset[col] == 9999)
        missing[col] = _get_index1D(masking.values)
    return missing

def _check_nan(dataset):
    if _have_nan(dataset):
        return _get_nan(dataset)
    else:
        return None

def _get_nan_columns(dataset):
    return dataset.columns[dataset.isna().any()].tolist()

def _group_as_list(array):
    # based on https://stackoverflow.com/a/15276206
    group_list = []
    for _, g in groupby(enumerate(array), lambda x: x[0]-x[1]):
        single_list = sorted(list(map(itemgetter(1), g)))
        group_list.append(single_list)
    return group_list

def _group_as_index(
    group_list, index=None, date_format='%Y%m%d',
    format_date='{}-{}'
):
    group_index = []
    date_index = isinstance(index, pd.DatetimeIndex)
    for item in group_list:
        if len(item) == 1:
            if date_index:
                group_index.append(index[item[0]].strftime(date_format))
            else:
                group_index.append(index[item[0]])
        else:
            if date_index:
                group_index.append(
                    format_date.format(
                        index[item[0]].strftime(date_format),
                        index[item[-1]].strftime(date_format)
                    )
                )
            else:
                group_index.append(
                    format_date.format(
                        index[item[0]],
                        index[item[-1]]
                    )
                )
    return group_index
_____no_output_____
MIT
hidrokit/contrib_taruma/ipynb/taruma_hk73_bmkg.ipynb
hidrokit/manual
USAGE  Function `_read_bmkg`. Purpose: import the BMKG excel file into a dataframe.
dataset = _read_bmkg(DATASET_PATH)
dataset.head()
dataset.tail()
_____no_output_____
MIT
hidrokit/contrib_taruma/ipynb/taruma_hk73_bmkg.ipynb
hidrokit/manual
Function `_have_nan()`. Purpose: check whether the table contains any missing values (np.nan).
_have_nan(dataset)
_____no_output_____
MIT
hidrokit/contrib_taruma/ipynb/taruma_hk73_bmkg.ipynb
hidrokit/manual
Function `_get_index1D()`. Purpose: get the indices of missing data within a single array.
_get_index1D(dataset['RH_avg'].isna().values)
_____no_output_____
MIT
hidrokit/contrib_taruma/ipynb/taruma_hk73_bmkg.ipynb
hidrokit/manual
Function `_get_nan()`. Purpose: get the indices of missing data for every column, returned as a `dictionary`.
_get_nan(dataset).keys()
print(_get_nan(dataset)['RH_avg'])
[852, 1037, 1038, 1039, 1040, 1041, 1042, 1043, 1044, 1045, 1046, 1047, 1048, 1049, 1050, 1051, 1052, 1053, 1054, 1055, 1056, 1057, 1058, 1059, 1060, 1061, 1062, 1063, 1064, 1065, 1066, 1067, 1220, 1221, 1222, 1223, 1224, 1628, 1629, 1697, 2657]
MIT
hidrokit/contrib_taruma/ipynb/taruma_hk73_bmkg.ipynb
hidrokit/manual
Function `_get_nan_columns()`. Purpose: get the names of the columns that contain missing values (`NaN`).
_get_nan_columns(dataset)
_____no_output_____
MIT
hidrokit/contrib_taruma/ipynb/taruma_hk73_bmkg.ipynb
hidrokit/manual
Function `_check_nan()`. Purpose: a combination of `_have_nan()` and `_get_nan()`. Checks whether the dataset contains `NaN`; if it does, returns the result of `_get_nan()`, otherwise returns `None`.
_check_nan(dataset).items()

# If the dataset has no NaN values
print(_check_nan(dataset.drop(_get_nan_columns(dataset), axis=1)))
None
MIT
hidrokit/contrib_taruma/ipynb/taruma_hk73_bmkg.ipynb
hidrokit/manual
Function `_group_as_list()`. Purpose: group values of an array that are consecutive (continuous runs) into separate lists. Reference: https://stackoverflow.com/a/15276206 (modified for Python 3.x and readability).
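To illustrate the trick behind `_group_as_list()` (a minimal sketch on a toy list, not part of the original notebook): for consecutive integers the difference `position - value` is constant, so grouping `enumerate(array)` by `x[0] - x[1]` splits the array into runs of consecutive values.
```python
from itertools import groupby
from operator import itemgetter

arr = [1, 2, 3, 7, 8, 12]
# the key x[0] - x[1] is constant within each consecutive run
runs = [list(map(itemgetter(1), g))
        for _, g in groupby(enumerate(arr), lambda x: x[0] - x[1])]
print(runs)  # [[1, 2, 3], [7, 8], [12]]
```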
missing_dict = _get_nan(dataset)
missing_RH_avg = missing_dict['RH_avg']

print(missing_RH_avg)
print(_group_as_list(missing_RH_avg))
[[852], [1037, 1038, 1039, 1040, 1041, 1042, 1043, 1044, 1045, 1046, 1047, 1048, 1049, 1050, 1051, 1052, 1053, 1054, 1055, 1056, 1057, 1058, 1059, 1060, 1061, 1062, 1063, 1064, 1065, 1066, 1067], [1220, 1221, 1222, 1223, 1224], [1628, 1629], [1697], [2657]]
MIT
hidrokit/contrib_taruma/ipynb/taruma_hk73_bmkg.ipynb
hidrokit/manual
Function `_group_as_index()`. Purpose: convert the grouped lists into the dataset's index type (in this case dates rather than integer positions).
_group_as_index(_group_as_list(missing_RH_avg), index=dataset.index, date_format='%d %b %Y')
_____no_output_____
MIT
hidrokit/contrib_taruma/ipynb/taruma_hk73_bmkg.ipynb
hidrokit/manual
Function `_get_missing()`. Purpose: get the indices of unmeasured values (8888 or 9999) for every column.
_get_missing(dataset)
_____no_output_____
MIT
hidrokit/contrib_taruma/ipynb/taruma_hk73_bmkg.ipynb
hidrokit/manual
Application: displaying the problematic rows. Purpose: after obtaining the indices from `_get_missing()` or `_get_nan()`, display those slices of the dataframe.
dataset.iloc[_get_missing(dataset)['RR']]

_group_as_list(_get_missing(dataset)['RR'])

_group_as_index(_, index=dataset.index, date_format='%d %b %Y', format_date='{} sampai {}')
_____no_output_____
MIT
hidrokit/contrib_taruma/ipynb/taruma_hk73_bmkg.ipynb
hidrokit/manual
Homework - Random Walks (18 pts) Continuous random walk in three dimensionsWrite a program simulating a three-dimensional random walk in a continuous space. Let 1000 independent particles all start at random positions within a cube with corners at (0,0,0) and (1,1,1). At each time step each particle will move in a random direction by a random amount between -1 and 1 along each axis (x, y, z). 1. (3 pts) Create data structure(s) to store your simulated particle positions for each of 2000 time steps and initialize them with the particles starting positions.
import numpy as np

numTimeSteps = 2000
numParticles = 1000
positions = np.zeros( (numParticles, 3, numTimeSteps) )

# initialize starting positions on first time step
positions[:,:,0] = np.random.random( (numParticles, 3) )
_____no_output_____
Unlicense
homework/key-random_walks.ipynb
nishadalal120/NEU-365P-385L-Spring-2021
2. (3 pts) Write code to run your simulation for 2000 time steps.
for t in range(numTimeSteps-1):
    # 2 * [0 to 1] - 1 --> [-1 to 1]
    jumpsForAllParticles = 2 * np.random.random((numParticles, 3)) - 1
    positions[:,:,t+1] = positions[:,:,t] + jumpsForAllParticles

# just for fun, here's another way to run the simulation above without a loop
jumpsForAllParticlesAndAllTimeSteps = 2 * np.random.random((numParticles, 3, numTimeSteps-1)) - 1
positions[:,:,1:] = positions[:,:,0].reshape(numParticles, 3, 1) + np.cumsum(jumpsForAllParticlesAndAllTimeSteps, axis=2)
_____no_output_____
Unlicense
homework/key-random_walks.ipynb
nishadalal120/NEU-365P-385L-Spring-2021
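As an optional sanity check on the walk above (a sketch that reuses the `positions` array defined earlier; not part of the original homework key): each step along an axis is uniform on [-1, 1] with variance 1/3, so after the 1999 jumps the mean squared displacement from the start is about 3 * 1999 / 3 = 1999, i.e. an RMS displacement of roughly 44.7.
```python
# RMS displacement of all particles from their starting positions at the final time step
displacements = positions[:, :, -1] - positions[:, :, 0]
rmsDisplacement = np.sqrt((displacements**2).sum(axis=1).mean())
print(rmsDisplacement)  # expect a value near sqrt(numTimeSteps - 1) ~ 44.7
```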