Accuracy drops off sharply once k exceeds 63. This is because the dataset contains only 50 instances per class. So let's dig deeper by restricting `n_neighbors` to smaller values.
def hyperopt_train_test(params):
    clf = KNeighborsClassifier(**params)
    return cross_val_score(clf, X, y).mean()

space4knn = {
    'n_neighbors': hp.choice('n_neighbors', range(1, 50))
}

def f(params):
    acc = hyperopt_train_test(params)
    return {'loss': -acc, 'status': STATUS_OK}

trials = Trials()
best = fmin(f, space4knn, algo=tpe.suggest, max_evals=100, trials=trials)
print('best:')
print(best)

f, ax = plt.subplots(1)  # , figsize=(10,10))
xs = [t['misc']['vals']['n_neighbors'] for t in trials.trials]
ys = [-t['result']['loss'] for t in trials.trials]
ax.scatter(xs, ys, s=20, linewidth=0.01, alpha=0.5)
ax.set_title('Iris Dataset - KNN', fontsize=18)
ax.set_xlabel('n_neighbors', fontsize=12)
ax.set_ylabel('cross validation accuracy', fontsize=12)
_____no_output_____
Apache-2.0
notebooks/hyperopt_on_iris_data.ipynb
jianzhnie/AutoML-Tools
The model above does no preprocessing, so let's normalize and scale the features and see whether that helps. The code is as follows:
# Normalize and scale the features
from sklearn.preprocessing import normalize, scale

iris = datasets.load_iris()
X = iris.data
y = iris.target

def hyperopt_train_test(params):
    X_ = X[:]
    if 'normalize' in params:
        if params['normalize'] == 1:
            X_ = normalize(X_)
        del params['normalize']
    if 'scale' in params:
        if params['scale'] == 1:
            X_ = scale(X_)
        del params['scale']
    clf = KNeighborsClassifier(**params)
    return cross_val_score(clf, X_, y).mean()

space4knn = {
    'n_neighbors': hp.choice('n_neighbors', range(1, 50)),
    'scale': hp.choice('scale', [0, 1]),
    'normalize': hp.choice('normalize', [0, 1])
}

def f(params):
    acc = hyperopt_train_test(params)
    return {'loss': -acc, 'status': STATUS_OK}

trials = Trials()
best = fmin(f, space4knn, algo=tpe.suggest, max_evals=100, trials=trials)
print('best:', best)
100%|█| 100/100 [00:02<00:00, 34.37it/s, best loss: -0.98000000 best: {'n_neighbors': 3, 'normalize': 1, 'scale': 0}
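Note that `fmin` reports `hp.choice` parameters as indices into their choice lists (for example, `'normalize': 1` above refers to the second option). As a small follow-up, not part of the original notebook, hyperopt's `space_eval` helper can map those indices back to the concrete values:

from hyperopt import space_eval

# Convert choice indices in `best` back to the actual parameter values
# defined in space4knn (assumes `space4knn` and `best` from the cell above).
print(space_eval(space4knn, best))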
Apache-2.0
notebooks/hyperopt_on_iris_data.ipynb
jianzhnie/AutoML-Tools
Plotting the parameters
parameters = ['n_neighbors', 'scale', 'normalize']
cols = len(parameters)
f, axes = plt.subplots(nrows=1, ncols=cols, figsize=(15, 5))
cmap = plt.cm.jet
for i, val in enumerate(parameters):
    xs = np.array([t['misc']['vals'][val] for t in trials.trials]).ravel()
    ys = [-t['result']['loss'] for t in trials.trials]
    xs, ys = zip(*sorted(zip(xs, ys)))
    ys = np.array(ys)
    axes[i].scatter(xs, ys, s=20, linewidth=0.01, alpha=0.75, c=cmap(float(i) / len(parameters)))
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points. 'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points. 'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
Apache-2.0
notebooks/hyperopt_on_iris_data.ipynb
jianzhnie/AutoML-Tools
Support Vector Machines (SVM)

Since this is a classification task, we will use sklearn's SVC class. The code is as follows:
from sklearn.svm import SVC

def hyperopt_train_test(params):
    X_ = X[:]
    if 'normalize' in params:
        if params['normalize'] == 1:
            X_ = normalize(X_)
        del params['normalize']
    if 'scale' in params:
        if params['scale'] == 1:
            X_ = scale(X_)
        del params['scale']
    clf = SVC(**params)
    return cross_val_score(clf, X_, y).mean()

# The SVM model has two very important parameters, C and gamma. C is the penalty
# coefficient, i.e. the tolerance for error: the higher C is, the less error is
# tolerated and the easier it is to overfit; the smaller C is, the easier it is
# to underfit. If C is too large or too small, generalization suffers.
space4svm = {
    'C': hp.uniform('C', 0, 20),
    'kernel': hp.choice('kernel', ['linear', 'sigmoid', 'poly', 'rbf']),
    'gamma': hp.uniform('gamma', 0, 20),
    'scale': hp.choice('scale', [0, 1]),
    'normalize': hp.choice('normalize', [0, 1])
}

def f(params):
    acc = hyperopt_train_test(params)
    return {'loss': -acc, 'status': STATUS_OK}

trials = Trials()
best = fmin(f, space4svm, algo=tpe.suggest, max_evals=100, trials=trials)
print('best:', best)
100%|█| 100/100 [00:08<00:00, 12.02it/s, best loss: -0.98666666 best: {'C': 8.238774783515044, 'gamma': 1.1896015071446002, 'kernel': 3, 'normalize': 1, 'scale': 1}
Apache-2.0
notebooks/hyperopt_on_iris_data.ipynb
jianzhnie/AutoML-Tools
The best settings vary between runs. The original write-up reported that scaling and normalization did not help and that the best choice was a linear kernel with C ≈ 1.4169 and gamma ≈ 15.042, reaching 99.3% accuracy. The run shown above instead picked the rbf kernel (choice index 3) with C ≈ 8.24 and gamma ≈ 1.19, with both normalization and scaling enabled, for roughly 98.7% cross-validation accuracy.
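Because C and gamma both span several orders of magnitude, a common refinement (not used in the original notebook) is to search them on a log scale with `hp.loguniform` instead of `hp.uniform`. A sketch, with illustrative bounds chosen by assumption and the rest of the space unchanged:

from hyperopt import hp
import numpy as np

# Hypothetical alternative search space: C and gamma are sampled log-uniformly
# between exp(low) and exp(high); the bounds below are assumptions, not values
# taken from the original notebook.
space4svm_log = {
    'C': hp.loguniform('C', np.log(1e-3), np.log(1e3)),
    'kernel': hp.choice('kernel', ['linear', 'sigmoid', 'poly', 'rbf']),
    'gamma': hp.loguniform('gamma', np.log(1e-4), np.log(1e2)),
    'scale': hp.choice('scale', [0, 1]),
    'normalize': hp.choice('normalize', [0, 1]),
}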
parameters = ['C', 'kernel', 'gamma', 'scale', 'normalize']
cols = len(parameters)
f, axes = plt.subplots(nrows=1, ncols=cols, figsize=(20, 5))
cmap = plt.cm.jet
for i, val in enumerate(parameters):
    xs = np.array([t['misc']['vals'][val] for t in trials.trials]).ravel()
    ys = [-t['result']['loss'] for t in trials.trials]
    xs, ys = zip(*sorted(zip(xs, ys)))
    axes[i].scatter(xs, ys, s=20, linewidth=0.01, alpha=0.25, c=cmap(float(i) / len(parameters)))
    axes[i].set_title(val)
    axes[i].set_ylim([0.9, 1.0])
'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points. 'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points. 'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points. 'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points. 'c' argument looks like a single numeric RGB or RGBA sequence, which should be avoided as value-mapping will have precedence in case its length matches with 'x' & 'y'. Please use a 2-D array with a single row if you really want to specify the same RGB or RGBA value for all points.
Apache-2.0
notebooks/hyperopt_on_iris_data.ipynb
jianzhnie/AutoML-Tools
Decision trees

We will try to optimize only a few of the decision tree's parameters. The code is as follows.
from sklearn.tree import DecisionTreeClassifier

def hyperopt_train_test(params):
    X_ = X[:]
    if 'normalize' in params:
        if params['normalize'] == 1:
            X_ = normalize(X_)
        del params['normalize']
    if 'scale' in params:
        if params['scale'] == 1:
            X_ = scale(X_)
        del params['scale']
    clf = DecisionTreeClassifier(**params)
    # score on the preprocessed features (the original cell passed X here,
    # which silently ignored the scale/normalize choices)
    return cross_val_score(clf, X_, y).mean()

space4dt = {
    'max_depth': hp.choice('max_depth', range(1, 20)),
    'max_features': hp.choice('max_features', range(1, 5)),
    'criterion': hp.choice('criterion', ["gini", "entropy"]),
    'scale': hp.choice('scale', [0, 1]),
    'normalize': hp.choice('normalize', [0, 1])
}

def f(params):
    acc = hyperopt_train_test(params)
    return {'loss': -acc, 'status': STATUS_OK}

trials = Trials()
best = fmin(f, space4dt, algo=tpe.suggest, max_evals=100, trials=trials)
print('best:', best)
100%|█| 100/100 [00:01<00:00, 54.98it/s, best loss: -0.97333333 best: {'criterion': 0, 'max_depth': 2, 'max_features': 3, 'normalize': 0, 'scale': 0}
Apache-2.0
notebooks/hyperopt_on_iris_data.ipynb
jianzhnie/AutoML-Tools
Random Forests

Let's look at an ensemble classifier, the random forest, which is just a collection of decision trees.
from sklearn.ensemble import RandomForestClassifier

def hyperopt_train_test(params):
    X_ = X[:]
    if 'normalize' in params:
        if params['normalize'] == 1:
            X_ = normalize(X_)
        del params['normalize']
    if 'scale' in params:
        if params['scale'] == 1:
            X_ = scale(X_)
        del params['scale']
    clf = RandomForestClassifier(**params)
    # score on the preprocessed features (the original cell passed X here,
    # which silently ignored the scale/normalize choices)
    return cross_val_score(clf, X_, y).mean()

space4rf = {
    'max_depth': hp.choice('max_depth', range(1, 20)),
    'max_features': hp.choice('max_features', range(1, 5)),
    'n_estimators': hp.choice('n_estimators', range(1, 20)),
    'criterion': hp.choice('criterion', ["gini", "entropy"]),
    'scale': hp.choice('scale', [0, 1]),
    'normalize': hp.choice('normalize', [0, 1])
}

best = 0
def f(params):
    global best
    acc = hyperopt_train_test(params)
    if acc > best:
        best = acc
    return {'loss': -acc, 'status': STATUS_OK}

trials = Trials()
best = fmin(f, space4rf, algo=tpe.suggest, max_evals=100, trials=trials)
print('best:')
print(best)
100%|█| 100/100 [00:11<00:00, 8.92it/s, best loss: -0.97333333 best: {'criterion': 1, 'max_depth': 14, 'max_features': 2, 'n_estimators': 0, 'normalize': 0, 'scale': 0}
Apache-2.0
notebooks/hyperopt_on_iris_data.ipynb
jianzhnie/AutoML-Tools
Again we get 97.3% accuracy, the same result as the decision tree.

All Together Now

Automatically tuning one model's parameters at a time (e.g., SVM or KNN) is interesting and instructive, but it is more useful to tune all model parameters at once and end up with the overall best model. This lets us compare all models and all parameters in a single search, giving us the best model.
from sklearn.naive_bayes import BernoulliNB

def hyperopt_train_test(params):
    t = params['type']
    del params['type']
    if t == 'naive_bayes':
        clf = BernoulliNB(**params)
    elif t == 'svm':
        clf = SVC(**params)
    elif t == 'randomforest':
        # the original cell had no branch for 'randomforest', so those trials
        # always scored 0; drop the unused preprocessing flags before fitting
        params.pop('scale', None)
        params.pop('normalize', None)
        clf = RandomForestClassifier(**params)
    elif t == 'dtree':
        clf = DecisionTreeClassifier(**params)
    elif t == 'knn':
        clf = KNeighborsClassifier(**params)
    else:
        return 0
    return cross_val_score(clf, X, y).mean()

space = hp.choice('classifier_type', [
    {
        'type': 'naive_bayes',
        'alpha': hp.uniform('alpha', 0.0, 2.0)
    },
    {
        'type': 'svm',
        'C': hp.uniform('C', 0, 10.0),
        'kernel': hp.choice('kernel', ['linear', 'rbf']),
        'gamma': hp.uniform('gamma', 0, 20.0)
    },
    {
        'type': 'randomforest',
        'max_depth': hp.choice('max_depth', range(1, 20)),
        'max_features': hp.choice('max_features', range(1, 5)),
        'n_estimators': hp.choice('n_estimators', range(1, 20)),
        'criterion': hp.choice('criterion', ["gini", "entropy"]),
        'scale': hp.choice('scale', [0, 1]),
        'normalize': hp.choice('normalize', [0, 1])
    },
    {
        'type': 'knn',
        'n_neighbors': hp.choice('knn_n_neighbors', range(1, 50))
    }
])

count = 0
best = 0
def f(params):
    global best, count
    count += 1
    acc = hyperopt_train_test(params.copy())
    if acc > best:
        print('new best:', acc, 'using', params['type'])
        best = acc
    if count % 50 == 0:
        print('iters:', count, ', acc:', acc, 'using', params)
    return {'loss': -acc, 'status': STATUS_OK}

trials = Trials()
best = fmin(f, space, algo=tpe.suggest, max_evals=50, trials=trials)
print('best:')
print(best)
new best: 0.9333333333333333 using knn new best: 0.9733333333333334 using svm new best: 0.9800000000000001 using svm new best: 0.9866666666666667 using svm iters: 50 , acc: 0.9866666666666667 using {'C': 0.9033939243580144, 'gamma': 19.28858951292339, 'kernel': 'linear', 'type': 'svm'} 100%|█| 50/50 [00:01<00:00, 26.62it/s, best loss: -0.9866666666 best: {'C': 0.9059462783976437, 'classifier_type': 1, 'gamma': 4.146008164096844, 'kernel': 0}
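In the combined search space the returned `best` dictionary is again expressed as `hp.choice` indices ('classifier_type': 1, 'kernel': 0, and so on). A short helper cell, not part of the original notebook, can recover the winning configuration and its accuracy:

from hyperopt import space_eval

# Map the index-based result back to the actual classifier type and parameters
# (assumes `space`, `best` and `trials` from the cell above are in scope).
print('best configuration:', space_eval(space, best))
print('best CV accuracy  :', -trials.best_trial['result']['loss'])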
Apache-2.0
notebooks/hyperopt_on_iris_data.ipynb
jianzhnie/AutoML-Tools
Black-Scholes Algorithm Using Numba-dppy

Sections
- [Black-Scholes algorithm](Black-Sholes-algorithm)
- _Code:_ [Implementation of Black-Scholes targeting CPU using Numba JIT](Implementation-of-Black-Scholes-targeting-CPU-using-Numba-JIT)
- _Code:_ [Implementation of Black-Scholes targeting GPU using Kernels](Implementation-of-Black-Scholes-targeting-GPU-using-Kernels)
- _Code:_ [Implementation of Black-Scholes targeting GPU using Numpy](Implementation-of-Black-Scholes-targeting-GPU-using-Numpy)

Learning Objectives
* Build a Numba implementation of Black-Scholes targeting CPU and GPU using Numba JIT
* Build a Numba-DPPY implementation of Black-Scholes on CPU and GPU using the kernel approach
* Build a Numba-DPPY implementation of Black-Scholes on GPU using the NumPy approach

numba-dppy

Numba-dppy is a standalone extension to the Numba JIT compiler that adds SYCL programming capabilities to Numba. Numba-dppy is packaged as part of the IDP that comes with the oneAPI Base Toolkit, so you don't need to install any specific Conda packages. The support for SYCL is via DPC++'s SYCL runtime; other SYCL compilers are not supported by Numba-dppy.

Black-Scholes algorithm

The Black-Scholes program computes the price of a portfolio of options using partial differential equations. The entire computation performed by Black-Scholes is data-parallel, where each option can be priced independently of the other options.

The Black-Scholes model is one of the most important concepts in modern quantitative finance theory. Developed in 1973 by Fischer Black, Robert Merton, and Myron Scholes, it is still widely used today and is regarded as one of the best ways to determine fair prices of financial derivatives.

Implementation of the Black-Scholes Formula

The Black-Scholes formula is used widely in almost every aspect of quantitative finance, and the calculation has essentially permeated every quantitative finance library used by traders and quantitative analysts alike. Let's look at a hypothetical situation in which a firm has to calculate European options for millions of financial instruments. For each instrument, it has the current price, strike price, and option expiration time. For each set of these data, it makes several thousand Black-Scholes calculations, much like the way options for neighboring stock prices, strike prices, and different option expiration times would be calculated.

Implementation of Black-Scholes targeting CPU using Numba JIT

In the following example, we introduce a naive Black-Scholes implementation that targets a CPU using the Numba JIT, where we calculate the Black-Scholes formula as described (restated for reference just after this cell). This is the decorator-based approach, where we offload data-parallel code sections like parallel-for loops and certain NumPy function calls. With the decorator method, a programmer simply needs to identify the most time-consuming parts of the program. If those parts can be parallelized, the programmer just annotates those sections using Numba-DPPy and can expect those code sections to execute on a GPU.

1. Inspect the code cell below and click run ▶ to save the code to a file.
2. Next run ▶ the cell in the __Build and Run__ section below the code to compile and execute the code.
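For reference, a standard statement of the formula that the code below implements (this restatement is mine, not part of the original notebook text), using $P$ for the current price, $S$ for the strike price, $T$ for the time to expiration, $r$ for the rate, and $\sigma$ for the volatility:

$$ d_1 = \frac{\ln(P/S) + (r + \sigma^2/2)\,T}{\sigma\sqrt{T}}, \qquad d_2 = d_1 - \sigma\sqrt{T}, \qquad N(x) = \tfrac{1}{2}\left(1 + \operatorname{erf}\!\left(\tfrac{x}{\sqrt{2}}\right)\right) $$

$$ \text{call} = P\,N(d_1) - S\,e^{-rT} N(d_2), \qquad \text{put} = \text{call} - P + S\,e^{-rT} $$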
%%writefile lab/black_sholes_jit_cpu.py # Copyright (C) 2017-2018 Intel Corporation # # SPDX-License-Identifier: MIT import dpctl import base_bs_erf import numba as nb from math import log, sqrt, exp, erf # blackscholes implemented as a parallel loop using numba.prange @nb.njit(parallel=True, fastmath=True) def black_scholes_kernel(nopt, price, strike, t, rate, vol, call, put): mr = -rate sig_sig_two = vol * vol * 2 for i in nb.prange(nopt): P = price[i] S = strike[i] T = t[i] a = log(P / S) b = T * mr z = T * sig_sig_two c = 0.25 * z y = 1.0 / sqrt(z) w1 = (a - b + c) * y w2 = (a - b - c) * y d1 = 0.5 + 0.5 * erf(w1) d2 = 0.5 + 0.5 * erf(w2) Se = exp(b) * S r = P * d1 - Se * d2 call[i] = r put[i] = r - P + Se def black_scholes(nopt, price, strike, t, rate, vol, call, put): # offload blackscholes computation to CPU (toggle level0 or opencl driver). with dpctl.device_context(base_bs_erf.get_device_selector()): black_scholes_kernel(nopt, price, strike, t, rate, vol, call, put) # call the run function to setup input data and performance data infrastructure base_bs_erf.run("Numba@jit-loop-par", black_scholes)
_____no_output_____
MIT
AI-and-Analytics/Jupyter/Numba_DPPY_Essentials_training/04_DPPY_Black_Sholes/DPPY_Black_Sholes.ipynb
krzeszew/oneAPI-samples
Build and Run

Select the cell below and click run ▶ to compile and execute the code:
! chmod 755 q; chmod 755 run_black_sholes_jit_cpu.sh; if [ -x "$(command -v qsub)" ]; then ./q run_black_sholes_jit_cpu.sh; else ./run_black_sholes_jit_cpu.sh; fi
_____no_output_____
MIT
AI-and-Analytics/Jupyter/Numba_DPPY_Essentials_training/04_DPPY_Black_Sholes/DPPY_Black_Sholes.ipynb
krzeszew/oneAPI-samples
Implementation of Black-Scholes targeting GPU using Numba JIT

In the example below, we introduce a naive Black-Scholes implementation that targets a GPU using the Numba JIT, where we calculate the Black-Scholes formula as described above.

1. Inspect the code cell below and click run ▶ to save the code to a file.
2. Next run ▶ the cell in the __Build and Run__ section below the code to compile and execute the code.
%%writefile lab/black_sholes_jit_gpu.py # Copyright (C) 2017-2018 Intel Corporation # # SPDX-License-Identifier: MIT import dpctl import base_bs_erf_gpu import numba as nb from math import log, sqrt, exp, erf # blackscholes implemented as a parallel loop using numba.prange @nb.njit(parallel=True, fastmath=True) def black_scholes_kernel(nopt, price, strike, t, rate, vol, call, put): mr = -rate sig_sig_two = vol * vol * 2 for i in nb.prange(nopt): P = price[i] S = strike[i] T = t[i] a = log(P / S) b = T * mr z = T * sig_sig_two c = 0.25 * z y = 1.0 / sqrt(z) w1 = (a - b + c) * y w2 = (a - b - c) * y d1 = 0.5 + 0.5 * erf(w1) d2 = 0.5 + 0.5 * erf(w2) Se = exp(b) * S r = P * d1 - Se * d2 call[i] = r put[i] = r - P + Se def black_scholes(nopt, price, strike, t, rate, vol, call, put): # offload blackscholes computation to GPU (toggle level0 or opencl driver). with dpctl.device_context(base_bs_erf_gpu.get_device_selector()): black_scholes_kernel(nopt, price, strike, t, rate, vol, call, put) # call the run function to setup input data and performance data infrastructure base_bs_erf_gpu.run("Numba@jit-loop-par", black_scholes)
_____no_output_____
MIT
AI-and-Analytics/Jupyter/Numba_DPPY_Essentials_training/04_DPPY_Black_Sholes/DPPY_Black_Sholes.ipynb
krzeszew/oneAPI-samples
Build and Run

Select the cell below and click run ▶ to compile and execute the code:
! chmod 755 q; chmod 755 run_black_sholes_jit_gpu.sh; if [ -x "$(command -v qsub)" ]; then ./q run_black_sholes_jit_gpu.sh; else ./run_black_sholes_jit_gpu.sh; fi
_____no_output_____
MIT
AI-and-Analytics/Jupyter/Numba_DPPY_Essentials_training/04_DPPY_Black_Sholes/DPPY_Black_Sholes.ipynb
krzeszew/oneAPI-samples
Implementation of Black-Scholes targeting GPU using Kernels

Writing Explicit Kernels in numba-dppy

Writing a SYCL kernel using the `@numba_dppy.kernel` decorator has similar syntax to writing OpenCL kernels. As such, the numba-dppy module provides indexing and other functions similar to OpenCL. The indexing functions supported inside a `numba_dppy.kernel` are:

* numba_dppy.get_local_id : Gets the local ID of the item
* numba_dppy.get_local_size: Gets the local work-group size of the device
* numba_dppy.get_group_id : Gets the group ID of the item
* numba_dppy.get_num_groups: Gets the number of work-groups

Refer to https://intelpython.github.io/numba-dppy/latest/user_guides/kernel_programming_guide/index.html for more details.

In the following example we use the dppy-kernel approach for explicit kernel programming. If the programmer wants to extract further performance from the offloaded code, they can use this explicit approach to tune the GPU parameters, taking advantage of the work-groups and work-items of a device (a minimal indexing example follows this cell).

1. Inspect the code cell below and click run ▶ to save the code to a file.
2. Next run ▶ the cell in the __Build and Run__ section below the code to compile and execute the code.
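To make the indexing functions concrete, here is a minimal, hypothetical vector-add kernel. It is not part of the original sample, and the device filter string passed to `dpctl.device_context` is an assumption that may need adapting to your system (the sample below instead uses `base_bs_erf_gpu.get_device_selector()`).

import dpctl
import numba_dppy
import numpy as np

@numba_dppy.kernel
def vector_add(a, b, c):
    # one work-item per element, indexed by its global ID
    i = numba_dppy.get_global_id(0)
    c[i] = a[i] + b[i]

a = np.arange(1024, dtype=np.float32)
b = np.ones(1024, dtype=np.float32)
c = np.zeros_like(a)

# "opencl:gpu" is an assumed filter string, not taken from the original notebook
with dpctl.device_context("opencl:gpu"):
    vector_add[1024, numba_dppy.DEFAULT_LOCAL_SIZE](a, b, c)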
%%writefile lab/black_sholes_kernel.py # Copyright (C) 2017-2018 Intel Corporation # # SPDX-License-Identifier: MIT import dpctl import base_bs_erf_gpu import numba_dppy from math import log, sqrt, exp, erf # blackscholes implemented using dppy.kernel @numba_dppy.kernel( access_types={"read_only": ["price", "strike", "t"], "write_only": ["call", "put"]} ) def black_scholes(nopt, price, strike, t, rate, vol, call, put): mr = -rate sig_sig_two = vol * vol * 2 i = numba_dppy.get_global_id(0) P = price[i] S = strike[i] T = t[i] a = log(P / S) b = T * mr z = T * sig_sig_two c = 0.25 * z y = 1.0 / sqrt(z) w1 = (a - b + c) * y w2 = (a - b - c) * y d1 = 0.5 + 0.5 * erf(w1) d2 = 0.5 + 0.5 * erf(w2) Se = exp(b) * S r = P * d1 - Se * d2 call[i] = r put[i] = r - P + Se def black_scholes_driver(nopt, price, strike, t, rate, vol, call, put): # offload blackscholes computation to GPU (toggle level0 or opencl driver). with dpctl.device_context(base_bs_erf_gpu.get_device_selector()): black_scholes[nopt, numba_dppy.DEFAULT_LOCAL_SIZE]( nopt, price, strike, t, rate, vol, call, put ) # call the run function to setup input data and performance data infrastructure base_bs_erf_gpu.run("Numba@jit-loop-par", black_scholes_driver)
_____no_output_____
MIT
AI-and-Analytics/Jupyter/Numba_DPPY_Essentials_training/04_DPPY_Black_Sholes/DPPY_Black_Sholes.ipynb
krzeszew/oneAPI-samples
Build and Run

Select the cell below and click run ▶ to compile and execute the code:
! chmod 755 q; chmod 755 run_black_sholes_kernel.sh; if [ -x "$(command -v qsub)" ]; then ./q run_black_sholes_kernel.sh; else ./run_black_sholes_kernel.sh; fi
_____no_output_____
MIT
AI-and-Analytics/Jupyter/Numba_DPPY_Essentials_training/04_DPPY_Black_Sholes/DPPY_Black_Sholes.ipynb
krzeszew/oneAPI-samples
Implementation of Black-Scholes targeting GPU using NumPy

In the following example, we can observe the Black-Scholes NumPy implementation, targeting the GPU using the NumPy approach.

1. Inspect the code cell below and click run ▶ to save the code to a file.
2. Next run ▶ the cell in the __Build and Run__ section below the code to compile and execute the code.
%%writefile lab/black_sholes_numpy_graph.py # Copyright (C) 2017-2018 Intel Corporation # # SPDX-License-Identifier: MIT # Copyright (C) 2017-2018 Intel Corporation # # SPDX-License-Identifier: MIT import dpctl import base_bs_erf_graph import numba as nb import numpy as np from numpy import log, exp, sqrt from math import erf # Numba does know erf function from numpy or scipy @nb.vectorize(nopython=True) def nberf(x): return erf(x) # blackscholes implemented using numpy function calls @nb.jit(nopython=True, parallel=True, fastmath=True) def black_scholes_kernel(nopt, price, strike, t, rate, vol, call, put): mr = -rate sig_sig_two = vol * vol * 2 P = price S = strike T = t a = log(P / S) b = T * mr z = T * sig_sig_two c = 0.25 * z y = 1.0 / sqrt(z) w1 = (a - b + c) * y w2 = (a - b - c) * y d1 = 0.5 + 0.5 * nberf(w1) d2 = 0.5 + 0.5 * nberf(w2) Se = exp(b) * S r = P * d1 - Se * d2 call[:] = r # temporary `r` is necessary for faster `put` computation put[:] = r - P + Se def black_scholes(nopt, price, strike, t, rate, vol, call, put): # offload blackscholes computation to GPU (toggle level0 or opencl driver). with dpctl.device_context(base_bs_erf_graph.get_device_selector()): black_scholes_kernel(nopt, price, strike, t, rate, vol, call, put) # call the run function to setup input data and performance data infrastructure base_bs_erf_graph.run("Numba@jit-numpy", black_scholes)
_____no_output_____
MIT
AI-and-Analytics/Jupyter/Numba_DPPY_Essentials_training/04_DPPY_Black_Sholes/DPPY_Black_Sholes.ipynb
krzeszew/oneAPI-samples
Build and Run

Select the cell below and click run ▶ to compile and execute the code:
! chmod 755 q; chmod 755 run_black_sholes_numpy_graph.sh; if [ -x "$(command -v qsub)" ]; then ./q run_black_sholes_numpy_graph.sh; else ./run_black_sholes_numpy_graph.sh; fi
_____no_output_____
MIT
AI-and-Analytics/Jupyter/Numba_DPPY_Essentials_training/04_DPPY_Black_Sholes/DPPY_Black_Sholes.ipynb
krzeszew/oneAPI-samples
Plot GPU Results

The code below selects the calls and puts versus the current price for strike prices in the range 23.0 to 23.5 and plots the results in a graph as shown below.

View the results

Select the cell below and click run ▶ to view the graph:
from matplotlib import pyplot as plt import numpy as np def read_dictionary(fn): import pickle # Load data (deserialize) with open(fn, 'rb') as handle: dictionary = pickle.load(handle) return dictionary resultsDict = read_dictionary('resultsDict.pkl') limit = 10 call = resultsDict['call'] put = resultsDict['put'] price = resultsDict['price'] strike = resultsDict['strike'] plt.style.use('dark_background') priceRange = [23.0, 23.5] # strikeIndex = np.where((price >= priceRange[0]) & (price < priceRange[1]) )[0] # plt.scatter(strike[strikeIndex], put[strikeIndex], c= 'r', s = 2, alpha = 1, label = 'puts') # plt.scatter(strike[strikeIndex], call[strikeIndex], c= 'b', s = 2, alpha = 1, label = 'calls') # plt.title('Calls and Puts verses Strike for a current price in range {}'.format(priceRange)) # plt.ylabel('Option Price [$]') # plt.xlabel('Strike Price [$]') # plt.legend() # plt.grid() strikeRange = [23.0, 23.5] strikeIndex = np.where((strike >= strikeRange[0]) & (strike < strikeRange[1]) )[0] plt.scatter(price[strikeIndex], put[strikeIndex], c= 'r', s = 2, alpha = 1, label = 'puts') plt.scatter(price[strikeIndex], call[strikeIndex], c= 'b', s = 2, alpha = 1, label = 'calls') plt.title('Calls and Puts verses Current price for a strike price in range {}'.format(priceRange)) plt.ylabel('Option Price [$]') plt.xlabel('Current Price [$]') plt.legend() plt.grid()
_____no_output_____
MIT
AI-and-Analytics/Jupyter/Numba_DPPY_Essentials_training/04_DPPY_Black_Sholes/DPPY_Black_Sholes.ipynb
krzeszew/oneAPI-samples
Dataset: winequality-white.csv
# Read the csv file into a pandas DataFrame white = pd.read_csv('./datasets/winequality-white.csv') white.head() # Assign the data to X and y # Note: Sklearn requires a two-dimensional array of values # so we use reshape to create this X = white.alcohol.values.reshape(-1, 1) y = white.quality.values.reshape(-1, 1) print("Shape: ", X.shape, y.shape) X # Plot the data ### BEGIN SOLUTION plt.scatter(X, y) ### END SOLUTION # Create the model and fit the model to the data from sklearn.linear_model import LinearRegression ### BEGIN SOLUTION model = LinearRegression() ### END SOLUTION # Fit the model to the data. # Note: This is the training step where you fit the line to the data. ### BEGIN SOLUTION model.fit(X, y) ### END SOLUTION # Print the coefficient and the intercept for the model ### BEGIN SOLUTION print('Weight coefficients: ', model.coef_) print('y-axis intercept: ', model.intercept_) ### END SOLUTION # Note: we have to transform our min and max values # so they are in the format: array([[ 1.17]]) # This is the required format for `model.predict()` x_min = np.array([[X.min()]]) x_max = np.array([[X.max()]]) print(f"Min X Value: {x_min}") print(f"Max X Value: {x_max}") # Calculate the y_min and y_max using model.predict and x_min and x_max ### BEGIN SOLUTION y_min = model.predict(x_min) y_max = model.predict(x_max) ### END SOLUTION # Plot X and y using plt.scatter # Plot the model fit line using [x_min[0], x_max[0]], [y_min[0], y_max[0]] ### BEGIN SOLUTION plt.scatter(X, y, c='blue') plt.plot([x_min[0], x_max[0]], [y_min[0], y_max[0]], c='red') ### END SOLUTION
_____no_output_____
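As a possible follow-up, not part of the original notebook, the quality of the fitted line can be quantified with the model's R² score, and the model can then be used for predictions:

# Assumes `model`, `X` and `y` from the cell above are still in scope.
r_squared = model.score(X, y)   # coefficient of determination R^2 on the training data
print(f"R^2: {r_squared}")

# Predict wine quality for the alcohol values used in training
predictions = model.predict(X)
print("First five predictions:", predictions[:5].ravel())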
FTL
.ipynb_checkpoints/1-2-linear-regression-winequality-white-checkpoint.ipynb
hockeylori/FinalProject-Team8
Test
%matplotlib inline import matplotlib.pyplot as plt from boxplot import boxplot as bx import numpy as np
_____no_output_____
MIT
code/test.ipynb
HurryZhao/boxplot
Quality of data
# Integers Integers = [np.random.randint(-3, 3, 500, dtype='l'),np.random.randint(-10, 10, 500, dtype='l')] Float = np.random.random([2,500]).tolist() fig,ax = plt.subplots(figsize=(10,10)) bx.boxplot(ax,Integers) fig,ax = plt.subplots(figsize=(10,10)) bx.info_boxplot(ax,Integers) fig,ax = plt.subplots(figsize=(10,10)) bx.hist_boxplot(ax,Integers) fig,ax = plt.subplots(figsize=(10,10)) bx.creative_boxplot(ax,Integers) fig,ax = plt.subplots(figsize=(10,10)) bx.boxplot(ax,Float) fig,ax = plt.subplots(figsize=(10,10)) bx.info_boxplot(ax,Float) fig,ax = plt.subplots(figsize=(10,10)) bx.hist_boxplot(ax,Float) fig,ax = plt.subplots(figsize=(10,10)) bx.creative_boxplot(ax,Float)
[0.36623335, 0.7324667]
MIT
code/test.ipynb
HurryZhao/boxplot
Real dataset
import pandas as pd data = pd.read_csv('/Users/hurryzhao/boxplot/results_merged.csv') data.head() t_d1 = data.commits[data.last_updated=='2017-08-28'] t_d2 = data.commits[data.last_updated=='2017-08-26'] t_d3 = data.commits[data.last_updated=='2017-08-24'] t_d4 = data.commits[data.last_updated=='2017-08-22'] t_d5 = data.commits[data.last_updated=='2017-08-20'] t_d=[t_d1,t_d2,t_d3,t_d4,t_d5] t_d fig,ax = plt.subplots(figsize=(10,10)) bx.boxplot(ax,t_d,outlier_facecolor='white',outlier_edgecolor='r',outlier=False) fig,ax = plt.subplots(figsize=(10,10)) bx.info_boxplot(ax,t_d,outlier_facecolor='white',outlier_edgecolor='r',outlier=False) fig,ax = plt.subplots(figsize=(10,10)) bx.hist_boxplot(ax,t_d,outlier_facecolor='white',outlier_edgecolor='r',outlier=False) fig,ax = plt.subplots(figsize=(10,10)) bx.creative_boxplot(ax,t_d,outlier_facecolor='white',outlier_edgecolor='r',outlier=False)
[333.3333333319238, 666.6666666680761, 999.9999999999999, 1333.3333333319238, 1666.6666666680758]
MIT
code/test.ipynb
HurryZhao/boxplot
Robustness
data=[['1','1','2','2','3','4'],['1','1','2','2','3','4']] fig,ax = plt.subplots(figsize=(10,10)) bx.boxplot(ax,data,outlier_facecolor='white',outlier_edgecolor='r',outlier=False)
Wrong data type, please input a list of numerical list
MIT
code/test.ipynb
HurryZhao/boxplot
TSG097 - Get BDC stateful sets (Kubernetes)
===========================================

Description
-----------

Steps
-----

Common functions

Define helper functions used in this notebook.
# Define `run` function for transient fault handling, suggestions on error, and scrolling updates on Windows import sys import os import re import json import platform import shlex import shutil import datetime from subprocess import Popen, PIPE from IPython.display import Markdown retry_hints = {} # Output in stderr known to be transient, therefore automatically retry error_hints = {} # Output in stderr where a known SOP/TSG exists which will be HINTed for further help install_hint = {} # The SOP to help install the executable if it cannot be found first_run = True rules = None debug_logging = False def run(cmd, return_output=False, no_output=False, retry_count=0): """Run shell command, stream stdout, print stderr and optionally return output NOTES: 1. Commands that need this kind of ' quoting on Windows e.g.: kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='data-pool')].metadata.name} Need to actually pass in as '"': kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='"'data-pool'"')].metadata.name} The ' quote approach, although correct when pasting into Windows cmd, will hang at the line: `iter(p.stdout.readline, b'')` The shlex.split call does the right thing for each platform, just use the '"' pattern for a ' """ MAX_RETRIES = 5 output = "" retry = False global first_run global rules if first_run: first_run = False rules = load_rules() # When running `azdata sql query` on Windows, replace any \n in """ strings, with " ", otherwise we see: # # ('HY090', '[HY090] [Microsoft][ODBC Driver Manager] Invalid string or buffer length (0) (SQLExecDirectW)') # if platform.system() == "Windows" and cmd.startswith("azdata sql query"): cmd = cmd.replace("\n", " ") # shlex.split is required on bash and for Windows paths with spaces # cmd_actual = shlex.split(cmd) # Store this (i.e. kubectl, python etc.) to support binary context aware error_hints and retries # user_provided_exe_name = cmd_actual[0].lower() # When running python, use the python in the ADS sandbox ({sys.executable}) # if cmd.startswith("python "): cmd_actual[0] = cmd_actual[0].replace("python", sys.executable) # On Mac, when ADS is not launched from terminal, LC_ALL may not be set, which causes pip installs to fail # with: # # UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 4969: ordinal not in range(128) # # Setting it to a default value of "en_US.UTF-8" enables pip install to complete # if platform.system() == "Darwin" and "LC_ALL" not in os.environ: os.environ["LC_ALL"] = "en_US.UTF-8" # When running `kubectl`, if AZDATA_OPENSHIFT is set, use `oc` # if cmd.startswith("kubectl ") and "AZDATA_OPENSHIFT" in os.environ: cmd_actual[0] = cmd_actual[0].replace("kubectl", "oc") # To aid supportabilty, determine which binary file will actually be executed on the machine # which_binary = None # Special case for CURL on Windows. The version of CURL in Windows System32 does not work to # get JWT tokens, it returns "(56) Failure when receiving data from the peer". If another instance # of CURL exists on the machine use that one. 
(Unfortunately the curl.exe in System32 is almost # always the first curl.exe in the path, and it can't be uninstalled from System32, so here we # look for the 2nd installation of CURL in the path) if platform.system() == "Windows" and cmd.startswith("curl "): path = os.getenv('PATH') for p in path.split(os.path.pathsep): p = os.path.join(p, "curl.exe") if os.path.exists(p) and os.access(p, os.X_OK): if p.lower().find("system32") == -1: cmd_actual[0] = p which_binary = p break # Find the path based location (shutil.which) of the executable that will be run (and display it to aid supportability), this # seems to be required for .msi installs of azdata.cmd/az.cmd. (otherwise Popen returns FileNotFound) # # NOTE: Bash needs cmd to be the list of the space separated values hence shlex.split. # if which_binary == None: which_binary = shutil.which(cmd_actual[0]) if which_binary == None: if user_provided_exe_name in install_hint and install_hint[user_provided_exe_name] is not None: display(Markdown(f'HINT: Use [{install_hint[user_provided_exe_name][0]}]({install_hint[user_provided_exe_name][1]}) to resolve this issue.')) raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)") else: cmd_actual[0] = which_binary start_time = datetime.datetime.now().replace(microsecond=0) print(f"START: {cmd} @ {start_time} ({datetime.datetime.utcnow().replace(microsecond=0)} UTC)") print(f" using: {which_binary} ({platform.system()} {platform.release()} on {platform.machine()})") print(f" cwd: {os.getcwd()}") # Command-line tools such as CURL and AZDATA HDFS commands output # scrolling progress bars, which causes Jupyter to hang forever, to # workaround this, use no_output=True # # Work around a infinite hang when a notebook generates a non-zero return code, break out, and do not wait # wait = True try: if no_output: p = Popen(cmd_actual) else: p = Popen(cmd_actual, stdout=PIPE, stderr=PIPE, bufsize=1) with p.stdout: for line in iter(p.stdout.readline, b''): line = line.decode() if return_output: output = output + line else: if cmd.startswith("azdata notebook run"): # Hyperlink the .ipynb file regex = re.compile(' "(.*)"\: "(.*)"') match = regex.match(line) if match: if match.group(1).find("HTML") != -1: display(Markdown(f' - "{match.group(1)}": "{match.group(2)}"')) else: display(Markdown(f' - "{match.group(1)}": "[{match.group(2)}]({match.group(2)})"')) wait = False break # otherwise infinite hang, have not worked out why yet. else: print(line, end='') if rules is not None: apply_expert_rules(line) if wait: p.wait() except FileNotFoundError as e: if install_hint is not None: display(Markdown(f'HINT: Use {install_hint} to resolve this issue.')) raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)") from e exit_code_workaround = 0 # WORKAROUND: azdata hangs on exception from notebook on p.wait() if not no_output: for line in iter(p.stderr.readline, b''): try: line_decoded = line.decode() except UnicodeDecodeError: # NOTE: Sometimes we get characters back that cannot be decoded(), e.g. 
# # \xa0 # # For example see this in the response from `az group create`: # # ERROR: Get Token request returned http error: 400 and server # response: {"error":"invalid_grant",# "error_description":"AADSTS700082: # The refresh token has expired due to inactivity.\xa0The token was # issued on 2018-10-25T23:35:11.9832872Z # # which generates the exception: # # UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa0 in position 179: invalid start byte # print("WARNING: Unable to decode stderr line, printing raw bytes:") print(line) line_decoded = "" pass else: # azdata emits a single empty line to stderr when doing an hdfs cp, don't # print this empty "ERR:" as it confuses. # if line_decoded == "": continue print(f"STDERR: {line_decoded}", end='') if line_decoded.startswith("An exception has occurred") or line_decoded.startswith("ERROR: An error occurred while executing the following cell"): exit_code_workaround = 1 # inject HINTs to next TSG/SOP based on output in stderr # if user_provided_exe_name in error_hints: for error_hint in error_hints[user_provided_exe_name]: if line_decoded.find(error_hint[0]) != -1: display(Markdown(f'HINT: Use [{error_hint[1]}]({error_hint[2]}) to resolve this issue.')) # apply expert rules (to run follow-on notebooks), based on output # if rules is not None: apply_expert_rules(line_decoded) # Verify if a transient error, if so automatically retry (recursive) # if user_provided_exe_name in retry_hints: for retry_hint in retry_hints[user_provided_exe_name]: if line_decoded.find(retry_hint) != -1: if retry_count < MAX_RETRIES: print(f"RETRY: {retry_count} (due to: {retry_hint})") retry_count = retry_count + 1 output = run(cmd, return_output=return_output, retry_count=retry_count) if return_output: return output else: return elapsed = datetime.datetime.now().replace(microsecond=0) - start_time # WORKAROUND: We avoid infinite hang above in the `azdata notebook run` failure case, by inferring success (from stdout output), so # don't wait here, if success known above # if wait: if p.returncode != 0: raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(p.returncode)}.\n') else: if exit_code_workaround !=0 : raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(exit_code_workaround)}.\n') print(f'\nSUCCESS: {elapsed}s elapsed.\n') if return_output: return output def load_json(filename): """Load a json file from disk and return the contents""" with open(filename, encoding="utf8") as json_file: return json.load(json_file) def load_rules(): """Load any 'expert rules' from the metadata of this notebook (.ipynb) that should be applied to the stderr of the running executable""" # Load this notebook as json to get access to the expert rules in the notebook metadata. # try: j = load_json("tsg097-get-statefulsets.ipynb") except: pass # If the user has renamed the book, we can't load ourself. NOTE: Is there a way in Jupyter, to know your own filename? else: if "metadata" in j and \ "azdata" in j["metadata"] and \ "expert" in j["metadata"]["azdata"] and \ "expanded_rules" in j["metadata"]["azdata"]["expert"]: rules = j["metadata"]["azdata"]["expert"]["expanded_rules"] rules.sort() # Sort rules, so they run in priority order (the [0] element). Lowest value first. 
# print (f"EXPERT: There are {len(rules)} rules to evaluate.") return rules def apply_expert_rules(line): """Determine if the stderr line passed in, matches the regular expressions for any of the 'expert rules', if so inject a 'HINT' to the follow-on SOP/TSG to run""" global rules for rule in rules: notebook = rule[1] cell_type = rule[2] output_type = rule[3] # i.e. stream or error output_type_name = rule[4] # i.e. ename or name output_type_value = rule[5] # i.e. SystemExit or stdout details_name = rule[6] # i.e. evalue or text expression = rule[7].replace("\\*", "*") # Something escaped *, and put a \ in front of it! if debug_logging: print(f"EXPERT: If rule '{expression}' satisfied', run '{notebook}'.") if re.match(expression, line, re.DOTALL): if debug_logging: print("EXPERT: MATCH: name = value: '{0}' = '{1}' matched expression '{2}', therefore HINT '{4}'".format(output_type_name, output_type_value, expression, notebook)) match_found = True display(Markdown(f'HINT: Use [{notebook}]({notebook}) to resolve this issue.')) print('Common functions defined successfully.') # Hints for binary (transient fault) retry, (known) error and install guide # retry_hints = {'kubectl': ['A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond']} error_hints = {'kubectl': [['no such host', 'TSG010 - Get configuration contexts', '../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb'], ['No connection could be made because the target machine actively refused it', 'TSG056 - Kubectl fails with No connection could be made because the target machine actively refused it', '../repair/tsg056-kubectl-no-connection-could-be-made.ipynb']]} install_hint = {'kubectl': ['SOP036 - Install kubectl command line interface', '../install/sop036-install-kubectl.ipynb']}
_____no_output_____
MIT
Big-Data-Clusters/CU4/Public/content/monitor-k8s/tsg097-get-statefulsets.ipynb
gantz-at-incomm/tigertoolbox
Get the Kubernetes namespace for the big data cluster

Get the namespace of the Big Data Cluster using the kubectl command line interface.

**NOTE:** If there is more than one Big Data Cluster in the target Kubernetes cluster, then either:

- set \[0\] to the correct value for the big data cluster.
- set the environment variable AZDATA\_NAMESPACE, before starting Azure Data Studio.
# Place Kubernetes namespace name for BDC into 'namespace' variable if "AZDATA_NAMESPACE" in os.environ: namespace = os.environ["AZDATA_NAMESPACE"] else: try: namespace = run(f'kubectl get namespace --selector=MSSQL_CLUSTER -o jsonpath={{.items[0].metadata.name}}', return_output=True) except: from IPython.display import Markdown print(f"ERROR: Unable to find a Kubernetes namespace with label 'MSSQL_CLUSTER'. SQL Server Big Data Cluster Kubernetes namespaces contain the label 'MSSQL_CLUSTER'.") display(Markdown(f'HINT: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.')) display(Markdown(f'HINT: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.')) display(Markdown(f'HINT: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.')) raise print(f'The SQL Server Big Data Cluster Kubernetes namespace is: {namespace}')
_____no_output_____
MIT
Big-Data-Clusters/CU4/Public/content/monitor-k8s/tsg097-get-statefulsets.ipynb
gantz-at-incomm/tigertoolbox
Run kubectl to display the Stateful sets
run(f"kubectl get statefulset -n {namespace} -o wide") print('Notebook execution complete.')
_____no_output_____
MIT
Big-Data-Clusters/CU4/Public/content/monitor-k8s/tsg097-get-statefulsets.ipynb
gantz-at-incomm/tigertoolbox
Trajectory equations:
%matplotlib inline import matplotlib.pyplot as plt from sympy import * init_printing() Bx, By, Bz, B = symbols("B_x, B_y, B_z, B") x, y, z = symbols("x, y, z" ) x_0, y_0, z_0 = symbols("x_0, y_0, z_0") vx, vy, vz, v = symbols("v_x, v_y, v_z, v") vx_0, vy_0, vz_0 = symbols("v_x0, v_y0, v_z0") t = symbols("t") q, m = symbols("q, m") c, eps0 = symbols("c, epsilon_0")
_____no_output_____
MIT
examples/single_particle_in_magnetic_field/Single Particle in Uniform Magnetic Field.ipynb
tnakaicode/ChargedPaticle-LowEnergy
The equation of motion:

$$\begin{gather*} m \frac{d^2 \vec{r} }{dt^2} = \frac{q}{c} [ \vec{v} \times \vec{B} ] \end{gather*}$$

For the case of a uniform magnetic field along the $z$-axis:

$$ \vec{B} = B_z = B, \quad B_x = 0, \quad B_y = 0 $$

In Cartesian coordinates:
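Written out in components, these are exactly the three equations that the cell below constructs symbolically:

$$ m\,\frac{d^2 x}{dt^2} = \frac{q}{c} B_z \frac{dy}{dt}, \qquad m\,\frac{d^2 y}{dt^2} = -\frac{q}{c} B_z \frac{dx}{dt}, \qquad m\,\frac{d^2 z}{dt^2} = 0 $$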
eq_x = Eq( Derivative(x(t), t, 2), q / c / m * Bz * Derivative(y(t),t) ) eq_y = Eq( Derivative(y(t), t, 2), - q / c / m * Bz * Derivative(x(t),t) ) eq_z = Eq( Derivative(z(t), t, 2), 0 ) display( eq_x, eq_y, eq_z )
_____no_output_____
MIT
examples/single_particle_in_magnetic_field/Single Particle in Uniform Magnetic Field.ipynb
tnakaicode/ChargedPaticle-LowEnergy
Motion is uniform along the $z$-axis:
z_eq = dsolve( eq_z, z(t) ) vz_eq = Eq( z_eq.lhs.diff(t), z_eq.rhs.diff(t) ) display( z_eq, vz_eq )
_____no_output_____
MIT
examples/single_particle_in_magnetic_field/Single Particle in Uniform Magnetic Field.ipynb
tnakaicode/ChargedPaticle-LowEnergy
The constants of integration can be found from the initial conditions $z(0) = z_0$ and $v_z(0) = v_{z0}$:
c1_c2_system = [] initial_cond_subs = [(t, 0), (z(0), z_0), (diff(z(t),t).subs(t,0), vz_0) ] c1_c2_system.append( z_eq.subs( initial_cond_subs ) ) c1_c2_system.append( vz_eq.subs( initial_cond_subs ) ) c1, c2 = symbols("C1, C2") c1_c2 = solve( c1_c2_system, [c1, c2] ) c1_c2
_____no_output_____
MIT
examples/single_particle_in_magnetic_field/Single Particle in Uniform Magnetic Field.ipynb
tnakaicode/ChargedPaticle-LowEnergy
So that
z_sol = z_eq.subs( c1_c2 ) vz_sol = vz_eq.subs( c1_c2 ).subs( [( diff(z(t),t), vz(t) ) ] ) display( z_sol, vz_sol )
_____no_output_____
MIT
examples/single_particle_in_magnetic_field/Single Particle in Uniform Magnetic Field.ipynb
tnakaicode/ChargedPaticle-LowEnergy
For some reason I have not been able to solve the system of differential equations for $x$ and $y$ directly with Sympy's `dsolve` function:
#dsolve( [eq_x, eq_y], [x(t),y(t)] )
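As a side note, newer SymPy releases (1.7 and later) provide a dedicated solver for linear ODE systems that may handle this case. The sketch below is mine, is not verified against the SymPy version used in the original notebook, and rewrites the problem as a first-order system in the velocities before solving:

# Sketch only: requires SymPy >= 1.7; the notebook instead proceeds manually below.
# Assumes q, c, m, Bz, t are the symbols defined earlier in this notebook.
from sympy import Eq, Function
from sympy.solvers.ode.systems import dsolve_system

vx_f, vy_f = Function("v_x"), Function("v_y")
sys_eqs = [Eq(vx_f(t).diff(t),  q / c / m * Bz * vy_f(t)),
           Eq(vy_f(t).diff(t), -q / c / m * Bz * vx_f(t))]
display(dsolve_system(sys_eqs, funcs=[vx_f(t), vy_f(t)], t=t))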
_____no_output_____
MIT
examples/single_particle_in_magnetic_field/Single Particle in Uniform Magnetic Field.ipynb
tnakaicode/ChargedPaticle-LowEnergy
It is necessary to resort to a manual solution. The method is to differentiate one of the equations over time and substitute the other. This results in oscillator-type second-order equations for $v_y$ and $v_x$, whose solution is known. Integrating one more time, it is possible to obtain the laws of motion $x(t)$ and $y(t)$.
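Concretely, differentiating the $v_x$ equation and substituting the $v_y$ equation (which is what the cell below does symbolically) gives a harmonic-oscillator equation, and the same holds for $v_y$:

$$ \frac{d^2 v_x}{dt^2} = \frac{q B_z}{c\,m}\,\frac{d v_y}{dt} = -\left(\frac{q B_z}{c\,m}\right)^2 v_x $$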
v_subs = [ (Derivative(x(t),t), vx(t)), (Derivative(y(t),t), vy(t)) ] eq_vx = eq_x.subs( v_subs ) eq_vy = eq_y.subs( v_subs ) display( eq_vx, eq_vy ) eq_d2t_vx = Eq( diff(eq_vx.lhs,t), diff(eq_vx.rhs,t)) eq_d2t_vx = eq_d2t_vx.subs( [(eq_vy.lhs, eq_vy.rhs)] ) display( eq_d2t_vx )
_____no_output_____
MIT
examples/single_particle_in_magnetic_field/Single Particle in Uniform Magnetic Field.ipynb
tnakaicode/ChargedPaticle-LowEnergy
The solution of the last equation is
C1, C2, Omega = symbols( "C1, C2, Omega" ) vx_eq = Eq( vx(t), C1 * cos( Omega * t ) + C2 * sin( Omega * t )) display( vx_eq ) omega_eq = Eq( Omega, Bz * q / c / m ) display( omega_eq )
_____no_output_____
MIT
examples/single_particle_in_magnetic_field/Single Particle in Uniform Magnetic Field.ipynb
tnakaicode/ChargedPaticle-LowEnergy
where $\Omega$ is the cyclotron frequency.
display( vx_eq ) vy_eq = Eq( vy(t), solve( Eq( diff(vx_eq.rhs,t), eq_vx.rhs ), ( vy(t) ) )[0] ) vy_eq = vy_eq.subs( [(Omega*c*m / Bz / q, omega_eq.rhs * c * m / Bz / q)]).simplify() display( vy_eq )
_____no_output_____
MIT
examples/single_particle_in_magnetic_field/Single Particle in Uniform Magnetic Field.ipynb
tnakaicode/ChargedPaticle-LowEnergy
For initial conditions $v_x(0) = v_{x0}, v_y(0) = v_{y0}$:
initial_cond_subs = [(t,0), (vx(0), vx_0), (vy(0), vy_0) ] vx0_eq = vx_eq.subs( initial_cond_subs ) vy0_eq = vy_eq.subs( initial_cond_subs ) display( vx0_eq, vy0_eq ) c1_c2 = solve( [vx0_eq, vy0_eq] ) c1_c2_subs = [ ("C1", c1_c2[c1]), ("C2", c1_c2[c2]) ] vx_eq = vx_eq.subs( c1_c2_subs ) vy_eq = vy_eq.subs( c1_c2_subs ) display( vx_eq, vy_eq )
_____no_output_____
MIT
examples/single_particle_in_magnetic_field/Single Particle in Uniform Magnetic Field.ipynb
tnakaicode/ChargedPaticle-LowEnergy
These equations can be integrated to obtain the laws of motion:
x_eq = vx_eq.subs( vx(t), diff(x(t),t)) x_eq = dsolve( x_eq ) y_eq = vy_eq.subs( vy(t), diff(y(t),t)) y_eq = dsolve( y_eq ).subs( C1, C2 ) display( x_eq, y_eq )
_____no_output_____
MIT
examples/single_particle_in_magnetic_field/Single Particle in Uniform Magnetic Field.ipynb
tnakaicode/ChargedPaticle-LowEnergy
For nonzero $\Omega$:
x_eq = x_eq.subs( [(Omega, 123)] ).subs( [(123, Omega)] ).subs( [(Rational(1,123), 1/Omega)] ) y_eq = y_eq.subs( [(Omega, 123)] ).subs( [(123, Omega)] ).subs( [(Rational(1,123), 1/Omega)] ) display( x_eq, y_eq )
_____no_output_____
MIT
examples/single_particle_in_magnetic_field/Single Particle in Uniform Magnetic Field.ipynb
tnakaicode/ChargedPaticle-LowEnergy
For initial conditions $x(0) = x_0, y(0) = y_0$:
initial_cond_subs = [(t,0), (x(0), x_0), (y(0), y_0) ] x0_eq = x_eq.subs( initial_cond_subs ) y0_eq = y_eq.subs( initial_cond_subs ) display( x0_eq, y0_eq ) c1_c2 = solve( [x0_eq, y0_eq] ) c1_c2_subs = [ ("C1", c1_c2[0][c1]), ("C2", c1_c2[0][c2]) ] x_eq = x_eq.subs( c1_c2_subs ) y_eq = y_eq.subs( c1_c2_subs ) display( x_eq, y_eq ) x_eq = x_eq.simplify() y_eq = y_eq.simplify() x_eq = x_eq.expand().collect(Omega) y_eq = y_eq.expand().collect(Omega) display( x_eq, y_eq )
_____no_output_____
MIT
examples/single_particle_in_magnetic_field/Single Particle in Uniform Magnetic Field.ipynb
tnakaicode/ChargedPaticle-LowEnergy
Finally
display( x_eq, y_eq, z_sol ) display( vx_eq, vy_eq, vz_sol ) display( omega_eq )
_____no_output_____
MIT
examples/single_particle_in_magnetic_field/Single Particle in Uniform Magnetic Field.ipynb
tnakaicode/ChargedPaticle-LowEnergy
Manpower Planning

Level: Advanced

Purpose and Prerequisites

This model is an example of a staffing (manpower planning) problem. In manpower planning problems, choices must be made about recruitment, training, layoffs (redundancy), and the scheduling of working hours. Staffing problems are widespread in both manufacturing and service industries.

What You Will Learn

In this example, we will model and solve a manpower planning problem. We have three types of workers with different skill levels. For each year in the planning horizon, the forecasted number of required workers with specific skills is given. It is possible to recruit new people, train workers to improve their skills, or shift them to a part-time working arrangement. The aim is to create an optimal multi-period operation plan that achieves one of the following two objectives: minimizing the total number of layoffs over the whole horizon or minimizing total costs.

More information on this type of model can be found in example 5 of the fifth edition of Model Building in Mathematical Programming, by H. Paul Williams on pages 256-257 and 303-304.

This modeling example is at the advanced level, where we assume that you know Python and the Gurobi Python API and that you have advanced knowledge of building mathematical optimization models. Typically, the objective function and/or constraints of these examples are complex or require advanced features of the Gurobi Python API.

**Note:** You can download the repository containing this and other examples by clicking [here](https://github.com/Gurobi/modeling-examples/archive/master.zip). In order to run this Jupyter Notebook properly, you must have a Gurobi license. If you do not have one, you can request an [evaluation license](https://www.gurobi.com/downloads/request-an-evaluation-license/?utm_source=Github&utm_medium=website_JupyterME&utm_campaign=CommercialDataScience) as a *commercial user*, or download a [free license](https://www.gurobi.com/academia/academic-program-and-licenses/?utm_source=Github&utm_medium=website_JupyterME&utm_campaign=AcademicDataScience) as an *academic user*.

---

Problem Description

A company is changing how it runs its business, and therefore its staffing needs are expected to change.

Through the purchase of new machinery, it is expected that there will be less need for unskilled labor and more need for skilled and semi-skilled labor. In addition, a lower sales forecast, driven by an economic slowdown that is predicted to happen in the next year, is expected to further reduce labor needs across all categories.

The forecast for labor needs over the next three years is as follows:

| | Unskilled | Semi-skilled | Skilled |
| --- | --- | --- | --- |
| Current Strength | 2000 | 1500 | 1000 |
| Year 1 | 1000 | 1400 | 1000 |
| Year 2 | 500 | 2000 | 1500 |
| Year 3 | 0 | 2500 | 2000 |

The company needs to determine the following for each of the next three years:

- Recruitment
- Retraining
- Layoffs (redundancy)
- Part-time vs. full-time employees

It is important to note that labor is subject to a certain level of natural attrition each year. The rate of attrition is relatively high in the first year after a new employee is hired and relatively low in subsequent years. The expected attrition rates are as follows:

| | Unskilled (%) | Semi-skilled (%) | Skilled (%) |
| --- | --- | --- | --- |
| $< 1$ year of service | 25 | 20 | 10 |
| $\geq 1$ year of service | 10 | 5 | 5 |

All of the current workers have been with the company for at least one year.
RecruitmentEach year, it is possible to hire a limited number of employees in each classification from outside the company as follows:| Unskilled | Semi-skilled | Skilled || --- | --- | --- || 500 | 800 | 500 | RetrainingEach year, it is possible to train up to 200 unskilled workers to make them into semi-skilled workers. This training costs the company $\$400$ per worker.In addition, it is possible train semi-skilled workers to make them into skilled workers. However, this number can not exceed 25% of the current skilled labor force and this training costs $\$500$ per worker.Lastly, downgrading workers to a lower skill level can be done. However, 50% of the downgraded workers will leave the company, increasing the natural attrition rate described above. LayoffsEach laid-off worker is entitled to a separation payment at the rate of $\$200$ per unskilled worker and $\$500$ per semi-skilled or skilled worker. Excess EmployeesIt is possible to have workers in excess of the actual number needed, up to 150 workers in total in any given year, but this will result in the following additional cost per excess employee per year.| Unskilled | Semi-skilled | Skilled || --- | --- | --- || $\$1500$ | $\$2000$ | $\$3000$ | Part-time WorkersUp to 50 employees of each skill level can be assigned to part-time work. The cost of doing so (per employee, per year) is as follows:| Unskilled | Semi-skilled | Skilled || --- | --- | --- || $\$500$ | $\$400$ | $\$400$ |**Note:** A part-time employee is half as productive as a full-time employee.If the company’s objective is to minimize layoffs, what plan should they adopt in order to do this?If their objective is to minimize costs, how much could they further reduce costs?How can they determine the annual savings possible across each job?--- Model Formulation Sets and Indices$t \in \text{Years}=\{1,2,3\}$: Set of years.$s \in \text{Skills}=\{s_1: \text{unskilled},s_2: \text{semi_skilled},s_3: \text{skilled}\}$: Set of skills. Parameters$\text{rookie_attrition} \in [0,1] \subset \mathbb{R}^+$: Percentage of workers who leave within the first year of service.$\text{veteran_attrition} \in [0,1] \subset \mathbb{R}^+$: Percentage of workers who leave after the first year of service.$\text{demoted_attrition} \in [0,1] \subset \mathbb{R}^+$: Percentage of workers who leave the company after a demotion.$\text{parttime_cap} \in [0,1] \subset \mathbb{R}^+$: Productivity of part-time workers with respect to full-time workers.$\text{max_train_unskilled} \in \mathbb{N}$: Maximum number of unskilled workers that can be trained on any given year.$\text{max_train_semiskilled} \in [0,1] \subset \mathbb{R}^+$: Maximum proportion of semi-skilled workers (w.r.t. 
skilled ones) that can be trained on any given year.$\text{max_parttime} \in \mathbb{N}$: Maximum number of part-time workers of each skill at any given year.$\text{max_overmanning} \in \mathbb{N}$: Maximum number of overmanned workers at any given year.$\text{max_hiring}_s \in \mathbb{N}$: Maximum number of workers of skill $s$ that can be hired any given year.$\text{training_cost}_s \in \mathbb{R}^+$: Cost for training a worker of skill $s$ to the next level.$\text{layoff_cost}_s \in \mathbb{R}^+$: Cost for laying off a worker of skill $s$.$\text{parttime_cost}_s \in \mathbb{R}^+$: Cost for assigning a worker of skill $s$ to part-time work.$\text{overmanning_cost}_s \in \mathbb{R}^+$: Yearly cost for having excess manpower of skill $s$.$\text{curr_workforce}_s \in \mathbb{N}$: Current manpower of skill $s$ at the beginning of the planning horizon.$\text{demand}_{t,s} \in \mathbb{N}$: Required manpower of skill $s$ in year $t$. Decision Variables$\text{hire}_{t,s} \in [0,\text{max_hiring}_s] \subset \mathbb{R}^+$: Number of workers of skill $s$ to hire in year $t$.$\text{part_time}_{t,s} \in [0,\text{max_parttime}] \subset \mathbb{R}^+$: Number of part-time workers of skill $s$ working in year $t$.$\text{workforce}_{t,s} \in \mathbb{R}^+$: Number of workers of skill $s$ that are available in year $t$.$\text{layoff}_{t,s} \in \mathbb{R}^+$: Number of workers of skill $s$ that are laid off in year $t$.$\text{excess}_{t,s} \in \mathbb{R}^+$: Number of workers of skill $s$ that are overmanned in year $t$.$\text{train}_{t,s,s'} \in \mathbb{R}^+$: Number of workers of skill $s$ to retrain to skill $s'$ in year $t$. Objective Function- **Layoffs:** Minimize the total layoffs during the planning horizon.\begin{equation}\text{Minimize} \quad Z = \sum_{t \in \text{Years}}\sum_{s \in \text{Skills}}{\text{layoff}_{t,s}}\end{equation}- **Cost:** Minimize the total cost (in USD) incurred by training, overmanning, part-time workers, and layoffs in the planning horizon.\begin{equation}\text{Minimize} \quad W = \sum_{t \in \text{Years}}{\{\text{training_cost}_{s_1}*\text{train}_{t,s1,s2} + \text{training_cost}_{s_2}*\text{train}_{t,s2,s3}\}}\end{equation}\begin{equation}+ \sum_{t \in \text{Years}}\sum_{s \in \text{Skills}}{\{\text{parttime_cost}*\text{part_time}_{t,s} + \text{layoff_cost}_s*\text{layoff}_{t,s} + \text{overmanning_cost}_s*\text{excess}_{t,s}\}}\end{equation} Constraints- **Initial Balance:** Workforce $s$ available in year $t=1$ is equal to the workforce of the previous year, recent hires, promoted and demoted workers (after accounting for attrition), minus layoffs and transferred workers.\begin{equation}\text{workforce}_{1,s} = (1-\text{veteran_attrition}_s)*\text{curr_workforce} + (1-\text{rookie_attrition}_s)*\text{hire}_{1,s} \end{equation}\begin{equation}+ \sum_{s' \in \text{Skills} | s' < s}{\{(1-\text{veteran_attrition})*\text{train}_{1,s',s} - \text{train}_{1,s,s'}\}} \end{equation}\begin{equation}+ \sum_{s' \in \text{Skills} | s' > s}{\{(1-\text{demoted_attrition})*\text{train}_{1,s',s} - \text{train}_{1,s,s'}\}} - \text{layoff}_{1,s} \qquad \forall s \in \text{Skills}\end{equation}- **Balance:** Workforce $s$ available in year $t > 1$ is equal to the workforce of the previous year, recent hires, promoted and demoted workers (after accounting for attrition), minus layoffs and transferred workers.\begin{equation}\text{workforce}_{t,s} = (1-\text{veteran_attrition}_s)*\text{workforce}_{t-1,s} + (1-\text{rookie_attrition}_s)*\text{hire}_{t,s} \end{equation}\begin{equation}+ \sum_{s' 
\in \text{Skills} | s' < s}{\{(1-\text{veteran_attrition})*\text{train}_{t,s',s} - \text{train}_{t,s,s'}\}}\end{equation}\begin{equation}+ \sum_{s' \in \text{Skills} | s' > s}{\{(1-\text{demoted_attrition})*\text{train}_{t,s',s} - \text{train}_{t,s,s'}\}} - \text{layoff}_{t,s} \quad \forall (t > 1,s) \in \text{Years} \times \text{Skills}\end{equation}- **Unskilled Training:** Unskilled workers trained in year $t$ cannot exceed the maximum allowance. Unskilled workers cannot be immediately transformed into skilled workers.\begin{equation}\text{train}_{t,s_1,s_2} \leq 200 \quad \forall t \in \text{Years}\end{equation}\begin{equation}\text{train}_{t,s_1,s_3} = 0 \quad \forall t \in \text{Years}\end{equation}- **Semi-skilled Training:** Semi-skilled workers trained in year $t$ cannot exceed the maximum allowance.\begin{equation}\text{train}_{t,s_2,s_3} \leq 0.25*\text{workforce}_{t,s_3} \quad \forall t \in \text{Years}\end{equation}- **Overmanning:** Excess workers in year $t$ cannot exceed the maximum allowance.\begin{equation}\sum_{s \in \text{Skills}}{\text{excess}_{t,s}} \leq \text{max_overmanning} \quad \forall t \in \text{Years}\end{equation}- **Demand:** Workforce $s$ available in year $t$ equals the required number of workers plus the excess workers and the part-time workers.\begin{equation}\text{workforce}_{t,s} = \text{demand}_{t,s} + \text{excess}_{t,s} + \text{parttime_cap}*\text{part_time}_{t,s} \quad \forall (t,s) \in \text{Years} \times \text{Skills}\end{equation}--- Python Implementation We import the Gurobi Python Module and other Python libraries.
import gurobipy as gp import numpy as np import pandas as pd from gurobipy import GRB # tested with Python 3.7.0 & Gurobi 9.0
_____no_output_____
Apache-2.0
documents/Advanced/ManpowerPlanning/manpower_planning.ipynb
biancaitian/gurobi-official-examples
Input Data We define all the input data of the model.
# Parameters years = [1, 2, 3] skills = ['s1', 's2', 's3'] curr_workforce = {'s1': 2000, 's2': 1500, 's3': 1000} demand = { (1, 's1'): 1000, (1, 's2'): 1400, (1, 's3'): 1000, (2, 's1'): 500, (2, 's2'): 2000, (2, 's3'): 1500, (3, 's1'): 0, (3, 's2'): 2500, (3, 's3'): 2000 } rookie_attrition = {'s1': 0.25, 's2': 0.20, 's3': 0.10} veteran_attrition = {'s1': 0.10, 's2': 0.05, 's3': 0.05} demoted_attrition = 0.50 max_hiring = { (1, 's1'): 500, (1, 's2'): 800, (1, 's3'): 500, (2, 's1'): 500, (2, 's2'): 800, (2, 's3'): 500, (3, 's1'): 500, (3, 's2'): 800, (3, 's3'): 500 } max_overmanning = 150 max_parttime = 50 parttime_cap = 0.50 max_train_unskilled = 200 max_train_semiskilled = 0.25 training_cost = {'s1': 400, 's2': 500} layoff_cost = {'s1': 200, 's2': 500, 's3': 500} parttime_cost = {'s1': 500, 's2': 400, 's3': 400} overmanning_cost = {'s1': 1500, 's2': 2000, 's3': 3000}
_____no_output_____
Apache-2.0
documents/Advanced/ManpowerPlanning/manpower_planning.ipynb
biancaitian/gurobi-official-examples
Model Deployment We create a model and the variables. For each of the three skill levels and for each year, we create variables for the number of workers that get recruited, are transferred into part-time work, are available as workers, are laid off, or are overmanned. For each pair of skill levels and each year, we have a variable for the number of workers that get retrained to a higher or lower skill level. The numbers of part-time workers and of newly recruited workers are limited by their respective maximums.
manpower = gp.Model('Manpower planning') hire = manpower.addVars(years, skills, ub=max_hiring, name="Hire") part_time = manpower.addVars(years, skills, ub=max_parttime, name="Part_time") workforce = manpower.addVars(years, skills, name="Available") layoff = manpower.addVars(years, skills, name="Layoff") excess = manpower.addVars(years, skills, name="Overmanned") train = manpower.addVars(years, skills, skills, name="Train")
Using license file c:\gurobi\gurobi.lic Set parameter TokenServer to value SANTOS-SURFACE-
Apache-2.0
documents/Advanced/ManpowerPlanning/manpower_planning.ipynb
biancaitian/gurobi-official-examples
Next, we insert the constraints. For each skill level and each year, the balance constraints require that the available workforce equals last year's workforce (or the current workforce in the first year) plus the new recruits and the workers retrained into this level, minus the layoffs and the workers retrained out of this level to a different skill. Attrition is applied to continuing workers, new recruits and retrained workers, since a certain share of people leaves the company each year. These constraints describe how the number of employed workers changes from year to year.
#1.1 & 1.2 Balance Balance = manpower.addConstrs( (workforce[year, level] == (1-veteran_attrition[level])*(curr_workforce[level] if year == 1 else workforce[year-1, level]) + (1-rookie_attrition[level])*hire[year, level] + gp.quicksum((1- veteran_attrition[level])* train[year, level2, level] -train[year, level, level2] for level2 in skills if level2 < level) + gp.quicksum((1- demoted_attrition)* train[year, level2, level] -train[year, level, level2] for level2 in skills if level2 > level) - layoff[year, level] for year in years for level in skills), "Balance")
_____no_output_____
Apache-2.0
documents/Advanced/ManpowerPlanning/manpower_planning.ipynb
biancaitian/gurobi-official-examples
The unskilled training constraints ensure that, due to capacity limitations, at most 200 workers per year can be retrained from Unskilled to Semi-skilled. Also, no worker can be retrained from Unskilled directly to Skilled within a single year.
#2.1 & 2.2 Unskilled training UnskilledTrain1 = manpower.addConstrs((train[year, 's1', 's2'] <= max_train_unskilled for year in years), "Unskilled_training1") UnskilledTrain2 = manpower.addConstrs((train[year, 's1', 's3'] == 0 for year in years), "Unskilled_training2")
_____no_output_____
Apache-2.0
documents/Advanced/ManpowerPlanning/manpower_planning.ipynb
biancaitian/gurobi-official-examples
The semi-skilled training constraints state that the number of Semi-skilled workers retrained to Skilled is limited to no more than one quarter of the skilled labor force at that time. This is again due to capacity limitations.
#3. Semi-skilled training SemiskilledTrain = manpower.addConstrs((train[year,'s2', 's3'] <= max_train_semiskilled * workforce[year,'s3'] for year in years), "Semiskilled_training")
_____no_output_____
Apache-2.0
documents/Advanced/ManpowerPlanning/manpower_planning.ipynb
biancaitian/gurobi-official-examples
The overmanning constraints ensure that the total overmanning over all skill levels in one year is no more than 150.
#4. Overmanning Overmanning = manpower.addConstrs((excess.sum(year, '*') <= max_overmanning for year in years), "Overmanning")
_____no_output_____
Apache-2.0
documents/Advanced/ManpowerPlanning/manpower_planning.ipynb
biancaitian/gurobi-official-examples
The demand constraints ensure that, for each skill level and each year, the available workforce equals the required number of workers plus the overmanned workers plus the part-time workers, with each part-time worker counted at half capacity (the parttime_cap factor).
#5. Demand Demand = manpower.addConstrs((workforce[year, level] == demand[year,level] + excess[year, level] + parttime_cap * part_time[year, level] for year in years for level in skills), "Requirements")
_____no_output_____
Apache-2.0
documents/Advanced/ManpowerPlanning/manpower_planning.ipynb
biancaitian/gurobi-official-examples
The first objective is to minimize the total number of laid off workers. This can be stated as:
#0.1 Objective Function: Minimize layoffs obj1 = layoff.sum() manpower.setObjective(obj1, GRB.MINIMIZE)
_____no_output_____
Apache-2.0
documents/Advanced/ManpowerPlanning/manpower_planning.ipynb
biancaitian/gurobi-official-examples
The second, alternative objective is to minimize the total cost of retraining, layoffs, part-time work, and overmanning: ```obj2 = gp.quicksum((training_cost[level]*train[year, level, skills[skills.index(level)+1]] if level < 's3' else 0) + layoff_cost[level]*layoff[year, level] + parttime_cost[level]*part_time[year, level] + overmanning_cost[level] * excess[year, level] for year in years for level in skills)``` Next we start the optimization with the objective function that minimizes layoffs, and Gurobi finds the optimal solution.
manpower.optimize()
Gurobi Optimizer version 9.0.0 build v9.0.0rc2 (win64) Optimize a model with 30 rows, 72 columns and 117 nonzeros Model fingerprint: 0x06ec5b66 Coefficient statistics: Matrix range [3e-01, 1e+00] Objective range [1e+00, 1e+00] Bounds range [5e+01, 8e+02] RHS range [2e+02, 3e+03] Presolve removed 18 rows and 44 columns Presolve time: 0.01s Presolved: 12 rows, 28 columns, 56 nonzeros Iteration Objective Primal Inf. Dual Inf. Time 0 8.4000000e+02 6.484375e+01 0.000000e+00 0s 8 8.4179688e+02 0.000000e+00 0.000000e+00 0s Solved in 8 iterations and 0.01 seconds Optimal objective 8.417968750e+02
Apache-2.0
documents/Advanced/ManpowerPlanning/manpower_planning.ipynb
biancaitian/gurobi-official-examples
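The alternative cost objective described earlier is only shown as a snippet and is never executed in this notebook. Below is a hedged sketch of how it could be built with standard gurobipy calls (`gp.quicksum`, `setObjective`), using only the variables and cost dictionaries defined above; the re-optimization lines are left commented out so the analysis that follows still refers to the layoff-minimizing solution.

```python
# Sketch only (not run in this notebook): the alternative total-cost objective.
obj2 = gp.quicksum(
    (training_cost[level] * train[year, level, skills[skills.index(level) + 1]]
     if level != 's3' else 0)                        # retraining cost (s1->s2, s2->s3)
    + layoff_cost[level] * layoff[year, level]       # layoff cost
    + parttime_cost[level] * part_time[year, level]  # part-time cost
    + overmanning_cost[level] * excess[year, level]  # overmanning cost
    for year in years for level in skills)

# manpower.setObjective(obj2, GRB.MINIMIZE)
# manpower.optimize()
```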
Analysis The minimum number of layoffs is 841.80. The optimal policies to achieve this minimum number of layoffs are given below. Hiring Plan This plan determines the number of new workers to hire in each year of the planning horizon (rows) and for each skill level (columns). For example, in year 2 we are going to hire 649.3 Semi-skilled workers.
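The 841.80 figure quoted above can also be read programmatically from the layoff-minimizing solve; `ObjVal` is a standard Gurobi model attribute (a small addition, not part of the original notebook):

```python
print(round(manpower.ObjVal, 2))  # 841.8 total layoffs over the three-year horizon
```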
rows = years.copy() columns = skills.copy() hire_plan = pd.DataFrame(columns=columns, index=rows, data=0.0) for year, level in hire.keys(): if (abs(hire[year, level].x) > 1e-6): hire_plan.loc[year, level] = np.round(hire[year, level].x, 1) hire_plan
_____no_output_____
Apache-2.0
documents/Advanced/ManpowerPlanning/manpower_planning.ipynb
biancaitian/gurobi-official-examples
Training and Demotions Plan This plan defines the number of workers to promote by training (or demote) at each year of the planning horizon. For example, in year 1 we are going to demote 168.4 skilled (s3) workers to the level of semi-skilled (s2).
rows = years.copy() columns = ['{0} to {1}'.format(level1, level2) for level1 in skills for level2 in skills if level1 != level2] train_plan = pd.DataFrame(columns=columns, index=rows, data=0.0) for year, level1, level2 in train.keys(): col = '{0} to {1}'.format(level1, level2) if (abs(train[year, level1, level2].x) > 1e-6): train_plan.loc[year, col] = np.round(train[year, level1, level2].x, 1) train_plan
_____no_output_____
Apache-2.0
documents/Advanced/ManpowerPlanning/manpower_planning.ipynb
biancaitian/gurobi-official-examples
Layoffs Plan This plan determines the number of workers of each skill level to lay off in each year of the planning horizon. For example, we are going to lay off 232.5 Unskilled workers in year 3.
rows = years.copy() columns = skills.copy() layoff_plan = pd.DataFrame(columns=columns, index=rows, data=0.0) for year, level in layoff.keys(): if (abs(layoff[year, level].x) > 1e-6): layoff_plan.loc[year, level] = np.round(layoff[year, level].x, 1) layoff_plan
_____no_output_____
Apache-2.0
documents/Advanced/ManpowerPlanning/manpower_planning.ipynb
biancaitian/gurobi-official-examples
Part-time Plan This plan defines the number of part-time workers of each skill level working at each year of the planning horizon. For example, in year 1, we have 50 part-time skilled workers.
rows = years.copy() columns = skills.copy() parttime_plan = pd.DataFrame(columns=columns, index=rows, data=0.0) for year, level in part_time.keys(): if (abs(part_time[year, level].x) > 1e-6): parttime_plan.loc[year, level] = np.round(part_time[year, level].x, 1) parttime_plan
_____no_output_____
Apache-2.0
documents/Advanced/ManpowerPlanning/manpower_planning.ipynb
biancaitian/gurobi-official-examples
Overmanning Plan This plan determines the number of excess workers of each skill level working at each year of the planning horizon. For example, we have 150 Unskilled excess workers in year 3.
rows = years.copy() columns = skills.copy() excess_plan = pd.DataFrame(columns=columns, index=rows, data=0.0) for year, level in excess.keys(): if (abs(excess[year, level].x) > 1e-6): excess_plan.loc[year, level] = np.round(excess[year, level].x, 1) excess_plan
_____no_output_____
Apache-2.0
documents/Advanced/ManpowerPlanning/manpower_planning.ipynb
biancaitian/gurobi-official-examples
1. Basic Regression 1.1 Linear Regression 1.1.1 sklearn.linear_model.LinearRegression https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html#sklearn.linear_model.LinearRegression
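The next cell relies on two helpers, `load_linear_data` and `show_regressor_linear`, that are defined earlier in the notebook and not shown here. A minimal sketch of what a data loader like `load_linear_data` might look like (an assumption added for readability, not the author's exact helper):

```python
import numpy as np

def load_linear_data(point_count=500, max_=10, w=1.0, b=0.0, random_state=None):
    # Hypothetical helper: sample x uniformly in [0, max_] and return y = w*x + b plus Gaussian noise.
    rng = np.random.RandomState(random_state)
    X = rng.uniform(0, max_, point_count).reshape(-1, 1)
    y = w * X.ravel() + b + rng.normal(size=point_count)
    return X, y
```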
X_data, y_data = load_linear_data(point_count=500, max_=10, w=3.2412, b=-5.2941, random_state=10834) X_train, X_test, y_train, y_test = train_test_split(X_data, y_data, random_state=19332) rgs = LinearRegression(fit_intercept=True, normalize=False, copy_X=True, n_jobs=None) rgs.fit(X_train, y_train) rgs.coef_, rgs.intercept_ rgs.score(X_test, y_test) show_regressor_linear(X_test, y_test, rgs.coef_, rgs.intercept_)
_____no_output_____
MIT
02.2.LinearRegression-sklearn.ipynb
LossJ/Statistical-Machine-Learning
Normalization with Normalizer: compute the norm of each sample, then divide each of that sample's features by the norm.
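To make that definition concrete, here is a small self-contained check (not part of the original notebook) that dividing each row by its l2 norm matches what `Normalizer` does:

```python
import numpy as np
from sklearn.preprocessing import Normalizer

X = np.array([[3.0, 4.0],
              [1.0, 2.0]])
manual = X / np.linalg.norm(X, axis=1, keepdims=True)  # divide each sample (row) by its l2 norm
print(np.allclose(manual, Normalizer(norm="l2").fit_transform(X)))  # True
```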
norm = Normalizer(norm="l2", copy=True) X_train_norm = norm.fit_transform(X_train) X_test_norm = norm.transform(X_test) rgs = LinearRegression() rgs.fit(X_train_norm, y_train) rgs.coef_, rgs.intercept_ rgs.score(X_test_norm, y_test) X_train_norm[:10], X_test_norm[:10] X_train[:5] rgs = LinearRegression(fit_intercept=True, normalize=True, # bool. Only takes effect when fit_intercept=True. If True, the regressors X are normalized before regression by subtracting the mean and dividing by the l2 norm. copy_X=False, n_jobs=None) rgs.fit(X_train, y_train) X_train[:5] X_test[:5] rgs.score(X_test, y_test) X_test[:5] rgs.coef_, rgs.intercept_ %%timeit rgs = LinearRegression(n_jobs=2) rgs.fit(X_train, y_train) %%timeit rgs = LinearRegression(n_jobs=-1) rgs.fit(X_train, y_train) %%timeit rgs = LinearRegression(n_jobs=None) rgs.fit(X_train, y_train)
376 µs ± 35.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
MIT
02.2.LinearRegression-sklearn.ipynb
LossJ/Statistical-Machine-Learning
1.1.2 sklearn.linear_model.SGDRegressor https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDRegressor.html#sklearn.linear_model.SGDRegressor
X_data, y_data = load_linear_data(point_count=500, max_=10, w=3.2412, b=-5.2941, random_state=10834) X_train, X_test, y_train, y_test = train_test_split(X_data, y_data, random_state=19332) rgs = SGDRegressor(random_state=10190) rgs.fit(X_train, y_train) rgs.score(X_test, y_test)
_____no_output_____
MIT
02.2.LinearRegression-sklearn.ipynb
LossJ/Statistical-Machine-Learning
Standardization with StandardScaler https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html#sklearn.preprocessing.StandardScaler z = (x - u) / s, where u is the mean and s is the standard deviation
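A quick self-contained check (not part of the original notebook) that `StandardScaler` implements exactly z = (x - u) / s with the population standard deviation:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0], [2.0], [3.0], [4.0]])
scaler = StandardScaler().fit(X)
manual = (X - X.mean(axis=0)) / X.std(axis=0)    # z = (x - u) / s, ddof=0
print(np.allclose(manual, scaler.transform(X)))  # True
```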
scaler = StandardScaler(copy=True, with_mean=True, with_std=True) X_train_scaler = scaler.fit_transform(X_train) X_test_scaler = scaler.transform(X_test) scaler.mean_, scaler.scale_ rgs = SGDRegressor( loss='squared_loss', # ‘squared_loss’, ‘huber’, ‘epsilon_insensitive’, or ‘squared_epsilon_insensitive’ penalty='l2', # penalty (regularization) term alpha=0.0001, # regularization strength fit_intercept=True, max_iter=100, tol=0.001, shuffle=True, verbose=0, epsilon=0.1, random_state=10190, learning_rate='invscaling', eta0=0.01, power_t=0.25, early_stopping=True, validation_fraction=0.1, n_iter_no_change=5, warm_start=False, average=False ) rgs.fit(X_train_scaler, y_train) rgs.coef_, rgs.intercept_ rgs.score(X_test_scaler, y_test) show_regressor_linear(X_test_scaler, y_test, pred_coef=rgs.coef_, pred_intercept=rgs.intercept_)
_____no_output_____
MIT
02.2.LinearRegression-sklearn.ipynb
LossJ/Statistical-Machine-Learning
1.2 Polynomial Regression
def load_data_from_func(func=lambda X_data: 0.1383 * np.square(X_data) - 1.2193 * X_data + 2.4096, x_min=0, x_max=10, n_samples=500, loc=0, scale=1, random_state=None): if random_state is not None and isinstance(random_state, int): np.random.seed(random_state) x = np.random.uniform(x_min, x_max, n_samples) y = func(x) noise = np.random.normal(loc=loc, scale=scale, size=n_samples) y += noise return x.reshape([-1, 1]), y X_data, y_data = load_data_from_func(n_samples=500, random_state=10392)
_____no_output_____
MIT
02.2.LinearRegression-sklearn.ipynb
LossJ/Statistical-Machine-Learning
https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PolynomialFeatures.html#sklearn.preprocessing.PolynomialFeatures
X_train, X_test, y_train, y_test = train_test_split(X_data, y_data, random_state=10319) poly = PolynomialFeatures() # [1, a, b, a^2, ab, b^2] X_train_poly = poly.fit_transform(X_train) X_test_poly = poly.transform(X_test) X_train_poly.shape rgs = LinearRegression() rgs.fit(X_train_poly, y_train) rgs.score(X_test_poly, y_test) y_pred = rgs.predict(X_test_poly) def show_regression_line(X_data, y_data, y_pred): plt.figure(figsize=[10, 5]) plt.xlabel("x") plt.ylabel("y") if X_data.ndim == 2: X_data = X_data.reshape(-1) plt.scatter(X_data, y_data) idx = np.argsort(X_data) X_data = X_data[idx] y_pred = y_pred[idx] plt.plot(X_data, y_pred, color="darkorange") plt.show() show_regression_line(X_test, y_test, y_pred)
_____no_output_____
MIT
02.2.LinearRegression-sklearn.ipynb
LossJ/Statistical-Machine-Learning
2. California Housing Dataset
df = fetch_california_housing(data_home="./data", as_frame=True) X_data = df['data'] X_data.describe() X_train, X_test, y_train, y_test = train_test_split(X_data, df.target, random_state=1, shuffle=True)
_____no_output_____
MIT
02.2.LinearRegression-sklearn.ipynb
LossJ/Statistical-Machine-Learning
2.1 Linear Regression
rgs = LinearRegression() rgs.fit(X_train, y_train) rgs.score(X_test, y_test) scaler = StandardScaler() X_train_scaler = scaler.fit_transform(X_train) X_test_scaler = scaler.transform(X_test) rgs = LinearRegression() rgs.fit(X_train_scaler, y_train) rgs.score(X_test_scaler, y_test)
_____no_output_____
MIT
02.2.LinearRegression-sklearn.ipynb
LossJ/Statistical-Machine-Learning
2.2 Ridge Regression https://scikit-learn.org/stable/modules/linear_model.html#ridge-regression-and-classification https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html#sklearn.linear_model.Ridge
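For context, Ridge fits a linear model with an l2 penalty on the coefficients; this is the standard scikit-learn formulation, added here only for reference:

\begin{equation}\min_{w} \; \lVert Xw - y \rVert_2^2 + \alpha \lVert w \rVert_2^2\end{equation}

A larger `alpha` shrinks the coefficients towards zero but does not set them exactly to zero, which is why the `rgs.coef_` values printed below stay dense.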
rgs = Ridge(alpha=1.0, solver="auto") rgs.fit(X_train, y_train) rgs.score(X_test, y_test) rgs.coef_
_____no_output_____
MIT
02.2.LinearRegression-sklearn.ipynb
LossJ/Statistical-Machine-Learning
https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RidgeCV.html#sklearn.linear_model.RidgeCV 2.2.1 Cross-Validation
rgs = RidgeCV( alphas=(0.001, 0.01, 0.1, 1.0, 10.0), fit_intercept=True, normalize=False, scoring=None, # If None: negative mean squared error when cv is 'auto' or None, otherwise the r2 score. scorer(estimator, X, y) cv=None, # int, cross-validation generator or an iterable, default=None gcv_mode='auto', # {'auto', 'svd', 'eigen'}, default='auto' store_cv_values=None, # bool, whether to store the cross-validation values for each alpha in the cv_values_ attribute; only effective when cv=None ) rgs.fit(X_train, y_train) rgs.best_score_ rgs.score(X_test, y_test) rgs = RidgeCV( alphas=(0.001, 0.01, 0.1, 1.0, 10.0), fit_intercept=True, normalize=False, scoring=None, # If None: negative mean squared error when cv is 'auto' or None, otherwise the r2 score. scorer(estimator, X, y) cv=10, # int, cross-validation generator or an iterable, default=None gcv_mode='auto', # {'auto', 'svd', 'eigen'}, default='auto' store_cv_values=None, # bool, whether to store the cross-validation values for each alpha in the cv_values_ attribute; only effective when cv=None ) rgs.fit(X_train, y_train) rgs.best_score_, rgs.score(X_test, y_test)
_____no_output_____
MIT
02.2.LinearRegression-sklearn.ipynb
LossJ/Statistical-Machine-Learning
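A small follow-up to the RidgeCV cells above: the regularization strength selected by cross-validation is exposed as the `alpha_` attribute (a documented RidgeCV attribute, shown here as an optional addition):

```python
# After fitting a RidgeCV instance `rgs` as above:
print(rgs.alpha_)       # the alpha chosen by cross-validation
print(rgs.best_score_)  # score achieved by that alpha
```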
2.3 Lasso Regression https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html#sklearn.linear_model.Lasso https://scikit-learn.org/stable/modules/linear_model.html#lasso
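For context, Lasso adds an l1 penalty; the standard scikit-learn objective (added for reference only) is:

\begin{equation}\min_{w} \; \frac{1}{2 n_{\text{samples}}} \lVert Xw - y \rVert_2^2 + \alpha \lVert w \rVert_1\end{equation}

Unlike the l2 penalty, the l1 term drives some coefficients exactly to zero, which is visible in the `rgs.coef_` output below.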
rgs = Lasso() rgs.fit(X_train, y_train) rgs.score(X_test, y_test) rgs.coef_
_____no_output_____
MIT
02.2.LinearRegression-sklearn.ipynb
LossJ/Statistical-Machine-Learning
2.4 Polynomial Regression https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PolynomialFeatures.html?highlight=polynomialfeatures#sklearn.preprocessing.PolynomialFeatures
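Before running the next cell, it helps to know how many columns to expect. With the 8 California-housing features and degree=2, the counts can be worked out directly (a small check, not part of the original notebook; `math.comb` needs Python 3.8+):

```python
from math import comb

n = 8  # number of input features in the California housing data
# include_bias=True, degree=2: 1 bias + n linear + n squared + C(n, 2) pairwise products
print(1 + n + n + comb(n, 2))  # 45
# interaction_only=True drops the pure squares
print(1 + n + comb(n, 2))      # 37
```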
poly = PolynomialFeatures(degree=2, interaction_only=False, include_bias=True) X_train_poly = poly.fit_transform(X_train) # [1, a, b, a^2, ab, b^2] X_train_poly.shape poly.get_feature_names() X_test_poly = poly.transform(X_test) rgs = LinearRegression() rgs.fit(X_train_poly, y_train) rgs.score(X_test_poly, y_test) poly = PolynomialFeatures(degree=2, interaction_only=True, # whether to keep only interaction (cross-product) features and drop the pure power terms include_bias=True, order="C") # Order of output array in the dense case. ‘F’ order is faster to compute, but may slow down subsequent estimators. X_train_poly = poly.fit_transform(X_train) X_test_poly = poly.transform(X_test) X_train_poly.shape poly.get_feature_names() rgs = LinearRegression() rgs.fit(X_train_poly, y_train) rgs.score(X_test_poly, y_test)
_____no_output_____
MIT
02.2.LinearRegression-sklearn.ipynb
LossJ/Statistical-Machine-Learning
Local Time
hdr['LOCTIME'] #local time at start of exposure in header images_time = [] for i in range(len(images)): im,hdr = fits.getdata(images[i],header=True) #reading the fits image (data + header) images_time.append(hdr['LOCTIME']) update_progress((i+1.)/len(images)) print images_time #our local time series
['22:29:57', '22:32:14', '22:34:31', '22:36:48', '22:39:05', '22:41:22', '22:43:39', '22:45:57', '22:48:14', '22:50:31', '22:52:48', '22:55:06', '22:57:23', '22:59:40', '23:01:57', '23:04:14', '23:06:31', '23:08:48', '23:11:05', '23:13:23', '23:15:40', '23:17:57', '23:20:14', '23:22:31', '23:24:48', '23:27:05', '23:29:22', '23:31:40', '23:33:57', '23:36:14', '23:38:31', '23:40:48', '23:43:05', '23:45:22', '23:47:40', '23:49:57', '23:52:14', '23:54:31', '23:56:48', '23:59:05', '00:01:22', '00:03:39', '00:05:57', '00:08:14', '00:10:31', '00:12:48', '00:15:05', '00:17:22', '00:19:39', '00:21:56', '00:24:13', '00:26:31', '00:28:48', '00:31:05', '00:33:22', '00:35:39', '00:37:56', '00:40:13', '00:42:30', '00:44:48', '00:47:05', '00:49:22', '00:51:39', '00:53:56', '00:56:13', '00:58:30', '01:00:47', '01:03:04', '01:05:22', '01:07:39', '01:09:56', '01:12:13', '01:14:30', '01:16:47', '01:19:04', '01:21:22', '01:23:39', '01:25:56', '01:28:13', '01:30:30', '01:32:47', '01:35:04', '01:37:21', '01:39:38', '01:41:56', '01:44:13', '01:46:30', '01:48:47', '01:51:04', '01:58:51', '02:01:08', '02:03:25', '02:05:42', '02:07:59', '02:10:16', '02:12:33', '02:14:50', '02:17:08', '02:19:25', '02:21:42', '02:23:59', '02:26:16', '02:28:33', '02:30:50', '02:33:07', '02:35:24', '02:37:42', '02:39:59', '02:42:16', '02:44:33', '02:46:50', '02:49:07', '02:51:24', '02:53:41', '02:55:59', '02:58:16', '03:00:33', '03:02:50', '03:05:07', '03:07:24', '03:09:41', '03:11:59', '03:14:16', '03:16:33', '03:18:50', '03:21:07', '03:23:24', '03:25:41', '03:27:58', '03:30:16', '03:32:33', '03:34:50', '03:37:07']
CC-BY-4.0
development/Obtain Universal Time UT using astropy.ipynb
waltersmartinsf/iraf_task
FITS Time
fits_time = [] for i in range(len(images)): im,hdr = fits.getdata(images[i],header=True) #reading the fits image (data + header) fits_time.append(hdr['DATE']) update_progress((i+1.)/len(images)) print fits_time
['2016-02-08T17:01:06', '2016-02-08T17:01:07', '2016-02-08T17:01:07', '2016-02-08T17:01:07', '2016-02-08T17:01:08', '2016-02-08T17:01:09', '2016-02-08T17:01:10', '2016-02-08T17:01:10', '2016-02-08T17:01:10', '2016-02-08T17:01:11', '2016-02-08T17:01:11', '2016-02-08T17:01:12', '2016-02-08T17:01:12', '2016-02-08T17:01:14', '2016-02-08T17:01:15', '2016-02-08T17:01:16', '2016-02-08T17:01:16', '2016-02-08T17:01:16', '2016-02-08T17:01:17', '2016-02-08T17:01:18', '2016-02-08T17:01:18', '2016-02-08T17:01:18', '2016-02-08T17:01:19', '2016-02-08T17:01:19', '2016-02-08T17:01:20', '2016-02-08T17:01:20', '2016-02-08T17:01:21', '2016-02-08T17:01:21', '2016-02-08T17:01:21', '2016-02-08T17:01:22', '2016-02-08T17:01:22', '2016-02-08T17:01:22', '2016-02-08T17:01:23', '2016-02-08T17:01:23', '2016-02-08T17:01:24', '2016-02-08T17:01:24', '2016-02-08T17:01:25', '2016-02-08T17:01:25', '2016-02-08T17:01:25', '2016-02-08T17:01:26', '2016-02-08T17:01:26', '2016-02-08T17:01:27', '2016-02-08T17:01:27', '2016-02-08T17:01:28', '2016-02-08T17:01:30', '2016-02-08T17:01:30', '2016-02-08T17:01:31', '2016-02-08T17:01:31', '2016-02-08T17:01:31', '2016-02-08T17:01:32', '2016-02-08T17:01:32', '2016-02-08T17:01:33', '2016-02-08T17:01:33', '2016-02-08T17:01:35', '2016-02-08T17:01:36', '2016-02-08T17:01:38', '2016-02-08T17:01:39', '2016-02-08T17:01:41', '2016-02-08T17:01:42', '2016-02-08T17:01:43', '2016-02-08T17:01:44', '2016-02-08T17:01:44', '2016-02-08T17:01:46', '2016-02-08T17:01:47', '2016-02-08T17:01:49', '2016-02-08T17:01:50', '2016-02-08T17:01:50', '2016-02-08T17:01:51', '2016-02-08T17:01:52', '2016-02-08T17:01:53', '2016-02-08T17:01:54', '2016-02-08T17:01:55', '2016-02-08T17:01:56', '2016-02-08T17:01:58', '2016-02-08T17:01:58', '2016-02-08T17:01:59', '2016-02-08T17:01:59', '2016-02-08T17:02:00', '2016-02-08T17:02:00', '2016-02-08T17:02:00', '2016-02-08T17:02:01', '2016-02-08T17:02:01', '2016-02-08T17:02:02', '2016-02-08T17:02:02', '2016-02-08T17:02:02', '2016-02-08T17:02:03', '2016-02-08T17:02:03', '2016-02-08T17:02:04', '2016-02-08T17:02:04', '2016-02-08T17:02:05', '2016-02-08T17:02:05', '2016-02-08T17:02:06', '2016-02-08T17:02:06', '2016-02-08T17:02:06', '2016-02-08T17:02:07', '2016-02-08T17:02:07', '2016-02-08T17:02:08', '2016-02-08T17:02:08', '2016-02-08T17:02:09', '2016-02-08T17:02:09', '2016-02-08T17:02:09', '2016-02-08T17:02:10', '2016-02-08T17:02:10', '2016-02-08T17:02:11', '2016-02-08T17:02:11', '2016-02-08T17:02:12', '2016-02-08T17:02:12', '2016-02-08T17:02:13', '2016-02-08T17:02:13', '2016-02-08T17:02:13', '2016-02-08T17:02:14', '2016-02-08T17:02:14', '2016-02-08T17:00:55', '2016-02-08T17:00:55', '2016-02-08T17:00:56', '2016-02-08T17:00:56', '2016-02-08T17:00:56', '2016-02-08T17:00:57', '2016-02-08T17:00:57', '2016-02-08T17:00:57', '2016-02-08T17:00:58', '2016-02-08T17:00:58', '2016-02-08T17:00:58', '2016-02-08T17:00:59', '2016-02-08T17:01:00', '2016-02-08T17:01:01', '2016-02-08T17:01:03', '2016-02-08T17:01:04', '2016-02-08T17:01:04', '2016-02-08T17:01:05', '2016-02-08T17:01:05', '2016-02-08T17:01:06', '2016-02-08T17:01:06']
CC-BY-4.0
development/Obtain Universal Time UT using astropy.ipynb
waltersmartinsf/iraf_task
Observatory (location)
#geting the observatory im,hdr = fits.getdata(images[0],header=True) #reading the fits image (data + header) observatory_loc = hdr['OBSERVAT'] print observatory_loc
mtbigelow
CC-BY-4.0
development/Obtain Universal Time UT using astropy.ipynb
waltersmartinsf/iraf_task
Obtain UT using local time and observatory
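The goal of this section is to turn the local header time into UT. A minimal sketch of the arithmetic, assuming the MST (UTC-7) offset that the notebook later stores as `file['time-zone'] = 7`; the date and time values below are illustrative, not taken from the data:

```python
from datetime import datetime, timedelta
from astropy.time import Time

local = datetime(2012, 12, 9, 22, 29, 57)  # illustrative local start time from a header
ut = local + timedelta(hours=7)            # local time + 7 h = UT for a UTC-7 site
print(Time(ut.isoformat(), format='isot', scale='utc').jd)
```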
#time formats print list(Time.FORMATS) #Let's using fits time teste = Time(fits_time[0],format=u'fits') teste teste.jd #convert my object test in fits date to julian date #Let's make to all time series serie = np.zeros(len(fits_time)) for i in range(len(fits_time)): serie[i] = Time(fits_time[i],format=u'fits').jd serie #Let's confirm our serie hjd = np.loadtxt('../Results/hjd') #original data hjd
_____no_output_____
CC-BY-4.0
development/Obtain Universal Time UT using astropy.ipynb
waltersmartinsf/iraf_task
Error 404: Date not found! Yes, and I know why! The date in the abxo2b*.fits images is the date when those files were created. Because of that, we need to extract the date from the original images!
os.chdir('../') images = glob.glob('xo2b*.fits') os.chdir(save_path) print images fits_time = [] os.chdir(data_path) for i in range(len(images)): im,hdr = fits.getdata(images[i],header=True) #reading the fits image (data + header) fits_time.append(hdr['DATE']) update_progress((i+1.)/len(images)) os.chdir(save_path) print fits_time #Let's apply this to the whole time series serie = np.zeros(len(fits_time)) for i in range(len(fits_time)): serie[i] = Time(fits_time[i],format=u'fits').jd serie hjd diff = serie-hjd plt.figure() plt.grid() plt.scatter(hjd,diff) plt.ylim(min(diff),max(diff)) im,hdr = fits.getdata('../'+images[0],header=True) hdr hdr['LOCTIME'],hdr['DATE-OBS'] tempo_imagem = hdr['DATE-OBS']+' '+hdr['LOCTIME'] print tempo_imagem teste = Time(tempo_imagem,format=u'iso') teste.jd #Nope hjd[0] #****** change time hdr['UT'] location = '+32:24:59.3 110:44:04.3' teste = Time(hdr['DATE-OBS']+'T'+hdr['UT'],format='isot',scale='utc') teste teste.jd hjd[0]
_____no_output_____
CC-BY-4.0
development/Obtain Universal Time UT using astropy.ipynb
waltersmartinsf/iraf_task
Working with date in header following Kyle's subroutine stcoox.cl in ExoDRPL
import yaml file = yaml.load(open('C:/Users/walte/MEGA/work/codes/iraf_task/input_path.yaml')) RA,DEC, epoch = file['RA'],file['DEC'],file['epoch'] print RA,DEC,epoch hdr['DATE-OBS'], hdr['UT'] local_time = Time(hdr['DATE-OBS']+'T'+hdr['ut'],format='isot') print local_time.jd teste_loc_time = Time('2012-12-09'+'T'+hdr['ut'],format='isot') print teste_loc_time.jd hdr['DATE'] Time(hdr['DATE'],format='fits',scale='tai') hjd[0] Time(hdr['DATE'],format='fits',scale='tai').jd2000 hdr import datetime hdr['DATE-OBS'],hdr['DATE'],hdr['LOCTIME'],hdr['TIME-OBS'],hdr['TIMESYS'] Time(hdr['DATE'],format='fits',scale='utc') print Time(hdr['DATE'],scale='utc',format='isot').jd print Time(hdr['DATE-OBS']+'T'+hdr['TIME-OBS'],scale='utc',format='isot').jd hjd[0], len(hjd) hdr['UTC-OBS'] Time(hdr['IRAF-TLM'],scale='utc',format='isot').jd diff = (Time(hdr['IRAF-TLM'],scale='utc',format='isot').jd - Time(hdr['DATE'],scale='utc',format='isot').jd)/2 print diff print Time(hdr['IRAF-TLM'],scale='utc',format='isot').jd - diff
0.000370370224118 2456271.73
CC-BY-4.0
development/Obtain Universal Time UT using astropy.ipynb
waltersmartinsf/iraf_task
Local Time to sidereal time
local_time = Time(hdr['DATE-OBS']+'T'+hdr['Time-obs'],format='isot',scale='utc') time_sd = local_time.sidereal_time('apparent',longitude=file['lon-obs'])#with precession and nutation print time_sd time_sd.T.hms[0],time_sd.T.hms[1],time_sd.T.hms[2] local_time.sidereal_time('mean',longitude=file['lon-obs']) #with precession file['observatory'],file['lon-obs'] time_sd.deg, time_sd.hour
_____no_output_____
CC-BY-4.0
development/Obtain Universal Time UT using astropy.ipynb
waltersmartinsf/iraf_task
Change degrees to hours...
from astropy.coordinates import SkyCoord from astropy import units as unit from astropy.coordinates import Angle RA = Angle(file['RA']+file['u.RA']) DEC = Angle(file['DEC']+file['u.DEC']) coordenadas = SkyCoord(RA,DEC,frame='fk5') coordenadas coordenadas.ra.hour, coordenadas.dec.deg,coordenadas.equinox,coordenadas.equinox.value local_time local_time.hjd #airmass airmass = np.loadtxt('../Results/XYpos+Airmass.txt',unpack=True) airmass[2] hdr['DATE-OBS'],hdr['UTC-OBS'] file['time-zone'] = 7 file['time-zone'] local_time import string hdr['DATE-OBS'].split('-') float(hdr['DATE-OBS'].split('-')[2]) hdr['UTC-OBS'].split(':'),hdr['UTC-OBS'].split(':')[0] if float(hdr['UTC-OBS'].split(':')[0]) < file['time-zone']: new_date = float(hdr['DATE-OBS'].split('-')[2]) - 1 hdr['DATE-OBS'] = hdr['DATE-OBS'].split('-')[0]+'-'+hdr['DATE-OBS'].split('-')[1]+'-'+str(int(new_date)) new_date hdr['DATE-OBS']
_____no_output_____
CC-BY-4.0
development/Obtain Universal Time UT using astropy.ipynb
waltersmartinsf/iraf_task
**This notebook is an exercise in the [Geospatial Analysis](https://www.kaggle.com/learn/geospatial-analysis) course. You can reference the tutorial at [this link](https://www.kaggle.com/alexisbcook/interactive-maps).**--- Introduction You are an urban safety planner in Japan, and you are analyzing which areas of Japan need extra earthquake reinforcement. Which areas are both high in population density and prone to earthquakes? Before you get started, run the code cell below to set everything up.
import pandas as pd import geopandas as gpd import folium from folium import Choropleth from folium.plugins import HeatMap from learntools.core import binder binder.bind(globals()) from learntools.geospatial.ex3 import *
_____no_output_____
MIT
course/Geospatial Analysis/exercise-interactive-maps.ipynb
furyhawk/kaggle_practice
We define a function `embed_map()` for displaying interactive maps. It accepts two arguments: the variable containing the map, and the name of the HTML file where the map will be saved.This function ensures that the maps are visible [in all web browsers](https://github.com/python-visualization/folium/issues/812).
def embed_map(m, file_name): from IPython.display import IFrame m.save(file_name) return IFrame(file_name, width='100%', height='500px')
_____no_output_____
MIT
course/Geospatial Analysis/exercise-interactive-maps.ipynb
furyhawk/kaggle_practice
Exercises 1) Do earthquakes coincide with plate boundaries? Run the code cell below to create a DataFrame `plate_boundaries` that shows global plate boundaries. The "coordinates" column is a list of (latitude, longitude) locations along the boundaries.
plate_boundaries = gpd.read_file("../input/geospatial-learn-course-data/Plate_Boundaries/Plate_Boundaries/Plate_Boundaries.shp") plate_boundaries['coordinates'] = plate_boundaries.apply(lambda x: [(b,a) for (a,b) in list(x.geometry.coords)], axis='columns') plate_boundaries.drop('geometry', axis=1, inplace=True) plate_boundaries.head()
_____no_output_____
MIT
course/Geospatial Analysis/exercise-interactive-maps.ipynb
furyhawk/kaggle_practice
Next, run the code cell below without changes to load the historical earthquake data into a DataFrame `earthquakes`.
# Load the data and print the first 5 rows earthquakes = pd.read_csv("../input/geospatial-learn-course-data/earthquakes1970-2014.csv", parse_dates=["DateTime"]) earthquakes.head()
_____no_output_____
MIT
course/Geospatial Analysis/exercise-interactive-maps.ipynb
furyhawk/kaggle_practice
The code cell below visualizes the plate boundaries on a map. Use all of the earthquake data to add a heatmap to the same map, to determine whether earthquakes coincide with plate boundaries.
# Create a base map with plate boundaries m_1 = folium.Map(location=[35,136], tiles='cartodbpositron', zoom_start=5) for i in range(len(plate_boundaries)): folium.PolyLine(locations=plate_boundaries.coordinates.iloc[i], weight=2, color='black').add_to(m_1) # Your code here: Add a heatmap to the map HeatMap(data=earthquakes[['Latitude', 'Longitude']], radius=10).add_to(m_1) # Uncomment to see a hint #q_1.a.hint() # Show the map embed_map(m_1, 'q_1.html') # Get credit for your work after you have created a map q_1.a.check() # Uncomment to see our solution (your code may look different!) q_1.a.solution()
_____no_output_____
MIT
course/Geospatial Analysis/exercise-interactive-maps.ipynb
furyhawk/kaggle_practice
So, given the map above, do earthquakes coincide with plate boundaries?
# View the solution (Run this code cell to receive credit!) q_1.b.solution()
_____no_output_____
MIT
course/Geospatial Analysis/exercise-interactive-maps.ipynb
furyhawk/kaggle_practice
2) Is there a relationship between earthquake depth and proximity to a plate boundary in Japan? You recently read that the depth of earthquakes tells us [important information](https://www.usgs.gov/faqs/what-depth-do-earthquakes-occur-what-significance-depth?qt-news_science_products=0#qt-news_science_products) about the structure of the earth. You're interested to see if there are any interesting global patterns, and you'd also like to understand how depth varies in Japan.
# Create a base map with plate boundaries m_2 = folium.Map(location=[35,136], tiles='cartodbpositron', zoom_start=5) for i in range(len(plate_boundaries)): folium.PolyLine(locations=plate_boundaries.coordinates.iloc[i], weight=2, color='black').add_to(m_2) # Your code here: Add a map to visualize earthquake depth # Custom function to assign a color to each circle def color_producer(val): if val < 50: return 'forestgreen' elif val < 100: return 'darkorange' else: return 'darkred' # Add a map to visualize earthquake depth for i in range(0,len(earthquakes)): folium.Circle( location=[earthquakes.iloc[i]['Latitude'], earthquakes.iloc[i]['Longitude']], radius=2000, color=color_producer(earthquakes.iloc[i]['Depth'])).add_to(m_2) # Uncomment to see a hint #q_2.a.hint() # View the map embed_map(m_2, 'q_2.html') # Get credit for your work after you have created a map q_2.a.check() # Uncomment to see our solution (your code may look different!) q_2.a.solution()
_____no_output_____
MIT
course/Geospatial Analysis/exercise-interactive-maps.ipynb
furyhawk/kaggle_practice
Can you detect a relationship between proximity to a plate boundary and earthquake depth? Does this pattern hold globally? In Japan?
# View the solution (Run this code cell to receive credit!) q_2.b.solution()
_____no_output_____
MIT
course/Geospatial Analysis/exercise-interactive-maps.ipynb
furyhawk/kaggle_practice
3) Which prefectures have high population density? Run the next code cell (without changes) to create a GeoDataFrame `prefectures` that contains the geographical boundaries of Japanese prefectures.
# GeoDataFrame with prefecture boundaries prefectures = gpd.read_file("../input/geospatial-learn-course-data/japan-prefecture-boundaries/japan-prefecture-boundaries/japan-prefecture-boundaries.shp") prefectures.set_index('prefecture', inplace=True) prefectures.head()
_____no_output_____
MIT
course/Geospatial Analysis/exercise-interactive-maps.ipynb
furyhawk/kaggle_practice
The next code cell creates a DataFrame `stats` containing the population, area (in square kilometers), and population density (per square kilometer) for each Japanese prefecture. Run the code cell without changes.
# DataFrame containing population of each prefecture population = pd.read_csv("../input/geospatial-learn-course-data/japan-prefecture-population.csv") population.set_index('prefecture', inplace=True) # Calculate area (in square kilometers) of each prefecture area_sqkm = pd.Series(prefectures.geometry.to_crs(epsg=32654).area / 10**6, name='area_sqkm') stats = population.join(area_sqkm) # Add density (per square kilometer) of each prefecture stats['density'] = stats["population"] / stats["area_sqkm"] stats.head()
_____no_output_____
MIT
course/Geospatial Analysis/exercise-interactive-maps.ipynb
furyhawk/kaggle_practice
Use the next code cell to create a choropleth map to visualize population density.
# Create a base map m_3 = folium.Map(location=[35,136], tiles='cartodbpositron', zoom_start=5) # Your code here: create a choropleth map to visualize population density Choropleth(geo_data=prefectures['geometry'].__geo_interface__, data=stats['density'], key_on="feature.id", fill_color='YlGnBu', legend_name='Population density (per square kilometer)' ).add_to(m_3) # Uncomment to see a hint # q_3.a.hint() # View the map embed_map(m_3, 'q_3.html') # Get credit for your work after you have created a map q_3.a.check() # Uncomment to see our solution (your code may look different!) q_3.a.solution()
_____no_output_____
MIT
course/Geospatial Analysis/exercise-interactive-maps.ipynb
furyhawk/kaggle_practice
Which three prefectures have relatively higher density than the others? Are they spread throughout the country, or all located in roughly the same geographical region? (*If you're unfamiliar with Japanese geography, you might find [this map](https://en.wikipedia.org/wiki/Prefectures_of_Japan) useful to answer the questions.*)
# View the solution (Run this code cell to receive credit!) q_3.b.solution()
_____no_output_____
MIT
course/Geospatial Analysis/exercise-interactive-maps.ipynb
furyhawk/kaggle_practice
4) Which high-density prefecture is prone to high-magnitude earthquakes? Create a map to suggest one prefecture that might benefit from earthquake reinforcement. Your map should visualize both density and earthquake magnitude.
# Create a base map m_4 = folium.Map(location=[35,136], tiles='cartodbpositron', zoom_start=5) # Your code here: create a map def color_producer(magnitude): if magnitude > 6.5: return 'red' else: return 'green' Choropleth( geo_data=prefectures['geometry'].__geo_interface__, data=stats['density'], key_on="feature.id", fill_color='BuPu', legend_name='Population density (per square kilometer)').add_to(m_4) for i in range(0,len(earthquakes)): folium.Circle( location=[earthquakes.iloc[i]['Latitude'], earthquakes.iloc[i]['Longitude']], popup=("{} ({})").format( earthquakes.iloc[i]['Magnitude'], earthquakes.iloc[i]['DateTime'].year), radius=earthquakes.iloc[i]['Magnitude']**5.5, color=color_producer(earthquakes.iloc[i]['Magnitude'])).add_to(m_4) # Uncomment to see a hint q_4.a.hint() # View the map embed_map(m_4, 'q_4.html') # Get credit for your work after you have created a map q_4.a.check() # Uncomment to see our solution (your code may look different!) q_4.a.solution()
_____no_output_____
MIT
course/Geospatial Analysis/exercise-interactive-maps.ipynb
furyhawk/kaggle_practice
Which prefecture do you recommend for extra earthquake reinforcement?
# View the solution (Run this code cell to receive credit!) q_4.b.solution()
_____no_output_____
MIT
course/Geospatial Analysis/exercise-interactive-maps.ipynb
furyhawk/kaggle_practice
Classification with a Neural Network for Yoga Pose Detection Import Dependencies
import numpy as np import pandas as pd import os import matplotlib.pyplot as plt import tensorflow as tf from tensorflow.keras.utils import to_categorical from tensorflow.keras.preprocessing.image import load_img, img_to_array from tensorflow.python.keras.preprocessing.image import ImageDataGenerator from sklearn.metrics import classification_report, log_loss, accuracy_score from sklearn.model_selection import train_test_split
_____no_output_____
Unlicense
_Project_Analysis/Neural_Network_model_training.ipynb
sijal001/Yoga_Pose_Detection
Getting the data (images) and labels
# Data path train_dir = 'pose_recognition_data/dataset' # Getting the folders name to be able to labelize the data Name=[] for file in os.listdir(train_dir): Name+=[file] print(Name) print(len(Name)) N=[] for i in range(len(Name)): N+=[i] normal_mapping=dict(zip(Name,N)) reverse_mapping=dict(zip(N,Name)) def mapper(value): return reverse_mapping[value] dataset=[] testset=[] count=0 for file in os.listdir(train_dir): t=0 path=os.path.join(train_dir,file) for im in os.listdir(path): image=load_img(os.path.join(path,im), grayscale=False, color_mode='rgb', target_size=(40,40)) image=img_to_array(image) image=image/255.0 if t<60: dataset+=[[image,count]] else: testset+=[[image,count]] t+=1 count=count+1 data,labels0=zip(*dataset) test,testlabels0=zip(*testset) labels1=to_categorical(labels0) labels=np.array(labels1) # Transforming the into Numerical Data data=np.array(data) test=np.array(test) trainx,testx,trainy,testy=train_test_split(data,labels,test_size=0.2,random_state=44) print(trainx.shape) print(testx.shape) print(trainy.shape) print(testy.shape) # Data augmentation datagen = ImageDataGenerator(horizontal_flip=True,vertical_flip=True,rotation_range=20,zoom_range=0.2, width_shift_range=0.2,height_shift_range=0.2,shear_range=0.1,fill_mode="nearest") # Loading the pretrained model , here DenseNet201 pretrained_model3 = tf.keras.applications.DenseNet201(input_shape=(40,40,3),include_top=False,weights='imagenet',pooling='avg') pretrained_model3.trainable = False inputs3 = pretrained_model3.input x3 = tf.keras.layers.Dense(128, activation='relu')(pretrained_model3.output) outputs3 = tf.keras.layers.Dense(107, activation='softmax')(x3) model = tf.keras.Model(inputs=inputs3, outputs=outputs3) model.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['accuracy']) his=model.fit(datagen.flow(trainx,trainy,batch_size=32),validation_data=(testx,testy),epochs=50) y_pred=model.predict(testx) pred=np.argmax(y_pred,axis=1) ground = np.argmax(testy,axis=1) print(classification_report(ground,pred)) #Checking accuracy of our model get_acc = his.history['accuracy'] value_acc = his.history['val_accuracy'] get_loss = his.history['loss'] validation_loss = his.history['val_loss'] epochs = range(len(get_acc)) plt.plot(epochs, get_acc, 'r', label='Accuracy of Training data') plt.plot(epochs, value_acc, 'b', label='Accuracy of Validation data') plt.title('Training vs validation accuracy') plt.legend(loc=0) plt.figure() plt.show() # Checking the loss of data epochs = range(len(get_loss)) plt.plot(epochs, get_loss, 'r', label='Loss of Training data') plt.plot(epochs, validation_loss, 'b', label='Loss of Validation data') plt.title('Training vs validation loss') plt.legend(loc=0) plt.figure() plt.show() load_img("pose_recognition_data/dataset/adho mukha svanasana/95. downward-facing-dog-pose.png",target_size=(40,40)) image = load_img("pose_recognition_data/dataset/adho mukha svanasana/95. downward-facing-dog-pose.png",target_size=(40,40)) image=img_to_array(image) image=image/255.0 prediction_image=np.array(image) prediction_image= np.expand_dims(image, axis=0) prediction=model.predict(prediction_image) value=np.argmax(prediction) move_name=mapper(value) print("Prediction is {}.".format(move_name)) print(test.shape) pred2=model.predict(test) print(pred2.shape) PRED=[] for item in pred2: value2=np.argmax(item) PRED+=[value2] ANS=testlabels0 accuracy=accuracy_score(ANS,PRED) print(accuracy)
0.27466666666666667
Unlicense
_Project_Analysis/Neural_Network_model_training.ipynb
sijal001/Yoga_Pose_Detection
TVAE Model In this guide we will go through a series of steps that will let you discover functionalities of the `TVAE` model, including how to:- Create an instance of `TVAE`.- Fit the instance to your data.- Generate synthetic versions of your data.- Use `TVAE` to anonymize PII information.- Specify hyperparameters to improve the output quality. What is TVAE? The `sdv.tabular.TVAE` model is based on the VAE-based Deep Learning data synthesizer presented at the NeurIPS 2019 conference in the paper titled [Modeling Tabular data using Conditional GAN](https://arxiv.org/abs/1907.00503). Let's now discover how to learn a dataset and later on generate synthetic data with the same format and statistical properties by using the `TVAE` class from SDV. Quick Usage We will start by loading one of our demo datasets, `student_placements`, which contains information about MBA students that applied for placements during the year 2020. **Warning** In order to follow this guide you need to have `sdv` installed on your system. If you have not done it yet, please install it now by executing the command `pip install sdv` in a terminal.
from sdv.demo import load_tabular_demo data = load_tabular_demo('student_placements') data.head()
_____no_output_____
MIT
tutorials/single_table_data/04_TVAE_Model.ipynb
HDI-Project/SDV
As you can see, this table contains information about students which includes, among other things:- Their id and gender- Their grades and specializations- Their work experience- The salary that they were offered- The duration and dates of their placement You will notice that there is data with the following characteristics:- There are float, integer, boolean, categorical and datetime values.- There are some variables that have missing data. In particular, all the data related to the placement details is missing in the rows where the student was not placed. Let us use `TVAE` to learn this data and then sample synthetic data about new students to see how well the model captures the characteristics indicated above. In order to do this you will need to:- Import the `sdv.tabular.TVAE` class and create an instance of it.- Call its `fit` method passing our table.- Call its `sample` method indicating the number of synthetic rows that you want to generate.
from sdv.tabular import TVAE model = TVAE() model.fit(data)
_____no_output_____
MIT
tutorials/single_table_data/04_TVAE_Model.ipynb
HDI-Project/SDV
**Note** Notice that the model `fitting` process took care of transforming the different fields using the appropriate [Reversible Data Transforms](http://github.com/sdv-dev/RDT) to ensure that the data has a format that the underlying TVAESynthesizer class can handle. Generate synthetic data from the model Once the modeling has finished you are ready to generate new synthetic data by calling the `sample` method from your model passing the number of rows that we want to generate.
new_data = model.sample(num_rows=200)
_____no_output_____
MIT
tutorials/single_table_data/04_TVAE_Model.ipynb
HDI-Project/SDV
This will return a table identical to the one which the model was fittedon, but filled with new data which resembles the original one.
new_data.head()
_____no_output_____
MIT
tutorials/single_table_data/04_TVAE_Model.ipynb
HDI-Project/SDV
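The `sample` call above accepts additional options that the note below describes; here is a hedged sketch using those parameter names (taken from the note itself; verify them against your installed SDV version):

```python
# Sketch only: optional sampling arguments described in the note below.
new_data = model.sample(
    num_rows=200,
    batch_size=50,                     # sample in smaller batches and track progress
    output_file_path='synthetic.csv',  # also write the sampled rows to a CSV file
    randomize_samples=True,            # set False to reproduce the same synthetic data
)
```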
**Note** There are a number of other parameters in this method that you can use to optimize the process of generating synthetic data. Use ``output_file_path`` to directly write results to a CSV file, ``batch_size`` to break up sampling into smaller pieces & track their progress and ``randomize_samples`` to determine whether to generate the same synthetic data every time. See the API Section for more details. Save and Load the model In many scenarios it will be convenient to generate synthetic versions of your data directly in systems that do not have access to the original data source. For example, you may want to generate testing data on the fly inside a testing environment that does not have access to your production database. In these scenarios, fitting the model with real data every time that you need to generate new data is not feasible, so you will need to fit a model in your production environment, save the fitted model into a file, send this file to the testing environment and then load it there to be able to `sample` from it. Let's see how this process works. Save and share the model Once you have fitted the model, all you need to do is call its `save` method passing the name of the file in which you want to save the model. Note that the extension of the filename is not relevant, but we will be using the `.pkl` extension to highlight that the serialization protocol used is [pickle](https://docs.python.org/3/library/pickle.html).
model.save('my_model.pkl')
_____no_output_____
MIT
tutorials/single_table_data/04_TVAE_Model.ipynb
HDI-Project/SDV
This will have created a file called `my_model.pkl` in the same directory in which you are running SDV. **Important** If you inspect the generated file you will notice that its size is much smaller than the size of the data that you used to generate it. This is because the serialized model contains **no information about the original data**, other than the parameters it needs to generate synthetic versions of it. This means that you can safely share this `my_model.pkl` file without the risk of disclosing any of your real data! Load the model and generate new data The file you just generated can be sent over to the system where the synthetic data will be generated. Once it is there, you can load it using the `TVAE.load` method, and then you are ready to sample new data from the loaded instance:
loaded = TVAE.load('my_model.pkl') new_data = loaded.sample(num_rows=200)
_____no_output_____
MIT
tutorials/single_table_data/04_TVAE_Model.ipynb
HDI-Project/SDV
**Warning** Notice that the system where the model is loaded needs to also have `sdv` and `tvae` installed, otherwise it will not be able to load the model and use it. Specifying the Primary Key of the table One of the first things that you may have noticed when looking at the demo data is that there is a `student_id` column which acts as the primary key of the table, and which is supposed to have unique values. Indeed, if we look at the number of times that each value appears, we see that all of them appear at most once:
data.student_id.value_counts().max()
_____no_output_____
MIT
tutorials/single_table_data/04_TVAE_Model.ipynb
HDI-Project/SDV
However, if we look at the synthetic data that we generated, we observethat there are some values that appear more than once:
new_data[new_data.student_id == new_data.student_id.value_counts().index[0]]
_____no_output_____
MIT
tutorials/single_table_data/04_TVAE_Model.ipynb
HDI-Project/SDV
This happens because the model was not notified at any point about thefact that the `student_id` had to be unique, so when it generates newdata it will provoke collisions sooner or later. In order to solve this,we can pass the argument `primary_key` to our model when we create it,indicating the name of the column that is the index of the table.
model = TVAE( primary_key='student_id' ) model.fit(data) new_data = model.sample(200) new_data.head()
_____no_output_____
MIT
tutorials/single_table_data/04_TVAE_Model.ipynb
HDI-Project/SDV
As a result, the model will learn that this column must be unique andgenerate a unique sequence of values for the column:
new_data.student_id.value_counts().max()
_____no_output_____
MIT
tutorials/single_table_data/04_TVAE_Model.ipynb
HDI-Project/SDV