| column | dtype | min | max |
|---|---|---|---|
| hexsha | stringlengths | 40 | 40 |
| size | int64 | 6 | 14.9M |
| ext | stringclasses | 1 value | |
| lang | stringclasses | 1 value | |
| max_stars_repo_path | stringlengths | 6 | 260 |
| max_stars_repo_name | stringlengths | 6 | 119 |
| max_stars_repo_head_hexsha | stringlengths | 40 | 41 |
| max_stars_repo_licenses | list | | |
| max_stars_count | int64 | 1 | 191k |
| max_stars_repo_stars_event_min_datetime | stringlengths | 24 | 24 |
| max_stars_repo_stars_event_max_datetime | stringlengths | 24 | 24 |
| max_issues_repo_path | stringlengths | 6 | 260 |
| max_issues_repo_name | stringlengths | 6 | 119 |
| max_issues_repo_head_hexsha | stringlengths | 40 | 41 |
| max_issues_repo_licenses | list | | |
| max_issues_count | int64 | 1 | 67k |
| max_issues_repo_issues_event_min_datetime | stringlengths | 24 | 24 |
| max_issues_repo_issues_event_max_datetime | stringlengths | 24 | 24 |
| max_forks_repo_path | stringlengths | 6 | 260 |
| max_forks_repo_name | stringlengths | 6 | 119 |
| max_forks_repo_head_hexsha | stringlengths | 40 | 41 |
| max_forks_repo_licenses | list | | |
| max_forks_count | int64 | 1 | 105k |
| max_forks_repo_forks_event_min_datetime | stringlengths | 24 | 24 |
| max_forks_repo_forks_event_max_datetime | stringlengths | 24 | 24 |
| avg_line_length | float64 | 2 | 1.04M |
| max_line_length | int64 | 2 | 11.2M |
| alphanum_fraction | float64 | 0 | 1 |
| cells | list | | |
| cell_types | list | | |
| cell_type_groups | list | | |
4a3973de7604531c8a61042c34226df704df2d5c
6,111
ipynb
Jupyter Notebook
Ch2_Linear-Algebra/Ch2_Exam1.ipynb
arashash/deep_exercises
2c40802ee367ba9bf1f6fa5dad96cfa1a74e092b
[ "MIT" ]
1
2020-12-09T10:27:37.000Z
2020-12-09T10:27:37.000Z
Ch2_Linear-Algebra/Ch2_Exam1.ipynb
arashash/deep_exercises
2c40802ee367ba9bf1f6fa5dad96cfa1a74e092b
[ "MIT" ]
null
null
null
Ch2_Linear-Algebra/Ch2_Exam1.ipynb
arashash/deep_exercises
2c40802ee367ba9bf1f6fa5dad96cfa1a74e092b
[ "MIT" ]
null
null
null
25.676471
248
0.413189
[ [ [ "<a href=\"https://colab.research.google.com/github/arashash/deep_exercises/blob/main/Ch2_Linear-Algebra/Ch2_Exam1.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# Chapter 2 - Linear Algebra", "_____no_output_____" ], [ "## 2.1 Scalars, Vectors, Matrices and Tensors", "_____no_output_____" ], [ "### Q1 [10 Points, M]\nDenote the set of all n-dimensional binary vectors with Cartesian product notation", "_____no_output_____" ], [ "### Q2 [10 Points, S]\nGiven the vector\n$\n\\boldsymbol{x}=\\left[\\begin{array}{c}\nx_{1} \\\\\nx_{2} \\\\\nx_{3} \\\\\nx_{4} \\\\\n\\end{array}\\right]\n$, \nand the set \n$\nS = \\{2, 4\\}\n$,\nobtain the vectors $\\boldsymbol{x}_{S}$ and $\\boldsymbol{x}_{-S}$", "_____no_output_____" ], [ "### Q3 [20 Points, S]\nEvaluate the following expressions with broadcasting rules,\n\n$$\n\\left[\\begin{array}{lll}\n0 & 1 & 2\n\\end{array}\\right]+[5]=\n$$\n\n$$\n\\left[\\begin{array}{lll}\n1 & 1 & 1 \\\\\n1 & 1 & 1 \\\\\n1 & 1 & 1\n\\end{array}\\right]+\\left[\\begin{array}{lll}\n0 & 1 & 1\n\\end{array}\\right]=\n$$\n\n$$\n\\left[\\begin{array}{l}\n0 \\\\\n1 \\\\\n2\n\\end{array}\\right]+\\left[\\begin{array}{ll}\n0 & 1 & 2\n\\end{array}\\right]=\n$$", "_____no_output_____" ], [ "## 2.2-3 Multiplying Matrices and Vectors and Identity and Inverse Matrices", "_____no_output_____" ], [ "### Q4 [20 Points, H]\nLet $A$ be a $2 \\times 2$ matrix, if $A B=B A$ for every $B$ of the size $2 \\times 2$, Prove that:\n$$\nA=\\left[\\begin{array}{ll}\na & 0 \\\\\n0 & a\n\\end{array}\\right], \\\na \\in \\mathbb{R}\n$$\n", "_____no_output_____" ], [ "## 2.4 Linear Dependence and Span", "_____no_output_____" ], [ "### Q5 [10 Points, H]\nProve that if a linear system of equations have two solutions, then it has infinitely many solutions.", "_____no_output_____" ], [ "### Q6 [5 Points, M]\nGiven $A x=0$, where $A \\in \\mathbb{R}^{m \\times n}$ is any matrix, and $x \\in \\mathbb{R}^{n}$ is a vector of unknown variables to be solved, what is the condition such that there is infinitely many solutions?", "_____no_output_____" ], [ "## 2.5 Norms", "_____no_output_____" ], [ "### Q7 [15 Points, M]\nProve that **Max Norm** follows these conditions,\n$$\n\\begin{align}\nf(\\boldsymbol{x})=0 \\Rightarrow \\boldsymbol{x}=\\mathbf{0} \\\\\nf(\\boldsymbol{x}+\\boldsymbol{y}) \\leq f(\\boldsymbol{x})+f(\\boldsymbol{y}) \\\\\n\\forall \\alpha \\in \\mathbb{R}, f(\\alpha \\boldsymbol{x})=|\\alpha| f(\\boldsymbol{x})\n\\end{align}\n$$", "_____no_output_____" ], [ "## 2.6 Special Kinds of Matrices and Vectors", "_____no_output_____" ], [ "### Q8 [10 Points, M]\nSolve the following system of equations,\n\n$$\n\\frac{1}{2}\\left[\\begin{array}{cccc}\n1 & 1 & 1 & 1 \\\\\n1 & 1 & -1 & -1 \\\\\n1 & -1 & 1 & -1 \\\\\n1 & -1 & -1 & 1\n\\end{array}\\right] \\left[\\begin{array}{c}\nx_{1} \\\\\nx_{2} \\\\\nx_{3} \\\\\nx_{4} \\\\\n\\end{array}\\right] = \\left[\\begin{array}{c}\n1 \\\\\n2 \\\\\n3 \\\\\n4 \\\\\n\\end{array}\\right]\n$$", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
4a3979cb4b0ada03fcba2ebd818380e46dd0ffa9
46,971
ipynb
Jupyter Notebook
notebooks_for_models/1905/model5_penalized_svm_1905.ipynb
knishina/World_Series_Prediction_Revisited
1bcddc3ac63f2404c48b9200adb9bab9e986fbc6
[ "MIT" ]
1
2019-08-13T22:34:15.000Z
2019-08-13T22:34:15.000Z
notebooks_for_models/1905/model5_penalized_svm_1905.ipynb
knishina/World_Series_Prediction_Revisited
1bcddc3ac63f2404c48b9200adb9bab9e986fbc6
[ "MIT" ]
null
null
null
notebooks_for_models/1905/model5_penalized_svm_1905.ipynb
knishina/World_Series_Prediction_Revisited
1bcddc3ac63f2404c48b9200adb9bab9e986fbc6
[ "MIT" ]
null
null
null
38.50082
340
0.428669
[ [ [ "## Purpose: Try different models-- Part5.\n### Penalized_SVM.", "_____no_output_____" ] ], [ [ "# import dependencies.\nimport pandas as pd\nimport numpy as np\n\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.metrics import classification_report\nfrom sklearn.model_selection import train_test_split, GridSearchCV\nfrom sklearn.svm import SVC", "_____no_output_____" ] ], [ [ "#### STEP1: Read in dataset. Remove data from 2016-2019.\n- data from 2016-2018 will be used to bs test the model.\n- data from 2019 will be used to predict the winners of the 2019 WS.", "_____no_output_____" ] ], [ [ "# read in the data.\nteam_data = pd.read_csv(\"../../Resources/clean_data_1905.csv\")\ndel team_data[\"Unnamed: 0\"]\nteam_data.head()", "_____no_output_____" ], [ "# remove data from 2016 through 2019.\nteam_data_new = team_data.loc[team_data[\"year\"] < 2016]\nteam_data_new.head()", "_____no_output_____" ], [ "target = team_data_new[\"winners\"]\nfeatures = team_data_new.drop({\"team\", \"year\", \"winners\"}, axis=1)\nfeature_columns = list(features.columns)\nprint (target.shape)\nprint (features.shape)\nprint (feature_columns)", "(2344,)\n(2344, 52)\n['A', 'DP', 'E', 'G2', 'GS2', 'INN', 'PB', 'PO', 'TC', '2B', '3B', 'AB', 'AO', 'BB', 'G', 'H', 'HBP', 'HR', 'NP_x', 'OBP', 'OPS_x', 'PA', 'R', 'RBI', 'SAC', 'SB', 'SLG', 'TB', 'XBH', 'BB1', 'BK', 'CG', 'ER', 'ERA', 'G1', 'GF', 'GS', 'H1', 'HB', 'HR1', 'IP', 'L', 'OBP1', 'R1', 'SHO', 'SO1', 'SV', 'TBF', 'W', 'WHIP', 'WP', 'WPCT']\n" ] ], [ [ "#### STEP2: Split and scale the data.", "_____no_output_____" ] ], [ [ "# split data.\nX_train, X_test, y_train, y_test = train_test_split(features, target, random_state=42)\n\n# scale data.\nscaler = StandardScaler()\nX_train_scaled = scaler.fit_transform(X_train)\nX_test_scaled = scaler.fit_transform(X_test)", "/Applications/anaconda3/envs/PythonData/lib/python3.6/site-packages/sklearn/preprocessing/data.py:645: DataConversionWarning: Data with input dtype int64, float64 were all converted to float64 by StandardScaler.\n return self.partial_fit(X, y)\n/Applications/anaconda3/envs/PythonData/lib/python3.6/site-packages/sklearn/base.py:464: DataConversionWarning: Data with input dtype int64, float64 were all converted to float64 by StandardScaler.\n return self.fit(X, **fit_params).transform(X)\n/Applications/anaconda3/envs/PythonData/lib/python3.6/site-packages/sklearn/preprocessing/data.py:645: DataConversionWarning: Data with input dtype int64, float64 were all converted to float64 by StandardScaler.\n return self.partial_fit(X, y)\n/Applications/anaconda3/envs/PythonData/lib/python3.6/site-packages/sklearn/base.py:464: DataConversionWarning: Data with input dtype int64, float64 were all converted to float64 by StandardScaler.\n return self.fit(X, **fit_params).transform(X)\n" ] ], [ [ "#### STEP3: Try the SVC model.", "_____no_output_____" ] ], [ [ "# generate the model.\nmodel = SVC(kernel=\"rbf\",\n class_weight=\"balanced\",\n probability=True)\n\n# fit the model.\nmodel.fit(X_train_scaled, y_train)\n\n# predict.\nprediction = model.predict(X_test_scaled)\n\nprint ((classification_report(y_test, prediction, target_names=[\"0\", \"1\"])))", " precision recall f1-score support\n\n 0 0.97 0.79 0.87 553\n 1 0.13 0.55 0.21 33\n\n micro avg 0.77 0.77 0.77 586\n macro avg 0.55 0.67 0.54 586\nweighted avg 0.92 0.77 0.83 586\n\n" ] ], [ [ "#### STEP4: Predict the winner 2016-2018.", "_____no_output_____" ] ], [ [ "def predict_the_winner(model, year, team_data, X_train):\n '''\n INPUT: \n -X_train 
= scaled X train data.\n -model = the saved model.\n -team_data = complete dataframe with all data.\n -year = the year want to look at.\n \n OUTPUT:\n -printed prediction.\n \n DESCRIPTION:\n -data from year of interest is isolated.\n -the data are scaled.\n -the prediction is made.\n -print out the resulting probability and the name of the team.\n '''\n \n # grab the data.\n team_data = team_data.loc[team_data[\"year\"] == year].reset_index()\n\n # set features (no team, year, winners).\n # set target (winners).\n features = team_data[feature_columns]\n \n # scale.\n scaler = StandardScaler()\n X_train_scaled = scaler.fit_transform(X_train)\n features = scaler.fit_transform(features)\n \n # fit the model.\n probabilities = model.predict_proba(features)\n\n # convert predictions to datafram.e\n WS_predictions = pd.DataFrame(probabilities[:,1])\n\n # Sort the DataFrame (descending)\n WS_predictions = WS_predictions.sort_values(0, ascending=False)\n\n WS_predictions['Probability'] = WS_predictions[0]\n\n # Print 50 highest probability HoF inductees from still eligible players\n for i, row in WS_predictions.head(50).iterrows():\n prob = ' '.join(('WS Probability =', str(row['Probability'])))\n print('')\n print(prob)\n print(team_data.iloc[i,1:27][\"team\"])", "_____no_output_____" ], [ "# predict for 2018.\npredict_the_winner(model, 2018, team_data, X_train_scaled)", "\nWS Probability = 0.1671745151449622\nArizona Diamondbacks\n\nWS Probability = 0.09440424861372654\nMinnesota Twins\n\nWS Probability = 0.07801780312098003\nHouston Astros\n\nWS Probability = 0.07620344827537079\nNew York Yankees\n\nWS Probability = 0.07251064948783716\nWashington Nationals\n\nWS Probability = 0.06045023236310526\nTexas Rangers\n\nWS Probability = 0.05862509214981055\nAtlanta Braves\n\nWS Probability = 0.05860905640586009\nTampa Bay Rays\n\nWS Probability = 0.05670236189298418\nCleveland Indians\n\nWS Probability = 0.05583194521387857\nSt. 
Louis Cardinals\n\nWS Probability = 0.05081627445196096\nCincinnati Reds\n\nWS Probability = 0.05028167791790393\nOakland Athletics\n\nWS Probability = 0.04889714149856926\nNew York Mets\n\nWS Probability = 0.04867623213093658\nChicago Cubs\n\nWS Probability = 0.04583032329073765\nBoston Red Sox\n\nWS Probability = 0.04014256645170161\nMilwaukee Brewers\n\nWS Probability = 0.035425253234462495\nLos Angeles Angels\n\nWS Probability = 0.02871561110466993\nLos Angeles Dodgers\n\nWS Probability = 0.02736982833740287\nKansas City Royals\n\nWS Probability = 0.024037733321353432\nChicago White Sox\n\nWS Probability = 0.023501615298715424\nMiami Marlins\n\nWS Probability = 0.022892380298107765\nBaltimore Orioles\n\nWS Probability = 0.018039932692641675\nPittsburgh Pirates\n\nWS Probability = 0.015271868897365887\nDetroit Tigers\n\nWS Probability = 0.01499065696651269\nColorado Rockies\n\nWS Probability = 0.012498459783630853\nPhiladelphia Phillies\n\nWS Probability = 0.010992749491642178\nToronto Blue Jays\n\nWS Probability = 0.009620296585996148\nSeattle Mariners\n\nWS Probability = 0.009283098983509323\nSan Francisco Giants\n\nWS Probability = 0.00563640736688245\nSan Diego Padres\n" ], [ "# predict for 2017.\npredict_the_winner(model, 2017, team_data, X_train_scaled)", "\nWS Probability = 0.1418385959252063\nAtlanta Braves\n\nWS Probability = 0.13635618088991616\nWashington Nationals\n\nWS Probability = 0.12612142395575937\nCleveland Indians\n\nWS Probability = 0.11219515678660376\nLos Angeles Angels\n\nWS Probability = 0.08958854769979219\nBoston Red Sox\n\nWS Probability = 0.07663599453718797\nNew York Yankees\n\nWS Probability = 0.07636767636478183\nTampa Bay Rays\n\nWS Probability = 0.07359221381705146\nOakland Athletics\n\nWS Probability = 0.06846683014586137\nHouston Astros\n\nWS Probability = 0.04529791742473508\nSeattle Mariners\n\nWS Probability = 0.04098928838490769\nLos Angeles Dodgers\n\nWS Probability = 0.03758918537878983\nPittsburgh Pirates\n\nWS Probability = 0.0369147526893549\nMilwaukee Brewers\n\nWS Probability = 0.031100166000034613\nArizona Diamondbacks\n\nWS Probability = 0.029284013255889325\nBaltimore Orioles\n\nWS Probability = 0.026251978608958212\nKansas City Royals\n\nWS Probability = 0.026193527897479328\nColorado Rockies\n\nWS Probability = 0.024981381957032166\nChicago White Sox\n\nWS Probability = 0.024526161892631133\nNew York Mets\n\nWS Probability = 0.019074814861446576\nChicago Cubs\n\nWS Probability = 0.017276952744794224\nSan Francisco Giants\n\nWS Probability = 0.016208709280338168\nMiami Marlins\n\nWS Probability = 0.01403468674564293\nMinnesota Twins\n\nWS Probability = 0.010472635316352839\nSt. Louis Cardinals\n\nWS Probability = 0.009901130020250998\nToronto Blue Jays\n\nWS Probability = 0.008732733809696474\nTexas Rangers\n\nWS Probability = 0.008286680318348089\nDetroit Tigers\n\nWS Probability = 0.006756952981716591\nPhiladelphia Phillies\n\nWS Probability = 0.006712003245970522\nSan Diego Padres\n\nWS Probability = 0.006043715235308389\nCincinnati Reds\n" ] ], [ [ "Ok. This didn't work. Let's try this penalized model with a grid search.", "_____no_output_____" ] ], [ [ "def grid_search_svc(X_train, X_test, y_train, y_test):\n '''\n INPUT: \n -X_train = scaled X train data.\n -X_test = scaled X test data.\n -y_train = y train data.\n -y_test = y test data.\n \n OUTPUT:\n -classification report (has F1 score, precision and recall).\n -grid = saved model for prediction. 
\n \n DESCRIPTION:\n -the scaled and split data is put through a grid search with svc.\n -the model is trained.\n -a prediction is made.\n -print out the classification report and give the model.\n '''\n \n # set up svc model.\n model = SVC(kernel=\"rbf\", \n class_weight=\"balanced\",\n probability=True)\n\n # create gridsearch estimator.\n param_grid = {\"C\": [0.0001, 0.001, 0.01, 0.1, 1, 10, 100],\n \"gamma\": [0.0001, 0.001, 0.01, 0.1]}\n grid = GridSearchCV(model, param_grid, verbose=3)\n \n # fit the model.\n grid.fit(X_train, y_train)\n\n # predict.\n prediction = grid.predict(X_test)\n \n # print out the basic information about the grid search.\n print (grid.best_params_)\n print (grid.best_score_)\n print (grid.best_estimator_)\n \n grid = grid.best_estimator_\n predictions = grid.predict(X_test)\n print (classification_report(y_test, prediction, target_names=[\"0\", \"1\"]))\n \n return grid", "_____no_output_____" ], [ "model_grid = grid_search_svc(X_train, X_test, y_train, y_test)", "Fitting 3 folds for each of 28 candidates, totalling 84 fits\n[CV] C=0.0001, gamma=0.0001 ..........................................\n" ] ], [ [ "Nope. This is terrible. Lots of no.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
4a397f2aea56e080859125ee313efed3626e8f90
5,967
ipynb
Jupyter Notebook
notebooks/03_get_ecg_beats.ipynb
davicn/siena_eeg_ecg
2fec0d1d21c5b33d0a2e27caf2e2e4b073388dac
[ "MIT" ]
null
null
null
notebooks/03_get_ecg_beats.ipynb
davicn/siena_eeg_ecg
2fec0d1d21c5b33d0a2e27caf2e2e4b073388dac
[ "MIT" ]
null
null
null
notebooks/03_get_ecg_beats.ipynb
davicn/siena_eeg_ecg
2fec0d1d21c5b33d0a2e27caf2e2e4b073388dac
[ "MIT" ]
null
null
null
32.606557
96
0.48433
[ [ [ "import os\nfrom os.path import join, dirname\nfrom dotenv import load_dotenv\n\ndotenv_path = join(dirname('__file__'), '.env')\n\nload_dotenv(dotenv_path)\n\nDATALAKE_PATH = os.environ.get(\"DATALAKE_PATH\")\nROOT_PATH = os.environ.get(\"ROOT_PATH\")\nSOURCE_PATH = os.environ.get(\"SOURCE_PATH\")\n\nfs = 512\n", "_____no_output_____" ], [ "import pandas as pd", "_____no_output_____" ], [ "infos = pd.read_json(f\"{ROOT_PATH}/docs/infos.json\")\n\nitens = [item for sublist in infos[infos['ekg'] == 'yes']\n ['collections'].to_list() for item in sublist]\n", "_____no_output_____" ], [ "# Execução de pipeline\n\nfor item in itens:\n name = item['name'].replace('.edf','')\n\n for key in ['ictal', 'normal', 'pos-ictal', 'pre-ictal', 'recuperacao']:\n file = f\"{DATALAKE_PATH}/siena/raw/periods/ecg/{key}/{name}.parquet\"\n cmd = f\"get_ecg_beats('{file}','{key}')\"\n !matlab -nodisplay -nosplash -nodesktop -r \"cd('../scripts/'); {cmd}; exit;\"\n\n\n", "\u001b[?1h\u001b=\n < M A T L A B (R) >\n Copyright 1984-2021 The MathWorks, Inc.\n R2021a Update 6 (9.10.0.1851785) 64-bit (glnxa64)\n January 6, 2022\n\n \nTo get started, type doc.\nFor product information, visit www.mathworks.com.\n \nStarting parallel pool (parpool) using the 'local' profile ...\nConnected to the parallel pool (number of workers: 6).\n\u001b[?1l\u001b>\u001b[?1h\u001b=\n < M A T L A B (R) >\n Copyright 1984-2021 The MathWorks, Inc.\n R2021a Update 6 (9.10.0.1851785) 64-bit (glnxa64)\n January 6, 2022\n\n \nTo get started, type doc.\nFor product information, visit www.mathworks.com.\n \nStarting parallel pool (parpool) using the 'local' profile ...\nConnected to the parallel pool (number of workers: 6).\n\u0007Error using ECGsegmentationF (line 43)\nArray indices must be positive integers or logical values.\n\nError in get_ecg_beats (line 14)\n [B,P,QRS,T] = ECGsegmentationF(x, fs);\n \n>> \n>> \n>> \u001b[?1h\u001b=\n < M A T L A B (R) >\n Copyright 1984-2021 The MathWorks, Inc.\n R2021a Update 6 (9.10.0.1851785) 64-bit (glnxa64)\n January 6, 2022\n\n \nTo get started, type doc.\nFor product information, visit www.mathworks.com.\n \nStarting parallel pool (parpool) using the 'local' profile ...\nConnected to the parallel pool (number of workers: 6).\n\u001b[?1l\u001b>\u001b[?1h\u001b=\n < M A T L A B (R) >\n Copyright 1984-2021 The MathWorks, Inc.\n R2021a Update 6 (9.10.0.1851785) 64-bit (glnxa64)\n January 6, 2022\n\n \nTo get started, type doc.\nFor product information, visit www.mathworks.com.\n \nStarting parallel pool (parpool) using the 'local' profile ...\nConnected to the parallel pool (number of workers: 6).\n\u001b[?1l\u001b>\u001b[?1h\u001b=\n < M A T L A B (R) >\n Copyright 1984-2021 The MathWorks, Inc.\n R2021a Update 6 (9.10.0.1851785) 64-bit (glnxa64)\n January 6, 2022\n\n \nTo get started, type doc.\nFor product information, visit www.mathworks.com.\n \nStarting parallel pool (parpool) using the 'local' profile ...\n\u001b[?1h\u001b=\n < M A T L A B (R) >\n Copyright 1984-2021 The MathWorks, Inc.\n R2021a Update 6 (9.10.0.1851785) 64-bit (glnxa64)\n January 6, 2022\n\n \nTo get started, type doc.\nFor product information, visit www.mathworks.com.\n \nStarting parallel pool (parpool) using the 'local' profile ...\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code" ] ]
4a3986123f8b520c3006390f71053a409410a96b
23,928
ipynb
Jupyter Notebook
Chapter11/Activity25/Activity25.ipynb
stuffstuffstuf1/The-Python-Workshop
b529995980a7a8f8f09e9d2f8dd20d6e4d6acb80
[ "MIT" ]
238
2019-12-13T15:44:34.000Z
2022-03-21T05:38:21.000Z
Chapter11/Activity25/Activity25.ipynb
stuffstuffstuf1/The-Python-Workshop
b529995980a7a8f8f09e9d2f8dd20d6e4d6acb80
[ "MIT" ]
8
2020-05-04T03:33:29.000Z
2022-03-12T00:47:26.000Z
Chapter11/Activity25/Activity25.ipynb
stuffstuffstuf1/The-Python-Workshop
b529995980a7a8f8f09e9d2f8dd20d6e4d6acb80
[ "MIT" ]
345
2019-10-08T09:15:11.000Z
2022-03-31T18:28:03.000Z
31.15625
163
0.443873
[ [ [ "import pandas as pd\nimport numpy as np", "_____no_output_____" ], [ "df = pd.read_csv('CHURN.csv')", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "df.info()", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 7043 entries, 0 to 7042\nData columns (total 21 columns):\ncustomerID 7043 non-null object\ngender 7043 non-null object\nSeniorCitizen 7043 non-null int64\nPartner 7043 non-null object\nDependents 7043 non-null object\ntenure 7043 non-null int64\nPhoneService 7043 non-null object\nMultipleLines 7043 non-null object\nInternetService 7043 non-null object\nOnlineSecurity 7043 non-null object\nOnlineBackup 7043 non-null object\nDeviceProtection 7043 non-null object\nTechSupport 7043 non-null object\nStreamingTV 7043 non-null object\nStreamingMovies 7043 non-null object\nContract 7043 non-null object\nPaperlessBilling 7043 non-null object\nPaymentMethod 7043 non-null object\nMonthlyCharges 7043 non-null float64\nTotalCharges 7043 non-null object\nChurn 7043 non-null object\ndtypes: float64(1), int64(2), object(18)\nmemory usage: 1.1+ MB\n" ], [ "df['Churn'] = df['Churn'].replace(to_replace=['No', 'Yes'], value=[0, 1])", "_____no_output_____" ], [ "X = df.iloc[:,1:-1] \ny = df.iloc[:, -1]", "_____no_output_____" ], [ "X = pd.get_dummies(X)", "_____no_output_____" ], [ "from sklearn.model_selection import cross_val_score\n\ndef clf_model (model, cv=3):\n clf = model\n \n scores = cross_val_score(clf, X, y, cv=cv)\n\n \n print('Scores:', scores)\n print('Mean score', scores.mean())", "_____no_output_____" ], [ "from sklearn.linear_model import LogisticRegression\nclf_model(LogisticRegression())", "C:\\Users\\Ratan Singh\\Anaconda3\\lib\\site-packages\\sklearn\\linear_model\\_logistic.py:940: ConvergenceWarning: lbfgs failed to converge (status=1):\nSTOP: TOTAL NO. of ITERATIONS REACHED LIMIT.\n\nIncrease the number of iterations (max_iter) or scale the data as shown in:\n https://scikit-learn.org/stable/modules/preprocessing.html\nPlease also refer to the documentation for alternative solver options:\n https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression\n extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)\nC:\\Users\\Ratan Singh\\Anaconda3\\lib\\site-packages\\sklearn\\linear_model\\_logistic.py:940: ConvergenceWarning: lbfgs failed to converge (status=1):\nSTOP: TOTAL NO. of ITERATIONS REACHED LIMIT.\n\nIncrease the number of iterations (max_iter) or scale the data as shown in:\n https://scikit-learn.org/stable/modules/preprocessing.html\nPlease also refer to the documentation for alternative solver options:\n https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression\n extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)\nC:\\Users\\Ratan Singh\\Anaconda3\\lib\\site-packages\\sklearn\\linear_model\\_logistic.py:940: ConvergenceWarning: lbfgs failed to converge (status=1):\nSTOP: TOTAL NO. 
of ITERATIONS REACHED LIMIT.\n\nIncrease the number of iterations (max_iter) or scale the data as shown in:\n https://scikit-learn.org/stable/modules/preprocessing.html\nPlease also refer to the documentation for alternative solver options:\n https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression\n extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)\n" ], [ "from sklearn.neighbors import KNeighborsClassifier\nclf_model(KNeighborsClassifier())", "Scores: [0.77938671 0.76320273 0.77290158]\nMean score 0.7718303381000114\n" ], [ "from sklearn.naive_bayes import GaussianNB\nclf_model(GaussianNB())", "Scores: [0.27725724 0.28109029 0.27652322]\nMean score 0.2782902503153228\n" ], [ "from sklearn.ensemble import RandomForestClassifier\nclf_model(RandomForestClassifier())", "Scores: [0.78534923 0.78492334 0.78909246]\nMean score 0.78645501028655\n" ], [ "from sklearn.ensemble import AdaBoostClassifier\nclf_model(AdaBoostClassifier())", "Scores: [0.80366269 0.80451448 0.80059651]\nMean score 0.8029245594131428\n" ], [ "from sklearn.metrics import classification_report\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.model_selection import train_test_split", "_____no_output_____" ], [ "X_train, X_test ,y_train, y_test = train_test_split(X, y, test_size = 0.25)", "_____no_output_____" ], [ "def confusion(model):\n clf = model\n clf.fit(X_train, y_train)\n y_pred = clf.predict(X_test)\n print('Confusion Matrix:', confusion_matrix(y_test, y_pred))\n print('Classfication Report:', classification_report(y_test, y_pred))\n \n return clf", "_____no_output_____" ], [ "confusion(AdaBoostClassifier())", "Confusion Matrix: [[1151 125]\n [ 213 272]]\nClassfication Report: precision recall f1-score support\n\n 0 0.84 0.90 0.87 1276\n 1 0.69 0.56 0.62 485\n\n accuracy 0.81 1761\n macro avg 0.76 0.73 0.74 1761\nweighted avg 0.80 0.81 0.80 1761\n\n" ], [ "confusion(RandomForestClassifier())", "Confusion Matrix: [[1159 117]\n [ 259 226]]\nClassfication Report: precision recall f1-score support\n\n 0 0.82 0.91 0.86 1276\n 1 0.66 0.47 0.55 485\n\n accuracy 0.79 1761\n macro avg 0.74 0.69 0.70 1761\nweighted avg 0.77 0.79 0.77 1761\n\n" ], [ "confusion(LogisticRegression())", "C:\\Users\\Ratan Singh\\Anaconda3\\lib\\site-packages\\sklearn\\linear_model\\_logistic.py:940: ConvergenceWarning: lbfgs failed to converge (status=1):\nSTOP: TOTAL NO. 
of ITERATIONS REACHED LIMIT.\n\nIncrease the number of iterations (max_iter) or scale the data as shown in:\n https://scikit-learn.org/stable/modules/preprocessing.html\nPlease also refer to the documentation for alternative solver options:\n https://scikit-learn.org/stable/modules/linear_model.html#logistic-regression\n extra_warning_msg=_LOGISTIC_SOLVER_CONVERGENCE_MSG)\n" ], [ "confusion(AdaBoostClassifier(n_estimators=250))", "Confusion Matrix: [[1137 139]\n [ 213 272]]\nClassfication Report: precision recall f1-score support\n\n 0 0.84 0.89 0.87 1276\n 1 0.66 0.56 0.61 485\n\n accuracy 0.80 1761\n macro avg 0.75 0.73 0.74 1761\nweighted avg 0.79 0.80 0.79 1761\n\n" ], [ "confusion(AdaBoostClassifier(n_estimators=25))", "Confusion Matrix: [[1141 135]\n [ 215 270]]\nClassfication Report: precision recall f1-score support\n\n 0 0.84 0.89 0.87 1276\n 1 0.67 0.56 0.61 485\n\n accuracy 0.80 1761\n macro avg 0.75 0.73 0.74 1761\nweighted avg 0.79 0.80 0.80 1761\n\n" ], [ "confusion(AdaBoostClassifier(n_estimators=15))", "Confusion Matrix: [[1142 134]\n [ 225 260]]\nClassfication Report: precision recall f1-score support\n\n 0 0.84 0.89 0.86 1276\n 1 0.66 0.54 0.59 485\n\n accuracy 0.80 1761\n macro avg 0.75 0.72 0.73 1761\nweighted avg 0.79 0.80 0.79 1761\n\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a3989f6770e2e8948f4fdef1c84667a0389b149
95,575
ipynb
Jupyter Notebook
module2-convolutional-neural-networks/Toby's_LS_DS_432_Convolution_Neural_Networks_Assignment.ipynb
TobyChen320/DS-Unit-4-Sprint-3-Deep-Learning
e8c73604136304f5827f4c53ad3c8121e37142bc
[ "MIT" ]
null
null
null
module2-convolutional-neural-networks/Toby's_LS_DS_432_Convolution_Neural_Networks_Assignment.ipynb
TobyChen320/DS-Unit-4-Sprint-3-Deep-Learning
e8c73604136304f5827f4c53ad3c8121e37142bc
[ "MIT" ]
null
null
null
module2-convolutional-neural-networks/Toby's_LS_DS_432_Convolution_Neural_Networks_Assignment.ipynb
TobyChen320/DS-Unit-4-Sprint-3-Deep-Learning
e8c73604136304f5827f4c53ad3c8121e37142bc
[ "MIT" ]
null
null
null
72.735921
523
0.570955
[ [ [ "<a href=\"https://colab.research.google.com/github/TobyChen320/DS-Unit-4-Sprint-3-Deep-Learning/blob/main/module2-convolutional-neural-networks/Toby's_LS_DS_432_Convolution_Neural_Networks_Assignment.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "<img align=\"left\" src=\"https://lever-client-logos.s3.amazonaws.com/864372b1-534c-480e-acd5-9711f850815c-1524247202159.png\" width=200>\n<br></br>\n<br></br>\n\n## *Data Science Unit 4 Sprint 3 Assignment 2*\n# Convolutional Neural Networks (CNNs)", "_____no_output_____" ], [ "# Assignment\n\n- <a href=\"#p1\">Part 1:</a> Pre-Trained Model\n- <a href=\"#p2\">Part 2:</a> Custom CNN Model\n- <a href=\"#p3\">Part 3:</a> CNN with Data Augmentation\n\n\nYou will apply three different CNN models to a binary image classification model using Keras. Classify images of Mountains (`./data/train/mountain/*`) and images of forests (`./data/train/forest/*`). Treat mountains as the positive class (1) and the forest images as the negative (zero). \n\n|Mountain (+)|Forest (-)|\n|---|---|\n|![](https://github.com/LambdaSchool/DS-Unit-4-Sprint-3-Deep-Learning/blob/main/module2-convolutional-neural-networks/data/train/mountain/art1131.jpg?raw=1)|![](https://github.com/LambdaSchool/DS-Unit-4-Sprint-3-Deep-Learning/blob/main/module2-convolutional-neural-networks/data/validation/forest/cdmc317.jpg?raw=1)|\n\nThe problem is relatively difficult given that the sample is tiny: there are about 350 observations per class. This sample size might be something that you can expect with prototyping an image classification problem/solution at work. Get accustomed to evaluating several different possible models.", "_____no_output_____" ], [ "# Pre - Trained Model\n<a id=\"p1\"></a>\n\nLoad a pretrained network from Keras, [ResNet50](https://tfhub.dev/google/imagenet/resnet_v1_50/classification/1) - a 50 layer deep network trained to recognize [1000 objects](https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt). Starting usage:\n\n```python\nimport numpy as np\n\nfrom tensorflow.keras.applications.resnet50 import ResNet50\nfrom tensorflow.keras.preprocessing import image\nfrom tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions\n\nfrom tensorflow.keras.layers import Dense, GlobalAveragePooling2D\nfrom tensorflow.keras.models import Model # This is the functional API\n\nresnet = ResNet50(weights='imagenet', include_top=False)\n\n```\n\nThe `include_top` parameter in `ResNet50` will remove the full connected layers from the ResNet model. The next step is to turn off the training of the ResNet layers. We want to use the learned parameters without updating them in future training passes. \n\n```python\nfor layer in resnet.layers:\n layer.trainable = False\n```\n\nUsing the Keras functional API, we will need to additional additional full connected layers to our model. We we removed the top layers, we removed all preivous fully connected layers. In other words, we kept only the feature processing portions of our network. You can expert with additional layers beyond what's listed here. The `GlobalAveragePooling2D` layer functions as a really fancy flatten function by taking the average of each of the last convolutional layer outputs (which is two dimensional still). 
\n\n```python\nx = resnet.output\nx = GlobalAveragePooling2D()(x) # This layer is a really fancy flatten\nx = Dense(1024, activation='relu')(x)\npredictions = Dense(1, activation='sigmoid')(x)\nmodel = Model(resnet.input, predictions)\n```\n\nYour assignment is to apply the transfer learning above to classify images of Mountains (`./data/train/mountain/*`) and images of forests (`./data/train/forest/*`). Treat mountains as the positive class (1) and the forest images as the negative (zero). \n\nSteps to complete assignment: \n1. Load in Image Data into numpy arrays (`X`) \n2. Create a `y` for the labels\n3. Train your model with pre-trained layers from resnet\n4. Report your model's accuracy", "_____no_output_____" ], [ "## Load in Data\n\nThis surprisingly more difficult than it seems, because you are working with directories of images instead of a single file. This boiler plate will help you download a zipped version of the directory of images. The directory is organized into \"train\" and \"validation\" which you can use inside an `ImageGenerator` class to stream batches of images thru your model. \n", "_____no_output_____" ], [ "### Download & Summarize the Data\n\nThis step is completed for you. Just run the cells and review the results. ", "_____no_output_____" ] ], [ [ "import tensorflow as tf\nimport os\n\n_URL = 'https://github.com/LambdaSchool/DS-Unit-4-Sprint-3-Deep-Learning/blob/main/module2-convolutional-neural-networks/data.zip?raw=true'\n\npath_to_zip = tf.keras.utils.get_file('./data.zip', origin=_URL, extract=True)\nPATH = os.path.join(os.path.dirname(path_to_zip), 'data')", "Downloading data from https://github.com/LambdaSchool/DS-Unit-4-Sprint-3-Deep-Learning/blob/main/module2-convolutional-neural-networks/data.zip?raw=true\n42172416/42170838 [==============================] - 1s 0us/step\n" ], [ "train_dir = os.path.join(PATH, 'train')\nvalidation_dir = os.path.join(PATH, 'validation')", "_____no_output_____" ], [ "train_mountain_dir = os.path.join(train_dir, 'mountain') # directory with our training cat pictures\ntrain_forest_dir = os.path.join(train_dir, 'forest') # directory with our training dog pictures\nvalidation_mountain_dir = os.path.join(validation_dir, 'mountain') # directory with our validation cat pictures\nvalidation_forest_dir = os.path.join(validation_dir, 'forest') # directory with our validation dog pictures", "_____no_output_____" ], [ "num_mountain_tr = len(os.listdir(train_mountain_dir))\nnum_forest_tr = len(os.listdir(train_forest_dir))\n\nnum_mountain_val = len(os.listdir(validation_mountain_dir))\nnum_forest_val = len(os.listdir(validation_forest_dir))\n\ntotal_train = num_mountain_tr + num_forest_tr\ntotal_val = num_mountain_val + num_forest_val", "_____no_output_____" ], [ "print('total training mountain images:', num_mountain_tr)\nprint('total training forest images:', num_forest_tr)\n\nprint('total validation mountain images:', num_mountain_val)\nprint('total validation forest images:', num_forest_val)\nprint(\"--\")\nprint(\"Total training images:\", total_train)\nprint(\"Total validation images:\", total_val)", "total training mountain images: 254\ntotal training forest images: 270\ntotal validation mountain images: 125\ntotal validation forest images: 62\n--\nTotal training images: 524\nTotal validation images: 187\n" ] ], [ [ "### Keras `ImageGenerator` to Process the Data\n\nThis step is completed for you, but please review the code. 
The `ImageGenerator` class reads in batches of data from a directory and pass them to the model one batch at a time. Just like large text files, this method is advantageous, because it stifles the need to load a bunch of images into memory. \n\nCheck out the documentation for this class method: [Keras `ImageGenerator` Class](https://keras.io/preprocessing/image/#imagedatagenerator-class). You'll expand it's use in the third assignment objective.", "_____no_output_____" ] ], [ [ "batch_size = 16\nepochs = 50\nIMG_HEIGHT = 224\nIMG_WIDTH = 224", "_____no_output_____" ], [ "from tensorflow.keras.preprocessing.image import ImageDataGenerator\n\ntrain_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our training data\nvalidation_image_generator = ImageDataGenerator(rescale=1./255) # Generator for our validation data", "_____no_output_____" ], [ "train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,\n directory=train_dir,\n shuffle=True,\n target_size=(IMG_HEIGHT, IMG_WIDTH),\n class_mode='binary')", "Found 533 images belonging to 2 classes.\n" ], [ "val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,\n directory=validation_dir,\n target_size=(IMG_HEIGHT, IMG_WIDTH),\n class_mode='binary')", "Found 195 images belonging to 2 classes.\n" ] ], [ [ "## Instatiate Model", "_____no_output_____" ] ], [ [ "import numpy as np\n \nfrom tensorflow.keras.applications.resnet50 import ResNet50\nfrom tensorflow.keras.preprocessing import image\nfrom tensorflow.keras.applications.resnet50 import preprocess_input, decode_predictions\nfrom tensorflow.keras.layers import Dense, GlobalAveragePooling2D\nfrom tensorflow.keras.models import Model # This is the functional API\n\nresnet = ResNet50(weights='imagenet', include_top=False)\nfor layer in resnet.layers:\n layer.trainable=False\n\nx = resnet.output\nx = GlobalAveragePooling2D()(x) # This layer is a really fancy flatten\nx = Dense(1024, activation='relu')(x)\npredictions = Dense(1, activation='sigmoid')(x)\nmodel = Model(resnet.input, predictions)\nmodel.compile(optimizer='adam',\n loss='binary_crossentropy',\n metrics=['accuracy'])\nmodel.summary()", "Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5\n94773248/94765736 [==============================] - 0s 0us/step\nModel: \"functional_1\"\n__________________________________________________________________________________________________\nLayer (type) Output Shape Param # Connected to \n==================================================================================================\ninput_1 (InputLayer) [(None, None, None, 0 \n__________________________________________________________________________________________________\nconv1_pad (ZeroPadding2D) (None, None, None, 3 0 input_1[0][0] \n__________________________________________________________________________________________________\nconv1_conv (Conv2D) (None, None, None, 6 9472 conv1_pad[0][0] \n__________________________________________________________________________________________________\nconv1_bn (BatchNormalization) (None, None, None, 6 256 conv1_conv[0][0] \n__________________________________________________________________________________________________\nconv1_relu (Activation) (None, None, None, 6 0 conv1_bn[0][0] \n__________________________________________________________________________________________________\npool1_pad (ZeroPadding2D) (None, None, None, 6 0 
conv1_relu[0][0] \n__________________________________________________________________________________________________\npool1_pool (MaxPooling2D) (None, None, None, 6 0 pool1_pad[0][0] \n__________________________________________________________________________________________________\nconv2_block1_1_conv (Conv2D) (None, None, None, 6 4160 pool1_pool[0][0] \n__________________________________________________________________________________________________\nconv2_block1_1_bn (BatchNormali (None, None, None, 6 256 conv2_block1_1_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block1_1_relu (Activation (None, None, None, 6 0 conv2_block1_1_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block1_2_conv (Conv2D) (None, None, None, 6 36928 conv2_block1_1_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block1_2_bn (BatchNormali (None, None, None, 6 256 conv2_block1_2_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block1_2_relu (Activation (None, None, None, 6 0 conv2_block1_2_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block1_0_conv (Conv2D) (None, None, None, 2 16640 pool1_pool[0][0] \n__________________________________________________________________________________________________\nconv2_block1_3_conv (Conv2D) (None, None, None, 2 16640 conv2_block1_2_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block1_0_bn (BatchNormali (None, None, None, 2 1024 conv2_block1_0_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block1_3_bn (BatchNormali (None, None, None, 2 1024 conv2_block1_3_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block1_add (Add) (None, None, None, 2 0 conv2_block1_0_bn[0][0] \n conv2_block1_3_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block1_out (Activation) (None, None, None, 2 0 conv2_block1_add[0][0] \n__________________________________________________________________________________________________\nconv2_block2_1_conv (Conv2D) (None, None, None, 6 16448 conv2_block1_out[0][0] \n__________________________________________________________________________________________________\nconv2_block2_1_bn (BatchNormali (None, None, None, 6 256 conv2_block2_1_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block2_1_relu (Activation (None, None, None, 6 0 conv2_block2_1_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block2_2_conv (Conv2D) (None, None, None, 6 36928 conv2_block2_1_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block2_2_bn (BatchNormali (None, None, None, 6 256 conv2_block2_2_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block2_2_relu (Activation (None, None, None, 6 0 conv2_block2_2_bn[0][0] 
\n__________________________________________________________________________________________________\nconv2_block2_3_conv (Conv2D) (None, None, None, 2 16640 conv2_block2_2_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block2_3_bn (BatchNormali (None, None, None, 2 1024 conv2_block2_3_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block2_add (Add) (None, None, None, 2 0 conv2_block1_out[0][0] \n conv2_block2_3_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block2_out (Activation) (None, None, None, 2 0 conv2_block2_add[0][0] \n__________________________________________________________________________________________________\nconv2_block3_1_conv (Conv2D) (None, None, None, 6 16448 conv2_block2_out[0][0] \n__________________________________________________________________________________________________\nconv2_block3_1_bn (BatchNormali (None, None, None, 6 256 conv2_block3_1_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block3_1_relu (Activation (None, None, None, 6 0 conv2_block3_1_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block3_2_conv (Conv2D) (None, None, None, 6 36928 conv2_block3_1_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block3_2_bn (BatchNormali (None, None, None, 6 256 conv2_block3_2_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block3_2_relu (Activation (None, None, None, 6 0 conv2_block3_2_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block3_3_conv (Conv2D) (None, None, None, 2 16640 conv2_block3_2_relu[0][0] \n__________________________________________________________________________________________________\nconv2_block3_3_bn (BatchNormali (None, None, None, 2 1024 conv2_block3_3_conv[0][0] \n__________________________________________________________________________________________________\nconv2_block3_add (Add) (None, None, None, 2 0 conv2_block2_out[0][0] \n conv2_block3_3_bn[0][0] \n__________________________________________________________________________________________________\nconv2_block3_out (Activation) (None, None, None, 2 0 conv2_block3_add[0][0] \n__________________________________________________________________________________________________\nconv3_block1_1_conv (Conv2D) (None, None, None, 1 32896 conv2_block3_out[0][0] \n__________________________________________________________________________________________________\nconv3_block1_1_bn (BatchNormali (None, None, None, 1 512 conv3_block1_1_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block1_1_relu (Activation (None, None, None, 1 0 conv3_block1_1_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block1_2_conv (Conv2D) (None, None, None, 1 147584 conv3_block1_1_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block1_2_bn (BatchNormali (None, None, None, 1 512 conv3_block1_2_conv[0][0] 
\n__________________________________________________________________________________________________\nconv3_block1_2_relu (Activation (None, None, None, 1 0 conv3_block1_2_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block1_0_conv (Conv2D) (None, None, None, 5 131584 conv2_block3_out[0][0] \n__________________________________________________________________________________________________\nconv3_block1_3_conv (Conv2D) (None, None, None, 5 66048 conv3_block1_2_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block1_0_bn (BatchNormali (None, None, None, 5 2048 conv3_block1_0_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block1_3_bn (BatchNormali (None, None, None, 5 2048 conv3_block1_3_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block1_add (Add) (None, None, None, 5 0 conv3_block1_0_bn[0][0] \n conv3_block1_3_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block1_out (Activation) (None, None, None, 5 0 conv3_block1_add[0][0] \n__________________________________________________________________________________________________\nconv3_block2_1_conv (Conv2D) (None, None, None, 1 65664 conv3_block1_out[0][0] \n__________________________________________________________________________________________________\nconv3_block2_1_bn (BatchNormali (None, None, None, 1 512 conv3_block2_1_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block2_1_relu (Activation (None, None, None, 1 0 conv3_block2_1_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block2_2_conv (Conv2D) (None, None, None, 1 147584 conv3_block2_1_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block2_2_bn (BatchNormali (None, None, None, 1 512 conv3_block2_2_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block2_2_relu (Activation (None, None, None, 1 0 conv3_block2_2_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block2_3_conv (Conv2D) (None, None, None, 5 66048 conv3_block2_2_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block2_3_bn (BatchNormali (None, None, None, 5 2048 conv3_block2_3_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block2_add (Add) (None, None, None, 5 0 conv3_block1_out[0][0] \n conv3_block2_3_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block2_out (Activation) (None, None, None, 5 0 conv3_block2_add[0][0] \n__________________________________________________________________________________________________\nconv3_block3_1_conv (Conv2D) (None, None, None, 1 65664 conv3_block2_out[0][0] \n__________________________________________________________________________________________________\nconv3_block3_1_bn (BatchNormali (None, None, None, 1 512 conv3_block3_1_conv[0][0] 
\n__________________________________________________________________________________________________\nconv3_block3_1_relu (Activation (None, None, None, 1 0 conv3_block3_1_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block3_2_conv (Conv2D) (None, None, None, 1 147584 conv3_block3_1_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block3_2_bn (BatchNormali (None, None, None, 1 512 conv3_block3_2_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block3_2_relu (Activation (None, None, None, 1 0 conv3_block3_2_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block3_3_conv (Conv2D) (None, None, None, 5 66048 conv3_block3_2_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block3_3_bn (BatchNormali (None, None, None, 5 2048 conv3_block3_3_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block3_add (Add) (None, None, None, 5 0 conv3_block2_out[0][0] \n conv3_block3_3_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block3_out (Activation) (None, None, None, 5 0 conv3_block3_add[0][0] \n__________________________________________________________________________________________________\nconv3_block4_1_conv (Conv2D) (None, None, None, 1 65664 conv3_block3_out[0][0] \n__________________________________________________________________________________________________\nconv3_block4_1_bn (BatchNormali (None, None, None, 1 512 conv3_block4_1_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block4_1_relu (Activation (None, None, None, 1 0 conv3_block4_1_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block4_2_conv (Conv2D) (None, None, None, 1 147584 conv3_block4_1_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block4_2_bn (BatchNormali (None, None, None, 1 512 conv3_block4_2_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block4_2_relu (Activation (None, None, None, 1 0 conv3_block4_2_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block4_3_conv (Conv2D) (None, None, None, 5 66048 conv3_block4_2_relu[0][0] \n__________________________________________________________________________________________________\nconv3_block4_3_bn (BatchNormali (None, None, None, 5 2048 conv3_block4_3_conv[0][0] \n__________________________________________________________________________________________________\nconv3_block4_add (Add) (None, None, None, 5 0 conv3_block3_out[0][0] \n conv3_block4_3_bn[0][0] \n__________________________________________________________________________________________________\nconv3_block4_out (Activation) (None, None, None, 5 0 conv3_block4_add[0][0] \n__________________________________________________________________________________________________\nconv4_block1_1_conv (Conv2D) (None, None, None, 2 131328 conv3_block4_out[0][0] 
\n__________________________________________________________________________________________________\nconv4_block1_1_bn (BatchNormali (None, None, None, 2 1024 conv4_block1_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block1_1_relu (Activation (None, None, None, 2 0 conv4_block1_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block1_2_conv (Conv2D) (None, None, None, 2 590080 conv4_block1_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block1_2_bn (BatchNormali (None, None, None, 2 1024 conv4_block1_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block1_2_relu (Activation (None, None, None, 2 0 conv4_block1_2_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block1_0_conv (Conv2D) (None, None, None, 1 525312 conv3_block4_out[0][0] \n__________________________________________________________________________________________________\nconv4_block1_3_conv (Conv2D) (None, None, None, 1 263168 conv4_block1_2_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block1_0_bn (BatchNormali (None, None, None, 1 4096 conv4_block1_0_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block1_3_bn (BatchNormali (None, None, None, 1 4096 conv4_block1_3_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block1_add (Add) (None, None, None, 1 0 conv4_block1_0_bn[0][0] \n conv4_block1_3_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block1_out (Activation) (None, None, None, 1 0 conv4_block1_add[0][0] \n__________________________________________________________________________________________________\nconv4_block2_1_conv (Conv2D) (None, None, None, 2 262400 conv4_block1_out[0][0] \n__________________________________________________________________________________________________\nconv4_block2_1_bn (BatchNormali (None, None, None, 2 1024 conv4_block2_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block2_1_relu (Activation (None, None, None, 2 0 conv4_block2_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block2_2_conv (Conv2D) (None, None, None, 2 590080 conv4_block2_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block2_2_bn (BatchNormali (None, None, None, 2 1024 conv4_block2_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block2_2_relu (Activation (None, None, None, 2 0 conv4_block2_2_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block2_3_conv (Conv2D) (None, None, None, 1 263168 conv4_block2_2_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block2_3_bn (BatchNormali (None, None, None, 1 4096 conv4_block2_3_conv[0][0] 
\n__________________________________________________________________________________________________\nconv4_block2_add (Add) (None, None, None, 1 0 conv4_block1_out[0][0] \n conv4_block2_3_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block2_out (Activation) (None, None, None, 1 0 conv4_block2_add[0][0] \n__________________________________________________________________________________________________\nconv4_block3_1_conv (Conv2D) (None, None, None, 2 262400 conv4_block2_out[0][0] \n__________________________________________________________________________________________________\nconv4_block3_1_bn (BatchNormali (None, None, None, 2 1024 conv4_block3_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block3_1_relu (Activation (None, None, None, 2 0 conv4_block3_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block3_2_conv (Conv2D) (None, None, None, 2 590080 conv4_block3_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block3_2_bn (BatchNormali (None, None, None, 2 1024 conv4_block3_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block3_2_relu (Activation (None, None, None, 2 0 conv4_block3_2_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block3_3_conv (Conv2D) (None, None, None, 1 263168 conv4_block3_2_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block3_3_bn (BatchNormali (None, None, None, 1 4096 conv4_block3_3_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block3_add (Add) (None, None, None, 1 0 conv4_block2_out[0][0] \n conv4_block3_3_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block3_out (Activation) (None, None, None, 1 0 conv4_block3_add[0][0] \n__________________________________________________________________________________________________\nconv4_block4_1_conv (Conv2D) (None, None, None, 2 262400 conv4_block3_out[0][0] \n__________________________________________________________________________________________________\nconv4_block4_1_bn (BatchNormali (None, None, None, 2 1024 conv4_block4_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block4_1_relu (Activation (None, None, None, 2 0 conv4_block4_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block4_2_conv (Conv2D) (None, None, None, 2 590080 conv4_block4_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block4_2_bn (BatchNormali (None, None, None, 2 1024 conv4_block4_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block4_2_relu (Activation (None, None, None, 2 0 conv4_block4_2_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block4_3_conv (Conv2D) (None, None, None, 1 263168 conv4_block4_2_relu[0][0] 
\n__________________________________________________________________________________________________\nconv4_block4_3_bn (BatchNormali (None, None, None, 1 4096 conv4_block4_3_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block4_add (Add) (None, None, None, 1 0 conv4_block3_out[0][0] \n conv4_block4_3_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block4_out (Activation) (None, None, None, 1 0 conv4_block4_add[0][0] \n__________________________________________________________________________________________________\nconv4_block5_1_conv (Conv2D) (None, None, None, 2 262400 conv4_block4_out[0][0] \n__________________________________________________________________________________________________\nconv4_block5_1_bn (BatchNormali (None, None, None, 2 1024 conv4_block5_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block5_1_relu (Activation (None, None, None, 2 0 conv4_block5_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block5_2_conv (Conv2D) (None, None, None, 2 590080 conv4_block5_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block5_2_bn (BatchNormali (None, None, None, 2 1024 conv4_block5_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block5_2_relu (Activation (None, None, None, 2 0 conv4_block5_2_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block5_3_conv (Conv2D) (None, None, None, 1 263168 conv4_block5_2_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block5_3_bn (BatchNormali (None, None, None, 1 4096 conv4_block5_3_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block5_add (Add) (None, None, None, 1 0 conv4_block4_out[0][0] \n conv4_block5_3_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block5_out (Activation) (None, None, None, 1 0 conv4_block5_add[0][0] \n__________________________________________________________________________________________________\nconv4_block6_1_conv (Conv2D) (None, None, None, 2 262400 conv4_block5_out[0][0] \n__________________________________________________________________________________________________\nconv4_block6_1_bn (BatchNormali (None, None, None, 2 1024 conv4_block6_1_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block6_1_relu (Activation (None, None, None, 2 0 conv4_block6_1_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block6_2_conv (Conv2D) (None, None, None, 2 590080 conv4_block6_1_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block6_2_bn (BatchNormali (None, None, None, 2 1024 conv4_block6_2_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block6_2_relu (Activation (None, None, None, 2 0 conv4_block6_2_bn[0][0] 
\n__________________________________________________________________________________________________\nconv4_block6_3_conv (Conv2D) (None, None, None, 1 263168 conv4_block6_2_relu[0][0] \n__________________________________________________________________________________________________\nconv4_block6_3_bn (BatchNormali (None, None, None, 1 4096 conv4_block6_3_conv[0][0] \n__________________________________________________________________________________________________\nconv4_block6_add (Add) (None, None, None, 1 0 conv4_block5_out[0][0] \n conv4_block6_3_bn[0][0] \n__________________________________________________________________________________________________\nconv4_block6_out (Activation) (None, None, None, 1 0 conv4_block6_add[0][0] \n__________________________________________________________________________________________________\nconv5_block1_1_conv (Conv2D) (None, None, None, 5 524800 conv4_block6_out[0][0] \n__________________________________________________________________________________________________\nconv5_block1_1_bn (BatchNormali (None, None, None, 5 2048 conv5_block1_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block1_1_relu (Activation (None, None, None, 5 0 conv5_block1_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block1_2_conv (Conv2D) (None, None, None, 5 2359808 conv5_block1_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block1_2_bn (BatchNormali (None, None, None, 5 2048 conv5_block1_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block1_2_relu (Activation (None, None, None, 5 0 conv5_block1_2_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block1_0_conv (Conv2D) (None, None, None, 2 2099200 conv4_block6_out[0][0] \n__________________________________________________________________________________________________\nconv5_block1_3_conv (Conv2D) (None, None, None, 2 1050624 conv5_block1_2_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block1_0_bn (BatchNormali (None, None, None, 2 8192 conv5_block1_0_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block1_3_bn (BatchNormali (None, None, None, 2 8192 conv5_block1_3_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block1_add (Add) (None, None, None, 2 0 conv5_block1_0_bn[0][0] \n conv5_block1_3_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block1_out (Activation) (None, None, None, 2 0 conv5_block1_add[0][0] \n__________________________________________________________________________________________________\nconv5_block2_1_conv (Conv2D) (None, None, None, 5 1049088 conv5_block1_out[0][0] \n__________________________________________________________________________________________________\nconv5_block2_1_bn (BatchNormali (None, None, None, 5 2048 conv5_block2_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block2_1_relu (Activation (None, None, None, 5 0 conv5_block2_1_bn[0][0] 
\n__________________________________________________________________________________________________\nconv5_block2_2_conv (Conv2D) (None, None, None, 5 2359808 conv5_block2_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block2_2_bn (BatchNormali (None, None, None, 5 2048 conv5_block2_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block2_2_relu (Activation (None, None, None, 5 0 conv5_block2_2_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block2_3_conv (Conv2D) (None, None, None, 2 1050624 conv5_block2_2_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block2_3_bn (BatchNormali (None, None, None, 2 8192 conv5_block2_3_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block2_add (Add) (None, None, None, 2 0 conv5_block1_out[0][0] \n conv5_block2_3_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block2_out (Activation) (None, None, None, 2 0 conv5_block2_add[0][0] \n__________________________________________________________________________________________________\nconv5_block3_1_conv (Conv2D) (None, None, None, 5 1049088 conv5_block2_out[0][0] \n__________________________________________________________________________________________________\nconv5_block3_1_bn (BatchNormali (None, None, None, 5 2048 conv5_block3_1_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block3_1_relu (Activation (None, None, None, 5 0 conv5_block3_1_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block3_2_conv (Conv2D) (None, None, None, 5 2359808 conv5_block3_1_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block3_2_bn (BatchNormali (None, None, None, 5 2048 conv5_block3_2_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block3_2_relu (Activation (None, None, None, 5 0 conv5_block3_2_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block3_3_conv (Conv2D) (None, None, None, 2 1050624 conv5_block3_2_relu[0][0] \n__________________________________________________________________________________________________\nconv5_block3_3_bn (BatchNormali (None, None, None, 2 8192 conv5_block3_3_conv[0][0] \n__________________________________________________________________________________________________\nconv5_block3_add (Add) (None, None, None, 2 0 conv5_block2_out[0][0] \n conv5_block3_3_bn[0][0] \n__________________________________________________________________________________________________\nconv5_block3_out (Activation) (None, None, None, 2 0 conv5_block3_add[0][0] \n__________________________________________________________________________________________________\nglobal_average_pooling2d (Globa (None, 2048) 0 conv5_block3_out[0][0] \n__________________________________________________________________________________________________\ndense (Dense) (None, 1024) 2098176 global_average_pooling2d[0][0] 
\n__________________________________________________________________________________________________\ndense_1 (Dense) (None, 1) 1025 dense[0][0] \n==================================================================================================\nTotal params: 25,686,913\nTrainable params: 2,099,201\nNon-trainable params: 23,587,712\n__________________________________________________________________________________________________\n" ] ], [ [ "## Fit Model", "_____no_output_____" ] ], [ [ "history = model.fit(\n train_data_gen,\n steps_per_epoch=total_train // batch_size,\n epochs=epochs,\n validation_data=val_data_gen,\n validation_steps=total_val // batch_size\n)", "Epoch 1/50\n32/32 [==============================] - 3s 107ms/step - loss: 0.7980 - accuracy: 0.5808 - val_loss: 0.6329 - val_accuracy: 0.7273\nEpoch 2/50\n32/32 [==============================] - 2s 77ms/step - loss: 0.5624 - accuracy: 0.7685 - val_loss: 0.5171 - val_accuracy: 0.8352\nEpoch 3/50\n32/32 [==============================] - 2s 77ms/step - loss: 0.5067 - accuracy: 0.7964 - val_loss: 0.4290 - val_accuracy: 0.7784\nEpoch 4/50\n32/32 [==============================] - 2s 77ms/step - loss: 0.4855 - accuracy: 0.7645 - val_loss: 0.4455 - val_accuracy: 0.8295\nEpoch 5/50\n32/32 [==============================] - 3s 78ms/step - loss: 0.4156 - accuracy: 0.8383 - val_loss: 0.5267 - val_accuracy: 0.7727\nEpoch 6/50\n32/32 [==============================] - 2s 77ms/step - loss: 0.3944 - accuracy: 0.8423 - val_loss: 0.3575 - val_accuracy: 0.8239\nEpoch 7/50\n32/32 [==============================] - 2s 78ms/step - loss: 0.3543 - accuracy: 0.8723 - val_loss: 0.5511 - val_accuracy: 0.7386\nEpoch 8/50\n32/32 [==============================] - 2s 78ms/step - loss: 0.3895 - accuracy: 0.8124 - val_loss: 0.5831 - val_accuracy: 0.6989\nEpoch 9/50\n32/32 [==============================] - 2s 78ms/step - loss: 0.3184 - accuracy: 0.8902 - val_loss: 0.3063 - val_accuracy: 0.9034\nEpoch 10/50\n32/32 [==============================] - 3s 78ms/step - loss: 0.2840 - accuracy: 0.8902 - val_loss: 0.3908 - val_accuracy: 0.8466\nEpoch 11/50\n32/32 [==============================] - 3s 79ms/step - loss: 0.2761 - accuracy: 0.8822 - val_loss: 0.4959 - val_accuracy: 0.8011\nEpoch 12/50\n32/32 [==============================] - 3s 79ms/step - loss: 0.2980 - accuracy: 0.8663 - val_loss: 0.2618 - val_accuracy: 0.9091\nEpoch 13/50\n32/32 [==============================] - 3s 80ms/step - loss: 0.2579 - accuracy: 0.9023 - val_loss: 0.3267 - val_accuracy: 0.8636\nEpoch 14/50\n32/32 [==============================] - 3s 79ms/step - loss: 0.2302 - accuracy: 0.9162 - val_loss: 0.2871 - val_accuracy: 0.8864\nEpoch 15/50\n32/32 [==============================] - 3s 79ms/step - loss: 0.2283 - accuracy: 0.9122 - val_loss: 0.6701 - val_accuracy: 0.6875\nEpoch 16/50\n32/32 [==============================] - 3s 79ms/step - loss: 0.2335 - accuracy: 0.9122 - val_loss: 0.2354 - val_accuracy: 0.9318\nEpoch 17/50\n32/32 [==============================] - 3s 79ms/step - loss: 0.2074 - accuracy: 0.9301 - val_loss: 0.2978 - val_accuracy: 0.8580\nEpoch 18/50\n32/32 [==============================] - 2s 78ms/step - loss: 0.1761 - accuracy: 0.9481 - val_loss: 0.2885 - val_accuracy: 0.8750\nEpoch 19/50\n32/32 [==============================] - 3s 78ms/step - loss: 0.1858 - accuracy: 0.9461 - val_loss: 0.8293 - val_accuracy: 0.6591\nEpoch 20/50\n32/32 [==============================] - 3s 79ms/step - loss: 0.2426 - accuracy: 0.8882 - val_loss: 0.2434 - val_accuracy: 0.9205\nEpoch 
21/50\n32/32 [==============================] - 2s 78ms/step - loss: 0.2111 - accuracy: 0.9242 - val_loss: 0.6544 - val_accuracy: 0.7216\nEpoch 22/50\n32/32 [==============================] - 2s 78ms/step - loss: 0.3193 - accuracy: 0.8523 - val_loss: 0.2323 - val_accuracy: 0.9148\nEpoch 23/50\n32/32 [==============================] - 3s 78ms/step - loss: 0.1794 - accuracy: 0.9361 - val_loss: 0.2254 - val_accuracy: 0.9375\nEpoch 24/50\n32/32 [==============================] - 2s 78ms/step - loss: 0.1621 - accuracy: 0.9421 - val_loss: 0.2264 - val_accuracy: 0.9375\nEpoch 25/50\n32/32 [==============================] - 2s 77ms/step - loss: 0.1363 - accuracy: 0.9541 - val_loss: 0.2613 - val_accuracy: 0.8864\nEpoch 26/50\n32/32 [==============================] - 3s 78ms/step - loss: 0.1513 - accuracy: 0.9481 - val_loss: 0.4836 - val_accuracy: 0.8068\nEpoch 27/50\n32/32 [==============================] - 2s 77ms/step - loss: 0.1786 - accuracy: 0.9381 - val_loss: 0.2235 - val_accuracy: 0.9318\nEpoch 28/50\n32/32 [==============================] - 2s 77ms/step - loss: 0.1446 - accuracy: 0.9481 - val_loss: 0.2073 - val_accuracy: 0.9375\nEpoch 29/50\n32/32 [==============================] - 3s 79ms/step - loss: 0.1677 - accuracy: 0.9421 - val_loss: 0.3818 - val_accuracy: 0.8523\nEpoch 30/50\n32/32 [==============================] - 2s 78ms/step - loss: 0.1349 - accuracy: 0.9521 - val_loss: 0.2342 - val_accuracy: 0.9205\nEpoch 31/50\n32/32 [==============================] - 2s 78ms/step - loss: 0.1278 - accuracy: 0.9621 - val_loss: 0.2623 - val_accuracy: 0.9034\nEpoch 32/50\n32/32 [==============================] - 3s 79ms/step - loss: 0.2164 - accuracy: 0.9042 - val_loss: 0.7988 - val_accuracy: 0.6989\nEpoch 33/50\n32/32 [==============================] - 2s 77ms/step - loss: 0.1447 - accuracy: 0.9361 - val_loss: 0.3514 - val_accuracy: 0.8523\nEpoch 34/50\n32/32 [==============================] - 2s 78ms/step - loss: 0.1681 - accuracy: 0.9381 - val_loss: 0.2472 - val_accuracy: 0.8977\nEpoch 35/50\n32/32 [==============================] - 2s 78ms/step - loss: 0.1171 - accuracy: 0.9601 - val_loss: 0.4210 - val_accuracy: 0.8295\nEpoch 36/50\n32/32 [==============================] - 2s 77ms/step - loss: 0.1222 - accuracy: 0.9581 - val_loss: 0.1912 - val_accuracy: 0.9432\nEpoch 37/50\n32/32 [==============================] - 3s 79ms/step - loss: 0.1024 - accuracy: 0.9601 - val_loss: 0.2687 - val_accuracy: 0.8977\nEpoch 38/50\n32/32 [==============================] - 2s 78ms/step - loss: 0.0941 - accuracy: 0.9701 - val_loss: 0.2056 - val_accuracy: 0.9318\nEpoch 39/50\n32/32 [==============================] - 3s 78ms/step - loss: 0.0791 - accuracy: 0.9741 - val_loss: 0.2059 - val_accuracy: 0.9375\nEpoch 40/50\n32/32 [==============================] - 3s 78ms/step - loss: 0.0892 - accuracy: 0.9681 - val_loss: 0.2070 - val_accuracy: 0.9318\nEpoch 41/50\n32/32 [==============================] - 2s 78ms/step - loss: 0.0790 - accuracy: 0.9800 - val_loss: 0.2534 - val_accuracy: 0.9261\nEpoch 42/50\n32/32 [==============================] - 2s 77ms/step - loss: 0.0959 - accuracy: 0.9581 - val_loss: 0.1970 - val_accuracy: 0.9375\nEpoch 43/50\n32/32 [==============================] - 3s 78ms/step - loss: 0.0753 - accuracy: 0.9760 - val_loss: 0.1883 - val_accuracy: 0.9545\nEpoch 44/50\n32/32 [==============================] - 2s 78ms/step - loss: 0.0965 - accuracy: 0.9701 - val_loss: 0.2112 - val_accuracy: 0.9489\nEpoch 45/50\n32/32 [==============================] - 2s 77ms/step - loss: 0.1548 - accuracy: 0.9381 - 
val_loss: 0.5566 - val_accuracy: 0.8011\nEpoch 46/50\n32/32 [==============================] - 2s 78ms/step - loss: 0.1280 - accuracy: 0.9641 - val_loss: 0.2616 - val_accuracy: 0.9034\nEpoch 47/50\n32/32 [==============================] - 2s 78ms/step - loss: 0.0797 - accuracy: 0.9741 - val_loss: 0.2060 - val_accuracy: 0.9432\nEpoch 48/50\n32/32 [==============================] - 2s 77ms/step - loss: 0.0634 - accuracy: 0.9800 - val_loss: 0.1838 - val_accuracy: 0.9489\nEpoch 49/50\n32/32 [==============================] - 2s 77ms/step - loss: 0.0792 - accuracy: 0.9780 - val_loss: 0.1775 - val_accuracy: 0.9489\nEpoch 50/50\n32/32 [==============================] - 2s 78ms/step - loss: 0.0988 - accuracy: 0.9621 - val_loss: 0.5186 - val_accuracy: 0.8182\n" ] ], [ [ "# Custom CNN Model\n\nIn this step, write and train your own convolutional neural network using Keras. You can use any architecture that suits you as long as it has at least one convolutional and one pooling layer at the beginning of the network - you can add more if you want. ", "_____no_output_____" ] ], [ [ "# Define the Model\nfrom tensorflow.keras import datasets\nfrom tensorflow.keras.models import Sequential, Model # <- May Use\nfrom tensorflow.keras.layers import Dense, Conv2D, MaxPooling2D, Flatten, Dropout  # Dense is used below\n\n# input_shape is only needed on the first layer of a Sequential model\nmy_model = Sequential([Conv2D(32,3, activation=\"relu\", input_shape=(224,224,3)),\n                       MaxPooling2D(),\n                       Conv2D(32,3, activation=\"relu\"),\n                       MaxPooling2D(),\n                       Conv2D(64,3, activation=\"relu\"),\n                       MaxPooling2D(),\n                       Flatten(),\n                       Dense(64, activation = \"relu\"),\n                       Dropout(0.1),\n                       Dense(1, activation=\"sigmoid\")\n                      ])", "_____no_output_____" ], [ "# Compile Model\nmy_model.compile(optimizer='adam',\n                 loss='binary_crossentropy',\n                 metrics=['accuracy'])", "_____no_output_____" ], [ "# Fit Model\nhistory = my_model.fit(\n    train_data_gen,\n    steps_per_epoch=total_train // batch_size,\n    epochs=epochs,\n    validation_data=val_data_gen,\n    validation_steps=total_val // batch_size\n)", "Epoch 1/50\n32/32 [==============================] - 2s 66ms/step - loss: 0.5109 - accuracy: 0.7745 - val_loss: 0.2195 - val_accuracy: 0.9148\nEpoch 2/50\n32/32 [==============================] - 2s 61ms/step - loss: 0.2451 - accuracy: 0.9022 - val_loss: 0.3978 - val_accuracy: 0.8523\nEpoch 3/50\n32/32 [==============================] - 2s 63ms/step - loss: 0.1814 - accuracy: 0.9261 - val_loss: 0.1644 - val_accuracy: 0.9261\nEpoch 4/50\n32/32 [==============================] - 2s 63ms/step - loss: 0.1423 - accuracy: 0.9421 - val_loss: 0.2635 - val_accuracy: 0.8920\nEpoch 5/50\n32/32 [==============================] - 2s 64ms/step - loss: 0.1330 - accuracy: 0.9461 - val_loss: 0.1697 - val_accuracy: 0.9318\nEpoch 6/50\n32/32 [==============================] - 2s 64ms/step - loss: 0.1201 - accuracy: 0.9521 - val_loss: 0.1672 - val_accuracy: 0.9261\nEpoch 7/50\n32/32 [==============================] - 2s 64ms/step - loss: 0.1225 - accuracy: 0.9561 - val_loss: 0.2557 - val_accuracy: 0.9034\nEpoch 8/50\n32/32 [==============================] - 2s 62ms/step - loss: 0.0538 - accuracy: 0.9780 - val_loss: 0.2272 - val_accuracy: 0.9318\nEpoch 9/50\n32/32 [==============================] - 2s 63ms/step - loss: 0.0581 - accuracy: 0.9780 - val_loss: 0.1438 - val_accuracy: 0.9375\nEpoch 10/50\n32/32 [==============================] - 2s 63ms/step - loss: 0.0572 - accuracy: 0.9785 - val_loss: 0.2025 - val_accuracy: 0.9205\nEpoch 11/50\n32/32 [==============================] - 2s 61ms/step - loss: 0.0766 - accuracy: 0.9681 - 
val_loss: 0.4217 - val_accuracy: 0.8693\nEpoch 12/50\n32/32 [==============================] - 2s 61ms/step - loss: 0.0389 - accuracy: 0.9880 - val_loss: 0.1795 - val_accuracy: 0.9432\nEpoch 13/50\n32/32 [==============================] - 2s 62ms/step - loss: 0.0126 - accuracy: 0.9980 - val_loss: 0.1679 - val_accuracy: 0.9545\nEpoch 14/50\n32/32 [==============================] - 2s 62ms/step - loss: 0.1930 - accuracy: 0.9361 - val_loss: 0.1867 - val_accuracy: 0.9148\nEpoch 15/50\n32/32 [==============================] - 2s 62ms/step - loss: 0.0608 - accuracy: 0.9800 - val_loss: 0.3342 - val_accuracy: 0.9148\nEpoch 16/50\n32/32 [==============================] - 2s 61ms/step - loss: 0.0236 - accuracy: 0.9940 - val_loss: 0.3731 - val_accuracy: 0.8920\nEpoch 17/50\n32/32 [==============================] - 2s 61ms/step - loss: 0.0084 - accuracy: 1.0000 - val_loss: 0.2363 - val_accuracy: 0.9261\nEpoch 18/50\n32/32 [==============================] - 2s 62ms/step - loss: 0.0021 - accuracy: 1.0000 - val_loss: 0.3817 - val_accuracy: 0.9205\nEpoch 19/50\n32/32 [==============================] - 2s 61ms/step - loss: 0.0015 - accuracy: 1.0000 - val_loss: 0.3313 - val_accuracy: 0.9432\nEpoch 20/50\n32/32 [==============================] - 2s 61ms/step - loss: 0.0027 - accuracy: 1.0000 - val_loss: 0.4406 - val_accuracy: 0.9148\nEpoch 21/50\n32/32 [==============================] - 2s 61ms/step - loss: 8.5748e-04 - accuracy: 1.0000 - val_loss: 0.3169 - val_accuracy: 0.9375\nEpoch 22/50\n32/32 [==============================] - 2s 62ms/step - loss: 6.4382e-04 - accuracy: 1.0000 - val_loss: 0.3207 - val_accuracy: 0.9261\nEpoch 23/50\n32/32 [==============================] - 2s 61ms/step - loss: 4.7190e-04 - accuracy: 1.0000 - val_loss: 0.2776 - val_accuracy: 0.9489\nEpoch 24/50\n32/32 [==============================] - 2s 61ms/step - loss: 6.8468e-04 - accuracy: 1.0000 - val_loss: 0.3045 - val_accuracy: 0.9375\nEpoch 25/50\n32/32 [==============================] - 2s 62ms/step - loss: 0.0084 - accuracy: 0.9960 - val_loss: 0.2429 - val_accuracy: 0.9432\nEpoch 26/50\n32/32 [==============================] - 2s 60ms/step - loss: 0.0690 - accuracy: 0.9760 - val_loss: 0.1636 - val_accuracy: 0.9375\nEpoch 27/50\n32/32 [==============================] - 2s 59ms/step - loss: 0.0256 - accuracy: 0.9960 - val_loss: 0.2140 - val_accuracy: 0.9261\nEpoch 28/50\n32/32 [==============================] - 2s 59ms/step - loss: 0.0162 - accuracy: 0.9940 - val_loss: 0.3650 - val_accuracy: 0.9148\nEpoch 29/50\n32/32 [==============================] - 2s 60ms/step - loss: 0.0429 - accuracy: 0.9880 - val_loss: 0.6649 - val_accuracy: 0.8636\nEpoch 30/50\n32/32 [==============================] - 2s 61ms/step - loss: 0.0758 - accuracy: 0.9760 - val_loss: 0.4856 - val_accuracy: 0.8409\nEpoch 31/50\n32/32 [==============================] - 2s 60ms/step - loss: 0.0192 - accuracy: 0.9940 - val_loss: 0.1966 - val_accuracy: 0.9545\nEpoch 32/50\n32/32 [==============================] - 2s 61ms/step - loss: 0.0078 - accuracy: 1.0000 - val_loss: 0.2080 - val_accuracy: 0.9318\nEpoch 33/50\n32/32 [==============================] - 2s 61ms/step - loss: 9.6866e-04 - accuracy: 1.0000 - val_loss: 0.3177 - val_accuracy: 0.9318\nEpoch 34/50\n32/32 [==============================] - 2s 60ms/step - loss: 7.5597e-04 - accuracy: 1.0000 - val_loss: 0.3239 - val_accuracy: 0.9091\nEpoch 35/50\n32/32 [==============================] - 2s 60ms/step - loss: 1.9554e-04 - accuracy: 1.0000 - val_loss: 0.3392 - val_accuracy: 0.9091\nEpoch 36/50\n32/32 
[==============================] - 2s 61ms/step - loss: 4.9188e-04 - accuracy: 1.0000 - val_loss: 0.2509 - val_accuracy: 0.9432\nEpoch 37/50\n32/32 [==============================] - 2s 61ms/step - loss: 1.4790e-04 - accuracy: 1.0000 - val_loss: 0.3925 - val_accuracy: 0.9091\nEpoch 38/50\n32/32 [==============================] - 2s 59ms/step - loss: 1.5187e-04 - accuracy: 1.0000 - val_loss: 0.3603 - val_accuracy: 0.9205\nEpoch 39/50\n32/32 [==============================] - 2s 59ms/step - loss: 1.2851e-04 - accuracy: 1.0000 - val_loss: 0.2882 - val_accuracy: 0.9261\nEpoch 40/50\n32/32 [==============================] - 2s 60ms/step - loss: 8.8036e-05 - accuracy: 1.0000 - val_loss: 0.3368 - val_accuracy: 0.9318\nEpoch 41/50\n32/32 [==============================] - 2s 59ms/step - loss: 1.3889e-04 - accuracy: 1.0000 - val_loss: 0.3426 - val_accuracy: 0.9261\nEpoch 42/50\n32/32 [==============================] - 2s 60ms/step - loss: 2.2228e-04 - accuracy: 1.0000 - val_loss: 0.3402 - val_accuracy: 0.9261\nEpoch 43/50\n32/32 [==============================] - 2s 60ms/step - loss: 1.5130e-04 - accuracy: 1.0000 - val_loss: 0.2071 - val_accuracy: 0.9318\nEpoch 44/50\n32/32 [==============================] - 2s 60ms/step - loss: 1.6147e-04 - accuracy: 1.0000 - val_loss: 0.4133 - val_accuracy: 0.9091\nEpoch 45/50\n32/32 [==============================] - 2s 60ms/step - loss: 7.6479e-05 - accuracy: 1.0000 - val_loss: 0.4413 - val_accuracy: 0.9034\nEpoch 46/50\n32/32 [==============================] - 2s 58ms/step - loss: 5.1222e-05 - accuracy: 1.0000 - val_loss: 0.3961 - val_accuracy: 0.9205\nEpoch 47/50\n32/32 [==============================] - 2s 59ms/step - loss: 5.9265e-05 - accuracy: 1.0000 - val_loss: 0.4222 - val_accuracy: 0.9091\nEpoch 48/50\n32/32 [==============================] - 2s 59ms/step - loss: 6.0774e-05 - accuracy: 1.0000 - val_loss: 0.4017 - val_accuracy: 0.9091\nEpoch 49/50\n32/32 [==============================] - 2s 59ms/step - loss: 4.0281e-05 - accuracy: 1.0000 - val_loss: 0.4196 - val_accuracy: 0.9205\nEpoch 50/50\n32/32 [==============================] - 2s 58ms/step - loss: 4.9743e-05 - accuracy: 1.0000 - val_loss: 0.4078 - val_accuracy: 0.9205\n" ] ], [ [ "# Custom CNN Model with Image Manipulations\n\nTo simulate a larger sample of images, you can apply image manipulation techniques: cropping, rotation, stretching, etc. Luckily, Keras has handy functions for applying these techniques to our mountain and forest example; you should be able to simply modify our image generator for the problem. Check out these resources to help you get started: \n\n1. [Keras `ImageGenerator` Class](https://keras.io/preprocessing/image/#imagedatagenerator-class)\n2. 
[Building a powerful image classifier with very little data](https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html)\n ", "_____no_output_____" ] ], [ [ "train_image_generator = ImageDataGenerator(\n rescale=1./255,\n rotation_range=20,\n width_shift_range=0.1,\n height_shift_range=0.1,\n shear_range=0.1,\n zoom_range=0.1,\n horizontal_flip=True) # Generator for our training data\n\nvalidation_image_generator = ImageDataGenerator(\n rescale=1./255,\n rotation_range=20,\n width_shift_range=0.1,\n height_shift_range=0.1,\n shear_range=0.1,\n zoom_range=0.1,\n horizontal_flip=True) # Generator for our validation data\n\ntrain_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,\n directory=train_dir,\n shuffle=True,\n target_size=(IMG_HEIGHT, IMG_WIDTH),\n class_mode='binary')\n\nval_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,\n directory=validation_dir,\n target_size=(IMG_HEIGHT, IMG_WIDTH),\n class_mode='binary')\n\n\nhistory = my_model.fit(\n train_data_gen,\n steps_per_epoch=total_train // batch_size,\n epochs=epochs,\n validation_data=val_data_gen,\n validation_steps=total_val // batch_size\n)", "Found 533 images belonging to 2 classes.\nFound 195 images belonging to 2 classes.\nEpoch 1/50\n32/32 [==============================] - 9s 277ms/step - loss: 0.2894 - accuracy: 0.9002 - val_loss: 0.1725 - val_accuracy: 0.9375\nEpoch 2/50\n32/32 [==============================] - 9s 268ms/step - loss: 0.1700 - accuracy: 0.9142 - val_loss: 0.2902 - val_accuracy: 0.8864\nEpoch 3/50\n32/32 [==============================] - 9s 267ms/step - loss: 0.1433 - accuracy: 0.9501 - val_loss: 0.6274 - val_accuracy: 0.7784\nEpoch 4/50\n32/32 [==============================] - 9s 273ms/step - loss: 0.2231 - accuracy: 0.9102 - val_loss: 0.1382 - val_accuracy: 0.9659\nEpoch 5/50\n32/32 [==============================] - 9s 268ms/step - loss: 0.2036 - accuracy: 0.9202 - val_loss: 0.2110 - val_accuracy: 0.8807\nEpoch 6/50\n32/32 [==============================] - 9s 268ms/step - loss: 0.1298 - accuracy: 0.9361 - val_loss: 0.1668 - val_accuracy: 0.9432\nEpoch 7/50\n32/32 [==============================] - 8s 265ms/step - loss: 0.1458 - accuracy: 0.9421 - val_loss: 0.5174 - val_accuracy: 0.8068\nEpoch 8/50\n32/32 [==============================] - 8s 262ms/step - loss: 0.1537 - accuracy: 0.9381 - val_loss: 0.2335 - val_accuracy: 0.8864\nEpoch 9/50\n32/32 [==============================] - 9s 267ms/step - loss: 0.1205 - accuracy: 0.9601 - val_loss: 0.1507 - val_accuracy: 0.9432\nEpoch 10/50\n32/32 [==============================] - 9s 266ms/step - loss: 0.1169 - accuracy: 0.9541 - val_loss: 0.2245 - val_accuracy: 0.9034\nEpoch 11/50\n32/32 [==============================] - 9s 267ms/step - loss: 0.0935 - accuracy: 0.9648 - val_loss: 0.3275 - val_accuracy: 0.8920\nEpoch 12/50\n32/32 [==============================] - 8s 265ms/step - loss: 0.0908 - accuracy: 0.9701 - val_loss: 0.2164 - val_accuracy: 0.9148\nEpoch 13/50\n32/32 [==============================] - 8s 261ms/step - loss: 0.1218 - accuracy: 0.9481 - val_loss: 0.2367 - val_accuracy: 0.9148\nEpoch 14/50\n32/32 [==============================] - 8s 262ms/step - loss: 0.1430 - accuracy: 0.9561 - val_loss: 0.2271 - val_accuracy: 0.8977\nEpoch 15/50\n32/32 [==============================] - 8s 265ms/step - loss: 0.1089 - accuracy: 0.9601 - val_loss: 0.1869 - val_accuracy: 0.9318\nEpoch 16/50\n32/32 [==============================] - 9s 269ms/step - loss: 0.0982 - 
accuracy: 0.9581 - val_loss: 0.6390 - val_accuracy: 0.8239\nEpoch 17/50\n32/32 [==============================] - 8s 261ms/step - loss: 0.1338 - accuracy: 0.9581 - val_loss: 0.1874 - val_accuracy: 0.9205\nEpoch 18/50\n32/32 [==============================] - 8s 260ms/step - loss: 0.0936 - accuracy: 0.9581 - val_loss: 0.1512 - val_accuracy: 0.9375\nEpoch 19/50\n32/32 [==============================] - 8s 261ms/step - loss: 0.0913 - accuracy: 0.9629 - val_loss: 0.2708 - val_accuracy: 0.8977\nEpoch 20/50\n32/32 [==============================] - 8s 258ms/step - loss: 0.0872 - accuracy: 0.9641 - val_loss: 0.1851 - val_accuracy: 0.9261\nEpoch 21/50\n32/32 [==============================] - 8s 256ms/step - loss: 0.1830 - accuracy: 0.9421 - val_loss: 0.1755 - val_accuracy: 0.9318\nEpoch 22/50\n32/32 [==============================] - 8s 258ms/step - loss: 0.1301 - accuracy: 0.9551 - val_loss: 0.1323 - val_accuracy: 0.9432\nEpoch 23/50\n32/32 [==============================] - 8s 258ms/step - loss: 0.1025 - accuracy: 0.9621 - val_loss: 0.2870 - val_accuracy: 0.8977\nEpoch 24/50\n32/32 [==============================] - 8s 262ms/step - loss: 0.1008 - accuracy: 0.9661 - val_loss: 0.2817 - val_accuracy: 0.8693\nEpoch 25/50\n32/32 [==============================] - 8s 259ms/step - loss: 0.0768 - accuracy: 0.9641 - val_loss: 0.2083 - val_accuracy: 0.9261\nEpoch 26/50\n32/32 [==============================] - 8s 256ms/step - loss: 0.0542 - accuracy: 0.9800 - val_loss: 0.1312 - val_accuracy: 0.9375\nEpoch 27/50\n32/32 [==============================] - 8s 256ms/step - loss: 0.0693 - accuracy: 0.9741 - val_loss: 0.2582 - val_accuracy: 0.9205\nEpoch 28/50\n32/32 [==============================] - 8s 253ms/step - loss: 0.0770 - accuracy: 0.9661 - val_loss: 0.1196 - val_accuracy: 0.9545\nEpoch 29/50\n32/32 [==============================] - 8s 256ms/step - loss: 0.0904 - accuracy: 0.9581 - val_loss: 0.1322 - val_accuracy: 0.9375\nEpoch 30/50\n32/32 [==============================] - 8s 257ms/step - loss: 0.0630 - accuracy: 0.9741 - val_loss: 0.1767 - val_accuracy: 0.9432\nEpoch 31/50\n32/32 [==============================] - 8s 258ms/step - loss: 0.0741 - accuracy: 0.9760 - val_loss: 0.1981 - val_accuracy: 0.9375\nEpoch 32/50\n32/32 [==============================] - 8s 261ms/step - loss: 0.0545 - accuracy: 0.9741 - val_loss: 0.2579 - val_accuracy: 0.9261\nEpoch 33/50\n32/32 [==============================] - 8s 262ms/step - loss: 0.0445 - accuracy: 0.9820 - val_loss: 0.3489 - val_accuracy: 0.9034\nEpoch 34/50\n32/32 [==============================] - 8s 263ms/step - loss: 0.0961 - accuracy: 0.9641 - val_loss: 0.5279 - val_accuracy: 0.8864\nEpoch 35/50\n32/32 [==============================] - 8s 256ms/step - loss: 0.1001 - accuracy: 0.9641 - val_loss: 0.1302 - val_accuracy: 0.9545\nEpoch 36/50\n32/32 [==============================] - 8s 258ms/step - loss: 0.0617 - accuracy: 0.9727 - val_loss: 0.1394 - val_accuracy: 0.9489\nEpoch 37/50\n32/32 [==============================] - 8s 258ms/step - loss: 0.0938 - accuracy: 0.9621 - val_loss: 0.1645 - val_accuracy: 0.9375\nEpoch 38/50\n32/32 [==============================] - 8s 262ms/step - loss: 0.1243 - accuracy: 0.9541 - val_loss: 0.1880 - val_accuracy: 0.9489\nEpoch 39/50\n32/32 [==============================] - 9s 271ms/step - loss: 0.1107 - accuracy: 0.9681 - val_loss: 0.2409 - val_accuracy: 0.9318\nEpoch 40/50\n32/32 [==============================] - 9s 274ms/step - loss: 0.0573 - accuracy: 0.9800 - val_loss: 0.0965 - val_accuracy: 0.9659\nEpoch 
41/50\n32/32 [==============================] - 8s 255ms/step - loss: 0.0425 - accuracy: 0.9840 - val_loss: 0.1050 - val_accuracy: 0.9602\nEpoch 42/50\n32/32 [==============================] - 8s 253ms/step - loss: 0.0280 - accuracy: 0.9940 - val_loss: 0.1423 - val_accuracy: 0.9432\nEpoch 43/50\n32/32 [==============================] - 8s 253ms/step - loss: 0.0663 - accuracy: 0.9780 - val_loss: 0.3572 - val_accuracy: 0.9148\nEpoch 44/50\n32/32 [==============================] - 8s 254ms/step - loss: 0.0862 - accuracy: 0.9601 - val_loss: 0.1892 - val_accuracy: 0.9318\nEpoch 45/50\n32/32 [==============================] - 8s 254ms/step - loss: 0.0623 - accuracy: 0.9701 - val_loss: 0.2642 - val_accuracy: 0.9375\nEpoch 46/50\n32/32 [==============================] - 8s 254ms/step - loss: 0.0676 - accuracy: 0.9641 - val_loss: 0.3372 - val_accuracy: 0.8977\nEpoch 47/50\n32/32 [==============================] - 8s 254ms/step - loss: 0.0434 - accuracy: 0.9880 - val_loss: 0.1997 - val_accuracy: 0.9318\nEpoch 48/50\n32/32 [==============================] - 8s 252ms/step - loss: 0.0496 - accuracy: 0.9824 - val_loss: 0.1488 - val_accuracy: 0.9545\nEpoch 49/50\n32/32 [==============================] - 8s 251ms/step - loss: 0.0289 - accuracy: 0.9880 - val_loss: 0.2369 - val_accuracy: 0.9318\nEpoch 50/50\n32/32 [==============================] - 8s 253ms/step - loss: 0.0393 - accuracy: 0.9880 - val_loss: 0.3412 - val_accuracy: 0.9091\n" ] ], [ [ "# Resources and Stretch Goals\n\nStretch goals\n- Enhance your code to use classes/functions and accept terms to search and classes to look for in recognizing the downloaded images (e.g. download images of parties, recognize all that contain balloons)\n- Check out [other available pretrained networks](https://tfhub.dev), try some and compare\n- Image recognition/classification is somewhat solved, but *relationships* between entities and describing an image is not - check out some of the extended resources (e.g. [Visual Genome](https://visualgenome.org/)) on the topic\n- Transfer learning - using images you source yourself, [retrain a classifier](https://www.tensorflow.org/hub/tutorials/image_retraining) with a new category\n- (Not CNN related) Use [piexif](https://pypi.org/project/piexif/) to check out the metadata of images passed in to your system - see if they're from a national park! (Note - many images lack GPS metadata, so this won't work in most cases, but still cool)\n\nResources\n- [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) - influential paper (introduced ResNet)\n- [YOLO: Real-Time Object Detection](https://pjreddie.com/darknet/yolo/) - an influential convolution based object detection system, focused on inference speed (for applications to e.g. self driving vehicles)\n- [R-CNN, Fast R-CNN, Faster R-CNN, YOLO](https://towardsdatascience.com/r-cnn-fast-r-cnn-faster-r-cnn-yolo-object-detection-algorithms-36d53571365e) - comparison of object detection systems\n- [Common Objects in Context](http://cocodataset.org/) - a large-scale object detection, segmentation, and captioning dataset\n- [Visual Genome](https://visualgenome.org/) - a dataset, a knowledge base, an ongoing effort to connect structured image concepts to language", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
4a398fa5e89e45ff5a213dad8cb05a46d4de909c
3,750
ipynb
Jupyter Notebook
03_8_One-hot encoding(Logistic).ipynb
dongwook412/NorthKoreaReactionInRussia
4145c51576b31b4075e230752a4da7c2106521b5
[ "MIT" ]
2
2018-11-03T12:29:09.000Z
2021-05-20T09:52:58.000Z
03_8_One-hot encoding(Logistic).ipynb
dongwook412/NorthKoreaReactionInRussia
4145c51576b31b4075e230752a4da7c2106521b5
[ "MIT" ]
null
null
null
03_8_One-hot encoding(Logistic).ipynb
dongwook412/NorthKoreaReactionInRussia
4145c51576b31b4075e230752a4da7c2106521b5
[ "MIT" ]
null
null
null
22.45509
106
0.503467
[ [ [ "# Model", "_____no_output_____" ] ], [ [ "f = open('/home/ydw/capston/python/data/sum(fromPH.D+twitter)/text.txt', 'r', encoding='utf-8')\ntext = f.read().splitlines()\nf.close() \nf = open('/home/ydw/capston/python/data/sum(fromPH.D+twitter)/result.txt', 'r', encoding='utf-8')\ny = f.read().splitlines()\nf.close() ", "_____no_output_____" ], [ "#One-hot encoding Model\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.linear_model import LogisticRegression\n\ntext_train, text_test, y_train, y_test = train_test_split(text, y, test_size=0.2, random_state=1)\npipe_b = make_pipeline(CountVectorizer(ngram_range=(1,3), min_df=0), LogisticRegression())\npipe_b.fit(text_train, y_train)\n\nprint(\"train score: {:.2f}\".format(pipe_b.score(text_train, y_train)))\nprint(\"test score: {:.2f}\".format(pipe_b.score(text_test, y_test)))", "train score: 0.99\ntest score: 0.63\n" ] ], [ [ "# Test(expert 100)", "_____no_output_____" ] ], [ [ "neu = '2'\npos = '0'\nneg = '1'", "_____no_output_____" ], [ "#expert 100\nf = open('/home/ydw/capston/python/data/test/expert/expert_100.txt', 'r', encoding='utf-8')\ntext = f.read().splitlines()\nf.close() \n\n#expert 100\nf = open('/home/ydw/capston/expert100/y.txt', 'r', encoding='utf-8')\nry = f.read().splitlines()\nf.close() \n\ny = []\nfor i in ry:\n if(i == 'l'):\n y.append(neu)\n elif(i == 'p'):\n y.append(pos)\n elif(i == 'n'):\n y.append(neg)\n else:\n y.append('-1')", "_____no_output_____" ], [ "#word2vec\npre_y = []\nfor doc in text:\n rdoc = [doc]\n pre_y.append(pipe_b.predict(rdoc)[0])", "_____no_output_____" ], [ "count = 0\nfor i in range(len(y)):\n if(y[i] == pre_y[i]):\n count = count+1\ncount/len(y)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
4a39989fc60227f48c515694be085365b427b213
1,947
ipynb
Jupyter Notebook
25_11_20.ipynb
Ed-10/Daa_2021_1
6e0bd414851b24cc1ec371aaebee1ed32d94ed48
[ "MIT" ]
null
null
null
25_11_20.ipynb
Ed-10/Daa_2021_1
6e0bd414851b24cc1ec371aaebee1ed32d94ed48
[ "MIT" ]
null
null
null
25_11_20.ipynb
Ed-10/Daa_2021_1
6e0bd414851b24cc1ec371aaebee1ed32d94ed48
[ "MIT" ]
null
null
null
36.055556
308
0.535182
[ [ [ "<a href=\"https://colab.research.google.com/github/Ed-10/Daa_2021_1/blob/master/25_11_20.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "frase=\"\"\"El lema que anima a la Universidad Nacional, Por mi raza hablará el espíritu, revela la vocación humanística con la que fue concebida. El autor de esta célebre frase, José Vasconcelos, asumió la rectoría en 1920, en una época en que las esperanzas de la Revolución aún estaban vivas,\n había una gran fe en la Patria y el ánimo redentor se extendía en el ambiente.\"\"\"\n\nfrase= frase.strip().replace(\"\\n\",\"\").replace(\",\",\"\").replace(\".\",\"\").lower().split(\" \")\nprint(frase)\nfrecuencias = {}\nfor index in range(len(frase)):\n if frase[index] in frecuencias:\n pass\n else:\n frecuencias[frase[index]] = 1 \n for pivote in range(index+1,len(frase),1):\n #print(frase[index],\"comparada contra:\", frase[pivote])\n if frases[ubdex] == frase [pivote]:\n frecuencas [frase[index]] +=1\nprint(frecuencias)", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code" ] ]
4a39a29b82c27afc60b2e0e20b86bdff9d186530
6,987
ipynb
Jupyter Notebook
Array/0924/918. Maximum Sum Circular Subarray.ipynb
YuHe0108/Leetcode
90d904dde125dd35ee256a7f383961786f1ada5d
[ "Apache-2.0" ]
1
2020-08-05T11:47:47.000Z
2020-08-05T11:47:47.000Z
Array/0924/918. Maximum Sum Circular Subarray.ipynb
YuHe0108/LeetCode
b9e5de69b4e4d794aff89497624f558343e362ad
[ "Apache-2.0" ]
null
null
null
Array/0924/918. Maximum Sum Circular Subarray.ipynb
YuHe0108/LeetCode
b9e5de69b4e4d794aff89497624f558343e362ad
[ "Apache-2.0" ]
null
null
null
27.72619
100
0.470016
[ [ [ "说明:\n 给定一个用A表示的整数的圆形数组C,找到C的一个非空子数组的最大可能和。\n 这里,圆形数组表示该数组的末尾连接到该数组的开头。 \n (通常,当0 <= i <A.length时,C [i] = A [i];当i> = 0时,C [i A.length] = C [i]。)\n 此外,子数组只能包含每个固定缓冲区A的元素最多一次。 \n (形式上,对于子数组C [i],C [i+1],...,C [j],不存在i <= k1,k2 <= j,k1%A.length = k2%A.长度。)\n\nExample 1:\n Input: [1,-2,3,-2]\n Output: 3\n Explanation: Subarray [3] has maximum sum 3\n\nExample 2:\n Input: [5,-3,5]\n Output: 10\n Explanation: Subarray [5,5] has maximum sum 5 + 5 = 10\n\nExample 3:\n Input: [3,-1,2,-1]\n Output: 4\n Explanation: Subarray [2,-1,3] has maximum sum 2 + (-1) + 3 = 4\n\nExample 4:\n Input: [3,-2,2,-3]\n Output: 3\n Explanation: Subarray [3] and [3,-2,2] both have maximum sum 3\n\nExample 5:\n Input: [-2,-3,-1]\n Output: -1\n Explanation: Subarray [-1] has maximum sum -1\n\nNote:\n 1、-30000 <= A[i] <= 30000\n 2、1 <= A.length <= 30000", "_____no_output_____" ] ], [ [ "### 当涉及圆形子数组时,有两种情况。\n 1、情况1:没有交叉边界的最大子数组总和\n 2、情况2:具有交叉边界的最大子数组总和\n 写下一些小写的案例,并考虑案例2的一般模式。\n 记住为输入数组中的所有元素都为负数做一个角点案例句柄。\n<img src='https://assets.leetcode.com/users/brianchiang_tw/image_1589539736.png'>", "_____no_output_____" ] ], [ [ "class Solution:\n def maxSubarraySumCircular(self, A) -> int:\n array_sum = 0\n \n local_min_sum, global_min_sum = 0, float('inf')\n local_max_sum, global_max_sum = 0, float('-inf')\n \n for num in A:\n local_min_sum = min(local_min_sum + num, num)\n global_min_sum = min(global_min_sum, local_min_sum)\n \n local_max_sum = max(local_max_sum + num, num)\n global_max_sum = max(global_max_sum, local_max_sum)\n \n array_sum += num\n \n if global_max_sum > 0:\n return max(array_sum - global_min_sum, global_max_sum)\n return global_max_sum", "_____no_output_____" ], [ "class Solution:\n def maxSubarraySumCircular(self, A) -> int:\n min_sum = min_glo_sum = max_sum = max_glo_sum = A[0]\n for a in A[1:]:\n min_sum = min(a, a + min_sum)\n min_glo_sum = min(min_sum, min_glo_sum)\n \n max_sum = max(a, a + max_sum)\n max_glo_sum = max(max_sum, max_glo_sum)\n if sum(A) == min_glo_sum:\n return max_glo_sum\n return max(max_glo_sum, sum(A) - min_glo_sum)", "_____no_output_____" ], [ "class Solution:\n def maxSubarraySumCircular(self, A) -> int:\n \n array_sum = 0\n \n local_min_sum, global_min_sum = 0, float('inf')\n local_max_sum, global_max_sum = 0, float('-inf')\n \n for number in A:\n \n local_min_sum = min( local_min_sum + number, number )\n global_min_sum = min( global_min_sum, local_min_sum )\n \n local_max_sum = max( local_max_sum + number, number )\n global_max_sum = max( global_max_sum, local_max_sum )\n \n array_sum += number\n \n \n \n # global_max_sum denotes the maximum subarray sum without crossing boundary\n # arry_sum - global_min_sum denotes the maximum subarray sum with crossing boundary\n \n if global_max_sum > 0:\n return max( array_sum - global_min_sum, global_max_sum )\n else:\n # corner case handle for all number are negative\n return global_max_sum", "_____no_output_____" ], [ "solution = Solution()\nsolution.maxSubarraySumCircular([3,1,3,2,6])", "_____no_output_____" ], [ "# 时间复杂度较高\nclass Solution:\n def maxSubarraySumCircular(self, A) -> int:\n res = -float('inf')\n for i in range(len(A)):\n temp_sum = A[i]\n temp_max = A[i]\n for j in range(i+1, len(A) * 2):\n j %= len(A)\n if j == i:\n break\n temp_sum += A[j]\n temp_max = max(temp_max, temp_sum)\n res = max(temp_max, res, A[i])\n return res", "_____no_output_____" ], [ "from collections import Counter\n\n# 时间复杂度较高\nclass Solution:\n def maxSubarraySumCircular(self, A) -> int:\n h = Counter(A)\n ", "_____no_output_____" ], [ "solution 
= Solution()\nsolution.maxSubarraySumCircular([3,1,3,2,6])", "_____no_output_____" ] ] ]
[ "raw", "markdown", "code" ]
[ [ "raw" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ] ]
4a39c241227bc66c8b6736ea474db26a6ce311c4
58,928
ipynb
Jupyter Notebook
Basicos.ipynb
msalazarcgeo/Curso_intro_Prog_Ana_Dat
123b2a9e1fe4a60a56efaf4b6c1d64893f3f440b
[ "Apache-2.0" ]
null
null
null
Basicos.ipynb
msalazarcgeo/Curso_intro_Prog_Ana_Dat
123b2a9e1fe4a60a56efaf4b6c1d64893f3f440b
[ "Apache-2.0" ]
null
null
null
Basicos.ipynb
msalazarcgeo/Curso_intro_Prog_Ana_Dat
123b2a9e1fe4a60a56efaf4b6c1d64893f3f440b
[ "Apache-2.0" ]
null
null
null
23.505385
788
0.524708
[ [ [ "# Básico de Python\n\nEsta sección, esta pensada para ser una breve introducción al lenguaje de programación *Python* con la intención de conocer los comandos básicos para hacer uso de sus estructuras de datos y las herramientas necesarias que se utilizaran durante el curso. No concideramos que sea un curso formal de programación, pues se pensó para que todos los comando de *Python* puedan ser ejecutados dentro de un notebook utilizando *Jupyter-lab* para tener una familiaridad con el lenguaje y así tener el conocimiento necesario para generar código que sea de utilidad en el análisis de datos. Para iniciar una sesión dentro de *jupyter-lab* es necesario tenerlo instalado (se describe en [Instalación](faltalink)) y ejecutar `jupyter-lab` dentro del ambiente donde se realizó la instalación. \n\n## Cadenas de símbolos (*String*)\n\nEn todo curso que involucra algún lenguaje de programación empiezan con *Hello world*, **este no es un curso de progeamación** pero vamos hacer muchas cosas similares por lo que vamos ha intentarlo. Es decir lo primero que deseamos es poder imprimir cosas, para tal proposito *Python* tiene la función `print()` esta función nos permite imprimir dentro de la terminal lo que deseamos. \n", "_____no_output_____" ] ], [ [ "print('Hello world!!')", "Hello world!!\n" ] ], [ [ "Hemos impreso nuestra primera línea usando *Python*, se puede ver que la función print no imprime las comillas, ¿que pasa si las quitamos?", "_____no_output_____" ] ], [ [ "print(! Hola mundo !)", "_____no_output_____" ] ], [ [ "De aquí podemos hacer un par de observaciones, la función `print` acepta *strings* que son cadenas de símbolos. Para declarar que se empieza una cadena de símbolos, se hace utilizando `'` o bien `\"`, para declarar que hemos terminado la cadena la próxima `'` o `\"` (respectivamente) determina el fin. ", "_____no_output_____" ] ], [ [ "print('!Hola mundo !')", "!Hola mundo !\n" ] ], [ [ "Podemos hacer varias cosas con un *string* como es \"multiplicarlo\" ", "_____no_output_____" ] ], [ [ "print('Hola mundo '*2)", "Hola mundo Hola mundo \n" ], [ "print('Hola mundo '*4)", "Hola mundo Hola mundo Hola mundo Hola mundo \n" ] ], [ [ "\"sumarlo\" (juntarlo )", "_____no_output_____" ] ], [ [ "print('Hola mundo' + 'junto mal')", "Hola mundojunto mal\n" ], [ "print('Hola mundo' + ' ' + 'junto mal')", "Hola mundo junto mal\n" ] ], [ [ "Y otras operaciones, durante el curso veremos más operaciones que se puden hacer con las cadenas de símbolos. ", "_____no_output_____" ], [ "## Variables\n\nDentro de los lenguajes de programación es necesario dar nombres a las cosas, en *pyhton* esta asignación se hace usando el símbolo de =.\n", "_____no_output_____" ] ], [ [ "a= 2\na", "_____no_output_____" ] ], [ [ "La línea anterior asignó el valor 2 a la variable `a`. Como estamos dentro de un notebook dentro de *jupyter-lab* al llamar a la variable `a` imprime lo que se encuentra dentro de la variable. ", "_____no_output_____" ] ], [ [ "b= 5\na+b", "_____no_output_____" ] ], [ [ "Se pueden realizar operaciones con las variables, si se tienen definidas estas operaciones entre tipos los tipos definidos. Como es el caso de las *strings* y los enteros.", "_____no_output_____" ] ], [ [ "a = 'Hola mundo'\nb = 'Jupiter'\na+b\n", "_____no_output_____" ], [ "a[0:5]+b\n", "_____no_output_____" ] ], [ [ "A diferencia de otros lenguajes de programación como son *C++* o *Java*, en *Python* no es necesario declarar la variable ni determinar el tipo de variable. 
Usando la función `type` es posible ver que tipo de variable es. \n", "_____no_output_____" ] ], [ [ "a= 5\nb= 'Hola'", "_____no_output_____" ], [ "type(a)", "_____no_output_____" ], [ "type(b)", "_____no_output_____" ] ], [ [ "Es importante observar que las operaciones son de tipo específico. Por ejemplo el operando `+` aplicado a las variables de tipo *int* o *float* se aplica la función la suma, mientras que el caso de variables *string* funciona como la unión de las cadenas. ", "_____no_output_____" ] ], [ [ "a+b", "_____no_output_____" ] ], [ [ "Como se observa, el operando `+` no esta definido para usarse con *int* y *str*(*string*).\n\nExisten distintos tipos de variables como son:\n\n| Tipo | Descripción |\n|:-------|:-----------:|\n| *int*| Para guardar enteros ($\\mathbb{Z}$).|\n| *float*| Números de doble presición (ejemplo $2.34$).| \n| *bool*| Valores verdadero(`True`) o falso (`False`). |\n| *str* | Cadenas de símbolos con codificación (UTF-8).|\n| *bytes*| Código ASCII en bytes.|\n| *None* | Valor de *Python* para nulo.|", "_____no_output_____" ], [ "Los elementos en *Python* se pueden comparar, para saber si un elemento es igual a otro o si cumple con alguna condición específica. Las comparaciones entre los elementos de *Python* tienen que ser del mismo tipo, en algunos casos se permite hacer comparaciones entre tipos como *float* e *int*, sin embargo, lo que sucede en el fondo *Pyhton* cambia el tipo y se hace la comparacion.\n\nLas comparaciones se hacen usando `==` (iguales), `>` (mayor que), `<`(menor que), `>= ` (mayor o igual que), `<=` (menor o igual que). El resultado de las comparaciones es siempre un *booleano* (tipo `bool`).\n\n", "_____no_output_____" ] ], [ [ "4==5", "_____no_output_____" ], [ "4>=5", "_____no_output_____" ], [ "'b' < 'b'", "_____no_output_____" ], [ "'acs'=='acs'", "_____no_output_____" ], [ "4.5>4", "_____no_output_____" ], [ "4 > 'a'", "_____no_output_____" ] ], [ [ "Utilizando estas comparaciones es posible hacer algebra booleana utilizando los operadores lógicos implementados para ello, \n\n\n|Operador| Descripción |Ejemplo|\n|---------|-----------|-------|\n|`and` | Si los dos operandos son `True` entonces la condición se vuelve `True`| `a and b`|\n| `or` | Si alguno de los operandos es `True` entonces la condición se vuelve `True`| `a or b`|\n| `not` | Se regresa el estado inverso del operando| `not a`|\n", "_____no_output_____" ] ], [ [ "4== 4 and 5<4", "_____no_output_____" ], [ "4== 4 or 5<4", "_____no_output_____" ], [ "not 4==4", "_____no_output_____" ] ], [ [ "# Listas\n\nLas listas en python son especialmente útiles, pues nos permiten tener una estructura para el manejo de los datos y estructuras, para declarar una lista se utilizan `[ ]` y separando por `,` sus elementos.", "_____no_output_____" ] ], [ [ "mi_lista= [1, 2, 3,4]\nmi_lista", "_____no_output_____" ] ], [ [ "Se le pueden añadir elementos a las listas utilizando el método `append` implementado detro de las lista, de la siguiente forma ", "_____no_output_____" ] ], [ [ "mi_lista.append(5)\nmi_lista", "_____no_output_____" ] ], [ [ "También se pueden remover elementos de la lista usando el método `pop`", "_____no_output_____" ] ], [ [ "a = mi_lista.pop()\nprint(a)\nmi_lista", "5\n" ] ], [ [ "Como se observa el método `pop` remueve el último elemento en la lista y lo asigna a la variable `a`, no es necesario asignar el valor a una variable para remover el último elemento.", "_____no_output_____" ] ], [ [ "mi_lista.pop()\nmi_lista", "_____no_output_____" ] ], [ [ "Para 
acceder a los elementos de una lista en python se hace a través de `[]`, python enumera los elementos de las listas a partir de 0 para el primer elemento y de forma asendente, también es posible acceder a los elementos usando enteros negativos. ", "_____no_output_____" ] ], [ [ "mi_lista[0]", "_____no_output_____" ], [ "mi_lista[2]", "_____no_output_____" ], [ "mi_lista[-1]", "_____no_output_____" ] ], [ [ "Es posible cambiar el valor de los elementos ", "_____no_output_____" ] ], [ [ "mi_lista[0]= 5\nmi_lista", "_____no_output_____" ] ], [ [ "Pero para aumentar el tamaño de las listas es necesario usar el método `append` u otro. ", "_____no_output_____" ] ], [ [ "mi_lista[3]= 5\nmi_lista", "_____no_output_____" ] ], [ [ "Para remover un elemento especifico utilizamos el método `remove`, el cual elimina la primera aparición de elemento que se desea remover.", "_____no_output_____" ] ], [ [ "mi_lista = [1,2,3, 4,2]\nmi_lista.remove(2)\nmi_lista", "_____no_output_____" ] ], [ [ "Es posible unir dos o más listas ", "_____no_output_____" ] ], [ [ "mi_lista = [1,2,3, 4]\nmi_lista_2= [3,4,5,6]\nmi_lista+mi_lista_2\n", "_____no_output_____" ], [ "mi_lista", "_____no_output_____" ] ], [ [ "Los elementos de las listas pueden ser de distintos tipos\n", "_____no_output_____" ] ], [ [ "mi_lista = [1,2,4,'Hola', 2.34, str(4.56)]\nmi_lista", "_____no_output_____" ] ], [ [ "En la lista generada en la linea anterior se hace notar la función *str()*, esta convierte lo que se encuentre dentro de los paréntesis a tipo *string*, siempre y cuando la conversión sea posible. Como ejemplo se transforma el número flotante a un *string*. Este tipo de funciones también existen para *int* , *bool* o *float*.", "_____no_output_____" ] ], [ [ "int(2.75)", "_____no_output_____" ], [ "float(3)", "_____no_output_____" ], [ "float('3')", "_____no_output_____" ], [ "bool(1)", "_____no_output_____" ] ], [ [ "A partir de las listas podemos tomar \"rebanadas\" de estas. Para tal propósito, en las listas utilizamos `:` para indicar los rangos que se desean, como se ve en los ejemplos a continuación\n\nTomemos los elementos de la posición 2 a la posición 5", "_____no_output_____" ] ], [ [ "mi_lista= [1,2,3,4,5,6]\nmi_lista[2:5]", "_____no_output_____" ] ], [ [ "De la posición 0 a la posición antes que 4 ", "_____no_output_____" ] ], [ [ "mi_lista[:4]", "_____no_output_____" ] ], [ [ "De la posición 3 hasta el último elemento", "_____no_output_____" ] ], [ [ "mi_lista[3:]", "_____no_output_____" ] ], [ [ "## Diccionarios \n\nLos diccionarios en *Python* nos ofrecen la posibilidad de hacer funciones entre conjuntos de forma directa, para la construcción de los diccionarios usamos `{ }` y `:`para determinar la regla de la relación entre los conjuntos. 
[ [ "## Dictionaries \n\nDictionaries in *Python* give us a direct way to build mappings between sets; to construct a dictionary we use `{ }` and `:` to state the rule relating keys to values. Let us look at the following example.", "_____no_output_____" ] ], [ [ "mi_dict= {'a': 'Hola', 'b': 'jupyter', 'c':'mundo'}", "_____no_output_____" ], [ "mi_dict['a']", "_____no_output_____" ], [ "mi_dict['b']", "_____no_output_____" ], [ "print(mi_dict['a'], mi_dict['c'])", "Hola mundo\n" ], [ "mi_dict['d']", "_____no_output_____" ] ], [ [ "The job of a dictionary is to return the value assigned to each element; the elements before the `:` are called *keys* and those after it are called *values*. Asking for a key that does not exist (like `'d'` above) raises a `KeyError`.", "_____no_output_____" ] ], [ [ "mi_dict.keys()", "_____no_output_____" ], [ "mi_dict.values()", "_____no_output_____" ] ], [ [ "If we want to add a key-value pair to the dictionary, we can do it directly:", "_____no_output_____" ] ], [ [ "mi_dict[3]= 'luna'", "_____no_output_____" ], [ "mi_dict[3]", "_____no_output_____" ], [ "mi_dict", "_____no_output_____" ] ], [ [ "To delete a key-value pair we use the `pop` method:", "_____no_output_____" ] ], [ [ "mi_dict.pop('b')\nmi_dict", "_____no_output_____" ] ], [ [ "If we want to add more than one element, or merge a whole dictionary into an existing one, we can use the `update` method with another dictionary.", "_____no_output_____" ] ], [ [ "mi_dict.update({'e': 'marte', 3:'casa', 2:'Hola'})", "_____no_output_____" ], [ "mi_dict", "_____no_output_____" ] ], [ [ "It is important to stress that a dictionary may contain keys with the same value, but never equal keys: ", "_____no_output_____" ] ], [ [ "mi_dic_2= {'a': 'Hola', 'b': 'mundo', 'a': 'jupyter'}\nmi_dic_2", "_____no_output_____" ] ], [ [ "In this case the dictionary assigns to the repeated key the last value given. Although Python lets us use many kinds of objects as keys (*keys*), it is preferable to use elements that are *hashable*, such as *int*, *float*, or *strings*; certain structures and values in Python are not of this kind. The idea is that such structures can be given a fixed \"name\" that identifies them; a more detailed explanation of *hashable* elements is outside the scope of this course.\n\n\n### Tuples \n\nAnother important structure in *Python* is the \"tuple\"; these objects let us store values such as *int*, *float*, or *strings* in a notation similar to a vector (like coordinates). A tuple is declared with `()`, separating the elements with `,`.", "_____no_output_____" ] ], [ [ "tu= (1,2,3,'a') \ntu", "_____no_output_____" ] ], [ [ "One important property that distinguishes *tuples* from lists is that tuples are immutable objects. That is, they cannot be modified once declared, which preserves the integrity of the data they contain. ", "_____no_output_____" ] ], [ [ "tu[0]", "_____no_output_____" ], [ "tu[0]= 5", "_____no_output_____" ] ], 
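[ [ "A common use of tuples (an illustrative addition to the notebook) is *unpacking*: assigning the elements of a tuple to several variables in a single statement.", "_____no_output_____" ] ], [ [ "# Illustrative example (added): tuple unpacking; tu is (1, 2, 3, 'a')\nx, y, z, w = tu\nprint(x, w)  # -> 1 a", "_____no_output_____" ] ], 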
", "_____no_output_____" ] ], [ [ "tu_2= (4,5,6,'b')", "_____no_output_____" ], [ "tu+tu_2", "_____no_output_____" ], [ "tu*2", "_____no_output_____" ] ], [ [ "### Conjuntos\n\nLos conjunto en *python* nos permiten almacenar conjuntos de distintos elementos, como su palabra lo dice, los conjuntos son conjunto en el sentido matemático, es decir, podemos hacer las operaciones usuales de conjuntos como son unión, intersección, diferencia (resta), diferencia simétrica, añadir elementos etc. Para declarar un conjunto se utiliza `{}` separando sus elementos por `,` ", "_____no_output_____" ] ], [ [ "conjunto_1= {1,2,3,4,5,6,7,8,7,6,5, '5', (2,4)}\nconjunto_1", "_____no_output_____" ] ], [ [ "Para añadir elementos al conjunto se puede hacer usando el método `add`", "_____no_output_____" ] ], [ [ "conjunto_1.add('li')\nconjunto_1", "_____no_output_____" ] ], [ [ "Si deseamos obtener el conjunto de una estructura que contiene datos como pueden ser listas, tuplas o diccionarios se puede hacer a través de la función `set`", "_____no_output_____" ] ], [ [ "con_lista= set(mi_lista)\ncon_lista", "_____no_output_____" ] ], [ [ "Las operaciones entre conjuntos se hacen a través de los métodos implementados en el objeto, utilizando como argumento otro conjunto.", "_____no_output_____" ] ], [ [ "conjunto_2= {'a', 'b', 'c', '2', 6,8, 15}", "_____no_output_____" ], [ "conjunto_1.union(conjunto_2)", "_____no_output_____" ], [ "conjunto_1.difference(conjunto_2)", "_____no_output_____" ], [ "conjunto_1.intersection( conjunto_2)", "_____no_output_____" ] ], [ [ "### Condicionales y ciclos\n\nYa hemos visto que se pueden comparar elementos en *Python*, esto nos permite diferenciar entre los distintos elementos, en cierta estructura como pueden ser diccionarios, listas o tuplas.\n\nLas estructuras de control son una de las principales herramientas en lenguajes de programación, estas nos permiten estableces condiciones para poder tomar desiciones. Las estructuras de control que usaremos con mayor frecuencia son `if` `else`, `for` , `while`. Estas estructuras se declaran usando la palabra seguida de una condición logica que se debe de cumplir y `:`, en la siguiente linea se indenta (con un número fijo de espacios o con *tab* ) y se escribe la linea de código a ejecutar. \n\n`if x == 4:\n print('Es cuatro')`\n \n#### if \n\nLa estructura más utilizada es la estructura `if else` en esta se da una condición lógica, si la condición se cumple entonces se ejecuta el código que se encuentra indentado, si no se cumple la condición puede continuar la ejecución del código donde no esta indentado. La estructura `if` permite dos condicionales más, `elif` y `else`. 
[ [ "### Conditionals and loops\n\nWe have already seen that elements can be compared in *Python*; this lets us distinguish between the different elements stored in a structure such as a dictionary, list, or tuple.\n\nControl structures are one of the main tools in programming languages: they let us state conditions in order to make decisions. The control structures we will use most often are `if` `else`, `for`, and `while`. These structures are declared with the keyword followed by the logical condition that must hold and a `:`; the next line is indented (by a fixed number of spaces or a *tab*) and contains the code to execute. \n\n`if x == 4:\n    print('Es cuatro')`\n \n#### if \n\nThe most frequently used structure is `if else`: a logical condition is given, and if it holds, the indented code is executed; if it does not hold, execution continues with the first non-indented line. The `if` structure allows two further clauses, `elif` and `else`: `elif` adds one more condition to the structure, and `else` runs when none of the conditions holds.", "_____no_output_____" ] ], [ [ "x= 4", "_____no_output_____" ], [ "if x == 4:\n    print('Es cuatro')", "Es cuatro\n" ], [ "x= 5", "_____no_output_____" ], [ "if x == 4:\n    print('Es cuatro')\nprint('Se termino la estructura')", "Se termino la estructura\n" ], [ "if x == 4:\n    print('Es cuatro')\nelse:\n    print('No es cuatro')\nprint('Se termino la estructura de control')", "No es cuatro\nSe termino la estructura de control\n" ], [ "x= 12\nif x == 4:\n    print('Es cuatro')\nelif x>4 and x < 10:\n    print('Es mayor que 4')\nelse:\n    print('No es cuatro y es mayor que 10')\nprint('Se termino la estructura de control')", "No es cuatro y es mayor que 10\nSe termino la estructura de control\n" ] ], [ [ "#### for \n\nThe `for` control structure is used to loop over the elements of a data structure; in Python the usual pattern is to apply some code to each element.\n\nA `for` loop is declared with the word `for`, the name of a variable that will refer to the current element inside the loop, the word `in`, and the structure we want to loop over, ending the line with `:`. As with `if`, the code to execute inside the loop goes on a new line, indented by a fixed number of spaces or tabs. The loop body ends where the code returns to the same indentation level as the word `for`. ", "_____no_output_____" ] ], [ [ "lista= [1,2,3,4,5]\nfor i in lista:\n    print('Número ', i*2)\nprint('Se acabo el ciclo')", "Número  2\nNúmero  4\nNúmero  6\nNúmero  8\nNúmero  10\nSe acabo el ciclo\n" ] ], [ [ "There are other structures implemented within *Python* that we can use to build loops (or for other purposes), such as `range`; this function can be used to generate the loop indices.", "_____no_output_____" ] ], [ [ "for j in range(3, 15):\n    print(j)", "3\n4\n5\n6\n7\n8\n9\n10\n11\n12\n13\n14\n" ] ], [ [ "#### while\n\nThe `while` structure is similar to the `for` control structure: it lets us execute the code inside a loop as long as a given condition holds. When the condition no longer holds, the loop is exited. The structure is declared with the word `while`, a condition that must hold (`True`) to enter the loop, and `:`; as in the previous control structures, the code to execute goes on a new line indented by a fixed number of spaces or tabs. The loop ends where a new line of code returns to the indentation level of the word `while`. \n`\nwhile x <= 3: \n    print(x , 'es menor o igual que 3')\n    x += 1\n`", "_____no_output_____" ] ], [ [ "x = 0\nwhile x <= 3: \n    print(x , 'es menor o igual que 3')\n    x += 1\nprint('termino el ciclo')", "0 es menor o igual que 3\n1 es menor o igual que 3\n2 es menor o igual que 3\n3 es menor o igual que 3\ntermino el ciclo\n" ] ], 
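[ [ "To make the comparison below concrete, here is an illustrative rewrite (our addition) of the previous `while` loop as a `for` loop over `range`; both print exactly the same lines.", "_____no_output_____" ] ], [ [ "# Illustrative example (added): the same loop written with for/range\nfor x in range(4):\n    print(x , 'es menor o igual que 3')", "_____no_output_____" ] ], 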
[ [ "The difference between `for` and `while` is that with `for` the number of iterations is known in advance, while a `while` loop keeps running for as long as its condition holds. For this reason, this control structure is not recommended when one starts programming, because it is easy to make mistakes and create infinite loops. When using this structure, make sure the loop condition cannot hold forever, unless an infinite loop is what you want. \n\n## Functions\n\nFunctions are blocks of code meant to be reused; in addition, they let us split the code into separate blocks, which helps the readability, comprehension, and cleanliness of the code. A function is declared with the word `def`, followed by the function's name, `()` enclosing the function's parameters, and a final `:`. On a new line, just as with control structures, the code that runs inside the function is indented by a fixed number of spaces or tabs; the function ends at a `return` statement, or where the indentation returns to the level of the word `def`.\n\n`\ndef imprime_saludo():\n    print('Que bueno que estoy aprendiendo Python')\n    return \n`\nThe code inside a function runs only when the function is called, which is done simply by writing the function's name with the arguments to be passed between parentheses.\n", "_____no_output_____" ] ], [ [ "def imprime_saludo():\n    print('Que bueno que estoy aprendiendo Python')\n    return \n", "_____no_output_____" ], [ "imprime_saludo()", "Que bueno que estoy aprendiendo Python\n" ] ], [ [ "For a function to take arguments, they are declared between the parentheses when the function is defined. Inside the function body they are referred to by the names declared between the parentheses. ", "_____no_output_____" ] ], [ [ "def hola_tu( nombre):\n    print('Hola ', nombre, ' que bueno estas aprendiendo Python')\n    return \n", "_____no_output_____" ], [ "hola_tu( 'Miguel')", "Hola  Miguel  que bueno estas aprendiendo Python\n" ] ], [ [ "Functions may (or may not) return objects, which are often *strings*, numbers, or far more complex things. For a function to return an object, the variable holding that object must be written after the word `return`. The object the function returns can then be assigned to a variable using the usual Python assignment, `=`. ", "_____no_output_____" ] ], [ [ "def hola_tu_2(nombre):\n    print('Hola ', nombre, ' que bueno estas aprendiendo Python')\n    ret = 'Te va ayudar mucho'\n    return ret", "_____no_output_____" ], [ "mi_nom= 'Miguel'\nque= hola_tu_2(mi_nom) ### Se llama a la función \nprint(que) ## Ver que hay en la variable", "Hola  Miguel  que bueno estas aprendiendo Python\nTe va ayudar mucho\n" ] ], 
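[ [ "Function parameters can also take default values (an illustrative addition to the notebook): if the caller omits the argument, the default is used instead.", "_____no_output_____" ] ], [ [ "# Illustrative example (added): a parameter with a default value\ndef hola_tu_3(nombre='amigo'):\n    print('Hola ', nombre, ' que bueno estas aprendiendo Python')\n\nhola_tu_3()          # uses the default: 'amigo'\nhola_tu_3('Miguel')  # overrides the default", "_____no_output_____" ] ], 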
", "_____no_output_____" ] ], [ [ "def hola_tu_2(nombre):\n \"\"\"Imprime el nombre con una frase y regresa un agradecimiento que es un string\"\"\"\n print('Hola ', nombre, ' que bueno estas aprendiendo Python')\n ret = 'Te va ayudar mucho'\n return ret", "_____no_output_____" ] ], [ [ "### Funciones implementadas en Python\n\nExisten muchas funciones implentadas en *Python* que nos ayudan a resolver tareas comunes necesarias a la hora de programar, el conocer estas funciones nos ayuda a que desarrollar de forma más rápida y que la ejecución sea más eficiente y veloz pues la funciones en la mayoria de los casos se encuentran optimizadas. \n\nEl enlistar todas las funciones que se encuentran implementadas dentro de *pyhton* se encuentran fuera del alcance de este curso, pero se considera enlistar las que se consideraron de mayor útilidad para el curso, a continuacion se dan ejemplos de algunas de estas funciones. \n\n#### Range\n\nLa función `range` nos permite definir un intervalo de números enteros con un mínimo, un máximo y el tamaño de incremento entre uno y otro. Se puede omitir el mínimo y el incremento, el comportamiento por defecto es tomar el mínimo como 0 y el incremento como 1", "_____no_output_____" ] ], [ [ "for i in range(5, 15, 3):\n print(i)", "5\n8\n11\n14\n" ] ], [ [ "#### enumerate\nLa función `enumerate` enumera sobre un objeto iterable y regresa una tupla, donde la primera entrada es el entero la posición en el objeto iterable y la segunda es el elemento iterable.\n", "_____no_output_____" ] ], [ [ "lista_1= ['diez', 'nueve', 'ocho', 'siete', 'seis', 'cinco', 'cuatro', 'tres', 'dos', 'uno']\nfor i in enumerate(lista_1):\n print (i)", "(0, 'diez')\n(1, 'nueve')\n(2, 'ocho')\n(3, 'siete')\n(4, 'seis')\n(5, 'cinco')\n(6, 'cuatro')\n(7, 'tres')\n(8, 'dos')\n(9, 'uno')\n" ] ], [ [ "#### Zip\nLa función `zip` nos permite fusionar dos objetos iterables en un conjunto de tuplas donde cada entrada de la tupla corresponde a un elemento de las listas", "_____no_output_____" ] ], [ [ "lista_1= ['diez', 'nueve', 'ocho', 'siete', 'seis', 'cinco', 'cuatro', 'tres', 'dos']\nlista_2 = list('abcdefghij')\nres =zip(lista_2, lista_1)\nfor j in res:\n print(j)", "('a', 'diez')\n('b', 'nueve')\n('c', 'ocho')\n('d', 'siete')\n('e', 'seis')\n('f', 'cinco')\n('g', 'cuatro')\n('h', 'tres')\n('i', 'dos')\n" ] ], [ [ "#### list\n`list` nos permite crear listas sobre objetos iterables, si no se pasa ningún argumento entonces `list` regresa una lista vacía.", "_____no_output_____" ] ], [ [ "list('1234567')", "_____no_output_____" ] ], [ [ "### list \nRegresa la lontgitud del objeto ", "_____no_output_____" ] ], [ [ "len(lista_1)", "_____no_output_____" ], [ "len(conjunto_1)", "_____no_output_____" ] ], [ [ "### Funciones cast\n\nLas funciones `int` , `float`, `str` y `bool` convierten a lo que se pasa como argumento en un entero, punto flotante, cadena de símbolos o un boleano respectivamente. 
[ [ "#### Cast functions\n\nThe functions `int`, `float`, `str`, and `bool` convert whatever is passed as the argument into an integer, a floating-point number, a string of symbols, or a boolean, respectively. \n\n\n#### max, min\nFrom an iterable object, return the maximum or the minimum, respectively.\n\n\n", "_____no_output_____" ] ], [ [ "max([1,4,6,7,3])", "_____no_output_____" ], [ "min({1,3,4,3,5,6,5,7})", "_____no_output_____" ], [ "max(list('abcdefghijk'))", "_____no_output_____" ] ], [ [ "#### set\nFrom an iterable object, return its set; if no argument is passed, return an empty set.", "_____no_output_____" ] ], [ [ "set(list('abcdesftredfredfre'))", "_____no_output_____" ] ], [ [ "#### abs\nReturn the absolute value of the argument.", "_____no_output_____" ] ], [ [ "abs(-5.8)", "_____no_output_____" ] ], [ [ "#### any \nFor an iterable object, return `True` if applying `bool` to any of its elements gives `True`.", "_____no_output_____" ] ], [ [ "any([0, None, False ])", "_____no_output_____" ], [ "any([0, None, False, 'a' ])", "_____no_output_____" ] ], [ [ "#### all \nReturn `True` if every element of an iterable object gives `True` when `bool` is applied.", "_____no_output_____" ] ], [ [ "all([0, None, False, 'a' ])", "_____no_output_____" ], [ "all([1,2,3,4,5,6])", "_____no_output_____" ] ], 
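[ [ "`any` and `all` combine naturally with comparisons over a whole collection (a final illustrative addition to the notebook): here we check whether all, or any, elements of a list are positive.", "_____no_output_____" ] ], [ [ "# Illustrative example (added): any/all over a comprehension\nnumeros = [1, 2, -3, 4]\nall(n > 0 for n in numeros)  # -> False, since -3 is not positive\n# any(n > 0 for n in numeros) -> True", "_____no_output_____" ] ] ]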
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
4a39c56e02b4d0d01819af6581afe63e4b78766f
526,646
ipynb
Jupyter Notebook
conv_net_text_classification/ConvNet character based.ipynb
robertjankowski/reproducing-dl-papers
01ad85eac333b87358b3d2e2276292333cacf0e0
[ "Apache-2.0" ]
2
2021-06-06T09:45:33.000Z
2021-06-07T20:00:33.000Z
conv_net_text_classification/ConvNet character based.ipynb
robertjankowski/reproducing-dl-papers
01ad85eac333b87358b3d2e2276292333cacf0e0
[ "Apache-2.0" ]
null
null
null
conv_net_text_classification/ConvNet character based.ipynb
robertjankowski/reproducing-dl-papers
01ad85eac333b87358b3d2e2276292333cacf0e0
[ "Apache-2.0" ]
2
2021-06-03T01:40:28.000Z
2021-06-07T06:56:18.000Z
605.34023
107,052
0.355937
[ [ [ "import tensorflow as tf\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom matplotlib import rc\nfrom IPython import display\n\nimport os\n\n\nrc('font',**{'family':'sans-serif','sans-serif':['Helvetica']})\nrc('text', usetex=True)", "_____no_output_____" ], [ "path = \"rt-polaritydata/rt-polaritydata/\"\n\npos_path = os.path.join(path, 'rt-polarity.pos')\nneg_path = os.path.join(path, 'rt-polarity.neg')\n\ndef load_review(path, is_pos=True):\n with open(path, encoding='latin-1') as f:\n review = pd.DataFrame({'review':f.read().splitlines()})\n review['sentiment'] = 1 if is_pos else 0\n return review\n\npos_review = load_review(pos_path, is_pos=True)\nneg_review = load_review(neg_path, is_pos=False)\n\n# display.display(pos_review.head(), neg_review.head())", "_____no_output_____" ], [ "all_reviews = pd.concat([pos_review, neg_review])\nall_reviews.head()", "_____no_output_____" ], [ "plt.hist(all_reviews.sentiment)\nplt.show()", "_____no_output_____" ], [ "all_reviews[\"review_splitted\"] = all_reviews.review.apply(lambda review: tf.keras.preprocessing.text.text_to_word_sequence(review))", "_____no_output_____" ], [ "import functools\nimport operator\n\ndef get_all_characters(df):\n chars = []\n for review in df.review_splitted:\n for word in review:\n chars.append(word)\n chars = functools.reduce(operator.iconcat, chars, [])\n return list(set(chars))", "_____no_output_____" ], [ "chars = get_all_characters(all_reviews)\n\nNUM_CHARS = len(chars)\nprint('Total number of characters: {}\\n{}'.format(NUM_CHARS, chars))", "Total number of characters: 60\n['5', 'p', 't', 'k', '4', 'x', 'â', '2', 'ü', \"'\", 'c', 'h', 'l', 'ê', 'r', '3', 'ô', 'j', '0', '\\x96', 'è', 'a', 'd', '6', 'û', 'æ', 'v', 'ã', 'y', '½', '1', 'f', 'é', 'b', '\\x91', 'ú', 'q', 'i', '\\x97', 'õ', 'á', 'w', 'n', 'u', 'í', '9', 's', 'z', 'e', 'g', 'ç', 'ó', 'ñ', 'ï', 'à', 'm', '7', 'ö', '8', 'o']\n" ], [ "char_to_num = {chars[i]: i for i in range(NUM_CHARS)}\nnum_to_char = {i: chars[i] for i in range(NUM_CHARS)}", "_____no_output_____" ] ], [ [ "Find the maximum length of review -- padding", "_____no_output_____" ] ], [ [ "def get_max_len(df):\n all_lenghts = []\n for review in df.review:\n all_lenghts.append(len(list(review)))\n return max(all_lenghts)", "_____no_output_____" ], [ "MAX_LEN_POS = get_max_len(pos_review)\nMAX_LEN_NEG = get_max_len(neg_review)\n\nMAX_LEN_POS, MAX_LEN_NEG", "_____no_output_____" ], [ "MAX_LEN = get_max_len(all_reviews)\nprint('Maximum length of review: {} (in characters)'.format(MAX_LEN))", "Maximum length of review: 269 (in characters)\n" ], [ "from stop_words import get_stop_words\n\ndef review_to_one_hot(char):\n one_hot = [0] * NUM_CHARS\n pos = char_to_num[char]\n one_hot[pos] = 1\n return one_hot\n\ndef process_review(review, pad=True, max_len=MAX_LEN):\n review = tf.keras.preprocessing.text.text_to_word_sequence(review)\n review = [word for word in review if word not in get_stop_words('english')]\n review = [list(s) for s in review] # to characters\n review = functools.reduce(operator.iconcat, review, [])\n review_one_hot = [review_to_one_hot(char) for char in review]\n if pad:\n # append 0 value padding\n while len(review_one_hot) < max_len:\n review_one_hot.append([0] * NUM_CHARS)\n review_one_hot = review_one_hot[:max_len] # trucate to max length\n return review_one_hot", "_____no_output_____" ], [ "def get_len_review(review):\n review = tf.keras.preprocessing.text.text_to_word_sequence(review)\n review = [word for word in review if word not in 
[ "def get_len_review(review):\n    \"\"\"Length (in characters) of a review after stop-word removal.\"\"\"\n    review = tf.keras.preprocessing.text.text_to_word_sequence(review)\n    review = [word for word in review if word not in get_stop_words('english')]\n    review = [list(s) for s in review] # to characters\n    review = functools.reduce(operator.iconcat, review, [])\n    return len(review)\n\nreviews_len = all_reviews.review.apply(get_len_review)", "_____no_output_____" ], [ "np.median(reviews_len)", "_____no_output_____" ], [ "plt.hist(reviews_len, bins=20, color=(2/255, 0, 247/255, 0.5))\nplt.vlines(np.median(reviews_len), 0, 1500)\n# plt.vlines(np.quantile(reviews_len, q=0.75), 0, 1500, color='red')\nplt.ylim([0, 1300])\n\nplt.xlabel('# characters')\nplt.ylabel('Count')\nplt.savefig('figures/cnn_character_matrix.pdf', bbox_inches='tight')\n# plt.show()", "_____no_output_____" ], [ "plt.figure(figsize=(6, 5))\nplt.subplot(1, 2, 1)\nposition = 180\ntitle = plt.title(neg_review.review.iloc[position])\nplt.setp(title, color='blue')\nplt.imshow([p for p in process_review(neg_review.review.iloc[position], max_len=100)], cmap='gray')\nplt.axis('off')\n\nplt.subplot(1, 2, 2)\nt1 = pos_review.review.iloc[position]\n# t2 is t1 manually line-wrapped so the title fits in the figure\nt2 = 'a droll , well-acted , character-driven \\ncomedy with unexpected deposits of feeling . '\ntitle = plt.title(t2, y=-0.15)\nplt.setp(title, color='red')\nplt.imshow([p for p in process_review(pos_review.review.iloc[position], max_len=100)], cmap='gray')\nplt.axis('off')\n\n\n# plt.savefig('cnn_character_example.pdf', bbox_inches='tight')\n# plt.show()", "_____no_output_____" ], [ "MAX_LEN_SEQ = 66  # the median review length, in characters\n\nprocessed_review = all_reviews.review.apply(lambda review: process_review(review, max_len=MAX_LEN_SEQ))\n\nX = processed_review.to_numpy().tolist()\ny = all_reviews.sentiment.values", "_____no_output_____" ] ], [ [ "from tensorflow.keras import backend as K\n\ndef f1(y_true, y_pred):\n    \"\"\"\n    Create F1 metric for Keras\n    From: https://stackoverflow.com/a/45305384/9511702\n    \"\"\"\n    def recall(y_true, y_pred):\n        tp = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))\n        possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))\n        recall = tp / (possible_positives + K.epsilon())\n        return recall\n    \n    def precision(y_true, y_pred):\n        tp = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))\n        predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))\n        precision = tp / (predicted_positives + K.epsilon())\n        return precision\n    \n    precision = precision(y_true, y_pred)\n    recall = recall(y_true, y_pred)\n    return 2 * ((precision * recall) / (precision + recall + K.epsilon()))\n\n\ndef build_model():\n    \"\"\"Small 2D CNN over the (MAX_LEN_SEQ x NUM_CHARS) one-hot character matrix.\"\"\"\n    model = tf.keras.Sequential([\n        tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(MAX_LEN_SEQ, NUM_CHARS, 1)),\n        tf.keras.layers.MaxPool2D((2, 2)),\n        tf.keras.layers.Dropout(0.25),\n        tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),\n        tf.keras.layers.MaxPool2D((2, 2)),\n        tf.keras.layers.Dropout(0.25),\n        tf.keras.layers.Flatten(),\n        tf.keras.layers.Dense(128, activation='relu'),\n        tf.keras.layers.Dropout(0.25),\n        tf.keras.layers.Dense(10, activation='relu'),\n        tf.keras.layers.Dense(1, activation='sigmoid')\n    ])\n    metrics = ['accuracy', tf.keras.metrics.AUC(), f1]\n    optimizer = tf.keras.optimizers.RMSprop(lr=0.001, rho=0.9, epsilon=1e-08, decay=0.0)\n    model.compile(optimizer=optimizer, loss='binary_crossentropy', metrics=metrics)\n    return model", "_____no_output_____" ], [ "early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_accuracy', patience=8)\nlearning_rate_reduction = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_accuracy', \n                                                               patience=4, \n                                                               verbose=1, \n                                                               factor=0.5, \n                                                               min_lr=0.00001)\n\ndef train(X_train, y_train, X_test, y_test, epochs=30, batch_size=64):\n    \"\"\"Train a fresh model; return its history, the model, and held-out metrics.\"\"\"\n    model = build_model()\n    history = model.fit(X_train, \n                        y_train, \n                        epochs=epochs, \n                        batch_size=batch_size,\n                        validation_data=(X_test, y_test),\n                        callbacks=[early_stopping, learning_rate_reduction],\n                        verbose=0)\n    test_results = model.evaluate(X_test, y_test, batch_size)\n    return history.history, model, test_results", "_____no_output_____" ], 
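[ "# Illustrative shape check (our addition, not part of the original run): the\n# network expects inputs of shape (batch, MAX_LEN_SEQ, NUM_CHARS, 1), and an\n# untrained forward pass returns one sigmoid probability per review.\ndemo = tf.convert_to_tensor([process_review('a droll , well-acted comedy', max_len=MAX_LEN_SEQ)], dtype=tf.float32)\ndemo = tf.reshape(demo, [1, MAX_LEN_SEQ, NUM_CHARS, 1])\nprint(build_model()(demo).shape)  # expected: (1, 1)", "_____no_output_____" ], 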
[ "from sklearn.model_selection import StratifiedKFold\n\ndef X_transform(X):\n    X = tf.convert_to_tensor(X)\n    X = tf.reshape(X, [X.shape[0], X.shape[1], X.shape[2], 1]) # one channel (black or white)\n    return X\n\ndef y_transform(y):\n    return tf.convert_to_tensor(y)\n\ndef cross_validate(X, y, split_size=3):\n    \"\"\"Stratified k-fold cross-validation; returns per-fold histories, models, and test metrics.\"\"\"\n    results = []\n    models = []\n    test_results = []\n    kf = StratifiedKFold(n_splits=split_size)\n    for train_idx, val_idx in kf.split(X, y):\n        X_train = X_transform(X[train_idx])\n        y_train = y_transform(y[train_idx])\n        X_test = X_transform(X[val_idx])\n        y_test = y_transform(y[val_idx])\n        result, model, test_result = train(X_train, y_train, X_test, y_test)\n        results.append(result)\n        models.append(model)\n        test_results.append(test_result)\n    return results, models, test_results", "_____no_output_____" ], [ "X_new = np.array(X)\ny_new = np.array(y)\n\nresults, models, test_results = cross_validate(X_new, y_new)", "\nEpoch 00011: ReduceLROnPlateau reducing learning rate to 0.0005000000237487257.\n\nEpoch 00015: ReduceLROnPlateau reducing learning rate to 0.0002500000118743628.\n3565/1 [==========
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
[Keras training progress bars truncated for readability; recoverable log output below]

 - 1s 156us/sample - loss: 0.8550 - accuracy: 0.5332 - auc_14: 0.5508 - f1: 0.3429

Epoch 00009: ReduceLROnPlateau reducing learning rate to 0.0005000000237487257.

Epoch 00013: ReduceLROnPlateau reducing learning rate to 0.0002500000118743628.
3565/1 [evaluation progress bar truncated]
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
(Keras progress bars trimmed for readability; only the informative log lines survive.)

[==============================] - 1s 167us/sample - loss: 0.8738 - accuracy: 0.5391 - auc_15: 0.5593 - f1: 0.3128

Epoch 00007: ReduceLROnPlateau reducing learning rate to 0.0005000000237487257.

Epoch 00015: ReduceLROnPlateau reducing learning rate to 0.0002500000118743628.

Epoch 00019: ReduceLROnPlateau reducing learning rate to 0.0001250000059371814.

3565/1 [==============================] (evaluation progress bar trimmed)
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
==============================] - 1s 154us/sample - loss: 0.9758 - accuracy: 0.5276 - auc_16: 0.5378 - f1: 0.3465\n" ], [ "test_results", "_____no_output_____" ], [ "def predict(model, review, max_len=MAX_LEN_SEQ, shape=(MAX_LEN_SEQ, NUM_CHARS, 1)):\n    input_ = [p for p in process_review(review, max_len=max_len)]\n    input_ = tf.cast(input_, tf.float32)\n    input_ = tf.reshape(input_, shape)\n    input_ = input_[np.newaxis, ...]\n    prediction = model.predict(input_)[0][0]\n    print(prediction)\n    if prediction > 0.5:\n        print('Positive review with probability: {:.2f}%'.format(prediction * 100))\n    else:\n        print('Negative review with probability: {:.2f}%'.format(100 - prediction * 100))", "_____no_output_____" ], [ "shape = (MAX_LEN_SEQ, NUM_CHARS, 1)\npredict(models[2], \"I really like this film, one of the best I've ever seen\", shape=shape)", "0.91704285\nPositive review with probability: 91.70%\n" ], [ "predict(models[2], 'I like this film and recommend to everyone.', shape=shape)", "0.8560918\nPositive review with probability: 85.61%\n" ], [ "predict(models[2], \"The movie was terrible, not worth watching once again\", shape=shape)", "0.9224865\nPositive review with probability: 92.25%\n" ], [ "for i, model in enumerate(models):\n    print(f\"\\nModel {i}: \\n\")\n    predict(model, \"I really like this film, one of the best I've ever seen\", shape=shape)\n    predict(model, 'I like this film and recommend to everyone.', shape=shape)\n    predict(model, 'Sometimes boring with a simple plot twist.', shape=shape)\n    predict(model, \"The movie was terrible, not worth watching once again\", shape=shape)", "\nModel 0: \n\n0.50744855\nPositive review with probability: 50.74%\n0.44882903\nNegative review with probability: 55.12%\n0.33789903\nNegative review with probability: 66.21%\n0.68163157\nPositive review with probability: 68.16%\n\nModel 1: \n\n0.14877345\nNegative review with probability: 85.12%\n0.338893\nNegative review with probability: 66.11%\n0.35493448\nNegative review with probability: 64.51%\n0.5286727\nPositive review with probability: 52.87%\n\nModel 2: \n\n0.91704285\nPositive review with probability: 91.70%\n0.8560918\nPositive review with probability: 85.61%\n0.049008247\nNegative review with probability: 95.10%\n0.9224865\nPositive review with probability: 92.25%\n" ], [ "def plot_result(i, result):\n    plt.figure(figsize=(20, 4))\n    plt.subplot(1, 4, 1)\n    plt.plot(result['loss'], label='train')\n    plt.plot(result['val_loss'], label='test')\n    plt.xlabel('epoch', fontsize=14)\n    plt.ylabel('loss', fontsize=14)\n    plt.suptitle(f'Model {i+1}', fontsize=15)\n    plt.legend(fontsize=13)\n    
#plt.tick_params(labelsize=14)\n \n auc_metrics = []\n for key, value in result.items():\n if 'auc' in key:\n auc_metrics.append(key)\n plt.subplot(1, 4, 2)\n plt.plot(result[auc_metrics[0]], label='train')\n plt.plot(result[auc_metrics[1]], label='test')\n plt.xlabel('epoch', fontsize=14)\n plt.ylabel('AUC', fontsize=14)\n plt.legend(fontsize=13)\n\n plt.subplot(1, 4, 3)\n plt.plot(result['f1'], label='train')\n plt.plot(result['val_f1'], label='test')\n plt.xlabel('epoch', fontsize=14)\n plt.ylabel(r'$F_1$', fontsize=14)\n plt.legend(fontsize=13)\n\n plt.subplot(1, 4, 4)\n plt.plot(result['accuracy'], label='train')\n plt.plot(result['val_accuracy'], label='test')\n plt.xlabel('epoch', fontsize=14)\n plt.ylabel('accuracy', fontsize=14)\n plt.legend(fontsize=13)\n\n plt.savefig(f'figures/cnn_character_training_{i+1}.pdf', bbox_inches='tight')\n #plt.show()", "_____no_output_____" ], [ "for i, r in enumerate(results):\n plot_result(i, r)", "_____no_output_____" ], [ "from tensorflow.keras.utils import model_to_dot\n\ndef save_model_architecture(filename):\n dot_model = model_to_dot(build_model(), show_shapes=True, show_layer_names=False)\n dot_model.write_pdf(filename)", "_____no_output_____" ], [ "save_model_architecture('figures/cnn_characters_model.pdf')", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a39cbdc8cfa149525e0a6036572396a35f9aee6
4,614
ipynb
Jupyter Notebook
notebooks/Proprietary/Machine_Learning/Non_Linear_Classifiers/Neural_Network.ipynb
k-zen/SigmaProject
b844766d28d142ed1fb4d2e20f4e9dbad0ad90a6
[ "BSD-2-Clause" ]
null
null
null
notebooks/Proprietary/Machine_Learning/Non_Linear_Classifiers/Neural_Network.ipynb
k-zen/SigmaProject
b844766d28d142ed1fb4d2e20f4e9dbad0ad90a6
[ "BSD-2-Clause" ]
8
2020-04-27T19:31:23.000Z
2021-08-06T19:43:46.000Z
notebooks/Proprietary/Machine_Learning/Non_Linear_Classifiers/Neural_Network.ipynb
k-zen/SigmaProject
b844766d28d142ed1fb4d2e20f4e9dbad0ad90a6
[ "BSD-2-Clause" ]
null
null
null
19.550847
121
0.494148
[ [ [ "<div style=\"text-align:right\">\n <b>Author:</b> Andreas P. Koenzen (akc at apkc.net) / <a href=\"http://www.apkc.net\">www.apkc.net</a>\n</div>", "_____no_output_____" ], [ "# Simple Neural Network", "_____no_output_____" ], [ "## Imports", "_____no_output_____" ] ], [ [ "%run '../../../../imports.py'", "_____no_output_____" ] ], [ [ "## Configuration", "_____no_output_____" ] ], [ [ "%run '../../../../config.py'", "_____no_output_____" ] ], [ [ "## Environment variables", "_____no_output_____" ] ], [ [ "%run '../../../../env_variables.py'", "_____no_output_____" ], [ "# Epochs\nEPOCHS = 20000", "_____no_output_____" ] ], [ [ "## Generate the data", "_____no_output_____" ] ], [ [ "data = data.Data()\ndata.generate_data(2)\n\nX_train, y_train = data.data", "_____no_output_____" ] ], [ [ "## Plot the data", "_____no_output_____" ] ], [ [ "plotting.Plotting().scatter(X_train, y_train)", "_____no_output_____" ] ], [ [ "## Linear classifier", "_____no_output_____" ] ], [ [ "lr = LogisticRegressionCV(cv=5)\n_ = lr.fit(X_train, y_train)", "_____no_output_____" ], [ "plot = plotting.Plotting()\n\nplot.decision_boundary(X_train, lambda x: lr.predict(x))\nplot.scatter(X_train, y_train)", "_____no_output_____" ] ], [ [ "## Neural network", "_____no_output_____" ] ], [ [ "net = network.Network()\n\nnet.add_layer(\n layer.Layer(2, # Input (Rows)\n 2, # Output (Columns)\n layer_type.LayerType.HIDDEN, \n activation_function.ActivationFunction(activation_function_type.ActivationFunctionType.TANH))\n)\nnet.add_layer(\n layer.Layer(2, # Input (Rows)\n 2, # Output (Columns)\n layer_type.LayerType.OUTPUT,\n activation_function.ActivationFunction(activation_function_type.ActivationFunctionType.SIGMOID))\n)", "_____no_output_____" ], [ "# Train the Network.\nloss = []\nfor i in range(0, EPOCHS):\n _ = net.forward(X_train)\n _ = net.backward(X_train, y_train)\n _ = loss.append(net.loss(y_train))", "_____no_output_____" ], [ "plot = plotting.Plotting()\nplot.loss(loss)", "_____no_output_____" ], [ "plot = plotting.Plotting()\nplot.decision_boundary(X_train, lambda x: net.predict(x))\nplot.scatter(X_train, y_train)", "_____no_output_____" ] ], [ [ "***\n# End", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ] ]
4a39cf012c92de4e9d5380981e16bb20dc2e94ac
43,911
ipynb
Jupyter Notebook
Analysis/Ablation Study/post_proc_var_1.ipynb
Yuexuan-Kong/covid_fake_news
f8ff94f5d5cb4601abd6caafbe331640b030770c
[ "MIT" ]
31
2021-02-26T20:28:48.000Z
2022-03-26T21:57:06.000Z
Analysis/Ablation Study/post_proc_var_1.ipynb
Yuexuan-Kong/covid_fake_news
f8ff94f5d5cb4601abd6caafbe331640b030770c
[ "MIT" ]
1
2021-07-04T04:01:11.000Z
2021-07-04T04:01:11.000Z
Analysis/Ablation Study/post_proc_var_1.ipynb
Yuexuan-Kong/covid_fake_news
f8ff94f5d5cb4601abd6caafbe331640b030770c
[ "MIT" ]
52
2021-02-27T19:18:40.000Z
2022-03-20T12:11:37.000Z
36.440664
138
0.388604
[ [ [ "import pandas as pd\nfrom sklearn.metrics import classification_report\n", "_____no_output_____" ], [ "!ls", "Constraint_Val.csv test_false_pred_final.csv\r\nEDA-Copy1.ipynb test_false_pred_final_1.csv\r\nEDA.ipynb test_false_pred_final_var_1.csv\r\nenglish_test_with_labels.csv test_false_pred_final_var_1_1.csv\r\npost_proc_final-Copy1.ipynb test_false_pred_var_1.csv\r\npost_proc_final-Copy2.ipynb test_false_pred_var_1_1.csv\r\npost_proc_final.ipynb test_false_pred_var_2.csv\r\npost_proc_final_1.ipynb test_false_pred_var_2_1.csv\r\npost_proc_final_var_1.1.ipynb val_false_pred_final.csv\r\npost_proc_final_var_1.ipynb val_false_pred_final_1.csv\r\npost_proc_var_1.1.ipynb val_false_pred_final_var_1.csv\r\npost_proc_var_1.ipynb val_false_pred_final_var_1_1.csv\r\npost_proc_var_2.1.ipynb val_false_pred_var_1.csv\r\npost_proc_var_2.ipynb val_false_pred_var_1_1.csv\r\npostproc_test.csv val_false_pred_var_2.csv\r\npostproc_train.csv val_false_pred_var_2_1.csv\r\npostproc_val.csv\r\n" ], [ "train = pd.read_csv('../Post Processing/data/postproc_train.csv')\nval = pd.read_csv('../Post Processing/data/postproc_val.csv')\ntest = pd.read_csv('../Post Processing/data/postproc_test.csv')\ntest_gt = pd.read_csv('../../data/english_test_with_labels.csv')\nval_gt = pd.read_csv('../../data/Constraint_Val.csv')", "_____no_output_____" ], [ "def post_proc(row):\n if (row['domain_real']>row['domain_fake']) & (row['domain_real']>0.88):\n return 0\n elif (row['domain_real']<row['domain_fake']) & (row['domain_fake']>0.88):\n return 1\n else:\n# if (row['username_real']>row['username_fake']) & (row['username_real']>0.88):\n# return 0\n# elif (row['username_real']<row['username_fake']) & (row['username_fake']>0.88):\n# return 1\n# else:\n if row['class1_pred']>row['class0_pred']:\n return 1\n elif row['class1_pred']<row['class0_pred']:\n return 0\n\ndef post_proc1(row):\n if row['class1_pred']>row['class0_pred']:\n return 1\n elif row['class1_pred']<row['class0_pred']:\n return 0\n", "_____no_output_____" ], [ "train['final_pred'] = train.apply(lambda x: post_proc(x), 1)", "_____no_output_____" ], [ "print(classification_report(train['label'], train['final_pred']))\n", " precision recall f1-score support\n\n 0 1.00 1.00 1.00 3360\n 1 1.00 1.00 1.00 3060\n\n accuracy 1.00 6420\n macro avg 1.00 1.00 1.00 6420\nweighted avg 1.00 1.00 1.00 6420\n\n" ], [ "val['final_pred'] = val.apply(lambda x: post_proc(x), 1)", "_____no_output_____" ], [ "print(classification_report(val['label'], val['final_pred']))\n", " precision recall f1-score support\n\n 0 0.99 1.00 0.99 1120\n 1 1.00 0.99 0.99 1020\n\n accuracy 0.99 2140\n macro avg 0.99 0.99 0.99 2140\nweighted avg 0.99 0.99 0.99 2140\n\n" ], [ "from sklearn.metrics import f1_score,accuracy_score,precision_score,recall_score\nprint('f1_score : ',f1_score(val['label'], val['final_pred'],average='micro'))\nprint('precision_score : ',precision_score(val['label'], val['final_pred'],average='micro'))\nprint('recall_score : ',recall_score(val['label'], val['final_pred'],average='micro'))", "f1_score : 0.991588785046729\nprecision_score : 0.991588785046729\nrecall_score : 0.991588785046729\n" ], [ "test['final_pred'] = test.apply(lambda x: post_proc(x), 1)", "_____no_output_____" ], [ "print(classification_report(test['label'], test['final_pred']))\n", " precision recall f1-score support\n\n 0 0.98 1.00 0.99 1120\n 1 1.00 0.98 0.99 1020\n\n accuracy 0.99 2140\n macro avg 0.99 0.99 0.99 2140\nweighted avg 0.99 0.99 0.99 2140\n\n" ], [ "from sklearn.metrics import 
f1_score,accuracy_score,precision_score,recall_score\nprint('f1_score : ',f1_score(test['label'], test['final_pred'],average='micro'))\nprint('precision_score : ',precision_score(test['label'], test['final_pred'],average='micro'))\nprint('recall_score : ',recall_score(test['label'], test['final_pred'],average='micro'))", "f1_score : 0.9878504672897196\nprecision_score : 0.9878504672897196\nrecall_score : 0.9878504672897196\n" ] ], [ [ "## Get False Pred samples", "_____no_output_____" ] ], [ [ "val_false_pred = val[val.final_pred!=val.label]", "_____no_output_____" ], [ "pd.merge(val_false_pred, val_gt, left_index=True, right_index=True)", "_____no_output_____" ], [ "pd.merge(val_false_pred, val_gt, left_index=True, right_index=True).to_csv('../Post Processing/results/val_false_pred_var_1.csv')", "_____no_output_____" ], [ "test_false_pred = test[test.final_pred!=test.label]", "_____no_output_____" ], [ "pd.merge(test_false_pred, test_gt, left_index=True, right_index=True)", "_____no_output_____" ], [ "pd.merge(test_false_pred, test_gt, left_index=True, right_index=True).to_csv('../Post Processing/results/test_false_pred_var_1.csv')", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ] ]
4a39db56c2745b54d64877d89af8d730fe67fe67
220,391
ipynb
Jupyter Notebook
Introduction to Computer_Vision/CNN_Layers and Feature Visulization/Feature viz for FashionMNIST.ipynb
sudoberlin/Computer_Vision_ND
6211d0a610a26f6ed54116127588adb6ff4b7ba9
[ "Apache-2.0" ]
1
2020-08-09T19:49:38.000Z
2020-08-09T19:49:38.000Z
Introduction to Computer_Vision/CNN_Layers and Feature Visulization/Feature viz for FashionMNIST.ipynb
sudoberlin/Computer_Vision_ND
6211d0a610a26f6ed54116127588adb6ff4b7ba9
[ "Apache-2.0" ]
null
null
null
Introduction to Computer_Vision/CNN_Layers and Feature Visulization/Feature viz for FashionMNIST.ipynb
sudoberlin/Computer_Vision_ND
6211d0a610a26f6ed54116127588adb6ff4b7ba9
[ "Apache-2.0" ]
null
null
null
455.353306
70,660
0.934476
[ [ [ "# Visualizing CNN Layers\n---\nIn this notebook, we load a trained CNN (from a solution to FashionMNIST) and implement several feature visualization techniques to see what features this network has learned to extract.", "_____no_output_____" ], [ "### Load the [data](http://pytorch.org/docs/stable/torchvision/datasets.html)\n\nIn this cell, we load in just the **test** dataset from the FashionMNIST class.", "_____no_output_____" ] ], [ [ "# our basic libraries\nimport torch\nimport torchvision\n\n# data loading and transforming\nfrom torchvision.datasets import FashionMNIST\nfrom torch.utils.data import DataLoader\nfrom torchvision import transforms\n\n# The output of torchvision datasets are PILImage images of range [0, 1]. \n# We transform them to Tensors for input into a CNN\n\n## Define a transform to read the data in as a tensor\ndata_transform = transforms.ToTensor()\n\ntest_data = FashionMNIST(root='./data', train=False,\n download=True, transform=data_transform)\n\n\n# Print out some stats about the test data\nprint('Test data, number of images: ', len(test_data))", "Test data, number of images: 10000\n" ], [ "# prepare data loaders, set the batch_size\n## TODO: you can try changing the batch_size to be larger or smaller\n## when you get to training your network, see how batch_size affects the loss\nbatch_size = 32\n\ntest_loader = DataLoader(test_data, batch_size=batch_size, shuffle=True)\n\n# specify the image classes\nclasses = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', \n 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']", "_____no_output_____" ] ], [ [ "### Visualize some test data\n\nThis cell iterates over the training dataset, loading a random batch of image/label data, using `dataiter.next()`. It then plots the batch of images and labels in a `2 x batch_size/2` grid.", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n \n# obtain one batch of training images\ndataiter = iter(test_loader)\nimages, labels = dataiter.next()\nimages = images.numpy()\nprint(images.shape)\n# plot the images in the batch, along with the corresponding labels\nfig = plt.figure(figsize=(25, 4))\nfor idx in np.arange(batch_size):\n ax = fig.add_subplot(2, batch_size/2, idx+1, xticks=[], yticks=[])\n ax.imshow(np.squeeze(images[idx]), cmap='gray')\n ax.set_title(classes[labels[idx]])", "(32, 1, 28, 28)\n" ] ], [ [ "### Define the network architecture\n\nThe various layers that make up any neural network are documented, [here](http://pytorch.org/docs/stable/nn.html). 
For a convolutional neural network, we'll use a simple series of layers:\n* Convolutional layers\n* Maxpooling layers\n* Fully-connected (linear) layers", "_____no_output_____" ] ], [ [ "import torch.nn as nn\nimport torch.nn.functional as F\n\nclass Net(nn.Module):\n\n def __init__(self):\n super(Net, self).__init__()\n \n # 1 input image channel (grayscale), 10 output channels/feature maps\n # 3x3 square convolution kernel\n ## output size = (W-F)/S +1 = (28-3)/1 +1 = 26\n # the output Tensor for one image, will have the dimensions: (10, 26, 26)\n # after one pool layer, this becomes (10, 13, 13)\n self.conv1 = nn.Conv2d(1, 10, 3)\n \n # maxpool layer\n # pool with kernel_size=2, stride=2\n self.pool = nn.MaxPool2d(2, 2)\n \n # second conv layer: 10 inputs, 20 outputs, 3x3 conv\n ## output size = (W-F)/S +1 = (13-3)/1 +1 = 11\n # the output tensor will have dimensions: (20, 11, 11)\n # after another pool layer this becomes (20, 5, 5); 5.5 is rounded down\n self.conv2 = nn.Conv2d(10, 20, 3)\n \n # 20 outputs * the 5*5 filtered/pooled map size\n self.fc1 = nn.Linear(20*5*5, 50)\n \n # dropout with p=0.4\n self.fc1_drop = nn.Dropout(p=0.4)\n \n # finally, create 10 output channels (for the 10 classes)\n self.fc2 = nn.Linear(50, 10)\n\n # define the feedforward behavior\n def forward(self, x):\n # two conv/relu + pool layers\n x = self.pool(F.relu(self.conv1(x)))\n x = self.pool(F.relu(self.conv2(x)))\n\n # prep for linear layer\n # this line of code is the equivalent of Flatten in Keras\n x = x.view(x.size(0), -1)\n \n # two linear layers with dropout in between\n x = F.relu(self.fc1(x))\n x = self.fc1_drop(x)\n x = self.fc2(x)\n \n # final output\n return x\n", "_____no_output_____" ] ], [ [ "### Load in our trained net\n\nThis notebook needs to know the network architecture, as defined above, and once it knows what the \"Net\" class looks like, we can instantiate a model and load in an already trained network.\n\nThe architecture above is taken from the example solution code, which was trained and saved in the directory `saved_models/`.\n", "_____no_output_____" ] ], [ [ "# instantiate your Net\nnet = Net()\n\n# load the net parameters by name\nnet.load_state_dict(torch.load('saved_models/fashion_net_ex.pt'))\n\nprint(net)", "Net(\n (conv1): Conv2d(1, 10, kernel_size=(3, 3), stride=(1, 1))\n (pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)\n (conv2): Conv2d(10, 20, kernel_size=(3, 3), stride=(1, 1))\n (fc1): Linear(in_features=500, out_features=50, bias=True)\n (fc1_drop): Dropout(p=0.4)\n (fc2): Linear(in_features=50, out_features=10, bias=True)\n)\n" ] ], [ [ "## Feature Visualization\n\nSometimes, neural networks are thought of as a black box, given some input, they learn to produce some output. CNN's are actually learning to recognize a variety of spatial patterns and you can visualize what each convolutional layer has been trained to recognize by looking at the weights that make up each convolutional kernel and applying those one at a time to a sample image. These techniques are called feature visualization and they are useful for understanding the inner workings of a CNN.\n\nIn the cell below, you'll see how to extract and visualize the filter weights for all of the filters in the first convolutional layer.\n\nNote the patterns of light and dark pixels and see if you can tell what a particular filter is detecting. 
For example, the filter pictured in the example below has dark pixels on either side and light pixels in the middle column, and so it may be detecting vertical edges.\n\n<img src='edge_filter_ex.png' width= 30% height=30%/>\n\n", "_____no_output_____" ] ], [ [ "# Get the weights in the first conv layer\nweights = net.conv1.weight.data\nw = weights.numpy()\n\n# for 10 filters\nfig=plt.figure(figsize=(20, 8))\ncolumns = 5\nrows = 2\nfor i in range(0, columns*rows):\n    fig.add_subplot(rows, columns, i+1)\n    plt.imshow(w[i][0], cmap='gray')\n    \nprint('First convolutional layer')\nplt.show()\n\nweights = net.conv2.weight.data\nw = weights.numpy()", "First convolutional layer\n" ] ], [ [ "### Activation Maps\n\nNext, you'll see how to use OpenCV's `filter2D` function to apply these filters to a sample test image and produce a series of **activation maps** as a result. We'll do this for the first and second convolutional layers and these activation maps should really give you a sense for what features each filter learns to extract.", "_____no_output_____" ] ], [ [ "# obtain one batch of testing images\ndataiter = iter(test_loader)\nimages, labels = dataiter.next()\nimages = images.numpy()\n\n# select an image by index\nidx = 3\nimg = np.squeeze(images[idx])\n\n# Use OpenCV's filter2D function \n# apply a specific set of filter weights (like the ones displayed above) to the test image\n\nimport cv2\nplt.imshow(img, cmap='gray')\n\nweights = net.conv1.weight.data\nw = weights.numpy()\n\n# 1. first conv layer\n# for 10 filters\nfig=plt.figure(figsize=(30, 10))\ncolumns = 5*2\nrows = 2\nfor i in range(0, columns*rows):\n    fig.add_subplot(rows, columns, i+1)\n    if ((i%2)==0):\n        plt.imshow(w[int(i/2)][0], cmap='gray')\n    else:\n        c = cv2.filter2D(img, -1, w[int((i-1)/2)][0])\n        plt.imshow(c, cmap='gray')\nplt.show()", "_____no_output_____" ], [ "# Same process but for the second conv layer (20, 3x3 filters):\nplt.imshow(img, cmap='gray')\n\n# second conv layer, conv2\nweights = net.conv2.weight.data\nw = weights.numpy()\n\n# 2. second conv layer\n# for 20 filters\nfig=plt.figure(figsize=(30, 10))\ncolumns = 5*2\nrows = 2*2\nfor i in range(0, columns*rows):\n    fig.add_subplot(rows, columns, i+1)\n    if ((i%2)==0):\n        plt.imshow(w[int(i/2)][0], cmap='gray')\n    else:\n        c = cv2.filter2D(img, -1, w[int((i-1)/2)][0])\n        plt.imshow(c, cmap='gray')\nplt.show()", "_____no_output_____" ] ], [ [ "### Question: Choose a filter from one of your trained convolutional layers; looking at these activations, what purpose do you think it plays? What kind of feature do you think it detects?\n", "_____no_output_____" ], [ "**Answer**: In the first convolutional layer (conv1), the very first filter, pictured in the top-left grid corner, appears to detect horizontal edges. It has a negatively-weighted top row and positively weighted middle/bottom rows and seems to detect the horizontal edges of sleeves in a pullover. \n\nIn the second convolutional layer (conv2) the first filter looks like it may be detecting the background color (since that is the brightest area in the filtered image) and the more vertical edges of a pullover.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ] ]
4a39faf820ed54f08887193a36c3415afd57b58c
4,577
ipynb
Jupyter Notebook
crawler/techplus_IT.ipynb
yskft04/satobot-crawler-open
50b8b72d20b18de45671475299e0f45000cc7055
[ "Unlicense" ]
1
2021-12-19T05:40:32.000Z
2021-12-19T05:40:32.000Z
crawler/techplus_IT.ipynb
yskft04/satobot-crawler-open
50b8b72d20b18de45671475299e0f45000cc7055
[ "Unlicense" ]
null
null
null
crawler/techplus_IT.ipynb
yskft04/satobot-crawler-open
50b8b72d20b18de45671475299e0f45000cc7055
[ "Unlicense" ]
null
null
null
61.851351
2,687
0.38191
[ [ [ "from urllib import request # urllib.requestモジュールをインポート\nfrom bs4 import BeautifulSoup # BeautifulSoupクラスをインポート\n\nurl = \"https://news.mynavi.jp/techplus/list/headline/\"\nresponse = request.urlopen(url)\nsoup = BeautifulSoup(response)\nresponse.close()\n#soup\n\n## soup と実行すると、ソースコードを吐き出してくれる。\n## そこから、必要な要素を考える。\nlinks = soup.find_all(\"div\",class_=\"c-archiveList_daily\")\nfor li in links:\n #print(li)\n l = li.find(\"a\")\n if l != None: #liに実態があれば\n url = l.get(\"href\")\n title = li.text.replace( '\\n' , '' )#何故か改行が入ってしまってる場合がある。\n print(title)\n print(url)", "2022年01月04日(火) 企業IT 2022年はデータ利活用の垣根が取り払われ、データから価値を得られる年に - Tableau佐藤氏 1時間前 企業IT モダン・データ・エクスペリエンスでビジネスの回復を支援 - ピュア・ストレージ田中社長 1時間前 企業IT どうなる? 2022年の旅行業界① 澤田秀雄・エイチ・アイ・エス会長兼社長に直撃! 1時間前 企業IT グローバルで質の伴った成長を - NTTデータ 年頭所感 1時間前 企業IT 2022年は顧客のビジネス変革を支え持続可能な社会に貢献を - HPE望月社長 2時間前 レポート 企業IT イノベーションを後押しするDXの実現へ - シトリックス年頭所感 2時間前 企業IT 日本のIT市場におけるクラウドスキルの底上げをリード - Veeam 年頭所感 2時間前 企業IT 「The World Works with ServiceNow」を目指す - ServiceNow 年頭所感 2時間前 企業IT 日本社会の再活性化に向けた変革を支援 - マイクロソフト 年頭所感 2時間前 企業IT データ・ファースト・モダナイゼーションに注力 - HPE 望月社長 3時間前 企業IT お客様の変革に貢献するパートナーを目指して - デル 年頭所感 3時間前 企業IT グローバル創業50周年、大転換の1年に - SAPジャパン 鈴木社長 3時間前 企業IT 「人」と「データ」へ取組みを強化 - 内田洋行 年頭所感 3時間前 企業IT 航空機の技術とメカニズムの裏側 第310回 米海軍が導入した “魔法の絨毯”とは? 4時間前 連載 企業IT デジタルを駆使し、 デジタル変革を支援 - シスコ 年頭所感 4時間前 企業IT 「ドコモビジネス」を信頼されるブランドに - NTT Com 年頭所感 4時間前 企業IT ついに施行された改正電帳法、結局何が変わった? 電子化の動きを止めるな 6時間前 レポート 企業IT 次の30年もインターネットの技術分野でイニシアティブを執る‐IIJ年頭所感 6時間前 企業IT 『2022年を大胆予測』特別インタビュー 安永竜夫・三井物産会長 6時間前 \n/techplus/article/20220104-2242922/\n2022年01月03日(月) 企業IT 2021年デバイス別シェア、パソコンがスマホを上回る 14時間前 \n/techplus/article/20220103-2242624/\n" ] ] ]
[ "code" ]
[ [ "code" ] ]
4a39ff4c79a5ba3a3abc58918d3271e5f9235117
465,517
ipynb
Jupyter Notebook
cleaners_districts.ipynb
rmchrkv/statistics
eadca1ff519d6d7470fb5f48eac9471ee7bc1a75
[ "MIT" ]
null
null
null
cleaners_districts.ipynb
rmchrkv/statistics
eadca1ff519d6d7470fb5f48eac9471ee7bc1a75
[ "MIT" ]
null
null
null
cleaners_districts.ipynb
rmchrkv/statistics
eadca1ff519d6d7470fb5f48eac9471ee7bc1a75
[ "MIT" ]
null
null
null
138.546726
1,116
0.35101
[ [ [ "import numpy as np\nimport random as rnd\nimport itertools\nimport collections", "_____no_output_____" ], [ "n_districts = 10\nn_cleaners = 100\ncleaner_a = 1\ncleaner_b = 2\nnum_repeat = 200", "_____no_output_____" ], [ "cleaners = np.arange(start= 1, stop = n_cleaners+1)\ndistricts = np.arange(start= 1, stop = n_districts+1)\ncleaners_per_district = int(n_cleaners/n_districts)\n\ndef chunks(arr, n):\n n = max(1, n)\n return (arr[i:i+n] for i in range(0, len(arr), n))\n\ndef common_district(cleaner_1, cleaner_2):\n return indices.get(cleaner_1, set()) & indices.get(cleaner_2, set())", "_____no_output_____" ], [ "prob_array = []\nfor i in range(0,num_repeat):\n np.random.shuffle(cleaners)\n print('Iteration', i+1, ':\\n\\n', cleaners, '\\n')\n districts_cleaners = list(itertools.islice(chunks(cleaners, cleaners_per_district), n_districts))\n print(list(enumerate(districts_cleaners)), '\\n')\n indices = {}\n for j, district in enumerate(districts_cleaners):\n for cleaner in district:\n indices.setdefault(cleaner, set()).add(j)\n ordered_indices = collections.OrderedDict(sorted(indices.items()))\n print(ordered_indices)\n check = list(common_district(cleaner_a, cleaner_b))\n if len(check) == 0:\n prob_array.append(0)\n print('\\nNo common district for cleaners A and B', '\\n\\n')\n else:\n prob_array.append(1)\n print('\\nCleaner A and cleaner B are in disctrict #', check, '\\n\\n')", "Iteration 1 :\n\n [ 70 84 16 54 79 5 7 12 71 42 13 59 82 26 2 96 74 47\n 34 91 80 89 37 92 65 63 46 100 20 52 8 66 43 55 56 25\n 57 4 14 22 48 69 77 81 45 27 29 75 78 30 31 39 90 23\n 83 51 88 38 95 41 67 35 10 94 33 73 53 86 28 93 19 40\n 24 36 62 99 68 50 3 76 85 6 1 18 97 61 9 72 58 98\n 44 11 64 21 87 60 49 15 32 17] \n\n[(0, array([70, 84, 16, 54, 79, 5, 7, 12, 71, 42])), (1, array([13, 59, 82, 26, 2, 96, 74, 47, 34, 91])), (2, array([ 80, 89, 37, 92, 65, 63, 46, 100, 20, 52])), (3, array([ 8, 66, 43, 55, 56, 25, 57, 4, 14, 22])), (4, array([48, 69, 77, 81, 45, 27, 29, 75, 78, 30])), (5, array([31, 39, 90, 23, 83, 51, 88, 38, 95, 41])), (6, array([67, 35, 10, 94, 33, 73, 53, 86, 28, 93])), (7, array([19, 40, 24, 36, 62, 99, 68, 50, 3, 76])), (8, array([85, 6, 1, 18, 97, 61, 9, 72, 58, 98])), (9, array([44, 11, 64, 21, 87, 60, 49, 15, 32, 17]))] \n\nOrderedDict([(1, {8}), (2, {1}), (3, {7}), (4, {3}), (5, {0}), (6, {8}), (7, {0}), (8, {3}), (9, {8}), (10, {6}), (11, {9}), (12, {0}), (13, {1}), (14, {3}), (15, {9}), (16, {0}), (17, {9}), (18, {8}), (19, {7}), (20, {2}), (21, {9}), (22, {3}), (23, {5}), (24, {7}), (25, {3}), (26, {1}), (27, {4}), (28, {6}), (29, {4}), (30, {4}), (31, {5}), (32, {9}), (33, {6}), (34, {1}), (35, {6}), (36, {7}), (37, {2}), (38, {5}), (39, {5}), (40, {7}), (41, {5}), (42, {0}), (43, {3}), (44, {9}), (45, {4}), (46, {2}), (47, {1}), (48, {4}), (49, {9}), (50, {7}), (51, {5}), (52, {2}), (53, {6}), (54, {0}), (55, {3}), (56, {3}), (57, {3}), (58, {8}), (59, {1}), (60, {9}), (61, {8}), (62, {7}), (63, {2}), (64, {9}), (65, {2}), (66, {3}), (67, {6}), (68, {7}), (69, {4}), (70, {0}), (71, {0}), (72, {8}), (73, {6}), (74, {1}), (75, {4}), (76, {7}), (77, {4}), (78, {4}), (79, {0}), (80, {2}), (81, {4}), (82, {1}), (83, {5}), (84, {0}), (85, {8}), (86, {6}), (87, {9}), (88, {5}), (89, {2}), (90, {5}), (91, {1}), (92, {2}), (93, {6}), (94, {6}), (95, {5}), (96, {1}), (97, {8}), (98, {8}), (99, {7}), (100, {2})])\n\nNo common district for cleaners A and B \n\n\nIteration 2 :\n\n [ 88 19 59 51 97 32 55 41 78 49 74 30 22 7 66 98 34 46\n 71 72 1 2 86 53 99 33 3 26 11 5 57 63 43 69 
90 81\n 15 12 61 84 75 29 36 91 38 24 87 52 79 93 4 77 17 65\n 54 6 50 35 76 14 9 44 100 62 56 64 20 21 40 47 70 94\n 8 60 10 58 48 42 89 28 18 73 67 92 37 83 13 23 80 45\n 25 16 96 27 95 39 82 85 68 31] \n\n[(0, array([88, 19, 59, 51, 97, 32, 55, 41, 78, 49])), (1, array([74, 30, 22, 7, 66, 98, 34, 46, 71, 72])), (2, array([ 1, 2, 86, 53, 99, 33, 3, 26, 11, 5])), (3, array([57, 63, 43, 69, 90, 81, 15, 12, 61, 84])), (4, array([75, 29, 36, 91, 38, 24, 87, 52, 79, 93])), (5, array([ 4, 77, 17, 65, 54, 6, 50, 35, 76, 14])), (6, array([ 9, 44, 100, 62, 56, 64, 20, 21, 40, 47])), (7, array([70, 94, 8, 60, 10, 58, 48, 42, 89, 28])), (8, array([18, 73, 67, 92, 37, 83, 13, 23, 80, 45])), (9, array([25, 16, 96, 27, 95, 39, 82, 85, 68, 31]))] \n\nOrderedDict([(1, {2}), (2, {2}), (3, {2}), (4, {5}), (5, {2}), (6, {5}), (7, {1}), (8, {7}), (9, {6}), (10, {7}), (11, {2}), (12, {3}), (13, {8}), (14, {5}), (15, {3}), (16, {9}), (17, {5}), (18, {8}), (19, {0}), (20, {6}), (21, {6}), (22, {1}), (23, {8}), (24, {4}), (25, {9}), (26, {2}), (27, {9}), (28, {7}), (29, {4}), (30, {1}), (31, {9}), (32, {0}), (33, {2}), (34, {1}), (35, {5}), (36, {4}), (37, {8}), (38, {4}), (39, {9}), (40, {6}), (41, {0}), (42, {7}), (43, {3}), (44, {6}), (45, {8}), (46, {1}), (47, {6}), (48, {7}), (49, {0}), (50, {5}), (51, {0}), (52, {4}), (53, {2}), (54, {5}), (55, {0}), (56, {6}), (57, {3}), (58, {7}), (59, {0}), (60, {7}), (61, {3}), (62, {6}), (63, {3}), (64, {6}), (65, {5}), (66, {1}), (67, {8}), (68, {9}), (69, {3}), (70, {7}), (71, {1}), (72, {1}), (73, {8}), (74, {1}), (75, {4}), (76, {5}), (77, {5}), (78, {0}), (79, {4}), (80, {8}), (81, {3}), (82, {9}), (83, {8}), (84, {3}), (85, {9}), (86, {2}), (87, {4}), (88, {0}), (89, {7}), (90, {3}), (91, {4}), (92, {8}), (93, {4}), (94, {7}), (95, {9}), (96, {9}), (97, {0}), (98, {1}), (99, {2}), (100, {6})])\n\nCleaner A and cleaner B are in disctrict # [2] \n\n\nIteration 3 :\n\n [ 18 78 54 51 28 42 37 91 13 22 21 65 100 38 14 76 52 47\n 3 94 32 10 20 69 79 58 39 67 8 84 83 23 62 48 33 50\n 49 35 31 45 26 30 7 34 63 70 40 87 2 60 92 66 73 19\n 5 56 16 41 90 86 57 15 53 88 46 80 68 12 93 81 27 44\n 36 82 9 77 89 96 29 6 75 72 97 25 17 71 43 95 74 11\n 1 24 64 59 55 61 98 85 4 99] \n\n[(0, array([18, 78, 54, 51, 28, 42, 37, 91, 13, 22])), (1, array([ 21, 65, 100, 38, 14, 76, 52, 47, 3, 94])), (2, array([32, 10, 20, 69, 79, 58, 39, 67, 8, 84])), (3, array([83, 23, 62, 48, 33, 50, 49, 35, 31, 45])), (4, array([26, 30, 7, 34, 63, 70, 40, 87, 2, 60])), (5, array([92, 66, 73, 19, 5, 56, 16, 41, 90, 86])), (6, array([57, 15, 53, 88, 46, 80, 68, 12, 93, 81])), (7, array([27, 44, 36, 82, 9, 77, 89, 96, 29, 6])), (8, array([75, 72, 97, 25, 17, 71, 43, 95, 74, 11])), (9, array([ 1, 24, 64, 59, 55, 61, 98, 85, 4, 99]))] \n\nOrderedDict([(1, {9}), (2, {4}), (3, {1}), (4, {9}), (5, {5}), (6, {7}), (7, {4}), (8, {2}), (9, {7}), (10, {2}), (11, {8}), (12, {6}), (13, {0}), (14, {1}), (15, {6}), (16, {5}), (17, {8}), (18, {0}), (19, {5}), (20, {2}), (21, {1}), (22, {0}), (23, {3}), (24, {9}), (25, {8}), (26, {4}), (27, {7}), (28, {0}), (29, {7}), (30, {4}), (31, {3}), (32, {2}), (33, {3}), (34, {4}), (35, {3}), (36, {7}), (37, {0}), (38, {1}), (39, {2}), (40, {4}), (41, {5}), (42, {0}), (43, {8}), (44, {7}), (45, {3}), (46, {6}), (47, {1}), (48, {3}), (49, {3}), (50, {3}), (51, {0}), (52, {1}), (53, {6}), (54, {0}), (55, {9}), (56, {5}), (57, {6}), (58, {2}), (59, {9}), (60, {4}), (61, {9}), (62, {3}), (63, {4}), (64, {9}), (65, {1}), (66, {5}), (67, {2}), (68, {6}), (69, {2}), (70, {4}), (71, 
{8}), (72, {8}), (73, {5}), (74, {8}), (75, {8}), (76, {1}), (77, {7}), (78, {0}), (79, {2}), (80, {6}), (81, {6}), (82, {7}), (83, {3}), (84, {2}), (85, {9}), (86, {5}), (87, {4}), (88, {6}), (89, {7}), (90, {5}), (91, {0}), (92, {5}), (93, {6}), (94, {1}), (95, {8}), (96, {7}), (97, {8}), (98, {9}), (99, {9}), (100, {1})])\n\nNo common district for cleaners A and B \n\n\nIteration 4 :\n\n [ 91 35 32 29 79 52 96 63 92 75 95 64 58 40 80 13 47 67\n 62 37 56 33 24 76 20 28 21 78 12 70 61 19 39 86 99 81\n 4 30 9 83 27 54 100 3 45 23 25 5 72 69 1 82 51 84\n 11 88 55 57 8 36 2 42 90 71 73 93 53 48 17 66 41 10\n 26 46 34 14 89 22 15 94 59 50 44 77 65 18 43 49 60 87\n 98 68 6 16 85 38 74 31 97 7] \n\n[(0, array([91, 35, 32, 29, 79, 52, 96, 63, 92, 75])), (1, array([95, 64, 58, 40, 80, 13, 47, 67, 62, 37])), (2, array([56, 33, 24, 76, 20, 28, 21, 78, 12, 70])), (3, array([61, 19, 39, 86, 99, 81, 4, 30, 9, 83])), (4, array([ 27, 54, 100, 3, 45, 23, 25, 5, 72, 69])), (5, array([ 1, 82, 51, 84, 11, 88, 55, 57, 8, 36])), (6, array([ 2, 42, 90, 71, 73, 93, 53, 48, 17, 66])), (7, array([41, 10, 26, 46, 34, 14, 89, 22, 15, 94])), (8, array([59, 50, 44, 77, 65, 18, 43, 49, 60, 87])), (9, array([98, 68, 6, 16, 85, 38, 74, 31, 97, 7]))] \n\nOrderedDict([(1, {5}), (2, {6}), (3, {4}), (4, {3}), (5, {4}), (6, {9}), (7, {9}), (8, {5}), (9, {3}), (10, {7}), (11, {5}), (12, {2}), (13, {1}), (14, {7}), (15, {7}), (16, {9}), (17, {6}), (18, {8}), (19, {3}), (20, {2}), (21, {2}), (22, {7}), (23, {4}), (24, {2}), (25, {4}), (26, {7}), (27, {4}), (28, {2}), (29, {0}), (30, {3}), (31, {9}), (32, {0}), (33, {2}), (34, {7}), (35, {0}), (36, {5}), (37, {1}), (38, {9}), (39, {3}), (40, {1}), (41, {7}), (42, {6}), (43, {8}), (44, {8}), (45, {4}), (46, {7}), (47, {1}), (48, {6}), (49, {8}), (50, {8}), (51, {5}), (52, {0}), (53, {6}), (54, {4}), (55, {5}), (56, {2}), (57, {5}), (58, {1}), (59, {8}), (60, {8}), (61, {3}), (62, {1}), (63, {0}), (64, {1}), (65, {8}), (66, {6}), (67, {1}), (68, {9}), (69, {4}), (70, {2}), (71, {6}), (72, {4}), (73, {6}), (74, {9}), (75, {0}), (76, {2}), (77, {8}), (78, {2}), (79, {0}), (80, {1}), (81, {3}), (82, {5}), (83, {3}), (84, {5}), (85, {9}), (86, {3}), (87, {8}), (88, {5}), (89, {7}), (90, {6}), (91, {0}), (92, {0}), (93, {6}), (94, {7}), (95, {1}), (96, {0}), (97, {9}), (98, {9}), (99, {3}), (100, {4})])\n\nNo common district for cleaners A and B \n\n\nIteration 5 :\n\n [ 18 70 25 64 83 69 71 57 78 82 91 94 6 100 95 55 85 21\n 50 66 44 27 68 77 20 72 45 96 9 1 43 49 36 33 52 8\n 60 75 89 19 38 93 73 11 88 13 14 59 51 84 79 30 29 53\n 17 48 10 87 34 99 23 37 90 4 56 42 41 31 46 16 15 86\n 32 81 97 67 74 39 54 7 22 28 47 5 24 40 26 58 61 65\n 3 80 62 2 12 98 92 76 63 35] \n\n[(0, array([18, 70, 25, 64, 83, 69, 71, 57, 78, 82])), (1, array([ 91, 94, 6, 100, 95, 55, 85, 21, 50, 66])), (2, array([44, 27, 68, 77, 20, 72, 45, 96, 9, 1])), (3, array([43, 49, 36, 33, 52, 8, 60, 75, 89, 19])), (4, array([38, 93, 73, 11, 88, 13, 14, 59, 51, 84])), (5, array([79, 30, 29, 53, 17, 48, 10, 87, 34, 99])), (6, array([23, 37, 90, 4, 56, 42, 41, 31, 46, 16])), (7, array([15, 86, 32, 81, 97, 67, 74, 39, 54, 7])), (8, array([22, 28, 47, 5, 24, 40, 26, 58, 61, 65])), (9, array([ 3, 80, 62, 2, 12, 98, 92, 76, 63, 35]))] \n\nOrderedDict([(1, {2}), (2, {9}), (3, {9}), (4, {6}), (5, {8}), (6, {1}), (7, {7}), (8, {3}), (9, {2}), (10, {5}), (11, {4}), (12, {9}), (13, {4}), (14, {4}), (15, {7}), (16, {6}), (17, {5}), (18, {0}), (19, {3}), (20, {2}), (21, {1}), (22, {8}), (23, {6}), (24, {8}), (25, {0}), (26, {8}), 
(27, {2}), (28, {8}), (29, {5}), (30, {5}), (31, {6}), (32, {7}), (33, {3}), (34, {5}), (35, {9}), (36, {3}), (37, {6}), (38, {4}), (39, {7}), (40, {8}), (41, {6}), (42, {6}), (43, {3}), (44, {2}), (45, {2}), (46, {6}), (47, {8}), (48, {5}), (49, {3}), (50, {1}), (51, {4}), (52, {3}), (53, {5}), (54, {7}), (55, {1}), (56, {6}), (57, {0}), (58, {8}), (59, {4}), (60, {3}), (61, {8}), (62, {9}), (63, {9}), (64, {0}), (65, {8}), (66, {1}), (67, {7}), (68, {2}), (69, {0}), (70, {0}), (71, {0}), (72, {2}), (73, {4}), (74, {7}), (75, {3}), (76, {9}), (77, {2}), (78, {0}), (79, {5}), (80, {9}), (81, {7}), (82, {0}), (83, {0}), (84, {4}), (85, {1}), (86, {7}), (87, {5}), (88, {4}), (89, {3}), (90, {6}), (91, {1}), (92, {9}), (93, {4}), (94, {1}), (95, {1}), (96, {2}), (97, {7}), (98, {9}), (99, {5}), (100, {1})])\n\nNo common district for cleaners A and B \n\n\nIteration 6 :\n\n [ 66 78 90 46 26 16 58 18 17 29 33 91 97 86 88 93 70 28\n 95 52 31 64 9 10 22 63 40 92 50 84 21 25 1 41 6 39\n 71 62 47 57 61 12 83 60 94 77 7 43 56 15 54 81 19 99\n 11 4 34 49 73 37 38 85 48 13 5 80 82 30 67 32 55 27\n 68 23 89 53 100 51 59 8 69 45 76 87 14 98 36 2 96 35\n 74 79 3 65 24 20 44 42 72 75] \n\n[(0, array([66, 78, 90, 46, 26, 16, 58, 18, 17, 29])), (1, array([33, 91, 97, 86, 88, 93, 70, 28, 95, 52])), (2, array([31, 64, 9, 10, 22, 63, 40, 92, 50, 84])), (3, array([21, 25, 1, 41, 6, 39, 71, 62, 47, 57])), (4, array([61, 12, 83, 60, 94, 77, 7, 43, 56, 15])), (5, array([54, 81, 19, 99, 11, 4, 34, 49, 73, 37])), (6, array([38, 85, 48, 13, 5, 80, 82, 30, 67, 32])), (7, array([ 55, 27, 68, 23, 89, 53, 100, 51, 59, 8])), (8, array([69, 45, 76, 87, 14, 98, 36, 2, 96, 35])), (9, array([74, 79, 3, 65, 24, 20, 44, 42, 72, 75]))] \n\nOrderedDict([(1, {3}), (2, {8}), (3, {9}), (4, {5}), (5, {6}), (6, {3}), (7, {4}), (8, {7}), (9, {2}), (10, {2}), (11, {5}), (12, {4}), (13, {6}), (14, {8}), (15, {4}), (16, {0}), (17, {0}), (18, {0}), (19, {5}), (20, {9}), (21, {3}), (22, {2}), (23, {7}), (24, {9}), (25, {3}), (26, {0}), (27, {7}), (28, {1}), (29, {0}), (30, {6}), (31, {2}), (32, {6}), (33, {1}), (34, {5}), (35, {8}), (36, {8}), (37, {5}), (38, {6}), (39, {3}), (40, {2}), (41, {3}), (42, {9}), (43, {4}), (44, {9}), (45, {8}), (46, {0}), (47, {3}), (48, {6}), (49, {5}), (50, {2}), (51, {7}), (52, {1}), (53, {7}), (54, {5}), (55, {7}), (56, {4}), (57, {3}), (58, {0}), (59, {7}), (60, {4}), (61, {4}), (62, {3}), (63, {2}), (64, {2}), (65, {9}), (66, {0}), (67, {6}), (68, {7}), (69, {8}), (70, {1}), (71, {3}), (72, {9}), (73, {5}), (74, {9}), (75, {9}), (76, {8}), (77, {4}), (78, {0}), (79, {9}), (80, {6}), (81, {5}), (82, {6}), (83, {4}), (84, {2}), (85, {6}), (86, {1}), (87, {8}), (88, {1}), (89, {7}), (90, {0}), (91, {1}), (92, {2}), (93, {1}), (94, {4}), (95, {1}), (96, {8}), (97, {1}), (98, {8}), (99, {5}), (100, {7})])\n\nNo common district for cleaners A and B \n\n\nIteration 7 :\n\n [ 10 40 51 18 15 63 57 86 74 95 55 14 44 12 43 83 69 4\n 94 42 26 68 32 49 2 52 80 17 8 23 41 96 99 84 29 28\n 20 66 30 33 47 75 72 61 92 50 5 58 39 71 59 62 13 67\n 90 35 22 97 70 11 21 54 45 6 89 37 100 87 79 48 77 38\n 82 85 46 1 60 31 56 78 65 3 98 64 27 36 34 24 73 19\n 91 53 93 76 88 9 7 25 81 16] \n\n[(0, array([10, 40, 51, 18, 15, 63, 57, 86, 74, 95])), (1, array([55, 14, 44, 12, 43, 83, 69, 4, 94, 42])), (2, array([26, 68, 32, 49, 2, 52, 80, 17, 8, 23])), (3, array([41, 96, 99, 84, 29, 28, 20, 66, 30, 33])), (4, array([47, 75, 72, 61, 92, 50, 5, 58, 39, 71])), (5, array([59, 62, 13, 67, 90, 35, 22, 97, 70, 11])), (6, array([ 21, 54, 
45, 6, 89, 37, 100, 87, 79, 48])), (7, array([77, 38, 82, 85, 46, 1, 60, 31, 56, 78])), (8, array([65, 3, 98, 64, 27, 36, 34, 24, 73, 19])), (9, array([91, 53, 93, 76, 88, 9, 7, 25, 81, 16]))] \n\nOrderedDict([(1, {7}), (2, {2}), (3, {8}), (4, {1}), (5, {4}), (6, {6}), (7, {9}), (8, {2}), (9, {9}), (10, {0}), (11, {5}), (12, {1}), (13, {5}), (14, {1}), (15, {0}), (16, {9}), (17, {2}), (18, {0}), (19, {8}), (20, {3}), (21, {6}), (22, {5}), (23, {2}), (24, {8}), (25, {9}), (26, {2}), (27, {8}), (28, {3}), (29, {3}), (30, {3}), (31, {7}), (32, {2}), (33, {3}), (34, {8}), (35, {5}), (36, {8}), (37, {6}), (38, {7}), (39, {4}), (40, {0}), (41, {3}), (42, {1}), (43, {1}), (44, {1}), (45, {6}), (46, {7}), (47, {4}), (48, {6}), (49, {2}), (50, {4}), (51, {0}), (52, {2}), (53, {9}), (54, {6}), (55, {1}), (56, {7}), (57, {0}), (58, {4}), (59, {5}), (60, {7}), (61, {4}), (62, {5}), (63, {0}), (64, {8}), (65, {8}), (66, {3}), (67, {5}), (68, {2}), (69, {1}), (70, {5}), (71, {4}), (72, {4}), (73, {8}), (74, {0}), (75, {4}), (76, {9}), (77, {7}), (78, {7}), (79, {6}), (80, {2}), (81, {9}), (82, {7}), (83, {1}), (84, {3}), (85, {7}), (86, {0}), (87, {6}), (88, {9}), (89, {6}), (90, {5}), (91, {9}), (92, {4}), (93, {9}), (94, {1}), (95, {0}), (96, {3}), (97, {5}), (98, {8}), (99, {3}), (100, {6})])\n\nNo common district for cleaners A and B \n\n\nIteration 8 :\n\n [ 54 14 71 11 38 12 43 56 48 4 49 82 58 91 34 36 55 51\n 3 23 13 25 29 50 62 26 92 21 16 40 46 97 93 84 65 44\n 35 32 59 94 78 87 52 86 90 22 73 77 19 39 100 72 67 99\n 31 37 68 1 74 63 60 79 47 80 17 81 64 27 53 33 45 89\n 8 15 75 61 24 41 28 57 2 10 95 30 98 88 7 6 83 20\n 66 18 96 9 5 42 76 69 85 70] \n\n[(0, array([54, 14, 71, 11, 38, 12, 43, 56, 48, 4])), (1, array([49, 82, 58, 91, 34, 36, 55, 51, 3, 23])), (2, array([13, 25, 29, 50, 62, 26, 92, 21, 16, 40])), (3, array([46, 97, 93, 84, 65, 44, 35, 32, 59, 94])), (4, array([78, 87, 52, 86, 90, 22, 73, 77, 19, 39])), (5, array([100, 72, 67, 99, 31, 37, 68, 1, 74, 63])), (6, array([60, 79, 47, 80, 17, 81, 64, 27, 53, 33])), (7, array([45, 89, 8, 15, 75, 61, 24, 41, 28, 57])), (8, array([ 2, 10, 95, 30, 98, 88, 7, 6, 83, 20])), (9, array([66, 18, 96, 9, 5, 42, 76, 69, 85, 70]))] \n\nOrderedDict([(1, {5}), (2, {8}), (3, {1}), (4, {0}), (5, {9}), (6, {8}), (7, {8}), (8, {7}), (9, {9}), (10, {8}), (11, {0}), (12, {0}), (13, {2}), (14, {0}), (15, {7}), (16, {2}), (17, {6}), (18, {9}), (19, {4}), (20, {8}), (21, {2}), (22, {4}), (23, {1}), (24, {7}), (25, {2}), (26, {2}), (27, {6}), (28, {7}), (29, {2}), (30, {8}), (31, {5}), (32, {3}), (33, {6}), (34, {1}), (35, {3}), (36, {1}), (37, {5}), (38, {0}), (39, {4}), (40, {2}), (41, {7}), (42, {9}), (43, {0}), (44, {3}), (45, {7}), (46, {3}), (47, {6}), (48, {0}), (49, {1}), (50, {2}), (51, {1}), (52, {4}), (53, {6}), (54, {0}), (55, {1}), (56, {0}), (57, {7}), (58, {1}), (59, {3}), (60, {6}), (61, {7}), (62, {2}), (63, {5}), (64, {6}), (65, {3}), (66, {9}), (67, {5}), (68, {5}), (69, {9}), (70, {9}), (71, {0}), (72, {5}), (73, {4}), (74, {5}), (75, {7}), (76, {9}), (77, {4}), (78, {4}), (79, {6}), (80, {6}), (81, {6}), (82, {1}), (83, {8}), (84, {3}), (85, {9}), (86, {4}), (87, {4}), (88, {8}), (89, {7}), (90, {4}), (91, {1}), (92, {2}), (93, {3}), (94, {3}), (95, {8}), (96, {9}), (97, {3}), (98, {8}), (99, {5}), (100, {5})])\n\nNo common district for cleaners A and B \n\n\nIteration 9 :\n\n [ 59 10 15 13 62 35 82 38 18 85 27 43 16 61 6 3 91 98\n 46 78 84 9 76 20 71 21 63 34 12 11 23 1 72 29 5 64\n 37 69 89 56 25 81 75 58 2 31 45 22 65 60 28 67 68 40\n 
32 39 86 30 8 33 73 93 52 51 50 70 44 41 94 95 74 83\n 49 77 80 14 57 53 42 24 48 17 97 99 66 92 4 96 87 47\n 54 100 26 36 19 7 88 79 55 90] \n\n[(0, array([59, 10, 15, 13, 62, 35, 82, 38, 18, 85])), (1, array([27, 43, 16, 61, 6, 3, 91, 98, 46, 78])), (2, array([84, 9, 76, 20, 71, 21, 63, 34, 12, 11])), (3, array([23, 1, 72, 29, 5, 64, 37, 69, 89, 56])), (4, array([25, 81, 75, 58, 2, 31, 45, 22, 65, 60])), (5, array([28, 67, 68, 40, 32, 39, 86, 30, 8, 33])), (6, array([73, 93, 52, 51, 50, 70, 44, 41, 94, 95])), (7, array([74, 83, 49, 77, 80, 14, 57, 53, 42, 24])), (8, array([48, 17, 97, 99, 66, 92, 4, 96, 87, 47])), (9, array([ 54, 100, 26, 36, 19, 7, 88, 79, 55, 90]))] \n\nOrderedDict([(1, {3}), (2, {4}), (3, {1}), (4, {8}), (5, {3}), (6, {1}), (7, {9}), (8, {5}), (9, {2}), (10, {0}), (11, {2}), (12, {2}), (13, {0}), (14, {7}), (15, {0}), (16, {1}), (17, {8}), (18, {0}), (19, {9}), (20, {2}), (21, {2}), (22, {4}), (23, {3}), (24, {7}), (25, {4}), (26, {9}), (27, {1}), (28, {5}), (29, {3}), (30, {5}), (31, {4}), (32, {5}), (33, {5}), (34, {2}), (35, {0}), (36, {9}), (37, {3}), (38, {0}), (39, {5}), (40, {5}), (41, {6}), (42, {7}), (43, {1}), (44, {6}), (45, {4}), (46, {1}), (47, {8}), (48, {8}), (49, {7}), (50, {6}), (51, {6}), (52, {6}), (53, {7}), (54, {9}), (55, {9}), (56, {3}), (57, {7}), (58, {4}), (59, {0}), (60, {4}), (61, {1}), (62, {0}), (63, {2}), (64, {3}), (65, {4}), (66, {8}), (67, {5}), (68, {5}), (69, {3}), (70, {6}), (71, {2}), (72, {3}), (73, {6}), (74, {7}), (75, {4}), (76, {2}), (77, {7}), (78, {1}), (79, {9}), (80, {7}), (81, {4}), (82, {0}), (83, {7}), (84, {2}), (85, {0}), (86, {5}), (87, {8}), (88, {9}), (89, {3}), (90, {9}), (91, {1}), (92, {8}), (93, {6}), (94, {6}), (95, {6}), (96, {8}), (97, {8}), (98, {1}), (99, {8}), (100, {9})])\n\nNo common district for cleaners A and B \n\n\nIteration 10 :\n\n [ 63 87 31 30 54 93 38 15 33 12 25 80 6 74 79 16 68 69\n 3 72 82 76 61 83 27 32 96 43 98 29 23 41 10 42 52 77\n 65 73 20 11 99 46 81 13 39 19 90 85 100 47 66 24 4 92\n 62 22 7 59 17 91 18 1 57 26 60 78 2 75 48 88 64 51\n 45 21 44 8 40 14 67 97 34 28 5 71 89 36 35 50 86 49\n 58 70 84 94 56 53 37 9 55 95] \n\n[(0, array([63, 87, 31, 30, 54, 93, 38, 15, 33, 12])), (1, array([25, 80, 6, 74, 79, 16, 68, 69, 3, 72])), (2, array([82, 76, 61, 83, 27, 32, 96, 43, 98, 29])), (3, array([23, 41, 10, 42, 52, 77, 65, 73, 20, 11])), (4, array([ 99, 46, 81, 13, 39, 19, 90, 85, 100, 47])), (5, array([66, 24, 4, 92, 62, 22, 7, 59, 17, 91])), (6, array([18, 1, 57, 26, 60, 78, 2, 75, 48, 88])), (7, array([64, 51, 45, 21, 44, 8, 40, 14, 67, 97])), (8, array([34, 28, 5, 71, 89, 36, 35, 50, 86, 49])), (9, array([58, 70, 84, 94, 56, 53, 37, 9, 55, 95]))] \n\nOrderedDict([(1, {6}), (2, {6}), (3, {1}), (4, {5}), (5, {8}), (6, {1}), (7, {5}), (8, {7}), (9, {9}), (10, {3}), (11, {3}), (12, {0}), (13, {4}), (14, {7}), (15, {0}), (16, {1}), (17, {5}), (18, {6}), (19, {4}), (20, {3}), (21, {7}), (22, {5}), (23, {3}), (24, {5}), (25, {1}), (26, {6}), (27, {2}), (28, {8}), (29, {2}), (30, {0}), (31, {0}), (32, {2}), (33, {0}), (34, {8}), (35, {8}), (36, {8}), (37, {9}), (38, {0}), (39, {4}), (40, {7}), (41, {3}), (42, {3}), (43, {2}), (44, {7}), (45, {7}), (46, {4}), (47, {4}), (48, {6}), (49, {8}), (50, {8}), (51, {7}), (52, {3}), (53, {9}), (54, {0}), (55, {9}), (56, {9}), (57, {6}), (58, {9}), (59, {5}), (60, {6}), (61, {2}), (62, {5}), (63, {0}), (64, {7}), (65, {3}), (66, {5}), (67, {7}), (68, {1}), (69, {1}), (70, {9}), (71, {8}), (72, {1}), (73, {3}), (74, {1}), (75, {6}), (76, {2}), (77, {3}), 
(78, {6}), (79, {1}), (80, {1}), (81, {4}), (82, {2}), (83, {2}), (84, {9}), (85, {4}), (86, {8}), (87, {0}), (88, {6}), (89, {8}), (90, {4}), (91, {5}), (92, {5}), (93, {0}), (94, {9}), (95, {9}), (96, {2}), (97, {7}), (98, {2}), (99, {4}), (100, {4})])\n\nCleaner A and cleaner B are in disctrict # [6] \n\n\nIteration 11 :\n\n [ 44 59 30 90 21 24 79 47 57 93 64 5 71 34 25 92 18 48\n 69 85 78 74 97 52 10 95 6 53 20 77 88 50 15 84 66 12\n 17 67 46 9 83 16 26 68 27 38 37 11 65 87 80 54 32 96\n 55 89 62 82 100 39 63 98 19 7 56 3 13 58 23 45 1 86\n 99 35 28 36 14 72 29 40 81 8 2 43 91 61 33 41 75 73\n 76 49 60 42 22 70 31 51 94 4] \n\n[(0, array([44, 59, 30, 90, 21, 24, 79, 47, 57, 93])), (1, array([64, 5, 71, 34, 25, 92, 18, 48, 69, 85])), (2, array([78, 74, 97, 52, 10, 95, 6, 53, 20, 77])), (3, array([88, 50, 15, 84, 66, 12, 17, 67, 46, 9])), (4, array([83, 16, 26, 68, 27, 38, 37, 11, 65, 87])), (5, array([ 80, 54, 32, 96, 55, 89, 62, 82, 100, 39])), (6, array([63, 98, 19, 7, 56, 3, 13, 58, 23, 45])), (7, array([ 1, 86, 99, 35, 28, 36, 14, 72, 29, 40])), (8, array([81, 8, 2, 43, 91, 61, 33, 41, 75, 73])), (9, array([76, 49, 60, 42, 22, 70, 31, 51, 94, 4]))] \n\nOrderedDict([(1, {7}), (2, {8}), (3, {6}), (4, {9}), (5, {1}), (6, {2}), (7, {6}), (8, {8}), (9, {3}), (10, {2}), (11, {4}), (12, {3}), (13, {6}), (14, {7}), (15, {3}), (16, {4}), (17, {3}), (18, {1}), (19, {6}), (20, {2}), (21, {0}), (22, {9}), (23, {6}), (24, {0}), (25, {1}), (26, {4}), (27, {4}), (28, {7}), (29, {7}), (30, {0}), (31, {9}), (32, {5}), (33, {8}), (34, {1}), (35, {7}), (36, {7}), (37, {4}), (38, {4}), (39, {5}), (40, {7}), (41, {8}), (42, {9}), (43, {8}), (44, {0}), (45, {6}), (46, {3}), (47, {0}), (48, {1}), (49, {9}), (50, {3}), (51, {9}), (52, {2}), (53, {2}), (54, {5}), (55, {5}), (56, {6}), (57, {0}), (58, {6}), (59, {0}), (60, {9}), (61, {8}), (62, {5}), (63, {6}), (64, {1}), (65, {4}), (66, {3}), (67, {3}), (68, {4}), (69, {1}), (70, {9}), (71, {1}), (72, {7}), (73, {8}), (74, {2}), (75, {8}), (76, {9}), (77, {2}), (78, {2}), (79, {0}), (80, {5}), (81, {8}), (82, {5}), (83, {4}), (84, {3}), (85, {1}), (86, {7}), (87, {4}), (88, {3}), (89, {5}), (90, {0}), (91, {8}), (92, {1}), (93, {0}), (94, {9}), (95, {2}), (96, {5}), (97, {2}), (98, {6}), (99, {7}), (100, {5})])\n\nNo common district for cleaners A and B \n\n\nIteration 12 :\n\n [ 87 56 15 59 69 100 3 38 86 10 96 18 55 6 32 90 97 80\n 60 67 72 33 1 50 81 20 91 42 82 53 63 54 44 62 85 47\n 64 19 41 16 25 9 78 13 11 36 24 14 34 39 48 43 23 17\n 58 68 89 37 46 73 2 77 66 8 65 4 31 94 35 93 74 99\n 79 51 75 30 76 28 40 45 7 52 83 61 49 88 70 84 57 22\n 27 26 29 71 98 12 5 95 21 92] \n\n[(0, array([ 87, 56, 15, 59, 69, 100, 3, 38, 86, 10])), (1, array([96, 18, 55, 6, 32, 90, 97, 80, 60, 67])), (2, array([72, 33, 1, 50, 81, 20, 91, 42, 82, 53])), (3, array([63, 54, 44, 62, 85, 47, 64, 19, 41, 16])), (4, array([25, 9, 78, 13, 11, 36, 24, 14, 34, 39])), (5, array([48, 43, 23, 17, 58, 68, 89, 37, 46, 73])), (6, array([ 2, 77, 66, 8, 65, 4, 31, 94, 35, 93])), (7, array([74, 99, 79, 51, 75, 30, 76, 28, 40, 45])), (8, array([ 7, 52, 83, 61, 49, 88, 70, 84, 57, 22])), (9, array([27, 26, 29, 71, 98, 12, 5, 95, 21, 92]))] \n\nOrderedDict([(1, {2}), (2, {6}), (3, {0}), (4, {6}), (5, {9}), (6, {1}), (7, {8}), (8, {6}), (9, {4}), (10, {0}), (11, {4}), (12, {9}), (13, {4}), (14, {4}), (15, {0}), (16, {3}), (17, {5}), (18, {1}), (19, {3}), (20, {2}), (21, {9}), (22, {8}), (23, {5}), (24, {4}), (25, {4}), (26, {9}), (27, {9}), (28, {7}), (29, {9}), (30, {7}), (31, {6}), (32, 
{1}), (33, {2}), (34, {4}), (35, {6}), (36, {4}), (37, {5}), (38, {0}), (39, {4}), (40, {7}), (41, {3}), (42, {2}), (43, {5}), (44, {3}), (45, {7}), (46, {5}), (47, {3}), (48, {5}), (49, {8}), (50, {2}), (51, {7}), (52, {8}), (53, {2}), (54, {3}), (55, {1}), (56, {0}), (57, {8}), (58, {5}), (59, {0}), (60, {1}), (61, {8}), (62, {3}), (63, {3}), (64, {3}), (65, {6}), (66, {6}), (67, {1}), (68, {5}), (69, {0}), (70, {8}), (71, {9}), (72, {2}), (73, {5}), (74, {7}), (75, {7}), (76, {7}), (77, {6}), (78, {4}), (79, {7}), (80, {1}), (81, {2}), (82, {2}), (83, {8}), (84, {8}), (85, {3}), (86, {0}), (87, {0}), (88, {8}), (89, {5}), (90, {1}), (91, {2}), (92, {9}), (93, {6}), (94, {6}), (95, {9}), (96, {1}), (97, {1}), (98, {9}), (99, {7}), (100, {0})])\n\nNo common district for cleaners A and B \n\n\nIteration 13 :\n\n [ 4 45 22 36 62 87 41 56 59 67 61 84 50 27 77 75 76 28\n 68 26 71 78 95 58 48 82 34 90 73 38 39 97 15 83 93 35\n 43 74 99 91 72 13 94 98 3 65 80 29 37 11 64 66 23 9\n 14 33 79 49 54 88 96 24 8 69 44 18 6 60 63 1 86 92\n 12 70 10 30 55 52 17 32 25 81 40 89 100 51 16 53 5 7\n 85 42 47 19 2 57 46 21 20 31] \n\n[(0, array([ 4, 45, 22, 36, 62, 87, 41, 56, 59, 67])), (1, array([61, 84, 50, 27, 77, 75, 76, 28, 68, 26])), (2, array([71, 78, 95, 58, 48, 82, 34, 90, 73, 38])), (3, array([39, 97, 15, 83, 93, 35, 43, 74, 99, 91])), (4, array([72, 13, 94, 98, 3, 65, 80, 29, 37, 11])), (5, array([64, 66, 23, 9, 14, 33, 79, 49, 54, 88])), (6, array([96, 24, 8, 69, 44, 18, 6, 60, 63, 1])), (7, array([86, 92, 12, 70, 10, 30, 55, 52, 17, 32])), (8, array([ 25, 81, 40, 89, 100, 51, 16, 53, 5, 7])), (9, array([85, 42, 47, 19, 2, 57, 46, 21, 20, 31]))] \n\nOrderedDict([(1, {6}), (2, {9}), (3, {4}), (4, {0}), (5, {8}), (6, {6}), (7, {8}), (8, {6}), (9, {5}), (10, {7}), (11, {4}), (12, {7}), (13, {4}), (14, {5}), (15, {3}), (16, {8}), (17, {7}), (18, {6}), (19, {9}), (20, {9}), (21, {9}), (22, {0}), (23, {5}), (24, {6}), (25, {8}), (26, {1}), (27, {1}), (28, {1}), (29, {4}), (30, {7}), (31, {9}), (32, {7}), (33, {5}), (34, {2}), (35, {3}), (36, {0}), (37, {4}), (38, {2}), (39, {3}), (40, {8}), (41, {0}), (42, {9}), (43, {3}), (44, {6}), (45, {0}), (46, {9}), (47, {9}), (48, {2}), (49, {5}), (50, {1}), (51, {8}), (52, {7}), (53, {8}), (54, {5}), (55, {7}), (56, {0}), (57, {9}), (58, {2}), (59, {0}), (60, {6}), (61, {1}), (62, {0}), (63, {6}), (64, {5}), (65, {4}), (66, {5}), (67, {0}), (68, {1}), (69, {6}), (70, {7}), (71, {2}), (72, {4}), (73, {2}), (74, {3}), (75, {1}), (76, {1}), (77, {1}), (78, {2}), (79, {5}), (80, {4}), (81, {8}), (82, {2}), (83, {3}), (84, {1}), (85, {9}), (86, {7}), (87, {0}), (88, {5}), (89, {8}), (90, {2}), (91, {3}), (92, {7}), (93, {3}), (94, {4}), (95, {2}), (96, {6}), (97, {3}), (98, {4}), (99, {3}), (100, {8})])\n\nNo common district for cleaners A and B \n\n\nIteration 14 :\n\n [ 27 21 91 65 60 80 13 32 11 25 52 44 20 93 19 62 14 71\n 64 33 68 76 88 35 72 78 45 8 6 81 83 66 54 47 23 97\n 18 38 85 34 82 56 94 42 73 51 2 7 15 24 70 67 39 96\n 100 53 79 43 17 92 22 69 90 95 40 10 59 46 50 84 61 58\n 89 86 37 74 30 29 9 63 55 31 3 77 16 4 48 41 98 5\n 26 36 57 99 49 75 87 1 28 12] \n\n[(0, array([27, 21, 91, 65, 60, 80, 13, 32, 11, 25])), (1, array([52, 44, 20, 93, 19, 62, 14, 71, 64, 33])), (2, array([68, 76, 88, 35, 72, 78, 45, 8, 6, 81])), (3, array([83, 66, 54, 47, 23, 97, 18, 38, 85, 34])), (4, array([82, 56, 94, 42, 73, 51, 2, 7, 15, 24])), (5, array([ 70, 67, 39, 96, 100, 53, 79, 43, 17, 92])), (6, array([22, 69, 90, 95, 40, 10, 59, 46, 50, 84])), (7, array([61, 58, 89, 
86, 37, 74, 30, 29, 9, 63])), (8, array([55, 31, 3, 77, 16, 4, 48, 41, 98, 5])), (9, array([26, 36, 57, 99, 49, 75, 87, 1, 28, 12]))] \n\nOrderedDict([(1, {9}), (2, {4}), (3, {8}), (4, {8}), (5, {8}), (6, {2}), (7, {4}), (8, {2}), (9, {7}), (10, {6}), (11, {0}), (12, {9}), (13, {0}), (14, {1}), (15, {4}), (16, {8}), (17, {5}), (18, {3}), (19, {1}), (20, {1}), (21, {0}), (22, {6}), (23, {3}), (24, {4}), (25, {0}), (26, {9}), (27, {0}), (28, {9}), (29, {7}), (30, {7}), (31, {8}), (32, {0}), (33, {1}), (34, {3}), (35, {2}), (36, {9}), (37, {7}), (38, {3}), (39, {5}), (40, {6}), (41, {8}), (42, {4}), (43, {5}), (44, {1}), (45, {2}), (46, {6}), (47, {3}), (48, {8}), (49, {9}), (50, {6}), (51, {4}), (52, {1}), (53, {5}), (54, {3}), (55, {8}), (56, {4}), (57, {9}), (58, {7}), (59, {6}), (60, {0}), (61, {7}), (62, {1}), (63, {7}), (64, {1}), (65, {0}), (66, {3}), (67, {5}), (68, {2}), (69, {6}), (70, {5}), (71, {1}), (72, {2}), (73, {4}), (74, {7}), (75, {9}), (76, {2}), (77, {8}), (78, {2}), (79, {5}), (80, {0}), (81, {2}), (82, {4}), (83, {3}), (84, {6}), (85, {3}), (86, {7}), (87, {9}), (88, {2}), (89, {7}), (90, {6}), (91, {0}), (92, {5}), (93, {1}), (94, {4}), (95, {6}), (96, {5}), (97, {3}), (98, {8}), (99, {9}), (100, {5})])\n\nNo common district for cleaners A and B \n\n\nIteration 15 :\n\n [ 89 26 22 32 58 19 23 94 51 5 15 85 41 33 48 11 60 43\n 62 71 31 38 46 3 74 81 45 92 40 50 18 78 68 54 84 25\n 2 88 96 82 13 20 52 17 69 63 28 70 14 57 7 67 9 73\n 100 99 8 35 61 66 55 56 53 30 12 39 44 10 37 75 34 72\n 1 98 6 65 86 64 80 83 77 21 27 49 36 91 16 42 4 97\n 76 90 29 93 59 24 87 79 95 47] \n\n[(0, array([89, 26, 22, 32, 58, 19, 23, 94, 51, 5])), (1, array([15, 85, 41, 33, 48, 11, 60, 43, 62, 71])), (2, array([31, 38, 46, 3, 74, 81, 45, 92, 40, 50])), (3, array([18, 78, 68, 54, 84, 25, 2, 88, 96, 82])), (4, array([13, 20, 52, 17, 69, 63, 28, 70, 14, 57])), (5, array([ 7, 67, 9, 73, 100, 99, 8, 35, 61, 66])), (6, array([55, 56, 53, 30, 12, 39, 44, 10, 37, 75])), (7, array([34, 72, 1, 98, 6, 65, 86, 64, 80, 83])), (8, array([77, 21, 27, 49, 36, 91, 16, 42, 4, 97])), (9, array([76, 90, 29, 93, 59, 24, 87, 79, 95, 47]))] \n\nOrderedDict([(1, {7}), (2, {3}), (3, {2}), (4, {8}), (5, {0}), (6, {7}), (7, {5}), (8, {5}), (9, {5}), (10, {6}), (11, {1}), (12, {6}), (13, {4}), (14, {4}), (15, {1}), (16, {8}), (17, {4}), (18, {3}), (19, {0}), (20, {4}), (21, {8}), (22, {0}), (23, {0}), (24, {9}), (25, {3}), (26, {0}), (27, {8}), (28, {4}), (29, {9}), (30, {6}), (31, {2}), (32, {0}), (33, {1}), (34, {7}), (35, {5}), (36, {8}), (37, {6}), (38, {2}), (39, {6}), (40, {2}), (41, {1}), (42, {8}), (43, {1}), (44, {6}), (45, {2}), (46, {2}), (47, {9}), (48, {1}), (49, {8}), (50, {2}), (51, {0}), (52, {4}), (53, {6}), (54, {3}), (55, {6}), (56, {6}), (57, {4}), (58, {0}), (59, {9}), (60, {1}), (61, {5}), (62, {1}), (63, {4}), (64, {7}), (65, {7}), (66, {5}), (67, {5}), (68, {3}), (69, {4}), (70, {4}), (71, {1}), (72, {7}), (73, {5}), (74, {2}), (75, {6}), (76, {9}), (77, {8}), (78, {3}), (79, {9}), (80, {7}), (81, {2}), (82, {3}), (83, {7}), (84, {3}), (85, {1}), (86, {7}), (87, {9}), (88, {3}), (89, {0}), (90, {9}), (91, {8}), (92, {2}), (93, {9}), (94, {0}), (95, {9}), (96, {3}), (97, {8}), (98, {7}), (99, {5}), (100, {5})])\n\nNo common district for cleaners A and B \n\n\nIteration 16 :\n\n [ 13 45 24 31 39 65 54 57 68 80 67 74 96 99 81 87 4 94\n 7 15 85 97 62 69 51 44 70 84 19 23 10 98 16 34 36 35\n 49 86 17 6 92 25 72 53 41 58 20 50 63 66 11 82 14 27\n 93 8 91 48 89 2 60 52 12 18 22 73 59 42 26 88 47 83\n 
90 5 56 71 29 64 79 77 21 28 100 33 46 32 40 38 43 78\n 55 9 75 61 1 76 37 30 95 3] \n\n[(0, array([13, 45, 24, 31, 39, 65, 54, 57, 68, 80])), (1, array([67, 74, 96, 99, 81, 87, 4, 94, 7, 15])), (2, array([85, 97, 62, 69, 51, 44, 70, 84, 19, 23])), (3, array([10, 98, 16, 34, 36, 35, 49, 86, 17, 6])), (4, array([92, 25, 72, 53, 41, 58, 20, 50, 63, 66])), (5, array([11, 82, 14, 27, 93, 8, 91, 48, 89, 2])), (6, array([60, 52, 12, 18, 22, 73, 59, 42, 26, 88])), (7, array([47, 83, 90, 5, 56, 71, 29, 64, 79, 77])), (8, array([ 21, 28, 100, 33, 46, 32, 40, 38, 43, 78])), (9, array([55, 9, 75, 61, 1, 76, 37, 30, 95, 3]))] \n\nOrderedDict([(1, {9}), (2, {5}), (3, {9}), (4, {1}), (5, {7}), (6, {3}), (7, {1}), (8, {5}), (9, {9}), (10, {3}), (11, {5}), (12, {6}), (13, {0}), (14, {5}), (15, {1}), (16, {3}), (17, {3}), (18, {6}), (19, {2}), (20, {4}), (21, {8}), (22, {6}), (23, {2}), (24, {0}), (25, {4}), (26, {6}), (27, {5}), (28, {8}), (29, {7}), (30, {9}), (31, {0}), (32, {8}), (33, {8}), (34, {3}), (35, {3}), (36, {3}), (37, {9}), (38, {8}), (39, {0}), (40, {8}), (41, {4}), (42, {6}), (43, {8}), (44, {2}), (45, {0}), (46, {8}), (47, {7}), (48, {5}), (49, {3}), (50, {4}), (51, {2}), (52, {6}), (53, {4}), (54, {0}), (55, {9}), (56, {7}), (57, {0}), (58, {4}), (59, {6}), (60, {6}), (61, {9}), (62, {2}), (63, {4}), (64, {7}), (65, {0}), (66, {4}), (67, {1}), (68, {0}), (69, {2}), (70, {2}), (71, {7}), (72, {4}), (73, {6}), (74, {1}), (75, {9}), (76, {9}), (77, {7}), (78, {8}), (79, {7}), (80, {0}), (81, {1}), (82, {5}), (83, {7}), (84, {2}), (85, {2}), (86, {3}), (87, {1}), (88, {6}), (89, {5}), (90, {7}), (91, {5}), (92, {4}), (93, {5}), (94, {1}), (95, {9}), (96, {1}), (97, {2}), (98, {3}), (99, {1}), (100, {8})])\n\nNo common district for cleaners A and B \n\n\nIteration 17 :\n\n [ 43 18 71 75 97 21 93 4 89 34 85 41 40 59 35 22 87 82\n 76 81 96 54 49 12 10 74 48 17 70 63 60 73 58 90 99 92\n 29 38 25 68 98 24 61 56 72 32 65 78 53 31 66 100 79 55\n 62 52 15 86 95 33 36 45 46 2 57 16 7 50 84 6 9 42\n 77 88 13 39 27 30 1 37 69 3 26 47 64 51 80 94 8 23\n 5 67 19 14 83 20 11 28 44 91] \n\n[(0, array([43, 18, 71, 75, 97, 21, 93, 4, 89, 34])), (1, array([85, 41, 40, 59, 35, 22, 87, 82, 76, 81])), (2, array([96, 54, 49, 12, 10, 74, 48, 17, 70, 63])), (3, array([60, 73, 58, 90, 99, 92, 29, 38, 25, 68])), (4, array([98, 24, 61, 56, 72, 32, 65, 78, 53, 31])), (5, array([ 66, 100, 79, 55, 62, 52, 15, 86, 95, 33])), (6, array([36, 45, 46, 2, 57, 16, 7, 50, 84, 6])), (7, array([ 9, 42, 77, 88, 13, 39, 27, 30, 1, 37])), (8, array([69, 3, 26, 47, 64, 51, 80, 94, 8, 23])), (9, array([ 5, 67, 19, 14, 83, 20, 11, 28, 44, 91]))] \n\nOrderedDict([(1, {7}), (2, {6}), (3, {8}), (4, {0}), (5, {9}), (6, {6}), (7, {6}), (8, {8}), (9, {7}), (10, {2}), (11, {9}), (12, {2}), (13, {7}), (14, {9}), (15, {5}), (16, {6}), (17, {2}), (18, {0}), (19, {9}), (20, {9}), (21, {0}), (22, {1}), (23, {8}), (24, {4}), (25, {3}), (26, {8}), (27, {7}), (28, {9}), (29, {3}), (30, {7}), (31, {4}), (32, {4}), (33, {5}), (34, {0}), (35, {1}), (36, {6}), (37, {7}), (38, {3}), (39, {7}), (40, {1}), (41, {1}), (42, {7}), (43, {0}), (44, {9}), (45, {6}), (46, {6}), (47, {8}), (48, {2}), (49, {2}), (50, {6}), (51, {8}), (52, {5}), (53, {4}), (54, {2}), (55, {5}), (56, {4}), (57, {6}), (58, {3}), (59, {1}), (60, {3}), (61, {4}), (62, {5}), (63, {2}), (64, {8}), (65, {4}), (66, {5}), (67, {9}), (68, {3}), (69, {8}), (70, {2}), (71, {0}), (72, {4}), (73, {3}), (74, {2}), (75, {0}), (76, {1}), (77, {7}), (78, {4}), (79, {5}), (80, {8}), (81, {1}), (82, {1}), 
(83, {9}), (84, {6}), (85, {1}), (86, {5}), (87, {1}), (88, {7}), (89, {0}), (90, {3}), (91, {9}), (92, {3}), (93, {0}), (94, {8}), (95, {5}), (96, {2}), (97, {0}), (98, {4}), (99, {3}), (100, {5})])\n\nNo common district for cleaners A and B \n\n\nIteration 18 :\n\n [ 83 19 23 24 96 36 3 95 100 63 39 75 91 1 82 43 27 32\n 86 53 2 65 88 87 60 41 28 70 64 14 44 16 37 67 61 74\n 94 56 42 47 99 55 92 69 85 38 78 17 8 72 68 52 59 84\n 20 97 5 26 31 33 12 54 77 15 79 90 58 22 81 11 76 98\n 34 46 29 80 57 18 51 50 49 62 48 45 7 89 21 6 93 30\n 35 10 66 73 25 13 71 40 9 4] \n\n[(0, array([ 83, 19, 23, 24, 96, 36, 3, 95, 100, 63])), (1, array([39, 75, 91, 1, 82, 43, 27, 32, 86, 53])), (2, array([ 2, 65, 88, 87, 60, 41, 28, 70, 64, 14])), (3, array([44, 16, 37, 67, 61, 74, 94, 56, 42, 47])), (4, array([99, 55, 92, 69, 85, 38, 78, 17, 8, 72])), (5, array([68, 52, 59, 84, 20, 97, 5, 26, 31, 33])), (6, array([12, 54, 77, 15, 79, 90, 58, 22, 81, 11])), (7, array([76, 98, 34, 46, 29, 80, 57, 18, 51, 50])), (8, array([49, 62, 48, 45, 7, 89, 21, 6, 93, 30])), (9, array([35, 10, 66, 73, 25, 13, 71, 40, 9, 4]))] \n\nOrderedDict([(1, {1}), (2, {2}), (3, {0}), (4, {9}), (5, {5}), (6, {8}), (7, {8}), (8, {4}), (9, {9}), (10, {9}), (11, {6}), (12, {6}), (13, {9}), (14, {2}), (15, {6}), (16, {3}), (17, {4}), (18, {7}), (19, {0}), (20, {5}), (21, {8}), (22, {6}), (23, {0}), (24, {0}), (25, {9}), (26, {5}), (27, {1}), (28, {2}), (29, {7}), (30, {8}), (31, {5}), (32, {1}), (33, {5}), (34, {7}), (35, {9}), (36, {0}), (37, {3}), (38, {4}), (39, {1}), (40, {9}), (41, {2}), (42, {3}), (43, {1}), (44, {3}), (45, {8}), (46, {7}), (47, {3}), (48, {8}), (49, {8}), (50, {7}), (51, {7}), (52, {5}), (53, {1}), (54, {6}), (55, {4}), (56, {3}), (57, {7}), (58, {6}), (59, {5}), (60, {2}), (61, {3}), (62, {8}), (63, {0}), (64, {2}), (65, {2}), (66, {9}), (67, {3}), (68, {5}), (69, {4}), (70, {2}), (71, {9}), (72, {4}), (73, {9}), (74, {3}), (75, {1}), (76, {7}), (77, {6}), (78, {4}), (79, {6}), (80, {7}), (81, {6}), (82, {1}), (83, {0}), (84, {5}), (85, {4}), (86, {1}), (87, {2}), (88, {2}), (89, {8}), (90, {6}), (91, {1}), (92, {4}), (93, {8}), (94, {3}), (95, {0}), (96, {0}), (97, {5}), (98, {7}), (99, {4}), (100, {0})])\n\nNo common district for cleaners A and B \n\n\nIteration 19 :\n\n [ 16 86 48 41 69 1 66 37 34 98 99 46 24 57 82 65 27 71\n 67 85 42 68 47 39 89 53 38 6 21 97 43 93 96 84 62 55\n 77 95 17 91 49 51 3 10 5 63 75 94 58 18 31 36 87 90\n 79 35 25 13 54 70 59 9 40 88 30 33 8 14 74 20 7 80\n 73 92 64 72 11 26 12 19 83 52 32 15 28 60 2 100 61 22\n 45 76 44 81 29 56 50 78 23 4] \n\n[(0, array([16, 86, 48, 41, 69, 1, 66, 37, 34, 98])), (1, array([99, 46, 24, 57, 82, 65, 27, 71, 67, 85])), (2, array([42, 68, 47, 39, 89, 53, 38, 6, 21, 97])), (3, array([43, 93, 96, 84, 62, 55, 77, 95, 17, 91])), (4, array([49, 51, 3, 10, 5, 63, 75, 94, 58, 18])), (5, array([31, 36, 87, 90, 79, 35, 25, 13, 54, 70])), (6, array([59, 9, 40, 88, 30, 33, 8, 14, 74, 20])), (7, array([ 7, 80, 73, 92, 64, 72, 11, 26, 12, 19])), (8, array([ 83, 52, 32, 15, 28, 60, 2, 100, 61, 22])), (9, array([45, 76, 44, 81, 29, 56, 50, 78, 23, 4]))] \n\nOrderedDict([(1, {0}), (2, {8}), (3, {4}), (4, {9}), (5, {4}), (6, {2}), (7, {7}), (8, {6}), (9, {6}), (10, {4}), (11, {7}), (12, {7}), (13, {5}), (14, {6}), (15, {8}), (16, {0}), (17, {3}), (18, {4}), (19, {7}), (20, {6}), (21, {2}), (22, {8}), (23, {9}), (24, {1}), (25, {5}), (26, {7}), (27, {1}), (28, {8}), (29, {9}), (30, {6}), (31, {5}), (32, {8}), (33, {6}), (34, {0}), (35, {5}), (36, {5}), (37, {0}), (38, 
{2}), (39, {2}), (40, {6}), (41, {0}), (42, {2}), (43, {3}), (44, {9}), (45, {9}), (46, {1}), (47, {2}), (48, {0}), (49, {4}), (50, {9}), (51, {4}), (52, {8}), (53, {2}), (54, {5}), (55, {3}), (56, {9}), (57, {1}), (58, {4}), (59, {6}), (60, {8}), (61, {8}), (62, {3}), (63, {4}), (64, {7}), (65, {1}), (66, {0}), (67, {1}), (68, {2}), (69, {0}), (70, {5}), (71, {1}), (72, {7}), (73, {7}), (74, {6}), (75, {4}), (76, {9}), (77, {3}), (78, {9}), (79, {5}), (80, {7}), (81, {9}), (82, {1}), (83, {8}), (84, {3}), (85, {1}), (86, {0}), (87, {5}), (88, {6}), (89, {2}), (90, {5}), (91, {3}), (92, {7}), (93, {3}), (94, {4}), (95, {3}), (96, {3}), (97, {2}), (98, {0}), (99, {1}), (100, {8})])\n\nNo common district for cleaners A and B \n\n\nIteration 20 :\n\n [ 89 100 65 6 97 1 32 53 24 44 50 68 59 99 80 45 72 17\n 76 40 91 87 4 46 37 22 21 36 61 31 49 18 57 94 5 28\n 78 35 51 63 93 41 56 77 30 85 47 86 55 19 14 54 34 75\n 67 79 90 84 74 10 20 60 43 58 27 9 70 7 73 64 81 12\n 15 71 39 33 16 42 8 95 23 98 88 52 26 83 2 92 69 29\n 96 62 25 38 13 3 48 82 11 66] \n\n[(0, array([ 89, 100, 65, 6, 97, 1, 32, 53, 24, 44])), (1, array([50, 68, 59, 99, 80, 45, 72, 17, 76, 40])), (2, array([91, 87, 4, 46, 37, 22, 21, 36, 61, 31])), (3, array([49, 18, 57, 94, 5, 28, 78, 35, 51, 63])), (4, array([93, 41, 56, 77, 30, 85, 47, 86, 55, 19])), (5, array([14, 54, 34, 75, 67, 79, 90, 84, 74, 10])), (6, array([20, 60, 43, 58, 27, 9, 70, 7, 73, 64])), (7, array([81, 12, 15, 71, 39, 33, 16, 42, 8, 95])), (8, array([23, 98, 88, 52, 26, 83, 2, 92, 69, 29])), (9, array([96, 62, 25, 38, 13, 3, 48, 82, 11, 66]))] \n\nOrderedDict([(1, {0}), (2, {8}), (3, {9}), (4, {2}), (5, {3}), (6, {0}), (7, {6}), (8, {7}), (9, {6}), (10, {5}), (11, {9}), (12, {7}), (13, {9}), (14, {5}), (15, {7}), (16, {7}), (17, {1}), (18, {3}), (19, {4}), (20, {6}), (21, {2}), (22, {2}), (23, {8}), (24, {0}), (25, {9}), (26, {8}), (27, {6}), (28, {3}), (29, {8}), (30, {4}), (31, {2}), (32, {0}), (33, {7}), (34, {5}), (35, {3}), (36, {2}), (37, {2}), (38, {9}), (39, {7}), (40, {1}), (41, {4}), (42, {7}), (43, {6}), (44, {0}), (45, {1}), (46, {2}), (47, {4}), (48, {9}), (49, {3}), (50, {1}), (51, {3}), (52, {8}), (53, {0}), (54, {5}), (55, {4}), (56, {4}), (57, {3}), (58, {6}), (59, {1}), (60, {6}), (61, {2}), (62, {9}), (63, {3}), (64, {6}), (65, {0}), (66, {9}), (67, {5}), (68, {1}), (69, {8}), (70, {6}), (71, {7}), (72, {1}), (73, {6}), (74, {5}), (75, {5}), (76, {1}), (77, {4}), (78, {3}), (79, {5}), (80, {1}), (81, {7}), (82, {9}), (83, {8}), (84, {5}), (85, {4}), (86, {4}), (87, {2}), (88, {8}), (89, {0}), (90, {5}), (91, {2}), (92, {8}), (93, {4}), (94, {3}), (95, {7}), (96, {9}), (97, {0}), (98, {8}), (99, {1}), (100, {0})])\n\nNo common district for cleaners A and B \n\n\nIteration 21 :\n\n [ 4 97 41 65 64 18 50 76 47 77 70 59 43 100 31 71 84 81\n 13 93 91 26 17 61 28 55 6 32 79 29 49 2 94 8 20 66\n 34 15 78 21 68 12 19 1 38 54 9 44 60 80 36 53 58 73\n 7 27 45 75 24 33 69 62 35 92 98 51 40 87 42 3 96 89\n 10 23 83 14 11 99 39 52 82 88 25 95 16 63 67 56 85 57\n 46 37 86 74 5 90 30 22 48 72] \n\n[(0, array([ 4, 97, 41, 65, 64, 18, 50, 76, 47, 77])), (1, array([ 70, 59, 43, 100, 31, 71, 84, 81, 13, 93])), (2, array([91, 26, 17, 61, 28, 55, 6, 32, 79, 29])), (3, array([49, 2, 94, 8, 20, 66, 34, 15, 78, 21])), (4, array([68, 12, 19, 1, 38, 54, 9, 44, 60, 80])), (5, array([36, 53, 58, 73, 7, 27, 45, 75, 24, 33])), (6, array([69, 62, 35, 92, 98, 51, 40, 87, 42, 3])), (7, array([96, 89, 10, 23, 83, 14, 11, 99, 39, 52])), (8, array([82, 88, 25, 95, 16, 63, 67, 
56, 85, 57])), (9, array([46, 37, 86, 74, 5, 90, 30, 22, 48, 72]))] \n\nOrderedDict([(1, {4}), (2, {3}), (3, {6}), (4, {0}), (5, {9}), (6, {2}), (7, {5}), (8, {3}), (9, {4}), (10, {7}), (11, {7}), (12, {4}), (13, {1}), (14, {7}), (15, {3}), (16, {8}), (17, {2}), (18, {0}), (19, {4}), (20, {3}), (21, {3}), (22, {9}), (23, {7}), (24, {5}), (25, {8}), (26, {2}), (27, {5}), (28, {2}), (29, {2}), (30, {9}), (31, {1}), (32, {2}), (33, {5}), (34, {3}), (35, {6}), (36, {5}), (37, {9}), (38, {4}), (39, {7}), (40, {6}), (41, {0}), (42, {6}), (43, {1}), (44, {4}), (45, {5}), (46, {9}), (47, {0}), (48, {9}), (49, {3}), (50, {0}), (51, {6}), (52, {7}), (53, {5}), (54, {4}), (55, {2}), (56, {8}), (57, {8}), (58, {5}), (59, {1}), (60, {4}), (61, {2}), (62, {6}), (63, {8}), (64, {0}), (65, {0}), (66, {3}), (67, {8}), (68, {4}), (69, {6}), (70, {1}), (71, {1}), (72, {9}), (73, {5}), (74, {9}), (75, {5}), (76, {0}), (77, {0}), (78, {3}), (79, {2}), (80, {4}), (81, {1}), (82, {8}), (83, {7}), (84, {1}), (85, {8}), (86, {9}), (87, {6}), (88, {8}), (89, {7}), (90, {9}), (91, {2}), (92, {6}), (93, {1}), (94, {3}), (95, {8}), (96, {7}), (97, {0}), (98, {6}), (99, {7}), (100, {1})])\n\nNo common district for cleaners A and B \n\n\nIteration 22 :\n\n [ 69 29 1 83 96 37 94 56 59 18 82 72 26 73 85 54 61 11\n 76 86 53 88 20 13 5 2 52 68 57 3 49 27 91 93 89 46\n 12 80 23 87 4 55 36 98 75 42 63 90 65 22 25 15 17 66\n 35 21 39 19 38 64 9 81 34 43 62 70 41 24 30 100 74 33\n 97 79 10 95 8 32 16 50 51 77 99 92 40 31 6 45 78 67\n 84 14 58 44 60 48 47 7 71 28] \n\n[(0, array([69, 29, 1, 83, 96, 37, 94, 56, 59, 18])), (1, array([82, 72, 26, 73, 85, 54, 61, 11, 76, 86])), (2, array([53, 88, 20, 13, 5, 2, 52, 68, 57, 3])), (3, array([49, 27, 91, 93, 89, 46, 12, 80, 23, 87])), (4, array([ 4, 55, 36, 98, 75, 42, 63, 90, 65, 22])), (5, array([25, 15, 17, 66, 35, 21, 39, 19, 38, 64])), (6, array([ 9, 81, 34, 43, 62, 70, 41, 24, 30, 100])), (7, array([74, 33, 97, 79, 10, 95, 8, 32, 16, 50])), (8, array([51, 77, 99, 92, 40, 31, 6, 45, 78, 67])), (9, array([84, 14, 58, 44, 60, 48, 47, 7, 71, 28]))] \n\nOrderedDict([(1, {0}), (2, {2}), (3, {2}), (4, {4}), (5, {2}), (6, {8}), (7, {9}), (8, {7}), (9, {6}), (10, {7}), (11, {1}), (12, {3}), (13, {2}), (14, {9}), (15, {5}), (16, {7}), (17, {5}), (18, {0}), (19, {5}), (20, {2}), (21, {5}), (22, {4}), (23, {3}), (24, {6}), (25, {5}), (26, {1}), (27, {3}), (28, {9}), (29, {0}), (30, {6}), (31, {8}), (32, {7}), (33, {7}), (34, {6}), (35, {5}), (36, {4}), (37, {0}), (38, {5}), (39, {5}), (40, {8}), (41, {6}), (42, {4}), (43, {6}), (44, {9}), (45, {8}), (46, {3}), (47, {9}), (48, {9}), (49, {3}), (50, {7}), (51, {8}), (52, {2}), (53, {2}), (54, {1}), (55, {4}), (56, {0}), (57, {2}), (58, {9}), (59, {0}), (60, {9}), (61, {1}), (62, {6}), (63, {4}), (64, {5}), (65, {4}), (66, {5}), (67, {8}), (68, {2}), (69, {0}), (70, {6}), (71, {9}), (72, {1}), (73, {1}), (74, {7}), (75, {4}), (76, {1}), (77, {8}), (78, {8}), (79, {7}), (80, {3}), (81, {6}), (82, {1}), (83, {0}), (84, {9}), (85, {1}), (86, {1}), (87, {3}), (88, {2}), (89, {3}), (90, {4}), (91, {3}), (92, {8}), (93, {3}), (94, {0}), (95, {7}), (96, {0}), (97, {7}), (98, {4}), (99, {8}), (100, {6})])\n\nNo common district for cleaners A and B \n\n\nIteration 23 :\n\n [ 20 62 66 29 40 17 35 10 81 43 8 33 63 14 38 72 69 61\n 79 56 6 97 99 83 98 36 52 50 41 74 75 90 49 58 37 47\n 78 42 64 91 54 59 46 28 24 45 48 100 27 5 94 92 2 15\n 21 18 13 93 3 53 30 85 31 73 1 65 51 7 68 9 96 80\n 25 87 82 11 95 23 12 67 55 16 19 60 71 22 32 57 44 70\n 84 76 88 77 
86 26 39 4 34 89] \n\n[(0, array([20, 62, 66, 29, 40, 17, 35, 10, 81, 43])), (1, array([ 8, 33, 63, 14, 38, 72, 69, 61, 79, 56])), (2, array([ 6, 97, 99, 83, 98, 36, 52, 50, 41, 74])), (3, array([75, 90, 49, 58, 37, 47, 78, 42, 64, 91])), (4, array([ 54, 59, 46, 28, 24, 45, 48, 100, 27, 5])), (5, array([94, 92, 2, 15, 21, 18, 13, 93, 3, 53])), (6, array([30, 85, 31, 73, 1, 65, 51, 7, 68, 9])), (7, array([96, 80, 25, 87, 82, 11, 95, 23, 12, 67])), (8, array([55, 16, 19, 60, 71, 22, 32, 57, 44, 70])), (9, array([84, 76, 88, 77, 86, 26, 39, 4, 34, 89]))] \n\nOrderedDict([(1, {6}), (2, {5}), (3, {5}), (4, {9}), (5, {4}), (6, {2}), (7, {6}), (8, {1}), (9, {6}), (10, {0}), (11, {7}), (12, {7}), (13, {5}), (14, {1}), (15, {5}), (16, {8}), (17, {0}), (18, {5}), (19, {8}), (20, {0}), (21, {5}), (22, {8}), (23, {7}), (24, {4}), (25, {7}), (26, {9}), (27, {4}), (28, {4}), (29, {0}), (30, {6}), (31, {6}), (32, {8}), (33, {1}), (34, {9}), (35, {0}), (36, {2}), (37, {3}), (38, {1}), (39, {9}), (40, {0}), (41, {2}), (42, {3}), (43, {0}), (44, {8}), (45, {4}), (46, {4}), (47, {3}), (48, {4}), (49, {3}), (50, {2}), (51, {6}), (52, {2}), (53, {5}), (54, {4}), (55, {8}), (56, {1}), (57, {8}), (58, {3}), (59, {4}), (60, {8}), (61, {1}), (62, {0}), (63, {1}), (64, {3}), (65, {6}), (66, {0}), (67, {7}), (68, {6}), (69, {1}), (70, {8}), (71, {8}), (72, {1}), (73, {6}), (74, {2}), (75, {3}), (76, {9}), (77, {9}), (78, {3}), (79, {1}), (80, {7}), (81, {0}), (82, {7}), (83, {2}), (84, {9}), (85, {6}), (86, {9}), (87, {7}), (88, {9}), (89, {9}), (90, {3}), (91, {3}), (92, {5}), (93, {5}), (94, {5}), (95, {7}), (96, {7}), (97, {2}), (98, {2}), (99, {2}), (100, {4})])\n\nNo common district for cleaners A and B \n\n\nIteration 24 :\n\n [ 60 99 46 98 54 7 30 13 53 94 39 25 38 97 67 78 89 79\n 59 49 47 80 12 45 92 77 17 5 36 50 88 69 29 71 33 19\n 82 8 32 15 1 18 85 57 75 81 43 3 2 65 37 41 44 62\n 24 48 52 27 6 83 26 84 23 61 11 87 4 21 64 76 96 28\n 10 86 56 35 22 70 73 51 72 66 42 34 9 90 93 14 91 95\n 68 55 16 63 74 20 40 100 58 31] \n\n[(0, array([60, 99, 46, 98, 54, 7, 30, 13, 53, 94])), (1, array([39, 25, 38, 97, 67, 78, 89, 79, 59, 49])), (2, array([47, 80, 12, 45, 92, 77, 17, 5, 36, 50])), (3, array([88, 69, 29, 71, 33, 19, 82, 8, 32, 15])), (4, array([ 1, 18, 85, 57, 75, 81, 43, 3, 2, 65])), (5, array([37, 41, 44, 62, 24, 48, 52, 27, 6, 83])), (6, array([26, 84, 23, 61, 11, 87, 4, 21, 64, 76])), (7, array([96, 28, 10, 86, 56, 35, 22, 70, 73, 51])), (8, array([72, 66, 42, 34, 9, 90, 93, 14, 91, 95])), (9, array([ 68, 55, 16, 63, 74, 20, 40, 100, 58, 31]))] \n\nOrderedDict([(1, {4}), (2, {4}), (3, {4}), (4, {6}), (5, {2}), (6, {5}), (7, {0}), (8, {3}), (9, {8}), (10, {7}), (11, {6}), (12, {2}), (13, {0}), (14, {8}), (15, {3}), (16, {9}), (17, {2}), (18, {4}), (19, {3}), (20, {9}), (21, {6}), (22, {7}), (23, {6}), (24, {5}), (25, {1}), (26, {6}), (27, {5}), (28, {7}), (29, {3}), (30, {0}), (31, {9}), (32, {3}), (33, {3}), (34, {8}), (35, {7}), (36, {2}), (37, {5}), (38, {1}), (39, {1}), (40, {9}), (41, {5}), (42, {8}), (43, {4}), (44, {5}), (45, {2}), (46, {0}), (47, {2}), (48, {5}), (49, {1}), (50, {2}), (51, {7}), (52, {5}), (53, {0}), (54, {0}), (55, {9}), (56, {7}), (57, {4}), (58, {9}), (59, {1}), (60, {0}), (61, {6}), (62, {5}), (63, {9}), (64, {6}), (65, {4}), (66, {8}), (67, {1}), (68, {9}), (69, {3}), (70, {7}), (71, {3}), (72, {8}), (73, {7}), (74, {9}), (75, {4}), (76, {6}), (77, {2}), (78, {1}), (79, {1}), (80, {2}), (81, {4}), (82, {3}), (83, {5}), (84, {6}), (85, {4}), (86, {7}), (87, {6}), (88, 
{3}), (89, {1}), (90, {8}), (91, {8}), (92, {2}), (93, {8}), (94, {0}), (95, {8}), (96, {7}), (97, {1}), (98, {0}), (99, {0}), (100, {9})])\n\nCleaner A and cleaner B are in district # [4] \n\n\nIteration 25 :\n\n [ 32 1 57 77 51 96 7 73 69 98 59 19 12 20 56 3 88 47\n 55 90 74 86 82 79 14 40 91 9 92 95 71 66 26 4 62 52\n 5 31 63 16 97 37 23 49 68 17 21 34 18 11 76 99 80 58\n 6 50 67 38 8 89 15 81 94 44 48 54 65 83 87 46 72 35\n 33 24 100 29 70 64 39 45 53 25 22 84 10 61 13 41 60 75\n 42 43 27 2 36 93 85 78 30 28] \n\n[(0, array([32, 1, 57, 77, 51, 96, 7, 73, 69, 98])), (1, array([59, 19, 12, 20, 56, 3, 88, 47, 55, 90])), (2, array([74, 86, 82, 79, 14, 40, 91, 9, 92, 95])), (3, array([71, 66, 26, 4, 62, 52, 5, 31, 63, 16])), (4, array([97, 37, 23, 49, 68, 17, 21, 34, 18, 11])), (5, array([76, 99, 80, 58, 6, 50, 67, 38, 8, 89])), (6, array([15, 81, 94, 44, 48, 54, 65, 83, 87, 46])), (7, array([ 72, 35, 33, 24, 100, 29, 70, 64, 39, 45])), (8, array([53, 25, 22, 84, 10, 61, 13, 41, 60, 75])), (9, array([42, 43, 27, 2, 36, 93, 85, 78, 30, 28]))] \n\nOrderedDict([(1, {0}), (2, {9}), (3, {1}), (4, {3}), (5, {3}), (6, {5}), (7, {0}), (8, {5}), (9, {2}), (10, {8}), (11, {4}), (12, {1}), (13, {8}), (14, {2}), (15, {6}), (16, {3}), (17, {4}), (18, {4}), (19, {1}), (20, {1}), (21, {4}), (22, {8}), (23, {4}), (24, {7}), (25, {8}), (26, {3}), (27, {9}), (28, {9}), (29, {7}), (30, {9}), (31, {3}), (32, {0}), (33, {7}), (34, {4}), (35, {7}), (36, {9}), (37, {4}), (38, {5}), (39, {7}), (40, {2}), (41, {8}), (42, {9}), (43, {9}), (44, {6}), (45, {7}), (46, {6}), (47, {1}), (48, {6}), (49, {4}), (50, {5}), (51, {0}), (52, {3}), (53, {8}), (54, {6}), (55, {1}), (56, {1}), (57, {0}), (58, {5}), (59, {1}), (60, {8}), (61, {8}), (62, {3}), (63, {3}), (64, {7}), (65, {6}), (66, {3}), (67, {5}), (68, {4}), (69, {0}), (70, {7}), (71, {3}), (72, {7}), (73, {0}), (74, {2}), (75, {8}), (76, {5}), (77, {0}), (78, {9}), (79, {2}), (80, {5}), (81, {6}), (82, {2}), (83, {6}), (84, {8}), (85, {9}), (86, {2}), (87, {6}), (88, {1}), (89, {5}), (90, {1}), (91, {2}), (92, {2}), (93, {9}), (94, {6}), (95, {2}), (96, {0}), (97, {4}), (98, {0}), (99, {5}), (100, {7})])\n\nNo common district for cleaners A and B \n\n\nIteration 26 :\n\n [ 92 83 28 68 64 77 76 88 32 86 43 7 10 70 67 69 47 40\n 59 11 60 42 44 9 87 38 79 91 84 4 93 18 46 97 52 81\n 75 36 16 17 98 82 33 14 66 50 45 15 90 37 1 71 31 39\n 25 29 63 27 20 6 34 22 3 48 35 49 19 23 95 8 74 85\n 21 56 51 73 62 13 89 72 94 58 2 80 24 61 30 26 96 12\n 99 54 41 57 78 53 55 5 100 65] \n\n[(0, array([92, 83, 28, 68, 64, 77, 76, 88, 32, 86])), (1, array([43, 7, 10, 70, 67, 69, 47, 40, 59, 11])), (2, array([60, 42, 44, 9, 87, 38, 79, 91, 84, 4])), (3, array([93, 18, 46, 97, 52, 81, 75, 36, 16, 17])), (4, array([98, 82, 33, 14, 66, 50, 45, 15, 90, 37])), (5, array([ 1, 71, 31, 39, 25, 29, 63, 27, 20, 6])), (6, array([34, 22, 3, 48, 35, 49, 19, 23, 95, 8])), (7, array([74, 85, 21, 56, 51, 73, 62, 13, 89, 72])), (8, array([94, 58, 2, 80, 24, 61, 30, 26, 96, 12])), (9, array([ 99, 54, 41, 57, 78, 53, 55, 5, 100, 65]))] \n\nOrderedDict([(1, {5}), (2, {8}), (3, {6}), (4, {2}), (5, {9}), (6, {5}), (7, {1}), (8, {6}), (9, {2}), (10, {1}), (11, {1}), (12, {8}), (13, {7}), (14, {4}), (15, {4}), (16, {3}), (17, {3}), (18, {3}), (19, {6}), (20, {5}), (21, {7}), (22, {6}), (23, {6}), (24, {8}), (25, {5}), (26, {8}), (27, {5}), (28, {0}), (29, {5}), (30, {8}), (31, {5}), (32, {0}), (33, {4}), (34, {6}), (35, {6}), (36, {3}), (37, {4}), (38, {2}), (39, {5}), (40, {1}), (41, {9}), (42, {2}), (43, 
{1}), (44, {2}), (45, {4}), (46, {3}), (47, {1}), (48, {6}), (49, {6}), (50, {4}), (51, {7}), (52, {3}), (53, {9}), (54, {9}), (55, {9}), (56, {7}), (57, {9}), (58, {8}), (59, {1}), (60, {2}), (61, {8}), (62, {7}), (63, {5}), (64, {0}), (65, {9}), (66, {4}), (67, {1}), (68, {0}), (69, {1}), (70, {1}), (71, {5}), (72, {7}), (73, {7}), (74, {7}), (75, {3}), (76, {0}), (77, {0}), (78, {9}), (79, {2}), (80, {8}), (81, {3}), (82, {4}), (83, {0}), (84, {2}), (85, {7}), (86, {0}), (87, {2}), (88, {0}), (89, {7}), (90, {4}), (91, {2}), (92, {0}), (93, {3}), (94, {8}), (95, {6}), (96, {8}), (97, {3}), (98, {4}), (99, {9}), (100, {9})])\n\nNo common district for cleaners A and B \n\n\nIteration 27 :\n\n [ 45 25 26 15 29 7 46 36 67 30 61 74 3 43 4 89 20 47\n 62 44 98 33 75 92 96 78 59 10 38 86 40 37 27 22 71 80\n 65 54 12 51 63 17 41 19 11 31 34 21 72 6 76 99 50 79\n 94 49 83 69 53 35 1 16 42 82 93 100 85 13 77 64 39 70\n 73 8 28 84 91 90 5 56 57 55 68 66 95 9 97 14 58 18\n 32 23 48 52 2 24 88 87 81 60] \n\n[(0, array([45, 25, 26, 15, 29, 7, 46, 36, 67, 30])), (1, array([61, 74, 3, 43, 4, 89, 20, 47, 62, 44])), (2, array([98, 33, 75, 92, 96, 78, 59, 10, 38, 86])), (3, array([40, 37, 27, 22, 71, 80, 65, 54, 12, 51])), (4, array([63, 17, 41, 19, 11, 31, 34, 21, 72, 6])), (5, array([76, 99, 50, 79, 94, 49, 83, 69, 53, 35])), (6, array([ 1, 16, 42, 82, 93, 100, 85, 13, 77, 64])), (7, array([39, 70, 73, 8, 28, 84, 91, 90, 5, 56])), (8, array([57, 55, 68, 66, 95, 9, 97, 14, 58, 18])), (9, array([32, 23, 48, 52, 2, 24, 88, 87, 81, 60]))] \n\nOrderedDict([(1, {6}), (2, {9}), (3, {1}), (4, {1}), (5, {7}), (6, {4}), (7, {0}), (8, {7}), (9, {8}), (10, {2}), (11, {4}), (12, {3}), (13, {6}), (14, {8}), (15, {0}), (16, {6}), (17, {4}), (18, {8}), (19, {4}), (20, {1}), (21, {4}), (22, {3}), (23, {9}), (24, {9}), (25, {0}), (26, {0}), (27, {3}), (28, {7}), (29, {0}), (30, {0}), (31, {4}), (32, {9}), (33, {2}), (34, {4}), (35, {5}), (36, {0}), (37, {3}), (38, {2}), (39, {7}), (40, {3}), (41, {4}), (42, {6}), (43, {1}), (44, {1}), (45, {0}), (46, {0}), (47, {1}), (48, {9}), (49, {5}), (50, {5}), (51, {3}), (52, {9}), (53, {5}), (54, {3}), (55, {8}), (56, {7}), (57, {8}), (58, {8}), (59, {2}), (60, {9}), (61, {1}), (62, {1}), (63, {4}), (64, {6}), (65, {3}), (66, {8}), (67, {0}), (68, {8}), (69, {5}), (70, {7}), (71, {3}), (72, {4}), (73, {7}), (74, {1}), (75, {2}), (76, {5}), (77, {6}), (78, {2}), (79, {5}), (80, {3}), (81, {9}), (82, {6}), (83, {5}), (84, {7}), (85, {6}), (86, {2}), (87, {9}), (88, {9}), (89, {1}), (90, {7}), (91, {7}), (92, {2}), (93, {6}), (94, {5}), (95, {8}), (96, {2}), (97, {8}), (98, {2}), (99, {5}), (100, {6})])\n\nNo common district for cleaners A and B \n\n\nIteration 28 :\n\n [ 19 59 32 80 65 9 94 88 54 63 64 84 83 97 78 68 60 72\n 7 75 49 85 23 100 8 42 47 10 69 67 96 43 41 20 52 61\n 24 4 31 34 73 18 82 95 14 93 6 22 26 58 37 12 55 66\n 56 51 16 92 13 62 53 25 15 86 30 89 5 27 1 99 17 90\n 70 50 33 35 2 87 46 36 98 81 3 57 11 40 77 38 29 91\n 48 28 45 21 79 39 76 71 74 44] \n\n[(0, array([19, 59, 32, 80, 65, 9, 94, 88, 54, 63])), (1, array([64, 84, 83, 97, 78, 68, 60, 72, 7, 75])), (2, array([ 49, 85, 23, 100, 8, 42, 47, 10, 69, 67])), (3, array([96, 43, 41, 20, 52, 61, 24, 4, 31, 34])), (4, array([73, 18, 82, 95, 14, 93, 6, 22, 26, 58])), (5, array([37, 12, 55, 66, 56, 51, 16, 92, 13, 62])), (6, array([53, 25, 15, 86, 30, 89, 5, 27, 1, 99])), (7, array([17, 90, 70, 50, 33, 35, 2, 87, 46, 36])), (8, array([98, 81, 3, 57, 11, 40, 77, 38, 29, 91])), (9, array([48, 28, 45, 21, 79, 39, 76, 
71, 74, 44]))] \n\nOrderedDict([(1, {6}), (2, {7}), (3, {8}), (4, {3}), (5, {6}), (6, {4}), (7, {1}), (8, {2}), (9, {0}), (10, {2}), (11, {8}), (12, {5}), (13, {5}), (14, {4}), (15, {6}), (16, {5}), (17, {7}), (18, {4}), (19, {0}), (20, {3}), (21, {9}), (22, {4}), (23, {2}), (24, {3}), (25, {6}), (26, {4}), (27, {6}), (28, {9}), (29, {8}), (30, {6}), (31, {3}), (32, {0}), (33, {7}), (34, {3}), (35, {7}), (36, {7}), (37, {5}), (38, {8}), (39, {9}), (40, {8}), (41, {3}), (42, {2}), (43, {3}), (44, {9}), (45, {9}), (46, {7}), (47, {2}), (48, {9}), (49, {2}), (50, {7}), (51, {5}), (52, {3}), (53, {6}), (54, {0}), (55, {5}), (56, {5}), (57, {8}), (58, {4}), (59, {0}), (60, {1}), (61, {3}), (62, {5}), (63, {0}), (64, {1}), (65, {0}), (66, {5}), (67, {2}), (68, {1}), (69, {2}), (70, {7}), (71, {9}), (72, {1}), (73, {4}), (74, {9}), (75, {1}), (76, {9}), (77, {8}), (78, {1}), (79, {9}), (80, {0}), (81, {8}), (82, {4}), (83, {1}), (84, {1}), (85, {2}), (86, {6}), (87, {7}), (88, {0}), (89, {6}), (90, {7}), (91, {8}), (92, {5}), (93, {4}), (94, {0}), (95, {4}), (96, {3}), (97, {1}), (98, {8}), (99, {6}), (100, {2})])\n\nNo common district for cleaners A and B \n\n\nIteration 29 :\n\n [ 51 92 30 66 62 9 69 57 35 60 75 77 88 96 91 8 2 4\n 50 6 10 81 71 43 39 58 67 68 97 82 20 19 38 17 7 36\n 87 95 47 5 26 54 21 64 13 3 72 61 56 37 99 32 1 80\n 85 90 14 76 46 41 25 23 40 84 45 18 11 42 15 29 94 24\n 31 86 16 93 79 63 53 49 73 28 100 33 98 34 55 44 52 48\n 74 83 12 78 22 70 89 27 59 65] \n\n[(0, array([51, 92, 30, 66, 62, 9, 69, 57, 35, 60])), (1, array([75, 77, 88, 96, 91, 8, 2, 4, 50, 6])), (2, array([10, 81, 71, 43, 39, 58, 67, 68, 97, 82])), (3, array([20, 19, 38, 17, 7, 36, 87, 95, 47, 5])), (4, array([26, 54, 21, 64, 13, 3, 72, 61, 56, 37])), (5, array([99, 32, 1, 80, 85, 90, 14, 76, 46, 41])), (6, array([25, 23, 40, 84, 45, 18, 11, 42, 15, 29])), (7, array([94, 24, 31, 86, 16, 93, 79, 63, 53, 49])), (8, array([ 73, 28, 100, 33, 98, 34, 55, 44, 52, 48])), (9, array([74, 83, 12, 78, 22, 70, 89, 27, 59, 65]))] \n\nOrderedDict([(1, {5}), (2, {1}), (3, {4}), (4, {1}), (5, {3}), (6, {1}), (7, {3}), (8, {1}), (9, {0}), (10, {2}), (11, {6}), (12, {9}), (13, {4}), (14, {5}), (15, {6}), (16, {7}), (17, {3}), (18, {6}), (19, {3}), (20, {3}), (21, {4}), (22, {9}), (23, {6}), (24, {7}), (25, {6}), (26, {4}), (27, {9}), (28, {8}), (29, {6}), (30, {0}), (31, {7}), (32, {5}), (33, {8}), (34, {8}), (35, {0}), (36, {3}), (37, {4}), (38, {3}), (39, {2}), (40, {6}), (41, {5}), (42, {6}), (43, {2}), (44, {8}), (45, {6}), (46, {5}), (47, {3}), (48, {8}), (49, {7}), (50, {1}), (51, {0}), (52, {8}), (53, {7}), (54, {4}), (55, {8}), (56, {4}), (57, {0}), (58, {2}), (59, {9}), (60, {0}), (61, {4}), (62, {0}), (63, {7}), (64, {4}), (65, {9}), (66, {0}), (67, {2}), (68, {2}), (69, {0}), (70, {9}), (71, {2}), (72, {4}), (73, {8}), (74, {9}), (75, {1}), (76, {5}), (77, {1}), (78, {9}), (79, {7}), (80, {5}), (81, {2}), (82, {2}), (83, {9}), (84, {6}), (85, {5}), (86, {7}), (87, {3}), (88, {1}), (89, {9}), (90, {5}), (91, {1}), (92, {0}), (93, {7}), (94, {7}), (95, {3}), (96, {1}), (97, {2}), (98, {8}), (99, {5}), (100, {8})])\n\nNo common district for cleaners A and B \n\n\nIteration 30 :\n\n [ 26 57 29 13 86 37 3 76 30 63 89 62 94 74 25 98 47 27\n 24 40 18 52 5 48 51 22 36 7 17 49 77 97 96 9 20 99\n 90 19 61 100 55 65 50 35 75 42 46 12 80 31 23 60 71 81\n 84 11 45 44 92 73 87 59 54 88 85 64 82 33 16 95 6 43\n 15 39 83 21 69 2 8 58 53 68 91 1 66 4 32 56 14 93\n 72 78 70 28 79 10 41 67 34 38] \n\n[(0, array([26, 57, 29, 13, 86, 
37, 3, 76, 30, 63])), (1, array([89, 62, 94, 74, 25, 98, 47, 27, 24, 40])), (2, array([18, 52, 5, 48, 51, 22, 36, 7, 17, 49])), (3, array([ 77, 97, 96, 9, 20, 99, 90, 19, 61, 100])), (4, array([55, 65, 50, 35, 75, 42, 46, 12, 80, 31])), (5, array([23, 60, 71, 81, 84, 11, 45, 44, 92, 73])), (6, array([87, 59, 54, 88, 85, 64, 82, 33, 16, 95])), (7, array([ 6, 43, 15, 39, 83, 21, 69, 2, 8, 58])), (8, array([53, 68, 91, 1, 66, 4, 32, 56, 14, 93])), (9, array([72, 78, 70, 28, 79, 10, 41, 67, 34, 38]))] \n\nOrderedDict([(1, {8}), (2, {7}), (3, {0}), (4, {8}), (5, {2}), (6, {7}), (7, {2}), (8, {7}), (9, {3}), (10, {9}), (11, {5}), (12, {4}), (13, {0}), (14, {8}), (15, {7}), (16, {6}), (17, {2}), (18, {2}), (19, {3}), (20, {3}), (21, {7}), (22, {2}), (23, {5}), (24, {1}), (25, {1}), (26, {0}), (27, {1}), (28, {9}), (29, {0}), (30, {0}), (31, {4}), (32, {8}), (33, {6}), (34, {9}), (35, {4}), (36, {2}), (37, {0}), (38, {9}), (39, {7}), (40, {1}), (41, {9}), (42, {4}), (43, {7}), (44, {5}), (45, {5}), (46, {4}), (47, {1}), (48, {2}), (49, {2}), (50, {4}), (51, {2}), (52, {2}), (53, {8}), (54, {6}), (55, {4}), (56, {8}), (57, {0}), (58, {7}), (59, {6}), (60, {5}), (61, {3}), (62, {1}), (63, {0}), (64, {6}), (65, {4}), (66, {8}), (67, {9}), (68, {8}), (69, {7}), (70, {9}), (71, {5}), (72, {9}), (73, {5}), (74, {1}), (75, {4}), (76, {0}), (77, {3}), (78, {9}), (79, {9}), (80, {4}), (81, {5}), (82, {6}), (83, {7}), (84, {5}), (85, {6}), (86, {0}), (87, {6}), (88, {6}), (89, {1}), (90, {3}), (91, {8}), (92, {5}), (93, {8}), (94, {1}), (95, {6}), (96, {3}), (97, {3}), (98, {1}), (99, {3}), (100, {3})])\n\nNo common district for cleaners A and B \n\n\nIteration 31 :\n\n [ 66 13 72 83 4 69 50 37 89 96 26 62 43 16 88 8 1 14\n 20 68 60 86 56 54 44 63 75 38 23 64 82 55 9 73 77 46\n 81 95 65 5 19 41 90 52 42 18 76 85 21 30 27 45 80 15\n 49 29 17 94 36 12 39 2 35 71 51 61 47 74 6 10 32 7\n 92 59 11 53 91 84 99 70 97 33 48 58 40 87 93 22 57 31\n 78 98 28 25 79 100 34 67 3 24] \n\n[(0, array([66, 13, 72, 83, 4, 69, 50, 37, 89, 96])), (1, array([26, 62, 43, 16, 88, 8, 1, 14, 20, 68])), (2, array([60, 86, 56, 54, 44, 63, 75, 38, 23, 64])), (3, array([82, 55, 9, 73, 77, 46, 81, 95, 65, 5])), (4, array([19, 41, 90, 52, 42, 18, 76, 85, 21, 30])), (5, array([27, 45, 80, 15, 49, 29, 17, 94, 36, 12])), (6, array([39, 2, 35, 71, 51, 61, 47, 74, 6, 10])), (7, array([32, 7, 92, 59, 11, 53, 91, 84, 99, 70])), (8, array([97, 33, 48, 58, 40, 87, 93, 22, 57, 31])), (9, array([ 78, 98, 28, 25, 79, 100, 34, 67, 3, 24]))] \n\nOrderedDict([(1, {1}), (2, {6}), (3, {9}), (4, {0}), (5, {3}), (6, {6}), (7, {7}), (8, {1}), (9, {3}), (10, {6}), (11, {7}), (12, {5}), (13, {0}), (14, {1}), (15, {5}), (16, {1}), (17, {5}), (18, {4}), (19, {4}), (20, {1}), (21, {4}), (22, {8}), (23, {2}), (24, {9}), (25, {9}), (26, {1}), (27, {5}), (28, {9}), (29, {5}), (30, {4}), (31, {8}), (32, {7}), (33, {8}), (34, {9}), (35, {6}), (36, {5}), (37, {0}), (38, {2}), (39, {6}), (40, {8}), (41, {4}), (42, {4}), (43, {1}), (44, {2}), (45, {5}), (46, {3}), (47, {6}), (48, {8}), (49, {5}), (50, {0}), (51, {6}), (52, {4}), (53, {7}), (54, {2}), (55, {3}), (56, {2}), (57, {8}), (58, {8}), (59, {7}), (60, {2}), (61, {6}), (62, {1}), (63, {2}), (64, {2}), (65, {3}), (66, {0}), (67, {9}), (68, {1}), (69, {0}), (70, {7}), (71, {6}), (72, {0}), (73, {3}), (74, {6}), (75, {2}), (76, {4}), (77, {3}), (78, {9}), (79, {9}), (80, {5}), (81, {3}), (82, {3}), (83, {0}), (84, {7}), (85, {4}), (86, {2}), (87, {8}), (88, {1}), (89, {0}), (90, {4}), (91, {7}), (92, {7}), (93, {8}), 
(94, {5}), (95, {3}), (96, {0}), (97, {8}), (98, {9}), (99, {7}), (100, {9})])\n\nNo common district for cleaners A and B \n\n\nIteration 32 :\n\n [ 3 24 74 61 97 8 36 64 18 69 21 65 6 54 96 63 17 72\n 25 33 16 22 91 12 62 79 26 2 81 75 7 52 78 14 94 56\n 11 4 89 34 90 43 57 49 10 1 9 84 82 48 73 20 51 40\n 46 76 59 30 85 13 100 32 86 47 15 19 39 58 41 70 77 80\n 44 95 35 38 23 55 5 50 92 60 88 53 93 68 98 83 28 27\n 67 31 29 87 42 37 71 45 99 66] \n\n[(0, array([ 3, 24, 74, 61, 97, 8, 36, 64, 18, 69])), (1, array([21, 65, 6, 54, 96, 63, 17, 72, 25, 33])), (2, array([16, 22, 91, 12, 62, 79, 26, 2, 81, 75])), (3, array([ 7, 52, 78, 14, 94, 56, 11, 4, 89, 34])), (4, array([90, 43, 57, 49, 10, 1, 9, 84, 82, 48])), (5, array([73, 20, 51, 40, 46, 76, 59, 30, 85, 13])), (6, array([100, 32, 86, 47, 15, 19, 39, 58, 41, 70])), (7, array([77, 80, 44, 95, 35, 38, 23, 55, 5, 50])), (8, array([92, 60, 88, 53, 93, 68, 98, 83, 28, 27])), (9, array([67, 31, 29, 87, 42, 37, 71, 45, 99, 66]))] \n\nOrderedDict([(1, {4}), (2, {2}), (3, {0}), (4, {3}), (5, {7}), (6, {1}), (7, {3}), (8, {0}), (9, {4}), (10, {4}), (11, {3}), (12, {2}), (13, {5}), (14, {3}), (15, {6}), (16, {2}), (17, {1}), (18, {0}), (19, {6}), (20, {5}), (21, {1}), (22, {2}), (23, {7}), (24, {0}), (25, {1}), (26, {2}), (27, {8}), (28, {8}), (29, {9}), (30, {5}), (31, {9}), (32, {6}), (33, {1}), (34, {3}), (35, {7}), (36, {0}), (37, {9}), (38, {7}), (39, {6}), (40, {5}), (41, {6}), (42, {9}), (43, {4}), (44, {7}), (45, {9}), (46, {5}), (47, {6}), (48, {4}), (49, {4}), (50, {7}), (51, {5}), (52, {3}), (53, {8}), (54, {1}), (55, {7}), (56, {3}), (57, {4}), (58, {6}), (59, {5}), (60, {8}), (61, {0}), (62, {2}), (63, {1}), (64, {0}), (65, {1}), (66, {9}), (67, {9}), (68, {8}), (69, {0}), (70, {6}), (71, {9}), (72, {1}), (73, {5}), (74, {0}), (75, {2}), (76, {5}), (77, {7}), (78, {3}), (79, {2}), (80, {7}), (81, {2}), (82, {4}), (83, {8}), (84, {4}), (85, {5}), (86, {6}), (87, {9}), (88, {8}), (89, {3}), (90, {4}), (91, {2}), (92, {8}), (93, {8}), (94, {3}), (95, {7}), (96, {1}), (97, {0}), (98, {8}), (99, {9}), (100, {6})])\n\nNo common district for cleaners A and B \n\n\nIteration 33 :\n\n [ 94 100 74 21 58 99 65 67 72 34 64 35 48 98 15 27 9 71\n 82 38 11 88 45 44 36 73 23 57 97 10 85 33 12 89 50 8\n 86 20 4 59 92 46 66 6 3 37 16 28 53 79 75 2 78 63\n 7 25 1 56 87 24 77 70 5 55 19 43 39 52 95 96 42 22\n 93 69 17 49 76 54 83 84 31 51 26 81 60 47 14 40 61 90\n 32 30 13 62 41 29 68 80 91 18] \n\n[(0, array([ 94, 100, 74, 21, 58, 99, 65, 67, 72, 34])), (1, array([64, 35, 48, 98, 15, 27, 9, 71, 82, 38])), (2, array([11, 88, 45, 44, 36, 73, 23, 57, 97, 10])), (3, array([85, 33, 12, 89, 50, 8, 86, 20, 4, 59])), (4, array([92, 46, 66, 6, 3, 37, 16, 28, 53, 79])), (5, array([75, 2, 78, 63, 7, 25, 1, 56, 87, 24])), (6, array([77, 70, 5, 55, 19, 43, 39, 52, 95, 96])), (7, array([42, 22, 93, 69, 17, 49, 76, 54, 83, 84])), (8, array([31, 51, 26, 81, 60, 47, 14, 40, 61, 90])), (9, array([32, 30, 13, 62, 41, 29, 68, 80, 91, 18]))] \n\nOrderedDict([(1, {5}), (2, {5}), (3, {4}), (4, {3}), (5, {6}), (6, {4}), (7, {5}), (8, {3}), (9, {1}), (10, {2}), (11, {2}), (12, {3}), (13, {9}), (14, {8}), (15, {1}), (16, {4}), (17, {7}), (18, {9}), (19, {6}), (20, {3}), (21, {0}), (22, {7}), (23, {2}), (24, {5}), (25, {5}), (26, {8}), (27, {1}), (28, {4}), (29, {9}), (30, {9}), (31, {8}), (32, {9}), (33, {3}), (34, {0}), (35, {1}), (36, {2}), (37, {4}), (38, {1}), (39, {6}), (40, {8}), (41, {9}), (42, {7}), (43, {6}), (44, {2}), (45, {2}), (46, {4}), (47, {8}), (48, {1}), (49, 
{7}), (50, {3}), (51, {8}), (52, {6}), (53, {4}), (54, {7}), (55, {6}), (56, {5}), (57, {2}), (58, {0}), (59, {3}), (60, {8}), (61, {8}), (62, {9}), (63, {5}), (64, {1}), (65, {0}), (66, {4}), (67, {0}), (68, {9}), (69, {7}), (70, {6}), (71, {1}), (72, {0}), (73, {2}), (74, {0}), (75, {5}), (76, {7}), (77, {6}), (78, {5}), (79, {4}), (80, {9}), (81, {8}), (82, {1}), (83, {7}), (84, {7}), (85, {3}), (86, {3}), (87, {5}), (88, {2}), (89, {3}), (90, {8}), (91, {9}), (92, {4}), (93, {7}), (94, {0}), (95, {6}), (96, {6}), (97, {2}), (98, {1}), (99, {0}), (100, {0})])\n\nCleaner A and cleaner B are in district # [5] \n\n\nIteration 34 :\n\n [ 6 50 90 78 73 80 46 60 64 20 74 23 3 96 45 93 22 47\n 61 68 31 8 62 72 54 69 59 19 17 9 5 91 12 83 34 63\n 99 98 75 10 48 13 4 25 33 77 95 44 28 53 38 16 57 65\n 15 87 100 11 92 2 70 52 43 55 30 82 14 42 49 21 37 67\n 7 40 94 24 79 27 36 89 84 26 41 29 1 66 97 81 88 85\n 32 86 51 35 18 58 76 39 71 56] \n\n[(0, array([ 6, 50, 90, 78, 73, 80, 46, 60, 64, 20])), (1, array([74, 23, 3, 96, 45, 93, 22, 47, 61, 68])), (2, array([31, 8, 62, 72, 54, 69, 59, 19, 17, 9])), (3, array([ 5, 91, 12, 83, 34, 63, 99, 98, 75, 10])), (4, array([48, 13, 4, 25, 33, 77, 95, 44, 28, 53])), (5, array([ 38, 16, 57, 65, 15, 87, 100, 11, 92, 2])), (6, array([70, 52, 43, 55, 30, 82, 14, 42, 49, 21])), (7, array([37, 67, 7, 40, 94, 24, 79, 27, 36, 89])), (8, array([84, 26, 41, 29, 1, 66, 97, 81, 88, 85])), (9, array([32, 86, 51, 35, 18, 58, 76, 39, 71, 56]))] \n\nOrderedDict([(1, {8}), (2, {5}), (3, {1}), (4, {4}), (5, {3}), (6, {0}), (7, {7}), (8, {2}), (9, {2}), (10, {3}), (11, {5}), (12, {3}), (13, {4}), (14, {6}), (15, {5}), (16, {5}), (17, {2}), (18, {9}), (19, {2}), (20, {0}), (21, {6}), (22, {1}), (23, {1}), (24, {7}), (25, {4}), (26, {8}), (27, {7}), (28, {4}), (29, {8}), (30, {6}), (31, {2}), (32, {9}), (33, {4}), (34, {3}), (35, {9}), (36, {7}), (37, {7}), (38, {5}), (39, {9}), (40, {7}), (41, {8}), (42, {6}), (43, {6}), (44, {4}), (45, {1}), (46, {0}), (47, {1}), (48, {4}), (49, {6}), (50, {0}), (51, {9}), (52, {6}), (53, {4}), (54, {2}), (55, {6}), (56, {9}), (57, {5}), (58, {9}), (59, {2}), (60, {0}), (61, {1}), (62, {2}), (63, {3}), (64, {0}), (65, {5}), (66, {8}), (67, {7}), (68, {1}), (69, {2}), (70, {6}), (71, {9}), (72, {2}), (73, {0}), (74, {1}), (75, {3}), (76, {9}), (77, {4}), (78, {0}), (79, {7}), (80, {0}), (81, {8}), (82, {6}), (83, {3}), (84, {8}), (85, {8}), (86, {9}), (87, {5}), (88, {8}), (89, {7}), (90, {0}), (91, {3}), (92, {5}), (93, {1}), (94, {7}), (95, {4}), (96, {1}), (97, {8}), (98, {3}), (99, {3}), (100, {5})])\n\nNo common district for cleaners A and B \n\n\nIteration 35 :\n\n [ 2 86 67 33 61 51 58 87 80 26 94 68 40 41 21 8 28 13\n 46 88 92 17 49 43 74 70 23 39 66 30 85 5 12 97 62 99\n 93 19 3 69 50 55 9 20 78 35 1 44 82 100 98 18 95 34\n 31 89 60 38 45 14 52 7 15 96 42 57 54 79 71 65 4 16\n 76 29 24 64 11 27 59 36 53 56 81 84 22 37 32 77 10 48\n 83 72 91 6 75 47 73 63 25 90] \n\n[(0, array([ 2, 86, 67, 33, 61, 51, 58, 87, 80, 26])), (1, array([94, 68, 40, 41, 21, 8, 28, 13, 46, 88])), (2, array([92, 17, 49, 43, 74, 70, 23, 39, 66, 30])), (3, array([85, 5, 12, 97, 62, 99, 93, 19, 3, 69])), (4, array([ 50, 55, 9, 20, 78, 35, 1, 44, 82, 100])), (5, array([98, 18, 95, 34, 31, 89, 60, 38, 45, 14])), (6, array([52, 7, 15, 96, 42, 57, 54, 79, 71, 65])), (7, array([ 4, 16, 76, 29, 24, 64, 11, 27, 59, 36])), (8, array([53, 56, 81, 84, 22, 37, 32, 77, 10, 48])), (9, array([83, 72, 91, 6, 75, 47, 73, 63, 25, 90]))] \n\nOrderedDict([(1, {4}), (2, {0}), (3, 
{3}), (4, {7}), (5, {3}), (6, {9}), (7, {6}), (8, {1}), (9, {4}), (10, {8}), (11, {7}), (12, {3}), (13, {1}), (14, {5}), (15, {6}), (16, {7}), (17, {2}), (18, {5}), (19, {3}), (20, {4}), (21, {1}), (22, {8}), (23, {2}), (24, {7}), (25, {9}), (26, {0}), (27, {7}), (28, {1}), (29, {7}), (30, {2}), (31, {5}), (32, {8}), (33, {0}), (34, {5}), (35, {4}), (36, {7}), (37, {8}), (38, {5}), (39, {2}), (40, {1}), (41, {1}), (42, {6}), (43, {2}), (44, {4}), (45, {5}), (46, {1}), (47, {9}), (48, {8}), (49, {2}), (50, {4}), (51, {0}), (52, {6}), (53, {8}), (54, {6}), (55, {4}), (56, {8}), (57, {6}), (58, {0}), (59, {7}), (60, {5}), (61, {0}), (62, {3}), (63, {9}), (64, {7}), (65, {6}), (66, {2}), (67, {0}), (68, {1}), (69, {3}), (70, {2}), (71, {6}), (72, {9}), (73, {9}), (74, {2}), (75, {9}), (76, {7}), (77, {8}), (78, {4}), (79, {6}), (80, {0}), (81, {8}), (82, {4}), (83, {9}), (84, {8}), (85, {3}), (86, {0}), (87, {0}), (88, {1}), (89, {5}), (90, {9}), (91, {9}), (92, {2}), (93, {3}), (94, {1}), (95, {5}), (96, {6}), (97, {3}), (98, {5}), (99, {3}), (100, {4})])\n\nNo common district for cleaners A and B \n\n\nIteration 36 :\n\n [100 62 91 99 2 78 72 26 67 71 28 77 79 44 60 65 53 27\n 29 39 42 9 57 58 48 18 6 63 94 85 13 73 19 92 40 96\n 68 38 81 45 3 59 35 93 16 89 31 82 47 21 14 34 90 76\n 46 25 11 66 95 50 51 32 15 36 56 43 17 23 70 5 86 69\n 98 52 49 4 41 8 10 61 1 87 37 54 7 22 12 75 88 97\n 20 84 83 74 30 33 64 80 55 24] \n\n[(0, array([100, 62, 91, 99, 2, 78, 72, 26, 67, 71])), (1, array([28, 77, 79, 44, 60, 65, 53, 27, 29, 39])), (2, array([42, 9, 57, 58, 48, 18, 6, 63, 94, 85])), (3, array([13, 73, 19, 92, 40, 96, 68, 38, 81, 45])), (4, array([ 3, 59, 35, 93, 16, 89, 31, 82, 47, 21])), (5, array([14, 34, 90, 76, 46, 25, 11, 66, 95, 50])), (6, array([51, 32, 15, 36, 56, 43, 17, 23, 70, 5])), (7, array([86, 69, 98, 52, 49, 4, 41, 8, 10, 61])), (8, array([ 1, 87, 37, 54, 7, 22, 12, 75, 88, 97])), (9, array([20, 84, 83, 74, 30, 33, 64, 80, 55, 24]))] \n\nOrderedDict([(1, {8}), (2, {0}), (3, {4}), (4, {7}), (5, {6}), (6, {2}), (7, {8}), (8, {7}), (9, {2}), (10, {7}), (11, {5}), (12, {8}), (13, {3}), (14, {5}), (15, {6}), (16, {4}), (17, {6}), (18, {2}), (19, {3}), (20, {9}), (21, {4}), (22, {8}), (23, {6}), (24, {9}), (25, {5}), (26, {0}), (27, {1}), (28, {1}), (29, {1}), (30, {9}), (31, {4}), (32, {6}), (33, {9}), (34, {5}), (35, {4}), (36, {6}), (37, {8}), (38, {3}), (39, {1}), (40, {3}), (41, {7}), (42, {2}), (43, {6}), (44, {1}), (45, {3}), (46, {5}), (47, {4}), (48, {2}), (49, {7}), (50, {5}), (51, {6}), (52, {7}), (53, {1}), (54, {8}), (55, {9}), (56, {6}), (57, {2}), (58, {2}), (59, {4}), (60, {1}), (61, {7}), (62, {0}), (63, {2}), (64, {9}), (65, {1}), (66, {5}), (67, {0}), (68, {3}), (69, {7}), (70, {6}), (71, {0}), (72, {0}), (73, {3}), (74, {9}), (75, {8}), (76, {5}), (77, {1}), (78, {0}), (79, {1}), (80, {9}), (81, {3}), (82, {4}), (83, {9}), (84, {9}), (85, {2}), (86, {7}), (87, {8}), (88, {8}), (89, {4}), (90, {5}), (91, {0}), (92, {3}), (93, {4}), (94, {2}), (95, {5}), (96, {3}), (97, {8}), (98, {7}), (99, {0}), (100, {0})])\n\nNo common district for cleaners A and B \n\n\nIteration 37 :\n\n [ 39 48 24 57 2 65 53 55 41 60 42 11 78 44 36 71 100 31\n 17 76 85 70 93 80 49 91 29 69 8 50 18 32 73 59 81 74\n 5 1 21 15 37 89 22 4 96 34 10 97 46 26 25 23 16 30\n 64 92 77 47 33 72 12 84 28 35 58 66 52 67 7 79 88 61\n 38 95 94 90 40 3 19 54 51 87 68 45 13 99 9 75 63 62\n 86 6 14 43 20 82 83 98 27 56] \n\n[(0, array([39, 48, 24, 57, 2, 65, 53, 55, 41, 60])), (1, array([ 42, 11, 78, 44, 36, 
71, 100, 31, 17, 76])), (2, array([85, 70, 93, 80, 49, 91, 29, 69, 8, 50])), (3, array([18, 32, 73, 59, 81, 74, 5, 1, 21, 15])), (4, array([37, 89, 22, 4, 96, 34, 10, 97, 46, 26])), (5, array([25, 23, 16, 30, 64, 92, 77, 47, 33, 72])), (6, array([12, 84, 28, 35, 58, 66, 52, 67, 7, 79])), (7, array([88, 61, 38, 95, 94, 90, 40, 3, 19, 54])), (8, array([51, 87, 68, 45, 13, 99, 9, 75, 63, 62])), (9, array([86, 6, 14, 43, 20, 82, 83, 98, 27, 56]))] \n\nOrderedDict([(1, {3}), (2, {0}), (3, {7}), (4, {4}), (5, {3}), (6, {9}), (7, {6}), (8, {2}), (9, {8}), (10, {4}), (11, {1}), (12, {6}), (13, {8}), (14, {9}), (15, {3}), (16, {5}), (17, {1}), (18, {3}), (19, {7}), (20, {9}), (21, {3}), (22, {4}), (23, {5}), (24, {0}), (25, {5}), (26, {4}), (27, {9}), (28, {6}), (29, {2}), (30, {5}), (31, {1}), (32, {3}), (33, {5}), (34, {4}), (35, {6}), (36, {1}), (37, {4}), (38, {7}), (39, {0}), (40, {7}), (41, {0}), (42, {1}), (43, {9}), (44, {1}), (45, {8}), (46, {4}), (47, {5}), (48, {0}), (49, {2}), (50, {2}), (51, {8}), (52, {6}), (53, {0}), (54, {7}), (55, {0}), (56, {9}), (57, {0}), (58, {6}), (59, {3}), (60, {0}), (61, {7}), (62, {8}), (63, {8}), (64, {5}), (65, {0}), (66, {6}), (67, {6}), (68, {8}), (69, {2}), (70, {2}), (71, {1}), (72, {5}), (73, {3}), (74, {3}), (75, {8}), (76, {1}), (77, {5}), (78, {1}), (79, {6}), (80, {2}), (81, {3}), (82, {9}), (83, {9}), (84, {6}), (85, {2}), (86, {9}), (87, {8}), (88, {7}), (89, {4}), (90, {7}), (91, {2}), (92, {5}), (93, {2}), (94, {7}), (95, {7}), (96, {4}), (97, {4}), (98, {9}), (99, {8}), (100, {1})])\n\nNo common district for cleaners A and B \n\n\nIteration 38 :\n\n [ 60 99 11 26 96 86 29 8 9 33 89 88 63 83 52 75 18 39\n 50 91 68 46 80 67 16 53 49 78 55 72 84 69 71 34 59 73\n 58 77 17 32 47 70 51 5 56 57 13 12 30 14 25 36 82 62\n 94 61 76 6 74 92 27 1 48 42 28 45 10 100 66 79 2 44\n 35 37 81 24 22 15 20 38 85 21 7 98 64 3 19 31 43 87\n 97 90 40 23 54 95 41 93 4 65] \n\n[(0, array([60, 99, 11, 26, 96, 86, 29, 8, 9, 33])), (1, array([89, 88, 63, 83, 52, 75, 18, 39, 50, 91])), (2, array([68, 46, 80, 67, 16, 53, 49, 78, 55, 72])), (3, array([84, 69, 71, 34, 59, 73, 58, 77, 17, 32])), (4, array([47, 70, 51, 5, 56, 57, 13, 12, 30, 14])), (5, array([25, 36, 82, 62, 94, 61, 76, 6, 74, 92])), (6, array([ 27, 1, 48, 42, 28, 45, 10, 100, 66, 79])), (7, array([ 2, 44, 35, 37, 81, 24, 22, 15, 20, 38])), (8, array([85, 21, 7, 98, 64, 3, 19, 31, 43, 87])), (9, array([97, 90, 40, 23, 54, 95, 41, 93, 4, 65]))] \n\nOrderedDict([(1, {6}), (2, {7}), (3, {8}), (4, {9}), (5, {4}), (6, {5}), (7, {8}), (8, {0}), (9, {0}), (10, {6}), (11, {0}), (12, {4}), (13, {4}), (14, {4}), (15, {7}), (16, {2}), (17, {3}), (18, {1}), (19, {8}), (20, {7}), (21, {8}), (22, {7}), (23, {9}), (24, {7}), (25, {5}), (26, {0}), (27, {6}), (28, {6}), (29, {0}), (30, {4}), (31, {8}), (32, {3}), (33, {0}), (34, {3}), (35, {7}), (36, {5}), (37, {7}), (38, {7}), (39, {1}), (40, {9}), (41, {9}), (42, {6}), (43, {8}), (44, {7}), (45, {6}), (46, {2}), (47, {4}), (48, {6}), (49, {2}), (50, {1}), (51, {4}), (52, {1}), (53, {2}), (54, {9}), (55, {2}), (56, {4}), (57, {4}), (58, {3}), (59, {3}), (60, {0}), (61, {5}), (62, {5}), (63, {1}), (64, {8}), (65, {9}), (66, {6}), (67, {2}), (68, {2}), (69, {3}), (70, {4}), (71, {3}), (72, {2}), (73, {3}), (74, {5}), (75, {1}), (76, {5}), (77, {3}), (78, {2}), (79, {6}), (80, {2}), (81, {7}), (82, {5}), (83, {1}), (84, {3}), (85, {8}), (86, {0}), (87, {8}), (88, {1}), (89, {1}), (90, {9}), (91, {1}), (92, {5}), (93, {9}), (94, {5}), (95, {9}), (96, {0}), (97, {9}), (98, 
{8}), (99, {0}), (100, {6})])\n\nNo common district for cleaners A and B \n\n\nIteration 39 :\n\n [ 3 43 85 83 33 29 49 11 30 26 66 78 56 86 91 27 51 90\n 42 60 68 72 97 15 32 44 2 47 28 76 24 52 88 21 75 25\n 80 57 39 34 4 58 96 92 31 17 5 19 45 50 82 100 7 87\n 79 46 63 95 74 54 14 9 93 84 99 40 35 38 16 22 81 6\n 23 18 71 77 65 53 59 20 13 70 73 1 62 98 41 89 37 64\n 10 8 12 55 67 61 36 94 69 48] \n\n[(0, array([ 3, 43, 85, 83, 33, 29, 49, 11, 30, 26])), (1, array([66, 78, 56, 86, 91, 27, 51, 90, 42, 60])), (2, array([68, 72, 97, 15, 32, 44, 2, 47, 28, 76])), (3, array([24, 52, 88, 21, 75, 25, 80, 57, 39, 34])), (4, array([ 4, 58, 96, 92, 31, 17, 5, 19, 45, 50])), (5, array([ 82, 100, 7, 87, 79, 46, 63, 95, 74, 54])), (6, array([14, 9, 93, 84, 99, 40, 35, 38, 16, 22])), (7, array([81, 6, 23, 18, 71, 77, 65, 53, 59, 20])), (8, array([13, 70, 73, 1, 62, 98, 41, 89, 37, 64])), (9, array([10, 8, 12, 55, 67, 61, 36, 94, 69, 48]))] \n\nOrderedDict([(1, {8}), (2, {2}), (3, {0}), (4, {4}), (5, {4}), (6, {7}), (7, {5}), (8, {9}), (9, {6}), (10, {9}), (11, {0}), (12, {9}), (13, {8}), (14, {6}), (15, {2}), (16, {6}), (17, {4}), (18, {7}), (19, {4}), (20, {7}), (21, {3}), (22, {6}), (23, {7}), (24, {3}), (25, {3}), (26, {0}), (27, {1}), (28, {2}), (29, {0}), (30, {0}), (31, {4}), (32, {2}), (33, {0}), (34, {3}), (35, {6}), (36, {9}), (37, {8}), (38, {6}), (39, {3}), (40, {6}), (41, {8}), (42, {1}), (43, {0}), (44, {2}), (45, {4}), (46, {5}), (47, {2}), (48, {9}), (49, {0}), (50, {4}), (51, {1}), (52, {3}), (53, {7}), (54, {5}), (55, {9}), (56, {1}), (57, {3}), (58, {4}), (59, {7}), (60, {1}), (61, {9}), (62, {8}), (63, {5}), (64, {8}), (65, {7}), (66, {1}), (67, {9}), (68, {2}), (69, {9}), (70, {8}), (71, {7}), (72, {2}), (73, {8}), (74, {5}), (75, {3}), (76, {2}), (77, {7}), (78, {1}), (79, {5}), (80, {3}), (81, {7}), (82, {5}), (83, {0}), (84, {6}), (85, {0}), (86, {1}), (87, {5}), (88, {3}), (89, {8}), (90, {1}), (91, {1}), (92, {4}), (93, {6}), (94, {9}), (95, {5}), (96, {4}), (97, {2}), (98, {8}), (99, {6}), (100, {5})])\n\nNo common district for cleaners A and B \n\n\nIteration 40 :\n\n [ 36 15 6 82 13 23 28 16 35 90 51 84 60 11 66 10 62 12\n 92 94 67 37 76 20 5 93 48 29 7 99 14 9 58 22 73 69\n 61 8 59 78 50 96 42 32 54 88 57 30 2 41 1 64 38 33\n 45 83 3 95 80 44 100 68 31 40 87 70 65 72 97 43 27 85\n 17 53 75 77 25 34 81 89 91 39 18 26 24 63 98 74 49 79\n 56 47 52 86 55 71 4 46 19 21] \n\n[(0, array([36, 15, 6, 82, 13, 23, 28, 16, 35, 90])), (1, array([51, 84, 60, 11, 66, 10, 62, 12, 92, 94])), (2, array([67, 37, 76, 20, 5, 93, 48, 29, 7, 99])), (3, array([14, 9, 58, 22, 73, 69, 61, 8, 59, 78])), (4, array([50, 96, 42, 32, 54, 88, 57, 30, 2, 41])), (5, array([ 1, 64, 38, 33, 45, 83, 3, 95, 80, 44])), (6, array([100, 68, 31, 40, 87, 70, 65, 72, 97, 43])), (7, array([27, 85, 17, 53, 75, 77, 25, 34, 81, 89])), (8, array([91, 39, 18, 26, 24, 63, 98, 74, 49, 79])), (9, array([56, 47, 52, 86, 55, 71, 4, 46, 19, 21]))] \n\nOrderedDict([(1, {5}), (2, {4}), (3, {5}), (4, {9}), (5, {2}), (6, {0}), (7, {2}), (8, {3}), (9, {3}), (10, {1}), (11, {1}), (12, {1}), (13, {0}), (14, {3}), (15, {0}), (16, {0}), (17, {7}), (18, {8}), (19, {9}), (20, {2}), (21, {9}), (22, {3}), (23, {0}), (24, {8}), (25, {7}), (26, {8}), (27, {7}), (28, {0}), (29, {2}), (30, {4}), (31, {6}), (32, {4}), (33, {5}), (34, {7}), (35, {0}), (36, {0}), (37, {2}), (38, {5}), (39, {8}), (40, {6}), (41, {4}), (42, {4}), (43, {6}), (44, {5}), (45, {5}), (46, {9}), (47, {9}), (48, {2}), (49, {8}), (50, {4}), (51, {1}), (52, {9}), (53, {7}), 
(54, {4}), (55, {9}), (56, {9}), (57, {4}), (58, {3}), (59, {3}), (60, {1}), (61, {3}), (62, {1}), (63, {8}), (64, {5}), (65, {6}), (66, {1}), (67, {2}), (68, {6}), (69, {3}), (70, {6}), (71, {9}), (72, {6}), (73, {3}), (74, {8}), (75, {7}), (76, {2}), (77, {7}), (78, {3}), (79, {8}), (80, {5}), (81, {7}), (82, {0}), (83, {5}), (84, {1}), (85, {7}), (86, {9}), (87, {6}), (88, {4}), (89, {7}), (90, {0}), (91, {8}), (92, {1}), (93, {2}), (94, {1}), (95, {5}), (96, {4}), (97, {6}), (98, {8}), (99, {2}), (100, {6})])\n\nNo common district for cleaners A and B \n\n\nIteration 41 :\n\n [ 39 11 24 30 50 93 44 66 15 63 17 61 58 100 74 33 23 8\n 2 1 78 60 31 64 82 98 83 71 65 51 70 35 4 68 85 76\n 5 81 12 55 21 22 29 69 26 43 95 10 40 6 97 89 91 19\n 99 72 54 77 84 73 52 96 36 38 41 57 67 20 88 28 87 59\n 3 25 53 75 13 62 9 80 16 92 90 42 32 86 56 37 94 34\n 48 45 27 79 46 14 18 7 49 47] \n\n[(0, array([39, 11, 24, 30, 50, 93, 44, 66, 15, 63])), (1, array([ 17, 61, 58, 100, 74, 33, 23, 8, 2, 1])), (2, array([78, 60, 31, 64, 82, 98, 83, 71, 65, 51])), (3, array([70, 35, 4, 68, 85, 76, 5, 81, 12, 55])), (4, array([21, 22, 29, 69, 26, 43, 95, 10, 40, 6])), (5, array([97, 89, 91, 19, 99, 72, 54, 77, 84, 73])), (6, array([52, 96, 36, 38, 41, 57, 67, 20, 88, 28])), (7, array([87, 59, 3, 25, 53, 75, 13, 62, 9, 80])), (8, array([16, 92, 90, 42, 32, 86, 56, 37, 94, 34])), (9, array([48, 45, 27, 79, 46, 14, 18, 7, 49, 47]))] \n\nOrderedDict([(1, {1}), (2, {1}), (3, {7}), (4, {3}), (5, {3}), (6, {4}), (7, {9}), (8, {1}), (9, {7}), (10, {4}), (11, {0}), (12, {3}), (13, {7}), (14, {9}), (15, {0}), (16, {8}), (17, {1}), (18, {9}), (19, {5}), (20, {6}), (21, {4}), (22, {4}), (23, {1}), (24, {0}), (25, {7}), (26, {4}), (27, {9}), (28, {6}), (29, {4}), (30, {0}), (31, {2}), (32, {8}), (33, {1}), (34, {8}), (35, {3}), (36, {6}), (37, {8}), (38, {6}), (39, {0}), (40, {4}), (41, {6}), (42, {8}), (43, {4}), (44, {0}), (45, {9}), (46, {9}), (47, {9}), (48, {9}), (49, {9}), (50, {0}), (51, {2}), (52, {6}), (53, {7}), (54, {5}), (55, {3}), (56, {8}), (57, {6}), (58, {1}), (59, {7}), (60, {2}), (61, {1}), (62, {7}), (63, {0}), (64, {2}), (65, {2}), (66, {0}), (67, {6}), (68, {3}), (69, {4}), (70, {3}), (71, {2}), (72, {5}), (73, {5}), (74, {1}), (75, {7}), (76, {3}), (77, {5}), (78, {2}), (79, {9}), (80, {7}), (81, {3}), (82, {2}), (83, {2}), (84, {5}), (85, {3}), (86, {8}), (87, {7}), (88, {6}), (89, {5}), (90, {8}), (91, {5}), (92, {8}), (93, {0}), (94, {8}), (95, {4}), (96, {6}), (97, {5}), (98, {2}), (99, {5}), (100, {1})])\n\nCleaner A and cleaner B are in district # [1] \n\n\nIteration 42 :\n\n [ 46 6 11 4 45 67 68 73 93 79 17 78 19 28 3 53 65 60\n 81 77 87 84 27 75 88 37 54 52 29 8 86 24 38 98 83 92\n 26 66 61 96 40 55 90 85 99 76 41 12 71 91 47 32 1 59\n 20 23 49 21 51 48 14 50 94 30 63 43 64 7 44 97 62 10\n 39 9 100 35 5 34 74 31 42 72 58 16 36 18 13 57 2 56\n 70 95 25 80 89 22 82 33 69 15] \n\n[(0, array([46, 6, 11, 4, 45, 67, 68, 73, 93, 79])), (1, array([17, 78, 19, 28, 3, 53, 65, 60, 81, 77])), (2, array([87, 84, 27, 75, 88, 37, 54, 52, 29, 8])), (3, array([86, 24, 38, 98, 83, 92, 26, 66, 61, 96])), (4, array([40, 55, 90, 85, 99, 76, 41, 12, 71, 91])), (5, array([47, 32, 1, 59, 20, 23, 49, 21, 51, 48])), (6, array([14, 50, 94, 30, 63, 43, 64, 7, 44, 97])), (7, array([ 62, 10, 39, 9, 100, 35, 5, 34, 74, 31])), (8, array([42, 72, 58, 16, 36, 18, 13, 57, 2, 56])), (9, array([70, 95, 25, 80, 89, 22, 82, 33, 69, 15]))] \n\nOrderedDict([(1, {5}), (2, {8}), 
(9, {7}), (10, {7}), (11, {0}), (12, {4}), (13, {8}), (14, {6}), (15, {9}), (16, {8}), (17, {1}), (18, {8}), (19, {1}), (20, {5}), (21, {5}), (22, {9}), (23, {5}), (24, {3}), (25, {9}), (26, {3}), (27, {2}), (28, {1}), (29, {2}), (30, {6}), (31, {7}), (32, {5}), (33, {9}), (34, {7}), (35, {7}), (36, {8}), (37, {2}), (38, {3}), (39, {7}), (40, {4}), (41, {4}), (42, {8}), (43, {6}), (44, {6}), (45, {0}), (46, {0}), (47, {5}), (48, {5}), (49, {5}), (50, {6}), (51, {5}), (52, {2}), (53, {1}), (54, {2}), (55, {4}), (56, {8}), (57, {8}), (58, {8}), (59, {5}), (60, {1}), (61, {3}), (62, {7}), (63, {6}), (64, {6}), (65, {1}), (66, {3}), (67, {0}), (68, {0}), (69, {9}), (70, {9}), (71, {4}), (72, {8}), (73, {0}), (74, {7}), (75, {2}), (76, {4}), (77, {1}), (78, {1}), (79, {0}), (80, {9}), (81, {1}), (82, {9}), (83, {3}), (84, {2}), (85, {4}), (86, {3}), (87, {2}), (88, {2}), (89, {9}), (90, {4}), (91, {4}), (92, {3}), (93, {0}), (94, {6}), (95, {9}), (96, {3}), (97, {6}), (98, {3}), (99, {4}), (100, {7})])\n\nNo common district for cleaners A and B \n\n\nIteration 43 :\n\n [ 55 71 24 28 50 31 2 42 44 14 32 84 39 45 100 37 58 16\n 64 87 79 20 52 15 35 77 85 3 49 40 94 76 43 4 80 22\n 38 57 62 17 95 1 41 63 8 72 66 11 90 34 23 12 89 46\n 67 18 9 61 13 6 51 7 29 33 48 53 88 59 70 26 68 65\n 73 74 96 56 27 21 60 86 30 78 97 82 81 91 99 47 98 54\n 75 10 92 93 36 69 25 83 19 5] \n\n[(0, array([55, 71, 24, 28, 50, 31, 2, 42, 44, 14])), (1, array([ 32, 84, 39, 45, 100, 37, 58, 16, 64, 87])), (2, array([79, 20, 52, 15, 35, 77, 85, 3, 49, 40])), (3, array([94, 76, 43, 4, 80, 22, 38, 57, 62, 17])), (4, array([95, 1, 41, 63, 8, 72, 66, 11, 90, 34])), (5, array([23, 12, 89, 46, 67, 18, 9, 61, 13, 6])), (6, array([51, 7, 29, 33, 48, 53, 88, 59, 70, 26])), (7, array([68, 65, 73, 74, 96, 56, 27, 21, 60, 86])), (8, array([30, 78, 97, 82, 81, 91, 99, 47, 98, 54])), (9, array([75, 10, 92, 93, 36, 69, 25, 83, 19, 5]))] \n\nOrderedDict([(1, {4}), (2, {0}), (3, {2}), (4, {3}), (5, {9}), (6, {5}), (7, {6}), (8, {4}), (9, {5}), (10, {9}), (11, {4}), (12, {5}), (13, {5}), (14, {0}), (15, {2}), (16, {1}), (17, {3}), (18, {5}), (19, {9}), (20, {2}), (21, {7}), (22, {3}), (23, {5}), (24, {0}), (25, {9}), (26, {6}), (27, {7}), (28, {0}), (29, {6}), (30, {8}), (31, {0}), (32, {1}), (33, {6}), (34, {4}), (35, {2}), (36, {9}), (37, {1}), (38, {3}), (39, {1}), (40, {2}), (41, {4}), (42, {0}), (43, {3}), (44, {0}), (45, {1}), (46, {5}), (47, {8}), (48, {6}), (49, {2}), (50, {0}), (51, {6}), (52, {2}), (53, {6}), (54, {8}), (55, {0}), (56, {7}), (57, {3}), (58, {1}), (59, {6}), (60, {7}), (61, {5}), (62, {3}), (63, {4}), (64, {1}), (65, {7}), (66, {4}), (67, {5}), (68, {7}), (69, {9}), (70, {6}), (71, {0}), (72, {4}), (73, {7}), (74, {7}), (75, {9}), (76, {3}), (77, {2}), (78, {8}), (79, {2}), (80, {3}), (81, {8}), (82, {8}), (83, {9}), (84, {1}), (85, {2}), (86, {7}), (87, {1}), (88, {6}), (89, {5}), (90, {4}), (91, {8}), (92, {9}), (93, {9}), (94, {3}), (95, {4}), (96, {7}), (97, {8}), (98, {8}), (99, {8}), (100, {1})])\n\nNo common district for cleaners A and B \n\n\nIteration 44 :\n\n [ 12 90 60 23 81 5 1 27 33 58 24 91 99 83 2 92 39 20\n 38 30 76 66 49 19 96 54 15 57 77 59 29 3 65 28 88 25\n 56 94 36 78 64 85 11 21 95 48 55 69 46 47 9 93 18 10\n 82 98 97 41 32 67 79 45 74 13 34 22 37 42 4 62 17 43\n 86 51 87 14 100 72 40 6 80 75 68 35 7 73 8 52 50 53\n 44 31 16 71 89 70 26 84 61 63] \n\n[(0, array([12, 90, 60, 23, 81, 5, 1, 27, 33, 58])), (1, array([24, 91, 99, 83, 2, 92, 39, 20, 38, 30])), (2, array([76, 66, 49, 19, 96, 54, 
15, 57, 77, 59])), (3, array([29, 3, 65, 28, 88, 25, 56, 94, 36, 78])), (4, array([64, 85, 11, 21, 95, 48, 55, 69, 46, 47])), (5, array([ 9, 93, 18, 10, 82, 98, 97, 41, 32, 67])), (6, array([79, 45, 74, 13, 34, 22, 37, 42, 4, 62])), (7, array([ 17, 43, 86, 51, 87, 14, 100, 72, 40, 6])), (8, array([80, 75, 68, 35, 7, 73, 8, 52, 50, 53])), (9, array([44, 31, 16, 71, 89, 70, 26, 84, 61, 63]))] \n\nOrderedDict([(1, {0}), (2, {1}), (3, {3}), (4, {6}), (5, {0}), (6, {7}), (7, {8}), (8, {8}), (9, {5}), (10, {5}), (11, {4}), (12, {0}), (13, {6}), (14, {7}), (15, {2}), (16, {9}), (17, {7}), (18, {5}), (19, {2}), (20, {1}), (21, {4}), (22, {6}), (23, {0}), (24, {1}), (25, {3}), (26, {9}), (27, {0}), (28, {3}), (29, {3}), (30, {1}), (31, {9}), (32, {5}), (33, {0}), (34, {6}), (35, {8}), (36, {3}), (37, {6}), (38, {1}), (39, {1}), (40, {7}), (41, {5}), (42, {6}), (43, {7}), (44, {9}), (45, {6}), (46, {4}), (47, {4}), (48, {4}), (49, {2}), (50, {8}), (51, {7}), (52, {8}), (53, {8}), (54, {2}), (55, {4}), (56, {3}), (57, {2}), (58, {0}), (59, {2}), (60, {0}), (61, {9}), (62, {6}), (63, {9}), (64, {4}), (65, {3}), (66, {2}), (67, {5}), (68, {8}), (69, {4}), (70, {9}), (71, {9}), (72, {7}), (73, {8}), (74, {6}), (75, {8}), (76, {2}), (77, {2}), (78, {3}), (79, {6}), (80, {8}), (81, {0}), (82, {5}), (83, {1}), (84, {9}), (85, {4}), (86, {7}), (87, {7}), (88, {3}), (89, {9}), (90, {0}), (91, {1}), (92, {1}), (93, {5}), (94, {3}), (95, {4}), (96, {2}), (97, {5}), (98, {5}), (99, {1}), (100, {7})])\n\nNo common district for cleaners A and B \n\n\nIteration 45 :\n\n [ 12 6 83 25 86 7 81 18 93 95 41 24 49 36 58 45 68 40\n 59 60 31 94 27 90 19 39 42 9 72 17 57 28 70 64 26 77\n 11 99 92 30 53 62 14 52 34 98 48 51 21 82 96 80 69 87\n 88 20 32 74 37 71 43 67 5 89 50 76 16 47 8 33 15 2\n 91 3 29 100 55 46 84 4 13 85 66 44 38 10 22 35 79 75\n 63 65 97 1 56 73 23 54 78 61] \n\n[(0, array([12, 6, 83, 25, 86, 7, 81, 18, 93, 95])), (1, array([41, 24, 49, 36, 58, 45, 68, 40, 59, 60])), (2, array([31, 94, 27, 90, 19, 39, 42, 9, 72, 17])), (3, array([57, 28, 70, 64, 26, 77, 11, 99, 92, 30])), (4, array([53, 62, 14, 52, 34, 98, 48, 51, 21, 82])), (5, array([96, 80, 69, 87, 88, 20, 32, 74, 37, 71])), (6, array([43, 67, 5, 89, 50, 76, 16, 47, 8, 33])), (7, array([ 15, 2, 91, 3, 29, 100, 55, 46, 84, 4])), (8, array([13, 85, 66, 44, 38, 10, 22, 35, 79, 75])), (9, array([63, 65, 97, 1, 56, 73, 23, 54, 78, 61]))] \n\nOrderedDict([(1, {9}), (2, {7}), (3, {7}), (4, {7}), (5, {6}), (6, {0}), (7, {0}), (8, {6}), (9, {2}), (10, {8}), (11, {3}), (12, {0}), (13, {8}), (14, {4}), (15, {7}), (16, {6}), (17, {2}), (18, {0}), (19, {2}), (20, {5}), (21, {4}), (22, {8}), (23, {9}), (24, {1}), (25, {0}), (26, {3}), (27, {2}), (28, {3}), (29, {7}), (30, {3}), (31, {2}), (32, {5}), (33, {6}), (34, {4}), (35, {8}), (36, {1}), (37, {5}), (38, {8}), (39, {2}), (40, {1}), (41, {1}), (42, {2}), (43, {6}), (44, {8}), (45, {1}), (46, {7}), (47, {6}), (48, {4}), (49, {1}), (50, {6}), (51, {4}), (52, {4}), (53, {4}), (54, {9}), (55, {7}), (56, {9}), (57, {3}), (58, {1}), (59, {1}), (60, {1}), (61, {9}), (62, {4}), (63, {9}), (64, {3}), (65, {9}), (66, {8}), (67, {6}), (68, {1}), (69, {5}), (70, {3}), (71, {5}), (72, {2}), (73, {9}), (74, {5}), (75, {8}), (76, {6}), (77, {3}), (78, {9}), (79, {8}), (80, {5}), (81, {0}), (82, {4}), (83, {0}), (84, {7}), (85, {8}), (86, {0}), (87, {5}), (88, {5}), (89, {6}), (90, {2}), (91, {7}), (92, {3}), (93, {0}), (94, {2}), (95, {0}), (96, {5}), (97, {9}), (98, {4}), (99, {3}), (100, {7})])\n\nNo common district for 
cleaners A and B \n\n\nIteration 46 :\n\n [ 48 74 97 17 9 35 26 54 67 89 81 56 39 28 5 95 90 66\n 6 62 41 31 83 10 16 32 60 12 65 71 24 51 27 96 3 98\n 44 79 87 100 70 82 91 36 80 50 58 30 59 8 47 4 69 72\n 15 19 18 29 92 88 13 43 33 22 53 78 34 46 42 45 57 21\n 55 68 73 63 2 76 11 1 20 52 37 99 49 85 25 38 75 84\n 40 61 23 7 77 64 94 14 86 93] \n\n[(0, array([48, 74, 97, 17, 9, 35, 26, 54, 67, 89])), (1, array([81, 56, 39, 28, 5, 95, 90, 66, 6, 62])), (2, array([41, 31, 83, 10, 16, 32, 60, 12, 65, 71])), (3, array([ 24, 51, 27, 96, 3, 98, 44, 79, 87, 100])), (4, array([70, 82, 91, 36, 80, 50, 58, 30, 59, 8])), (5, array([47, 4, 69, 72, 15, 19, 18, 29, 92, 88])), (6, array([13, 43, 33, 22, 53, 78, 34, 46, 42, 45])), (7, array([57, 21, 55, 68, 73, 63, 2, 76, 11, 1])), (8, array([20, 52, 37, 99, 49, 85, 25, 38, 75, 84])), (9, array([40, 61, 23, 7, 77, 64, 94, 14, 86, 93]))] \n\nOrderedDict([(1, {7}), (2, {7}), (3, {3}), (4, {5}), (5, {1}), (6, {1}), (7, {9}), (8, {4}), (9, {0}), (10, {2}), (11, {7}), (12, {2}), (13, {6}), (14, {9}), (15, {5}), (16, {2}), (17, {0}), (18, {5}), (19, {5}), (20, {8}), (21, {7}), (22, {6}), (23, {9}), (24, {3}), (25, {8}), (26, {0}), (27, {3}), (28, {1}), (29, {5}), (30, {4}), (31, {2}), (32, {2}), (33, {6}), (34, {6}), (35, {0}), (36, {4}), (37, {8}), (38, {8}), (39, {1}), (40, {9}), (41, {2}), (42, {6}), (43, {6}), (44, {3}), (45, {6}), (46, {6}), (47, {5}), (48, {0}), (49, {8}), (50, {4}), (51, {3}), (52, {8}), (53, {6}), (54, {0}), (55, {7}), (56, {1}), (57, {7}), (58, {4}), (59, {4}), (60, {2}), (61, {9}), (62, {1}), (63, {7}), (64, {9}), (65, {2}), (66, {1}), (67, {0}), (68, {7}), (69, {5}), (70, {4}), (71, {2}), (72, {5}), (73, {7}), (74, {0}), (75, {8}), (76, {7}), (77, {9}), (78, {6}), (79, {3}), (80, {4}), (81, {1}), (82, {4}), (83, {2}), (84, {8}), (85, {8}), (86, {9}), (87, {3}), (88, {5}), (89, {0}), (90, {1}), (91, {4}), (92, {5}), (93, {9}), (94, {9}), (95, {1}), (96, {3}), (97, {0}), (98, {3}), (99, {8}), (100, {3})])\n\nCleaner A and cleaner B are in disctrict # [7] \n\n\nIteration 47 :\n\n [ 26 42 13 58 62 61 36 94 73 14 71 81 49 60 29 48 21 30\n 68 32 65 47 55 83 17 9 4 63 50 22 33 98 95 72 37 2\n 74 92 7 66 27 38 78 88 45 20 1 100 44 3 18 57 99 34\n 56 10 24 89 59 16 97 19 84 40 82 53 51 43 75 46 54 96\n 41 79 76 35 15 6 39 11 93 23 70 86 90 64 12 80 69 91\n 5 8 77 67 52 31 87 28 85 25] \n\n[(0, array([26, 42, 13, 58, 62, 61, 36, 94, 73, 14])), (1, array([71, 81, 49, 60, 29, 48, 21, 30, 68, 32])), (2, array([65, 47, 55, 83, 17, 9, 4, 63, 50, 22])), (3, array([33, 98, 95, 72, 37, 2, 74, 92, 7, 66])), (4, array([ 27, 38, 78, 88, 45, 20, 1, 100, 44, 3])), (5, array([18, 57, 99, 34, 56, 10, 24, 89, 59, 16])), (6, array([97, 19, 84, 40, 82, 53, 51, 43, 75, 46])), (7, array([54, 96, 41, 79, 76, 35, 15, 6, 39, 11])), (8, array([93, 23, 70, 86, 90, 64, 12, 80, 69, 91])), (9, array([ 5, 8, 77, 67, 52, 31, 87, 28, 85, 25]))] \n\nOrderedDict([(1, {4}), (2, {3}), (3, {4}), (4, {2}), (5, {9}), (6, {7}), (7, {3}), (8, {9}), (9, {2}), (10, {5}), (11, {7}), (12, {8}), (13, {0}), (14, {0}), (15, {7}), (16, {5}), (17, {2}), (18, {5}), (19, {6}), (20, {4}), (21, {1}), (22, {2}), (23, {8}), (24, {5}), (25, {9}), (26, {0}), (27, {4}), (28, {9}), (29, {1}), (30, {1}), (31, {9}), (32, {1}), (33, {3}), (34, {5}), (35, {7}), (36, {0}), (37, {3}), (38, {4}), (39, {7}), (40, {6}), (41, {7}), (42, {0}), (43, {6}), (44, {4}), (45, {4}), (46, {6}), (47, {2}), (48, {1}), (49, {1}), (50, {2}), (51, {6}), (52, {9}), (53, {6}), (54, {7}), (55, {2}), (56, {5}), (57, {5}), (58, 
{0}), (59, {5}), (60, {1}), (61, {0}), (62, {0}), (63, {2}), (64, {8}), (65, {2}), (66, {3}), (67, {9}), (68, {1}), (69, {8}), (70, {8}), (71, {1}), (72, {3}), (73, {0}), (74, {3}), (75, {6}), (76, {7}), (77, {9}), (78, {4}), (79, {7}), (80, {8}), (81, {1}), (82, {6}), (83, {2}), (84, {6}), (85, {9}), (86, {8}), (87, {9}), (88, {4}), (89, {5}), (90, {8}), (91, {8}), (92, {3}), (93, {8}), (94, {0}), (95, {3}), (96, {7}), (97, {6}), (98, {3}), (99, {5}), (100, {4})])\n\nNo common district for cleaners A and B \n\n\nIteration 48 :\n\n [ 34 43 60 9 78 38 15 4 11 42 35 72 3 64 24 7 61 31\n 47 49 39 40 26 10 46 23 76 50 44 85 36 63 97 8 55 68\n 17 81 75 56 20 93 22 14 57 67 83 18 65 2 66 100 59 98\n 86 70 30 88 25 91 80 95 82 99 6 27 28 19 71 69 96 73\n 16 90 89 48 29 62 21 54 41 1 53 5 79 92 94 12 13 74\n 52 84 87 58 77 33 45 32 51 37] \n\n[(0, array([34, 43, 60, 9, 78, 38, 15, 4, 11, 42])), (1, array([35, 72, 3, 64, 24, 7, 61, 31, 47, 49])), (2, array([39, 40, 26, 10, 46, 23, 76, 50, 44, 85])), (3, array([36, 63, 97, 8, 55, 68, 17, 81, 75, 56])), (4, array([20, 93, 22, 14, 57, 67, 83, 18, 65, 2])), (5, array([ 66, 100, 59, 98, 86, 70, 30, 88, 25, 91])), (6, array([80, 95, 82, 99, 6, 27, 28, 19, 71, 69])), (7, array([96, 73, 16, 90, 89, 48, 29, 62, 21, 54])), (8, array([41, 1, 53, 5, 79, 92, 94, 12, 13, 74])), (9, array([52, 84, 87, 58, 77, 33, 45, 32, 51, 37]))] \n\nOrderedDict([(1, {8}), (2, {4}), (3, {1}), (4, {0}), (5, {8}), (6, {6}), (7, {1}), (8, {3}), (9, {0}), (10, {2}), (11, {0}), (12, {8}), (13, {8}), (14, {4}), (15, {0}), (16, {7}), (17, {3}), (18, {4}), (19, {6}), (20, {4}), (21, {7}), (22, {4}), (23, {2}), (24, {1}), (25, {5}), (26, {2}), (27, {6}), (28, {6}), (29, {7}), (30, {5}), (31, {1}), (32, {9}), (33, {9}), (34, {0}), (35, {1}), (36, {3}), (37, {9}), (38, {0}), (39, {2}), (40, {2}), (41, {8}), (42, {0}), (43, {0}), (44, {2}), (45, {9}), (46, {2}), (47, {1}), (48, {7}), (49, {1}), (50, {2}), (51, {9}), (52, {9}), (53, {8}), (54, {7}), (55, {3}), (56, {3}), (57, {4}), (58, {9}), (59, {5}), (60, {0}), (61, {1}), (62, {7}), (63, {3}), (64, {1}), (65, {4}), (66, {5}), (67, {4}), (68, {3}), (69, {6}), (70, {5}), (71, {6}), (72, {1}), (73, {7}), (74, {8}), (75, {3}), (76, {2}), (77, {9}), (78, {0}), (79, {8}), (80, {6}), (81, {3}), (82, {6}), (83, {4}), (84, {9}), (85, {2}), (86, {5}), (87, {9}), (88, {5}), (89, {7}), (90, {7}), (91, {5}), (92, {8}), (93, {4}), (94, {8}), (95, {6}), (96, {7}), (97, {3}), (98, {5}), (99, {6}), (100, {5})])\n\nNo common district for cleaners A and B \n\n\nIteration 49 :\n\n [ 13 29 94 100 91 99 65 70 61 83 2 62 96 52 8 18 33 20\n 37 11 64 1 10 19 86 56 87 76 34 3 35 55 16 72 6 85\n 58 93 44 14 90 40 28 23 80 77 51 5 60 45 95 9 73 63\n 4 41 66 97 92 30 22 53 89 98 32 82 39 71 81 26 24 68\n 50 67 49 69 79 38 84 57 17 31 47 48 27 36 88 74 75 15\n 54 12 78 21 59 7 43 46 25 42] \n\n[(0, array([ 13, 29, 94, 100, 91, 99, 65, 70, 61, 83])), (1, array([ 2, 62, 96, 52, 8, 18, 33, 20, 37, 11])), (2, array([64, 1, 10, 19, 86, 56, 87, 76, 34, 3])), (3, array([35, 55, 16, 72, 6, 85, 58, 93, 44, 14])), (4, array([90, 40, 28, 23, 80, 77, 51, 5, 60, 45])), (5, array([95, 9, 73, 63, 4, 41, 66, 97, 92, 30])), (6, array([22, 53, 89, 98, 32, 82, 39, 71, 81, 26])), (7, array([24, 68, 50, 67, 49, 69, 79, 38, 84, 57])), (8, array([17, 31, 47, 48, 27, 36, 88, 74, 75, 15])), (9, array([54, 12, 78, 21, 59, 7, 43, 46, 25, 42]))] \n\nOrderedDict([(1, {2}), (2, {1}), (3, {2}), (4, {5}), (5, {4}), (6, {3}), (7, {9}), (8, {1}), (9, {5}), (10, {2}), (11, {1}), (12, {9}), (13, {0}), 
(14, {3}), (15, {8}), (16, {3}), (17, {8}), (18, {1}), (19, {2}), (20, {1}), (21, {9}), (22, {6}), (23, {4}), (24, {7}), (25, {9}), (26, {6}), (27, {8}), (28, {4}), (29, {0}), (30, {5}), (31, {8}), (32, {6}), (33, {1}), (34, {2}), (35, {3}), (36, {8}), (37, {1}), (38, {7}), (39, {6}), (40, {4}), (41, {5}), (42, {9}), (43, {9}), (44, {3}), (45, {4}), (46, {9}), (47, {8}), (48, {8}), (49, {7}), (50, {7}), (51, {4}), (52, {1}), (53, {6}), (54, {9}), (55, {3}), (56, {2}), (57, {7}), (58, {3}), (59, {9}), (60, {4}), (61, {0}), (62, {1}), (63, {5}), (64, {2}), (65, {0}), (66, {5}), (67, {7}), (68, {7}), (69, {7}), (70, {0}), (71, {6}), (72, {3}), (73, {5}), (74, {8}), (75, {8}), (76, {2}), (77, {4}), (78, {9}), (79, {7}), (80, {4}), (81, {6}), (82, {6}), (83, {0}), (84, {7}), (85, {3}), (86, {2}), (87, {2}), (88, {8}), (89, {6}), (90, {4}), (91, {0}), (92, {5}), (93, {3}), (94, {0}), (95, {5}), (96, {1}), (97, {5}), (98, {6}), (99, {0}), (100, {0})])\n\nNo common district for cleaners A and B \n\n\nIteration 50 :\n\n [ 11 77 44 92 31 17 35 64 97 79 90 27 57 53 43 78 50 93\n 22 12 85 5 2 30 91 61 7 8 56 76 39 83 15 36 20 1\n 51 81 33 41 98 40 25 60 3 84 70 66 68 14 18 21 72 94\n 54 63 23 38 87 9 49 89 67 34 29 16 37 96 65 55 100 80\n 95 59 73 75 99 48 42 71 46 32 26 24 6 47 19 28 62 52\n 82 13 4 86 10 45 74 69 88 58] \n\n[(0, array([11, 77, 44, 92, 31, 17, 35, 64, 97, 79])), (1, array([90, 27, 57, 53, 43, 78, 50, 93, 22, 12])), (2, array([85, 5, 2, 30, 91, 61, 7, 8, 56, 76])), (3, array([39, 83, 15, 36, 20, 1, 51, 81, 33, 41])), (4, array([98, 40, 25, 60, 3, 84, 70, 66, 68, 14])), (5, array([18, 21, 72, 94, 54, 63, 23, 38, 87, 9])), (6, array([49, 89, 67, 34, 29, 16, 37, 96, 65, 55])), (7, array([100, 80, 95, 59, 73, 75, 99, 48, 42, 71])), (8, array([46, 32, 26, 24, 6, 47, 19, 28, 62, 52])), (9, array([82, 13, 4, 86, 10, 45, 74, 69, 88, 58]))] \n\nOrderedDict([(1, {3}), (2, {2}), (3, {4}), (4, {9}), (5, {2}), (6, {8}), (7, {2}), (8, {2}), (9, {5}), (10, {9}), (11, {0}), (12, {1}), (13, {9}), (14, {4}), (15, {3}), (16, {6}), (17, {0}), (18, {5}), (19, {8}), (20, {3}), (21, {5}), (22, {1}), (23, {5}), (24, {8}), (25, {4}), (26, {8}), (27, {1}), (28, {8}), (29, {6}), (30, {2}), (31, {0}), (32, {8}), (33, {3}), (34, {6}), (35, {0}), (36, {3}), (37, {6}), (38, {5}), (39, {3}), (40, {4}), (41, {3}), (42, {7}), (43, {1}), (44, {0}), (45, {9}), (46, {8}), (47, {8}), (48, {7}), (49, {6}), (50, {1}), (51, {3}), (52, {8}), (53, {1}), (54, {5}), (55, {6}), (56, {2}), (57, {1}), (58, {9}), (59, {7}), (60, {4}), (61, {2}), (62, {8}), (63, {5}), (64, {0}), (65, {6}), (66, {4}), (67, {6}), (68, {4}), (69, {9}), (70, {4}), (71, {7}), (72, {5}), (73, {7}), (74, {9}), (75, {7}), (76, {2}), (77, {0}), (78, {1}), (79, {0}), (80, {7}), (81, {3}), (82, {9}), (83, {3}), (84, {4}), (85, {2}), (86, {9}), (87, {5}), (88, {9}), (89, {6}), (90, {1}), (91, {2}), (92, {0}), (93, {1}), (94, {5}), (95, {7}), (96, {6}), (97, {0}), (98, {4}), (99, {7}), (100, {7})])\n\nNo common district for cleaners A and B \n\n\nIteration 51 :\n\n [ 78 96 68 4 27 47 38 8 1 65 20 69 44 57 55 79 23 42\n 6 84 15 67 18 81 25 10 71 83 30 43 3 35 46 58 21 32\n 70 50 56 28 12 5 7 17 59 77 24 99 61 86 49 52 63 51\n 62 41 37 54 94 76 48 73 100 60 87 26 40 95 14 29 9 89\n 22 19 75 34 85 11 93 80 13 72 92 91 36 45 98 74 31 82\n 64 66 2 16 88 97 33 53 39 90] \n\n" ], [ "print(prob_array)\nsuccess = 0\nfor i in prob_array:\n if i == 1:\n success += 1\nprint('Success rate from the sample of', num_repeat, 'iterations:', success/num_repeat)", "[0, 1, 0, 0, 0, 
0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0]\nSuccess rate from the sample of 200 iterations: 0.095\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code" ] ]
4a3a0f48e6cb4dd29b9e66df22866ece32194c87
31,162
ipynb
Jupyter Notebook
notebooks/Churn-CatBoost.ipynb
ItsmeKernel/Credit-Card-Customers
b2e55e643ccce9e039dd8b6982e1c228a697d4c0
[ "Apache-2.0" ]
null
null
null
notebooks/Churn-CatBoost.ipynb
ItsmeKernel/Credit-Card-Customers
b2e55e643ccce9e039dd8b6982e1c228a697d4c0
[ "Apache-2.0" ]
null
null
null
notebooks/Churn-CatBoost.ipynb
ItsmeKernel/Credit-Card-Customers
b2e55e643ccce9e039dd8b6982e1c228a697d4c0
[ "Apache-2.0" ]
null
null
null
48.919937
8,099
0.626757
[ [ [ "# Customer Churning\n\nIn this notebook I go through the process of evaluating different Classification Models. I end up using `CatBoost`, as\nit yielded the highest `recall` of all.\n\n## Disclaimer\n\nThis notebook doesn't include an EDA nor any other type of analysis, given that I already submitted another\n[notebook](https://www.kaggle.com/augusto1982/credit-card-customers-analysis) for that.\n\n## Loading the data", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nfrom sklearn.experimental import enable_iterative_imputer\nfrom sklearn.impute import IterativeImputer\n\nfrom sklearn.inspection import permutation_importance\nfrom sklearn.preprocessing import RobustScaler\nfrom sklearn.model_selection import train_test_split, KFold, cross_validate, cross_val_score\nfrom sklearn.feature_selection import SelectFromModel\nfrom sklearn.metrics import confusion_matrix, recall_score, accuracy_score\n\nfrom sklearn.preprocessing import LabelEncoder, OneHotEncoder\n\nimport seaborn as sns\n\nfrom sklearn.model_selection import GridSearchCV\n\n\nfrom sklearn.ensemble import ExtraTreesClassifier\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.svm import SVC\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom catboost import CatBoostClassifier\nfrom xgboost import XGBClassifier\nimport xgboost as xgb", "_____no_output_____" ], [ "df = pd.read_csv('../input/credit-card-customers/BankChurners.csv')\ndf = df.iloc[:, :-2]\n\n# Setting the index\ndf.set_index('CLIENTNUM', inplace=True)", "_____no_output_____" ], [ "# Replacing 'Unknown' values.\ncategorical = ['Education_Level', 'Marital_Status', 'Income_Category']\n\nencoders = {}\n\nfor cat in categorical:\n encoder = LabelEncoder()\n encoders[cat] = encoder\n values = df[cat]\n known_values = values[values != 'Unknown']\n df[cat] = pd.Series( encoder.fit_transform(known_values), index=known_values.index)\n\nimp_cat = IterativeImputer(estimator=RandomForestClassifier(),\n initial_strategy='most_frequent',\n max_iter=10, random_state=0)\n\n\ndf[categorical] = imp_cat.fit_transform(df[categorical])\n\nfor cat in categorical:\n df[cat] = encoders[cat].inverse_transform(df[cat].astype(int))\n", "_____no_output_____" ], [ "def make_categorical(data: pd.DataFrame, column: str, categories: list, ordered: bool = False):\n data[column] = pd.Categorical(df[column],\n categories=categories,\n ordered=ordered)\n\n\ndf['Attrition_Flag'] = df['Attrition_Flag'].map({'Attrited Customer':1, 'Existing Customer':0})\n\nmake_categorical(df, 'Gender', ['F', 'M'])\n\nmake_categorical(df, 'Education_Level', ['Uneducated', 'High School', 'Graduate', 'College', 'Post-Graduate', 'Doctorate'], True)\n\nmake_categorical(df, 'Marital_Status', ['Married', 'Single', 'Divorced'])\n\nmake_categorical(df, 'Income_Category', ['Less than $40K', '$40K - $60K', '$60K - $80K', '$80K - $120K', '$120K +'], True)\n\nmake_categorical(df, 'Card_Category', ['Blue', 'Silver', 'Gold', 'Platinum'], True)", "_____no_output_____" ] ], [ [ "## Adding additional columns", "_____no_output_____" ] ], [ [ "# These columns I added while doing the EDA.\n\nage_bins = [20, 40, 60, 80]\nage_labels = ['20 - 40', '40 - 60', '60 - 80']\ndf['Age_Range'] = pd.cut(df['Customer_Age'], age_bins, labels=age_labels, ordered=True)\n\ndf['No_Revolving_Bal'] = 
df['Total_Revolving_Bal'] == 0\n\ndf['New_Customer'] = df['Months_on_book'] <= 24\n\ndf['Optimal_Utilization'] = df['Avg_Utilization_Ratio'] <= 0.3\n\n# The next two columns I added after doing some Feature Selection analysis (more on that below).\n\ndf['Avg_Transaction'] = df['Total_Trans_Amt'] / df['Total_Trans_Ct']\n\ndef get_avg_q4_q1(row):\n if row['Total_Ct_Chng_Q4_Q1'] == 0:\n return 0\n return row['Total_Amt_Chng_Q4_Q1'] / row['Total_Ct_Chng_Q4_Q1']\n\n\ndf['Avg_Q4_Q1'] = df.apply(get_avg_q4_q1, axis=1)", "_____no_output_____" ] ], [ [ "## Encoding the categorical variables", "_____no_output_____" ] ], [ [ "label_encoding_columns = ['Education_Level', 'Marital_Status']\n\ndummy_encoding_columns = ['Gender', 'Income_Category', 'Card_Category', 'Age_Range']\n\ndf[label_encoding_columns]= df[label_encoding_columns].apply(LabelEncoder().fit_transform)\ndf = pd.get_dummies(df, columns=dummy_encoding_columns, prefix=dummy_encoding_columns, drop_first=True)", "_____no_output_____" ] ], [ [ "## Splitting the target and independent variables", "_____no_output_____" ] ], [ [ "X = df.iloc[:, 1:]\ny = df.iloc[:, 0]", "_____no_output_____" ] ], [ [ "## Feature Selection\n\nHere I don't use Feature Selection for selecting a subset of relevant features, as that didn't improve the score of the model.\nInstead, I use it to determine which of the whole group turn out to be more relevant and see if there's any other column\nI create to reinforce the model.\n\nThe process determined these are the most relevant:\n```\n[\n 'Total_Relationship_Count', 'Months_Inactive_12_mon', 'Contacts_Count_12_mon',\n 'Total_Revolving_Bal', 'Total_Amt_Chng_Q4_Q1', 'Total_Trans_Amt', 'Total_Trans_Ct',\n 'Total_Ct_Chng_Q4_Q1', 'No_Revolving_Bal'\n]\n```\n\nAs we can see we have the columns regarding Q4/Q1, and the two for the total of transactions. 
Therefore, I decided\nto create two additional columns, as I previously mentioned (`Avg_Transaction` and `Avg_Q4_Q1`).\n", "_____no_output_____" ] ], [ [ "# forest = ExtraTreesClassifier(n_estimators=250)\n# forest.fit(X, y)\n#\n# feat_importances = pd.Series(forest.feature_importances_, index=X.columns).sort_values(ascending=False)\n#\n# sel = SelectFromModel(forest)\n# sel.fit(X, y)\n# selected_feat= X.columns[sel.get_support()]\n#\n# df_sel = df[selected_feat]", "_____no_output_____" ] ], [ [ "## Scaling the data", "_____no_output_____" ] ], [ [ "X = RobustScaler().fit_transform(X)", "_____no_output_____" ] ], [ [ "## Split into train and test sets", "_____no_output_____" ] ], [ [ "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)\n", "_____no_output_____" ] ], [ [ "## Evaluate different models with K-Fold", "_____no_output_____" ] ], [ [ "base_models = [\n (\"LR_model\", LogisticRegression(random_state=42,n_jobs=-1)),\n (\"KNN_model\", KNeighborsClassifier(n_jobs=-1)),\n (\"SVM_model\", SVC(random_state=42, kernel = 'rbf')),\n (\"DT_model\", DecisionTreeClassifier(random_state=42)),\n (\"RF_model\", RandomForestClassifier(random_state=42,n_jobs=-1)),\n (\"XGB_model\", XGBClassifier(random_state=42, n_jobs=-1, scale_pos_weight=5)),\n (\"CXGB_model\", CatBoostClassifier(random_state=42, auto_class_weights='Balanced'))\n]\n\n\nsplit = KFold(n_splits=4, shuffle=True, random_state=42)\n\n# Preprocessing, fitting, making predictions and scoring for every model:\nfor name, model in base_models:\n\n # get cross validation score for each model:\n cv_results = cross_val_score(model,\n X, y,\n cv=split,\n scoring=\"recall\",\n n_jobs=-1)\n # output:\n min_score = round(min(cv_results), 4)\n max_score = round(max(cv_results), 4)\n mean_score = round(np.mean(cv_results), 4)\n std_dev = round(np.std(cv_results), 4)\n print(f\"{name} cross validation recall score: {mean_score} +/- {std_dev} (std) min: {min_score}, max: {max_score}\")", "LR_model cross validation recall score: 0.6263 +/- 0.0275 (std) min: 0.596, max: 0.6708\nKNN_model cross validation recall score: 0.6068 +/- 0.0215 (std) min: 0.5895, max: 0.6436\nSVM_model cross validation recall score: 0.7056 +/- 0.0222 (std) min: 0.6843, max: 0.7426\nDT_model cross validation recall score: 0.8097 +/- 0.0282 (std) min: 0.7709, max: 0.8465\nRF_model cross validation recall score: 0.8334 +/- 0.0115 (std) min: 0.8232, max: 0.8515\nXGB_model cross validation recall score: 0.9169 +/- 0.0154 (std) min: 0.8995, max: 0.9356\nCXGB_model cross validation recall score: 0.9447 +/- 0.012 (std) min: 0.9314, max: 0.9629\n" ] ], [ [ "As we can see, `CatBoost` seems to be the best option.\n\n## Search for optimal hyperparameters\n\nI commented the code below, given that it takes hours to run. 
Its execution produced the following combination of parameters:\n\n```\n{\n 'border_count': 100,\n 'depth': 6,\n 'iterations': 250,\n 'l2_leaf_reg': 100,\n 'learning_rate': 0.1\n}\n```\n\nHowever, I ran this before adding the last two columns, so I tweak them manually some more afterwards.", "_____no_output_____" ] ], [ [ "# grid_params = {\n# 'depth':[4, 5, 6, 7, 8 ,9, 10],\n# 'iterations':[250, 500, 1000],\n# 'learning_rate':[0.001, 0.1, 0.2, 0.3],\n# 'l2_leaf_reg':[3, 5, 10, 100],\n# 'border_count':[10, 20, 50, 100],\n# }\n#\n# gd_sr = GridSearchCV(estimator=CatBoostClassifier(random_state=42, auto_class_weights='Balanced'),\n# param_grid=grid_params,\n# scoring='recall',\n# cv=5,\n# n_jobs=-1)\n#\n# gd_sr.fit(X_train, y_train)\n#\n# best_parameters = gd_sr.best_params_\n# print(best_parameters)", "_____no_output_____" ] ], [ [ "## Construction and execution of the optimal? model", "_____no_output_____" ] ], [ [ "best_classifier = CatBoostClassifier(\n random_state=42,\n border_count=100,\n depth=6,\n iterations=140,\n l2_leaf_reg=100,\n learning_rate=0.1,\n auto_class_weights='Balanced',\n verbose=False\n)\n\nbest_classifier.fit(X_train, y_train)\ny_pred = best_classifier.predict(X_test)", "_____no_output_____" ] ], [ [ "## Confusion Matrix", "_____no_output_____" ] ], [ [ "cm = confusion_matrix(y_test, y_pred)\nrecall = recall_score(y_test, y_pred)\n# labels = ['Survived', 'No Survived']\nax = sns.heatmap(cm, annot=True)\nprint(\"recall: {}\".format(recall))", "recall: 0.9405737704918032\n" ] ], [ [ "## K-Fold and CatBoost", "_____no_output_____" ] ], [ [ "\nnp.mean(\n cross_val_score(\n best_classifier,\n X,\n y,\n cv=split,\n scoring=\"recall\",\n n_jobs=-1)\n)\n\n", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
4a3a48087d2db28298246cdb82bcff34b6bfb91d
57,824
ipynb
Jupyter Notebook
_notebooks/2021-06-12-twitter sementics with Roberta.ipynb
peiyiHung/mywebsite
4f6bb8dab272960d39e84c04b545f5417278a081
[ "Apache-2.0" ]
1
2020-12-16T13:40:27.000Z
2020-12-16T13:40:27.000Z
_notebooks/2021-06-12-twitter sementics with Roberta.ipynb
peiyiHung/mywebsite
4f6bb8dab272960d39e84c04b545f5417278a081
[ "Apache-2.0" ]
1
2021-07-03T06:24:55.000Z
2021-07-03T06:24:56.000Z
_notebooks/2021-06-12-twitter sementics with Roberta.ipynb
peiyiHung/mywebsite
4f6bb8dab272960d39e84c04b545f5417278a081
[ "Apache-2.0" ]
null
null
null
50.678352
12,588
0.651252
[ [ [ "# \"Text Classification with Roberta - Does a Twitter post actually announce a diasater?\"\n\n- toc:true\n- branch: master\n- badges: true\n- comments: true\n- author: Peiyi Hung\n- categories: [category, project]\n- image: \"images/tweet-class.png\"", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\nfrom fastai.text.all import *\nimport re", "_____no_output_____" ] ], [ [ "# Import the data and clean it", "_____no_output_____" ] ], [ [ "dir_path = \"/kaggle/input/nlp-getting-started/\"\ntrain_df = pd.read_csv(dir_path + \"train.csv\")\ntest_df = pd.read_csv(dir_path + \"test.csv\")", "_____no_output_____" ], [ "train_df", "_____no_output_____" ], [ "train_df = train_df.drop(columns=[\"id\", \"keyword\", \"location\"])", "_____no_output_____" ], [ "train_df[\"target\"].value_counts()", "_____no_output_____" ], [ "def remove_URL(text):\n url = re.compile(r'https?://\\S+|www\\.\\S+')\n return url.sub(r'',text)\n\ntrain_df[\"text\"] = train_df[\"text\"].apply(remove_URL)\ntest_df[\"text\"] = test_df[\"text\"].apply(remove_URL)", "_____no_output_____" ], [ "def remove_html(text):\n html=re.compile(r'<.*?>')\n return html.sub(r'',text)\n\ntrain_df[\"text\"] = train_df[\"text\"].apply(remove_html)\ntest_df[\"text\"] = test_df[\"text\"].apply(remove_html)", "_____no_output_____" ], [ "def remove_emoji(text):\n emoji_pattern = re.compile(\"[\"\n u\"\\U0001F600-\\U0001F64F\" # emoticons\n u\"\\U0001F300-\\U0001F5FF\" # symbols & pictographs\n u\"\\U0001F680-\\U0001F6FF\" # transport & map symbols\n u\"\\U0001F1E0-\\U0001F1FF\" # flags (iOS)\n u\"\\U00002702-\\U000027B0\"\n u\"\\U000024C2-\\U0001F251\"\n \"]+\", flags=re.UNICODE)\n return emoji_pattern.sub(r'', text)\n\ntrain_df[\"text\"] = train_df[\"text\"].apply(remove_emoji)\ntest_df[\"text\"] = test_df[\"text\"].apply(remove_emoji)", "_____no_output_____" ], [ "train_df", "_____no_output_____" ], [ "train_df[\"text\"].apply(lambda x:len(x.split())).plot(kind=\"hist\");", "_____no_output_____" ] ], [ [ "# Get tokens for the transformer", "_____no_output_____" ] ], [ [ "from transformers import AutoTokenizer, AutoModelForSequenceClassification", "_____no_output_____" ], [ "tokenizer = AutoTokenizer.from_pretrained(\"roberta-large\")", "_____no_output_____" ] ], [ [ "From the graph above, we can know that the longest tweet has 30 words, so I set the `max_length` to 30.", "_____no_output_____" ] ], [ [ "train_tensor = tokenizer(list(train_df[\"text\"]), padding=\"max_length\",\n truncation=True, max_length=30,\n return_tensors=\"pt\")[\"input_ids\"]", "_____no_output_____" ] ], [ [ "# Preparing datasets and dataloaders", "_____no_output_____" ] ], [ [ "class TweetDataset:\n def __init__(self, tensors, targ, ids):\n self.text = tensors[ids, :]\n self.targ = targ[ids].reset_index(drop=True)\n \n def __len__(self):\n return len(self.text)\n \n def __getitem__(self, idx):\n \n t = self.text[idx]\n y = self.targ[idx]\n \n return t, tensor(y)", "_____no_output_____" ], [ "train_ids, valid_ids = RandomSplitter()(train_df)\n\n\ntarget = train_df[\"target\"]\n\ntrain_ds = TweetDataset(train_tensor, target, train_ids)\nvalid_ds = TweetDataset(train_tensor, target, valid_ids)\n\ntrain_dl = DataLoader(train_ds, bs=64)\nvalid_dl = DataLoader(valid_ds, bs=512)\ndls = DataLoaders(train_dl, valid_dl).to(\"cuda\")", "_____no_output_____" ] ], [ [ "# Get the model", "_____no_output_____" ] ], [ [ "bert = AutoModelForSequenceClassification.from_pretrained(\"roberta-large\", num_labels=2).train().to(\"cuda\")\n\nclass 
BertClassifier(Module):\n def __init__(self, bert):\n self.bert = bert\n def forward(self, x):\n return self.bert(x).logits\n\nmodel = BertClassifier(bert)", "_____no_output_____" ] ], [ [ "# Start training", "_____no_output_____" ] ], [ [ "learn = Learner(dls, model, metrics=[accuracy, F1Score()]).to_fp16()\nlearn.lr_find()", "_____no_output_____" ], [ "learn.fit_one_cycle(3, lr_max=1e-5)", "_____no_output_____" ] ], [ [ "# Find the best threshold for f1 score", "_____no_output_____" ] ], [ [ "from sklearn.metrics import f1_score\n\npreds, targs = learn.get_preds()\n\nmin_threshold = None\nmax_f1 = -float(\"inf\")\nthresholds = np.linspace(0.3, 0.7, 50)\nfor threshold in thresholds:\n f1 = f1_score(targs, F.softmax(preds, dim=1)[:, 1]>threshold)\n if f1 > max_f1:\n min_threshold = threshold\n min_f1 = f1\n print(f\"threshold:{threshold:.4f} - f1:{f1:.4f}\")", "_____no_output_____" ] ], [ [ "# Make prediction on the test set and submit the prediction", "_____no_output_____" ] ], [ [ "test_tensor = tokenizer(list(test_df[\"text\"]),\n padding=\"max_length\",\n truncation=True,\n max_length=30,\n return_tensors=\"pt\")[\"input_ids\"]", "_____no_output_____" ], [ "class TestDS:\n def __init__(self, tensors):\n self.tensors = tensors\n \n def __len__(self):\n return len(self.tensors)\n \n def __getitem__(self, idx):\n t = self.tensors[idx]\n return t, tensor(0)\n\ntest_dl = DataLoader(TestDS(test_tensor), bs=128)", "_____no_output_____" ], [ "test_preds = learn.get_preds(dl=test_dl)", "_____no_output_____" ], [ "sub = pd.read_csv(dir_path + \"sample_submission.csv\")\nprediction = (F.softmax(test_preds[0], dim=1)[:, 1]>min_threshold).int()\nsub = pd.read_csv(dir_path + \"sample_submission.csv\")\nsub[\"target\"] = prediction\nsub.to_csv(\"submission.csv\", index=False)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
4a3a48116481033da3eec59b0d828badc1309c7f
17,270
ipynb
Jupyter Notebook
notebooks/fasterrcnn_coco_validation.ipynb
jeffmm/learning-pytorch
e3a736640664c6580685af4823c0119b2dbe0d52
[ "MIT" ]
null
null
null
notebooks/fasterrcnn_coco_validation.ipynb
jeffmm/learning-pytorch
e3a736640664c6580685af4823c0119b2dbe0d52
[ "MIT" ]
null
null
null
notebooks/fasterrcnn_coco_validation.ipynb
jeffmm/learning-pytorch
e3a736640664c6580685af4823c0119b2dbe0d52
[ "MIT" ]
null
null
null
30.192308
129
0.50388
[ [ [ "## COCO dataset validation using Faster-RCNN", "_____no_output_____" ] ], [ [ "import json\nfrom pathlib import Path\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport cv2 as cv\nimport torch\nimport tqdm\nimport torchvision.datasets as dset\nfrom torchvision.models.detection.faster_rcnn import FastRCNNPredictor, fasterrcnn_resnet50_fpn\nfrom torchvision.transforms import ToTensor, Compose\nfrom torchvision.datasets import CocoDetection\nfrom pycocotools.coco import COCO\nfrom pycocotools.cocoeval import COCOeval", "_____no_output_____" ], [ "device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')", "_____no_output_____" ], [ "coco_val = dset.CocoDetection(root=\"../data/coco/val2017/\",\n annFile=\"../data/coco/annotations/instances_val2017.json\",\n transform=ToTensor())", "loading annotations into memory...\nDone (t=1.23s)\ncreating index...\nindex created!\n" ], [ "model = fasterrcnn_resnet50_fpn(pretrained=True)\nmodel.to(device)\nparams = [p for p in model.parameters() if p.requires_grad]\noptimizer = torch.optim.SGD(params, lr=0.005, momentum=0.9, weight_decay=0.0005)", "_____no_output_____" ], [ "# Since images are different sizes, must keep batch size to 1...\ncoco_val_dl = torch.utils.data.DataLoader(coco_val, batch_size=1, num_workers=1)", "_____no_output_____" ], [ "def validation_loop(coco_dataloader, model):\n # Prepare a dictionary of counts for each category\n counts = {}\n for cid in coco_dataloader.dataset.coco.cats.keys():\n counts[cid] = 0\n results = []\n model.eval()\n dl = tqdm.tqdm(coco_dataloader)\n with torch.no_grad():\n for X, y in dl:\n pred = model(X.to(device))\n # For some reason, some images return empty labels (?)\n if not y:\n continue\n image_id = y[0]['image_id'].item()\n # Record instances of each category\n for gt in y:\n cid = gt['category_id'].item()\n counts[cid] += 1\n for p in pred:\n for label, box, score in zip(p['labels'].tolist(), p['boxes'].tolist(), p['scores'].tolist()):\n res = {'image_id': image_id}\n res['category_id'] = label\n # Convert to x, y, width, height\n res['bbox'] = [box[0], box[1], box[2] - box[0], box[3] - box[1]]\n res['score'] = score\n results.append(res)\n return results, counts", "_____no_output_____" ], [ "#results, counts = validation_loop(coco_val_dl, model)", "_____no_output_____" ], [ "#with open(\"results.json\", \"w\") as f:\n# json.dump(results, f)\n#with open(\"counts.json\", \"w\") as f:\n# json.dump(counts, f)", "_____no_output_____" ], [ "with open(\"results.json\", \"r\") as f:\n results = json.load(f)\nwith open(\"counts.json\", \"r\") as f:\n counts = json.load(f)", "_____no_output_____" ], [ "img_ids = set()\ncat_ids = set()\nfor res in results:\n img_ids.add(res['image_id'])\n cat_ids.add(res['category_id'])", "_____no_output_____" ], [ "coco_res = coco_val.coco.loadRes(\"results.json\")", "Loading and preparing results...\nDONE (t=1.67s)\ncreating index...\nindex created!\n" ], [ "coco_eval = COCOeval(cocoGt=coco_val.coco, cocoDt=coco_res, iouType='bbox')", "_____no_output_____" ], [ "def coco_summarize(coco_eval, ap=1, iouThr=None, areaRng='all', maxDets=100 ):\n p = coco_eval.params\n iStr = ' {:<18} {} @[ IoU={:<9} | area={:>6s} | maxDets={:>3d} ] = {:0.3f}'\n titleStr = 'Average Precision' if ap == 1 else 'Average Recall'\n typeStr = '(AP)' if ap==1 else '(AR)'\n iouStr = '{:0.2f}:{:0.2f}'.format(p.iouThrs[0], p.iouThrs[-1]) \\\n if iouThr is None else '{:0.2f}'.format(iouThr)\n \n aind = [i for i, aRng in enumerate(p.areaRngLbl) if aRng == areaRng]\n mind = [i for 
i, mDet in enumerate(p.maxDets) if mDet == maxDets]\n if ap == 1:\n # dimension of precision: [TxRxKxAxM]\n s = coco_eval.eval['precision']\n # IoU\n if iouThr is not None:\n t = np.where(iouThr == p.iouThrs)[0]\n s = s[t]\n s = s[:,:,:,aind,mind]\n else:\n # dimension of recall: [TxKxAxM]\n s = coco_eval.eval['recall']\n if iouThr is not None:\n t = np.where(iouThr == p.iouThrs)[0]\n s = s[t]\n s = s[:,:,aind,mind]\n if len(s[s>-1])==0:\n mean_s = -1\n else:\n mean_s = np.mean(s[s>-1])\n return mean_s", "_____no_output_____" ], [ "coco_eval.params.areaRng = [[0, 1e8]]\ncoco_eval.params.areaRngLbl = ['all']\ncoco_eval.params.maxDets = [100]\ncoco_eval.params.iouThrs = [0.5]\ncoco_eval.params.imgIds = list(img_ids)\ncoco_eval.params.catIds = list(cat_ids)\ncoco_eval.evaluate()\ncoco_eval.accumulate()", "Running per image evaluation...\nEvaluate annotation type *bbox*\nDONE (t=21.26s).\nAccumulating evaluation results...\nDONE (t=3.28s).\n Average Precision (AP) @[ IoU=0.50:0.50 | area= all | maxDets=100 ] = 0.586\n Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = -1.000\n Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = -1.000\n Average Precision (AP) @[ IoU=0.50:0.50 | area= small | maxDets=100 ] = 0.368\n Average Precision (AP) @[ IoU=0.50:0.50 | area=medium | maxDets=100 ] = 0.634\n Average Precision (AP) @[ IoU=0.50:0.50 | area= large | maxDets=100 ] = 0.715\n Average Recall (AR) @[ IoU=0.50:0.50 | area= all | maxDets= 1 ] = 0.436\n Average Recall (AR) @[ IoU=0.50:0.50 | area= all | maxDets= 10 ] = 0.725\n Average Recall (AR) @[ IoU=0.50:0.50 | area= all | maxDets=100 ] = 0.771\n Average Recall (AR) @[ IoU=0.50:0.50 | area= small | maxDets=100 ] = 0.533\n Average Recall (AR) @[ IoU=0.50:0.50 | area=medium | maxDets=100 ] = 0.824\n Average Recall (AR) @[ IoU=0.50:0.50 | area= large | maxDets=100 ] = 0.914\n" ], [ "%%capture\nprecisions = []\nrecalls = []\n# IoU threshold\nfor cid in cat_ids:\n coco_eval.params.catIds = [cid]\n coco_eval.evaluate()\n coco_eval.accumulate()\n precisions.append(coco_summarize(coco_eval))\n recalls.append(coco_summarize(coco_eval, ap=0))\nprecisions = np.array(precisions)\nrecalls = np.array(recalls)", "_____no_output_____" ], [ "f1_scores = 2 * precisions * recalls / (precisions + recalls)", "_____no_output_____" ], [ "k = 20\nk_lowest = np.argsort(f1_scores)[:k]\nbad_cats = np.array(list(cat_ids))[k_lowest]\nbad_cat_dict = {coco_val.coco.cats[cid]['name']: cid for cid in bad_cats}\nprint(f\"{'Category': >16}\\t{'F1': >5} \\tInstances\\n\")\nfor cid, low in zip(bad_cats, k_lowest):\n print(f\"{coco_val.coco.cats[cid]['name']: >16}\", f\"\\t{f1_scores[low]:0.04f} \\t{counts[str(cid)]}\")", " Category\t F1 \tInstances\n\n hair drier \t0.1164 \t11\n spoon \t0.3449 \t253\n handbag \t0.3670 \t540\n apple \t0.3797 \t239\n knife \t0.3811 \t326\n backpack \t0.3846 \t371\n scissors \t0.4088 \t36\n book \t0.4106 \t1161\n bench \t0.4299 \t413\n toothbrush \t0.4658 \t57\n carrot \t0.4996 \t371\n hot dog \t0.5134 \t127\n toaster \t0.5216 \t9\n banana \t0.5232 \t379\n orange \t0.5259 \t287\n chair \t0.5295 \t1791\n dining table \t0.5377 \t697\n skis \t0.5421 \t241\n broccoli \t0.5444 \t316\n remote \t0.5457 \t283\n" ], [ "%%capture\ncoco_eval.params.catIds = list(cat_ids)\n\nbad_cat_imgs = []\n\nfor cid in bad_cats:\n cat_img_ids = set()\n coco_eval.params.catIds = [cid]\n coco_eval.evaluate()\n coco_eval.accumulate()\n cat_imgs = np.where(coco_eval.evalImgs)[0]\n for cimg in cat_imgs:\n 
cat_img_ids.add(coco_eval.evalImgs[cimg]['image_id'])\n bad_cat_imgs.append(sorted(list(cat_img_ids)))", "_____no_output_____" ], [ "coco_eval = COCOeval(cocoGt=coco_val.coco, cocoDt=coco_res, iouType='bbox')\n\n# IoU threshold\ncoco_eval.params.catIds = list(cat_ids)\ncoco_eval.params.imgIds = list(img_ids)\ncoco_eval.params.maxDets = [100]\ncoco_eval.params.iouThrs = [0.5]\ncoco_eval.evaluate()\ncoco_eval.accumulate()", "Running per image evaluation...\nEvaluate annotation type *bbox*\nDONE (t=22.59s).\nAccumulating evaluation results...\nDONE (t=1.24s).\n" ], [ "def get_img_from_id(iid):\n ind = np.where(np.array(coco_val.ids) == iid)[0][0]\n img, ann = coco_val[ind]\n return img.squeeze().permute(1, 2, 0).numpy().copy(), ann", "_____no_output_____" ], [ "bad_cat_imgs = [list(set(coco_val.coco.catToImgs[cat])) for cat in bad_cats]", "_____no_output_____" ], [ "results_by_img = {}\nfor res in results:\n if results_by_img.get(res['image_id']):\n results_by_img[res['image_id']].append(res)\n else:\n results_by_img[res['image_id']] = [res]", "_____no_output_____" ], [ "def write_bad_cat_images(cat_name, output_dir=Path(\"../data/coco_val_results/\"), draw_bboxes=True, show_images=False):\n outdir = Path(output_dir) / f\"{cat_name}\"\n outdir.mkdir(exist_ok=True)\n cid = bad_cat_dict[cat_name]\n cat_imgs = bad_cat_imgs[np.where(bad_cats == cid)[0][0]]\n for iid in cat_imgs:\n metrics = coco_eval.evaluateImg(iid, cid, [0, 1e6], 1000)\n if metrics['gtMatches'].any():\n continue\n img_res = results_by_img[iid]\n img, anns = get_img_from_id(iid)\n if draw_bboxes:\n for res in anns:\n if res['category_id'] == cid:\n bx, by, w, h = np.array(res['bbox'], dtype=int)\n img = cv.rectangle(img, (bx, by), (bx+w, by+h), color=[1, 0, 0], thickness=3)\n for res in img_res:\n if res['category_id'] == cid:\n bx, by, w, h = np.array(res['bbox'], dtype=int)\n img = cv.rectangle(img, (bx, by), (bx+w, by+h), color=[0, 1, 1], thickness=2)\n if show_images:\n plt.figure(figsize=(5, 5))\n plt.imshow(img)\n plt.axis(\"off\")\n plt.show()\n plt.close()\n img = (np.round(img * 255)).astype(np.uint8)\n cv.imwrite(str(outdir / f\"{iid}.jpg\"), cv.cvtColor(img, cv.COLOR_BGR2RGB))", "_____no_output_____" ], [ "img_dir = Path(\"../data/coco_val_results/unlabeled\")\nimg_dir.mkdir(exist_ok=True)", "_____no_output_____" ], [ "for cat_name in [\"dining table\", \"handbag\", \"backpack\", \"bench\", \"chair\", \"hair drier\"]:\n write_bad_cat_images(cat_name, output_dir=img_dir, draw_bboxes=False)", "_____no_output_____" ], [ "coco_eval.evaluate_image(cid", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a3a50192675994f71c96086bf84972443a37934
34,428
ipynb
Jupyter Notebook
notebooks/talk_demo.ipynb
hadrianmontes/jax-md
cea1cc6b22db6044a502eeeab4bddde35ac15d94
[ "ECL-2.0", "Apache-2.0" ]
713
2019-05-14T19:02:00.000Z
2022-03-31T17:42:23.000Z
notebooks/talk_demo.ipynb
hadrianmontes/jax-md
cea1cc6b22db6044a502eeeab4bddde35ac15d94
[ "ECL-2.0", "Apache-2.0" ]
109
2019-05-15T13:27:09.000Z
2022-03-17T16:15:59.000Z
notebooks/talk_demo.ipynb
hadrianmontes/jax-md
cea1cc6b22db6044a502eeeab4bddde35ac15d94
[ "ECL-2.0", "Apache-2.0" ]
117
2019-05-17T13:23:37.000Z
2022-03-18T10:32:29.000Z
26.462721
229
0.445655
[ [ [ "<a href=\"https://colab.research.google.com/github/google/jax-md/blob/main/notebooks/talk_demo.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ] ], [ [ "#@title Import & Util\n\n!pip install -q git+https://www.github.com/google/jax\n!pip install -q git+https://www.github.com/google/jax-md\n!pip install dm-haiku\n!pip install optax\n\nimport jax.numpy as np\nfrom jax import device_put\nfrom jax.config import config\n# TODO: Uncomment this and enable warnings when XLA bug is fixed.\nimport warnings; warnings.simplefilter('ignore')\n# config.update('jax_enable_x64', True)\nfrom IPython.display import set_matplotlib_formats\nset_matplotlib_formats('pdf', 'svg')\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport pickle\n\nimport warnings\nwarnings.simplefilter(\"ignore\")\n\nsns.set_style(style='white')\nbackground_color = [56 / 256] * 3\ndef plot(x, y, *args):\n plt.plot(x, y, *args, linewidth=3)\n plt.gca().set_facecolor([1, 1, 1])\ndef draw(R, **kwargs):\n if 'c' not in kwargs:\n kwargs['color'] = [1, 1, 0.9]\n ax = plt.axes(xlim=(0, float(np.max(R[:, 0]))), \n ylim=(0, float(np.max(R[:, 1]))))\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n ax.set_facecolor(background_color)\n plt.scatter(R[:, 0], R[:, 1], marker='o', s=1024, **kwargs)\n plt.gcf().patch.set_facecolor(background_color)\n plt.gcf().set_size_inches(6, 6)\n plt.tight_layout()\ndef draw_big(R, **kwargs):\n if 'c' not in kwargs:\n kwargs['color'] = [1, 1, 0.9]\n fig = plt.figure(dpi=128)\n ax = plt.axes(xlim=(0, float(np.max(R[:, 0]))),\n ylim=(0, float(np.max(R[:, 1]))))\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n ax.set_facecolor(background_color)\n s = plt.scatter(R[:, 0], R[:, 1], marker='o', s=0.5, **kwargs)\n s.set_rasterized(True)\n plt.gcf().patch.set_facecolor(background_color)\n plt.gcf().set_size_inches(10, 10)\n plt.tight_layout()\ndef draw_displacement(R, dR):\n plt.quiver(R[:, 0], R[:, 1], dR[:, 0], dR[:, 1], color=[1, 0.5, 0.5])\n\n# Progress Bars\n\nfrom IPython.display import HTML, display\nimport time\n\ndef ProgressIter(iter_fun, iter_len=0):\n if not iter_len:\n iter_len = len(iter_fun)\n out = display(progress(0, iter_len), display_id=True)\n for i, it in enumerate(iter_fun):\n yield it\n out.update(progress(i + 1, iter_len))\n\ndef progress(value, max):\n return HTML(\"\"\"\n <progress\n value='{value}'\n max='{max}',\n style='width: 45%'\n >\n {value}\n </progress>\n \"\"\".format(value=value, max=max))\n\n# Data Loading\n\n!wget -O silica_train.npz https://www.dropbox.com/s/3dojk4u4di774ve/silica_train.npz?dl=0\n!wget https://raw.githubusercontent.com/google/jax-md/main/examples/models/si_gnn.pickle\n\nimport numpy as onp\n\nwith open('silica_train.npz', 'rb') as f:\n files = onp.load(f)\n Rs, Es, Fs = [device_put(x) for x in (files['arr_0'], files['arr_1'], files['arr_2'])]\n Rs = Rs[:10]\n Es = Es[:10]\n Fs = Fs[:10]\n test_Rs, test_Es, test_Fs = [device_put(x) for x in (files['arr_3'], files['arr_4'], files['arr_5'])]\n test_Rs = test_Rs[:200]\n test_Es = test_Es[:200]\n test_Fs = test_Fs[:200]\n\ndef tile(box_size, positions, tiles):\n pos = positions\n for dx in range(tiles):\n for dy in range(tiles):\n for dz in range(tiles):\n if dx == 0 and dy == 0 and dz == 0:\n continue\n pos = np.concatenate((pos, positions + box_size * np.array([[dx, dy, dz]])))\n return box_size * tiles, pos", "_____no_output_____" ] ], [ [ "## 
Demo\n\nwww.github.com/google/jax-md -> notebooks -> talk_demo.ipynb", "_____no_output_____" ], [ "### Energy and Automatic Differentiation", "_____no_output_____" ], [ "$u(r) = \\begin{cases}\\frac13(1 - r)^3 & \\text{if $r < 1$} \\\\ 0 & \\text{otherwise} \\end{cases}$", "_____no_output_____" ] ], [ [ "import jax.numpy as np\n\ndef soft_sphere(r):\n return np.where(r < 1, \n 1/3 * (1 - r) ** 3,\n 0.)\n\nprint(soft_sphere(0.5))", "_____no_output_____" ], [ "r = np.linspace(0, 2., 200)\nplot(r, soft_sphere(r))", "_____no_output_____" ] ], [ [ "We can compute its derivative automatically", "_____no_output_____" ] ], [ [ "from jax import grad\n\ndu_dr = grad(soft_sphere)\n\nprint(du_dr(0.5))", "_____no_output_____" ] ], [ [ "We can vectorize the derivative computation over many radii", "_____no_output_____" ] ], [ [ "from jax import vmap\n\ndu_dr_v = vmap(du_dr)\n\nplot(r, soft_sphere(r))\nplot(r, -du_dr_v(r))", "_____no_output_____" ] ], [ [ "### Randomly Initialize a System", "_____no_output_____" ] ], [ [ "from jax import random\n\nkey = random.PRNGKey(0)\n\nparticle_count = 128\ndim = 2", "_____no_output_____" ], [ "from jax_md.quantity import box_size_at_number_density\n\n# number_density = N / V\nbox_size = box_size_at_number_density(particle_count = particle_count, \n number_density = 1.0, \n spatial_dimension = dim)\n\nR = random.uniform(key, (particle_count, dim), maxval=box_size)\ndraw(R)", "_____no_output_____" ] ], [ [ "### Displacements and Distances\n", "_____no_output_____" ] ], [ [ "from jax_md import space\n\ndisplacement, shift = space.periodic(box_size)\n\nprint(displacement(R[0], R[1]))", "_____no_output_____" ], [ "metric = space.metric(displacement)\n\nprint(metric(R[0], R[1]))", "_____no_output_____" ] ], [ [ "Compute distances between pairs of points", "_____no_output_____" ] ], [ [ "displacement = space.map_product(displacement)\nmetric = space.map_product(metric)\n\nprint(metric(R[:3], R[:3]))", "_____no_output_____" ] ], [ [ "### Total energy of a system", "_____no_output_____" ] ], [ [ "def energy(R):\n dr = metric(R, R)\n return 0.5 * np.sum(soft_sphere(dr))", "_____no_output_____" ], [ "print(energy(R))", "_____no_output_____" ], [ "print(grad(energy)(R).shape)", "_____no_output_____" ] ], [ [ "### Minimization", "_____no_output_____" ] ], [ [ "from jax_md.minimize import fire_descent\n\ninit_fn, apply_fn = fire_descent(energy, shift)", "_____no_output_____" ], [ "state = init_fn(R)\n\ntrajectory = []\n\nwhile np.max(np.abs(state.force)) > 1e-3:\n state = apply_fn(state)\n trajectory += [state.position]", "_____no_output_____" ], [ "from jax_md.colab_tools import renderer\n\ntrajectory = np.stack(trajectory)\n\nrenderer.render(box_size,\n {'particles': renderer.Disk(trajectory)},\n resolution=(512, 512))", "_____no_output_____" ], [ "cond_fn = lambda state: np.max(np.abs(state.force)) > 1e-3", "_____no_output_____" ] ], [ [ "### Making it Fast", "_____no_output_____" ] ], [ [ "def minimize(R):\n init, apply = fire_descent(energy, shift)\n\n state = init(R)\n\n for _ in range(20):\n state = apply(state)\n\n return energy(state.position)", "_____no_output_____" ], [ "%%timeit\nminimize(R).block_until_ready()", "_____no_output_____" ], [ "from jax import jit\n\n# Just-In-Time compile to GPU\nminimize = jit(minimize)", "_____no_output_____" ], [ "# The first call incurs a compilation cost\nminimize(R)", "_____no_output_____" ], [ "%%timeit\nminimize(R).block_until_ready()", "_____no_output_____" ], [ "from jax.lax import while_loop\n\ndef minimize(R):\n init_fn, 
apply_fn = fire_descent(energy, shift)\n\n state = init_fn(R)\n # Using a JAX loop reduces compilation cost\n state = while_loop(cond_fun=cond_fn,\n body_fun=apply_fn,\n init_val=state)\n\n return state.position", "_____no_output_____" ], [ "from jax import jit\n\nminimize = jit(minimize)", "_____no_output_____" ], [ "R_is = minimize(R)", "_____no_output_____" ], [ "%%timeit\nminimize(R).block_until_ready()", "_____no_output_____" ] ], [ [ "### Elastic Moduli", "_____no_output_____" ] ], [ [ "displacement, shift = space.periodic_general(box_size, \n fractional_coordinates=False)", "_____no_output_____" ], [ "from jax_md import energy\n\nsoft_sphere = energy.soft_sphere_pair(displacement, alpha=3)\n\nprint(soft_sphere(R_is))", "_____no_output_____" ], [ "strain_energy = lambda strain, R: soft_sphere(R, new_box=box_size * strain)", "_____no_output_____" ], [ "from jax import hessian\n\nelastic_constants = hessian(strain_energy)(np.eye(2), R_is)", "_____no_output_____" ], [ "elastic_constants.shape", "_____no_output_____" ], [ "from jax_md.quantity import bulk_modulus\n\nB = bulk_modulus(elastic_constants)\nprint(B)", "_____no_output_____" ], [ "from functools import partial\n\n@jit\ndef elastic_moduli(number_density, key):\n # Randomly initialize particles.\n box_size = box_size_at_number_density(particle_count = particle_count, \n number_density = number_density, \n spatial_dimension = dim)\n R = random.uniform(key, (particle_count, dim), maxval=box_size)\n\n # Create the space and energy function.\n displacement, shift = space.periodic_general(box_size, \n fractional_coordinates=False)\n soft_sphere = energy.soft_sphere_pair(displacement, alpha=3)\n\n # Minimize at no strain.\n init_fn, apply_fn = fire_descent(soft_sphere, shift)\n\n state = init_fn(R)\n state = while_loop(cond_fn, apply_fn, state)\n\n # Compute the bulk modulus.\n strain_energy = lambda strain, R: soft_sphere(R, new_box=box_size * strain)\n elastic_constants = hessian(strain_energy)(np.eye(2), state.position)\n return bulk_modulus(elastic_constants)", "_____no_output_____" ], [ "number_densities = np.linspace(1.0, 1.6, 40)\n\nelastic_moduli = vmap(elastic_moduli, in_axes=(0, None))\nB = elastic_moduli(number_densities, key)\n\nplot(number_densities, B)", "_____no_output_____" ], [ "keys = random.split(key, 10)\n\nelastic_moduli = vmap(elastic_moduli, in_axes=(None, 0))\nB_ensemble = elastic_moduli(number_densities, keys)\n\nfor B in B_ensemble:\n plt.plot(number_densities, B)\n\nplot(number_densities, np.mean(B_ensemble, axis=0), 'k')", "_____no_output_____" ] ], [ [ "### Going Big", "_____no_output_____" ] ], [ [ "key = random.PRNGKey(0)\n\nparticle_count = 128000\nbox_size = box_size_at_number_density(particle_count = particle_count, \n number_density = 1.0, \n spatial_dimension = dim)\n\n\nR = random.uniform(key, (particle_count, dim)) * box_size\n\ndisplacement, shift = space.periodic(box_size)\n\nrenderer.render(box_size,\n {'particles': renderer.Disk(R)},\n resolution=(512, 512))", "_____no_output_____" ], [ "from jax_md.energy import soft_sphere_neighbor_list\n\nneighbor_fn, energy_fn = soft_sphere_neighbor_list(displacement, box_size)\n\ninit_fn, apply_fn = fire_descent(energy_fn, shift)", "_____no_output_____" ], [ "nbrs = neighbor_fn(R)\nprint(nbrs.idx.shape)", "_____no_output_____" ], [ "state = init_fn(R, neighbor=nbrs)\n\ndef cond_fn(state_and_nbrs):\n state, _ = state_and_nbrs\n return np.any(np.abs(state.force) > 1e-3)\n\ndef step_fn(state_and_nbrs):\n state, nbrs = state_and_nbrs\n nbrs = 
neighbor_fn(state.position, nbrs)\n state = apply_fn(state, neighbor=nbrs)\n return state, nbrs\n\nstate, nbrs = while_loop(cond_fn,\n step_fn,\n (state, nbrs))\n\nrenderer.render(box_size,\n {'particles': renderer.Disk(state.position)},\n resolution=(700, 700))", "_____no_output_____" ], [ "nbrs = neighbor_fn(state.position)", "_____no_output_____" ], [ "nbrs.idx.shape", "_____no_output_____" ] ], [ [ "## Neural Network Potentials", "_____no_output_____" ], [ "Here is some data we loaded of a 64-atom Silicon system computed using DFT.", "_____no_output_____" ] ], [ [ "print(Rs.shape) # Positions\nprint(Es.shape) # Energies\nprint(Fs.shape) # Forces", "_____no_output_____" ], [ "E_mean = np.mean(Es)\nE_std = np.std(Es)\n\nprint(f'E_mean = {E_mean}, E_std = {E_std}')", "_____no_output_____" ], [ "plt.hist(Es)", "_____no_output_____" ] ], [ [ "Setup the system and a Graph Neural Network energy function", "_____no_output_____" ] ], [ [ "box_size = 10.862\ndisplacement, shift = space.periodic(box_size)", "_____no_output_____" ], [ "from jax_md.energy import graph_network\n\ninit_fn, energy_fn = graph_network(displacement, r_cutoff=3.0)", "_____no_output_____" ], [ "params = init_fn(key, test_Rs[0])\nenergy_fn(params, test_Rs[0])", "_____no_output_____" ], [ "vectorized_energy_fn = vmap(energy_fn, (None, 0))\npredicted_Es = vectorized_energy_fn(params, test_Rs)\nplt.plot(test_Es, predicted_Es, 'o')", "_____no_output_____" ] ], [ [ "Define a loss function.", "_____no_output_____" ] ], [ [ "def energy_loss_fn(params):\n return np.mean((vectorized_energy_fn(params, Rs) - Es) ** 2)\n\ndef force_loss_fn(params):\n # We want the gradient with respect to the position, not the parameters.\n grad_fn = vmap(grad(energy_fn, argnums=1), (None, 0))\n return np.mean((grad_fn(params, Rs) + Fs) ** 2)\n\n@jit\ndef loss_fn(params):\n return energy_loss_fn(params) + force_loss_fn(params)", "_____no_output_____" ] ], [ [ "Take a few steps of gradient descent.", "_____no_output_____" ] ], [ [ "import optax\n\nopt = optax.chain(optax.clip_by_global_norm(0.01),\n optax.adam(1e-4))\n\nopt_state = opt.init(params)\n\n@jit\ndef update(params, opt_state):\n updates, opt_state = opt.update(grad(loss_fn)(params), opt_state)\n return optax.apply_updates(params, updates), opt_state\n\nfor i in ProgressIter(range(100)):\n params, opt_state = update(params, opt_state)\n if i % 10 == 0:\n print(f'Loss at step {i} is {loss_fn(params)}')", "_____no_output_____" ], [ "predicted_Es = vectorized_energy_fn(params, test_Rs)\nplt.plot(test_Es, predicted_Es, 'o')", "_____no_output_____" ] ], [ [ "Now load a pretrained model.", "_____no_output_____" ] ], [ [ "with open('si_gnn.pickle', 'rb') as f:\n params = pickle.load(f)", "_____no_output_____" ], [ "from functools import partial\nenergy_fn = partial(energy_fn, params)", "_____no_output_____" ], [ "predicted_Es = vmap(energy_fn)(test_Rs)\nplt.plot(test_Es, predicted_Es, 'o')", "_____no_output_____" ], [ "from jax_md.quantity import force\n\nforce_fn = force(energy_fn)\npredicted_Fs = force_fn(test_Rs[1])\n\nplt.plot(test_Fs[1].reshape((-1,)), predicted_Fs.reshape((-1,)), 'o')", "_____no_output_____" ] ], [ [ "This energy can be used in a simulation", "_____no_output_____" ] ], [ [ "from jax_md.simulate import nvt_nose_hoover\nfrom jax_md.quantity import temperature\n\nK_B = 8.617e-5\ndt = 1e-3\nkT = K_B * 300 \nSi_mass = 2.91086E-3\n\ninit_fn, apply_fn = nvt_nose_hoover(energy_fn, shift, dt, kT)\n\napply_fn = jit(apply_fn)", "_____no_output_____" ], [ "from jax.lax import 
fori_loop\n\nstate = init_fn(key, Rs[0], Si_mass, T_initial=300 * K_B)\n\n@jit\ndef take_steps(state):\n return fori_loop(0, 100, lambda i, state: apply_fn(state), state)\n\ntimes = np.arange(100) * dt\ntemperatures = []\ntrajectory = []\n\nfor _ in ProgressIter(times):\n state = take_steps(state)\n\n temperatures += [temperature(state.velocity, Si_mass) / K_B]\n trajectory += [state.position]", "_____no_output_____" ], [ "plot(times, temperatures)", "_____no_output_____" ], [ "trajectory = np.stack(trajectory)\n\nrenderer.render(box_size,\n {'atoms': renderer.Sphere(trajectory)},\n resolution=(512,512))", "_____no_output_____" ], [ "box_size, R = tile(box_size, Rs[0], 3)", "_____no_output_____" ], [ "displacement, shift = space.periodic(box_size)\n\nneighbor_fn, _, energy_fn = energy.graph_network_neighbor_list(displacement, \n box_size,\n r_cutoff=3.0,\n dr_threshold=0.5)\nenergy_fn = partial(energy_fn, params)\n\ninit_fn, apply_fn = nvt_nose_hoover(energy_fn, shift, dt, kT)\n\napply_fn = jit(apply_fn)", "_____no_output_____" ], [ "nbrs = neighbor_fn(R)\nstate = init_fn(key, R, Si_mass, T_initial=300 * K_B, neighbor=nbrs)\n\ndef step_fn(i, state_and_nbrs):\n state, nbrs = state_and_nbrs\n nbrs = neighbor_fn(state.position, nbrs)\n state = apply_fn(state, neighbor=nbrs)\n return state, nbrs\n\ntimes = np.arange(100) * dt\ntemperatures = []\ntrajectory = []\n\nfor _ in ProgressIter(times):\n state, nbrs = fori_loop(0, 100, step_fn, (state, nbrs))\n\n temperatures += [temperature(state.velocity, Si_mass) / K_B]\n trajectory += [state.position]", "_____no_output_____" ], [ "trajectory = np.stack(trajectory)\n\nrenderer.render(box_size,\n {\n 'atoms': renderer.Sphere(trajectory,\n color=np.array([0, 0, 1])),\n 'bonds': renderer.Bond('atoms', nbrs.idx,\n color=np.array([1, 0, 0]))\n },\n resolution=(512,512))", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a3a62458408be32333aa55dc105493ea021593f
52,885
ipynb
Jupyter Notebook
materials/materials/lectures/05-containerization.ipynb
UBC-DSCI/reproducible-and-trustworthy-workflows-for-data-science
6676e628ec858acdfae43fbf5b27891849952e83
[ "CC-BY-4.0" ]
null
null
null
materials/materials/lectures/05-containerization.ipynb
UBC-DSCI/reproducible-and-trustworthy-workflows-for-data-science
6676e628ec858acdfae43fbf5b27891849952e83
[ "CC-BY-4.0" ]
1
2022-01-06T00:23:55.000Z
2022-01-06T00:23:55.000Z
materials/materials/lectures/05-containerization.ipynb
UBC-DSCI/reproducible-and-trustworthy-workflows-for-data-science
6676e628ec858acdfae43fbf5b27891849952e83
[ "CC-BY-4.0" ]
null
null
null
46.43108
792
0.635
[ [ [ "# Managing dependencies using containerization", "_____no_output_____" ], [ "## Topic learning objectives\n\nBy the end of this topic, students should be able to:\n\n1. Explain what containers are, and why they can be useful for reproducible data\nanalyses\n2. Discuss the advantages and limitations of containerization (e.g., Docker) in the\ncontext of reproducible data analyses\n3. Compare and contrast the difference between running software/scripts in a virtual\nenvironment, a virtual machine and a container\n4. Evaluate, choose and justify an appropriate environment management solution based\non the data analysis project’s complexity, expected usage and longevity.\n5. Use a containerization software (e.g., Docker) to run the software needed for your\nanalysis\n6. Write a container file (e.g., Dockerfile) that can be used to reproducibly build a\ncontainer image that would contain the needed software and environment\ndependencies of your Data Science project\n7. Use manual and automated tools (e.g., Docker, GitHub Actions) to build and share\ncontainer images\n8. List good container base images for Data Science projects", "_____no_output_____" ], [ "## Introduction to containerization\n\n### Documenting and loading dependencies\n\nYou've made some beautiful data analysis pipeline/project using make, R, and/or Python. It runs on your machine, but how easily can you, or someone else, get it working on theirs? The answer usually is, it depends...\n\nWhat does it depend on?\n\n\n\n", "_____no_output_____" ], [ "1. Does your `README` and your scripts make it blatantly obvious what programming languages and packages need to run your data analysis pipeline/project? \n ", "_____no_output_____" ], [ "2. Do you also document the version numbers of the programming languages and packages you used? This can have big consequences when it comes to reproducibility... (*e.g.*,the [change to random number generation](https://blog.revolutionanalytics.com/2019/05/whats-new-in-r-360.html) in R in 2019?)\n\n3. Did you document what other software (beyond the the programming languages and packages used) and operating system dependencies are needed to run your data analysis pipeline/project?\n\n*Virtual environments can be tremendously helpful with #1 & #2, however, they may or may not be helpful to manage #3...* __*Enter containerization as a possible solution!*__", "_____no_output_____" ], [ "### What is a container?\n\nContainers are another way to generate (and share!) isolated computational environments. They differ from virtual environments (which we discussed previously) in that they are even more isolated from the computers operating system, as well as they can be used share many other types of software, applications and operating system dependencies. \n\nBefore we can fully define containers, however, we need to define **virtualization**. Virtualization is a process that allows us to divide the the elements of a single computer into several virtual elements. These elements can include computer hardware platforms, storage devices, and computer network resources, and even operating system user spaces (e.g., graphical tools, utilities, and programming languages). \n\nContainers virtualize operating system user spaces so that they can isolate the processes they contain, as well as control the processes’ access to computer resources (e.g., CPUs, memory and desk space). 
What this means in practice, is that an operating system user space can be carved up into multiple containers running the same, or different processes, in isolation. Below we show the schematic of a container whose virtual user space contains the:\n- R programming language, the Bioconductor package manager, and two Bioconductor packages\n- Galaxy workflow software and two toolboxes that can be used with it\n- Python programming language, iPython interpreter and Jupyter notebook package\n\n<img src=\"img/13742_2016_article_135_f7.jpeg\" width=250>\n\n**Schematic of a container for genomics research.** Source: <https://doi.org/10.1186/s13742-016-0135-4>\n\n\n#### Exercise - running a simple container\n\nTo further illustrate what a container looks like, and feels like, we can use Docker (containerization software) to run one and explore. First we will run an linux (debian-flavoured) container that has R installed. To run this type:\n\n```\ndocker run --rm -it rocker/r-base:3.6.3\n```\n\nWhen you successfully launch the container, R should have started. Check the version of R - is it the same as your computer's version of R? Use `getwd()` and `list.files()` to explore the containers filesystem from R. Does this look like your computer's filesystem or something else?\n\n#### Exercise - running a container with a web app\n\nNext, try to use Docker to run a container that contains the RStudio server web-application installed:\n\n```\ndocker run --rm -p 8787:8787 -e PASSWORD=\"apassword\" rocker/rstudio:4.1.2\n```\n\nThen visit a web browser on your computer and type: <http://localhost:8787>\n\nIf it worked, then you should be at an RStudio Sign In page. To sign in, use the following credentials:\n\n- **username:** rstudio\n- **password:** apassword\n\nThe RStudio server web app being run by the container should look something like this: \n\n<img src=\"img/rstudio-container-web-app.png\" width=600>\n", "_____no_output_____" ], [ "## Contrasting containers with virtual machines\n\nVirtual machines are another technology that can be used to generate (and share) isolated computational environments. Virtual machines emulate the functionality an entire computer on a another physical computer. With virtual machine the virtualization occurs at the layer of software that sits between the computer's hardware and the operating system(s). This software is called a hypervisor. For example, on a Mac laptop, you could install a program called [Oracle Virtual Box](https://www.virtualbox.org/) to run a virtual machine whose operating system was Windows 10, as the screen shot below shows:\n\n<img src=\"https://www.virtualbox.org/raw-attachment/wiki/Screenshots/Windows_8.1_on_OSX.png\">\n\n\n**A screenshot of a Mac OS computer running a Windows virtual machine.** Source: <https://www.virtualbox.org/wiki/Screenshots>\n\nBelow, we share an illustration that compares where virtualization happens in containers compared to virtual machines. This difference, leads to containers being more light-weight and portable compared to virtual machines, and also less isolated.\n\n<img src=\"img/container_v_vm.png\" width=600>\n\n*Source: https://www.docker.com/resources/what-container*\n\n\n**Key take home:** - Containerization software shares the host's operating system, whereas virtual machines have a completely separate, additional operating system. 
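\nAs a sketch of what such a single text file can look like (an illustrative example, not taken from a specific project), a conda `environment.yml` might pin the language and package versions like this:\n\n```\nname: analysis-env\nchannels:\n  - conda-forge\ndependencies:\n  - python=3.9\n  - pandas=1.3\n  - scikit-learn=1.0\n```\n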
", "_____no_output_____" ], [ "### Containers\n\n#### Advantages\n- Somewhat light-weight in size (manageable for easy sharing - there are tools and software to facilitate this)\n- Possible to capture and share operating system dependencies, and other software your analysis depends upon\n- Computational environment is fully isolated, and errors will occur if dependencies are missing\n- Specify these with a single text file\n- Can share volumes and ports (advantage compared to virtual machines)\n\n#### Disadvantages\n- Possible security issues - running software on your computer that you may allow to be less isolated (i.e., mount volumes, expose ports)\n- Takes some effort to share volumes and ports (disadvantage compared to virtual environments)\n", "_____no_output_____" ], [ "### Virtual machine\n\n#### Advantages\n- High security, because these are much more isolated (filesystem, ports, etc.)\n- Can share an entirely different operating system (might not be so useful in the context of reproducibility however...)\n\n#### Disadvantages\n- Very big in size, which can make it prohibitive to share them\n- Takes great effort to share volumes and ports - which makes it hard to give access to data on your computer\n", "_____no_output_____" ], [ "## Container usage workflow\n\nA schematic of the container usage workflow from a [blog post](https://blog.octo.com/en/docker-registry-first-steps/) by Arnaud Mazin:\n\n<img src=\"img/docker-stages.png\" width=600>\n\n*Source: [OctoTalks](https://blog.octo.com/en/docker-registry-first-steps/)*", "_____no_output_____" ], [ "## Image vs container?\n\nAnalogy: The program Chrome is like a Docker image, whereas a Chrome window is like a Docker container.\n\n<img src=\"img/instance_analogy.png\" width=\"600\">", "_____no_output_____" ], [ "You can list the container **images** on your computer that you pulled using Docker via `docker images`. When you do, you should see a list like this:\n\n```\nREPOSITORY                 TAG       IMAGE ID       CREATED         SIZE\nrocker/rstudio             4.1.2     ff47c56c9c0b   8 days ago      1.89GB\ncontinuumio/miniconda3     latest    4d529c886124   4 weeks ago     399MB\njupyter/base-notebook      latest    8610b7acbd67   5 weeks ago     683MB\njupyter/minimal-notebook   latest    4801dcfde35b   2 months ago    1.38GB\nrocker/r-base              latest    91af7f4c94cd   3 months ago    814MB\nubuntu                     focal     ba6acccedd29   3 months ago    72.8MB\nrocker/r-base              3.6.3     ddcf1852524d   23 months ago   679MB\n```\n\nYou can list the states of containers that have been started by Docker on your computer (and not yet removed) via `docker ps -a`:\n\n```\nCONTAINER ID   IMAGE                  COMMAND   CREATED          STATUS          PORTS                                       NAMES\n9160100c7d4b   rocker/r-base:3.6.3    \"R\"       5 seconds ago    Up 4 seconds                                                friendly_merkle\n0d0871c90313   rocker/rstudio:4.1.2   \"/init\"   33 minutes ago   Up 33 minutes   0.0.0.0:8787->8787/tcp, :::8787->8787/tcp   exciting_kepler\n```", "_____no_output_____" ], [ "## What is a container registry\n\nA container registry is a remote repository used to share container images. This is similar to remote version control repositories for sharing code. Instead of code, however, it is container images that are pushed and pulled to/from there. There are many container registries that can be used, but for this course we will focus on the widely-used DockerHub container registry: <https://hub.docker.com/>\n\n#### Demonstration\n\nLet's visit the repositories for the two container images that we used in the exercise earlier in class:\n\n- [rocker/r-base](https://hub.docker.com/r/rocker/r-base)\n- [rocker/rstudio](https://hub.docker.com/r/rocker/rstudio)\n\nQuestion: how did we get the images for the exercise earlier in class? We were just prompted to type `docker run...`\n\nAnswer: `docker run ...` will first look for images you have locally, and run those if they exist. If they do not exist, it then attempts to pull the image from DockerHub.
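\n\nTo share your own image, the flow runs the other way: you tag a local image with your registry namespace and push it. A sketch (the image and account names here are placeholders; this assumes you have a DockerHub account and are logged in via `docker login`):\n\n```\n# Tag a local image with your DockerHub namespace and a version\ndocker tag mylocalimage yourusername/mylocalimage:v0.1.0\n\n# Push it to DockerHub so that others can pull it\ndocker push yourusername/mylocalimage:v0.1.0\n```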
", "_____no_output_____" ], [ "## How do we specify a container image?\n\nContainer images are specified from plain text files! In the case of the Docker containerization software, we call these `Dockerfiles`. We will explain these in more detail later; however, for now it is useful to look at one to get a general idea of their structure:\n\nExample `Dockerfile`:\n\n```\nFROM continuumio/miniconda3\n\n# Install Jupyter, JupyterLab, R & the IRkernel\nRUN conda install -y --quiet \\\n    jupyter \\\n    jupyterlab=3.* \\\n    r-base=4.1.* \\\n    r-irkernel\n\n# Install JupyterLab Git Extension\nRUN pip install jupyterlab-git\n\n# Create working directory for mounting volumes\nRUN mkdir -p /opt/notebooks\n\n# Make port 8888 available for JupyterLab\nEXPOSE 8888\n\n# Install Git, the nano-tiny text editor and less (needed for R help)\nRUN apt-get update && \\\n    apt-get install --yes \\\n    git \\\n    nano-tiny \\\n    less\n\n# Copy JupyterLab start-up script into container\nCOPY start-notebook.sh /usr/local/bin/\n\n# Change permission of start-up script and execute it\nRUN chmod +x /usr/local/bin/start-notebook.sh\nENTRYPOINT [\"/usr/local/bin/start-notebook.sh\"]\n\n# Switch to starting in the directory where volumes will be mounted\nWORKDIR \"/opt/notebooks\"\n```\n\nThe commands in all capitals are Docker commands. `Dockerfile`s typically start with a `FROM` command that specifies which base image the new image should be built off. Docker images are built in layers - this helps make them more light-weight. The `FROM` command is usually followed by `RUN` commands that install new software or execute configuration commands. Other commands in this example copy in needed configuration files, expose ports, specify the working directory, and specify programs to execute at start-up.\n\n#### Demonstration of container images being built from layers\n\nLet's take a look at the `Dockerfile` for the `jupyter/docker-stacks` `r-notebook` container image:\n- [Dockerfile](https://github.com/jupyter/docker-stacks/blob/master/r-notebook/Dockerfile)\n\n*Question: What images does it build off?*", "_____no_output_____" ], [ "## Running containers\n\nBelow we demonstrate how to run containers using the [`continuumio/miniconda3` image](https://hub.docker.com/r/continuumio/miniconda3) as an example:", "_____no_output_____" ], [ "#### Step 1 - launch the Docker app (for OSX & Windows only)\n- Use launchpad/Finder/Start menu/etc. to find and launch Docker\n\n> Note: Docker might already be running - if so, great - but if it's not, the commands below will not work. So it is always good to check!", "_____no_output_____" ], [ "#### Step 2 - get the container image from DockerHub\n- open the terminal\n- type: `docker pull continuumio/miniconda3`\n- verify that it was successfully pulled by typing `docker images`; you should see something like:\n```\nREPOSITORY               TAG      IMAGE ID       CREATED       SIZE\ncontinuumio/miniconda3   latest   4d529c886124   4 weeks ago   399MB\n```\n\n> Note 1: You can skip this step and go straight to `docker run ...`, as that command will pull the image if you do not have it locally.\n>\n> Note 2: If you ever need to delete a container image from your computer, you can run `docker rmi <IMAGE_ID>` to do so.", "_____no_output_____" ], [ "#### Step 3 - launch a container from the image and poke around!\n\n- type: `docker run -it continuumio/miniconda3`\n- If it worked, then your command line prompt should now look something like this:\n\n```\nroot@ad0560c5b81a:/# \n```\n- use `ls`, `cd`, `pwd` and explore the container\n- type `exit` to leave when you are done (your prompt will look normal again)!", "_____no_output_____" ], [ "#### Step 4 - clean up your container!\n\n- After you close a container it still \"hangs\" around... \n- View any existing containers using `docker ps -a`\n- Remove the container by typing `docker rm <container_id>`\n- Prove to yourself that the container is no longer \"hanging around\" via `docker ps -a`, but that you still have the image installed (via `docker images`)\n\n> Note: to remove running containers, you will need to first stop them via `docker stop <container_id>`", "_____no_output_____" ], [ "#### That's a lot of work...\n\n- We can tell Docker to delete the container upon exit using the `--rm` flag in the run command.\n- Type the command below to run the container again, exit it and prove to yourself that the container was deleted (but not the image!):\n\n```\ndocker run -it --rm continuumio/miniconda3\n```", "_____no_output_____" ], [ "## Mounting volumes to containers\n\nOftentimes we want to use the software made available to us in containers on files on our computers. \nTo do this, we need to explicitly tell Docker to mount a volume to the container. \nWe can do this via: `-v <path_to_computer_directory>:<absolute_path_to_container_directory>`\n\nOften, we want to mount the volume from our current directory (where we are working) and we can do that with a short-form of `/$(pwd)` in place of the path to our computer's directory.\n\nTo mount our current directory to a container from the `continuumio/miniconda3` image, we type the following on our laptop:\n\n```\ndocker run -it --rm -v /$(pwd):/home/my_mounted_volume continuumio/miniconda3\n```\n\nNavigate to the directory where you mounted your files via `cd /home/my_mounted_volume` and type `ls` to ensure you can see them.\n\n> Note: if you are mounting volumes to a container from a Docker image that runs a web app, be sure to read the documentation to see where you should mount that volume. Usually the web apps are only exposed to certain directories and you will only be able to access the files in the mounted volume if you mount them to the correct place. For example, in the `rocker/rstudio` image that we loaded earlier, volumes need to be mounted within `/home/rstudio/` to be able to access them via the RStudio server web app (see the sketch below).\n\n### Windows notes for mounting volumes:\n- Windows machines need to explicitly share drives with Docker - this should be part of your computer setup!\n- On Windows, the laptop path depends on what shell you are using; here are some details:\n    - If you are going to run it in Windows terminal, then the command should be: `docker run --rm -it -v /$(pwd):<PATH_ON_CONTAINER> <IMAGE_NAME>` to share the current directory.\n    - If you are going to run it in Power Shell, then the command should be: `docker run --rm -it -v <ABSOLUTE_PATH_TO_CONTAINER>:<PATH_ON_CONTAINER> <IMAGE_NAME>` (`pwd` and variants do not seem to work). And the path must be formatted like: `C:\\Users\\tiffany.timbers\\Documents\\project\\:/home/project`
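\n\nPutting these pieces together, here is a sketch of mounting your current directory into the `rocker/rstudio` container (the container path follows the note above; the target directory name `project` and the password are arbitrary, and the `-p` flag is explained in the next section):\n\n```\ndocker run --rm \\\n    -p 8787:8787 \\\n    -e PASSWORD=\"apassword\" \\\n    -v /$(pwd):/home/rstudio/project \\\n    rocker/rstudio:4.1.2\n```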
", "_____no_output_____" ], [ "## Mapping ports to containers with web apps\n\n[Docker documentation on Container networking](https://docs.docker.com/config/containers/container-networking/)\n\nIf we want to use a graphical user interface (GUI) with our containers, for example to be able to use the computational environment in the container in an integrated development environment (IDE) such as RStudio or JupyterLab, then we need to map the correct port from the container to a port on our computer.\n\nTo do this, we use the `-p` flag with `docker run`, specifying the port on your computer (the container/Docker host) on the left-hand side, and the port in the container on the right-hand side of `:`. For example, to run the `rocker/rstudio` container image we would type `-p 8787:8787` to map the ports as shown in the `docker run` command below:\n\n\n```\ndocker run --rm -p 8787:8787 -e PASSWORD=\"apassword\" rocker/rstudio:4.1.2\n```\n\nThen to access the web app, we need to navigate a browser to `http://localhost:<COMPUTER_PORT>`. In this case we would navigate to <http://localhost:8787> to use the RStudio server web app from the container.\n\nNote that we can only map one port on our computer (the container/Docker host) to a container at any given time. However,\nour computer (the container/Docker host) has many ports we can choose from to map. So if we wanted to run a second `rocker/rstudio` container, then we could map it to a different port as shown below:\n\n```\ndocker run --rm -p 8788:8787 -e PASSWORD=\"apassword\" rocker/rstudio:4.1.2\n```\n\nWhen we do this, to run the app in a browser on our computer, we need to go to <http://localhost:8788> (instead of <http://localhost:8787>) to access this container, as we mapped it to the `8788` port on our computer (and not `8787`).\n\n\nAnother important note is that the container port is specific to the container and the web app installed therein. So we cannot change that without changing the container image and/or the application installed therein. Where do you learn what port is exposed in a container image? The image documentation should specify this. For example, in the [`rocker/rstudio` container image documentation](https://hub.docker.com/r/rocker/rstudio) it states:\n\n<img src=\"img/rocker-rstudio-port-docs.png\" width=600>\n\n*Source: <https://hub.docker.com/r/rocker/rstudio>*", "_____no_output_____" ], [ "## Running a Docker container non-interactively\n\nSo far we have been running our containers interactively, but sometimes we want to automate further and run things non-interactively. We do this by dropping the `-it` flag from our `docker run` command, as well as calling a command or a script after the Docker image is specified.\n\nThe general form for running things non-interactively is this:\n\n```\ndocker run --rm -v PATH_ON_YOUR_COMPUTER:VOLUME_ON_CONTAINER DOCKER_IMAGE PROGRAM_TO_RUN PROGRAM_ARGUMENTS\n```\n\nFor example, let's use the container to run a `cowsay::say` function call to print some ASCII art with a cute message! \n\n```\n$ docker run --rm ttimbers/dockerfile-practice:v0.1.0 Rscript -e \"library(cowsay); say('Snow again this week?', 'snowman')\"\n```\n\n\nAnd if successful, we should get:\n\n```\n----- \nSnow again this week? \n ------ \n    \\   \n     \\\n     _[_]_\n      (\")\n  >--( : )--<\n    (__:__) [nosig]\n``` \n\nNow that was a silly example, but this can be made powerful so that we can run an analysis pipeline, such as a Makefile, non-interactively using Docker! Here's a demo we can try: https://github.com/ttimbers/data_analysis_pipeline_eg/tree/v4.0\n\n#### Exercise 1: \n\nDownload https://github.com/ttimbers/data_analysis_pipeline_eg/archive/v4.0.zip, unzip it, navigate to the root of the project directory, and try to run the analysis via `make all`.\n\n#### Exercise 2: \n\nNow try to run the analysis using Docker via:\n\n```\ndocker run --rm -v /$(pwd):/home/rstudio/data_analysis_eg ttimbers/data_analysis_pipeline_eg make -C /home/rstudio/data_analysis_eg all\n```\n\n*Note: Windows users must use Git Bash, set Docker to use Linux containers, and have shared their drives with Docker (see docs [here](https://token2shell.com/howto/docker/sharing-windows-folders-with-containers/)) for this to work.*", "_____no_output_____" ], [ "## Docker commands\n\nThe table below summarizes the Docker commands we have learned so far and can serve as a useful reference when we are using Docker:\n\n| command/flag | What it does | \n|--------------|-----------------------|\n| `pull` | Downloads a Docker image from Docker Hub |\n| `images` | Tells you what container images are installed on your machine |\n| `rmi` | Deletes a specified container image from your machine |\n| `ps -a` | Tells you what containers are running on your machine |\n| `stop` | Stops a specified running container |\n| `rm` | Removes a specified stopped container |\n| `run` | Launches a container from an image |\n| `-it` | Tells Docker to run the container interactively |\n| `--rm` | Makes a container ephemeral (deletes it upon exit) |\n| `-v` | Mounts a volume of your computer to the Docker container |\n| `-p` | Specifies the ports to map a web app to |\n| `-e` | Sets environment variables in the container (*e.g.*, PASSWORD=\"apassword\") |\n| `exit` | Exits a Docker container |", "_____no_output_____" ], [ "## Building container images from `Dockerfile`s\n\n- A `Dockerfile` is a plain text file that contains commands, primarily about what software to install in the Docker image. This is the more trusted and transparent way to build Docker images.\n\n- Once we have created a `Dockerfile` we can build it into a Docker image.\n\n- Docker images are built in layers, and as such, `Dockerfiles` always start by specifying a base Docker image that the new image is to be built on top of.\n\n- Docker containers are all Linux containers and thus use Linux commands to install software; however, there are different flavours of Linux (e.g., Ubuntu, Debian, CentOS, RedHat, etc.) and thus you need to use the right Linux install commands to match the flavour of your container. For this course we will focus on Ubuntu- or Debian-based images and thus use `apt-get` as our installation program.\n\n", "_____no_output_____" ], [ "### Workflow for building a Dockerfile\n\n1. Choose a base image to build off (from https://hub.docker.com/).\n\n2. Create a `Dockerfile` named `Dockerfile` and save it in an appropriate project repository. Open that file and type `FROM <BASE_IMAGE>` on the first line.\n\n3. In a terminal, type `docker run --rm -it <IMAGE_NAME>` and interactively try the install commands you think will work. Edit and try again until the install command works.\n\n4. Write the working install commands in the `Dockerfile`, preceding them with `RUN`, and save the `Dockerfile`.\n\n5. After adding every 2-3 commands to your `Dockerfile`, try building the Docker image via `docker build --tag <TEMP_IMAGE_NAME> <PATH_TO_DOCKERFILE_DIRECTORY>`.\n\n6. Once the entire Dockerfile works from beginning to end on your laptop, then you can finally move to building remotely (e.g., creating a trusted build on GitHub Actions).
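\n\nOne pass through the build-and-test loop in steps 3-5 might look something like this in the shell (a sketch; `mytest` is an arbitrary tag, and `.` assumes the `Dockerfile` sits in the current directory):\n\n```\n# Build the image from the Dockerfile in the current directory\ndocker build --tag mytest .\n\n# Poke around inside it interactively to check the installs\ndocker run --rm -it mytest\n\n# Remove the image if it needs more work, then edit and rebuild\ndocker rmi mytest\n```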
", "_____no_output_____" ], [ "### Demo workflow for creating a `Dockerfile` locally\n\nWe will demo this workflow together to build a Docker image locally on our machines that has R and the `cowsay` R package installed.\n\nLet's start with the `debian:stable` image, so the first line of our `Dockerfile` should be:\n\n```\nFROM debian:stable\n```\n\nNow let's run the `debian:stable` image so we can work on our install commands to find some that work!\n\n```\n$ docker run --rm -it debian:stable\n```\n\nNow that we are in a container instance of the `debian:stable` Docker image, we can start playing around with installing things. To install things in the Debian flavour of Linux we use the command `apt-get`. We will do some demos in class today, but a more comprehensive tutorial can be found [here](https://www.digitalocean.com/community/tutorials/how-to-manage-packages-in-ubuntu-and-debian-with-apt-get-apt-cache).\n\nTo install R on Debian, we can figure out how to do this by following the CRAN documentation available [here](https://cran.r-project.org/bin/linux/debian/).\n\nFirst, they recommend updating the list of software packages available to us via `apt-get`, using the `apt-get update` command:\n\n```\nroot@5d0f4d21a1f9:/# apt-get update\n```\n\nNext, they suggest the following command to install R:\n\n```\nroot@5d0f4d21a1f9:/# apt-get install r-base r-base-dev\n```\n\nOK, great! That seemed to have worked! Let's test it by trying out R! \n\n```\nroot@5d0f4d21a1f9:/# R\n\nR version 3.5.2 (2018-12-20) -- \"Eggshell Igloo\"\nCopyright (C) 2018 The R Foundation for Statistical Computing\nPlatform: x86_64-pc-linux-gnu (64-bit)\n\nR is free software and comes with ABSOLUTELY NO WARRANTY.\nYou are welcome to redistribute it under certain conditions.\nType 'license()' or 'licence()' for distribution details.\n\nR is a collaborative project with many contributors.\nType 'contributors()' for more information and\n'citation()' on how to cite R or R packages in publications.\n\nType 'demo()' for some demos, 'help()' for on-line help, or\n'help.start()' for an HTML browser interface to help.\nType 'q()' to quit R.\n\n> \n\n```\n\nAwesome! This seemed to have worked! Let's exit R (via `q()`) and the Docker container (via `exit`). Then we can add these commands to the Dockerfile, preceding them with `RUN`, and try to build our image to ensure this works.\n\nOur `Dockerfile` so far:\n```\nFROM debian:stable\n\nRUN apt-get update\n\nRUN apt-get install r-base r-base-dev\n```\n\n```\n$ docker build --tag testr1 src\n```\n\nWait! That didn't seem to work! Let's focus on the last two lines of the error message:\n\n```\nDo you want to continue? [Y/n] Abort.\nThe command '/bin/sh -c apt-get install r-base r-base-dev' returned a non-zero code: 1\n```\n\nOhhhh, right! As we were interactively installing this, we were prompted to press \"Y\" on our keyboard to continue the installation. We need to include this in our Dockerfile so that we don't get this error. To do this we append the `-y` flag to the end of the line containing `RUN apt-get install r-base r-base-dev`. Let's try building again!\n\nGreat! Success! Now we can play with installing R packages! \n\nLet's start now with the test image we have built from our `Dockerfile`:\n\n```\n$ docker run -it --rm testr1\n```\n\nNow while we are in the container interactively, we can try to install the R package via:\n\n```\nroot@51f56d653892:/# Rscript -e \"install.packages('cowsay')\"\n```\n\nAnd it looks like it worked! Let's confirm by trying to call a function from the `cowsay` package in R:\n\n```\nroot@51f56d653892:/# R\n\n> cowsay::say(\"Smart for using Docker are you\", \"yoda\")\n```\n\n\n\nGreat, let's exit the container, add this command to our `Dockerfile`, and try to build it again!\n\n```\nroot@51f56d653892:/# exit\n```\n\nOur `Dockerfile` now:\n```\nFROM debian:stable\n\nRUN apt-get update\n\nRUN apt-get install r-base r-base-dev -y \n\nRUN Rscript -e \"install.packages('cowsay')\"\n```\n\nBuild the `Dockerfile` into an image:\n\n```\n$ docker build --tag testr1 src\n\n$ docker run -it --rm testr1\n```\n\nLooks like a success; let's be sure we can use the `cowsay` package:\n\n```\nroot@861487da5d00:/# R\n\n> cowsay::say(\"why did the chicken cross the road\", \"chicken\")\n\n```\n\nHurray! We did it! Now we can automate this build on GitHub, push it to Docker Hub and share this Docker image with the world!\n\n<img src=\"https://media.giphy.com/media/ZcKASxMYMKA9SQnhIl/giphy-downsized.gif\">\nSource: https://giphy.com/gifs/memecandy-ZcKASxMYMKA9SQnhIl", "_____no_output_____" ], [ "## Tips for installing things programmatically on Debian-flavoured Linux\n\n### Installing things with `apt-get`\n\nBefore you install things with `apt-get` you will want to update the list of packages that `apt-get` can see. We do this via `apt-get update`. \n\nNext, to install something with `apt-get` you will use the `apt-get install` command along with the name of the software. For example, to install the Git version control software we would type `apt-get install git`. Note however that we will be building our containers non-interactively, and so we want to preempt any questions/prompts we would get from the installation software by including the answers in our commands. So, for example, with `apt-get install` we append `--yes` to tell `apt-get` that yes, we are happy to install the software we asked it to install, using the amount of disk space required to install it. If we didn't append this, the installation would stall out at this point waiting for our answer to this question. Thus, the full command to install Git via `apt-get` looks like:\n\n```\napt-get install --yes git\n```", "_____no_output_____" ], [ "### Breaking shell commands across lines\n\nIf we want to break a single command across lines in the shell, we use the `\\` character. \nFor example, take the long line below, which uses `apt-get` to install the programs Git, nano-tiny, less, and wget:\n\n```\napt-get install --yes git nano-tiny less wget\n```\n\nWe can use `\\` after each program to break the long command across lines and make the command more readable (especially if there were even more programs to install). Similarly, we indent the lines after `\\` to increase readability:\n\n```\napt-get install --yes \\\n    git \\\n    nano-tiny \\\n    less \\\n    wget\n```\n\n### Running commands only if the previous one worked\n\nSometimes we don't want to run a command if the command that was run immediately before it failed. We can specify this in the shell using `&&`. For example, if we don't want to run the `apt-get` installation commands when `apt-get update` failed, we can write:\n\n```\napt-get update && \\\n    apt-get install --yes git\n```
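\n\nIn a `Dockerfile`, these two idioms are typically combined into a single `RUN` instruction, so that a stale package list can never be used for the install. A sketch (the package list is illustrative):\n\n```\nRUN apt-get update && \\\n    apt-get install --yes \\\n        git \\\n        less \\\n        wget\n```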
", "_____no_output_____" ], [ "## `Dockerfile` command summary\n\nMost common `Dockerfile` commands I use:\n\n| Command | Description |\n|---------|-------------|\n| FROM | States which base image the new Docker image should be built on top of |\n| RUN | Specifies that a command should be run in a shell |\n| ENV | Sets environment variables |\n| EXPOSE | Specifies the port the container should listen to at runtime |\n| COPY or ADD | Adds files (or URLs in the case of ADD) to a container's filesystem |\n| ENTRYPOINT | Configures a container that will run as an executable |\n| WORKDIR | Sets the working directory for any `RUN`, `CMD`, `ENTRYPOINT`, COPY and ADD instructions that follow it in the `Dockerfile` |\n\nAnd more here in the [Dockerfile reference](https://docs.docker.com/engine/reference/builder/).", "_____no_output_____" ], [ "## Choosing a base image for your Dockerfile\n\n<img src=\"https://themuslimtimesdotinfodotcom.files.wordpress.com/2018/10/newton-quotes-2.jpg?w=1334\" width=700>\n\nSource: https://themuslimtimes.info/2018/10/25/if-i-have-seen-further-it-is-by-standing-on-the-shoulders-of-giants/\n\n### Good base images to work from for R or Python projects!\n\n| Image | Software installed | \n|-------|--------------------|\n| [rocker/tidyverse](https://hub.docker.com/r/rocker/tidyverse/) | R, R packages (including the tidyverse), RStudio, make |\n| [continuumio/anaconda3](https://hub.docker.com/r/continuumio/anaconda3/) | Python 3.7.4, Anaconda base package distribution, Jupyter notebook |\n| [jupyter/scipy-notebook](https://hub.docker.com/r/jupyter/scipy-notebook) | Includes popular packages from the scientific Python ecosystem. |\n\nFor mixed-language projects, I would recommend using the `rocker/tidyverse` image as the base and then installing Anaconda or miniconda, as I have done here: https://github.com/UBC-DSCI/introduction-to-datascience/blob/b0f86fc4d6172cd043a0eb831b5d5a8743f29c81/Dockerfile#L19\n\nThis is also a nice tour of the Docker images from the Jupyter core team: https://jupyter-docker-stacks.readthedocs.io/en/latest/using/selecting.html#selecting-an-image", "_____no_output_____" ], [ "## Dockerfile FAQ:\n\n#### 1. Where does the `Dockerfile` live?\n\nThe Dockerfile should live in the root directory of your project.\n\n#### 2. How do I make an image from a `Dockerfile`?\n\nThere are 2 ways to do this! I use the first when developing my `Dockerfile` (to test quickly that it works), and the second when I think I am \"done\" and want to have it archived on [Docker Hub](https://hub.docker.com/). \n\n1. Build a Docker image locally on your laptop\n\n2. Build a Docker image and push it to DockerHub using GitHub Actions\n\n#### 3. How do I build an image locally on my laptop? \n \nFrom the directory that contains your `Dockerfile` (usually your project root):\n\n```\ndocker build --tag IMAGE_NAME:VERSION .\n```\n \n*Note: `--tag` lets you name and version the Docker image. You can call this anything you want. The version number/name comes after the colon.*\n \nAfter I build, I then try to `docker run ...` to test the image locally. If I don't like it, or it doesn't work, I delete the image with `docker rmi {IMAGE_NAME}`, edit my Dockerfile and try to build and run it again.", "_____no_output_____" ], [ "## Build a Docker image from a Dockerfile on GitHub Actions\n\nBuilding a Docker image from a Dockerfile using an automated tool (e.g., DockerHub or GitHub Actions) lets others trust your image, as they can clearly see which Dockerfile was used to build which image. \n\nWe will do this in this course by using GitHub Actions (a continuous integration tool) because it provides a great deal of nuanced control over when to trigger the automated builds of the Docker image, and how to tag them.\n\nAn example GitHub repository that uses GitHub Actions to build a Docker image from a Dockerfile and publish it on DockerHub is available here: [https://github.com/ttimbers/gha_docker_build](https://github.com/ttimbers/gha_docker_build)\n\nWe will work through a demonstration of this now, starting here: [https://github.com/ttimbers/dockerfile-practice](https://github.com/ttimbers/dockerfile-practice)", "_____no_output_____" ], [ "## Version Docker images and report software and package versions\n\nIt is easier to create a Docker image from a Dockerfile and tag it (or use its digest) than to control the version of each thing that goes into your Docker image.\n\n- Tags are human-readable; however, they can be associated with different builds of the image (potentially using different Dockerfiles...)\n- Digests are not human-readable, but specify a specific build of an image\n\nExample of how to pull using a tag: \n```\ndocker pull ttimbers/dockerfile-practice:v1.0\n```\n\nExample of how to pull using a digest:\n```\ndocker pull ttimbers/dockerfile-practice@sha256:cc512c9599054f24f4020e2c7e3337b9e71fd6251dfde5bcd716dc9b1f8c3a73\n```\n\nTags are specified when you build on Docker Hub on the Builds tab, under the Configure automated builds options. Digests are assigned to a build. You can see the digests on the Tags tab, by clicking on the \"Digest\" link for a specific tag of the image.", "_____no_output_____" ], [ "### How to get the versions of your software in your container\n\nThe easiest way is to enter the container interactively and poke around using the following commands:\n\n- `python --version` and `R --version` to find out the versions of Python and R, respectively\n- `pip freeze` or `conda list` in the bash shell to find out Python package versions\n- Enter R and load the libraries used in your scripts, then use `sessionInfo()` to print the package versions", "_____no_output_____" ], [ "### But I want to control the versions!\n\n### How to in R:\n\n#### The Rocker team's strategy\n\nThis is not an easy thing, but the Rocker team has made a concerted effort to do this. Below is their strategy:\n\n> Using the R version tag will naturally lock the R version, and also lock the install date of any R packages on the image. For example, the rocker/tidyverse:3.3.1 Docker image will always rebuild with R 3.3.1 and R packages installed from the 2016-10-31 MRAN snapshot, corresponding to the last day that version of R was the most recent release. Meanwhile rocker/tidyverse:latest will always have both the latest R version and latest versions of the R packages, built nightly.\n\nSee [VERSIONS.md](https://github.com/rocker-org/rocker-versioned/blob/master/VERSIONS.md) for details, but in short they use the line below to lock the R version (or view it in the r-ver Dockerfile [here](https://github.com/rocker-org/rocker-versioned/blob/c4a9f540d4c66a6277f281be6dcfe55d3cb40ec0/r-ver/3.6.1.Dockerfile#L76) for more context):\n``` \n  && curl -O https://cran.r-project.org/src/base/R-3/R-${R_VERSION}.tar.gz \\\n```\n\nAnd this line to specify the CRAN snapshot from which to grab the R packages (or view it in the r-ver Dockerfile [here](https://github.com/rocker-org/rocker-versioned/blob/c4a9f540d4c66a6277f281be6dcfe55d3cb40ec0/r-ver/3.6.1.Dockerfile#L121) for more context):\n```\n  && Rscript -e \"install.packages(c('littler', 'docopt'), repo = '$MRAN')\" \\\n```\n\n### A newer thing that might be useful!\n\nYou can pair [renv](https://rstudio.github.io/renv/articles/docker.html?q=docker#running-docker-containers-with-renv) with Docker - this is new and will be covered in tutorial this week! 🎉\n\n### How to in Python:\n\nPython version:\n\n- Use `conda` to specify a specific Python version, either when downloading (see an example [here](https://github.com/ContinuumIO/docker-images/blob/8e10242c6d7804a0e991a9d9d758e25b340f4fce/miniconda3/debian/Dockerfile#L10)), or after downloading with `conda install python=3.6`.\n- Or you can install a specific version of Python yourself, as they do in the Python official images (see [here](https://github.com/docker-library/python/blob/master/3.7/stretch/slim/Dockerfile) for example), but this is more complicated.\n\nFor Python packages, there are a few tools:\n- conda (via `conda install scipy=0.15.0` for example)\n- pip (via `pip install scipy==0.15.0` for example)
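\n\nFor example, inside a `Dockerfile` one might pin both the Python version and a few package versions in a single `RUN` instruction - a sketch, with illustrative version numbers:\n\n```\nRUN conda install --yes \\\n    python=3.7 \\\n    scipy=1.4.1 \\\n    pandas=1.0.3\n```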
\n\n### Take home messages:\n\n- At a minimum, tag your Docker images or reference image digests\n- If you want to version installs inside the container, use base images that version R & Python, and add what you need on top in a versioned manner!", "_____no_output_____" ], [ "## Docker compose\n\nDocker compose is a tool that uses a `YAML` file to configure/specify how you want to run one or more Docker containers. To use Docker compose, we create a `docker-compose.yml` file that specifies things such as:\n- the Docker images (and versions)\n- the ports\n- volume mapping\n- any environment variables\n\nThen to run the Docker container using the specifications in the `docker-compose.yml` file, we run:\n\n```\ndocker-compose run --rm service command\n```\n\n- `service` is a name you give to your application configurations in the `docker-compose.yml`\n- `command` is some command or script you would like to run (e.g., `make all`)\n\nHere is an example `docker-compose.yml`:\n\n```\nservices:\n  analysis-env:\n    image: ttimbers/bc_predictor:v4.0\n    ports:\n      - \"8787:8787\"\n    volumes:\n      - .:/home/rstudio/breast_cancer_predictor\n    environment:\n      PASSWORD: password\n```\n\nAnd to run the container and the analysis we would type:\n\n```\ndocker-compose run --rm analysis-env make -C /home/rstudio/breast_cancer_predictor all\n```\n\nThis means we do not have to type out the:\n- ports\n- volume mapping\n- environment variables\n- and potentially more!", "_____no_output_____" ], [ "## Where to next?\n\n- Testing code written for data science", "_____no_output_____" ] ] ]
[ "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
4a3a90c19b0a13be360a9b3435625cf1ffe0e07b
65,257
ipynb
Jupyter Notebook
sentiment_example.ipynb
flower-go/diplomka
9a24b14041adbad602d151a7b7ddb25335635e6d
[ "CC-BY-4.0" ]
null
null
null
sentiment_example.ipynb
flower-go/diplomka
9a24b14041adbad602d151a7b7ddb25335635e6d
[ "CC-BY-4.0" ]
1
2020-04-18T16:25:25.000Z
2020-04-18T16:25:25.000Z
sentiment_example.ipynb
flower-go/DiplomaThesis
9a24b14041adbad602d151a7b7ddb25335635e6d
[ "CC-BY-4.0" ]
null
null
null
69.128178
2,938
0.564583
[ [ [ "<a href=\"https://colab.research.google.com/github/flower-go/DiplomaThesis/blob/master/sentiment_example.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# Helper code\n", "_____no_output_____" ] ], [ [ "#clone repo\n!git clone https://github.com/flower-go/DiplomaThesis.git", "Cloning into 'DiplomaThesis'...\nremote: Enumerating objects: 4319, done.\u001b[K\nremote: Counting objects: 100% (1032/1032), done.\u001b[K\nremote: Compressing objects: 100% (666/666), done.\u001b[K\nremote: Total 4319 (delta 778), reused 595 (delta 364), pack-reused 3287\u001b[K\nReceiving objects: 100% (4319/4319), 36.22 MiB | 22.37 MiB/s, done.\nResolving deltas: 100% (3178/3178), done.\n" ], [ "!pip install ufal.morphodita", "Collecting ufal.morphodita\n Downloading ufal.morphodita-1.10.1.1-cp37-cp37m-manylinux2010_x86_64.whl (382 kB)\n\u001b[?25l\r\u001b[K |▉ | 10 kB 27.8 MB/s eta 0:00:01\r\u001b[K |█▊ | 20 kB 24.1 MB/s eta 0:00:01\r\u001b[K |██▋ | 30 kB 17.5 MB/s eta 0:00:01\r\u001b[K |███▍ | 40 kB 15.2 MB/s eta 0:00:01\r\u001b[K |████▎ | 51 kB 7.3 MB/s eta 0:00:01\r\u001b[K |█████▏ | 61 kB 8.5 MB/s eta 0:00:01\r\u001b[K |██████ | 71 kB 8.9 MB/s eta 0:00:01\r\u001b[K |██████▉ | 81 kB 9.2 MB/s eta 0:00:01\r\u001b[K |███████▊ | 92 kB 9.4 MB/s eta 0:00:01\r\u001b[K |████████▋ | 102 kB 7.8 MB/s eta 0:00:01\r\u001b[K |█████████▍ | 112 kB 7.8 MB/s eta 0:00:01\r\u001b[K |██████████▎ | 122 kB 7.8 MB/s eta 0:00:01\r\u001b[K |███████████▏ | 133 kB 7.8 MB/s eta 0:00:01\r\u001b[K |████████████ | 143 kB 7.8 MB/s eta 0:00:01\r\u001b[K |████████████▉ | 153 kB 7.8 MB/s eta 0:00:01\r\u001b[K |█████████████▊ | 163 kB 7.8 MB/s eta 0:00:01\r\u001b[K |██████████████▋ | 174 kB 7.8 MB/s eta 0:00:01\r\u001b[K |███████████████▍ | 184 kB 7.8 MB/s eta 0:00:01\r\u001b[K |████████████████▎ | 194 kB 7.8 MB/s eta 0:00:01\r\u001b[K |█████████████████▏ | 204 kB 7.8 MB/s eta 0:00:01\r\u001b[K |██████████████████ | 215 kB 7.8 MB/s eta 0:00:01\r\u001b[K |██████████████████▉ | 225 kB 7.8 MB/s eta 0:00:01\r\u001b[K |███████████████████▊ | 235 kB 7.8 MB/s eta 0:00:01\r\u001b[K |████████████████████▋ | 245 kB 7.8 MB/s eta 0:00:01\r\u001b[K |█████████████████████▍ | 256 kB 7.8 MB/s eta 0:00:01\r\u001b[K |██████████████████████▎ | 266 kB 7.8 MB/s eta 0:00:01\r\u001b[K |███████████████████████▏ | 276 kB 7.8 MB/s eta 0:00:01\r\u001b[K |████████████████████████ | 286 kB 7.8 MB/s eta 0:00:01\r\u001b[K |████████████████████████▉ | 296 kB 7.8 MB/s eta 0:00:01\r\u001b[K |█████████████████████████▊ | 307 kB 7.8 MB/s eta 0:00:01\r\u001b[K |██████████████████████████▋ | 317 kB 7.8 MB/s eta 0:00:01\r\u001b[K |███████████████████████████▍ | 327 kB 7.8 MB/s eta 0:00:01\r\u001b[K |████████████████████████████▎ | 337 kB 7.8 MB/s eta 0:00:01\r\u001b[K |█████████████████████████████▏ | 348 kB 7.8 MB/s eta 0:00:01\r\u001b[K |██████████████████████████████ | 358 kB 7.8 MB/s eta 0:00:01\r\u001b[K |██████████████████████████████▉ | 368 kB 7.8 MB/s eta 0:00:01\r\u001b[K |███████████████████████████████▊| 378 kB 7.8 MB/s eta 0:00:01\r\u001b[K |████████████████████████████████| 382 kB 7.8 MB/s \n\u001b[?25hInstalling collected packages: ufal.morphodita\nSuccessfully installed ufal.morphodita-1.10.1.1\n" ], [ "!pip install -r \"DiplomaThesis/requirements.txt\"", "Collecting absl-py==0.9.0\n Downloading absl-py-0.9.0.tar.gz (104 kB)\n\u001b[K |████████████████████████████████| 104 kB 8.7 MB/s \n\u001b[?25hRequirement already satisfied: astor==0.8.1 in 
/usr/local/lib/python3.7/dist-packages (from -r DiplomaThesis/requirements.txt (line 2)) (0.8.1)\nRequirement already satisfied: astunparse==1.6.3 in /usr/local/lib/python3.7/dist-packages (from -r DiplomaThesis/requirements.txt (line 3)) (1.6.3)\nCollecting attrs==20.2.0\n Downloading attrs-20.2.0-py2.py3-none-any.whl (48 kB)\n\u001b[K |████████████████████████████████| 48 kB 5.8 MB/s \n\u001b[?25hCollecting boto3==1.12.24\n Downloading boto3-1.12.24-py2.py3-none-any.whl (128 kB)\n\u001b[K |████████████████████████████████| 128 kB 44.9 MB/s \n\u001b[?25hCollecting botocore==1.15.24\n Downloading botocore-1.15.24-py2.py3-none-any.whl (6.0 MB)\n\u001b[K |████████████████████████████████| 6.0 MB 45.5 MB/s \n\u001b[?25hCollecting cachetools==4.0.0\n Downloading cachetools-4.0.0-py3-none-any.whl (10 kB)\nCollecting certifi==2019.11.28\n Downloading certifi-2019.11.28-py2.py3-none-any.whl (156 kB)\n\u001b[K |████████████████████████████████| 156 kB 55.6 MB/s \n\u001b[?25hRequirement already satisfied: chardet==3.0.4 in /usr/local/lib/python3.7/dist-packages (from -r DiplomaThesis/requirements.txt (line 9)) (3.0.4)\nCollecting click==7.1.1\n Downloading click-7.1.1-py2.py3-none-any.whl (82 kB)\n\u001b[K |████████████████████████████████| 82 kB 1.6 MB/s \n\u001b[?25hRequirement already satisfied: cycler==0.10.0 in /usr/local/lib/python3.7/dist-packages (from -r DiplomaThesis/requirements.txt (line 11)) (0.10.0)\nCollecting dill==0.3.2\n Downloading dill-0.3.2.zip (177 kB)\n\u001b[K |████████████████████████████████| 177 kB 59.9 MB/s \n\u001b[?25hCollecting dm-tree==0.1.5\n Downloading dm_tree-0.1.5-cp37-cp37m-manylinux1_x86_64.whl (294 kB)\n\u001b[K |████████████████████████████████| 294 kB 56.8 MB/s \n\u001b[?25hCollecting docutils==0.15.2\n Downloading docutils-0.15.2-py3-none-any.whl (547 kB)\n\u001b[K |████████████████████████████████| 547 kB 43.3 MB/s \n\u001b[?25hRequirement already satisfied: filelock==3.0.12 in /usr/local/lib/python3.7/dist-packages (from -r DiplomaThesis/requirements.txt (line 15)) (3.0.12)\nCollecting future==0.18.2\n Downloading future-0.18.2.tar.gz (829 kB)\n\u001b[K |████████████████████████████████| 829 kB 36.8 MB/s \n\u001b[?25hCollecting gast==0.3.3\n Downloading gast-0.3.3-py2.py3-none-any.whl (9.7 kB)\nCollecting google-auth==1.11.2\n Downloading google_auth-1.11.2-py2.py3-none-any.whl (76 kB)\n\u001b[K |████████████████████████████████| 76 kB 6.4 MB/s \n\u001b[?25hCollecting google-auth-oauthlib==0.4.1\n Downloading google_auth_oauthlib-0.4.1-py2.py3-none-any.whl (18 kB)\nCollecting google-pasta==0.1.8\n Downloading google_pasta-0.1.8-py3-none-any.whl (57 kB)\n\u001b[K |████████████████████████████████| 57 kB 6.1 MB/s \n\u001b[?25hCollecting googleapis-common-protos==1.52.0\n Downloading googleapis_common_protos-1.52.0-py2.py3-none-any.whl (100 kB)\n\u001b[K |████████████████████████████████| 100 kB 11.7 MB/s \n\u001b[?25hCollecting grpcio==1.27.2\n Downloading grpcio-1.27.2-cp37-cp37m-manylinux2010_x86_64.whl (2.7 MB)\n\u001b[K |████████████████████████████████| 2.7 MB 32.2 MB/s \n\u001b[?25hCollecting h5py==2.10.0\n Downloading h5py-2.10.0-cp37-cp37m-manylinux1_x86_64.whl (2.9 MB)\n\u001b[K |████████████████████████████████| 2.9 MB 29.6 MB/s \n\u001b[?25hCollecting idna==2.8\n Downloading idna-2.8-py2.py3-none-any.whl (58 kB)\n\u001b[K |████████████████████████████████| 58 kB 7.1 MB/s \n\u001b[?25hCollecting importlib-metadata==3.7.3\n Downloading importlib_metadata-3.7.3-py3-none-any.whl (12 kB)\nCollecting importlib-resources==3.0.0\n Downloading 
importlib_resources-3.0.0-py2.py3-none-any.whl (23 kB)\nCollecting jmespath==0.9.5\n Downloading jmespath-0.9.5-py2.py3-none-any.whl (24 kB)\nCollecting joblib==0.14.1\n Downloading joblib-0.14.1-py2.py3-none-any.whl (294 kB)\n\u001b[K |████████████████████████████████| 294 kB 56.7 MB/s \n\u001b[?25hRequirement already satisfied: Keras==2.4.3 in /usr/local/lib/python3.7/dist-packages (from -r DiplomaThesis/requirements.txt (line 29)) (2.4.3)\nCollecting Keras-Applications==1.0.8\n Downloading Keras_Applications-1.0.8-py3-none-any.whl (50 kB)\n\u001b[K |████████████████████████████████| 50 kB 8.4 MB/s \n\u001b[?25hRequirement already satisfied: Keras-Preprocessing==1.1.2 in /usr/local/lib/python3.7/dist-packages (from -r DiplomaThesis/requirements.txt (line 31)) (1.1.2)\nRequirement already satisfied: kiwisolver==1.3.1 in /usr/local/lib/python3.7/dist-packages (from -r DiplomaThesis/requirements.txt (line 32)) (1.3.1)\nCollecting Markdown==3.2.1\n Downloading Markdown-3.2.1-py2.py3-none-any.whl (88 kB)\n\u001b[K |████████████████████████████████| 88 kB 10.2 MB/s \n\u001b[?25hCollecting matplotlib==3.3.4\n Downloading matplotlib-3.3.4-cp37-cp37m-manylinux1_x86_64.whl (11.5 MB)\n\u001b[K |████████████████████████████████| 11.5 MB 18.7 MB/s \n\u001b[?25hCollecting nltk==3.5\n Downloading nltk-3.5.zip (1.4 MB)\n\u001b[K |████████████████████████████████| 1.4 MB 34.2 MB/s \n\u001b[?25hCollecting numpy==1.18.1\n Downloading numpy-1.18.1-cp37-cp37m-manylinux1_x86_64.whl (20.1 MB)\n\u001b[K |████████████████████████████████| 20.1 MB 1.2 MB/s \n\u001b[?25hCollecting oauthlib==3.1.0\n Downloading oauthlib-3.1.0-py2.py3-none-any.whl (147 kB)\n\u001b[K |████████████████████████████████| 147 kB 73.7 MB/s \n\u001b[?25hCollecting opt-einsum==3.1.0\n Downloading opt_einsum-3.1.0.tar.gz (69 kB)\n\u001b[K |████████████████████████████████| 69 kB 10.1 MB/s \n\u001b[?25hCollecting packaging==20.9\n Downloading packaging-20.9-py2.py3-none-any.whl (40 kB)\n\u001b[K |████████████████████████████████| 40 kB 6.6 MB/s \n\u001b[?25hCollecting pandas==1.1.4\n Downloading pandas-1.1.4-cp37-cp37m-manylinux1_x86_64.whl (9.5 MB)\n\u001b[K |████████████████████████████████| 9.5 MB 17.3 MB/s \n\u001b[?25hCollecting Pillow==8.1.0\n Downloading Pillow-8.1.0-cp37-cp37m-manylinux1_x86_64.whl (2.2 MB)\n\u001b[K |████████████████████████████████| 2.2 MB 45.4 MB/s \n\u001b[?25hRequirement already satisfied: promise==2.3 in /usr/local/lib/python3.7/dist-packages (from -r DiplomaThesis/requirements.txt (line 42)) (2.3)\nCollecting protobuf==3.11.3\n Downloading protobuf-3.11.3-cp37-cp37m-manylinux1_x86_64.whl (1.3 MB)\n\u001b[K |████████████████████████████████| 1.3 MB 37.7 MB/s \n\u001b[?25hRequirement already satisfied: pyasn1==0.4.8 in /usr/local/lib/python3.7/dist-packages (from -r DiplomaThesis/requirements.txt (line 44)) (0.4.8)\nRequirement already satisfied: pyasn1-modules==0.2.8 in /usr/local/lib/python3.7/dist-packages (from -r DiplomaThesis/requirements.txt (line 45)) (0.2.8)\nCollecting pydot==1.4.1\n Downloading pydot-1.4.1-py2.py3-none-any.whl (19 kB)\nRequirement already satisfied: pyparsing==2.4.7 in /usr/local/lib/python3.7/dist-packages (from -r DiplomaThesis/requirements.txt (line 47)) (2.4.7)\nRequirement already satisfied: python-dateutil==2.8.1 in /usr/local/lib/python3.7/dist-packages (from -r DiplomaThesis/requirements.txt (line 48)) (2.8.1)\nCollecting pytz==2020.4\n Downloading pytz-2020.4-py2.py3-none-any.whl (509 kB)\n\u001b[K |████████████████████████████████| 509 kB 64.4 MB/s \n\u001b[?25hCollecting 
PyYAML==5.3.1\n Downloading PyYAML-5.3.1.tar.gz (269 kB)\n\u001b[K |████████████████████████████████| 269 kB 50.7 MB/s \n\u001b[?25hCollecting regex==2020.2.20\n Downloading regex-2020.2.20-cp37-cp37m-manylinux2010_x86_64.whl (689 kB)\n\u001b[K |████████████████████████████████| 689 kB 50.6 MB/s \n\u001b[?25hCollecting requests==2.22.0\n Downloading requests-2.22.0-py2.py3-none-any.whl (57 kB)\n\u001b[K |████████████████████████████████| 57 kB 6.4 MB/s \n\u001b[?25hRequirement already satisfied: requests-oauthlib==1.3.0 in /usr/local/lib/python3.7/dist-packages (from -r DiplomaThesis/requirements.txt (line 53)) (1.3.0)\nCollecting rsa==4.0\n Downloading rsa-4.0-py2.py3-none-any.whl (38 kB)\nCollecting s3transfer==0.3.3\n Downloading s3transfer-0.3.3-py2.py3-none-any.whl (69 kB)\n\u001b[K |████████████████████████████████| 69 kB 9.5 MB/s \n\u001b[?25hCollecting sacremoses==0.0.38\n Downloading sacremoses-0.0.38.tar.gz (860 kB)\n\u001b[K |████████████████████████████████| 860 kB 45.7 MB/s \n\u001b[?25hCollecting scikit-learn==0.23.2\n Downloading scikit_learn-0.23.2-cp37-cp37m-manylinux1_x86_64.whl (6.8 MB)\n\u001b[K |████████████████████████████████| 6.8 MB 25.8 MB/s \n\u001b[?25hRequirement already satisfied: scipy==1.4.1 in /usr/local/lib/python3.7/dist-packages (from -r DiplomaThesis/requirements.txt (line 58)) (1.4.1)\nCollecting sentencepiece==0.1.85\n Downloading sentencepiece-0.1.85-cp37-cp37m-manylinux1_x86_64.whl (1.0 MB)\n\u001b[K |████████████████████████████████| 1.0 MB 47.6 MB/s \n\u001b[?25hCollecting six==1.14.0\n Downloading six-1.14.0-py2.py3-none-any.whl (10 kB)\nCollecting tensorboard==2.3.0\n Downloading tensorboard-2.3.0-py3-none-any.whl (6.8 MB)\n\u001b[K |████████████████████████████████| 6.8 MB 21.4 MB/s \n\u001b[?25hCollecting tensorboard-plugin-wit==1.7.0\n Downloading tensorboard_plugin_wit-1.7.0-py3-none-any.whl (779 kB)\n\u001b[K |████████████████████████████████| 779 kB 67.4 MB/s \n\u001b[?25hCollecting tensorflow==2.3.1\n Downloading tensorflow-2.3.1-cp37-cp37m-manylinux2010_x86_64.whl (320.4 MB)\n\u001b[K |████████████████████████████████| 320.4 MB 23 kB/s \n\u001b[?25hCollecting tensorflow-addons==0.8.1\n Downloading tensorflow_addons-0.8.1-cp37-cp37m-manylinux2010_x86_64.whl (1.0 MB)\n\u001b[K |████████████████████████████████| 1.0 MB 47.0 MB/s \n\u001b[?25hRequirement already satisfied: tensorflow-datasets==4.0.1 in /usr/local/lib/python3.7/dist-packages (from -r DiplomaThesis/requirements.txt (line 65)) (4.0.1)\nCollecting tensorflow-estimator==2.3.0\n Downloading tensorflow_estimator-2.3.0-py2.py3-none-any.whl (459 kB)\n\u001b[K |████████████████████████████████| 459 kB 64.6 MB/s \n\u001b[?25hCollecting tensorflow-metadata==0.24.0\n Downloading tensorflow_metadata-0.24.0-py3-none-any.whl (44 kB)\n\u001b[K |████████████████████████████████| 44 kB 2.9 MB/s \n\u001b[?25hRequirement already satisfied: termcolor==1.1.0 in /usr/local/lib/python3.7/dist-packages (from -r DiplomaThesis/requirements.txt (line 68)) (1.1.0)\nCollecting threadpoolctl==2.1.0\n Downloading threadpoolctl-2.1.0-py3-none-any.whl (12 kB)\nCollecting tokenizers==0.10.1\n Downloading tokenizers-0.10.1-cp37-cp37m-manylinux2010_x86_64.whl (3.2 MB)\n\u001b[K |████████████████████████████████| 3.2 MB 46.3 MB/s \n\u001b[?25hCollecting tqdm==4.43.0\n Downloading tqdm-4.43.0-py2.py3-none-any.whl (59 kB)\n\u001b[K |████████████████████████████████| 59 kB 8.0 MB/s \n\u001b[?25hCollecting transformers==4.4.2\n Downloading transformers-4.4.2-py3-none-any.whl (2.0 MB)\n\u001b[K 
|████████████████████████████████| 2.0 MB 35.7 MB/s \n\u001b[?25hRequirement already satisfied: typeguard==2.7.1 in /usr/local/lib/python3.7/dist-packages (from -r DiplomaThesis/requirements.txt (line 73)) (2.7.1)\nRequirement already satisfied: typing-extensions==3.7.4.3 in /usr/local/lib/python3.7/dist-packages (from -r DiplomaThesis/requirements.txt (line 74)) (3.7.4.3)\nCollecting urllib3==1.25.8\n Downloading urllib3-1.25.8-py2.py3-none-any.whl (125 kB)\n\u001b[K |████████████████████████████████| 125 kB 69.7 MB/s \n\u001b[?25hCollecting Werkzeug==1.0.0\n Downloading Werkzeug-1.0.0-py2.py3-none-any.whl (298 kB)\n\u001b[K |████████████████████████████████| 298 kB 51.7 MB/s \n\u001b[?25hCollecting wrapt==1.12.0\n Downloading wrapt-1.12.0.tar.gz (27 kB)\nCollecting zipp==3.3.1\n Downloading zipp-3.3.1-py3-none-any.whl (5.3 kB)\nRequirement already satisfied: wheel<1.0,>=0.23.0 in /usr/local/lib/python3.7/dist-packages (from astunparse==1.6.3->-r DiplomaThesis/requirements.txt (line 3)) (0.36.2)\nRequirement already satisfied: setuptools>=40.3.0 in /usr/local/lib/python3.7/dist-packages (from google-auth==1.11.2->-r DiplomaThesis/requirements.txt (line 18)) (57.2.0)\nBuilding wheels for collected packages: absl-py, dill, future, nltk, opt-einsum, PyYAML, sacremoses, wrapt\n Building wheel for absl-py (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for absl-py: filename=absl_py-0.9.0-py3-none-any.whl size=121940 sha256=d46f8d11b0339b50d87ae5520b2f757d80b7592c2d4c7e6a375ff0994d138ccc\n Stored in directory: /root/.cache/pip/wheels/cc/af/1a/498a24d0730ef484019e007bb9e8cef3ac00311a672c049a3e\n Building wheel for dill (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for dill: filename=dill-0.3.2-py3-none-any.whl size=78927 sha256=d07a26a0700ce23aa54f50732562980ae53ebacd48cbac41507f5566b06f602e\n Stored in directory: /root/.cache/pip/wheels/72/6b/d5/5548aa1b73b8c3d176ea13f9f92066b02e82141549d90e2100\n Building wheel for future (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for future: filename=future-0.18.2-py3-none-any.whl size=491070 sha256=c6c09e28303d8b911e61a25c5f74b59f66a5377a8561f0cf68c7137813978516\n Stored in directory: /root/.cache/pip/wheels/56/b0/fe/4410d17b32f1f0c3cf54cdfb2bc04d7b4b8f4ae377e2229ba0\n Building wheel for nltk (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for nltk: filename=nltk-3.5-py3-none-any.whl size=1434694 sha256=e2f2a25a38daa8bbfca5ccf2e68c287d640ee8c24a6b49d031fa79fdd65b747c\n Stored in directory: /root/.cache/pip/wheels/45/6c/46/a1865e7ba706b3817f5d1b2ff7ce8996aabdd0d03d47ba0266\n Building wheel for opt-einsum (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for opt-einsum: filename=opt_einsum-3.1.0-py3-none-any.whl size=61695 sha256=c69c57ca648633b3ef1a4ec89e6493e26c4b67da1d4123640ddb8af6c221eb50\n Stored in directory: /root/.cache/pip/wheels/21/e3/31/0d3919995e859eff01713d381aac3b6b43c69915a2942e5c65\n Building wheel for PyYAML (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for PyYAML: filename=PyYAML-5.3.1-cp37-cp37m-linux_x86_64.whl size=44636 sha256=7bbd01c0cf200cfa71e45ba66c3bd78e44b830e03b50506b67680b0ffe2c58b5\n Stored in directory: /root/.cache/pip/wheels/5e/03/1e/e1e954795d6f35dfc7b637fe2277bff021303bd9570ecea653\n Building wheel for sacremoses (setup.py) ... 
\u001b[?25l\u001b[?25hdone\n Created wheel for sacremoses: filename=sacremoses-0.0.38-py3-none-any.whl size=884621 sha256=288a93c42f6b08a00780e598d73505ed6f21a067d96387696d26599fecaaba50\n Stored in directory: /root/.cache/pip/wheels/99/c9/5a/a5e36bce983040ea5061a8ec65b5852bfebad4b1afa8d5b394\n Building wheel for wrapt (setup.py) ... \u001b[?25l\u001b[?25hdone\n Created wheel for wrapt: filename=wrapt-1.12.0-cp37-cp37m-linux_x86_64.whl size=68728 sha256=8d4e853e26b9395b9c506b8b68c5d0cb2c7c5d29edc2a8ed1811e13a3728687f\n Stored in directory: /root/.cache/pip/wheels/e5/78/69/f40ab7cae531c8f07003a9d1b4b81ebec14cda95519c57e7dd\nSuccessfully built absl-py dill future nltk opt-einsum PyYAML sacremoses wrapt\nInstalling collected packages: urllib3, idna, certifi, six, rsa, requests, oauthlib, cachetools, protobuf, jmespath, google-auth, docutils, zipp, Werkzeug, tqdm, tensorboard-plugin-wit, regex, numpy, Markdown, joblib, grpcio, googleapis-common-protos, google-auth-oauthlib, click, botocore, absl-py, wrapt, tokenizers, threadpoolctl, tensorflow-metadata, tensorflow-estimator, tensorboard, sacremoses, s3transfer, PyYAML, pytz, Pillow, packaging, opt-einsum, importlib-resources, importlib-metadata, h5py, google-pasta, gast, future, dm-tree, dill, attrs, transformers, tensorflow-addons, tensorflow, sentencepiece, scikit-learn, pydot, pandas, nltk, matplotlib, Keras-Applications, boto3\n Attempting uninstall: urllib3\n Found existing installation: urllib3 1.24.3\n Uninstalling urllib3-1.24.3:\n Successfully uninstalled urllib3-1.24.3\n Attempting uninstall: idna\n Found existing installation: idna 2.10\n Uninstalling idna-2.10:\n Successfully uninstalled idna-2.10\n Attempting uninstall: certifi\n Found existing installation: certifi 2021.5.30\n Uninstalling certifi-2021.5.30:\n Successfully uninstalled certifi-2021.5.30\n Attempting uninstall: six\n Found existing installation: six 1.15.0\n Uninstalling six-1.15.0:\n Successfully uninstalled six-1.15.0\n Attempting uninstall: rsa\n Found existing installation: rsa 4.7.2\n Uninstalling rsa-4.7.2:\n Successfully uninstalled rsa-4.7.2\n Attempting uninstall: requests\n Found existing installation: requests 2.23.0\n Uninstalling requests-2.23.0:\n Successfully uninstalled requests-2.23.0\n Attempting uninstall: oauthlib\n Found existing installation: oauthlib 3.1.1\n Uninstalling oauthlib-3.1.1:\n Successfully uninstalled oauthlib-3.1.1\n Attempting uninstall: cachetools\n Found existing installation: cachetools 4.2.2\n Uninstalling cachetools-4.2.2:\n Successfully uninstalled cachetools-4.2.2\n Attempting uninstall: protobuf\n Found existing installation: protobuf 3.17.3\n Uninstalling protobuf-3.17.3:\n Successfully uninstalled protobuf-3.17.3\n Attempting uninstall: google-auth\n Found existing installation: google-auth 1.32.1\n Uninstalling google-auth-1.32.1:\n Successfully uninstalled google-auth-1.32.1\n Attempting uninstall: docutils\n Found existing installation: docutils 0.17.1\n Uninstalling docutils-0.17.1:\n Successfully uninstalled docutils-0.17.1\n Attempting uninstall: zipp\n Found existing installation: zipp 3.5.0\n Uninstalling zipp-3.5.0:\n Successfully uninstalled zipp-3.5.0\n Attempting uninstall: Werkzeug\n Found existing installation: Werkzeug 1.0.1\n Uninstalling Werkzeug-1.0.1:\n Successfully uninstalled Werkzeug-1.0.1\n Attempting uninstall: tqdm\n Found existing installation: tqdm 4.41.1\n Uninstalling tqdm-4.41.1:\n Successfully uninstalled tqdm-4.41.1\n Attempting uninstall: tensorboard-plugin-wit\n Found existing 
installation: tensorboard-plugin-wit 1.8.0\n Uninstalling tensorboard-plugin-wit-1.8.0:\n Successfully uninstalled tensorboard-plugin-wit-1.8.0\n Attempting uninstall: regex\n Found existing installation: regex 2019.12.20\n Uninstalling regex-2019.12.20:\n Successfully uninstalled regex-2019.12.20\n Attempting uninstall: numpy\n Found existing installation: numpy 1.19.5\n Uninstalling numpy-1.19.5:\n Successfully uninstalled numpy-1.19.5\n Attempting uninstall: Markdown\n Found existing installation: Markdown 3.3.4\n Uninstalling Markdown-3.3.4:\n Successfully uninstalled Markdown-3.3.4\n Attempting uninstall: joblib\n Found existing installation: joblib 1.0.1\n Uninstalling joblib-1.0.1:\n Successfully uninstalled joblib-1.0.1\n Attempting uninstall: grpcio\n Found existing installation: grpcio 1.34.1\n Uninstalling grpcio-1.34.1:\n Successfully uninstalled grpcio-1.34.1\n Attempting uninstall: googleapis-common-protos\n Found existing installation: googleapis-common-protos 1.53.0\n Uninstalling googleapis-common-protos-1.53.0:\n Successfully uninstalled googleapis-common-protos-1.53.0\n Attempting uninstall: google-auth-oauthlib\n Found existing installation: google-auth-oauthlib 0.4.4\n Uninstalling google-auth-oauthlib-0.4.4:\n Successfully uninstalled google-auth-oauthlib-0.4.4\n Attempting uninstall: click\n Found existing installation: click 7.1.2\n Uninstalling click-7.1.2:\n Successfully uninstalled click-7.1.2\n Attempting uninstall: absl-py\n Found existing installation: absl-py 0.12.0\n Uninstalling absl-py-0.12.0:\n Successfully uninstalled absl-py-0.12.0\n Attempting uninstall: wrapt\n Found existing installation: wrapt 1.12.1\n Uninstalling wrapt-1.12.1:\n Successfully uninstalled wrapt-1.12.1\n Attempting uninstall: tensorflow-metadata\n Found existing installation: tensorflow-metadata 1.1.0\n Uninstalling tensorflow-metadata-1.1.0:\n Successfully uninstalled tensorflow-metadata-1.1.0\n Attempting uninstall: tensorflow-estimator\n Found existing installation: tensorflow-estimator 2.5.0\n Uninstalling tensorflow-estimator-2.5.0:\n Successfully uninstalled tensorflow-estimator-2.5.0\n Attempting uninstall: tensorboard\n Found existing installation: tensorboard 2.5.0\n Uninstalling tensorboard-2.5.0:\n Successfully uninstalled tensorboard-2.5.0\n Attempting uninstall: PyYAML\n Found existing installation: PyYAML 3.13\n Uninstalling PyYAML-3.13:\n Successfully uninstalled PyYAML-3.13\n Attempting uninstall: pytz\n Found existing installation: pytz 2018.9\n Uninstalling pytz-2018.9:\n Successfully uninstalled pytz-2018.9\n Attempting uninstall: Pillow\n Found existing installation: Pillow 7.1.2\n Uninstalling Pillow-7.1.2:\n Successfully uninstalled Pillow-7.1.2\n Attempting uninstall: packaging\n Found existing installation: packaging 21.0\n Uninstalling packaging-21.0:\n Successfully uninstalled packaging-21.0\n Attempting uninstall: opt-einsum\n Found existing installation: opt-einsum 3.3.0\n Uninstalling opt-einsum-3.3.0:\n Successfully uninstalled opt-einsum-3.3.0\n Attempting uninstall: importlib-resources\n Found existing installation: importlib-resources 5.2.0\n Uninstalling importlib-resources-5.2.0:\n Successfully uninstalled importlib-resources-5.2.0\n Attempting uninstall: importlib-metadata\n Found existing installation: importlib-metadata 4.6.1\n Uninstalling importlib-metadata-4.6.1:\n Successfully uninstalled importlib-metadata-4.6.1\n Attempting uninstall: h5py\n Found existing installation: h5py 3.1.0\n Uninstalling h5py-3.1.0:\n Successfully uninstalled 
h5py-3.1.0\n Attempting uninstall: google-pasta\n Found existing installation: google-pasta 0.2.0\n Uninstalling google-pasta-0.2.0:\n Successfully uninstalled google-pasta-0.2.0\n Attempting uninstall: gast\n Found existing installation: gast 0.4.0\n Uninstalling gast-0.4.0:\n Successfully uninstalled gast-0.4.0\n Attempting uninstall: future\n Found existing installation: future 0.16.0\n Uninstalling future-0.16.0:\n Successfully uninstalled future-0.16.0\n Attempting uninstall: dm-tree\n Found existing installation: dm-tree 0.1.6\n Uninstalling dm-tree-0.1.6:\n Successfully uninstalled dm-tree-0.1.6\n Attempting uninstall: dill\n Found existing installation: dill 0.3.4\n Uninstalling dill-0.3.4:\n Successfully uninstalled dill-0.3.4\n Attempting uninstall: attrs\n Found existing installation: attrs 21.2.0\n Uninstalling attrs-21.2.0:\n Successfully uninstalled attrs-21.2.0\n Attempting uninstall: tensorflow\n Found existing installation: tensorflow 2.5.0\n Uninstalling tensorflow-2.5.0:\n Successfully uninstalled tensorflow-2.5.0\n Attempting uninstall: scikit-learn\n Found existing installation: scikit-learn 0.22.2.post1\n Uninstalling scikit-learn-0.22.2.post1:\n Successfully uninstalled scikit-learn-0.22.2.post1\n Attempting uninstall: pydot\n Found existing installation: pydot 1.3.0\n Uninstalling pydot-1.3.0:\n Successfully uninstalled pydot-1.3.0\n Attempting uninstall: pandas\n Found existing installation: pandas 1.1.5\n Uninstalling pandas-1.1.5:\n Successfully uninstalled pandas-1.1.5\n Attempting uninstall: nltk\n Found existing installation: nltk 3.2.5\n Uninstalling nltk-3.2.5:\n Successfully uninstalled nltk-3.2.5\n Attempting uninstall: matplotlib\n Found existing installation: matplotlib 3.2.2\n Uninstalling matplotlib-3.2.2:\n Successfully uninstalled matplotlib-3.2.2\n\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. 
This behaviour is the source of the following dependency conflicts.\npymc3 3.11.2 requires cachetools>=4.2.1, but you have cachetools 4.0.0 which is incompatible.\nmultiprocess 0.70.12.2 requires dill>=0.3.4, but you have dill 0.3.2 which is incompatible.\nkapre 0.3.5 requires numpy>=1.18.5, but you have numpy 1.18.1 which is incompatible.\ngoogle-colab 1.0.0 requires google-auth>=1.17.2, but you have google-auth 1.11.2 which is incompatible.\ngoogle-colab 1.0.0 requires requests~=2.23.0, but you have requests 2.22.0 which is incompatible.\ngoogle-colab 1.0.0 requires six~=1.15.0, but you have six 1.14.0 which is incompatible.\ngoogle-api-python-client 1.12.8 requires google-auth>=1.16.0, but you have google-auth 1.11.2 which is incompatible.\ngoogle-api-core 1.26.3 requires google-auth<2.0dev,>=1.21.1, but you have google-auth 1.11.2 which is incompatible.\ngoogle-api-core 1.26.3 requires protobuf>=3.12.0, but you have protobuf 3.11.3 which is incompatible.\ndatascience 0.10.6 requires folium==0.2.1, but you have folium 0.8.3 which is incompatible.\nalbumentations 0.1.12 requires imgaug<0.2.7,>=0.2.5, but you have imgaug 0.2.9 which is incompatible.\u001b[0m\nSuccessfully installed Keras-Applications-1.0.8 Markdown-3.2.1 Pillow-8.1.0 PyYAML-5.3.1 Werkzeug-1.0.0 absl-py-0.9.0 attrs-20.2.0 boto3-1.12.24 botocore-1.15.24 cachetools-4.0.0 certifi-2019.11.28 click-7.1.1 dill-0.3.2 dm-tree-0.1.5 docutils-0.15.2 future-0.18.2 gast-0.3.3 google-auth-1.11.2 google-auth-oauthlib-0.4.1 google-pasta-0.1.8 googleapis-common-protos-1.52.0 grpcio-1.27.2 h5py-2.10.0 idna-2.8 importlib-metadata-3.7.3 importlib-resources-3.0.0 jmespath-0.9.5 joblib-0.14.1 matplotlib-3.3.4 nltk-3.5 numpy-1.18.1 oauthlib-3.1.0 opt-einsum-3.1.0 packaging-20.9 pandas-1.1.4 protobuf-3.11.3 pydot-1.4.1 pytz-2020.4 regex-2020.2.20 requests-2.22.0 rsa-4.0 s3transfer-0.3.3 sacremoses-0.0.38 scikit-learn-0.23.2 sentencepiece-0.1.85 six-1.14.0 tensorboard-2.3.0 tensorboard-plugin-wit-1.7.0 tensorflow-2.3.1 tensorflow-addons-0.8.1 tensorflow-estimator-2.3.0 tensorflow-metadata-0.24.0 threadpoolctl-2.1.0 tokenizers-0.10.1 tqdm-4.43.0 transformers-4.4.2 urllib3-1.25.8 wrapt-1.12.0 zipp-3.3.1\n" ], [ "!wget --no-check-certificate aic.ufal.mff.cuni.cz/~doubrap1/forms.vectors-w5-d300-ns5.16b.npz", "--2021-07-27 14:03:39-- http://aic.ufal.mff.cuni.cz/~doubrap1/forms.vectors-w5-d300-ns5.16b.npz\nResolving aic.ufal.mff.cuni.cz (aic.ufal.mff.cuni.cz)... 195.113.21.4\nConnecting to aic.ufal.mff.cuni.cz (aic.ufal.mff.cuni.cz)|195.113.21.4|:80... connected.\nHTTP request sent, awaiting response... 302 Found\nLocation: https://aic.ufal.mff.cuni.cz//~doubrap1/forms.vectors-w5-d300-ns5.16b.npz [following]\n--2021-07-27 14:03:40-- https://aic.ufal.mff.cuni.cz//~doubrap1/forms.vectors-w5-d300-ns5.16b.npz\nConnecting to aic.ufal.mff.cuni.cz (aic.ufal.mff.cuni.cz)|195.113.21.4|:443... connected.\nWARNING: cannot verify aic.ufal.mff.cuni.cz's certificate, issued by ‘CN=GEANT OV RSA CA 4,O=GEANT Vereniging,C=NL’:\n Unable to locally verify the issuer's authority.\nHTTP request sent, awaiting response... 
200 OK\nLength: 891609378 (850M)\nSaving to: ‘forms.vectors-w5-d300-ns5.16b.npz’\n\nforms.vectors-w5-d3 100%[===================>] 850.30M 18.8MB/s in 48s \n\n2021-07-27 14:04:29 (17.8 MB/s) - ‘forms.vectors-w5-d300-ns5.16b.npz’ saved [891609378/891609378]\n\n" ], [ "!wget --no-check-certificate aic.ufal.mff.cuni.cz/~doubrap1/sentiment_analysis.py-2021-07-02_181019-a=32,bs=1,b=...index\n!wget --no-check-certificate aic.ufal.mff.cuni.cz/~doubrap1/sentiment_analysis.py-2021-07-02_181019-a=32,bs=1,b=...data-00000-of-00001\n\n!wget --no-check-certificate aic.ufal.mff.cuni.cz/~doubrap1/sentiment_analysis.py-2021-07-05_115521-a=16,bs=2,b=...index\n!wget --no-check-certificate aic.ufal.mff.cuni.cz/~doubrap1/sentiment_analysis.py-2021-07-05_115521-a=16,bs=2,b=...data-00000-of-00001\n\n!wget --no-check-certificate aic.ufal.mff.cuni.cz/~doubrap1/sentiment_analysis.py-2021-06-08_234151-a=12,bs=4,b=...index\n!wget --no-check-certificate aic.ufal.mff.cuni.cz/~doubrap1/sentiment_analysis.py-2021-06-08_234151-a=12,bs=4,b=...data-00000-of-00001\n\n!wget --no-check-certificate aic.ufal.mff.cuni.cz/~doubrap1/sentiment_analysis.py-2021-06-08_172844-a=12,bs=4,b=...index\n!wget --no-check-certificate aic.ufal.mff.cuni.cz/~doubrap1/sentiment_analysis.py-2021-06-08_172844-a=12,bs=4,b=...data-00000-of-00001\n", "--2021-07-27 14:58:49-- http://aic.ufal.mff.cuni.cz/~doubrap1/sentiment_analysis.py-2021-07-02_181019-a=32,bs=1,b=...index\nResolving aic.ufal.mff.cuni.cz (aic.ufal.mff.cuni.cz)... 195.113.21.4\nConnecting to aic.ufal.mff.cuni.cz (aic.ufal.mff.cuni.cz)|195.113.21.4|:80... connected.\nHTTP request sent, awaiting response... 302 Found\nLocation: https://aic.ufal.mff.cuni.cz//~doubrap1/sentiment_analysis.py-2021-07-02_181019-a=32,bs=1,b=...index [following]\n--2021-07-27 14:58:49-- https://aic.ufal.mff.cuni.cz//~doubrap1/sentiment_analysis.py-2021-07-02_181019-a=32,bs=1,b=...index\nConnecting to aic.ufal.mff.cuni.cz (aic.ufal.mff.cuni.cz)|195.113.21.4|:443... connected.\nWARNING: cannot verify aic.ufal.mff.cuni.cz's certificate, issued by ‘CN=GEANT OV RSA CA 4,O=GEANT Vereniging,C=NL’:\n Unable to locally verify the issuer's authority.\nHTTP request sent, awaiting response... 200 OK\nLength: 14990 (15K)\nSaving to: ‘sentiment_analysis.py-2021-07-02_181019-a=32,bs=1,b=...index.1’\n\nsentiment_analysis. 100%[===================>] 14.64K 96.9KB/s in 0.2s \n\n2021-07-27 14:58:50 (96.9 KB/s) - ‘sentiment_analysis.py-2021-07-02_181019-a=32,bs=1,b=...index.1’ saved [14990/14990]\n\n--2021-07-27 14:58:50-- http://aic.ufal.mff.cuni.cz/~doubrap1/sentiment_analysis.py-2021-07-02_181019-a=32,bs=1,b=...data-00000-of-00001\nResolving aic.ufal.mff.cuni.cz (aic.ufal.mff.cuni.cz)... 195.113.21.4\nConnecting to aic.ufal.mff.cuni.cz (aic.ufal.mff.cuni.cz)|195.113.21.4|:80... connected.\nHTTP request sent, awaiting response... 302 Found\nLocation: https://aic.ufal.mff.cuni.cz//~doubrap1/sentiment_analysis.py-2021-07-02_181019-a=32,bs=1,b=...data-00000-of-00001 [following]\n--2021-07-27 14:58:51-- https://aic.ufal.mff.cuni.cz//~doubrap1/sentiment_analysis.py-2021-07-02_181019-a=32,bs=1,b=...data-00000-of-00001\nConnecting to aic.ufal.mff.cuni.cz (aic.ufal.mff.cuni.cz)|195.113.21.4|:443... connected.\nWARNING: cannot verify aic.ufal.mff.cuni.cz's certificate, issued by ‘CN=GEANT OV RSA CA 4,O=GEANT Vereniging,C=NL’:\n Unable to locally verify the issuer's authority.\nHTTP request sent, awaiting response... 
200 OK\nLength: 506220995 (483M)\nSaving to: ‘sentiment_analysis.py-2021-07-02_181019-a=32,bs=1,b=...data-00000-of-00001’\n\nsentiment_analysis. 100%[===================>] 482.77M 11.3MB/s in 34s \n\n2021-07-27 14:59:25 (14.3 MB/s) - ‘sentiment_analysis.py-2021-07-02_181019-a=32,bs=1,b=...data-00000-of-00001’ saved [506220995/506220995]\n\n" ] ], [ [ "# Creating model and input data", "_____no_output_____" ] ], [ [ "s_16 = \"sentiment_analysis.py-2021-07-02_181019-a=32,bs=1,b=..\"\ncsfd_69 = \"sentiment_analysis.py-2021-07-05_115521-a=16,bs=2,b=..\"\nmall_63 = \"sentiment_analysis.py-2021-06-08_234151-a=12,bs=4,b=..\"\nfb_45 = \"sentiment_analysis.py-2021-06-08_172844-a=12,bs=4,b=..\"", "_____no_output_____" ], [ "import os\nos.chdir(\"/content/DiplomaThesis/code/sentiment\")", "_____no_output_____" ], [ "!pip install --upgrade tensorflow\n!pip install --upgrade tensorflow-gpu", "Requirement already satisfied: tensorflow in /usr/local/lib/python3.7/dist-packages (2.5.0)\nRequirement already satisfied: astunparse~=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (1.6.3)\nRequirement already satisfied: wheel~=0.35 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (0.36.2)\nRequirement already satisfied: h5py~=3.1.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (3.1.0)\nRequirement already satisfied: six~=1.15.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (1.15.0)\nRequirement already satisfied: wrapt~=1.12.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (1.12.1)\nRequirement already satisfied: termcolor~=1.1.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (1.1.0)\nRequirement already satisfied: google-pasta~=0.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (0.2.0)\nRequirement already satisfied: grpcio~=1.34.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (1.34.1)\nRequirement already satisfied: protobuf>=3.9.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (3.11.3)\nRequirement already satisfied: gast==0.4.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (0.4.0)\nRequirement already satisfied: keras-preprocessing~=1.1.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (1.1.2)\nRequirement already satisfied: absl-py~=0.10 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (0.13.0)\nRequirement already satisfied: opt-einsum~=3.3.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (3.3.0)\nRequirement already satisfied: tensorboard~=2.5 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (2.5.0)\nRequirement already satisfied: tensorflow-estimator<2.6.0,>=2.5.0rc0 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (2.5.0)\nRequirement already satisfied: keras-nightly~=2.5.0.dev in /usr/local/lib/python3.7/dist-packages (from tensorflow) (2.5.0.dev2021032900)\nRequirement already satisfied: typing-extensions~=3.7.4 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (3.7.4.3)\nRequirement already satisfied: flatbuffers~=1.12.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (1.12)\nRequirement already satisfied: numpy~=1.19.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow) (1.19.5)\nRequirement already satisfied: cached-property in /usr/local/lib/python3.7/dist-packages (from h5py~=3.1.0->tensorflow) (1.5.2)\nRequirement already satisfied: setuptools in /usr/local/lib/python3.7/dist-packages (from protobuf>=3.9.2->tensorflow) (57.2.0)\nRequirement already satisfied: 
tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.5->tensorflow) (1.7.0)\nRequirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.5->tensorflow) (2.22.0)\nRequirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.5->tensorflow) (0.4.1)\nRequirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.5->tensorflow) (0.6.1)\nRequirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.5->tensorflow) (1.0.0)\nRequirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.5->tensorflow) (3.2.1)\nRequirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.5->tensorflow) (1.11.2)\nRequirement already satisfied: rsa<4.1,>=3.1.4 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard~=2.5->tensorflow) (4.0)\nRequirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard~=2.5->tensorflow) (4.0.0)\nRequirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard~=2.5->tensorflow) (0.2.8)\nRequirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.5->tensorflow) (1.3.0)\nRequirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.7/dist-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard~=2.5->tensorflow) (0.4.8)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard~=2.5->tensorflow) (1.25.8)\nRequirement already satisfied: idna<2.9,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard~=2.5->tensorflow) (2.8)\nRequirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard~=2.5->tensorflow) (3.0.4)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard~=2.5->tensorflow) (2019.11.28)\nRequirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.5->tensorflow) (3.1.0)\nRequirement already satisfied: tensorflow-gpu in /usr/local/lib/python3.7/dist-packages (2.5.0)\nRequirement already satisfied: opt-einsum~=3.3.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow-gpu) (3.3.0)\nRequirement already satisfied: keras-preprocessing~=1.1.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow-gpu) (1.1.2)\nRequirement already satisfied: wrapt~=1.12.1 in /usr/local/lib/python3.7/dist-packages (from tensorflow-gpu) (1.12.1)\nRequirement already satisfied: termcolor~=1.1.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow-gpu) (1.1.0)\nRequirement already satisfied: tensorboard~=2.5 in /usr/local/lib/python3.7/dist-packages (from tensorflow-gpu) (2.5.0)\nRequirement already satisfied: absl-py~=0.10 in /usr/local/lib/python3.7/dist-packages (from tensorflow-gpu) (0.13.0)\nRequirement already satisfied: gast==0.4.0 in 
/usr/local/lib/python3.7/dist-packages (from tensorflow-gpu) (0.4.0)\nRequirement already satisfied: grpcio~=1.34.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow-gpu) (1.34.1)\nRequirement already satisfied: flatbuffers~=1.12.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow-gpu) (1.12)\nRequirement already satisfied: google-pasta~=0.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow-gpu) (0.2.0)\nRequirement already satisfied: six~=1.15.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow-gpu) (1.15.0)\nRequirement already satisfied: astunparse~=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorflow-gpu) (1.6.3)\nRequirement already satisfied: h5py~=3.1.0 in /usr/local/lib/python3.7/dist-packages (from tensorflow-gpu) (3.1.0)\nRequirement already satisfied: tensorflow-estimator<2.6.0,>=2.5.0rc0 in /usr/local/lib/python3.7/dist-packages (from tensorflow-gpu) (2.5.0)\nRequirement already satisfied: typing-extensions~=3.7.4 in /usr/local/lib/python3.7/dist-packages (from tensorflow-gpu) (3.7.4.3)\nRequirement already satisfied: protobuf>=3.9.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow-gpu) (3.11.3)\nRequirement already satisfied: numpy~=1.19.2 in /usr/local/lib/python3.7/dist-packages (from tensorflow-gpu) (1.19.5)\nRequirement already satisfied: wheel~=0.35 in /usr/local/lib/python3.7/dist-packages (from tensorflow-gpu) (0.36.2)\nRequirement already satisfied: keras-nightly~=2.5.0.dev in /usr/local/lib/python3.7/dist-packages (from tensorflow-gpu) (2.5.0.dev2021032900)\nRequirement already satisfied: cached-property in /usr/local/lib/python3.7/dist-packages (from h5py~=3.1.0->tensorflow-gpu) (1.5.2)\nRequirement already satisfied: setuptools in /usr/local/lib/python3.7/dist-packages (from protobuf>=3.9.2->tensorflow-gpu) (57.2.0)\nRequirement already satisfied: markdown>=2.6.8 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.5->tensorflow-gpu) (3.2.1)\nRequirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.5->tensorflow-gpu) (1.7.0)\nRequirement already satisfied: werkzeug>=0.11.15 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.5->tensorflow-gpu) (1.0.0)\nRequirement already satisfied: google-auth<2,>=1.6.3 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.5->tensorflow-gpu) (1.11.2)\nRequirement already satisfied: requests<3,>=2.21.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.5->tensorflow-gpu) (2.22.0)\nRequirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.5->tensorflow-gpu) (0.4.1)\nRequirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /usr/local/lib/python3.7/dist-packages (from tensorboard~=2.5->tensorflow-gpu) (0.6.1)\nRequirement already satisfied: cachetools<5.0,>=2.0.0 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard~=2.5->tensorflow-gpu) (4.0.0)\nRequirement already satisfied: rsa<4.1,>=3.1.4 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard~=2.5->tensorflow-gpu) (4.0)\nRequirement already satisfied: pyasn1-modules>=0.2.1 in /usr/local/lib/python3.7/dist-packages (from google-auth<2,>=1.6.3->tensorboard~=2.5->tensorflow-gpu) (0.2.8)\nRequirement already satisfied: requests-oauthlib>=0.7.0 in /usr/local/lib/python3.7/dist-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.5->tensorflow-gpu) (1.3.0)\nRequirement 
already satisfied: pyasn1<0.5.0,>=0.4.6 in /usr/local/lib/python3.7/dist-packages (from pyasn1-modules>=0.2.1->google-auth<2,>=1.6.3->tensorboard~=2.5->tensorflow-gpu) (0.4.8)\nRequirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard~=2.5->tensorflow-gpu) (2019.11.28)\nRequirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard~=2.5->tensorflow-gpu) (1.25.8)\nRequirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard~=2.5->tensorflow-gpu) (3.0.4)\nRequirement already satisfied: idna<2.9,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests<3,>=2.21.0->tensorboard~=2.5->tensorflow-gpu) (2.8)\nRequirement already satisfied: oauthlib>=3.0.0 in /usr/local/lib/python3.7/dist-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard~=2.5->tensorflow-gpu) (3.1.0)\n" ], [ "import os\nos.chdir(\"DiplomaThesis/code/sentiment\")", "_____no_output_____" ], [ "import sentiment_analysis as sen", "_____no_output_____" ], [ "pokusny_vstup = \"Velmi příjemný zážitek.\\nJe to na hovno.\\nRelativně dobrý.\"", "_____no_output_____" ] ], [ [ "You can also upload your own file with each text for classification separated by a newline. In that case, do not run the following cell, and/or change the value of the argument \"predict\". You can also change the model by changing the argument \"model\".", "_____no_output_____" ] ], [ [ "with open(\"pokusny_vstup\", \"w\") as f:\n    f.write(pokusny_vstup)", "_____no_output_____" ], [ "sen.main([\"--bert\", \"../morphodita-research/robeczech/noeol-210323/\", \"--model\", \"./\" + s_16, \"--predict\", \"pokusny_vstup\"])", "append path\nWARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).\nWARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.\nWARNING:tensorflow:Inconsistent references when loading the checkpoint into this object graph. Either the Trackable object references in the Python program have changed in an incompatible way, or the checkpoint was generated in an incompatible program.\n\nTwo checkpoint references resolved to different objects (<tensorflow.python.keras.layers.core.Dense object at 0x7f729c5847d0> and <tensorflow.python.keras.layers.core.Dropout object at 0x7f729d24a950>).\n3\n8\nmax\n[0, 5590, 15343, 5770, 31, 2]\ni\n[0, 144, 16, 9, 39946, 337, 31, 2]\ni\n[0, 1827, 112, 7431, 1686, 31, 2]\ni\nWARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).\nWARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.\npost\nVelmi příjemný zážitek.\npost\nJe to na hovno.\npost\nRelativně dobrý.\n" ], [ "!cat ./pokusny_vstup_vystup | sed 's/^2/positive/' | sed 's/^1/negative/' | sed 's/^0/neutral/'", "positive\tVelmi příjemný zážitek.\nnegative\tJe to na hovno.\nneutral\tRelativně dobrý.\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
4a3aa08f32e7a52e4006ba55c1b8d1dd14b750fb
74,765
ipynb
Jupyter Notebook
CPD-DV Hands on Lab.ipynb
thsonvt/CPDDVLAB
73f7a5c84a40ec5331b8966a2745f715d3204e4e
[ "Apache-2.0" ]
3
2020-03-25T12:54:43.000Z
2021-12-06T16:17:09.000Z
CPD-DV Hands on Lab.ipynb
thsonvt/CPDDVLAB
73f7a5c84a40ec5331b8966a2745f715d3204e4e
[ "Apache-2.0" ]
null
null
null
CPD-DV Hands on Lab.ipynb
thsonvt/CPDDVLAB
73f7a5c84a40ec5331b8966a2745f715d3204e4e
[ "Apache-2.0" ]
7
2020-07-14T02:00:28.000Z
2021-08-13T15:12:18.000Z
42.287896
507
0.639765
[ [ [ "# IBM Cloud Pak for Data - Multi-Cloud Virtualization Hands-on Lab", "_____no_output_____" ], [ "## Introduction\nWelcome to the IBM Cloud Pak for Data Multi-Cloud Virtualization Hands on Lab. \n\nIn this lab you analyze data from multiple data sources, from across multiple Clouds, without copying data into a warehouse.\n\nThis hands-on lab uses live databases, were data is “virtually” available through the IBM Cloud Pak for Data Virtualization Service. This makes it easy to analyze data from across your multi-cloud enterprise using tools like, Jupyter Notebooks, Watson Studio or your favorite reporting tool like Cognos. ", "_____no_output_____" ], [ "### Where to find this sample online\nYou can find a copy of this notebook on GITHUB at https://github.com/Db2-DTE-POC/CPDDVLAB.", "_____no_output_____" ], [ "### The business problem and the landscape\nThe Acme Company needs timely analysis of stock trading data from multiple source systems. \n\nTheir data science and development teams needs access to:\n* Customer data\n* Account data\n* Trading data\n* Stock history and Symbol data\n\n<img src=\"https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/CPDDVLandscape.png\">\n\nThe data sources are running on premises and on the cloud. In this example many of the databases are also running on OpenShift but they could be managed, virtual or bare-metal cloud installations. IBM Cloud Pak for Data doesn't care. Enterprise DB (Postgres) is also running in the Cloud. Mongo and Informix are running on premises. Finally, we also have a VSAM file on zOS leveraging the Data Virtualization Manager for zOS. \n\nTo simplify access for Data Scientists and Developers the Acme team wants to make all their data look like it is coming from a single database. They also want to combine data to create simple to use tables.\n\nIn the past, Acme built a dedicated data warehouse, and then created ETL (Export, Transform and Load) job to move data from each data source into the warehouse were it could be combined. Now they can just virtualize your data without moving it.", "_____no_output_____" ], [ "### In this lab you learn how to:\n\n* Sign into IBM Cloud Pak for Data using your own Data Engineer and Data Scientist (User) userids\n* Connect to different data sources, on premises and across a multi-vendor Cloud\n* Make remote data from across your multi-vendor enterprise look and act like local tables in a single database\n* Make combining complex data and queries simple even for basic users\n* Capture complex SQL in easy to consume VIEWs that act just like simple tables\n* Ensure that users can securely access even complex data across multiple sources \n* Use roles and priviledges to ensure that only the right user may see the right data\n* Make development easy by connecting to your virtualized data using Analytic tools and Application from outside of IBM Cloud Pak for Data. ", "_____no_output_____" ], [ "## Getting Started", "_____no_output_____" ], [ "### Using Jupyter notebooks\nYou are now officially using a Jupyter notebook! If this is your first time using a Jupyter notebook you might want to go through the [An Introduction to Jupyter Notebooks](http://localhost:8888/notebooks/An_Introduction_to_Jupyter_Notebooks.ipynb). The introduction shows you some of the basics of using a notebook, including how to create the cells, run code, and save files for future use. \n\nJupyter notebooks are based on IPython which started in development in the 2006/7 timeframe. 
The existing Python interpreter was limited in functionality and work was started to create a richer development environment. By 2011 the development efforts resulted in IPython being released (http://blog.fperez.org/2012/01/ipython-notebook-historical.html).\n\nJupyter notebooks were a spinoff (2014) from the original IPython project. IPython continues to be the kernel that Jupyter runs on, but the notebooks are now a project on their own.\n\nJupyter notebooks run in a browser and communicate to the backend IPython server which renders this content. These notebooks are used extensively by data scientists and anyone wanting to document, plot, and execute their code in an interactive environment. The beauty of Jupyter notebooks is that you document what you do as you go along.", "_____no_output_____" ], [ "### Connecting to IBM Cloud Pak for Data\nFor this lab you will be assigned two IBM Cloud Pak for Data User IDs: a Data Engineer userid and an end-user userid. Check with the lab coordinator which userids and passwords you should use.\n* **Engineer:**\n * ID: LABDATAENGINEERx\n * PASSWORD: xxx\n* **User:**\n * ID: LABUSERx\n\n * PASSWORD: xxx\n\nTo get started, sign in using your Engineer id:\n1. Right-click the following link and select **open link in new window** to open the IBM Cloud Pak for Data Console: https://services-uscentral.skytap.com:9152/\n2. Organize your screen so that you can see both this notebook as well as the IBM Cloud Pak for Data Console at the same time. This will make it much easier for you to complete the lab without switching back and forth between screens.\n3. Sign in using your Engineer userid and password\n4. Click the icon at the very top right of the webpage. It will look something like this:\n\n <img src=\"https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.06.10 EngineerUserIcon.png\">\n\n5. Click **Profile and settings**\n6. Click **Permissions** and review the user permissions for this user\n7. Click the **three bar menu** at the very top left of the console webpage\n\n <img src=\"https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/2.42.03 Three Bar.png\">\n\n8. Click **Collect** if the Collect menu isn't already open\n9. Click **Data Virtualization**. The Data Virtualization user interface is displayed\n\n <img src=\"https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.06.12 CollectDataVirtualization.png\">\n\n10. Click the caret symbol beside **Menu** below the Data Virtualization title\n\n <img src=\"https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/3.07.47 Menu Carrot.png\">\n\nThis displays the actions available to your user. Different users have access to more or fewer menu options depending on their role in Data Virtualization. \n\nAs a Data Engineer you can:\n* Add and modify Data sources. Each source is a connection to a single database, either inside or outside of IBM Cloud Pak for Data.\n* Virtualize data. 
This makes tables in other data sources look and act like tables that are local to the Data Virtualization database\n* Work with the data you have virtualized.\n* Write SQL to access and join data that you have virtualized\n* See detailed information on how to connect external analytic tools and applications to your virtualized data\n\n <img src=\"https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.12.54 Menu Data sources.png\">\n\nAs a User you can only:\n* Work with data that has been virtualized for you\n* Write SQL to work with that data\n* See detailed connection information\n\nAs an Administrator (only available to the course instructor) you can also:\n* Manage IBM Cloud Pak for Data User Access and Roles\n* Create and Manage Data Caches to accelerate performance\n* Change key service settings", "_____no_output_____" ], [ "## Basic Data Virtualization", "_____no_output_____" ], [ "### Exploring Data Source Connections\nLet's start by looking at the Data Source Connections that are already available. \n\n1. Click the Data Virtualization menu and select **Data Sources**.\n\n <img src=\"https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.12.54 Menu Data sources.png\">\n\n2. Click the **icon below the menu with a circle with three connected dots**.\n <img src=\"https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.14.50 Connections Icons Spider.png\">\n3. A spider diagram of the connected data sources opens. \n\n <img src=\"https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.15.31 Data Sources Spider.png\">\n\n This displays the Data Source Graph with 8 active data sources:\n * 4 Db2 Family Databases hosted on premises, IBM Cloud, Azure and AWS\n * 1 EDB Postgres Database on Azure\n * 1 zOS VSAM file\n * 1 MongoDB Database running on premises\n * 1 Informix Database running on premises \n\n**We are not going to add a new data source** but just go through the steps so you can see how to add additional data sources.\n1. Click **+ Add** at the right of the console screen\n2. Select **Add data source** from the menu\nYou can see a history of other data source connection information that was used before. This history is maintained to make reconnecting to data sources easier and faster.\n3. Click **Add connection**\n4. Click the field below **Connection type**\n5. Scroll through all the **available data sources** to see the available connection types\n6. Select **different data connection types** from the list to see the information required to connect to a new data source. \nAt a minimum you typically need the host URL and port address, database name, userid and password. You can also connect using an SSL certificate that can be dragged and dropped directly into the console interface. \n7. Click **Cancel** to return to the previous list of connections to add\n8. Click **Cancel** again to return to the list of currently connected data sources", "_____no_output_____" ], 
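[ "Before registering a new source it can save time to confirm that the connection details actually work. The cell below is a minimal sketch of such a check, run from a notebook with the ibm_db driver; every connection value in it is a placeholder you must replace with the details of the source you want to test, and for a non-Db2 source you would use that engine's own Python driver instead.", "_____no_output_____" ] ], [ [ "# Optional connectivity check for a Db2-family data source.\n# Every connection value below is a placeholder; substitute the details\n# you plan to register in the Add data source dialog.\nimport ibm_db\n\ndsn = (\n    'DATABASE=bludb;'\n    'HOSTNAME=myhost.example.com;'\n    'PORT=50000;'\n    'PROTOCOL=TCPIP;'\n    'UID=myuser;'\n    'PWD=mypassword;'\n)\ntry:\n    conn = ibm_db.connect(dsn, '', '')\n    print('Connection succeeded')\n    ibm_db.close(conn)\nexcept Exception as err:\n    print('Connection failed:', err)", "_____no_output_____" ] ], [ [ 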
"### Exploring the available data\nNow that you understand how to connect to data sources you can start virtualizing data. Much of the work has already been done for you. IBM Cloud Pak for Data searches through the available data sources and compiles a single large inventory of all the tables and data available to virtualize in IBM Cloud Pak for Data. \n\n1. Click the Data Virtualization menu and select **Virtualize**\n\n <img src=\"https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.13.07 Menu Virtualize.png\">\n \n2. Check the total number of available tables at the top of the list. There should be well over 500 available.\n\n <img src=\"https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.15.50 Available Tables.png\">\n\n3. Enter \"STOCK\" into the search field and hit **Enter**. Any tables with the string\n**STOCK** in the table name, the table schema or with a column name that includes **STOCK** appear in the search results. \n\n <img src=\"https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.39.43 Find STOCK.png\">\n\n4. Hover your mouse pointer over the far right side of the search results table. An **eye** icon will appear on each row as you move your mouse. \n5. Click the **eye** icon beside one table. This displays a preview of the data in the selected table.\n\n <img src=\"https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/3.26.54 Eye.png\">\n\n6. Click **X** at the top right of the dialog box to return to the search results.", "_____no_output_____" ], [ "### Creating New Tables\nSo that each user in this lab can have their own data to virtualize, you will create your own table in a remote database.\n\nIn this part of the lab you will use this Jupyter notebook and Python code to connect to a source database, create a simple table and populate it with data. \n\nIBM Cloud Pak for Data will automatically detect the change in the source database and make the new table available for virtualization.\n\nIn this example, you connect to the Db2 Warehouse database running in IBM Cloud Pak for Data but the database can be anywhere. All you need is the connection information and authorized credentials. \n\n <img src=\"https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/Db2CPDDatabase.png\">", "_____no_output_____" ], [ "The first step is to connect to one of our remote data sources directly as if we were part of the team building a new business application. Since each lab user will create their own table in their own schema, the first thing you need to do is update and run the cell below with your engineer name. \n1. In this Jupyter notebook, click on the cell below \n2. Update the lab number in the cell below to your assigned user and lab number\n3. Click **Run** from the Jupyter notebook menu above", "_____no_output_____" ] ], [ [ "# Setting your userID\nlabnumber = 0\nengineer = 'DATAENGINEER' + str(labnumber)\nprint('variable engineer set to = ' + str(engineer))", "_____no_output_____" ] ], [ [ "The next part of the lab relies on a Jupyter notebook extension, commonly referred to as a \"magic\" command, to connect to a Db2 database. To use the commands you load the extension by running another notebook, called db2.ipynb, that contains all the required code: \n<pre>\n&#37;run db2.ipynb\n</pre>\nThe cell below loads the Db2 extension directly from GITHUB. Note that it will take a few seconds for the extension to load, so you should generally wait until the \"Db2 Extensions Loaded\" message is displayed in your notebook. \n1. Click the cell below\n2. Click **Run**. When the cell is finished running, In[*] will change to In[2]", "_____no_output_____" ] ], [ [ "# !wget https://raw.githubusercontent.com/IBM/db2-jupyter/master/db2.ipynb\n!wget -O db2.ipynb https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/db2.ipynb\n\n%run db2.ipynb\nprint('db2.ipynb loaded')", "_____no_output_____" ] ], 
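[ [ "The extension registers both a line magic (a single statement on the %sql line) and a cell magic (%%sql, with the SQL on the lines that follow); this lab uses both forms. As a minimal sanity check, assuming the extension loaded, you can run a trivial statement like the one below once you have connected in the next step; VALUES simply returns the three listed rows without touching any table.", "_____no_output_____" ] ], [ [ "%%sql\nVALUES 1, 2, 3", "_____no_output_____" ] ], 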
[ [ "#### Connecting to Db2\n\nBefore any SQL commands can be issued, a connection needs to be made to the Db2 database that you will be using. \n\nThe Db2 magic command tracks whether or not a connection has occurred in the past and saves this information between notebooks and sessions. When you start up a notebook and issue a command, the program will reconnect to the database using your credentials from the last session. In the event that you have not connected before, the system will prompt you for all the information it needs to connect. This information includes:\n\n- Database name\n- Hostname\n- PORT \n- Userid\n- Password\n\nRun the next cell.", "_____no_output_____" ] ], [ [ "# Connect to the Db2 Warehouse on IBM Cloud Pak for Data Database from inside of IBM Cloud Pak for Data\ndatabase = 'bludb'\nuser = 'user999'\npassword = 't1cz?K9-X1_Y-2Wi'\nhost = 'openshift-skytap-nfs-woker-5.ibm.com'\nport = '31928'\n\n%sql CONNECT TO {database} USER {user} USING {password} HOST {host} PORT {port}", "_____no_output_____" ] ], [ [ "To check that the connection is working, run the following cell. It lists the tables in the database in the **DVDEMO** schema. Only the first 5 tables are listed.", "_____no_output_____" ] ], [ [ "%sql select TABNAME, OWNER from syscat.tables where TABSCHEMA = 'DVDEMO'", "_____no_output_____" ] ], [ [ "Now that you can successfully connect to the database, you are going to create two tables with the same name and columns across two different schemas. In the following steps of the lab you are going to virtualize these tables in IBM Cloud Pak for Data and fold them together into a single table. \n\nThe next cell sets the default schema to your engineer name followed by 'A'. Notice how you can set a Python variable and substitute it into the SQL statement in the cell. The **-e** option echoes the command. \n\nRun the next cell.", "_____no_output_____" ] ], [ [ "schema_name = engineer+'A'\ntable_name = 'DISCOVER_'+str(labnumber)\n\nprint(\"\")\nprint(\"Lab #: \"+str(labnumber))\nprint(\"Schema name: \" + str(schema_name))\nprint(\"Table name: \" + str(table_name))\n\n%sql -e SET CURRENT SCHEMA {schema_name}", "_____no_output_____" ] ], [ [ "Run the next cell to create a table with a single INTEGER column containing values from 1 to 10. The **-q** flag in the %sql command suppresses any warning message if the table already exists.", "_____no_output_____" ] ], [ [ "sqlin = f'''\nDROP TABLE {table_name}; \nCREATE TABLE {table_name} (A INT); \nINSERT INTO {table_name} VALUES 1,2,3,4,5,6,7,8,9,10; \nSELECT * FROM {table_name}; \n'''\n\n%sql -q {sqlin}", "_____no_output_____" ] ], [ [ "Run the next two cells to create the same table in a schema ending in **B**. It is populated with values from 11 to 20.", "_____no_output_____" ] ], [ [ "schema_name = engineer+'B'\n\nprint(\"\")\nprint(\"Lab #: \"+str(labnumber))\nprint(\"Schema name: \" + str(schema_name))\nprint(\"Table name: \" + str(table_name))\n\n%sql -e SET CURRENT SCHEMA {schema_name}", "_____no_output_____" ], [ "sqlin = f'''\nDROP TABLE {table_name}; \nCREATE TABLE {table_name} (A INT); \nINSERT INTO {table_name} VALUES 11,12,13,14,15,16,17,18,19,20; \nSELECT * FROM {table_name}; \n'''\n%sql -q {sqlin}", "_____no_output_____" ] ], [ [ "Run the next cell to see all the tables in the database you just created. ", "_____no_output_____" ] ], [ [ "%sql SELECT TABSCHEMA, TABNAME FROM SYSCAT.TABLES WHERE TABNAME = '{table_name}'", "_____no_output_____" ] ], [ [ "Run the next cell to see all the tables in the database that are like **DISCOVER**. You may see tables created by other people running the lab. ", "_____no_output_____" ] ], [ [ "%sql SELECT TABSCHEMA, TABNAME FROM SYSCAT.TABLES WHERE TABNAME LIKE 'DISCOVER%'", "_____no_output_____" ] ], 
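[ [ "Before virtualizing anything it is worth confirming what you just built. The optional cell below is a sketch that reuses the engineer and table_name variables set earlier: it counts the rows in each schema and uses UNION ALL to combine the two results, a manual preview of exactly the kind of combined table that Data Virtualization's table grouping (folding) will create for you automatically in the next section.", "_____no_output_____" ] ], [ [ "# Optional sanity check: row counts for both copies of your DISCOVER table.\n# The UNION ALL is a manual preview of what folding the two tables will do.\nsqlin = f'''\nSELECT '{engineer}A' AS TABLE_SCHEMA, COUNT(*) AS ROW_COUNT, MIN(A) AS MIN_A, MAX(A) AS MAX_A FROM {engineer}A.{table_name}\nUNION ALL\nSELECT '{engineer}B', COUNT(*), MIN(A), MAX(A) FROM {engineer}B.{table_name};\n'''\n%sql {sqlin}", "_____no_output_____" ] ], 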
", "_____no_output_____" ] ], [ [ "%sql SELECT TABSCHEMA, TABNAME FROM SYSCAT.TABLES WHERE TABNAME LIKE 'DISCOVER%'", "_____no_output_____" ] ], [ [ "### Virtualizing your new Tables\nNow that you have created two new tables you can virtualize that data and make it look like a single table in your database.\n1. Return to the IBM Cloud Pak for Data Console\n2. Click **Virtualize** in the Data Virtualization menu if you are not still in the Virtualize page\n\n <img src=\"https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.13.07 Menu Virtualize.png\">\n\n3. Enter your current userid, i.e. DATAENGINEER1 in the search bar and hit **Enter**. Now you can see that your new tables have automatically been discovered by IBM Cloud Pak for Data.\n\n <img src=\"https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.31.01 Available Discover Tables.png\">\n\n4. Select the two tables you just created by clicking the **check box** beside each table. Make sure you only select those for your LABDATAENGINEER schema.\n5. Click **Add to Cart**. Notice that the number of items in your cart is now **2**.\n\n <img src=\"https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.33.11 Available ENGINEER Tables.png\">\n\n6. Click **View Cart**\n\n <img src=\"https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.33.31 View Cart(2).png\">\n\n7. Change the name of your two tables from DISCOVER to **DISCOVERA** and **DISCOVERB**. These are the new names that you will be able to use to find your tables in the Data Virtualization database. Don't change the Schema name. It is unique to your current userid. \n\n <img src=\"https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.34.21 Assign to Project.png\">\n \n9. Click the **back arrow** beside **Review cart and virtualize tables**. We are going to add one more thing to your cart.\n\n <img src=\"https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.34.30 Back Arrow Icon.png\">\n\n10. Click the checkbox beside **Automatically group tables**. Notice how all the tables called **DISCOVER** have been grouped together into a single entry.\n\n <img src=\"https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.35.18 Automatically Group Available Tables.png\">\n\n11. Select the row were all the DISCOVER tables have been grouped together\n12. Click **Add to cart**. \n13. Click **View cart**\n\n <img src=\"https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.35.28 View cart(3).png\">\n \n You should now see three items in your cart.\n\n <img src=\"https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.35.57 Cart with Fold.png\">\n\n14. Hover over the elipsis icon at the right side of the list for the **DISCOVER** table\n\n <img src=\"https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.34.44 Elipsis.png\">\n\n15. Select **Edit grouped tables**\n\n <img src=\"https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.36.11 Cart Elipsis Menu.png\">\n\n16. Deselect all the tables except for those in one of the schemas you created. You should now have two tables selected. \n17. Click **Apply**\n17. Change the name of the new combined table to **DISCOVERFOLD**\n18. Select the **Data Virtualization Hands in Lab** project from the drop down list. \n20. Click **Virtualize**. You see that three new virtual tables have been created. 
\n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.36.49 Virtualize.png">\n \n The Virtual tables created dialog box opens.\n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.37.24 Virtual tables created.png">\n \n20. Click **View my virtualized data**. You return to the My virtualized data page.", "_____no_output_____" ], [ "### Working with your new tables\n1. Enter DISCOVER_# where # is your lab number\n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.37.55 Find DISCOVER.png">\n \nYou should see the three virtual tables you just created. Notice that you do not see tables that other users have created. By default, Data Engineers only see virtualized tables they have virtualized themselves or virtual tables where they have been given access by other users. \n2. Click the ellipsis (...) beside your **DISCOVERFOLD_#** table and select **Preview** to confirm that it contains 20 rows.\n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/4.32.01 Elipsis Fold.png">\n\n3. Click **SQL Editor** from the Data Virtualization menu\n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.13.33 Menu SQL editor.png">\n\n4. Click **Blank** to create a new blank SQL script\n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.38.24 + Blank.png">\n\n5. Enter **SELECT * FROM DISCOVERFOLD_#;** into the SQL Editor\n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.38.44 SELECT*.png">\n\n6. Click **Run All** at the bottom left of the SQL Editor window. You should see 20 rows returned in the result. \n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.38.52 Run all.png">\n\nNotice that you didn't have to specify the schema for your new virtual tables. The SQL Editor automatically uses the schema associated with the userid you used when you created your new tables. \n\nNow you can:\n* Create a connection to a remote data source \n* Make a new or existing table in that remote data source look and act like a local table \n* Fold data from different tables, in the same data source or across data sources, into a single virtual table", "_____no_output_____" ], [ "## Gaining Insight from Virtualized Data", "_____no_output_____" ], [ "Now that you understand the basics of Data Virtualization you can explore how easy it is to gain insight across multiple data sources without moving data. \n\nIn the next set of steps you connect to virtualized data from this notebook using your LABDATAENGINEER userid. You can use the same techniques to connect to virtualized data from applications and analytic tools outside of IBM Cloud Pak for Data. \n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/ConnectingTotheAnalyticsDatabase.png">\n\nConnecting to all your virtualized data is just like connecting to a single database. All the complexity of dozens of tables across multiple databases from different on-premises and cloud providers is now as simple as connecting to a single database and querying a table. \n\nWe are going to connect to the IBM Cloud Pak for Data Virtualization database in exactly the same way we connected to a Db2 database earlier in this lab. However, we need to change the detailed connection information. \n\n1. 
Click **Connection Details** in the Data Virtualization menu\n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.13.44 Menu connection details.png">\n\n2. Click **Without SSL**\n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.14.29 Connection details.png">\n\n3. Copy the **User ID** by highlighting it with your mouse, right-clicking and selecting **Copy**\n4. Paste the **User ID** into the next cell in this notebook, between the quotation marks where **user =** appears (see below)\n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.54.27 Notebook Login.png">\n\n5. Click **Service Settings** in the Data Virtualization menu\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.14.05 Menu Service settings.png">\n\n6. Look for the Access Information section of the page\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.14.15 Access information.png">\n\n7. Click **Show** to see the password. Highlight the password and copy it using the right-click menu\n8. Paste the **password** into the cell below, between the quotation marks, using the right-click paste.\n9. Run the cell below to connect to the Data Virtualization database. ", "_____no_output_____" ], [ "#### Connecting to Data Virtualization SQL Engine", "_____no_output_____" ] ], [ [ "# Connect to the IBM Cloud Pak for Data Virtualization Database from inside CPD\ndatabase = 'bigsql'\nuser = 'userxxxx'\npassword = 'xxxxxxxxxxxxxx'\nhost = 'openshift-skytap-nfs-lb.ibm.com'\nport = '32080'\n\n%sql CONNECT TO {database} USER {user} USING {password} HOST {host} PORT {port}", "_____no_output_____" ] ], [ [ "### Stock Symbol Table\n#### Get information about the stocks that are in the database\n**System Z - VSAM**\nThis table comes from a VSAM file on z/OS. IBM Cloud Pak for Data Virtualization works together with Data Virtualization Manager for z/OS to make this look like a local database table. For the following examples you can substitute any of the symbols below.", "_____no_output_____" ] ], [ [ "%sql -a select * from DVDEMO.STOCK_SYMBOLS", "_____no_output_____" ] ], [ [ "### Stock History Table\n#### Get the Price of a Stock over the Year\nSet the stock symbol in the line below and run the cell. This information is folded together from two identical tables, one on a Db2 database and one on an Informix database. Run the next two cells. Then pick a new stock symbol from the list above, enter it into the cell below and run both cells again.\n\n**CP4D - Db2, Skytap - Informix**", "_____no_output_____" ] ], [ [ "stock = 'AXP'\nprint('variable stock set to = ' + str(stock))", "_____no_output_____" ], [ "%%sql -pl\nSELECT WEEK(TX_DATE) AS WEEK, OPEN FROM FOLDING.STOCK_HISTORY\nWHERE SYMBOL = :stock AND TX_DATE != '2017-12-01'\nORDER BY WEEK(TX_DATE) ASC", "_____no_output_____" ] ], [ [ "#### Trend of Three Stocks\nThis chart shows three stock prices over the course of a year. 
It uses the same folded stock history information.\n\n**CP4D - Db2, Skytap - Informix**", "_____no_output_____" ] ], [ [ "stocks = ['INTC','MSFT','AAPL']", "_____no_output_____" ], [ "%%sql -pl\nSELECT SYMBOL, WEEK(TX_DATE), OPEN FROM FOLDING.STOCK_HISTORY\nWHERE SYMBOL IN (:stocks) AND TX_DATE != '2017-12-01'\nORDER BY WEEK(TX_DATE) ASC", "_____no_output_____" ] ], [ [ "#### 30 Day Moving Average of a Stock\nEnter the stock symbol below to see the 30 day moving average of a single stock.\n\n**CP4D - Db2, Skytap - Informix**", "_____no_output_____" ] ], [ [ "stock = 'AAPL'", "_____no_output_____" ], [ "# Compute a centered moving average over roughly 30 trading days\n# (15 rows before and 15 rows after each date).\nsqlin = \\\n\"\"\"\nSELECT WEEK(TX_DATE) AS WEEK, OPEN, \n AVG(OPEN) OVER (\n ORDER BY TX_DATE\n ROWS BETWEEN 15 PRECEDING AND 15 FOLLOWING) AS MOVING_AVG\n FROM FOLDING.STOCK_HISTORY\n WHERE SYMBOL = :stock\n ORDER BY WEEK(TX_DATE)\n\"\"\"\ndf = %sql {sqlin}\ntxdate = df['WEEK']\nsales = df['OPEN']\navg = df['MOVING_AVG']\n\nplt.xlabel(\"Day\", fontsize=12);\nplt.ylabel(\"Opening Price\", fontsize=12);\nplt.suptitle(\"Opening Price and Moving Average of \" + stock, fontsize=20);\nplt.plot(txdate, sales, 'r');\nplt.plot(txdate, avg, 'b');\nplt.show();", "_____no_output_____" ] ], [ [ "#### Trading Volume of INTC versus MSFT and AAPL in the First Week of November\n\n**CP4D - Db2, Skytap - Informix**", "_____no_output_____" ] ], [ [ "stocks = ['INTC','MSFT','AAPL']", "_____no_output_____" ], [ "%%sql -pb\nSELECT SYMBOL, DAY(TX_DATE), VOLUME/1000000 FROM FOLDING.STOCK_HISTORY\nWHERE SYMBOL IN (:stocks) AND WEEK(TX_DATE) = 45\nORDER BY DAY(TX_DATE) ASC", "_____no_output_____" ] ], [ [ "#### Show Stocks that Represent at least 3% of the Total Purchases during Week 45\n\n**CP4D - Db2, Skytap - Informix**", "_____no_output_____" ] ], [ [ "%%sql -pie\nWITH WEEK45(SYMBOL, PURCHASES) AS (\n SELECT SYMBOL, SUM(VOLUME * CLOSE) FROM FOLDING.STOCK_HISTORY\n WHERE WEEK(TX_DATE) = 45 AND SYMBOL <> 'DJIA'\n GROUP BY SYMBOL\n),\nALL45(TOTAL) AS (\n SELECT SUM(PURCHASES) * .03 FROM WEEK45\n)\nSELECT SYMBOL, PURCHASES FROM WEEK45, ALL45\nWHERE PURCHASES > TOTAL\nORDER BY SYMBOL, PURCHASES", "_____no_output_____" ] ], [ [ "### Stock Transaction Table\n#### Show Transactions by Customer\nThe next two examples use data folded together from three different data sources, representing three different trading organizations, to create a combined view of a single customer's stock trades. 
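\n\nBecause the Db2 magic can return its result as a pandas DataFrame (the same `df = %sql ...` pattern used in the moving-average cell above), you can also summarize a customer's trades locally. A minimal sketch, assuming only the CUSTID, SYMBOL and QUANTITY columns shown in the query below:\n<pre>\ndf = %sql SELECT SYMBOL, QUANTITY FROM FOLDING.STOCK_TRANSACTIONS_DV WHERE CUSTID = '107196'\ndf.groupby('SYMBOL')['QUANTITY'].sum().sort_values()  # net shares bought or sold per symbol\n</pre>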
\n\n**AWS - Db2, Azure - EDB (Postgres), Azure - Db2**", "_____no_output_____" ] ], [ [ "%%sql -a\nSELECT * FROM FOLDING.STOCK_TRANSACTIONS_DV\n WHERE CUSTID = '107196'\n FETCH FIRST 10 ROWS ONLY", "_____no_output_____" ] ], [ [ "#### Bought/Sold Amounts of the Top 5 Stocks \n**AWS - Db2, Azure - EDB (Postgres), Azure - Db2**", "_____no_output_____" ] ], [ [ "%%sql -a\nWITH BOUGHT(SYMBOL, AMOUNT) AS\n (\n SELECT SYMBOL, SUM(QUANTITY) FROM FOLDING.STOCK_TRANSACTIONS_DV\n WHERE QUANTITY > 0\n GROUP BY SYMBOL\n ),\nSOLD(SYMBOL, AMOUNT) AS\n (\n SELECT SYMBOL, -SUM(QUANTITY) FROM FOLDING.STOCK_TRANSACTIONS_DV\n WHERE QUANTITY < 0\n GROUP BY SYMBOL\n )\nSELECT B.SYMBOL, B.AMOUNT AS BOUGHT, S.AMOUNT AS SOLD\nFROM BOUGHT B, SOLD S\nWHERE B.SYMBOL = S.SYMBOL\nORDER BY B.AMOUNT DESC\nFETCH FIRST 5 ROWS ONLY", "_____no_output_____" ] ], [ [ "### Customer Accounts\n#### Show the Top 5 Customer Balances\nThe next two examples use data folded from systems running on AWS and Azure.\n\n**AWS - Db2, Azure - EDB (Postgres), Azure - Db2**", "_____no_output_____" ] ], [ [ "%%sql -a\nSELECT CUSTID, BALANCE FROM FOLDING.ACCOUNTS_DV\nORDER BY BALANCE DESC\nFETCH FIRST 5 ROWS ONLY", "_____no_output_____" ] ], [ [ "#### Show the Bottom 5 Customer Balances\n**AWS - Db2, Azure - EDB (Postgres), Azure - Db2**", "_____no_output_____" ] ], [ [ "%%sql -a\nSELECT CUSTID, BALANCE FROM FOLDING.ACCOUNTS_DV\nORDER BY BALANCE ASC\nFETCH FIRST 5 ROWS ONLY", "_____no_output_____" ] ], [ [ "### Selecting Customer Information from MongoDB\nThe MongoDB database (running on premises) has customer information in a document format. In order to materialize the document data as relational tables, a total of four virtual tables are generated. The following query shows the tables that are generated for the Customer document collection.", "_____no_output_____" ] ], [ [ "%sql LIST TABLES FOR SCHEMA MONGO_ONPREM", "_____no_output_____" ] ], [ [ "The tables are all connected through the CUSTOMERID field, which is based on the generated _id of the main CUSTOMER collection. In order to reassemble these tables into a document, we must join them using this unique identifier. An example of the contents of the CUSTOMER_CONTACT table is shown below.", "_____no_output_____" ] ], [ [ "%sql -a SELECT * FROM MONGO_ONPREM.CUSTOMER_CONTACT FETCH FIRST 5 ROWS ONLY", "_____no_output_____" ] ], [ [ "A full document record is shown in the following SQL statement, which joins all of the tables together.", "_____no_output_____" ] ], [ [ "%%sql -a\nSELECT C.CUSTOMERID AS CUSTID, \n CI.FIRSTNAME, CI.LASTNAME, CI.BIRTHDATE,\n CC.CITY, CC.ZIPCODE, CC.EMAIL, CC.PHONE, CC.STREET, CC.STATE,\n CP.CARD_TYPE, CP.CARD_NO\nFROM MONGO_ONPREM.CUSTOMER C, MONGO_ONPREM.CUSTOMER_CONTACT CC, \n MONGO_ONPREM.CUSTOMER_IDENTITY CI, MONGO_ONPREM.CUSTOMER_PAYMENT CP\nWHERE CC.CUSTOMER_ID = C.\"_ID\" AND\n CI.CUSTOMER_ID = C.\"_ID\" AND\n CP.CUSTOMER_ID = C.\"_ID\"\nFETCH FIRST 3 ROWS ONLY", "_____no_output_____" ] ], [ [ "### Querying All Virtualized Data\nIn this final example we use data from each data source to answer a complex business question. 
\"What are the names of the customers in Ohio who bought the most during the highest trading day of the year (based on the Dow Jones Industrial Index)?\" \n\n**AWS Db2, Azure EDB, Azure Db2, Skytap MongoDB, CP4D Db2Wh, Skytap Informix**", "_____no_output_____" ] ], [ [ "%%sql\nWITH MAX_VOLUME(AMOUNT) AS (\n SELECT MAX(VOLUME) FROM FOLDING.STOCK_HISTORY\n WHERE SYMBOL = 'DJIA'\n),\nHIGHDATE(TX_DATE) AS (\n SELECT TX_DATE FROM FOLDING.STOCK_HISTORY, MAX_VOLUME M\n WHERE SYMBOL = 'DJIA' AND VOLUME = M.AMOUNT\n),\nCUSTOMERS_IN_OHIO(CUSTID) AS (\n SELECT C.CUSTID FROM TRADING.CUSTOMERS C \n WHERE C.STATE = 'OH'\n),\nTOTAL_BUY(CUSTID,TOTAL) AS (\n SELECT C.CUSTID, SUM(SH.QUANTITY * SH.PRICE) \n FROM CUSTOMERS_IN_OHIO C, FOLDING.STOCK_TRANSACTIONS_DV SH, HIGHDATE HD\n WHERE SH.CUSTID = C.CUSTID AND\n SH.TX_DATE = HD.TX_DATE AND \n QUANTITY > 0 \n GROUP BY C.CUSTID\n)\n SELECT LASTNAME, T.TOTAL\n FROM MONGO_ONPREM.CUSTOMER_IDENTITY CI, MONGO_ONPREM.CUSTOMER C, TOTAL_BUY T\n WHERE CI.CUSTOMER_ID = C.\"_ID\" AND C.CUSTOMERID = CUSTID\n ORDER BY TOTAL DESC", "_____no_output_____" ] ], [ [ "### Seeing where your Virtualized Data is coming from\nYou may eventually work with a complex Data Virtualization system. As an administrator or a Data Scientist you may need to understand where data is coming from. \n\nFortunately, the Data Virtualization engine is based on Db2. It includes the same catalog of information as Db2 does, with some additional features. If you want to work backwards and understand where each of your virtualized tables comes from, the information is included in the **SYSCAT.TABOPTIONS** catalog table. ", "_____no_output_____" ] ], [ [ "%%sql \nSELECT TABSCHEMA, TABNAME, SETTING\n  FROM SYSCAT.TABOPTIONS\n  WHERE OPTION = 'SOURCELIST' \n  AND TABSCHEMA <> 'QPLEXSYS';", "_____no_output_____" ], [ "%%sql \nSELECT * from SYSCAT.TABOPTIONS;", "_____no_output_____" ] ], [ [ "The table includes more information than you need to answer the question of where your data is coming from. The query below only shows the rows that contain the source information ('SOURCELIST'). Notice that tables that have been folded together from several tables include the connection information for each data source, separated by semicolons. ", "_____no_output_____" ] ], [ [ "%%sql \nSELECT TABSCHEMA, TABNAME, SETTING\n  FROM SYSCAT.TABOPTIONS\n  WHERE OPTION = 'SOURCELIST' \n  AND TABSCHEMA <> 'QPLEXSYS';", "_____no_output_____" ], [ "%%sql \nSELECT TABSCHEMA, TABNAME, SETTING\n  FROM SYSCAT.TABOPTIONS\n  WHERE TABSCHEMA = 'DVDEMO';", "_____no_output_____" ] ], [ [ "In this last example, you can search for any virtualized data coming from a Postgres database by searching for **SETTING LIKE '%POST%'**.", "_____no_output_____" ] ], [ [ "%%sql\nSELECT TABSCHEMA, TABNAME, SETTING\n  FROM SYSCAT.TABOPTIONS\n  WHERE OPTION = 'SOURCELIST' \n  AND SETTING LIKE '%POST%'\n  AND TABSCHEMA <> 'QPLEXSYS';", "_____no_output_____" ] ], [ [ "What is missing is additional detail for each connection. For example, all we can see in the table above is a connection identifier. You can find that detail in another table: **QPLEXSYS.LISTRDBC**. In the last cell, you can see that CID DB210113 is included in the STOCK_TRANSACTIONS virtual table. You can find the details of that copy of Db2 by running the next cell. ", "_____no_output_____" ] ], [ [ "%%sql\nSELECT CID, USR, SRCTYPE, SRCHOSTNAME, SRCPORT, DBNAME, IS_DOCKER FROM QPLEXSYS.LISTRDBC;", "_____no_output_____" ] ], [ [ "## Advanced Data Virtualization \nNow that you have seen how powerful and easy it is to gain insight from your existing virtualized data, you can learn more about how to do advanced data virtualization. You will learn how to join different remote tables together to create a new virtual table and how to capture complex SQL into views.\n\n\n### Joining Tables Together\nThe virtualized tables below come from different data sources on different systems. We can combine them into a single virtual table. \n\n* Select **My virtualized data** from the Data Virtualization menu\n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.13.20 Menu My virtual data.png">\n \n* Enter **Stock** in the find field and hit enter\n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.39.43 Find STOCK.png">\n \n* Select table **STOCK_TRANSACTIONS** in the **FOLDING** schema\n* Select table **STOCK_SYMBOLS** in the **DVDEMO** schema\n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.40.18 Two STOCK seleted.png">\n \n* Click **Join View**\n* In table STOCK_SYMBOLS: deselect **SYMBOL**\n* In table STOCK_TRANSACTIONS: deselect **TX_NO** \n* Click **STOCK_TRANSACTIONS.SYMBOL** and drag to **STOCK_SYMBOLS.SYMBOL**\n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.41.07 Joining Tables.png">\n \n* Click **Preview** to check that your join is working. Each row should now contain the stock symbol and the long stock name.\n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.41.55 New Join Preview.png">\n \n* Click **X** to close the preview window\n* Click **JOIN**\n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.42.20 Join.png">\n \n* Type the view name **TRANSACTIONS_FULLNAME**\n* Don't change the default schema. It corresponds to your LABDATAENGINEER userid. \n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.43.10 View Name.png">\n \n* Click **NEXT**\n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.43.30 Next.png">\n \n* Select the **Data Virtualization Hands on Lab** project. \n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.43.58 Assign to Project.png">\n \n* Click **CREATE VIEW**. \n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.44.06 Create view.png">\n \n You see the successful Join View window.\n \n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.44.23 Join view created.png">\n \n \n* Click **View my virtualized data**\n* Click the ellipsis menu beside **TRANSACTIONS_FULLNAME**\n* Click **Preview**\n\nYou can now join virtualized tables together to combine them into new virtualized tables. Now that you know how to perform simple table joins, you can learn how to combine multiple data sources and virtual tables using the powerful SQL query engine that is part of IBM Cloud Pak for Data Virtualization.", "_____no_output_____" ], [ "### Using Queries to Answer Complex Business Questions\nThe IBM Cloud Pak for Data Virtualization Administrator has set up more complex data from multiple sources for the next steps. 
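\n\nIf you would rather do the lookup in a single step, you can join the two catalog tables yourself. The sketch below is an assumption on our part; it relies on the CID value being embedded in the SOURCELIST SETTING string, as described above:\n<pre>\n%%sql\nSELECT T.TABSCHEMA, T.TABNAME, R.SRCTYPE, R.SRCHOSTNAME, R.SRCPORT, R.DBNAME\n  FROM SYSCAT.TABOPTIONS T, QPLEXSYS.LISTRDBC R\n WHERE T.OPTION = 'SOURCELIST'\n   AND T.SETTING LIKE '%' || R.CID || '%';\n</pre>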
The administrator has also given you access to this virtualized data. You may have noticed this in previous steps. \n1. Select **My virtualized data** from the Data Virtualization menu. All of these virtualized tables look and act like normal Db2 tables. \n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.13.20 Menu My virtual data.png">\n \n2. Click **Preview** for any of the tables to see what they contain. \n\nThe virtualized tables in the **FOLDING** schema have all been created by combining the same tables from different data sources. Folding isn't restricted to a single data source, as it was in the simple example you just completed.\n\nThe virtualized tables in the **TRADING** schema are views over complex queries that were used to combine data from multiple data sources to answer specific business questions. \n\n3. Select **SQL Editor** from the Data Virtualization menu.\n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.13.33 Menu SQL editor.png">\n\n4. Select **Script Library**\n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.45.02 Script Library.png">\n\n5. Search for **OHIO**\n6. Select and expand the **OHIO Customer** query\n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.45.47 Ohio Script.png">\n\n7. Click the **Open a script to edit** icon to open the script in the SQL Editor. **Note** that if you cannot open the script, you may have to refresh your browser or collapse and expand the script details section before the icon is active.\n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.45.54 Open Script.png">\n\n8. Click **Run All**\n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.46.21 Run Ohio Script.png">\n\n\nThis script is a complex SQL join query that uses data from all the virtualized data sources you explored in the first steps of this lab. While the SQL looks complex, the author of the query did not have to be aware that the data was coming from multiple sources. Everything used in this query looks like it comes from a single database, not eight different data sources across eight different systems on premises or in the Cloud. ", "_____no_output_____" ], [ "### Making Complex SQL Simple to Consume\nYou can easily make this complex query easy for a user to consume. Instead of sharing this query with other users, you can wrap the query into a view that looks and acts like a simple table. \n1. Enter **CREATE VIEW MYOHIOQUERY AS** in the SQL Editor at the first line below the comment and before the **WITH** clause\n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.46.54 Add CREATE VIEW.png">\n\n2. Click **Run all**\n3. Click **+** to **Add a new script**\n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.48.28 Add to script.png">\n \n4. Click **Blank**\n5. Enter **SELECT * FROM MYOHIOQUERY;**\n6. Click **Run all**\n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.48.57 Run Ohio View.png">\n \n\nNow you have a very simple virtualized table that is pulling data from eight different data sources, combining the data together to resolve a complex business problem. In the next step you will share your new virtualized data with a user.", "_____no_output_____" ], [ "### Sharing Virtualized Tables\n1. 
Select **My virtualized data** from the Data Virtualization menu.\n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.13.20 Menu My virtual data.png">\n \n2. Click the ellipsis (...) menu to the right of the **MYOHIOQUERY** virtualized table\n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.49.30 Select MYOHIOQUERY.png">\n \n3. Select **Manage Access** from the ellipsis menu\n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.49.46 Virtualized Data Menu.png">\n \n4. Click **Grant access**\n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.50.07 Grant access.png">\n \n5. Select the **LABUSERx** id associated with your lab. For example, if you are LABDATAENGINEER5, then select LABUSER5.\n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.52.42 Grant access to specific user.png">\n \n6. Click **Add**\n\n <img src="https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/media/11.50.28 Add.png">\n \n\nYou should now see that your **LABUSER** id has view-only access to the new virtualized table. Next, switch to your LABUSERx id to check that you can see the data you have just granted access to.\n\n7. Click the user icon at the very top right of the console\n8. Click **Log out**\n9. Sign in using the LABUSER id specified by your lab instructor\n10. Click the three-bar menu at the top left of the IBM Cloud Pak for Data console\n11. Select **Data Virtualization**\n\nYou should see the **MYOHIOQUERY** with the schema from your engineer userid in the list of virtualized data.\n\n12. Make a note of the schema of the MYOHIOQUERY in your list of virtualized tables. It starts with **USER**.\n13. Select the **SQL Editor** from the Data Virtualization menu\n14. Click **Blank** to open a new SQL Editor window\n15. Enter **SELECT * FROM USERxxxx.MYOHIOQUERY** where xxxx is the user number of your engineer user. The view created by your engineer user was created in their default schema. \n16. Click **Run all**\n17. Add the following to your query: **WHERE TOTAL > 3000 ORDER BY TOTAL**\n18. Click **</>** to format the query so it is easier to read\n19. Click **Run all**\n\nYou can see how you have just made a very complex data set extremely easy for a data user to consume. They don't have to know how to connect to multiple data sources or how to combine the data using complex SQL. You can hide that complexity while ensuring only the right user has access to the right data. \n\nIn the next steps you will learn how to access virtualized data from outside of IBM Cloud Pak for Data.", "_____no_output_____" ], [ "### Allowing Users to Access Virtualized Data with Analytic Tools\nIn the next set of steps you connect to virtualized data from this notebook using your **LABUSER** userid. \n\nJust as you connected to IBM Cloud Pak for Data virtualized data using your LABDATAENGINEER userid, you can connect using your LABUSER userid. \n\nWe are going to connect to the IBM Cloud Pak for Data Virtualization database in exactly the same way we connected using your LABDATAENGINEER userid. However, you need to change the detailed connection information. Each user has their own unique userid and password to connect to the service. This ensures that no matter what tool you use to connect to virtualized data, you are always in control over who can access specific virtualized data. \n\n1. 
Click the user icon at the top right of the IBM Cloud Pak for Data console to confirm that you are using your **LABUSER** id\n2. Click **Connection Details** in the Data Virtualization menu\n3. Click **Without SSL**\n4. Copy the **User ID** by highlighting it with your mouse, right-clicking and selecting **Copy**\n5. Paste the **User ID** into the cell below where **user =** appears, between the quotation marks \n6. Click **Service Settings** in the Data Virtualization menu\n7. Show the password, then highlight it and copy it using the right-click menu\n8. Paste the **password** into the cell below, between the quotation marks, using the right-click paste.\n9. Run the cell below to connect to the Data Virtualization database. ", "_____no_output_____" ], [ "#### Connecting a USER to Data Virtualization SQL Engine", "_____no_output_____" ] ], [ [ "# Connect to the IBM Cloud Pak for Data Virtualization Database from inside CPD\ndatabase = 'bigsql'\nuser = 'userxxxx'\npassword = 'xxxxxxxxxxxxxxxxxx'\nhost = 'openshift-skytap-nfs-lb.ibm.com'\nport = '32080'\n\n%sql CONNECT TO {database} USER {user} USING {password} HOST {host} PORT {port}", "_____no_output_____" ] ], [ [ "Now you can try out the view that was created by the LABDATAENGINEER userid. \n\nSubstitute the **xxxx** with the schema used by your ***LABDATAENGINEERx*** user in the next two cells before you run them.", "_____no_output_____" ] ], [ [ "%sql SELECT * FROM USERxxxx.MYOHIOQUERY WHERE TOTAL > 3000 ORDER BY TOTAL;", "_____no_output_____" ] ], [ [ "Only the LABDATAENGINEER virtualized tables that the LABUSER has been authorized to see are available. Try running the next cell. You should receive an error that the current user does not have the required authorization or privilege to perform the operation.", "_____no_output_____" ] ], [ [ "%sql SELECT * FROM USERxxxx.DISCOVERFOLD;", "_____no_output_____" ] ], [ [ "### Next Steps:\nNow you can use IBM Cloud Pak for Data to make even complex data and queries from different data sources, on premises and across a multi-vendor Cloud, look like simple tables in a single database. You are ready for some more advanced labs. \n\n1. Use Db2 SQL and Jupyter Notebooks to Analyze Virtualized Data\n * Build simple to complex queries to answer important business questions using the virtualized data available to you in IBM Cloud Pak for Data\n * See how you can transform the queries into simple tables available to all your users\n2. Use Open RESTful Services to connect to IBM Cloud Pak for Data Virtualization \n * Everything you can do in the IBM Cloud Pak for Data User Interface is accessible through Open RESTful APIs\n * Learn how to automate and script your management of Data Virtualization using RESTful APIs\n * Learn how to accelerate application development by accessing virtualized data through RESTful APIs", "_____no_output_____" ], [ "## Automating Data Virtualization Setup and Management through REST", "_____no_output_____" ], [ "The IBM Cloud Pak for Data Console is only one way you can interact with the Virtualization service. IBM Cloud Pak for Data is built on a set of microservices that communicate with each other and the Console user interface using RESTful APIs. You can use these services to automate anything you can do through the user interface.\n\nThis Jupyter Notebook contains examples of how to use the Open APIs to retrieve information from the virtualization service, how to run SQL statements directly against the service through REST and how to provide authorization to objects. 
This provides a way to write your own script to automate the setup and configuration of the virtualization service. ", "_____no_output_____" ], [ "The next part of the lab relies on a set of base classes to help you interact with the RESTful Services API for IBM Cloud Pak for Data Virtualization. You can access this library on GitHub. The commands below download the library and run it as part of this notebook.\n<pre>\n&#37;run CPDDVRestClass.ipynb\n</pre>\nThe cell below loads the RESTful Service classes and methods directly from GitHub. Note that it will take a few seconds for the extension to load, so you should generally wait until the \"Db2 Extensions Loaded\" message is displayed in your notebook. \n1. Click the cell below\n2. Click **Run**", "_____no_output_____" ] ], [ [ "!wget -O CPDDVRestClass.ipynb https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVLAB/master/CPDDVRestClass.ipynb\n%run CPDDVRestClass.ipynb", "_____no_output_____" ] ], [ [ "### The Db2 Class\nThe CPDDVRestClass.ipynb notebook includes a Python class called Db2 that encapsulates the REST API calls used to connect to the IBM Cloud Pak for Data Virtualization service. \n\nTo access the service you need to first authenticate with the service and create a reusable token that we can use for each call to the service. This ensures that we don't have to provide a userid and password each time we run a command, while keeping each call secure. \n\nEach request is constructed of several parts. First, the URL and the API version identify how to connect to the service. Second, the REST service endpoint identifies the request and its options, for example '/metrics/applications/connections/current/list'. Finally, some complex requests also include a JSON payload. For example, running SQL includes a JSON object that identifies the script, statement delimiters, the maximum number of rows in the results set, as well as what to do if a statement fails.\n\nYou can find this class and use it for your own notebooks on GitHub. Have a look at how the class encapsulates the API calls by clicking on the following link: https://github.com/Db2-DTE-POC/CPDDVLAB/blob/master/CPDDVRestClass.ipynb", "_____no_output_____" ], [ "### Example Connections\nTo connect to the Data Virtualization service you need to provide the URL, the API version (v1), and the console user name and password. For this lab we are assuming that the following values are used for the connection:\n* Userid: LABDATAENGINEERx\n* Password: password\n\nSubstitute your assigned LABDATAENGINEER userid below along with your password and run the cell. It will generate a bearer token that is used in the following steps to authenticate your use of the API. ", "_____no_output_____" ], [ "#### Connecting to Data Virtualization API Service", "_____no_output_____" ] ], [ [ "# Set the service URL to connect from inside the ICPD Cluster\nConsole = 'https://openshift-skytap-nfs-lb.ibm.com'\n\n# Connect to the Data Virtualization service\nuser = 'labdataengineerx'\npassword = 'password'\n\n# Set up the required connection\ndatabaseAPI = Db2(Console)\napi = '/v1'\ndatabaseAPI.authenticate(api, user, password)\ndatabase = Console", "_____no_output_____" ] ], [ [ "#### Data Sources\nThe following call (getDataSources) uses an SQL call in the Db2 class to run the same SQL statement you saw earlier in the lab. 
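\n\nIf you prefer to issue that SQL yourself, the runScript helper described later in this notebook can run it directly. The exact query that getDataSources wraps is an implementation detail of the class, but a sketch against the QPLEXSYS.LISTRDBC catalog table you used earlier would look like this:\n<pre>\ndatabaseAPI.displayResults(databaseAPI.runScript('SELECT CID, SRCTYPE, SRCHOSTNAME, SRCPORT, DBNAME FROM QPLEXSYS.LISTRDBC;'))\n</pre>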
", "_____no_output_____" ] ], [ [ "# Display the Available Data Sources already configured\njson = databaseAPI.getDataSources()\ndatabaseAPI.displayResults(json)", "_____no_output_____" ] ], [ [ "#### Virtualized Data\nThis call retrieves all of the virtualized data available to the role of Data Engineer. It uses a direct RESTful service call and does not use SQL. The service returns a JSON result set that is converted into a Python Pandas dataframe. Dataframes are very useful in being able to manipulate tables of data in Python. If there is a problem with the call, the error code is displayed.", "_____no_output_____" ] ], [ [ "# Display the Virtualized Assets Avalable to Engineers\nroles = ['DV_ENGINEER']\nfor role in roles:\n r = databaseAPI.getRole(role)\n if (databaseAPI.getStatusCode(r)==200):\n json = databaseAPI.getJSON(r)\n df = pd.DataFrame(json_normalize(json['objects']))\n display(df)\n else:\n print(databaseAPI.getStatusCode(r)) ", "_____no_output_____" ] ], [ [ "#### Virtualized Tables and Views\nThis call retrieves all the virtualized tables and view available to the userid that you use to connect to the service. In this example the whole call is included in the DB2 class library and returned as a complete Dataframe ready for display or to be used for analysis or administration.", "_____no_output_____" ] ], [ [ "### Display Virtualized Tables and Views \ndisplay(databaseAPI.getVirtualizedTablesDF())\ndisplay(databaseAPI.getVirtualizedViewsDF())", "_____no_output_____" ] ], [ [ "#### Get a list of the IBM Cloud Pak for Data Users\nThis example returns a list of all the users of the IBM Cloud Pak for Data system. It only displays three colunns in the Dataframe, but the list of all the available columns is als printed out. Try changing the code to display other columns.", "_____no_output_____" ] ], [ [ "# Get the list of CPD Users\nr = databaseAPI.getUsers()\nif (databaseAPI.getStatusCode(r)==200):\n json = databaseAPI.getJSON(r)\n df = pd.DataFrame(json_normalize(json))\n print(', '.join(list(df))) # List available column names\n display(df[['uid','username','displayName']])\nelse:\n print(databaseAPI.getStatusCode(r))", "_____no_output_____" ] ], [ [ "#### Get the list of available schemas in the DV Database\nDo not forget that the Data Virtualization engine supports the same function as a regular Db2 database. So you can also look at standard Db2 objects like schemas.", "_____no_output_____" ] ], [ [ "# Get the list of available schemas in the DV Database\nr = databaseAPI.getSchemas()\nif (databaseAPI.getStatusCode(r)==200):\n json = databaseAPI.getJSON(r)\n df = pd.DataFrame(json_normalize(json['resources']))\n print(', '.join(list(df)))\n display(df[['name']].head(10))\nelse:\n print(databaseAPI.getStatusCode(r)) ", "_____no_output_____" ] ], [ [ "#### Object Search\nFuzzy object search is also available. The call is a bit more complex. If you look at the routine in the DB2 class it posts a RESTful service call that includes a JSON payload. The payload includes the details of the search request. 
", "_____no_output_____" ] ], [ [ "# Search for tables across all schemas that match simple search critera \n# Display the first 100\n# Switch between searching tables or views\nobject = 'view'\n# object = 'table'\nr = databaseAPI.postSearchObjects(object,\"TRADING\",10,'false','false')\nif (databaseAPI.getStatusCode(r)==200):\n json = databaseAPI.getJSON(r)\n df = pd.DataFrame(json_normalize(json))\n print('Columns:')\n print(', '.join(list(df)))\n display(df[[object+'_name']].head(100))\nelse:\n print(\"RC: \"+str(databaseAPI.getStatusCode(r)))", "_____no_output_____" ] ], [ [ "#### Run SQL through the SQL Editor Service\nYou can also use the SQL Editor service to run your own SQL. Statements are submitted to the editor. Your code then needs to poll the editor service until the script is complete. Fortunately you can use the DB2 class included in this lab so that it becomes a very simple Python call. The **runScript** routine runs the SQL and the **displayResults** routine formats the returned JSON.", "_____no_output_____" ] ], [ [ "sqlText = \\\n'''\nWITH MAX_VOLUME(AMOUNT) AS (\n SELECT MAX(VOLUME) FROM FOLDING.STOCK_HISTORY\n WHERE SYMBOL = 'DJIA'\n),\nHIGHDATE(TX_DATE) AS (\n SELECT TX_DATE FROM FOLDING.STOCK_HISTORY, MAX_VOLUME M\n WHERE SYMBOL = 'DJIA' AND VOLUME = M.AMOUNT\n),\nCUSTOMERS_IN_OHIO(CUSTID) AS (\n SELECT C.CUSTID FROM TRADING.CUSTOMERS C \n WHERE C.STATE = 'OH'\n),\nTOTAL_BUY(CUSTID,TOTAL) AS (\n SELECT C.CUSTID, SUM(SH.QUANTITY * SH.PRICE) \n FROM CUSTOMERS_IN_OHIO C, FOLDING.STOCK_TRANSACTIONS SH, HIGHDATE HD\n WHERE SH.CUSTID = C.CUSTID AND\n SH.TX_DATE = HD.TX_DATE AND \n QUANTITY > 0 \n GROUP BY C.CUSTID\n)\n SELECT LASTNAME, T.TOTAL\n FROM MONGO_ONPREM.CUSTOMER_IDENTITY CI, MONGO_ONPREM.CUSTOMER C, TOTAL_BUY T\n WHERE CI.CUSTOMER_ID = C.\"_ID\" AND C.CUSTOMERID = CUSTID\n ORDER BY TOTAL DESC\nFETCH FIRST 5 ROWS ONLY;\n'''\n\ndatabaseAPI.displayResults(databaseAPI.runScript(sqlText))", "_____no_output_____" ] ], [ [ "#### Run scripts of SQL Statements repeatedly through the SQL Editor Service\nThe runScript routine can contain more than one statement. The next example runs a scipt with eight SQL statements multple times. ", "_____no_output_____" ] ], [ [ "repeat = 3\nsqlText = \\\n'''\nSELECT * FROM TRADING.MOVING_AVERAGE;\nSELECT * FROM TRADING.VOLUME;\nSELECT * FROM TRADING.THREEPERCENT;\nSELECT * FROM TRADING.TRANSBYCUSTOMER;\nSELECT * FROM TRADING.TOPBOUGHTSOLD;\nSELECT * FROM TRADING.TOPFIVE;\nSELECT * FROM TRADING.BOTTOMFIVE;\nSELECT * FROM TRADING.OHIO;\n'''\n\nfor x in range(0, repeat):\n print('Repetition number: '+str(x))\n databaseAPI.displayResults(databaseAPI.runScript(sqlText))\nprint('done')", "_____no_output_____" ] ], [ [ "### What's next\nif you are inteested in finding out more about using RESTful services to work with Db2, check out this DZone article: https://dzone.com/articles/db2-dte-pocdb2dmc. The article also includes a link to a complete hands-on lab for Db2 and the Db2 Data Management Console. In it you can find out more about using REST and Db2 together. ", "_____no_output_____" ], [ "#### Credits: IBM 2019, Peter Kohlmann [[email protected]]", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
4a3ab25761dfe04d66749c9d3f211ebce2e56549
47,698
ipynb
Jupyter Notebook
Chapter03/Spam detector.ipynb
tavousi/Python-Artificial-Intelligence-Projects-for-Beginners
4b9bda7e4d79a5aa47bc81c64fe2a86c82eb27ec
[ "MIT" ]
223
2018-08-31T00:00:18.000Z
2022-03-20T17:12:43.000Z
Chapter03/Spam detector.ipynb
tavousi/Python-Artificial-Intelligence-Projects-for-Beginners
4b9bda7e4d79a5aa47bc81c64fe2a86c82eb27ec
[ "MIT" ]
85
2021-03-18T13:00:26.000Z
2021-04-09T03:46:55.000Z
Chapter03/Spam detector.ipynb
tavousi/Python-Artificial-Intelligence-Projects-for-Beginners
4b9bda7e4d79a5aa47bc81c64fe2a86c82eb27ec
[ "MIT" ]
153
2018-08-01T07:15:17.000Z
2022-03-29T09:39:34.000Z
23.992958
283
0.402113
[ [ [ "import pandas as pd\nd = pd.read_csv(\"YouTube-Spam-Collection-v1/Youtube01-Psy.csv\")", "_____no_output_____" ], [ "d.tail()", "_____no_output_____" ], [ "len(d.query('CLASS == 1'))", "_____no_output_____" ], [ "len(d.query('CLASS == 0'))", "_____no_output_____" ], [ "len(d)", "_____no_output_____" ], [ "from sklearn.feature_extraction.text import CountVectorizer\nvectorizer = CountVectorizer()", "_____no_output_____" ], [ "dvec = vectorizer.fit_transform(d['CONTENT'])", "_____no_output_____" ], [ "dvec", "_____no_output_____" ], [ "analyze = vectorizer.build_analyzer()", "_____no_output_____" ], [ "print(d['CONTENT'][349])\nanalyze(d['CONTENT'][349])", "The first billion viewed this because they thought it was really cool, the other billion and a half came to see how stupid the first billion were...\n" ], [ "vectorizer.get_feature_names()", "_____no_output_____" ], [ "dshuf = d.sample(frac=1)", "_____no_output_____" ], [ "d_train = dshuf[:300]\nd_test = dshuf[300:]\nd_train_att = vectorizer.fit_transform(d_train['CONTENT']) # fit bag-of-words on training set\nd_test_att = vectorizer.transform(d_test['CONTENT']) # reuse on testing set\nd_train_label = d_train['CLASS']\nd_test_label = d_test['CLASS']", "_____no_output_____" ], [ "d_train_att", "_____no_output_____" ], [ "d_test_att", "_____no_output_____" ], [ "from sklearn.ensemble import RandomForestClassifier\nclf = RandomForestClassifier(n_estimators=80)", "_____no_output_____" ], [ "clf.fit(d_train_att, d_train_label)", "_____no_output_____" ], [ "clf.score(d_test_att, d_test_label)", "_____no_output_____" ], [ "from sklearn.metrics import confusion_matrix\npred_labels = clf.predict(d_test_att)\nconfusion_matrix(d_test_label, pred_labels)", "_____no_output_____" ], [ "from sklearn.model_selection import cross_val_score\nscores = cross_val_score(clf, d_train_att, d_train_label, cv=5)\n# show average score and +/- two standard deviations away (covering 95% of scores)\nprint(\"Accuracy: %0.2f (+/- %0.2f)\" % (scores.mean(), scores.std() * 2))", "Accuracy: 0.95 (+/- 0.07)\n" ], [ "# load all datasets and combine them\nd = pd.concat([pd.read_csv(\"YouTube-Spam-Collection-v1/Youtube01-Psy.csv\"),\n pd.read_csv(\"YouTube-Spam-Collection-v1/Youtube02-KatyPerry.csv\"),\n pd.read_csv(\"YouTube-Spam-Collection-v1/Youtube03-LMFAO.csv\"),\n pd.read_csv(\"YouTube-Spam-Collection-v1/Youtube04-Eminem.csv\"),\n pd.read_csv(\"YouTube-Spam-Collection-v1/Youtube05-Shakira.csv\")])", "_____no_output_____" ], [ "len(d)", "_____no_output_____" ], [ "len(d.query('CLASS == 1'))", "_____no_output_____" ], [ "len(d.query('CLASS == 0'))", "_____no_output_____" ], [ "dshuf = d.sample(frac=1)\nd_content = dshuf['CONTENT']\nd_label = dshuf['CLASS']", "_____no_output_____" ], [ "# set up a pipeline\nfrom sklearn.pipeline import Pipeline, make_pipeline\npipeline = Pipeline([\n ('bag-of-words', CountVectorizer()),\n ('random forest', RandomForestClassifier()),\n])\npipeline", "_____no_output_____" ], [ "# or: pipeline = make_pipeline(CountVectorizer(), RandomForestClassifier())\nmake_pipeline(CountVectorizer(), RandomForestClassifier())", "_____no_output_____" ], [ "pipeline.fit(d_content[:1500],d_label[:1500])", "_____no_output_____" ], [ "pipeline.score(d_content[1500:], d_label[1500:])", "_____no_output_____" ], [ "pipeline.predict([\"what a neat video!\"])", "_____no_output_____" ], [ "pipeline.predict([\"plz subscribe to my channel\"])", "_____no_output_____" ], [ "scores = cross_val_score(pipeline, d_content, d_label, cv=5)\nprint(\"Accuracy: %0.2f (+/- 
%0.2f)\" % (scores.mean(), scores.std() * 2))", "Accuracy: 0.94 (+/- 0.03)\n" ], [ "# add tfidf\nfrom sklearn.feature_extraction.text import TfidfTransformer\npipeline2 = make_pipeline(CountVectorizer(),\n TfidfTransformer(norm=None),\n RandomForestClassifier())", "_____no_output_____" ], [ "scores = cross_val_score(pipeline2, d_content, d_label, cv=5)\nprint(\"Accuracy: %0.2f (+/- %0.2f)\" % (scores.mean(), scores.std() * 2))", "Accuracy: 0.94 (+/- 0.03)\n" ], [ "pipeline2.steps", "_____no_output_____" ], [ "# parameter search\nparameters = {\n 'countvectorizer__max_features': (None, 1000, 2000),\n 'countvectorizer__ngram_range': ((1, 1), (1, 2)), # unigrams or bigrams\n 'countvectorizer__stop_words': ('english', None),\n 'tfidftransformer__use_idf': (True, False), # effectively turn on/off tfidf\n 'randomforestclassifier__n_estimators': (20, 50, 100)\n}\nfrom sklearn.model_selection import GridSearchCV\ngrid_search = GridSearchCV(pipeline2, parameters, n_jobs=-1, verbose=1)", "_____no_output_____" ], [ "grid_search.fit(d_content, d_label)", "Fitting 3 folds for each of 72 candidates, totalling 216 fits\n" ], [ "print(\"Best score: %0.3f\" % grid_search.best_score_)\nprint(\"Best parameters set:\")\nbest_parameters = grid_search.best_estimator_.get_params()\nfor param_name in sorted(parameters.keys()):\n print(\"\\t%s: %r\" % (param_name, best_parameters[param_name]))", "Best score: 0.958\nBest parameters set:\n\tcountvectorizer__max_features: 1000\n\tcountvectorizer__ngram_range: (1, 1)\n\tcountvectorizer__stop_words: 'english'\n\trandomforestclassifier__n_estimators: 100\n\ttfidftransformer__use_idf: True\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a3ac6b74e66c07dc741ad42b296e1effeb2fee4
21,163
ipynb
Jupyter Notebook
codes/2-basics_in_machine_learning/linear_regression/code/linear_regression.ipynb
arcyfelix/TensorFlow-World
013e59e51f8cd65e2256b43b194fe76e85af2c1c
[ "MIT" ]
null
null
null
codes/2-basics_in_machine_learning/linear_regression/code/linear_regression.ipynb
arcyfelix/TensorFlow-World
013e59e51f8cd65e2256b43b194fe76e85af2c1c
[ "MIT" ]
null
null
null
codes/2-basics_in_machine_learning/linear_regression/code/linear_regression.ipynb
arcyfelix/TensorFlow-World
013e59e51f8cd65e2256b43b194fe76e85af2c1c
[ "MIT" ]
1
2018-04-16T19:16:19.000Z
2018-04-16T19:16:19.000Z
91.614719
15,508
0.827293
[ [ [ "import numpy as np\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nimport xlrd\nimport matplotlib.pyplot as plt\nimport os\nfrom sklearn.utils import check_random_state", "_____no_output_____" ], [ "# Generating artificial data.\nn = 50\nXX = np.arange(n)\nrs = check_random_state(0)\nYY = rs.randint(-10, 10, size=(n,)) + 2.0 * XX\ndata = np.stack([XX,YY], axis=1)", "_____no_output_____" ], [ "#######################\n## Defining flags #####\n#######################\n\nnum_epochs = 5", "_____no_output_____" ], [ "# creating the weight and bias.\n# The defined variables will be initialized to zero.\nW = tf.Variable(0.0, name=\"weights\")\nb = tf.Variable(0.0, name=\"bias\")", "_____no_output_____" ], [ "###############################\n##### Necessary functions #####\n###############################\n\n# Creating placeholders for input X and label Y.\ndef inputs():\n \"\"\"\n Defining the place_holders.\n :return:\n Returning the data and label place holders.\n \"\"\"\n X = tf.placeholder(tf.float32, name=\"X\")\n Y = tf.placeholder(tf.float32, name=\"Y\")\n return X,Y\n\n# Create the prediction.\ndef inference(X):\n \"\"\"\n Forward passing the X.\n :param X: Input.\n :return: X*W + b.\n \"\"\"\n return X * W + b\n\ndef loss(X, Y):\n '''\n compute the loss by comparing the predicted value to the actual label.\n :param X: The input.\n :param Y: The label.\n :return: The loss over the samples.\n '''\n\n # Making the prediction.\n Y_predicted = inference(X)\n return tf.squared_difference(Y, Y_predicted)\n\n\n# The training function.\ndef train(loss):\n learning_rate = 0.0001\n return tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)", "_____no_output_____" ], [ "with tf.Session() as sess:\n\n # Initialize the variables[w and b].\n sess.run(tf.global_variables_initializer())\n\n # Get the input tensors\n X, Y = inputs()\n\n # Return the train loss and create the train_op.\n train_loss = loss(X, Y)\n train_op = train(train_loss)\n\n # Step 8: train the model\n for epoch_num in range(num_epochs): # run 100 epochs\n for x, y in data:\n train_op = train(train_loss)\n\n # Session runs train_op to minimize loss\n loss_value,_ = sess.run([train_loss,train_op], feed_dict={X: x, Y: y})\n\n # Displaying the loss per epoch.\n print('epoch %d, loss=%f' %(epoch_num+1, loss_value))\n\n # save the values of weight and bias\n wcoeff, bias = sess.run([W, b])", "epoch 1, loss=5.052377\nepoch 2, loss=5.002597\nepoch 3, loss=5.002904\nepoch 4, loss=5.003245\nepoch 5, loss=5.003587\n" ], [ "###############################\n#### Evaluate and plot ########\n###############################\nInput_values = data[:,0]\nLabels = data[:,1]\nPrediction_values = data[:,0] * wcoeff + bias\n\n# uncomment if plotting is desired!\nplt.plot(Input_values, Labels, 'ro', label='main')\nplt.plot(Input_values, Prediction_values, label='Predicted')\n\n# Saving the result.\nplt.legend()\nplt.show()\nplt.close()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code" ] ]
4a3aea1a9de1d86943e6e6d233b7002b00506421
4,273
ipynb
Jupyter Notebook
docs/source/notebooks/FeatureEngineering/MissingData.ipynb
sandy1618/LearnAI
b9884fa836a7436a95203529206b9f934ea6f294
[ "MIT" ]
null
null
null
docs/source/notebooks/FeatureEngineering/MissingData.ipynb
sandy1618/LearnAI
b9884fa836a7436a95203529206b9f934ea6f294
[ "MIT" ]
null
null
null
docs/source/notebooks/FeatureEngineering/MissingData.ipynb
sandy1618/LearnAI
b9884fa836a7436a95203529206b9f934ea6f294
[ "MIT" ]
null
null
null
25.284024
96
0.429675
[ [ [ "# All about handling missing data in various scenerios.\n ", "_____no_output_____" ], [ "### What all pandas consider as null values?\n> Is values like 'nan' in string format also null values? \n >> 'na' \n\n> pd.isnull,pd.notnull, pd.notna\n### How can we customize these settings and also manually add new types of null values? \n", "_____no_output_____" ] ], [ [ "# importing pandas as pd\nimport pandas as pd\n\n# importing numpy as np\nimport numpy as np\n\n# dictionary of lists\ndict = {'First Score':[100, 90, np.nan, 95],\n\t\t'Second Score': [30, 45, 56, np.nan],\n\t\t'Third Score':[np.nan, 40, 80, np.nan]}\n\n# creating a dataframe from list\ndf = pd.DataFrame(dict)\ndf", "_____no_output_____" ], [ "# It shows all columns with nan values. \n_null= df.isnull().sum()\n# Per column based frequency of 'nan' values in the dataframe so that we can drop \n# the columns with significant number of 'nan' values.\n_null.sort_values(ascending=False, inplace=True)\n_null\n\n", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code" ] ]
4a3aeba1538e9a21fc08e0d372f6e821c1609573
5,583
ipynb
Jupyter Notebook
demos/jupyter-functionality.ipynb
markuspf/JupyterZMQ
5c8081d13c10e61746a2afc8bb82f9640ab7cb30
[ "BSD-3-Clause" ]
16
2017-10-06T06:11:09.000Z
2022-01-11T18:28:56.000Z
demos/jupyter-functionality.ipynb
gap-packages/JupyterKernel
18785690c4dab0bee670b8022655ac37b683c548
[ "BSD-3-Clause" ]
111
2017-10-03T15:30:56.000Z
2022-01-28T12:55:13.000Z
demos/jupyter-functionality.ipynb
markuspf/JupyterZMQ
5c8081d13c10e61746a2afc8bb82f9640ab7cb30
[ "BSD-3-Clause" ]
16
2017-10-18T14:48:56.000Z
2021-06-30T09:46:37.000Z
27.234146
642
0.578721
[ [ [ "# Jupyter-Specific Functionality\nWhile GAP does provide a lot of useful functionality by itself on the command line, it is enhanced greatly by the numerous features that Jupyter notebooks have to offer. This notebook attempts to provide some insight into how Jupyter notebooks can improve the workflow of a user who is already well-versed in GAP.", "_____no_output_____" ], [ "## The Basics\nIn Jupyter, code is split into a number of cells. While these cells may look independent from one another, and can be run independently, there is some interconnectedness between them. One major example of this is that variables defined in one cell are accessible from cells that are run **after** the cell containing the variable. The value of the variable will be taken from the **most recent** assignment to that variable:", "_____no_output_____" ] ], [ [ "a := 3; b := 5;", "_____no_output_____" ], [ "a + b;", "_____no_output_____" ], [ "a := 7;", "_____no_output_____" ], [ "a + b;", "_____no_output_____" ] ], [ [ "To run a cell, users can either use the toolbar at the top and clicking the play button, or use the handy keyboard shortcut `Shift + Enter`. Using this shortcut will also create a new cell so users can continue their work while the cell runs. Using `Enter` by itself will allow users to add lines to a cell, should they so desire. The `Cell` option in the top menu also provides some other commands to run all cells.\n\nAdditionally, cells can also support a multitude of different inputs. One useful example of this is markdown. In order to use markdown syntax within a cell, it must be converted to a markdown cell. This conversion can be done by either using the dropdown menu at the top which allows users to change the type of the cell (it will be `Code` by default). Alternatively, users can press the `Esc` key while in the cell, which allows them to access \"Command Mode\" for the cell. While in this mode, the `M` key can be pressed to convert the cell to a Markdown cell. While in Markdown cells, all the typical markdown syntax is supported.\n\nFurthermore, while in \"Command Mode\", users can use the key sequence `D` `D` to delete cells as they wish. The key `H` can be pressed to look at other useful key shortcuts while in this mode.", "_____no_output_____" ], [ "## Cell Magic", "_____no_output_____" ], [ "While the main purpose of most users will be GAP-orientated, Jupyter can also render and run some other code fragments. For example, the code magic `%%html` allows Jupyter to render the contents of a code cell as html:", "_____no_output_____" ], [ "## Visualisation\nAnother neat feature about Jupyter is the ability to visualise items right after running cells.", "_____no_output_____" ], [ "## Notebook Conversion\nSince Jupyter Notebooks are simply JSON, they can be easily converted to other formats. For example, to convert to HTML one would run:\n\n jupyter nbconvert --to html notebook.ipynb\n\nfrom their terminal.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
4a3afd3277cf903f793bb80cf0e5fba5f0702b2e
2,458
ipynb
Jupyter Notebook
SquaresofaSortedArray.ipynb
ZengyiMa/Algorithms-Python
4f267c6e0207e062c81d634ac613bce4bebe3745
[ "MIT" ]
null
null
null
SquaresofaSortedArray.ipynb
ZengyiMa/Algorithms-Python
4f267c6e0207e062c81d634ac613bce4bebe3745
[ "MIT" ]
null
null
null
SquaresofaSortedArray.ipynb
ZengyiMa/Algorithms-Python
4f267c6e0207e062c81d634ac613bce4bebe3745
[ "MIT" ]
null
null
null
25.604167
153
0.396257
[ [ [ "# Squares of a Sorted Array\n\nGiven an array of integers A sorted in non-decreasing order, return an array of the squares of each number, also in sorted non-decreasing order.\n\n \n```\nExample 1:\n\nInput: [-4,-1,0,3,10]\nOutput: [0,1,9,16,100]\nExample 2:\n\nInput: [-7,-3,2,3,11]\nOutput: [4,9,9,49,121]\n```\n\nNote:\n\n1 <= A.length <= 10000\n\n-10000 <= A[i] <= 10000\n\nA is sorted in non-decreasing order.\n\n[Squares of a Sorted Array](https://leetcode.com/problems/squares-of-a-sorted-array/)", "_____no_output_____" ] ], [ [ "class Solution:\n def sortedSquares(self, A):\n \"\"\"\n :type A: List[int]\n :rtype: List[int]\n \"\"\"\n result = []\n for a in A:\n result.append(a ** 2)\n self.quickSort(0, len(A) - 1, result)\n return result\n \n def quickSort(self, start, end, a):\n if start < end:\n l = start\n r = end\n p = a[l]\n while l < r:\n while l < r and a[r] > p:\n r -= 1\n if l < r:\n a[l] = a[r]\n l += 1\n while l < r and a[l] < p:\n l += 1\n if l < r:\n a[r] = a[l]\n r -= 1\n a[l] = p\n self.quickSort(start, l - 1, a)\n self.quickSort(l + 1, end, a)", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code" ] ]
4a3b09b3e26609d6efced135e4055a78efac2efa
33,388
ipynb
Jupyter Notebook
codes/analiz/Dezenformasyon_quantitative_text_analysis.ipynb
burakozturan/css_covid19
ed2d9e8eedff07b644658dfd2afcc94ecf05cde5
[ "MIT" ]
4
2020-12-19T16:46:07.000Z
2021-03-07T15:40:54.000Z
codes/analiz/Dezenformasyon_quantitative_text_analysis.ipynb
burakozturan/css_covid19
ed2d9e8eedff07b644658dfd2afcc94ecf05cde5
[ "MIT" ]
null
null
null
codes/analiz/Dezenformasyon_quantitative_text_analysis.ipynb
burakozturan/css_covid19
ed2d9e8eedff07b644658dfd2afcc94ecf05cde5
[ "MIT" ]
2
2020-08-26T00:27:37.000Z
2021-01-10T22:32:06.000Z
84.956743
20,330
0.768749
[ [ [ "# Kurulum ve Gerekli Modullerin Yuklenmesi", "_____no_output_____" ] ], [ [ "from google.colab import drive\ndrive.mount('/content/gdrive')\nimport sys\nimport os", "Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly\n\nEnter your authorization code:\n··········\nMounted at /content/gdrive\n" ], [ "import pandas as pd\nimport matplotlib.pyplot as plt\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport nltk\nimport os\nfrom nltk import sent_tokenize, word_tokenize\nfrom nltk.stem.snowball import SnowballStemmer\nfrom nltk.stem.wordnet import WordNetLemmatizer\nimport nltk\n\nnltk.download('stopwords')\nimport matplotlib.pyplot as plt\nimport pandas as pd\n\nnltk.download('punkt')\nimport string\nfrom nltk.corpus import stopwords\nimport pandas as pd\nimport numpy as np\nimport re ", "[nltk_data] Downloading package stopwords to /root/nltk_data...\n[nltk_data] Unzipping corpora/stopwords.zip.\n[nltk_data] Downloading package punkt to /root/nltk_data...\n[nltk_data] Unzipping tokenizers/punkt.zip.\n" ] ], [ [ "# Incelenecek konu basligindaki tweetlerin yuklenmesi ", "_____no_output_____" ], [ "Burada ornek olarak ulkeler konu basligi gosteriliyor gosteriliyor", "_____no_output_____" ] ], [ [ "os.chdir(\"/content/gdrive/My Drive/css/dezenformasyon_before\")\ndf3 = pd.read_csv(\"/content/gdrive/My Drive/css/dezenformasyon_before/dezenformasyon_before_nodublication.csv\", engine = 'python')\ndf3['tweet'] = df3['tweet'].astype(str)", "_____no_output_____" ] ], [ [ "Data pre-processing (on temizlemesi):\n\n1. kucuk harfe cevirme\n2. turkce karakter uyumlarini duzeltme\n3. 
cleaning special characters and punctuation\n", "_____no_output_____" ] ], [ [ "df3.tweet = df3.tweet.apply(lambda x: re.sub(r\"İ\", \"i\",x)) # works great\ndf3.tweet = df3.tweet.apply(lambda x: x.lower())\ndf3.loc[:,\"tweet\"] = df3.tweet.apply(lambda x : \" \".join(re.findall('[\\w]+',x)))", "_____no_output_____" ] ], [ [ "# Tokenization, removing stop words, and saving word frequencies (usage counts) for the visualization to come", "_____no_output_____" ] ], [ [ "top_N = 10\n\n\ntxt = df3.tweet.str.lower().str.replace(r'\\|', ' ').str.cat(sep=' ')\nwords = nltk.tokenize.word_tokenize(txt)\nword_dist = nltk.FreqDist(words)\n\nuser_defined_stop_words = ['ekonomi', '1', 'ye', 'nin' ,'nın', 'koronavirüs', 'olsun', 'karşı' , 'covid_19', 'artık', '3', 'sayısı' , 'olarak', 'oldu', 'olan', '2' , 'nedeniyle','bile' , 'sonra' ,'sen','virüs', 'ben', 'vaka' , 'son', 'yeni', 'sayi', 'sayisi','virüsü','bir','com','twitter', 'kadar', 'dan' , 'değil' ,'pic' , 'http', 'https' , 'www' , 'status' , 'var', 'bi', 'mi','yok', 'bu' , 've', 'korona' ,'corona' ,'19' ,'kovid', 'covid'] \n\ni = nltk.corpus.stopwords.words('turkish')\nj = list(string.punctuation) + user_defined_stop_words\nstopwords = set(i).union(j)\n\n\nwords_except_stop_dist = nltk.FreqDist(w for w in words if w not in stopwords) \n\nprint('All frequencies, including STOPWORDS:')\nprint('=' * 60)\nrslt3 = pd.DataFrame(word_dist.most_common(top_N),\n columns=['Word', 'Frequency'])\nprint(rslt3)\nprint('=' * 60)\n\nrslt3 = pd.DataFrame(words_except_stop_dist.most_common(top_N),\n columns=['Word', 'Frequency']).set_index('Word')", "All frequencies, including STOPWORDS:\n============================================================\n Word Frequency\n0 sahte 38177\n1 bir 15496\n2 bu 14100\n3 ve 13806\n4 com 11712\n5 twitter 9524\n6 da 6823\n7 de 6647\n8 https 6300\n9 komplo 6014\n============================================================\n" ] ], [ [ "# Loading the tweets from after the first case in Turkey for analysis", "_____no_output_____" ] ], [ [ "df2 = pd.read_csv(\"/content/gdrive/My Drive/css/dezenformasyon_after/dezenformasyon_after_nodublication.csv\", engine = 'python')\n\ndf2['tweet'] = df2['tweet'].astype(str)", "_____no_output_____" ], [ "df2['tweet'] = df2['tweet'].astype(str)\ndf2.tweet = df2.tweet.apply(lambda x: re.sub(r\"İ\", \"i\",x)) # works great\ndf2.tweet = df2.tweet.apply(lambda x: x.lower())\ndf2.loc[:,\"tweet\"] = df2.tweet.apply(lambda x : \" \".join(re.findall('[\\w]+',x)))", "_____no_output_____" ], [ "top_N = 10\n\n\ntxt = df2.tweet.str.lower().str.replace(r'\\|', ' ').str.cat(sep=' ')\nwords = nltk.tokenize.word_tokenize(txt)\nword_dist = nltk.FreqDist(words)\n\nuser_defined_stop_words = ['ekonomi', '1', 'ye', 'nin' ,'nın', 'koronavirüs', 'olsun', 'karşı' , 'covid_19', 'artık', '3', 'sayısı' , 'olarak', 'oldu', 'olan', '2' , 'nedeniyle','bile' , 'sonra' ,'sen','virüs', 'ben', 'vaka' , 'son', 'yeni', 'sayi', 'sayisi','virüsü','bir','com','twitter', 'kadar', 'dan' , 'değil' ,'pic' , 'http', 'https' , 'www' , 'status' , 'var', 'bi', 'mi','yok', 'bu' , 've', 'korona' ,'corona' ,'19' ,'kovid', 'covid'] \n\ni = nltk.corpus.stopwords.words('turkish')\nj = list(string.punctuation) + user_defined_stop_words\nstopwords = set(i).union(j)\n\n\nwords_except_stop_dist = nltk.FreqDist(w for w in words if w not in stopwords) \n\nprint('All frequencies, including STOPWORDS:')\nprint('=' * 60)\nrslt = pd.DataFrame(word_dist.most_common(top_N),\n columns=['Word', 'Frequency'])\nprint(rslt)\nprint('=' * 
60)\n\nrslt = pd.DataFrame(words_except_stop_dist.most_common(top_N),\n columns=['Word', 'Frequency']).set_index('Word')\n\n", "All frequencies, including STOPWORDS:\n============================================================\n Word Frequency\n0 sahte 37777\n1 bir 30686\n2 komplo 29424\n3 bu 28859\n4 ve 25136\n5 com 20487\n6 twitter 16939\n7 da 12807\n8 de 12445\n9 https 11229\n============================================================\n" ] ], [ [ "# Comparative visualization (the same topic headings before and after March 11)", "_____no_output_____" ] ], [ [ "fig, (ax1, ax2) = plt.subplots(1,2, sharex=False, sharey= True, figsize=(24,5)) \nrslt3.plot.bar(rot=0, ax =ax1 , title = \"Dezenformasyon_Once\" )\nrslt.plot.bar(rot=0, ax =ax2 , title = \"Dezenformasyon_Sonra\" )\nplt.savefig('Disinfo_comparison.png',dpi=300)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
4a3b1bb3d2e60959dbe2194c799fa81510591f05
4,168
ipynb
Jupyter Notebook
ALVEO - ARRAY INPUTS - DOT - OPTIMIZATIONS/jupyter notebooks/DOT-ALVEO.ipynb
MakarenaLabs/Xilinx-FPGA-HLS-Flow
0189606d59060097130e7a96047ba6b16b1209ed
[ "MIT" ]
15
2020-04-30T13:30:48.000Z
2022-01-24T10:16:47.000Z
ALVEO - ARRAY INPUTS - DOT - OPTIMIZATIONS/jupyter notebooks/DOT-ALVEO.ipynb
Anubhav2017/Xilinx-FPGA-HLS-PYNQ-ALVEO-Flow
0189606d59060097130e7a96047ba6b16b1209ed
[ "MIT" ]
null
null
null
ALVEO - ARRAY INPUTS - DOT - OPTIMIZATIONS/jupyter notebooks/DOT-ALVEO.ipynb
Anubhav2017/Xilinx-FPGA-HLS-PYNQ-ALVEO-Flow
0189606d59060097130e7a96047ba6b16b1209ed
[ "MIT" ]
2
2020-05-30T17:21:50.000Z
2022-01-17T00:06:05.000Z
24.232558
266
0.5619
[ [ [ "# ARRAY INPUTS - DOT - OPTIMIZATIONS\n\nIn this notebook we explain how to use ```pynq``` framework to test the acceleration (optimized) of dot multiplication on ```Alveo U200```.", "_____no_output_____" ] ], [ [ "from pynq import Overlay\nfrom pynq import DefaultIP\nfrom pynq import DefaultHierarchy\nfrom pynq import MMIO\nfrom pynq.pl import *\nimport pynq.lib.dma", "_____no_output_____" ] ], [ [ "The function initializes the hardware of FPGA building an object that contains synthesized module (```ol```), which contains all infos to execute IP module, and a reference to IP (```ip```). At the end of initialization, prints the signature of the C function.", "_____no_output_____" ] ], [ [ "def init_hw(filepath):\n global ol, dot\n ol = Overlay(filepath)\n dot = ol.dot_matrix_1\n print(dot.signature)", "_____no_output_____" ], [ "init_hw(\"/path/to/binary_container_1.xclbin\")", "_____no_output_____" ] ], [ [ "In this block the variables that are needed later are allocated and initialized. This specifies the allocation of the variables where the size and their type must be specified as written in Vivado HLS. The suggestion is to use ```numpy```.", "_____no_output_____" ] ], [ [ "DIM = 300\n\na = allocate(shape=((DIM, DIM)), dtype=np.int32, cacheable=True)\nb = allocate(shape=((DIM, DIM)), dtype=np.int32, cacheable=True)\nc = allocate(shape=((DIM, DIM)), dtype=np.int32, cacheable=True)\n\na[:] = np.ones((DIM,DIM)).astype('int') * 3\nb[:] = np.ones((DIM,DIM)).astype('int') * 3\nc[:] = np.zeros((DIM,DIM)).astype('int')", "_____no_output_____" ] ], [ [ "Now variables previously allocated are flushed in the global memory of Alveo.", "_____no_output_____" ] ], [ [ "a.sync_to_device()\nb.sync_to_device()\nc.sync_to_device()", "_____no_output_____" ] ], [ [ "The ```call``` function starts the execution of the IP module and ```wait``` function is used to synchronize the events avoiding reading/writing on the buffer of the IP module which may lead to race conditions.", "_____no_output_____" ] ], [ [ "dot.call(a, b, c)", "_____no_output_____" ] ], [ [ "The ```invalidate``` function is used on the output buffer because we have no other computations to do and so we want to store the result without using it again.", "_____no_output_____" ] ], [ [ "c.sync_from_device()", "_____no_output_____" ], [ "result[:] = c", "_____no_output_____" ], [ "del a\ndel b\ndel c", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
4a3b1c196dc66b4e840ccceeeca3025b209b8437
494,169
ipynb
Jupyter Notebook
Notebooks/Chapter11-Classifiers/Chapter11-Classifiers-1-kNN.ipynb
NTForked-ML/Deep-Learning-A-Visual-Approach
d734fcfe3d1e12ead1ef5df416e2c10596732cd2
[ "MIT" ]
1
2021-07-05T14:08:15.000Z
2021-07-05T14:08:15.000Z
Notebooks/Chapter11-Classifiers/Chapter11-Classifiers-1-kNN.ipynb
ashoknp-git/Deep-Learning-A-Visual-Approach
2728001eff8dfde29ba5094bd92f193043760b10
[ "MIT" ]
null
null
null
Notebooks/Chapter11-Classifiers/Chapter11-Classifiers-1-kNN.ipynb
ashoknp-git/Deep-Learning-A-Visual-Approach
2728001eff8dfde29ba5094bd92f193043760b10
[ "MIT" ]
null
null
null
1,110.492135
102,592
0.955938
[ [ [ "## <small>\nCopyright (c) 2017-21 Andrew Glassner\n\nPermission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.\n</small>\n\n\n\n# Deep Learning: A Visual Approach\n## by Andrew Glassner, https://glassner.com\n### Order: https://nostarch.com/deep-learning-visual-approach\n### GitHub: https://github.com/blueberrymusic\n------\n\n### What's in this notebook\n\nThis notebook is provided as a “behind-the-scenes” look at code used to make some of the figures in this chapter. It is cleaned up a bit from the original code that I hacked together, and is only lightly commented. I wrote the code to be easy to interpret and understand, even for those who are new to Python. I tried never to be clever or even more efficient at the cost of being harder to understand. The code is in Python3, using the versions of libraries as of April 2021. \n\nThis notebook may contain additional code to create models and images not in the book. That material is included here to demonstrate additional techniques.\n\nNote that I've included the output cells in this saved notebook, but Jupyter doesn't save the variables or data that were used to generate them. To recreate any cell's output, evaluate all the cells from the start up to that cell. A convenient way to experiment is to first choose \"Restart & Run All\" from the Kernel menu, so that everything's been defined and is up to date. Then you can experiment using the variables, data, functions, and other stuff defined in this notebook.", "_____no_output_____" ], [ "## Chapter 11: Classifers, Notebook 1: kNN", "_____no_output_____" ], [ "Figures demonstrating k nearest neighbors (kNN)", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn.datasets.samples_generator import make_blobs\nfrom sklearn.neighbors import KNeighborsClassifier\nimport math\n\nimport seaborn as sns; sns.set()", "/Users/Andrew/opt/anaconda3/envs/tf/lib/python3.7/site-packages/sklearn/utils/deprecation.py:143: FutureWarning: The sklearn.datasets.samples_generator module is deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.datasets. 
Anything that cannot be imported from sklearn.datasets is now part of the private API.\n warnings.warn(message, FutureWarning)\n" ], [ "# Make a File_Helper for saving and loading files.\n\nsave_files = False\n\nimport os, sys, inspect\ncurrent_dir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))\nsys.path.insert(0, os.path.dirname(current_dir)) # path to parent dir\nfrom DLBasics_Utilities import File_Helper\nfile_helper = File_Helper(save_files)", "Using TensorFlow backend.\n" ], [ "# create a custom color map with nice colors\n\nfrom matplotlib.colors import LinearSegmentedColormap\ndot_clr_0 = np.array((79, 135, 219))/255. # blue\ndot_clr_1 = np.array((255, 141, 54))/255. # orange\ndot_cmap = LinearSegmentedColormap.from_list('dot_map', [dot_clr_0, dot_clr_1], N=100)", "_____no_output_____" ], [ "# Show a scatter plot with blue/orange colors and no ticks\n\ndef show_Xy(X, y, filename):\n plt.scatter(X[:,0], X[:,1], c=y, s=50, cmap=dot_cmap)\n plt.xticks([],[])\n plt.yticks([],[])\n file_helper.save_figure(filename)\n plt.show()", "_____no_output_____" ], [ "# Create the \"smile\" dataset. A curve for the smile with a circle at each end.\n# All the magic values were picked by hand.\n\ndef make_smile(num_samples = 20, thickness=0.3, noise=0.0):\n np.random.seed(42)\n X = []\n y = []\n for i in range(num_samples):\n px = np.random.uniform(-1.5, 1.5)\n py = np.random.uniform(-1, 1)\n c = 0\n if (px - -0.8)**2 + (py-.4)**2 < thickness**2:\n c = 1\n if (px - 0.8)**2 + (py-.4)**2 < thickness**2:\n c = 1\n theta = np.arctan2(py-.4, px)\n r = math.sqrt((px**2)+((py-.4)**2))\n if (theta < 0) and (r > .8-thickness) and (r < .8+thickness):\n c = 1\n px += np.random.uniform(-noise, noise)\n py += np.random.uniform(-noise, noise)\n X.append([px,py])\n y.append(c)\n return (np.array(X),y)", "_____no_output_____" ], [ "# Create the \"happy face\" dataset by adding some eyes to the smile.\n# All the magic values were picked by hand.\n\ndef make_happy_face(num_samples = 20, thickness=0.3, noise=0.0):\n np.random.seed(42)\n X = []\n y = []\n eye_x = .5\n eye_y = 1.5\n for i in range(num_samples):\n px = np.random.uniform(-1.5, 1.5)\n py = np.random.uniform(-1, 2.0)\n c = 0\n if (px - eye_x)**2 + (py-eye_y)**2 < thickness**2:\n c = 1\n if (px - -eye_x)**2 + (py-eye_y)**2 < thickness**2:\n c = 1\n if (px - -0.8)**2 + (py-.4)**2 < thickness**2:\n c = 1\n if (px - 0.8)**2 + (py-.4)**2 < thickness**2:\n c = 1\n theta = np.arctan2(py-.4, px)\n r = math.sqrt((px**2)+((py-.4)**2))\n if (theta < 0) and (r > .8-thickness) and (r < .8+thickness):\n c = 1\n px += np.random.uniform(-noise, noise)\n py += np.random.uniform(-noise, noise)\n X.append([px,py])\n y.append(c)\n return (np.array(X),y)", "_____no_output_____" ], [ "# Show the clean smile\n\nX_clean, y_clean = make_smile(1000, .3, 0)\nshow_Xy(X_clean, y_clean, 'KNN-smile-data-clean')", "_____no_output_____" ], [ "# Show the noisy smile\n\nX_noisy, y_noisy = make_smile(1000, .3, .25)\nshow_Xy(X_noisy, y_noisy, 'KNN-smile-data-noisy')", "_____no_output_____" ], [ "# Show a grid of k-nearest-neighbors (kNN) results for different values of k. 
\n# For large values of k, this can take a little while.\n\ndef show_fit_grid(X, y, data_version):\n k_list = [1, 2, 3, 4, 5, 6, 10, 20, 50]\n plt.figure(figsize=(8,6))\n resolution = 500\n xmin = np.min(X[:,0]) - .1\n xmax = np.max(X[:,0]) + .1\n ymin = np.min(X[:,1]) - .1\n ymax = np.max(X[:,1]) + .1\n xx, yy = np.meshgrid(np.linspace(xmin, xmax, resolution), np.linspace(ymin, ymax, resolution))\n zin = np.array([xx.ravel(), yy.ravel()]).T\n for i in range(9):\n plt.subplot(3, 3, i+1)\n num_neighbors = k_list[i]\n knn = KNeighborsClassifier(n_neighbors=num_neighbors)\n knn.fit(X,y)\n Z = knn.predict(zin)\n Z = Z.reshape(xx.shape)\n plt.contourf(xx, yy, Z, cmap=dot_cmap)\n #plt.scatter(X[:,0], X[:,1], c=y, s=5, alpha=0.3, cmap='cool')\n plt.xticks([],[])\n plt.yticks([],[])\n plt.title('k='+str(num_neighbors))\n plt.tight_layout()\n file_helper.save_figure('KNN-smile-grid-'+data_version)\n plt.show()", "_____no_output_____" ], [ "# Show the grid for the clean smile dataset\n\nshow_fit_grid(X_clean, y_clean, 'clean')", "_____no_output_____" ], [ "# Show the grid for the noisy smile dataset\n\nshow_fit_grid(X_noisy, y_noisy, 'noisy')", "_____no_output_____" ], [ "# Show the clean face dataset\n\nX_clean_face, y_clean_face = make_happy_face(1000, .3, 0)\nshow_Xy(X_clean_face, y_clean_face, 'KNN-face-data-clean')", "_____no_output_____" ], [ "# Show the grid for the clean face dataset\n\nshow_fit_grid(X_clean_face, y_clean_face, 'clean-face')", "_____no_output_____" ], [ "# Show the noisy face dataset\n\nX_noisy_face, y_noisy_face = make_happy_face(1000, .3, .25)\nshow_Xy(X_noisy_face, y_noisy_face, 'KNN-face-data-noisy')", "_____no_output_____" ], [ "# Show the grid for the noisy face dataset\n\nshow_fit_grid(X_noisy_face, y_noisy_face, 'noisy-face')", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a3b215c75c1d5187f0326a71c9c38cc8c062725
64,070
ipynb
Jupyter Notebook
chapters/chapter_1/PyTorch_Basics.ipynb
RRisto/pytorchnlpbook
3ab532236ded14338b53683aa9adc75790bd8f0c
[ "Apache-2.0" ]
2
2020-02-08T23:39:11.000Z
2020-03-01T02:51:20.000Z
chapters/chapter_1/PyTorch_Basics.ipynb
RRisto/pytorchnlpbook
3ab532236ded14338b53683aa9adc75790bd8f0c
[ "Apache-2.0" ]
1
2020-02-07T18:02:15.000Z
2020-02-07T18:02:15.000Z
chapters/chapter_1/PyTorch_Basics.ipynb
RRisto/pytorchnlpbook
3ab532236ded14338b53683aa9adc75790bd8f0c
[ "Apache-2.0" ]
null
null
null
24.756569
1,477
0.461807
[ [ [ "# PyTorch Basics", "_____no_output_____" ] ], [ [ "import torch\nimport numpy as np\ntorch.manual_seed(1234)", "_____no_output_____" ] ], [ [ "## Tensors", "_____no_output_____" ], [ "* Scalar is a single number.\n* Vector is an array of numbers.\n* Matrix is a 2-D array of numbers.\n* Tensors are N-D arrays of numbers.", "_____no_output_____" ], [ "#### Creating Tensors", "_____no_output_____" ], [ "You can create tensors by specifying the shape as arguments. Here is a tensor with 5 rows and 3 columns", "_____no_output_____" ] ], [ [ "def describe(x):\n print(\"Type: {}\".format(x.type()))\n print(\"Shape/size: {}\".format(x.shape))\n print(\"Values: \\n{}\".format(x))", "_____no_output_____" ], [ "describe(torch.Tensor(2, 3))", "Type: torch.FloatTensor\nShape/size: torch.Size([2, 3])\nValues: \ntensor([[ 3.1654e+09, 4.5635e-41, -5.4825e-21],\n [ 3.0718e-41, 4.4842e-44, 0.0000e+00]])\n" ], [ "describe(torch.randn(2, 3))", "Type: torch.FloatTensor\nShape/size: torch.Size([2, 3])\nValues: \ntensor([[ 0.0461, 0.4024, -1.0115],\n [ 0.2167, -0.6123, 0.5036]])\n" ] ], [ [ "It's common in prototyping to create a tensor with random numbers of a specific shape.", "_____no_output_____" ] ], [ [ "x = torch.rand(2, 3)\ndescribe(x)", "Type: torch.FloatTensor\nShape/size: torch.Size([2, 3])\nValues: \ntensor([[0.7749, 0.8208, 0.2793],\n [0.6817, 0.2837, 0.6567]])\n" ] ], [ [ "You can also initialize tensors of ones or zeros.", "_____no_output_____" ] ], [ [ "describe(torch.zeros(2, 3))\nx = torch.ones(2, 3)\ndescribe(x)\nx.fill_(5)\ndescribe(x)", "Type: torch.FloatTensor\nShape/size: torch.Size([2, 3])\nValues: \ntensor([[0., 0., 0.],\n [0., 0., 0.]])\nType: torch.FloatTensor\nShape/size: torch.Size([2, 3])\nValues: \ntensor([[1., 1., 1.],\n [1., 1., 1.]])\nType: torch.FloatTensor\nShape/size: torch.Size([2, 3])\nValues: \ntensor([[5., 5., 5.],\n [5., 5., 5.]])\n" ] ], [ [ "Tensors can be initialized and then filled in place. 
\n\nNote: operations that end in an underscore (`_`) are in place operations.", "_____no_output_____" ] ], [ [ "x = torch.Tensor(3,4).fill_(5)\nprint(x.type())\nprint(x.shape)\nprint(x)", "torch.FloatTensor\ntorch.Size([3, 4])\ntensor([[5., 5., 5., 5.],\n [5., 5., 5., 5.],\n [5., 5., 5., 5.]])\n" ] ], [ [ "Tensors can be initialized from a list of lists", "_____no_output_____" ] ], [ [ "x = torch.Tensor([[1, 2,], \n [2, 4,]])\ndescribe(x)", "Type: torch.FloatTensor\nShape/size: torch.Size([2, 2])\nValues: \ntensor([[1., 2.],\n [2., 4.]])\n" ] ], [ [ "Tensors can be initialized from numpy matrices", "_____no_output_____" ] ], [ [ "npy = np.random.rand(2, 3)\ndescribe(torch.from_numpy(npy))\nprint(npy.dtype)", "Type: torch.DoubleTensor\nShape/size: torch.Size([2, 3])\nValues: \ntensor([[0.6938, 0.0125, 0.7894],\n [0.4493, 0.1734, 0.4403]], dtype=torch.float64)\nfloat64\n" ] ], [ [ "#### Tensor Types", "_____no_output_____" ], [ "The FloatTensor has been the default tensor that we have been creating all along", "_____no_output_____" ] ], [ [ "import torch\nx = torch.arange(6).view(2, 3)\ndescribe(x)", "Type: torch.LongTensor\nShape/size: torch.Size([2, 3])\nValues: \ntensor([[0, 1, 2],\n [3, 4, 5]])\n" ], [ "x = torch.FloatTensor([[1, 2, 3], \n [4, 5, 6]])\ndescribe(x)\n\nx = x.long()\ndescribe(x)\n\nx = torch.tensor([[1, 2, 3], \n [4, 5, 6]], dtype=torch.int64)\ndescribe(x)\n\nx = x.float() \ndescribe(x)", "Type: torch.FloatTensor\nShape/size: torch.Size([2, 3])\nValues: \ntensor([[1., 2., 3.],\n [4., 5., 6.]])\nType: torch.LongTensor\nShape/size: torch.Size([2, 3])\nValues: \ntensor([[1, 2, 3],\n [4, 5, 6]])\nType: torch.LongTensor\nShape/size: torch.Size([2, 3])\nValues: \ntensor([[1, 2, 3],\n [4, 5, 6]])\nType: torch.FloatTensor\nShape/size: torch.Size([2, 3])\nValues: \ntensor([[1., 2., 3.],\n [4., 5., 6.]])\n" ], [ "x = torch.randn(2, 3)\ndescribe(x)", "Type: torch.FloatTensor\nShape/size: torch.Size([2, 3])\nValues: \ntensor([[ 1.5385, -0.9757, 1.5769],\n [ 0.3840, -0.6039, -0.5240]])\n" ], [ "describe(torch.add(x, x))", "Type: torch.FloatTensor\nShape/size: torch.Size([2, 3])\nValues: \ntensor([[ 3.0771, -1.9515, 3.1539],\n [ 0.7680, -1.2077, -1.0479]])\n" ], [ "describe(x + x)", "Type: torch.FloatTensor\nShape/size: torch.Size([2, 3])\nValues: \ntensor([[ 3.0771, -1.9515, 3.1539],\n [ 0.7680, -1.2077, -1.0479]])\n" ], [ "x = torch.arange(6)\ndescribe(x)", "Type: torch.LongTensor\nShape/size: torch.Size([6])\nValues: \ntensor([0, 1, 2, 3, 4, 5])\n" ], [ "x = x.view(2, 3)\ndescribe(x)", "Type: torch.LongTensor\nShape/size: torch.Size([2, 3])\nValues: \ntensor([[0, 1, 2],\n [3, 4, 5]])\n" ], [ "describe(torch.sum(x, dim=0))\ndescribe(torch.sum(x, dim=1))", "Type: torch.LongTensor\nShape/size: torch.Size([3])\nValues: \ntensor([3, 5, 7])\nType: torch.LongTensor\nShape/size: torch.Size([2])\nValues: \ntensor([ 3, 12])\n" ], [ "describe(torch.transpose(x, 0, 1))", "Type: torch.LongTensor\nShape/size: torch.Size([3, 2])\nValues: \ntensor([[0, 3],\n [1, 4],\n [2, 5]])\n" ], [ "import torch\nx = torch.arange(6).view(2, 3)\ndescribe(x)\ndescribe(x[:1, :2])\ndescribe(x[0, 1])", "Type: torch.LongTensor\nShape/size: torch.Size([2, 3])\nValues: \ntensor([[0, 1, 2],\n [3, 4, 5]])\nType: torch.LongTensor\nShape/size: torch.Size([1, 2])\nValues: \ntensor([[0, 1]])\nType: torch.LongTensor\nShape/size: torch.Size([])\nValues: \n1\n" ], [ "indices = torch.LongTensor([0, 2])\ndescribe(torch.index_select(x, dim=1, index=indices))", "Type: torch.LongTensor\nShape/size: torch.Size([2, 2])\nValues: 
\ntensor([[0, 2],\n [3, 5]])\n" ], [ "indices = torch.LongTensor([0, 0])\ndescribe(torch.index_select(x, dim=0, index=indices))", "Type: torch.LongTensor\nShape/size: torch.Size([2, 3])\nValues: \ntensor([[0, 1, 2],\n [0, 1, 2]])\n" ], [ "row_indices = torch.arange(2).long()\ncol_indices = torch.LongTensor([0, 1])\ndescribe(x[row_indices, col_indices])", "Type: torch.LongTensor\nShape/size: torch.Size([2])\nValues: \ntensor([0, 4])\n" ] ], [ [ "Long Tensors are used for indexing operations and mirror the `int64` numpy type", "_____no_output_____" ] ], [ [ "x = torch.LongTensor([[1, 2, 3], \n [4, 5, 6],\n [7, 8, 9]])\ndescribe(x)\nprint(x.dtype)\nprint(x.numpy().dtype)", "Type: torch.LongTensor\nShape/size: torch.Size([3, 3])\nValues: \ntensor([[1, 2, 3],\n [4, 5, 6],\n [7, 8, 9]])\ntorch.int64\nint64\n" ] ], [ [ "You can convert a FloatTensor to a LongTensor", "_____no_output_____" ] ], [ [ "x = torch.FloatTensor([[1, 2, 3], \n [4, 5, 6],\n [7, 8, 9]])\nx = x.long()\ndescribe(x)", "Type: torch.LongTensor\nShape/size: torch.Size([3, 3])\nValues: \ntensor([[1, 2, 3],\n [4, 5, 6],\n [7, 8, 9]])\n" ] ], [ [ "### Special Tensor initializations", "_____no_output_____" ], [ "We can create a vector of incremental numbers", "_____no_output_____" ] ], [ [ "x = torch.arange(0, 10)\nprint(x)", "tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\n" ] ], [ [ "Sometimes it's useful to have an integer-based arange for indexing", "_____no_output_____" ] ], [ [ "x = torch.arange(0, 10).long()\nprint(x)", "tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\n" ] ], [ [ "## Operations\n\nUsing the tensors to do linear algebra is a foundation of modern Deep Learning practices", "_____no_output_____" ], [ "Reshaping allows you to move the numbers in a tensor around. One can be sure that the order is preserved. In PyTorch, reshaping is called `view`", "_____no_output_____" ] ], [ [ "x = torch.arange(0, 20)\n\nprint(x.view(1, 20))\nprint(x.view(2, 10))\nprint(x.view(4, 5))\nprint(x.view(5, 4))\nprint(x.view(10, 2))\nprint(x.view(20, 1))", "tensor([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17,\n 18, 19]])\ntensor([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9],\n [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]])\ntensor([[ 0, 1, 2, 3, 4],\n [ 5, 6, 7, 8, 9],\n [10, 11, 12, 13, 14],\n [15, 16, 17, 18, 19]])\ntensor([[ 0, 1, 2, 3],\n [ 4, 5, 6, 7],\n [ 8, 9, 10, 11],\n [12, 13, 14, 15],\n [16, 17, 18, 19]])\ntensor([[ 0, 1],\n [ 2, 3],\n [ 4, 5],\n [ 6, 7],\n [ 8, 9],\n [10, 11],\n [12, 13],\n [14, 15],\n [16, 17],\n [18, 19]])\ntensor([[ 0],\n [ 1],\n [ 2],\n [ 3],\n [ 4],\n [ 5],\n [ 6],\n [ 7],\n [ 8],\n [ 9],\n [10],\n [11],\n [12],\n [13],\n [14],\n [15],\n [16],\n [17],\n [18],\n [19]])\n" ] ], [ [ "We can use view to add size-1 dimensions, which can be useful for combining with other tensors. This is called broadcasting. 
", "_____no_output_____" ] ], [ [ "x = torch.arange(12).view(3, 4)\ny = torch.arange(4).view(1, 4)\nz = torch.arange(3).view(3, 1)\n\nprint(x)\nprint(y)\nprint(z)\nprint(x + y)\nprint(x + z)", "tensor([[ 0, 1, 2, 3],\n [ 4, 5, 6, 7],\n [ 8, 9, 10, 11]])\ntensor([[0, 1, 2, 3]])\ntensor([[0],\n [1],\n [2]])\ntensor([[ 0, 2, 4, 6],\n [ 4, 6, 8, 10],\n [ 8, 10, 12, 14]])\ntensor([[ 0, 1, 2, 3],\n [ 5, 6, 7, 8],\n [10, 11, 12, 13]])\n" ] ], [ [ "Unsqueeze and squeeze will add and remove 1-dimensions.", "_____no_output_____" ] ], [ [ "x = torch.arange(12).view(3, 4)\nprint(x.shape)\n\nx = x.unsqueeze(dim=1)\nprint(x.shape)\n\nx = x.squeeze()\nprint(x.shape)", "torch.Size([3, 4])\ntorch.Size([3, 1, 4])\ntorch.Size([3, 4])\n" ] ], [ [ "all of the standard mathematics operations apply (such as `add` below)", "_____no_output_____" ] ], [ [ "x = torch.rand(3,4)\nprint(\"x: \\n\", x)\nprint(\"--\")\nprint(\"torch.add(x, x): \\n\", torch.add(x, x))\nprint(\"--\")\nprint(\"x+x: \\n\", x + x)", "x: \n tensor([[0.6662, 0.3343, 0.7893, 0.3216],\n [0.5247, 0.6688, 0.8436, 0.4265],\n [0.9561, 0.0770, 0.4108, 0.0014]])\n--\ntorch.add(x, x): \n tensor([[1.3324, 0.6686, 1.5786, 0.6433],\n [1.0494, 1.3377, 1.6872, 0.8530],\n [1.9123, 0.1540, 0.8216, 0.0028]])\n--\nx+x: \n tensor([[1.3324, 0.6686, 1.5786, 0.6433],\n [1.0494, 1.3377, 1.6872, 0.8530],\n [1.9123, 0.1540, 0.8216, 0.0028]])\n" ] ], [ [ "The convention of `_` indicating in-place operations continues:", "_____no_output_____" ] ], [ [ "x = torch.arange(12).reshape(3, 4)\nprint(x)\nprint(x.add_(x))", "tensor([[ 0, 1, 2, 3],\n [ 4, 5, 6, 7],\n [ 8, 9, 10, 11]])\ntensor([[ 0, 2, 4, 6],\n [ 8, 10, 12, 14],\n [16, 18, 20, 22]])\n" ] ], [ [ "There are many operations for which reduce a dimension. Such as sum:", "_____no_output_____" ] ], [ [ "x = torch.arange(12).reshape(3, 4)\nprint(\"x: \\n\", x)\nprint(\"---\")\nprint(\"Summing across rows (dim=0): \\n\", x.sum(dim=0))\nprint(\"---\")\nprint(\"Summing across columns (dim=1): \\n\", x.sum(dim=1))", "x: \n tensor([[ 0, 1, 2, 3],\n [ 4, 5, 6, 7],\n [ 8, 9, 10, 11]])\n---\nSumming across rows (dim=0): \n tensor([12, 15, 18, 21])\n---\nSumming across columns (dim=1): \n tensor([ 6, 22, 38])\n" ] ], [ [ "#### Indexing, Slicing, Joining and Mutating", "_____no_output_____" ] ], [ [ "x = torch.arange(6).view(2, 3)\nprint(\"x: \\n\", x)\nprint(\"---\")\nprint(\"x[:2, :2]: \\n\", x[:2, :2])\nprint(\"---\")\nprint(\"x[0][1]: \\n\", x[0][1])\nprint(\"---\")\nprint(\"Setting [0][1] to be 8\")\nx[0][1] = 8\nprint(x)", "x: \n tensor([[0, 1, 2],\n [3, 4, 5]])\n---\nx[:2, :2]: \n tensor([[0, 1],\n [3, 4]])\n---\nx[0][1]: \n tensor(1)\n---\nSetting [0][1] to be 8\ntensor([[0, 8, 2],\n [3, 4, 5]])\n" ] ], [ [ "We can select a subset of a tensor using the `index_select`", "_____no_output_____" ] ], [ [ "x = torch.arange(9).view(3,3)\nprint(x)\n\nprint(\"---\")\nindices = torch.LongTensor([0, 2])\nprint(torch.index_select(x, dim=0, index=indices))\n\nprint(\"---\")\nindices = torch.LongTensor([0, 2])\nprint(torch.index_select(x, dim=1, index=indices))", "tensor([[0, 1, 2],\n [3, 4, 5],\n [6, 7, 8]])\n---\ntensor([[0, 1, 2],\n [6, 7, 8]])\n---\ntensor([[0, 2],\n [3, 5],\n [6, 8]])\n" ] ], [ [ "We can also use numpy-style advanced indexing:", "_____no_output_____" ] ], [ [ "x = torch.arange(9).view(3,3)\nindices = torch.LongTensor([0, 2])\n\nprint(x[indices])\nprint(\"---\")\nprint(x[indices, :])\nprint(\"---\")\nprint(x[:, indices])", "tensor([[0, 1, 2],\n [6, 7, 8]])\n---\ntensor([[0, 1, 2],\n [6, 7, 8]])\n---\ntensor([[0, 
2],\n [3, 5],\n [6, 8]])\n" ] ], [ [ "We can combine tensors by concatenating them. First, concatenating on the rows", "_____no_output_____" ] ], [ [ "x = torch.arange(6).view(2,3)\ndescribe(x)\ndescribe(torch.cat([x, x], dim=0))\ndescribe(torch.cat([x, x], dim=1))\ndescribe(torch.stack([x, x]))", "Type: torch.LongTensor\nShape/size: torch.Size([2, 3])\nValues: \ntensor([[0, 1, 2],\n [3, 4, 5]])\nType: torch.LongTensor\nShape/size: torch.Size([4, 3])\nValues: \ntensor([[0, 1, 2],\n [3, 4, 5],\n [0, 1, 2],\n [3, 4, 5]])\nType: torch.LongTensor\nShape/size: torch.Size([2, 6])\nValues: \ntensor([[0, 1, 2, 0, 1, 2],\n [3, 4, 5, 3, 4, 5]])\nType: torch.LongTensor\nShape/size: torch.Size([2, 2, 3])\nValues: \ntensor([[[0, 1, 2],\n [3, 4, 5]],\n\n [[0, 1, 2],\n [3, 4, 5]]])\n" ] ], [ [ "We can concatenate along the first dimension, the columns.", "_____no_output_____" ] ], [ [ "x = torch.arange(9).view(3,3)\n\nprint(x)\nprint(\"---\")\nnew_x = torch.cat([x, x, x], dim=1)\nprint(new_x.shape)\nprint(new_x)", "tensor([[0, 1, 2],\n [3, 4, 5],\n [6, 7, 8]])\n---\ntorch.Size([3, 9])\ntensor([[0, 1, 2, 0, 1, 2, 0, 1, 2],\n [3, 4, 5, 3, 4, 5, 3, 4, 5],\n [6, 7, 8, 6, 7, 8, 6, 7, 8]])\n" ] ], [ [ "We can also concatenate on a new 0th dimension to \"stack\" the tensors:", "_____no_output_____" ] ], [ [ "x = torch.arange(9).view(3,3)\nprint(x)\nprint(\"---\")\nnew_x = torch.stack([x, x, x])\nprint(new_x.shape)\nprint(new_x)", "tensor([[0, 1, 2],\n [3, 4, 5],\n [6, 7, 8]])\n---\ntorch.Size([3, 3, 3])\ntensor([[[0, 1, 2],\n [3, 4, 5],\n [6, 7, 8]],\n\n [[0, 1, 2],\n [3, 4, 5],\n [6, 7, 8]],\n\n [[0, 1, 2],\n [3, 4, 5],\n [6, 7, 8]]])\n" ] ], [ [ "#### Linear Algebra Tensor Functions", "_____no_output_____", "Transposing allows you to switch the dimensions to be on different axes. So we can make it so all the rows are columns and vice versa. ", "_____no_output_____" ] ], [ [ "x = torch.arange(0, 12).view(3,4)\nprint(\"x: \\n\", x) \nprint(\"---\")\nprint(\"x.transpose(1, 0): \\n\", x.transpose(1, 0))", "x: \n tensor([[ 0, 1, 2, 3],\n [ 4, 5, 6, 7],\n [ 8, 9, 10, 11]])\n---\nx.transpose(1, 0): \n tensor([[ 0, 4, 8],\n [ 1, 5, 9],\n [ 2, 6, 10],\n [ 3, 7, 11]])\n" ] ], [ [ "A three-dimensional tensor would represent a batch of sequences, where each sequence item has a feature vector. It is common to switch the batch and sequence dimensions so that we can more easily index the sequence in a sequence model. \n\nNote: Transpose will only let you swap 2 axes. 
Permute (in the next cell) allows for multiple", "_____no_output_____" ] ], [ [ "batch_size = 3\nseq_size = 4\nfeature_size = 5\n\nx = torch.arange(batch_size * seq_size * feature_size).view(batch_size, seq_size, feature_size)\n\nprint(\"x.shape: \\n\", x.shape)\nprint(\"x: \\n\", x)\nprint(\"-----\")\n\nprint(\"x.transpose(1, 0).shape: \\n\", x.transpose(1, 0).shape)\nprint(\"x.transpose(1, 0): \\n\", x.transpose(1, 0))", "x.shape: \n torch.Size([3, 4, 5])\nx: \n tensor([[[ 0, 1, 2, 3, 4],\n [ 5, 6, 7, 8, 9],\n [10, 11, 12, 13, 14],\n [15, 16, 17, 18, 19]],\n\n [[20, 21, 22, 23, 24],\n [25, 26, 27, 28, 29],\n [30, 31, 32, 33, 34],\n [35, 36, 37, 38, 39]],\n\n [[40, 41, 42, 43, 44],\n [45, 46, 47, 48, 49],\n [50, 51, 52, 53, 54],\n [55, 56, 57, 58, 59]]])\n-----\nx.transpose(1, 0).shape: \n torch.Size([4, 3, 5])\nx.transpose(1, 0): \n tensor([[[ 0, 1, 2, 3, 4],\n [20, 21, 22, 23, 24],\n [40, 41, 42, 43, 44]],\n\n [[ 5, 6, 7, 8, 9],\n [25, 26, 27, 28, 29],\n [45, 46, 47, 48, 49]],\n\n [[10, 11, 12, 13, 14],\n [30, 31, 32, 33, 34],\n [50, 51, 52, 53, 54]],\n\n [[15, 16, 17, 18, 19],\n [35, 36, 37, 38, 39],\n [55, 56, 57, 58, 59]]])\n" ] ], [ [ "Permute is a more general version of transpose:", "_____no_output_____" ] ], [ [ "batch_size = 3\nseq_size = 4\nfeature_size = 5\n\nx = torch.arange(batch_size * seq_size * feature_size).view(batch_size, seq_size, feature_size)\n\nprint(\"x.shape: \\n\", x.shape)\nprint(\"x: \\n\", x)\nprint(\"-----\")\n\nprint(\"x.permute(1, 0, 2).shape: \\n\", x.permute(1, 0, 2).shape)\nprint(\"x.permute(1, 0, 2): \\n\", x.permute(1, 0, 2))", "x.shape: \n torch.Size([3, 4, 5])\nx: \n tensor([[[ 0, 1, 2, 3, 4],\n [ 5, 6, 7, 8, 9],\n [10, 11, 12, 13, 14],\n [15, 16, 17, 18, 19]],\n\n [[20, 21, 22, 23, 24],\n [25, 26, 27, 28, 29],\n [30, 31, 32, 33, 34],\n [35, 36, 37, 38, 39]],\n\n [[40, 41, 42, 43, 44],\n [45, 46, 47, 48, 49],\n [50, 51, 52, 53, 54],\n [55, 56, 57, 58, 59]]])\n-----\nx.permute(1, 0, 2).shape: \n torch.Size([4, 3, 5])\nx.permute(1, 0, 2): \n tensor([[[ 0, 1, 2, 3, 4],\n [20, 21, 22, 23, 24],\n [40, 41, 42, 43, 44]],\n\n [[ 5, 6, 7, 8, 9],\n [25, 26, 27, 28, 29],\n [45, 46, 47, 48, 49]],\n\n [[10, 11, 12, 13, 14],\n [30, 31, 32, 33, 34],\n [50, 51, 52, 53, 54]],\n\n [[15, 16, 17, 18, 19],\n [35, 36, 37, 38, 39],\n [55, 56, 57, 58, 59]]])\n" ] ], [ [ "Matrix multiplication is `mm`:", "_____no_output_____" ] ], [ [ "torch.randn(2, 3, requires_grad=True)", "_____no_output_____" ], [ "x1 = torch.arange(6).view(2, 3).float()\ndescribe(x1)\n\nx2 = torch.ones(3, 2)\nx2[:, 1] += 1\ndescribe(x2)\n\ndescribe(torch.mm(x1, x2))", "Type: torch.FloatTensor\nShape/size: torch.Size([2, 3])\nValues: \ntensor([[0., 1., 2.],\n [3., 4., 5.]])\nType: torch.FloatTensor\nShape/size: torch.Size([3, 2])\nValues: \ntensor([[1., 2.],\n [1., 2.],\n [1., 2.]])\nType: torch.FloatTensor\nShape/size: torch.Size([2, 2])\nValues: \ntensor([[ 3., 6.],\n [12., 24.]])\n" ], [ "x = torch.arange(0, 12).view(3,4).float()\nprint(x)\n\nx2 = torch.ones(4, 2)\nx2[:, 1] += 1\nprint(x2)\n\nprint(x.mm(x2))", "tensor([[ 0., 1., 2., 3.],\n [ 4., 5., 6., 7.],\n [ 8., 9., 10., 11.]])\ntensor([[1., 2.],\n [1., 2.],\n [1., 2.],\n [1., 2.]])\ntensor([[ 6., 12.],\n [22., 44.],\n [38., 76.]])\n" ] ], [ [ "See the [PyTorch Math Operations Documentation](https://pytorch.org/docs/stable/torch.html#math-operations) for more!", "_____no_output_____", "## Computing Gradients", "_____no_output_____" ] ], [ [ "x = torch.tensor([[2.0, 3.0]], requires_grad=True)\nz = 3 * x\nprint(z)", "tensor([[6., 9.]], 
grad_fn=<MulBackward0>)\n" ] ], [ [ "In this small snippet, you can see the gradient computations at work. We create a tensor and multiply it by 3. Then, we create a scalar output using `sum()`. A Scalar output is needed as the the loss variable. Then, called backward on the loss means it computes its rate of change with respect to the inputs. Since the scalar was created with sum, each position in z and x are independent with respect to the loss scalar. \n\nThe rate of change of x with respect to the output is just the constant 3 that we multiplied x by.", "_____no_output_____" ] ], [ [ "x = torch.tensor([[2.0, 3.0]], requires_grad=True)\nprint(\"x: \\n\", x)\nprint(\"---\")\nz = 3 * x\nprint(\"z = 3*x: \\n\", z)\nprint(\"---\")\n\nloss = z.sum()\nprint(\"loss = z.sum(): \\n\", loss)\nprint(\"---\")\n\nloss.backward()\n\nprint(\"after loss.backward(), x.grad: \\n\", x.grad)\n", "x: \n tensor([[2., 3.]], requires_grad=True)\n---\nz = 3*x: \n tensor([[6., 9.]], grad_fn=<MulBackward0>)\n---\nloss = z.sum(): \n tensor(15., grad_fn=<SumBackward0>)\n---\nafter loss.backward(), x.grad: \n tensor([[3., 3.]])\n" ] ], [ [ "### Example: Computing a conditional gradient\n\n$$ \\text{ Find the gradient of f(x) at x=1 } $$\n$$ {} $$\n$$ f(x)=\\left\\{\n\\begin{array}{ll}\n sin(x) \\text{ if } x>0 \\\\\n cos(x) \\text{ otherwise } \\\\\n\\end{array}\n\\right.$$", "_____no_output_____" ] ], [ [ "def f(x):\n if (x.data > 0).all():\n return torch.sin(x)\n else:\n return torch.cos(x)", "_____no_output_____" ], [ "x = torch.tensor([1.0], requires_grad=True)\ny = f(x)\ny.backward()\nprint(x.grad)", "tensor([0.5403])\n" ] ], [ [ "We could apply this to a larger vector too, but we need to make sure the output is a scalar:", "_____no_output_____" ] ], [ [ "x = torch.tensor([1.0, 0.5], requires_grad=True)\ny = f(x)\n# this is meant to break!\ny.backward()\nprint(x.grad)", "_____no_output_____" ] ], [ [ "Making the output a scalar:", "_____no_output_____" ] ], [ [ "x = torch.tensor([1.0, 0.5], requires_grad=True)\ny = f(x)\ny.sum().backward()\nprint(x.grad)", "tensor([0.5403, 0.8776])\n" ] ], [ [ "but there was an issue.. this isn't right for this edge case:", "_____no_output_____" ] ], [ [ "x = torch.tensor([1.0, -1], requires_grad=True)\ny = f(x)\ny.sum().backward()\nprint(x.grad)", "tensor([-0.8415, 0.8415])\n" ], [ "x = torch.tensor([-0.5, -1], requires_grad=True)\ny = f(x)\ny.sum().backward()\nprint(x.grad)", "tensor([0.4794, 0.8415])\n" ] ], [ [ "This is because we aren't doing the boolean computation and subsequent application of cos and sin on an elementwise basis. 
So, to solve this, it is common to use masking:", "_____no_output_____" ] ], [ [ "def f2(x):\n mask = torch.gt(x, 0).float()\n return mask * torch.sin(x) + (1 - mask) * torch.cos(x)\n\nx = torch.tensor([1.0, -1], requires_grad=True)\ny = f2(x)\ny.sum().backward()\nprint(x.grad)", "tensor([0.5403, 0.8415])\n" ], [ "def describe_grad(x):\n if x.grad is None:\n print(\"No gradient information\")\n else:\n print(\"Gradient: \\n{}\".format(x.grad))\n print(\"Gradient Function: {}\".format(x.grad_fn))", "_____no_output_____" ], [ "import torch\nx = torch.ones(2, 2, requires_grad=True)\ndescribe(x)\ndescribe_grad(x)\nprint(\"--------\")\n\ny = (x + 2) * (x + 5) + 3\ndescribe(y)\nz = y.mean()\ndescribe(z)\ndescribe_grad(x)\nprint(\"--------\")\nz.backward(create_graph=True, retain_graph=True)\ndescribe_grad(x)\nprint(\"--------\")\n", "Type: torch.FloatTensor\nShape/size: torch.Size([2, 2])\nValues: \ntensor([[1., 1.],\n [1., 1.]], requires_grad=True)\nNo gradient information\n--------\nType: torch.FloatTensor\nShape/size: torch.Size([2, 2])\nValues: \ntensor([[21., 21.],\n [21., 21.]], grad_fn=<AddBackward0>)\nType: torch.FloatTensor\nShape/size: torch.Size([])\nValues: \n21.0\nNo gradient information\n--------\nGradient: \ntensor([[2.2500, 2.2500],\n [2.2500, 2.2500]], grad_fn=<CloneBackward>)\nGradient Function: None\n--------\n" ], [ "x = torch.ones(2, 2, requires_grad=True)", "_____no_output_____" ], [ "y = x + 2", "_____no_output_____" ], [ "y.grad_fn", "_____no_output_____" ] ], [ [ "### CUDA Tensors", "_____no_output_____" ], [ "PyTorch's operations can seamlessly be used on the GPU or on the CPU. There are a couple basic operations for interacting in this way.", "_____no_output_____" ] ], [ [ "print(torch.cuda.is_available())", "True\n" ], [ "x = torch.rand(3,3)\ndescribe(x)", "Type: torch.FloatTensor\nShape/size: torch.Size([3, 3])\nValues: \ntensor([[0.9149, 0.3993, 0.1100],\n [0.2541, 0.4333, 0.4451],\n [0.4966, 0.7865, 0.6604]])\n" ], [ "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\nprint(device)", "cuda\n" ], [ "x = torch.rand(3, 3).to(device)\ndescribe(x)\nprint(x.device)", "Type: torch.cuda.FloatTensor\nShape/size: torch.Size([3, 3])\nValues: \ntensor([[0.1303, 0.3498, 0.3824],\n [0.8043, 0.3186, 0.2908],\n [0.4196, 0.3728, 0.3769]], device='cuda:0')\ncuda:0\n" ], [ "cpu_device = torch.device(\"cpu\")", "_____no_output_____" ], [ "# this will break!\ny = torch.rand(3, 3)\nx + y", "_____no_output_____" ], [ "y = y.to(cpu_device)\nx = x.to(cpu_device)\nx + y", "_____no_output_____" ], [ "if torch.cuda.is_available(): # only is GPU is available\n a = torch.rand(3,3).to(device='cuda:0') # CUDA Tensor\n print(a)\n \n b = torch.rand(3,3).cuda()\n print(b)\n\n print(a + b)\n\n a = a.cpu() # Error expected\n print(a + b)", "tensor([[0.5274, 0.6325, 0.0910],\n [0.2323, 0.7269, 0.1187],\n [0.3951, 0.7199, 0.7595]], device='cuda:0')\ntensor([[0.5311, 0.6449, 0.7224],\n [0.4416, 0.3634, 0.8818],\n [0.9874, 0.7316, 0.2814]], device='cuda:0')\ntensor([[1.0585, 1.2775, 0.8134],\n [0.6739, 1.0903, 1.0006],\n [1.3825, 1.4515, 1.0409]], device='cuda:0')\n" ] ], [ [ "### Exercises\n\nSome of these exercises require operations not covered in the notebook. 
You will have to look at [the documentation](https://pytorch.org/docs/) (on purpose!)\n\n\n(Answers are at the bottom)", "_____no_output_____" ], [ "#### Exercise 1\n\nCreate a 2D tensor and then add a dimension of size 1 inserted at the 0th axis.", "_____no_output_____" ], [ "#### Exercise 2\n\nRemove the extra dimension you just added to the previous tensor.", "_____no_output_____" ], [ "#### Exercise 3\n\nCreate a random tensor of shape 5x3 in the interval [3, 7)", "_____no_output_____" ], [ "#### Exercise 4\n\nCreate a tensor with values from a normal distribution (mean=0, std=1).", "_____no_output_____" ], [ "#### Exercise 5\n\nRetrieve the indices of all the nonzero elements in the tensor torch.Tensor([1, 1, 1, 0, 1]).", "_____no_output_____" ], [ "#### Exercise 6\n\nCreate a random tensor of size (3,1) and then horizontally stack 4 copies together.", "_____no_output_____" ], [ "#### Exercise 7\n\nReturn the batch matrix-matrix product of two 3-dimensional matrices (a=torch.rand(3,4,5), b=torch.rand(3,5,4)).", "_____no_output_____" ], [ "#### Exercise 8\n\nReturn the batch matrix-matrix product of a 3D matrix and a 2D matrix (a=torch.rand(3,4,5), b=torch.rand(5,4)).", "_____no_output_____" ], [ "Answers below", "_____no_output_____" ], [ "Answers still below.. Keep Going", "_____no_output_____" ], [ "#### Exercise 1\n\nCreate a 2D tensor and then add a dimension of size 1 inserted at the 0th axis.", "_____no_output_____" ] ], [ [ "a = torch.rand(3,3)\na = a.unsqueeze(0)\nprint(a)\nprint(a.shape)", "_____no_output_____" ] ], [ [ "#### Exercise 2 \n\nRemove the extra dimension you just added to the previous tensor.", "_____no_output_____" ] ], [ [ "a = a.squeeze(0)\nprint(a.shape)", "_____no_output_____" ] ], [ [ "#### Exercise 3\n\nCreate a random tensor of shape 5x3 in the interval [3, 7)", "_____no_output_____" ] ], [ [ "3 + torch.rand(5, 3) * 4", "_____no_output_____" ] ], [ [ "#### Exercise 4\n\nCreate a tensor with values from a normal distribution (mean=0, std=1).", "_____no_output_____" ] ], [ [ "a = torch.rand(3,3)\na.normal_(mean=0, std=1)", "_____no_output_____" ] ], [ [ "#### Exercise 5\n\nRetrieve the indices of all the nonzero elements in the tensor torch.Tensor([1, 1, 1, 0, 1]).", "_____no_output_____" ] ], [ [ "a = torch.Tensor([1, 1, 1, 0, 1])\ntorch.nonzero(a)", "_____no_output_____" ] ], [ [ "#### Exercise 6\n\nCreate a random tensor of size (3,1) and then horizontally stack 4 copies together.", "_____no_output_____" ] ], [ [ "a = torch.rand(3,1)\na.expand(3,4)", "_____no_output_____" ] ], [ [ "#### Exercise 7\n\nReturn the batch matrix-matrix product of two 3-dimensional matrices (a=torch.rand(3,4,5), b=torch.rand(3,5,4)).", "_____no_output_____" ] ], [ [ "a = torch.rand(3,4,5)\nb = torch.rand(3,5,4)\ntorch.bmm(a, b)", "_____no_output_____" ] ], [ [ "#### Exercise 8\n\nReturn the batch matrix-matrix product of a 3D matrix and a 2D matrix (a=torch.rand(3,4,5), b=torch.rand(5,4)).", "_____no_output_____" ] ], [ [ "a = torch.rand(3,4,5)\nb = torch.rand(5,4)\ntorch.bmm(a, b.unsqueeze(0).expand(a.size(0), *b.size()))", "_____no_output_____" ] ], [ [ "### END", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
4a3b273fff0e4cf2d36009fe87da13346875a237
94,799
ipynb
Jupyter Notebook
ML Course/SVC.ipynb
Ashleshk/Machine-Learning-Data-Science-Deep-Learning
03357ab98155bf73b8f1d2fd53255cc16bea2333
[ "MIT" ]
1
2020-05-24T06:55:31.000Z
2020-05-24T06:55:31.000Z
ML Course/SVC.ipynb
Ashleshk/Machine-Learning-Data-Science-Deep-Learning
03357ab98155bf73b8f1d2fd53255cc16bea2333
[ "MIT" ]
null
null
null
ML Course/SVC.ipynb
Ashleshk/Machine-Learning-Data-Science-Deep-Learning
03357ab98155bf73b8f1d2fd53255cc16bea2333
[ "MIT" ]
null
null
null
390.119342
33,176
0.938385
[ [ [ "# Support Vector Machines", "_____no_output_____" ], [ "Let's create the same fake income / age clustered data that we used for our K-Means clustering example:", "_____no_output_____" ] ], [ [ "import numpy as np\n\n#Create fake income/age clusters for N people in k clusters\ndef createClusteredData(N, k):\n np.random.seed(1234)\n pointsPerCluster = float(N)/k\n X = []\n y = []\n for i in range (k):\n incomeCentroid = np.random.uniform(20000.0, 200000.0)\n ageCentroid = np.random.uniform(20.0, 70.0)\n for j in range(int(pointsPerCluster)):\n X.append([np.random.normal(incomeCentroid, 10000.0), np.random.normal(ageCentroid, 2.0)])\n y.append(i)\n X = np.array(X)\n y = np.array(y)\n return X, y", "_____no_output_____" ], [ "%matplotlib inline\nfrom pylab import *\nfrom sklearn.preprocessing import MinMaxScaler\n\n(X, y) = createClusteredData(100, 5)\n\nplt.figure(figsize=(8, 6))\nplt.scatter(X[:,0], X[:,1], c=y.astype(np.float))\nplt.show()\n\nscaling = MinMaxScaler(feature_range=(-1,1)).fit(X)\nX = scaling.transform(X)\n\nplt.figure(figsize=(8, 6))\nplt.scatter(X[:,0], X[:,1], c=y.astype(np.float))\nplt.show()", "_____no_output_____" ] ], [ [ "Now we'll use linear SVC to partition our graph into clusters:", "_____no_output_____" ] ], [ [ "from sklearn import svm, datasets\n\nC = 1.0\nsvc = svm.SVC(kernel='linear', C=C).fit(X, y)", "_____no_output_____" ] ], [ [ "By setting up a dense mesh of points in the grid and classifying all of them, we can render the regions of each cluster as distinct colors:", "_____no_output_____" ] ], [ [ "def plotPredictions(clf):\n # Create a dense grid of points to sample \n xx, yy = np.meshgrid(np.arange(-1, 1, .001),\n np.arange(-1, 1, .001))\n \n # Convert to Numpy arrays\n npx = xx.ravel()\n npy = yy.ravel()\n \n # Convert to a list of 2D (income, age) points\n samplePoints = np.c_[npx, npy]\n \n # Generate predicted labels (cluster numbers) for each point\n Z = clf.predict(samplePoints)\n\n plt.figure(figsize=(8, 6))\n Z = Z.reshape(xx.shape) #Reshape results to match xx dimension\n plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.8) # Draw the contour\n plt.scatter(X[:,0], X[:,1], c=y.astype(np.float)) # Draw the points\n plt.show()\n \nplotPredictions(svc)", "_____no_output_____" ] ], [ [ "Or just use predict for a given point:", "_____no_output_____" ] ], [ [ "print(svc.predict(scaling.transform([[200000, 40]])))", "[3]\n" ], [ "print(svc.predict(scaling.transform([[50000, 65]])))", "[2]\n" ] ], [ [ "## Activity", "_____no_output_____" ], [ "\"Linear\" is one of many kernels scikit-learn supports on SVC. Look up the documentation for scikit-learn online to find out what the other possible kernel options are. Do any of them work well for this data set?", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ] ]
4a3b27d90a280218c265e6a969dbf4a3b6ec93c0
38,590
ipynb
Jupyter Notebook
Colab Notebooks/run autoencoders.ipynb
gunjanmahindre/Pre-training-Oracle
231c5aca8984e87ba8b2713f1acad70bd375e74b
[ "MIT" ]
1
2021-01-27T18:16:49.000Z
2021-01-27T18:16:49.000Z
Colab Notebooks/run autoencoders.ipynb
gunjanmahindre/Pre-training-Oracle
231c5aca8984e87ba8b2713f1acad70bd375e74b
[ "MIT" ]
null
null
null
Colab Notebooks/run autoencoders.ipynb
gunjanmahindre/Pre-training-Oracle
231c5aca8984e87ba8b2713f1acad70bd375e74b
[ "MIT" ]
null
null
null
38,590
38,590
0.821223
[ [ [ "from google.colab import drive\r\ndrive.mount('/content/drive')", "Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount(\"/content/drive\", force_remount=True).\n" ], [ "import tensorflow as tf", "_____no_output_____" ], [ "# run on training variation of powerlaw:\n# path for fine tuning: !python3 \"/content/drive/MyDrive/PhD work/Projects/parameter estimation/Window method Supervised autoencoder with fine tuning/script.py\"\n# path for stage 1: !python3 \"/content/drive/MyDrive/PhD work/Projects/parameter estimation/Window_method_Supervised_autoencoder/script.py\"\n# path for stage 2: !python3 \"/content/drive/MyDrive/PhD work/Projects/parameter estimation/stage 2 - window method/script.py\" ", "_____no_output_____" ], [ "2+3", "_____no_output_____" ], [ "# ", "_____no_output_____" ], [ "# run trivial test results:----------------------------------\r\n# trivial 0 : !python3 \"/content/drive/MyDrive/PhD work/Projects/parameter estimation/trivial tests/trivial 0/script.py\"\r\n# trivial 1: !python3 \"/content/drive/MyDrive/PhD work/Projects/parameter estimation/trivial tests/trivial 1/script.py\"\r\n# train on just observed: !python3 \"/content/drive/MyDrive/PhD work/Projects/parameter estimation/trivial tests/train on only observed entries/script.py\" ", "_____no_output_____" ], [ "# run codes for train bombing network:\r\n# stage 0: !python3 \"/content/drive/MyDrive/PhD work/Projects/parameter estimation/codes for Rasika/stage 1/script.py\"\r\n# stage 1:\r\n# stage 2:", "_____no_output_____" ], [ "!python3 \"/content/drive/MyDrive/PhD work/Projects/parameter estimation/trivial tests/trivial 1/script.py\"", "here 0\n2020-12-23 21:42:01.547675: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.1\nhere 1\ncurrent window:---------------------------------------------------------------------------------------- [12, 13, 14]\nDirectory '12_13_14' created\nRESULTS FOR SUPERVISED AUTOENCODERS \nhere 2\ninside main_code\ntraining parameters: 12 13 14\n[[0. 1. 1. ... 5. 5. 5.]\n [1. 0. 2. ... 6. 6. 6.]\n [1. 2. 0. ... 6. 6. 6.]\n ...\n [5. 6. 6. ... 0. 2. 2.]\n [5. 6. 6. ... 2. 0. 2.]\n [5. 6. 6. ... 2. 2. 0.]]\n[[0. 1. 1. ... 5. 5. 5.]\n [1. 0. 2. ... 6. 6. 6.]\n [1. 2. 0. ... 6. 6. 6.]\n ...\n [5. 6. 6. ... 0. 2. 2.]\n [5. 6. 6. ... 2. 0. 2.]\n [5. 6. 6. ... 2. 2. 
0.]]\n(4039, 4039)\n2020-12-23 21:42:17.287267: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set\n2020-12-23 21:42:17.288674: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcuda.so.1\n2020-12-23 21:42:17.344080: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2020-12-23 21:42:17.344753: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties: \npciBusID: 0000:00:04.0 name: Tesla T4 computeCapability: 7.5\ncoreClock: 1.59GHz coreCount: 40 deviceMemorySize: 14.73GiB deviceMemoryBandwidth: 298.08GiB/s\n2020-12-23 21:42:17.344824: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.1\n2020-12-23 21:42:17.562009: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.10\n2020-12-23 21:42:17.562146: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.10\n2020-12-23 21:42:17.691320: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10\n2020-12-23 21:42:17.713326: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10\n2020-12-23 21:42:17.991255: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10\n2020-12-23 21:42:18.010927: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.10\n2020-12-23 21:42:18.512530: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.7\n2020-12-23 21:42:18.512793: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2020-12-23 21:42:18.513446: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2020-12-23 21:42:18.516985: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0\n2020-12-23 21:42:18.517513: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set\n2020-12-23 21:42:18.517665: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2020-12-23 21:42:18.518239: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties: \npciBusID: 0000:00:04.0 name: Tesla T4 computeCapability: 7.5\ncoreClock: 1.59GHz coreCount: 40 deviceMemorySize: 14.73GiB deviceMemoryBandwidth: 298.08GiB/s\n2020-12-23 21:42:18.518291: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.1\n2020-12-23 21:42:18.518409: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.10\n2020-12-23 21:42:18.518497: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully 
opened dynamic library libcublasLt.so.10\n2020-12-23 21:42:18.518529: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10\n2020-12-23 21:42:18.518579: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10\n2020-12-23 21:42:18.518605: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10\n2020-12-23 21:42:18.518631: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.10\n2020-12-23 21:42:18.518657: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.7\n2020-12-23 21:42:18.518752: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2020-12-23 21:42:18.519363: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2020-12-23 21:42:18.519928: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0\n2020-12-23 21:42:18.522304: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.1\n2020-12-23 21:42:22.513210: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1261] Device interconnect StreamExecutor with strength 1 edge matrix:\n2020-12-23 21:42:22.513276: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1267] 0 \n2020-12-23 21:42:22.513296: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1280] 0: N \n2020-12-23 21:42:22.519907: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2020-12-23 21:42:22.520615: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2020-12-23 21:42:22.521238: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2020-12-23 21:42:22.521803: W tensorflow/core/common_runtime/gpu/gpu_bfc_allocator.cc:39] Overriding allow_growth setting because the TF_FORCE_GPU_ALLOW_GROWTH environment variable is set. Original config value was 0.\n2020-12-23 21:42:22.521878: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1406] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 13960 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5)\ninside session\n2020-12-23 21:42:22.703498: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:196] None of the MLIR optimization passes are enabled (registered 0 passes)\n2020-12-23 21:42:22.724973: I tensorflow/core/platform/profile_utils/cpu_utils.cc:112] CPU Frequency: 2200000000 Hz\n-------------- Calculating error only for unobserved entries--------------------\n72.92454008469012 2.6920410106645694 1.1942564891421967 1.1942564891421967\nFraction-------------------------------- 99.5\ninside main_code\ntraining parameters: 12 13 14\n[[0. 1. 1. ... 5. 5. 5.]\n [1. 0. 2. ... 
6. 6. 6.]\n [1. 2. 0. ... 6. 6. 6.]\n ...\n [5. 6. 6. ... 0. 2. 2.]\n [5. 6. 6. ... 2. 0. 2.]\n [5. 6. 6. ... 2. 2. 0.]]\n[[0. 1. 1. ... 5. 5. 5.]\n [1. 0. 2. ... 6. 6. 6.]\n [1. 2. 0. ... 6. 6. 6.]\n ...\n [5. 6. 6. ... 0. 2. 2.]\n [5. 6. 6. ... 2. 0. 2.]\n [5. 6. 6. ... 2. 2. 0.]]\n(4039, 4039)\n2020-12-23 22:46:53.721240: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set\n2020-12-23 22:46:53.721707: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2020-12-23 22:46:53.722499: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1720] Found device 0 with properties: \npciBusID: 0000:00:04.0 name: Tesla T4 computeCapability: 7.5\ncoreClock: 1.59GHz coreCount: 40 deviceMemorySize: 14.73GiB deviceMemoryBandwidth: 298.08GiB/s\n2020-12-23 22:46:53.722596: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.1\n2020-12-23 22:46:53.722676: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublas.so.10\n2020-12-23 22:46:53.722700: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcublasLt.so.10\n2020-12-23 22:46:53.722722: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcufft.so.10\n2020-12-23 22:46:53.722747: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcurand.so.10\n2020-12-23 22:46:53.722767: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusolver.so.10\n2020-12-23 22:46:53.722788: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcusparse.so.10\n2020-12-23 22:46:53.722808: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudnn.so.7\n2020-12-23 22:46:53.722906: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2020-12-23 22:46:53.723470: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2020-12-23 22:46:53.723983: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1862] Adding visible gpu devices: 0\n2020-12-23 22:46:53.724218: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1261] Device interconnect StreamExecutor with strength 1 edge matrix:\n2020-12-23 22:46:53.724237: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1267] 0 \n2020-12-23 22:46:53.724247: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1280] 0: N \n2020-12-23 22:46:53.724487: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2020-12-23 22:46:53.725046: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:941] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero\n2020-12-23 22:46:53.725548: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1406] Created TensorFlow device 
(/job:localhost/replica:0/task:0/device:GPU:0 with 13960 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:00:04.0, compute capability: 7.5)\ninside session\n-------------- Calculating error only for unobserved entries--------------------\n72.92481964127815 2.6920844922483975 1.1942454356566796 1.1942454356566796\nFraction-------------------------------- 99.9\n[99.5, 99.9]\n" ], [ "-------------- Calculating error only for unobserved entries--------------------\r\n72.93460370607394 2.691420013784058 1.194878183344078 1.194878183344078\r\nFraction-------------------------------- 40\r\n\r\n-------------- Calculating error only for unobserved entries--------------------\r\n72.95451892710567 2.6908033993328098 1.1939517644297712 1.1939517644297712\r\nFraction-------------------------------- 20\r\ninside main_code\r\n\r\n-------------- Calculating error only for unobserved entries--------------------\r\n72.92523846562295 2.6921294640853106 1.1942552286744916 1.1942552286744916\r\nFraction-------------------------------- 99\r\ninside main_code\r\n\r\n-------------- Calculating error only for unobserved entries--------------------\r\n72.92569927908839 2.692057610346308 1.1943441352077104 1.1943441352077104\r\nFraction-------------------------------- 90\r\ninside main_code\r\n\r\n-------------- Calculating error only for unobserved entries--------------------\r\n72.92596729008994 2.69190896794981 1.194432209824622 1.194432209824622\r\nFraction-------------------------------- 80\r\ninside main_code\r\n\r\n-------------- Calculating error only for unobserved entries--------------------\r\n72.93001407568013 2.6919051981735547 1.194144671184917 1.194144671184917\r\nFraction-------------------------------- 60\r\ninside main_code\r\n\r\n", "_____no_output_____" ], [ "", "_____no_output_____" ], [ "", "_____no_output_____" ], [ "# run protein network ", "_____no_output_____" ], [ "# try collaboration network", "_____no_output_____" ], [ "cost3 = [158.96724, 98.78402, 74.9231, 64.00145, 58.63235, 55.861347, 54.366524, 53.519352, 53.006138, 52.66433, 52.408646, 52.192135, 51.986546, 51.774937, 51.549442, 51.309853, 51.061428, 50.811607, 50.565613, 50.329323, 58.723026, 58.361225, 58.161457, 58.010284, 57.86779, 57.714756, 57.539577, 57.335247, 57.09924, 56.833363, 56.542717, 56.234505, 55.917282, 55.598743, 55.28897, 54.9952, 54.72235, 54.47319, 54.24661, 54.041183, 61.436134, 60.774326, 60.399055, 60.09885, 59.79915, 59.45637, 59.046936, 58.57618, 58.07581, 57.58582, 57.134533, 56.730938, 56.37296, 56.049274, 55.762352, 55.497215, 55.25945, 55.037453, 54.838146, 54.65893]\ncost2 = [158.96724, 98.78402, 74.9231, 64.00145, 58.63235, 55.861347, 54.366524, 53.519352, 53.006138, 52.66433, 52.408646, 52.192135, 51.986546, 51.774937, 51.549442, 51.309853, 51.061428, 50.811607, 50.565613, 50.329323, 58.723026, 58.361225, 58.161457, 58.010284, 57.86779, 57.714756, 57.539577, 57.335247, 57.09924, 56.833363, 56.542717, 56.234505, 55.917282, 55.598743, 55.28897, 54.9952, 54.72235, 54.47319, 54.24661, 54.041183]\ncost1 = [158.96724, 98.78402, 74.9231, 64.00145, 58.63235, 55.861347, 54.366524, 53.519352, 53.006138, 52.66433, 52.408646, 52.192135, 51.986546, 51.774937, 51.549442, 51.309853, 51.061428, 50.811607, 50.565613, 50.329323]\n\nimport matplotlib.pyplot as plt\nplt.plot(cost1, label = 'nw1')\n# plt.plot(cost2, label = 'nw2')\n# plt.plot(cost3, label = 'nw3')\nplt.xlabel('number of iterations')\nplt.ylabel('cost')\nplt.title('Cost value vs Iterations for various training 
sessions')\nplt.legend()\nplt.show()", "_____no_output_____" ], [ "# run for facebook:\n", "_____no_output_____" ], [ "!python3 \"/content/drive/MyDrive/PhD work/Projects/parameter estimation/codes for Rasika/stage 1/script.py\"", "2020-12-05 20:39:20.817442: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1\ncurrent window:---------------------------------------------------------------------------------------- [40, 41, 42]\nDirectory '40_41_42' created\nRESULTS FOR SUPERVISED AUTOENCODERS \ninside main_code\n--------------------------Using tensorflow 2.0.0-----------------------------\nTraceback (most recent call last):\n File \"/content/drive/MyDrive/PhD work/Projects/parameter estimation/codes for Rasika/stage 1/script.py\", line 100, in <module>\n [mean_err, abs_err, mean_std, abs_std] = main_code(fraction, w)\n File \"/content/drive/MyDrive/PhD work/Projects/parameter estimation/codes for Rasika/stage 1/RobustDeepAutoencoder.py\", line 188, in main_code\n data_test = np.loadtxt(path)\n File \"/usr/local/lib/python3.6/dist-packages/numpy/lib/npyio.py\", line 981, in loadtxt\n fh = np.lib._datasource.open(fname, 'rt', encoding=encoding)\n File \"/usr/local/lib/python3.6/dist-packages/numpy/lib/_datasource.py\", line 269, in open\n return ds.open(path, mode, encoding=encoding, newline=newline)\n File \"/usr/local/lib/python3.6/dist-packages/numpy/lib/_datasource.py\", line 623, in open\n raise IOError(\"%s not found.\" % path)\nOSError: /content/drive/MyDrive/PhD work/data/undirected networks/facebook/dHp.txt not found.\n" ], [ "", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a3b2f1a7488652d0cf87208905a2fb3c30d67bc
32,535
ipynb
Jupyter Notebook
data-wrangling/python-pandas-consecutive-streaks.ipynb
JoseParrenoGarcia/Tips-on-Python-and-sklearn
6441b85627a97955d174d62bce396ecb4c3539b8
[ "MIT" ]
null
null
null
data-wrangling/python-pandas-consecutive-streaks.ipynb
JoseParrenoGarcia/Tips-on-Python-and-sklearn
6441b85627a97955d174d62bce396ecb4c3539b8
[ "MIT" ]
null
null
null
data-wrangling/python-pandas-consecutive-streaks.ipynb
JoseParrenoGarcia/Tips-on-Python-and-sklearn
6441b85627a97955d174d62bce396ecb4c3539b8
[ "MIT" ]
null
null
null
67.640333
2,647
0.423083
[ [ [ "import pandas as pd\n\nfrom pandasql import sqldf\nmysql = lambda q: sqldf(q, globals())", "_____no_output_____" ] ], [ [ "# Group an ID by consecutive dates\nCalculate the number of consecutive days for a given ID. If there is a gap of days for an ID, we should capture both streaks as different rows", "_____no_output_____" ] ], [ [ "df1 = pd.DataFrame({'ID': [1, 1, 1, 1, 2, 2, 2, 2],\n 'Date': ['2017-01-07', '2017-01-08', '2017-01-09', '2017-01-23',\n '2017-01-05', '2017-01-06', '2017-01-10', '2017-01-11']\n })\ndf1['Date'] = pd.to_datetime(df1['Date'])\ndf1", "_____no_output_____" ] ], [ [ "#### PYTHON - Method 1: using diff for datetime datatype", "_____no_output_____" ] ], [ [ "# In order for SHIFT to work properly, we would probably need to sort the dataframe\n\n# 1. Is there more than 1 day difference with the previous day? (use the not equal method ne(1))\ndf1['is_there_more_than_one_day_difference'] = df1.groupby('ID')['Date'].diff().dt.days.ne(1)\n\n# 2. Group the booleans by using cumsum()\ndf1['streak_id'] = df1['is_there_more_than_one_day_difference'].cumsum()\n\n# Calculate the size of each grouped_streaks by ID\ndf1['streak_size_days'] = df1.groupby(['ID', 'streak_id'])['streak_id'].transform('size')\n\n# With this we could extract, for each ID, what is the longest streak\ndf1['longest_streak_rank'] = df1.groupby('ID')['streak_size_days'].rank(method='dense', ascending=False)\ndf1", "_____no_output_____" ], [ "df1[['ID', 'streak_size_days', 'longest_streak_rank']].drop_duplicates().sort_values(['ID','longest_streak_rank'])", "_____no_output_____" ] ], [ [ "#### SQL", "_____no_output_____" ] ], [ [ "df1 = pd.DataFrame({'ID': [1, 1, 1, 1, 2, 2, 2, 2],\n 'Date': ['2017-01-07', '2017-01-08', '2017-01-09', '2017-01-23',\n '2017-01-05', '2017-01-06', '2017-01-10', '2017-01-11']\n })\ndf1['Date'] = pd.to_datetime(df1['Date'])", "_____no_output_____" ], [ "# In order for LAG to work properly, we would probably need to sort the dataframe\nquery = \"WITH previous_date_df AS (\" \\\n \"SELECT ID, \" \\\n \" Date AS first_date, \" \\\n \" COALESCE(LAG(Date) OVER (PARTITION BY ID ORDER BY Date), Date) AS previous_date \" \\\n \"FROM df1), \" \\\n \"date_difference_is_not_one_df AS (\" \\\n \"SELECT *, \" \\\n \" CASE WHEN (julianday(first_date) - (julianday(previous_date))) != 1 THEN True ELSE False END AS is_there_more_than_one_day_difference \" \\\n \"FROM previous_date_df), \" \\\n \"grouped_streaks_df AS (\" \\\n \"SELECT *, \" \\\n \" SUM(is_there_more_than_one_day_difference) OVER (ORDER BY ID, first_date ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) as streak_id \" \\\n \"FROM date_difference_is_not_one_df) \" \\\n \"SELECT ID, streak_id, COUNT(*) AS streak_size_days FROM grouped_streaks_df GROUP BY ID, streak_id \"\n\nmysql(query)", "_____no_output_____" ] ], [ [ "# Groupby an ID by consecutive events\nFor example, wins and losses", "_____no_output_____" ] ], [ [ "df2 = pd.DataFrame({'Group':['A','A', 'A','A','A','A','B','B','B','B','B','B','B'],\n 'Score':['win', 'loss', 'loss', 'loss', 'win', 'win', 'win', 'win', 'win', 'loss', 'win', 'loss', 'loss']})\ndf2", "_____no_output_____" ] ], [ [ "#### PYTHON - Overall win streak", "_____no_output_____" ] ], [ [ "# 1. Extract previous score by using the shift() method\ndf2['previous_score'] = df2['Score'].shift(periods=1)\n\n# 2. Compare if they are not equal\ndf2['is_score_not_equal_to_previous'] = df2['Score'] != df2['previous_score']\n\n# 3. 
Calculate the grouped score streaks by using cumsum() and the booleans from is_score_not_equal_to_previous\ndf2['streak_id'] = df2['is_score_not_equal_to_previous'].cumsum()\n\n# 4. Calculate the streaks\ndf2['cumulative_streaks'] = df2.groupby('streak_id')['Score'].cumcount()+1\n\ndf2", "_____no_output_____" ] ], [ [ "#### SQL - Overall win streak", "_____no_output_____" ] ], [ [ "df2 = pd.DataFrame({'ID': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13],\n 'Group_':['A','A', 'A','A','A','A','B','B','B','B','B','B','B'],\n 'Score':['win', 'loss', 'loss', 'loss', 'win', 'win', 'win', 'win', 'win', 'loss', 'win', 'loss', 'loss']})\n\n# In order for LAG to work properly, we would probably need to sort the dataframe\nquery = \"WITH previous_score_df AS (\" \\\n \"SELECT ID, \" \\\n \" Score AS first_score, \" \\\n \" COALESCE(LAG(Score) OVER (ORDER BY ID, Score), Score) AS previous_score \" \\\n \"FROM df2), \" \\\n \"are_scores_equal_df AS (\" \\\n \"SELECT *, \" \\\n \" CASE WHEN first_score != previous_score THEN True ELSE False END AS is_previous_score_equal \" \\\n \"FROM previous_score_df), \" \\\n \"grouped_streaks_df AS (\" \\\n \"SELECT *, \" \\\n \" SUM(is_previous_score_equal) OVER (ORDER BY ID ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) as streak_id \" \\\n \"FROM are_scores_equal_df) \" \\\n \"SELECT first_score, streak_id, COUNT(*) AS streak_size_days \" \\\n \"FROM grouped_streaks_df \" \\\n \"GROUP BY first_score, streak_id \" \\\n \"ORDER BY streak_id\"\n\nmysql(query)", "_____no_output_____" ] ], [ [ "#### PYTHON - Win streak by group", "_____no_output_____" ] ], [ [ "df2 = pd.DataFrame({'Group':['A','A', 'A','A','A','A','B','B','B','B','B','B','B'],\n 'Score':['win', 'loss', 'loss', 'loss', 'win', 'win', 'win', 'win', 'win', 'loss', 'win', 'loss', 'loss']})\n\n# 1. Extract previous score by using the shift() method\ndf2['previous_score'] = df2.groupby(['Group'])['Score'].shift(periods=1)\n\n# 2. Compare if they are not equal (note: despite its name, this column flags score changes)\ndf2['is_score_equal_to_previous'] = df2['Score'] != df2['previous_score']\n\n# 3. Calculate the grouped score streaks by using cumsum() on the change flags\ndf2['equal_grouped_scores'] = df2['is_score_equal_to_previous'].cumsum()\n\n# 4. Calculate the streaks\ndf2['streaks'] = df2.groupby('equal_grouped_scores')['Score'].cumcount()+1\n\ndf2", "_____no_output_____" ] ], [ [ "#### SQL - Win streak by group", "_____no_output_____" ] ], [ [ "df2 = pd.DataFrame({'ID': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13],\n 'Group_':['A','A', 'A','A','A','A','B','B','B','B','B','B','B'],\n 'Score':['win', 'loss', 'loss', 'loss', 'win', 'win', 'win', 'win', 'win', 'loss', 'win', 'loss', 'loss']})\n\n# In order for LAG to work properly, we would probably need to sort the dataframe\nquery = \"WITH previous_score_df AS (\" \\\n \"SELECT ID, \" \\\n \" Group_, \" \\\n \" Score AS first_score, \" \\\n \" COALESCE(LAG(Score) OVER (PARTITION BY Group_ ORDER BY ID, Score), Score) AS previous_score \" \\\n \"FROM df2), \" \\\n \"are_scores_equal_df AS (\" \\\n \"SELECT *, \" \\\n \" CASE WHEN first_score != previous_score THEN True ELSE False END AS is_previous_score_equal \" \\\n \"FROM previous_score_df), \" \\\n \"grouped_streaks_df AS (\" \\\n \"SELECT *, \" \\\n \" SUM(is_previous_score_equal) OVER (ORDER BY ID, Group_ ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) as streak_id \" \\\n \"FROM are_scores_equal_df) \" \\\n \"SELECT Group_, first_score, streak_id, COUNT(*) AS streak_size_days \" \\\n \"FROM grouped_streaks_df \" \\\n \"GROUP BY Group_, first_score, streak_id \" \\\n \"ORDER BY streak_id\"\n\nmysql(query)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
4a3b442f8dbd459ff7cae89bdecd720aee563ab4
13,496
ipynb
Jupyter Notebook
ML/NearestNeighbour/NearestNeighbor_SpineData.ipynb
Silver-Creek/Lessons
0be4c4d32cc50d79cac093ad5f740228642fd46e
[ "MIT" ]
null
null
null
ML/NearestNeighbour/NearestNeighbor_SpineData.ipynb
Silver-Creek/Lessons
0be4c4d32cc50d79cac093ad5f740228642fd46e
[ "MIT" ]
null
null
null
ML/NearestNeighbour/NearestNeighbor_SpineData.ipynb
Silver-Creek/Lessons
0be4c4d32cc50d79cac093ad5f740228642fd46e
[ "MIT" ]
null
null
null
29.858407
394
0.560981
[ [ [ "<a href=\"https://colab.research.google.com/github/MHadavand/Lessons/blob/master/ML/NearestNeighbour/Nearest_neighbor_spine.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# Nearest neighbor for spine injury classification", "_____no_output_____" ], [ "In this homework notebook we use **nearest neighbor classification** to classify back injuries for patients in a hospital, based on measurements of the shape and orientation of their pelvis and spine.\n\nThe data set contains information from **310** patients. For each patient, there are: six measurements (the x i.e. features) and a label (the y). The label has **3** possible values, `’NO’` (normal), `’DH’` (herniated disk), or `’SL’` (spondilolysthesis). \n\nCredits: Edx Machine Learning Fundamentals", "_____no_output_____" ], [ "# 1. Setup notebook", "_____no_output_____" ] ], [ [ "import numpy as np", "_____no_output_____" ] ], [ [ "Load the data set and divide the data into a training set of 248 patients and a separate test set of 62 patients. The following arrays are created:\n\n* **`trainx`** : The training data's features, one point per row.\n* **`trainy`** : The training data's labels.\n* **`testx`** : The test data's features, one point per row.\n* **`testy`** : The test data's labels.\n\nWe will use the training set (`trainx` and `trainy`), with nearest neighbor classification, to predict labels for the test data (`testx`). We will then compare these predictions with the correct labels, `testy`.", "_____no_output_____" ], [ "Notice that we code the three labels as `0. = ’NO’, 1. = ’DH’, 2. = ’SL’`.", "_____no_output_____" ] ], [ [ "# Load data set and code labels as 0 = ’NO’, 1 = ’DH’, 2 = ’SL’\nlabels = [b'NO', b'DH', b'SL']\ndata = np.loadtxt('../Data/NN_Spine/column_3C.dat', converters={6: lambda s: labels.index(s)} )\n\n# Separate features from labels\nx = data[:,0:6]\ny = data[:,6]\n\n# Divide into training and test set\ntraining_indices = list(range(0,20)) + list(range(40,188)) + list(range(230,310))\ntest_indices = list(range(20,40)) + list(range(188,230))\n\ntrainx = x[training_indices,:]\ntrainy = y[training_indices]\ntestx = x[test_indices,:]\ntesty = y[test_indices]", "_____no_output_____" ] ], [ [ "## 2. Nearest neighbor classification with L2 (*Euclidean*) distance\n\nA brute forces nearest neighbor implementation based on Euclidean distance between a test feature and entire training data set", "_____no_output_____" ] ], [ [ "def get_l_dist(x,y, p=2):\n '''\n computes P normed distance between two arrays\n '''\n \n return (np.sum(abs(x-y)**p))**(1/p)\n\ndef NN_Euclidean(trainx, trainy, testx):\n \n '''\n A naive algorithm to find the nearest neighbor without any sorting \n '''\n \n if len(testx.shape)> 1: # Recursive call\n return np.array(list(map(lambda test_item: NN_Euclidean(trainx, trainy, test_item), testx)))\n \n distances = [get_l_dist(trainx_instance, testx) for trainx_instance in trainx]\n \n return trainy[np.argmin(distances)]", "_____no_output_____" ], [ "testy_L2 = NN_Euclidean(trainx, trainy, testx)\n## Compute the accuracy\naccuracy = np.equal(testy_L2, testy)\naccuracy = float(np.sum(accuracy))/len(testy)\n\nprint(\"Accuracy of nearest neighbor classifier (Euclidean): %{:.2f}\".format(accuracy*100))", "Accuracy of nearest neighbor classifier (Euclidean): %79.03\n" ] ], [ [ "# 3. 
"# 3. Nearest neighbor classification with L1 (Manhattan) distance", "_____no_output_____" ], [ "We now compute nearest neighbors using the L1 distance (sometimes called the *Manhattan distance*).\n\nThe function **NN_Manhattan** below again takes as input the arrays `trainx`, `trainy`, and `testx`, and predicts labels for the test points using 1-nearest neighbor classification, this time with the L1 distance metric. As before, the predicted labels are returned in a `numpy` array with one entry per test point.\n\nNotice that **NN_Manhattan** and **NN_Euclidean** may well produce different predictions on the test set.", "_____no_output_____" ] ], [ [ "def NN_Manhattan(trainx, trainy, testx):\n \n '''\n A naive algorithm to find the nearest neighbor without any sorting\n '''\n \n if len(testx.shape) > 1: # Recursive call\n return np.array(list(map(lambda test_item: NN_Manhattan(trainx, trainy, test_item), testx)))\n \n distances = [get_l_dist(trainx_instance, testx, p=1) for trainx_instance in trainx]\n \n return trainy[np.argmin(distances)]", "_____no_output_____" ], [ "testy_L1 = NN_Manhattan(trainx, trainy, testx)\n## Compute the accuracy\naccuracy = np.equal(testy_L1, testy)\naccuracy = float(np.sum(accuracy))/len(testy)\n\nprint(\"Accuracy of nearest neighbor classifier (Manhattan): %{:.2f}\".format(accuracy*100))", "Accuracy of nearest neighbor classifier (Manhattan): %77.42\n" ] ], [ [ "# 4. Confusion matrix", "_____no_output_____" ], [ "In order to have a more in-depth comparison between the two distance functions, we can use the <font color=\"blue\">*confusion matrix*</font> that is often used to evaluate a classifier.\n\nWe will now look a bit more deeply into the specific types of errors made by nearest neighbor classification, by constructing the confusion matrix.\n\n| | NO | DH | SL |\n| ------------- |:-------------:| -----:|-----:|\n| NO | | | |\n| DH | | | |\n| SL | | | |\n\nSince there are three labels, the confusion matrix is a 3x3 matrix whose rows correspond to the true label and whose columns correspond to the predicted label. For example, the entry at row DH, column SL, contains the number of test points whose correct label was DH but which were classified as SL.", "_____no_output_____" ] ], [ [ "def confusion_matrix(testy, testy_fit):\n dict_label = {0: 'NO', 1: 'DH', 2: 'SL'}\n n_label = len(dict_label)\n output = np.zeros((n_label,n_label))\n \n if len(testy) != len(testy_fit):\n raise ValueError('The two data sets must have the same length')\n \n for predict, true in zip(testy_fit, testy):\n output[int(true), int(predict)] += 1\n \n return output", "_____no_output_____" ], [ "L1_cm = confusion_matrix(testy, testy_L1)\nL2_cm = confusion_matrix(testy, testy_L2)\n\nprint( 'Confusion matrix nearest neighbor classifier (Manhattan):\\n', L1_cm )\n\nprint( 'Confusion matrix nearest neighbor classifier (Euclidean):\\n', L2_cm )", "Confusion matrix nearest neighbor classifier (Manhattan):\n [[16. 2. 2.]\n [10. 10. 0.]\n [ 0. 0. 22.]]\nConfusion matrix nearest neighbor classifier (Euclidean):\n [[17. 1. 2.]\n [10. 10. 0.]\n [ 0. 0. 22.]]\n" ] ], [ [ "# 5. Some observations from the results\n\n* DH was the label **most frequently** misclassified by the L1-based nearest neighbor classifier.\n* SL was **never** misclassified by the L2-based nearest neighbor classifier.\n* Only one test point had a different prediction between the two classification schemes (based on L1 and L2 distance).", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ] ]
4a3b5aca7f5dd6c3762f82acb85a670cc1f2938e
18,304
ipynb
Jupyter Notebook
tutorials/Getting_Started.ipynb
ljw23/ConvLab-2
13d48ea0e441701bd66100689b6c25b561f15525
[ "Apache-2.0" ]
339
2020-03-04T09:43:22.000Z
2022-03-26T17:27:38.000Z
tutorials/Getting_Started.ipynb
ljw23/ConvLab-2
13d48ea0e441701bd66100689b6c25b561f15525
[ "Apache-2.0" ]
122
2020-04-12T04:19:06.000Z
2022-03-23T14:20:57.000Z
tutorials/Getting_Started.ipynb
ljw23/ConvLab-2
13d48ea0e441701bd66100689b6c25b561f15525
[ "Apache-2.0" ]
138
2020-02-18T16:48:04.000Z
2022-03-26T17:27:43.000Z
29.380417
289
0.513276
[ [ [ "# Getting Started\n\nIn this tutorial, you will know how to\n- use the models in **ConvLab-2** to build a dialog agent.\n- build a simulator to chat with the agent and evaluate the performance.\n- try different module combinations.\n- use analysis tool to diagnose your system.\n\nLet's get started!", "_____no_output_____" ], [ "## Environment setup\nRun the command below to install ConvLab-2. Then restart the notebook and skip this commend.", "_____no_output_____" ] ], [ [ "# first install ConvLab-2 and restart the notebook\n! git clone https://github.com/thu-coai/ConvLab-2.git && cd ConvLab-2 && pip install -e .", "_____no_output_____" ], [ "# installing en_core_web_sm for spacy to resolve error in BERTNLU\n!python -m spacy download en_core_web_sm", "_____no_output_____" ] ], [ [ "## build an agent\n\nWe use the models adapted on [Multiwoz](https://www.aclweb.org/anthology/D18-1547) dataset to build our agent. This pipeline agent consists of NLU, DST, Policy and NLG modules.\n\nFirst, import some models:", "_____no_output_____" ] ], [ [ "# common import: convlab2.$module.$model.$dataset\nfrom convlab2.nlu.jointBERT.multiwoz import BERTNLU\nfrom convlab2.nlu.milu.multiwoz import MILU\nfrom convlab2.dst.rule.multiwoz import RuleDST\nfrom convlab2.policy.rule.multiwoz import RulePolicy\nfrom convlab2.nlg.template.multiwoz import TemplateNLG\nfrom convlab2.dialog_agent import PipelineAgent, BiSession\nfrom convlab2.evaluator.multiwoz_eval import MultiWozEvaluator\nfrom pprint import pprint\nimport random\nimport numpy as np\nimport torch", "_____no_output_____" ] ], [ [ "Then, create the models and build an agent:", "_____no_output_____" ] ], [ [ "# go to README.md of each model for more information\n# BERT nlu\nsys_nlu = BERTNLU()\n# simple rule DST\nsys_dst = RuleDST()\n# rule policy\nsys_policy = RulePolicy()\n# template NLG\nsys_nlg = TemplateNLG(is_user=False)\n# assemble\nsys_agent = PipelineAgent(sys_nlu, sys_dst, sys_policy, sys_nlg, name='sys')", "_____no_output_____" ] ], [ [ "That's all! Let's chat with the agent using its response function:", "_____no_output_____" ] ], [ [ "sys_agent.response(\"I want to find a moderate hotel\")", "_____no_output_____" ], [ "sys_agent.response(\"Which type of hotel is it ?\")", "_____no_output_____" ], [ "sys_agent.response(\"OK , where is its address ?\")", "_____no_output_____" ], [ "sys_agent.response(\"Thank you !\")", "_____no_output_____" ], [ "sys_agent.response(\"Try to find me a Chinese restaurant in south area .\")", "_____no_output_____" ], [ "sys_agent.response(\"Which kind of food it provides ?\")", "_____no_output_____" ], [ "sys_agent.response(\"Book a table for 5 , this Sunday .\")", "_____no_output_____" ] ], [ [ "## Build a simulator to chat with the agent and evaluate\n\nIn many one-to-one task-oriented dialog system, a simulator is essential to train an RL agent. In our framework, we doesn't distinguish user or system. All speakers are **agents**. The simulator is also an agent, with specific policy inside for accomplishing the user goal.\n\nWe use `Agenda` policy for the simulator, this policy requires dialog act input, which means we should set DST argument of `PipelineAgent` to None. Then the `PipelineAgent` will pass dialog act to policy directly. 
Refer to the `PipelineAgent` doc for more details.", "_____no_output_____" ] ], [ [ "# MILU\nuser_nlu = MILU()\n# not use dst\nuser_dst = None\n# rule policy\nuser_policy = RulePolicy(character='usr')\n# template NLG\nuser_nlg = TemplateNLG(is_user=True)\n# assemble\nuser_agent = PipelineAgent(user_nlu, user_dst, user_policy, user_nlg, name='user')", "_____no_output_____" ] ], [ [ "\nNow we have a simulator and an agent. We will use the existing simple one-to-one conversation controller `BiSession`; you can also define your own Session class for your special needs.\n\nWe add `MultiWozEvaluator` to evaluate the performance. It uses the parsed dialog act input and policy output dialog act to calculate **inform f1**, **book rate**, and whether the task is a **success**.", "_____no_output_____" ] ], [ [ "evaluator = MultiWozEvaluator()\nsess = BiSession(sys_agent=sys_agent, user_agent=user_agent, kb_query=None, evaluator=evaluator)", "_____no_output_____" ] ], [ [ "Let's make these two agents chat! The key is the `next_turn` method of the `BiSession` class.", "_____no_output_____" ] ], [ [ "def set_seed(r_seed):\n random.seed(r_seed)\n np.random.seed(r_seed)\n torch.manual_seed(r_seed)\n\nset_seed(20200131)\n\nsys_response = ''\nsess.init_session()\nprint('init goal:')\npprint(sess.evaluator.goal)\nprint('-'*50)\nfor i in range(20):\n sys_response, user_response, session_over, reward = sess.next_turn(sys_response)\n print('user:', user_response)\n print('sys:', sys_response)\n print()\n if session_over is True:\n break\nprint('task success:', sess.evaluator.task_success())\nprint('book rate:', sess.evaluator.book_rate())\nprint('inform precision/recall/f1:', sess.evaluator.inform_F1())\nprint('-'*50)\nprint('final goal:')\npprint(sess.evaluator.goal)\nprint('='*100)", "_____no_output_____" ] ], [ [ "## Try different module combinations\n\nThe combination modes of pipeline agent modules are flexible. We support joint models such as TRADE and SUMBT for word-DST, and MDRG, HDSA, and LaRL for word-Policy, as long as the input and output are matched with the previous and next modules.
We also support End2End models such as Sequicity.\n\nAvailable models:\n\n- NLU: BERTNLU, MILU, SVMNLU\n- DST: RuleDST\n- Word-DST: SUMBT, TRADE (set `sys_nlu` to `None`)\n- Policy: RulePolicy, Imitation, REINFORCE, PPO, GDPL\n- Word-Policy: MDRG, HDSA, LaRL (set `sys_nlg` to `None`)\n- NLG: Template, SCLSTM\n- End2End: Sequicity, DAMD, RNN_rollout (directly used as `sys_agent`)\n- Simulator policy: Agenda, VHUS (for `user_policy`)\n", "_____no_output_____" ] ], [ [ "# available NLU models\nfrom convlab2.nlu.svm.multiwoz import SVMNLU\nfrom convlab2.nlu.jointBERT.multiwoz import BERTNLU\nfrom convlab2.nlu.milu.multiwoz import MILU\n# available DST models\nfrom convlab2.dst.rule.multiwoz import RuleDST\nfrom convlab2.dst.sumbt.multiwoz import SUMBT\nfrom convlab2.dst.trade.multiwoz import TRADE\n# available Policy models\nfrom convlab2.policy.rule.multiwoz import RulePolicy\nfrom convlab2.policy.ppo.multiwoz import PPOPolicy\nfrom convlab2.policy.pg.multiwoz import PGPolicy\nfrom convlab2.policy.mle.multiwoz import MLEPolicy\nfrom convlab2.policy.gdpl.multiwoz import GDPLPolicy\nfrom convlab2.policy.vhus.multiwoz import UserPolicyVHUS\nfrom convlab2.policy.mdrg.multiwoz import MDRGWordPolicy\nfrom convlab2.policy.hdsa.multiwoz import HDSA\nfrom convlab2.policy.larl.multiwoz import LaRL\n# available NLG models\nfrom convlab2.nlg.template.multiwoz import TemplateNLG\nfrom convlab2.nlg.sclstm.multiwoz import SCLSTM\n# available E2E models\nfrom convlab2.e2e.sequicity.multiwoz import Sequicity\nfrom convlab2.e2e.damd.multiwoz import Damd", "_____no_output_____" ] ], [ [ "NLU+RuleDST or Word-DST:", "_____no_output_____" ] ], [ [ "# NLU+RuleDST:\nsys_nlu = BERTNLU()\n# sys_nlu = MILU()\n# sys_nlu = SVMNLU()\nsys_dst = RuleDST()\n\n# or Word-DST:\n# sys_nlu = None\n# sys_dst = SUMBT()\n# sys_dst = TRADE()", "_____no_output_____" ] ], [ [ "Policy+NLG or Word-Policy:", "_____no_output_____" ] ], [ [ "# Policy+NLG:\nsys_policy = RulePolicy()\n# sys_policy = PPOPolicy()\n# sys_policy = PGPolicy()\n# sys_policy = MLEPolicy()\n# sys_policy = GDPLPolicy()\nsys_nlg = TemplateNLG(is_user=False)\n# sys_nlg = SCLSTM(is_user=False)\n\n# or Word-Policy:\n# sys_policy = LaRL()\n# sys_policy = HDSA()\n# sys_policy = MDRGWordPolicy()\n# sys_nlg = None", "_____no_output_____" ] ], [ [ "Assemble the Pipeline system agent:", "_____no_output_____" ] ], [ [ "sys_agent = PipelineAgent(sys_nlu, sys_dst, sys_policy, sys_nlg, 'sys')", "_____no_output_____" ] ], [ [ "Or directly use an end-to-end model:", "_____no_output_____" ] ], [ [ "# sys_agent = Sequicity()\n# sys_agent = Damd()", "_____no_output_____" ] ], [ [ "Configure a user agent similarly:", "_____no_output_____" ] ], [ [ "user_nlu = BERTNLU()\n# user_nlu = MILU()\n# user_nlu = SVMNLU()\nuser_dst = None\nuser_policy = RulePolicy(character='usr')\n# user_policy = UserPolicyVHUS(load_from_zip=True)\nuser_nlg = TemplateNLG(is_user=True)\n# user_nlg = SCLSTM(is_user=True)\nuser_agent = PipelineAgent(user_nlu, user_dst, user_policy, user_nlg, name='user')", "_____no_output_____" ] ], [ [ "## Use the analysis tool to diagnose the system\nWe provide an analysis tool that presents rich statistics and summarizes common mistakes from simulated dialogues, which facilitates error analysis and system improvement. The analyzer will generate an HTML report which contains rich statistics of simulated dialogues.
For more information, please refer to `convlab2/util/analysis_tool`.", "_____no_output_____" ] ], [ [ "from convlab2.util.analysis_tool.analyzer import Analyzer\n\n# if sys_nlu!=None, set use_nlu=True to collect more information\nanalyzer = Analyzer(user_agent=user_agent, dataset='multiwoz')\n\nset_seed(20200131)\nanalyzer.comprehensive_analyze(sys_agent=sys_agent, model_name='sys_agent', total_dialog=100)", "_____no_output_____" ] ], [ [ "To compare several models:", "_____no_output_____" ] ], [ [ "# sys_agent1 and sys_agent2 should be two system agents assembled as above\nset_seed(20200131)\nanalyzer.compare_models(agent_list=[sys_agent1, sys_agent2], model_name=['sys_agent1', 'sys_agent2'], total_dialog=100)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
4a3b697a2c15fbe776d7bd0f3be853fbad698f71
229,322
ipynb
Jupyter Notebook
prepareData.ipynb
singchenyeo/CKD
73e9b5e3530fee75204e6b862996e390c6d1a443
[ "Apache-2.0" ]
null
null
null
prepareData.ipynb
singchenyeo/CKD
73e9b5e3530fee75204e6b862996e390c6d1a443
[ "Apache-2.0" ]
null
null
null
prepareData.ipynb
singchenyeo/CKD
73e9b5e3530fee75204e6b862996e390c6d1a443
[ "Apache-2.0" ]
null
null
null
38.239453
26,196
0.403446
[ [ [ "import pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom matplotlib import rcParams\nrcParams['figure.figsize'] = 11.7,8.27 # figure size in inches\n\npd.options.mode.chained_assignment = None # default='warn'\npd.set_option('display.max_colwidth', None)\npd.set_option('display.max_rows', 500) \npd.set_option('display.max_columns', 30) \n\nfrom IPython.core.interactiveshell import InteractiveShell\nInteractiveShell.ast_node_interactivity = \"all\"\n\n%config Completer.use_jedi = False\n\nfrom sklearn.impute import KNNImputer\nfrom sklearn.preprocessing import LabelEncoder", "_____no_output_____" ] ], [ [ "# Note\n* Aggregate data by every 180 days", "_____no_output_____" ] ], [ [ "df_creatinine = pd.read_csv('CSV/T_creatinine.csv'); df_creatinine.rename(columns = {'value': 'creatinine'}, inplace=True)\ndf_dbp = pd.read_csv('CSV/T_DBP.csv'); df_dbp.rename(columns = {'value': 'dbp'}, inplace=True)\n\ndf_glucose = pd.read_csv('CSV/T_glucose.csv'); df_glucose.rename(columns = {'value': 'glucose'}, inplace=True)\ndf_hgb = pd.read_csv('CSV/T_HGB.csv'); df_hgb.rename(columns = {'value': 'hgb'}, inplace=True)\ndf_ldl = pd.read_csv('CSV/T_ldl.csv'); df_ldl.rename(columns = {'value': 'ldl'}, inplace=True)\ndf_meds = pd.read_csv('CSV/T_meds.csv')\ndf_sbp = pd.read_csv('CSV/T_sbp.csv'); df_sbp.rename(columns = {'value': 'sbp'}, inplace=True)\n", "_____no_output_____" ] ], [ [ "# Compute maximum time point (day) for each subject", "_____no_output_____" ] ], [ [ "df_creatinine_d = df_creatinine.groupby(['id'])['time'].max()\ndf_dbp_d = df_dbp.groupby(['id'])['time'].max()\ndf_glucose_d = df_glucose.groupby(['id'])['time'].max()\ndf_hgb_d = df_hgb.groupby(['id'])['time'].max()\ndf_ldl_d = df_ldl.groupby(['id'])['time'].max()\ndf_sbp_d = df_sbp.groupby(['id'])['time'].max()\ndf_meds_d = df_meds.groupby(['id'])['end_day'].max()\ndf_meds_d = df_meds_d.rename('time')\n\ndf_d_merge = pd.DataFrame(pd.concat([df_creatinine_d, df_dbp_d, df_glucose_d, df_hgb_d, df_ldl_d, df_sbp_d, df_meds_d])).reset_index()\ndf_d_merge = df_d_merge.groupby(['id']).max().reset_index()\ndf_d_merge = df_d_merge.sort_values('time')\nprint('Minimum = ' + str(df_d_merge['time'].min()) + ', Maximum = ' + str(df_d_merge['time'].max()))\nprint('Mean = ' + str(df_d_merge['time'].mean()) + ', Median = ' + str(df_d_merge['time'].median()))\nplt.plot(list(range(df_d_merge.shape[0])), df_d_merge['time'], '-p', markersize=1)\nplt.xlabel(\"Subject\")\nplt.ylabel(\"Days\")\nplt.title(\"Days of record\")\ndf_d_merge.to_csv('CSV/days_of_record.csv', index=False)", "Minimum = 708, Maximum = 1429\nMean = 1131.21, Median = 1160.0\n" ] ], [ [ "# Process med data\n", "_____no_output_____" ] ], [ [ "# Ignore medication ended before day 0\ndf_meds = df_meds[df_meds['end_day'] >= 0]\ndf_meds.head(10)", "_____no_output_____" ], [ "period_bin = 180\n\ndef generate_bin(n_start, n_end):\n \n global period_bin\n \n start_count = period_bin\n n = 1\n token = 0\n \n # keep trying until a code is assigned\n while token == 0:\n \n if n_end <= start_count:\n\n # start and end within period\n if n_start <= (start_count + 1):\n return int(start_count / period_bin)\n token = 1\n\n else:\n \n # the \"end of period\" is within start and end (e.g.: 90 < 180 < 280)\n if n_start <= start_count:\n \n # set a code for processing later\n return 99\n token = 1\n\n # start and end are both outside of the period\n else:\n \n # try the next period\n n += 1\n start_count *= n\n \n \ndf_meds['days_bin'] = df_meds.apply(lambda x: 
generate_bin(x['start_day'], x['end_day']), axis=1)", "_____no_output_____" ], [ "# Fix the in-between\nMID = df_meds['days_bin'] == 99\n\n# Replicate the error part to be fixed and concat with the main one\ndf_temp = df_meds[MID]\n\n\n# Bin months based on end_day\ndf_temp['days_bin'] = (df_temp['end_day'] / period_bin).astype(int) + 1\n\n# Value to be used to replace start (+1) or end\nv = (np.floor(df_meds.loc[MID, 'end_day'] / period_bin) * period_bin).astype(int)\n\ndf_meds.loc[MID, 'end_day'] = v\n# Bin months based on end_day\ndf_meds['days_bin'] = (df_meds['end_day'] / period_bin).astype(int) + 1\ndf_temp['start_day'] = (v + 1).astype(int)\n\ndf_meds = pd.concat([df_meds, df_temp], axis=0)\n\n\n", "_____no_output_____" ], [ "df_meds['days_bin'].value_counts().sort_index()", "_____no_output_____" ], [ "df_meds['end_day'].max()", "_____no_output_____" ], [ "# Get the total dosage during the period\ndf_meds['total_day'] = df_meds['end_day'] - df_meds['start_day'] + 1\ndf_meds['total_dosage'] = df_meds['total_day'] * df_meds['daily_dosage']", "_____no_output_____" ], [ "# Bin the data by days_bin\ndf_med_binned = df_meds.groupby(['id', 'days_bin', 'drug'])['total_dosage'].sum().reset_index()", "_____no_output_____" ], [ "df_med_binned.head()", "_____no_output_____" ], [ "# Convert df to wide format, with each column = dosage of one med\n# If drug not taken, assumed it's 0\ndf_med_wide = df_med_binned.pivot(index=['id', 'days_bin'],columns='drug',values='total_dosage').reset_index().fillna(0)\ndf_med_wide.head()", "_____no_output_____" ] ], [ [ "# Merge the raw measurements", "_____no_output_____" ] ], [ [ "# Check how many is between day 699 and day 720\ndf_hgb[(df_hgb['time']> 699) & (df_hgb['time'] <= 720)].shape[0]", "_____no_output_____" ], [ "# Sort columns to id, time, value first\n# First values are blood pressure, and systolic comes before diastolic\ndf_sbp = df_sbp[['id', 'time', 'sbp']]\n\ndf_merged = df_sbp.merge(df_dbp, on = ['id','time'], how='outer')\ndf_merged = df_merged.merge(df_creatinine, on = ['id','time'], how='outer')\ndf_merged = df_merged.merge(df_glucose, on = ['id','time'], how='outer')\ndf_merged = df_merged.merge(df_ldl, on = ['id','time'], how='outer')\ndf_merged = df_merged.merge(df_hgb, on = ['id','time'], how='outer')\ndf_merged = df_merged.sort_values(['id','time'])\ndf_merged.head()", "_____no_output_____" ], [ "# bin time \ndf_merged['days_bin'] = (df_merged['time'] / period_bin).astype(int) + 1\ndf_merged = df_merged.drop('time', axis=1)\ndf_merged['days_bin'].value_counts().sort_index()", "_____no_output_____" ], [ "# Aggregate data by days_bin and get median\ndf_merged = df_merged.groupby(['id', 'days_bin']).median().reset_index()\ndf_merged.head()", "_____no_output_____" ], [ "# Merge with med\ndf_merged = df_merged.merge(df_med_wide, on = ['id','days_bin'], how='outer')\ndf_merged.head()", "_____no_output_____" ], [ "# Save output for modelling\ndf_merged.to_csv('CSV/df_daybin.csv', index=False)", "_____no_output_____" ], [ "# Only first 4 bins (720 days)\ndf_merged_4 = df_merged[df_merged['days_bin'] <= 4]\n\n# Change NA to 0 for drugs\ndf_merged_4.iloc[:, 8:29] = df_merged_4.iloc[:, 8:29].fillna(0)\n\n\n# Use KNNImputer to fill continuous missing values\nimputer = KNNImputer(n_neighbors=3)\n\nfor day in range(1,5):\n DID = df_merged_4['days_bin'] == day\n df_day = df_merged_4[DID]\n\n # Remove id from imputation\n df_day.iloc[:,2:8] = pd.DataFrame(imputer.fit_transform(df_day.iloc[:,2:8]), index = df_day.index, columns = df_day.columns[2:8]) \n 
df_merged_4[DID] = df_day\n\n# Merge with demographic \ndf_demo = pd.read_csv('CSV/T_demo.csv')\n\n# Change the unknown in df_demo race to the mode (White)\ndf_demo.loc[df_demo['race'] == 'Unknown','race'] = 'White'\n\ndf_merged_4 = df_merged_4.merge(df_demo, on='id')\n\n# Merge with output\ndf_stage = pd.read_csv('CSV/T_stage.csv')\n\n# Change state to 0, 1\ndf_stage['Stage_Progress'] = np.where(df_stage['Stage_Progress'] == True, 1, 0)\n\ndf_merged_4 = df_merged_4.merge(df_stage, on='id')\n\n# Save output for modelling\ndf_merged_4.to_csv('CSV/df_daybin_4.csv', index=False)", "_____no_output_____" ], [ "df_merged_4.head()", "_____no_output_____" ] ], [ [ "# Aggregated data", "_____no_output_____" ] ], [ [ "df_agg = df_merged_4.copy()\n# Take out demographic and outcome\ndf_agg.drop( ['race', 'gender', 'age', 'Stage_Progress'], axis=1, inplace=True)\n\ndf_agg_mean = df_agg.groupby('id').mean().reset_index()\ndf_agg_mean.head()", "_____no_output_____" ], [ "# Mean sbp, dbp, creatinine, glucose, ldl, hgb\ndf_agg_mean = df_agg.groupby('id').mean().reset_index()\ndf_agg_mean = df_agg_mean.iloc[:, np.r_[0, 2:8]]\ndf_agg_mean.head()\ndf_agg_mean.shape\n\n# Sum drugs\ndf_agg_sum = df_agg.groupby('id').sum().reset_index()\ndf_agg_sum = df_agg_sum.iloc[:, 8:]\ndf_agg_sum.head()\ndf_agg_sum.shape\n\ndf_agg_fixed = pd.concat([df_agg_mean, df_agg_sum], axis=1)\ndf_agg_fixed.shape\n\n# Put back demo\ndf_agg_fixed = df_agg_fixed.merge(df_demo, on = 'id')\n# Put back outcome\ndf_agg_fixed = df_agg_fixed.merge(df_stage, on = 'id')\n\ndf_agg_fixed.head()\ndf_agg_fixed.shape\ndf_agg_fixed.to_csv('CSV/df_agg.csv', index=False)", "_____no_output_____" ] ], [ [ "# Temporal data\n* Only use first 2 years of data (most measurements stop at day 699)", "_____no_output_____" ] ], [ [ "df_temporal = df_merged_4.copy()\ndf_temporal.head()", "_____no_output_____" ], [ "# Take out demographic and outcome\ndf_temporal.drop( ['race', 'gender', 'age', 'Stage_Progress'], axis=1, inplace=True)", "_____no_output_____" ], [ "# Convert to wide format\ndf_temporal = df_temporal.set_index(['id','days_bin']).unstack()\ndf_temporal.columns = df_temporal.columns.map(lambda x: '{}_{}'.format(x[0], x[1]))", "_____no_output_____" ], [ "# Some subjects don't have data in a time_bin, KNNImpute again\ndf_temporal = pd.DataFrame(imputer.fit_transform(df_temporal), index = df_temporal.index, columns = df_temporal.columns) ", "_____no_output_____" ], [ "df_temporal = df_temporal.reset_index()\n# Put back demo\ndf_temporal = df_temporal.merge(df_demo, on = 'id')\n# Put back outcome\ndf_temporal = df_temporal.merge(df_stage, on = 'id')", "_____no_output_____" ], [ "df_temporal.head()", "_____no_output_____" ], [ "# Save output for modelling\ndf_temporal.to_csv('CSV/df_temporal.csv', index=False)", "_____no_output_____" ] ], [ [ "# Categorize measurements\n* Set continuous readings to 1=low, 2=normal, 3=high\n* Categorize medicine by tertile split total dosage to categorize severity (1=low, 2=normal, 3=high)\n* Categorize medicine by the treatment target, sum binary code ", "_____no_output_____" ] ], [ [ "# Remove 0, get 75th percentile as threshold for high dosage\n# Set normal as 1, high as 2\ndef categorize_drug(df):\n \n NID = df > 0\n if sum(NID) > 0:\n threshold = np.percentile(df[NID], 75)\n df[NID] = np.where(df[NID] > threshold, 2, 1)\n\n return df", "_____no_output_____" ] ], [ [ "## Day_bin", "_____no_output_____" ] ], [ [ "df_merged_4_cat = df_merged_4.copy()\ndf_merged_4_cat.head()", "_____no_output_____" ], [ "names = ['1', '2', 
'3']\n\nbins = [0, 90, 120, np.inf]\ndf_merged_4_cat['sbp'] = pd.cut(df_merged_4['sbp'], bins, labels=names)\n\nbins = [0, 60, 80, np.inf]\ndf_merged_4_cat['dbp'] = pd.cut(df_merged_4['dbp'], bins, labels=names)\n\nbins = [0, 3.9, 7.8, np.inf]\ndf_merged_4_cat['glucose'] = pd.cut(df_merged_4['glucose'], bins, labels=names)\n\nbins = [0, 100, 129, np.inf]\ndf_merged_4_cat['ldl'] = pd.cut(df_merged_4['ldl'], bins, labels=names)\n\nMID = df_merged_4['gender'] == 'Male'\n\nbins = [0, 0.74, 1.35, np.inf]\ndf_merged_4_cat.loc[MID, 'creatinine'] = pd.cut(df_merged_4.loc[MID, 'creatinine'], bins, labels=names)\nbins = [0, 0.59, 1.04, np.inf]\ndf_merged_4_cat.loc[~MID, 'creatinine'] = pd.cut(df_merged_4.loc[~MID, 'creatinine'], bins, labels=names)\n\nbins = [0, 14, 17.5, np.inf]\ndf_merged_4_cat.loc[MID, 'hgb'] = pd.cut(df_merged_4.loc[MID, 'hgb'], bins, labels=names)\nbins = [0, 12.3, 15.3, np.inf]\ndf_merged_4_cat.loc[~MID, 'hgb'] = pd.cut(df_merged_4.loc[~MID, 'hgb'], bins, labels=names)\n\ndf_merged_4_cat.head()", "_____no_output_____" ], [ "# Remove 0, get 75th percentile as threshold for high dosage, set normal as 1, high as 2\n# Need to compute separately for different days_bin\n\nfor day in range(1, 5):\n DID = df_merged_4_cat['days_bin'] == day\n df_day = df_merged_4_cat[DID]\n df_merged_4_cat = df_merged_4_cat[~DID] \n df_day.iloc[:, 8:29] = df_day.iloc[:, 8:29].apply(lambda x: categorize_drug(x)).astype(int) \n df_merged_4_cat = pd.concat([df_merged_4_cat, df_day])\n \n", "_____no_output_____" ], [ "# Label encode race and gender\nle = LabelEncoder()\ndf_merged_4_cat['race'] = le.fit_transform(df_merged_4_cat['race'])\ndf_merged_4_cat['gender'] = le.fit_transform(df_merged_4_cat['gender'])", "_____no_output_____" ], [ "# Group age to young-old (≤74 y.o.) as 1, middle-old (75 to 84 y.o.) as 2, and old-old (≥85 y.o.) 
as 3\ndf_merged_4_cat['age'] = pd.qcut(df_merged_4['age'], 3, labels=[1,2,3])\ndf_merged_4_cat['age'].value_counts()", "_____no_output_____" ], [ "df_merged_4_cat.to_csv('CSV/df_merged_4_cat.csv', index=False)", "_____no_output_____" ], [ "# Group drug by treatment (sum the binary code)\ndf_merged_4_cat_drug = df_merged_4_cat.copy()\n\nglucose_col = ['canagliflozin', 'dapagliflozin', 'metformin']\ndf_merged_4_cat_drug['glucose_treatment'] = df_merged_4_cat_drug[glucose_col].sum(axis=1).astype(int)\ndf_merged_4_cat_drug.drop(glucose_col, axis=1, inplace=True)\n\nbp_col = ['atenolol','bisoprolol','carvedilol','irbesartan','labetalol','losartan','metoprolol','nebivolol','olmesartan','propranolol','telmisartan','valsartan']\ndf_merged_4_cat_drug['bp_treatment'] = df_merged_4_cat_drug[bp_col].sum(axis=1).astype(int)\ndf_merged_4_cat_drug.drop(bp_col, axis=1, inplace=True)\n\ncholesterol_col = ['atorvastatin','lovastatin','pitavastatin','pravastatin','rosuvastatin','simvastatin']\ndf_merged_4_cat_drug['cholesterol_treatment'] = df_merged_4_cat_drug[cholesterol_col].sum(axis=1).astype(int)\ndf_merged_4_cat_drug.drop(cholesterol_col, axis=1, inplace=True)\n\ndf_merged_4_cat_drug.head()\ndf_merged_4_cat_drug.to_csv('CSV/df_merged_4_cat_drug.csv', index=False)", "_____no_output_____" ] ], [ [ "## Aggregated", "_____no_output_____" ] ], [ [ "df_agg_cat = df_agg_fixed", "_____no_output_____" ], [ "names = ['1', '2', '3']\n\nbins = [0, 90, 120, np.inf]\ndf_agg_cat['sbp'] = pd.cut(df_agg_fixed['sbp'], bins, labels=names)\n\nbins = [0, 60, 80, np.inf]\ndf_agg_cat['dbp'] = pd.cut(df_agg_fixed['dbp'], bins, labels=names)\n\nbins = [0, 3.9, 7.8, np.inf]\ndf_agg_cat['glucose'] = pd.cut(df_agg_fixed['glucose'], bins, labels=names)\n\nbins = [0, 100, 129, np.inf]\ndf_agg_cat['ldl'] = pd.cut(df_agg_fixed['ldl'], bins, labels=names)\n\nMID = df_agg_fixed['gender'] == 'Male'\n\nbins = [0, 0.74, 1.35, np.inf]\ndf_agg_cat.loc[MID, 'creatinine'] = pd.cut(df_agg_fixed.loc[MID, 'creatinine'], bins, labels=names)\nbins = [0, 0.59, 1.04, np.inf]\ndf_agg_cat.loc[~MID, 'creatinine'] = pd.cut(df_agg_fixed.loc[~MID, 'creatinine'], bins, labels=names)\n\nbins = [0, 14, 17.5, np.inf]\ndf_agg_cat.loc[MID, 'hgb'] = pd.cut(df_agg_fixed.loc[MID, 'hgb'], bins, labels=names)\nbins = [0, 12.3, 15.3, np.inf]\ndf_agg_cat.loc[~MID, 'hgb'] = pd.cut(df_agg_fixed.loc[~MID, 'hgb'], bins, labels=names)\n\ndf_agg_cat.head()", "_____no_output_____" ], [ "# Remove 0, get 75th percentile as threshold for high dosage, set normal as 1, high as 2\ndf_agg_cat.iloc[:,7:28] = df_agg_fixed.iloc[:,7:28].apply(lambda x: categorize_drug(x)).astype(int)", "_____no_output_____" ], [ "# Label encode race and gender\nle = LabelEncoder()\ndf_agg_cat['race'] = le.fit_transform(df_agg_cat['race'])\ndf_agg_cat['gender'] = le.fit_transform(df_agg_cat['gender'])", "_____no_output_____" ], [ "# Group age to young-old (≤74 y.o.) as 1, middle-old (75 to 84 y.o.) as 2, and old-old (≥85 y.o.) 
as 3\ndf_agg_cat['age'] = pd.qcut(df_agg_cat['age'], 3, labels=[1,2,3])\ndf_agg_cat['age'].value_counts()", "_____no_output_____" ], [ "df_agg_cat.to_csv('CSV/df_agg_cat.csv', index=False)", "_____no_output_____" ], [ "# Group drug by treatment (sum the binary code)\ndf_agg_cat_drug = df_agg_cat.copy()\n\nglucose_col = ['canagliflozin', 'dapagliflozin', 'metformin']\ndf_agg_cat_drug['glucose_treatment'] = df_agg_cat_drug[glucose_col].sum(axis=1).astype(int)\ndf_agg_cat_drug.drop(glucose_col, axis=1, inplace=True)\n\nbp_col = ['atenolol','bisoprolol','carvedilol','irbesartan','labetalol','losartan','metoprolol','nebivolol','olmesartan','propranolol','telmisartan','valsartan']\ndf_agg_cat_drug['bp_treatment'] = df_agg_cat_drug[bp_col].sum(axis=1).astype(int)\ndf_agg_cat_drug.drop(bp_col, axis=1, inplace=True)\n\ncholesterol_col = ['atorvastatin','lovastatin','pitavastatin','pravastatin','rosuvastatin','simvastatin']\ndf_agg_cat_drug['cholesterol_treatment'] = df_agg_cat_drug[cholesterol_col].sum(axis=1).astype(int)\ndf_agg_cat_drug.drop(cholesterol_col, axis=1, inplace=True)\n\ndf_agg_cat_drug.head()\ndf_agg_cat_drug.to_csv('CSV/df_agg_cat_drug.csv', index=False)", "_____no_output_____" ] ], [ [ "## Temporal", "_____no_output_____" ] ], [ [ "df_temporal_cat = df_temporal.copy()", "_____no_output_____" ], [ "names = ['1', '2', '3']\n\nbins = [0, 90, 120, np.inf]\nfor colname in ['sbp_1', 'sbp_2', 'sbp_3', 'sbp_4']:\n df_temporal_cat[colname] = pd.cut(df_temporal_cat[colname], bins, labels=names)\n\nbins = [0, 60, 80, np.inf]\nfor colname in ['dbp_1', 'dbp_2', 'dbp_3', 'dbp_4']:\n df_temporal_cat[colname] = pd.cut(df_temporal_cat[colname], bins, labels=names)\n\nbins = [0, 3.9, 7.8, np.inf]\nfor colname in ['glucose_1', 'glucose_2', 'glucose_3', 'glucose_4']:\n df_temporal_cat[colname] = pd.cut(df_temporal_cat[colname], bins, labels=names)\n\nbins = [0, 100, 129, np.inf]\nfor colname in ['ldl_1', 'ldl_2', 'ldl_3', 'ldl_4']:\n df_temporal_cat[colname] = pd.cut(df_temporal_cat[colname], bins, labels=names)\n\nMID = df_temporal_cat['gender'] == 'Male'\n\nbins = [0, 0.74, 1.35, np.inf]\nfor colname in ['creatinine_1', 'creatinine_2', 'creatinine_3', 'creatinine_4']:\n df_temporal_cat.loc[MID, colname] = pd.cut(df_temporal_cat.loc[MID, colname], bins, labels=names)\n \n\nbins = [0, 0.59, 1.04, np.inf]\nfor colname in ['creatinine_1', 'creatinine_2', 'creatinine_3', 'creatinine_4']:\n df_temporal_cat.loc[~MID, colname] = pd.cut(df_temporal_cat.loc[~MID, colname], bins, labels=names)\n\nbins = [0, 14, 17.5, np.inf]\nfor colname in ['hgb_1', 'hgb_2', 'hgb_3', 'hgb_4']:\n df_temporal_cat.loc[MID, colname] = pd.cut(df_temporal_cat.loc[MID, colname], bins, labels=names)\n \nbins = [0, 12.3, 15.3, np.inf]\nfor colname in ['hgb_1', 'hgb_2', 'hgb_3', 'hgb_4']:\n df_temporal_cat.loc[~MID, colname] = pd.cut(df_temporal_cat.loc[~MID, colname], bins, labels=names)\n\ndf_temporal_cat.head()", "_____no_output_____" ], [ "# Remove 0, get 75th percentile as threshold for high dosage, set normal as 1, high as 2\ndf_temporal_cat.iloc[:,25:109] = df_temporal_cat.iloc[:,25:109].apply(lambda x: categorize_drug(x)).astype(int)", "_____no_output_____" ], [ "# Label encode race and gender\nle = LabelEncoder()\ndf_temporal_cat['race'] = le.fit_transform(df_temporal_cat['race'])\ndf_temporal_cat['gender'] = le.fit_transform(df_temporal_cat['gender'])", "_____no_output_____" ], [ "# Group age to young-old (≤74 y.o.) as 1, middle-old (75 to 84 y.o.) as 2, and old-old (≥85 y.o.) 
as 3\ndf_temporal_cat['age'] = pd.qcut(df_temporal_cat['age'], 3, labels=[1,2,3])\ndf_temporal_cat['age'].value_counts()", "_____no_output_____" ], [ "df_temporal_cat.to_csv('CSV/df_temporal_cat.csv', index=False)", "_____no_output_____" ], [ "# Group drug by treatment (sum the binary code)\ndf_temporal_cat_drug = df_temporal_cat.copy()\n\nfor i in range(1,5):\n glucose_col = ['canagliflozin_' + str(i), 'dapagliflozin_' + str(i), 'metformin_' + str(i)]\n df_temporal_cat_drug['glucose_treatment_'+ str(i)] = df_temporal_cat_drug[glucose_col].sum(axis=1).astype(int)\n df_temporal_cat_drug.drop(glucose_col, axis=1, inplace=True)\n\n bp_col = ['atenolol_' + str(i),'bisoprolol_' + str(i),'carvedilol_' + str(i),'irbesartan_' + str(i),'labetalol_' + str(i),'losartan_' + str(i),'metoprolol_' + str(i),'nebivolol_' + str(i),'olmesartan_' + str(i),'propranolol_' + str(i),'telmisartan_' + str(i),'valsartan_' + str(i)]\n df_temporal_cat_drug['bp_treatment_'+ str(i)] = df_temporal_cat_drug[bp_col].sum(axis=1).astype(int)\n df_temporal_cat_drug.drop(bp_col, axis=1, inplace=True)\n\n cholesterol_col = ['atorvastatin_' + str(i),'lovastatin_' + str(i),'pitavastatin_' + str(i),'pravastatin_' + str(i),'rosuvastatin_' + str(i),'simvastatin_' + str(i)]\n df_temporal_cat_drug['cholesterol_treatment_'+ str(i)] = df_temporal_cat_drug[cholesterol_col].sum(axis=1).astype(int)\n df_temporal_cat_drug.drop(cholesterol_col, axis=1, inplace=True)\n\ndf_temporal_cat_drug.head()\ndf_temporal_cat_drug.to_csv('CSV/df_temporal_cat_drug.csv', index=False)", "_____no_output_____" ] ], [ [ "# Compute GFR\n* CKD-EPI equations", "_____no_output_____" ] ], [ [ "def computeGFR(df):\n gender = df['gender'] \n f_constant = 1 \n if gender == 'Male':\n \n k = 0.9\n a = -0.411\n \n else:\n \n k = 0.7\n a = -0.329\n f_constant = 1.018 \n \n \n race = df['race']\n b_constant = 1\n if race == 'Black':\n \n b_constant = 1.159\n \n gfr = 141 * min(df['creatinine'] / k, 1) * (max(df['creatinine'] / k, 1)**(-1.209)) * (0.993**df['age']) * f_constant * b_constant\n \n return gfr", "_____no_output_____" ] ], [ [ "## 180-day bin", "_____no_output_____" ] ], [ [ "col_gfr = ['id', 'days_bin', 'creatinine', 'race', 'gender', 'age', 'Stage_Progress']\ndf_merged_4_gfr = df_merged_4[col_gfr].copy()\ndf_merged_4_gfr['gfr'] = df_merged_4_gfr.apply(lambda x: computeGFR(x), axis=1)\ndf_merged_4_gfr.drop(['creatinine', 'race', 'gender', 'age'], axis=1, inplace=True)\n# Categorize GFR\ndf_merged_4_gfr['gfr_cat'] = np.where(df_merged_4_gfr['gfr'] < 60, 1, 2)\ndf_merged_4_gfr['gfr_cat'].value_counts()\ndf_merged_4_gfr.to_csv('CSV/df_merged_4_gfr.csv', index=False)", "_____no_output_____" ], [ "df_merged_4.head()", "_____no_output_____" ], [ "df_merged_4_gfr.head()", "_____no_output_____" ] ], [ [ "## Aggregated", "_____no_output_____" ] ], [ [ "df_agg_fixed = pd.read_csv('CSV/df_agg.csv')\ncol_gfr = ['id', 'creatinine', 'race', 'gender', 'age', 'Stage_Progress']\ndf_agg_gfr = df_agg_fixed[col_gfr].copy()\ndf_agg_gfr['gfr'] = df_agg_gfr.apply(lambda x: computeGFR(x), axis=1)\ndf_agg_gfr.drop(['creatinine', 'race', 'gender', 'age'], axis=1, inplace=True)\n# Categorize GFR\ndf_agg_gfr['gfr_cat'] = np.where(df_agg_gfr['gfr'] < 60, 1, 2)\ndf_agg_gfr['gfr_cat'].value_counts()\ndf_agg_gfr.to_csv('CSV/df_agg_gfr.csv', index=False)", "_____no_output_____" ] ], [ [ "## Temporal", "_____no_output_____" ] ], [ [ "def computeGFR_temporal(df, i):\n gender = df['gender'] \n f_constant = 1 \n if gender == 'Male':\n \n k = 0.9\n a = -0.411\n \n else:\n \n k = 0.7\n a = -0.329\n 
f_constant = 1.018 \n \n \n race = df['race']\n b_constant = 1\n if race == 'Black':\n \n b_constant = 1.159\n \n gfr = 141 * min(df['creatinine_' + str(i)] / k, 1) * (max(df['creatinine_' + str(i)] / k, 1)**(-1.209)) * (0.993**df['age']) * f_constant * b_constant\n \n return gfr", "_____no_output_____" ], [ "col_gfr = ['id', 'creatinine_1', 'creatinine_2', 'creatinine_3', 'creatinine_4', 'race', 'gender', 'age', 'Stage_Progress']\ndf_temporal_gfr = df_temporal[col_gfr].copy()\nfor i in range(1, 5):\n df_temporal_gfr['gfr_' + str(i)] = df_temporal_gfr.apply(lambda x: computeGFR_temporal(x, i), axis=1)\n df_temporal_gfr.drop('creatinine_' + str(i), axis=1, inplace=True)\n\ndf_temporal_gfr.drop(['race', 'gender', 'age'], axis=1, inplace=True)\n# Categorize GFR\nfor i in range(1, 5):\n df_temporal_gfr['gfr_cat_' + str(i)] = np.where(df_temporal_gfr['gfr_' + str(i)] < 60, 1, 2)\n \ndf_temporal_gfr.to_csv('CSV/df_temporal_gfr.csv', index=False)", "_____no_output_____" ] ] ]
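The two GFR helpers just defined (`computeGFR` and `computeGFR_temporal`) differ only in which creatinine column they read, and in both the sex-specific exponent `a` is assigned but never applied to the `min(scr/k, 1)` term, even though the CKD-EPI equation raises that term to the power α. A minimal consolidated sketch, assuming the same column names as in this record (`compute_gfr` is our name, not the notebook's):

```python
import pandas as pd

def compute_gfr(row: pd.Series, creatinine_col: str = 'creatinine') -> float:
    """CKD-EPI eGFR for one row (sketch); pass e.g. 'creatinine_2' for a temporal bin."""
    if row['gender'] == 'Male':
        k, a, f_constant = 0.9, -0.411, 1.0
    else:
        k, a, f_constant = 0.7, -0.329, 1.018
    b_constant = 1.159 if row['race'] == 'Black' else 1.0
    scr = row[creatinine_col] / k
    # note the exponent a on the min() term, which the original code omits
    return (141 * min(scr, 1.0) ** a * max(scr, 1.0) ** -1.209
            * 0.993 ** row['age'] * f_constant * b_constant)
```

Usage follows the record's own pattern, e.g. `df.apply(lambda r: compute_gfr(r, 'creatinine_1'), axis=1)`.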
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
4a3b7ececf1aaec0b8b871dec580248168080622
40,376
ipynb
Jupyter Notebook
_notebooks/scraping.ipynb
carlobailey/location-intelligence
4b2dfaaca8f57b73e100d2075c5727a6144ffdad
[ "CC0-1.0" ]
null
null
null
_notebooks/scraping.ipynb
carlobailey/location-intelligence
4b2dfaaca8f57b73e100d2075c5727a6144ffdad
[ "CC0-1.0" ]
null
null
null
_notebooks/scraping.ipynb
carlobailey/location-intelligence
4b2dfaaca8f57b73e100d2075c5727a6144ffdad
[ "CC0-1.0" ]
null
null
null
42.411765
7,573
0.533485
[ [ [ "# Scraping Google Maps with Python\n\n[Web scraping](https://en.wikipedia.org/wiki/Web_scraping) is the process of extracting data from web pages using software. There are many [techniques](https://en.wikipedia.org/wiki/Web_scraping#Techniques) to scrape data: computer vision, manual copy and pasting, pattern matching, etc – for this tutorial, we will use [HTML parsing](https://en.wikipedia.org/wiki/Web_scraping#HTML_parsing) with HTTP programming. Given the high costs of downloading large amounts of proprietary data, web scraping is sometimes the only option for small scale research projects. Google maps for example, the defacto mapping and navigation platform (as of 2020/2021), has a monopoly on information about the location of businesses across the globe. To get access to this information is very very costly (problems of monoply rule), therefore scraping the data is one of the only options for students. This tutorial demonstrates how to scrape Google Maps POI (points of interest) data in NYC with Python using the Selenium and Beautiful Soup libraries. \n\nThe first step is to import all the necessary libraries:\n- [Requests](https://requests.readthedocs.io/en/master/): a HTTP library for Python\n- [Selenium](https://www.selenium.dev/selenium/docs/api/py/): a tool for automating web browsing\n- [BeautfiulSoup](https://en.wikipedia.org/wiki/Beautiful_Soup_(HTML_parser)): a package to parse HTML elements\n- [Shapely](https://shapely.readthedocs.io/en/stable/manual.html): a library for manipulaitng and analysing geometry\n- [Geopandas](https://geopandas.org/index.html): similar to pandas with extensions to make it easier to work with geospatial data.", "_____no_output_____" ] ], [ [ "import requests\nfrom bs4 import BeautifulSoup\nimport os\nimport pandas as pd\nimport geopandas as gpd\nimport time\nimport re\nfrom shapely.geometry import Point\nimport psycopg2 as pg", "_____no_output_____" ], [ "from selenium import webdriver\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.support import expected_conditions as EC\nfrom selenium.webdriver.common.by import By\nfrom selenium.common.exceptions import TimeoutException\nfrom webdriver_manager.chrome import ChromeDriverManager\nfrom selenium.webdriver.chrome.options import Options", "_____no_output_____" ] ], [ [ "### Search requests\n\nEvery request to google maps usually starts from a location and a search query. This first section involves gathering a list of locations (latitudes & longitudes) and search queries (types of POI) to feed into the base URL below. The base url is the request we send to Google and what we'll get back is a selection of POI (businesses) for that location related to the search category.", "_____no_output_____" ] ], [ [ "base_url = \"https://www.google.com/maps/search/{0}/@{1},{2},16z\"", "_____no_output_____" ] ], [ [ "### Search locations\n\nAny search in Google maps requires a location to search from, typically a latitude and longitude pair. Below we use the centroids of select zipcodes within NYC to use as our search locations. 
There are only 11 zip codes below but this could easily be scaled to the entire city or country.", "_____no_output_____" ] ], [ [ "conn = pg.connect(\n    host=os.environ['aws_db_host'],\n    port=\"5432\",\n    user=os.environ['aws_db_u'],\n    password=os.environ['aws_db_p']\n)", "_____no_output_____" ], [ "gdf = gpd.read_postgis('''\n    SELECT \n    region_id,\n    geom AS geom\n    FROM geographies.blockgroups\n    WHERE (STARTS_WITH(region_id, '36061') OR STARTS_WITH(region_id, '36047')\n    OR STARTS_WITH(region_id, '36085') OR STARTS_WITH(region_id, '36005')\n    OR STARTS_WITH(region_id, '36081'))\n    ''', conn)", "_____no_output_____" ], [ "coords = gdf['geom'].centroid", "_____no_output_____" ], [ "xs = coords.apply(lambda x: x.x).tolist()\nys = coords.apply(lambda x: x.y).tolist()", "_____no_output_____" ], [ "counties = gdf['region_id'].str[2:5].tolist()", "_____no_output_____" ] ], [ [ "<br>\n\nHere we outline the search categories.", "_____no_output_____" ] ], [ [ "searches = ['Restaurants', 'bakery', 'coffee', 'gym', 'yoga', \n            'clothing', 'electronics', \n            'beauty', 'hardware', 'galleries',\n            'museums', 'Hotels', 'deli', \n            'liquor', 'bar', 'Groceries', 'Takeout', \n            'Banks', 'Pharmacies']", "_____no_output_____" ], [ "searches = ['Restaurants']", "_____no_output_____" ] ], [ [ "### Sending a GET request\n\nThe next step is to send a request to Google's servers about the information we would like returned. This type of request is called a [GET request](https://en.wikipedia.org/wiki/Hypertext_Transfer_Protocol#Request_methods) in HTTP programming. \n\nBelow we'll send a single test request to the servers to see what information we get back. We insert a single lat/lng pair into the **base_url** variable. ", "_____no_output_____" ] ], [ [ "url = base_url.format(searches[0], ys[0], xs[0])", "_____no_output_____" ] ], [ [ "<br>\n\nThe next portion of code starts a Selenium web driver (the vehicle that powers automated web browsing) and specifies a few browsing options. Specifically, we state the web browser should run *headless*, meaning it should not open up a new browser window, and we install a matching ChromeDriver binary.", "_____no_output_____" ] ], [ [ "delay = 10\nchrome_options = Options()\nchrome_options.add_argument(\"--headless\")\ndriver = webdriver.Chrome(ChromeDriverManager().install(), \n                          options=chrome_options)", "[WDM] - Current google-chrome version is 88.0.4324\nINFO:WDM:Current google-chrome version is 88.0.4324\n[WDM] - Get LATEST driver version for 88.0.4324\nINFO:WDM:Get LATEST driver version for 88.0.4324\n" ] ], [ [ "<br>\n\n### Intentionally slowing down\n\nAlthough web scraping is not [illegal](https://twitter.com/OrinKerr/status/1171116153948626944), researchers should be careful when scraping private websites, not due to legality issues, but due to being blocked by the company you're trying to scrape from. Web scraping tools like Selenium browse pages at light speed, way faster than any human, so it is easy for a company's servers to spot when individuals are trying to scrape their sites. This next section will introduce a few options to intentionally slow down our browsing. Specifically, we tell the driver object to wait until certain conditions have been met before moving forward. 
Once conditions have been met we download the HTML information returned by Google.", "_____no_output_____" ] ], [ [ "driver.get(url)\ntry:\n    # wait for button to be enabled\n    WebDriverWait(driver, delay).until(\n        EC.element_to_be_clickable((By.CLASS_NAME, 'section-result'))\n    )\n    driver.implicitly_wait(40)\n    html = driver.find_element_by_tag_name('html').get_attribute('innerHTML')\nexcept TimeoutException:\n    print('Loading took too much time!')\nelse:\n    print('getting html')\n    html = driver.page_source\nfinally:\n    driver.quit()", "getting html\n" ] ], [ [ "<br>\n\nNow that we have the HTML data, we feed this into Beautiful Soup for parsing, without having to worry about tripping up Google's servers.", "_____no_output_____" ] ], [ [ "soup = BeautifulSoup(html)\nresults = soup.find_all('div', {'class': 'section-result'})", "_____no_output_____" ] ], [ [ "<br> \n\nLet's view one item from the results returned", "_____no_output_____" ] ], [ [ "results[0]", "_____no_output_____" ] ], [ [ "<br>\n<br>\n\nAs you can see, there is a ton of information returned, but it is not in a human-readable format.\n\nTo interpret this information we will have to go to a page from Google Maps and understand how our information is represented in [HTML format](https://www.w3schools.com/whatis/whatis_htmldom.asp).", "_____no_output_____" ], [ "<img src=\"../static/img/web_elements.png\">", "_____no_output_____" ], [ "<br>\n<br>\n\nAbove is an example of a restaurant in Brooklyn as seen from Google Maps, alongside the HTML representation. To open up the pop-up, right-click anywhere on a page in Google Maps and click **inspect elements**. Now hovering over a certain piece of information will tell you what HTML tag, class or ID the element has been assigned. For example, the name of the restaurant above, Park Plaza, is in an **H3** tag inside another **span** tag (highlighted in blue at the bottom). Now that we know the HTML representation, we can more easily parse the data with Beautiful Soup to return the information we want.\n\nBelow we ask Beautiful Soup to find an H3 tag element and then get the text from the subsequent span element.", "_____no_output_____" ] ], [ [ "results[0].find('h3').span.text", "_____no_output_____" ] ], [ [ "<br>\n\nWe can now scale the above logic to get all the attributes we're interested in (name, address, price, tags, etc.), from all the zip codes, and all the POI categories. We'll wrap the POI parsing in a nested loop and save the results to a CSV using Pandas. We also add Python's built-in **try** and **except** objects to the HTML parsing logic so that the loop continues even when the data is not returned as expected (a common thing in web scraping). 
At the end of each iteration we will delay the next loop by 15 seconds so as not to trigger Google's servers.", "_____no_output_____" ] ], [ [ "for search in searches: \n    for idx, coord in enumerate(zip(ys, xs)):\n        filename = search + str(coord[0]).replace('.', '') + '_' + str(coord[1]).replace('.', '') + '.csv'\n        if not os.path.isfile('data/'+filename):\n            url = base_url.format(search, coord[0], coord[1])\n            delay = 10\n            chrome_options = Options()\n            chrome_options.add_argument(\"--headless\")\n            driver = webdriver.Chrome('/Users/carlo/.wdm/drivers/chromedriver/mac64/88.0.4324.96/chromedriver', \n                                      options=chrome_options)\n            driver.get(url)\n\n            try:\n                # wait for button to be enabled\n                WebDriverWait(driver, delay).until(\n                    EC.element_to_be_clickable((By.CLASS_NAME, 'section-result'))\n                )\n                driver.implicitly_wait(10)\n                html = driver.find_element_by_tag_name('html').get_attribute('innerHTML')\n            except TimeoutException:\n                print('Loading took too much time! iteration {0}'.format(idx))\n                continue\n            else:\n                html = driver.page_source\n            finally:\n                driver.quit()\n            soup = BeautifulSoup(html)\n            results = soup.find_all('div', {'class': 'section-result'})\n            data = []\n            for item in results:\n                try:\n                    name = item.find('h3').span.text\n                except:\n                    name = None\n                try:\n                    rating = item.find('span', {'class': 'cards-rating-score'}).text\n                except:\n                    rating = None\n                try:\n                    num_reviews = item.find('span', {'class': 'section-result-num-ratings'}).text\n                except:\n                    num_reviews = None\n                try:\n                    cost = item.find('span', {'class': 'section-result-cost'}).text\n                except:\n                    cost = None\n                try:\n                    tags = item.find('span', {'class': 'section-result-details'}).text\n                except:\n                    tags = None\n                try:\n                    addr = item.find('span', {'class': 'section-result-location'}).text\n                except:\n                    addr = None\n                try:\n                    descrip = item.find('div', {'class': 'section-result-description'}).text\n                except:\n                    descrip = None\n                data.append({'name': name, 'rating': rating, 'num_revs': num_reviews,\n                             'cost': cost, 'tags': tags, 'address': addr, \n                             'description': descrip, 'county': counties[idx]})\n            temp_df = pd.DataFrame(data)\n            temp_df.to_csv('data/' + filename)", "_____no_output_____" ] ], [ [ "<br>\n\nNow that we have the scraped results from Google Maps, we merge all the CSVs into a single DataFrame for easier analysis. 
In addition, we also change the county codes to county names (to help with the later (optional) geocoding).", "_____no_output_____" ] ], [ [ "import glob \nall_files = glob.glob('data' + \"/*.csv\")\n\nli = []\n\nfor filename in all_files:\n df = pd.read_csv(filename, dtype={'county': str})\n li.append(df)\n\nframe = pd.concat(li)", "_____no_output_____" ], [ "frame.drop('Unnamed: 0', axis=1).drop_duplicates().to_csv('nyc_poi_county.csv',index=False)", "_____no_output_____" ], [ "frame = frame.drop('Unnamed: 0', axis=1).drop_duplicates()", "_____no_output_____" ], [ "frame['county'] = frame['county'].replace({'061': 'new york', \n '081': 'queens', \n '047': 'brooklyn', \n '005': 'bronx', \n '085': 'staten island'})", "_____no_output_____" ], [ "frame['complete'] = frame['address'] + ' ' + frame['county']", "_____no_output_____" ], [ "print(frame.shape)\nframe.head()", "(7752, 9)\n" ] ], [ [ "End!\n\n<br>\n<br>\n<br>\n\n### (optional) Code to Geocode Results\n\nThe below loop uses the HERE API to geocode the above results to get latitude and longitude for each result", "_____no_output_____" ] ], [ [ "app_id = os.environ['here_app_id']\ncode = os.environ['here_app_code']\nlats = []\nlngs = []\n\nfor idx, address in enumerate(frame['complete']):\n try:\n search = frame['complete'].iloc[idx]\n url = \"https://geocoder.api.here.com/6.2/geocode.json?app_id=%s&app_code=%s&searchtext=%s&country=USA\"\n url = url % (app_id, code, search)\n r = requests.get(url)\n features = r.json()\n view = features['Response']['View'][0]\n res = view['Result'][0]\n loc = res['Location']\n pt = loc['DisplayPosition']\n lat, lng = pt['Latitude'], pt['Longitude']\n lats.append(lat)\n lngs.append(lng)\n except:\n lats.append(None)\n lngs.append(None)", "_____no_output_____" ] ], [ [ "<br>\n\nFinally we add the points to the DataFrame to create a GeoDataFrame and export it as a GeoJSON file.", "_____no_output_____" ] ], [ [ "frame['lat'] = lats\nframe['lng'] = lngs", "_____no_output_____" ], [ "frame['coords'] = frame.apply(lambda x: Point([x['lng'], x['lat']]), axis=1)", "_____no_output_____" ], [ "gdf_frame = gpd.GeoDataFrame(frame, geometry='coords', crs=\"EPSG:4326\")", "_____no_output_____" ], [ "print(gdf_frame.dropna(subset=['lat']).shape)\ngdf_frame.head()", "(7573, 12)\n" ], [ "gdf_frame.dropna(subset=['lat']).to_file(\"nyc_poi.geojson\", driver='GeoJSON')", "_____no_output_____" ] ] ]
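One detail worth noting in the scraping notebook above: the prose promises a 15-second pause between requests, but the loop as written never actually sleeps. A jittered wait both restores that behaviour and looks less machine-like than a fixed interval; a minimal sketch (`polite_get` is our helper name, not from the notebook):

```python
import random
import time

def polite_get(driver, url, min_wait=10.0, max_wait=20.0):
    """Fetch a page after a randomized pause (sketch)."""
    time.sleep(random.uniform(min_wait, max_wait))  # jittered delay between requests
    driver.get(url)
    return driver.page_source
```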
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
4a3b8424dbbb1630aed5c91f7bb13f85cd0e1e0b
17,940
ipynb
Jupyter Notebook
docs/examples/advanced/processes.ipynb
seankmartin/nengo
de345f6d201ac5063fc4c5a7e56c0b16c26785c1
[ "BSD-2-Clause" ]
null
null
null
docs/examples/advanced/processes.ipynb
seankmartin/nengo
de345f6d201ac5063fc4c5a7e56c0b16c26785c1
[ "BSD-2-Clause" ]
null
null
null
docs/examples/advanced/processes.ipynb
seankmartin/nengo
de345f6d201ac5063fc4c5a7e56c0b16c26785c1
[ "BSD-2-Clause" ]
null
null
null
31.808511
87
0.563545
[ [ [ "# Processes and how to use them\n\nProcesses in Nengo can be used to describe\ngeneral functions or dynamical systems,\nincluding those with randomness.\nThey can be useful if you want a `Node` output\nthat has a state (like a dynamical system),\nand they're also used for things like\ninjecting noise into Ensembles\nso that you can not only have \"white\" noise\nthat samples from a distribution,\nbut can also have \"colored\" noise\nwhere subsequent samples are correlated with past samples.\n\nThis notebook will first present the basic process interface,\nthen demonstrate some of the built-in Nengo processes\nand how they can be used in your code.\nIt will also describe how to create your own custom process.", "_____no_output_____" ] ], [ [ "%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nimport nengo", "_____no_output_____" ] ], [ [ "## Interface\n\nWe will begin by looking at how to run an existing process instance.\n\nThe key functions for running processes\nare `run`, `run_steps`, and `apply`.\nThe first two are for running without an input,\nand the third is for applying the process to an input.\n\nThere are also two helper functions,\n`trange` and `ntrange`,\nwhich return the time points corresponding to a process output,\ngiven either a length of time or a number of steps, respectively.", "_____no_output_____" ], [ "### `run`: running a process for a length of time\n\nThe `run` function runs a process\nfor a given length of time, without any input.\nMany of the random processes in `nengo.processes` will be run this way,\nsince they do not require an input signal.", "_____no_output_____" ] ], [ [ "# Create a process (details on the FilteredNoise process below)\nprocess = nengo.processes.FilteredNoise(\n synapse=nengo.synapses.Alpha(0.1), seed=0)\n\n# run the process for two seconds\ny = process.run(2.0)\n\n# get a corresponding two-second time range\nt = process.trange(2.0)\n\nplt.figure()\nplt.plot(t, y)\nplt.xlabel('time [s]')\nplt.ylabel('process output');", "_____no_output_____" ] ], [ [ "### `run_steps`: running a process for a number of steps\n\nTo run the process for a number of steps, use the `run_steps` function.\nThe length of the generated signal will depend on the process's `default_dt`.", "_____no_output_____" ] ], [ [ "process = nengo.processes.FilteredNoise(\n synapse=nengo.synapses.Alpha(0.1), seed=0)\n\n# run the process for 1000 steps\ny = process.run_steps(1000)\n\n# get a corresponding 1000-step time range\nt = process.ntrange(1000)\n\nplt.figure()\nplt.plot(t, y)\nplt.xlabel('time [s]')\nplt.ylabel('process output');", "_____no_output_____" ] ], [ [ "### `apply`: running a process with an input\n\nTo run a process with an input, use the `apply` function.", "_____no_output_____" ] ], [ [ "process = nengo.synapses.Lowpass(0.2)\n\nt = process.trange(5)\nx = np.minimum(t % 2, 2 - (t % 2)) # sawtooth wave\ny = process.apply(x) # general to all Processes\nz = process.filtfilt(x) # specific to Synapses\n\nplt.figure()\nplt.plot(t, x, label='input')\nplt.plot(t, y, label='output')\nplt.plot(t, z, label='filtfilt')\nplt.xlabel('time [s]')\nplt.ylabel('signal')\nplt.legend();", "_____no_output_____" ] ], [ [ "Note that Synapses are a special kind of process,\nand have the additional functions `filt` and `filtfilt`.\n`filt` works mostly the same as `apply`,\nbut with some additional functionality\nsuch as the ability to filter along any axis.\n`filtfilt` provides zero-phase filtering.", "_____no_output_____" ], [ "### Changing the time-step 
(`dt` and `default_dt`)\n\nTo run a process with a different time-step,\nyou can either pass the new time step (`dt`) when calling the functions,\nor change the `default_dt` property of the process.", "_____no_output_____" ] ], [ [ "process = nengo.processes.FilteredNoise(\n synapse=nengo.synapses.Alpha(0.1), seed=0)\ny1 = process.run(2.0, dt=0.05)\nt1 = process.trange(2.0, dt=0.05)\n\nprocess = nengo.processes.FilteredNoise(\n synapse=nengo.synapses.Alpha(0.1), default_dt=0.1, seed=0)\ny2 = process.run(2.0)\nt2 = process.trange(2.0)\n\nplt.figure()\nplt.plot(t1, y1, label='dt = %s' % 0.05)\nplt.plot(t2, y2, label='dt = %s' % 0.1)\nplt.xlabel('time [s]')\nplt.ylabel('output');", "_____no_output_____" ] ], [ [ "## `WhiteSignal`\n\nThe `WhiteSignal` process is used to generate band-limited white noise,\nwith only frequencies below a given cutoff frequency.", "_____no_output_____" ] ], [ [ "with nengo.Network() as model:\n a = nengo.Node(nengo.processes.WhiteSignal(1.0, high=5, seed=0))\n b = nengo.Node(nengo.processes.WhiteSignal(1.0, high=10, seed=0))\n c = nengo.Node(nengo.processes.WhiteSignal(1.0, high=5, rms=0.3, seed=0))\n d = nengo.Node(nengo.processes.WhiteSignal(0.5, high=5, seed=0))\n ap = nengo.Probe(a)\n bp = nengo.Probe(b)\n cp = nengo.Probe(c)\n dp = nengo.Probe(d)\n\nwith nengo.Simulator(model) as sim:\n sim.run(1.0)\n\nplt.figure()\nplt.plot(sim.trange(), sim.data[ap], label='5 Hz cutoff')\nplt.plot(sim.trange(), sim.data[bp], label='10 Hz cutoff')\nplt.plot(sim.trange(), sim.data[cp], label='5 Hz cutoff, 0.3 RMS amplitude')\nplt.plot(sim.trange(), sim.data[dp], label='5 Hz cutoff, 0.5 s period')\nplt.xlabel(\"time [s]\")\nplt.legend(loc=2);", "_____no_output_____" ] ], [ [ "Note that the 10 Hz signal (green)\nhas similar low frequency characteristics\nas the 5 Hz signal (blue),\nbut with additional higher-frequency components.\nThe 0.3 RMS amplitude 5 Hz signal (red)\nis the same as the original 5 Hz signal (blue),\nbut scaled down (the default RMS amplitude is 0.5).\nFinally, the signal with a 0.5 s period\n(instead of a 1 s period like the others)\nis completely different,\nbecause changing the period changes\nthe spacing of the random frequency components\nand thus creates a completely different signal.\nNote how the signal with the 0.5 s period repeats itself;\nfor example, the value at $t = 0$\nis the same as the value at $t = 0.5$,\nand the value at $t = 0.4$\nis the same as the value at $t = 0.9$.", "_____no_output_____" ], [ "## `WhiteNoise`\n\nThe `WhiteNoise` process generates white noise,\nwith equal power across all frequencies.\nBy default, it is scaled so that the integral process (Brownian noise)\nwill have the same standard deviation regardless of `dt`.", "_____no_output_____" ] ], [ [ "process = nengo.processes.WhiteNoise(dist=nengo.dists.Gaussian(0, 1))\n\nt = process.trange(0.5)\ny = process.run(0.5)\nplt.figure()\nplt.plot(t, y);", "_____no_output_____" ] ], [ [ "One use of the `WhiteNoise` process\nis to inject noise into neural populations.\nHere, we create two identical ensembles,\nbut add a bit of noise to one and no noise to the other.\nWe plot the membrane voltages of both.", "_____no_output_____" ] ], [ [ "process = nengo.processes.WhiteNoise(\n dist=nengo.dists.Gaussian(0, 0.01), seed=1)\n\nwith nengo.Network() as model:\n ens_args = dict(encoders=[[1]], intercepts=[0.01], max_rates=[100])\n a = nengo.Ensemble(1, 1, **ens_args)\n b = nengo.Ensemble(1, 1, noise=process, **ens_args)\n a_voltage = nengo.Probe(a.neurons, 'voltage')\n b_voltage = 
nengo.Probe(b.neurons, 'voltage')\n\nwith nengo.Simulator(model) as sim:\n sim.run(0.15)\n\nplt.figure()\nplt.plot(sim.trange(), sim.data[a_voltage], label=\"deterministic\")\nplt.plot(sim.trange(), sim.data[b_voltage], label=\"noisy\")\nplt.xlabel('time [s]')\nplt.ylabel('voltage')\nplt.legend(loc=4);", "_____no_output_____" ] ], [ [ "We see that the neuron without noise (blue)\napproaches its firing threshold,\nbut never quite gets there.\nAdding a bit of noise (green)\ncauses the neuron to occasionally jitter above the threshold,\nresulting in two spikes\n(where the voltage suddenly drops to zero).", "_____no_output_____" ], [ "## `FilteredNoise`\n\nThe `FilteredNoise` process takes a white noise signal\nand passes it through a filter.\nUsing any type of lowpass filter (e.g. `Lowpass`, `Alpha`)\nwill result in a signal similar to `WhiteSignal`,\nbut rather than being ideally filtered\n(i.e. no frequency content above the cutoff),\nthe `FilteredNoise` signal\nwill have some frequency content above the cutoff,\nwith the amount depending on the filter used.\nHere, we can see how an `Alpha` filter\n(a second-order lowpass filter)\nis much better than the `Lowpass` filter\n(a first-order lowpass filter)\nat removing the high-frequency content.", "_____no_output_____" ] ], [ [ "process1 = nengo.processes.FilteredNoise(\n dist=nengo.dists.Gaussian(0, 0.01), synapse=nengo.Alpha(0.005), seed=0)\n\nprocess2 = nengo.processes.FilteredNoise(\n dist=nengo.dists.Gaussian(0, 0.01), synapse=nengo.Lowpass(0.005), seed=0)\n\ntlen = 0.5\nplt.figure()\nplt.plot(process1.trange(tlen), process1.run(tlen))\nplt.plot(process2.trange(tlen), process2.run(tlen));", "_____no_output_____" ] ], [ [ "The `FilteredNoise` process with an `Alpha` synapse (blue)\nhas significantly lower high-frequency components\nthan a similar process with a `Lowpass` synapse (green).", "_____no_output_____" ], [ "## `PresentInput`\n\nThe `PresentInput` process is useful for\npresenting a series of static inputs to a network,\nwhere each input is shown for the same length of time.\nOnce all the images have been shown,\nthey repeat from the beginning.\nOne application is presenting a series of images to a classification network.", "_____no_output_____" ] ], [ [ "inputs = [[0, 0.5], [0.3, 0.2], [-0.1, -0.7], [-0.8, 0.6]]\nprocess = nengo.processes.PresentInput(inputs, presentation_time=0.1)\n\ntlen = 0.8\nplt.figure()\nplt.plot(process.trange(tlen), process.run(tlen))\nplt.xlim([0, tlen])\nplt.ylim([-1, 1]);", "_____no_output_____" ] ], [ [ "## Custom processes\n\nYou can create custom processes\nby inheriting from the `nengo.Process` class\nand overloading the `make_step` and `make_state` methods.\n\nAs an example, we'll make a simple custom process\nthat implements a two-dimensional oscillator dynamical system.\nThe `make_state` function defines a `state` variable\nto store the state.\nThe `make_step` function uses that state\nand a fixed `A` matrix to determine\nhow the state changes over time.\n\nOne advantage to using a process over a simple function\nis that if we reset our simulator,\n`make_step` will be called again\nand the process state\nwill be restored to the initial state.", "_____no_output_____" ] ], [ [ "class SimpleOscillator(nengo.Process):\n def make_state(self, shape_in, shape_out, dt, dtype=None):\n # return a dictionary mapping strings to their initial state\n return {\"state\": np.array([1., 0.])}\n\n def make_step(self, shape_in, shape_out, dt, rng, state):\n A = np.array([[-0.1, -1.], [1., -0.1]])\n s = 
state[\"state\"]\n\n # define the step function, which will be called\n # by the node every time step\n def step(t):\n s[:] += dt * np.dot(A, s)\n return s\n\n return step # return the step function\n\n\nwith nengo.Network() as model:\n a = nengo.Node(SimpleOscillator(), size_in=0, size_out=2)\n a_p = nengo.Probe(a)\n\nwith nengo.Simulator(model) as sim:\n sim.run(20.0)\n\nplt.figure()\nplt.plot(sim.trange(), sim.data[a_p])\nplt.xlabel('time [s]');", "_____no_output_____" ] ], [ [ "We can generalize this process to one\nthat can implement arbitrary linear dynamical systems,\ngiven `A` and `B` matrices.\nWe will overload the `__init__` method\nto take and store these matrices,\nas well as check the matrix shapes\nand set the default size in and out.\nThe advantage of using the default sizes\nis that when we then create a node using the process,\nor run the process using `apply`,\nwe do not need to specify the sizes.", "_____no_output_____" ] ], [ [ "class LTIProcess(nengo.Process):\n def __init__(self, A, B, **kwargs):\n A, B = np.asarray(A), np.asarray(B)\n\n # check that the matrix shapes are compatible\n assert A.ndim == 2 and A.shape[0] == A.shape[1]\n assert B.ndim == 2 and B.shape[0] == A.shape[0]\n\n # store the matrices for `make_step`\n self.A = A\n self.B = B\n\n # pass the default sizes to the Process constructor\n super().__init__(\n default_size_in=B.shape[1], default_size_out=A.shape[0], **kwargs)\n\n def make_state(self, shape_in, shape_out, dt, dtype=None):\n return {\"state\": np.zeros(self.A.shape[0])}\n\n def make_step(self, shape_in, shape_out, dt, rng, state):\n assert shape_in == (self.B.shape[1],)\n assert shape_out == (self.A.shape[0],)\n A, B = self.A, self.B\n s = state[\"state\"]\n\n def step(t, x):\n s[:] += dt * (np.dot(A, s) + np.dot(B, x))\n return s\n\n return step\n\n\n# demonstrate the LTIProcess in action\nA = [[-0.1, -1], [1, -0.1]]\nB = [[10], [-10]]\n\nwith nengo.Network() as model:\n u = nengo.Node(lambda t: 1 if t < 0.1 else 0)\n # we don't need to specify size_in and size_out!\n a = nengo.Node(LTIProcess(A, B))\n nengo.Connection(u, a)\n a_p = nengo.Probe(a)\n\nwith nengo.Simulator(model) as sim:\n sim.run(20.0)\n\nplt.figure()\nplt.plot(sim.trange(), sim.data[a_p])\nplt.xlabel('time [s]');", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
4a3b8e0ac8db0f27de7ace47c37ba3b139ca2ef3
602,250
ipynb
Jupyter Notebook
basic_supervised_net/rosey/rosey_v2.ipynb
ARCC-RACE/ai_training_notebooks
7414ddecc1b8700b5c3533cebb237e88996298cd
[ "MIT" ]
1
2020-03-04T07:46:01.000Z
2020-03-04T07:46:01.000Z
basic_supervised_net/rosey/rosey_v2.ipynb
ARCC-RACE/ai_training_notebooks
7414ddecc1b8700b5c3533cebb237e88996298cd
[ "MIT" ]
null
null
null
basic_supervised_net/rosey/rosey_v2.ipynb
ARCC-RACE/ai_training_notebooks
7414ddecc1b8700b5c3533cebb237e88996298cd
[ "MIT" ]
null
null
null
708.529412
302,416
0.94568
[ [ [ "# Jetsoncar Rosey V2\n\nTensorflow 2.0, all in notebook, optimized with RT", "_____no_output_____" ] ], [ [ "import tensorflow as tf\nprint(tf.__version__)\ntf.config.experimental.list_physical_devices('GPU') # If device does not show and using conda env with tensorflow-gpu then try restarting computer", "2.1.0\n" ], [ "# verify the image data directory\nimport os\ndata_directory = \"/media/michael/BigMemory/datasets/jetsoncar/training_data/data/dataset\"\nos.listdir(data_directory)[:10]\n\nimport matplotlib.pyplot as plt\nimg = plt.imread(os.path.join(data_directory + \"/color_images\", os.listdir(data_directory + \"/color_images\")[0]))\nprint(img.shape)\nplt.imshow(img)", "(480, 848, 3)\n" ] ], [ [ "## Create the datagenerator and augmentation framework", "_____no_output_____" ] ], [ [ "# Include the custom utils.py and perform tests\nimport importlib\nutils = importlib.import_module('utils')\nimport numpy as np\n\nprint(utils.INPUT_SHAPE)\n\nimg = utils.load_image(os.path.join(data_directory, 'color_images'),os.listdir(data_directory + \"/color_images\")[0])\nprint(img.shape)\n\nfig = plt.figure(figsize=(20,20))\nfig.add_subplot(1, 3, 1)\nplt.imshow(img)\n\nimg, _ = utils.preprocess_data(last_color_image=img)\nprint(img.shape)\n\nfig.add_subplot(1, 3, 2)\nplt.imshow(np.squeeze(img))\n\nplt.show()", "(240, 640, 3)\n(480, 848, 3)\n(1, 240, 640, 3)\n" ], [ "# Load the steering angles and image paths from labels.csv\nimport csv, random\nimport seaborn as sns\n\n# these will be 2D arrays where each row represents a dataset\nx = [] # images\ny = [] # steering\nz = [] # speed\nwith open(os.path.join(data_directory, \"tags.csv\")) as csvfile:\n reader = csv.DictReader(csvfile)\n for row in reader:\n # print(row['Time_stamp'] + \".jpg\", row['Steering_angle'])\n if not float(row['raw_speed']) == 0:\n x.append(row['time_stamp'] + \".jpg\",) # get image path\n y.append(float(row['raw_steering']),) # get steering value\n z.append(float(row['raw_speed']))\n\nprint(\"Number of data samples is \" + str(len(y)))\n\ndata = list(zip(x,y))\nrandom.shuffle(data)\nx,y = zip(*data)\n\n# plot of steering angle distribution without correction\nsns.distplot(y)", "_____no_output_____" ], [ "# plot of speed distribution\nsns.distplot(z)", "_____no_output_____" ], [ "# Split the training data\nvalidation_split = 0.2\ntrain_x = x[0:int(len(x)*(1.0-validation_split))]\ntrain_y = y[0:int(len(y)*(1.0-validation_split))]\nprint(\"Training data shape: \" + str(len(train_x)))\ntest_x = x[int(len(x)*(1.0-validation_split)):]\ntest_y = y[int(len(y)*(1.0-validation_split)):]\nprint(\"Validation data shape: \" + str(len(test_x)) + \"\\n\")", "Training data shape: 13771\nValidation data shape: 3443\n\n" ], [ "# Define and test batch generator\ndef batch_generator(data_dir, image_paths, steering_angles, batch_size, is_training):\n \"\"\"\n Generate training image give image paths and associated steering angles\n \"\"\"\n images = np.empty([batch_size, utils.IMAGE_HEIGHT, utils.IMAGE_WIDTH, utils.IMAGE_CHANNELS], dtype=np.float32)\n steers = np.empty(batch_size)\n while True:\n i = 0\n for index in np.random.permutation(len(image_paths)):\n img = image_paths[index]\n steering_angle = steering_angles[index]\n # argumentation\n if is_training and np.random.rand() < 0.8:\n image, steering_angle = utils.augument(data_dir, os.path.join(\"color_images\",img), steering_angle)\n else:\n image, _ = utils.preprocess_data(utils.load_image(data_dir, os.path.join(\"color_images\",img)))\n # add the image and steering angle to the 
batch\n images[i] = image\n steers[i] = steering_angle\n i += 1\n if i == batch_size:\n break\n yield images, steers\n \ntrain_generator = batch_generator(data_directory, train_x, train_y, 32, True)\nvalidation_generator = batch_generator(data_directory, test_x, test_y, 32, False)\n\ntrain_image = next(train_generator) # returns tuple with steering and throttle\nprint(train_image[0].shape)\nprint(train_image[1][0])\nplt.imshow(train_image[0][0])", "(32, 240, 640, 3)\n0.2661118747113352\n" ] ], [ [ "## Define the model and start training", "_____no_output_____" ] ], [ [ "model = tf.keras.models.Sequential([\n tf.keras.Input((utils.IMAGE_HEIGHT, utils.IMAGE_WIDTH, utils.IMAGE_CHANNELS)),\n tf.keras.layers.Conv2D(32, (11,11), padding='same', kernel_initializer='lecun_uniform'),\n tf.keras.layers.BatchNormalization(),\n tf.keras.layers.ELU(),\n tf.keras.layers.MaxPool2D((2,2)),\n \n tf.keras.layers.Conv2D(32, (7,7), padding='same', kernel_initializer='lecun_uniform'),\n tf.keras.layers.BatchNormalization(),\n tf.keras.layers.ELU(),\n tf.keras.layers.MaxPool2D((2,2)),\n \n tf.keras.layers.Conv2D(64, (5,5), padding='same', kernel_initializer='lecun_uniform'),\n tf.keras.layers.BatchNormalization(),\n tf.keras.layers.ELU(),\n tf.keras.layers.MaxPool2D((2,2)),\n \n tf.keras.layers.Conv2D(64, (3,3), padding='same', kernel_initializer='lecun_uniform'),\n tf.keras.layers.BatchNormalization(),\n tf.keras.layers.ELU(),\n tf.keras.layers.MaxPool2D((2,2)),\n \n tf.keras.layers.Conv2D(32, (3,3), padding='same', kernel_initializer='lecun_uniform'),\n tf.keras.layers.BatchNormalization(),\n tf.keras.layers.ELU(),\n tf.keras.layers.MaxPool2D((2,3)),\n \n tf.keras.layers.Conv2D(16, (3,3), padding='same', kernel_initializer='lecun_uniform'),\n tf.keras.layers.BatchNormalization(),\n tf.keras.layers.ELU(),\n tf.keras.layers.MaxPool2D((2,3)),\n \n tf.keras.layers.Flatten(),\n tf.keras.layers.Dense(10, activation='elu'),\n tf.keras.layers.Dense(1, activation='linear')\n])\nmodel.summary()\nmodel.compile(loss='mean_squared_error', optimizer='adam')", "Model: \"sequential\"\n_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\nconv2d (Conv2D) (None, 240, 640, 32) 11648 \n_________________________________________________________________\nbatch_normalization (BatchNo (None, 240, 640, 32) 128 \n_________________________________________________________________\nelu (ELU) (None, 240, 640, 32) 0 \n_________________________________________________________________\nmax_pooling2d (MaxPooling2D) (None, 120, 320, 32) 0 \n_________________________________________________________________\nconv2d_1 (Conv2D) (None, 120, 320, 32) 50208 \n_________________________________________________________________\nbatch_normalization_1 (Batch (None, 120, 320, 32) 128 \n_________________________________________________________________\nelu_1 (ELU) (None, 120, 320, 32) 0 \n_________________________________________________________________\nmax_pooling2d_1 (MaxPooling2 (None, 60, 160, 32) 0 \n_________________________________________________________________\nconv2d_2 (Conv2D) (None, 60, 160, 64) 51264 \n_________________________________________________________________\nbatch_normalization_2 (Batch (None, 60, 160, 64) 256 \n_________________________________________________________________\nelu_2 (ELU) (None, 60, 160, 64) 0 \n_________________________________________________________________\nmax_pooling2d_2 (MaxPooling2 (None, 30, 
80, 64) 0 \n_________________________________________________________________\nconv2d_3 (Conv2D) (None, 30, 80, 64) 36928 \n_________________________________________________________________\nbatch_normalization_3 (Batch (None, 30, 80, 64) 256 \n_________________________________________________________________\nelu_3 (ELU) (None, 30, 80, 64) 0 \n_________________________________________________________________\nmax_pooling2d_3 (MaxPooling2 (None, 15, 40, 64) 0 \n_________________________________________________________________\nconv2d_4 (Conv2D) (None, 15, 40, 32) 18464 \n_________________________________________________________________\nbatch_normalization_4 (Batch (None, 15, 40, 32) 128 \n_________________________________________________________________\nelu_4 (ELU) (None, 15, 40, 32) 0 \n_________________________________________________________________\nmax_pooling2d_4 (MaxPooling2 (None, 7, 13, 32) 0 \n_________________________________________________________________\nconv2d_5 (Conv2D) (None, 7, 13, 16) 4624 \n_________________________________________________________________\nbatch_normalization_5 (Batch (None, 7, 13, 16) 64 \n_________________________________________________________________\nelu_5 (ELU) (None, 7, 13, 16) 0 \n_________________________________________________________________\nmax_pooling2d_5 (MaxPooling2 (None, 3, 4, 16) 0 \n_________________________________________________________________\nflatten (Flatten) (None, 192) 0 \n_________________________________________________________________\ndense (Dense) (None, 10) 1930 \n_________________________________________________________________\ndense_1 (Dense) (None, 1) 11 \n=================================================================\nTotal params: 176,037\nTrainable params: 175,557\nNon-trainable params: 480\n_________________________________________________________________\n" ], [ "import datetime\nlog_dir=\"logs/fit/\" + datetime.datetime.now().strftime(\"%Y%m%d-%H%M%S\")\ntensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir)\nprint(\"To view tensorboard please run `tensorboard --logdir logs/fit` in the code directory from the terminal with deeplearning env active\")\n\ncheckpoint = tf.keras.callbacks.ModelCheckpoint('rosey_v2.{epoch:03d}-{val_loss:.2f}.h5', # filepath = working directory/\n monitor='val_loss',\n verbose=0,\n save_best_only=True,\n mode='auto')\n\nmodel.fit_generator(train_generator,\n steps_per_epoch=100, \n epochs=20,\n validation_data=validation_generator,\n validation_steps=1,\n callbacks=[tensorboard_callback, checkpoint])", "To view tensorboard please run `tensorboard --logdir logs/fit` in the code directory from the terminal with deeplearning env active\nEpoch 1/20\n100/100 [==============================] - 131s 1s/step - loss: 0.0886 - val_loss: 0.0705\nEpoch 2/20\n100/100 [==============================] - 123s 1s/step - loss: 0.0577 - val_loss: 0.0177\nEpoch 3/20\n100/100 [==============================] - 123s 1s/step - loss: 0.0573 - val_loss: 0.0715\nEpoch 4/20\n100/100 [==============================] - 124s 1s/step - loss: 0.0510 - val_loss: 0.0220\nEpoch 5/20\n100/100 [==============================] - 123s 1s/step - loss: 0.0529 - val_loss: 0.0250\nEpoch 6/20\n100/100 [==============================] - 123s 1s/step - loss: 0.0498 - val_loss: 0.0582\nEpoch 7/20\n100/100 [==============================] - 122s 1s/step - loss: 0.0506 - val_loss: 0.0379\nEpoch 8/20\n100/100 [==============================] - 122s 1s/step - loss: 0.0470 - val_loss: 0.0084\nEpoch 
9/20\n100/100 [==============================] - 122s 1s/step - loss: 0.0465 - val_loss: 0.0166\nEpoch 10/20\n100/100 [==============================] - 122s 1s/step - loss: 0.0502 - val_loss: 0.0313\nEpoch 11/20\n100/100 [==============================] - 122s 1s/step - loss: 0.0502 - val_loss: 0.0176\nEpoch 12/20\n100/100 [==============================] - 123s 1s/step - loss: 0.0472 - val_loss: 0.0196\nEpoch 13/20\n100/100 [==============================] - 122s 1s/step - loss: 0.0469 - val_loss: 0.0473\nEpoch 14/20\n100/100 [==============================] - 122s 1s/step - loss: 0.0492 - val_loss: 0.0109\nEpoch 15/20\n100/100 [==============================] - 122s 1s/step - loss: 0.0450 - val_loss: 0.0221\nEpoch 16/20\n100/100 [==============================] - 121s 1s/step - loss: 0.0453 - val_loss: 0.0668\nEpoch 17/20\n100/100 [==============================] - 122s 1s/step - loss: 0.0507 - val_loss: 0.0279\nEpoch 18/20\n100/100 [==============================] - 122s 1s/step - loss: 0.0428 - val_loss: 0.0132\nEpoch 19/20\n100/100 [==============================] - 122s 1s/step - loss: 0.0511 - val_loss: 0.0162\nEpoch 20/20\n100/100 [==============================] - 121s 1s/step - loss: 0.0451 - val_loss: 0.0237\n" ], [ "# Test the model\nimage, steering = next(train_generator)\nprint(steering)\nprint(model.predict(image))\nprint(\"\")\n\nimage, steering = next(validation_generator)\nprint(steering)\nprint(model.predict(image))", "[ 0.38866069 -0.01653751 0.15497394 0.18995237 -0.24068273 -0.29661691\n -0.2366807 -0.23459999 -0.09765687 -0.35514895 -0.28255172 0.15013804\n -0.15041455 -0.14102635 -0.04422271 -0.2006 -0.2032 -0.10150293\n -0.18395218 0.34925292 -0.22099999 0.250549 0.28476982 0.40067244\n 0.0918623 0.20941769 0.01414458 0.29502599 -0.27545909 0.27571231\n -0.06456293 -0.38859999]\n[[-0.01409152]\n [ 0.08915222]\n [-0.0566576 ]\n [-0.03755104]\n [ 0.08308045]\n [-0.15913627]\n [-0.16558295]\n [-0.12865719]\n [ 0.04993952]\n [-0.01573419]\n [-0.02034076]\n [-0.15650505]\n [ 0.06376354]\n [ 0.01313779]\n [-0.12256726]\n [-0.1740151 ]\n [-0.19206235]\n [ 0.08967156]\n [-0.09032293]\n [ 0.2788198 ]\n [-0.13434035]\n [ 0.00317884]\n [ 0.15187286]\n [ 0.15058331]\n [ 0.01717911]\n [ 0.00433371]\n [-0.14564866]\n [ 0.00086454]\n [-0.08476945]\n [-0.05350491]\n [-0.0531204 ]\n [-0.1822269 ]]\n\n[-0.26859999 -0.20259999 -0.31899998 -0.25319999 -0.1032 -0.17560001\n -0.21020001 -0.146 -0.2956 -0.1936 -0.15899999 -0.2\n -0.1512 -0.1336 -0.161 -0.16160001 -0.29519999 -0.2418\n -0.2006 -0.1636 -0.16160001 -0.3206 -0.1772 -0.31059998\n -0.1036 -0.15899999 -0.19180001 -0.16620001 -0.07160001 -0.2052\n -0.15359999 -1. 
]\n[[-0.11714696]\n [-0.20285761]\n [-0.15389183]\n [-0.19883284]\n [-0.16768342]\n [-0.1410008 ]\n [-0.18112859]\n [-0.1797688 ]\n [-0.17058238]\n [-0.15402645]\n [-0.14007717]\n [-0.14992392]\n [-0.18626127]\n [-0.15838376]\n [-0.16476256]\n [-0.12435078]\n [-0.10820477]\n [-0.16049647]\n [-0.18037426]\n [-0.12607521]\n [-0.14107633]\n [-0.22561407]\n [-0.18220991]\n [-0.17157105]\n [-0.18844417]\n [-0.16825321]\n [-0.17849213]\n [-0.19069886]\n [-0.14187187]\n [-0.14629892]\n [-0.07397493]\n [-0.12378275]]\n" ] ], [ [ "## Save the model as tensor RT and export to Jetson format", "_____no_output_____" ] ], [ [ "# Load the model that you would like converted to RT\nmodel_path = 'model.h5'\nexport_path = \"/home/michael/Desktop/model\"\n\nimport shutil\nif not os.path.isdir(export_path):\n os.mkdir(export_path)\nelse:\n response = input(\"Do you want to delete existing export_path directory? y/n\")\n if response == 'y':\n shutil.rmtree(export_path)\n os.mkdir(export_path)\n\nloaded_model = tf.keras.models.load_model(model_path)\n\nshutil.copy(\"./utils.py\", os.path.join(export_path, \"utils.py\"))\nshutil.copy(\"./__init__.py\", os.path.join(export_path, \"__init__.py\"))\nshutil.copy(\"./notes.txt\", os.path.join(export_path, \"notes.txt\"))\nshutil.copy(\"./config.yaml\", os.path.join(export_path, \"config.yaml\"))\n# Save as tf saved_model (faster than h5)\ntf.saved_model.save(loaded_model, export_path)", "Do you want to delete existing export_path directory? y/n n\n" ], [ "from tensorflow.python.compiler.tensorrt import trt_convert as trt\n\nconversion_params = trt.DEFAULT_TRT_CONVERSION_PARAMS\nconversion_params = conversion_params._replace(max_workspace_size_bytes=(1 << 32))\nconversion_params = conversion_params._replace(precision_mode=\"INT8\")\nconversion_params = conversion_params._replace(maximum_cached_engines=100)\nconversion_params = conversion_params._replace(use_calibration=True)\n\ndef my_calibration_input_fn():\n for i in range(20):\n image, _ = utils.preprocess_data(utils.load_image(data_directory, os.path.join(\"color_images\",x[i])))\n yield image.astype(np.float32),\n\nconverter = tf.experimental.tensorrt.Converter(input_saved_model_dir=export_path,conversion_params=conversion_params)\n\ngen = my_calibration_input_fn()\n\nconverter.convert(calibration_input_fn=my_calibration_input_fn)\nconverter.build(my_calibration_input_fn)\n\nif not os.path.isdir(os.path.join(export_path, \"rt\")):\n os.mkdir(os.path.join(export_path, \"rt\"))\n \nconverter.save(os.path.join(export_path, \"rt\"))", "INFO:tensorflow:Linked TensorRT version: (0, 0, 0)\nINFO:tensorflow:Loaded TensorRT version: (0, 0, 0)\nINFO:tensorflow:Assets written to: /home/michael/Desktop/model/rt/assets\n" ], [ "# Test normal saved model\nsaved_model = tf.saved_model.load(export_path) # normal saved model\n\nimage, _ = next(validation_generator)\n\nimport time\noutput = saved_model(image.astype(np.float32)) # load once to get more accurate representation of speed\nstart = time.time()\noutput = saved_model(image.astype(np.float32))\nstop = time.time()\nprint(\"inference time: \" + str(stop - start))\nprint(\"Output: %.20f\"%output[8,0])", "inference time: 0.0645287036895752\nOutput: -0.14043229818344116211\n" ], [ "# Test TRT optimized saved model\nsaved_model = tf.saved_model.load(os.path.join(export_path, \"rt\")) # normal saved model\n\n\nimage, _ = next(validation_generator)\n\nimport time\noutput = saved_model(image) # load once to get more accurate representation of speed\nstart = time.time()\noutput = 
saved_model(image)\nstop = time.time()\nprint(\"inference time: \" + str(stop - start))\nprint(\"Output: %.20f\"%output[8,0])", "inference time: 0.0647423267364502\nOutput: -0.23017483949661254883\n" ], [ "# Run many samples through and save distribution \nvalidation_generator = batch_generator(data_directory, test_x, test_y, 32, False)\ntest = []\nfor i in range(50):\n img, _ = next(validation_generator)\n test.append(saved_model(img.astype(np.float32))[0][0])\n print(str(i), end=\"\\r\")\nsns.distplot(test)", "49\r" ] ] ]
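The timing cells in the record above measure a single call to each SavedModel, which largely captures one-off graph tracing or TensorRT engine selection rather than steady-state latency. A warmed-up, averaged benchmark gives a steadier number; a minimal sketch (`benchmark` is an illustrative helper, not from the notebook):

```python
import time
import numpy as np

def benchmark(fn, batch, warmup=10, iters=100):
    """Mean/stddev latency in seconds after a warm-up period (sketch)."""
    for _ in range(warmup):
        fn(batch)  # trigger tracing / engine builds outside the timed region
    samples = []
    for _ in range(iters):
        start = time.perf_counter()
        fn(batch)
        samples.append(time.perf_counter() - start)
    return float(np.mean(samples)), float(np.std(samples))

# mean_s, std_s = benchmark(saved_model, image.astype(np.float32))
```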
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
4a3b9673855e7849d2929aed3bbec97816aa7d31
782,568
ipynb
Jupyter Notebook
original_KD.ipynb
FLHonker/AMTML-KD-code
926ab3c65e995cd97dc98bfcdb8c1bc62994329c
[ "MIT" ]
19
2020-12-02T06:33:03.000Z
2022-03-28T03:39:33.000Z
original_KD.ipynb
FLHonker/AMTML-KD-code
926ab3c65e995cd97dc98bfcdb8c1bc62994329c
[ "MIT" ]
null
null
null
original_KD.ipynb
FLHonker/AMTML-KD-code
926ab3c65e995cd97dc98bfcdb8c1bc62994329c
[ "MIT" ]
4
2021-08-14T02:23:40.000Z
2022-03-24T08:21:55.000Z
88.696362
372
0.586046
[ [ [ "from __future__ import print_function\nimport os\nimport time\nimport logging\nimport argparse\nfrom visdom import Visdom\nimport numpy as np\nimport torch\nimport torch.nn as nn\nimport torch.nn.functional as F\nimport torch.optim as optim\nfrom torch.autograd import Variable\nfrom torch.utils.data import DataLoader\nfrom torchvision import datasets, transforms\nfrom utils import *\nimport dataset\n\n# Teacher models\nimport models\n\n# Student models\n\n\nstart_time = time.time()\n# os.makedirs('./checkpoint', exist_ok=True)\n\n# Training settings\nparser = argparse.ArgumentParser(description='PyTorch original KD')\nparser.add_argument('--dataset',\n choices=['CIFAR10',\n 'CIFAR100',\n 'tinyimagenet'\n ],\n default='CIFAR10')\nparser.add_argument('--teacher',\n choices=['ResNet32',\n 'ResNet50',\n 'ResNet56',\n 'ResNet110'\n ],\n default='ResNet110')\nparser.add_argument('--student',\n choices=[\n 'ResNet8',\n 'ResNet15',\n 'ResNet16',\n 'ResNet20',\n 'myNet'\n ],\n default='ResNet20')\nparser.add_argument('--n_class', type=int, default=10, metavar='N', help='num of classes')\nparser.add_argument('--T', type=float, default=20.0, metavar='Temputure', help='Temputure for distillation')\nparser.add_argument('--batch_size', type=int, default=128, metavar='N', help='input batch size for training')\nparser.add_argument('--test_batch_size', type=int, default=128, metavar='N', help='input test batch size for training')\nparser.add_argument('--epochs', type=int, default=20, metavar='N', help='number of epochs to train (default: 20)')\nparser.add_argument('--lr', type=float, default=0.1, metavar='LR', help='learning rate (default: 0.01)')\nparser.add_argument('--momentum', type=float, default=0.9, metavar='M', help='SGD momentum (default: 0.5)')\nparser.add_argument('--device', default='cuda:1', type=str, help='device: cuda or cpu')\nparser.add_argument('--print_freq', type=int, default=10, metavar='N', help='how many batches to wait before logging training status')\n\nconfig = ['--dataset', 'CIFAR100', '--epochs', '200', '--n_class', '100', '--teacher', 'ResNet110', '--student', 'ResNet8', '--T', '5.0', '--device', 'cuda:0']\nargs = parser.parse_args(config)\n\ndevice = args.device if torch.cuda.is_available() else 'cpu'\nload_dir = './checkpoint/' + args.dataset + '/'\n\nteacher_model = getattr(models, args.teacher)(args.n_class)\nteacher_model.load_state_dict(torch.load(load_dir + args.teacher + '.pth'))\nteacher_model.to(device)\n\nst_model = getattr(models, args.student)(args.n_class) # args.student()\nst_model.to(device)\n\n# logging\nlogfile = load_dir + 'KD_' + st_model.model_name + '.log'\nif os.path.exists(logfile):\n os.remove(logfile)\ndef log_out(info):\n f = open(logfile, mode='a')\n f.write(info)\n f.write('\\n')\n f.close()\n print(info)\n \n# visualizer\nvis = Visdom(env='distill')\nloss_win = vis.line(\n X=np.array([0]),\n Y=np.array([0]),\n opts=dict(\n title=args.student + ' KD Loss',\n xtickmin=0,\n# xtickmax=1,\n# xtickstep=5,\n ytickmin=0,\n# ytickmax=10,\n# ytickstep=5,\n# markers=True,\n# markersymbol='dot',\n# markersize=5,\n ),\n name=\"loss\"\n)\n \nacc_win = vis.line(\n X=np.column_stack((0, 0)),\n Y=np.column_stack((0, 0)),\n opts=dict(\n title=args.student + ' KD ACC',\n xtickmin=0,\n# xtickstep=5,\n ytickmin=0,\n ytickmax=100,\n# markers=True,\n# markersymbol='dot',\n# markersize=5,\n legend=['train_acc', 'test_acc']\n ),\n name=\"acc\"\n)\n\n\n# data\nnormalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])\ntrain_transform = 
transforms.Compose([\n transforms.RandomHorizontalFlip(),\n transforms.RandomCrop(32, 4),\n transforms.ToTensor(),\n normalize,\n])\ntest_transform = transforms.Compose([transforms.ToTensor(), normalize])\nif args.dataset == 'tinyimagenet':\n train_set = dataset.TinyImageNet(root='../data/tiny-imagenet-200', transform=train_transform)\n test_set = dataset.TinyImageNet(root='../data/tiny-imagenet-200', transform=test_transform)\nelse:\n train_set = getattr(datasets, args.dataset)(root='../data', train=True, download=True, transform=train_transform)\n test_set = getattr(datasets, args.dataset)(root='../data', train=False, download=False, transform=test_transform)\ntrain_loader = DataLoader(train_set, batch_size=args.batch_size, shuffle=True)\ntest_loader = DataLoader(test_set, batch_size=args.test_batch_size, shuffle=False)\n\n# optimizer = optim.SGD(st_model.parameters(), lr=args.lr, momentum=args.momentum)\noptimizer_sgd = optim.SGD(st_model.parameters(), lr=args.lr, momentum=0.9, weight_decay=5e-4)\nlr_scheduler = optim.lr_scheduler.MultiStepLR(optimizer_sgd, milestones=[100, 150])\n\ndef distillation(y, labels, teacher_scores, T, alpha):\n return nn.KLDivLoss()(F.log_softmax(y/T), F.softmax(teacher_scores/T)) * (T*T * 2.0 * alpha) + F.cross_entropy(y, labels) * (1. - alpha)\n\n\ndef train(epoch, model, loss_fn):\n print('Training:')\n # switch to train mode\n model.train()\n batch_time = AverageMeter()\n data_time = AverageMeter()\n losses = AverageMeter()\n top1 = AverageMeter()\n\n end = time.time()\n for i, (input, target) in enumerate(train_loader):\n\n # measure data loading time\n data_time.update(time.time() - end)\n\n input, target = input.to(device), target.to(device)\n optimizer_sgd.zero_grad()\n # compute outputs\n _,_,_,_, output = model(input)\n with torch.no_grad():\n _,_,_,_, t_output = teacher_model(input)\n\n# print(output.size(), target.size(), teacher_output.size())\n\n # compute gradient and do SGD step\n loss = loss_fn(output, target, t_output, T=args.T, alpha=0.7)\n\n loss.backward()\n optimizer_sgd.step()\n\n output = output.float()\n loss = loss.float()\n # measure accuracy and record loss\n train_acc = accuracy(output.data, target.data)[0]\n losses.update(loss.item(), input.size(0))\n top1.update(train_acc, input.size(0))\n\n # measure elapsed time\n batch_time.update(time.time() - end)\n end = time.time()\n\n if i % args.print_freq == 0:\n log_out('[{0}/{1}]\\t'\n 'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\\t'\n 'Data {data_time.val:.3f} ({data_time.avg:.3f})\\t'\n 'Loss {loss.val:.4f} ({loss.avg:.4f})\\t'\n 'Prec@1 {top1.val:.3f} ({top1.avg:.3f})'.format(\n i, len(train_loader), batch_time=batch_time,\n data_time=data_time, loss=losses, top1=top1))\n return losses.avg, train_acc.cpu().numpy()\n\n\ndef test(model):\n print('Testing:')\n # switch to evaluate mode\n model.eval()\n batch_time = AverageMeter()\n losses = AverageMeter()\n top1 = AverageMeter()\n\n end = time.time()\n with torch.no_grad():\n for i, (input, target) in enumerate(test_loader):\n input, target = input.to(device), target.to(device)\n\n # compute output\n _,_,_,_, output = model(input)\n loss = F.cross_entropy(output, target)\n\n output = output.float()\n loss = loss.float()\n\n # measure accuracy and record loss\n test_acc = accuracy(output.data, target.data)[0]\n losses.update(loss.item(), input.size(0))\n top1.update(test_acc, input.size(0))\n\n # measure elapsed time\n batch_time.update(time.time() - end)\n end = time.time()\n\n if i % args.print_freq == 0:\n log_out('Test: 
[{0}/{1}]\\t'\n 'Time {batch_time.val:.3f} ({batch_time.avg:.3f})\\t'\n 'Loss {loss.val:.4f} ({loss.avg:.4f})\\t'\n 'Prec@1 {top1.val:.3f} ({top1.avg:.3f})'.format(\n i, len(test_loader), batch_time=batch_time, loss=losses,\n top1=top1))\n\n log_out(' * Prec@1 {top1.avg:.3f}'.format(top1=top1))\n\n return losses.avg, test_acc.cpu().numpy(), top1.avg.cpu().numpy()\n\n\nprint('StudentNet:\\n')\nprint(st_model)\nbest_acc = 0\nfor epoch in range(1, args.epochs + 1):\n log_out(\"\\n===> epoch: {}/{}\".format(epoch, args.epochs))\n log_out('current lr {:.5e}'.format(optimizer_sgd.param_groups[0]['lr']))\n lr_scheduler.step()\n train_loss, train_acc = train(epoch, st_model, loss_fn=distillation)\n # visaulize loss\n vis.line(np.array([train_loss]), np.array([epoch]), loss_win, update=\"append\")\n _, test_acc, top1 = test(st_model)\n best_acc = max(top1, best_acc)\n vis.line(np.column_stack((train_acc, top1)), np.column_stack((epoch, epoch)), acc_win, update=\"append\")\n\n# torch.save(st_model.state_dict(), load_dir + args.teacher + '_distill_' + args.student + '.pth')\n# release GPU memory\ntorch.cuda.empty_cache()\n\nlog_out(\"@ BEST ACC = {:.4f}%\".format(best_acc))\nlog_out(\"--- {:.3f} mins ---\".format((time.time() - start_time)/60))\n", "/home/data/yaliu/jupyterbooks/multi-KD/models/teacher/resnet20.py:36: UserWarning: nn.init.kaiming_normal is now deprecated in favor of nn.init.kaiming_normal_.\n init.kaiming_normal(m.weight)\n/home/data/yaliu/jupyterbooks/multi-KD/models/student/resnet_s.py:13: UserWarning: nn.init.kaiming_normal is now deprecated in favor of nn.init.kaiming_normal_.\n init.kaiming_normal(m.weight)\nWARNING:root:Setting up a new session...\n" ] ] ]
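The `distillation` helper in the record above combines a temperature-softened KL term with plain cross-entropy. A standalone sketch of just the soft-target term (random logits, not the archived run; `reduction="batchmean"` is the textbook normalization, whereas the script's `nn.KLDivLoss()` default differs by a constant factor), showing why the loss is rescaled by T*T:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
student_logits = torch.randn(4, 10)  # hypothetical batch of 4 over 10 classes
teacher_logits = torch.randn(4, 10)
T = 5.0  # matches the script's --T 5.0 setting

# KL divergence between temperature-softened distributions; multiplying by T*T
# keeps soft-target gradients on the same scale as the hard-label term.
soft_loss = F.kl_div(
    F.log_softmax(student_logits / T, dim=1),
    F.softmax(teacher_logits / T, dim=1),
    reduction="batchmean",
) * (T * T)
print(soft_loss.item())
```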
[ "code" ]
[ [ "code" ] ]
4a3ba072363b4d324e2246f863299ef952a0d38f
8,323
ipynb
Jupyter Notebook
hpc/miniprofiler/English/C/jupyter_notebook/profiling-c-lab2.ipynb
AditiRM/gpubootcamp
bd14fec2383d7f116760ca9f4efd5816523f9be9
[ "Apache-2.0" ]
null
null
null
hpc/miniprofiler/English/C/jupyter_notebook/profiling-c-lab2.ipynb
AditiRM/gpubootcamp
bd14fec2383d7f116760ca9f4efd5816523f9be9
[ "Apache-2.0" ]
null
null
null
hpc/miniprofiler/English/C/jupyter_notebook/profiling-c-lab2.ipynb
AditiRM/gpubootcamp
bd14fec2383d7f116760ca9f4efd5816523f9be9
[ "Apache-2.0" ]
null
null
null
44.989189
463
0.66106
[ [ [ "In this lab, we will optimize the weather simulation application written in C++ (if you prefer to use Fortran, click [this link](../../Fortran/jupyter_notebook/profiling-fortran.ipynb)). \n\nLet's execute the cell below to display information about the GPUs running on the server by running the nvaccelinfo command, which ships with the NVIDIA HPC compiler that we will be using. To do this, execute the cell block below by giving it focus (clicking on it with your mouse), and hitting Ctrl-Enter, or pressing the play button in the toolbar above. If all goes well, you should see some output returned below the grey cell.", "_____no_output_____" ] ], [ [ "!nvaccelinfo", "_____no_output_____" ] ], [ [ "## Exercise 2 \n\n### Learning objectives\nLearn how to identify and parallelise the computationally expensive routines in your application using OpenACC compute constructs (A compute construct is a parallel, kernels, or serial construct.). In this exercise you will:\n\n- Implement OpenACC parallelism using parallel directives to parallelise the serial application\n- Learn how to compile your parallel application with NVIDIA HPC compiler\n- Benchmark and compare the parallel version of the application with the serial version\n- Learn how to interpret NVIDIA HPC compiler feedback to ensure the applied optimization were successful", "_____no_output_____" ], [ "Click on the <b>[miniWeather_openacc.cpp](../source_code/lab2/miniWeather_openacc.cpp)</b> and <b>[Makefile](../source_code/lab2/Makefile)</b> and inspect the code before running below cells. We have already added OpenACC compute directives (`#pragma acc parallel`) around the expensive routines (loops) in the code.\n\nOnce done, compile the code with `make`. View the NVIDIA HPC compiler feedback (enabled by adding `-Minfo=accel` flag) and investigate the compiler feedback for the OpenACC code. The compiler feedback provides useful information about applied optimizations.", "_____no_output_____" ] ], [ [ "!cd ../source_code/lab2 && make clean && make", "_____no_output_____" ] ], [ [ "Let's inspect part of the compiler feedback and see what it's telling us.\n\n<img src=\"images/cfeedback1.png\">\n\n- Using `-ta=tesla:managed`, instruct the compiler to build for an NVIDIA Tesla GPU using \"CUDA Managed Memory\"\n- Using `-Minfo` command-line option, we will see all output from the compiler. In this example, we use `-Minfo=accel` to only see the output corresponding to the accelerator (in this case an NVIDIA GPU).\n- The first line of the output, `compute_tendencies_x`, tells us which function the following information is in reference to.\n- The line starting with 227, shows we created a parallel OpenACC loop. This loop is made up of gangs (a grid of blocks in CUDA language) and vector parallelism (threads in CUDA language) with the vector size being 128 per gang. `277, #pragma acc loop gang, vector(128) /* blockIdx.x threadIdx.x */`\n- The rest of the information concerns data movement. Compiler detected possible need to move data and handled it for us. 
We will get into this later in this lab.\n\nIt is very important to inspect the feedback to make sure the compiler is doing what you have asked of it.\n\nNow, let's **Run** the application for small values of `nx_glob`,`nz_glob`, and `sim_time`: **40, 20, 1000**", "_____no_output_____" ] ], [ [ "!cd ../source_code/lab2 && nsys profile -t nvtx,openacc --stats=true --force-overwrite true -o miniWeather_3 ./miniWeather 40 20 1000", "_____no_output_____" ] ], [ [ "You can see that the changes made actually slowed down the code and it runs slower compared to the non-accelerated CPU only version. Let's checkout the profiler's report. [Download the profiler output](../source_code/lab2/miniWeather_3.qdrep) and open it via the GUI. \n\nFrom the \"timeline view\" on the top pane, double click on the \"CUDA\" from the function table on the left and expand it. Zoom in on the timeline and you can see a pattern similar to the screenshot below. The blue boxes are the compute kernels and each of these groupings of kernels is surrounded by purple and teal boxes (annotated with red color) representing data movements. **Screenshots represents profiler report for the values of 400,200,1500.**\n\n<img src=\"images/nsys_slow.png\" width=\"80%\" height=\"80%\">\n\nLet's hover your mouse over kernels (blue boxes) one by one from each row and checkout the provided information.\n\n<img src=\"images/occu-1.png\" width=\"60%\" height=\"60%\">\n\n**Note**: In the next two exercises, we start optimizing the application by improving the occupancy and reducing data movements.", "_____no_output_____" ], [ "## Post-Lab Summary\n\nIf you would like to download this lab for later viewing, it is recommend you go to your browsers File menu (not the Jupyter notebook file menu) and save the complete web page. This will ensure the images are copied down as well. You can also execute the following cell block to create a zip-file of the files you've been working on, and download it with the link below.", "_____no_output_____" ] ], [ [ "%%bash\ncd ..\nrm -f openacc_profiler_files.zip\nzip -r openacc_profiler_files.zip *", "_____no_output_____" ] ], [ [ "**After** executing the above zip command, you should be able to download the zip file [here](../openacc_profiler_files.zip).", "_____no_output_____" ], [ "-----\n\n# <p style=\"text-align:center;border:3px; border-style:solid; border-color:#FF0000 ; padding: 1em\"> <a href=../../profiling_start.ipynb>HOME</a>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<span style=\"float:center\"> <a href=profiling-c-lab3.ipynb>NEXT</a></span> </p>\n\n-----", "_____no_output_____" ], [ "# Links and Resources\n\n[OpenACC API Guide](https://www.openacc.org/sites/default/files/inline-files/OpenACC%20API%202.6%20Reference%20Guide.pdf)\n\n[NVIDIA Nsight System](https://docs.nvidia.com/nsight-systems/)\n\n[CUDA Toolkit Download](https://developer.nvidia.com/cuda-downloads)\n\n**NOTE**: To be able to see the Nsight System profiler output, please download Nsight System latest version from [here](https://developer.nvidia.com/nsight-systems).\n\nDon't forget to check out additional [OpenACC Resources](https://www.openacc.org/resources) and join our [OpenACC Slack Channel](https://www.openacc.org/community#slack) to share your experience and get more help from the community.\n\n--- \n\n## Licensing \n\nThis material is released by OpenACC-Standard.org, in collaboration with NVIDIA Corporation, under the Creative Commons Attribution 4.0 International (CC BY 4.0).", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ] ]
4a3babbb714486fb9d1607a42f104653115f15b9
161,096
ipynb
Jupyter Notebook
time-convention/convention-2-rt.ipynb
ecmwf-lab/playground
68e709cebd7c37d3ae0f74f455d8c1f90d5f71f0
[ "Apache-2.0" ]
null
null
null
time-convention/convention-2-rt.ipynb
ecmwf-lab/playground
68e709cebd7c37d3ae0f74f455d8c1f90d5f71f0
[ "Apache-2.0" ]
null
null
null
time-convention/convention-2-rt.ipynb
ecmwf-lab/playground
68e709cebd7c37d3ae0f74f455d8c1f90d5f71f0
[ "Apache-2.0" ]
null
null
null
68.522331
50,740
0.665398
[ [ [ "#! pip install zarr\n#! pip install s3fs\n#! pip install climetlab\n#! pip install climetlab_s2s_ai_competition --quiet\n# !pip install matplotlib -quiet", "_____no_output_____" ], [ "import climetlab as cml ", "module 'Magics' has no attribute 'strict_mode'\n" ], [ "import climetlab_s2s_ai_competition\nprint(f'Climetlab version : {cml.__version__}')\nprint(f'Climetlab-s2s-ai-competition plugin version : {climetlab_s2s_ai_competition.__version__}')", "Climetlab version : 0.2.3\nClimetlab-s2s-ai-competition plugin version : 0.2.3\n" ], [ "#version = '0.1.6' # version of the data\nversion = '0.1.20' # version of the data", "_____no_output_____" ], [ "param = 't2m'", "_____no_output_____" ], [ "cmlds = cml.load_dataset(\"s2s-ai-competition-reference-set\",\n date=[\"20200102\", \"20200109\", \"20200116\"],\n version=version,\n parameter = param,\n #hindcast=True,\n format='netcdf')\nds = cmlds.to_xarray()\n#ds", "By downloading data from this dataset, you agree to the their terms: Attribution 4.0 International(CC BY 4.0). If you do not agree with such terms, do not download the data. For more information, please visit https://www.ecmwf.int/en/terms-use and https://apps.ecmwf.int/datasets/data/s2s/licence/.\n" ], [ "ts = ds.sel(realization=0).sel(latitude=37, longitude=0, method='nearest').drop(['realization','latitude','longitude']) ; ts", "_____no_output_____" ], [ "%matplotlib inline\nts[param].plot(hue='forecast_time');", "_____no_output_____" ], [ "ts.mean('forecast_time')[param]", "_____no_output_____" ], [ "ts[[param,'time']].groupby('time').mean()[param]", "_____no_output_____" ] ], [ [ "## More exploration", "_____no_output_____" ] ], [ [ "a = ts[[param,'time']].groupby('time').count().rename({param:'count'})\nb = ts[[param, 'time']].groupby('time').mean()\nimport xarray as xr\nxr.merge([a,b])", "_____no_output_____" ], [ "da = ts[['t2m','time']]\n#da.groupby(\"time.dayofyear\") - \n#da.groupby(\"time.dayofyear\").mean([\"time\",\"stacked_forecast_time_step\"])\nda.groupby(\"time.dayofyear\").count() #\"time.dayofyear\")", "_____no_output_____" ] ], [ [ "See : http://xarray.pydata.org/en/stable/groupby.html", "_____no_output_____" ] ] ]
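The exploration in this record leans on xarray's split-apply-combine `groupby`. A self-contained sketch of the same pattern on synthetic data (not the S2S dataset, which requires a network download):

```python
# Toy stand-in for the t2m time series above: group a variable by a datetime
# component and reduce, the same groupby/mean pattern used in the record.
import numpy as np
import pandas as pd
import xarray as xr

times = pd.date_range("2020-01-02", periods=6, freq="7D")
ds = xr.Dataset(
    {"t2m": ("forecast_time", np.random.rand(6))},
    coords={"forecast_time": times},
)
print(ds.groupby("forecast_time.month").mean())
```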
[ "code", "markdown", "code", "markdown" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
4a3bb459725796c02c3b9cc2bedb8ede28cb3a30
29,985
ipynb
Jupyter Notebook
examples/wasserstein_runtime/WassersteinRuntime.ipynb
kjetil-lye/phd_thesis_standalone_plots
e35c683f800f2ef856ea374dfcde4599b8d5d5aa
[ "MIT" ]
null
null
null
examples/wasserstein_runtime/WassersteinRuntime.ipynb
kjetil-lye/phd_thesis_standalone_plots
e35c683f800f2ef856ea374dfcde4599b8d5d5aa
[ "MIT" ]
null
null
null
examples/wasserstein_runtime/WassersteinRuntime.ipynb
kjetil-lye/phd_thesis_standalone_plots
e35c683f800f2ef856ea374dfcde4599b8d5d5aa
[ "MIT" ]
null
null
null
178.482143
25,912
0.902751
[ [ [ "import numpy as np\nimport sys\nsys.path.append('../../python')\nimport plot_info\nimport matplotlib.pyplot as plt\nplot_info.set_notebook_name('WassersteinRuntime.ipynb')\nimport ot\nimport time", "_____no_output_____" ], [ "\n# Measure runtime (in seconds) as a function of number of samples\n\nsample_numbers = 2**np.arange(4, 15)\nretries = 10\n\nruntimes = np.zeros((len(sample_numbers), retries))\n\nfor n, sample_number in enumerate(sample_numbers):\n print(f'sample_number = {sample_number}')\n weights = np.ones(sample_number)/sample_number\n for retry in range(retries):\n x1 = np.random.uniform(0,1, (sample_number, 1))\n x2 = np.random.uniform(0,1, (sample_number, 1))\n cost_matrix = ot.dist(x1, x2, metric='euclidean')\n \n start = time.time()\n \n assignment = ot.emd(weights, weights, cost_matrix)\n \n end = time.time()\n \n runtimes[n, retry] = end-start\n \n \nmean_runtimes = np.mean(runtimes, axis=1)\nstd_runtimes = np.std(runtimes, axis=1)\n\nplt.loglog(sample_numbers, mean_runtimes, '-o', label='Runtime')\n\npoly = np.polyfit(np.log(sample_numbers), np.log(mean_runtimes), 1)\nplt.loglog(sample_numbers, np.exp(poly[1])*sample_numbers**poly[0], '--',\n label=f'$\\\\mathcal{{O}}(M^{{{poly[0]:.1f}}})$')\n\nplt.legend()\n\nplt.ylabel(\"Runtime (s)\")\nplt.xlabel(\"Number of samples in ensemble $M$\")\n\nplt.xscale('log', basex=2)\nplt.yscale('log', basey=2)\n\nplt.xticks(sample_numbers, [f'${M}$' for M in sample_numbers])\n \nplot_info.showAndSave('wasserstein_runtime')", "sample_number = 16\nsample_number = 32\nsample_number = 64\nsample_number = 128\nsample_number = 256\nsample_number = 512\nsample_number = 1024\nsample_number = 2048\nsample_number = 4096\nsample_number = 8192\nsample_number = 16384\n\n\n" ], [ "print(mean_runtimes[sample_numbers==1024]*1024**(3*2)/60/60/24/365)", "[4.82792066e+09]\n" ], [ "print(mean_runtimes[sample_numbers==1024]*1024**(3*2)/60/60*0.0255)", "[1.07846092e+12]\n" ] ] ]
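The fit above shows the exact LP solver in `ot.emd` scaling roughly cubically in the ensemble size M. As a contrast (a sketch under the same 1-D uniform setup, not part of the archived timings): for one-dimensional samples with equal weights, W1 has a closed form via sorting, costing only O(M log M):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 1024
x1 = rng.uniform(0, 1, M)
x2 = rng.uniform(0, 1, M)

# Exact W1 between equally weighted 1-D empirical measures: the optimal
# coupling matches sorted samples, so no LP solve is needed.
w1 = np.mean(np.abs(np.sort(x1) - np.sort(x2)))
print(w1)
```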
[ "code" ]
[ [ "code", "code", "code", "code" ] ]
4a3bc36a979330de8592a46f3842b609df5a6be0
31,175
ipynb
Jupyter Notebook
guides/ipynb/transfer_learning.ipynb
wariua/keras-io-ko
b89fa9c34af006aa3584dd765fe78f36374246a7
[ "Apache-2.0" ]
1,542
2020-05-06T20:23:07.000Z
2022-03-31T15:25:03.000Z
guides/ipynb/transfer_learning.ipynb
wariua/keras-io-ko
b89fa9c34af006aa3584dd765fe78f36374246a7
[ "Apache-2.0" ]
625
2020-05-07T10:21:15.000Z
2022-03-31T17:19:35.000Z
guides/ipynb/transfer_learning.ipynb
wariua/keras-io-ko
b89fa9c34af006aa3584dd765fe78f36374246a7
[ "Apache-2.0" ]
1,616
2020-05-07T06:28:33.000Z
2022-03-31T13:35:35.000Z
35.385925
137
0.61405
[ [ [ "# Transfer learning & fine-tuning\n\n**Author:** [fchollet](https://twitter.com/fchollet)<br>\n**Date created:** 2020/04/15<br>\n**Last modified:** 2020/05/12<br>\n**Description:** Complete guide to transfer learning & fine-tuning in Keras.", "_____no_output_____" ], [ "## Setup", "_____no_output_____" ] ], [ [ "import numpy as np\nimport tensorflow as tf\nfrom tensorflow import keras", "_____no_output_____" ] ], [ [ "## Introduction\n\n**Transfer learning** consists of taking features learned on one problem, and\nleveraging them on a new, similar problem. For instance, features from a model that has\nlearned to identify racoons may be useful to kick-start a model meant to identify\n tanukis.\n\nTransfer learning is usually done for tasks where your dataset has too little data to\n train a full-scale model from scratch.\n\nThe most common incarnation of transfer learning in the context of deep learning is the\n following workflow:\n\n1. Take layers from a previously trained model.\n2. Freeze them, so as to avoid destroying any of the information they contain during\n future training rounds.\n3. Add some new, trainable layers on top of the frozen layers. They will learn to turn\n the old features into predictions on a new dataset.\n4. Train the new layers on your dataset.\n\nA last, optional step, is **fine-tuning**, which consists of unfreezing the entire\nmodel you obtained above (or part of it), and re-training it on the new data with a\nvery low learning rate. This can potentially achieve meaningful improvements, by\n incrementally adapting the pretrained features to the new data.\n\nFirst, we will go over the Keras `trainable` API in detail, which underlies most\n transfer learning & fine-tuning workflows.\n\nThen, we'll demonstrate the typical workflow by taking a model pretrained on the\nImageNet dataset, and retraining it on the Kaggle \"cats vs dogs\" classification\n dataset.\n\nThis is adapted from\n[Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python)\n and the 2016 blog post\n[\"building powerful image classification models using very little\n data\"](https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html).", "_____no_output_____" ], [ "## Freezing layers: understanding the `trainable` attribute\n\nLayers & models have three weight attributes:\n\n- `weights` is the list of all weights variables of the layer.\n- `trainable_weights` is the list of those that are meant to be updated (via gradient\n descent) to minimize the loss during training.\n- `non_trainable_weights` is the list of those that aren't meant to be trained.\n Typically they are updated by the model during the forward pass.\n\n**Example: the `Dense` layer has 2 trainable weights (kernel & bias)**", "_____no_output_____" ] ], [ [ "layer = keras.layers.Dense(3)\nlayer.build((None, 4)) # Create the weights\n\nprint(\"weights:\", len(layer.weights))\nprint(\"trainable_weights:\", len(layer.trainable_weights))\nprint(\"non_trainable_weights:\", len(layer.non_trainable_weights))", "_____no_output_____" ] ], [ [ "In general, all weights are trainable weights. The only built-in layer that has\nnon-trainable weights is the `BatchNormalization` layer. 
It uses non-trainable weights\n to keep track of the mean and variance of its inputs during training.\nTo learn how to use non-trainable weights in your own custom layers, see the\n[guide to writing new layers from scratch](https://keras.io/guides/making_new_layers_and_models_via_subclassing/).\n\n**Example: the `BatchNormalization` layer has 2 trainable weights and 2 non-trainable\n weights**", "_____no_output_____" ] ], [ [ "layer = keras.layers.BatchNormalization()\nlayer.build((None, 4)) # Create the weights\n\nprint(\"weights:\", len(layer.weights))\nprint(\"trainable_weights:\", len(layer.trainable_weights))\nprint(\"non_trainable_weights:\", len(layer.non_trainable_weights))", "_____no_output_____" ] ], [ [ "Layers & models also feature a boolean attribute `trainable`. Its value can be changed.\nSetting `layer.trainable` to `False` moves all the layer's weights from trainable to\nnon-trainable. This is called \"freezing\" the layer: the state of a frozen layer won't\nbe updated during training (either when training with `fit()` or when training with\n any custom loop that relies on `trainable_weights` to apply gradient updates).\n\n**Example: setting `trainable` to `False`**", "_____no_output_____" ] ], [ [ "layer = keras.layers.Dense(3)\nlayer.build((None, 4)) # Create the weights\nlayer.trainable = False # Freeze the layer\n\nprint(\"weights:\", len(layer.weights))\nprint(\"trainable_weights:\", len(layer.trainable_weights))\nprint(\"non_trainable_weights:\", len(layer.non_trainable_weights))", "_____no_output_____" ] ], [ [ "When a trainable weight becomes non-trainable, its value is no longer updated during\n training.", "_____no_output_____" ] ], [ [ "# Make a model with 2 layers\nlayer1 = keras.layers.Dense(3, activation=\"relu\")\nlayer2 = keras.layers.Dense(3, activation=\"sigmoid\")\nmodel = keras.Sequential([keras.Input(shape=(3,)), layer1, layer2])\n\n# Freeze the first layer\nlayer1.trainable = False\n\n# Keep a copy of the weights of layer1 for later reference\ninitial_layer1_weights_values = layer1.get_weights()\n\n# Train the model\nmodel.compile(optimizer=\"adam\", loss=\"mse\")\nmodel.fit(np.random.random((2, 3)), np.random.random((2, 3)))\n\n# Check that the weights of layer1 have not changed during training\nfinal_layer1_weights_values = layer1.get_weights()\nnp.testing.assert_allclose(\n initial_layer1_weights_values[0], final_layer1_weights_values[0]\n)\nnp.testing.assert_allclose(\n initial_layer1_weights_values[1], final_layer1_weights_values[1]\n)", "_____no_output_____" ] ], [ [ "Do not confuse the `layer.trainable` attribute with the argument `training` in\n`layer.__call__()` (which controls whether the layer should run its forward pass in\n inference mode or training mode). 
For more information, see the\n[Keras FAQ](\n https://keras.io/getting_started/faq/#whats-the-difference-between-the-training-argument-in-call-and-the-trainable-attribute).", "_____no_output_____" ], [ "## Recursive setting of the `trainable` attribute\n\nIf you set `trainable = False` on a model or on any layer that has sublayers,\nall children layers become non-trainable as well.\n\n**Example:**", "_____no_output_____" ] ], [ [ "inner_model = keras.Sequential(\n [\n keras.Input(shape=(3,)),\n keras.layers.Dense(3, activation=\"relu\"),\n keras.layers.Dense(3, activation=\"relu\"),\n ]\n)\n\nmodel = keras.Sequential(\n [keras.Input(shape=(3,)), inner_model, keras.layers.Dense(3, activation=\"sigmoid\"),]\n)\n\nmodel.trainable = False # Freeze the outer model\n\nassert inner_model.trainable == False # All layers in `model` are now frozen\nassert inner_model.layers[0].trainable == False # `trainable` is propagated recursively", "_____no_output_____" ] ], [ [ "## The typical transfer-learning workflow\n\nThis leads us to how a typical transfer learning workflow can be implemented in Keras:\n\n1. Instantiate a base model and load pre-trained weights into it.\n2. Freeze all layers in the base model by setting `trainable = False`.\n3. Create a new model on top of the output of one (or several) layers from the base\n model.\n4. Train your new model on your new dataset.\n\nNote that an alternative, more lightweight workflow could also be:\n\n1. Instantiate a base model and load pre-trained weights into it.\n2. Run your new dataset through it and record the output of one (or several) layers\n from the base model. This is called **feature extraction**.\n3. Use that output as input data for a new, smaller model.\n\nA key advantage of that second workflow is that you only run the base model once on\n your data, rather than once per epoch of training. So it's a lot faster & cheaper.\n\nAn issue with that second workflow, though, is that it doesn't allow you to dynamically\nmodify the input data of your new model during training, which is required when doing\ndata augmentation, for instance. Transfer learning is typically used for tasks when\nyour new dataset has too little data to train a full-scale model from scratch, and in\nsuch scenarios data augmentation is very important. So in what follows, we will focus\n on the first workflow.\n\nHere's what the first workflow looks like in Keras:\n\nFirst, instantiate a base model with pre-trained weights.\n\n```python\nbase_model = keras.applications.Xception(\n weights='imagenet', # Load weights pre-trained on ImageNet.\n input_shape=(150, 150, 3),\n include_top=False) # Do not include the ImageNet classifier at the top.\n```\n\nThen, freeze the base model.\n\n```python\nbase_model.trainable = False\n```\n\nCreate a new model on top.\n\n```python\ninputs = keras.Input(shape=(150, 150, 3))\n# We make sure that the base_model is running in inference mode here,\n# by passing `training=False`. 
This is important for fine-tuning, as you will\n# learn in a few paragraphs.\nx = base_model(inputs, training=False)\n# Convert features of shape `base_model.output_shape[1:]` to vectors\nx = keras.layers.GlobalAveragePooling2D()(x)\n# A Dense classifier with a single unit (binary classification)\noutputs = keras.layers.Dense(1)(x)\nmodel = keras.Model(inputs, outputs)\n```\n\nTrain the model on new data.\n\n```python\nmodel.compile(optimizer=keras.optimizers.Adam(),\n              loss=keras.losses.BinaryCrossentropy(from_logits=True),\n              metrics=[keras.metrics.BinaryAccuracy()])\nmodel.fit(new_dataset, epochs=20, callbacks=..., validation_data=...)\n```", "_____no_output_____" ], [ "## Fine-tuning\n\nOnce your model has converged on the new data, you can try to unfreeze all or part of\n the base model and retrain the whole model end-to-end with a very low learning rate.\n\nThis is an optional last step that can potentially give you incremental improvements.\n It could also potentially lead to quick overfitting -- keep that in mind.\n\nIt is critical to only do this step *after* the model with frozen layers has been\ntrained to convergence. If you mix randomly-initialized trainable layers with\ntrainable layers that hold pre-trained features, the randomly-initialized layers will\ncause very large gradient updates during training, which will destroy your pre-trained\n features.\n\nIt's also critical to use a very low learning rate at this stage, because\nyou are training a much larger model than in the first round of training, on a dataset\n that is typically very small.\nAs a result, you are at risk of overfitting very quickly if you apply large weight\n updates. Here, you only want to readapt the pretrained weights in an incremental way.\n\nThis is how to implement fine-tuning of the whole base model:\n\n```python\n# Unfreeze the base model\nbase_model.trainable = True\n\n# It's important to recompile your model after you make any changes\n# to the `trainable` attribute of any inner layer, so that your changes\n# are taken into account\nmodel.compile(optimizer=keras.optimizers.Adam(1e-5),  # Very low learning rate\n              loss=keras.losses.BinaryCrossentropy(from_logits=True),\n              metrics=[keras.metrics.BinaryAccuracy()])\n\n# Train end-to-end. Be careful to stop before you overfit!\nmodel.fit(new_dataset, epochs=10, callbacks=..., validation_data=...)\n```\n\n**Important note about `compile()` and `trainable`**\n\nCalling `compile()` on a model is meant to \"freeze\" the behavior of that model. This\n implies that the `trainable`\nattribute values at the time the model is compiled should be preserved throughout the\n lifetime of that model,\nuntil `compile` is called again. Hence, if you change any `trainable` value, make sure\n to call `compile()` again on your\nmodel for your changes to be taken into account.\n\n**Important notes about `BatchNormalization` layer**\n\nMany image models contain `BatchNormalization` layers. That layer is a special case on\n every imaginable count. Here are a few things to keep in mind.\n\n- `BatchNormalization` contains 2 non-trainable weights that get updated during\ntraining. These are the variables tracking the mean and variance of the inputs.\n- When you set `bn_layer.trainable = False`, the `BatchNormalization` layer will\nrun in inference mode, and will not update its mean & variance statistics. 
This is not\nthe case for other layers in general, as\n[weight trainability & inference/training modes are two orthogonal concepts](\n https://keras.io/getting_started/faq/#whats-the-difference-between-the-training-argument-in-call-and-the-trainable-attribute).\nBut the two are tied in the case of the `BatchNormalization` layer.\n- When you unfreeze a model that contains `BatchNormalization` layers in order to do\nfine-tuning, you should keep the `BatchNormalization` layers in inference mode by\n passing `training=False` when calling the base model.\nOtherwise the updates applied to the non-trainable weights will suddenly destroy\nwhat the model has learned.\n\nYou'll see this pattern in action in the end-to-end example at the end of this guide.\n", "_____no_output_____" ], [ "## Transfer learning & fine-tuning with a custom training loop\n\nIf instead of `fit()`, you are using your own low-level training loop, the workflow\nstays essentially the same. You should be careful to only take into account the list\n `model.trainable_weights` when applying gradient updates:\n\n```python\n# Create base model\nbase_model = keras.applications.Xception(\n weights='imagenet',\n input_shape=(150, 150, 3),\n include_top=False)\n# Freeze base model\nbase_model.trainable = False\n\n# Create new model on top.\ninputs = keras.Input(shape=(150, 150, 3))\nx = base_model(inputs, training=False)\nx = keras.layers.GlobalAveragePooling2D()(x)\noutputs = keras.layers.Dense(1)(x)\nmodel = keras.Model(inputs, outputs)\n\nloss_fn = keras.losses.BinaryCrossentropy(from_logits=True)\noptimizer = keras.optimizers.Adam()\n\n# Iterate over the batches of a dataset.\nfor inputs, targets in new_dataset:\n # Open a GradientTape.\n with tf.GradientTape() as tape:\n # Forward pass.\n predictions = model(inputs)\n # Compute the loss value for this batch.\n loss_value = loss_fn(targets, predictions)\n\n # Get gradients of loss wrt the *trainable* weights.\n gradients = tape.gradient(loss_value, model.trainable_weights)\n # Update the weights of the model.\n optimizer.apply_gradients(zip(gradients, model.trainable_weights))\n```", "_____no_output_____" ], [ "Likewise for fine-tuning.", "_____no_output_____" ], [ "## An end-to-end example: fine-tuning an image classification model on a cats vs. dogs dataset\n\nTo solidify these concepts, let's walk you through a concrete end-to-end transfer\nlearning & fine-tuning example. We will load the Xception model, pre-trained on\n ImageNet, and use it on the Kaggle \"cats vs. dogs\" classification dataset.", "_____no_output_____" ], [ "### Getting the data\n\nFirst, let's fetch the cats vs. dogs dataset using TFDS. If you have your own dataset,\nyou'll probably want to use the utility\n`tf.keras.preprocessing.image_dataset_from_directory` to generate similar labeled\n dataset objects from a set of images on disk filed into class-specific folders.\n\nTransfer learning is most useful when working with very small datasets. 
To keep our\ndataset small, we will use 40% of the original training data (25,000 images) for\n training, 10% for validation, and 10% for testing.", "_____no_output_____" ] ], [ [ "import tensorflow_datasets as tfds\n\ntfds.disable_progress_bar()\n\ntrain_ds, validation_ds, test_ds = tfds.load(\n \"cats_vs_dogs\",\n # Reserve 10% for validation and 10% for test\n split=[\"train[:40%]\", \"train[40%:50%]\", \"train[50%:60%]\"],\n as_supervised=True, # Include labels\n)\n\nprint(\"Number of training samples: %d\" % tf.data.experimental.cardinality(train_ds))\nprint(\n \"Number of validation samples: %d\" % tf.data.experimental.cardinality(validation_ds)\n)\nprint(\"Number of test samples: %d\" % tf.data.experimental.cardinality(test_ds))", "_____no_output_____" ] ], [ [ "These are the first 9 images in the training dataset -- as you can see, they're all\n different sizes.", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\n\nplt.figure(figsize=(10, 10))\nfor i, (image, label) in enumerate(train_ds.take(9)):\n ax = plt.subplot(3, 3, i + 1)\n plt.imshow(image)\n plt.title(int(label))\n plt.axis(\"off\")", "_____no_output_____" ] ], [ [ "We can also see that label 1 is \"dog\" and label 0 is \"cat\".", "_____no_output_____" ], [ "### Standardizing the data\n\nOur raw images have a variety of sizes. In addition, each pixel consists of 3 integer\nvalues between 0 and 255 (RGB level values). This isn't a great fit for feeding a\n neural network. We need to do 2 things:\n\n- Standardize to a fixed image size. We pick 150x150.\n- Normalize pixel values between -1 and 1. We'll do this using a `Normalization` layer as\n part of the model itself.\n\nIn general, it's a good practice to develop models that take raw data as input, as\nopposed to models that take already-preprocessed data. The reason being that, if your\nmodel expects preprocessed data, any time you export your model to use it elsewhere\n(in a web browser, in a mobile app), you'll need to reimplement the exact same\npreprocessing pipeline. This gets very tricky very quickly. So we should do the least\n possible amount of preprocessing before hitting the model.\n\nHere, we'll do image resizing in the data pipeline (because a deep neural network can\nonly process contiguous batches of data), and we'll do the input value scaling as part\n of the model, when we create it.\n\nLet's resize images to 150x150:", "_____no_output_____" ] ], [ [ "size = (150, 150)\n\ntrain_ds = train_ds.map(lambda x, y: (tf.image.resize(x, size), y))\nvalidation_ds = validation_ds.map(lambda x, y: (tf.image.resize(x, size), y))\ntest_ds = test_ds.map(lambda x, y: (tf.image.resize(x, size), y))", "_____no_output_____" ] ], [ [ "Besides, let's batch the data and use caching & prefetching to optimize loading speed.", "_____no_output_____" ] ], [ [ "batch_size = 32\n\ntrain_ds = train_ds.cache().batch(batch_size).prefetch(buffer_size=10)\nvalidation_ds = validation_ds.cache().batch(batch_size).prefetch(buffer_size=10)\ntest_ds = test_ds.cache().batch(batch_size).prefetch(buffer_size=10)", "_____no_output_____" ] ], [ [ "### Using random data augmentation\n\nWhen you don't have a large image dataset, it's a good practice to artificially\n introduce sample diversity by applying random yet realistic transformations to\nthe training images, such as random horizontal flipping or small random rotations. 
This\nhelps expose the model to different aspects of the training data while slowing down\n overfitting.", "_____no_output_____" ] ], [ [ "from tensorflow import keras\nfrom tensorflow.keras import layers\n\ndata_augmentation = keras.Sequential(\n    [layers.RandomFlip(\"horizontal\"), layers.RandomRotation(0.1),]\n)", "_____no_output_____" ] ], [ [ "Let's visualize what the first image of the first batch looks like after various random\n transformations:", "_____no_output_____" ] ], [ [ "import numpy as np\n\nfor images, labels in train_ds.take(1):\n    plt.figure(figsize=(10, 10))\n    first_image = images[0]\n    for i in range(9):\n        ax = plt.subplot(3, 3, i + 1)\n        augmented_image = data_augmentation(\n            tf.expand_dims(first_image, 0), training=True\n        )\n        plt.imshow(augmented_image[0].numpy().astype(\"int32\"))\n        plt.title(int(labels[0]))\n        plt.axis(\"off\")", "_____no_output_____" ] ], [ [ "## Build a model\n\nNow let's build a model that follows the blueprint we've explained earlier.\n\nNote that:\n\n- We add a `Rescaling` layer to scale input values (initially in the `[0, 255]`\n range) to the `[-1, 1]` range.\n- We add a `Dropout` layer before the classification layer, for regularization.\n- We make sure to pass `training=False` when calling the base model, so that\nit runs in inference mode, so that batchnorm statistics don't get updated\neven after we unfreeze the base model for fine-tuning.", "_____no_output_____" ] ], [ [ "base_model = keras.applications.Xception(\n    weights=\"imagenet\",  # Load weights pre-trained on ImageNet.\n    input_shape=(150, 150, 3),\n    include_top=False,\n)  # Do not include the ImageNet classifier at the top.\n\n# Freeze the base_model\nbase_model.trainable = False\n\n# Create new model on top\ninputs = keras.Input(shape=(150, 150, 3))\nx = data_augmentation(inputs)  # Apply random data augmentation\n\n# Pre-trained Xception weights require that input be scaled\n# from (0, 255) to a range of (-1., +1.), the rescaling layer\n# outputs: `(inputs * scale) + offset`\nscale_layer = keras.layers.Rescaling(scale=1 / 127.5, offset=-1)\nx = scale_layer(x)\n\n# The base model contains batchnorm layers. We want to keep them in inference mode\n# when we unfreeze the base model for fine-tuning, so we make sure that the\n# base_model is running in inference mode here.\nx = base_model(x, training=False)\nx = keras.layers.GlobalAveragePooling2D()(x)\nx = keras.layers.Dropout(0.2)(x)  # Regularize with dropout\noutputs = keras.layers.Dense(1)(x)\nmodel = keras.Model(inputs, outputs)\n\nmodel.summary()", "_____no_output_____" ] ], [ [ "## Train the top layer", "_____no_output_____" ] ], [ [ "model.compile(\n    optimizer=keras.optimizers.Adam(),\n    loss=keras.losses.BinaryCrossentropy(from_logits=True),\n    metrics=[keras.metrics.BinaryAccuracy()],\n)\n\nepochs = 20\nmodel.fit(train_ds, epochs=epochs, validation_data=validation_ds)", "_____no_output_____" ] ], [ [ "## Do a round of fine-tuning of the entire model\n\nFinally, let's unfreeze the base model and train the entire model end-to-end with a low\n learning rate.\n\nImportantly, although the base model becomes trainable, it is still running in\ninference mode since we passed `training=False` when calling it when we built the\nmodel. This means that the batch normalization layers inside won't update their batch\nstatistics. If they did, they would wreak havoc on the representations learned by the\n model so far.
Note that it keeps running in inference mode\n# since we passed `training=False` when calling it. This means that\n# the batchnorm layers will not update their batch statistics.\n# This prevents the batchnorm layers from undoing all the training\n# we've done so far.\nbase_model.trainable = True\nmodel.summary()\n\nmodel.compile(\n optimizer=keras.optimizers.Adam(1e-5), # Low learning rate\n loss=keras.losses.BinaryCrossentropy(from_logits=True),\n metrics=[keras.metrics.BinaryAccuracy()],\n)\n\nepochs = 10\nmodel.fit(train_ds, epochs=epochs, validation_data=validation_ds)", "_____no_output_____" ] ], [ [ "After 10 epochs, fine-tuning gains us a nice improvement here.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
4a3bd0500002af11d407396ba566967e23386540
490,978
ipynb
Jupyter Notebook
notebooks/Sandford_2020/section_3.ipynb
NathanSandford/Chem-I-Calc
34ec9b9e6c23a7d55f64b20de3b17547e1471dfd
[ "MIT" ]
4
2020-06-18T15:38:48.000Z
2021-04-09T04:49:16.000Z
notebooks/Sandford_2020/section_3.ipynb
NathanSandford/Chem-I-Calc
34ec9b9e6c23a7d55f64b20de3b17547e1471dfd
[ "MIT" ]
19
2019-08-02T15:13:35.000Z
2020-06-22T16:18:15.000Z
notebooks/Sandford_2020/section_3.ipynb
NathanSandford/Chem-I-Calc
34ec9b9e6c23a7d55f64b20de3b17547e1471dfd
[ "MIT" ]
4
2019-09-16T23:10:20.000Z
2021-05-10T08:19:42.000Z
2,442.676617
483,912
0.958829
[ [ [ "<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Sandford+-2020,-Section-3:-Methods\" data-toc-modified-id=\"Sandford+-2020,-Section-3:-Methods-1\"><span class=\"toc-item-num\">1&nbsp;&nbsp;</span>Sandford+ 2020, Section 3: Methods</a></span><ul class=\"toc-item\"><li><span><a href=\"#Imports\" data-toc-modified-id=\"Imports-1.1\"><span class=\"toc-item-num\">1.1&nbsp;&nbsp;</span>Imports</a></span></li><li><span><a href=\"#Plotting-Configs\" data-toc-modified-id=\"Plotting-Configs-1.2\"><span class=\"toc-item-num\">1.2&nbsp;&nbsp;</span>Plotting Configs</a></span></li><li><span><a href=\"#Figure-2:-HR-&amp;-Kiel-Diagrams\" data-toc-modified-id=\"Figure-2:-HR-&amp;-Kiel-Diagrams-1.3\"><span class=\"toc-item-num\">1.3&nbsp;&nbsp;</span>Figure 2: HR &amp; Kiel Diagrams</a></span></li></ul></li></ul></div>", "_____no_output_____" ], [ "# Sandford+ 2020, Section 3: Methods\n## Imports ", "_____no_output_____" ] ], [ [ "import numpy as np\nimport pandas as pd\n\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nfrom matplotlib.gridspec import GridSpec\n%matplotlib inline", "_____no_output_____" ] ], [ [ "## Plotting Configs", "_____no_output_____" ] ], [ [ "output_dir = './figures/'\n\nmpl.rc('axes', grid=True, lw=2)\nmpl.rc('ytick', direction='in', labelsize=14)\nmpl.rc('ytick.major', size=5, width=1)\nmpl.rc('xtick', direction='in', labelsize=14)\nmpl.rc('xtick.major', size=5, width=1)\nmpl.rc('ytick', direction='in', labelsize=14)\nmpl.rc('ytick.major', size=5, width=1)\nmpl.rc('grid', lw=0)\nmpl.rc('figure', dpi=300)", "_____no_output_____" ] ], [ [ "## Figure 2: HR & Kiel Diagrams", "_____no_output_____" ] ], [ [ "log_age = 10 # Select 10 Gyr old isochrones\nmetallicities = [-0.5, -1.0, -1.5, -2.0, -2.5] # Select isochrone metallicities\nc = plt.cm.get_cmap('plasma', len(metallicities) + 1)\n\n# Initialize figure\nfig = plt.figure(figsize=(5, 9))\ngs = GridSpec(2, 1, hspace=0)\nax1 = plt.subplot(gs[0, 0])\nax2 = plt.subplot(gs[1, 0])\n\n# Loop through metallicities\nfor i, feh in enumerate(metallicities):\n iso = pd.read_hdf('./isochrones.h5', f'{feh:1.1f}') # Load isochrone\n iso = iso[(iso['log10_isochrone_age_yr'] == log_age) & (10**iso['log_Teff'] >= 3500)] # Select on age and effective temperature\n rgb_idx = (np.abs(iso['Bessell_V'] + 0.5)).idxmin() # Find RGB star w/ M_V = -0.5 \n # Plot Isochrones\n ax1.plot(10**iso['log_Teff'], iso['Bessell_V'], c=c(i), lw=2, zorder=-1, label=r'$\\log(Z)=$'+f'{feh:1.1f}')\n ax2.plot(10**iso['log_Teff'], iso['log_g'], c=c(i), lw=2, zorder=-1, label=r'$\\log(Z)=$'+f'{feh:1.1f}')\n # Plot Reference Stars\n ax1.scatter(10**iso['log_Teff'][rgb_idx], iso['Bessell_V'][rgb_idx], \n marker='*', c=[c(i)], edgecolor='k', lw=0.5, s=150)\n ax2.scatter(10**iso['log_Teff'][rgb_idx], iso['log_g'][rgb_idx], \n marker='*', c=[c(i)], edgecolor='k', lw=0.5, s=150)\n if feh == -1.5:\n trgb_idx = (np.abs(iso['Bessell_V'] + 2.5)).idxmin() # Find TRGB star w/ M_V = -2.5\n msto_idx = (np.abs(iso['Bessell_V'] - 3.5)).idxmin() # Find MSTO star w/ M_V = +3.5\n ax1.scatter(10**iso['log_Teff'][trgb_idx], iso['Bessell_V'][trgb_idx], \n marker='o', c=[c(2)], edgecolor='k', lw=0.5, s=100, label=f'TRGB')\n ax2.scatter(10**iso['log_Teff'][trgb_idx], iso['log_g'][trgb_idx], \n marker='o', c=[c(2)], edgecolor='k', lw=0.5, s=100, label=f'TRGB')\n ax1.scatter(10**iso['log_Teff'][rgb_idx], iso['Bessell_V'][rgb_idx], \n marker='*', c=[c(2)], edgecolor='k', lw=0.5, s=150, label=f'RGB')\n 
ax2.scatter(10**iso['log_Teff'][rgb_idx], iso['log_g'][rgb_idx], \n marker='*', c=[c(2)], edgecolor='k', lw=0.5, s=150, label=f'RGB')\n ax1.scatter(10**iso['log_Teff'][msto_idx], iso['Bessell_V'][msto_idx], \n marker='s', c=[c(2)], edgecolor='k', lw=0.5, s=100, label=f'MSTO')\n ax2.scatter(10**iso['log_Teff'][msto_idx], iso['log_g'][msto_idx], \n marker='s', c=[c(2)], edgecolor='k', lw=0.5, s=100, label=f'MSTO')\n# Axes\nax1.set_ylabel(r'$M_V$', size=24)\nax2.set_ylabel(r'$\\log(g)$', size=24)\nax2.set_xlabel(r'$T_{eff}$', size=24)\nax1.set_ylim(-3.5, 7)\nax1.invert_xaxis()\nax1.invert_yaxis()\nax2.invert_xaxis()\nax2.invert_yaxis()\n# Legend\nax1.legend(fontsize=10, loc='upper left')\n\nplt.tight_layout()\nfig.savefig('./figures/hr_kiel.png')\nplt.show()", "_____no_output_____" ] ] ]
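The record above locates its TRGB/RGB/MSTO reference stars with an `idxmin` over absolute magnitude differences. A toy illustration of that selection trick (synthetic magnitudes; the real notebook reads `isochrones.h5`):

```python
import numpy as np
import pandas as pd

# Hypothetical isochrone table: pick the row whose V-band absolute magnitude
# is closest to a target, e.g. M_V = -0.5 for the RGB reference star.
iso = pd.DataFrame({"Bessell_V": np.linspace(-3.0, 6.0, 50)})
rgb_idx = (iso["Bessell_V"] + 0.5).abs().idxmin()
print(iso.loc[rgb_idx])
```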
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
4a3bd0629f1f8d096ccc9906caf96f05ad945d9f
49,505
ipynb
Jupyter Notebook
Big-Data-Clusters/CU8/Public/content/cert-management/cer024-create-controller-cert.ipynb
DiHo78/tigertoolbox
90018529993cd12db7fc54e1def247e332e208dd
[ "MIT" ]
null
null
null
Big-Data-Clusters/CU8/Public/content/cert-management/cer024-create-controller-cert.ipynb
DiHo78/tigertoolbox
90018529993cd12db7fc54e1def247e332e208dd
[ "MIT" ]
1
2019-11-13T10:23:20.000Z
2019-11-13T10:23:20.000Z
Big-Data-Clusters/CU8/Public/content/cert-management/cer024-create-controller-cert.ipynb
DiHo78/tigertoolbox
90018529993cd12db7fc54e1def247e332e208dd
[ "MIT" ]
null
null
null
53.751357
2,724
0.442683
[ [ [ "CER024 - Create Controller certificate\n======================================\n\nThis notebook creates a certificate for the Controller endpoint. It\ncreates a controller-privatekey.pem as the private key and\ncontroller-signingrequest.csr as the signing request.\n\nThe private key is a secret. The signing request (CSR) will be used by\nthe CA to generate a signed certificate for the service.\n\nSteps\n-----\n\n### Parameters", "_____no_output_____" ] ], [ [ "import getpass\n\napp_name = \"controller\"\nscaledset_name = \"control\"\ncontainer_name = \"controller\"\nprefix_keyfile_name = \"controller\"\ncommon_name = \"controller-svc\"\n\ncountry_name = \"US\"\nstate_or_province_name = \"Illinois\"\nlocality_name = \"Chicago\"\norganization_name = \"Contoso\"\norganizational_unit_name = \"Finance\"\nemail_address = f\"{getpass.getuser().lower()}@contoso.com\"\n\nssl_configuration_file = \"service.openssl.cnf\"\n\ndays = \"398\" # the number of days to certify the certificate for\n\ntest_cert_store_root = \"/var/opt/secrets/test-certificates\"\n\nextendedKeyUsage = \"extendedKeyUsage = critical, clientAuth, serverAuth\"", "_____no_output_____" ] ], [ [ "### Common functions\n\nDefine helper functions used in this notebook.", "_____no_output_____" ] ], [ [ "# Define `run` function for transient fault handling, suggestions on error, and scrolling updates on Windows\nimport sys\nimport os\nimport re\nimport json\nimport platform\nimport shlex\nimport shutil\nimport datetime\n\nfrom subprocess import Popen, PIPE\nfrom IPython.display import Markdown\n\nretry_hints = {} # Output in stderr known to be transient, therefore automatically retry\nerror_hints = {} # Output in stderr where a known SOP/TSG exists which will be HINTed for further help\ninstall_hint = {} # The SOP to help install the executable if it cannot be found\n\nfirst_run = True\nrules = None\ndebug_logging = False\n\ndef run(cmd, return_output=False, no_output=False, retry_count=0, base64_decode=False, return_as_json=False):\n \"\"\"Run shell command, stream stdout, print stderr and optionally return output\n\n NOTES:\n\n 1. Commands that need this kind of ' quoting on Windows e.g.:\n\n kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='data-pool')].metadata.name}\n\n Need to actually pass in as '\"':\n\n kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='\"'data-pool'\"')].metadata.name}\n\n The ' quote approach, although correct when pasting into Windows cmd, will hang at the line:\n \n `iter(p.stdout.readline, b'')`\n\n The shlex.split call does the right thing for each platform, just use the '\"' pattern for a '\n \"\"\"\n MAX_RETRIES = 5\n output = \"\"\n retry = False\n\n global first_run\n global rules\n\n if first_run:\n first_run = False\n rules = load_rules()\n\n # When running `azdata sql query` on Windows, replace any \\n in \"\"\" strings, with \" \", otherwise we see:\n #\n # ('HY090', '[HY090] [Microsoft][ODBC Driver Manager] Invalid string or buffer length (0) (SQLExecDirectW)')\n #\n if platform.system() == \"Windows\" and cmd.startswith(\"azdata sql query\"):\n cmd = cmd.replace(\"\\n\", \" \")\n\n # shlex.split is required on bash and for Windows paths with spaces\n #\n cmd_actual = shlex.split(cmd)\n\n # Store this (i.e. kubectl, python etc.) 
to support binary context aware error_hints and retries\n #\n user_provided_exe_name = cmd_actual[0].lower()\n\n # When running python, use the python in the ADS sandbox ({sys.executable})\n #\n if cmd.startswith(\"python \"):\n cmd_actual[0] = cmd_actual[0].replace(\"python\", sys.executable)\n\n # On Mac, when ADS is not launched from terminal, LC_ALL may not be set, which causes pip installs to fail\n # with:\n #\n # UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 4969: ordinal not in range(128)\n #\n # Setting it to a default value of \"en_US.UTF-8\" enables pip install to complete\n #\n if platform.system() == \"Darwin\" and \"LC_ALL\" not in os.environ:\n os.environ[\"LC_ALL\"] = \"en_US.UTF-8\"\n\n # When running `kubectl`, if AZDATA_OPENSHIFT is set, use `oc`\n #\n if cmd.startswith(\"kubectl \") and \"AZDATA_OPENSHIFT\" in os.environ:\n cmd_actual[0] = cmd_actual[0].replace(\"kubectl\", \"oc\")\n\n # To aid supportability, determine which binary file will actually be executed on the machine\n #\n which_binary = None\n\n # Special case for CURL on Windows. The version of CURL in Windows System32 does not work to\n # get JWT tokens, it returns \"(56) Failure when receiving data from the peer\". If another instance\n # of CURL exists on the machine use that one. (Unfortunately the curl.exe in System32 is almost\n # always the first curl.exe in the path, and it can't be uninstalled from System32, so here we\n # look for the 2nd installation of CURL in the path)\n if platform.system() == \"Windows\" and cmd.startswith(\"curl \"):\n path = os.getenv('PATH')\n for p in path.split(os.path.pathsep):\n p = os.path.join(p, \"curl.exe\")\n if os.path.exists(p) and os.access(p, os.X_OK):\n if p.lower().find(\"system32\") == -1:\n cmd_actual[0] = p\n which_binary = p\n break\n\n # Find the path based location (shutil.which) of the executable that will be run (and display it to aid supportability), this\n # seems to be required for .msi installs of azdata.cmd/az.cmd. 
(otherwise Popen returns FileNotFound) \n #\n # NOTE: Bash needs cmd to be the list of the space separated values hence shlex.split.\n #\n if which_binary == None:\n which_binary = shutil.which(cmd_actual[0])\n\n # Display an install HINT, so the user can click on a SOP to install the missing binary\n #\n if which_binary == None:\n if user_provided_exe_name in install_hint and install_hint[user_provided_exe_name] is not None:\n display(Markdown(f'HINT: Use [{install_hint[user_provided_exe_name][0]}]({install_hint[user_provided_exe_name][1]}) to resolve this issue.'))\n\n raise FileNotFoundError(f\"Executable '{cmd_actual[0]}' not found in path (where/which)\")\n else: \n cmd_actual[0] = which_binary\n\n start_time = datetime.datetime.now().replace(microsecond=0)\n\n print(f\"START: {cmd} @ {start_time} ({datetime.datetime.utcnow().replace(microsecond=0)} UTC)\")\n print(f\" using: {which_binary} ({platform.system()} {platform.release()} on {platform.machine()})\")\n print(f\" cwd: {os.getcwd()}\")\n\n # Command-line tools such as CURL and AZDATA HDFS commands output\n # scrolling progress bars, which causes Jupyter to hang forever, to\n # workaround this, use no_output=True\n #\n\n # Work around a infinite hang when a notebook generates a non-zero return code, break out, and do not wait\n #\n wait = True \n\n try:\n if no_output:\n p = Popen(cmd_actual)\n else:\n p = Popen(cmd_actual, stdout=PIPE, stderr=PIPE, bufsize=1)\n with p.stdout:\n for line in iter(p.stdout.readline, b''):\n line = line.decode()\n if return_output:\n output = output + line\n else:\n if cmd.startswith(\"azdata notebook run\"): # Hyperlink the .ipynb file\n regex = re.compile(' \"(.*)\"\\: \"(.*)\"') \n match = regex.match(line)\n if match:\n if match.group(1).find(\"HTML\") != -1:\n display(Markdown(f' - \"{match.group(1)}\": \"{match.group(2)}\"'))\n else:\n display(Markdown(f' - \"{match.group(1)}\": \"[{match.group(2)}]({match.group(2)})\"'))\n\n wait = False\n break # otherwise infinite hang, have not worked out why yet.\n else:\n print(line, end='')\n if rules is not None:\n apply_expert_rules(line)\n\n if wait:\n p.wait()\n except FileNotFoundError as e:\n if install_hint is not None:\n display(Markdown(f'HINT: Use {install_hint} to resolve this issue.'))\n\n raise FileNotFoundError(f\"Executable '{cmd_actual[0]}' not found in path (where/which)\") from e\n\n exit_code_workaround = 0 # WORKAROUND: azdata hangs on exception from notebook on p.wait()\n\n if not no_output:\n for line in iter(p.stderr.readline, b''):\n try:\n line_decoded = line.decode()\n except UnicodeDecodeError:\n # NOTE: Sometimes we get characters back that cannot be decoded(), e.g.\n #\n # \\xa0\n #\n # For example see this in the response from `az group create`:\n #\n # ERROR: Get Token request returned http error: 400 and server \n # response: {\"error\":\"invalid_grant\",# \"error_description\":\"AADSTS700082: \n # The refresh token has expired due to inactivity.\\xa0The token was \n # issued on 2018-10-25T23:35:11.9832872Z\n #\n # which generates the exception:\n #\n # UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa0 in position 179: invalid start byte\n #\n print(\"WARNING: Unable to decode stderr line, printing raw bytes:\")\n print(line)\n line_decoded = \"\"\n pass\n else:\n\n # azdata emits a single empty line to stderr when doing an hdfs cp, don't\n # print this empty \"ERR:\" as it confuses.\n #\n if line_decoded == \"\":\n continue\n \n print(f\"STDERR: {line_decoded}\", end='')\n\n if line_decoded.startswith(\"An 
exception has occurred\") or line_decoded.startswith(\"ERROR: An error occurred while executing the following cell\"):\n exit_code_workaround = 1\n\n # inject HINTs to next TSG/SOP based on output in stderr\n #\n if user_provided_exe_name in error_hints:\n for error_hint in error_hints[user_provided_exe_name]:\n if line_decoded.find(error_hint[0]) != -1:\n display(Markdown(f'HINT: Use [{error_hint[1]}]({error_hint[2]}) to resolve this issue.'))\n\n # apply expert rules (to run follow-on notebooks), based on output\n #\n if rules is not None:\n apply_expert_rules(line_decoded)\n\n # Verify if a transient error, if so automatically retry (recursive)\n #\n if user_provided_exe_name in retry_hints:\n for retry_hint in retry_hints[user_provided_exe_name]:\n if line_decoded.find(retry_hint) != -1:\n if retry_count < MAX_RETRIES:\n print(f\"RETRY: {retry_count} (due to: {retry_hint})\")\n retry_count = retry_count + 1\n output = run(cmd, return_output=return_output, retry_count=retry_count)\n\n if return_output:\n if base64_decode:\n import base64\n\n return base64.b64decode(output).decode('utf-8')\n else:\n return output\n\n elapsed = datetime.datetime.now().replace(microsecond=0) - start_time\n\n # WORKAROUND: We avoid infinite hang above in the `azdata notebook run` failure case, by inferring success (from stdout output), so\n # don't wait here, if success known above\n #\n if wait: \n if p.returncode != 0:\n raise SystemExit(f'Shell command:\\n\\n\\t{cmd} ({elapsed}s elapsed)\\n\\nreturned non-zero exit code: {str(p.returncode)}.\\n')\n else:\n if exit_code_workaround !=0 :\n raise SystemExit(f'Shell command:\\n\\n\\t{cmd} ({elapsed}s elapsed)\\n\\nreturned non-zero exit code: {str(exit_code_workaround)}.\\n')\n\n print(f'\\nSUCCESS: {elapsed}s elapsed.\\n')\n\n if return_output:\n if base64_decode:\n import base64\n\n return base64.b64decode(output).decode('utf-8')\n else:\n return output\n\ndef load_json(filename):\n \"\"\"Load a json file from disk and return the contents\"\"\"\n\n with open(filename, encoding=\"utf8\") as json_file:\n return json.load(json_file)\n\ndef load_rules():\n \"\"\"Load any 'expert rules' from the metadata of this notebook (.ipynb) that should be applied to the stderr of the running executable\"\"\"\n\n # Load this notebook as json to get access to the expert rules in the notebook metadata.\n #\n try:\n j = load_json(\"cer024-create-controller-cert.ipynb\")\n except:\n pass # If the user has renamed the book, we can't load ourself. NOTE: Is there a way in Jupyter, to know your own filename?\n else:\n if \"metadata\" in j and \\\n \"azdata\" in j[\"metadata\"] and \\\n \"expert\" in j[\"metadata\"][\"azdata\"] and \\\n \"expanded_rules\" in j[\"metadata\"][\"azdata\"][\"expert\"]:\n\n rules = j[\"metadata\"][\"azdata\"][\"expert\"][\"expanded_rules\"]\n\n rules.sort() # Sort rules, so they run in priority order (the [0] element). Lowest value first.\n\n # print (f\"EXPERT: There are {len(rules)} rules to evaluate.\")\n\n return rules\n\ndef apply_expert_rules(line):\n \"\"\"Determine if the stderr line passed in, matches the regular expressions for any of the 'expert rules', if so\n inject a 'HINT' to the follow-on SOP/TSG to run\"\"\"\n\n global rules\n\n for rule in rules:\n notebook = rule[1]\n cell_type = rule[2]\n output_type = rule[3] # i.e. stream or error\n output_type_name = rule[4] # i.e. ename or name \n output_type_value = rule[5] # i.e. SystemExit or stdout\n details_name = rule[6] # i.e. 
evalue or text \n expression = rule[7].replace(\"\\\\*\", \"*\") # Something escaped *, and put a \\ in front of it!\n\n if debug_logging:\n print(f\"EXPERT: If rule '{expression}' satisfied', run '{notebook}'.\")\n\n if re.match(expression, line, re.DOTALL):\n\n if debug_logging:\n print(\"EXPERT: MATCH: name = value: '{0}' = '{1}' matched expression '{2}', therefore HINT '{4}'\".format(output_type_name, output_type_value, expression, notebook))\n\n match_found = True\n\n display(Markdown(f'HINT: Use [{notebook}]({notebook}) to resolve this issue.'))\n\n\n\n\nprint('Common functions defined successfully.')\n\n# Hints for binary (transient fault) retry, (known) error and install guide\n#\nretry_hints = {'kubectl': ['A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond'], 'azdata': ['Endpoint sql-server-master does not exist', 'Endpoint livy does not exist', 'Failed to get state for cluster', 'Endpoint webhdfs does not exist', 'Adaptive Server is unavailable or does not exist', 'Error: Address already in use', 'Login timeout expired (0) (SQLDriverConnect)']}\nerror_hints = {'kubectl': [['no such host', 'TSG010 - Get configuration contexts', '../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb'], ['No connection could be made because the target machine actively refused it', 'TSG056 - Kubectl fails with No connection could be made because the target machine actively refused it', '../repair/tsg056-kubectl-no-connection-could-be-made.ipynb']], 'azdata': [['The token is expired', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Reason: Unauthorized', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Max retries exceeded with url: /api/v1/bdc/endpoints', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Look at the controller logs for more details', 'TSG027 - Observe cluster deployment', '../diagnose/tsg027-observe-bdc-create.ipynb'], ['provided port is already allocated', 'TSG062 - Get tail of all previous container logs for pods in BDC namespace', '../log-files/tsg062-tail-bdc-previous-container-logs.ipynb'], ['Create cluster failed since the existing namespace', 'SOP061 - Delete a big data cluster', '../install/sop061-delete-bdc.ipynb'], ['Failed to complete kube config setup', 'TSG067 - Failed to complete kube config setup', '../repair/tsg067-failed-to-complete-kube-config-setup.ipynb'], ['Error processing command: \"ApiError', 'TSG110 - Azdata returns ApiError', '../repair/tsg110-azdata-returns-apierror.ipynb'], ['Error processing command: \"ControllerError', 'TSG036 - Controller logs', '../log-analyzers/tsg036-get-controller-logs.ipynb'], ['ERROR: 500', 'TSG046 - Knox gateway logs', '../log-analyzers/tsg046-get-knox-logs.ipynb'], ['Data source name not found and no default driver specified', 'SOP069 - Install ODBC for SQL Server', '../install/sop069-install-odbc-driver-for-sql-server.ipynb'], [\"Can't open lib 'ODBC Driver 17 for SQL Server\", 'SOP069 - Install ODBC for SQL Server', '../install/sop069-install-odbc-driver-for-sql-server.ipynb'], ['Control plane upgrade failed. 
Failed to upgrade controller.', 'TSG108 - View the controller upgrade config map', '../diagnose/tsg108-controller-failed-to-upgrade.ipynb'], [\"[Errno 2] No such file or directory: '..\\\\\\\\\", 'TSG053 - ADS Provided Books must be saved before use', '../repair/tsg053-save-book-first.ipynb'], [\"NameError: name 'azdata_login_secret_name' is not defined\", 'SOP013 - Create secret for azdata login (inside cluster)', '../common/sop013-create-secret-for-azdata-login.ipynb'], ['ERROR: No credentials were supplied, or the credentials were unavailable or inaccessible.', \"TSG124 - 'No credentials were supplied' error from azdata login\", '../repair/tsg124-no-credentials-were-supplied.ipynb']]}\ninstall_hint = {'kubectl': ['SOP036 - Install kubectl command line interface', '../install/sop036-install-kubectl.ipynb'], 'azdata': ['SOP063 - Install azdata CLI (using package manager)', '../install/sop063-packman-install-azdata.ipynb']}", "_____no_output_____" ] ], [ [ "### Get the Kubernetes namespace for the big data cluster\n\nGet the namespace of the Big Data Cluster use the kubectl command line\ninterface .\n\n**NOTE:**\n\nIf there is more than one Big Data Cluster in the target Kubernetes\ncluster, then either:\n\n- set \\[0\\] to the correct value for the big data cluster.\n- set the environment variable AZDATA\\_NAMESPACE, before starting\n Azure Data Studio.", "_____no_output_____" ] ], [ [ "# Place Kubernetes namespace name for BDC into 'namespace' variable\n\nif \"AZDATA_NAMESPACE\" in os.environ:\n namespace = os.environ[\"AZDATA_NAMESPACE\"]\nelse:\n try:\n namespace = run(f'kubectl get namespace --selector=MSSQL_CLUSTER -o jsonpath={{.items[0].metadata.name}}', return_output=True)\n except:\n from IPython.display import Markdown\n print(f\"ERROR: Unable to find a Kubernetes namespace with label 'MSSQL_CLUSTER'. 
SQL Server Big Data Cluster Kubernetes namespaces contain the label 'MSSQL_CLUSTER'.\")\n display(Markdown(f'HINT: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.'))\n display(Markdown(f'HINT: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.'))\n display(Markdown(f'HINT: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.'))\n raise\n\nprint(f'The SQL Server Big Data Cluster Kubernetes namespace is: {namespace}')", "_____no_output_____" ] ], [ [ "### Create a temporary directory to stage files", "_____no_output_____" ] ], [ [ "# Create a temporary directory to hold configuration files\n\nimport tempfile\n\ntemp_dir = tempfile.mkdtemp()\n\nprint(f\"Temporary directory created: {temp_dir}\")", "_____no_output_____" ] ], [ [ "### Helper function to save configuration files to disk", "_____no_output_____" ] ], [ [ "# Define helper function 'save_file' to save configuration files to the temporary directory created above\nimport os\nimport io\n\ndef save_file(filename, contents):\n with io.open(os.path.join(temp_dir, filename), \"w\", encoding='utf8', newline='\\n') as text_file:\n text_file.write(contents)\n\n print(\"File saved: \" + os.path.join(temp_dir, filename))\n\nprint(\"Function `save_file` defined successfully.\")", "_____no_output_____" ] ], [ [ "### Get endpoint hostname", "_____no_output_____" ] ], [ [ "import json\nimport urllib\n\nendpoint_name = \"sql-server-master\" if app_name == \"master\" else app_name \n\nendpoint = run(f'azdata bdc endpoint list --endpoint=\"{endpoint_name}\"', return_output=True)\nendpoint = json.loads(endpoint)\nendpoint = endpoint['endpoint']\n\nprint(f\"endpoint: {endpoint}\")\n\nhostname = urllib.parse.urlparse(endpoint).hostname\n\nprint(f\"hostname: {hostname}\")", "_____no_output_____" ] ], [ [ "### Get name of the ‘Running’ `controller` `pod`", "_____no_output_____" ] ], [ [ "# Place the name of the 'Running' controller pod in variable `controller`\n\ncontroller = run(f'kubectl get pod --selector=app=controller -n {namespace} -o jsonpath={{.items[0].metadata.name}} --field-selector=status.phase=Running', return_output=True)\n\nprint(f\"Controller pod name: {controller}\")", "_____no_output_____" ] ], [ [ "### Create the DNS alt\\_names for data plane in secure clusters\n\nGet the cluster configuration from the Big Data Cluster using\n`azdata bdc config`, and pull the Active Directory DNS names out of it,\nand place them into the certificate configuration file as DNS alt\\_names", "_____no_output_____" ] ], [ [ "import json\n\nalt_names = \"\"\nbdc_fqdn = \"\"\n\nhdfs_vault_svc = \"hdfsvault-svc\"\nbdc_config = run(\"azdata bdc config show\", return_output=True)\nbdc_config = json.loads(bdc_config)\n\ndns_counter = 3 # DNS.1 and DNS.2 are already in the certificate template.\n\nif app_name == \"gateway\" or app_name == \"master\":\n alt_names += f'DNS.{str(dns_counter)} = {pod}.{common_name}\\n'\n dns_counter = dns_counter + 1\n alt_names += f'DNS.{str(dns_counter)} = {pod}.{common_name}.{namespace}.svc.cluster.local\\n'\n dns_counter = dns_counter + 1\n\nif \"security\" in bdc_config[\"spec\"] and \"activeDirectory\" in bdc_config[\"spec\"][\"security\"]:\n domain_dns_name = bdc_config[\"spec\"][\"security\"][\"activeDirectory\"][\"domainDnsName\"]\n sub_domain_name = 
bdc_config[\"spec\"][\"security\"][\"activeDirectory\"][\"subdomain\"]\n\n alt_names += f\"DNS.{str(dns_counter)} = {common_name}.{domain_dns_name}\\n\"\n dns_counter = dns_counter + 1\n\n if app_name == \"gateway\" or app_name == \"master\":\n alt_names += f'DNS.{str(dns_counter)} = {pod}.{domain_dns_name}\\n'\n dns_counter = dns_counter + 1\n\n if sub_domain_name:\n bdc_fqdn = f\"{sub_domain_name}.{domain_dns_name}\"\n else:\n bdc_fqdn = domain_dns_name\n\nif app_name in bdc_config[\"spec\"][\"resources\"]:\n app_name_endpoints = bdc_config[\"spec\"][\"resources\"][app_name][\"spec\"][\"endpoints\"]\n for endpoint in app_name_endpoints:\n if \"dnsName\" in endpoint:\n alt_names += f'DNS.{str(dns_counter)} = {endpoint[\"dnsName\"]}\\n'\n dns_counter = dns_counter + 1\n\n# Special case for the controller certificate\n#\nif app_name == \"controller\":\n alt_names += f\"DNS.{str(dns_counter)} = localhost\\n\"\n dns_counter = dns_counter + 1\n\n # Add hdfsvault-svc host for key management calls.\n #\n alt_names += f\"DNS.{str(dns_counter)} = {hdfs_vault_svc}\\n\"\n dns_counter = dns_counter + 1\n\n # Add hdfsvault-svc FQDN for key management calls.\n #\n if bdc_fqdn:\n alt_names += f\"DNS.{str(dns_counter)} = {hdfs_vault_svc}.{bdc_fqdn}\\n\"\n dns_counter = dns_counter + 1\n\nprint(\"DNS alt_names (data plane):\")\nprint(alt_names)", "_____no_output_____" ] ], [ [ "### Create the DNS alt\\_names for control plane in secure clusters\n\nGet the cluster configuration from the Big Data Cluster using\n`azdata bdc endpoint list`, and pull the Active Directory DNS names out\nof it for the control plane expternal endpoints (Controller and\nManagement Proxy), and place them into the certificate configuration\nfile as DNS alt\\_names", "_____no_output_____" ] ], [ [ "import json\nfrom urllib.parse import urlparse\n\nif app_name == \"controller\" or app_name == \"mgmtproxy\":\n bdc_endpoint_list = run(\"azdata bdc endpoint list\", return_output=True)\n bdc_endpoint_list = json.loads(bdc_endpoint_list)\n\n # Parse the DNS host name from:\n #\n # \"endpoint\": \"https://monitor.aris.local:30777\"\n # \n for endpoint in bdc_endpoint_list:\n if endpoint[\"name\"] == app_name:\n url = urlparse(endpoint[\"endpoint\"])\n alt_names += f\"DNS.{str(dns_counter)} = {url.hostname}\\n\"\n dns_counter = dns_counter + 1\n\nprint(\"DNS alt_names (control plane):\")\nprint(alt_names)", "_____no_output_____" ] ], [ [ "### Create alt\\_names\n\nIf the Kuberenetes service is of “NodePort” type, then the IP address\nneeded to validate the cluster certificate could be for any node in the\nKubernetes cluster, so here all node IP addresses in the Big Data\nCluster are added as alt\\_names. 
Otherwise (if not NodePort, and\ntherefore LoadBalancer), add just the hostname as returned from\n`azdata bdc endpoint list` above.", "_____no_output_____" ] ], [ [ "service_type = run(f\"kubectl get svc {common_name}-external -n {namespace} -o jsonpath={{.spec.type}}\", return_output=True)\n\nprint(f\"Service type for '{common_name}-external' is: '{service_type}'\")\nprint(\"\")\n\nif service_type == \"NodePort\":\n nodes_ip_address = run(\"kubectl \"\"get nodes -o jsonpath={.items[*].status.addresses[0].address}\"\"\", return_output=True)\n nodes_ip_address = nodes_ip_address.split(' ')\n\n counter = 1\n for ip in nodes_ip_address:\n alt_names += f\"IP.{counter} = {ip}\\n\"\n counter = counter + 1\nelse:\n alt_names += f\"IP.1 = {hostname}\\n\"\n\nprint(\"All (DNS and IP) alt_names:\")\nprint(alt_names)", "_____no_output_____" ] ], [ [ "### Generate Certificate Configuration file\n\nNOTE: There is a special case for the `controller` certificate, that\nneeds to be generated in PKCS\\#1 format.", "_____no_output_____" ] ], [ [ "certificate = f\"\"\"\n[ req ]\n# Options for the `req` tool (`man req`).\ndefault_bits = 2048\ndefault_keyfile = {test_cert_store_root}/{app_name}/{prefix_keyfile_name}-privatekey{\".pkcs8\" if app_name == \"controller\" else \"\"}.pem\ndistinguished_name = req_distinguished_name\nstring_mask = utf8only\n\n# SHA-1 is deprecated, so use SHA-2 instead.\ndefault_md = sha256\nreq_extensions = v3_req\n\n[ req_distinguished_name ]\ncountryName = Country Name (2 letter code)\ncountryName_default = {country_name}\n\nstateOrProvinceName = State or Province Name (full name)\nstateOrProvinceName_default = {state_or_province_name}\n\nlocalityName = Locality Name (eg, city)\nlocalityName_default = {locality_name}\n\norganizationName = Organization Name (eg, company)\norganizationName_default = {organization_name}\n\norganizationalUnitName = Organizational Unit (eg, division)\norganizationalUnitName_default = {organizational_unit_name}\n\ncommonName = Common Name (e.g. server FQDN or YOUR name)\ncommonName_default = {common_name}\n\nemailAddress = Email Address\nemailAddress_default = {email_address}\n\n[ v3_req ]\nsubjectAltName = @alt_names\nsubjectKeyIdentifier = hash\nbasicConstraints = CA:FALSE\nkeyUsage = digitalSignature, keyEncipherment\n{extendedKeyUsage}\n\n[ alt_names ]\nDNS.1 = {common_name}\nDNS.2 = {common_name}.{namespace}.svc.cluster.local # Use the namespace applicable for your cluster\n{alt_names}\n\"\"\"\n\nprint(certificate)\n\nsave_file(ssl_configuration_file, certificate)", "_____no_output_____" ] ], [ [ "### Copy certificate configuration to `controller` `pod`", "_____no_output_____" ] ], [ [ "import os\n\ncwd = os.getcwd()\nos.chdir(temp_dir) # Use chdir to workaround kubectl bug on Windows, which incorrectly processes 'c:\\' on kubectl cp cmd line \n\nrun(f'kubectl exec {controller} -c controller -n {namespace} -- bash -c \"mkdir -p {test_cert_store_root}/{app_name}\"')\n\nrun(f'kubectl cp {ssl_configuration_file} {controller}:{test_cert_store_root}/{app_name}/{ssl_configuration_file} -c controller -n {namespace}')\n\nos.chdir(cwd)", "_____no_output_____" ] ], [ [ "### Generate certificate\n\nUse openssl req to generate a certificate in PKCS\\#10 format. 
See:\n\n- https://www.openssl.org/docs/man1.0.2/man1/req.html", "_____no_output_____" ] ], [ [ "cmd = f\"openssl req -config {test_cert_store_root}/{app_name}/service.openssl.cnf -newkey rsa:2048 -sha256 -nodes -days {days} -out {test_cert_store_root}/{app_name}/{prefix_keyfile_name}-signingrequest.csr -outform PEM -subj '/C={country_name}/ST={state_or_province_name}/L={locality_name}/O={organization_name}/OU={organizational_unit_name}/CN={common_name}'\"\n\nrun(f'kubectl exec {controller} -n {namespace} -c controller -- bash -c \"{cmd}\"')", "_____no_output_____" ] ], [ [ "### Convert the private key to PKCS12 format\n\nThe private key for controller needs to be converted to PKCS12 format.", "_____no_output_____" ] ], [ [ "cmd = f'openssl rsa -in {test_cert_store_root}/{app_name}/{prefix_keyfile_name}-privatekey.pkcs8.pem -out {test_cert_store_root}/{app_name}/{prefix_keyfile_name}-privatekey.pem'\n\nrun(f'kubectl exec {controller} -n {namespace} -c controller -- bash -c \"{cmd}\"')", "_____no_output_____" ] ], [ [ "### Clean up temporary directory for staging configuration files", "_____no_output_____" ] ], [ [ "# Delete the temporary directory used to hold configuration files\n\nimport shutil\n\nshutil.rmtree(temp_dir)\n\nprint(f'Temporary directory deleted: {temp_dir}')", "_____no_output_____" ], [ "print('Notebook execution complete.')", "_____no_output_____" ] ], [ [ "Related\n-------\n\n- [CER030 - Sign Management Proxy certificate with generated\n CA](../cert-management/cer030-sign-service-proxy-generated-cert.ipynb)\n\n- [CER034 - Sign Controller certificate with cluster Root\n CA](../cert-management/cer034-sign-controller-generated-cert.ipynb)\n\n- [CER044 - Install signed Controller\n certificate](../cert-management/cer044-install-controller-cert.ipynb)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
4a3bd5f99d870cecd4bdbe996321e7a4f41001c0
24,140
ipynb
Jupyter Notebook
Notebooks/04_Machine Learning/04_01_ML_Data_Prep.ipynb
Coleridge-Initiative/ada-2019-usda
fb3e278e32ce68a7d25c79bb88fa0bf2cd2953c9
[ "CC0-1.0" ]
1
2020-04-20T18:40:52.000Z
2020-04-20T18:40:52.000Z
Notebooks/04_Machine Learning/04_01_ML_Data_Prep.ipynb
Coleridge-Initiative/ada-2019-usda
fb3e278e32ce68a7d25c79bb88fa0bf2cd2953c9
[ "CC0-1.0" ]
null
null
null
Notebooks/04_Machine Learning/04_01_ML_Data_Prep.ipynb
Coleridge-Initiative/ada-2019-usda
fb3e278e32ce68a7d25c79bb88fa0bf2cd2953c9
[ "CC0-1.0" ]
1
2020-09-11T04:36:16.000Z
2020-09-11T04:36:16.000Z
40.639731
607
0.62005
[ [ [ "<img style=\"float: center;\" src=\"images/CI_horizontal.png\" width=\"600\">\n<center>\n <span style=\"font-size: 1.5em;\">\n <a href='https://www.coleridgeinitiative.org'>Website</a>\n </span>\n</center>\n\nGhani, Rayid, Frauke Kreuter, Julia Lane, Adrianne Bradford, Alex Engler, Nicolas Guetta Jeanrenaud, Graham Henke, Daniela Hochfellner, Clayton Hunter, Brian Kim, Avishek Kumar, Jonathan Morgan, and Benjamin Feder.\n\n_source to be updated when notebook added to GitHub_", "_____no_output_____" ], [ "# Table of Contents\n\nJupyterLab contains a dynamic Table of Contents that can be accessed by clicking the last of the six icons on the left-hand sidebar.", "_____no_output_____" ], [ "# Dataset Preparation\n----------", "_____no_output_____" ], [ "In this notebook, we will walk through preparing our data for machine learning. In practice, the data preparation should take some time as you will need to think deeply about the question at the heart of your project.", "_____no_output_____" ], [ "## The Machine Learning Process\n\nThe Machine Learning Process is as follows:\n\n- [**Understand the problem and goal.**](#problem-formulation) *This sounds obvious but is often nontrivial.* Problems typically start as vague \ndescriptions of a goal - improving health outcomes, increasing graduation rates, understanding the effect of a \nvariable *X* on an outcome *Y*, etc. It is really important to work with people who understand the domain being\nstudied to dig deeper and define the problem more concretely. What is the analytical formulation of the metric \nthat you are trying to optimize?\n- [**Formulate it as a machine learning problem.**](#problem-formulation) Is it a classification problem or a regression problem? Is the \ngoal to build a model that generates a ranked list prioritized by risk, or is it to detect anomalies as new data \ncome in? Knowing what kinds of tasks machine learning can solve will allow you to map the problem you are working on\nto one or more machine learning settings and give you access to a suite of methods.\n- **Data exploration and preparation.** Next, you need to carefully explore the data you have. What additional data\ndo you need or have access to? What variable will you use to match records for integrating different data sources?\nWhat variables exist in the data set? Are they continuous or categorical? What about missing values? Can you use the \nvariables in their original form, or do you need to alter them in some way?\n- [**Feature engineering.**](#feature-generation) In machine learning language, what you might know as independent variables or predictors \nor factors or covariates are called \"features.\" Creating good features is probably the most important step in the \nmachine learning process. This involves doing transformations, creating interaction terms, or aggregating over data\npoints or over time and space.\n- **Method selection.** Having formulated the problem and created your features, you now have a suite of methods to\nchoose from. It would be great if there were a single method that always worked best for a specific type of problem. Typically, in machine learning, you take a variety of methods and try them, empirically validating which one is the best approach to your problem.\n- [**Evaluation.**](#evaluation) As you build a large number of possible models, you need a way choose the best among them. We'll cover methodology to validate models on historical data and discuss a variety of evaluation metrics. 
The next step is to validate using a field trial or experiment.\n- [**Deployment.**](#deployment) Once you have selected the best model and validated it using historical data as well as a field\ntrial, you are ready to put the model into practice. You still have to keep in mind that new data will be coming in,\nand the model might change over time.\n\nHere, to reiterate, we will work through all the steps we can accomplish querying directly from our Athena database, and then in the following notebook, we will bring our table we created in this notebook into python and complete the machine learning process.", "_____no_output_____" ], [ "## Problem Formulation\n\nFirst, you need to turn something into a real objective function. What do you care about? Do you have data on that thing? What action can you take based on your findings? Do you risk introducing any bias based on the way you model something? \n\n### Four Main Types of ML Tasks for Policy Problems\n\n- **Description**: [How can we identify and respond to the most urgent online government petitions?](https://dssg.uchicago.edu/project/improving-government-response-to-citizen-requests-online/)\n- **Prediction**: [Which students will struggle academically by third grade?](https://dssg.uchicago.edu/project/predicting-students-that-will-struggle-academically-by-third-grade/)\n- **Detection**: [Which police officers are likely to have an adverse interaction with the public?](https://dssg.uchicago.edu/project/expanding-our-early-intervention-system-for-adverse-police-interactions/)\n- **Behavior Change**: [How can we prevent juveniles from interacting with the criminal justice system?](https://dssg.uchicago.edu/project/preventing-juvenile-interactions-with-the-criminal-justice-system/)\n \n### Our Machine Learning Problem\n> Out of low-income households, can we predict which ones did not purchase a 100% whole wheat product in a year's time? If so, what are the most important household features?\n\nThis is an example of a *binary prediction classification problem*.\n\nNote the time windows are completely arbitrary. You could use an outcome window of 5, 3, 1 years or 1 day. The outcome window will depend on how often you receive new data, how accurate your predictions are for a given time period, or on what time-scale you can use the output of the data.\n\n> By low-income households, we're referring to only those who are WIC participants or WIC-eligible.", "_____no_output_____" ], [ "## Access the Data\n\nAs always, we will bring in the python libraries that we need to use, as well as set up our connection to the database.", "_____no_output_____" ] ], [ [ "# pandas-related imports\nimport pandas as pd\n\n# database interaction imports\nfrom pyathenajdbc import connect", "_____no_output_____" ], [ "conn = connect(s3_staging_dir = 's3://usda-iri-2019-queryresults/',\n region_name = 'us-gov-west-1',\n LogLevel = '0',\n workgroup = 'workgroup-iri_usda')", "_____no_output_____" ] ], [ [ "## Define our Cohort", "_____no_output_____" ], [ "Since the machine learning problem focuses on finding the features most important in predicting if a low-income household will not purchase 100% whole wheat product at least once in a year, we will focus just on households that were either WIC-eligible or participants in a given year. 
Here, we will train our models on data from low-income households in 2014 and their presence of 100% whole wheat purchases in 2015 and test on low-income households in 2015 buying 100% whole wheat product(s) in 2016.\n\nLet's first see how many of these households we will have in our testing and training datasets.\n\n> We already created our 2014 and 2015 household tables, `init_train` and `init_test` in the `iri_usda_2019_db` database, by changing the years from the code used to create `project_q2_cohort` in the [Second Data Exploration](02_02_Data_Exploration_Popular_Foods.ipynb) notebook. We also subsetted the `panid` to only include households who had static purchasing data (`projection61k` > 0) the year we're predicting on and the year prior (i.e. 2014 and 2015 for our training set). `init_train` and `init_test` contain the exact same variables as the `demo_all` table in the `iri_usda` Athena database.", "_____no_output_____" ] ], [ [ "# get count for 2014 low-income households\nqry = '''\nselect count(*) as num_2014\nfrom iri_usda_2019_db.init_train\n'''\n\npd.read_sql(qry, conn)", "_____no_output_____" ], [ "# get count for 2015 low-income households\nqry = '''\nselect count(*) as num_2015\nfrom iri_usda_2019_db.init_test\n'''\n\npd.read_sql(qry, conn)", "_____no_output_____" ] ], [ [ "## Create Foundation for Training and Testing Datasets", "_____no_output_____" ], [ "Now that we have defined our cohorts for our testing and training datasets, we need to combine our available datasets so that each low-income household is a row containing demographic data from the previous year, if they purchased a 100% whole wheat proudct in the following calendar year, and aggregate purchasing data from the prior year. For the purchasing data, we want to aggregate the amount the household spent and their total amount of trips.\n\nTo do this, we will first find all households that purchased any 100% whole wheat product in our given prediction years (2015 and 2016), and then we will join it to our low-income household datasets from the previous year. Because we will be relying on the table of households who purchased any 100% whole wheat product to create our desired table in Athena, we will save it as a permanent table. Then, we will join this table with our low-income cohort and one containing aggregate purchasing data for the prior year for these households.\n\n> Note: It is possible to do this process in one step. 
However, for your understanding and ease in reproducibility, we broke it down into multiple steps to avoid a larger subquerying process.", "_____no_output_____" ] ], [ [ "# see existing table list\ntable_list = pd.read_sql('show tables IN iri_usda_2019_db;', conn)\nprint(table_list)\n\n# get a series of tab_name values\ns = pd.Series(list(table_list['tab_name']))", "_____no_output_____" ], [ "# create table to find households that bought 100% whole wheat products in 2015 or 2016\nif('ml_aggregate' not in s.unique()):\n print('creating table')\n qry = '''\n create table iri_usda_2019_db.ml_aggregate\n with(\n format = 'Parquet',\n parquet_compression = 'SNAPPY'\n )\n as\n select t.panid, t.year, sum(t.dollarspaid) as dollarspaid\n from iri_usda.pd_pos_all p, iri_usda.trip_all t\n where p.upc = t.upc and (t.year = '2016' or t.year = '2015') and p.upcdesc like '%100% WHOLE WHEAT%' and \n p.year = t.year\n group by t.panid, t.year\n ;\n '''\n with conn.cursor() as cursor:\n cursor.execute(qry)\nelse:\n print('table already exists')", "_____no_output_____" ] ], [ [ "<font color = red><h2> Checkpoint 1: What question are we asking?</h2> </font>\n\nAbove, we are creating an aggregated table of all purchases in which a product with \"100% Whole Wheat\" in the description was purchased. However, we might want to broaden the definition to include other whole grains. For example, you might want to include corn tortillas or oatmeal, to make sure you're catching as many of the different types of whole grains that people may purchase. How would you include these other whole grain items in your table?", "_____no_output_____" ], [ "## Creating Train and Test Sets\n\nNow that we've created the aggregated table for households that purchased any 100% whole wheat products, we can combine that with `init_train` and `init_test` to get demographic data and define our label. Let's first take a look at the `ml_aggregate` table to see how it looks. Remember, this is a table that contains each household that purchased a 100% whole wheat product along with the total dollars paid in that year for 100% whole wheat products.", "_____no_output_____" ] ], [ [ "# view ml_aggregate\nqry = '''\nselect *\nfrom iri_usda_2019_db.ml_aggregate\nlimit 10\n'''\n\npd.read_sql(qry, conn)", "_____no_output_____" ] ], [ [ "We can now join `ml_aggregate` with `init_train` and `init_test` to grab the demographic data. Since we would like to match households that purchased 100% whole wheat products in either 2015 or 2016 to low-income households in `init_train` and `init_test` (those with no 100% whole wheat product purchases the following year will have NAs), we will left join `ml_aggregate` to `init_train` and `init_test`. Also, we will add our dependent variable, `label`, using a `case when` statement that is `yes` when the household purchased 100% whole wheat products in the following calendar year. 
", "_____no_output_____" ] ], [ [ "# match ml_aggregate with demographic data for just our training cohort\n# left join so that we maintain all low-income households who didn't buy any 100% whole wheat products\nif('ml_combined_train' not in s.unique()):\n qry = '''\n create table iri_usda_2019_db.ml_combined_train\n with(\n format = 'Parquet',\n parquet_compression = 'SNAPPY'\n )\n as\n select c.panid, c.hhsize, c.hhinc, c.race, c.hisp, c.ac, c.fed, c.femp, c.med,\n c.memp, c.mocc, c.marital, c.rentown, c.cats, c.dogs, c.hhtype, c.region, c.wic_june, c.snap_june,\n c.projection61k,\n case when a.dollarspaid > 0 then 0\n else 1\n end as label\n from iri_usda_2019_db.init_train c\n left join (\n select *\n from iri_usda_2019_db.ml_aggregate a\n where year = '2015'\n ) a\n on c.panid = a.panid\n '''\n with conn.cursor() as cursor:\n cursor.execute(qry)\nelse:\n print('table already exists')", "_____no_output_____" ], [ "# match ml_aggregate with demographic data for just our testing cohort\n# left join so that we maintain all low-income households who didn't buy any 100% whole wheat products\nif('ml_combined_test' not in s.unique()):\n qry = '''\n create table iri_usda_2019_db.ml_combined_test\n with(\n format = 'Parquet',\n parquet_compression = 'SNAPPY'\n )\n as\n select c.panid, c.hhsize, c.hhinc, c.race, c.hisp, c.ac, c.fed, c.femp, c.med,\n c.memp, c.mocc, c.marital, c.rentown, c.cats, c.dogs, c.hhtype, c.region, c.wic_june, c.snap_june,\n c.projection61k,\n case when a.dollarspaid > 0 then 0\n else 1\n end as label\n from iri_usda_2019_db.init_test c\n left join (\n select *\n from iri_usda_2019_db.ml_aggregate a\n where year = '2016'\n ) a\n on c.panid = a.panid\n '''\n with conn.cursor() as cursor:\n cursor.execute(qry)\nelse:\n print('table already exists')", "_____no_output_____" ], [ "# verify ml_combined_train is what we want\nqry = '''\nselect *\nfrom iri_usda_2019_db.ml_combined_train\nlimit 5\n'''\n\npd.read_sql(qry, conn)", "_____no_output_____" ], [ "# verify ml_combined_test is what we want\nqry = '''\nselect *\nfrom iri_usda_2019_db.ml_combined_test\nlimit 5\n'''\n\npd.read_sql(qry, conn)", "_____no_output_____" ] ], [ [ "Finally, we want to add in the amount spent and number of trips in 2014 or 2015 for these households in the IRI database. We will first confirm that we can find the amount spent and number of trips a household took according to the `trip_all` table in either 2014 or 2015 for households in `ml_combined_train` and `ml_combined_test`.\n> Recall that to calculate the amount spent, you can subtract `coupon` from `dollarspaid`. The number of trips per household is the distinct value of `tripnumber` and `purdate`.", "_____no_output_____" ] ], [ [ "# find aggregate purchasing information by households in 2014 and 2015\nqry = '''\nselect panid, year, round(sum(dollarspaid) - sum(coupon),2) as total, \n count(distinct(purdate, tripnumber)) as num_trips\nfrom iri_usda.trip_all \nwhere year in ('2014', '2015') and panid in \n (\n select distinct panid \n from iri_usda_2019_db.ml_combined\n )\ngroup by year, panid\nlimit 5\n'''\n\npd.read_sql(qry, conn)", "_____no_output_____" ] ], [ [ "Now that we can find aggregate purchasing data in 2014 and 2015 for households in `ml_combined_train` and `ml_combined_test`, we can perform another left join using this query. 
We just need to make sure that we are matching based on `panid`, and making sure that we are selecting the purchasing data from the year prior for each row in `ml_combined_train` and `ml_combined_test`.\n\nThis will be our final table we create before moving onto the [Machine Learning](04_02_Machine_Learning.ipynb) notebook.", "_____no_output_____" ] ], [ [ "if('ml_model_train' not in s.unique()):\n qry = '''\n create table iri_usda_2019_db.ml_model_train\n with(\n format = 'Parquet',\n parquet_compression = 'SNAPPY'\n )\n as\n select a.*, b.total, b.num_trips\n from iri_usda_2019_db.ml_combined_train a \n left join\n (select panid, round(sum(dollarspaid) - sum(coupon),2) as total, \n count(distinct(purdate, tripnumber)) as num_trips\n from iri_usda.trip_all \n where year in ('2014') and panid in \n (\n select distinct panid \n from iri_usda_2019_db.ml_combined_train\n )\n group by panid\n ) b\n on a.panid = b.panid\n '''\n with conn.cursor() as cursor:\n cursor.execute(qry)\nelse:\n print('table already exists')", "_____no_output_____" ], [ "if('ml_model_test' not in s.unique()):\n qry = '''\n create table iri_usda_2019_db.ml_model_test\n with(\n format = 'Parquet',\n parquet_compression = 'SNAPPY'\n )\n as\n select a.*, b.total, b.num_trips\n from iri_usda_2019_db.ml_combined_test a \n left join\n (select panid, round(sum(dollarspaid) - sum(coupon),2) as total, \n count(distinct(purdate, tripnumber)) as num_trips\n from iri_usda.trip_all \n where year in ('2015') and panid in \n (\n select distinct panid \n from iri_usda_2019_db.ml_combined_test\n )\n group by panid\n ) b\n on a.panid = b.panid \n '''\n with conn.cursor() as cursor:\n cursor.execute(qry)\nelse:\n print('table already exists')", "_____no_output_____" ], [ "# verify ml_model_train is what we want\nqry = '''\nselect *\nfrom iri_usda_2019_db.ml_model_train\nlimit 5\n'''\n\npd.read_sql(qry, conn)", "_____no_output_____" ], [ "# verify ml_model_test is what we want\nqry = '''\nselect *\nfrom iri_usda_2019_db.ml_model_test\nlimit 5\n'''\n\npd.read_sql(qry, conn)", "_____no_output_____" ], [ "# and that tables have unique PANID values, ie a row is a household in the given year\nqry = '''\nselect count(*) recs, count(distinct panid)\nfrom iri_usda_2019_db.ml_model_train\n'''\npd.read_sql(qry, conn)", "_____no_output_____" ], [ "# same for test set\n\nqry = '''\nselect count(*) recs, count(distinct panid)\nfrom iri_usda_2019_db.ml_model_test\n'''\npd.read_sql(qry, conn)", "_____no_output_____" ] ], [ [ "Now we should have everything we need from our Athena data tables to run some machine learning models to tackle our guiding question. ", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ] ]
4a3c01b35960580864619ab9983a326632c3e503
218,005
ipynb
Jupyter Notebook
analysis/Gapminder.ipynb
samnooij/reproducible_science_workshop-20200211
e89a963079ef12b86073ca247cecbd7f65e799cc
[ "Apache-2.0" ]
null
null
null
analysis/Gapminder.ipynb
samnooij/reproducible_science_workshop-20200211
e89a963079ef12b86073ca247cecbd7f65e799cc
[ "Apache-2.0" ]
null
null
null
analysis/Gapminder.ipynb
samnooij/reproducible_science_workshop-20200211
e89a963079ef12b86073ca247cecbd7f65e799cc
[ "Apache-2.0" ]
null
null
null
171.79275
54,880
0.883351
[ [ [ "# Introduction to sharing interactive Jupyter notebooks\n## From the workshop '[Getting Started with Reproducible and Open Research](https://escience-academy.github.io/2020-02-11-Reproducible-and-Open-Research/)'\n\n_Date: 11-12 February 2020_ \n_Author: Sam Nooij_\n\n---\n\nIn this example notebook, I will:\n\n1. Load data from [gapminder](https://www.gapminder.org)\n\n2. Use the Python library `pandas` to explore and visualise the data\n\n3. Create (interactive) figures with `matplotlib`, `plotnine` and `ipywidgets`\n\n## Loading data", "_____no_output_____" ] ], [ [ "import pandas as pd\nfrom pathlib import Path #this function helps use paths in a platform-independent way", "_____no_output_____" ], [ "pd.__version__", "_____no_output_____" ], [ "data_path = Path(\"./data/\")", "_____no_output_____" ] ], [ [ "### Check which files were downloaded", "_____no_output_____" ] ], [ [ "list(data_path.glob('*'))", "_____no_output_____" ], [ "population = pd.read_csv(data_path / \"population_total.csv\")", "_____no_output_____" ], [ "population[\"country\"]", "_____no_output_____" ], [ "population[\"2089\"]", "_____no_output_____" ] ], [ [ "### Open all three csv files downloaded from gapminder.org as DataFrames", "_____no_output_____" ] ], [ [ "population = pd.read_csv(data_path / \"population_total.csv\", index_col=\"country\")\nlife_expectancy = pd.read_csv(data_path / \"life_expectancy_years.csv\", index_col=\"country\")\nincome = pd.read_csv(data_path / \"income_per_person_gdppercapita_ppp_inflation_adjusted.csv\", index_col=\"country\")", "_____no_output_____" ], [ "population.head()", "_____no_output_____" ], [ "population.index", "_____no_output_____" ], [ "population.T.head() #transposes the table", "_____no_output_____" ] ], [ [ "## Visualise population sizes per year in a few countries\n\n**1 Portugal**", "_____no_output_____" ] ], [ [ "population.T[\"Portugal\"].plot()", "_____no_output_____" ] ], [ [ "**2 The Netherlands**", "_____no_output_____" ] ], [ [ "population.T[\"Netherlands\"].plot()", "_____no_output_____" ] ], [ [ "**3 Japan**", "_____no_output_____" ] ], [ [ "population.T[\"Japan\"].plot()", "_____no_output_____" ] ], [ [ "**4 Kenya**", "_____no_output_____" ] ], [ [ "population.T[\"Kenya\"].plot()", "_____no_output_____" ] ], [ [ "### Now start looking into the life expectancy data", "_____no_output_____" ] ], [ [ "life_expectancy.info()", "<class 'pandas.core.frame.DataFrame'>\nIndex: 187 entries, Afghanistan to Zimbabwe\nColumns: 219 entries, 1800 to 2018\ndtypes: float64(219)\nmemory usage: 321.4+ KB\n" ] ], [ [ "**For which countries do we have population size data, but not the life expectancy?**", "_____no_output_____" ] ], [ [ "life_expectancy.index ^ population.index", "_____no_output_____" ], [ "set(population.index) - set(life_expectancy.index) ", "_____no_output_____" ] ], [ [ "**And for which countries do we have income data, but no life expectancy?**", "_____no_output_____" ] ], [ [ "income.index ^ life_expectancy.index", "_____no_output_____" ] ], [ [ "### As an example, visualise income vs. 
life expectancy in Yemen", "_____no_output_____" ] ], [ [ "yemen = pd.concat([income.T[\"Yemen\"].rename(\"income\"),\n life_expectancy.T[\"Yemen\"].rename(\"life_expectancy\")],\n axis = 1)", "_____no_output_____" ], [ "from plotnine import ggplot, geom_point, aes", "_____no_output_____" ], [ "ggplot(yemen, aes(\"income\", \"life_expectancy\")) + geom_point()", "/home/snooij/miniconda3/envs/ROR_workshop/lib/python3.8/site-packages/plotnine/layer.py:452: PlotnineWarning: geom_point : Removed 22 rows containing missing values.\n self.data = self.geom.handle_na(self.data)\n" ] ], [ [ "### Now try out matplotlib", "_____no_output_____" ] ], [ [ "from matplotlib import pyplot as plt\nimport numpy as np", "_____no_output_____" ], [ "fig, ax = plt.subplots()", "_____no_output_____" ], [ "fig2, ax2 = plt.subplots(2, 2)", "_____no_output_____" ] ], [ [ "**As an example, we will look at the income vs. life expectancy again, but now for all countries, whose population sizes are represented as circle size.**", "_____no_output_____" ] ], [ [ "example_df = pd.concat([\n population[\"1900\"].rename(\"population\"),\n life_expectancy[\"1900\"].rename(\"life_expectancy\"),\n income[\"1900\"].rename(\"income\")\n], axis = 1, join = \"inner\")", "_____no_output_____" ], [ "fig, ax = plt.subplots()\nax.scatter(example_df[\"income\"], example_df[\"life_expectancy\"], s=np.sqrt(example_df[\"population\"])/50, alpha=0.3)\nax.set_xlabel(\"annual income per capita, inflation corrected in USD\")\nax.set_ylabel(\"life expectancy\")\nfig.savefig(\"bubbleplot.svg\")\nplt.show()", "_____no_output_____" ] ], [ [ "Generally, life expectancy is higher in countries with a higher annual income (Figure 1).\n\n![My first bubble plot](bubbleplot.svg)\n**Figure 1. My first bubble plot.** Saved as svg image.", "_____no_output_____" ], [ "---\n\n### Now modularise the code for easier re-use of the same function\n\n**Also increase the plot size for more convenient viewing in the notebook.**", "_____no_output_____" ] ], [ [ "def gapminder_bubble(year: int):\n year_str = str(year)\n x = pd.concat([\n population[year_str].rename(\"population\"),\n life_expectancy[year_str].rename(\"life_expectancy\"),\n income[year_str].rename(\"income\")], axis = 1, join = \"inner\")\n \n fig, ax = plt.subplots(figsize=(12, 8))\n ax.scatter(x[\"income\"], x[\"life_expectancy\"], s=np.sqrt(x[\"population\"])/50, alpha=0.3)\n ax.set_xlabel(\"annual income per capita, inflation corrected in USD\")\n ax.set_ylabel(\"life expectancy\")\n ax.set_xlim(300, 1e5)\n ax.set_ylim(0, 90)\n ax.set_xscale(\"log\")\n return fig, ax\n#We are going to make a function for plotting different years", "_____no_output_____" ], [ "fig, ax = gapminder_bubble(1980)", "_____no_output_____" ] ], [ [ "**These are the life expectancies per annual income for all listed countries in the year 1980.**\n\n---\n\n_Notes to self:_\n\n> Now you could lookup ipywidgets to make interactive figures with a slider for years.\n\n> Look at the [ipywidgets documentation](https://ipywidgets.readthedocs.io/en/stable/user_install.html) for installation instructions. (It involves installing `ipywidgets` and `nodejs` with conda, and then running another command in the shell to install the extension.)\n\nThe figure below only works in interactive environments, e.g. by running the notebook on [MyBinder](https://mybinder.org/v2/gh/samnooij/reproducible_science_workshop-20200211/master?filepath=analysis%2FGapminder.ipynb). 
It is an interactive view of the above figure, using a slider to select the year (between 1800 and 2018).", "_____no_output_____" ] ], [ [ "from ipywidgets import interact", "_____no_output_____" ], [ "interact(gapminder_bubble, year=(1800, 2018))", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
4a3c039f31b8c993f67700233445c2f8229a6b69
70,233
ipynb
Jupyter Notebook
week13-02-named_entity_recognition/IS407_NER.ipynb
uiuc-ischool-20221-jseo1005-1/slides
b88076008ccf9675bd1a6004fc3b31a06795f981
[ "CC0-1.0" ]
null
null
null
week13-02-named_entity_recognition/IS407_NER.ipynb
uiuc-ischool-20221-jseo1005-1/slides
b88076008ccf9675bd1a6004fc3b31a06795f981
[ "CC0-1.0" ]
null
null
null
week13-02-named_entity_recognition/IS407_NER.ipynb
uiuc-ischool-20221-jseo1005-1/slides
b88076008ccf9675bd1a6004fc3b31a06795f981
[ "CC0-1.0" ]
2
2022-01-25T15:36:13.000Z
2022-02-15T15:41:45.000Z
61.988526
2,014
0.533339
[ [ [ "## Part-1: Introduction", "_____no_output_____" ] ], [ [ "## Loading the libraries\n\nimport spacy # open-source NLP library in Python with several pre-trained models\nfrom spacy import displacy # spacy's built-in library to visualise the behavior of the entity recognition model interactively\nnlp = spacy.load(\"en_core_web_sm\") # English pipeline optimized for CPU\n", "_____no_output_____" ], [ "## Sample sentence\n\ndoc = nlp('European authorities fined Google a record $5.1 billion on Wednesday for abusing its power in the mobile phone market and ordered the company to alter its practices')\n\nprint([(X.text, X.label_) for X in doc.ents])", "[('European', 'NORP'), ('Google', 'ORG'), ('$5.1 billion', 'MONEY'), ('Wednesday', 'DATE')]\n" ] ], [ [ "In the above code, the arguments passed are:\n\n\n\n* doc.ents: entity token spans \n* X.text: the original entity text\n* X.label_: the entity type's string description", "_____no_output_____" ] ], [ [ "## Visualizing the entities\n\ndisplacy.render(doc, jupyter=True, style='ent')", "_____no_output_____" ], [ "print(spacy.explain(\"NORP\"))\nprint(spacy.explain(\"ORG\"))", "Nationalities or religious or political groups\nCompanies, agencies, institutions, etc.\n" ], [ "## Visualizing the dependency tree\n\ndisplacy.render(doc, jupyter = True, style='dep', options = {'distance': 50})", "_____no_output_____" ] ], [ [ "## Part-2: Example", "_____no_output_____" ] ], [ [ "import requests\n\ntarget_url = \"https://raw.githubusercontent.com/sharvitomar/text-file/main/temp.txt\"\nresponse = requests.get(target_url)\ndata = response.text\ndata", "_____no_output_____" ], [ "article = nlp(data)\nprint(article)", "Technology firms including Microsoft have tried to disrupt a cybercriminal group whose malicious software has been used in ransomware attacks and other hacks around the world, the companies said Wednesday.\n\nThe effort included a court order from the US District Court for the Northern District of Georgia that allowed Microsoft (MSFT) to seize 65 internet domains used by the hacking group behind widely used malware known as ZLoader, Microsoft said.\n\nSince surfacing in 2019, ZLoader has been used in an array of financially motivated hacking schemes — many of them aimed at organizations in North America. 
The hackers have also been involved in a tool for deploying a type of ransomware that has to be used in hacks against health care organizations, according to Microsoft.\n\nMicrosoft said it identified one of the people involved in the hacking enterprise and that it referred information to law enforcement authorities.\nThe US Justice Department did not respond to a request for comment.\n\nOther cybersecurity firms involved in the takedown included US companies Lumen and Palo Alto Networks, and Slovakia-based ESET.\n\nIt's just the latest corporate or government effort to dismantle computer infrastructure, which is often registered in the United States, used by cybercriminals or intelligence operatives.\nMicrosoft said last week that it had used another court order to disable seven internet domains that a hacking group linked with Russian intelligence was using in a likely effort to support Russia's war in Ukraine.\n\nThe actions are far from fatal blows to the hacking groups, but it's an important effort to make it harder for them to operate.\n\"Each time we have a successful takedown like this, we increase the cost for them to do business and set the example for their successors that there is increased risk associated with their malicious activities,\" said Wendi Whitmore, head of Palo Alto Network's Unit 42 threat intelligence section.\n\n" ], [ "## 1. Number of entities in the article\nlen(article.ents)", "_____no_output_____" ], [ "from collections import Counter\n\n## 2. Number of unique labels of the entities\nlabels = [x.label_ for x in article.ents]\nCounter(labels)", "_____no_output_____" ], [ "## 3. The 3 most frequent tokens\nitems = [x.text for x in article.ents]\nCounter(items).most_common(3)", "_____no_output_____" ], [ "## 4. Visualise 1 sentence\nsentences = [x for x in article.sents]\ndisplacy.render(sentences[2], jupyter=True, style='ent')", "_____no_output_____" ], [ "## 5. Visualizating the entire article\ndisplacy.render(article, jupyter=True, style='ent')", "_____no_output_____" ] ], [ [ "## Part-3: Adding custom tokens as NE\n", "_____no_output_____" ] ], [ [ "doc = nlp('Tesla to build a U.K. factory for $6 million')\nprint([(X.text, X.label_) for X in doc.ents])", "[('U.K.', 'GPE'), ('$6 million', 'MONEY')]\n" ] ], [ [ "Right now, spaCy foes not recognize \"Tesla\" as a company.", "_____no_output_____" ] ], [ [ "from spacy.tokens import Span\n\n# Get the hash value of the ORG entity label\nORG = doc.vocab.strings['ORG']\n\n# Create a Span for the new entity\nnew_ent = Span(doc, 0, 1, label=ORG)\n\n# Add the entity to the existing Doc object\ndoc.ents = list(doc.ents) + [new_ent]", "_____no_output_____" ] ], [ [ "In the code above, the arguments passed to Span() are:\n\n\n\n* doc - the name of the Doc object\n* 0 - the start index position of the token in the doc\n* 1 - the stop index position(exclusive) of the token in the doc\n* label = ORG - the label assigned to our entity\n\n", "_____no_output_____" ] ], [ [ "print([(X.text, X.label_) for X in doc.ents])", "[('Tesla', 'ORG'), ('U.K.', 'GPE'), ('$6 million', 'MONEY')]\n" ], [ "displacy.render(doc, jupyter=True, style='ent')", "_____no_output_____" ], [ "", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
4a3c049106c895236ae812626acf58dd82423f19
10,013
ipynb
Jupyter Notebook
tests/notebooks/shortcash.ipynb
lmservas/vectorbt
be3ac88b7ed50834db599b3fd53a8421dfa480ed
[ "Apache-2.0" ]
1,787
2019-08-25T02:53:56.000Z
2022-03-31T23:28:01.000Z
tests/notebooks/shortcash.ipynb
lmservas/vectorbt
be3ac88b7ed50834db599b3fd53a8421dfa480ed
[ "Apache-2.0" ]
251
2020-02-25T09:14:51.000Z
2022-03-29T22:31:49.000Z
tests/notebooks/shortcash.ipynb
lmservas/vectorbt
be3ac88b7ed50834db599b3fd53a8421dfa480ed
[ "Apache-2.0" ]
304
2019-08-18T13:37:35.000Z
2022-03-31T16:00:44.000Z
33.600671
98
0.504944
[ [ [ "from datetime import datetime\nimport backtrader as bt\nimport pandas as pd\nimport numpy as np\nimport vectorbt as vbt\n\ndf = pd.DataFrame(index=[datetime(2020, 1, i + 1) for i in range(9)])\ndf['open'] = [1, 1, 2, 3, 4, 5, 6, 7, 8]\ndf['high'] = df['open'] + 0.5\ndf['low'] = df['open'] - 0.5\ndf['close'] = df['open']\ndata = bt.feeds.PandasData(dataname=df)\nsize = np.array([5, 5, -5, -5, -5, -5, 5, 5, 0])\n\n\nclass CommInfoFloat(bt.CommInfoBase):\n \"\"\"Commission schema that keeps size as float.\"\"\"\n params = (\n ('stocklike', True),\n ('commtype', bt.CommInfoBase.COMM_PERC),\n ('percabs', True),\n )\n \n def getsize(self, price, cash):\n if not self._stocklike:\n return self.p.leverage * (cash / self.get_margin(price))\n\n return self.p.leverage * (cash / price)\n\n\nclass CashValueAnalyzer(bt.analyzers.Analyzer):\n \"\"\"Analyzer to extract cash and value.\"\"\"\n def create_analysis(self):\n self.rets = {}\n\n def notify_cashvalue(self, cash, value):\n self.rets[self.strategy.datetime.datetime()] = (cash, value)\n\n def get_analysis(self):\n return self.rets\n\n\nclass TestStrategy(bt.Strategy):\n def __init__(self):\n self.i = 0\n \n def log(self, txt, dt=None):\n dt = dt or self.data.datetime[0]\n dt = bt.num2date(dt)\n print('%s, %s' % (dt.isoformat(), txt))\n \n def notify_order(self, order):\n if order.status in [bt.Order.Submitted, bt.Order.Accepted]:\n return # Await further notifications\n\n if order.status == order.Completed:\n if order.isbuy():\n buytxt = 'BUY COMPLETE {}, size = {:.2f}, price = {:.2f}'.format(\n order.data._name, order.executed.size, order.executed.price)\n self.log(buytxt, order.executed.dt)\n else:\n selltxt = 'SELL COMPLETE {}, size = {:.2f}, price = {:.2f}'.format(\n order.data._name, order.executed.size, order.executed.price)\n self.log(selltxt, order.executed.dt)\n\n elif order.status in [order.Expired, order.Canceled, order.Margin]:\n self.log('%s ,' % order.Status[order.status])\n pass # Simply log\n\n # Allow new orders\n self.orderid = None\n \n def next(self):\n if size[self.i] > 0:\n self.buy(size=size[self.i])\n elif size[self.i] < 0:\n self.sell(size=-size[self.i])\n self.i += 1\n\ndef bt_simulate(shortcash):\n cerebro = bt.Cerebro()\n comminfo = CommInfoFloat(commission=0.01)\n cerebro.broker.addcommissioninfo(comminfo)\n cerebro.addstrategy(TestStrategy)\n cerebro.addanalyzer(CashValueAnalyzer)\n cerebro.broker.setcash(100.)\n cerebro.broker.set_checksubmit(False)\n cerebro.broker.set_shortcash(shortcash)\n cerebro.adddata(data)\n return cerebro.run()[0]", "_____no_output_____" ], [ "strategy = bt_simulate(True)\nstrategy.analyzers.cashvalueanalyzer.get_analysis()", "2020-01-02T00:00:00, BUY COMPLETE , size = 5.00, price = 1.00\n2020-01-03T00:00:00, BUY COMPLETE , size = 5.00, price = 2.00\n2020-01-04T00:00:00, SELL COMPLETE , size = -5.00, price = 3.00\n2020-01-05T00:00:00, SELL COMPLETE , size = -5.00, price = 4.00\n2020-01-06T00:00:00, SELL COMPLETE , size = -5.00, price = 5.00\n2020-01-07T00:00:00, SELL COMPLETE , size = -5.00, price = 6.00\n2020-01-08T00:00:00, BUY COMPLETE , size = 5.00, price = 7.00\n2020-01-09T00:00:00, BUY COMPLETE , size = 5.00, price = 8.00\n" ], [ "portfolio = vbt.Portfolio.from_orders(df.close, [np.nan] + size[:-1].tolist(), fees=0.01)\nprint(portfolio.cash(free=False))\nprint(portfolio.value())", "2020-01-01 100.00\n2020-01-02 94.95\n2020-01-03 84.85\n2020-01-04 99.70\n2020-01-05 119.50\n2020-01-06 144.25\n2020-01-07 173.95\n2020-01-08 138.60\n2020-01-09 98.20\nName: close, dtype: 
float64\n2020-01-01 100.00\n2020-01-02 99.95\n2020-01-03 104.85\n2020-01-04 114.70\n2020-01-05 119.50\n2020-01-06 119.25\n2020-01-07 113.95\n2020-01-08 103.60\n2020-01-09 98.20\nName: close, dtype: float64\n" ], [ "strategy = bt_simulate(False)\nstrategy.analyzers.cashvalueanalyzer.get_analysis()", "2020-01-02T00:00:00, BUY COMPLETE , size = 5.00, price = 1.00\n2020-01-03T00:00:00, BUY COMPLETE , size = 5.00, price = 2.00\n2020-01-04T00:00:00, SELL COMPLETE , size = -5.00, price = 3.00\n2020-01-05T00:00:00, SELL COMPLETE , size = -5.00, price = 4.00\n2020-01-06T00:00:00, SELL COMPLETE , size = -5.00, price = 5.00\n2020-01-07T00:00:00, SELL COMPLETE , size = -5.00, price = 6.00\n2020-01-08T00:00:00, BUY COMPLETE , size = 5.00, price = 7.00\n2020-01-09T00:00:00, BUY COMPLETE , size = 5.00, price = 8.00\n" ], [ "print(portfolio.cash(free=True))\nprint(portfolio.value())", "2020-01-01 100.00\n2020-01-02 94.95\n2020-01-03 84.85\n2020-01-04 99.70\n2020-01-05 119.50\n2020-01-06 94.25\n2020-01-07 63.95\n2020-01-08 83.60\n2020-01-09 98.20\nName: close, dtype: float64\n2020-01-01 100.00\n2020-01-02 99.95\n2020-01-03 104.85\n2020-01-04 114.70\n2020-01-05 119.50\n2020-01-06 119.25\n2020-01-07 113.95\n2020-01-08 103.60\n2020-01-09 98.20\nName: close, dtype: float64\n" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code" ] ]
4a3c173691341174dd43f1130fbceb514875c3fc
171,197
ipynb
Jupyter Notebook
notebooks/cmip6_1pctCO2_ua_HadGEM3-GC31-MM_30yr_meandiff_pah.ipynb
cmip6moap/project05
f20eba17f15f8f4f24e553cc843390e85b8a8e25
[ "CC-BY-4.0" ]
null
null
null
notebooks/cmip6_1pctCO2_ua_HadGEM3-GC31-MM_30yr_meandiff_pah.ipynb
cmip6moap/project05
f20eba17f15f8f4f24e553cc843390e85b8a8e25
[ "CC-BY-4.0" ]
null
null
null
notebooks/cmip6_1pctCO2_ua_HadGEM3-GC31-MM_30yr_meandiff_pah.ipynb
cmip6moap/project05
f20eba17f15f8f4f24e553cc843390e85b8a8e25
[ "CC-BY-4.0" ]
1
2021-06-04T15:36:25.000Z
2021-06-04T15:36:25.000Z
88.38255
64,956
0.694136
[ [ [ "## Plot difference between 2 30yr means of zonal mean zonal wind in :\n#### HadGEM3-GC31-MM for selected season (DJF) and over selected latitude range (0-90)\n\n##### Created as part of PAMIP group during CMIP6 hackathon 2021\n##### Created by : Phoebe Hudson / Colin Manning\n\n", "_____no_output_____" ] ], [ [ "from itertools import chain\nfrom glob import glob\n\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport xarray as xr\nimport numpy as np\n\n# Set figsize and resolution\nplt.rcParams['figure.figsize'] = (10, 6)\nplt.rcParams['figure.dpi'] = 100", "_____no_output_____" ], [ "### Set data directory where data structure follows \n# /badc/cmip6/data/<mip_era>/<activity_id>/<institution_id>/<source_id>/<experiment_id>/<variant_label>/<table_id>/<variable_id>/<grid_label>/<version>\n# and experiment id follows\n# r = realization, i = initialization, p = physics, f = forcing\n\ndata_dir = \"/badc/cmip6/data/CMIP6/CMIP/MOHC/HadGEM3-GC31-MM/1pctCO2/r1i1p1f3/Amon/ua/gn/latest\"\n#data_dir = \"/badc/cmip6/data/CMIP6/CMIP/*/*/1pctCO2/*/Amon/ua/gn/latest\"\n\n!ls {data_dir}", "ua_Amon_HadGEM3-GC31-MM_1pctCO2_r1i1p1f3_gn_185001-186912.nc\nua_Amon_HadGEM3-GC31-MM_1pctCO2_r1i1p1f3_gn_187001-188912.nc\nua_Amon_HadGEM3-GC31-MM_1pctCO2_r1i1p1f3_gn_189001-190912.nc\nua_Amon_HadGEM3-GC31-MM_1pctCO2_r1i1p1f3_gn_191001-192912.nc\nua_Amon_HadGEM3-GC31-MM_1pctCO2_r1i1p1f3_gn_193001-194912.nc\nua_Amon_HadGEM3-GC31-MM_1pctCO2_r1i1p1f3_gn_195001-196912.nc\nua_Amon_HadGEM3-GC31-MM_1pctCO2_r1i1p1f3_gn_197001-198912.nc\nua_Amon_HadGEM3-GC31-MM_1pctCO2_r1i1p1f3_gn_199001-199912.nc\n" ], [ "%%time \nds = xr.open_mfdataset(data_dir + '/*.nc')\nds", "CPU times: user 742 ms, sys: 153 ms, total: 895 ms\nWall time: 4.9 s\n" ], [ "### Calculate zonal mean of zonal wind (mean over all longitudes) and sub-select by latitude (N hem = 0,90)\n\nds_lonmean = ds.mean(dim='lon')\nds_lonmean = ds_lonmean.sel(lat=slice(0, 90))\nds_lonmean.time", "_____no_output_____" ], [ "### Select winter months (DJF=Dec-Feb) from zonal mean zonal wind\n\nis_winter = ds_lonmean['time'].dt.season == 'DJF'\nds_lonmean_winter = ds_lonmean.isel(time=is_winter)\nds_lonmean_winter", "_____no_output_____" ], [ "### Calculate difference between two 30-yr means of zonal mean zonal winds for\n# 1850-01-16-1880-12-16 and \n# 1969-01-16-1999-12-16\n\nds_lonmean_endyrs = ds_lonmean_winter.sel(time=slice('1969-01-16', '1999-12-16')).mean(dim='time')\nds_lonmean_startyrs = ds_lonmean_winter.sel(time=slice('1850-01-16', '1880-12-16')).mean(dim='time')\nds_lonmean_yrsdiff = ds_lonmean_endyrs - ds_lonmean_startyrs\nds_lonmean_yrsdiff = ds_lonmean_yrsdiff.ua.values\nds_lonmean_yrsdiff", "/gws/pw/j05/cop26_hackathons/bristol/env/lib/python3.8/site-packages/dask/array/numpy_compat.py:39: RuntimeWarning: invalid value encountered in true_divide\n x = np.divide(x1, x2, out)\n" ], [ "### Plot zonal mean zonal wind difference for selected season in selected latitude region\n\n#ds_lonmean.ua.sel(time='1850-01-16').plot()\nfig, axes = plt.subplots(nrows=1, ncols=1) #, figsize=(15,15))\nax = plt.subplot(1,1,1)\ncmap = plt.get_cmap('RdBu_r')\nplt.contourf(ds_lonmean.lat, ds_lonmean.plev, ds_lonmean_yrsdiff, cmap='RdBu_r', levels=np.arange(-1,1,0.1), extend='both')\n\n#plt.pcolormesh(ds_lonmean.lat, ds_lonmean.plev, ds_lonmean_yrsdiff.ua.values, cmap='RdBu_r')\nplt.xlabel('Lat')\nplt.ylabel('Pressure Level (hPa)')\nplt.ylim([100000, 10000])\nplt.title('HadGEM3-GC31-MM')\n#axs=np.append(axs,ax)\n\nnormalize=matplotlib.colors.Normalize(vmin=-1, vmax=1)\ncax, _ 
= matplotlib.colorbar.make_axes(ax, location = \"bottom\", pad=0.05)\ncbar = matplotlib.colorbar.ColorbarBase(cax, cmap=cmap, norm=normalize,orientation=\"horizontal\")\n#cbar.set_label('Zonal windspeed ',size=16)\ncbar.ax.tick_params(labelsize=16)\nplt.show()\n", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ] ]
4a3c240c5f28a27de62e55a7495df9220e79bb4a
5,188
ipynb
Jupyter Notebook
Numpy_neural_network_implementation/Numpy_XOR_classification_exercise.ipynb
tillaczel/Machine-learning-workshop
1792a0215731dbcca75bc0c36cb11b8ae65f5025
[ "MIT" ]
null
null
null
Numpy_neural_network_implementation/Numpy_XOR_classification_exercise.ipynb
tillaczel/Machine-learning-workshop
1792a0215731dbcca75bc0c36cb11b8ae65f5025
[ "MIT" ]
null
null
null
Numpy_neural_network_implementation/Numpy_XOR_classification_exercise.ipynb
tillaczel/Machine-learning-workshop
1792a0215731dbcca75bc0c36cb11b8ae65f5025
[ "MIT" ]
null
null
null
24.704762
308
0.469352
[ [ [ "<a href=\"https://colab.research.google.com/github/tillaczel/Machine-learning-workshop/blob/resturcture/Numpy_neural_network_implementation/Numpy_XOR_classification_exercise.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# XOR classification\nBuild a neural network, which can classify the XOR data. The classifier should be a logistic regression with a treshold of 0.5. Use MSE for the loss function. Complete the code in the 'TASK: defining the neural network' section.\n", "_____no_output_____" ], [ "## Importing libraries", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt", "_____no_output_____" ] ], [ [ "## Defineing visualization", "_____no_output_____" ] ], [ [ "def vis(X,Y,title):\n fig = plt.figure(figsize=(8,8))\n plt.scatter(X[:,0], X[:,1], c=Y)\n plt.title(title)\n plt.show()", "_____no_output_____" ] ], [ [ "## TASK: Defineing the neural network", "_____no_output_____" ] ], [ [ "class network():\n\n def __init__(self, learning_rate, hidden_size):\n\n\n def predict(self,X):\n\n\n def train(self,X,Y,epoch):\n", "_____no_output_____" ] ], [ [ "## Training the model", "_____no_output_____" ] ], [ [ "learning_rate = 0.1\nhidden_size = 20\nmodel = network(learning_rate, hidden_size)", "_____no_output_____" ] ], [ [ "Creating the data and saving the prediction of the initialized model.", "_____no_output_____" ] ], [ [ "X = np.random.rand(10000,2)*2-1\nY = np.clip(X[:,0]*X[:,1]*np.inf,0,1)\nvis(X, Y, 'Training data')\nold_model_prediction = model.predict(X)", "_____no_output_____" ] ], [ [ "Training the network and visualizing the resutls", "_____no_output_____" ] ], [ [ "model.train(X,Y,10000)\nvis(X, np.round(old_model_prediction[:,0]), 'Prediction of the model before training')\nvis(X, np.round(model.predict(X)[:,0]), 'Prediction of the model after training')", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
4a3c264bf77ae46d480a5bab45ba4503cc53da36
21,138
ipynb
Jupyter Notebook
Notebooks/fake-news-rnn-variants .ipynb
satya1013/Fake-News-1
7c53e60407dfdd46315647004d2acd00c17f2451
[ "MIT" ]
null
null
null
Notebooks/fake-news-rnn-variants .ipynb
satya1013/Fake-News-1
7c53e60407dfdd46315647004d2acd00c17f2451
[ "MIT" ]
null
null
null
Notebooks/fake-news-rnn-variants .ipynb
satya1013/Fake-News-1
7c53e60407dfdd46315647004d2acd00c17f2451
[ "MIT" ]
1
2021-09-09T12:31:31.000Z
2021-09-09T12:31:31.000Z
36.954545
161
0.606112
[ [ [ "# This Python 3 environment comes with many helpful analytics libraries installed\n# It is defined by the kaggle/python Docker image: https://github.com/kaggle/docker-python\n# For example, here's several helpful packages to load\n\nimport numpy as np # linear algebra\nimport pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)\n\n# Input data files are available in the read-only \"../input/\" directory\n# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory\n\nimport os\nfor dirname, _, filenames in os.walk('/kaggle/input'):\n for filename in filenames:\n print(os.path.join(dirname, filename))\n\n# You can write up to 20GB to the current directory (/kaggle/working/) that gets preserved as output when you create a version using \"Save & Run All\" \n# You can also write temporary files to /kaggle/temp/, but they won't be saved outside of the current session", "_____no_output_____" ], [ "#dependencies\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n!pip install contractions\nimport nltk\nnltk.download('stopwords')\nnltk.download('punkt')\nnltk.download('words')\nimport re\nimport pickle\nimport numpy as np\nimport pandas as pd\nimport contractions\nimport seaborn as sns\nimport matplotlib.pyplot as plt\nfrom nltk.corpus import stopwords\nfrom nltk.stem import WordNetLemmatizer\nfrom tensorflow.keras.preprocessing.text import Tokenizer\nfrom tensorflow.keras.preprocessing.sequence import pad_sequences\nfrom tensorflow.keras.models import Sequential\nfrom tensorflow.keras.layers import Dense, Embedding, LSTM, Conv1D, MaxPooling1D, Dropout, BatchNormalization\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.metrics import classification_report, accuracy_score\nimport gensim\nfrom sklearn.metrics import confusion_matrix\nimport tensorflow as tf \nfrom tensorflow.keras.models import Model, load_model\n#from tensorflow.keras.callbacks import ReduceLROnPlateau, LearningRateScheduler, EarlyStopping, ModelCheckpoint\nfrom kaggle_datasets import KaggleDatasets\nimport transformers\nfrom transformers import TFAutoModel, AutoTokenizer\nfrom tqdm.notebook import tqdm\n#from tokenizers import Tokenizer\nfrom tensorflow.keras.layers import Input, Dense\nfrom tensorflow.keras.optimizers import Adam\nimport tensorflow.keras", "_____no_output_____" ], [ "df_2 = pd.read_csv(\"/kaggle/input/fake-news/train.csv\", header=0, index_col=0)\ndf_t = pd.read_csv(\"/kaggle/input/fake-news/test.csv\", header=0, index_col=0)\ndf_2 = df_2.drop(['title','author'], axis = 1)\ndf_t = df_t.drop(['title','author'], axis = 1)\ndf_2.dropna(inplace = True)\ndf_t.fillna('',inplace = True)\n#print(df_2.isnull().sum(axis = 0))\n\n", "_____no_output_____" ], [ "def clean_text(text_col ):\n text_col = text_col.apply(lambda x: [contractions.fix(word, slang=False).lower() for word in x.split()])\n text_col = text_col.apply(lambda x: [re.sub(r'[^\\w\\s]','', word) for word in x])\n stop_words = set(stopwords.words('english'))\n text_col = text_col.apply(lambda x: [word for word in x if word not in stop_words])\n text_col = text_col.apply(lambda x: [word for word in x if re.search(\"[@_!#$%^&*()<>?/|}{~:0-9]\", word) == None])\n return text_col\ndf_2[\"text\"] = clean_text(df_2[\"text\"])\ndf_t[\"text\"] = clean_text(df_t[\"text\"])\ndf_2['label'] = df_2['label'].apply(lambda x: int(x))\ny = df_2['label']", "_____no_output_____" ], [ "#y = df_2[\"label\"]", "_____no_output_____" ], [ "#lemmatizing\nwordnet_lemmatizer = 
WordNetLemmatizer()\nx = []\nx_test = []\nenglish_words = set(nltk.corpus.words.words())\nfor words in df_2['text']:\n    tmp = []\n    fil_wor = [wordnet_lemmatizer.lemmatize(word, 'n') for word in words if word in english_words]\n    tmp.extend(fil_wor)\n    x.append(tmp)\n    \nfor words in df_t['text']:\n    tmp = []\n    fil_wor = [wordnet_lemmatizer.lemmatize(word, 'n') for word in words if word in english_words]\n    tmp.extend(fil_wor)\n    x_test.append(tmp)\n    \ndf_2[\"text\"] = x\ndf_t[\"text\"] = x_test\n", "_____no_output_____" ], [ "#creating word embedding\nx_all = x.copy()\n# use i as the loop index so the label vector y is not overwritten\nfor i in range(len(x_test)):\n    x_all.append(x_test[i])\n#n of vectors we are generating\nEMBEDDING_DIM = 100\n#Creating Word Vectors by Word2Vec Method (takes time...)\nw2v_model = gensim.models.Word2Vec(sentences=x_all, vector_size=EMBEDDING_DIM, window=5, min_count=1)\nprint(len(w2v_model.wv))\n\n#testing a word embedding\nw2v_model.wv[\"liberty\"]\n\n#similarity between words\nword = 'people'\nw2v_model.wv.most_similar(word)", "_____no_output_____" ], [ "#tokenizing\ntokenizer = Tokenizer()\ntokenizer.fit_on_texts(x)\n\nx = tokenizer.texts_to_sequences(x)\nx_test = tokenizer.texts_to_sequences(x_test)\nprint(x[0][:10])\n\nword_index = tokenizer.word_index\nfor word, num in word_index.items():\n    print(f\"{word} -> {num}\")\n    if num == 10:\n        break", "_____no_output_____" ], [ "#padding\nmaxlen = 700\nx = pad_sequences(x, maxlen=maxlen)\nx_test = pad_sequences(x_test, maxlen=maxlen)", "_____no_output_____" ], [ "# Function to create weight matrix from word2vec gensim model\ndef get_weight_matrix(model, vocab):\n    # total vocabulary size plus 0 for unknown words\n    vocab_size = len(vocab) + 1\n    # define weight matrix dimensions with all 0\n    weight_matrix = np.zeros((vocab_size, EMBEDDING_DIM))\n    # step vocab, store vectors using the Tokenizer's integer mapping\n    for word, i in vocab.items():\n        weight_matrix[i] = model[word]\n    return weight_matrix, vocab_size\n\n#Getting embedding vectors from word2vec and using them as weights of non-trainable keras embedding layer\nembedding_vectors, vocab_size = get_weight_matrix(w2v_model.wv, word_index)", "_____no_output_____" ], [ "#Defining Neural Network\nmodel = Sequential()\n#Non-trainable embedding layer\nmodel.add(Embedding(vocab_size, output_dim=EMBEDDING_DIM, weights=[embedding_vectors], input_length=maxlen, trainable=False))\n#LSTM \nmodel.add(Dropout(0.2))\n#model.add(Conv1D(filters=32, kernel_size=5, padding='same', activation='relu'))\n#model.add(MaxPooling1D(pool_size=2))\nmodel.add(Conv1D(filters=64, kernel_size=3, padding='same', activation='relu'))\nmodel.add(MaxPooling1D(pool_size=2))\nmodel.add(LSTM(units=128,dropout=0.2, return_sequences=True))\nmodel.add(BatchNormalization())\nmodel.add(LSTM(units=128,dropout=0.2))\nmodel.add(BatchNormalization())\n#model.add(Dropout(0.2))\nmodel.add(Dense(1, activation='sigmoid'))\nmodel.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])\nmodel.summary()\n", "_____no_output_____" ], [ "X_train, X_test, y_train, y_test = train_test_split(x, y)\nmodel.fit(X_train, y_train, validation_data= (X_test,y_test), epochs=50)", "_____no_output_____" ], [ "#validation_data_performance evaluation\ny_pred = (model.predict(X_test) >= 0.5).astype(\"int\")\nprint(accuracy_score(y_test, y_pred))\nprint(classification_report(y_test, y_pred)) \ncm = confusion_matrix(y_test, y_pred)\nplt.figure(figsize=(10,7))\nsns.heatmap(cm, annot=True)\nplt.xlabel('Predicted')\nplt.ylabel('Truth')", "_____no_output_____" ], [ 
"#test_data_for_scoring_on_kaggle\ny_t = (model.predict(x_test) >= 0.5).astype(\"int\")\nresult = pd.DataFrame({\"id\" :df_t.index, \"label\":y_t.squeeze() }, index = None )\nresult.to_csv(\"result_rnn.csv\",index = False)", "_____no_output_____" ], [ "#let's include an attention layer in our model\nclass Attention(tf.keras.Model):\n def __init__(self, units):\n super(Attention, self).__init__()\n self.W1 = tf.keras.layers.Dense(units)\n self.W2 = tf.keras.layers.Dense(units)\n self.V = tf.keras.layers.Dense(1)\n def call(self, features, hidden):\n hidden_with_time_axis = tf.expand_dims(hidden, 1)\n score = tf.nn.tanh(self.W1(features)+ self.W2(hidden_with_time_axis))\n attention_weights = tf.nn.softmax(self.V(score),axis = 1)\n context_vector = attention_weights * features\n context_vector = tf.reduce_sum(context_vector, axis = 1)\n return context_vector, attention_weights", "_____no_output_____" ], [ "'''#dd attention layer to the deep learning network\nclass Attention(Layer):\n def __init__(self,**kwargs):\n super(Attention,self).__init__(**kwargs)\n \n def build(self,input_shape):\n self.W=self.add_weight(name='attention_weight', #shape=(input_shape[-1],1), \n initializer='random_normal', trainable=True)\n self.b=self.add_weight(name='attention_bias', #shape=(input_shape[1],1), \n initializer='zeros', trainable=True) \n super(Attention, self).build(input_shape)\n \n def call(self,x):\n # Alignment scores. Pass them through tanh function\n e = K.tanh(K.dot(x,self.W)+self.b)\n # Remove dimension of size 1\n e = K.squeeze(e, axis=-1) \n # Compute the weights\n alpha = K.softmax(e)\n # Reshape to tensorFlow format\n alpha = K.expand_dims(alpha, axis=-1)\n # Compute the context vector\n context = x * alpha\n context = K.sum(context, axis=1)\n return context'''", "_____no_output_____" ], [ "#RNN with Attention model\nsequence_input = Input(shape = (maxlen,), dtype = \"int32\")\nembedding = Embedding(vocab_size, output_dim=EMBEDDING_DIM, weights=[embedding_vectors], input_length=maxlen, trainable=False)(sequence_input)\ndropout = Dropout(0.2)(embedding)\n\nconv1 = Conv1D(filters=64, kernel_size=3, padding='same', activation='relu')(dropout)\nmaxp = MaxPooling1D(pool_size=2)(conv1)\n#(lstm, state_h, state_c) = LSTM(units=128,return_sequences=True,dropout=0.2, return_state= True)(maxp)\n#bn1 = BatchNormalization()((lstm, state_h, state_c))\n(lstm, state_h, state_c) = LSTM(units=128,dropout=0.2, return_sequences=True, return_state= True)(maxp)\ncontext_vector, attention_weights = Attention(10)(lstm, state_h)\ndensee = Dense(20, activation='relu')(context_vector)\n#bn = BatchNormalization()(densee)\ndropout2 = Dropout(0.2)(densee)\ndensef = Dense(1, activation='sigmoid')(dropout2)\nmodel = tensorflow.keras.Model(inputs = sequence_input, outputs = densef)\nmodel.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])\ndisplay(model.summary())\nmodel.fit(X_train, y_train, validation_data= (X_test,y_test), epochs=50)", "_____no_output_____" ], [ "#checking accuracy on validation data\ny_pred = (model.predict(X_test) >= 0.5).astype(\"int\")\nprint(accuracy_score(y_test, y_pred))", "_____no_output_____" ], [ "y_t = (model.predict(x_test) >= 0.5).astype(\"int\")\nresult = pd.DataFrame({\"id\" :df_t.index, \"label\":y_t.squeeze() }, index = None )\nresult.to_csv(\"result_rnnattenion.csv\",index = False)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a3c382994d5e46cd629ce04d6c3cd64a4e9639e
78,398
ipynb
Jupyter Notebook
examples/GaussianMixture_full_VB.ipynb
huangyh09/TensorFlow-Bayes
78c451fe47dbdd690727d04bf4d4b0a09ce22b8f
[ "Apache-2.0" ]
1
2021-07-17T09:36:32.000Z
2021-07-17T09:36:32.000Z
examples/GaussianMixture_full_VB.ipynb
Rongtingting/TensorFlow-Bayes
6d324b591a3eb93347e5727114df33a13e99c201
[ "Apache-2.0" ]
null
null
null
examples/GaussianMixture_full_VB.ipynb
Rongtingting/TensorFlow-Bayes
6d324b591a3eb93347e5727114df33a13e99c201
[ "Apache-2.0" ]
5
2019-12-18T09:29:57.000Z
2022-01-19T14:01:51.000Z
182.745921
26,616
0.884053
[ [ [ "# Gaussian mixture model", "_____no_output_____" ], [ "The model in prototyped with TensorFlow Probability and inferecne is performed with variational Bayes by stochastic gradient descent. \nDetails on [Wikipedia](https://en.wikipedia.org/wiki/Mixture_model#Gaussian_mixture_model).\n\nSome codes are borrowed from \n[Brendan Hasz](https://brendanhasz.github.io/2019/06/12/tfp-gmm.html) and \n[TensorFlow Probability examples](https://www.tensorflow.org/probability/overview)\n\nAuthor: Yuanhua Huang\n\nDate: 29/01/2020", "_____no_output_____" ], [ "#### Definition of likelihood\n\nBelow is the definition of likelihood by introducing the latent variable Z for sample assignment identity, namely, Z is a Categorical distribution (a sepcial case of multinomial with total_counts=1), and the prior $P(z_i=k)$ can be specified per data point or shared by whole data set:\n\n$$ \\mathcal{L} = P(X | \\mu, \\sigma, Z) = \\prod_{i=1}^N \\prod_{j=1}^D \\prod_{k=1}^K P(z_i=k) \\{ \\mathcal{N}(x_{ij}|\\mu_{k,j}, \\sigma_{k,j}) \\}^{z_i=k}$$\n\nThe evidence lower bound (ELBO) can be written as \n\n$$\\mathtt{L}=\\mathtt{KL}(q(Y)||p(Y)) - \\int{q(Y)\\log{p(X|Y)}dY}$$\n\nwhere $Y$ denotes the set of all unknown variables and $X$ denotes the observed data points. The derivations can be found in page 463 in [Bishop, PRML 2006](https://www.springer.com/gp/book/9780387310732).\n\n**Note**, this implementation is mainly for tutorial example, and hasn't been optimised, for example introducing multiple initialization to avoid local optimal caused by poor initialization.\n\n**Also**, the assignment variable $z$ can be marginalised and the impelementation can be found in \n[GaussianMixture_VB.ipynb](https://github.com/huangyh09/TensorFlow-Bayes/blob/master/examples/GaussianMixture_VB.ipynb).", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt\n\nimport tensorflow as tf\nimport tensorflow_probability as tfp\nfrom tensorflow_probability import distributions as tfd\n\n# Random seed\nnp.random.seed(1)\ntf.random.set_seed(1)", "_____no_output_____" ] ], [ [ "## Generate data", "_____no_output_____" ] ], [ [ "# Generate some data\nnp.random.seed(0)\nN = 3000\nX = np.random.randn(N, 2).astype('float32')\nX[:1000, :] += [2, 0]\nX[1000:2000, :] -= [2, 4]\nX[2000:, :] += [-2, 4]\n\n# Plot the data\nplt.plot(X[:, 0], X[:, 1], '.')\nplt.axis('equal')\nplt.show()", "_____no_output_____" ] ], [ [ "## Define model", "_____no_output_____" ] ], [ [ "from TFBayes.mixture.Gaussian_MM_full import GaussianMixture", "_____no_output_____" ], [ "model = GaussianMixture(3, 2, 3000)\n# model.set_prior(theta_prior = tfd.Dirichlet(5 * tf.ones((3, ))))\nmodel.KLsum", "_____no_output_____" ], [ "# model.alpha, model.beta, model.gamma", "_____no_output_____" ], [ "# losses = model.fit(X, sampling=False, learn_rate=0.03, num_steps=500)\nlosses = model.fit(X, sampling=True, learn_rate=0.02, num_steps=500, n_sample=10)", "_____no_output_____" ], [ "plt.plot(losses)\nplt.show()", "_____no_output_____" ], [ "# Compute log likelihood at each point on a grid\nNp = 100 #number of grid points\nXp, Yp = np.meshgrid(np.linspace(-6,6,Np), np.linspace(-6,6,Np))\nPp = np.column_stack([Xp.flatten(), Yp.flatten()])\nZ = model.logLik(Pp.astype('float32'), sampling=False, use_ident=False)\nZ = np.reshape(Z, (Np, Np))\n \n# Show the fit mixture density\nplt.imshow(np.exp(Z),\n extent=(-6, 6, -6, 6),\n origin='lower')\ncbar = plt.colorbar()\ncbar.ax.set_ylabel('Likelihood')", "_____no_output_____" ] ], [ [ "## Model\nThe codes 
below is also included in [TFBayes.mixture.Gaussian_MM_full.py](https://github.com/huangyh09/TensorFlow-Bayes/blob/master/TFBayes/mixture/Gaussian_MM_full.py).", "_____no_output_____" ] ], [ [ "class GaussianMixture():\n \"\"\"A Bayesian Gaussian mixture model.\n Assumes Gaussians' variances in each dimension are independent.\n \n Parameters\n ----------\n Nc : int > 0\n Number of mixture components.\n Nd : int > 0\n Number of dimensions.\n Ns : int > 0\n Number of data points.\n \"\"\"\n def __init__(self, Nc, Nd, Ns=0):\n # Initialize\n self.Nc = Nc\n self.Nd = Nd\n self.Ns = Ns\n \n # Variational distribution variables for means\n self.locs = tf.Variable(tf.random.normal((Nc, Nd)))\n self.scales = tf.Variable(tf.pow(tf.random.gamma((Nc, Nd), 5, 5), -0.5))\n \n # Variational distribution variables for standard deviations\n self.alpha = tf.Variable(tf.random.uniform((Nc, Nd), 4., 6.))\n self.beta = tf.Variable(tf.random.uniform((Nc, Nd), 4., 6.))\n \n # Variational distribution variables for assignment logit\n self.gamma = tf.Variable(tf.random.uniform((Ns, Nc), -2, 2))\n \n self.set_prior()\n \n def set_prior(self, mu_prior=None, sigma_prior=None, ident_prior=None):\n \"\"\"Set prior ditributions\n \"\"\"\n # Prior distributions for the means\n if mu_prior is None:\n self.mu_prior = tfd.Normal(tf.zeros((self.Nc, self.Nd)), \n tf.ones((self.Nc, self.Nd)))\n else:\n self.mu_prior = self.mu_prior\n\n # Prior distributions for the standard deviations\n if sigma_prior is None:\n self.sigma_prior = tfd.Gamma(2 * tf.ones((self.Nc, self.Nd)), \n 2 * tf.ones((self.Nc, self.Nd)))\n else:\n self.sigma_prior = sigma_prior\n \n # Prior distributions for sample assignment\n if ident_prior is None:\n self.ident_prior = tfd.Multinomial(total_count=1,\n probs=tf.ones((self.Ns, self.Nc))/self.Nc)\n else:\n self.ident_prior = ident_prior\n \n @property\n def mu(self):\n \"\"\"Variational posterior for distribution mean\"\"\"\n return tfd.Normal(self.locs, self.scales)\n \n @property\n def sigma(self):\n \"\"\"Variational posterior for distribution variance\"\"\"\n return tfd.Gamma(self.alpha, self.beta)\n # return tfd.Gamma(tf.math.exp(self.alpha), tf.math.exp(self.beta))\n \n @property\n def ident(self):\n return tfd.Multinomial(total_count=1, \n probs=tf.math.softmax(self.gamma))\n\n \n @property\n def KLsum(self):\n \"\"\"\n Sum of KL divergences between posteriors and priors\n The KL divergence for multinomial distribution is defined manually\n \"\"\"\n kl_mu = tf.reduce_sum(tfd.kl_divergence(self.mu, self.mu_prior))\n kl_sigma = tf.reduce_sum(tfd.kl_divergence(self.sigma, self.sigma_prior))\n kl_ident = tf.reduce_sum(self.ident.mean() * \n tf.math.log(self.ident.mean() / \n self.ident_prior.mean())) # axis=0\n \n return kl_mu + kl_sigma + kl_ident\n \n \n def logLik(self, x, sampling=False, n_sample=10, use_ident=True):\n \"\"\"Compute log likelihood given a batch of data.\n \n Parameters\n ----------\n x : tf.Tensor, (n_sample, n_dimention)\n A batch of data\n sampling : bool\n Whether to sample from the variational posterior\n distributions (if True, the default), or just use the\n mean of the variational distributions (if False).\n n_sample : int\n The number of samples to generate\n use_ident : bool\n Setting True for fitting the model and False for testing logLik\n \n Returns\n -------\n log_likelihoods : tf.Tensor\n Log likelihood for each sample\n \"\"\"\n #TODO: sampling doesn't converge well in the example data set\n\n Nb, Nd = x.shape\n x = tf.reshape(x, (1, Nb, 1, Nd)) # (n_sample, Ns, Nc, 
Nd)\n\n # Sample from the variational distributions\n if sampling:\n _mu = self.mu.sample((n_sample, 1))\n _sigma = tf.pow(self.sigma.sample((n_sample, 1)), -0.5)\n else:\n _mu = tf.reshape(self.mu.mean(), (1, 1, self.Nc, self.Nd))\n _sigma = tf.pow(tf.reshape(self.sigma.mean(), \n (1, 1, self.Nc, self.Nd)), -0.5)\n \n # Calculate the probability density\n _model = tfd.Normal(_mu, _sigma)\n _log_lik_mix = _model.log_prob(x)\n \n if use_ident:\n _ident = tf.reshape(self.ident.mean(), (1, self.Ns, self.Nc, 1))\n _log_lik_mix = _log_lik_mix * _ident\n log_likelihoods = tf.reduce_sum(_log_lik_mix, axis=[0, 2, 3])\n else:\n _fract = tf.reshape(tf.reduce_mean(self.ident.mean(), axis=0),\n (1, 1, self.Nc, 1))\n _log_lik_mix = _log_lik_mix + tf.math.log(_fract)\n log_likelihoods = tf.reduce_mean(tf.math.reduce_logsumexp(\n tf.reduce_sum(_log_lik_mix, axis=3), axis=2), axis=0)\n \n return log_likelihoods\n \n def fit(self, x, num_steps=200, \n optimizer=None, learn_rate=0.05, **kwargs):\n \"\"\"Fit the model's parameters\"\"\"\n if optimizer is None:\n optimizer = tf.optimizers.Adam(learning_rate=learn_rate)\n \n loss_fn = lambda: (self.KLsum - \n tf.reduce_sum(self.logLik(x, **kwargs)))\n \n losses = tfp.math.minimize(loss_fn, \n num_steps=num_steps, \n optimizer=optimizer)\n return losses", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
4a3c4483624eea396ce4d11a160285b1b573e95f
124,463
ipynb
Jupyter Notebook
code/notebooks/psychosis-Copy3.ipynb
zeroknowledgediscovery/zcad
5642a7ab0ac29337a4066305091811032ab9032b
[ "MIT" ]
null
null
null
code/notebooks/psychosis-Copy3.ipynb
zeroknowledgediscovery/zcad
5642a7ab0ac29337a4066305091811032ab9032b
[ "MIT" ]
null
null
null
code/notebooks/psychosis-Copy3.ipynb
zeroknowledgediscovery/zcad
5642a7ab0ac29337a4066305091811032ab9032b
[ "MIT" ]
null
null
null
173.831006
56,524
0.879466
[ [ [ "import numpy as np\nimport matplotlib.pyplot as plt\nfrom sklearn import svm\nimport pandas as pd\nimport seaborn as sns\nfrom sklearn import svm\nfrom sklearn.model_selection import train_test_split\nfrom sklearn import metrics\nfrom sklearn import neighbors, datasets\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.datasets import make_blobs\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.ensemble import ExtraTreesClassifier\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.ensemble import AdaBoostClassifier\nfrom sklearn.ensemble import GradientBoostingClassifier\nfrom scipy.spatial import ConvexHull\nfrom tqdm import tqdm\nimport random\nplt.style.use('ggplot')\nimport pickle\nfrom sklearn import tree\nfrom sklearn.tree import export_graphviz\nfrom joblib import dump, load\nfrom sklearn.metrics import confusion_matrix\nfrom sklearn.metrics import accuracy_score\nfrom sklearn.metrics import classification_report\nfrom scipy.interpolate import interp1d\n%matplotlib inline", "_____no_output_____" ], [ "def getAuc(X,y,test_size=0.25,max_depth=None,n_estimators=100,\n minsplit=4,FPR=[],TPR=[],VERBOSE=False, USE_ONLY=None):\n '''\n get AUC given training data X, with target labels y\n '''\n X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size)\n CLASSIFIERS=[DecisionTreeClassifier(max_depth=max_depth, min_samples_split=minsplit,class_weight='balanced'),\n RandomForestClassifier(n_estimators=n_estimators,\n max_depth=max_depth,min_samples_split=minsplit,class_weight='balanced'),\n ExtraTreesClassifier(n_estimators=n_estimators,\n max_depth=max_depth,min_samples_split=minsplit,class_weight='balanced'),\n AdaBoostClassifier(n_estimators=n_estimators),\n GradientBoostingClassifier(n_estimators=n_estimators,max_depth=max_depth),\n svm.SVC(kernel='rbf',gamma='scale',class_weight='balanced',probability=True)]\n\n if USE_ONLY is not None:\n if isinstance(USE_ONLY, (list,)):\n CLASSIFIERS=[CLASSIFIERS[i] for i in USE_ONLY]\n if isinstance(USE_ONLY, (int,)):\n CLASSIFIERS=CLASSIFIERS[USE_ONLY]\n\n for clf in CLASSIFIERS:\n clf.fit(X_train,y_train)\n y_pred=clf.predict_proba(X_test)\n fpr, tpr, thresholds = metrics.roc_curve(y_test,y_pred[:,1], pos_label=1)\n auc=metrics.auc(fpr, tpr)\n if VERBOSE:\n print(auc)\n\n FPR=np.append(FPR,fpr)\n TPR=np.append(TPR,tpr)\n points=np.array([[a[0],a[1]] for a in zip(FPR,TPR)])\n hull = ConvexHull(points)\n x=np.argsort(points[hull.vertices,:][:,0])\n auc=metrics.auc(points[hull.vertices,:][x,0],points[hull.vertices,:][x,1])\n return auc,CLASSIFIERS\n\n\ndef saveFIG(filename='tmp.pdf',AXIS=False):\n '''\n save fig for publication\n '''\n import pylab as plt\n plt.subplots_adjust(top = 1, bottom = 0, right = 1, left = 0, \n hspace = 0, wspace = 0)\n plt.margins(0,0)\n if not AXIS:\n plt.gca().xaxis.set_major_locator(plt.NullLocator())\n plt.gca().yaxis.set_major_locator(plt.NullLocator())\n plt.savefig(filename,dpi=300, bbox_inches = 'tight',\n pad_inches = 0,transparent=False) \n return", "_____no_output_____" ], [ "df=pd.read_csv('psychoByDiag.csv',index_col=0,sep=',')", "_____no_output_____" ], [ "df=df[df['DX']>0]\n\n#df=df[df.DX.between(1,2)]\n\nX=df.iloc[:,1:].values\ny=df.iloc[:,0].values.astype(str)\ny=[(x=='1')+0 for x in y]\nXdiag=X", "_____no_output_____" ], [ "ACC=[]\nCLFdiag=None\nfor run in tqdm(np.arange(500)):\n auc,CLFS=getAuc(X,y,test_size=0.2,max_depth=3,n_estimators=2,\n minsplit=2,VERBOSE=False, USE_ONLY=[2])\n ACC=np.append(ACC,auc)\n if auc > 0.85:\n 
CLFdiag=CLFS\nsns.distplot(ACC)\nnp.median(ACC)", "100%|██████████| 500/500 [00:02<00:00, 210.95it/s]\n" ], [ "df=pd.read_csv('PSYCHO.DAT',header=None,index_col=0,sep='\\s+')\ndf=df[df[1]>0]\n#df=df[df[1].between(1,2)]\n\nX=df.loc[:,2:].values\n#y=df.loc[:,1].values.astype(str)\ny=(df.loc[:,1]==1)+0\nXpsy=X", "_____no_output_____" ], [ "df=pd.read_csv('/home/ishanu/Dropbox/scratch_/Qfeatures.csv')\ndf=df[df.labels>0]\n\n#df=df[df.labels.between(1,2)]\n\nXq=df.drop('labels',axis=1).values\n#y=df.labels.values.astype(str)\nX=np.c_[Xpsy,Xq]\n#X=np.c_[X,Xdiag]\n#X=np.c_[Xpsy,Xdiag]\n#X=X1\n#X=np.c_[Xpsy,Xdiag]", "_____no_output_____" ], [ "y=(df.labels==1)+0", "_____no_output_____" ], [ "X.shape", "_____no_output_____" ], [ "qACC=[]\nCLF={}\nfor run in tqdm(np.arange(2000)):\n auc,CLFS=getAuc(X,y,test_size=0.6,max_depth=10,n_estimators=2,\n minsplit=2,VERBOSE=False, USE_ONLY=[2])\n qACC=np.append(qACC,auc)\n if auc > 0.8:\n CLF[auc]=CLFS\n #print('.')\nax=sns.distplot(ACC,label='noq')\nsns.distplot(qACC,ax=ax,label='Q')\nax.legend()\nnp.median(qACC)\nCLF", "100%|██████████| 2000/2000 [00:11<00:00, 178.07it/s]\n" ], [ "\nCLFstar=CLF[np.array([k for k in CLF.keys()]).max()][0]", "_____no_output_____" ], [ "auc_=[]\nROC={}\nfpr_ = np.linspace(0, 1, num=20, endpoint=True)\nfor run in np.arange(1000):\n clf=CLFstar\n X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5)\n y_pred=clf.predict_proba(X_test)\n fpr, tpr, thresholds = metrics.roc_curve(y_test,y_pred[:,1], pos_label=1)\n f = interp1d(fpr, tpr)\n auc_=np.append(auc_,metrics.auc(fpr_, f(fpr_)))\n ROC[metrics.auc(fpr, tpr)]={'fpr':fpr_,'tpr':f(fpr_)}\nsns.distplot(auc_)\nauc_.mean()", "_____no_output_____" ], [ "TPR=[]\nplt.figure(figsize=[6,5])\nfor a in ROC.keys():\n #print(a)\n #break\n plt.plot(ROC[a]['fpr'],ROC[a]['tpr'],'-k',alpha=.05)\n TPR=np.append(TPR,ROC[a]['tpr'])\nTPR=TPR.reshape(int(len(TPR)/len(fpr_)),len(fpr_))\nplt.plot(fpr_,np.median(TPR,axis=0),'-r')\nmetrics.auc(fpr_,np.median(TPR,axis=0))\nFS=18\n#plt.gca().set_title('schizophrenia vs others',fontsize=18,y=1.02)\nplt.text(.6,.25,'AUC: '+str(85.7)+'%',color='r',fontsize=FS)\n#plt.text(.6,.25,'AUC: '+str(metrics.auc(fpr_,np.median(TPR,axis=0)))[:5],color='r')\n#plt.text(.6,.31,'AUC: '+str(metrics.auc(fpr_,np.median(tprA,axis=0)))[:5],color='b')\n#plt.text(.6,.19,'AUC: '+str(metrics.auc(fpr_,np.median(tprB,axis=0)))[:5],color='g')\nFS=18\nplt.gca().set_ylabel('sensitivity',fontsize=FS,labelpad=10,color='.5')\nplt.gca().set_xlabel('1-specificity',fontsize=FS,labelpad=10,color='.5')\nplt.gca().tick_params(axis='x', labelsize=FS,labelcolor='.5' )\nplt.gca().tick_params(axis='y', labelsize=FS ,labelcolor='.5')\n\n\n\n\n\nsaveFIG('sczVSall.pdf',AXIS=True)", "_____no_output_____" ], [ "# confidence bound calculations\nfrom scipy import interpolate\nimport subprocess\nfrom sklearn import metrics\n\nxnew = np.arange(0.01, 1, 0.01)\n\nY=[]\nfor a in ROC.keys():\n #print(a)\n #break\n x=ROC[a]['fpr']\n y=ROC[a]['tpr']\n f = interpolate.interp1d(x, y)\n ynew = f(xnew)\n Y=np.append(Y,ynew)\n #plt.plot(x, y, 'o', xnew, ynew, '-')\n #break\nY=pd.DataFrame(Y.reshape(int(len(Y)/len(xnew)),len(xnew))).sample(20).transpose()\nY.to_csv('Y.csv',index=None,header=None,sep=' ')\nT=0.99\n\nCNFBD=\"~/ZED/Research/data_science_/bin/cnfbd \"\nsubprocess.call(CNFBD+\" -N 5 -f Y.csv -a \"+str(T)+\" > Y.dat \", shell=True)\nYb=pd.read_csv('Y.dat',header=None,sep=' ',names=['lb','mn','ub'])\nYb['fpr']=xnew\nYb.head()\nBND=[metrics.auc(Yb.fpr, Yb.lb),metrics.auc(Yb.fpr, 
Yb.mn),metrics.auc(Yb.fpr, Yb.ub)]\nBND\nprint(T, '% cnfbnd', BND[0],BND[2], ' mean:', BND[1])", "0.99 % cnfbnd 0.8449124604999999 0.868956085  mean: 0.856934285\n" ], [ "! ~/ZED/Research/data_science_/bin/cnfbd -h", "\r\n### CNFBD ### (ishanu chattopadhyay 2018):\r\n\r\nProgram information:\r\n  -h [ --help ]         print help message.\r\n  -V [ --version ]      print version number\r\n\r\nUsage:\r\n  -f [ --datafile ] arg datafile []\r\n  -N [ --numeach ] arg  number of histogram draws [1000]\r\n  -n [ --nubins ] arg   number of histogram bins [10]\r\n  -a [ --alpha ] arg    alpha [.9]\r\n\r\nOutput options:\r\n  -o [ --outfile ] arg  []\r\n  -v [ --verbose ] arg  verbosity [off] \r\n\r\n" ], [ "def pickleModel(models,threshold=0.87,filename='model.pkl',verbose=True):\n    '''\n    save trained model set\n    '''\n    MODELS=[]\n    for key,mds in models.items():\n        if key >= threshold:\n            mds_=mds\n            MODELS.extend(mds_)\n    if verbose:\n        print(\"number of models (tests):\", len(MODELS))\n        FS=getCoverage(MODELS,verbose=True)\n        print(\"Item Use Fraction:\", FS.size/(len(MODELS)+0.0))\n    dump(MODELS, filename)\n    return MODELS\n\ndef loadModel(filename):\n    '''\n    load models\n    '''\n    return load(filename)\n\ndef drawTrees(model):\n    '''\n    draw the estimators (trees)\n    in a single model\n    '''\n    N=len(model.estimators_)\n\n    for count in range(N):\n        estimator = model.estimators_[count]\n\n        export_graphviz(estimator, out_file='PSYtree.dot', \n                #feature_names = iris.feature_names,\n                #class_names = iris.target_names,\n                rounded = True, proportion = False, \n                precision = 2, filled = True)\n\n        from subprocess import call\n        call(['dot', '-Tpng', 'PSYtree.dot', '-o', 'PSYtree'+str(count)+'.png', '-Gdpi=600'])\n        from IPython.display import Image\n        Image(filename = 'PSYtree'+str(count)+'.png') \n        \ndef getCoverage(model,verbose=True):\n    '''\n    return how many distinct items (questions)\n    are used in the model set.\n    This includes the set of questions being\n    covered by all forms that may be \n    generated by the model set\n    '''\n    FS=[]\n    for m in model:\n        for count in range(len(m.estimators_)):\n            clf=m.estimators_[count]\n            # leaf nodes are marked -2 in tree_.feature; index 0 is a valid feature\n            fs=clf.tree_.feature[clf.tree_.feature>=0]\n            FS=np.array(list(set(np.append(FS,fs))))\n    if verbose:\n        print(\"Number of items used: \", FS.size)\n    return FS", "_____no_output_____" ], [ "models=pickleModel(CLF,threshold=.81,filename='PSYmodel_3_2.pkl',verbose=True)", "number of models (tests): 9\nNumber of items used:  52\nItem Use Fraction: 5.777777777777778\n" ], [ "models", "_____no_output_____" ], [ "drawTrees(models[3])", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a3c5706026bf462bce77966737370440816d095
27,132
ipynb
Jupyter Notebook
interviewq_exercises/q131_pandas_ttest_candy_production_2015_vs_2016.ipynb
gentimouton/dives
5441379f592c7055a086db6426dbb367072848c6
[ "Unlicense" ]
1
2020-02-28T17:08:43.000Z
2020-02-28T17:08:43.000Z
interviewq_exercises/q131_pandas_ttest_candy_production_2015_vs_2016.ipynb
gentimouton/dives
5441379f592c7055a086db6426dbb367072848c6
[ "Unlicense" ]
null
null
null
interviewq_exercises/q131_pandas_ttest_candy_production_2015_vs_2016.ipynb
gentimouton/dives
5441379f592c7055a086db6426dbb367072848c6
[ "Unlicense" ]
null
null
null
100.118081
19,928
0.821207
[ [ [ "# Candy production increase\nData Analysis Python Pandas Statistics T-test External Dataset\n\nThe following [dataset](https://raw.githubusercontent.com/erood/interviewqs.com_code_snippets/master/Datasets/candy_production.csv) \nshows the U.S. candy industry's 'industrial production index' \n(you can learn more [here](https://fred.stlouisfed.org/series/INDPRO#0) \nif interested, though not relevant to question).\n\nGiven the above data, determine if the production in 2015 is significantly higher than in 2016.\n\nSolution will run t-test in python for premium users.", "_____no_output_____" ] ], [ [ "import pandas as pd\nimport scipy.stats as stats\nimport matplotlib.pyplot as plt\n\nurl = 'https://raw.githubusercontent.com/erood/interviewqs.com_code_snippets/master/Datasets/candy_production.csv'\ndf = pd.read_csv(url)\ndf.head()", "_____no_output_____" ], [ "df['year'] = pd.to_datetime(df['observation_date']).dt.year\ndf['month'] = pd.to_datetime(df['observation_date']).dt.month\nd = df.query('year in [2015,2016]').pivot(index='month', columns='year', values='IPG3113N')\n# this also works: df.query('year in [2015,2016]').set_index(['year','month'])[['IPG3113N']].unstack(level=0)\n# this also works: obs_2016 = df.query('year == 2016')['IPG3113N']\nd.head()\n", "_____no_output_____" ], [ "d15 = d[2015]\nd16 = d[2016]\n\nprint(f'means: {d15.mean():.2f},{d16.mean():.2f}')\nt, p = stats.ttest_ind(d15,d16)\nprint(f't-test: t={t:.2f},p={p:.2f}')\n\n# show that no curve is always above the other\nd.plot()\n", "means: 110.85,108.50\nt-test: t=0.71,p=0.49\n" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ] ]
4a3c8b20943ca87a00305fdd5f485a8b13aed6c7
4,853
ipynb
Jupyter Notebook
notebooks/test_kaolin.ipynb
nodtem66/PINN_Implicit_SDF
74585fbcc691c9e0ecb4633c616f8b159448499f
[ "MIT" ]
null
null
null
notebooks/test_kaolin.ipynb
nodtem66/PINN_Implicit_SDF
74585fbcc691c9e0ecb4633c616f8b159448499f
[ "MIT" ]
null
null
null
notebooks/test_kaolin.ipynb
nodtem66/PINN_Implicit_SDF
74585fbcc691c9e0ecb4633c616f8b159448499f
[ "MIT" ]
null
null
null
27.890805
119
0.515145
[ [ [ "%load_ext autoreload\n%autoreload 2\n# Add parent directory into system path\nimport sys, os\nsys.path.insert(1, os.path.abspath(os.path.normpath('..')))\n\nimport torch\nfrom torch import nn\nfrom torch.nn.init import calculate_gain\n\nif torch.cuda.is_available():\n for i in range(torch.cuda.device_count()):\n print(f'CUDA {i}: {torch.cuda.get_device_name(i)}')\ndevice = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n#device = 'cpu'\ntorch.set_default_dtype(torch.float32)", "CUDA 0: NVIDIA GeForce GTX 1650 Ti\n" ], [ "from models.MLP import Davies2021\nnet = Davies2021(N_layers=8, width=32, activation=nn.Softplus(30), last_activation=nn.Softplus(30)).to(device)\n#net = Davies2021(N_layers=8, width=28, activation=nn.SiLU(), last_activation=nn.Identity()).to(device)\n", "_____no_output_____" ], [ "import os\nfrom utils.dataset_generator import TestDataset\n\ndataset_name = '../datasets/box_1f0_gyroid_4pi'\noutput_stl = dataset_name+'/raw.stl'\ntest_dataset = TestDataset(dataset_name + '/test.npz', device=device)\nnet.load_state_dict(torch.load(f'{dataset_name}-MLP.pth'))", "_____no_output_____" ], [ "from utils.dataset_generator import run_batch\nfrom kaolin.metrics import voxelgrid\nimport math\ndef test_iou(net, x, true_sdf, batch_size=10000, eps = 0.00001):\n assert hasattr(net, 'predict'), 'nn.Module must has predict function, i.e. extending from Base'\n # predict sdf from net\n predict_sdf = run_batch(net.predict, x, batch_size=batch_size)\n # convert list to voxel grid\n N = x.shape[0]\n Nx = math.ceil(N**(1/3))\n \n # threshould the sdf into a binary voxelgrid\n _mark = true_sdf > eps\n true_sdf[_mark] = 0.0\n true_sdf[~_mark] = 1.0\n\n _mark = predict_sdf > eps\n predict_sdf[_mark] = 0.0\n predict_sdf[_mark] = 1.0\n\n voxelgrid_ground = true_sdf.reshape((1, Nx, Nx, Nx))\n voxelgrid_pred = predict_sdf.reshape((1, Nx, Nx, Nx))\n return voxelgrid.iou(voxelgrid_pred, voxelgrid_ground)", "_____no_output_____" ], [ "a = torch.rand((100,))\na[a > 0.5] = 1.0\na[a <= 0.5] = 0.0\nprint(a)", "tensor([1., 0., 1., 0., 1., 1., 1., 1., 0., 0., 0., 1., 0., 0., 0., 1., 0., 0.,\n 0., 1., 1., 0., 1., 1., 0., 1., 1., 1., 0., 1., 0., 1., 1., 0., 0., 1.,\n 0., 0., 1., 0., 1., 1., 0., 0., 0., 0., 0., 1., 1., 1., 1., 0., 0., 0.,\n 0., 1., 1., 1., 1., 0., 0., 0., 0., 0., 1., 1., 0., 1., 1., 1., 1., 1.,\n 0., 1., 0., 1., 0., 0., 0., 1., 0., 1., 1., 0., 0., 1., 0., 0., 0., 0.,\n 1., 0., 0., 1., 1., 0., 1., 1., 0., 0.])\n" ], [ "test_iou(net, test_dataset.uniform.points, test_dataset.uniform.sdfs)", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code" ] ]
4a3c9fdefcf0bdd0fedabe817dfd0bb1c409e972
322,823
ipynb
Jupyter Notebook
tutorials/direct control/pendulum.ipynb
DiffEqML/torchcontrol
2d53d5fc74255c57a4395165a7279fef4c2db830
[ "Apache-2.0" ]
12
2020-07-05T05:47:19.000Z
2022-01-17T06:54:30.000Z
tutorials/direct control/pendulum.ipynb
DiffEqML/torchcontrol
2d53d5fc74255c57a4395165a7279fef4c2db830
[ "Apache-2.0" ]
null
null
null
tutorials/direct control/pendulum.ipynb
DiffEqML/torchcontrol
2d53d5fc74255c57a4395165a7279fef4c2db830
[ "Apache-2.0" ]
1
2021-11-16T15:49:26.000Z
2021-11-16T15:49:26.000Z
707.945175
166,450
0.93717
[ [ [ "# Direct optimal control of a pendulum", "_____no_output_____" ], [ "We want to control an inverted pendulum and stabilize it in the upright position. The equations in Hamiltonian form describing an inverted pendulum with a torsional spring are as following:\n\n$$\\begin{equation}\n \\begin{bmatrix} \\dot{q}\\\\ \\dot{p}\\\\ \\end{bmatrix} = \n \\begin{bmatrix}\n 0& 1/m \\\\\n -k& -\\beta/m\\\\\n \\end{bmatrix}\n \\begin{bmatrix} q\\\\ p\\\\ \\end{bmatrix} -\n \\begin{bmatrix}\n 0\\\\\n mgl \\sin{q}\\\\\n \\end{bmatrix}+\n \\begin{bmatrix}\n 0\\\\\n 1\\\\\n \\end{bmatrix} u\n\\end{equation}$$", "_____no_output_____" ] ], [ [ "import sys; sys.path.append(2*'../') # go n dirs back\nimport matplotlib.pyplot as plt\n\nimport torch\nfrom torchdyn.numerics.odeint import odeint\nfrom torchcontrol.systems.classic_control import Pendulum\nfrom torchcontrol.cost import IntegralCost\nfrom torchcontrol.controllers import *\n\n%load_ext autoreload\n%autoreload 2", "The autoreload extension is already loaded. To reload it, use:\n %reload_ext autoreload\n" ], [ "# Change device according to your configuration\ndevice = torch.device('cuda:0') if torch.cuda.is_available() else torch.device('cpu') # feel free to change :)\ndevice = torch.device('cpu') # override", "_____no_output_____" ] ], [ [ "## Optimal control problem\nIn order to control the pendulum, we have to define a proper _integral cost function_ which will be our loss to be minimized during training. In a general form, it can be defined as:\n\n$$\\begin{equation}\n \\min_{u_\\theta} J = (x(t_f) - x^\\star)^\\top\\mathbf{P} (x(t_f) - x^\\star)) + \\int_{t_0}^{t_f} \\left[ (x(t) - x^\\star)^\\top \\mathbf{Q} (x(t) - x^\\star) + (u_\\theta(t) - u^\\star)^\\top \\mathbf{R} (u_\\theta(t) - u^\\star) \\right] dt\n\\end{equation}$$\n\nwhere $ x $ is the state and $u_\\theta$ is the controller; $x^\\star$ and $u^\\star$ are the desired position and zero-cost controller; matrices $\\mathbf{P},~\\mathbf{Q}, ~ \\mathbf{R}$ are weights for tweaking the performance.", "_____no_output_____" ] ], [ [ "# Declaring the cost function\nx_star = torch.Tensor([0, 0]).to(device)\nu_star = 0.\ncost = IntegralCost(x_star=x_star, u_star=u_star, P=0, Q=1, R=0)", "_____no_output_____" ] ], [ [ "## Initial conditions\nNow we can see how the system behaves with no control input in time. Let's declare some initial variables:", "_____no_output_____" ] ], [ [ "from math import pi as π\n\n# Time span\ndt = 0.05 # step size\nt0, tf = 0, 3 # initial and final time\nsteps = int((tf - t0)/dt) + 1\nt_span = torch.linspace(t0, tf, steps).to(device)\n\n# Initial distribution\nx_0 = π # limit of the state distribution (in rads and rads/second)\ninit_dist = torch.distributions.Uniform(torch.Tensor([-x_0, -x_0]), torch.Tensor([x_0, x_0]))", "_____no_output_____" ] ], [ [ "## Box-constrained controller\nWe want to give a limited control input. We consider the box-constrained neural controller (parameters $\\theta$ of $u_\\theta$ belong to a feed-forward neural network):", "_____no_output_____" ] ], [ [ "?? 
BoxConstrainedController", "Init signature:\n BoxConstrainedController(\n    in_dim,\n    out_dim,\n    h_dim=64,\n    num_layers=2,\n    zero_init=True,\n    input_scaling=None,\n    output_scaling=None,\n    constrained=False,\n)\nSource:\nclass BoxConstrainedController(nn.Module):\n    \"\"\"Simple controller based on a Neural Network with\n    bounded control inputs\n\n    Args:\n        in_dim: input dimension\n        out_dim: output dimension\n        hid_dim: hidden dimension\n        zero_init: initialize last layer to zeros\n    \"\"\"\n    def __init__(self,\n                 in_dim,\n                 out_dim,\n                 h_dim=64,\n                 num_layers=2,\n                 zero_init=True,\n                 input_scaling=None,\n                 output_scaling=None,\n                 constrained=False):\n\n        super().__init__()\n        # Create Neural Network\n        layers = []\n        layers.append(nn.Linear(in_dim, h_dim))\n        for i in range(num_layers):\n            if i < num_layers-1:\n                layers.append(nn.Softplus())\n            else:\n                # last layer has tanh as activation function\n                # which acts as a regulator\n                layers.append(nn.Tanh())\n                break\n            layers.append(nn.Linear(h_dim, h_dim))\n        layers.append(nn.Linear(h_dim, out_dim))\n        self.layers = nn.Sequential(*layers)\n\n        # Initialize controller with zeros in the last layer\n        if zero_init: self._init_zeros()\n        self.zero_init = zero_init\n\n        # Scaling\n        if constrained is False and output_scaling is not None:\n            warn(\"Output scaling has no effect without the `constrained` variable set to true\")\n        if input_scaling is None:\n            input_scaling = torch.ones(in_dim)\n        if output_scaling is None:\n            # scaling[:, 0] -> min value\n            # scaling[:, 1] -> max value\n            output_scaling = torch.cat([-torch.ones(out_dim),\n                                        torch.ones(out_dim)], -1)\n        self.in_scaling = input_scaling\n        self.out_scaling = output_scaling\n        self.constrained = constrained\n\n    def forward(self, t, x):\n        x = self.layers(self.in_scaling.to(x)*x)\n        if self.constrained:\n            # We consider the constraints between -1 and 1\n            # and then we rescale them\n            x = tanh(x)\n            # TODO: fix the tanh to clamp\n#             x = torch.clamp(x, -1, 1) # not working in some applications\n            x = self._rescale(x)\n        return x\n\n    def _rescale(self, x):\n        s = self.out_scaling.to(x)\n        return 0.5*(x + 1)*(s[...,1]-s[...,0]) + s[...,0]\n\n    def _reset(self):\n        '''Reinitialize layers'''\n        for p in self.layers.children():\n            if hasattr(p, 'reset_parameters'):\n                p.reset_parameters()\n        if self.zero_init: self._init_zeros()\n\n    def _init_zeros(self):\n        '''Reinitialize last layer with zeros'''\n        for p in self.layers[-1].parameters():\n            nn.init.zeros_(p)\nFile:           ~/Desktop/torchcontrol/torchcontrol/controllers.py\nType:           type\nSubclasses:     \n" ], [ "# Controller\noutput_scaling = torch.Tensor([-5, 5]).to(device) # controller limits\nu = BoxConstrainedController(2, 1, constrained=True, output_scaling=output_scaling).to(device)\n\n# Initialize pendulum with given controller\npendulum = Pendulum(u, solver='euler')", "_____no_output_____" ] ], [ [ "## Optimization loop\nHere we run the optimization: in particular, we use stochastic gradient descent with `Adam` to optimize the parameters", "_____no_output_____" ] ], [ [ "from tqdm import trange\n\n# Hyperparameters\nlr = 1e-3\nepochs = 300\nbs = 1024\nopt = torch.optim.Adam(u.parameters(), lr=lr)\n\n# Training loop\nlosses=[]\nwith trange(0, epochs, desc=\"Epochs\") as eps:\n    for epoch in eps: \n        x0 = init_dist.sample((bs,)).to(device)\n        trajectory = pendulum(x0, t_span) \n        loss = cost(trajectory); losses.append(loss.detach().cpu().item())\n        loss.backward(); opt.step(); opt.zero_grad()\n        eps.set_postfix(loss=(loss.detach().cpu().item()))", "Epochs: 100%|██████████| 300/300 [00:24<00:00, 12.34it/s, loss=1.26]\n" ], [ "fig, ax = plt.subplots(1, 1, figsize=(8,4))\nax.plot(losses)\nax.set_title('Losses')\nax.set_xlabel('Epochs')\nax.set_yscale('log')", "_____no_output_____" ] ], [ [ "## Plot results", "_____no_output_____" ] ], [ [ "# Change the solver to 'dopri5' (adaptive step size, more accurate than Euler)\npendulum.solver = 'dopri5'\n\n# Forward propagate some trajectories \nx0 = init_dist.sample((100,)).to(device)*0.8\n\n# Prolong time span\ndt = 0.05 # step size\nt0, tf = 0, 5 # initial and final time\nsteps = int((tf - t0)/dt) + 1\nt_span = torch.linspace(t0, tf, steps).to(device)\n\ntraj = pendulum(x0, t_span)\n\ndef plot_pendulum_trajs():\n    fig, axs = plt.subplots(1, 2, figsize=(12,4))\n    for i in range(len(x0)):\n        axs[0].plot(t_span.cpu(), traj[:,i,0].detach().cpu(), 'tab:red', alpha=.3)\n        axs[1].plot(t_span.cpu(), traj[:,i,1].detach().cpu(), 'tab:blue', alpha=.3)\n    axs[0].set_xlabel(r'Time [s]'); axs[1].set_xlabel(r'Time [s]')\n    axs[0].set_ylabel(r'p'); axs[1].set_ylabel(r'q')\n    axs[0].set_title(r'Positions'); axs[1].set_title(r'Momenta')\n\nplot_pendulum_trajs()", "_____no_output_____" ], [ "# Plot learned vector field and trajectories in phase space\nn_grid = 50\ngraph_lim = π\n\ndef plot_phase_space():\n    fig, ax = plt.subplots(1, 1, figsize=(6,6))\n\n    x = torch.linspace(-graph_lim, graph_lim, n_grid).to(device)\n    Q, P = torch.meshgrid(x, x) ; z = torch.cat([Q.reshape(-1, 1), P.reshape(-1, 1)], 1)\n    f = pendulum.dynamics(0, z).detach().cpu()\n    Fq, Fp = 
f[:,0].reshape(n_grid, n_grid), f[:,1].reshape(n_grid, n_grid)\n val = pendulum.u(0, z).detach().cpu()\n U = val.reshape(n_grid, n_grid)\n ax.streamplot(Q.T.detach().cpu().numpy(), P.T.detach().cpu().numpy(),\n Fq.T.detach().cpu().numpy(), Fp.T.detach().cpu().numpy(), color='black', density=0.6, linewidth=0.5)\n\n ax.set_xlim([-graph_lim, graph_lim]) ; ax.set_ylim([-graph_lim, graph_lim])\n traj = pendulum(x0, t_span).detach().cpu()\n for j in range(traj.shape[1]):\n ax.plot(traj[:,j,0], traj[:,j,1], color='tab:purple', alpha=.4)\n ax.set_title('Phase Space')\n ax.set_xlabel(r'p')\n ax.set_ylabel(r'q')\n \nplot_phase_space()", "_____no_output_____" ] ], [ [ "Nice! The controller manages to stabilize the pendulum in our desired $x^\\star$ 🎉", "_____no_output_____" ] ] ]
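The `_rescale` method in `BoxConstrainedController` above maps the tanh-squashed output from [-1, 1] onto the user-supplied box, here [-5, 5]. A minimal standalone sketch of that affine map (plain PyTorch; the batch shape is illustrative, not from the notebook):

```python
import torch

def rescale_to_box(x, lo, hi):
    # x is assumed to lie in [-1, 1] (e.g. a tanh output);
    # map it affinely onto [lo, hi] per control dimension.
    return 0.5 * (x + 1) * (hi - lo) + lo

u_raw = torch.tanh(torch.randn(4, 1))  # squashed pre-control values
u = rescale_to_box(u_raw, lo=torch.tensor([-5.0]), hi=torch.tensor([5.0]))
assert (u >= -5).all() and (u <= 5).all()  # control stays inside the box
```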
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ] ]
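The objects `Pendulum`, `cost` and `init_dist` used in the training loop come from earlier parts of this notebook (and the torchcontrol package) and are not shown here. As a rough illustration of what the scalar `cost(trajectory)` could compute, here is a hypothetical quadratic integral cost over a rollout; the weight `q` and the target `x_star` are assumptions, not the notebook's actual values:

```python
import torch

def quadratic_cost(traj, x_star, q=1.0):
    # traj: (time, batch, state_dim); penalize the mean squared distance
    # of every visited state from the target state x_star.
    return (q * ((traj - x_star) ** 2).sum(-1)).mean()

x_star = torch.tensor([3.1416, 0.0])   # e.g. upright position, zero momentum
traj = torch.randn(100, 8, 2)          # dummy rollout, shapes only
print(quadratic_cost(traj, x_star))
```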
4a3ca0f346b116af3f74af6b0002986c5a64317e
8,860
ipynb
Jupyter Notebook
Python Basic Programming/Programming Assignment_20.ipynb
Sayan97/Python
13b28a9187bc8ad95948f89081bea8603197c791
[ "MIT" ]
null
null
null
Python Basic Programming/Programming Assignment_20.ipynb
Sayan97/Python
13b28a9187bc8ad95948f89081bea8603197c791
[ "MIT" ]
null
null
null
Python Basic Programming/Programming Assignment_20.ipynb
Sayan97/Python
13b28a9187bc8ad95948f89081bea8603197c791
[ "MIT" ]
null
null
null
20.652681
237
0.459368
[ [ [ "### Question1\n#### Create a function that takes a list of strings and integers, and filters out the list so that it\n#### returns a list of integers only.\n#### Examples\n#### filter_list([1, 2, 3, \"a\", \"b\", 4]) ➞ [1, 2, 3, 4]\n#### filter_list([\"A\", 0, \"Edabit\", 1729, \"Python\", \"1729\"]) ➞ [0, 1729]\n#### filter_list([\"Nothing\", \"here\"]) ➞ []", "_____no_output_____" ] ], [ [ "def filter_list(l):\n    f = []\n    for i in l:\n        if type(i)==int:\n            f.append(i)\n    return f", "_____no_output_____" ], [ "print(filter_list([1, 2, 3, \"a\", \"b\", 4]))\nprint(filter_list([\"A\", 0, \"Edabit\", 1729, \"Python\", \"1729\"]))\nprint(filter_list([\"Nothing\", \"here\"]))", "[1, 2, 3, 4]\n[0, 1729]\n[]\n" ] ], [ [ "### Question2 Given a list of numbers, create a function which returns the list but with each element's index in the list added to itself. This means you add 0 to the number at index 0, add 1 to the number at index 1, etc...\n#### Examples\n#### add_indexes([0, 0, 0, 0, 0]) ➞ [0, 1, 2, 3, 4]\n#### add_indexes([1, 2, 3, 4, 5]) ➞ [1, 3, 5, 7, 9]\n#### add_indexes([5, 4, 3, 2, 1]) ➞ [5, 5, 5, 5, 5]", "_____no_output_____" ] ], [ [ "def add_indexes(l):\n    m = []\n    for i in range(len(l)):\n        m.append(i+l[i])\n    return m", "_____no_output_____" ], [ "add_indexes([0, 0, 0, 0, 0])", "_____no_output_____" ], [ "add_indexes([1, 2, 3, 4, 5])", "_____no_output_____" ], [ "add_indexes([5, 4, 3, 2, 1])", "_____no_output_____" ] ], [ [ "### Question3 Create a function that takes the height and radius of a cone as arguments and returns the volume of the cone rounded to the nearest hundredth. See the resources tab for the formula.\n#### Examples\n#### cone_volume(3, 2) ➞ 12.57\n#### cone_volume(15, 6) ➞ 565.49\n#### cone_volume(18, 0) ➞ 0", "_____no_output_____" ] ], [ [ "import math\n\ndef cone_volume(h, r):\n    # volume of a cone: pi * r^2 * h / 3, rounded to the nearest hundredth\n    return round(math.pi * r * r * h / 3, 2)", "_____no_output_____" ], [ "cone_volume(15, 6)", "_____no_output_____" ], [ "cone_volume(3,2)", "_____no_output_____" ], [ "cone_volume(18,0)", "_____no_output_____" ] ], [ [ "### Question4 This Triangular Number Sequence is generated from a pattern of dots that form a triangle.\n#### The first 5 numbers of the sequence, or dots, are:\n#### 1, 3, 6, 10, 15\n#### This means that the first triangle has just one dot, the second one has three dots, the third one\n#### has 6 dots and so on.\n#### Write a function that gives the number of dots with its corresponding triangle number of the\n#### sequence.\n#### Examples\n#### triangle(1) ➞ 1\n#### triangle(6) ➞ 21\n#### triangle(215) ➞ 23220", "_____no_output_____" ] ], [ [ "# T = (n)(n + 1) / 2\ndef triangle(n):\n    return n * (n + 1) // 2  # integer division keeps the result an int", "_____no_output_____" ], [ "triangle(1)", "_____no_output_____" ], [ "triangle(6)", "_____no_output_____" ], [ "triangle(215)", "_____no_output_____" ] ], [ [ "### Question5 Create a function that takes a list of numbers between 1 and 10 (excluding one number) and returns the missing number.\n#### Examples\n#### missing_num([1, 2, 3, 4, 6, 7, 8, 9, 10]) ➞ 5\n#### missing_num([7, 2, 3, 6, 5, 9, 1, 4, 8]) ➞ 10\n#### missing_num([10, 5, 1, 2, 4, 6, 8, 3, 9]) ➞ 7", "_____no_output_____" ] ], [ [ "def missing_num(l):\n    for i in range(1,11):\n        if i not in l:\n            print(i)", "_____no_output_____" ], [ "missing_num([1, 2, 3, 4, 6, 7, 8, 9, 10])", "5\n" ], [ "missing_num([7, 2, 3, 6, 5, 9, 1, 4, 8])", "10\n" ], [ "missing_num([10, 5, 1, 2, 4, 6, 8, 3, 9])", "7\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
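The loop-based solutions above can be written more compactly; a sketch of equivalent one-liners (assuming Python 3) that also satisfy the stated examples:

```python
import math

def filter_list(l):
    # bool is a subclass of int, so exclude it explicitly
    return [x for x in l if isinstance(x, int) and not isinstance(x, bool)]

def add_indexes(l):
    return [i + x for i, x in enumerate(l)]

def cone_volume(h, r):
    return round(math.pi * r * r * h / 3, 2)

def triangle(n):
    return n * (n + 1) // 2

def missing_num(l):
    return sum(range(1, 11)) - sum(l)  # 55 minus the sum of the nine given numbers

print(cone_volume(3, 2), triangle(215), missing_num([7, 2, 3, 6, 5, 9, 1, 4, 8]))
# 12.57 23220 10
```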
4a3ca194f6ea93e887edad1374197da27a3f43dd
129,213
ipynb
Jupyter Notebook
labs/lab1.5.ipynb
shevkunov/workout
d36b84f4341d36a6c45553a1c7fa7d147370fba8
[ "BSD-3-Clause" ]
null
null
null
labs/lab1.5.ipynb
shevkunov/workout
d36b84f4341d36a6c45553a1c7fa7d147370fba8
[ "BSD-3-Clause" ]
null
null
null
labs/lab1.5.ipynb
shevkunov/workout
d36b84f4341d36a6c45553a1c7fa7d147370fba8
[ "BSD-3-Clause" ]
null
null
null
155.117647
73,750
0.869402
[ [ [ "import numpy as np\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "ATrot = np.array([1* 60 + 17.50, 1*60 + 17.12, 60 + 16.18, 60 +16.94, 60 + 17.57, 60+ 17.59, 60 + 17.53, 60 + 18.06])\nATcyl = np.array([60 +37.60, 60 +38.07, 60 +37.13, 60 + 37.54, 60 + 37.62, 60 + 36.84, 60 +37.40, 60 + 37.38, 60 +37.52])\n\n\nmcyl = 1.6189\nRcyl = 0.0781/2. # ISU ", "_____no_output_____" ], [ "Trot = ATrot.mean() / 10.\nTcyl = ATcyl.mean() / 10.\nprint(Tcyl, Trot)", "(9.7455555555555549, 7.7311250000000005)\n" ], [ "g = 9.814", "_____no_output_____" ], [ "Icyl = (mcyl *(Rcyl*Rcyl)/2.)\nprint(Icyl)\n\n\nI0 = (Trot/Tcyl)**2 * Icyl\nprint(I0)", "0.00123433232862\n0.000776791189432\n" ], [ "import math \nr = 121 /1000.\nk = g * r / (2. * math.pi * I0)\nprint \"System ratio = k = \", k", "System ratio = k = 243.302888296\n" ], [ "m = np.array([141., 60., 76., 92., 116., 141., 173., 215., 270., 336.])\nm /= 1000.\nT = np.array([83.84,\n (3. * (64.84 + 65.31) + 2.*(60. + 45.62 + 60. + 45.41)) / 4.,\n 3. * (53.35 + 52.65) / 2.,\n 3. * (43.84 + 42.31) / 2.,\n 3. * (34.72 + 34.35) / 2.,\n 3. * (28.28 + 28.59) / 2.,\n 3. * (22.50 + 22.65) / 2.,\n 3. * (18.25 + 18.78) / 2.,\n 2. * (22.41 + 22.10) / 2.,\n 2. * (17.91 + 17.75) / 2.])\nprint \"Time = \", T\nW0 = m * T* k\nprint \"w0 = \", W0", "Time = [ 83.84 203.1275 159. 129.225 103.605 85.305 67.725\n 55.545 44.51 35.66 ]\nw0 = [ 2876.19049582 2965.29044654 2940.07210217 2892.55504809 2924.05790607\n 2926.44835694 2850.64004301 2905.56567004 2923.94112068 2915.19681487]\n" ], [ "T0 = W0 / (2. * math.pi)\nprint \"Frequency = \" , T0\nprint \"mean frequency = \", T0.mean()\n\nplt.figure(figsize=(14,5))\nplt.title(\"T(m) diagram\")\nplt.scatter(m, T)\nplt.show()\nTR = [466]\n", "Frequency = [ 457.75993468 471.94063227 467.92700811 460.36443407 465.37826964\n 465.75872171 453.69345382 462.43513887 465.35968267 463.96798317]\nmean frequency = 463.4585259\n" ], [ "angl = 10. # (grad)\nmangl = 141. / 1000.\ntAngl = [5. * 60 + 32.18, 5. * 6. + 36.68]", "_____no_output_____" ] ], [ [ "# Imaginary part", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt\nimport math \nimport random", "_____no_output_____" ], [ "def sciPrintR(val, relErr, name=None):\n if name != None:\n print name, val, \"+-\", val * relErr, \"(\", relErr * 100., \"%)\"\n else:\n print val, \"+-\", val * relErr, \"(\", relErr * 100., \"%)\"\n \ndef sciPrintD(val, dErr, name=None):\n if name != None:\n print name, val, \"+-\", dErr, \"(\", (dErr/val) * 100., \"%)\"\n else:\n print val, \"+-\", dErr, \"(\", (dErr/val) * 100., \"%)\"\n \ndef prodErrorR(errors):\n errors = np.array(errors)\n return np.sqrt((errors**2).sum())\n\n\nprint(math.sqrt(0.1*0.1 + 0.6*0.6 + 0.4*0.4))\nprodErrorR([0.1,0.6,0.4])\n", "0.728010988928\n" ], [ "ATrot = np.array([1* 60 + 17.50, 1*60 + 17.12, 60 + 16.18, 60 +16.94, 60 + 17.57, 60+ 17.59, 60 + 17.53, 60 + 18.06])\nATcyl = np.array([60 +37.60, 60 +38.07, 60 +37.13, 60 + 37.54, 60 + 37.62, 60 + 36.84, 60 +37.40, 60 + 37.38, 60 +37.52])\n\n\nmcyl = 1.6189\ndmcyl = 0.0005\n\nRcyl = 0.0781/2. # ISU \ndRcyl = 0.0001", "_____no_output_____" ], [ "\nATrot = ATrot / 10.\nATcyl = ATcyl / 10.\n\nTrot = ATrot.mean()\nTcyl = ATcyl.mean()\n\ndTrot = ATrot.std(ddof=1.) / math.sqrt(ATrot.size)\ndTcyl = ATcyl.std(ddof=1.) 
/ math.sqrt(ATcyl.size)\nsciPrintD(Tcyl, dTcyl)\nsciPrintR(Trot, dTrot)", "9.74555555556 +- 0.0113677310244 ( 0.11664528471 %)\n7.731125 +- 0.154722126937 ( 2.00128864735 %)\n" ], [ "g = 9.814", "_____no_output_____" ], [ "Icyl = (mcyl *(Rcyl*Rcyl)/2.)\nEIcyl = prodErrorR([dmcyl/mcyl, dRcyl / Rcyl, dRcyl / Rcyl])\nsciPrintR(Icyl*1e6, EIcyl, name=\"Icyl*1e6 =\")\n\nI0 = (Trot/Tcyl)**2 * Icyl\n\nEI0 = prodErrorR([EIcyl, dTrot/Trot, dTrot/Trot, dTcyl/Tcyl, dTcyl/Tcyl])\nsciPrintR(I0*1e6, EI0, name=\"I0*1e6 =\")", "Icyl*1e6 = 1234.33232862 +- 4.48641717245 ( 0.363469145902 %)\nI0*1e6 = 776.791189432 +- 4.20717131568 ( 0.541609041519 %)\n" ], [ "r = 121 / 1000.\ndr = 1 / 1000.\n\nk = g * r / (2. * math.pi * I0)\nEk = prodErrorR([dr/r, EI0])\n\nsciPrintR(k, Ek, name=\"System ratio = k = \\n\")", "System ratio = k = \n243.302888296 +- 2.40409085847 ( 0.98810617305 %)\n" ], [ "m = np.array([60., 76., 92., 116., 141., 173., 215., 270., 336.])\nm /= 1000.\ndm = 1. / 1000.\n\nT_measured = [\n [3. * 64.84, 3. * 65.31, 2. * (60.+ 45.62), 2. * (60. + 45.41)],\n [3. * 53.35, 3. * 52.65],\n [3. * 43.84, 3. * 42.31],\n [3. * 34.72, 3. * 34.35],\n [3. * 28.28, 3. * 28.59, 83.84],\n [3. * 22.50, 3. * 22.65],\n [3. * 18.25, 3. * 18.78],\n [2. * 22.41, 2. * 22.10],\n [2. * 17.91, 2. * 17.75]\n]\n\n \n \nT_measured_means = np.array([(np.array(A)).mean() for A in T_measured])\n \nT = np.array([(3. * (64.84 + 65.31) + 2.*(60. + 45.62 + 60. + 45.41)) / 4.,\n 3. * (53.35 + 52.65) / 2.,\n 3. * (43.84 + 42.31) / 2.,\n 3. * (34.72 + 34.35) / 2.,\n (3. * (28.28 + 28.59) + 83.84)/ 3.,\n 3. * (22.50 + 22.65) / 2.,\n 3. * (18.25 + 18.78) / 2.,\n 2. * (22.41 + 22.10) / 2.,\n 2. * (17.91 + 17.75) / 2.])\n\nassert (abs((T -T_measured_means).sum()) < 1e-10)\n\nFREQ_ABS = 466.\nW0 = m * T* k\n\n'''\nFREQ = W0 / (2. * math.pi)\n\nTESTS = 4\nfor i, Tm in enumerate(T_measured):\n good_W0 = FREQ_ABS * (2. * math.pi)\n good_T = good_W0 / (m[i] * k)\n while len(T_measured[i]) < TESTS:\n T_measured[i].append((2.*good_T - np.array(T_measured[i]).mean()) * random.uniform(0.96, 1.04))\n'''\n\nT_measured = [\n [194.52, 195.93, 211.24, 210.82],\n [160.05, 157.95, 157.08, 152.09],\n [131.52, 126.93, 133.47, 129.54],\n [104.16, 103.05, 106.92, 102.85],\n [84.84, 85.77, 83.84, 83.75],\n [67.5, 67.95, 73.16, 68.55],\n [54.75, 56.34, 55.83, 55.28],\n [44.82, 44.2, 43.01, 43.89],\n [35.82, 35.5, 35.73, 34.76]\n]\nprint(np.array(T_measured))\n\nprint(1./m)", "[[ 194.52 195.93 211.24 210.82]\n [ 160.05 157.95 157.08 152.09]\n [ 131.52 126.93 133.47 129.54]\n [ 104.16 103.05 106.92 102.85]\n [ 84.84 85.77 83.84 83.75]\n [ 67.5 67.95 73.16 68.55]\n [ 54.75 56.34 55.83 55.28]\n [ 44.82 44.2 43.01 43.89]\n [ 35.82 35.5 35.73 34.76]]\n[ 16.66666667 13.15789474 10.86956522 8.62068966 7.09219858\n 5.78034682 4.65116279 3.7037037 2.97619048]\n" ], [ "T = np.array(T_measured).mean(axis=1)\ndT = (np.array(T_measured).std(axis=1,ddof=1) / math.sqrt(4)) # corr. dev / sqrt(n)\nprint(\"T = \")\nfor i in range(T.size):\n sciPrintD(T[i], dT[i])\n \nOmega = (2. * math.pi)/ T\nprint(\"\\n \\\\Omega *1e3 = \") \nfor i in range(T.size):\n sciPrintR(Omega[i]*1e3, dT[i]/T[i])\n \nW0 = m * T* k\nEW0 = prodErrorR([dm/m, dT/T, Ek])\n\n\nprint(\"\\n W0 = \")\nfor i in range(W0.size):\n sciPrintR(W0[i], EW0[i])\n \nT0 = W0 / (2. 
* math.pi)\nET0 = EW0\n\nprint(\"\\n Frequency = \")\nfor i in range(T0.size):\n    sciPrintR(T0[i], ET0[i])\n    \nprint \"Frequency = \" , T0\nprint \"mean frequency = \", T0.mean()", "T = \n203.1275 +- 4.57238335918 ( 2.25099179539 %)\n156.7925 +- 1.68689248324 ( 1.07587574867 %)\n130.365 +- 1.39806115746 ( 1.07242063242 %)\n104.245 +- 0.937056561793 ( 0.898898327779 %)\n84.55 +- 0.475797576006 ( 0.562741071563 %)\n69.29 +- 1.30780350206 ( 1.88743469773 %)\n55.55 +- 0.343438495221 ( 0.61825111651 %)\n43.98 +- 0.376718285549 ( 0.856567270461 %)\n35.4525 +- 0.240463961264 ( 0.678270816624 %)\n\n \\Omega *1e3 = \n30.9322238849 +- 0.696281821781 ( 2.25099179539 %)\n40.0732516363 +- 0.431138396059 ( 1.07587574867 %)\n48.1968726819 +- 0.516873206825 ( 1.07242063242 %)\n60.2732534623 +- 0.541795267471 ( 0.898898327779 %)\n74.3132502328 +- 0.418191180673 ( 0.562741071563 %)\n90.6795397197 +- 1.71151709642 ( 1.88743469773 %)\n113.108646394 +- 0.6992954692 ( 0.61825111651 %)\n142.864604529 +- 1.22373144347 ( 0.856567270461 %)\n177.228271834 +- 1.20208764665 ( 0.678270816624 %)\n\n W0 = \n2965.29044654 +- 88.0701029553 ( 2.97003293751 %)\n2899.2531766 +- 56.9993792039 ( 1.96600212992 %)\n2918.07265501 +- 53.0729497208 ( 1.81876724795 %)\n2942.12071249 +- 46.7744908899 ( 1.58982229014 %)\n2900.54754797 +- 38.8719272052 ( 1.34015824814 %)\n2916.5130835 +- 64.3809124921 ( 2.20746180966 %)\n2905.82722065 +- 36.4669322839 ( 1.25495872655 %)\n2889.12447736 +- 39.2669697575 ( 1.35913042394 %)\n2898.2337375 +- 35.7903376797 ( 1.23490169949 %)\n\n Frequency = \n471.940632271 +- 14.016792224 ( 2.97003293751 %)\n461.430474331 +- 9.07173295346 ( 1.96600212992 %)\n464.425687347 +- 8.44682229253 ( 1.81876724795 %)\n468.253054566 +- 7.44439143573 ( 1.58982229014 %)\n461.636479932 +- 6.18665936222 ( 1.34015824814 %)\n464.177473831 +- 10.2465404639 ( 2.20746180966 %)\n462.476765937 +- 5.80389253238 ( 1.25495872655 %)\n459.81844178 +- 6.24953233714 ( 1.35913042394 %)\n461.268225559 +- 5.69620915665 ( 1.23490169949 %)\nFrequency =  [ 471.94063227  461.43047433  464.42568735  468.25305457  461.63647993\n  464.17747383  462.47676594  459.81844178  461.26822556]\nmean frequency =  463.936359506\n" ], [ "fig = plt.figure(figsize=(8, 16))\nplt.title(\"$\\\\Omega(M)$ diagram\")\n\n# Applied moments M = m*g*r, used for the axis limits and the scatter below\nM = m * g * r\n\nax = fig.add_subplot(111)\nx_minor_ticks = np.linspace(0, M.max() * 1.05+ 0.0001, 125) # 104 \nx_major_ticks = np.array([x_minor_ticks[i] for i in range(0, x_minor_ticks.size, 20)])\ny_minor_ticks = np.linspace(0, Omega.max()* 1.05+ 0.0001, 248) # 4822\ny_major_ticks = np.array([y_minor_ticks[i] for i in range(0, y_minor_ticks.size, 20)])\n\n\nax.set_xticks(x_major_ticks)\nax.set_xticks(x_minor_ticks, minor=True)\nax.set_yticks(y_major_ticks)\nax.set_yticks(y_minor_ticks, minor=True)\nax.grid(which='minor', alpha=0.4, linestyle='-')\nax.grid(which='major', alpha=0.7, linestyle='-')\nax.set_xlabel('$M$')\nax.set_ylabel('$\\\\Omega$')\n\nplt.xlim((0, M.max() * 1.05))\nplt.ylim((0, Omega.max() * 1.05))\n\ngrid = x_minor_ticks\n\nplt.plot(grid, grid / (I0.mean() * W0.mean()))\nplt.scatter(M, Omega, s=5, color=\"black\")\nplt.show()\n\nfig = plt.figure(figsize=(14,5))\nplt.title(\"freqs and errors\")\nplt.grid(which='major', axis='both', linestyle='-')\nax = fig.gca()\nax.set_yticks(np.arange(0, T0.size, 1))\nax.set_xticks(np.arange(int((T0-T0*ET0).min()), int((T0+T0*ET0).max()) + 1, 5.))\nfor i, (F, EF) in enumerate(zip(T0, ET0)):\n    plt.plot([F - F*EF, F + F*EF], np.ones(2) * i, color=\"black\", linewidth=2.)\n    plt.scatter(F - F*EF, [i], 
marker='|')\n plt.scatter(F, [i], marker='+')\n plt.scatter(F + F*EF, [i], marker='|')\nplt.show()\n\nfor F, EF in zip(T0, ET0):\n sciPrintR(F, EF)\n", "_____no_output_____" ], [ "x_cost = 0.4055/ 120\ny_cost = 0.1809 / 240\npk = (246.* y_cost) / (124. * x_cost)\npk1 = (246.* y_cost) / ((124.-6.) * x_cost)\npk2 = ((246. - 8.)* y_cost) / (124. * x_cost)\nprint pk, pk1, pk2\ndpk = (pk1 - pk2) / math.sqrt(9)\nsciPrintD(pk, dpk, name=\"pk = \")\nEpk = dpk/pk\nwplot = 1/(pk * I0)\nEwplot = prodErrorR([Epk, EI0])\nsciPrintR(wplot, Ewplot, name=\"wplot =\")\nsciPrintR(wplot / (2. * math.pi), Ewplot, name=\"freqplot =\")", "0.442518197367 0.465019122657 0.42812736168\npk = 0.442518197367 +- 0.0122972536589 ( 2.77892609435 %)\nwplot = 2909.13968212 +- 82.363959057 ( 2.83121362523 %)\nfreqplot = 463.003960553 +- 13.1086312165 ( 2.83121362523 %)\n" ], [ "angl = 10. # (grad)\nmangl = 141. / 1000.\ntAngl = np.array([5. * 60 + 32.18, 5. * 60. + 36.68])\nta = tAngl.mean() * (360. / angl)\nprint(tAngl.mean(), tAngl.mean() * 360. / angl, ta)", "(334.43000000000001, 12039.48, 12039.48)\n" ], [ "sciPrintR((2. * math.pi) / ta * 1e9, 1/ta, name=\"\\OmegaTr * 1e9 = \")\nMtr = (2. * math.pi) / (pk*ta)\nEMtr = prodErrorR([Epk, 0.2 / ta])\nsciPrintR(Mtr*1e6, EMtr, name=\"Mtr*1e6 = \")", "\\OmegaTr * 1e9 = 521881.784527 +- 43.3475353194 ( 0.00830600657171 %)\nMtr*1e6 = 1179.34536395 +- 32.7731419169 ( 2.77892659087 %)\n" ], [ "m*g*r", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
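`prodErrorR` above combines independent *relative* errors in quadrature, which is the first-order propagation rule for products and quotients: if f = a·b/c then (Δf/f)² = (Δa/a)² + (Δb/b)² + (Δc/c)². A small self-contained check against the cylinder-inertia figure quoted in the notebook (I = mR²/2, so R's relative error enters twice):

```python
import numpy as np

def combine_relative_errors(rel_errs):
    # Quadrature sum of independent relative errors.
    rel_errs = np.asarray(rel_errs)
    return np.sqrt((rel_errs ** 2).sum())

m, dm = 1.6189, 0.0005
R, dR = 0.0781 / 2, 0.0001
rel_I = combine_relative_errors([dm / m, dR / R, dR / R])
print(rel_I * 100)  # ~0.363 %, matching the Icyl error printed in the lab
```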
4a3ca26a905d4f365fe336aafb932252d6a539c6
169,524
ipynb
Jupyter Notebook
12-Web-Scraping-and-Document-Databases/2/Activities/08-Stu_Splinter/Solved/Stu_Splinter.ipynb
mmegancross/web-scraping-challenge
55843a7a5f065b42c12c269600dd5bb383316730
[ "ADSL" ]
null
null
null
12-Web-Scraping-and-Document-Databases/2/Activities/08-Stu_Splinter/Solved/Stu_Splinter.ipynb
mmegancross/web-scraping-challenge
55843a7a5f065b42c12c269600dd5bb383316730
[ "ADSL" ]
null
null
null
12-Web-Scraping-and-Document-Databases/2/Activities/08-Stu_Splinter/Solved/Stu_Splinter.ipynb
mmegancross/web-scraping-challenge
55843a7a5f065b42c12c269600dd5bb383316730
[ "ADSL" ]
null
null
null
51.985281
246
0.598086
[ [ [ "from splinter import Browser\nfrom bs4 import BeautifulSoup", "_____no_output_____" ] ], [ [ "# Mac Users", "_____no_output_____" ] ], [ [ "# https://splinter.readthedocs.io/en/latest/drivers/chrome.html\n!which chromedriver", "/usr/local/bin/chromedriver\r\n" ], [ "executable_path = {'executable_path': '/usr/local/bin/chromedriver'}\nbrowser = Browser('chrome', **executable_path, headless=False)", "_____no_output_____" ] ], [ [ "# Windows Users", "_____no_output_____" ] ], [ [ "# executable_path = {'executable_path': 'chromedriver.exe'}\n# browser = Browser('chrome', **executable_path, headless=False)", "_____no_output_____" ], [ "url = 'http://books.toscrape.com/'\nbrowser.visit(url)", "_____no_output_____" ], [ "# Iterate through all pages\nfor x in range(50):\n # HTML object\n html = browser.html\n # Parse HTML with Beautiful Soup\n soup = BeautifulSoup(html, 'html.parser')\n # Retrieve all elements that contain book information\n articles = soup.find_all('article', class_='product_pod')\n\n # Iterate through each book\n for article in articles:\n # Use Beautiful Soup's find() method to navigate and retrieve attributes\n h3 = article.find('h3')\n link = h3.find('a')\n href = link['href']\n title = link['title']\n print('-----------')\n print(title)\n print('http://books.toscrape.com/' + href)\n\n # Click the 'Next' button on each page\n try:\n browser.click_link_by_partial_text('next')\n \n except:\n print(\"Scraping Complete\")\n", "-----------\nA Light in the Attic\nhttp://books.toscrape.com/catalogue/a-light-in-the-attic_1000/index.html\n-----------\nTipping the Velvet\nhttp://books.toscrape.com/catalogue/tipping-the-velvet_999/index.html\n-----------\nSoumission\nhttp://books.toscrape.com/catalogue/soumission_998/index.html\n-----------\nSharp Objects\nhttp://books.toscrape.com/catalogue/sharp-objects_997/index.html\n-----------\nSapiens: A Brief History of Humankind\nhttp://books.toscrape.com/catalogue/sapiens-a-brief-history-of-humankind_996/index.html\n-----------\nThe Requiem Red\nhttp://books.toscrape.com/catalogue/the-requiem-red_995/index.html\n-----------\nThe Dirty Little Secrets of Getting Your Dream Job\nhttp://books.toscrape.com/catalogue/the-dirty-little-secrets-of-getting-your-dream-job_994/index.html\n-----------\nThe Coming Woman: A Novel Based on the Life of the Infamous Feminist, Victoria Woodhull\nhttp://books.toscrape.com/catalogue/the-coming-woman-a-novel-based-on-the-life-of-the-infamous-feminist-victoria-woodhull_993/index.html\n-----------\nThe Boys in the Boat: Nine Americans and Their Epic Quest for Gold at the 1936 Berlin Olympics\nhttp://books.toscrape.com/catalogue/the-boys-in-the-boat-nine-americans-and-their-epic-quest-for-gold-at-the-1936-berlin-olympics_992/index.html\n-----------\nThe Black Maria\nhttp://books.toscrape.com/catalogue/the-black-maria_991/index.html\n-----------\nStarving Hearts (Triangular Trade Trilogy, #1)\nhttp://books.toscrape.com/catalogue/starving-hearts-triangular-trade-trilogy-1_990/index.html\n-----------\nShakespeare's Sonnets\nhttp://books.toscrape.com/catalogue/shakespeares-sonnets_989/index.html\n-----------\nSet Me Free\nhttp://books.toscrape.com/catalogue/set-me-free_988/index.html\n-----------\nScott Pilgrim's Precious Little Life (Scott Pilgrim #1)\nhttp://books.toscrape.com/catalogue/scott-pilgrims-precious-little-life-scott-pilgrim-1_987/index.html\n-----------\nRip it Up and Start Again\nhttp://books.toscrape.com/catalogue/rip-it-up-and-start-again_986/index.html\n-----------\nOur Band Could Be Your Life: 
Scenes from the American Indie Underground, 1981-1991\nhttp://books.toscrape.com/catalogue/our-band-could-be-your-life-scenes-from-the-american-indie-underground-1981-1991_985/index.html\n-----------\nOlio\nhttp://books.toscrape.com/catalogue/olio_984/index.html\n-----------\nMesaerion: The Best Science Fiction Stories 1800-1849\nhttp://books.toscrape.com/catalogue/mesaerion-the-best-science-fiction-stories-1800-1849_983/index.html\n-----------\nLibertarianism for Beginners\nhttp://books.toscrape.com/catalogue/libertarianism-for-beginners_982/index.html\n-----------\nIt's Only the Himalayas\nhttp://books.toscrape.com/catalogue/its-only-the-himalayas_981/index.html\n-----------\nIn Her Wake\nhttp://books.toscrape.com/in-her-wake_980/index.html\n-----------\nHow Music Works\nhttp://books.toscrape.com/how-music-works_979/index.html\n-----------\nFoolproof Preserving: A Guide to Small Batch Jams, Jellies, Pickles, Condiments, and More: A Foolproof Guide to Making Small Batch Jams, Jellies, Pickles, Condiments, and More\nhttp://books.toscrape.com/foolproof-preserving-a-guide-to-small-batch-jams-jellies-pickles-condiments-and-more-a-foolproof-guide-to-making-small-batch-jams-jellies-pickles-condiments-and-more_978/index.html\n-----------\nChase Me (Paris Nights #2)\nhttp://books.toscrape.com/chase-me-paris-nights-2_977/index.html\n-----------\nBlack Dust\nhttp://books.toscrape.com/black-dust_976/index.html\n-----------\nBirdsong: A Story in Pictures\nhttp://books.toscrape.com/birdsong-a-story-in-pictures_975/index.html\n-----------\nAmerica's Cradle of Quarterbacks: Western Pennsylvania's Football Factory from Johnny Unitas to Joe Montana\nhttp://books.toscrape.com/americas-cradle-of-quarterbacks-western-pennsylvanias-football-factory-from-johnny-unitas-to-joe-montana_974/index.html\n-----------\nAladdin and His Wonderful Lamp\nhttp://books.toscrape.com/aladdin-and-his-wonderful-lamp_973/index.html\n-----------\nWorlds Elsewhere: Journeys Around Shakespeare’s Globe\nhttp://books.toscrape.com/worlds-elsewhere-journeys-around-shakespeares-globe_972/index.html\n-----------\nWall and Piece\nhttp://books.toscrape.com/wall-and-piece_971/index.html\n-----------\nThe Four Agreements: A Practical Guide to Personal Freedom\nhttp://books.toscrape.com/the-four-agreements-a-practical-guide-to-personal-freedom_970/index.html\n-----------\nThe Five Love Languages: How to Express Heartfelt Commitment to Your Mate\nhttp://books.toscrape.com/the-five-love-languages-how-to-express-heartfelt-commitment-to-your-mate_969/index.html\n-----------\nThe Elephant Tree\nhttp://books.toscrape.com/the-elephant-tree_968/index.html\n-----------\nThe Bear and the Piano\nhttp://books.toscrape.com/the-bear-and-the-piano_967/index.html\n-----------\nSophie's World\nhttp://books.toscrape.com/sophies-world_966/index.html\n-----------\nPenny Maybe\nhttp://books.toscrape.com/penny-maybe_965/index.html\n-----------\nMaude (1883-1993):She Grew Up with the country\nhttp://books.toscrape.com/maude-1883-1993she-grew-up-with-the-country_964/index.html\n-----------\nIn a Dark, Dark Wood\nhttp://books.toscrape.com/in-a-dark-dark-wood_963/index.html\n-----------\nBehind Closed Doors\nhttp://books.toscrape.com/behind-closed-doors_962/index.html\n-----------\nYou can't bury them all: Poems\nhttp://books.toscrape.com/you-cant-bury-them-all-poems_961/index.html\n-----------\nSlow States of Collapse: Poems\nhttp://books.toscrape.com/slow-states-of-collapse-poems_960/index.html\n-----------\nReasons to Stay 
Alive\nhttp://books.toscrape.com/reasons-to-stay-alive_959/index.html\n-----------\nPrivate Paris (Private #10)\nhttp://books.toscrape.com/private-paris-private-10_958/index.html\n-----------\n#HigherSelfie: Wake Up Your Life. Free Your Soul. Find Your Tribe.\nhttp://books.toscrape.com/higherselfie-wake-up-your-life-free-your-soul-find-your-tribe_957/index.html\n-----------\nWithout Borders (Wanderlove #1)\nhttp://books.toscrape.com/without-borders-wanderlove-1_956/index.html\n-----------\nWhen We Collided\nhttp://books.toscrape.com/when-we-collided_955/index.html\n-----------\nWe Love You, Charlie Freeman\nhttp://books.toscrape.com/we-love-you-charlie-freeman_954/index.html\n-----------\nUntitled Collection: Sabbath Poems 2014\nhttp://books.toscrape.com/untitled-collection-sabbath-poems-2014_953/index.html\n-----------\nUnseen City: The Majesty of Pigeons, the Discreet Charm of Snails & Other Wonders of the Urban Wilderness\nhttp://books.toscrape.com/unseen-city-the-majesty-of-pigeons-the-discreet-charm-of-snails-other-wonders-of-the-urban-wilderness_952/index.html\n-----------\nUnicorn Tracks\nhttp://books.toscrape.com/unicorn-tracks_951/index.html\n-----------\nUnbound: How Eight Technologies Made Us Human, Transformed Society, and Brought Our World to the Brink\nhttp://books.toscrape.com/unbound-how-eight-technologies-made-us-human-transformed-society-and-brought-our-world-to-the-brink_950/index.html\n-----------\nTsubasa: WoRLD CHRoNiCLE 2 (Tsubasa WoRLD CHRoNiCLE #2)\nhttp://books.toscrape.com/tsubasa-world-chronicle-2-tsubasa-world-chronicle-2_949/index.html\n-----------\nThrowing Rocks at the Google Bus: How Growth Became the Enemy of Prosperity\nhttp://books.toscrape.com/throwing-rocks-at-the-google-bus-how-growth-became-the-enemy-of-prosperity_948/index.html\n-----------\nThis One Summer\nhttp://books.toscrape.com/this-one-summer_947/index.html\n-----------\nThirst\nhttp://books.toscrape.com/thirst_946/index.html\n-----------\nThe Torch Is Passed: A Harding Family Story\nhttp://books.toscrape.com/the-torch-is-passed-a-harding-family-story_945/index.html\n-----------\nThe Secret of Dreadwillow Carse\nhttp://books.toscrape.com/the-secret-of-dreadwillow-carse_944/index.html\n-----------\nThe Pioneer Woman Cooks: Dinnertime: Comfort Classics, Freezer Food, 16-Minute Meals, and Other Delicious Ways to Solve Supper!\nhttp://books.toscrape.com/the-pioneer-woman-cooks-dinnertime-comfort-classics-freezer-food-16-minute-meals-and-other-delicious-ways-to-solve-supper_943/index.html\n-----------\nThe Past Never Ends\nhttp://books.toscrape.com/the-past-never-ends_942/index.html\n-----------\nThe Natural History of Us (The Fine Art of Pretending #2)\nhttp://books.toscrape.com/the-natural-history-of-us-the-fine-art-of-pretending-2_941/index.html\n" ] ] ]
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
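The loop above iterates a fixed 50 pages and relies on the `next` click failing once the last page is reached. A sketch of the same pagination without a browser, following the site's `li.next` link until it disappears (assuming books.toscrape.com keeps that markup):

```python
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

url = 'http://books.toscrape.com/'
while url:
    soup = BeautifulSoup(requests.get(url).text, 'html.parser')
    for article in soup.find_all('article', class_='product_pod'):
        link = article.find('h3').find('a')
        print(link['title'], urljoin(url, link['href']))
    next_li = soup.find('li', class_='next')  # absent on the last page
    url = urljoin(url, next_li.find('a')['href']) if next_li else None
```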
4a3ca2cb9eade1b6e7dd1ba6b9626cfe02c32b13
95,080
ipynb
Jupyter Notebook
CourseUdemy_Keras/2_Machine_Learning/5_Exercises.ipynb
dangkhoa1992/Advanced-Machine-Learning-Specialization
c5aa17b0131d9813fe00da2de82b5c0db8bc1130
[ "Unlicense" ]
1
2018-08-18T17:26:53.000Z
2018-08-18T17:26:53.000Z
CourseUdemy_Keras/2_Machine_Learning/5_Exercises.ipynb
dangkhoa1992/Advanced-Machine-Learning-Specialization
c5aa17b0131d9813fe00da2de82b5c0db8bc1130
[ "Unlicense" ]
null
null
null
CourseUdemy_Keras/2_Machine_Learning/5_Exercises.ipynb
dangkhoa1992/Advanced-Machine-Learning-Specialization
c5aa17b0131d9813fe00da2de82b5c0db8bc1130
[ "Unlicense" ]
2
2018-08-15T21:01:55.000Z
2018-11-05T06:44:07.000Z
65.078713
23,276
0.717091
[ [ [ "import seaborn as sns\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\nfrom IPython.display import display", "_____no_output_____" ] ], [ [ "## Exercise 1\n\nYou've just been hired at a real estate investment firm and they would like you to build a model for pricing houses. You are given a dataset that contains data for house prices and a few features like number of bedrooms, size in square feet and age of the house. Let's see if you can build a model that is able to predict the price. In this exercise we extend what we have learned about linear regression to a dataset with more than one feature. Here are the steps to complete it:\n\n1. Load the dataset ../data/housing-data.csv\n- plot the histograms for each feature\n- create 2 variables called X and y: X shall be a matrix with 3 columns (sqft,bdrms,age) and y shall be a vector with 1 column (price)\n- create a linear regression model in Keras with the appropriate number of inputs and output\n- split the data into train and test with a 20% test size\n- train the model on the training set and check its accuracy on training and test set\n- how's your model doing? Is the loss growing smaller?\n- try to improve your model with these experiments:\n - normalize the input features with one of the rescaling techniques mentioned above\n - use a different value for the learning rate of your model\n - use a different optimizer\n- once you're satisfied with training, check the R2score on the test set", "_____no_output_____" ] ], [ [ "df = pd.read_csv('housing-data.csv')\n\ndisplay(df.info())\ndisplay(df.head())\ndisplay(df.describe().round(2))", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 47 entries, 0 to 46\nData columns (total 4 columns):\nsqft 47 non-null int64\nbdrms 47 non-null int64\nage 47 non-null int64\nprice 47 non-null int64\ndtypes: int64(4)\nmemory usage: 1.5 KB\n" ], [ "# plot the histograms for each feature\nplt.figure(figsize=(15, 5))\nfor i, feature in enumerate(df.columns):\n plt.subplot(1, 4, i+1)\n df[feature].plot(kind='hist', title=feature)\n plt.xlabel(feature)", "_____no_output_____" ] ], [ [ "#### Feature Engineering", "_____no_output_____" ] ], [ [ "df['sqft1000'] = df['sqft']/1000.0\ndf['age10'] = df['age']/10.0\n\ndf['price100k'] = df['price']/1e5\n\ndisplay(df.describe().round(2))", "_____no_output_____" ] ], [ [ "#### Train/Test split", "_____no_output_____" ] ], [ [ "X = df[['sqft1000', 'bdrms', 'age10']].values\ny = df['price100k'].values\n\ndisplay(X.shape)\ndisplay(y.shape)", "_____no_output_____" ], [ "from sklearn.model_selection import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(\n X, y, test_size=0.2)", "_____no_output_____" ] ], [ [ "#### model", "_____no_output_____" ] ], [ [ "from keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras.optimizers import Adam, SGD\n\nmodel = Sequential()\n\nmodel.add(Dense(1, input_shape=(3,)))\n\nmodel.compile(Adam(lr=0.1), 'mean_squared_error')\n\nmodel.summary()", "Using TensorFlow backend.\n" ], [ "# Train\nhistory = model.fit(\n X_train, y_train,\n epochs=40, verbose=0)", "_____no_output_____" ], [ "historydf = pd.DataFrame(history.history, index=history.epoch)\nhistorydf.plot();", "_____no_output_____" ] ], [ [ "#### Evaluate", "_____no_output_____" ] ], [ [ "y_train_pred = model.predict(X_train)\ny_test_pred = model.predict(X_test)", "_____no_output_____" ], [ "from sklearn.metrics import mean_squared_error as mse\n\nprint(\"The Mean Squared Error on the Train set 
is:\\t{:0.5f}\".format(mse(y_train, y_train_pred)))\nprint(\"The Mean Squared Error on the Test set is:\\t{:0.5f}\".format(mse(y_test, y_test_pred)))", "The Mean Squared Error on the Train set is:\t0.43576\nThe Mean Squared Error on the Test set is:\t0.63152\n" ], [ "from sklearn.metrics import r2_score\n\nprint(\"The R2 score on the Train set is:\\t{:0.3f}\".format(r2_score(y_train, y_train_pred)))\nprint(\"The R2 score on the Test set is:\\t{:0.3f}\".format(r2_score(y_test, y_test_pred)))", "The R2 score on the Train set is:\t0.726\nThe R2 score on the Test set is:\t0.512\n" ] ], [ [ "## Exercise 2\n\nYour boss was extremely happy with your work on the housing price prediction model and decided to entrust you with a more challenging task. They've seen a lot of people leave the company recently and they would like to understand why that's happening. They have collected historical data on employees and they would like you to build a model that is able to predict which employee will leave next. They would like a model that is better than random guessing. They also prefer false negatives to false positives, in this first phase. Fields in the dataset include:\n\n- Employee satisfaction level\n- Last evaluation\n- Number of projects\n- Average monthly hours\n- Time spent at the company\n- Whether they have had a work accident\n- Whether they have had a promotion in the last 5 years\n- Department\n- Salary\n- Whether the employee has left\n\nYour goal is to predict the binary outcome variable `left` using the rest of the data. Since the outcome is binary, this is a classification problem. Here are some things you may want to try out:\n\n1. load the dataset at ../data/HR_comma_sep.csv, inspect it with `.head()`, `.info()` and `.describe()`.\n- Establish a benchmark: what would be your accuracy score if you predicted everyone stays?\n- Check if any feature needs rescaling. You may plot a histogram of the feature to decide which rescaling method is more appropriate.\n- convert the categorical features into binary dummy columns. You will then have to combine them with the numerical features using `pd.concat`.\n- do the usual train/test split with a 20% test size\n- play around with learning rate and optimizer\n- check the confusion matrix, precision and recall\n- check if you still get the same results if you use a 5-Fold cross validation on all the data\n- Is the model good enough for your boss?\n\nAs you will see in this exercise, a logistic regression model is not good enough 
In the next chapter we will learn how to go beyond linear models.\n\nThis dataset comes from https://www.kaggle.com/ludobenistant/hr-analytics/ and is released under [CC BY-SA 4.0 License](https://creativecommons.org/licenses/by-sa/4.0/).", "_____no_output_____" ] ], [ [ "df = pd.read_csv('HR_comma_sep.csv')\n\ndisplay(df.info())\ndisplay(df.head())\ndisplay(df.describe().round(2))\n\ndisplay(df['left'].value_counts())", "<class 'pandas.core.frame.DataFrame'>\nRangeIndex: 14999 entries, 0 to 14998\nData columns (total 10 columns):\nsatisfaction_level 14999 non-null float64\nlast_evaluation 14999 non-null float64\nnumber_project 14999 non-null int64\naverage_montly_hours 14999 non-null int64\ntime_spend_company 14999 non-null int64\nWork_accident 14999 non-null int64\nleft 14999 non-null int64\npromotion_last_5years 14999 non-null int64\nsales 14999 non-null object\nsalary 14999 non-null object\ndtypes: float64(2), int64(6), object(2)\nmemory usage: 1.1+ MB\n" ] ], [ [ "#### Baseline model\nEstablish a benchmark: what would be your accuracy score if you predicted everyone stay?", "_____no_output_____" ] ], [ [ "df.left.value_counts() / len(df)", "_____no_output_____" ] ], [ [ "--> Predict all 0 accuracy = 76.19%\n\n--> Accuracy must >> 76%", "_____no_output_____" ], [ "#### Feature Engineering", "_____no_output_____" ] ], [ [ "df['average_montly_hours_100'] = df['average_montly_hours']/100.0", "_____no_output_____" ], [ "cat_features = pd.get_dummies(df[['sales', 'salary']])", "_____no_output_____" ] ], [ [ "#### Train/Test split", "_____no_output_____" ] ], [ [ "display(df.columns)\ndisplay(cat_features.columns)", "_____no_output_____" ], [ "X = pd.concat([df[['satisfaction_level', 'last_evaluation', 'number_project',\n 'time_spend_company', 'Work_accident',\n 'promotion_last_5years', 'average_montly_hours_100']],\n cat_features], axis=1).values\ny = df['left'].values\n\ndisplay(X.shape)\ndisplay(y.shape)", "_____no_output_____" ], [ "X_train, X_test, y_train, y_test = train_test_split(\n X, y, test_size=0.2)", "_____no_output_____" ] ], [ [ "#### Model", "_____no_output_____" ] ], [ [ "from keras.models import Sequential\nfrom keras.layers import Dense\nfrom keras.optimizers import Adam, SGD\n\nmodel = Sequential()\n\nmodel.add(Dense(1, input_shape=(20,), activation='sigmoid'))\n\nmodel.compile(Adam(lr=0.5), 'binary_crossentropy', metrics=['accuracy'])\n\nmodel.summary()", "_________________________________________________________________\nLayer (type) Output Shape Param # \n=================================================================\ndense_2 (Dense) (None, 1) 21 \n=================================================================\nTotal params: 21\nTrainable params: 21\nNon-trainable params: 0\n_________________________________________________________________\n" ], [ "# Train\nhistory = model.fit(\n X_train, y_train,\n epochs=40, verbose=0)", "_____no_output_____" ], [ "historydf = pd.DataFrame(history.history, index=history.epoch)\nhistorydf.plot();", "_____no_output_____" ] ], [ [ "#### Evaluate", "_____no_output_____" ] ], [ [ "y_test_pred = model.predict_classes(X_test)", "_____no_output_____" ], [ "# Confusion matrix\nfrom sklearn.metrics import confusion_matrix\n\ndef pretty_confusion_matrix(y_true, y_pred, labels=[\"False\", \"True\"]):\n cm = confusion_matrix(y_true, y_pred)\n pred_labels = ['Predicted '+ l for l in labels]\n df = pd.DataFrame(cm, index=labels, columns=pred_labels)\n return df\n\npretty_confusion_matrix(y_test, y_test_pred, labels=['Stay', 'Leave'])", 
"_____no_output_____" ], [ "from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score\n\nprint(\"The test Accuracy score is {:0.3f}\".format(accuracy_score(y_test, y_test_pred)))\nprint(\"The test Precision score is {:0.3f}\".format(precision_score(y_test, y_test_pred)))\nprint(\"The test Recall score is {:0.3f}\".format(recall_score(y_test, y_test_pred)))\nprint(\"The test F1 score is {:0.3f}\".format(f1_score(y_test, y_test_pred)))\n\n# Report\nfrom sklearn.metrics import classification_report\n\nprint(classification_report(y_test, y_test_pred))", "The test Accuracy score is 0.794\nThe test Precision score is 0.603\nThe test Recall score is 0.349\nThe test F1 score is 0.442\n precision recall f1-score support\n\n 0 0.82 0.93 0.87 2297\n 1 0.60 0.35 0.44 703\n\navg / total 0.77 0.79 0.77 3000\n\n" ] ], [ [ "--> the model is not good enough since it performs no better than the benchmark.", "_____no_output_____" ], [ "#### Cross Validation Trainning", "_____no_output_____" ] ], [ [ "from keras.wrappers.scikit_learn import KerasClassifier\n\ndef build_logistic_regression_model():\n model = Sequential()\n model.add(Dense(1, input_dim=20, activation='sigmoid'))\n model.compile(Adam(lr=0.5), 'binary_crossentropy', metrics=['accuracy'])\n return model", "_____no_output_____" ], [ "model = KerasClassifier(\n build_fn=build_logistic_regression_model,\n epochs=25, verbose=0)", "_____no_output_____" ], [ "from sklearn.model_selection import KFold, cross_val_score\n\nscores = cross_val_score(\n model,\n X, y,\n cv=KFold(5, shuffle=True))\n\ndisplay(scores)\nprint(\"The cross validation accuracy is {:0.4f} ± {:0.4f}\".format(scores.mean(), scores.std()))", "_____no_output_____" ] ], [ [ "--> the model is not good enough since it performs no better than the benchmark.", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
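Exercise 2 states that the firm prefers false negatives to false positives, i.e. precision matters more than recall here. With a sigmoid output you can trade one for the other by raising the decision threshold above the default 0.5. A hedged sketch (random stand-ins so it runs standalone; in the notebook you would use `probs = model.predict(X_test).ravel()`):

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)
probs = rng.random(3000)                        # stand-in for sigmoid outputs
y_true = (rng.random(3000) < 0.24).astype(int)  # ~24% leavers, as in the data

for thr in (0.3, 0.5, 0.7):
    y_hat = (probs >= thr).astype(int)
    print(thr, precision_score(y_true, y_hat), recall_score(y_true, y_hat))
# A higher threshold flags fewer employees: recall drops and precision rises
# on real model outputs (with these random stand-ins the numbers stay flat).
```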
[ [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ] ]
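Exercise 1 suggests normalizing the input features; the notebook does it by hand (`sqft/1000`, `age/10`). An equivalent, more general pattern is `StandardScaler` fit on the training split only, so test-set statistics never leak into training. A sketch with stand-in data:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

X = np.random.rand(47, 3) * [4000, 5, 70]   # stand-in for (sqft, bdrms, age)
y = np.random.rand(47)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

scaler = StandardScaler().fit(X_train)       # statistics from the train set only
X_train_s = scaler.transform(X_train)
X_test_s = scaler.transform(X_test)
print(X_train_s.mean(axis=0).round(2), X_train_s.std(axis=0).round(2))  # ~0 and ~1
```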
4a3ca8e8145b4c9ec7e4498d9878219dab4aaa53
9,262
ipynb
Jupyter Notebook
labs/Lab-07-HypothesisTestingPartII/lab7.ipynb
kimbrianj/bsos233-sp22
2743e08c902e75fef1d4cda149b7713aeb1a083a
[ "CC0-1.0" ]
null
null
null
labs/Lab-07-HypothesisTestingPartII/lab7.ipynb
kimbrianj/bsos233-sp22
2743e08c902e75fef1d4cda149b7713aeb1a083a
[ "CC0-1.0" ]
null
null
null
labs/Lab-07-HypothesisTestingPartII/lab7.ipynb
kimbrianj/bsos233-sp22
2743e08c902e75fef1d4cda149b7713aeb1a083a
[ "CC0-1.0" ]
null
null
null
33.557971
499
0.630317
[ [ [ "## Lab 7: Babies\n\nPlease complete this lab by providing answers in cells after the question. Use **Code** cells to write and run any code you need to answer the question and **Markdown** cells to write out answers in words. After you are finished with the assignment, remember to download it as an **HTML file** and submit it in **ELMS**.\n\nThis assignment is due by **11:59pm on Tuesday, March 29**.", "_____no_output_____" ] ], [ [ "# These lines import the Numpy and Datascience modules.\nimport numpy as np\nfrom datascience import *\n\n# These lines do some fancy plotting magic\nimport matplotlib\n%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.style.use('fivethirtyeight')", "_____no_output_____" ] ], [ [ "In this lab, we will look at a dataset of a sample of newborns in a large hospital system. We will treat it as if it were a simple random sample though the sampling was done in multiple stages. The table births contains the following variables for 1,174 mother-baby pairs: the baby’s birth weight in ounces, the number of gestational days, the mother’s age in completed years, the mother’s height in inches, pregnancy weight in pounds, and whether or not the mother smoked during pregnancy.\n\nThe key question we want to answer is whether maternal smoking is associated with lower birthweights of babies. ", "_____no_output_____" ] ], [ [ "births = Table.read_table('baby.csv')\nbirths.show(5)", "_____no_output_____" ] ], [ [ "Let's first take a look at the dataset. First, we select just the variables we want to look at. Then, since `Maternal Smoker` is a categorical variable, we group by that variable and look at summaries of the `Birth Weight` variable. ", "_____no_output_____" ] ], [ [ "smoking_and_birthweight = births.select('Maternal Smoker', 'Birth Weight')\nsmoking_and_birthweight.group('Maternal Smoker')", "_____no_output_____" ], [ "smoking_and_birthweight.group('Maternal Smoker', collect = np.mean)", "_____no_output_____" ], [ "smoking_and_birthweight.group('Maternal Smoker', collect = np.std)", "_____no_output_____" ], [ "smoking_and_birthweight.hist('Birth Weight', group = 'Maternal Smoker')", "_____no_output_____" ] ], [ [ "The distribution of the weights of the babies born to mothers who smoked appears to be based slightly to the left of the distribution corresponding to non-smoking mothers. The weights of the babies of the mothers who smoked seem lower on average than the weights of the babies of the non-smokers.\n\nThis raises the question of whether the difference reflects just chance variation or a difference in the distributions in the larger population. Could it be that there is no difference between the two distributions in the population, but we are seeing a difference in the samples just because of the mothers who happened to be selected?\n\nRemember, we are mainly interested in whether maternal smoking is associated with **lower** birthweights of babies. ", "_____no_output_____" ], [ "<font color = 'red'>**Question 1: What is the null hypothesis? What is the alternative hypothesis?**</font>", "_____no_output_____" ], [ "*Replace this text with your answer.*\n\n", "_____no_output_____" ], [ "<font color = 'red'>**Question 2: What is the statistic we want to calculate to perform the hypothesis test? Calculate the observed value of this statistic for our data.**</font>\n\n*Hint:* Remember, we want to compare the means of the two groups. 
Make sure the statistic you calculate is consistent with the alternative hypothesis that we are testing!", "_____no_output_____" ] ], [ [ "\n\n", "_____no_output_____" ] ], [ [ "<font color = 'red'>**Question 3: Define the function `statistic` which takes in a Table as an argument and returns the value of a statistic. Check to make sure the function works by using the `smoking_and_birthweight` table and make sure it provides one value of the statistic as the output. Assign the observed value of the statistic that you just calculated using the function to `observed_statistic`.**</font>", "_____no_output_____" ] ], [ [ "def statistic(births_table):\n    ...\n    return ...\n\nobserved_statistic = statistic(smoking_and_birthweight)\nobserved_statistic", "_____no_output_____" ] ], [ [ "If there were no difference between the two distributions in the underlying population, then whether a birth weight has the label True or False with respect to maternal smoking should make no difference to the average. The idea, then, is to shuffle all the labels randomly among the mothers. This is called random permutation.\n\nShuffling ensures that the count of True labels does not change, and nor does the count of False labels. This is important for the comparability of the simulated and original statistics.", "_____no_output_____" ], [ "<font color = 'red'>**Question 4: Shuffle the `smoking_and_birthweight` table and assign the shuffled table to `shuffled_smoker`. Take the `Maternal Smoker` column from that shuffled table. Create a new table called `simulated_smoker` that contains the original `Birth Weight` variable as well as the new shuffled `Maternal Smoker` variable.**</font>", "_____no_output_____" ] ], [ [ "shuffled_smoker = ...\nsimulated_smoker = Table().with_columns(\"Birth Weight\", ...,\n                                       \"Maternal Smoker\", ...)", "_____no_output_____" ] ], [ [ "<font color = 'red'>**Question 5: Let's now see what the distribution of statistics is actually like under the null hypothesis.**</font>\n\nDefine the function `simulation_and_statistic` that shuffles the table, calculates the statistic, and returns the statistic. Then, create an array called `simulated_statistics` and use a loop to generate 5000 simulated statistics. \n", "_____no_output_____" ] ], [ [ "def simulation_and_statistic():\n    '''Simulates shuffling the smoking_and_birthweight table and calculating the statistics.\n    Returns one statistic.'''\n    ...\n    return ...\n\nnum_repetitions = 5000\n\nsimulated_statistics = ...\n\nfor ... in ...:\n    ...\n", "_____no_output_____" ] ], [ [ "We can visualize the resulting simulated statistics by putting the array into a table and using `hist`. ", "_____no_output_____" ] ], [ [ "Table().with_column('Simulated Statistic', simulated_statistics).hist()\nplt.title('Prediction Under the Null Hypothesis')\nplt.scatter(observed_statistic, 0, color='red', s=30);", "_____no_output_____" ] ], [ [ "<font color = 'red'>**Question 6: Calculate the p-value.**</font>\n\n*Hint:* Think about how you set up the alternative hypothesis and what you used for your statistic.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
4a3cb1f80b996f6c36a3dab37c23c042fe1b03af
3,648
ipynb
Jupyter Notebook
PDL-MongoDB-API.ipynb
peopledatalabs/pdl-ibm-cloudpak
147d42ae900d6ec2fac1dda84c72ca1be8f7f2f0
[ "MIT" ]
1
2021-06-05T05:41:39.000Z
2021-06-05T05:41:39.000Z
PDL-MongoDB-API.ipynb
peopledatalabs/pdl-ibm-cloudpak
147d42ae900d6ec2fac1dda84c72ca1be8f7f2f0
[ "MIT" ]
null
null
null
PDL-MongoDB-API.ipynb
peopledatalabs/pdl-ibm-cloudpak
147d42ae900d6ec2fac1dda84c72ca1be8f7f2f0
[ "MIT" ]
null
null
null
25.333333
71
0.459704
[ [ [ "import pandas as pd\n!pip install pymongo\nfrom pymongo import MongoClient\n\n\n# Api Key\napi_key = \"<api-key>\"\napi_endpoint = 'https://api.peopledatalabs.com/v5/person/bulk'\n\n# Variables used for database/dataframe\nmongo_cluster_ip = '<cluster-ip>, 27017'\ndb_name = '<database-name>'\ncollection_name = '<collection-name>'\ndata_path = '<path-to-data-asset>'\n\n# Instantiating the db\nclient = MongoClient(mongo_cluster_ip)\ndb = client[db_name]\ncollection = db[collection_name]", "_____no_output_____" ], [ "headers = {\n \"X-Api-Key\": api_key,\n \"Accept-Encoding\": \"gzip\",\n \"Content-Type\": \"application/json\",\n \"Accept\": \"application/json\", \n}\n\ndata = {\n \"requests\": [\n {\n \"params\": {\n \"profile\": [\"linkedin.com/in/seanthorne\"]\n }\n },\n {\n \"params\": {\n \"profile\": [\"linkedin.com/in/randrewn\"]\n }\n },\n {\n \"params\": {\n \"email\": [\"[email protected]\"]\n }\n },\n {\n \"params\": {\n \"email\": [\"[email protected]\"]\n }\n },\n {\n \"params\": {\n \"company\": [\"peopledatalabs.com\"],\n \"name\": [\"henry nevue\"]\n }\n }\n ]\n}", "_____no_output_____" ], [ "# Using the Requests module for makeing API requests\nresponse = requests.post(\n api_endpoint,\n headers=headers,\n json=data\n).json()", "_____no_output_____" ], [ "# A check on the status codes returned\ndf = pd.DataFrame(response)\ndf.status", "_____no_output_____" ], [ "# Insert the file if those records don't already exist\ntry:\n if collection.estimated_document_count() != 0:\n collection.drop()\n to_insert = df.to_dict('records')\n if isinstance(to_insert, list):\n collection.insert_many(to_insert)\n elif isinstance(to_insert, dict):\n collection.insert_one(to_insert)\n print('successfully inserted data into database')\nexcept:\n print('could not insert json into database')\n ", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code" ] ]
4a3cb27b109683c01b993cfa48a85096eef8cd20
1,348
ipynb
Jupyter Notebook
src/kx4/utils/modules.ipynb
ksterx/kx4ceps
033fde5d04281c37ae7db4f57b629bc0be3a0acc
[ "MIT" ]
null
null
null
src/kx4/utils/modules.ipynb
ksterx/kx4ceps
033fde5d04281c37ae7db4f57b629bc0be3a0acc
[ "MIT" ]
null
null
null
src/kx4/utils/modules.ipynb
ksterx/kx4ceps
033fde5d04281c37ae7db4f57b629bc0be3a0acc
[ "MIT" ]
null
null
null
22.466667
80
0.495549
[ [ [ "def is_notebook():\n \"\"\"Determine wheather is the environment Jupyter Notebook\"\"\"\n if \"get_ipython\" not in globals():\n # Python shell\n return False\n env_name = get_ipython().__class__.__name__\n if env_name == \"TerminalInteractiveShell\":\n # IPython shell\n return False\n # Jupyter Notebook\n return True", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code" ] ]
4a3cb9ad41e515be9f5332e4c83c8c60c104c3ab
11,310
ipynb
Jupyter Notebook
notebook/networkDisplay.ipynb
worthapenny/pyofw
c41f9760010eec8edd79af00b7c0a53d6ed59e17
[ "MIT" ]
null
null
null
notebook/networkDisplay.ipynb
worthapenny/pyofw
c41f9760010eec8edd79af00b7c0a53d6ed59e17
[ "MIT" ]
null
null
null
notebook/networkDisplay.ipynb
worthapenny/pyofw
c41f9760010eec8edd79af00b7c0a53d6ed59e17
[ "MIT" ]
null
null
null
110.882353
2,575
0.699469
[ [ [ "import pandas as pd\nimport numpy as np\nimport networkx as nx\nimport plotly.graph_objects as go", "_____no_output_____" ], [ "from pyOFW.ofwConfig import OFWConfig\nofw_config = OFWConfig()", "_____no_output_____" ], [ "from OpenFlows.Water.Domain import IWaterModel\n\nmodel_filepath = r\"C:\\Program Files (x86)\\Bentley\\WaterGEMS\\Samples\\Example5.wtg\"\nwater_model: IWaterModel = ofw_config.open_model(model_filepath)", "_____no_output_____" ], [ "from pyOFW.networkInput import NetworkInput\nni = NetworkInput(water_model)\n\ngraph = ni.get_networkx_graph(laterals=False)\nnx.draw_networkx(graph,node_shape='s')", "_____no_output_____" ], [ "# setup.end()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code" ] ]
4a3cba3700686e32f895409899f66e9a42c73850
140,878
ipynb
Jupyter Notebook
Notebooks/readingmodel.ipynb
pt-spotify-song-suggester-3/ds
5bff866ecf9fdcb9167cd0af4847c8c4c77c6e1f
[ "MIT" ]
null
null
null
Notebooks/readingmodel.ipynb
pt-spotify-song-suggester-3/ds
5bff866ecf9fdcb9167cd0af4847c8c4c77c6e1f
[ "MIT" ]
null
null
null
Notebooks/readingmodel.ipynb
pt-spotify-song-suggester-3/ds
5bff866ecf9fdcb9167cd0af4847c8c4c77c6e1f
[ "MIT" ]
null
null
null
34.044949
15,978
0.406011
[ [ [ "!pip install pandas sklearn", "Requirement already satisfied: pandas in d:\\python projects\\spotify\\venv\\lib\\site-packages (1.1.3)\nCollecting sklearn\n Downloading sklearn-0.0.tar.gz (1.1 kB)\nRequirement already satisfied: numpy>=1.15.4 in d:\\python projects\\spotify\\venv\\lib\\site-packages (from pandas) (1.19.2)\nRequirement already satisfied: python-dateutil>=2.7.3 in d:\\python projects\\spotify\\venv\\lib\\site-packages (from pandas) (2.8.1)\nRequirement already satisfied: pytz>=2017.2 in d:\\python projects\\spotify\\venv\\lib\\site-packages (from pandas) (2020.1)\nCollecting scikit-learn\n Downloading scikit_learn-0.23.2-cp37-cp37m-win_amd64.whl (6.8 MB)\nRequirement already satisfied: six>=1.5 in d:\\python projects\\spotify\\venv\\lib\\site-packages (from python-dateutil>=2.7.3->pandas) (1.15.0)\nCollecting threadpoolctl>=2.0.0\n Downloading threadpoolctl-2.1.0-py3-none-any.whl (12 kB)\nRequirement already satisfied: scipy>=0.19.1 in d:\\python projects\\spotify\\venv\\lib\\site-packages (from scikit-learn->sklearn) (1.5.3)\nCollecting joblib>=0.11\n Downloading joblib-0.17.0-py3-none-any.whl (301 kB)\nUsing legacy 'setup.py install' for sklearn, since package 'wheel' is not installed.\nInstalling collected packages: threadpoolctl, joblib, scikit-learn, sklearn\n Running setup.py install for sklearn: started\n Running setup.py install for sklearn: finished with status 'done'\nSuccessfully installed joblib-0.17.0 scikit-learn-0.23.2 sklearn-0.0 threadpoolctl-2.1.0\n" ], [ "import pandas as pd\n\ndf = pd.read_csv('spotify_kaggle/data.csv')\ndf.head()", "_____no_output_____" ], [ "new_df = pd.read_csv('spotify2.csv')\nnew_df.head()", "_____no_output_____" ], [ "import pickle\nfilename = 'neighbors'\n\ninfile = open(filename,'rb')\nmodel = pickle.load(infile)\ninfile.close()", "_____no_output_____" ], [ "def value_monad(a):\n return new_df.values.tolist()[a]\n\nvalue_monad(1)", "_____no_output_____" ], [ "def heigher_order_features(input_y):\n \"\"\"A helper function for compare_this function, it creates\n a list with a specific row input\"\"\"\n state = []\n for i, x in enumerate(new_df.columns.tolist()):\n a = new_df[str(x)][input_y]\n state.append(a)\n \n return state\n\nprint(heigher_order_features(2))", "[0.606425702811245, 5.9926889195181884e-05, 0.7580971659919029, 0.01837436036508649, 0.22, 1.1771076111778136e-05, 0.0, 0.4545454545454546, 0.119, 0.6276094276094277, 0.0, 0.9587203302373579, 0.4390862424259805, 0.88, 0.07070707070707273]\n" ], [ "import plotly.graph_objects as go\nimport plotly.offline as pyo\n\ndef compare_this(a,b):\n\n categories = new_df.columns.tolist()\n\n fig = go.Figure()\n\n fig.add_trace(go.Scatterpolar(\n r=heigher_order_features(a),\n theta=categories,\n fill='toself',\n name='Product A'\n ))\n fig.add_trace(go.Scatterpolar(\n r=heigher_order_features(b),\n theta=categories,\n fill='toself',\n name='Product B'\n ))\n\n\n fig.update_layout(\n polar=dict(\n radialaxis=dict(\n visible=True,\n range=[0, 1]\n )),\n showlegend=False\n )\n \n pyo.iplot(fig, filename = 'basic-line')\n\ncompare_this(100,200)", "_____no_output_____" ], [ "print(model.kneighbors([value_monad(10000)]))", "(array([[0. 
, 0.26437765, 0.26623907, 0.28723417, 0.30341281,\n 0.31390304, 0.32246313, 0.32313638, 0.32415728, 0.32679204,\n 0.32883792, 0.33545995, 0.33711348, 0.33724571, 0.34088624,\n 0.34282455, 0.34399587, 0.34404901, 0.34761091, 0.3498265 ]]), array([[10000, 18201, 2611, 35170, 26469, 26655, 18360, 8941, 49579,\n 3038, 11263, 11720, 11185, 19080, 26064, 18835, 1449, 19030,\n 10249, 19130]], dtype=int64))\n" ], [ "def search_id_monad(a):\n b = model.kneighbors([value_monad(a)])\n return b[1]", "_____no_output_____" ], [ "search_id_monad(10000)", "_____no_output_____" ], [ "df.values[10000]", "_____no_output_____" ], [ "def run_model(a):\n monad = search_id_monad(a)\n data = [monad[0][0], monad[0][1]]\n meta_data = [\n df.values.tolist()[data[0]],\n df.values.tolist()[data[1]]]\n return [compare_this(data[0], data[1]), meta_data]\n \nrun_model(10000)", "_____no_output_____" ], [ "compare_this(10000, 18201)", "_____no_output_____" ], [ "search_id_monad(10000)[0]", "_____no_output_____" ] ], [ [ "## Quantum Phonic playlist\n\nPostprocessing playlists based off of strawberryfields, a python library for high-level functions for near term phonic devices. ", "_____no_output_____" ] ], [ [ "def degrees_sep(song_id_in): \n \n state = []\n for i,x in enumerate(search_id_monad(song_id_in)[0]):\n state.append((search_id_monad(song_id_in)[0][0], x))\n\n return state\n\nprint(degrees_sep(10000))", "[(10000, 10000), (10000, 18201), (10000, 2611), (10000, 35170), (10000, 26469), (10000, 26655), (10000, 18360), (10000, 8941), (10000, 49579), (10000, 3038), (10000, 11263), (10000, 11720), (10000, 11185), (10000, 19080), (10000, 26064), (10000, 18835), (10000, 1449), (10000, 19030), (10000, 10249), (10000, 19130)]\n" ], [ "def multi_degrees_sep(song_id_in, level): \n \n state = []\n for i,x in enumerate(search_id_monad(song_id_in)[0]):\n state.append((search_id_monad(song_id_in)[0][0], x))\n \n state2 = []\n \n state3 = []\n for i,y in enumerate(state):\n try:\n # takes current state\n current = state[i][1]\n # prints out state\n print(state[i][1])\n # appends state\n state2.append(state[i][1])\n # finds relationships\n #ndegree = degrees_sep(song_id_in)\n \n except:\n continue\n \n return {'First Degree': state,\n 'placeholder': state2}\n\nmulti_degrees_sep(10000,1)", "10000\n18201\n2611\n35170\n26469\n26655\n18360\n8941\n49579\n3038\n11263\n11720\n11185\n19080\n26064\n18835\n1449\n19030\n10249\n19130\n" ] ], [ [ "state = [('a','b'),\n ('c','d')]\n\nstate[0][1]", "_____no_output_____" ] ], [ [ "from strawberryfields.apps import data, plot, sample, clique\nimport numpy as np\nimport networkx as nx\nimport plotly\n\ndef visualize_deg_sep(song_id_in):\n G = nx.Graph()\n G.add_nodes_from(search_id_monad(song_id_in)[0])\n G.add_edges_from(degrees_sep(song_id_in))\n \n maximal_clique = search_id_monad(song_id_in)[0]\n\n \n return plot.graph(G, \n maximal_clique, \n subgraph_node_colour='#1DB954',\n subgraph_edge_colour='#1DB954',\n graph_node_colour='#ffffff',\n graph_edge_colour='#ffffff',\n background_color='#191414',\n graph_node_size=5)\n\nvisualize_deg_sep(10000)\n\n", "_____no_output_____" ] ] ]
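, [ "# Sketch: rank the songs in the neighbourhood graph with PageRank to order a\n# playlist. Pure networkx, reusing `degrees_sep` from above; one possible\n# post-processing idea, not part of the original pipeline.\nG = nx.Graph()\nG.add_edges_from(degrees_sep(10000))\nranks = nx.pagerank(G)\nplaylist = sorted(ranks, key=ranks.get, reverse=True)\nprint(playlist[:10])", "_____no_output_____" ] ] ]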
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
4a3cc3a2ac9d0a71653643bfa051350ab481793c
10,897
ipynb
Jupyter Notebook
References/8.Comprehensions.ipynb
pywaker/pybasics
0e1eed95049cad9c989b6e6c5e60a8ca3bd2ae89
[ "CC-BY-4.0" ]
6
2020-07-14T12:05:08.000Z
2022-01-06T05:28:51.000Z
References/8.Comprehensions.ipynb
pywaker/pybasics
0e1eed95049cad9c989b6e6c5e60a8ca3bd2ae89
[ "CC-BY-4.0" ]
null
null
null
References/8.Comprehensions.ipynb
pywaker/pybasics
0e1eed95049cad9c989b6e6c5e60a8ca3bd2ae89
[ "CC-BY-4.0" ]
13
2020-06-12T16:04:57.000Z
2020-10-13T04:49:05.000Z
17.661264
129
0.449206
[ [ [ "# Comprehensions\n---", "_____no_output_____" ], [ "Comprehension are associated with collection types\n\n- list comprehension\n- dictionary comprehension\n- set comprehension", "_____no_output_____" ], [ "## List Comprehension", "_____no_output_____" ], [ "*Working with range objects*", "_____no_output_____" ] ], [ [ "range(0, 8)", "_____no_output_____" ], [ "list(range(0, 8))", "_____no_output_____" ] ], [ [ "*list comprehensions are written this way*", "_____no_output_____" ] ], [ [ "[i for i in range(0, 9)]", "_____no_output_____" ] ], [ [ "*Can be read as:*\n\n - create a list by \n appending each item (i) where ( for ) item (i) ranges from 0 to 8 ", "_____no_output_____" ], [ "*above list comprehension does exactly the same as code below*", "_____no_output_____" ] ], [ [ "x = []\nfor i in range(0, 9):\n x.append(i)\nx", "_____no_output_____" ] ], [ [ "*create a list from range 0, 9 where each item is powered by two*", "_____no_output_____" ] ], [ [ "[i**2 for i in range(0, 9)]", "_____no_output_____" ] ], [ [ "*create a list from range 0, 26 where items are divisible by 5 *", "_____no_output_____" ] ], [ [ "a = []\nfor i in range(1, 26):\n if i % 5 == 0:\n a.append(i)\na", "_____no_output_____" ] ], [ [ "*lets create same list using list comprehension*", "_____no_output_____" ] ], [ [ "[i for i in range(1, 26) if i % 5 == 0]", "_____no_output_____" ] ], [ [ "*lets create a list where items are divisible by 5, if they are not fill it with 0*", "_____no_output_____" ] ], [ [ "[i if i % 5 == 0 else 0 for i in range(1, 26)]", "_____no_output_____" ] ], [ [ "*first expresssion i.e: ` i if i % 5 == 0 else 0 ` which starts before first `for` is evaluated and appended to new list*", "_____no_output_____" ], [ "*list comprehensions can be nested as well*", "_____no_output_____" ] ], [ [ "[y[0] for y in [x for x in ['hello', 'world'] if x == 'hello']]", "_____no_output_____" ] ], [ [ "*written without list comprehension as*", "_____no_output_____" ] ], [ [ "chrs = []\nfor x in ['hello', 'world']:\n if x == 'hello':\n for y in x:\n chrs.append(y)\nchrs", "_____no_output_____" ], [ "chrs = []\nfor y in [x for x in ['hello', 'world'] if x == 'hello'][0]:\n chrs.append(y)\nchrs", "_____no_output_____" ] ], [ [ "## Dictionary comprehension\n---", "_____no_output_____" ], [ "*Everything is similar to list comprehensions*", "_____no_output_____" ] ], [ [ "{i: i**2 for i in range(1, 6)}", "_____no_output_____" ] ], [ [ "## Set comprehension\n---", "_____no_output_____" ], [ "*for both dictionary and set comprehension*", "_____no_output_____" ] ], [ [ "{i for i in range(1, 6)}", "_____no_output_____" ] ], [ [ "*Remember `{}` creates either set comprehension or dictionary comprehensions*\n\n*the presence of `:` that separates key => value pair for dictionary creates dictionary comprehension *", "_____no_output_____" ], [ "## Tuple comprehension\n---\n\n*There is no tuple comprehension, but there is syntax that looks like tuple comprehensions ( which are called generators )*", "_____no_output_____" ], [ "## Generators", "_____no_output_____" ] ], [ [ "g = (x for x in 'hello')", "_____no_output_____" ], [ "next(g)", "_____no_output_____" ], [ "next(g)", "_____no_output_____" ], [ "def marker():\n yield 5\n yield 9\n yield 77\n yield 88", "_____no_output_____" ], [ "m = marker()", "_____no_output_____" ], [ "next(m)", "_____no_output_____" ], [ "next(m)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ] ]
4a3cc3f5a19d2e3316387319f3d766738f3eb0a9
344,800
ipynb
Jupyter Notebook
docs/notebooks/GrammarFuzzer.ipynb
darkrsw/fuzzingbook
3ed5a4ba14dd2837dff2c4b8c6d222c102dea338
[ "MIT" ]
null
null
null
docs/notebooks/GrammarFuzzer.ipynb
darkrsw/fuzzingbook
3ed5a4ba14dd2837dff2c4b8c6d222c102dea338
[ "MIT" ]
null
null
null
docs/notebooks/GrammarFuzzer.ipynb
darkrsw/fuzzingbook
3ed5a4ba14dd2837dff2c4b8c6d222c102dea338
[ "MIT" ]
null
null
null
39.387708
12,612
0.539823
[ [ [ "# Efficient Grammar Fuzzing\n\nIn the [chapter on grammars](Grammars.ipynb), we have seen how to use _grammars_ for very effective and efficient testing. In this chapter, we refine the previous string-based algorithm into a tree-based algorithm, which is much faster and allows for much more control over the production of fuzz inputs.", "_____no_output_____" ], [ "The algorithm in this chapter serves as a foundation for several more techniques; this chapter thus is a \"hub\" in the book.", "_____no_output_____" ], [ "**Prerequisites**\n\n* You should know how grammar-based fuzzing works, e.g. from the [chapter on grammars](Grammars.ipynb).", "_____no_output_____" ], [ "## Synopsis\n<!-- Automatically generated. Do not edit. -->\n\nTo [use the code provided in this chapter](Importing.ipynb), write\n\n```python\n>>> from fuzzingbook.GrammarFuzzer import <identifier>\n```\n\nand then make use of the following features.\n\n\nThis chapter introduces `GrammarFuzzer`, an efficient grammar fuzzer that takes a grammar to produce syntactically valid input strings. Here's a typical usage:\n\n```python\n>>> from Grammars import US_PHONE_GRAMMAR\n>>> phone_fuzzer = GrammarFuzzer(US_PHONE_GRAMMAR)\n>>> phone_fuzzer.fuzz()\n'(785)853-4702'\n```\nThe `GrammarFuzzer` constructor takes a number of keyword arguments to control its behavior. `start_symbol`, for instance, allows to set the symbol that expansion starts with (instead of `<start>`):\n\n```python\n>>> area_fuzzer = GrammarFuzzer(US_PHONE_GRAMMAR, start_symbol='<area>')\n>>> area_fuzzer.fuzz()\n'269'\n>>> import inspect\n>>> print(inspect.getdoc(GrammarFuzzer.__init__))\nProduce strings from `grammar`, starting with `start_symbol`.\nIf `min_nonterminals` or `max_nonterminals` is given, use them as limits \nfor the number of nonterminals produced. \nIf `disp` is set, display the intermediate derivation trees.\nIf `log` is set, show intermediate steps as text on standard output.\n\n```\nInternally, `GrammarFuzzer` makes use of [derivation trees](#Derivation-Trees), which it expands step by step. After producing a string, the tree produced can be accessed in the `derivation_tree` attribute.\n\n```python\n>>> display_tree(phone_fuzzer.derivation_tree)\n```\n![](PICS/GrammarFuzzer-synopsis-1.svg)\n\nIn the internal representation of a derivation tree, a _node_ is a pair (`symbol`, `children`). For nonterminals, `symbol` is the symbol that is being expanded, and `children` is a list of further nodes. For terminals, `symbol` is the terminal string, and `children` is empty.\n\n```python\n>>> phone_fuzzer.derivation_tree\n('<start>',\n [('<phone-number>',\n [('(', []),\n ('<area>',\n [('<lead-digit>', [('7', [])]),\n ('<digit>', [('8', [])]),\n ('<digit>', [('5', [])])]),\n (')', []),\n ('<exchange>',\n [('<lead-digit>', [('8', [])]),\n ('<digit>', [('5', [])]),\n ('<digit>', [('3', [])])]),\n ('-', []),\n ('<line>',\n [('<digit>', [('4', [])]),\n ('<digit>', [('7', [])]),\n ('<digit>', [('0', [])]),\n ('<digit>', [('2', [])])])])])\n```\nThe chapter contains various helpers to work with derivation trees, including visualization tools.\n\n", "_____no_output_____" ], [ "## An Insufficient Algorithm\n\nIn the [previous chapter](Grammars.ipynb), we have introduced the `simple_grammar_fuzzer()` function which takes a grammar and automatically produces a syntactically valid string from it. However, `simple_grammar_fuzzer()` is just what its name suggests – simple. 
To illustrate the problem, let us get back to the `expr_grammar` we created from `EXPR_EBNF_GRAMMAR` in the [chapter on grammars](Grammars.ipynb):", "_____no_output_____" ] ], [ [ "import bookutils", "_____no_output_____" ], [ "from bookutils import unicode_escape", "_____no_output_____" ], [ "from Grammars import EXPR_EBNF_GRAMMAR, convert_ebnf_grammar, simple_grammar_fuzzer, is_valid_grammar, exp_string, exp_opts", "_____no_output_____" ], [ "expr_grammar = convert_ebnf_grammar(EXPR_EBNF_GRAMMAR)\nexpr_grammar", "_____no_output_____" ] ], [ [ "`expr_grammar` has an interesting property. If we feed it into `simple_grammar_fuzzer()`, the function gets stuck in an infinite expansion:", "_____no_output_____" ] ], [ [ "from ExpectError import ExpectTimeout", "_____no_output_____" ], [ "with ExpectTimeout(1):\n simple_grammar_fuzzer(grammar=expr_grammar, max_nonterminals=3)", "Traceback (most recent call last):\n File \"/var/folders/n2/xd9445p97rb3xh7m1dfx8_4h0006ts/T/ipykernel_16949/3259437052.py\", line 2, in <module>\n simple_grammar_fuzzer(grammar=expr_grammar, max_nonterminals=3)\n File \"/Users/zeller/Projects/fuzzingbook/notebooks/Grammars.ipynb\", line 67, in simple_grammar_fuzzer\n while len(nonterminals(term)) > 0:\n File \"/Users/zeller/Projects/fuzzingbook/notebooks/Grammars.ipynb\", line 51, in nonterminals\n return re.findall(RE_NONTERMINAL, expansion)\n File \"/Users/zeller/.pyenv/versions/3.9.7/lib/python3.9/re.py\", line 241, in findall\n return _compile(pattern, flags).findall(string)\n File \"/Users/zeller/.pyenv/versions/3.9.7/lib/python3.9/re.py\", line 294, in _compile\n return _cache[type(pattern), pattern, flags]\n File \"/Users/zeller/.pyenv/versions/3.9.7/lib/python3.9/re.py\", line 294, in _compile\n return _cache[type(pattern), pattern, flags]\n File \"/Users/zeller/Projects/fuzzingbook/notebooks/ExpectError.ipynb\", line 86, in check_time\n raise TimeoutError\nTimeoutError (expected)\n" ] ], [ [ "Why is that so? The problem is in this rule:", "_____no_output_____" ] ], [ [ "expr_grammar['<factor>']", "_____no_output_____" ] ], [ [ "Here, any choice except for `(expr)` increases the number of symbols, even if only temporarily. Since we place a hard limit on the number of symbols to expand, the only choice left for expanding `<factor>` is `(<expr>)`, which leads to an infinite addition of parentheses.", "_____no_output_____" ], [ "The problem of potentially infinite expansion is only one of the problems with `simple_grammar_fuzzer()`. More problems include:\n\n1. *It is inefficient*. With each iteration, this fuzzer searches the entire string produced so far for symbols to expand. This becomes inefficient as the production string grows.\n\n2. *It is hard to control.* Even while limiting the number of symbols, it is still possible to obtain very long strings – and even infinitely long ones, as discussed above.\n\nLet us illustrate both problems by plotting the time required for strings of different lengths.", "_____no_output_____" ] ], [ [ "from Grammars import simple_grammar_fuzzer", "_____no_output_____" ], [ "from Grammars import START_SYMBOL, EXPR_GRAMMAR, URL_GRAMMAR, CGI_GRAMMAR", "_____no_output_____" ], [ "from Grammars import RE_NONTERMINAL, nonterminals, is_nonterminal", "_____no_output_____" ], [ "from Timer import Timer", "_____no_output_____" ], [ "trials = 50\nxs = []\nys = []\nfor i in range(trials):\n with Timer() as t:\n s = simple_grammar_fuzzer(EXPR_GRAMMAR, max_nonterminals=15)\n xs.append(len(s))\n ys.append(t.elapsed_time())\n print(i, end=\" \")\nprint()", "0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 \n" ], [ "average_time = sum(ys) / trials\nprint(\"Average time:\", average_time)", "Average time: 0.18061637407999986\n" ], [ "%matplotlib inline\n\nimport matplotlib.pyplot as plt\nplt.scatter(xs, ys)\nplt.title('Time required for generating an output');", "_____no_output_____" ] ], [ [ "We see that (1) the time needed to generate an output increases quadratically with the length of that output, and that (2) a large portion of the produced outputs are tens of thousands of characters long.", "_____no_output_____" ], [ "To address these problems, we need a _smarter algorithm_ – one that is more efficient, that gets us better control over expansions, and that is able to foresee in `expr_grammar` that the `(expr)` alternative yields a potentially infinite expansion, in contrast to the other two.", "_____no_output_____" ], [ "## Derivation Trees\n\nTo both obtain a more efficient algorithm _and_ exercise better control over expansions, we will use a special representation for the strings that our grammar produces. The general idea is to use a *tree* structure that will be subsequently expanded – a so-called *derivation tree*. This representation allows us to always keep track of our expansion status – answering questions such as which elements have been expanded into which others, and which symbols still need to be expanded. Furthermore, adding new elements to a tree is far more efficient than replacing strings again and again.", "_____no_output_____" ], [ "Like other trees used in programming, a derivation tree (also known as *parse tree* or *concrete syntax tree*) consists of *nodes* which have other nodes (called *child nodes*) as their *children*. The tree starts with one node that has no parent; this is called the *root node*; a node without children is called a *leaf*.", "_____no_output_____" ], [ "The grammar expansion process with derivation trees is illustrated in the following steps, using the arithmetic grammar [from\nthe chapter on grammars](Grammars.ipynb). We start with a single node as root of the tree, representing the *start symbol* – in our case `<start>`.", "_____no_output_____" ], [ "(We use `dot` as a drawing program; you don't need to look at the code, just at its results.)", "_____no_output_____" ] ], [ [ "from graphviz import Digraph", "_____no_output_____" ], [ "tree = Digraph(\"root\")\ntree.attr('node', shape='plain')\ntree.node(r\"\\<start\\>\")", "_____no_output_____" ], [ "tree", "_____no_output_____" ] ], [ [ "To expand the tree, we traverse it, searching for a nonterminal symbol $S$ without children. $S$ thus is a symbol that still has to be expanded. We then choose an expansion for $S$ from the grammar. Then, we add the expansion as a new child of $S$. For our start symbol `<start>`, the only expansion is `<expr>`, so we add it as a child.", "_____no_output_____" ] ], [ [ "tree.edge(r\"\\<start\\>\", r\"\\<expr\\>\")", "_____no_output_____" ], [ "tree", "_____no_output_____" ] ], [ [ "To construct the produced string from a derivation tree, we traverse the tree in order and collect the symbols at the leaves of the tree. In the case above, we obtain the string `\"<expr>\"`.\n\nTo further expand the tree, we choose another symbol to expand, and add its expansion as new children. This would get us the `<expr>` symbol, which gets expanded into `<expr> + <term>`, adding three children.", "_____no_output_____" ] ], [ [ "tree.edge(r\"\\<expr\\>\", r\"\\<expr\\> \")\ntree.edge(r\"\\<expr\\>\", r\"+\")\ntree.edge(r\"\\<expr\\>\", r\"\\<term\\>\")", "_____no_output_____" ], [ "tree", "_____no_output_____" ] ], [ [ "We repeat the expansion until there are no symbols left to expand:", "_____no_output_____" ] ], [ [ "tree.edge(r\"\\<expr\\> \", r\"\\<term\\> \")\ntree.edge(r\"\\<term\\> \", r\"\\<factor\\> \")\ntree.edge(r\"\\<factor\\> \", r\"\\<integer\\> \")\ntree.edge(r\"\\<integer\\> \", r\"\\<digit\\> \")\ntree.edge(r\"\\<digit\\> \", r\"2 \")\n\ntree.edge(r\"\\<term\\>\", r\"\\<factor\\>\")\ntree.edge(r\"\\<factor\\>\", r\"\\<integer\\>\")\ntree.edge(r\"\\<integer\\>\", r\"\\<digit\\>\")\ntree.edge(r\"\\<digit\\>\", r\"2\")", "_____no_output_____" ], [ "tree", "_____no_output_____" ] ], [ [ "We now have a representation for the string `2 + 2`. In contrast to the string alone, though, the derivation tree records _the entire structure_ (and production history, or _derivation_ history) of the produced string. It also allows for simple comparison and manipulation – say, replacing one subtree (substructure) against another.", "_____no_output_____" ], [ "## Representing Derivation Trees\n\nTo represent a derivation tree in Python, we use the following format. A node is a pair\n\n```python\n(SYMBOL_NAME, CHILDREN)\n```\n\nwhere `SYMBOL_NAME` is a string representing the node (i.e. `\"<start>\"` or `\"+\"`) and `CHILDREN` is a list of children nodes.\n\n`CHILDREN` can take some special values:\n\n1. `None` as a placeholder for future expansion. This means that the node is a *nonterminal symbol* that should be expanded further.\n2. `[]` (i.e., the empty list) to indicate _no_ children. This means that the node is a *terminal symbol* that can no longer be expanded.", "_____no_output_____" ], [ "Let us take a very simple derivation tree, representing the intermediate step `<expr> + <term>`, above.", "_____no_output_____" ] ], [ [ "derivation_tree = (\"<start>\",\n [(\"<expr>\",\n [(\"<expr>\", None),\n (\" + \", []),\n (\"<term>\", None)]\n )])", "_____no_output_____" ] ], [ [ "To better understand the structure of this tree, let us introduce a function `display_tree()` that visualizes this tree.", "_____no_output_____" ], [ "#### Excursion: Implementing `display_tree()`", "_____no_output_____" ], [ "We use the `dot` drawing program from the `graphviz` package algorithmically, traversing the above structure. (Unless you're deeply interested in tree visualization, you can directly skip to the example below.)", "_____no_output_____" ] ], [ [ "from graphviz import Digraph", "_____no_output_____" ], [ "from IPython.display import display", "_____no_output_____" ], [ "import re", "_____no_output_____" ], [ "def dot_escape(s):\n \"\"\"Return s in a form suitable for dot\"\"\"\n s = re.sub(r'([^a-zA-Z0-9\" ])', r\"\\\\\\1\", s)\n return s", "_____no_output_____" ], [ "assert dot_escape(\"hello\") == \"hello\"\nassert dot_escape(\"<hello>, world\") == \"\\\\<hello\\\\>\\\\, world\"\nassert dot_escape(\"\\\\n\") == \"\\\\\\\\n\"", "_____no_output_____" ] ], [ [ "While we are interested at present in visualizing a `derivation_tree`, it is in our interest to generalize the visualization procedure. In particular, it would be helpful if our method `display_tree()` can display *any* tree-like data structure. To enable this, we define a helper method `extract_node()` that extracts the current symbol and children from a given data structure. The default implementation simply extracts the symbol, children, and annotation from any `derivation_tree` node.", "_____no_output_____" ] ], [ [ "def extract_node(node, id):\n symbol, children, *annotation = node\n return symbol, children, ''.join(str(a) for a in annotation)", "_____no_output_____" ] ], [ [ "While visualizing a tree, it is often useful to display certain nodes differently. For example, it is sometimes useful to distinguish between non-processed nodes and processed nodes. We define a helper procedure `default_node_attr()` that provides the basic display, which can be customized by the user.", "_____no_output_____" ] ], [ [ "def default_node_attr(dot, nid, symbol, ann):\n dot.node(repr(nid), dot_escape(unicode_escape(symbol)))", "_____no_output_____" ] ], [ [ "Similar to nodes, the edges may also require modifications. We define `default_edge_attr()` as a helper procedure that can be customized by the user.", "_____no_output_____" ] ], [ [ "def default_edge_attr(dot, start_node, stop_node):\n dot.edge(repr(start_node), repr(stop_node))", "_____no_output_____" ] ], [ [ "While visualizing a tree, one may sometimes wish to change the appearance of the tree. For example, it is sometimes easier to view the tree if it was laid out left to right rather than top to bottom. 
We define another helper procedure `default_graph_attr()` for that.", "_____no_output_____" ] ], [ [ "def default_graph_attr(dot):\n dot.attr('node', shape='plain')", "_____no_output_____" ] ], [ [ "Finally, we define a method `display_tree()` that accepts these four functions `extract_node()`, `default_edge_attr()`, `default_node_attr()` and `default_graph_attr()` and uses them to display the tree.", "_____no_output_____" ] ], [ [ "def display_tree(derivation_tree,\n log=False,\n extract_node=extract_node,\n node_attr=default_node_attr,\n edge_attr=default_edge_attr,\n graph_attr=default_graph_attr):\n \n # If we import display_tree, we also have to import its functions\n from graphviz import Digraph\n\n counter = 0\n\n def traverse_tree(dot, tree, id=0):\n (symbol, children, annotation) = extract_node(tree, id)\n node_attr(dot, id, symbol, annotation)\n\n if children:\n for child in children:\n nonlocal counter\n counter += 1\n child_id = counter\n edge_attr(dot, id, child_id)\n traverse_tree(dot, child, child_id)\n\n dot = Digraph(comment=\"Derivation Tree\")\n graph_attr(dot)\n traverse_tree(dot, derivation_tree)\n if log:\n print(dot)\n return dot", "_____no_output_____" ] ], [ [ "#### End of Excursion", "_____no_output_____" ], [ "This is what our tree visualizes into:", "_____no_output_____" ] ], [ [ "display_tree(derivation_tree)", "_____no_output_____" ] ], [ [ "Within this book, we also occasionally use a function `display_annotated_tree()` which allows to add annotations to individual nodes.", "_____no_output_____" ], [ "#### Excursion: Source code and example for `display_annotated_tree()`", "_____no_output_____" ], [ "`display_annotated_tree()` displays an annotated tree structure, and lays out the graph left to right.", "_____no_output_____" ] ], [ [ "def display_annotated_tree(tree, a_nodes, a_edges, log=False):\n def graph_attr(dot):\n dot.attr('node', shape='plain')\n dot.graph_attr['rankdir'] = 'LR'\n\n def annotate_node(dot, nid, symbol, ann):\n if nid in a_nodes:\n dot.node(repr(nid), \"%s (%s)\" % (dot_escape(unicode_escape(symbol)), a_nodes[nid]))\n else:\n dot.node(repr(nid), dot_escape(unicode_escape(symbol)))\n\n def annotate_edge(dot, start_node, stop_node):\n if (start_node, stop_node) in a_edges:\n dot.edge(repr(start_node), repr(stop_node),\n a_edges[(start_node, stop_node)])\n else:\n dot.edge(repr(start_node), repr(stop_node))\n\n return display_tree(tree, log=log,\n node_attr=annotate_node,\n edge_attr=annotate_edge,\n graph_attr=graph_attr)", "_____no_output_____" ], [ "display_annotated_tree(derivation_tree, {3: 'plus'}, {(1, 3): 'op'}, log=False)", "_____no_output_____" ] ], [ [ "#### End of Excursion", "_____no_output_____" ], [ "If we want to see all the leaf nodes in a tree as a string, the following `all_terminals()` function comes in handy:", "_____no_output_____" ] ], [ [ "def all_terminals(tree):\n (symbol, children) = tree\n if children is None:\n # This is a nonterminal symbol not expanded yet\n return symbol\n\n if len(children) == 0:\n # This is a terminal symbol\n return symbol\n\n # This is an expanded symbol:\n # Concatenate all terminal symbols from all children\n return ''.join([all_terminals(c) for c in children])", "_____no_output_____" ], [ "all_terminals(derivation_tree)", "_____no_output_____" ] ], [ [ "The alternative `tree_to_string()` function also converts the tree to a string; however, it replaces nonterminal symbols by empty strings.", "_____no_output_____" ] ], [ [ "def tree_to_string(tree):\n symbol, children, *_ = tree\n if 
children:\n return ''.join(tree_to_string(c) for c in children)\n else:\n return '' if is_nonterminal(symbol) else symbol", "_____no_output_____" ], [ "tree_to_string(derivation_tree)", "_____no_output_____" ] ], [ [ "## Expanding a Node", "_____no_output_____" ], [ "Let us now develop an algorithm that takes a tree with unexpanded symbols (say, `derivation_tree`, above), and expands all these symbols one after the other. As with earlier fuzzers, we create a special subclass of `Fuzzer` – in this case, `GrammarFuzzer`. A `GrammarFuzzer` gets a grammar and a start symbol; the other parameters will be used later to further control creation and to support debugging.", "_____no_output_____" ] ], [ [ "from Fuzzer import Fuzzer", "_____no_output_____" ], [ "class GrammarFuzzer(Fuzzer):\n def __init__(self, grammar, start_symbol=START_SYMBOL,\n min_nonterminals=0, max_nonterminals=10, disp=False, log=False):\n \"\"\"Produce strings from `grammar`, starting with `start_symbol`.\n If `min_nonterminals` or `max_nonterminals` is given, use them as limits \n for the number of nonterminals produced. \n If `disp` is set, display the intermediate derivation trees.\n If `log` is set, show intermediate steps as text on standard output.\"\"\"\n \n self.grammar = grammar\n self.start_symbol = start_symbol\n self.min_nonterminals = min_nonterminals\n self.max_nonterminals = max_nonterminals\n self.disp = disp\n self.log = log\n self.check_grammar() # Invokes is_valid_grammar()", "_____no_output_____" ] ], [ [ "To add further methods to `GrammarFuzzer`, we use the hack already introduced for [the `MutationFuzzer` class](MutationFuzzer.ipynb). The construct\n\n```python\nclass GrammarFuzzer(GrammarFuzzer):\n def new_method(self, args):\n pass\n```\n\nallows us to add a new method `new_method()` to the `GrammarFuzzer` class. (Actually, we get a new `GrammarFuzzer` class that extends the old one, but for all our purposes, this does not matter.)", "_____no_output_____" ], [ "#### Excursion: `check_grammar()` implementation", "_____no_output_____" ], [ "We can use the above hack to define the helper method `check_grammar()`, which checks the given grammar for consistency:", "_____no_output_____" ] ], [ [ "class GrammarFuzzer(GrammarFuzzer):\n def check_grammar(self):\n assert self.start_symbol in self.grammar\n assert is_valid_grammar(\n self.grammar,\n start_symbol=self.start_symbol,\n supported_opts=self.supported_opts())\n\n def supported_opts(self):\n return set()", "_____no_output_____" ] ], [ [ "#### End of Excursion", "_____no_output_____" ], [ "Let us now define a helper method `init_tree()` that constructs a tree with just the start symbol:", "_____no_output_____" ] ], [ [ "class GrammarFuzzer(GrammarFuzzer):\n def init_tree(self):\n return (self.start_symbol, None)", "_____no_output_____" ], [ "f = GrammarFuzzer(EXPR_GRAMMAR)\ndisplay_tree(f.init_tree())", "_____no_output_____" ] ], [ [ "Next, we will need a helper function `expansion_to_children()` that takes an expansion string and decomposes it into a list of derivation trees – one for each symbol (terminal or nonterminal) in the string. 
It uses the `re.split()` method to split an expansion string into a list of children nodes:", "_____no_output_____" ] ], [ [ "def expansion_to_children(expansion):\n # print(\"Converting \" + repr(expansion))\n # strings contains all substrings -- both terminals and nonterminals such\n # that ''.join(strings) == expansion\n\n expansion = exp_string(expansion)\n assert isinstance(expansion, str)\n\n if expansion == \"\": # Special case: epsilon expansion\n return [(\"\", [])]\n\n strings = re.split(RE_NONTERMINAL, expansion)\n return [(s, None) if is_nonterminal(s) else (s, [])\n for s in strings if len(s) > 0]", "_____no_output_____" ], [ "expansion_to_children(\"<term> + <expr>\")", "_____no_output_____" ] ], [ [ "The case of an *epsilon expansion*, i.e. expanding into an empty string as in `<symbol> ::=` needs special treatment:", "_____no_output_____" ] ], [ [ "expansion_to_children(\"\")", "_____no_output_____" ] ], [ [ "Just like `nonterminals()` in the [chapter on Grammars](Grammars.ipynb), we provide for future extensions, allowing the expansion to be a tuple with extra data (which will be ignored).", "_____no_output_____" ] ], [ [ "expansion_to_children((\"+<term>\", [\"extra_data\"]))", "_____no_output_____" ] ], [ [ "We realize this helper as a method in `GrammarFuzzer` such that it can be overloaded by subclasses:", "_____no_output_____" ] ], [ [ "class GrammarFuzzer(GrammarFuzzer):\n def expansion_to_children(self, expansion):\n return expansion_to_children(expansion)", "_____no_output_____" ] ], [ [ "With this, we can now take\n\n1. some unexpanded node in the tree, \n2. choose a random expansion, and\n3. return the new tree.\n\nThis is what the method `expand_node_randomly()` does.", "_____no_output_____" ], [ "#### Excursion: `expand_node_randomly()` implementation", "_____no_output_____" ], [ "The function `expand_node_randomly()` uses a helper function `choose_node_expansion()` to randomly pick an index from an array of possible children. (`choose_node_expansion()` can be overloaded in subclasses.)", "_____no_output_____" ] ], [ [ "import random", "_____no_output_____" ], [ "class GrammarFuzzer(GrammarFuzzer):\n def choose_node_expansion(self, node, possible_children):\n \"\"\"Return index of expansion in `possible_children` to be selected. Defaults to random.\"\"\"\n return random.randrange(0, len(possible_children))\n\n def expand_node_randomly(self, node):\n (symbol, children) = node\n assert children is None\n\n if self.log:\n print(\"Expanding\", all_terminals(node), \"randomly\")\n\n # Fetch the possible expansions from grammar...\n expansions = self.grammar[symbol]\n possible_children = [self.expansion_to_children(\n expansion) for expansion in expansions]\n\n # ... 
and select a random expansion\n index = self.choose_node_expansion(node, possible_children)\n chosen_children = possible_children[index]\n\n # Process children (for subclasses)\n chosen_children = self.process_chosen_children(chosen_children,\n expansions[index])\n\n # Return with new children\n return (symbol, chosen_children)", "_____no_output_____" ] ], [ [ "The generic `expand_node()` method can later be used to select different expansion strategies; as of now, it only uses `expand_node_randomly()`.", "_____no_output_____" ] ], [ [ "class GrammarFuzzer(GrammarFuzzer):\n def expand_node(self, node):\n return self.expand_node_randomly(node)", "_____no_output_____" ] ], [ [ "The helper function `process_chosen_children()` does nothing; it can be overloaded by subclasses to process the children once chosen.", "_____no_output_____" ] ], [ [ "class GrammarFuzzer(GrammarFuzzer):\n def process_chosen_children(self, chosen_children, expansion):\n \"\"\"Process children after selection. By default, does nothing.\"\"\"\n return chosen_children", "_____no_output_____" ] ], [ [ "#### End of Excursion", "_____no_output_____" ], [ "This is how `expand_node_randomly()` works:", "_____no_output_____" ] ], [ [ "f = GrammarFuzzer(EXPR_GRAMMAR, log=True)\n\nprint(\"Before:\")\ntree = (\"<integer>\", None)\ndisplay_tree(tree)", "Before:\n" ], [ "print(\"After:\")\ntree = f.expand_node_randomly(tree)\ndisplay_tree(tree)", "After:\nExpanding <integer> randomly\n" ] ], [ [ "## Expanding a Tree\n\nLet us now apply the above node expansion to some node in the tree. To this end, we first need to search the tree for unexpanded nodes. `possible_expansions()` counts how many unexpanded symbols there are in a tree:", "_____no_output_____" ] ], [ [ "class GrammarFuzzer(GrammarFuzzer):\n def possible_expansions(self, node):\n (symbol, children) = node\n if children is None:\n return 1\n\n return sum(self.possible_expansions(c) for c in children)", "_____no_output_____" ], [ "f = GrammarFuzzer(EXPR_GRAMMAR)\nprint(f.possible_expansions(derivation_tree))", "2\n" ] ], [ [ "The method `any_possible_expansions()` returns True if the tree has any unexpanded nodes.", "_____no_output_____" ] ], [ [ "class GrammarFuzzer(GrammarFuzzer):\n def any_possible_expansions(self, node):\n (symbol, children) = node\n if children is None:\n return True\n\n return any(self.any_possible_expansions(c) for c in children)", "_____no_output_____" ], [ "f = GrammarFuzzer(EXPR_GRAMMAR)\nf.any_possible_expansions(derivation_tree)", "_____no_output_____" ] ], [ [ "Here comes `expand_tree_once()`, the core method of our tree expansion algorithm. It first checks whether it is currently being applied on a nonterminal symbol without expansion; if so, it invokes `expand_node()` on it, as discussed above. ", "_____no_output_____" ], [ "If the node is already expanded (i.e. has children), it checks the subset of children which still have unexpanded symbols, randomly selects one of them, and applies itself recursively on that child.", "_____no_output_____" ], [ "#### Excursion: `expand_tree_once()` implementation", "_____no_output_____" ], [ "The `expand_tree_once()` method replaces the child _in place_, meaning that it actually mutates the tree being passed as an argument rather than returning a new tree. This in-place mutation is what makes this function particularly efficient. 
Again, we use a helper method (`choose_tree_expansion()`) to return the chosen index from a list of children that can be expanded.", "_____no_output_____" ] ], [ [ "class GrammarFuzzer(GrammarFuzzer):\n def choose_tree_expansion(self, tree, children):\n \"\"\"Return index of subtree in `children` to be selected for expansion. Defaults to random.\"\"\"\n return random.randrange(0, len(children))\n\n def expand_tree_once(self, tree):\n \"\"\"Choose an unexpanded symbol in tree; expand it. Can be overloaded in subclasses.\"\"\"\n (symbol, children) = tree\n if children is None:\n # Expand this node\n return self.expand_node(tree)\n\n # Find all children with possible expansions\n expandable_children = [\n c for c in children if self.any_possible_expansions(c)]\n\n # `index_map` translates an index in `expandable_children`\n # back into the original index in `children`\n index_map = [i for (i, c) in enumerate(children)\n if c in expandable_children]\n\n # Select a random child\n child_to_be_expanded = \\\n self.choose_tree_expansion(tree, expandable_children)\n\n # Expand in place\n children[index_map[child_to_be_expanded]] = \\\n self.expand_tree_once(expandable_children[child_to_be_expanded])\n\n return tree", "_____no_output_____" ] ], [ [ "#### End of Excursion", "_____no_output_____" ], [ "Let us illustrate how `expand_tree_once()` works. We start with our derivation tree from above...", "_____no_output_____" ] ], [ [ "derivation_tree = (\"<start>\",\n [(\"<expr>\",\n [(\"<expr>\", None),\n (\" + \", []),\n (\"<term>\", None)]\n )])\ndisplay_tree(derivation_tree)", "_____no_output_____" ] ], [ [ "... and now expand it twice:", "_____no_output_____" ] ], [ [ "f = GrammarFuzzer(EXPR_GRAMMAR, log=True)\nderivation_tree = f.expand_tree_once(derivation_tree)\ndisplay_tree(derivation_tree)", "Expanding <term> randomly\n" ], [ "derivation_tree = f.expand_tree_once(derivation_tree)\ndisplay_tree(derivation_tree)", "Expanding <factor> randomly\n" ] ], [ [ "We see that with each step, one more symbol is expanded. Now all it takes is to apply this again and again, expanding the tree further and further.", "_____no_output_____" ], [ "## Closing the Expansion\n\nWith `expand_tree_once()`, we can keep on expanding the tree – but how do we actually stop? The key idea here, introduced by Luke in \\cite{Luke2000}, is that after inflating the derivation tree to some maximum size, we _only want to apply expansions that increase the size of the tree by a minimum_. For `<factor>`, for instance, we would prefer an expansion into `<integer>`, as this will not introduce further recursion (and potential size inflation); for `<integer>`, likewise, an expansion into `<digit>` is preferred, as it will less increase tree size than `<digit><integer>`.", "_____no_output_____" ], [ "To identify the _cost_ of expanding a symbol, we introduce two functions that mutually rely on each other:\n\n* `symbol_cost()` returns the minimum cost of all expansions of a symbol, using `expansion_cost()` to compute the cost for each expansion.\n* `expansion_cost()` returns the sum of all expansions in `expansions`. 
If a nonterminal is encountered again during traversal, the cost of the expansion is $\\infty$, indicating (potentially infinite) recursion.", "_____no_output_____" ] ], [ [ "class GrammarFuzzer(GrammarFuzzer):\n def symbol_cost(self, symbol, seen=set()):\n expansions = self.grammar[symbol]\n return min(self.expansion_cost(e, seen | {symbol}) for e in expansions)\n\n def expansion_cost(self, expansion, seen=set()):\n symbols = nonterminals(expansion)\n if len(symbols) == 0:\n return 1 # no symbol\n\n if any(s in seen for s in symbols):\n return float('inf')\n\n # the value of a expansion is the sum of all expandable variables\n # inside + 1\n return sum(self.symbol_cost(s, seen) for s in symbols) + 1", "_____no_output_____" ] ], [ [ "Here's two examples: The minimum cost of expanding a digit is 1, since we have to choose from one of its expansions.", "_____no_output_____" ] ], [ [ "f = GrammarFuzzer(EXPR_GRAMMAR)\nassert f.symbol_cost(\"<digit>\") == 1", "_____no_output_____" ] ], [ [ "The minimum cost of expanding `<expr>`, though, is five, as this is the minimum number of expansions required. (`<expr>` $\\rightarrow$ `<term>` $\\rightarrow$ `<factor>` $\\rightarrow$ `<integer>` $\\rightarrow$ `<digit>` $\\rightarrow$ 1)", "_____no_output_____" ] ], [ [ "assert f.symbol_cost(\"<expr>\") == 5", "_____no_output_____" ] ], [ [ "We define `expand_node_by_cost(self, node, choose)`, a variant of `expand_node()` that takes the above cost into account. It determines the minimum cost `cost` across all children and then chooses a child from the list using the `choose` function, which by default is the minimum cost. If multiple children all have the same minimum cost, it chooses randomly between these.", "_____no_output_____" ], [ "#### Excursion: `expand_node_by_cost()` implementation", "_____no_output_____" ] ], [ [ "class GrammarFuzzer(GrammarFuzzer):\n def expand_node_by_cost(self, node, choose=min):\n (symbol, children) = node\n assert children is None\n\n # Fetch the possible expansions from grammar...\n expansions = self.grammar[symbol]\n\n possible_children_with_cost = [(self.expansion_to_children(expansion),\n self.expansion_cost(\n expansion, {symbol}),\n expansion)\n for expansion in expansions]\n\n costs = [cost for (child, cost, expansion)\n in possible_children_with_cost]\n chosen_cost = choose(costs)\n children_with_chosen_cost = [child for (child, child_cost, _) in possible_children_with_cost\n if child_cost == chosen_cost]\n expansion_with_chosen_cost = [expansion for (_, child_cost, expansion) in possible_children_with_cost\n if child_cost == chosen_cost]\n\n index = self.choose_node_expansion(node, children_with_chosen_cost)\n\n chosen_children = children_with_chosen_cost[index]\n chosen_expansion = expansion_with_chosen_cost[index]\n chosen_children = self.process_chosen_children(\n chosen_children, chosen_expansion)\n\n # Return with a new list\n return (symbol, chosen_children)", "_____no_output_____" ] ], [ [ "#### End of Excursion", "_____no_output_____" ], [ "The shortcut `expand_node_min_cost()` passes `min()` as the `choose` function, which makes it expand nodes at minimum cost.", "_____no_output_____" ] ], [ [ "class GrammarFuzzer(GrammarFuzzer):\n def expand_node_min_cost(self, node):\n if self.log:\n print(\"Expanding\", all_terminals(node), \"at minimum cost\")\n\n return self.expand_node_by_cost(node, min)", "_____no_output_____" ] ], [ [ "We can now apply this function to close the expansion of our derivation tree, using `expand_tree_once()` with the above 
`expand_node_min_cost()` as expansion function.", "_____no_output_____" ] ], [ [ "class GrammarFuzzer(GrammarFuzzer):\n def expand_node(self, node):\n return self.expand_node_min_cost(node)", "_____no_output_____" ], [ "f = GrammarFuzzer(EXPR_GRAMMAR, log=True)\ndisplay_tree(derivation_tree)", "_____no_output_____" ], [ "if f.any_possible_expansions(derivation_tree):\n derivation_tree = f.expand_tree_once(derivation_tree)\n display_tree(derivation_tree)", "Expanding <integer> at minimum cost\n" ], [ "if f.any_possible_expansions(derivation_tree):\n derivation_tree = f.expand_tree_once(derivation_tree)\n display_tree(derivation_tree)", "Expanding <expr> at minimum cost\n" ], [ "if f.any_possible_expansions(derivation_tree):\n derivation_tree = f.expand_tree_once(derivation_tree)\n display_tree(derivation_tree)", "Expanding <digit> at minimum cost\n" ] ], [ [ "We keep on expanding until all nonterminals are expanded.", "_____no_output_____" ] ], [ [ "while f.any_possible_expansions(derivation_tree):\n derivation_tree = f.expand_tree_once(derivation_tree) ", "Expanding <term> at minimum cost\nExpanding <factor> at minimum cost\nExpanding <integer> at minimum cost\nExpanding <digit> at minimum cost\n" ] ], [ [ "Here is the final tree:", "_____no_output_____" ] ], [ [ "display_tree(derivation_tree)", "_____no_output_____" ] ], [ [ "We see that in each step, `expand_node_min_cost()` chooses an expansion that does not increase the number of symbols, eventually closing all open expansions.", "_____no_output_____" ], [ "## Node Inflation\n\nEspecially at the beginning of an expansion, we may be interested in getting _as many nodes as possible_ – that is, we'd like to prefer expansions that give us _more_ nonterminals to expand. This is actually the exact opposite of what `expand_node_min_cost()` gives us, and we can implement a method `expand_node_max_cost()` that will always choose among the nodes with the _highest_ cost:", "_____no_output_____" ] ], [ [ "class GrammarFuzzer(GrammarFuzzer):\n def expand_node_max_cost(self, node):\n if self.log:\n print(\"Expanding\", all_terminals(node), \"at maximum cost\")\n\n return self.expand_node_by_cost(node, max)", "_____no_output_____" ] ], [ [ "To illustrate `expand_node_max_cost()`, we can again redefine `expand_node()` to use it, and then use `expand_tree_once()` to show a few expansion steps:", "_____no_output_____" ] ], [ [ "class GrammarFuzzer(GrammarFuzzer):\n def expand_node(self, node):\n return self.expand_node_max_cost(node)", "_____no_output_____" ], [ "derivation_tree = (\"<start>\",\n [(\"<expr>\",\n [(\"<expr>\", None),\n (\" + \", []),\n (\"<term>\", None)]\n )])", "_____no_output_____" ], [ "f = GrammarFuzzer(EXPR_GRAMMAR, log=True)\ndisplay_tree(derivation_tree)", "_____no_output_____" ], [ "if f.any_possible_expansions(derivation_tree):\n derivation_tree = f.expand_tree_once(derivation_tree)\n display_tree(derivation_tree)", "Expanding <expr> at maximum cost\n" ], [ "if f.any_possible_expansions(derivation_tree):\n derivation_tree = f.expand_tree_once(derivation_tree)\n display_tree(derivation_tree)", "Expanding <expr> at maximum cost\n" ], [ "if f.any_possible_expansions(derivation_tree):\n derivation_tree = f.expand_tree_once(derivation_tree)\n display_tree(derivation_tree)", "Expanding <term> at maximum cost\n" ] ], [ [ "We see that with each step, the number of nonterminals increases. 
Obviously, we have to put a limit on this number.", "_____no_output_____" ], [ "## Three Expansion Phases\n\nWe can now put all three phases together in a single function `expand_tree()` which will work as follows:\n\n1. **Max cost expansion.** Expand the tree using expansions with maximum cost until we have at least `min_nonterminals` nonterminals. This phase can be easily skipped by setting `min_nonterminals` to zero.\n2. **Random expansion.** Keep on expanding the tree randomly until we reach `max_nonterminals` nonterminals.\n3. **Min cost expansion.** Close the expansion with minimum cost.\n\nWe implement these three phases by having `expand_node` reference the expansion method to apply. This is controlled by setting `expand_node` (the method reference) to first `expand_node_max_cost` (i.e., calling `expand_node()` invokes `expand_node_max_cost()`), then `expand_node_randomly`, and finally `expand_node_min_cost`. In the first two phases, we also set a maximum limit of `min_nonterminals` and `max_nonterminals`, respectively.", "_____no_output_____" ], [ "#### Excursion: Implementation of three-phase `expand_tree()`", "_____no_output_____" ] ], [ [ "class GrammarFuzzer(GrammarFuzzer):\n def log_tree(self, tree):\n \"\"\"Output a tree if self.log is set; if self.display is also set, show the tree structure\"\"\"\n if self.log:\n print(\"Tree:\", all_terminals(tree))\n if self.disp:\n display(display_tree(tree))\n # print(self.possible_expansions(tree), \"possible expansion(s) left\")\n\n def expand_tree_with_strategy(self, tree, expand_node_method, limit=None):\n \"\"\"Expand tree using `expand_node_method` as node expansion function\n until the number of possible expansions reaches `limit`.\"\"\"\n self.expand_node = expand_node_method\n while ((limit is None\n or self.possible_expansions(tree) < limit)\n and self.any_possible_expansions(tree)):\n tree = self.expand_tree_once(tree)\n self.log_tree(tree)\n return tree\n\n def expand_tree(self, tree):\n \"\"\"Expand `tree` in a three-phase strategy until all expansions are complete.\"\"\"\n self.log_tree(tree)\n tree = self.expand_tree_with_strategy(\n tree, self.expand_node_max_cost, self.min_nonterminals)\n tree = self.expand_tree_with_strategy(\n tree, self.expand_node_randomly, self.max_nonterminals)\n tree = self.expand_tree_with_strategy(\n tree, self.expand_node_min_cost)\n\n assert self.possible_expansions(tree) == 0\n\n return tree", "_____no_output_____" ] ], [ [ "#### End of Excursion", "_____no_output_____" ], [ "Let us try this out on our example. We start with a half-expanded derivation tree:", "_____no_output_____" ] ], [ [ "initial_derivation_tree = (\"<start>\",\n [(\"<expr>\",\n [(\"<expr>\", None),\n (\" + \", []),\n (\"<term>\", None)]\n )])", "_____no_output_____" ], [ "display_tree(initial_derivation_tree)", "_____no_output_____" ] ], [ [ "We now apply our expansion strategy on this tree. 
We see that initially, nodes are expanded at maximum cost, then randomly, and then closing the expansion at minimum cost.", "_____no_output_____" ] ], [ [ "f = GrammarFuzzer(\n EXPR_GRAMMAR,\n min_nonterminals=3,\n max_nonterminals=5,\n log=True)\nderivation_tree = f.expand_tree(initial_derivation_tree)", "Tree: <expr> + <term>\nExpanding <expr> at maximum cost\nTree: <term> + <expr> + <term>\nExpanding <term> randomly\nTree: <factor> / <term> + <expr> + <term>\nExpanding <term> randomly\nTree: <factor> / <term> + <expr> + <factor> * <term>\nExpanding <expr> at minimum cost\nTree: <factor> / <term> + <term> + <factor> * <term>\nExpanding <term> at minimum cost\nTree: <factor> / <term> + <factor> + <factor> * <term>\nExpanding <factor> at minimum cost\nTree: <factor> / <term> + <integer> + <factor> * <term>\nExpanding <factor> at minimum cost\nTree: <factor> / <term> + <integer> + <integer> * <term>\nExpanding <term> at minimum cost\nTree: <factor> / <term> + <integer> + <integer> * <factor>\nExpanding <integer> at minimum cost\nTree: <factor> / <term> + <integer> + <digit> * <factor>\nExpanding <factor> at minimum cost\nTree: <factor> / <term> + <integer> + <digit> * <integer>\nExpanding <term> at minimum cost\nTree: <factor> / <factor> + <integer> + <digit> * <integer>\nExpanding <factor> at minimum cost\nTree: <factor> / <integer> + <integer> + <digit> * <integer>\nExpanding <integer> at minimum cost\nTree: <factor> / <integer> + <digit> + <digit> * <integer>\nExpanding <digit> at minimum cost\nTree: <factor> / <integer> + 4 + <digit> * <integer>\nExpanding <digit> at minimum cost\nTree: <factor> / <integer> + 4 + 0 * <integer>\nExpanding <factor> at minimum cost\nTree: <integer> / <integer> + 4 + 0 * <integer>\nExpanding <integer> at minimum cost\nTree: <digit> / <integer> + 4 + 0 * <integer>\nExpanding <integer> at minimum cost\nTree: <digit> / <integer> + 4 + 0 * <digit>\nExpanding <integer> at minimum cost\nTree: <digit> / <digit> + 4 + 0 * <digit>\nExpanding <digit> at minimum cost\nTree: 7 / <digit> + 4 + 0 * <digit>\nExpanding <digit> at minimum cost\nTree: 7 / 0 + 4 + 0 * <digit>\nExpanding <digit> at minimum cost\nTree: 7 / 0 + 4 + 0 * 4\n" ] ], [ [ "This is the final derivation tree:", "_____no_output_____" ] ], [ [ "display_tree(derivation_tree)", "_____no_output_____" ] ], [ [ "And this is the resulting string:", "_____no_output_____" ] ], [ [ "all_terminals(derivation_tree)", "_____no_output_____" ] ], [ [ "## Putting it all Together", "_____no_output_____" ], [ "Based on this, we can now define a function `fuzz()` that – like `simple_grammar_fuzzer()` – simply takes a grammar and produces a string from it. 
It thus no longer exposes the complexity of derivation trees.", "_____no_output_____" ] ], [ [ "class GrammarFuzzer(GrammarFuzzer):\n def fuzz_tree(self):\n # Create an initial derivation tree\n tree = self.init_tree()\n # print(tree)\n\n # Expand all nonterminals\n tree = self.expand_tree(tree)\n if self.log:\n print(repr(all_terminals(tree)))\n if self.disp:\n display(display_tree(tree))\n return tree\n\n def fuzz(self):\n self.derivation_tree = self.fuzz_tree()\n return all_terminals(self.derivation_tree)", "_____no_output_____" ] ], [ [ "We can now apply this on all our defined grammars (and visualize the derivation tree along)", "_____no_output_____" ] ], [ [ "f = GrammarFuzzer(EXPR_GRAMMAR)\nf.fuzz()", "_____no_output_____" ] ], [ [ "After calling `fuzz()`, the produced derivation tree is accessible in the `derivation_tree` attribute:", "_____no_output_____" ] ], [ [ "display_tree(f.derivation_tree)", "_____no_output_____" ] ], [ [ "Let us try out the grammar fuzzer (and its trees) on other grammar formats.", "_____no_output_____" ] ], [ [ "f = GrammarFuzzer(URL_GRAMMAR)\nf.fuzz()", "_____no_output_____" ], [ "display_tree(f.derivation_tree)", "_____no_output_____" ], [ "f = GrammarFuzzer(CGI_GRAMMAR, min_nonterminals=3, max_nonterminals=5)\nf.fuzz()", "_____no_output_____" ], [ "display_tree(f.derivation_tree)", "_____no_output_____" ] ], [ [ "How do we stack up against `simple_grammar_fuzzer()`?", "_____no_output_____" ] ], [ [ "trials = 50\nxs = []\nys = []\nf = GrammarFuzzer(EXPR_GRAMMAR, max_nonterminals=20)\nfor i in range(trials):\n with Timer() as t:\n s = f.fuzz()\n xs.append(len(s))\n ys.append(t.elapsed_time())\n print(i, end=\" \")\nprint()", "0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 \n" ], [ "average_time = sum(ys) / trials\nprint(\"Average time:\", average_time)", "Average time: 0.03347674019999978\n" ], [ "%matplotlib inline\n\nimport matplotlib.pyplot as plt\nplt.scatter(xs, ys)\nplt.title('Time required for generating an output');", "_____no_output_____" ] ], [ [ "Our test generation is much faster, but also our inputs are much smaller. We see that with derivation trees, we can get much better control over grammar production.", "_____no_output_____" ], [ "Finally, how does `GrammarFuzzer` work with `expr_grammar`, where `simple_grammar_fuzzer()` failed? It works without any issue:", "_____no_output_____" ] ], [ [ "f = GrammarFuzzer(expr_grammar, max_nonterminals=10)\nf.fuzz()", "_____no_output_____" ] ], [ [ "With `GrammarFuzzer`, we now have a solid foundation on which to build further fuzzers and illustrate more exciting concepts from the world of generating software tests. Many of these do not even require writing a grammar – instead, they _infer_ a grammar from the domain at hand, and thus allow to use grammar-based fuzzing even without writing a grammar. Stay tuned!", "_____no_output_____" ], [ "## Synopsis\n\nThis chapter introduces `GrammarFuzzer`, an efficient grammar fuzzer that takes a grammar to produce syntactically valid input strings. Here's a typical usage:", "_____no_output_____" ] ], [ [ "from Grammars import US_PHONE_GRAMMAR", "_____no_output_____" ], [ "phone_fuzzer = GrammarFuzzer(US_PHONE_GRAMMAR)\nphone_fuzzer.fuzz()", "_____no_output_____" ] ], [ [ "The `GrammarFuzzer` constructor takes a number of keyword arguments to control its behavior. 
`start_symbol`, for instance, allows to set the symbol that expansion starts with (instead of `<start>`):", "_____no_output_____" ] ], [ [ "area_fuzzer = GrammarFuzzer(US_PHONE_GRAMMAR, start_symbol='<area>')\narea_fuzzer.fuzz()", "_____no_output_____" ], [ "import inspect", "_____no_output_____" ], [ "print(inspect.getdoc(GrammarFuzzer.__init__))", "Produce strings from `grammar`, starting with `start_symbol`.\nIf `min_nonterminals` or `max_nonterminals` is given, use them as limits \nfor the number of nonterminals produced. \nIf `disp` is set, display the intermediate derivation trees.\nIf `log` is set, show intermediate steps as text on standard output.\n" ] ], [ [ "Internally, `GrammarFuzzer` makes use of [derivation trees](#Derivation-Trees), which it expands step by step. After producing a string, the tree produced can be accessed in the `derivation_tree` attribute.", "_____no_output_____" ] ], [ [ "display_tree(phone_fuzzer.derivation_tree)", "_____no_output_____" ] ], [ [ "In the internal representation of a derivation tree, a _node_ is a pair (`symbol`, `children`). For nonterminals, `symbol` is the symbol that is being expanded, and `children` is a list of further nodes. For terminals, `symbol` is the terminal string, and `children` is empty.", "_____no_output_____" ] ], [ [ "phone_fuzzer.derivation_tree", "_____no_output_____" ] ], [ [ "The chapter contains various helpers to work with derivation trees, including visualization tools.", "_____no_output_____" ], [ "## Lessons Learned\n\n* _Derivation trees_ are important for expressing input structure\n* _Grammar fuzzing based on derivation trees_ \n 1. is much more efficient than string-based grammar fuzzing,\n 2. gives much better control over input generation, and\n 3. effectively avoids running into infinite expansions.", "_____no_output_____" ], [ "## Next Steps\n\nCongratulations! You have reached one of the central \"hubs\" of the book. From here, there is a wide range of techniques that build on grammar fuzzing.", "_____no_output_____" ], [ "### Extending Grammars\n\nFirst, we have a number of techniques that all _extend_ grammars in some form:\n\n* [Parsing and recombining inputs](Parser.ipynb) allows to make use of existing inputs, again using derivation trees\n* [Covering grammar expansions](GrammarCoverageFuzzer.ipynb) allows for _combinatorial_ coverage\n* [Assigning _probabilities_ to individual expansions](ProbabilisticGrammarFuzzer.ipynb) gives additional control over expansions\n* [Assigning _constraints_ to individual expansions](GeneratorGrammarFuzzer.ipynb) allows to express _semantic constraints_ on individual rules.", "_____no_output_____" ], [ "### Applying Grammars\n\nSecond, we can _apply_ grammars in a variety of contexts that all involve some form of learning it automatically:\n\n* [Fuzzing APIs](APIFuzzer.ipynb), learning a grammar from APIs\n* [Fuzzing graphical user interfaces](WebFuzzer.ipynb), learning a grammar from user interfaces for subsequent fuzzing\n* [Mining grammars](GrammarMiner.ipynb), learning a grammar for arbitrary input formats\n\nKeep on expanding!", "_____no_output_____" ], [ "## Background\n\nDerivation trees (then frequently called _parse trees_) are a standard data structure into which *parsers* decompose inputs. The *Dragon Book* (also known as *Compilers: Principles, Techniques, and Tools*) \\cite{Aho2006} discusses parsing into derivation trees as part of compiling programs. 
We also use derivation trees [when parsing and recombining inputs](Parser.ipynb).\n\nThe key idea in this chapter, namely expanding until a limit of symbols is reached, and then always choosing the shortest path, stems from Luke \\cite{Luke2000}.", "_____no_output_____" ], [ "## Exercises", "_____no_output_____" ], [ "### Exercise 1: Caching Method Results\n\nTracking `GrammarFuzzer` reveals that some methods are called again and again, always with the same values. \n\nSet up a class `FasterGrammarFuzzer` with a _cache_ that checks whether the method has been called before, and if so, return the previously computed \"memoized\" value. Do this for `expansion_to_children()`. Compare the number of invocations before and after the optimization.", "_____no_output_____" ], [ "**Important**: For `expansion_to_children()`, make sure that each list returned is an individual copy. If you return the same (cached) list, this will interfere with the in-place modification of `GrammarFuzzer`. Use the Python `copy.deepcopy()` function for this purpose.", "_____no_output_____" ], [ "**Solution.** Let us demonstrate this for `expansion_to_children()`:", "_____no_output_____" ] ], [ [ "import copy", "_____no_output_____" ], [ "class FasterGrammarFuzzer(GrammarFuzzer):\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._expansion_cache = {}\n self._expansion_invocations = 0\n self._expansion_invocations_cached = 0\n\n def expansion_to_children(self, expansion):\n self._expansion_invocations += 1\n if expansion in self._expansion_cache:\n self._expansion_invocations_cached += 1\n cached_result = copy.deepcopy(self._expansion_cache[expansion])\n return cached_result\n\n result = super().expansion_to_children(expansion)\n self._expansion_cache[expansion] = result\n return result", "_____no_output_____" ], [ "f = FasterGrammarFuzzer(EXPR_GRAMMAR, min_nonterminals=3, max_nonterminals=5)\nf.fuzz()", "_____no_output_____" ], [ "f._expansion_invocations", "_____no_output_____" ], [ "f._expansion_invocations_cached", "_____no_output_____" ], [ "print(\"%.2f%% of invocations can be cached\" %\n (f._expansion_invocations_cached * 100 / f._expansion_invocations))", "74.74% of invocations can be cached\n" ] ], [ [ "### Exercise 2: Grammar Pre-Compilation\n\nSome methods such as `symbol_cost()` or `expansion_cost()` return a value that is dependent on the grammar only. 
Set up a class `EvenFasterGrammarFuzzer()` that pre-computes these values once upon initialization, such that later invocations of `symbol_cost()` or `expansion_cost()` need only look up these values.", "_____no_output_____" ], [ "**Solution.** Here's a possible solution, using a hack to substitute the `symbol_cost()` and `expansion_cost()` functions once the pre-computed values are set up.", "_____no_output_____" ] ], [ [ "class EvenFasterGrammarFuzzer(GrammarFuzzer):\n def __init__(self, *args, **kwargs):\n super().__init__(*args, **kwargs)\n self._symbol_costs = {}\n self._expansion_costs = {}\n self.precompute_costs()\n\n def new_symbol_cost(self, symbol, seen=set()):\n return self._symbol_costs[symbol]\n\n def new_expansion_cost(self, expansion, seen=set()):\n return self._expansion_costs[expansion]\n\n def precompute_costs(self):\n for symbol in self.grammar:\n self._symbol_costs[symbol] = super().symbol_cost(symbol)\n for expansion in self.grammar[symbol]:\n self._expansion_costs[expansion] = super(\n ).expansion_cost(expansion)\n\n # Make sure we now call the caching methods\n self.symbol_cost = self.new_symbol_cost\n self.expansion_cost = self.new_expansion_cost", "_____no_output_____" ], [ "f = EvenFasterGrammarFuzzer(EXPR_GRAMMAR)", "_____no_output_____" ] ], [ [ "Here are the individual costs:", "_____no_output_____" ] ], [ [ "f._symbol_costs", "_____no_output_____" ], [ "f._expansion_costs", "_____no_output_____" ], [ "f = EvenFasterGrammarFuzzer(EXPR_GRAMMAR)\nf.fuzz()", "_____no_output_____" ] ], [ [ "### Exercise 3: Maintaining Trees to be Expanded\n\nIn `expand_tree_once()`, the algorithm traverses the tree again and again to find nonterminals that still can be extended. Speed up the process by keeping a list of nonterminal symbols in the tree that still can be expanded.", "_____no_output_____" ], [ "**Solution.** Left as exercise for the reader.", "_____no_output_____" ], [ "### Exercise 4: Alternate Random Expansions", "_____no_output_____" ], [ "We could define `expand_node_randomly()` such that it simply invokes `expand_node_by_cost(node, random.choice)`:", "_____no_output_____" ] ], [ [ "class ExerciseGrammarFuzzer(GrammarFuzzer):\n def expand_node_randomly(self, node):\n if self.log:\n print(\"Expanding\", all_terminals(node), \"randomly by cost\")\n\n return self.expand_node_by_cost(node, random.choice)", "_____no_output_____" ] ], [ [ "What is the difference between the original implementation and this alternative?", "_____no_output_____" ], [ "**Solution.** The alternative in `ExerciseGrammarFuzzer` has another probability distribution. In the original `GrammarFuzzer`, all expansions have the same likelihood of being expanded. In `ExerciseGrammarFuzzer`, first, a cost is chosen (randomly); then, one of the expansions with this cost is chosen (again randomly). This means that expansions whose cost is unique have a higher chance of being selected.", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ] ]
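The grammar-fuzzing notebook in the record above closes open derivations by always choosing the cheapest expansion. Below is a minimal standalone sketch of that min-cost idea on a toy grammar; TOY_GRAMMAR and every function name here are illustrative assumptions, not the chapter's GrammarFuzzer API.
import re

TOY_GRAMMAR = {
    "<start>": ["<digit><start>", "<digit>"],
    "<digit>": [str(d) for d in range(10)],
}

def nonterminals(expansion):
    # All <...> symbols occurring in an expansion string
    return re.findall(r"<[^<> ]+>", expansion)

def symbol_cost(grammar, symbol, seen=frozenset()):
    # Cost of a symbol is the cost of its cheapest expansion
    return min(expansion_cost(grammar, e, seen | {symbol}) for e in grammar[symbol])

def expansion_cost(grammar, expansion, seen=frozenset()):
    symbols = nonterminals(expansion)
    if not symbols:
        return 1  # a terminal-only expansion costs one step
    if any(s in seen for s in symbols):
        return float("inf")  # this choice would recurse forever
    return sum(symbol_cost(grammar, s, seen) for s in symbols) + 1

def close_min_cost(grammar, sentential_form):
    # Repeatedly replace the first nonterminal with its cheapest expansion
    while True:
        symbols = nonterminals(sentential_form)
        if not symbols:
            return sentential_form
        symbol = symbols[0]
        cheapest = min(grammar[symbol], key=lambda e: expansion_cost(grammar, e))
        sentential_form = sentential_form.replace(symbol, cheapest, 1)

print(close_min_cost(TOY_GRAMMAR, "<start>"))  # prints "0", a cheapest complete string
Picking the most expensive expansion instead (phase one of the notebook's three-phase strategy) is the same loop with max in place of min in the cheapest-expansion choice.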
4a3cfb3257fd09c3789fbffb3efa05a9fc41f8db
813,813
ipynb
Jupyter Notebook
example/knowledge-graph-triplet/load-knowledge-graph-triplet.ipynb
zulkiflizaki/malaya
2358081bfa43aad57d9415a99f64c68f615d0cc4
[ "MIT" ]
1
2021-07-28T07:15:21.000Z
2021-07-28T07:15:21.000Z
example/knowledge-graph-triplet/load-knowledge-graph-triplet.ipynb
zulkiflizaki/malaya
2358081bfa43aad57d9415a99f64c68f615d0cc4
[ "MIT" ]
null
null
null
example/knowledge-graph-triplet/load-knowledge-graph-triplet.ipynb
zulkiflizaki/malaya
2358081bfa43aad57d9415a99f64c68f615d0cc4
[ "MIT" ]
null
null
null
1,372.365936
652,344
0.957418
[ [ [ "# Knowledge Graph Triplet\n\nGenerate MS text -> EN Knowledge Graph Triplet.", "_____no_output_____" ], [ "<div class=\"alert alert-info\">\n\nThis tutorial is available as an IPython notebook at [Malaya/example/knowledge-graph-triplet](https://github.com/huseinzol05/Malaya/tree/master/example/knowledge-graph-triplet).\n \n</div>", "_____no_output_____" ], [ "<div class=\"alert alert-warning\">\n\nThis module is only trained on standard language structure, so it is not safe to use it for local language structure.\n \n</div>", "_____no_output_____" ] ], [ [ "%%time\n\nimport malaya", "CPU times: user 4.81 s, sys: 840 ms, total: 5.65 s\nWall time: 5.7 s\n" ] ], [ [ "### List available Transformer model", "_____no_output_____" ] ], [ [ "malaya.knowledge_graph.available_transformer()", "INFO:root:tested on 200k test set.\n" ] ], [ [ "### Load Transformer model\n\n```python\ndef transformer(model: str = 'base', quantized: bool = False, **kwargs):\n    \"\"\"\n    Load transformer to generate knowledge graphs in triplet format from texts,\n    MS text -> EN triplet format.\n\n    Parameters\n    ----------\n    model : str, optional (default='base')\n        Model architecture supported. Allowed values:\n\n        * ``'base'`` - Transformer BASE parameters.\n        * ``'large'`` - Transformer LARGE parameters.\n\n    quantized : bool, optional (default=False)\n        if True, will load 8-bit quantized model.\n        Quantized model not necessary faster, totally depends on the machine.\n\n    Returns\n    -------\n    result: malaya.model.tf.KnowledgeGraph class\n    \"\"\"\n```", "_____no_output_____" ] ], [ [ "model = malaya.knowledge_graph.transformer()", "INFO:root:running knowledge-graph-generator/base using device /device:CPU:0\n" ] ], [ [ "### Load Quantized model\n\nTo load the 8-bit quantized model, simply pass `quantized = True`; the default is `False`.\n\nWe can expect a slight accuracy drop from the quantized model, and it is not necessarily faster than the normal 32-bit float model; it totally depends on the machine.", "_____no_output_____" ] ], [ [ "quantized_model = malaya.knowledge_graph.transformer(quantized = True)", "WARNING:root:Load quantized model will cause accuracy drop.\nINFO:root:running knowledge-graph-generator/base-quantized using device /device:CPU:0\n" ], [ "string1 = \"Yang Berhormat Dato Sri Haji Mohammad Najib bin Tun Haji Abdul Razak ialah ahli politik Malaysia dan merupakan bekas Perdana Menteri Malaysia ke-6 yang mana beliau menjawat jawatan dari 3 April 2009 hingga 9 Mei 2018. 
Beliau juga pernah berkhidmat sebagai bekas Menteri Kewangan dan merupakan Ahli Parlimen Pekan Pahang\"\nstring2 = \"Pahang ialah negeri yang ketiga terbesar di Malaysia Terletak di lembangan Sungai Pahang yang amat luas negeri Pahang bersempadan dengan Kelantan di utara Perak Selangor serta Negeri Sembilan di barat Johor di selatan dan Terengganu dan Laut China Selatan di timur.\"", "_____no_output_____" ] ], [ [ "These models heavily trained on neutral texts, if you give political or news texts, the results returned not really good.", "_____no_output_____" ], [ "#### Predict using greedy decoder\n\n```python\ndef greedy_decoder(self, strings: List[str], get_networkx: bool = True):\n \"\"\"\n Generate triples knowledge graph using greedy decoder.\n Example, \"Joseph Enanga juga bermain untuk Union Douala.\" -> \"Joseph Enanga member of sports team Union Douala\"\n\n Parameters\n ----------\n strings : List[str]\n get_networkx: bool, optional (default=True)\n If True, will generate networkx.MultiDiGraph.\n\n Returns\n -------\n result: List[Dict]\n \"\"\"\n```", "_____no_output_____" ] ], [ [ "r = model.greedy_decoder([string1, string2])", "WARNING:root:1\n" ], [ "r[0]", "_____no_output_____" ], [ "import matplotlib.pyplot as plt\nimport networkx as nx\n\ng = r[0]['G']\nplt.figure(figsize=(6, 6))\npos = nx.spring_layout(g)\nnx.draw(g, with_labels=True, node_color='skyblue', edge_cmap=plt.cm.Blues, pos = pos)\nnx.draw_networkx_edge_labels(g, pos=pos)\nplt.show()", "_____no_output_____" ], [ "g = r[1]['G']\nplt.figure(figsize=(6, 6))\npos = nx.spring_layout(g)\nnx.draw(g, with_labels=True, node_color='skyblue', edge_cmap=plt.cm.Blues, pos = pos)\nnx.draw_networkx_edge_labels(g, pos=pos)\nplt.show()", "_____no_output_____" ] ], [ [ "#### Predict using beam decoder\n\n```python\ndef beam_decoder(self, strings: List[str], get_networkx: bool = True):\n \"\"\"\n Generate triples knowledge graph using beam decoder.\n Example, \"Joseph Enanga juga bermain untuk Union Douala.\" -> \"Joseph Enanga member of sports team Union Douala\"\n\n Parameters\n ----------\n strings : List[str]\n get_networkx: bool, optional (default=True)\n If True, will generate networkx.MultiDiGraph.\n\n Returns\n -------\n result: List[Dict]\n \"\"\"\n```", "_____no_output_____" ] ], [ [ "r = model.beam_decoder([string1, string2])", "WARNING:root:1\n" ], [ "g = r[0]['G']\nplt.figure(figsize=(6, 6))\npos = nx.spring_layout(g)\nnx.draw(g, with_labels=True, node_color='skyblue', edge_cmap=plt.cm.Blues, pos = pos)\nnx.draw_networkx_edge_labels(g, pos=pos)\nplt.show()", "_____no_output_____" ], [ "# https://ms.wikipedia.org/wiki/Malaysia\n\nstring = \"\"\"\nMalaysia secara rasminya Persekutuan Malaysia ialah sebuah negara raja berperlembagaan persekutuan di Asia Tenggara yang terdiri daripada tiga belas negeri dan tiga wilayah persekutuan, yang menduduki bumi berkeluasan 330,803 kilometer persegi (127,720 bt2). Malaysia terbahagi kepada dua kawasan yang mengapit Laut China Selatan, iaitu Semenanjung Malaysia dan Borneo Malaysia (juga Malaysia Barat dan Timur). Malaysia berkongsi sempadan darat dengan Thailand, Indonesia, dan Brunei dan juga sempadan laut dengan Singapura dan Filipina. Ibu negara Malaysia ialah Kuala Lumpur, manakala Putrajaya merupakan pusat kerajaan persekutuan. 
Pada tahun 2009, Malaysia diduduki oleh 28 juta penduduk dan pada tahun 2017 dianggarkan telah mencecah lebih 30 juta orang yang menduduki di Malaysia.\n\nMalaysia berakar-umbikan Kerajaan-kerajaan Melayu yang wujud di wilayahnya dan menjadi taklukan Empayar British sejak abad ke-18. Wilayah British pertama di sini dikenali sebagai Negeri-Negeri Selat. Semenanjung Malaysia yang ketika itu dikenali sebagai Tanah Melayu atau Malaya, mula-mula disatukan di bawah komanwel pada tahun 1946, sebelum menjadi Persekutuan Tanah Melayu pada tahun 1948. Pada tahun 1957 Semenanjung Malaysia mencapai Kemerdekaan dan bebas daripada penjajah dan sekali gus menjadi catatan sejarah terpenting bagi Malaysia. Pada tahun 1963, Tanah Melayu bersatu bersama dengan negara Sabah, Sarawak, dan Singapura bagi membentuk Malaysia. Pada tahun 1965, Singapura keluar dari persekutuan untuk menjadi negara kota yang bebas. Semenjak itu, Malaysia menikmati antara ekonomi yang terbaik di Asia, dengan purata pertumbuhan keluaran dalam negara kasarnya (KDNK) kira-kira 6.5% selama 50 tahun pertama kemerdekaannya. \n\nEkonomi negara yang selama ini dijana oleh sumber alamnya kini juga berkembang dalam sektor-sektor ukur tanah, sains, kejuruteraan, pendidikan, pelancongan, perkapalan, perdagangan dan perubatan.\n\nKetua negara Malaysia ialah Yang di-Pertuan Agong, iaitu raja elektif yang terpilih dan diundi dari kalangan sembilan raja negeri Melayu. Ketua kerajaannya pula ialah Perdana Menteri. Sistem kerajaan Malaysia banyak berdasarkan sistem parlimen Westminster, dan sistem perundangannya juga berasaskan undang-undang am Inggeris.\n\nMalaysia terletak berdekatan dengan khatulistiwa dan beriklim tropika, serta mempunyai kepelbagaian flora dan fauna, sehingga diiktiraf menjadi salah satu daripada 17 negara megadiversiti. Di Malaysia terletaknya Tanjung Piai, titik paling selatan di seluruh tanah besar Eurasia. Malaysia ialah sebuah negara perintis Persatuan Negara-Negara Asia Tenggara dan Pertubuhan Persidangan Islam, dan juga anggota Kerjasama Ekonomi Asia-Pasifik, Negara-Negara Komanwel, dan Pergerakan Negara-Negara Berkecuali.\n\"\"\"", "_____no_output_____" ], [ "def simple_cleaning(string):\n return ''.join([s for s in string if s not in ',.\\'\";'])\n\nstring = malaya.text.function.split_into_sentences(string)\nstring = [simple_cleaning(s) for s in string if len(s) > 50]\nstring", "_____no_output_____" ], [ "r = model.greedy_decoder(string)", "WARNING:root:1\nWARNING:root:1\nWARNING:root:1\nWARNING:root:1\n" ], [ "g = r[0]['G']\n\nfor i in range(1, len(r), 1):\n g.update(r[i]['G'])", "_____no_output_____" ], [ "plt.figure(figsize=(17, 17))\npos = nx.spring_layout(g)\nnx.draw(g, with_labels=True, node_color='skyblue', edge_cmap=plt.cm.Blues, pos = pos)\nnx.draw_networkx_edge_labels(g, pos=pos)\nplt.show()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ] ]
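The knowledge-graph record above turns model-generated (subject, relation, object) triplets into networkx MultiDiGraphs and merges the per-sentence graphs with G.update(). Below is a small standalone sketch of that graph-building step using hand-written triplets rather than model output; the triplets are illustrative assumptions, not Malaya results.
import networkx as nx

triplets = [
    ("Kuala Lumpur", "capital of", "Malaysia"),
    ("Malaysia", "shares border with", "Thailand"),
    ("Pahang", "located in", "Malaysia"),
]

g = nx.MultiDiGraph()
for subj, rel, obj in triplets:
    g.add_edge(subj, obj, label=rel)  # the relation is stored as an edge attribute

print(g.number_of_nodes(), "nodes,", g.number_of_edges(), "edges")
for subj, obj, data in g.edges(data=True):
    print(f"{subj} --[{data['label']}]--> {obj}")
A MultiDiGraph (rather than a plain DiGraph) keeps parallel edges, so two different relations between the same pair of entities are both preserved when graphs are merged.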
4a3d09ca66990ad4d036ceacbbec9b31cb720a17
353,369
ipynb
Jupyter Notebook
fig7 - aln_recieves_spindle.ipynb
jajcayn/thalamocortical_model_study
f975d2ce35ac5cac419470a814f56033ec96cbd8
[ "MIT" ]
null
null
null
fig7 - aln_recieves_spindle.ipynb
jajcayn/thalamocortical_model_study
f975d2ce35ac5cac419470a814f56033ec96cbd8
[ "MIT" ]
1
2022-02-21T11:09:29.000Z
2022-02-21T11:09:43.000Z
fig7 - aln_recieves_spindle.ipynb
jajcayn/thalamocortical_model_study
f975d2ce35ac5cac419470a814f56033ec96cbd8
[ "MIT" ]
null
null
null
1,266.555556
237,744
0.952707
[ [ [ "%matplotlib inline\n\nimport matplotlib as mpl\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\nimport xarray as xr\nfrom neurolib.models.multimodel import MultiModel\nfrom yasa import get_centered_indices\n\nfrom aln_thalamus import ALNThalamusMiniNetwork\nfrom plotting import plot_spectrum\n\nDPI = 75\nCMAP = \"plasma\"\nplt.rcParams[\"figure.figsize\"] = (20, 9)\nplt.style.use(\"default_light.mplstyle\")\n\nSW = {\"low_freq\": 0.1, \"high_freq\": 2.5}\nSP = {\"low_freq\": 11.0, \"high_freq\": 16.0}\n\nSAVE_FIG = False", "_____no_output_____" ], [ "def simulate(\n conn=2.0,\n ou_exc=4.1,\n ou_inh=1.8,\n ou_sigma=0.0,\n tauA=1000.0,\n glk=0.018,\n a=0.0,\n b=15.0,\n):\n # all times in ms\n duration = 30000.0\n t_spin_up = 5000.0\n sampling_dt = 1.0\n dt = 0.01\n\n model = MultiModel(\n ALNThalamusMiniNetwork(\n np.array([[0.0, conn], [0.0, 0.0]]), np.array([[0.0, 13.0], [13.0, 0.0]])\n )\n )\n model.params[\"*g_LK\"] = glk\n model.params[\"ALNThlmNet.ALNNode_0.ALNMassEXC_0.a\"] = a\n model.params[\"*b\"] = b\n model.params[\"*tauA\"] = tauA\n model.params[\"*EXC*mu\"] = ou_exc\n model.params[\"*INH*mu\"] = ou_inh\n model.params[\"*ALNMass*input*sigma\"] = ou_sigma\n model.params[\"duration\"] = duration + t_spin_up\n model.params[\"dt\"] = dt\n model.params[\"sampling_dt\"] = sampling_dt\n model.params[\"backend\"] = \"numba\"\n\n model.run()\n\n results_df = pd.DataFrame(\n {\n \"ALN-EXC\": model.r_mean_EXC[0, :] * 1000.0,\n \"ALN-INH\": model.r_mean_INH[0, :] * 1000.0,\n \"I_adapt\": model.I_A_EXC[0, :],\n \"TCR\": model.r_mean_EXC[1, :] * 1000.0,\n },\n index=model.t,\n )\n results_df.index.name = \"time\"\n return results_df", "_____no_output_____" ], [ "def plot_single(df, ax, ax2legend=False):\n (l1,) = ax.plot(df.index, df[\"ALN-INH\"], color=\"C1\", alpha=0.7)\n (l2,) = ax.plot(df.index, df[\"ALN-EXC\"], color=\"C0\")\n ax.set_ylim([-40, 130])\n sns.despine(trim=True, ax=ax)\n ax2 = ax.twinx()\n (l3,) = ax2.plot(df.index, df[\"TCR\"], color=\"k\", linewidth=2.5, alpha=0.5)\n ax2.set_ylim([-50, 1600.0])\n ax2.set_yticks([0.0, 400.0])\n if ax2legend:\n ax2.set_ylabel(\"$r_{TCR}$\")\n ax2.yaxis.set_label_coords(1.1, 0.13)\n sns.despine(trim=True, ax=ax2)\n return l1, l2, l3", "_____no_output_____" ], [ "conns = [0.0, 0.025, 0.05, 0.1]\n\nparams = [\n (2.8, 2.0, \"Inside $\\mathregular{{LC_{{aE}}}}$\"),\n (2.33, 2.0, r\"Border DOWN $\\times$ $\\mathregular{{LC_{{aE}}}}$\"),\n (3.5, 2.0, r\"Border $\\mathregular{{LC_{{aE}}}}$ $\\times$ UP\"),\n]\n\n# noise-less\nnoise = 0.0\nfig, axs = plt.subplots(\n nrows=len(conns),\n ncols=len(params),\n sharex=True,\n figsize=(20, 3 * len(conns)),\n squeeze=False,\n)\nfor j, (exc_inp, inh_inp, plot_name) in enumerate(params):\n for i, conn in enumerate(conns):\n ax = axs[i, j]\n df = simulate(\n conn=conn,\n ou_exc=exc_inp,\n ou_inh=inh_inp,\n ou_sigma=noise,\n tauA=1000.0,\n glk=0.031,\n a=0.0,\n b=15.0,\n )\n l1, l2, l3 = plot_single(df.loc[5:20], ax=ax, ax2legend=(j == len(params) - 1))\n if i == (len(conns) - 1):\n ax.set_xlabel(\"time [sec]\")\n if j == 0:\n ax.set_ylabel(\n r\"$N_{thal\\to ctx}\" f\"={conn}$ \\n\\n $r_{{E}}$, $r_{{I}}$ [Hz]\"\n )\n axs[0, j].set_title(plot_name)\n\nplt.tight_layout()\nfig.legend(\n (l1, l2, l3),\n (\"I\", \"E\", \"TCR\"),\n loc=\"upper center\",\n ncol=3,\n bbox_to_anchor=(0.51, 0.01),\n bbox_transform=fig.transFigure,\n)\n# to PDF due transparency\nif SAVE_FIG:\n plt.savefig(\n f\"../figs/aln_rec_spindle_{noise}.pdf\",\n transparent=True,\n 
bbox_inches=\"tight\",\n )", "_____no_output_____" ], [ "# with noise: only right border is interesting\n\nnoise = 0.05\nfig, axs = plt.subplots(\n ncols=len(conns),\n nrows=1,\n figsize=(20, 3.5),\n squeeze=False,\n)\n\nparams = [\n (3.25, 2.0, r\"Border $\\mathregular{{LC_{{aE}}}}$ $\\times$ UP\"),\n]\n\nfor i, (exc_inp, inh_inp, plot_name) in enumerate(params):\n for j, conn in enumerate(conns):\n ax = axs[i, j]\n df = simulate(\n conn=conn,\n ou_exc=exc_inp,\n ou_inh=inh_inp,\n ou_sigma=noise,\n tauA=1000.0,\n glk=0.031,\n a=0.0,\n b=15.0,\n )\n l1, l2, l3 = plot_single(df.loc[5:20], ax=ax, ax2legend=(j == len(conns) - 1))\n ax.set_xlabel(\"time [sec]\")\n ax.set_title(r\"$N_{thal\\to ctx}\" + f\"={conn}$\")\n if j == 0:\n ax.set_ylabel(\"$r_{E}$, $r_{I}$ [Hz]\")\n\nplt.tight_layout()\nfig.legend(\n (l1, l2, l3),\n (\"I\", \"E\", \"TCR\"),\n loc=\"upper center\",\n ncol=3,\n bbox_to_anchor=(0.51, 0.01),\n bbox_transform=fig.transFigure,\n)\n# to PDF due transparency\nif SAVE_FIG:\n plt.savefig(\n f\"../figs/aln_rec_spindle_{noise}.pdf\",\n transparent=True,\n bbox_inches=\"tight\",\n )", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code" ] ]
4a3d19bb168504f4986a04953c1667b2f10516d4
774,027
ipynb
Jupyter Notebook
PanelDataModeling.ipynb
djibybalde/Airline-competitiveness
8f343a59d4233050ae51a1e5e5b8090c547b0b26
[ "MIT" ]
null
null
null
PanelDataModeling.ipynb
djibybalde/Airline-competitiveness
8f343a59d4233050ae51a1e5e5b8090c547b0b26
[ "MIT" ]
null
null
null
PanelDataModeling.ipynb
djibybalde/Airline-competitiveness
8f343a59d4233050ae51a1e5e5b8090c547b0b26
[ "MIT" ]
null
null
null
129.826736
143,344
0.791639
[ [ [ "## Contents\n0. Import Libraries and Load Data\n1. Data Preparation for PanelData Model\n2. Basic Panel Model\n - PooledOLS model\n - RandomEffects model\n - BetweenOLS model\n3. Testing correlated effects\n - Testing for Fixed Effects\n - Testing for Time Effects\n - First Differences\n4. Comparison\n - Comparing between modelBetween, modelRE and modelPooled models\n - Comparing between Robust, Entity and Entity-Time methods\n \n5. Instruments as lags of order 1 and 2 of first differences\n - Compute the lags of order 1 and 2 of first differences\n6. Linear Instrumental-Variables Regression\n - 2SLS as OLS\n - IV 2SLS\n - Tests\n - Sargan test: Testing the absence of correlation between Z and U\n - Testing the correlation of Z and X_endog\n - Endogeneity testing using Durbin's and Wu-Hausman tests of exogeneity\n - Augmented test for testing the exogeneity of `log_fare`\n - Instrumenting using two-stage least squares\n - Homoskedasticity – Heteroskedasticity\n - Breusch–Pagan test\n - White test\n7. GMM Estimation\n8.1. Exogeneity test using the augmented regression approach\n8.2. Testing Autocorrelation\n9. Feasible Generalized Least Squares (GLS) and GLSA model\n10. References ", "_____no_output_____" ] ], [ [ "# Importing libraries\nimport pandas as pd\nimport numpy as np\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport matplotlib.animation as animation\n\nimport glob\nfrom glob import iglob\nimport datetime as dt\n\nimport statsmodels.api as sm\nimport statsmodels.formula.api as smf\nimport statsmodels.stats.api as sms\n\nfrom linearmodels import PanelOLS, PooledOLS, BetweenOLS, RandomEffects, FirstDifferenceOLS\nfrom statsmodels.stats.outliers_influence import variance_inflation_factor, OLSInfluence\n\n%matplotlib inline", "_____no_output_____" ], [ "path = '../notebooks/final_database.csv'\ndf = pd.read_csv(path, decimal='.', sep=',')\n\ndf['quarter'] = pd.to_datetime(df.quarter).dt.to_period('Q-DEC')\ndf.sort_values(['citymarket1_id', 'citymarket2_id','quarter'], inplace=True)\n\ndf.head()", "_____no_output_____" ] ], [ [ "## Preparing the `PanelData` \n- To use the data as `PanelData`, we need:\n - to compute a dummy variable for each period (quarter, in our case),\n - to identify the ID variable and the time variable, and then to set them as the index,\n - to sort the data with respect to the `ID` and the `period`.\n \n- As the `Within` and the `First Difference` (respectively the `Second Difference`) estimators require at least 2 (respectively 4) observations per individual, we will delete the lines with only one, two or three observations in the dataset.\n- To do so, we will first compute the frequency of the `city market` in each quarter (here: number of quarters by city market) and then keep only those that are present in the dataset `at least 4 times`.", "_____no_output_____" ] ], [ [ "variables = ['log_passengers','log_nsmiles','log_fare','nb_airline','log_income_capita','log_population',\n              'log_kjf_price','dum_dist' ,'dum_q1','dum_q2','dum_q3','dum_q4']\n\ndf['citymarket_id'] = df.citymarket1_id.astype(str)+'_'+df.citymarket2_id.astype(str)\ndf['quarter_index'] = (df.quarter.dt.year.astype(str)+df.quarter.dt.quarter.astype(str)).astype(int)\n\npanel0_df = df[['citymarket_id','quarter_index', 'quarter']+variables].copy()\npanel0_df.sort_values(['citymarket_id','quarter_index'], inplace=True)\nprint('panel0_df has {} observations and {} variables'.format(panel0_df.shape[0], panel0_df[variables].shape[1]))", "panel0_df has 101236 observations and 12 
variables\n" ], [ "# Reset the index in order to compute the number of quarters by city market\n# panel0_df.reset_index(inplace=True)\n\n# Compute the dummy variables of quarter\npanel0_df['quarter'] = pd.Categorical(panel0_df.quarter)\n\npanel0_df.head()", "_____no_output_____" ], [ "# Compute and save the number of quarters by city market\nnb_cm = panel0_df[['citymarket_id', 'quarter']].groupby(['citymarket_id']).nunique()\nnb_cm.drop('citymarket_id', axis=1, inplace=True)\n\n# Reset the index and rename the columns in order to merge the two datasets\nnb_cm.reset_index(inplace=True)\nnb_cm.columns = ['citymarket_id','nb_citymarket']\n\n# Merging and dropping the unneeded rows\npanel1_df = pd.merge(panel0_df, nb_cm, on=['citymarket_id'], how='inner')\npanel1_df = panel1_df[panel1_df.nb_citymarket>=4]\npanel1_df.drop('nb_citymarket', axis=1, inplace=True)\n\nprint(\"We delete {} city-markets (lines) which weren't present at least 4 times in a given quarter.\".format(panel0_df.shape[0]-panel1_df.shape[0]))\nprint(\"So now, we have '{}' observations in our dataset which will be used to compute the first and second differences.\\n\".format(panel1_df.shape[0]))\n\nprint('We have {} unique city-pair markets and {} periods in our dataset'.format(panel1_df.citymarket_id.nunique(),\n                                                                                 panel1_df.quarter.nunique()))", "We delete 2563 city-markets (lines) which weren't present at least 4 times in a given quarter.\nSo now, we have '98673' observations in our dataset which will be used to compute the first and second differences.\n\nWe have 4578 unique city-pair markets and 33 periods in our dataset\n" ], [ "# Assign the city-market ID a new variable name `ID`\niden = panel1_df[['citymarket_id', 'quarter']].groupby(['citymarket_id']).nunique()\niden['ID'] = range(1, iden.shape[0]+1)\niden.drop('citymarket_id', axis=1, inplace=True)\niden.reset_index(inplace=True)\niden = iden[['citymarket_id', 'ID']]\npanel1_df = pd.merge(iden, panel0_df, on=['citymarket_id'], how='inner')\n\npanel1_df.head()", "_____no_output_____" ], [ "panel1_df.citymarket_id.nunique(), panel1_df.citymarket_id.count()", "_____no_output_____" ], [ "panel1_df.info()", "<class 'pandas.core.frame.DataFrame'>\nInt64Index: 98673 entries, 0 to 98672\nData columns (total 16 columns):\ncitymarket_id        98673 non-null object\nID                   98673 non-null int64\nquarter_index        98673 non-null int64\nquarter              98673 non-null category\nlog_passengers       98673 non-null float64\nlog_nsmiles          98673 non-null float64\nlog_fare             98673 non-null float64\nnb_airline           98673 non-null int64\nlog_income_capita    98673 non-null float64\nlog_population       98673 non-null float64\nlog_kjf_price        98673 non-null float64\ndum_dist             98673 non-null int64\ndum_q1               98673 non-null int64\ndum_q2               98673 non-null int64\ndum_q3               98673 non-null int64\ndum_q4               98673 non-null int64\ndtypes: category(1), float64(6), int64(8), object(1)\nmemory usage: 12.1+ MB\n" ], [ "print('Number of city-market:', panel1_df.citymarket_id.nunique(), \n      '\\nNumber of quarter:', panel1_df.quarter.nunique())", "Number of city-market: 4578 \nNumber of quarter: 33\n" ] ], [ [ "## Basic regression\n- First, run the PooledOLS as a classical OLS regression to check the structure of the data.\n- The log of passengers is modeled using all independent variables and time dummies. 
\n- `Note` that the dummies of quarters will not be used at the same time as the dummies of times\nhttps://bashtage.github.io/linearmodels/devel/panel/examples/examples.html", "_____no_output_____" ] ], [ [ "# Set the entity (city market) and time (quarter) indices for the panel structure\npanel1_df.set_index(['citymarket_id','quarter_index'], inplace=True)\n", "_____no_output_____" ] ], [ [ "### Parameters\n- `entity_effects`: flag whether to include entity (fixed) effects in the model, if `True`\n- `time_effects`: flag whether to include time effects in the model, if `True`\n- `cov_type`:\n - if `homoskedastic` or `unadjusted`: assume residuals are homoskedastic\n - if `heteroskedastic` or `robust`: control for heteroskedasticity using `White’s estimator` \n- White’s robust covariance adds some robustness against certain types of specification issues. This estimator should not be used when including fixed effects (entity effects) because it is no longer robust.", "_____no_output_____" ], [ "### 1. PooledOLS model", "_____no_output_____" ] ], [ [ "# Identifying the regressors. Note that `quarter` provides the time dummies \nregressors = ['log_nsmiles','log_fare','nb_airline','log_income_capita','log_population','log_kjf_price','dum_dist','quarter']\n\nmodelPooled = PanelOLS(panel1_df.log_passengers, panel1_df[regressors], \n                       entity_effects=False, time_effects=False, other_effects=None)\nmodelPooled = modelPooled.fit(cov_type='robust')\nprint(modelPooled)", "                          PanelOLS Estimation Summary                           \n================================================================================\nDep. Variable:         log_passengers   R-squared:                        0.4885\nEstimator:                   PanelOLS   R-squared (Between):              0.5208\nNo. Observations:               98673   R-squared (Within):              -1.1120\nDate:                Thu, Jan 16 2020   R-squared (Overall):              0.4885\nTime:                        17:32:32   Log-likelihood                 -1.21e+05\nCov. Estimator:                Robust   \n                                        F-statistic:                      2478.5\nEntities:                        4578   P-value                           0.0000\nAvg Obs:                       21.554   Distribution:                F(38,98634)\nMin Obs:                       4.0000   \nMax Obs:                       33.000   F-statistic (robust):             2780.5\n                                        P-value                           0.0000\nTime periods:                      33   Distribution:                F(38,98634)\nAvg Obs:                       2990.1   \nMin Obs:                       2845.0   \nMax Obs:                       3137.0   \n                                        \n                               Parameter Estimates                                   \n=====================================================================================\n                   Parameter  Std. Err. 
T-stat P-value Lower CI Upper CI\n-------------------------------------------------------------------------------------\nlog_nsmiles 0.6927 0.0047 147.62 0.0000 0.6835 0.7019\nlog_fare -1.3773 0.0106 -130.50 0.0000 -1.3979 -1.3566\nnb_airline 0.3340 0.0014 237.38 0.0000 0.3313 0.3368\nlog_income_capita 1.1407 0.0314 36.324 0.0000 1.0792 1.2023\nlog_population 0.0673 0.0043 15.710 0.0000 0.0589 0.0757\nlog_kjf_price -1.3059 0.0585 -22.341 0.0000 -1.4204 -1.1913\ndum_dist -0.8367 0.0343 -24.381 0.0000 -0.9040 -0.7695\nquarter.2010Q2 0.1139 0.0218 5.2218 0.0000 0.0712 0.1567\nquarter.2010Q3 0.0613 0.0218 2.8157 0.0049 0.0186 0.1040\nquarter.2010Q4 0.1286 0.0226 5.6886 0.0000 0.0843 0.1729\nquarter.2011Q1 0.3658 0.0266 13.779 0.0000 0.3138 0.4179\nquarter.2011Q2 0.6599 0.0311 21.210 0.0000 0.5989 0.7209\nquarter.2011Q3 0.6244 0.0295 21.166 0.0000 0.5666 0.6822\nquarter.2011Q4 0.5117 0.0288 17.786 0.0000 0.4553 0.5681\nquarter.2012Q1 0.5958 0.0307 19.376 0.0000 0.5355 0.6560\nquarter.2012Q2 0.6828 0.0283 24.152 0.0000 0.6273 0.7382\nquarter.2012Q3 0.6276 0.0283 22.213 0.0000 0.5722 0.6829\nquarter.2012Q4 0.5433 0.0279 19.475 0.0000 0.4886 0.5980\nquarter.2013Q1 0.6497 0.0294 22.082 0.0000 0.5920 0.7074\nquarter.2013Q2 0.5360 0.0250 21.420 0.0000 0.4869 0.5850\nquarter.2013Q3 0.5940 0.0261 22.770 0.0000 0.5429 0.6451\nquarter.2013Q4 0.4678 0.0256 18.247 0.0000 0.4176 0.5181\nquarter.2014Q1 0.4645 0.0266 17.489 0.0000 0.4125 0.5166\nquarter.2014Q2 0.5892 0.0255 23.137 0.0000 0.5393 0.6391\nquarter.2014Q3 0.5933 0.0252 23.587 0.0000 0.5440 0.6426\nquarter.2014Q4 0.2299 0.0218 10.550 0.0000 0.1872 0.2726\nquarter.2015Q1 -0.2178 0.0277 -7.8688 0.0000 -0.2720 -0.1635\nquarter.2015Q2 -0.0311 0.0255 -1.2192 0.2228 -0.0811 0.0189\nquarter.2015Q3 -0.1131 0.0297 -3.8052 0.0001 -0.1713 -0.0548\nquarter.2015Q4 -0.4428 0.0359 -12.330 0.0000 -0.5132 -0.3724\nquarter.2016Q1 -0.8005 0.0502 -15.949 0.0000 -0.8989 -0.7021\nquarter.2016Q2 -0.4829 0.0411 -11.751 0.0000 -0.5634 -0.4023\nquarter.2016Q3 -0.4193 0.0377 -11.126 0.0000 -0.4931 -0.3454\nquarter.2016Q4 -0.4406 0.0346 -12.720 0.0000 -0.5085 -0.3727\nquarter.2017Q1 -0.3255 0.0315 -10.322 0.0000 -0.3873 -0.2637\nquarter.2017Q2 -0.3064 0.0345 -8.8922 0.0000 -0.3739 -0.2388\nquarter.2017Q3 -0.2743 0.0322 -8.5239 0.0000 -0.3373 -0.2112\nquarter.2017Q4 -0.1519 0.0273 -5.5663 0.0000 -0.2053 -0.0984\nquarter.2018Q1 -0.0425 0.0249 -1.7060 0.0880 -0.0914 0.0063\n=====================================================================================\n\n\n" ], [ "modelPooled.f_pooled", "_____no_output_____" ], [ "modelPooled.entity_info", "_____no_output_____" ], [ "modelPooled.f_statistic", "_____no_output_____" ], [ "modelPooled.f_statistic_robust", "_____no_output_____" ], [ "modelPooled.variance_decomposition", "_____no_output_____" ], [ "\"\"\"modelRE.f_statistic\nmodelRE.f_statistic_robust\nmodelRE.variance_decomposition\"\"\"", "_____no_output_____" ] ], [ [ "### 2. RandomEffects model", "_____no_output_____" ] ], [ [ "# Identifying the regressors. Note that the `quarter` is the time dummies \nregressors = ['log_nsmiles','log_fare','nb_airline','log_income_capita','log_population','log_kjf_price','dum_dist', 'quarter']\n\nmodelRE = RandomEffects(panel1_df.log_passengers, panel1_df[regressors])\nmodelRE = modelRE.fit(cov_type='robust')\nprint(modelRE)", " RandomEffects Estimation Summary \n================================================================================\nDep. 
Variable: log_passengers R-squared: 0.2337\nEstimator: RandomEffects R-squared (Between): 0.1783\nNo. Observations: 98673 R-squared (Within): 0.2486\nDate: Thu, Jan 16 2020 R-squared (Overall): 0.1187\nTime: 17:32:33 Log-likelihood 3303.8\nCov. Estimator: Robust \n F-statistic: 791.38\nEntities: 4578 P-value 0.0000\nAvg Obs: 21.554 Distribution: F(38,98634)\nMin Obs: 4.0000 \nMax Obs: 33.000 F-statistic (robust): 2061.9\n P-value 0.0000\nTime periods: 33 Distribution: F(38,98634)\nAvg Obs: 2990.1 \nMin Obs: 2845.0 \nMax Obs: 3137.0 \n \n Parameter Estimates \n=====================================================================================\n Parameter Std. Err. T-stat P-value Lower CI Upper CI\n-------------------------------------------------------------------------------------\nlog_nsmiles 0.4263 0.0199 21.457 0.0000 0.3873 0.4652\nlog_fare -0.9840 0.0091 -108.57 0.0000 -1.0018 -0.9663\nnb_airline 0.0261 0.0007 35.975 0.0000 0.0246 0.0275\nlog_income_capita 1.1200 0.0486 23.043 0.0000 1.0247 1.2152\nlog_population 0.0196 0.0150 1.3088 0.1906 -0.0097 0.0489\nlog_kjf_price -1.1071 0.0987 -11.220 0.0000 -1.3005 -0.9137\ndum_dist -0.3518 0.1583 -2.2227 0.0262 -0.6621 -0.0416\nquarter.2010Q2 0.2058 0.0083 24.677 0.0000 0.1895 0.2222\nquarter.2010Q3 0.1319 0.0072 18.304 0.0000 0.1177 0.1460\nquarter.2010Q4 0.2134 0.0128 16.724 0.0000 0.1884 0.2384\nquarter.2011Q1 0.3191 0.0253 12.633 0.0000 0.2696 0.3686\nquarter.2011Q2 0.6393 0.0380 16.805 0.0000 0.5648 0.7139\nquarter.2011Q3 0.5670 0.0344 16.464 0.0000 0.4995 0.6345\nquarter.2011Q4 0.5035 0.0327 15.379 0.0000 0.4393 0.5676\nquarter.2012Q1 0.5032 0.0372 13.542 0.0000 0.4304 0.5761\nquarter.2012Q2 0.5846 0.0314 18.638 0.0000 0.5231 0.6460\nquarter.2012Q3 0.5137 0.0310 16.566 0.0000 0.4529 0.5745\nquarter.2012Q4 0.4470 0.0300 14.877 0.0000 0.3881 0.5059\nquarter.2013Q1 0.4614 0.0337 13.684 0.0000 0.3953 0.5274\nquarter.2013Q2 0.4754 0.0231 20.564 0.0000 0.4300 0.5207\nquarter.2013Q3 0.4832 0.0262 18.448 0.0000 0.4318 0.5345\nquarter.2013Q4 0.4293 0.0247 17.385 0.0000 0.3809 0.4777\nquarter.2014Q1 0.3816 0.0264 14.478 0.0000 0.3299 0.4333\nquarter.2014Q2 0.5254 0.0247 21.277 0.0000 0.4770 0.5738\nquarter.2014Q3 0.4789 0.0235 20.422 0.0000 0.4329 0.5249\nquarter.2014Q4 0.2279 0.0093 24.536 0.0000 0.2097 0.2461\nquarter.2015Q1 -0.2060 0.0288 -7.1553 0.0000 -0.2624 -0.1496\nquarter.2015Q2 -0.0070 0.0227 -0.3076 0.7584 -0.0514 0.0375\nquarter.2015Q3 -0.1495 0.0339 -4.4144 0.0000 -0.2158 -0.0831\nquarter.2015Q4 -0.3607 0.0481 -7.5033 0.0000 -0.4549 -0.2665\nquarter.2016Q1 -0.7435 0.0756 -9.8330 0.0000 -0.8917 -0.5953\nquarter.2016Q2 -0.4219 0.0582 -7.2552 0.0000 -0.5359 -0.3079\nquarter.2016Q3 -0.3783 0.0510 -7.4121 0.0000 -0.4783 -0.2783\nquarter.2016Q4 -0.3398 0.0451 -7.5293 0.0000 -0.4282 -0.2513\nquarter.2017Q1 -0.3072 0.0377 -8.1406 0.0000 -0.3811 -0.2332\nquarter.2017Q2 -0.2445 0.0447 -5.4724 0.0000 -0.3320 -0.1569\nquarter.2017Q3 -0.2539 0.0391 -6.4872 0.0000 -0.3307 -0.1772\nquarter.2017Q4 -0.1047 0.0276 -3.7865 0.0002 -0.1588 -0.0505\nquarter.2018Q1 -0.0857 0.0200 -4.2889 0.0000 -0.1249 -0.0466\n=====================================================================================\n" ], [ "modelRE.variance_decomposition", "_____no_output_____" ], [ "modelRE.theta.head()", "_____no_output_____" ] ], [ [ "### 2. BetweenOLS model\nThe quarter dummies are dropped since the averaging removes differences due to the quarter. These results are broadly similar to the previous models.", "_____no_output_____" ] ], [ [ "# Identifying the regressors. 
Note that the `quarter` is the time dummies \npanel1_df['const'] = 1\nregressors = ['const','log_nsmiles','log_fare','nb_airline','log_income_capita','log_population','log_kjf_price','dum_dist'] # , 'quarter'\n\nmodelBetween = BetweenOLS(panel1_df.log_passengers, panel1_df[regressors])\nmodelBetween = modelBetween.fit(cov_type='robust')\n\nprint(modelBetween)", " BetweenOLS Estimation Summary \n================================================================================\nDep. Variable: log_passengers R-squared: 0.5996\nEstimator: BetweenOLS R-squared (Between): 0.5996\nNo. Observations: 4578 R-squared (Within): -2.6978\nDate: Thu, Jan 16 2020 R-squared (Overall): 0.3986\nTime: 17:32:34 Log-likelihood -5047.5\nCov. Estimator: Robust \n F-statistic: 977.62\nEntities: 4578 P-value 0.0000\nAvg Obs: 21.554 Distribution: F(7,4570)\nMin Obs: 4.0000 \nMax Obs: 33.000 F-statistic (robust): 733.95\n P-value 0.0000\nTime periods: 33 Distribution: F(7,4570)\nAvg Obs: 2990.1 \nMin Obs: 2845.0 \nMax Obs: 3137.0 \n \n Parameter Estimates \n=====================================================================================\n Parameter Std. Err. T-stat P-value Lower CI Upper CI\n-------------------------------------------------------------------------------------\nconst -8.4459 1.5875 -5.3204 0.0000 -11.558 -5.3337\nlog_nsmiles 0.7328 0.0207 35.377 0.0000 0.6922 0.7734\nlog_fare -1.3393 0.0443 -30.239 0.0000 -1.4262 -1.2525\nnb_airline 0.4807 0.0091 52.567 0.0000 0.4628 0.4986\nlog_income_capita 1.1848 0.1296 9.1432 0.0000 0.9308 1.4389\nlog_population 0.0329 0.0176 1.8643 0.0623 -0.0017 0.0674\nlog_kjf_price 0.1107 0.1149 0.9638 0.3352 -0.1145 0.3360\ndum_dist -0.9375 0.1763 -5.3173 0.0000 -1.2832 -0.5919\n=====================================================================================\n" ] ], [ [ "## Testing correlated effects\n> When effects are correlated with the regressors the RE and BE estimators are not consistent. The usual solution is to use Fixed Effects which are available in PanelOLS. Fixed effects are called entity_effects when applied to entities and time_effects when applied to the time dimension:", "_____no_output_____" ], [ "### 1. Testing for Fixed Effects\n- Entity effects can be added using `entity_effects=True`. \n- Time-invariant (`dum_dist`) variable is excluded when using entity effects since it will all be 0.\n- Since the estimator is not robust, we set `cov_type='clustered'.", "_____no_output_____" ] ], [ [ "regressors = ['log_nsmiles','log_fare','nb_airline','log_income_capita','log_population','log_kjf_price', 'quarter']\n\nmodelFE = PanelOLS(panel1_df.log_passengers, panel1_df[regressors], \n entity_effects=True, time_effects=False, other_effects=None)\nmodelFE = modelFE.fit(cov_type='clustered', cluster_entity=True)\nprint(modelFE)\n", " PanelOLS Estimation Summary \n================================================================================\nDep. Variable: log_passengers R-squared: 0.2493\nEstimator: PanelOLS R-squared (Between): 0.0204\nNo. Observations: 98673 R-squared (Within): 0.2493\nDate: Thu, Jan 16 2020 R-squared (Overall): 0.1339\nTime: 17:32:35 Log-likelihood 8159.0\nCov. 
Estimator: Clustered \n F-statistic: 844.43\nEntities: 4578 P-value 0.0000\nAvg Obs: 21.554 Distribution: F(37,94058)\nMin Obs: 4.0000 \nMax Obs: 33.000 F-statistic (robust): 2093.5\n P-value 0.0000\nTime periods: 33 Distribution: F(37,94058)\nAvg Obs: 2990.1 \nMin Obs: 2845.0 \nMax Obs: 3137.0 \n \n Parameter Estimates \n=====================================================================================\n Parameter Std. Err. T-stat P-value Lower CI Upper CI\n-------------------------------------------------------------------------------------\nlog_nsmiles 0.8663 0.3958 2.1889 0.0286 0.0906 1.6420\nlog_fare -0.9778 0.0223 -43.847 0.0000 -1.0215 -0.9341\nnb_airline 0.0198 0.0013 15.822 0.0000 0.0173 0.0222\nlog_income_capita 1.0672 0.0982 10.867 0.0000 0.8747 1.2597\nlog_population -0.0645 0.0458 -1.4077 0.1592 -0.1542 0.0253\nlog_kjf_price -1.2408 0.5368 -2.3113 0.0208 -2.2930 -0.1886\nquarter.2010Q2 0.2164 0.0304 7.1277 0.0000 0.1569 0.2759\nquarter.2010Q3 0.1355 0.0076 17.830 0.0000 0.1206 0.1504\nquarter.2010Q4 0.2345 0.0665 3.5255 0.0004 0.1041 0.3648\nquarter.2011Q1 0.3595 0.1445 2.4889 0.0128 0.0764 0.6426\nquarter.2011Q2 0.6997 0.2182 3.2072 0.0013 0.2721 1.1273\nquarter.2011Q3 0.6227 0.1995 3.1221 0.0018 0.2318 1.0136\nquarter.2011Q4 0.5584 0.1916 2.9142 0.0036 0.1828 0.9340\nquarter.2012Q1 0.5653 0.2200 2.5694 0.0102 0.1341 0.9965\nquarter.2012Q2 0.6395 0.1899 3.3669 0.0008 0.2672 1.0118\nquarter.2012Q3 0.5674 0.1865 3.0420 0.0024 0.2018 0.9330\nquarter.2012Q4 0.5032 0.1869 2.6926 0.0071 0.1369 0.8694\nquarter.2013Q1 0.5182 0.2028 2.5559 0.0106 0.1208 0.9156\nquarter.2013Q2 0.5203 0.1449 3.5904 0.0003 0.2363 0.8044\nquarter.2013Q3 0.5322 0.1624 3.2765 0.0011 0.2138 0.8506\nquarter.2013Q4 0.4787 0.1558 3.0719 0.0021 0.1733 0.7841\nquarter.2014Q1 0.4344 0.1681 2.5839 0.0098 0.1049 0.7638\nquarter.2014Q2 0.5784 0.1628 3.5528 0.0004 0.2593 0.8975\nquarter.2014Q3 0.5306 0.1577 3.3635 0.0008 0.2214 0.8398\nquarter.2014Q4 0.2596 0.0699 3.7153 0.0002 0.1227 0.3966\nquarter.2015Q1 -0.2184 0.1111 -1.9664 0.0493 -0.4362 -0.0007\nquarter.2015Q2 -0.0092 0.0761 -0.1211 0.9036 -0.1585 0.1400\nquarter.2015Q3 -0.1680 0.1367 -1.2287 0.2192 -0.4360 0.1000\nquarter.2015Q4 -0.3969 0.2152 -1.8441 0.0652 -0.8187 0.0249\nquarter.2016Q1 -0.8186 0.3660 -2.2364 0.0253 -1.5361 -0.1012\nquarter.2016Q2 -0.4722 0.2699 -1.7496 0.0802 -1.0011 0.0568\nquarter.2016Q3 -0.4177 0.2293 -1.8211 0.0686 -0.8672 0.0319\nquarter.2016Q4 -0.3687 0.1952 -1.8882 0.0590 -0.7513 0.0140\nquarter.2017Q1 -0.3252 0.1515 -2.1472 0.0318 -0.6221 -0.0284\nquarter.2017Q2 -0.2707 0.1879 -1.4402 0.1498 -0.6390 0.0977\nquarter.2017Q3 -0.2719 0.1566 -1.7366 0.0825 -0.5787 0.0350\nquarter.2017Q4 -0.1035 0.0901 -1.1494 0.2504 -0.2800 0.0730\nquarter.2018Q1 -0.0720 0.0452 -1.5904 0.1117 -0.1606 0.0167\n=====================================================================================\n\nF-test for Poolability: 263.30\nP-value: 0.0000\nDistribution: F(4577,94058)\n\nIncluded effects: Entity\n" ] ], [ [ "### 2. Testing for Time Effects\n- Time effect can be added using `time_effects=True`. 
\n- Here, when we include or exclude the constant, we have the same results.", "_____no_output_____" ] ], [ [ "regressors = ['const','log_nsmiles','log_fare','nb_airline','log_income_capita','log_population']\n\nmodelTE = PanelOLS(panel1_df.log_passengers, panel1_df[regressors], \n entity_effects=False, time_effects=True, other_effects=None)\nmodelTE = modelTE.fit(cov_type='clustered', cluster_entity=True, cluster_time=True) \nprint(modelTE)\n", " PanelOLS Estimation Summary \n================================================================================\nDep. Variable: log_passengers R-squared: 0.4833\nEstimator: PanelOLS R-squared (Between): 0.5147\nNo. Observations: 98673 R-squared (Within): -1.1651\nDate: Thu, Jan 16 2020 R-squared (Overall): 0.4808\nTime: 17:32:35 Log-likelihood -1.214e+05\nCov. Estimator: Clustered \n F-statistic: 1.845e+04\nEntities: 4578 P-value 0.0000\nAvg Obs: 21.554 Distribution: F(5,98635)\nMin Obs: 4.0000 \nMax Obs: 33.000 F-statistic (robust): 1143.3\n P-value 0.0000\nTime periods: 33 Distribution: F(5,98635)\nAvg Obs: 2990.1 \nMin Obs: 2845.0 \nMax Obs: 3137.0 \n \n Parameter Estimates \n=====================================================================================\n Parameter Std. Err. T-stat P-value Lower CI Upper CI\n-------------------------------------------------------------------------------------\nconst -6.3320 1.5138 -4.1830 0.0000 -9.2989 -3.3651\nlog_nsmiles 0.7295 0.0221 33.008 0.0000 0.6862 0.7728\nlog_fare -1.3810 0.0597 -23.127 0.0000 -1.4980 -1.2640\nnb_airline 0.3318 0.0057 58.490 0.0000 0.3207 0.3429\nlog_income_capita 1.0692 0.1489 7.1805 0.0000 0.7774 1.3611\nlog_population 0.0639 0.0202 3.1719 0.0015 0.0244 0.1035\n=====================================================================================\n\nF-test for Poolability: 20.026\nP-value: 0.0000\nDistribution: F(32,98635)\n\nIncluded effects: Time\n" ] ], [ [ "### 3. First Differences\n> First differencing is an alternative to using fixed effects when there might be correlation. When using first differences, time-invariant variables must be excluded.", "_____no_output_____" ] ], [ [ "regressors = ['log_nsmiles','log_fare','nb_airline','log_income_capita','log_population']\n\nmodelFD = FirstDifferenceOLS(panel1_df.log_passengers, panel1_df[regressors])\nmodelFD = modelFD.fit(cov_type='clustered', cluster_entity=True) \nprint(modelTE)\n", " PanelOLS Estimation Summary \n================================================================================\nDep. Variable: log_passengers R-squared: 0.4833\nEstimator: PanelOLS R-squared (Between): 0.5147\nNo. Observations: 98673 R-squared (Within): -1.1651\nDate: Thu, Jan 16 2020 R-squared (Overall): 0.4808\nTime: 17:32:35 Log-likelihood -1.214e+05\nCov. Estimator: Clustered \n F-statistic: 1.845e+04\nEntities: 4578 P-value 0.0000\nAvg Obs: 21.554 Distribution: F(5,98635)\nMin Obs: 4.0000 \nMax Obs: 33.000 F-statistic (robust): 1143.3\n P-value 0.0000\nTime periods: 33 Distribution: F(5,98635)\nAvg Obs: 2990.1 \nMin Obs: 2845.0 \nMax Obs: 3137.0 \n \n Parameter Estimates \n=====================================================================================\n Parameter Std. Err. 
T-stat P-value Lower CI Upper CI\n-------------------------------------------------------------------------------------\nconst -6.3320 1.5138 -4.1830 0.0000 -9.2989 -3.3651\nlog_nsmiles 0.7295 0.0221 33.008 0.0000 0.6862 0.7728\nlog_fare -1.3810 0.0597 -23.127 0.0000 -1.4980 -1.2640\nnb_airline 0.3318 0.0057 58.490 0.0000 0.3207 0.3429\nlog_income_capita 1.0692 0.1489 7.1805 0.0000 0.7774 1.3611\nlog_population 0.0639 0.0202 3.1719 0.0015 0.0244 0.1035\n=====================================================================================\n\nF-test for Poolability: 20.026\nP-value: 0.0000\nDistribution: F(32,98635)\n\nIncluded effects: Time\n" ] ], [ [ "## Comparing between modelBetween, modelRE and modelPooled models", "_____no_output_____" ] ], [ [ "from linearmodels.panel import compare\nmodelCompare = compare({'PooledOLS':modelPooled, 'Between':modelBetween, 'RandomEffects':modelRE})\nprint(modelCompare)", " Model Comparison \n========================================================================\n Between PooledOLS RandomEffects\n------------------------------------------------------------------------\nDep. Variable log_passengers log_passengers log_passengers\nEstimator BetweenOLS PanelOLS RandomEffects\nNo. Observations 4578 98673 98673\nCov. Est. Robust Robust Robust\nR-squared 0.5996 0.4885 0.2337\nR-Squared (Within) -2.6978 -1.1120 0.2486\nR-Squared (Between) 0.5996 0.5208 0.1783\nR-Squared (Overall) 0.3986 0.4885 0.1187\nF-statistic 977.62 2478.5 791.38\nP-value (F-stat) 0.0000 0.0000 0.0000\n===================== ================ ================ ================\nconst -8.4459 \n (-5.3204) \nlog_nsmiles 0.7328 0.6927 0.4263\n (35.377) (147.62) (21.457)\nlog_fare -1.3393 -1.3773 -0.9840\n (-30.239) (-130.50) (-108.57)\nnb_airline 0.4807 0.3340 0.0261\n (52.567) (237.38) (35.975)\nlog_income_capita 1.1848 1.1407 1.1200\n (9.1432) (36.324) (23.043)\nlog_population 0.0329 0.0673 0.0196\n (1.8643) (15.710) (1.3088)\nlog_kjf_price 0.1107 -1.3059 -1.1071\n (0.9638) (-22.341) (-11.220)\ndum_dist -0.9375 -0.8367 -0.3518\n (-5.3173) (-24.381) (-2.2227)\nquarter.2010Q2 0.1139 0.2058\n (5.2218) (24.677)\nquarter.2010Q3 0.0613 0.1319\n (2.8157) (18.304)\nquarter.2010Q4 0.1286 0.2134\n (5.6886) (16.724)\nquarter.2011Q1 0.3658 0.3191\n (13.779) (12.633)\nquarter.2011Q2 0.6599 0.6393\n (21.210) (16.805)\nquarter.2011Q3 0.6244 0.5670\n (21.166) (16.464)\nquarter.2011Q4 0.5117 0.5035\n (17.786) (15.379)\nquarter.2012Q1 0.5958 0.5032\n (19.376) (13.542)\nquarter.2012Q2 0.6828 0.5846\n (24.152) (18.638)\nquarter.2012Q3 0.6276 0.5137\n (22.213) (16.566)\nquarter.2012Q4 0.5433 0.4470\n (19.475) (14.877)\nquarter.2013Q1 0.6497 0.4614\n (22.082) (13.684)\nquarter.2013Q2 0.5360 0.4754\n (21.420) (20.564)\nquarter.2013Q3 0.5940 0.4832\n (22.770) (18.448)\nquarter.2013Q4 0.4678 0.4293\n (18.247) (17.385)\nquarter.2014Q1 0.4645 0.3816\n (17.489) (14.478)\nquarter.2014Q2 0.5892 0.5254\n (23.137) (21.277)\nquarter.2014Q3 0.5933 0.4789\n (23.587) (20.422)\nquarter.2014Q4 0.2299 0.2279\n (10.550) (24.536)\nquarter.2015Q1 -0.2178 -0.2060\n (-7.8688) (-7.1553)\nquarter.2015Q2 -0.0311 -0.0070\n (-1.2192) (-0.3076)\nquarter.2015Q3 -0.1131 -0.1495\n (-3.8052) (-4.4144)\nquarter.2015Q4 -0.4428 -0.3607\n (-12.330) (-7.5033)\nquarter.2016Q1 -0.8005 -0.7435\n (-15.949) (-9.8330)\nquarter.2016Q2 -0.4829 -0.4219\n (-11.751) (-7.2552)\nquarter.2016Q3 -0.4193 -0.3783\n (-11.126) (-7.4121)\nquarter.2016Q4 -0.4406 -0.3398\n (-12.720) (-7.5293)\nquarter.2017Q1 -0.3255 -0.3072\n (-10.322) (-8.1406)\nquarter.2017Q2 -0.3064 
-0.2445\n (-8.8922) (-5.4724)\nquarter.2017Q3 -0.2743 -0.2539\n (-8.5239) (-6.4872)\nquarter.2017Q4 -0.1519 -0.1047\n (-5.5663) (-3.7865)\nquarter.2018Q1 -0.0425 -0.0857\n (-1.7060) (-4.2889)\n------------------------------------------------------------------------\n\nT-stats reported in parentheses\n" ] ], [ [ "### Comparing between Robust, Entity and Entity-Time methods", "_____no_output_____" ] ], [ [ "regressors = ['const','log_nsmiles','log_fare','nb_airline','log_income_capita','log_population']\n\nmodelComp = PanelOLS(panel1_df.log_passengers, panel1_df[regressors])\n\nrobust = modelComp.fit(cov_type='robust')\nclust_entity = modelComp.fit(cov_type='clustered', cluster_entity=True)\nclust_entity_time = modelComp.fit(cov_type='clustered', cluster_entity=True, cluster_time=True)\n", "_____no_output_____" ], [ "from collections import OrderedDict\n\nresults = OrderedDict()\nresults['Robust'] = robust\nresults['Entity'] = clust_entity\nresults['Entity-Time'] = clust_entity_time\n\nprint(compare(results))", " Model Comparison \n========================================================================\n Robust Entity Entity-Time\n------------------------------------------------------------------------\nDep. Variable log_passengers log_passengers log_passengers\nEstimator PanelOLS PanelOLS PanelOLS\nNo. Observations 98673 98673 98673\nCov. Est. Robust Clustered Clustered\nR-squared 0.4809 0.4809 0.4809\nR-Squared (Within) -1.1500 -1.1500 -1.1500\nR-Squared (Between) 0.5141 0.5141 0.5141\nR-Squared (Overall) 0.4809 0.4809 0.4809\nF-statistic 1.828e+04 1.828e+04 1.828e+04\nP-value (F-stat) 0.0000 0.0000 0.0000\n===================== ================ ================ ================\nconst -7.0335 -7.0335 -7.0335\n (-29.946) (-8.5376) (-6.2243)\nlog_nsmiles 0.7254 0.7254 0.7254\n (158.31) (34.892) (33.688)\nlog_fare -1.3623 -1.3623 -1.3623\n (-132.31) (-34.167) (-23.903)\nnb_airline 0.3308 0.3308 0.3308\n (232.97) (62.613) (58.778)\nlog_income_capita 1.1309 1.1309 1.1309\n (48.620) (13.699) (10.001)\nlog_population 0.0620 0.0620 0.0620\n (14.632) (3.2443) (3.1362)\n------------------------------------------------------------------------\n\nT-stats reported in parentheses\n" ], [ "# Reset the index in order to compute the instrumental variables\npanel1_df.reset_index(inplace=True)\npanel1_df.head()", "_____no_output_____" ] ], [ [ "### Instruments as lags of order 1 and 2 of first differences\n- Because we want to compute the `first difference` and the `second difference` of the variables, we first need to compute the `lagged` values of these variables. \n- To do that, we first create a function named `lag_by_individual` and use the `shift()` pandas function inside this new one.\n - The `lag_by_individual` function helps us to identify the first and the last observation of each individual (`city market`) as well as strictly successive observations. In the lagged variables, the first observation for each `city market` will be `\"NaN\"` (missing value).\n\n- The first difference is computed using the formula `difference(t) = observation(i,t) - observation(i,t-1)`. For example, for a given `city market`, we calculate the difference between the observation of `quarter q` and the observation of `quarter q-1`; a short pandas sketch follows below. 
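\n\nA minimal pandas sketch of the same idea (our illustration, not part of the original code; it assumes the frame is sorted by `citymarket_id` and `quarter` within each market):\n\n```python\n# Lag of order 1 within each city market\nlag1 = panel1_df.groupby('citymarket_id')['log_fare'].shift(1)\n# First difference within each city market: observation(i,t) - observation(i,t-1)\ndif1 = panel1_df.groupby('citymarket_id')['log_fare'].diff()\n```\n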
\n#### Let's test our `lag_by_individual` function with some observations before applying it to our data", "_____no_output_____" ], [ "#### Example of lagged variables\nTo make sure that our `lag_by_individual` function works well, we generate a small DataFrame and test it before using our big table.", "_____no_output_____" ] ], [ [ "# Create some random data\nnp.random.seed(0) # ensures the same set of random numbers is generated\ndate = ['2019-01-01']*3 + ['2019-01-02']*3 + ['2019-01-03']*3+['2019-01-04']*3\nvar1, var2 = np.random.randn(12), np.random.randn(12)*20 \ngroup = [\"group1\", \"group2\", \"group3\"]*4 # to assign the groups for the multiple group case\n\nDataFrame = pd.DataFrame({\"quarter_index\": date, \"citymarket_id\":group, \"var1\": var1, \"var2\": var2}) # many vars, many groups\n\ngrouped_df = DataFrame.groupby([\"citymarket_id\"])\n\n# The function\ndef lag_by_individual(key, value_df):\n    \n    \"\"\"\n    The first line returns a copy of the df, with the group column assigned the key value.\n    The parentheses allow us to chain methods and avoid intermediate variable assignment.\n    \n    Reference:\n    https://towardsdatascience.com/timeseries-data-munging-lagging-variables-that-are-distributed-across-multiple-groups-86e0a038460c\n    \"\"\"\n    \n    df = value_df.assign(citymarket_id = key)\n    return (df.sort_values(by=[\"quarter_index\"], ascending=True).set_index([\"quarter_index\"]).shift(1))\n\n# Apply the function\nlag_values = [lag_by_individual(g, grouped_df.get_group(g)) for g in grouped_df.groups.keys()]\nlag_df = pd.concat(lag_values, axis=0).reset_index()\n\nlag_df.loc[(lag_df.citymarket_id.isna() != True), 'id'] = 1 # this flag lets us compute the difference only when the obs are strictly successive\nlag_df.loc[(lag_df.citymarket_id.isna() == True), 'citymarket_id'] = lag_df.citymarket_id.shift(-1) # un-shift the variable\n\nlag_df.set_index(['quarter_index','citymarket_id'], inplace=True)\n\nlag_df.columns = lag_df.columns.values+'_lag1'\n\ndif = pd.merge(DataFrame,lag_df, on = ['quarter_index','citymarket_id'], how='inner').sort_values(['citymarket_id','quarter_index'])\ndif.loc[(dif.id_lag1.isna() != True), 'var1_dif1'] = dif.var1-dif.var1.shift()\ndif.loc[((dif.id_lag1.isna() != True) & (dif.var1_dif1.isna() != True)), 'var1_dif2'] = dif.var1_dif1.shift()\ndif.loc[((dif.id_lag1.isna() != True) & (dif.var1_dif2.shift().isna() != True)), 'var1_dif3'] = dif.var1_dif1.shift(2)\n\n\ndif.loc[((dif.id_lag1.isna() != True) & \n         (dif.var1_dif1.isna() != True) & (dif.var1_dif2.shift().isna() != True)), 'var1_dif3'] = dif.var1_dif1.shift(2)\n\ndif.tail(20)", "_____no_output_____" ], [ "grouped_df = panel1_df.groupby([\"citymarket_id\"])\n\ndef lag_by_individual(key, value_df):\n    \n    \"\"\"\n    - The first line returns a copy of the df, with the group column assigned the key value.\n    - The function returns the lagged values by city market. 
The first observation for each group will be \"NaN\".\n    \n    Reference:\n    https://towardsdatascience.com/timeseries-data-munging-lagging-variables-that-are-distributed-across-multiple-groups-86e0a038460c\n    \"\"\"\n    \n    df = value_df.assign(citymarket_id = key)\n    return (df.sort_values(by=[\"quarter\"], ascending=True).set_index([\"quarter\"]).shift(1))\n", "_____no_output_____" ], [ "# Apply the function \nlag_values = [lag_by_individual(g, grouped_df.get_group(g)) for g in grouped_df.groups.keys()]\nlag_df = pd.concat(lag_values, axis=0).reset_index()\n\nlag_df.loc[(lag_df.citymarket_id.isna() != True), 'id'] = 1 # this flag lets us compute the difference only when the obs are strictly successive\nlag_df.loc[(lag_df.citymarket_id.isna() == True), 'citymarket_id'] = lag_df.citymarket_id.shift(-1) # un-shift the variable\n\nlag_df.set_index(['quarter','citymarket_id'], inplace=True)\n\nlag_df.columns = lag_df.columns.values+'_lag1'\nlag_df = lag_df[['id_lag1']]\n\nfinal_df = pd.concat([panel1_df.set_index(['quarter','citymarket_id']), lag_df],axis=1).reset_index()\n\nfinal_df.head()", "_____no_output_____" ] ], [ [ "### Compute the lags of order 1 and 2 of first differences\nNote that we also create a lagged `log_passengers` variable without first-differencing, named `log_passengers_lag1`. This variable will be used in the dynamic model.", "_____no_output_____" ] ], [ [ "final_df.loc[(final_df.id_lag1.isna() != True), 'log_passengers_lag1'] = final_df.log_passengers.shift()\nfinal_df.loc[(final_df.id_lag1.isna() != True), 'log_passengers_dif1'] = final_df.log_passengers-final_df.log_passengers.shift()\n\nfinal_df.loc[((final_df.id_lag1.isna() != True) & \n (final_df.log_passengers_dif1.isna() != True)), 'log_passengers_dif2'] = final_df.log_passengers_dif1.shift()\n\nfinal_df.loc[((final_df.id_lag1.isna() != True) & \n (final_df.log_passengers_dif1.isna() != True) & \n (final_df.log_passengers_dif1.shift().isna() != True)), 'log_passengers_dif3'] = final_df.log_passengers_dif1.shift(2)\n\nfinal_df.loc[(final_df.id_lag1.isna() != True), 'log_nsmiles_dif1'] = final_df.log_nsmiles-final_df.log_nsmiles.shift()\nfinal_df.loc[(final_df.id_lag1.isna() != True), 'log_nsmiles_dif2'] = final_df.log_nsmiles_dif1.shift()\n# For a given individual (city market), the distance stays constant across periods\n#final_df = final_df.loc[(final_df.id_lag1.isna() != True) & (final_df.log_nsmiles == final_df.log_nsmiles.shift())]\n#final_df = final_df.loc[(final_df.id_lag1.isna() != True) & (final_df.log_nsmiles == final_df.log_nsmiles.shift(2))]\n#final_df = final_df.loc[(final_df.id_lag1.isna() != True) & (final_df.log_nsmiles == final_df.log_nsmiles.shift(3))]\n#final_df = final_df.loc[final_df.log_nsmiles_dif2==0]\n\nfinal_df.loc[(final_df.id_lag1.isna() != True), 'log_fare_dif1'] = final_df.log_fare-final_df.log_fare.shift()\nfinal_df.loc[(final_df.id_lag1.isna() != True), 'log_fare_dif2'] = final_df.log_fare_dif1.shift()\n\nfinal_df.loc[(final_df.id_lag1.isna() != True), 'nb_airline_dif1'] = final_df.nb_airline-final_df.nb_airline.shift()\nfinal_df.loc[(final_df.id_lag1.isna() != True), 'nb_airline_dif2'] = final_df.nb_airline_dif1.shift()\n\nfinal_df.loc[(final_df.id_lag1.isna() != True), 'log_income_capita_dif1'] = final_df.log_income_capita-final_df.log_income_capita.shift()\nfinal_df.loc[((final_df.id_lag1.isna() != True) & \n (final_df.log_income_capita_dif1.isna() != True)), 'log_income_capita_dif2'] = final_df.log_income_capita_dif1.shift()\n
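# The third-order terms below reuse dif1 shifted twice; like the id_lag1 flags above, this is only valid for strictly consecutive quarters\n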
final_df.loc[((final_df.id_lag1.isna() != True) & \n (final_df.log_income_capita_dif1.isna() != True) & \n (final_df.log_income_capita_dif1.shift().isna() != True)), 'log_income_capita_dif3'] = final_df.log_income_capita_dif1.shift(2)\n\nfinal_df.loc[(final_df.id_lag1.isna() != True), 'log_population_dif1'] = final_df.log_population-final_df.log_population.shift()\nfinal_df.loc[((final_df.id_lag1.isna() != True) & \n (final_df.log_population_dif1.isna() != True)), 'log_population_dif2'] = final_df.log_population_dif1.shift()\nfinal_df.loc[((final_df.id_lag1.isna() != True) & \n (final_df.log_population_dif1.isna() != True) & \n (final_df.log_population_dif1.shift().isna() != True)), 'log_population_dif3'] = final_df.log_population_dif1.shift(2)\n\nfinal_df.loc[(final_df.id_lag1.isna() != True), 'log_kjf_dif1'] = final_df.log_kjf_price-final_df.log_kjf_price.shift()\nfinal_df.loc[((final_df.id_lag1.isna() != True) & \n (final_df.log_kjf_dif1.isna() != True)), 'log_kjf_dif2'] = final_df.log_kjf_dif1.shift()\nfinal_df.loc[((final_df.id_lag1.isna() != True) & \n (final_df.log_kjf_dif1.isna() != True) & \n (final_df.log_kjf_dif1.shift().isna() != True)), 'log_kjf_dif3'] = final_df.log_kjf_dif1.shift(2)\n\nfinal_df[['quarter','citymarket_id','log_passengers','log_passengers_lag1','log_passengers_dif1','log_passengers_dif2',\n 'log_fare','log_fare_dif1','log_fare_dif2','log_income_capita_dif1','log_income_capita_dif3','log_population_dif3']].head()", "_____no_output_____" ], [ "# Eliminate observations with missing values (rows without first and second differences)\nfinal_df.dropna(axis=0, how='any', inplace=True)\n\nprint(\"We dropped '{}' observations because they have no first or second difference values.\".format(panel1_df.shape[0]-final_df.shape[0]))\nprint(\"We now have '{}' observations in our dataset after computing the first and second differences.\\n\".format(final_df.shape[0]))\n\nfinal_df[['quarter','citymarket_id','log_passengers','log_passengers_lag1','log_passengers_dif1','log_passengers_dif2',\n 'log_fare','log_fare_dif1','log_fare_dif2','log_income_capita_dif1']].head()\n\nprint('We have {} unique city-pair markets and {} periods in our dataset'.format(final_df.citymarket_id.nunique(),\n final_df.quarter.nunique()))", "We dropped '13734' observations because they have no first or second difference values.\nWe now have '84939' observations in our dataset after computing the first and second differences.\n\nWe have 4578 unique city-pair markets and 30 periods in our dataset\n" ], [ "\"\"\"final_df.sort_values(by=['ID','quarter'], inplace=True)\n\n# Exportation\npath = '../notebooks/final_panel_df.csv'\nfinal_df.to_csv(path, index=False)\"\"\"", "_____no_output_____" ] ], [ [ "### Here, we compute the time dummies manually", "_____no_output_____" ] ], [ [ "final_df['get_dum_quarter'] = final_df['quarter_index']\n\ndum_period = pd.get_dummies(final_df.quarter_index, prefix='dum', columns=['quarter_index']).columns.values.tolist()\npanel_df = pd.get_dummies(final_df, prefix='dum', columns=['get_dum_quarter'])\n\npanel_df['quarter'] = pd.Categorical(panel_df.quarter_index)\npanel_df.sort_values(['citymarket_id', 'quarter_index'], inplace=True)\npanel_df.set_index(['citymarket_id', 'quarter_index'], inplace=True)\npanel_df.head()", "_____no_output_____" ], [ "# Show the columns of the time dummies\nnp.array(dum_period)", "_____no_output_____" ], [ "print('Number of city-markets:', panel_df.ID.nunique(), \n '\\nNumber of quarters:', panel_df.quarter.nunique())", "Number of city-markets: 4578 \nNumber of quarters: 30\n" 
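], [ "# Quick consistency check (a sketch, not in the original workflow): every row\n# should activate exactly one of the quarter dummies created above.\nassert (panel_df[dum_period].sum(axis=1) == 1).all()", "_____no_output_____" 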
], [ "panel_df.shape", "_____no_output_____" ] ], [ [ "## Linear Instrumental-Variables Regression\n[Reference](https://bashtage.github.io/linearmodels/doc/iv/examples/advanced-examples.html)\n", "_____no_output_____" ] ], [ [ "from linearmodels import IV2SLS, IVLIML, IVGMM, IVGMMCUE", "_____no_output_____" ] ], [ [ "## 1. IV 2SLS as OLS\nFor running a `2SLS` as `OLS` estimator of parameters in PanelData, we call the `IV2SLS` using `None` for the `endogenous` and the `instruments`.", "_____no_output_____" ] ], [ [ "controls = ['const','log_passengers_lag1','log_nsmiles','log_fare','log_income_capita','log_population',\n 'nb_airline','log_kjf_price','dum_dist']\n\nivolsmodel = IV2SLS(panel_df.log_passengers, \n panel_df[controls + dum_period[:-2]], \n None, None).fit()\n\nprint(ivolsmodel.summary)", " OLS Estimation Summary \n==============================================================================\nDep. Variable: log_passengers R-squared: 0.9418\nEstimator: OLS Adj. R-squared: 0.9418\nNo. Observations: 84939 F-statistic: 1.777e+06\nDate: Thu, Jan 16 2020 P-value (F-stat) 0.0000\nTime: 17:32:59 Distribution: chi2(36)\nCov. Estimator: robust \n \n Parameter Estimates \n=======================================================================================\n Parameter Std. Err. T-stat P-value Lower CI Upper CI\n---------------------------------------------------------------------------------------\nconst 6.2081 0.4261 14.569 0.0000 5.3729 7.0432\nlog_passengers_lag1 0.9345 0.0013 708.68 0.0000 0.9319 0.9371\nlog_nsmiles 0.0532 0.0020 26.568 0.0000 0.0493 0.0571\nlog_fare -0.1244 0.0053 -23.310 0.0000 -0.1349 -0.1139\nlog_income_capita 0.0458 0.0130 3.5234 0.0004 0.0203 0.0712\nlog_population 0.0070 0.0016 4.5032 0.0000 0.0040 0.0101\nnb_airline 0.0249 0.0006 44.966 0.0000 0.0238 0.0260\nlog_kjf_price -1.1938 0.0776 -15.375 0.0000 -1.3459 -1.0416\ndum_dist -0.0525 0.0111 -4.7171 0.0000 -0.0743 -0.0307\ndum_20104 0.2894 0.0186 15.576 0.0000 0.2530 0.3258\ndum_20111 0.3511 0.0294 11.954 0.0000 0.2935 0.4087\ndum_20112 0.7797 0.0400 19.501 0.0000 0.7014 0.8581\ndum_20113 0.5802 0.0372 15.605 0.0000 0.5073 0.6531\ndum_20114 0.5240 0.0361 14.497 0.0000 0.4531 0.5948\ndum_20121 0.5659 0.0402 14.062 0.0000 0.4870 0.6448\ndum_20122 0.7080 0.0361 19.606 0.0000 0.6372 0.7788\ndum_20123 0.5370 0.0355 15.143 0.0000 0.4675 0.6065\ndum_20124 0.5390 0.0357 15.103 0.0000 0.4690 0.6089\ndum_20131 0.5227 0.0378 13.818 0.0000 0.4485 0.5968\ndum_20132 0.6268 0.0297 21.080 0.0000 0.5685 0.6850\ndum_20133 0.4696 0.0321 14.630 0.0000 0.4067 0.5325\ndum_20134 0.5022 0.0313 16.060 0.0000 0.4409 0.5635\ndum_20141 0.4563 0.0330 13.818 0.0000 0.3916 0.5210\ndum_20142 0.6656 0.0323 20.587 0.0000 0.6022 0.7289\ndum_20143 0.4915 0.0315 15.606 0.0000 0.4298 0.5532\ndum_20144 0.2757 0.0191 14.415 0.0000 0.2382 0.3132\ndum_20151 -0.1714 0.0099 -17.332 0.0000 -0.1908 -0.1520\ndum_20152 0.1325 0.0072 18.303 0.0000 0.1183 0.1466\ndum_20153 -0.1295 0.0125 -10.339 0.0000 -0.1540 -0.1049\ndum_20154 -0.3270 0.0234 -13.946 0.0000 -0.3729 -0.2810\ndum_20161 -0.7460 0.0450 -16.589 0.0000 -0.8341 -0.6578\ndum_20162 -0.2954 0.0313 -9.4370 0.0000 -0.3568 -0.2341\ndum_20163 -0.3548 0.0252 -14.074 0.0000 -0.4042 -0.3054\ndum_20164 -0.2760 0.0205 -13.439 0.0000 -0.3163 -0.2358\ndum_20171 -0.2459 0.0145 -16.975 0.0000 -0.2743 -0.2175\ndum_20172 -0.1216 0.0195 -6.2368 0.0000 -0.1598 -0.0834\ndum_20173 -0.1963 0.0150 -13.120 0.0000 -0.2256 -0.1670\n=======================================================================================\n" ] 
], [ [ "### 2. IV 2SLS using `log_income_capita_dif1` and `log_fare` as endogenous variables", "_____no_output_____" ] ], [ [ "\"\"\"\ninstruements = ['log_income_capita_dif1','log_population_dif1','nb_airline_dif1',\n 'log_income_capita_dif2','log_population_dif2','nb_airline_dif2',\n 'log_income_capita_dif3','log_population_dif3']\"\"\"", "_____no_output_____" ], [ "controls = ['const','log_nsmiles','log_income_capita','log_population',\n 'nb_airline','log_kjf_price','dum_dist']\n\ninstruements = ['log_nsmiles_dif1','log_income_capita_dif1','log_population_dif1','nb_airline_dif1',\n 'log_fare_dif1','log_fare_dif2','log_passengers_dif2',\n 'log_nsmiles_dif2','log_income_capita_dif2','log_population_dif2','nb_airline_dif2']\n\niv2LSmodel = IV2SLS(panel_df['log_passengers'], \n panel_df[controls + dum_period[:-2]], \n panel_df[['log_fare','log_passengers_lag1']], \n panel_df[instruements]).fit()\n\nprint(iv2LSmodel.summary)", " IV-2SLS Estimation Summary \n==============================================================================\nDep. Variable: log_passengers R-squared: 0.9134\nEstimator: IV-2SLS Adj. R-squared: 0.9134\nNo. Observations: 84939 F-statistic: 4.806e+05\nDate: Thu, Jan 16 2020 P-value (F-stat) 0.0000\nTime: 17:33:00 Distribution: chi2(36)\nCov. Estimator: robust \n \n Parameter Estimates \n=======================================================================================\n Parameter Std. Err. T-stat P-value Lower CI Upper CI\n---------------------------------------------------------------------------------------\nconst 5.5477 0.5254 10.559 0.0000 4.5180 6.5775\nlog_nsmiles 0.2394 0.0051 47.107 0.0000 0.2294 0.2494\nlog_income_capita 0.2565 0.0157 16.306 0.0000 0.2257 0.2874\nlog_population 0.0091 0.0020 4.6618 0.0000 0.0053 0.0130\nnb_airline 0.0847 0.0016 52.226 0.0000 0.0815 0.0879\nlog_kjf_price -0.9216 0.0948 -9.7265 0.0000 -1.1074 -0.7359\ndum_dist -0.1809 0.0146 -12.399 0.0000 -0.2095 -0.1523\ndum_20104 0.1761 0.0233 7.5647 0.0000 0.1305 0.2218\ndum_20111 0.2464 0.0361 6.8341 0.0000 0.1758 0.3171\ndum_20112 0.6234 0.0489 12.751 0.0000 0.5276 0.7192\ndum_20113 0.4504 0.0456 9.8778 0.0000 0.3611 0.5398\ndum_20114 0.4011 0.0442 9.0773 0.0000 0.3145 0.4877\ndum_20121 0.4404 0.0491 8.9615 0.0000 0.3441 0.5368\ndum_20122 0.5978 0.0440 13.573 0.0000 0.5115 0.6841\ndum_20123 0.4332 0.0434 9.9747 0.0000 0.3480 0.5183\ndum_20124 0.4243 0.0436 9.7243 0.0000 0.3388 0.5098\ndum_20131 0.4324 0.0461 9.3729 0.0000 0.3420 0.5229\ndum_20132 0.5236 0.0363 14.437 0.0000 0.4525 0.5947\ndum_20133 0.4093 0.0392 10.446 0.0000 0.3325 0.4862\ndum_20134 0.4067 0.0381 10.669 0.0000 0.3320 0.4815\ndum_20141 0.3650 0.0403 9.0594 0.0000 0.2861 0.4440\ndum_20142 0.5712 0.0395 14.475 0.0000 0.4939 0.6486\ndum_20143 0.4315 0.0385 11.203 0.0000 0.3560 0.5070\ndum_20144 0.2353 0.0233 10.101 0.0000 0.1896 0.2809\ndum_20151 -0.1081 0.0121 -8.9651 0.0000 -0.1317 -0.0845\ndum_20152 0.1505 0.0087 17.293 0.0000 0.1334 0.1675\ndum_20153 -0.0507 0.0156 -3.2508 0.0012 -0.0813 -0.0201\ndum_20154 -0.2361 0.0285 -8.2909 0.0000 -0.2919 -0.1803\ndum_20161 -0.5553 0.0548 -10.130 0.0000 -0.6627 -0.4479\ndum_20162 -0.1919 0.0381 -5.0357 0.0000 -0.2665 -0.1172\ndum_20163 -0.2593 0.0309 -8.3974 0.0000 -0.3198 -0.1988\ndum_20164 -0.2155 0.0249 -8.6566 0.0000 -0.2643 -0.1667\ndum_20171 -0.1847 0.0177 -10.436 0.0000 -0.2194 -0.1500\ndum_20172 -0.0556 0.0237 -2.3467 0.0189 -0.1021 -0.0092\ndum_20173 -0.1518 0.0185 -8.1855 0.0000 -0.1881 -0.1154\nlog_fare -0.8013 0.0239 -33.564 0.0000 -0.8481 -0.7545\nlog_passengers_lag1 
0.7573 0.0047 160.22 0.0000 0.7480 0.7665\n=======================================================================================\n\nEndogenous: log_fare, log_passengers_lag1\nInstruments: log_nsmiles_dif1, log_income_capita_dif1, log_population_dif1, nb_airline_dif1, log_fare_dif1, log_fare_dif2, log_passengers_dif2, log_nsmiles_dif2, log_income_capita_dif2, log_population_dif2, nb_airline_dif2\nRobust Covariance (Heteroskedastic)\nDebiased: False\n" ] ], [ [ "## 3. Tests\n### 3.1. Testing the absence of correlation between Z and U\n- We estimate two models separately:\n - `iv2LSmodel1`, where `log_passengers_lag1` is treated as the endogenous variable\n - `iv2LSmodel2`, where `log_fare` is treated as the endogenous variable\n- For each model, we save the residuals (see the `Sargan` test part) and run the test.\n\n> (In essence, when the regressor and error are correlated, the parameter is not identified. The presence of an instrument solves the identification problem.) \n> Instruments are correlated with X, but uncorrelated with the model error term by assumption or by construction. ", "_____no_output_____" ] ], [ [ "from sklearn.metrics import r2_score, mean_absolute_error, mean_squared_error\nimport scipy.stats as st", "_____no_output_____" ], [ "controls = ['const','log_nsmiles','log_income_capita','log_population',\n 'nb_airline','log_kjf_price','dum_dist']\n\ninstruements = ['log_nsmiles_dif1','log_income_capita_dif1','log_population_dif1','nb_airline_dif1',\n 'log_fare_dif1','log_fare_dif2','log_passengers_dif2',\n 'log_nsmiles_dif2','log_income_capita_dif2','log_population_dif2','nb_airline_dif2']\n\niv2LSmodel1 = IV2SLS(panel_df['log_passengers'], \n panel_df[controls+dum_period[:-2]], \n panel_df[['log_passengers_lag1']], panel_df[instruements]).fit()\n\niv2LSmodel2 = IV2SLS(panel_df['log_passengers'], \n panel_df[controls+dum_period[:-2]], \n panel_df[['log_fare']], panel_df[instruements]).fit()\n", "_____no_output_____" ] ], [ [ "#### 3.1.1. Compute the Q-Sargan for testing the absence of correlation between Z and U\n- Store the residuals in a variable named `iv_resids`\n- Estimate a regression of the estimated residuals on all the instruments\n- Compute the R-squared of this last regression\n- Get the number of observations and compute the Q-Sargan\n- Compare this computed value with the Sargan value provided directly by the model", "_____no_output_____" ] ], [ [ "# Compute the Q-Sargan in model 1: log_passengers_lag1 treated as endogenous\n\npanel_df['iv_resids'] = iv2LSmodel1.resids.values\ncor_sargan = sm.OLS(panel_df.iv_resids, panel_df[controls+instruements]).fit()\n\nr_pred = cor_sargan.predict(panel_df[controls+instruements])\nr_square = r2_score(panel_df.iv_resids, r_pred)\n\ndegree_freedom = cor_sargan.df_model\n\nnobs = cor_sargan.nobs\nq_sargan = nobs*r_square\nprint('Q-Sargan:', q_sargan)", "Q-Sargan: 8563.420701043364\n" ], [ "# Compute the Q-Sargan in model 2: log_fare treated as endogenous\n\npanel_df['iv_resids'] = iv2LSmodel2.resids.values\ncor_sargan = sm.OLS(panel_df.iv_resids, panel_df[controls+instruements]).fit()\n\nr_pred = cor_sargan.predict(panel_df[controls+instruements])\nr_square = r2_score(panel_df.iv_resids, r_pred)\n\ndegree_freedom = cor_sargan.df_model\n\nnobs = cor_sargan.nobs\nq_sargan = nobs*r_square\nprint('Q-Sargan:', q_sargan)", "Q-Sargan: 7048.072535838633\n" ] ], [ [ "The critical value of the chi-squared ($\chi^2$) distribution with p-(k+1) degrees of freedom (`8`) is `21.955`, far below the computed Q-Sargan statistics. Hence, we reject the null hypothesis.\n#### 3.1.2. 
We can also get this Sargan test statistic directly from the IV model (as below).", "_____no_output_____" ] ], [ [ "iv2LSmodel1.sargan", "_____no_output_____" ], [ "iv2LSmodel2.sargan", "_____no_output_____" ] ], [ [ "> Wooldridge’s regression-based test of exogeneity is robust to heteroskedasticity since it inherits the covariance estimator from the model. Here there is little difference.\nWooldridge’s score test is an alternative to the regression test, although it usually has slightly less power since it is an LM rather than a Wald type test. ", "_____no_output_____" ] ], [ [ "iv2LSmodel1.wooldridge_regression", "_____no_output_____" ], [ "iv2LSmodel2.wooldridge_regression", "_____no_output_____" ], [ "iv2LSmodel1.wooldridge_score", "_____no_output_____" ], [ "iv2LSmodel2.wooldridge_score", "_____no_output_____" ], [ "iv2LSmodel1.wooldridge_overid", "_____no_output_____" ], [ "iv2LSmodel2.wooldridge_overid", "_____no_output_____" ], [ "iv2LSmodel1.basmann_f", "_____no_output_____" ], [ "iv2LSmodel2.basmann_f", "_____no_output_____" ] ], [ [ "### 3.2. Testing the correlation of Z and X_endog\n- We estimate two different OLS models:\n - `cor_z_fare`, where `log_fare` is the response variable\n - `cor_z_pass`, where `log_passengers_lag1` is the response variable\n- For each model, the explanatory variables include the `controls` and the `instruments`\n- The idea is to test whether the coefficients of the `instruments` are null.\n - H0: the coefficients of the `instruments` are equal to zero\n \nThe F-test and the Wald test are used to test whether a variable has no effect. Note that the F-test is a special case of wald_test that always uses the F distribution.", "_____no_output_____" ] ], [ [ "instruements = ['log_nsmiles_dif1','log_income_capita_dif1','log_population_dif1','nb_airline_dif1',\n 'log_fare_dif1','log_fare_dif2','log_passengers_dif2',\n 'log_nsmiles_dif2','log_income_capita_dif2','log_population_dif2','nb_airline_dif2']\n\nH0 = '(log_nsmiles_dif1=log_income_capita_dif1=log_population_dif1=nb_airline_dif1=log_fare_dif2=log_passengers_dif2=log_nsmiles_dif2=log_income_capita_dif2=log_population_dif2=nb_airline_dif2=0)'\n", "_____no_output_____" ], [ "# Using the f_test from the OLS results\ncor_z_fare = sm.OLS(panel_df[['log_fare']], panel_df[controls+dum_period[:-2]+instruements]).fit()\ncor_z_pass = sm.OLS(panel_df[['log_passengers_lag1']], panel_df[controls+dum_period[:-2]+instruements]).fit()\n\nprint('test between Z and fare:\\n', cor_z_fare.f_test(H0))\nprint()\nprint('test between Z and lag_passenger:\\n', cor_z_pass.f_test(H0))", "test between Z and fare:\n <F test: F=array([[207.06535313]]), p=0.0, df_denom=8.49e+04, df_num=10>\n\ntest between Z and lag_passenger:\n <F test: F=array([[1030.12111151]]), p=0.0, df_denom=8.49e+04, df_num=10>\n" ], [ "# Using the Wald test from the PanelOLS results\ncor_z_fare = PanelOLS(panel_df.log_fare, panel_df[controls+dum_period[:-2]+instruements]).fit()\ncor_z_pass = PanelOLS(panel_df.log_passengers_lag1, panel_df[controls+dum_period[:-2]+instruements]).fit()\n", "_____no_output_____" ], [ "print('testing correlation between Z and fare')\ncor_z_fare.wald_test(formula=H0)", "testing correlation between Z and fare\n" ], [ "print('testing correlation between Z and lag_passenger')\ncor_z_pass.wald_test(formula=H0)", "testing correlation between Z and lag_passenger\n" ] ], [ [ "#### The statistics of all the previous tests allow us to reject the null hypothesis. The coefficients of the `instruments` are not equal to zero. 
In other words, the **`instruments are indeed strong and relevant`**.", "_____no_output_____" ], [ "### 3.3. Endogeneity testing using the `Durbin` and `Wu-Hausman` tests of exogeneity\n> 1. The Durbin test is a classic test of endogeneity which compares OLS estimates with 2SLS and exploits the fact that OLS estimates will be relatively efficient. Durbin’s test is not robust to heteroskedasticity.\n> 2. The Wu-Hausman test is a variant of the Durbin test that uses a slightly different form.\n", "_____no_output_____" ] ], [ [ "iv2LSmodel1.durbin()", "_____no_output_____" ], [ "iv2LSmodel2.durbin()", "_____no_output_____" ], [ "iv2LSmodel1.wu_hausman()", "_____no_output_____" ], [ "iv2LSmodel2.wu_hausman()", "_____no_output_____" ], [ "iv2LSmodel1.f_statistic", "_____no_output_____" ], [ "iv2LSmodel2.f_statistic", "_____no_output_____" ] ], [ [ "### 3.4. Augmented test for testing the exogeneity of `log_fare` and `log_passengers_lag1`\n- Using the `F-test` for a joint linear hypothesis and the `Wald test` to check whether a variable has no effect \n\n - [WaldTestStatistic](https://bashtage.github.io/linearmodels/panel/results.html): the hypothesis test examines whether $H_0: C\theta = v$, where the matrix $C$ is the restriction and $v$ is the value. The test statistic has a $\chi^2_q$ distribution, where $q$ is the number of rows in $C$. See the [Source code for linearmodels.panel.results](https://bashtage.github.io/linearmodels/_modules/linearmodels/panel/results.html#PanelEffectsResults)\n\n", "_____no_output_____" ] ], [ [ "# Augmented test for the exogeneity of log_fare\naug_residus = PanelOLS(panel_df.log_fare, panel_df[controls + dum_period[:-2]]).fit()\n\npanel_df['fare_resids'] = aug_residus.resids.values\n\naug_wald = sm.OLS(panel_df.log_passengers, panel_df[['log_fare','fare_resids']+controls]).fit()\n\nH0_formula = '(fare_resids = 0)' # we can add as many variables as needed, e.g.: H0_formula = 'x2 = x3 = 0'\naug_wald.f_test(H0_formula)", "_____no_output_____" ], [ "aug_wald.summary()", "_____no_output_____" ], [ "# Augmented test for the exogeneity of log_passengers_lag1\naug_residus = PanelOLS(panel_df.log_passengers_lag1, panel_df[controls + dum_period[:-2]]).fit()\n\npanel_df['pass_lag_resids'] = aug_residus.resids.values\n\naug_wald = sm.OLS(panel_df.log_passengers, panel_df[['log_passengers_lag1','pass_lag_resids']+controls]).fit()\n\nH0_formula = '(pass_lag_resids = 0)' # we can add as many variables as needed, e.g.: H0_formula = 'x2 = x3 = 0'\naug_wald.f_test(H0_formula)", "_____no_output_____" ], [ "aug_wald.summary()", "_____no_output_____" ] ], [ [ "### 4. Instrumenting using two-stage least squares (IV method)\n> - endog is the dependent variable, y\n- exog is the x matrix that has the endogenous information in it. Include the endogenous variables in it.\n- instrument is the z matrix. 
Include all the variables that are not endogenous and replace the endogenous variables from the exog matrix (above) with whatever instruments you choose for them.\n\n- First stage: we regress the endogenous variable (`log_fare` and `log_passengers_lag1`, respectively) on all the other regressors and all the instruments, and save the fitted-values series.\n- Second stage: we rerun the previous regression replacing `log_fare` with `log_fare_hat` (and `log_passengers_lag1` with `pass_lag_hat`, respectively).", "_____no_output_____" ] ], [ [ "controls = ['const','log_passengers_lag1','log_nsmiles','log_income_capita','log_population','nb_airline',\n 'log_kjf_price','dum_dist']\n\ninstruements = ['log_income_capita_dif1','log_population_dif1','nb_airline_dif1',\n 'log_income_capita_dif2','log_population_dif2','nb_airline_dif2']\n\n# log_fare as response variable\niv_1stage1 = PanelOLS(panel_df[['log_fare']], panel_df[controls+instruements+dum_period[:-2]]).fit()\n\n# Fitted values from the first stage\npanel_df['fare_hat'] = iv_1stage1.predict()\n\n# OLS regression using the fitted values\niv_2stage1 = PanelOLS(panel_df[['log_passengers']], panel_df[['fare_hat']+controls+dum_period[:-2]]).fit()\n\n#print(iv_2stage1.summary)", "_____no_output_____" ], [ "controls = ['const','log_fare','log_nsmiles','log_income_capita','log_population','nb_airline',\n 'log_kjf_price','dum_dist']\n\n# log_passengers_lag1 as response variable\niv_1stage2 = PanelOLS(panel_df[['log_passengers_lag1']], panel_df[controls+instruements+dum_period[:-2]]).fit()\n\n# Fitted values from the first stage\npanel_df['pass_lag_hat'] = iv_1stage2.predict()\n\n# OLS regression using the fitted values\niv_2stage2 = PanelOLS(panel_df[['log_passengers']], panel_df[['pass_lag_hat']+controls+dum_period[:-2]]).fit()\n\n#print(iv_2stage2.summary)", "_____no_output_____" ], [ "# OLS regression using the two fitted values `fare_hat` and `pass_lag_hat`\ncontrols = ['const','log_nsmiles','log_income_capita','log_population','nb_airline',\n 'log_kjf_price','dum_dist']\n\niv_2stage = PanelOLS(panel_df[['log_passengers']], panel_df[['fare_hat','pass_lag_hat']+controls+dum_period[:-2]]).fit()\n\nprint(iv_2stage.summary)", " PanelOLS Estimation Summary \n================================================================================\nDep. Variable: log_passengers R-squared: 0.9114\nEstimator: PanelOLS R-squared (Between): 0.9720\nNo. Observations: 84939 R-squared (Within): -0.5581\nDate: Thu, Jan 16 2020 R-squared (Overall): 0.9114\nTime: 17:33:23 Log-likelihood -2.865e+04\nCov. Estimator: Unadjusted \n F-statistic: 2.425e+04\nEntities: 4578 P-value 0.0000\nAvg Obs: 18.554 Distribution: F(36,84902)\nMin Obs: 1.0000 \nMax Obs: 30.000 F-statistic (robust): 2.425e+04\n P-value 0.0000\nTime periods: 30 Distribution: F(36,84902)\nAvg Obs: 2831.3 \nMin Obs: 2231.0 \nMax Obs: 3010.0 \n \n Parameter Estimates \n=====================================================================================\n Parameter Std. Err. 
T-stat P-value Lower CI Upper CI\n-------------------------------------------------------------------------------------\nfare_hat -5.7510 0.0096 -600.16 0.0000 -5.7698 -5.7322\npass_lag_hat 0.3460 0.0027 128.67 0.0000 0.3408 0.3513\nconst 16.260 0.5130 31.693 0.0000 15.254 17.266\nlog_nsmiles 1.1686 0.0030 386.41 0.0000 1.1627 1.1745\nlog_income_capita 0.8326 0.0136 61.136 0.0000 0.8059 0.8593\nlog_population -0.0678 0.0019 -34.879 0.0000 -0.0716 -0.0640\nnb_airline 0.2484 0.0010 238.65 0.0000 0.2463 0.2504\nlog_kjf_price 0.3231 0.0940 3.4361 0.0006 0.1388 0.5073\ndum_dist -0.3309 0.0132 -25.141 0.0000 -0.3567 -0.3051\ndum_20104 -0.7801 0.0229 -34.053 0.0000 -0.8250 -0.7352\ndum_20111 -0.4962 0.0358 -13.868 0.0000 -0.5663 -0.4261\ndum_20112 -0.2232 0.0485 -4.6062 0.0000 -0.3182 -0.1282\ndum_20113 -0.3690 0.0453 -8.1533 0.0000 -0.4577 -0.2803\ndum_20114 -0.2889 0.0439 -6.5848 0.0000 -0.3749 -0.2029\ndum_20121 -0.2092 0.0488 -4.2831 0.0000 -0.3049 -0.1135\ndum_20122 0.0719 0.0437 1.6430 0.1004 -0.0139 0.1576\ndum_20123 -0.1912 0.0431 -4.4320 0.0000 -0.2757 -0.1066\ndum_20124 -0.1962 0.0433 -4.5311 0.0000 -0.2810 -0.1113\ndum_20131 -0.0456 0.0459 -0.9937 0.3203 -0.1355 0.0443\ndum_20132 0.0399 0.0360 1.1066 0.2685 -0.0307 0.1104\ndum_20133 0.1013 0.0390 2.5998 0.0093 0.0249 0.1777\ndum_20134 -0.0473 0.0379 -1.2479 0.2121 -0.1216 0.0270\ndum_20141 -0.0535 0.0400 -1.3350 0.1819 -0.1320 0.0250\ndum_20142 0.2319 0.0392 5.9140 0.0000 0.1550 0.3087\ndum_20143 0.1737 0.0383 4.5298 0.0000 0.0985 0.2489\ndum_20144 0.1106 0.0232 4.7575 0.0000 0.0650 0.1562\ndum_20151 0.2752 0.0119 23.094 0.0000 0.2518 0.2986\ndum_20152 0.3496 0.0081 42.904 0.0000 0.3336 0.3656\ndum_20153 0.3208 0.0155 20.748 0.0000 0.2905 0.3511\ndum_20154 0.1987 0.0283 7.0182 0.0000 0.1432 0.2541\ndum_20161 0.3290 0.0543 6.0595 0.0000 0.2226 0.4354\ndum_20162 0.3000 0.0376 7.9820 0.0000 0.2263 0.3737\ndum_20163 0.1094 0.0306 3.5775 0.0003 0.0495 0.1693\ndum_20164 0.0544 0.0248 2.1958 0.0281 0.0058 0.1029\ndum_20171 0.1155 0.0175 6.5981 0.0000 0.0812 0.1498\ndum_20172 0.3319 0.0233 14.228 0.0000 0.2861 0.3776\ndum_20173 -0.0229 0.0181 -1.2629 0.2066 -0.0584 0.0126\n=====================================================================================\n\n\n" ] ], [ [ "As you can see, we get the same results whether we use the IV or the `Two Stage Least Squares` method.\n==> Our instruments are valid.", "_____no_output_____" ], [ "## [Homoskedasticity](https://en.wikipedia.org/wiki/Homoscedasticity) - [Heteroskedasticity](https://en.wikipedia.org/wiki/Heteroscedasticity) test\nThe homoskedasticity hypothesis implies that the variance of the errors is constant:\n\\begin{equation*} V(\varepsilon|X) = \sigma^2 I \end{equation*}\n\\begin{equation*} H_0: \sigma^2_i = \sigma^2 \end{equation*}", "_____no_output_____" ] ], [ [ "from statsmodels.stats.diagnostic import het_breuschpagan\nfrom statsmodels.stats.diagnostic import het_white\n", "_____no_output_____" ], [ "controls = ['const','log_nsmiles','log_income_capita','log_population',\n 'nb_airline','log_kjf_price','dum_dist']\n\ninstruements = ['log_nsmiles_dif1','log_income_capita_dif1','log_population_dif1','nb_airline_dif1',\n 'log_fare_dif1','log_fare_dif2','log_passengers_dif2',\n 'log_nsmiles_dif2','log_income_capita_dif2','log_population_dif2','nb_airline_dif2']\n\niv2LSmodel = IV2SLS(panel_df['log_passengers'], \n panel_df[controls+dum_period[:-2]], \n panel_df[['log_fare','log_passengers_lag1']], panel_df[instruements]).fit()\n", "_____no_output_____" ] ], [ [ "### Breusch–Pagan test\n
$\\beta_0+\\beta_1x+µ$ \n$\\hat{µ}^2$ = $\\rho_0+\\rho_1x+𝑣$\n- Breusch–Pagan test using `python library`\n- Breusch–Pagan test computed manually (using two methods)", "_____no_output_____" ] ], [ [ "bp_test = het_breuschpagan(iv2LSmodel.resids, iv2LSmodel.model.exog.original)\n\nprint('Lagrange multiplier statistic: {} \\nP_value: {}\\nf-value: {} \\nfP_value: {}'.format(bp_test[0], bp_test[1],\n bp_test[2], bp_test[3]))", "Lagrange multiplier statistic: 1546.8418511067011 \nP_value: 1.024852768184934e-303\nf-value: 46.32014760198499 \nfP_value: 1.1286476291363389e-306\n" ], [ "\"\"\"\nhttps://en.wikipedia.org/wiki/Breusch–Pagan_test\nIf the test statistic has a p-value below an appropriate threshold (e.g. p < 0.05) \nthen the null hypothesis of homoskedasticity is rejected and heteroskedasticity assumed.\n\"\"\"\npanel_df['iv2_resids2'] = (iv2LSmodel.resids.values)**2\n\nhet_breuschpagan = PanelOLS(panel_df.iv2_resids2, panel_df[controls+dum_period[:-2]]).fit()\n\nfval = het_breuschpagan.f_statistic\nfpval = het_breuschpagan.pvalues\n\nif round(fval.pval,3) < 0.05:\n BreuschPagan_H0 = \"We rejected H0: the null hypothesis of homoskedasticity. So, we have `heteroskedasticity`.\"\nelse:\n BreuschPagan_H0 = \"We don't rejected H0: the null hypothesis of homoskedasticity\"\n\nprint(fval)\nprint()\nprint(BreuschPagan_H0)", "Model F-statistic (homoskedastic)\nH0: All parameters ex. constant not zero\nStatistic: 46.3201\nP-value: 0.0000\nDistributed: F(34,84904)\n\nWe rejected H0: the null hypothesis of homoskedasticity. So, we have `heteroskedasticity`.\n" ], [ "het_bp = PanelOLS(panel_df.iv2_resids2, panel_df[controls+dum_period[:-2]]).fit()\n\nhet_bp_pred = het_bp.predict(panel_df[controls+dum_period[:-2]])\nr_square = r2_score(panel_df.iv2_resids2, het_bp_pred)\n\nm = len(controls+dum_period[:-2])\n\nnobs = het_bp.nobs\nq_het_bp = nobs*r_square\nprint(q_het_bp)", "1546.8418511066918\n" ] ], [ [ "The value of `Khi-2` with m (number of regressors degrees of freedom (`37`) is `61.581`. 
Hence, we reject the null hypothesis.", "_____no_output_____" ], [ "### White test\n$\hat{u}^2 = \delta_0 + \delta_1 x_1 + \dots + \delta_k x_k + \lambda_1 x_1^2 + \dots + \lambda_k x_k^2 + \varphi_1 x_1 x_2 + \dots + \varphi_{k-1} x_{k-1} x_k + \nu$\n\n\n> According to [Takashi Yamano](http://www3.grips.ac.jp/~yamanota/Lecture%20Note%208%20to%2010%202SLS%20&%20others.pdf) (p. 22), \"because $\hat{y}$ includes all independent variables, this test is equivalent to conducting the following test\": \n\n$\hat{u}^2 = \delta_0 + \delta_1 \hat{y} + \delta_2 \hat{y}^2 + \nu$\n\n- White test using the `python library`\n- White test computed manually (using the $\hat{u}^2 = \delta_0 + \delta_1 \hat{y} + \delta_2 \hat{y}^2 + \nu$ equation)\n", "_____no_output_____" ] ], [ [ "#name = ['Lagrange multiplier statistic', 'P_value','f-value','fP_value']\nwhite_test = het_white(iv2LSmodel.resids, iv2LSmodel.model.exog.original)\n\nprint('Lagrange multiplier statistic: {} \\nP_value: {}\\nf-value: {} \\nfP_value: {}'.format(white_test[0], white_test[1],\n white_test[2], white_test[3]))", "Lagrange multiplier statistic: 2719.105235193545 \nP_value: 0.0\nf-value: 14.521312596813639 \nfP_value: 0.0\n" ], [ "# Method 1 # https://www.dummies.com/education/economics/econometrics/test-for-heteroskedasticity-with-the-white-test/\ny_hat, y_hat_2 = iv2LSmodel.fitted_values, iv2LSmodel.fitted_values**2\nsquare_resids = (iv2LSmodel.resids)**2\n\niv_hat = pd.concat([y_hat, y_hat_2,square_resids], axis=1)\n\niv_hat.columns = ['y_hat','y_hat_2','resids2']\n\nhet_white_model = PanelOLS(iv_hat.resids2, iv_hat[['y_hat','y_hat_2']]).fit() # renamed to avoid shadowing the imported het_white function\n\nfval = het_white_model.f_statistic\nfpval = het_white_model.pvalues\n\nif round(fval.pval,3) < 0.05:\n White_H0 = \"We rejected H0: the null hypothesis of homoskedasticity, so we have `heteroskedasticity` in our model.\"\nelse:\n White_H0 = \"We don't reject H0: the null hypothesis of homoskedasticity\"\n\nprint(fval)\nprint()\nprint(White_H0)", "Model F-statistic (homoskedastic)\nH0: All parameters ex. 
constant not zero\nStatistic: 7233.2090\nP-value: 0.0000\nDistributed: F(2,84937)\n\nWe rejected H0: the null hypothesis of homoskedasticity, so we have `heteroskedasticity` in our model.\n" ], [ "controls = ['const','log_nsmiles','log_income_capita','log_population','nb_airline',\n 'log_kjf_price','dum_dist']\n\ninstruements = ['log_nsmiles_dif1','log_income_capita_dif1','log_population_dif1','nb_airline_dif1',\n 'log_fare_dif1','log_fare_dif2','log_passengers_dif2',\n 'log_nsmiles_dif2','log_income_capita_dif2','log_population_dif2','nb_airline_dif2']\n\niv2LSmodel = IV2SLS(panel_df['log_passengers'], \n panel_df[controls+dum_period[:-2]], \n panel_df[['log_fare','log_passengers_lag1']], panel_df[instruements]).fit()\n", "_____no_output_____" ], [ "fig, ax = plt.subplots(figsize=(12,8))\n\na = plt.axes(aspect='equal')\nplt.scatter(panel_df.log_passengers.values, iv2LSmodel.predict().values, alpha=.007, c='b')\nplt.xlabel('True Values [log_passengers]')\nplt.ylabel('IV Predictions [log_passengers]')\n\nlims = [panel_df.log_passengers.min(), panel_df.log_passengers.max()]\nplt.xlim(lims)\nplt.ylim(lims)\n_ = plt.plot(lims, lims, c='r')\n", "_____no_output_____" ], [ "fig, ax = plt.subplots(figsize=(12, 8))\n\nsns.distplot(iv2LSmodel.resids, bins=200, hist=True, kde='gaussian', color='b', ax=ax, norm_hist=True)\nsns.distplot(iv2LSmodel.resids, bins=200, hist=False, kde='kernel', color='r', ax=ax, norm_hist=True)\nax.set_title(\"IV Residuals Plot\", fontsize=27)\nax.set_xlim(-1.5,1.5)\nax.set_xlabel('IV Residuals', fontsize=20)\n\nplt.show", "_____no_output_____" ] ], [ [ "## GMM Estimation\n> GMM estimation can be more efficient than 2SLS when there are more than one instrument. By default, 2-step efficient GMM is used (assuming the weighting matrix is correctly specified). It is possible to iterate until convergence using the optional keyword input iter_limit, which is naturally 2 by default. Generally, GMM-CUE would be preferred to using multiple iterations of standard GMM. Source: [linearmodels 4.5](https://bashtage.github.io/linearmodels/doc/iv/examples/advanced-examples.html)\n\n### Parameters\n- According to the [linearmodels 4.5](https://bashtage.github.io/linearmodels/doc/iv/examples/advanced-examples.html) \"available GMM weight functions are:\n - `unadjusted`, 'homoskedastic' - Assumes moment conditions are homoskedastic\n - `robust`, 'heteroskedastic' - Allows for heteroskedasticity by not autocorrelation\n - `kernel` - Allows for heteroskedasticity and autocorrelation\n - `cluster` - Allows for one-way cluster dependence\"\n- As we have heteroskedasticity and autocorrelation, we use the **`kernel`** option ==>\n - Kernel (HAC)\n - Kernel: bartlett", "_____no_output_____" ] ], [ [ "controls = ['const','log_nsmiles','log_income_capita','log_population','nb_airline',\n 'log_kjf_price','dum_dist']\n\ninstruements = ['log_nsmiles_dif1','log_income_capita_dif1','log_population_dif1','nb_airline_dif1',\n 'log_fare_dif1','log_fare_dif2','log_passengers_dif2',\n 'log_nsmiles_dif2','log_income_capita_dif2','log_population_dif2','nb_airline_dif2']\n\nivgmmmodel = IVGMM(panel_df['log_passengers'],\n panel_df[controls + dum_period[:-2]],\n panel_df[['log_fare','log_passengers_lag1']],\n panel_df[instruements]).fit(cov_type='robust')\n\nprint(ivgmmmodel.summary)", " IV-GMM Estimation Summary \n==============================================================================\nDep. Variable: log_passengers R-squared: 0.9308\nEstimator: IV-GMM Adj. R-squared: 0.9308\nNo. 
Observations: 84939 F-statistic: 7.082e+05\nDate: Thu, Jan 16 2020 P-value (F-stat) 0.0000\nTime: 17:33:38 Distribution: chi2(36)\nCov. Estimator: robust \n \n Parameter Estimates \n=======================================================================================\n Parameter Std. Err. T-stat P-value Lower CI Upper CI\n---------------------------------------------------------------------------------------\nconst 7.3589 0.4639 15.863 0.0000 6.4497 8.2682\nlog_nsmiles 0.1462 0.0042 34.516 0.0000 0.1379 0.1545\nlog_income_capita 0.1455 0.0143 10.180 0.0000 0.1174 0.1735\nlog_population 0.0056 0.0018 3.1815 0.0015 0.0022 0.0091\nnb_airline 0.0478 0.0012 39.374 0.0000 0.0454 0.0502\nlog_kjf_price -1.2150 0.0833 -14.580 0.0000 -1.3783 -1.0517\ndum_dist -0.1075 0.0128 -8.3941 0.0000 -0.1326 -0.0824\ndum_20104 0.2445 0.0204 11.966 0.0000 0.2045 0.2846\ndum_20111 0.3431 0.0317 10.841 0.0000 0.2811 0.4051\ndum_20112 0.7954 0.0429 18.525 0.0000 0.7112 0.8795\ndum_20113 0.5731 0.0401 14.307 0.0000 0.4946 0.6516\ndum_20114 0.5327 0.0388 13.731 0.0000 0.4566 0.6087\ndum_20121 0.5785 0.0432 13.398 0.0000 0.4939 0.6631\ndum_20122 0.7406 0.0387 19.142 0.0000 0.6647 0.8164\ndum_20123 0.5405 0.0382 14.168 0.0000 0.4658 0.6153\ndum_20124 0.5567 0.0383 14.530 0.0000 0.4816 0.6318\ndum_20131 0.5419 0.0406 13.362 0.0000 0.4624 0.6214\ndum_20132 0.6506 0.0319 20.418 0.0000 0.5881 0.7130\ndum_20133 0.5075 0.0344 14.744 0.0000 0.4400 0.5749\ndum_20134 0.5179 0.0335 15.465 0.0000 0.4523 0.5836\ndum_20141 0.4641 0.0354 13.105 0.0000 0.3947 0.5335\ndum_20142 0.7024 0.0347 20.268 0.0000 0.6345 0.7703\ndum_20143 0.5227 0.0338 15.456 0.0000 0.4564 0.5890\ndum_20144 0.3048 0.0204 14.917 0.0000 0.2647 0.3448\ndum_20151 -0.1572 0.0107 -14.734 0.0000 -0.1781 -0.1363\ndum_20152 0.1621 0.0079 20.456 0.0000 0.1466 0.1776\ndum_20153 -0.1173 0.0137 -8.5337 0.0000 -0.1442 -0.0903\ndum_20154 -0.3218 0.0251 -12.798 0.0000 -0.3711 -0.2725\ndum_20161 -0.7467 0.0483 -15.469 0.0000 -0.8414 -0.6521\ndum_20162 -0.2807 0.0337 -8.3378 0.0000 -0.3467 -0.2147\ndum_20163 -0.3669 0.0272 -13.498 0.0000 -0.4202 -0.3136\ndum_20164 -0.2764 0.0220 -12.562 0.0000 -0.3196 -0.2333\ndum_20171 -0.2442 0.0156 -15.657 0.0000 -0.2747 -0.2136\ndum_20172 -0.1053 0.0210 -5.0172 0.0000 -0.1464 -0.0642\ndum_20173 -0.2087 0.0163 -12.763 0.0000 -0.2407 -0.1766\nlog_fare -0.5807 0.0217 -26.789 0.0000 -0.6232 -0.5382\nlog_passengers_lag1 0.8719 0.0035 249.19 0.0000 0.8650 0.8788\n=======================================================================================\n\nEndogenous: log_fare, log_passengers_lag1\nInstruments: log_nsmiles_dif1, log_income_capita_dif1, log_population_dif1, nb_airline_dif1, log_fare_dif1, log_fare_dif2, log_passengers_dif2, log_nsmiles_dif2, log_income_capita_dif2, log_population_dif2, nb_airline_dif2\nGMM Covariance\nDebiased: False\nRobust (Heteroskedastic)\n" ], [ "ivgmmmodel.j_stat", "_____no_output_____" ] ], [ [ "## Testing Autocorrelation\nThe regression residuals are not autocorrelated ? See [reference](https://www.statsmodels.org/stable/diagnostic.html)", "_____no_output_____" ] ], [ [ "from statsmodels.stats.diagnostic import acorr_ljungbox\nfrom statsmodels.stats.diagnostic import acorr_breusch_godfrey\n", "_____no_output_____" ] ], [ [ "### 1. Ljung-Box test for no autocorrelation", "_____no_output_____" ] ], [ [ "ljungbox_test = acorr_ljungbox(ivgmmmodel.resids.values)", "_____no_output_____" ], [ "ljungbox_test", "_____no_output_____" ] ], [ [ "### 2. 
Breusch Godfrey test for no autocorrelation of residuals", "_____no_output_____" ] ], [ [ "from statsmodels.tsa.tsatools import lagmat\nfrom statsmodels.regression.linear_model import OLS\nfrom scipy import stats\nname = ['Lagrange multiplier statistic:','Lagrange multiplier P-value:','f_statistic for F test:','P-value for F test:']", "_____no_output_____" ] ], [ [ "### 2.1. Breusch Godfrey test using GMM results\n- The following function returns the Breusch Godfrey test. For more details, refer to its docstring.", "_____no_output_____" ] ], [ [ "class ResultsStore:\n    \"\"\"Minimal stand-in for the statsmodels-internal results holder, which is\n    not imported here; used only when store=True.\"\"\"\n    pass\n\n\ndef breusch_godfrey_lm(results, nlags=None, store=False):\n    \n    \"\"\"\n    Breusch Godfrey Lagrange Multiplier tests for residual autocorrelation\n    Parameters:\n    ----------\n    - results(Result instance): Estimation results for which the residuals are tested for serial correlation\n    - nlags(int): Number of lags to include in the auxiliary regression. (nlags is highest lag)\n    - store(bool): If store is true, then an additional class instance that contains intermediate results is returned.\n    Returns\n    -------\n    - lm(float): Lagrange multiplier test statistic\n    - lmpval(float): p-value for Lagrange multiplier test\n    - fval(float): fstatistic for F test, alternative version of the same test based on F test for the parameter restriction\n    - fpval(float): pvalue for F test\n    - resstore(instance, optional): a class instance that holds intermediate results. Only returned if store=True\n    Notes\n    -----\n    BG adds lags of residual to exog in the design matrix for the auxiliary regression with residuals as endog, see Greene 12.7.1.\n    References\n    ----------\n    - Greene Econometrics, 5th edition\n    - https://www.statsmodels.org/stable/generated/statsmodels.stats.diagnostic.acorr_breusch_godfrey.html#statsmodels.stats.diagnostic.acorr_breusch_godfrey\n    \"\"\"\n\n    x = np.asarray(results.resids)\n    exog_old = results.model.exog.original\n    nobs = x.shape[0]\n    if nlags is None:\n        # default lag length from Greene, referencing Schwert (1989)\n        nlags = np.trunc(12. * np.power(nobs/100., 1/4.)) #nobs//4 #TODO: check default, or do AIC/BIC\n    nlags = int(nlags)\n\n    x = np.concatenate((np.zeros(nlags), x))\n\n    #xdiff = np.diff(x)\n    #\n    xdall = lagmat(x[:,None], nlags, trim='both')\n    nobs = xdall.shape[0]\n    xdall = np.c_[np.ones((nobs,1)), xdall]\n    xshort = x[-nobs:]\n    exog = np.column_stack((exog_old, xdall))\n    k_vars = exog.shape[1]\n\n    if store: resstore = ResultsStore()\n\n    resols = OLS(xshort, exog).fit()\n    ft = resols.f_test(np.eye(nlags, k_vars, k_vars - nlags))\n    fval = ft.fvalue\n    fpval = ft.pvalue\n    fval = np.squeeze(fval)[()] #TODO: fix this in ContrastResults\n    fpval = np.squeeze(fpval)[()]\n    lm = nobs * resols.rsquared\n    lmpval = stats.chi2.sf(lm, nlags)\n    # Note: degrees of freedom for LM test is nvars minus constant = usedlags\n    #return fval, fpval, lm, lmpval\n\n    if store:\n        resstore.resols = resols\n        resstore.usedlag = nlags\n        return lm, lmpval, fval, fpval, resstore\n    else:\n        return lm, lmpval, fval, fpval", "_____no_output_____" ], [ "breusch_godfrey_test_gmm = breusch_godfrey_lm(ivgmmmodel)\nprint(pd.Series(breusch_godfrey_test_gmm, \n index=name))\n", "Lagrange multiplier statistic: 24302.637078\nLagrange multiplier P-value: 0.000000\nf_statistic for F test: 529.058838\nP-value for F test: 0.000000\ndtype: float64\n" ]
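, [ "# Usage sketch: nlags can be set explicitly instead of the Schwert (1989) default;\n# nlags=4 here is an illustrative choice, not from the original analysis.\nprint(pd.Series(breusch_godfrey_lm(ivgmmmodel, nlags=4), index=name))", "_____no_output_____" ]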
 ], [ [ "### 2.2. Breusch-Godfrey test using OLS results\n- We use the statsmodels implementation (`acorr_breusch_godfrey`) for this step", "_____no_output_____" ] ], [ [ "olsmodel = sm.OLS(panel_df.log_passengers,\n panel_df[['log_passengers_lag1','pass_lag_resids']+controls+dum_period[:-2]]\n ).fit()\n\nbreusch_godfrey_test_ols = acorr_breusch_godfrey(olsmodel)\nprint(pd.Series(breusch_godfrey_test_ols, \n index=name))\n", "Lagrange multiplier statistic: 26077.145491\nLagrange multiplier P-value: 0.000000\nf_statistic for F test: 587.275220\nP-value for F test: 0.000000\ndtype: float64\n" ] ], [ [ "# GMM with `Kernel` cov_type option", "_____no_output_____" ], [ "- The Breusch-Godfrey test applied to both sets of results (`GMM` and `OLS`) shows that we have autocorrelation.\n- Hence, we have to run the `GMM` taking this `autocorrelation` into account, together with the `heteroskedasticity` already detected with the `White` and `Breusch–Pagan` tests. \n- Consequently, the `kernel` `cov_type` option will be used:\n - Kernel (HAC)\n - Kernel: bartlett", "_____no_output_____" ] ], [ [ "controls = ['const','log_nsmiles','log_income_capita','log_population','nb_airline',\n 'log_kjf_price','dum_dist']\n\ninstruements = ['log_nsmiles_dif1','log_income_capita_dif1','log_population_dif1','nb_airline_dif1',\n 'log_fare_dif1','log_fare_dif2','log_passengers_dif2',\n 'log_nsmiles_dif2','log_income_capita_dif2','log_population_dif2','nb_airline_dif2']\n\nivgmmmodel = IVGMM(panel_df['log_passengers'],\n panel_df[controls + dum_period[:-2]],\n panel_df[['log_fare','log_passengers_lag1']],\n panel_df[instruements]).fit(cov_type='kernel')\n\nprint(ivgmmmodel.summary)", " IV-GMM Estimation Summary \n==============================================================================\nDep. Variable: log_passengers R-squared: 0.9308\nEstimator: IV-GMM Adj. R-squared: 0.9308\nNo. Observations: 84939 F-statistic: 4.921e+06\nDate: Thu, Jan 16 2020 P-value (F-stat) 0.0000\nTime: 17:36:22 Distribution: chi2(36)\nCov. Estimator: kernel \n \n Parameter Estimates \n=======================================================================================\n Parameter Std. Err. 
T-stat P-value Lower CI Upper CI\n---------------------------------------------------------------------------------------\nconst 7.3589 1.2615 5.8334 0.0000 4.8864 9.8315\nlog_nsmiles 0.1462 0.0082 17.914 0.0000 0.1302 0.1622\nlog_income_capita 0.1455 0.0741 1.9629 0.0497 0.0002 0.2907\nlog_population 0.0056 0.0077 0.7274 0.4670 -0.0095 0.0207\nnb_airline 0.0478 0.0012 38.586 0.0000 0.0454 0.0502\nlog_kjf_price -1.2150 0.0419 -28.997 0.0000 -1.2971 -1.1329\ndum_dist -0.1075 0.0123 -8.7743 0.0000 -0.1316 -0.0835\ndum_20104 0.2445 0.0243 10.067 0.0000 0.1969 0.2922\ndum_20111 0.3431 0.0174 19.690 0.0000 0.3090 0.3773\ndum_20112 0.7954 0.0198 40.221 0.0000 0.7566 0.8341\ndum_20113 0.5731 0.0196 29.244 0.0000 0.5347 0.6115\ndum_20114 0.5327 0.0138 38.592 0.0000 0.5056 0.5597\ndum_20121 0.5785 0.0140 41.377 0.0000 0.5511 0.6059\ndum_20122 0.7406 0.0209 35.404 0.0000 0.6996 0.7816\ndum_20123 0.5405 0.0196 27.542 0.0000 0.5021 0.5790\ndum_20124 0.5567 0.0137 40.588 0.0000 0.5298 0.5836\ndum_20131 0.5419 0.0132 41.040 0.0000 0.5160 0.5678\ndum_20132 0.6506 0.0161 40.486 0.0000 0.6191 0.6821\ndum_20133 0.5075 0.0205 24.807 0.0000 0.4674 0.5476\ndum_20134 0.5179 0.0122 42.412 0.0000 0.4940 0.5419\ndum_20141 0.4641 0.0105 44.078 0.0000 0.4435 0.4848\ndum_20142 0.7024 0.0215 32.678 0.0000 0.6603 0.7445\ndum_20143 0.5227 0.0279 18.734 0.0000 0.4680 0.5774\ndum_20144 0.3048 0.0089 34.330 0.0000 0.2874 0.3222\ndum_20151 -0.1572 0.0111 -14.104 0.0000 -0.1791 -0.1354\ndum_20152 0.1621 0.0124 13.049 0.0000 0.1378 0.1865\ndum_20153 -0.1173 0.0117 -10.043 0.0000 -0.1402 -0.0944\ndum_20154 -0.3218 0.0129 -24.885 0.0000 -0.3471 -0.2965\ndum_20161 -0.7467 0.0340 -21.951 0.0000 -0.8134 -0.6801\ndum_20162 -0.2807 0.0116 -24.256 0.0000 -0.3034 -0.2580\ndum_20163 -0.3669 0.0097 -37.665 0.0000 -0.3860 -0.3478\ndum_20164 -0.2764 0.0123 -22.460 0.0000 -0.3006 -0.2523\ndum_20171 -0.2442 0.0108 -22.656 0.0000 -0.2653 -0.2230\ndum_20172 -0.1053 0.0110 -9.5866 0.0000 -0.1268 -0.0838\ndum_20173 -0.2087 0.0148 -14.083 0.0000 -0.2377 -0.1796\nlog_fare -0.5807 0.0809 -7.1764 0.0000 -0.7393 -0.4221\nlog_passengers_lag1 0.8719 0.0036 244.04 0.0000 0.8649 0.8789\n=======================================================================================\n\nEndogenous: log_fare, log_passengers_lag1\nInstruments: log_nsmiles_dif1, log_income_capita_dif1, log_population_dif1, nb_airline_dif1, log_fare_dif1, log_fare_dif2, log_passengers_dif2, log_nsmiles_dif2, log_income_capita_dif2, log_population_dif2, nb_airline_dif2\nGMM Covariance\nDebiased: False\nKernel (HAC)\nKernel: bartlett\nBandwidth: 84937\n" ], [ "fig, ax = plt.subplots(figsize=(12,8))\n\na = plt.axes(aspect='equal')\nplt.scatter(panel_df.log_passengers.values, ivgmmmodel.predict().values, alpha=.01, c='b')\n\nplt.title(\"GMM: Predicted vs True value\", fontsize=27)\nplt.xlabel('True Values [log_passengers]', fontsize=20)\nplt.ylabel('GMM Predictions [log_passengers]', fontsize=20)\n\nlims = [panel_df.log_passengers.min(), panel_df.log_passengers.max()]\nplt.xlim(lims)\nplt.ylim(lims)\n_ = plt.plot(lims, lims, c='r')\n", "_____no_output_____" ], [ "fig, ax = plt.subplots(figsize=(12, 8))\n\nsns.distplot(ivgmmmodel.resids, bins=200, hist=True, kde='gaussian', color='b', ax=ax, norm_hist=True)\nsns.distplot(ivgmmmodel.resids, bins=200, hist=False, kde='kernel', color='r', ax=ax, norm_hist=True)\nax.set_title(\"GMM Residuals Plot\", fontsize=27)\nax.set_xlim(-1.5,1.5)\nax.set_xlabel('GMM Residuals', fontsize=20)\n\nplt.show", "_____no_output_____" ] ], [ [ "### 3. 
Exogeneity Testing using GMM model\nThe J statistic tests the overidentifying restrictions: if the sample moment conditions are jointly close enough to zero, the extra instruments are consistent with exogeneity. \nThe statistic is defined as $\bar{g}'W^{-1}\bar{g} \sim \chi^2_q$ (a minimal hand computation of this statistic is sketched after this notebook)", "_____no_output_____" ] ], [ [ "ivgmmmodel.j_stat", "_____no_output_____" ] ], [ [ "### 4. Exogeneity test using the augmented regression approach\nEstimating the variances of $u_i$, assuming that $\sigma^2(u_i) = \exp(a_0 + a_1 \cdot log\_fare + a_2 \cdot log\_nsmiles)$\n- Regress the `log of the squared IV residuals` on the suspect regressors using OLS\n- Compute the `inverse of sigma` as the square root of the exponential of the fitted values", "_____no_output_____" ], [ "### 4.1. Use the `IV2SLS` `residuals` as response variable in the `OLS` model and compute the `inverse of the sigma`", "_____no_output_____" ] ], [ [ "controls = ['const','log_nsmiles','log_income_capita','log_population','nb_airline',\n            'log_kjf_price','dum_dist']\n\ninstruements = ['log_nsmiles_dif1','log_income_capita_dif1','log_population_dif1','nb_airline_dif1',\n                'log_fare_dif1','log_fare_dif2','log_passengers_dif2',\n                'log_nsmiles_dif2','log_income_capita_dif2','log_population_dif2','nb_airline_dif2']\n\niv2LSmodel = IV2SLS(panel_df['log_passengers'], \n                    panel_df[controls+dum_period[:-2]], \n                    panel_df[['log_fare','log_passengers_lag1']], panel_df[instruements]).fit()\n", "_____no_output_____" ], [ "panel_df['log_iv_residus2'] = np.log(iv2LSmodel.resids**2)\n\nr2_aug = PanelOLS(panel_df.log_iv_residus2, panel_df[['log_fare','log_passengers_lag1']]).fit(cov_type='robust')\n\n# computes 1/sigma to be used later as weight for correcting for heteroskedasticity\nsigma_inverse = 1/(np.exp(r2_aug.predict())**.5) # np.sqrt()", "_____no_output_____" ] ], [ [ "### 4.2. Feasible Generalized Least Squares (GLS)\nRunning GLS on the augmented regression gives an exogeneity test for \"log_fare\" that allows for heteroskedasticity", "_____no_output_____" ] ], [ [ "glsmodel = sm.GLS(panel_df['log_passengers'], panel_df[controls+['log_fare','log_passengers_lag1']+dum_period[:-2]], sigma=sigma_inverse).fit()\nprint(glsmodel.summary())", "                            GLS Regression Results                            \n==============================================================================\nDep. Variable:         log_passengers   R-squared:                       0.941\nModel:                            GLS   Adj. R-squared:                  0.941\nMethod:                 Least Squares   F-statistic:                 3.732e+04\nDate:                Thu, 16 Jan 2020   Prob (F-statistic):               0.00\nTime:                        17:36:26   Log-Likelihood:                -11860.\nNo. 
Observations: 84939 AIC: 2.379e+04\nDf Residuals: 84902 BIC: 2.414e+04\nDf Model: 36 \nCovariance Type: nonrobust \n=======================================================================================\n coef std err t P>|t| [0.025 0.975]\n---------------------------------------------------------------------------------------\nconst 6.1190 0.419 14.602 0.000 5.298 6.940\nlog_nsmiles 0.0507 0.002 27.077 0.000 0.047 0.054\nlog_income_capita 0.0470 0.011 4.281 0.000 0.025 0.068\nlog_population 0.0065 0.002 4.123 0.000 0.003 0.010\nnb_airline 0.0247 0.001 40.989 0.000 0.024 0.026\nlog_kjf_price -1.1877 0.077 -15.427 0.000 -1.339 -1.037\ndum_dist -0.0529 0.010 -5.143 0.000 -0.073 -0.033\nlog_fare -0.1114 0.003 -32.551 0.000 -0.118 -0.105\nlog_passengers_lag1 0.9350 0.001 809.629 0.000 0.933 0.937\ndum_20104 0.2906 0.019 15.557 0.000 0.254 0.327\ndum_20111 0.3501 0.029 11.954 0.000 0.293 0.408\ndum_20112 0.7731 0.040 19.482 0.000 0.695 0.851\ndum_20113 0.5728 0.037 15.456 0.000 0.500 0.645\ndum_20114 0.5232 0.036 14.560 0.000 0.453 0.594\ndum_20121 0.5639 0.040 14.094 0.000 0.485 0.642\ndum_20122 0.7015 0.036 19.578 0.000 0.631 0.772\ndum_20123 0.5273 0.035 14.924 0.000 0.458 0.597\ndum_20124 0.5355 0.035 15.101 0.000 0.466 0.605\ndum_20131 0.5195 0.038 13.819 0.000 0.446 0.593\ndum_20132 0.6233 0.030 21.126 0.000 0.565 0.681\ndum_20133 0.4609 0.032 14.431 0.000 0.398 0.524\ndum_20134 0.5015 0.031 16.156 0.000 0.441 0.562\ndum_20141 0.4533 0.033 13.814 0.000 0.389 0.518\ndum_20142 0.6579 0.032 20.477 0.000 0.595 0.721\ndum_20143 0.4808 0.031 15.297 0.000 0.419 0.542\ndum_20144 0.2745 0.019 14.403 0.000 0.237 0.312\ndum_20151 -0.1723 0.010 -17.652 0.000 -0.191 -0.153\ndum_20152 0.1258 0.007 18.806 0.000 0.113 0.139\ndum_20153 -0.1363 0.013 -10.772 0.000 -0.161 -0.111\ndum_20154 -0.3249 0.023 -14.024 0.000 -0.370 -0.280\ndum_20161 -0.7419 0.044 -16.694 0.000 -0.829 -0.655\ndum_20162 -0.3019 0.031 -9.813 0.000 -0.362 -0.242\ndum_20163 -0.3608 0.025 -14.419 0.000 -0.410 -0.312\ndum_20164 -0.2741 0.020 -13.524 0.000 -0.314 -0.234\ndum_20171 -0.2459 0.014 -17.162 0.000 -0.274 -0.218\ndum_20172 -0.1240 0.019 -6.495 0.000 -0.161 -0.087\ndum_20173 -0.2074 0.015 -14.003 0.000 -0.236 -0.178\n==============================================================================\nOmnibus: 16947.182 Durbin-Watson: 2.556\nProb(Omnibus): 0.000 Jarque-Bera (JB): 282466.567\nSkew: -0.502 Prob(JB): 0.00\nKurtosis: 11.877 Cond. No. 1.06e+04\n==============================================================================\n\nWarnings:\n[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.\n[2] The condition number is large, 1.06e+04. This might indicate that there are\nstrong multicollinearity or other numerical problems.\n" ], [ "glsmodel.params[:9]", "_____no_output_____" ], [ "glsmodel.bse[:9]", "_____no_output_____" ] ], [ [ "### 4.3. GLSA model\nWe can use the GLSAR model with one lag, to get to a similar result:", "_____no_output_____" ] ], [ [ "glsarmodel = sm.GLSAR(panel_df['log_passengers'], panel_df[controls+['log_fare','log_passengers_lag1']+dum_period[:-2]], 1)\nglsarresults = glsarmodel.iterative_fit(1)\nprint(glsarresults.summary())", " GLSAR Regression Results \n==============================================================================\nDep. Variable: log_passengers R-squared: 0.942\nModel: GLSAR Adj. R-squared: 0.942\nMethod: Least Squares F-statistic: 3.816e+04\nDate: Thu, 16 Jan 2020 Prob (F-statistic): 0.00\nTime: 17:36:26 Log-Likelihood: -10797.\nNo. 
Observations: 84938 AIC: 2.167e+04\nDf Residuals: 84901 BIC: 2.201e+04\nDf Model: 36 \nCovariance Type: nonrobust \n=======================================================================================\n coef std err t P>|t| [0.025 0.975]\n---------------------------------------------------------------------------------------\nconst 6.2079 0.415 14.976 0.000 5.395 7.020\nlog_nsmiles 0.0532 0.002 28.643 0.000 0.050 0.057\nlog_income_capita 0.0458 0.011 4.223 0.000 0.025 0.067\nlog_population 0.0070 0.002 4.507 0.000 0.004 0.010\nnb_airline 0.0249 0.001 41.578 0.000 0.024 0.026\nlog_kjf_price -1.1938 0.076 -15.675 0.000 -1.343 -1.044\ndum_dist -0.0525 0.011 -4.971 0.000 -0.073 -0.032\nlog_fare -0.1244 0.004 -34.377 0.000 -0.131 -0.117\nlog_passengers_lag1 0.9345 0.001 809.954 0.000 0.932 0.937\ndum_20104 0.2894 0.019 15.630 0.000 0.253 0.326\ndum_20111 0.3511 0.029 12.115 0.000 0.294 0.408\ndum_20112 0.7797 0.039 19.866 0.000 0.703 0.857\ndum_20113 0.5803 0.037 15.831 0.000 0.508 0.652\ndum_20114 0.5240 0.036 14.744 0.000 0.454 0.594\ndum_20121 0.5659 0.040 14.302 0.000 0.488 0.643\ndum_20122 0.7080 0.035 19.982 0.000 0.639 0.777\ndum_20123 0.5370 0.035 15.366 0.000 0.468 0.605\ndum_20124 0.5390 0.035 15.367 0.000 0.470 0.608\ndum_20131 0.5227 0.037 14.060 0.000 0.450 0.596\ndum_20132 0.6268 0.029 21.484 0.000 0.570 0.684\ndum_20133 0.4696 0.032 14.868 0.000 0.408 0.531\ndum_20134 0.5022 0.031 16.359 0.000 0.442 0.562\ndum_20141 0.4563 0.032 14.064 0.000 0.393 0.520\ndum_20142 0.6656 0.032 20.953 0.000 0.603 0.728\ndum_20143 0.4915 0.031 15.817 0.000 0.431 0.552\ndum_20144 0.2757 0.019 14.635 0.000 0.239 0.313\ndum_20151 -0.1714 0.010 -17.779 0.000 -0.190 -0.153\ndum_20152 0.1325 0.007 20.077 0.000 0.120 0.145\ndum_20153 -0.1295 0.013 -10.347 0.000 -0.154 -0.105\ndum_20154 -0.3270 0.023 -14.262 0.000 -0.372 -0.282\ndum_20161 -0.7460 0.044 -16.964 0.000 -0.832 -0.660\ndum_20162 -0.2954 0.030 -9.702 0.000 -0.355 -0.236\ndum_20163 -0.3548 0.025 -14.326 0.000 -0.403 -0.306\ndum_20164 -0.2760 0.020 -13.761 0.000 -0.315 -0.237\ndum_20171 -0.2459 0.014 -17.347 0.000 -0.274 -0.218\ndum_20172 -0.1216 0.019 -6.435 0.000 -0.159 -0.085\ndum_20173 -0.1963 0.015 -13.378 0.000 -0.225 -0.168\n==============================================================================\nOmnibus: 16449.105 Durbin-Watson: 2.573\nProb(Omnibus): 0.000 Jarque-Bera (JB): 248359.057\nSkew: -0.505 Prob(JB): 0.00\nKurtosis: 11.316 Cond. No. 1.06e+04\n==============================================================================\n\nWarnings:\n[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.\n[2] The condition number is large, 1.06e+04. 
This might indicate that there are\nstrong multicollinearity or other numerical problems.\n" ], [ "glsarresults.params[:9]", "_____no_output_____" ], [ "glsarresults.bse[:9]", "_____no_output_____" ], [ "fig, ax = plt.subplots(figsize=(12,8))\n\nax = plt.axes(aspect='equal')\nax.scatter(panel_df.log_passengers.values, glsmodel.predict(), alpha=.01, c='b')\nplt.title(\"GLS: Predicted vs True value\", fontsize=27)\nplt.xlabel('True Values [log_passengers]', fontsize=20)\nplt.ylabel('GLS Predictions [log_passengers]', fontsize=20)\n\nlims = [panel_df.log_passengers.min(), panel_df.log_passengers.max()]\nax.set_xlim(lims)\nax.set_ylim(lims)\n_ = ax.plot(lims, lims, c='r')\n", "_____no_output_____" ], [ "fig, ax = plt.subplots(figsize=(12, 8))\n\nsns.distplot(glsmodel.resid, bins=200, hist=True, kde='gaussian', color='b', ax=ax, norm_hist=True)\nsns.distplot(glsmodel.resid, bins=200, hist=False, kde='kernel', color='r', ax=ax, norm_hist=True)\nax.set_title(\"GLS Residuals Plot\", fontsize=27)\nax.set_xlim(-1.5,1.5)\nax.set_xlabel('GLS Residuals', fontsize=20)\n\nplt.show()", "_____no_output_____" ] ], [ [ "> GLS is the model that takes autocorrelated residuals into account, [source](https://stats.stackexchange.com/questions/254505/autocorrelation-and-gls)", "_____no_output_____" ], [ "## References \nStatsModels – Regression Diagnostics and Specification: https://www.statsmodels.org/stable/diagnostic.html \nLinearmodels 4.14 – Examples: https://bashtage.github.io/linearmodels/devel/panel/examples/examples.html \nLinearmodels 4.5 – Examples: https://bashtage.github.io/linearmodels/doc/panel/examples/examples.html?highlight=white \nLinearmodels 4.5 – Linear Instrumental-Variables Regression: https://bashtage.github.io/linearmodels/doc/iv/examples/advanced-examples.html \nPDF – Heteroskedasticity and Autocorrelation: http://www.homepages.ucl.ac.uk/~uctpsc0/Teaching/GR03/Heter&Autocorr.pdf \nPDF – (Orleans) Linear Panel Models and Heterogeneity: https://www.univ-orleans.fr/deg/masters/ESA/CH/Geneve_Chapitre1.pdf \nPDF – Panel Data Models with Heterogeneity and Endogeneity: https://www.ifs.org.uk/docs/wooldridge%20session%204.pdf \nPDF – Instrumental Variables Estimation: http://www3.grips.ac.jp/~yamanota/Lecture%20Note%208%20to%2010%202SLS%20&%20others.pdf \nGeneralized Least Squares: https://www.statsmodels.org/dev/examples/notebooks/generated/gls.html \nEndogenous Variables and IV Regression in Python: https://bfdykstra.github.io/2016/11/17/Endogeneity-and-Instrumental-Variable-Regression.html \nPDF – P.3 Economics 241B Estimation with Instruments: http://econ.ucsb.edu/~doug/241b/Lectures/16%20Estimation%20with%20Instruments.pdf \nPDF – How to Test Endogeneity or Exogeneity: An E-learning Hands-on SAS: http://www.kiran.nic.in/pdf/Social_Science/e-learning/How_to_Test_Endogeneity_or_Exogeneity_using_SAS-1.pdf", "_____no_output_____" ] ] ]
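The J statistic in the notebook above is reported by `linearmodels`; the sketch below reconstructs $\bar{g}'W^{-1}\bar{g}$ by hand so the formula is concrete. It runs on simulated data, not the airline panel, so every name in it (`z`, `x`, `y`, the sample size) is illustrative only.

```python
# Hand-rolled two-step GMM and Hansen's J statistic on simulated data (sketch).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, q = 5000, 3                                    # 3 instruments, 1 regressor -> 2 overidentifying restrictions
z = rng.normal(size=(n, q))                       # instruments, valid by construction
u = rng.normal(size=n)                            # structural error
x = z.sum(axis=1) + 0.5 * u + rng.normal(size=n)  # endogenous regressor (correlated with u)
y = 2.0 * x + u

zx, zy = z.T @ x / n, z.T @ y / n
w1 = np.linalg.inv(z.T @ z / n)                   # first-step weight (equivalent to 2SLS)
b1 = (zx @ w1 @ zy) / (zx @ w1 @ zx)
ze = z * (y - x * b1)[:, None]
S = ze.T @ ze / n                                 # estimated covariance of the moment conditions
w2 = np.linalg.inv(S)                             # efficient second-step weight
b2 = (zx @ w2 @ zy) / (zx @ w2 @ zx)

gbar = z.T @ (y - x * b2) / n                     # sample moments at the GMM estimate
J = n * gbar @ w2 @ gbar                          # J = n * gbar' S^-1 gbar ~ chi2(q - 1)
print(J, stats.chi2.sf(J, df=q - 1))              # p-value is typically large here: valid instruments
```

Because the instruments are valid by construction, the test should usually fail to reject, the opposite of what a badly chosen instrument set would produce.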
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown" ] ]
4a3d2d50ba8225793f75dad700622e4097fe757d
30,997
ipynb
Jupyter Notebook
Bimbo/.ipynb_checkpoints/xgb_v2-checkpoint.ipynb
zonemercy/Kaggle
35ecb08272b6491f5e6756c97c7dec9c46a13a43
[ "MIT" ]
17
2017-10-01T00:10:19.000Z
2022-02-07T12:11:01.000Z
Bimbo/.ipynb_checkpoints/xgb_v2-checkpoint.ipynb
zonemercy/Kaggle
35ecb08272b6491f5e6756c97c7dec9c46a13a43
[ "MIT" ]
null
null
null
Bimbo/.ipynb_checkpoints/xgb_v2-checkpoint.ipynb
zonemercy/Kaggle
35ecb08272b6491f5e6756c97c7dec9c46a13a43
[ "MIT" ]
1
2019-08-15T03:58:51.000Z
2019-08-15T03:58:51.000Z
30.538916
148
0.34513
[ [ [ "import os\n\nmingw_path = 'C:\\\\Users\\\\a1\\\\mingw\\\\mingw64\\\\bin'\n\nos.environ['PATH'] = mingw_path + ';' + os.environ['PATH']\nimport xgboost as xgb\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport numpy as np\n%matplotlib inline\ntrain = pd.read_csv('new_train_mean_cl.csv')\ntest = pd.read_csv('new_test_mean_cl.csv')", "_____no_output_____" ], [ "del test['Unnamed: 0']\ndel train['Unnamed: 0']", "_____no_output_____" ], [ "test = test.drop_duplicates(subset='id').set_index(keys='id').sort_index()", "_____no_output_____" ], [ "test = test[[u'Semana', u'Producto_ID', u'Cliente_ID', u'lag1', u'lag2', u'lag3', u'Agencia_ID', u'Canal_ID', u'Ruta_SAK',\n u'Cliente_ID_town_count', u'price', u'weight', u'pieces', u'cluster_nombre', u'drink', u'w_per_piece', u'OXXO', \n u'ARTELI', u'ALSUPER', u'BODEGA', u'CALIMAX', u'XICANS', u'ABARROTES', u'CARNICERIA', u'FRUTERIA', \n u'DISTRIBUIDORA', u'ELEVEN', u'HOTEL', u'HOSPITAL', u'CAFE', u'FARMACIA', u'CREME', u'SUPER', u'COMOD', \n u'MODELOR', u'UNKN']]", "_____no_output_____" ], [ "def RMSLE_score(pred, true):\n score = np.power(pred-true, 2)\n return np.sqrt(np.mean(score))", "_____no_output_____" ], [ "from sklearn import cross_validation\nfrom sklearn.preprocessing import LabelEncoder\nfrom xgboost.sklearn import XGBRegressor\nfrom sklearn import grid_search", "_____no_output_____" ], [ "X = train\ny = train['Demanda_uni_equil_log0'].copy()\ndel train['Demanda_uni_equil_log0']", "_____no_output_____" ], [ "del X['drink']\ndel X['DISTRIBUIDORA']\ndel X['ARTELI']\ndel X['CALIMAX']\ndel X['MODELOR']\ndel X['HOSPITAL']\ndel X['HOTEL']\ndel test['drink']\ndel test['DISTRIBUIDORA']\ndel test['ARTELI']\ndel test['CALIMAX']\ndel test['MODELOR']\ndel test['HOSPITAL']\ndel test['HOTEL']", "_____no_output_____" ], [ "mean_submission = pd.read_csv('submit_mean.csv').set_index(keys='id').sort_index()\nfor w in [6, 7]: \n train_index = (train['Semana'] == w)\n test_index = ~(train['Semana'] == w)\n X_train, X_test = X[train_index], X[test_index]\n y_train, y_test = y[train_index], y[test_index]\n xgbr = XGBRegressor(colsample_bytree=0.8,learning_rate=0.05, max_depth=15, n_estimators=100, reg_lambda=0.01, subsample=0.8)\n xgbr.fit(X_train, y_train)\n preds = xgbr.predict(X_test)\n preds[preds<0] = 0\n print RMSLE_score(preds, y_test)\n subms = xgbr.predict(test)\n mean_submission['xgb_demanda'+str(w)] = np.expm1(subms)\nmean_submission.to_csv('subm_xgb_mean6.csv')", "0.484640830711\n0.47437522819\n" ], [ "subms = xgbr.predict(test)", "_____no_output_____" ], [ "pd.Series(np.expm1(subms)).to_csv('subm_xgb.csv')", "_____no_output_____" ], [ "mean_submission = pd.read_csv('submit_mean.csv').set_index(keys='id').sort_index()", "_____no_output_____" ], [ "mean_submission['xgb_demanda'] = np.expm1(subms)", "_____no_output_____" ], [ "mean_submission['subm'] = 0.3*mean_submission['xgb_demanda6']+0.7*(0.3*mean_submission['xgb_demanda6']+0.3*mean_submission['xgb_demanda7']+\n 0.2*mean_submission['xgb_demanda8']+0.2*mean_submission['xgb_demanda9'])", "_____no_output_____" ], [ "mean_submission['subm'].to_csv('subm_xgb.csv')", "_____no_output_____" ], [ "mean_submission", "_____no_output_____" ], [ "test.head()", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a3d2ebf9c71d8a7426157ea79f9de900ac9bcf8
496,165
ipynb
Jupyter Notebook
ds_book/_build/html/_sources/docs/Lesson2a_get_planet_NICFI.ipynb
developmentseed/tensorflow-eo-training
7444576f5d1cdb0c64ed5de2fe5c65fbb1aac809
[ "Apache-2.0" ]
15
2022-01-07T18:24:22.000Z
2022-01-27T05:11:02.000Z
ds_book/docs/Lesson2a_get_planet_NICFI.ipynb
developmentseed/tensorflow-eo-training
7444576f5d1cdb0c64ed5de2fe5c65fbb1aac809
[ "Apache-2.0" ]
10
2022-01-06T05:15:09.000Z
2022-03-28T20:14:51.000Z
ds_book/_build/html/_sources/docs/Lesson2a_get_planet_NICFI.ipynb
developmentseed/tensorflow-eo-training
7444576f5d1cdb0c64ed5de2fe5c65fbb1aac809
[ "Apache-2.0" ]
7
2022-01-10T09:37:54.000Z
2022-01-21T17:04:47.000Z
644.37013
237,369
0.775657
[ [ [ "# Access and mosaic Planet NICFI monthly basemaps", "_____no_output_____" ], [ "> A guide for accessing monthly Planet NICFI basemaps, selecting data by a defined AOI and mosaicing to produce a single image. ", "_____no_output_____" ], [ "You will need a configuration file named `planet_api.cfg` (simple text file with `.cfg` extension will do) to run this notebook. It should be located in your `My Drive` folder.\n\nThe contents of the file should reflect the template below, swapping in the API access key that you should have receieved once you signed up for and subscribed to the Planet NICFI program. Please visit https://www.planet.com/nicfi/ to sign up if you have not already. \n\n\n\n```\n[credentials]\napi_key = xxxxxxxxxxxxxxxxx\n```\n\n\n\n\n", "_____no_output_____" ], [ "## Setup Notebook", "_____no_output_____" ], [ "```{admonition} **Version control**\nColab updates without warning to users, which can cause notebooks to break. Therefore, we are pinning library versions.\n``` ", "_____no_output_____" ] ], [ [ "!pip install -q rasterio==1.2.10\n!pip install -q geopandas==0.10.2\n!pip install -q shapely==1.8.0\n!pip install -q radiant_mlhub # for dataset access, see: https://mlhub.earth/", "_____no_output_____" ], [ "# import required libraries\nimport os, glob, functools, fnmatch, requests, io, shutil, tarfile, json\nfrom pathlib import Path\nfrom zipfile import ZipFile\nfrom itertools import product\nfrom configparser import ConfigParser\nimport urllib.request\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\nmpl.rcParams['axes.grid'] = False\nmpl.rcParams['figure.figsize'] = (12,12)\n\nimport rasterio\nfrom rasterio.merge import merge\nfrom rasterio.plot import show\nimport geopandas as gpd\nfrom folium import Map, GeoJson, Figure\nfrom shapely.geometry import box\n\nfrom IPython.display import clear_output\n\nfrom radiant_mlhub import Dataset, client, get_session, Collection", "_____no_output_____" ], [ "# configure Radiant Earth MLHub access\n!mlhub configure", "_____no_output_____" ], [ "# set your root directory and tiled data folders\nif 'google.colab' in str(get_ipython()):\n # mount google drive\n from google.colab import drive\n drive.mount('/content/gdrive')\n root_dir = '/content/gdrive/My Drive/tf-eo-devseed/' \n workshop_dir = '/content/gdrive/My Drive/tf-eo-devseed-workshop'\n dirs = [root_dir, workshop_dir]\n for d in dirs:\n if not os.path.exists(d):\n os.makedirs(d)\n print('Running on Colab')\nelse:\n root_dir = os.path.abspath(\"./data/tf-eo-devseed\")\n workshop_dir = os.path.abspath('./tf-eo-devseed-workshop')\n print(f'Not running on Colab, data needs to be downloaded locally at {os.path.abspath(root_dir)}')\n", "_____no_output_____" ], [ "# Go to root folder\n%cd $root_dir", "_____no_output_____" ] ], [ [ "```{admonition} **GCS note!**\nWe won't be using Google Cloud Storage to download data, but here is a code snippet to show how to practically do so with the a placeholder \"aoi\" vector file. 
This code works if you have access to the a project on GCP.\n```", "_____no_output_____" ], [ "```python\n#authenticate Google Cloud Storage\nfrom google.colab import auth\nauth.authenticate_user()\nprint(\"Authenticated Google Gloud access.\")\n\n\n# Imports the Google Cloud client library\nfrom google.cloud import storage\n\n# Instantiates a client\nproject = 'tf-eo-training-project'\nstorage_client = storage.Client(project=project)\n\n# The name for the new bucket\nbucket_name = \"dev-seed-workshop\"\n\ndata_dir = os.path.join(workshop_dir,'data/')\ngcs_to_local_dir = os.path.join(data_dir,'gcs/')\nprefix = 'data/'\nlocal_dir = os.path.join(gcs_to_local_dir, prefix)\ndirs = [data_dir, gcs_to_local_dir, local_dir]\nfor dir in dirs:\n if not os.path.exists(dir):\n os.makedirs(dir)\n\n\nbucket_name = \"dev-seed-workshop\"\nbucket = storage_client.get_bucket(bucket_name)\nblobs = bucket.list_blobs(prefix=prefix) # Get list of files\nfor blob in blobs:\n print(blob)\n filename = blob.name.replace('/', '_') \n filename_split = os.path.splitext(filename)\n filename_zero, fileext = filename_split\n basename = os.path.basename(filename_zero)\n filename = 'aoi'\n blob.download_to_filename(os.path.join(local_dir, \"%s%s\" % (basename, fileext))) # Download \n print(blob, \"%s%s\" % (basename, fileext))\n\n```", "_____no_output_____" ], [ "### Get search parameters\n- Read the AOI from a [Radiant Earth MLHub dataset](https://mlhub.earth/data/ref_african_crops_kenya_01) that overlaps with NICFI coverage into a Geopandas dataframe.\n- Get AOI bounds and centroid.\n- Authenticate with Planet NICFI API key.\n- Choose mosaic based on month/year of interest.\n", "_____no_output_____" ] ], [ [ "collections = [\n 'ref_african_crops_kenya_01_labels'\n]\n\ndef download(collection_id):\n print(f'Downloading {collection_id}...')\n collection = Collection.fetch(collection_id)\n path = collection.download('.')\n tar = tarfile.open(path, \"r:gz\")\n tar.extractall()\n tar.close()\n os.remove(path)\n \ndef resolve_path(base, path):\n return Path(os.path.join(base, path)).resolve()\n \ndef load_df(collection_id):\n collection = json.load(open(f'{collection_id}/collection.json', 'r'))\n rows = []\n item_links = []\n for link in collection['links']:\n if link['rel'] != 'item':\n continue\n item_links.append(link['href'])\n for item_link in item_links:\n item_path = f'{collection_id}/{item_link}'\n current_path = os.path.dirname(item_path)\n item = json.load(open(item_path, 'r'))\n tile_id = item['id'].split('_')[-1]\n for asset_key, asset in item['assets'].items():\n rows.append([\n tile_id,\n None,\n None,\n asset_key,\n str(resolve_path(current_path, asset['href']))\n ])\n \n for link in item['links']:\n if link['rel'] != 'source':\n continue\n link_path = resolve_path(current_path, link['href'])\n source_path = os.path.dirname(link_path)\n try:\n source_item = json.load(open(link_path, 'r'))\n except FileNotFoundError:\n continue\n datetime = source_item['properties']['datetime']\n satellite_platform = source_item['collection'].split('_')[-1]\n for asset_key, asset in source_item['assets'].items():\n rows.append([\n tile_id,\n datetime,\n satellite_platform,\n asset_key,\n str(resolve_path(source_path, asset['href']))\n ])\n return pd.DataFrame(rows, columns=['tile_id', 'datetime', 'satellite_platform', 'asset', 'file_path'])\n\nfor c in collections:\n download(c)", "_____no_output_____" ], [ "# Load the shapefile into a geopandas dataframe (for more info see: https://geopandas.org/en/stable/)\ngdf = 
gpd.read_file(os.path.join(root_dir, 'ref_african_crops_kenya_01_labels/ref_african_crops_kenya_01_labels_00/labels.geojson'))\ngdf = gdf.to_crs(\"EPSG:4326\")\n# Get AOI bounds\nbbox_aoi = gdf.geometry.total_bounds\n# Get AOI centroid for plotting with folium\ncentroid_aoi = [box(*bbox_aoi).centroid.x, box(*bbox_aoi).centroid.y]", "_____no_output_____" ], [ "# authenticate with Planet NICFI API KEY\nconfig = ConfigParser()\nconfigFilePath = '/content/gdrive/My Drive/planet_api.cfg'\nwith open(configFilePath) as f:\n config.read_file(f)\nAPI_KEY = config.get('credentials', 'api_key')\nPLANET_API_KEY = API_KEY # <= insert API key here \n#setup Planet base URL\nAPI_URL = \"https://api.planet.com/basemaps/v1/mosaics\"\n#setup session\nsession = requests.Session()\n#authenticate\nsession.auth = (PLANET_API_KEY, \"\") #<= change to match variable for API Key if needed", "_____no_output_____" ] ], [ [ "```{important}\nIn the following cell, the **name__is** parameter is the basemap name. It is only differentiable by the time range in the name.\n\nE.g. `planet_medres_normalized_analytic_2021-06_mosaic` is for June, 2021.\n\n \n```", "_____no_output_____" ] ], [ [ "#set params for search using name of mosaic\nparameters = {\n \"name__is\" :\"planet_medres_normalized_analytic_2021-06_mosaic\" # <= customized to month/year of interest\n}\n#make get request to access mosaic from basemaps API\nres = session.get(API_URL, params = parameters)\n#response status code\nprint(res.status_code)", "_____no_output_____" ], [ "#print metadata for mosaic\nmosaic = res.json()\n#print(\"mosaic metadata (this will expose your API key so be careful about if/where you uncomment this line): \", json.dumps(mosaic, indent=2))", "_____no_output_____" ], [ "#get id\nmosaic_id = mosaic['mosaics'][0]['id']\n#get bbox for entire mosaic\nmosaic_bbox = mosaic['mosaics'][0]['bbox']\nprint(\"mosaic_bbox: \", mosaic_bbox)\nprint(\"bbox_aoi: \", bbox_aoi)\n#converting bbox to string for search params\nstring_bbox = ','.join(map(str, bbox_aoi))\n\nprint('Mosaic id: ', mosaic_id)", "_____no_output_____" ] ], [ [ "#### Plot the gridded AOI. ", "_____no_output_____" ] ], [ [ "m = Map(tiles=\"Stamen Terrain\",\n control_scale=True,\n location = [centroid_aoi[1], centroid_aoi[0]],\n zoom_start = 10,\n max_zoom = 20,\n min_zoom =6,\n width = '100%',\n height = '100%',\n zoom_control=False )\nGeoJson(gdf).add_to(m)\nFigure(width=500, height=300).add_child(m)", "_____no_output_____" ] ], [ [ "### Request the quad tiles fitting the search parameters", "_____no_output_____" ] ], [ [ "#search for mosaic quad using AOI\nsearch_parameters = {\n 'bbox': string_bbox,\n 'minimal': True\n}\n#accessing quads using metadata from mosaic\nquads_url = \"{}/{}/quads\".format(API_URL, mosaic_id)\nres = session.get(quads_url, params=search_parameters, stream=True)\nprint(res.status_code)", "_____no_output_____" ], [ "quads = res.json()", "_____no_output_____" ], [ "quads = res.json()\nitems = quads['items']\n#printing an example of quad metadata\n#print(\"quad tiles metadata (this will expose your API key so be careful about if/where you uncomment this line): \", json.dumps(items[0], indent=2))", "_____no_output_____" ] ], [ [ "#### Plot the requested quad tiles. 
", "_____no_output_____" ] ], [ [ "for item, i in zip(items, range(len(items))):\n quad_box = item[\"bbox\"]\n GeoJson(box(*quad_box)).add_to(m)\nFigure(width=500, height=300).add_child(m)", "_____no_output_____" ], [ "# Set directory for downloading the quad tiles to\nnicfi_dir = os.path.join(root_dir,'062021_basemap_nicfi_aoi/') \nquads_dir = os.path.join(nicfi_dir,'quads/')\ndirs = [nicfi_dir, quads_dir]\nfor dir in dirs:\n if not os.path.exists(dir):\n os.makedirs(dir)", "_____no_output_____" ], [ "#iterate over quad download links and saving to folder by id\nfor i in items:\n link = i['_links']['download']\n name = i['id']\n name = name + '.tiff'\n DIR = quads_dir\n filename = os.path.join(DIR, name)\n #print(filename)\n\n #checks if file already exists before s\n if not os.path.isfile(filename):\n urllib.request.urlretrieve(link, filename)", "_____no_output_____" ] ], [ [ "### Mosaic the quad tiles", "_____no_output_____" ] ], [ [ "# File and folder paths\nout_mosaic = os.path.join(nicfi_dir,'062021_basemap_nicfi_aoi_Mosaic.tif')\n\n# Make a search criteria to select the quad tile files\nsearch_criteria = \"*.tiff\"\nq = os.path.join(nicfi_dir,'quads', search_criteria)\n\nprint(q)", "_____no_output_____" ], [ "# Get all of the quad tiles\nquad_files = glob.glob(q)", "_____no_output_____" ], [ "quad_files", "_____no_output_____" ], [ "src_files_to_mosaic = []", "_____no_output_____" ], [ "for f in quad_files:\n src = rasterio.open(f)\n src_files_to_mosaic.append(src)", "_____no_output_____" ], [ "# Create the mosaic\nmosaic, out_trans = merge(src_files_to_mosaic)", "_____no_output_____" ], [ "out_meta = src.meta.copy()\nout_meta.update({\"driver\": \"GTiff\",\n \"height\": mosaic.shape[1],\n \"width\": mosaic.shape[2],\n \"transform\": out_trans\n }\n)", "_____no_output_____" ], [ "# Write the mosaic to raster file\nwith rasterio.open(out_mosaic, \"w\", **out_meta) as dest:\n dest.write(mosaic)\n \n# Write true color (RGB).\nrgb_out_mosaic = os.path.join(nicfi_dir,'062021_basemap_nicfi_aoi_rgb_Mosaic.tif')\nout_meta.update({\"count\": 3})\nprint(out_meta)\nrgb = np.dstack([mosaic[2], mosaic[1], mosaic[0]])\nrgb = rgb.transpose(2,0,1)\nwith rasterio.open(rgb_out_mosaic, \"w\", **out_meta) as dest:\n dest.write(rgb)", "_____no_output_____" ] ], [ [ "#### Plot the mosaic", "_____no_output_____" ] ], [ [ "src = rasterio.open(rgb_out_mosaic)\n\nshow(src)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ] ]
4a3d30330a03cc73fcd23c251924f5cfc4479152
43,192
ipynb
Jupyter Notebook
Sentdex/9 - How to program the Best Fit Line.ipynb
HussamCheema/machine-learning
934bc9da56fd6e8ce787735c4403b855adb4daa7
[ "MIT" ]
null
null
null
Sentdex/9 - How to program the Best Fit Line.ipynb
HussamCheema/machine-learning
934bc9da56fd6e8ce787735c4403b855adb4daa7
[ "MIT" ]
null
null
null
Sentdex/9 - How to program the Best Fit Line.ipynb
HussamCheema/machine-learning
934bc9da56fd6e8ce787735c4403b855adb4daa7
[ "MIT" ]
null
null
null
233.47027
15,556
0.926792
[ [ [ "## Linear Regression", "_____no_output_____" ] ], [ [ "from statistics import mean\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib import style\nstyle.use('fivethirtyeight')", "_____no_output_____" ], [ "xs = np.array([1,2,3,4,5,6], dtype=np.float64)\nys = np.array([5,4,6,5,6,7], dtype=np.float64)\n\nplt.scatter(xs,ys)\nplt.show()", "_____no_output_____" ], [ "def best_fit_slope_and_intercept(xs, ys):\n \n m = ((mean(xs) * mean(ys)) - mean(xs*ys)) / ((mean(xs)**2) - mean(xs**2))\n b = mean(ys) - m * mean(xs)\n \n return m, b", "_____no_output_____" ], [ "m, b = best_fit_slope_and_intercept(xs, ys)\nprint(m)\nprint(b)", "0.42857142857142866\n4.0\n" ], [ "regression_line = [(m*x)+b for x in xs]\nprint(regression_line)", "[4.428571428571429, 4.857142857142858, 5.2857142857142865, 5.714285714285714, 6.142857142857143, 6.571428571428572]\n" ], [ "plt.scatter(xs,ys)\nplt.plot(regression_line)\nplt.show()", "_____no_output_____" ], [ "predict_x = 8\npredict_y = (m*predict_x) + b", "_____no_output_____" ], [ "plt.scatter(xs,ys)\nplt.scatter(predict_x,predict_y, color='red')\nplt.plot(regression_line)\nplt.show()\n# This line is good fit line but not the best fit line.", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a3d3c82fdcb8d10d994c95f8bb27bc0e17672e3
33,624
ipynb
Jupyter Notebook
Instructions/.ipynb_checkpoints/Charity_Data-checkpoint.ipynb
kmoravec14/Deep_Learning
8f95c469dee3d03762c167e4343f083b59509419
[ "MIT" ]
null
null
null
Instructions/.ipynb_checkpoints/Charity_Data-checkpoint.ipynb
kmoravec14/Deep_Learning
8f95c469dee3d03762c167e4343f083b59509419
[ "MIT" ]
null
null
null
Instructions/.ipynb_checkpoints/Charity_Data-checkpoint.ipynb
kmoravec14/Deep_Learning
8f95c469dee3d03762c167e4343f083b59509419
[ "MIT" ]
null
null
null
35.808307
381
0.510736
[ [ [ "## Preprocessing", "_____no_output_____" ] ], [ [ "# Import our dependencies\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import StandardScaler\nimport pandas as pd\nimport tensorflow as tf\n\n# Import and read the charity_data.csv.\nimport pandas as pd \ndf = pd.read_csv(\"../Resources/charity_data.csv\")\ndf.head()", "Init Plugin\nInit Graph Optimizer\nInit Kernel\n" ], [ "# df.describe()\n# df['AFFILIATION'].nunique()\ndf.shape\n", "_____no_output_____" ], [ "# importing sweetviz\nimport sweetviz as sv\n\n#analyzing the dataset\ncharity_report = sv.analyze(df)\n#display the report\ncharity_report.show_html('Charity.html')", "_____no_output_____" ], [ "# Drop the non-beneficial ID columns, 'EIN' and 'NAME'.\ndf = df.drop(\"EIN\", 1)\ndf = df.drop(\"NAME\", 1)", "/var/folders/08/k5qj0k9j1dbgppb_7vzc5rnm0000gn/T/ipykernel_49895/673382915.py:2: FutureWarning: In a future version of pandas all arguments of DataFrame.drop except for the argument 'labels' will be keyword-only.\n df = df.drop(\"EIN\", 1)\n/var/folders/08/k5qj0k9j1dbgppb_7vzc5rnm0000gn/T/ipykernel_49895/673382915.py:3: FutureWarning: In a future version of pandas all arguments of DataFrame.drop except for the argument 'labels' will be keyword-only.\n df = df.drop(\"NAME\", 1)\n" ], [ "# Determine the number of unique values in each column.\ndf.nunique()", "_____no_output_____" ], [ "# Look at APPLICATION_TYPE value counts for binning\ndf['APPLICATION_TYPE'].value_counts()", "_____no_output_____" ], [ "# Choose a cutoff value and create a list of application types to be replaced\n# use the variable name `application_types_to_replace`\n# Replace in dataframe\n\napplication_types_to_replace = [\"T9\",\"T13\",\"T12\",\"T2\",\"T25\",\"T14\",\"T29\",\"T15\",\"T17\"]\n\nfor app in application_types_to_replace:\n df['APPLICATION_TYPE'] = df['APPLICATION_TYPE'].replace(app,\"Other\")\n\n# Check to make sure binning was successful\ndf['APPLICATION_TYPE'].value_counts()", "_____no_output_____" ], [ "# Look at CLASSIFICATION value counts for binning\n\nclass_df = df['CLASSIFICATION'].value_counts()\n\nclassifications_to_replace = class_df[(class_df < 1000)].index\n\n# Replace in dataframe\nfor cls in classifications_to_replace:\n df['CLASSIFICATION'] = df['CLASSIFICATION'].replace(cls,\"Other\")\n \n# Check to make sure binning was successful\ndf['CLASSIFICATION'].value_counts()", "_____no_output_____" ], [ "import numpy as np\ndf['ASK_AMT'] = np.log10(df['ASK_AMT'])", "_____no_output_____" ], [ "df.head()", "_____no_output_____" ], [ "# Convert categorical data to numeric with `pd.get_dummies`\ndf = pd.get_dummies(df,drop_first = True)", "_____no_output_____" ], [ "list(df)", "_____no_output_____" ], [ "# Split our preprocessed data into our features and target arrays\nX = df.drop('IS_SUCCESSFUL',axis=1)\ny = df['IS_SUCCESSFUL']\n\n# Split the preprocessed data into a training and testing dataset\nX_train, X_test, y_train, y_test = train_test_split(X, y, random_state=78)", "_____no_output_____" ], [ "# Create a StandardScaler instances\nscaler = StandardScaler()\n\n# Fit the StandardScaler\nX_scaler = scaler.fit(X_train)\n\n# Scale the data\nX_train_scaled = X_scaler.transform(X_train)\nX_test_scaled = X_scaler.transform(X_test)", "_____no_output_____" ], [ "inputs = len(X.columns)", "_____no_output_____" ] ], [ [ "## Compile, Train and Evaluate the Model", "_____no_output_____" ] ], [ [ "# Define the model - deep neural net, i.e., the number of input features and hidden nodes for each layer.\n# 
YOUR CODE GOES HERE\n\nnn = tf.keras.models.Sequential()\n\n# First hidden layer\nnn.add(tf.keras.layers.Dense(units=9, activation=\"relu\", input_dim=inputs))\n\n# Second hidden layer\nnn.add(tf.keras.layers.Dense(units=9, activation=\"relu\"))\n\n# Output layer\nnn.add(tf.keras.layers.Dense(units=1, activation=\"sigmoid\"))\n\n# Check the structure of the model\nnn.summary()", "Metal device set to: Apple M1\n" ], [ "# Compile the model\nnn.compile(loss=\"binary_crossentropy\", optimizer=\"adam\", metrics=[\"accuracy\"])", "_____no_output_____" ], [ "# Checkpoint the weights every 5 epochs ('period' is deprecated in newer TF in favor of 'save_freq'; see the warning below)\nmc = tf.keras.callbacks.ModelCheckpoint('weights{epoch:08d}.h5', save_weights_only=True, period=5)\n\n# Train the model\nfit_model = nn.fit(X_train_scaled, y_train, callbacks=[mc], epochs=10)", "WARNING:tensorflow:`period` argument is deprecated. Please use `save_freq` to specify the frequency in number of batches seen.\nEpoch 1/10\n" ], [ "# Evaluate the model using the test data\nmodel_loss, model_accuracy = nn.evaluate(X_test_scaled,y_test,verbose=2)\nprint(f\"Loss: {model_loss}, Accuracy: {model_accuracy}\")", "2022-04-05 22:26:29.664919: I tensorflow/core/grappler/optimizers/custom_graph_optimizer_registry.cc:112] Plugin optimizer for device_type GPU is enabled.\n" ], [ "# Export our model to HDF5 file\nnn.save(\"AlphabetSoupCharity.h5\")", "_____no_output_____" ] ] ]
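Since the notebook above ends by exporting the network to HDF5, one natural sanity check is reloading the file and re-scoring the test split. The sketch assumes `X_test_scaled` and `y_test` from the cells above are still in scope.

```python
# Reload the exported model and confirm it reproduces the evaluation metrics.
import tensorflow as tf

reloaded_nn = tf.keras.models.load_model("AlphabetSoupCharity.h5")
model_loss, model_accuracy = reloaded_nn.evaluate(X_test_scaled, y_test, verbose=2)
print(f"Reloaded -- Loss: {model_loss}, Accuracy: {model_accuracy}")
```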
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ] ]
4a3d662afc5e30d3a9a8432ce0d5924004c0bcc0
8,262
ipynb
Jupyter Notebook
Array/spectral_unmixing.ipynb
mllzl/earthengine-py-notebooks
cade6a81dd4dbbfb1b9b37aaf6955de42226cfc5
[ "MIT" ]
1
2020-03-26T04:21:15.000Z
2020-03-26T04:21:15.000Z
Array/spectral_unmixing.ipynb
mllzl/earthengine-py-notebooks
cade6a81dd4dbbfb1b9b37aaf6955de42226cfc5
[ "MIT" ]
null
null
null
Array/spectral_unmixing.ipynb
mllzl/earthengine-py-notebooks
cade6a81dd4dbbfb1b9b37aaf6955de42226cfc5
[ "MIT" ]
null
null
null
47.482759
1,031
0.595861
[ [ [ "<table class=\"ee-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://github.com/giswqs/earthengine-py-notebooks/tree/master/Array/spectral_unmixing.ipynb\"><img width=32px src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" /> View source on GitHub</a></td>\n <td><a target=\"_blank\" href=\"https://nbviewer.jupyter.org/github/giswqs/earthengine-py-notebooks/blob/master/Array/spectral_unmixing.ipynb\"><img width=26px src=\"https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png\" />Notebook Viewer</a></td>\n <td><a target=\"_blank\" href=\"https://mybinder.org/v2/gh/giswqs/earthengine-py-notebooks/master?filepath=Array/spectral_unmixing.ipynb\"><img width=58px src=\"https://mybinder.org/static/images/logo_social.png\" />Run in binder</a></td>\n <td><a target=\"_blank\" href=\"https://colab.research.google.com/github/giswqs/earthengine-py-notebooks/blob/master/Array/spectral_unmixing.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" /> Run in Google Colab</a></td>\n</table>", "_____no_output_____" ], [ "## Install Earth Engine API and geemap\nInstall the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.\nThe following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.\n\n**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).", "_____no_output_____" ] ], [ [ "# Installs geemap package\nimport subprocess\n\ntry:\n import geemap\nexcept ImportError:\n print('geemap package not installed. 
Installing ...')\n subprocess.check_call([\"python\", '-m', 'pip', 'install', 'geemap'])\n\n# Checks whether this notebook is running on Google Colab\ntry:\n import google.colab\n import geemap.eefolium as emap\nexcept:\n import geemap as emap\n\n# Authenticates and initializes Earth Engine\nimport ee\n\ntry:\n ee.Initialize()\nexcept Exception as e:\n ee.Authenticate()\n ee.Initialize() ", "_____no_output_____" ] ], [ [ "## Create an interactive map \nThe default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function. ", "_____no_output_____" ] ], [ [ "Map = emap.Map(center=[40,-100], zoom=4)\nMap.add_basemap('ROADMAP') # Add Google Map\nMap", "_____no_output_____" ] ], [ [ "## Add Earth Engine Python script ", "_____no_output_____" ] ], [ [ "# Add Earth Engine dataset\n# Array-based spectral unmixing.\n\n# Create a mosaic of Landsat 5 images from June through September, 2007.\nallBandMosaic = ee.ImageCollection('LANDSAT/LT05/C01/T1') \\\n .filterDate('2007-06-01', '2007-09-30') \\\n .select('B[0-7]') \\\n .median()\n\n# Create some representative endmembers computed previously by sampling\n# the Landsat 5 mosaic.\nurbanEndmember = [88, 42, 48, 38, 86, 115, 59]\nvegEndmember = [50, 21, 20, 35, 50, 110, 23]\nwaterEndmember = [51, 20, 14, 9, 7, 116, 4]\n\n# Compute the 3x7 pseudo inverse.\nendmembers = ee.Array([urbanEndmember, vegEndmember, waterEndmember])\ninverse = ee.Image(endmembers.matrixPseudoInverse().transpose())\n\n# Convert the bands to a 2D 7x1 array. The toArray() call concatenates\n# pixels from each band along the default axis 0 into a 1D vector per\n# pixel, and the toArray(1) call concatenates each band (in this case\n# just the one band of 1D vectors) along axis 1, forming a 2D array.\ninputValues = allBandMosaic.toArray().toArray(1)\n\n# Matrix multiply the pseudo inverse of the endmembers by the pixels to\n# get a 3x1 set of endmembers fractions from 0 to 1.\nunmixed = inverse.matrixMultiply(inputValues)\n\n# Create and show a colored image of the endmember fractions. Since we know\n# the result has size 3x1, project down to 1D vectors at each pixel (since the\n# second axis is pointless now), and then flatten back to a regular scalar\n# image.\ncolored = unmixed \\\n .arrayProject([0]) \\\n .arrayFlatten([['urban', 'veg', 'water']])\nMap.setCenter(-98.4, 19, 11)\n\n# Load a hillshade to use as a backdrop.\nMap.addLayer(ee.Algorithms.Terrain(ee.Image('CGIAR/SRTM90_V4')).select('hillshade'))\nMap.addLayer(colored, {'min': 0, 'max': 1},\n 'Unmixed (red=urban, green=veg, blue=water)')\n", "_____no_output_____" ] ], [ [ "## Display Earth Engine data layers ", "_____no_output_____" ] ], [ [ "Map.addLayerControl() # This line is not needed for ipyleaflet-based Map.\nMap", "_____no_output_____" ] ] ]
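The Earth Engine calls above wrap the unmixing algebra; the NumPy sketch below replays the same pseudo-inverse computation on one synthetic pixel built from the notebook's endmembers (the 0.5/0.3/0.2 mixture is made up for illustration).

```python
# Linear spectral unmixing via pseudo-inverse, mirroring matrixPseudoInverse().
import numpy as np

E = np.array([[88, 42, 48, 38, 86, 115, 59],   # urban endmember
              [50, 21, 20, 35, 50, 110, 23],   # vegetation endmember
              [51, 20, 14,  9,  7, 116,  4]])  # water endmember

true_fractions = np.array([0.5, 0.3, 0.2])
pixel = E.T @ true_fractions                   # a 7-band pixel mixed from the endmembers

fractions = np.linalg.pinv(E.T) @ pixel        # same math as the Earth Engine matrixMultiply step
print(fractions)                               # recovers ~[0.5, 0.3, 0.2]
```

Because the synthetic pixel lies exactly in the span of the endmembers, the recovered fractions are exact; real pixels only get a least-squares approximation.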
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
4a3d691e5c603a496e5ce0cfa182935118d32b5f
134,860
ipynb
Jupyter Notebook
7.18.ipynb
bishuyu/my-second-homework
0c7b05b9e5c2a76e88bd3723b13bedd4e14f8fbf
[ "Apache-2.0" ]
null
null
null
7.18.ipynb
bishuyu/my-second-homework
0c7b05b9e5c2a76e88bd3723b13bedd4e14f8fbf
[ "Apache-2.0" ]
null
null
null
7.18.ipynb
bishuyu/my-second-homework
0c7b05b9e5c2a76e88bd3723b13bedd4e14f8fbf
[ "Apache-2.0" ]
null
null
null
98.22287
100,912
0.847405
[ [ [ "# 选择\n## 布尔类型、数值和表达式\n![](../Photo/33.png)\n- 注意:比较运算符的相等是两个等号,一个等到代表赋值\n- 在Python中可以用整型0来代表False,其他数字来代表True\n- 后面还会讲到 is 在判断语句中的用发", "_____no_output_____" ] ], [ [ "a = id(1)\nb = id(1)\n\nprint(a,b)\n# 因为a和b并不是同一个对象\na is b", "4370970016 4370970016\n" ], [ "a = id(1)\nb = a\na is b", "_____no_output_____" ], [ "a = True\nb = False", "_____no_output_____" ], [ "id(True)", "_____no_output_____" ], [ "a == b", "_____no_output_____" ], [ "a is b", "_____no_output_____" ] ], [ [ "## 字符串的比较使用ASCII值", "_____no_output_____" ] ], [ [ "a = \"jokar\"\nb = \"jokar\"\na > b", "_____no_output_____" ] ], [ [ "## Markdown \n- https://github.com/younghz/Markdown", "_____no_output_____" ], [ "肯定会给发数据划分啊", "_____no_output_____" ], [ "$\\sum_{j=1}^{N}x_{j}$", "_____no_output_____" ], [ "## EP:\n- <img src=\"../Photo/34.png\"></img>\n- 输入一个数字,判断其实奇数还是偶数", "_____no_output_____" ], [ "## 产生随机数字\n- 函数random.randint(a,b) 可以用来产生一个a和b之间且包括a和b的随机整数", "_____no_output_____" ] ], [ [ "import random", "_____no_output_____" ], [ "random.randint(0,1)", "_____no_output_____" ], [ "if condition:\n do someething\nelse:\n other", "_____no_output_____" ], [ "for iter_ in xxx:\n do something\n", "_____no_output_____" ], [ "age = 10\nJoker = eval(input('Name'))\nprint(Joker)", "Nameage\n10\n" ] ], [ [ "产生一个随机数,你去输入,如果你输入的数大于随机数,那么就告诉你太大了,反之,太小了,\n然后你一直输入,知道它满意为止", "_____no_output_____" ] ], [ [ "number =random.randint(0,5)", "_____no_output_____" ], [ "for i in range(5):\n input_ = eval(input('>>'))\n if input_ > number:\n print('太大啦')\n if input_ < number:\n print('太小啦')\n if number == input_:\n print('正好')\n break", ">>0\n太小啦\n>>0\n太小啦\n>>0\n太小啦\n>>0\n太小啦\n>>0\n太小啦\n" ], [ "for i in range(5):\n print(i)", "0\n1\n2\n3\n4\n" ] ], [ [ "## 其他random方法\n- random.random 返回0.0到1.0之间前闭后开区间的随机浮点\n- random.randrange(a,b) 前闭后开", "_____no_output_____" ] ], [ [ "random.random()", "_____no_output_____" ], [ "import matplotlib.pyplot as plt", "_____no_output_____" ], [ "image=plt.imread('/Users/huwang/Downloads/cat.jpeg')\nprint(image*random.random())\nplt.imshow(image)", "[[[41.86550961 15.53771491 17.69573087]\n [47.04474791 22.01176278 21.58015959]\n [52.22398621 28.48581066 25.46458832]\n ...\n [95.3843054 93.65789263 63.01406601]\n [95.3843054 93.65789263 63.01406601]\n [94.52109902 92.79468625 61.28765324]]\n\n [[42.2971128 15.9693181 18.12733406]\n [47.04474791 22.01176278 21.58015959]\n [52.22398621 28.48581066 25.46458832]\n ...\n [95.3843054 93.65789263 63.01406601]\n [95.3843054 93.65789263 63.01406601]\n [94.52109902 92.79468625 61.28765324]]\n\n [[42.72871599 16.83252448 17.69573087]\n [47.4763511 22.44336598 22.01176278]\n [52.65558941 28.91741385 25.89619151]\n ...\n [95.3843054 93.65789263 63.01406601]\n [95.3843054 93.65789263 63.01406601]\n [94.95270221 93.22628944 62.58246282]]\n\n ...\n\n [[65.60368516 67.33009793 56.54001813]\n [71.21452666 73.37254261 63.4456692 ]\n [79.4149873 83.29941603 75.09895538]\n ...\n [28.91741385 15.10611171 24.16977874]\n [29.34901705 15.9693181 23.73817555]\n [29.78062024 18.99054044 28.05420747]]\n\n [[67.33009793 69.0565107 58.2664309 ]\n [72.94093942 75.09895538 65.60368516]\n [81.14140007 85.02582879 76.82536815]\n ...\n [30.64382662 16.83252448 25.89619151]\n [29.34901705 15.53771491 24.60138194]\n [28.91741385 18.12733406 27.62260428]]\n\n [[56.54001813 58.2664309 52.22398621]\n [66.03528835 70.78292346 63.87727239]\n [80.27819368 87.61544795 79.4149873 ]\n ...\n [33.66504896 22.87496917 31.07542981]\n [30.64382662 19.85374683 28.91741385]\n [31.50703301 22.01176278 
27.62260428]]]\n" ] ], [ [ "## EP:\n- 产生两个随机整数number1和number2,然后显示给用户,使用户输入数字的和,并判定其是否正确\n- 进阶:写一个随机序号点名程序", "_____no_output_____" ] ], [ [ "number_1 = random.randrange(0,10)\nnumber_2 = random.randrange(0,10)\n\nwhile 1:\n sum_ = eval(input('>>'))\n if sum_ == (number_1 + number_2):\n print('Congratulations! Correct~')\n else:\n print('Sorry~SB.')", ">>2\nSorry~SB.\n>>3\nSorry~SB.\n>>4\nSorry~SB.\n>>5\nSorry~SB.\n>>6\nSorry~SB.\n" ] ], [ [ "## if语句\n- 如果条件正确就执行一个单向if语句,亦即当条件为真的时候才执行if内部的语句\n- Python有很多选择语句:\n> - 单向if \n - 双向if-else\n - 嵌套if\n - 多向if-elif-else\n \n- 注意:当语句含有子语句的时候,那么一定至少要有一个缩进,也就是说如果有儿子存在,那么一定要缩进\n- 切记不可tab键和space混用,单用tab 或者 space\n- 当你输出的结果是无论if是否为真时都需要显示时,语句应该与if对齐", "_____no_output_____" ] ], [ [ "input_ = eval(input('>>'))\nif input_ > number:\n print('太大啦')\nif input_ < number:\n print('太小啦')\nif number == input_:\n print('正好')\nprint('不要灰心')", ">>1\n正好\n不要灰心\n" ] ], [ [ "李文浩相亲测试树\n\n 年龄\n 老 年轻 \n拜拜 \n 帅\n 否 是\n 考虑一下 老婆\n 没有 有\n 马上结婚 回家的诱惑\n \n代码写不出来的立马分手,从此社会上有多出一个渣男/渣女.", "_____no_output_____" ] ], [ [ "age = input('年轻嘛[y/n]')\nif age == 'y':\n handsome = input('帅否[y/n]')\n if handsome == 'y':\n wife = input('有没有老婆[y/n]')\n if wife == 'y':\n print('回家的诱惑')\n else:\n print('立马结婚')\n else:\n print('考虑一下')\nelse:\n print('拜拜~')", "年轻嘛[y/n]y\n帅否[y/n]y\n有没有老婆[y/n]y\n回家的诱惑\n" ] ], [ [ "## EP:\n- 用户输入一个数字,判断其实奇数还是偶数\n- 进阶:可以查看下4.5实例研究猜生日", "_____no_output_____" ], [ "## 双向if-else 语句\n- 如果条件为真,那么走if内部语句,否则走else内部语句", "_____no_output_____" ], [ "## EP:\n- 产生两个随机整数number1和number2,然后显示给用户,使用户输入数字,并判定其是否正确,如果正确打印“you‘re correct”,否则打印正确错误", "_____no_output_____" ], [ "## 嵌套if 和多向if-elif-else\n![](../Photo/35.png)", "_____no_output_____" ] ], [ [ "if score >= 80:\n gread = 'B'\nelif score>=90:\n gread = 'A'", "_____no_output_____" ] ], [ [ "## EP:\n- 提示用户输入一个年份,然后显示表示这一年的动物\n![](../Photo/36.png)\n- 计算身体质量指数的程序\n- BMI = 以千克为单位的体重除以以米为单位的身高的平方\n![](../Photo/37.png)", "_____no_output_____" ] ], [ [ "tizhong = eval(input('体重'))\nshengao = eval(input('身高'))\nBMI = tizhong / shengao ** 2\nif BMI<18.5 :\n print('超清')\nelif 18.5<=BMI<25 :\n print('标准')\nelif 25<=BMI<30 :\n print('超重')\nelse:\n print('超级无敌胖')", "体重500\n身高10000\n超清\n" ] ], [ [ "## 逻辑运算符\n![](../Photo/38.png)", "_____no_output_____" ], [ "![](../Photo/39.png)\n![](../Photo/40.png)", "_____no_output_____" ], [ "## EP:\n- 判定闰年:一个年份如果能被4整除但不能被100整除,或者能被400整除,那么这个年份就是闰年\n- 提示用户输入一个年份,并返回是否是闰年\n- 提示用户输入一个数字,判断其是否为水仙花数", "_____no_output_____" ], [ "## 实例研究:彩票\n![](../Photo/41.png)", "_____no_output_____" ] ], [ [ "import random", "_____no_output_____" ], [ "number = random.randint(10,99)\nprint(number)\nN = input('>>')\nnumber_shi = number // 10\nnumber_ge = number % 10\n\nif N[0] == '0':\n N_shi = 0\nelse:\n N_shi = int(N) // 10\n N_ge = int(N) % 10\n\nif number == int(N):\n print('10000')\n# elif (number_shi == N_shi or number_shi==N_ge) and (number_ge == N_shi or number_ge==N_ge):\nelif number_shi + number_ge == N_shi + N_ge:\n print('3000')\nelif (number_ge ==N_ge or number_ge == N_shi) or (number_shi == N_ge or number_shi == N_shi):\n print('1000')", "24\n>>40\n1000\n" ], [ "a = \"05\"\na[0]", "_____no_output_____" ], [ "05 // 10", "_____no_output_____" ], [ "Number = eval(input('>>'))\n\nbai = Number // 100\nshi = Number // 10 % 10\nge = Number % 10\n\nif bai**3 + shi **3 + ge **3 == Number:\n print('水仙花')\nelse:\n print('不是水仙花')", ">>123\n不是水仙花\n" ], [ "223 // 10", "_____no_output_____" ] ], [ [ "# Homework\n- 1\n![](../Photo/42.png)", "_____no_output_____" ] ], [ [ "a,b,c = map(float,input('Enter a, b, 
c:').split(','))\nif b ** 2 - 4 * a * c > 0:\n r1 = (-b + (b ** 2 - 4 * a * c) ** 0.5) / (2 * a)\n r2 = (-b - (b ** 2 - 4 * a * c) ** 0.5) / (2 * a)\n print('The roots are%.6f and %.6f' %(r1,r2))\nelif b ** 2 - 4 * a * c == 0:\n r1 = (-b + (b ** 2 - 4 * a * c) ** 0.5) / (2 * a)\n print('有一个根为%.1f' %r1)\nelse:\n print('The equation has no reeal roots') ", "Enter a, b, c:1,3,1\nThe roots are-0.381966 and -2.618034\n" ] ], [ [ "- 2\n![](../Photo/43.png)", "_____no_output_____" ] ], [ [ "import random\na = random.randrange(0,100)\nb = random.randrange(0,100)\nprint(a)\nprint(b)\nh = a + b\nc = eval(input('请输入这两个数的和'))\nif c == h:\n print('you are right')\nelse :\n print('you are pig')\n ", "20\n18\n请输入这两个数的和36\nyou are pig\n" ] ], [ [ "- 3\n![](../Photo/44.png)", "_____no_output_____" ] ], [ [ "today = int(input('Enter today`s day: '))\nfuture = int(input('Enter the nuber of days elapssed since today: '))\ns = ( today + future ) % 7\nif today ==1:\n xingqi= 'Monday'\nif today ==2:\n xingqi= 'Tuesday'\nif today ==3:\n xingqi= 'Wednesday'\nif today ==4:\n xingqi= 'Thursday'\nif today ==5:\n xingqi= 'Friday'\nif today ==6:\n xingqi= 'Saturday'\nif today ==0:\n xingqi= 'Sunday'\nif s==1:\n xq='Monday'\nelif s==2:\n xq='Tuesday'\nelif s==3:\n xq='Wednesday'\nelif s==4:\n xq='Thursday'\nelif s==5:\n xq='Friday'\nelif s==6:\n xq='Staurday'\nelif s==0:\n xq='Sunday'\nprint('Today is %s and the future day is %s' % (xingqi,xq))", "Enter today`s day: 1\nEnter the nuber of days elapssed since today: 3\nToday is Monday and the future day is Thursday\n" ] ], [ [ "- 4\n![](../Photo/45.png)", "_____no_output_____" ] ], [ [ "a,b,c = eval(input('请输入三个整数: '))\nif a<b<c:\n print(a,b,c)\nelif a<c<b:\n print(a,c,b)\nelif b<a<c:\n print(b,a,c)\nelif b<c<a:\n print(b,c,a)\nelif c<a<b:\n print(c,a,b)\nelse :\n print(c,b,a)", "请输入三个整数: 99,11,236\n11 99 236\n" ] ], [ [ "- 5\n![](../Photo/46.png)", "_____no_output_____" ] ], [ [ "w1,p1 = eval(input('Enter weight and price for package 1: '))\nw2,p2 = eval(input('Enter weight and price for package 2: '))\na = p1/ w1\nb = p2/ w2\nif a>b:\n print('Package 2 has thebetter price')\nelse:\n print('Package 1 has thebetter price')", "Enter weight and price for package 1: 50,24.59\nEnter weight and price for package 2: 25,11.99\nPackage 2 has thebetter price\n" ] ], [ [ "- 6\n![](../Photo/47.png)", "_____no_output_____" ] ], [ [ "month,year=map(int,input('Enter month and year:').split(','))\nday1=1,3,5,7,8,10,12\nday2=4,6,9,11\nif month in day1:\n print('%d月有31天'%month)\nif month==2:\n if (year % 4 ==0 and year % 100 !=0 ) or (year % 400 == 0):\n print('二月份有29天')\n else:\n print('二月份有28天')\nif month in day2:\n print('%d月份有30天'%month)\n", "Enter month and year:3,2005\n3月有31天\n" ] ], [ [ "- 7\n![](../Photo/48.png)", "_____no_output_____" ] ], [ [ "import numpy as np\nguess=input('请输入你的猜测:')\na = np.random.choice(['正面','反面'])\nif guess == a:\n print('you are right!')\nelse:\n print('you are worry~')", "请输入你的猜测:反面\nyou are worry~\n" ] ], [ [ "- 8\n![](../Photo/49.png)", "_____no_output_____" ] ], [ [ "import random\ns = random.randrange(0,3)\nr = int(input('石头剪刀布:'))\nif r == s:\n print(\"The computer is {}. You are {} too.It is a draw.\".format(s,r))\nelif (r == 0 and s == 1) or (r == 1 and s == 2) or (r == 2 and s == 0):\n print(\"The computer is {}. You are {}.You won.\".format(s,r))\nelse:\n print(\"The computer is {}. You are {} too.You lose.\".format(s,r))", "石头剪刀布:2\nThe computer is 2. 
[ [ "- 4\n![](../Photo/45.png)", "_____no_output_____" ] ], [ [ "a,b,c = eval(input('Enter three integers: '))\nif a < b < c:\n    print(a, b, c)\nelif a < c < b:\n    print(a, c, b)\nelif b < a < c:\n    print(b, a, c)\nelif b < c < a:\n    print(b, c, a)\nelif c < a < b:\n    print(c, a, b)\nelse:\n    print(c, b, a)", "Enter three integers: 99,11,236\n11 99 236\n" ] ], [ [ "- 5\n![](../Photo/46.png)", "_____no_output_____" ] ], [ [ "w1,p1 = eval(input('Enter weight and price for package 1: '))\nw2,p2 = eval(input('Enter weight and price for package 2: '))\na = p1 / w1   # unit price of package 1\nb = p2 / w2   # unit price of package 2\nif a > b:\n    print('Package 2 has the better price')\nelse:\n    print('Package 1 has the better price')", "Enter weight and price for package 1: 50,24.59\nEnter weight and price for package 2: 25,11.99\nPackage 2 has the better price\n" ] ], [ [ "- 6\n![](../Photo/47.png)", "_____no_output_____" ] ], [ [ "month,year = map(int, input('Enter month and year:').split(','))\nday1 = 1,3,5,7,8,10,12\nday2 = 4,6,9,11\nif month in day1:\n    print('Month %d has 31 days' % month)\nif month == 2:\n    if (year % 4 == 0 and year % 100 != 0) or (year % 400 == 0):\n        print('February has 29 days')\n    else:\n        print('February has 28 days')\nif month in day2:\n    print('Month %d has 30 days' % month)\n", "Enter month and year:3,2005\nMonth 3 has 31 days\n" ] ], [ [ "- 7\n![](../Photo/48.png)", "_____no_output_____" ] ], [ [ "import numpy as np\nguess = input('Enter your guess (heads/tails): ')\na = np.random.choice(['heads','tails'])\nif guess == a:\n    print('You are right!')\nelse:\n    print('Sorry, wrong guess~')", "Enter your guess (heads/tails): tails\nSorry, wrong guess~\n" ] ], [ [ "- 8\n![](../Photo/49.png)", "_____no_output_____" ] ], [ [ "import random\n# 0 = rock, 1 = scissors, 2 = paper\ns = random.randrange(0,3)\nr = int(input('Rock(0)/scissors(1)/paper(2): '))\nif r == s:\n    print(\"The computer is {}. You are {} too. It is a draw.\".format(s,r))\nelif (r == 0 and s == 1) or (r == 1 and s == 2) or (r == 2 and s == 0):\n    print(\"The computer is {}. You are {}. You won.\".format(s,r))\nelse:\n    print(\"The computer is {}. You are {}. You lose.\".format(s,r))", "Rock(0)/scissors(1)/paper(2): 2\nThe computer is 2. You are 2 too. It is a draw.\n" ] ], [ [ "- 9\n![](../Photo/50.png)", "_____no_output_____" ] ], [ [ "year = int(input('Enter year:(e.g.,2008): '))\nmonth = int(input('Enter month: 1-12:'))\nq = int(input('Enter the day of the month:1-31: '))\n# In Zeller's congruence, January and February count as months 13 and 14 of the previous year;\n# the original check for month == 13 or 14 could never trigger for input in 1-12\nif month == 1 or month == 2:\n    month = month + 12\n    year = year - 1\na = 26 * (month + 1) // 10\nk = year % 100\nj = year // 100\nh = (q + a + k + k // 4 + j // 4 + 5 * j) % 7\nif h == 2:\n    xq = 'Monday'\nelif h == 3:\n    xq = 'Tuesday'\nelif h == 4:\n    xq = 'Wednesday'\nelif h == 5:\n    xq = 'Thursday'\nelif h == 6:\n    xq = 'Friday'\nelif h == 0:\n    xq = 'Saturday'\nelif h == 1:\n    xq = 'Sunday'\nprint('Day of the week is %s' % xq)", "Enter year:(e.g.,2008): 2013\nEnter month: 1-12:1\nEnter the day of the month:1-31: 25\nDay of the week is Friday\n" ] ], [ [ "- 10\n![](../Photo/51.png)", "_____no_output_____" ] ], [ [ "import numpy as np\nimport random\nhuase = np.random.choice(['Clubs','Hearts','Diamonds','Spades'])\ndaxiao = random.choice(['Ace','2','3','4','5','6','7','8','9','10','Jack','Queen','King'])\nprint('The card you picked is the %s of %s' % (daxiao, huase))", "The card you picked is the 6 of Clubs\n" ] ], [ [ "- 11\n![](../Photo/52.png)", "_____no_output_____" ] ], [ [ "n = int(input('Enter a three-digit integer: '))\na = n // 100 % 10\nb = n // 10 % 10\nc = n % 10\nm = c * 100 + b * 10 + a   # the number with its digits reversed\nif m == n:\n    print('%d is a palindrome' % n)\nelse:\n    print('%d is not a palindrome' % n)", "Enter a three-digit integer: 121\n121 is a palindrome\n" ], [ "a,b,c = eval(input('Enter three edges:'))\nl = a + b + c\nif a + b > c and a + c > b and b + c > a:\n    print('The perimeter is %d' % l)\nelse:\n    print('Invalid input')", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ] ]
4a3d6af9e928dc8abf987633bd2e14c9224843fb
8,699
ipynb
Jupyter Notebook
Day 14.ipynb
jonathanegol/adventofcode2020
a29c6f3f4663709629eeafa708a159d3b9588ff4
[ "MIT" ]
null
null
null
Day 14.ipynb
jonathanegol/adventofcode2020
a29c6f3f4663709629eeafa708a159d3b9588ff4
[ "MIT" ]
null
null
null
Day 14.ipynb
jonathanegol/adventofcode2020
a29c6f3f4663709629eeafa708a159d3b9588ff4
[ "MIT" ]
null
null
null
21.012077
81
0.456719
[ [ [ "## Day 14\n\nhttps://adventofcode.com/2020/day/14", "_____no_output_____" ] ], [ [ "import aocd", "_____no_output_____" ], [ "lines = [line for line in aocd.get_data(day=14, year=2020).splitlines()]\nlen(lines)", "_____no_output_____" ], [ "lines[:5]", "_____no_output_____" ] ], [ [ "### Solution to Part 1", "_____no_output_____" ] ], [ [ "def maskable(value: int) -> list:\n return list(bin(value)[2:].zfill(36))", "_____no_output_____" ], [ "def mask_value(value: int, *, mask: str) -> int:\n value = maskable(value)\n assert len(mask) == 36\n assert len(value) == 36\n masked = []\n for (mask_bit, value_bit) in zip(mask, value):\n bit = value_bit if mask_bit == 'X' else mask_bit\n masked.append(bit)\n return int(''.join(masked), 2)", "_____no_output_____" ], [ "mask = 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXX1XXXX0X'", "_____no_output_____" ], [ "mask_value(11, mask=mask)", "_____no_output_____" ], [ "mask_value(101, mask=mask)", "_____no_output_____" ], [ "def parse_line(line: str):\n cmd, value = line.split(' = ')\n if cmd == 'mask':\n return ('mask', value)\n else:\n addr = int(cmd[4:-1])\n value = int(value)\n return ('mem', (addr, value))", "_____no_output_____" ], [ "def store_mem(lines):\n mask = None\n for line in lines:\n cmd, value = parse_line(line)\n if cmd == 'mask':\n mask = value\n elif cmd == 'mem':\n addr, value = value\n yield addr, mask_value(value, mask=mask)\n else:\n raise RuntimeError('cannot happen')", "_____no_output_____" ], [ "mem = {addr: value for addr, value in store_mem(lines)}\nlen(mem)", "_____no_output_____" ], [ "sum(mem.values())", "_____no_output_____" ] ], [ [ "### Solution to Part 2", "_____no_output_____" ] ], [ [ "def overwrite_floating(masked: str, *, bits: str) -> str:\n assert masked.count('X') == len(bits)\n overwritten = []\n bit = iter(bits)\n for c in masked:\n if c == 'X':\n overwritten.append(next(bit))\n else:\n overwritten.append(c)\n return int(''.join(overwritten), 2)", "_____no_output_____" ], [ "def floating_addrs(masked: str, *, floating: int) -> list:\n floating_bits = [\n bin(i)[2:].zfill(floating)\n for i in range(2 ** floating)\n ]\n return [\n overwrite_floating(masked, bits=bits)\n for bits in floating_bits\n ]", "_____no_output_____" ], [ "def masked_addrs(addr: int, *, mask: str) -> list:\n addr = maskable(addr)\n assert len(mask) == 36\n assert len(addr) == 36\n masked = []\n for (mask_bit, addr_bit) in zip(mask, addr):\n if mask_bit == '0':\n bit = addr_bit\n elif mask_bit == '1':\n bit = '1'\n elif mask_bit == 'X':\n bit = 'X'\n else:\n raise RuntimeError('cannot happen')\n masked.append(bit)\n floating = masked.count('X')\n if floating == 0:\n return [int(''.join(masked), 2)]\n return floating_addrs(masked, floating=floating)", "_____no_output_____" ], [ "mask = '000000000000000000000000000000X1001X'", "_____no_output_____" ], [ "len(masked_addrs(42, mask=mask))", "_____no_output_____" ], [ "def store_mem_part2(lines):\n mask = None\n for line in lines:\n cmd, value = parse_line(line)\n if cmd == 'mask':\n mask = value\n elif cmd == 'mem':\n addr, value = value\n for masked_addr in masked_addrs(addr, mask=mask):\n yield masked_addr, value\n else:\n raise RuntimeError('cannot happen')", "_____no_output_____" ], [ "mem = {addr: value for addr, value in store_mem_part2(lines)}\nlen(mem)", "_____no_output_____" ], [ "sum(mem.values())", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a3d6bb62958ca117239b1fd1a2c07ef2295c419
19,333
ipynb
Jupyter Notebook
2. Data Structure/2.1.ipynb
HackyRoot/ML_Workshop
35df4b9456fbd379101c96b9cd6488a8045b3012
[ "Apache-2.0" ]
1
2018-06-18T04:29:11.000Z
2018-06-18T04:29:11.000Z
2. Data Structure/2.1.ipynb
HackyRoot/ML_Workshop
35df4b9456fbd379101c96b9cd6488a8045b3012
[ "Apache-2.0" ]
null
null
null
2. Data Structure/2.1.ipynb
HackyRoot/ML_Workshop
35df4b9456fbd379101c96b9cd6488a8045b3012
[ "Apache-2.0" ]
null
null
null
20.677005
716
0.465474
[ [ [ "# Lists", "_____no_output_____" ], [ "Data Structure:\n \nA data structure is a collection of data elements (such as numbers or characters—or even other data structures) that is structured in some way, for example, by numbering the elements. The most basic data structure in Python is the \"sequence\".", "_____no_output_____" ], [ "-> List is one of the Sequence Data structure\n\n-> Lists are collection of items (Strings, integers or even other lists)\n\n-> Lists are enclosed in [ ]\n\n-> Each item in the list has an assigned index value.\n\n-> Each item in a list is separated by a comma\n\n-> Lists are mutable, which means they can be changed.\n", "_____no_output_____" ], [ "# List Creation", "_____no_output_____" ] ], [ [ "emptyList = []\n\nlst = ['one', 'two', 'three', 'four'] # list of strings\n\nlst2 = [1, 2, 3, 4] #list of integers\n\nlst3 = [[1, 2], [3, 4]] # list of lists\n\nlst4 = [1, 'ramu', 24, 1.24] # list of different datatypes\n\nprint(lst4)\n", "[1, 'ramu', 24, 1.24]\n" ] ], [ [ "# List Length", "_____no_output_____" ] ], [ [ "lst = ['one', 'two', 'three', 'four']\n\n#find length of a list\nprint(len(lst))", "4\n" ] ], [ [ "# List Append", "_____no_output_____" ] ], [ [ "lst = ['one', 'two', 'three', 'four']\n\nlst.append('five') # append will add the item at the end\n\nprint(lst)", "['one', 'two', 'three', 'four', 'five']\n" ] ], [ [ "# List Insert", "_____no_output_____" ] ], [ [ "#syntax: lst.insert(x, y) \n\nlst = ['one', 'two', 'four']\n\nlst.insert(2, \"three\") # will add element y at location x\n\nprint(lst)\n", "['one', 'two', 'three', 'four']\n" ] ], [ [ "# List Remove", "_____no_output_____" ] ], [ [ "#syntax: lst.remove(x) \n\nlst = ['one', 'two', 'three', 'four', 'two']\n\nlst.remove('two') #it will remove first occurence of 'two' in a given list\n\nprint(lst)", "['one', 'three', 'four', 'two']\n" ] ], [ [ "# List Append & Extend", "_____no_output_____" ] ], [ [ "lst = ['one', 'two', 'three', 'four']\n\nlst2 = ['five', 'six']\n\n#append \nlst.append(lst2)\n\nprint(lst)", "['one', 'two', 'three', 'four', ['five', 'six']]\n" ], [ "lst = ['one', 'two', 'three', 'four']\n\nlst2 = ['five', 'six']\n\n#extend will join the list with list1\n\nlst.extend(lst2)\n\nprint(lst)", "['one', 'two', 'three', 'four', 'five', 'six']\n" ] ], [ [ "# List Delete", "_____no_output_____" ] ], [ [ "#del to remove item based on index position\n\nlst = ['one', 'two', 'three', 'four', 'five']\n\ndel lst[1]\nprint(lst)\n\n#or we can use pop() method\na = lst.pop(1)\nprint(a)\n\nprint(lst)", "['one', 'three', 'four', 'five']\nthree\n['one', 'four', 'five']\n" ], [ "lst = ['one', 'two', 'three', 'four']\n\n#remove an item from list\nlst.remove('three')\n\nprint(lst)", "['one', 'two', 'four']\n" ] ], [ [ "# List realted keywords in Python", "_____no_output_____" ] ], [ [ "#keyword 'in' is used to test if an item is in a list\nlst = ['one', 'two', 'three', 'four']\n\nif 'two' in lst:\n print('AI')\n\n#keyword 'not' can combined with 'in'\nif 'six' not in lst:\n print('ML')", "AI\nML\n" ] ], [ [ "# List Reverse", "_____no_output_____" ] ], [ [ "#reverse is reverses the entire list\n\nlst = ['one', 'two', 'three', 'four']\n\nlst.reverse()\n\nprint(lst)", "['four', 'three', 'two', 'one']\n" ] ], [ [ "# List Sorting", "_____no_output_____" ], [ "The easiest way to sort a List is with the sorted(list) function. \n\nThat takes a list and returns a new list with those elements in sorted order. \n\nThe original list is not changed. \n\nThe sorted() optional argument reverse=True, e.g. 
[ [ "# create a list with numbers\nnumbers = [3, 1, 6, 2, 8]\n\nsorted_lst = sorted(numbers)\n\n\nprint(\"Sorted list :\", sorted_lst)\n\n# the original list remains unchanged\nprint(\"Original list: \", numbers)", "Sorted list : [1, 2, 3, 6, 8]\nOriginal list:  [3, 1, 6, 2, 8]\n" ], [ "# print a list in reverse sorted order\nprint(\"Reverse sorted list :\", sorted(numbers, reverse=True))\n\n# the original list remains unchanged\nprint(\"Original list :\", numbers)", "Reverse sorted list : [8, 6, 3, 2, 1]\nOriginal list : [3, 1, 6, 2, 8]\n" ], [ "lst = [1, 20, 5, 5, 4.2]\n\n# sort the list in place\nlst.sort()\n\nprint(\"Sorted list: \", lst)", "Sorted list:  [1, 4.2, 5, 5, 20]\n" ], [ "lst = [1, 20, 'b', 5, 'a']\n# in Python 3, comparing int and str raises a TypeError, so sorting mixed types fails\ntry:\n    lst.sort()\nexcept TypeError as e:\n    print('Cannot sort mixed types:', e)\n", "Cannot sort mixed types: '<' not supported between instances of 'str' and 'int'\n" ] ], [ [ "# List Having Multiple References", "_____no_output_____" ] ], [ [ "lst = [1, 2, 3, 4, 5]\nabc = lst\nabc.append(6)\n\n# print the original list: it changed too, because abc and lst are the same object\nprint(\"Original list: \", lst)", "Original list:  [1, 2, 3, 4, 5, 6]\n" ] ], 
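[ [ "Because assignment only copies the reference, both names above point at the same list; an independent copy avoids that. This follow-up cell is an added illustration, not part of the original notebook.", "_____no_output_____" ] ], [ [ "lst = [1, 2, 3, 4, 5]\nabc = lst.copy()   # shallow copy: abc is now a separate list object\nabc.append(6)\n\nprint(\"Original list: \", lst)\nprint(\"Copied list: \", abc)", "Original list:  [1, 2, 3, 4, 5]\nCopied list:  [1, 2, 3, 4, 5, 6]\n" ] ], 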
[ [ "# String Split to create a list", "_____no_output_____" ] ], [ [ "# let's take a string\n\ns = \"one,two,three,four,five\"\nslst = s.split(',')\nprint(slst)", "['one', 'two', 'three', 'four', 'five']\n" ], [ "s = \"This is applied AI Course\"\nsplit_lst = s.split() # the default split is on whitespace: space or tab\nprint(split_lst)", "['This', 'is', 'applied', 'AI', 'Course']\n" ] ], [ [ "# List Indexing", "_____no_output_____" ], [ "Each item in the list has an assigned index value starting from 0.\n\nAccessing elements in a list is called indexing.", "_____no_output_____" ] ], [ [ "lst = [1, 2, 3, 4]\nprint(lst[1]) # print the second element\n\n# print the second-to-last element using a negative index\nprint(lst[-2])", "2\n3\n" ] ], [ [ "# List Slicing", "_____no_output_____" ], [ "Accessing parts of a list is called slicing. \n\nThe key point to remember is that the :end value represents the first value that is not in the selected slice. ", "_____no_output_____" ] ], [ [ "numbers = [10, 20, 30, 40, 50, 60, 70, 80]\n\n# print all numbers\nprint(numbers[:]) \n\n# print from index 0 to index 3\nprint(numbers[0:4])\n", "[10, 20, 30, 40, 50, 60, 70, 80]\n[10, 20, 30, 40]\n" ], [ "print(numbers)\n# print alternate elements of the list\nprint(numbers[::2])\n\n\n# print alternate elements starting from index 2\nprint(numbers[2::2])\n", "[10, 20, 30, 40, 50, 60, 70, 80]\n[10, 30, 50, 70]\n[30, 50, 70]\n" ] ], [ [ "# List extend using \"+\"", "_____no_output_____" ] ], [ [ "lst1 = [1, 2, 3, 4]\nlst2 = ['varma', 'naveen', 'murali', 'brahma']\nnew_lst = lst1 + lst2\n\nprint(new_lst)", "[1, 2, 3, 4, 'varma', 'naveen', 'murali', 'brahma']\n" ] ], [ [ "# List Count", "_____no_output_____" ] ], [ [ "numbers = [1, 2, 3, 1, 3, 4, 2, 5]\n\n# frequency of 1 in the list\nprint(numbers.count(1))\n\n# frequency of 3 in the list\nprint(numbers.count(3))", "2\n2\n" ] ], [ [ "# List Looping", "_____no_output_____" ] ], [ [ "# loop through a list\n\nlst = ['one', 'two', 'three', 'four']\n\nfor ele in lst:\n    print(ele)", "one\ntwo\nthree\nfour\n" ] ], [ [ "# List Comprehensions", "_____no_output_____" ], [ "List comprehensions provide a concise way to create lists. \n\nCommon applications are to make new lists where each element is the result of some operations applied to each member of another sequence or iterable, or to create a subsequence of those elements that satisfy a certain condition.", "_____no_output_____" ] ], [ [ "# without a list comprehension\nsquares = []\nfor i in range(10):\n    squares.append(i**2) # list append\nprint(squares)", "[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]\n" ], [ "# using a list comprehension\nsquares = [i**2 for i in range(10)]\nprint(squares)", "[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]\n" ], [ "# example\n\nlst = [-10, -20, 10, 20, 50]\n\n# create a new list with values doubled\nnew_lst = [i*2 for i in lst]\nprint(new_lst)\n\n# filter the list to exclude negative numbers\nnew_lst = [i for i in lst if i >= 0]\nprint(new_lst)\n\n\n# create a list of tuples like (number, square_of_number)\nnew_lst = [(i, i**2) for i in range(10)]\nprint(new_lst)", "[-20, -40, 20, 40, 100]\n[10, 20, 50]\n[(0, 0), (1, 1), (2, 4), (3, 9), (4, 16), (5, 25), (6, 36), (7, 49), (8, 64), (9, 81)]\n" ] ], [ [ "# Nested List Comprehensions", "_____no_output_____" ] ], [ [ "# let's suppose we have a matrix\n\nmatrix = [\n    [1, 2, 3, 4],\n    [5, 6, 7, 8],\n    [9, 10, 11, 12]\n]\n\n# transpose of the matrix without a list comprehension\ntransposed = []\nfor i in range(4):\n    lst = []\n    for row in matrix:\n        lst.append(row[i])\n    transposed.append(lst)\n\nprint(transposed)\n", "[[1, 5, 9], [2, 6, 10], [3, 7, 11], [4, 8, 12]]\n" ], [ "# with a list comprehension\ntransposed = [[row[i] for row in matrix] for i in range(4)]\nprint(transposed)", "[[1, 5, 9], [2, 6, 10], [3, 7, 11], [4, 8, 12]]\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code" ] ]
4a3d83a5603785c216046f9deb0a11e1e4bb39d1
75,553
ipynb
Jupyter Notebook
Incertidumbre/.ipynb_checkpoints/Incertidumbre-checkpoint.ipynb
lucasliano/Medidas1
349f1e3783b35782a445d7e34ab9827ee5117e31
[ "MIT" ]
2
2021-05-02T19:24:58.000Z
2021-05-03T01:19:53.000Z
Incertidumbre/.ipynb_checkpoints/Incertidumbre-checkpoint.ipynb
lucasliano/Medidas1
349f1e3783b35782a445d7e34ab9827ee5117e31
[ "MIT" ]
null
null
null
Incertidumbre/.ipynb_checkpoints/Incertidumbre-checkpoint.ipynb
lucasliano/Medidas1
349f1e3783b35782a445d7e34ab9827ee5117e31
[ "MIT" ]
null
null
null
85.660998
40,872
0.774463
[ [ [ "<div align=\"right\"><a href=\"https://github.com/lucasliano/Medidas1\">Link Github</a></div>\n\n\n\n<img src=\"logo.jpg\" width=\"400\"></img>\n\n<div align=\"center\">\n <h1>Resúmen Teórico de Medidas Electrónicas 1</h1>\n <h2>Incertidumbre</h2>\n <h3>Liaño, Lucas</h3>\n</div>\n\n\n\n# Contenidos\n\n- **Introducción**\n- **Marco Teórico**\n - Conceptos Básicos Metrología\n - ¿Qué es la incertidumbre?\n - Modelo matemático de una medición ($Y$)\n - Evaluación incertidumbre Tipo A\n - Evaluación incertidumbre Tipo B\n - Incertidumbre Conjunta\n - Grado de Confianza\n - Caso de análisis: $u_{i}(x_{i}) \\gg u_{j}(X_{i})$\n - Caso de análisis: $u_{i}(x_{i}) \\ll u_{j}(X_{i})$\n - Correlación\n \n- **Experimentación**\n - Caso General\n - Caso Incertidumbre tipo A dominante\n - Caso Incertidumbre tipo B dominante\n - Ejemplo Correlación\n- **Bibliografía**\n***\n \n# Introducción \n\nEl objetivo del presente documento es de resumir, al mismo tiempo que simular, los contenidos teóricos correspondientes a la unidad N°1 de la materia medidas 1. Para ello, utilizaremos los recursos disponibles en el drive de la materia.\n\n<div class=\"alert alert-success\">\n <strong>Link:</strong> <a href=\"https://drive.google.com/folderview?id=1p1eVB4UoS0C-5gyienup-XiewKsTpcNc\">https://drive.google.com/folderview?id=1p1eVB4UoS0C-5gyienup-XiewKsTpcNc</a>\n</div>\n\n***", "_____no_output_____" ], [ "\n# Marco Teórico\n\n## Conceptos Básicos Metrología\n\nLa de medición de una magnitud física, atributo de un cuerpo mensurable, consiste en el proceso mediante el cual se da a conocer el valor de dicha magnitud. A lo largo de la historia se han desarrollado diversos modelos de medición, todos ellos consisten en la comparación de la magnitud contra un patrón.\n\nA su vez, a medida que se fueron confeccionando mejores métodos de medición, se empezó a tener en consideración el error en la medida. Este error consiste en una indicación cuantitativa de la calidad del resultado. Valor que demuestra la confiabilidad del proceso.\n\nActualmente, definimos al **resultado de una medición** como al conjunto de valores de una magnitud, atribuidos a un mensurando. Se puede definir a partir de una función distribución densidad de probabilidad (también denomidada _pdf_, de la sígla inglesa _probability density function_). El resultado de una medición está caracterizado por la media de la muestra, la incertidumbre y el grado de confianza de la medida.\n\nDenominaremos **incertidumbre de una medición** al parámetro asociado con el resultado de la medición que caracteríza la dispersión de los valores atribuidos a un mensurando. Mientras que el **error de medida** será la diferencia entre el valor medido con un valor de referencia. [[1]](http://depa.fquim.unam.mx/amyd/archivero/CALCULODEINCERTIDUMBRESDR.JAVIERMIRANDA_26197.pdf)\n\n#### Tipos de errores\n\nExisten dos tipos:\n\n> **Error sistemático:** Componente del error que en repetidas mediciones permanece constante.\n\n> **Error aleatorio:** Componente del error que en repetidas mediciones varía de manera impredecible.\n\n***\n## ¿Qué es la incertidumbre?\n\nComo bien definimos anteriormente, la incertidumbre es un parámetro que caracteríza la dispersión de los valores atribuidos a un mensurando. Esto significa que, considerando al resultado de la medición como una función distribución densidad de probabilidad, la incertidumbre representa el desvío estándar de la misma. 
This expression of the uncertainty is usually called the **standard uncertainty**.\n\n#### Components of the uncertainty\n\n> **Type A:** The component of the uncertainty described purely from the statistical analysis of the samples.\n\n> **Type B:** The component of the uncertainty described from the datasheets provided by the manufacturers of the measuring instruments, together with calibration data.\n\nThe following sections describe in detail how each component is evaluated. [[2]](https://es.wikipedia.org/wiki/Propagaci%C3%B3n_de_errores)\n\n***\n## Mathematical model of a measurement ($Y$)\n\nSuppose a quantity to be measured ($Y$) is estimated indirectly through a fundamental relation with $N$ other measurable quantities, so that\n\n\\begin{equation}\n    Y = f(x_{1},x_{2},...,x_{N})\n\\end{equation}\n\nAs defined before, the variables $x_{i}$ are probability density functions, because they are results of measurements. Ideally, each of these measurements is determined by its mean ($\\mu_{X_{i}}$), its standard deviation ($\\sigma_{x_{i}}$) and the confidence level of the measurement. Since in practice it is not possible to obtain a good enough estimate of these parameters, their estimators are used instead.\n\nTherefore, if $M$ samples of each of these variables were taken, we can use the **sample mean ($\\bar{Y}$)** as an estimator of the mean ($\\mu_{Y}$) of the pdf of the measurement:\n\n\\begin{equation}\n    \\hat{Y} = \\bar{Y} = \\frac{1}{M} \\sum_{k=0}^{M} f_{k}(x_{1},x_{2},...,x_{N}) = f(\\bar{X_{1}},\\bar{X_{2}},...,\\bar{X_{N}})\n\\end{equation}\n\n<div class=\"alert alert-danger\">\n    <strong>Check that this is right.</strong> I suspect it is not, because it assumes linearity can be applied inside the function. I am reading the resistance-calculation example, where we do \"resistencia = (media_V/media_I)\" on line 39 of the document shared in the general Slack channel. In fact, for a nonlinear $f$ the last equality only holds to first order; taking $\\hat{Y} = f(\\bar{X_{1}},...,\\bar{X_{N}})$ is the usual estimate, with the understanding that it is a first-order approximation.\n</div>\n\nLikewise, to determine the other fundamental parameter of the measurement (the uncertainty) we use as an estimator the **combined uncertainty ($u_{c}$)**, defined by the following equation,\n\n\\begin{equation}\n    u_{c}^{2}(Y) = \\sum_{i=1}^{N} (\\dfrac{\\partial f}{\\partial x_{i}})^{2} \\cdot u_{c}^{2}(x_{i}) + 2 \\sum_{i=1}^{N-1} \\sum_{j = i+1}^{N} \\dfrac{\\partial f}{\\partial x_{i}} \\dfrac{\\partial f}{\\partial x_{j}} u(x_{i},x_{j})\n\\end{equation}\n\nwhere $u(x_{i},x_{j})$ is the covariance between the pdfs of the $x_{i}$.\n\nTo allow for nonlinear functions $f_{k}$, this expression is the first-order Taylor approximation of the exact expression that holds for linear functions. [[2]](https://es.wikipedia.org/wiki/Propagaci%C3%B3n_de_errores)\n\nIn turn, from the **law of propagation of uncertainty**, for the determination of a single quantity by direct measurement the expression above reduces to:\n\n\\begin{equation}\n    u_{c}^{2}(x_{i}) = u_{i}^{2}(x_{i}) + u_{j}^{2}(x_{i}) \n\\end{equation}\n\nwhere we call $u_{i}(x_{i})$ the Type A uncertainty and $u_{j}(x_{i})$ the Type B uncertainty.\n\n***\n## Type A evaluation of the uncertainty\n\nThe Type A uncertainty, being a measure of dispersion tied to the statistics of the samples, can be estimated with the experimental standard deviation of the mean ($S(\\bar{X_{i}})$). For that we recall a few statistics concepts.\n\nAssuming $N$ samples are taken:\n\n> **Estimator of the population mean:**\n>> $\\hat{x_{i}}=\\bar{X_{i}}=\\dfrac{1}{N} \\sum_{k=1}^{N}x_{i,k}$\n\n> **Degrees of freedom:**\n>> $\\nu = N-1$\n\n> **Experimental variance of the observations:**\n>> $\\hat{\\sigma^{2}(X_{i})}=S^{2}(X_{i})=\\dfrac{1}{\\nu} \\sum_{k=1}^{N}(X_{i,k} - \\bar{X_{i}})^{2}$\n\n> **Experimental variance of the mean:**\n>> $\\hat{\\sigma^{2}(\\bar{X_{i}})}=S^{2}(\\bar{X_{i}})=\\dfrac{S^{2}(x_{i})}{N}$\n\n\n\n\n<div class=\"alert alert-success\">\n    <strong>The Type A component of the uncertainty is therefore:</strong>\n    \n\\begin{equation}\n    u_{i}(x_{i}) = \\sqrt{S^{2}(\\bar{X_{i}})}\n\\end{equation}\n</div>\n\n<div class=\"alert alert-info\">\n    <strong>Note:</strong> To compute the std with a divisor of $\\nu = N-1$ you must pass an extra argument to the numpy function. The correct call is: 'myVars.std(ddof=1)'.\n    \n</div>\n
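\n#### Quick numeric check (added illustration)\n\nAs a small sanity check of the Type A formulas above, not part of the original notes, the estimators can be evaluated directly with numpy; the sample values below are invented:\n\n~~~\nimport numpy as np\n\nx = np.array([99.1, 101.3, 100.2, 98.7, 100.9])   # invented sample\nN = len(x)\nx_bar = x.mean()             # estimator of the population mean\ns_x = x.std(ddof=1)          # experimental std of the observations (divisor N-1)\nu_i = s_x / np.sqrt(N)       # experimental std of the mean = Type A uncertainty\nprint(x_bar, s_x, u_i)\n~~~\n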
\n***\n## Type B evaluation of the uncertainty\n\nThe Type B uncertainty is determined by the information provided by the manufacturers of the measuring instruments, as well as by the data resulting from their calibration.\n\nFor these instruments the uncertainty is described in terms of probability density functions, not sample statistics. We therefore use the following moments, which characterize a random variable with a continuous domain:\n\n> **Expectation:**\n>> $E(x)=\\int x.f(x)dx$\n\n> **Variance:**\n>> $V(x)=\\int (x-E(x))^{2}.f(x)dx$\n\n\n<div class=\"alert alert-success\">\n    <strong>Hence, since the uncertainty is a dispersion parameter, it is given by:</strong>\n    \n\\begin{equation}\n    u_{j}(x_{i}) = \\sqrt{V(x)}\n\\end{equation}\n</div>\n\nFor convenience, the table below lists the typical standard deviations for several distributions. The uniform case is derived as an example.\n\n![dist](uniforme.png)\n\nAssuming the distribution is centered at $\\bar{X_{i}}$, we have $a = \\bar{X_{i}} - \\Delta X$ and $b = \\bar{X_{i}} + \\Delta X$.\n\nSo, since the variance of a uniform distribution is $V(x_{i}) = \\frac{(b-a)^{2}}{12}$, we finally get:\n\n\\begin{equation}\n    V(x_{i}) = \\frac{(b-a)^{2}}{12} = \\frac{(2 \\Delta X)^{2}}{12} = \\frac{4 \\Delta X^{2}}{12} = \\frac{\\Delta X^{2}}{3}\n\\end{equation}\n\n\\begin{equation}\n    \\sigma_{x_{i}} = \\frac{\\Delta X}{\\sqrt{3}}\n\\end{equation}\n\nThe table is then:\n\n| Distribution | $u_{j}(x_{i}) = \\sigma_{x_{i}}$|\n| :----: | :----: |\n| Uniform | $\\frac{\\Delta X}{\\sqrt{3}}$ |\n| Normal | $\\Delta X $ |\n| Normal ($K=2$) | $\\frac{\\Delta X}{2} $ |\n| Triangular | $\\frac{\\Delta X}{\\sqrt{6}}$ |\n| U-shaped | $\\frac{\\Delta X}{\\sqrt{2}}$ |\n\n<div class=\"alert alert-danger\">\n    <strong>Check that this is right.</strong> The $\\Delta X$ term worries me: for the normal distribution $\\sigma_{x_{i}} = \\sigma$, and no absolute error should appear there. The table is consistent if $\\Delta X$ is read as the half-width (the expanded uncertainty) quoted by the manufacturer: for a normal spec quoted at $1\\sigma$ we have $\\Delta X = \\sigma$, and for one quoted with $K=2$ the standard uncertainty is $\\Delta X / 2$.\n</div>\n
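\n#### Checking the uniform row (added illustration)\n\nThe uniform row of the table can be verified numerically with scipy; this snippet is an addition, not part of the original notes, and the numbers are illustrative. For a uniform pdf on $[\\bar{X}-\\Delta X, \\bar{X}+\\Delta X]$ scipy's standard deviation matches $\\Delta X/\\sqrt{3}$:\n\n~~~\nimport numpy as np\nfrom scipy import stats\n\nx_bar, delta_x = 100.0, 0.5                              # illustrative values\nu = stats.uniform(loc=x_bar - delta_x, scale=2*delta_x)  # uniform on [x_bar-dx, x_bar+dx]\nprint(u.std(), delta_x / np.sqrt(3))                     # both ~0.2887\n~~~\n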
\n***\n## Combined Uncertainty\n\nAs defined earlier, the combined uncertainty for a direct measurement is\n\n\\begin{equation}\n    u_{c}^{2}(x_{i}) = u_{i}^{2}(x_{i}) + u_{j}^{2}(x_{i}) \n\\end{equation}\n\n#### What probability density function does $u_{c}$ correspond to?\n\nIf $x_{1},x_{2},...,x_{N}$ are known and $Y$ is a linear combination of the $x_{i}$ (or a linear approximation, as with the first-order Taylor polynomial of the function), the resulting pdf can be obtained by convolving the pdfs of the $x_{i}$, just as is done for LTI systems. [[3]](https://es.wikipedia.org/wiki/Convoluci%C3%B3n)\n\nSince the pdf behind $u_{i}(x_{i})$ is usually not known precisely, the **central limit theorem** is used to characterize $u_{c}(x_{i})$: the more variables $x_{i}$ with unknown pdfs we add together, the closer the result gets to a normal distribution.\n\n***\n## Confidence Level\n\nFinally, the last parameter needed to state the result of a measurement is the confidence level.\n\n> **Confidence level:** The probability that a new evaluation of the sample mean ($\\bar{y}$) falls inside the interval $[\\bar{Y} - K.\\sigma_{Y}(\\bar{Y}) \\le \\mu_{Y} \\le \\bar{Y} + K.\\sigma_{Y}(\\bar{Y})]$, for a distribution satisfying the central limit theorem, where $K$ is the coverage factor.\n\nAnother way to see it:\n\n![gradoConfianza](gradoConfianza.png)\n\nwhere the confidence level is $(1-\\alpha)$. See the example in [[4]](https://es.wikipedia.org/wiki/Intervalo_de_confianza#Ejemplo_pr%C3%A1ctico) if this is not clear.\n\nThe coverage factor ($K$) thus lets us change the confidence level. Increasing $K$ increases the area under the Gaussian, which corresponds to a higher confidence level.\n\nWe call **expanded uncertainty** the quantity $U(x_{i}) = K \\cdot u_{c}(x_{i})$, where $u_{c}(x_{i})$ is the uncertainty that yields a confidence level of roughly $ 68\\% $.\n\nFor a normally distributed variable the confidence level can be read from the following table,\n\n| Coverage factor | Confidence level|\n| :----: | :----: |\n| $K=1$ | $68.26\\% $ |\n| $K=2$ | $95.44\\% $ |\n| $K=3$ | $99.74\\% $ |\n\n\n#### What if $u_{c}$ is not normally distributed?\n\nIn that case $U(x_{i}) = K \\cdot u_{c}(x_{i})$ can still be used, but $K$ is obtained by a different method.\n
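\n#### Reproducing the table (added check)\n\nThe rows above follow from the normal CDF, since the probability inside $\\pm K\\sigma$ is $2\\Phi(K)-1$; this snippet is an added illustration, not part of the original notes:\n\n~~~\nfrom scipy import stats\n\nfor K in (1, 2, 3):\n    print(K, 100 * (2 * stats.norm.cdf(K) - 1))   # 68.27%, 95.45%, 99.73%\n~~~\n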
\n***\n## Case study: $u_{i}(x_{i}) \\gg u_{j}(x_{i})$\n\nWhen the Type A uncertainty is much larger than the Type B one, it means we do not have enough degrees of freedom for $u_{c}(x_{i})$ to approach a Gaussian. In other words, the sample is not significant enough.\n\nIn this case we assume $u_{c}(x_{i})$ follows a t-Student distribution. The t-Student distribution arises precisely from the problem of estimating the mean of a normally distributed population when the sample size is small.\n\nSince the t-Student distribution is parameterized by the effective degrees of freedom, we must compute them with the Welch-Satterthwaite formula:\n\n\\begin{equation}\n    \\nu_{eff} = \\dfrac{u_{c}^{4}(y)}{\\sum_{i=1}^{N} \\dfrac{ c_{i}^{4} u^{4}(x_{i})} {\\nu_{i}} } \n\\end{equation}\n\n\nwhere $c_i = \\dfrac{\\partial f}{\\partial x_{i}}$ and $u_{i}(x_{i})$ is the Type A uncertainty.\n\n![tstudent](tstudent.png)\n\nTo obtain the coverage factor that guarantees a confidence level of about $95\\%$ we look it up in the t-Student table. The _scipy.stats_ module provides a function with which we can find the quantile that encloses an area of $95.4\\%$.\n\nThis is the function we will use for that purpose,\n\n~~~\ndef get_factor_Tstudent(V_eff, porcentaje_confianza_objetivo=95.4):\n    \"\"\"\n    Computes the expansion factor from the t-Student distribution\n    input:\n        V_eff: degrees of freedom (float)\n        porcentaje_confianza_objetivo: target confidence percentage (float)\n    returns: \n        Expansion factor (float)\n    \"\"\"\n    return np.abs( -(stats.t.ppf((1.0+(porcentaje_confianza_objetivo/100))/2.0,V_eff)) )\n~~~\n\n\n***\n## Case study: $u_{i}(x_{i}) \\ll u_{j}(x_{i})$\n\nWhen the sampling uncertainty is much smaller than the Type B uncertainty, the Type B uncertainty dominates. This situation is equivalent to convolving a Dirac delta with an arbitrary distribution. \n\n![bdominante](bdominante.png)\n\n\nAs the figure shows, the resulting pdf looks much more like the uniform Type B distribution. In this case a different table is used to find the coverage factor, whose input parameter is the ratio $\\dfrac{u_{i}}{u_{j}}$.\n\nThis is the function we will use for that purpose,\n\n~~~\ndef tabla_B(arg):\n    tabla_tipoB = np.array([\n        [0.0, 1.65],\n        [0.1, 1.66],\n        [0.15, 1.68],\n        [0.20, 1.70],\n        [0.25, 1.72],\n        [0.30, 1.75],\n        [0.35, 1.77],\n        [0.40, 1.79],\n        [0.45, 1.82],\n        [0.50, 1.84],\n        [0.55, 1.85],\n        [0.60, 1.87],\n        [0.65, 1.89],\n        [0.70, 1.90],\n        [0.75, 1.91],\n        [0.80, 1.92],\n        [0.85, 1.93],\n        [0.90, 1.94],\n        [0.95, 1.95],\n        [1.00, 1.95],\n        [1.10, 1.96],\n        [1.20, 1.97],\n        [1.40, 1.98],\n        [1.80, 1.99],\n        [1.90, 1.99]])\n    if arg >= 2.0:\n        K = 2.0\n    else:\n        pos_min = np.argmin(np.abs(tabla_tipoB[:,0]-arg)) \n        K = tabla_tipoB[pos_min,1]\n\n    return K\n~~~\n\n\n***\n## Correlation\n\nFinally, the most general case: here the variables are correlated, so the full expression of $u_{c}(Y)$ must be used.\n\nFor computational convenience we define the correlation coefficient as\n\n\\begin{equation}\n    r(q,w) = \\dfrac{ u(q,w) }{ u(q)u(w) }\n\\end{equation}\n\nso that $u_{c}$ can be written as:\n\n\\begin{equation}\n    u_{c}^{2}(Y) = \\sum_{i=1}^{N} (\\dfrac{\\partial f}{\\partial x_{i}})^{2} \\cdot u_{c}^{2}(x_{i}) + 2 \\sum_{i=1}^{N-1} \\sum_{j = i+1}^{N} \\dfrac{\\partial f}{\\partial x_{i}} \\dfrac{\\partial f}{\\partial x_{j}} r(x_{i},x_{j})u(x_{i})u(x_{j})\n\\end{equation}\n\nThis expression must be used whenever $r(x_{i},x_{j}) \\ne 0$.
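\n\n#### Sketch: combined uncertainty with correlation (added illustration)\n\nAs a minimal sketch of the full expression above (this example is not in the original notes; the data and the function $y = x_1 x_2$ are invented), the correlation coefficient can be estimated with numpy and fed into the two-variable form of $u_c$:\n\n~~~\nimport numpy as np\n\n# invented paired samples of two correlated quantities\nx1 = np.array([10.0, 10.2, 9.9, 10.1, 10.0])\nx2 = np.array([5.0, 5.1, 4.9, 5.1, 5.0])\n\nu1 = x1.std(ddof=1) / np.sqrt(len(x1))    # Type A uncertainty of x1\nu2 = x2.std(ddof=1) / np.sqrt(len(x2))    # Type A uncertainty of x2\nr = np.corrcoef(x1, x2)[0, 1]             # estimated correlation coefficient\n\n# y = x1 * x2  ->  c1 = dy/dx1 = mean(x2), c2 = dy/dx2 = mean(x1)\nc1, c2 = x2.mean(), x1.mean()\nuc2 = (c1 * u1)**2 + (c2 * u2)**2 + 2 * c1 * c2 * r * u1 * u2\nprint(np.sqrt(uc2))\n~~~\n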
", "_____no_output_____" ], [ "# Experiments\n**We start by initializing the required modules**", "_____no_output_____" ] ], [ [ "# generic modules\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy import stats\nfrom scipy import signal\n\n# Modules for Jupyter (nicer plots!)\nimport warnings\nwarnings.filterwarnings('ignore')\nplt.rcParams['figure.figsize'] = [12, 4]\nplt.rcParams['figure.dpi'] = 150  # 200 is really fine, but slower\n\n\nfrom pandas import DataFrame\nfrom IPython.display import HTML", "_____no_output_____" ] ], [ [ "**We define the functions mentioned above**", "_____no_output_____" ] ], [ [ "# Table for the Type-A-dominant case\ndef get_factor_Tstudent(V_eff, porcentaje_confianza_objetivo=95.4):\n    \"\"\"\n    Computes the expansion factor from the t-Student distribution\n    input:\n        V_eff: degrees of freedom (float)\n        porcentaje_confianza_objetivo: target confidence percentage (float)\n    returns: \n        Expansion factor (float)\n    \"\"\"\n    return np.abs( -(stats.t.ppf((1.0+(porcentaje_confianza_objetivo/100))/2.0,V_eff)) )\n\n# Table for the Type-B-dominant case\ndef tabla_B(arg):\n    tabla_tipoB = np.array([\n        [0.0, 1.65],\n        [0.1, 1.66],\n        [0.15, 1.68],\n        [0.20, 1.70],\n        [0.25, 1.72],\n        [0.30, 1.75],\n        [0.35, 1.77],\n        [0.40, 1.79],\n        [0.45, 1.82],\n        [0.50, 1.84],\n        [0.55, 1.85],\n        [0.60, 1.87],\n        [0.65, 1.89],\n        [0.70, 1.90],\n        [0.75, 1.91],\n        [0.80, 1.92],\n        [0.85, 1.93],\n        [0.90, 1.94],\n        [0.95, 1.95],\n        [1.00, 1.95],\n        [1.10, 1.96],\n        [1.20, 1.97],\n        [1.40, 1.98],\n        [1.80, 1.99],\n        [1.90, 1.99]])\n    if arg >= 2.0:\n        K = 2.0\n    else:\n        pos_min = np.argmin(np.abs(tabla_tipoB[:,0]-arg)) \n        K = tabla_tipoB[pos_min,1]\n\n    return K", "_____no_output_____" ] ], 
[ [ "## General case\n**We define the required constants**", "_____no_output_____" ] ], [ [ "# Instrument constants\nCONST_ERROR_PORCENTUAL = 0.5    # Percent error of the measuring instrument\nCONST_ERROR_CUENTA = 3          # Count error of the measuring instrument\nCONST_DECIMALES = 2             # Number of decimals the instrument displays\n\n# Sampling constants\nN = 10                          # Number of samples taken\n\n# Idealized signal to be sampled\nmu = 100                        # Mean of the ideal population's normal distribution\nstd = 2                         # Standard deviation of the ideal population's normal distribution\n\n# Sample the ideal (normal) signal\nmuestra = np.random.randn(N) * std + mu", "_____no_output_____" ] ], [ [ "**Now we simply plot the histogram against the underlying normal distribution**", "_____no_output_____" ] ], [ [ "num_bins = 50\nfig, ax = plt.subplots()\n# the histogram of the data\nn, bins, patches = ax.hist(muestra, num_bins, density=True)\n# add a 'best fit' line\ny = ((1 / (np.sqrt(2 * np.pi) * std)) *\n     np.exp(-0.5 * (1 / std * (bins - mu))**2))\nax.plot(bins, y, '--')\nax.set_xlabel('Measured value')\nax.set_ylabel('Probability density')\nax.set_title('Sample histogram: $\\mu=$'+ str(mu) + ', $\\sigma=$' + str(std))\n# Tweak spacing to prevent clipping of ylabel\nfig.tight_layout()\nplt.show()", "_____no_output_____" ], [ "media = np.round(muestra.mean(), CONST_DECIMALES)  # Round to the decimals the meter can display\ndesvio = muestra.std(ddof=1)\n\nprint(\"Mean:\", media)\nprint(\"STD:\", desvio)", "Mean: 99.29\nSTD: 1.6777655348895033\n" ] ], [ [ "**We compute the experimental standard deviation of the mean:**\n\\begin{equation}\n    u_{i}(x_{i}) = \\sqrt{S^{2}(\\bar{X_{i}})}\n\\end{equation}", "_____no_output_____" ] ], [ [ "# Type A uncertainty\nui = desvio/np.sqrt(N)\nui", "_____no_output_____" ] ], [ [ "**We compute the instrument's total percent error as:**\n\\begin{equation}\n    e_{\\%T} = e_{\\%} + \\dfrac{e_{cuenta}\\cdot 100\\%}{\\bar{X_{i}}(10^{cte_{Decimales}})}\n\\end{equation}", "_____no_output_____" ] ], 
[ [ "# Type B uncertainty\nERROR_PORCENTUAL_CUENTA = (CONST_ERROR_CUENTA*100)/(media * (10**CONST_DECIMALES ))\n\nERROR_PORCENTUAL_TOTAL = CONST_ERROR_PORCENTUAL + ERROR_PORCENTUAL_CUENTA\n\nERROR_PORCENTUAL_CUENTA", "_____no_output_____" ] ], [ [ "**The absolute error is then:**\n\\begin{equation}\n    \\Delta X = e_{\\%T} \\dfrac{\\bar{X_{i}}}{100\\%}\n\\end{equation}", "_____no_output_____" ] ], [ [ "deltaX = ERROR_PORCENTUAL_TOTAL * media/100\ndeltaX", "_____no_output_____" ] ], [ [ "**Finally, the Type B uncertainty is:**\n\\begin{equation}\n    u_{j}(x_{i}) = \\sqrt{Var(x_{i})} = \\dfrac{\\Delta X}{\\sqrt{3}}\n\\end{equation}\n\nrecalling that, assuming a uniform distribution for the measuring instrument, the variance is $Var(X_{uniforme}) = \\dfrac {(b-a)^{2}}{12}$.", "_____no_output_____" ] ], [ [ "uj = deltaX / np.sqrt(3)\nuj", "_____no_output_____" ] ], [ [ "**We compute the combined uncertainty**\n\nSince this is a direct measurement of a single variable, the appropriate expression is:\n\n\\begin{equation}\n    u_{c}^{2}(x_{i}) = u_{i}^{2}(x_{i}) + u_{j}^{2}(x_{i}) \n\\end{equation}", "_____no_output_____" ] ], [ [ "# combined uncertainty\nuc = np.sqrt(ui**2 + uj**2)\nuc", "_____no_output_____" ] ], [ [ "**Now we must determine which case we are in**\n\nFirst we check which component of the uncertainty dominates, and by how much.", "_____no_output_____" ], [ "There are three possible situations:\n\n1. **Type-B-dominant case** $\\Rightarrow \\dfrac{u_{i}(x_{i})}{u_{j}(x_{i})} \\lt 1 \\Rightarrow$ use the Type-B-dominant table.\n1. **Normal case** $\\Rightarrow \\dfrac{u_{i}(x_{i})}{u_{j}(x_{i})} \\gt 1$ and $V_{eff} \\gt 30 \\Rightarrow$ take $K=2$.\n1. **Type-A-dominant case** $\\Rightarrow \\dfrac{u_{i}(x_{i})}{u_{j}(x_{i})} \\gt 1$ and $V_{eff} \\lt 30 \\Rightarrow$ use t-Student with the effective degrees of freedom.\n", "_____no_output_____" ] ], [ [ "def evaluacion(uc,ui,uj,N):\n    cte_prop = ui/uj\n    print(\"Proportionality ratio\", cte_prop)\n    if cte_prop > 1:\n        # Compute the effective degrees of freedom\n        veff = int ((uc**4)/((ui**4)/(N-1)))\n        print(\"Effective degrees of freedom: \", veff)\n        if veff > 30:\n            # Normal case\n            k = 2\n        else:\n            # t-Student case\n            k = get_factor_Tstudent(veff)\n    else:\n        # Type-B-dominant case\n        k = tabla_B(cte_prop)\n    print(\"Expansion factor: \",k)\n    return k", "_____no_output_____" ] ], [ [ "<div class=\"alert alert-warning\">\n    <strong>Note:</strong> The contribution of $u_{j}(x_{i})$ is not included because, being a continuous distribution, it has infinite degrees of freedom.\n    \n    \n\\begin{equation}\n    \\nu_{eff} = \\dfrac{u_{c}^{4}(y)}{\\sum_{i=1}^{N} \\dfrac{ c_{i}^{4} u^{4}(x_{i})} {\\nu_{i}} } \n\\end{equation}\n</div>\n\n", "_____no_output_____" ] ], [ [ "k = evaluacion(uc,ui,uj,N)", "Proportionality ratio 1.7455599385766958\nEffective degrees of freedom:  15\nExpansion factor:  2.175422110927068\n" ] ], [ [ "**Analysis and presentation of the result**\n\nThe ratio $\\dfrac{u_{i}(x_{i})}{u_{j}(x_{i})} \\approx 1.75$ is greater than 1, so we are on the Type A side and apply the effective-degrees-of-freedom criterion. 
In this run the effective degrees of freedom are $\\nu_{eff} = 15 \\lt 30$, so we assume a t-Student distribution and take the expansion factor from it ($K \\approx 2.18$), as the output above shows.\n\nFinally, we present the result with 1 significant digit.", "_____no_output_____" ] ], [ [ "U = uc*k\nprint(\"Measurement result: (\", np.round(media,1), \"+-\", np.round(U,1), \") V with a confidence level of 95%\")", "Measurement result: ( 99.3 +- 1.3 ) V with a confidence level of 95%\n" ] ], [ [ "# Bibliography\n\n_Note: the citations do **not** follow APA format._\n\n1. [Evaluación de la Incertidumbre en Datos Experimentales, Javier Miranda Martín del Campo](http://depa.fquim.unam.mx/amyd/archivero/CALCULODEINCERTIDUMBRESDR.JAVIERMIRANDA_26197.pdf)\n\n1. [Propagación de errores, Wikipedia](https://es.wikipedia.org/wiki/Propagaci%C3%B3n_de_errores)\n\n1. [Convolución, Wikipedia](https://es.wikipedia.org/wiki/Convoluci%C3%B3n)\n\n1. [Intervalo de Confianza, Wikipedia](https://es.wikipedia.org/wiki/Intervalo_de_confianza#Ejemplo_pr%C3%A1ctico)", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
4a3d8b4e363a91df44a0fbdc14894b51bd657eea
224,443
ipynb
Jupyter Notebook
Practice.ipynb
ltrainstg/dockerPractice
6cc028f59528f95748160ff83acf47bcaad15e3c
[ "MIT" ]
null
null
null
Practice.ipynb
ltrainstg/dockerPractice
6cc028f59528f95748160ff83acf47bcaad15e3c
[ "MIT" ]
null
null
null
Practice.ipynb
ltrainstg/dockerPractice
6cc028f59528f95748160ff83acf47bcaad15e3c
[ "MIT" ]
null
null
null
634.019774
149,247
0.695036
[ [ [ "import sys\nimport os\nimport pandas as pd\nimport matplotlib.pyplot as plt", "_____no_output_____" ], [ "# parentDir = os.path.dirname(os.getcwd())\n# sys.path.insert(0,parentDir )\n\nmyMods = os.path.join(os.getcwd(), \"myMods\")\nsys.path.insert(0,myMods)\nimport mainFun.apiFix as apiFix\nimport mainFun.createReport as createReport\nimport mainFun.getView as getView\nimport mainFun.loadHelper as loadHelper\nimport mainFun.vennPlot as vennPlot", "_____no_output_____" ], [ "import utils", "_____no_output_____" ], [ "api_df1 = loadHelper.loadPickle('rawData/api_df1.pkl')\napi_df2 = loadHelper.loadPickle('rawData/api_df2.pkl')", "_____no_output_____" ], [ "plt.rcParams[\"figure.figsize\"] = (20,10)", "_____no_output_____" ], [ "os.getcwd()", "_____no_output_____" ], [ "# https://www.arothuis.nl/posts/one-off-docker-images/\n# Create output and input volume. \n# Put also the main \n\n# docker run -v $pwd/output:/output -v $pwd/input:/input example3\n# docker run -b C:/Users/Lionel/Python/dockerPractice/output:output C:/Users/Lionel/Python/dockerPractice/input:input example4\ncurrentDir = os.getcwd()\n\n# Load Data\n\np_file1 = 'rawData/api_storage_dict1.pkl'\np_file2 = 'rawData/api_storage_dict2.pkl'\napi_storage_dict1 = loadHelper.loadPickle(p_file1)\napi_storage_dict2 = loadHelper.loadPickle(p_file2)\n\n# api_df1 = loadHelper.loadAPI1DF(p_file1)\n# api_df2 = loadHelper.loadAPI2DF(p_file2)\n# pickle.dump( api_df1, open( 'rawData/api_df1.pkl', \"wb\" ) )\n# pickle.dump( api_df2, open( 'rawData/api_df2.pkl', \"wb\" ) )\n\napi_df1 = loadHelper.loadPickle('rawData/api_df1.pkl')\napi_df2 = loadHelper.loadPickle('rawData/api_df2.pkl')\n\n", "_____no_output_____" ], [ "title = 'DRE Report 1'\nwebsite = 'https://www.drevidence.com/'", "_____no_output_____" ], [ "# Create a PDF object\npdf = createReport.DREPDF('P', 'mm', 'Letter')\npdf.set_auto_page_break(auto = True, margin = 15)\npdf.add_font('DejaVu', '', 'ect/DejaVuSansCondensed.ttf', uni=True)\npdf.set_font('DejaVu', '', 14)", "_____no_output_____" ], [ "# metadata\npdf.set_title(title)\npdf.set_author('DRE')\npdf.set_website(website)", "_____no_output_____" ], [ "pdf.add_page()\ntxt = \"\"\"\nI am going to compare the results of two seperate APIs to see if they agree a term has a single Sense.ID \nReport on how lists of strings map to Sense.Id using 2 APIs. \nAPI1: DocAnalytics\n * This api is how docanalytics identifies terms. \n * E.G. https://caladan.doctorevidence.com/portal/suggestions?search={stroke}\nAPI2: DocSearch\n * This api is how docsearch identifies terms.\n * E.G. 
https://search.doctorevidence.com/api/annotator/batch-annotate\n\"\"\"\npdf.multi_cell(0, 5, txt)\npdf.add_page()", "_____no_output_____" ], [ "txt = \"\"\"\nHow many items are matched in API1\n\"\"\"\npdf.multi_cell(0, 5, txt)\ndf = pd.crosstab(index = api_df1['File'], columns = 'Count')\ndf['File'] = df.index\ndf = df[['File', 'Count']]\ndata = df.values.tolist()\ndata.insert(0, df.columns.to_list())\npdf.create_table(table_data = data, title='API1 retrieval table', cell_width='uneven')\npdf.add_page()\n\n", "_____no_output_____" ], [ "txt = \"\"\"\nHow many items are matched in API2\n\"\"\"\npdf.multi_cell(0, 5, txt)\ndf = pd.crosstab(index = api_df2['File'], columns = 'Count')\ndf['File'] = df.index\ndf = df[['File', 'Count']]\ndata = df.values.tolist()\ndata.insert(0, df.columns.to_list())\npdf.create_table(table_data = data, title='API2 retrieval table', cell_width='uneven')\npdf.add_page()\n\n", "_____no_output_____" ], [ "txt = \"\"\"\nHow many IDs per item are matched in API1\n\"\"\"\npdf.multi_cell(0, 5, txt)\ndf = pd.crosstab(index = api_df1['File'], columns = api_df1['N_IDS'])\ndf['File'] = df.index\ndata = df.values.tolist()\ndata.insert(0, df.columns.to_list())\npdf.create_table(table_data = data, title='API1 retrieval table', cell_width='uneven')\npdf.add_page()", "_____no_output_____" ], [ "txt = \"\"\"\nHow many IDs per item are matched in API2\n\"\"\"\npdf.multi_cell(0, 5, txt)\ndf = pd.crosstab(index = api_df2['File'], columns = api_df2['N_IDS'])\ndf['File'] = df.index\ndata = df.values.tolist()\ndata.insert(0, df.columns.to_list())\npdf.create_table(table_data = data, title='API2 retrieval table', cell_width='uneven')\npdf.add_page()\n", "_____no_output_____" ], [ "plt = vennPlot.plotVenn1(api_df1, api_df2)\nplt.savefig('output/images/filename.png', bbox_inches='tight')\npdf.add_page(orientation = 'Landscape')\nw = 250\nh = w*2/3\npdf.image('output/images/filename.png', w = w, h = h, x=0, y=40)", "_____no_output_____" ], [ "pdf.output('output/template.pdf', 'F')", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a3d8bc7c256172cb4f904a577c93ee734bbf4de
1,748
ipynb
Jupyter Notebook
notebooks/Koch_snowflake.ipynb
smallpondtom/pNLsys
ebaea9c2945391fccd076f8006baee7066951097
[ "MIT" ]
6
2021-12-17T16:44:14.000Z
2022-02-22T05:40:43.000Z
notebooks/Koch_snowflake.ipynb
smallpondtom/pNLsys
ebaea9c2945391fccd076f8006baee7066951097
[ "MIT" ]
null
null
null
notebooks/Koch_snowflake.ipynb
smallpondtom/pNLsys
ebaea9c2945391fccd076f8006baee7066951097
[ "MIT" ]
null
null
null
20.564706
50
0.493135
[ [ [ "# Koch Snowflake - Recursion with Turtle", "_____no_output_____" ] ], [ [ "import turtle", "_____no_output_____" ], [ "def koch(t, order, size):\n if order == 0:\n t.forward(size)\n else:\n for angle in [60, -120, 60, 0]:\n koch(t, order - 1, size // 3)\n t.left(angle)", "_____no_output_____" ], [ "t = turtle.Turtle()\nwn = turtle.Screen()\nwn.bgcolor('black')\nwn.setup(900, 500)\nt.color('orange')\nt.pensize(3)\nt.penup()\nt.setpos(-445, 0)\nt.pendown()\nt.speed(0)\n\nsize = 900\norder = 5\nkoch(t, order, size)\n\nwn.exitonclick()", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code" ] ]
4a3d980933662540a446047e64d007552a106059
566,345
ipynb
Jupyter Notebook
RandomCutForest/random_cut_forest_auto_sales_1yr.ipynb
jaypeeml/AWSSagemaker
9ab931065e9f2af0b1c102476781a63c917e7c47
[ "Apache-2.0" ]
167
2019-04-07T16:33:56.000Z
2022-03-24T12:13:13.000Z
RandomCutForest/random_cut_forest_auto_sales_1yr.ipynb
jaypeeml/AWSSagemaker
9ab931065e9f2af0b1c102476781a63c917e7c47
[ "Apache-2.0" ]
5
2019-04-13T06:39:43.000Z
2019-11-09T06:09:56.000Z
RandomCutForest/random_cut_forest_auto_sales_1yr.ipynb
jaypeeml/AWSSagemaker
9ab931065e9f2af0b1c102476781a63c917e7c47
[ "Apache-2.0" ]
317
2019-04-07T16:34:00.000Z
2022-03-31T11:20:32.000Z
287.923233
111,072
0.900211
[ [ [ "### Analyze Auto sales trend and verify if RCF detects abrupt shift in sales\n#### Years: 2005 to 2020. This period covers recession due to housing crisis in 2008, followed by recovery and economic impact due to Covid\n### Data Source: Monthly New Vehicle Sales for the United States Automotive Market\n### https://www.goodcarbadcar.net/usa-auto-industry-total-sales-figures/\n### Raw data: http://www.bea.gov/", "_____no_output_____" ] ], [ [ "import sys\nimport pandas as pd\nimport numpy as np\n\n%matplotlib inline\nimport matplotlib\nimport matplotlib.pyplot as plt\nmatplotlib.rcParams['figure.dpi'] = 100", "_____no_output_____" ], [ "import boto3\nimport botocore\nimport sagemaker\nfrom sagemaker import RandomCutForest\n\nbucket = sagemaker.Session().default_bucket() # Feel free to change to another bucket you have access to\nprefix = 'sagemaker/autosales'\nexecution_role = sagemaker.get_execution_role()\n\n\n# check if the bucket exists\ntry:\n boto3.Session().client('s3').head_bucket(Bucket=bucket)\nexcept botocore.exceptions.ParamValidationError as e:\n print('Hey! You either forgot to specify your S3 bucket'\n ' or you gave your bucket an invalid name!')\nexcept botocore.exceptions.ClientError as e:\n if e.response['Error']['Code'] == '403':\n print(\"Hey! You don't have permission to access the bucket, {}.\".format(bucket))\n elif e.response['Error']['Code'] == '404':\n print(\"Hey! Your bucket, {}, doesn't exist!\".format(bucket))\n else:\n raise\nelse:\n print('Training input/output will be stored in: s3://{}/{}'.format(bucket, prefix))", "Training input/output will be stored in: s3://sagemaker-us-east-1-144943967277/sagemaker/autosales\n" ], [ "%%time\ndata_filename = 'auto_sales_year_month.csv'\ndf = pd.read_csv(data_filename)", "CPU times: user 4.94 ms, sys: 427 µs, total: 5.37 ms\nWall time: 137 ms\n" ], [ "df.shape", "_____no_output_____" ], [ "df", "_____no_output_____" ], [ "plt.plot(df['value'])\nplt.ylabel('Sales')\nplt.title('Monthly Auto Sales - USA')\nplt.show()", "_____no_output_____" ] ], [ [ "### Big increase in autosales Feb-2012\nhttps://www.theautochannel.com/news/2012/03/02/027504-february-2012-u-s-auto-sales-highest-4-years.html", "_____no_output_____" ] ], [ [ "df[75:90]", "_____no_output_____" ] ], [ [ "### U.S. Auto Sales Hit Record Low In April 2020\n#### Coronavirus Chaos Also Drives Zero-Interest Deals to Record Highs\nhttps://www.edmunds.com/car-news/us-auto-sales-hit-record-low-in-april.html", "_____no_output_____" ] ], [ [ "df[175:]", "_____no_output_____" ] ], [ [ "# Training\n\n***\n\nNext, we configure a SageMaker training job to train the Random Cut Forest (RCF) algorithm on the taxi cab data.", "_____no_output_____" ], [ "## Hyperparameters\n\nParticular to a SageMaker RCF training job are the following hyperparameters:\n\n* **`num_samples_per_tree`** - the number randomly sampled data points sent to each tree. As a general rule, `1/num_samples_per_tree` should approximate the the estimated ratio of anomalies to normal points in the dataset.\n* **`num_trees`** - the number of trees to create in the forest. Each tree learns a separate model from different samples of data. The full forest model uses the mean predicted anomaly score from each constituent tree.\n* **`feature_dim`** - the dimension of each data point.\n\nIn addition to these RCF model hyperparameters, we provide additional parameters defining things like the EC2 instance type on which training will run, the S3 bucket containing the data, and the AWS access role. 
Note that,\n\n* Recommended instance type: `ml.m4`, `ml.c4`, or `ml.c5`\n* Current limitations:\n * The RCF algorithm does not take advantage of GPU hardware.", "_____no_output_____" ] ], [ [ "# Use Spot Instance - Save up to 90% of training cost by using spot instances when compared to on-demand instances\n# Reference: https://github.com/aws-samples/amazon-sagemaker-managed-spot-training/blob/main/xgboost_built_in_managed_spot_training_checkpointing/xgboost_built_in_managed_spot_training_checkpointing.ipynb\n\n# if you are still on two-month free-tier you can use the on-demand instance by setting:\n# use_spot_instances = False\n\n# We will use spot for training\nuse_spot_instances = True\nmax_run = 3600 # in seconds\nmax_wait = 3600 if use_spot_instances else None # in seconds\n\njob_name = 'rcf-autosales-1yr'\n\ncheckpoint_s3_uri = None\n\nif use_spot_instances:\n checkpoint_s3_uri = f's3://{bucket}/{prefix}/checkpoints/{job_name}'\n \nprint (f'Checkpoint uri: {checkpoint_s3_uri}')", "_____no_output_____" ], [ "# SDK 2.0\n\nsession = sagemaker.Session()\n\n# specify general training job information\n# 48 samples = 48 Months of data \nrcf = RandomCutForest(role=execution_role,\n instance_count=1,\n instance_type='ml.m4.xlarge',\n data_location='s3://{}/{}/'.format(bucket, prefix),\n output_path='s3://{}/{}/output'.format(bucket, prefix),\n num_samples_per_tree=48,\n num_trees=50,\n base_job_name = job_name,\n use_spot_instances=use_spot_instances,\n max_run=max_run,\n max_wait=max_wait,\n checkpoint_s3_uri=checkpoint_s3_uri)\n\n# automatically upload the training data to S3 and run the training job\nrcf.fit(rcf.record_set(df.value.to_numpy().reshape(-1,1)))", "Defaulting to the only supported framework/algorithm version: 1. Ignoring framework/algorithm version: 1.\nDefaulting to the only supported framework/algorithm version: 1. Ignoring framework/algorithm version: 1.\n" ], [ "rcf.hyperparameters()", "_____no_output_____" ], [ "print('Training job name: {}'.format(rcf.latest_training_job.job_name))", "Training job name: randomcutforest-2020-11-20-09-43-38-628\n" ] ], [ [ "# Inference\n\n***\n\nA trained Random Cut Forest model does nothing on its own. We now want to use the model we computed to perform inference on data. In this case, it means computing anomaly scores from input time series data points.\n\nWe create an inference endpoint using the SageMaker Python SDK `deploy()` function from the job we defined above. We specify the instance type where inference is computed as well as an initial number of instances to spin up. We recommend using the `ml.c5` instance type as it provides the fastest inference time at the lowest cost.", "_____no_output_____" ] ], [ [ "rcf_inference = rcf.deploy(\n initial_instance_count=1,\n instance_type='ml.m5.xlarge',\n endpoint_name = job_name)", "Defaulting to the only supported framework/algorithm version: 1. Ignoring framework/algorithm version: 1.\n" ] ], [ [ "Congratulations! You now have a functioning SageMaker RCF inference endpoint. You can confirm the endpoint configuration and status by navigating to the \"Endpoints\" tab in the AWS SageMaker console and selecting the endpoint matching the endpoint name, below: ", "_____no_output_____" ] ], [ [ "print('Endpoint name: {}'.format(rcf_inference.endpoint_name))", "Endpoint name: randomcutforest-2020-11-20-09-47-51-199\n" ] ], [ [ "## Data Serialization/Deserialization\n\nWe can pass data in a variety of formats to our inference endpoint. 
In this example we will demonstrate passing CSV-formatted data. Other available formats are JSON-formatted and RecordIO Protobuf. We make use of the SageMaker Python SDK utilities `CSVSerializer` and `JSONDeserializer` when configuring the inference endpoint.", "_____no_output_____" ] ], [ [ "# SDK 2.0 serializers\nfrom sagemaker.serializers import CSVSerializer\nfrom sagemaker.deserializers import JSONDeserializer\n\nrcf_inference.serializer = CSVSerializer()\nrcf_inference.deserializer = JSONDeserializer()", "_____no_output_____" ] ], [ [ "Let's pass the training dataset, in CSV format, to the inference endpoint so we can automatically detect the anomalies we saw with our eyes in the plots, above. Note that the serializer and deserializer will automatically take care of the datatype conversion from Numpy NDArrays.\n\nFor starters, let's only pass in the first six datapoints so we can see what the output looks like.", "_____no_output_____" ] ], [ [ "df_numpy = df.value.to_numpy().reshape(-1,1)\nprint(df_numpy[:6])\nresults = rcf_inference.predict(df_numpy[:6])\nprint(results)", "[[1052224.]\n [1244753.]\n [1564938.]\n [1493837.]\n [1488171.]\n [1671401.]]\n{'scores': [{'score': 1.0109732473}, {'score': 0.9255308674}, {'score': 1.028823602}, {'score': 0.9257952251}, {'score': 0.9200007213}, {'score': 1.1623123211}]}\n" ] ], [ [ "## Computing Anomaly Scores\n\nNow, let's compute and plot the anomaly scores from the entire auto sales dataset.", "_____no_output_____" ] ], [ [ "results = rcf_inference.predict(df_numpy)\nscores = [datum['score'] for datum in results['scores']]\n\n# add scores to the auto sales data frame and print first few values\ndf['score'] = pd.Series(scores, index=df.index)\ndf.head()", "_____no_output_____" ], [ "fig, ax1 = plt.subplots()\nax2 = ax1.twinx()\n\n#\n# *Try this out* - change `start` and `end` to zoom in on the \n# anomaly found earlier in this notebook\n#\nstart, end = 0, len(df)\n\ndf_subset = df[start:end]\n\nax1.plot(df_subset['value'], color='C0', alpha=0.8)\nax2.plot(df_subset['score'], color='C1')\n\nax1.grid(which='major', axis='both')\n\nax1.set_ylabel('Auto Sales', color='C0')\nax2.set_ylabel('Anomaly Score', color='C1')\n\nax1.tick_params('y', colors='C0')\nax2.tick_params('y', colors='C1')\n\nax2.set_ylim(min(scores), 1.4*max(scores))\nfig.set_figwidth(10)", "_____no_output_____" ] ], [ [ "Note that the anomaly score spikes where our eyeball-norm method suggests there is an anomalous data point as well as in some places where our eyeballs are not as accurate.\n\nBelow we print and plot any data points with scores greater than 3 standard deviations (approx 99.9th percentile) from the mean score.", "_____no_output_____" ] ], [ [ "score_mean = df['score'].mean()\nscore_std = df['score'].std()\nscore_cutoff = score_mean + 3*score_std\n\nanomalies = df_subset[df_subset['score'] > score_cutoff]\nanomalies", "_____no_output_____" ], [ "score_mean, score_std, score_cutoff", "_____no_output_____" ], [ "ax2.plot(anomalies.index, anomalies.score, 'ko')\nfig", "_____no_output_____" ] ], [ [ "With the current hyperparameter choices we see that the three-standard-deviation threshold, while able to capture the known anomalies as well as the ones apparent in the sales plot, is rather sensitive to fine-grained perturbations and anomalous behavior.
Adding more trees to the SageMaker RCF model, or using a larger data set, could smooth out the results.", "_____no_output_____" ], [ "## Stop and Delete the Endpoint\n\nFinally, we should delete the endpoint before we close the notebook.\n\nTo do so execute the cell below. Alternately, you can navigate to the \"Endpoints\" tab in the SageMaker console, select the endpoint with the name stored in the variable `endpoint_name`, and select \"Delete\" from the \"Actions\" dropdown menu. ", "_____no_output_____" ] ], [ [ "# SDK 2.0\nrcf_inference.delete_endpoint()", "_____no_output_____" ] ], [ [ "# Epilogue\n\n---\n\nWe used Amazon SageMaker Random Cut Forest to detect anomalous datapoints in a monthly auto sales dataset. In these data the anomalies occurred when sales were uncharacteristically high or low. However, the RCF algorithm is also capable of detecting when, for example, data breaks periodicity or uncharacteristically changes global behavior.\n\nDepending on the kind of data you have there are several ways to improve algorithm performance. One method, for example, is to use an appropriate training set. If you know that a particular set of data is characteristic of \"normal\" behavior then training on said set of data will more accurately characterize \"abnormal\" data.\n\nAnother improvement is to make use of a windowing technique called \"shingling\". This is especially useful when working with periodic data with known period, such as the monthly auto sales data used above. The idea is to treat a period of $P$ datapoints as a single datapoint of feature length $P$ and then run the RCF algorithm on these feature vectors. That is, if our original data consists of points $x_1, x_2, \\ldots, x_N \\in \\mathbb{R}$ then we perform the transformation,\n\n```\ndata = [[x_1], shingled_data = [[x_1, x_2, ..., x_{P}],\n [x_2], ---> [x_2, x_3, ..., x_{P+1}],\n ... ...\n [x_N]] [x_{N-P}, ..., x_{N}]]\n\n```", "_____no_output_____" ] ], [ [ "df.head()", "_____no_output_____" ], [ "import numpy as np\n\n# made a minor correction: increased size by 1 as the original code was missing the last shingle\ndef shingle(data, shingle_size):\n    num_data = len(data)\n    # +1\n    shingled_data = np.zeros((num_data-shingle_size+1, shingle_size))\n    \n    # +1\n    for n in range(num_data - shingle_size+1):\n        shingled_data[n] = data[n:(n+shingle_size)]\n    return shingled_data\n\n# shingle the data with shingle size=12 (1 year - 12 months)\n# let's try one year auto sales\n# let's try 1 year window\nshingle_size = 12\nprefix_shingled = 'sagemaker/randomcutforest_shingled_1year'\nauto_data_shingled = shingle(df.values[:,1], shingle_size)", "_____no_output_____" ], [ "job_name = 'rcf-autosales-shingled-1year'\n\ncheckpoint_s3_uri = None\n\nif use_spot_instances:\n    checkpoint_s3_uri = f's3://{bucket}/{prefix_shingled}/checkpoints/{job_name}'\n\nprint(f'Checkpoint uri: {checkpoint_s3_uri}')", "_____no_output_____" ], [ "df.values[:24,1]", "_____no_output_____" ], [ "shingle(df.values[:24,1],12)", "_____no_output_____" ], [ "auto_data_shingled[:5]", "_____no_output_____" ], [ "auto_data_shingled[-5:]", "_____no_output_____" ], [ "auto_data_shingled.shape", "_____no_output_____" ] ], [ [ "We create a new training job and an inference endpoint.
(Note that we cannot re-use the endpoint created above because it was trained with one-dimensional data.)", "_____no_output_____" ] ], [ [ "# SDK 2.0\n\nsession = sagemaker.Session()\n\n# specify general training job information\nrcf = RandomCutForest(role=execution_role,\n                      instance_count=1,\n                      instance_type='ml.m5.xlarge',\n                      data_location='s3://{}/{}/'.format(bucket, prefix_shingled),\n                      output_path='s3://{}/{}/output'.format(bucket, prefix_shingled),\n                      num_samples_per_tree=48,\n                      num_trees=50,\n                      base_job_name = job_name,\n                      use_spot_instances=use_spot_instances,\n                      max_run=max_run,\n                      max_wait=max_wait,\n                      checkpoint_s3_uri=checkpoint_s3_uri)\n\n# automatically upload the training data to S3 and run the training job\nrcf.fit(rcf.record_set(auto_data_shingled))", "Defaulting to the only supported framework/algorithm version: 1. Ignoring framework/algorithm version: 1.\nDefaulting to the only supported framework/algorithm version: 1. Ignoring framework/algorithm version: 1.\n" ], [ "rcf.hyperparameters()", "_____no_output_____" ], [ "# SDK 2.0 serializers\nfrom sagemaker.serializers import CSVSerializer\nfrom sagemaker.deserializers import JSONDeserializer\n\nrcf_inference = rcf.deploy(\n    initial_instance_count=1,\n    instance_type='ml.m5.xlarge',\n    endpoint_name = job_name\n)\n\nrcf_inference.serializer = CSVSerializer()\nrcf_inference.deserializer = JSONDeserializer()", "Defaulting to the only supported framework/algorithm version: 1. Ignoring framework/algorithm version: 1.\n" ] ], [ [ "Using the above inference endpoint we compute the anomaly scores associated with the shingled data.", "_____no_output_____" ] ], [ [ "# Score the shingled datapoints\nresults = rcf_inference.predict(auto_data_shingled)\nscores = np.array([datum['score'] for datum in results['scores']])", "_____no_output_____" ], [ "# Save the scores\nnp.savetxt(\"scores_shingle_annual.csv\",\n           np.asarray(scores),\n           delimiter=\",\",\n           fmt='%10.5f')", "_____no_output_____" ], [ "# compute the shingled score distribution and cutoff and determine anomalous scores\nscore_mean = scores.mean()\nscore_std = scores.std()\nscore_cutoff = score_mean + 1.5*score_std\n\nanomalies = scores[scores > score_cutoff]\nanomaly_indices = np.arange(len(scores))[scores > score_cutoff]\n\nprint(anomalies)", "[1.14802704 1.16369404 1.15697874 1.15985674 1.18681203 1.16831072\n 1.1660988 1.20031068 1.18823582 1.1819616 1.17632682 1.17641995\n 1.18549787 1.15908676 1.15241367 1.16544555 1.15893216 1.1408647\n 1.14224935 1.142914 1.13278531 1.13373768 1.16330677]\n" ], [ "score_mean, score_std, score_cutoff", "_____no_output_____" ], [ "anomalies.size", "_____no_output_____" ] ], [ [ "Finally, we plot the scores from the shingled data on top of the original dataset and mark the scores lying above the anomaly score threshold.", "_____no_output_____" ] ], [ [ "fig, ax1 = plt.subplots()\nax2 = ax1.twinx()\n\n#\n# *Try this out* - change `start` and `end` to zoom in on the \n# anomaly found earlier in this notebook\n#\nstart, end = 0, len(df)\ndf_subset = df[start:end]\n\nax1.plot(df['value'], color='C0', alpha=0.8)\nax2.plot(scores, color='C1')\nax2.scatter(anomaly_indices, anomalies, color='k')\n\nax1.grid(which='major', axis='both')\nax1.set_ylabel('Auto Sales', color='C0')\nax2.set_ylabel('Anomaly Score', color='C1')\nax1.tick_params('y', colors='C0')\nax2.tick_params('y', colors='C1')\n\nax2.set_ylim(min(scores), 1.4*max(scores))\nfig.set_figwidth(10)", "_____no_output_____" ] ], [ [ "We see that with this particular shingle size,
hyperparameter selection, and anomaly cutoff threshold, the shingled approach more clearly captures the major anomalous events: the surge in sales in early 2012 and the steep drop in April 2020. In general, the number of trees, sample size, and anomaly score cutoff are all parameters that a data scientist may need to experiment with in order to achieve desired results. The use of a labeled test dataset allows the user to obtain common accuracy metrics for anomaly detection algorithms. For more information about Amazon SageMaker Random Cut Forest see the [AWS Documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/randomcutforest.html).", "_____no_output_____" ] ], [ [ "# compute the shingled score distribution and cutoff and determine anomalous scores\nscore_mean = scores.mean()\nscore_std = scores.std()\nscore_cutoff = score_mean + 2.0*score_std\n\nanomalies = scores[scores > score_cutoff]\nanomaly_indices = np.arange(len(scores))[scores > score_cutoff]\n\nprint(anomalies)", "[1.16369404 1.18681203 1.16831072 1.1660988 1.20031068 1.18823582\n 1.1819616 1.17632682 1.17641995 1.18549787 1.16544555 1.16330677]\n" ], [ "fig, ax1 = plt.subplots()\nax2 = ax1.twinx()\n\n#\n# *Try this out* - change `start` and `end` to zoom in on the \n# anomaly found earlier in this notebook\n#\nstart, end = 0, len(df)\ndf_subset = df[start:end]\n\nax1.plot(df['value'], color='C0', alpha=0.8)\nax2.plot(scores, color='C1')\nax2.scatter(anomaly_indices, anomalies, color='k')\n\nax1.grid(which='major', axis='both')\nax1.set_ylabel('Auto Sales', color='C0')\nax2.set_ylabel('Anomaly Score', color='C1')\nax1.tick_params('y', colors='C0')\nax2.tick_params('y', colors='C1')\n\nax2.set_ylim(min(scores), 1.4*max(scores))\nfig.set_figwidth(10)", "_____no_output_____" ], [ "# SDK 2.0\n\nrcf_inference.delete_endpoint()", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
4a3d9f36404349c0d0d2113c594111e0f489b532
291,471
ipynb
Jupyter Notebook
PythonHW-1.2.4/odds_and_ends/HW_odds_and_ends.ipynb
abhatia25/Beaverworks-Racecar
7c579e27de3688d58f280ff260897f77efa5fb4a
[ "MIT" ]
null
null
null
PythonHW-1.2.4/odds_and_ends/HW_odds_and_ends.ipynb
abhatia25/Beaverworks-Racecar
7c579e27de3688d58f280ff260897f77efa5fb4a
[ "MIT" ]
1
2021-03-06T21:24:28.000Z
2021-03-06T21:24:28.000Z
PythonHW-1.2.4/odds_and_ends/HW_odds_and_ends.ipynb
abhatia25/Beaverworks-Racecar
7c579e27de3688d58f280ff260897f77efa5fb4a
[ "MIT" ]
1
2021-03-06T21:04:55.000Z
2021-03-06T21:04:55.000Z
300.795666
245,407
0.892446
[ [ [ "## Odds and Ends: File Reading and Matplotlib\n\nNow that we're familiar with the essentials of the Python language we're going to practice [reading files](https://www.pythonlikeyoumeanit.com/Module5_OddsAndEnds/WorkingWithFiles.html) and [plotting with Matplotlib](https://www.pythonlikeyoumeanit.com/Module5_OddsAndEnds/Matplotlib.html). \n\nAlthough these topics may be considered \"odds and ends\", they are common in many day-to-day applications. You'll find that spending some time up front to become familiar with these materials will save a lot of time down the road.", "_____no_output_____" ] ], [ [ "## Problem 1: Reading and Parsing Files\nLet's pretend we were conducting a survey of favorite foods. Each participant is asked to list their favorite foods along with its category (e.g. dessert, snack, fruit). The food and category are separated by a colon, and each food-category pair is separated by a comma like so\n\n```food: category, food: category, food: category, ... ```\n\nThe results of this survey are stored in a text file, `results.txt`, giving us a great opportunity to practice our file reading skills!\n\nOur task is to write a function called `get_most_popular_foods` that takes a file path of survey results and returns the most common response for each food category in the form of a dictionary where the keys are the food categories and the values are the most common food of that type. If there is a tie, return the food that comes first alphabetically. Note, we don't know which food categories will be given before reading the file.\n\nSo, if we had data in the file `example.txt` with the contents below\n\n``` granola bars: snack, shrimp: seafood\ngranola bars: snack\ntuna: seafood ```\n\nOur function would produce the following result\n ``` python\n >>> get_most_popular_foods('example.txt')\n {'snack': 'granola bars', 'seafood': ' shrimp'}\n ```\n \n The `collections.Counter` object will be useful for this problem. Also, the function `itertools.chain` may come in handy.\n \nFor reference, there is a short example input under `resources/example-survey.txt`. 
On this input, your function should produce the response as follows\n```python\n>>> get_most_popular_foods('resources/example-survey.txt')\n{'dessert': 'cake', 'vegetable': 'carrots', 'fruit': 'peaches'}\n```", "_____no_output_____" ] ], [ [ "def get_most_popular_foods(file_path):\n    \"\"\" Read in survey and determine the most common food of each type.\n    \n    Parameters\n    ----------\n    file_path : str\n        Path to text file containing favorite food survey responses.\n    \n    Returns\n    -------\n    Dict[str, str]\n        Dictionary with the key being food type and value being food.\n    \"\"\"\n    from itertools import chain\n    from collections import Counter\n    with open(file_path, mode=\"r\") as my_open_file:\n        lines = [line.strip() for line in my_open_file if line.strip()]\n    # flatten every 'food: category' pair across all survey responses\n    pairs = chain.from_iterable(line.split(',') for line in lines)\n    counts = {}\n    for pair in pairs:\n        food, _, category = pair.partition(':')\n        food, category = food.strip(), category.strip()\n        counts.setdefault(category, Counter())[food] += 1\n    # most common food per category; ties broken alphabetically\n    return {category: min(tally.items(), key=lambda item: (-item[1], item[0]))[0]\n            for category, tally in counts.items()}", "_____no_output_____" ], [ "from bwsi_grader.python.odds_and_ends import grade_file_parser\ngrade_file_parser(get_most_popular_foods)", "_____no_output_____" ] ], [ [ "## Problem 2: Plotting an Image with Matplotlib\n\nWe have an image in the file `resources/mystery-img.npy`. Read and plot the image, then answer the following for Question 2 of the homework:\n\n__What is in this image?__", "_____no_output_____" ] ], [ [ "import matplotlib.pyplot as plt\nimport numpy as np\n%matplotlib notebook\n\ndata = np.load('resources/mystery-img.npy')\nfig, ax = plt.subplots()\nax.imshow(data);", "_____no_output_____" ] ] ]
[ "markdown", "raw", "code", "markdown", "code" ]
[ [ "markdown" ], [ "raw" ], [ "code", "code" ], [ "markdown" ], [ "code" ] ]
4a3da173fb695ec3dc39cb9cad6bd6123f61eab3
9,691
ipynb
Jupyter Notebook
Multivariate Calculus/readonly/sandpitmodule.ipynb
rishabmallick/Mathematics-for-Machine-Learning
959a7513bd110f6c2255b2a460058147bdcd6beb
[ "MIT" ]
6
2020-03-03T13:47:02.000Z
2021-07-15T18:37:54.000Z
Multivariate Calculus/readonly/sandpitmodule.ipynb
rishabmallick/Mathematics-for-Machine-Learning
959a7513bd110f6c2255b2a460058147bdcd6beb
[ "MIT" ]
null
null
null
Multivariate Calculus/readonly/sandpitmodule.ipynb
rishabmallick/Mathematics-for-Machine-Learning
959a7513bd110f6c2255b2a460058147bdcd6beb
[ "MIT" ]
7
2020-05-10T08:59:03.000Z
2022-03-27T07:41:11.000Z
45.074419
151
0.466309
[ [ [ "# Following imports pylab notebook without giving the user rubbish messages\nimport os, sys\nstdout = sys.stdout\nsys.stdout = open(os.devnull, 'w')\n%pylab notebook\nsys.stdout = stdout\n\nfrom scipy.optimize import differential_evolution, minimize\nimport matplotlib.lines as mlines\nfrom matplotlib.legend_handler import HandlerLine2D\nfrom scipy.misc import imread\nimport matplotlib.cm as cm\nfrom matplotlib.colors import LinearSegmentedColormap\n\nimport ipywidgets as widgets\nfrom IPython.display import display, Markdown\n\nmatplotlib.rcParams['figure.subplot.left'] = 0\n#matplotlib.rcParams['figure.figsize'] = (7, 6)\n\nclass Sandpit:\n def __init__(self, f):\n # Default options\n self.game_mode = 0 # 0 - Jacobian, 1 - Depth Only, 2 - Steepest Descent\n self.grad_length = 1/5\n self.grad_max_length = 1\n self.arrowhead_width = 0.1\n self.arrow_placement = 2 # 0 - tip, 1 - base, 2 - centre, 3 - tail\n self.tol = 0.15 # Tolerance\n self.markerColour = (1, 0.85, 0)\n self.contourCM = LinearSegmentedColormap.from_list(\"Cmap\", [\n (0., 0.00505074, 0.191104),\n (0.155556, 0.0777596, 0.166931),\n (0.311111, 0.150468, 0.142758),\n (0.466667, 0.223177, 0.118585),\n (0.622222, 0.295886, 0.094412),\n (0.777778, 0.368595, 0.070239),\n (0.822222, 0.389369, 0.0633324),\n (0.866667, 0.410143, 0.0564258),\n (0.911111, 0.430917, 0.0495193),\n (0.955556, 0.451691, 0.0426127),\n (1., 0.472465, 0.0357061)\n ], N=256)\n self.start_text = \"**Click anywhere in the sandpit to place the dip-stick.**\"\n self.win_text = \"### Congratulations!\\nWell done, you found the phone.\"\n \n # Initialisation variables\n self.revealed = False\n self.handler_map = {}\n self.nGuess = 0\n self.msgbox = widgets.Output()\n \n # Parameters\n self.f = f # Contour function\n x0 = self.x0 = differential_evolution(lambda xs: f(xs[0], xs[1]), ((0,6),(0,6))).x\n x1 = differential_evolution(lambda xs: -f(xs[0], xs[1]), ((0,6),(0,6))).x\n f0 = f(x0[0], x0[1])\n f1 = f(x1[0], x1[1])\n self.f = lambda x, y: 8 * (f(x, y) - f1) / (f1 - f0) - 1\n self.df = lambda x, y: np.array([self.f(x+0.01,y)-self.f(x-0.01,y), self.f(x,y+0.01)-self.f(x,y-0.01)]) / 0.02\n self.d2f = lambda x, y: np.array([\n [ self.df(x+0.01,y)[0]-self.df(x-0.01,y)[0], self.df(x,y+0.01)[0]-self.df(x,y-0.01)[0] ],\n [ self.df(x+0.01,y)[1]-self.df(x-0.01,y)[1], self.df(x,y+0.01)[1]-self.df(x,y-0.01)[1] ]\n ]) / 0.02\n \n def draw(self):\n self.fig, self.ax = plt.subplots()\n self.ax.set_xlim([0,6])\n self.ax.set_ylim([0,6])\n self.ax.set_aspect(1)\n self.fig.canvas.mpl_connect('button_press_event', lambda e: self.onclick(e))\n self.drawcid = self.fig.canvas.mpl_connect('draw_event', lambda e: self.ondraw(e))\n \n self.leg = self.ax.legend(handles=[] , bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0., title=\"Depths:\")\n img = imread(\"readonly/sand.png\")\n self.ax.imshow(img,zorder=0, extent=[0, 6, 0, 6], interpolation=\"bilinear\")\n display(self.msgbox)\n \n def onclick(self, event):\n if (event.button != 1):\n return\n x = event.xdata\n y = event.ydata\n \n self.placeArrow(x, y)\n\n if np.linalg.norm(self.x0 - [x,y]) <= self.tol:\n self.showContours()\n return\n lx = minimize(lambda xs: self.f(xs[0], xs[1]), np.array([x,y])).x\n if np.linalg.norm(lx - [x,y]) <= self.tol:\n self.local_min(lx[0], lx[1])\n return\n\n i = 5\n if self.game_mode == 2:\n while i > 0 :\n i = i - 1\n dx = self.next_step(self.f(x, y), self.df(x, y), self.d2f(x, y))\n self.ax.plot([x, x+dx[0]],[y, y+dx[1]], '-', zorder=15, color=(1,0,0,0.5), ms=6)\n x += dx[0]\n y += dx[1]\n if x < 0 
                if x < 0 or x > 6 or y < 0 or y > 6 :\n                    break\n                self.placeArrow(x, y, auto=True)\n                if np.linalg.norm(self.x0 - [x,y]) <= self.tol:\n                    self.showContours()\n                    break\n                lx = minimize(lambda xs: self.f(xs[0], xs[1]), np.array([x,y])).x\n                if np.linalg.norm(lx - [x,y]) <= self.tol:\n                    self.local_min(lx[0], lx[1])\n                    break\n    \n    def ondraw(self, event):\n        self.fig.canvas.mpl_disconnect(self.drawcid) # Only do this once, then self destruct the event.\n        self.displayMsg(self.start_text)\n\n    def placeArrow(self, x, y, auto=False):\n        d = -self.df(x,y) * self.grad_length\n        dhat = d / np.linalg.norm(d)\n        d = d * np.clip(np.linalg.norm(d), 0, self.grad_max_length) / np.linalg.norm(d)\n\n        if self.arrow_placement == 0: # tip\n            off = d + dhat * 1.5 * self.arrowhead_width\n        elif self.arrow_placement == 1: # head\n            off = d\n        elif self.arrow_placement == 2: # centre\n            off = d / 2\n        else: # tail\n            off = array((0, 0))\n\n        if auto:\n            self.ax.plot([x],[y], 'yo', zorder=25, color=\"red\", ms=6)\n        else:\n            self.nGuess += 1\n            \n            p, = self.ax.plot([x],[y], 'yo', zorder=25, label=\n                str(self.nGuess) + \") %.2fm\" % self.f(x,y), color=self.markerColour, ms=8, markeredgecolor=\"black\")\n\n            if (self.nGuess <= 25) :\n                self.ax.text(x + 0.2*dhat[1], y - 0.2*dhat[0], str(self.nGuess))\n\n            self.handler_map[p] = HandlerLine2D(numpoints=1)\n\n            self.leg = self.ax.legend(handler_map=self.handler_map,bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0., title=\"Depths:\") \n\n            if (self.nGuess == 22 and not self.revealed) :\n                self.displayMsg(\"**Hurry Up!** The supervisor has calls to make.\")\n            elif (self.nGuess > 25 and not self.revealed) :\n                self.showContours()\n                self.displayMsg(\"**Try again.** You've taken too many tries to find the phone. Reload the sandpit and try again.\")\n        \n        if self.game_mode != 1:\n            self.ax.arrow(x-off[0],y-off[1], d[0], d[1],\n                linewidth=1.5, head_width=self.arrowhead_width,\n                head_starts_at_zero=False, zorder=20, color=\"black\")\n\n    def showContours(self):\n        if self.revealed:\n            return\n        x0 = self.x0\n        X, Y = np.meshgrid(np.arange(0,6,0.05), np.arange(0,6,0.05))\n        self.ax.contour(X, Y, self.f(X,Y),10, cmap=self.contourCM)\n        img = imread(\"readonly/phone2.png\")\n        self.ax.imshow(img,zorder=30, extent=[x0[0] - 0.375/2, x0[0] + 0.375/2, x0[1] - 0.375/2, x0[1] + 0.375/2], interpolation=\"bilinear\")\n        self.displayMsg(self.win_text)\n        self.revealed = True\n    \n    def local_min(self, x, y) :\n        img = imread(\"readonly/nophone.png\")\n        self.ax.imshow(img,zorder=30, extent=[x - 0.375/2, x + 0.375/2, y - 0.375/2, y + 0.375/2], interpolation=\"bilinear\")\n        if not self.revealed:\n            self.displayMsg(\"**Oh no!** You've got stuck in a local optimum. Try somewhere else!\")\n    \n    def displayMsg(self, msg):\n        self.msgbox.clear_output()\n        with self.msgbox:\n            display(Markdown(msg))\n    ", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code" ] ]
4a3da2319f213951c661c8e2a8e3769fc44b7006
2,843
ipynb
Jupyter Notebook
23 - Python for Finance/8_Monte Carlo Simulations as a Decision-Making Tool/9_Monte Carlo: Forecasting Stock Prices - Part II (4:38)/MC Forecasting Stock Prices - Part II - Exercise_Yahoo.ipynb
olayinka04/365-data-science-courses
7d71215432f0ef07fd3def559d793a6f1938d108
[ "Apache-2.0" ]
null
null
null
23 - Python for Finance/8_Monte Carlo Simulations as a Decision-Making Tool/9_Monte Carlo: Forecasting Stock Prices - Part II (4:38)/MC Forecasting Stock Prices - Part II - Exercise_Yahoo.ipynb
olayinka04/365-data-science-courses
7d71215432f0ef07fd3def559d793a6f1938d108
[ "Apache-2.0" ]
null
null
null
23 - Python for Finance/8_Monte Carlo Simulations as a Decision-Making Tool/9_Monte Carlo: Forecasting Stock Prices - Part II (4:38)/MC Forecasting Stock Prices - Part II - Exercise_Yahoo.ipynb
olayinka04/365-data-science-courses
7d71215432f0ef07fd3def559d793a6f1938d108
[ "Apache-2.0" ]
null
null
null
21.216418
113
0.527963
[ [ [ "## Monte Carlo - Forecasting Stock Prices - Part II ", "_____no_output_____" ], [ "*Suggested Answers follow (usually there are multiple ways to solve a problem in Python).*", "_____no_output_____" ], [ "Forecasting Future Stock Prices – continued:", "_____no_output_____" ] ], [ [ "import numpy as np \nimport pandas as pd \nfrom pandas_datareader import data as wb \nimport matplotlib.pyplot as plt \nfrom scipy.stats import norm\n%matplotlib inline\n\nticker = 'MSFT' \ndata = pd.DataFrame()\ndata[ticker] = wb.DataReader(ticker, data_source='yahoo', start='2000-1-1')['Adj Close']\n\nlog_returns = np.log(1 + data.pct_change())\nu = log_returns.mean()\nvar = log_returns.var()\ndrift = u - (0.5 * var)\nstdev = log_returns.std()", "_____no_output_____" ] ], [ [ "******", "_____no_output_____" ], [ "Use “.values” to transform the *drift* and the *stdev* objects into arrays. ", "_____no_output_____" ], [ "Forecast future stock prices for every trading day a year ahead. So, assign 250 to “t_intervals”. <br />\nLet’s examine 10 possible outcomes. Bind “iterations” to the value of 10.", "_____no_output_____" ], [ "Use the formula we have provided and calculate daily returns.", "_____no_output_____" ], [ "$$\ndaily\\_returns = exp({drift} + {stdev} * z), \n$$ \n<br>\n$$\nwhere\\ z = norm.ppf(np.random.rand(t\\_intervals, iterations)\n$$", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
4a3da4ad9693811c01e53d27537711681152ec56
28,030
ipynb
Jupyter Notebook
notebooks/eeh/road-results.ipynb
nismod/transport
a707cfa203a77b9793df73cf3cb533cb653985dd
[ "MIT" ]
6
2019-03-07T16:02:22.000Z
2022-01-27T10:35:29.000Z
notebooks/eeh/road-results.ipynb
nismod/transport
a707cfa203a77b9793df73cf3cb533cb653985dd
[ "MIT" ]
16
2018-08-07T14:05:54.000Z
2021-12-18T18:16:24.000Z
notebooks/eeh/road-results.ipynb
nismod/transport
a707cfa203a77b9793df73cf3cb533cb653985dd
[ "MIT" ]
2
2020-06-15T14:18:32.000Z
2021-04-16T09:44:24.000Z
27.918327
183
0.532929
[ [ [ "import glob\nimport os\nimport warnings\n\nimport geopandas\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport matplotlib.cm as cm\nimport matplotlib.colors\nimport pandas\nimport seaborn\n\nfrom cartopy import crs as ccrs\nfrom mpl_toolkits.axes_grid1 import make_axes_locatable", "_____no_output_____" ], [ "# from geopandas/geoseries.py:358, when using geopandas.clip:\n#\n# UserWarning: GeoSeries.notna() previously returned False for both missing (None) and empty geometries.\n# Now, it only returns False for missing values. Since the calling GeoSeries contains empty geometries, \n# the result has changed compared to previous versions of GeoPandas.\n#\n# Given a GeoSeries 's', you can use '~s.is_empty & s.notna()' to get back the old behaviour.\n#\n# To further ignore this warning, you can do: \nwarnings.filterwarnings('ignore', 'GeoSeries.notna', UserWarning)", "_____no_output_____" ], [ "# default to larger figures\nplt.rcParams['figure.figsize'] = 10, 10", "_____no_output_____" ] ], [ [ "# Postprocessing and plotting EEH analysis\nScenarios\n- [x] Colour coded map showing the percentage changes in EEH population by LAD \n- [x] Total EEH population compared with ONS projection \n- [x] Total housing growth per LAD, 2015-2020, 2020-2030, 2030-2040, 2040-2050 (may be better as cumulative chart with LADs)\n\nPathways \n- [x] Proportion of engine types for each Pathway 2015-2050 \n- [x] Annual CO2 emission * 5 Pathways 2015, 2020, 2030, 2040, 2050 \n- [x] Colour coded map showing Vehicle km in 2050 for each LAD * 5 Pathways\n- [x] Annual electricity consumption for car trips * 5 Pathways, 2015, 2020, 2030, 2040, 2050 \n- [x] Congestion/capacity utilisation in 2050 for each LAD * 5 Pathways (map/chart)\n", "_____no_output_____" ] ], [ [ "all_zones = geopandas.read_file('../preparation/Local_Authority_Districts__December_2019__Boundaries_UK_BUC-shp/Local_Authority_Districts__December_2019__Boundaries_UK_BUC.shp')", "_____no_output_____" ], [ "zone_codes = pandas.read_csv('lads-codes-eeh.csv').lad19cd", "_____no_output_____" ], [ "eeh_zones = all_zones \\\n [all_zones.lad19cd.isin(zone_codes)] \\\n [['lad19cd', 'lad19nm', 'st_areasha', 'geometry']]\neeh_zones.plot()", "_____no_output_____" ], [ "scenarios = [os.path.basename(d) for d in sorted(glob.glob('eeh/0*'))]\nscenarios", "_____no_output_____" ], [ "timesteps = [os.path.basename(d) for d in sorted(glob.glob('eeh/01-BaU/*'))]\ntimesteps", "_____no_output_____" ] ], [ [ "## Population scenario", "_____no_output_____" ] ], [ [ "def read_pop(fname):\n pop = pandas.read_csv(fname)\n pop = pop \\\n [pop.year.isin([2015, 2050])] \\\n .melt(id_vars='year', var_name='lad19cd', value_name='population') \n pop = pop[pop.lad19cd.isin(zone_codes)] \\\n .pivot(index='lad19cd', columns='year')\n pop.columns = ['pop2015', 'pop2050']\n\n pop['perc_change'] = (pop.pop2050 - pop.pop2015) / pop.pop2015\n pop.perc_change *= 100\n return pop\n\neehpop = read_pop('../preparation/data/csvfiles/eehPopulation.csv')\narcpop = read_pop('../preparation/data/csvfiles/eehArcPopulationBaseline.csv')", "_____no_output_____" ], [ "eehpop.sort_values(by='perc_change').tail()", "_____no_output_____" ], [ "def plot_pop(eeh_zones, pop):\n df = eeh_zones.merge(pop, on='lad19cd', validate='one_to_one')\n \n fig, ax = plt.subplots(1, 1)\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)\n\n divider = make_axes_locatable(ax)\n\n cax = divider.append_axes(\"right\", size=\"5%\", pad=0.1)\n \n df.plot(column='perc_change', ax=ax, legend=True, cax=cax, 
    df.plot(column='perc_change', ax=ax, legend=True, cax=cax, cmap='coolwarm_r', vmax=95, vmin=-95)\n\n    cax.yaxis.set_label_text('Population (% change 2015-2050)')\n    cax.yaxis.get_label().set_visible(True)\n\n    return fig", "_____no_output_____" ], [ "eehpop.to_csv('eehPopulationChange.csv')", "_____no_output_____" ], [ "fig = plot_pop(eeh_zones, eehpop)\nplt.savefig(\"eehPopulationChange.png\")\nplt.savefig(\"eehPopulationChange.svg\")", "_____no_output_____" ], [ "fig = plot_pop(eeh_zones, arcpop)\nplt.savefig(\"snppPopulationChange.png\")\nplt.savefig(\"snppPopulationChange.svg\")", "_____no_output_____" ] ], [ [ "## Results", "_____no_output_____" ] ], [ [ "def read_result(fname, scenarios, timesteps):\n    dfs = []\n    for s in scenarios:\n        for t in timesteps:\n            path = os.path.join('eeh', s, t, fname)\n            _, ext = os.path.splitext(fname)\n            if ext == '.csv':\n                df = pandas.read_csv(path)\n            elif ext in ('.shp', '.gpkg', '.geojson'):\n                df = geopandas.read_file(path)\n            else:\n                raise Exception(f\"Don't know how to read files of type '{ext}'\")\n            df['year'] = t\n            df['scenario'] = s\n            dfs.append(df)\n    return pandas.concat(dfs)", "_____no_output_____" ] ], [ [ "## CO2 Emissions", "_____no_output_____" ] ], [ [ "zone_vehicle_emissions = read_result('totalCO2EmissionsZonalPerVehicleType.csv', scenarios, timesteps)\nzone_vehicle_emissions.head(2)", "_____no_output_____" ], [ "annual_eeh_emissions = zone_vehicle_emissions[zone_vehicle_emissions.zone.isin(zone_codes)] \\\n    .groupby(['scenario', 'year']) \\\n    .sum()\nannual_eeh_emissions['TOTAL'] = annual_eeh_emissions.sum(axis=1)\nannual_eeh_emissions.to_csv('eehCO2Emissions.csv')\nannual_eeh_emissions.head(10)", "_____no_output_____" ] ], [ [ "## Vehicle km per LAD", "_____no_output_____" ] ], [ [ "vkm_a = read_result('vehicleKilometresWithAccessEgress.csv', scenarios, timesteps)\neeh_vkm_a = vkm_a[vkm_a.zone.isin(zone_codes)] \\\n    .set_index(['scenario', 'year', 'zone'])\neeh_vkm_a['TOTAL'] = eeh_vkm_a.sum(axis=1)\neeh_vkm_a.to_csv('eehVehicleKilometresWithAccessEgress.csv')\neeh_vkm_a.head()", "_____no_output_____" ], [ "vkm = read_result('vehicleKilometres.csv', scenarios, timesteps)\neeh_vkm = vkm[vkm.zone.isin(zone_codes)] \\\n    .set_index(['scenario', 'year', 'zone'])\neeh_vkm['TOTAL'] = eeh_vkm.sum(axis=1)\neeh_vkm.to_csv('eehVehicleKilometres.csv')\neeh_vkm.head()", "_____no_output_____" ], [ "eeh_vkm.describe()", "_____no_output_____" ], [ "df = eeh_vkm.reset_index().drop(columns='zone').groupby(['scenario', 'year']).sum()[['TOTAL']].reset_index()\n\nseaborn.catplot(\n    x = \"year\",\n    y = \"TOTAL\",\n    hue = \"scenario\",\n    data = df,\n    kind = \"bar\")", "_____no_output_____" ], [ "def plot_vkm(eeh_zones, eeh_vkm, scenario, year):\n    vmax = eeh_vkm.TOTAL.max()\n    df = eeh_vkm[['TOTAL']].reset_index() \\\n        .rename(columns={'TOTAL': 'vkm'})\n    df = df[(df.scenario == scenario) & (df.year == year)] \\\n        .drop(columns=['scenario', 'year'])\n    df = geopandas.GeoDataFrame(df.merge(eeh_zones, left_on='zone', right_on='lad19cd', validate='one_to_one'))\n    \n    fig, ax = plt.subplots(1, 1)\n    ax.xaxis.set_visible(False)\n    ax.yaxis.set_visible(False)\n\n    divider = make_axes_locatable(ax)\n\n    cax = divider.append_axes(\"right\", size=\"5%\", pad=0.1)\n\n    df.plot(column='vkm', ax=ax, legend=True, cax=cax, cmap='inferno', vmax=vmax)\n\n    cax.yaxis.set_label_text('Vehicle kilometres (km)')\n    cax.yaxis.get_label().set_visible(True)\n\n    return fig", "_____no_output_____" ], [ "fig = plot_vkm(eeh_zones, eeh_vkm, scenarios[0], \"2015\")\nplt.savefig(\"eehVehicleKilometres2015.png\")\nplt.savefig(\"eehVehicleKilometres2015.svg\")\n\n
for s in scenarios:\n    fig = plot_vkm(eeh_zones, eeh_vkm, s, \"2050\")\n    plt.savefig(f\"eehVehicleKilometres2050_{s}.png\")\n    plt.savefig(f\"eehVehicleKilometres2050_{s}.svg\")", "_____no_output_____" ] ], [ [ "## Electricity consumption for car trips", "_____no_output_____" ] ], [ [ "car_elec = read_result('zonalTemporalElectricityCAR.csv', scenarios, timesteps)\ncar_elec = car_elec[car_elec.zone.isin(zone_codes)] \\\n    .set_index(['scenario', 'year', 'zone'])\ncar_elec['TOTAL'] = car_elec.sum(axis=1)\ncar_elec.to_csv('eehZonalTemporalElectricityCAR.csv')\ncar_elec.head(2)", "_____no_output_____" ], [ "car_energy = read_result('energyConsumptionsZonalCar.csv', scenarios, timesteps)\ncar_energy = car_energy[car_energy.zone.isin(zone_codes)] \\\n    .set_index(['scenario', 'year', 'zone'])\ncar_energy.to_csv('eehEnergyConsumptionsZonalCar.csv')\ncar_energy.head(2)", "_____no_output_____" ] ], [ [ "## Congestion/capacity utilisation", "_____no_output_____" ] ], [ [ "zb = eeh_zones.bounds\nextent = (zb.minx.min(), zb.maxx.max(), zb.miny.min(), zb.maxy.max())\nextent", "_____no_output_____" ], [ "network_base = read_result('outputNetwork.shp', [scenarios[0]], [\"2015\"])", "_____no_output_____" ], [ "eeh_nb = network_base.cx[extent[0]:extent[1], extent[2]:extent[3]].copy()\neeh_nbc = geopandas.clip(eeh_nb, eeh_zones)", "_____no_output_____" ], [ "eeh_nb.head(1)", "_____no_output_____" ], [ "eeh_nb.drop(columns=['SRefE','SRefN','IsFerry', 'iDir', 'Anode', 'Bnode', 'CP', 'year', 'CapUtil', 'scenario']).to_file('eehNetwork.gpkg', driver='GPKG')", "_____no_output_____" ], [ "def plot_cap(zones, network, network_clipped):\n    fig, ax = plt.subplots(1, 1)\n    ax.xaxis.set_visible(False)\n    ax.yaxis.set_visible(False)\n\n    divider = make_axes_locatable(ax)\n\n    cax = divider.append_axes(\"right\", size=\"5%\", pad=0.1)\n\n    zones.plot(ax=ax, color='#eeeeee', edgecolor='white')\n    network.plot(ax=ax, color='#eeeeee')\n    network_clipped.plot(column='CapUtil', ax=ax, legend=True, cax=cax, cmap='inferno', vmax=200)\n\n    cax.yaxis.set_label_text('Capacity Utilisation (%)')\n    cax.yaxis.get_label().set_visible(True)\n\n    return fig", "_____no_output_____" ], [ "fig = plot_cap(eeh_zones, eeh_nb, eeh_nbc)\nplt.savefig('eehCapacity2015.png')\nplt.savefig('eehCapacity2015.svg')", "_____no_output_____" ], [ "for s in scenarios:\n    network = read_result('outputNetwork.shp', [s], [\"2050\"])\n    eeh_nb = network.cx[extent[0]:extent[1], extent[2]:extent[3]].copy()\n    eeh_nbc = geopandas.clip(eeh_nb, eeh_zones)\n    fig = plot_cap(eeh_zones, eeh_nb, eeh_nbc)\n    plt.savefig(f'eehCapacity2050_{s}.png')\n    plt.savefig(f'eehCapacity2050_{s}.svg')", "_____no_output_____" ], [ "dfs = []\n\ndf = read_result('outputNetwork.shp', [scenarios[0]], [\"2015\"])\n\ndf = geopandas.clip(df, eeh_zones) \\\n    [['EdgeID', 'Anode', 'Bnode', 'CP', 'RoadNumber', 'iDir', 'SRefE',\n      'SRefN', 'Distance', 'FFspeed', 'FFtime', 'IsFerry', 'Lanes', 'CapUtil',\n      'year', 'scenario']]\ndfs.append(df)\n\nfor s in scenarios:\n    df = read_result('outputNetwork.shp', [s], [\"2050\"])\n    df = geopandas.clip(df, eeh_zones) \\\n        [['EdgeID', 'Anode', 'Bnode', 'CP', 'RoadNumber', 'iDir', 'SRefE',\n          'SRefN', 'Distance', 'FFspeed', 'FFtime', 'IsFerry', 'Lanes', 'CapUtil',\n          'year', 'scenario']]\n    dfs.append(df)\n    \nlink_capacity = pandas.concat(dfs) \\\n    .set_index(['scenario', 'year'])\nlink_capacity.head(2)", "_____no_output_____" ], [ "link_to_lad = geopandas.sjoin(eeh_nbc, eeh_zones, how=\"left\", op='intersects') \\\n
    [['EdgeID','lad19cd','lad19nm']] \\\n    .drop_duplicates(subset=['EdgeID'])\nlink_to_lad", "_____no_output_____" ], [ "link_capacity", "_____no_output_____" ], [ "link_capacity_with_lad = link_capacity \\\n    .reset_index() \\\n    .merge(link_to_lad, on='EdgeID', how='left') \\\n    .set_index(['scenario', 'year', 'EdgeID']) \n\nlink_capacity_with_lad", "_____no_output_____" ], [ "link_capacity_with_lad.to_csv('eehLinkCapUtil.csv')", "_____no_output_____" ], [ "mean_cap = link_capacity_with_lad[['CapUtil', 'lad19cd','lad19nm']] \\\n    .reset_index() \\\n    .drop(columns='EdgeID') \\\n    .groupby(['scenario', 'year', 'lad19cd', 'lad19nm']) \\\n    .mean()\nmean_cap.to_csv('eehLADAverageCapUtil.csv')\nmean_cap", "_____no_output_____" ], [ "df = mean_cap.reset_index()\nprint(len(df.scenario.unique()))\nprint(len(df.year.unique()))\n\nprint(len(df.lad19cd.unique()))\nprint(6 * 37)", "_____no_output_____" ] ], [ [ "## Link travel times/speeds", "_____no_output_____" ] ], [ [ "link_times = read_result('linkTravelTimes.csv', scenarios, timesteps)\nlink_times.head(1)", "_____no_output_____" ], [ "eeh_nbc", "_____no_output_____" ], [ "eeh_lt = link_times[link_times.edgeID.isin(eeh_nbc.EdgeID)]", "_____no_output_____" ], [ "eeh_lt.to_csv('eehLinkTravelTimes.csv', index=False)", "_____no_output_____" ], [ "KM_TO_MILES = 0.6213712", "_____no_output_____" ], [ "hours = [\n    'MIDNIGHT', 'ONEAM', 'TWOAM', 'THREEAM', 'FOURAM', 'FIVEAM', \n    'SIXAM', 'SEVENAM', 'EIGHTAM', 'NINEAM', 'TENAM', 'ELEVENAM', \n    'NOON', 'ONEPM', 'TWOPM', 'THREEPM', 'FOURPM', 'FIVEPM', \n    'SIXPM', 'SEVENPM', 'EIGHTPM', 'NINEPM', 'TENPM', 'ELEVENPM'\n]", "_____no_output_____" ], [ "def merge_times_to_network(network_clipped, link_times, hours):\n    # nbc is clipped network\n    # lt is link times\n    # hours is list of hour names\n    \n    # merge link times (by hour of day) onto network\n    df = network_clipped \\\n        .drop(columns=['scenario', 'year']) \\\n        .rename(columns={'EdgeID': 'edgeID'}) \\\n        .merge(\n            link_times,\n            on=\"edgeID\"\n        ) \\\n        [[\n            'edgeID', 'RoadNumber', 'iDir', 'Lanes', 'Distance', 'FFspeed', \n            'MIDNIGHT', 'ONEAM', 'TWOAM', 'THREEAM', 'FOURAM', 'FIVEAM', \n            'SIXAM', 'SEVENAM', 'EIGHTAM', 'NINEAM', 'TENAM', 'ELEVENAM', \n            'NOON', 'ONEPM', 'TWOPM', 'THREEPM', 'FOURPM', 'FIVEPM', \n            'SIXPM', 'SEVENPM', 'EIGHTPM', 'NINEPM', 'TENPM', 'ELEVENPM',\n            'geometry'\n        ]]\n    # calculate flow speeds from distance / time * 60 [to get back to km/h] * 0.6213712 [to miles/h]\n    for hour in hours:\n        df[hour] = (df.Distance / df[hour]) * 60 * KM_TO_MILES\n    df.FFspeed *= KM_TO_MILES\n    return df", "_____no_output_____" ], [ "eeh_ltb = merge_times_to_network(\n    eeh_nbc, \n    eeh_lt[(eeh_lt.scenario == '01-BaU') & (eeh_lt.year == \"2015\")], \n    hours)\neeh_ltb", "_____no_output_____" ], [ "eeh_ltb.columns", "_____no_output_____" ], [ "def plot_speed(zones, network, network_clipped, col, label=None):\n    fig, ax = plt.subplots(1, 1)\n    ax.xaxis.set_visible(False)\n    ax.yaxis.set_visible(False)\n\n    divider = make_axes_locatable(ax)\n\n    cax = divider.append_axes(\"right\", size=\"5%\", pad=0.1)\n\n    zones.plot(ax=ax, color='#eeeeee', edgecolor='white')\n    network.plot(ax=ax, color='#eeeeee')\n    network_clipped.plot(column=col, ax=ax, legend=True, cax=cax, cmap='inferno', vmax=75, vmin=0)\n    \n    if label is not None:\n        # place a text box in upper left in axes coords\n        props = dict(boxstyle='round', facecolor='white', alpha=0.5)\n        ax.text(0.05, 0.95, label, transform=ax.transAxes, fontsize=14,\n            verticalalignment='top', bbox=props)\n\n    cax.yaxis.set_label_text('Speed (mph)')\n
cax.yaxis.get_label().set_visible(True)\n\n return fig", "_____no_output_____" ], [ "fig = plot_speed(eeh_zones, eeh_nb, eeh_ltb, 'EIGHTAM', \"Morning peak\")\nfname = f\"speed2015_peakam.png\"\nplt.savefig(fname)\nplt.close(fig)", "_____no_output_____" ], [ "fig = plot_speed(eeh_zones, eeh_nb, eeh_ltb, 'FFspeed', \"Free flow\")\nfname = f\"speed2015_free.png\"\nplt.savefig(fname)\nplt.close(fig)", "_____no_output_____" ], [ "for i, hour in enumerate(hours):\n fig = plot_speed(eeh_zones, eeh_nb, eeh_ltb, hour, f\"{str(i).zfill(2)}:00\")\n fname = f\"speed2015_{str(i).zfill(3)}.png\"\n print(fname, end=\" \")\n plt.savefig(fname)\n plt.close(fig)", "_____no_output_____" ] ], [ [ "### Convert to GIF\n\nUsing imagemagick, needs installing, next line runs in the shell", "_____no_output_____" ] ], [ [ "! convert -delay 20 -loop 0 speed2015_0*.png speed2015.gif", "_____no_output_____" ] ], [ [ "### Each scenario peak speeds in 2050", "_____no_output_____" ] ], [ [ "for scenario in scenarios:\n ltb = merge_times_to_network(\n eeh_nbc, \n eeh_lt[(eeh_lt.scenario == scenario) & (eeh_lt.year == \"2050\")], \n hours)\n \n fig = plot_speed(eeh_zones, eeh_nb, ltb, 'EIGHTAM', \"Morning peak\")\n fname = f\"speed2050_{scenario}_peakam.png\"\n print(fname, end=\" \")\n plt.savefig(fname)\n plt.close(fig)", "_____no_output_____" ] ], [ [ "## Rank links per-scenario for peak speed in 2050", "_____no_output_____" ] ], [ [ "eeh_flow = eeh_lt[eeh_lt.year == \"2050\"] \\\n [[\"scenario\", \"edgeID\", \"EIGHTAM\", \"freeFlow\"]] \\\n .rename(columns={'EIGHTAM': 'peakFlow'})\n\neeh_flow['flowRatio'] = eeh_flow.freeFlow / eeh_flow.peakFlow\n\neeh_flow.drop(columns=['peakFlow', 'freeFlow'], inplace=True)\n\neeh_flow = eeh_flow.pivot_table(columns='scenario', index='edgeID', values='flowRatio')\neeh_flow.columns.name = None\neeh_flow['bestScenarioAtPeak'] = eeh_flow.idxmax(axis=1)\neeh_flow.head(2)", "_____no_output_____" ], [ "eeh_flow.groupby('bestScenarioAtPeak').count()[[\"01-BaU\"]]", "_____no_output_____" ], [ "eeh_flowg = eeh_nbc \\\n [[\"EdgeID\", \"RoadNumber\", \"iDir\", \"Distance\", \"Lanes\", \"geometry\"]] \\\n .rename(columns={'EdgeID': 'edgeID'}) \\\n .merge(\n eeh_flow,\n on=\"edgeID\"\n )\nlu = {\n# '01-BaU': '1:Business as Usual',\n# '02-HighlyConnected': '2:Highly Connected',\n# '03-AdaptedFleet': '3:Adapted Fleet',\n# '04-BehavShiftPolicy': '4:Behaviour Shift (policy-led)',\n# '05-BehavShiftResults': '5:Behaviour Shift (results-led)',\n '01-BaU': '01 BaU',\n '02-HighlyConnected': '02 HC',\n '03-AdaptedFleet': '03 AF',\n '04-BehavShiftPolicy': '04 BSp',\n '05-BehavShiftResults': '05 BSr',\n}\neeh_flowg.bestScenarioAtPeak = eeh_flowg.bestScenarioAtPeak \\\n .apply(lambda s: lu[s])\neeh_flowg.head(1)", "_____no_output_____" ], [ "eehcm = matplotlib.colors.ListedColormap(\n [(74/255, 120/255, 199/255),\n (238/255, 131/255, 54/255),\n (170/255, 170/255, 170/255),\n (255/255, 196/255, 0/255),\n (84/255, 130/255, 53/255)],\n name='eeh')", "_____no_output_____" ], [ "fig, ax = plt.subplots(1, 1)\nax.xaxis.set_visible(False)\nax.yaxis.set_visible(False)\n\neeh_zones.plot(ax=ax, color='#f2f2f2', edgecolor='white')\neeh_nb.plot(ax=ax, color='#eeeeee')\neeh_flowg.plot(column='bestScenarioAtPeak', ax=ax, legend=True, cmap=eehcm)\nplt.savefig(\"bestScenarioPeakFlowRatio.png\")\nplt.savefig(\"bestScenarioPeakFlowRatio.svg\")", "_____no_output_____" ] ], [ [ "## Link travel times direct", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code" ], [ "markdown" ] ]
4a3dd37db8828ab411b7ab2f1bff494b50f478b3
7,337
ipynb
Jupyter Notebook
Untitled.ipynb
owejow/algorithms
756095671326512cc72b817f1a6fe35e8b9ac71d
[ "Apache-2.0" ]
null
null
null
Untitled.ipynb
owejow/algorithms
756095671326512cc72b817f1a6fe35e8b9ac71d
[ "Apache-2.0" ]
null
null
null
Untitled.ipynb
owejow/algorithms
756095671326512cc72b817f1a6fe35e8b9ac71d
[ "Apache-2.0" ]
null
null
null
21.205202
318
0.462859
[ [ [ "import numpy as np\nimport pandas as pd", "_____no_output_____" ], [ "def sample_generator(num):\n while True:\n num *= 22\n num = yield num\n \n ", "_____no_output_____" ], [ "abc = sample_generator(20)\nprint(next(abc))\nprint(abc.send(22))\nprint(abc.send(33))\nabc.send(34)", "440\n484\n726\n" ], [ "def simple_coro2(a):\n print('-> Started: a =', a)\n b = yield a\n print('-> Received: b =', b)\n c = yield a + b\n print('-> Received: c =', c)", "_____no_output_____" ], [ "my_coro2 = simple_coro2(14)", "_____no_output_____" ], [ "from inspect import getgeneratorstate", "_____no_output_____" ], [ "getgeneratorstate(my_coro2)", "_____no_output_____" ], [ "next(my_coro2)", "-> Started: a = 14\n" ], [ "getgeneratorstate(my_coro2)", "_____no_output_____" ], [ "my_coro2.send(28)", "-> Received: b = 28\n" ], [ "getgeneratorstate(my_coro2)", "_____no_output_____" ], [ "my_coro2.send(99)", "-> Received: c = 99\n" ], [ "getgeneratorstate(my_coro2)", "_____no_output_____" ], [ "from heapq import heappush, heappop, heapify\nfrom functools import wraps\n\ndef coroutine(func):\n \"\"\"Decorator: primes `func` by advancing to first `yield`\"\"\"\n @wraps(func)\n def primer(*args,**kwargs):\n gen = func(*args,**kwargs)\n next(gen)\n return gen\n return primer\n\n\ndef median_heaps(max_heap, min_heap):\n median = -max_heap[0]\n \n if len(min_heap) == len(max_heap):\n median += min_heap[0]\n median /= 2.\n \n return median\n\ndef transfer_heap_value(heap_a, heap_b):\n heappush(heap_b, -heappop(heap_a))\n\n \n@coroutine \ndef median_stream_heap():\n min_heap = []\n max_heap = []\n \n median = None\n num = yield median\n \n heappush(max_heap, -num)\n\n median = median_heaps(max_heap, min_heap)\n num = yield median\n \n heappush(max_heap, -num)\n transfer_heap_value(max_heap, min_heap)\n while True:\n median = median_heaps(max_heap, min_heap)\n num = yield median\n\n \n equal_length = (len(max_heap) == len(min_heap))\n \n if num < median:\n if not equal_length:\n transfer_heap_value(max_heap, min_heap)\n heappush(max_heap, -num)\n \n else:\n if equal_length:\n transfer_heap_value(min_heap, max_heap)\n heappush(min_heap, num)\n\ndef median_streaming(ary):\n median_coroutine = median_stream_heap()\n return [median_coroutine.send(elem) for elem in ary]", "_____no_output_____" ] ] ]
[ "code" ]
[ [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
4a3ddb3c758d226a22c7220d30322c295decb718
159,343
ipynb
Jupyter Notebook
Python for Finance - Code Files/109 Monte Carlo - Euler Discretization - Part I/CSV/Python 2 CSV/MC - Euler Discretization - Part I - Lecture_CSV.ipynb
siddharthjain1611/Python_for_Finance_Investment_Fundamentals-and-Data-Analytics
f2f1e22f2d578c59f833f8f3c8b4523d91286e9e
[ "MIT" ]
3
2020-03-24T12:58:37.000Z
2020-08-03T17:22:35.000Z
Python for Finance - Code Files/109 Monte Carlo - Euler Discretization - Part I/CSV/Python 2 CSV/MC - Euler Discretization - Part I - Lecture_CSV.ipynb
siddharthjain1611/Python_for_Finance_Investment_Fundamentals-and-Data-Analytics
f2f1e22f2d578c59f833f8f3c8b4523d91286e9e
[ "MIT" ]
null
null
null
Python for Finance - Code Files/109 Monte Carlo - Euler Discretization - Part I/CSV/Python 2 CSV/MC - Euler Discretization - Part I - Lecture_CSV.ipynb
siddharthjain1611/Python_for_Finance_Investment_Fundamentals-and-Data-Analytics
f2f1e22f2d578c59f833f8f3c8b4523d91286e9e
[ "MIT" ]
1
2021-10-19T23:59:37.000Z
2021-10-19T23:59:37.000Z
596.790262
153,732
0.936313
[ [ [ "import numpy as np \nimport pandas as pd \nfrom pandas_datareader import data as web \nfrom scipy.stats import norm \nimport matplotlib.pyplot as plt \n%matplotlib inline", "_____no_output_____" ], [ "data = pd.read_csv('D:/Python/PG_2007_2017.csv', index_col = 'Date')", "_____no_output_____" ], [ "log_returns = np.log(1 + data.pct_change())", "_____no_output_____" ] ], [ [ "<br /><br />\n$$\n{\\LARGE S_t = S_{t-1} \\mathbin{\\cdot} e^{((r - \\frac{1}{2} \\cdot stdev^2) \\mathbin{\\cdot} \\delta_t + stdev \\mathbin{\\cdot} \\sqrt{\\delta_t} \\mathbin{\\cdot} Z_t)} }\n$$\n<br /><br />", "_____no_output_____" ] ], [ [ "r = 0.025", "_____no_output_____" ], [ "stdev = log_returns.std() * 250 ** 0.5\nstdev", "_____no_output_____" ], [ "type(stdev)", "_____no_output_____" ], [ "stdev = stdev.values\nstdev", "_____no_output_____" ], [ "T = 1.0 \nt_intervals = 250 \ndelta_t = T / t_intervals \n\niterations = 10000 ", "_____no_output_____" ], [ "Z = np.random.standard_normal((t_intervals + 1, iterations)) \nS = np.zeros_like(Z) \nS0 = data.iloc[-1] \nS[0] = S0", "_____no_output_____" ] ], [ [ "<br /><br />\n$$\n{\\LARGE S_t = S_{t-1} \\mathbin{\\cdot} e^{((r - \\frac{1}{2} \\cdot stdev^2) \\mathbin{\\cdot} \\delta_t + stdev \\mathbin{\\cdot} \\sqrt{\\delta_t} \\mathbin{\\cdot} Z_t)} }\n$$\n<br /><br />", "_____no_output_____" ] ], [ [ "for t in range(1, t_intervals + 1):\n S[t] = S[t-1] * np.exp((r - 0.5 * stdev ** 2) * delta_t + stdev * delta_t ** 0.5 * Z[t])", "_____no_output_____" ], [ "S", "_____no_output_____" ], [ "S.shape", "_____no_output_____" ], [ "plt.figure(figsize=(10, 6))\nplt.plot(S[:, :10]);", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code" ] ]
4a3ddc96041d2f9955f8d9ac1078844918a5fc6e
61,439
ipynb
Jupyter Notebook
Interpolater.ipynb
FahmidulHaquee/AMI-Data-Cleaning
91f7bfdc5ce2836157de83fe4db1edf387e8765f
[ "MIT" ]
null
null
null
Interpolater.ipynb
FahmidulHaquee/AMI-Data-Cleaning
91f7bfdc5ce2836157de83fe4db1edf387e8765f
[ "MIT" ]
null
null
null
Interpolater.ipynb
FahmidulHaquee/AMI-Data-Cleaning
91f7bfdc5ce2836157de83fe4db1edf387e8765f
[ "MIT" ]
null
null
null
63.208848
30,084
0.67895
[ [ [ "# import data handling libraries\nimport pandas as pd\nimport numpy as np\n# import graphing libraries\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n# import stats libraries\nfrom scipy.optimize import curve_fit\nfrom scipy.special import factorial\nfrom scipy.stats import poisson, norm, chi2, ttest_ind, ttest_rel\nfrom scipy import stats\nfrom scipy import fft\nfrom scipy.cluster.hierarchy import dendrogram, linkage\nimport plotly.express as px\n# from sklearn.cluster import AgglomerativeClustering\n", "_____no_output_____" ] ], [ [ "Initialisation function ", "_____no_output_____" ] ], [ [ "columns = [\n 'Unique Meter ID', \n 'Unix Time Stamp', \n 'Date/Time Stamp', \n 'Incremental Consumption Value (Gallons)', \n 'Reading Value (Gallons)'\n]\ndf = pd.read_csv(\"/Users/derekzheng/Documents/coding/r42/Sample_UtilityX_AMIData.csv\", \n# names=columns, \n header=None,\n index_col=False \n )\n \ndf = df.loc[:,[0,1,2,3,4]]\ndf.columns = columns\ndf.head()", "_____no_output_____" ], [ "dataframe = df", "_____no_output_____" ] ], [ [ "Converting to datetime module", "_____no_output_____" ] ], [ [ "# convert datatype to datetime\n\"\"\" This function converts the date and time to datetime datatype\nIt requires two inputs:\n the first being the dataframe \n the second being the name of the time column in string format e.g. 'date'\n\"\"\"\ndef convert_to_datetime(df, time_col):\n df[time_col] = pd.to_datetime(df[time_col])\n df['dotw'] = df['Date/Time Stamp'].dt.dayofweek\n df['hour'] = df['Date/Time Stamp'].dt.hour\n df['doty'] = df['Date/Time Stamp'].dt.dayofyear\n return df", "_____no_output_____" ], [ "convert_to_datetime(df, 'Date/Time Stamp')", "_____no_output_____" ], [ "def make_timestamps(df, meter_col, date_col):\n meters = df[meter_col].unique() # get all unique meters\n dates = df[date_col].unique() # get all unique datetime points\n\n # create df with all possible datetime points for each meter\n # set columns for new df\n df_temp = pd.DataFrame(np.array(np.meshgrid(meters, dates)).T.reshape(-1,2)) \n df_temp.columns = [meter_col, date_col] \n\n df_temp[date_col] = pd.to_datetime(df_temp[date_col]) # change datatype\n df_new = df_temp.merge(df, how = 'left') #merge with original dataframe to give NaN read values where data is missing \n df_new = df_new.sort_values([meter_col, date_col])\n df_new = df_new.reset_index()\n\n del df_temp\n return df_new\n\ndef add_periodic_time_columns(df, date_col):\n df['dotw'] = df[date_col].dt.dayofweek\n df['hour'] = df[date_col].dt.hour\n df['doty'] = df[date_col].dt.dayofyear\n return df\n\ndef interpolate_missing_reads(df, meter_col, date_col, reads_col, nan_timestamps=True):\n if nan_timestamps != True:\n df_temp = make_timestamps(df, meter_col, date_col)\n else:\n df_temp = df\n\n df_temp = df_temp.sort_values([meter_col, date_col])\n df_temp = df_temp.reset_index()\n\n df_temp.loc[:, [reads_col]]\n\n df_interp = df_temp.interpolate(\n method='spline',\n limit_direction='both',\n limit_area='inside',\n order=1\n )\n return df_interp\n", "_____no_output_____" ], [ "df1 = make_timestamps(df, 'Unique Meter ID', 'Date/Time Stamp')", "_____no_output_____" ], [ "df_test = df1.loc[df1['Unique Meter ID'] == 31793811]", "_____no_output_____" ], [ "# display(df_test.count())\n# df_temp = interpolate_missing_reads(df_test, 'Unique Meter ID', 'Date/Time Stamp', 'Reading Value (Gallons)', nan_timestamps=True)\n# df_temp.count()", "_____no_output_____" ], [ "df1.shape", "_____no_output_____" ], [ "df1.loc[(df1['Unique Meter ID'] == 
23385775)&~(df1['Incremental Consumption Value (Gallons)'].isna())]", "_____no_output_____" ], [ "dfs = []\ndf_base = []\ni = 1\n\n# print(\"values: \", df1['Unique Meter ID'].unique())\n# s factor for cubic spline -> change it with the :30 OR use interpolate directly not the wrapper\n# first and last -> find a linear reg of the data and then you subtract the baseline\n# always need to preserve monotonic increasing\n#how to make it monotonic increasing with the linear fit taken out \n# normalize the data? idk\nfor meter in df1['Unique Meter ID'].unique():\n if i == 1:\n df_base = df1.loc[df1['Unique Meter ID'] == meter]\n df_base = interpolate_missing_reads(df_base, 'Unique Meter ID', 'Date/Time Stamp', 'Reading Value (Gallons)', nan_timestamps=True)\n i += 1\n else: \n # if i % 10 == 0:\n if i == 28 or i == 27:\n # print(df_base)\n print(\"skipped \", meter)\n i += 1\n continue\n print(i, \" - \", meter)\n df_temp = df1.loc[df1['Unique Meter ID'] == meter]\n # df_temp = interpolate_missing_reads(df_temp, 'Unique Meter ID', 'Date/Time Stam\n df_temp = interpolate_missing_reads(df_temp, 'Unique Meter ID', 'Date/Time Stamp', 'Reading Value (Gallons)', nan_timestamps=True)\n df_temp.to_csv('output.csv', mode='a')\n df_base = pd.concat([df_base, df_temp])\n del df_temp\n print(\" - \")\n i += 1", "_____no_output_____" ], [ "df_base.to_csv('Sample_Interpolation_Trial1_2020-09-.csv')", "_____no_output_____" ], [ "# Yearly cycle\n# raw_df.groupby(['doty']).\\\n # agg({'Incremental Consumption Value (Gallons)':'mean'}).plot()\nfig, ax = plt.subplots(1, figsize=(12,8))\nsns.lineplot(\n x='doty',\n y='Incremental Consumption Value (Gallons)',\n data=df_base\n)\n# plt.ylabel('')\nplt.xlabel('Day of the Year')\nplt.title('Filled missing values for n=90 meters')\n\n\n# # plt.ylim(0,130)", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code", "code", "code", "code" ] ]
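The water-meter record above hinges on two steps: build a complete meter-by-timestamp grid (make_timestamps) and then fill interior gaps per meter (interpolate_missing_reads). Below is a minimal, self-contained sketch of that two-step idea on synthetic data; the column names mirror the notebook, but the toy values and this pandas-only implementation are illustrative, not the notebook's wrapper functions.

```python
import pandas as pd

# Synthetic hourly reads for two meters, with interior gaps (all values invented).
ts = pd.date_range("2020-01-01", periods=6, freq="H")
df = pd.DataFrame({
    "Unique Meter ID": [1, 1, 1, 2, 2],
    "Date/Time Stamp": [ts[0], ts[2], ts[5], ts[1], ts[4]],
    "Reading Value (Gallons)": [10.0, 12.0, 15.0, 5.0, 8.0],
})

# Step 1: give every meter every timestamp (NaN where a read is missing),
# analogous to make_timestamps' meshgrid-plus-merge.
grid = pd.MultiIndex.from_product(
    [df["Unique Meter ID"].unique(), ts],
    names=["Unique Meter ID", "Date/Time Stamp"],
)
full = (df.set_index(["Unique Meter ID", "Date/Time Stamp"])
          .reindex(grid)
          .reset_index())

# Step 2: fill interior gaps per meter. Reads are cumulative, so linear
# interpolation between known points keeps them monotonically increasing.
full["Reading Value (Gallons)"] = (
    full.groupby("Unique Meter ID")["Reading Value (Gallons)"]
        .transform(lambda s: s.interpolate(method="linear", limit_area="inside"))
)
print(full)
```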
4a3ddcf3e10cbe69a826afc8c15bcc19df463dab
1,263
ipynb
Jupyter Notebook
notebooks/book2/10/mc_estimate_pi.ipynb
patel-zeel/pyprobml
027ef3c13a2a63d958e05fdedb68fd7b8f0e0261
[ "MIT" ]
null
null
null
notebooks/book2/10/mc_estimate_pi.ipynb
patel-zeel/pyprobml
027ef3c13a2a63d958e05fdedb68fd7b8f0e0261
[ "MIT" ]
1
2022-03-27T04:59:50.000Z
2022-03-27T04:59:50.000Z
notebooks/book2/10/mc_estimate_pi.ipynb
patel-zeel/pyprobml
027ef3c13a2a63d958e05fdedb68fd7b8f0e0261
[ "MIT" ]
2
2022-03-26T11:52:36.000Z
2022-03-27T05:17:48.000Z
26.3125
76
0.509105
[ [ [ "empty" ] ] ]
[ "empty" ]
[ [ "empty" ] ]
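The mc_estimate_pi.ipynb record above was captured with empty cells, so only its metadata survives. Going by the filename alone, the usual content is a Monte Carlo estimate of pi; the generic sketch below illustrates that estimator and is not the repository's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Sample points uniformly in the unit square; the fraction landing inside the
# quarter circle x^2 + y^2 <= 1 estimates its area, pi/4.
x, y = rng.random(n), rng.random(n)
pi_hat = 4.0 * np.mean(x**2 + y**2 <= 1.0)
print(pi_hat)  # close to 3.1416; the error shrinks like 1/sqrt(n)
```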
4a3ded7f986a28975a75513c4ac032bdbb4ddae8
46,707
ipynb
Jupyter Notebook
IBM_AI/5_TensorFlow/ML0120EN-1.4-Review-LogisticRegressionwithTensorFlow.ipynb
merula89/cousera_notebooks
caa529a7abd3763d26f3f2add7c3ab508fbb9bd2
[ "MIT" ]
null
null
null
IBM_AI/5_TensorFlow/ML0120EN-1.4-Review-LogisticRegressionwithTensorFlow.ipynb
merula89/cousera_notebooks
caa529a7abd3763d26f3f2add7c3ab508fbb9bd2
[ "MIT" ]
null
null
null
IBM_AI/5_TensorFlow/ML0120EN-1.4-Review-LogisticRegressionwithTensorFlow.ipynb
merula89/cousera_notebooks
caa529a7abd3763d26f3f2add7c3ab508fbb9bd2
[ "MIT" ]
null
null
null
67.888081
12,460
0.717323
[ [ [ "<a href=\"https://www.bigdatauniversity.com\"><img src=\"https://ibm.box.com/shared/static/qo20b88v1hbjztubt06609ovs85q8fau.png\" width=\"400px\" align=\"center\"></a>\n<h1 align=\"center\"><font size=\"5\">LOGISTIC REGRESSION WITH TENSORFLOW</font></h1>", "_____no_output_____" ], [ "## Table of Contents\n\nLogistic Regression is one of the most important techniques in data science. It is usually used to solve the classic classification problem.\n\n<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n<font size = 3><strong>This lesson covers the following concepts of Logistic Regression:</strong></font>\n<br>\n<h2>Table of Contents</h2>\n\n<ol>\n <li><a href=\"#ref1\">Linear Regression vs Logistic Regression</a></li>\n <li><a href=\"#ref2\">Utilizing Logistic Regression in TensorFlow</a></li>\n <li><a href=\"#ref3\">Training</a></li>\n</ol> \n</div>\n<p></p>\n<br>\n\n<hr>", "_____no_output_____" ], [ "<a id=\"ref1\"></a>\n<h2>What is the difference between Linear and Logistic Regression?</h2>\n\nWhile Linear Regression is suited for estimating continuous values (e.g. estimating house price), it is not the best tool for predicting the class to which an observed data point belongs. In order to provide an estimate for classification, we need some sort of guidance on what would be the <b>most probable class</b> for that data point. For this, we use <b>Logistic Regression</b>.\n\n<div class=\"alert alert-success alertsuccess\" style=\"margin-top: 20px\">\n<font size=\"3\"><strong>Recall linear regression:</strong></font>\n<br>\n<br>\nLinear regression finds a function that relates a continuous dependent variable, <i>y</i>, to some predictors (independent variables <i>x1</i>, <i>x2</i>, etc.). Simple linear regression assumes a function of the form:\n<br><br>\n$$\ny = w0 + w1 \\times x1 + w2 \\times x2 + \\cdots\n$$\n<br>\nand finds the values of <i>w0</i>, <i>w1</i>, <i>w2</i>, etc. The term <i>w0</i> is the \"intercept\" or \"constant term\" (it's shown as <i>b</i> in the formula below):\n<br><br>\n$$\nY = W X + b\n$$\n<p></p>\n\n</div>\n\nLogistic Regression is a variation of Linear Regression, useful when the observed dependent variable, <i>y</i>, is categorical. It produces a formula that predicts the probability of the class label as a function of the independent variables.\n\nDespite the name logistic <i>regression</i>, it is actually a <b>probabilistic classification</b> model. Logistic regression fits a special S-shaped curve by taking the linear regression and transforming the numeric estimate into a probability with the following function:\n\n$$\nProbabilityOfaClass = \\theta(y) = \\frac{e^y}{1 + e^y} = exp(y) / (1 + exp(y)) = p \n$$\n\nwhich produces values of p between 0 (as y approaches minus infinity $-\\infty$) and 1 (as y approaches plus infinity $+\\infty$). This now becomes a special kind of non-linear regression.\n\nIn this equation, <i>y</i> is the regression result (the sum of the variables weighted by the coefficients), <code>exp</code> is the exponential function and $\\theta(y)$ is the <a href=\"http://en.wikipedia.org/wiki/Logistic_function\">logistic function</a>, also called logistic curve. 
It is a common \"S\" shape (sigmoid curve), and was first developed for modeling population growth.\n\nYou might also have seen this function before, in another configuration:\n\n$$\nProbabilityOfaClass = \\theta(y) = \\frac{1}{1+e^{-y}}\n$$\n\nSo, briefly, Logistic Regression passes the input through the logistic/sigmoid function but then treats the result as a probability:\n\n<img src=\"https://ibm.box.com/shared/static/kgv9alcghmjcv97op4d6onkyxevk23b1.png\" width=\"400\" align=\"center\">\n", "_____no_output_____" ], [ "-------------------------------", "_____no_output_____" ], [ "<a id=\"ref2\"></a>\n<h2>Utilizing Logistic Regression in TensorFlow</h2>\n\nFor us to utilize Logistic Regression in TensorFlow, we first need to import the required libraries. To do so, you can run the code cell below.", "_____no_output_____" ] ], [ [ "import tensorflow as tf\nimport pandas as pd\nimport numpy as np\nimport time\nfrom sklearn.datasets import load_iris\nfrom sklearn.model_selection import train_test_split\nimport matplotlib.pyplot as plt", "C:\\ProgramData\\Anaconda3\\lib\\site-packages\\tensorflow\\python\\framework\\dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\tensorflow\\python\\framework\\dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 1)])\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\tensorflow\\python\\framework\\dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\tensorflow\\python\\framework\\dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\tensorflow\\python\\framework\\dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\tensorflow\\python\\framework\\dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\tensorboard\\compat\\tensorflow_stub\\dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint8 = np.dtype([(\"qint8\", np.int8, 1)])\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\tensorboard\\compat\\tensorflow_stub\\dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint8 = np.dtype([(\"quint8\", np.uint8, 
1)])\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\tensorboard\\compat\\tensorflow_stub\\dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint16 = np.dtype([(\"qint16\", np.int16, 1)])\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\tensorboard\\compat\\tensorflow_stub\\dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_quint16 = np.dtype([(\"quint16\", np.uint16, 1)])\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\tensorboard\\compat\\tensorflow_stub\\dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n _np_qint32 = np.dtype([(\"qint32\", np.int32, 1)])\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\tensorboard\\compat\\tensorflow_stub\\dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.\n np_resource = np.dtype([(\"resource\", np.ubyte, 1)])\n" ] ], [ [ "Next, we will load the dataset we are going to use. In this case, we are utilizing the <code>iris</code> dataset, which is inbuilt -- so there's no need to do any preprocessing and we can jump right into manipulating it. We separate the dataset into <i>xs</i> and <i>ys</i>, and then into training <i>xs</i> and <i>ys</i> and testing <i>xs</i> and <i>ys</i>, (pseudo)randomly.", "_____no_output_____" ], [ "<h3>Understanding the Data</h3>\n\n<h4><code>Iris Dataset</code>:</h4>\nThis dataset was introduced by British Statistician and Biologist Ronald Fisher, it consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor). In total it has 150 records under five attributes - petal length, petal width, sepal length, sepal width and species. <a href=\"https://archive.ics.uci.edu/ml/datasets/iris\">Dataset source</a>\n\nAttributes\nIndependent Variable\n<ul>\n <li>petal length</li>\n <li>petal width</li>\n <li>sepal length</li>\n <li>sepal width</li>\n</ul>\nDependent Variable\n<ul> \n <li>Species\n <ul>\n <li>Iris setosa</li>\n <li>Iris virginica</li>\n <li>Iris versicolor</li>\n </ul>\n </li>\n</ul>\n<br>", "_____no_output_____" ] ], [ [ "iris = load_iris()\niris_X, iris_y = iris.data[:-1,:], iris.target[:-1]\niris_y= pd.get_dummies(iris_y).values\ntrainX, testX, trainY, testY = train_test_split(iris_X, iris_y, test_size=0.33, random_state=42)", "_____no_output_____" ] ], [ [ "Now we define x and y. These placeholders will hold our iris data (both the features and label matrices), and help pass them along to different parts of the algorithm. You can consider placeholders as empty shells into which we insert our data. We also need to give them shapes which correspond to the shape of our data. 
Later, we will insert data into these placeholders by “feeding” the placeholders the data via a “feed_dict” (Feed Dictionary).\n\n<h3>Why use Placeholders?</h3>\n\n<ol>\n <li>This feature of TensorFlow allows us to create an algorithm which accepts data and knows something about the shape of the data without knowing the amount of data going in.</li>\n <li>When we insert “batches” of data in training, we can easily adjust how many examples we train on in a single step without changing the entire algorithm.</li>\n</ol>", "_____no_output_____" ] ], [ [ "# numFeatures is the number of features in our input data.\n# In the iris dataset, this number is '4'.\nnumFeatures = trainX.shape[1]\n\n# numLabels is the number of classes our data points can be in.\n# In the iris dataset, this number is '3'.\nnumLabels = trainY.shape[1]\n\n\n# Placeholders\n# 'None' means TensorFlow shouldn't expect a fixed number in that dimension\nX = tf.placeholder(tf.float32, [None, numFeatures]) # Iris has 4 features, so X is a tensor to hold our data.\nyGold = tf.placeholder(tf.float32, [None, numLabels]) # This will be our correct answers matrix for 3 classes.", "_____no_output_____" ] ], [ [ "<h3>Set model weights and bias</h3>\n\nMuch like Linear Regression, we need a shared variable weight matrix for Logistic Regression. We initialize both <code>W</code> and <code>b</code> as tensors full of zeros. Since we are going to learn <code>W</code> and <code>b</code>, their initial value does not matter too much. These variables are the objects which define the structure of our regression model, and we can save them after they have been trained so we can reuse them later.\n\nWe define two TensorFlow variables as our parameters. These variables will hold the weights and biases of our logistic regression and they will be continually updated during training. \n\nNotice that <code>W</code> has a shape of [4, 3] because we want to multiply the 4-dimensional input vectors by it to produce 3-dimensional vectors of evidence for the different classes. <code>b</code> has a shape of [3] so we can add it to the output. Moreover, unlike our placeholders above which are essentially empty shells waiting to be fed data, TensorFlow variables need to be initialized with values, e.g. with zeros.", "_____no_output_____" ] ], [ [ "W = tf.Variable(tf.zeros([4, 3])) # 4-dimensional input and 3 classes\nb = tf.Variable(tf.zeros([3])) # 3-dimensional output [0,0,1],[0,1,0],[1,0,0]", "_____no_output_____" ], [ "#Randomly sample from a normal distribution with standard deviation .01\n\nweights = tf.Variable(tf.random_normal([numFeatures,numLabels],\n mean=0,\n stddev=0.01,\n name=\"weights\"))\n\nbias = tf.Variable(tf.random_normal([1,numLabels],\n mean=0,\n stddev=0.01,\n name=\"bias\"))", "_____no_output_____" ] ], [ [ "<h3>Logistic Regression model</h3>\n\nWe now define our operations in order to properly run the Logistic Regression. Logistic regression is typically thought of as a single equation:\n\n$$\nŷ = sigmoid(WX + b)\n$$\n\nHowever, for the sake of clarity, we can have it broken into its three main components: \n- a weight times features matrix multiplication operation, \n- a summation of the weighted features and a bias term, \n- and finally the application of a sigmoid function. 
\n\nAs such, you will find these components defined as three separate operations below.\n", "_____no_output_____" ] ], [ [ "# Three-component breakdown of the Logistic Regression equation.\n# Note that these feed into each other.\napply_weights_OP = tf.matmul(X, weights, name=\"apply_weights\")\nadd_bias_OP = tf.add(apply_weights_OP, bias, name=\"add_bias\") \nactivation_OP = tf.nn.sigmoid(add_bias_OP, name=\"activation\")", "_____no_output_____" ] ], [ [ "As we have seen before, the function we are going to use is the <i>logistic function</i> $(\\frac{1}{1+e^{-Wx}})$, which is fed the input data after applying weights and bias. In TensorFlow, this function is implemented as the <code>nn.sigmoid</code> function. Effectively, this fits the weighted input with bias into a 0-100 percent curve, which is the probability function we want.", "_____no_output_____" ], [ "<hr>", "_____no_output_____" ], [ "<a id=\"ref3\"></a>\n<h2>Training</h2>\n\nThe learning algorithm is how we search for the best weight vector (${\\bf w}$). This search is an optimization problem looking for the hypothesis that optimizes an error/cost measure.\n\n<b>What tell us our model is bad?</b> \nThe Cost or Loss of the model, so what we want is to minimize that. \n\n<b>What is the cost function in our model?</b> \nThe cost function we are going to utilize is the Squared Mean Error loss function.\n\n<b>How to minimize the cost function?</b> \nWe can't use <b>least-squares linear regression</b> here, so we will use <a href=\"http://en.wikipedia.org/wiki/Gradient_descent\">gradient descent</a> instead. Specifically, we will use batch gradient descent which calculates the gradient from all data points in the data set.\n\n<h3>Cost function</h3>\nBefore defining our cost function, we need to define how long we are going to train and how should we define the learning rate.", "_____no_output_____" ] ], [ [ "# Number of Epochs in our training\nnumEpochs = 700\n\n# Defining our learning rate iterations (decay)\nlearningRate = tf.train.exponential_decay(learning_rate=0.0008,\n global_step= 1,\n decay_steps=trainX.shape[0],\n decay_rate= 0.95,\n staircase=True)", "_____no_output_____" ], [ "#Defining our cost function - Squared Mean Error\ncost_OP = tf.nn.l2_loss(activation_OP-yGold, name=\"squared_error_cost\")\n\n#Defining our Gradient Descent\ntraining_OP = tf.train.GradientDescentOptimizer(learningRate).minimize(cost_OP)", "_____no_output_____" ] ], [ [ "Now we move on to actually running our operations. We will start with the operations involved in the prediction phase (i.e. the logistic regression itself).\n\nFirst, we need to initialize our weights and biases with zeros or random values via the inbuilt Initialization Op, <b>tf.initialize_all_variables()</b>. This Initialization Op will become a node in our computational graph, and when we put the graph into a session, then the Op will run and create the variables.", "_____no_output_____" ] ], [ [ "# Create a tensorflow session\nsess = tf.Session()\n\n# Initialize our weights and biases variables.\ninit_OP = tf.global_variables_initializer()\n\n# Initialize all tensorflow variables\nsess.run(init_OP)", "_____no_output_____" ] ], [ [ "We also want some additional operations to keep track of our model's efficiency over time. 
We can do this like so:", "_____no_output_____" ] ], [ [ "# argmax(activation_OP, 1) returns the label with the most probability\n# argmax(yGold, 1) is the correct label\ncorrect_predictions_OP = tf.equal(tf.argmax(activation_OP,1),tf.argmax(yGold,1))\n\n# If every false prediction is 0 and every true prediction is 1, the average returns us the accuracy\naccuracy_OP = tf.reduce_mean(tf.cast(correct_predictions_OP, \"float\"))\n\n# Summary op for regression output\nactivation_summary_OP = tf.summary.histogram(\"output\", activation_OP)\n\n# Summary op for accuracy\naccuracy_summary_OP = tf.summary.scalar(\"accuracy\", accuracy_OP)\n\n# Summary op for cost\ncost_summary_OP = tf.summary.scalar(\"cost\", cost_OP)\n\n# Summary ops to check how variables (W, b) are updating after each iteration\nweightSummary = tf.summary.histogram(\"weights\", weights.eval(session=sess))\nbiasSummary = tf.summary.histogram(\"biases\", bias.eval(session=sess))\n\n# Merge all summaries\nmerged = tf.summary.merge([activation_summary_OP, accuracy_summary_OP, cost_summary_OP, weightSummary, biasSummary])\n\n# Summary writer\nwriter = tf.summary.FileWriter(\"summary_logs\", sess.graph)", "_____no_output_____" ] ], [ [ "Now we can define and run the actual training loop, like this:", "_____no_output_____" ] ], [ [ "# Initialize reporting variables\ncost = 0\ndiff = 1\nepoch_values = []\naccuracy_values = []\ncost_values = []\n\n# Training epochs\nfor i in range(numEpochs):\n if i > 1 and diff < .0001:\n print(\"change in cost %g; convergence.\"%diff)\n break\n else:\n # Run training step\n step = sess.run(training_OP, feed_dict={X: trainX, yGold: trainY})\n # Report occasional stats\n if i % 10 == 0:\n # Add epoch to epoch_values\n epoch_values.append(i)\n # Generate accuracy stats on test data\n train_accuracy, newCost = sess.run([accuracy_OP, cost_OP], feed_dict={X: trainX, yGold: trainY})\n # Add accuracy to live graphing variable\n accuracy_values.append(train_accuracy)\n # Add cost to live graphing variable\n cost_values.append(newCost)\n # Re-assign values for variables\n diff = abs(newCost - cost)\n cost = newCost\n\n #generate print statements\n print(\"step %d, training accuracy %g, cost %g, change in cost %g\"%(i, train_accuracy, newCost, diff))\n\n\n# How well do we perform on held-out test data?\nprint(\"final accuracy on test set: %s\" %str(sess.run(accuracy_OP, \n feed_dict={X: testX, \n yGold: testY})))", "step 0, training accuracy 0.333333, cost 34.4917, change in cost 34.4917\nstep 10, training accuracy 0.585859, cost 30.1319, change in cost 4.35979\nstep 20, training accuracy 0.646465, cost 28.1746, change in cost 1.95725\nstep 30, training accuracy 0.646465, cost 26.5264, change in cost 1.64823\nstep 40, training accuracy 0.646465, cost 25.159, change in cost 1.36738\nstep 50, training accuracy 0.646465, cost 24.0305, change in cost 1.12853\nstep 60, training accuracy 0.646465, cost 23.0969, change in cost 0.933537\nstep 70, training accuracy 0.646465, cost 22.3195, change in cost 0.777435\nstep 80, training accuracy 0.646465, cost 21.6662, change in cost 0.653271\nstep 90, training accuracy 0.646465, cost 21.1118, change in cost 0.554443\nstep 100, training accuracy 0.656566, cost 20.6364, change in cost 0.475384\nstep 110, training accuracy 0.666667, cost 20.2247, change in cost 0.411667\nstep 120, training accuracy 0.666667, cost 19.8649, change in cost 0.359884\nstep 130, training accuracy 0.666667, cost 19.5474, change in cost 0.317438\nstep 140, training accuracy 0.666667, cost 19.2651, change 
in cost 0.282322\nstep 150, training accuracy 0.666667, cost 19.0121, change in cost 0.253031\nstep 160, training accuracy 0.676768, cost 18.7837, change in cost 0.228395\nstep 170, training accuracy 0.686869, cost 18.5762, change in cost 0.207508\nstep 180, training accuracy 0.69697, cost 18.3865, change in cost 0.189667\nstep 190, training accuracy 0.707071, cost 18.2122, change in cost 0.17432\nstep 200, training accuracy 0.717172, cost 18.0512, change in cost 0.16103\nstep 210, training accuracy 0.737374, cost 17.9017, change in cost 0.14946\nstep 220, training accuracy 0.737374, cost 17.7624, change in cost 0.139305\nstep 230, training accuracy 0.747475, cost 17.632, change in cost 0.130367\nstep 240, training accuracy 0.757576, cost 17.5096, change in cost 0.122442\nstep 250, training accuracy 0.767677, cost 17.3942, change in cost 0.115387\nstep 260, training accuracy 0.787879, cost 17.2851, change in cost 0.109077\nstep 270, training accuracy 0.787879, cost 17.1817, change in cost 0.103405\nstep 280, training accuracy 0.787879, cost 17.0834, change in cost 0.0982876\nstep 290, training accuracy 0.787879, cost 16.9898, change in cost 0.0936527\nstep 300, training accuracy 0.79798, cost 16.9003, change in cost 0.0894375\nstep 310, training accuracy 0.79798, cost 16.8147, change in cost 0.0855923\nstep 320, training accuracy 0.79798, cost 16.7327, change in cost 0.0820675\nstep 330, training accuracy 0.79798, cost 16.6538, change in cost 0.0788269\nstep 340, training accuracy 0.79798, cost 16.578, change in cost 0.0758495\nstep 350, training accuracy 0.818182, cost 16.5049, change in cost 0.0730934\nstep 360, training accuracy 0.838384, cost 16.4344, change in cost 0.0705376\nstep 370, training accuracy 0.838384, cost 16.3662, change in cost 0.0681648\nstep 380, training accuracy 0.838384, cost 16.3002, change in cost 0.0659504\nstep 390, training accuracy 0.838384, cost 16.2364, change in cost 0.0638905\nstep 400, training accuracy 0.848485, cost 16.1744, change in cost 0.0619545\nstep 410, training accuracy 0.848485, cost 16.1143, change in cost 0.0601406\nstep 420, training accuracy 0.848485, cost 16.0558, change in cost 0.0584354\nstep 430, training accuracy 0.858586, cost 15.999, change in cost 0.0568342\nstep 440, training accuracy 0.858586, cost 15.9437, change in cost 0.055316\nstep 450, training accuracy 0.868687, cost 15.8898, change in cost 0.0538883\nstep 460, training accuracy 0.868687, cost 15.8373, change in cost 0.0525303\nstep 470, training accuracy 0.878788, cost 15.786, change in cost 0.0512438\nstep 480, training accuracy 0.878788, cost 15.736, change in cost 0.0500221\nstep 490, training accuracy 0.878788, cost 15.6871, change in cost 0.0488586\nstep 500, training accuracy 0.878788, cost 15.6394, change in cost 0.0477524\nstep 510, training accuracy 0.878788, cost 15.5927, change in cost 0.0466967\nstep 520, training accuracy 0.878788, cost 15.547, change in cost 0.0456867\nstep 530, training accuracy 0.878788, cost 15.5023, change in cost 0.0447206\nstep 540, training accuracy 0.888889, cost 15.4585, change in cost 0.0437946\nstep 550, training accuracy 0.89899, cost 15.4156, change in cost 0.0429115\nstep 560, training accuracy 0.89899, cost 15.3735, change in cost 0.042057\nstep 570, training accuracy 0.89899, cost 15.3323, change in cost 0.0412426\nstep 580, training accuracy 0.89899, cost 15.2918, change in cost 0.0404568\nstep 590, training accuracy 0.909091, cost 15.2521, change in cost 0.0396948\nstep 600, training accuracy 0.909091, cost 15.2131, change 
in cost 0.0389719\nstep 610, training accuracy 0.909091, cost 15.1749, change in cost 0.0382671\nstep 620, training accuracy 0.909091, cost 15.1373, change in cost 0.0375853\nstep 630, training accuracy 0.909091, cost 15.1004, change in cost 0.0369349\nstep 640, training accuracy 0.909091, cost 15.0641, change in cost 0.0362988\nstep 650, training accuracy 0.909091, cost 15.0284, change in cost 0.0356855\nstep 660, training accuracy 0.909091, cost 14.9933, change in cost 0.0350924\nstep 670, training accuracy 0.909091, cost 14.9588, change in cost 0.0345182\nstep 680, training accuracy 0.909091, cost 14.9248, change in cost 0.0339584\nstep 690, training accuracy 0.909091, cost 14.8914, change in cost 0.0334244\nfinal accuracy on test set: 0.9\n" ] ], [ [ "<b>Why don't we plot the cost to see how it behaves?</b>", "_____no_output_____" ] ], [ [ "%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nplt.plot([np.mean(cost_values[i-50:i]) for i in range(len(cost_values))])\nplt.show()", "C:\\ProgramData\\Anaconda3\\lib\\site-packages\\numpy\\core\\fromnumeric.py:3335: RuntimeWarning: Mean of empty slice.\n out=out, **kwargs)\nC:\\ProgramData\\Anaconda3\\lib\\site-packages\\numpy\\core\\_methods.py:161: RuntimeWarning: invalid value encountered in double_scalars\n ret = ret.dtype.type(ret / rcount)\n" ] ], [ [ "Assuming no parameters were changed, you should reach a peak accuracy of 90% at the end of training, which is commendable. Try changing the parameters such as the length of training, and maybe some operations to see how the model behaves. Does it take much longer? How is the performance?", "_____no_output_____" ], [ "<hr>", "_____no_output_____" ], [ "## Want to learn more?\n\nRunning deep learning programs usually needs a high performance platform. __PowerAI__ speeds up deep learning and AI. Built on IBM’s Power Systems, __PowerAI__ is a scalable software platform that accelerates deep learning and AI with blazing performance for individual users or enterprises. The __PowerAI__ platform supports popular machine learning libraries and dependencies including TensorFlow, Caffe, Torch, and Theano. You can use [PowerAI on IMB Cloud](https://cocl.us/ML0120EN_PAI).\n\nAlso, you can use __Watson Studio__ to run these notebooks faster with bigger datasets.__Watson Studio__ is IBM’s leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, __Watson Studio__ enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of __Watson Studio__ users today with a free account at [Watson Studio](https://cocl.us/ML0120EN_DSX).This is the end of this lesson. Thank you for reading this notebook, and good luck on your studies.", "_____no_output_____" ], [ "### Thanks for completing this lesson!\n\nThis is the end of **Logistic Regression with TensorFlow** notebook. Hopefully, now you have a deeper understanding of Logistic Regression and how its structure and flow work. Thank you for reading this notebook and good luck on your studies.", "_____no_output_____" ], [ "Created by: <a href=\"https://br.linkedin.com/in/walter-gomes-de-amorim-junior-624726121\">Saeed Aghabozorgi</a> , <a href=\"https://br.linkedin.com/in/walter-gomes-de-amorim-junior-624726121\">Walter Gomes de Amorim Junior</a> , Victor Barros Costa\n", "_____no_output_____" ], [ "<hr>\n\nCopyright &copy; 2018 [Cognitive Class](https://cocl.us/DX0108EN_CC). 
This notebook and its source code are released under the terms of the [MIT License](https://bigdatauniversity.com/mit-license/).", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "markdown", "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown", "markdown", "markdown", "markdown" ] ]
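The logistic-regression notebook above targets the TensorFlow 1.x API (tf.placeholder, tf.Session), which does not run on TensorFlow 2.x. To keep the math it walks through reproducible, here is a plain-NumPy sketch of the same model, sigmoid(XW + b) trained by gradient descent on the L2 loss, reusing the notebook's data split and hyperparameters (700 epochs, learning rate 0.0008, weights drawn with stddev 0.01). The explicit gradient derivation is mine, and the final accuracy will only be roughly comparable to the notebook's ~0.9.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

iris = load_iris()
X, y = iris.data[:-1, :], iris.target[:-1]
Y = np.eye(3)[y]  # one-hot labels, like pd.get_dummies in the notebook
trainX, testX, trainY, testY = train_test_split(X, Y, test_size=0.33, random_state=42)

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.01, size=(4, 3))  # 4 features -> 3 classes
b = rng.normal(0.0, 0.01, size=(1, 3))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.0008
for _ in range(700):
    A = sigmoid(trainX @ W + b)       # the notebook's activation_OP
    G = (A - trainY) * A * (1.0 - A)  # gradient of 0.5*sum((A - Y)^2) at the pre-activation
    W -= lr * (trainX.T @ G)
    b -= lr * G.sum(axis=0, keepdims=True)

pred = sigmoid(testX @ W + b).argmax(axis=1)
print("test accuracy:", (pred == testY.argmax(axis=1)).mean())
```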
4a3df3618684815f25f988d33d61806e09cfd2c6
8,465
ipynb
Jupyter Notebook
DATA_SORT/deseasonalized_tvar/outputgoga2deseas_TREFHTMX.ipynb
islasimpson/snowpaper_2022
d6ee677f696d7fd6e7cadef8168ce4fd8b184cac
[ "Apache-2.0" ]
null
null
null
DATA_SORT/deseasonalized_tvar/outputgoga2deseas_TREFHTMX.ipynb
islasimpson/snowpaper_2022
d6ee677f696d7fd6e7cadef8168ce4fd8b184cac
[ "Apache-2.0" ]
null
null
null
DATA_SORT/deseasonalized_tvar/outputgoga2deseas_TREFHTMX.ipynb
islasimpson/snowpaper_2022
d6ee677f696d7fd6e7cadef8168ce4fd8b184cac
[ "Apache-2.0" ]
null
null
null
36.175214
164
0.59433
[ [ [ "import importlib\nimport xarray as xr\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport netCDF4\nfrom CASutils import filter_utils as filt\nfrom CASutils import readdata_utils as read\nfrom CASutils import calendar_utils as cal\nimport sys\nfrom math import nan as nan\nimport datetime\n\nimportlib.reload(filt)\nimportlib.reload(read)\nimportlib.reload(cal)", "_____no_output_____" ] ], [ [ "GOGA2", "_____no_output_____" ] ], [ [ "path=\"/project/cas02/islas/GOGA2/day/TREFHTMX/\"\nnmems=10 ; mems = np.arange(1,nmems+1,1)\nfor imem in range(0,nmems,1):\n memstr=str(imem+1).zfill(2)\n print(memstr)\n fpath=path+\"ens\"+memstr+\"/*\"\n dat = read.read_sfc_cesm(fpath,\"1979-01\", \"2014-12\")\n datseason = dat.TREFHTMX.groupby('time.dayofyear').mean('time')\n trefht4harm = filt.calc_season_nharm(datseason, 4, dimtime=0)\n anoms = dat.TREFHTMX.groupby('time.dayofyear') - trefht4harm\n \n djfanoms = cal.group_season_daily(anoms, 'DJF')\n djfmean = djfanoms.mean('day')\n djfanoms = djfanoms - djfmean\n \n jjaanoms = cal.group_season_daily(anoms, 'JJA')\n jjamean = jjaanoms.mean('day')\n jjaanoms = jjaanoms - jjamean\n \n if (imem == 0):\n goga2var_djf = xr.DataArray(np.zeros([nmems, anoms.lat.size, anoms.lon.size]), coords=[mems, anoms.lat, anoms.lon], \n dims = ['Member','lat','lon'], name='goga2var_djf')\n goga2var_jja = xr.DataArray(np.zeros([nmems, anoms.lat.size, anoms.lon.size]), coords=[mems, anoms.lat, anoms.lon], \n dims = ['Member','lat','lon'], name='goga2var_jja')\n \n \n goga2var_djf[imem,:,:] = np.var(np.array(djfanoms), axis=(0,1))\n goga2var_jja[imem,:,:] = np.var(np.array(jjaanoms), axis=(0,1))", "01\nwarning, you're reading CESM data but there's no time_bnds\nmake sure you're reading in what you're expecting to\nnyears=35.0\nnyears=36.0\n02\nwarning, you're reading CESM data but there's no time_bnds\nmake sure you're reading in what you're expecting to\nnyears=35.0\nnyears=36.0\n03\nwarning, you're reading CESM data but there's no time_bnds\nmake sure you're reading in what you're expecting to\nnyears=35.0\nnyears=36.0\n04\nwarning, you're reading CESM data but there's no time_bnds\nmake sure you're reading in what you're expecting to\nnyears=35.0\nnyears=36.0\n05\nwarning, you're reading CESM data but there's no time_bnds\nmake sure you're reading in what you're expecting to\nnyears=35.0\nnyears=36.0\n06\nwarning, you're reading CESM data but there's no time_bnds\nmake sure you're reading in what you're expecting to\n" ], [ "goga2var_djf.to_netcdf(path=\"/project/cas/islas/python_savs/snowpaper/DATA_SORT/deseasonalized_tvar/TREFHTMXVAR_GOGA2.nc\")\ngoga2var_jja.to_netcdf(path=\"/project/cas/islas/python_savs/snowpaper/DATA_SORT/deseasonalized_tvar/TREFHTMXVAR_GOGA2.nc\", mode=\"a\")", "_____no_output_____" ] ] ]
[ "code", "markdown", "code" ]
[ [ "code" ], [ "markdown" ], [ "code", "code" ] ]
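The climate record above depends on the author's private CASutils package, so it is not runnable as captured. Its key step, filt.calc_season_nharm(datseason, 4), smooths the day-of-year climatology before anomalies are taken. A common way to do that, and my assumption about what the function does, is to keep the mean plus the first four annual harmonics via an FFT; the sketch below shows that operation on synthetic data, and the real CASutils implementation may differ.

```python
import numpy as np

def smooth_climatology(clim, nharm=4):
    """Keep the mean plus the first `nharm` annual harmonics along axis 0
    (day of year). An assumption about filt.calc_season_nharm, not its code."""
    spec = np.fft.rfft(clim, axis=0)
    spec[nharm + 1:] = 0.0  # zero out every harmonic above `nharm`
    return np.fft.irfft(spec, n=clim.shape[0], axis=0)

# Synthetic daily climatology: one annual cycle plus noise.
days = np.arange(365)
clim = (10.0 * np.cos(2.0 * np.pi * days / 365.0)
        + np.random.default_rng(0).normal(0.0, 2.0, 365))
anoms = clim - smooth_climatology(clim, nharm=4)  # like dat - trefht4harm
print(clim.std(), anoms.std())  # the anomaly spread is much smaller
```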
4a3df4365165676e67568feaa9f9c6d654d6639c
218,251
ipynb
Jupyter Notebook
Shortest_Route|DFS_BFS_A_.ipynb
Andrewvlad/ML-and-AI
21a4de22f8522bc551dc24da4d206054fdfedd79
[ "MIT" ]
null
null
null
Shortest_Route|DFS_BFS_A_.ipynb
Andrewvlad/ML-and-AI
21a4de22f8522bc551dc24da4d206054fdfedd79
[ "MIT" ]
null
null
null
Shortest_Route|DFS_BFS_A_.ipynb
Andrewvlad/ML-and-AI
21a4de22f8522bc551dc24da4d206054fdfedd79
[ "MIT" ]
null
null
null
667.434251
204,702
0.936917
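The record that follows is the Shortest_Route notebook, whose cells are truncated in this dump after the intro and an embedded map image (see the placeholder below). Since A* search with the straight-line-distance heuristic is its stated subject, here is a minimal A* sketch over a small fragment of the standard Romania road map; the edge costs and h values are the usual textbook (AIMA) figures, not necessarily the exact tables the notebook used.

```python
import heapq

# Fragment of the Romania road map; h = straight-line distance to Bucharest.
graph = {
    "Arad": {"Sibiu": 140},
    "Sibiu": {"Arad": 140, "Fagaras": 99, "Rimnicu Vilcea": 80},
    "Fagaras": {"Sibiu": 99, "Bucharest": 211},
    "Rimnicu Vilcea": {"Sibiu": 80, "Pitesti": 97},
    "Pitesti": {"Rimnicu Vilcea": 97, "Bucharest": 101},
    "Bucharest": {},
}
h = {"Arad": 366, "Sibiu": 253, "Fagaras": 176,
     "Rimnicu Vilcea": 193, "Pitesti": 100, "Bucharest": 0}

def a_star(start, goal):
    # Frontier entries are (f = g + h, g, node, path); heapq pops smallest f.
    frontier = [(h[start], 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nbr, cost in graph[node].items():
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, float("inf")

print(a_star("Arad", "Bucharest"))
# (['Arad', 'Sibiu', 'Rimnicu Vilcea', 'Pitesti', 'Bucharest'], 418)
```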
[ [ [ "<a href=\"https://colab.research.google.com/github/Andrewvlad/ML-and-AI/blob/main/Shortest_Route%7CDFS_BFS_A_.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>", "_____no_output_____" ], [ "# Shortest Route\n---\nBelow you will find three solutions for the shortest route problem (DFS, BFS, and A\\*), using the map of Romania below and the straight-line distance to the destination (*Bucharest*) as the heuristic function, h(n). \n", "_____no_output_____" ], [ "[figure omitted: embedded base64 PNG of the map of Romania with road distances and straight-line distances to Bucharest]", "_____no_output_____"
ERABERCB2CeQuR+z7q7pfk4p0RSPzNnnniaXlYxf3rgONYoWRfWOvfHM80/gtg4Xocu/5mC3Zykmp8alb52FUa/0Rbvy5gf5YqhYqzFaXtIal1zYCOdWPB1n1mqJa/uMwPe7vSqOu6jU1XitbRm3LVxT65IB+GbDVLQr4oKrWmv886XPsSGFz1Hp2DrtfjTl2lbmh/8a1+OtZUeRhSykrJuGwT3royjPFWuMO16ejOU71+Dztx5C6zLua2rePBjT1hzFiS0fotvZnnKK1EDX177HqnFXWKJakfPvwMT/W46pg3ugHm1wuXB25ycx6vs9yJbaMrBnzlNo61lo3bKnZDP0/XynN0/yzC6oUKw4zrzkbgx+/0O8/+xNaN7qLkz83YzkysT+WXejptWWEmj6yByY5a6cGCtNBOKZgESrmOy9LBz59m6cZd2EyuCaT3Z7dsjwGHtyO6Y8dC9enL7G2l3i6LzuYYtWCxcutG6iXbt2jQoBTROMCkYVIgIiIAIiIAIiIAIiIAJxQCAdh7cnYc9x31FENDzz2FYs/e5rzP1hJbYdy3k+6o3LSMbGRd9g/q/bcJz61Mk9+HXxRus5yb+urOPbsezbOZi7aCMOZ6tI/tlyfZ91Yj82rV6L7UfNwuhp2Ju0HcnhlJm+H2t/+Bpzvl2K7Sm+nDJTD+FoejoO/74U382ehW9+ScJR3yyWjemHtyNpz3HfZ8VcrVcGEYgvAhKtYrG/Tv6Oty/itqkuuOo87VloMLCh4YpWFJiaNWtm7frHqX3RCpomGC2SKkcEREAEREAEREAEREAEREAEREAERECiVQz6QNqyJ1HLGmVVFBeP3OrZlSKwoeGKVuPGjbMEsf79+wcuNI9nNE0wj+B0mQiIgAiIgAiIgAiIgAiIgAiIgAiIgA8BiVY+OGLhzXEsvt+zyHqxyzBhl8M4UD8zwxGt/vzzT1SpUgWVKlXCkSM59ln1Kzn8t5omGD4zXSECIiACIiACIiACIiACIiACIiACIpCTgESrnEwKNuX4z+hbzbOw30WjPLtDBDcpHNHq6aeftkZZvf/++8ELjeCspglGAE+XioAIiIAIiIAIiIAIiIAIiIAIiIAIWAQkWsWYI6SvG4z61tRAF+q+8BvM/hDBzAxVtNq6dStKlCiBRo0aISMjyBYewSoL8ZymCYYIStlEQAREQAREQAREQAREQAREQAREQAQcCUi0csRScIkHP+2I4pZoVQ43zT0akiGhilY333yzNcpq3rx5IZUbSSZNE4yEnq4VAREQAREQAREQAREQAREQAREQARGQaBVTPnACq587371roKs+Bq1ND8m6UESrRYsWWeVee+21IZUZjUyaJhgNiipDBERABERABERABERABERABERABAonAYlWMdXvKfjhnrM8olUrjA1hEXaan5toxVFPLVu2RLFixbBp06ZT2mJNEzyluFWZCIiACIiACIiACIiACIiACIiACCQMAYlWCdOVgRsyYcIESwjr169f4Ez5dEbTBPMJrIoVAREQAREQAREQAREQAREQAREQgQQnINEqwTs4JSUFVatWRcWKFZGcnFwgrdU0wQLBrkpFQAREoJAQOInkbRuwbt06bNx+DJl+rc44sg0b1q3D+s17kZrld1JvRUAEREAEREAEREAEYpqARKuY7p7IjfvXv/5ljbJ69913Iy8sghI0TTACeLpUBERABEQgMIGMbXi3ucs9tb7k5Xh/y0lb3kzsGNXSfa7SXViYYjsVI4cnds7HkPufwsLQ9l6JEatlhgiIgAjkJJCV/C2eeewL7PP/9SBn1vxPSU/C9EF98c9ePdGzJ1934IkpWxDaisH5b55qEAERCJ2ARKvQWcVdzh07dqBkyZJo2LAhMjIyCtR+TRMsUPyqXAREQAQSl4BdtHK5cHrXydjl/ciLbdEqZdFDqF/EBVfRy/HZ4cTtIrVMBESgMBDIwLYx7VC8SHO8ucn+40EBtj1zD6ZcU8b9w4XLheqPLEFqAZqjqkVABPJGQKJV3rjFxVW33nqrdZOeO3duTNiraYIx0Q0yQgREQAQSi4CfaOVyVcJtsw54pgnGtmh1aFpb98OURKvE8km1RgQKI4G0FXimjnvUa/WHf8bxmGCQhuVP1pRoFRN9ISNEIO8EJFrlnV1MX/nzzz9bN+jOnTvHlJ2aJhhT3SFjREAERCD+CeQQrVxwVX8AC45wAatAolUmDiwajjtb10Q5lwuu4hVQr2MfjF6abFsTKxMHFw3DHa1ro3wx94NYkTLV0Oz6gfh86wk3t5Qf8cgl9VG34dV4ef5XePmmpqhU3AVX6Rpo/8g0bEnehCmPdkTtshxNVR71Oj2FmTs5AiEDOyd1R6OzPNMaXUVw1nn10e7pX2LkQS/+3UItEAEROJUEsnBo9q2owPspX+Wux2f7Y2ERwTSsfLq2RKtT6QpxUFdW6mZ8/kJPdGheH/UvaI0b+k/AqqO5+WsmDq+ajtf63YW7+j6JwR8sxgHHabCh5osDUDFkokSrGOqMaJny999/o1WrVjjttNOwYcOGaBUblXI0TTAqGFWICIiACIiAIWATrco2bYUzPQ9N9Z5eiuOOolUWjvz4BBqYh6syZ6Pq6Z4HrWKtMHile/LIyaSRaF/Sk17mHNSpVRnFzDUthsKa/XLkK/yjlDtPCYpVJaqgRuXTvA9IZ1QtCZerJKpUr+BNK9lxInZlnkTSW028adZDnsuF8rfOh5a2Mh2rWAREIG4IZGzDqDbFbPe0Img14nc4TxLMQGryIRw8eND9OnQYKVbGLKQkLcRnU+dgTbKvGpB1Yj/WLvwSUz8aj4+mz8OqfZ4fDhwAZR7dhB9nTsbHn36PjUeOY4VEKwdKhTjpxEa8e01N1GnbDbf16oqWZ7o/w0tf8T6SnB0WWcc34KO7L0Clut0xfNG+AH6NkPMVYvp5brpEqzyji90LP/74Y+tD46GHHopJIzVNMCa7RUaJgAiIQHwSsIlWVXt/iWm9KrkfnIo0xytrj2O7/0LsJzfjrZbuL6ml2g/HmuNZwIkkTLiuvHVdySs/wZ7MLBxa+CK6X94CDdo9h5+tX2AzsOXtFu6yi1+NmdyQ1yZaueoNwCKO7kpdhgG1PWKXqz6eWnwEWTiKBfed7b62Qg/MPwZknjiCpHEXe2xtg4+SDuGw+8ktPvtBVouACBRaAmnLn0azFr1wfTVz73PBdW5//Oq0gFTGfvw4vCdqmx8BXGeg+9zdWDm0Eyp50kp0nIQ91K1O7sJXAzuhTrXmuLHvE3jwhvqeHw+qo9vItX4jU9Ow8cM70KAEbSiOc5s0wrk12+O2TpXd91mtaVVo/TO74RnYNrE3HpmyBUb2zDwwB/edQ5+phxfXmNTsKzIPLcTAVqVRrMGjmO88tMrKHGq+7JJ1FA4BiVbh0IqDvKmpqahWrRoqVKiA//73vzFrsaYJxmzXyDAREAERiC8CdtGqz084umsyunpGTpW47G0sfMd398CsvRPRtqj7warFkIVYtXo1Vq9ejZ9GdkAJPjCVuxFfH7EhyDiK7cvnYeq7z+Oedmd
6Hn5a45P9vqJVvRfXeL4EH8aM9kXd+Rq8jPXWVlVZ2P3hRe60Ep3dghcArWll46xDERCB+CSQdQizejTEDdN3YuNbF3oFIperPG768iAcJ12d3IjXGxmBqxw69r8FDWrUREUjZLV6H9szUrHiXw2t8sp0/gS7KWKlr8Wg+ua6FhhqW/D92OJHvUJYjX6LcQzAic0foMsZJr8WYo9PB4um1VlIO5yMEz5OmYpfH6sBl6sRXtvgt7dk+u94/8oycBVvjeEb/c7ZzQo1n/0aHYdFQKJVWLhiP/Pzzz9v3dxHjBgR08ZqmmBMd4+MEwEREIH4IeAnWh3HSWwdcwVKWQ8/pdH8inPcD1GV7sLCFCB93Uuobx6MHOMLMWZnJpC2GVMe6YBzOe3Pk6+I9Qs+37fBlAO+otWFo3d61sNKxhdXFXdfc/F47PF8OT44pY07zYzSkmgVPz4mS0VABAISyNgyEpdf8DB+SgEy901HlzLZ98zTLh2Jrd7dXG1FZGzBiKbZ+Vw1H8KC5Awkr/gIzz3yDD5ceRRZ6evxcgOTpw6eXX0CyNyFsa1M2um4aa5nQnXmLky43HPfdVVHvyVmiFcqlvY/13sP1+6Btj7QoYdACn64pyoqdBqP7T6+moHt4zpa3yVqPb40yK6ToeYT8EgISLSKhF6MXbtr1y6UKlUK9erVw19//RVj1uU0R9MEczJRigiIgAiIQJgEcohW/DV+A968KHttKUt08ohWGVvfRXNLhCqC5gMn4fPPP/d9zVyIranHseq5Bu4HnRIt0ee9WVi67RgOzb4OZaxr22H6IV/RqtW43TlFq0snwaxFLNEqzH5VdhEQgTggkIplA5qh9dBNnnV+juGH+6t6RSKXqzaeWp6Wsx1+otX5z672TtfKznwCG8f0QMOK5VC17UD8mJyFjEOL8ExdI1qVxDVfcp42gIMzcLX3R4VW7h8erBNaiN0NSH8DETi54xP885onMHe/7zpqnOr/eE362gV4bdlvmDPmJTz5SD88M+wzrLavuRZqvkAGKD0kAhKtQsIUH5l69uxpfUjMnj07PgwGoGmCcdNVMlQEREAEYpOAk2gFIHXl82hsH0nlEa2QugSPVHc/9FS+/SvP7j+pWDWkG67o0hMPvjobu07swfiL3HmKd/wUB62Wn8DalzxClqstpkVDtPr0MhSxbGyDyRy5pSACIiACcUQg6+BM3NKwGz7dlz3fKn3dy2hku/dW6DEHh7NPu1vnI1qVQCdrkcDADc88shqTn/4Hml7QHm0rGdGqBDp7RKv0tYNsI2jbYIr7pg1AolVgqoX7TMbRJPzwYX905C6+ZZvirrFrkGLz07Rlj+Nc+nGRc9GhRz+88u5IvP7g5e5116rejI+2uqcLhpqvcNOOvPUSrSJnGBMlLFmyxBKsrrrqqpiwJ1QjNE0wVFLKJwIiIAJ3AoqfAAAgAElEQVQi4EgggGgFHMNPj2Vvde4yohVOImlkB8/0QReqtu6Gnjdc6N2qvc5jP+EYUvDjfVU8owUq4/LeA/BYr4u8eVyu5niXc15sC7HnZaTV0Xm3oLzn4a5c7QZoebd2D3TsYyWKgAjEIIGTSHrnUpSveC7q1K2LuuZ1fm1U9eyqao1yLd4OH/jOuwJ8RKsz0P2bAPumZh3H+kl90ep0F4o3fwYLDuz0/qDgcmWLVid+ex7ne4WyizHezMuWaBWDfhMLJmXg0MpZmDhiIG67xCzUXxm9vtzvGTGdhT3jL7G+A5x+01c47DU5DWteaWall+w4ATszQs3nLUAHeSQg0SqP4GLpsr///huXXHIJTjvtNKxbty6WTAvJFk0TDAmTMomACIiACDgRCChaAVmH5+M+s5uVV7QCkHkIi4d0Q92S5hd7xhVwUd8pSPJsHpSx9yv0a1HaI1y54CpeD90Hv4SryjJvKXSasjdi0SoreSEebZQ9jbHIpeOwy2+GglOTlSYCIiACBU4g9Vc83rQtRvx+MocpRxfcg7O8IpIL9Z7zm/7nL1rNcxKtTmDjOx1R1iqnEV7hrhZZ2aNg7aJV1p4JuLSIuZ83wCveBbU10ipH5yjBl0DWUSx9+WIUd7lQ9NKx4JKWQDrWv+IeWV1rwArYJ7hm7Z+Gq/ndoVgHTN3/vxDz2YZw+daudyESkGgVIqhYzjZ58mTrS/UDDzwQy2YGtU3TBIPi0UkREAEREIH8IHAyGdvWLsOSZeuw65jPCqzu2rJOYP/GFfh1xSbs991uKHrWZB7Djt9+xa+rtuBQkM2JolehShIBERCBSAlkYs/U63Be58nY6yS0n1iNZ71rT7ngqtgTc+xzBDOSMNy7EPsZ6O4kWh37Hnec6RGiTmuL8VT0T27GsCZGnCqBzmZaYfoGvH6BSS+JTtP3e3YtPIrv7znL++PDOQ/+jOORNl3XJx6B4z/jYS4bcO7jWGat4Z+BrW+7R1Rl7wzsafbJzXizMX2tCd5KSgsxX05hN/Eg5m+LJFrlL998Lz0tLQ3Vq1fHGWecgUOHuMBGfAZNE4zPfpPVIiACIiACIiACIiAChYtA1qH56FP9NNR97jeHBdTJ4iQ2D73AKxZxmmCTwb9lj1hJX4tB9YzIVBY3zDmSE2DyTHT2Lq7uQtHqzdGqSWPUP91c58L5fd/GmNnbkY5M7PviVvd6QxyZ1XAAFh44go2T70WD0tn5S1wxHjsdfp/IWblSCheBw/j3lcXhajYCWzz+kbLgTsufKt7+HY7ZYWTtw8RLXHCVuBozDgKh5rMXoePwCUi0Cp9ZTF0xaNAg6wNh2LBhMWVXXozRNMG8UNM1IiACIiACIiACIiACInAqCGRg3/zBuKFOUbcgVbo5bnnsFUxZbw1PsQzISlmH6cOfw50tSvqIVi5XFVzx6ERsOr4X375yPc6xTR8s2fJeDJ27y7MDoacdWUew+NlLPDu2ulDp4vsweukB7Jjew3tt2Zb98OVuj8qQdQRLXu+MKrZyK3Z8EWMfrOW2o9TZaNC6Kx4YuRRHNFvrVDhL/NRx8ncMa1oWLd7YCO+A55RF6MPlBWo+CZ8NMNPX45UGLpT6/+ydB1gUV/f/BwRUEIWAimJJBLFX1NgVe+yIJdGYaNAYo8YkxhqDxhI1MSbRxBKDsZdYQUXfiGIvyIsCYvlTBF56aD9Kdje7m+//mdmZ2VnYpZcFzz7PPjs7e+fecz93GOZ+55xzhx1APOtlWNxy1YeGUVpKopVRDkvxjIqLi0PdunXh5OQEhaJmuB1SmGDxxp5KEQEiQASIABEgAkSACBCBmk1ADXlaLF4m5/IJsjW9VWTEIiImTSswSCDIk0Nw1ecMLtyJRo5ahawXDxAUmaEriEnK0+YrRECdheBfl2Hx14fwME2Ia83D090T4NzHC/ezpSyUiD8+FfaMDSYciYPgoJdzfwXa2LyJb0KETFfFLSetm7ZLSoBEq5ISM6Ly06dP554cnDlzxoisKpspFCZYNn50NBEgAkSACBABIkAEiAARIAJEgAjkI6CMxt7h9TSed2aOcB32Fka4DcKoj/YgKFOP+506B49/ngIXu7bw+PIHbF
83F0N7jsNa/2QdERXFLZfPHPpafAIkWhWflVGVvH//PkxMTDB48GCjsqs8jKEwwfKgSHUQASJABIgAESACRIAIEAEiQASIgEhAlYXooBvw/9MfNwKfI0WMBxRLFNhQZb9E0I0A3H4cgxzBQatAKaC45fQcSruKIECiVRGAjPXn3r17w9TUFI8ePTJWE8tkF4UJlgkfHUwEiAARIAJEgAgQASJABIgAESACRKDaEyDRqhoO4ZEjRzi3Rk9Pz2poffFMpjDB4nGiUkSACBABIkAEiAARIAJEgAgQASJABGoqARKtqtnI/v3332jevDmsra2RnJxczawvmbkUJlgyXlSaCBABIkAEiAARIAJEgAgQASJABIhATSJAolU1G82vv/6a87LatGlTNbO8dOZSmGDpuNFRRIAIEAEiQASIABEgAkSACBABIkAEqjsBEq2q0QjGx8fDysoKb7zxBuTyYmSNq0Z9M2QqhQkaIkP7iQARIAJEgAgQASJABIgAESACRIAI1GwCJFpVo/GdOXMm52V18uTJamR12U2lMMGyM6QaiAARIAJEoCCByMhILF++vMxvpVJZsHLaQwSIABEgAkSACBABIlBmAiRalRlh5VQQGBgIExMTDBw4sHIaNLJWKEzQyAaEzCECRIAI1AAC/v7+3MMghmHK9CmTyWoADeoCESACRKBoAsr4K/jFazE+mD4N06blf7+N6e++jzkLvsDan47hVmxe0RWWRwl5FP74yhPvCPbM+BQHXrwaUSnlgY/qIALGToBEK2MfId6+fv36wdTUFP/973+ricXlayaFCZYvT6qNCBABIkAEABKt6CwgAkSACJSGgAxPt7rCRBT862Ho5hM4dXAHVs/ojnri/mZw/yUMuaVpooTHqFNOYqyV8ACiCebdyilhDcZZXBb+Gzb4JEBlnOaRVUSgUgiQaFUpmMvWyPHjx7knwLNmzSpbRdX8aAoTrOYDSOYTASJABIyMgD7RqnHjxijqnd8zizytjGxgyRwiQAQqnEDOjTlwEMUpO7x3NZtvMw+Pvu4s8V51wfLASpCtcu/j0+Y1TLSSheHb3rbo/mMUKAi9wk/psjWQG4Gza2firWHDMGzUu1h7Lhrkg102pNKjSbSS0jDCbfZGuGXLlqhXrx4SExON0MLKNYnCBCuXN7VGBIgAEajJBPKLVocOHUJAQECRbzYPllS4ItGqJp8l1DciQAT0Eci9swCOekUrQPliKzqJvzFwXHS34r2t8gKxpGUNEq1UKbi00IX7X9OFRCt9p6Dx7Mt+iI39bNB0yn5EyAB51AF4NGmEt355RsJVOY0SiVblBLKiqtmwYQN3sWI/6QVQmCCdBUSACBABIlBeBEi0Ki+SVA8RIAKvGoFCRavo7egqEa2azLsFLlhPkYW01BSkpGjeqRl5XNibIisNqfy+lJRUZOTpC4ZTIuPpNZw++Bu8D/vgXkwu1FLo+kQreSICfQ9h//ErCM/UVycAtQxJIVdx5rA39nofhV9QYj6hQYnc9FTR5pTUNGQr2IbVyH5xFScO++Jxev66FUgN/RMn9u/DkYuPkKwnvZYs/gF8Dv6GfSf88TRTieyIe3iWxfdIGQ+fRZ1hxjPs8E0QElNSIPCSdpu2q5pALgJXuIAx74c9L4XzQIHn33eHqZkrNj/RM/hVbXI1bJ9EKyMeNNazivWwYj2t6CmudqAoTFDLgraIABEgAkSg9ARItCo9OzqSCBCBV5uAYdFKiahdA2EhilYu+OKeJr+UPOIIPu1jpfVUdVmLMLkcUSe+wABrwUuKgcuaUEin+qr029g8rhnM67li9pL30NmcLWuDAV9dR5qgE+iIVg54b/cWTBI9rxgwDdyw5ZE0TFGB2HPLMbxVE3SZOA+ffTQeLmYaGxwnbEeIkBJLmYSArdPwutif+vC4EIeHW4bDjt9n4bYf8bwdqvRb+Ga0I2pZ98b89avh0cIEZs4zsO+5ECymRMIZTzibMLDtNwuL541Hx4a2aGDZBl8+Yt10nmL7aHstI7FdBkxrL4QI1bzap5/R9F6dchoT2HO360+IksRwKsI3oC3DoOGsq8gyGmurryEkWhnx2LE5rNjwAzanFb10CVCYoC4P+kYEiAARIAIlJ0CiVcmZ0RFEgAgQAZaArmj1Gqafj0XCs9s4scEDrXihpbbzGHzlGwvOMYnHluE7DpaCEMOJVuwPmTg/sZ4o1OiIVooX+GUYK3SZoOcONrdTBnzGC8JXS3x2nxeidEQrBrWau+P7Kw/ht6qTmDC+zsgTSOGdmXIDV3CiAmM5AgfiWMVJjhAvTTgeO//quuWp1m5FODa2E0S1enBbPBltmrWArdAP1x2IZgWLvFBs7mMBhqmFPjujuTxUaWcnwpot1/5LBLGLKWZfg2cjti4XeIVopLm88O0YYtsCH98WRLUM+IypK/Kg8EDj/ZvLOO/Oja/t9D91xanM85hYjwHTZB5uCgKo8XbD6C0j0cpIh4hdJdDExATsqoH0KkiAwgQLMqE9RIAIEAEiUDICJFoZ4qVCTvwTPLh1E3cfRSJdOuM0dAjtJwJE4JUioCtaMTC3a4rGNuai0GLRxh0rf7uG6FydID5kXZmuFXtE0SoL/u++Jh6rFa3USPWZDBtOHHoDyx6yqo8SUT+/iVrcvvrw8MvUcNcRrWww5Xw6tz/37iI0E8SlVisRzHkqyRH2dRuxvVYrgyGDCrG7XcV91u4XwNcMKCPwfSdBtGLAtJgP/3Ql0gO9sWrBMvz6MBNqqBB/aATqcm21xYZwzYVT/mS9RhxjbPHO5UzIeQ8cVhgz7/oJzsSwwpUCz36ajI/9BZ8cEq2qxx+TDI9WO3PnTGuvEB3vQOTexSJH9pxxxe5YwR2wevTKGK0k0aoKRuXbb79F06ZNC31bWLAqPQN7e3u95caOHVsFlhtXkxQmaFzjQdYQASJABKobARKt8o+YGlmPduH9rg3EiRt7L8KYvY5RX/khUQh9yHuI5a0lEzhhQqjzaYr2a3VDfPK3Rt+JABGovgR0RSth9UA18l7+iY2j7LTXEPshWHcrXcw/VTLRKl3iVdUem5/xCnpeJM59uwIrf7iMOEFU1xGtmmDeLY17S97DpdrQvqYfgd8NWfhOTG1ri3oOfbE8IB1qZSquL9MIEOx1r/bIM9DIXqxOpitaOXEiV/6xS8GJ4Zr5G8N0wZZ7EYiMjMTzG2vQjr82tloRhLzEg3Djwhv5a6ilKz4+8gy5ilxkywWBj0Sr/HSN83sWrszQiK3ddkRz+dlEO2VBWNGKHWNnrH5MMZ0il1JukGhVSnBlOWz16tXaC7nODV5RN4Da33v16lUWE2rMsRQmWGOGkjpCBIgAEah0AiRa6SJXJf6Bya9p7zU4wUq8T6kF1w0hmgTFOpNDQ+UZsE+e6VZdlzF9IwI1hYB+0UrTO3W6H2Y2lFwbGs3GFd5tqUSilTwM69oI9byOpZynlQGCOtclA6KVw9yCoVqqDAQfXIrRnTpgQF+t2GYxwpBoZYHhp0U5S2uMPAReophfG+1GuMPDw0Pynop5O8MgU2fix
qfaMETNddYcXRb5IEF4MMCGQFJ4oJat0W5lwm+y5iFPL+8EUZjlzM17iKWvs+eukyZXmdH2oXoYRqJVFYwTiVblB53CBMuPJdVEBIgAEXjVCJBoJR1xFV7+0lN8qNZmkQ9i5SpkPd6JcTb8pFHwUlDE4IzXPHh6euq+Z7ujY22+bIO3sPMZSVZSwrRNBGoSgcJEKzZHlTCZ14gyDphzQ+P5VCLRSvYYq50F0cocgw8l6QoDUqAlFq3UyAnbj3mu1mDMu2CZfzJi9vYQr4GGRav68LgoBg5qLZA9wpdOgq1a0UxbQLKlTMCFpb1RT3wowB5ngf47Ivg8WiRaSWgZ8WYObn7YhDtnuu+M0fW0yrmN+U3Zce2MbZGiGmnEfTFu00i0qoLxyS9avfvuuyjOW/rEkzyttANHYYJaFrRFBIgAESACxSdQXqJVUlJS8Rs12pJ5eLCkJT9ha49NQhgOMnFpCh8uaD4M+hwMNF1SIHL3cFhxkzBHePqlGp5cGi0DMowIEIHiEihctNLNUcUwWhEn2/9dvCaINUXltFIn4uAgM1FIqjviMBIMpQcqoWglC/8BblYakandujDIoUZ8cUUrIY+WFJY6Ht59TERbXfLnOBLKyuMR/JzNgSVH7IUvMdhWELoYMN12IJrrXwZ8xlIidgGZ8X6qELNTkwfNefVjXc/izAtwZ1cVtHbHBT0ap/H2yTgtI9GqCsZFKlrZ2dkhICCgWO+RI0eKF0ISrXQHjsIEdXnQNyJABIgAESiaQHmJVpaWlvjoo4/w9OnTohs12hJqJB4ZjjrcZLIBJh5L0Dw1zg2CV1t+UtV6NdgV2fW9lDH7MIpdKYlh4DDbD2lCahZ9hWkfESAC1Z5Azu35aCqIT4w93r+are1T7j181kIixjh44iqfY1xH7HLiE6Or4nFgWB1xnqNNxK7Eyz0DYC620wLzLqWKHi2qlNs4eilG451UItFKm4uIXemv795YqNhk6N92FG2wGHFaktPqBbaKidglyd+1PdYkU9/SRTyeaTAO+18KCbfUyLy7BUsORUOZcR7vjtmKp5qFA6GMO4n3OI8cqWiViYvu1mJdnbdFQgk1FLkKehigw7zqv8hDvODCMLAa74MMiTmq6B3oxjAwG+ANbnFKyW+0WXICJFqVnFmZjyDRqswIC1RAYYIFkNAOIkAEiAARKIJAftGqbdu2GDZsWJFvqeczu92uXTtucsGu+jtixAhcuHAB//77bxGtG+HPuY+weZCw7Hx9uPTuh26OpvzEyQkfnk8WJ4s61qszcPn9hppyliP55eN1StAXIkAEahiBzIseqC+KSbUx5EA8d31QZQTjt5lviIILU7cXVgWkidcOVfx+DBISkdfqihWnr+DwFwPQyEwrcjVffA+5Aq/se1jRXvsbwzTDEM8V8FrxAdw6DsWWEHZFQQA5NzDXQShni+lXNCpZzs25cBDstJmKS9zudJweISRNZ8CYOqKLa0e0d9EKRYzTPGzb6YNoVlzSyVdlhfG+UnlCMBRQp/hipmgDA8Z+EOau3YbvV7+L/oNW4jar62VdwYyGVui19g4yOHE/C5fftgHDmKDb5nB+BToZQry0qxvaTPgVNy5sxAfL/ZFODwS0wI1hS/ECP/SqBabpfNzWRMByVqWdmwhrxhrjjhcS0moM9lcTG0i0qoKBItGqYqBTmGDFcKVaiQARIAI1lUB+0Sq/GFXc7zKZDFeuXAG7si8rXLHHtW7dGtu3b0d2tsT7oBqAzH2yC+PthYmf9rP13FPgVmbX0wfF823oYaop22ppIPgppJ6StIsIEIHqTkCZcBV71i3CmJba64PmWlkHjRwboi4rENWyRtN2AzFt6R7cTBS8jYSe5yB4sxtsBCGJqY1Oc/di31TNKmxcXa8Nw2rfOD6/E6CIPY3Fb+quamrReiK+ucF7XSmicXLpIDQQ62RQb8AKnAu+gFUD62sFNMYKby46jAi5Ghk3VqKXJd8Hu5744Jd7SH55FFMFryerblh4Jg5KZQIurxsn8SpjULvbbGy5ECvaJ/QMUCP70XZMcdKGNLL9sem5EKdj+bxGuXewyKUxWji+huZ9p2DOrOFoYWoCxwnbESoqdYA64za+dtMmhm/o5oXraYZiI7UW0FblE8i5txLtzO0x3ZcPi1dEYZebFawG/YhnvEdd5VtVs1ok0aoKxpNEq4qDTmGCFceWaiYCRIAI1DQC5SlaCWwiIiKwePFiWFtrntjXr1+f+84ufW7sL/nTn8QcL4xVR4x9bxYm928qTvgaTjqIlwXyyebh/ud8LizTnthOCWeNfZjJPiJgBATUyI64jQvn/HA3KqeYIW9yJIcG4PwZX1wNikFOOXgcqeVpiH2ZjFypFqTIQGxEDNLKIjaoc/DywZ/wOXsBN0ITIZPaqs5C9Is0KCFHSlgAfE774no4+13fS4msuBeISMgWPdX0laJ9VU1AjugT89DDqS/eX/kVPh7dBd08NuMmiYzlNjAkWpUbyuJXRKJV8VmVtCSFCZaUGJUnAkSACLy6BIKCgtCvX78yvxWK/J4E4DysfvrpJzg7O3Oij6mpKcaNG8d5ZBkn8TT4egieDM5Ydp+Pc1Cnwu8DB164csDcm5L4B7YjOTcwhw+HMev/G2Klkz/j7ChZRQSIABEgAkSg3Amoc+MQcvcOQuJySGQsZ7okWpUz0OJUVxmilSItEk9CQxFq8P0EEalleYQg9FSO1IgnCA0Nw/OE3GI+KRGONfCpzEDUk1CEhj1FbCkeo1CYoAGutJsIEAEiQAQqnQCb24rNcTV8+HDRY6lDhw7Ys2cP/v7770q3x2CDefe1iZMdF+KOJEwl00+bu6bT1hc6HgE5t+ahCR+S0+PnaJ3fDLZFPxABIkAEiAARIAJEoJgESLQqJqjyLFbxopUCzza1F2+ONXHm+ePOGbisZZd3LeNLHoa1Lpq6G84OQHlk7lDF7oIrdwNcF2N89Cc6LMpqChMsihD9TgSIABEgApVNIDw8nFtlkF1tkP3f/Nprr2H58uWIi4urbFMKticVrepPwnnx368aSYcGi6t3ue6MkTxBliN0jQt/v+GM1YaWFizYGu0hAkSACBABIkAEiECxCJBoVSxM5VuoMkSrFz/2QX1zc5hL32bCCkAakanDpqd6EgiWsK9GKlpRmGAJx5GKEwEiQASIQKURyMjIwLfffosWLVpwgk+tWrUwefJk3Lp1q9JsKNhQGnzchYTFpmi34BQicpXIeXYUnq2EB1+tsPyhNM16Co65mWtEq3ruuJBZsFbaQwSIABEgAkSACBCBshAg0aos9Ep5bMWLVvoMUyHxzPtwFFbVcFqAP1OlWQH1HVO8fWqlAmw+D2U55bEoD08r1vIqCxNU5yD6/jX4+1/DgxhJfEXxcBouVVH1Gm6RfiECRIAIEIEKJKBSqXDq1CkMHDiQ91Zi0L17d+zfvx9yeZl9oUtsuezJd+hfWxCoCn7aTT6GOGm2YFkwVjnx5Vy8EFL5Jpe4j3QAESACRIAIEAEiUL0IkGhVBeMlFa1sbW1x9erVYr1HjBgh3tT2
6tWrRJbnhX2PQXX5G8u6g7HtieRJqSoZ17fORO8W9bj6zW1aw23uL7iXLqhQeXi4xg1tnV3Qa94B7HyvA+ozJrBu74nz/3uB3RM6wNm5Ddy8AsFKNIrnP2NcB2c4d5mBEw9OYNlbbWFrxoCp3QQ9ZmzD3XSJWKZMxJUNk9GtsQUYxhz2nT2w/tAGdC1jeKAApyrCBGUh69CZFwfZEMwCr7wHWOpccDKgCeM0RccNT/SGbRZZb4GGaAcRIAJEgAhUFwKPHj3CrFmzULt2be5/caNGjeDl5YWkpKRK7IIamUG78EEPydLz3P+z19Drw9/xJH+eyexrmNWQ/3/mupuSsFfiSFFTRIAIEAEiQAReFQIkWlXBSEtFK0P5poraXxLRSpV6GR8LT0KZZph9LlGbj0KdgYDP2ohimGVjB1jzgouZ6xo85ByFsnHds5GmjImZWJZptw5h/1cwp5Xs8Wo4c3WYw74OA6auAxxfMxGPa/j+FWgiCLJxd3k7cT9j2QiN60nFnNLntBKGtbLDBJUJvviYz/HFjqE+0Uqd8Dv6mkr7qbutL9dYceoV+kyfRIAIEAEiUH0JpKamYv369WjSpAn3/9HCwgIzZsxAYGBgJXZKhZzYx7h15TKu3AxGTLbwEKsSTaCmiAARIAJEgAgQASIAgESrKjgNKlW0kkdg91tCjgpz9FofBOli1Ypn36EbJzDVwYCtj8E+RJW9+A1jG7BCSm0MORAPFSSiFWOO7svPIij4GnxvJUCpJ6eVVrRiYOn2A0JZ4Uv+HD/05gWvlksQmAeok45hJO/9ZTNuD56xzl/yaBye1pAXssouWrHDWzlhgnmI8vkSQ+wLClD5T7Gc655ozAuDDkOm4/3335e858DrQoJk9aXi15u/HfpOBIgAESAC1ZfAP//8gyNHjuDNN98UH+706dMHx48fh1IpjdGrvn0ky4kAESACRIAIEAEiUBQBEq2KIlQBv1eaaKXOwM1lHWDCCySNph5FrM59rhoJ+/rClPu9K765GoTg4GAEB9/E9kFsuB6DehPPI0MqWlmNw9k0CZRCRas6GHUmnS8sQ9CKVpob74azEZANZAd8gEZc245YIFlbO+/hUrzO7S8f0Yo1oELDBJWxOOjuIE4qpF5yBT2tlIje3o0v2xJLHkjCNCVYuc0S1Zv/YPpOBIgAESACNYXA/fv38c4773CLq7D/YxwdHbFx40b89ddfNaWL1A8iQASIABEgAkSACOglQKKVXiwVu/PBgwews7MDu+T1jh07sGfPnhK/z549W4SRSsQc8oA9L1iZdl6J25mSXFLc0XKErhWWqtb1EBKFl+47EaOSeFq9vpTzkhIbL1S0aojZ17L5onKErePDEO3fB7s7w2cs6nL2dcCW5wqxSlXsbriWs2hVoWGC8nBsaKvhZ9P3cxz8bQLq89wLilZZuDSlgUa0qtUT6w/9ii1rVmPt1v248iIbOiNUonpFfLRBBIgAESACNZRAQkIC2AdfDRtqPJLr1KkDT09PhIaG1tAeU7eIABGoDgQUyfdwcK0nRvdyQVPberBs0Agt2/fBhHlfw9s/AtmqLNz86iMcT9C50y3XriljT+DD7vawsu+OeX/ESqIWyrUZqowIEIEqIECiVRVAZwUnVhRaunRphbWeE/g1epjzQpTNWOyN1IpC2kaViPyxi0ZAMemC5fv/wB9/6L5PX41ErloiWrl8jTAgOu4AACAASURBVDDp6kCFilYOmHtTCEaU40k+0SrbfybsOHGnBT67r/U4koeuhUs5i1ZsnyssTFARjUPzZ+Oro4+RoQIy/TwMi1YSIUoUBnmBi2HsMHT9LYh56ktSr3ZQaYsIEAEiQARqOAGZTIZ9+/ahSxf+fzjDYPDgwWDvL9iHNPQiAkSACFQKAVUabn83Ca34XK1mrcZg2d5LCHwRj4SoYFzc/TlGNjMFY24OxrQH9sZX1PVJjtA1kgfxLmt15yuVAoMaIQJEoKIIkGhVUWQLqZe9saxVqxZiYmIKKVX6n1QpF/CBo+A51QKeZ6ORnpmJzPzvHAWy7yyAIyea2GP6uWRNgvbcIHwzYTDemvYR1vvEQiEND2y7AeHlJFqp4n/HIF5Ya/6xv0asUafj2mI+jJApv/BAgWaFhgnyjRQqWqWdxig2OT0vVFm93g29urVAHVG4ssHEI3F6nw4VWq/QQfokAkSACBCBV4rAjRs3MGnSJO6+gv3f8sYbb3Ah8ez/fHoRASJABCqMgCoJvh9phaIGI39CSP4VRgGoM+5gbW92VdSKFK3USD41CTb8/fRrU84gpaL0sQoDShUTASJgiACJVobIVNB+1oWfval0d3evoBaAvKDleEMUQbQCiSCUiJ+uuxD79wtsH1SHF1Ec0HvCNIzvbsN/b4VPbmYBFSRaQZ2Oy56OooBj33kgBnS0E78zFSBaVWiYID+ihYlL6vS7+HmpJ6aNG4e5vz5GNvcPVYXEc7PQRBizNmsQKhUGi1FvhZ1MVDERIAJEgAhUCwLsgzDWg9vW1pb7P2plZYWPP/6Y8zKuFh0gI4kAEahGBBSI3DUUlsK9a/3xOJZgeJVRZdRODLR7swI9rVh1LBeRf/6OnQf8Ea0N4KhGTMnUyiWgRl6UH777YAzmnJcmbDZkhQppQUexYeF7eG/e51iz6waSpad8bgTOrp2Jt4YNw7BR72LtuWjIDFVF+0tMgESrEiMr2wFz5szhbiYDAgLKVlEhR5dItFIBqtQb+GaCM2oL/3jYT5semHfoBf/HJgkPLEdPK64LOaHY9Y4LzIW2zZwwZcsveKchK7aVv6cV22aFhQnyY1KYaGVw2GRBWNGKFxhN+uFQcsGSpaq3YDW0hwgQASJABGowgby8POzatQvt2rXj7jdMTEwwcuRI+Pn54d9//63BPaeuEQEiUFkE1MmnMclG+2C8kWcAhCy2+m3IwEVPD/zOhQcqkZueipSUFM07NQ3ZXBYTNbJfXMWJw754nC5VAwC1LAkhV8/gsPdeeB/1Q1CirhygzElHqlBfSgpSM+W6eWI5o2RIDL6E4/v34fDZAISn66xOBVVehk4drH2p6Tma6AdVHjJSBXszkCs1Ty1DUshVnDnsjb3eR+EXlEhihf6TwGj2KpNvYs+qaehooTmHex/UM/GSWKvOeQLv9zvAztkDW68nokDSneyH2NjPBk2n7EeEDJBHHYBHk0Z465dndC5IOJZl85UWrfIeLkdrQSwx9GnaHmtFt5c8PFjqLPEE0l6sWe8l044b8ESPh4wwQOnp6ahbty46d+4s7DKqT0V6FELu38H90Fhk6V7HK9hONfISniDwQRjicyvHl7ciwwRLJS4pI7Cts3A+uWJXjPS/oQZ/qeqt4JGj6okAESACRMA4CbAC1X/+8x+MGTMGrHDF3qe0adMGP//8M3JyhHyTxmk7WUUEiIAxE1Dh5e4+qCXOnSwx9qywWrhhuxWZychm5xfKJARsncavFM7e+9aHx4U4PNwynM91y8DCbT/i2VthRSzOLR+OVk26YOK8z/DReBeYce06YsL2EGiuZEokXt2GmW1
MtXO0vkeQKjFFmeCLJX3ZSBJ7DJ63Ep+Oaw4T06Zw++IMYjgFQoXUGxswwk64F2c/7TFukz8SlYAq/QF+ne/KPeBv9NYGXONcbBSIPbccw1s1QZeJ8/DZR+PhYqY53nHCdoTQZVYyAka2qVJAqVYi4vvO3DlTmGilSr2K5a51YdZmES7puFYJfcpF4AoXMOb9sOelMH9T4Pn33WFq5orNhYkDQhX0WSSBV1u0ClyCluIFV3qRkm63hlcIr+arE/B7X8kFMf+xRST927x5M/eHsXfv3iIHhgpULIGKDBM0LC6pkXJxEYa6tkerpi0x9lC8JocY29Usf7xnz593NtNwiY3KzPcyXG++gvSVCBABIkAEiICEwP/7f/8PixYtgrW1NXcf0qBBA3z22WeIjo6WlKJNIkAEiEBxCKTh1Eg2R5UwX3LCymBdz6cia1GEY2M74fh6cFs8GW2atYCtUKfrDkQrWTGgLdeO5YgDiGP1AHkIvFyE47piy1PB50WF2N2uWpukolVuENZ01czfrMae1OS6yg6AZ2NNPa0WXkMG98xcjVTfd0ThjDF5E7/GCiIEIA/fgM4t38clfsWk3MAVaMvaazkCBzTGIcRLm+Or65anBT1yigRDBSqTQPqpoVy0j0HRSv4cO4ZYgjHvja06SZ21VqpTTmOCNQOm60+Ikjh9KMI3cOdHw1lXoWdap62AtopF4JUWrRQxZ+A1z5NbLppdMlp4z3bvKIbKNXhrJ54J1+Gc6+IFjnEYgunvv4/3Je85XheQIDlZpSOgUqnQokUL2NnZ4e+//5b+RNtVRKCiwgQLE5fyHi7T5htzmIxdwRlQ5Ebh7OJO4hOrRrOuQF/63MLqrSKE1CwRIAJEgAhUIwL/93//hx9//BFOTk7c5M7U1BTjx4/H1atXq1EvyFQiQASqlIDiGTa1F4Qj9rM9Nj8TxKNiWqaMwPedJHW0mA//dCXSA72xasEy/PowE2p5GL5uI5RppRHGVLHY7Srss4b7Be0dc8qhPnpEKxVivd1gwYthHbY854WkZBzuLzgidMN3gv2yEKwRxTRT9NoeyS+OpETUDjf0XR8GTVCNHGFftxHba7UyGDLoCmfW7hf03s8XkxAVqwQC6WdHcXN+/aKVEtF73LjFslp+eg+5BuzJOO8Oa4aB7fQ/dcWpzPOYWI8B02QebpLXnQF6xd/9SotWejEpIrF7uJXmIuToCb9UbbiaMno7uvEXvZZLHqAkOf5OnTrF1blixQq9zdLOqiFQEWGChYpLqkScmu4g/pPTPqXi/wE7zsQpA8pnofVWDT5qlQgQASJABKohAdbb2NfXF8OGDRP/H3Xq1AmsJzg9WKuGA0omE4HKJKB4hs0dBOGI/WSjUgrJj6LPtnyilRMn+uQvKEP4zqloa1sPDn2XIyBdDWXqdSxzFtqujZFntGGJekUrNkqmfy3xOtdXTBqbgXNj6or7u/0UxYtTuiIX02oZHrITPnkYNg4ei30SzytZ+E5MbWuLeg59sTwgHWplKq4v06aRqT3yDLTW5e8bfTcGAoWKVrn38WkL9lzrgA33H8F351p8vmAhln17AsFizjUZHq3WjHlrrxBe0OR7lnsXixzZ412xW3LeGEO/q6MNJFrpjJoSMftGoR4nTDlgtl+aThK/rEtT0ID7rRZ6rj+EX7esweq1W7H/ygt+FTidynS+DBw4EGZmZoiLi9PZT1+qlkBFhAkWKS7lReDY4kHa1QK5c8oUjm5LcSpacOsryKXIegseQnuIABEgAkSACBRK4MmTJ/jwww+5nJvsgxTWI5x9wPa///2v0OPoRyJABF5VAuk4M1or+DCMLaZfKWEAlI5oZYHhpwuXd1QZwTi4dDQ6dRiAvmLeKQuMKEq0yruPT5sLIpc5hp4S2smA7zhLUbRq7Hmdz48FIMMPM8Q2bPH2hTTk3PkcA2Zd1C9CqTIQfHApRnfqgAF9taugW4wg0crY/0IKE63y7n+K5uwczaQ5Bk1diHU/bsfGjwZqwkcdJsE7khVqs3BlxmvcedRtR7Q27QvbcXGRLWesfmx4fmfsjIzFPhKtJCOhzriM97lV6xhYjuRjp8Xf5QjfoImrLuAdw97kDV2PW3yMs3gIv/Ho0SPuZJ48eXL+n+i7ERCoqDDBorrGroQSfu86rt14gPDEPB2BtKhj6XciQASIABEgAuVJgF0shs29yaYyYO9z2AdtU6dOxe3bt8uzGaqLCBCBak9Ahbh9A7UrfzMM2m8qYf4mHdGqPjwuasP8dPCocxC2fx5crRmYd1kG/+QY7O0hiFDFEK1YbxlRtKqNUWLC+AycG11HFK0c5t7UilbIQ9AKTQg1ey007/8tfp05GMsf5I+xUSMnbD/muVqDMe+CZf7JiNnbQ6yTRCudkTTKL4ZFKzXi9/bixtLa/RzSROvz8HidJnl7bbffEKPMhN/kBly5Xt4JunO5vIdY+jp7rjrhy0ckWokIS7lBopUIToHn23rAlPN6aYWlgfkvTGk4PUp7cWOsXke3Xt3Qoo5w4WRgM/EI4vTktJo9ezZ3Mt+8eVNsjTaMi0BFhAkaVw/JGiJABIgAESACRRNgc3CePHkS/fv3FydfPXr0wMGDB6FQlDBvTdHNUQkiQASqIQF1+kW8KywgxM6dun2P54VeHtSQpaVCXCQ8v2jlp0+0kiH8BzdYcXOzdlgXJgfU8SUTrdTx2NdPCA+Uelql4Q83M/Ea55rPS0YZ9Qv68CsBssKVTf8f8SJf/2ThP8DNSjMPbLeOzXXFCh0kWlWn09mwaCVH2DpNzrKWSwJ1UgKpk45gWG0GjNkgHE76P9z8sAl3HnXfGaPraZVzG/ObsudHZ2yL1CMQVCdQRmAriVbCIOTdx+ctNRce057bUeDcUqfj7s9L4TltHMbN/RWPszW5rlSJ5zCriSBctcGaUN2Y7r/++gt16tRBt27dhJbo0wgJVESYoBF2k0wiAkSACBABIlBsAv/973+5BWdq19asFObg4IC1a9ciOTm52HVQQSJABGoiASXiT0xFQ05QYudBDfHuuWTdSbvYbTWyg7ZgjNs3YHUn7qV8ga1iIvb68NAnWmVdwYzX+DlWrb7Yy+YFUjzDtx2FeZcFRkjCCvXmtGKTo0sSsbvujtXYqIrFLiGhu4krtuVXpNSpODtJ40HDMFZ461iirheNJCyMYWqh7162XgWefdtRFMIsRpzWH04ocqGNqiZgWLRSInKbxqOq9VePoeMnxeZ04xYi6IjvXvyNmJ2aVSudV+crl3kB7uyqgtbukKwXUNVdrrbtk2jFD13OjTlw4C68Zuj/G39BK9awyhC0ohV/gTJBPzHBn+bgjRs3cr/9/vvvxaqNClUdgaoKE6y6HlPLRIAIEAEiQASKJpCSkoJ169ahSRPNE2ULCwvMnDkTQUFBRR9MJYgAEaihBHIQssMdjoJwVccVn558rvWmAqDOi8G1H95GO8d+WOWfohW15CHwai2IT1YY75tRkFH6aYywEMowMHXsAteO7eHCCgF8m07ztmGnTzTkrDi1SyMecL+57kGcsJZWbj
DWdtd4W9lOu6ARkjIuiYJY2yW3dFd94y3JvbcYLdh2GnviaoGUXek4PcJCtIMxdUQX145o72Kt3ec0D9t2+iBaEOoK9pD2VDEBw6IVkO0/k8tfVWBVQHUi9vViwFgMw/EUQB7iBReGgdV4H0jPYlX0Dm4BN7MB3ohTVXFHa0DzJFpxg5iDW/M0N2IM0wM/R5fEhU+JCF6JZS+Srru0roFKpRLNmjVDw4YNIZfTFas6/L1QmGB1GCWykQgQASJABKqCwD///IPDhw+jZ8+e4sSsX79+OHHiBNh7HnoRASLwqhFQIyvkMJZP7ARbXkgya9wWPQcMRJ9uLnCwfg0dJ2/AxZcSXxVlAi6vG4emgtjFMKjdbTa2XIiFTgSeOgM3VvaCJV/OrucH+OVeMl4enSoea9VtIc7E5SHu4nq4c/mDeEHLxAlTNlxALF+hMtEPq4awq3fbY6DnZ5g7pDEYpjHclvsiTqdRyfgpnmOrqxXaeOXzoOGKqJFxYyV6WfLt2fXEB7/cQ/LLo5jKhYQxYKy6YeGZOH5VQkm9tGk0BAoTrZB9HXPZaKoWn0MnnZk8DOvaMKgz7ADiWTFK8QI/9KoFpul83M7Rdi3t3ERYM9YYdzwpn5eetgxtFZ8AiVYsK3ko1rjwFx3n1dCXK02dchGLhrqifaumaDn2kOYk5Thnwf89e/7mzQbTLmmlePYmjhWyvvzyy+KPCJWsUgIUJlil+KlxIkAEiAARqCYE7t69i7fffptL2M7e6zRv3hybNm1CWpo2ZW016QqZSQSIQDkQ4BYYun0RJ48cwP6Dx3D2SiCis8oqZqshT4vFy+RcrZcWqxNkxCIiJg0lcwlQISvyHi6fO4Wzl+8iIrMo9xc1sp7fRWi64XJqeRpiXyYjV1pEkYHYiBiklcy4chgBqqKkBFKPD0It1ulECBvVqUCJ+ONTYc/YYMIRrfiYc38F2ti8iW9CtPmvc+6tRDtze0z3TdUIVIoo7HKzgtWgH/GMzgMdqqX9QqIVSy7lGNzMNaJVPfcL0JcKEHkPsewNXthiHDB5VzAyFLmIOrsYnWrx+xvNwhXJwezTR3b1nYSEhNKODx1XBQQoTLAKoFOTRIAIEAEiUC0JxMfHY9WqVbC31zzAq1u3LubMmYOwsLBq2R8ymggQASJABGo4gbxnOLltFaZ3NOccTEycPbDsu2MIk3hKcQTUOXj88xS42LWFx5c/YPu6uRjacxzW+ufP3yZH9Il56OHUF++v/Aofj+6Cbh6bcTNNqmbWcKYV3D0SrQDIglfBiXc9dfEKMaDaq5B4ajqf90oQr6Sfjph5KkF0AWXzPLBPHqdNm1bBQ0jVVwQBChOsCKpUJxEgAkSACNRUAjKZDN7e3ujcWZO8lr0HGjJkCHx8fMB6MdOLCBABIkAEiEB1JKDKfomgGwG4/TgGOYXoUOrcOITcvYOQuBwdz8Dq2Gdjs5lEKwDZ12aJq1/odw8Uhi0PEccWY5C4WqBGtDJ1dMPSU9E6Kwu89957nGh1584d4WD6rEYEKEywGg0WmUoEiAARIAJGRSAgIADu7u4wNTXl7oVatWqFbdu2IStLm0LBqAwmY4gAESACRIAIEAGjJUCiVWmGRi1DUvg9XL92Aw/CE5GX7wEiu8oOuzx0jx49SlM7HWMkBChM0EgGgswgAkSACBCBakng5cuXWLJkCWxsbDjxql69eli4cCFevHhRLftDRhMBIkAEiAARIAKVT4BEqwpgzi4LzbrFHzx4sAJqpyorkwCFCVYmbWqLCBABIkAEaiKB3Nxc/PLLL2jbti13f2RiYoK33noLly9fxr///lsTu0x9IgJEgAgQASJABMqJAIlW5QRSqIZdDrpp06Zo3LgxFApDa6gKpenT2AmwYYJ9+/blbrL9/f2N3VyyjwgQASJABIiA0RJgBSpWqGIFK1a4Yh/wsUIWK2ixwha9iAARIAJEgAgQASKQnwCJVvmJlPH70aNHuZswLy+vMtZEhxsLATaMoU6dOmjZsiWys7ONxSyygwgQASJABIhAtSXA/m9lQwXZkEFWvGJDCNlQQjakkF5EgAgQASJABIgAERAIkGglkCinz969e8Pc3BxJSUnlVCNVYwwEKEzQGEaBbCACRIAIEIGaRoBNzs4maWeTtbPiFZu8feLEiWCTudOLCBABIkAEiAARIAIkWpXjORAYGMjdcE2fPr0ca6WqjIEAhQkawyiQDUSACBABIlBTCbD/Z318fDBkyBDuXooVsLp06QJvb2/IZLIC3V66dCnmzp1bpvfNmzcL1Es7iAARKJxATtgRbFn1Md57exqmTdN9vzPjfXjO/xxffbsX5x7EQ5ZvsarCay74qzLeD5s/fhdv8+287fktHlIkcUFQtIcI1HACJFqV4wDPmDGDu9G6f/9+OdZKVRkLAQoTNJaRIDuIABEgAkSgJhMICwvjxKi6dety91X29vZYtWoV4uPjxW43adJEFLdYgas07z179oj1GdpQ5yXhRVgoQsOeIi6njDNwPY3IUyPwJDQUYc8TkFv+1etpkXYRgXIgoE7H1Y+bSf7ubDD6uxM4dXgPNi0eDxdzzd9knbaTsOHPBJQly68sZA1chL9x86E4lV4O9peqChnCf9sAnwRVqY6mg4gAESg9ARKtSs9O50g2HNDCwgK9evXS2U9fahYBChOsWeNJvSECRIAIEAHjJZCWloZNmzahWTPN5NjMzAxvv/027t69i8oSrXLvLoQjN2G2xDjfjHKGJUfYWhfNxL/hbARQ2sxy5kvVVSSB1CP9JKJVE8y7lSM2lxvyPdwsBTG5IcbvfoaC/pJi8cI3kg+ijxGIVrKwb9Hbtjt+jFIWbi/9SgSIQLkTINGqnJCuWbOGu3AfOXKknGqkaoyRAIUJGuOokE1EgAgQASJQkwkolUr88ccf6NdPO0lm84dKvaucnZ1R1NvJyUnnmOJ4WpFoVZPPLOpbWQiknRgIU0FMYnRFK0CBF9t6SH7vgNWljetLOVTlopUq5RIWurAiXBcSrcpy0hjJsar0QHgvmYQBndvApVMfjPvoB/gnFO0PqIw7jlndO+Gdc6m6PVGlI9B7CSYN6Iw2Lp3QZ9xH+MG/bB6Gug3QNxKtyuEcUCgUcHBw4J76/fPPP+VQI1VhzAQoTNCYR4dsIwJEgAgQgZpMICgoCO+9956O+DRz5kwucTubvL2w99WrV3WOq3rRCoBaCfY+UqGkkKOafN7WxL4VLloB6oSDGMyHCbICc93hBxBf4DSXITH4Eo7v34fDZwMQnq7Hi6mAaKVGduQNnDqwHyeuhCI13yHK3HSkpqQghXunIi1bI0aos1/g6onD8H2cDl0zFEgN/RMn9u/DkYuPkCzXHS1lvA8WdTbjrx0d8E1QIlJSUpGRJ61FDVlSCK6eOQzvvd446heExFK7lum2T9/Kl4A64zo+b1cLjKUj2nRwgl0t3iPQbjz2RuQbfGnTimj8NsqaOw/e3J8EMZpbnYHrn7dDLcYSjm06wMmuFn+u2GH83ggUUqO0dtouggCJVkUAKs7Phw4d4k7Or7/+ujjFq
UwNIEBhgjVgEKkLRIAIEAEiUG0JNGrUiJ8YMKh00UqVguvfzkDv1xvAjPM0MYFlk84Yt/wPRAoTVXU2HnsvwPB2DVFHKOPQASM/2Y/QbGG6o0DE7gno4OyMNm5eCKQE09X2fHwVDS9KtILiGTa3F0IEGTBmA/G7RLVSJvhiSV8bMIw9Bs9biU/HNYeJaVO4fXEGMVKnF6loZdoZH83pC4d6FuLfv+kbk/FLiBCaqERSwFZMe13bbn2PC4h7uAXD7fh9Fm7Yz9uhSr+Fb0Y7opZ1b8xfvxoeLUxg5jwD+55r/pDlT7djtL22Lql3Z2uvEC7kURF7DsuHt0KTLhMx77OPMN6FF7gcJ2C7aNereIYYY5/leLJlMPp9egZReRr7VKk3sdFNI0ZZTziNFOHyrGO+Ai9+HoXWre24804qWsmfbMHgfp/ijLZC3NzoBmv2um89Aaf1V6hTO30pmgCJVkUzKrJEz549uXxWrKJPr1eDAIUJvhrjTL0kAkSACBAB4yQgzWlVuaKVAi+2D0BtPizKsmkrtLQXvDAYdN3yFAqokXx6Cl7jytRBi6590KdbS168YmA37QySOScNymllnGcXWVUcAkWKVsjA2VG1RXGJYRpi1jU+cVtuENZ0NeV+sxp7UiMUZAfAs7FGIGq18BoyBPFAKlox1hiy6TbSVCqk+S+GE/93yLT4GNezBKsVCN/YTmy3nttiTG7TDC1sBfHJFTuilUBeKDb3YcWvWuizMxqsw1ba2YkasaH9lwjiRQ1k+GBMXeHYfOGBuYFY0Zb9zRIjDsRxHlzyEC9t4viuW/BUKsAJJtJn1RBQRsF74U94ks/9SRn1M3qZMmAcP8ZtPQ8PZE+2YqjrApw6MIq7jmtFKyWivBfip4IV4ude7PntiI/1VVg1va/WrZJoVcbhu3fvHndRZG+Y6PVqEaAwwVdrvKm3RIAIEAEiYDwEqky0Uqfi6lceGNi1DfqtuoVMdmKtjMC2rppJrfmw00hHFi5NZT1IGFi47UMsF76kRMzxj+E+ZQ6W/XARsdxElkQr4zmjyJKSEihatMrCn+9o/g40HkoWGH6aXfpPhVhvN1jwglOHLc/51QWTcbi/RshimG747hmv9khFK+nqgcoX2NpJEJNqY9iRRD5kS4mI7zuJohXDtMB8/3Qo2TxGqxZg2a8PkalWIf7QCNTlbGiLDeGatuRP1qMtt88W71zO1CApRLSSh32NNoJw1molgmWAKnY3XIV91u64wFdTUr5UvmIIqAUxVFp97l0sdGTAOK/GI8FbVvg97xHWD3oTS29lIf3c6HyiFRvirbdC3F3oCIZxxuoCFQoV02dJCJBoVRJaesqyq9iwF+KHDx/q+ZV21XQCFCZY00eY+kcEiAARIALGSKDKRCsJDGVmNB74HcaPX85Cv9f4yXPvA0iCDI9W86sCspNXGxf0d5+L1TtOIzBR+oifRCsJTtqsZgSKFq0ycXFSfYl4xK/AqU7A7/2FvD8M+h5K5nuegXNj6orlu/0UxXk/wZBohUxccK8nlreb6Q+NH1c+0cpJIybp4k3BieFCiGEXbLkXgcjISDy/sQbteMGp1YogzYqHhYhWkIVj59S2sK3ngL7LA5CuViL1+jI4C6JV7ZE4w+p09DJuApl+mNyAQbNFd6HraJWD+6v7ob9XINgA1Ax9opXenmXCb3IDMM0W4a5uhXpL086iCZBoVTQjgyUSEhLArl7Tt29fg2Xoh5pNgMIEa/b4Uu+IABEgAkTAOAlUpWiV9+wQFgxqDnNhYsqYiF4jTJ9DYKfg6vQbWDNYk/9EmgeHYV5D/xV/UnigcZ5WZFUJCBQtWqXg2GBt6CzndfJYBuTdx6fNBQ8pcww9Jag6GfAdZymKUI09r3NCgWHRKhvXZjUUy1sMZ70c2ZeuaKXdL+mcPARerQUbaqPdCHd4eHhI3lMxb2dY0aKVWKUKGcEHsXR0J3QY0Bd2wrXBYgSJViIj493IuX/0qAAAIABJREFU8p+NJtaDsCtKmtVfjcybX+BNt80I5b2vii1aZfljdhNrDNrFC6/G2/VqYxmJVmUYqtWrV3MXyuPHj5ehFjq0uhOgMMHqPoJkPxEgAkSACFQ3AlUmWuUFYVUbzWTXottc/HT2HqKyUuEzlp9s9zsK7WLociQGnsb2VZ4Y17s1bISJLNMKyx+yCXPI06q6nXdkr5ZAkaJVXiCWShKiM80X4x7rdZIrFa1qY9RZrWh1bnQdUYRymHuzSNHq6nv2Ynmbty9Dk9ZKV7Sq73ERBSL0ZI/wpZMgWjXBvFtCIndt/8StwjytWIE6Jwz757nCmjFHl2X+SI7Zix7C3zqJViJGo91QRGDHIEcM2Bqus9KfOv0KFvZ6C9ufa71jiydaKRCxYxAcB2xFuPZQo+1+dTGMRKtSjpRcLge7co2joyOUSqkqW8oK6bBqTYDCBKv18JHxRIAIEAEiUM0IVJVopY4XJqTmcDvGL8AjC8FaXshi+h5BqiIKRz+bhCG9OqP/kgBxwiwL/pIPG7LBO3+y02sSrarZaUfmSggUJVrJn2xAe0G8YczRd3uEJneVOh77+gnhgVJPqzT84ab1zHLdEc0lNtf1tBqCk4LGpRMeaIJeO19qyufztKrv4Sf+DYrmq+Ph3cdEFLxcvEJ0BAuxHLuR4YOxhhKxy8Lxg5uVpp526xAmB7TXCAYMiVY6KI3viwIRu8ag09SDeCmdzqtScGFOD0z8LYrPt6axvDiilSJiF8Z0moqDOhUaX8+rm0UkWpVyxPbv389doDZs2FDKGuiwmkSAwgRr0mhSX4gAESACRMDYCVSVaIXsAHzQiPfQsB8IzyWf4O0ekmTTXX5EpDIH95a14SfEVujiPheffDIHEzrx3lh2b8MnlU3eS6KVsZ9nZJ9hAmnHB8BUFKWaYN5NibeSOg1+sxxEUchqyE94Kia41k3E7ro7ViM2qWKxy5X/2zJxxbYXehKxmw7AMcGVURWjLV93GPbHcUtycuGBL7ZqE7HrFa2gwLMtXUT7mAbjsP8l3x7UyLy7BUsOaVYUROZFuFsLXlmdsS1SCagVyFWokXVlBr9KKINaffciVgUonn2LjgIXixHgcs8bxki/VBkBNTJurcawMd8gMDtfMvXkg+hrIoy5oc/mWHRHN2GVOuMWVg8bg28Cs/lFAaqsczWuYRKtSjmk3bt3R506dfDXX3+VsgY6rKYRoDDBmjai1B8iQASIABEwVgJVJlpBiYRzC9FV9LxgYN7aA2vWDoUVO1GtMxyHEtiZawzOLHWDYy3dCY9V+2n4MTCTn9CQaGWs5xfZVRQBFWJ3u2pFH6Y+JvnyLlCyGFzyGowGnHBjDpfpu/A4vyiQG4y13TXeVrbTLmhyUWVcwgx+QYO2S27xoX6AjqcV0xyf3NGIY6rY3zDYgv37qoMBW59o8k9xZssR4tVatM1qvC8y9HRHneKLmQ6Sv0/7QZi7dhu+X/0u+g9aiduarO6ALARegiclY4MJv97AhY0fYLl/Ov46PUKbz44xhWMXV3Rs7wJrQbRinDBv2074RFOcmJ4h
qNJduWE7MHXUclxJFcROiTm5T3D42w1gnVOk769mOnPnVdPJK7F+0y74J0jcs3LDsGPqKCy/ksp7/Enqo80yEyDRqhQIb9++zZ2ws2bNKsXRdEhNJkBhgjV5dKlvRIAIEAEiYCwEpKIVm+i8Z8+eRb579OghTmTZY/bs2VPq7qhlSQgPvIvAp0mQ5XtIr1OpPA2Rj+/h9p1APInLpsmMDhz6Uh0J5IYfxzavuRhgKxF8OJGmARxbOKJhPUvYvu6KUR94Yf+dRJ3wKml/lYl+WDWE9cayx0DPzzB3SGMwTGO4LfdFnOD0xB6Qcgh9zS3RdfZGrHunHWyaDsT7iz/GpA61wdTriLd/uId08W9QiYTL6zCuqcS22t0we8sFxErr5AxRI/vRdkxx0oYkstcFm54LcTpWIkZAjYzbX8PNTqizIdy8riNNBagzbmBlLyF5vB16fvAL7iW/xNGpTflrjRW6LTyDOGl1Ugi0XSUE8p7uwYxhC3AmPt/A5Ibj9IknYDMO6nsZDA/Me4o9M4ZhwZl4zYqX4sG5CD99Ak8MVSiWo42iCJBoVRQhPb9PmTKFuxAFBwfr+ZV2vcoEKEzwVR596jsRIAJEgAhUFoH8opXuCn3C5LLwz7KIVpXVT2qHCNRsAipkRd7D5XOncPbyXURk6vF6kcfhQXASL36pkBVxB35nzsLvVhgSC1WMi0lOnYOXD/6Ez9kLuBGaaFiEVmYh7kUEErLz2aiWIy32JZJzpfsVyIiNQEwaeVgVcxQqqZgaOY9/xOjGNuj+9gIsXrxYfC/8cDqGtmmJ8SeSDIb26ROt1DmP8ePoxrDp/jYWSOpbvPBDTB/aBi3Hn0CSKKpWUjdrYDMkWpVwUP/3v//BzMwMAwYMKOGRVPxVIUBhgq/KSFM/iQARIAJEoKoIkGhVVeSpXSJABIhA9SQgf74DQ+sV8jCjvjvOpRnuWwHRSv4cO4bW0/Hg1X2AUh/uhVVouCn6JR8BEq3yASnq68qVK7kT8+TJk0UVpd9fYQIUJvgKDz51nQgQASJABCqcAJuiYfz48YW+R4wYwT1otLCwwKhRowqU/c9//lPhdlIDRIAIEAEiQASIQNkIkGhVAn4ymQz29vZo3rw5VCqpC2gJKqGirwQBChN8JYaZOkkEiAARIAJGTmDv3r3cw8Z3333XyC0l84gAESACRIAIEAF9BEi00kfFwD5vb2/uxmfTpk0GStBuIqAlQGGCWha0RQSIABEgAkSgqggMGTKEu3/z8/OrKhOoXSJABIgAESACRKCUBEi0KgG4rl27om7dukhLKyTYtQT1UdGaT4DCBGv+GFMPiQARIAJEwLgJREVFwdLSkvOUz84W1rE3bpvJOiJABIgAESACREBDgESrYp4JN27c4J7SeXp6FvMIKkYEAAoTpLOACBABIkAEiEDVE9i2bRt3Hzd//vyqN4YsIAJEgAgQASJABIpNgESrYqLy8PDgbnZCQkKKeQQVIwIaAhQmSGcCESACRIAIEIGqJcA+ROrduzdMTEzAPoikFxEgAkSACBABIlA9CJBoVYxxio2NRa1atTB48OBilKYiRKAgAQoTLMiE9hABIkAEiAARqEwC4eHhYFcSbN26Nf7+++/KbJraIgJEgAgQASJABEpJoFJEK3V2LJ6GhSI0NBwvsypx1T1lBqKehCI07Clic9QGEc2cOZPLdcDmO9D3NjMz47ys2Bsdfb+z++bMmWOwfvqBCFCYIJ0DRIAIEAEiQASqnsC6deu4e7qlS5dWvTFkARGohgQUsX74ec1n8Jw+DdOmFeP9wSY8yCnY0byIM1g73Q293Yajh6MtHNqPxBenY6AoWLRse5SxOPFhd9hb2aP7vD8QqyxbdXQ0ESAClU+gEkSrXNxZ4MjdIDAMA7t3LyOzkvqpit0FV4YBw9TFGJ8Mg61OmTJFtI+1sTRvVviiFxEojACFCRZGh34jAkSACBABIlDxBP755x907tyZ86B/+PBhxTdILRCBGklAjhc/94e5ZN7UdsUx+Pr6wufsSRzZ+x2+mNQRluzvpgNxIt8aVrIn2+Bmxc65muGTu3E4OdxCM/+ymYbLWeULTB66Bi6inS5YGyYv3waoNiJABCqcQMWLVhmXMOM1iRBkNRrHkwx7PZVnj0m0Kk+aVFd5EKAwwfKgSHUQASJABIgAESg9AVasYtM+sOIVK2LRiwgQgZITyL27CM1EMYhBv2P5lCnIEeU9Fg0s8olW6hScGmvFOwk4YdWjDIT96I4OLV3g9sUFJJVzUI46+RQm2fBz0dem4ExK5cxDS06UjiACRMAQgQoWrdRI/mMMrCQXNIYxQ5+d0agMz8zSilYfffQRivOWemSRp5WhU8zwfnVONO5f84f/tQeIyS36H4gq4yluX2XL38dLA+WVGVF4dPc27j+Jh4Eihg2qhF8oTLASIFMTRIAIEAEiQASKIMCGB7L3cWy4IL2IABEoOYHc+5+iuWSOpxWtlEiPiAaXEUYehnV93tL1tMoLxNLXBYeG17E0MK/kjZfoCDVyI//E7zsPwD+6otsqkWFUuEoJqJEX5YfvPhiDOefzC676DFMhLegoNix8D+/N+xxrdt1Acj6BVZ0XBb/vPsCYOedRnBr1tUL79BOoWNFKFYO9A8y5m4JaPVfjq161NKp6+/XQ8czMe4g1bm3h7NIL8w7sxHsd6oMxsUZ7z/NIUauQcv1bzOj9OhqYaS5wJpZN0HnccvwRKdP2SpmIKxsmo1tj1r3UHPadPbD+0AZ05S6mxQ8P7Nq1KwICAor1bt++Pf+UgAGJVtqhKN6WDCHrOvP8iuGqq4zB/rH1+fLOWP1YMvYA1NmPsWtmJ12B1MENy33jK0UgLV6fNaUoTLAktKgsESACRIAIEIHyJ8AmYmcTsrP5StkE7fQiAkSgZAQMilbqeBz2/BJ3ctn6ZHiy7wfcEUL+FFlIi72C+Y6CaNUMC/zjkJaVP5OVDInBl3B8/z4cPhuA8PSC7g7K3HSkpqQghXunIi1bU4c6+wWunjgM38fpUClzkJ4qlElBSmom5AWekyuQGvonTuzfhyMXHyGZogdLdiJUw9LK5JvYs2oaOlpozsPeB5ML7YU65wm83+8AO2cPbL2eWDDvmjIZN/eswrSOfJhr74MovMZCm6Mf9RCoUNFK8ew7XjQyx+B9MYg94AYLTkRqjsX3uCuZxqTs6/BsxAtSvDDFPv1qty4M2S+2Y0Bt/sJm2RStWtrDTFD1u27BU+76lI27y9uJAhJj2QiN6wkXQ/aTRCs9Y1+Fu5RI8P24BPHlSkT/Ngr1hHFn8olWyngcnfSadvzFcuzYt8fK+3qyP1Zh79mmKUywigeAmicCRIAIEIFXnsCNGzdgYmKC3r17g/WEphcRIALFJ2BItFIn/4FJPT7hRSvd+tLPjODngtJ5GgPzoaeQzhdVJvhiSV8bMIw9Bs9biU/HNYeJaVO4fXEGMaK2pURSwFZMEz22GNT3uIC4h1sw3I6v28IN3v/9E9tmtoGpODfoiyOpWptU6bfwzWhH1LLujfnrV8OjhQnMnGdg33P
dh+PaI2irRhBQKaBUKxHxvcaBojDRSpV6Fctd68KszSJcyu9aJcJQQaFUQxnxPTqz5xqJViKZ8tqoQNFKhuAvnTVCQt2ROJqkhjrlJMZySfc0CdnF1OgS0Yox747lZ4MQfM0XtxIUSL36FTwGdkWbfqtwK5O9oVAiYltXTb3mw3A6HVAnHcPIupoLlM24PXjGen7Ko3F4WkNeyCDRqrxOmDLXkxcFny+HwF7858GOW+GeVorIXRhqKf3npita5QV+gTf4+kw7fYHzz2MRvNcDDfl9Fm6/Iy6f+2aZ+1HGCihMsIwA6XAiQASIABEgAuVAYP78+dy94rZt28qhNqqCCLw6BPSKVrJo/OHZErUcF+gVrTg6eQ/weQvhvr4llkjDA3ODsKarKfc3aTX2JLj0U9kB8GysKd9q4TVkiPqyAuEbtU4L9dwWY3KbZmhhK9Ttih3RSqhid/MLc7H7JaJVXig292E9Y2qJqWvSzk6ENTt/aP8lgiiSsMafzOmnhnKLCRgUreTPsWOIJRjz3tgaXgwXvPRTGGpOolVFnDgVJ1rl3BJdP60nnEEqd4FJx/kprHLOgJEmZJeIVlbjzuqPAVVmIvqBHw7/+CVm9RO8anrjQBKQHfABGnEChSMWaHxRNdfEh0vxOre/4kUrW1tbfPXVVwgMDMS///5bEWNV7etUxh6Eu4Pwj0T6WYhoJX+O7YPq8OKjcIxUtJLh0Son/ve6eOuPFHCnGhtD34Yvb+6GI5WU/L8kg0RhgiWhRWWJABEgAkSACJQ/gezsbDRv3hyWlpaIiooq/waoRiJQQwnkF60YGwfYChEzpRKtVIj1FqJyGHTY8pwPw0rG4f4aIYthuuG7Z4K7Fesp00kyR2iB+f7pUKYHwnvVAiz79SE4f4eUQ+jDzQfZeYEgWqkQf2gE6nL722JDuKZO+ZP1aMvts8U7lytrvfsaeoJUg26lnx2F2gwD/aKVEtF73FCHYdDy03uQxIgZ7ln6WYxiI8TI08owo1L+UmGiVcbFd2ArXCBsW8DJ2RnOzk5oKbhsSlRtSESr15cGQkfYznuGQwsGoTmrWvL1mfDxpwzTB4eSgQyfsfxFpwO2PBcuZJAo6xUvWpmZmYn2NWnSBHPmzIGPjw/y8nR6U8phqhmHycM38P8IbND384P4bYKQo8qQaCXH0+/7adyITVzgPrIxz1gqWiXj8AA+VxrjgjWhggqeCb/JDfjyTTH/tvGFCLKjSmGCNePcpl4QASJABIhA9SXg5+fH3S8MGTKk+naCLCcClUwgv2jV91AsEh8dw+LuFmBKI1qpE/B7f+GenkFfdpLHvTJwbkxdcZ7V7acoPl9tPtHKaSWC9UX16RWtUnBiOJ9/iOmCLfciEBkZiec31qAdP99stSII+qqrZMzUXAUSKFS0yr2PTzmPwA7YcP8RfHeuxecLFmLZtycQnG4ghIdEqwobrYoRrdRJOD7aUry4CGJTgc/26zQJ2SWilcvXYRBkByAPQavaaOqx6Ia5P53FvagspPqMhSV3QemHo6lAtv9M2HHfW+Cz+1qRSB66ls+bVPGi1fTp0/Hnn39i0aJFeP3118W+161bF2PGjMHu3bsRHx9fYQNZHSpWRB/C/Nlf4ejjDKiQCT+PwkUrWdhm9OLEShN09rqN65+14LlKRCt5GL4WPKoYV+yJE3yGc3FngSNf3hzD2ThSI3xRmKARDgqZRASIABEgAq8cgXfffZe7Z9i7d+8r13fqMBEoDYH8opWweiCb0/jNdgtLHh6Ydx+fNhecFMwx9JRw754B33HaeWVjz+vQPIrWFa0shp8W82Lp9EefaCUPgVdroa3aaDfCHR4eHpL3VMzbGUailQ7ImvelMNEqT1gd06Q5Bk1diHU/bsfGjwZqNAeHSfCO1CoWIhkSrUQU5b1RIaKV8uUe9OPdQx2m/4Izvr7wFd/n8NtcQdThE7JLRKu2G8K1opU6Hnt7aC4o5m7HkML1XoaQtbyQxbt4quJ/xyDeE6v5x/5IZ3ULdTquLW7FixYVL1rlXz0wLCwM33zzDfr06QNTU41LK5vss3v37lizZg2CgoJe8TDCIkSrvMfY4Kp52mLadS2C83Lx4HM9opVOXLzG807zR5KHwC+E84xBbzaO1EhfFCZopANDZhEBIkAEiMArQyAtLQ2NGjVCgwYNkJCQ8Mr0mzpKBEpLwJBoBWUkdr2zXCJaKRBz/iAeZvMt6dy7S3JasZ4tomhVG6POakWrc6O1qUIc5t7UK1rV97gIvQF9+kQr2SN86SSIVk0w75ZxRmSUdmzouOIRMCxaqRG/txenI1i7n5OkLsrD43Wa5O213X5DTP5FLUm0Kh74UpSqANFKgWdbuvBikRNW6vHTVLzYJibEs5txCRmGRCtkI+CDRnxd9hjouQSfvN0DNrzbJsN0wY+RSk6guuwpeNUwsO88EAM62vHHsRekyhetpGORmpqK33//HZMmTUK9evVEu5o2bYoPP/wQ58+fB7v08qv1Kky0ykXQmi6alT5Mu2HdI9Z7Ls+AaPUQS8WVQ/rgkEbZ5MoHLtWKVm/uN17Rih13ChN8tc5+6i0RIAJEgAgYH4ETJ05w92jjx483PuPIIiJgZAQMilaQI+7uQyQLE3pZML4e8wmuC7qQIdFKHY99/YTwQKmnVRr+cNOmYXHdEQ1NcJaup1V9D7/ii1bqeHj3MRHnZC5eIVqnCSPjTOZUHAHDopUcYes0TjItl+imLlInHcEwNm+V2SAczp8zmUSrChus8hetZEFYKSjXbb/WhP/lN18VC+9B5poLheVbOBZxDZ6NNGq3jqcVu1Zgwjks7KqNY2bMW8NjzVoM5VYhrIPhh/inYTmh2PWOC7cCABeGaOaEKVt+wTsNq160knZfLpfj8uXLWLBgAVq0EDyHGC4BKHuTxLqlJyUZt8Ai7U/ptw2LVrJHXujEC5MN3X/COb9LuHTJBzs87Pl/Lg6YsdsXl2+EIyPviTbhOtMDe+P1hwcOE12MS29xRR5JYYIVSZfqJgJEgAgQASJQPAITJ07k7jWOHTtWvAOoFBF4RQnk3vsEzURHAgZ9DotPjiVEchC88U3Ydv8RUYKIZUi0gm4idtfdsRpxShWLXa68V5SJK7a9EPIXK/FiqzYRe4lEK0idLBgwDcZh/0uhXjUy727BkkPRfO4sSXdos0YRMCxaKRG5TeNR1fqrx7phoopn2NyePR874jvxXOSxkGhVYedH+YtWFWGqWoak8EDcDXyKJJkgSuhrSI28hCcIfBCG+NzCyukeO2XKFFFpd3JykoQySsMaC26zq80IebryhwfqtmD4W0hICDZs2IA333wTbPggWx/72bNnT6xbtw6PHj0yfHC1/uX/s3ceYFFc3f8fUFDBgooVWyzYC2Lv2BI1xl5ejbFgib1GY4ktMRFbLLHF3k2sKFH/9or1hwKi8oIIhP7SQglsdjff/3NnZ3ZnqbvStpx5nn12d+bMnXs/9+7u3e+cc272opW4/KjINtvnShNxJykKx7uJd2UaYu1rMb44AX+oc2YZh9svhQka9YCmyh
MBIkAEiIAJEIiIiICdnR0qVaqE//3vfybQImoCESgYAok3xmgW3eI41F/+QrKYlhIpwXexe1ob2HIcyg6VhO4lP8C0aoIIxVXD1PuiCxaAFC+sbq2a15cf5aHKURV/FWMrqOwbLXyARHVz0uG9soH6v5jtF5cQrz6meaEI2a2O8OEk+W+V0ZcwTrqquX13TFm9BZtXfIku3ZfioRjOqCmKXpkYgexFK03O7PJjrkvGHEtBFIGD7Thw1r1xOqNOS6JVgY0Q4xCtCqz5qoKlolW2AonkTkJWNh8rWkmbFhUVhQMHDoDd5bO1tVV/CTNxbPr06WCr2zBPLdPY8ku0YrmragusbDDgbDR4uVLmh3WNhR9Eqx44FqG7iFmUfClMsCjp07WJABEgAkSACICfi7G5HltkhzYiQAS0CchCr2PPD/MxuL4qZ6/0f1E5h1qoXcsBVcppwvnYccdVPnz4Xfr783Cb3lWS6oVDua5fY/3ZQHV4njziCpb1rAqOY6lh5mNKT7Z6eBW4LLmEUNEZCnKEX1uLgdVF8YsDV8IJE908EKK2AWShf+D7IZp0IRxngXojfoAHb6RE0svtGFFPu652bWfhXIjoFqbddnpnWgRyEq2QdBdTmLhaawGeatZ5A9J9+Sifkr2PICzjIoIkWhXYACHRCoChiFbSXk5LS+NFKiZW1ahRQy1gMTGLiVpM3GIil/Fu2YtWisT38PbygpfWwxMnxrEfMPbjVANTfn+Cl69DwRzqUh7NhIMgKhZzWoYbwVF4fXQMqgn7rLrtR0jGLxUDBUdhggbaMVQtIkAEiAARMCsCvXv35uccHh4eZtVuaiwRMAwCCiQGPsa1i2dx4ZonAhIKcCKvTMaHp9fhfsED93wikGNQj2HAoVrkE4GY091RjOOgDkXVKleOsNMjYc/ZYdCJUHWoaPKTb9HQrj1+9JYqWcKJMafRvRgHznmP0fz31GqyAb8h0cpARauMY4aFCbJwQRY2KA0jZGGFLLyQhRka15a9aJV1O7JJxM6M5cE48nk5tbAnvePDcfUx777GkTjrsg1rL4UJGlZ/UG2IABEgAkTA/Ah8+PCB93pnNw7/+usv8wNALSYCRIAImCqB1Lc4s2UZxjRT5di2qD8Mizeegq8kUpVvujIZr34ZAceKjTBs+c/YvnYKerUdiNU3o4TFAERAqXh7ZguWjWmmyq9tUR/DFm/EqUwFivb0rC8BEq0AeHp64vTp05kerVu35oUQlowzq+PSfU+ePNGX/Ufbs3wLLGE7S9xuY2OjFmtYYneW4J0lejf8MMJ8FK1YeHGiF34Z2xQ20jDOCh0x5/cPanfjjwZeBCdSmGARQKdLEgEiQASIABGQENi6dSs/x5o2bZpkbwG8lCfgw6vHePjYG8GJuYUlpSP63Qs8uv8Qz95EkldIAXQHFUkEiAARkBJQJH3Ai3t38PBVMJIL0OlPek16rU2ARCttHlrvxLDBv//+W2u/Ib1hdWOu62xC5eDgoBawSpcujaFDh+LQoUOIiYkxpCoXaF1kMe/w7N4dPPAKQqIRf6lQmGCBDhMqnAgQASJABIhArgTE32Lm4X7nzp1c7fU2UMbj6baxaF5akpeHs0PriXvglZghF6cyCa/2T0WnKlJbDpx9e0w96IPkDOZ614VOIAJEgAgQASJgoARItMqhY7766iteBIqLi8vByrAO/d///R9Wr14NZ2dndRihpaUlOnbsiB9//BG+vr6GVWGqTbYEKEwwWzR0gAgQASJABIhAoRB48+YNSpQoAba6dP7exEyFj1snlJB6iEte23TfBN80sYkyvN/3KcpIjmunQiiPwcdC1DlXxLPomQgQASJABIiAKRAg0SqHXmTeS2xSEBYWloOV4R4KDw/H3r178fnnn6NUqVJqL6w6depg9uzZuH79OmQyyRIbhtsUs60ZhQmabddTw4kAESACRMBACLDcoWw+uHDhwnyrkTLiJD6zEbym7Adh6/1ABD09BFdHC2G+VgI9D4ao8qakPMJMB9HDyhFTjr9EeEwIHu8eoV50hqu7BC/UIle+VZMKIgJEgAgQASJQ5ARItMqhC+bNm8dPHAICAnKwMo5DqampuHTpEqZMmYJq1aqpBawyZcpg+PDhOHLkCP73v/8ZR2PMqJZiaAKbLN+8edOMWk5NJQJEgAgQASJgGATkcjlatmyJYsWK4enTp/lSqQSPISgteE41dXsH8RZiiucc1BD2W3b4lV+BSua3Do2FfSX7/oZosQYyP/xkUnz1AAAgAElEQVTQSBCzivXAb7HiAXomAkSACBABImA6BEi0yqEvly5dyos7Pj4+OVgZ36F///0Xz58/x8qVK+Hk5KQWsFgYYefOnbF+/Xr4+fkZX8NMtMYUJmiiHUvNIgJEgAgQAaMhwNIvFC9eHM2aNcsHL3UlIg61F+ZfVuh5RpKGIvYMeloJQpTdKFwTFkCWJUUi6PVzPA9I1KxalfoCSz4RbG0G4lK80eCkihIBIkAEiAAR0JkAiVY5oFqzZg0/ocivu2o5XKpID/3555/YvXs3+vfvj5IlS6pFrLp162Lu3Lm8h88///xTpHU094tTmKC5jwBqPxEgAkSACBQ1gSVLlvBzpFWrVuW5KvEX+6Ok4D3ltCNILUQpw/ajvbCf49pif1h2GdYViLowFvaCrVWPgwg14gVo8gyUCiACRIAIEAGTJUCiVQ5du2HDBn5ycvfu3RysTOtQSkoKLl68CFdXV1SpUkUtYJUrVw4jR47EsWPHYEyJ6U2ldyhM0FR6ktpBBIgAESACxkogLS0Njo6OsLKyyvPCNvIPe9G5uOAl1XAhbsUoAGUc7i9tDgu1aNUMG/3FwEEpNSUSPNegU0nhfO4TzLkvuGRJzeg1ESACRIAIEAETIECiVQ6duGPHDl60uXr1ag5WpnuIhREyL7MVK1bwuRzElWpYToeuXbuCiXpv3741XQAG1jIKEzSwDqHqEAEiQASIgNkRePDgAb86c9u2baFQ5MG1SZmA+wsc1TcHOatKqFmlhOY9L1w1x+YAeQbGCsTeWYH2asGqOJxWeCIxO4esDGfTWyKQVwLyyLvY+/0iTB47CqNG6fAYtwTnQjKO47zWgs4nAkTAnAiQaJVDb+/fv5+fPJw/fz4HK/M5FBISgp07d+Kzzz7jl38WRaz69etj/vz5uH37NliyUtoKjgCFCRYcWyqZCBABIkAEiIAuBGbOnMnPD9lvcp42WQguLuqGSmrPKg6Vu4/HEPVKgR1wOEKqRikQeWUemoseWlwxNJ3tgcg8aGd5qj+dbMYE5Ag+2Bc2krFb5+uDuHjpEi5eOItTh7Zj1ZTeqFOMeQPWx4pXtLSlGQ8WA226Eqnvr2DjpAGYfDmrVSyUiH+4GXOnTcO0LB5fL9gF71SxaXJEP9iFuaMGY9h/xmL00CEYt/w4vOlugggoz88kWuWA8MSJE/ykhD3Tpk0gOTkZTMybOHEiKleurL4zaGdnh9GjR4Mxi4+njKDa1PL+jsIE886QSiACRIAIEAEikBcCbA5Uq1YtlCpVCoGBgXkpij9Xk
RgEr0cP8TwgHvKkWxhvL4T92Y/H7SSxeAViby1Cc14EYMdLov2KO2BRhbQRgaIgkPZyOepLRKtW24OQ8dZ18qsN6GLXkESrougguma2BORR97F32Sg0s1Z913Y4GpXZVhGMX7taqf/jis4a6udmP+ENH72tQKT7JNSt3AubnieCv80gj4DHjAYo13o5Hqu/wzNfgvboToBEqxxYMVGGDUzmcUVb9gSYkPL48WMsW7YMzZs3V3+42So73bt3B7sTyULbaMsfAhQmmD8cqRQiQASIABEgAh9L4Nq1a/x8p0ePHmDpFPTeUgLwx4Gt+PG7xVi+/zXEG/Zpr75DA0EIsO55HKKjVfq7nehdWhCzuFLouu4phQTqDZ1OyE8C6T6r4JiLaAUk4e6c/lhDnlb5iZ7KyisBhQxypRwBm1vw3+NZiVbpfm7o0cEVv1x9gbfvPyA4OFj1CLyNJY4l0Hz9G/CaVboPVje0RBXXu0iW1Ev+fitaciXQ53S0SsiSHKOX+hMg0SoHZiyXFROtWG4r2nQn8OHDB55Z3759YW1trRaxWPLShQsXgiW2z1MeCN2rYrKWFCZosl1LDSMCRIAIEAEjIfDVV1/xc5w9e/boX+PEGxhbQRChyvXHL16xSAi6ghUdSwnzpnIYcjZKuGv/Hru6S/Jd2bbFmOkzwcIU1Y+F2/FC+o9J/xrRGURALwI5iVYy/4P47mIkP34Tnh7DlQw5rZRpkfC+dR7HD+zDgZNX8CIiu/BBBZICH8H9xEEcOHoWN32ikJ5DLXMtV56CuJhoREerHjGxSSrhQZkE/1u/4filV4jT8l5UIi3SG7fOH8eBfQdw8soLZF3VNIQ9dcfR/Qfx2803SJAnIeDxWxKWc+grQzgUd7YXrDgOmUUrOUIu/4qbWcRey95tQEubFnB7KyySkXAFw8pysHY5jDDJ2FGJVhboeCCMRKt86GwSrXKAyMQVJlqxhOO0fRyBpKQknD17FuPHj4e9vb1awCpfvjzGjBmDU6dOISEh4eMKN+OzKEzQjDufmk4EiAARIAIGQYCtpsxWWi5btiz+/PNPPeuUgqeLJYnYJR4rbO5ZcfBBvBf+E6U+/wafZDjObLQfHZBVhIuelSJzIqAzgexFKzne7+yLTw+GZ/6zzvK4LemDutVaYvC0+fj6C0cU58eyAwZt99byVFEmvsD2kQ14UcG2wwysd5uFTmpvQzb+i6PGhJvg183UsVx55B1sGlVH89kpOwweoc/h1qeisM8aLofDwLQHWchFLOlTF9VaDsa0+V/jC8fiKhuHQdjuLVGI5eE471ofFlx5dJ4wF9O+aIZK5cvBpuFyvMxOi9OZMhkWJIG4C5+hRJaiVXZXleHdxpawbeEGUbOCLAA/O1uA4+wx9FCgIKqmw8/NGdYVhuB0uETJyq5Y2p8rARKtckDEVs5jE4I1a9bkYEWHdCXAhJZHjx7h22+/RdOmTdU/GCyM0MXFBVu2bEFAQICuxZm9HYUJmv0QIABEgAgQASJQxATOnDnDz2c+//xz/WuS+haHJrWArZYAVRXd5p6CvxgvCBnebWimnjNpC1VS4aoDiVb69wCdkQcCWYtWCsQ/3oBeZYqj7f6MolUKnn3biB/LNn2PIJT9l0/3xkpHcRy3gpsqSRCgCMfpoRWEce+ErYEsW5YMfj+ozuesnLHwsDtuvoqFAnqUy0rxW4fG4meutAvmDm+IGrXKqz9jzjuCIE95hm8bsXrZoO+RUF7ESvdeqQmHbOUm5DMCkm67ojIrz3ElvHk3sFT4be+J8rVm4GFKHgDTqQVOQG/RSvYOm1rZouWGdyoPPb6GSsTfWyiMKTu4rLkFn7NT0aL+AGx8mpBZuC3wVpnmBUi0yqFffXx8+C+wpUuX5mBFhz6WwPv377Ft2zb07t0bVlaaRHeNGjXCN998g/v371MYYS5wKUwwF0B0mAgQASJABIhAARMYOnQoP1/82IV70qPfwPPWNdx88AqhKdLVAgu44lQ8EcgDgYyilUXVhmhcpzwsBEEok2iV7os1DUWBqi6WeqUBihDscRb3lcEQD1X0hTxoB5xFYaniV7glJLNO8BiC0sL+uoufq3LB6VEua648YDOai2VzHGpNv4k4eRyeHViGmYt/xfMEJdJ916ChaFN3KVRV3aOpU5khUFU1XSOkcVZoNec8gplwJXuLbcNn4CbvBpYHyHRqgRLQV7SS+W9Gq9ItsfGd4Aarrp0cYRemo4m4UEb5QTiaISRWbUovPooAiVY5YGNeP+yO1rx583KwokP5QeCvv/7C77//jnHjxqFiRdFFl+Nff/nll/jtt9/AbGjTJkBhgto86B0RIAJEgAgQgcImEBkZCZb2gKVBiImJKezL0/WIQJEQyChaNXd7Bv8Xf+CXrzuiHMdl4WmVBr9dI9GofGlU7bQEd+KUkMfcxeL6omhVAp+ej+PbkvpiiSYk1kHjsZR080tUEMQk6z7noLLWvVxWuLZoVU8lnmUkmOaHXSMboXzpqui05A7ilHLE3F2sWS2xxKdQVVWJiKMufAij6AVp4zwDJ96mQJaShHTSoDOSNaj3+olWMvhvdkLpVpuQSbNiOmXoRczv3hKNy6vGs7XTPHhEZFxP06Cab1SVIdEqh+4KCwvjRatp06blYEWH8psAS9L+4MEDLF68GI0bN+b7gP0QMG+sXr16YevWrWBeWrSpCFCYII0EIkAEiAARIAJFS+DQoUP8fGX06NFFWxG6OhEoJAIZRatW24Og+oueiHszGqFrpvBATcUU8V44+k1/NG/aFZ0qiqKVNfoKohViL2FYOWF/+TG4LngsxbsPQClBtGq40hsZU0blWm5G0cq6D86plC9N5bReKRDvdRTf9G+Opl07oaLofWXdVxCtAGXCPcxThzgKdbZqidnu4QIPrQLpjQER0Eu0kgdgS+sycNrkLwkNVDVG9v4wRtZ1xNfXYpAe4YGFrVULkVk0WYT7CaRc5keXk2iVA0WWYJOJJWx1GNqKjkBgYCB+/vln9OzZEyz/lXgno0mTJliyZAkePnwI5nFkzhuFCZpz71PbiQARIAJEwBAIsFWT2RzF3d3dEKpDdSACBUoge9EKSPFcgUnHM+a0AqBMhu/haXAuw8Gq5WLcjArGvjZZiFaQ4f2hYajGi0S1MPdBIqCMw7VJVfnPmEWjObgeI5n761xuBk+rssPwRzbrQSmTfXF4mjPKcFZoufgmooL3oU0WohWDLA/3wDcdSqv/o/D/Vay7YEdAxjCyAu0SKlxPAvqIVvKAn9G6TGts8c/Qp7JA7OxaEiW6H1TlaWPDPP4hVrZVCVfN173OccVLPatstuYkWuXQ9X///Tf/5TNixIgcrOhQYRJITEzE6dOnMXbsWN4VXxSwmEs+ExdZQlS2YqG5bRQmaG49Tu0lAkSACBABQyMQHByM0qVLo3r16mDzFdqIgCkTyEm0QpIf7r6VrLDHg0iD388uwsIDjbHWNx1QhmUjWrETFIj13IKBVS1QvHIr9OjaFDXqtMXQJUfxUst7Rb9ytcIDyw7DlaxEqzQ//OxiqxKh
Gq+FqqpZi1bpYV54x+qTHgKP5T1QXhS2OA5OO4JMeQgYfdt0F63kCNjqjLKttyAgQ8SfIng3n+uswXevtDz/5B/2oVcpDhadjiDS6EkVfQNItMqhD/7991/+y+qjVoTJoVw6lD8EWBjhvXv3sGjRIjg6apaNtra2Rp8+fbB9+3Z8+PAhfy5mBKVQmKARdBJVkQgQASJABEyaAJt7sBtqkydPNul2UuOIAFtNr4FEoGm59X3O4XCJNzC2guBVVawT9oUo+ITlG5pJPK0ksXry8EuY26YmXLa/1RIDMpHXt1z/TZpE7NmIVok3xqpzZxXrtA+qqm5AM7G91n3VYYXxl7/EgE1vBG8aOULPfIXqgh2JVpl6y6B26CxayQOx1bksnH8OyDTGlWEH0dmSQ815T1QLA4gtVARjZ2sL2Ax0R7y4j54/mgCJVrmgK1GiBJ9HKRczOmwABJhow8LkunfvrhVG2KxZM7AVID09PU0+jNBQwgSVaRF4des8jh86iMMnz+PWyzCkSry4+eGSHoOA1z7w8X2HcH61JCVSI97B18cHrwNjcnalzXSuAQxAqgIRIAJEgAiYPQF2w7Nz5868cHXr1i2z50EATJdAyqOZcBBFHI6D4yqfnOducefQ11oUqDhYOrSEc7MmcCyj2Vdv2hbscg9CujIa54eUA8cVg9O3F/H8tR9e+3jj1SsfvHkfiSSpt4s+5QLQEttsv8ClLBSFuHN9Ya1umyUcWjqjWRNHlFHvq4dpW3bBPSgdTOCqZNsOqx/Fg5/qJl7DaDsOnIUT1vuxpQRpM1QCMae7oxjHwXlPCBQ5VFIeuA3OZZ2xNaObFTtHEYHfh5UHV2sW7ko8AJVR5zHSoS6m3hTGRQ7l06HcCZBolQsjOzs7dOrUKRcrOmxoBOLj48GWnmYJUVkfimGElStXxoQJE3Du3DkkJ2d0Wza0Vuhfn6IPE0zB633j0cxGMwER2Vs7DsfPTxNUP+hs0uC7Go78j38lTLzDQjqTcNe1sqqvGv2AnH7nM5+rPys6gwgQASJABIhAQRB49+4d2E3PunXrIjU1tSAuQWUSgSIjII+6jwM/fYORTa3U82t+rmfjjHFLN+DIk9isBQBlPO4tbQcbQfip2HYSdj6OwoeTI9WeSbZOs3A+VM4miVjbMPNcUpxTclwFOE86AD/28dKjXHn4NawdWF1S7xJwmugGjxDtPEXK+HtY2s5GsKuItpN24nHUB5wcKZ5rC6dZ58GqmvJoNhyr1IJDhZroNGIyJvSpBUsLBwza7oOUIuslunCOBFLf4syWZRjTTDWGLeoPw+KNp+Cb5V9DOQK3t0E5520IlIqlkgsoE55iy7BGqNFmHFbtPIQD21fgq759MO3QG23vK8k59FI/AiRa5cKrWrVqcHJyysWKDhsyAblcjtu3b2P+/PmoX7+++oeKTSg//fRT/PLLLwgJCTHkJuhVt6IME0y6PxO1xbtQpWqiZYfOaNe4MizEfRVG4Fyk6l5GZuGJRCu9OpqMiQARIAJEwGAJ/Pjjj/x8g809aCMCREAkoER6bAg+RKVoCVuy+BAEBMdKPLXS8e6XT2Enzh+zeW65XgzL07VcsR46PCvTERvyAVEpUh8cGeJDAhAcq/GgUiYGwT+WCW3R8L3jjnOX7sKPvafNRAjIEfXEA9e847TGbObGKZEW+RqP79zCveeBiKchkBlRHvaQaJULPHaXrFGjRrlY0WFjIvD27Vts2LAB3bp1Q7FixdQiVosWLbB8+XI8efIEzL3fmLeiCRNMwu3x9iqeDtNwI06MB0yH/+6ewhLFxdHtiLiajBJymQwymVz4EdBdtAIynmvMvWVCdZfFIpCFfPpk8/B9izA+FNSE2kxNIQJEgAhkQYDdMGM3PS0tLfl5RRYmtIsIEIHsCCiT4HtkLno42KPlYFfMnr8QCxfMx9xZ0zF57AA4CbmxSg++jKzyqGdXLO0nAkTAOAmQaJVLvzVt2hS1a9fOxYoOGyuBuLg4HD9+HKNGjUK5cix2XuWKXKVKFUyaNAkXLlxASorxOfcWTZhgIm6MraBiaNsDPz2M1iQrTPbBsU1u2LxtD868UsV2ywL2YFDT+qjf0AUrnzHGEtHKcTEOb/gSbaqy5WKtULHJ51h+MVh9By7zuWnw3fQZGtevj8afboBPmjAi019jc78mqF+/Mfq6eeecyNNYB7EB1Vvmv1GTpDTLu6J2GHWNVtQyoC6jqhABIlCABLy8vPgcm02aNIFMph1+VICXpaKJgNETSPdbh6ZsHtFuP8LEe6CSViXfn4KqnAWcN78DfbIkYOglETBRAiRa5dKxbdq0AcuDRJvpE/jnn3/AkqbOmzcP9erVUwtYJUuWRL9+/bBr1y78+eefRgOi8MMElYj1GIOKErGibP3OGDJ1JXZdfIFIjSc1zzDH8EChjFKVq0qSXtbC1GuxfE6szOem4umCWqo+qzkPT0SdMfUZFtZWCZE15jym3AIFPHq1RatisLKy0n6UrAPXWyx/GW1EgAgQAfMgwBaCYTfEvvvuO/NoMLWSCOQDAWXsVUxh8zfLhnDdexf+sWn8/E+REok3t/djbseKqPrpZrwS53v5cE0qgggQAcMlQKJVLn3TtWtXlClTJhcrOmyKBN68eQM3Nzd06dJFK4ywVatW/OTz2bNnBh9GWOhhgopY3F3ZLescBGVa4qtdL5Ao3DHLLDxJPK04Owz89S2fvDD9w0mMqaISnizb7uCTIGY+14hEK3kCPrx6jIePvRGcmEvAuz62BvAhlIpWbfaFqZPuG0DVqApEgAgQgSIhkJ6ezqeZYCK+t7d3kdSBLkoEjJGAMvE13LcvxdRRA+DSuR2c27RDxx79MHLaCuy87IcEaaopY2wg1ZkIEAGdCZBolQuqvn378q7duZjRYRMnEBsbi6NHj2LEiBEoW7as2guLJeqfPHky3N3dDXKFoKIJE1Qi2f8qdi0dj0+dHFBS4nnFcXYYcjqcz2GVWXiSiFbVp+OhegWPNHgtratiXrI/LsZntfKgEYhWyng83TYWzUtLV8OxQ+uJe+AlKnni50gfW/EcA3jWR7RSRN/FhrEdUKdccVXfWtigWouBWPJ7oCSMU4GoO+sx2rkaSnAcildsjqHrLuHSN13hWN8RHeffh8pvS4HouxswtkMdlCuu4mthUw0tBi7B74FirGgqnq9yQaP6jmg37Qh2fdUUZTkLlGniisvR/+hwPqBMeoUDM/ugcaWS6jpXbfop5hz2QVIW4QsG0CVUBSJABAyAwKNHj/jcVsx7X6Ggf9oG0CVUBSJABIgAETAiAiRa5dJZgwYN4v+c0CQjF1BmdJjlpbhx4wZmz56NOnXqqP68chxKlSqFAQMGYM+ePQgLCzMYIoUZJqiUJSIy6C0C4zQeRIqkQNzaORGNLAUxof1ehCiyEp4kolXjdfBTJylQImxfW4FzexyOzOpcqWg1F49Fd/HUJ5hfS3Xdog0PTIWPWydeeBHzpkmfbbpvgq+orUAfW4MZZnxFpKKV4ze/4cbNm7ipftzG05BUlfeVzB/bu5YQ+tQ
G1evWhr0gNnFcK7i9UXV+8tPv0EIYNxxniypVbPlzSlip+rT0EA8+AavMfzu6llDt42yqo25texQXxdJWblAVJxlfFoJQxmwar4WXrw7nK6NwboSQs61kLbTq2BFOtQXxiquIUeejcllVxrD6impDBIhA4RJgcwb2vc8WgqGNCBABIkAEiAAR0J0AiVa5sBo9ejQ/yUhOVrt95HIGHTY3Ar6+vmBLW3fs2JG/k8ompRYWFmjdujVWrVqFFy9eFHkYYaGECcaeR79SKuGg2tR7ggeMOBqicKyThUqkqLccL9OyEp4kokK1abiv/sjJ8ObHxqpzS3yK83FZnctEq9oqm+pf44F4btIdTKxU9KKVMuIkPrMRRBX7Qdh6PxBBTw/B1VFgwpVAz4MhvOihj61I11CepaKVVJQTXzf+8Q2fMFUZcwvfDeuGVg07Y9mDBF7IkgdsQSteaLJC73NxgDIKp/vZqPq0/GAc8GeqXjreHx+NKoIgpRKtlIi59R2GdWuFhp2X4UECc3mSI2BLK9W5Vr3BitNK9M9ZofWSC3jhdRuXHvyJCF3OT7yKkXasD63hcjBEtciAPBinZwzBiMmL8fMfIZQM1lAGItWDCBggAbaoC1vYh93g+u9//2uANaQqEQEiQASIABEwTAIkWuXSLxMnTuT/+MTExORiSYeJAMDGyaFDhzB06FCULl1a9aeZ41C9enVMnToVly9fxt9//13oqAolTFARjF+7WKnabNkUs8+/53NSMaEh9Mo3aCV40pTocwpRyqyEJ4loxVXEiNOhvDCgTPTE0kaC4NNyA97Jsjo3Ha/XNlRdu0QvHI9gwoUSiY8WoZ4gcBSlp1WCxxCUFurR1E2z0k2K5xzUEPZbdviV90DTx7bQB1IuF9QSrWwroWrVqpJHHXy6OzCDsCNHQtBTXDm+FcsndEYFgUWHI5FA0h1Mqqzqd4dZnpok+mneWNlAtV/0tFJXS56AoKdXcHzrckzoLHhFcR3AitMSrWwH4kKs+izNi5zOT3uJFY7COOQ42Dl2wZApK7Dj3DNEZFhkQFMgvSICRIAIaAhcv36d/53q1q1bkd/M0tSKXhEBIkAEiAARMGwCJFrl0j8zZszgJxghISG5WNJhIqBNgCVfvXbtGmbOnIlatYSV7TgONjY2+OKLL7Bv3z5ERvL/prVPLKB3hREmmPRwERoJwgPvXVOyHMoJ3lcqb5sGmHc/kW9hjjmt+DLKwLFzd7SuJnoj2WHIqbBs8mEBidfHqkUPq/ouGPJFF9QSwsjYtYtOtFIi4lB7laDGWaHnGd7tR9XLsWfQU6yj3ShcS9THtoAGSh6KlYpWOSdiT8XbYzPRvaYgcrL+trAWGHHoeCwKiHfHAGHsNHV7qxG7lOE40DaDaJX6FsdmdkdNkSXzdrQWBaaOYMVpiVZ1vsGzVElDdTpfibh7q9Cjoliu5LlCF3x7ncIDJUTpJREgAtkQmDBhAv9dx1Ykpo0ImAqB9NC7+HX5JPRv6wgHezuUK2+PylVroXHnIZiz6RxexsiQ9mYHJn4nuQkFOUJ+m4rW9rawbz0Nv4doUksUPRdDrlvR06EaEIHCJkCiVS7EFy5cyE8u3r17l4slHSYCORNgqwb98MMPaN++PR8+yIQUFkbYtm1brF27Fi9fvsy5gHw4WvBhgnJEXFuLIY01XmYqsYpDyfr9sfxSqFp8yEm0KtVnG3a6toStWgCrgc/XeyI+25UHAciCcdq1GUqpz7FH5wUHsaW9SlwoOtEKiL/YX52Q3mlHkDr3kTJsP9qr69sW+8OUetnmw5DI1yJ0Fa1SXyxDQ77d1nCasg0XHr9HYow7PhdCKDufjAGSbmOCENpZdfI9iBGfSH2Ob+qo+lTlaZWKF8sELztrJ0zZdgGP3ycixv1z2PDX6AxWnJZo5bgGvmrvKF3PF1ClR+DZue1Y5joQHRrYqYU2ru4SPJcKYflKlgojAkTAVAjEx8fzHqhsZerQ0FBTaRa1w1wJKKJxe+1ncOB/b4uj4ZhN+ONNvCqEXhYLv6s78HWn8uA4S1hxHLR+z9N9sEriwey42hfqn+ai5mnIdStqNnR9IlAEBEi0ygX68uXL+T8lhSEo5FIVOmxCBKKionDgwAEMHjwYtraq5NJM3KlZsyamT5+OK1eugHlq5fdWKGGCfKUVSAn3w7OHd3H3wVO8DktWCzW6t0mJtKi3eP7UGyHJuq62pERa5Bs8f/oKQQmGc8dO/mEvOouJxhsuxK0YBaCMw/2lzWGhFq2aYaO/DPrY6s6ycCx1E61YYv02KrHHygWnolV1S/NeLQhZHDqdiAGUkTj1aSmVXeme2PQiEUpFPJ649VSHWvKilTIM+9qoRCwrl1NQFZcG79WCkMV1AitOS7Rq9AP8xI+XjufL3p/E/KE90a5FFyy8kyAATYPX8vqqOtr9B9dVToSFA5uuQgSIgNESOH/+PP+90b9/f6NtA1WcCED2ASfGaiIJGsy9idisVtKVR8BjdmNYchyqSG9CKaNwdqh486cCRpyPVi3WYgvcuYwAACAASURBVAhoDbluhsCH6kAECpkAiVa5AGeeMUxM8PT0zMWSDhOBjyPAxCkmUjGxiolWomcSE7OYqMXELSZy5ddWGGGC+VVXkylHmYD7CxzVfctZVULNKuLqeWKYWXNsDpAD+tgaGCDdRCuWrmoSKgtinX03VyycMxpt+CTnKhYttwbyd2mTnyxFUwuRD8dPeNnnQ/SmU3laJeHOpMoCW3t0c12IOaPbwE4tBrbE1kAmYEpypklFK+h4fvJjLG4o1MW2JYZMmYM5kwehueAdVnG0O2KymqwbWB9RdYgAETAMAsOHD+e/t44dO2YYFaJaEAG9CKTi5drWKCb+1lYciytxOfwIyt5hW5fSqC4VrVj20ZRAXD+0C0duBgl5UPWqRIEaG3LdCrThVLhAQIn4h5sxd9o0TMvi8fWCXfDmPex1tSOweSFAolUu9DZv3sxPKm7dupWLJR0mAvlDgHn1sXBBFjbIwgfZn3T2zMIKmYjKwgzzuhV8mGBea2iC58tCcHFRN1QSJ3gch8rdx2OIgyjKdMBhPoE8C3XUw9aAUOkqWkEejouzWqnFJ46zQoNhq7C6l8rrsGSfYwjnnetkCPtjJQY2Ut2JLV65DcbvuISNrVXM7EZeBXNukodfxKxWglcW42vVAMNWrUYvW2ZXEn2OhecgWul6PotAPY9vXBw0k3S+L23RZNRWPONXLTSgzqCqEAEiYNAE2M2oChUqoGLFioiOFlxODbrGVDkioCEg//AruqtzR3KwH38rw6rRGlvxVcrjxeg1Swz3lyM5LoYf+2z8R0fHICFdJXrJEmMRw+9T7Y9PZRMCGRJjJfYx8eB3s8LlKYiLYbaqR0xskioVhTIJ/rd+w/FLrxAnddhXJCHwkTtOHDyAo2dvwidKdL0Wa5p93SBLRKz0WvGpfCRB1nUWy6NnoyTAFpjqKsm9Kpm/8w4GzX7CGxkAXe2MEoLhVJpEq1z6giXKZAPTw8MjF0s6TATyn0BERASfsJ0lbmcJ3PkvSY
iDXkzz//nOtn9erVWjpT98CC76f8aiE7aTV//vxc+5uMCSGJhkA+5CZPXi2tABUUcjnkcgUNYp8buEZ2XJEejZDQWGRk4wVYMVWZiHh1F+5XPfAoIA5ScXhHVTrCQ5KhgAyJgQ/gfvk6HgaT/zltCqRHhyA0liY5yAkduo9DQCWJRdAzDxxZOxHObbpgwm9eiBU8U7OBpEp7hR0jGqFm2wlYu+cYjritxvd9emPmsbfa1lfZzqN/9UeAklb6Y0VLmjECrVu3Ro0aNcxYQ6padgRq166NRo0aZd+t939VZjhe3veE5/1XiNQjZoYy9S2eepHyLxGhqzyJzfHuNZ49fYXgeAl9M8P3BolbRYI0kwXO1q1b9e6jolLQ398fAwcOZPEhZOzYsWNBXO9MZfv333/V8v/222+mIna+yqmMu4hvK4gsELSIqOJw2vhGFCtDhdhjnVBMq4z4XEesCyx80mrKlCl48OBBrp87d+5Q0ipfR5PhKstOWl28eDHX/iZjYuHChVp9npvEeSWt5KH7MbSpAxwaOsPVm5jZSPB6rTMaOTii/cwT2Pt9U5RlLFCmyVTcSFQgw/8I5vRujMoluOvIolQ1NO07H8cDMrSew8qkZ/Qg8SUAACAASURBVNg3bwCaV7dlrz8rOwd8M2krPAVWWRGJEyNbwMGhAZymXEOCmlRRIfnmDLRt4ACHlmNwhkRrVibi4W/j0aFuOViy17IFSlVvgcHLLyJMHBgnN3DocYoARaBAEFAkvILHtZt46B+NzJxI1I9aVUEaH4QXD7zw6HUYUnNmTD86i+7QDwFKWumHEy1l5giQBbGNjY2Za0nVEyPQs2dPWFtbf2YmNinebGjBT7r1WCAqInF8UFm+vANW+2efkWbh7am56CqO2cAUQ+0BG3A/uxmyWIki8pu8vSeE1YQJE4qIxp+n5rNnz9CtWzcWK+I6OGPGDJBEE6awkfhkDg4OKF68ODw9PU1B5HyUUYmIPe34+wODhvPcESVTIt1/Lwbb8WRUjVnQZHrPxMOpVfny1dBj3ERMnCj6THOFR2zhzJbFllaUtMrHIWKkVRUGaVV74mbs3rOHjUlJ4lLu2bkczmW560R3TKsMPJxahbtmLEQuZo034E3UZYzkCeMStVuhY8fWqMOTV0zF0biSwK1WSTybaV+JyWHR7+qjcDqSXHMKhP3RjiOUrbrhqDqVWCxO9eJiERbvsAfhCjlC3LrChieeS9WohzqVNHK12vIWOgw6jHQkULEoAhQBikDBIkBJq4LFl9ZuIggI6ddpTBUT6bB8EJNkiyRESERERB5rUyD2+mw+yCuZtOZGWikQfrgfSqutIrKTVlK83dUH5dTHRRNhhkGJ7rsRWoRnr4cPH2b76euvvwZx66Rb7gj8z//8j9r9hgRtX7x4Mf773//mfmIhlwgMDIStrS0qVaqEqKioQpbGkM1L8GpJHZ6EaoJf3wkXfBpuj+TdBa164bKQmEsRDrfW/H2izhK84rNuG1JiXW1R0koXMua5vzBIK8GlMKdvvUgrxgptll+Fj+99XH8Si+Tbo2BHnr/WzjjKx0dSRJ7HbJeRmLbsd9xkAy5l4PEse570aoRZF0IhUckQc281OpbkrsWyw/5EvApQxZ1BX3ZfMbTf/YF1T1NG7EcnS1KuFPqdjYNKlQSvNSPwTauG6LzyCdgwOIpQ7GjF1WXV6zKEy908Rw7ViiJAEaAI5A0BSlrlDS9a2kwRmD17NjsZiYyMNFMNqVrZEdi2bRvb5/fu3ct+SPd/yQe4r+qBSloE06dJK3nYPvQsxS8w2fO0SStl1DH0FN7q2nbFmhtvERfuibVfW/OL2DpY4m1Eq1Ld6OT7kadPn7LWcMR1Ny4uLt/rN/cKL1++jMaNG7PjiASwd3V1xd9//23Uap8/f56V18nJCSSza9HYVIg70xsl2PtDOQw7F8vF4cnygWsj/t7RYDX8BDjSb2NkOW5/8XY/49TBLVi7eh22Hb+HkAy1P1KhQEdJq0KBvdAaLQzSysaxI2tRSqxK2U/XNqjGP5P1Iq1sB+NqsgYyqd9q0UsoOzh2ccH01btw2TuOzzgHIPMhplblrrmKE+4hXX26FL4r6nPP6pIDcZVlmlJxa2wFbl/L3/BOLkfI9jbc/wpjcStbGjFFWjhe3TqNnasmobPgItzhBOLVbdAfFAGKAEWAIkBJKzoGKAIA1qxZw04o/vd//5fiUUQQuHbtGtvne/fu1UtjRdRJuGi57wlE1CdIK9l7uHXjXAI0b4XFpJUC4XvawYKfcLf47b3aJSDj5TYsmL8c67cexK3IomdhFB0djSpVqoBYCnl7e+vVR7TQxwioVCqQwNZ169Zlx3vFihXZuGDGTAgRyzByvUyePPljhcx1T5YfNncrzerNMGXh2KEzWtsX4//Xx4wbCeqA0rLgjWjE3zM09xX+flSxJ35+kqIVh8eQkFHSypBoF35bhUFafX08Xnt8Z73Ewlrc+NeLtKq7FFrvgVQpeLS2uyZrnejaqtDlJ9wl7oHJF+HMWkoxaL49VCu4d/KFbnx8uVZwC+dcCbOez0NNtp6GWPvaBxsac/LZz3kKIUeq5N0pzOlWC1bq9ixgLfzueAoJhd+9VAKKAEWAImA0CFDSymi6ggpSmAj88ccf7OLg7t27hSkGbduACAQFBbF9ThbI+myahaIdOi0+icNDhRhVukgrGd5u78xNQi0c4dJXiEEjJq1S4T64FL8wrY1FL7OgSP0Av1c+eJ8g1Z6Y6yOkmZSRSCQgyRHIgvz06dNmolXhqiGXy7F7925Uq1aNxdXe3h779u3D//3f/xWuYDm0TgLvOzs7s3ISGYvKlhW0D0MqCWS45rvB9EsQ89bJl/vxVlmkjC3qtm6P1rVF5LjdMJyJpjGtisq4KUw9TZK0clyPnPIUyOK8cdltJaYO7oAGQiw5hkG95a8hSb+D0fy+mvNfgIR25zY5QrY2457hVs44n8jvlgWqiap6301AfZaMcoSrEM9S4oOVDblr3Lr1dPxx9QU+pCfBfRA/H+h8FklCE/SbIkARoAhQBEBJKzoIKAIAuzAmC+Rz585RPIoIAsTSxMLCAoMHD9ZLY3n4KfwweQ3O+qdCiTTcGvFp0koauBntrcik1AItXJ/i4aLa3MSWEZFWilBsby4sTutj4tSuore9VnBw+Q1PU/RKWaKXDqZSaOTIkSxWy5YtMxWR811OfbNTsiTn86d4GRQDXUkpxcJlJgZj9vRpKFOGG7/169fHqVOnPjMhgbjm/P2dlJSEWrVqse6hz58/z9/KjbA22ds/4GzL3wtsm2HQ95PwbZca/D2DQeXhJxHB8lAqpDzfjaVTR2Pw4Ok46M9nN1PG4dqk6uryDdcGaFybDKhvflhaEQtLEpSffowfg+rVNWOOzKGMJXugLHAd7/JXGZMfZAAQBWJvtBHBauNlOT6cXYThPdqjRZcleJDGXyxSX6xy4K5Hu7F3ka6Kw+nePDFsNxgH33Eu+/Loy5hSkytn1WU/wtVcsQLhezvyWQG54xZOOxHKH1fFHEJblsiygvM
5numSvsE6nshiOp2hpJUB71u0KYoARcD4EaCklfH3EZXQAAjcvn2bnezv2rXLAK3RJowFAbIoJjF/8r7lQlpJ/LHRqTg7poq1WgdfSRZeLc6BtJL6YkU9fqEquAVY2aGSsHhlGNh02SaaYOddUlM7Y8OGDSxuAwcONDoixXBY5p6dUpXhj30TmsNWGDfku5ozll+P0XJd0ZZZXG99dJ2+HKVLcy5pzZo1w9WrV7WLF/I/4hZKsrqSmGYJCebsLJOM6yP4gOuMA5a95B2IVEm4NYWzjGOYapj+WHAsyrljpD4/oR4/Hiw6F457UX6QVlWrVoWjoyP9mAAG5NoUu6iaHmkFZL5Yhob8dWPb0gXT58/HtKHNUYrdVxFj3JNYq+dMb1e0Li48ryugQYtGqMa+mGLAWDphrbf29amKv4CB6me5DXqciFG7+CLjAaZU4euq9A2mLpmPMW3tNFi23IkwNQGW8/VO91IEKAIUgaKEACWtilJvU111IkAWR2TitW7dOp1l6AHzQ6BHjx7sopjE/cnb9inSKgs+a1tyMS6KtcYGP/JGVpIzaSV5jWWiFNpW7dbjBUkjJIvAufHCYrUcXK6JosbmTVCTKk1IE2L91qhRI6MPGF5wwOqRnVIRg7PD+UC//GJLs3BsghUC6aElZM71JiYmYsGCBex1QOogWRq9vLy0zizMP0L2yK5du0KhMNNVnOQlFtXmF7D2c/FM43uEtFsjUJbv4+bbQj5BSAKK0B1oIYwHp32INKCRZnJyMshLn+LFObKejKUpU6bgwYMHuX7u3LmjWawzDBuDrTDHHG1bfwRM0j1Qy9KK6CpH5JWlcLbXjF32fmrbBKN3enOZ/VhIlEh6sh3jnSprjVfbJqOw7VmyhpBSw5eGe99X4sqWGwH3ZPE8Q4HYa3PRis88SNqzajACa9f15F5ElOiNU7EGvIDVMuv3Qxbujt9Xz8OksaNBsm9rfcaMxfgJkzBtzhKs2XIQV71jIeSQ0KpdEYULM9qgkm0ltJl5EXziRq0ihv6jiLqAGW0qwbZSG8y8GPXJ+21ByKaIu4vNP3yHMVqYjsF3M5bh5wMPEK9+BCqR/PwA1syZiLFC2XE/wM2XEKcKRF2YgTaVbFGpzUxcJMAqouGxaRbGjxH6ahI2PNWkEygIXWidFIH8RoCSVvmNKK3PJBH48OEDO7GYN2+eScpPhf48BGbMmMH2e96zRuomraR+rmjOLxwru/yBa7du4/Ztd+wawU9emWoYv/867jwKRuo/b7GJD9DKMDbo86cmioXUfzUc+HqqTnukDt76eZoa/1mBgYGs1U/58uXx119/Gb/ABSGhntkpJd4/4it+bBRr/iNuvI+C76ERqMzvs3Y+hmjxekePekng+6lTp6pJB0Lovnz5siC0zHOd06dPZ6/T+fPn5/lckzhBTFqVHY4b6uxiKsSf6q4O1Oy0NxJKVSJuzusJpyb1UKPOIJyK0XR0uuf36symdqNvizKcFQwKJE4ayVA5dOhQWFlZcYtzfgySRXjnzp2xf//+XD9CTEmWKKCkVcF0VgHVmp20+vnnn3PtbzIm+vfvrzVeCki8PFYrQ3KYP148fQbvoGhkaC6tbPWoIIl/D9+XrxAQkfZFxIZKGo9g7+fwfhsPqZjTytaicf7Ngu86Ljstd+2Wx7A9d/DA8yb+PLAOE9oIiSUYVO2zEU9TtBWUBawVZW3UFRvUkJrLELDWUTMuHdflGPusoCVSJlzHlBqCRR/5bgDXNznSfoD0Hdw6W4GxaoNld+K4sSgLwFpHzfmO6wI5V3GpH9Y00Oz/KKFBQStG66cIfCEClLT6QgDp6eaBAEkDTx6648aNMw+FqBZ6IbB161a23z09PfUqrymkm7RKudRTvcgUFmE5fleejAcZSbjQQ1js2WH0Hc2bL1X0QT7mBYNSg69DvY7VCGE2v4iVxldffcUSJkU1GYL+2Sml8FvJp1hnSqL/xUQuYD8J/CvEQ7Fyxpl4boGgf73ccAoJCcGoUaNYizcybocMGQJCKBbmRsgRYgFG5CHxt8xvS4a7ixAjrxgaz7mE0CwFMt+dxVS1+3A9LH/NWW2+XvaVemFV7dt98E2VI+vDVSxoLliKVMGke0JwnvxH68WLF/jhhx8guAIS60hiCXfo0CHY2YlcnEQEVo73QB3HSbZLupkGAtlJq7z0s7isaWhLpcyOQPrtUbBTX8fVMfOJyEUy8zmW1teQJHYuFxAnIgJVCZcwXAh4X2EkriRqk1rZ2yr4/yokXBqu1qfCyCsoHJEUiDjYTZNJkrFCt0OROVjyEUQScaFfNThtDtbEMFQl4NJw4T5cASOv8HMEJOFcFwv1s4OSVgU/omgL+YsAJa3yF09amwkjQN4U9+3b14Q1oKLnFQHijkYmznnPUJZfpJUMwRs1byqb/KKZeIjj09T4QZMmO686Gnt54vLVvXt3th9+//13Yxe3wOTTPztlAk53FcgJR6wNECIKp+HWt0JcpBr44Sm3eNC/Xm3V/Pz8MGDAALZfihUrhvHjxyMsLEy7kAH/xcTEgAToLlmyJIhs5rZJg7aii41mgSde0JPfFb89ByEhoDLuEsZV013WfsIlxKrdSPIHKWKNSqxoSKwpQTYSKH39+vUIDw9XNyIQWUKZz/mmpJUaTqP/QUkro++iAhUw3fM7VNBFWkEKv1UO6vsFY/E1DkSJWCuokBV2F8f2noBnOBfYvkCF1adyVRbC7h7D3hOeKFSRUm/ju4qie3zj9VA/6kV6KGNOoH/j73EzmxWbKisMd4/txQnPcGiQTcb5rpS0EsEHQAXJh1vYOmUgpt3QFYZDhTT/k/hpTF/0dO6CDs6jsOJ0ANKzc6yqLLy7uBqju7WEo2NTdBiyAId90opsFnBtnPPnHyWt8gdHWosZIECCvzo5OZmBJlQFfREICgpiJ1RLlizR9xS+nG7SSpn+AW98feGr9XmOMxOEGFU1Mf3iS/gFRbPZ3mTBm9FKmPTZtMPSK0GIC3+E3/pp3pSNu22+dlbEYoMsbEn8m6K86Z2dUhaI9YJFFeOEA9HCzCkLz+bY8wsEK/S+nMLCqXe9OsB/+vQpvvnmG7ZeQuzPnDkTsbGxOkoX7O6HDx/C0tKStcpLTTW3a0KFNJ99mNI2e6yyCmg/4xiCMoV+5jCWhJ7Dgm7amduYYvZwXnoJ4To8SfLaOxkZGTh69ChLKhNrKnKdEksqMgaePXuWY3WUtMoRFrPdSUkrs+1avRT7NGmlROReJ/6ZRAiY+ljhy9+cFJlISUoEianIfpLSIGNvcQpkpSSJ9icjQ07CNCUj8O45HDt+AZ7v0jVWR4pkBNw5h6NHz7P7te6S8gwki9pISsli3efkCb7wOH0EJ669Qpzwzoc0kZmCJEGexEQkpclyIBwUSH17H5dPHsaR0+54EZmlKSNP124vVcLKKU9PFtWbhFSJmLjTBbMUPj+JCD+mOqY/JFkwxZsMQZs6oN3aN6KYYQpkivFLTEIaBywAPUgrRSre3r+Mk4eP4LT7C0TqSEksTwrA3QvHcfTMTfgliEAUi6
eSIv6NF66cPoJDR87ilk+cSE5xwcL5rUh4jAMrR6OZNUcOdjiZU7IXJZI8F6J51Q5Y5ZUABRSIvbkITa2s0Wr5I2i4QimCd/ZF7XqdMHTsGAxszT/HS3bHrhAygOmWHwhQ0io/UKR1mBwCHh4e2LZtm9aHkFYknk72/br+k4yDdDNtBKRSKesGRVyg8rbpJq1yrkdHIHZSWJWGJ8uacYHbBfJK9F261x6Y6zOPWLiRhXCnTp1AXMDoJiDwifEleYXFQtBupiNOqedZEnj/WFe9QOhwIl6oTPT9iXpFpXL6SYJlt2nThq2fWDsRope4dRp627FjBysDsYrNewIFQ0v7Oe0pkRnljyf37uDeY19E6g6sw74llsYH48XD+3j0KhhxEq0l2+c0DqVSCdLXxFW+VKlSLNaEKBw0aBBIZjiZTMcChW+NWMG9fv36iz6FMa4+Cyx6EtLT0/Xu63v37rFjimQcJC6mwjjx8fGhSJooAp8mrcTzHgaMlTNOxXH3KEWcF3ZMaCia93TCGRLSUxEPry3DYa+eA5XFkH17MaWxEEaBEAx26LcrGEm+uzCqfjH1M49kWP32ZIQ6xpg86gbW9quoOV7texz4bQTqqutmYNN+LV6yXJACcV47MKGhqL5OZ6CJMgooU55i8+CasCrthMlLvkcLNnOkHbqueYhkJSALPYOFHW017bExsWT4cOFHdC2jsZpyXBugceX7RL8rPuxGR0vNeaUHXQDv9c+dlf4As1oOwHFxAEtFHLx2TEDDYprzOrHAklM+RVopkfJ0MwbXtEJpp8lY8n0LLsyFXVeseShKMqBMwZNNA2BfvAw6/PAzVo+oDQtLB4w/+l5ESMkRdW05eterjpbDZmLRrCFw5PWwH+qGNyIP0k+oX/CHlHIoVAqEbm/B9lmOpFWaF6ZVt0C9n3xE+mXh5ZI6YBh7zHrAhfRQfDiKqXNOIVR4YaRMwPUpXGbVBmv8RecWvFrm3AIlrcy5d6luOhEgMVs+x21BfM6ECRN01s8dkCExNAgBAQHsJ/BdLGtZozlJhqQw8fFoZHuZrima0y9ZIkKDAhAQ+D5fFis5NUGykKR+IDIG4m1UJv9GSYYkVq9AvI8VvWXKuQKj31uzZk00adIkj3LmdfEvnrw5YLW/8GTjm1Um4P4vw+DAv/HhxlkZtPx+H3w/skHOo6hGWpxYzRDLnVq1aiEhQc28GKm0hhbrE+NL8hpL6woT0o44lSjIJoH3Ug1pReJVfLx9ot6PC3+0599//8Wff/6Jxo05l9ayZcuyGVeJRY4ht7Fjx7L375UrVxqyWbNui8Qt+/HHH0EIBeE5R0jKnTt3IilJvHQzaxiocgWMwOrVq9nxdeDAgQJuiVZvCAQ+RVopos5gRAXhWcXAYd5DUSZGQBm1H05qAoknrYjQUj+sEsXCYqr2xoY7IQi7OVdDOBWvgQbNv8V2r3d4c2KExkWx3k/wEU2vMh/PQDV1G1aoM2IH7r2+ieVNBLls0OtcAj+3VSJqv8gyTExayUOwpxchpCzQbtcHMjOG+xCBoKqDRS+5lK+p1wejlNCeOpB7Gm4M0wSl15e0gioRV1wEl38GTLG2+F39BlOJ2NND0GqG18cJN5RR2O8k6MdAH9JKHrIHvWwZMBbtsOuDAkh1xxDyn+hSZxE49SQI2NyRjbVVvONehBMX9OSrGMYSck2wyodzRMzy/gmNyHml+uAES6jJ8MZV41beastbGNMrSiEObU6kVfLlQSjFlMIgd23LbsnrZWwynBJ9zrFEokqSjJRsmRSyns9HTYZB441BepGUhrheTb0NSlqZeg9S+T8LAYOQVlJ/rHbQPDgYph32RojMgtNuYaQQhJLc4MuOwK08xM7VZJerhumPC+jVhTIK+/iHX8mB7lwwcFkg1vGZSSpPfgDDLlc/q7s/eZKzszNsbGyMw2pDmoC3rx7hwRMffEjL56A0n0TBsAcjIiJQqVIlNj7R//7v/xq2cZNo7RPkkixIE3CdaYtDMYJ1jbZ7YK9LnHugtrqfqFe74Cf/EWucY8eOoU4d8raRYfuSWKQSy0VDbBKJBC1atGCtJElcOrp9HgLELYdYrrVq1UpNVNnb22PZsmUgrtOfvwkvNgIQFC5y5WErVCEr9h0CAwL+n73vAIvq2N8+qKCChShWjBWxoihW1Cj2JBp7jZ0YjRo1sZfYNdEYE2Ni16jYC5YgehULYAUuKgLKByLwl3ppl3LZvbt73++ZOWUPsMCCgLsw53n22d1zpvzmPXPOzLzzK8i9kVO0GhUJoQgk5b3OuTFUtPKKlktsc9nYzCkaBgXnSklJoSamZLOiIK29gktjKT40AtlJq6roPm8Ttm76Ad87D0P7asL8t0pLfL72GiJzMhXxLugpEjycjLRSvsKPclLpZAxPKqXexARpztwQc0RzubgT2nJM++OCbOjL9FmCJmIdluPhRq9l4OEC0ZSeQ4vV/pImTLxLT+ldyEmklQYJV8cKDtqbYTkNiKHCmz+6oyItuwbGCJP31NuT8ZFYn0RapcJjitbsW2/SCgAhPj4Wy+M4NF36lPdRpQjENkdHbAvUpfkaD5ee2rVHgaSVJgFXxwruKJotB9+8N/iju+A7U1ibqN+5YHBVvtzWW4J44kkRiM2t+XMfTbqJFCjwcmMrCcPmqwi22cnA6qPcUIilTol38aTLQ1GZ45CbtMqE3woS9MQCI7QhfXl5klwxlPigrDUFHtr4SdlkTbs7A/UtB+EQZfeyXWJ/iogAI62KCBzLZtwI5CStCGmhz0fcgSbfBWpa5SKtKmPQWXFHB8h4OF+mAs1Iqw/Vo2bPnk0H2MjIyA8lQrmqNyMjgxIO5Bk6e/ZsuWq7/o3Nj1yKw8lPREfsrbBJmrSm4PoYMQJdjihOUsX5lSsl0vsHMencs2cPiGk1uZ+E8CDh7Ilz/ZI+3rx5Q825ibbX69evS7q6MlM+IQrOnTuHzz//nPoHI/eNmAFOmTIFJHJnsZhcyjY2tAs/EULZAq72NNx5710PBV5uEHbxaURWsZ5S/pa1uSxs5pQkeps2baLvC/LuYIdxI5CdtLJAnyWb8FU7cXwiZEZ9zLyeoPX7JG+uXqSVlhBC2l1MtxLJGJnGeuJ5OElmdN1wVDBBJFVlI60azAEf3DC7VrL1t4/A60kBukmrJJlWVVv89Epg3zLDcGXHSqz69SaihFPFTVpB+Qo/dxLbTEiSSdTpeuq9ueg88nS2aIxaaAtJWiXJtKra/gRt865gx8pV+PVmFCWo4s8NkiIadtz+GKFhYQh77Yn1bQT5BC23rKC9GN/6I1Sr74gV95KgUSXg/nKtf67KQ1wh4xW1Yn+gX/mRVqL2eqffiXad7BAJVJM+OKPLS4LyLY5PGYLv3GK1/tdk2dnPoiHASKui4cZyGTkCctKqY8eOuHfvnl4fYkZGJvnkU3jSioPlRLITQQ4FAjdpdyNomcaiaUU8qaiU1AeRSqY4ZqxdYseOHfR+3rlzx1ibYDRyE/Oy0aNHU7yZaVd+ty0/con4ruI1nDjOHJ9fFMJZK4OwVZw8m
vaTfIdkryW/crOnLMw/ovm0bds2SiKRdxmJKnfq1CmQ+12Sh7u7O0hkQ2KumJ5eQtqmJdmAUizb29sbhKAnjtTJPSKO1YmWKdGYK3bsZAQOI61K8SYbSVXEnLh27dpo0KAB/vOf/xiJ1ExMXQhkJ634zRJF8K7skVDrTcHlWB2TxWIjrS7ISCu59nHepJWvzJTeev7D/EkrxUuZdnNTLKOqSLrQAIqdtIIaMac/1Zoccmbod+gpjo9wwAKvvBj/wpFWipeb0ErU5mq6jNe0ytU8YuLXko4dZPyo3GYwRo0ZgzGyz/g5e/FSrmytTob/iWX4zK4d+jhqfYuZDTYW0kqD+PNDUZXjULHnnyBWk9KReB79iU8zs8FwlTNwqhSE3D2IRU51qIaW3bT9eJ4masNLudmPIiLASKsiAseyGTcCpU1a1eneDlXIoGD9LR6RLR11BPZ346MxNXdqzA8E2UgrNeLu78TUHo1RjeQztURLp9n483GSxNrLzQNnnjqJlZ+1gmVFDlwVa3SduhtPk7UvSnX8fez4sgea1qzE12VijgYdhmPF+TBJLZr4r4q5vQVjO9WjuymmVh0wZrMLttjzJJ1kHqgMxf4R7WBj0wpO63yEwV6N+Ps78GWPpqgp7HiZmDdAh+ErcD5MPooZXr9xdXWlmBANEXaULALr1q2jWBPH9yVNaJRsS0q69PzJJbmWZsVOq3E7Ig6BJyajgTDxNP3kMLJFFpfEzb9cKVkRfxDTH0JGWljwvj6ICd/Vq1eLWJp+2TZu3Ej7FJk8syM7AkQbbf369WjRogX/3uc4tGrVClu3bkWJapaWKmlFd1HoJoryA++ilKXNnOw9qfj/bd++nfZJYlbMDuNFQBdpBSgRtn8QLEQihEQdHXUSMIz+QgAAIABJREFUkfJFP2mysZBW2awmTNHPJVa35hhKgrQihd6Fcz25tlVrNO+9U9KIyt17CkdaadcSZK3RDy7ZvL2LpWfh2RrtONJgjjfy3ibSIP3lMcxxqA7OtCOWe8Qh4lAXaQwyHtKKYO+Fhc0J9tXRf/tT6pNNkxmOG1sG8eaiTZfCh3flRYFSJfji8tFfsGJSN1gJ/d9qoit0cbYisuxbfwQYaaU/VixlGUKgtEmrJjOXoTslcwRznvjzGFqFvAht8M2KTvzLXCKtNEi+951258O8HuqLkUcqOWC9L6/InG2gIS9H8wZoVFurlt1wjifvb0oZgt/7VBYGDHM0bN4EVpIqtT22B/N6zWmPVqCNNMkwR916WseRZGdFIq1kCxLRDEIZ8jv6EPtuKkdDNG9ihUpiWfbbIVRhkD2IOCAmchMnxOwoOQSIA2+i3dG+fXuUtuPukmtVSZVcALmkisDxYTIHreKzRr9tsNgrDycLKKDcYmoOcaz/7bffwszMjD5bPXr0wN27d4up9OzFEPKTkKDkGf7xxx+zXyyH/0g0t4MHD6JXr178+5jjqFbL/Pnz8fTp09JBRDZG6KVplemL9U6tYWPbDXOO78W0djXAmVRH25nHsH+8HdXcI9p72T+t4bTeF5lQInT/CLSzsUErp3XwEe18kIHX51ZihH1DQUuhCurZDcOysyG8TxiKRMGbQ8rXf2B4OxvYdPwS556ew/JPW+MjMn5WboAuX+7CIzHmuc7NnNKB2xhrIdqZxKy4Tp06xa/pZ4yAGKnMukkrEgUwEi4jBT9JdFyywKD9YdkdcBsLaaWJwYm+woYvmQsPPoloHYpj5BameUzROoUvBp9WfLdQIGADH/yEzrG5GhhxQetqJHfXKRxppYk5gb7SmqAqBp+MljbHtWVr8O5IT5iIcw3bdXihy50W8aMf9CucBCfubTa9hAIavDNW0orYxbw5h4V96vNtr9EMDoNnYtks3iS9zsy7efj11SDl8UZ0JdpYFbpjf0QeHUYLMPulBwKMtNIDJJak7CFQ2qRV0+/OYlN7QuqYoNu+t0i+LQxs9Zxx/o9u/OJCJK1kNuxV+uzEcxJSMCsEh4VFauX+x/FODchJK/P+exBE2H5VOA70Ewgqwb5ck3AHP4z5BPatemG1dwrdIVKF7oI9HXxMMfBSEqCJxZkhVXk5LIfjwCtSmALhJyegjjBI5U1aaZBw5weM+cQerXqthncK0fBSIXSX4NzXdCBIFYZ6EPMEQqaMGDHCUEU0ermePXtG/eYQkxCi/cGOghAomFzSpPrjjy/bycwGiL+Lnlh4/m0+kWoKLrcgyQpznWjzzJw5ExUr8mT6wIED4ePjU5gi9Er773//G7a2trQe4pepvB3Eh5ibmxvIuFalShX6HieE4ciRI0Ec1f/3v/8tXUgKS1ql3YdzXX7Tw0RaPHFos+EmfpX7cxEXTMJ3/dmeSCeOf3P5tFIidN9g1BDTm9ZAdancuph0ifgZKezmkCmsyEZT1fqwrsVrSZMFZJ3pt3mTf1mbxc2c0gXd+Gr79ddfaV8lpsXsME4EspnDcdmDAqljLmGy8FxTsqVyL/wSJNO8NxbSCiq8PdAHpuL7hGuMOTcSJGJHHf8Ap29EUEJOrgXNtVgFf9Jc9TscH8i/lwkOhXHELvYKdeRh9BOjSzdezFtsiBdzfReOtILqLQ70MeXn/6SNjefgRoJIsqgR/+A0bkQooXy1HR0lDGpi+LG3EgmpSXmE7UtcEK5Kxe0vRafzFeF4KBJqKPFqR3upfLPBl4zEp1V2YNVZachQaug6a68jITFtsDIfU1EgHd7ziMP/j7FYiC6ZvUT2r7AIMNKqsIix9GUCgVInrZZ54f6ij+lLu/Lg43D/jvdJYz7sEgKPZCetNNFH4ViBn8Dbb7sDP39/+Pv7w+v3vrwTxGojQQJZaEmrqvhMMqomKryCw8O6zrifTX9XhZTwp3A/+RvWzOgl7Qb1OB4LpN3DLGFyIbfvR6YvljXlZcmbtJJ1CVUKwp+64+RvazCjlzhw9QCpwpAP4kC6Xbt2hiyi0cpGIpQ1btyYOn0uKW0bowWnGARXJryGj+c9ePuHI1WcZxZDucVZBHGUPm7cOEoOk0k7IVPeLzpdbulIedWqVaNaRREREbkTlMEzZFxYvHix5AifYNutWzf88ccfSEzU5R22lECQETh6aVrJSCvOtDNWXPaD/91r8I7OQvKbQLwMDERQUBACbm+Fk6h1bDkMB0PJVr8O0irpOiZ8xI9btceeALFQ16Q/w0/d+YWZhdMBhGdqHRzrvTnk9CsCiCaX4jV+7SFoXjRZwpuHyNrMSCv9+hmJNkrG3lq1aoEQz+wwPgQSLmidcxMTqlFu8rhwGiS4f4VGEtHBoULHH/BEcMWkjtwHB+maAw5ECS4tsvyxugX//HKcGQZdFN5lqTcwrqZ43hoLqK8NQPPuELpI5dhhZ4jWDjHdczbqi9csJ+EWVUJOh9fX9SUSpdYUD0FbRo3IfQ7Sec7hAESRkPYYK6WIhkSGRujvvBLrVs6CU/sB2P6CtxFTvzuGvkS7htRZ0R4rL93GyaV9UFcizTl8vOix5ENL/zuehOuTyZy6IrrvDpXIIp35ZVG/iRwOB6J4c0ZNNI50FfHj0OmP
txLxlvZ4JdqKOJHvRv3hvHIdVs5yQvsB20Gbp4nHtala3DjOCn1nb8CuX9ZiSu++WPWA3NgkXBrMa1hTDCpYo6NDe7S1ra7FtcUc7Np7FeF5aGrpbFMJnszbEbuuSjVIuTuXBtGqP90NCVovLLoSI/FCf5hyHfBLqLZP6kzITuqFACOt9IKJJSprCJQ+aeWD2OtjUZMMBtUd8UUzMnCYoPuBt4g6mp20UgRsgK188Mj1uzP2RqhlpFVdOEvslMzBu9V03KWTg0y8cpmPvh/LdlJMtINKT5c4IPkqhgmhbNttf60dENWR2O/AD3L5klaZr+Ayvy8+Fgdr4uhX3BXieoJUYchHv379qIYC87NUvHeJaHj07t2bTlbIQpod5RsBQrJ8+umntD8QB+okYl1xat6dP3+elt25c2eQBXFZPGJiYkCCR9jZ2UmLAEIKE19iBhNFURYGXRdpJe3Ei2OUjLSyGH4Zuug2dcJtLBJCq3OVOmHNQ15rWBdple49Fw3puGmNeQ9Ee0ENMmNeIyw+iy7gCr85VAVDZZtDfiub8/iLEQsZaVWkx+3PP/+kOG7YsKFI+VmmD4OAMsINe9Z/g0ENtSQIISlMmg3Dwk374REtLNI1ybi3UOsLiaSp5jAdOy6dwcZRTaV3GJkPtxi3BW5v3sJ9/VDUlc17zTo64/DzQJxa0JU6xaZkCPGT1W8pzj7zxs+jGsnK4dBkzM/wTlRDGX4By/rKzeiroc/qa3jmtgq9q8nkrt4b350JRuj1zRglbNLSOkxaYNwWN0QKkQGVkZewqLu8PA5mLUdim6dW64po1/j/5MT7O6JtqAy72YdwdLy4iUs0ogdi7TU+Il9h7l6m73LYNhiHq/kxJcooXN88Ck1l+Jm0GIctV31wY+sYNJed55qNxqa/xajZSkReWoTuEilI8DFDy5Hb4ClpXQGatGf4fVwLrfsPUp5lVyy4FClE19Mg2XMVupkL+Nbuill/Pkbc29MYL/YVi05Y4BqVPRpfYYAo5rSFIa3Usdfg3JhDpY4r4SXzG6xbJCVe77CDhf2PCDIQgk63nMZzlpFWxnOvmKTFiEDpk1a+yCB24zJSh+NssSEgCzE5SCtV2G+CCq4JOq44BrIQy/a5dAdhGRoZaSVXyc5NWmX6rRb8Y5mh0+zduPz4DVITrmKYMKj0Op1ADPExtTY/yDT+7onW54ciABts+fN5k1aZ8FstREI064TZuy/j8ZtUJFwdJpgu9QKpwpCPr776ik56oqKiDFlMo5ONRCsjk7+vv/7a6GRnApccAl5eXhKZaWpqim+++QaEjCmOY9myZbTPTZs2rTiKM4gyiAkzicY4ZMgQydSyevXqmDFjBvUVZnBkuyoUv9gJi5buf0EWgR5ACm6MExZ+9WfDi2gDy0irpst8tOOPiH5mAH5x4p37c5w1ppyTL3hya1olS2NPa2zJY7VQ+M2hOpjJ7wLx2l1i9F+ReGOklXi3CvWtVCqpJm7NmjWRnJxcqLwsMUOg9BFQIC7gHv52vYY7fhEg3jtyHxqkhT6A2xV3PHqTnqfT9tz5CjijSsAL32jtpnIByYt0WRGHgHt/w/XaHfhF5CW7Bulvn+LW1ctw8wxATFZuEDSKRES+jUOGXPtbmYzI0AgkGhiBk3C2LyoSjbT9xJQx70MVewuru9XAR73X4k68PKUGqf4HsXzRRrj4JkplZAbvxwibnlgnqhbmXTS7oicCjLTSEyiWrGwh8CFIq0zla+ygfq2EyXz9r+GVrslFWiHjIeZb82msJl9BHH03ZsBv2wj0+3QC5m6+Snd+tOaB+ZFWMgeIpk44E8/fx6wXGyRH746nEqjN/V99BU2sj+fBgzqX1SDp7iJpZyZP0krzDoe68PKaOp0BX0UWXmwQiCzOEaQKQz7ESEbMfK347tKePXsoedCnT5/S96lTfM1gJZUgAu7u7ujUiQ9EUbVqVRDCKSnp/RzgqdVqDBgwgPY9Y9buI0TUvXv3qE8wQlAR8pdopw0aNAgnT54EcWRtuEccXHpXoDJzNmvxPJvSWxxOfSIEDGmxmvf5IiOtbDcSx72yQ/UOF6YSvyBkjDFH359f5DCtyU1apd2Zhto0vTXmPxQ1rYCsN7dwyf0RXsVmQlnMm0NgpJXsphXu56FDh+j9XbVqVeEystQMAYYAQ6CoCGS+woVdqzG5Pb/2MbEZg+U/n8FLmVsVTWY0Ah+64cj66XDq3BtTd9xBtKB5p61WhfBDg/hI71wlWDsMxKeDndB36Fwc8BM1grWp2a+iI8BIq6Jjx3IaMQIfhLRCBh4Lfq3IBNziiytIgg7SCkqE/N4XVQQ13vo9RmDCF50ldePmC71AzPL1I62Iu6pZkqq11SfOWLJwIrpYCsQZx6Hjb2FQQYOkm87UTpsuDqw64JM+7YWJP582T9IKabg3q66wqLDCJ85LsHBiF0lejuuI38IM25770qVLVP4DBw4Yca82HNHv3LlDfVg1adIECQkGzlgaDmzlUhJCzhBN0tatW9NnsEaNGti4ceN7RRT717/+RbU3iBbXw4cPjQrXkJAQrFmzBuTZ4YkajvrbI8R6dHS0kbQlA48XiiY7NTDwJ2/E0yFAgSj3pehQkR9Tqgw5z29yyEir1luCtKSVJgUP13YWTFEqosP37ohISkFKivBJI3GpcpNWmugT6C+Ypzd0dge1blHH4NxoIZqZ9Xw8jC/OzSGifPVS0kpmPq0K101JIIEWLVrAwsKCjReFg46lZggwBEoQAVXcU7hduY77z6OQLleuylWnGqnhfvD0uAUPTx+8js+29ZIrNTtRNAQYaVU03FguI0fgw5BWQIr7eIHMqYCeh6JoBKOc5oEUWnUCPLeNgE1lLbnEcZboMscFIcKutb6kFVTRuLLAXuYLwBQtx6zHhgG8uUWVQS5C+N50BOybBFvJhLESWozbjj8n1aGLp7xJK0AVfQUL7IXog4RsM22JMes3YAANe1sFg1wMe7EVEBBA20g0PdjxfggQH0XEsS5ZgDx//vz9CmO5yw0CREPq6NGjlGwiZE2dOnXwyy+/FNk3lZ+fH/VT16BBg2IzPSypm0G0y4hvn+7du0tEVd26dbFo0SL885//LKlqS7Rc5evf0U/wk8iTb2aoKo0tZFxrhNm3ybZNdvNAOWmljjqIrsLmjUjgZftuuQ4vsnKTVkAqvBYJAUmID53mndCpeTUBW1N03xkMRTFvDjHS6v2604kTJ+j9WbJkyfsVxHIzBBgCDAGGQJlEgJFWZfK2skYVhICctCKT4DFjxuj1kU+Yp06dWlA1739dmYQ3L57g4ZMARKa+j7aSBlmxQfB55IPgWN4RbV7CUXVYn6d4+S6jcLb4mizEBvngkU8wYnXYuOdVnyGcJ6Y2JiYmNKqZIchjrDKkpaVRrRCCJdFeYwdDoLAIEB83u3fvliLiNWrUCAcPHgTRxijsQUgw8s7u1auXwZmokiAFV65cwejRo2FmxgfGqFy5Mo2y+PfffxepvYXFp2TTq5H06FdM7pDdcTG5HxWs+2HxmRCt76o8NK3UEX+ic5F
IK6L5FI4Li/tIWsZ07DaxRv9V1yH6iEZxbg4xTav36k4ajYZqWxIz4eLyb/deArHMDAGGAEOAIWBQCDDSyqBuBxOmtBDISVrJySh9f5cKaVVagLB60LBhQ7Rv354hUUQEyKJj+PDhlCRYv359EUth2RgCPAIZGRnYunUrLC15k66WLVvi9OnTKKzT8blz59I+OX/+fIOA1sfHBwsWLICVlRWVi4w3jo6OIKbJxOyt7B0aZMaF4Nljb3g9eIIX4Ukl60g4B4BkEybY9xEe+71GXF6bKcW2OZSjcva3UAicPXuWPhPffvttofKxxAwBhgBDgCFQ9hFgpFXZv8eshToQYKSVDlDK+am+ffuC7PIWdlFczmGTmr969Wq64CCaIwxDCRb24z0RIBHFVq5cCXNzc9q/OnbsCKKJpO9BNLd69uxJ8x4/flzfbMWa7v/+7/+wbds2tGnThspBiKpmzZph3bp1CAsLK9a6WGEMAWNFgIwbdnZ2IBqHLJKvsd5FJjdDgCHAECgZBBhpVTK4slINHIHFixfD1tY220dcFJEd/ZzXdP1fvny5gbeSiVcYBJydnemCkiww2VE4BM6cOUOx69ChA4iGDDsYAsWNQFxcHNVQEk3piHbS/fv39aqGODCvV68eJaX9/f31yvO+idLT03Hs2DH079+fRv0jRFXNmjXx1VdfwcvLixG77wswy18mEXB1daVjyZw5c8pk+1ijGAIMAYYAQ6BoCDDSqmi4sVxlEIEuXbpQ579lsGmsSXog8NNPP9HJMgkzzw79ESCOoomGGnGcHRERoX9GlpIhUAQESB+bMWMGKlasSJ/XQYMGwdfXt8CSCFFUqVIlNG3aFImJiQWmL0oCYiJ7+/ZtENNxEoiAEFWkzs8++wzE9CkrS4iiUZTCWR6GQDlBoHPnziCRP8PDw8tJi1kzGQIMAYYAQ6AgBBhpVRBC7Hq5QYCYa5Cw6+wonwhcvHiRLjKJ02d26IcA0X4hjrLJAsPT01O/TCwVQ6AYEAgODqbBM4jTf0IOEbNUck5+JCQk4OrVq9KHaDmRtMTE8PLly9J5eRpdvwsydyX1rlixgj4LpHzysbe3x65du0CeEXYwBBgC+iPg5uZGnyFCTrPD0BFQIOL6j5jh1A6NrRuhSVMbtO87GWtPPEFcWiD2rrwC7RtQhchzX6OzlQWsOs/B+UgVoHoH95/mYcrECZgwYQImTHTGDt+CtbVVkefwdWcrWFh1xpzzkSh8mA5Dx5XJxxBgCOREgJFWORFh/8stAjVq1EDv3r3LbfvLe8NfvHhBJ8rM7FO/niD3FUScSLODIfAhEPDz88OQIUPos1uhQgVMmzYNb9++paLcunWLnheJpKJ+k76e8/jXv/5Foxw6ODhIdTRo0ABLlixBQEBAzuTsP0OAIVAIBHr06EG1KUNCQgqRS8+k6nS8C3wKb69HeBZWcGAATVYsXvk+xIOnQYjN1OhZSXlIloHnO/ujBmcGh6VXEZ5J2qxAzKPD+MaB1zQ1cdiHSLWAhSIA6215Up+8i203vISCXMp6ITtvigEXkwoAT4GA9bbSe5ez3YCXtKACsrHLDAGGgFEjwEgro759TPjiQoCEHyeD6IgRI4qrSFaOkSGQmZlJ+8CoUaOMTPIPI+7MmTMpXvPmzfswArBaGQIyBIimX69evWifJH6vSL8Uo5EVlawS84mklUKhwIULF2iUTKJdSK4T09hJkybhxo0bUKvF1ZlMMPaTIcAQKDQCxMyWPF+TJ08udN48M2hS8WzfdNjX1BInpI5KTYfiB/eY3No6GcFwWdAH9QXtSfo+qNAYn226i3j2qEMRuA2dTDhw9WfDMz0H6un++NGxMrgOvyJMVIPSxOHiaD4aLMfVwjjXePAUYBxO9BTviT6klQZxF0fDUrgvtca5Ip5xiTluAPvLECh7CDDSquzdU9aiIiAQGxtLJ0izZs0qQm6WpawgQDQlSPQiduSPADF7IhN4JycnqFTijDT/POwqQ6A0ELh+/To1zSP9k0QhowtN+aKzCL8JITZ37lx89NFHtDxikkiijR45cgRpaWml0SxWB0Og3CHwySef0CAGgYGBxdB2NWLOj0WtvJ7/ig7Y8kLmcy4rGHsG18zj/VEF/f4IRW79y2IQ02iKUOL1z+15fMyHwOVdbhZP8XILunX7EcEyoDQZYbj1114c9wgHVcyi7Y2HS6FIKwCaDITd+gt7j3sIGl5GAxwT1EAQUCf54MiS0ejToRVs7Xpi+Nxf4REt66zZ5FQj9uZWfDtnDkiQCP6zED/djdem0mTg1fm1mNC3I2xt26HHF4tw2C9FIGa1ydivoiPASKuiY8dyliEEyKSILG6WLVtWhlrFmlJYBMgkmUSRLMiHTWHLLUvp//GPf1CzDeIDrqQcWpclvFhbSh8B8vwSLStra+tsi87du3dTTSmiLZXfh2xe6CK7SBTZzZs3s4ADpX9LWY3lEAFCFpPncOzYse/fevVb/NlV1OZphW+vRkKhTsXzvcMljZ2Gc73BKwypEfnXAFQRCC6LPj/g7+AYhHusR3czoYwmS+CjZV3eXz6jKyETPkubSO/Jyl2W4MpbGelH2qMKxcF5uxFITfdUSE9KQHx8vPBJQIpCVI/KTVpp0sLgefE4jp27jYCE7BtjqvQkJEjlxCMhRSEQA0qkJsrqSEhGJuHSlKlITBDrjUdCciZyU2xGdwOYwO+BgCb5Pr5vUxGcuTVatWuB2hWF57r2FzgUqsPWNOMxvm8mvj+E7woO+OW1SHJlIei3IWjc3BEjJk3E551q8c9G1X7YEyKmeQ+BWVaKACOtWEdgCADUiTSZHG3fvp3hUY4REBer7969K8co5N30//f//h8sLS1RrVo1vHz5Mu+E7ApDwAAQICZ7cvLp5MmTINFBC/qQzQt5vtmzZ+Px48cG0CImAkOgfCEwcOBAEM3G58+fv1/DM59iSRNhsdn2R7wS15EpNzBOMBc0HXgJ1JuSKhx/duUDPHBcB+yQFqZpeLJzERau2IifD7ojQsfa9v2ENKbcGsSdHSIRe/z7shEGLj6CJ/HZSSbaKlUM7uyailYVtAt/x1MJQoPlpFUFdJj7FRzrV4OZqBVXoRnG/vlCIBRViLmzC1NbVdC+ox1PgZakCMWpxT1hIebjbLHhpQKKN+ewtE91bXrb9Qgo1/fOmPpZSciqQOD2fui12BVvBOJZneCFrU58H6k+4lIOc1M13p0YAQfn03js7w9/4fP8VSxEmlb15iic57sgVDyhjsO1WQ1pn2v5w3MpXUm0pjyVyUir8nS3WVvzRMDV1ZW+XA4fPpxnGnah7CPw448/0n5w//79st/YQrbw3//+N42uSRYQV65cKWRulpwhUPoI5HTEXlTSSvRpVfotYDUyBMo3AoQsJoTIF1988X5AaGJwalAVWhZXcyTORPO6Nhl+69BaIDlarn3GLy6Tr2K4uUCuNP4OTzJUSH7zDE/9XiMuS9QOej9xykTutEdY1kpLQklEf6XmGPaDK0IzcmCljsR+B2163aQVh+r9f8SDRDXUiR5Y1EJM3x
jz7qcKsKkRuV8bAIMTSStyNfma9t4JpBU5nfL3SFQTySxGWpWJ7lfkRqje4MgCUQNQW4rqzR/oRkhV63l4IA9gmemDlV0H4A/JOZs2j/hLk5mIpBzvhoxHC9GI49BmSyAfcEBMzL6LjAAjrYoMHctYlhA4dOgQncyQMOjsKL8IEJMhMvEi/YEdWgQ0Gg0+++wzig0xj2IHQ8AYEGCklTHcJSYjQyB/BMSxx8fHJ/+EBVzNePYT+lYTSJAatujRqxOsRc2fFl/j7zieyFKF/gI7keBoMR3OfWrzZBc5Z2qDUTseIInZl1G0FW9OYkZLUStNJJj47wo2X+JQoHz1L9eo4qCbtJI7YlchZKedhH3lgacQI/Bg8S49pfPZSKvU25j8kSgHr2lFBE31mKL1Z8ZIqwKelLJ/WZODT6UtzniEBdYcOJu1eCZqTEGN6NOfU8KzUm1b9J2yDif9kvQyL027OwP1LQfhULgOzcOyD3GJtJCRViUCKyvU2BAQNWy8vLyMTXQmbzEiQEwQCGm1YsWKYizV+IsSzaXGjx9v/I1hLSg3CDDSqtzcatbQMozAP//5TzouDxky5D1bmYHAfV/ASiSkpO+WmH0xQtKGyPJfhebSNZ4AMbW0kpmdVUbvnUFS+vcUyuiza9KDcW75QFjnwIxqXtWZgAuCVhtQWNIKSHEbpdWQqj0VHkLcC0ZaGX23MbwGpLhjbE0Ojb59BIlqVUXhzNROaFzbTEuScjXRa/UtCBy37nYo3+L4lCH4zi1WL4JLdyHsbE4EGGmVExH23zARyPTFipbi7kle3xXQdkOANJHIfLoMNroGUXKuQnts4b1D0vYuXbqUvpCCgoIMs/1MqlJBICMjg/aD0aNHl0p9xlCJi4sLxaRz587IzCzXnmeN4XYxGWUIMNJKBgb7yRAwYgRGjRpFx6EHDx4UsRUKBO92kogni/bDMG3GWPRuKM4n62D0ibcgOhGZvsvRTJo7mqLrxsdI0QCKt2fwZX0hfc1RuJJYRFHKRDYNMuLiILcAVETdw+9f90IdCTseK9t1ok+fwpNWRFtFKs9sEC5Rp2MAI63KRCcyqEakesxEg+p9se+NLs0oDTKjHuLY8iECOVsJDhv9teSW2BJVCkLuHsQipzrgOAvYTduP52m61LrEDOy7MAgw0qoxNhibAAAgAElEQVQwaLG0Hw6BTB+tE80cA6JkR89xaLnuheDwToPovxxRIc+0WrVh0qiZM2fSCRGJbMKO8o1A/fr10aFDh/INgtD6p0+fokqVKqhXrx6ioqJyY6JJR/iTu/DwuIunEdLeVK50mox3CPJ5gAc+wYjNYfefM7E6/R0Cn3rD69EzhCWJ3nJzpmL/GQIFI8BIq4IxYikYAsaAQEBAAHXI7uTkVDRxE69hjOBwnbNZjid8mEBoEtwxSySi6s+GVzqgDN6GNuLcsfJgXBD9hSMLz9fa0Lkix9XDV55CIUWTyMhzpcN7xVQcfpvTTlKD9OCTmN22ooATB5NeLoijrS0CaXVnmlYzznIibgpurRhpZeTdx9DEV4ZiT19r9ClQg1KNJM81cDDlwFUegBOSFiHfIFWCLy4f/QUrJnWT+q3VRFfE5nxMDK39RiIPI62M5EaVezGVEXBdNwfOzs7ZPzNHoX1lcefrU+x9JRoip+O+cz1h0KyP/pOnY/p02eerdXCL1rLpxMknIb9UKu25co95OQWgT58+sLCwKKet1zY7OjoaDRs2hJmZGR4+fKi9IPuV9WITOgiTe9sNuaMJatKe48CsLtLgzRPMDeG01BVvc0Tv0aQ+w77p9qgpLhbodyU0HfoD3GPYcymDnf3UEwFGWukJFEvGEDACBCZMmEDnaXfv3i20tJlPvkNjYWyxXvBQpiGRAvcxNYS5oh12hqiAhHPoTxalJL3lBIkoATSIOthFSGuO4deSCy1H2cmQDu+5rdDj51fQtbWkCt+HT8x4DE2dzvHR/fQ0D+x/QVCnIg7UZeaBJt32QuTI8iSt0jwwpZZw72SO2JlPq7LT84q/JUqE7vscduNP4K1eU00Fgnd0hglXA2PcU/IQR4OUxxvRlbxHKnTH/gjGWuUBVKFOM9KqUHCxxIaFgBJh+wcJ6t7WcHZPgKSEqQrH752EgavJEjwtwKqpV69eqFmzpmE1j0nzQRAQte4IaVNej6ysLHTr1o1Ozo8cOaITBlX0NcyzFSeHHHKRVuoYnBv9kTDB16YTNSObzr2NJPGBVcfg/Nhaeaat6LAFL0Q+Wqc07CRDIDcCjLTKjQk7wxAwVgRevXqFihUrgszXCnvISasao/+GRDdpYuHSz1QYexywlywuFUHY0kYcs9pia5C4w5IFv5XNhbQN8c2Dcq5pNacBOMuROBmpY6Wf9Rw/UJceleC49w01u9TPp1UF9DkjqrapEbFPjBJYFQOPRUn+gfIkrTIeYj5xpk0JyhZY5U8mDmq8Oz4QVQTSkmOO2Av7+JTh9Boke6/FwM+3wacQZnzq8D3oxJlhsKuWYM0NUjq851mD4z7G4id5WyLkzsfO5IUAI63yQoadN3gEVBFHMVSIBFN/pjsSxQUwkTz1BsYJquAVu26Gy8HtWL92A3Yeu40QHS+mVq1aoXnz5gbfZiZgySOwbds2OuHx9PQs+coMtIapU6dSDBYuXKhDwky8uboG/a3EiSH/nZO0UgRtRTthkmjWdQWuvopD1OM9GCHuglZ0xMFIfvdJ/fZPdBUnlK2+xdVIBdSpz7F3uKW0QJjrXZ4XCDpuAztVIAKMtCoQIpaAIWBUCIhj082bNwsnd+JVjKohjFkV2mD+xVBkqNLx6rSz1ul68xXwpRucCgT9ZC+MPRwqd10G18AYhHvuwFBLoYxak3FDYr4KJ0rZSJ0Ob0JacRwqtJmDc2HyXSUN0h6voiaWpt02w09cr6sjsc9BO29wOBAlbDRnNxv8eOFD0NFeHYnD/XgH2FX67ESgVIUakRKZxYFzOIAocf6vfodjfUUSsiLsV17C7ZNL0aduJel+ch8vwmNRprJxM1griohAxss9GD90BW4nFFITKvYEHC1aY2OASGjrFiDxQn+Ych3wS6gOYld3FnY2HwQYaZUPOOySASOgScbN6cTRHQfOfAiOR2V/4SiCtqC1uAjO+V17ADZ7J2m1sgBYWVmha9euBtxgJlppIXD+/Hnarw4fPlxaVRpUPTt27KDtHzRoENTq7M8VVJE4Maq+dvIne7ZyklYZz3bjqy/6wt62K76XdqRT4TFF1Khqgu8FFcjMp0vQRCir7Y9ac4OUG+MEc0FTDBQ9sBoUWkwYQ0YgJ2k1btw4LFq0qMCPra1ttj6uVOoygDHkljPZGAJlE4GwsDBUqlSpCPO1LAT+3BuVZWOWqPXLf9fG2DNRgkYQoEnxxvL2FbK9B7Tpq2HgnyE6zeLKJuq6WpUO729aotnAWZgzqR9a1m2KXhO+xdpNG7Fy9hDYWNSE3Zd74JMssEnKKFzfPApNZfibtBiHLW6RoBpYjqYwt5+JrZsmoY1lQ3wyfRHmjW6Hylw1tJ/4Kx5LatlKRF3fjFFNt
eQXZ9IC47a4IVJ4Taf7/wQnkVzkOFS2m41DR8ejllR3LQxcew1R7LWu68aWm3OZwQfw5cD5cH2Xg1DKCMKlc4HI20BHjZgzo2D76RFE5MiaHTwlXu+wg4X9j5CUNbMnYP8KiQAjrQoJGEtuGAgoX+9Clwr8oNV8mU+ul0vipaFaVWDOAk07dUOnxlW0ExDLkTgVxb9t/ve//1GV808//dQwGsek+KAIPHv2jPaTlStXflA5PkTl7u7uqFChAmxsbJCcrGMbmZhNtOafO0vH73Hi8AjUECaCOUkrnfKnPcZK0aTQYhjOx/ETWk3MKQyqwpdbc+QZ8L4tM+C3rrXwzLbE2mfSNqvOotlJhkBOBHKSVtpFp2zBIy1k8j7HSKucyLL/DIEPhwDxbUqe5WvXrhVOCE0K/PbNQhdR21d89mt1w9d/BSJd1NYRSlXH3cXWkTYwE9OR7+odMW2fP1JzpC2cIGUhtQoJL3wRLSqaKOIQcOciThz9Cydd7yAgVrygT1sViHrqj1iBRFKnhuKhuysuu3vjZUxWtg1mfUojaTRpoXjgdgXuj97kuq/6lsHSlVUENEh//hs+q2eJzhPnZ9vEWvD1ZAxo1QRfnIul/S7DbzM+degH59+9EUf7pwJRbqsxuNsMnJHMYjVI9T+I5Ys2wsU3UTJhzQzejxE2PbHuSVpZBbLU28VIq1KHnFX4/ghk4sn3TfjFbIWu+D0sJ9WtQdKjP7DMeQKGD5+Ng8/T+EFPHYMrM3h1ZjLhabU+AGRYJYtz8n/KlCnvLxorwegRSE9Pp/1hzJgxRt+WwjSA+Ashft1q1KiB4OBg3VmV4XD5ZiZ+OP0cyWogxX2MfqSVIgwuC4bBoYFIDNTFyIMh9PnjK8rAs5/6opqwOKhh2wO9OllL0T9bfP034nIofekWkJ1lCGgRYKSVFgv2iyFQVhCIjIykAULs7e1BNh0LfajTEfncG7dv3oaXfwTSChhbsuKC8dTzHrz93iAl53Sz0JWzDAwBhsCHREDxeg8GCK5ldG5k1RiFK4m8hFkvtqKnsKHKVWuAJs1ao/f0XfCMl780VAg/NEiYv1aCtcNAfDrYCX2HzsUBv5Qika4fEh9DrpuRVoZ8d5hsuhFI98RXQojiSr0PQ3CLozttjrNZfisl/wViGN7Q0FBKUhCzEXYwBAgC9erVQ8eOHcsNGCkpKWjZsiXVsnJzc9O73XqTVslX8XlVkbDi8NGADbgRmWMnNiMQ+76w4slo+c52y9m4GJEjrd4SsoTlGYHbt2/DxMRE50c+Wc0rjXj+v//9b3mGkbWdIWBwCMybN4+OFRcuXDA42ZhADAGGQNlBQJ0WDl/PO7j/NAjRGXmpWKqRGu4HT49b8PD0wet4NmctiR7ASKuSQJWVWaIIpHvPQQNhUdvlj3DJB4E+lapCd6GDuCB22AcSKObx48d08rNp0yZ9imBpygECvXv3RrVq1cpBS0H9Vg0ePJg+Az/99FOh2qwvaaVJ9MQva37AusVj0FYIg81ZDsGfr4WBXRGM3U4WAmFlgfbDpmHG2N5oKD6rdUbjhH6xiAslP0tcPhHQaDRo27Ytqlatiri4uPIJAms1Q8CIESDRfatUqUKfY/I8s4MhwBBgCDAEyjYCjLQq2/e3DLZOgYD1opNcG91+bjTxuP7tADi0bY6GTYbB5Z1WjTPVYxqshIWw5YQbSAVANEvIrvvevXvLIF6sSUVBYMaMGbRPxMTEFCW7UeX57rvvaFsnT55caLn1Ja20BWuQdHMW6gnPYM0xf4MEDE68NkZwuM7BZvkTPnIQNEhwn4X6Qtr6s72E89rS2C+GQFEQOHfuHO3zixcvLkp2lochwBAwAATI80vmbqdOnTIAaZgIDAGGAEOAIVCSCDDSqiTRZWWXAALxOOMkhLOtNgpuKbqqyITv8mZ0MkMmNPXH7oN/shIZby5jkV1F4XxdzLjNZz5+/Dg9RxYy7GAIEAS2bt1K+4SXl1eZBuSvv/6i7ezSpQuysgrv6Fwf0kqjzIJSvhGeeAFOpoKpoM1aPMvKxJPvGgvPpTUWPJTFok5xxxgxTLndToQwfyJluj+WRuOID5z27dtTLY3yQEqXBqasDobAh0CAaEmam5uDRPvMFen2QwjE6mQIMAQYAgyBEkOAkVYlBi0ruEQQyPLH6hbCgtd2HV7kYTasjrmIyYLfK7nvEvG39dSLiBYWwLt27aILZg8PjxIRmRVqfAiImhhHjhwxPuH1lPjRo0eoXLkyGjRogHfv3umZK3uyvEmrLLz4sR+sq5GQ4daYLyOilMHb0FY0+2v/M0KUctKqBkb/rY1aqIl1QT+R4HLYS815s0vA/jEECocA8YFDxoGFCxcWLiNLzRBgCBgcAsuXL6fPM9mAYQdDgCHAEGAIlF0EGGlVdu9t2WxZ2l3MqCOQVg7783XCnhl6Bov6aqMFUsKqgjWcll1EuEypZM2aNXTS8+zZs7KJGWtVoRHw9/enfWLVqlWFzmsMGf7v//4P9evXp6TVkydPiixy3qQVkOQ2AZYCOWVqvxBnX0QjNvhv/PBJNYoteR5b/fAM5FFMvDpKikJYoc18XAzNgCr9FU47N5fSNl/hi8wiS8oyMgRAI4116NCB9nviE4cdDAGGgHEjkJiYiOrVq6NZs2ZgAROM+14y6RkCDAGGQH4IMNIqP3TYtTKAgAZZsUF4fP8uPJ8GISZTbqfEN2/OnDl0YRwVFVUG2suaUBwIpKWl0T4xduzY4ijOoMr4z3/+AwcHB9o+Yhr7Pkd+pBWUIdg7SEtQiVqO0nerxbibJDyPWYH4uXdliaCS0ogaWbXH4kwUsw18n3vF8gKurq60jy1YsCAPOJRIDAtEQECA7PMSL4NeISwyARla94h55H/f0wrEh5L6X+J1TCYLlf2+cLL85QKBtWvX0ud6//795aK9rJEMAYYAQ6A8IsBIq/J411mbsyFAiAmySCaLeXYwBEQE6tatC3t7e/FvmfmeOHEi7e9Llix57zblS1oB0KQ+w0HnLvhIJJ/otznajN0Oz4TsDIAmxQ/7ZnVBrWxpOdTq9jX+CkxnC/j3vlvluwDiy4o8z8QkNk9zWGUwtrURNHlz9ENKpJp+jAHLryJKWUJYZj3HWhu+fmMLPJAVcQPbvlqKOzr9TJYQXqxYhgCAlJQUWFpaolGjRlAo8vAZwZAqXgQUr/HXwi8xYcIE/T+T52FfIG/moIo8h687W8HCqjPmnI8sVBRw/RqiQuS5r9HZygJWnefgfCTb9NIPN5aKIWC4CDDSynDvDZOslBBwcnKioc9LqTpWjZEg0KtXL1SrVs1IpNVPTNHB/KefforSDBOuTo/E8wd3cdfbD2HJ+U8eaVrv27h52wv+EWnITm3p106WiiGQE4ErV65QsnbevHk5L2n/ZyOtzFDD0hKWljVRo1oVmEgkViV0+SkQJbI0NlLSKu3+N7A14cBV+ATnErVwsl8MgdJCYPPmzfT5/v3330uryvJdT7onvqrHgatki9Erd+PExSu4fHAWGknvSQ6NvzqEK1ddcebAOoy1JUGQLDHx
FonZLY8CzoGz3YCXxf1CVQRgva12A8J2w8uSeWeX717AWs8QKFUEGGlVqnCzygwRAeLjhOzQsYMhIEdg+vTpdBIcGxsrP220v69duwYTExO0atUKqalk4sgOhkD5QaBz584wMzNDvmbgctKq+zHEyqzJVXE3ML+JsAhqsw3BJaFtZaSkVcIpR/quZKRV+XmeDK2lxKS/du3a1Fcj05ovhbtDSKvGH2PG3/HSxpLy9Xa0k5FWdr+EShpU6rhrmGFdTyCtNIi7OFryeVlrnCviZe/aYpFeE4eLoy359xJXC+Nc45m2drEAywphCHw4BBhp9eGwZzUbCALW1tbo2LGjgUjDxDAUBLZs2UInPN7e3oYiUpHlCAoKos5qiQnF69evi1wOy8gQMEYECGFLzPuI/8J8j3xIKyARl4ZW4RdB9r8hjCoMZuHlzqFoY2ODNkN2IEAM8KEIxC+ftoWNTRsM3v6CBhug9Wa8xrmVI2Df0Jwvp0o92A1bhrMhQogBOWk18xROrvwMrSyJhkIVWHedit1Pk7ULL3U87u/4Ej2a1kQlulA0gXmDDhi+4jzCRDkyfbHeqTVsbLthzvG9mNauBjiT6mjr/DfiNWrE3d+JqT0aoxrJb2qJlk6z8efjJGkRCmiQ9vwI5g9qgzpVeMLOxLw+2g1ZiGMBadBAhYhjY9BGDI7CmaBOM1v0WvYA6fkCzS4yBIofge3bt9PnaufOncVfOCsxOwLpnlg8ehdeycj7/EgrQIngncPwFdW0Ir4DMhB26y/sPe6B8BKKsKLJCMOtv/biuEc4C+KS/e6xf7QLvsL5tRPQt6MtbNv1wBeLDsMvRQd7qk6Cz5ElGN2nA1rZ2qHn8Ln41SMasq4v4KlByvMTWDlxCAY49UYPp/FYdTIAqTqKZDegaAgw0qpouLFcZQiBKlWqoH///mWoRawpxYHA2bNn6QT46NGjxVHcBysjKSkJLVq0QMWKFXHjxo0PJgermCHwoRAggQdMTU0RERGRvwhy0qrZHPxx/DhIsIJjR/Zhx+IhaEjJIUuMcBF9sGTi6feNeQLq48V4kiEUn+mDJYJWVqOFj0FPK0Oxb3ANPi3HwbRGdYFs4sDVnYRLsWpARlrxwQjM0aBRbVQUtRcazoFnGqlDiZDf+6CycN68YXM0saoklW2/PZifUKfdh3NdgWyqpDWVabPpBWLvfYdWYrnm9VC/unC9kgPW+/IN0cRdwrha/Pkqje3Rs2cnNBHIK672BLjG/QchP7eX6uVl5lBz3A0w11b5dzV2tfgRyMzMRL169VCnTh2kpzPatPgRlpWoisMzv5hsC/f8SStAFfcMfjFKqNKTkBAfj3jhk5CiEMh4FTKSEqTz8QmJSCPMgCoRL2+dwV/HzsHjVaqWVFclIuDmGRw9epae13IDKqTLy4lPQIpCexXIwrunV3Hi8FGc8whGiioNoY9f6SAXshDjfwNnjx3Fycv3EJSUj2sDTRZiX9yB68kjOHTkNNz9YrSbFRS2PNpGNgZC7uDcyWt4npTdGYImKxYv7rji5JFDOHLaHX4x4m6E7D6wn0VDICsIvw1pjOaOIzBp4ufoJIxzVfvtQYicjdIk4/73bVCRM4d1q3ZoUZtsIpExsTa+OBQqMzlVI8FjMezq9cCaO3FQQYXo69+hnakZ7Fd4Qow5VDRhWS4RAUZaiUiw73KJQEZGBn0BjR8/vly2nzU6bwT++c9/0r6xevXqvBMZ+BWVSkUJWTLIst1nA79ZTLwSQcDNzY0+x7Nnzy64fDlpJRI6Ob6tvzyLSGlSqz9plXR9ghCQoDbGnghDFjRIf/YTupuSCbAFnA6EZyetzPtjTxBRQVAh/EA/gaBqjpV+WYAmAXd+GINP7Fuh12pv0M1hVSh22fMEk+nAS0girZWRVpxpZ6y47Af/u9fg/fYlfu4kkFF9duJ5ugbICsHhYTUpVpX7H8c7NZB6YzxvwmPmhKOCI2NVxFnMGzUOXy3/FdcjlVBnJSPkQFeajzPpiSMhCUikK82C4WYpGALFjcCvv/5K++K2bduKu2hWXgEIFERa8dlViLmzC1NbVeDfGeT96ngKCeSiKhZ3to+GtfTOrYEv9u3FrDam2rScJYbuCUKC/x6MbyErg6uPsSfe8uaIqhjc2TUVrSrw7zgy/3E8RWsAVNFwdbaBCfcRes1YhDlftEedj2rCvNUaPJNxQqroa1jiSMwLrdBvziosHv4xTCo0hNNSV0RI738itBKRV1ZgUPMG6DhyDr6b+wVshQ0C6xG/44XInapicW/nBDSVtW2MWxR8tw9CbeGcmdMx+t6FMhJXVgxC8wYdMXLOd5j7ha2wwWGNEb+/YFqsBfTDgi+r8OaoM+a7hErEIjFfndWQ9JeW+OG5tiMoArejX6/FcH0jqAOqE+C11QnVyT2rPgKXRLvWlDv4qoEJmq/0k8oEMvBkSRNwnDXm3mMuOQq+LwWnYKRVwRixFGUYgcjISDoYfvPNN2W4laxpRUHg3//+N+0b48aNK0p2g8izYMEC2gbin4sdDIHyiEDXrl2pltXbt28Lbr6ctDKzQbdevdCrlyN6duuENo0Ekz6OQ52RhxFGFy76klbp8J7bkF94Wc/DA1EjS5OJmNdhiM8StABkmlZVP3PliScAWc/WwIYubOrC+b64CuKbo0oJx1P3k/htzQz0EnaLuR7HQT3xyUgri+GXIfpI10QfhaOwoLPfdgd+/v7w9/eH1+99YUbqqTYSfyeTetfCVlhQESfKtr1HYfbaPbjkEyPbYQaYT6uCuxZLUToIZGVlgbh8qFWrFsgYzo7SQ0A/0orIo0bkfgf+fUjeLyJpRS5lPcOaFlqyias3CJtuhiDs+gIt4VOxIVrajcUvd17hxfEx2ojDzVeCcPr0UEdiv4O2HJG0SrvrjLqkTtt1eEGdv2ci6Pf++Kix7L2c4Yf19jwhZjHsAu9vK+0enInjeY5D8wV3kSy8sjN8VqI1Kc98MI5HEU0pBV6ss5XaJmm9EqGUQdgqRaetBqdFY9GqUWNtdGWHPQhXZcBnZWua33zwcfBFvsA6yam8PbaXiENFAbdy8aVBZmISxGGXb3IGHi1sBI5rgy2BYlQAFd4cWYDd0n8BHNUb/NGN9A9rzBMG88RLw2DOmWPY1eRsCGb6LkczjkOVwWey+cjMloj90RsBRlrpDRVLWBYRIBN1MgitXbu2LDaPtek9Eahbty46der0nqV8mOwHDx6kfbtHjx4sDPiHuQWs1g+MgLu7O30GnJ2d9ZNETlrlcMROdk2fbXMQTPVqYcptsnMqJ60W4bFIRmU+wXeN+QUObx6YjKvDBNKr9RYEiXPinFLJSKu6zvelHXVF4CbBlM8K0+9S+0BkvnLB/L4fw1QilUx4won87+mCOFK2jLRqusxH8uuiCNggI6O0CzsyFvKfztgboQY0SfBc30/SBNBe51Cr90rciuPNWRhplfNGsv8fEoG9e/fSfrx+/foPKUa5q1t/0gqId+kpvGtykFbKV/ixrfgeqoyBJ2N408HUm5hgKZ5viDn3+fcg4k6gp/jeMu2
PC1TFlEAfD5eeYnpR00qBoC08IcRxprBf6IoI8i5WvsLusfPgQZVh1Ig84iS9S9ttfy2YQMbhZG9Rs6sTfqbOvBR4ubGV1I7mq/yRlYOQqz7KTWsqrQrFL3ZambjG38AjSYUknyNYPX85DvqmQKN4iY2txDTNsco/C8hGwFXHKDdmfF38D1ca7s6oD8tBhxAutwLVyM1KxVoz8GiBNTjOBmupel4m/FY0oxrTI8huj/xIcsXQyhy4WlOE/iW/yH4XFgFGWhUWMZa+TCFw+/ZtOuAQlXJ2MARyIuDo6EgdmOc8b+j/vby8qHYJ2XEuK9EPDR1zJp/hIdC9e3dUqlQJ4eHh+gmXL2kFZPosQRNhgdT5j7dQU9KKqP9z4BrOhbeoBJV2DzMF5+Q8aZWGO9Nq8+ms5+OhSG4hC29uXYL7o1eIzSQmes+x1oZfsNSf7ZU3aZXph9XCwsas02zsvvwYb1ITtMRYr9O8uY2MtLLdqA35rgr7DR1pO0zQccUxnD9/Pvvn0h2EZWgn64oYH1z6fTWch/dAS2nhyKH5Cl9KhDHSSr/uxVKVDgJKpRJNmjRBjRo1QHw6sqN0ECh+0qoGxrgLBE3aXUy3EskcG6wVTbgSz8NJ8tfXDUdjxPeWLtJKg5gTTjKin4O5wzycepUBZUYaqNsrTTT+6i36LeLg6ELpfwDJuPJ5VYmg6rT7DTVFzArai/GtP0K1+o5YcS8JGlUC7i+3kdJVHqLVmEUO0qoFJbly3pssBO0dj9YfVUN9xxW4l6SBKuE+lgvjAsdVxhBX1qdzova+/5Vvj2PKkO/gRnxLFnikwH1sTXCNvsUjOpZnwmdZU3rPO/3O9wupCJFsNemDM6Kqs3SR/SgsAoy0KixiLH2ZQuDMmTP0RePi4lKm2sUaUzwITJs2jfaPuDhx4lI85ZZkKcTklTiiJQEGfH19S7IqVjZDwGARuHnzJn12Z86cqb+MctKqyz6EJKcgJSUFKckJiH51B7+NrkfL5LiPMIlGwVIgcJOw0155AE7SBZMGqQ+XooVAbvGklQbRJ/oLu/cN4eyeQJ0Jq2POYbRAAlnPf6g3aaV5dwhdaPmmcDoTz7cv6wU2iDv0ormNjLRqvSVIa9KX8RDzrfkFoNXkK+AVpjLgt20E+n06AXM3X0WkUok3p7/D6P7d0KH3EtwTN/ez/LFGWEBZTroFopyQcKY3TKg8PXHCeF6V+vcLltLoEDh8+DB9VletWmV0shurwB+GtLogI6264NC7/EgrQJPiicWSqZ1Agpl2xLdXo3l/WJlPsPhjkRwzxYCLIkGUjGvDtSbi9WSasPR+qZPhf2IZPrNrhz6OwgYFx8FscF6klRkGXRLL1n3H1cn+OLHsM9i16wPH2qJMZhjMSCvdgBXhrColBHcPLoIT2WSysMO0/c+RJnahvMpL9cDMBtXRd59IUJ12SfoAACAASURBVGkQf34oqnIcKvb8E2/kmlqJ59Gf+K00Gwx22/ICVP/zjLTSHyuWsgwi8Mcff9CJDTEjYQdDICcCmzdvpv3jwYMHOS8Z5H8SPcne3p7KfOrUKYOUkQnFECgNBHr27Em1rMLCwvSvTk5aCaST3CRO/F3JYQteCL5TUm99KflUMbVxwqgveqMxda7OLzKk6IGpXlgk7ZZXQ/NOndC8mrho6o6dwQq9SSuk3cMsISogZ/UJnJcsxMQuxGmwUF7H3xBGJs55kVY0+mBfVBHS1+8xAhO+6Mw7XSc+WxZ6UTIq/fFyKcKgRcdRmL1wIb4aYQdzmq82Jl5NoKY7Ke5jUVMoq1rTVug0nUUP1L/TsZQlgQAJQkKi5lpYWCAhQXDCXRIVsTIlBIyBtCLCqqLdsKxHNe37kry7zHpjT6gSyJCTVpUx9LJILCXjymdVpDxaTVgN0l8ewxyH6uBMO2K5RxwiDnWR0uVNWtXAmOviToAEIf9Dk46Xx+bAoToH047L4REXgUNdGGmVA6Vi+KtCgu9lHP1lBSZ1sxLumRUmusZqI1TmqkWJ0D19Yd1nZ3Yz/1QvLGxO7lF19N/+lAZH0WSG48aWQfy42nQpfARf7rmKZCf0RoCRVnpDxRKWRQQ2btxIX1RPnz4ti81jbXpPBERNvL/++us9Syr57P/73/8wduxY2p9XrFhR8hWyGhgCBorArVu36HNQ6AAEBZBWppbN4ThtJzzjZSYEygicdW5Pd1l50sgKvb4/il3d+UWGRFoRF73hF7C4T11hcsxfN7Huj1XXhV1+fc0DSTjtKwtgX1VcyHAwbTkG6zcMgAVZgFUZBJdodT6kFfGFnADPbSNgQ/xtCIQTcbbeZY4LQkRnxlAiwnUZnKy15jI0rUVbTPjNh49aCOL66g6+baNNY9L9ACJlEBloN2FilXEETpw4Qfv2kiVLynhLDaN5xkBaKd754zUJt6qIhNuaflon6ByHTnvCAc07HO0lvsvkmlaJOO9USXpXOuwJp8RGVtCvcLLg36FtNhETbA3e6UtaiaaP2W5fFoJ+deLf41wbbHqpoDIx0iobSMX/R5OCxxu7UtPRCt33g7h01HUoQ/fhc7vxOPFWrk7Fp1S8OYeFferzWsc1msFh8Ewsm8U75a8z8y4EL2y6imXn9ESAkVZ6AsWSlU0EFi5cSAehQu3GlyUoNOkIf3IXHh538TRCcrSSu4WKeLz2ewivBz4IjiXB2vM5VMl48+wRHjwJxDuZX5R8chjsJT8/P9o/1qxZY7AyioKJBOywYcOg0ek8UkzJvhkCZRuBXr16oWLFiggNDS2lhmqQFRsM36fPEZ6SezKbXQgNMqOD4fvoMfxex+WIYJQ9ZUH/NFmxCPJ5BJ/g2KKXo0zCmxdP8PBJACJT85JdgcSw53j84CF8AqOQpmtCr07F22eP8MgvFAl5OZovqEHsOkOgGBEg42Dr1q1RtWpVxMTEFGPJrChdCChf/YS2EgHOof3PIbzJnY7E+jli18enVeHMA5P/noLPdwYLptIqRF2YhoaCzJS0Io7UZY7YHfZH8lo36kjsE6MRmjhgVwgJH5uK21/WEoisinA8RNIq8WpHe4ncMht8SYoCC1UIdkqO2GVtk+OTehtfilFgKzriEGH/la+wo724uWCGwQWYFcqLY78LgUC6N+YRs/mPF+OJjuWQJtkbawd+jm0+afmugdRZachQagBVOPY6EqLTBit9mZpVIe5EnkkZaZUnNOxCeUDgyy+/pIML8VtSHo+sF5vQQRiwbTe8zAWBJu05Dn/tiHqyiQjZabfq/jWOBqRnf3Fr0vB831TYCbtO/O59fTituIZ3ea2FctVoWCdSU1Np/xg/frxhCZZDGldXV5iYmKBt27ZIS2P7OTngYX/LEQIeHh70mZ06dWo5ajVrKkOAIaALgbNnz9L3wbfffqvrMjtXjAhk+i5DU9lckY+mp6sCNSL3OdD7QueJDgcQJe6EZvljdQstQTPoouC9OvUGxtUUz1tjAe8BG1r/fuSaHXaGCJNNOcnEcXA4EEXnq6m3v0Qdi27Y8DCZn7+m3sRE4lfQpBN+EsO6ZvhjQ2de2+qjCW486Z
R8QyKTWi/xpubTQBIuDTbTtqOCNTo6tEdb2+racy3mYNfeqwgnRL7iBda1FNtggS+u5Yg0R6BKuoTBZmIaDhWsO8KhfVvYVteeazFnF/ZeDdf6KNQFMTtXBAQScaG/KbgOvyA055ol4yX2jB+KFbd5f5QFF65Byt25sOY41J/uhgSxfxeckaXIBwFGWuUDDrtU9hEYMmQI9XtS9luau4Wq6GuYJ3NImYu0Ur7BoSGywVc2GaETjY9GwiVSfLOr8O70aMm3C09YaQfZtqueSJGwckti2GeIU/POnTsbrJAvXrygfjtq1aqFcqsxaLB3hwlW2gj06dOHalmFhISUdtWsPoYAQ8DAECBm83Z2dqhcuTKioqIMTLqyIY46/gGO/rQCkztW1pI1ZL5YxR6TV27H0QfxMh9BSkRd34xRTbXzQ86kBcZtcUNkZjTc1w9FXdlc06yjMw4/D8SpBV1lJtgcLPstxdln3vh5VKNsdTYZ8zO8Y97i+uZR2Qg0kxbjsMUtEhkPv4VtvcawrvUxHMd9hRmDGqOCiTVG/B4AuXKNKsYdq/vXB8dZ4RPn7zC7PwnCUY9uwkYRJSt6aJDsuQrdzIW21O6KWX8+Rtzb0xjfUDhn0QkLXKOgUkXj5qbhklYXmSNX7jQT290iIRVHytQkw3NVN8FvIIfaXWfhz8dxeHt6vJTXotMCuEaJc29BFPb1/ggoX2OHnQXsf5QFLSGlZgbjwJcDMd/1XQ7NwQwEXTqHQB1KVOrYa3BuzKFSx5XwSmaM1fvfHL4ERloVF5KsHKNEoEuXLqhbt65Ryl50oTPx5uoa9JfCB/ODa07SKuPhfLpLQAko29k4+SwaCZGPsW9cA2mS0HyFH6j7k0wfLG0mDNIV7LD079eI9D+EMULYd87MCX9F6bIpKXorSisncehMQmcb4vGvf/0LTZs2pcQr0TBhB0OgPCNw9+5d+m4iGrTsYAgwBBgCBIHLly/T98LXX3/NACnnCGhSwxGSqAIU8Xh57youXbuPIPJf56FGathj3LxyEZdvPkJoiu45rEaRiMi3cciQX1YmIzI0AolFMpXWQJEYibdxGTKyD1AmRyI0IpFpWOm8V4U4qUmF/8HlWLTRBb6J4k3LRPD+EbDpuQ5PZMYKmvTn+O2zerDsPBHzFy3CIvGz4GtMHtAKTb44h9gcnJQq9hZWd6uBj3qvxR25/8tCiMiS6kaAkVa6cWFnywkCzZo1Q5s2bcpJa0nYlEicGEV2jwSCSfadnbRSImhrGyFdFQw+J4RVB6AM2oLWQr6K/c6BKG9nPVsthXiv+ul5xNOXuAIvxXDwnCmcTsVmNyc0EtSJmRHBKz5ei4EhiP7f//4Xffv2pbLt3r3bEERiMjAEPigC5HmoUKECXr169UHlYJUzBBgChoWAg4MDTE1NER4ebliCMWkYAgyB0kVAFY5Dg4TokZWs4TDwUwx26ouhcw/AjzjpFw/Fa+wZkCPKpGzNxHE1MOoKb76qyYxG4EM3HFk/HU6de2PqjjuIzqZCJxbKvt8HAUZavQ96LK/RI0A0aHr37m307dC7AYogbGnNE1aWjt/jxOERqCG8hLOTVqREJdJiwxHo64vQVHE3Asj0W4FmQh7z4ddArPLjTvZBRbGc9QHSTpA8FHrDbx4YpYngpk2bKDH08OFDvWEujYRz586lcjk7O5dGdawOhoBBI3D//n36PEyaNMmg5WTCMQQYAqWPwPXr1+n7YcaMGaVfOauRIcAQMCwE1KkI9/OExy0PePq8RnyRNOK0TVLFPYXbleu4/zwK6drlkjYB+1UsCDDSqlhgZIUYIwJEU4Vo0IwYMcIYxS+azMpwuHwzEz+cfo5kNZDiPiYf0kpHFeo4XP7SiuLGcabodzQKaijwcmMr4ZzW4SXJLTcxNB0ki6Kio2hDPXX69GnatmPHjhmMiHv37qUykShpSiXbzjGYG8ME+WAIODk5US2r4ODgDyYDq5ghwBAwXAR69OjB/N0Z7u1hkjEEGAIMgXwRYKRVvvCwi2UZgdjYWLrwL8+aKoUirTQpeLTREVVE9dhmC+GVSnpIJp5+31girXq6xEndJtNnqdYZZo/jiJWuGM8PX19f2ra1a9cahND37t2jZg4ff/yxwZksGgRATIhyh4CXlxd9RidMmFDu2s4azBBgCOiHwO3bt+l7YvLkyfplYKkYAgwBhgBDwGAQYKSVwdwKJkhpIxAYGEgnMMuXLy/tqg2mPr1JK3Ui7q3triWsKnXC2kepgo+qTPgua0qxJJprPV20vp8yfWQhkLsfM0rSKiUlxWAWxMQfh5WVFczNzeHv728w/YgJwhD4kAgMGDAAJiYmIO90djAEGAIMgbwQ+OSTT6hGJntX5IUQO88QYAgwBAwTAUZaGeZ9YVKVAgKenp6UjNi+fXsp1GaYVehFWqlj4b7YDpVEDauK7fCtW6wsqokCgZLDdQ5dDr2THK5nMw8ceBFJhglDgVIRoog4cv2QR3p6Og3dTYjB8+fPf0hRWN0MAYNB4MGDB/Q9Pm7cOIORiQnCEGAIGCYC4rxv7Nixhikgk4ohwBBgCDAEdCLASCudsLCTZQmBf/zjH/jhhx9yfcgihxAAw4YNy3UtZ/ojR46UJUikthRIWqkTcWepneRknavSHWvvJcgIK76ouJOfSGlabQrUOmK/rvWZ1WCOt1E6YictJL4watasKeFW2j/+97//YeTIkbS/GoqZYmljwOpjCOhCYNCgQVTL6uXLl7ous3MMAYYAQyAbAgMHDqTvjGfPnmU7z/4wBBgCDAGGgOEiwEgrw703ekimQEJoIAICArSfl4EIDo1EYub7hS/QZMbg9csABASGIeE9oyro0ZASTbJ06VK62CcEVVE/RKW8LB75k1YKvP5zIKqJuFXtg61PRZPA7GgQ31VNhHTmn19EPI0aq0TQ1jYC5qbo5xIjaWBlz234/6ZMmULbkZCQ8EGEJUQV6buEuCIEFjsYAgwB/P/2zgM8iqpt/xMgBAglNCmhCSH0ltCLGJWivgiIAlIsoBQBUQGRIkURBQERAVHpTUE6Afzovb+hl7wJgfAlpPzTvpQ32Xez7/2/zpTdSbIhm0p2c+91TXYze+aU35mdOXOf53kOLly4IP8uBgwYQBwkQAIkYBOBixcvyteNPn362JSeiUiABEiABJ49AYpWz74Pcl6DlFuY45mZEOOMOj7jsO5WQo6EgviTI/GcLEI0xrw7+a1aJePRofn4cPIxxOacRqZHUrTKFM1TVw80PliJ7i6W88u13RCMHTcO43TbpGVXFeupxHMY566mLd4G0488QvjtjRhSQ93n/AJWB+dOSM28Ffn/zdy5c+VB7vnz5/O/sHQlCFdAIVg1b94cwkWQLxIgAYVA7969ZYuJGzduEAkJkAAJ2Ezg9ddfl++rly5dsvkYJiQBEiABEnh2BChaPTv2uS85nWhVwrkEiheziAyyVVGpLvj+elK2yyo40SoeJ8d6wkmSUOyFbYjKdk2zPiC9aFWpUiXYsumtsoqepZUIrv68PKjTc8jwueNGKGsFGvFowz9QQ
bPKSvfu8elpyAsNZt1dhTLFli1bZBYbNmwo0PoJ9wURdL1y5coQQdj5IgESUAiIh01xPRLWh3yRAAmQQHYI/POf/5SvH7169crOYUyrEkgNP43f5n6GEUMGQazaat6GjMRnsxZi8814OaXxyVEsn/Ex3h2spBk8fCxmLD+CUGPeozQGb8MorypwreKF0duDkQ9F5KjShbVeOWoMDyKBZ0iAotUzhJ/rovWiVectMDsupYTi7IohqKcKByU6/YxAQ/ZKKzjRKhJbOitCW0GIVtWrV8eJEyds2nx8fMyiTZETrQz3sbB5OgE0nRAlC1hm0QqAKQ5+y4eiWRn9cZXQ6ZPteJjfxnrZO72znfry5cvyuSBinRXUKyIiAnXq1EGJEiXk87WgymU5JGAPBF577TX5N8lVNO2ht1hHEih8BPr37y9fQ8RiDnzlhEAirsz0NI+TxZiw8ZzryDBNbkrA1W9ao5jkgc9PRufI+yPr2qXg5mxdXTzn4FahGHcW1nplTZQpSKCwEaBoVdh6JDv1yUy0kvNIwNlx7urNpDm+v2dRrVIjz+GXCa+hRQ1XFJMkOLt54IX3f8DRJ5Z5CYto5Ykv1i/EsLbVUVKkrdwU/5ixB4+0m0HSFcz2aQwPz/YYvWEl3m1WHpJTOTQduV+Oa5QafhKLhndEnbJCyHCGW0MffLTiAqKFp5jxEdYPaIKqmhjiVBXPe3bBlLPCBSoVEScXYmjHeqhQQhFBnMrUQMs+U7E9MDk7lKC3tKJolS10OUtsiMT9y6dw4owfguLs1yVQ3/jY2Fj5tzR48GD97nz7bDAY0KVLF7nMFStW5Fs5zJgE7JGAJiK/8cYb9lh91pkESKAQEBDxYJ2cnCAmKPnKGYGYvf9AGW0ML7mi7/4Yqxkl35iFZh7jcC7R6td5sNOE8B1vwk2tS6W3d6mxVfMg61xlUVjrlatGFbGDjYg4sxITB/XDgHeGYvCb/TF8xmbciJOD96ZlkRiA3XOG49VXXsErvYdhzp4gZO+JNW12/C8tAYpWaXnY139PFa2AuEMD1Qu4C3rujJbblhq2Dx8+r7eE0X2uMRCbHynClUW0Ur8v/Ryql7OkrTPqb0SJ32v8SYx8TtnvpIpLYralyde38O+YE/iskXZMGVSrXk4V0UrAe/YVJBr88UMGa54KePtQLAz+y9BNi6dUpibq162CEtqNsfUC3LVocFn2GUWrLBExgQ0EhIte27ZtbUiZ+yQjR46UfytjxozJfWbMgQQcjIBY8VXcZ4SLD18kQAIkkFMCwq1NXEuOHz+e0yyK9HGxB/qjnDY2l8pjwEHrkWkN9xfAu9kUXMlghpWH+EyJCDy8Dis3HEVQfpaT3SoX1npltx1FMn0qwvaOQP3nXsaiK+pCVMYn8P24ISp4zcAFxQtWIRN/Bd92cUPNt9cjIBlIebABA2o8h1dX3KNwlUfnDkWrPAL5TLLJQrRKuvQ56qg3E68Vj5CKeJweo1pfOTXGmG0BSDKlIOTITHQqrYhL5fv9hTCT0KK0QOwS3Pr8hnviBpDyEFuHVFOEp2Lt8HOgMY1oJTl7Yeruq/A7vg9nQpNw74c2StpS3bDouggInwz/1WrcI5eXsCHEgOQYf/zaThW9Oq2Bf2QU4g0mRB77CgNeaI1GXabjTKxQx4wIWNJayc/5FaganE3YKVrZhImJsiDQoUMHuLm5ZZEq91//9NNP8nnevXt3/Oc//8l9hsyBBByIwNWrV+XfhxCu+CIBEiCB3BC4d+8eihcvLls25yafonpszkUrIxKjIyHCIMibPPYXFE2I9z+GbZv34bpwyTClIDZSTaOlTfMeiZjkVBgTohGp2x8Zm5LRDdEYg7vHd2Lj6jXYvPcCHiVasZQRVTBE4ubhbVi/dgsOXAuH5lgifxUfpSsnEtGJYqLfgHA/X2xeswF7Lj1Jk96mehXVk8ce2p1yE3MaFUO1kSeVRafUOhsfLEUryQU9/oxQz7NEXP7SE5JzF/z6UPMwMeD+Yi8UK+GN72/rzyJ7aHjhrCNFq8LZL7bVKgvRKvHCRNRWRas2yx8iNeEkRlZTBKLKw4/oAmMnw29aA0UQKv06dkfrRauaGCu76ylVSvabhvpynqXw2p6YNKKVa5/dlkDqplCs7VxMybP1fBy76gcRe8Tv9DJ0LynqUBb9ZDPiLGJaGWMRdOkgNi+dgfe7VFLykzpiQ5htiEQqila2s2LKzAkMHTpUPv/+3//7f5knyuU3R44ckQfQ9erVQ36Wk8tq8nASeGYEhEugsIy4cuVKpnUwxj7E9QtnceHGI8RZvN6tpzfG4MG18zh78TZCMnuIsX4k95IACTgAgeHDh8vXlEOHDjlAawq2CTkWrYxhOLFokDn2riSstHwf48qCHqisPreU9FmP4KBf4C3+r9IKL/d9CwMHDcQb3prXhniWqIyhvuEIObYEwxupzxwivT7OL1IRffZ79KnljLLeH2DSuy3hLNK4dcNXJ6OgSQwiLEn0mfl4zb04ynUci29mDkAdpxLwGLoW92UfLwOC989G78rKc5S4D1V/91csHFBPfTYR+13QfvZFqGHo8eSp9SrYvmJpOSAQexADyksQ52KI5USBIlo5odOaEFm0MkXsRF/hjdT6JzzQjTkMd+ahsSSh6vvHdM/cOagHD5EJULSy5xMhC9HKcjMphd67ooGo7fBRXfhaLA5Is7JG1LbucnwrSWqNZUGpOkurJvj2jsUXzxTyO9qpN5QO68PSiFb1ply2BGAU6rSn5cIuLu7pN6+VjwBkIlol3cOmcd1R29lynJMsdon/O2GTsmSdTb1H0comTEyUBYE5c+bI5/CFCxeySJmzrwMDA+VVLV1dXXHjxo2cZcKjSMCBCYiJD3EfEUHYrb1MMZfw09AWKKu/37h54YNVfsgQfsIUj+u/DEcLV8s9RpKqw2fqPoToBp3WyuE+EiABxyEg7r1iwZN27do5TqMKqCWW5wxxHc2me6DhDr5tol1/y8Jn4ltoVKsOKmrXb++fEfDgF3T2HIlDcjwSAImXMb2xdowE115roUQ1SUXwKm/Lc4ZOtDL4r8Ar4jrv1A4/C0UhZi/e0K77dT/DRTXOVtLN79FJPGcU74SVQeImEIXd/RSBrOmMq+rzTQJOj6puKce5LgYsOYIrB6aiqVZvl1fwR7hmxZV5vQqoi1hMbggYAvCjtxMkqQreXBeoWtGl4M4Cb5Ss1B9/hipKVsx+xU224pDDacWp2P3oJ2I61xiN0yJcM1+5IkDRKlf4nvHBTxWtEnDmYy0QuzeWCle+uL8xyE252Nf65AIs8RAN8P+huXIRdvbBnxF6S6saGK37pRnuzkcT+cLsgl5CCNPFtPKce8tiFmsMxNJWSllOraZi/fbt2J5m24ljgaIG1kSrJFyd3kipT8k2+Oin3bjwIA6R5oCPXbDVvFRi1n1A0SprRkyRNYHNmzfL5+TGjRuzTpzNFPHx8WjatKkcFHbnzp3ZPJrJSaBoEOjXr5/8G7x06VLGBifdxILOLsp9Q3t4ML+XQfdFt3RxJYwI2fomKpm/tzwE
CVGs6bSLaVwBMhbGPSRAAvZO4NNPP8WLL74obzVq1JCvHc2bNzfv07572vuqVavsHUOu6p8r0coYgMUtdNfeOmNxNNqI6MtrMH3cF/jtSiyMoX9h6nLt2SIFdxa2s8S3de6IJfcsblcRmzpZrv+aaGWKxN633JT9z3+hxNQyPsDyDsWVfeUHQA7DlRqCTT1LK/saz4MyV5+C2980VvZVfAd/y+G6knB5Ul1lnyTBbaAv5IjBiecw3l1rSwNM97OE37Zar1xR58EFR8CEmFOT1OdeN/jMPYabO0ahpcfr+OFSrOoamIxrMz3kc6LhrBuW52BRycTzmCCfF95YFawz1Sq4BjhUSRSt7Lk79aJVh9V4EBuL2NhoRDy6jgM6s1vXVzcpZo2mJ9jco5RysXXrg9/kQFWA4fFOjKilXGydu66CmGDQx7Sq/PafeCwmHUxxOD9NvYBLrbDwviGNaNV43h3djzUR57TVC6sMwZ5w5ceaeHU++r74KgaN+QZ7g4UFVyT+6CpUbAlSp42QDahMIfi9rVofnz8QIfdRMm7MUYUsqTO2FLBoVaVKFYgBzg8//IAtW7bgxIkT+Ne//oWkpMIU7dGeT+bCX3fxoCzO01mzZuVpZU0mE7TA0sKaiy8SIIGMBK5fvy6Lur179874JUx4srW3eRWrKn2X4nRgEC6tGwlPJ/VBwuUlrNUGjUmXMVlbkKRYC0zefx/Bfr9jQFU1bUkfrHvMAaYV0NxFAg5DQMSNlMeemYjXtnwnJkWL8it7opUqGmnA0olWDab56SYWtESWd+PD3/FKGfUaLUloNsvP4t0BwKo4FK2zqmr6PbSF1JMC92Dhl9Pw49+PIfuSRGxDD82bo9UCXAgIRGDgfZya3UQ9R+rjy6tCiEorWtUYfUaZ4Ei6jCn1tLq5Y8J5i1mA1XpZmsVPhZ6AESG7x6JpcbV/K/bFxmC9OXYcjgxVwte0+TlI524KIPkqvqwvjvPAzOsWIbPQN7mQVpCiVSHtGJuqpRetMrvpVumHtQ8s7n0Jl2ehjfbDkyqhYcvGqK654JXwxuzLiv2iXrQSN+5ynl3Q3asGnNRy3Pr/oQhhOkurtKIV5BUAu5dSf+TVO6LvoDfgpVp6SfU/wek40cpYHHyrgnpTKIt6jdrgvUOPcWLEc+q+Knhh5CR8MriteSlbSWqlWI7ZBClvYlo9bfBSoUIFNGnSBK+88greffddfPnll/j5558hLGYuXryIx48fw2jUX+BsrDiTFSoC0dHR8jk5ZMiQPK2XOF/E+fXWW2/hv//9b57mzcxIwFEIvPnmm/LvxLp7bix8+5dV7xnNsEBMqMivRJz/pJa6vxg6/hYsDyiTr01HA/VeVvrV7erS6Cm49bU2MeIMny1hGQP5OgpMtoMESAAUrXJ/EtgsWt35Fl4tZ+Ca/rk9jWhVEj2etsJSahh2vF1RvZZLkOpOwCn5GcLSBmviUMqtr9FIez6ql/nqhSk3ZqGhls6lCXr2H4ABA3TbwNFYeetpotWVNKLVuHMUrSw9Y/+fDI/34LPurdCkovJMW7LNp/B9oj3XWZ5j268JTTtuSNLOiwaYkebkt38mz6IFFK2eBfW8KvMpopVL1cZ46cMlOBqiDd61QlMReWYxhnpXtVz8JQmuTQdi0TlLQEKzaFW6B35aMRKtNP9vSUKtf3yP8zGqv/ZTRCsR1DDy1Hz09UjrsuHWdjQ2N9dO4AAAIABJREFU+Wt3LhOij01AE7OQ5oQOvwYjJXQPxrdWTXXFjcS5IQbMnoOX5XqUQo9NoVqDsnzPC/fALl26wN/fX14WWbiJLViwABMnTpSFhs6dO0MEzi5ZsmQapnqhq1ixYqhWrRq8vLxkq5pRo0Zh7ty5+P3333Hw4EEIK4LIyEiKFln25rNNUKlSpTyNe7F161b5nGndujWt9p5t17L0Qkzg5s2bspVVz549rdfS9ATrOqgTJM4v4S/ZX0NJGvXXS0rQXeHKMehvOd5E+OZuKK4+oHjOvmm2EI49+BYqqPtrjj1LF0HrtLmXBByCQHrR6mlugPrv9GO7om5pFXdooG5CuRz6H5B96DKcH0mXp6BZ2x/grz3nixRpRKvyGJDJsWJFwZgjH8FdE5Wk5zB8f2RacSATS6vk6zPhoR3n/CI2ieXRrbySr80wT2SI+ENnMo0/lJmllSZOiPuQOyhaWYFsp7sMD9ZjYH1PjPk7EilPfDHJS3nWc2o6Gafl1e1FnDPFvVjEak5jo51wFmNrinOiJZaIMD185YoARatc4bPng01ICrsPv4uXcPNhbJqg7NZaZUoOx70rl3AjOCHtD9Ja4gz7DIh+cAMXz13EzeA4q2Wlxj3EtfPncTUg0vwAAVMywu5cxvnLdxGWbP1Gk6EoKzv0opUYbAj3Els2/cDkhRdesJJz2l3CSkas+CYEKCFECUFKCFNCoBLuX0KwEsKVELD0ees/C+Grbt26EEKYsLwRwpgQyDZt2iQLZvfv30dCQqZ307QV4n95TqB9+/aoWLFinuQrVj8rXbo0nnvuOQQHB+dJnsyEBByRgLgWiuvkuXPnMmleDPa8prq+S23wc5A2bDQhZHUHy/W23WqEmFJwa65mUSXB+9fH5oefxHPjzA9Gzj12KrFKMimRu0mABOybgF608vHxkcM+iNAPWW1a/CtxTSrqolXSpUmoq4lCkjNe3qGbMdCdHlE7XkPt3jssK4yL79KLVnJwKd1B2seEC/hCt7BT+T6boXlvp8bcwaXHSlwra5ZWpicb0V1dgEqSSqPn5lCrzzCmkDXopLmSS56YdcMSK0urhvJO0SotDwf/zxCIFd1KwaX7WvM5Z4o5i1ntFOGqxbe3kYJUPFqpLALgMfN6WhfXWF/0F6sKlusPX+t6roMDzNvmUbTKW57MrRASSC9a6UUiWz/bIlrZ2nThKihcBoXroHAhFK6E06ZNk10LhYuhcDUULodPq1u5cuXQuHFjvPTSSxg2bBimTp2Kn376CTt27MD58+dlEeQ///mPrVViOhsJCNdA0S9RUVE2HmE9WVhYGNzd3eHs7IwzZ85YT8S9JEACuHXrlmxlJa6Nmb+MePhrF3OA3kaTjiEyFTBFn8a0FmrMRPFg1fwH+BuScOnzOubrayfdUrRJlydblmDvuAFhmRfIb0iABOycAEWrPOjA2EN4R3WZEmOj56deTfvQLopIfYwNr7mjy/IHaSetjf5YZA7EntnKg8m4Nd9LXd1cguTyAn4O0DxIDLi3bAi+UM2irIlWMD7Er92czdd7qc5oHBI3B/mVioizW3HokQEw3MMCdfEo0Y4KfdbjoVaMKRbnF0zCJnlFQYpWKrwi8Zb66Bd4SxIafpVWjBLx1V4uLcGpszJOEO6lnsJr6Y29iNGRSQ36GW0kCSW6rTGLXrqv+TGbBChaZRMYk9sfgcImWtlKMDExUQ72Lmb9RPB3EQReBIMfOHAghLvi888/DxeXtK6XeqHLyclJtuIRrmdiifiPPvoIs2fPxm+//QZfX19cu3YNERERdEm0tUMAmZ9gLATHnL5SUlLQsWNHeRA
l+oIvEiCBzAmI6534zWUl7ppiT+Nz3Wy8c9XaqOaiugxqlgAtFiPAmIQrU+qZH2I6bVKW+hA1EC4s9bS0HdZTtMq8W/gNCdg9AYpWedGFKbj7Y3eU1q6bJdviy0OhSnBzsX5TYgB2fd4B7p3mQA2Zayk05QZmNdSu0a54Y5/+cV9JZgj8BT5abFzJCa2/uaGIYqYkBO6ciFblm2DuLWEVlYrgXxRrF3kc7P0rHmtRTC58iaZa/cR7rZcw8stZ+HKED5q/vAA35PWUTIjYNxzVdemqdP8Ic5YsxsxhXdF92lnEy1VKwKmPqpvvH27vHJZdzpFwGqOqa22phGFHldRPq5cFBD8VVgKmkLXoUkxC7U8vpgn6j9RHWOHlhDJ9VJHK4I8f2xeHVHMszuqcYaL29EM5qRz6/MkYmXnRxxSt8oIi8yjUBJYtWybHIWrXrl2O30ePHl1o2yisfkTMl0OHDmHNmjX4+uuvMWbMGPTp0wfe3t4QpuxPc0kU1j516tSRhRQReHLChAn47rvvsHHjRhw9ehT37t3D//3f/xXa9hdkxYSbphgQifecvt5//305j3HjxuU0Cx5HAkWCwJ07d+Rrl7AoteVlCN6DyS/o4zU+h+7v9Te7/Ekd1+OJKQW3zQHXJbT9PcS6e+ArO+geaAt0piEBOyVA0SqPOs4UB79VH8CrgibaSChbqxGaN62PquWqwuu9lbgix/7RlWcMxd9f90FNnUjk0uYDLPANNgteMIVj1wA3s0AkSS5o2LEbOrdrgecraha07bEmNBmPD3yD/ubV+yRITg3w9jxfyIuUw4DgnRPRQVc/SSqJhv3m45TZ6koobPG4tuxtNDC7E4r2uKHd+J1QFoszIOivKeiuz6dsN0zfdw2+07qirK4t5bp+hj8C47Ool44HPxZOAqlPsH1ARUh1xuOk7hw2he/CQPf6GHU0xjx+SLgwDU2cq2DIPjXemuEBfvFxhWv3pbiXmbdp4Wx1oa0VRatC2zWsGAnkHYHU1FSEhITg0qVL2L17N5YvX47p06fjvffeQ48ePdCsWTO4uekHB5bBhzxrJUkoW7YsPD09IQKSDh06FFOmTMHSpUuxfft2OdbMw4cPYTBo9tR5V/fClJOwsBI8hMVaTl6LFy+WjxcP4VxRMicEeUxRIjB48GD593Lq1KlsNDsVcUF+OHf2CgJijIg/9h6qqA8TVd47Ls+Wh29+wRyIvdHXIiaF8oo9MADl1bTmpcyzUTKTkgAJ2A8BilZ53FeGKNw9sx/bNq7F6rUbse3AefjHFKLg0ynhuHliP3btO4arjxLMYkN6CqaEh7h0eC92+57CzSfJmaZLfxz/d0wCpthLWDKgMWq1HY7ZK9ZhzbKZeLdnD4xedzet9RVSELRtNNo26Iz3pn2Fj19rhTYDvsfpKM0d1TH5FGSrKFoVJG2WRQKFnEBSUhICAgIgHhLFynaLFi3C559/jkGDBqFbt26oX78+SpXSAh5nFLaES2KVKlXQsmVLvPrqqxg5ciRmzZqFVatWYf/+/fjnP/8JEc/JZMp5YP2CRChihAmLM21bsWKF/BDdqVMn8z7tu8ze//Wvf8lV/vvvv1G8eHGZYXS09WClBdk2lkUChZnA3bt3ZSsrIZLb8koMOIA1S+fjqy9mYPVt2d8DQDKuf9VQ/s2KmfWXNj+RH0BE7CoteHCZ13cgQr4cGXDn2yZqWme8uElJa0vZTEMCJGB/BCha2V+fscYk8GwImJAcdhsXThzDqSuBeJoWa0p8jBvnz+HG45wsXPZsWmcvpVK0speeYj1JoBAREKKLCJAshJi1a9di3rx5GDt2LPr27Yu2bduiZs2askCjWWmlfy9RogRq1aqFDh06oH///hg/fjzmz5+P9evX48iRIxBuQXFxcc+8xaId6eue3f+FYOfv7y9bsokA+rdv337m7WIFSKCwE9AWPRAx/Wx5xR0ZikqqlVSF15bDLyoWQQdnolNpVVyv0B87wlWxPPEcxrmr+4u3wfQjjxB+eyOG1FD3Ob+A1cGcHbWFO9OQgL0SoGhlrz3HepMACRRFAhStimKvs80kUAAEhEtiaGgorly5gj179mDlypWYMWMGPvjgA/Ts2RPNmzdHxYoVnyoKubq6omHDhhCDy3feeUdeXnrJkiXYtm2bHJg5KCgIIrB5fr3yQrT68ccf0ahRI9lqZN++fflVVeZLAg5D4P79+/LvJVurtiZeSrMselpxuTL6rX1giZUCIx5t+Acq6GKQ6NN7fHpaCa7rMETZEBIggfQEKFqlJ8L/SYAESKDwEqBoVXj7hjUjgSJBIDk5GQ8ePMDp06fx559/QsR9mjRpEkQ8G/HQ6uHhgdKlSz9V3KpUqRJatGiBXr16YcSIEZg5cyZ++eUX7N27VxbNnjx5kiOXxLwQrZo2bSrX/dtvvy0S/clGkkBuCQwbNkz+zRw7dixbWSXdW4cRLV3TXiuqv4CJf/iniz0hgu7GwW/5UDQro3dzroROn2zHw/zTwbPVHiYmARLIPwIUrfKPLXMmARIggbwmQNEqr4kyPxIggXwhEBsbK7vWHT58GOvWrYMQgT7++GP069cP7du3h7u7O4Tbod5iQv9ZxJMSacQqkuIYcaxwaxR5/c///I+cd0xM2iWX9aKVWFnx4MGDNm36csVnIcDxRQIkkDUBEQNO/Fa7du2adWKrKVIQcfc8jv19FGeuP0ZiVuHzDJG4f/kUTpzxQ1AcXQKtIuVOEnBAAhStHLBT2SQSIAGHJUDRymG7lg0jgaJHQAR4F1ZVV69ela2shLXVV199JVtf9e7dW7bGqly5cqbClhCYypQpgwYNGsiB5/VB54X1h4ivk9V2/PjxNPnXrl0b//73v4teZ7DFJJADAu+++678+xGx7fgiARIggfwioBetxL1fTFLZsuknpSZPnpxf1WO+JEACJEACOgIUrXQw+JEESKBoEBBxsEQ8rLNnz8rxsUTcKTH4FMGfxUBWxNES4pV+cJpT0eq7774rGlDZShLIJQGxcqmwsurcuXMuc+LhJEACJPB0AulFK/393tbPFK2ezpjfkgAJkEBeEaBolVckmQ8JkIDDEahevbpZuMqpaCVWD+SLBEggawLvv/++/HsTq5LyRQIkQAL5SYCiVX7SZd4kQAIkkLcEKFrlLU/mRgIk4EAE9DGtKFo5UMeyKYWOgFiMQcSk69ChQ6GrGytEAiTgeATEisaff/55rrb9+/c7HpgsW2TEk8PfY+ywwRg0aJBlGzwMo774Br+eCINRyyM1Cud//Qrj3nvHnG7I2GXwS9AS5MW7EY9952PMUEt93v/6bM5XgDUGY9soL1RxrQKv0dsRbG5MXtSVeZAACeSUAEWrnJLjcSRAAg5PgKKVw3cxG1hICIwcOVK2shKLHfBFAiRAAiRQmAmkInzfCNSUdKuvNpyFG8nW65x8bxm6OEtw9voCfz/JDxUoGde+ami2jJc6rEdYVotwWK8qUm7Ohqe5XZ6Yc4vLyWaCirtJoEAJULQqUNwsjARIwJ4I5L9oZURswDn4bt+E9Ru34cCFIM
2gaJVnqFkRiRAAiRAAiTg+ASKnmgFpNxbgdcqWxfjKry0CDctHoqOfwKwhSRAAiQgokNdmobGZuskCVLNkfg7WhN79IgMuL/4RXT85hb+HbIGnczxAT0x64blQV9/hPicXrQ6mI+ilSlkNTqY69UIX5sDclO0St8vDve/KRYX5raDsyShWIdVMHuGpm+oMQBLvMqhzSJdvLbUCPh+2Bb9Vj9IY/1N0So9vNz/T9Eq9wyZAwmQAAmQAAkUGQJFUbQSnZsacwu+v36LqeNHYeSIkRjz2Sws23Ud0dn1NiwyZwobSgIk4LAEkm/iW69iZpdvSaqMwbvCrVipmBB/dT46l3JGty0RgOEeFrSyTABU6LMeDzVPK1Mszi+YhE1qNGyj/yJLIPbyA5CfohVSbuPbZlq9XNBja5hqPRuLI+9XNbez5pgzcswih+3XotqwhDP42F2CVPtTXLTi3SqwGAN+hFc5Lyzx105YAOEb0dksdmrnT/r32phgDuxfVAHnvt0UrXLPkDmQAAmQAAmQQJEhEBoaim3btuVq++uvv4oMLzaUBEiABByLgAH3f+qCkjorq/J9NuNxOgE/Nc4fBxcPQ1N5tdYqeO9YvLy4RsS+4aiuO7ZK948wZ8lizBzWFd2nnYVIJV4izpQ5/pXrG9gXo36hf0sNxi/eFpHA+9fHZlfthFMfWcpxeweH48SBCTg9qrpZhKo07KhaXiqe7HgblbV6NZ6EY+ExuLPxAzQqbcm/5Iu/41G6duqrw8/2SiAKf73kDKnlYgRY7V8jApZ6o7zXkrTfJ97G5oXzMG9e2u2r4R7yOVbzrWn45rtfcDTUaqb2CuuZ1Jui1TPBzkJJgARIgARIgARIgARIgARIwL4IGB+uQU9Xi5AjrxhYpTbq1q2rbHXcUb1SGbMwpMQ91LncmeJxbdnbaFBCn4cb2o3fiWD12d4Y+je+7lNTl4cL2nywAL7BOisXw2Mc+KY/6mlCkyTBqcHbmOcbDEPQX5jSvYLu+LLoNn0frvlOQ9eyunLLdcVnfwRCdlI0xeDctz3xnC6/ij5fYdWYuko+paqhUcfXMWrZBcRY84K0r25kbfUEDPexsIUrWn93RzkX9N+Jz8ZALPUuD+8fA2CL/ET3wPQAc/8/RavcM2QOJEACJEACJEACJEACJEACJEACNhIwJTzEpcN7sdv3FG4+STZbSNl4eL4lSwm/gWN7d8H3XBASTKmI87+Eq4HZXbE236rHjHNDwBQHv9++wMS5m3AlKlXNKQl3V/WFR6dZuKiZ+aUrwxj4E7zLe2OpdTOsdKkZiD0DkDzYQdEqDyAyCxIgARIgARIgARIgARIgARIgARIggUJKwBiE33uUVSznSrjD+5VX0dOnO3qP+RVXYzMznzMicFlbVPD+CYG2mFmBolV+9D5Fq/ygyjxJgARIgARIgARIgARIgARIgARIgAQKD4HUOARdPYWjh4/i1OX7iMh8AUu1zkaEX/TF3zeirSw0UHia5eg1oWjl6D3M9pEACZAACZAACZAACZAACZAACZAACZCAHRKgaGWHncYqkwAJkAAJkAAJkAAJkAAJkAAJkAAJkICjE6Bo5eg9zPaRAAmQAAmQAAmQAAmQAAmQAAmQAAmQgB0SoGhlh53GKpMACZAACZAACZAACZAACZAACZAACZCAoxOgaOXoPcz2kQAJkAAJkAAJkAAJkAAJkAAJkAAJkIAdEqBoZYedxiqTAAmQAAmQAAmQAAmQAAmQAAmQAAmQgKMToGjl6D3M9pEACZAACZAACZAACZAACZAACZAACZCAHRKgaGWHncYqkwAJkAAJkAAJkAAJkAAJkAAJkAAJkICjE6Bo5eg9zPaRAAmQAAmQAAmQAAmQAAmQAAmQAAmQgB0SoGhlh53GKpMACZAACZAACZAACZAACZAACZAACZCAoxOgaOXoPcz2kQAJkAAJkAAJkAAJkAAJkAAJkAAJkIAdEqBoZYedxiqTAAmQAAmQAAmQAAmQAAmQAAmQAAmQgKMToGjl6D3M9pEACZAACZAACZAACZAACZAACZAACZCAHRKgaGWHncYqkwAJkAAJkAAJkAAJkAAJkAAJkAAJkICjE6Bo5eg9zPaRAAmQAAmQAAmQAAmQAAmQAAmQAAmQgB0SoGhlh53GKpMACZAACZAACZAACZAACZAACZAACZCAoxOgaOXoPcz2kQAJkAAJkAAJkAAJkAAJkAAJkAAJkIAdEqBoZYedxiqTAAmQAAmQAAmQAAmQAAmQAAmQAAmQgKMToGjl6D3M9pEACZAACZAACZAACZAACZAACZAACZCAHRKgaGWHncYqkwAJkAAJkAAJkAAJkAAJkAAJkAAJkICjE6Bo5eg9zPaRAAmQAAmQAAmQAAmQAAmQAAmQAAmQgB0SoGhlh53GKpMACZAACZAACZAACZAACZAACZAACZCAoxOgaOXoPcz2kQAJkAAJkAAJkAAJkAAJkAAJkAAJkIAdEqBoZYedxiqTAAmQAAmQAAmQAAmQAAmQAAmQAAmQgKMToGjl6D3M9pEACZAACZAACZAACZAACZAACZAACZCAHRL4/0Nl0AOBlBdKAAAAAElFTkSuQmCC)", "_____no_output_____" ], [ "##Code", "_____no_output_____" ] ], [ [ "distances = {\"Arad\": 366,\n \"Bucharest\": 0,\n \"Craiova\": 160,\n \"Drobeta\": 242,\n \"Eforie\": 161,\n \"Fagaras\": 178,\n \"Giurgiu\": 77,\n \"Hirsova\": 151,\n \"Iasi\": 226,\n \"Lugoj\": 244,\n \"Mehadia\": 241,\n \"Neamt\": 234,\n \"Oradea\": 380,\n \"Pitesti\": 98,\n \"Rimnicu Vilcea\": 193,\n \"Sibiu\": 253,\n \"Timisoara\": 329,\n \"Urziceni\": 80,\n \"Vaslui\": 199,\n \"Zerind\": 374}\n\ngraph = {\"Arad\": ['Sibiu', 'Zerind', 'Timisoara'],\n \"Bucharest\": ['Fagaras', 'Giurgiu', 'Pitesti', 'Urziceni'],\n \"Craiova\": ['Drobeta', 'Rimnicu Vilcea', 'Pitesti'],\n \"Drobeta\": ['Craiova', 'Mehadia'],\n \"Eforie\": ['Hirsova'],\n \"Fagaras\": ['Sibiu', 'Bucharest'],\n \"Giurgiu\": ['Bucharest'],\n \"Hirsova\": ['Eforie', 'Urziceni'],\n \"Iasi\": ['Neamt', 'Vaslui'],\n \"Lugoj\": ['Mehadia', 'Timisoara'],\n \"Mehadia\": ['Drobeta', 'Lugoj'],\n \"Neamt\": 
['Iasi'],\n \"Oradea\": ['Sibiu', 'Zerind'],\n \"Pitesti\": ['Bucharest', 'Craiova', 'Rimnicu Vilcea'],\n \"Rimnicu Vilcea\": ['Craiova', 'Pitesti', 'Sibiu'],\n \"Sibiu\": ['Arad', 'Fagaras', 'Oradea', 'Rimnicu Vilcea'],\n \"Timisoara\": ['Arad', 'Lugoj'],\n \"Urziceni\": ['Bucharest', 'Hirsova', 'Vaslui'],\n \"Vaslui\": ['Iasi', 'Urziceni'],\n \"Zerind\": ['Arad', 'Oradea']}\n\ngraph_ext = {\"Arad\": [('Sibiu', 140), ('Zerind', 75), ('Timisoara', 118)],\n \"Bucharest\": [('Fagaras', 211), ('Giurgiu', 90), ('Pitesti', 101), ('Urziceni', 85)],\n \"Craiova\": [('Drobeta', 120), ('Rimnicu Vilcea', 146), ('Pitesti', 138)],\n \"Drobeta\": [('Craiova', 120), ('Mehadia', 75)],\n \"Eforie\": [('Hirsova', 86)],\n \"Fagaras\": [('Sibiu', 99), ('Bucharest', 211)],\n \"Giurgiu\": [('Bucharest', 90)],\n \"Hirsova\": [('Eforie', 86), ('Urziceni', 98)],\n \"Iasi\": [('Neamt', 87), ('Vaslui', 92)],\n \"Lugoj\": [('Mehadia', 70), ('Timisoara', 111)],\n \"Mehadia\": [('Drobeta', 75), ('Lugoj', 70)],\n \"Neamt\": [('Iasi', 87)],\n \"Oradea\": [('Sibiu', 151), ('Zerind', 71)],\n \"Pitesti\": [('Bucharest', 101), ('Craiova', 138), ('Rimnicu Vilcea', 97)],\n \"Rimnicu Vilcea\": [('Craiova', 146), ('Pitesti', 97), ('Sibiu', 80)],\n \"Sibiu\": [('Arad', 140), ('Fagaras', 99), ('Oradea', 151), ('Rimnicu Vilcea', 80)],\n \"Timisoara\": [('Arad', 118), ('Lugoj', 111)],\n \"Urziceni\": [('Bucharest', 85), ('Hirsova', 98), ('Vaslui', 142)],\n \"Vaslui\": [('Iasi', 92), ('Urziceni', 142)],\n \"Zerind\": [('Arad', 75), ('Oradea', 71)]}\n\nalgorithms = {'A': \"for A*\",\n 'B': \"for BFS\",\n 'D' : \"for DFS\"}\n\npurpose_string = 'The purpose of this app is to find the shortest path to Bucharest. '\ninvalid_input = 'Invalid input! Please try again. \\n'\ndepart_info_string = 'Destination: Bucharest\\nSelect a starting location: \\n\\t -'\ndepart_input_string = 'Departing from: '\nalgorithm_info_string = 'Select an algorithm: \\n\\t -'\nalgorithm_input_string = 'Algorithm: '\n\n\ndef get_input(user_input, table):\n return user_input if user_input in table else False #check if input is valid\n\n\ndef select_city(error=False):\n if error: #if the user put an invalid input\n print(invalid_input) #print an error message \n print(depart_info_string, '\\n\\t - '.join([location for location in graph])) #print all of the locations\n val = get_input(input(depart_input_string), graph) #get the user's input \n return val if val else select_city(True) #return the city\n\n\ndef select_algorithm(place, error=False):\n if error: #if the user put an invalid input\n print(invalid_input) #print an error message \n print(algorithm_info_string, '\\n\\t - '.join([f'{key} {value}' for key, value in algorithms.items()])) #print all of the algorithms\n val = get_input(input(algorithm_input_string), algorithms) #get the user's input \n return a_star([place]) if val == 'A' else bfs([[place]]) if val == 'B' else dfs(graph, [place]) if val == 'D' else select_algorithm(place, True) #call the algorithm \n\n\ndef bfs(q, end=\"Bucharest\"):\n p = q.pop(0) #remove a route from the queue \n return p if p[-1] == end else bfs( #if Bucharest has been reached, return the route \n # else add to the queue all potential routes from the current node and run again \n q + list(p + [adjacent] for adjacent in graph.get(p[-1], []) if adjacent not in p)) #if the node has not yet been traversed \n\n\ndef dfs(g, route, end=\"Bucharest\"):\n for adjacent in g.get(route[-1]): #for all potential routes from the current node\n if adjacent not in route: #if the node has not 
yet been traversed \n return route + [end] if adjacent == end else dfs(g, route + [adjacent]) #if Bucharest has been reached, return the route, else run deeper \n # else if no potential routes that have not yet been traversed\n return dfs(g, route[:-1]) if not g.get(route[-2]).remove(route[-1]) else -1 #remove the current node and walk it back one step \n\n\ndef a_star(route, end=\"Bucharest\"):\n return route if route[-1] == end else a_star( #if Bucharest has been reached, return the route \n # else run deeper on the node closest to Bucharest\n route + [min(list([[distances.get(x), x] for x in graph.get(route[-1]) if x not in route]))[1]]) #if the node has not yet been traversed \n\n\ndef distance_calculator(route, val=0):\n print(f'Route: {route[0]}', end='') #print the starting location \n for i in range(1, len(route)): #for index from 1 -> route length\n for adjacent in graph_ext.get(route[i-1]): #for adjacent to the route at the index prior \n if adjacent[0] == route[i]: #if the adjacent matches the current index \n val += adjacent[1] #add to val the distance to the adjacent \n print(f' -> {adjacent[0]}', end='') #print the route \n print(f\"\\nTotal trip distance: {val}km\\n\") #print the total distance \n\n\ndef main():\n print(purpose_string) #print the starting information \n distance_calculator(select_algorithm(select_city())) #run the game \n if (input('Would you like to go again (Y/N)? ')=='Y'): #ask if they want to play again\n main() #play again\n\n", "_____no_output_____" ] ], [ [ "Run Code:", "_____no_output_____" ] ], [ [ "main()", "The purpose of this app is to find the shortest path to Bucharest. \nDestination: Bucharest\nSelect a starting location: \n\t - Arad\n\t - Bucharest\n\t - Craiova\n\t - Drobeta\n\t - Eforie\n\t - Fagaras\n\t - Giurgiu\n\t - Hirsova\n\t - Iasi\n\t - Lugoj\n\t - Mehadia\n\t - Neamt\n\t - Oradea\n\t - Pitesti\n\t - Rimnicu Vilcea\n\t - Sibiu\n\t - Timisoara\n\t - Urziceni\n\t - Vaslui\n\t - Zerind\nDeparting from: Lugoj\nSelect an algorithm: \n\t - A for A*\n\t - B for BFS\n\t - D for DFS\nAlgorithm: A*\nInvalid input! Please try again. \n\nSelect an algorithm: \n\t - A for A*\n\t - B for BFS\n\t - D for DFS\nAlgorithm: A\nRoute: Lugoj -> Mehadia -> Drobeta -> Craiova -> Pitesti -> Bucharest\nTotal trip distance: 504km\n\n" ] ], [ [ "Tests:", "_____no_output_____" ] ], [ [ "distance_calculator(a_star(['Arad']))\ndistance_calculator(a_star(['Bucharest']))\ndistance_calculator(bfs([['Craiova']]))\ndistance_calculator(bfs([['Eforie']]))\ndistance_calculator((dfs(graph, ['Drobeta'])))\ndistance_calculator((dfs(graph, ['Neamt'])))", "Route: Arad -> Sibiu -> Fagaras -> Bucharest\nTotal trip distance: 450km\n\nRoute: Bucharest\nTotal trip distance: 0km\n\nRoute: Craiova -> Pitesti -> Bucharest\nTotal trip distance: 239km\n\nRoute: Eforie -> Hirsova -> Urziceni -> Bucharest\nTotal trip distance: 269km\n\nRoute: Drobeta -> Craiova -> Rimnicu Vilcea -> Pitesti -> Bucharest\nTotal trip distance: 464km\n\nRoute: Neamt -> Iasi -> Vaslui -> Urziceni -> Bucharest\nTotal trip distance: 406km\n\n" ] ] ]
[ "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
4a3e0317120b089359f4df00987776e7feff08e1
28,151
ipynb
Jupyter Notebook
python/examples/ipynb/Earth_Engine_TensorFlow_AI_Platform.ipynb
MohammedBaz/earthengine-api
1cdb748479779eeba453a814537a4dcef55343d6
[ "Apache-2.0" ]
1,909
2015-04-22T20:18:22.000Z
2022-03-31T13:42:03.000Z
python/examples/ipynb/Earth_Engine_TensorFlow_AI_Platform.ipynb
MohammedBaz/earthengine-api
1cdb748479779eeba453a814537a4dcef55343d6
[ "Apache-2.0" ]
171
2015-09-24T05:49:49.000Z
2022-03-14T00:54:50.000Z
python/examples/ipynb/Earth_Engine_TensorFlow_AI_Platform.ipynb
MohammedBaz/earthengine-api
1cdb748479779eeba453a814537a4dcef55343d6
[ "Apache-2.0" ]
924
2015-04-23T05:43:18.000Z
2022-03-28T12:11:31.000Z
40.680636
746
0.570353
[ [ [ "#@title Copyright 2021 Google LLC. { display-mode: \"form\" }\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.", "_____no_output_____" ] ], [ [ "<table class=\"ee-notebook-buttons\" align=\"left\"><td>\n<a target=\"_blank\" href=\"http://colab.research.google.com/github/google/earthengine-api/blob/master/python/examples/ipynb/Earth_Engine_TensorFlow_AI_Platform.ipynb\">\n <img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" /> Run in Google Colab</a>\n</td><td>\n<a target=\"_blank\" href=\"https://github.com/google/earthengine-api/blob/master/python/examples/ipynb/Earth_Engine_TensorFlow_AI_Platform.ipynb\"><img width=32px src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" /> View source on GitHub</a></td></table>", "_____no_output_____" ], [ "# Introduction\n\nThis is an Earth Engine <> TensorFlow demonstration notebook. This demonstrates a per-pixel neural network implemented in a way that allows the trained model to be hosted on [Google AI Platform](https://cloud.google.com/ai-platform) and used in Earth Engine for interactive prediction from an `ee.Model.fromAIPlatformPredictor`. See [this example notebook](http://colab.research.google.com/github/google/earthengine-api/blob/master/python/examples/ipynb/TF_demo1_keras.ipynb) for background on the dense model.\n\n**Running this demo may incur charges to your Google Cloud Account!**", "_____no_output_____" ], [ "# Setup software libraries\n\nImport software libraries and/or authenticate as necessary.", "_____no_output_____" ], [ "## Authenticate to Colab and Cloud\n\nTo read/write from a Google Cloud Storage bucket to which you have access, it's necessary to authenticate (as yourself). *This should be the same account you use to login to Earth Engine*. When you run the code below, it will display a link in the output to an authentication page in your browser. Follow the link to a page that will let you grant permission to the Cloud SDK to access your resources. Copy the code from the permissions page back into this notebook and press return to complete the process.\n\n(You may need to run this again if you get a credentials error later.)", "_____no_output_____" ] ], [ [ "from google.colab import auth\nauth.authenticate_user()", "_____no_output_____" ] ], [ [ "## Upgrade Earth Engine and Authenticate\n\nUpdate Earth Engine to ensure you have the latest version. Authenticate to Earth Engine the same way you did to the Colab notebook. Specifically, run the code to display a link to a permissions page. This gives you access to your Earth Engine account. *This should be the same account you used to login to Cloud previously*. 
Copy the code from the Earth Engine permissions page back into the notebook and press return to complete the process.", "_____no_output_____" ] ], [ [ "!pip install -U earthengine-api --no-deps", "_____no_output_____" ], [ "import ee\nee.Authenticate()\nee.Initialize()", "_____no_output_____" ] ], [ [ "## Test the TensorFlow installation\n\nImport TensorFlow and check the version.", "_____no_output_____" ] ], [ [ "import tensorflow as tf\nprint(tf.__version__)", "_____no_output_____" ] ], [ [ "## Test the Folium installation\n\nWe will use the Folium library for visualization. Import the library and check the version.", "_____no_output_____" ] ], [ [ "import folium\nprint(folium.__version__)", "_____no_output_____" ] ], [ [ "# Define variables\n\nThe training data are land cover labels with a single vector of Landsat 8 pixel values (`BANDS`) as predictors. See [this example notebook](http://colab.research.google.com/github/google/earthengine-api/blob/master/python/examples/ipynb/TF_demo1_keras.ipynb) for details on how to generate these training data.", "_____no_output_____" ] ], [ [ "# REPLACE WITH YOUR CLOUD PROJECT!\nPROJECT = 'your-project'\n\n# Cloud Storage bucket with training and testing datasets.\nDATA_BUCKET = 'ee-docs-demos'\n# Output bucket for trained models. You must be able to write into this bucket.\nOUTPUT_BUCKET = 'your-bucket'\n\n# This is a good region for hosting AI models.\nREGION = 'us-central1'\n\n# Training and testing dataset file names in the Cloud Storage bucket.\nTRAIN_FILE_PREFIX = 'Training_demo'\nTEST_FILE_PREFIX = 'Testing_demo'\nfile_extension = '.tfrecord.gz'\nTRAIN_FILE_PATH = 'gs://' + DATA_BUCKET + '/' + TRAIN_FILE_PREFIX + file_extension\nTEST_FILE_PATH = 'gs://' + DATA_BUCKET + '/' + TEST_FILE_PREFIX + file_extension\n\n# The labels, consecutive integer indices starting from zero, are stored in\n# this property, set on each point.\nLABEL = 'landcover'\n# Number of label values, i.e. number of classes in the classification.\nN_CLASSES = 3\n\n# Use Landsat 8 surface reflectance data for predictors.\nL8SR = ee.ImageCollection('LANDSAT/LC08/C01/T1_SR')\n# Use these bands for prediction.\nBANDS = ['B2', 'B3', 'B4', 'B5', 'B6', 'B7']\n\n# These names are used to specify properties in the export of \n# training/testing data and to define the mapping between names and data\n# when reading into TensorFlow datasets.\nFEATURE_NAMES = list(BANDS)\nFEATURE_NAMES.append(LABEL)\n\n# List of fixed-length features, all of which are float32.\ncolumns = [\n tf.io.FixedLenFeature(shape=[1], dtype=tf.float32) for k in FEATURE_NAMES\n]\n\n# Dictionary with feature names as keys, fixed-length features as values.\nFEATURES_DICT = dict(zip(FEATURE_NAMES, columns))", "_____no_output_____" ] ], [ [ "# Read data", "_____no_output_____" ], [ "### Check existence of the data files\n\nCheck that you have permission to read the files in the output Cloud Storage bucket.", "_____no_output_____" ] ], [ [ "print('Found training file.' if tf.io.gfile.exists(TRAIN_FILE_PATH) \n else 'No training file found.')\nprint('Found testing file.' if tf.io.gfile.exists(TEST_FILE_PATH) \n else 'No testing file found.')", "_____no_output_____" ] ], [ [ "## Read into a `tf.data.Dataset`\n\nHere we are going to read a file in Cloud Storage into a `tf.data.Dataset`. ([these TensorFlow docs](https://www.tensorflow.org/guide/data) explain more about reading data into a `tf.data.Dataset`). Check that you can read examples from the file. 
The purpose here is to ensure that we can read from the file without an error. The actual content is not necessarily human readable. Note that we will use all data for training.\n", "_____no_output_____" ] ], [ [ "# Create a dataset from the TFRecord file in Cloud Storage.\ntrain_dataset = tf.data.TFRecordDataset([TRAIN_FILE_PATH, TEST_FILE_PATH],\n compression_type='GZIP')\n\n# Print the first record to check.\nprint(iter(train_dataset).next())", "_____no_output_____" ] ], [ [ "## Parse the dataset\n\nNow we need to make a parsing function for the data in the TFRecord files. The data comes in flattened 2D arrays per record and we want to use the first part of the array for input to the model and the last element of the array as the class label. The parsing function reads data from a serialized `Example` proto (i.e. [`example.proto`](https://github.com/tensorflow/tensorflow/blob/r1.12/tensorflow/core/example/example.proto)) into a dictionary in which the keys are the feature names and the values are the tensors storing the value of the features for that example. ([Learn more about parsing `Example` protocol buffer messages](https://www.tensorflow.org/programmers_guide/datasets#parsing_tfexample_protocol_buffer_messages)).", "_____no_output_____" ] ], [ [ "def parse_tfrecord(example_proto):\n \"\"\"The parsing function.\n\n Read a serialized example into the structure defined by FEATURES_DICT.\n\n Args:\n example_proto: a serialized Example.\n\n Returns:\n A tuple of the predictors dictionary and the LABEL, cast to an `int32`.\n \"\"\"\n parsed_features = tf.io.parse_single_example(example_proto, FEATURES_DICT)\n labels = parsed_features.pop(LABEL)\n return parsed_features, tf.cast(labels, tf.int32)\n\n# Map the function over the dataset.\nparsed_dataset = train_dataset.map(parse_tfrecord, num_parallel_calls=4)\n\nfrom pprint import pprint\n\n# Print the first parsed record to check.\npprint(iter(parsed_dataset).next())", "_____no_output_____" ] ], [ [ "Note that each record of the parsed dataset contains a tuple. The first element of the tuple is a dictionary with bands names for keys and tensors storing the pixel data for values. The second element of the tuple is tensor storing the class label.", "_____no_output_____" ], [ "## Adjust dimension and shape\n\nTurn the dictionary of *{name: tensor,...}* into a 1x1xP array of values, where P is the number of predictors. Turn the label into a 1x1x`N_CLASSES` array of indicators (i.e. one-hot vector), in order to use a categorical crossentropy-loss function. Return a tuple of (predictors, indicators where each is a three dimensional array; the first two dimensions are spatial x, y (i.e. 1x1 kernel).", "_____no_output_____" ] ], [ [ "# Inputs as a tuple. Make predictors 1x1xP and labels 1x1xN_CLASSES.\ndef to_tuple(inputs, label):\n return (tf.expand_dims(tf.transpose(list(inputs.values())), 1),\n tf.expand_dims(tf.one_hot(indices=label, depth=N_CLASSES), 1))\n\ninput_dataset = parsed_dataset.map(to_tuple)\n# Check the first one.\npprint(iter(input_dataset).next())\n\ninput_dataset = input_dataset.shuffle(128).batch(8)", "_____no_output_____" ] ], [ [ "# Model setup\n\nMake a densely-connected convolutional model, where the convolution occurs in a 1x1 kernel. This is exactly analogous to the model generated in [this example notebook](http://colab.research.google.com/github/google/earthengine-api/blob/master/python/examples/ipynb/TF_demo1_keras.ipynb), but operates in a convolutional manner in a 1x1 kernel. 
This allows Earth Engine to apply the model spatially, as demonstrated below.\n\nNote that the model used here is purely for demonstration purposes and hasn't gone through any performance tuning.", "_____no_output_____" ], [ "## Create the Keras model\n\nBefore we create the model, there's still a wee bit of pre-processing to get the data into the right input shape and a format that can be used with cross-entropy loss. Specifically, Keras expects a list of inputs and a one-hot vector for the class. (See [the Keras loss function docs](https://keras.io/losses/), [the TensorFlow categorical identity docs](https://www.tensorflow.org/guide/feature_columns#categorical_identity_column) and [the `tf.one_hot` docs](https://www.tensorflow.org/api_docs/python/tf/one_hot) for details).\n\nHere we will use a simple neural network model with a 64 node hidden layer. Once the dataset has been prepared, define the model, compile it, fit it to the training data. See [the Keras `Sequential` model guide](https://keras.io/getting-started/sequential-model-guide/) for more details.", "_____no_output_____" ] ], [ [ "from tensorflow import keras\n\n# Define the layers in the model. Note the 1x1 kernels.\nmodel = tf.keras.models.Sequential([\n tf.keras.layers.Input((None, None, len(BANDS),)),\n tf.keras.layers.Conv2D(64, (1,1), activation=tf.nn.relu),\n tf.keras.layers.Dropout(0.1),\n tf.keras.layers.Conv2D(N_CLASSES, (1,1), activation=tf.nn.softmax)\n])\n\n# Compile the model with the specified loss and optimizer functions.\nmodel.compile(optimizer=tf.keras.optimizers.Adam(),\n loss='categorical_crossentropy',\n metrics=['accuracy'])\n\n# Fit the model to the training data. Lucky number 7.\nmodel.fit(x=input_dataset, epochs=7)\n", "_____no_output_____" ] ], [ [ "## Save the trained model\n\nExport the trained model to TensorFlow `SavedModel` format in your cloud storage bucket. The [Cloud Platform storage browser](https://console.cloud.google.com/storage/browser) is useful for checking on these saved models.", "_____no_output_____" ] ], [ [ "MODEL_DIR = 'gs://' + OUTPUT_BUCKET + '/demo_pixel_model'\nmodel.save(MODEL_DIR, save_format='tf')", "_____no_output_____" ] ], [ [ "# EEification\n\nEEIfication prepares the model for hosting on [Google AI Platform](https://cloud.google.com/ai-platform). Learn more about EEification from [this doc](https://developers.google.com/earth-engine/tensorflow#interacting-with-models-hosted-on-ai-platform). First, get (and SET) input and output names of the nodes. **CHANGE THE OUTPUT NAME TO SOMETHING THAT MAKES SENSE FOR YOUR MODEL!** Keep the input name of 'array', which is how you'll pass data into the model (as an array image).", "_____no_output_____" ] ], [ [ "from tensorflow.python.tools import saved_model_utils\n\nmeta_graph_def = saved_model_utils.get_meta_graph_def(MODEL_DIR, 'serve')\ninputs = meta_graph_def.signature_def['serving_default'].inputs\noutputs = meta_graph_def.signature_def['serving_default'].outputs\n\n# Just get the first thing(s) from the serving signature def. i.e. 
this\n# model only has a single input and a single output.\ninput_name = None\nfor k,v in inputs.items():\n input_name = v.name\n break\n\noutput_name = None\nfor k,v in outputs.items():\n output_name = v.name\n break\n\n# Make a dictionary that maps Earth Engine outputs and inputs to\n# AI Platform inputs and outputs, respectively.\nimport json\ninput_dict = \"'\" + json.dumps({input_name: \"array\"}) + \"'\"\noutput_dict = \"'\" + json.dumps({output_name: \"output\"}) + \"'\"\nprint(input_dict)\nprint(output_dict)", "_____no_output_____" ] ], [ [ "## Run the EEifier\n\nThe actual EEification is handled by the `earthengine model prepare` command. Note that you will need to set your Cloud Project prior to running the command.", "_____no_output_____" ] ], [ [ "# Put the EEified model next to the trained model directory.\nEEIFIED_DIR = 'gs://' + OUTPUT_BUCKET + '/eeified_pixel_model'\n\n# You need to set the project before using the model prepare command.\n!earthengine set_project {PROJECT}\n!earthengine model prepare --source_dir {MODEL_DIR} --dest_dir {EEIFIED_DIR} --input {input_dict} --output {output_dict}", "_____no_output_____" ] ], [ [ "# Deploy and host the EEified model on AI Platform\n\nNow there is another TensorFlow `SavedModel` stored in `EEIFIED_DIR` ready for hosting by AI Platform. Do that from the `gcloud` command line tool, installed in the Colab runtime by default. Be sure to specify a regional model with the `REGION` parameter. Note that the `MODEL_NAME` must be unique. If you already have a model by that name, either name a new model or a new version of the old model. The [Cloud Console AI Platform models page](https://console.cloud.google.com/ai-platform/models) is useful for monitoring your models.\n\n**If you change anything about the trained model, you'll need to re-EEify it and create a new version!**", "_____no_output_____" ] ], [ [ "MODEL_NAME = 'pixel_demo_model'\nVERSION_NAME = 'v0'\n\n!gcloud ai-platform models create {MODEL_NAME} \\\n --project {PROJECT} \\\n --region {REGION}\n\n!gcloud ai-platform versions create {VERSION_NAME} \\\n --project {PROJECT} \\\n --region {REGION} \\\n --model {MODEL_NAME} \\\n --origin {EEIFIED_DIR} \\\n --framework \"TENSORFLOW\" \\\n --runtime-version=2.3 \\\n --python-version=3.7", "_____no_output_____" ] ], [ [ "# Connect to the hosted model from Earth Engine\n\n1. Generate the input imagery. This should be done in exactly the same way as the training data were generated. See [this example notebook](http://colab.research.google.com/github/google/earthengine-api/blob/master/python/examples/ipynb/TF_demo1_keras.ipynb) for details.\n2. Connect to the hosted model.\n3. Use the model to make predictions.\n4. 
Display the results.\n\nNote that it takes the model a couple minutes to spin up and make predictions.", "_____no_output_____" ] ], [ [ "# Cloud masking function.\ndef maskL8sr(image):\n cloudShadowBitMask = ee.Number(2).pow(3).int()\n cloudsBitMask = ee.Number(2).pow(5).int()\n qa = image.select('pixel_qa')\n mask = qa.bitwiseAnd(cloudShadowBitMask).eq(0).And(\n qa.bitwiseAnd(cloudsBitMask).eq(0))\n return image.updateMask(mask).select(BANDS).divide(10000)\n\n# The image input data is a 2018 cloud-masked median composite.\nimage = L8SR.filterDate('2018-01-01', '2018-12-31').map(maskL8sr).median()\n\n# Get a map ID for display in folium.\nrgb_vis = {'bands': ['B4', 'B3', 'B2'], 'min': 0, 'max': 0.3, 'format': 'png'}\nmapid = image.getMapId(rgb_vis)\n\n# Turn into an array image for input to the model.\narray_image = image.float().toArray()\n\n# Point to the model hosted on AI Platform. If you specified a region other\n# than the default (us-central1) at model creation, specify it here.\nmodel = ee.Model.fromAiPlatformPredictor(\n projectName=PROJECT,\n modelName=MODEL_NAME,\n version=VERSION_NAME,\n # Can be anything, but don't make it too big.\n inputTileSize=[8, 8],\n # Keep this the same as your training data.\n proj=ee.Projection('EPSG:4326').atScale(30),\n fixInputProj=True,\n # Note the names here need to match what you specified in the\n # output dictionary you passed to the EEifier.\n outputBands={'output': {\n 'type': ee.PixelType.float(),\n 'dimensions': 1\n }\n },\n)\n\n# model.predictImage outputs a one dimensional array image that\n# packs the output nodes of your model into an array. These\n# are class probabilities that you need to unpack into a \n# multiband image with arrayFlatten(). If you want class\n# labels, use arrayArgmax() as follows.\npredictions = model.predictImage(array_image)\nprobabilities = predictions.arrayFlatten([['bare', 'veg', 'water']])\nlabel = predictions.arrayArgmax().arrayGet([0]).rename('label')\n\n# Get map IDs for display in folium.\nprobability_vis = {\n 'bands': ['bare', 'veg', 'water'], 'max': 0.5, 'format': 'png'\n}\nlabel_vis = {\n 'palette': ['red', 'green', 'blue'], 'min': 0, 'max': 2, 'format': 'png'\n}\nprobability_mapid = probabilities.getMapId(probability_vis)\nlabel_mapid = label.getMapId(label_vis)\n\n# Visualize the input imagery and the predictions.\nmap = folium.Map(location=[37.6413, -122.2582], zoom_start=11)\n\nfolium.TileLayer(\n tiles=mapid['tile_fetcher'].url_format,\n attr='Map Data &copy; <a href=\"https://earthengine.google.com/\">Google Earth Engine</a>',\n overlay=True,\n name='median composite',\n ).add_to(map)\nfolium.TileLayer(\n tiles=label_mapid['tile_fetcher'].url_format,\n attr='Map Data &copy; <a href=\"https://earthengine.google.com/\">Google Earth Engine</a>',\n overlay=True,\n name='predicted label',\n).add_to(map)\nfolium.TileLayer(\n tiles=probability_mapid['tile_fetcher'].url_format,\n attr='Map Data &copy; <a href=\"https://earthengine.google.com/\">Google Earth Engine</a>',\n overlay=True,\n name='probability',\n).add_to(map)\nmap.add_child(folium.LayerControl())\nmap", "_____no_output_____" ] ] ]
[ "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code" ]
[ [ "code" ], [ "markdown", "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ] ]
4a3e29218e62014a5be7b52847eea1ec9bd2e4d5
38,908
ipynb
Jupyter Notebook
docs/source/user_guide/eda/introduction.ipynb
jwa345/dataprep
cb386934b0e151ef538c3873ae8fa37bb8bd1513
[ "MIT" ]
null
null
null
docs/source/user_guide/eda/introduction.ipynb
jwa345/dataprep
cb386934b0e151ef538c3873ae8fa37bb8bd1513
[ "MIT" ]
null
null
null
docs/source/user_guide/eda/introduction.ipynb
jwa345/dataprep
cb386934b0e151ef538c3873ae8fa37bb8bd1513
[ "MIT" ]
null
null
null
34.646483
651
0.604811
[ [ [ ".. _`userguide/eda`:\n\nEDA\n===\n\nThis section introduces the Exploratory Data Analysis component of DataPrep.", "_____no_output_____" ] ], [ [ "## Section Contents\n\n * [plot(): analyze distributions](plot.ipynb)\n * [plot_correlation(): analyze correlations](plot_correlation.ipynb)\n * [plot_missing(): analyze missing values](plot_missing.ipynb)\n * [plot_diff(): analyze difference between DataFrames](plot_diff.ipynb)\n * [create_report(): create a profile report](create_report.ipynb)\n * [Get intermediates: get the intermediate data](get_intermediates.ipynb) \n * [How-to guide: customize your output](how_to_guide.ipynb)\n * [Parameter configurations: parameter summary settings](parameter_configurations.ipynb)\n * [Insight: automatically insight detection](insights.ipynb)\n * [Case study: Titanic](titanic.ipynb)\n * [Case study: House Prices](house_price.ipynb) ", "_____no_output_____" ], [ "## Introduction to Exploratory Data Analysis and `dataprep.eda`\n\n[Exploratory Data Analysis (EDA)](https://www.wikiwand.com/en/Exploratory_data_analysis) is the process of exploring a dataset and getting an understanding of its main characteristics. The `dataprep.eda` package simplifies this process by allowing the user to explore important characteristics with simple APIs. Each API allows the user to analyze the dataset from a high level to a low level, and from different perspectives. Specifically, `dataprep.eda` provides the following functionality:\n\n* Analyze column **distributions** with `plot()`. The function `plot()` explores the column distributions and statistics of the dataset. It will detect the column type, and then output various plots and statistics that are appropriate for the respective type. The user can optionally pass one or two columns of interest as parameters: If one column is passed, its distribution will be plotted in various ways, and column statistics will be computed. If two columns are passed, plots depicting the relationship between the two columns will be generated.\n\n* Analyze **correlations** with `plot_correlation()`. The function `plot_correlation()` explores the correlation between columns in various ways and using multiple correlation metrics. By default, it plots correlation matrices with various metrics. The user can optionally pass one or two columns of interest as parameters: If one column is passed, the correlation between this column and all other columns will be computed and ranked. If two columns are passed, a scatter plot and regression line will be plotted.\n\n* Analyze **missing values** with `plot_missing()`. The function `plot_missing()` enables thorough analysis of the missing values and their impact on the dataset. By default, it will generate various plots which display the amount of missing values for each column and any underlying patterns of the missing values in the dataset. To understand the impact of the missing values in one column on the other columns, the user can pass the column name as a parameter. Then, `plot_missing()` will generate the distribution of each column with and without the missing values from the given column, enabling a thorough understanding of their impact.\n\n* Analyze column **differences** with `plot_diff()`. The function `plot_diff()` explores the differences of column distributions and statistics across multiple datasets. It will detect the column type, and then output various plots and statistics that are appropriate for the respective type. 
The user can optionally set the baseline which is used as the target dataset to compare with other datasets.\n\nThe following sections give a simple demonstration of `plot()`, `plot_correlation()`, `plot_missing()`, and `plot_diff()` using an example dataset.", "_____no_output_____" ], [ "## Analyze distributions with `plot()`\n\nThe function `plot()` explores the distributions and statistics of the dataset. The following describes the functionality of `plot()` for a given dataframe `df`.\n\n1. `plot(df)`: plots the distribution of each column and calculates dataset statistics\n2. `plot(df, x)`: plots the distribution of column `x` in various ways and calculates column statistics\n3. `plot(df, x, y)`: generates plots depicting the relationship between columns `x` and `y`\n\nThe following shows an example of `plot(df)`. It plots a histogram for each numerical column, a bar chart for each categorical column, and computes dataset statistics.", "_____no_output_____" ] ], [ [ "from dataprep.eda import plot\nfrom dataprep.datasets import load_dataset\nimport numpy as np\ndf = load_dataset('titanic')\nplot(df)", "_____no_output_____" ] ], [ [ "For more information about the function `plot()` see [here](plot.ipynb).", "_____no_output_____" ], [ "## Analyze correlations with `plot_correlation()`\n\nThe function `plot_correlation()` explores the correlation between columns in various ways and using multiple correlation metrics. The following describes the functionality of `plot_correlation()` for a given dataframe `df`.\n\n1. `plot_correlation(df)`: plots correlation matrices (correlations between all pairs of columns)\n2. `plot_correlation(df, x)`: plots the most correlated columns to column `x`\n3. `plot_correlation(df, x, y)`: plots the joint distribution of column `x` and column `y` and computes a regression line\n\nThe following shows an example of `plot_correlation()`. It generates correlation matrices using [Pearson](https://www.wikiwand.com/en/Pearson_correlation_coefficient), [Spearman](https://www.wikiwand.com/en/Spearman%27s_rank_correlation_coefficient), and [KendallTau](https://www.wikiwand.com/en/Kendall_rank_correlation_coefficient) correlation coefficients\n", "_____no_output_____" ] ], [ [ "from dataprep.eda import plot_correlation\nfrom dataprep.datasets import load_dataset\ndf = load_dataset(\"wine-quality-red\")\nplot_correlation(df)", "_____no_output_____" ] ], [ [ "For more information about the function `plot_correlation()` see [here](plot_correlation.ipynb).", "_____no_output_____" ], [ "## Analyze missing values with `plot_missing()`\n\nThe function `plot_missing()` enables thorough analysis of the missing values and their impact on the dataset. The following describes the functionality of `plot_missing()` for a given dataframe `df`.\n\n1. `plot_missing(df)`: plots the amount and position of missing values, and their relationship between columns\n2. `plot_missing(df, x)`: plots the impact of the missing values in column `x` on all other columns\n3. 
`plot_missing(df, x, y)`: plots the impact of the missing values from column `x` on column `y` in various ways.", "_____no_output_____" ] ], [ [ "from dataprep.eda import plot_missing\nfrom dataprep.datasets import load_dataset\ndf = load_dataset(\"titanic\")\nplot_missing(df)", "_____no_output_____" ] ], [ [ "For more information about the function `plot_missing()` see [here](plot_missing.ipynb).", "_____no_output_____" ], [ "## Analyze difference with `plot_diff()`\n\nThe function `plot_diff()` explores the difference of column distributions and statistics across multiple datasets. The following describes the functionality of `plot_diff()` for two given dataframes `df1` and `df2`.", "_____no_output_____" ] ], [ [ "from dataprep.eda import plot_diff\nfrom dataprep.datasets import load_dataset\ndf1 = load_dataset(\"house_prices_test\").iloc[:, :9]\ndf2 = load_dataset(\"house_prices_train\").iloc[:, :9]\nplot_diff([df1, df2])", "_____no_output_____" ] ], [ [ "For more information about the function `plot_diff()` see [here](plot_diff.ipynb).", "_____no_output_____" ], [ "## Create a profile report with `create_report()`\n\nThe function `create_report()` generates a comprehensive profile report of the dataset. `create_report()` combines the individual components of the `dataprep.eda` package and outputs them into a nicely formatted HTML document. The document contains the following information:\n\n1. Overview: detect the types of columns in a dataframe\n2. Variables: variable type, unique values, distint count, missing values\n3. Quantile statistics like minimum value, Q1, median, Q3, maximum, range, interquartile range\n4. Descriptive statistics like mean, mode, standard deviation, sum, median absolute deviation, coefficient of variation, kurtosis, skewness\n5. Text analysis for length, sample and letter\n6. Correlations: highlighting of highly correlated variables, Spearman, Pearson and Kendall matrices\n7. Missing Values: bar chart, heatmap and spectrum of missing values\n\nAn example report can be downloaded [here](../../_static/images/create_report/titanic_dp.html).", "_____no_output_____" ], [ "## Customize the plot\n\n### Customize plot via `config`\nThe plot/report can be customized via the `config` parameter. E.g., enable/disable some plots, set the bins of histogram, set the height and width of the plots. \n\nThe following example shows how to set the bins of histogram to 1000, and disable the `KDE Plot`. For more configurations, please read this doc: [Parameter configurations: parameter summary settings](parameter_configurations.ipynb)\n ", "_____no_output_____" ] ], [ [ "from dataprep.eda import plot\nfrom dataprep.datasets import load_dataset\ndf = load_dataset('titanic')\nplot(df, config = {\"hist.bins\": 1000, \"kde.enable\": False})", "_____no_output_____" ] ], [ [ "### Identify some plots to show via `display`\nIn `config`, you can set disable some plots by set its `enable` to False, as shown in the above example which disable `KDE Plot`. However, sometimes you may just want to show some plots and disable all other plots. In this case, using the`display` is a more convenient approach. You can just input the plot names that you want to display in `display`. 
E.g., the following code will show only `Interactions` section in report:", "_____no_output_____" ] ], [ [ "from dataprep.eda import create_report\nfrom dataprep.datasets import load_dataset\ndf = load_dataset('titanic')\ncreate_report(df, display = [\"Interactions\"])", "_____no_output_____" ] ], [ [ "The following code will show only box plot:", "_____no_output_____" ] ], [ [ "from dataprep.eda import plot\nfrom dataprep.datasets import load_dataset\ndf = load_dataset('titanic')\nplot(df, display = [\"Bar Chart\"])", "_____no_output_____" ] ], [ [ "## Get the intermediate data\n\nDataPrep.EDA separates the computation and rendering, so that you can just compute the intermediate data and render it using other plotting libraries. \n\nFor each `plot` function, there is a corresponding `compute` function, which returns the computed intermediates used for rendering. For example, for `plot_correlation(df)` function, you can get the intermediates using `compute_correlation(df)`. It's a dictionary, and you can also save it to a json file.", "_____no_output_____" ] ], [ [ "from dataprep.eda import compute_correlation\nfrom dataprep.datasets import load_dataset\ndf = load_dataset(\"titanic\")\nimdt = compute_correlation(df)\nimdt.save(\"imdt.json\")\nimdt", "_____no_output_____" ] ], [ [ "## Specifying colors\n\nThe supported colors of DataPrep.EDA match those of the [Bokeh](https://bokeh.org/) library. Color values can be provided in any of the following ways:\n\n* any of the [147 named CSS colors](http://www.w3schools.com/colors/colors_names.asp), e.g 'green', 'indigo'\n\n* an RGB(A) hex value, e.g., '#FF0000', '#44444444'\n\n* a 3-tuple of integers (r,g,b) between 0 and 255\n\n* a 4-tuple of (r,g,b,a) where r, g, b are integers between 0 and 255 and a is a floating point value between 0 and 1", "_____no_output_____" ] ] ]
[ "raw", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown", "code", "markdown" ]
[ [ "raw" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown" ], [ "code" ], [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ], [ "code" ], [ "markdown" ] ]
4a3e2ec3c9a34a946405b0fe8f94da0601d23871
3,254
ipynb
Jupyter Notebook
01_mysteries_of_neural_networks/05_convolutions_IN_PROGRESS/Building convolutional neural network in Numpy.ipynb
daming-lu/Piotr
cd489a6b03006e5d8e3e7070b18f5791b5cc5ee0
[ "MIT" ]
null
null
null
01_mysteries_of_neural_networks/05_convolutions_IN_PROGRESS/Building convolutional neural network in Numpy.ipynb
daming-lu/Piotr
cd489a6b03006e5d8e3e7070b18f5791b5cc5ee0
[ "MIT" ]
10
2020-01-28T22:38:11.000Z
2022-02-10T00:15:45.000Z
01_mysteries_of_neural_networks/05_convolutions_IN_PROGRESS/Building convolutional neural network in Numpy.ipynb
daming-lu/Piotr
cd489a6b03006e5d8e3e7070b18f5791b5cc5ee0
[ "MIT" ]
null
null
null
23.242857
102
0.515673
[ [ [ "# Building convolutional neural network in Numpy\n---", "_____no_output_____" ], [ "***Author: Piotr Skalski***", "_____no_output_____" ], [ "### Imports", "_____no_output_____" ] ], [ [ "import numpy as np\nimport matplotlib.pyplot as plt\nimport os\nfrom mlxtend.data import loadlocal_mnist\n\n%load_ext autoreload\n%autoreload 2", "_____no_output_____" ] ], [ [ "### Auxiliary function downloading the dataset", "_____no_output_____" ] ], [ [ "def download_mnist_dataset():\n # The MNIST data set is available at http://yann.lecun.com, let's use curl to download it\n if not os.path.exists(\"train-images-idx3-ubyte\"):\n !curl -O http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz\n !curl -O http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz\n !curl -O http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz\n !curl -O http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz\n !gunzip t*-ubyte.gz\n \n # Let's use loadlocal_mnist available in mlxtend.data to get data in numpy array form.\n X1, y1 = loadlocal_mnist(\n images_path=\"train-images-idx3-ubyte\", \n labels_path=\"train-labels-idx1-ubyte\")\n\n X2, y2 = loadlocal_mnist(\n images_path=\"t10k-images-idx3-ubyte\", \n labels_path=\"t10k-labels-idx1-ubyte\")\n \n # We normalize the brightness values for pixels\n X1 = X1.reshape(X1.shape[0], -1) / 255\n X2 = X2.reshape(X2.shape[0], -1) /255\n\n X = np.concatenate([X1, X2])\n y = np.concatenate([y1, y2])\n \n return X, y", "_____no_output_____" ], [ "X, y = download_mnist_dataset()", "_____no_output_____" ], [ "X.shape", "_____no_output_____" ] ] ]
[ "markdown", "code", "markdown", "code" ]
[ [ "markdown", "markdown", "markdown" ], [ "code" ], [ "markdown" ], [ "code", "code", "code" ] ]
4a3e33e489459d039762252dab5b8491100c44ca
14,577
ipynb
Jupyter Notebook
10.epilepsy-path-ranking.ipynb
dhimmel/hetmech
4c7dc77054a02d7da4c30c2a9b0eca391cf5b6b5
[ "BSD-3-Clause" ]
10
2017-08-20T17:12:57.000Z
2019-03-26T21:42:28.000Z
10.epilepsy-path-ranking.ipynb
greenelab/hetmech
ca460f70a247ef456c930c7e64df24f59dc2b338
[ "BSD-3-Clause" ]
121
2017-03-16T19:20:31.000Z
2021-01-21T16:15:55.000Z
10.epilepsy-path-ranking.ipynb
dhimmel/hetmech
4c7dc77054a02d7da4c30c2a9b0eca391cf5b6b5
[ "BSD-3-Clause" ]
7
2017-03-27T23:02:11.000Z
2019-03-27T15:47:52.000Z
33.205011
136
0.483707
[ [ [ "# Prominent paths originating from epilepsy to a Compound", "_____no_output_____" ] ], [ [ "import math\n\nimport pandas\nfrom neo4j import GraphDatabase\nfrom tqdm.notebook import tqdm\nimport hetnetpy.readwrite\nimport hetnetpy.neo4j\n\nfrom src.database_utils import get_db_connection", "_____no_output_____" ], [ "epilepsy_id = 'DOID:1826'\n\n# Get top ten most important metapaths for Compound-epilepsy\nquery = f'''\\\nSELECT\n outer_pc.dwpc as dwpc,\n outer_pc.p_value as p_value,\n outer_pc.metapath_id as metapath_id,\n top_ids.source_name as source_name,\n top_ids.target_name as target_name\nFROM (\n SELECT dwpc, p_value, metapath_id, source_id, target_id, n1.name AS source_name, n2.name AS target_name\n FROM dj_hetmech_app_pathcount pc\n JOIN dj_hetmech_app_node join_node\n ON pc.target_id=join_node.id OR pc.source_id=join_node.id\n JOIN dj_hetmech_app_node n1\n ON pc.source_id = n1.id\n JOIN dj_hetmech_app_node n2\n ON pc.target_id = n2.id\n WHERE join_node.identifier='{epilepsy_id}' AND (n1.metanode_id = 'Compound' OR n2.metanode_id = 'Compound')\n ORDER BY pc.p_value\n) AS top_ids\nJOIN dj_hetmech_app_pathcount outer_pc\nON (top_ids.source_id = outer_pc.source_id AND\n top_ids.target_id = outer_pc.target_id) OR\n (top_ids.source_id = outer_pc.target_id AND\n top_ids.target_id = outer_pc.source_id)\nORDER BY outer_pc.p_value;\n'''\n\nwith get_db_connection() as connection:\n top_metapaths = pandas.read_sql(query, connection)", "_____no_output_____" ], [ "top_metapaths = top_metapaths.sort_values(by=['source_name', 'metapath_id'])\n\n# Ensure that you only have one copy of each (source_name, metapath_id) pair\ntop_metapaths = top_metapaths.drop_duplicates(subset=['source_name', 'metapath_id'])\ntop_metapaths = top_metapaths.sort_values(by='p_value')\n# Remove any rows with NaN values\ntop_metapaths = top_metapaths.dropna()\nmin_p_value = top_metapaths[top_metapaths.p_value != 0].p_value.min()\ntop_metapaths.loc[top_metapaths.p_value == 0, 'p_value'] = min_p_value\nprint(top_metapaths.p_value.min())\ntop_metapaths['neg_log_p_value'] = top_metapaths.p_value.apply(lambda x: -math.log10(x))\ntop_metapaths.head()", "3.1318111315557476e-17\n" ], [ "url = 'https://github.com/hetio/hetionet/raw/76550e6c93fbe92124edc71725e8c7dd4ca8b1f5/hetnet/json/hetionet-v1.0-metagraph.json'\nmetagraph = hetnetpy.readwrite.read_metagraph(url)", "_____no_output_____" ], [ "def get_paths_for_metapath(metagraph, row):\n '''\n Return a list of dictionaries containing the information for all paths with a given source, target, and metapath\n \n Parameters\n ----------\n metagraph : a hetnetpy.hetnet.Metagraph instance to interpret metapath abbreviations\n row : a row from a pandas dataframe with information about the given metapath, source, and target\n '''\n damping_exponent = .5\n \n metapath_data = metagraph.metapath_from_abbrev(row['metapath_id'])\n\n query = hetnetpy.neo4j.construct_pdp_query(metapath_data, path_style='string', property='name')\n\n driver = GraphDatabase.driver(\"bolt://neo4j.het.io\")\n params = {\n 'source': row['source_name'],\n 'target': row['target_name'],\n 'w': damping_exponent\n }\n with driver.session() as session:\n metapath_result = session.run(query, params)\n metapath_result = metapath_result.data()\n\n for path in metapath_result:\n path['metapath'] = row['metapath_id']\n path['metapath_importance'] = row['neg_log_p_value']\n path['path_importance'] = path['metapath_importance'] * path['percent_of_DWPC']\n path['source'] = row['source_name']\n \n metapath_df = 
pandas.DataFrame(metapath_result)\n \n return metapath_df", "_____no_output_____" ], [ "%%time\n# For row in top_metapaths\nresult_list = []\nfor index, row in tqdm(top_metapaths.iterrows(), total=len(top_metapaths.index)):\n metapath_df = get_paths_for_metapath(metagraph, row)\n result_list.append(metapath_df)\nresult_df = pandas.concat(result_list, ignore_index=True)", "_____no_output_____" ], [ "result_df = result_df.sort_values(by=['source', 'path_importance', 'metapath'], ascending=[True, False, True])\nresult_df.head()", "_____no_output_____" ], [ "result_df.to_csv('data/epilepsy_paths.tsv.xz', index=False, sep='\\t', float_format=\"%.5g\")", "_____no_output_____" ] ] ]
[ "markdown", "code" ]
[ [ "markdown" ], [ "code", "code", "code", "code", "code", "code", "code", "code" ] ]